Big seamless world technology - what's working

Started by Nagle
6 comments, last by hplus0603 3 years, 10 months ago

Big, seamless, user-modifiable worlds - that's the next big thing. That's the “metaverse”, say many people. So what's working?

  • SpatialOS from Improbable. Dynamically sized regions. Does anybody actually use it? Worlds Adrift, etc. went bust. Is it just too expensive? Is that expense inherent in the Improbable technology, or does it just come from their business model and the need to recover most of a billion dollars in dev costs?
  • Dual Universe. Dynamically sized regions, from planet-sized down to 5 m². But what's happening at small scale so far is not very dynamic. Do they have a good idea, or not? Alpha 3 opens June 11th, for 600 or so hours of access, so more people get to look. So far, most videos are of controlled demos.
  • Second Life / OpenSimulator. Fixed-size regions. Maxes out around 50-60 players per region. Works OK, but slow.

What else is out there? Has anyone successfully addressed the hard problems of efficient region splitting and recombining, cross-region physics, viewing many regions at once from a distance, etc.?


Big, seamless, user-modifiable worlds - that's the next big thing.

It's been the “next big thing” for 25+ years. (Look up ActiveWorlds for example)

The problem with SpatialOS seems to be that they solved the wrong problem. Contrary to popular belief, the technology to spread tons of users across many servers is not particularly risky, assuming you have good engineers. However, to get the most benefit out of such a system, it really needs to be tightly integrated with your simulation and gameplay systems; bolting “sharding with merging” on top as a separate system removes knowledge that those systems need to share. There was also BigWorld, which started out in that area, but realized they needed to integrate everything, so they moved to building an engine, then tried to build a game, and then ran out of money.

There.com solved many of the same problems, including dynamic zone sizing and migration. The technology worked fine, and was used for military training for a while, and I think the game is still around, although small in size.

In general, you're going to end up with an N-squared problem, where N bodies all depend (directly or transitively) on each other, and N will have a maximum size for any implementation. You can optimize certain cases: the problem of “sports stadiums” or “venues” is probably best solved with people only seeing “heads” of people far away, and seeing actually-simulated entities only for “nearby” people, for example. But there's not that much gameplay value in building such a solution.
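As a rough sketch of that “heads at a distance” idea, here's what the per-pair level-of-detail choice could look like. The Player struct, field names, and distance thresholds are invented for illustration and aren't from any particular engine:

```cpp
#include <cmath>

// Hypothetical per-player state; field names are made up for this sketch.
struct Player {
    float x = 0, y = 0, z = 0;
};

enum class DetailLevel { Full, HeadOnly, Culled };

// Decide how much of 'other' to replicate to 'viewer', based on distance.
// Nearby players get fully simulated entities; far players only a "head";
// beyond some radius, nothing at all. The thresholds are arbitrary.
DetailLevel ChooseDetail(const Player& viewer, const Player& other,
                         float fullRadius = 30.0f, float headRadius = 200.0f) {
    const float dx = viewer.x - other.x;
    const float dy = viewer.y - other.y;
    const float dz = viewer.z - other.z;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist <= fullRadius) return DetailLevel::Full;
    if (dist <= headRadius) return DetailLevel::HeadOnly;
    return DetailLevel::Culled;
}
```

Note that this only shrinks the per-pair cost; enumerating the pairs in the first place is still N-squared unless a spatial index prunes them.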

The real problems are almost entirely about gameplay. Why should people connect to your servers in the first place? For Second Life, it was, as far as I can tell, virtual strip bars. For Project Entropia, it was thinly veiled gambling. But if I want to hang out with a bunch of random people, I can do that in a real bar/cafe/sports stadium. If I want more control, I can either choose which people I hang out with (online gaming) or I can do it with text or video (anything from Facebook to WhatsApp to TikTok). A general-purpose gaming world that's not “super great” at any one thing, but has a number of “passable” different activities, has no particular draw for any particular person.

Gameplay design for large, open, single-instance worlds is REALLY HARD, to the point of being an unsolvable problem. The only reason it “works” in the real world, is that we don't have a requirement for everyone to have fun at the same time in the real world – in the real world, it's acceptable that human beings have to give up their enjoyment to make a living.

enum Bool { True, False, FileNotFound };

hplus0603 said:
Gameplay design for large, open, single-instance worlds is REALLY HARD, to the point of being an unsolvable problem. The only reason it “works” in the real world, is that we don't have a requirement for everyone to have fun at the same time in the real world – in the real world, it's acceptable that human beings have to give up their enjoyment to make a living.

Now that's a useful insight. May I quote it elsewhere?

Games have about 10x - 100x as much action per unit time as the real world. That's because the game design forces it. User-built virtual worlds don't necessarily have that property. This leads to the usual new-user complaint in Second Life - “What do I do?”. There's no plot. The system doesn't compel or even encourage the user to do anything. You can walk for days and nothing will happen, although you might stumble upon something interesting. It's a substrate on which people build other things.

The biggest recent success in Second Life is a huge suburbia with tens of thousands of houses. That's all it is. You get a nice house in a nice neighborhood and can furnish it with nice stuff. Have friends over. Hang out with the neighbors. Have block parties. It's like living in The Sims, with better graphics.

This is popular with people who can't afford that in real life. It's much more popular than the steampunk cities, the cyberpunk cities, and the medieval cities.

In general, you're going to end up with an N-squared problem.

The engineering challenge is getting that down to N log N or better.
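One common way to chase that, at least for radius-limited interest, is a uniform grid (spatial hash), so each entity only examines nearby cells instead of every other entity. A minimal sketch, where the Entity struct, cell size, and 3x3 query are assumptions made for illustration:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Minimal uniform-grid ("spatial hash") area-of-interest sketch. Not taken
// from any particular engine; types and parameters are invented.
struct Entity { float x = 0, y = 0; };

class Grid {
public:
    explicit Grid(float cellSize) : cell_(cellSize) {}

    // Rebuild the grid from scratch each tick (simplest possible policy).
    void Rebuild(const std::vector<Entity>& ents) {
        cells_.clear();
        for (size_t i = 0; i < ents.size(); ++i)
            cells_[KeyFor(ents[i].x, ents[i].y)].push_back(int(i));
    }

    // Collect indices of entities in the 3x3 block of cells around (x, y).
    // With bounded density each query is roughly constant cost, so a full
    // interest pass is about O(N * k) rather than O(N^2).
    std::vector<int> Nearby(float x, float y) const {
        std::vector<int> out;
        const int cx = CellCoord(x), cy = CellCoord(y);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                auto it = cells_.find(Key(cx + dx, cy + dy));
                if (it != cells_.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out;
    }

private:
    int CellCoord(float v) const { return int(std::floor(v / cell_)); }
    uint64_t KeyFor(float x, float y) const { return Key(CellCoord(x), CellCoord(y)); }
    static uint64_t Key(int cx, int cy) {
        return (uint64_t(uint32_t(cx)) << 32) | uint32_t(cy);
    }

    float cell_;
    std::unordered_map<uint64_t, std::vector<int>> cells_;
};
```

This only helps while density stays bounded, which is exactly the caveat raised later in the thread: if everyone crowds into one cell, you're back to N-squared.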

“There” is, amazingly, still around, supported on Windows XP, 2000, and Vista. They recruited me in their early days. They wanted to do a planet-sized virtual world with voice over 56 kbit/s dialup. I told them that wasn't going to work and didn't take the job. When they finally went live, it was on DSL.

Now that's a useful insight. May I quote it elsewhere?

Glad you like it! If you want to credit me, I'm “Jon Watte”

The engineering challenge is getting that down to N log N or better.

You fundamentally cannot. If everybody decides to lie down in a big pile of bodies, then the transitive physics dependency chain is N-squared. If you're in a market square, everybody in that square will hear what everyone else is shouting.

This is, btw, why “stream chat” on popular video streams doesn't functionally scale, and why very large message/bulletin boards have to fracture into more specialized subsections. Even if only 0.01 of everybody posts a message, everybody reading all the messages posted by that 0.01 of everybody is still N-squared.
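To put some invented numbers on that: with N = 1,000,000 viewers and 0.01 of them posting, that's 10,000 messages per unit time; delivering each of them to all 1,000,000 viewers is 10,000 × 1,000,000 = 10^10 deliveries, i.e. 0.01·N², which still grows as N-squared.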

enum Bool { True, False, FileNotFound };

The physics pileup is a hard problem, but physics engines already deal with that. They solve groups of touching objects (“islands”) as one solve. If a physics group is too big, some backup strategy kicks in that tries to do something not entirely correct but, hopefully, not totally stupid. Freezing everything involved except for one object, letting it move, then repeating for each object, is one backup strategy. Do as much as you have time for per frame. The “fully destructible” world people are pretty good at this.
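A minimal sketch of that freeze-all-but-one fallback, with a made-up Body type and a placeholder single-body step standing in for real collision handling:

```cpp
#include <vector>

// Hypothetical rigid body; real engines carry far more state than this.
struct Body {
    float x = 0, y = 0, z = 0;      // position
    float vx = 0, vy = 0, vz = 0;   // velocity
    bool frozen = false;
};

// Stand-in for "advance one body against a world of frozen bodies".
// A real engine would run collision detection/response here; this sketch
// just integrates position so the example is complete.
void StepSingleBody(Body& body, const std::vector<Body>& /*world*/, float dt) {
    body.x += body.vx * dt;
    body.y += body.vy * dt;
    body.z += body.vz * dt;
}

// Fallback for an oversized contact island: freeze everything, then let one
// body at a time move against the frozen rest, up to a per-frame budget.
// Not physically correct, but bounded in cost.
void StepOversizedIsland(std::vector<Body>& island, float dt, int budget) {
    for (Body& b : island) b.frozen = true;
    int stepped = 0;
    for (Body& b : island) {
        if (stepped++ >= budget) break;       // do as much as time allows this frame
        b.frozen = false;
        StepSingleBody(b, island, dt);        // everyone else stays frozen
        b.frozen = true;
    }
    for (Body& b : island) b.frozen = false;  // restore for the next frame
}
```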

Nagle said:
Freezing everything involved except for one object, letting it move, then repeating for each object, is one backup strategy.

This depends entirely on the application; there are certain scenarios where you NEED to solve the physics correctly and completely. Excluding the obvious candidates - applications in physics fields - you can find such cases even in games. Competitive games, for example, where various strategies are often built around limiting the dynamic objects participating in the physics engine to a minimum, as we can see in Counter-Strike: Global Offensive. This is also one of the reasons you generally don't see many competitive games with heavy physics.

Nagle said:
What else is out there? Has anyone successfully addressed the hard problems of efficient region splitting and recombining, cross-region physics, viewing many regions at once from a distance, etc.?

This will heavily depend…

“Big, seamless, user-modifiable world” is quite a broad definition, and Minecraft is one of the games that satisfies that category quite well. They did it in a fairly tricky way - physics is minimal, regions are only simulated in loaded chunks (which exist only around players), viewing many regions at once from a distance is not considered at all, etc. Servers don't really host that many players, though (I've seen some 128-player servers).

Minecraft's gameplay is one of the kinds that can work in large worlds.
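For reference, a minimal sketch of that chunk-loading idea: only chunks within some radius of a player stay loaded and simulated. The 16-block chunk size matches Minecraft; the function, types, and radius handling are otherwise invented for illustration:

```cpp
#include <cmath>
#include <set>
#include <utility>
#include <vector>

constexpr int kChunkSize = 16;   // Minecraft chunks are 16x16 block columns

struct PlayerPos { double x, z; };

// Compute the set of (chunkX, chunkZ) coordinates to keep loaded, given all
// player positions and a view radius measured in chunks. Anything not in the
// returned set can be unloaded and excluded from simulation.
std::set<std::pair<int, int>> ChunksToLoad(const std::vector<PlayerPos>& players,
                                           int radiusInChunks) {
    std::set<std::pair<int, int>> loaded;
    for (const PlayerPos& p : players) {
        const int pcx = int(std::floor(p.x / kChunkSize));
        const int pcz = int(std::floor(p.z / kChunkSize));
        for (int dx = -radiusInChunks; dx <= radiusInChunks; ++dx)
            for (int dz = -radiusInChunks; dz <= radiusInChunks; ++dz)
                loaded.insert({pcx + dx, pcz + dz});
    }
    return loaded;
}
```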

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

some backup strategy kicks in that tries to do something not entirely correct but, hopefully, not totally stupid

This ends up being quite spongy, and there's a significant risk that the objects end up in some state that is not physically allowed (interpenetration, ejection into orbit, etc.).

There is an approach where every object depends only on the state of all objects at the previous simulation tick. This is good for both multi-threaded and multi-process simulation. Your scalability ends up being related to the available network bandwidth instead of CPU cycle bandwidth, but that turns out to also be a limited resource. (For in-core simulation, that bandwidth is very high, so it's promising in that regard.)
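A minimal sketch of that previous-tick-only dependency, assuming a made-up Entity type and update rule; because each update reads only last tick's state, the loop can be split across threads, or across processes that exchange last tick's state over the network:

```cpp
#include <vector>

// Keep two copies of the world and compute the new state purely from the old
// one, so each entity's update can run anywhere without locking.
struct Entity { float x = 0, vx = 0; };

Entity Update(const Entity& self, const std::vector<Entity>& previousTick, float dt) {
    Entity next = self;
    next.x += self.vx * dt;   // reads only previous-tick state, writes only 'next'
    (void)previousTick;       // a real rule would also read neighbors from last tick
    return next;
}

void StepWorld(std::vector<Entity>& current, std::vector<Entity>& next, float dt) {
    next.resize(current.size());
    // Each iteration is independent of the others, so this loop can be
    // partitioned across threads or across simulation processes.
    for (size_t i = 0; i < current.size(); ++i)
        next[i] = Update(current[i], current, dt);
    current.swap(next);       // new state becomes the input for the next tick
}
```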

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
