
Networking protocol via UDP for an MMO game

Started by November 07, 2016 07:08 PM
16 comments, last by hplus0603 8 years ago

Hey,

I know there are a lot of topics about UDP vs. TCP for real-time games, but I'm asking about something different.

Assume I've decided to use UDP at the low level.

Is there any existing messaging/packet protocol for multiplayer networking, or specifically for replication and sync?

Or does each game developer make their own protocol from scratch every time?

I'm asking because these basic networking concerns are common to any multiplayer game.

At the very least, the replication problem exists in every multiplayer game, and especially in MMOs.

And I'm asking not about an exact implementation, but rather about a description/spec for an algorithm that is fast and reliable.

There should probably be some projects addressing this, since MMOs and multiplayer games are quite common.

I completely understand that each game has its own specific requirements, but the basic stuff is the same, or at least very similar – I'm talking about replication, player synchronization, prediction/interpolation and so on.

If you think it's not possible to develop a common protocol for different games – please, could you explain why exactly it's not possible? (Besides the basic "all games are just completely different" – as I wrote above, that's not quite true: there are a lot of common things in terms of networking.)

Is there any existing messaging/packet protocol for multiplayer networking, or specifically for replication and sync?
Or does each game developer make their own protocol from scratch every time?


There are no "standard protocols" similar to how there might be RTP or NTP or NFS or whatever going over UDP.
The reason for this is twofold:
1. There is no need for interoperability -- the developers of Overwatch aren't interested in talking to a running instance of Counter-Strike.
2. The choices made in networking implementation are very important to the feel of fast-action games. What works for Unreal Tournament may be terrible for Doom.

There exist various libraries that "solve this problem" for game developers, if you can live with the assumptions made by those libraries.
This addresses the commercial pressure of "if I can use something off the shelf, I can save development time."
Multiple games that use the same library still won't be able to talk to each other, though, as their gameplay code is totally different.
Examples of such libraries include the built-in networking of engines like Unreal Engine or Source (these make all the choices for you) as well as less opinionated libraries like Lidgren, RakNet, and ENet.

Even if, for some games, interoperability "could" be built, there is zero commercial value in doing so, and thus it isn't built.
For the case of virtual worlds, where some amount of interoperability could perhaps actually make sense to users, I attempted to get the IETF to adopt an approach that would provide at least limited interoperability, but then 2008 happened and nobody had money to worry about that anymore.
https://tools.ietf.org/html/draft-jwatte-less-protocol-01

There are about four basic approaches that actually work. Those approaches are reasonably well known, but no one "bible" book describes them all.
The first approach is "spray and pray" -- just send commands as they happen from the client, and states as they update from the server, and let interpolation sort it out. Very simple, and can be effective for certain kinds of games. Drawback: Poor handling of player/player interactions.
The second approach is "lockstep synchronization" -- send commands for time X ahead of time to the server, which forwards to everyone else; the time step X isn't executed on all machines until they have all participants' commands for time X, then all the commands are executed in lock step; no actual state needs to be replicated as long as simulation is deterministic. Drawback: Long command latency before you see the action on the screen.
The third approach is "client ahead, remote entities behind" -- send commands for time X ahead of time to the server, and simulate them immediately on the client. Drawback: Remote entities are in a different place relative to you on your machine, compared to the server.
The fourth approach is "client ahead, remote entities forward extrapolated" -- same as the previous one, but forward extrapolate remote entities to approximate locations where they would be in the future. Drawback: May display remote entities in positions they will never be in, because they turn around instead of going forward or whatever.
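To make the first and third approaches concrete, here is a minimal sketch of snapshot interpolation: the client renders remote entities a fixed delay "in the past" and blends between the two server snapshots that bracket the render time. All names and numbers here are illustrative, not from any real engine.

```python
from bisect import bisect_right

class InterpolatedEntity:
    """Render a remote entity slightly in the past, blending between the
    two server snapshots that bracket the render time (illustrative)."""

    def __init__(self, delay=0.1):
        self.delay = delay      # render this far behind the newest data
        self.snapshots = []     # sorted list of (server_time, position)

    def add_snapshot(self, t, pos):
        self.snapshots.append((t, pos))
        self.snapshots.sort()   # UDP may deliver packets out of order

    def position_at(self, now):
        t = now - self.delay
        times = [s[0] for s in self.snapshots]
        i = bisect_right(times, t)
        if i == 0:
            return self.snapshots[0][1]    # too early: clamp to oldest
        if i == len(self.snapshots):
            return self.snapshots[-1][1]   # no newer data: hold last known
        (t0, p0), (t1, p1) = self.snapshots[i - 1], self.snapshots[i]
        a = (t - t0) / (t1 - t0)           # blend factor in [0, 1)
        return tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1))
```

The fourth approach would replace the blend with extrapolation past the newest snapshot using the last known velocity, which is where the "positions they will never be in" drawback comes from.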
enum Bool { True, False, FileNotFound };
Thank you, this is exactly what I asked for; I'd just hoped there might be a strict spec for such things... but now I see that's unlikely, so I'll have to implement it from scratch and spend time debugging and so on. That's why I don't really understand why there are no common protocols for these things – it could save a lot of gamedev time :-)

That's why I don't really understand why there are no common protocols for these things – it could save a lot of gamedev time :-)


But nobody would make money from that protocol being public, because there is no incentive or need for two different games to talk to each other.
Also, given that the existing systems already have working implementations, it's not in their interest to document the protocols at a low level, because that would make life easier for possible competitors.

If you just want to save time, you should use an existing engine! Download Unreal Engine and use its system; it's already implemented and works.
enum Bool { True, False, FileNotFound };

to talk to each other.
No need "to talk to each other" – I'm just talking about re-using existing, already-tested things for the most common parts of any multiplayer game.

But you're right that there's no profit in it for whoever develops such a protocol. It would be a profit for the whole gamedev community, though, as with any open-source work. What could we have made of the Internet if everyone developed their own HTTP-like protocol for each site? (I understand it's not an entirely fair comparison, but still...)

Download Unreal Engine and use its system
Huh, I'm learning UE4 right now, and all of its networking features are hidden and undocumented. At the same time, UE4 requires Windows for the server side, at least for compiling, which seems VERY strange to me for a cross-platform engine...

Basically, I'm also looking at a hybrid approach – using UE4 networking and moving all the other server-side logic into my own custom solutions – but that's off-topic here.

The most "finished" library I know of that is used in games is RakNet. I personally preferred to roll my own simple API-level network service for TCP/UDP that works on Windows, Linux/Mac, and Android, using sockets and IOCP, for both small and massive user counts at the same time. Anything else depends on your game architecture. Everybody in every studio has their own preferred way of doing things, and this may also be the reason why there isn't a general solution of the kind you're looking for.
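The socket-level layer described above is genuinely small. As a rough illustration (Python rather than IOCP, and with made-up function names), a non-blocking UDP service that a game loop can poll each frame looks something like this:

```python
import socket

def make_udp_socket(port=0):
    """Create a non-blocking UDP socket bound to localhost.
    Port 0 lets the OS pick a free port (illustrative setup only)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    s.setblocking(False)
    return s

def poll_packets(s, max_packets=64):
    """Drain all datagrams currently queued, without blocking the game loop."""
    packets = []
    for _ in range(max_packets):
        try:
            data, addr = s.recvfrom(2048)  # typical game packets fit in one MTU
        except BlockingIOError:
            break
        packets.append((data, addr))
    return packets
```

Everything above this layer – reliability, replication, prediction – is where the real per-game design choices live, which is the point being made in this thread.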

You need to decide what the server does, what the client does, and how they work together. In most MMOs the server runs the gameplay and only tells the client the results. In other games most of the processing power sits on the client side and the server just validates things. And in some games the server just broadcasts update messages (like in Dungeons and Dragons: Daggerdale), which can result in asynchronous game states on each client – one client has already seen the intro while another is still watching it, shops have different items for each player, and so on.


No need "to talk to each other", I'm just talking about re-using existing, already tested things for a most common part of any multiplayer game. But you're right it's not a profit for those who developing such protocol. But it's a profit for a whole gamedev community, as any open-source things does.
Not necessarily. It's hard to make a shoe that fits every foot, but for an MMO (note that there are two Ms, not just one) it is twice as hard. People write their own because it seems plausible that they can squeeze out a few more bits compared to someone who wrote code ahead of time without knowing any details about your game. Bandwidth costs money.

That being said, open, well-tested libraries do exist and are being used (RakNet has already been mentioned; enet would be another, much less feature-rich example). Or some just use TCP. Even that can work well (it really depends). Early versions of one particular successful MMO used unencrypted XML-RPC over TCP in the early 2000s. Yep, that worked fine for up to 1,500 connections per instance – don't ask me how they made it work!

Also, note that talking to the client isn't everything in an MMO. Usually (basically, always), the outward-facing machine(s) talking to clients are not the same physical machines as the ones running the simulation or the ones storing data, and often/usually "login" and "gameplay" are different machines, too. These servers need to talk to each other in some way. You might very well use something like zeromq/nanomsg for inter-server communication, for example.

In my time working on MMOs I've always been impressed at (a) how different the networking on each of them was, and (b) how much that was down to personal choice rather than accident.

For general object replication, my own code used a system much like the LESS protocol above, sent over TCP because it wasn't a fast-paced game. However one previous project completely refused to have automatically serialised properties because they wanted all updates to be explicit, and because their multi-server model required it. A 3rd project stuck to the ancient "one struct per message type, manually serialised" method because that's what they were comfortable with, and because their codebase was mature enough that they already had most of the message types they needed.
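The "one struct per message type, manually serialised" method mentioned above can be sketched in a few lines. The message ID and field layout here are invented for illustration; a real codebase would have dozens of these, one per message type:

```python
import struct

# A leading type byte selects which fixed-layout struct follows.
MSG_MOVE = 1   # hypothetical ID: entity_id (u32), x, y (f32), little-endian

def pack_move(entity_id, x, y):
    """Serialise a movement update into a compact 13-byte wire format."""
    return struct.pack("<BIff", MSG_MOVE, entity_id, x, y)

def unpack_message(data):
    """Dispatch on the type byte and decode the matching struct."""
    msg_type = data[0]
    if msg_type == MSG_MOVE:
        _, entity_id, x, y = struct.unpack("<BIff", data)
        return {"type": "move", "entity_id": entity_id, "x": x, "y": y}
    raise ValueError(f"unknown message type {msg_type}")
```

The appeal is total control over every byte on the wire; the cost is that every new message type means hand-written pack/unpack code on both ends, which is exactly the trade-off that automatic property replication tries to remove.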

The prediction and interpolation layer was different on all three projects too, ranging from fully "client ahead, remote entities behind" on my code to extensive extrapolation and correction on another because they had to accommodate shooter-style gameplay with the same engine.

So actually, the stuff that the OP suggested was "the same, or, at least, very similar" was probably the area in which the networking differed the most.

I completely agree that everyone has to implement their own custom solutions for now. But that doesn't look like the right way, since the issues are exactly the same for almost any game you've described (I'm not talking about personal preferences, just about the technical issues).

Of course, if a game already has a solid code base it won't fit any common protocol, but for a new game – I don't see a real reason to invent networking from scratch every time.

Again, websites are all different – using multiple request types, receiving and sending completely different data and so on – but all sites use the same HTTP protocol.

So, for games, what's basically required is low latency, optional error handling, and prediction/interpolation – and all of this could, technically, be exactly the same for most multiplayer games (not only MMOs).

You say each game has made its own solution – well, that's obvious, since we don't have a public one. They simply have no other choice.

I have fairly little experience with multiplayer networking, but maybe you can give me two examples of multiplayer games that would have conflicting networking requirements? (I'm not talking about existing games, rather about two design examples.)

the issues are exactly the same for almost any game you've described (I'm not talking about personal preferences, just about the technical issues).

No, they're not. The choices made regarding the game design and the technical constraints go hand in hand. Twitchy-shooters don't want to run over TCP with other clients shown at old locations. Slower RPG-type games will be fine with TCP and will prefer smooth movement of other clients with no corrections needed. Games where all you do is move and shoot will be highly optimised for low latency and resilient to dropped packets whereas games where the interactions with the world are more complex may be optimised for arbitrary messages and RPC mechanisms where everything is expected to be sent reliably and in-order.
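The reliable-in-order versus low-latency-lossy split described above can be sketched as two delivery policies that might sit over the same UDP socket. This is a toy model (real libraries such as ENet add acks, retransmission timers, and send windows); all class and field names are invented:

```python
class UnreliableChannel:
    """Newest-state-wins: packets with stale sequence numbers are simply
    dropped, never retransmitted -- the right trade for move/shoot state."""
    def __init__(self):
        self.latest_seq = -1

    def on_receive(self, seq, payload):
        if seq <= self.latest_seq:
            return None              # out-of-date snapshot, discard it
        self.latest_seq = seq
        return payload

class ReliableChannel:
    """Resend-until-acked, delivered in order -- what RPC-style game
    traffic with complex world interactions typically wants."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}            # seq -> payload, kept until acked
        self.expected = 0            # next seq the receiver will deliver
        self.buffer = {}             # out-of-order arrivals, held back

    def send(self, payload):
        seq = self.next_seq
        self.pending[seq] = payload  # retransmit from here until acked
        self.next_seq += 1
        return seq, payload

    def on_ack(self, seq):
        self.pending.pop(seq, None)

    def on_receive(self, seq, payload):
        self.buffer[seq] = payload
        delivered = []
        while self.expected in self.buffer:   # release the in-order prefix
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered
```

A shooter pushes nearly everything through the first policy and tolerates loss; an RPG with trades and quests pushes nearly everything through the second and tolerates latency. That is one concrete way the "same" networking problem gets genuinely different answers.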

Even 2 shooters can have different philosophies on how to handle interpolation or extrapolation. Some games focus on trying to be as perfectly synchronised as possible, at the expense of needing to perform more corrections. This might work well for eSports. Others happily rewind game state so that someone who already reached cover can be shot by someone who pulled the trigger when the target wasn't in cover yet, because the engine prefers a consistent experience for the shooter rather than the shootee.
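The "rewind" idea in the last paragraph is usually called lag compensation. A simplified sketch of the server side (timing model and hit test reduced to the bare minimum, names invented):

```python
class LagCompensator:
    """Server keeps a short history of a target's positions and, when a
    shot arrives, rewinds the target to where the shooter saw it."""

    def __init__(self, history_seconds=1.0):
        self.history_seconds = history_seconds
        self.history = []            # (server_time, position), oldest first

    def record(self, t, pos):
        self.history.append((t, pos))
        cutoff = t - self.history_seconds
        while self.history and self.history[0][0] < cutoff:
            self.history.pop(0)      # forget positions older than the window

    def position_at(self, t):
        # Newest recorded sample at or before t (no interpolation here).
        best = self.history[0][1]
        for ht, pos in self.history:
            if ht > t:
                break
            best = pos
        return best

    def test_hit(self, shot_time, aim_pos, radius=0.5):
        rewound = self.position_at(shot_time)
        dist = sum((a - b) ** 2 for a, b in zip(aim_pos, rewound)) ** 0.5
        return dist <= radius
```

The shooter gets a consistent experience because hits are judged against what was on their screen; the shootee occasionally dies "behind cover". An eSports-oriented engine might shrink the history window or reject rewinds beyond some latency cap instead.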

You keep using HTTP as an example but it is a poor comparison. HTTP is designed as a standard for interoperability, not as a way to accelerate the development of web servers and clients. You choose to make a website that is served over HTTP because you want people with existing clients to access it, and because you can cope with the limitations of HTTP. And if you can't, that's why we have websockets, FTP, etc.

This topic is closed to new replies.
