Synchronize Skills & Abilities between Server and Client

Started by
9 comments, last by hplus0603 4 years ago

Hello guys, I have been working on a co-op RPG for quite a while now.

I've set up a server and a client which synchronize position and rotation very well, using a quad tree to optimize performance.

And now I have run into a design dilemma, and I'm not sure which solution would be “better” to solve the problem:

The design issue I have is how to synchronize skill-result across the network.

In my scenario each skillEffect can have a unique implementation of what the effect does, no generic constraints whatsoever, and those skillEffects are stored in a dictionary which is accessible to both the server and the client.

When a player/character casts a skill, I send a skill-cast-accept packet to nearby players, and they visually cast the skill (though they do not mutate their state). Once the skill's effect has taken place, it generates a skill result which is sent to nearby clients on the next tick.

And now I wonder what a skillResult's schema should look like.

I have 2 options that are possible to implement:

The 1st is having the skillResult contain the casterId, effectId, and nearby characters, and letting the client perform the same skillEffect in order to mutate the state. This method is better in terms of byte economy, though it gives the client the ability to mutate its state with possibly non-synced data, which could later cause a de-sync with the server.

Option 2 that I thought of is having each skillEffect return a specific skillResult which is unique to that skill; the skillResult will contain all of the mutations performed by the skill effect. This method could be far more pricey in terms of byte economy, though it will be safer to use since there is one and only one variation of the skill result, and the client will not rely on a possibly incorrect state.
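A minimal sketch of the two candidate schemas (all names here are hypothetical, assuming integer entity ids):

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Option 1: the result only references the effect; clients re-run it locally.
@dataclass
class SkillResultByReference:
    caster_id: int
    effect_id: int            # key into the shared skillEffect dictionary
    target_ids: List[int]     # nearby characters the effect applies to

# Option 2: the result carries every mutation the server already computed.
@dataclass
class SkillResultExplicit:
    caster_id: int
    effect_id: int
    # e.g. {target_id: {"hp": -42, "poison_ticks": 5}}
    mutations: Dict[int, Dict[str, int]] = field(default_factory=dict)
```

Option 1 serializes to a handful of ids; option 2 grows with the number of affected fields, which is the byte-economy trade-off described above.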

I'd love to hear your opinion about this, and if you are familiar with other networking forums, please share them!


If players hack their clients, they will see something different, but as long as the server doesn't accept modified data, that's fine – only the hacked client will see the hack.

In general, the way distribution of “effects” is done, is to play a “pre-effect” on the initiating client – maybe a muzzle flash and gunshot sound for a rifle, or waving the hands in the air and start drawing lightning around the intended target for a spell.

Then, the server gets the command, resolves it, and sends out a packet to everyone, saying “player A did thing B and it had result C.”

Your own client would update the animation state to show the result C (hit or miss, for example). If hit, play “ouch” on the target; if miss, do something else.

Other clients, who didn't see you cast the effect, first play the casting effects, and then play the result effects right after.

This way, the server is the arbiter for “what actually happens,” and no client other than the local client will see a “hacked” outcome if that's the case.
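The “player A did thing B, result C” flow could be sketched like this (a toy example; the function names and 75% hit chance are made up):

```python
import random

def resolve_cast(caster_id, skill_id, target_id, rng):
    """Server-side resolution: the server alone rolls the outcome."""
    hit = rng.random() < 0.75          # assumed hit chance, for illustration
    damage = 12 if hit else 0
    return {"caster": caster_id, "skill": skill_id,
            "target": target_id, "hit": hit, "damage": damage}

# The server broadcasts the identical result to every nearby client,
# so all clients (hacked or not) converge on the server's outcome.
result = resolve_cast(1, 7, 2, random.Random(42))
broadcast = [result for _ in range(3)]   # one copy per nearby client
assert all(r == result for r in broadcast)
```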

enum Bool { True, False, FileNotFound };

@hplus0603 Hey again!

I'm less worried about clients hacking their game (as the model I'm going for is an authoritative server), and more worried about players getting de-synced from the server state.

The flow that I had in mind is similar to your suggestion: the client casts a skill and receives immediate visual feedback; meanwhile it sends a cast request to the server, which can either accept or decline the request, and the server then notifies the other clients to play along and visually display the skill cast.

The focus of my issue is how to sync the state mutation (not the visual effect; for example stat changes, damage over time, the juicy stuff) between the client and server. Each skill in my game can do a lot of stuff and isn't coupled to a limited set of effects; a skill can perform many mutation operations with different values that are extracted from the caster's state when he first casts that skill.

Would you suggest assuming that the client has a valid state (a partitioned state that equals the server's partition of that area) and that previous state-altering commands have successfully arrived, and as such letting the client look up the skill's formula and just calculate the changes?

Or taking a stricter approach in which the server sends the client the exact changes, which requires no calculation on the client side; the client simply assigns those changes?
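The stricter approach could be sketched like this (hypothetical names; state is modeled here as a plain dict of entity ids to field values):

```python
def apply_server_changes(local_state, changes):
    """Stricter option: the client just assigns server-computed values,
    performing no local formula evaluation at all."""
    for entity_id, fields in changes.items():
        local_state.setdefault(entity_id, {}).update(fields)

# The server already resolved the skill and sends only final values:
state = {2: {"hp": 100, "mana": 50}}
apply_server_changes(state, {2: {"hp": 58}})
assert state[2] == {"hp": 58, "mana": 50}
```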

Thanks for the help

Both methods have been used successfully in games.

If your game is more of a web-based or mobile game, it's probably better to make the server send the total outcome state, because those connections can be finicky.

If your main goal is to use the least possible bandwidth for the most possible units, like an RTS game, then assuming that the client is already correct might be worth it.

If you want to support cross-platform play (JavaScript, ARM, x86, others) then you may also want to use server-sends-state, because the details of under-specified parts of math (signed overflow, floating point rounding, and so on) will vary between compilers and CPUs.
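One common way to sidestep under-specified float behavior in simulations that must agree across platforms is fixed-point math; a toy sketch (the two-decimal scale matches the rounding mentioned later in the thread, but the helper names are made up):

```python
SCALE = 100  # two decimal places, stored as plain integers

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    # Integer multiplication and floor division are fully specified,
    # so every compiler and CPU produces the same result.
    return (a * b) // SCALE

# 12.5 damage times a 1.2 multiplier, computed identically everywhere:
dmg = fixed_mul(to_fixed(12.5), to_fixed(1.2))
assert dmg == to_fixed(15.0)
```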

enum Bool { True, False, FileNotFound };

@hplus0603

hplus0603 said:
If your main goal is to use the least possible bandwidth for the most possible units, like an RTS game, then assuming that the client is already correct might be worth it.

I think this solution would be the most suitable in terms of performance. To counter a possible client desync, I can send a full state update (of only surrounding objects) every X (probably 60) ticks. Is that also a common thing to do in these types of games?

And a general question about this type of networking: is giving the client the freedom to perform calculation operations triggered by the server an acceptable method?

hplus0603 said:
If you want to support cross-platform play (JavaScript, ARM, x86, others) then you may also want to use server-sends-state, because the details of under-specified parts of math (signed overflow, floating point rounding, and so on) will vary between compilers and CPUs.

My target environments are Windows/Mac/Linux, and all floating-point variables in my game are rounded to 2 decimal places.

What exactly did you mean by “server-sends-state”?

Thanks again

If you “baseline” (send the full state every X ticks) then why not send the full state when there's a somewhat-rare event, too? You already budgeted for sending full state updates.

And, yes, sending deltas with occasional “baselines” is pretty common, because it saves some throughput compared to only sending full states, yet it avoids spending too much time with a de-synced client. However, in this mode, you still have to be prepared to be de-synced and hide the corrections, or players will notice “snapping.”
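The deltas-with-occasional-baselines idea could be sketched like this (hypothetical names; state is a flat dict of fields):

```python
def make_delta(prev, curr):
    """Send only the fields that changed since the last known state."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def sync(tick, prev, curr, baseline_every=60):
    if tick % baseline_every == 0:
        return ("baseline", curr)          # full state: resyncs any drift
    return ("delta", make_delta(prev, curr))

prev = {"hp": 100, "x": 1.0}
curr = {"hp": 80, "x": 1.0}
kind, payload = sync(tick=7, prev=prev, curr=curr)
assert kind == "delta" and payload == {"hp": 80}
kind, payload = sync(tick=60, prev=prev, curr=curr)
assert kind == "baseline" and payload == curr
```

A client that missed a delta is wrong only until the next baseline arrives, which is what bounds the time spent de-synced.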

For an RPG sending state syncs every 5 seconds for most objects would be plenty frequent enough, especially if you send state updates for specific objects when “big” things happen.

Rounded floating point doesn't matter, because the FPUs may have different rounding modes set, or different implementations (SSE with denormals? Neon?) or different internal precision (x87 on Intel is 80 bits; on AMD it's 64 bits.) So, you may get “123.4549999999” on one side, which rounds to “123.45” and get “123.4550000001” on the other side, which rounds to “123.46.”
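The rounding divergence above is easy to reproduce directly:

```python
# Two hosts compute "the same" value with slightly different internal
# precision; rounding each to two decimals then produces different results.
a = 123.4549999999   # e.g. from an FPU with one rounding mode
b = 123.4550000001   # e.g. from another with different internal precision
assert round(a, 2) == 123.45
assert round(b, 2) == 123.46
```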

“server-sends-state” means a model where the server sends some state snapshots, as opposed to “server only sends initial state, and then deltas,” like RTS games and other input-synchronous simulations do.

enum Bool { True, False, FileNotFound };

@hplus0603

hplus0603 said:
If you “baseline” (send the full state every X ticks) then why not send the full state when there's a somewhat-rare event, too? You already budgeted for sending full state updates.

Well, the type of RPG I'm aiming for is similar to Diablo (though mine is 3D, not top-down), so I guess the optimizations I make on each packet are more crucial so that the gameplay won't be clunky. I expect many mutations will be going on during combat, and I'm unsure whether or not that will cause major FPS drops during combat (which is rare, but not that rare).

hplus0603 said:
And, yes, sending deltas with occasional “baselines” is pretty common, because it saves some throughput compared to only sending full states, yet it avoids spending too much time with a de-synced client. However, in this mode, you still have to be prepared to be de-synced and hide the corrections, or players will notice “snapping.”

That's an interesting solution. Is this model considered an optimized one?

What would be an example of said “snapping” correction? (I guess you're referring not to position-related data but to game data like health and such?)

In your experience which solution would you recommend using, in a scenario where there is a lot of action going on?

Thanks

“snapping” can happen both for position (he was there, and then he warps to that other place!) and for state like hitpoints (I had a full hitpoint bar, and suddenly it's at 15%!)

Practically speaking, I think baselining is the best trade-off.

Regarding frame rate, networking will never affect your frame rate. Graphics rendering and physics simulation and AI (especially pathfinding) may, but networking is so little data each second, there's really no way you could make that, in itself, consume enough CPU to be a problem. The only concern, really, about “minimizing network usage,” is that users have limited amounts of bandwidth. Someone may have a 10 Mbps DSL connection, which is shared with their whole household. Someone might be playing over mobile internet. Someone has an ISP with a 200 GB / month cap. Sending 20 packets of 400 bytes each per second, or sending 60 packets of 20 kBytes each per second, is a big difference! (The latter would likely not actually fit on a 10 Mbps connection in practice.)
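The arithmetic behind that comparison, ignoring UDP/IP header overhead:

```python
def bandwidth_bps(packets_per_sec, bytes_per_packet):
    """Application-layer bandwidth in bits per second."""
    return packets_per_sec * bytes_per_packet * 8

small = bandwidth_bps(20, 400)        # 20 packets of 400 bytes each
big = bandwidth_bps(60, 20_000)       # 60 packets of 20 kB each
assert small == 64_000                # 64 kbps: trivial for any link
assert big == 9_600_000               # 9.6 Mbps: saturates 10 Mbps DSL
```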

enum Bool { True, False, FileNotFound };

@hplus0603 Thanks for the informative answer,

hplus0603 said:
“snapping” can happen both for position (he was there, and then he warps to that other place!) and for state like hitpoints (I had a full hitpoint bar, and suddenly it's at 15%!)

That makes sense. I currently handle these cases by lerping the visualized value while, behind the scenes, immediately setting the actual value to the snapped value.
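That lerp-the-display, snap-the-truth pattern could look roughly like this (hypothetical names):

```python
def lerp(a, b, t):
    return a + (b - a) * t

class HealthBar:
    """Actual value snaps instantly; the shown value eases toward it."""
    def __init__(self, hp):
        self.actual = hp           # what gameplay logic reads
        self.shown = float(hp)     # what the UI renders

    def on_server_snap(self, hp):
        self.actual = hp           # gameplay uses the true value at once

    def tick(self, alpha=0.5):
        self.shown = lerp(self.shown, self.actual, alpha)

bar = HealthBar(100)
bar.on_server_snap(15)   # server correction arrives
bar.tick()
assert bar.actual == 15 and bar.shown == 57.5
```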

hplus0603 said:
Practically speaking, I think baselining is the best trade-off.

In this scenario, will the 5-second “area state” suffice?

hplus0603 said:
Regarding frame rate, networking will never affect your frame rate. Graphics rendering and physics simulation and AI (especially pathfinding) may, but networking is so little data each second, there's really no way you could make that, in itself, consume enough CPU to be a problem. The only concern, really, about “minimizing network usage,” is that users have limited amounts of bandwidth. Someone may have a 10 Mbps DSL connection, which is shared with their whole household. Someone might be playing over mobile internet. Someone has an ISP with a 200 GB / month cap. Sending 20 packets of 400 bytes each per second, or sending 60 packets of 20 kBytes each per second, is a big difference! (The latter would likely not actually fit on a 10 Mbps connection in practice.)

This makes sense, though I recently played League of Legends and noticed that when my latency goes above roughly 340 ms, the whole frame freezes. I assume this happens because they block the game loop from running until a snapshot arrives (if a long time has passed).

In this scenario, will the 5-second “area state” suffice?

Sure.

In practice, you probably have a queue of all objects, and send a snapshot for one or a few objects per outgoing packet, in some kind of round-robin fashion. You don't generally want to structure your networking to be “ONE BIG PACKET” and then tons of tiny packets and then “ONE BIG PACKET” and so forth – burstiness is the enemy of good network performance.
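The round-robin snapshot queue could be sketched like this (hypothetical names):

```python
from collections import deque

def next_snapshots(queue: deque, per_packet: int):
    """Pick a few objects for this outgoing packet, then requeue them,
    so traffic stays smooth instead of bursty."""
    batch = [queue.popleft() for _ in range(min(per_packet, len(queue)))]
    queue.extend(batch)               # round-robin: back of the line
    return batch

q = deque([1, 2, 3, 4, 5])            # object ids awaiting snapshots
assert next_snapshots(q, 2) == [1, 2]
assert next_snapshots(q, 2) == [3, 4]
assert next_snapshots(q, 2) == [5, 1]  # wraps around; no burst needed
```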

League of Legends

I have not read up on LoL internals, but I've played it a little bit.

Being based on RTS-style gameplay, I would not be surprised if they use the state-synchronous simulation model, which means they can't simulate until they have all the inputs for frame X. When you give a command, you actually queue that command for some future frame T+N where N is the maximum allowable round-trip-time and T is the current time. Your local unit may play an acknowledgement animation of some sort, but the gameplay effects are only seen once the command reaches everybody.
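The queue-for-frame-T+N idea could be sketched like this (hypothetical names; the delay of 4 frames is just an example value for N):

```python
INPUT_DELAY = 4  # N: maximum allowable round-trip time, in frames

def schedule_command(pending, current_frame, cmd):
    """Queue a command for frame T+N so every peer receives it in time."""
    pending.setdefault(current_frame + INPUT_DELAY, []).append(cmd)

def can_step(inputs_received, frame, num_players):
    # The simulation only advances once every player's input for this
    # frame has arrived (an explicit "no input" marker also counts).
    return len(inputs_received.get(frame, set())) == num_players

pending = {}
schedule_command(pending, current_frame=10, cmd="move unit A")
assert pending == {14: ["move unit A"]}          # executes at T+N
assert not can_step({14: {1, 2}}, 14, num_players=3)
assert can_step({14: {1, 2, 3}}, 14, num_players=3)
```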

Whether LoL “freezes” rather than just keeps rendering animations “stuck in place” seems to be an implementation detail of that game. You can still evolve particle systems and run walk cycles and such, even if the simulation hasn't stepped forward, if you want to make the “I don't have the commands for the next step yet” situation slightly less obvious.

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
