Consistent? Floating point precision loss and Game Networking Client-Side Predictions

Started July 29, 2003 06:21 PM
Last reply by mepem
Right now I'm wondering how I can make the client's update logic consistent with the server simply by sending bomb spawn messages (and not ongoing bomb position updates). This comes down to the rounding that occurs in float calculations. Originally I tried sending just a spawn message for thrown bombs (which follow a simple projectile path, advanced each frame by a float dt value, until they land or explode). They would land in different places on the client than on the server due to differences in the float values. Right now I'm sending the position updates as 2-byte unsigned shorts and converting them back to floats.

I'm thinking about converting all local float calculations and data to doubles, in the hope that this slows the rate at which precision is lost and so reduces the number of necessary updates. But I think what matters more is that the game updates all take a variable "float dt" seconds for their calculations. I could make the updates use a constant dt instead: only update when t seconds have passed, and update for exactly t seconds (t = 1/60 s for 60 fps), or run the update n times in a row if more than n*t has passed. I would track the total game time elapsed (not the delta time between frames) alongside the number of logic updates times the constant step. It seems to me that with this method the float calculations would be done exactly the same way on both systems, so although client and server both lose precision, they would lose it with the exact same rounding errors and thus stay consistent. Note that with this method, I would revert to sending actual float positions for bomb spawn/land/explode events.

With this method, I'm not sure I'd still have to send player position updates at all. Maybe I could just send the starting position and the movement requests. From my current point of view, the only reason not to is that sending direction/velocity updates at 60 fps takes more bandwidth than sending position and direction/velocity updates at 10 fps. I feel like I might be missing some major point about float precision loss, since the general advice is to send position/velocity updates instead of just moves due to error accumulation... but assuming the error happens consistently, this method seems like it should work. In particular, if I enforce consistent calculations for the projectile motion of thrown bombs, then I should only have to send a bomb spawn and let the client simulate the rest. Again, I could achieve consistent calculations on server and clients by making the dt updates happen at constant intervals, with the dt multiplied into game updates always constant (1/60 s).

My whole hypothesis assumes that float rounding is consistent, or can at least be set to consistent rules (like always rounding the last bit down to 0). Most likely the answer is that doing the same float calculation on two different computers won't give you the same result, and maybe it won't even on the same computer twice. And maybe I can't do anything reasonable about that, because the calculation is a low-level assembly instruction, and defining a custom float type with consistent rounding might be really slow... or maybe not. But even if that isn't possible, maybe I could redo every piece of game logic state and calculation to use integers, and represent dt as integer milliseconds or nanoseconds. For example, instead of making a tile the size of 1 float unit, it could be the size of 1000 or so integer units.
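To show what I mean by the integer version, here's a rough hypothetical sketch (the Bomb struct, StepBomb, and the unit scale are all made up for illustration; this isn't code I've actually written):

```cpp
// Hypothetical fixed-point scheme: 1 tile = 1000 integer units and dt is
// a whole number of milliseconds. All arithmetic stays in integers, so
// client and server round identically by construction.
const int UNITS_PER_TILE = 1000;   // 1 tile = 1000 integer units
const int DT_MS          = 16;     // roughly 1/60 s as whole milliseconds

struct Bomb
{
    int x, y;      // position in integer units
    int vx, vy;    // velocity in integer units per second
};

void StepBomb(Bomb& b)
{
    // position += velocity * dt, multiplying first and dividing last so
    // the truncation happens at the same point on every machine
    b.x += b.vx * DT_MS / 1000;
    b.y += b.vy * DT_MS / 1000;
}
```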
Maybe all networked games could do this in theory, because then they could assume consistent logic updates on the client and server and only send spawn messages.
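For concreteness, the constant-dt loop I'm describing would look something like this (UpdateGameLogic and Render are just placeholder names for whatever the real functions would be; this is a sketch, not tested code):

```cpp
// Sketch of the constant-dt idea: accumulate real frame time, then step
// the simulation in fixed 1/60 s slices, so both machines perform the
// exact same sequence of float operations per tick.
const float FIXED_DT = 1.0f / 60.0f;   // constant logic step

void UpdateGameLogic(float dt);        // placeholders for the real functions
void Render();

float         accumulator = 0.0f;
unsigned long tick        = 0;         // global tick: the same tick number
                                       // means the same game time everywhere

void RunFrame(float frameTime)         // frameTime = wall-clock seconds elapsed
{
    accumulator += frameTime;
    while (accumulator >= FIXED_DT)
    {
        UpdateGameLogic(FIXED_DT);     // every logic update uses the same dt
        accumulator -= FIXED_DT;
        ++tick;
    }
    Render();                          // rendering can run at any rate
}
```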
Using a consistent time interval and switching to a global tick to keep time would certainly help. What's happening on the client side is a lossy simulation of the server code. The client doesn't have the complete information the server does, and over time this results in a noticeable divergence between the simulations. Nondeterministic inputs, like reading a local timer, also speed up the rate at which the systems diverge.

The update packets just nudge the simulation back into acceptable parameters. What you should do is implement a measure of divergence and use it to throttle your update packets. That way the update rate can be customized to each player's connection.

Each update packet actually contains the data you need for this. If you keep a global tick, pack it with the update. When the client receives it, compare the update's state against the state the client recorded for the same tick; the difference tells you how far the systems have diverged. Then ramp the packet rate accordingly, but don't exceed the maximum the connection can bear before it starts to choke, as that will only cause more lag.
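Roughly like this (the names and thresholds are made up, just to show the shape of the idea):

```cpp
// Rough shape of the divergence throttle: the client records its own
// predicted state per tick; when a server update stamped with a tick
// arrives, measure the distance between the two and pick an update rate.
#include <cmath>
#include <map>

struct State { float x, y; };

std::map<unsigned long, State> predicted;   // client prediction, keyed by tick

float MeasureDivergence(unsigned long tick, const State& serverState)
{
    std::map<unsigned long, State>::const_iterator it = predicted.find(tick);
    if (it == predicted.end())
        return 0.0f;                         // no record kept for that tick
    float dx = it->second.x - serverState.x;
    float dy = it->second.y - serverState.y;
    return std::sqrt(dx * dx + dy * dy);     // distance between simulations
}

int ChooseUpdatesPerSecond(float divergence)
{
    // Ramp the rate with the error, capped below what the connection
    // can bear before it starts to choke.
    if (divergence < 0.01f) return 2;
    if (divergence < 0.1f)  return 10;
    return 20;
}
```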

It would be kinda cool to implement this per object: objects with simple motions can be synchronized with a few updates, while objects with more complex motions get more packets.

There is such a thing as floating-point error between different CPUs, but that's not what you're seeing, I believe. You're seeing non-deterministic sampling errors propagating through your system. It could be from a local timer or some other client-side subsystem.

Unless you're relying on the very smallest significant bits of the float, these errors will not be noticeable in a properly synchronized system for quite a while.

Good Luck

-ddn
