
frame independent movement and slowdowns

Started by March 31, 2013 07:48 AM
13 comments, last by Norman Barrows 11 years, 7 months ago

what happens with frame independent movement when the frame rate drops drastically?

from my understanding, the visual feedback (frame rate) will slow down, while the simulation continues to run at full speed.

and even putting an upper limit on the time step (say 250 ms) will still cause problems if the frame time is, say, 300 ms, correct?

is it a fair statement to say that frame independent movement does not degrade gracefully when the frame rate drops drastically?

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

well if the frame rate drops drastically then there will be more movement per frame to make up for that frame rate drop.

say you have dt, which is the time elapsed since the last frame, and say you have something that you want to move 20 units per second, regardless of frame rate.

well every frame you would say "this frame the object will move 20 * dt" and update it with that much movement

So for example if the first frame dt = 0.010 seconds then the movement for the first frame would be 20 * 0.010 = 0.2 units.

Say the next frame dt = 0.3 seconds (300 ms) then the movement for that frame would be 20 * 0.3 = 6 units

In effect you will always move at 20 units per second regardless of frame rate. However, a lower frame rate will cause the movement to look more choppy (a 6 unit jump in a frame rather than a 0.2 unit jump), but at 3 frames per second (300 ms per frame) everything will most likely look choppy anyway.
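
In code the idea above looks roughly like this; a minimal C++ sketch, where the Object type, the Update/Render names, and the 20 units per second value just illustrate the description, they are not code from any actual engine:

```cpp
#include <chrono>

struct Object {
    float position = 0.0f;   // 1D position for simplicity
    float speed    = 20.0f;  // units per second, regardless of frame rate
};

// Move the object by speed * elapsed time, so its speed in units per second
// stays the same no matter how long the frame took.
void Update(Object &obj, float dtSeconds) {
    obj.position += obj.speed * dtSeconds;  // 20 * 0.010 = 0.2 units at 100 fps,
                                            // 20 * 0.300 = 6.0 units at ~3 fps
}

int main() {
    using clock = std::chrono::steady_clock;
    Object obj;
    auto prev = clock::now();
    for (;;) {  // game loop
        auto now = clock::now();
        float dt = std::chrono::duration<float>(now - prev).count();
        prev = now;
        Update(obj, dt);
        // Render(obj);  // drawing would happen here
    }
}
```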


In effect you will always move at 20 units per second regardless of frame rate. However, a lower frame rate will cause the movement to look more choppy (a 6 unit jump in a frame rather than a 0.2 unit jump), but at 3 frames per second (300 ms per frame) everything will most likely look choppy anyway.

my concern is that slowdowns usually occur in heavy combat with lots of stuff on the screen. so just when things get busy, the display slows down. but the simulation keeps right on running at full speed. since the player can only "play" (respond) as fast as he gets visual feedback, even with a frame independent input loop, the result is that the simulation "speeds up" relative to the video, and the player's "playing" speed. so instead of player and video at 60 fps and movement at 20 units per second, you have player and video at 3 fps and movement still at 20 units per second (so to speak).

another concern is interpolating or extrapolating the unit's position for drawing, and the loss of lockstep sync accuracy of the rendering loop.

i've been considering frame independent movement, but the problem with slowdowns has caused me to use a single loop for render, input, and simulation, and to use a frame rate limiter. the frame rate limited single loop degrades gracefully, even under the heaviest fire.
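
For comparison, a frame-rate-limited single loop of the kind described here might look like the following; a sketch only, with the ~15 fps cap and the ReadInput/RunSimulation/Render placeholders chosen for illustration rather than taken from any actual code:

```cpp
#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frameTime = std::chrono::milliseconds(66);   // cap at ~15 fps (illustrative)

    for (;;) {                          // one loop does everything, in lockstep
        auto start = clock::now();

        // ReadInput();                 // placeholder: poll the player's input
        // RunSimulation();             // placeholder: advance one game turn
        // Render();                    // placeholder: draw the current state

        // Sleep away whatever is left of the frame budget. If the frame ran
        // long, we simply don't sleep; input, simulation, and rendering all
        // slow down together, which is the graceful degradation described above.
        auto elapsed = clock::now() - start;
        if (elapsed < frameTime)
            std::this_thread::sleep_for(frameTime - elapsed);
    }
}
```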

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

my concern is that slowdowns usually occur in heavy combat with lots of stuff on the screen. so just when things get busy, the display slows down. but the simulation keeps right on running at full speed.

The display doesn't slow down relative to the simulation - if the frame rate drops the display shows "choppy" movement not movement in slow motion...

another concern is interpolating or extrapolating the unit's position for drawing, and the loss of lockstep sync accuracy of the rendering loop.

I'm not sure I'm following your concern here... you update everything and then draw everything once per frame. if the unit is moving at a speed per second, you multiply that speed by the time elapsed for that frame, and the result is the unit will move an amount per frame that keeps the unit's speed what you want.

if your objects are "slowing down" when the frame rate drops it probably means you are not using the elapsed time per frame to determine how much your objects should move

The display doesn't slow down relative to the simulation - if the frame rate drops the display shows "choppy" movement not movement in slow motion...

that's what i mean by "slow down". the presentation rate slows down, resulting in a choppy / stop motion effect.

i recall Aces of the Deep had this problem. You'd be 400 meters off the starboard side of a destroyer, ready to do a shoot and dive under maneuver, then the shells and depth charges would start going off, and the explosion and water splash graphics would cut the framerate to something like 2 fps for about 3 seconds. In that time, your sub would have already passed the minimum firing distance and the minimum diving distance, resulting in a collision and death every time. all because they didn't slow down the simulation to keep pace with the input/feedback system (IE the graphics is the feedback system, and how fast the player can react to what they see is the input).

I'm not sure I'm following your concern here

RE: INTERPOLATING

some implementations use an accumulator for fractions of dt, and then use that fraction of dt to interpolate between the last and current positions when drawing. or extrapolate between the current and anticipated next positions. interpolation or extrapolation means the screen no longer reflects the current state of the simulation, but some previous state, or anticipated state, IE what was happening up to one game turn ago, or might/would happen with no user input up to one game turn from now. while one frame of lag is probably ok under normal circumstances, if you get that heavy combat spike and drop to 2 fps, you add on top one frame of lag at 2 fps, or an extra half second of delay in feedback. in a hard core flight sim, a half second delay can mean life or death when toggling the pickle.
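
The accumulator scheme being described looks roughly like the following; a sketch only, with the 30 ms step, the State type, and the Lerp helper invented for illustration, not taken from any particular implementation:

```cpp
#include <chrono>

struct State { float position = 0.0f; float velocity = 20.0f; };

// Advance the simulation by one fixed step.
void Step(State &s, float dt) { s.position += s.velocity * dt; }

// Linear interpolation between two values.
float Lerp(float a, float b, float t) { return a + (b - a) * t; }

int main() {
    using clock = std::chrono::steady_clock;
    const float step = 0.030f;   // fixed 30 ms simulation step (illustrative)
    float accumulator = 0.0f;
    State prev, curr;
    auto last = clock::now();

    for (;;) {   // game loop
        auto now = clock::now();
        accumulator += std::chrono::duration<float>(now - last).count();
        last = now;

        while (accumulator >= step) {   // run the sim in fixed steps
            prev = curr;
            Step(curr, step);
            accumulator -= step;
        }

        // Draw a blend of the two most recent sim states. The screen therefore
        // shows a position up to one simulation step *behind* the latest state,
        // which is the one-frame-of-lag concern raised above.
        float alpha = accumulator / step;
        float drawPos = Lerp(prev.position, curr.position, alpha);
        (void)drawPos;   // Render(drawPos) would go here
    }
}
```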

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

some implementations use an accumulator for fractions of dt, and then use that fraction of dt to interpolate between the last and current positions when drawing. or extrapolate between the current and anticipated next positions. interpolation or extrapolation means the screen no longer reflects the current state of the simulation, but some previous state, or anticipated state, IE what was happening up to one game turn ago

ahh okay, so you're talking about being able to correctly capture user input during times of lag, so that the choppy part doesn't include the plane crashing (which could have been avoided if the user gave some type of input), or whatever other game event might occur during an overly large timestep that could have been changed with some kind of user input - I understand you now.

Sorry, I think I just misunderstood the question - I would have to agree that the ability to get input from the user and apply it to the scene correctly does degrade ungracefully as the frame rate drops. It's true that the user can only respond to what they see, and so if all they can see is the enemy there one second and then in another spot the next, it really diminishes their ability to hit the enemy with anything.


ahh okay, so you're talking about being able to correctly capture user input during times of lag, so that the choppy part doesn't include the plane crashing (which could have been avoided if the user gave some type of input), or whatever other game event might occur during an overly large timestep that could have been changed with some kind of user input - I understand you now.

sort of, but a bit more than that. even if the input loop is still running fast, the user can't put in input fast, because the user must react to the feedback the screen gives him, which is slower than the simulation and the input loop.

as the display slows down, the rate at which the user can react slows down, but the game keeps running fast. from the player's point of view, playing at 3 fps (the speed of the graphics during slowdown), the game's speed relative to the player's speed has increased. technically the graphics and player reactions slow down, but from the player's viewpoint, the simulation behind the graphics "speeds up". but that's not really the point. the point is that the simulation no longer runs at the desired speed compared to the graphics and the player's reactions.

Sorry, I think I just misunderstood the question - I would have to agree that the ability to get input from the user and apply it to the scene correctly does degrade ungracefully as the frame rate drops. It's true that the user can only respond to what they see, and so if all they can see is the enemy there one second and then in another spot the next, it really diminishes their ability to hit the enemy with anything.

like so many gamedev questions it can often take two or three askings to get on the same page.

yes, you've hit it "spot on" as they say.

i think for that reason, i will not be using it in any heavyweight title where that could be an issue. since the problem is caused by a sudden increase in drawing time for a single frame, that would rule out any title where things can get really busy and slow down at just the wrong moment. while "momentary player paralysis", momentary player "time stop", or momentary player "super slomo" might be acceptable for a shooter where it only costs you a few bullets, for something like a sub sim, where you spend up to half an hour of realtime (at 4096x speed) just getting to the target, then another hour of realtime stalking it to set up the perfect encounter, then 15 minutes on the final approach (never at more than 2-4x speed), just to have the simulation and render loops come unglued as you're about to shoot and crash dive? even with a work of art like Aces of the Deep, it would sometimes be a day or two before i'd play it again.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Ok, so you're not really complaining about frame independent movement, you're complaining about slow input and output speeds, ie. the latency between the player seeing something and being able to act on it. The movement itself degrades perfectly well at low frame rates - it's the player's ability to respond to it that fails to degrade well.

Removing the independence and making movement framerate dependent will not solve this. Faster frame rates will mean movement can occur too fast for a player to respond. Imagine watching a movie played back at 2x or 3x the speed and you know you won't be able to follow all of it. And slower frame rates will give an advantage to players who will have extra thinking time in which to react.

The only real solution to this issue is to ensure that the frame rate stays relatively high. This is why console manufacturers often dictate that they will not ship your game unless you can guarantee 30 frames per second at all times. That way there is a constant upper bound on the latency that the player has to deal with. In practice this is difficult but not at all impossible to achieve, and is the only real alternative given that linking rendering speed to simulation speed brings its own problems which are worse.

I plan to post an article on the handling of input in a stable manner for fixed-frame-rate games even in high-latency situations soon.

While, of course, the player can never react to something he or she cannot see, you can handle input in such a way that, even if there is a sudden spike in the frame time, a player who hits buttons during the lag gets the same end result as if everything had been perfectly smooth the whole time.

This basically facilitates the player’s mental simulation during the laggy period and it is absolutely necessary for smooth “responsive” input, not even just during times of lag but throughout the game. If your input handler is able to allow the player to hit buttons through the lag and end up with the correct result as if the lag had never been there, the player at least has a chance of not dying etc. based on his or her mentally queued commands (we all think ahead and time our jumps, dodges, etc.—small bouts of lag interfere with this by surprising us, but we can still time our jumps etc. for a short while even if we can’t see what is happening).

My article will explain this in detail and it will state close to the start that inputs are not events.
A naïve implementation simply takes the input events from the OS and immediately applies some action to the game or player. The popular misconception that inputs are events stems from application programming, which is indeed event-based and the most common event is input.

The article will go into detail on this but I will explain the overall system briefly here.

Firstly, you do need to catch inputs as they are given to you by the OS, which means by listening to events. That is as far as events go in an input structure.
The thread that listens for inputs from the OS is its own thread. For Windows, this has to be the main thread—the thread that created the window. That means keep your game and render thread(s) off it. You can’t catch inputs accurately if the thread that listens for inputs is busy drawing a monster or colliding objects together.

The caught input is time-stamped and put into a buffer to be processed by the logical loop (remember, in a fixed time-step the logic is not updated every frame, as opposed to the rendering, and although they happen in the same game loop, we call them the logical loop and render loop—don't be confused into thinking that they are 2 separate loops running). The time-stamp was synchronized with your logical loop's timer at the beginning of the game.

For each logical update, request from the input buffer only the amount of input up to the current logical game time.
If you lag for 300 milliseconds and update logic only every 30 milliseconds, when you stop lagging you will basically do 10 logical updates before continuing with the next render.
There are 2 naïve implementations you could accidentally do here:
#1: Pure event-based input. As you get events, you immediately modify the player’s state, sometimes overwriting previous states from previous keystrokes. In this case, after 300 milliseconds, the first logical update (out of 10) will just react based off the last state it sees, ignoring any keypresses that came before and got overwritten. This is obviously wrong.
#2: Buffered input, but consuming the whole buffer on each logical update. The result is exactly the same as #1 in the end.

The correct solution is to only consume from the input buffer as many input events as took place during the logical-update period.
That is, since our logical update is 30 milliseconds, you consume only the next 30 milliseconds' worth of events from the buffered input, leaving later inputs in the buffer for the next logical updates to consume.
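
A rough sketch of that buffering scheme follows; the InputEvent structure, the function names, and the 30 ms step are illustrative assumptions, not code from the article being described:

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

struct InputEvent {
    std::uint64_t timeMs;   // time-stamp taken when the OS delivered the event
    int           key;
    bool          down;
};

std::mutex             g_inputMutex;
std::deque<InputEvent> g_inputBuffer;   // filled by the window/input thread

// Called on the input (window) thread as OS events arrive.
void OnOsInput(std::uint64_t nowMs, int key, bool down) {
    std::lock_guard<std::mutex> lock(g_inputMutex);
    g_inputBuffer.push_back({nowMs, key, down});
}

// Called by the logical loop: return only the events that occurred up to
// (and including) the current logical game time; later events stay buffered.
std::vector<InputEvent> ConsumeInputsUpTo(std::uint64_t logicalTimeMs) {
    std::lock_guard<std::mutex> lock(g_inputMutex);
    std::vector<InputEvent> out;
    while (!g_inputBuffer.empty() && g_inputBuffer.front().timeMs <= logicalTimeMs) {
        out.push_back(g_inputBuffer.front());
        g_inputBuffer.pop_front();
    }
    return out;
}

// Catch-up after a 300 ms spike: ten 30 ms logical updates run back to back,
// each one seeing only the inputs that fell inside its own slice of time.
void RunLogic(std::uint64_t &logicalTimeMs, std::uint64_t realTimeMs) {
    const std::uint64_t stepMs = 30;
    while (logicalTimeMs + stepMs <= realTimeMs) {
        logicalTimeMs += stepMs;
        for (const InputEvent &e : ConsumeInputsUpTo(logicalTimeMs)) {
            (void)e;   // ApplyToPlayerState(e) -- hypothetical game-side handler
        }
        // UpdateSimulation(stepMs);   // hypothetical fixed-step update
    }
}
```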


As each logical update consumes and processes only the inputs that actually took place at that time during the game simulation, the simulation itself remains stable and faithful to what the player had intended.

The player sees a barrel coming and mentally times his or her jump. Suddenly 1 second of lag.
The player is surprised but had already figured out to hit the button after 0.5 seconds.
0.5 seconds into the lag the player sticks to the plan and waits another 0.5 seconds to see the result.
And, success! The player is impressed with your input mechanism and goes to your site to donate $10,000. If only you had added a “Donate” button.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Ok, so you're not really complaining about frame independent movement, you're complaining about slow input and output speeds, ie. the latency between the player seeing something and being able to act on it. The movement itself degrades perfectly well at low frame rates - it's the player's ability to respond to it that fails to degrade well.

exactly. there's nothing inherently wrong with frame independent movement. in fact it's probably a better way to organize things. that way the simulation can run at the minimum speed necessary, thereby using the fewest possible clock cycles, and the graphics can run at the maximum speed possible, thereby providing the smoothest possible animation. best of both worlds: smoothest possible animation, and lowest possible simulation overhead. the only problem is long frame times when things get hot and heavy. i realize that proper graphics settings for resolution, clip range, scene complexity, etc. can avoid this. but the user may not have things dialed down low enough for worst case scenarios. and it also may force them to run at lower graphics quality all the time just to avoid the occasional long frame time.

in my current main project right now, i track frame times and have "automatic clip ranges on/off". this automatically adjusts the clip range as the frame time changes, so it draws stuff out as far as possible while maintaining a given fps. something like this might help, but unfortunately, it's reactive, not proactive, so it would still let one long frame time slip through before adjusting.
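
A reactive adjuster of that kind might look something like this; a sketch only, with the target frame time, the clip bounds, and the scaling factors purely illustrative:

```cpp
// Reactive clip-range adjustment: shrink the far clip plane when the last
// frame ran long, grow it back when there is headroom. All constants here
// are illustrative, not from any particular project.
void AdjustClipRange(float &farClip, float lastFrameMs) {
    const float targetMs = 33.3f;    // aim for ~30 fps
    const float minClip  = 100.0f;   // never clip closer than this
    const float maxClip  = 1000.0f;  // never draw farther than this

    if (lastFrameMs > targetMs && farClip > minClip)
        farClip *= 0.95f;            // draw less far next frame
    else if (lastFrameMs < targetMs * 0.8f && farClip < maxClip)
        farClip *= 1.02f;            // slowly extend again when there is headroom
}
```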

The only real solution to this issue is to ensure that the frame rate stays relatively high. This is why console manufacturers often dictate that they will not ship your game unless you can guarantee 30 frames per second at all times. That way there is a constant upper bound on the latency that the player has to deal with. In practice this is difficult but not at all impossible to achieve, and is the only real alternative given that linking rendering speed to simulation speed brings its own problems which are worse.

yes, i see no way to avoid the two loops coming "unglued" as i call it, unless one can guarantee no long frame times. about the only other solution would be to slow the simulation down, like a governor driven by frame time: the max speed the sim can run at is a function of frame time. that would fix it, i think. if frame time went large, sim steps would automatically get smaller, so the speed at which the sim runs and the speed at which the player can respond (due to slow graphics) wouldn't get totally out of whack. when frame times were small (the normal case) the simulation would run at normal speed.
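
One way to express that governor in code; a sketch, with the 100 ms cap an arbitrary example and the GovernedStep/UpdateSimulation names made up for illustration:

```cpp
// "Governor" idea: cap the simulation step at some maximum, so during a
// frame-time spike the simulation runs slower than real time instead of
// jumping ahead of what the player can see and react to. The 0.1 s cap
// is an arbitrary example value.
float GovernedStep(float frameDtSeconds) {
    const float maxStep = 0.1f;            // never simulate more than 100 ms per frame
    return (frameDtSeconds > maxStep) ? maxStep : frameDtSeconds;
}

// Usage inside the game loop:
//   float simDt = GovernedStep(realDt);   // sim time falls behind real time
//   UpdateSimulation(simDt);              // during spikes, i.e. the game slows down
```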

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

