
Time-Stamped Buffered Input System

Started by January 03, 2015 04:23 PM
5 comments, last by Irlan Robson 9 years, 10 months ago

Hello. After reading around for a while, I've realized that buffered input with time-stamps is the way to go for a fixed-time-step simulation that requires input logging.

So, in a fixed-time-step game simulation we need to simulate FIXED_TIME_STEP of game time whenever the accumulated frame time is bigger than that, consuming FIXED_TIME_STEP of the current time on each iteration, so that the game logic catches up with real time in any situation while using a fixed step. Like the loop below:


m_tRenderTime.Update();
UINT64 ui64CurMicros = m_tRenderTime.CurMicros();
while ( ui64CurMicros - m_tLogicTime.CurTime() > FIXED_TIME_STEP ) {
	NextState();
	if ( !m_smStateMachine.Tick( this ) ) { return false; }
	m_tLogicTime.UpdateBy( FIXED_TIME_STEP );
}

So far, everything looks correct.

From what we know, polling input events from the OS at the beginning of the frame gives you simplicity, and for some people it may even be the state of the art. But in my case I'm using a separate thread (the main thread) to do the job, so it spends all its time doing this:


while ( m_bPollEvents ) {
	::WaitMessage();
	while ( ::PeekMessage( &mMsg, m_hWnd, 0U, 0U, PM_REMOVE ) ) {
		::DispatchMessage( &mMsg );
	}
}

...and when an event happens, for instance, this is what I do:


case WM_KEYDOWN : {
	m_kbKeyboardBuffer.OnKeyDown( WindowKeyTableLookUp( _wParam ) );
	break;
}
(...)

The keyboard buffer basically holds a critical section, a local timer, etc. When a key gets pressed (like the one above), I insert the keyboard input event into a vector of key events, one vector per key. This gives me direct, fast look-up of a key's events without having to store the key code in each event (which would waste time, memory, and cache).

A key event is about the most basic structure in the world: it has an event type, pressed or released, and a time-stamp.


        (...)	
        std::vector<KB_KEY_EVENT> m_keKeyEvents[KB_TOTAL_KEYS];
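For reference, a minimal sketch of what the event structure might look like (the names keEvent, ui64Time, and KE_KEYDOWN match the code later in this thread; the rest is assumed):

enum KB_KEY_EVENT_TYPE {
	KE_KEYDOWN,
	KE_KEYUP
};

struct KB_KEY_EVENT {
	KB_KEY_EVENT_TYPE	keEvent;	// Pressed or released.
	unsigned long long	ui64Time;	// Time-stamp, in microseconds.
};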

From a day of research: some people cap the KB_KEY_EVENT buffers at a maximum size in order to avoid dynamic allocation, but since we don't want to miss any events, it's probably better to sacrifice that.

The keyboard is updated from its buffer. I do the same for every type of device in my game (touch, mouse, gamepad) so that each piece has a single responsibility.

So, the problem is:

I need to consume FIXED_TIME_STEP's worth of input on each logical update. For that, I'd need to update the keyboard from the buffer on every logical update, because the player can press a button before the frame starts, for instance, and release it during a game logic update. If I only updated the keyboard once per frame, the player could press a button at the start of the frame and it would be interpreted as held for the entire frame. For instance:

Frame took 500ms;
FIXED_TIME_STEP is 100ms;
Total logic updates in one frame is 5;
Input occurred in the beginning of the frame;
Input was released in the 2nd update;
Now the input stays pressed until the next frame.

Here, I don't actually know if I'm going too deep into the subject, or if I'm on the right track.

I'm actually confused about the relationship between the game logic time and the keyboard buffer time, since they have different time resolutions, are updated at different moments, etc. The game logic timer has microsecond resolution, but the keyboard runs on real time, so the only timer it has a synchronization relationship with is the game render timer (the real game time).

From the fixed-time-step simulation's point of view, it makes sense to consume FIXED_TIME_STEP's worth of input on each logical tick, which means that on each logical tick I can consume any inputs that occurred up to the game logic time (?). E.g.:


m_tRenderTime.Update();
UINT64 ui64CurMicros = m_tRenderTime.CurMicros();
while ( ui64CurMicros - m_tLogicTime.CurTime() > FIXED_TIME_STEP ) {
	kbKeyboardBuffer.ConsumeKeyEvents( &kKeyboard, m_tLogicTime.CurTime() );
	NextState();
	if ( !m_smStateMachine.Tick( this ) ) { return false; }
	m_tLogicTime.UpdateBy( FIXED_TIME_STEP );
}

...but since the time at which the inputs occurred is different from the time at which the keyboard timer is updated (in the Consume... method), they won't be synchronized with the game time.

I've synchronized the input timer with all my game timers by just copying the game timers into the input timer at the start of the game, so this shouldn't be a problem.

Can you guys give me some advice on these specific questions?

It's usually not worth dealing with input events at a higher temporal resolution than your fixed timestep. Since the minimum granularity of a simulated reaction to an input event is a single fixed timestep, it generally makes little sense to deal with sub-timesteps.

In that case, if you quantise the timestamp of all key up/down events to the beginning of the fixed time step immediately after they occur, you shouldn't have any issues.
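For example (a sketch, assuming integer microsecond time-stamps; the function name is mine):

// Snap an event time-stamp forward to the start of the next fixed time step,
// i.e. the step in which the simulation will first see the event.
UINT64 QuantiseToStep( UINT64 ui64TimeStamp, UINT64 ui64FixedStep ) {
	return ((ui64TimeStamp / ui64FixedStep) + 1) * ui64FixedStep;
}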

The exception to this, of course, is when you have extremely large fixed timesteps, and your example of 500 ms leads me to worry that this may be the case in your simulation. In that case you are going to have to deal with the possibility that a given key may be pressed, released, and potentially pressed again, all in a single fixed timestep... Which makes input handling fairly complex.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


If I only updated the keyboard once per frame, the player could press a button at the start of the frame and it would be interpreted as held for the entire frame. For instance:

All durations for key events are derived by taking the current logical game time and subtracting from it the time-stamp on the key event.

Example:
500 milliseconds and 5 logical updates, each eating 100 milliseconds of input.
#1: Y-down at timestamp 32.
#2: T-down at timestamp 32.
#3: Y-up at timestamp 98.
#4: T-up at timestamp 480.
#5: At time = 500 the game cycle loops and begins performing the 5 logical updates, each at 100 milliseconds. Logical time = 0.

Logical update #1: Eats 100 milliseconds of input (game time = 100). Y-down[32], T-down[32], and Y-up[98] are in the game-side buffer (the game-side buffer is also appended to the game-side input log, but the log is not needed in this discussion). Y is no longer being held, so its duration is calculated as 98-32 = 66 milliseconds. It registers as both a key-down and a key-up event, which could cause some system to activate. The system is activated immediately upon detecting that the key was pressed.
T-down is still being held, thus the current duration of it being pressed is 100-32 = 68 milliseconds.
The game-side input buffer is cleared for the next update (inputs remain in the log for a fixed amount of time or number of events).

Logical update #2: Eats 100 milliseconds of input (game time = 200). T is still being held. Duration is 200-32 = 168 milliseconds.



Logical update #5: Eats 100 milliseconds of input (game time = 500). The game-side input buffer holds T-up[480]. Because the key is no longer being pressed the total duration = 480-32 = 448 milliseconds.


All durations are correct, whether the key is still being held or not, and whether you are in the beginning of an update or a series of updates.
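In code form the rule is tiny (illustrative names, not from an actual engine):

// Duration of a key press: released keys measure up to the key-up event's
// time-stamp; held keys measure up to the current logical game time.
UINT64 KeyDuration( bool _bReleased, UINT64 _ui64DownTime,
	UINT64 _ui64UpTime, UINT64 _ui64LogicTime ) {
	return (_bReleased ? _ui64UpTime : _ui64LogicTime) - _ui64DownTime;
}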


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

The exception to this, of course, is when you have extremely large fixed timesteps, and your example of 500 ms leads me to worry that this may be the case in your simulation. In that case you are going to have to deal with the possibility that a given key may be pressed, released, and potentially pressed again, all in a single fixed timestep... Which makes input handling fairly complex.

I see now. I'll take the route of update-per-frame-consume-per-logic using time-stamps.

All durations for key events are derived by taking the current logical game time and subtracting from it the time-stamp on the key event.

Interesting. This way I can get all the keyboard events in one frame and process them during the logic updates, instead of passing the buffer to each logical update based only on time-stamps. Let me quickly outline what I'd need to redefine to accomplish this:

  1. Each frame, the keyboard buffer passes its data to the keyboard, which then has all the events, but now once per frame;
  2. The per-frame-updated keyboard is used to generate another, game-logic input buffer that is driven by the logical game timer;
  3. The game-logic input buffer passes its data to the input logger.

I'm confused more by the time-stamping timer than by the per-game-logic processing, since it gets called on another thread and updated each frame. Example:

500 milliseconds and 5 logical updates, each eating 100 milliseconds of input.
#1: Y-down at timestamp 32.
#2: T-down at timestamp 32.
#3: Y-up at timestamp 98.
#4: T-up at timestamp 480.
#5: At time = 500 the game cycle loops and begins performing the 5 logical updates, each at 100 milliseconds. Logical time = 0.

The time-stamp of the event appears to be the absolute accumulated time [32, 32, 98, 480], because it's less than the current absolute time [500]; so when I receive an event I immediately update the input timer and pass its current time (thus, microseconds). In this case, why would I need to synchronize the timers, and set the input timer resolution to microseconds, since it is the absolute time that gets passed to the key-event time-stamp?

T-down is still being held, thus the current duration of it being pressed is 100-32 = 68 milliseconds.

Interesting. This is basically saying that the keyboard key input gets updated on each logical update after the events make their changes.
In my current implementation I'm making a copy of the keyboard key-event buffer each frame and processing that, so my keyboard class gets updated on each logical update using only the events in the buffer copy that are up to the current time:

void CKeyboardBuffer::Update(CKeyboard* _pkKeyboard, unsigned long long _ui64MaxTime /* this is the game current logic time.*/) {
	/*
	* This is called per logical tick.
	*/

	for ( unsigned int I = KB_TOTAL_KEYS; I--; ) {
		CKeyboard::KB_KEY_INFO& kiCurKeyInfo = _pkKeyboard->m_kiCurKeys[I];
		CKeyboard::KB_KEY_INFO& kiLastKeyInfo = _pkKeyboard->m_kiLastKeys[I];

		std::vector<KB_KEY_EVENT>& vKeyEvents = m_keKeyEventsCopy[I]; //This is a copy of the internal buffer. The copy is generated per frame.

		for ( std::vector<KB_KEY_EVENT>::iterator J = vKeyEvents.begin(); J != vKeyEvents.end(); ) {
			const KB_KEY_EVENT& keEvent = *J;

			if ( keEvent.ui64Time < _ui64MaxTime ) { 
				//Can be processed.
				if (keEvent.keEvent == KE_KEYDOWN) {
					if ( kiLastKeyInfo.bDown ) {
						//Do nothing.
					}
					else {
						//Tapped.
						kiCurKeyInfo.bDown = true;
						kiCurKeyInfo.ui64TimePressed = _ui64MaxTime - keEvent.ui64Time;
					}
				}
				else {
					kiCurKeyInfo.bDown = false;
					//Total duration of the event = time-stamp - last time key was pressed.
					kiCurKeyInfo.ui64TimePressed = keEvent.ui64Time - kiLastKeyInfo.ui64TimePressed;
				}

				J = vKeyEvents.erase(J);

				char cBuffer[512];
				::sprintf_s(cBuffer, sizeof(cBuffer), "\n Event Time: %llu, Max Time: %llu", keEvent.ui64Time, _ui64MaxTime);
				::OutputDebugStringA(cBuffer);
				::sprintf_s(cBuffer, sizeof(cBuffer), "\n Time Pressed: %llu", kiCurKeyInfo.ui64TimePressed);
				::OutputDebugStringA(cBuffer);
			}
			else {
				++J;
			}
		}
		//Before changing the last state of the key, we need to update the time that the key is pressed if is being held.
		if ( kiCurKeyInfo.bDown ) {
			if ( kiLastKeyInfo.bDown ) {
				//Holding.
				kiCurKeyInfo.ui64TimePressed = _ui64MaxTime - kiCurKeyInfo.ui64TimePressed;
			}
		}

		kiLastKeyInfo.bDown = kiCurKeyInfo.bDown;
		kiLastKeyInfo.ui64TimePressed = kiCurKeyInfo.ui64TimePressed;
	}
}

This way I can get all the keyboard events in one frame and process them during the logic updates, instead of passing the buffer to each logical update based only on time-stamps. Let me quickly outline what I'd need to redefine to accomplish this:

No.
Getting all 500 milliseconds’ worth of events and then dishing them out in 100-millisecond intervals to the logical update is nothing but extra and redundant work. You already have a system in place to request “up to X time” inputs from the input buffer, so why not just use that to make 5 100-millisecond requests instead of making 1 500-millisecond request and then making another system to break those into 100-millisecond chunks?

Additionally, my example was contrived. What if the time of the first game update was 550 instead? You will still only tick the logical update 5 times and eat only 500 milliseconds’ worth of input. Just eat input from the keyboard buffer on the input thread once per logical update and leave it at that.


I'm confused more by the time-stamping timer than by the per-game-logic processing, since it gets called on another thread and updated each frame.

The inputs run on their own timer (which is synchronized with the game timer at start-up). That timer is not updated at any specific interval. It is updated once on every event it receives. Thus it can give you a time-stamp of any value regardless of your fixed-step interval. This is specifically necessary for accurate timing of input events.
Updating a timer (without passing an amount by which to update it) always calculates the amount of time that has passed since its own last update and then advances its counters by that much. Thus any call to tTimer.Update(), at any interval at any time, sets the timer’s counters to the correct time (starting from 0).
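For illustration, a timer with those semantics could be as simple as this (a minimal sketch, assuming QueryPerformanceCounter() as the underlying system counter; the class layout is illustrative, not from an actual engine):

#include <Windows.h>

class CTime {
public :
	CTime() : m_ui64CurMicros( 0ULL ) {
		LARGE_INTEGER liTemp;
		::QueryPerformanceFrequency( &liTemp );
		m_ui64Frequency = static_cast<UINT64>(liTemp.QuadPart);
		::QueryPerformanceCounter( &liTemp );
		m_ui64LastRealTime = static_cast<UINT64>(liTemp.QuadPart);
	}
	// Advances by however much real time has passed since the last Update(),
	// no matter when or how often it is called.
	void Update() {
		LARGE_INTEGER liNow;
		::QueryPerformanceCounter( &liNow );
		UINT64 ui64Now = static_cast<UINT64>(liNow.QuadPart);
		m_ui64CurMicros += (ui64Now - m_ui64LastRealTime) * 1000000ULL / m_ui64Frequency;
		m_ui64LastRealTime = ui64Now;
	}
	UINT64 CurMicros() const { return m_ui64CurMicros; }

protected :
	UINT64 m_ui64Frequency;		// System ticks per second.
	UINT64 m_ui64LastRealTime;	// System counter at the last Update().
	UINT64 m_ui64CurMicros;		// Accumulated time since construction, in microseconds.
};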


The time-stamp of the event appears to be the absolute accumulated time

From 0 (the time the game began).


because it's less than the current absolute time

It is unrelated to the current absolute time. It is a completely separate timer which has been synchronized with the game timers (both the logical and render timers, which have also been synchronized) and updates at its own interval (whenever an input is received from Windows®).


so when I receive an event I immediately update the input timer and pass its current time

Yes.


In this case, why would I need to synchronize the timers

Because all timers count from 0 and, for any given timer, 0 is the time when the timer was constructed (it initializes its counters there).
If there is any form of delay between the times each timer is created (and there always will be) then the absolute times of the timers will have no proper relationship to each other. In other words, both timers must start accumulating from the same system time (QueryPerformanceCounter()), otherwise you can’t ask timer 2 for events up to timer 1’s current time.


and set the input timer resolution to microseconds, since it is the absolute time that gets passed to the key-event time-stamp?

Timers only use system resolution for internal book-keeping. Every value they otherwise return is in microseconds (or microseconds converted to another format as a convenience function).


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid


No.
Getting all 500 milliseconds’ worth of events and then dishing them out in 100-millisecond intervals to the logical update is nothing but extra and redundant work. You already have a system in place to request “up to X time” inputs from the input buffer, so why not just use that to make 5 100-millisecond requests instead of making 1 500-millisecond request and then making another system to break those into 100-millisecond chunks?

I see now. I was afraid this method would be very CPU-hungry when the frame time is very high and the fixed time step very low, because I would have to enter and exit the critical section over and over, etc. But the way I was doing it (per frame) defeats the main point of the input buffer, which is being able to request input at any time during the simulation, and practically degenerates into the old polling-per-frame approach.


The inputs run on their own timer (which is synchronized with the game timer at start-up). That timer is not updated at any specific interval. It is updated once on every event it receives. Thus it can give you a time-stamp of any value regardless of your fixed-step interval. This is specifically necessary for accurate timing of input events.
Updating a timer (without passing an amount by which to update it) always calculates the amount of time that has passed since its own last update and then advances its counters by that much. Thus any call to tTimer.Update(), at any interval at any time, sets the timer’s counters to the correct time (starting from 0).
True. In my example I said I was updating it per frame just to stay consistent with the synchronization idea (at the time I thought it was synchronized with the render time because it was updated per frame, like the buffer in the example), but I'm actually updating it as soon as the event is received:

void CKeyboardBuffer::OnKeyDown( unsigned int _ui32Key ) {
	CLocker lLocker( m_csCritic );	// The buffer is also read from the game thread.
	m_tTime.Update();		// Advance the input timer to "now".
	KB_KEY_EVENT keEvent;
	keEvent.keEvent = KE_KEYDOWN;
	keEvent.ui64Time = m_tTime.CurMicros();	// Time-stamp the event in microseconds.
	m_keKeyEvents[_ui32Key].push_back( keEvent );
}


Because all timers count from 0 and, for any given timer, 0 is the time when the timer was constructed (it initializes its counters there).
If there is any form of delay between the times each timer is created (and there always will be) then the absolute times of the timers will have no proper relationship to each other. In other words, both timers must start accumulating from the same system time (QueryPerformanceCounter()), otherwise you can’t ask timer 2 for events up to timer 1’s current time.

Setting all the timers' last-time values to the same value synchronizes them.
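For example, with a timer like the sketch above that stores the system-counter value of its last update (the method name is hypothetical):

// Hypothetical helper: make this timer accumulate from the same system-counter
// value (and accumulated time) as another timer, so their absolute times match.
void CTime::SynchronizeWith( const CTime &_tOther ) {
	m_ui64LastRealTime = _tOther.m_ui64LastRealTime;
	m_ui64CurMicros = _tOther.m_ui64CurMicros;
}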


L. Spiro was correct about passing the current logical game time to the function that eats input up to that time. The small mistake I was making (after getting everything running) was logging the time the key events had been held, not their time-stamps.

Here's an architecture that can be built to accomplish this (optimizations are off-topic):

The main point of buffered input (used in most flight simulators, as well as a lot of fighting games) is being able to request inputs at any time, even if the request is a synchronous operation (not in this specific case). Time-stamping is necessary for measuring event times and for synchronization with the game simulation. For this we want microseconds: an integer microsecond count is a low-level, accurate way of measuring time that doesn't lose any float or double precision during the measurement.

The class that listens for input events (on its own thread) has its own thread-safe input buffer and its own timer. There can be one buffer per device, or some mechanism that holds events until they are needed. When an event is received, the timer is updated, giving us the current absolute elapsed time, which will be different on each update (because some time has elapsed). That current time is used as the time-stamp of the event, to be processed later. This timer was also synchronized with the game logic timer (the one advanced by FIXED_TIME_STEP). If their last-time values aren't the same, we will get incorrect results, because one timer can be slightly ahead of the other (or vice versa) from the moment of its creation.

On each logical update we request inputs from the input buffer up to the current logical game time. Why? The input buffer's timer was synchronized with the game logic timer (which starts from 0 and is advanced by FIXED_TIME_STEP), and it accumulates the real elapsed time whenever an event is received. So requesting inputs up to the current logical time is the same as eating FIXED_TIME_STEP's worth of inputs on each logical tick: every event's time-stamp is an absolute time on the same clock, and therefore relates directly to the current logic-time slice, because the timers were synchronized at the beginning of the simulation. Relative time in logical-tick space is the keyword here.

If the user presses a button during a logical tick, that press will probably be consumed in the next logical tick, or even the next frame; its time-stamp will probably be very close to the current logical game time when the timer is updated (depending on the current time, of course).

The input buffer has a function that eats all the inputs in the buffer up to the current logical game time and passes them to a temporary game-side input buffer. The game-side input buffer is cleared on each logical tick. As a rule of thumb, the input thread's buffer should never be cleared wholesale, because doing that is the same as dropping input events instead of letting them carry over to a later logical tick or frame. A sketch of such a consume function follows.
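This sketch follows the naming used earlier in the thread; the game-side hand-off, OnKeyEvent(), is a hypothetical name:

void CKeyboardBuffer::ConsumeKeyEvents( CKeyboard * _pkKeyboard, unsigned long long _ui64MaxTime ) {
	CLocker lLocker( m_csCritic );	// The buffer is shared with the input thread.
	for ( unsigned int I = KB_TOTAL_KEYS; I--; ) {
		std::vector<KB_KEY_EVENT> & vKeyEvents = m_keKeyEvents[I];
		std::vector<KB_KEY_EVENT>::iterator J = vKeyEvents.begin();
		// Hand over every event time-stamped up to the current logical game time...
		while ( J != vKeyEvents.end() && J->ui64Time <= _ui64MaxTime ) {
			_pkKeyboard->OnKeyEvent( I, (*J) );	// Hypothetical game-side hand-off.
			++J;
		}
		// ...and erase only those; later events stay for a later tick or frame.
		vKeyEvents.erase( vKeyEvents.begin(), J );
	}
}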

Once we have everything set up (input received up to the current logical game time), we can start doing something at the game-logic level. That's when we can think about the higher-level aspects of the input system, such as input logging, input contexts, input mapping, key durations, mouse, touch, gamepads, etc.

This topic is closed to new replies.
