Time-based loop and frame-based loop - C++

I'm trying to understand the concept of setting a constant speed in a game loop. My head hurts. I read the deWiTTERS page, but I can't see the why/how... when I get it... it slips.
while (true)
{
    player->update();
    player->draw();
}
This will run as fast as possible depending on how fast a processor is...I get that.
What I don't get is the logic for running at the same speed on all computers. If I am trying to run at 60 FPS, then it means the objects move by one frame every 16 ms, yeah? What I also don't get is what happens when update() or draw() is too slow.
deWiTTERS example (I used 60):
const int FRAMES_PER_SECOND = 60;
const int SKIP_TICKS = 1000 / FRAMES_PER_SECOND;

DWORD next_game_tick = GetTickCount();
// GetTickCount() returns the current number of milliseconds
// that have elapsed since the system was started

int sleep_time = 0;
bool game_is_running = true;

while( game_is_running ) {
    update_game();
    display_game();

    next_game_tick += SKIP_TICKS;
    sleep_time = next_game_tick - GetTickCount();
    if( sleep_time >= 0 ) {
        Sleep( sleep_time );
    }
    else {
        // Shit, we are running behind!
    }
}
I don't understand why he gets the current time before the loop starts. When he increments by SKIP_TICKS, I understand he is moving on to the next 16 ms interval. But I don't understand this part either:
sleep_time = next_game_tick - GetTickCount();
What does Sleep(sleep_time) do? Does the processor leave the loop and do something else? How does this achieve 60 FPS?

In cases where the update_game() and display_game() functions complete in less time than a single frame interval at 60 FPS, the loop ensures that the next frame is not processed until that interval is up, by sleeping (blocking the thread) for the excess frame time. In other words, it caps the frame rate at 60 FPS and no higher.
The processor does not 'leave the loop'; rather, the thread in which your loop is running is blocked (prevented from continuing execution of your code) until the sleep time is up. Then it continues on to the next frame. In a multi-threaded game engine, sleeping the main game loop's thread like this gives the CPU time to execute code in other threads, which may be managing physics, AI, audio mixing etc., depending on the setup.
Why is GetTickCount() called before the loop starts?
We know from the comment in your code that GetTickCount() returns the milliseconds since system boot. So let's say the system has been running for 30 seconds (30,000 ms) when you start your program, and suppose we didn't call GetTickCount() before entering the loop but instead initialized next_game_tick to 0. We do the update and draw calls (say they take 6 ms) and then:

next_game_tick += SKIP_TICKS;  // next_game_tick is now 16
sleep_time = next_game_tick - GetTickCount();
// GetTickCount() returns 30000!
// So sleep_time is now 16 - 30000 = -29984 !!!

Since we (sensibly) only sleep when sleep_time is positive, the game loop would run as fast as possible (potentially faster than 60 FPS), which is not what you want.
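For comparison, here is a minimal sketch of the same frame-capping idea in portable C++ using std::chrono and sleep_until instead of GetTickCount()/Sleep(); the update_game()/display_game() stubs stand in for your real code:

#include <chrono>
#include <thread>

void update_game()  {}   // stand-ins for your real update/draw code
void display_game() {}

int main()
{
    using clock = std::chrono::steady_clock;
    constexpr auto frame_time = std::chrono::milliseconds(1000 / 60);  // ~16 ms per frame

    auto next_frame = clock::now();
    bool game_is_running = true;

    while (game_is_running) {
        update_game();
        display_game();

        next_frame += frame_time;                   // the next 16 ms boundary
        std::this_thread::sleep_until(next_frame);  // returns immediately if we are already late
    }
}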

Inconsistent chrono::high_resolution_clock delay

I'm trying to implement a MIDI-like clocked sample player.
There is a timer which increments a pulse counter; every 480 pulses make one quarter note, so the pulse period is 1041667 ns at 120 beats per minute.
The timer is not sleep-based and runs in a separate thread, but the delay seems to be inconsistent: the period between samples played in a test file fluctuates by ±20 ms (on some occasions the period is OK and steady; I can't work out what this effect depends on).
Audio backend influence is excluded: I've tried OpenAL as well as SDL_mixer.
void Timer_class::sleep_ns(uint64_t ns)
{
    auto start = std::chrono::high_resolution_clock::now();
    bool sleep = true;
    while (sleep)
    {
        auto now = std::chrono::high_resolution_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
        if (elapsed.count() >= ns) {
            TestTime = elapsed.count();
            sleep = false;
            //break;
        }
    }
}
void Timer_class::Runner(void)
{
    // this runs as a thread
    while (1) {
        sleep_ns(BPMns);
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) { // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1); // period of this event fluctuates severely
        }
    }
}
void Player_class::PlayFile(int FileNumber)
{
#ifdef AUDIO_SDL_MIXER
    if (Mix_PlayChannel(-1, WaveData[FileNumber], 0) == -1) {
        printf("Mix_PlayChannel: %s\n", Mix_GetError());
    }
#endif // AUDIO_SDL_MIXER
}
Am I doing something wrong in terms of approach? Is there a better way to implement a timer of this kind?
Deviation higher than 4-5 ms is too much in the case of audio.
I see a large error and a small error. The large error is that your code assumes that the main processing in Runner consistently takes zero time:
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) { // check if the timer has reached the end, which is 480 pulses
    Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
    Player.PlayFile(1); // period of this event fluctuates severely
}
That is, you're "sleeping" for the time you want your loop iteration to take, and then you're doing processing on top of that.
The small error is presuming that you can represent your ideal loop iteration time with an integral number of nanoseconds. This error is so small that it doesn't really matter. However I amuse myself by showing people how they can get rid of this error too. :-)
First let's correct the small error by exactly representing the idealized loop iteration time:
using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;
I know nothing of music, but I'm guessing the above code is right because if you convert iteration_time{1} to nanoseconds, you get approximately 1041667ns. iteration_time{1} is intended to be the precise amount of time you want each iteration of your loop in Timer_class::Runner to take.
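For example, a quick standalone check of that conversion (using only the aliases above plus a couple of headers) might look like this:

#include <chrono>
#include <cstdint>
#include <iostream>
#include <ratio>

using quarterPeriod   = std::ratio<1, 2>;                                   // one quarter note = 1/2 s at 120 BPM
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;  // 480 pulses per quarter
using iteration_time  = std::chrono::duration<std::int64_t, iterationPeriod>;

int main()
{
    using ns = std::chrono::nanoseconds;
    // prints 1041666, i.e. approximately 1041667 ns (duration_cast truncates)
    std::cout << std::chrono::duration_cast<ns>(iteration_time{1}).count() << "ns\n";
}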
To correct the large error, you need to sleep until a time_point, as opposed to sleeping for a duration. Here's a generic utility to help you do that:
template <class Clock, class Duration>
void
delay_until(std::chrono::time_point<Clock, Duration> tp)
{
    while (Clock::now() < tp)
        ;
}
Now if you code Timer_class::Runner to use delay_until instead of sleep_ns, I think you'll get better results:
void
Timer_class::Runner()
{
    auto next_start = std::chrono::steady_clock::now() + iteration_time{1};
    while (true)
    {
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) { // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1);
        }
        delay_until(next_start);
        next_start += iteration_time{1};
    }
}
I ended up using @Howard Hinnant's version of the delay, and reducing the buffer size in openal-soft; that's what made a huge difference. Fluctuation is now about ±5 ms for 1/16th notes at 120 BPM (125 ms period) and ±1 ms for quarter notes. Leaves a lot to be desired, but I guess it's okay.

Precise way to reduce CPU usage in an infinite loop

This is my code using QueryPerformanceCounter as a timer.
//timer.h
class timer {
private:
    ...
public:
    ...
    double get();  // returns elapsed time in seconds
    void start();
};
//a.cpp
void loop() {
    timer t;
    double tick;
    double diff; // surplus seconds
    t.start();
    while( running ) {
        tick = t.get();
        if( tick >= 1.0 - diff ) {
            t.start();
            //things that should be run exactly every second
            ...
        }
        Sleep( 880 );
    }
}
Without Sleep, this loop would spin indefinitely, calling t.get() on every iteration, which causes high CPU usage. For that reason, I make it sleep for about 880 milliseconds so that it doesn't call t.get() when it isn't necessary.
As I said above, I'm currently using Sleep to do the trick, but what I'm worried about is the accuracy of Sleep. I've read somewhere that the actual number of milliseconds the program pauses may vary by 20 to 50 ms, which is the reason I set the parameter to 880. I want to reduce the CPU usage as much as possible; I want to, if possible, pause more than 990 milliseconds EDIT: and yet less than 1000 milliseconds between every loop. What would be the best way to go?
I don't get why you are calling t.start() twice (does it reset the clock?), but I would like to propose a solution for the Sleep inaccuracy. Let's take a look at the contents of the while( running ) loop and follow the algorithm:
double future, remaining, sleep_precision = 0.05;
while (running) {
    future = t.get() + 1.0;
    things_that_should_be_run_exactly_every_second();
    // the loop in case of spurious wakeup
    for (;;) {
        remaining = future - t.get();
        if (remaining < sleep_precision) break;
        Sleep((DWORD)(remaining * 1000)); // t.get() is in seconds, Sleep() takes milliseconds
    }
    // next, do the spin-lock for at most sleep_precision
    while (t.get() < future);
}
The value of sleep_precision should be determined empirically - the OSes I know of can't tell you what it is.
Next, there are some alternatives to the sleeping mechanism that may better suit your needs - Is there an alternative for sleep() in C?
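For completeness, here is a rough cross-platform sketch of the same sleep-then-spin idea using std::chrono; the one-second period and the 50 ms spin margin are assumptions you would tune for your OS:

#include <chrono>
#include <thread>

void loop_once_per_second(bool& running)
{
    using clock = std::chrono::steady_clock;
    const auto period      = std::chrono::seconds(1);
    const auto spin_margin = std::chrono::milliseconds(50); // tune to your OS's sleep precision

    auto next = clock::now() + period;
    while (running) {
        // things that should run exactly every second go here

        // coarse wait: let the OS sleep away most of the interval
        std::this_thread::sleep_until(next - spin_margin);

        // fine wait: burn the last few milliseconds in a spin loop
        while (clock::now() < next)
            ;

        next += period;
    }
}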
If you want to pause for more than 990 milliseconds, write a sleep for 991 milliseconds. Your thread is guaranteed to be asleep for at least that long. It won't be less, but it could be multiples of 20-50 ms more (depending on the resolution of your OS's time slicing and on the cost of context switching).
However, this will not give you something running "exactly every second". There is just no way to achieve that on a time-shared operating system. You'll have to program closer to the metal, or rely on an interrupt from a PPS source and just pray your OS lets you run your entire loop iteration in one shot. Or, I suppose, write something to run in kernel mode...?

Why did Chromium implement Time::Now this way? What is the benefit?

The code segment is as follows; it comes from Chromium. Why did they do it like this?
// Initialize initial_ticks and initial_time
void InitializeClock() {
    initial_ticks = TimeTicks::Now();
    // Initialize initial_time
    initial_time = CurrentWallclockMicroseconds();
}

// static
Time Time::Now() {
    if (initial_time == 0)
        InitializeClock();

    // We implement time using the high-resolution timers so that we can get
    // timeouts which are smaller than 10-15ms. If we just used
    // CurrentWallclockMicroseconds(), we'd have the less-granular timer.
    //
    // To make this work, we initialize the clock (initial_time) and the
    // counter (initial_ctr). To compute the initial time, we can check
    // the number of ticks that have elapsed, and compute the delta.
    //
    // To avoid any drift, we periodically resync the counters to the system
    // clock.
    while (true) {
        TimeTicks ticks = TimeTicks::Now();

        // Calculate the time elapsed since we started our timer
        TimeDelta elapsed = ticks - initial_ticks;

        // Check if enough time has elapsed that we need to resync the clock.
        if (elapsed.InMilliseconds() > kMaxMillisecondsToAvoidDrift) {
            InitializeClock();
            continue;
        }

        return Time(elapsed + Time(initial_time));
    }
}
I assume your answer lies in the comment of the code you pasted:
// We implement time using the high-resolution timers so that we can get
// timeouts which are smaller than 10-15ms. If we just used
// CurrentWallclockMicroseconds(), we'd have the less-granular timer.
So Now() gives a high-resolution time value, which is beneficial whenever you need better than 10-15 ms resolution, as they state in the comment. For instance, if you want to reschedule a task every 100 ns you need the higher resolution, and if you want to measure the execution time of something, 10-15 ms is an eternity.
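To illustrate the pattern, here is a rough sketch with std::chrono types rather than Chromium's own classes (the 60-second resync window is an arbitrary choice): anchor the wall clock once, then add high-resolution monotonic deltas to it, resyncing periodically to limit drift.

#include <chrono>

class HybridClock {
public:
    // Wall-clock "now" built from a coarse anchor plus a high-resolution delta.
    std::chrono::system_clock::time_point Now() {
        if (!initialized_)
            Resync();
        auto elapsed = std::chrono::steady_clock::now() - initial_ticks_;
        if (elapsed > kMaxDriftWindow) {   // periodically resync to the system clock
            Resync();
            elapsed = std::chrono::steady_clock::now() - initial_ticks_;
        }
        return initial_time_ +
               std::chrono::duration_cast<std::chrono::system_clock::duration>(elapsed);
    }

private:
    void Resync() {
        initial_ticks_ = std::chrono::steady_clock::now();   // high-resolution, monotonic
        initial_time_  = std::chrono::system_clock::now();   // coarse wall clock
        initialized_   = true;
    }

    static constexpr std::chrono::seconds kMaxDriftWindow{60};  // arbitrary resync interval
    std::chrono::steady_clock::time_point initial_ticks_{};
    std::chrono::system_clock::time_point initial_time_{};
    bool initialized_ = false;
};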

Uniformly Regulating Program Execution Rate [Windows C++]

First off, I found a lot of information on this topic, but no solutions that solved the issue unfortunately.
I'm simply trying to regulate my C++ program to run at 60 iterations per second. I've tried everything from GetClockTicks() to GetLocalTime() to help in the regulation but every single time I run the program on my Windows Server 2008 machine, it runs slower than on my local machine and I have no clue why!
I understand that "clock" based function calls return CPU time spend on the execution so I went to GetLocalTime and then tried to differentiate between the start time and the stop time then call Sleep((FPS / 1000) - millisecondExecutionTime)
My local machine is quite a bit faster than the server's CPU, so obviously the thought was that it was going off of CPU ticks, but that doesn't explain why GetLocalTime doesn't work. I've been basing this method off of http://www.lazyfoo.net/SDL_tutorials/lesson14/index.php, swapping get_ticks() for all of the time-returning functions I could find on the web.
For example take this code:
#include <Windows.h>
#include <time.h>
#include <string>
#include <iostream>
using namespace std;

int main() {
    int tFps = 60;
    int counter = 0;
    SYSTEMTIME gStart, gEnd, start_time, end_time;
    GetLocalTime( &gStart );
    bool done = false;
    while(!done) {
        GetLocalTime( &start_time );
        Sleep(10);
        counter++;
        GetLocalTime( &end_time );

        int startTimeMilli = (start_time.wSecond * 1000 + start_time.wMilliseconds);
        int endTimeMilli = (end_time.wSecond * 1000 + end_time.wMilliseconds);

        int time_to_sleep = (1000 / tFps) - (endTimeMilli - startTimeMilli);

        if (counter > 240)
            done = true;
        if (time_to_sleep > 0)
            Sleep(time_to_sleep);
    }
    GetLocalTime( &gEnd );
    cout << "Total Time: " << (gEnd.wSecond*1000 + gEnd.wMilliseconds) - (gStart.wSecond*1000 + gStart.wMilliseconds) << endl;
    cin.get();
}
For this code snippet, run on my computer (3.06 GHz) I get a total time (ms) of 3856 whereas on my server (2.53 GHz) I get 6256. So it potentially could be the speed of the processor though the ratio of 2.53/3.06 is only .826797386 versus 3856/6271 is .614893956.
I can't tell if the Sleep function is doing something drastically different than expected (though I don't see why it would), or if it is my method of getting the time (even though it should be wall-clock time in ms, not clock-cycle time). Any help would be greatly appreciated, thanks.
For one thing, Sleep's default resolution is the machine's scheduling quantum - usually either 10 ms or 15 ms, depending on the Windows edition. To get a resolution of, say, 1 ms, you have to issue a timeBeginPeriod(1), which reprograms the timer hardware to fire (roughly) once every millisecond.
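A minimal sketch of pairing those calls (assuming you link against winmm.lib; error handling omitted):

#include <Windows.h>
#include <mmsystem.h>              // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")

int main() {
    timeBeginPeriod(1);            // request 1 ms timer resolution (system-wide)

    // ... run your fixed-rate loop here; Sleep(n) now wakes within roughly 1 ms of n ...

    timeEndPeriod(1);              // always pair with the matching timeBeginPeriod before exiting
    return 0;
}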
In your main loop you can
int main()
{
    // Timers
    LONGLONG curTime  = 0;
    LONGLONG nextTime = 0;
    int loops = 0;  // frameskip counter (not incremented in this example; see the note about MAX_FRAMESKIP below)

    Timers::SWGameClock::GetInstance()->GetTime(&nextTime);

    while (true) {
        Timers::SWGameClock::GetInstance()->GetTime(&curTime);
        if (curTime > nextTime && loops <= MAX_FRAMESKIP) {
            nextTime += Timers::SWGameClock::GetInstance()->timeCount;
            // Business logic goes here and occurs based on the specified framerate
        }
    }
}
using this time library
include "stdafx.h"
LONGLONG cacheTime;
Timers::SWGameClock* Timers::SWGameClock::pInstance = NULL;
Timers::SWGameClock* Timers::SWGameClock::GetInstance ( ) {
if (pInstance == NULL) {
pInstance = new SWGameClock();
}
return pInstance;
}
Timers::SWGameClock::SWGameClock(void) {
this->Initialize ( );
}
void Timers::SWGameClock::GetTime ( LONGLONG * t ) {
// Use timeGetTime() if queryperformancecounter is not supported
if (!QueryPerformanceCounter( (LARGE_INTEGER *) t)) {
*t = timeGetTime();
}
cacheTime = *t;
}
LONGLONG Timers::SWGameClock::GetTimeElapsed ( void ) {
LONGLONG t;
// Use timeGetTime() if queryperformancecounter is not supported
if (!QueryPerformanceCounter( (LARGE_INTEGER *) &t )) {
t = timeGetTime();
}
return (t - cacheTime);
}
void Timers::SWGameClock::Initialize ( void ) {
if ( !QueryPerformanceFrequency((LARGE_INTEGER *) &this->frequency) ) {
this->frequency = 1000; // 1000ms to one second
}
this->timeCount = DWORD(this->frequency / TICKS_PER_SECOND);
}
Timers::SWGameClock::~SWGameClock(void)
{
}
with a header file that contains the following:
// Required for rendering stuff on time
#pragma once
#define TICKS_PER_SECOND 60
#define MAX_FRAMESKIP 5
namespace Timers {
class SWGameClock
{
public:
static SWGameClock* GetInstance();
void Initialize ( void );
DWORD timeCount;
void GetTime ( LONGLONG* t );
LONGLONG GetTimeElapsed ( void );
LONGLONG frequency;
~SWGameClock(void);
protected:
SWGameClock(void);
private:
static SWGameClock* pInstance;
}; // SWGameClock
} // Timers
This will ensure that your code runs at 60 FPS (or whatever rate you put in), though you can probably drop MAX_FRAMESKIP, as it is not truly implemented in this example!
You could try a WinMain function and use SetTimer with a regular message loop (you can also take advantage of the filter mechanism of GetMessage( ... )), testing for the WM_TIMER message at the requested interval; when your counter reaches the limit, call PostQuitMessage(0) to terminate the message loop.
For a duty cycle that fast, you can use a high-accuracy timer (like QueryPerformanceCounter) and a busy-wait loop.
If you had a much lower duty cycle, but still wanted precision, then you could Sleep for part of the time and then eat up the leftover time with a busy-wait loop.
Another option is to use something like DirectX to sync yourself to the VSync interrupt (which is almost always 60 Hz). This can make a lot of sense if you're coding a game or a/v presentation.
Windows is not a real-time OS, so there will never be a perfect way to do something like this, as there's no guarantee your thread will be scheduled to run exactly when you need it to.
Note that per the remarks for Sleep, the actual amount of time will be at least one "tick" and possibly one whole "tick" longer than the delay you requested before the thread is scheduled to run again (and then we have to assume the thread is scheduled). The "tick" can vary a lot depending on hardware and the version of Windows. It is commonly in the 10-15 ms range, and I've seen it as bad as 19 ms. For 60 Hz, you need 16.666 ms per iteration, so this is obviously not nearly precise enough to give you what you need.
What about rendering (iterating) based on the time elapsed between frames? Consider creating a void render(double timePassed) function and rendering based on the timePassed parameter instead of putting the program to sleep.
Imagine, for example, that you want to render a ball falling or bouncing. You know its speed, acceleration and all the other physics you need. Calculate the position of the ball from timePassed and those physics parameters (speed, acceleration, etc.), along the lines of the sketch below.
Or, if you prefer, you could simply skip the render() call when the time passed is too small, instead of putting the program to sleep.
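A rough sketch of that idea, with the ball, its physics values and the commented-out render() call all made up for illustration:

#include <chrono>

struct Ball {
    double y        = 100.0;  // height in metres
    double velocity = 0.0;    // m/s, positive = upwards
};

// Advance the simulation by the wall-clock time that actually passed.
void update(Ball& ball, double timePassed)
{
    const double gravity = -9.81; // m/s^2
    ball.velocity += gravity * timePassed;
    ball.y        += ball.velocity * timePassed;
    if (ball.y < 0.0) {           // bounce off the floor
        ball.y = 0.0;
        ball.velocity = -ball.velocity;
    }
}

void run(Ball& ball)
{
    using clock = std::chrono::steady_clock;
    auto last = clock::now();
    while (true) {
        auto now = clock::now();
        double timePassed = std::chrono::duration<double>(now - last).count(); // seconds
        last = now;
        update(ball, timePassed);
        // render(ball); // draw using the freshly computed position
    }
}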

Win32 API timers

I was using the system timer (the clock() function, see time.h) to time some serial and USB comms. All I needed was approximately 1 ms accuracy. The first thing I noticed is that individual times could be out by (plus or minus) 10 ms. Timing a number of smaller events led to less accurate timing as the events went by; aggregate timing was slightly better. After a bit of a root around on MSDN etc. I stumbled across the timer in the Windows multimedia library (timeGetTime(), see MMSystem.h). This was much better, with decent accuracy down to the 1 ms level.
Weirdness then ensued: after initially working flawlessly for days (lovely logs with useful timings) it all went pear-shaped, as this API also started showing the same odd granularity (instead of a bunch of small comms messages taking 3 ms, 2 ms, 3 ms, 2 ms, 3 ms etc., it came out as 0 ms, 0 ms, 0 ms, 0 ms, 15 ms etc.). Rebooting the PC restored normal accuracy, but at some indeterminate time (after an hour or so) the anomaly returned.
Has anyone got any ideas or suggestions for how to get this level of timing accuracy on Windows XP (32-bit Pro, using Visual Studio 2008)?
My little timing class:
class TMMTimer
{
public:
    TMMTimer( unsigned long msec);
    TMMTimer();

    void Clear() { is_set = false; }
    void Set( unsigned long msec=0);
    bool Expired();
    unsigned long Elapsed();

private:
    unsigned long when;
    int roll_over;
    bool is_set;
};
/** Default constructor.
 */
TMMTimer::TMMTimer()
{
    is_set = false;
}

/** Main constructor.
 */
TMMTimer::TMMTimer( unsigned long msec)
{
    Set( msec);
}
/** Set the timer.
 *
 * \note This sets the timer to some point in the future.
 *       Because the timer can wrap around, the function sets a
 *       rollover flag for this condition, which is checked by the
 *       Expired member function.
 */
void TMMTimer::Set( unsigned long msec /*=0*/)
{
    unsigned long now = timeGetTime();  // System millisecond counter.
    when = now + msec;
    if (when < now)
        roll_over = 1;
    else
        roll_over = 0;
    is_set = true;
}
/** Check if the timer has expired.
 *
 * \return Returns true if expired, else false.
 *
 * \note Also returns true if the timer was never set. Note that this
 *       function can handle the situation when the system timer
 *       rolls over (approx every 49.7 days).
 */
bool TMMTimer::Expired()
{
    if (!is_set)
        return true;

    unsigned long now = timeGetTime();  // System millisecond counter.
    if (now > when)
    {
        if (!roll_over)
        {
            is_set = false;
            return true;
        }
    }
    else
    {
        if (roll_over)
            roll_over = 0;
    }
    return false;
}
/** Returns time elapsed since timer expired.
 *
 * \return Time in milliseconds, 0 if timer was never set.
 */
unsigned long TMMTimer::Elapsed()
{
    if (!is_set)
        return 0;
    return timeGetTime() - when;
}
Did you call timeBeginPeriod(1); to set the multimedia resolution to 1 millisecond? The multimedia timer resolution is system-global, so if you didn't set it yourself, chances are that you started after something else had called it, then when that something else called timeEndPeriod(), the resolution went back to the system default (which is normally 10 ms, if memory serves).
Others have advised using QueryPerformanceCounter(). This does have much higher resolution, but you still need to be careful. Depending on the kernel involved, it can/will use the x86 RDTSC instruction, which is a 64-bit counter of instruction cycles. For better or worse, on a CPU whose clock rate varies (which started on laptops, but is now common almost everywhere) the relationship between the clock count and wall time varies right along with the clock speed. If memory serves, if you force Windows to install with the assumption that there are multiple physical processors (not just multiple cores), you'll get a kernel in which QueryPerformanceCounter() will read the motherboard's 1.024 MHz clock instead. This reduces resolution compared to the CPU clock, but at least the speed is constant (and if you only need 1 ms resolution, it should be more than adequate anyway).
If you want high resolution timing on Windows, you should consider using QueryPerformanceFrequency and QueryPerformanceCounter.
These will provide the most accurate timings available on Windows, and should be much more "stable" over time. QPF gives you the number of counts/second, and QPC gives you the current count. You can use this to do very high resolution timings on most systems (fraction of ms).
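For reference, a minimal sketch of timing an operation with those two calls (error handling omitted):

#include <Windows.h>
#include <iostream>

int main() {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
    QueryPerformanceCounter(&start);

    Sleep(5);                           // the thing you want to time

    QueryPerformanceCounter(&end);
    double elapsed_ms = (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    std::cout << "Elapsed: " << elapsed_ms << " ms\n";
    return 0;
}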
Check out the high resolution timers from the Win32 API.
http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
You can usually use it to get microsecond-resolution timers.