Game clock and Sleep (C++)

I'm currently making a small console game. At the end of the game loop there is another loop that doesn't return until 1/100 s after the iteration's start time.
Of course that uses up a lot of CPU, so I placed
Sleep(1);
at the end to solve it. I thought everything was right until I ran the game on a 2005 XP laptop... and it was really slow.
When I removed the Sleep command, the game worked perfectly on both computers, but now I have the CPU usage problem.
Does anyone have a good solution for this?
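For reference, a minimal sketch of the kind of loop described above, assuming Windows, a 10 ms frame budget, and GetTickCount() standing in for whatever clock the game actually uses:
#include <windows.h>

void gameLoop()
{
    while (true)
    {
        DWORD frameStart = GetTickCount(); // the iteration's begin time, in ms

        // ... update and render the game ...

        // busy-wait until 1/100 s (10 ms) after the iteration started
        while (GetTickCount() - frameStart < 10)
        {
            Sleep(1); // added to reduce CPU usage; too coarse on the XP laptop
        }
    }
}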

So I found out that the problem was the sleep granularity on Windows NT-based systems (2000, XP, 2003), which is around 15 ms. If anyone else struggles with this type of problem, here's how to solve it:
timeBeginPeriod(1); //from windows.h
Call it once at the beginning of the main() function. This affects a few things, including Sleep(), so that Sleep(1) actually sleeps for about a millisecond instead of a full timer tick.
timeEndPeriod(1); //on exit
Of course I was developing the game on Windows 7 the whole time and thought everything was right; apparently Windows Vista (NT 6.0) and later are less affected by this. The fix is still useful, though, considering that a lot of people still use XP.
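A minimal sketch of how the calls fit together, assuming a project that links against winmm.lib (the library that provides timeBeginPeriod/timeEndPeriod):
#include <windows.h>
#pragma comment(lib, "winmm.lib") // timeBeginPeriod/timeEndPeriod live in winmm

int main()
{
    timeBeginPeriod(1); // request 1 ms timer resolution for the whole run

    // ... game loop with Sleep(1) at the end of each iteration ...

    timeEndPeriod(1);   // restore the previous resolution on exit
    return 0;
}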

You should use std::this_thread::sleep_for from the <thread> header for this, along with the std::chrono facilities. Maybe something like this:
#include <chrono>
#include <thread>

while (...)
{
    auto begin = std::chrono::steady_clock::now();
    // your code
    auto end = std::chrono::steady_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - begin);
    std::this_thread::sleep_for(std::chrono::milliseconds(10) - duration);
}
If your code doesn't consume much time during one iteration, or if each iteration takes a roughly constant time, you can skip the measuring and just sleep for a constant amount:
std::this_thread::sleep_for(std::chrono::milliseconds(8));

Sounds like the older laptop just takes more time to do all your processing, and then it sleeps for 1 millisecond on top of that.
You should include a library that can tell time.
Get the current time at the start of the program / start of the loop, then at the end of the loop / program compare the difference between the starting time and the current time to the amount of time you want. If it's lower than the amount of time you want (let's say 8 milliseconds), tell it to sleep for minimumTime - (currentTime - recordedTime), where recordedTime is the variable you set at the start of the loop.
I've done this for my own game in SDL2. SDL_GetTicks() returns the number of milliseconds the program has been running, and "frametime" is the time recorded at the start of the main game loop. This is how I keep my game running at a maximum of 60 fps. This if statement should be adapted and placed at the bottom of your game loop.
if (SDL_GetTicks() - frametime < MINFRAMETIME)
{
    SDL_Delay(MINFRAMETIME - (SDL_GetTicks() - frametime));
}
I think the standard library equivalent (using std::chrono and std::this_thread::sleep_for, since clock() measures CPU time in CLOCKS_PER_SEC ticks and sleep() only has whole-second resolution) would be:
auto lastCheck = std::chrono::steady_clock::now();
// ... loop body ...
auto elapsed = std::chrono::steady_clock::now() - lastCheck;
if (elapsed < MIN_TIME) // MIN_TIME as a std::chrono duration, e.g. std::chrono::milliseconds(16)
{
    std::this_thread::sleep_for(MIN_TIME - elapsed);
}

Related

I'm looking to improve or replace my current delay / sleep method (C++)

I am coding a project that requires precise delay times across a number of computers. This is the code I am currently using; I found it on a forum:
{
    // busy-wait for 'ms' milliseconds ('ms' is the surrounding function's parameter)
    LONGLONG timerResolution;
    LONGLONG wantedTime;
    LONGLONG currentTime;
    QueryPerformanceFrequency((LARGE_INTEGER*)&timerResolution);
    timerResolution /= 1000; // ticks per millisecond
    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    wantedTime = currentTime / timerResolution + ms;
    currentTime = 0;
    while (currentTime < wantedTime)
    {
        QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
        currentTime /= timerResolution; // convert ticks to milliseconds
    }
}
Basically, the issue I am having is that this uses a lot of CPU, around 16-20%, once I start calling the function. The usual Sleep() uses zero CPU but is extremely inaccurate. From what I have read on multiple forums, that is the trade-off when you exchange accuracy for CPU usage, but I thought I had better raise the question before I settle for this sleep method.
The reason it's using 15-20% CPU is likely that it's using 100% of one core, as there is nothing in the loop to slow it down.
In general, this is a "hard" problem to solve as PCs (more specifically, the OSes running on those PCs) are in general not made for running real time applications. If that is absolutely desirable, you should look into real time kernels and OSes.
For this reason, the guarantee that is usually made about sleep times is that the system will sleep for at least the specified amount of time.
If you are running Linux, you could try the nanosleep method (http://man7.org/linux/man-pages/man2/nanosleep.2.html), though I don't have any experience with it.
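For what it's worth, here is a minimal nanosleep() sketch; the helper name sleep_ms is just for illustration:
#include <time.h>
#include <errno.h>

// Sleep for roughly 'ms' milliseconds; retry if a signal interrupts the sleep.
void sleep_ms(long ms)
{
    timespec req{};
    req.tv_sec  = ms / 1000;
    req.tv_nsec = (ms % 1000) * 1000000L;
    timespec rem{};
    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem; // continue with the remaining time
}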
Alternatively you could go with a hybrid approach where you use sleeps for long delays, but switch to polling when it's almost time:
#include <thread>
#include <chrono>
using namespace std::chrono_literals;
...
wantedTime = currentTime / timerResolution + ms;
currentTime = 0;
while (currentTime < wantedTime)
{
    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    currentTime /= timerResolution;
    if (wantedTime - currentTime > 100) // if more than 100 ms of waiting remain
    {
        // sleep for significantly less than the remaining 100 ms, to ensure that we don't "oversleep"
        std::this_thread::sleep_for(50ms);
    }
}
Now this is a bit prone to oversleeping, as it assumes that the OS will hand control back to the program within 50 ms of the sleep_for finishing. To further combat this you could turn the sleep down (to, say, 1 ms).
You can set the Windows timer resolution to its minimum (usually 1 ms) to make Sleep() accurate to within about 1 ms. By default it is only accurate to about 15 ms. See the Sleep() documentation.
Note that your execution can be delayed if other programs are consuming CPU time, but this could also happen if you were waiting with a timer.
#include <windows.h>
#include <timeapi.h> // link with winmm.lib

// Sleep(1) takes ~15 ms (or whatever the default resolution is)
Sleep(1);

TIMECAPS caps_;
timeGetDevCaps(&caps_, sizeof(caps_));
timeBeginPeriod(caps_.wPeriodMin); // request the finest resolution available

// Sleep(1) now takes ~1 ms
Sleep(1);

timeEndPeriod(caps_.wPeriodMin); // restore the previous resolution

Linux measuring time problem! std::chrono, QueryPerformanceCounter, clock_gettime

I use clock_gettime() in Linux and QueryPerformanceCounter() in Windows to measure time. When measuring time, I encountered an interesting case.
Firstly, I'm calculating DeltaTime in an infinite while loop. This loop calls some update functions. To test the DeltaTime calculation, the program waits for 40 milliseconds inside an Update function, because the update functions are still empty.
Then, in the program compiled as Win64-Debug, I measure DeltaTime. It's approximately 0.040f, and it stays that way as long as the program is running (Win64-Release works like that too). It runs correctly.
But in the program compiled as Linux64-Debug or Linux64-Release, there is a problem.
When the program starts running, everything is normal and DeltaTime is approximately 0.040f. But after a while, DeltaTime comes out as 0.12XXf or 0.132XX, then immediately afterwards it's back to 0.040f, and so on.
I thought I was using QueryPerformanceCounter correctly and clock_gettime() incorrectly, so I decided to try the standard library's std::chrono::high_resolution_clock instead, but it's the same. No change.
#include <chrono>
#include <cstdint>
#include <cstdio>

#define MICROSECONDS (1000*1000)

auto prev_time = std::chrono::high_resolution_clock::now();
decltype(prev_time) current_time;

while (1)
{
    current_time = std::chrono::high_resolution_clock::now();
    int64_t deltaTime = std::chrono::duration_cast<std::chrono::microseconds>(current_time - prev_time).count();
    printf("DeltaTime: %f\n", deltaTime / (float)MICROSECONDS);
    NetworkManager::instance().Update();
    prev_time = current_time;
}

void NetworkManager::Update()
{
    auto start = std::chrono::high_resolution_clock::now();
    decltype(start) end;
    while (1)
    {
        end = std::chrono::high_resolution_clock::now();
        int64_t y = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
        if (y / (float)MICROSECONDS >= 0.040f) // busy-wait for 40 ms
            break;
    }
    return;
}
[Screenshots omitted: "Normal" output vs. "Problem" output]
Possible causes:
Your clock_gettime is not using the vDSO and is a system call instead - this will be visible if the program is run under strace, and can be configured on modern kernel versions.
Your thread gets preempted (taken off the CPU by the scheduler). To run a clean experiment, run your app with real-time priority and pinned to a specific CPU core (see the sketch below).
Also, I would disable CPU frequency scaling while experimenting.
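A minimal sketch of that setup on Linux, assuming pthreads (core 0 and priority 10 are arbitrary; SCHED_FIFO usually requires root or CAP_SYS_NICE, and g++ defines _GNU_SOURCE for pthread_setaffinity_np by default):
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to one core and switch it to real-time FIFO scheduling.
bool pinAndBoost()
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set); // core 0, chosen arbitrarily for the experiment
    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        return false;

    sched_param param{};
    param.sched_priority = 10; // any valid SCHED_FIFO priority
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param) == 0;
}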

Switching an image at specific frequencies (C++)

I am currently developing a stimuli provider for the brain's visual cortex as part of a university project. The program is (preferably) to be written in C++, using Visual Studio and OpenCV. The way it is supposed to work is that the program creates a number of threads, according to the number of different frequencies, each running a timer for its respective frequency.
The code looks like this so far:
void timerThread(void *param) {
    t *args = (t*)param; // 't' is the thread-argument struct
    int id = args->data1;
    float freq = args->data2;
    // period in ms between toggles; the -1 roughly compensates for overhead
    unsigned long period = round((double)1000 / (double)freq) - 1;
    while (true) {
        Sleep(period);
        show[id] = 1;
        Sleep(period);
        show[id] = 0;
    }
}
It seems to work OK for some of the frequencies, but others vary quite a lot in frame rate. I have tried creating my own timing function, similar to what is done in Arduino's "blinkWithoutDelay" example, but this worked very badly. I have also tried the waitKey() function, which behaved much like the Sleep() call used now.
Any help would be greatly appreciated!
You should use timers instead of "sleep" to fix this, as the loop may sometimes take more or less time to complete.
Restart the timer at the start of the loop and read its value right before the reset - this gives you the time it took for the loop to complete.
If this time is greater than the "period" value, it means you're late, and you need to execute right away (and even lower the period for the next loop).
Otherwise, if it's lower, it means you need to wait until it is greater.
I personally dislike sleep, and instead keep polling the timer until the elapsed time is greater.
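To make that concrete, here is a minimal sketch of the polling approach using std::chrono; the atomic show flag stands in for the question's show[id]:
#include <atomic>
#include <chrono>

// Toggle 'show' at the requested rate by polling a steady clock instead of sleeping.
void toggleAtFrequency(float freq, std::atomic<int>& show)
{
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / freq)); // time between toggles, as in the question's code

    auto next = clock::now() + period;
    while (true)
    {
        while (clock::now() < next)
        {
            // keep polling; optionally sleep briefly here to save CPU
        }
        show = !show;   // toggle the stimulus
        next += period; // schedule from the previous deadline so drift doesn't accumulate
    }
}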
I suggest looking into "fixed timestep" code, such as the snippet below. You'll need to put this snippet on every thread, with varying values for the period (ns), and put your code where doUpdates() is.
If you need a "timer" library, since I don't know OpenCV, I recommend SFML (SFML's timer docs).
The following code is from here:
long int start = 0, end = 0;
double delta = 0;
double ns = 1000000.0 / 60.0; // microseconds per update: syncs updates at 60 per second (59 - 61)
while (!quit) {
    start = timeAsMicro(); // current time in microseconds
    delta += (double)(start - end) / ns; // you can skip dividing by ns here and use "delta >= ns" below instead
    end = start;
    while (delta >= 1.0) {
        doUpdates();
        delta -= 1.0;
    }
}
Please mind the fact that in this code, the timer is never reset.
(This may not be completely accurate but is the best assumption I can make to fix your problem given the code you've presented)

Timed memory tiles game (currently works without timing)

I have made a memory tiles program, but I want it to be timed, i.e., the user should be able to play the game only for 2 minutes. What do I do?
Also, on Linux sleep() does not work; what should we use for a delay?
I presume the game has a "main loop" somewhere.
At the beginning of the main loop (before the actual loop), take the current time, call this start_time. Then in each iteration of the loop, take the current time again, call this now. The elapsed time is elapsed_time = now - start_time;. Assuming time is in seconds, then if (elapsed_time >= 120) { ... end game ... } would do the trick.
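A minimal sketch of that check with std::chrono (steady_clock is a portable way to read the current time here):
#include <chrono>

int main()
{
    const auto start_time = std::chrono::steady_clock::now(); // taken before the actual loop

    while (true) // main loop
    {
        // ... handle input, update tiles, draw ...

        const auto now = std::chrono::steady_clock::now();
        const auto elapsed_time =
            std::chrono::duration_cast<std::chrono::seconds>(now - start_time).count();
        if (elapsed_time >= 120) // 2 minutes are up
        {
            // ... end game ...
            break;
        }
    }
}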

Running a timer in C++, parallel to a program under execution

I'm running an OpenCV application in Visual Studio on a Windows 7 machine. As part of the end application, I need a timer running in parallel to the OpenCV application while it executes. The OpenCV application has real-time video capture as input to an eye blink detection algorithm. The OpenCV code must run continuously and cannot be paused or stopped. However, to find the interval between blinks, I need a timer running after each blink, so the timer has to run while blinks are being detected. I have gone through the SetTimer and CreateTimerQueueTimer functions and was unable to get a clear understanding of how to go about this. Is there any other way of running a timer in a C++ program? Any suggestions and solutions will be highly appreciated.
Why do you need a timer to calculate the interval between the blinks? Can't you just store the current time at each blink and subtract it from the previous?
With C++11 you can use std::chrono to make a simple timer:
#include <chrono>
#include <iostream>

auto start = std::chrono::high_resolution_clock::now();
// do processing here
auto end = std::chrono::high_resolution_clock::now();
std::cout << "time was "
          << std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count()
          << " ns";
Edit: in response to the comment on the other answer, you could do the following:
...
auto take = end - start;
if (take > std::chrono::nanoseconds(x)) // x is your threshold in nanoseconds
{
    // ... do whatever you want here
}
Oh, and one thing to mention: you can replace the nanoseconds with any other time unit.
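For example, to print the same measurement in milliseconds instead (using the end and start from the snippet above):
std::cout << "time was "
          << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
          << " ms";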