Precise way to reduce CPU usage in an infinite loop - C++

This is my code, using QueryPerformanceCounter as a timer.
// timer.h
class timer {
private:
    ...
public:
    ...
    double get();   // returns elapsed time in seconds since start()
    void start();   // (re)starts the timer
};
// a.cpp
void loop() {
    timer t;
    double tick;
    double diff = 0.0;  // surplus seconds (presumably updated in the elided code)
    t.start();
    while (running) {
        tick = t.get();
        if (tick >= 1.0 - diff) {
            t.start();
            // things that should be run exactly every second
            ...
        }
        Sleep(880);
    }
}
Without Sleep, this loop would spin continuously, calling t.get() on every iteration, which causes high CPU usage. For that reason I make it sleep for about 880 milliseconds, so that it doesn't call t.get() more often than necessary.
As I said above, I'm currently using Sleep to do the trick, but what I'm worried about is the accuracy of Sleep. I've read somewhere that the actual pause may overshoot what you ask for by 20 to 50 ms, which is why I set the parameter to 880. I want to reduce the CPU usage as much as possible; I want to, if possible, pause more than 990 milliseconds (EDIT: and yet less than 1000 milliseconds) between iterations. What would be the best way to go?

I don't get why you are calling t.start() twice (does it reset the clock?), but I would like to propose a solution for the Sleep inaccuracy. Let's look at the content of the while (running) loop and follow this algorithm:
double future, remaining, sleep_precision = 0.05;  // all in seconds
while (running) {
    future = t.get() + 1.0;
    things_that_should_be_run_exactly_every_second();
    // sleep in a loop, in case of a spurious (early) wakeup
    for (;;) {
        remaining = future - t.get();
        if (remaining < sleep_precision) break;
        Sleep(static_cast<DWORD>(remaining * 1000.0));  // Sleep() takes milliseconds, not seconds
    }
    // next, spin-wait for at most sleep_precision seconds
    while (t.get() < future)
        ;
}
The value of sleep_precision should be determined empirically; the OSes I know of can't report it to you.
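As an aside (my addition, not part of the original answer): on Windows you can often tighten Sleep's granularity to roughly 1 ms with timeBeginPeriod from the multimedia timer API, which shrinks the sleep_precision you need. A minimal sketch, assuming you link against winmm.lib:

#include <windows.h>
#include <mmsystem.h>  // timeBeginPeriod / timeEndPeriod; link with winmm.lib

void sleep_with_tight_granularity()
{
    timeBeginPeriod(1);  // request ~1 ms scheduler granularity (affects the whole system)
    Sleep(991);          // now typically wakes within a millisecond or two of the target
    timeEndPeriod(1);    // restore the default granularity when done
}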
There are also some alternative sleeping mechanisms that may suit your needs better - see Is there an alternative for sleep() in C?

If you want to pause more than 990 milliseconds, write a sleep for 991 milliseconds. Your thread is guaranteed to be asleep for at least that long; it won't be less, but it could be multiples of 20-50 ms more (depending on the resolution of your OS's time slicing and on the cost of context switching).
However, this will not give you something running "exactly every second". There is just no way to achieve that on a time-shared operating system. You'd have to program closer to the metal, or rely on an interrupt from a PPS source and just pray your OS lets you run your entire loop iteration in one shot. Or, I suppose, write something to run in kernel mode…?
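If "close to every second, without long-term drift" is good enough, here is a portable sketch of my own (not part of the original answer) that sleeps until an absolute deadline with std::this_thread::sleep_until; running is the same flag as in the question:

#include <chrono>
#include <thread>

void loop()
{
    auto next = std::chrono::steady_clock::now() + std::chrono::seconds(1);
    while (running) {
        // things that should be run (approximately) every second
        std::this_thread::sleep_until(next);  // may wake a little late, but the
        next += std::chrono::seconds(1);      // error does not accumulate
    }
}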

Related

Inconsistent chrono::high_resolution_clock delay

I'm trying to implement a MIDI-like clocked sample player.
There is a timer that increments a pulse counter, and every 480 pulses make a quarter note, so the pulse period is 1041667 ns at 120 beats per minute.
The timer is not sleep-based and runs in a separate thread, but it seems like the delay time is inconsistent: the period between samples played in a test file fluctuates by +-20 ms (on some occasions the period is OK and steady; I can't figure out what this effect depends on).
Audio backend influence is excluded: I've tried OpenAL as well as SDL_mixer.
void Timer_class::sleep_ns(uint64_t ns)
{
    auto start = std::chrono::high_resolution_clock::now();
    bool sleep = true;
    while (sleep)
    {
        auto now = std::chrono::high_resolution_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
        if (elapsed.count() >= ns) {
            TestTime = elapsed.count();
            sleep = false;
        }
    }
}
void Timer_class::Runner(void)
{
    // this runs as a thread
    while (1) {
        sleep_ns(BPMns);
        if (Run) Transport.IncPlaybackMarker();  // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) {
            // the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1);  // the period of this event fluctuates severely
        }
    }
}
void Player_class::PlayFile(int FileNumber)
{
#ifdef AUDIO_SDL_MIXER
    if (Mix_PlayChannel(-1, WaveData[FileNumber], 0) == -1) {
        printf("Mix_PlayChannel: %s\n", Mix_GetError());
    }
#endif  // AUDIO_SDL_MIXER
}
Am I doing something wrong in terms of approach? Is there a better way to implement a timer of this kind?
A deviation higher than 4-5 ms is too much in the case of audio.
I see a large error and a small error. The large error is that your code assumes that the main processing in Runner consistently takes zero time:
if (Run) Transport.IncPlaybackMarker();  // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) {
    // the timer has reached the end, which is 480 pulses
    Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
    Player.PlayFile(1);  // the period of this event fluctuates severely
}
That is, you're "sleeping" for the time you want your loop iteration to take, and then you're doing processing on top of that.
The small error is presuming that you can represent your ideal loop iteration time with an integral number of nanoseconds. This error is so small that it doesn't really matter. However, I amuse myself by showing people how they can get rid of this error too. :-)
First, let's correct the small error by exactly representing the idealized loop iteration time:
using quarterPeriod = std::ratio<1, 2>;  // at 120 BPM a quarter note lasts 1/2 s
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;  // 480 pulses per quarter
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;
I know nothing of music, but I'm guessing the above code is right because if you convert iteration_time{1} to nanoseconds, you get approximately 1041667ns. iteration_time{1} is intended to be the precise amount of time you want each iteration of your loop in Timer_class::Runner to take.
To correct the large error, you need to sleep until a time_point, as opposed to sleeping for a duration. Here's a generic utility to help you do that:
template <class Clock, class Duration>
void delay_until(std::chrono::time_point<Clock, Duration> tp)
{
    while (Clock::now() < tp)
        ;
}
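Note that delay_until spin-waits, trading CPU for precision. A hybrid variant (my sketch, not part of the original answer) sleeps through the bulk of the wait and spins only the final stretch; the 1 ms margin is an assumption you would tune:

#include <chrono>
#include <thread>

template <class Clock, class Duration>
void delay_until_hybrid(std::chrono::time_point<Clock, Duration> tp)
{
    constexpr std::chrono::milliseconds margin{1};   // safety margin; tune empirically
    if (tp - Clock::now() > margin)
        std::this_thread::sleep_until(tp - margin);  // coarse, cheap wait
    while (Clock::now() < tp)
        ;                                            // precise, busy wait for the last stretch
}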
Now if you code Timer_class::Runner to use delay_until instead of sleep_ns, I think you'll get better results:
void Timer_class::Runner()
{
    auto next_start = std::chrono::steady_clock::now() + iteration_time{1};
    while (true)
    {
        if (Run) Transport.IncPlaybackMarker();  // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) {
            // the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1);
        }
        delay_until(next_start);
        next_start += iteration_time{1};
    }
}
I ended up using Howard Hinnant's version of the delay and reducing the buffer size in openal-soft; that's what made a huge difference. Fluctuation is now about +-5 ms for 1/16th notes at 120 BPM (125 ms period) and +-1 ms for quarter notes. It leaves a lot to be desired, but I guess it's okay.

Increasing a value every 5 seconds

I'm making a simple meteor-and-rocket game in the console, and I want to increase the spawn rate of the meteors every five seconds. I have already tried the Sleep() function, but that of course doesn't work: it puts the whole application to sleep. So does a while loop.
I will only post the Logic() function where the increase must happen, because the program is around 100 lines and I didn't feel like posting it all here. If you need context, just ask and I will post everything.
void Logic() {
    Sleep(5000);  // TODO: increase meteors every five seconds
    nMeteors++;
}
I'm pretty stuck on this so it would be nice if someone could help me :)
There are mainly two ways to approach this problem. One would be to spawn a new thread and put the loop there. You can use the C++11 standard headers <thread> and <chrono>. Putting the thread to sleep for 5 seconds is as simple as std::this_thread::sleep_for(std::chrono::seconds{5});
But dedicating an entire thread to such a trivial task is unnecessary. In a video game you usually have some sort of timekeeping variable.
What you'd want is a variable like std::chrono::time_point<std::chrono::steady_clock> previous_time = std::chrono::steady_clock::now(); (or simply auto previous_time = std::chrono::steady_clock::now()) outside of your loop, giving you a reference point in time while your loop runs. Inside the loop you create another variable like auto current_time = std::chrono::steady_clock::now(); for the current time. Now it's a simple matter of calculating the difference between current_time and previous_time and checking whether 5 seconds have passed. If they have, increase your variable and don't forget to set previous_time = current_time; to update the reference point; if not, just skip and keep doing whatever else you need to do in your main game loop.
To check if 5 seconds have passed, you do if (std::chrono::duration_cast<std::chrono::seconds>(current_time - previous_time).count() >= 5) { ... }.
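Put together, a minimal sketch of that pattern (game_running and nMeteors stand in for your own state):

#include <chrono>

void game_loop()
{
    auto previous_time = std::chrono::steady_clock::now();
    int nMeteors = 0;
    while (game_running) {
        auto current_time = std::chrono::steady_clock::now();
        if (std::chrono::duration_cast<std::chrono::seconds>(current_time - previous_time).count() >= 5) {
            ++nMeteors;                    // increase the spawn rate
            previous_time = current_time;  // reset the reference point
        }
        // ... the rest of the game logic ...
    }
}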
You can find a lot more info here for the chrono library and here for the thread library. Plus, Google is your friend.
The typical way to write a game is to have an event loop.
The event loop polls various inputs for status, updates the state of the game, and then repeats. Some clever event loops even sleep for short periods and get notifications when inputs change or state has to be updated.
In your meteor-spawning code, keep a timestamp of when the last increase in spawn rate occurred. When you check whether a meteor should spawn and 5 seconds have passed since that timestamp, update the spawn rate and record a new timestamp (possibly retroactively, and possibly in a loop, to handle more than 10 seconds passing between checks for whatever reason; see the sketch after this answer).
An alternative solution involving an extra thread of execution is possible, but not a good idea.
As an aside, most games want to support pausing; so you want to distinguish between wall-clock time and nominal game-play time.
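A sketch of that timestamp bookkeeping (names are illustrative; the inner loop applies one step per elapsed 5-second interval, so long stalls catch up retroactively):

#include <chrono>

// call this once per event-loop iteration
void update_spawn_rate(std::chrono::steady_clock::time_point& last_increase, int& nMeteors)
{
    auto now = std::chrono::steady_clock::now();
    while (now - last_increase >= std::chrono::seconds(5)) {
        ++nMeteors;                                // one step per elapsed interval
        last_increase += std::chrono::seconds(5);  // advance retroactively, not to 'now'
    }
}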
One way you can do this is by making your value a function of elapsed time. For example:
// somewhere to store the beginning of the time period
inline std::time_t& get_start_timer()
{
    static std::time_t t{};
    return t;
}

// start a time period (resets meteors to zero)
inline void start_timer()
{
    get_start_timer() = std::time(nullptr);  // current time in seconds
}

// retrieve the current number of meteors as a function of time
inline int nMeteors()
{
    return int(std::difftime(std::time(nullptr), get_start_timer())) / 5;
}

int main()
{
    start_timer();
    for (;;)
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "meteors: " << nMeteors() << '\n';
    }
}
Here is a similar version using C++11 <chrono> library:
// somewhere to store the beginning of the time period
inline auto& get_time_point()
{
    static std::chrono::steady_clock::time_point tp{};
    return tp;
}

// start a time period (resets meteors to zero)
inline void start_timing()
{
    get_time_point() = std::chrono::steady_clock::now();  // current time point
}

// retrieve the current number of meteors as a function of time
inline auto nMeteors()
{
    return std::chrono::duration_cast<std::chrono::seconds>(std::chrono::steady_clock::now() - get_time_point()).count() / 5;
}

int main()
{
    start_timing();
    for (;;)
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "meteors: " << nMeteors() << '\n';
    }
}
I found this easier than using chrono. Open to feedback:
include "time.h"
main(){
int d;
time_t s,e;
time(&s);
time(&e);
d=e-s;
while(d<5){
cout<<d;
time(&e);
d=e-s;
}
}

pthread sleep function, cpu consumption

First of all, sorry for my far-from-perfect English.
I've recently written myself a daemon for Linux (an OpenWRT router, to be exact) in C++, and I've run into a problem.
There are a few threads: one for each open TCP connection, a main thread waiting for new TCP connections, and, as I call it, a commander thread that checks status.
Everything works fine, but my CPU is always at 100%. I know that it's because of the commander code:
void *CommanderThread(void* arg)
{
    Commander* commander = (Commander*)arg;
    pthread_detach(pthread_self());
    clock_t endwait;
    while (true)
    {
        uint8_t temp;
        endwait = clock() + (int)(1 * CLOCKS_PER_SEC);
        for (int i = 0; i < commander->GetCount(); i++)
        {
            ptrRelayBoard rb = commander->GetBoard(i);
            if (rb != NULL)
                rb->Get(0x01, &temp);
        }
        while (clock() < endwait);  // busy-wait: this is what pins the CPU
    }
    return NULL;
}
As you can see, the program does its work every 1 s. Timing is not critical here. I know the CPU is constantly checking whether the time has passed. I've tried to do something like this:
while (clock() < endwait)
    usleep(200);
But the usleep function (and sleep as well) seems to freeze the clock's increment (clock() always returns a constant value after the usleep).
Is there any solution: ready-made functions (something like pthread_sleep(20ms)) or a workaround for my problem? Maybe I should access the main clock somehow?
Here it's not so critical: I can pretty much check how long the status checking took (latch clock() before, compare with after) and compute the value to pass as an argument to usleep. But in another thread, I would like to use this form.
Does usleep put the whole process to sleep?
I'm currently debugging it on Cygwin, but I don't think the problem lies there.
Thanks for any answers and suggestions; they are much appreciated.
J.L.
If it doesn't need to be exactly 1 s, just usleep for a second. usleep and sleep put the current thread into an efficient wait state lasting at least the requested time (after which the thread becomes eligible for scheduling again).
If you aren't trying to hit near-exact timing, there's no need to check clock().
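A minimal sketch of the commander loop along those lines (same names as in the question; sleep comes from <unistd.h>):

#include <unistd.h>  // sleep()

void *CommanderThread(void* arg)
{
    Commander* commander = (Commander*)arg;
    pthread_detach(pthread_self());
    while (true)
    {
        uint8_t temp;
        for (int i = 0; i < commander->GetCount(); i++)
        {
            ptrRelayBoard rb = commander->GetBoard(i);
            if (rb != NULL)
                rb->Get(0x01, &temp);
        }
        sleep(1);  // the thread yields the CPU for roughly a second
    }
    return NULL;
}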
I have resolved it another way. (As it turns out, clock() measures CPU time consumed by the process rather than wall-clock time, which is why it doesn't advance while the process sleeps.)
#include <sys/time.h>

#define CLOCK_US_IN_SECOND 1000000

static long myclock()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (tv.tv_sec * CLOCK_US_IN_SECOND) + tv.tv_usec;
}

void *MainThread(void* arg)
{
    Commander* commander = (Commander*)arg;
    pthread_detach(pthread_self());
    long endwait;
    while (true)
    {
        uint8_t temp;
        endwait = myclock() + (int)(1 * CLOCK_US_IN_SECOND);
        for (int i = 0; i < commander->GetCount(); i++)
        {
            ptrRelayBoard rb = commander->GetBoard(i);
            if (rb != NULL)
                rb->Get(0x01, &temp);
        }
        while (myclock() < endwait)
            usleep((int)(0.05 * CLOCK_US_IN_SECOND));  // note the parentheses: (int)0.05 alone would be 0
    }
    return NULL;
}
Bear in mind that this code is vulnerable to the system time changing during execution. I have no idea how to avoid that, but in my case it's not really important.
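One way to avoid the time-change problem (my suggestion, not part of the original answer) is to base myclock() on the monotonic clock, which is unaffected by wall-clock adjustments; this reuses the CLOCK_US_IN_SECOND macro from the code above:

#include <time.h>

// same microsecond scale as myclock(), but CLOCK_MONOTONIC never jumps
static long my_monotonic_clock()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (ts.tv_sec * CLOCK_US_IN_SECOND) + (ts.tv_nsec / 1000);
}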

Time based loop and Frame based loop

I'm trying to understand the concept of keeping a game loop at constant speed. My head hurts. I read the deWiTTERS page, but I can't see the why/how; when I get it, it slips away.
while (true)
{
    player->update();
    player->draw();
}
This will run as fast as the processor allows; I get that.
What I don't get is the logic for running at the same speed on all computers. If I am trying to run at 60 fps, it means the objects move by one frame every 16 ms, yeah? What I don't get is how update() or draw() may be too slow.
deWiTTERS example (I used 60):
const int FRAMES_PER_SECOND = 60;
const int SKIP_TICKS = 1000 / FRAMES_PER_SECOND;

DWORD next_game_tick = GetTickCount();
// GetTickCount() returns the current number of milliseconds
// that have elapsed since the system was started

int sleep_time = 0;
bool game_is_running = true;

while (game_is_running) {
    update_game();
    display_game();

    next_game_tick += SKIP_TICKS;
    sleep_time = next_game_tick - GetTickCount();
    if (sleep_time >= 0) {
        Sleep(sleep_time);
    }
    else {
        // Shit, we are running behind!
    }
}
I don't understand why he gets the current time before the loop starts. And when he increments by SKIP_TICKS, I understand he moves on to the next 16 ms interval. But I don't understand this part either:
sleep_time = next_game_tick - GetTickCount();
What does Sleep(sleep_time) mean? Does the processor leave the loop and do something else? How does it achieve running at 60 fps?
In cases where the update_game() and display_game() functions complete in less time than a single frame interval at 60 FPS, the loop ensures that the next frame is not processed until that interval is up, by sleeping off (blocking the thread for) the excess frame time. It is trying to ensure that the frame rate is capped at 60 FPS, and no higher.
The processor does not 'leave the loop'; rather, the thread in which your loop is running is blocked (prevented from continuing execution of your code) until the sleep time is up. Then it continues on to the next frame. In a multi-threaded game engine, sleeping the main game loop's thread like this gives the CPU time to execute code in other threads, which may be managing physics, AI, audio mixing and so on, depending on the setup.
Why is GetTickCount() called before the loop starts? We know from the comment in your code that GetTickCount() returns the milliseconds since system boot. So let's say that the system has been running for 30 seconds (30,000 ms) when you start your program, and let's say that we didn't call GetTickCount() before entering the loop, but instead initialized next_game_tick to 0.
We do the update and draw calls (say they take 6 ms), and then:
    next_game_tick += SKIP_TICKS;  // next_game_tick is now 16
    sleep_time = next_game_tick - GetTickCount();
    // GetTickCount() returns 30000!
    // So sleep_time is now 16 - 30000 = -29984 !!!
Since we (sensibly) only sleep when sleep_time is non-negative, the game loop would run as fast as possible (potentially faster than 60 FPS), which is not what you want.

delay loop output in C++

I have a while loop that runs inside a do-while loop. I need the while loop to run exactly every second, no faster, no slower, but I'm not sure how to do that. This is the loop, off in its own function. I have heard of the sleep() function, but I have also heard that it is not very accurate.
int min5()
{
    int second = 00;
    int minute = 0;
    const int ZERO = 00;
    do {
        while (second <= 59) {
            if (minute == 5) break;
            second += 1;
            if (second == 60) minute += 1;
            if (second == 60) second = ZERO;
            if (second < 60) cout << "Current Time> " << minute << " : " << second << " \n";
        }
    } while (minute <= 5);
}
The best accuracy you can achieve is by using operating system (OS) functions. You need to find an API that takes a callback function: a function you write that the OS will call when the timer has expired.
Be aware that the OS may lose timing precision due to other tasks and activities that are running while your program is executing.
If you want a portable solution, you shouldn't expect high-precision timing. Usually, you only get that with a platform-dependent solution.
A portable (albeit not very CPU-efficient, nor particularly elegant) solution might make use of a function similar to this:
#include <ctime>

void wait_until_next_second()
{
    time_t before = time(0);
    while (difftime(time(0), before) < 1)
        ;
}
You'd then use this in your function like this:
int min5()
{
    wait_until_next_second();  // synchronization (optional), so that the first
                               // subsequent call will not take less than 1 sec
    ...
    do
    {
        wait_until_next_second();  // waits approx. one second
        while (...)
        {
            ...
        }
    } while (...)
}
Some further comments on your code:
Your code gets into an endless loop once minute reaches the value 5.
Are you aware that 00 denotes an octal (radix 8) number (due to the leading zero)? It doesn't matter in this case, but be careful with numbers such as 017. This is decimal 15, not 17!
You could incorporate the increment right into the while loop's condition: while (second++ <= 59) ...
I think in this case, it would be better to insert endl into the cout stream, since that will flush it, while inserting "\n" won't flush the stream. It doesn't truly matter here, but your intent seems to be to always see the current time on cout; if you don't flush the stream, you're not actually guaranteed to see the time message immediately.
As someone else posted, your OS may provide some kind of alarm or timer functionality. You should try to use that kind of thing rather than coding your own polling loop. Polling the time means you need to be context-switched in every second, which keeps your code running when the system could be doing other stuff. In this case you interrupt someone else 300 times just to say "are we done yet?".
Also, you should never make assumptions about the duration of a sleep; even on a real-time OS this would be unsafe. You should always ask the real-time clock or tick counter how much time has elapsed on each pass, because otherwise the errors accumulate and you get less and less accurate over time. Even if a real-time system could sleep accurately for exactly 1 second, your code takes some time to run, so that timing error would still accumulate on each pass through the loop.
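To illustrate the difference (a sketch of mine; running and do_work() are stand-ins): sleeping for a fixed duration lets the work time leak into the period, while sleeping until an absolute deadline re-anchors to the clock on every pass:

#include <chrono>
#include <thread>

void run_drifting()
{
    while (running) {
        do_work();
        std::this_thread::sleep_for(std::chrono::seconds(1));  // period = work + 1 s; error accumulates
    }
}

void run_anchored()
{
    auto next = std::chrono::steady_clock::now();
    while (running) {
        do_work();
        next += std::chrono::seconds(1);
        std::this_thread::sleep_until(next);  // deadlines advance by exactly 1 s
    }
}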
In Windows, for example, it is possible to create a waitable timer object.
If that's your operating system, check the documentation; see Waitable Timer Objects.
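A minimal sketch of a periodic waitable timer firing every second (my illustration; running stands in for your loop condition):

#include <windows.h>

void run_every_second()
{
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);       // auto-reset timer
    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -10000000LL;                              // first fire in 1 s (100 ns units; negative = relative)
    SetWaitableTimer(timer, &dueTime, 1000, NULL, NULL, FALSE);  // then every 1000 ms
    while (running) {
        WaitForSingleObject(timer, INFINITE);                    // blocks without burning CPU
        // do the once-per-second work here
    }
    CloseHandle(timer);
}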
From the code you presented, it looks like what you are trying to do can be done much more easily with sleep. It doesn't make sense to guarantee that your loop body executes exactly every second. Instead, make it execute 10 times a second and check whether the time elapsed since you last took action is more than a second. If not, do nothing. If yes, take action (print your message, increment variables, etc.), store the time of the last action, and loop again.
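A sketch of that polling approach with <chrono> (running is a stand-in for your loop condition):

#include <chrono>
#include <thread>

void poll_loop()
{
    auto last_action = std::chrono::steady_clock::now();
    while (running) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));  // ~10 checks per second
        auto now = std::chrono::steady_clock::now();
        if (now - last_action >= std::chrono::seconds(1)) {
            // take action: print the message, increment variables, etc.
            last_action = now;
        }
    }
}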
Sleep(1000);
http://msdn.microsoft.com/en-us/library/ms686298(VS.85).aspx