Windows Sleep inconsistency? - c++

Having a bit of an issue with a game I'm making using OpenGL. The game will sometimes run at half speed and sometimes it will run normally.
I don't think it is the OpenGL side causing the problem, since it runs at literally 14,000 fps on my computer (even when it's running at half speed).
This has led me to believe that it is the "game timer" that's causing the problem. The game timer runs on a separate thread and is programmed to pause at the end of its "loop" with a Sleep(5) call. If I remove the Sleep(5) call, it runs so fast that I can barely see the sprites on the screen (predictable behavior).
I tried throwing a Sleep(16) at the end of the Render() loop (also on its own thread). This should limit the fps to around 62. Remember that the app sometimes runs at its intended speed and sometimes at half speed (I have tried on both of the computers that I own and it persists).
The fps is sometimes 62 (good) and sometimes 31-ish (bad). It never switches between half speed and full speed mid-execution, and the problem persists even after a reboot.
So it's not the rendering that's causing the slowness, it's the Sleep() function.
I guess what I'm saying is that the Sleep() function is inconsistent in how long it actually sleeps. Is this a known thing? Is there a better Sleep() function that I could use?

A waitable timer (CreateWaitableTimer and WaitForSingleObject or friends) is much better for periodic wakeup.
However, in your case you probably should just enable VSYNC.
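For reference, a minimal sketch of a periodic wakeup using a waitable timer (error handling omitted; the 5 ms period, the gameRunning flag and the UpdateGame() call are just placeholders for your own loop, not part of the Win32 API):

#include <windows.h>

bool gameRunning = true;   // your own run flag (illustrative)
void UpdateGame();         // your per-tick game logic (illustrative)

void GameTimerLoop()
{
    // Auto-reset waitable timer: the wait below is satisfied once per period.
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -50000LL;                       // first wakeup after 5 ms (100-ns units, negative = relative)
    SetWaitableTimer(timer, &dueTime, 5, NULL, NULL, FALSE);   // then fire every 5 ms
    while (gameRunning) {
        WaitForSingleObject(timer, INFINITE);          // blocks until the timer fires
        UpdateGame();
    }
    CloseHandle(timer);
}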

See the following discussion of the Sleep function, focusing on the bit about scheduling priorities:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx

Yes, the Sleep function is inconsistent; it is only useful when you need a rough, coarse-grained delay.
If you want consistent timing, use QueryPerformanceFrequency to get the counter frequency and call QueryPerformanceCounter twice, once at the start and once at the end; (end - start) / frequency then gives you an accurate elapsed time. But watch out: if your CPU has multiple cores, the start and end readings might not come from the same core, so use SetThreadAffinityMask to pin your worker thread to a single core.
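A minimal sketch of that measurement (DoWork() is just a placeholder for whatever you are timing):

#include <windows.h>

void DoWork();   // the code being timed (illustrative)

double ElapsedSeconds()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);
    DoWork();
    QueryPerformanceCounter(&end);
    return double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
}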

Had the same problem. I just made my own sleep logic and it worked for me.
#include <chrono>
using namespace std::chrono;
// Busy-wait until must_sleep_duration seconds have elapsed.
high_resolution_clock::time_point sleep_start_time = high_resolution_clock::now();
while (duration_cast<duration<double>>(high_resolution_clock::now() - sleep_start_time).count() < must_sleep_duration) {}
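Wrapped into a reusable helper, the same idea looks like the sketch below; note that it is a busy-wait, so it keeps one core fully occupied for the whole duration (the function name is just illustrative):

#include <chrono>

// Spin until the requested number of seconds has passed.
void spin_sleep(double seconds)
{
    using namespace std::chrono;
    const auto start = high_resolution_clock::now();
    while (duration_cast<duration<double>>(high_resolution_clock::now() - start).count() < seconds) {
        // intentionally empty: busy-wait for precision at the cost of CPU time
    }
}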

Related

How to fix execution speed inconsistencies in C++

This is most noticeable in graphical programs. Let's take as an example a basic OpenGL program (a spinning triangle).
When I run it normally, with no other apps open in the background, it spins slowly, but when I run a game in the background, it starts spinning like mad. It seems as if the computer doesn't allocate enough memory for the program to run at maximum speed, and, paradoxically, doing resource-consuming stuff accelerates it because it gets more memory.
The only way I found to partially fix this is to put a higher value in the Sleep function, however this doesn't fix it completely, nor is it a consistent solution, as other problems may arise from it. Is there any good way to fix this and make the program run consistently?
This mostly happens because you are not capping your FPS, so there is nothing preventing your render loop from running as often as possible, and your logic (which controls the rotation) executes in the same loop.
Most GPUs have power management, so they keep their frequencies low when there is no demand; opening an expensive game makes your GPU bump up its clocks, which renders a lot faster and therefore runs your rendering loop more times per second.
To prevent this (and to separate logic from rendering time in general) you must control the frame rate and use the elapsed time as an input for your rotation, something like:
#include <chrono>
#include <thread>

constexpr auto TIME_PER_FRAME = std::chrono::milliseconds(16);   // target ~60 fps

auto last = std::chrono::steady_clock::now();
while (!exit) {                                                   // exit: your quit flag
    render();
    auto delta = std::chrono::steady_clock::now() - last;        // how long this frame took
    if (delta < TIME_PER_FRAME)
        std::this_thread::sleep_for(TIME_PER_FRAME - delta);     // cap the frame rate
    last = std::chrono::steady_clock::now();
    updateLogic(delta);                                           // advance the rotation by elapsed time
}
For starters, you need to understand what's going on with your program. It has nothing to do with memory, and I don't see a reason to think about memory.
Opening other programs could make your CPU clock up because of the load (doubtful, but clearly more likely than memory allocation).
The other programs could be messing with some setting.
If you're using sleep(), signals can interrupt the call (no one ever looks at the return value of the function; there's a reason for it to be uint sleep(uint) and not void sleep(uint)).
If you can, don't use sleep. And if you're going to, check the return value: sleep doesn't guarantee that the whole time has passed (IMHO bad design, but I'm not a POSIX fan).
The usual behaviour, I think, would be to have your function called periodically as a callback. If you're going to do some sort of delay or sleep, you should check that the time has actually passed.
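As a small illustration of the "check the return value" point: POSIX sleep() returns the number of unslept seconds when a signal interrupts it, so you can loop on it until the full interval has really elapsed (just a sketch):

#include <unistd.h>

// Keep sleeping until the full number of seconds has actually passed,
// even if signals keep interrupting the call.
void sleep_fully(unsigned int seconds)
{
    unsigned int remaining = seconds;
    while (remaining > 0)
        remaining = sleep(remaining);   // returns the seconds left unslept
}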
Given that you want your logic tied to the render, and that you use some function like sleep that can be interrupted (based on the other answer):
while (!exit) {
    auto startOfFrame = now();
    render();
    auto toDelay = startOfFrame + TIME_PER_FRAME - now();
    while (toDelay > 0) {            // keep delaying until the full frame time has passed,
        delay(toDelay);              // even if delay() returns early (e.g. interrupted by a signal)
        toDelay = startOfFrame + TIME_PER_FRAME - now();
    }
    updateLogic();
}

C++ Run only for a certain time

I'm writing a little game in C++ at the moment.
My game's while loop is always active; in this loop
I have a condition for when the player is shooting.
Now I face the following problem:
after every shot fired there is a delay, this delay changes over time, and during the delay the player should still be able to move.
shoot
move
wait 700 ms
shoot again
At the moment I'm using Sleep(700). The problem is that I can't move during the 700 ms. I need something like a timer, so that movement keeps running during the 700 ms instead of the whole game waiting for 700 ms.
This depends on how your hypothetical 'sleep' is implemented. There are a few things you should know, as this can be solved in a few ways.
You don't want to put your thread to sleep, because then everything halts, which is not what you want.
Plus, you may sleep for longer than you asked. For example, if you sleep for 700 ms you may get more than that, which means that if you depend on accurate timing you may get burned by this.
1) The first way would be to record the raw time inside of the player. This is not the best approach, but it would work for a simple toy program: record the result of std::chrono::high_resolution_clock::now() (see #include <chrono>) inside the class at the time you fire. To check whether you can fire again, just compare the value you stored to ...::now() and see if 700 ms have elapsed. You will have to read the documentation to work with it in milliseconds. (A small sketch of this approach is included at the end of this answer.)
2) A better way would be to give your game a pulse via something called 'game ticks', which is the pulse to which your world moves forward. Then you can store the gametick that you fired on and do something similar to the above paragraph (except now you are just checking if currentGametick > lastFiredGametick + gametickUntilFiring).
For the gametick idea, you would make sure you do gametick++ every X milliseconds, and then run your world. A common value is somewhere between 10ms and 50ms.
Your game loop would then look like
while (!exit) {
readInput();
if (ticker.shouldTick()) {
ticker.tick();
world.tick(ticker.gametick);
}
render();
}
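Here ticker is assumed to be something like the rough sketch below; the 50 ms tick length, the class name and its members are not prescribed anywhere, just illustrative:

#include <chrono>

class Ticker {
public:
    int gametick = 0;

    // True once at least one full tick interval has passed since the last tick().
    bool shouldTick() const {
        return std::chrono::steady_clock::now() - lastTick_ >= tickLength_;
    }

    void tick() {
        lastTick_ += tickLength_;   // advance by a whole interval to avoid drift
        ++gametick;
    }

private:
    std::chrono::milliseconds tickLength_{50};
    std::chrono::steady_clock::time_point lastTick_ = std::chrono::steady_clock::now();
};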
This setup has the following advantages:
You only update the world every gametick
You keep rendering between gameticks, so you can have smooth animations since you will be rendering at a very high framerate
If you want to halt, just spin in a while loop until the amount of time has elapsed
Now this has avoided a significant amount of discussion, of which you should definitely read this if you are thinking of going the gametick route.
With whatever route you take, you probably need to read this.
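And here is the small sketch promised for approach 1), a chrono-based cooldown check. The Player class shape, the member names and the 700 ms value are just illustrative:

#include <chrono>

class Player {
public:
    bool canFire() const {
        // Has the 700 ms cooldown elapsed since the last shot?
        return std::chrono::high_resolution_clock::now() - lastShot_ >= cooldown_;
    }

    void fire() {
        if (!canFire())
            return;
        lastShot_ = std::chrono::high_resolution_clock::now();
        // ... spawn the projectile here ...
    }

private:
    std::chrono::milliseconds cooldown_{700};
    std::chrono::high_resolution_clock::time_point lastShot_ =
        std::chrono::high_resolution_clock::now() - std::chrono::hours(1);  // allow firing immediately
};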

why is empty while loop using more cpu?

I have two programs that are supposed to do the same thing, with slight differences. Both have infinite game loops that run forever unless the user stops the game somehow. One program's game loop is implemented and renders something; the other's game loop is empty and does nothing (it just listens for the user to stop).
When I opened Task Manager to see resource usage, I discovered that the program with the empty loop uses 14% CPU, while the program that actually draws something to the screen uses about 1-2%.
My guess on the subject is as follows:
I compared the code of both programs and looked for differences, and there were not many. Then it occurred to me that the loop that renders to the screen might be bound by other factors (like sending pixels to the screen, or the refresh rate maybe?), so after the CPU does its part, the thread is put to sleep until the other stuff is completed. But since the other program does pretty much nothing, and doing nothing is really easy, the thread never gets put to sleep and just keeps going. I lack the knowledge to confirm whether this is the reason, so I am asking you: is this the reason this is happening? (Bonus question) And if so, why does the CPU usage stop at about 14% instead of going all the way up to 100%?
Thank you.
Hard to say for certain without seeing the code, but drawing to the screen will inevitably involve some waiting on I/O; how much depends on many factors, including sync and buffering options.
As for the 14% CPU usage: I'm guessing that your machine has 8 processing units (either cores or cores * hyperthreading) and your code is single-threaded, i.e. it is maxing out one processing unit (100% / 8 = 12.5%, close to the 14% you see).

Threads are slow when audio is off

I have 2 projects. One is built with C++ Builder, without MFC. The other one is VC++ MFC 11.
When I create a thread and run a loop in it -- let's say this loop adds one to a progress bar's position, from 1 to 100, using Sleep(10) -- it works, of course, for both C++ Builder and C++ MFC.
Now, Sleep(10) should wait 10 milliseconds. OK. But the problem only goes away if I have a media player, Winamp or anything else that produces sound open. If I close all media players, Winamp and other sound programs, my threads get slower than 10 milliseconds.
Each iteration takes something like 50-100 ms. If I play any music, it works normally, as I expected.
I have no idea why this is happening. I first thought that I had made a mistake inside the MFC app, but why does the C++ Builder one also slow down?
And yes, I am positively sure it is sound related, because I even reformatted my Windows and disabled everything; in the end I discovered the sound connection.
Does my code need something?
Update:
Now, I followed the code and found that I used Sleep(1) in places like this to wait 1 millisecond. The reason is that I move an object from left to right. If I remove this sleep, the movement doesn't show up because it is very fast. So I have to use Sleep(1). With Sleep(1), if audio is on then it works; if audio is off then it is very slow.
for (int i = 0; i <= 500; i++) {
    theDialog->staticText->SetWindowsPosition(NULL, i, 20, 0, 0);
    Sleep(1);
}
So, suggestions regarding this are really appreciated. What should I do?
I know this is the incorrect way. I should use something else that is proper and valid. But what exactly? Which function or class helps me to move static text from one position to another smoothly?
Also, changing the thread priority has not helped.
Update 2:
Update 1 is an another question :)
Sleep(10) will (as we know) wait for approximately 10 milliseconds. If there is a higher-priority thread which needs to run at that moment, the thread wakeup may be delayed. Multimedia threads probably run at Real-Time or High priority; as such, when you play sound, your thread's wakeup gets delayed.
Refer to Jeffrey Richter's comment in Programming Applications for Microsoft Windows (4th Ed.), the "Sleeping" section in Chapter 7:
The system makes the thread not schedulable for approximately the number of milliseconds specified. That's right—if you tell the system you want to sleep for 100 milliseconds, you will sleep approximately that long but possibly several seconds or minutes more. Remember that Windows is not a real-time operating system. Your thread will probably wake up at the right time, but whether it does depends on what else is going on in the system.
Also, as per the MSDN documentation for the Multimedia Class Scheduler Service (Windows):
MMCSS ensures that time-sensitive processing receives prioritized access to CPU resources.
As per that documentation, you can also control the percentage of CPU resources that is guaranteed to low-priority tasks through a registry key.
Sleep(10) waits for at least 10 milliseconds. You have to write code to check how long you actually waited and, if it's more than 10 milliseconds, handle that sanely in your code. Windows is not a real-time operating system.
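A small sketch of that kind of check, measuring how long Sleep() really slept and feeding the measured time into the update (AdvanceAnimation() is just a placeholder for your own step):

#include <windows.h>
#include <chrono>

void AdvanceAnimation(int elapsedMs);   // your own update step (illustrative)

void TickOnce()
{
    using namespace std::chrono;
    auto before = steady_clock::now();
    Sleep(10);                                                      // asks for 10 ms, may get more
    auto slept = duration_cast<milliseconds>(steady_clock::now() - before);
    AdvanceAnimation(static_cast<int>(slept.count()));              // advance by what actually elapsed
}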
The timer resolution used by Sleep() is set system-wide with timeBeginPeriod() and timeEndPeriod(). For example, calling timeBeginPeriod(1) sets the resolution to 1 ms. It may be that the audio programs are setting the resolution to 1 ms and restoring it to something greater than 10 ms when they are done. I had a problem with a program that used Sleep(1) and only worked fine when the XE2 IDE was running; otherwise it would sleep for about 12 ms. I solved the problem by calling timeBeginPeriod(1) at the beginning of my program.
See: http://msdn.microsoft.com/en-us/library/windows/desktop/dd757624%28v=vs.85%29.aspx
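A sketch of that fix applied to the movement loop from the question (link against winmm.lib; MoveObjectTo() stands in for whatever actually updates the position):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

void MoveObjectTo(int x, int y);        // your own position update (illustrative)

void MoveAcrossScreen()
{
    timeBeginPeriod(1);                 // request 1 ms system timer resolution
    for (int i = 0; i <= 500; i++) {
        MoveObjectTo(i, 20);
        Sleep(1);                       // now sleeps close to 1 ms instead of 10-15 ms
    }
    timeEndPeriod(1);                   // always pair with the timeBeginPeriod call
}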

Concurrency question about program running in OS

Here is what I know about concurrency in an OS.
In order to run multiple tasks in an OS, the CPU allocates a time slice to each task. While it is doing task A, the other tasks will "sleep", and so on.
Here is my question:
I have a timer program that counts keyboard/mouse inactivity. If the inactivity continues for 15 minutes, a screen saver program will pop up.
If the concurrency theory is as I stated above, won't the timer be inaccurate? Because each program running in the OS will spend some time "sleeping", the timer program also has a chance of being asleep, but in the real world time does not stop.
You would use services from the OS to provide a timer; you would not try to implement it yourself. If code had to keep running simply to count time, we would still be in the dark ages as far as computing is concerned.
In most operating systems, your task will not only be put to sleep when its time slice has been used up but also while it is waiting for I/O (which is much more common for most programs).
Like AnthonyWJones said, use the operating system's concept of the current time.
The OS kernel's time slices are much too short to introduce any noticeable inaccuracy for a screen saver.
I think your waiting process can be very simple:
1. activityTime = time of the last keypress or mouse movement [from the OS]
2. now = current time [from the OS]
3. If now >= 15 mins after activityTime, start the screensaver
4. Sleep for a few seconds and return to step 1
Because steps 1 and 2 use the OS and not some kind of running counter, you don't care if you get interrupted anytime during this activity.
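A minimal sketch of that loop; GetIdleSeconds() and StartScreensaver() are hypothetical helpers (on Windows the idle time could be derived from GetLastInputInfo()):

#include <chrono>
#include <thread>

int GetIdleSeconds();       // seconds since the last keypress/mouse move (illustrative)
void StartScreensaver();    // launch the screen saver (illustrative)

void WatchForIdle()
{
    using namespace std::chrono_literals;
    for (;;) {
        // Steps 1-3: ask the OS how long the user has been idle and compare.
        if (GetIdleSeconds() >= 15 * 60)
            StartScreensaver();
        // Step 4: sleep a few seconds; being descheduled here costs nothing,
        // because the next iteration re-reads the real clock from the OS.
        std::this_thread::sleep_for(5s);
    }
}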
This could be language-dependent. In Java, it's not a problem. I suspect that all languages will "do the right thing" here. That's with the caveat that such timers are not extremely accurate anyway, and that usually you can only expect that your timer will sleep at least as long as you specify, but might sleep longer. That is, it might not be the active thread when the time runs out, and would therefore resume processing a little later.
See for example http://www.opengroup.org/onlinepubs/000095399/functions/sleep.html
The suspension time may be longer than requested due to the scheduling of other activity by the system.
The time you specify in sleep() is real (wall-clock) time, not the CPU time your process uses. (The CPU time is approximately 0 while your program sleeps.)