I have a task that must run every "round" minute (at xx:xx:00), and I use something like this:
const int statisticsInterval = 60;
time_t t = 0;
while (true)
{
    // Avoid firing twice within the same second that is a multiple of 60.
    if (t == time(NULL))
        boost::this_thread::sleep(boost::posix_time::seconds(2)); // 2, not 1, to make sure a full second passes
    t = time(NULL);
    boost::this_thread::sleep(boost::posix_time::seconds(statisticsInterval - (t % statisticsInterval)));
    // DO WORK
}
As you can see, I sleep for (60 seconds - the number of seconds elapsed in the current minute). But one programmer told me that this is not precise and that I should change it to a while loop with sleep(1) inside. I consider it highly doubtful that he is right, but I wanted to check whether anybody knows if precision suffers when the sleep interval is long.
I presume that sleep is implemented so that at a certain time in the future a trigger fires and the thread is put into the "ready to execute" group, so I see no reason for a difference in precision. BTW, the OS is Ubuntu, and I don't care about errors of less than 2-3 seconds. For example, if I ask to sleep for 52 seconds, a 53.8-second sleep is totally acceptable.
P.S. I know that sleep defines only the minimal time, and that theoretically my thread might not get activated until the year 2047, but I'm asking about realistic scenarios.
When you call sleep(N), you tell the OS to make the thread runnable again at current time + N.
The reason it isn't always accurate is that yours is not the only thread in the system.
Another thread may have asked to be woken at that time before you, and some important OS work may need to run at exactly that moment.
In any case, there shouldn't be any precision issues, because the wakeup mechanism does not depend on N.
The only reason it won't be "precise" is a crappy OS that can't keep time correctly, and then again, the sleep(1) loop won't fix that either.
In some threading APIs, it's possible to be woken before the sleep completes (e.g., due to a signal arriving during the sleep). The correct way to handle this is to compute an absolute wake-up time, then loop, sleeping for the remaining duration. I would imagine sleeping in one-second intervals to be a hack that approximates this, poorly.
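A minimal sketch of that absolute-deadline loop using std::chrono (the helper name is ours, not part of any particular API):

#include <chrono>
#include <thread>

// Sleep until an absolute deadline, re-sleeping for the remainder if the
// underlying wait returns early (e.g. interrupted by a signal).
void sleep_until_deadline(std::chrono::steady_clock::time_point deadline)
{
    for (;;) {
        auto remaining = deadline - std::chrono::steady_clock::now();
        if (remaining <= std::chrono::steady_clock::duration::zero())
            return;                              // deadline reached or passed
        std::this_thread::sleep_for(remaining);  // may wake early; loop re-checks
    }
}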
However, the boost threading API's this_thread::sleep() is not documented to have these early wakeups, and so this technique is not necessary (the boost thread API does the loop for you).
Generally speaking, there are very few cases where using smaller sleep intervals improves wakeup latency significantly; the OS handles all wakeups more or less the same way. At best, you might keep the cache warm and avoid pageouts, but this would only affect the small portion of memory directly involved in the sleep loop.
Furthermore, most OSes deal with time using integer counters internally; this means that large intervals do not induce rounding errors (as you might find with floating point values). However, if you are using floating point for your own computation, this may be an issue. If you are currently using floating point intervals (say, a double of seconds since 1970), you may wish to consider integer units (say, a long long of milliseconds since 1970).
sleep is not very precise in many cases; how precise depends on the OS. In Windows 7, the timer resolution is about 15.6 ms, I think. Also, you can usually tell the scheduler how to handle sleep slack...
Here is a good read:
Linux: http://linux.die.net/man/3/nanosleep
Windows: http://msdn.microsoft.com/en-us/library/ms686298(v=vs.85).aspx
PS: if you want higher precision on long waits, sleep for some period and compute the difference using a real-time clock. That is, store the current time when you start sleeping, then at each interval check how far you are from the target wake time.
The Boost.Thread implementation of sleep for POSIX systems can use different approaches to sleeping:
Timed wait on a mutex, when the thread was created with Boost.Thread and has its specific thread information.
pthread_delay_np, if available and the thread was not created with Boost.Thread.
nanosleep, if pthread_delay_np is not available.
Creating a local mutex and doing a timed wait on it (the worst-case scenario, if nothing else is available).
Cases 2, 3, and 4 are implemented in a loop of up to 5 iterations (as of Boost 1.44). So if the sleeping thread is interrupted (e.g., by a signal) more than 5 times, there can be a potential problem; but that is not likely to happen.
In all cases, precision will be much finer than a second, so doing multiple sleeps will not be more precise than doing one long sleep. Your only real concern is the program being swapped out entirely because of a long sleep; for example, if the machine is busy enough, the kernel may page the whole program out to disk. To avoid being swapped out, you have to spin (or do smaller sleeps and wake up occasionally). Usually, if performance matters a lot, programs do spin on a CPU and never call sleep, because any blocking call is to be avoided; but that applies when we are talking about nano/microseconds.
In general, Sleep is not the correct method for timing anything. It is better to use a precision timer with a callback function. On Windows, one may use the "multimedia" timers, which have a resolution of 1 ms on most hardware (see here). When the timer expires, the OS calls the callback function close to real time (see here).
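As a hedged sketch of that callback style (timeSetEvent is the classic multimedia-timer API; it is nowadays deprecated in favor of timer queues, but it matches the description above):

#include <windows.h>
#include <mmsystem.h>   // timeSetEvent, timeBeginPeriod; link with winmm.lib

// Called by the OS on a worker thread, close to real time.
void CALLBACK OnTick(UINT timerId, UINT msg, DWORD_PTR user, DWORD_PTR, DWORD_PTR)
{
    // DO WORK
}

int main()
{
    timeBeginPeriod(1);                                            // request 1 ms resolution
    MMRESULT id = timeSetEvent(1000, 1, OnTick, 0, TIME_PERIODIC); // fire every 1000 ms
    Sleep(5500);                                                   // let it tick a few times
    timeKillEvent(id);
    timeEndPeriod(1);
}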
Sleep works in terms of scheduler time quanta. (Edit: meanwhile, the majority of operating systems support "tickless" schedulers, i.e. there are no longer fixed quanta; however, the principle remains true, what with timer coalescing and the like.)
Unless you receive a signal, there is no way to wake up before that quantum has been used up. Also, sleep is not designed to be precise or accurate; the time given is more a guideline than a rule.
While you may think of the sleep time in terms of "will continue after time X", that is not at all what's going on. Technically, sleep works in terms of "mark the thread not-ready for approximately time X, then mark it ready, invoke the scheduler, and then we'll see what happens". Note the subtle difference between being "ready" and actually running: a thread can, in principle, be ready for a very long time and never run.
Therefore, 60x sleep(1) can never be more accurate than sleep(60). It makes the thread not-ready and ready again 60 times, and it invokes the scheduler 60 times. Since the scheduler cannot run in zero time (nor can a thread be made ready, or a context switch happen, in zero time), sleeping many times for short durations necessarily takes longer in practice than sleeping once for the cumulative time.
Since you state that your OS is Ubuntu, you could just as well use a timerfd. Set the expiry time to 1 minute and read() on it. If you get EINTR, just read() again; otherwise, you know that a minute is up. Using a timer is the correct thing to do if you want precise timing (on a physical computer it cannot and will never be 100.00% perfect, but it will be as good as you can get, and it will avoid other systematic errors, especially with recurring events).
The POSIX timer_create function will work as well; it's more portable, and it may have half a microsecond or so less overhead (maybe! maybe not!), but it is not nearly as comfortable and flexible as a timerfd.
You cannot get more accurate and reliable than what a timer will provide. On my not particularly impressive Ubuntu machine, timerfds work accurately to a microsecond, no problem. As a plus, it's elegant too: if you ever need to do something else while waiting, such as listen on a socket, you can plug the timerfd into the same epoll as the socket descriptor. You can also share it between several processes and wake them simultaneously, and much more besides.
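A minimal sketch of the timerfd approach applied to the original every-round-minute task (error handling abbreviated; the read() loop that retries on EINTR is the part described above):

#include <sys/timerfd.h>
#include <unistd.h>
#include <time.h>
#include <cerrno>
#include <cstdint>
#include <cstdio>

int main()
{
    int fd = timerfd_create(CLOCK_REALTIME, 0);
    if (fd == -1) { perror("timerfd_create"); return 1; }

    // First expiry at the next xx:xx:00 (absolute time), then every 60 s.
    timespec now;
    clock_gettime(CLOCK_REALTIME, &now);
    itimerspec spec = {};
    spec.it_value.tv_sec = now.tv_sec - (now.tv_sec % 60) + 60;
    spec.it_interval.tv_sec = 60;
    if (timerfd_settime(fd, TFD_TIMER_ABSTIME, &spec, nullptr) == -1) {
        perror("timerfd_settime"); return 1;
    }

    for (;;) {
        uint64_t expirations;
        ssize_t n = read(fd, &expirations, sizeof expirations);
        if (n == -1 && errno == EINTR) continue;  // interrupted: just read() again
        if (n != static_cast<ssize_t>(sizeof expirations)) break;
        // DO WORK: a minute boundary was reached (expirations is normally 1)
    }
    close(fd);
}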
If the goal is to sleep until a given system time (xx:xx:00), consider using the overload of boost::this_thread::sleep that takes a time, as in boost::posix_time::ptime, rather than a duration.
For example:

#include <iostream>
#include <boost/date_time.hpp>
#include <boost/thread.hpp>

int main()
{
    using namespace boost::posix_time;
    ptime time = boost::get_system_time();
    std::cout << "time is " << time << '\n';

    // Round up to the next whole minute.
    time_duration tod = time.time_of_day();
    tod = hours(tod.hours()) + minutes(tod.minutes() + 1);
    time = ptime(time.date(), tod);

    std::cout << "sleeping to " << time << "\n";
    boost::this_thread::sleep(time);  // absolute-time overload
    std::cout << "now the time is " << boost::get_system_time() << '\n';
}
In C++11 these two overloads were given different names: std::this_thread::sleep_for() and std::this_thread::sleep_until().
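For illustration, the standard forms look like this (sleep_for takes a duration; sleep_until takes an absolute time point):

#include <chrono>
#include <thread>

int main()
{
    using namespace std::chrono;
    std::this_thread::sleep_for(seconds(2));                         // relative duration
    std::this_thread::sleep_until(system_clock::now() + minutes(1)); // absolute time point
}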
The answer is yes. However, it has nothing to do with C++; it has everything to do with the operating system.
Because of the greater focus on low power use in current portable systems, operating systems have been getting smarter about timers.
Both Windows and Linux use timer slack to avoid waking up too often. This slack is calculated automatically from the timeout duration, and it can be overridden in various ways if a really accurate timer is absolutely required.
What this does for the operating system is allow it to get into really deep sleep states. If timers are going off all the time, the CPU and RAM never get a chance to power down. But if timers are collected together into a batch, the CPU can power up, run all of the timer operations, then power down again.
So if there are 10 programs all sleeping for 60 seconds, but offset from one another by half a second or so, the most efficient use of the CPU is to wake up once, run all 10 timers, and then go back to sleep, instead of waking up 10 times.
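On Linux, for example, a process can tighten its own slack with prctl. A minimal sketch (the 50 µs value is purely illustrative; PR_SET_TIMERSLACK takes nanoseconds):

#include <sys/prctl.h>
#include <cstdio>

int main()
{
    // Tighten this process's timer slack (value in nanoseconds), so its
    // sleeps are coalesced less aggressively.
    if (prctl(PR_SET_TIMERSLACK, 50000UL) == -1)
        perror("prctl");
    printf("current timer slack: %d ns\n", prctl(PR_GET_TIMERSLACK));
}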
Related
I used Sleep(500) in my code and used GetTickCount() to test the timing. I found that it costs about 515 ms, more than 500. Does somebody know why that is?
Because the Win32 API's Sleep isn't a high-precision sleep; it has a coarse granularity.
The best way to get a precise sleep is to sleep a bit less than the target (~50 ms less) and busy-wait for the rest. To find the exact amount of time you need to busy-wait, get the resolution of the system clock using timeGetDevCaps and multiply by 1.5 or 2 to be safe.
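A sketch of that hybrid approach (the helper name and the 50 ms margin are illustrative):

#include <windows.h>

// Sleep coarsely for most of the interval, then busy-wait on the
// high-resolution counter for the final stretch.
void PreciseSleepMs(double ms, double spinMarginMs = 50.0)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    const LONGLONG target = static_cast<LONGLONG>(ms * freq.QuadPart / 1000.0);

    if (ms > spinMarginMs)
        Sleep(static_cast<DWORD>(ms - spinMarginMs));  // coarse, inaccurate part

    do {                                               // fine part: spin
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < target);
}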
Sleep(500) guarantees a sleep of at least 500 ms.
But it might sleep for longer than that: the upper limit is not defined.
In your case, there will also be the extra overhead of calling GetTickCount().
Your non-standard Sleep function may well behave in a different manner, but I doubt that exactness is guaranteed; to get that, you need special hardware.
As you can read in the documentation, the WinAPI function GetTickCount() is limited to the resolution of the system timer, which is typically in the range of 10 to 16 milliseconds. To get a more accurate time measurement, use the function GetSystemTimePreciseAsFileTime.
Also, you cannot rely on Sleep(500) to sleep exactly 500 milliseconds. It will suspend the thread for at least 500 milliseconds, and the operating system will then resume the thread as soon as it has a timeslot available. When many other tasks are running on the system, there may be a delay.
In general, sleeping means that your thread goes into a waiting state, and after 500 ms it moves to a "runnable" state. The OS scheduler then chooses what to run according to the priority and the number of runnable processes at that time. So even if you have a high-precision sleep and a high-precision clock, it is still a sleep of at least 500 ms, not of exactly 500 ms.
As the other answers have noted, Sleep() has limited accuracy. Actually, no implementation of a Sleep()-like function can be perfectly accurate, for several reasons:
It takes some time to actually call Sleep(). While an implementation aiming for maximal accuracy could attempt to measure and compensate for this overhead, few bother. (And, in any case, the overhead can vary due to many causes, including CPU and memory use.)
Even if the underlying timer used by Sleep() fires at exactly the desired time, there's no guarantee that your process will actually be rescheduled immediately after waking up. Your process might have been swapped out while it was sleeping, or other processes might be hogging the CPU.
It's possible that the OS cannot wake your process up at the requested time, e.g. because the computer is in suspend mode. In such a case, it's quite possible that your 500ms Sleep() call will actually end up taking several hours or days.
Also, even if Sleep() was perfectly accurate, the code you want to run after sleeping will inevitably consume some extra time.
Thus, to perform some action (e.g. redrawing the screen, or updating game logic) at regular intervals, the standard solution is to use a compensated Sleep() loop. That is, you maintain a regularly incrementing time counter indicating when the next action should occur, and compare this target time with the current system time to dynamically adjust your sleep time.
Some extra care needs to be taken to deal with unexpected large time jumps, e.g. if the computer was temporarily suspended or the tick counter wrapped around, as well as with the situation where processing the action takes more time than is available before the next action, causing the loop to lag behind.
Here's a quick example implementation (in pseudocode) that should handle both of these issues:
int interval = 500, giveUpThreshold = 10 * interval;
int nextTarget = GetTickCount();
bool active = doAction();

while (active) {
    nextTarget += interval;
    int delta = nextTarget - GetTickCount();
    if (delta > giveUpThreshold || delta < -giveUpThreshold) {
        // either we're hopelessly behind schedule, or something
        // weird happened; either way, give up and reset the target
        nextTarget = GetTickCount();
    } else if (delta > 0) {
        Sleep(delta);
    }
    active = doAction();
}
This will ensure that doAction() is called, on average, once every interval milliseconds, at least as long as it doesn't consistently consume more time than that and no large time jumps occur. The exact time between successive calls may vary, but any such variation is compensated for on the next iteration.
The default timer resolution is low; you can increase the timer resolution if necessary, as below (adapted from MSDN):
#define TARGET_RESOLUTION 1  // 1-millisecond target resolution

TIMECAPS tc;
UINT wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
    // Error; application can't continue.
}

wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
timeBeginPeriod(wTimerRes);
There are two general reasons why code might want a function like "sleep":
It has some task which can be performed at any time that is at least some distance in the future.
It has some task which should be performed as near as possible to some moment in time some distance in the future.
In a good system, there should be separate ways of issuing those kinds of requests; Windows makes the first easier than the second.
Suppose there is one CPU and three threads in the system, all doing useful work until, one second before midnight, one of the threads says it won't have anything useful to do for at least a second. At that point, the system will devote execution to the remaining two threads. If, 1 ms before midnight, one of those threads decides it won't have anything useful to do for at least a second, the system will switch control to the last remaining thread.
When midnight rolls around, the original first thread will become available to run, but since the presently-executing thread will have had the CPU for only a millisecond at that point, there's no particular reason the original first thread should be considered more "worthy" of CPU time than the other thread which just got control. Since switching threads isn't free, the OS may very well decide that the thread that presently has the CPU should keep it until it blocks on something or uses up a whole time slice.
It might be nice if there were a version of "sleep" which were easier to use than multimedia timers but would request that the system give the thread a temporary priority boost when it becomes eligible to run again, or better yet a variation of "sleep" which would specify a minimum time and a "priority-boost" time, for tasks which need to be performed within a certain time window. I don't know of any systems that can easily be made to work that way, though.
I know that Sleep() is not accurate, but is there a way to make it not sleep for more than 10 ms (i.e. sleep only between 1 ms and 10 ms)? Or does Sleep(1) already guarantee that?
If you really want guaranteed timings, you will not be using Windows at all.
To answer your question: Sleep() provides no means of guaranteeing an upper bound on the sleep time.
In Windows, this is because Sleep() relinquishes the thread's time slice, and it is not guaranteed that the system scheduler will schedule the sleeping thread (i.e. allocate another time slice to it) immediately after the sleep time is up. That depends on the priorities of competing threads, scheduling policies, and things like that.
In practice, the actual sleep interval depends a lot on what other programs are running on the system, configuration of the system, whether other programs are accessing slow drives, etc etc.
With a lightly loaded system, it is a fair bet Sleep(1) will sleep between 1 and 2 ms on any modern (GHz-frequency or better) CPU. However, it is not impossible for your program to experience greater delays.
With a heavily loaded system (lots of other programs executing, using CPU and timer resources), it is a fair bet your program will experience substantially greater delays than 1ms, and even more than 10ms.
In short: no guarantees.
There is no way to guarantee it.
This is what real-time OSes are for.
In the general case, if your OS is not under high load, sleep will be pretty accurate; but the more load you put on the system, the more inaccurate it will get.
No. Or, yes, depending on your perspective.
According to the documentation:
After the sleep interval has passed, the thread is ready to run. If you specify 0 milliseconds, the thread will relinquish the remainder of its time slice but remain ready. Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses. For more information, see Scheduling Priorities.
What this means is that the problem isn't Sleep. Rather, when Sleep ends, your thread may still need to wait to become active again.
You cannot count on 10 milliseconds; that's too low. Sleep() accuracy is affected by:
The clock tick interrupt frequency. In general, the processor tends to be in a quiescent state, not consuming any power and turned off by the HLT instruction. It is dead to the world, unaware that time is passing and unaware that your sleep interval has expired. A periodic hardware interrupt generated by the chipset wakes it up and makes it pay attention again. By default, this interrupt is generated 64 times per second. Or once every 15.625 milliseconds.
The thread scheduler runs at every clock interrupt. It is what notices that your sleep interval has expired; it will put the thread back into the ready-to-run state and boost its priority so that it is more likely to acquire a processor core. It will get one when no thread with higher priority is ready to run.
There isn't much you can do about the second bullet; you have to compete with everybody else and take your fair share. If the thread does a lot of sleeping and little computation, then it is not unreasonable to claim more than your fair share: call SetThreadPriority() to boost your base priority and make it more likely that your sleep interval is accurate. If that isn't good enough, the only way to claim a priority high enough to always beat everybody else is to write ring-0 code, a driver.
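For instance (a minimal illustration; THREAD_PRIORITY_HIGHEST is one of several levels SetThreadPriority accepts):

#include <windows.h>

int main()
{
    // Raise this thread's base priority so it is more likely to get a core
    // promptly when its sleep interval expires.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    Sleep(500);
}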
You can mess with the first bullet; it is pretty common to do so, and it is also the reason why many programmers think the default accuracy is 10 msec. Or, if they use Chrome, that it might be 1 msec: that browser jacks the interrupt rate sky-high, a fairly unreasonable thing to do and bad for power consumption, unless you are in the business of making your mobile operating system products look good :)
Call timeBeginPeriod() when you need your sleep intervals to be short enough, and timeEndPeriod() when you're done. Use NtSetTimerResolution() if you need to go lower than 1 msec.
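A sketch of the pairing (values illustrative; every timeBeginPeriod call must be matched by a timeEndPeriod call with the same argument):

#include <windows.h>
#include <mmsystem.h>  // timeBeginPeriod/timeEndPeriod; link with winmm.lib

int main()
{
    timeBeginPeriod(1);  // request 1 ms timer resolution for this section
    Sleep(10);           // now close to 10 ms rather than ~15.6 ms
    timeEndPeriod(1);    // restore the previous resolution
}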
Sleep won't guarantee that.
The only way I know of doing that is to have a thread wait on a fast timer event and release a synchronization object every 10 ms or so.
You pass a semaphore to this "wait server" task, and it releases the semaphore on the next timer tick, thus giving you a response time between 0 and 10 ms.
Of course, if you want extreme precision, you will have to boost this thread's priority above the other tasks that might preempt it, and even then you might still be preempted by system processes and/or interrupt handlers, which will add some noise to your timer.
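A sketch of that "wait server" idea built on a periodic waitable timer (the names and the 10 ms period are illustrative):

#include <windows.h>

// Clients wait on this semaphore; create it at startup, e.g.
// gSem = CreateSemaphore(nullptr, 0, 1000, nullptr);
HANDLE gSem;

DWORD WINAPI WaitServer(LPVOID)
{
    HANDLE timer = CreateWaitableTimer(nullptr, FALSE, nullptr);
    LARGE_INTEGER due;
    due.QuadPart = -100000;  // first expiry in 10 ms (negative = relative, 100 ns units)
    SetWaitableTimer(timer, &due, 10, nullptr, nullptr, FALSE);  // then every 10 ms
    for (;;) {
        WaitForSingleObject(timer, INFINITE);
        ReleaseSemaphore(gSem, 1, nullptr);  // wake one waiting client
    }
}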
A long time ago I had a bug in my program. The root cause was that the C function
sleep(60);
would, on rare occasions, sleep less than 60 seconds. Or the function did make the thread sleep more than 60 s, but the clock was changed automatically by the OS (this seems likely, since the bug happened only at XX:00:00); that is, it manifested itself rarely, and only on the "round hour" (the sleep should have ended at > x h 0 m 0 s; it ended at (x-1) h 59 m 59.99* s).
Then my project manager went on a rant about how he had said a million times that we should only use timers, not sleep.
Since then I have accepted the notion that timers are more accurate than sleep(), but now I feel that I should ask for a more authoritative source.
So:
Are timers more precise than sleep?
(Related) Are they, deep down (at the OS level), implemented using different methods?
I know timers are used to do callbacks, while sleep just delays execution of the current thread; I'm asking about the delay-execution part of the implementation.
BTW, the OS was Linux, but I care about a general answer, if possible.
Timers are definitely more accurate than sleep. Sleep is meant as just a rough measure of how long until the task scheduler revives a thread or process. Changes to the system clock, an overloaded task scheduler, etc. will all affect how long sleep actually sleeps for.
A timer will measure time more accurately. There are two kinds of timers: those based on the system clock, like the functions in time.h. Those are affected by things like changes to the system clock; for example, if you change the system time, switch from daylight saving time, or suspend the machine, the measured time may differ from the real time.
The other kind are high-resolution timers based on CPU ticks, such as QueryPerformanceCounter on Windows and clock_gettime() on Linux. These simply count CPU cycles. They aren't affected by changes to the system timer, but they deviate from real-world time in two ways:
Time will skew over long periods, because the clock resolution is not exact; over long measurements this causes the counted time to drift from real time.
If the machine is suspended the CPU stops and the timer will not account for this.
What you want to do is sleep for a much shorter amount of time than you need and use the clock with the appropriate resolution. E.g. if you need to sleep for less than a few minutes, use the high-resolution timers: sleep 100x more often than you need to, and each time sleep comes back, check the elapsed time to see whether the right amount of time has elapsed (a sketch follows). If you need to sleep for more than a few minutes, do the same but with the functions in time.h to check the elapsed time.
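A rough sketch of that slice-and-check loop using clock_gettime(CLOCK_MONOTONIC) (the helper name is ours):

#include <time.h>

// Sleep in ~1/100th slices of the requested time, re-checking a monotonic
// clock after each slice so early wakeups and drift are corrected.
void sleep_checked(double total_seconds)
{
    timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    const double slice = total_seconds / 100.0;  // sleep ~100x more often
    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        double elapsed = (now.tv_sec - start.tv_sec)
                       + (now.tv_nsec - start.tv_nsec) / 1e9;
        if (elapsed >= total_seconds)
            return;
        double rest = total_seconds - elapsed;
        double nap = rest < slice ? rest : slice;
        timespec ts;
        ts.tv_sec  = static_cast<time_t>(nap);
        ts.tv_nsec = static_cast<long>((nap - ts.tv_sec) * 1e9);
        nanosleep(&ts, nullptr);
    }
}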
If you need to be 100% accurate with time, you may need specialized hardware, or to check real time periodically against an online time server, such as the Navy's atomic clock (http://tycho.usno.navy.mil/ntp.html).
There is no general answer, for the simple reason that nothing in either the C or C++ standard provides the ability to put an application to sleep. So the discussion is inherently going to be OS-dependent.
The Unix sleep() function has a coarse granularity. There are also usleep() and nanosleep(), which have much finer granularity. The function select() can also be used to put an application to sleep: simply specify a timeout and no file descriptors, as in the sketch below.
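For example, a minimal select()-as-sleep (no fd sets, just a timeout):

#include <sys/select.h>

int main()
{
    // Use select() purely as a sub-second sleep.
    struct timeval tv;
    tv.tv_sec  = 0;
    tv.tv_usec = 250000;  // 250 ms
    select(0, nullptr, nullptr, nullptr, &tv);
}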
Note #1: The interaction between sleep(), usleep(), nanosleep(), itimers, and alarms is unspecified.
Note #2: Don't expect any of these mechanisms to have the precision of an atomic clock.
I was testing how long various Win32 API calls would wait when asked to wait for 1 ms. I tried:
::Sleep(1)
::WaitForSingleObject(handle, 1)
::GetQueuedCompletionStatus(handle, &bytes, &key, &overlapped, 1)
I measured the elapsed time using QueryPerformanceCounter and QueryPerformanceFrequency. The elapsed time was about 15 ms most of the time, which is expected and documented all over the Internet. However, for a short period the waits were taking only about 2 ms! This happened consistently for a few minutes, but then it went back to 15 ms. I did not use timeBeginPeriod() and timeEndPeriod() calls! Then I tried the same app on another machine, and there the waits consistently take about 2 ms! Both machines have Windows XP SP2, and the hardware should be identical. Is there something that explains why wait times vary by so much? TIA
Thread.Sleep(0) will let any threads of the same priority execute. Thread.Sleep(1) will let any threads of the same or lower priority execute.
Each thread is given an interval of time to execute in, before the scheduler lets another thread execute. As Billy ONeal states, calling Thread.Sleep will give up the rest of this interval to other threads (subject to the priority considerations above).
Windows balances threads across the entire OS, not just within your process. This means that other threads on the OS can also cause your thread to be pre-empted (i.e. interrupted, with the rest of the time interval given to another thread).
There is an article that might be of interest on the topic of Thread.Sleep(x) at:
Priority-induced starvation: Why Sleep(1) is better than Sleep(0) and the Windows balance set manager
Changing the timer's resolution can be done by any process on the system, and the effect is seen globally. See this article on how the Hotspot Java compiler deals with times on windows, specifically:
Note that any application can change the timer interrupt and that it affects the whole system. Windows only allows the period to be shortened, thus ensuring that the shortest requested period by all applications is the one that is used. If a process doesn't reset the period then Windows takes care of it when the process terminates. The reason why the VM doesn't just arbitrarily change the interrupt rate when it starts - it could do this - is that there is a potential performance impact to everything on the system due to the 10x increase in interrupts. However other applications do change it, typically multi-media viewers/players.
The biggest thing Sleep(1) does is give up the rest of your thread's quantum. How long that is depends entirely on how much of the quantum remains when you call Sleep.
To aggregate what was said before:
CPU time is assigned in quantums (time slices)
The thread scheduler picks the thread to run. This thread may run for the entire time slice, even if threads of higher priority become ready to run.
Typical time slices are 8..15ms, depending on architecture.
The thread can "give up" the time slice - typically Sleep(0) or Sleep(1). Sleep(0) allows another thread of same or hogher priority to run for the next time slice. Sleep(1) allows "any" thread.
The time slice is global and can be affected by all processes
Even if you don't change the time slice, someone else could.
Even if the time slice doesn't change, you may "jump" between the two different times.
For simplicity, assume a single core, your thread, and another thread X.
If thread X runs at the same priority as yours, crunching numbers, your Sleep(1) will take an entire time slice, 15 ms being typical on client systems.
If thread X runs at a lower priority and gives up its own time slice after 4 ms, your Sleep(1) will take 4 ms.
I would say it just depends on how loaded the CPU is; if there aren't many other processes/threads, it could get back to the calling thread a lot faster.