I'm experiencing strange issues with the boost::sleep() function. I have this basic code:
#include <sys/time.h>
#include <iostream>
#include <boost/chrono.hpp>
#include <boost/thread.hpp>

void thread_func()
{
    timeval start, end;
    gettimeofday( &start, NULL );
    boost::this_thread::sleep( boost::posix_time::milliseconds(1) ); // usleep(1000) here works just fine.
    gettimeofday( &end, NULL );
    int secs = end.tv_sec - start.tv_sec;
    int usec = end.tv_usec - start.tv_usec;
    std::cout << "Elapsed time: " << secs << " s and " << usec << " us" << std::endl;
}

int main()
{
    thread_func();
    boost::thread thread = boost::thread( thread_func );
    thread.join();
    return 0;
}
The problem is that the boost::sleep() function behaves differently in the created thread and in the main one. The output of this program is
Elapsed time: 0 s and 1066 us
Elapsed time: 0 s and 101083 us
i.e. the boost::sleep() function sleeps for 100 milliseconds in the created thread, whereas it works okay in the main thread (it sleeps for 1 ms). If I'm inside a created thread, I can't get the accuracy below 100 ms (for example by using boost::posix_time::microseconds). However, if I use usleep(1000), it works just fine.
I'm using Fedora 18 (64-bit, kernel 3.8.4) with Boost 1.50.0-5.fc18 on an Intel i7 CPU. I also tested the code on a different PC with Win 7 & Boost 1.48.0 and the problem does not occur there, so I guess it is related to the system configuration, but I have no clue how.
boost::this_thread::sleep is deprecated (see docs).
usleep is also deprecated (obsolete in POSIX.1-2001 and removed from POSIX.1-2008).
FWIW, in the older (1.44) boost headers I have installed locally, the relative-delay version of boost::this_thread::sleep actually calls gettimeofday to calculate the absolute deadline, and then forwards to the absolute version (which is compiled out-of-line, so I don't have it handy). Note that gettimeofday was also marked obsolete in POSIX.1-2008.
The suggested replacements for all these are (a short sketch using them follows the list):
boost::this_thread::sleep_for instead of ...::sleep with a relative delay
boost::this_thread::sleep_until instead of ...::sleep with an absolute time
nanosleep instead of usleep
clock_gettime instead of gettimeofday
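A minimal sketch of the questioner's timing loop rewritten with the suggested replacements (assuming a POSIX system with Boost.Chrono available; the absolute-time form is shown in a comment):

#include <time.h>
#include <iostream>
#include <boost/chrono.hpp>
#include <boost/thread.hpp>

void thread_func()
{
    timespec start, end;
    clock_gettime( CLOCK_MONOTONIC, &start );                          // instead of gettimeofday
    boost::this_thread::sleep_for( boost::chrono::milliseconds(1) );   // instead of ...::sleep with a relative delay
    // Absolute form: boost::this_thread::sleep_until( boost::chrono::steady_clock::now() + boost::chrono::milliseconds(1) );
    clock_gettime( CLOCK_MONOTONIC, &end );
    long usec = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_nsec - start.tv_nsec) / 1000L;
    std::cout << "Elapsed time: " << usec << " us" << std::endl;
}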
Be aware that calling boost::this_thread::sleep() and related methods not only puts the thread to sleep but also asks the scheduler to give the CPU to another thread that is ready for execution. So you are actually measuring the maximum of the requested sleep time and the time until the thread gets the CPU again.
Related
I have an application that needs to do work within certain windows (in this case, the windows are all 30 seconds apart). When the time is not within a window, the time until the middle of the next window is calculated, and the thread sleeps for that amount of time (in milliseconds, using boost::this_thread::sleep_for).
Using Boost 1.55, I was able to hit the windows within my tolerance (+/-100ms) with extreme reliability. Upon migration to Boost 1.58, I am never able to hit these windows. Replacing the boost::this_thread::sleep_for with std::this_thread::sleep_for fixes the issue; however, I need the interruptible feature of boost::thread and the interruption point that boost::this_thread::sleep_for provides.
Here is some sample code illustrating the issue:
#include <boost/thread.hpp>
#include <boost/chrono.hpp>
#include <chrono>
#include <iostream>
#include <thread>

void boostThreadFunction ()
{
    std::cout << "Starting Boost thread" << std::endl;

    for (int i = 0; i < 10; ++i)
    {
        auto sleep_time = boost::chrono::milliseconds {29000 + 100 * i};
        auto mark = std::chrono::steady_clock::now ();
        boost::this_thread::sleep_for (sleep_time);
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now () - mark);

        std::cout << "Boost thread:" << std::endl;
        std::cout << "\tSupposed to sleep for:\t" << sleep_time.count ()
                  << " ms" << std::endl;
        std::cout << "\tActually slept for:\t" << duration.count ()
                  << " ms" << std::endl << std::endl;
    }
}

void stdThreadFunction ()
{
    std::cout << "Starting Std thread" << std::endl;

    for (int i = 0; i < 10; ++i)
    {
        auto sleep_time = std::chrono::milliseconds {29000 + 100 * i};
        auto mark = std::chrono::steady_clock::now ();
        std::this_thread::sleep_for (sleep_time);
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now () - mark);

        std::cout << "Std thread:" << std::endl;
        std::cout << "\tSupposed to sleep for:\t" << sleep_time.count ()
                  << " ms" << std::endl;
        std::cout << "\tActually slept for:\t" << duration.count ()
                  << " ms" << std::endl << std::endl;
    }
}

int main ()
{
    boost::thread boost_thread (&boostThreadFunction);
    std::this_thread::sleep_for (std::chrono::seconds (10));

    std::thread std_thread (&stdThreadFunction);

    boost_thread.join ();
    std_thread.join ();

    return 0;
}
Here is the output when referencing Boost 1.58 as an include directory and running on my workstation (Windows 7 64-bit):
Starting Boost thread
Starting Std thread
Boost thread:
Supposed to sleep for: 29000 ms
Actually slept for: 29690 ms
Std thread:
Supposed to sleep for: 29000 ms
Actually slept for: 29009 ms
Boost thread:
Supposed to sleep for: 29100 ms
Actually slept for: 29999 ms
Std thread:
Supposed to sleep for: 29100 ms
Actually slept for: 29111 ms
Boost thread:
Supposed to sleep for: 29200 ms
Actually slept for: 29990 ms
Std thread:
Supposed to sleep for: 29200 ms
Actually slept for: 29172 ms
Boost thread:
Supposed to sleep for: 29300 ms
Actually slept for: 30005 ms
Std thread:
Supposed to sleep for: 29300 ms
Actually slept for: 29339 ms
Boost thread:
Supposed to sleep for: 29400 ms
Actually slept for: 30003 ms
Std thread:
Supposed to sleep for: 29400 ms
Actually slept for: 29405 ms
Boost thread:
Supposed to sleep for: 29500 ms
Actually slept for: 29999 ms
Std thread:
Supposed to sleep for: 29500 ms
Actually slept for: 29472 ms
Boost thread:
Supposed to sleep for: 29600 ms
Actually slept for: 29999 ms
Std thread:
Supposed to sleep for: 29600 ms
Actually slept for: 29645 ms
Boost thread:
Supposed to sleep for: 29700 ms
Actually slept for: 29998 ms
Std thread:
Supposed to sleep for: 29700 ms
Actually slept for: 29706 ms
Boost thread:
Supposed to sleep for: 29800 ms
Actually slept for: 29998 ms
Std thread:
Supposed to sleep for: 29800 ms
Actually slept for: 29807 ms
Boost thread:
Supposed to sleep for: 29900 ms
Actually slept for: 30014 ms
Std thread:
Supposed to sleep for: 29900 ms
Actually slept for: 29915 ms
I would expect the std::thread and the boost::thread to sleep for the same amount of time; however, the boost::thread seems to want to sleep for ~30 seconds when asked to sleep for 29.1 - 29.9 seconds. Am I misusing the boost::thread interface, or is this a bug that was introduced since 1.55?
I am the person who committed the above change to Boost.Thread. This change in 1.58 is by design, after a period of consultation with the Boost community and Microsoft, and results in potentially enormous battery life improvements on mobile devices. The C++ standard makes no guarantees whatsoever that any timed wait actually waits, or waits the correct period, or anything close to the correct period. Any code written to assume that timed waits work or are accurate is therefore buggy. A future Microsoft STL may make a similar change, in which case the STL behaviour would be the same as Boost.Thread's. I might add that on any non-realtime OS any timed wait is inherently unpredictable and may fire very considerably later than requested. This change was therefore thought by the community to be helpful for exposing buggy usage of the STL.
The change allows Windows to optionally fire timers late by a certain amount. It may not actually do so; in fact it simply tries to delay regular interrupts as part of the tickless kernel design in very recent editions of Windows. Even if you specify a tolerance of weeks, the correct deadline is always sent to Windows, so the next system interrupt to occur after the timer expiry will always fire the timer; no timer will ever be late by more than a few seconds at most.
One bug fixed by this change was the problem of system sleep. The previous implementation could get confused when the system slept, whereby timed waits would never wake (well, in 29 days they would). This implementation deals correctly with system sleeps, so random hangs of code using Boost.Thread caused by system sleeps are hopefully now a thing of the past.
Finally, I personally think that timed waits need a hardness/softness guarantee in the STL. That's a pretty big change, however. And even if it were implemented, except on hard realtime OSs the hardness of timed waits can only ever be best effort, which is why such guarantees were excluded from the C++ standard in the first place: C++11 was finalised well before mobile device power consumption was considered important enough to modify APIs.
Niall
Starting in Boost 1.58 on Windows, sleep_for() leverages SetWaitableTimerEx() (instead of SetWaitableTimer()) passing in a tolerance time to take advantage of coalescing timers.
In libs/thread/src/win32/thread.cpp, the tolerance is 5% of the sleep time or 32 ms, whichever is larger:
// Preferentially use coalescing timers for better power consumption and timer accuracy
if(!target_time.is_sentinel())
{
    detail::timeout::remaining_time const time_left=target_time.remaining_milliseconds();
    timer_handle=CreateWaitableTimer(NULL,false,NULL);
    if(timer_handle!=0)
    {
        ULONG tolerable=32; // Empirical testing shows Windows ignores this when <= 26
        if(time_left.milliseconds/20>tolerable)  // 5%
            tolerable=time_left.milliseconds/20;
        LARGE_INTEGER due_time=get_due_time(target_time);
        bool const set_time_succeeded=detail_::SetWaitableTimerEx()(timer_handle,&due_time,0,0,0,&detail_::default_reason_context,tolerable)!=0;
        if(set_time_succeeded)
        {
            timeout_index=handle_count;
            handles[handle_count++]=timer_handle;
        }
    }
}
Since 5% of 29.1 seconds is 1.455 seconds, this explains why the sleep times using boost::sleep_for were so inaccurate.
I use this code as a workaround if I need the interruptibleness of sleep_for:
::Sleep(20);
boost::this_thread::interruption_point();
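For example, a small helper built on that workaround (the helper name is my own, not a Boost API); it sleeps in 20 ms slices so interruption is checked regularly, at the cost of accuracy no better than roughly the slice length:

#include <windows.h>
#include <boost/thread.hpp>
#include <boost/chrono.hpp>

// Hypothetical helper: interruptible sleep built from short ::Sleep slices.
void interruptible_sleep_for(boost::chrono::milliseconds total)
{
    auto deadline = boost::chrono::steady_clock::now() + total;
    while (boost::chrono::steady_clock::now() < deadline)
    {
        ::Sleep(20);                                // plain Win32 sleep, no coalescing tolerance
        boost::this_thread::interruption_point();   // honour boost::thread::interrupt()
    }
}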
I am trying to find a way to wait for a signal or a maximum duration such that the duration is measured in wallclock time instead of the time the machine spends awake. For example, for the following order of events:
A wait() function is called for a maximum of 24 hours
12 hours pass
Machine is put to sleep
12 hours pass
Machine is woken up out of sleep
I would like the wait() call to return as soon as the process gets to run since 24 hours of wallclock time have passed. I've tried using std::condition_variable::wait_until but that uses machine awake time. I've also tried WaitForSingleObject() on windows and pthread_cond_timedwait() on mac to no avail. I would prefer something cross-platform (e.g. in the STL) if possible. As a backup, it looks like SetThreadpoolTimer() for windows and dispatch_after() (using dispatch_walltime()) on mac could work, but I would of course prefer a single implementation. Does anybody know of one?
Thanks!
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <tchar.h>   // for _tmain/_TCHAR

using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    condition_variable cv;
    mutex m;
    unique_lock<mutex> lock(m);

    auto start = chrono::steady_clock::now();
    cv_status result = cv.wait_until(lock, start + chrono::minutes(5));
    // put computer to sleep here for 5 minutes, should wake up immediately

    if (result == cv_status::timeout)
    {
        auto end = chrono::steady_clock::now();
        chrono::duration<double> diff = end - start;
        cerr << "wait duration: " << diff.count() << " seconds\n";
    }
    return 0;
}
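One workaround I am considering (just a sketch under my own assumptions, not a known portable solution): wait in short slices and re-check a deadline taken from std::chrono::system_clock, which keeps advancing while the machine is asleep:

#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical helper: returns true if 'pred' became true, false on wall-clock timeout.
template <class Pred>
bool wait_wallclock(std::condition_variable& cv, std::unique_lock<std::mutex>& lock,
                    std::chrono::system_clock::time_point deadline, Pred pred)
{
    while (!pred())
    {
        if (std::chrono::system_clock::now() >= deadline)
            return pred();                            // deadline passed in wall-clock terms
        // Short slice: time spent suspended is noticed on the next iteration,
        // at the cost of waking up once a second.
        cv.wait_for(lock, std::chrono::seconds(1));
    }
    return true;
}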
I am trying to write a simple C++ function sleep(int milliseconds) that will sleep the program for a user-specified number of milliseconds.
Here is my code:
#include <iostream>
#include <time.h>

using namespace std;

void sleep(unsigned int mseconds) {
    clock_t goal = mseconds + clock();
    while (goal > clock());
}

int main() {
    cout << "Hello World !" << endl;
    sleep(3000);
    cout << "Hello World 2" << endl;
}
The sleep() function works perfectly when I run this code on Windows but doesn't work on Linux. Can anyone figure out what's wrong with my code?
I don't know why everyone is dancing around your question instead of answering it.
You are attempting to implement your own sleep-like function, and your implementation, while it busy-waits instead of sleeping in kernel space (meaning the processor "actively" runs code just to keep your program waiting, instead of telling the machine your program is sleeping so it can run other code), is just fine.
The problem is that clock() is not required to return milliseconds. clock() returns the processor time the process has used, measured in ticks; what unit a tick represents depends on the implementation.
For instance, on my machine, this is what the man page says:
DESCRIPTION
The clock() function determines the amount of processor time used since
the invocation of the calling process, measured in CLOCKS_PER_SECs of a
second.
RETURN VALUES
The clock() function returns the amount of time used unless an error
occurs, in which case the return value is -1.
SEE ALSO
getrusage(2), clocks(7)
STANDARDS
The clock() function conforms to ISO/IEC 9899:1990 (``ISO C90'') and
Version 3 of the Single UNIX Specification (``SUSv3'') which requires
CLOCKS_PER_SEC to be defined as one million.
As you can see from the standards section, a tick is one one-millionth of a second, i.e. a microsecond (not a millisecond). To "sleep" for 3 seconds, you'll need to call your sleep(3000000) and not sleep(3000).
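For illustration, a sketch of that fix which scales by CLOCKS_PER_SEC instead of assuming a tick is a millisecond (the helper name sleep_ms is mine; it is still a busy-wait, which is deliberate since clock() measures CPU time):

#include <time.h>

void sleep_ms(unsigned int mseconds) {
    // Convert milliseconds to clock ticks using the implementation's CLOCKS_PER_SEC.
    clock_t goal = clock() + (clock_t)((double)mseconds / 1000.0 * CLOCKS_PER_SEC);
    while (clock() < goal)
        ;   // busy-wait
}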
With C++11 you can use sleep_for.
#include <chrono>
#include <thread>
void sleep(unsigned int mseconds) {
    std::chrono::milliseconds dura(mseconds);
    std::this_thread::sleep_for(dura);
}
You can use the built-in sleep() function, which takes its delay in seconds rather than milliseconds; you have to include the <unistd.h> header, where sleep() is declared.
Try it:
#include <iostream>
#include <unistd.h>

using namespace std;

int main() {
    cout << "Hello World !" << endl;
    sleep(3); // wait for 3 seconds
    cout << "Hello World 2" << endl;
}
:P
There is no standard C API for millisecond sleeps on Linux, so you will have to use usleep (which takes microseconds) or nanosleep. POSIX sleep() takes seconds.
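For illustration, a sketch of a millisecond sleep on POSIX using nanosleep (the helper name is mine; usleep(mseconds * 1000) works too but is obsolete):

#include <time.h>

void sleep_ms(unsigned int mseconds) {
    struct timespec req;
    req.tv_sec  = mseconds / 1000;
    req.tv_nsec = (long)(mseconds % 1000) * 1000000L;
    nanosleep(&req, NULL);   // ignores early wakeup by signals in this sketch
}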
This is a sample program to check the behaviour of the Sleep() function. It is a demo only, since I am using these Sleep() and clock() functions in my application development.
// TestTicks.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include<iostream>
#include<iomanip>
#include <Windows.h>
int _tmain(int argc, _TCHAR* argv[])
{
    int i, i2;
    i = clock();
    //std::cout<<" \nTime before Sleep() : "<<i;
    Sleep(30000);
    i2 = clock();
    //std::cout<<" \nTime After Sleep() : "<<i2;
    std::cout<<"\n Diff : "<<i2 -i;
    getchar();
    return 0;
}
In this code I am calculating the time using clock() before and after the Sleep() call.
Since I am using Sleep(30000), the time difference should be at least 30000.
I have run this program many times and got outputs like 30000, 30001, 30002. These are OK. But sometimes I get values like 29999 and 29997. How is this possible, since I put a 30000 ms sleep between the clock() calls?
Please give me the reason for this.
According to http://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx:
The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
It just means that the Sleep function will never sleep exactly for the amount of time given, but as close as possible given the resolution of the scheduler.
The same page gives you a method to increase the timer resolution if you really need it.
There are also high resolution timers that may better fit your needs.
Unless you're using a realtime OS, that is very much expected.
The operating system has to schedule and run many other processes, so waking yours up may not match the exact time you wanted to sleep.
The clock() function tells how much processor time the calling process has used.
You may replace the use of clock() by the function GetSystemTimeAsFileTime
in order to measure the time more accurately.
Also you may try to use timeBeginPeriod with the wPeriodMin value returned by a call to timeGetDevCaps in order to obtain the maximum interrupt frequency.
In order to synchronize the sleeps with the system interrupt period, I'd also suggest having a Sleep(1) ahead of the first time capture.
By doing so, the "too short" results should disappear.
More information about sleep can be found here
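For illustration, a sketch along those lines (my own example, assuming <windows.h> and linking against winmm.lib for timeBeginPeriod/timeEndPeriod): raise the interrupt frequency, align on a Sleep(1), then measure with GetSystemTimeAsFileTime, which returns wall-clock time in 100 ns units:

#include <windows.h>
#include <iostream>

// Wall-clock "now" in 100-nanosecond units.
static long long now_100ns()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return (long long)u.QuadPart;
}

int main()
{
    timeBeginPeriod(1);   // request ~1 ms interrupt period (pair with timeEndPeriod)
    Sleep(1);             // align with the next system interrupt before the first capture
    long long start = now_100ns();
    Sleep(30000);
    long long end = now_100ns();
    std::cout << "Diff : " << (end - start) / 10000 << " ms" << std::endl;
    timeEndPeriod(1);
    return 0;
}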
#include <iostream>
#include <time.h>

void wait(int seconds)
{
    clock_t endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait) {}
}

int main()
{
    wait(2);
    std::cout << "2 seconds have passed";
}
I've never actually worked with timers before but I need one for my current project.
So this might be a silly question, but what's the 'normal' way to retrieve a timer for a game, and is there a better/more efficient way?
Thanks
Since you may want the elapsed time, and it might be very small, you may need to use the clock() function declared in time.h.
Here what I found about it in the MSDN Library:
Calculates the wall-clock time used by the calling process.
clock_t clock( void );
Return Value
The elapsed wall-clock time since the start of the process (elapsed time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time is unavailable, the function returns –1, cast as a clock_t.
Remarks
The clock function tells how much time the calling process has used. A timer tick is approximately equal to 1/CLOCKS_PER_SEC second. In versions of Microsoft C before 6.0, the CLOCKS_PER_SEC constant was called CLK_TCK.
Example:
// crt_clock.c
// This example prompts for how long
// the program is to run and then continuously
// displays the elapsed time for that period.
//
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void sleep( clock_t wait );

int main( void )
{
    long    i = 6000000L;
    clock_t start, finish;
    double  duration;

    // Delay for a specified time.
    printf( "Delay for three seconds\n" );
    sleep( (clock_t)3 * CLOCKS_PER_SEC );
    printf( "Done!\n" );

    // Measure the duration of an event.
    printf( "Time to do %ld empty loops is ", i );
    start = clock();
    while( i-- )
        ;
    finish = clock();
    duration = (double)(finish - start) / CLOCKS_PER_SEC;
    printf( "%2.1f seconds\n", duration );
}

// Pauses for a specified number of milliseconds.
void sleep( clock_t wait )
{
    clock_t goal;
    goal = wait + clock();
    while( goal > clock() )
        ;
}
Output
Delay for three seconds
Done!
Time to do 6000000 empty loops is 0.1 seconds
If you want a cross-platform and performant time library, use boost::date_time. For timers, just get the current time and subtract it from the next reading (the library has operators for computing time differences etc., and the code is readable).
The current time is read using boost::posix_time::microsec_clock::universal_time() and stored in a ptime struct. (The posix_ prefix does not mean it is available only on POSIX systems; it only indicates that it is modeled after POSIX time concepts.)
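A minimal sketch of that pattern (assuming the Boost.Date_Time headers are available):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;

    ptime last = microsec_clock::universal_time();
    // ... run one frame of the game loop ...
    ptime now = microsec_clock::universal_time();

    time_duration elapsed = now - last;   // operator- yields a time_duration
    std::cout << elapsed.total_microseconds() << " us since the last reading\n";
    return 0;
}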
If you are using C++ on windows you will want to use QueryPerformanceCounter/QueryPerformanceFrequency
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644905(v=vs.85).aspx
If you are on linux check out clock_gettime(CLOCK_REALTIME)
http://linux.die.net/man/3/clock_gettime
The clock() suggestion is incorrect, as it measures time used by the process. Since there is a busy loop, his function will end up being correct, but if you block then this will not work.
http://linux.die.net/man/3/clock
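For illustration, a minimal sketch of the QueryPerformanceCounter approach on Windows (my own example; on Linux the same structure works with clock_gettime and CLOCK_MONOTONIC):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
    QueryPerformanceCounter(&start);

    // ... work to be timed, e.g. one frame of the game loop ...

    QueryPerformanceCounter(&end);
    double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::cout << "Elapsed: " << seconds << " s\n";
    return 0;
}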