I have an application that needs to do work within certain windows (in this case, the windows are all 30 seconds apart). When the time is not within a window, the time until the middle of the next window is calculated, and the thread sleeps for that amount of time (in milliseconds, using boost::this_thread::sleep_for).
Using Boost 1.55, I was able to hit the windows within my tolerance (+/-100ms) with extreme reliability. Upon migration to Boost 1.58, I am never able to hit these windows. Replacing the boost::this_thread::sleep_for with std::this_thread::sleep_for fixes the issue; however, I need the interruptible feature of boost::thread and the interruption point that boost::this_thread::sleep_for provides.
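For reference, the window-targeting calculation described above amounts to something like the following (a sketch only; the 30-second spacing is from the question, but the assumption that windows are aligned to the clock's epoch, and the helper name, are illustrative):

#include <chrono>

// Sketch: time remaining until the middle of the current or next 30-second window,
// assuming windows are aligned to the steady_clock epoch (an assumption).
std::chrono::milliseconds time_until_next_window_middle()
{
    using namespace std::chrono;
    auto const period = duration_cast<milliseconds>(seconds(30));
    auto const half   = period / 2;
    auto const into   = duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()) % period;
    // Before the middle of this window: sleep up to it; otherwise aim for the next one.
    return (into < half) ? (half - into) : (period - into + half);
}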
Here is some sample code illustrating the issue:
#include <boost/thread.hpp>
#include <boost/chrono.hpp>
#include <chrono>
#include <iostream>
#include <thread>
void boostThreadFunction ()
{
    std::cout << "Starting Boost thread" << std::endl;

    for (int i = 0; i < 10; ++i)
    {
        auto sleep_time = boost::chrono::milliseconds {29000 + 100 * i};
        auto mark = std::chrono::steady_clock::now ();
        boost::this_thread::sleep_for (sleep_time);
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now () - mark);

        std::cout << "Boost thread:" << std::endl;
        std::cout << "\tSupposed to sleep for:\t" << sleep_time.count ()
                  << " ms" << std::endl;
        std::cout << "\tActually slept for:\t" << duration.count ()
                  << " ms" << std::endl << std::endl;
    }
}

void stdThreadFunction ()
{
    std::cout << "Starting Std thread" << std::endl;

    for (int i = 0; i < 10; ++i)
    {
        auto sleep_time = std::chrono::milliseconds {29000 + 100 * i};
        auto mark = std::chrono::steady_clock::now ();
        std::this_thread::sleep_for (sleep_time);
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now () - mark);

        std::cout << "Std thread:" << std::endl;
        std::cout << "\tSupposed to sleep for:\t" << sleep_time.count ()
                  << " ms" << std::endl;
        std::cout << "\tActually slept for:\t" << duration.count ()
                  << " ms" << std::endl << std::endl;
    }
}

int main ()
{
    boost::thread boost_thread (&boostThreadFunction);
    std::this_thread::sleep_for (std::chrono::seconds (10));

    std::thread std_thread (&stdThreadFunction);

    boost_thread.join ();
    std_thread.join ();

    return 0;
}
Here is the output when referencing Boost 1.58 as an include directory and running on my workstation (Windows 7 64-bit):
Starting Boost thread
Starting Std thread
Boost thread:
Supposed to sleep for: 29000 ms
Actually slept for: 29690 ms
Std thread:
Supposed to sleep for: 29000 ms
Actually slept for: 29009 ms
Boost thread:
Supposed to sleep for: 29100 ms
Actually slept for: 29999 ms
Std thread:
Supposed to sleep for: 29100 ms
Actually slept for: 29111 ms
Boost thread:
Supposed to sleep for: 29200 ms
Actually slept for: 29990 ms
Std thread:
Supposed to sleep for: 29200 ms
Actually slept for: 29172 ms
Boost thread:
Supposed to sleep for: 29300 ms
Actually slept for: 30005 ms
Std thread:
Supposed to sleep for: 29300 ms
Actually slept for: 29339 ms
Boost thread:
Supposed to sleep for: 29400 ms
Actually slept for: 30003 ms
Std thread:
Supposed to sleep for: 29400 ms
Actually slept for: 29405 ms
Boost thread:
Supposed to sleep for: 29500 ms
Actually slept for: 29999 ms
Std thread:
Supposed to sleep for: 29500 ms
Actually slept for: 29472 ms
Boost thread:
Supposed to sleep for: 29600 ms
Actually slept for: 29999 ms
Std thread:
Supposed to sleep for: 29600 ms
Actually slept for: 29645 ms
Boost thread:
Supposed to sleep for: 29700 ms
Actually slept for: 29998 ms
Std thread:
Supposed to sleep for: 29700 ms
Actually slept for: 29706 ms
Boost thread:
Supposed to sleep for: 29800 ms
Actually slept for: 29998 ms
Std thread:
Supposed to sleep for: 29800 ms
Actually slept for: 29807 ms
Boost thread:
Supposed to sleep for: 29900 ms
Actually slept for: 30014 ms
Std thread:
Supposed to sleep for: 29900 ms
Actually slept for: 29915 ms
I would expect the std::thread and the boost::thread to sleep for the same amount of time; however, the boost::thread seems to want to sleep for ~30 seconds when asked to sleep for 29.1 - 29.9 seconds. Am I misusing the boost::thread interface, or is this a bug that was introduced since 1.55?
I am the person who committed the above change to Boost.Thread. The change in 1.58 is by design, made after a period of consultation with the Boost community and Microsoft, and it can yield potentially enormous battery-life improvements on mobile devices. The C++ standard makes no guarantee whatsoever that any timed wait actually waits, waits the correct period, or waits anything close to the correct period; any code written to assume that timed waits work or are accurate is therefore buggy. A future Microsoft STL may make a similar change, in which case the standard library's behaviour would match Boost.Thread's. I might add that on any non-realtime OS, any timed wait is inherently unpredictable and may fire very considerably later than requested. The community therefore considered this change helpful in exposing buggy usage of the STL.
The change allows Windows to fire timers late, by up to a stated tolerance. It may not actually do so; in practice it simply lets very recent editions of Windows delay the regular timer interrupt as part of their tickless kernel design. Even if you specify a tolerance of weeks, the correct deadline is always passed to Windows, so the next system interrupt to occur after the timer expiry will still fire the timer; no timer will ever be late by more than a few seconds at most.
One bug fixed by this change was the problem of system sleep. The previous implementation could be confused by the system sleeping, whereby timed waits would never wake (well, they would after 29 days). This implementation deals correctly with system sleeps, so random hangs in code using Boost.Thread caused by system sleeps should now be a thing of the past.
Finally, I personally think that timed waits need a hardness/softness guarantee in the STL. That is a pretty big change, however, and even if it were implemented, hardness of timed waits can only ever be best effort except on hard-realtime OSs. That is why such guarantees were excluded from the C++ standard in the first place: C++11 was finalised well before mobile-device power consumption was considered important enough to shape APIs.
Niall
Starting in Boost 1.58 on Windows, sleep_for() leverages SetWaitableTimerEx() (instead of SetWaitableTimer()) passing in a tolerance time to take advantage of coalescing timers.
In libs/thread/src/win32/thread.cpp, the tolerance is 5% of the sleep time or 32 ms, whichever is larger:
// Preferentially use coalescing timers for better power consumption and timer accuracy
if(!target_time.is_sentinel())
{
    detail::timeout::remaining_time const time_left=target_time.remaining_milliseconds();
    timer_handle=CreateWaitableTimer(NULL,false,NULL);
    if(timer_handle!=0)
    {
        ULONG tolerable=32; // Empirical testing shows Windows ignores this when <= 26
        if(time_left.milliseconds/20>tolerable)  // 5%
            tolerable=time_left.milliseconds/20;
        LARGE_INTEGER due_time=get_due_time(target_time);
        bool const set_time_succeeded=detail_::SetWaitableTimerEx()(timer_handle,&due_time,0,0,0,&detail_::default_reason_context,tolerable)!=0;
        if(set_time_succeeded)
        {
            timeout_index=handle_count;
            handles[handle_count++]=timer_handle;
        }
    }
}
Since 5% of 29.1 seconds is 1.455 seconds, the timer is allowed to fire that much late, which explains why the sleep times using boost::this_thread::sleep_for were so inaccurate.
I use this code as a workaround when I need the interruptibility of sleep_for:
::Sleep(20);
boost::this_thread::interruption_point();
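For longer waits, the same idea can be wrapped in a loop driven by a steady clock (a sketch only; the helper name and the 20 ms polling interval are illustrative choices, not part of the original workaround):

#include <boost/thread.hpp>
#include <chrono>
#include <windows.h>

// Sketch: sleep in short Win32 chunks and poll Boost's interruption point, so the
// wait stays interruptible while sidestepping the coalescing tolerance that
// boost::this_thread::sleep_for applies to long waits.
void interruptible_sleep_for(std::chrono::milliseconds total)
{
    auto const deadline = std::chrono::steady_clock::now() + total;
    while (std::chrono::steady_clock::now() < deadline)
    {
        ::Sleep(20);                                // short kernel sleep
        boost::this_thread::interruption_point();   // throws boost::thread_interrupted if interrupted
    }
}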
Related
I am trying to find a way to wait for a signal or a maximum duration, where the duration is measured in wall-clock time rather than time the machine spends awake. For example, consider the following order of events:
A wait() function is called for a maximum of 24 hours
12 hours pass
Machine is put to sleep
12 hours pass
Machine is woken up out of sleep
I would like the wait() call to return as soon as the process gets to run again, because 24 hours of wall-clock time have passed by then. I've tried std::condition_variable::wait_until, but that counts only machine-awake time. I've also tried WaitForSingleObject() on Windows and pthread_cond_timedwait() on Mac, to no avail. I would prefer something cross-platform (e.g. in the STL) if possible. As a backup, it looks like SetThreadpoolTimer() on Windows and dispatch_after() (with dispatch_walltime()) on Mac could work, but I would of course prefer a single implementation. Does anybody know of one?
Thanks!
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <tchar.h>

using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    condition_variable cv;
    mutex m;
    unique_lock<mutex> lock(m);

    auto start = chrono::steady_clock::now();
    cv_status result = cv.wait_until(lock, start + chrono::minutes(5));
    // Put the computer to sleep here for 5 minutes; the wait should wake up immediately.
    if (result == cv_status::timeout)
    {
        auto end = chrono::steady_clock::now();
        chrono::duration<double> diff = end - start;
        cerr << "wait duration: " << diff.count() << " seconds\n";
    }
    return 0;
}
I want my application to sleep for precisely 2000 microseconds:
#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    std::cout << "Hello waiter" << std::endl;

    std::chrono::microseconds dura( 2000 );
    auto start = std::chrono::system_clock::now();
    std::this_thread::sleep_for( dura );
    auto end = std::chrono::system_clock::now();

    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "Waited for " << elapsed.count() << " microseconds" << std::endl;

    return 0;
}
This results in
Waited for 2620 microseconds
Where does this discrepancy come from? Is there a better (more precise) method available?
Thanks!
Quoted from cppreference (see sleep_for):
This function may block for longer than sleep_duration due to scheduling or resource contention delays.
I think that is the most likely explanation. The details will depend on your environment, especially your OS.
In general, I see no portable way to avoid it (non-portable options include increasing thread priorities or reducing the nice level).
Another, though less likely, reason for time differences is external clock adjustment (e.g. by an NTP daemon). Using a steady_clock is portable insurance against clock adjustments.
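As a concrete illustration, here is the measurement from the question rewritten against steady_clock (a sketch; the sleep itself may still overshoot, this only removes clock adjustments as a source of measurement error):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    std::chrono::microseconds const dura( 2000 );

    auto const start = std::chrono::steady_clock::now();   // unaffected by clock adjustments
    std::this_thread::sleep_for( dura );
    auto const end = std::chrono::steady_clock::now();

    auto const elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "Waited for " << elapsed.count() << " microseconds" << std::endl;
    return 0;
}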
Evidently, sleep_for is not precise at all. The working solution for this issue is to spin in a while loop until the desired duration has elapsed. This makes the application "sleep" for precisely 2000 microseconds.
// Busy-wait until 2000 us have elapsed since `start` (taken from the snippet above).
bool sleep = true;
while (sleep)
{
    auto now = std::chrono::system_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(now - start);
    if (elapsed.count() > 2000)
        sleep = false;
}
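Note that this loop burns a full CPU core while it waits. A common refinement (not part of the original answer) is to sleep for most of the interval and only spin for the tail, trading a little CPU for precision. A sketch, where the 500-microsecond safety margin is an illustrative guess:

#include <chrono>
#include <thread>

// Sketch: coarse sleep for the bulk of the interval, then busy-wait the remainder.
void precise_sleep(std::chrono::microseconds total)
{
    auto const deadline = std::chrono::steady_clock::now() + total;
    auto const margin   = std::chrono::microseconds(500);   // illustrative, tune per system
    if (total > margin)
        std::this_thread::sleep_for(total - margin);
    while (std::chrono::steady_clock::now() < deadline)
        ;   // spin for the final stretch
}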
I am trying to write a simple c++ function sleep(int millisecond) that will sleep the program for user-specific millisecond.
Here is my code:
#include <iostream>
#include <time.h>
using namespace std;
void sleep(unsigned int mseconds) {
    clock_t goal = mseconds + clock();
    while (goal > clock());
}

int main() {
    cout << "Hello World !" << endl;
    sleep(3000);
    cout << "Hello World 2" << endl;
}
The sleep() function works perfectly when I run this code on Windows but doesn't work on Linux. Can anyone figure out what's wrong with my code?
I don't know why everyone is dancing around your question instead of answering it.
You are attempting to implement your own sleep-like function, and your implementation is basically fine, even though it busy-waits instead of sleeping in kernel space (meaning the processor will be "actively" running code to keep your program waiting, instead of being told your program is sleeping so it can run other code).
The problem is that clock() is not required to return milliseconds. clock() returns the processor time the process has used, measured in clock ticks; what unit of time one tick represents depends on the implementation.
For instance, on my machine, this is what the man page says:
DESCRIPTION
The clock() function determines the amount of processor time used since
the invocation of the calling process, measured in CLOCKS_PER_SECs of a
second.
RETURN VALUES
The clock() function returns the amount of time used unless an error
occurs, in which case the return value is -1.
SEE ALSO
getrusage(2), clocks(7)
STANDARDS
The clock() function conforms to ISO/IEC 9899:1990 (``ISO C90'') and
Version 3 of the Single UNIX Specification (``SUSv3'') which requires
CLOCKS_PER_SEC to be defined as one million.
As the STANDARDS section shows (CLOCKS_PER_SEC is required to be one million), a tick here is one one-millionth of a second, i.e. a microsecond, not a millisecond. To "sleep" for 3 seconds you would need to call your sleep(3000000), not sleep(3000).
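Equivalently, the original function can be made unit-safe by converting through CLOCKS_PER_SEC rather than assuming one tick equals one millisecond (still a busy wait; this sketch keeps the structure of the code in the question):

#include <time.h>

// Sketch: busy-wait for `mseconds` milliseconds, converting via CLOCKS_PER_SEC so the
// code no longer depends on the tick length of a particular platform.
// (Overflow for very long waits on a 32-bit clock_t is not handled here.)
void sleep_ms(unsigned int mseconds) {
    clock_t const goal = clock() + (clock_t)mseconds * CLOCKS_PER_SEC / 1000;
    while (clock() < goal)
        ;
}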
With C++11 you can use sleep_for.
#include <chrono>
#include <thread>
void sleep(unsigned int mseconds) {
    std::chrono::milliseconds dura( mseconds );
    std::this_thread::sleep_for( dura );
}
You can use the built-in sleep() function, which takes its delay in seconds rather than milliseconds; you have to include the <unistd.h> header, where sleep() is declared.
Try it:
#include <iostream>
#include <unistd.h>
using namespace std;
int main() {
    cout << "Hello World !" << endl;
    sleep(3); // wait for 3 seconds
    cout << "Hello World 2" << endl;
}
There is no standard C API for sleeping in milliseconds on Linux, so you will have to use usleep (which takes microseconds); POSIX sleep() takes whole seconds.
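The answer above points at usleep; a sketch using nanosleep instead (the non-obsolete POSIX replacement, which also avoids usleep's one-second argument limit on some systems) might look like this, with the helper name being illustrative:

#include <time.h>

// Sketch: millisecond sleep built on POSIX nanosleep.
// EINTR handling (resuming with the remaining time) is omitted for brevity.
void msleep(unsigned int mseconds) {
    struct timespec ts;
    ts.tv_sec  = mseconds / 1000;
    ts.tv_nsec = (long)(mseconds % 1000) * 1000000L;
    nanosleep(&ts, NULL);
}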
I'm experiencing strange issues with boost::sleep() function. I have this basic code:
#include <sys/time.h>
#include <iostream>
#include <boost/chrono.hpp>
#include <boost/thread.hpp>

void thread_func()
{
    timeval start, end;
    gettimeofday( &start, NULL );

    boost::this_thread::sleep( boost::posix_time::milliseconds(1) ); // usleep(1000) here works just fine.

    gettimeofday( &end, NULL );
    int secs = end.tv_sec - start.tv_sec;
    int usec = end.tv_usec - start.tv_usec;
    std::cout << "Elapsed time: " << secs << " s and " << usec << " us" << std::endl;
}

int main()
{
    thread_func();

    boost::thread thread = boost::thread( thread_func );
    thread.join();

    return 0;
}
The problem is that the boost::sleep() function behaves differently in the created thread than in the main one. The output of this program is
Elapsed time: 0 s and 1066 us
Elapsed time: 0 s and 101083 us
i.e. the boost::sleep() function sleeps for 100 milliseconds in the created thread, whereas it works okay in the main thread (it sleeps for 1 ms). If I'm inside a created thread, I can't get the accuracy below 100 ms (for example by using boost::posix_time::microseconds). However, if I use usleep(1000), it works just fine.
I'm using Fedora 18 (64-bit, kernel 3.8.4) with Boost 1.50.0-5.fc18 on an Intel i7 CPU. I also tested the code on a different PC with Windows 7 and Boost 1.48.0, where the problem does not occur, so I guess it is related to the system configuration, but I have no clue how.
boost::this_thread::sleep is deprecated (see docs).
usleep is also deprecated (obsolete in POSIX.1-2001 and removed from POSIX.1-2008).
FWIW, in the older (1.44) Boost headers I have installed locally, the relative-delay version of boost::this_thread::sleep actually calls gettimeofday to calculate the absolute deadline and then forwards to the absolute version (which is compiled out of line, so I don't have it handy). Note that gettimeofday was also marked obsolete in POSIX.1-2008.
The suggested replacements for all these are:
boost::this_thread::sleep_for instead of ...::sleep with a relative delay
boost::this_thread::sleep_until instead of ...::sleep with an absolute time
nanosleep instead of usleep
clock_gettime instead of gettimeofday
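Putting the first of those replacements together with Boost.Chrono, the measurement from the question might look like this (a sketch; the clock_gettime-based variant is left out):

#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    auto const start = boost::chrono::steady_clock::now();

    boost::this_thread::sleep_for( boost::chrono::milliseconds(1) );   // replaces the deprecated sleep()

    auto const elapsed = boost::chrono::duration_cast<boost::chrono::microseconds>(
        boost::chrono::steady_clock::now() - start);
    std::cout << "Elapsed time: " << elapsed.count() << " us" << std::endl;
    return 0;
}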
Be aware that calling boost::this_thread::sleep() and related functions not only puts the thread to sleep but also asks the scheduler to give the CPU to another thread that is ready to run. So you are actually measuring the maximum of the sleep time and the time until your thread gets the CPU again.
The Boost.Chrono library (v1.51) on my MacBook Pro returns negative times when I subtract endTime - startTime. If you print the time points, you see that the end time is earlier than the start time. How can this happen?
typedef boost::chrono::steady_clock clock_t;
clock_t clock;

// Start time measurement
boost::chrono::time_point<clock_t> startTime = clock.now();

short test_times = 7;

// Spend some time...
for ( int i=0; i<test_times; ++i )
{
    xnodeptr spResultDoc=parser.parse(inputSrc);
    xstring sXmlResult = spResultDoc->str();
    const char16_t* szDbg = sXmlResult.c_str();
    BOOST_CHECK(spResultDoc->getNodeType()==xnode::DOCUMENT_NODE && sXmlResult == sXml);
}

// Stop time measurement
boost::chrono::time_point<clock_t> endTime = clock.now();
clock_t::duration elapsed( endTime - startTime );

std::cout << std::endl;
std::cout << "Now time: " << clock.now() << std::endl;
std::cout << "Start time: " << startTime << std::endl;
std::cout << "End time: " << endTime << std::endl;
std::cout << std::endl << "Total Parse time: " << elapsed << std::endl;
std::cout << "Avarage Parse time per iteration: " << (boost::chrono::duration_cast<boost::chrono::milliseconds>(elapsed) / test_times) << std::endl;
I tried different clocks but no difference.
Any help would be appreciated!
EDIT: Forgot to add the output:
Now time: 1 nanosecond since boot
Start time: 140734799802912 nanoseconds since boot
End time: 140734799802480 nanoseconds since boot
Total Parse time: -432 nanoseconds
Avarage Parse time per iteration: 0 milliseconds
Hyperthreading or just scheduling interference: the Boost implementation punts monotonic-clock support to the OS:
POSIX: clock_gettime(CLOCK_MONOTONIC), although it may still fail due to kernel errors in handling hyper-threading when calibrating the system.
Win32: QueryPerformanceCounter(), which on anything older than the Nehalem architecture is not going to be monotonic across cores and threads.
OS X: mach_absolute_time(), i.e. the steady and high-resolution clocks are the same. The source code shows that it uses RDTSC, hence a strict dependency on hardware stability: i.e. no guarantees.
Disabling hyperthreading is one recommended way to go, but on Windows, say, you are really limited: aside from dropping the timer resolution, the only available method is direct access to the underlying hardware timers while ensuring thread affinity.
It looks like a good time to submit a bug report to Boost; I would recommend:
Win32: Use GetTickCount64(), as discussed here (see the sketch below).
OS X: Use clock_get_time(SYSTEM_CLOCK), as suggested in this question.
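For reference, the Win32 suggestion amounts to something like this (a sketch; GetTickCount64() is monotonic across cores but has only roughly 10-16 ms resolution, so it suits coarse measurements rather than microbenchmarks):

#include <windows.h>
#include <iostream>

int main()
{
    ULONGLONG const start = GetTickCount64();   // monotonic, millisecond units

    Sleep(100);   // stand-in for the work being timed

    ULONGLONG const elapsed = GetTickCount64() - start;
    std::cout << "Elapsed: " << elapsed << " ms" << std::endl;
    return 0;
}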