How to sleep a C++ Boost Thread

It seems impossible to sleep a thread using boost::thread.
The sleep method requires a system_time, but how can I build one?
Looking inside the library headers doesn't really help much.
Basically, I have a thread, and inside the function that I pass to it as the entry point I would like to call something like
boost::this_thread::sleep
How can I do this?
Thank you

Depending on your version of Boost:
Either...
#include <boost/chrono.hpp>
#include <boost/thread/thread.hpp>
boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
Or...
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>
boost::this_thread::sleep(boost::posix_time::milliseconds(100));
You can also pass microseconds, seconds, minutes, hours, and possibly other duration units.

From another post, I learned that boost::this_thread::sleep is deprecated as of Boost 1.53: http://www.boost.org/doc/libs/1_53_0/doc/html/thread/thread_management.html
Instead, try
void sleep_for(const chrono::duration<Rep, Period>& rel_time);
e.g.
boost::this_thread::sleep_for(boost::chrono::seconds(60));
Or maybe try
void sleep_until(const chrono::time_point<Clock, Duration>& abs_time);
I was using Boost 1.53 with the deprecated sleep function, and it crashed the program intermittently. When I changed the calls to sleep to calls to sleep_for, the program stopped crashing.

You can either name the duration:
boost::posix_time::seconds secTime(1);
boost::this_thread::sleep(secTime);
or pass it inline:
boost::this_thread::sleep(boost::posix_time::milliseconds(100));

I learned the hard way that, at least in MS Visual Studio (tried 2013 and 2015), there is a huge difference between
boost::this_thread::sleep(boost::posix_time::microseconds(SmallInterval));
and
boost::this_thread::sleep_for(boost::chrono::microseconds(SmallInterval));
or
std::this_thread::sleep_for(std::chrono::microseconds(SmallInterval));
when the interval is smaller than some rather substantial threshold (I saw a threshold of 15000 microseconds = 15 milliseconds).
If SmallInterval is small, sleep() returns essentially immediately: sleep(100 µs) behaves like sleep(0 µs).
But sleep_for() with an interval smaller than the threshold pauses for the entire threshold: sleep_for(100 µs) behaves like sleep_for(15000 µs).
For intervals larger than the threshold, and for the value 0, the behavior is the same.

Related

Why does the std::this_thread::sleep_for() sleep time differ between MSVC and MinGW-GCC?

Identifying the Problem
I was busy editing a library of lua bindings for rtmidi. I wanted to make it compile under MinGW-GCC and LLVM/Clang as well. When I was done making the edits and compiling the bindings, I noticed a weird timing issue caused by std::this_thread::sleep_for() compared to MSVC.
I understand that there are bound to be some scheduling differences between different compilers, but in the following examples you can hear large timing issues:
MIDI playback using MSVC compiled bindings
MIDI playback using GCC compiled bindings
I have narrowed it down that this is the piece of code in question:
lua_pushliteral(L, "sleep");
lua_pushcfunction(L, [] (lua_State *L) {
    auto s = std::chrono::duration<lua_Number>(luaL_checknumber(L, 1));
    std::this_thread::sleep_for(s);
    return 0;
});
lua_rawset(L, -3);
Obviously it's about these two lines:
auto s = std::chrono::duration<lua_Number>(luaL_checknumber(L, 1));
std::this_thread::sleep_for(s);
The average waiting time that is passed to sleep_for() is around 0.01s, with some calls here and there between 0.002s - 0.005s.
Troubleshooting
First off, I checked whether the problem was specific to my current version of GCC (9.2.0) by trying a different version and even LLVM/Clang.
Both GCC 8.1.0 and LLVM/Clang 9.0.0 yield the same results.
At this point I concluded that some weird scheduling is going on in the winpthreads runtime, since the MinGW toolchains depend on it and MSVC does not.
After that I tried swapping the code out for the Windows Sleep() call. I had to multiply by 1000 to convert seconds to milliseconds.
Sleep(luaL_checknumber(L, 1) * 1000);
As I expected, the timing issue is not present here; this tells me that winpthreads is indeed the culprit.
Obviously I do not want to make calls to the Windows-specific Sleep(); I would rather keep using sleep_for() for the sake of portability (cross-platform support).
The Questions
So based on what I gathered I have the following questions:
Is winpthreads indeed the culprit? Am I perhaps missing some compiler defines that would solve the problem?
If winpthreads is indeed the culprit, why are the timing differences so big?
If there is no compiler-define 'fix', what would you recommend to tackle the problem?
To partially answer the third question (if it may come to it), I was thinking of doing something like:
#if defined(_WIN32) && defined(__MINGW32__)
#include <windows.h>
#endif
...
#if defined(_WIN32) && defined(__MINGW32__)
Sleep(luaL_checknumber(L, 1) * 1000);
#elif defined(_WIN32) && defined(_MSC_VER)
auto s = std::chrono::duration<lua_Number>(luaL_checknumber(L, 1));
std::this_thread::sleep_for(s);
#endif
Of course the problem arises that Windows' Sleep() call is less precise (or so I've read).

How to sleep for a fixed period of time not affected by system clock adjustment in Visual C++ 2012

I was using std::this_thread::sleep_for(std::chrono::seconds(1)); to sleep for a second. I found that if I adjust the system time backwards during the sleep, the sleep time would extend for the amount of time I just adjusted.
But std::this_thread::sleep_for() is supposed to work regardless of the system time, unlike std::this_thread::sleep_until(), which probably should exhibit the behavior mentioned above.
When I look at the Visual C++ 2012's implementation of std::this_thread::sleep_for(), I found
template<class _Rep, class _Period> inline
void sleep_for(const chrono::duration<_Rep, _Period>& _Rel_time)
    {   // sleep for duration
    stdext::threads::xtime _Tgt = _To_xtime(_Rel_time);
    sleep_until(&_Tgt);
    }
So sleep_for() is implemented using sleep_until() in Visual C++ 2012. I searched the C++11 standard, and it doesn't really forbid this kind of implementation. So how can I get to sleep for a fixed period of time not affected by system clock adjustment?
You should probably use timer queues; these are precise relative timers. To make the timer waitable, have your timer routine signal an event, as in this example.

How can I use boost::thread::timed_join with nanoseconds enabled in boost::date_time?

Here is some C++ code illustrating my problem with a minimal example:
// uncomment the next line to make it hang up:
//#define BOOST_DATE_TIME_POSIX_TIME_STD_CONFIG // needed for nanosecond support in Boost

#include <boost/thread.hpp>
#include <iostream> // for std::cout / std::cerr

void foo()
{
    while(true);
}

int main(int noParameters, char **parameterArray)
{
    boost::thread MyThread(&foo);

    if ( MyThread.timed_join( boost::posix_time::seconds(1) ) )
    {
        std::cout << "\nDone!\n";
    }
    else
    {
        std::cerr << "\nTimed out!\n";
    }
}
As long as I don't turn on the nanosecond support, everything works as expected, but as soon as I uncomment the #define needed for nanosecond support in boost::posix_time, the program doesn't get past the if-statement any more, just as if I had called join() instead of timed_join().
Now I've already figured out that this happens because BOOST_DATE_TIME_POSIX_TIME_STD_CONFIG changes the actual data representation of the timestamps from a single 64-bit integer to 64+32 bits. A lot of Boost is implemented entirely inside the headers, but the thread methods are not, and because of that they cannot adapt to the new data format without being recompiled with the appropriate options. Since the code is meant to run on an external server, compiling my own version of Boost is not an option, and neither is turning off the nanosecond support.
Therefore my question is as follows: is there a way to pass a value (on the order of seconds) to timed_join() without using the incompatible 96-bit posix_time methods and without modifying the standard Boost packages?
I'm running on Ubuntu 12.04 with boost 1.46.1.
Unfortunately I don't think your problem can be cleanly solved as written. Since the library you're linking against was compiled without nanosecond support, by definition you violate the one-definition rule if you happen to enable nanosecond support for any piece that's already compiled into the library binary. In this case, you're enabling it across the function calls to timed_join.
The obvious solution is to decide which is less painful to give up: Building your own boost, or removing nanosecond times.
The less obvious "hack" that may or may not totally work is to write your own timed_join wrapper that takes a thread object and an int representing seconds or ms or whatever. Then this function is implemented in a source file with nothing else and that does not enable nanosecond times for the specific purpose of calling into the compiled boost binary. Again I want to stress that if at any point you fail to completely segregate such usages you'll violate the one definition rule and run into undefined behavior.

Boost ptime under MinGW not thread safe

I have a problem with boost library. I'm using MinGW with gcc 4.5.2 to compile the following code:
unsigned long GetEpochSeconds()
{
    using namespace boost::posix_time;
    using namespace boost::gregorian;

    ptime now(second_clock::universal_time());
    ptime epoch(date(1970,1,1));
    time_duration diff = now-epoch;
    return diff.total_seconds();
}
The problem is that this code is not thread-safe. When I run it from within multiple threads, my application crashes. For now I've converted to the C standard functions like time(), mktime(), etc., and everything works fine, but in the future I will need a few of the Boost time functions.
I also tried compiling with -D_REENTRANT, but this didn't help.
Thanks for any suggestions.
Check whether your code is calling gmtime() or gmtime_r() (use a debugger for this). See http://www.boost.org/doc/libs/1_48_0/boost/date_time/c_time.hpp and note that BOOST_DATE_TIME_HAS_REENTRANT_STD_FUNCTIONS must be defined in order for getting the time to be thread-safe.

boost::this_thread::sleep() vs. nanosleep()?

I recently came across the need to sleep the current thread for an exact period of time. I know of two methods of doing so on a POSIX platform: using nanosleep() or using boost::this_thread::sleep().
Out of curiosity more than anything else, I was wondering what the differences are between the two approaches. Is there any difference in precision, and is there any reason not to use the Boost approach?
nanosleep() approach:
#include <time.h>
...
struct timespec sleepTime;
struct timespec returnTime;
sleepTime.tv_sec = 0;
sleepTime.tv_nsec = 1000;
nanosleep(&sleepTime, &returnTime);
Boost approach:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>
...
boost::this_thread::sleep(boost::posix_time::nanoseconds(1000));
The few reasons to use Boost that I can think of:
boost::this_thread::sleep() is an interruption point in Boost.Thread
boost::this_thread::sleep() can be drop-in replaced by C++0x's std::this_thread::sleep_until() in the future
As for why not: if you're not using threads at all, or if everything else in your project uses POSIX calls, then nanosleep() makes more sense.
As for precision, on my system both boost and nanosleep() call the same system call, hrtimer_nanosleep(). I imagine boost authors try to get the highest precision possible on each system and for me it happens to be the same thing as what nanosleep() provides.
How about: because your nanosleep example is wrong.
#include <errno.h>
#include <time.h>
...
struct timespec sleepTime;
struct timespec time_left_to_sleep;
sleepTime.tv_sec = 0;
sleepTime.tv_nsec = 1000;
/* nanosleep() can be interrupted by a signal before the interval has
   elapsed, and it only fills in the remaining time in that case, so
   check the return value and retry with the remainder */
while( nanosleep(&sleepTime, &time_left_to_sleep) == -1 && errno == EINTR )
{
    sleepTime = time_left_to_sleep;
}
Admittedly if you're only sleeping for 1 microsecond waking up too early shouldn't be an issue, but in the general case this is the only way to get it done.
And just to ice the cake in boost's favor, boost::this_thread::sleep() is implemented using nanosleep(). They just took care of all the insane corner cases for you.
is there any reason not to use the Boost approach
I suppose this is kind of obvious, but the only reason I can think of is that you'd require boost to compile your project.
For me the main reason for using the Boost variant is platform independence. If you are required to compile your application for both POSIX and Windows platforms, for example, a platform-specific sleep is not sufficient.