Delaying for milliseconds in C++ cross-platform

I'm writing a multi-platform internal library in C++ that will eventually run on Windows, Linux, MacOS, and an ARM platform, and need a way to sleep for milliseconds at a time.
I have an accurate method for doing this on the ARM platform, but I'm not sure how to do this on the other platforms.
Is there a way to sleep with millisecond resolution on most platforms or do I have to special-case things for each platform?

For Linux and Mac OS X you can use usleep:
usleep(350 * 1000);
For Windows you can use Sleep:
Sleep(350);
EDIT: usleep() takes microseconds, not milliseconds, hence the * 1000 factor above.
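If you need both in one code base, a tiny wrapper keeps the platform difference in one place. A minimal sketch, assuming only Windows and POSIX targets (the helper name msleep is just illustrative):

#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

// Sleep for roughly the given number of milliseconds.
void msleep(unsigned int ms)
{
#ifdef _WIN32
    Sleep(ms);           // Sleep() takes milliseconds
#else
    usleep(ms * 1000);   // usleep() takes microseconds
#endif
}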

boost::this_thread::sleep()
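For example, with Boost.Thread (called from inside some function, passing a posix_time duration):

#include <boost/thread/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

boost::this_thread::sleep(boost::posix_time::milliseconds(350));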

usleep theoretically provides microsecond resolution, but the actual granularity depends on the platform.
It is not available on Windows, so you would use QueryPerformanceCounter there (or write your own compatibility layer).
P.S.: building a program that depends on sleeps is often a road to disaster. Usually, what the programmer really wants is to wait for some event to happen asynchronously. In that case you should look at the waitable objects available on the platform, such as semaphores, mutexes, or even good ol' file descriptors.
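As an illustration of that event-driven alternative, here is a minimal POSIX sketch that waits for a flag with a millisecond timeout instead of sleeping blindly; the names (data_ready, wait_for_data) are placeholders, and another thread is assumed to set the flag and signal the condition variable:

#include <pthread.h>
#include <time.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int data_ready = 0;   // set by another thread while holding 'lock'

// Wait up to 'ms' milliseconds for data_ready to become non-zero.
int wait_for_data(long ms)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec  += ms / 1000;
    deadline.tv_nsec += (ms % 1000) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&lock);
    int rc = 0;
    while (!data_ready && rc == 0)
        rc = pthread_cond_timedwait(&cond, &lock, &deadline);
    int ok = data_ready;
    pthread_mutex_unlock(&lock);
    return ok;   // 1 if the event arrived, 0 on timeout
}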

For a timer you could use boost::asio::deadline_timer, either synchronously or asynchronously.
You could also look into boost::posix_time for adjusting timer precision between seconds, milliseconds, microseconds, and nanoseconds.

Windows Sleep() does provide millisecond precision, but nowhere near millisecond accuracy. There is always jitter, especially with small values on a heavily loaded system. Similar problems are to be expected with other non-real-time OSes. Even if the priority of the thread calling Sleep() is very high, a driver interrupt may introduce an extra delay at any time.
Rgds,
Martin

Related

Are there any thread specific clocks in the C++ world?

I used to measure the time consumption of different threads with CLOCK_THREAD_CPUTIME_ID and clock_gettime.
But clock_gettime is a POSIX standard, so it won't work on other platforms like Windows when the code has to be cross-platform.
I checked the C++ standard and found steady_clock, system_clock, and high_resolution_clock so far; none of these can clock a specific thread.
Did I miss anything? If so, what is it? If not, any advice?
You will have to abstract out the implementation of your thread timer: under Windows you can use the GetThreadTimes function, and under Linux (or other POSIX systems) CLOCK_THREAD_CPUTIME_ID. I suppose that if you compile with MinGW under Windows then CLOCK_THREAD_CPUTIME_ID will be available as well.
Also, for portability, take a look at Boost's thread_clock.
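A rough sketch of such an abstraction, returning the calling thread's CPU time in nanoseconds (the function name is illustrative and error handling is omitted):

#ifdef _WIN32
#include <windows.h>
#else
#include <time.h>
#endif

// CPU time consumed by the calling thread, in nanoseconds.
long long thread_cpu_time_ns()
{
#ifdef _WIN32
    FILETIME creation, exit, kernel, user;
    GetThreadTimes(GetCurrentThread(), &creation, &exit, &kernel, &user);
    ULARGE_INTEGER k, u;
    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;
    // FILETIME values are in 100-nanosecond units
    return (long long)(k.QuadPart + u.QuadPart) * 100;
#else
    struct timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
#endif
}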
The only "clock" to measure CPU time in the C++ standard is std::clock, which doesn't measure CPU time in Windows so it's still not portable, and anyway it's per process and not per thread.
If you want to measure thread CPU time you have to resort to non-portable platform-specific functions.

Best practice for a self-made timeout

I am trying to implement a timeout in a C++ method which does some polling. The method currently looks like this (without timeout):
do {
    do_something();
    usleep(50);
} while (!is_finished());
The solution should have the following properties:
should survive changes of the system time
timeout in milliseconds (some jitter is acceptable)
POSIX compatible
should not use signals (is part of a library, avoid side effects)
might use Boost
I am currently thinking about using clock() and do something like this:
start = clock();
do {
    do_something();
    usleep(50); // TODO: do fancy stuff to avoid waiting after the timeout is reached
    if (clock() - start > timeout * CLOCKS_PER_SEC / 1000) break;
} while (!is_finished());
Is this a good solution? I am trying to find the best possible solution as this kind of task seems to come up quite often.
What is considered best practice for this kind of problem?
Thanks in advance!
A timeout precise to milliseconds is out of the reach of any OS that doesn't specifically provide realtime support.
If your system is under heavy load (even temporarily) it's quite possible for it to be unresponsive for seconds. Sometimes standard (non-real-time) systems can become very unresponsive for apparently stupid reasons like accessing a CD-ROM device.
Linux has some real-time variants, while for Windows, IIRC, the only real-time solutions actually use a real-time kernel that manages the hardware, with the Windows system basically running "emulated" in a virtual machine.
clock is not the right choice if you want to be portable. On conforming systems this is the CPU time used by your process; on Windows it seems to be wall-clock time. Also, usleep is obsolete in current POSIX and you are supposed to use nanosleep instead.
There is one method that could be suitable for a wider range of platforms: select. This call has a fifth argument that lets you specify a timeout. You can (mis)use it to wait for network events that you know will never happen, and that will then time out after a controlled amount of time. From the Linux manual on that:
Some code calls select() with all three sets empty, nfds zero, and a
non-NULL timeout as a fairly portable way to sleep with subsecond
precision.
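A minimal sketch of that trick wrapped as a millisecond sleep (the helper name sleep_ms is just illustrative):

#include <stddef.h>
#include <sys/select.h>

// Sleep for roughly 'ms' milliseconds using select() with empty fd sets.
void sleep_ms(long ms)
{
    struct timeval tv;
    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv);
}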

Precise Timer Queues in C++

I'm developing an application that needs to send out messages at specific times (it's to do with multimedia so the timing precision is important), so effectively I need a mechanism to call a callback function in a specified number of milliseconds.
I need to support both Windows and Mac OS X. I've looked into Timer Queues on Windows, which look like what I need, but I have read that the timing precision is just not good enough for multimedia-based applications (my application is sending MIDI messages to a driver at specific times). Any ideas?
I think your best bet on Windows is to use Multimedia Timers. On OS X, the simplest function to use would be nanosleep, but you can go a long way with kqueue. I don't think there will be any problems if you are talking millisecond precision (a millisecond is a very, very long time). The only thing you will possibly need to do is to make sure the OS runs your process as "real-time".
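As an illustration, a rough sketch of a one-shot multimedia timer on Windows (link against winmm.lib; the callback name and the fixed 1 ms resolution are just choices for this example):

#include <windows.h>
#include <mmsystem.h>   // link with winmm.lib

void CALLBACK on_timer(UINT timer_id, UINT msg, DWORD_PTR user, DWORD_PTR, DWORD_PTR)
{
    // send the MIDI message to the driver here
}

void schedule_message(UINT delay_ms)
{
    timeBeginPeriod(1);                                    // request 1 ms timer resolution
    timeSetEvent(delay_ms, 1, on_timer, 0, TIME_ONESHOT);  // fire once after delay_ms
    // call timeEndPeriod(1) when the application no longer needs the resolution
}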
Any sleep function is "at least" on every OS that I know of.
If the OS has a lot of tasks, it is up to the scheduler of those tasks.
In space they have dedicated hardware that executes simple functions from a queue - that is the best you can get - and those systems are not crowded.

What is the official way to call a function (C/C++) about every 1/100 sec on Linux?

I have an asynchronous dataflow system written in C++. In a dataflow architecture, the application is a set of component instances which are initialized at startup and then communicate with each other through pre-defined messages. There is a component type called Pulsar, which provides a "clock signal" message to other components that connect to it (e.g. Delay). It fires a message (calls the dataflow dispatcher API) every X ms, where X is the value of its "frequency" parameter, given in ms.
In short, the task is just to call a function (method) every X ms. The question is: what's the best/official way to do it? Is there a pattern for it?
There are some methods I found:
Use SIGALRM. I think signalling is not suited for this purpose. Also, its resolution is 1 sec, which is too coarse.
Use a HW interrupt. I don't need that much precision. Also, I'm wary of HW-related solutions (the server is compiled for several platforms, e.g. ARM).
Measure elapsed time and usleep() until the next call. I'm not sure it's a good idea to have 5 threads each making time-related system calls 10 times every second - but maybe I'm wrong.
Use real-time kernel functions. I don't know anything about them. Also, I don't need crystal-precise calls - it's not a nuclear reactor - and I can't install an RT kernel on some platforms (only a 2.6.x kernel is available).
Maybe the best answer would be a short, commented part of an audio/video player's source code (which I can't find/understand by myself).
UPDATE (requested by @MSalters): The co-author of the DF project uses Mac OS X, so we should find a solution that works on most POSIX-compliant operating systems, not only on Linux. Maybe in the future there will be a target device that uses BSD or some restricted Linux.
If you do not need hard real-time guarantees, usleep should do the job. If you want hard real-time guarantees then an interrupt based or realtime kernel based function will be necessary.
To be honest, I think having to have a "pulsar" in what claims to be an asynchronous dataflow system is a design flaw. Either it is asynchronous or it has a synchronizing clock event.
If you have a component that needs a delay, have it request one through boost::asio::deadline_timer::async_wait or any of the lower-level solutions (select() / epoll() / timer_create() / etc.). Either way, the most effective C++ solution is probably the Boost.Asio timers, since they will use whatever is most efficient on your Linux kernel version.
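A sketch of the Boost.Asio approach for a repeating tick (the 10 ms period and the handler name tick are just placeholders):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

boost::asio::io_service io;
boost::asio::deadline_timer timer(io, boost::posix_time::milliseconds(10));

void tick(const boost::system::error_code&)
{
    // dispatch the clock-signal message to the connected components here,
    // then re-arm the timer relative to the previous deadline (avoids drift)
    timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(10));
    timer.async_wait(&tick);
}

int main()
{
    timer.async_wait(&tick);
    io.run();   // processes timer expirations until the io_service is stopped
    return 0;
}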
An alternative to the previously mentioned approaches is to use the Timer FD support in Linux Kernels 2.6.25+ (pretty much any distribution that's close to "current"). Timer FDs provide a bit more flexibility than the previous approaches.
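A rough sketch with a periodic timer fd (Linux-only; assumes a period below one second, omits error handling, and the helper name run_pulsar is just illustrative):

#include <sys/timerfd.h>
#include <stdint.h>
#include <unistd.h>

// Fire 'callback' every 'ms' milliseconds (ms < 1000) using a timer fd (Linux 2.6.25+).
void run_pulsar(unsigned int ms, void (*callback)())
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec spec = {};
    spec.it_value.tv_nsec    = ms * 1000000L;   // first expiration
    spec.it_interval.tv_nsec = ms * 1000000L;   // then periodically
    timerfd_settime(fd, 0, &spec, NULL);

    for (;;) {
        uint64_t expirations = 0;
        read(fd, &expirations, sizeof expirations);   // blocks until the next tick
        callback();
    }
}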
Neglecting the question of design (which I think is an interesting question, but deserves its own thread)...
I would start off with an "interrupt"-style design, using signals or some kernel facility to interrupt every X usec. I would put off sleep-based approaches until the other ideas proved too painful.

Sleep() becomes less accurate after replacing a PC? (C++)

I have a program that was built in C++ (MFC, Visual Studio 6.0) several years ago and has been running on a certain Windows machine for quite some time (more than 5 years). The PC was replaced a month ago (the old one died), and since then the program's timing behavior changed. I need help understanding why.
The main functionality of the program is to respond to keystrokes by sending out ON and OFF signals to an external card, with very accurate delay between the ON and the OFF. An example program flow:
> wait for keystroke...
> ! keystroke occurred
> send ON message
> wait 150ms
> send OFF message
Different keystrokes have different waiting periods associated with them, between 20ms and 150ms (a very deterministic time depending on the specific keystroke). The timing is very important. The waiting is executed using a simple Sleep(). The accuracy of the sleep on the old PC was within 1-2ms deviation. I can measure the timing externally to the computer (on the external card), so my measurement of the sleep time is very accurate. Please take into account that this machine executed such ON-sleep-OFF cycles thousands of times a day for years, so the accuracy data I have is sound.
Since the PC was replaced the timing deviation is more than 10ms.
I did not install the previous PC, so it may have had some additional software packages installed. Also, I'm ashamed to admit I don't remember whether the previous PC was Windows 2000 or Windows XP. I'm quite sure it was XP, but not 100% (and I can't check now...). The new one is Windows XP.
I tried changing the sleeping mechanism to be based on timers, but the accuracy did not improve.
Can anything explain this change? Is there a software package that may have been installed on the previous PC that may fix the problem? Is there a best practice to deal with the problem?
The timer resolution on XP is around 10ms - the system basically "ticks" every 10ms. Sleep is not a very good way to do accurate timing for that reason. I'm pretty sure Win2000 has the same resolution, but if I'm wrong that could be a reason.
You can change that resolution, at least down to 1ms - see http://technet.microsoft.com/en-us/sysinternals/bb897569.aspx or use this http://www.lucashale.com/timerresolution/ - there's probably a registry key as well (Windows Media Player will change that timer too, probably only while it's running).
It could be that the resolution was somehow altered on your old machine.
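The same resolution change can also be requested from code through the multimedia timer API (winmm.lib); a minimal sketch, to be run inside your timing code:

#include <windows.h>
#include <mmsystem.h>   // link with winmm.lib

timeBeginPeriod(1);   // ask for 1 ms timer/scheduler resolution
Sleep(20);            // now typically much closer to 20 ms
timeEndPeriod(1);     // restore the previous resolution when done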
If your main concern is precision, consider using a spinlock. The Sleep() function is a hint to the scheduler not to re-schedule the given thread for at least x ms; there is no guarantee that the thread will sleep for exactly the time specified.
Usually Sleep() will result in a delay of ~15ms, or a multiple of ~15ms, depending on the sleep value.
One of the good ways to find out how it behaves is the following loop:
while (true) {
    printf("%lu\n", GetTickCount());
    Sleep(1);
}
It will also show that the behavior of this code differs between, say, Windows XP and Vista/Win 7.
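If the spinlock suggestion above is acceptable (and burning a CPU core while waiting is tolerable), a rough sketch of a spin-wait using the high-resolution performance counter:

#include <windows.h>

// Busy-wait (spin) for 'ms' milliseconds using QueryPerformanceCounter.
// Burns a CPU core, but avoids most scheduler-induced jitter.
void spin_wait_ms(double ms)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    const LONGLONG ticks = (LONGLONG)(ms * freq.QuadPart / 1000.0);
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < ticks);
}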
As others have mentioned, sleep has coarse accuracy.
I typically use Boost::asio for this kind of timing:
// Set up the io_service and deadline_timer
boost::asio::io_service io_service;
boost::asio::deadline_timer timer(io_service);

// Configure the wait period and block until it expires
timer.expires_from_now(boost::posix_time::millisec(5));
timer.wait();
Asio uses the most efficient implementation for your platform; on Windows I believe it uses overlapped I/O.
If I set the time period to 1ms and loop the expires_from_now()/wait() calls 10000 times, the total duration is typically about 10005-10100 ms. Very accurate, cross-platform code (though accuracy differs on Linux) and very easy to read.
I can't explain why your previous PC was so accurate, though; Sleep has been +/- 10ms whenever I've used it - worse if the PC is busy.
Is your new PC multi-core while the old one was single-core? The difference in timing accuracy may come from the use of multiple threads and context switching.
Sleep is dependent on the system clock. Your new machine probably has a different timing than your previous machine. From the documentation:
This function causes a thread to relinquish the remainder of its time slice and become unrunnable for an interval based on the value of dwMilliseconds. The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on. To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum. Use caution when calling timeBeginPeriod, as frequent calls can significantly affect the system clock, system power usage, and the scheduler. If you call timeBeginPeriod, call it one time early in the application and be sure to call the timeEndPeriod function at the very end of the application.
The documentation seems to imply that you can attempt to make it more accurate, but I wouldn't try that if I were you. Just use a timer.
What timers did you replace it with? If you used SetTimer(), that timer sucks too.
The correct solution is to use the higher-resolution TimerQueueTimer.
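A rough sketch of scheduling a one-shot callback on a timer queue (the callback name is a placeholder and error checking is omitted):

#include <windows.h>

VOID CALLBACK off_callback(PVOID param, BOOLEAN /*timer_fired*/)
{
    // send the OFF message here
}

// Fire off_callback once, 'delay_ms' milliseconds from now.
void schedule_off(DWORD delay_ms)
{
    HANDLE timer = NULL;
    CreateTimerQueueTimer(&timer, NULL, off_callback, NULL,
                          delay_ms, 0, WT_EXECUTEONLYONCE);
}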