Best practice for a self-made timeout - C++

I am trying to implement a timeout in a C++ method which does some polling. The method currently looks like this (without timeout):
do {
    do_something();
    usleep(50);
} while (!is_finished());
The solution should have the following properties:
should survive changes of the system time
timeout in milliseconds (some jitter is acceptable)
POSIX compatible
should not use signals (it is part of a library, so side effects must be avoided)
might use Boost
I am currently thinking about using clock() and do something like this:
clock_t start = clock();
do {
    do_something();
    usleep(50); // TODO: do fancy stuff to avoid waiting after the timeout is reached
    if (clock() - start > timeout * CLOCKS_PER_SEC / 1000) break;
} while (!is_finished());
Is this a good solution? I am trying to find the best possible solution as this kind of task seems to come up quite often.
What is considered best practice for this kind of problem?
Thanks in advance!

A timeout precise to milliseconds is out of the reach of any OS that doesn't specifically provide realtime support.
If your system is under heavy load (even temporarily) it's quite possible to have it unresponsive for seconds. Sometimes standard (non-realtime) systems can become very unresponsive for apparently stupid reasons like accessing a CD-Rom device.
Linux has some realtime variants, while for Windows, IIRC, the only realtime solutions put a realtime kernel in charge of the hardware, with the Windows system running on top of it, basically "emulated" in a virtual machine.

clock is not the right choice if you want to be portable. On conforming systems it measures the CPU time used by your process; on Windows it seems to be wall-clock time. Also, usleep is obsolete in current POSIX; you are supposed to use nanosleep instead.
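To make that concrete, here is a minimal sketch of the asker's polling loop using clock_gettime(CLOCK_MONOTONIC) for the deadline (which also survives system-time changes) and nanosleep for the pause; do_something() and is_finished() are the asker's placeholders, declared here only so the sketch compiles:

#include <time.h>

void do_something(void);  // the asker's functions, assumed defined elsewhere
int is_finished(void);

// Monotonic time in milliseconds; unaffected by changes to the system time.
static long long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

void poll_with_timeout(long timeout_ms)
{
    long long deadline = now_ms() + timeout_ms;
    do {
        do_something();
        struct timespec pause = { 0, 50 * 1000 };  // 50 us, as in the original loop
        nanosleep(&pause, NULL);
        if (now_ms() >= deadline)
            break;  // timed out
    } while (!is_finished());
}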
There is one method that could be suitable for a wider range of platforms: select. This call has a fifth argument that lets you specify a timeout. You can (mis)use it to wait for network events that you know will never happen, and that then times out after a controlled amount of time. From the Linux manual on that:
Some code calls select() with all three sets empty, nfds zero, and a non-NULL timeout as a fairly portable way to sleep with subsecond precision.
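Used that way, a millisecond sleep might be sketched as follows (note it does not restart after EINTR, so a caught signal can cut the sleep short):

#include <sys/select.h>

// Sleep for roughly ms milliseconds using select()'s timeout argument
// with all three fd sets empty.
void msleep_select(long ms)
{
    struct timeval tv;
    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv);  // no fds; just wait for the timeout
}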

Related

Execute a function periodically every 10ms on linux

I am currently working on a program that must do some work regularly. At the moment I using the following construction:
int main(int argc, char** argv) {
    for (;;) {
        // Do work - the program spends 5-8 ms here
        nanosleep(...); // Sleep 10 ms
    }
    return 0;
}
The problem is: one loop iteration should always last 10 ms. Because of the large amount of time spent in the working part of the loop, I can't simply sleep for 10 ms...
A solution might be to measure the time spent on work with clock_gettime() and adjust the nanosleep() accordingly. But I am not happy with this solution, because it's very easy to place code outside the area that's measured...
I have searched the internet for alternatives and found these four calls:
timer_create
getitimer
alarm
timerfd_create
It's okay if the solution is not portable to Windows or other operating systems.
But I am not sure which solution fits my problem best. Any advice or suggestions? What are the pros and cons of the 4 alternatives I mentioned?
EDIT: There is also another problem with this solution: if I read the documentation right, the nanosleep syscall puts the process to sleep for at least 10 ms, but it can take longer if the system is under load... Is there any way to optimize that?
EDIT2: For your information: in the do-work part, a network request is made to another device on the network (a microcontroller or a PLC that is able to answer the request in time). The result is processed and sent back to the device. I know Linux is not a realtime OS and not optimal for this kind of task... It's no problem if the solution is not perfectly realtime, but it would be nice to get as close to realtime as possible.
Check the time right before the call to nanosleep and compute how long to sleep right there. There will be no need to measure any code. Just note the time you return from nanosleep and calculate how much more time you need to sleep.
Signal handling is the preferred way to do it. I prefer timer_create, as it is a POSIX function conforming to POSIX.1-2001. Microsoft also provides help for writing POSIX-standard code.
Use clock_nanosleep which lets you request to sleep until a given absolute time (TIMER_ABSTIME flag) rather than taking a duration. Then you can simply increment the destination absolute time by 10ms on every iteration.
Using clock_nanosleep also allows you (if you want) to use the monotonic clock option and avoid having to deal with the possibility of the user/admin resetting the system clock while your program is running.
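A minimal sketch of that approach, assuming do_work() stands in for the 5-8 ms loop body:

#include <time.h>

void do_work(void);  // placeholder for the loop body

// Fixed 10 ms period using absolute deadlines on the monotonic clock,
// so neither work time nor sleep overshoot accumulates as drift.
void run_every_10ms(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        do_work();
        // Advance the absolute deadline by exactly 10 ms.
        next.tv_nsec += 10 * 1000 * 1000;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        // Sleep until the deadline; returns immediately if it already passed.
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}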

Linux, need accurate program timing. Scheduler wake up program

I have a thread running on a Linux system which I need to execute at as accurate an interval as possible, e.g. once every ms.
Currently this is done by creating a timer with
timerfd_create(CLOCK_MONOTONIC, 0)
, and then passing the desired sleep time in a struct with
timerfd_settime (fd, 0, &itval, NULL);
A blocking read call is performed on this timer which halts thread execution and reports lost wakeup calls.
The problem is that at higher frequencies the system starts losing deadlines, even though CPU usage is below 10%. I think this is due to the scheduler not waking the thread often enough to check the blocking call. Is there a command I can use to tell the scheduler to wake the thread at certain intervals, as far as that is possible?
Busy-waiting is a bad option since the system handles many other tasks.
Thank you.
You need to get RT linux*, and then increase the RT priority of the process that you want to wake up at regular intervals.
Other than that, I do not see problems in your code, and if your process is not getting blocked, it should work fine.
(*) RT linux - an os with some real time scheduling patches applied.
One way to reduce scheduler latency is to run your process using the realtime scheduler such as SCHED_FIFO. See sched_setscheduler .
This will generally improve latency a lot, but there's still little guarantee; to further reduce latency spikes, you'll need to move to the realtime branch of Linux, or to a realtime OS such as VxWorks, RTEMS or QNX.
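For reference, a sketch of switching the current process to SCHED_FIFO (requires root or CAP_SYS_NICE; the priority value 50 is an arbitrary mid-range pick):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { 0 };
    sp.sched_priority = 50;
    // 0 means "this process"; SCHED_FIFO runs until it blocks or yields.
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }
    /* ... timing-sensitive loop here ... */
    return 0;
}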
You won't be able to do what you want unless you run it on an actual "Real Time OS".
If this is only Linux on an x86 system, I would choose the HPET timer. I think all modern PCs have this hardware timer built in and it is very, very accurate. It allows you to define a callback that will be called every millisecond, and in this callback you can do your calculations (if they are simple) or just trigger another thread's work using some synchronization object (a condition variable, for example).
Here is an example of how to use this timer: http://blog.fpmurphy.com/2009/07/linux-hpet-support.html
Along with other advice such as setting the scheduling class to SCHED_FIFO, you will need to use a Linux kernel compiled with a high enough tick rate that it can meet your deadline.
For example, a kernel compiled with CONFIG_HZ of 100 or 250 Hz (timer interrupts per second) can never respond to timer events faster than that.
You must also set your timer to be just a little bit faster than you actually need, because timers are allowed to go beyond their requested time but never to expire early; this will give you better results. If you need 1 ms, then I'd recommend asking for 999 us instead.

Delaying for milliseconds in C++ cross-platform

I'm writing a multi-platform internal library in C++ that will eventually run on Windows, Linux, MacOS, and an ARM platform, and need a way to sleep for milliseconds at a time.
I have an accurate method for doing this on the ARM platform, but I'm not sure how to do this on the other platforms.
Is there a way to sleep with millisecond resolution on most platforms or do I have to special-case things for each platform?
For Linux and Mac OS X you can use usleep:
usleep(350 * 1000);
For Windows you can use Sleep:
Sleep(350);
EDIT: usleep() takes microseconds, not milliseconds, hence the factor of 1000 above.
boost::this_thread::sleep()
usleep provides microsecond resolution in theory, but the actual granularity depends on the platform.
It doesn't exist on Windows, so you'll have to use Sleep there (or write your own compatibility layer).
P.S.: Building a program around sleeps is often a road to disaster. Usually, what the programmer really wants is to wait for some event to happen asynchronously. In that case you should look at the waitable objects available on the platform, like semaphores or mutexes or even good ol' file descriptors.
For a timer you could use boost::asio::deadline_timer, synchronously or asynchronously.
You could also look into boost::posix_time for timer precision adjustment between seconds, milliseconds, microseconds and nanoseconds.
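A minimal synchronous sketch, blocking the calling thread for 350 ms:

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::asio::io_service io;
    // Construct a timer that expires 350 ms from now, then block on it.
    boost::asio::deadline_timer t(io, boost::posix_time::milliseconds(350));
    t.wait();
    return 0;
}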
Windows Sleep() does provide millisecond precision, but nowhere near millisecond accuracy. There is always jitter, especially with small values on a heavily-loaded system. Similar problems are only to be expected on other non-realtime OSes. Even if the priority of the thread calling Sleep() is very high, a driver interrupt may introduce an extra delay at any time.
Rgds,
Martin

What is the official way to call a function (C/C++) in ab. every 1/100 sec on Linux?

I have an asynchronous dataflow system written in C++. In a dataflow architecture, the application is a set of component instances which are initialized at startup and then communicate with each other via pre-defined messages. There is a component type called Pulsar which provides a "clock signal message" to the other components connected to it (e.g. Delay). It fires a message (calls the dataflow dispatcher API) every X ms, where X is the value of the "frequency" parameter, given in ms.
In short, the task is just to call a function (method) every X ms. The question is: what's the best/official way to do it? Is there a pattern for it?
There are some methods I found:
Use SIGALRM. I think signalling is not suited to that purpose. Also, the resolution is 1 sec, which is too coarse.
Use a HW interrupt. I don't need that precision, and I'm wary of HW-related solutions (the server is compiled for several platforms, e.g. ARM).
Measure the elapsed time and usleep() until the next call. I'm not sure it's a good idea to have 5 threads each making time-related system calls 10 times every second, but maybe I'm wrong.
Use realtime kernel functions. I don't know anything about them. Also, I don't need crystal-precise calls, it's not a nuclear reactor, and I can't install an RT kernel on some platforms (also, only a 2.6.x kernel is available).
Maybe the best answer would be a short, commented fragment of an audio/video player's source code (which I couldn't find/understand by myself).
UPDATE (requested by @MSalters): The co-author of the DF project is using Mac OS X, so we need a solution that works on most POSIX-compliant operating systems, not only on Linux. Maybe in the future there'll be a target device that uses BSD, or some restricted Linux.
If you do not need hard real-time guarantees, usleep should do the job. If you want hard real-time guarantees then an interrupt based or realtime kernel based function will be necessary.
To be honest, I think having to have a "pulsar" in what claims to be an asynchronous dataflow system is a design flaw. Either it is asynchronous or it has a synchronizing clock event.
If you have a component that needs a delay, have it request one, through boost::asio::deadline_timer::async_wait or any of the lower-level solutions (select() / epoll() / timer_create() / etc.). Either way, the most effective C++ solution is probably the boost.asio timers, since they use whatever is most efficient on your Linux kernel version.
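As a sketch of that asio approach, here is a timer that fires a callback every 100 ms, rescheduling from the previous absolute expiry so drift does not accumulate (the 100 ms period and the on_tick name are just for illustration):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

void on_tick(const boost::system::error_code& /*ec*/,
             boost::asio::deadline_timer* timer)
{
    // ... dispatch the "clock signal" message here ...

    // Reschedule relative to the previous expiry, not to "now".
    timer->expires_at(timer->expires_at() + boost::posix_time::milliseconds(100));
    timer->async_wait(boost::bind(on_tick, boost::asio::placeholders::error, timer));
}

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io, boost::posix_time::milliseconds(100));
    timer.async_wait(boost::bind(on_tick, boost::asio::placeholders::error, &timer));
    io.run();  // runs the timer loop; returns when no work remains
}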
An alternative to the previously mentioned approaches is to use the Timer FD support in Linux Kernels 2.6.25+ (pretty much any distribution that's close to "current"). Timer FDs provide a bit more flexibility than the previous approaches.
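A sketch of a 10 ms periodic timer FD; read() blocks until at least one expiration, and the uint64_t it returns counts expirations since the previous read, so values above 1 reveal missed ticks:

#include <sys/timerfd.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);

    struct itimerspec its;
    its.it_value.tv_sec     = 0;
    its.it_value.tv_nsec    = 10 * 1000 * 1000;  // first expiry in 10 ms
    its.it_interval.tv_sec  = 0;
    its.it_interval.tv_nsec = 10 * 1000 * 1000;  // then every 10 ms
    timerfd_settime(fd, 0, &its, NULL);

    for (;;) {
        uint64_t expirations;
        // Blocks until at least one expiry has occurred.
        if (read(fd, &expirations, sizeof expirations) == (ssize_t)sizeof expirations) {
            if (expirations > 1)
                fprintf(stderr, "missed %llu ticks\n",
                        (unsigned long long)(expirations - 1));
            /* ... periodic work ... */
        }
    }
}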
Neglecting the question of design (which I think is an interesting question, but deserves its own thread)...
I would start off with an "interrupt"-style design, using signals or some kernel facility to interrupt every X usec. I would put off sleep-based approaches until the other ideas proved too painful.

What is the simplest way to write a timer in C/C++?

Hi,
What is the simplest way to write a timer, say in C/C++? Previously I used a for loop and a do-while loop. I used the for loop as a counter and the do-while loop as a comparison for "end of time". The program worked as I wanted it to, but consumed too much system resources.
I'm looking for the simplest way to write a timer.
Thank you!
EDIT:
The program works on a set of servers, both Linux and Windows, so it's a multi-platform environment. I don't want to use the usleep or sleep functions, as I'm trying to write everything from scratch.
The nature of the program: The program counts power time and battery time on systems.
EDIT2:
OK, it seems that this caused some confusion, so I'm going to explain what I have done so far. I've created a program that runs in the background and powers off the system if it's idle for a certain amount of time; it also checks the battery life on a specific system and goes to standby mode if the system has been running on battery for a while. I input the time manually, so I need a timer. I want to write it from scratch as part of a personal project I've been working on.
Your best bet is to use an operating system primitive that suspends the program for a given amount of time (like Sleep() in Windows). The environment where the program will run will most likely have some mechanism for doing this or similar thing. That's the only way to avoid polling and consuming CPU time.
If you just want your program to wait a certain amount of time, you can use:
Sleep (in Windows)
usleep (in Unix)
boost::this_thread::sleep (everywhere)
If you wish to process or display the time counting up until it elapses, your approach of using a while() loop is fine, but you should add a small sleep (20 ms, for example, but ultimately that depends on the precision you require) inside the while loop, so as not to hog the CPU.
There are two ways:
One. Write your own timer that wraps the platform-specific call, and stick to using it.
e.g.
void MySleep::Sleep(int milliSec)
{
#ifdef WIN32
    ::Sleep(milliSec);        // Windows Sleep() takes milliseconds
#else
    usleep(milliSec * 1000);  // usleep() takes microseconds
#endif
}
Two. Choose libraries and toolkits that support all your target platforms. Toolkits like Qt and boost can be used to cover up platform specific goo.
Both boost and Qt have timers with high functionality and are extensible. I recommend you look them up.
http://linux.die.net/man/2/alarm
Description:
alarm() arranges for a SIGALRM signal to be delivered to the calling process in "seconds" seconds (where seconds is its argument).
and use Cygwin on Windows.
What you're already doing is the easiest. It consumes too much CPU because it's running flat out doing your check (has the timer expired?) or whatever. To fix that, put usleep(1), or whatever the OS equivalent of a very short sleep is, in that main loop and you'll have what you need.
You didn't mention the environment you're building a timer in. For example, microcontrollers usually have a timer/counter unit that raise interrupts at some intervals by counting the clock cycles and you can just handle their interrupts.
Use a sleep function and a function pointer. Using a sleep function doesn't consume processor time. You can use the function pointer to notify when the timer expires; if you don't need events, you can simply use a sleep/delay function.
EDIT: Do what smallduck has suggested, using macros to call the appropriate operating-system function (if you want to avoid using boost). Using anything else, the timer won't be accurate.
You can call time() multiple times and compare the values.
#include <time.h>

int main()
{
    time_t start_time;
    time_t current_time;

    start_time = time(NULL);
    current_time = time(NULL);

    /* TIMEOUT is the timeout in seconds, defined elsewhere */
    while (current_time < start_time + TIMEOUT)
    {
        /* Do what you want while you're waiting for the timeout */
        current_time = time(NULL);
    }
    ...
}
The advantage over sleep() is that you can still execute code while you are waiting. For example... polling for an external stop signal, etc.
A lot of these answers include something known as "busy waiting." Checking the time over and over again in a while() loop is a bad idea 99% of the time.
I think you may want to step back and approach the problem a bit differently.
It sounds like you want a program to turn something off under a given set of conditions.
So you have a few options. You can "wake up" your background program every so often and check whether the conditions are met (using sleep / usleep, which are available in some form on practically every operating system).
Or you can background the process indefinitely, until some type of event occurs. This would probably best be accomplished in C via signals or some type of wait function.
It's hard to tell exactly what you want, because it's hard to tell how your standby / turn-off conditions are met and how they are triggered.
You may want your battery-monitor program to do some kind of IPC, or maybe write some kind of dummy file to a known directory when it needs to go to standby. With IPC, you can use some type of wait() function that activates when a signal or IPC message is sent to your background process. With the file method, you could sleep() and check for that file's existence on every wake-up.
You could also easily use networking / sockets to do this. Listen on the loopback interface (127.0.0.1) on a predefined port and wait until data comes in on that socket. When the battery monitor needs to go to standby, it sends a simple message via loopback to your new process. This will use effectively zero CPU.
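A rough sketch of the receiving side, assuming UDP on an arbitrary port (50000 here, purely for illustration):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr = {};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // 127.0.0.1 only
    addr.sin_port        = htons(50000);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);

    char buf[64];
    // recv() blocks, consuming no CPU, until a datagram arrives.
    recv(fd, buf, sizeof buf, 0);
    // ... act on the message, e.g. go to standby ...
    close(fd);
    return 0;
}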
There are probably other ways as well, but I think that should give you a general idea.
It is not a trivial task, because, depending on your requirements, it can be quite complex.
The problem with timers is that if you want a good timer you may need to move beyond C++/C into the realm of OS calls, ending up with an OS-specific solution or using some library like boost to wrap it.
I mainly program in Windows, so my advice comes from that realm:
In Windows you can of course use time(NULL) as some suggested; however, mostly when you are waiting you don't want to bog down the CPU with a loop. Using sleep is one way, but instead I usually take the approach of waiting on an object: either the object signals, or a timeout occurs. E.g. in order to wait for 10 seconds:
res = WaitForSingleObject( someobjecthandle, 10000 );
If the return value indicates a timeout, I know I waited 10 s; otherwise the object signaled in some way and I didn't wait the full 10 s. Using that, you can create an effective timer.
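Spelled out a bit more (hEvent is a placeholder handle, e.g. one created with CreateEvent):

#include <windows.h>

void WaitExample(HANDLE hEvent)
{
    DWORD res = WaitForSingleObject(hEvent, 10000);  // 10,000 ms
    if (res == WAIT_TIMEOUT) {
        // the full 10 s elapsed without the object signaling
    } else if (res == WAIT_OBJECT_0) {
        // the object signaled; we waited less than 10 s
    }
}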
Another approach which is a bit more work is to create a separate timer thread (Windows again) which periodically sends a message to your message loop.
A third approach is to create a thread that is the actual timer: you start the thread with an argument and the thread sleeps for this time (I know you don't want to use sleep, but you can use MsgWaitForMultipleObjects inside the thread instead, to react if you want to kill the timer prematurely), and you do a WaitForSingleObject on the handle of the thread; when it signals, the time is up (or a timeout occurred).
There are more ways to do this, but these are some starting points.
If all you need is a code snippet that lets your program rest, a call to sleep is enough (if you're OK with one-second granularity).
If you need to run multiple timers with a single thread, then maintaining a hash table holding the active timers is a very good method. You use the expiry time to form the hash key. On each timer tick you then search the hash table for timers which have expired.
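A minimal sketch of that idea, with ticks as the key and std::function callbacks as the timers (the class and member names are just for illustration):

#include <unordered_map>
#include <functional>

class TimerTable {
    std::unordered_multimap<long, std::function<void()>> timers_;  // key: expiry tick
    long now_ = 0;
public:
    void add(long ticks_from_now, std::function<void()> cb) {
        timers_.emplace(now_ + ticks_from_now, std::move(cb));
    }
    void tick() {  // call once per timer tick
        ++now_;
        // Only the bucket for the current tick needs to be searched.
        auto range = timers_.equal_range(now_);
        for (auto it = range.first; it != range.second; ++it)
            it->second();  // fire the expired timer
        timers_.erase(range.first, range.second);
    }
};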
You could always play around with threads. One master thread could be scheduling tasks/jobs to be carried out at certain intervals. But then we are into the area of scheduling, which is something that the OS does. So, as GMan said, you're suddenly in the realm of developing your own OS, or mimicking parts of the OS's functionality.
void SimpleTimer(int timeinterval)
{
    time_t starttime, currenttime;
    double difference;

    /* Note: this busy-waits, burning CPU until the interval elapses. */
    starttime = time(NULL);
    do
    {
        currenttime = time(NULL);
        difference = difftime(currenttime, starttime);
    }
    while (difference < timeinterval);
}