Execute a function periodically every 10ms on Linux - C++

I am currently working on a program that must do some work regularly. At the moment I am using the following construction:
int main(int argc, char** argv) {
    for (;;) {
        // Do work - the program spends 5-8ms here
        nanosleep(...); // Sleep 10ms
    }
    return 0;
}
The problem is: one loop iteration should always last 10ms. Because of the amount of time spent in the working part of the loop, I can't simply sleep for 10ms...
A solution might be to measure the time spent on the work with clock_gettime() and adjust the nanosleep() accordingly. But I am not happy with that solution, because it is very easy to place code outside the measured region...
I have searched the internet for alternatives and found these four calls:
timer_create
getitimer
alarm
timerfd_create
It's okay if the solution is not portable to Windows or other operating systems.
But I am not sure which solution fits my problem best. Any advice or suggestions? What are the pros and cons of the four alternatives I mentioned?
EDIT: There is also another problem with this solution: if I read the documentation correctly, the nanosleep syscall puts the process to sleep for at least 10ms, but it can take longer if the system is under load... Is there any way to mitigate that?
EDIT2: For your information: in the "do work" part, a network request is made to another device on the network (a microcontroller or a PLC that is able to answer the request in time). The result is processed and sent back to the device. I know Linux is not a realtime OS and is not optimal for this kind of task... It's no problem if the solution is not perfectly realtime, but it would be nice to get as close to realtime as possible.

Check the time right before the call to nanosleep and compute how long to sleep right there. There is no need to instrument any of the working code: just note the time at which you return from nanosleep and, at the end of the loop body, calculate how much longer you still need to sleep.
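A minimal sketch of that idea, assuming the 10ms period from the question (error handling and EINTR retries omitted):

#include <time.h>

int main() {
    const long period_ns = 10 * 1000 * 1000; // 10ms period
    struct timespec start, now, rest;
    clock_gettime(CLOCK_MONOTONIC, &start);  // time we "woke up"
    for (;;) {
        // do work (5-8ms) ...
        clock_gettime(CLOCK_MONOTONIC, &now);
        long elapsed_ns = (now.tv_sec - start.tv_sec) * 1000000000L
                        + (now.tv_nsec - start.tv_nsec);
        if (elapsed_ns < period_ns) {        // sleep only for the remainder
            rest.tv_sec = 0;
            rest.tv_nsec = period_ns - elapsed_ns;
            nanosleep(&rest, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &start); // note the wake-up time again
    }
}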

Signal handling is the preferred way to do it. I prefer timer_create, as it is a POSIX function conforming to POSIX.1-2001. Microsoft also provides help for writing POSIX-standard code.
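A minimal timer_create sketch, assuming the 10ms period from the question (error handling omitted; link with -lrt on older glibc): the timer delivers SIGALRM every 10ms and the main loop blocks in sigwait until the next tick.

#include <signal.h>
#include <string.h>
#include <time.h>

int main() {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGALRM);
    sigprocmask(SIG_BLOCK, &set, NULL);      // block SIGALRM so sigwait can fetch it

    struct sigevent sev;
    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGALRM;

    timer_t timerid;
    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    struct itimerspec its;
    its.it_value.tv_sec = 0;
    its.it_value.tv_nsec = 10 * 1000 * 1000; // first expiry in 10ms
    its.it_interval = its.it_value;          // then every 10ms
    timer_settime(timerid, 0, &its, NULL);

    for (;;) {
        int sig;
        sigwait(&set, &sig);                 // blocks until the next tick
        // do work here; missed ticks can be detected with timer_getoverrun(timerid)
    }
}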

Use clock_nanosleep, which lets you request a sleep until a given absolute time (the TIMER_ABSTIME flag) rather than for a duration. Then you can simply increment the target absolute time by 10ms on every iteration.
Using clock_nanosleep also allows you (if you want) to use the monotonic clock and avoid dealing with the possibility of the user/admin resetting the system clock while your program is running.
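A minimal sketch of that absolute-deadline pattern, assuming the 10ms period from the question (error handling omitted):

#include <time.h>

int main() {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        next.tv_nsec += 10 * 1000 * 1000;    // advance the deadline by 10ms
        if (next.tv_nsec >= 1000000000L) {   // normalize the timespec
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        // do work (5-8ms) ...
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}

Because the deadline is absolute and always advanced by exactly 10ms, jitter in an individual wake-up does not accumulate over time.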

Related

Best practice for self made timeout

I am trying to implement a timeout in a C++ method which does some polling. The method currently looks like this (without timeout):
do {
    do_something();
    usleep(50);
} while (!is_finished());
The solution should have the following properties:
should survive changes of the system time
timeout in milliseconds (some jitter is acceptable)
POSIX compatible
should not use signals (is part of a library, avoid side effects)
might use Boost
I am currently thinking about using clock() and doing something like this:
start = clock();
do {
    do_something();
    usleep(50); // TODO: do fancy stuff to avoid waiting after the timeout is reached
    if (clock() - start > timeout * CLOCKS_PER_SEC / 1000) break;
} while (!is_finished());
Is this a good solution? I am trying to find the best possible solution as this kind of task seems to come up quite often.
What is considered best practice for this kind of problem?
Thanks in advance!
A timeout precise to milliseconds is out of the reach of any OS that doesn't specifically provide realtime support.
If your system is under heavy load (even temporarily), it's quite possible for it to be unresponsive for seconds. Sometimes standard (non-realtime) systems become very unresponsive for apparently trivial reasons, like accessing a CD-ROM drive.
Linux has some realtime variants; for Windows, IIRC, the only realtime solutions actually run a realtime kernel that manages the hardware, with the Windows system essentially running "emulated" in a virtual machine on top of it.
clock is not the right choice if you want to be portable. On conforming systems it measures the CPU time used by your process; on Windows it seems to be wall-clock time. Also, usleep is obsolete in current POSIX, and you are supposed to use nanosleep.
There is one method that is suitable for a wider range of platforms: select. This call has a fifth argument that lets you specify a timeout. You can (mis)use it to wait for network events that you know will never happen, so the call simply times out after a controlled amount of time. From the Linux manual:
Some code calls select() with all three sets empty, nfds zero, and a
non-NULL timeout as a fairly portable way to sleep with subsecond
precision.
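A minimal sketch of that trick (note that on Linux, select() may modify the timeval to reflect the time not slept):

#include <sys/select.h>

void sleep_ms(long ms) {
    struct timeval tv;
    tv.tv_sec = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv); // no fds to watch, just the timeout
}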

Getting change of time without polling

I want to refresh and have control over time-interval changes. Most programs just have an infinite loop constantly polling the time from time.h and wasting cycles. Is there a way to get clock changes without disturbing the system too much? I am using C/C++ and really want to learn how to do this manually, using only Linux libraries. Most programs need some notion of time.
I want to be notified of system clock updates. I am trying to write a scientific app that responds in real time. sleep() and things like that only let me specify a delay starting from the execution of that statement. localtime() and the string-returning time functions from the C header only give me the time at which they were executed; by the time I use it, it is already too late, because too many nanoseconds have elapsed.
Read the time(7) man page to understand how to use the time-related system calls gettimeofday(2), setitimer(2), clock_gettime(2), timer_create(2), etc., and library functions (strftime, localtime, ...).
If you want to code an application receiving timer events, learn about timers and e.g. the SIGALRM signal. Read signal(7) first.
But you really should read e.g. Advanced Unix Programming and Advanced Linux Programming, and understand what syscalls are.
You might want to use poll(2) for polling or for waiting.
The most basic approach that's also portable and compatible with most other tasks is select. It sleeps until a certain amount of time elapses or a file descriptor becomes ready for I/O, which gives you a way to update the task list before the next task runs. If you don't need the interruption capability, you can just use sleep (or usleep).

What is the official way to call a function (C/C++) about every 1/100 sec on Linux?

I have an asynchronous dataflow system written in C++. In a dataflow architecture, the application is a set of component instances which are initialized at startup and then communicate with each other using pre-defined messages. There is a component type called Pulsar, which provides a "clock signal message" to the other components connected to it (e.g. Delay). It fires a message (calls the dataflow dispatcher API) every X ms, where X is the value of its "frequency" parameter, given in ms.
In short, the task is just to call a function (method) every X ms. The question is: what's the best/official way to do it? Is there a pattern for it?
Here are some methods I found:
Use SIGALRM. I don't think signalling suits this purpose. Besides, alarm()'s resolution is 1 second, which is far too coarse.
Use a HW interrupt. I don't need that level of precision. Also, I'm wary of using HW-related solutions (the server is compiled for several platforms, e.g. ARM).
Measure the elapsed time and usleep() until the next call. I'm not sure it's a good idea to have 5 threads each making time-related system calls 10 times every second, but maybe I'm wrong.
Use realtime kernel functions. I don't know anything about them. Also, I don't need crystal-precise calls (it's not a nuclear reactor), and I can't install an RT kernel on some platforms (only a 2.6.x kernel is available).
Maybe the best answer would be a short, commented excerpt from an audio/video player's source code (which I couldn't find or understand on my own).
UPDATE (requested by @MSalters): The co-author of the DF project is using Mac OS X, so we should find a solution that works on most POSIX-compliant operating systems, not only Linux. Maybe in the future there'll be a target device running BSD or some restricted Linux.
If you do not need hard realtime guarantees, usleep should do the job. If you want hard realtime guarantees, then an interrupt-based or realtime-kernel-based solution will be necessary.
To be honest, I think having to have a "pulsar" in what claims to be an asynchronous dataflow system is a design flaw. Either it is asynchronous or it has a synchronizing clock event.
If you have a component that needs a delay, have it request one, through boost::asio::deadline_timer::async_wait or any of the lower-level solutions (select() / epoll() / timer_create() / etc.). Either way, the most effective C++ solution is probably the Boost.Asio timers, since they use whatever is most efficient on your Linux kernel version.
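A hedged sketch of the Boost.Asio approach (pulse() and the 10ms period are illustrative names, not from the question); the timer re-arms itself from its previous absolute expiry, so drift does not accumulate:

#include <boost/asio.hpp>

boost::asio::io_service io;
boost::asio::deadline_timer timer(io);

void pulse(const boost::system::error_code&) {
    // call the dataflow dispatcher API here ...
    timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(10));
    timer.async_wait(&pulse);                // re-arm for the next tick
}

int main() {
    timer.expires_from_now(boost::posix_time::milliseconds(10));
    timer.async_wait(&pulse);
    io.run();                                // dispatches pulse() every 10ms
}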
An alternative to the previously mentioned approaches is to use the Timer FD support in Linux Kernels 2.6.25+ (pretty much any distribution that's close to "current"). Timer FDs provide a bit more flexibility than the previous approaches.
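A minimal timerfd sketch (Linux 2.6.25+, error handling omitted); since the timer is a plain file descriptor, it can also be multiplexed with poll()/select()/epoll:

#include <sys/timerfd.h>
#include <stdint.h>
#include <unistd.h>

int main() {
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);

    struct itimerspec its;
    its.it_value.tv_sec = 0;
    its.it_value.tv_nsec = 10 * 1000 * 1000;     // first tick in 10ms
    its.it_interval = its.it_value;              // then every 10ms
    timerfd_settime(fd, 0, &its, NULL);

    for (;;) {
        uint64_t expirations;
        read(fd, &expirations, sizeof(expirations)); // blocks until a tick
        // do work; expirations > 1 means ticks were missed
    }
}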
Neglecting the question of design (which I think is an interesting question, but deserves its own thread)...
I would start off by designing an "interrupt" mechanism, using signals or some kernel facility to interrupt every X usec. I would put off resorting to sleep functions until the other ideas proved too painful.

Sleep() becomes less accurate after replacing a PC? (C++)

I have a program that was built in C++ (MFC, Visual Studio 6.0) several years ago and has been running on a certain Windows machine for quite some time (more than 5 years). The PC was replaced a month ago (the old one died), and since then the program's timing behavior has changed. I need help understanding why.
The main functionality of the program is to respond to keystrokes by sending out ON and OFF signals to an external card, with a very accurate delay between the ON and the OFF. An example program flow:
> wait for keystroke...
> ! keystroke occurred
> send ON message
> wait 150ms
> send OFF message
Different keystrokes have different waiting periods associated with them, between 20ms and 150ms (a very deterministic time depending on the specific keystroke). The timing is very important. The waiting is executed using a simple Sleep(). The accuracy of the sleep on the old PC was within 1-2ms. I can measure the timing externally to the computer (on the external card), so my measurement of the sleep time is very accurate. Please take into account that this machine executed such ON-sleep-OFF cycles thousands of times a day for years, so the accuracy data I have is sound.
Since the PC was replaced the timing deviation is more than 10ms.
I did not install the previous PC, so it may have had some additional software packages installed. Also, I'm ashamed to admit I don't remember whether the previous PC ran Windows 2000 or Windows XP. I'm quite sure it was XP, but not 100% (and I can't check now...). The new one is Windows XP.
I tried changing the sleeping mechanism to be based on timers, but the accuracy did not improve.
Can anything explain this change? Is there a software package that may have been installed on the previous PC that may fix the problem? Is there a best practice to deal with the problem?
The timer resolution on XP is around 10ms - the system basically "ticks" every 10ms. Sleep is not a very good way to do accurate timing for that reason. I'm pretty sure Win2000 has the same resolution, but if I'm wrong, that could be a reason.
You can change that resolution, at least down to 1ms - see http://technet.microsoft.com/en-us/sysinternals/bb897569.aspx or use http://www.lucashale.com/timerresolution/ - there's probably a registry key as well (Windows Media Player will change that timer too, though probably only while it's running).
Maybe the resolution was somehow altered on your old machine.
If your main concern is precision, consider using a spinlock. The Sleep() function is a hint to the scheduler not to re-schedule the given thread for at least x ms; there is no guarantee that the thread will sleep for exactly the time specified.
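A hedged sketch of that spin-wait idea using the high-resolution performance counter. Note that it pegs one core at 100% for the whole interval; a common compromise is to Sleep() for most of the interval and spin only for the last millisecond or two.

#include <windows.h>

void spin_wait_ms(double ms) {
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    const LONGLONG ticks = (LONGLONG)(ms * freq.QuadPart / 1000.0);
    do {                                     // busy-wait until the interval elapses
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < ticks);
}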
Usually Sleep() will result in a delay of ~15ms, or a period that is a multiple of ~15ms, depending on the sleep value.
A good way to see how it behaves is the following snippet:
for (;;) {
    printf("%lu\n", GetTickCount());
    Sleep(1);
}
It will also show that the behavior of this code differs between, say, Windows XP and Vista/Win 7.
As others have mentioned, sleep has coarse accuracy.
I typically use Boost.Asio for this kind of timing:
// Set up the io_service and deadline_timer
boost::asio::io_service io;
boost::asio::deadline_timer timer(io);

// Configure the wait period and block until the timer expires
timer.expires_from_now(boost::posix_time::millisec(5));
timer.wait();
Asio uses the most efficient implementation for your platform; on Windows I believe it uses overlapped I/O.
If I set the time period to 1ms and loop the timer calls 10000 times, the total duration is typically about 10005-10100 ms. Very accurate, cross-platform code (though accuracy differs on Linux) and very easy to read.
I can't explain why your previous PC was so accurate, though; Sleep has been +/- 10ms whenever I've used it - worse when the PC is busy.
Is your new PC multi-core while the old one was single-core? The difference in timing accuracy may be due to the use of multiple threads and context switching.
Sleep is dependent on the system clock. Your new machine probably has a different timing than your previous machine. From the documentation:
This function causes a thread to relinquish the remainder of its time slice and become unrunnable for an interval based on the value of dwMilliseconds. The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on. To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum. Use caution when calling timeBeginPeriod, as frequent calls can significantly affect the system clock, system power usage, and the scheduler. If you call timeBeginPeriod, call it one time early in the application and be sure to call the timeEndPeriod function at the very end of the application.
The documentation seems to imply that you can attempt to make it more accurate, but I wouldn't try that if I were you. Just use a timer.
What timers did you replace it with? If you used SetTimer(), that timer sucks too.
The correct solution is to use the higher-resolution TimerQueueTimer.
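A hedged sketch of a timer-queue timer (error handling omitted; the 150ms delay is taken from the question): the callback fires once on a thread-pool thread after the due time.

#include <windows.h>

VOID CALLBACK OnTimer(PVOID param, BOOLEAN timerFired) {
    // send the OFF message here ...
}

int main() {
    HANDLE timer = NULL;
    // due time 150ms, period 0 => fire once
    CreateTimerQueueTimer(&timer, NULL, OnTimer, NULL, 150, 0, WT_EXECUTEDEFAULT);
    Sleep(1000);                                              // keep the demo process alive
    DeleteTimerQueueTimer(NULL, timer, INVALID_HANDLE_VALUE); // wait for the callback
    return 0;
}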

What is the simplest way to write a timer in C/C++?

Hi,
What is the simplest way to write a timer, say in C/C++? Previously I used a for loop and a do-while loop. I used the for loop as a counter and the do-while loop as a comparison for "end of time". The program worked as I wanted it to, but consumed too many system resources.
I'm looking for the simplest way to write a timer.
Thank you!
EDIT:
The program works on a set of servers, both Linux and Windows, so it's a multiplatform environment. I don't want to use the usleep or sleep functions, as I'm trying to write everything from scratch.
The nature of the program: it counts power time and battery time on systems.
EDIT2:
OK, it seems that this caused some confusion, so I'm going to try to explain what I have done so far. I've created a program that runs in the background and powers off the system if it's idle for a certain amount of time; it also checks the battery life on a specific system and goes to standby mode if the system has been running on battery for a while. I input the time manually, so I need a timer. I want to write it from scratch, as it's part of a personal project I've been working on.
Your best bet is to use an operating system primitive that suspends the program for a given amount of time (like Sleep() in Windows). The environment where the program runs will most likely have some mechanism for doing this or something similar. That's the only way to avoid polling and consuming CPU time.
If you just want your program to wait a certain amount of time, you can use:
Sleep (in Windows)
usleep (in Unix)
boost::this_thread::sleep (everywhere)
If you wish to process or display the time counting up until the timeout elapses, your approach of using a while() loop is fine, but you should add a small sleep (20ms, for example; ultimately that depends on the precision you require) inside the loop, so as not to hog the CPU.
There are two ways:
One. Write your own timer that wraps the platform-specific call, and stick to using it.
e.g.
void MySleep::Sleep(int milliSec)
{
#ifdef WIN32
    ::Sleep(milliSec);        // Win32 Sleep() takes milliseconds
#else
#ifdef LINUX
    usleep(milliSec * 1000);  // usleep() takes microseconds
#endif
#endif
}
Two. Choose libraries and toolkits that support all your target platforms. Toolkits like Qt and Boost can be used to cover up the platform-specific goo.
Both Boost and Qt have timers with high functionality and are extensible. I recommend you look them up.
http://linux.die.net/man/2/alarm
Description:
alarm() arranges for a SIGALRM signal to be delivered to the process in seconds seconds.
and use Cygwin on Windows.
What you're already doing is the easiest.
It consumes too much CPU because it's spinning flat out doing your check (has the timer expired?) or whatever.
To fix that, put usleep(1) (or whatever the OS equivalent of a very short sleep is) in that main loop, and you'll have what you need.
You didn't mention the environment you're building a timer in. For example, microcontrollers usually have a timer/counter unit that raises interrupts at set intervals by counting clock cycles, and you can just handle those interrupts.
Use a sleep function... and a function pointer.
Using a sleep function doesn't consume processor time. You can use the function pointer to be notified when the timer expires; if you don't need events, you can simply use a sleep/delay function.
Edit: do what smallduck has suggested, using macros to correctly call the appropriate operating system call (if you want to avoid using Boost). Using anything other than a timer won't be accurate.
You can call time() multiple times and compare the values.
#include <time.h>

int main ()
{
    time_t start_time;
    time_t current_time;

    start_time = time(NULL);
    current_time = time(NULL);

    while (current_time < start_time + TIMEOUT) /* TIMEOUT in seconds */
    {
        /* Do what you want while you're waiting for the timeout */
        current_time = time(NULL);
    }
    ...
}
The advantage over sleep() is that you can still execute code while you are waiting, for example polling for an external stop signal, etc.
A lot of these answers include something known as "busy waiting." Checking the time over and over again in a while() loop is a bad idea 99% of the time.
I think you may want to step back and approach the problem a bit differently.
It sounds like you want a program to turn something off under a given set of conditions.
So you have a few options. You can "wake up" your background program every so often and check if conditions are met (using sleep / usleep, which are standard functions in all languages on all operating systems).
Or you can background the process indefinitely, until some type of event occurs. This would probably best be accomplished in C via signals or some type of wait function.
It's hard to tell exactly what you want, because it's hard to tell how your standby/turn-off conditions are met and how they are triggered.
You may want your battery monitor program to do some type of IPC, or perhaps write some type of dummy file to a known directory when it needs to go to standby. With IPC, you can use some type of wait() function that activates when a signal or IPC message is sent to your background process. With the file method, you could sleep() and check for that file's existence on every wake-up.
You could also easily use networking/sockets to do this. Listen on the loopback interface (127.0.0.1) on a predefined port, and wait until data comes in on that socket. When the battery monitor needs to go to standby, it sends a simple message via loopback to your new process. This uses effectively 0 CPU.
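A hedged sketch of that loopback idea (error handling omitted; port 5599 is an arbitrary illustrative choice): the process blocks in recvfrom() and uses effectively no CPU until a datagram arrives.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5599);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    bind(s, (struct sockaddr*)&addr, sizeof(addr));

    char msg[64];
    recvfrom(s, msg, sizeof(msg), 0, NULL, NULL); // blocks until the monitor sends
    // ... go to standby here ...
}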
There are probably other ways as well, but I think that should give you a general idea.
It is not a trivial task because, depending on your requirements, it can be quite complex.
The problem with timers is that if you want a good timer, you may need to move beyond C++/C into the realm of OS calls, causing you to end up with an OS-specific solution or to use a library like Boost to wrap it.
I mainly program in Windows so my advice come from that realm:
In Windows you can of course poll time(NULL) as some have suggested; however, while you are waiting you usually don't want to bog down the CPU with a loop. Using sleep is one way, but instead I usually take the approach of waiting on an object: either the object signals or a timeout occurs. E.g., in order to wait for 10 seconds:
res = WaitForSingleObject( someobjecthandle, 10000 );
If the return value is WAIT_TIMEOUT, I know I waited 10s; otherwise the object was signaled in some way and I didn't wait the full 10s. Using that, you can create an effective timer.
Another approach which is a bit more work is to create a separate timer thread (Windows again) which periodically sends a message to your message loop.
A third approach is to create a thread that is the actual timer: you start the thread with an argument and the thread sleeps for that time (I know you don't want to use sleep, but you can use MsgWaitForMultipleObjects inside the thread to react if you want to kill the timer prematurely), while you do a WaitForSingleObject on the handle of the thread; when it signals, the time is up (or a timeout occurred).
There are more ways to do this, but these are some starting points.
If all you need is a code snippet that lets your program rest, a call to sleep is enough (if you're OK with second granularity).
If you need to run multiple timers with a single thread, then maintaining a hash table of the active timers is a very good method. You use the expiry time to form the hash key. On each timer tick you then search the hash table for the timers which have expired, as in the sketch below.
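A hedged sketch of that scheme (the class and method names are illustrative): timers are hashed on their expiry tick, and each tick only that bucket is searched. Callbacks must not add timers from inside tick() in this simple version, since inserting can invalidate the iterators.

#include <functional>
#include <unordered_map>

class TimerTable {
    std::unordered_multimap<long, std::function<void()>> timers_; // expiry tick -> callback
    long now_ = 0;                                                // current tick count

public:
    void add(long ticksFromNow, std::function<void()> cb) {
        timers_.emplace(now_ + ticksFromNow, std::move(cb));
    }

    // Call once per timer tick, driven by any of the sleeping mechanisms above.
    void tick() {
        ++now_;
        auto expired = timers_.equal_range(now_);
        for (auto it = expired.first; it != expired.second; ++it)
            it->second();                      // timer expired: run its callback
        timers_.erase(expired.first, expired.second);
    }
};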
You could always play around with threads. One master thread could schedule tasks/jobs to be carried out at certain intervals. But then we are in the area of scheduling, which is something the OS does. So, as GMan said, you're suddenly in the realm of developing your own OS, or mimicking parts of the OS's functionality.
void SimpleTimer(int timeinterval)
{
    time_t starttime, currenttime;
    double difference;

    starttime = time(NULL);
    do
    {
        // Busy-waits: burns CPU until the interval (in seconds) has elapsed.
        currenttime = time(NULL);
        difference = difftime(currenttime, starttime);
    }
    while (difference < timeinterval);
}