I am trying to implement a periodic event in C++ on an embedded device with limited library functions available. I cannot use sleep or other delays, since they stop the current execution.
For example:
do
{
    if (something)
    {
        // Do something
    }
    if (something)
    {
        // Do something
    }
    if (every 5 minutes)
    {
        // Do something only once after 5 minutes
    }
} while (true);
I am not sure how to go about this. Can you help me implement this inside the do-while loop? I would have used threads, but that's not really possible on this device, so I'm looking for another way.
Well, of course, the advice in the comments to spawn a separate thread (that's what I would do) or to use alarm() is good advice, but may not be possible on your device. However, focusing on your current approach, the general idea is, in pseudo-code:
last_action_time = current time
while (true) {
    do things
    if current time - last_action_time >= 5 minutes then
        perform action
        last_action_time = current time
        // or last_action_time += 5 minutes if you want to mitigate drift
    end
}
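As a concrete C++ sketch of that pattern (here `get_ms()` is a hypothetical stand-in for whatever millisecond tick source your platform provides; the `clock_gettime()` body is only so the sketch runs on a POSIX host):

```cpp
#include <cstdint>
#include <ctime>

// Hypothetical tick source: on a real device, replace the body with your
// platform's millisecond counter. clock_gettime() is used here only so the
// sketch runs on a POSIX host.
static uint32_t get_ms()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint32_t)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000);
}

// True once at least interval_ms ticks have passed since last_ms.
// Unsigned subtraction keeps this correct across counter wraparound.
static bool period_elapsed(uint32_t now_ms, uint32_t last_ms, uint32_t interval_ms)
{
    return (uint32_t)(now_ms - last_ms) >= interval_ms;
}

void main_loop()
{
    const uint32_t interval_ms = 5u * 60u * 1000u; // 5 minutes
    uint32_t last_action = get_ms();

    do
    {
        // ... the other if(something) work ...

        if (period_elapsed(get_ms(), last_action, interval_ms))
        {
            // Do something only once every 5 minutes
            last_action += interval_ms; // += rather than = mitigates drift
        }
    } while (true);
}
```

Note the unsigned subtraction in `period_elapsed`: it stays correct even when a 32-bit tick counter wraps around, which matters on long-running embedded systems.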
So now you just need to pick your favorite way to get the current time. On a desktop platform you could use e.g. time(), gettimeofday(), GetTickCount() on Windows, etc. Some embedded platforms may provide those as well. However, this could be problematic:
Since you mentioned in a comment that you are working on an embedded device, "your favorite way to get the current time" will vary with the platform; you'll have to check the device docs. If you're lucky, your MCU may provide a way to query a CPU cycle counter, and you usually know the frequency already from your setup. But it may not.
If it does not, another alternative is an on-board hardware timer, if your device provides one. You may be able to configure it to generate periodic interrupts, in which case your two options are generally:
Do the action in the timer interrupt handler - but only do this if it's short, and you take proper precautions for interrupt-safety.
Set a volatile flag in the timer interrupt handler, check that flag in the main loop and execute the action + reset the flag there if it is set. This is generally a much simpler approach.
Doing it that way is architecturally similar to using alarm(). There are also certainly embedded platforms that implement time() and/or gettimeofday(). But those are your choices.
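The second option, setting a volatile flag from the interrupt handler, might be sketched like this (all names here are hypothetical; how you register `timer_isr` as the hardware timer's handler is entirely platform-specific):

```cpp
#include <cstdint>

// Set in interrupt context, read in the main loop, hence volatile.
// For a single byte-sized flag this is a common bare-metal idiom; wider
// shared data needs more care (atomics or interrupt masking).
static volatile uint8_t g_timer_fired = 0;

static int g_actions_done = 0; // just to make the sketch observable

// This would be registered as the hardware timer's interrupt handler.
void timer_isr()
{
    g_timer_fired = 1; // keep the handler as short as possible
}

// Called from the main do/while loop on every iteration.
void poll_timer_flag()
{
    if (g_timer_fired)
    {
        g_timer_fired = 0;  // clear first so the next tick isn't lost
        ++g_actions_done;   // the "do something every 5 minutes" action
    }
}
```

The action runs in the main loop's context rather than the interrupt's, which avoids most interrupt-safety concerns.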
This sounds like you're looking for two different capabilities: one, to dispatch a command without disrupting execution; two, to wait for a certain amount of time. For the first, you're looking for asynchronous programming. For the second, I recommend boost::asio::deadline_timer. To combine the two, look at boost::asio::io_service, which lets you register asynchronous callbacks with or without a timer.
Related
I am currently working on a program that must do some work regularly. At the moment I am using the following construction:
int main(int argc, char** argv) {
    for (;;) {
        // Do work - the program spends 5-8 ms here
        nanosleep(...); // Sleep 10 ms
    }
    return 0;
}
The problem is: one loop iteration should always last 10 ms. Because of the large amount of time spent in the working part of the loop, I can't simply sleep for 10 ms...
A solution might be to measure the time spent on work with clock_gettime() and adjust the nanosleep() accordingly. But I am not happy with this solution, because it's very easy to place code outside the measured area...
I have searched the internet for alternatives and found these four calls:
timer_create
getitimer
alarm
timerfd_create
It's okay if the solution is not portable to Windows or other operating systems.
But I am not sure which solution fits my problem best. Any advice or suggestions? What are the pros and cons of the four alternatives I mentioned?
EDIT: There is also another problem with this solution: if I read the documentation right, the nanosleep syscall puts the process to sleep for at least 10 ms, but it can take longer if the system is under load... Is there any way to optimize that?
EDIT2: For your information: in the do-work part, a network request is made to another device on the network (a microcontroller or a PLC that is able to answer the request in time). The result is processed and sent back to the device. I know Linux is not a realtime OS and not optimal for this kind of task... It's no problem if the solution is not perfectly realtime, but it would be nice to get as close to realtime as possible.
Check the time right before the call to nanosleep and compute how long to sleep right there. There is no need to measure any code: just note the time you return from nanosleep and calculate how much longer you need to sleep.
Signal handling is the most preferred way to do it. I prefer timer_create, as it is a POSIX function conforming to POSIX.1-2001. Microsoft also provides help for writing POSIX-standard code.
Use clock_nanosleep, which lets you request a sleep until a given absolute time (TIMER_ABSTIME flag) rather than for a duration. Then you can simply increment the target absolute time by 10 ms on every iteration.
Using clock_nanosleep also allows you (if you want) to use the monotonic clock option and avoid having to deal with the possibility of the user/admin resetting the system clock while your program is running.
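A minimal sketch of that approach, assuming Linux/POSIX (the question allows non-portable solutions), might look like:

```cpp
#include <ctime>

// Sketch of a drift-free 10 ms period using clock_nanosleep with an
// absolute deadline on CLOCK_MONOTONIC. Because the deadline is advanced
// by exactly 10 ms each iteration, the 5-8 ms of work does not accumulate
// as drift.
void run_periodic(int iterations)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < iterations; ++i)
    {
        // ... do the 5-8 ms of work here ...

        // Advance the absolute deadline by exactly 10 ms.
        next.tv_nsec += 10 * 1000 * 1000;
        if (next.tv_nsec >= 1000000000L)
        {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        // Sleeps until the deadline; if the work overran it, returns at once.
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```

If an iteration overruns its deadline, clock_nanosleep returns immediately and the next deadlines stay on the original 10 ms grid, so the loop catches up rather than slipping.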
There is an easy way to calculate the duration of any function, described here: How to Calculate Execution Time of a Code Snippet in C++
start_timestamp = get_current_uptime();
// measured algorithm
duration_of_code = get_current_uptime() - start_timestamp;
But it does not give a clean duration, because time spent executing other threads is included in the measured time.
So the question is: how do I account for the time the code spends in other threads?
OS X code preferred, although it's also great to look at Windows or Linux code...
Update: the ideal (?) concept of the code:
start_timestamp = get_this_thread_current_uptime();
// measured algorithm
duration_of_code = get_this_thread_current_uptime() - start_timestamp;
I'm sorry to say that in the general case there is no way to do what you want. You are looking for the worst-case execution time (WCET), and there are several methods to get a good approximation of it, but there is no perfect way, as computing WCET is equivalent to the halting problem.
If you want to exclude the time spent in other threads then you could disable task context switches upon entering the function that you want to measure. This is RTOS dependent but one possibility is to raise the priority of the current thread to the maximum. If this thread is max priority then other threads won't be able to run. Remember to reset the thread priority again at the end of the function. This measurement may still include the time spent in interrupts, however.
Another idea is to disable interrupts altogether. This could remove other threads and interrupts from your measurement. But with interrupts disabled the timer interrupt may not function properly. So you'll need to setup a hardware timer appropriately and rely on the timer's counter value register (rather than any time value derived from a timer interrupt) to measure the time. Also make sure your function doesn't call any RTOS routines that allow for a context switch. And remember to restore interrupts at the end of your function.
Another idea is to run the function many times and record the shortest duration measured over those many times. Longer durations probably include time spent in other threads but the shortest duration may be just the function with no other threads.
Another idea is to set a GPIO pin upon entry to and clear it upon exit from the function. Then monitor the GPIO pin with an oscilloscope (or logic analyzer). Use the oscilloscope to measure the period for when the GPIO pin is high. In order to remove the time spent in other threads you would need to modify the RTOS scheduler routine that selects the thread to run. Clear the GPIO pin in the scheduler when another thread runs and set it when the scheduler returns to your function's thread. You might also consider clearing the GPIO pin in interrupt handlers.
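The "shortest duration over many runs" idea above can be sketched like this (clock_gettime() stands in for the time source here; on a bare-metal target you would read a hardware counter register instead):

```cpp
#include <algorithm>
#include <cstdint>
#include <ctime>

// Nanosecond timestamp from the monotonic clock (POSIX host stand-in for
// a hardware counter).
static int64_t now_ns()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

// Run `measured` many times and keep the shortest duration. Runs that were
// lengthened by preemption or interrupts are discarded by taking the minimum.
template <typename F>
int64_t min_duration_ns(F measured, int runs)
{
    int64_t best = INT64_MAX;
    for (int i = 0; i < runs; ++i)
    {
        const int64_t t0 = now_ns();
        measured();
        best = std::min(best, now_ns() - t0);
    }
    return best;
}
```

This only approximates "time with no other threads": if every run was preempted at least once, the minimum is still an overestimate.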
Your question is entirely OS dependent. The only way you can accomplish this is to somehow get a guarantee from the OS that it won't preempt your process to perform some other task, and to my knowledge this is simply not possible in most consumer OSes.
RTOSes often do provide ways to accomplish this, though. With Windows CE, anything running at priority 0 (in theory) won't be preempted by another thread unless it makes a function/OS API/library call that requires servicing from another thread.
I'm not super familiar with OS X, but after glancing at the documentation, OS X is a "soft" realtime operating system. This means that technically what you want can't be guaranteed: the OS may decide that there is something more important than your process that needs to be done.
OS X does, however, allow you to specify a real-time process, which means the OS will make every effort to honor your request not to be interrupted and will only do so if it deems it absolutely necessary.
The Mac OS X scheduling documentation provides examples of how to set up real-time threads.
OS X is not an RTOS, so the question is mistitled and mistagged.
In a true RTOS you can lock the scheduler, disable interrupts or raise the task to the highest priority (with round-robin scheduling disabled if other tasks share that priority) to prevent preemption - although only interrupt disable will truly prevent preemption by interrupt handlers. In a GPOS, even if it has a priority scheme, that normally only controls the number of timeslices allowed to a process in what is otherwise round-robin scheduling, and does not prevent preemption.
One approach is to make many repeated tests and take the smallest value obtained, since that is likely to be the one with the fewest preemptions. It will also help to set the process to the highest priority in order to minimise the number of preemptions. But bear in mind that on a GPOS many interrupts from devices such as the mouse, keyboard, and system clock will occur and consume a small (and possibly negligible) amount of time.
I am trying to implement a timeout in a C++ method which does some polling. The method currently looks like this (without timeout):
do {
do_something();
usleep(50);
} while(!is_finished());
The solution should have the following properties:
should survive changes of the system time
timeout in milliseconds (some jitter is acceptable)
POSIX compatible
should not use signals (is part of a library, avoid side effects)
might use Boost
I am currently thinking about using clock() and do something like this:
start = clock();
do {
do_something();
usleep(50); // TODO: do fancy stuff to avoid waiting after the timeout is reached
if(clock() - start > timeout * CLOCKS_PER_SEC / 1000) break;
} while(!is_finished());
Is this a good solution? I am trying to find the best possible solution as this kind of task seems to come up quite often.
What is considered best practice for this kind of problem?
Thanks in advance!
A timeout precise to milliseconds is out of the reach of any OS that doesn't specifically provide realtime support.
If your system is under heavy load (even temporarily) it's quite possible to have it unresponsive for seconds. Sometimes standard (non-realtime) systems can become very unresponsive for apparently stupid reasons like accessing a CD-Rom device.
Linux has some realtime variations, while for Windows, IIRC, the only realtime solutions actually use a realtime kernel that manages the hardware, with the Windows system basically run "emulated" in a virtual machine on top of it.
clock is not the right choice if you want to be portable. On conforming systems this is the CPU time used by your process; on Windows it seems to be wall-clock time. Also, usleep is obsolete in current POSIX, and you are supposed to use nanosleep.
There is one method that could be suitable for a wider range of platforms: select. This call has a fifth argument that lets you place a timeout. You can (mis)use that to wait for network events that you know will never happen, which then times out after a controlled amount of time. From the Linux manual on that:
Some code calls select() with all three sets empty, nfds zero, and a non-NULL timeout as a fairly portable way to sleep with subsecond precision.
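Putting the advice together (no clock(), nanosleep instead of the obsolescent usleep, a monotonic clock that survives system-time changes, no signals), a sketch of the original polling loop with a timeout might look like this; `do_something` and `is_finished` are the question's placeholders, passed in so the sketch is self-contained:

```cpp
#include <ctime>

// Milliseconds elapsed since *start on the monotonic clock, which is
// unaffected by the user/admin changing the system time.
static long elapsed_ms(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) * 1000L
         + (now.tv_nsec - start->tv_nsec) / 1000000L;
}

// Returns true if is_finished() became true, false on timeout.
bool poll_with_timeout(bool (*is_finished)(), void (*do_something)(),
                       long timeout_ms)
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);

    do
    {
        do_something();
        struct timespec pause = {0, 50000L}; // 50 us, as in the original
        nanosleep(&pause, NULL);
        if (elapsed_ms(&start) > timeout_ms)
            return false;
    } while (!is_finished());
    return true;
}
```

This is POSIX-only; a Boost equivalent would use boost::chrono's steady_clock for the same monotonic guarantee.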
I want to refresh and have control of time-interval changes. Most people just run an infinite loop constantly polling the time from time.h, wasting cycles. Is there a way to get clock changes without disturbing the system too much? I am using C/C++ and really want to learn how to do this manually, using only Linux libraries. Most programs need a notion of time.
I want to be notified of system clock updates. I am trying to write a scientific app that responds in real time. sleep() and things like that only let me specify a delay starting from the execution of that statement. localtime() and the string-returning time functions from the C header only give me the specific time at which they were executed. If I use that, the time is already too late; too many nanoseconds have elapsed.
Read the time(7) man page to understand how to use the system calls gettimeofday(2), setitimer(2), clock_gettime(2), timer_create(2), etc., and the library functions (strftime, localtime, ...) related to time.
If you want to code an application receiving timer events, learn about timers and e.g. the SIGALRM signal. Read signal(7) first.
But you really should read e.g. Advanced Unix Programming and Advanced Linux Programming and understand what syscalls are.
You might want to use poll(2) for polling or for waiting.
The most basic approach that's also portable and compatible with most other tasks is select. It sleeps until a certain amount of time elapses or a file becomes ready for I/O, which gives you a way to update the task list before the next task occurs. If you don't need interruption capability, you can just use sleep (or usleep).
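The select-as-sleep trick mentioned above can be reduced to a few lines: empty fd sets, nfds of zero, and only the timeout argument doing any work.

```cpp
#include <cstddef>
#include <ctime>
#include <sys/select.h>

// Sleep for roughly `ms` milliseconds using only select()'s timeout.
// With no fd sets, select() just waits until the timeout expires.
void select_sleep_ms(long ms)
{
    struct timeval tv;
    tv.tv_sec = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv);
}
```

To use select for the task-list case described above, you would pass a real fd set (e.g. a pipe other code writes to when the task list changes) so the wait can be interrupted early.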
What is the simplest way to write a timer in C/C++?
Hi,
What is the simplest way to write a timer, say in C/C++? Previously I used a for loop and a do-while loop: the for loop as a counter and the do-while loop as a comparison for "end of time". The program worked as I wanted it to, but consumed too much system resources.
I'm looking for the simplest way to write a timer.
Thank you!
EDIT:
The program works on a set of servers, both Linux and Windows, so it's a multiplatform environment. I don't want to use the usleep or sleep functions, as I'm trying to write everything from scratch.
The nature of the program: The program counts power time and battery time on systems.
EDIT2:
OK, it seems that this caused some confusion, so I'm going to try to explain what I have done so far. I've created a program that runs in the background and powers off the system if it's idle for a certain amount of time; it also checks the battery life on a specific system and goes to standby mode if the system has been running on battery for a while. I input the time manually, so I need a timer. I want to write it from scratch as part of a personal project I've been working on.
Your best bet is to use an operating system primitive that suspends the program for a given amount of time (like Sleep() in Windows). The environment where the program will run will most likely have some mechanism for doing this or similar thing. That's the only way to avoid polling and consuming CPU time.
If you just want your program to wait a certain amount of time, you can use:
Sleep (in Windows)
usleep (in Unix)
boost::this_thread::sleep (everywhere)
If you wish to process or display the time counting up until elapsed, your approach of using a while() loop is fine, but you should add a small sleep (20 ms, for example, though ultimately that depends on the precision you require) in the while loop, so as not to hog the CPU.
There are two ways:
One. Write your own timer which wraps the platform-specific call, and stick to using it.
e.g.
void MySleep::Sleep(int milliSec)
{
#ifdef _WIN32
    ::Sleep(milliSec);        // Win32 Sleep() takes milliseconds
#else
    usleep(milliSec * 1000);  // usleep() takes microseconds
#endif
}
Two. Choose libraries and toolkits that support all your target platforms. Toolkits like Qt and Boost can be used to cover up the platform-specific goo.
Both boost and Qt have timers with high functionality and are extensible. I recommend you look them up.
http://linux.die.net/man/2/alarm
Description:
alarm() arranges for a SIGALRM signal to be delivered to the process in seconds seconds.
and use Cygwin on Windows.
What you're already doing is the easiest. It consumes too much CPU because it's spinning flat out doing your check (has the timer expired?) or whatever. To fix that, put usleep(1), or whatever the OS equivalent of a very short sleep is, in that main loop, and you'll have what you need.
You didn't mention the environment you're building a timer in. For example, microcontrollers usually have a timer/counter unit that raises interrupts at set intervals by counting clock cycles, and you can just handle those interrupts.
Use a sleep function and a function pointer. Using a sleep function doesn't consume processor time. You can use the function pointer to be notified when the timer expires; if you don't need events, you can simply use a sleep/delay function.
Edit: do what smallduck has suggested. Use macros for correctly calling the appropriate operating system call (if you want to avoid using Boost); anything else and the timer won't be accurate.
You can call time() multiple times and compare the values.
#include <time.h>

#define TIMEOUT 10 /* seconds, for example */

int main ()
{
    time_t start_time;
    time_t current_time;

    start_time = time(NULL);
    current_time = time(NULL);

    while (current_time < start_time + TIMEOUT)
    {
        /* Do what you want while you're waiting for the timeout */
        current_time = time(NULL);
    }
    ...
}
The advantage over sleep() is that you can still execute code while you are waiting. For example... polling for an external stop signal, etc.
A lot of these answers include something known as "busy waiting." Checking the time over and over again in a while() loop is a bad idea 99% of the time.
I think you may want to step back and approach the problem a bit differently.
It sounds like you want a program to turn something off under a given set of conditions.
So you have a few options. You can "wake up" your background program every so often and check if conditions are met (using sleep / usleep, which are standard functions in all languages on all operating systems).
Or you can background the process indefinitely, until some type of event occurs. This would probably best be accomplished in C via signals or some type of wait function.
It's hard to tell exactly what you want, because it's hard to tell how your standby/turn-off conditions are met and how they are triggered.
You may want your battery-monitor program to do some type of IPC, or maybe write some type of dummy file to a known directory when it needs to standby. With IPC, you can use some type of wait() function that activates when a signal or IPC message is sent to your background process. With the file method, you could sleep(), and check for that file's existence on every wake-up.
You could also easily use networking/sockets to do this. Listen on the loopback interface (127.0.0.1) on a predefined port, and wait until data comes in on that socket. When the battery monitor needs to standby, it sends a simple message via loopback to your new process. This will use effectively 0 CPU.
There are probably other ways as well, but I think that should give you a general idea.
It is not a trivial task because, depending on your requirements, it can be quite complex.
The problem with timers is that if you want a good timer you may need to move beyond C++/C into the realm of OS calls, causing you to end up with an OS-specific solution, or to use some library like Boost to wrap it.
I mainly program in Windows so my advice come from that realm:
In Windows you can of course use time(NULL) as some suggested; however, when you are waiting you mostly don't want to bog down the CPU with a loop. Using sleep is one way, but instead I usually take the approach of using an object to wait for: either the object signals or a timeout occurs. E.g., in order to wait for 10 seconds:
res = WaitForSingleObject( someobjecthandle, 10000 );
If the return value is a timeout, I know I waited 10 s; otherwise the object signaled in some way and I didn't wait 10 s. Using that, you can create an effective timer.
Another approach which is a bit more work is to create a separate timer thread (Windows again) which periodically sends a message to your message loop.
A third approach is to create a thread that is the actual timer: you start the thread with an argument, and the thread sleeps for this time (I know you don't want to use sleep, but you can use a MsgWaitForMultipleObjects call inside the thread to react if you want to kill the timer prematurely). You then do a WaitForSingleObject on the handle of the thread; when it signals, the time is up (or a timeout occurred).
There are more ways to do this, but these are some starting points.
If all you need is a code snippet that lets your program rest, a call to sleep is enough (if you're OK with second granularity).
If you need to run multiple timers with a single thread then maintaining a hash table holding active timers is a very good method. You use the expiry time to form the hash key. Each timer tick you then search the hash table for timers which have expired.
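A sketch of that idea follows; note it uses a std::multimap (ordered, duplicate keys allowed) rather than a literal hash table, a variation on the answer's suggestion that keeps expired timers contiguous at the front. Keys are absolute expiry ticks.

```cpp
#include <functional>
#include <map>
#include <utility>

// Single-threaded management of many timers: callbacks keyed by their
// absolute expiry tick.
class TimerQueue
{
public:
    void add(long expiry_tick, std::function<void()> callback)
    {
        timers_.emplace(expiry_tick, std::move(callback));
    }

    // Call once per tick: fires and removes every timer whose expiry
    // is <= now. Because the map is ordered, expired entries form a
    // prefix and no full scan is needed.
    void tick(long now)
    {
        const auto end = timers_.upper_bound(now);
        for (auto it = timers_.begin(); it != end; ++it)
            it->second();
        timers_.erase(timers_.begin(), end);
    }

private:
    std::multimap<long, std::function<void()>> timers_;
};
```

With a hash table, as the answer suggests, you would instead look up the bucket for the current tick each iteration; the ordered-map variant trades that for an O(log n) insert and cheap "all expired so far" retrieval.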
You could always play around with threads. One master thread could schedule tasks/jobs to be carried out at certain intervals. But then we are in the area of scheduling, which is something the OS does. So, as GMan said, you're suddenly in the realm of developing your own OS, or mimicking parts of the OS's functionality.
#include <time.h>

void SimpleTimer(int timeinterval)
{
    time_t starttime, currenttime;
    double difference;

    starttime = time(NULL);
    do
    {
        currenttime = time(NULL);
        difference = difftime(currenttime, starttime);
    }
    while (difference < timeinterval);
}