I have a periodic task in C++, running on an embedded Linux platform, that has to run at 5 ms intervals. It seems to be working as expected, but is my current solution good enough?
I have implemented the scheduler using sleep_until(), but some comments I have received say that setitimer() is better. As I would like the application to be at least somewhat portable, I would prefer the C++ standard library... unless, of course, there are other problems.
I have found plenty of sites that show an implementation with each, but I have not found any arguments for why one solution is better than the other. As I see it, sleep_until() will use an "optimal" mechanism on any (supported) platform, and I'm getting the feeling that the comments I have received are focused more on usleep() (which I do not use).
My implementation looks a little like this:
#include <chrono>
#include <cstdlib>
#include <ratio>
#include <thread>

bool is_submilli_capable() {
    return std::ratio_greater<std::milli,
                              std::chrono::system_clock::period>::value;
}

int main() {
    if (not is_submilli_capable())
        exit(1);

    while (true) {
        auto next_time = next_period_start(); // computes the next 5 ms deadline
        do_the_magic();
        std::this_thread::sleep_until(next_time);
    }
}
A short summary of the issue:
I have an embedded Linux platform, built with Yocto and with RT capabilities
The application needs to read and process incoming data every 5 ms
Building with gcc 11.2.0
Using C++20
All the "hard work" is done in separate threads, so this question only concerns triggering the task periodically and with minimal jitter
Since the application is supposed to read and process the data every 5 ms, it is possible that a few times, it does not perform the required operations. What I mean to say is that in a time interval of 20 ms, do_the_magic() is supposed to be invoked 4 times... But if the time taken to execute do_the_magic() is 10 ms, it will get invoked only 2 times. If that is an acceptable outcome, the current implementation is good enough.
Since the application is reading data, it probably receives it from the network or disk. And adding the overhead of processing it, it likely takes more than 5 ms to do so (depending on the size of the data). If it is not acceptable to miss out on any invocation of do_the_magic, the current implementation is not good enough.
What you could probably do is create a few threads. Each thread executes the do_the_magic function and then goes back to sleep. Every 5 ms, you wake a sleeping thread, which will most likely take far less than 5 ms. This way no invocation of do_the_magic is missed. The number of threads you need depends on how long do_the_magic takes to execute.
bool is_submilli_capable() {
    return std::ratio_greater<std::milli,
                              std::chrono::system_clock::period>::value;
}

void wake_some_thread() {
    static int i = 0;
    release_semaphore(i); // Release semaphore associated with thread i
    i = (i + 1) % NUM_THREADS;
}

void* thread_func(void* args) {
    int my_index = *(int*)args;
    while (true) {
        wait_semaphore(my_index); // Wait for this thread's semaphore
        do_the_magic();
    }
}

int main() {
    if (not is_submilli_capable())
        exit(1);

    while (true) {
        auto next_time = next_period_start();
        wake_some_thread(); // Releases a semaphore to wake a thread
        std::this_thread::sleep_until(next_time);
    }
}
Create as many semaphores as the number of threads, where thread i waits for semaphore i. wake_some_thread can then release a semaphore starting from index 0 up to NUM_THREADS - 1 and start again.
5 ms is pretty tight timing.
You can get a jitter-free 5ms tick only if you do the following:
Isolate a CPU for this thread. Configure it with nohz_full and rcu_nocbs
Pin your thread to this CPU, assign it a real-time scheduling policy (e.g., SCHED_FIFO)
Do not let any other threads run on this CPU core.
Do not allow any context switches in this thread. This includes avoiding system calls altogether. I.e., you cannot use std::this_thread::sleep_until(...) or anything else.
Do a busy wait in between processing (ensure 100% CPU utilisation)
Use lock-free communication to transfer data from this thread to other, non-real-time threads, e.g., for storing the data to files, accessing network, logging to console, etc.
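The busy-wait tick from the list above can be sketched portably; the isolation, pinning, and SCHED_FIFO steps are Linux configuration done outside the code and are omitted here, so this only shows the spin-until-deadline loop (the function names are mine, not from any library):

```cpp
#include <chrono>

// Spin until the absolute deadline; no system calls, 100% CPU on this core.
// Returns how many clock reads the spin needed (useful for jitter statistics).
long spin_until(std::chrono::steady_clock::time_point deadline) {
    long reads = 0;
    while (std::chrono::steady_clock::now() < deadline)
        ++reads;
    return reads;
}

// Run `iters` periods of `period`, invoking `work` once per period.
// Returns the worst observed lateness past a deadline, in nanoseconds.
template <class Work>
long long run_periodic(int iters, std::chrono::nanoseconds period, Work work) {
    using clock = std::chrono::steady_clock;
    auto deadline = clock::now();
    long long worst_late_ns = 0;
    for (int i = 0; i < iters; ++i) {
        deadline += period;
        work();                           // must finish within one period
        spin_until(deadline);
        auto late = std::chrono::duration_cast<std::chrono::nanoseconds>(
                        clock::now() - deadline).count();
        if (late > worst_late_ns) worst_late_ns = late;
    }
    return worst_late_ns;
}
```

On a properly isolated core, worst_late_ns is the jitter figure you would monitor; on a normal desktop it will occasionally spike whenever the scheduler preempts the thread.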
Now, the question is how you're going to "read and process data" without system calls. It depends on your system. If you can do any user-space I/O (map the physical register addresses into your process address space, use DMA without interrupts, etc.), you'll have perfectly real-time processing. Otherwise, any system call will trigger a context switch, and the latency of this context switch will be unpredictable.
For example, you can do this with certain Ethernet devices (SolarFlare, etc.), with 100% user-space drivers. For anything else you're likely to have to write your own user-space driver, or even implement your own interrupt-free device (e.g., if you're running on an FPGA SoC).
I need to execute some function accurately, 20 milliseconds after some event (for sending RTP packets). I have tried the following variants:
std::this_thread::sleep_for(std::chrono::milliseconds(20));
boost::this_thread::sleep_for(std::chrono::milliseconds(20));
Sleep(20);
Also various perversions such as:
auto a= GetTickCount();
while ((GetTickCount() - a) < 20) continue;
I also tried micro- and nanoseconds.
All these methods have an error in the range of -6 ms to +12 ms, which is not acceptable. How do I make it work right?
In my opinion, ±1 ms is acceptable, but no more.
UPDATE1: to measure elapsed time I use std::chrono::high_resolution_clock::now();
Briefly: because of how OS kernels manage time and threads, you won't get much better accuracy with that method. You also can't rely on sleep alone with a static interval, or your stream will quickly drift off your intended send clock rate, because the thread could be interrupted or could be scheduled again well after your sleep time. For this reason you should check the system clock at each iteration to know how much to sleep for (i.e. somewhere between 0 ms and 20 ms). Without going into too much detail, this is also why there's a jitter buffer in RTP streams: to account for variations in packet reception (due to network jitter or send jitter). Because of this, you likely won't need ±1 ms accuracy anyway.
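Computing the sleep from the clock each iteration amounts to sleeping until an absolute deadline that advances by exactly one period per tick, so a late wakeup shortens the next sleep instead of accumulating. A minimal sketch (paced_send is a hypothetical name):

```cpp
#include <chrono>
#include <thread>

// Send `count` packets at `period` intervals without cumulative drift:
// the deadline advances by exactly one period per tick, so a late wakeup
// shortens the next sleep instead of shifting every later packet.
template <class Send>
void paced_send(int count, std::chrono::milliseconds period, Send send) {
    auto deadline = std::chrono::steady_clock::now();
    for (int i = 0; i < count; ++i) {
        deadline += period;
        send(i);                                  // e.g. hand the RTP packet off
        std::this_thread::sleep_until(deadline);  // absolute, not relative
    }
}
```

Individual wakeups still jitter by whatever the scheduler allows, but the long-run send rate stays locked to the 20 ms grid, which is what the receiver's jitter buffer expects.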
Using std::chrono::steady_clock, I got about 0.1 ms accuracy on Windows 7.
That is, simply:
auto a = std::chrono::steady_clock::now();
while ((std::chrono::steady_clock::now() - a) < WAIT_TIME) continue;
This should give you accurate "waiting" (about 0.1ms, as I said), at least. We all know that this kind of waiting is "ugly" and should be avoided, but it's a hack that might still do the trick just fine.
You could use high_resolution_clock, which might give even better accuracy for some systems, but it is not guaranteed not to be adjusted by the OS, and you don't want that. steady_clock is supposed to be guaranteed not to be adjusted, and often has the same accuracy as high_resolution_clock.
As for "sleep()" functions that are very accurate, I don't know. Perhaps someone else knows more about that.
In C we have a nanosleep function in time.h.
The nanosleep() function causes the current thread to be suspended from execution until either the time interval specified by the rqtp argument has elapsed or a signal is delivered to the calling thread and its action is to invoke a signal-catching function or to terminate the process.
The program below sleeps for 20 milliseconds.
#include <stdio.h>
#include <time.h>

int main()
{
    struct timespec tim;
    tim.tv_sec = 0;
    tim.tv_nsec = 20000000; // 20 milliseconds converted to nanoseconds

    if (nanosleep(&tim, NULL) < 0)
    {
        printf("Nano sleep system call failed\n");
        return -1;
    }

    printf("Nano sleep successful\n");
    return 0;
}
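As the quoted description notes, nanosleep can return early when a signal is delivered; the second argument receives the unslept time, so a signal-tolerant sleep loops on EINTR. A sketch (the helper name is mine):

```cpp
#include <errno.h>
#include <time.h>

// Sleep for the full interval even if signals interrupt the call:
// on EINTR, nanosleep fills `rem` with the time still left to sleep.
int sleep_ms_uninterruptible(long ms) {
    struct timespec req, rem;
    req.tv_sec = ms / 1000;
    req.tv_nsec = (ms % 1000) * 1000000L;
    while (nanosleep(&req, &rem) < 0) {
        if (errno != EINTR)
            return -1;      // real failure (e.g. EINVAL)
        req = rem;          // resume with the remaining time
    }
    return 0;
}
```

Without this loop, a single SIGALRM or profiling signal silently shortens the sleep.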
Recently I've been trying to create a wait function that waits for 25 ms using the wall clock as reference. I looked around and found gettimeofday, but I've been having problems with it. My code (simplified):
while (1)
{
    timeval start, end;
    double t_us;
    bool release = false;

    gettimeofday(&start, NULL);
    DoStuff();

    while (release == false)
    {
        gettimeofday(&end, NULL);
        t_us = ((end.tv_sec - start.tv_sec) * 1000 * 1000) + (end.tv_usec - start.tv_usec);
        if (t_us >= 25000) // 25 ms
        {
            release = true;
        }
    }
}
This code runs in a thread (POSIX) and, on its own, works fine. DoStuff() is called every 25 ms. It does however eat all the CPU it can (as you might expect), so obviously this isn't a good idea.
When I tried throttling it by adding a Sleep(1); in the wait loop after the if statement, the entire thing slowed by about 50% (that is, it called DoStuff every 37 ms or so). This makes no sense to me: assuming DoStuff and any other threads complete their tasks in under (25 - 1) ms, the call rate of DoStuff shouldn't be affected (allowing a 1 ms error margin).
I also tried Sleep(0), usleep(1000) and usleep(0) but the behaviour is the same.
The same behaviour occurs whenever another higher-priority thread needs CPU time (without the sleep). It's as if the clock stops counting when the thread relinquishes runtime.
I'm aware that gettimeofday is vulnerable to things like NTP updates etc., so I tried using clock_gettime, but linking with -lrt on my system causes problems, so I don't think that is an option.
Does anyone know what I'm doing wrong?
The part that's missing here is how the kernel does thread scheduling based on time slices. In rough numbers, if you sleep at the beginning of your time slice for 1ms and the scheduling is done on a 35ms clock rate, your thread may not execute again for 35ms. If you sleep for 40ms, your thread may not execute again for 70ms. You can't really change that without changing the scheduling, but that's not recommended due to overall performance implications of the system. You could use a "high-resolution" timer, but often that's implemented in a tight cycle-wasting loop of "while it's not time yet, chew CPU" so that's not really desirable either.
If you used a high-resolution clock and queried it frequently inside of your DoStuff loop, you could potentially play some tricks like run for 30ms, then do a sleep(1) which could effectively relinquish your thread for the remainder of your timeslice (e.g. 5ms) to let other threads run. Kind of a cooperative/preemptive multitasking if you will. It's still possible you don't get back to work for an extended period of time though...
All variants of sleep()/usleep() involve yielding the CPU to other runnable tasks. Your program can then run only after it is rescheduled by the kernel, which seems to take about 37 ms in your case.
I've got a loop that looks like this:
while (elapsedTime < refreshRate)
{
    timer.stopTimer();
    elapsedTime = timer.getElapsedTime();
}
I read something similar to this elsewhere (C Main Loop without 100% cpu), but this loop is running a high resolution timer that must be accurate. So how am I supposed to not take up 100% CPU while still keeping it high resolution?
You shouldn't busy-wait but rather have the OS tell you when the time has passed.
http://msdn.microsoft.com/en-us/library/ms712704(VS.85).aspx
High resolution timers (Higher than 10 ms)
http://msdn.microsoft.com/en-us/magazine/cc163996.aspx
When you say that your timer must be "accurate", how accurate do you actually need to be? If you only need to be accurate to the nearest millisecond, then you can add a half-millisecond sleep inside the loop. You can also add a dynamically-changing sleep statement based off of how much time you have left to sleep. Think of something like (pseudocode):
int time_left = refreshRate - elapsedTime;
while (time_left > 0) {
    if (time_left > threshold)
        sleep_for_interval(time_left / 2);
    update_timestamp(elapsedTime);
    time_left = refreshRate - elapsedTime;
}
With that algorithm, your code will sleep in short bursts if it detects that it still has a while to wait. You would want to run some tests to find an optimal value for threshold that balances CPU usage savings against the risk of overshoot (caused by your app losing the CPU when it sleeps and not getting any more CPU time in time).
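A concrete version of the pseudocode above might look like this: sleep in halving chunks while the deadline is far away, then spin politely for the final stretch. The 2 ms default threshold is an assumption you would tune as described:

```cpp
#include <chrono>
#include <thread>

// Wait until `deadline`: sleep in halving chunks while the remaining time
// is above `threshold`, then busy-wait the final stretch for accuracy.
void hybrid_wait(std::chrono::steady_clock::time_point deadline,
                 std::chrono::microseconds threshold = std::chrono::microseconds(2000)) {
    using clock = std::chrono::steady_clock;
    for (;;) {
        auto left = deadline - clock::now();
        if (left <= std::chrono::nanoseconds::zero()) return;
        if (left > threshold)
            std::this_thread::sleep_for(left / 2);  // coarse sleep, low CPU
        else
            std::this_thread::yield();              // near the deadline: spin politely
    }
}
```

Sleeping only half the remaining time at each step means a single oversleep cannot overshoot the deadline by more than roughly one scheduler quantum.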
The other method for high-resolution timing is to use a hardware timer that triggers a periodic interrupt. Your interrupt handler would send a signal to some thread that it needs to wake up and do something, after which it goes back to sleep and waits for the next signal to come in.
Real-Time Operating Systems have ways to do this sort of things built into the OS. If you're doing Windows programming and need extremely precise timing, be aware that that's not the sort of thing that a general-purpose OS like Windows handles very well.
Look at some timers delivered by the OS, like POSIX usleep.
On the other hand, if you need hyper precision, your code will not work either, because the OS will break this loop once your process exhausts its time quantum and jump to kernel space to do some system tasks. For that you would need a special OS with an interruptible kernel and the tools it delivers; look for the RTOS keyword.
Typically, you yield to the OS in some fashion. This allows the OS to take a break from your program and do something else.
Obviously this is OS dependent, but:
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

void yield(void)
{
#ifdef _WIN32
    Sleep(0);
#else
    usleep(1);
#endif
}
Insert a call to yield before you stop the timer. The OS will report less time usage by your program.
Keep in mind, of course, this makes your timer "less accurate", because it might not update as frequently as possible. But you really shouldn't depend on extreme-accuracy, it's far too difficult. Approximations are okay.
On Windows I have a problem I never encountered on Unix. That is how to get a thread to sleep for less than one millisecond. On Unix you typically have a number of choices (sleep, usleep and nanosleep) to fit your needs. On Windows, however, there is only Sleep with millisecond granularity.
On Unix, I can use the select system call to create a microsecond sleep, which is pretty straightforward:
int usleep(long usec)
{
    struct timeval tv;
    tv.tv_sec = usec / 1000000L;
    tv.tv_usec = usec % 1000000L;
    return select(0, 0, 0, 0, &tv);
}
How can I achieve the same on Windows?
This indicates a mis-understanding of sleep functions. The parameter you pass is a minimum time for sleeping. There's no guarantee that the thread will wake up after exactly the time specified. In fact, threads don't "wake up" at all, but are rather chosen for execution by the OS scheduler. The scheduler might choose to wait much longer than the requested sleep duration to activate a thread, especially if another thread is still active at that moment.
As Joel says, you can't meaningfully 'sleep' (i.e. relinquish your scheduled CPU) for such short periods. If you want to delay for some short time, then you need to spin, repeatedly checking a suitably high-resolution timer (e.g. the 'performance timer') and hoping that something of high priority doesn't pre-empt you anyway.
If you really care about accurate delays of such short times, you should not be using Windows.
Use the high resolution multimedia timers available in winmm.lib. See this for an example.
#include <Windows.h>

static NTSTATUS(__stdcall *NtDelayExecution)(BOOL Alertable, PLARGE_INTEGER DelayInterval)
    = (NTSTATUS(__stdcall*)(BOOL, PLARGE_INTEGER)) GetProcAddress(GetModuleHandle("ntdll.dll"), "NtDelayExecution");

static NTSTATUS(__stdcall *ZwSetTimerResolution)(IN ULONG RequestedResolution, IN BOOLEAN Set, OUT PULONG ActualResolution)
    = (NTSTATUS(__stdcall*)(ULONG, BOOLEAN, PULONG)) GetProcAddress(GetModuleHandle("ntdll.dll"), "ZwSetTimerResolution");

static void SleepShort(float milliseconds) {
    static bool once = true;
    if (once) {
        ULONG actualResolution;
        ZwSetTimerResolution(1, true, &actualResolution);
        once = false;
    }

    LARGE_INTEGER interval;
    interval.QuadPart = -1 * (int)(milliseconds * 10000.0f);
    NtDelayExecution(false, &interval);
}
Works very well for sleeping extremely short times. Remember though that at a certain point the actual delays will never be consistent because the system can't maintain consistent delays of such a short time.
Yes, you need to understand your OS' time quantums. On Windows, you won't even be getting 1ms resolution times unless you change the time quantum to 1ms. (Using for example timeBeginPeriod()/timeEndPeriod()) That still won't really guarantee anything. Even a little load or a single crappy device driver will throw everything off.
SetThreadPriority() helps, but is quite dangerous. Bad device drivers can still ruin you.
You need an ultra-controlled computing environment to make this ugly stuff work at all.
Generally a sleep will last at least until the next system interrupt occurs. However, this depends on the settings of the multimedia timer resources. They may be set to something close to 1 ms; some hardware even allows running at interrupt periods of 0.9765625 ms (the ActualResolution provided by NtQueryTimerResolution will show 0.9766, but that's actually wrong: they just can't put the correct number into the ActualResolution format. It's 0.9765625 ms at 1024 interrupts per second).
There is one exception which allows us to escape from the fact that it may be impossible to sleep for less than the interrupt period: the famous Sleep(0). This is a very powerful tool, and it is not used as often as it should be! It relinquishes the remainder of the thread's time slice. This way the thread will stop until the scheduler forces the thread to get CPU service again. Sleep(0) is an asynchronous service; the call forces the scheduler to react independent of an interrupt.
A second way is the use of a waitable object. A wait function like WaitForSingleObject() can wait for an event. In order to have a thread sleep for any time, including times in the microsecond regime, the thread needs to set up a service thread which will generate an event at the desired delay. The "sleeping" thread sets up this thread and then pauses at the wait function until the service thread sets the event signaled.
This way any thread can "sleep" or wait for any time. The service thread can be of considerable complexity, and it may offer system-wide services such as timed events at microsecond resolution. However, microsecond resolution may force the service thread to spin on a high-resolution time service for up to one interrupt period (~1 ms). If care is taken, this can run very well, particularly on multi-processor or multi-core systems. A one-ms spin does not hurt considerably on a multi-core system when the affinity masks for the calling thread and the service thread are carefully handled.
Code, description, and testing can be visited at the Windows Timestamp Project
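The event-plus-service-thread pattern described above is Windows-specific, but its shape can be sketched portably: a helper thread signals a condition variable after the requested delay, and the "sleeping" thread waits on it. This is a simplified stand-in; the service thread described above spins on a high-resolution time source instead of sleeping, and would be shared rather than spawned per call:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// "Sleep" by waiting on an event that a helper thread signals after `delay`.
void event_sleep(std::chrono::microseconds delay) {
    std::mutex m;
    std::condition_variable cv;
    bool fired = false;

    std::thread service([&] {
        std::this_thread::sleep_for(delay);  // stand-in for the high-res spin
        {
            std::lock_guard<std::mutex> lk(m);
            fired = true;
        }
        cv.notify_one();                     // the "set event signaled" step
    });

    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return fired; });      // the waitable-object wait
    lk.unlock();
    service.join();
}
```

The accuracy is only as good as whatever the service thread uses to time the delay, which is exactly why the article has it spin rather than sleep.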
As several people have pointed out, sleep and other related functions are by default dependent on the "system tick". This is the minimum unit of time between OS tasks; the scheduler, for instance, will not run faster than this. Even with a realtime OS, the system tick is not usually less than 1 ms. While it is tunable, this has implications for the entire system, not just your sleep functionality, because your scheduler will be running more frequently, and potentially increasing the overhead of your OS (amount of time for the scheduler to run, vs. amount of time a task can run).
The solution to this is to use an external, high-speed clock device. Most Unix systems will allow you to specify to your timers and such a different clock to use, as opposed to the default system clock.
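On Linux, for instance, one way to have the OS drive a timer off a chosen clock is timerfd: the kernel posts expirations on a file descriptor and a blocking read delivers how many periods elapsed, with no busy-waiting. A Linux-specific sketch (the helper names are mine; periods under one second assumed):

```cpp
#include <cstdint>
#include <sys/timerfd.h>
#include <unistd.h>

// Create a periodic timer on CLOCK_MONOTONIC with the given period in ms
// (periods under one second assumed). Returns the fd, or -1 on failure.
int make_periodic_timer(long period_ms) {
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (fd < 0) return -1;
    struct itimerspec spec = {};
    spec.it_value.tv_nsec = period_ms * 1000000L;    // first expiry
    spec.it_interval.tv_nsec = period_ms * 1000000L; // then every period
    if (timerfd_settime(fd, 0, &spec, nullptr) < 0) { close(fd); return -1; }
    return fd;
}

// Block until the next tick; returns how many periods elapsed
// (1 normally, more if the caller overran and missed ticks).
uint64_t wait_tick(int fd) {
    uint64_t expirations = 0;
    if (read(fd, &expirations, sizeof expirations) != sizeof expirations)
        return 0;
    return expirations;
}
```

Because the read reports missed expirations, an overrunning consumer can detect and account for skipped ticks instead of silently drifting.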
What are you waiting for that requires such precision? In general if you need to specify that level of precision (e.g. because of a dependency on some external hardware) you are on the wrong platform and should look at a real time OS.
Otherwise you should be considering whether there is an event you can synchronize on, or in the worst case just busy-wait the CPU and use the high-performance counter API to measure the elapsed time.
If you want so much granularity you are in the wrong place (in user space).
Remember that if you are in user space your time is not always precise.
The scheduler can start your thread (or app) and schedule it, so you depend on the OS scheduler.
If you are looking for something precise you have to go:
1) In kernel space (like drivers)
2) Choose an RTOS.
Anyway, if you are looking for some granularity (but remember the problem with user space), look at the
QueryPerformanceCounter Function and QueryPerformanceFrequency function in MSDN.
Actually, using this usleep function will cause a big memory/resource leak (depending on how often it is called).
Use this corrected version (sorry, can't edit?):
bool usleep(unsigned long usec)
{
    struct timeval tv;
    fd_set dummy;
    SOCKET s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
    FD_ZERO(&dummy);
    FD_SET(s, &dummy);
    tv.tv_sec = usec / 1000000ul;
    tv.tv_usec = usec % 1000000ul;
    bool success = (0 == select(0, 0, 0, &dummy, &tv));
    closesocket(s);
    return success;
}
I have the same problem and nothing seems to be faster than a ms, even the Sleep(0). My problem is the communication between a client and a server application where I use the _InterlockedExchange function to test and set a bit and then I Sleep(0).
I really need to perform thousands of operations per second this way and it doesn't work as fast as I planned.
Since I have a thin client dealing with the user, which in turn invokes an agent which then talks to a thread, I will move soon to merge the thread with the agent so that no event interface will be required.
Just to give you guys an idea how slow this Sleep is, I ran a test for 10 seconds performing an empty loop (getting something like 18,000,000 loops) whereas with the event in place I only got 180,000 loops. That is, 100 times slower!
Try using SetWaitableTimer...
Like everybody mentioned, there is indeed no guarantees about the sleep time.
But nobody wants to admit that sometimes, on an idle system, the usleep command can be very precise. Especially with a tickless kernel. Windows Vista has it and Linux has it since 2.6.16.
Tickless kernels exist to help improve laptop battery life: cf. Intel's powertop utility.
Under those conditions, I happened to measure the Linux usleep command respecting the requested sleep time very closely, down to half a dozen microseconds.
So maybe the OP wants something that will roughly work most of the time on an idling system, and be able to ask for microsecond scheduling!
I actually would want that on Windows too.
Also, Sleep(0) sounds like boost::thread::yield(), whose terminology is clearer.
I wonder if Boost-timed locks have a better precision. Because then you could just lock on a mutex that nobody ever releases, and when the timeout is reached, continue on...
Timeouts are set with boost::system_time + boost::milliseconds and friends (xtime is deprecated).
If your goal is to "wait for a very short amount of time" because you are doing a spinwait, then there are increasing levels of waiting you can perform.
void SpinOnce(ref Int32 spin)
{
    /*
    SpinOnce is called each time we need to wait.
    But the action it takes depends on how many times we've been spinning:

    1..12 spins: spin 2..4096 cycles
    12..32 spins: call SwitchToThread (allow another thread ready to run on our core to execute)
    over 32 spins: Sleep(0) (give up the remainder of our timeslice to any other thread ready to run; also allows APC and I/O callbacks)
    */
    spin += 1;

    if (spin > 32)
        Sleep(0); // give up the remainder of our timeslice
    else if (spin > 12)
        SwitchToThread(); // allow another thread on our CPU to have the remainder of our timeslice
    else
    {
        int loops = (1 << spin); // 1..12 ==> 2..4096
        while (loops > 0)
            loops -= 1;
    }
}
So if your goal is actually to wait only for a little bit, you can use something like:
int spin = 0;
while (!TryAcquireLock())
{
    SpinOnce(ref spin);
}
The virtue here is that we wait longer each time, eventually going completely to sleep.
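A C++ rendering of the same escalating wait, keeping the answer's own 12/32 cut-offs and cycle counts; SwitchToThread becomes std::this_thread::yield and Sleep(0) becomes a zero-length sleep_for here, which are approximations rather than exact equivalents:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Escalating backoff: short on-CPU spins first, then yield to other threads,
// then give up the rest of the timeslice entirely.
void spin_once(int& spin) {
    spin += 1;
    if (spin > 32)
        std::this_thread::sleep_for(std::chrono::nanoseconds(0)); // like Sleep(0)
    else if (spin > 12)
        std::this_thread::yield();        // like SwitchToThread
    else {
        volatile int loops = 1 << spin;   // 2..4096 busy cycles
        while (loops > 0) loops = loops - 1;
    }
}

// Usage: spin until a lock-like flag becomes free, giving up after max_spins.
bool try_acquire_with_spin(std::atomic_flag& flag, int max_spins) {
    int spin = 0;
    while (flag.test_and_set(std::memory_order_acquire)) {
        if (spin >= max_spins) return false;
        spin_once(spin);
    }
    return true;
}
```

The giving-up path is what distinguishes this from a raw spinlock: a short contention window costs only a few cycles, while a long one degrades gracefully into yielding the CPU.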
Just use Sleep(0). 0 is clearly less than a millisecond. Now, that sounds funny, but I'm serious. Sleep(0) tells Windows that you don't have anything to do right now, but that you do want to be reconsidered as soon as the scheduler runs again. And since obviously the thread can't be scheduled to run before the scheduler itself runs, this is the shortest delay possible.
Note that your usleep takes a microsecond count, but so does void usleep(__int64 t) { Sleep(t/1000); }, and neither gives any guarantee of actually sleeping for that period.
Sleep function that is way less than a millisecond... maybe.
I found that sleep(0) worked for me. On a system with a near 0% load on the cpu in task manager, I wrote a simple console program and the sleep(0) function slept for a consistent 1-3 microseconds, which is way less than a millisecond.
But from the above answers in this thread, I know that the amount sleep(0) sleeps can vary much more wildly than this on systems with a large cpu load.
But as I understand it, the sleep function should not be used as a timer. It should be used to make the program use as small a percentage of the CPU as possible while still executing as frequently as possible. For my purposes, such as moving a projectile across the screen in a video game much faster than one pixel per millisecond, sleep(0) works, I think.
You would just make sure the sleep interval is way smaller than the largest amount of time it might sleep for. You don't use the sleep as a timer, but just to make the game use the minimum CPU percentage possible. You would use a separate function, which has nothing to do with sleep, to know when a particular amount of time has passed and then move the projectile one pixel across the screen, at, say, every 1/10th of a millisecond (100 microseconds).
The pseudo-code would go something like this.
while (timer1 < 100 microseconds) {
    sleep(0);
}
if (timer2 >= 100 microseconds) {
    move projectile one pixel
}
// Rest of code in iteration here
I know the answer may not work for advanced issues or programs but may work for some or many programs.
If the machine is running Windows 10 version 1803 or later then you can use CreateWaitableTimerExW with the CREATE_WAITABLE_TIMER_HIGH_RESOLUTION flag.
On Windows the use of select forces you to include the Winsock library which has to be initialized like this in your application:
WORD wVersionRequested = MAKEWORD(1,0);
WSADATA wsaData;
WSAStartup(wVersionRequested, &wsaData);
And then select won't allow itself to be called without any socket, so you have to do a little more to create a microsleep method:
int usleep(long usec)
{
    struct timeval tv;
    fd_set dummy;
    SOCKET s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
    FD_ZERO(&dummy);
    FD_SET(s, &dummy);
    tv.tv_sec = usec / 1000000L;
    tv.tv_usec = usec % 1000000L;
    return select(0, 0, 0, &dummy, &tv);
}
All these created usleep methods return zero when successful and non-zero for errors.