Boost timed_wait leap seconds problem - c++

I am using timed_wait from the Boost C++ library and I am running into a problem with leap seconds.
Here is a quick test:
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {
    // Determine the absolute time for this timer.
    boost::system_time tAbsoluteTime = boost::get_system_time() + boost::posix_time::milliseconds(35000);
    bool done = false;
    boost::mutex m;
    boost::condition_variable cond;
    boost::unique_lock<boost::mutex> lk(m);
    while (!done)
    {
        if (!cond.timed_wait(lk, tAbsoluteTime))
        {
            done = true;
            std::cout << "timed out";
        }
    }
    return 1;
}
The timed_wait function is returning 24 seconds earlier than it should. 24 seconds is the current number of leap seconds in UTC.
Boost is widely used, but I could not find any information about this particular problem. Has anyone else experienced it? What are the possible causes and solutions?
Notes: I am using boost 1.38 on a linux system. I've heard that this problem doesn't happen on MacOS.
UPDATE: A little more info: This is happening on 2 redhat machines with kernel 2.6.9. I have executed the same code on an ubuntu machine with kernel 2.6.30 and the timer behaves as expected.
So, what I think is that this is probably being caused by the OS or by some mis-set configuration on the redhat machines.
I have coded a workaround that adjusts the time to UTC, then takes the difference from this adjustment and adds it to the original time. This seems like a bad idea to me because, if this code is executed on a machine without the problem, it might end up 24 s AHEAD. I still could not find the reason for this behavior.

On a Linux system, the system clock will follow the POSIX standard, which mandates that
leap seconds are NOT observed! If you expected otherwise, that's probably the source of the discrepancy you're seeing. This document has a great explanation of how UTC relates to other time scales, and the problems one is likely to encounter if one relies on the operating system's concept of timekeeping.

Is it possible that done is getting set prematurely and a spurious wakeup is causing the loop to exit sooner than you expected?

OK, here is what I did. It's a workaround and I am not happy with it, but it was the best I could come up with:
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/date_time/c_local_adjustor.hpp>
#include <iostream>

int main() {
    typedef boost::date_time::c_local_adjustor<boost::system_time> local_adj;
    // Determine the absolute time for this timer.
    boost::system_time tAbsoluteTime = boost::get_system_time() + boost::posix_time::milliseconds(25000);
    /*
     * A leap second is a positive or negative one-second adjustment to the Coordinated
     * Universal Time (UTC) time scale that keeps it close to mean solar time.
     * UTC, which is used as the basis for official time-of-day radio broadcasts for civil time,
     * is maintained using extremely precise atomic clocks. To keep the UTC time scale close to
     * mean solar time, UTC is occasionally corrected by an adjustment, or "leap",
     * of one second.
     */
    boost::system_time tAbsoluteTimeUtc = local_adj::utc_to_local(tAbsoluteTime);
    // Calculate the local-to-UTC difference.
    boost::posix_time::time_duration tLocalUtcDiff = tAbsoluteTime - tAbsoluteTimeUtc;
    // Take only the seconds part of the difference: these are the leap seconds.
    tAbsoluteTime += boost::posix_time::seconds(tLocalUtcDiff.seconds());
    bool done = false;
    boost::mutex m;
    boost::condition_variable cond;
    boost::unique_lock<boost::mutex> lk(m);
    while (!done)
    {
        if (!cond.timed_wait(lk, tAbsoluteTime))
        {
            done = true;
            std::cout << "timed out";
        }
    }
    return 1;
}
I've tested it on both problematic and non-problematic machines and it worked as expected on both, so I'm keeping it until I can find a better solution.
Thank you all for your help.

Related

What is the true getrusage resolution?

I'm trying to measure getrusage resolution via simple program:
#include <cstdio>
#include <sys/time.h>
#include <sys/resource.h>
#include <cassert>
int main(int argc, const char *argv[]) {
    struct rusage u = {0};
    assert(!getrusage(RUSAGE_SELF, &u));
    size_t cnt = 0;
    while (true) {
        ++cnt;
        struct rusage uz = {0};
        assert(!getrusage(RUSAGE_SELF, &uz));
        if (u.ru_utime.tv_sec != uz.ru_utime.tv_sec || u.ru_utime.tv_usec != uz.ru_utime.tv_usec) {
            std::printf("u:%ld.%06ld\tuz:%ld.%06ld\tcnt:%zu\n",
                        u.ru_utime.tv_sec, u.ru_utime.tv_usec,
                        uz.ru_utime.tv_sec, uz.ru_utime.tv_usec,
                        cnt);
            break;
        }
    }
}
And when I run it, I usually get output similar to the following:
ema#scv:~/tmp/getrusage$ ./gt
u:0.000562 uz:0.000563 cnt:1
ema#scv:~/tmp/getrusage$ ./gt
u:0.000553 uz:0.000554 cnt:1
ema#scv:~/tmp/getrusage$ ./gt
u:0.000496 uz:0.000497 cnt:1
ema#scv:~/tmp/getrusage$ ./gt
u:0.000475 uz:0.000476 cnt:1
Which seems to hint that the resolution of getrusage is around 1 microsecond.
I thought it would be around 1 / `getconf CLK_TCK` (i.e. 100 Hz, hence 10 milliseconds).
What is the true getrusage resolution?
Am I doing anything wrong?
Ps. Running this on Ubuntu 20.04, Linux scv 5.13.0-52-generic #59~20.04.1-Ubuntu SMP Thu Jun 16 21:21:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux, 5950x.
The publicly defined tick interval is nothing more than a common reference point for the default time slice that each process gets to run. When its tick expires, the process loses its assigned CPU, which then begins executing some other task that is given another tick-long time slice.
But that does not guarantee that a given process will run for its full tick. If a process attempts to read() an empty socket and has nothing to do in the middle of a tick, the kernel is not going to leave that CPU idle; it will find something better to do instead. The kernel knows exactly how long the process ran for, and there is no reason whatsoever why the actual running time cannot be recorded in its usage statistics, especially when the clock used for measuring process execution time offers much finer granularity than the tick interval.
Finally, a modern Linux kernel can be configured to not use tick intervals at all in specific situations, so its advertised tick interval is mostly academic.

I'm looking to improve or request my current delay / sleep method. c++

Currently I am coding a project that requires precise delay times across a number of computers. The code I am using, which I found on a forum, is below.
// 'ms' is the requested delay in milliseconds, supplied by the caller.
{
    LONGLONG timerResolution;
    LONGLONG wantedTime;
    LONGLONG currentTime;
    QueryPerformanceFrequency((LARGE_INTEGER*)&timerResolution);
    timerResolution /= 1000; // counts per millisecond
    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    wantedTime = currentTime / timerResolution + ms;
    currentTime = 0;
    while (currentTime < wantedTime)
    {
        QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
        currentTime /= timerResolution;
    }
}
Basically the issue I am having is that this uses a lot of CPU, around 16-20%, once I start calling the function. The usual Sleep() uses zero CPU but is extremely inaccurate. From what I have read on multiple forums, that is the trade-off: you give up accuracy or you give up CPU. But I thought I had better raise the question before I settle for this sleep method.
The reason it's using 15-20% CPU is likely that it's using 100% of one core, as there is nothing in the loop to slow it down.
In general, this is a "hard" problem to solve, because PCs (more specifically, the OSes running on them) are generally not made for running real-time applications. If that is an absolute requirement, you should look into real-time kernels and OSes.
For this reason, the guarantee usually made about sleep times is that the system will sleep for at least the specified amount of time.
If you are running Linux, you could try the nanosleep system call (http://man7.org/linux/man-pages/man2/nanosleep.2.html), though I don't have any experience with it.
Alternatively you could go with a hybrid approach where you use sleeps for long delays, but switch to polling when it's almost time:
#include <thread>
#include <chrono>
using namespace std::chrono_literals;
...
wantedTime = currentTime / timerResolution + ms;
currentTime = 0;
while (currentTime < wantedTime)
{
    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    currentTime /= timerResolution;
    if (wantedTime - currentTime > 100) // if more than 100 ms of waiting remain
    {
        // Sleep for significantly less than the remaining 100 ms,
        // to ensure that we don't "oversleep"
        std::this_thread::sleep_for(50ms);
    }
}
Now this is a bit race-condition prone, as it assumes the OS will hand control back to the program within 50 ms after the sleep_for is done. To further guard against this you could turn the sleep down (to, say, 1 ms).
You can set the Windows timer resolution to its minimum (usually 1 ms) to make Sleep() accurate to within about 1 ms. By default it is only accurate to about 15 ms. See the Sleep() documentation.
Note that your execution can be delayed if other programs are consuming CPU time, but this could also happen if you were waiting with a timer.
#include <windows.h>
#include <timeapi.h> // link against winmm.lib

// Sleep() takes about 15 ms (or whatever the default resolution is)
Sleep(1);
TIMECAPS caps_;
timeGetDevCaps(&caps_, sizeof(caps_));
timeBeginPeriod(caps_.wPeriodMin);
// Sleep() now takes about 1 ms
Sleep(1);
timeEndPeriod(caps_.wPeriodMin);

Busy Loop/Spinning sometimes takes too long under Windows

I'm using a Windows 7 PC to output voltages at a rate of 1 kHz. At first I simply ended the thread with sleep_until(nextStartTime); however, this proved unreliable, sometimes working fine and sometimes being off by up to 10 ms.
I found other answers here saying that a busy loop might be more accurate, however mine for some reason also sometimes takes too long.
while (true) {
    doStuff(); // is quick enough
    logDelays();
    nextStartTime = chrono::high_resolution_clock::now() + chrono::milliseconds(1);
    spinStart = chrono::high_resolution_clock::now();
    while (chrono::duration_cast<chrono::microseconds>(nextStartTime -
           chrono::high_resolution_clock::now()).count() > 200) {
        spinCount++; // a volatile int
    }
    int spintime = chrono::duration_cast<chrono::microseconds>
        (chrono::high_resolution_clock::now() - spinStart).count();
    cout << "Spin Time micros :" << spintime << endl;
    if (spinCount > 100000000) {
        cout << "reset spincount" << endl;
        spinCount = 0;
    }
}
I was hoping that this would work to fix my issue, however it produces the output:
Spin Time micros :9999
Spin Time micros :9999
...
I've been stuck on this problem for the last 5 hours and I'd very thankful if somebody knows a solution.
According to the comments this code waits correctly:
auto start = std::chrono::high_resolution_clock::now();
const auto delay = std::chrono::milliseconds(1);
while (true) {
    doStuff(); // is quick enough
    logDelays();
    auto spinStart = std::chrono::high_resolution_clock::now();
    while (std::chrono::high_resolution_clock::now() < start + delay) {}
    int spintime = std::chrono::duration_cast<std::chrono::microseconds>
        (std::chrono::high_resolution_clock::now() - spinStart).count();
    std::cout << "Spin Time micros :" << spintime << std::endl;
    start += delay;
}
The important parts are the busy-wait loop, which spins until start + delay has passed, and the update start += delay; in combination they make sure that exactly delay amount of time elapses per iteration, even when outside factors (e.g. a Windows update keeping the system busy) disturb it. In case an iteration takes longer than delay, the loop executes without waiting until it catches up (which may be never, if doStuff is sufficiently slow).
Note that missing an update (due to the system being busy) and then sending 2 at once to catch up might not be the best way to handle the situation. You may want to check the current time inside doStuff and abort/restart the transmission if the timing is wrong by more then some acceptable amount.
On Windows I don't think it's ever possible to get such precise timing, because you cannot guarantee your thread is actually running at the time you desire. Even with low CPU usage and your thread set to real-time priority, it can still be interrupted (by hardware interrupts, as I understand it; I never fully investigated, but I have seen even a simple while(true) ++i; loop at real-time priority get interrupted and then moved between CPU cores). While such interrupts and switches are very quick for a real-time thread, they are still significant if you are trying to drive a signal directly without buffering.
Instead you really want to read and write buffers of digital samples (so at 1 kHz each sample is 1 ms). You need to be sure to queue another buffer before the last one completes, which constrains how small the buffers can be; but at 1 kHz, at real-time priority, with simple code and no other CPU contention, a single-sample buffer (1 ms) might even be possible, which adds at worst 1 ms of latency over "immediate" behavior, though you would have to test. You then leave it up to the hardware and its drivers to handle the precise timing (e.g. making sure each output sample is "exactly" 1 ms apart, to whatever accuracy the vendor claims).
This basically means your code only has to be accurate to 1 ms in the worst case, rather than trying to pursue something far smaller than the OS really supports, such as microsecond accuracy.
As long as you are able to queue a new buffer before the hardware uses up the previous one, it will run at the desired frequency without issue (to use audio as an example again: while the tolerated latencies, and thus the buffers, are often much larger, if you overload the CPU you can sometimes hear audible glitches where an application didn't queue new raw audio in time).
With careful timing you might even be able to get down to a fraction of a millisecond by waiting as long as possible before processing and queuing the next sample (e.g. if you need to reduce input-to-output latency), but remember that the closer you cut it, the more you risk submitting it too late.

Switching an image at specific frequencies c++

I am currently developing a stimuli provider for the brain's visual cortex as part of a university project. The program is to (preferably) be written in C++, using Visual Studio and OpenCV. The way it is supposed to work is that the program creates a number of threads, one per frequency, each running a timer for its respective frequency.
The code looks like this so far:
void timerThread(void *param) {
    t *args = (t*)param; // 't' is a struct defined elsewhere in the project
    int id = args->data1;
    float freq = args->data2;
    unsigned long period = round((double)1000 / (double)freq) - 1;
    while (true) {
        Sleep(period);
        show[id] = 1; // 'show' is a global flag array read by the display loop
        Sleep(period);
        show[id] = 0;
    }
}
It seems to work okay for some of the frequencies, but others vary quite a lot in frame rate. I have tried to look into creating my own timing function, similar to what is done in Arduino's "blinkWithoutDelay" function, though this worked very badly. Also, I have tried with the waitKey() function, this worked quite like the Sleep() function used now.
Any help would be greatly appreciated!
You should use timers instead of "sleep" to fix this, as the loop body itself may take more or less time to complete.
Restart the timer at the start of the loop and read its value right before the reset; this gives you the time the loop took to complete.
If this time is greater than the "period" value, it means you're late, and you need to execute right away (and even lower the period for the next loop).
Otherwise, if it's lower, it means you need to wait until it is greater.
I personally dislike sleep, and instead keep polling the timer until its value exceeds the period.
I suggest looking into "fixed timestep" code, such as the one below. You'll need to put this snippet of code on every thread with varying values for the period (ns) and put your code where "doUpdates()" is.
If you need a "timer" library, since I don't know OpenCV, I recommend SFML (SFML's timer docs).
The following code is from here:
long int start = 0, end = 0;
double delta = 0;
double ns = 1000000.0 / 60.0; // microseconds per update: syncs updates at 60 per second (59 - 61)
while (!quit) {
    start = timeAsMicro(); // current time in microseconds (defined in the linked source)
    delta += (double)(start - end) / ns; // you can skip dividing by ns here and do "delta >= ns" below instead
    end = start;
    while (delta >= 1.0) {
        doUpdates();
        delta -= 1.0;
    }
}
Please mind the fact that in this code, the timer is never reset.
(This may not be completely accurate but is the best assumption I can make to fix your problem given the code you've presented)

How to detect change of system time in Linux?

Is there a way to get notified when there is update to the system time from a time-server or due to DST change? I am after an API/system call or equivalent.
It is part of my effort to optimise generating a value for something similar to SQL NOW() to an hour granularity, without using SQL.
You can use timerfd_create(2) to create a timer, then mark it with the TFD_TIMER_CANCEL_ON_SET option when setting it. Set it for an implausible time in the future and then block on it (with poll/select etc.) - if the system time changes then the timer will be cancelled, which you can detect.
(this is how systemd does it)
e.g.:
#include <sys/timerfd.h>
#include <limits.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <errno.h>

int main(void) {
    int fd = timerfd_create(CLOCK_REALTIME, 0);
    timerfd_settime(fd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET,
                    &(struct itimerspec){ .it_value = { .tv_sec = INT_MAX } },
                    NULL);
    printf("Waiting\n");
    uint64_t expirations;
    if (-1 == read(fd, &expirations, sizeof(expirations))) {
        if (errno == ECANCELED)
            printf("Timer cancelled - system clock changed\n");
        else
            perror("error");
    }
    close(fd);
    return 0;
}
I don't know if there is a way to be notified of a change in the system time, but:
The system time is stored as UTC, so there is never a change due to DST to be notified of.
If my memory is correct, the NTP daemon usually adjusts the clock by slewing it (changing its speed), so again there is no step change to be notified of.
So the only time you would be notified is after an uncommon manual adjustment.
clock_gettime on most recent Linux systems is incredibly fast, and usually amazingly precise as well; you can find out the precision using clock_getres. For hour-level timestamps, converting with localtime() is convenient, since it applies the timezone adjustment for you.
Simply call the appropriate system call and do the division into hours each time you need a timestamp; all the other time adjustments from NTP or elsewhere will already have been applied for you.