Is there a way to get notified when there is an update to the system time from a time server or due to a DST change? I am after an API/system call or equivalent.
It is part of my effort to optimise generating a value similar to SQL NOW() at hour granularity, without using SQL.
You can use timerfd_create(2) to create a timer, then mark it with the TFD_TIMER_CANCEL_ON_SET option when setting it. Set it for an implausible time in the future and then block on it (with poll/select etc.) - if the system time changes then the timer will be cancelled, which you can detect.
(this is how systemd does it)
e.g.:
#include <sys/timerfd.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
int main(void) {
    int fd = timerfd_create(CLOCK_REALTIME, 0);

    timerfd_settime(fd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET,
                    &(struct itimerspec){ .it_value = { .tv_sec = INT_MAX } },
                    NULL);

    printf("Waiting\n");

    char buffer[10];
    if (-1 == read(fd, buffer, sizeof(buffer))) {
        if (errno == ECANCELED)
            printf("Timer cancelled - system clock changed\n");
        else
            perror("error");
    }

    close(fd);
    return 0;
}
I don't know whether there is a way to be notified of a change in the system time, but:
The system time is stored as UTC, so a DST change never modifies it and there is nothing to be notified of.
If my memory is correct, the NTP daemon usually adjusts the clock by slewing it (changing its speed slightly), so again there is no step change to be notified of.
So the only case where you would need a notification is after an uncommon manual manipulation of the clock.
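If you are curious whether the clock is currently being disciplined/slewed by an NTP daemon, you can ask the kernel with adjtimex(2). A minimal read-only sketch (Linux-specific, no privileges needed; this is an illustration, not something the original answer relies on):

#include <sys/timex.h>
#include <cstdio>
#include <cstring>

int main() {
    struct timex tx;
    std::memset(&tx, 0, sizeof(tx));
    tx.modes = 0;                  // no modification - just read the current clock state
    int state = adjtimex(&tx);     // returns TIME_OK, TIME_ERROR, etc.
    std::printf("state=%d status=0x%x offset=%ld freq=%ld\n",
                state, (unsigned)tx.status, (long)tx.offset, (long)tx.freq);
    return 0;
}

A non-zero offset or the STA_PLL bit in tx.status indicates that the clock is being gradually adjusted rather than stepped.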
clock_gettime on most recent Linux systems is incredibly fast, and usually impressively precise as well; you can find out the precision using clock_getres. But for hour-level timestamps, gettimeofday plus localtime() might be more convenient, since localtime() does the timezone adjustment for you (the timezone argument of gettimeofday itself is obsolete on Linux).
Simply call the appropriate system call and do the division into hours each time you need a timestamp; all the other time adjustments from NTP or whatever will already have been done for you.
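As a minimal sketch of that approach (assuming UTC hour buckets are acceptable; localtime_r is shown for the local-hour case):

#include <time.h>
#include <stdio.h>

int main() {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);            // wall-clock time, UTC
    long utc_hour_bucket = (long)(ts.tv_sec / 3600); // hours since the epoch

    struct tm local;
    localtime_r(&ts.tv_sec, &local);               // timezone/DST-adjusted breakdown
    printf("UTC hour bucket: %ld, local hour: %d\n", utc_hour_bucket, local.tm_hour);
    return 0;
}

Since the hour bucket only changes when the hour rolls over, you can cache the derived value and regenerate it only when the bucket changes.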
I'm trying to measure the resolution of getrusage with a simple program:
#include <cstdio>
#include <sys/time.h>
#include <sys/resource.h>
#include <cassert>
int main(int argc, const char *argv[]) {
    struct rusage u = {0};
    assert(!getrusage(RUSAGE_SELF, &u));
    size_t cnt = 0;
    while (true) {
        ++cnt;
        struct rusage uz = {0};
        assert(!getrusage(RUSAGE_SELF, &uz));
        if (u.ru_utime.tv_sec != uz.ru_utime.tv_sec || u.ru_utime.tv_usec != uz.ru_utime.tv_usec) {
            std::printf("u:%ld.%06ld\tuz:%ld.%06ld\tcnt:%zu\n",
                        u.ru_utime.tv_sec, u.ru_utime.tv_usec,
                        uz.ru_utime.tv_sec, uz.ru_utime.tv_usec,
                        cnt);
            break;
        }
    }
}
And when I run it, I usually get output similar to the following:
ema#scv:~/tmp/getrusage$ ./gt
u:0.000562 uz:0.000563 cnt:1
ema#scv:~/tmp/getrusage$ ./gt
u:0.000553 uz:0.000554 cnt:1
ema#scv:~/tmp/getrusage$ ./gt
u:0.000496 uz:0.000497 cnt:1
ema#scv:~/tmp/getrusage$ ./gt
u:0.000475 uz:0.000476 cnt:1
This seems to hint that the resolution of getrusage is around 1 microsecond.
I thought it would be around 1 / getconf CLK_TCK (i.e. 100 Hz, hence 10 milliseconds).
What is the true getrusage resolution?
Am I doing anything wrong?
Ps. Running this on Ubuntu 20.04, Linux scv 5.13.0-52-generic #59~20.04.1-Ubuntu SMP Thu Jun 16 21:21:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux, 5950x.
The publicly defined tick interval is nothing more than a common reference point for the default time-slice that each process gets to run. When its tick expires, the process loses its assigned CPU, which then begins executing some other task that is given another tick-long timeslice to run.
But that does not guarantee that a given process will run for its full tick. If a process attempts to read() an empty socket and has nothing to do in the middle of a tick, the kernel is not going to leave that CPU doing nothing; it will find something better for it to do instead. The kernel knows exactly how long the process actually ran, and there is no reason whatsoever why the actual running time of the process cannot be recorded in its usage statistics, especially when the clock used for measuring process execution time offers much more granularity than the tick interval.
Finally, the modern Linux kernel can be configured not to use tick intervals at all in specific situations, so its advertised tick interval is mostly academic.
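If you want to see what granularity the kernel actually advertises for CPU-time accounting, you can ask clock_getres(2) for the per-process CPU-time clock; a quick sketch:

#include <time.h>
#include <stdio.h>

int main() {
    struct timespec res;
    clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res);   // per-process CPU-time clock
    printf("CPU-time clock resolution: %ld s %ld ns\n",
           (long)res.tv_sec, res.tv_nsec);
    return 0;
}

On current kernels this typically reports nanosecond resolution; note, though, that getrusage reports ru_utime/ru_stime in struct timeval fields, so the finest it can ever show is 1 microsecond.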
I'm working on a smart irrigation system built on an ESP8266 microcontroller, coded in the Arduino IDE. Basically, I want to send temperature data to my database every 4 minutes, check the sensors every 10 minutes, or generally execute some code after a certain time, but I don't want to use delay(), since the other code in the same loop should keep executing.
So, is there a way to use another function that lets the rest of the code run and, when the timer is done, executes the send to the database?
Thanks
The Arduino package manager has a library called NTPClient, which provides, as you might guess, an NTP client. Using this with the system clock lets you keep reasonable time without a battery-backed RTC.
Once you know what time it is, scheduling events becomes trivial. This isn't a fully working example, but should give you the idea. The docs have more information. The NTPClient also works quite well with the AceTime library for time zone and other management.
#include <NTPClient.h>
#include <WiFiUdp.h>
#include <ESP8266WiFi.h>
#include <AceTime.h>
#include <ctime>
using namespace ace_time;
using namespace ace_time::clock;
using namespace std;
WiFiUDP ntpUDP;
const long utcOffsetInSeconds = 0;
// Get UTC from NTP - let AceTime figure out the local time
NTPClient timeClient(ntpUDP, "pool.ntp.org", utcOffsetInSeconds, 3600);
//3600s/h - sync time once per hour, this is sufficient.
// don't hammer pool.ntp.org!
static BasicZoneProcessor estProcessor;
static SystemClockLoop systemClock(nullptr /*reference*/, nullptr /*backup*/);
const char* ssid = "YOUR_SSID";         // your SSID
const char* password = "YOUR_PASSWORD"; // your wifi password
void setup() {
    WiFi.mode(WIFI_STA);
    WiFi.begin(ssid, password);
    Serial.println("Connecting to wifi...");
    while (WiFi.status() != WL_CONNECTED) {
        Serial.println('.');
        delay(500);
    }
    systemClock.setup();
    timeClient.begin();
}
int clockUpdateCounter = 50000;

void loop() {
    // Update the system clock from the NTP client periodically.
    // A simple counter is used here; this could be more elegant - it depends on
    // your loop speed, and you could use the real time for this too.
    if (++clockUpdateCounter > 50000) {
        clockUpdateCounter = 0;
        timeClient.update();
        // It doesn't matter how often you call timeClient.update() - you can put
        // it in the main loop() if you want. We supplied 3600 s as the refresh
        // interval in the constructor, and internally .update() checks whether
        // enough time has passed before making a request to the NTP server itself.
        auto estTz = TimeZone::forZoneInfo(&zonedb::kZoneAmerica_Toronto, &estProcessor);
        auto estTime = ZonedDateTime::forUnixSeconds(timeClient.getEpochTime(), estTz);
        // Using the system clock is optional, but has a few conveniences. Here we
        // just make sure that the systemClock remains sync'd with the NTP server.
        // It doesn't matter how often you do this, but around once an hour is plenty.
        systemClock.setNow(estTime.toEpochSeconds());
    }
    systemClock.loop();
    delay(30);
}
The above keeps the system clock in sync with the NTP server. You can then use the system clock to fetch the time and use it for all kinds of timing purposes, e.g.:
// get Time
acetime_t nowT = systemClock.getNow();
auto estTz = TimeZone::forZoneInfo(&zonedb::kZoneAmerica_Toronto, &estProcessor);
auto nowTime = ZonedDateTime::forEpochSeconds(nowT, estTz);
Checking the time is fast, so you can do it inside loop() and only trigger the action once the desired interval has elapsed.
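For the original question (send to the database every 4 minutes without delay()), the usual pattern is a non-blocking interval check inside loop(). A rough sketch that builds on the systemClock above; sendToDatabase() is a hypothetical stand-in for your upload code:

const acetime_t SEND_INTERVAL_SECONDS = 4 * 60;    // 4 minutes

void maybeSendToDatabase() {
    static acetime_t lastSend = 0;
    acetime_t now = systemClock.getNow();          // epoch seconds from the synced clock
    if (now - lastSend >= SEND_INTERVAL_SECONDS) {
        lastSend = now;
        sendToDatabase();                          // hypothetical helper: your upload code
    }
    // otherwise return immediately - nothing here blocks the main loop()
}

Call maybeSendToDatabase() from loop(); a second interval/timestamp pair handles the 10-minute sensor check the same way.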
This is a particularly nice solution for database integration since you can record a real timestamp of the actual date and time when logging your data to the database.
Currently I am coding a project that requires precise delay times across a number of computers. This is the code I am using; I found it on a forum:
{
    LONGLONG timerResolution;
    LONGLONG wantedTime;
    LONGLONG currentTime;

    QueryPerformanceFrequency((LARGE_INTEGER*)&timerResolution);
    timerResolution /= 1000;

    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    wantedTime = currentTime / timerResolution + ms;

    currentTime = 0;
    while (currentTime < wantedTime)
    {
        QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
        currentTime /= timerResolution;
    }
}
Basically, the issue I am having is that this uses a lot of CPU (around 16-20%) when I start to call the function. The usual Sleep() uses zero CPU, but it is extremely inaccurate. From what I have read on multiple forums, that's the trade-off when you trade accuracy for CPU usage, but I thought I had better raise the question before I settle for this sleep method.
The reason it's using 15-20% CPU is likely that it's using 100% of one core, as there is nothing in the loop to slow it down.
In general, this is a "hard" problem to solve, as PCs (more specifically, the OSes running on those PCs) are generally not made for running real-time applications. If that is absolutely required, you should look into real-time kernels and OSes.
For this reason, the guarantee that is usually made about sleep times is that the system will sleep for at least the specified amount of time.
If you are running Linux you could try using the nanosleep method (http://man7.org/linux/man-pages/man2/nanosleep.2.html), though I don't have any experience with it.
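For reference, a minimal nanosleep-based delay helper (POSIX; like Sleep(), it only guarantees that the thread sleeps for at least the requested time, and it may return early if interrupted by a signal):

#include <time.h>

void sleep_ms(long ms) {
    struct timespec ts;
    ts.tv_sec = ms / 1000;                    // whole seconds
    ts.tv_nsec = (ms % 1000) * 1000000L;      // remainder in nanoseconds
    nanosleep(&ts, NULL);                     // may return early with EINTR on a signal
}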
Alternatively you could go with a hybrid approach where you use sleeps for long delays, but switch to polling when it's almost time:
#include <thread>
#include <chrono>
using namespace std::chrono_literals;
...
wantedTime = currentTime / timerResolution + ms;
currentTime = 0;
while (currentTime < wantedTime)
{
    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    currentTime /= timerResolution;
    if (wantedTime - currentTime > 100) // if more than 100 ms of waiting remain
    {
        // Sleep for a value significantly lower than the remaining 100 ms,
        // to ensure that we don't "oversleep"
        std::this_thread::sleep_for(50ms);
    }
}
Now this is a bit prone to race conditions, as it assumes that the OS will hand control back to the program within 50 ms after the sleep_for is done. To further combat this you could shorten the sleep (to, say, 1 ms).
You can set the Windows timer resolution to its minimum (usually 1 ms) to make Sleep() accurate to about 1 ms. By default it is only accurate to about 15 ms. See the Sleep() documentation.
Note that your execution can be delayed if other programs are consuming CPU time, but this could also happen if you were waiting with a timer.
#include <windows.h>
#include <timeapi.h>  // link with winmm.lib

// Sleep(1) takes about 15 ms (or whatever the default resolution is)
Sleep(1);

TIMECAPS caps_;
timeGetDevCaps(&caps_, sizeof(caps_));

timeBeginPeriod(caps_.wPeriodMin);
// Sleep(1) now takes about 1 ms
Sleep(1);
timeEndPeriod(caps_.wPeriodMin);
I'm writing a stats server to count visit data for each day, so I have to clear the data in the db (memcached) every day.
Currently, I call gettimeofday to get the date and compare it with the cached date to check whether they are the same day, and this happens very frequently.
Sample code as below:
void report_visits(...) {
    std::string date = CommonUtil::GetStringDate(); // through gettimeofday
    if (date != static_cached_date_) {
        flush_db_date();
        static_cached_date_ = date;
    }
}
The problem is that I have to call gettimeofday every time a client reports visit information, and gettimeofday is time-consuming.
Any solution for this problem?
The gettimeofday system call (now obsolete in favor of clock_gettime) is among the cheapest system calls to execute. The last time I measured it, on an Intel i486, it took around 2 µs. The kernel-internal version is used to timestamp network packets, and the read, write, and chmod system calls use it to update the timestamps in the filesystem inodes, and the like. If you want to measure how much time you spend in the gettimeofday system call, just do several (the more, the better) pairs of calls, one immediately after the other, record the timestamp differences between them, and finally take the minimum of the samples as the proper value. That will be a good approximation of the ideal value.
Consider that if the kernel uses it to timestamp each read you do on a file, you can freely use it to timestamp each service request without serious penalty.
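A sketch of that measurement technique (back-to-back calls, keeping the minimum difference):

#include <sys/time.h>
#include <stdio.h>

int main() {
    struct timeval a, b;
    long best_us = 1000000;                        // start with an absurdly large value
    for (int i = 0; i < 100000; i++) {
        gettimeofday(&a, NULL);
        gettimeofday(&b, NULL);                    // immediately after the first call
        long d = (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
        if (d < best_us) best_us = d;              // the minimum approximates the call overhead
    }
    printf("approx. gettimeofday overhead: %ld us\n", best_us);
    return 0;
}

On a modern Linux with the vDSO, the measured minimum will often be 0 or 1 µs, which is precisely the point: the call is too cheap to worry about.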
Another thing: don't use (as suggested by other responses) a routine that converts the gettimeofday result to a string, as that consumes far more resources. You can compare the timestamps directly (call them t1 and t2):
gettimeofday(&t2, NULL);
if (t2.tv_sec - t1.tv_sec > 86400) { /* 86400 is one day in seconds */
    erase_cache();
    t1 = t2;
}
or, if you want it to occur every day at the same time:
gettimeofday(&t2, NULL);
if (t2.tv_sec / 86400 > t1.tv_sec / 86400) {
    /* tv_sec / 86400 is the number of whole days since 1/1/1970, so
     * if it changes, a change of date has occurred */
    erase_cache();
}
t1 = t2; /* updating t1 unconditionally ties the check to the change of date,
          * not to 24 hours since the last flush */
You can even use the time() system call for this, as it has second resolution (and you don't need to deal with the microseconds or the overhead of struct timeval).
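A sketch of that time()-based variant, assuming the same erase_cache() routine as above (last_flush_day is a hypothetical state variable for illustration):

#include <time.h>

void erase_cache();                              // same cache-flush routine as above

static time_t last_flush_day = 0;                // whole days since 1/1/1970

void day_change_check() {
    time_t now = time(NULL);                     // second resolution is plenty here
    if (now / 86400 > last_flush_day) {          // the calendar date has changed
        last_flush_day = now / 86400;
        erase_cache();
    }
}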
(This is an old question, but there is an important answer missing:)
You need to define the TZ environment variable and export it to your program. If it is not set, you will incur a stat(2) call on /etc/localtime... for every single call to gettimeofday(2), localtime(3), etc.
Of course these calls will get answered without going to disk, but the frequency of the calls and the overhead of the syscall are enough to make an appreciable difference in some situations.
Supporting documentation:
How to avoid excessive stat(/etc/localtime) calls in strftime() on linux?
https://blog.packagecloud.io/eng/2017/02/21/set-environment-variable-save-thousands-of-system-calls/
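If you can't change the environment the process is started with, you can set TZ at program start instead. A small sketch, assuming the glibc behaviour described in the links above:

#include <stdlib.h>
#include <time.h>

int main() {
    setenv("TZ", ":/etc/localtime", 1);  // an explicit TZ stops the repeated lookups
    tzset();                             // read the zone data once, up front
    // ... rest of the program; localtime()/strftime() no longer re-stat the file
    return 0;
}

Alternatively, export TZ=:/etc/localtime in the environment that starts the server.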
To summarise:
The check, as you say, is done up to a few thousand times per second.
You're flushing a cache once a day.
Assuming that the exact time at which you flush is not critical and can be seconds (or perhaps even minutes) late, there is a very simple and practical solution:
void report_visits(...)
{
    static unsigned int counter;
    if ((counter++ % 1000) == 0)
    {
        std::string date = CommonUtil::GetStringDate();
        if (date != static_cached_date_)
        {
            flush_db_date();
            static_cached_date_ = date;
        }
    }
}
Just do the check once every N times that report_visits() is called. In the above example N is 1000. With up to a few thousand checks per second, you'll be less than a second (or about 0.001% of a day) late.
Don't worry about counter wrap-around; it only happens about once every 20+ days (assuming a few thousand checks per second at most, with a 32-bit unsigned int), and it does not hurt.
I am using timed_wait from the Boost C++ library and I am running into a problem with leap seconds.
Here is a quick test:
#include <boost/thread.hpp>
#include <stdio.h>
#include <iostream>
#include <boost/date_time/posix_time/posix_time.hpp>

int main() {
    // Determine the absolute time for this timer.
    boost::system_time tAbsoluteTime = boost::get_system_time() + boost::posix_time::milliseconds(35000);

    bool done;
    boost::mutex m;
    boost::condition_variable cond;
    boost::unique_lock<boost::mutex> lk(m);

    while (!done)
    {
        if (!cond.timed_wait(lk, tAbsoluteTime))
        {
            done = true;
            std::cout << "timed out";
        }
    }

    return 1;
}
The timed_wait function is returning 24 seconds earlier than it should. 24 seconds is the current amount of leap seconds in UTC.
So, boost is widely used but I could not find any info about this particular problem. Has anyone else experienced this problem? What are the possible causes and solutions?
Notes: I am using boost 1.38 on a linux system. I've heard that this problem doesn't happen on MacOS.
UPDATE: A little more info: This is happening on 2 redhat machines with kernel 2.6.9. I have executed the same code on an ubuntu machine with kernel 2.6.30 and the timer behaves as expected.
So, what I think is that this is probably being caused by the OS or by some mis-set configuration on the Red Hat machines.
I have coded a workaround that adjusts the time to UTC, then gets the difference from this adjustment and adds it to the original time. This seems like a bad idea to me, because if this code is executed on a machine without this problem, it might be 24 s AHEAD. I still could not find the reason for this.
On a Linux system, the system clock will follow the POSIX standard, which mandates that leap seconds are NOT observed! If you expected otherwise, that's probably the source of the discrepancy you're seeing. This document has a great explanation of how UTC relates to other time scales, and the problems one is likely to encounter if one relies on the operating system's concept of timekeeping.
Is it possible that done is getting set prematurely and a spurious wakeup is causing the loop to exit sooner than you expected?
OK, here is what I did. It's a workaround and I am not happy with it, but it was the best I could come up with:
int main() {
    typedef boost::date_time::c_local_adjustor<boost::system_time> local_adj;

    // Determine the absolute time for this timer.
    boost::system_time tAbsoluteTime = boost::get_system_time() + boost::posix_time::milliseconds(25000);

    /*
     * A leap second is a positive or negative one-second adjustment to the Coordinated
     * Universal Time (UTC) time scale that keeps it close to mean solar time.
     * UTC, which is used as the basis for official time-of-day radio broadcasts for civil time,
     * is maintained using extremely precise atomic clocks. To keep the UTC time scale close to
     * mean solar time, UTC is occasionally corrected by an adjustment, or "leap",
     * of one second.
     */
    boost::system_time tAbsoluteTimeUtc = local_adj::utc_to_local(tAbsoluteTime);

    // Calculate the local-to-utc difference.
    boost::posix_time::time_duration tLocalUtcDiff = tAbsoluteTime - tAbsoluteTimeUtc;

    // Get only the seconds from the difference. These are the leap seconds.
    tAbsoluteTime += boost::posix_time::seconds(tLocalUtcDiff.seconds());

    bool done;
    boost::mutex m;
    boost::condition_variable cond;
    boost::unique_lock<boost::mutex> lk(m);

    while (!done)
    {
        if (!cond.timed_wait(lk, tAbsoluteTime))
        {
            done = true;
            std::cout << "timed out";
        }
    }

    return 1;
}
I've tested it on both problematic and non-problematic machines and it worked as expected on both, so I'm keeping it until I can find a better solution.
Thank you all for your help.