Persistent std::chrono time_point<steady_clock> in C++

In my projects I use some time_point<steady_clock> variables in order to perform operations at specific intervals. I want to serialize/deserialize those values to a file.
But it seems that time_since_epoch from a steady_clock is not reliable for this, whereas time_since_epoch from a system_clock is quite OK: it always measures the time from 1970/1/1 (Unix time).
What's the best solution for me? It seems that I would have to somehow convert from steady_clock to system_clock, but I don't think this is directly achievable.
P.S. I already read the topic here: Persisting std::chrono time_point instances

On the cppreference page for std::chrono::steady_clock, it says:
This clock is not related to wall clock time (for example, it can be time since last reboot), and is most suitable for measuring intervals.
The page for std::chrono::system_clock says that most implementations use UTC as an epoch:
The epoch of system_clock is unspecified, but most implementations use Unix Time (i.e., time since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970, not counting leap seconds).
If you're trying to compare times across machines, or hoping to correlate the recorded times to real-world events (e.g. "at 3pm today there was an issue"), then you'll want to switch your code over to using the system clock. Any time you reboot, the steady clock will reset, and it doesn't relate to wall time at all.
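As a hedged sketch of the persistence step itself, assuming microsecond precision and a plain-text format (the helper names and file handling are illustrative, not from the question):
#include <chrono>
#include <cstdint>
#include <fstream>

using usec_tp = std::chrono::time_point<std::chrono::system_clock,
                                        std::chrono::microseconds>;

// Illustrative helpers: store the timestamp as a raw count of
// microseconds since the Unix epoch, which is stable across runs.
void save_timestamp(const usec_tp& tp, const char* path) {
    std::ofstream out(path);
    out << tp.time_since_epoch().count();
}

usec_tp load_timestamp(const char* path) {
    std::ifstream in(path);
    std::int64_t ticks = 0;
    in >> ticks;
    return usec_tp{std::chrono::microseconds{ticks}};
}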
Edit: if you wanted to do an approximate conversion between steady and system timestamps you could do something like this:
#include <chrono>

template <typename To, typename FromTimePoint>
typename To::time_point approximate_conversion(const FromTimePoint& from) {
    const auto to_now = To::now().time_since_epoch();
    const auto from_now = FromTimePoint::clock::now().time_since_epoch();
    // Compute an approximate offset between the two clocks' epochs
    // and apply it to the input timestamp.
    const auto approx_offset = to_now - from_now;
    // duration_cast is needed because the two clocks may tick with different periods.
    return typename To::time_point{
        std::chrono::duration_cast<typename To::duration>(
            from.time_since_epoch() + approx_offset)};
}

int main() {
    auto steady = std::chrono::steady_clock::now();
    auto system = approximate_conversion<std::chrono::system_clock>(steady);
}
This assumes the clocks don't drift apart very quickly, and that there are no large discontinuities in either clock (both of which are false assumptions over long periods of time).

Related

C++ std::chrono::high_resolution_clock time_since_epoch returns too small numbers. How can I get the correct time since 1970 in microseconds?

I am trying to write a function which will return the current time in microseconds since 1970. While debugging I noticed that the returned numbers are too small, for example: 269104616249. I also added a static_assert to check that the returned value type is int64_t, which is big enough to hold 292471 years in microseconds, so integer overflow should not be the issue here.
What am I doing wrong?
Here is my code:
#include <chrono>
#include <cstdint>
#include <type_traits>
using namespace std::chrono;

int64_t NowInMicroseconds() {
    static_assert(std::is_same<decltype(duration_cast<microseconds>(high_resolution_clock::now().time_since_epoch()).count()), int64_t>::value);
    return duration_cast<microseconds>(high_resolution_clock::now().time_since_epoch()).count();
}
int64_t result = NowInMicroseconds();
There are three chrono-supplied clocks in C++11/14/17 (more in C++20):
system_clock: This measures Unix Time (time since 1970, excluding leap seconds).[1]
steady_clock: Like a stopwatch. Great for timing, but it cannot tell you the time of day.
high_resolution_clock: This has the disadvantages of system_clock and steady_clock, and the advantages of neither. Typically it is a type alias to either system_clock or steady_clock, and which one differs with platform.
You have to use system_clock for measuring time since 1970. Note that this is measured in UTC, not your local time zone. In C++11/14/17, to get the local time since 1970 you will have to either manually take your time zone into account, or use this C++20 chrono preview library.
std::int64_t
NowInMicroseconds()
{
    using namespace std::chrono;
    return duration_cast<microseconds>(system_clock::now().time_since_epoch()).count();
}
Consider returning a strong type which means "microseconds since 1970" instead of an integral type. Strong type safety helps you find your logic errors at compile time:
std::chrono::time_point<std::chrono::system_clock, std::chrono::microseconds>
NowInMicroseconds()
{
    using namespace std::chrono;
    return time_point_cast<microseconds>(system_clock::now());
}
[1] This is unspecified in C++11/14/17, but is true on all implementations. C++20 finally nails this epoch down in the spec.
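For illustration, a sketch of what the C++20 local-time route might look like (assumes a standard library with C++20 time-zone support; with the preview library the namespaces differ slightly):
#include <chrono>
#include <cstdint>

std::int64_t NowInLocalMicroseconds()
{
    using namespace std::chrono;
    // zoned_time pairs a system_clock time with the computer's current time zone.
    zoned_time zt{current_zone(), system_clock::now()};
    return duration_cast<microseconds>(zt.get_local_time().time_since_epoch()).count();
}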

chrono: can I validate system clock with steady clock on a time scale of an hour?

My application needs absolute timestamps (i.e. including date and hour) with an error below 0.5 s. The server synchronises via NTP, but I still want to detect if the server clock is not well synchronised for whatever reason.
My idea is to use the steady clock to validate the system clock. I assume that within a period of, say, 1 hour the steady clock should deviate very little from real time (well below 0.5 s). I periodically compare the times measured with the steady and system clocks. If the difference between the two grows or jumps, it may suggest that NTP is adjusting the system clock, which may mean that some of the time values were incorrect.
Here is some example code:
#include <iostream>
#include <chrono>
#include <cstdint>
#include <thread>

int main() {
    using namespace std::chrono;
    const int test_time = 3600; // seconds, approximate
    const int delay = 100;      // milliseconds
    const int iterations = test_time * 1000 / delay;
    // Compare the clocks in a common unit (nanoseconds) instead of raw tick
    // counts, since the two clocks' native periods may differ.
    int64_t system_ns = duration_cast<nanoseconds>(system_clock::now().time_since_epoch()).count();
    int64_t steady_ns = duration_cast<nanoseconds>(steady_clock::now().time_since_epoch()).count();
    const int64_t offset = system_ns - steady_ns;
    for (int i = 0; i < iterations; i++) {
        system_ns = duration_cast<nanoseconds>(system_clock::now().time_since_epoch()).count();
        steady_ns = duration_cast<nanoseconds>(steady_clock::now().time_since_epoch()).count();
        int64_t deviation = system_ns - offset - steady_ns;
        std::cout << deviation / 1e3 << " µs" << std::endl;
        /**
         * Here I put code making use of system_clock
         */
        std::this_thread::sleep_for(milliseconds(delay));
    }
}
Does this procedure make sense? What I'm not sure about in particular is the stability of the steady clock. I assume that it might be subject only to a slight deviation due to imperfections in whatever the server's internal clock is, but maybe I'm missing something?
I was very positively surprised by the test results with the code above. Even when I set it to run for 8 hours, the maximum deviation I saw was only -22 µs, and only around 1 µs for the vast majority of samples.
This question has little to do with C++.
1) Whether this method has a chance to work depends on the accuracy of your computer's internal clock. A cheap clock might drift by a minute a day, which is way over 0.5 s per hour.
2) The method is unable to identify a systematic offset. Say you are constantly behind by a second due to network lag, ping, or some other issue; the method will display a negligible deviation in this case.
Basically, it can only tell you whether the measured time is precise, but it provides little knowledge of its accuracy (see: accuracy vs. precision). The comments also mention issues with the algorithm regarding general clock adjustments.

Consistent Timestamping in C++ with std::chrono

I'm logging timestamps in my program with the following block of code:
// Taken at the relevant time
m.timestamp = std::chrono::high_resolution_clock::now().time_since_epoch();

// After work is done
std::size_t secs  = std::chrono::duration_cast<std::chrono::seconds>(m.timestamp).count();
std::size_t nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(m.timestamp).count() % 1000000000;
std::time_t tp = (std::time_t)secs;
char ts[] = "yyyymmdd HH:MM:SS";
char format[] = "%Y%m%d %H:%M:%S";
strftime(ts, sizeof(ts), format, std::localtime(&tp));  // sizeof(ts), not 80: ts is only 18 bytes long
std::stringstream s;
s << ts << "." << std::setfill('0') << std::setw(9) << nanos
  << " - " << message << std::endl;
return s.str();
I'm comparing these to timestamps recorded by an accurate remote source. When the difference in timestamps is graphed and NTP is not enabled, there is a linear-looking drift through the day (roughly 700 microseconds every 30 seconds).
After correcting for this linear drift, I find that there's a non-linear component as well. It can drift in and out by hundreds of microseconds over the course of hours.
The second graph looks similar to graphs taken with the same methodology as above, but with NTP enabled. The large vertical spikes are expected in the data, but the wiggle in the minimum is surprising.
Is there a way to get a more precise timestamp, but retain microsecond/nanosecond resolution? It's okay if the clock drifts from the actual time in a predictable way, but the timestamps would need to be internally consistent over long stretches of time.
high_resolution_clock has no guaranteed relationship with "current time". Your system may or may not alias high_resolution_clock to system_clock. That means you may or may not get away with using high_resolution_clock in this manner.
Use system_clock. Then tell us if the situation has changed (it may not have).
Also, better style:
using namespace std::chrono;
auto timestamp = ... // however you obtain it, as long as it is based on system_clock
auto secs = duration_cast<seconds>(timestamp);
timestamp -= secs;
auto nanos = duration_cast<nanoseconds>(timestamp);
std::time_t tp = system_clock::to_time_t(system_clock::time_point{secs});
Stay in the chrono type system as long as possible.
Use the chrono type system to do the conversions and arithmetic for you.
Use system_clock::to_time_t to convert to time_t.
But ultimately, none of the above is going to change any of your results. system_clock is just going to talk to the OS (e.g. call gettimeofday or whatever).
If you can devise a more accurate way to tell time on your system, you can wrap that solution up in a "chrono-compatible clock" so that you can continue to make use of the type safety and conversion factors of chrono durations and time_points.
struct my_super_accurate_clock
{
    using rep        = long long;
    using period     = std::nano;  // or whatever?
    using duration   = std::chrono::duration<rep, period>;
    using time_point = std::chrono::time_point<my_super_accurate_clock>;
    static const bool is_steady = false;
    static time_point now();  // do super accurate magic here
};
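Once now() is implemented, such a clock composes with the rest of chrono. A usage sketch:
auto t0 = my_super_accurate_clock::now();
// ... the work being timed ...
auto t1 = my_super_accurate_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0);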
The problem is that unless your machine is very unusual, the underlying hardware simply isn't capable of providing a particularly reliable measurement of time (at least on the scales you are looking at).
Whether on your digital wristwatch or your workstation, most electronic clock signals are internally generated by a crystal oscillator. Such crystals have both long-term (years) and short-term (minutes) variation around their "ideal" frequency, with the largest short-term component being variation with temperature. Fancy lab equipment will have something like a crystal oven which tries to keep the crystal at a constant temperature (above ambient) to minimize temperature-related drift, but I've never seen anything like that on commodity computing hardware.
You see the effects of crystal inaccuracy in a different way in each of your graphs. The first graph simply shows that your crystal ticks at a somewhat large offset from true time, either due to variability at manufacturing (it was always that bad) or long-term drift (it got like that over time). Once you enable NTP, the "constant" or average offset from true time is easily corrected, so you'll expect to average zero offset over some large period of time (indeed the line traced by the minimum dips above and below zero).
At this scale, however, you'll see the smaller short term variations in effect. NTP kicks in periodically and tries to "fix them", but the short term drift is always there and always changing direction (you can probably even check the effect of increasing or decreasing ambient temperature and see it in the graph).
You can't avoid the wiggle, but you could perhaps increase the NTP adjustment frequency to keep it more tightly coupled to real time. Your exact requirements aren't totally clear though. For example you mention:
It's okay if the clock drifts from the actual time in a predictable
way, but the timestamps would need to be internally consistent over
long stretches of time.
What does "internally consistent" mean? If you are OK with arbitrary drift, just use your existing clock without NTP adjustments. If you want something like time that tracks real time "over large timeframes" (i.e,. it doesn't get too out of sync), why could use your internal clock in combination with periodic polling of your "external source", and change the adjustment factor in a smooth way so that you don't have "jumps" in the apparent time. This is basically reinventing NTP, but at least it would be fully under application control.

Get current timestamp in microseconds since epoch?

I have the code below, in which we are trying to get the current timestamp in microseconds since the epoch, but we are using steady_clock.
inline uint64_t get_timestamp()
{
    std::chrono::time_point<std::chrono::steady_clock> ts = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(ts.time_since_epoch()).count();
}
Is this the right way to do that? As per my understanding, steady_clock is used to measure the passage of time, not to get the current time of day. Or should I use system_clock for this, as shown below:
inline uint64_t get_timestamp()
{
    std::chrono::time_point<std::chrono::system_clock> ts = std::chrono::system_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(ts.time_since_epoch()).count();
}
I need to use std::chrono package only since that's what all our code is using.
The epochs of the chrono clocks are unspecified. But practically you can think of the chrono clocks this way:
The epoch of steady_clock is the time your application launched plus a signed random offset. I.e. you can't depend upon the epoch being the same across application launches. But the epoch will remain stable while an application is running.
The epoch of system_clock is time since New Year's 1970, not counting leap seconds, in the UTC time zone. Different implementations implement this with varying precision: libc++ counts microseconds, VS counts 1/10 of microseconds, and gcc counts nanoseconds.
high_resolution_clock is sometimes a type alias for steady_clock and sometimes a type alias for system_clock.
For a time stamp in microseconds I recommend first defining this type alias:
using time_stamp = std::chrono::time_point<std::chrono::system_clock,
                                           std::chrono::microseconds>;
Store that, instead of uint64_t. The type safety of this type will save you countless run time errors. You'll discover your errors at compile time instead.
You can get the current time_stamp with:
using namespace std::chrono;
time_stamp ts = time_point_cast<microseconds>(system_clock::now());
Another possibility for people who couldn't get other solutions to work:
uint64_t microseconds_since_epoch = std::chrono::duration_cast<std::chrono::microseconds>(
    std::chrono::system_clock::now().time_since_epoch()).count();

Difference between std::system_clock and std::steady_clock?

What is the difference between std::system_clock and std::steady_clock? (An example case that illustrates different results/behaviours would be great.)
If my goal is to precisely measure execution time of functions (like a benchmark), what would be the best choice between std::system_clock, std::steady_clock and std::high_resolution_clock?
From N3376:
20.11.7.1 [time.clock.system]/1:
Objects of class system_clock represent wall clock time from the system-wide realtime clock.
20.11.7.2 [time.clock.steady]/1:
Objects of class steady_clock represent clocks for which values of time_point never decrease as physical time advances and for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted.
20.11.7.3 [time.clock.hires]/1:
Objects of class high_resolution_clock represent clocks with the shortest tick period. high_resolution_clock may be a synonym for system_clock or steady_clock.
For instance, the system-wide clock might be affected by something like daylight saving time, at which point the actual time listed at some point in the future can actually be a time in the past. (E.g. in the US, in the fall, time moves back one hour, so the same hour is experienced "twice".) However, steady_clock is not allowed to be affected by such things.
Another way of thinking about "steady" in this case is in the requirements defined in the table of 20.11.3 [time.clock.req]/2:
In Table 59 C1 and C2 denote clock types. t1 and t2 are values returned by C1::now() where the call returning t1 happens before the call returning t2 and both of these calls occur before C1::time_point::max(). [ Note: this means C1 did not wrap around between t1 and t2. —end note ]
Expression: C1::is_steady
Returns: const bool
Operational Semantics: true if t1 <= t2 is always true and the time between clock ticks is constant, otherwise false.
That's all the standard has on their differences.
If you want to do benchmarking, your best bet is probably going to be std::high_resolution_clock, because it is likely that your platform uses a high resolution timer (e.g. QueryPerformanceCounter on Windows) for this clock. However, if you're benchmarking, you should really consider using platform specific timers for your benchmark, because different platforms handle this differently. For instance, some platforms might give you some means of determining the actual number of clock ticks the program required (independent of other processes running on the same CPU). Better yet, get your hands on a real profiler and use that.
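For reference, a minimal portable timing sketch using steady_clock, which cannot be adjusted backwards mid-measurement (work() is just a placeholder workload):
#include <chrono>
#include <iostream>

void work()
{
    // Placeholder for the code under test.
    volatile long sink = 0;
    for (long i = 0; i < 1000000; ++i) sink = sink + i;
}

int main()
{
    auto start = std::chrono::steady_clock::now();
    work();
    auto stop = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
    std::cout << "work() took " << us.count() << " µs\n";
}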
Billy provided a great answer based on the ISO C++ standard that I fully agree with. However, there is another side of the story: real life. It seems that right now there is really no difference between those clocks in the implementations of popular compilers:
gcc 4.8:
#ifdef _GLIBCXX_USE_CLOCK_MONOTONIC
  ...
#else
  typedef system_clock steady_clock;
#endif

typedef system_clock high_resolution_clock;
Visual Studio 2012:
class steady_clock : public system_clock
{   // wraps monotonic clock
public:
    static const bool is_monotonic = true;  // retained
    static const bool is_steady = true;
};

typedef system_clock high_resolution_clock;
In the case of gcc you can check whether you are dealing with a steady clock simply by checking is_steady and behaving accordingly. However, VS2012 seems to cheat a bit here :-)
If you need a high-precision clock, I recommend for now writing your own clock that conforms to the official C++11 clock interface and waiting for implementations to catch up. It will be a much better approach than using an OS-specific API directly in your code.
For Windows you can do it like that:
// Self-made Windows QueryPerformanceCounter based C++11 API compatible clock
#include <chrono>
#include <stdexcept>
#include <string>
#include <windows.h>

struct qpc_clock {
    typedef std::chrono::nanoseconds duration;  // nanoseconds resolution
    typedef duration::rep rep;
    typedef duration::period period;
    typedef std::chrono::time_point<qpc_clock, duration> time_point;
    static bool is_steady;  // = true

    static time_point now()
    {
        if (!is_inited) {
            init();
            is_inited = true;
        }
        LARGE_INTEGER counter;
        QueryPerformanceCounter(&counter);
        return time_point(duration(static_cast<rep>((double)counter.QuadPart / frequency.QuadPart *
                                                    period::den / period::num)));
    }

private:
    static bool is_inited;  // = false
    static LARGE_INTEGER frequency;

    static void init()
    {
        if (QueryPerformanceFrequency(&frequency) == 0)
            throw std::logic_error("QueryPerformanceCounter not supported: " + std::to_string(GetLastError()));
    }
};
For Linux it is even easier. Just read the man page of clock_gettime and modify the code above.
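A sketch of what that Linux modification might look like, using clock_gettime(CLOCK_MONOTONIC, ...) (illustrative, Linux-specific):
// Self-made Linux clock_gettime(CLOCK_MONOTONIC) based C++11 API compatible clock (sketch)
#include <chrono>
#include <ctime>

struct monotonic_clock {
    typedef std::chrono::nanoseconds duration;
    typedef duration::rep rep;
    typedef duration::period period;
    typedef std::chrono::time_point<monotonic_clock, duration> time_point;
    static const bool is_steady = true;

    static time_point now()
    {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return time_point(duration(static_cast<rep>(ts.tv_sec) * 1000000000LL + ts.tv_nsec));
    }
};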
GCC 5.3.0 implementation
The C++ stdlib is inside the GCC source:
high_resolution_clock is an alias for system_clock
system_clock forwards to the first of the following that is available:
    clock_gettime(CLOCK_REALTIME, ...)
    gettimeofday
    time
steady_clock forwards to the first of the following that is available:
    clock_gettime(CLOCK_MONOTONIC, ...)
    system_clock
Then CLOCK_REALTIME vs CLOCK_MONOTONIC is explained at: Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?
Maybe the most significant difference is the fact that the starting point of std::chrono::system_clock is 1.1.1970, the so-called Unix epoch.
On the other side, the epoch of std::chrono::steady_clock is typically the boot time of your PC, which makes it most suitable for measuring intervals.
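A quick way to observe the difference (the printed values are implementation-defined; steady_clock's count is typically small because its epoch is recent):
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    // system_clock: large count, measured from the 1970 Unix epoch.
    std::cout << duration_cast<seconds>(system_clock::now().time_since_epoch()).count()
              << " s (system_clock)\n";
    // steady_clock: small count, typically measured from boot.
    std::cout << duration_cast<seconds>(steady_clock::now().time_since_epoch()).count()
              << " s (steady_clock)\n";
}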
Relevant talk about chrono by Howard Hinnant, author of chrono:
don't use high_resolution_clock, as it's an alias for one of these:
system_clock: it's like a regular clock, use it for time/date related stuff
steady_clock: it's like a stopwatch, use it for timing things.