std chrono time synchronization - c++

I need to send some data over the network with timestamps, and this time should have high precision. Looking at the std::chrono clocks I found that std::chrono::*_clock::now() returns a time_point that depends on the clock's epoch. I failed to find out which epoch is used by each clock and which of them can be used safely when sent over the network. For example, on Windows high_resolution_clock is a wrapper around QueryPerformanceCounter; it has good precision but, I think, is useless as a timestamp for network tasks.
So the question is: how do I "synchronize" a high-resolution clock over the network?

std::chrono::system_clock's epoch is currently unspecified. However it is portably a measure of time since 1970-01-01 00:00:00 UTC, neglecting leap seconds. This is consistent with what is called Unix Time.
I am currently working to standardize this existing practice. I have private, unofficial assurances from the implementors of std::chrono::system_clock that they will not change their existing practice in the interim.
The other two std-defined chrono clocks: high_resolution_clock, and steady_clock, do not have portably defined epochs.
Note that system_clock, though it has a (de-facto) portable epoch, does not have portable precision. On clang/libc++ the precision is microseconds, on gcc it is nanoseconds, and on Windows it is 100 ns. So you might use time_point_cast<microseconds>(system_clock::now()) to obtain portable precision.

If you want "high precision" synchronization (you should specify what precision you actually need), you would first of all need to make all your network devices regularly synchronize with the same NTP server. You need to configure the machines to adjust their clocks every few minutes (or seconds), because after an update (which involves network delays that make the clocks imprecise at the millisecond level), they will start to drift again.

Related

What are the pros & cons of the different C++ clocks for logging time stamps?

When printing my logs, I want each message to have a time stamp, measuring time since start of the program. Preferably in nanoseconds, though milliseconds are fine as well:
( 110 ns) Some log line
( 1220 ns) Another log line
( 2431 ns) Now for some computation...
(10357 ns) Error!
To my understanding, there are three different clocks in the C++ chrono library and two more C-style clocks:
std::chrono::high_resolution_clock
std::chrono::system_clock
std::chrono::steady_clock
std::time
std::clock
What are the pros and cons of each of those for the task described above?
system_clock is a clock that keeps time with UTC (excluding leap seconds). Every once in a while (maybe several times a day), it gets adjusted by small amounts, to keep it aligned with the correct time. This is often done with a network service such as NTP. These adjustments are typically on the order of microseconds, but can be either forward or backwards in time. It is actually possible (though not likely nor common) for timestamps from this clock to go backwards by tiny amounts. Unless abused by an administrator, system_clock does not jump by gross amounts, say due to daylight saving, or changing the computer's local time zone, since it always tracks UTC.
steady_clock is like a stopwatch. It has no relationship to any time standard. It just keeps ticking. It may not keep perfect time (no clock does really). But it will never be adjusted, especially not backwards. It is great for timing short bits of code. But since it never gets adjusted, it may drift over time with respect to system_clock which is adjusted to keep in sync with UTC.
This boils down to the fact that steady_clock is best for timing short durations. It also typically has nanosecond resolution, though that is not required. And system_clock is best for timing "long" times where "long" is very fuzzy. But certainly hours or days qualifies as "long", and durations under a second don't. And if you need to relate a timestamp to a human readable time such as a date/time on the civil calendar, system_clock is the only choice.
high_resolution_clock is allowed to be a type alias for either steady_clock or system_clock, and in practice always is. But some platforms alias to steady_clock and some to system_clock. So imho, it is best to just directly choose steady_clock or system_clock so that you know what you're getting.
Though not specified, std::time is typically restricted to a resolution of a second. So it is completely unusable for situations that require subsecond precision. Otherwise std::time tracks UTC (excluding leap seconds), just like system_clock.
std::clock tracks processor time, as opposed to physical time. That is, when your thread is not busy doing something, and the OS has parked it, measurements of std::clock will not reflect time increasing during that down time. This can be really useful if that is what you need to measure. And it can be very surprising if you use it without realizing that processor time is what you're measuring.
And new for C++20
C++20 adds four more clocks to the <chrono> library:
utc_clock is just like system_clock, except that it counts leap seconds. This is mainly useful when you need to subtract two time_points across a leap second insertion point, and you absolutely need to count that inserted leap second (or a fraction thereof).
tai_clock measures seconds since 1958-01-01 00:00:00 and is offset 10s ahead of UTC at this date. It doesn't have leap seconds, but every time a leap second is inserted into UTC, the calendrical representation of TAI and UTC diverge by another second.
gps_clock models the GPS time system. It measures seconds since the first Sunday of January, 1980 00:00:00 UTC. Like TAI, every time a leap second is inserted into UTC, the calendrical representation of GPS and UTC diverge by another second. Because of the similarity in the way that GPS and TAI handle UTC leap seconds, the calendrical representation of GPS is always behind that of TAI by 19 seconds.
file_clock is the clock used by the filesystem library, and is what produces the chrono::time_point aliased by std::filesystem::file_time_type.
One can use a new named cast in C++20 called clock_cast to convert among the time_points of system_clock, utc_clock, tai_clock, gps_clock and file_clock. For example:
auto tp = clock_cast<system_clock>(last_write_time("some_path/some_file.xxx"));
The type of tp is a system_clock-based time_point with the same duration type (precision) as file_time_type.

Is there a standard library implementation where high_resolution_clock is not a typedef?

The C++ Draft par 20.12.7.3 reads:
high_resolution_clock may be a synonym for system_clock or steady_clock
Of course this "may" mandates nothing, but I wonder:
Is there any point in high_resolution_clock being something other than a typedef?
Are there such implementations?
If a clock with a shorter tick period is devised, it can be either steady or not steady. So if such a mechanism exists, wouldn't we want to "improve" system_clock and high_resolution_clock as well, defaulting to the typedef solution once more?
The reason that specs have wording such as "may" and "can", and other vague words that allow for other possibilities, comes from the spec writers' wish not to (unnecessarily) rule out a "better" implementation of something.
Imagine a system where time in general is counted in seconds, and the system_clock is just that - the system_clock::period will return 1 second. This time is stored as a single 64-bit integer.
Now, on the same system, there is also a time in nanoseconds, but it's stored as a 128-bit integer. The resulting time calculations are slightly more complex due to this large integer format, and someone who only needs 1 s precision (on a system where a large number of calculations on time are made) wouldn't want the extra penalty of using high_resolution_clock when they don't need it.
As to whether such things exist in real life, I'm not sure. The key is that it's not a violation of the standard if you care to implement it that way.
Note that steady is very much a property of "what happens when the system changes time" (e.g. if the outside network has been down for several days, and the internal clock in the system has drifted away from the atomic clock that network time updates track). Using steady_clock guarantees that time doesn't go backwards or suddenly jump forward by 25 seconds. Likewise, there is no problem when a "leap second" or similar time adjustment occurs in the computer system. On the other hand, system_clock is guaranteed to give you the correct new time if you add a forward duration that crosses a daylight-saving boundary, or some such, where steady_clock will just tick along hour after hour, regardless. So choosing the right one of those will affect the recording of your favourite program in the digital TV recorder - steady_clock would record at the wrong time [my DTV recorder did this wrong a few years back, but they appear to have fixed it now].
system_clock should also take into account the user (or sysadmin) changing the clock in the system; steady_clock should NOT do so.
Again, high_resolution_clock may or may not be steady - it's up to the implementor of the C++ library to give the appropriate response to is_steady.
In gcc 4.9.2's version of <chrono>, we find using high_resolution_clock = system_clock;, so in this case it's a direct typedef (by a different name). But the spec doesn't REQUIRE this.

Get reliable monotonic time in c++(multiplatform)

I need a monotonic clock that can be used to calculate intervals.
Requirements:
Must be monotonic, must not be influenced by the device time.
Must not reset during an application session (same epoch for all return values in a session).
Must represent real life seconds (not cpu seconds), must not be influenced by number of threads/processes running at that time.
Seconds resolution is sufficient.
In my research I have found
Candidates:
std::clock()(ctime) - Seems to use cpu seconds
boost::chrono::steady_clock() - Does it use cpu seconds? Can the epoch change during an application session (launch to end)?
Platform specific methods(clock_gettime, mach_absolute_time).
Have you ever encountered such a problem, and what solution did you choose? Is steady_clock reliable across platforms?
I would use std::chrono::steady_clock. By its description, it is not influenced by wall-clock/system-time changes and is best suited for measuring intervals.
boost::chrono::steady_clock() - Does it use cpu seconds? Can the epoch change during an application session(launch-end)?
std::chrono::steady_clock is specified to use real-time seconds and is not adjusted during sessions. I assume boost's implementation adheres to that. I haven't had issues with std::chrono::steady_clock, except that the resolution on some platforms is lower than on others.

Portable good precision double timestamp in C++?

Here's what I'd need to do:
double now=getdoubletimestampsomehow();
Where getdoubletimestampsomehow() should be a straightforward, easy-to-use function returning a double value representing the number of seconds elapsed since a given date. I'd need it to be quite precise, but I don't really need it to be more precise than a few milliseconds. Portability is quite important; if it isn't possible to port it directly everywhere, could you please tell me both a Unix and a Windows way to do it?
Have you looked at Boost, and particularly its Date_Time library? Here is the seconds-since-epoch example.
You will be hard-pressed to find something more portable, and of higher resolution.
Portable good precision double timestamp in C++?
There is no portable way to get a high-precision (millisecond) timestamp without using 3rd-party libraries (this answer predates C++11's <chrono>). The maximum precision you'll get is 1 second, using time/localtime/gmtime.
If you're fine with 3rd party libraries, use either Boost or Qt 4.
both an unix and a windows way to do it?
GetSystemTime on Windows and gettimeofday on linux.
Please note that if you're planning to use timestamps to determine the order of some events, it might be a bad idea. The system clock might have very limited precision (10 milliseconds on the Windows platform), in which case several operations performed consecutively can produce the same timestamp. So, to determine the order of events you would need "logical timestamps" (a "vector clock" is one example).
On windows platform, there are highly precise functions that can be used to determine how much time has passed since some point in the past (QueryPerformanceCounter), but they aren't connected to timestamps.
C++11 introduced the <chrono> header containing quite a few portable clocks. The highest resolution clock among them is the std::chrono::high_resolution_clock.
It provides the current time as a std::chrono::time_point object which has a time_since_epoch member. This might contain what you want.
Reference:
Prior to the release of the C++11 standard, there was no standard way in which one could accurately measure the execution time of a piece of code. The programmer was forced to use external libraries like Boost, or routines provided by each operating system.
The C++11 chrono header file provides three standard clocks that could be used for timing one’s code:
system_clock - this is the real-time clock used by the system;
high_resolution_clock - this is a clock with the shortest tick period possible on the current system;
steady_clock - this is a monotonic clock that is guaranteed to never be adjusted.
If you want to measure the time taken by a certain piece of code for execution, you should generally use the steady_clock, which is a monotonic clock that is never adjusted by the system. The other two clocks provided by the chrono header can be occasionally adjusted, so the difference between two consecutive time moments, t0 < t1, is not always positive.
Doubles are not exact, so the idea of double now=getdoubletimestampsomehow(); falls down at the first hurdle if you ever compare such timestamps for equality (though at current Unix-time magnitudes a double still resolves to well under a microsecond, so the precision itself is attainable).
Others have mentioned other possibilities. I would explore those.

UTC timestamp in millisecond using C++ under Windows

How do I get the UTC time in milliseconds under the Windows platform?
I am using the standard library, which gives me the UTC time in seconds. I want to get the time in milliseconds. Is there a reference to another library which gives me the accurate UTC time in milliseconds?
Use the GetSystemTime API function, or perhaps GetSystemTimeAsFileTime if you want a single number.
GetSystemTime() produces a UTC timestamp with millisecond resolution. Accuracy, however, is far worse: the clock usually updates at 15.625-millisecond intervals on most Windows machines. There isn't much point in chasing improved accuracy; any clock that provides an absolute timestamp is subject to drift. You'd need dedicated hardware, usually a GPS radio clock, to get something better, and such clocks are hard to use properly on a non-realtime multi-tasking operating system, where worst-case latency can be as much as 200 milliseconds.