Get reliable monotonic time in C++ (multiplatform) - c++

I need a monotonic clock that can be used to calculate intervals.
Requirements:
Must be monotonic and must not be influenced by the device time.
Must not reset during an application session (same epoch for all return values in a session).
Must represent real-life seconds (not CPU seconds) and must not be influenced by the number of threads/processes running at the time.
Seconds resolution is sufficient.
In my research I have found these candidates:
std::clock() (ctime) - seems to use CPU seconds.
boost::chrono::steady_clock() - does it use CPU seconds? Can the epoch change during an application session (launch to end)?
Platform-specific methods (clock_gettime, mach_absolute_time).
Did you ever encounter such a problem and what solution did you choose? Is steady_clock() reliable multiplatform?

I would use std::chrono::steady_clock. By its description it is not influenced by wall clock/system time changes and is best suited for measuring intervals.
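For illustration, a minimal sketch of timing an interval with std::chrono::steady_clock (the sleep is just a stand-in for the work being measured):

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    const auto start = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(2));  // placeholder workload

    const auto end = std::chrono::steady_clock::now();
    const auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(end - start);
    std::cout << "Elapsed: " << elapsed.count() << " s\n";
}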

boost::chrono::steady_clock() - Does it use cpu seconds? Can the epoch change during an application session(launch-end)?
std::chrono::steady_clock is specified to use real-time seconds and is not adjusted during a session. I assume Boost's implementation adheres to that. I haven't had issues with std::chrono::steady_clock except that the resolution on some platforms is lower than on others.
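If you want to check what a particular platform gives you, a small diagnostic sketch can print the clock's compile-time tick period and its is_steady flag:

#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    // is_steady must be true for steady_clock; period is the tick length as a ratio of seconds.
    std::cout << "is_steady: " << std::boolalpha << clock::is_steady << '\n';
    std::cout << "tick period: " << clock::period::num << '/' << clock::period::den << " s\n";
}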

Related

std chrono time synchronization

I need to send some data over the network with timestamps, and this time should have high precision. Looking at std::chrono clocks I found out that std::chrono::*_clock::now() returns a time_point which depends on the clock's epoch. I failed to find out which epoch is used in each clock and which of them can be used safely when sent over the network. For example, on Windows high_resolution_clock is a wrapper around QueryPerformanceCounter; it has good precision but is, I think, useless as a timestamp for network tasks.
So question is how to "synchronize" high resolution clock over network?
std::chrono::system_clock's epoch is currently unspecified. However it is portably a measure of time since 1970-01-01 00:00:00 UTC, neglecting leap seconds. This is consistent with what is called Unix Time.
I am currently working to standardize this existing practice. I have private, unofficial assurances from the implementors of std::chrono::system_clock that they will not change their existing practice in the interim.
The other two std-defined chrono clocks: high_resolution_clock, and steady_clock, do not have portably defined epochs.
Note that system_clock, though it has a (de-facto) portable epoch, does not have portable precision. On clang/libc++ the precision is microseconds. On gcc the precision is nanoseconds, and on Windows the precision is 100ns. So you might time_point_cast<microseconds>(system_clock::now()) to obtain portable precision.
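As a sketch, a network-timestamp helper along those lines might look like this (unix_timestamp_us is just an illustrative name; the value relies on the de-facto Unix epoch described above):

#include <chrono>
#include <cstdint>
#include <iostream>

// Microseconds since 1970-01-01 00:00:00 UTC, with portable precision.
std::int64_t unix_timestamp_us() {
    using namespace std::chrono;
    const auto now_us = time_point_cast<microseconds>(system_clock::now());
    return now_us.time_since_epoch().count();
}

int main() {
    std::cout << unix_timestamp_us() << " us since the Unix epoch\n";
}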
If you want "high precision" synchronization (you should specify what precision you need), you would first of all need to make all your network devices regularly synchronize with the same NTP server. You need to configure the machines to adjust their clocks every few minutes (or seconds), because once updated (with some network delay, which keeps them from being precise to the millisecond), the clocks will start to drift again.

Is the epoch of steady_clock relative to when the operating system starts, or to the process itself?

Using boost::chrono::steady_clock or std::chrono::steady_clock is supposed to guarantee that physical time is always monotonic and is not affected by date/time changes in the system. Here is my question: if I have two processes that need to be immune to system date/time changes, is it enough to exchange just the time_since_epoch? In other words, will the time interpretation of the two processes for the same time since epoch be the same? Specifically I need to answer this question for Windows and QNX.
EDIT: Both processes are running on the same computer, same operating system, and communicate via IPC calls.
No, the times are not interchangeable between systems, because C++ doesn't specify the epoch. The epoch depends on the operating system; different systems can have different epochs.
If, on the other hand, you share the times only locally, within the same system, then it's okay.
C++ standard says about steady_clock:
20.12.7.2 Class steady_clock [time.clock.steady]
Objects of class steady_clock represent clocks for which values of time_point never decrease as physical time advances and for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted.
Compare this to what the standard has to say about system_clock:
20.12.7.1 Class system_clock [time.clock.system]
Objects of class system_clock represent wall clock time from the system-wide realtime clock.
There's no mention of steady_clock being "system-wide", which leads me to believe that, according to the C++ standard, you cannot rely on two steady_clocks in different processes on the same machine having the same epoch.
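For the same-machine case, a minimal sketch of what exchanging time_since_epoch amounts to; here a local variable stands in for the real IPC channel, and the reconstruction is only meaningful under the system-wide-clock assumption discussed above:

#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;

    // "Sender": serialize the time point as a raw tick count (this would go over IPC).
    const std::int64_t wire_value =
        static_cast<std::int64_t>(clock::now().time_since_epoch().count());

    // "Receiver": rebuild a time_point from the received count and compute its age.
    const clock::time_point remote(clock::duration(wire_value));
    const auto age = clock::now() - remote;
    std::cout << "Age of received timestamp: "
              << std::chrono::duration_cast<std::chrono::microseconds>(age).count() << " us\n";
}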
Old, but maybe someone will benefit...
As established above, it isn't defined by the C++ standard, so it depends on your compiler/system/standard library implementation.
Eg.: If the implementation uses CLOCK_MONOTONIC from Linux (https://man7.org/linux/man-pages/man2/clock_gettime.2.html):
CLOCK_MONOTONIC
A nonsettable system-wide clock that represents monotonic
time since—as described by POSIX—"some unspecified point
in the past". On Linux, that point corresponds to the
number of seconds that the system has been running since
it was booted.
It's a system-wide clock then.
@DeZee mentioned the Linux implementation: the epoch is when the system boots.
How about Windows?
MSVC uses QueryPerformanceCounter and QueryPerformanceFrequency for its std::chrono::steady_clock implementation. And from MSDN (emphasis mine):
In general, the performance counter results are consistent across all
processors in multi-core and multi-processor systems, even when
measured on different threads or processes.
(the doc also mentions some exceptions, so please see it)
And:
The frequency of the performance counter is fixed at system boot and
is consistent across all processors so you only need to query the
frequency from QueryPerformanceFrequency as the application
initializes, and then cache the result.
Also, in the code of the std::chrono::steady_clock implementation of MSVC, we can see this comment:
const long long _Freq = _Query_perf_frequency(); // doesn't change after system boot
Now, your question is:
will the time interpretation of the two processes for the same time since epoch be the same?
So the answer to your question would be "yes", at least for Linux and Windows.
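To make the Windows side concrete, here is a hedged sketch of what such an implementation boils down to: raw QueryPerformanceCounter ticks divided by the frequency that is fixed at boot (Windows-only; Sleep stands in for the timed work):

#include <windows.h>
#include <iostream>

int main() {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);  // fixed at system boot; query once and cache

    QueryPerformanceCounter(&start);
    Sleep(1000);                       // placeholder workload
    QueryPerformanceCounter(&end);

    const double seconds =
        static_cast<double>(end.QuadPart - start.QuadPart) / static_cast<double>(freq.QuadPart);
    std::cout << "Elapsed: " << seconds << " s\n";
}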

Why is there no boost::date_time with microsec resolution on Windows?

On Win32 systems, boost::date_time::microsec_clock() is implemented using ftime, which provides only millisecond resolution: Link to doc
There are some questions/answers on Stackoverflow stating this and linking the documentation, but not explaining why that is the case:
Stackoverflow #1
Stackoverflow #2
There seemingly are ways to implement microsecond resolution on Windows:
GetSystemTimePreciseAsFileTime (Win8++)
QueryPerformanceCounter
What I'm interested in is why Boost implemented it that way, when there seem to be more fitting solutions available?
QueryPerformanceCounter can't help you with this problem. It gives you a timestamp, but as you don't know when the counter starts, there is no reliable way to calculate an absolute time point from it. boost::date_time is such a (user-understandable) time point.
The other difference is that a counter like QueryPerformanceCounter gives you a steadily increasing timer, while the system time can be influenced by the user and can therefore jump.
So the two things are for different use cases: one for representing a real (absolute) time, the other for getting precise timing in the software and for benchmarking.
GetSystemTimePreciseAsFileTime seems to fit the bill for a high-resolution absolute time. I guess it wasn't used because it requires Windows 8.
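For completeness, a sketch of what using GetSystemTimePreciseAsFileTime for a high-resolution absolute UTC timestamp could look like (Windows 8+ only; the constant converts FILETIME's 1601-01-01 base to the Unix epoch):

#include <windows.h>
#include <cstdint>
#include <iostream>

// Microseconds since 1970-01-01 UTC. FILETIME counts 100-ns intervals since 1601-01-01 UTC,
// so we subtract the 11644473600-second offset (expressed in 100-ns units) and divide by 10.
std::int64_t utc_microseconds_since_unix_epoch() {
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft);  // requires Windows 8 or later

    ULARGE_INTEGER ticks;
    ticks.LowPart = ft.dwLowDateTime;
    ticks.HighPart = ft.dwHighDateTime;

    const std::int64_t unix_epoch_offset_100ns = 116444736000000000LL;
    return (static_cast<std::int64_t>(ticks.QuadPart) - unix_epoch_offset_100ns) / 10;
}

int main() {
    std::cout << utc_microseconds_since_unix_epoch() << " us since the Unix epoch\n";
}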
GetSystemTimePreciseAsFileTime only became available with Windows 8 desktop applications. It mimics Linux's gettimeofday. The implementation uses QueryPerformanceCounter to achieve the microsecond resolution. Timestamps are taken at the time of a system time increment. Subsequent calls to GetSystemTimePreciseAsFileTime take the system time and add the elapsed "performance counter time" (elapsed ticks / performance counter frequency) as the high-resolution part.
The functionality of QueryPerformanceCounter in turn depends on platform-specific details (HPET, ACPI PM timer, invariant TSC etc.). See MSDN: Acquiring high-resolution time stamps and SO: Is QueryPerformanceFrequency accurate when using HPET? for details.
The various versions of Windows have specific schemes for updating the system time. Windows XP has a fixed file time granularity which is independent of the system's timer resolution. Only post-Windows-XP versions allow modifying the system time granularity by changing the system timer resolution.
This can be accomplished by means of the multimedia timer API timeBeginPeriod and/or the hidden API NtSetTimerResolution (see this SO answer for more details about using timeBeginPeriod and NtSetTimerResolution).
As stated, GetSystemTimePreciseAsFileTime is only available for desktop applications. The reason for this is the need for specific hardware.
What I'm interested in is why Boost implemented it that way, when there seem to be more fitting solutions available?
Taking the facts stated above into account makes the implementation very complex and the result very platform-specific. Every (!) Windows version has undergone significant changes to its timekeeping. Even the small step from 8 to 8.1 changed the timekeeping procedure considerably. However, there is still room to further improve time handling on Windows.
I should mention that GetSystemTimePreciseAsFileTime is, as of Windows 8.1, not giving results as accurate as expected or as specified at MSDN: GetSystemTimePreciseAsFileTime function. It combines the system file time with the result of QueryPerformanceCounter to fill the gap between consecutive file time increments, but it does not take system time adjustments into account. An active system time adjustment, e.g. one made by SetSystemTimeAdjustment, modifies the system time granularity and the progress of the system time. However, the performance counter frequency used to build the result of GetSystemTimePreciseAsFileTime is kept constant. As a result, the microseconds part is off by the adjustment gain set by SetSystemTimeAdjustment.

C++: Timing in Linux (using clock()) is out of sync (due to OpenMP?)

At the top and end of my program I use clock() to figure out how long my program takes to finish. Unfortunately, it appears to take half as long as it's reporting. I double checked this with the "time" command.
My program reports:
Completed in 45.86s
Time command reports:
real 0m22.837s
user 0m45.735s
sys 0m0.152s
Using my cellphone to time it, it completed in 23s (aka: the "real" time). "User" time is the sum of all threads, which would make sense since I'm using OpenMP. (You can read about it here: What do 'real', 'user' and 'sys' mean in the output of time(1)?)
So, why is clock() reporting in "user" time rather than "real" time? Is there a different function I should be using to calculate how long my program has been running?
As a side note, Windows' clock() works as expected and reports in "real" time.
user 0m45.735s
clock() measures the CPU time the process used (as well as it can), per 7.27.2.1 of the C standard:
The clock function returns the implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation.
and not wall clock time. Thus clock() reporting a time close to the user time that time reports is normal and standard-conforming.
To measure elapsed time, if you can assume POSIX, using clock_gettime is probably the best option. The standard function time() can also be used for that, but it is not very fine-grained.
I would suggest clock_gettime using CLOCK_MONOTONIC for the clock.
Depending on your specific system, that should give near-microsecond or better resolution, and it will not do funny things if (e.g.) someone sets the system time while your program is running.
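A minimal sketch of that approach (POSIX only; the helper name monotonic_seconds is just illustrative):

#include <time.h>
#include <cstdio>

// Seconds from the monotonic clock; unaffected by system time changes or thread count.
static double monotonic_seconds() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main() {
    const double start = monotonic_seconds();
    // ... the work being timed goes here ...
    const double end = monotonic_seconds();
    std::printf("Elapsed: %.6f s\n", end - start);
}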
I would suggest that for benchmarking inside OpenMP applications you use the portable OpenMP timing function omp_get_wtime(), which returns a double value with the seconds since some unspecified point in the past. Call it twice and subtract the return values to obtain the elapsed time. You can find out how precise time measurements are by calling omp_get_wtick(). It returns a double value of the timer resolution - values closer to 0.0 indicate more precise timers.
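A sketch of that pattern, assuming the program is compiled with OpenMP enabled (e.g. -fopenmp); the reduction loop is only a placeholder workload:

#include <omp.h>
#include <cstdio>

int main() {
    std::printf("timer resolution: %g s\n", omp_get_wtick());

    const double start = omp_get_wtime();

    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < 100000000; ++i)
        sum += i * 0.5;

    const double elapsed = omp_get_wtime() - start;
    std::printf("sum = %g, elapsed wall time: %.3f s\n", sum, elapsed);
}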

UTC timestamp in millisecond using C++ under Windows

How do I get the UTC time in milliseconds under the Windows platform?
I am using the standard library, which gives me UTC time in seconds. I want to get the time in milliseconds. Is there another library that gives me accurate UTC time in milliseconds?
Use the GetSystemTime API function, or perhaps GetSystemTimeAsFileTime if you want a single number.
GetSystemTime() produces a UTC timestamp with millisecond resolution. Accuracy, however, is far worse: the clock usually updates at 15.625-millisecond intervals on most Windows machines. There isn't much point in chasing improved accuracy; any clock that provides an absolute timestamp is subject to drift. You'd need dedicated hardware, usually a GPS radio clock, to get something better, and those are hard to use properly on a non-realtime multi-tasking operating system. Worst-case latency can be as much as 200 milliseconds.
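For reference, a minimal sketch of reading that millisecond-resolution UTC stamp via GetSystemTime (keeping the ~15.6 ms update granularity mentioned above in mind):

#include <windows.h>
#include <cstdio>

int main() {
    SYSTEMTIME st;
    GetSystemTime(&st);  // current UTC wall-clock time; wMilliseconds holds the millisecond part
    std::printf("%04u-%02u-%02u %02u:%02u:%02u.%03u UTC\n",
                st.wYear, st.wMonth, st.wDay,
                st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
}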