UTC timestamp in milliseconds using C++ under Windows

How do I get the UTC time in milliseconds on the Windows platform?
I am using the standard library, which gives me the UTC time in seconds. I want the time in milliseconds. Is there another library that gives me an accurate UTC time in milliseconds?

Use the GetSystemTime API function, or perhaps GetSystemTimeAsFileTime if you want a single number.

GetSystemTime() produces a UTC timestamp with millisecond resolution. Its accuracy, however, is far worse: the clock usually updates at 15.625 millisecond intervals on most Windows machines. There isn't much point in chasing better accuracy, since any clock that provides an absolute timestamp is subject to drift. You'd need dedicated hardware, usually a GPS radio clock, to do better, and such clocks are hard to use properly on a non-realtime multi-tasking operating system, where worst-case latency can be as much as 200 milliseconds.

Related

What are the pros & cons of the different C++ clocks for logging time stamps?

When printing my logs, I want each message to have a time stamp, measuring time since start of the program. Preferably in nanoseconds, though milliseconds are fine as well:
( 110 ns) Some log line
( 1220 ns) Another log line
( 2431 ns) Now for some computation...
(10357 ns) Error!
To my understanding, there are three different clocks in the C++ chrono library and two more C-style clocks:
std::chrono::high_resolution_clock
std::chrono::system_clock
std::chrono::steady_clock
std::time
std::clock
What are the pros and cons of each of those for the task described above?
system_clock is a clock that keeps time with UTC (excluding leap seconds). Every once in a while (maybe several times a day), it gets adjusted by small amounts, to keep it aligned with the correct time. This is often done with a network service such as NTP. These adjustments are typically on the order of microseconds, but can be either forward or backwards in time. It is actually possible (though not likely nor common) for timestamps from this clock to go backwards by tiny amounts. Unless abused by an administrator, system_clock does not jump by gross amounts, say due to daylight saving, or changing the computer's local time zone, since it always tracks UTC.
steady_clock is like a stopwatch. It has no relationship to any time standard. It just keeps ticking. It may not keep perfect time (no clock does really). But it will never be adjusted, especially not backwards. It is great for timing short bits of code. But since it never gets adjusted, it may drift over time with respect to system_clock which is adjusted to keep in sync with UTC.
This boils down to the fact that steady_clock is best for timing short durations. It also typically has nanosecond resolution, though that is not required. And system_clock is best for timing "long" times where "long" is very fuzzy. But certainly hours or days qualifies as "long", and durations under a second don't. And if you need to relate a timestamp to a human readable time such as a date/time on the civil calendar, system_clock is the only choice.
high_resolution_clock is allowed to be a type alias for either steady_clock or system_clock, and in practice always is. But some platforms alias to steady_clock and some to system_clock. So imho, it is best to just directly choose steady_clock or system_clock so that you know what you're getting.
Though not specified, std::time is typically restricted to a resolution of a second. So it is completely unusable for situations that require subsecond precision. Otherwise std::time tracks UTC (excluding leap seconds), just like system_clock.
std::clock tracks processor time, as opposed to physical time. That is, when your thread is not busy doing something, and the OS has parked it, measurements of std::clock will not reflect time increasing during that down time. This can be really useful if that is what you need to measure. And it can be very surprising if you use it without realizing that processor time is what you're measuring.
And new for C++20
C++20 adds four more clocks to the <chrono> library:
utc_clock is just like system_clock, except that it counts leap seconds. This is mainly useful when you need to subtract two time_points across a leap second insertion point, and you absolutely need to count that inserted leap second (or a fraction thereof).
tai_clock measures seconds since 1958-01-01 00:00:00 and is offset 10s ahead of UTC at this date. It doesn't have leap seconds, but every time a leap second is inserted into UTC, the calendrical representation of TAI and UTC diverge by another second.
gps_clock models the GPS time system. It measures seconds since the first Sunday of January, 1980 00:00:00 UTC. Like TAI, every time a leap second is inserted into UTC, the calendrical representation of GPS and UTC diverge by another second. Because of the similarity in the way that GPS and TAI handle UTC leap seconds, the calendrical representation of GPS is always behind that of TAI by 19 seconds.
file_clock is the clock used by the filesystem library, and is what produces the chrono::time_point aliased by std::filesystem::file_time_type.
One can use a new named cast in C++20 called clock_cast to convert among the time_points of system_clock, utc_clock, tai_clock, gps_clock and file_clock. For example:
auto tp = clock_cast<system_clock>(last_write_time("some_path/some_file.xxx"));
The type of tp is a system_clock-based time_point with the same duration type (precision) as file_time_type.

std chrono time synchronization

I need to send some data over the network with timestamps, and this time should have high precision. Looking at the std::chrono clocks I found that std::chrono::*_clock::now() returns a time_point that depends on the clock's epoch. I failed to find out which epoch each clock uses and which of them can safely be sent over the network. For example, on Windows high_resolution_clock is a wrapper around QueryPerformanceCounter; it has good precision but, I think, is useless as a timestamp for network tasks.
So the question is: how do I "synchronize" a high resolution clock over the network?
std::chrono::system_clock's epoch is currently unspecified. However it is portably a measure of time since 1970-01-01 00:00:00 UTC, neglecting leap seconds. This is consistent with what is called Unix Time.
I am currently working to standardize this existing practice. I have private, unofficial assurances from the implementors of std::chrono::system_clock that they will not change their existing practice in the interim.
The other two std-defined chrono clocks: high_resolution_clock, and steady_clock, do not have portably defined epochs.
Note that system_clock, though it has a (de-facto) portable epoch, does not have portable precision. On clang/libc++ the precision is microseconds. On gcc the precision is nanoseconds, and on Windows the precision is 100ns. So you might time_point_cast<microseconds>(system_clock::now()) to obtain portable precision.
If you want "high precision" synchronization (you should specify how precise), you would first of all need to make all your network devices regularly synchronize with the same NTP server. You need to configure the machines to adjust their clocks every few minutes (or seconds), because after an update (which involves network delays that already make them imprecise at the millisecond level), the clocks will start to drift again.

Get reliable monotonic time in c++(multiplatform)

I need a monotonic clock that can be used to calculate intervals.
Requirements:
Must be monotonic, must not be influenced by the device time.
Must not reset during an application session.(Same epoch for all return values in a session)
Must represent real life seconds (not cpu seconds), must not be influenced by number of threads/processes running at that time.
Seconds resolution is sufficient.
In my research I have found
Candidates:
std::clock()(ctime) - Seems to use cpu seconds
boost::chrono::steady_clock() - Does it use cpu seconds? Can the epoch change during an application session(launch-end)?
Platform specific methods(clock_gettime, mach_absolute_time).
Did you ever encounter such a problem and what solution did you choose? Is steady_clock() reliable multiplatform?
I would use std::chrono::steady_clock. By description it is not influenced by wall clock/system time changes and is best suitable for measuring intervals.
boost::chrono::steady_clock() - Does it use cpu seconds? Can the epoch change during an application session(launch-end)?
std::chrono::steady_clock is specified to use real-time seconds and is not adjusted during a session. I assume Boost's implementation adheres to that. I haven't had issues with std::chrono::steady_clock, except that the resolution on some platforms is lower than on others.

Get time stamp via Boost.Chrono in resolution of nanoseconds

Does Boost.Chrono provide timestamps with nanosecond resolution? If yes, how do I get such a timestamp?
Nanosecond resolution? On which hardware do you want to run your program?
On my PC, the performance counter has a frequency of approx. 4 MHz, so a tick lasts 250 ns.
As answered here, Boost.Chrono can give you nanosecond resolution, but you will not be sure of the measurement's accuracy.
In order to easily get timestamps with Boost.Chrono for different measurements you can use Boost's CPU Timers. A table of timer accuracy is also given on that site.
To measure the resolution yourself on your specific hardware, use Boost's cpu_timer_info.cpp.

Linux/OSX Clock Resolution with millisecond accuracy?

On Windows, I can call QueryPerformanceCounter to get high resolution data points, but this method call is affected by issues with the BIOS, multi-core CPUs, and some AMD chips. I can call timeBeginPeriod to lower the system clock resolution in Windows to 1ms (instead of the standard ~15ms), which means that I can just call timeGetTime and get the time at the clock resolution that I've specified.
So! On OSX/Linux, what C++ clock resolutions should I expect? Can I get 1ms resolution similar to Windows? Since I'm doing real time media, I want this clock resolution to be as low as possible: can I change this value in the kernel (like in Windows with timeBeginPeriod)? This is a high performance application, so I want getting the current time to be a fast function call. And I'd like to know if the clock generally drifts or what weird problems I can expect.
Thanks!
Brett
If you are using C++11 you can use std::chrono::high_resolution_clock which should give you as high a resolution clock as the system offers. To get a millisecond duration you would do
typedef std::chrono::high_resolution_clock my_clock;
my_clock::time_point start = my_clock::now();
// Do stuff
my_clock::time_point end = my_clock::now();
std::chrono::milliseconds ms_duration =
std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
If you aren't using C++11, then the gettimeofday function works on OSX and most Linux distributions. It gives you the time since the epoch in seconds and microseconds. The resolution is unspecified, but it should give you at least millisecond accuracy on any modern system.
To add to David's answer, if you can't use C++11, Boost's Timer classes can help you.