Does boost chrono provide timestamps with nanosecond resolution? If yes, how do I get such a timestamp?
Nanosecond resolution? On which hardware do you want to run your program?
On my PC, the performance counter has a frequency of approximately 4 MHz, so a tick lasts 250 ns.
As answered here, boost chrono can give you nanosecond resolution, but you cannot be sure of the measurement's accuracy.
To easily get timestamps with boost chrono for different measurements, you can use Boost's CPU Timers. A table of timer accuracy is also given on that site.
To measure the resolution yourself on your specific hardware, use boost's cpu_timer_info.cpp.
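For illustration, a minimal sketch of reading a timestamp in nanosecond units through boost::chrono (the unit is nanoseconds; the actual accuracy is whatever the hardware delivers, as discussed above):

#include <boost/chrono.hpp>
#include <iostream>

int main()
{
    // high_resolution_clock's tick count can be expressed in nanoseconds;
    // duration_cast makes the unit explicit regardless of the native period.
    boost::chrono::high_resolution_clock::time_point t1 =
        boost::chrono::high_resolution_clock::now();
    boost::chrono::high_resolution_clock::time_point t2 =
        boost::chrono::high_resolution_clock::now();
    boost::chrono::nanoseconds ns =
        boost::chrono::duration_cast<boost::chrono::nanoseconds>(t2 - t1);
    std::cout << "delta between two now() calls: " << ns.count() << " ns\n";
}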
Related
I need to send some data over the network with timestamps, and this time should have high precision. Looking at the std::chrono clocks, I found that std::chrono::*_clock::now() returns a time_point that depends on the clock's epoch. I failed to find out which epoch is used by each clock and which of them can be used safely when sent over the network. For example, on Windows high_resolution_clock is a wrapper around QueryPerformanceCounter; it has good precision but is, I think, useless as a timestamp for network tasks.
So the question is: how do I "synchronize" a high-resolution clock over the network?
std::chrono::system_clock's epoch is currently unspecified. However it is portably a measure of time since 1970-01-01 00:00:00 UTC, neglecting leap seconds. This is consistent with what is called Unix Time.
I am currently working to standardize this existing practice. I have private, unofficial assurances from the implementors of std::chrono::system_clock that they will not change their existing practice in the interim.
The other two std-defined chrono clocks, high_resolution_clock and steady_clock, do not have portably defined epochs.
Note that system_clock, though it has a (de facto) portable epoch, does not have portable precision. On clang/libc++ the precision is microseconds. On gcc the precision is nanoseconds, and on Windows the precision is 100 ns. So you might use time_point_cast<microseconds>(system_clock::now()) to obtain portable precision.
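As a sketch of that suggestion, you could normalize the timestamp to a fixed precision before putting it on the wire (portable_timestamp_us is an illustrative name, not a standard function):

#include <chrono>
#include <cstdint>

// Normalize system_clock::now() to microseconds since the (de facto Unix)
// epoch so that sender and receiver agree on the unit, whatever each
// platform's native system_clock precision happens to be.
std::int64_t portable_timestamp_us()
{
    using namespace std::chrono;
    return time_point_cast<microseconds>(system_clock::now())
               .time_since_epoch().count();
}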
If you want "high precision" synchronization (you should specify what precision you need), you would first of all need to make all your network devices regularly synchronize with the same NTP server. You need to configure the machines to adjust their clocks every few minutes (or seconds), because after an update (which involves network delays that make them imprecise at the millisecond level), the clocks will start to drift again.
In my C++ program, I measure CPU time with the clock() function. As the code is executed on a cluster of different computers (all running the same OS, but with different hardware configurations, i.e. different CPUs), I am wondering about measuring actual execution time. Here is my scenario:
As far as I have read, clock() gives the number of CPU clock ticks that have passed since a fixed date. I measure the relative duration by calling clock() a second time and taking the difference.
Now what defines the internal clock() in C++? If I have CPU A with 1.0 GHz and CPU B with 2.0 GHz and run the same code on them, how many ticks will CPU A and B take to finish? Does clock() correspond to "work done"? Or is it really a "time"?
Edit: As CLOCKS_PER_SEC is not a fixed value, I cannot use it to convert ticks to runtime in seconds. As the manual says, CLOCKS_PER_SEC depends on the hardware/architecture. That means there is a dependency of the ticks on the hardware. So I really need to know what clock() gives me, without any additional calculation.
The clock() function should return the closest possible representation of the CPU time used, regardless of the clock speed of the CPU. Where the clock speed of the CPU might intervene (but not necessarily) is in the granularity; more often, however, the clock's granularity depends on some external time source. (In the distant past, it was often based on the power line frequency, with a granularity of 1/50 or 1/60 of a second, depending on where you were.)
To get the time in seconds, you divide by CLOCKS_PER_SEC. Be aware, however, that both clock() and CLOCKS_PER_SEC are integral values, so the division is integral. You might want to convert one to double before doing the division. In the past, CLOCKS_PER_SEC also corresponded to the granularity, but modern systems seem to just choose some large value (POSIX requires 1000000, regardless of the granularity); this means that successive return values from clock() will "jump".
Finally, it's probably worth noting that in VC++, clock() is broken and returns wall clock time rather than CPU time. (This is probably historically conditioned; in the early days, wall clock time was all that was available, and the people at Microsoft probably think that there is code which depends on it.)
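To make the integral-division pitfall concrete, here is a minimal sketch (the work being timed is a placeholder):

#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();
    // ... work to be measured goes here ...
    std::clock_t end = std::clock();

    // Convert one operand to double first; clock() and CLOCKS_PER_SEC are
    // both integral, so a plain division would truncate.
    double cpu_seconds = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    std::cout << "CPU time: " << cpu_seconds << " s\n";
}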
You can convert clock ticks to seconds by dividing by CLOCKS_PER_SEC.
Note that since C++11 a more appropriate way of measuring elapsed time is by using std::steady_clock.
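For example, a minimal sketch contrasting clock() (CPU time) with std::steady_clock (elapsed wall time): a sleeping thread accumulates wall time but almost no CPU time. (On VC++, where clock() is broken as noted above, both numbers will look similar.)

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    std::clock_t c0 = std::clock();
    auto w0 = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(1)); // uses no CPU time

    std::clock_t c1 = std::clock();
    auto w1 = std::chrono::steady_clock::now();

    std::cout << "CPU:  " << static_cast<double>(c1 - c0) / CLOCKS_PER_SEC << " s\n";
    std::cout << "wall: " << std::chrono::duration<double>(w1 - w0).count() << " s\n";
}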
From man clock:
The value returned is the CPU time used so far as a clock_t; to get the number of seconds used, divide by CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t) -1.
I need a monotonic clock that can be used to calculate intervals.
Requirements:
Must be monotonic, must not be influenced by the device time.
Must not reset during an application session. (Same epoch for all return values in a session.)
Must represent real-life seconds (not CPU seconds), must not be influenced by the number of threads/processes running at that time.
Seconds resolution is sufficient.
In my research I have found the following candidates:
std::clock() (ctime) - seems to use CPU seconds
boost::chrono::steady_clock() - does it use CPU seconds? Can the epoch change during an application session (launch to end)?
Platform-specific methods (clock_gettime, mach_absolute_time).
Did you ever encounter such a problem, and what solution did you choose? Is steady_clock() reliable across platforms?
I would use std::chrono::steady_clock. By its description, it is not influenced by wall clock/system time changes and is best suited for measuring intervals.
boost::chrono::steady_clock() - does it use CPU seconds? Can the epoch change during an application session (launch to end)?
std::chrono::steady_clock is specified to use real-time seconds and is not adjusted during a session. I assume boost's implementation adheres to that. I haven't had issues with std::chrono::steady_clock, except that the resolution on some platforms is lower than on others.
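As a quick sanity check against the requirements above, a sketch that inspects the clock's advertised properties (is_steady guarantees monotonicity; period is the tick unit, not a promise about accuracy):

#include <chrono>
#include <iostream>

int main()
{
    std::cout << std::boolalpha
              << "monotonic: " << std::chrono::steady_clock::is_steady << '\n'
              << "tick period: " << std::chrono::steady_clock::period::num
              << '/' << std::chrono::steady_clock::period::den << " s\n";
}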
On Windows, I can call QueryPerformanceCounter to get high-resolution data points, but this method call is affected by issues with the BIOS, multi-core CPUs, and some AMD chips. I can call timeBeginPeriod to increase the system clock resolution in Windows down to 1 ms (instead of the standard ~15 ms), which means that I can just call timeGetTime and get the time at the clock resolution that I've specified.
So! On OSX/Linux, what C++ clock resolutions should I expect? Can I get 1 ms resolution similar to Windows? Since I'm doing real-time media, I want this clock resolution to be as low as possible: can I change this value in the kernel (like in Windows with timeBeginPeriod)? This is a high-performance application, so I want getting the current time to be a fast function call. And I'd like to know if the clock generally drifts or what weird problems I can expect.
Thanks!
Brett
If you are using C++11 you can use std::chrono::high_resolution_clock, which should give you the highest-resolution clock the system offers. To get a millisecond duration you would do:
#include <chrono>

typedef std::chrono::high_resolution_clock my_clock;

my_clock::time_point start = my_clock::now();
// Do stuff
my_clock::time_point end = my_clock::now();

// duration_cast truncates toward zero when converting to the coarser unit
std::chrono::milliseconds ms_duration =
    std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
If you aren't using C++11, the gettimeofday function works on OSX and most Linux distributions. It gives you the time since the epoch in seconds and microseconds. The resolution is unspecified, but it should give you at least millisecond accuracy on any modern system.
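A minimal gettimeofday sketch along those lines (POSIX; the struct timeval fields are seconds and microseconds since the Unix epoch):

#include <sys/time.h>
#include <cstdio>

int main()
{
    struct timeval tv;
    gettimeofday(&tv, 0); // second argument (timezone) is obsolete; pass 0

    // Combine seconds and microseconds into one millisecond count.
    long long ms = tv.tv_sec * 1000LL + tv.tv_usec / 1000;
    std::printf("ms since epoch: %lld\n", ms);
}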
To add to David's answer, if you can't use C++11, Boost's Timer classes can help you.
How do I get the UTC time in milliseconds under the Windows platform?
I am using the standard library, which gives me UTC time in seconds. I want to get the time in milliseconds. Is there another library that would give me accurate UTC time in milliseconds?
Use the GetSystemTime API function, or perhaps GetSystemTimeAsFileTime if you want a single number.
GetSystemTime() produces a UTC time stamp with millisecond resolution. Accuracy, however, is far worse: the clock usually updates at 15.625-millisecond intervals on most Windows machines. There isn't much point in chasing improved accuracy; any clock that provides an absolute time stamp is subject to drift. You'd need dedicated hardware, usually a GPS radio clock, to get something better, and such clocks are hard to use properly on a non-realtime multi-tasking operating system, where worst-case latency can be as much as 200 milliseconds.
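For completeness, a sketch of the GetSystemTimeAsFileTime route (utc_millis_now is an illustrative name): FILETIME counts 100-nanosecond ticks since 1601-01-01 UTC, so converting to milliseconds since the Unix epoch takes one subtraction and one division.

#include <windows.h>

unsigned long long utc_millis_now()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);

    ULARGE_INTEGER t;
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    // 100-ns ticks between 1601-01-01 and 1970-01-01, then 100 ns -> ms.
    const unsigned long long UNIX_EPOCH_OFFSET = 116444736000000000ULL;
    return (t.QuadPart - UNIX_EPOCH_OFFSET) / 10000ULL;
}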