Here's what I'd need to do:
double now=getdoubletimestampsomehow();
Where getdoubletimestampsomehow() should be a straightforward, easy-to-use function returning a double value representing the number of seconds elapsed from a given date. I'd need it to be quite precise, but I don't really need it to be more precise than a few milliseconds. Portability is quite important; if it isn't possible to directly port it anywhere, could you please tell me both a Unix and a Windows way to do it?
Have you looked at Boost and particularly its Date_Time library? Here is the seconds since epoch example.
You will be hard-pressed to find something more portable, and of higher resolution.
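A minimal sketch of what that could look like with Boost.Date_Time (the function name seconds_since_epoch and the 1970 epoch are my own choices, not part of the answer above):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/date_time/gregorian/gregorian.hpp>

// Seconds since 1970-01-01 as a double, with microsecond resolution.
double seconds_since_epoch()
{
    using namespace boost::posix_time;
    static const ptime epoch(boost::gregorian::date(1970, 1, 1));
    time_duration diff = microsec_clock::universal_time() - epoch;
    return diff.total_microseconds() / 1e6;
}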
Portable good precision double timestamp in C++?
There is no portable way to get a high-precision timestamp (milliseconds) without using 3rd-party libraries. The maximum precision you'll get is 1 second, using time/localtime/gmtime.
If you're fine with 3rd party libraries, use either Boost or Qt 4.
both a Unix and a Windows way to do it?
GetSystemTime on Windows and gettimeofday on Linux.
Please note that if you're planning to use timestamps to determine the order of some events, it might be a bad idea. The system clock might have very limited resolution (10 milliseconds on the Windows platform), in which case several operations performed consecutively can produce the same timestamp. So, to determine the order of events you would need "logical timestamps" (a "vector clock" is one example).
On the Windows platform, there are high-precision functions that can be used to determine how much time has passed since some point in the past (QueryPerformanceCounter), but they aren't connected to timestamps.
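A rough sketch of both platform-specific routes, since the question asked for a Unix and a Windows way (I use GetSystemTimeAsFileTime rather than GetSystemTime to avoid an extra conversion; note the two branches use different epochs, which is fine for relative measurements):

#if defined(_WIN32)
#include <windows.h>
double timestamp_seconds()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);         // 100-ns intervals since 1601-01-01 (UTC)
    ULARGE_INTEGER t;
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    return t.QuadPart / 1e7;              // convert 100-ns ticks to seconds
}
#else
#include <sys/time.h>
double timestamp_seconds()
{
    timeval tv;
    gettimeofday(&tv, nullptr);           // seconds + microseconds since 1970-01-01 (UTC)
    return tv.tv_sec + tv.tv_usec / 1e6;
}
#endif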
C++11 introduced the <chrono> header containing quite a few portable clocks. The highest resolution clock among them is the std::chrono::high_resolution_clock.
It provides the current time as a std::chrono::time_point object which has a time_since_epoch member. This might contain what you want.
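For example, something along these lines (the helper name now_seconds is hypothetical) would yield the double timestamp the question asks for, with whatever epoch the implementation's high_resolution_clock uses:

#include <chrono>

double now_seconds()
{
    using namespace std::chrono;
    // time_since_epoch() converted to a floating-point number of seconds
    return duration_cast<duration<double>>(
               high_resolution_clock::now().time_since_epoch()).count();
}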
Reference:
Prior to the release of the C++11 standard, there was no standard way in which one could accurately measure the execution time of a piece of code. The programmer was forced to use external libraries like Boost, or routines provided by each operating system.
The C++11 chrono header file provides three standard clocks that could be used for timing one’s code:
system_clock - this is the real-time clock used by the system;
high_resolution_clock - this is a clock with the shortest tick period possible on the current system;
steady_clock - this is a monotonic clock that is guaranteed to never be adjusted.
If you want to measure the time taken by a certain piece of code for execution, you should generally use the steady_clock, which is a monotonic clock that is never adjusted by the system. The other two clocks provided by the chrono header can be occasionally adjusted, so the difference between two consecutive time moments, t0 < t1, is not always positive.
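A minimal sketch of timing a piece of code with steady_clock, as recommended above:

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();
    // ... code to be timed ...
    auto stop = std::chrono::steady_clock::now();
    std::chrono::duration<double, std::milli> elapsed = stop - start;
    std::cout << "took " << elapsed.count() << " ms\n";
}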
Doubles are not precise - therefore your idea of double now=getdoubletimestampsomehow(); falls down at the first hurdle.
Others have mentioned other possibilities. I would explore those.
Related
I'm running my code on Ubuntu, and I need to get the elapsed time of a function in my program. I need a very accurate time, like nanoseconds or at least microseconds.
I read about <chrono>, but it uses system time, and I would prefer to use CPU time.
Is there a way to do that, and have that granularity (nanoseconds)?
std::chrono does have a high_resolution_clock, though please bear in mind that the precision is limited by the processor.
If you want to use functions directly from libc, you can use gettimeofday, but as before there is no guarantee that this will be nanosecond-accurate (it only has microsecond resolution).
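Not part of the answer above, but since the question asks for per-process CPU time with nanosecond fields, one Linux/POSIX option (clock_gettime also comes up later in this thread) is CLOCK_PROCESS_CPUTIME_ID; the actual resolution is still hardware/kernel dependent:

#include <time.h>
#include <cstdint>

std::int64_t cpu_time_ns()
{
    timespec ts;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);  // CPU time consumed by this process
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}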
The achievable precision of the clock is one of the properties of different hardware/OS combinations that still leak into virtually every language, and, to be honest, having been in the same situation, I find that building your own abstraction that is good enough for your case is often the only choice.
That being said, I would avoid the STL for high-precision timing. Since it is a library standard with no one true implementation, it has to create an abstraction, which implies one of:
use a least common denominator
leak hardware/OS details through platform-dependent behavior
In the second case you are essentially back to where you started, if you want to have uniform behavior. If you can afford the possible loss of precision or the deviations of a standard clock, then by all means use it. Clocks are hard and subtle.
If you know your target environment you can choose the appropriate clocks the old-school way (#ifdef PLATFORM_ID..., e.g. clock_gettime(), QPC) and implement the most precise abstraction you can get. Of course you are limited by the same choice the STL has to make, but by reducing the set of platforms, you can generally improve on the least-common-denominator requirement.
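A rough sketch of such an #ifdef-based abstraction; the two branches (QueryPerformanceCounter on Windows, CLOCK_MONOTONIC elsewhere) and the nanosecond unit are my own choices for illustration:

#include <cstdint>

#if defined(_WIN32)
#include <windows.h>
std::int64_t monotonic_ns()
{
    static const std::int64_t freq = [] {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);     // counts per second, fixed at boot
        return static_cast<std::int64_t>(f.QuadPart);
    }();
    LARGE_INTEGER c;
    QueryPerformanceCounter(&c);
    // split the conversion to avoid overflow when turning ticks into nanoseconds
    return (c.QuadPart / freq) * 1000000000LL
         + (c.QuadPart % freq) * 1000000000LL / freq;
}
#else
#include <time.h>
std::int64_t monotonic_ns()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   // monotonic clock, unaffected by time changes
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}
#endif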
If you need a more theoretical way to convince yourself of this argument, you can consider the set of clocks with their maximum precision, and a sequence of accesses to the current time. For clocks advancing uniformly in uniform steps, if two accesses happen faster than the maximum precision of one clock, but slower than the maximum precision of another clock, you are bound to get different behavior. If, on the other hand, you ensure that two accesses are at least the maximum precision of the slowest clock apart, the behavior is the same. Now of course real clocks are not advancing uniformly (clock drift), and also not in unit steps.
While there is a standard function that should return the CPU time (std::clock), in reality there's no portable way to do this.
On POSIX systems (which Linux is attempting to be), std::clock should do the right thing, though. Just don't expect it to work the same on non-POSIX platforms if you ever want to make your application portable.
The values returned by std::clock are also approximate, and the precision and resolution are system-dependent.
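Minimal use of std::clock as discussed; note that CLOCKS_PER_SEC only fixes the unit of the returned value, not the actual resolution:

#include <ctime>

double cpu_seconds()
{
    // CPU time used by the process, in seconds
    return static_cast<double>(std::clock()) / CLOCKS_PER_SEC;
}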
The C++ Draft par 20.12.7.3 reads:
high_resolution_clock may be a synonym for system_clock or steady_clock
Of course this "may" mandates nothing, but I wonder:
Is there any point in high_resolution_clock being something other than a typedef?
Are there such implementations?
If a clock with a shorter tick period is devised, it can be either steady or not steady. So if such a mechanism exists, wouldn't we want to "improve" system_clock and high_resolution_clock as well, defaulting to the typedef solution once more?
The reason that specs use wording such as "may" and "can", and other vague words that allow for other possibilities, is that the spec writers don't want to (unnecessarily) limit the implementation of a "better" solution to something.
Imagine a system where time in general is counted in seconds, and the system_clock is just that - the system_clock::period will return 1 second. This time is stored as a single 64-bit integer.
Now, in the same system, there is also a time in nanoseconds, but it's stored as a 128-bit integer. The resulting time calculations are slightly more complex due to this large integer format, and someone who only needs 1-second precision for their time (in a system where a large number of calculations on time are made) wouldn't want the extra penalty of using high_resolution_clock when the system doesn't need it.
As to whether there are such things in real life, I'm not sure. The key is that it's not a violation of the standard if you care to implement it that way.
Note that steady is very much a property of "what happens when the system changes time" (e.g. if the outside network has been down for several days, and the internal clock in the system has drifted away from the atomic clock that the network time updates to). Using steady_clock guarantees that time doesn't go backwards or suddenly jump forward 25 seconds. Likewise, there is no problem when there is a "leap second" or similar time adjustment in the computer system. On the other hand, a system_clock is guaranteed to give you the correct new time if you give it a forward duration past a daylight-saving change or some such, whereas steady_clock will just tick along hour after hour, regardless. So choosing the right one of those will affect the recording of your favourite program in a digital TV recorder - steady_clock would record at the wrong time [my DTV recorder did this wrong a few years back, but they appear to have fixed it now].
system_clock should also take into account the user (or sysadmin) changing the clock in the system, steady_clock should NOT do so.
Again, high_resolution_clock may or may not be steady - it's up to the implementor of the C++ library to give the appropriate response to is_steady.
In the 4.9.2 version of <chrono> (GCC's libstdc++), we find this: using high_resolution_clock = system_clock;, so in this case it's a direct typedef (by a different name). But the spec doesn't REQUIRE this.
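A quick way to see what your own implementation chose (nothing printed here is required by the standard):

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    std::cout << std::boolalpha
              << "system_clock          steady: " << system_clock::is_steady
              << ", period: " << system_clock::period::num << '/' << system_clock::period::den << '\n'
              << "steady_clock          steady: " << steady_clock::is_steady
              << ", period: " << steady_clock::period::num << '/' << steady_clock::period::den << '\n'
              << "high_resolution_clock steady: " << high_resolution_clock::is_steady
              << ", period: " << high_resolution_clock::period::num << '/' << high_resolution_clock::period::den << '\n';
}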
Using boost::chrono::steady_clock or std::chrono::steady_clock is supposed to guarantee that physical time is always monotonic and is not affected by date/time changes in the system. Here is my question: if I have two processes that need to be immune to system date/time changes, is it enough to exchange just the time_since_epoch? In other words, will the two processes interpret the same time since epoch in the same way? Specifically, I need to answer this question for Windows and QNX.
EDIT: Both processes are running in the same computer, same operating system and communicate via IPC calls.
No, the times are not interchangeable between systems, because C++ doesn't specify the epoch. The epoch depends on the operating system; different systems can have different epochs.
If, on the other hand, you share the times only locally, within the same system, then it's okay.
C++ standard says about steady_clock:
20.12.7.2 Class steady_clock [time.clock.steady]
Objects of class steady_clock represent clocks for which values of time_point never decrease as physical time advances and for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted.
Compare this to what the standard has to say about system_clock:
20.12.7.1 Class system_clock [time.clock.system]
Objects of class system_clock represent wall clock time from the system-wide realtime clock.
There's no mention about steady_clock being "system-wide", which leads me to believe that, according to the C++ standard, you cannot trust on two steady_clocks in different processes on the same machine having the same epoch.
Old, but maybe someone will benefit...
As we understood, it probably isn't defined by the C++ standard, so it depends on your compiler/system/standard-library implementation.
E.g., if the implementation uses CLOCK_MONOTONIC from Linux (https://man7.org/linux/man-pages/man2/clock_gettime.2.html):
CLOCK_MONOTONIC
A nonsettable system-wide clock that represents monotonic
time since—as described by POSIX—"some unspecified point
in the past". On Linux, that point corresponds to the
number of seconds that the system has been running since
it was booted.
It's a system-wide clock then.
@DeZee mentioned the Linux implementation - the epoch is when the system boots.
How about Windows?
MSVC uses QueryPerformanceCounter and QueryPerformanceFrequency for its std::chrono::steady_clock implementation. And from MSDN (emphasis mine):
In general, the performance counter results are consistent across all
processors in multi-core and multi-processor systems, even when
measured on different threads or processes.
(the doc also mentions some exceptions, so please see it)
And:
The frequency of the performance counter is fixed at system boot and
is consistent across all processors so you only need to query the
frequency from QueryPerformanceFrequency as the application
initializes, and then cache the result.
Also, in the code of the std::chrono::steady_clock implementation of MSVC, we can see this comment:
const long long _Freq = _Query_perf_frequency(); // doesn't change after system boot
Now, your question is:
will the two processes interpret the same time since epoch in the same way?
So the answer to your question would be "yes", at least for Linux and Windows.
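A hedged sketch of what "exchanging just the time_since_epoch" between two local processes could look like (the nanosecond wire format is my choice; the IPC transport itself is out of scope here):

#include <chrono>
#include <cstdint>

using Clock = std::chrono::steady_clock;

// Sender: flatten a time_point into an integer to put into the IPC message.
std::int64_t to_wire(Clock::time_point tp)
{
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
               tp.time_since_epoch()).count();
}

// Receiver: rebuild a time_point with the same epoch (valid only on the same machine and boot).
Clock::time_point from_wire(std::int64_t ns)
{
    return Clock::time_point(
        std::chrono::duration_cast<Clock::duration>(std::chrono::nanoseconds(ns)));
}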
My program frequently calls the WinAPI function timeGetTime(), which should be replaced with <chrono> (the standard library). What is the fastest standardized way to get the system time - as a float or int, in my case?
I don't need to track the date or time of day; I only need a precise relative ms/seconds value that always increments. Is there any?
For benchmarking, you likely want std::chrono::high_resolution_clock. It may not be steady - in the sense that it "always increments". The only clock that guarantees steadiness is std::chrono::steady_clock.
The best steady clock would then be:
using ClockType = std::conditional<
std::chrono::high_resolution_clock::is_steady,
std::chrono::high_resolution_clock,
std::chrono::steady_clock>::type;
Note that high_resolution_clock could itself simply be an alias for steady_clock.
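Hypothetical usage of the ClockType alias above to get an always-incrementing millisecond value, roughly what timeGetTime() provides (the helper name elapsed_ms is my own):

double elapsed_ms(ClockType::time_point since)
{
    // milliseconds as a double; never decreases, because ClockType is steady
    return std::chrono::duration<double, std::milli>(ClockType::now() - since).count();
}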
If you need precise relative ms, you're looking for something system-specific, not wall-clock time.
You tagged the question with WinApi, so I assume this is Windows-specific.
For Windows, that is the High-Resolution Timer. This technology allows you to precisely calculate relative times (how much time is spent in a certain function call, for example).
On POSIX it is possible to use timespec to calculate accurate time spans (like seconds and milliseconds). Unfortunately, I need to migrate to Windows with the Visual Studio compiler. The VS time.h library doesn't declare timespec, so I'm looking for other options. As far as I could search, it is possible to use clock and time_t, although I couldn't check how precisely milliseconds can be counted with clock.
What do you do/use for calculating the elapsed time of an operation (if possible using the standard C++ library)?
The function GetTickCount is usually used for that.
Also a similar thread: C++ timing, milliseconds since last whole second
It depends on what sort of accuracy you want; my understanding is that clock and time_t are not accurate to the millisecond level. Similarly, GetTickCount() is commonly used (MS docs say it is accurate to 10-15 ms) but is not sufficiently accurate for many purposes.
I use QueryPerformanceFrequency and QueryPerformanceCounter for accurate performance timing measurements.
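A small sketch of that QueryPerformanceCounter approach, measuring the elapsed milliseconds around an operation:

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);      // counts per second, fixed at system boot
    QueryPerformanceCounter(&start);
    // ... operation being measured ...
    QueryPerformanceCounter(&stop);
    double ms = (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    std::cout << "elapsed: " << ms << " ms\n";
}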