Does steady_clock have only 10ms resolution on Windows/Cygwin?

I have made the surprising observation that steady_clock gives a poor 10 ms resolution when measuring durations. I compile for Windows under Cygwin. Is this the sad truth, or am I doing something wrong?
auto start = std::chrono::steady_clock::now();
/*...*/
auto end = std::chrono::steady_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::microseconds>
                 (end - start).count();
The result is 10000, 20000, etc.

The resolution of std::chrono::steady_clock is implementation-dependent, and you shouldn't rely on a precise minimum duration. It varies across platforms and compiler implementations.
From http://cppreference.com:
Class std::chrono::steady_clock represents a monotonic clock. The time
points of this clock cannot decrease as physical time moves forward.
This clock is not related to wall clock time, and is best suitable for
measuring intervals.
Related: Difference between std::system_clock and std::steady_clock?
If you don't care about monotonicity (i.e., you don't care if someone changes the wall clock while your program is running), you're probably better off with std::chrono::high_resolution_clock (whose resolution is also implementation-dependent).
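If you want to see what your implementation nominally promises, each clock exposes its tick period as a compile-time ratio. A minimal sketch; note this is the nominal resolution of the representation, which can be finer than what the underlying OS timer actually delivers (as the 10 ms observation above shows):
#include <chrono>
#include <iostream>

int main() {
    // steady_clock::period is a std::ratio giving the tick length in seconds.
    std::cout << "steady_clock tick = "
              << std::chrono::steady_clock::period::num << '/'
              << std::chrono::steady_clock::period::den << " s\n";
}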

Related

Time taken between two points in code independent of system clock CPP Linux

I need to find the time taken to execute a piece of code, and the method should be independent of system time, i.e., chrono and the like wouldn't work.
My use case looks somewhat like this.
int main() {
    // start
    function();
    // end
    time_take = end - start;
}
I am working on an embedded platform that doesn't have the right time at start-up. In my case, the start of the function happens before the actual time is set from the NTP server, and the end happens after the exact time is obtained. So any method that compares the time difference between two points wouldn't work. Also, counting CPU ticks wouldn't work for me, since my program won't necessarily be running actively throughout.
I tried the conventional methods and they didn't work for me.
On Linux, clock_gettime() has an option to return the current CLOCK_MONOTONIC time, which is unaffected by system time changes. Measuring CLOCK_MONOTONIC at the beginning and the end, and then doing your own math to subtract the two values, will measure the elapsed time while ignoring any system time changes.
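A minimal sketch of that C-level approach (Linux; the measured code is a placeholder):
#include <time.h>  // clock_gettime, timespec (POSIX)
#include <cstdio>

int main() {
    timespec t0{}, t1{};
    clock_gettime(CLOCK_MONOTONIC, &t0);
    // ... code under measurement ...
    clock_gettime(CLOCK_MONOTONIC, &t1);
    // Manual timespec subtraction: seconds and nanoseconds handled separately.
    long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
    std::printf("elapsed: %lld ns\n", ns);
}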
If you don't want to dip down to C-level abstractions, <chrono> has this covered for you with steady_clock:
int main() {
    // start
    auto t0 = std::chrono::steady_clock::now();
    function();
    auto t1 = std::chrono::steady_clock::now();
    // end
    auto time_take = t1 - t0;
}
steady_clock is generally a wrapper around clock_gettime used with CLOCK_MONOTONIC, except that it is portable across all platforms. I.e., some platforms don't have clock_gettime but do have an API for getting a monotonic clock time.
Above, the type of time_take will be steady_clock::duration. On all platforms I'm aware of, this type is an alias for nanoseconds. If you want an integral count of nanoseconds, you can:
using namespace std::literals;
int64_t i = time_take/1ns;
The above works on all platforms, even if steady_clock::duration is not nanoseconds.
The minor advantage of <chrono> over a C-level API is that you don't have to compute the timespec subtraction by hand (as in the sketch above). And of course it is portable.

Persistent std::chrono time_point<steady_clock>

In my projects I am using some time_point<steady_clock> variables in order to perform operations at specific intervals. I want to serialize/deserialize those values to a file.
But it seems that time_since_epoch from a steady_clock is not reliable for this, whereas time_since_epoch from a system_clock is quite OK: it always calculates the time from 1970/1/1 (Unix time).
What's the best solution for me? It seems that I have to convert somehow from steady_clock to system_clock but I don't think this is achievable.
P.S. I already read the topic here: Persisting std::chrono time_point instances
On the cppreference page for std::chrono::steady_clock, it says:
This clock is not related to wall clock time (for example, it can be time since last reboot), and is most suitable for measuring intervals.
The page for std::chrono::system_clock says that most implementations use UTC as an epoch:
The epoch of system_clock is unspecified, but most implementations use Unix Time (i.e., time since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970, not counting leap seconds).
If you're trying to compare times across machines, or hoping to correlate the recorded times with real-world events (e.g. at 3pm today there was an issue), then you'll want to switch your code over to using the system clock. Every time you reboot, the steady clock resets, and it doesn't relate to wall time at all.
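A minimal sketch of the persistence side under that approach (helper names hypothetical): store an integral count of an agreed-upon unit since the system_clock epoch, and rebuild the time_point on load.
#include <chrono>
#include <cstdint>

// Hypothetical helpers: persist a system_clock time_point as a plain integer.
int64_t to_storage(std::chrono::system_clock::time_point tp) {
    using namespace std::chrono;
    return duration_cast<milliseconds>(tp.time_since_epoch()).count();
}

std::chrono::system_clock::time_point from_storage(int64_t ms) {
    using namespace std::chrono;
    return system_clock::time_point{milliseconds{ms}};
}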
Edit: if you wanted to do an approximate conversion between steady and system timestamps you could do something like this:
#include <chrono>

template <typename To, typename FromTimePoint>
typename To::time_point approximate_conversion(const FromTimePoint& from) {
    const auto to_now = To::now().time_since_epoch();
    const auto from_now = FromTimePoint::clock::now().time_since_epoch();
    // compute an approximate offset between the clocks and apply that to the input timestamp
    const auto approx_offset = to_now - from_now;
    return typename To::time_point{from.time_since_epoch() + approx_offset};
}

int main() {
    auto steady = std::chrono::steady_clock::now();
    auto system = approximate_conversion<std::chrono::system_clock>(steady);
}
This assumes the clocks don't drift apart very quickly, and that there are no large discontinuities in either clock (both of which are false assumptions over long periods of time).

Consistent Timestamping in C++ with std::chrono

I'm logging timestamps in my program with the following block of code:
// Taken at relevant time
m.timestamp = std::chrono::high_resolution_clock::now().time_since_epoch();
// After work is done
std::size_t secs = std::chrono::duration_cast<std::chrono::seconds>(m.timestamp).count();
std::size_t nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(m.timestamp).count() % 1000000000;
std::time_t tp = (std::time_t) secs;
char ts[] = "yyyymmdd HH:MM:SS";
char format[] = "%Y%m%d %H:%M:%S";
strftime(ts, sizeof(ts), format, std::localtime(&tp)); // sizeof(ts), not 80: ts is only 18 bytes
std::stringstream s;
s << ts << "." << std::setfill('0') << std::setw(9) << nanos
  << " - " << message << std::endl;
return s.str();
I'm comparing these to timestamps recorded by an accurate remote source. When the difference in timestamps is graphed and NTP is not enabled, there is a linear-looking drift through the day (700 microseconds every 30 seconds or so).
After correcting for a linear drift, I find that there's a non-linear component. It can drift in and out by hundreds of microseconds over the course of hours.
The second graph looks similar to graphs taken with same methodology as above, but NTP enabled. The large vertical spikes are expected in the data, but the wiggle in the minimum is surprising.
Is there a way to get a more precise timestamp, but retain microsecond/nanosecond resolution? It's okay if the clock drifts from the actual time in a predictable way, but the timestamps would need to be internally consistent over long stretches of time.
high_resolution_clock has no guaranteed relationship with the current time. Your system may or may not alias high_resolution_clock to system_clock. That means you may or may not get away with using high_resolution_clock in this manner.
Use system_clock. Then tell us if the situation has changed (it may not).
Also, better style:
using namespace std::chrono;
auto timestamp = ... // however you obtain it, as long as it is based on system_clock
auto secs = duration_cast<seconds>(timestamp);
timestamp -= secs;
auto nanos = duration_cast<nanoseconds> (timestamp);
std::time_t tp = system_clock::to_time_t(system_clock::time_point{secs});
Stay in the chrono type system as long as possible.
Use the chrono type system to do the conversions and arithmetic for you.
Use system_clock::to_time_t to convert to time_t.
But ultimately, none of the above is going to change any of your results. system_clock is just going to talk to the OS (e.g. call gettimeofday or whatever).
If you can devise a more accurate way to tell time on your system, you can wrap that solution up in a "chrono-compatible clock" so that you can continue to make use of the type safety and conversion factors of chrono durations and time_points.
struct my_super_accurate_clock
{
    using rep = long long;
    using period = std::nano; // or whatever?
    using duration = std::chrono::duration<rep, period>;
    using time_point = std::chrono::time_point<my_super_accurate_clock>;
    static const bool is_steady = false;

    static time_point now(); // do super accurate magic here
};
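For illustration only, here is a hypothetical out-of-class definition of now() backed by Linux's CLOCK_MONOTONIC_RAW (which bypasses NTP rate adjustment); substitute whatever more accurate source your system actually offers:
#include <time.h> // clock_gettime, timespec (POSIX)

// Hypothetical "magic": raw monotonic hardware time, unadjusted by NTP.
my_super_accurate_clock::time_point my_super_accurate_clock::now() {
    timespec ts{};
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return time_point{duration{ts.tv_sec * 1000000000LL + ts.tv_nsec}};
}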
The problem is that unless your machine is very unusual, the underlying hardware simply isn't capable of providing a particularly reliable measurement of time (at least on the scales you are looking at).
Whether on your digital wristwatch or your workstation, most electronic clock signals are internally generated by a crystal oscillator. Such crystals have both long (years) and short-term (minutes) variation around their "ideal" frequency, with the largest short-term component being variation with temperature. Fancy lab equipment is going to have something like a crystal oven which tries to keep the crystal at a constant temperature (above ambient) to minimize temperature related drift, but I've never seen anything like that on commodity computing hardware.
You see the effects of crystal inaccuracy in a different way in both of your graphs. The first graph simply shows that your crystal ticks at a somewhat large offset from true time, either due to variability at manufacturing (it was always that bad) or long-term drift (it got like that over time). Once you enable NTP, the "constant" or average offset from true time is easily corrected, so you'll expect to average zero offset over some large period of time (indeed the line traced by the minimum dips above and below zero).
At this scale, however, you'll see the smaller short term variations in effect. NTP kicks in periodically and tries to "fix them", but the short term drift is always there and always changing direction (you can probably even check the effect of increasing or decreasing ambient temperature and see it in the graph).
You can't avoid the wiggle, but you could perhaps increase the NTP adjustment frequency to keep it more tightly coupled to real time. Your exact requirements aren't totally clear though. For example you mention:
It's okay if the clock drifts from the actual time in a predictable
way, but the timestamps would need to be internally consistent over
long stretches of time.
What does "internally consistent" mean? If you are OK with arbitrary drift, just use your existing clock without NTP adjustments. If you want something like time that tracks real time "over large timeframes" (i.e,. it doesn't get too out of sync), why could use your internal clock in combination with periodic polling of your "external source", and change the adjustment factor in a smooth way so that you don't have "jumps" in the apparent time. This is basically reinventing NTP, but at least it would be fully under application control.
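As a rough illustration of such smooth adjustment (all names hypothetical, and the rate-estimation step on each poll is omitted): map the local steady clock through a rate factor anchored at the last external sample, so corrections slew rather than jump.
#include <chrono>

// Hypothetical model: local steady_clock time rescaled onto external time.
struct slewed_clock {
    std::chrono::system_clock::time_point ref_external; // last sample from the external source
    std::chrono::steady_clock::time_point ref_local;    // local steady time at that sample
    double rate = 1.0;  // estimated local-to-external rate, updated on each poll

    std::chrono::system_clock::time_point now() const {
        auto local_elapsed = std::chrono::steady_clock::now() - ref_local;
        auto scaled = std::chrono::duration_cast<std::chrono::system_clock::duration>(
            local_elapsed * rate);
        return ref_external + scaled;
    }
};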

Is clock() reliable for a timer?

I'm using clock(), and I'm wondering whether it ever resets or maxes out. All I'm using it for is to subtract it from a previous function call and find the difference.
Thanks for the help so far, but I'm not really able to get the chrono thing working in VS '12. It's fine, because I think it's a little more than I need anyway. I was thinking about using <ctime>'s time(), but I have no idea how to convert the time_t into an int that contains just the current seconds (0-60). Any help?
As far as the standard is concerned,
The range and precision of times representable in clock_t and time_t are implementation-defined.
(C99, §7.23.1 ¶4)
so there are no guarantees of range; the definition of clock() does not say anything about wrapping around, although it says that
If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)(-1)
So we may say that exceeding the range of clock_t may be seen as "its value cannot be represented"; on the other hand, this interpretation would mean that, after some time, clock() becomes completely useless.
In fact, if we get down to a specific implementation (glibc), we see:
matteo@teokubuntu:~$ man 3 clock
Note that the time can wrap around. On a 32-bit system where
CLOCKS_PER_SEC equals 1000000 this function will return the same value
approximately every 72 minutes.
Depends on what system you are on. It may use a 32- or a 64-bit clock_t. It will definitely roll over, but if it's 64-bit, it will be OK for quite some time before it rolls over - 2^64 microseconds is still an awful long time (approx 2^44 seconds, and there are around 2^16 seconds per day, so 2^28 days - which is about 2^20, or a million, years... ;)
Of course, on a 32-bit system, we have about 2^12 = 4096 seconds at microsecond resolution. An hour being 3600 s, that's about 1h10m.
However, another problem, on some systems, is that clock() returns CPU time used, so if you sleep, it won't count as time in clock().
And of course, even though CLOCKS_PER_SEC may be 1000000, it doesn't mean that you get microsecond resolution - in many systems, it "jumps" 10000 units at a time.
In summary, "probably a bad idea".
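A quick demonstration of the CPU-time pitfall mentioned above (POSIX; sleeping consumes almost no CPU time, so clock() barely moves while wall time advances; note that on some systems, notably Windows, clock() measures wall time instead, which is exactly the kind of inconsistency that makes it unreliable):
#include <cstdio>
#include <ctime>
#include <unistd.h>

int main() {
    std::clock_t c0 = std::clock();
    std::time_t w0 = std::time(nullptr);
    sleep(2); // ~2 s of wall time, almost no CPU time
    std::clock_t c1 = std::clock();
    std::time_t w1 = std::time(nullptr);
    std::printf("clock(): %.3f s CPU, time(): %lld s wall\n",
                (double)(c1 - c0) / CLOCKS_PER_SEC,
                (long long)(w1 - w0));
}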
If you have C++11 on the system, use std::chrono, which has several options for timekeeping that are sufficiently good for most purposes (but do study the std::chrono docs)
Example code:
#include <iostream>
#include <chrono>
#include <unistd.h> // replace with "windows.h" if needed.

int main()
{
    std::chrono::time_point<std::chrono::system_clock> start, end;
    start = std::chrono::system_clock::now();
    // 10 seconds on a unix system. Sleep(10000) on windows will be the same thing
    sleep(10);
    end = std::chrono::system_clock::now();
    int elapsed_seconds = std::chrono::duration_cast<std::chrono::seconds>(end - start).count();
    std::cout << "elapsed time: " << elapsed_seconds << "s\n";
}
The simple answer is that if you're just using it to time a function, it will probably not wrap around. Its resolution may also be too coarse, so chances are you'll see a function duration of zero. If you want accurate timing for a function that executes fast, you're probably better off using an OS-level call, like this one on Windows.
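For reference, a minimal sketch assuming the Windows call alluded to is QueryPerformanceCounter (the usual high-resolution timer API there):
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq); // counter ticks per second
    QueryPerformanceCounter(&t0);
    // ... code under measurement ...
    QueryPerformanceCounter(&t1);
    std::printf("elapsed: %f s\n",
                (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
}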

Measuring the runtime of a C++ code?

I want to measure the runtime of my C++ code. Executing my code takes about 12 hours and I want to write this time at the end of execution of my code. How can I do it in my code?
Operating system: Linux
If you are using C++11 you can use system_clock::now():
auto start = std::chrono::system_clock::now();
/* do some work */
auto end = std::chrono::system_clock::now();
auto elapsed = end - start;
std::cout << elapsed.count() << '\n';
You can also specify the granularity to use for representing a duration:
// this constructs a duration object using milliseconds
auto elapsed =
std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
// this constructs a duration object using seconds
auto elapsed =
std::chrono::duration_cast<std::chrono::seconds>(end - start);
If you cannot use C++11, then have a look at chrono from Boost.
The best thing about using such standard libraries is that their portability is really high (e.g., they both work on Linux and Windows), so you do not need to worry too much if you decide to port your application afterwards.
These libraries also follow a modern C++ design, as opposed to C-like approaches.
EDIT: The example above can be used to measure wall-clock time. That is not, however, the only way to measure the execution time of a program. First, we can distinguish between user and system time:
User time: The time spent by the program running in user space.
System time: The time spent by the program running in system (or kernel) space. A program enters kernel space for instance when executing a system call.
Depending on the objectives it may be necessary or not to consider system time as part of the execution time of a program. For instance, if the aim is to just measure a compiler optimization on the user code then it is probably better to leave out system time. On the other hand, if the user wants to determine whether system calls are a significant overhead, then it is necessary to measure system time as well.
Moreover, since most modern systems are time-shared, different programs may compete for several computing resources (e.g., CPU). In such a case, another distinction can be made:
Wall-clock time: By using wall-clock time the execution of the program is measured in the same way as if we were using an external (wall) clock. This approach does not consider the interaction between programs.
CPU time: In this case we only count the time that a program is actually running on the CPU. If a program (P1) is co-scheduled with another one (P2), and we want to get the CPU time for P1, this approach does not include the time while P2 is running and P1 is waiting for the CPU (as opposed to the wall-clock time approach).
For measuring CPU time, Boost includes a set of extra clocks (a usage sketch follows the list):
process_real_cpu_clock, which captures wall-clock CPU time spent by the current process.
process_user_cpu_clock, which captures user-CPU time spent by the current process.
process_system_cpu_clock, which captures system-CPU time spent by the current process.
A tuple-like class, process_cpu_clock, which captures real, user-CPU, and system-CPU process times together.
A thread_clock, a thread steady clock giving the time spent by the current thread (when supported by a platform).
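A minimal usage sketch of those process clocks (assuming Boost.Chrono is built and linked; header and clock names as listed above):
#include <boost/chrono/process_cpu_clocks.hpp>
#include <iostream>

int main() {
    auto u0 = boost::chrono::process_user_cpu_clock::now();
    auto s0 = boost::chrono::process_system_cpu_clock::now();
    // ... work ...
    auto u1 = boost::chrono::process_user_cpu_clock::now();
    auto s1 = boost::chrono::process_system_cpu_clock::now();
    std::cout << "user: "
              << boost::chrono::duration_cast<boost::chrono::milliseconds>(u1 - u0).count()
              << " ms, system: "
              << boost::chrono::duration_cast<boost::chrono::milliseconds>(s1 - s0).count()
              << " ms\n";
}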
Unfortunately, C++11 does not have such clocks. But Boost is a widely used library and, probably, these extra clocks will be incorporated into the standard at some point. So, if you use Boost, you will be ready when a new C++ standard adds them.
Finally, if you want to measure the time a program takes to execute from the command line (as opposed to adding some code to your program), you may have a look at the time command, just as @BЈовић suggests. This approach, however, would not let you measure individual parts of your program (e.g., the time it takes to execute a function).
Use std::chrono::steady_clock and not std::chrono::system_clock for measuring run time in C++11. The reason is (quoting system_clock's documentation):
on most systems, the system time can be adjusted at any moment
while steady_clock is monotonic and is better suited for measuring intervals:
Class std::chrono::steady_clock represents a monotonic clock. The time
points of this clock cannot decrease as physical time moves forward.
This clock is not related to wall clock time, and is best suitable for
measuring intervals.
Here's an example:
auto start = std::chrono::steady_clock::now();
// do something
auto finish = std::chrono::steady_clock::now();
double elapsed_seconds = std::chrono::duration_cast<
std::chrono::duration<double> >(finish - start).count();
A small practical tip: if you are measuring run time and want to report seconds std::chrono::duration_cast<std::chrono::seconds> is rarely what you need because it gives you whole number of seconds. To get the time in seconds as a double use the example above.
You can use time to start your program. When it ends, it prints nice time statistics about the program run. It is easy to configure what to print. By default, it prints the user and CPU times it took to execute the program.
EDIT: Note that any measurement taken from within the code is not exact, because your application will get blocked by other programs, hence giving you varying values*.
* By varying values, I mean that it is easy to get the time it took to execute the program, but that time varies depending on the CPU load during the program's execution. To get a relatively stable time measurement that doesn't depend on the CPU load, one can execute the application using time and use the reported CPU time as the measurement result.
I used something like this in one of my projects:
#include <sys/time.h>

struct timeval start, end;
gettimeofday(&start, NULL);
//Compute
gettimeofday(&end, NULL);
// Floating-point division, so sub-millisecond parts aren't truncated.
double elapsed = (end.tv_sec - start.tv_sec) * 1000.0
               + (end.tv_usec - start.tv_usec) / 1000.0;
This is for milliseconds and it works for both C and C++.
This is the code I use:
const auto start = std::chrono::steady_clock::now();
// Your code here.
const auto end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed = end - start;
std::cout << "Time in seconds: " << elapsed.count() << '\n';
You don't want to use std::chrono::system_clock because it is not monotonic! If the user changes the time in the middle of your code your result will be wrong - it might even be negative. std::chrono::high_resolution_clock might be implemented using std::chrono::system_clock so I wouldn't recommend that either.
This code also avoids ugly casts.
If you wish to print the measured time with printf(), you can use this:
auto start = std::chrono::system_clock::now();
/* measured work */
auto end = std::chrono::system_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
printf("Time = %lld ms\n", static_cast<long long int>(elapsed.count()));
You could also try some timer classes that start and stop automatically, and gather statistics on the average, maximum and minimum time spent in any block of code, as well as the number of calls. These cxx-rtimer classes are available on GitHub, and offer support for using std::chrono, clock_gettime(), or boost::posix_time as a back-end clock source.
With these timers, you can do something like:
void timeCriticalFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensive");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}
with timing stats written to std::cerr on program completion.