period must be a specialization of ratio in C++17 chrono library? - c++

I'm clearly too stupid to use the C++17 <chrono> library. Compiling the following...
#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    using duration = std::chrono::duration<double, std::chrono::seconds>;
    using timepoint = std::chrono::time_point<clock, duration>;
    timepoint t0 = clock::now();
    for (int i = 0; i < 1000; i++) {
        timepoint t = clock::now();
        duration d = t - t0;
        double seconds = d.count();
        std::cout << seconds << std::endl;
    }
}
I get...
/usr/include/c++/8/chrono:319:16: error: static assertion failed:
period must be a specialization of ratio
static_assert(__is_ratio<_Period>::value,
^~~~~~~~~~~~~~~~~~~
Any ideas?

The second type parameter to std::chrono::duration needs to be a ratio (ticks per second), not another duration (see https://en.cppreference.com/w/cpp/chrono/duration). std::chrono::seconds is a duration. You'd want this instead:
using duration = std::chrono::duration<double, std::ratio<1> >;
FYI std::chrono::seconds is basically a std::chrono::duration<some integer type, std::ratio<1> >; your duration type is sort of like seconds but with a floating point number instead of an integer.
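For reference, here is a minimal corrected sketch of the program above with just that change applied (the loop body trimmed to printing the elapsed seconds):

#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    using duration = std::chrono::duration<double, std::ratio<1> >; // seconds, as double
    using timepoint = std::chrono::time_point<clock, duration>;
    timepoint t0 = clock::now();
    for (int i = 0; i < 1000; i++) {
        timepoint t = clock::now();
        duration d = t - t0;
        std::cout << d.count() << std::endl; // elapsed seconds as a double
    }
}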

Related

Control loop time with usleep

I am trying to keep the execution time of each loop iteration to 10 ms with usleep, but sometimes it exceeds 10 ms.
I have no idea how to solve this problem; is it proper to use usleep and gettimeofday in this case?
Please help me find out what I missed.
Result: 0.0127289
0.0136499
0.0151598
0.0114031
0.014801
#include <sys/time.h>
#include <unistd.h>
#include <iostream>
using namespace std;

double tvsecf(){
    struct timeval tv;
    double asec;
    gettimeofday(&tv, NULL);
    asec = tv.tv_usec;
    asec /= 1e6;
    asec += tv.tv_sec;
    return asec;
}

int main(){
    double t1, t2;
    t1 = tvsecf();
    for (;;) {
        t2 = tvsecf();
        if (t2 - t1 >= 0.01) {
            if (t2 - t1 >= 0.011)
                cout << t2 - t1 << endl;
            t1 = tvsecf();
        }
        usleep(100);
    }
}
To keep the loop overhead (which is generally unknown) from constantly accumulating error, you can sleep until a time point, instead of for a time duration. Using C++'s <chrono> and <thread> libraries, this is incredibly easy:
#include <chrono>
#include <iostream>
#include <thread>

int
main()
{
    using namespace std;
    using namespace std::chrono;
    auto t0 = steady_clock::now() + 10ms;
    for (;;)
    {
        this_thread::sleep_until(t0);
        t0 += 10ms;
    }
}
One can dress this up with more calls to steady_clock::now() in order to ascertain the time between iterations, and perhaps more importantly, the average iteration time:
#include <chrono>
#include <iostream>
#include <thread>

int
main()
{
    using namespace std;
    using namespace std::chrono;
    using dsec = duration<double>;
    auto t0 = steady_clock::now() + 10ms;
    auto t1 = steady_clock::now();
    auto t2 = t1;
    constexpr auto N = 1000;
    dsec avg{0};
    for (auto i = 0; i < N; ++i)
    {
        this_thread::sleep_until(t0);
        t0 += 10ms;
        t2 = steady_clock::now();
        dsec delta = t2 - t1;
        std::cout << delta.count() << "s\n";
        avg += delta;
        t1 = t2;
    }
    avg /= N;
    cout << "avg = " << avg.count() << "s\n";
}
Above I've added to the loop overhead by doing more things within the loop. However the loop is still going to wake up about every 10ms. Sometimes the OS will wake the thread late, but next time the loop automatically adjusts itself to sleep for a shorter time. Thus the average iteration rate self-corrects to 10ms.
On my machine this just output:
...
0.0102046s
0.0128338s
0.00700504s
0.0116826s
0.00785826s
0.0107023s
0.00912614s
0.0104725s
0.010489s
0.0112545s
0.00906409s
avg = 0.0100014s
There is no way to guarantee a 10 ms loop time.
All sleeping functions sleep for at least the requested time.
For a portable solution, use std::this_thread::sleep_for:
#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    for (;;) {
        auto start = std::chrono::high_resolution_clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds{10});
        auto end = std::chrono::high_resolution_clock::now();
        std::chrono::duration<double, std::milli> elapsed = end - start;
        std::cout << "Waited " << elapsed.count() << " ms\n";
    }
}
Depending on what you are trying to do, take a look at Howard Hinnant's date library.
From the usleep man page:
The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers.
If you need high resolution with C on Unix (or Linux), check out this answer, which explains how to use high-resolution timers via clock_gettime.
Edit: As mentioned by Tobias, nanosleep may be a better option:
Compared to sleep(3) and usleep(3), nanosleep() has the following
advantages: it provides a higher resolution for specifying the sleep
interval; POSIX.1 explicitly specifies that it does not interact with
signals; and it makes the task of resuming a sleep that has been
interrupted by a signal handler easier.
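For example, a minimal nanosleep sketch (POSIX) that sleeps for 10 ms might look like this:

#include <time.h>

int main() {
    struct timespec req;
    req.tv_sec = 0;
    req.tv_nsec = 10000000L; // 10 ms in nanoseconds
    nanosleep(&req, NULL);   // may return early if interrupted by a signal
}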

Getting the elapsed milliseconds from the beginning of the last second [duplicate]

I am trying to use time() to measure various points of my program.
What I don't understand is why the values in the before and after are the same. I understand this is not the best way to profile my program; I just want to see how long something takes.
printf("**MyProgram::before time= %ld\n", time(NULL));
doSomthing();
doSomthingLong();
printf("**MyProgram::after time= %ld\n", time(NULL));
I have tried:
struct timeval diff, startTV, endTV;
gettimeofday(&startTV, NULL);
doSomething();
doSomethingLong();
gettimeofday(&endTV, NULL);
timersub(&endTV, &startTV, &diff);
printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec);
How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?
What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?
//***C++11 Style:***
#include <chrono>
std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count() << "[µs]" << std::endl;
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::nanoseconds> (end - begin).count() << "[ns]" << std::endl;
0 - Delta
Use a delta function to compute time differences:
auto start = std::chrono::steady_clock::now();
std::cout << "Elapsed(ms)=" << since(start).count() << std::endl;
since accepts any timepoint and produces any duration (milliseconds is the default). It is defined as:
template <
    class result_t   = std::chrono::milliseconds,
    class clock_t    = std::chrono::steady_clock,
    class duration_t = std::chrono::milliseconds
>
auto since(std::chrono::time_point<clock_t, duration_t> const& start)
{
    return std::chrono::duration_cast<result_t>(clock_t::now() - start);
}
Demo
1 - Timer
Use a timer based on std::chrono:
Timer clock; // Timer<milliseconds, steady_clock>
clock.tick();
/* code you want to measure */
clock.tock();
cout << "Run time = " << clock.duration().count() << " ms\n";
Demo
Timer is defined as:
template <class DT = std::chrono::milliseconds,
          class ClockT = std::chrono::steady_clock>
class Timer
{
    using timep_t = typename ClockT::time_point;
    timep_t _start = ClockT::now(), _end = {};

public:
    void tick() {
        _end = timep_t{};
        _start = ClockT::now();
    }

    void tock() { _end = ClockT::now(); }

    template <class T = DT>
    auto duration() const {
        gsl_Expects(_end != timep_t{} && "toc before reporting");
        return std::chrono::duration_cast<T>(_end - _start);
    }
};
As Howard Hinnant pointed out, we use a duration to remain in the chrono type-system and perform operations like averaging or comparisons (e.g. here this means using std::chrono::milliseconds). When we just do IO, we use the count() or ticks of a duration (e.g. here number of milliseconds).
2 - Instrumentation
Any callable (function, function object, lambda, etc.) can be instrumented for benchmarking. Say you have a function F invocable with arguments arg1, arg2; this technique results in:
cout << "F runtime=" << measure<>::duration(F, arg1, arg2).count() << "ms";
Demo
measure is defined as:
template <class TimeT  = std::chrono::milliseconds,
          class ClockT = std::chrono::steady_clock>
struct measure
{
    template<class F, class ...Args>
    static auto duration(F&& func, Args&&... args)
    {
        auto start = ClockT::now();
        std::invoke(std::forward<F>(func), std::forward<Args>(args)...);
        return std::chrono::duration_cast<TimeT>(ClockT::now() - start);
    }
};
As mentioned in (1), using the duration w/o .count() is most useful for clients that want to post-process a bunch of durations prior to I/O, e.g. average:
auto avg = (measure<>::duration(func) + measure<>::duration(func)) / 2;
std::cout << "Average run time " << avg.count() << " ms\n";
This is why the function call is forwarded.
The complete code can be found here.
My attempt to build a benchmarking framework based on chrono is recorded here.
Old demo:
#include <ctime>

void f() {
    using namespace std;
    clock_t begin = clock();

    code_to_time();

    clock_t end = clock();
    double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
}
The time() function is only accurate to within a second, but clock() counts CLOCKS_PER_SEC "clocks" per second. This is an easy, portable measurement, even though it's over-simplified.
As I can see from your question, it looks like you want to know the elapsed time after execution of some piece of code. I guess you would be comfortable seeing the results in seconds. If so, try using the difftime() function as shown below. I hope this solves your problem.
#include <time.h>
#include <stdio.h>
time_t start,end;
time (&start);
.
.
.
<your code>
.
.
.
time (&end);
double dif = difftime (end,start);
printf ("Elasped time is %.2lf seconds.", dif );
Windows only: (The Linux tag was added after I posted this answer)
You can use GetTickCount() to get the number of milliseconds that have elapsed since the system was started.
long int before = GetTickCount();
// Perform time-consuming operation
long int after = GetTickCount();
struct profiler
{
    std::string name;
    std::chrono::high_resolution_clock::time_point p;

    profiler(std::string const &n) :
        name(n), p(std::chrono::high_resolution_clock::now()) { }

    ~profiler()
    {
        using dura = std::chrono::duration<double>;
        auto d = std::chrono::high_resolution_clock::now() - p;
        std::cout << name << ": "
                  << std::chrono::duration_cast<dura>(d).count()
                  << std::endl;
    }
};

#define PROFILE_BLOCK(pbn) profiler _pfinstance(pbn)
Usage is below:
{
    PROFILE_BLOCK("Some time");
    // your code or function
}
This is similar to RAII scoping.
Note this is not mine, but I thought it was relevant here.
time(NULL) returns the number of seconds elapsed since 01/01/1970 at 00:00 (the Epoch). So the difference between the two values is the number of seconds your processing took.
int t0 = time(NULL);
doSomthing();
doSomthingLong();
int t1 = time(NULL);
printf ("time = %d secs\n", t1 - t0);
You can get finer results with gettimeofday(), which returns the current time in seconds, as time() does, and also in microseconds.
The time(NULL) function returns the number of seconds elapsed since 01/01/1970 at 00:00. Because it is called at different times in your program, the returned value will always be different.
Time in C++
#include <time.h>    // for clock
#include <math.h>    // for fmod
#include <cstdlib>   // for system
#include <stdio.h>
#include <iostream>  // for cout

using namespace std;

int main()
{
    clock_t t1, t2;
    t1 = clock(); // first time capture

    // Now your time spanning loop or code goes here
    // I am first trying to display the time elapsed every time the loop runs
    int ddays = 0; // d prefix is just to say that this variable will be used for display
    int dhh = 0;
    int dmm = 0;
    int dss = 0;

    int loopcount = 1000; // just for demo; your loop will be different of course

    for (float count = 1; count < loopcount; count++)
    {
        t2 = clock(); // we get the time now
        float difference = ((float)t2) - ((float)t1); // clock ticks elapsed since t1

        // now get the time elapsed in seconds
        float seconds = difference / CLOCKS_PER_SEC; // float value of seconds

        if (seconds < (60 * 60 * 24)) // a day is not over
        {
            dss = fmod(seconds, 60);      // the remainder is seconds to be displayed
            float minutes = seconds / 60; // the total minutes in float
            dmm = fmod(minutes, 60);      // the remainder are minutes to be displayed
            float hours = minutes / 60;   // the total hours in float
            dhh = hours;                  // the hours to be displayed
            ddays = 0;
        }
        else // we have reached the counting of days
        {
            float days = seconds / (24 * 60 * 60);
            ddays = (int)(days);
            float minutes = seconds / 60; // the total minutes in float
            dmm = fmod(minutes, 60);      // the remainder are minutes to be displayed
            float hours = minutes / 60;   // the total hours in float
            dhh = fmod(hours, 24);        // the hours to be displayed
        }

        cout << "Count Is : " << count << " Time Elapsed : " << ddays << " Days "
             << dhh << " hrs " << dmm << " mins " << dss << " secs";

        // the actual working code goes here; I have just put a delay function
        delay(1000);   // note: delay() and system("cls") are non-standard (Windows/Turbo C)
        system("cls");
    } // end for loop
}     // end of main
The values printed by your second program are seconds, and microseconds.
0 26339 = 0.026'339 s = 26339 µs
4 45025 = 4.045'025 s = 4045025 µs
#include <ctime>
#include <cstdio>
#include <iostream>
#include <chrono>
#include <sys/time.h>
using namespace std;
using namespace std::chrono;
void f1()
{
    high_resolution_clock::time_point t1 = high_resolution_clock::now();
    high_resolution_clock::time_point t2 = high_resolution_clock::now();
    double dif = duration_cast<nanoseconds>( t2 - t1 ).count();
    printf("Elapsed time is %lf nanoseconds.\n", dif);
}

void f2()
{
    timespec ts1, ts2;
    clock_gettime(CLOCK_REALTIME, &ts1);
    clock_gettime(CLOCK_REALTIME, &ts2);
    double dif = double( ts2.tv_nsec - ts1.tv_nsec );
    printf("Elapsed time is %lf nanoseconds.\n", dif);
}

void f3()
{
    struct timeval t1, t0;
    gettimeofday(&t0, 0);
    gettimeofday(&t1, 0);
    double dif = double( (t1.tv_usec - t0.tv_usec) * 1000 );
    printf("Elapsed time is %lf nanoseconds.\n", dif);
}

void f4()
{
    high_resolution_clock::time_point t1, t2;
    double diff = 0;
    t1 = high_resolution_clock::now();
    for (int i = 1; i <= 10; i++)
    {
        t2 = high_resolution_clock::now();
        diff += duration_cast<nanoseconds>( t2 - t1 ).count();
        t1 = t2;
    }
    printf("high_resolution_clock:: Elapsed time is %lf nanoseconds.\n", diff / 10);
}

void f5()
{
    timespec ts1, ts2;
    double diff = 0;
    clock_gettime(CLOCK_REALTIME, &ts1);
    for (int i = 1; i <= 10; i++)
    {
        clock_gettime(CLOCK_REALTIME, &ts2);
        diff += double( ts2.tv_nsec - ts1.tv_nsec );
        ts1 = ts2;
    }
    printf("clock_gettime:: Elapsed time is %lf nanoseconds.\n", diff / 10);
}

void f6()
{
    struct timeval t1, t2;
    double diff = 0;
    gettimeofday(&t1, 0);
    for (int i = 1; i <= 10; i++)
    {
        gettimeofday(&t2, 0);
        diff += double( (t2.tv_usec - t1.tv_usec) * 1000 );
        t1 = t2;
    }
    printf("gettimeofday:: Elapsed time is %lf nanoseconds.\n", diff / 10);
}

int main()
{
    // f1();
    // f2();
    // f3();
    f6();
    f4();
    f5();
    return 0;
}
C++ std::chrono has a clear benefit of being cross-platform.
However, it also introduces a significant overhead compared to POSIX clock_gettime().
On my Linux box all std::chrono::xxx_clock::now() flavors perform roughly the same:
std::chrono::system_clock::now()
std::chrono::steady_clock::now()
std::chrono::high_resolution_clock::now()
Although POSIX clock_gettime(CLOCK_MONOTONIC, &time) should be equivalent to steady_clock::now(), it is more than three times faster!
Here is my test, for completeness.
#include <stdio.h>
#include <chrono>
#include <ctime>

void print_timediff(const char* prefix, const struct timespec& start, const struct timespec& end)
{
    double milliseconds = end.tv_nsec >= start.tv_nsec
        ? (end.tv_nsec - start.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec) * 1e3
        : (start.tv_nsec - end.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec - 1) * 1e3;
    printf("%s: %lf milliseconds\n", prefix, milliseconds);
}

int main()
{
    int i, n = 1000000;
    struct timespec start, end;

    // Test stopwatch
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i) {
        struct timespec dummy;
        clock_gettime(CLOCK_MONOTONIC, &dummy);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("clock_gettime", start, end);

    // Test chrono system_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::system_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::system_clock::now", start, end);

    // Test chrono steady_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::steady_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::steady_clock::now", start, end);

    // Test chrono high_resolution_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::high_resolution_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::high_resolution_clock::now", start, end);

    return 0;
}
And this is the output I get when compiled with gcc7.2 -O3:
clock_gettime: 24.484926 milliseconds
chrono::system_clock::now: 85.142108 milliseconds
chrono::steady_clock::now: 87.295347 milliseconds
chrono::high_resolution_clock::now: 84.437838 milliseconds
The time(NULL) function call will return the number of seconds elapsed since the epoch: January 1, 1970. Perhaps what you mean to do is take the difference between two timestamps:
size_t start = time(NULL);
doSomthing();
doSomthingLong();
printf ("**MyProgram::time elapsed= %lds\n", time(NULL) - start);
On Linux, clock_gettime() is one of the good choices.
You must link against the real-time library (-lrt).
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <time.h>

#define BILLION 1000000000.0  /* floating-point, so the division below keeps fractional seconds */

int main(int argc, char **argv)
{
    struct timespec start, stop;
    double accum;

    if (clock_gettime(CLOCK_REALTIME, &start) == -1) {
        perror("clock gettime");
        exit(EXIT_FAILURE);
    }

    system(argv[1]);

    if (clock_gettime(CLOCK_REALTIME, &stop) == -1) {
        perror("clock gettime");
        exit(EXIT_FAILURE);
    }

    accum = (stop.tv_sec - start.tv_sec)
          + (stop.tv_nsec - start.tv_nsec) / BILLION;
    printf("%lf\n", accum);
    return EXIT_SUCCESS;
}
As others have already noted, the time() function in the C standard library does not have a resolution better than one second. The only fully portable C function that may provide better resolution appears to be clock(), but that measures processor time rather than wallclock time. If one is content to limit oneself to POSIX platforms (e.g. Linux), then the clock_gettime() function is a good choice.
Since C++11, there are much better timing facilities available that offer better resolution in a form that should be very portable across different compilers and operating systems. Similarly, the boost::datetime library provides good high-resolution timing classes that should be highly portable.
One challenge in using any of these facilities is the time-delay introduced by querying the system clock. From experimenting with clock_gettime(), boost::datetime and std::chrono, this delay can easily be a matter of microseconds. So, when measuring the duration of any part of your code, you need to allow for there being a measurement error of around this size, or try to correct for that zero-error in some way. Ideally, you may well want to gather multiple measurements of the time taken by your function, and compute the average, or maximum/minimum time taken across many runs.
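As a rough illustration of both ideas, here is a minimal sketch (not part of cxx-rtimers; the average_seconds helper is hypothetical) that estimates the clock's zero error from two back-to-back now() calls and then averages several timed runs of a callable:

#include <chrono>
#include <iostream>

// Hypothetical helper: average the wall-clock time of several runs of f.
template <class F>
double average_seconds(F&& f, int runs = 10) {
    std::chrono::duration<double> total{0};
    for (int i = 0; i < runs; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        total += std::chrono::steady_clock::now() - t0;
    }
    return total.count() / runs;
}

int main() {
    // Zero error: the apparent duration of doing nothing between two clock queries.
    auto a = std::chrono::steady_clock::now();
    auto b = std::chrono::steady_clock::now();
    std::cout << "zero error ~ "
              << std::chrono::duration<double, std::micro>(b - a).count() << " us\n";

    std::cout << "avg run time ~ "
              << average_seconds([] { volatile long s = 0; for (long i = 0; i < 100000; ++i) s += i; })
              << " s\n";
}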
To help with all these portability and statistics-gathering issues, I've been developing the cxx-rtimers library available on Github which tries to provide a simple API for timing blocks of C++ code, computing zero errors, and reporting stats from multiple timers embedded in your code. If you have a C++11 compiler, you simply #include <rtimers/cxx11.hpp>, and use something like:
void expensiveFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensiveFunc");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}
On program exit, you'll get a summary of timing stats written to std::cerr such as:
Timer(expensiveFunc): <t> = 6.65289us, std = 3.91685us, 3.842us <= t <= 63.257us (n=731)
which shows the mean time, its standard-deviation, the upper and lower limits, and the number of times this function was called.
If you want to use Linux-specific timing functions, you can #include <rtimers/posix.hpp>, or if you have the Boost libraries but an older C++ compiler, you can #include <rtimers/boost.hpp>. There are also versions of these timer classes that can gather statistical timing information from across multiple threads. There are also methods that allow you to estimate the zero-error associated with two immediately consecutive queries of the system clock.
Internally the function will access the system's clock, which is why it returns different values each time you call it. In general with non-functional languages there can be many side effects and hidden state in functions which you can't see just by looking at the function's name and arguments.
From what I see, tv_sec stores the elapsed seconds while tv_usec separately stores the remaining microseconds; one is not a conversion of the other. Hence, they must be converted to the same unit and added together to get the total time elapsed.
struct timeval startTV, endTV;

gettimeofday(&startTV, NULL);

doSomething();
doSomethingLong();

gettimeofday(&endTV, NULL);

printf("**time taken in microseconds = %ld\n",
       (endTV.tv_sec * 1000000L + endTV.tv_usec) - (startTV.tv_sec * 1000000L + startTV.tv_usec)
);
I needed to measure the execution time of individual functions within a library. I didn't want to have to wrap every call of every function with a time-measuring function, because it's ugly and deepens the call stack. I also didn't want to put timer code at the top and bottom of every function, because it makes a mess when the function can exit early or throw exceptions, for example. So what I ended up doing was making a timer that uses its own lifetime to measure time.
In this way I can measure the wall time a block of code took by just instantiating one of these objects at the beginning of the code block in question (function or any scope, really) and then allowing the instance's destructor to measure the time elapsed since construction when the instance goes out of scope. You can find the full example here, but the struct is extremely simple:
template <typename clock_t = std::chrono::steady_clock>
struct scoped_timer {
    using duration_t = typename clock_t::duration;
    const std::function<void(const duration_t&)> callback;
    const std::chrono::time_point<clock_t> start;

    scoped_timer(const std::function<void(const duration_t&)>& finished_callback) :
        callback(finished_callback), start(clock_t::now()) { }
    scoped_timer(std::function<void(const duration_t&)>&& finished_callback) :
        callback(finished_callback), start(clock_t::now()) { }

    ~scoped_timer() { callback(clock_t::now() - start); }
};
The struct will call you back on the provided functor when it goes out of scope so you can do something with the timing information (print it or store it or whatever). If you need to do something even more complex you could even use std::bind with std::placeholders to callback functions with more arguments.
Here's a quick example of using it:
void test(bool should_throw) {
    scoped_timer<> t([](const scoped_timer<>::duration_t& elapsed) {
        auto e = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(elapsed).count();
        std::cout << "took " << e << "ms" << std::endl;
    });

    std::this_thread::sleep_for(std::chrono::seconds(1));
    if (should_throw)
        throw nullptr;
    std::this_thread::sleep_for(std::chrono::seconds(1));
}
If you want to be more deliberate, you can also use new and delete to explicitly start and stop the timer without relying on scoping to do it for you.
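For example, an explicit start/stop sketch could look like this (do_work() is a hypothetical placeholder for the code being measured):

auto* t = new scoped_timer<>([](const scoped_timer<>::duration_t& elapsed) {
    std::cout << "took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
              << "ms" << std::endl;
});
do_work();  // hypothetical work being timed
delete t;   // destructor runs here and invokes the callback with the elapsed time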
They are the same because your doSomething function completes faster than the granularity of the timer. Try:
printf ("**MyProgram::before time= %ld\n", time(NULL));
for(i = 0; i < 1000; ++i) {
doSomthing();
doSomthingLong();
}
printf ("**MyProgram::after time= %ld\n", time(NULL));
The reason both values are the same is that your long procedure doesn't take that long: less than one second. You can try just adding a long loop (for (int i = 0; i < 100000000; i++) ; ) at the end of the function to make sure this is the issue, then we can go from there...
In case the above turns out to be true, you will need to find a different system function (I understand you work on Linux, so I can't help you with the function name) to measure time more accurately. I am sure there is a function similar to GetTickCount() in Linux; you just need to find it.
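If it helps, a rough Linux counterpart to GetTickCount() can be sketched with clock_gettime() (the helper name here is hypothetical):

#include <time.h>

// Milliseconds since an arbitrary, monotonic starting point (like GetTickCount()).
long get_tick_count_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}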
I usually use the following:
#include <chrono>
#include <type_traits>

using perf_clock = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock
>::type;

using floating_seconds = std::chrono::duration<double>;

template<class Func, class... Args>
floating_seconds run_test(Func&& func, Args&&... args)
{
    const auto t0 = perf_clock::now();
    std::forward<Func>(func)(std::forward<Args>(args)...);
    return floating_seconds(perf_clock::now() - t0);
}
It's the same as what @nikos-athanasiou proposed, except that I avoid using a non-steady clock and use a floating-point number of seconds as the duration.
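A usage sketch, assuming the run_test helper above is in scope, might look like:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(1000000, 42);
    auto secs = run_test([&v] { std::sort(v.begin(), v.end()); });
    std::cout << "sort took " << secs.count() << " s\n";
}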
Matlab flavored!
tic starts a stopwatch timer to measure performance. The function records the internal time at execution of the tic command. Display the elapsed time with the toc function.
#include <iostream>
#include <ctime>
#include <thread>

using namespace std;

clock_t START_TIMER;

clock_t tic()
{
    return START_TIMER = clock();
}

void toc(clock_t start = START_TIMER)
{
    cout
        << "Elapsed time: "
        << (clock() - start) / (double)CLOCKS_PER_SEC << "s"
        << endl;
}

int main()
{
    tic();
    this_thread::sleep_for(2s);
    toc();
    return 0;
}
In answer to OP's three specific questions.
"What I don't understand is why the values in the before and after are the same?"
The first question and sample code show that time() has a resolution of 1 second, so the answer has to be that the two functions execute in less than 1 second. But occasionally it will (apparently illogically) report 1 second if the two timer marks straddle a one-second boundary.
The next example uses gettimeofday(), which fills this struct:
struct timeval {
    time_t      tv_sec;   /* seconds */
    suseconds_t tv_usec;  /* microseconds */
};
and the second question asks: "How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?"
My second answer is the time taken is 0 seconds and 26339 microseconds, that is 0.026339 seconds, which bears out the first example executing in less than 1 second.
The third question asks: "What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?"
My third answer is the time taken is 4 seconds and 45025 microseconds, that is 4.045025 seconds, which shows that OP has altered the tasks performed by the two functions which he previously timed.
Here's a simple class that will print the duration between the time it got in and out of scope in the specified duration unit:
#include <chrono>
#include <iostream>
template <typename T>
class Benchmark
{
public:
    Benchmark(std::string name) : start(std::chrono::steady_clock::now()), name(name) {}
    ~Benchmark()
    {
        auto end = std::chrono::steady_clock::now();
        T duration = std::chrono::duration_cast<T>(end - start);
        std::cout << "Bench \"" << name << "\" took: " << duration.count() << " units" << std::endl;
    }

private:
    std::string name;
    std::chrono::time_point<std::chrono::steady_clock> start;
};

int main()
{
    Benchmark<std::chrono::nanoseconds> bench("for loop");
    for (int i = 0; i < 1001000; i++) {}
}
Example usage:
int main()
{
    Benchmark<std::chrono::nanoseconds> bench("for loop");
    for (int i = 0; i < 100000; i++) {}
}
Outputs:
Bench "for loop" took: 230656 units
#include <cstdio>
#include <ctime>
#include <functional>

using namespace std;

void f() {
    clock_t begin = clock();

    // ...code to measure time...

    clock_t end = clock();

    function<double(double, double)> convtime = [](clock_t begin, clock_t end)
    {
        return double(end - begin) / CLOCKS_PER_SEC;
    };

    printf("Elapsed time: %.2g sec\n", convtime(begin, end));
}
This is a similar example to one available here, only with an additional conversion function and a printout.
I have created a class to automatically measure elapsed time. Please check the code (C++11) at this link: https://github.com/sonnt174/Common/blob/master/time_measure.h
Example of how to use class TimeMeasure:
void test_time_measure(std::vector<int> arr) {
    TimeMeasure<chrono::microseconds> time_mea;  // create time measure obj
    std::sort(begin(arr), end(arr));
}

High Resolution Clock in VS2013

I'm looking for a cross-platform clock with high resolution, high precision, and relatively low performance impact (in order of importance).
I've tried:
//using namespace std::chrono;
//typedef std::chrono::high_resolution_clock Clock;
using namespace boost::chrono;
typedef boost::chrono::high_resolution_clock Clock;
auto now = Clock::now().time_since_epoch();
std::size_t secs = duration_cast<seconds>(now).count();
std::size_t nanos = duration_cast<nanoseconds>(now).count() % 1000000000;
std::time_t tp = (std::time_t) secs;
std::string mode;
char timestamp[] = "yyyymmdd HH:MM:SS";
char format[] = "%Y%m%d %H:%M:%S";
strftime(timestamp, 80, format, std::localtime(&tp)); // Takes 12 microseconds
std::string output = timestamp + "." + std::to_string(nanos);
After some trials and testing:
The original std::chrono::high_resolution_clock is typedef'd to system_clock and has a precision of roughly 1 millisecond.
The boost::chrono::high_resolution_clock uses QueryPerformanceCounter on Windows and has high resolution and precision. Unfortunately, Clock::now() returns time since boot, and now().time_since_epoch() does not return epoch time (it also returns time since boot).
I don't mind using guards for different solutions on different platforms (I want VS2013 and Linux). I will likely store the now() result and do the processing in a separate, low-priority thread.
Does a cross-platform, high-resolution, high-precision, performance-friendly timer exist?
Is boost::chrono::high_resolution_clock::now().time_since_epoch() working as intended? It does not give a time since the last epoch; it only gives a time since the last boot. Is there a way to convert this now() into seconds since the epoch?
I think the nicest way to do it is to implement a new clock type that models the Clock requirement in the C++11/14 standard.
The Windows function GetSystemTimePreciseAsFileTime can be used as the basis of the Windows clock. I believe this function returns the time in units of 100 nanoseconds since the start of the Windows epoch. If I'm wrong about that, just alter the definition of period to suit.
struct windows_highres_clock
{
    // implement Clock concept
    using rep = ULONGLONG;
    using period = std::ratio<1, 10000000>;
    using duration = std::chrono::duration<rep, period>;
    using time_point = std::chrono::time_point<windows_highres_clock, duration>;
    static constexpr const bool is_steady = true;

    static time_point now() noexcept {
        FILETIME ft = { 0, 0 };
        GetSystemTimePreciseAsFileTime(&ft);
        ULARGE_INTEGER stamp { { ft.dwLowDateTime, ft.dwHighDateTime } };
        return time_point { duration { stamp.QuadPart } };
    }
};
If you want to go ahead and implement the TrivialClock concept on top of it, that should work. Just follow the instructions at http://cppreference.com
Providing from_time_t and to_time_t member functions will complete the picture and allow you to use this clock for both timing and datetime representation.
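A sketch of those two members, to be added inside windows_highres_clock, assuming the usual 11644473600-second offset between the Windows FILETIME epoch (1601-01-01) and the Unix epoch (and <ctime> for std::time_t):

static std::time_t to_time_t(const time_point& tp) noexcept {
    // 100 ns ticks since 1601 -> whole seconds since 1970
    return static_cast<std::time_t>(tp.time_since_epoch().count() / 10000000ULL - 11644473600ULL);
}

static time_point from_time_t(std::time_t t) noexcept {
    // seconds since 1970 -> 100 ns ticks since 1601
    return time_point{ duration{ (static_cast<ULONGLONG>(t) + 11644473600ULL) * 10000000ULL } };
}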
example of use:
windows_highres_clock clock;
auto t0 = clock.now();
sleep(1);
auto t1 = clock.now();
auto diff = t1 - t0;
auto ms = std::chrono::duration_cast<chrono::milliseconds>(diff);
cout << "waited for " << ms.count() << " milliseconds\n";
example output:
waited for 1005 milliseconds
For non-Windows systems the system_clock usually suffices, but you can write a similar per-system clock using the appropriate native timing mechanisms.
FYI:
Here's a portable piece of code you can use to check clock resolutions.
The class any_clock is a polymorphic container that can hold any Clock-like object. It always returns its time stamps as microseconds since the epoch.
// create some syntax to help specialise the any_clock
template<class Clock> struct of_type {};

// polymorphic clock container
struct any_clock
{
    template<class Clock>
    any_clock(of_type<Clock>)
    : _ptr { new model<Clock> {} }
    {}

    std::chrono::microseconds now() const {
        return _ptr->now();
    }

    using duration = std::chrono::microseconds;

private:
    struct concept {
        virtual ~concept() = default;
        virtual duration now() const noexcept = 0;
    };

    template<class Clock>
    struct model final : concept {
        duration now() const noexcept final {
            return std::chrono::duration_cast<std::chrono::microseconds>(Clock::now().time_since_epoch());
        }
    };

    std::unique_ptr<concept> _ptr;
};
int main(int argc, const char * argv[])
{
    any_clock clocks[] = {
        { of_type<windows_highres_clock>() },
        { of_type<std::chrono::high_resolution_clock>() },
        { of_type<std::chrono::system_clock>() }
    };
    static constexpr size_t nof_clocks = std::extent<decltype(clocks)>::value;

    any_clock::duration t0[nof_clocks];
    any_clock::duration t1[nof_clocks];

    for (size_t i = 0 ; i < nof_clocks ; ++i) {
        t0[i] = clocks[i].now();
    }
    sleep(1);
    for (size_t i = 0 ; i < nof_clocks ; ++i) {
        t1[i] = clocks[i].now();
    }

    for (size_t i = 0 ; i < nof_clocks ; ++i) {
        auto diff = t1[i] - t0[i];
        auto ms = std::chrono::duration_cast<chrono::microseconds>(diff);
        cout << "waited for " << ms.count() << " microseconds\n";
    }
    return 0;
}

Boost Timer 24 hour format

I'm using boost::timer::cpu_timer to calculate the "user process time" of an algorithm like so:
boost::timer::cpu_timer timer;
boost::timer::nanosecond_type userTime = timer.elapsed().user;
My question is how do I format userTime in HH::MM::SS.mmm format? I know I can write the code myself, but I was expecting Boost to provide some means of doing this.
I came across this example, but it makes use of boost::chrono::duration<Rep, Period>, which I'm not sure how to obtain from boost::timer::nanosecond_type.
You need to convert nanosecond_type to a duration, and then to a time_point.
#include <iostream>
#include <boost/timer/timer.hpp>
#include <boost/chrono.hpp>
#include <boost/format.hpp>

namespace chrono = boost::chrono;

int main()
{
    // get now time & start timer
    chrono::system_clock::time_point start_time = chrono::system_clock::now();
    boost::timer::cpu_timer timer;

    for (int i = 0; i < 100000; ++i) {}

    // elapsed time conversion to time_point
    chrono::system_clock::time_point end_time
        = chrono::time_point_cast<chrono::system_clock::duration>(
              start_time + chrono::nanoseconds(timer.elapsed().user));

    // time_point conversion to time_t & tm
    std::time_t time = chrono::system_clock::to_time_t(end_time);
    std::tm* t = std::localtime(&time);

    // formatting
    std::size_t fractional_seconds = chrono::duration_cast<chrono::milliseconds>(
        end_time.time_since_epoch()
    ).count() % 1000;

    std::string s = (boost::format("%d:%d:%d.%d")
        % t->tm_hour
        % t->tm_min
        % t->tm_sec
        % fractional_seconds
    ).str();

    std::cout << s << std::endl;
}
possible output:
10:42:55.445

Sleep function in C++

Is there a function like Sleep(time); that pauses the program for X milliseconds, but in C++?
Which header should I add and what is the function's signature?
Use std::this_thread::sleep_for:
#include <chrono>
#include <thread>
std::chrono::milliseconds timespan(111605); // or whatever
std::this_thread::sleep_for(timespan);
There is also the complementary std::this_thread::sleep_until.
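For example, a minimal sleep_until sketch that wakes at a specific time point 100 ms from now:

#include <chrono>
#include <thread>

int main() {
    auto wake_at = std::chrono::steady_clock::now() + std::chrono::milliseconds(100);
    std::this_thread::sleep_until(wake_at);  // sleeps until the given time point
}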
Prior to C++11, C++ had no thread concept and no sleep capability, so your solution was necessarily platform dependent. Here's a snippet that defines a sleep function for Windows or Unix:
#ifdef _WIN32
#include <windows.h>

void sleep(unsigned milliseconds)
{
    Sleep(milliseconds);
}
#else
#include <unistd.h>

void sleep(unsigned milliseconds)
{
    usleep(milliseconds * 1000); // takes microseconds
}
#endif
But a much simpler pre-C++11 method is to use boost::this_thread::sleep.
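A sketch of that Boost variant, assuming Boost.Thread and Boost.DateTime are available, might be:

#include <boost/thread/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main() {
    boost::this_thread::sleep(boost::posix_time::milliseconds(100)); // sleep for 100 ms
}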
You'll need at least C++11.
#include <thread>
#include <chrono>
...
std::this_thread::sleep_for(std::chrono::milliseconds(200));
For Windows:
#include "windows.h"
Sleep(10);
For Unix:
#include <unistd.h>
usleep(10);
On Unix, include #include <unistd.h>.
The call you're interested in is usleep(). Which takes microseconds, so you should multiply your millisecond value by 1000 and pass the result to usleep().
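For example, a small sketch of a millisecond wrapper (the helper name is hypothetical):

#include <unistd.h>

// Sleep for the given number of milliseconds using usleep().
void sleep_ms(unsigned ms) {
    usleep(ms * 1000); // usleep() takes microseconds
}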
Just use it...
First include the unistd.h header file (#include <unistd.h>), and use this function to pause your program execution for the desired number of seconds:
sleep(x);
x can take any value in seconds.
If you want to pause the program for 5 seconds it is like this:
sleep(5);
It is correct and I use it frequently.
It is valid for C and C++.
Prior to C++11, there was no standard way to do this.
A portable way is to use the Boost or ACE library.
There is ACE_OS::sleep() in ACE.
The simplest way I found for C++11 was this:
Your includes:
#include <chrono>
#include <thread>
Your code (this is an example for sleep 1000 millisecond):
std::chrono::duration<int, std::milli> timespan(1000);
std::this_thread::sleep_for(timespan);
The duration could be configured to any of the following:
std::chrono::nanoseconds duration</*signed integer type of at least 64 bits*/, std::nano>
std::chrono::microseconds duration</*signed integer type of at least 55 bits*/, std::micro>
std::chrono::milliseconds duration</*signed integer type of at least 45 bits*/, std::milli>
std::chrono::seconds duration</*signed integer type of at least 35 bits*/, std::ratio<1>>
std::chrono::minutes duration</*signed integer type of at least 29 bits*/, std::ratio<60>>
std::chrono::hours duration</*signed integer type of at least 23 bits*/, std::ratio<3600>>
For a short solution, use:
#include <chrono>
#include <thread>

using namespace std;
using namespace std::this_thread;

void f() {
    sleep_for(200ms);
}
Recently I was learning about the chrono library and thought of implementing a sleep function on my own. Here is the code:
#include <cmath>
#include <chrono>

template <typename rep = std::chrono::seconds::rep,
          typename period = std::chrono::seconds::period>
void sleep(std::chrono::duration<rep, period> sec)
{
    using sleep_duration = std::chrono::duration<long double, std::nano>;

    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();

    long double elapsed_time =
        std::chrono::duration_cast<sleep_duration>(end - start).count();
    long double sleep_time =
        std::chrono::duration_cast<sleep_duration>(sec).count();

    while (std::isgreater(sleep_time, elapsed_time)) {
        end = std::chrono::steady_clock::now();
        elapsed_time = std::chrono::duration_cast<sleep_duration>(end - start).count();
    }
}
We can use it with any std::chrono::duration type (By default it takes std::chrono::seconds as argument). For example,
#include <cmath>
#include <chrono>
#include <iomanip>
#include <iostream>

template <typename rep = std::chrono::seconds::rep,
          typename period = std::chrono::seconds::period>
void sleep(std::chrono::duration<rep, period> sec)
{
    using sleep_duration = std::chrono::duration<long double, std::nano>;

    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();

    long double elapsed_time =
        std::chrono::duration_cast<sleep_duration>(end - start).count();
    long double sleep_time =
        std::chrono::duration_cast<sleep_duration>(sec).count();

    while (std::isgreater(sleep_time, elapsed_time)) {
        end = std::chrono::steady_clock::now();
        elapsed_time = std::chrono::duration_cast<sleep_duration>(end - start).count();
    }
}

using namespace std::chrono_literals;

int main(void) {
    std::chrono::steady_clock::time_point start1 = std::chrono::steady_clock::now();
    sleep(5s); // sleep for 5 seconds
    std::chrono::steady_clock::time_point end1 = std::chrono::steady_clock::now();

    std::cout << std::setprecision(9) << std::fixed;
    std::cout << "Elapsed time was: " << std::chrono::duration_cast<std::chrono::seconds>(end1 - start1).count() << "s\n";

    std::chrono::steady_clock::time_point start2 = std::chrono::steady_clock::now();
    sleep(500000ns); // sleep for 500000 nanoseconds / 500 microseconds
                     // same as writing: sleep(500us)
    std::chrono::steady_clock::time_point end2 = std::chrono::steady_clock::now();

    std::cout << "Elapsed time was: " << std::chrono::duration_cast<std::chrono::microseconds>(end2 - start2).count() << "us\n";
    return 0;
}
For more information, visit https://en.cppreference.com/w/cpp/header/chrono
and see this cppcon talk of Howard Hinnant, https://www.youtube.com/watch?v=P32hvk8b13M.
He has two more talks on the chrono library. And you can always use the library function std::this_thread::sleep_for.
Note: Outputs may not be accurate. So, don't expect it to give exact timings.
I like the solution proposed by @Ben Voigt: it does not rely on anything outside of C++, but he did not mention an important detail needed to make the code work. So I am putting up the full code; please notice the line starting with using.
#include <thread>
#include <chrono>
...
using namespace std::chrono_literals;
std::this_thread::sleep_for(200ms);