Getting the elapsed milliseconds from the beginning of the last second [duplicate] - c++
I am trying to use time() to measure various points of my program.
What I don't understand is why the values printed before and after are the same. I understand this is not the best way to profile my program; I just want to see how long something takes.
printf("**MyProgram::before time= %ld\n", time(NULL));
doSomthing();
doSomthingLong();
printf("**MyProgram::after time= %ld\n", time(NULL));
I have tried:
struct timeval diff, startTV, endTV;
gettimeofday(&startTV, NULL);
doSomething();
doSomethingLong();
gettimeofday(&endTV, NULL);
timersub(&endTV, &startTV, &diff);
printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec);
How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?
What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?
//***C++11 Style:***
#include <chrono>
#include <iostream>
std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
// ... code you want to measure goes here ...
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count() << "[µs]" << std::endl;
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::nanoseconds> (end - begin).count() << "[ns]" << std::endl;
0 - Delta
Use a delta function to compute time differences:
auto start = std::chrono::steady_clock::now();
std::cout << "Elapsed(ms)=" << since(start).count() << std::endl;
since accepts any timepoint and produces any duration (milliseconds is the default). It is defined as:
template <
class result_t = std::chrono::milliseconds,
class clock_t = std::chrono::steady_clock,
class duration_t = std::chrono::milliseconds
>
auto since(std::chrono::time_point<clock_t, duration_t> const& start)
{
return std::chrono::duration_cast<result_t>(clock_t::now() - start);
}
Demo
1 - Timer
Use a timer based on std::chrono:
Timer clock; // Timer<milliseconds, steady_clock>
clock.tick();
/* code you want to measure */
clock.tock();
cout << "Run time = " << clock.duration().count() << " ms\n";
Demo
Timer is defined as:
template <class DT = std::chrono::milliseconds,
class ClockT = std::chrono::steady_clock>
class Timer
{
using timep_t = typename ClockT::time_point;
timep_t _start = ClockT::now(), _end = {};
public:
void tick() {
_end = timep_t{};
_start = ClockT::now();
}
void tock() { _end = ClockT::now(); }
template <class T = DT>
auto duration() const {
gsl_Expects(_end != timep_t{} && "toc before reporting"); // gsl_Expects is from gsl-lite; any assert works
return std::chrono::duration_cast<T>(_end - _start);
}
};
As Howard Hinnant pointed out, we keep a duration to remain in the chrono type-system and perform operations like averaging or comparisons (e.g. here this means using std::chrono::milliseconds). Only when we do I/O do we use the count(), i.e. the ticks, of a duration (e.g. here the number of milliseconds).
2 - Instrumentation
Any callable (function, function object, lambda, etc.) can be instrumented for benchmarking. Say you have a function F invocable with arguments arg1, arg2; this technique results in:
cout << "F runtime=" << measure<>::duration(F, arg1, arg2).count() << "ms";
Demo
measure is defined as:
template <class TimeT = std::chrono::milliseconds,
class ClockT = std::chrono::steady_clock>
struct measure
{
template<class F, class ...Args>
static auto duration(F&& func, Args&&... args)
{
auto start = ClockT::now();
std::invoke(std::forward<F>(func), std::forward<Args>(args)...); // std::invoke needs <functional> (C++17)
return std::chrono::duration_cast<TimeT>(ClockT::now()-start);
}
};
As mentioned in (1), using the duration w/o .count() is most useful for clients that want to post-process a bunch of durations prior to I/O, e.g. average:
auto avg = (measure<>::duration(func) + measure<>::duration(func)) / 2;
std::cout << "Average run time " << avg.count() << " ms\n";
This is why the function call is forwarded.
The complete code can be found here.
My attempt to build a benchmarking framework based on chrono is recorded here.
Old demo
#include <ctime>
void f() {
using namespace std;
clock_t begin = clock();
code_to_time();
clock_t end = clock();
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
}
The time() function is only accurate to within a second, but there are CLOCKS_PER_SEC "clocks" within a second. This is an easy, portable measurement, even though it's over-simplified.
As I can see from your question, you want to know the elapsed time after executing some piece of code, and you would probably like the result in seconds. If so, try the difftime() function as shown below. Hope this solves your problem.
#include <time.h>
#include <stdio.h>
time_t start,end;
time (&start);
.
.
.
<your code>
.
.
.
time (&end);
double dif = difftime (end,start);
printf ("Elasped time is %.2lf seconds.", dif );
Windows only: (The Linux tag was added after I posted this answer)
You can use GetTickCount() to get the number of milliseconds that have elapsed since the system was started.
long int before = GetTickCount();
// Perform time-consuming operation
long int after = GetTickCount();
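A minimal sketch of printing the difference; note that GetTickCount() wraps around after roughly 49.7 days, so this sketch uses the 64-bit GetTickCount64() (Vista and later) instead, with Sleep() standing in for the time-consuming operation:
#include <windows.h>
#include <cstdio>
int main()
{
    ULONGLONG before = GetTickCount64();   // 64-bit tick count, no 49.7-day wrap-around
    Sleep(1500);                           // stand-in for the time-consuming operation
    ULONGLONG after = GetTickCount64();
    printf("elapsed = %llu ms\n", after - before);
    return 0;
}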
struct profiler
{
std::string name;
std::chrono::high_resolution_clock::time_point p;
profiler(std::string const &n) :
name(n), p(std::chrono::high_resolution_clock::now()) { }
~profiler()
{
using dura = std::chrono::duration<double>;
auto d = std::chrono::high_resolution_clock::now() - p;
std::cout << name << ": "
<< std::chrono::duration_cast<dura>(d).count()
<< std::endl;
}
};
#define PROFILE_BLOCK(pbn) profiler _pfinstance(pbn)
Usage is below:
{
PROFILE_BLOCK("Some time");
// your code or function
}
This is similar to RAII: the profiling stops when the object goes out of scope.
NOTE: this is not mine, but I thought it was relevant here.
time(NULL) returns the number of seconds elapsed since 01/01/1970 at 00:00 (the Epoch). So the difference between the two values is the number of seconds your processing took.
int t0 = time(NULL);
doSomthing();
doSomthingLong();
int t1 = time(NULL);
printf ("time = %d secs\n", t1 - t0);
You can get finer results with gettimeofday(), which returns the current time in seconds, as time() does, and also in microseconds.
The time(NULL) function will return the number of seconds elapsed since 01/01/1970 at 00:00. Because that function is called at a different time in your program, the value it returns will always be different.
Time in C++
#include<time.h>   // for clock
#include<math.h>   // for fmod
#include<cstdlib>  // for system
#include<iostream> // for cout
// Note: delay() used below is non-standard (Turbo C); on POSIX use usleep() from <unistd.h>
using namespace std;
int main()
{
clock_t t1,t2;
t1=clock(); // first time capture
// Now your time spanning loop or code goes here
// i am first trying to display time elapsed every time loop runs
int ddays=0; // d prefix is just to say that this variable will be used for display
int dhh=0;
int dmm=0;
int dss=0;
int loopcount = 1000 ; // just for demo your loop will be different of course
for(float count=1;count<loopcount;count++)
{
t2=clock(); // we get the time now
float difference= (((float)t2)-((float)t1)); // elapsed clock ticks since t1
// now get the time elapsed in seconds
float seconds = difference/CLOCKS_PER_SEC; // float value of seconds
if (seconds<(60*60*24)) // a day is not over
{
dss = fmod(seconds,60); // the remainder is seconds to be displayed
float minutes= seconds/60; // the total minutes in float
dmm= fmod(minutes,60); // the remainder are minutes to be displayed
float hours= minutes/60; // the total hours in float
dhh= hours; // the hours to be displayed
ddays=0;
}
else // we have reached the counting of days
{
float days = seconds/(24*60*60);
ddays = (int)(days);
float minutes= seconds/60; // the total minutes in float
dmm= fmod(minutes,60); // the remainder is the minutes to be displayed
float hours= minutes/60; // the total hours in float
dhh= fmod (hours,24); // the hours to be displayed
}
cout<<"Count Is : "<<count<<"Time Elapsed : "<<ddays<<" Days "<<dhh<<" hrs "<<dmm<<" mins "<<dss<<" secs";
// the actual working code goes here; I have just put a delay function
delay(1000);   // non-standard; on POSIX use usleep(1000*1000) instead
system("cls"); // Windows-only; on Linux use system("clear")
} // end for loop
}// end of main
The values printed by your second program are seconds, and microseconds.
0 26339 = 0.026'339 s = 26339 µs
4 45025 = 4.045'025 s = 4045025 µs
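If you would rather have a single number, here is a minimal sketch (reusing the diff produced by timersub() in the question) that folds both fields into milliseconds:
double elapsed_ms = diff.tv_sec * 1000.0 + diff.tv_usec / 1000.0;
printf("**time taken = %.3f ms\n", elapsed_ms); // 0 26339 -> 26.339 ms, 4 45025 -> 4045.025 ms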
#include <ctime>
#include <cstdio>
#include <iostream>
#include <chrono>
#include <sys/time.h>
using namespace std;
using namespace std::chrono;
void f1()
{
high_resolution_clock::time_point t1 = high_resolution_clock::now();
high_resolution_clock::time_point t2 = high_resolution_clock::now();
double dif = duration_cast<nanoseconds>( t2 - t1 ).count();
printf ("Elasped time is %lf nanoseconds.\n", dif );
}
void f2()
{
timespec ts1,ts2;
clock_gettime(CLOCK_REALTIME, &ts1);
clock_gettime(CLOCK_REALTIME, &ts2);
double dif = double( ts2.tv_sec - ts1.tv_sec ) * 1e9 + double( ts2.tv_nsec - ts1.tv_nsec );
printf ("Elapsed time is %lf nanoseconds.\n", dif );
}
void f3()
{
struct timeval t1,t0;
gettimeofday(&t0, 0);
gettimeofday(&t1, 0);
double dif = ( double(t1.tv_sec - t0.tv_sec) * 1e6 + double(t1.tv_usec - t0.tv_usec) ) * 1000;
printf ("Elapsed time is %lf nanoseconds.\n", dif );
}
void f4()
{
high_resolution_clock::time_point t1 , t2;
double diff = 0;
t1 = high_resolution_clock::now() ;
for(int i = 1; i <= 10 ; i++)
{
t2 = high_resolution_clock::now() ;
diff+= duration_cast<nanoseconds>( t2 - t1 ).count();
t1 = t2;
}
printf ("high_resolution_clock:: Elasped time is %lf nanoseconds.\n", diff/10 );
}
void f5()
{
timespec ts1,ts2;
double diff = 0;
clock_gettime(CLOCK_REALTIME, &ts1);
for(int i = 1; i <= 10 ; i++)
{
clock_gettime(CLOCK_REALTIME, &ts2);
diff+= double( ts2.tv_sec - ts1.tv_sec ) * 1e9 + double( ts2.tv_nsec - ts1.tv_nsec );
ts1 = ts2;
}
printf ("clock_gettime:: Elapsed time is %lf nanoseconds.\n", diff/10 );
}
void f6()
{
struct timeval t1,t2;
double diff = 0;
gettimeofday(&t1, 0);
for(int i = 1; i <= 10 ; i++)
{
gettimeofday(&t2, 0);
diff+= ( double(t2.tv_sec - t1.tv_sec) * 1e6 + double(t2.tv_usec - t1.tv_usec) ) * 1000;
t1 = t2;
}
printf ("gettimeofday:: Elapsed time is %lf nanoseconds.\n", diff/10 );
}
int main()
{
// f1();
// f2();
// f3();
f6();
f4();
f5();
return 0;
}
C++ std::chrono has a clear benefit of being cross-platform.
However, it also introduces a significant overhead compared to POSIX clock_gettime().
On my Linux box all std::chrono::xxx_clock::now() flavors perform roughly the same:
std::chrono::system_clock::now()
std::chrono::steady_clock::now()
std::chrono::high_resolution_clock::now()
POSIX clock_gettime(CLOCK_MONOTONIC, &time) should be equivalent to steady_clock::now(), yet it is more than 3x faster!
Here is my test, for completeness.
#include <stdio.h>
#include <chrono>
#include <ctime>
void print_timediff(const char* prefix, const struct timespec& start, const
struct timespec& end)
{
double milliseconds = end.tv_nsec >= start.tv_nsec
? (end.tv_nsec - start.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec) * 1e3
: (start.tv_nsec - end.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec - 1) * 1e3;
printf("%s: %lf milliseconds\n", prefix, milliseconds);
}
int main()
{
int i, n = 1000000;
struct timespec start, end;
// Test stopwatch
clock_gettime(CLOCK_MONOTONIC, &start);
for (i = 0; i < n; ++i) {
struct timespec dummy;
clock_gettime(CLOCK_MONOTONIC, &dummy);
}
clock_gettime(CLOCK_MONOTONIC, &end);
print_timediff("clock_gettime", start, end);
// Test chrono system_clock
clock_gettime(CLOCK_MONOTONIC, &start);
for (i = 0; i < n; ++i)
auto dummy = std::chrono::system_clock::now();
clock_gettime(CLOCK_MONOTONIC, &end);
print_timediff("chrono::system_clock::now", start, end);
// Test chrono steady_clock
clock_gettime(CLOCK_MONOTONIC, &start);
for (i = 0; i < n; ++i)
auto dummy = std::chrono::steady_clock::now();
clock_gettime(CLOCK_MONOTONIC, &end);
print_timediff("chrono::steady_clock::now", start, end);
// Test chrono high_resolution_clock
clock_gettime(CLOCK_MONOTONIC, &start);
for (i = 0; i < n; ++i)
auto dummy = std::chrono::high_resolution_clock::now();
clock_gettime(CLOCK_MONOTONIC, &end);
print_timediff("chrono::high_resolution_clock::now", start, end);
return 0;
}
And this is the output I get when compiled with gcc7.2 -O3:
clock_gettime: 24.484926 milliseconds
chrono::system_clock::now: 85.142108 milliseconds
chrono::steady_clock::now: 87.295347 milliseconds
chrono::high_resolution_clock::now: 84.437838 milliseconds
The time(NULL) function call will return the number of seconds elapsed since the epoch: January 1, 1970. Perhaps what you mean to do is take the difference between two timestamps:
time_t start = time(NULL);
doSomthing();
doSomthingLong();
printf ("**MyProgram::time elapsed= %lds\n", (long)(time(NULL) - start));
On Linux, clock_gettime() is one of the good choices.
You must link the real-time library (-lrt) on older glibc versions.
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <time.h>
#define BILLION 1000000000.0
int main( int argc, char **argv )
{
struct timespec start, stop;
double accum;
if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
perror( "clock gettime" );
exit( EXIT_FAILURE );
}
system( argv[1] );
if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
perror( "clock gettime" );
exit( EXIT_FAILURE );
}
accum = ( stop.tv_sec - start.tv_sec )
+ ( stop.tv_nsec - start.tv_nsec )
/ BILLION;
printf( "%lf\n", accum );
return( EXIT_SUCCESS );
}
As others have already noted, the time() function in the C standard library does not have a resolution better than one second. The only fully portable C function that may provide better resolution appears to be clock(), but that measures processor time rather than wallclock time. If one is content to limit oneself to POSIX platforms (e.g. Linux), then the clock_gettime() function is a good choice.
Since C++11, there are much better timing facilities available that offer better resolution in a form that should be very portable across different compilers and operating systems. Similarly, the boost::datetime library provides good high-resolution timing classes that should be highly portable.
One challenge in using any of these facilities is the time-delay introduced by querying the system clock. From experimenting with clock_gettime(), boost::datetime and std::chrono, this delay can easily be a matter of microseconds. So, when measuring the duration of any part of your code, you need to allow for there being a measurement error of around this size, or try to correct for that zero-error in some way. Ideally, you may well want to gather multiple measurements of the time taken by your function, and compute the average, or maximum/minimum time taken across many runs.
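As a rough illustration, here is a minimal, library-free sketch that estimates that zero-error by timing two immediately consecutive queries of std::chrono::steady_clock and averaging over many runs:
#include <chrono>
#include <iostream>
int main()
{
    using clock = std::chrono::steady_clock;
    const int runs = 100000;
    std::chrono::nanoseconds total{0};
    for (int i = 0; i < runs; ++i) {
        auto t0 = clock::now();
        auto t1 = clock::now();   // immediately consecutive query
        total += std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
    }
    std::cout << "average clock-query overhead: " << total.count() / runs << " ns\n";
    return 0;
}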
To help with all these portability and statistics-gathering issues, I've been developing the cxx-rtimers library available on Github which tries to provide a simple API for timing blocks of C++ code, computing zero errors, and reporting stats from multiple timers embedded in your code. If you have a C++11 compiler, you simply #include <rtimers/cxx11.hpp>, and use something like:
void expensiveFunction() {
static rtimers::cxx11::DefaultTimer timer("expensiveFunc");
auto scopedStartStop = timer.scopedStart();
// Do something costly...
}
On program exit, you'll get a summary of timing stats written to std::cerr such as:
Timer(expensiveFunc): <t> = 6.65289us, std = 3.91685us, 3.842us <= t <= 63.257us (n=731)
which shows the mean time, its standard-deviation, the upper and lower limits, and the number of times this function was called.
If you want to use Linux-specific timing functions, you can #include <rtimers/posix.hpp>, or if you have the Boost libraries but an older C++ compiler, you can #include <rtimers/boost.hpp>. There are also versions of these timer classes that can gather statistical timing information from across multiple threads. There are also methods that allow you to estimate the zero-error associated with two immediately consecutive queries of the system clock.
Internally the function will access the system's clock, which is why it returns different values each time you call it. In general with non-functional languages there can be many side effects and hidden state in functions which you can't see just by looking at the function's name and arguments.
From what I see, tv_sec stores the elapsed seconds while tv_usec stores the remaining microseconds separately; one is not a conversion of the other. Hence, they must be converted to a common unit and added to get the total time elapsed.
struct timeval startTV, endTV;
gettimeofday(&startTV, NULL);
doSomething();
doSomethingLong();
gettimeofday(&endTV, NULL);
printf("**time taken in microseconds = %ld\n",
(endTV.tv_sec * 1e6 + endTV.tv_usec - (startTV.tv_sec * 1e6 + startTV.tv_usec))
);
I needed to measure the execution time of individual functions within a library. I didn't want to wrap every call of every function with a time-measuring function, because it's ugly and deepens the call stack. I also didn't want to put timer code at the top and bottom of every function, because it makes a mess when the function can exit early or throw exceptions, for example. So what I ended up doing was making a timer that uses its own lifetime to measure time.
In this way I can measure the wall-time a block of code took by just instantiating one of these objects at the beginning of the code block in question (function or any scope really) and then allowing the instance's destructor to measure the time elapsed since construction when the instance goes out of scope. You can find the full example here, but the struct is extremely simple:
template <typename clock_t = std::chrono::steady_clock>
struct scoped_timer {
using duration_t = typename clock_t::duration;
const std::function<void(const duration_t&)> callback;
const std::chrono::time_point<clock_t> start;
scoped_timer(const std::function<void(const duration_t&)>& finished_callback) :
callback(finished_callback), start(clock_t::now()) { }
scoped_timer(std::function<void(const duration_t&)>&& finished_callback) :
callback(std::move(finished_callback)), start(clock_t::now()) { }
~scoped_timer() { callback(clock_t::now() - start); }
};
The struct will call you back via the provided functor when it goes out of scope, so you can do something with the timing information (print it, store it, or whatever). If you need to do something even more complex, you could even use std::bind with std::placeholders to call back into functions that take more arguments.
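For example, a hedged sketch of that std::bind idea, assuming the scoped_timer above; the report() helper and its label argument are hypothetical:
#include <functional>
#include <iostream>
#include <string>
// hypothetical helper that wants an extra argument besides the duration
void report(const std::string& label, const scoped_timer<>::duration_t& elapsed)
{
    std::cout << label << " took " << elapsed.count() << " clock ticks\n";
}
void timed_work()
{
    // bind the label now; the timer supplies the duration through _1 at destruction
    scoped_timer<> t(std::bind(&report, "timed_work", std::placeholders::_1));
    // ... work to measure ...
}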
Here's a quick example of using it:
void test(bool should_throw) {
scoped_timer<> t([](const scoped_timer<>::duration_t& elapsed) {
auto e = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(elapsed).count();
std::cout << "took " << e << "ms" << std::endl;
});
std::this_thread::sleep_for(std::chrono::seconds(1));
if (should_throw)
throw nullptr;
std::this_thread::sleep_for(std::chrono::seconds(1));
}
If you want to be more deliberate, you can also use new and delete to explicitly start and stop the timer without relying on scoping to do it for you.
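A minimal sketch of that new/delete variant, using the same scoped_timer as above; the timer starts at construction and stops when it is deleted:
auto* timer = new scoped_timer<>([](const scoped_timer<>::duration_t& elapsed) {
    std::cout << "explicit stop after "
              << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
              << " ms\n";
});
// ... code to measure, possibly spanning several scopes ...
delete timer; // the destructor fires here and reports the elapsed time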
They are the same because your doSomething function happens faster than the granularity of the timer. Try:
printf ("**MyProgram::before time= %ld\n", time(NULL));
for(i = 0; i < 1000; ++i) {
doSomthing();
doSomthingLong();
}
printf ("**MyProgram::after time= %ld\n", time(NULL));
The reason both values are the same is because your long procedure doesn't take that long - less than one second. You can try just adding a long loop (for (int i = 0; i < 100000000; i++) ; ) at the end of the function to make sure this is the issue, then we can go from there...
In case the above turns out to be true, you will need to find a different system function (I understand you work on Linux, so I can't help you with the function name) to measure time more accurately. I am sure there is a function similar to GetTickCount() in Linux, you just need to find it.
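For what it's worth, a hedged sketch of a GetTickCount-style helper for Linux built on clock_gettime(CLOCK_MONOTONIC); the monotonic_ms() name is mine, not a system function:
#include <stdio.h>
#include <time.h>
/* milliseconds since some unspecified, monotonic starting point (link with -lrt on older glibc) */
static long monotonic_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}
int main(void)
{
    long before = monotonic_ms();
    /* time-consuming operation goes here */
    long after = monotonic_ms();
    printf("elapsed = %ld ms\n", after - before);
    return 0;
}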
I usually use the following:
#include <chrono>
#include <type_traits>
using perf_clock = std::conditional<
std::chrono::high_resolution_clock::is_steady,
std::chrono::high_resolution_clock,
std::chrono::steady_clock
>::type;
using floating_seconds = std::chrono::duration<double>;
template<class Func, class... Args>
floating_seconds run_test(Func&& func, Args&&... args)
{
const auto t0 = perf_clock::now();
std::forward<Func>(func)(std::forward<Args>(args)...);
return floating_seconds(perf_clock::now() - t0);
}
It's the same as what @nikos-athanasiou proposed, except that I avoid using a non-steady clock and use a floating-point number of seconds as the duration.
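A small usage sketch, assuming the perf_clock/run_test definitions above; the sleeping lambda is just a stand-in for real work:
#include <iostream>
#include <thread>
int main()
{
    auto elapsed = run_test([](int ms) {
        std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    }, 150);
    std::cout << "took " << elapsed.count() << " s\n"; // roughly 0.15
    return 0;
}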
Matlab flavored!
tic starts a stopwatch timer to measure performance. The function records the internal time at execution of the tic command. Display the elapsed time with the toc function.
#include <iostream>
#include <ctime>
#include <chrono> // for the 2s duration literal
#include <thread>
using namespace std;
clock_t START_TIMER;
clock_t tic()
{
return START_TIMER = clock();
}
void toc(clock_t start = START_TIMER)
{
cout
<< "Elapsed time: "
<< (clock() - start) / (double)CLOCKS_PER_SEC << "s"
<< endl;
}
int main()
{
tic();
this_thread::sleep_for(2s);
toc();
return 0;
}
In answer to OP's three specific questions.
"What I don't understand is why the values in the before and after are the same?"
The first question and sample code show that time() has a resolution of 1 second, so the answer has to be that the two functions execute in less than 1 second. Occasionally it will (apparently illogically) report 1 second, if the two timer marks straddle a one-second boundary.
The next example uses gettimeofday() which fills this struct
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
and the second question asks: "How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?"
My second answer is the time taken is 0 seconds and 26339 microseconds, that is 0.026339 seconds, which bears out the first example executing in less than 1 second.
The third question asks: "What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?"
My third answer is the time taken is 4 seconds and 45025 microseconds, that is 4.045025 seconds, which shows that OP has altered the tasks performed by the two functions which he previously timed.
Here's a simple class that will print the duration between the time it got in and out of scope in the specified duration unit:
#include <chrono>
#include <iostream>
#include <string>
template <typename T>
class Benchmark
{
public:
Benchmark(std::string name) : name(name), start(std::chrono::steady_clock::now()) {}
~Benchmark()
{
auto end = std::chrono::steady_clock::now();
T duration = std::chrono::duration_cast<T>(end - start);
std::cout << "Bench \"" << name << "\" took: " << duration.count() << " units" << std::endl;
}
private:
std::string name;
std::chrono::time_point<std::chrono::steady_clock> start;
};
int main()
{
Benchmark<std::chrono::nanoseconds> bench("for loop");
for(int i = 0; i < 1001000; i++){}
}
Example usage:
int main()
{
Benchmark<std::chrono::nanoseconds> bench("for loop");
for(int i = 0; i < 100000; i++){}
}
Outputs:
Bench "for loop" took: 230656 units
#include <ctime>
#include <cstdio>
#include <functional>
using namespace std;
void f() {
clock_t begin = clock();
// ...code to measure time...
clock_t end = clock();
function<double(clock_t, clock_t)> convtime = [](clock_t begin, clock_t end)
{
return double(end - begin) / CLOCKS_PER_SEC;
};
printf("Elapsed time: %.2g sec\n", convtime(begin, end));
}
Similar example to one available here, only with additional conversion function + print out.
I have created a class to automatically measure elapsed time. Please check the code (C++11) at this link: https://github.com/sonnt174/Common/blob/master/time_measure.h
Example of how to use class TimeMeasure:
void test_time_measure(std::vector<int> arr) {
TimeMeasure<chrono::microseconds> time_mea; // create time measure obj
std::sort(begin(arr), end(arr));
}
Related
C++ clock() function time.h returns unstable values [duplicate]
I want to find out how much time a certain function takes in my C++ program to execute on Linux. Afterwards, I want to make a speed comparison . I saw several time function but ended up with this from boost. Chrono: process_user_cpu_clock, captures user-CPU time spent by the current process Now, I am not clear if I use the above function, will I get the only time which CPU spent on that function? Secondly, I could not find any example of using the above function. Can any one please help me how to use the above function? P.S: Right now , I am using std::chrono::system_clock::now() to get time in seconds but this gives me different results due to different CPU load every time.
It is a very easy-to-use method in C++11. You have to use std::chrono::high_resolution_clock from <chrono> header. Use it like so: #include <chrono> /* Only needed for the sake of this example. */ #include <iostream> #include <thread> void long_operation() { /* Simulating a long, heavy operation. */ using namespace std::chrono_literals; std::this_thread::sleep_for(150ms); } int main() { using std::chrono::high_resolution_clock; using std::chrono::duration_cast; using std::chrono::duration; using std::chrono::milliseconds; auto t1 = high_resolution_clock::now(); long_operation(); auto t2 = high_resolution_clock::now(); /* Getting number of milliseconds as an integer. */ auto ms_int = duration_cast<milliseconds>(t2 - t1); /* Getting number of milliseconds as a double. */ duration<double, std::milli> ms_double = t2 - t1; std::cout << ms_int.count() << "ms\n"; std::cout << ms_double.count() << "ms\n"; return 0; } This will measure the duration of the function long_operation. Possible output: 150ms 150.068ms Working example: https://godbolt.org/z/oe5cMd
Here's a function that will measure the execution time of any function passed as argument: #include <chrono> #include <utility> typedef std::chrono::high_resolution_clock::time_point TimeVar; #define duration(a) std::chrono::duration_cast<std::chrono::nanoseconds>(a).count() #define timeNow() std::chrono::high_resolution_clock::now() template<typename F, typename... Args> double funcTime(F func, Args&&... args){ TimeVar t1=timeNow(); func(std::forward<Args>(args)...); return duration(timeNow()-t1); } Example usage: #include <iostream> #include <algorithm> typedef std::string String; //first test function doing something int countCharInString(String s, char delim){ int count=0; String::size_type pos = s.find_first_of(delim); while ((pos = s.find_first_of(delim, pos)) != String::npos){ count++;pos++; } return count; } //second test function doing the same thing in different way int countWithAlgorithm(String s, char delim){ return std::count(s.begin(),s.end(),delim); } int main(){ std::cout<<"norm: "<<funcTime(countCharInString,"precision=10",'=')<<"\n"; std::cout<<"algo: "<<funcTime(countWithAlgorithm,"precision=10",'='); return 0; } Output: norm: 15555 algo: 2976
In Scott Meyers book I found an example of universal generic lambda expression that can be used to measure function execution time. (C++14) auto timeFuncInvocation = [](auto&& func, auto&&... params) { // get time before function invocation const auto& start = std::chrono::high_resolution_clock::now(); // function invocation using perfect forwarding std::forward<decltype(func)>(func)(std::forward<decltype(params)>(params)...); // get time after function invocation const auto& stop = std::chrono::high_resolution_clock::now(); return stop - start; }; The problem is that you are measure only one execution so the results can be very differ. To get a reliable result you should measure a large number of execution. According to Andrei Alexandrescu lecture at code::dive 2015 conference - Writing Fast Code I: Measured time: tm = t + tq + tn + to where: tm - measured (observed) time t - the actual time of interest tq - time added by quantization noise tn - time added by various sources of noise to - overhead time (measuring, looping, calling functions) According to what he said later in the lecture, you should take a minimum of this large number of execution as your result. I encourage you to look at the lecture in which he explains why. Also there is a very good library from google - https://github.com/google/benchmark. This library is very simple to use and powerful. You can checkout some lectures of Chandler Carruth on youtube where he is using this library in practice. For example CppCon 2017: Chandler Carruth “Going Nowhere Faster”; Example usage: #include <iostream> #include <chrono> #include <vector> auto timeFuncInvocation = [](auto&& func, auto&&... params) { // get time before function invocation const auto& start = high_resolution_clock::now(); // function invocation using perfect forwarding for(auto i = 0; i < 100000/*largeNumber*/; ++i) { std::forward<decltype(func)>(func)(std::forward<decltype(params)>(params)...); } // get time after function invocation const auto& stop = high_resolution_clock::now(); return (stop - start)/100000/*largeNumber*/; }; void f(std::vector<int>& vec) { vec.push_back(1); } void f2(std::vector<int>& vec) { vec.emplace_back(1); } int main() { std::vector<int> vec; std::vector<int> vec2; std::cout << timeFuncInvocation(f, vec).count() << std::endl; std::cout << timeFuncInvocation(f2, vec2).count() << std::endl; std::vector<int> vec3; vec3.reserve(100000); std::vector<int> vec4; vec4.reserve(100000); std::cout << timeFuncInvocation(f, vec3).count() << std::endl; std::cout << timeFuncInvocation(f2, vec4).count() << std::endl; return 0; } EDIT: Ofcourse you always need to remember that your compiler can optimize something out or not. Tools like perf can be useful in such cases.
simple program to find a function execution time taken. #include <iostream> #include <ctime> // time_t #include <cstdio> void function() { for(long int i=0;i<1000000000;i++) { // do nothing } } int main() { time_t begin,end; // time_t is a datatype to store time values. time (&begin); // note time before execution function(); time (&end); // note time after execution double difference = difftime (end,begin); printf ("time taken for function() %.2lf seconds.\n", difference ); return 0; }
Easy way for older C++, or C: #include <time.h> // includes clock_t and CLOCKS_PER_SEC int main() { clock_t start, end; start = clock(); // ...code to measure... end = clock(); double duration_sec = double(end-start)/CLOCKS_PER_SEC; return 0; } Timing precision in seconds is 1.0/CLOCKS_PER_SEC
#include <iostream> #include <chrono> void function() { // code here; } int main() { auto t1 = std::chrono::high_resolution_clock::now(); function(); auto t2 = std::chrono::high_resolution_clock::now(); auto duration = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count(); std::cout << duration<<"/n"; return 0; } This Worked for me. Note: The high_resolution_clock is not implemented consistently across different standard library implementations, and its use should be avoided. It is often just an alias for std::chrono::steady_clock or std::chrono::system_clock, but which one it is depends on the library or configuration. When it is a system_clock, it is not monotonic (e.g., the time can go backwards). For example, for gcc's libstdc++ it is system_clock, for MSVC it is steady_clock, and for clang's libc++ it depends on configuration. Generally one should just use std::chrono::steady_clock or std::chrono::system_clock directly instead of std::chrono::high_resolution_clock: use steady_clock for duration measurements, and system_clock for wall-clock time.
Here is an excellent header only class template to measure the elapsed time of a function or any code block: #ifndef EXECUTION_TIMER_H #define EXECUTION_TIMER_H template<class Resolution = std::chrono::milliseconds> class ExecutionTimer { public: using Clock = std::conditional_t<std::chrono::high_resolution_clock::is_steady, std::chrono::high_resolution_clock, std::chrono::steady_clock>; private: const Clock::time_point mStart = Clock::now(); public: ExecutionTimer() = default; ~ExecutionTimer() { const auto end = Clock::now(); std::ostringstream strStream; strStream << "Destructor Elapsed: " << std::chrono::duration_cast<Resolution>( end - mStart ).count() << std::endl; std::cout << strStream.str() << std::endl; } inline void stop() { const auto end = Clock::now(); std::ostringstream strStream; strStream << "Stop Elapsed: " << std::chrono::duration_cast<Resolution>(end - mStart).count() << std::endl; std::cout << strStream.str() << std::endl; } }; // ExecutionTimer #endif // EXECUTION_TIMER_H Here are some uses of it: int main() { { // empty scope to display ExecutionTimer's destructor's message // displayed in milliseconds ExecutionTimer<std::chrono::milliseconds> timer; // function or code block here timer.stop(); } { // same as above ExecutionTimer<std::chrono::microseconds> timer; // code block here... timer.stop(); } { // same as above ExecutionTimer<std::chrono::nanoseconds> timer; // code block here... timer.stop(); } { // same as above ExecutionTimer<std::chrono::seconds> timer; // code block here... timer.stop(); } return 0; } Since the class is a template we can specify real easily in how we want our time to be measured & displayed. This is a very handy utility class template for doing bench marking and is very easy to use.
If you want to safe time and lines of code you can make measuring the function execution time a one line macro: a) Implement a time measuring class as already suggested above ( here is my implementation for android): class MeasureExecutionTime{ private: const std::chrono::steady_clock::time_point begin; const std::string caller; public: MeasureExecutionTime(const std::string& caller):caller(caller),begin(std::chrono::steady_clock::now()){} ~MeasureExecutionTime(){ const auto duration=std::chrono::steady_clock::now()-begin; LOGD("ExecutionTime")<<"For "<<caller<<" is "<<std::chrono::duration_cast<std::chrono::milliseconds>(duration).count()<<"ms"; } }; b) Add a convenient macro that uses the current function name as TAG (using a macro here is important, else __FUNCTION__ will evaluate to MeasureExecutionTime instead of the function you wanto to measure #ifndef MEASURE_FUNCTION_EXECUTION_TIME #define MEASURE_FUNCTION_EXECUTION_TIME const MeasureExecutionTime measureExecutionTime(__FUNCTION__); #endif c) Write your macro at the begin of the function you want to measure. Example: void DecodeMJPEGtoANativeWindowBuffer(uvc_frame_t* frame_mjpeg,const ANativeWindow_Buffer& nativeWindowBuffer){ MEASURE_FUNCTION_EXECUTION_TIME // Do some time-critical stuff } Which will result int the following output: ExecutionTime: For DecodeMJPEGtoANativeWindowBuffer is 54ms Note that this (as all other suggested solutions) will measure the time between when your function was called and when it returned, not neccesarily the time your CPU was executing the function. However, if you don't give the scheduler any change to suspend your running code by calling sleep() or similar there is no difference between.
It is a very easy to use method in C++11. We can use std::chrono::high_resolution_clock from header We can write a method to print the method execution time in a much readable form. For example, to find the all the prime numbers between 1 and 100 million, it takes approximately 1 minute and 40 seconds. So the execution time get printed as: Execution Time: 1 Minutes, 40 Seconds, 715 MicroSeconds, 715000 NanoSeconds The code is here: #include <iostream> #include <chrono> using namespace std; using namespace std::chrono; typedef high_resolution_clock Clock; typedef Clock::time_point ClockTime; void findPrime(long n, string file); void printExecutionTime(ClockTime start_time, ClockTime end_time); int main() { long n = long(1E+8); // N = 100 million ClockTime start_time = Clock::now(); // Write all the prime numbers from 1 to N to the file "prime.txt" findPrime(n, "C:\\prime.txt"); ClockTime end_time = Clock::now(); printExecutionTime(start_time, end_time); } void printExecutionTime(ClockTime start_time, ClockTime end_time) { auto execution_time_ns = duration_cast<nanoseconds>(end_time - start_time).count(); auto execution_time_ms = duration_cast<microseconds>(end_time - start_time).count(); auto execution_time_sec = duration_cast<seconds>(end_time - start_time).count(); auto execution_time_min = duration_cast<minutes>(end_time - start_time).count(); auto execution_time_hour = duration_cast<hours>(end_time - start_time).count(); cout << "\nExecution Time: "; if(execution_time_hour > 0) cout << "" << execution_time_hour << " Hours, "; if(execution_time_min > 0) cout << "" << execution_time_min % 60 << " Minutes, "; if(execution_time_sec > 0) cout << "" << execution_time_sec % 60 << " Seconds, "; if(execution_time_ms > 0) cout << "" << execution_time_ms % long(1E+3) << " MicroSeconds, "; if(execution_time_ns > 0) cout << "" << execution_time_ns % long(1E+6) << " NanoSeconds, "; }
I recommend using steady_clock which is guarunteed to be monotonic, unlike high_resolution_clock. #include <iostream> #include <chrono> using namespace std; unsigned int stopwatch() { static auto start_time = chrono::steady_clock::now(); auto end_time = chrono::steady_clock::now(); auto delta = chrono::duration_cast<chrono::microseconds>(end_time - start_time); start_time = end_time; return delta.count(); } int main() { stopwatch(); //Start stopwatch std::cout << "Hello World!\n"; cout << stopwatch() << endl; //Time to execute last line for (int i=0; i<1000000; i++) string s = "ASDFAD"; cout << stopwatch() << endl; //Time to execute for loop } Output: Hello World! 62 163514
Since none of the provided answers are very accurate or give reproducable results I decided to add a link to my code that has sub-nanosecond precision and scientific statistics. Note that this will only work to measure code that takes a (very) short time to run (aka, a few clock cycles to a few thousand): if they run so long that they are likely to be interrupted by some -heh- interrupt, then it is clearly not possible to give a reproducable and accurate result; the consequence of which is that the measurement never finishes: namely, it continues to measure until it is statistically 99.9% sure it has the right answer which never happens on a machine that has other processes running when the code takes too long. https://github.com/CarloWood/cwds/blob/master/benchmark.h#L40
You can have a simple class which can be used for this kind of measurements. class duration_printer { public: duration_printer() : __start(std::chrono::high_resolution_clock::now()) {} ~duration_printer() { using namespace std::chrono; high_resolution_clock::time_point end = high_resolution_clock::now(); duration<double> dur = duration_cast<duration<double>>(end - __start); std::cout << dur.count() << " seconds" << std::endl; } private: std::chrono::high_resolution_clock::time_point __start; }; The only thing is needed to do is to create an object in your function at the beginning of that function void veryLongExecutingFunction() { duration_calculator dc; for(int i = 0; i < 100000; ++i) std::cout << "Hello world" << std::endl; } int main() { veryLongExecutingFunction(); return 0; } and that's it. The class can be modified to fit your requirements.
C++11 cleaned up version of Jahid's response: #include <chrono> #include <thread> void long_operation(int ms) { /* Simulating a long, heavy operation. */ std::this_thread::sleep_for(std::chrono::milliseconds(ms)); } template<typename F, typename... Args> double funcTime(F func, Args&&... args){ std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now(); func(std::forward<Args>(args)...); return std::chrono::duration_cast<std::chrono::milliseconds>( std::chrono::high_resolution_clock::now()-t1).count(); } int main() { std::cout<<"expect 150: "<<funcTime(long_operation,150)<<"\n"; return 0; }
This is a very basic timer class which you can expand on depending on your needs. I wanted something straightforward which can be used cleanly in code. You can mess with it at coding ground with this link: http://tpcg.io/nd47hFqr. class local_timer { private: std::chrono::_V2::system_clock::time_point start_time; std::chrono::_V2::system_clock::time_point stop_time; std::chrono::_V2::system_clock::time_point stop_time_temp; std::chrono::microseconds most_recent_duration_usec_chrono; double most_recent_duration_sec; public: local_timer() { }; ~local_timer() { }; void start() { this->start_time = std::chrono::high_resolution_clock::now(); }; void stop() { this->stop_time = std::chrono::high_resolution_clock::now(); }; double get_time_now() { this->stop_time_temp = std::chrono::high_resolution_clock::now(); this->most_recent_duration_usec_chrono = std::chrono::duration_cast<std::chrono::microseconds>(stop_time_temp-start_time); this->most_recent_duration_sec = (long double)most_recent_duration_usec_chrono.count()/1000000; return this->most_recent_duration_sec; }; double get_duration() { this->most_recent_duration_usec_chrono = std::chrono::duration_cast<std::chrono::microseconds>(stop_time-start_time); this->most_recent_duration_sec = (long double)most_recent_duration_usec_chrono.count()/1000000; return this->most_recent_duration_sec; }; }; The use for this being #include <iostream> #include "timer.hpp" //if kept in an hpp file in the same folder, can also before your main function int main() { //create two timers local_timer timer1 = local_timer(); local_timer timer2 = local_timer(); //set start time for timer1 timer1.start(); //wait 1 second while(timer1.get_time_now() < 1.0) { } //save time timer1.stop(); //print time std::cout << timer1.get_duration() << " seconds, timer 1\n" << std::endl; timer2.start(); for(long int i = 0; i < 100000000; i++) { //do something if(i%1000000 == 0) { //return time since loop started std::cout << timer2.get_time_now() << " seconds, timer 2\n"<< std::endl; } } return 0; }
Control loop time with usleep
I try to make sure the execution time of each loop to 10ms with usleep , but sometimes it exceeds 10ms. I have no idea how to solve this problem, is it proper to use usleep and gettimeofday in this case? Please help my find out what i missed. Result: 0.0127289 0.0136499 0.0151598 0.0114031 0.014801 double tvsecf(){ struct timeval tv; double asec; gettimeofday(&tv,NULL); asec = tv.tv_usec; asec /= 1e6; asec += tv.tv_sec; return asec; } int main(){ double t1 ,t2; t1 = tvsecf(); for(;;){ t2= tvsecf(); if(t2-t1 >= 0.01){ if(t2-t1 >= 0.011) cout << t2-t1 <<endl; t1 = tvsecf(); } usleep(100); } }
To keep the loop overhead (which is generally unknown) from constantly accumulating error, you can sleep until a time point, instead of for a time duration. Using C++'s <chrono> and <thread> libraries, this is incredibly easy: #include <chrono> #include <iostream> #include <thread> int main() { using namespace std; using namespace std::chrono; auto t0 = steady_clock::now() + 10ms; for (;;) { this_thread::sleep_until(t0); t0 += 10ms; } } One can dress this up with more calls to steady_clock::now() in order to ascertain the time between iterations, and perhaps more importantly, the average iteration time: #include <chrono> #include <iostream> #include <thread> int main() { using namespace std; using namespace std::chrono; using dsec = duration<double>; auto t0 = steady_clock::now() + 10ms; auto t1 = steady_clock::now(); auto t2 = t1; constexpr auto N = 1000; dsec avg{0}; for (auto i = 0; i < N; ++i) { this_thread::sleep_until(t0); t0 += 10ms; t2 = steady_clock::now(); dsec delta = t2-t1; std::cout << delta.count() << "s\n"; avg += delta; t1 = t2; } avg /= N; cout << "avg = " << avg.count() << "s\n"; } Above I've added to the loop overhead by doing more things within the loop. However the loop is still going to wake up about every 10ms. Sometimes the OS will wake the thread late, but next time the loop automatically adjusts itself to sleep for a shorter time. Thus the average iteration rate self-corrects to 10ms. On my machine this just output: ... 0.0102046s 0.0128338s 0.00700504s 0.0116826s 0.00785826s 0.0107023s 0.00912614s 0.0104725s 0.010489s 0.0112545s 0.00906409s avg = 0.0100014s
There is no way to guarantee 10ms loop time. All sleeping functions sleeps for at least wanted time. For a portable solution use std::this_thread::sleep_for #include <iostream> #include <chrono> #include <thread> int main() { for (;;) { auto start = std::chrono::high_resolution_clock::now(); std::this_thread::sleep_for(std::chrono::milliseconds{10}); auto end = std::chrono::high_resolution_clock::now(); std::chrono::duration<double, std::milli> elapsed = end-start; std::cout << "Waited " << elapsed.count() << " ms\n"; } } Depending on what you are trying to do take a look at Howard Hinnants date library.
From the usleep man page: The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers. If you need high resolution: with C on Unix (or Linux) check out this answer that explains how to use high resolution timers using clock_gettime. Edit: As mentioned by Tobias nanosleep may be a better option: Compared to sleep(3) and usleep(3), nanosleep() has the following advantages: it provides a higher resolution for specifying the sleep interval; POSIX.1 explicitly specifies that it does not interact with signals; and it makes the task of resuming a sleep that has been interrupted by a signal handler easier.
Printing time in seconds
I am writing a program and attempting to time the number of seconds that passes when a given block of code runs. Afterwards I would like to print the total time it took to run the block of code in seconds. What I have written is: time_t start = time(0); // block of code double seconds_since_start = difftime(time(0), start); printf("seconds since start: %2.60f\n", seconds_since_start); I have printf() printing to 60 decimal precision and all of the times still come out to 0.000000... Is there an error in my time function? I find it hard to believe that the task I am asking to time would not account for any time in 60 decimal precision.
You can use the date and time utilities available in C++11: #include <chrono> #include <iostream> #include <thread> int main() { auto start = std::chrono::high_resolution_clock::now(); std::this_thread::sleep_for(std::chrono::seconds(5)); auto end = std::chrono::high_resolution_clock::now(); auto difference = std::chrono::duration_cast<std::chrono::seconds>(end - start).count(); std::cout << "Seconds since start: " << difference; }
The return value from time is an integral number of seconds. Casting to a double won't bring back the fractional seconds that have been lost. You need a more precise clock function, such as gettimeofday (if you want wall-clock time) or times (if you want CPU time). On Windows, there's timeGetTime, QueryPerformanceCounter (which Castiblanco demonstrates), or GetSystemTimeAsFileTime. C++ finally got some standard high-resolution clock functions with C++11's <chrono> header, suggested by chris in the comments.
Actually I prefer to do it with milliseconds, because there are tons of function that can return 0 if you use just seconds, for this reason It's better to use milliseconds. #include <time.h> double performancecounter_diff(LARGE_INTEGER *a, LARGE_INTEGER *b){ LARGE_INTEGER freq; QueryPerformanceFrequency(&freq); return (double)(a->QuadPart - b->QuadPart) / (double)freq.QuadPart; } int main() { LARGE_INTEGER t_inicio, t_final; double sec; QueryPerformanceCounter(&t_inicio); // code here, the code that you need to knos the time. QueryPerformanceCounter(&t_final); sec = performancecounter_diff(&t_final, &t_inicio); printf("%.16g millisegudos\n", sec * 1000.0);*/ } return 0; }
you can use boost::timer template<typename T> double sortTime(std::vector<T>& v, typename sort_struct<T>::func_sort f){ boost::timer t; // start timing f(v); return t.elapsed(); }
Something like that should work: #include <stdio.h> #include <stdlib.h> #include <time.h> int main() { clock_t begin, end; double time_spent; begin = clock(); //Do stuff end = clock(); time_spent = (double)(end - begin) / CLOCKS_PER_SEC; printf("%Lf\n",time_spent); }
Timing the execution of statements - C++ [duplicate]
This question already has answers here: Closed 10 years ago. Possible Duplicate: How to Calculate Execution Time of a Code Snippet in C++ How can I get the time spent by a particular set of statements in some C++ code? Something like the time utility under Linux but only for some particular statements.
You can use the <chrono> header in the standard library: #include <chrono> #include <iostream> unsigned long long fib(unsigned long long n) { return (0==n || 1==n) ? 1 : fib(n-1) + fib(n-2); } int main() { unsigned long long n = 0; while (true) { auto start = std::chrono::high_resolution_clock::now(); fib(++n); auto finish = std::chrono::high_resolution_clock::now(); auto microseconds = std::chrono::duration_cast<std::chrono::microseconds>(finish-start); std::cout << microseconds.count() << "µs\n"; if (microseconds > std::chrono::seconds(1)) break; } }
You need to measure the time yourself. The little stopwatch class I'm usually using looks like this: #include <chrono> #include <iostream> template <typename Clock = std::chrono::steady_clock> class stopwatch { typename Clock::time_point last_; public: stopwatch() : last_(Clock::now()) {} void reset() { *this = stopwatch(); } typename Clock::duration elapsed() const { return Clock::now() - last_; } typename Clock::duration tick() { auto now = Clock::now(); auto elapsed = now - last_; last_ = now; return elapsed; } }; template <typename T, typename Rep, typename Period> T duration_cast(const std::chrono::duration<Rep, Period>& duration) { return duration.count() * static_cast<T>(Period::num) / static_cast<T>(Period::den); } int main() { stopwatch<> sw; // ... std::cout << "Elapsed: " << duration_cast<double>(sw.elapsed()) << '\n'; } duration_cast may not be an optimal name for the function, since a function with this name already exists in the standard library. Feel free to come up with a better one. ;) Edit: Note that chrono is from C++11.
std::chrono or boost::chrono(in case that your compiler does not support C++11) can be used for this. std::chrono::high_resolution_clock::time_point start( std::chrono::high_resolution_clock::now() ); .... std::cout << (std::chrono::high_resolution_clock::now() - start);
You need to write a simple timing system. There is no built-in way in c++. #include <sys/time.h> class Timer { private: struct timeval start_t; public: double start() { gettimeofday(&start_t, NULL); } double get_ms() { struct timeval now; gettimeofday(&now, NULL); return (now.tv_usec-start_t.tv_usec)/(double)1000.0 + (now.tv_sec-start_t.tv_sec)*(double)1000.0; } double get_ms_reset() { double res = get_ms(); reset(); return res; } Timer() { start(); } }; int main() { Timer t(); double used_ms; // run slow code.. used_ms = t.get_ms_reset(); // run slow code.. used_ms += t.get_ms_reset(); return 0; } Note that the measurement itself can affect the runtime significantly.
Possible Duplicate: How to Calculate Execution Time of a Code Snippet in C++ You can use the time.h C standard library ( explained in more detail at http://www.cplusplus.com/reference/clibrary/ctime/ ). The following program does what you want: #include <iostream> #include <time.h> using namespace std; int main() { clock_t t1,t2; t1=clock(); //code goes here t2=clock(); float diff = ((float)t2-(float)t1)/CLOCKS_PER_SEC; cout << "Running time: " << diff << endl; return 0; } You can also do this: int start_s=clock(); // the code you wish to time goes here int stop_s=clock(); cout << "time: " << (stop_s-start_s)/double(CLOCKS_PER_SEC)*1000 << endl;
If you are using GNU gcc/g++: Try recompiling with --coverage, rerun the program and analyse the resulting files with the gprof utility. It will also print execution times of functions. Edit: Compile and link with -pg, not with --coverage, --coverage is for gcov (which lines are actually executed).
Here's very fine snippet of code, that works well on windows and linux: https://stackoverflow.com/a/1861337/1483826 To use it, run it and save the result as "start time" and after the action - "end time". Subtract and divide to whatever accuracy you need.
You can use #inclide <ctime> header. It's functions and their uses are here. Suppose you want to watch how much time a code spends. You have to take a current time just before start of that part and another current time just after ending of that part. Then take the difference of these two times. Readymade functions are declared within ctime to do all these works. Just checkout the above link.
Easily measure elapsed time
I am trying to use time() to measure various points of my program. What I don't understand is why the values in the before and after are the same? I understand this is not the best way to profile my program, I just want to see how long something take. printf("**MyProgram::before time= %ld\n", time(NULL)); doSomthing(); doSomthingLong(); printf("**MyProgram::after time= %ld\n", time(NULL)); I have tried: struct timeval diff, startTV, endTV; gettimeofday(&startTV, NULL); doSomething(); doSomethingLong(); gettimeofday(&endTV, NULL); timersub(&endTV, &startTV, &diff); printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec); How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec? What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?
//***C++11 Style:*** #include <chrono> std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now(); std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now(); std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count() << "[µs]" << std::endl; std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::nanoseconds> (end - begin).count() << "[ns]" << std::endl;
0 - Delta Use a delta function to compute time differences: auto start = std::chrono::steady_clock::now(); std::cout << "Elapsed(ms)=" << since(start).count() << std::endl; since accepts any timepoint and produces any duration (milliseconds is the default). It is defined as: template < class result_t = std::chrono::milliseconds, class clock_t = std::chrono::steady_clock, class duration_t = std::chrono::milliseconds > auto since(std::chrono::time_point<clock_t, duration_t> const& start) { return std::chrono::duration_cast<result_t>(clock_t::now() - start); } Demo 1 - Timer Use a timer based on std::chrono: Timer clock; // Timer<milliseconds, steady_clock> clock.tick(); /* code you want to measure */ clock.tock(); cout << "Run time = " << clock.duration().count() << " ms\n"; Demo Timer is defined as: template <class DT = std::chrono::milliseconds, class ClockT = std::chrono::steady_clock> class Timer { using timep_t = typename ClockT::time_point; timep_t _start = ClockT::now(), _end = {}; public: void tick() { _end = timep_t{}; _start = ClockT::now(); } void tock() { _end = ClockT::now(); } template <class T = DT> auto duration() const { gsl_Expects(_end != timep_t{} && "toc before reporting"); return std::chrono::duration_cast<T>(_end - _start); } }; As Howard Hinnant pointed out, we use a duration to remain in the chrono type-system and perform operations like averaging or comparisons (e.g. here this means using std::chrono::milliseconds). When we just do IO, we use the count() or ticks of a duration (e.g. here number of milliseconds). 2 - Instrumentation Any callable (function, function object, lambda etc.) can be instrumented for benchmarking. Say you have a function F invokable with arguments arg1,arg2, this technique results in: cout << "F runtime=" << measure<>::duration(F, arg1, arg2).count() << "ms"; Demo measure is defined as: template <class TimeT = std::chrono::milliseconds class ClockT = std::chrono::steady_clock> struct measure { template<class F, class ...Args> static auto duration(F&& func, Args&&... args) { auto start = ClockT::now(); std::invoke(std::forward<F>(func), std::forward<Args>(args)...); return std::chrono::duration_cast<TimeT>(ClockT::now()-start); } }; As mentioned in (1), using the duration w/o .count() is most useful for clients that want to post-process a bunch of durations prior to I/O, e.g. average: auto avg = (measure<>::duration(func) + measure<>::duration(func)) / 2; std::cout << "Average run time " << avg.count() << " ms\n"; +This is why the forwarded function call. +The complete code can be found here +My attempt to build a benchmarking framework based on chrono is recorded here +Old demo
#include <ctime> void f() { using namespace std; clock_t begin = clock(); code_to_time(); clock_t end = clock(); double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC; } The time() function is only accurate to within a second, but there are CLOCKS_PER_SEC "clocks" within a second. This is an easy, portable measurement, even though it's over-simplified.
As I can see from your question, it looks like you want to know the elapsed time after execution of some piece of code. I guess you would be comfortable to see the results in second(s). If so, try using difftime() function as shown below. Hope this solves your problem. #include <time.h> #include <stdio.h> time_t start,end; time (&start); . . . <your code> . . . time (&end); double dif = difftime (end,start); printf ("Elasped time is %.2lf seconds.", dif );
Windows only: (The Linux tag was added after I posted this answer)

You can use GetTickCount() to get the number of milliseconds that have elapsed since the system was started.

DWORD before = GetTickCount();
// Perform time-consuming operation
DWORD after = GetTickCount();
// after - before is the elapsed time in milliseconds
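One caveat: GetTickCount() returns a 32-bit millisecond count that wraps around after roughly 49.7 days; on Vista and later, GetTickCount64() avoids that. A minimal sketch along the same lines:

#include <windows.h>
#include <cstdio>

int main()
{
    ULONGLONG before = GetTickCount64();
    // Perform time-consuming operation here
    ULONGLONG after = GetTickCount64();
    printf("Elapsed: %llu ms\n", after - before);
}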
#include <chrono>
#include <iostream>
#include <string>

struct profiler
{
    std::string name;
    std::chrono::high_resolution_clock::time_point p;

    profiler(std::string const &n) :
        name(n), p(std::chrono::high_resolution_clock::now()) { }

    ~profiler()
    {
        using dura = std::chrono::duration<double>;
        auto d = std::chrono::high_resolution_clock::now() - p;
        std::cout << name << ": "
                  << std::chrono::duration_cast<dura>(d).count()
                  << std::endl;
    }
};

#define PROFILE_BLOCK(pbn) profiler _pfinstance(pbn)

Usage is below:

{
    PROFILE_BLOCK("Some time");
    // your code or function
}

This works RAII-style: the timer is tied to the enclosing scope. NOTE this is not mine, but I thought it was relevant here.
time(NULL) returns the number of seconds elapsed since 01/01/1970 at 00:00 (the Epoch). So the difference between the two values is the number of seconds your processing took.

int t0 = time(NULL);
doSomthing();
doSomthingLong();
int t1 = time(NULL);

printf("time = %d secs\n", t1 - t0);

You can get finer results with gettimeofday(), which returns the current time in seconds (as time() does) and also in microseconds.
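For illustration, a minimal gettimeofday() sketch of that finer measurement (the timed code is just a placeholder comment):

#include <sys/time.h>
#include <stdio.h>

int main()
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    // ... code being timed ...
    gettimeofday(&t1, NULL);

    long elapsed_us = (t1.tv_sec - t0.tv_sec) * 1000000L
                    + (t1.tv_usec - t0.tv_usec);
    printf("time = %ld microseconds\n", elapsed_us);
    return 0;
}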
The time(NULL) call returns the number of seconds elapsed since 01/01/1970 at 00:00. Because it is called at different points in your program, its return value will differ between calls, but only with one-second resolution. See also: Time in C++
#include <time.h>    // for clock
#include <math.h>    // for fmod
#include <cstdlib>   // for system
#include <stdio.h>
#include <iostream>  // for cout

using namespace std;

int main()
{
    clock_t t1, t2;
    t1 = clock(); // first time capture

    // Now your time-spanning loop or code goes here.
    // Here we display the time elapsed on every pass of the loop.
    int ddays = 0; // the d prefix just means "for display"
    int dhh = 0;
    int dmm = 0;
    int dss = 0;

    int loopcount = 1000; // just for the demo; your loop will be different of course

    for (float count = 1; count < loopcount; count++)
    {
        t2 = clock(); // we get the time now
        float difference = ((float)t2) - ((float)t1); // elapsed clock ticks since t1
        // convert ticks to seconds (the original divided by 1000, which only works when CLOCKS_PER_SEC is 1000)
        float seconds = difference / CLOCKS_PER_SEC;

        if (seconds < (60 * 60 * 24)) // a day is not over
        {
            dss = fmod(seconds, 60);      // the remainder is seconds to be displayed
            float minutes = seconds / 60; // the total minutes as float
            dmm = fmod(minutes, 60);      // the remainder is minutes to be displayed
            float hours = minutes / 60;   // the total hours as float
            dhh = hours;                  // the hours to be displayed
            ddays = 0;
        }
        else // we have reached the counting of days
        {
            float days = seconds / (24 * 60 * 60);
            ddays = (int)(days);
            float minutes = seconds / 60;
            dmm = fmod(minutes, 60);
            float hours = minutes / 60;
            dhh = fmod(hours, 24);
        }

        cout << "Count is: " << count << " Time elapsed: " << ddays << " days "
             << dhh << " hrs " << dmm << " mins " << dss << " secs";

        // the actual working code goes here; I have just put a delay and a screen clear
        delay(1000);   // non-standard, e.g. from <dos.h> on some compilers
        system("cls"); // Windows-specific
    } // end for loop
} // end of main
The values printed by your second program are seconds, and microseconds.

0 26339 = 0.026'339 s = 26339 µs
4 45025 = 4.045'025 s = 4045025 µs
#include <ctime>
#include <cstdio>
#include <iostream>
#include <chrono>
#include <sys/time.h>

using namespace std;
using namespace std::chrono;

void f1()
{
    high_resolution_clock::time_point t1 = high_resolution_clock::now();
    high_resolution_clock::time_point t2 = high_resolution_clock::now();
    double dif = duration_cast<nanoseconds>(t2 - t1).count();
    printf("Elapsed time is %lf nanoseconds.\n", dif);
}

void f2()
{
    timespec ts1, ts2;
    clock_gettime(CLOCK_REALTIME, &ts1);
    clock_gettime(CLOCK_REALTIME, &ts2);
    // Note: only tv_nsec is compared, so this assumes both calls land in the same second.
    double dif = double(ts2.tv_nsec - ts1.tv_nsec);
    printf("Elapsed time is %lf nanoseconds.\n", dif);
}

void f3()
{
    struct timeval t1, t0;
    gettimeofday(&t0, 0);
    gettimeofday(&t1, 0);
    // Note: only tv_usec is compared; multiplied by 1000 to report nanoseconds.
    double dif = double((t1.tv_usec - t0.tv_usec) * 1000);
    printf("Elapsed time is %lf nanoseconds.\n", dif);
}

void f4()
{
    high_resolution_clock::time_point t1, t2;
    double diff = 0;
    t1 = high_resolution_clock::now();
    for (int i = 1; i <= 10; i++)
    {
        t2 = high_resolution_clock::now();
        diff += duration_cast<nanoseconds>(t2 - t1).count();
        t1 = t2;
    }
    printf("high_resolution_clock:: Elapsed time is %lf nanoseconds.\n", diff / 10);
}

void f5()
{
    timespec ts1, ts2;
    double diff = 0;
    clock_gettime(CLOCK_REALTIME, &ts1);
    for (int i = 1; i <= 10; i++)
    {
        clock_gettime(CLOCK_REALTIME, &ts2);
        diff += double(ts2.tv_nsec - ts1.tv_nsec);
        ts1 = ts2;
    }
    printf("clock_gettime:: Elapsed time is %lf nanoseconds.\n", diff / 10);
}

void f6()
{
    struct timeval t1, t2;
    double diff = 0;
    gettimeofday(&t1, 0);
    for (int i = 1; i <= 10; i++)
    {
        gettimeofday(&t2, 0);
        diff += double((t2.tv_usec - t1.tv_usec) * 1000);
        t1 = t2;
    }
    printf("gettimeofday:: Elapsed time is %lf nanoseconds.\n", diff / 10);
}

int main()
{
    // f1();
    // f2();
    // f3();
    f6();
    f4();
    f5();
    return 0;
}
C++ std::chrono has the clear benefit of being cross-platform. However, it also introduces significant overhead compared to POSIX clock_gettime(). On my Linux box all the std::chrono::xxx_clock::now() flavors perform roughly the same:

std::chrono::system_clock::now()
std::chrono::steady_clock::now()
std::chrono::high_resolution_clock::now()

Although POSIX clock_gettime(CLOCK_MONOTONIC, &time) should be equivalent to steady_clock::now(), it is more than 3x faster!

Here is my test, for completeness.

#include <stdio.h>
#include <chrono>
#include <ctime>

void print_timediff(const char* prefix, const struct timespec& start, const struct timespec& end)
{
    double milliseconds = end.tv_nsec >= start.tv_nsec
        ? (end.tv_nsec - start.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec) * 1e3
        : (start.tv_nsec - end.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec - 1) * 1e3;
    printf("%s: %lf milliseconds\n", prefix, milliseconds);
}

int main()
{
    int i, n = 1000000;
    struct timespec start, end;

    // Test stopwatch
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i) {
        struct timespec dummy;
        clock_gettime(CLOCK_MONOTONIC, &dummy);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("clock_gettime", start, end);

    // Test chrono system_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i) {
        auto dummy = std::chrono::system_clock::now();
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::system_clock::now", start, end);

    // Test chrono steady_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i) {
        auto dummy = std::chrono::steady_clock::now();
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::steady_clock::now", start, end);

    // Test chrono high_resolution_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i) {
        auto dummy = std::chrono::high_resolution_clock::now();
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::high_resolution_clock::now", start, end);

    return 0;
}

And this is the output I get when compiled with gcc 7.2 -O3:

clock_gettime: 24.484926 milliseconds
chrono::system_clock::now: 85.142108 milliseconds
chrono::steady_clock::now: 87.295347 milliseconds
chrono::high_resolution_clock::now: 84.437838 milliseconds
The time(NULL) function call will return the number of seconds elapsed since the epoch: January 1, 1970. Perhaps what you mean to do is take the difference between two timestamps:

size_t start = time(NULL);
doSomthing();
doSomthingLong();
printf("**MyProgram::time elapsed= %lds\n", time(NULL) - start);
On Linux, clock_gettime() is one of the good choices. You must link with the real-time library (-lrt).

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <time.h>

#define BILLION 1000000000.0  /* floating-point so the nanosecond division keeps its fraction */

int main( int argc, char **argv )
{
    struct timespec start, stop;
    double accum;

    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
        perror( "clock gettime" );
        exit( EXIT_FAILURE );
    }

    system( argv[1] );

    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
        perror( "clock gettime" );
        exit( EXIT_FAILURE );
    }

    accum = ( stop.tv_sec - start.tv_sec )
          + ( stop.tv_nsec - start.tv_nsec ) / BILLION;
    printf( "%lf\n", accum );
    return( EXIT_SUCCESS );
}
As others have already noted, the time() function in the C standard library does not have a resolution better than one second. The only fully portable C function that may provide better resolution appears to be clock(), but that measures processor time rather than wallclock time. If one is content to limit oneself to POSIX platforms (e.g. Linux), then the clock_gettime() function is a good choice.

Since C++11, there are much better timing facilities available that offer better resolution in a form that should be very portable across different compilers and operating systems. Similarly, the boost::datetime library provides good high-resolution timing classes that should be highly portable.

One challenge in using any of these facilities is the time-delay introduced by querying the system clock. From experimenting with clock_gettime(), boost::datetime and std::chrono, this delay can easily be a matter of microseconds. So, when measuring the duration of any part of your code, you need to allow for there being a measurement error of around this size, or try to correct for that zero-error in some way. Ideally, you may well want to gather multiple measurements of the time taken by your function, and compute the average, or maximum/minimum time taken across many runs.

To help with all these portability and statistics-gathering issues, I've been developing the cxx-rtimers library available on Github, which tries to provide a simple API for timing blocks of C++ code, computing zero errors, and reporting stats from multiple timers embedded in your code. If you have a C++11 compiler, you simply #include <rtimers/cxx11.hpp>, and use something like:

void expensiveFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensiveFunc");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}

On program exit, you'll get a summary of timing stats written to std::cerr such as:

Timer(expensiveFunc): <t> = 6.65289us, std = 3.91685us, 3.842us <= t <= 63.257us (n=731)

which shows the mean time, its standard deviation, the upper and lower limits, and the number of times this function was called.

If you want to use Linux-specific timing functions, you can #include <rtimers/posix.hpp>, or if you have the Boost libraries but an older C++ compiler, you can #include <rtimers/boost.hpp>. There are also versions of these timer classes that can gather statistical timing information from across multiple threads. There are also methods that allow you to estimate the zero-error associated with two immediately consecutive queries of the system clock.
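If you do not want to pull in a library, a minimal sketch of the averaging and zero-error idea with plain std::chrono might look like this (measureAvg and its parameters are made-up names for illustration, not part of any library):

#include <chrono>
#include <cstdio>

// Hypothetical helper: run f `runs` times and return the average wall time in microseconds,
// with the cost of a pair of clock queries subtracted as a rough zero-error estimate.
template <class F>
double measureAvg(F f, int runs = 100)
{
    using clock = std::chrono::steady_clock;

    auto z0 = clock::now();
    auto z1 = clock::now();
    double zero_us = std::chrono::duration<double, std::micro>(z1 - z0).count();

    double total_us = 0.0;
    for (int i = 0; i < runs; ++i) {
        auto t0 = clock::now();
        f();
        auto t1 = clock::now();
        total_us += std::chrono::duration<double, std::micro>(t1 - t0).count();
    }
    return total_us / runs - zero_us;
}

int main()
{
    double avg = measureAvg([]{ volatile int s = 0; for (int i = 0; i < 1000; ++i) s += i; });
    std::printf("average: %.3f us\n", avg);
}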
Internally the function will access the system's clock, which is why it returns different values each time you call it. In general with non-functional languages there can be many side effects and hidden state in functions which you can't see just by looking at the function's name and arguments.
From what I see, tv_sec stores the elapsed seconds while tv_usec separately stores the elapsed microseconds. They are not conversions of each other, so they must be converted to a common unit and added to get the total time elapsed.

struct timeval startTV, endTV;

gettimeofday(&startTV, NULL);

doSomething();
doSomethingLong();

gettimeofday(&endTV, NULL);

// use integer arithmetic so the %ld format matches the argument type
long elapsed_us = (endTV.tv_sec - startTV.tv_sec) * 1000000L
                + (endTV.tv_usec - startTV.tv_usec);
printf("**time taken in microseconds = %ld\n", elapsed_us);
I needed to measure the execution time of individual functions within a library. I didn't want to have to wrap every call of every function with a time-measuring function, because it's ugly and deepens the call stack. I also didn't want to put timer code at the top and bottom of every function, because it makes a mess when the function can exit early or throw exceptions, for example. So what I ended up doing was making a timer that uses its own lifetime to measure time.

In this way I can measure the wall time a block of code took by just instantiating one of these objects at the beginning of the code block in question (function or any scope, really) and then letting the instance's destructor measure the time elapsed since construction when it goes out of scope. You can find the full example here, but the struct is extremely simple:

#include <chrono>
#include <functional>

template <typename clock_t = std::chrono::steady_clock>
struct scoped_timer {
    using duration_t = typename clock_t::duration;
    const std::function<void(const duration_t&)> callback;
    const std::chrono::time_point<clock_t> start;

    scoped_timer(const std::function<void(const duration_t&)>& finished_callback) :
            callback(finished_callback), start(clock_t::now()) { }
    scoped_timer(std::function<void(const duration_t&)>&& finished_callback) :
            callback(finished_callback), start(clock_t::now()) { }
    ~scoped_timer() { callback(clock_t::now() - start); }
};

The struct will call you back on the provided functor when it goes out of scope, so you can do something with the timing information (print it, store it, or whatever). If you need to do something even more complex you could even use std::bind with std::placeholders to call back functions with more arguments.

Here's a quick example of using it:

void test(bool should_throw) {
    scoped_timer<> t([](const scoped_timer<>::duration_t& elapsed) {
        auto e = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(elapsed).count();
        std::cout << "took " << e << "ms" << std::endl;
    });

    std::this_thread::sleep_for(std::chrono::seconds(1));

    if (should_throw)
        throw nullptr;

    std::this_thread::sleep_for(std::chrono::seconds(1));
}

If you want to be more deliberate, you can also use new and delete to explicitly start and stop the timer without relying on scoping to do it for you.
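A minimal sketch of that last point, reusing the scoped_timer struct above (the function and variable names are just for the example):

#include <chrono>
#include <iostream>

void explicit_start_stop() {
    auto* timer = new scoped_timer<>([](const scoped_timer<>::duration_t& elapsed) {
        auto ms = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(elapsed).count();
        std::cout << "took " << ms << "ms" << std::endl;
    });

    // ... work to be measured ...

    delete timer; // the destructor runs here and reports the elapsed time
}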
They are the same because your doSomething function happens faster than the granularity of the timer. Try:

printf("**MyProgram::before time= %ld\n", time(NULL));

for (int i = 0; i < 1000; ++i) {
    doSomthing();
    doSomthingLong();
}

printf("**MyProgram::after time= %ld\n", time(NULL));
The reason both values are the same is that your long procedure doesn't take that long: less than one second. You can try just adding a long loop (for (int i = 0; i < 100000000; i++) ; ) at the end of the function to make sure this is the issue, then we can go from there.

In case the above turns out to be true, you will need to find a different system function (I understand you work on Linux, so I can't help you with the function name) to measure time more accurately. I am sure there is a function similar to GetTickCount() in Linux, you just need to find it.
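For what it's worth, one common GetTickCount()-style equivalent on Linux is clock_gettime() with CLOCK_MONOTONIC. A minimal millisecond sketch (ticks_ms is just a made-up helper name for the example):

#include <time.h>
#include <stdio.h>

// Returns a monotonic millisecond tick, roughly analogous to GetTickCount().
static long long ticks_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000LL + ts.tv_nsec / 1000000LL;
}

int main(void)
{
    long long before = ticks_ms();
    for (volatile int i = 0; i < 100000000; i++) ; /* the long loop suggested above */
    long long after = ticks_ms();
    printf("elapsed = %lld ms\n", after - before);
    return 0;
}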
I usually use the following:

#include <chrono>
#include <type_traits>

using perf_clock = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock
>::type;

using floating_seconds = std::chrono::duration<double>;

template<class Func, class... Args>
floating_seconds run_test(Func&& func, Args&&... args)
{
    const auto t0 = perf_clock::now();
    std::forward<Func>(func)(std::forward<Args>(args)...);
    return floating_seconds(perf_clock::now() - t0);
}

It's the same as what @nikos-athanasiou proposed, except that I avoid using a non-steady clock and use a floating-point number of seconds as the duration.
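A possible usage example, reusing the run_test template above (the work function is only a stand-in for whatever you want to time):

#include <cstdio>

void work(int n) {
    volatile long long sum = 0;
    for (int i = 0; i < n; ++i) sum += i;
}

int main() {
    const auto elapsed = run_test(work, 10000000);
    std::printf("run_test: %f seconds\n", elapsed.count());
}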
Matlab flavored! tic starts a stopwatch timer to measure performance. The function records the internal time at execution of the tic command. Display the elapsed time with the toc function.

#include <iostream>
#include <ctime>
#include <thread>

using namespace std;

clock_t START_TIMER;

clock_t tic()
{
    return START_TIMER = clock();
}

void toc(clock_t start = START_TIMER)
{
    cout << "Elapsed time: "
         << (clock() - start) / (double)CLOCKS_PER_SEC << "s"
         << endl;
}

int main()
{
    tic();
    this_thread::sleep_for(2s);
    toc();

    return 0;
}

Note that clock() measures processor time, so on POSIX systems the time spent sleeping may not be counted; use a std::chrono clock if you want wall time.
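A wall-clock variant of the same tic/toc idea, sketched with std::chrono::steady_clock (names kept close to the example above):

#include <chrono>
#include <iostream>
#include <thread>

using namespace std;

chrono::steady_clock::time_point START_TIME;

void tic()
{
    START_TIME = chrono::steady_clock::now();
}

void toc()
{
    chrono::duration<double> elapsed = chrono::steady_clock::now() - START_TIME;
    cout << "Elapsed time: " << elapsed.count() << "s" << endl;
}

int main()
{
    tic();
    this_thread::sleep_for(chrono::seconds(2));
    toc();
    return 0;
}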
In answer to OP's three specific questions.

"What I don't understand is why the values in the before and after are the same?"

The first question and sample code show that time() has a resolution of 1 second, so the answer has to be that the two functions execute in less than 1 second. But occasionally it will (apparently illogically) report 1 second if the two timer marks straddle a one-second boundary.

The next example uses gettimeofday(), which fills this struct:

struct timeval {
    time_t      tv_sec;   /* seconds */
    suseconds_t tv_usec;  /* microseconds */
};

and the second question asks:

"How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?"

My second answer is that the time taken is 0 seconds and 26339 microseconds, that is 0.026339 seconds, which bears out the first example executing in less than 1 second.

The third question asks:

"What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?"

My third answer is that the time taken is 4 seconds and 45025 microseconds, that is 4.045025 seconds, which shows that OP has altered the tasks performed by the two functions which he previously timed.
Here's a simple class that will print the duration between the time it got in and out of scope, in the specified duration unit:

#include <chrono>
#include <iostream>
#include <string>

template <typename T>
class Benchmark
{
public:
    Benchmark(std::string name) : start(std::chrono::steady_clock::now()), name(name) {}

    ~Benchmark()
    {
        auto end = std::chrono::steady_clock::now();
        T duration = std::chrono::duration_cast<T>(end - start);
        std::cout << "Bench \"" << name << "\" took: " << duration.count() << " units" << std::endl;
    }

private:
    std::string name;
    std::chrono::time_point<std::chrono::steady_clock> start;
};

Example usage:

int main()
{
    Benchmark<std::chrono::nanoseconds> bench("for loop");

    for (int i = 0; i < 100000; i++) {}
}

Outputs:

Bench "for loop" took: 230656 units
#include <ctime>
#include <cstdio>
#include <functional>

using namespace std;

void f()
{
    clock_t begin = clock();

    // ...code to measure time...

    clock_t end = clock();

    function<double(clock_t, clock_t)> convtime = [](clock_t begin, clock_t end)
    {
        return double(end - begin) / CLOCKS_PER_SEC;
    };

    printf("Elapsed time: %.2g sec\n", convtime(begin, end));
}

Similar example to the one available here, only with an additional conversion function plus a print-out.
I have created a class to automatically measure elapsed time. Please check the code (C++11) at this link: https://github.com/sonnt174/Common/blob/master/time_measure.h

Example of how to use class TimeMeasure:

void test_time_measure(std::vector<int> arr)
{
    TimeMeasure<chrono::microseconds> time_mea; // create time measure obj
    std::sort(begin(arr), end(arr));
}