Different values when measuring elapsed time in C++

I have a simple program, and I used clock() and other suggested methods to measure its running time. The problem is that I get different values when I run it from time to time.
Is there any way to measure the real execution time of the program?
Thanks in advance.

One way of doing it uses #include <ctime>:
clock_t t = clock(); // take a start time
// ... do something
clock_t dt = clock() - t; // elapsed ticks
cout << ((double)dt / CLOCKS_PER_SEC) * 1000; // duration in MILLIseconds
The other approach uses the high_resolution_clock from #include <chrono>:
chrono::high_resolution_clock::time_point t = chrono::high_resolution_clock::now();
//... do something
chrono::high_resolution_clock::time_point t2 = chrono::high_resolution_clock::now();
cout << chrono::duration_cast<chrono::duration<double>>(t2 - t).count();
// or if you prefer duration_cast<milliseconds>(t2 - t).count();
In any case, it's normal to see small variations. The first reason is the other programs running on your PC. The second reason is the clock accuracy (for example, the famous 15 milliseconds on Windows).
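To reduce that run-to-run noise, a common practice (a general-technique sketch, not part of the original answer) is to repeat the measurement several times and keep the minimum, which is the run least disturbed by other processes:
#include <algorithm>
#include <chrono>
#include <iostream>

int main()
{
    using clk = std::chrono::high_resolution_clock;
    double best = 1e300; // minimum over all runs, in seconds
    for (int run = 0; run < 5; ++run)
    {
        auto t0 = clk::now();
        // ... do something (the code you want to time)
        auto t1 = clk::now();
        best = std::min(best, std::chrono::duration<double>(t1 - t0).count());
    }
    std::cout << "best of 5 runs: " << best << " s\n";
}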

Related

How to write a measurement function for a multithreaded function [duplicate]

I am running a .cpp code (i) in sequential style and (ii) using OpenMP statements. I am trying to see the time difference. For calculating time, I use this:
#include <time.h>
.....
int main()
{
    clock_t start, finish;
    start = clock();
    // ...
    finish = clock();
    double processing_time = double(finish - start) / CLOCKS_PER_SEC;
}
The time is pretty accurate in the sequential (above) run of the code. It takes about 8 seconds to run. When I insert OpenMP statements in the code and then calculate the time, I get a reduction in time, but the time displayed is about 8-9 seconds on the console, when actually it's just 3-4 seconds in real time!
Here is how my code looks abstractly:
#include <time.h>
.....
int main()
{
    clock_t start, finish;
    start = clock();
    // ...
    #pragma omp parallel for
    for( ... )
        for( ... )
            for (...)
            {
                ...;
            }
    // ...
    finish = clock();
    double processing_time = double(finish - start) / CLOCKS_PER_SEC;
}
When I run the above code, I get the reduction in time, but the time displayed is not accurate in terms of real time. It seems to me as though the clock() function is calculating each thread's individual time, adding them up, and displaying the total.
Can someone tell me the reason for this, or suggest another timing function to measure time in OpenMP programs?
Thanks.
It seems to me as though the clock() function is calculating each thread's individual time and adding them up and displaying them.
This is exactly what clock() does - it measures the CPU time used by the process, which at least on Linux and Mac OS X means the cumulative CPU time of all threads that have ever existed in the process since it was started.
Real-clock (a.k.a. wall-clock) timing of OpenMP applications should be done using the high-resolution OpenMP timer call omp_get_wtime(), which returns a double value with the number of seconds since an arbitrary point in the past. It is a portable function that exists in both Unix and Windows OpenMP run-times, unlike gettimeofday(), which is Unix-only.
I've seen clock() reporting CPU time, instead of real time.
You could use gettimeofday() to time things instead:
#include <sys/time.h>

struct timeval start, end;
double delta;

gettimeofday(&start, NULL);
// benchmark code
gettimeofday(&end, NULL);

delta = ((end.tv_sec - start.tv_sec) * 1000000u +
         end.tv_usec - start.tv_usec) / 1.e6; // seconds
You could use the built-in omp_get_wtime() function from the OpenMP runtime itself. The following is an example code snippet to find the execution time (compile with OpenMP enabled, e.g. -fopenmp on GCC/Clang).
#include <stdio.h>
#include <omp.h>
int main()
{
    double itime, ftime, exec_time;
    itime = omp_get_wtime();
    // Required code for which execution time needs to be computed
    ftime = omp_get_wtime();
    exec_time = ftime - itime;
    printf("\n\nTime taken is %f\n", exec_time);
}
Well, yes, that's what clock() is supposed to do: tell you how much processor time the program used.
If you want to find elapsed real time, instead of CPU time, use a function that returns wall clock time, such as gettimeofday().
#include "ctime"
std::time_t start, end;
long delta = 0;
start = std::time(NULL);
// do your code here
end = std::time(NULL);
delta = end - start;
// output delta

How to obtain the execution time of a function in C/C++ [duplicate]

I have a function which generates 10000 random numbers and writes them to a file.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void generator(char filename[])
{
    int i;
    int n;
    FILE* fp;
    if ((fp = fopen(filename, "w+")) == NULL)
    {
        printf("Failed to create file!");
        return; // don't write through a NULL file pointer
    }
    srand((unsigned)time(NULL));
    for (i = 0; i < 10000; i++)
    {
        n = rand() % 10000;
        fprintf(fp, "%d ", n);
    }
    fclose(fp);
}
How can I get the execution time of this function using C/C++ ?
Code profiling is not a particularly easy task (or, as we oft say in programming, it's "non-trivial"). The issue is that "execution time" measured in seconds isn't particularly accurate or useful.
What you want to do is measure the number of CPU cycles. This can be done using an external tool such as callgrind (one of Valgrind's tools). There's a 99% chance that's all you want.
If you REALLY want to do that yourself in code, you're undertaking a rather difficult task. I know first hand - I wrote a comparative benchmarking library in C++ for on-the-fly performance testing.
If you really want to go down that road, you can research benchmarking on Intel processors (that mostly carries over to AMD), or whatever processor you're using. However, as I said, that topic is large and in-depth, and far beyond the scope of a StackOverflow answer.
You can use the chrono library:
#include <chrono>
//*****//
auto start = std::chrono::steady_clock::now();
generator("file.txt")
auto end = std::chrono::steady_clock::now();
std::cout << "genarator() took "
<< std::chrono::duration_cast<std::chrono::microseconds>(end - start).count() << "us.\n";
You already have some nice C answers that also work with C++.
Here is a native C++ solution using <chrono>:
auto tbegin = std::chrono::high_resolution_clock::now();
...
auto tend = std::chrono::high_resolution_clock::now();
auto tduration = std::chrono::duration_cast<std::chrono::microseconds>(tend - tbegin).count();
The advantage is that you can switch from microseconds to milliseconds, seconds, or any other time unit very easily.
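For instance, here is a small sketch (my addition, not from the original answer) showing the same measurement reported in three different units:
#include <chrono>
#include <iostream>

int main()
{
    auto tbegin = std::chrono::high_resolution_clock::now();
    // ... the code being timed
    auto tend = std::chrono::high_resolution_clock::now();

    using namespace std::chrono;
    std::cout << duration_cast<microseconds>(tend - tbegin).count() << " us\n";
    std::cout << duration_cast<milliseconds>(tend - tbegin).count() << " ms\n";
    std::cout << duration<double>(tend - tbegin).count() << " s\n"; // fractional seconds
}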
Note that you may have OS limits on the clocking accuracy (typically 15 milliseconds in a Windows environment), so this may give meaningful results only if you're well above this limit.
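To see what that limit actually is on your machine, here is a minimal sketch (again my addition, assuming a C++11 compiler) that spins until high_resolution_clock::now() advances and prints the smallest observable step:
#include <chrono>
#include <iostream>

int main()
{
    using clk = std::chrono::high_resolution_clock;
    auto t0 = clk::now();
    auto t1 = t0;
    while (t1 == t0) // busy-wait until the clock reports a new value
        t1 = clk::now();
    std::cout << "smallest observable step: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count()
              << " ns\n";
}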
void generator(char filename[])
{
    clock_t tStart = clock();
    /* your code here */
    printf("Time taken: %.2fs\n", (double)(clock() - tStart) / CLOCKS_PER_SEC);
}
Update: also add #include <ctime>.
Try this:
#include <sys/time.h>

struct timeval tpstart, tpend;
double timeuse;

// get time before generator starts
gettimeofday(&tpstart, NULL);
// call generator function
generator(filename);
// get time after generator ends
gettimeofday(&tpend, NULL);

// calculate the used time (seconds)
timeuse = 1000000 * (tpend.tv_sec - tpstart.tv_sec) + tpend.tv_usec - tpstart.tv_usec;
timeuse /= 1000000;
printf("Used Time: %f sec\n", timeuse);
#include <ctime>
.....
clock_t start = clock();
...//the code you want to get the execution time for
double elapsed_time = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
std::cout << elapsed_time << std::endl;//elapsed_time now contains the execution time(in seconds) of the code in between
This will give you an approximate (not exact) execution time of the code between the first and second clock() calls.
Temporarily make the limit 10000000 instead of 10000. Time it with a stopwatch, then divide the time by 1000.

precise time measurement

I'm using time.h in C++ to measure the timing of a function.
clock_t t = clock();
someFunction();
printf("\nTime taken: %.4fs\n", (float)(clock() - t)/CLOCKS_PER_SEC);
However, I always get the time taken as 0.0000; clock() and t, when printed separately, have the same value. I would like to know if there is a way to measure the time precisely (maybe on the order of nanoseconds) in C++. I'm using VS2010.
C++11 introduced the chrono API, which you can use to get nanoseconds:
auto begin = std::chrono::high_resolution_clock::now();
// code to benchmark
auto end = std::chrono::high_resolution_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count() << "ns" << std::endl;
For a more relevant value, it is good to run the function several times and compute the average:
auto begin = std::chrono::high_resolution_clock::now();
uint32_t iterations = 10000;
for(uint32_t i = 0; i < iterations; ++i)
{
// code to benchmark
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count();
std::cout << duration << "ns total, average : " << duration / iterations << "ns." << std::endl;
But remember that the for loop itself, and assigning the begin and end variables, use some CPU time too.
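One common mitigation (a hedged sketch on my part, not from the answer above) is to time an empty loop with the same iteration count and subtract that baseline from the measured total:
#include <chrono>
#include <cstdint>
#include <iostream>

volatile std::uint32_t sink = 0; // volatile so the compiler cannot drop the loops

int main()
{
    const std::uint32_t iterations = 10000;
    using clk = std::chrono::high_resolution_clock;

    auto b0 = clk::now(); // baseline: loop with (almost) nothing in it
    for (std::uint32_t i = 0; i < iterations; ++i) sink = i;
    auto e0 = clk::now();

    auto b1 = clk::now(); // loop + the code to benchmark
    for (std::uint32_t i = 0; i < iterations; ++i) sink = i * i;
    auto e1 = clk::now();

    auto overhead = std::chrono::duration_cast<std::chrono::nanoseconds>(e0 - b0).count();
    auto total = std::chrono::duration_cast<std::chrono::nanoseconds>(e1 - b1).count();
    std::cout << "corrected total: " << (total - overhead) << " ns\n";
}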
I usually use the QueryPerformanceCounter function (Windows-only, declared in <windows.h>). Example:
LARGE_INTEGER frequency; // ticks per second
LARGE_INTEGER t1, t2; // ticks
double elapsedTime;
// get ticks per second
QueryPerformanceFrequency(&frequency);
// start timer
QueryPerformanceCounter(&t1);
// do something
...
// stop timer
QueryPerformanceCounter(&t2);
// compute and print the elapsed time in millisec
elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
The following text, which I completely agree with, is quoted from Optimizing software in C++ (good reading for any C++ programmer):
The time measurements may require a very high resolution if time intervals are short. In Windows, you can use the GetTickCount or QueryPerformanceCounter functions for millisecond resolution. A much higher resolution can be obtained with the time stamp counter in the CPU, which counts at the CPU clock frequency.
There is a problem, though: "the clock frequency may vary dynamically and measurements are unstable due to interrupts and task switches."
In C or C++ I usually do it as below. If it still fails, you may consider using RDTSC functions (see the sketch after the code).
#include <sys/time.h>

struct timeval time;
gettimeofday(&time, NULL); // start time
long totalTime = (time.tv_sec * 1000) + (time.tv_usec / 1000); // milliseconds
//........ call your functions here
gettimeofday(&time, NULL); // end time
totalTime = (((time.tv_sec * 1000) + (time.tv_usec / 1000)) - totalTime); // elapsed milliseconds
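For completeness, here is a minimal sketch of reading the time stamp counter directly (my addition; x86-only, using the GCC/Clang intrinsic from <x86intrin.h>, while MSVC has __rdtsc() in <intrin.h>; it is subject to the frequency-scaling and task-switch caveats quoted earlier):
#include <x86intrin.h> // __rdtsc() on GCC/Clang
#include <cstdint>
#include <cstdio>

int main()
{
    std::uint64_t c0 = __rdtsc(); // read the time stamp counter

    volatile int sink = 0; // the code you want to benchmark
    for (int i = 0; i < 1000; ++i) sink += i;

    std::uint64_t c1 = __rdtsc();
    std::printf("%llu cycles\n", (unsigned long long)(c1 - c0));
}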

Timing algorithm: clock() vs time() in C++

For timing an algorithm (approximately in ms), which of these two approaches is better:
clock_t start = clock();
algorithm();
clock_t end = clock();
double time = (double) (end-start) / CLOCKS_PER_SEC * 1000.0;
Or,
time_t start = time(0);
algorithm();
time_t end = time(0);
double time = difftime(end, start) * 1000.0;
Also, from some discussion in the C++ channel on Freenode, I know clock has a very bad resolution, so the timing will be zero for a (relatively) fast algorithm. But which has better resolution, time() or clock()? Or is it the same?
<chrono> would be a better library if you're using C++11.
#include <iostream>
#include <chrono>
#include <thread>
void f()
{
    std::this_thread::sleep_for(std::chrono::seconds(1));
}

int main()
{
    auto t1 = std::chrono::high_resolution_clock::now();
    f();
    auto t2 = std::chrono::high_resolution_clock::now();
    std::cout << "f() took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
              << " milliseconds\n";
}
Example taken from here.
It depends on what you want: time() measures the real (wall-clock) time while clock() measures the processing time taken by the current process. If your process sleeps for any appreciable amount of time, or the system is busy with other processes, the two will be very different.
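A short demonstration of that difference (a sketch I'm adding, assuming a C++11 environment): a sleep barely registers in clock() but fully shows up in time():
#include <ctime>
#include <chrono>
#include <thread>
#include <iostream>

int main()
{
    std::clock_t c0 = std::clock();
    std::time_t t0 = std::time(NULL);

    std::this_thread::sleep_for(std::chrono::seconds(2)); // uses almost no CPU

    std::clock_t c1 = std::clock();
    std::time_t t1 = std::time(NULL);

    std::cout << "clock(): " << double(c1 - c0) / CLOCKS_PER_SEC << " s\n"; // ~0
    std::cout << "time():  " << std::difftime(t1, t0) << " s\n"; // ~2
}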
http://en.cppreference.com/w/cpp/chrono/c/clock
The time_t type is probably going to be an integer, which means it will have a resolution of one second.
The first piece of code: it will only count the time the CPU was doing something, so when you sleep(), it will not count anything. This can be worked around by counting the time you sleep(), but the result will probably start to drift after a while.
The second piece: only one-second resolution, not so great if you need sub-second time readings.
For time readings with the best resolution you can get, you should do something like this:
#include <time.h>

double getUnixTime(void)
{
    struct timespec tv;
    if (clock_gettime(CLOCK_REALTIME, &tv) != 0) return 0;
    return tv.tv_sec + (tv.tv_nsec / 1000000000.0);
}
double start_time = getUnixTime();
double stop_time, difference;
doYourStuff();
stop_time = getUnixTime();
difference = stop_time - start_time;
On most systems its resolution will be down to a few microseconds, but it can vary with different CPUs, and probably even major kernel versions.
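If you want to see what your system actually reports for that resolution, POSIX provides clock_getres(); here is a minimal sketch (my addition, POSIX-only):
#include <stdio.h>
#include <time.h> // POSIX clock_getres()

int main(void)
{
    struct timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) == 0) // granularity of CLOCK_REALTIME
        printf("CLOCK_REALTIME resolution: %ld ns\n", (long)res.tv_nsec);
    return 0;
}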
<chrono> is the best. Visual Studio 2013 provides this feature. Personally, I have tried all the methods mentioned above, and I strongly recommend you use the <chrono> library. It can track wall time and at the same time has good resolution (much finer than a second).
How about gettimeofday()? When it is called, it updates two structs (timeval and timezone) with timing information. Usually, passing a timeval struct is enough, and the timezone argument can be set to NULL. The updated timeval struct has two members, tv_sec and tv_usec: tv_sec is the number of seconds since 00:00:00, January 1, 1970 (the Unix Epoch) and tv_usec is the additional number of microseconds on top of tv_sec. Thus, one can get the time expressed with very good resolution.
It can be used as follows:
#include <sys/time.h>

struct timeval start_time;
gettimeofday(&start_time, NULL); // timeval is usually enough

long seconds = start_time.tv_sec;   // time in seconds
long useconds = start_time.tv_usec; // further time in microseconds

long long desired_time = seconds * 1000000LL + useconds; // time in microseconds
