How to write a measurement function for a multithreaded function [duplicate] - c++

I am running a .cpp program (i) in sequential style and (ii) using OpenMP statements. I am trying to see the time difference. To calculate the time, I use this:
#include <time.h>
.....
int main()
{
    clock_t start, finish;
    start = clock();
    ...
    finish = clock();
    double processing_time = double(finish - start) / CLOCKS_PER_SEC;
}
The time is pretty accurate in the sequential run of the code above; it takes about 8 seconds. When I insert OpenMP statements in the code and then calculate the time, I do get a speed-up, but the time displayed on the console is about 8-9 seconds, when it actually takes just 3-4 seconds in real time!
Here is how my code looks abstractly:
#include <time.h>
.....
int main()
{
    clock_t start, finish;
    start = clock();
    ...
    #pragma omp parallel for
    for ( ... )
        for ( ... )
            for ( ... )
            {
                ...;
            }
    ...
    finish = clock();
    double processing_time = double(finish - start) / CLOCKS_PER_SEC;
}
When I run the above code, I do get the speed-up, but the time displayed is not accurate in terms of real time. It seems to me as though the clock() function is calculating each thread's individual time and adding them up and displaying them.
Can someone explain the reason for this, or suggest another timing function I can use to measure time in OpenMP programs?
Thanks.

It seems to me as though the clock() function is calculating each thread's individual time and adding them up and displaying them.
This is exactly what clock() does - it measures the CPU time used by the process, which at least on Linux and Mac OS X means the cumulative CPU time of all threads that have ever existed in the process since it was started.
Real-time (a.k.a. wall-clock) measurement of OpenMP applications should be done with the high-resolution OpenMP timer omp_get_wtime(), which returns a double value of the number of seconds since an arbitrary point in the past. It is a portable function: it exists in both Unix and Windows OpenMP runtimes, unlike gettimeofday(), which is Unix-only.
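For illustration, here is a minimal sketch that times the same parallel loop both ways (the loop body and iteration count are made-up placeholders; compile with OpenMP enabled, e.g. g++ -fopenmp). With four threads you would expect the reported CPU time to be roughly four times the wall time:

#include <omp.h>
#include <cstdio>
#include <ctime>

int main()
{
    clock_t c0 = clock();          // CPU time, summed over all threads
    double  w0 = omp_get_wtime();  // wall-clock time

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < 200000000L; ++i)
        sum += 1.0 / (i + 1);

    double  w1 = omp_get_wtime();
    clock_t c1 = clock();

    printf("CPU time:  %f s\n", double(c1 - c0) / CLOCKS_PER_SEC);
    printf("Wall time: %f s (sum = %f)\n", w1 - w0, sum);
    return 0;
}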

I've seen clock() report CPU time instead of real time.
To time things you could instead use:

#include <sys/time.h>

struct timeval start, end;
double delta;

gettimeofday(&start, NULL);
// benchmark code
gettimeofday(&end, NULL);

// elapsed seconds, with microsecond resolution
delta = ((end.tv_sec - start.tv_sec) * 1000000u +
          end.tv_usec - start.tv_usec) / 1.e6;

You could use the built-in omp_get_wtime() function from the OpenMP runtime itself. The following example snippet shows how to find the execution time:
#include <stdio.h>
#include <omp.h>

int main() {
    double itime, ftime, exec_time;

    itime = omp_get_wtime();
    // required code for which execution time needs to be computed
    ftime = omp_get_wtime();

    exec_time = ftime - itime;
    printf("\n\nTime taken is %f\n", exec_time);
    return 0;
}

Well, yes, that's what clock() is supposed to do: tell you how much processor time the program used.
If you want to find elapsed real time, instead of CPU time, use a function that returns wall clock time, such as gettimeofday().

#include "ctime"
std::time_t start, end;
long delta = 0;
start = std::time(NULL);
// do your code here
end = std::time(NULL);
delta = end - start;
// output delta

Related

Measure the elapsed time when adding items into a vector in C++ VS2013 [duplicate]

I was given the following homework assignment:
Write a program to test on your computer how long it takes to do
n log n, n², n⁵, 2ⁿ, and n! additions for n = 5, 10, 15, 20.
I have written a piece of code, but the execution time I get is always 0. Can anyone help me out with it? Thanks
#include <iostream>
#include <cmath>
#include <ctime>
using namespace std;

int main()
{
    float n = 20;
    time_t start, end, diff;

    start = time(NULL);
    cout << (n * log(n)) * (n * n) * (pow(n, 5)) * (pow(2, n)) << endl;
    end = time(NULL);

    diff = difftime(end, start);
    cout << diff << endl;
    return 0;
}
Better than time() with its one-second precision is to use millisecond precision. A portable way is, e.g.:
#include <time.h>

int main() {
    clock_t start, end;
    double msecs;

    start = clock();
    /* any stuff here ... */
    end = clock();

    msecs = ((double)(end - start)) * 1000 / CLOCKS_PER_SEC;
    return 0;
}
Execute each calculation thousands of times in a loop, so that you can overcome the low resolution of the timer and obtain meaningful results. Remember to divide by the number of iterations when reporting results.
This is not particularly accurate but that probably does not matter for this assignment.
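A minimal sketch of this repeat-in-a-loop idea (the iteration count and the pow() workload are arbitrary placeholders; tune the count so the whole run lasts at least a second):

#include <ctime>
#include <cmath>
#include <iostream>

int main()
{
    const long iterations = 10000000L;
    volatile double sink = 0.0;   // volatile discourages the compiler
                                  // from deleting the loop entirely
    clock_t start = clock();
    for (long i = 0; i < iterations; ++i)
        sink = sink + std::pow(2.0, 20.0);
    clock_t end = clock();

    double total_msecs = double(end - start) * 1000.0 / CLOCKS_PER_SEC;
    std::cout << "per-iteration cost: "
              << total_msecs / iterations << " ms\n";
    return 0;
}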
At least on Unix-like systems, time() only gives you 1-second granularity, so it's not useful for timing things that take a very short amount of time (unless you execute them many times in a loop). Take a look at the gettimeofday() function, which gives you the current time with microsecond resolution. Or consider using clock(), which measures CPU time rather than wall-clock time.
Your code executes too quickly to be measured by the time() function, which returns the number of seconds elapsed since 00:00 hours, Jan 1, 1970 UTC.
Try using this piece of code instead:

#include <sys/timeb.h>

inline long getCurrentTime() {
    struct timeb timebstr;
    ftime(&timebstr);
    return (long)(timebstr.time) * 1000 + timebstr.millitm;   // milliseconds
}
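A hypothetical caller might look like this (a fragment; it assumes <stdio.h> and the getCurrentTime() helper above are in scope):

long t0 = getCurrentTime();
/* work to measure */
long t1 = getCurrentTime();
printf("elapsed: %ld ms\n", t1 - t0);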
Actually, the better practice is to repeat your calculations in a loop to get more precise results.
You will probably have to find a more precise platform-specific timer, such as the Windows High Performance Timer. You may also (very likely) find that your compiler optimizes away or removes almost all of your code.

Measure execution time in C++ OpenMP code


Most efficient way to retrieve a timer?

I've never actually worked with timers before but I need one for my current project.
So this might be a silly question: but what's the 'normal' way to retrieve a timer for a game, and is there a better/more efficient way?
Thanks
Since you may want to measure an elapsed time that can be very small, you might need to use the clock() function defined in time.h.
Here is what I found about it in the MSDN Library:
Calculates the wall-clock time used by the calling process.
clock_t clock( void );
Return Value
The elapsed wall-clock time since the start of the process (elapsed time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time is unavailable, the function returns –1, cast as a clock_t.
Remarks
The clock function tells how much time the calling process has used. A timer tick is approximately equal to 1/CLOCKS_PER_SEC second. In versions of Microsoft C before 6.0, the CLOCKS_PER_SEC constant was called CLK_TCK.
Example:
// crt_clock.c
// This example prompts for how long
// the program is to run and then continuously
// displays the elapsed time for that period.
//
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void sleep( clock_t wait );

int main( void )
{
    long i = 6000000L;
    clock_t start, finish;
    double duration;

    // Delay for a specified time.
    printf( "Delay for three seconds\n" );
    sleep( (clock_t)3 * CLOCKS_PER_SEC );
    printf( "Done!\n" );

    // Measure the duration of an event.
    printf( "Time to do %ld empty loops is ", i );
    start = clock();
    while( i-- )
        ;
    finish = clock();
    duration = (double)(finish - start) / CLOCKS_PER_SEC;
    printf( "%2.1f seconds\n", duration );
}

// Pauses for a specified number of milliseconds.
void sleep( clock_t wait )
{
    clock_t goal;
    goal = wait + clock();
    while( goal > clock() )
        ;
}
Output
Delay for three seconds
Done!
Time to do 6000000 empty loops is 0.1 seconds
If you want a cross-platform and performant time library, use boost::date_time. For timers, just get the current time and subtract it from the next reading (the library has operators for computing time differences etc., and the code is readable).
The current time is read using boost::posix_time::microsec_clock::universal_time() and stored in the ptime struct. (The posix_ prefix does not mean it is available only on POSIX systems; it only indicates that it is modeled after POSIX time concepts.)
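A minimal sketch of that approach might look like this (assumes Boost is installed; the placeholder workload is made up):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using boost::posix_time::microsec_clock;
    using boost::posix_time::ptime;
    using boost::posix_time::time_duration;

    ptime t0 = microsec_clock::universal_time();
    // ... code to time ...
    ptime t1 = microsec_clock::universal_time();

    time_duration d = t1 - t0;
    std::cout << "elapsed: " << d.total_microseconds() << " us\n";
    return 0;
}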
If you are using C++ on Windows, you will want to use QueryPerformanceCounter/QueryPerformanceFrequency:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644905(v=vs.85).aspx
If you are on Linux, check out clock_gettime(CLOCK_REALTIME):
http://linux.die.net/man/3/clock_gettime
The clock() suggestion is incorrect, as it reports the CPU time used by the process. Since the example busy-waits in a loop it happens to come out right, but if you block (e.g. sleep) it will not work.
http://linux.die.net/man/3/clock
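A minimal clock_gettime() sketch (Linux; on older glibc you may need to link with -lrt). CLOCK_MONOTONIC is used here instead of CLOCK_REALTIME because it is immune to wall-clock adjustments:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* code to time */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double seconds = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %f s\n", seconds);
    return 0;
}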

Calculating time of execution with time() function


How to get system time in C++?

In fact, I am trying to calculate the time a function takes to complete in my program.
So my approach is to get the system time when I call the function and the time when the function returns a value; subtracting the two values gives the time it took to complete.
If anyone can tell me a better approach, or just how to get the system time at an instant, it would be a great help.
The approach I use when timing my code is the time() function. It returns a single numeric value representing the seconds since the epoch, which makes the subtraction part easy.
Relevant code:
#include <time.h>
#include <iostream>

int main(int argc, char *argv[]) {
    time_t startTime, endTime, totalTime;

    startTime = time(NULL);
    /* relevant code to benchmark in here */
    endTime = time(NULL);

    totalTime = endTime - startTime;
    std::cout << "Runtime: " << totalTime << " seconds.";
    return 0;
}
Keep in mind this is wall-clock time. For CPU time, see Ben's reply.
Your question depends entirely on WHICH system you are using. Each system has its own functions for getting the current time. For precise timing, you'd want to access one of the "high-resolution performance counters". If you don't use a performance counter, you are usually limited to millisecond accuracy (or worse), which is almost useless for profiling the speed of a function.
In Windows, you can access the counter via the QueryPerformanceCounter() function. This returns an arbitrary number that differs from processor to processor. To find out how many counter ticks make up one second, call QueryPerformanceFrequency().
If you're coding on a platform other than Windows, just google "performance counter" along with the system you are coding on, and it should tell you how to access the counter.
Edit (clarification)
This is C++: just include windows.h and link against Kernel32.lib (the hyperlink seems to have been removed; check out the documentation at http://msdn.microsoft.com/en-us/library/ms644904.aspx). For C#, you can use the System.Diagnostics.PerformanceCounter class.
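A minimal QueryPerformanceCounter sketch (Windows only; the placeholder workload is made up):

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);
    // ... code to time ...
    QueryPerformanceCounter(&end);

    double seconds = double(end.QuadPart - start.QuadPart)
                   / double(freq.QuadPart);
    printf("elapsed: %f s\n", seconds);
    return 0;
}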
You can use time_t
Under Linux, try gettimeofday() for microsecond resolution, or clock_gettime() for nanosecond resolution.
(Of course the actual clock may have a coarser resolution.)
On some systems you may not have access to a higher-precision timer. In that case, you can use the following code snippet to find out how long your program takes to run, with an accuracy of seconds.
#include <ctime>
#include <iostream>
using namespace std;

void function()
{
    time_t currentTime;
    time(&currentTime);
    int startTime = currentTime;

    /* Your program starts from here */

    time(&currentTime);
    int timeElapsed = currentTime - startTime;
    cout << "It took " << timeElapsed << " seconds to run the program" << endl;
}
You can use the std::chrono solution described here: Getting an accurate execution time in C++ (microseconds). You will get much better accuracy in your measurement; usually we measure code execution on the order of milliseconds (ms) or even microseconds (us).
#include <chrono>
#include <iostream>
...
[YOUR METHOD/FUNCTION STARTING HERE]

auto start = std::chrono::high_resolution_clock::now();

[YOUR TEST CODE HERE]

auto elapsed = std::chrono::high_resolution_clock::now() - start;
long long microseconds =
    std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
std::cout << "Elapsed time: " << microseconds << " us\n";