See the following code, which is my attempt to print the time elapsed between loops.
#include <cstdio>   // printf
#include <ctime>    // clock, CLOCKS_PER_SEC

int main()
{
    while (true)
    {
        static clock_t timer;
        clock_t t = clock();
        clock_t elapsed = t - timer;
        float elapsed_sec = (float)elapsed / (float)CLOCKS_PER_SEC;
        timer = t;
        printf("[dt:%d ms].\n", (int)(elapsed_sec * 1000));
    }
}
However, if I set a breakpoint and sit there for 10 seconds, then when I continue execution the elapsed time includes those 10 seconds -- and I don't want it to, for my intended usage.
I assume clock() is the wrong function, but what is the correct one?
Note that if there IS no single standard C or C++ call for this -- well, how do you compute it? Is there a POSIX way?
I suspect that this is actually information only knowable with platform-specific calls. If that is the case, I'd like to at least know how to do so on Windows (MSVC).
Yes, trying to measure the CPU time of your process would be dependent on support from your operating system. Rather than look up what support is available from various operating systems, though, I would propose that your approach is flawed.
Debugging typically uses a debug build with most optimizations turned off (to make it easier to do things like set breakpoints accurately). Timings of a non-optimized build have little practical value, so any timings of your program should usually be ignored when you are using breakpoints, even if the breakpoint is outside the timed section.
To combine using breakpoints with timings, I would do the debugging in two phases. In one phase, you set breakpoints and look at what is happening in the debug build. In the other phase, you use identical input (redirect a file into std::cin if it helps) and time the process in the release build. There may be some back-and-forth between the stages as you work out what is going on. That's fine; the point is not to have exactly two phases, but to keep breakpoints and timings separate.
Although JaMit gives a better answer (here), it is possible, but it depends entirely on your compiler, and the overhead this creates will probably slow down your program too much to get an accurate result. You can use whatever time-recording function you please, but either way you would have to:
Record the start of the loop
Record the start of the breakpoint
Programmatically cause a breakpoint
Record the end of the breakpoint
Record the end of the loop.
If you're looking for speed though, you really need to be testing in an optimized release mode without breakpoints and writing the output to the console or a file. Nonetheless, it is possible to do what you're trying to do, and here's a sample solution.
#include <chrono>
#include <intrin.h> // for Visual Studio's __debugbreak()
#include <iostream>

int main(void) {
    for (int c = 0; c < 100; c++) {
        // Start of loop
        auto start = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::high_resolution_clock::now().time_since_epoch()).count();

        /* Do stuff here */

        auto startBreak = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::high_resolution_clock::now().time_since_epoch()).count();

        // Visual Studio only: trigger a breakpoint programmatically
        __debugbreak();

        auto endBreak = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::high_resolution_clock::now().time_since_epoch()).count();

        /* Do more stuff here */

        auto end = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::high_resolution_clock::now().time_since_epoch()).count();

        // Time for one pass of the loop, excluding the time spent in the breakpoint
        // (the overhead of the timing calls themselves is still included)
        std::cout << end - start - (endBreak - startBreak) << "\n";
    }
    return 0;
}
Given your objective, which is "to print the time elapsed between loops", in the C language:
Note that clock() returns the number of clock ticks since the program started.
This code measures the elapsed time for each iteration by showing the start/end time of each pass:
#include <stdio.h>
#include <time.h>

int main( void )
{
    clock_t loopStart = clock();
    clock_t loopEnd;

    for( int i = 0; i < 10000; i++ )
    {
        // something here to be timed
        loopEnd = clock();
        printf( "[%lf : %lf s].\n", (double)loopStart / CLOCKS_PER_SEC, (double)loopEnd / CLOCKS_PER_SEC );
        loopStart = loopEnd;
    }
}
Of course, if you want to display the actual number of clock ticks per iteration, remove the division by CLOCKS_PER_SEC, calculate the difference between the two readings, and display only that difference.
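For example, a minimal variant of the loop body along those lines (same loopStart/loopEnd variables as above; only the printf changes):

        // something here to be timed
        loopEnd = clock();
        // print the raw difference in clock ticks (cast to long for a portable printf format)
        printf( "[%ld ticks].\n", (long)(loopEnd - loopStart) );
        loopStart = loopEnd;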
Related
I am running a .cpp code (i) in sequential style and (ii) using OpenMP statements. I am trying to see the time difference. For calculating time, I use this:
#include <time.h>
.....
int main()
{
    clock_t start, finish;
    start = clock();
    .
    .
    .
    finish = clock();
    double processing_time = (double)(finish - start) / CLOCKS_PER_SEC;
}
The time is pretty accurate in the sequential (above) run of the code. It takes about 8 seconds to run. When I insert OpenMP statements in the code and then calculate the time, I get a reduction in run time, but the time displayed is about 8-9 seconds on the console, when actually it's just 3-4 seconds in real time!
Here is how my code looks abstractly:
#include <time.h>
.....
int main()
{
    clock_t start, finish;
    start = clock();
    .
    .
    #pragma omp parallel for
    for( ... )
        for( ... )
            for( ... )
            {
                ...;
            }
    .
    .
    finish = clock();
    double processing_time = (double)(finish - start) / CLOCKS_PER_SEC;
}
When I run the above code, I get the reduction in time, but the time displayed is not accurate in terms of real time. It seems to me as though the clock() function is calculating each thread's individual time and adding them up and displaying them.
Can someone tell me the reason for this or suggest any other timing function to use to measure the time of OpenMP programs?
Thanks.
It seems to me as though the clock() function is calculating each thread's individual time and adding them up and displaying them.
This is exactly what clock() does - it measures the CPU time used by the process, which at least on Linux and Mac OS X means the cumulative CPU time of all threads that have ever existed in the process since it was started.
Real-clock (a.k.a. wall-clock) timing of OpenMP applications should be done using the high-resolution OpenMP timer omp_get_wtime(), which returns the number of seconds since an arbitrary point in the past as a double. It is a portable function, i.e. it exists in both Unix and Windows OpenMP run-times, unlike gettimeofday(), which is Unix-only.
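To see the difference directly, here is a minimal sketch of my own (not from the original code, and assuming an OpenMP-enabled build, e.g. gcc -fopenmp) that times the same parallel loop with both clock() and omp_get_wtime():

#include <omp.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t c0 = clock();          /* cumulative CPU time of the process */
    double  w0 = omp_get_wtime();  /* wall-clock time */

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < 100000000L; i++)
        sum += i * 0.5;            /* some busy work for every thread */

    double  w1 = omp_get_wtime();
    clock_t c1 = clock();

    /* On Linux/macOS the clock() figure grows with the number of threads;
       the omp_get_wtime() figure does not. */
    printf("clock():        %f s\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("omp_get_wtime:  %f s\n", w1 - w0);
    printf("(sum = %f, printed so the loop is not optimized away)\n", sum);
    return 0;
}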
I've seen clock() reporting CPU time, instead of real time.
You could use
#include <sys/time.h>

struct timeval start, end;
double delta;

gettimeofday(&start, NULL);
// benchmark code
gettimeofday(&end, NULL);

delta = ((end.tv_sec - start.tv_sec) * 1000000u +
         end.tv_usec - start.tv_usec) / 1.e6;
to time things instead.
You could use the built-in omp_get_wtime function from the OpenMP library itself. The following is an example code snippet to find the execution time.
#include <stdio.h>
#include <omp.h>

int main(){
    double itime, ftime, exec_time;

    itime = omp_get_wtime();
    // Required code for which execution time needs to be computed
    ftime = omp_get_wtime();

    exec_time = ftime - itime;
    printf("\n\nTime taken is %f", exec_time);
}
Well yes, that's what clock() is supposed to do: tell you how much processor time the program used.
If you want to find elapsed real time, instead of CPU time, use a function that returns wall clock time, such as gettimeofday().
#include "ctime"
std::time_t start, end;
long delta = 0;
start = std::time(NULL);
// do your code here
end = std::time(NULL);
delta = end - start;
// output delta
EDIT: It appears to be functioning now. The code has been updated to show my revisions. Thank you all for your help.
I imagine I'm just stupid, but I'm attempting to use ctime to count CPU ticks through my entire program. I'm writing an encryption algorithm for a school project and I'm trying to include a timer so that I can add noise processes, equalizing the amount of time among different key/plaintext combinations.
Here is a little test for ctime:
#include <iostream>
#include <string>
#include <ctime>

int main (int argc, char **argv)
{
    double elapsedTime;
    const clock_t start = clock ();

    int uselessInt = 0;
    for (int i = 0; i <= 200; i++)
    {
        uselessInt = uselessInt * 2 / 3 + i;
        std::cout << uselessInt << std::endl;
    }

    clock_t end = clock();
    elapsedTime = static_cast<double>(end - start);
    std::cout << elapsedTime << " CPU ticks have elapsed since this application's initiation." << std::endl;
    return (0);
}
which prints:
0
1
2
4
/* ... long list of numbers ... */
591
594
0 CPU ticks have elapsed since this application's initiation.
[smalltock#localhost Desktop]$
I am using GCC (G++) and it appears that ctime/time.h simply isn't counting ticks like I want it to. Can anybody identify the problem? I'm a relative amateur in this language.
My two cents: when you do cin.get(), it waits for you to input something on the console. Did you type anything, or simply press enter?
I ran your code without typing any text, simply pressing enter, and it gave the following output:
Test Text
It's a stone, Luigi... you didn't make it.
0 CPU ticks have elapsed since this application's initiation.
Real 0m0.700s
User 0m0.000s
Sys 0m0.061s
It may be because the granularity of clock() is rather "big" compared to the tiny amount of CPU time used by your program.
Also, there is a syntax error in the duration line: you either missed a closing ) or should delete the first (.
BTW:
Real is wall clock time - time from start to finish of the call.
User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only actual CPU time used in executing the process.
Sys is the amount of CPU time spent in the kernel within the process.
So you basically have 0 CPU time, since you are just waiting for I/O and doing no CPU computation.
elapsedTime in your program is a measure of time in seconds, not a count of clock ticks. If you want ticks, use duration.
Since your program (presumably) spends the vast majority of its time blocked on I/O, not very many seconds are going to have gone by.
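As an illustration (a sketch of my own, not the original program), here is a small program that contrasts clock() ticks with wall-clock time while blocked on input:

#include <chrono>
#include <ctime>
#include <iostream>

int main()
{
    const clock_t c0 = clock();                       // CPU time used by this process
    const auto w0 = std::chrono::steady_clock::now(); // wall-clock time

    std::cout << "Press enter..." << std::endl;
    std::cin.get();                                   // blocks on I/O, consuming almost no CPU

    const clock_t c1 = clock();
    const auto w1 = std::chrono::steady_clock::now();

    // On Linux, clock() counts CPU time, so waiting on input adds almost nothing to it
    std::cout << "CPU ticks:  " << (c1 - c0) << "\n";
    std::cout << "Wall clock: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(w1 - w0).count()
              << " ms\n";
    return 0;
}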
I tried some code found by googling:
clock_t start, end;
start = clock();
// CODE GOES HERE
end = clock();
std::cout << end - start << "\n";
std::cout << (double)(end - start) / CLOCKS_PER_SEC;
but the resulting elapsed time was always 0, even with
std::cout << (double) (end-start)/ (CLOCKS_PER_SEC/1000.0 );
I don't know why, but when I use the similar call in Java, getCurrentTimeMillis(), it works well. I want it to show the milliseconds, as maybe the computer computes so fast.
I don't think it's guaranteed that clock has a high enough resolution to profile your function. If you want to know how fast a function executes, you should run it a few thousand times instead of once, measure the total time it takes, and take the average.
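For example, a minimal sketch of that idea (someFunction is just a placeholder for the code being measured):

#include <ctime>
#include <iostream>

void someFunction() { /* code to be measured */ }

int main()
{
    const int runs = 10000;             // enough repetitions for clock() to register
    const std::clock_t start = std::clock();

    for (int i = 0; i < runs; ++i)
        someFunction();

    const std::clock_t end = std::clock();
    const double total_ms = 1000.0 * (end - start) / CLOCKS_PER_SEC;

    std::cout << "Average per call: " << total_ms / runs << " ms\n";
    return 0;
}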
#include <boost/progress.hpp>

int main()
{
    boost::progress_timer timer;
    // code to time goes here
}
This will print out the time it took to run main. You can place your code in scopes to time several parts, e.g. { boost::progress_timer timer; ... }.
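For instance, a small sketch of that scoped usage (each timer prints its elapsed seconds when it is destroyed at the end of its scope; note that boost/progress.hpp is deprecated in newer Boost releases in favour of the Boost.Timer library):

#include <boost/progress.hpp>

int main()
{
    {
        boost::progress_timer t;   // prints elapsed seconds when this scope ends
        // ... first part to time ...
    }
    {
        boost::progress_timer t;   // separate timing for the second part
        // ... second part to time ...
    }
    return 0;
}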
This question is somewhat similar to yours: Timing a function in a C++ program that runs on Linux
Take a look at this answer!
I need to "time" or benchmark a number crunching application written in C/C++. The problem is that the machine where I run the program is usually full of people doing similar things, so the CPUs are always at full load.
I thought about using functions from time.h like "get time of the day" (I don't remember the exact syntax, sorry) and similar, but I am afraid they would not be good for this case. Am I right?
And the program "time" from bash, gave me some errors long time ago.
Also the problem is, that sometimes I need to get timings in the range of 0.5 secs and so on.
Anybody has a hint?
P.S.: compiler is gcc and in some cases nvcc (NVIDIA)
P.S.2: in my benchmarks I just want to measure the execution time between two parts of the main function
You didn't mention which compiler you are using, but with GNU's g++ I usually set the -pg flag to build with profiling information.
Each time you run the application, it will create an output file that, when parsed with the gprof application, will give you lots of information about its performance.
See this for starters.
From your other recent questions, you seem to be using MPI for parallelisation. Assuming this question is within the same context, then the simplest way to time your application would be to use MPI_Wtime().
From the man page:
This subroutine returns the current value of time as a double precision floating point number of seconds. This value represents elapsed time since some point in the past. This time in the past will not change during the life of the task. You are responsible for converting the number of seconds into other units if you prefer.
Example usage:
#include "mpi.h"
int main(int argc, char **argv)
{
int rc, taskid;
double t_start, t_end;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&taskid);
t_start = MPI_Wtime();
/* .... your computation kernel .... */
t_end = MPI_Wtime();
/* make sure all processes have completed */
MPI_Barrier(MPI_COMM_WORLD);
if (taskid == 0) {
printf("Elapsed time: %1.2f seconds\n", t_start - t_end);
}
MPI_Finalize();
return 0;
}
The advantage of this is that we let the underlying MPI library handle platform-specific ways of handling time, although you might want to use MPI_Wtick() to determine the resolution of the timer used on each platform.
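As a small illustration (my own sketch, not part of the original answer), querying that resolution looks like this; MPI_Wtick() returns the number of seconds between successive ticks of the clock behind MPI_Wtime():

#include "mpi.h"
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* resolution of the MPI_Wtime() timer, in seconds */
    printf("MPI_Wtime resolution: %g seconds\n", MPI_Wtick());

    MPI_Finalize();
    return 0;
}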
It's hard to meaningfully compare timings from programs running for such a short time. Usually the solution is to run multiple times.
The time builtin in bash (or /usr/bin/time) will report time actually used by the processor, which will be more useful on a loaded machine than wall-clock time, but there is too much going on to really compare timings on a fine grain – huge differences of orders of magnitude will still be apparent.
You can also use clock to get a rough estimate:
#include <ctime>
#include <iostream>

struct Timer {
    std::clock_t _start, _stop;

    Timer() : _start(std::clock()) {}
    void restart() { _start = std::clock(); }
    void stop() { _stop = std::clock(); }

    std::clock_t clocks() const { return _stop - _start; }
    double secs() const { return double(clocks()) / CLOCKS_PER_SEC; }
};

int main() {
    Timer t;
    //run_some_code();
    t.stop();

    std::cout << "That took " << t.secs() << " seconds.\n";
    return 0;
}
}