I tried some code I found by googling:
clock_t start, end;
start = clock();
// code to time goes here
end = clock();
std::cout << end - start <<"\n";
std::cout << (double) (end-start)/ CLOCKS_PER_SEC;
but the elapsed time was always 0, even with
std::cout << (double) (end-start)/ (CLOCKS_PER_SEC/1000.0 );
I don't know why, but when I use the similar call in Java, getCurrentTimeMillis(), it works well. I want it to show milliseconds, since the computer may simply compute too fast for coarser units.
I don't think it's guaranteed that clock has a high enough resolution to profile your function. If you want to know how fast a function executes, run it a few thousand times instead of once, measure the total time it takes, and take the average.
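As a rough sketch of that idea (the repeat count is arbitrary; pick one large enough that the total comfortably exceeds the clock's resolution):
#include <ctime>
#include <iostream>

int main()
{
    const int iterations = 10000;   // arbitrary repeat count
    std::clock_t start = std::clock();
    for (int i = 0; i < iterations; ++i)
    {
        // call the function under test here
    }
    std::clock_t end = std::clock();
    double average_ms = 1000.0 * (end - start) / CLOCKS_PER_SEC / iterations;
    std::cout << "average per call: " << average_ms << " ms\n";
    return 0;
}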
#include <boost/progress.hpp>

int main()
{
    boost::progress_timer timer;
    // code to time goes here
}
This will print out the time it took to run main. You can place your code in scopes to time several parts, i.e. { boost::progress_timer timer; ... }.
This question is somewhat similar to yours: Timing a function in a C++ program that runs on Linux
Take a look at this answer!
I have a function which can generate 10000 random numbers and write them in a file.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void generator(char filename[])
{
    int i;
    int n;
    FILE* fp;
    if ((fp = fopen(filename, "w+")) == NULL)
    {
        printf("Fail creating file!");
        return;  /* don't write through a NULL file pointer */
    }
    srand((unsigned)time(NULL));
    for (i = 0; i < 10000; i++)
    {
        n = rand() % 10000;
        fprintf(fp, "%d ", n);
    }
    fclose(fp);
}
How can I get the execution time of this function using C/C++?
Code profiling is not a particularly easy task (or, as we often say in programming, it's "non-trivial"). The issue is that "execution time" measured in seconds isn't particularly accurate or useful.
What you really want to measure is the number of CPU cycles. This can be done using an external tool such as callgrind (one of Valgrind's tools). There's a 99% chance that's all you want.
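For reference, a typical invocation (assuming Valgrind is installed and your binary is ./your_program) is:
valgrind --tool=callgrind ./your_program
The resulting callgrind.out.* file can then be inspected with callgrind_annotate or KCachegrind.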
If you REALLY want to do that yourself in code, you're undertaking a rather difficult task. I know first hand: I wrote a comparative benchmarking library in C++ for on-the-fly performance testing.
If you really want to go down that road, you can research benchmarking on Intel processors (most of which carries over to AMD), or whatever processor you're using. However, as I said, that topic is large and in-depth, and far beyond the scope of a StackOverflow answer.
You can use the chrono library:
#include <chrono>
//*****//
auto start = std::chrono::steady_clock::now();
generator("file.txt")
auto end = std::chrono::steady_clock::now();
std::cout << "genarator() took "
<< std::chrono::duration_cast<std::chrono::microseconds>(end - start).count() << "us.\n";
You already have some nice C answers that also work with C++.
Here is a native C++ solution using <chrono>:
auto tbegin = std::chrono::high_resolution_clock::now();
...
auto tend = std::chrono::high_resolution_clock::now();
auto tduration = std::chrono::duration_cast<std::chrono::microseconds>(tend - tbegin).count();
The advantage is that you can switch from microseconds to milliseconds, seconds, or any other time unit very easily.
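For instance, a minimal variation of the snippet above reports milliseconds instead, changing only the duration type in the cast:
auto tduration_ms = std::chrono::duration_cast<std::chrono::milliseconds>(tend - tbegin).count();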
Note that the OS may limit the clock accuracy (typically around 15 milliseconds in a Windows environment), so this gives meaningful results only if your timings are well above that limit.
void generator(char filename[])
{
clock_t tStart = clock();
/* your code here */
printf("Time taken: %.2fs\n", (double)(clock() - tStart)/CLOCKS_PER_SEC);
}
Update: also add #include <ctime>.
Try this
#include <sys/time.h>

struct timeval tpstart, tpend;
double timeuse;

//get time before generator starts
gettimeofday(&tpstart, NULL);
//call generator function
generator(filename);
//get time after generator ends
gettimeofday(&tpend, NULL);
//calculate the used time
timeuse = 1000000 * (tpend.tv_sec - tpstart.tv_sec) + tpend.tv_usec - tpstart.tv_usec;
timeuse /= 1000000;
printf("Used Time: %f sec\n", timeuse);
#include <ctime>
.....
clock_t start = clock();
...//the code you want to get the execution time for
double elapsed_time = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
std::cout << elapsed_time << std::endl;//elapsed_time now contains the execution time(in seconds) of the code in between
will give you an approximate (not exact) execution time of the code between the first and second clock() calls
Temporarily raise the loop limit to 10000000, time it with a stopwatch, and divide the time by 1000.
I have a C++ program for which I want to obtain the compile time, the execution time, and performance and success measures of a test. Right now I am calculating time as follows:
clock_t starts = clock();
test_case();
clock_t ends = clock();
double time = (double)(ends - starts);
But I don't know whether "time" is the compile time or the execution time. If it is the compile time, how will I get the execution time, and if it is the execution time, how will I get the compile time? Also, I need performance and success measures for test_case(). Please suggest how I can get these.
The time that you are calculating is the execution time. clock() returns the number of clock ticks since your program started, so taking the difference of ends and starts gives you the execution time of test_case() in clock ticks, i.e. seconds multiplied by CLOCKS_PER_SEC, where CLOCKS_PER_SEC is the number of clock ticks per second.
Compile time calculation can be done using template metaprogramming.
To get an overview of template metaprogramming, have a look at: Compile Time Calculation
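As a small illustrative sketch (the classic compile-time factorial; the names here are just for illustration, and the compiler evaluates Factorial<5>::value during compilation, so nothing runs at execution time):
template <unsigned N>
struct Factorial
{
    static const unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0>
{
    static const unsigned value = 1;
};

// usage: unsigned x = Factorial<5>::value;  // 120, known at compile time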
If you are using UNIX, you can easily get compilation time using command like:
time g++ file_name.cpp
This will output the time required by g++ to compile file_name.cpp.
The above function outputs the execution time. I would prefer to use a query performance counter for finding the execution time.
However, we can find the build time if we are using a VC++ compiler.
The option can be found at Tools->Options->VC++ProjectSettings->BuildTime
The time is the execution time, because it is computed from ends and starts. But clock() is not a good way to measure code execution time, because of its low resolution (possibly only milliseconds).
I suggest you use the C++ standard library's <chrono>; you will get more accurate output. Example:
#include <chrono>
#include <iostream>

int main()
{
    auto begin_time = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < 100000; i++)
    {
        // do something
    }
    auto stop_time = std::chrono::high_resolution_clock::now();
    std::chrono::duration<double> elapsed = stop_time - begin_time;
    std::cout << "time taken: " << elapsed.count() * 1000 << " ms" << std::endl;
    return 0;
}
I hope this can help you.
EDIT: It appears to be functioning now. The code has been updated to show my revisions. Thank you all for your help.
I imagine I'm just stupid, but I'm attempting to use ctime to count CPU ticks through my entire program. I'm writing an encryption algorithm for a school project and I'm trying to include a timer so that I can add noise processes, equalizing the amount of time among different key/plaintext combinations.
Here is a little test for ctime:
#include <iostream>
#include <string>
#include <ctime>
int main (int arc, char **argv)
{
double elapsedTime;
const clock_t start = clock ();
int uselessInt = 0;
for (int i = 0; i <= 200; i++)
{
uselessInt = uselessInt * 2 / 3 + i;
std::cout << uselessInt << std::endl;
}
clock_t end = clock();
elapsedTime = static_cast<double>(end - start);
std::cout << elapsedTime << " CPU ticks have elapsed since this application's initiation." << std::endl;
return (0);
}
which prints:
0
1
2
4
/* ... long list of numbers ... */
591
594
0 CPU ticks have elapsed since this application's initiation.
[smalltock#localhost Desktop]$
I am using GCC (G++) and it appears that ctime/time.h simply isn't counting ticks like I want it to. Can anybody identify the problem? I'm a relative amateur in this language.
My two cents: when you call cin.get(), it waits for you to input something on the console. Did you type anything, or simply press Enter?
I ran your code without typing any text, just pressing Enter, and it gave the following output:
Test Text
It's a stone, Luigi... you didn't make it.
0 CPU ticks have elapsed since this application's initiation.
Real 0m0.700s
User 0m0.000s
Sys 0m0.061s
It may be because the granularity of clock() is rather coarse (relative to CLOCKS_PER_SEC) compared to the CPU time your program actually uses.
Also, there is a syntax error in the duration line: you either missed another ) or should delete the first (.
BTW:
Real is wall clock time - time from start to finish of the call.
User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only actual CPU time used in executing the process.
Sys is the amount of CPU time spent in the kernel within the process.
So you basically have 0 CPU time, since the program keeps waiting for I/O and does no CPU computation.
elapsedTime in your program is a measure of time in seconds, not a count of clock ticks. If you want ticks, use duration.
Since your program (presumably) spends the vast majority of its time blocked on I/O, not very many seconds are going to have gone by.
I need to "time" or benchmark a number crunching application written in C/C++. The problem is that the machine where I run the program is usually full of people doing similar things, so the CPUs are always at full load.
I thought about using functions from time.h like "get time of the day" (I don't remember the exact syntax, sorry) and similar, but I am afraid they would not be good for this case. Am I right?
And the program "time" from bash gave me some errors a long time ago.
Another problem is that sometimes I need timings in the range of 0.5 seconds or so.
Anybody has a hint?
P.S.: compiler is gcc and in some cases nvcc (NVIDIA)
P.S.2: in my benchmarks I just want to measure the execution time between two parts of the main function
You didn't mention which compiler you are using, but with GNU's g++ I usually set the -pg flag to build with profiling information.
Each time you run the application, it will create an output file that, when parsed with the gprof application, gives you lots of information about its performance.
See this for starters.
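As a rough sketch of the workflow (the file and binary names here are placeholders):
g++ -pg -o my_app file_name.cpp
./my_app
gprof ./my_app gmon.out > profile.txt
Running the instrumented binary writes gmon.out in the current directory, and gprof turns it into a readable report.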
From your other recent questions, you seem to be using MPI for parallelisation. Assuming this question is within the same context, then the simplest way to time your application would be to use MPI_Wtime().
From the man page:
This subroutine returns the current value of time as a double precision floating point number of seconds. This value represents elapsed time since some point in the past. This time in the past will not change during the life of the task. You are responsible for converting the number of seconds into other units if you prefer.
Example usage:
#include "mpi.h"
int main(int argc, char **argv)
{
int rc, taskid;
double t_start, t_end;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&taskid);
t_start = MPI_Wtime();
/* .... your computation kernel .... */
t_end = MPI_Wtime();
/* make sure all processes have completed */
MPI_Barrier(MPI_COMM_WORLD);
if (taskid == 0) {
printf("Elapsed time: %1.2f seconds\n", t_start - t_end);
}
MPI_Finalize();
return 0;
}
The advantage of this is that we let the underlying MPI library handle platform specific ways of handling time, although you might want to use MPI_Wtick() to determine the resolution of the timer used on each platform.
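For example, a quick way to check that resolution (MPI_Wtick() returns the number of seconds between consecutive clock ticks as a double):
printf("MPI_Wtime resolution: %g seconds\n", MPI_Wtick());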
It's hard to meaningfully compare timings from programs running for such a short time. Usually the solution is to run multiple times.
The time builtin in bash (or /usr/bin/time) will report the time actually used by the processor, which is more useful on a loaded machine than wall-clock time, but there is too much going on to compare timings at a fine grain; still, differences of orders of magnitude will be apparent.
You can also use clock to get a rough estimate:
#include <ctime>
#include <iostream>
struct Timer {
std::clock_t _start, _stop;
Timer() : _start(std::clock()) {}
void restart() { _start = std::clock(); }
void stop() { _stop = std::clock(); }
std::clock_t clocks() const { return _stop - _start; }
double secs() const { return double(clocks()) / CLOCKS_PER_SEC; }
};
int main() {
Timer t;
//run_some_code();
t.stop();
std::cout << "That took " << t.secs() << " seconds.\n";
return 0;
}
In fact I am trying to calculate the time a function takes to complete in my program.
So my approach is to get the system time when I call the function and the time when the function returns a value; subtracting the two values gives the time it took to complete.
If anyone can tell me a better approach, or just how to get the system time at an instant, it would be quite a help.
The approach I use when timing my code is the time() function. It returns a single numeric value representing the number of seconds since the epoch, which makes the subtraction part of the calculation easier.
Relevant code:
#include <time.h>
#include <iostream>

int main (int argc, char *argv[]) {
    time_t startTime, endTime, totalTime;   // time() returns a time_t

    startTime = time(NULL);
    /* relevant code to benchmark in here */
    endTime = time(NULL);

    totalTime = endTime - startTime;
    std::cout << "Runtime: " << totalTime << " seconds.";
    return 0;
}
Keep in mind this is wall-clock time. For CPU time, see Ben's reply.
Your question is totally dependent on WHICH system you are using. Each system has its own functions for getting the current time. For finding out how long the system has been running, you'd want to access one of the "high resolution performance counters". If you don't use a performance counter, you are usually limited to microsecond accuracy (or worse), which is almost useless when profiling the speed of a function.
In Windows, you can access the counter via the 'QueryPerformanceCounter()' function. This returns an arbitrary number that is different on each processor. To find out how many ticks in the counter == 1 second, call 'QueryPerformanceFrequency()'.
If you're coding under a platform other than windows, just google performance counter and the system you are coding under, and it should tell you how you can access the counter.
Edit (clarification)
This is C++: just include windows.h and link against Kernel32.lib (my hyperlink seems to have been removed; check out the documentation at: http://msdn.microsoft.com/en-us/library/ms644904.aspx). For C#, you can use the "System.Diagnostics.PerformanceCounter" class.
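For reference, here is a minimal sketch of the counter usage described above (Windows only; error checking omitted):
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER frequency, start, end;
    QueryPerformanceFrequency(&frequency);   // counts per second
    QueryPerformanceCounter(&start);

    // code to time goes here

    QueryPerformanceCounter(&end);
    double seconds = static_cast<double>(end.QuadPart - start.QuadPart) / frequency.QuadPart;
    std::cout << "Elapsed: " << seconds * 1000.0 << " ms\n";
    return 0;
}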
You can use time_t
Under Linux, try gettimeofday() for microsecond resolution, or clock_gettime() for nanosecond resolution.
(Of course the actual clock may have a coarser resolution.)
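For example, a minimal sketch using clock_gettime() with CLOCK_MONOTONIC (Linux; very old glibc versions may require linking with -lrt):
#include <ctime>
#include <cstdio>

int main()
{
    struct timespec start_ts, end_ts;
    clock_gettime(CLOCK_MONOTONIC, &start_ts);

    // code to time goes here

    clock_gettime(CLOCK_MONOTONIC, &end_ts);
    double seconds = (end_ts.tv_sec - start_ts.tv_sec)
                   + (end_ts.tv_nsec - start_ts.tv_nsec) / 1e9;
    printf("Elapsed: %f seconds\n", seconds);
    return 0;
}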
On some systems you don't have access to the time.h header. In that case, you can use the following code snippet to find out how long your program takes to run, with an accuracy of seconds.
void function()
{
    time_t currentTime;
    time(&currentTime);
    int startTime = currentTime;

    /* Your program starts from here */

    time(&currentTime);
    int timeElapsed = currentTime - startTime;
    cout << "It took " << timeElapsed << " seconds to run the program" << endl;
}
You can use the std::chrono solution described here: Getting an accurate execution time in C++ (micro seconds). You will get much better accuracy in your measurement; usually we measure code execution in milliseconds (ms) or even microseconds (us).
#include <chrono>
#include <iostream>
...
[YOUR METHOD/FUNCTION STARTING HERE]
auto start = std::chrono::high_resolution_clock::now();
[YOUR TEST CODE HERE]
auto elapsed = std::chrono::high_resolution_clock::now() - start;
long long microseconds = std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
std::cout << "Elapsed time: " << microseconds << " ms;