Calculating execution time in C++

I have written a C++ program, and I want to know how to calculate its execution time so that I don't exceed the time limit.
#include <iostream>
#include <cstdio> // needed for scanf
using namespace std;

int main()
{
    int st[10000], d[10000], p[10000], n, k, km, r, t, ym[10000];
    k = 0;
    km = 0;
    r = 0;
    scanf("%d", &t);
    for (int y = 0; y < t; y++)
    {
        scanf("%d", &n);
        for (int i = 0; i < n; i++)
        {
            cin >> st[i] >> d[i] >> p[i];
        }
        for (int i = 0; i < n; i++)
        {
            for (int j = i + 1; j < n; j++)
            {
                if ((d[i] + st[i]) <= st[j])
                {
                    k = p[i] + p[j];
                }
                if (k > km)
                    km = k;
            }
            if (km > r)
                r = km;
        }
        ym[y] = r;
    }
    for (int i = 0; i < t; i++)
    {
        cout << ym[i] << endl;
    }
    //system("pause");
    return 0;
}
This is my program, and I want it to run within the 3-second time limit. How can I do that?
Yes, sorry, I meant execution time!

If you have Cygwin installed, from its bash shell, run your executable, say MyProgram, using the time utility, like so:
/usr/bin/time ./MyProgram
This will report how long the execution of your program took -- the output would look something like the following:
real 0m0.792s
user 0m0.046s
sys 0m0.218s
Here real is the elapsed wall-clock time, while user and sys are the CPU time spent in user mode and in the kernel, respectively.
You could also manually instrument your program using the clock() library function, like so:
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t tStart = clock();
    /* Do your stuff here */
    printf("Time taken: %.2fs\n", (double)(clock() - tStart) / CLOCKS_PER_SEC);
    return 0;
}

With C++11, to measure the execution time of a piece of code, we can use the now() function:
auto start = chrono::steady_clock::now();
// Insert the code that will be timed
auto end = chrono::steady_clock::now();
// Store the time difference between start and end
auto diff = end - start;
If you want to print the time difference between start and end in the above code, you could use:
cout << chrono::duration <double, milli> (diff).count() << " ms" << endl;
If you prefer to use nanoseconds, you will use:
cout << chrono::duration <double, nano> (diff).count() << " ns" << endl;
The value of diff can also be truncated to an integer tick count, for example if you want the result expressed in nanoseconds:
auto diff_ns = chrono::duration_cast<chrono::nanoseconds>(diff);
cout << diff_ns.count() << " ns" << endl;
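Putting these pieces together, a minimal self-contained sketch (the summing loop is just a placeholder workload):
#include <chrono>
#include <iostream>
using namespace std;

int main()
{
    auto start = chrono::steady_clock::now();

    // placeholder workload: replace with the code you want to time
    volatile long long sum = 0;
    for (long long i = 0; i < 10000000; ++i)
        sum += i;

    auto end = chrono::steady_clock::now();
    auto diff = end - start;

    // print the elapsed time in milliseconds as a floating-point value
    cout << chrono::duration<double, milli>(diff).count() << " ms" << endl;
    return 0;
}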

OVERVIEW
I have written a small semantic hack on top of Ashutosh Mehra's answer. Your code stays really readable this way!
MACRO
#include <time.h>
#include <crtdbg.h> // _RPT1 lives here (Visual Studio)
#ifndef SYSOUT_F
#define SYSOUT_F(f, ...) _RPT1( 0, f, __VA_ARGS__ ) // For Visual Studio
#endif
#ifndef speedtest__
#define speedtest__(data) for (long blockTime = 0; (blockTime == 0 ? (blockTime = clock()) != 0 : false); SYSOUT_F(data "%.9fs", (double) (clock() - blockTime) / CLOCKS_PER_SEC))
#endif
USAGE
speedtest__("Block Speed: ")
{
// The code goes here
}
OUTPUT
Block Speed: 0.127000000s
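Note that _RPT1 is specific to Visual Studio; on other compilers you could, as a sketch, define SYSOUT_F in terms of fprintf before the macro block above:
#include <stdio.h>
#ifndef _MSC_VER // not Visual Studio: route the output through fprintf instead
#define SYSOUT_F(f, ...) fprintf(stderr, f, __VA_ARGS__)
#endif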

Note: the question was originally about compilation time, but later it turned out that the OP really meant execution time. But maybe this answer will still be useful for someone.
For Visual Studio: go to Tools / Options / Projects and Solutions / VC++ Project Settings and set Build Timing option to 'yes'. After that the time of every build will be displayed in the Output window.

You can try the code below for C++:
#include <chrono>
#include <iostream>

auto start = std::chrono::system_clock::now();
// Your Code to Execute //
auto end = std::chrono::system_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << "ms" << std::endl;
(For timing intervals, std::chrono::steady_clock is usually preferable to system_clock, since it cannot jump backwards.)

This looks like Dijkstra's algorithm. In any case, the running time will depend on n; the nested loops make it roughly O(n^2) per test case. If it takes more than 3 seconds, I can't see a way of speeding it up as written, since all the calculations it is doing need to be done.
Depending on what problem you're trying to solve, there might be a faster algorithm.

I have used the technique described above, and I found that the time reported by the Code::Blocks IDE was more or less similar to the result obtained (it may differ by a few microseconds).

If you are using C++, try the code below, as you may get 0 as the answer if you print the result from Ashutosh Mehra's answer without increasing the output precision.
#include <iostream>
#include <time.h>
using namespace std;

int main() {
    int a = 20000, sum = 0;
    clock_t start = clock();
    for (int i = 0; i < a; i++) {
        for (int k = 0; k < a; k++)
            sum += 1;
    }
    cout.precision(10);
    cout << fixed << float(clock() - start) / CLOCKS_PER_SEC << endl;
    return 0;
}
This is because float and double values are rounded when printed, so I used cout.precision(10) to set the output precision to 10 digits after the decimal point.

A shorter version of Ashutosh Mehra's answer:
/* other includes here */
#include <iostream>
#include <time.h>
using namespace std;

int main(void) {
    clock_t tStart = clock();
    /* stuff here */
    cout << "Time taken: " << (double)(clock() - tStart) / CLOCKS_PER_SEC;
    return 0;
}

Related

C++ process time [duplicate]

This question already has answers here:
Measuring execution time of a function in C++
(14 answers)
Closed 4 years ago.
In C# I would fire up the Stopwatch class to do some quick-and-dirty timing of how long certain methods take.
What is the equivalent of this in C++? Is there a high precision timer built in?
I used boost::timer for measuring the duration of an operation. It provides a very easy way to do the measurement while being platform independent. Here is an example:
boost::timer myTimer;
doOperation();
std::cout << myTimer.elapsed();
P.S. To overcome precision errors, it is best to measure operations that take at least a few seconds, especially when you are trying to compare several alternatives. If you want to measure something that takes very little time, put it in a loop: for example, run the operation 1000 times and then divide the total time by 1000, as sketched below.
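A minimal sketch of that loop-and-divide idea, using boost::timer as above (doOperation stands in for whatever you want to measure):
#include <boost/timer.hpp>
#include <iostream>

void doOperation() { /* placeholder for the operation to measure */ }

int main()
{
    const int runs = 1000;
    boost::timer myTimer;
    for (int i = 0; i < runs; ++i)
        doOperation();
    // average the total elapsed time over all runs
    std::cout << myTimer.elapsed() / runs << " s per operation" << std::endl;
}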
I've implemented a timer for situations like this before: I actually ended up with a class with two different implementations, one for Windows and one for POSIX.
The reason was that Windows has the QueryPerformanceCounter() function, which gives you access to a very accurate clock that is ideal for such timings.
On POSIX, however, this isn't available, so I just used boost.datetime's classes to store the start and end times and calculated the duration from those. It offers a "high resolution" timer, but the resolution is undefined and varies from platform to platform.
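As a rough sketch of that dual-implementation idea (using clock_gettime on POSIX instead of boost.datetime, which is a substitution on my part):
#ifdef _WIN32
#include <windows.h>
// Windows: QueryPerformanceCounter scaled by the counter frequency.
double now_seconds()
{
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&count);
    return double(count.QuadPart) / double(freq.QuadPart);
}
#else
#include <time.h>
// POSIX: clock_gettime with a monotonic clock.
double now_seconds()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}
#endif
// Usage: double t0 = now_seconds(); /* work */ double dt = now_seconds() - t0;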
I use my own version of Python's timeit function. The advantage of this function is that it repeats a computation as many times as necessary to obtain meaningful results. If the computation is very fast, it is repeated many times. In the end, you obtain the average time of all the repetitions. It does not use any non-standard functionality:
#include <ctime>
double clock_diff_to_sec(long clock_diff)
{
return double(clock_diff) / CLOCKS_PER_SEC;
}
template<class Proc>
double time_it(Proc proc, int N=1) // returns time in microseconds
{
std::clock_t const start = std::clock();
for(int i = 0; i < N; ++i)
proc();
std::clock_t const end = std::clock();
if(clock_diff_to_sec(end - start) < .2)
return time_it(proc, N * 5);
return clock_diff_to_sec(end - start) * (1e6 / N);
}
The following example uses the time_it function to measure the performance of different STL containers:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>
#include <list>
#include <deque>
#include <set>
#include <unordered_set> // std::tr1::unordered_set lives here on VS2008; use <tr1/unordered_set> on older GCC
#include <boost/bind.hpp>

void dummy_op(int i)
{
if(i == -1)
std::cout << i << "\n";
}
template<class Container>
void test(Container const & c)
{
std::for_each(c.begin(), c.end(), &dummy_op);
}
template<class OutIt>
void init(OutIt it)
{
for(int i = 0; i < 1000; ++i)
*it = i;
}
int main( int argc, char ** argv )
{
{
std::vector<int> c;
init(std::back_inserter(c));
std::cout << "vector: "
<< time_it(boost::bind(&test<std::vector<int> >, c)) << "\n";
}
{
std::list<int> c;
init(std::back_inserter(c));
std::cout << "list: "
<< time_it(boost::bind(&test<std::list<int> >, c)) << "\n";
}
{
std::deque<int> c;
init(std::back_inserter(c));
std::cout << "deque: "
<< time_it(boost::bind(&test<std::deque<int> >, c)) << "\n";
}
{
std::set<int> c;
init(std::inserter(c, c.begin()));
std::cout << "set: "
<< time_it(boost::bind(&test<std::set<int> >, c)) << "\n";
}
{
std::tr1::unordered_set<int> c;
init(std::inserter(c, c.begin()));
std::cout << "unordered_set: "
<< time_it(boost::bind(&test<std::tr1::unordered_set<int> >, c)) << "\n";
}
}
In case anyone is curious, here is the output I get (compiled with VS2008 in release mode; times in microseconds):
vector: 8.7168
list: 27.776
deque: 91.52
set: 103.04
unordered_set: 29.76
You can use the ctime library to get the time in seconds. Getting the time in milliseconds is implementation-specific. Here is a discussion exploring some ways to do that.
See also: How to measure time in milliseconds using ANSI C?
High-precision timers are platform-specific and so aren't specified by the C++ standard, but there are libraries available. See this question for a discussion.
I humbly submit my own micro-benchmarking mini-library (on Github). It's super simple -- the only advantage it has over rolling your own is that it already has the high-performance timer code implemented for Windows and Linux, and abstracts away the annoying boilerplate.
Just pass in a function (or lambda), the number of times it should be called per test run (default: 1), and the number of test runs (default: 100). The fastest test run (measured in fractional milliseconds) is returned:
// Example that times the compare-and-swap atomic operation from C++11
// Sample GCC command: g++ -std=c++11 -DNDEBUG -O3 -lrt main.cpp microbench/systemtime.cpp -o bench
#include "microbench/microbench.h"
#include <cstdio>
#include <atomic>
int main()
{
std::atomic<int> x(0);
int y = 0;
printf("CAS takes %.4fms to execute 100000 iterations\n",
moodycamel::microbench(
[&]() { x.compare_exchange_strong(y, 0); }, /* function to benchmark */
100000, /* iterations per test run */
100 /* test runs */
)
);
// Result: Clocks in at 1.2ms (12ns per CAS operation) in my environment
return 0;
}
#include <stdio.h>
#include <time.h>

clock_t start, end;
start = clock();
//Do stuff
end = clock();
printf("Took: %f\n", (double)(end - start) / CLOCKS_PER_SEC);
This might be an OS-dependent issue rather than a language issue.
If you're on Windows, you can access a timer with 10- to 16-millisecond resolution through GetTickCount() or GetTickCount64(). Just call it once at the start and once at the end, and subtract.
That was what I used before, if I recall correctly. The linked page has other options as well.
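A minimal sketch of that (GetTickCount64 returns milliseconds since system start):
#include <windows.h>
#include <stdio.h>

int main()
{
    ULONGLONG start = GetTickCount64();
    // ... work to be timed ...
    ULONGLONG elapsedMs = GetTickCount64() - start;
    printf("Took: %llu ms\n", elapsedMs);
    return 0;
}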
You may find this class useful.
Using the RAII idiom, it prints the text given at construction when the destructor is called, filling the elapsed-time placeholder with the proper value.
Example of use:
int main()
{
    trace_elapsed_time t("Elapsed time: %ts.\n");
    usleep(1.005 * 1e6);
}
Output:
Elapsed time: 1.00509s.
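The class itself is behind the link and not reproduced in the answer; a minimal sketch of the same RAII idea (this trace_elapsed_time is my own reconstruction, simplified to take a plain label rather than a format string):
#include <chrono>
#include <iostream>
#include <string>

class trace_elapsed_time {
public:
    explicit trace_elapsed_time(std::string label)
        : label_(std::move(label)),
          start_(std::chrono::steady_clock::now()) {}

    // On destruction, print the label followed by the elapsed seconds.
    ~trace_elapsed_time() {
        auto diff = std::chrono::steady_clock::now() - start_;
        std::cout << label_
                  << std::chrono::duration<double>(diff).count() << "s.\n";
    }

private:
    std::string label_;
    std::chrono::steady_clock::time_point start_;
};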

Inaccuracy in std::chrono::high_resolution_clock? [duplicate]

This question already has answers here:
What are the uses of std::chrono::high_resolution_clock?
(2 answers)
Closed 6 years ago.
So I was trying to use std::chrono::high_resolution_clock to time how long something takes to execute. I figured that you can just find the difference between the start time and the end time...
To check my approach works, I made the following program:
#include <iostream>
#include <chrono>
#include <vector>

void long_function();

int main()
{
    std::chrono::high_resolution_clock timer;
    auto start_time = timer.now();
    long_function();
    auto end_time = timer.now();
    auto diff_millis = std::chrono::duration_cast<std::chrono::duration<int, std::milli>>(end_time - start_time);
    std::cout << "It took " << diff_millis.count() << "ms" << std::endl;
    return 0;
}

void long_function()
{
    // Should take a while to execute.
    // This is calculating the first 100 million
    // fib numbers and storing them in a vector.
    // Well, it doesn't actually, because it
    // overflows very quickly, but the point is it
    // should take a few seconds to execute.
    std::vector<unsigned long> numbers;
    numbers.push_back(1);
    numbers.push_back(1);
    for (int i = 2; i < 100000000; i++)
    {
        numbers.push_back(numbers[i-2] + numbers[i-1]);
    }
}
The problem is, it just outputs 3000ms exactly, when it clearly wasn't actually that.
On shorter problems, it just outputs 0ms... What am I doing wrong?
EDIT: If it's of any use, I'm using the GNU GCC compiler with the -std=c++0x flag on.
The resolution of the high_resolution_clock depends on the platform.
Printing the following will give you an idea of the resolution of the implementation you use
std::cout << "It took " << std::chrono::nanoseconds(end_time - start_time).count() << std::endl;
I got a similar problem with g++ (rev5, built by the MinGW-W64 project) 4.8.1 under Windows 7.
int main()
{
    auto start_time = std::chrono::high_resolution_clock::now();
    int temp(1);
    const int n(1e7);
    for (int i = 0; i < n; i++)
        temp += temp;
    auto end_time = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end_time - start_time).count() << " ns.";
    return 0;
}
If n = 1e7, it displays 19999800 ns, but if n = 1e6, it displays 0 ns. The precision seems weak. (Note that with optimizations enabled, the compiler may also eliminate such a loop entirely, since temp is never used afterwards, which would likewise yield 0 ns.)

How to use clock() in C++

How do I call clock() in C++?
For example, I want to test how much time a linear search takes to find a given element in an array.
#include <iostream>
#include <cstdio>
#include <ctime>

int main() {
    std::clock_t start;
    double duration;

    start = std::clock();

    /* Your algorithm here */

    duration = (std::clock() - start) / (double) CLOCKS_PER_SEC;

    std::cout << "printf: " << duration << '\n';
}
An alternative solution, portable and with higher precision, available since C++11, is std::chrono.
Here is an example:
#include <iostream>
#include <chrono>
typedef std::chrono::high_resolution_clock Clock;
int main()
{
auto t1 = Clock::now();
auto t2 = Clock::now();
std::cout << "Delta t2-t1: "
<< std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
<< " nanoseconds" << std::endl;
}
Running this on ideone.com gave me:
Delta t2-t1: 282 nanoseconds
clock() returns the number of clock ticks since your program started. There is a related constant, CLOCKS_PER_SEC, which tells you how many clock ticks occur in one second. Thus, you can test any operation like this:
clock_t startTime = clock();
doSomeOperation();
clock_t endTime = clock();
clock_t clockTicksTaken = endTime - startTime;
double timeInSeconds = clockTicksTaken / (double) CLOCKS_PER_SEC;
On Windows at least, the only practically accurate measurement mechanism is QueryPerformanceCounter (QPC). std::chrono is implemented using it (since VS2015, if you use that), but it is not accurate to the same degree as using QueryPerformanceCounter directly. In particular, its claim to report at 1-nanosecond granularity is absolutely not correct. So, if you're measuring something that takes a very short amount of time (and your case might just be such a case), then you should use QPC, or the equivalent for your OS. I came up against this when measuring cache latencies, and I jotted down some notes that you might find useful, here:
https://github.com/jarlostensen/notesandcomments/blob/master/stdchronovsqcp.md
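For reference, direct QueryPerformanceCounter usage looks roughly like this:
#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq); // counts per second
    QueryPerformanceCounter(&t0);
    // ... code to measure ...
    QueryPerformanceCounter(&t1);
    double seconds = double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    printf("%.9f s\n", seconds);
    return 0;
}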
#include <iostream>
#include <ctime>
#include <cstdlib> // _sleep() -- a nonstandard function that waits a given number of milliseconds
using namespace std;

int main()
{
    clock_t cl;        // declare a clock value
    cl = clock();      // starting time
    _sleep(5167);      // insert code here
    cl = clock() - cl; // ticks elapsed up to the end point
    _sleep(1000);      // testing to see if it actually stops at the end point
    cout << cl / (double)CLOCKS_PER_SEC << endl; // prints the elapsed time in seconds
    return 0;
}
// outputs "5.17"
You can measure how long your program has been running. The following functions help measure the CPU time since the start of the program:
C++: (double)clock() / CLOCKS_PER_SEC, with <ctime> included.
Python: time.clock() returns a floating-point value in seconds.
Java: System.nanoTime() returns a long value in nanoseconds.
My reference: week 1 of the algorithmic toolbox course, part of the Data Structures and Algorithms specialization by the University of California San Diego & the National Research University Higher School of Economics.
So you can add this line of code after your algorithm:
cout << (double)clock() / CLOCKS_PER_SEC;
Expected output: the CPU time consumed by your program so far, in seconds.
You might be interested in a timer like this:
H : M : S . Msec.
The code, for Linux:
#include <iostream>
#include <unistd.h>
using namespace std;

void newline();

int main() {
    int msec = 0;
    int sec = 0;
    int min = 0;
    int hr = 0;

    //cout << "Press any key to start:";
    //char start = _getch();

    for (;;)
    {
        newline();
        if (msec == 1000)
        {
            ++sec;
            msec = 0;
        }
        if (sec == 60)
        {
            ++min;
            sec = 0;
        }
        if (min == 60)
        {
            ++hr;
            min = 0;
        }
        cout << hr << " : " << min << " : " << sec << " . " << msec << endl;
        ++msec;
        usleep(1000); // sleep ~1 ms so msec roughly counts milliseconds (loop overhead makes this approximate)
    }
    return 0;
}

void newline()
{
    cout << "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";
}


High Resolution Timing Part of Your Code

I want to measure the speed of a function within a loop. But why does my way of doing it always print "0" instead of a high-resolution timing with 9 decimal digits of precision (i.e., in nano/microseconds)?
What's the correct way to do it?
#include <iomanip>
#include <iostream>
#include <ctime>

int main() {
    for (int i = 0; i < 100; i++) {
        std::clock_t startTime = std::clock();
        // a very fast function in the middle
        std::cout << "Time: " << std::setprecision(9)
                  << (std::clock() - startTime + 0.0) / CLOCKS_PER_SEC << std::endl;
    }
    return 0;
}
Related Questions:
How to overcome clock()'s low resolution
High Resolution Timer with C++ and linux
Equivalent of Windows’ QueryPerformanceCounter on OSX
Move your time-measurement calls outside the for () { .. } loop, then divide the total execution time by the number of operations in your testing loop.
#include <iostream>
#include <ctime>
using namespace std;

#define NUMBER 10000 // the number of operations

void func() { /* a very fast function in the middle */ }

// get the difference between start and end time, convert to
// milliseconds, and divide by the number of operations
double diffclock(clock_t clock1, clock_t clock2)
{
    double diffticks = clock1 - clock2;
    double diffms = diffticks * 1000.0 / CLOCKS_PER_SEC / NUMBER;
    return diffms;
}

int main() {
    // start a timer here
    clock_t begin = clock();

    // execute your function several times (at least 10'000)
    for (int i = 0; i < NUMBER; i++) {
        // a very fast function in the middle
        func();
    }

    // stop timer here
    clock_t end = clock();

    // display the average time per operation
    cout << "Execution time: " << diffclock(end, begin) << " ms." << endl;
    return 0;
}
Note: std::clock() lacks sufficient precision for profiling. Reference.
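Given that caveat, a minimal sketch of the same loop measurement using std::chrono::steady_clock instead of std::clock() (func is the same placeholder as above):
#include <chrono>
#include <iostream>

void func() { /* a very fast function in the middle */ }

int main()
{
    const int number = 10000;
    auto begin = std::chrono::steady_clock::now();
    for (int i = 0; i < number; i++)
        func();
    auto end = std::chrono::steady_clock::now();
    // average wall-clock time per call, in milliseconds
    std::cout << "Execution time: "
              << std::chrono::duration<double, std::milli>(end - begin).count() / number
              << " ms." << std::endl;
    return 0;
}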
A few pointers:
I would be careful with the optimizer: it may throw away all your code if it decides the code doesn't do anything.
You might want to run the loop 100000 times.
Store the current time in a variable before doing the total-time calculation.
Run your program several times.
If you need higher resolution, the only way to go is platform dependent.
On Windows, check out the QueryPerformanceCounter/QueryPerformanceFrequency API's.
On Linux, look up clock_gettime().
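A short clock_gettime sketch for Linux (CLOCK_MONOTONIC is a reasonable default for interval timing; on older glibc, link with -lrt):
#include <time.h>
#include <stdio.h>

int main()
{
    timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    // ... code to measure ...
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns\n", ns);
    return 0;
}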
See a question I asked about the same thing: apparently clock()'s resolution is not guaranteed to be so high.
C++ obtaining milliseconds time on Linux -- clock() doesn't seem to work properly
Try the gettimeofday function, or Boost.
If you need platform independence you need to use something like ACE_High_Res_Timer (http://www.dre.vanderbilt.edu/Doxygen/5.6.8/html/ace/a00244.html)
You might want to look into using OpenMP:
#include <omp.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    double start = omp_get_wtime();
    // code to be checked
    double end = omp_get_wtime();
    double result = end - start;
    printf("%f s\n", result); // elapsed wall-clock time in seconds
    return 0;
}
}