Function execution time - C++

I want to find out the execution time of my function written in C++ on Linux. I found lots of posts regarding that and tried all the methods mentioned in this link: Timer Methods for calculating time. Following are the execution times of my function as reported by each method:
time() : 0 seconds
clock() : 0.01 seconds
gettimeofday() : 0.002869 seconds
rdtsc() : 0.00262336 seconds
clock_gettime() : 0.00672151 seconds
chrono : 0.002841 seconds
Please help me decide which method is reliable, as all the results differ in their readings. I read that the OS keeps switching between different tasks, so the readings cannot be expected to be very accurate. Is there a way I can calculate just the time the CPU spends on my function? I have heard about using a profiling tool, but have not found an example for timing just one function yet. Please guide me.

Read time(7).
For various reasons (and depending upon your actual hardware, i.e. your motherboard) the clock is not as accurate as you want it to be.
So add a loop that repeats your function many times, or change its input so that it runs longer. Ensure that the total program's execution time (as given by time(1)...) is at least roughly a second (if possible, ensure that you get at least half a second of CPU time).
For profiling, compile and link with g++ -Wall -pg -O1, then use gprof(1) (there are more sophisticated ways to profile, e.g. oprofile ...).
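For example, a typical gprof session might look roughly like this (file names here are placeholders):
$ g++ -Wall -pg -O1 myprog.cpp -o myprog
$ ./myprog
$ gprof ./myprog gmon.out
Running the instrumented program writes gmon.out in the current directory, and gprof then prints a flat profile with the time spent in each function.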
See also this answer to a very similar question (asked by the same user, Zara).

If you are doing a simple test to find out which implementation is better, then any method is okay. For example:
const int MAX = 10000; // times to execute the function

void benchmark0() {
    auto begin = std::chrono::steady_clock::now();
    for (int i = 0; i < MAX; ++i)
        method0();
    auto now = std::chrono::steady_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(now - begin);
    std::cout << "Cost of method0() is " << elapsed.count() << " milliseconds" << std::endl;
}

void benchmark1() { /* almost the same as benchmark0, but calls method1 */ }

int main() {
    benchmark0();
    benchmark0();
    benchmark1();
    benchmark1();
}
You might have noticed that benchmark0 and benchmark1 are each called twice in a row: because of CPU caches, I/O buffering and so on, the first call may be slower, so running each benchmark twice lets you discard the gain/loss due to caching and measure the pure execution time.
Of course, you can also profile the program by compiling with g++ -pg and using gprof, as described above.

Related

std::clock() and CLOCKS_PER_SEC

I want to use a timer function in my program. Following the example at How to use clock() in C++, my code is:
int main()
{
    std::clock_t start = std::clock();
    while (true)
    {
        double time = (std::clock() - start) / (double)CLOCKS_PER_SEC;
        std::cout << time << std::endl;
    }
    return 0;
}
On running this, it begins to print out numbers. However, it takes about 15 seconds for that number to reach 1. Why does it not take 1 second for the printed number to reach 1?
Actually it is a combination of what has been posted. Basically, as your program is running in a tight loop, CPU time should increase about as fast as wall-clock time.
But since your program is writing to stdout and the terminal has limited buffer space, your program will block whenever that buffer is full, until the terminal has had enough time to print more of the generated output.
This is of course much more expensive, CPU-wise, than generating the strings from the clock values, so most of the CPU time is spent in the terminal and graphics driver. It seems your system needs about 14 times as much CPU power to display those timestamps as it does to generate the strings to write.
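One way to see this is to keep the measurement loop but take the terminal mostly out of the picture; here is a rough sketch that prints at most once per second, everything else being as in the original program:
#include <ctime>
#include <iostream>
int main()
{
    std::clock_t start = std::clock();
    double last_printed = 0.0;
    while (true)
    {
        double time = (std::clock() - start) / (double)CLOCKS_PER_SEC;
        if (time - last_printed >= 1.0) // print at most once per CPU-second
        {
            std::cout << time << std::endl;
            last_printed = time;
        }
    }
    return 0;
}
With the printing throttled like this, the number should climb at roughly the rate the CPU time is actually consumed.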
std::clock returns cpu time, not wall time. That means the number of cpu-seconds used, not the time elapsed. If your program uses only 20% of the CPU, then the cpu-seconds will only increase at 20% of the speed of wall seconds.
std::clock
Returns the approximate processor time used by the process since the beginning of an implementation-defined era related to the program's execution. To convert result value to seconds divide it by CLOCKS_PER_SEC.
So it will not return a second until the program has used an actual second of CPU time.
If you want to deal with actual (wall-clock) time, I suggest you use the clocks provided by <chrono>, like std::steady_clock or std::high_resolution_clock.
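Here is a small sketch of the difference; the sleep consumes wall-clock time but almost no CPU time:
#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>
int main()
{
    std::clock_t c0 = std::clock();
    auto t0 = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::seconds(1)); // wall time passes, CPU mostly idle
    std::clock_t c1 = std::clock();
    auto t1 = std::chrono::steady_clock::now();
    std::cout << "CPU seconds:  " << double(c1 - c0) / CLOCKS_PER_SEC << '\n';
    std::cout << "Wall seconds: " << std::chrono::duration<double>(t1 - t0).count() << '\n';
}
The first number stays near zero while the second is close to one, which is exactly the distinction described above.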

C++ precise measure time in decimal ms

I am measuring the time of sorting algorithms like Bubble, Insertion, Selection and Quick sort.
I am using this for that purpose:
long int before = GetTickCount();
QuickSort(pole,0,dlzka-1);
long int after = GetTickCount();
double dif = double((after - before));
cout << "Quick Sort with time "<< dif << " ms " << endl;
I am sorting an array of 30 000 integers and it works fine for the other sorts, but QuickSort is probably so fast that it sorts 30k integers in less than 1 ms, and then my timer says 0 ms, which looks like a mistake.
I want it to print, for example, 0.01 ms, just so it looks like it ran correctly.
Thank you.
When you benchmark, you never benchmark just one run. Your timer is not precise/accurate enough to give meaningful results across that tiny amount of time.
For example, the documentation for GetTickCount says:
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds.
So, it is plainly obvious that obtaining a value of 0.01ms is folly.
Instead, benchmark many runs, then divide by the number of times you ran it.
Put your code into a loop that you run 1000 times, with the clock started and stopped outside of that loop. Then divide the result by 1000. Or, if you like, the result of the clock will now be in µs instead of in ms.
If your loop is very fast, you may need more than 1000 repetitions to get a meaningful measurement. You could run 10,000, 100,000, ... etc times until you get a "reasonable number of milliseconds".
When the piece of code you are testing is very fast, the overhead of the loop may become significant; in that case, you might run an "empty loop" and subtract the two results to give you the "net" timing of the inner part of the loop only.
It is rare, however, that this is something you need to do - most often you are trying to compare different algorithms, and as long as the overhead of the loop is the same it doesn't matter that it exists - the faster algorithm will still be faster.
One more thought - and this is pretty important: if you sort things in the first pass through the loop, and your algorithm speed depends on whether the data is sorted or not, you will get a different answer for multiple passes than you get for a single pass. Thus you need to make sure that you are using the same inputs for every pass through the algorithm. This might mean that you cannot use in-place sorting, or that you copy the unsorted data back into the "to be sorted" array at every pass of the algorithm.
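Putting those pieces together, a rough sketch might look like the following (std::sort stands in for your QuickSort(pole, 0, dlzka - 1), and the run count is arbitrary):
#include <algorithm>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <vector>

int main()
{
    const int RUNS = 1000; // increase until the total takes a "reasonable number of milliseconds"
    std::vector<int> unsorted(30000);
    for (int& x : unsorted) x = std::rand();

    auto begin = std::chrono::steady_clock::now();
    for (int r = 0; r < RUNS; ++r)
    {
        std::vector<int> data = unsorted;      // same unsorted input on every pass
        std::sort(data.begin(), data.end());   // stand-in for QuickSort(pole, 0, dlzka - 1)
    }
    auto end = std::chrono::steady_clock::now();
    double total_ms = std::chrono::duration<double, std::milli>(end - begin).count();
    std::cout << "Quick Sort: " << total_ms / RUNS << " ms per run (including the copy)" << std::endl;
}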
Another option: there is a good article on high-precision timing that explains the use of the clock_gettime() function, with its various options and flavors. On some systems this will allow you to use higher-resolution measurements. It is still always a good idea to do multiple runs, or even multiple runs of multiple runs, so you can compute statistics and thus come up with a confidence interval.
If you are using C++11:
std::chrono::high_resolution_clock represents the clock with the smallest tick period provided by the implementation.
If your compiler supports C++11 with std::chrono, this is the best way to measure time with high accuracy; it is cross-platform and part of the standard library.
#include <chrono>
#include <iostream>
#include <iomanip>

std::chrono::steady_clock::time_point startTime = std::chrono::steady_clock::now();
doWork();
std::chrono::steady_clock::duration elapsedTime = std::chrono::steady_clock::now() - startTime;

double duration = std::chrono::duration_cast<std::chrono::duration<double>>(elapsedTime).count();
std::cout << std::fixed << std::setprecision(9)
          << "Milliseconds: " << duration * 1000 << std::endl;
To use C++11 in GCC, you run g++ -std=c++11 -o app main.cpp. For the Visual Studio compiler, you need 2012 or higher to use chrono.

C++, Timer, Milliseconds

#include <iostream>
#include <conio.h>
#include <ctime>
using namespace std;

double diffclock(clock_t clock1, clock_t clock2)
{
    double diffticks = clock1 - clock2;
    double diffms = diffticks / (CLOCKS_PER_SEC / 1000);
    return diffms;
}

int main()
{
    clock_t start = clock();
    for (int i = 0; ; i++)
    {
        if (i == 10000) break;
    }
    clock_t end = clock();
    cout << diffclock(start, end) << endl;
    getch();
    return 0;
}
So my problem comes down to it returning 0. To be straight, I want to check how much time my program takes to operate...
I found tons of stuff over the internet, but mostly it comes down to the same point of getting a 0 because the start and the end are the same.
This problem goes for C++, remember :<
There are a few problems in here. The first is that you obviously switched the start and stop times when passing them to the diffclock() function. The second problem is optimization: any reasonably smart compiler with optimizations enabled will simply throw the entire loop away, as it does not have any side effects. But even if you fix the above problems, the program would most likely still print 0. Modern CPUs do billions of operations per second and employ sophisticated out-of-order execution, branch prediction and tons of other techniques, so the CPU itself may effectively make your loop disappear. And even if it doesn't, you'd need a lot more than 10K iterations to make it run noticeably long. You'd probably need your program to run for a second or two before clock() reflects anything.
But the most important problem is clock() itself. That function is not suitable for any kind of performance measurement whatsoever. What it does is give you an approximation of the processor time used by the program. Aside from the vague nature of the approximation method that any given implementation might use (since the standard doesn't require anything specific), the POSIX standard also requires CLOCKS_PER_SEC to be equal to 1000000, independent of the actual resolution. In other words, it doesn't matter how precise the clock is and it doesn't matter at what frequency your CPU is running. To put it simply, it is a totally useless number and therefore a totally useless function. The only reason it still exists is probably historical. So, please do not use it.
To achieve what you are looking for, people used to read the CPU Time Stamp Counter, also known as "RDTSC" after the corresponding CPU instruction used to read it. These days, however, this is also mostly useless because:
Modern operating systems can easily migrate the program from one CPU to another. You can imagine that reading the time stamp on one CPU after running for a second on another doesn't make a lot of sense. Only in recent Intel CPUs is the counter synchronized across CPU cores. All in all, it is still possible to do this, but a lot of extra care must be taken (i.e. one can set up the affinity for the process, etc. etc.).
Measuring CPU instructions of the program oftentimes does not give an accurate picture of how much time it is actually using. This is because in real programs there could be some system calls where the work is performed by the OS kernel on behalf of the process. In that case, that time is not included.
It could also happen that the OS suspends execution of the process for a long time. Even though it took only a few instructions to execute, to the user it seemed like a second. Such a performance measurement would then be useless.
So what to do?
When it comes to profiling, a tool like perf must be used. It can track the number of CPU clocks, cache misses, branches taken, branches missed, the number of times the process was moved from one CPU to another, and so on. It can be used as a standalone tool, or embedded into your application (with something like PAPI).
And if the question is about the actual time spent, people use a wall clock. Preferably a high-precision one, that is also not subject to NTP adjustments (i.e. monotonic). That shows exactly how much time elapsed, no matter what was going on. For that purpose clock_gettime() can be used; it is part of SUSv2 and POSIX.1-2001. Given that you use getch() to keep the terminal open, I'd assume you are using Windows. There, unfortunately, you don't have clock_gettime(), and the closest thing is the performance counters API:
BOOL QueryPerformanceFrequency(LARGE_INTEGER *lpFrequency);
BOOL QueryPerformanceCounter(LARGE_INTEGER *lpPerformanceCount);
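A minimal sketch of using those two calls (Windows only) could look like this:
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER frequency, begin, end;
    QueryPerformanceFrequency(&frequency);          // counts per second
    QueryPerformanceCounter(&begin);

    volatile int i = 0;
    while (i < 10000) ++i;                          // the code being measured

    QueryPerformanceCounter(&end);
    double us = (end.QuadPart - begin.QuadPart) * 1000000.0 / frequency.QuadPart;
    std::cout << "It took me " << us << " microseconds." << std::endl;
}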
For a portable solution, your best bet is std::chrono::high_resolution_clock. It was introduced in C++11, but is supported by most industrial-grade compilers (GCC, Clang, MSVC).
Below is an example of how to use it. Please note that since I know my CPU will do 10000 increments of an integer much faster than in a millisecond, I have changed the units to microseconds. I've also declared the counter as volatile in the hope that the compiler won't optimize it away.
#include <ctime>
#include <chrono>
#include <iostream>

int main()
{
    volatile int i = 0; // "volatile" is to ask the compiler not to optimize the loop away.
    auto start = std::chrono::steady_clock::now();
    while (i < 10000) {
        ++i;
    }
    auto end = std::chrono::steady_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "It took me " << elapsed.count() << " microseconds." << std::endl;
}
When I compile and run it, it prints:
$ g++ -std=c++11 -Wall -o test ./test.cpp && ./test
It took me 23 microseconds.
Hope it helps. Good Luck!
At a glance, it seems like you are subtracting the larger value from the smaller value. You call:
diffclock( start, end );
But then diffclock is defined as:
double diffclock( clock_t clock1, clock_t clock2 ) {
    double diffticks = clock1 - clock2;
    double diffms = diffticks / ( CLOCKS_PER_SEC / 1000 );
    return diffms;
}
Apart from that, it may have something to do with the way you are converting units. The use of 1000 to convert to milliseconds is different on this page:
http://en.cppreference.com/w/cpp/chrono/c/clock
The problem appears to be that the loop is just too short. I tried it on my system and it gave 0 ticks. I checked what diffticks was and it was 0. Increasing the loop size to 100000000 gave a noticeable time lag, and I got -290 as output (a bug: I think diffticks should be clock2 - clock1, so we should get 290 and not -290). I also tried changing "1000" to "1000.0" in the division and that didn't help.
Compiling with optimization does remove the loop, so you either have to avoid optimization or make the loop "do something", e.g. increment a counter other than the loop counter in the loop body. At least that's what GCC does.
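For example, one way to give the loop a side effect is to accumulate into a separate volatile variable (a rough sketch):
#include <ctime>
#include <iostream>

int main()
{
    volatile long long sink = 0;            // writes to this cannot be optimized away
    clock_t start = clock();
    for (int i = 0; i < 100000000; i++)
        sink += i;                          // the loop now "does something"
    clock_t end = clock();
    std::cout << double(end - start) * 1000.0 / CLOCKS_PER_SEC << " ms" << std::endl;
}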
Note: This is available since C++11.
You can use the std::chrono library.
std::chrono has two distinct kinds of objects: time points and durations. A time point represents a point in time, and a duration, as the term already suggests, represents an interval or span of time.
This C++ library allows us to subtract two time points to get the duration of time that passed in the interval. So you can set a starting point and a stopping point. Using functions you can also convert them into appropriate units.
Example using high_resolution_clock (which is one of the three clocks this library provides):
#include <chrono>
using namespace std::chrono;
//before running function
auto start = high_resolution_clock::now();
//after calling function
auto stop = high_resolution_clock::now();
Subtract the stop and start time points and cast the result into the required units using the duration_cast() function. Predefined units are nanoseconds, microseconds, milliseconds, seconds, minutes, and hours.
auto duration = duration_cast<microseconds>(stop - start);
cout << duration.count() << endl;
First of all, you should subtract end - start, not vice versa.
The documentation says that clock() returns -1 if the value is not available; did you check that?
What optimization level do you use when compiling your program? If optimization is enabled, the compiler can effectively eliminate your loop entirely.

How to calculate and print clock_t time roughly

I am timing how long it takes to do three different types of searches: sequential, recursive binary, and iterative binary. I have those in place, and they do iterate through and finish the search. My problem is that when I time them all, I get 0 every time, even if I make an array of 100,000 and have it search for something not in the array. If I set a breakpoint in the search it obviously makes the time longer, and it gives me a reasonable time that I can work with. But otherwise it is always 0. Here is my code; it is similar for all three search timers.
clock_t recStart = clock();
mySearch.recursiveSearch(SEARCH_INT);
clock_t recEnd = clock();
clock_t recDiff = recEnd - recStart;
double recClockTime = (double)recDiff/(double)CLOCKS_PER_SEC;
cout << recClockTime << endl;
cout << CLOCKS_PER_SEC << endl;
cout << recClockTime << endl;
For the last two I get 1000 and 0.
Am I doing something wrong here? Or is it in my search Object?
clock() is not an accurate timer, and it just doesn't work well for timing short intervals.
C says clock returns the implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation.
If between two successive clock calls your program takes less time than one unit of the clock function, you could get 0. POSIX defines that unit with CLOCKS_PER_SEC as 1000000 (the unit is then 1 microsecond).
(http://pubs.opengroup.org/onlinepubs/009604499/functions/clock.html)
To measure clock cycles on x86/x64 you can read the CPU Time Stamp Counter with the rdtsc instruction (which can be done with inline assembly). Note that it returns a time stamp, not the number of seconds elapsed, so you also need to retrieve the CPU frequency.
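On GCC and Clang you do not even need hand-written assembly; the __rdtsc intrinsic reads the register for you. A sketch (x86/x64 only; the result is in CPU cycles rather than seconds):
#include <x86intrin.h>   // __rdtsc on GCC/Clang; MSVC provides it in <intrin.h>
#include <iostream>

int main()
{
    unsigned long long before = __rdtsc();
    // ... code being measured ...
    unsigned long long after = __rdtsc();
    std::cout << (after - before) << " cycles" << std::endl;
}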
However, the best way to get accurate time in seconds depends on your platform.
To sum up, it's virtually impossible to achieve calculating and printing clock_t time in seconds accurately. You might want to see this on Stackoverflow to find a better approach (if accuracy is top priority).
clock() just doesn't have enough resolution - here is one good discussion/blog on that topic
http://www.guyrutenberg.com/2007/09/10/resolution-problems-in-clock/
I think you have two options: either use clock_gettime, or, even better, have you considered using OProfile or CodeAnalyst?
I personally prefer to use tools; OProfile is good. I have not used CodeAnalyst before, and then there are also Valgrind and gprof.
If you insist on using clock_gettime - please check this out
http://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/
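For reference, a bare-bones clock_gettime sketch on Linux (using CLOCK_MONOTONIC; very old glibc versions need linking with -lrt) looks roughly like:
#include <time.h>
#include <iostream>

int main()
{
    timespec begin, end;
    clock_gettime(CLOCK_MONOTONIC, &begin);
    // ... code being measured ...
    clock_gettime(CLOCK_MONOTONIC, &end);
    double seconds = (end.tv_sec - begin.tv_sec) + (end.tv_nsec - begin.tv_nsec) / 1e9;
    std::cout << seconds << " seconds" << std::endl;
}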

Measuring the runtime of C++ code?

I want to measure the runtime of my C++ code. Executing my code takes about 12 hours and I want to write this time at the end of execution of my code. How can I do it in my code?
Operating system: Linux
If you are using C++11 you can use system_clock::now():
auto start = std::chrono::system_clock::now();
/* do some work */
auto end = std::chrono::system_clock::now();
auto elapsed = end - start;
std::cout << elapsed.count() << '\n';
You can also specify the granularity to use for representing a duration:
// this constructs a duration object using milliseconds
auto elapsed =
std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
// this constructs a duration object using seconds
auto elapsed =
std::chrono::duration_cast<std::chrono::seconds>(end - start);
If you cannot use C++11, then have a look at chrono from Boost.
The best thing about using such standard libraries is that their portability is really high (e.g., they both work on Linux and Windows). So you do not need to worry too much if you decide to port your application afterwards.
These libraries follow a modern C++ design too, as opposed to C-like approaches.
EDIT: The example above can be used to measure wall-clock time. That is not, however, the only way to measure the execution time of a program. First, we can distinguish between user and system time:
User time: The time spent by the program running in user space.
System time: The time spent by the program running in system (or kernel) space. A program enters kernel space for instance when executing a system call.
Depending on the objectives it may be necessary or not to consider system time as part of the execution time of a program. For instance, if the aim is to just measure a compiler optimization on the user code then it is probably better to leave out system time. On the other hand, if the user wants to determine whether system calls are a significant overhead, then it is necessary to measure system time as well.
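On Linux you can read both components directly with the POSIX getrusage call; a minimal sketch (RUSAGE_SELF accounts for the whole process):
#include <sys/resource.h>   // getrusage, RUSAGE_SELF
#include <iostream>

int main()
{
    // ... run the work you want to account for ...
    rusage usage;
    getrusage(RUSAGE_SELF, &usage);
    double user_s   = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1e6;
    double system_s = usage.ru_stime.tv_sec + usage.ru_stime.tv_usec / 1e6;
    std::cout << "user: " << user_s << " s, system: " << system_s << " s\n";
}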
Moreover, since most modern systems are time-shared, different programs may compete for several computing resources (e.g., CPU). In such a case, another distinction can be made:
Wall-clock time: By using wall-clock time the execution of the program is measured in the same way as if we were using an external (wall) clock. This approach does not consider the interaction between programs.
CPU time: In this case we only count the time that a program is actually running on the CPU. If a program (P1) is co-scheduled with another one (P2), and we want to get the CPU time for P1, this approach does not include the time while P2 is running and P1 is waiting for the CPU (as opposed to the wall-clock time approach).
For measuring CPU time, Boost includes a set of extra clocks:
process_real_cpu_clock, captures wall clock CPU time spent by the current process.
process_user_cpu_clock, captures user-CPU time spent by the current process.
process_system_cpu_clock, captures system-CPU time spent by the current process.
A tuple-like class process_cpu_clock, that captures real, user-CPU, and system-CPU process times together.
A thread_clock, a thread steady clock giving the time spent by the current thread (when supported by the platform).
Unfortunately, C++11 does not have such clocks. But Boost is a widely-used library and these extra clocks will probably be incorporated into a future C++ standard at some point. So, if you use Boost, you will be ready when the new C++ standard adds them.
Finally, if you want to measure the time a program takes to execute from the command line (as opposed to adding some code into your program), you may have a look at the time command, just as #BЈовић suggests. This approach, however, would not let you measure individual parts of your program (e.g., the time it takes to execute a function).
Use std::chrono::steady_clock and not std::chrono::system_clock for measuring run time in C++11. The reason is (quoting system_clock's documentation):
on most systems, the system time can be adjusted at any moment
while steady_clock is monotonic and is better suited for measuring intervals:
Class std::chrono::steady_clock represents a monotonic clock. The time points of this clock cannot decrease as physical time moves forward. This clock is not related to wall clock time, and is best suitable for measuring intervals.
Here's an example:
auto start = std::chrono::steady_clock::now();
// do something
auto finish = std::chrono::steady_clock::now();
double elapsed_seconds = std::chrono::duration_cast<
std::chrono::duration<double> >(finish - start).count();
A small practical tip: if you are measuring run time and want to report seconds std::chrono::duration_cast<std::chrono::seconds> is rarely what you need because it gives you whole number of seconds. To get the time in seconds as a double use the example above.
You can use time to launch your program. When the program ends, time prints nice statistics about the run. It is easy to configure what to print. By default, it prints the user and CPU times it took to execute the program.
EDIT: Take note that any measurement taken from within the code is not exact, because your application will get blocked by other programs, hence giving you skewed values*.
* By skewed values, I mean that it is easy to get the time it took to execute the program, but that time varies depending on the CPU load during the program's execution. To get a relatively stable measurement that doesn't depend on the CPU load, you can execute the application using time and use the reported CPU time as the measurement result.
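For example (the numbers below are purely illustrative; the format is what bash's built-in time prints):
$ time ./app
real    0m12.345s
user    0m11.810s
sys     0m0.230s
Here real is the wall-clock time, user is the CPU time spent in user space, and sys is the CPU time spent in the kernel on behalf of the program.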
I used something like this in one of my projects:
#include <sys/time.h>
struct timeval start, end;
gettimeofday(&start, NULL);
//Compute
gettimeofday(&end, NULL);
double elapsed = ((end.tv_sec - start.tv_sec) * 1000)
+ (end.tv_usec / 1000 - start.tv_usec / 1000);
This is for milliseconds and it works both for C and C++.
This is the code I use:
const auto start = std::chrono::steady_clock::now();
// Your code here.
const auto end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed = end - start;
std::cout << "Time in seconds: " << elapsed.count() << '\n';
You don't want to use std::chrono::system_clock because it is not monotonic! If the user changes the time in the middle of your code your result will be wrong - it might even be negative. std::chrono::high_resolution_clock might be implemented using std::chrono::system_clock so I wouldn't recommend that either.
This code also avoids ugly casts.
If you wish to print the measured time with printf(), you can use this:
auto start = std::chrono::system_clock::now();
/* measured work */
auto end = std::chrono::system_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
printf("Time = %lld ms\n", static_cast<long long int>(elapsed.count()));
You could also try some timer classes that start and stop automatically, and gather statistics on the average, maximum and minimum time spent in any block of code, as well as the number of calls. These cxx-rtimer classes are available on GitHub, and offer support for using std::chrono, clock_gettime(), or boost::posix_time as a back-end clock source.
With these timers, you can do something like:
void timeCriticalFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensive");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}
with timing stats written to std::cerr on program completion.