Execution time in C++

Trying to find the execution time of my code using this:
#include <iostream>
#include <cstdlib>   // for system()
#include <time.h>
using namespace std;

int main()
{
    clock_t t1, t2;
    t1 = clock();
    // code goes here
    t2 = clock();
    float diff = ((float)t2 - (float)t1);
    cout << "Execution Time = " << diff / CLOCKS_PER_SEC << endl;
    system("pause");
    return 0;
}
but it returns a different time every time it is executed with the same code. Is the code correct?
I want to check the execution time of my code in different scenarios but shouldn't it display the same time when I execute the same code twice?

As mentioned here, clock ticks are units of time of a constant but system-specific length, as those returned by the function clock(). With that in mind, there are a couple of scenarios/facts to consider when using this method to find the execution time of a piece of code.
1) The time a tick represents depends on the OS. Moreover, there are OS-internal counters for clock ticks. Please see this SuperUser question.
2) Resources need to be allocated for any process to run on the system. But what if the processor is busy with a more important process, or has even run out of resources? In that case your process is put in a queue and runs with a lower priority. Since the clock ticks are kept in an internal counter (as mentioned above), that counter keeps incrementing even while other processes are using the processor.
Conclusion
Your method of finding the execution time based on clock ticks will not yield exact results; it will only give you a rough idea of the execution time.
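If you want numbers that are more repeatable, one common workaround (not part of the answer above, just a sketch; the do_work() function below is a made-up placeholder for the code being timed) is to use a monotonic wall clock such as std::chrono::steady_clock and repeat the measurement several times, reporting the best run:
#include <chrono>
#include <iostream>

// Hypothetical placeholder for the code being timed.
void do_work()
{
    volatile long sum = 0;
    for (long i = 0; i < 1000000; ++i) sum += i;
}

int main()
{
    using namespace std::chrono;
    const int runs = 10;
    double best = 1e300;   // smallest time seen so far, in seconds

    for (int r = 0; r < runs; ++r)
    {
        auto t1 = steady_clock::now();
        do_work();
        auto t2 = steady_clock::now();
        double secs = duration<double>(t2 - t1).count();
        if (secs < best) best = secs;
    }
    std::cout << "Best of " << runs << " runs: " << best << " s\n";
    return 0;
}
Taking the minimum of several runs filters out much of the noise caused by scheduling and background processes, but the result is still only an estimate.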

Related

I created a high precision multitasking basic C++ code, what is the algorithm implementation called?

So I always wanted to implement basic multitasking code, specifically asynchronous code (not concurrent code), without using interrupts, Boost, or complex threading/multitasking implementations or algorithms.
I did some programming on MCUs such as the ATmega328. In most cases, to get the most out of an MCU, multitasking is required, in which functions run at the same time ("perceived" as running at the same time) without halting the MCU so that other functions can also run.
For example, "function_a" requires a delay, but it should not halt the MCU during that delay, so that other functions like "function_b" can also run asynchronously.
To do this on microcontrollers that have only one CPU/thread, an algorithm based on timers and keeping track of elapsed time is used to implement multitasking. It's really simple and always works. I have taken the concept from MCUs and applied it to desktop PCs in C++ using high-precision timers; the code is given below.
I am really surprised that no one seems to use this form of asynchronous algorithm in C++, and I haven't seen any examples of it on the internet for C++.
My question is: what exactly is this algorithm and implementation called in computer science or computer engineering? I read that this kind of implementation is called a "state machine", but when I googled it I did not see any C++ code similar to mine that works purely with timers.
The code below does the following: it runs function 1 but at the same time also runs function 2, without needing to halt the application.
Both functions also need to execute at a specified rate rather than running flat out continuously (function_1 runs every 1 sec and function_2 every 3 secs).
Implementations on the internet that meet the requirements above are really complex for C++. The code below is simple in nature and works as intended:
// Asynchronous state machine using one CPU, C++ example:
// Tested, working multitasking code:
#include <iostream>
#include <cstdio>
#include <ctime>
#include <ratio>
#include <chrono>
using namespace std::chrono;

// At the first execution of the program, capture the time as zero reference and store it in "t2" and "t3".
auto t2 = high_resolution_clock::now();
auto t3 = high_resolution_clock::now();

int main()
{
    while (1)
    {
        // Always update the time reference variable "t1" to the current time:
        auto t1 = high_resolution_clock::now();

        // Check the difference between the zero reference time and the current time,
        // and see whether it exceeds the interval specified in the "if" condition:
        duration<double> time_span_1 = duration_cast<duration<double>>(t1 - t2);
        duration<double> time_span_2 = duration_cast<duration<double>>(t1 - t3);

        if (time_span_1.count() >= 1)
        {
            printf("This is function_1:\n\n");
            std::cout << time_span_1.count() << " Secs (t1-t2)\n\n";
            // Set t2 to capture the current time again as the zero reference.
            t2 = high_resolution_clock::now();
            std::cout << "------------------------------------------\n\n";
        }
        else if (time_span_2.count() >= 3)
        {
            printf("This is function_2:\n\n");
            std::cout << time_span_2.count() << " Secs (t1-t3)\n\n";
            // Set t3 to capture the current time again as the zero reference.
            t3 = high_resolution_clock::now();
            std::cout << "------------------------------------------\n\n";
        }
    }
    return 0;
}
What is the algorithm...called?
Some people call it "super loop." I usually write it like this:
while (1) {
    if ( itsTimeToPerformTheHighestPriorityTask() ) {
        performTheHighestPriorityTask();
        continue;
    }
    if ( itsTimeToPerformTheNextHighestPriorityTask() ) {
        performTheNextHighestPriorityTask();
        continue;
    }
    ...
    if ( itsTimeToPerformTheLowestPriorityTask() ) {
        performTheLowestPriorityTask();
        continue;
    }
    waitForInterrupt();
}
The waitForInterrupt() call at the bottom is optional. Most processors have an op-code that puts the processor into a low-power state (basically, it halts the processor for some definition of "halt") until an interrupt occurs.
Halting the CPU when there's no work to be done can greatly improve battery life if the device is battery powered, and it can help with thermal management if that's an issue. But the price you pay for using it is that your timers and all of your I/O must be interrupt driven.
I would describe the posted code as "microcontroller code", because it is assuming that it is the only program that will be running on the CPU and that it can therefore burn as many CPU-cycles as it wants to without any adverse consequence. That assumption is often valid for programs running on microcontrollers (since usually a microcontroller doesn't have any OS or other programs installed on it), but "spinning the CPU" is not generally considered acceptable behavior in the context of a modern PC/desktop OS where programs are expected to be efficient and share the computer's resources with each other.
In particular, "spinning" the CPU on a modern PC (or Mac) introduces the following problems:
It uses up 100% of the CPU cycles on a CPU core, which means those CPU cycles are unavailable to any other programs that might otherwise be able to make productive use of them.
It prevents the CPU from ever going to sleep, which wastes power -- that's bad on a desktop or server because it generates unwanted/unnecessary heat, and it's worse on a laptop because it quickly drains the battery.
Modern OS schedulers keep track of how much CPU time each program uses, and if the scheduler notices that a program is continuously spinning the CPU, it will likely respond by drastically reducing that program's scheduling-priority, in order to allow other, less CPU-hungry programs to remain responsive. Having a reduced CPU priority means that the program is less likely to be scheduled to run at the moment when it wants to do something useful, making its timing less accurate than it otherwise might be.
Users who run system-monitoring utilities like Task Manager (in Windows) or Activity Monitor (under MacOS/X) or top (in Linux) will see the program continuously taking 100% of a CPU core and will likely assume the program is buggy and kill it. (and unless the program actually needs 100% of a CPU core to do its job, they'll be correct!)
In any case, it's not difficult to rewrite the program to use almost no CPU cycles instead. Here's a version of the posted program that uses approximately 0% of a CPU core, but still calls the desired functions at the desired intervals (and also prints out how close it came to the ideal timing -- which is usually within a few milliseconds on my machine, but if you need better timing accuracy than that, you can get it by running the program at higher/real-time priority instead of as a normal-priority task):
#include <iostream>
#include <ctime>
#include <chrono>
#include <thread>
#include <algorithm>   // for std::min
using namespace std::chrono;

int main(int argc, char ** argv)
{
    // These variables will contain the times at which we next want to execute each task.
    // Initialize them to the current time so that each task will run immediately on startup
    auto nextT1Time = high_resolution_clock::now();
    auto nextT3Time = high_resolution_clock::now();

    while (1)
    {
        // Compute the next time at which we need to wake up and execute one of our tasks
        auto nextWakeupTime = std::min(nextT1Time, nextT3Time);

        // Sleep until the desired time
        std::this_thread::sleep_until(nextWakeupTime);

        bool t1Executed = false, t3Executed = false;
        high_resolution_clock::duration t1LateBy, t3LateBy;

        auto now = high_resolution_clock::now();
        if (now >= nextT1Time)
        {
            t1Executed = true;
            t1LateBy = now - nextT1Time;
            // schedule our next execution to be 1 second later
            nextT1Time = nextT1Time + seconds(1);
        }
        if (now >= nextT3Time)
        {
            t3Executed = true;
            t3LateBy = now - nextT3Time;
            // schedule our next execution to be 3 seconds later
            nextT3Time = nextT3Time + seconds(3);
        }

        // Since the calls to std::cout can be slow, we'll execute them down here, after the functions have been called but before
        // (nextWakeupTime) is recalculated on the next go-around of the loop. That way the time spent printing to stdout during the T1
        // task won't potentially hold off execution of the T3 task
        if (t1Executed) std::cout << "function T1 was called (it executed " << duration_cast<microseconds>(t1LateBy).count() << " microseconds after the expected time)" << std::endl;
        if (t3Executed) std::cout << "function T3 was called (it executed " << duration_cast<microseconds>(t3LateBy).count() << " microseconds after the expected time)" << std::endl;
    }
    return 0;
}

In C++, how would I make a random number (either 1 or 2) that changes every 5 minutes?

I'm trying to make a simple game that has a shop. I want it so that every 5 minutes (when the function changeItem() is called) the item in the shop either switches or stays the same. I have no problem generating the random number, but I have yet to find a thread that shows how to make it generate a new value only every 5 minutes. Thank you.
In short, keep track of the last time the changeItem() function was called. If it is more than 5 minutes since the last time it was called, then use your random number generator to generate a new number. Otherwise, use the saved number from the last time it was generated.
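A minimal sketch of that idea, assuming the shop item is just the number 1 or 2 (the names changeItem and lastChange are illustrative, not taken from the asker's code):
#include <chrono>
#include <random>

// Returns 1 or 2; a fresh value is drawn at most once every 5 minutes,
// otherwise the previously drawn value is returned.
int changeItem()
{
    using clock = std::chrono::steady_clock;
    static std::mt19937 rng(std::random_device{}());
    static std::uniform_int_distribution<int> dist(1, 2);

    static clock::time_point lastChange = clock::now();
    static int current = dist(rng);            // first value, drawn at startup

    if (clock::now() - lastChange >= std::chrono::minutes(5))
    {
        current = dist(rng);                   // time is up: draw a new value
        lastChange = clock::now();
    }
    return current;
}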
You've already accepted an answer, but I would like to say that for apps that need simple timing like this and don't need great accuracy, a simple calculation in the main loop is all you need.
Kicking off a thread for a single timer is a lot of unnecessary overhead.
So, here's the code showing how you'd go about doing it.
#include <ctime>

#define FIVE_MINUTES (60*5)

void changeItem();   // defined elsewhere in the game

int main(int argc, char** argv){
    time_t lastChange = 0, tick;
    bool run_game_loop = true;
    while (run_game_loop){
        // ... game loop
        tick = time(NULL);
        if ((tick - lastChange) >= FIVE_MINUTES){
            changeItem();
            lastChange = tick;
        }
    }
    return 0;
}
It does assume the loop runs reasonably regularly, though. If, on the other hand, you need it to be accurate, then a thread would be better. And depending on the platform, there are timer APIs whose callbacks are invoked by the system.
Standard and portable approach:
You could consider C++11 threads. The general idea would be :
#include <thread>
#include <chrono>

void myrandomgen () // function that refreshes your random number:
                    // will be executed as a thread
{
    while (! gameover ) {
        std::this_thread::sleep_for (std::chrono::minutes(5)); // wait 5 minutes
        ... // generate your random number and update your game data structure
    }
}
in the main function, you would then instantiate a thread with your function:
thread t1 (myrandomgen); // create and launch the thread
... // do your stuff until game over
t1.join (); // wait until thread returns
Of course you could also pass parameters (references to shared variables, etc...) when you create the thread, like this:
thread t1 (myrandomgen, param1, param2, ....);
The advantage of this approach is that it's standard and portable.
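For completeness, here is one way the elided pieces might be filled in. This is only a sketch under the assumption that the shared state is an item value of 1 or 2 and a gameover flag, both made std::atomic so the two threads can access them safely:
#include <atomic>
#include <chrono>
#include <random>
#include <thread>

std::atomic<bool> gameover{false};
std::atomic<int>  item{1};           // the value refreshed every 5 minutes

void myrandomgen()                   // runs as a background thread
{
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> dist(1, 2);
    while (!gameover) {
        std::this_thread::sleep_for(std::chrono::minutes(5));
        item = dist(rng);            // publish the new value
    }
}

int main()
{
    std::thread t1(myrandomgen);     // create and launch the thread
    // ... game loop goes here; it just reads 'item' ...
    gameover = true;
    t1.join();                       // note: may block up to 5 minutes, since the worker sleeps in whole chunks
    return 0;
}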
Non-portable alternatives:
I'm less familiar with these, but:
In a MSWIN environment, you could use SetTimer(...) to define a function to be called at a regular interval (and KillTimer(...) to delete it). But this requires a program structure built around the Windows event-processing loop.
In a linux environment, you could similarly define a call back function with signal(SIGALRM, ...) and activate periodic calls with alarm().
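And a minimal sketch of that signal(SIGALRM, ...)/alarm() variant (Linux/POSIX only; the handler only sets a flag, because generating the random number inside a signal handler would not be async-signal-safe):
#include <signal.h>   // signal(), SIGALRM (POSIX)
#include <unistd.h>   // alarm()
#include <cstdlib>    // rand()

volatile sig_atomic_t refresh_due = 0;

extern "C" void on_alarm(int)
{
    refresh_due = 1;   // just set a flag; alarm() is async-signal-safe
    alarm(300);        // re-arm for another 5 minutes
}

int main()
{
    signal(SIGALRM, on_alarm);
    alarm(300);        // first alarm fires in 5 minutes

    int item = 1;      // current shop item: 1 or 2
    while (true) {     // game loop
        if (refresh_due) {
            refresh_due = 0;
            item = 1 + std::rand() % 2;   // draw the new value
        }
        // ... rest of the game loop uses 'item' ...
    }
    return 0;
}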
Small update on performance considerations:
Following several remarks about the overkill of threads and their performance, I've done a benchmark, executing 1 billion loop iterations and waiting 1 microsecond every 100K iterations. The whole thing was run on an i7 multicore CPU:
Non-threaded execution yielded 213K iterations per millisecond.
2-thread execution yielded 209K iterations per millisecond and per thread. So each thread is slightly slower. The total execution time was, however, only 70 to 90 ms longer, so the overall throughput is 418K iterations per millisecond.
How come? Because the second thread uses an otherwise idle core on the processor. This means that, with an adequate architecture, a game could perform many more calculations when using multithreading...
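The exact benchmark code wasn't posted; a rough sketch of that kind of measurement (with the iteration count scaled down so it finishes quickly) might look like this:
#include <chrono>
#include <iostream>
#include <thread>

// Spin through 'iters' iterations, sleeping 1 microsecond every 100000 iterations.
void work(long long iters)
{
    volatile long long counter = 0;
    for (long long i = 0; i < iters; ++i) {
        ++counter;
        if (i % 100000 == 0)
            std::this_thread::sleep_for(std::chrono::microseconds(1));
    }
}

int main()
{
    using namespace std::chrono;
    const long long iters = 100000000;      // 100 million (scaled down from 1 billion)

    auto t0 = steady_clock::now();
    work(iters);                            // single-threaded run
    auto t1 = steady_clock::now();

    std::thread a(work, iters), b(work, iters);
    a.join();
    b.join();                               // two threads, each doing the same amount of work
    auto t2 = steady_clock::now();

    auto ms1 = duration_cast<milliseconds>(t1 - t0).count();
    auto ms2 = duration_cast<milliseconds>(t2 - t1).count();
    std::cout << "1 thread : " << iters / (ms1 ? ms1 : 1) << " iterations/ms\n";
    std::cout << "2 threads: " << 2 * iters / (ms2 ? ms2 : 1) << " iterations/ms (total)\n";
    return 0;
}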

C++, Timer, Milliseconds

#include <iostream>
#include <conio.h>
#include <ctime>
using namespace std;

double diffclock(clock_t clock1, clock_t clock2)
{
    double diffticks = clock1 - clock2;
    double diffms = (diffticks) / (CLOCKS_PER_SEC / 1000);
    return diffms;
}

int main()
{
    clock_t start = clock();
    for (int i = 0;; i++)
    {
        if (i == 10000) break;
    }
    clock_t end = clock();
    cout << diffclock(start, end) << endl;
    getch();
    return 0;
}
So my problem comes down to it returning 0; to be straight, I want to check how much time my program takes to run...
I found tons of stuff over the internet, but mostly it comes down to the same point of getting a 0 because the start and the end are the same.
This problem is about C++, remember :<
There are a few problems here. The first is that you obviously switched the start and stop times when passing them to the diffclock() function. The second problem is optimization. Any reasonably smart compiler with optimizations enabled would simply throw the entire loop away, as it does not have any side effects. But even if you fix the above problems, the program would most likely still print 0. If you consider that a CPU does billions of operations per second, and add sophisticated out-of-order execution, branch prediction and tons of other technologies employed by modern CPUs, even the CPU itself may effectively optimize your loop away. But even if it doesn't, you'd need a lot more than 10K iterations to make it run longer. You'd probably need your program to run for a second or two in order for clock() to reflect anything.
But the most important problem is clock() itself. That function is not suitable for any kind of performance measurement whatsoever. What it gives you is an approximation of the processor time used by the program. Aside from the vague nature of the approximation method that any given implementation might use (the standard doesn't require anything specific), the POSIX standard also requires CLOCKS_PER_SEC to be equal to 1000000 independent of the actual resolution. In other words, it doesn't matter how precise the clock is, and it doesn't matter at what frequency your CPU is running. To put it simply, it is a totally useless number and therefore a totally useless function. The only reason it still exists is probably historical. So, please do not use it.
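To see the "processor time" aspect concretely, one can wrap a sleep in both clock() and a wall clock. On POSIX systems clock() barely advances, because a sleeping process consumes almost no CPU (note that some implementations, e.g. MSVC, make clock() follow wall time instead). A small sketch:
#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    std::clock_t c0 = std::clock();
    auto w0 = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(1));   // uses essentially no CPU

    std::clock_t c1 = std::clock();
    auto w1 = std::chrono::steady_clock::now();

    std::cout << "clock():    " << double(c1 - c0) / CLOCKS_PER_SEC << " s (CPU time)\n";
    std::cout << "wall clock: " << std::chrono::duration<double>(w1 - w0).count() << " s\n";
    return 0;
}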
To achieve what you are looking for, people used to read the CPU time stamp counter, also known as "RDTSC" after the name of the corresponding CPU instruction used to read it. These days, however, this is also mostly useless because:
Modern operating systems can easily migrate the program from one CPU to another. You can imagine that reading the time stamp on one CPU after having run for a second on another doesn't make a lot of sense. It is only in the latest Intel CPUs that the counter is synchronized across CPU cores. All in all, it is still possible to do this, but a lot of extra care must be taken (i.e. one can set up the affinity for the process, etc.).
Measuring CPU instructions of the program oftentimes does not give an accurate picture of how much time it is actually using. This is because in real programs there can be system calls where the work is performed by the OS kernel on behalf of the process. In that case, that time is not included.
It can also happen that the OS suspends the execution of the process for a long time. And even though it took only a few instructions to execute, to the user it seemed like a second. So such a performance measurement may be useless.
So what to do?
When it comes to profiling, a tool like perf must be used. It can track a number of CPU clocks, cache misses, branches taken, branches missed, a number of times the process was moved from one CPU to another, and so on. It can be used as a tool, or can be embedded into your application (something like PAPI).
And if the question is about actual time spent, people use a wall clock. Preferably a high-precision one that is also not subject to NTP adjustments (monotonic). That shows exactly how much time elapsed, no matter what else was going on. For that purpose clock_gettime() can be used. It is part of the SUSv2 and POSIX.1-2001 standards. Given that you use getch() to keep the terminal open, I'd assume you are using Windows. There, unfortunately, you don't have clock_gettime(), and the closest thing is the performance counters API:
BOOL QueryPerformanceFrequency(LARGE_INTEGER *lpFrequency);
BOOL QueryPerformanceCounter(LARGE_INTEGER *lpPerformanceCount);
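A minimal usage sketch of those two calls (Windows only; the gap in the middle stands for whatever code is being measured):
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // ticks per second
    QueryPerformanceCounter(&t0);

    // ... code to be measured ...

    QueryPerformanceCounter(&t1);
    double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    std::printf("Elapsed: %.3f ms\n", ms);
    return 0;
}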
For a portable solution, the best bet is std::chrono's high-resolution/steady clocks. They were introduced in C++11, but are supported by most industrial-grade compilers (GCC, Clang, MSVC).
Below is an example of how to use them. Please note that since I know my CPU will do 10000 increments of an integer way faster than a millisecond, I have changed the output to microseconds. I've also declared the counter as volatile in the hope that the compiler won't optimize the loop away.
#include <chrono>
#include <iostream>

int main()
{
    volatile int i = 0; // "volatile" is to ask the compiler not to optimize the loop away.
    auto start = std::chrono::steady_clock::now();
    while (i < 10000) {
        ++i;
    }
    auto end = std::chrono::steady_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "It took me " << elapsed.count() << " microseconds." << std::endl;
}
When I compile and run it, it prints:
$ g++ -std=c++11 -Wall -o test ./test.cpp && ./test
It took me 23 microseconds.
Hope it helps. Good Luck!
At a glance, it seems like you are subtracting the larger value from the smaller value. You call:
diffclock( start, end );
But then diffclock is defined as:
double diffclock( clock_t clock1, clock_t clock2 ) {
double diffticks = clock1 - clock2;
double diffms = diffticks / ( CLOCKS_PER_SEC / 1000 );
return diffms;
}
Apart from that, it may have something to do with the way you are converting units. The use of 1000 to convert to milliseconds is different on this page:
http://en.cppreference.com/w/cpp/chrono/c/clock
The problem appears to be that the loop is just too short. I tried it on my system and it gave 0 ticks. I checked what diffticks was and it was 0. Increasing the loop size to 100000000 produced a noticeable time lag, and I got -290 as output (a bug: I think diffticks should be clock2 - clock1, so we should get 290 and not -290). I also tried changing "1000" to "1000.0" in the division and that didn't help.
Compiling with optimization does remove the loop, so you have to not use it, or make the loop "do something", e.g. increment a counter other than the loop counter in the loop body. At least that's what GCC does.
Note: This is available since C++11.
You can use the std::chrono library.
std::chrono has two distinct concepts: time points and durations. A time point represents a point in time, and a duration, as the term suggests, represents an interval or span of time.
This C++ library allows us to subtract two time points to get the duration that passed in between. So you can set a starting point and a stopping point, and using helper functions you can convert the result into appropriate units.
Example using high_resolution_clock (which is one of the three clocks this library provides):
#include <chrono>
using namespace std::chrono;
//before running function
auto start = high_resolution_clock::now();
//after calling function
auto stop = high_resolution_clock::now();
Subtract the stop and start time points and cast the result into the required units using the duration_cast() function. Predefined units are nanoseconds, microseconds, milliseconds, seconds, minutes, and hours.
auto duration = duration_cast<microseconds>(stop - start);
std::cout << duration.count() << std::endl;
First of all you should subtract end - start not vice versa.
Documentation says if value is not available clock() returns -1, did you check that?
What optimization level do you use when compiling your program? If optimization is enabled, the compiler can eliminate your loop entirely.
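Putting those three points together, a corrected sketch of the original program could look like this (a volatile counter so the loop is not optimized away, a much larger iteration count, end - start instead of start - end, and a check for the (clock_t)-1 failure value):
#include <ctime>
#include <iostream>

int main()
{
    volatile int i = 0;                      // volatile: keep the loop from being optimized away

    std::clock_t start = std::clock();
    while (i < 100000000) ++i;               // far more than 10000 iterations, so it is measurable
    std::clock_t end = std::clock();

    if (start == (std::clock_t)-1 || end == (std::clock_t)-1) {
        std::cerr << "clock() is not available\n";
        return 1;
    }
    double ms = 1000.0 * (end - start) / CLOCKS_PER_SEC;   // end - start, not start - end
    std::cout << ms << " ms\n";
    return 0;
}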

time vs gettimeofday c ++

The time command returns the time elapsed in execution of a command.
If I put a gettimeofday() at the start of the command call (using system()), and one at the end of the call, and take the difference, it doesn't come out the same. (It's not a very small difference either.)
Can anybody explain what is the exact difference between the two usages and which is the best way to time the execution of a call?
Thanks.
The Unix time command measures the whole program execution time, including the time it takes for the system to load your binary and all its libraries, and the time it takes to clean up everything once your program is finished.
On the other hand, gettimeofday can only work inside your program, that is after it has finished loading (for the initial measurement), and before it is cleaned up (for the final measurement).
Which one is best? Depends on what you want to measure... ;)
It's all dependent on what you are timing. If you are trying to time something in seconds, then time() is probably your best bet. If you need higher resolution than that, then I would consider gettimeofday(), which gives up to microsecond resolution (1 / 1000000th of a second).
If you need even higher resolution than that, consider using clock() and CLOCKS_PER_SEC, just note that clock() rarely gives an accurate picture of the elapsed time; it reflects the CPU time your process used rather than the wall-clock time.
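For example, a minimal gettimeofday() sketch (POSIX) that reports the elapsed time in microseconds (the gap in the middle stands for the code being measured):
#include <sys/time.h>
#include <cstdio>

int main()
{
    struct timeval start, end;
    gettimeofday(&start, NULL);

    // ... code to be measured ...

    gettimeofday(&end, NULL);
    long long usec = (end.tv_sec - start.tv_sec) * 1000000LL
                   + (end.tv_usec - start.tv_usec);
    std::printf("Elapsed: %lld microseconds\n", usec);
    return 0;
}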
time() returns time since epoch in seconds.
gettimeofday() returns (via its first argument):
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
Each time function has different precision. In C++11 you would use std::chrono:
using namespace std::chrono;
auto start = high_resolution_clock::now();
/* do stuff*/
auto end = high_resolution_clock::now();
float elapsedSeconds = duration_cast<duration<float>>(end-start).count();

Measuring the runtime of a C++ code?

I want to measure the runtime of my C++ code. Executing my code takes about 12 hours and I want to write this time at the end of execution of my code. How can I do it in my code?
Operating system: Linux
If you are using C++11 you can use system_clock::now():
auto start = std::chrono::system_clock::now();
/* do some work */
auto end = std::chrono::system_clock::now();
auto elapsed = end - start;
std::cout << elapsed.count() << '\n';
You can also specify the granularity to use for representing a duration:
// this constructs a duration object using milliseconds
auto elapsed =
std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
// this constructs a duration object using seconds
auto elapsed =
std::chrono::duration_cast<std::chrono::seconds>(end - start);
If you cannot use C++11, then have a look at chrono from Boost.
The best thing about using such standard libraries is that their portability is really high (e.g., they both work on Linux and Windows). So you do not need to worry too much if you decide to port your application afterwards.
These libraries follow a modern C++ design too, as opposed to C-like approaches.
EDIT: The example above can be used to measure wall-clock time. That is not, however, the only way to measure the execution time of a program. First, we can distinguish between user and system time:
User time: The time spent by the program running in user space.
System time: The time spent by the program running in system (or kernel) space. A program enters kernel space for instance when executing a system call.
Depending on the objectives it may be necessary or not to consider system time as part of the execution time of a program. For instance, if the aim is to just measure a compiler optimization on the user code then it is probably better to leave out system time. On the other hand, if the user wants to determine whether system calls are a significant overhead, then it is necessary to measure system time as well.
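Since the question mentions Linux, one way to get that user/system split without any extra libraries is getrusage(); a minimal sketch (POSIX, and the work being measured is assumed to happen before the call):
#include <sys/resource.h>
#include <cstdio>

int main()
{
    // ... run the work to be measured ...

    struct rusage usage;
    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        double user = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1e6;
        double sys  = usage.ru_stime.tv_sec + usage.ru_stime.tv_usec / 1e6;
        std::printf("user CPU time:   %.3f s\n", user);
        std::printf("system CPU time: %.3f s\n", sys);
    }
    return 0;
}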
Moreover, since most modern systems are time-shared, different programs may compete for several computing resources (e.g., CPU). In such a case, another distinction can be made:
Wall-clock time: By using wall-clock time the execution of the program is measured in the same way as if we were using an external (wall) clock. This approach does not consider the interaction between programs.
CPU time: In this case we only count the time that a program is actually running on the CPU. If a program (P1) is co-scheduled with another one (P2), and we want to get the CPU time for P1, this approach does not include the time while P2 is running and P1 is waiting for the CPU (as opposed to the wall-clock time approach).
For measuring CPU time, Boost includes a set of extra clocks:
process_real_cpu_clock, captures wall clock CPU time spent by the current process.
process_user_cpu_clock, captures user-CPU time spent by the current process.
process_system_cpu_clock, captures system-CPU time spent by the current process.
A tuple-like class process_cpu_clock, that captures real, user-CPU, and system-CPU process times together.
A thread_clock thread steady clock giving the time spent by the current thread (when supported by a platform).
Unfortunately, C++11 does not have such clocks. But Boost is a widely used library and, probably, these extra clocks will be incorporated into C++1x at some point. So, if you use Boost you will be ready when the new C++ standard adds them.
Finally, if you want to measure the time a program takes to execute from the command line (as opposed to adding some code into your program), you may have a look at the time command, just as #BЈовић suggests. This approach, however, would not let you measure individual parts of your program (e.g., the time it takes to execute a function).
Use std::chrono::steady_clock and not std::chrono::system_clock for measuring run time in C++11. The reason is (quoting system_clock's documentation):
on most systems, the system time can be adjusted at any moment
while steady_clock is monotonic and is better suited for measuring intervals:
Class std::chrono::steady_clock represents a monotonic clock. The time
points of this clock cannot decrease as physical time moves forward.
This clock is not related to wall clock time, and is best suitable for
measuring intervals.
Here's an example:
auto start = std::chrono::steady_clock::now();
// do something
auto finish = std::chrono::steady_clock::now();
double elapsed_seconds = std::chrono::duration_cast<
std::chrono::duration<double> >(finish - start).count();
A small practical tip: if you are measuring run time and want to report seconds std::chrono::duration_cast<std::chrono::seconds> is rarely what you need because it gives you whole number of seconds. To get the time in seconds as a double use the example above.
You can use time to run your program. When it ends, it prints nice time statistics about the program run. It is easy to configure what to print. By default, it prints the user and CPU times it took to execute the program.
EDIT: Note that any measurement taken from within the code is not exact, because your application will be preempted by other programs, hence giving you skewed values*.
* By skewed values, I mean that it is easy to get the time it took to execute the program, but that time varies depending on the CPU load during the execution. To get a relatively stable time measurement that doesn't depend on the CPU load, one can execute the application using time and use its reported CPU time as the measurement result.
I used something like this in one of my projects:
#include <sys/time.h>
struct timeval start, end;
gettimeofday(&start, NULL);
//Compute
gettimeofday(&end, NULL);
double elapsed = (end.tv_sec - start.tv_sec) * 1000.0
               + (end.tv_usec - start.tv_usec) / 1000.0;  // milliseconds
This is for milliseconds and it works both for C and C++.
This is the code I use:
const auto start = std::chrono::steady_clock::now();
// Your code here.
const auto end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed = end - start;
std::cout << "Time in seconds: " << elapsed.count() << '\n';
You don't want to use std::chrono::system_clock because it is not monotonic! If the user changes the time in the middle of your code your result will be wrong - it might even be negative. std::chrono::high_resolution_clock might be implemented using std::chrono::system_clock so I wouldn't recommend that either.
This code also avoids ugly casts.
If you wish to print the measured time with printf(), you can use this:
auto start = std::chrono::system_clock::now();
/* measured work */
auto end = std::chrono::system_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
printf("Time = %lld ms\n", static_cast<long long int>(elapsed.count()));
You could also try some timer classes that start and stop automatically, and gather statistics on the average, maximum and minimum time spent in any block of code, as well as the number of calls. These cxx-rtimer classes are available on GitHub, and offer support for using std::chrono, clock_gettime(), or boost::posix_time as a back-end clock source.
With these timers, you can do something like:
void timeCriticalFunction() {
static rtimers::cxx11::DefaultTimer timer("expensive");
auto scopedStartStop = timer.scopedStart();
// Do something costly...
}
with timing stats written to std::cerr on program completion.