Calculating time length between operations in C++

The program is a middleware between a database and an application. For each database access I must calculate the elapsed time in milliseconds. The example below uses TDateTime from the C++Builder library. As far as possible, I must use only standard C++ libraries.
AnsiString TimeInMilliseconds(TDateTime t) {
    Word Hour, Min, Sec, MSec;
    DecodeTime(t, Hour, Min, Sec, MSec);
    long ms = MSec + Sec * 1000 + Min * 1000 * 60 + Hour * 1000 * 60 * 60;
    return IntToStr(ms);
}
// computing times
TDateTime SelectStart = Now();
sql_manipulation_statement();
TDateTime SelectEnd = Now();
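If a C++11 (or later) standard library is available, the same measurement can be done with <chrono> alone, with no dependency on TDateTime. A minimal sketch (the function name TimeInMillisecondsChrono is just an illustrative choice; sql_manipulation_statement stands in for the database call from the question):
#include <chrono>
#include <string>

// Stand-in for the database call from the question; replace with the real statement.
void sql_manipulation_statement() { /* ... */ }

std::string TimeInMillisecondsChrono()
{
    using namespace std::chrono;
    steady_clock::time_point select_start = steady_clock::now();
    sql_manipulation_statement();
    steady_clock::time_point select_end = steady_clock::now();
    // duration_cast truncates to whole milliseconds.
    auto ms = duration_cast<milliseconds>(select_end - select_start).count();
    return std::to_string(ms);
}
steady_clock is preferable to system_clock here because it cannot jump backwards if the system time is adjusted while the query runs.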

On both Windows and POSIX-compliant systems (Linux, OSX, etc.), you can measure elapsed time in units of 1/CLOCKS_PER_SEC (timer ticks) using clock() from <ctime>. The return value is the processor time the program has used so far, expressed in clock ticks; divide by CLOCKS_PER_SEC to get seconds (or multiply that result by 1000 for milliseconds). Two calls to clock() can then be subtracted from each other to calculate the running time of a given block of code.
So for example:
#include <ctime>
#include <cstdio>
clock_t time_a = clock();
//...run block of code
clock_t time_b = clock();
if (time_a == ((clock_t)-1) || time_b == ((clock_t)-1))
{
    perror("Unable to calculate elapsed time");
}
else
{
    unsigned int total_time_ticks = (unsigned int)(time_b - time_a);
}
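If the result is needed in milliseconds rather than raw ticks, divide by CLOCKS_PER_SEC; a small sketch reusing time_a and time_b from above (double avoids overflow in the intermediate multiplication):
// Convert elapsed ticks to milliseconds.
double total_time_ms = 1000.0 * (double)(time_b - time_a) / CLOCKS_PER_SEC;
printf("Elapsed: %.3f ms\n", total_time_ms);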
Edit: You are not going to be able to directly compare the timings from a POSIX-compliant platform to a Windows platform, because on Windows clock() measures wall-clock time, whereas on a POSIX system it measures elapsed CPU time. But it is a function in the standard C++ library, and for comparing performance between different blocks of code on the same platform, it should fit your needs.

On Windows you can use GetTickCount (MSDN), which gives the number of milliseconds that have elapsed since the system was started. Calling it before and after your call gives you the number of milliseconds the call took.
DWORD start = GetTickCount();
//Do your stuff
DWORD end = GetTickCount();
std::cout << "the call took " << (end - start) << " ms";
Edit:
As Jason mentioned, clock() would be better because it is not Windows-specific.

Related

IBM AIX std::clock()?

I execute the following code on IBM AIX:
#include <cstdio>
#include <ctime>
#include <boost/thread.hpp>
#include <boost/chrono.hpp>

int main(void)
{
    printf("start\n");
    double time1 = (double)clock();   /* get initial time */
    time1 = time1 / CLOCKS_PER_SEC;   /* in seconds */
    boost::this_thread::sleep_for(boost::chrono::seconds(5));
    /* call clock a second time */
    double time2 = (((double)clock()) / CLOCKS_PER_SEC);
    double timedif = time2 - time1;
    printf("The elapsed time is %lf seconds, time1:%lf time2:%lf CLOCKS_PER_SEC:%ld\n",
           timedif, time1, time2, (long)CLOCKS_PER_SEC);
}
The result is:
2018-04-07 09:58:37 start
2018-04-07 09:58:42 The elapsed time is 0.000180 seconds, time1:0.000000
time2:0.000181 CLOCKS_PER_SEC:1000000
I don't know why the elapsed time is 0.000180 seconds (why not 5)?
According to the manual
Returns the processor time consumed by the program.
It is the CPU time consumed by the program, not physical (wall-clock) time, and a sleeping program does not consume CPU time. Roughly speaking, the measured interval is the time from the start of main() until the sleep, plus the time from the end of the sleep until the second clock() call.
If you want to get system/real time, look at the std::chrono::system_clock class.
#include <chrono>
using std::chrono::system_clock;
system_clock::time_point time_now = system_clock::now();
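For completeness, here is a minimal sketch of the same experiment measured with a wall clock: it wraps the 5-second sleep between two system_clock::now() calls (using std::this_thread::sleep_for from the standard library in place of the boost call) and prints roughly 5 seconds:
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    auto wall_start = std::chrono::system_clock::now();
    std::this_thread::sleep_for(std::chrono::seconds(5)); // wall time passes, CPU time does not
    auto wall_end = std::chrono::system_clock::now();

    std::chrono::duration<double> elapsed = wall_end - wall_start;
    printf("Wall-clock elapsed: %f seconds\n", elapsed.count()); // prints roughly 5.0
    return 0;
}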

precise time measurement

I'm using time.h in C++ to measure the timing of a function.
clock_t t = clock();
someFunction();
printf("\nTime taken: %.4fs\n", (float)(clock() - t)/CLOCKS_PER_SEC);
However, I'm always getting the time taken as 0.0000. clock() and t, when printed separately, have the same value. I would like to know if there is a way to measure the time precisely (maybe on the order of nanoseconds) in C++. I'm using VS2010.
C++11 introduced the <chrono> API, which you can use to get nanosecond resolution:
auto begin = std::chrono::high_resolution_clock::now();
// code to benchmark
auto end = std::chrono::high_resolution_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count() << "ns" << std::endl;
For a more relevant value, it is good to run the function several times and compute the average:
auto begin = std::chrono::high_resolution_clock::now();
uint32_t iterations = 10000;
for(uint32_t i = 0; i < iterations; ++i)
{
// code to benchmark
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count();
std::cout << duration << "ns total, average : " << duration / iterations << "ns." << std::endl;
But remember that the for loop and the assignments to the begin and end variables use some CPU time too.
I usually use the QueryPerformanceCounter function.
example:
LARGE_INTEGER frequency; // ticks per second
LARGE_INTEGER t1, t2; // ticks
double elapsedTime;
// get ticks per second
QueryPerformanceFrequency(&frequency);
// start timer
QueryPerformanceCounter(&t1);
// do something
...
// stop timer
QueryPerformanceCounter(&t2);
// compute and print the elapsed time in millisec
elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
The following text, which I completely agree with, is quoted from Optimizing Software in C++ (good reading for any C++ programmer):
The time measurements may require a very high resolution if time
intervals are short. In Windows, you can use the
GetTickCount or
QueryPerformanceCounter functions for millisecond resolution. A much
higher resolution can be obtained with the time stamp counter in the
CPU, which counts at the CPU clock frequency.
There is a caveat, however: "the clock frequency may vary dynamically and that
measurements are unstable due to interrupts and task switches."
In C or C++ I usually do it as shown below. If that still fails, you may consider using the rdtsc instruction (see the sketch after this snippet).
#include <sys/time.h>

struct timeval time;
gettimeofday(&time, NULL); // Start time
long totalTime = (time.tv_sec * 1000) + (time.tv_usec / 1000);
//........ call your functions here
gettimeofday(&time, NULL); // End time
totalTime = (((time.tv_sec * 1000) + (time.tv_usec / 1000)) - totalTime);
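As a rough illustration of the rdtsc route mentioned above, the time stamp counter can be read with the __rdtsc intrinsic (declared in <intrin.h> on MSVC and <x86intrin.h> on GCC/Clang; x86/x86-64 only). Note that it returns raw CPU cycles, not seconds, and is subject to the frequency-scaling and task-switch caveats quoted earlier. A minimal sketch:
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>
#else
#include <x86intrin.h>
#endif

uint64_t cycles_start = __rdtsc();
//........ call your functions here
uint64_t cycles_end = __rdtsc();
uint64_t cycles_elapsed = cycles_end - cycles_start; // raw CPU cycles, not seconds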

Measurement with boost::posix_time::microsec_clock has error more than ten microseconds?

I have the following code:
long long unsigned int GetCurrentTimestamp()
{
    LARGE_INTEGER res;
    QueryPerformanceCounter(&res);
    return res.QuadPart;
}

long long unsigned int initalizeFrequency()
{
    LARGE_INTEGER res;
    QueryPerformanceFrequency(&res);
    return res.QuadPart;
}
//start time stamp
boost::posix_time::ptime startTime = boost::posix_time::microsec_clock::local_time();
long long unsigned int start = GetCurrentTimestamp();
// ....
// execution that should be measured
// ....
long long unsigned int end = GetCurrentTimestamp();
boost::posix_time::ptime endTime = boost::posix_time::microsec_clock::local_time();
boost::posix_time::time_duration duration = endTime - startTime;
std::cout << "Duration by Boost posix: " << duration.total_microseconds() <<std::endl;
std::cout << "Processing time is " << ((end - start) * 1000000 / initalizeFrequency())
<< " microsec "<< std::endl;
Result of this code is
Duration by Boost posix: 0
Processing time is 24 microsec
Why is there such a big divergence? Boost is supposed to measure microseconds, yet it appears to measure them with an error of tens of microseconds?
Posix time: microsec_clock:
Get the UTC time using a sub second resolution clock. On Unix systems this is implemented using GetTimeOfDay. On most Win32 platforms it is implemented using ftime. Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application test your platform to see the achieved resolution.
ftime simply does not provide microsecond resolution. The argument may contain the word microsecond, but the implementation does not provide any accuracy in that range. Its granularity is in the millisecond regime.
You'd get something other than zero when your operation takes more time, say at least 20 ms.
Edit: Note: In the long run the microsec_clock implementation for Windows should use the GetSystemTimePreciseAsFileTime function when possible (min. req. Windows 8 desktop, Windows Server 2012 desktop) to achieve microsecond resolution.
Unfortunately, the current Boost implementation of boost::posix_time::microsec_clock doesn't use the QueryPerformanceCounter Win32 API; it uses GetSystemTimeAsFileTime instead, which in turn uses GetSystemTime. And the system time resolution is milliseconds (or even worse).
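If microsecond-scale interval measurement is the goal and C++11 is available, one portable alternative (a sketch, not part of the original code) is std::chrono::steady_clock, which recent MSVC implementations typically back with QueryPerformanceCounter:
#include <chrono>
#include <iostream>

auto t_start = std::chrono::steady_clock::now();
// ....
// execution that should be measured
// ....
auto t_end = std::chrono::steady_clock::now();
auto us = std::chrono::duration_cast<std::chrono::microseconds>(t_end - t_start).count();
std::cout << "Processing time is " << us << " microsec" << std::endl;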

Timing algorithm: clock() vs time() in C++

For timing an algorithm (approximately in ms), which of these two approaches is better:
clock_t start = clock();
algorithm();
clock_t end = clock();
double time = (double) (end-start) / CLOCKS_PER_SEC * 1000.0;
Or,
time_t start = time(0);
algorithm();
time_t end = time(0);
double time = difftime(end, start) * 1000.0;
Also, from some discussion in the C++ channel at Freenode, I know clock has a very bad resolution, so the timing will be zero for a (relatively) fast algorithm. But which has better resolution, time() or clock()? Or is it the same?
<chrono> would be a better library if you're using C++11.
#include <iostream>
#include <chrono>
#include <thread>
void f()
{
    std::this_thread::sleep_for(std::chrono::seconds(1));
}

int main()
{
    auto t1 = std::chrono::high_resolution_clock::now();
    f();
    auto t2 = std::chrono::high_resolution_clock::now();
    std::cout << "f() took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2-t1).count()
              << " milliseconds\n";
}
Example taken from here.
It depends what you want: time measures the real time while clock measures the processing time taken by the current process. If your process sleeps for any appreciable amount of time, or the system is busy with other processes, the two will be very different.
http://en.cppreference.com/w/cpp/chrono/c/clock
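A minimal sketch illustrating that difference (the sleep consumes wall-clock time but essentially no CPU time, so time() advances by about 2 seconds while clock() barely moves):
#include <chrono>
#include <cstdio>
#include <ctime>
#include <thread>

int main()
{
    std::time_t wall_start = std::time(nullptr);
    std::clock_t cpu_start = std::clock();

    std::this_thread::sleep_for(std::chrono::seconds(2)); // sleeping uses wall time, not CPU time

    std::time_t wall_end = std::time(nullptr);
    std::clock_t cpu_end = std::clock();

    printf("time():  %.0f s elapsed\n", std::difftime(wall_end, wall_start)); // about 2
    printf("clock(): %.6f s of CPU time\n", (double)(cpu_end - cpu_start) / CLOCKS_PER_SEC); // close to 0
    return 0;
}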
The time_t type is probably going to be an integer, which means it will have a resolution of one second.
The first piece of code: It will only count the time that the CPU was doing something, so when you do sleep(), it will not count anything. It can be bypassed by counting the time you sleep(), but it will probably start to drift after a while.
The second piece: Only resolution of seconds, not so great if you need sub-second time readings.
For time readings with the best resolution you can get, you should do something like this:
double getUnixTime(void)
{
    struct timespec tv;
    if (clock_gettime(CLOCK_REALTIME, &tv) != 0) return 0;
    return (tv.tv_sec + (tv.tv_nsec / 1000000000.0));
}
double start_time = getUnixTime();
double stop_time, difference;
doYourStuff();
stop_time = getUnixTime();
difference = stop_time - start_time;
On most systems its resolution will be down to a few microseconds, but it can vary with different CPUs, and probably even major kernel versions.
<chrono> is the best. Visual Studio 2013 provides this feature. Personally, I have tried all the methods mentioned above, and I strongly recommend the <chrono> library. It can track wall time and at the same time offer good resolution (much finer than a second).
How about gettimeofday()? When it is called, it updates two structs (timeval and timezone) with timing information. Usually, passing a timeval struct is enough and the timezone struct can be set to NULL. The updated timeval struct will have two members, tv_sec and tv_usec. tv_sec is the number of seconds since 00:00:00, January 1, 1970 (the Unix epoch) and tv_usec is the additional number of microseconds beyond tv_sec. Thus, one can get time expressed with very good resolution.
It can be used as follows:
#include <sys/time.h>
struct timeval start_time;
gettimeofday(&start_time, NULL); //timeval is usually enough
long seconds = start_time.tv_sec; //time in seconds
long useconds = start_time.tv_usec; //further time in microseconds
long long desired_time = seconds * 1000000LL + useconds; //time in microseconds

How to get system time in C++?

In fact i am trying to calculate the time a function takes to complete in my program.
So i am using the logic to get system time when i call the function and time when the function returns a value then by subtracting the values i get time it took to complete.
So if anyone can tell me some better approach or just how to get system time at an instance it would be quite a help
The approach I use when timing my code is the time() function. It returns a single numeric value representing the seconds elapsed since the Unix epoch, which makes the subtraction part of the calculation easier.
Relevant code:
#include <ctime>
#include <iostream>

int main (int argc, char *argv[]) {
    time_t startTime, endTime, totalTime;
    startTime = time(NULL);
    /* relevant code to benchmark in here */
    endTime = time(NULL);
    totalTime = endTime - startTime;
    std::cout << "Runtime: " << totalTime << " seconds.";
    return 0;
}
Keep in mind this is wall-clock (real) time. For CPU time, see Ben's reply.
Your question is totally dependent on WHICH system you are using. Each system has its own functions for getting the current time. For finding out how long the system has been running, you'd want to access one of the "high resolution performance counters". If you don't use a performance counter, you are usually limited to millisecond accuracy (or worse), which is almost useless in profiling the speed of a function.
In Windows, you can access the counter via the 'QueryPerformanceCounter()' function. This returns an arbitrary number that is different on each processor. To find out how many ticks in the counter == 1 second, call 'QueryPerformanceFrequency()'.
If you're coding under a platform other than windows, just google performance counter and the system you are coding under, and it should tell you how you can access the counter.
Edit (clarification)
This is C++; just include windows.h and link against "Kernel32.lib" (see the documentation at: http://msdn.microsoft.com/en-us/library/ms644904.aspx). For C#, you can use the "System.Diagnostics.PerformanceCounter" class.
You can use time_t
Under Linux, try gettimeofday() for microsecond resolution, or clock_gettime() for nanosecond resolution.
(Of course the actual clock may have a coarser resolution.)
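For measuring intervals specifically, CLOCK_MONOTONIC is usually a better choice than CLOCK_REALTIME because it is not affected by adjustments to the system clock; a minimal sketch:
#include <stdio.h>
#include <time.h>

struct timespec ts_start, ts_end;
clock_gettime(CLOCK_MONOTONIC, &ts_start);
/* code to measure */
clock_gettime(CLOCK_MONOTONIC, &ts_end);
double elapsed = (ts_end.tv_sec - ts_start.tv_sec)
               + (ts_end.tv_nsec - ts_start.tv_nsec) / 1000000000.0;
printf("Elapsed: %.9f seconds\n", elapsed);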
On some systems you don't have access to the time.h header. In that case, you can use the following code snippet to find out how long your program takes to run, with an accuracy of seconds.
void function()
{
    time_t currentTime;
    time(&currentTime);
    int startTime = currentTime;
    /* Your program starts from here */
    time(&currentTime);
    int timeElapsed = currentTime - startTime;
    std::cout << "It took " << timeElapsed << " seconds to run the program" << std::endl;
}
You can use the std::chrono solution described here: Getting an accurate execution time in C++ (micro seconds). You will have much better accuracy in your measurement. Usually we measure code execution on the order of milliseconds (ms) or even microseconds (us).
#include <chrono>
#include <iostream>
...
[YOUR METHOD/FUNCTION STARTING HERE]
auto start = std::chrono::high_resolution_clock::now();
[YOUR TEST CODE HERE]
auto elapsed = std::chrono::high_resolution_clock::now() - start;
long long microseconds = std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
std::cout << "Elapsed time: " << microseconds << " ms;