Most efficient way to retrieve a timer? - c++

I've never actually worked with timers before but I need one for my current project.
So this might be a silly question, but what's the 'normal' way to retrieve a timer for a game, and is there a better/more efficient way?
Thanks

Since you may want the elapsed time, and it may be very small, you might want to use the clock() function defined in time.h.
Here is what I found about it in the MSDN Library:
Calculates the wall-clock time used by the calling process.
clock_t clock( void );
Return Value
The elapsed wall-clock time since the start of the process (elapsed time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time is unavailable, the function returns –1, cast as a clock_t.
Remarks
The clock function tells how much time the calling process has used. A timer tick is approximately equal to 1/CLOCKS_PER_SEC second. In versions of Microsoft C before 6.0, the CLOCKS_PER_SEC constant was called CLK_TCK.
Example:
// crt_clock.c
// This example prompts for how long
// the program is to run and then continuously
// displays the elapsed time for that period.
//
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void sleep( clock_t wait );

int main( void )
{
    long i = 6000000L;
    clock_t start, finish;
    double duration;

    // Delay for a specified time.
    printf( "Delay for three seconds\n" );
    sleep( (clock_t)3 * CLOCKS_PER_SEC );
    printf( "Done!\n" );

    // Measure the duration of an event.
    printf( "Time to do %ld empty loops is ", i );
    start = clock();
    while( i-- )
        ;
    finish = clock();
    duration = (double)(finish - start) / CLOCKS_PER_SEC;
    printf( "%2.1f seconds\n", duration );
}

// Pauses for a specified number of milliseconds.
void sleep( clock_t wait )
{
    clock_t goal;
    goal = wait + clock();
    while( goal > clock() )
        ;
}
Output
Delay for three seconds
Done!
Time to do 6000000 empty loops is 0.1 seconds

If you want a cross-platform and performant time library, use boost::date_time. For timers, just get the current time and subtract it from the next reading (the library provides operators for computing time differences, so the code stays readable).
The current time is read using boost::posix_time::microsec_clock::universal_time() and stored in a ptime struct. (The posix_ prefix does not mean it is available only on POSIX systems; it only indicates that it is modeled after POSIX time concepts.)
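A minimal sketch of that pattern (assuming Boost's date_time headers are available; the timed work is a placeholder):
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using boost::posix_time::microsec_clock;
    using boost::posix_time::ptime;
    using boost::posix_time::time_duration;

    ptime start = microsec_clock::universal_time();
    // ... game/frame work to be timed goes here ...
    ptime end = microsec_clock::universal_time();

    time_duration elapsed = end - start;  // difference operators are built in
    std::cout << elapsed.total_microseconds() << " us\n";
}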

If you are using C++ on Windows you will want to use QueryPerformanceCounter/QueryPerformanceFrequency:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644905(v=vs.85).aspx
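A minimal sketch of the usual QueryPerformanceCounter pattern (my own illustration, not from the linked pages):
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER frequency, start, end;
    QueryPerformanceFrequency(&frequency);  // counter ticks per second
    QueryPerformanceCounter(&start);
    // ... code being timed ...
    QueryPerformanceCounter(&end);
    double seconds = double(end.QuadPart - start.QuadPart) / double(frequency.QuadPart);
    std::cout << seconds << " s\n";
}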
If you are on Linux, check out clock_gettime(CLOCK_REALTIME):
http://linux.die.net/man/3/clock_gettime
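And a corresponding sketch for clock_gettime (may need -lrt on older glibc; note that CLOCK_MONOTONIC is often preferred over CLOCK_REALTIME for intervals, since it is immune to wall-clock adjustments):
#include <time.h>
#include <iostream>

int main()
{
    timespec start, end;
    clock_gettime(CLOCK_REALTIME, &start);
    // ... code being timed ...
    clock_gettime(CLOCK_REALTIME, &end);
    double seconds = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    std::cout << seconds << " s\n";
}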
The clock() suggestion is incorrect, as it measures time used by the process. Since there is a loop, his function will end up being correct, but if you block then this will not work.
http://linux.die.net/man/3/clock

Related

how to write a measurement function for multithreaded function [duplicate]

I am running a .cpp code (i) in sequential style and (ii) using OpenMP statements. I am trying to see the time difference. For calculating time, I use this:
#include <time.h>
.....
main()
{
    clock_t start, finish;
    start = clock();
    .
    .
    .
    finish = clock();
    double processing_time = (double)(finish - start) / CLOCKS_PER_SEC;
}
The time is pretty accurate in the sequential (above) run of the code: it takes about 8 seconds. When I insert OpenMP statements in the code and thereafter calculate the time, I get a reduction in time, but the time displayed is about 8-9 seconds on the console, when actually it's just 3-4 seconds in real time!
Here is how my code looks abstractly:
#include <time.h>
.....
main()
{
    clock_t start, finish;
    start = clock();
    .
    .
    #pragma omp parallel for
    for( ... )
        for( ... )
            for (...)
            {
                ...;
            }
    .
    .
    finish = clock();
    double processing_time = (double)(finish - start) / CLOCKS_PER_SEC;
}
When I run the above code, I get the reduction in time, but the time displayed is not accurate in terms of real time. It seems to me as though the clock() function is calculating each thread's individual time and adding them up and displaying them.
Can someone tell me the reason for this, or suggest any other timing function to use to measure the time in OpenMP programs?
Thanks.
It seems to me as though the clock() function is calculating each thread's individual time and adding them up and displaying them.
This is exactly what clock() does - it measures the CPU time used by the process, which at least on Linux and Mac OS X means the cumulative CPU time of all threads that have ever existed in the process since it was started.
Real-clock (a.k.a. wall-clock) timing of OpenMP applications should be done using the high resolution OpenMP timer call omp_get_wtime(), which returns a double value of the number of seconds since an arbitrary point in the past. It is a portable function, i.e. it exists in both Unix and Windows OpenMP run-times, unlike gettimeofday(), which is Unix-only.
I've seen clock() reporting CPU time instead of real time. To time things you could instead use:
struct timeval start, end;
double delta;

gettimeofday(&start, NULL);
// benchmark code
gettimeofday(&end, NULL);

delta = ((end.tv_sec - start.tv_sec) * 1000000u +
         end.tv_usec - start.tv_usec) / 1.e6;  // elapsed seconds
You could use the built-in omp_get_wtime function from the OpenMP runtime itself. The following is an example code snippet to find the execution time:
#include <stdio.h>
#include <omp.h>

int main()
{
    double itime, ftime, exec_time;
    itime = omp_get_wtime();
    // Required code for which execution time needs to be computed
    ftime = omp_get_wtime();
    exec_time = ftime - itime;
    printf("\n\nTime taken is %f\n", exec_time);
    return 0;
}
Well yes, that's what clock() is supposed to do: tell you how much processor time the program used.
If you want to find elapsed real time, instead of CPU time, use a function that returns wall clock time, such as gettimeofday().
#include "ctime"
std::time_t start, end;
long delta = 0;
start = std::time(NULL);
// do your code here
end = std::time(NULL);
delta = end - start;
// output delta

Sleep() function usage

This is a sample program to check the functionality of the Sleep() function. This is a demo only, since I am using these Sleep() and clock() functions in my app development.
// TestTicks.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include <iomanip>
#include <ctime>     // for clock()
#include <Windows.h>

int _tmain(int argc, _TCHAR* argv[])
{
    int i, i2;
    i = clock();
    //std::cout << " \nTime before Sleep() : " << i;
    Sleep(30000);
    i2 = clock();
    //std::cout << " \nTime After Sleep() : " << i2;
    std::cout << "\n Diff : " << i2 - i;
    getchar();
    return 0;
}
In this code I am calculating the time using clock() before and after the Sleep() call.
Since I am using Sleep(30000), the time diff should be at least 30000.
I have run this program many times and printed outputs of 30000, 30001, 30002. These are OK. But sometimes I get values like 29999 and 29997. How is this possible, since I put a 30000 ms sleep between the clock() calls?
Please give me the reason for this.
According to http://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx:
The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
It just means that the Sleep function will never sleep exactly for the amount of time given, but as close as possible given the resolution of the scheduler.
The same page gives you a method to increase the timer resolution if you really need it.
There are also high resolution timers that may better fit your needs.
Unless you're using a realtime OS, that is very much expected.
The operating system has to schedule and run many other processes, so waking yours up may not match the exact time you wanted to sleep.
The clock() function tells how much processor time the calling process has used.
You may replace the use of clock() with the function GetSystemTimeAsFileTime in order to measure the time more accurately.
Also, you may try to use timeBeginPeriod with the wPeriodMin value returned by a call to timeGetDevCaps in order to obtain the maximum interrupt frequency.
In order to synchronize the sleeps with the system interrupt period, I'd also suggest having a Sleep(1) ahead of the first "time capture". By doing so, the "too shorts" will disappear.
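A rough sketch of that combination (my own illustration; timeGetDevCaps/timeBeginPeriod live in mmsystem.h and need winmm.lib):
#include <windows.h>
#include <mmsystem.h>  // timeGetDevCaps, timeBeginPeriod; link with winmm.lib
#include <iostream>

// Read the system time in 100-ns units since 1601-01-01.
static long long NowFileTime()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return (long long)u.QuadPart;
}

int main()
{
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(tc));
    timeBeginPeriod(tc.wPeriodMin);  // request the maximum interrupt frequency

    Sleep(1);                        // sync with the interrupt period first
    long long before = NowFileTime();
    Sleep(30000);
    long long after = NowFileTime();
    std::cout << "Diff : " << (after - before) / 10000 << " ms\n";

    timeEndPeriod(tc.wPeriodMin);    // restore the previous resolution
    return 0;
}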
More information about sleep can be found here
#include <iostream>
#include <time.h>

void wait(int seconds)
{
    clock_t endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait) {}
}

int main()
{
    wait(2);
    std::cout << "2 seconds have passed";
}

Calculating time length between operations in c++

The program is a middleware between a database and an application. For each database access I must calculate the time length in milliseconds. The example below uses TDateTime from the Builder library. I must, as far as possible, use only standard C++ libraries.
AnsiString TimeInMilliseconds(TDateTime t) {
    Word Hour, Min, Sec, MSec;
    DecodeTime(t, Hour, Min, Sec, MSec);
    long ms = MSec + Sec * 1000 + Min * 1000 * 60 + Hour * 1000 * 60 * 60;
    return IntToStr(ms);
}

// computing times
TDateTime SelectStart = Now();
sql_manipulation_statement();
TDateTime SelectEnd = Now();
On both Windows and POSIX-compliant systems (Linux, OSX, etc.), you can measure elapsed time in units of 1/CLOCKS_PER_SEC (timer ticks) using clock(), found in <ctime>. The return value from that call is the elapsed time since the program started running, in clock ticks; divide by CLOCKS_PER_SEC to convert to seconds. Two calls to clock() can then be subtracted from each other to calculate the running time of a given block of code.
So for example:
#include <ctime>
#include <cstdio>

clock_t time_a = clock();
//...run block of code
clock_t time_b = clock();

if (time_a == ((clock_t)-1) || time_b == ((clock_t)-1))
{
    perror("Unable to calculate elapsed time");
}
else
{
    unsigned int total_time_ticks = (unsigned int)(time_b - time_a);
}
Edit: You are not going to be able to directly compare the timings from a POSIX-compliant platform to a Windows platform, because on Windows clock() measures wall-clock time, whereas on a POSIX system it measures elapsed CPU time. But it is a function in a standard C++ library, and for comparing performance between different blocks of code on the same platform, it should fit your needs.
On Windows you can use GetTickCount (MSDN), which will give the number of milliseconds that have elapsed since the system was started. Using this before and after the call, you get the number of milliseconds the call took.
DWORD start = GetTickCount();
//Do your stuff
DWORD end = GetTickCount();
cout << "the call took " << (end - start) << " ms";
Edit:
As Jason mentioned, clock() would be better because it is not Windows-only.

How to get system time in C++?

In fact I am trying to calculate the time a function takes to complete in my program.
So I am using the logic of getting the system time when I call the function, and again when the function returns a value; by subtracting the values I get the time it took to complete.
So if anyone can tell me a better approach, or just how to get the system time at an instant, it would be quite a help.
The approach I use when timing my code is the time() function. It returns a single numeric value representing the seconds since the epoch, which makes the subtraction part easier for calculation.
Relevant code:
#include <time.h>
#include <iostream>

int main (int argc, char *argv[]) {
    time_t startTime, endTime, totalTime;
    startTime = time(NULL);
    /* relevant code to benchmark in here */
    endTime = time(NULL);
    totalTime = endTime - startTime;
    std::cout << "Runtime: " << totalTime << " seconds.";
    return 0;
}
Keep in mind this is user time. For CPU time, see Ben's reply.
Your question is totally dependent on WHICH system you are using. Each system has its own functions for getting the current time. For finding out how long the system has been running, you'd want to access one of the "high resolution performance counters". If you don't use a performance counter, you are usually limited to microsecond accuracy (or worse), which is almost useless when profiling the speed of a function.
In Windows, you can access the counter via the 'QueryPerformanceCounter()' function. This returns an arbitrary number that is different on each processor. To find out how many ticks in the counter == 1 second, call 'QueryPerformanceFrequency()'.
If you're coding under a platform other than windows, just google performance counter and the system you are coding under, and it should tell you how you can access the counter.
Edit (clarification)
This is C++; just include windows.h and link against Kernel32.lib (the site seems to have removed my hyperlink; check out the documentation at: http://msdn.microsoft.com/en-us/library/ms644904.aspx). For C#, you can use the "System.Diagnostics.PerformanceCounter" class.
You can use time_t
Under Linux, try gettimeofday() for microsecond resolution, or clock_gettime() for nanosecond resolution.
(Of course the actual clock may have a coarser resolution.)
On some systems you don't have access to the time.h header. Therefore, you can use the following code snippet to find out how long your program takes to run, with an accuracy of seconds.
void function()
{
    time_t currentTime;
    time(&currentTime);
    time_t startTime = currentTime;

    /* Your program starts from here */

    time(&currentTime);
    int timeElapsed = (int)(currentTime - startTime);
    std::cout << "It took " << timeElapsed << " seconds to run the program" << std::endl;
}
You can use the solution with std::chrono described here: Getting an accurate execution time in C++ (micro seconds). You will have much better accuracy in your measurement; usually we measure code execution on the order of milliseconds (ms) or even microseconds (us).
#include <chrono>
#include <iostream>
...
[YOUR METHOD/FUNCTION STARTING HERE]
auto start = std::chrono::high_resolution_clock::now();
[YOUR TEST CODE HERE]
auto elapsed = std::chrono::high_resolution_clock::now() - start;
long long microseconds = std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
std::cout << "Elapsed time: " << microseconds << " ms;