I have my own GetTickCount() function returning an unsigned int (the count rolls over to zero at 0xFFFFFFFF).
Can I measure an elapsed time with:
unsigned int elapsed;
unsigned int start = GetTickCount();
LongOperation();
unsigned int stop = GetTickCount();
if (stop >= start)
    elapsed = stop - start;
else
    elapsed = (UINT_MAX - start) + stop + 1; // counter wrapped; same value as stop - start
Is this the same if I do a cast to signed (the time span I measure is always less than what can be represented in a signed integer, which I think is about 24 days)?
int start = (int)GetTickCount();
LongOperation();
int elapsedTime = (int)GetTickCount() - start;
If I look at the .NET Environment.TickCount property:
TickCount will increment from zero to Int32.MaxValue for approximately 24.9 days, then jump to Int32.MinValue, which is a negative number, then increment back to zero during the next 24.9 days.
So when I cast my GetTickCount() function to a signed integer, should I get the .NET behaviour (wrapping occurs at 0x7FFFFFFF -> 0x80000000)?
With this it should be possible to measure the elapsed time as follows (seen in another post):
int start = Environment.TickCount;
DoLongRunningOperation();
int elapsedTime = Environment.TickCount - start;
The prototype for GetTickCount() in C++ in Windows is:
DWORD WINAPI GetTickCount(void);
So, I would code it like this (similar to the other answers):
DWORD start = GetTickCount();
dosomething();
DWORD elapsed = GetTickCount() - start;
This will measure elapsed times up to the maximum number a DWORD can represent (about 49.7 days of milliseconds).
As others have said, with unsigned arithmetic, you don't need to worry about the counter wrapping around - try it yourself...
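For example, here's a minimal self-contained sketch with made-up tick values on either side of the wrap; unsigned subtraction is performed modulo 2^32, so the result is still the true elapsed count:
#include <iostream>

int main()
{
    // Hypothetical tick values: the counter wrapped between the two samples.
    unsigned int start = 0xFFFFFFF0u; // 16 ticks before the wrap to zero
    unsigned int stop  = 0x00000020u; // 32 ticks after the wrap

    unsigned int elapsed = stop - start; // modulo-2^32 subtraction
    std::cout << elapsed << std::endl;   // prints 48
    return 0;
}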
Also check GetTickCount64() and QueryPerformanceCounter()/QueryPerformanceFrequency(). GetTickCount64() will allow you to measure longer intervals, but it is not supported on all versions of Windows, while QueryPerformanceCounter() allows you to measure to much higher resolution and accuracy. For example, on some Windows versions, GetTickCount() may only be accurate to about 18ms while QueryPerformanceCounter() will be better than 1us.
I'm not sure GetTickCount() is the preferred function for your problem.
Can't you just use QueryPerformanceFrequency()? There's a nice example at
http://msdn.microsoft.com/en-us/library/ms644904%28v=VS.85%29.aspx
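The pattern from that example boils down to something like this sketch (LongOperation() is just a placeholder for whatever you are timing):
#include <windows.h>
#include <iostream>

void LongOperation() { Sleep(100); } // stand-in for the work being timed

int main()
{
    LARGE_INTEGER freq, begin, end;
    QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
    QueryPerformanceCounter(&begin);
    LongOperation();
    QueryPerformanceCounter(&end);

    double elapsedMs = (end.QuadPart - begin.QuadPart) * 1000.0 / freq.QuadPart;
    std::cout << elapsedMs << " ms" << std::endl;
    return 0;
}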
In C++, if you stick with unsigned, the code will work:
unsigned int start = gettickcount();
DoLongRunningOperation();
unsigned int elapsedTime = static_cast<unsigned int>(gettickcount()) - start;
The reason you want to stick with unsigned is that unsigned arithmetic is required to behave as modulo arithmetic, which is exactly what you want in this case.
Related
I'm trying to measure the CPU and Wall time for my program.
The code should run on Windows so it's alright to use platform specific functions.
For Wall time I use QueryPerformanceCounter() and it is precise.
When I use GetProcessTimes() I get a 15.625 millisecond precision.
On MSDN it says that the precision of the returned CPU time is 100 nanoseconds.
Here is the code I am using:
void getCPUtime(unsigned long long *pUser, unsigned long long *pKernel) {
    FILETIME user, kernel, exit, start;
    ULARGE_INTEGER userCPU = {0}, kernelCPU = {0};
    if (::GetProcessTimes(::GetCurrentProcess(), &start, &exit, &kernel, &user) != 0) {
        // FILETIME holds the 100-nanosecond count split across two 32-bit halves.
        userCPU.LowPart = user.dwLowDateTime;
        userCPU.HighPart = user.dwHighDateTime;
        kernelCPU.LowPart = kernel.dwLowDateTime;
        kernelCPU.HighPart = kernel.dwHighDateTime;
    }
    *pUser = (unsigned long long)userCPU.QuadPart;
    *pKernel = (unsigned long long)kernelCPU.QuadPart;
}
And I am calling it from:
void someFunction() {
    unsigned long long userStartCPU, userEndCPU, kernelStartCPU, kernelEndCPU;
    double userCPUTime, kernelCPUTime;
    getCPUtime(&userStartCPU, &kernelStartCPU);
    // Do stuff which takes longer than a millisecond
    getCPUtime(&userEndCPU, &kernelEndCPU);
    userCPUTime = (userEndCPU - userStartCPU) / 10000.0;       // 100-ns units -> milliseconds
    kernelCPUTime = (kernelEndCPU - kernelStartCPU) / 10000.0; // 100-ns units -> milliseconds
}
Does anyone know why this is happening, or has any other way to precisely measure CPU time on Windows?
MSDN has this page that outlines using a high resolution timer.
I would recommend looking at Google Benchmark. Looking at the Windows-specific code, you might need to use double instead of integers, as used in the MakeTime function here.
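As a rough sketch of that idea (my approximation, not Google Benchmark's actual MakeTime()), you can convert the FILETIME values from GetProcessTimes() straight to a double number of seconds:
#include <windows.h>

// Combined user + kernel CPU time of the current process, in seconds.
double processCPUSeconds()
{
    FILETIME creation, exit, kernel, user;
    if (!GetProcessTimes(GetCurrentProcess(), &creation, &exit, &kernel, &user))
        return -1.0; // call failed

    ULARGE_INTEGER k, u;
    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;

    // FILETIME counts 100-nanosecond units; there are 1e7 of them per second.
    return static_cast<double>(k.QuadPart + u.QuadPart) * 1e-7;
}
Keep in mind the underlying counter still only advances at the scheduler tick (about 15.625 ms), so the double just avoids further truncation; it does not add precision.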
I need to retrieve the current time point with a precision of microseconds. The time point can be relative to any fixed date.
How can it be achieved? Due to work policy, I really shouldn't use Boost or any other lib.
I'm working at a multiplatform application and under Linux, I can use C++11 system_clock::now().time_since_epoch(), but under Windows I work with VS2010, so I have no std::chrono library.
I've seen the RtlTimeToSecondsSince1970 function, but its resolution is a second.
Timers and timing are a tricky enough subject that, in my opinion, current cross-platform implementations are not quite up to scratch. So I'd recommend a Windows-specific version with appropriate #ifdefs. See other answers if you want a cross-platform version.
If you've got to/want to use a Windows-specific call, then GetSystemTimeAsFileTime (or, on Windows 8, GetSystemTimePreciseAsFileTime) is the best call for getting UTC time, and QueryPerformanceCounter is good for high-resolution timestamps. GetSystemTimeAsFileTime gives back the number of 100-nanosecond intervals since January 1, 1601 (UTC) in a FILETIME structure.
This fine article goes into the gory details of measuring timers and timestamps in Windows and is well worth a read.
EDIT: To convert a FILETIME to microseconds, you need to go via a ULARGE_INTEGER:
FILETIME ft;
GetSystemTimeAsFileTime(&ft);
ULARGE_INTEGER li;
li.LowPart = ft.dwLowDateTime;
li.HighPart = ft.dwHighDateTime;
unsigned long long valueAsHns = li.QuadPart;
unsigned long long valueAsUs = valueAsHns/10;
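If a timestamp relative to an arbitrary fixed point is good enough (the question allows any fixed date), QueryPerformanceCounter can also be turned into microseconds directly; a minimal sketch:
#include <windows.h>

// Microseconds since the performance counter's zero point (roughly system boot).
unsigned long long microsecondsNow()
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);   // counts per second, constant after boot
    QueryPerformanceCounter(&now);

    // Convert whole seconds and the remainder separately to avoid 64-bit overflow.
    unsigned long long seconds   = now.QuadPart / freq.QuadPart;
    unsigned long long remainder = now.QuadPart % freq.QuadPart;
    return seconds * 1000000ULL + (remainder * 1000000ULL) / freq.QuadPart;
}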
This code works for me in VS2010. The constructor tests whether high-precision timing is available on the processor, and currentTime() returns a time stamp in seconds; compare two time stamps to get a delta time. I use this in a game engine to get very small delta-time values. Note that precision isn't limited to whole seconds, even though the return value is expressed in seconds (it's a double).
Basically, you find out how many seconds each counter tick represents with QueryPerformanceFrequency and then read the counter with QueryPerformanceCounter.
////////////////////////
// Grabs speed of processor
////////////////////////
Timer::Timer()
{
    __int64 _iCountsPerSec = 0;
    bool _bPerfExists = QueryPerformanceFrequency((LARGE_INTEGER*)&_iCountsPerSec) != 0;
    if (_bPerfExists)
    {
        m_dSecondsPerCount = 1.0 / static_cast<double>(_iCountsPerSec);
    }
}

////////////////////////
// Returns current real time
////////////////////////
double Timer::currentTime() const
{
    __int64 time = 0;
    QueryPerformanceCounter((LARGE_INTEGER*)&time);
    double timeInSeconds = static_cast<double>(time) * m_dSecondsPerCount;
    return timeInSeconds;
}
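Typical usage looks something like this (doWork() is a hypothetical stand-in for the code being timed, and m_dSecondsPerCount is the member set in the constructor above):
Timer timer;                                         // reads QueryPerformanceFrequency once
double before = timer.currentTime();
doWork();                                            // hypothetical: whatever you want to time
double deltaSeconds = timer.currentTime() - before;  // elapsed time in (fractional) seconds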
The following code works in Visual Studio.
#include <time.h>
clock_t start, end;

int getTicks_u32()
{
    int cpu_time_used;
    end = clock();
    cpu_time_used = (static_cast<int>(end - start)) / CLOCKS_PER_SEC;
    return cpu_time_used;
}

void initSystemClock_bl(void)
{
    start = clock();
}
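Usage is just a matter of calling the init function first; note that the integer division above truncates the result to whole seconds:
initSystemClock_bl();                 // records the starting clock() value
doLongRunningWork();                  // hypothetical: the code being measured
int elapsedSeconds = getTicks_u32();  // whole seconds since initSystemClock_bl()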
I have to time the clock_gettime() function for estimating and profiling other operations, and it's for homework, so I can't use a profiler and have to write my own code.
The way I'm doing it is like below:
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &begin);
for (int i = 0; i <= n; i++)
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
cout << (end.tv_nsec - begin.tv_nsec)/n; // time per clock_gettime()
The problem is that when n=100 the output is 370.63 ns; when n=100000 it is 330 ns; when n=1000000 it is 260 ns; when n=10000000 it is 55 ns; ... it keeps decreasing.
I understand that this is happening because of instruction caching, but I don't know how to handle this in profiling. Because for example when I estimate the time for a function call using gettime, how would I know how much time that gettime used for itself?
Would taking a weighted mean of all these values be a good idea? (I can run the operation I want the same number of times, take weighted mean of that, subtract weighted mean of gettime and get a good estimate of the operation irrespective of caching?)
Any suggestions are welcome.
Thank you in advance.
When you compute the time difference as (end.tv_nsec - begin.tv_nsec)/n, you are only taking the nanoseconds part of the elapsed time into account. You must also take the seconds into account, since the tv_nsec field only reflects the fractional part of a second:
int64_t end_ns = ((int64_t)end.tv_sec * 1000000000) + end.tv_nsec;
int64_t begin_ns = ((int64_t)begin.tv_sec * 1000000000) + begin.tv_nsec;
int64_t elapsed_ns = end_ns - begin_ns;
Actually, with your current code you should sometimes get negative results when the nanoseconds part of end has wrapped around and is less than begin's nanoseconds part.
Fix that, and you'll be able to observe much more consistent results.
Edit: for the sake of completeness, here's the code I used for my tests, which gives me very consistent results (between 280 and 300 ns per call, no matter how many iterations I use):
#include <cstdint>
#include <ctime>
#include <iostream>

int main() {
    const int loops = 100000000;
    struct timespec begin;
    struct timespec end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &begin);
    for (int i = 0; i < loops; i++)
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
    int64_t end_ns = ((int64_t)end.tv_sec * 1000000000) + end.tv_nsec;
    int64_t begin_ns = ((int64_t)begin.tv_sec * 1000000000) + begin.tv_nsec;
    int64_t elapsed_ns = end_ns - begin_ns;
    int64_t ns_per_call = elapsed_ns / loops;
    std::cout << ns_per_call << std::endl;
}
I have found a function to get milliseconds since the Mac was started:
U32 Platform::getRealMilliseconds()
{
    // Duration is an S32 value.
    // If negative, it is in microseconds.
    // If positive, it is in milliseconds.
    Duration durTime = AbsoluteToDuration(UpTime());
    U32 ret;
    if (durTime < 0)
        ret = durTime / -1000;
    else
        ret = durTime;
    return ret;
}
The problem is that after ~20 days AbsoluteToDuration returns INT_MAX all the time until the Mac is rebooted.
I have tried to use the method below; it worked, but it looks like gettimeofday takes more time and slows the game down a bit:
timeval tim;
gettimeofday(&tim, NULL);
U32 ret = ((tim.tv_sec) * 1000 + tim.tv_usec/1000.0) + 0.5;
Is there a better way to get number of milliseconds elapsed since some epoch (preferably since the app started)?
Thanks!
Your real problem is that you are trying to fit an uptime-in-milliseconds value into a 32-bit integer. If you do that your value will always wrap back to zero (or saturate) in 49 days or less, no matter how you obtain the value.
One possible solution would be to track time values with a 64-bit integer instead; that way the day of reckoning gets postponed for a few hundred million years, so you don't have to worry about the problem. Here's a MacOS/X implementation of that:
uint64_t GetTimeInMillisecondsSinceBoot()
{
    return UnsignedWideToUInt64(AbsoluteToNanoseconds(UpTime())) / 1000000;
}
... or if you don't want to return a 64-bit time value, the next-best thing would be to record the current time-in-milliseconds value when your program starts, and then always subtract that value from the values you return. That way things won't break until your own program has been running for at least 49 days, which I suppose is unlikely for a game.
uint32_t GetTimeInMillisecondsSinceProgramStart()
{
    static uint64_t _firstTimeMillis = GetTimeInMillisecondsSinceBoot();
    uint64_t nowMillis = GetTimeInMillisecondsSinceBoot();
    return (uint32_t)(nowMillis - _firstTimeMillis);
}
My preferred method is mach_absolute_time - see this tech note - I use the second method, i.e. mach_absolute_time to get time stamps and mach_timebase_info to get the constants needed to convert the difference between time stamps into an actual time value (with nanosecond resolution).
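A minimal sketch of that second method (my code, not the tech note's): take two mach_absolute_time() stamps and scale the difference with the numer/denom pair from mach_timebase_info:
#include <mach/mach_time.h>
#include <stdint.h>

// Milliseconds between two mach_absolute_time() stamps.
uint64_t elapsedMilliseconds(uint64_t startStamp, uint64_t endStamp)
{
    static mach_timebase_info_data_t sTimebase;
    if (sTimebase.denom == 0)
        mach_timebase_info(&sTimebase);   // fetch the conversion constants once

    uint64_t elapsedNanos = (endStamp - startStamp) * sTimebase.numer / sTimebase.denom;
    return elapsedNanos / 1000000;        // nanoseconds -> milliseconds
}

// Usage: uint64_t t0 = mach_absolute_time(); ...; elapsedMilliseconds(t0, mach_absolute_time());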
timeGetTime seems to be quite good for querying the system time. However, its return value is only 32 bits, so it wraps around approximately every 49 days.
It's not too hard to detect the rollover in calling code, but it adds some complexity and (worse) requires keeping state.
Is there some replacement for timeGetTime that would not have this wrap-around problem (probably by returning a 64-bit value), and have roughly the same precision and cost?
Unless you need to time an event that is over 49 days, you can SAFELY ignore the wrap-around. Just always subtract the previous timeGetTime() from the current timeGetTime() and you will always obtain a delta measured time that is accurate, even across wrap-around -- provided that you are timing events whose total duration is under 49 days. This all works due to how unsigned modular math works inside the computer.
// this code ALWAYS works, even with wrap-around!
DWORD dwStart = timeGetTime();
// provided the event timed here has a duration of less than 49 days
DWORD dwDuration = timeGetTime()-dwStart;
TIP: look into timeBeginPeriod(1) to increase the accuracy of timeGetTime().
BUT... if you want a 64-bit version of timeGetTime, here it is:
__int64 timeGetTime64() {
    static __int64 time64 = 0;
    // warning: if multiple threads call this function, protect with a critical section!
    return (time64 += (timeGetTime() - (DWORD)time64));
}
Please note that if this function is not called at least once every 49 days, that this function will fail to properly detect a wrap-around.
What platform?
You could use GetTickCount64() if you're running on Vista or later, or synthesise your own GetTickCount64() from GetTickCount() and a timer...
I deal with the rollover issue in GetTickCount() and synthesising a GetTickCount64() on platforms that don't support it here on my blog about testing non-trivial code: http://www.lenholgate.com/blog/2008/04/practical-testing-17---a-whole-new-approach.html
Nope, tracking roll-over requires state. It can be as simple as just incrementing your own 64-bit counter on each callback.
It is pretty unusual to want to track time periods at a resolution as fine as 1 millisecond for as long as 49 days; you'd have to wonder whether the accuracy is still there after such a long period. The next step up is to use the clock: GetTickCount(64) and GetSystemTimeAsFileTime have a resolution of 15.625 milliseconds and are kept accurate by a time server.
Have a look at GetSystemTimeAsFileTime(). It fills a FILETIME struct that contains a "64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)"
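A minimal sketch of reading that into a single 64-bit millisecond value (the epoch is 1601, not system boot, so treat it as something to take differences of):
#include <windows.h>

// Milliseconds since January 1, 1601 (UTC); as a 64-bit value it effectively never wraps.
unsigned long long systemTimeMillis()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);

    ULARGE_INTEGER li;
    li.LowPart  = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;

    return li.QuadPart / 10000;   // 100-nanosecond units -> milliseconds
}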
How are you trying to use it? I frequently use the Win32 equivalent when checking for durations that I know will be under 49 days. For example the following code will always work.
DWORD start = timeGetTime();
DoSomthingThatTakesLessThen49Days();
DWORD duration = timeGetTime() - start;
Even if timeGetTime rolled over while DoSomthingThatTakesLessThen49Days was running, duration will still be correct.
Note the following code could fail on rollover.
DWORD start = timeGetTime();
DoSomthingThatTakesLessThen49Days();
if (start + 5000 < timeGetTime())
{
}
but it can easily be rewritten to work as follows:
DWORD start = timeGetTime();
DoSomthingThatTakesLessThen49Days();
if (timeGetTime() - start < 5000)
{
}
Assuming you can guarantee that this function will called at least once every 49 days, something like this will work:
// Returns current time in milliseconds
uint64_t timeGetTime64()
{
    static uint32_t _prevVal    = 0;
    static uint64_t _wrapOffset = 0;

    uint32_t newVal = (uint32_t) timeGetTime();
    if (newVal < _prevVal) _wrapOffset += (((uint64_t)1) << 32);
    _prevVal = newVal;
    return _wrapOffset + newVal;
}
Note that due to the use of static variables, this function isn't multithread-safe, so if you plan on calling it from multiple threads you should serialize it via a critical section or mutex or similar.
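For example, a hedged sketch of such a wrapper, assuming a C++11 compiler so std::mutex is available (a Win32 CRITICAL_SECTION would work the same way):
#include <cstdint>
#include <mutex>

// Serializes access to the static state inside timeGetTime64() above.
uint64_t timeGetTime64ThreadSafe()
{
    static std::mutex s_mutex;                  // shared by all callers
    std::lock_guard<std::mutex> lock(s_mutex);
    return timeGetTime64();
}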
I'm not sure if this fully meets your needs, but
std::chrono::system_clock
might be along the lines of what you're looking for.
http://en.cppreference.com/w/cpp/chrono/system_clock
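A minimal sketch of that idea, assuming a C++11 compiler:
#include <chrono>
#include <cstdint>

// 64-bit milliseconds since the system_clock epoch; no 49-day wrap to worry about.
uint64_t nowMilliseconds()
{
    using namespace std::chrono;
    return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}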
You could use the RDTSC intrinsic. To get time in milliseconds, you can compute a transformation coefficient first:
#include <windows.h>   // Sleep
#include <intrin.h>    // __rdtsc

double get_rdtsc_coeff() {
    static double coeff = 0.0;
    if (coeff < 1.0) { // count it only once
        unsigned __int64 t00 = __rdtsc();
        Sleep(1000);
        unsigned __int64 t01 = __rdtsc();
        coeff = (t01 - t00) / 1000.0; // TSC ticks per millisecond
    }
    return coeff; // transformation coefficient
}
Now you could get count of milliseconds from the last reset:
__int64 get_ms_from_start() {
    return static_cast<__int64>(__rdtsc() / get_rdtsc_coeff());
}
If your system uses SpeedStep or similar technologies, you could use the QueryPerformanceCounter/QueryPerformanceFrequency functions instead. Windows guarantees that the frequency cannot change while the system is running.
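A sketch of that alternative, mirroring get_ms_from_start() above but built on the performance counter instead of RDTSC:
#include <windows.h>

// Milliseconds since the performance counter's zero point (roughly boot).
__int64 get_ms_from_start_qpc()
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);   // counts per second, stable while running
    QueryPerformanceCounter(&now);

    // Split into whole seconds and remainder to avoid 64-bit overflow.
    __int64 seconds   = now.QuadPart / freq.QuadPart;
    __int64 remainder = now.QuadPart % freq.QuadPart;
    return seconds * 1000 + (remainder * 1000) / freq.QuadPart;
}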