Linux/OSX Clock Resolution with millisecond accuracy? - c++

On Windows, I can call QueryPerformanceCounter to get high-resolution data points, but this method is affected by issues with the BIOS, multi-core CPUs, and some AMD chips. I can call timeBeginPeriod to increase the system clock resolution in Windows down to 1ms (instead of the standard ~15ms), which means that I can just call timeGetTime and get the time at the clock resolution that I've specified.
So! On OSX/Linux, what C++ clock resolutions should I expect? Can I get 1ms resolution similar to Windows? Since I'm doing real-time media, I want this clock resolution to be as low as possible: can I change this value in the kernel (like in Windows with timeBeginPeriod)? This is a high-performance application, so I want getting the current time to be a fast function call. And I'd like to know whether the clock generally drifts or what weird problems I can expect.
Thanks!
Brett

If you are using C++11 you can use std::chrono::high_resolution_clock which should give you as high a resolution clock as the system offers. To get a millisecond duration you would do
#include <chrono>

typedef std::chrono::high_resolution_clock my_clock;

my_clock::time_point start = my_clock::now();
// Do stuff
my_clock::time_point end = my_clock::now();

// Truncate the elapsed time to whole milliseconds.
std::chrono::milliseconds ms_duration =
    std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
If you aren't using C++11, the gettimeofday function works on OSX and most Linux distributions. It gives you the time since the epoch in seconds and microseconds. The resolution is unspecified, but it should give you at least millisecond accuracy on any modern system.
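As a minimal sketch, you can build a microsecond timestamp from gettimeofday() like this (the helper name now_us is just for illustration):

#include <sys/time.h>

// Microseconds since the Unix epoch, built from gettimeofday().
long long now_us()
{
    struct timeval tv;
    gettimeofday(&tv, 0);   // second argument (timezone) is obsolete, pass 0
    return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}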

To add to David's answer, if you can't use C++11, Boost's Timer classes can help you.
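If it helps, here is a minimal sketch using boost::timer::cpu_timer from Boost.Timer (this assumes Boost.Timer is installed and linked; it is not header-only):

#include <boost/timer/timer.hpp>
#include <iostream>

int main()
{
    boost::timer::cpu_timer timer;   // starts timing on construction

    // ... work to be measured ...

    boost::timer::cpu_times t = timer.elapsed();   // wall, user and system time in nanoseconds
    std::cout << "wall: " << t.wall << " ns, user: " << t.user
              << " ns, system: " << t.system << " ns\n";
    return 0;
}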

Related

Accurate C/C++ clock on a multi-core processor with auto-overclock?

I have looked into several topics to try to get some ideas on how to make a reliable clock with C or C++. However, I also saw that some functions use the processor's ticks and ticks-per-second to calculate the end result, which I think could be a problem on a CPU with auto-overclock like the one I have. I also saw that one of them resets after a while, and thus is not really reliable.
The idea is to make a (preferably cross-platform) clock like an in-game one, with a precision better than a second in order to be able to add the elapsed time in the "current session" with the saved time at the end of the program. This would be to count the time spent on a console game that does not have an in-game clock, and on the long run to perhaps integrate it to actual PC games.
It should be able to run without taking too much or all of the CPU's time (or a single core's time for multi-core CPUs) as it would be quite bad to use all these resources just for the clock, and also on systems with auto-overclock (which could otherwise cause inaccurate results).
The program I would like to implement this feature into currently looks like this, but I might re-code it in C (since I have to get back to learning how to code in C++):
#include <iostream>
#include <cstdlib>
using namespace std;
int main()
{
    cout << "In game" << endl;
    system("PAUSE");
    return 0;
}
On a side-note, I still need to get rid of the PAUSE feature which is Windows-specific, but I think that can be taken care of with a simple "while (char != '\n')" loop.
What I have already skimmed through:
Using clock() to measure execution time
Calculating elapsed time in a C program in milliseconds
Time stamp in the C programming language
Execution time of C program
C: using clock() to measure time in multi-threaded programs
Is gettimeofday() guaranteed to be of microsecond resolution?
How to measure time in milliseconds using ANSI C?
C++ Cross-Platform High-Resolution Timer
Timer function to provide time in nano seconds using C++
How to measure cpu time and wall clock time?
How can I measure CPU time and wall clock time on both Linux/Windows?
how to measure time?
resolution of std::chrono::high_resolution_clock doesn't correspond to measurements
C++ How to make timer accurate in Linux
http://gameprogrammingpatterns.com/game-loop.html
clock() accuracy
std::chrono doesn't seem to be giving accurate clock resolution/frequency
clock function in C++ with threads
(Edit: Extra research, in particular for a C implementation:
Cross platform C++ High Precision Event Timer implementation (no real answer)
Calculating Function time in nanoseconds in C code (Windows)
How to print time difference in accuracy of milliseconds and nanoseconds? (could be the best answer for a C implementation)
How to get duration, as int milli's and float seconds from <chrono>? (C++ again) )
The problem is that it is not clear whether some of the mentioned methods, like Boost or SDL2, behave properly with auto-overclock in particular.
TL;DR: What cross-platform function should I use to make an accurate, sub-second-precision counter in C/C++ that works on multi-core and/or auto-overclocking processors, please?
Thanks in advance.
The std::chrono::high_resolution_clock seems to be what you are looking for. On most modern CPUs it is a steady, monotonically increasing clock that is not affected by overclocking of the CPU.
Just keep in mind that it can't be used to tell the time of day. It is only good for measuring time intervals, which is a significant difference. For example:
#include <chrono>
#include <iostream>

using hr_clock = std::chrono::high_resolution_clock;

auto start = hr_clock::now();
perform_operation();
auto end = hr_clock::now();

// Elapsed time, truncated to whole microseconds.
auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
std::cout << "Operation took " << us << " microseconds.\n";
If checking the clock is itself a performance-sensitive operation, you will have to resort to platform-specific tricks, of which the most popular is reading the CPU tick counter directly (RDTSC on the Intel family). This is a very fast and, on modern CPUs, very accurate way of measuring time intervals.
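As a rough illustration (not part of the original answer), reading the TSC via compiler intrinsics could look like this; __rdtsc comes from MSVC's <intrin.h> and GCC/Clang's <x86intrin.h>:

#include <cstdint>
#if defined(_MSC_VER)
#include <intrin.h>
#else
#include <x86intrin.h>
#endif

// Raw CPU timestamp counter. On modern x86 CPUs the TSC is typically invariant
// (constant rate regardless of frequency scaling), but converting ticks to
// seconds still requires calibrating against a known clock.
inline std::uint64_t read_tsc()
{
    return __rdtsc();
}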

In C++11, what is the fastest way to get system ticks/time?

My program frequently calls the WinAPI function timeGetTime(), which should be replaced with <chrono> (the standard library). What is the fastest standardized way to get the system time, as a float or int, for my case?
I do not need to track the date or the time of day; I only need a precise relative ms/seconds value that always increments. Is there any?
For benchmarking, you likely want std::chrono::high_resolution_clock. However, it is not guaranteed to be steady, in the sense that it "always increments". The only clock that guarantees steadiness is std::chrono::steady_clock.
The best steady clock would then be:
using ClockType = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>::type;
Note that high_resolution_clock could itself simply be an alias for steady_clock.
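For instance, a small helper built on that alias (repeated here so the snippet is self-contained; the name relative_ms is hypothetical) gives an always-incrementing millisecond value, similar in spirit to timeGetTime():

#include <chrono>
#include <cstdint>
#include <type_traits>

using ClockType = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>::type;

// Milliseconds elapsed since the first call; monotonically non-decreasing.
std::int64_t relative_ms()
{
    static const ClockType::time_point start = ClockType::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        ClockType::now() - start).count();
}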
If you need precise relative milliseconds, you're looking for something system-specific, not a wall clock.
You tagged the question with WinAPI, so I assume this is Windows-specific.
For Windows, that is the High Resolution Timer. This technology allows you to precisely calculate relative times (how much time is spent in a certain function call, for example).
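For illustration, relative timing with the usual Windows high-resolution timer functions, QueryPerformanceCounter and QueryPerformanceFrequency, might look roughly like this:

#include <windows.h>

// Time a block of code in microseconds with the performance counter.
long long time_block_us()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
    QueryPerformanceCounter(&start);

    // ... code being timed ...

    QueryPerformanceCounter(&end);
    return (end.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
}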

Why is there no boost::date_time with microsec resolution on Windows?

On Win32 systems, boost::date_time::microsec_clock() is implemented using ftime, which provides only millisecond resolution: Link to doc
There are some questions/answers on Stackoverflow stating this and linking the documentation, but not explaining why that is the case:
Stackoverflow #1
Stackoverflow #2
There seemingly are ways to implement microsecond resolution on Windows:
GetSystemTimePreciseAsFileTime (Win8++)
QueryPerformanceCounter
What I'm interested in is why Boost implemented it that way, when in turn there are possibly solutions that would be more fitting?
QueryPerformanceCounter can't help you with this problem. It gives you a timestamp, but since you don't know when the counter started, there is no reliable way to calculate an absolute time point from it. boost::date_time is such a (user-understandable) time point.
The other difference is that a counter like QueryPerformanceCounter gives you a steadily increasing timer, while the system time can be influenced by the user and can therefore jump.
So the two serve different use cases: one represents real (calendar) time, the other provides precise timing within the software and for benchmarking.
GetSystemTimePreciseAsFileTime seems to fit the bill for a high resolution absolute time. I guess it wasn't used because it requires Windows8.
GetSystemTimePreciseAsFileTime only became available with Windows 8 desktop applications. It mimics Linux's gettimeofday. The implementation uses QueryPerformanceCounter to achieve the microsecond resolution. Timestamps are taken at the time of a system time increment; subsequent calls to GetSystemTimePreciseAsFileTime take the system time and add the elapsed "performance counter time" (elapsed ticks / performance counter frequency) as the high-resolution part.
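A rough sketch of that combination scheme (not the actual Windows implementation, which resynchronizes at system time increments and handles further corner cases) could look like this:

#include <windows.h>

// Captured once at startup: a system file time baseline and the matching
// performance counter reading.
static FILETIME      g_baseFileTime;
static LARGE_INTEGER g_baseQpc;
static LARGE_INTEGER g_qpcFreq;

void init_precise_time()
{
    QueryPerformanceFrequency(&g_qpcFreq);
    GetSystemTimeAsFileTime(&g_baseFileTime);
    QueryPerformanceCounter(&g_baseQpc);
}

// System time in 100 ns units: baseline file time plus elapsed QPC time.
unsigned long long precise_file_time_100ns()
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    unsigned long long elapsed100ns =
        (unsigned long long)(now.QuadPart - g_baseQpc.QuadPart) * 10000000ULL / g_qpcFreq.QuadPart;

    ULARGE_INTEGER base;
    base.LowPart  = g_baseFileTime.dwLowDateTime;
    base.HighPart = g_baseFileTime.dwHighDateTime;
    return base.QuadPart + elapsed100ns;
}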
The functionality of QueryPerformanceCounter again depends on platform-specific details (HPET, ACPI PM timer, invariant TSC, etc.). See MSDN: Acquiring high-resolution time stamps and SO: Is QueryPerformanceFrequency acurate when using HPET? for details.
The various versions of Windows have specific schemes for updating the system time. Windows XP has a fixed file time granularity which is independent of the system timer resolution. Only post-XP versions allow the system time granularity to be modified by changing the system timer resolution.
This can be accomplished by means of the multimedia timer API timeBeginPeriod and/or the hidden API NtSetTimerResolution (see this SO answer for more details about using timeBeginPeriod and NtSetTimerResolution).
As stated, GetSystemTimePreciseAsFileTime is only available for desktop applications. The reason for this is the need for specific hardware.
What I'm interested in is why Boost implemented it that way, when in turn there are possibly solutions that would be more fitting?
Taking into account the facts stated above, such an implementation would be very complex and the result very platform-specific. Every (!) Windows version has undergone significant changes to its timekeeping. Even the latest small step from 8 to 8.1 changed the timekeeping procedure considerably. However, there is still room to further improve time matters on Windows.
I should mention that GetSystemTimePreciseAsFileTime is, as of Windows 8.1, not giving results as accurate as expected or as specified at MSDN: GetSystemTimePreciseAsFileTime function. It combines the system file time with the result of QueryPerformanceCounter to fill the gap between consecutive file time increments, but it does not take system time adjustments into account. An active system time adjustment, e.g. one done by SetSystemTimeAdjustment, modifies the system time granularity and the progress of the system time. However, the performance counter frequency used to build the result of GetSystemTimePreciseAsFileTime is kept constant. As a result, the microseconds part is off by the adjustment gain set by SetSystemTimeAdjustment.

Get time stamp via Boost.Chrono in resolution of nanoseconds

Does Boost.Chrono provide time stamps with nanosecond resolution? If yes, how do I get such a time stamp?
Nanosecond resolution? On which hardware do you want to run your program?
On my PC, the performance counter has a frequency of approx. 4 MHz, so a tick lasts 250 ns.
As answered here, Boost.Chrono can give you nanosecond resolution, but you will not be sure of the measurement's accuracy.
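A minimal sketch of reading a nanosecond-granularity timestamp with Boost.Chrono (granularity, not guaranteed accuracy):

#include <boost/chrono.hpp>
#include <iostream>

int main()
{
    boost::chrono::high_resolution_clock::time_point tp =
        boost::chrono::high_resolution_clock::now();

    // Nanoseconds since the clock's epoch; the real tick size depends on the hardware.
    boost::chrono::nanoseconds ns =
        boost::chrono::duration_cast<boost::chrono::nanoseconds>(tp.time_since_epoch());

    std::cout << "Timestamp: " << ns.count() << " ns\n";
    return 0;
}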
In order to easily get time stamps with boost chrono for different measurements you can use boost CPU Timers. A table about the timer accuracy is also given on this site.
To measure the resolution yourself on your specific hardware use boost's cpu_timer_info.cpp.

Getting the System tick count with basic C++?

I essentially want to reconstruct the GetTickCount() Windows function so I can use it in basic C++ without any non-standard libraries or even the STL (so that it works with the libraries supplied with the Android NDK).
I have looked at
clock()
localtime
time
But I'm still unsure whether it is possible to replicate the getTickCount windows function with the time library.
Can anyone point me in the right direction as to how to do this, or even whether it's possible?
An overview of what I want to do:
I want to be able to calculate how long an application has been "doing" a certain function.
So for example I want to be able to calculate how long the application has been trying to register with a server
I am trying to port it from windows to run on the linux based Android, here is the windows code:
int TimeoutTimer::GetSpentTime() const
{
    if (m_On)
    {
        if (m_Freq > 1)
        {
            unsigned int now;
            QueryPerformanceCounter((int*)&now);
            return (int)((1000 * (now - m_Start)) / m_Freq);
        }
        else
        {
            return (GetTickCount() - (int)m_Start);
        }
    }
    return -1;
}
On Android NDK you can use the POSIX clock_gettime() call, which is part of libc. This function is where various Android timer calls end up.
For example, java.lang.System.nanoTime() is implemented with:
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
return (u8)now.tv_sec*1000000000LL + now.tv_nsec;
This example uses the monotonic clock, which is what you want when computing durations. Unlike the wall clock (available through gettimeofday()), it won't skip forward or backward when the device's clock is changed by the network provider.
The Linux man page for clock_gettime() describes the other clocks that may be available, such as the per-thread elapsed CPU time.
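Putting that together, a GetTickCount()-style millisecond counter based on clock_gettime() could look like this (the helper name monotonic_ms is just for illustration):

#include <stdint.h>
#include <time.h>

// Monotonic millisecond counter, unaffected by wall-clock changes.
int64_t monotonic_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}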
clock() works very similarly to Windows's GetTickCount(). The units may be different: GetTickCount() returns milliseconds, while clock() returns CLOCKS_PER_SEC ticks per second. Both have a maximum value that will roll over (for Windows, that's about 49.7 days).
GetTickCount() starts at zero when the OS starts. From the docs, it looks like clock() starts when the process does. Thus you can compare times between processes with GetTickCount(), but you probably can't do that with clock().
If you're trying to compute how long something has been happening, within a single process, and you're not worried about rollover:
#include <ctime>

const clock_t start = clock();
// do stuff here
clock_t now = clock();
clock_t delta = now - start;

// clock_t ticks accumulate at CLOCKS_PER_SEC per second.
double seconds_elapsed = static_cast<double>(delta) / CLOCKS_PER_SEC;
Clarification: There seems to be uncertainty about whether clock() returns elapsed wall time or processor time. The first several references I checked say wall time. For example:
Returns the number of clock ticks elapsed since the program was launched.
which admittedly is a little vague. MSDN is more explicit:
The elapsed wall-clock time since the start of the process....
User darron convinced me to dig deeper, so I found a draft copy of the C standard (ISO/IEC 9899:TC2), and it says:
... returns the implementation’s best approximation to the processor time used ...
I believe every implementation I've ever used gives wall-clock time (which I suppose is an approximation to the processor time used).
Conclusion: If you're trying to time some code so you can benchmark various optimizations, then my answer is appropriate. If you're trying to implement a timeout based on actual wall-clock time, then you have to check your local implementation of clock() or use another function that is documented to give elapsed wall-clock time.
Update: With C++11, there is also the <chrono> portion of the standard library, which provides a variety of clocks and types to capture times and durations. While it is standardized and widely available, it's not clear whether the Android NDK fully supports it yet.
This is platform dependent so you just have to write a wrapper and implement the specifics for each platform.
It's not possible. The C++ standard, and as a consequence the standard library, knows nothing about processors or 'ticks'. This may or may not change in C++0x with the threading support, but at least for now, it's not possible.
Do you have access to a vblank interrupt function (or hblank) on the Android? If so, increment a global, volatile var there for a timer.