How to realise long-term high-resolution timing on Windows using C++?

I need to get exact timestamps every couple of ms (20, 30, 40ms) over a long period of time (a couple of hours). The function in which the timestamp is taken is invoked as a callback by a 3rd-party library.
Using GetSystemTime() one can get the correct system timestamp, but only with millisecond accuracy, which is not precise enough for me. Using QueryPerformanceTimer() yields more accurate timestamps, but these do not stay synchronized with the system timestamp over a long period of time (see http://msdn.microsoft.com/en-us/magazine/cc163996.aspx).
The solution provided at the site linked above only works on older computers; it hangs while synchronizing when I try to use it on newer computers.
Boost also only seems to offer millisecond accuracy.
If possible, I'd like to avoid using external libraries, but if there's no other choice I'll go with it.
Any suggestions?

The article was deleted from CodeProject; this seems to be a copy: DateTimePrecise C# Class. The idea is to use the QueryPerformanceCounter API for accurate small increments and to periodically adjust it against the system clock in order to keep long-term accuracy. This gives about microsecond accuracy ("about" because it's still not exactly precise, but still quite usable).
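A rough C++ sketch of the same idea (the class name and the one-second resync interval are just for illustration; a real implementation like DateTimePrecise smooths the correction rather than snapping back to the system clock):

#include <windows.h>
#include <cstdint>

// Anchor to the system clock, advance with QueryPerformanceCounter, and
// re-anchor periodically so the two sources cannot drift apart indefinitely.
class PreciseSystemTime {
public:
    PreciseSystemTime() { QueryPerformanceFrequency(&freq_); Resync(); }

    // Current time in 100 ns FILETIME units (since 1601-01-01 UTC).
    uint64_t Now()
    {
        LARGE_INTEGER qpc;
        QueryPerformanceCounter(&qpc);
        uint64_t elapsed = (qpc.QuadPart - baseQpc_.QuadPart) * 10000000ull / freq_.QuadPart;
        if (elapsed > 10000000ull) {   // more than ~1 s since the last anchor: resync
            Resync();
            return baseFileTime_;
        }
        return baseFileTime_ + elapsed;
    }

private:
    void Resync()
    {
        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);  // coarse (millisecond-ish) system time as the anchor
        baseFileTime_ = (uint64_t(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
        QueryPerformanceCounter(&baseQpc_);
    }

    uint64_t baseFileTime_ = 0;
    LARGE_INTEGER baseQpc_{};
    LARGE_INTEGER freq_{};
};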
See also: Microsecond resolution timestamps on Windows

Which language are you using?
In Java (1.5 or above) I'd suggest 'System.nanoTime()' which requires no import.
Remember in Windows that time-slice granularity is 1000ms / 64 = 15.625ms.
This will affect inter-process communication, especially on uni-processor machines, or machines that run several heavy CPU usage processes 'concurrently'*.
In fact, I just got DOS 6.22 and Windows for Workgroups 3.11/3.15 via eBay, so I can screenshot the original timeslice configuration for uni-processor Windows machines of the era when I started to get into it. (Although it might not be visible in versions above 3.0).

You'll be hard-pressed to find anything better than QueryPerformanceTimer() on Windows.
On modern hardware it uses the HPET as its source, which replaces the RTC interrupt controller. I would expect QueryPerformanceTimer() and the system clock to be synchronous.

There is no QueryPerformanceTimer() on Windows. The function is named QueryPerformanceCounter(). It provides a counter value counting at some higher frequency.
Its incrementing frequency can be retrieved by a call to QueryPerformanceFrequency().
Since this frequency is typically in the MHz range, microsecond resolution can be observed.
There are some implementations around, e.g. this thread or the Windows Timestamp Project.
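For reference, a minimal sketch of reading the counter and converting it to microseconds (the helper name is only illustrative):

#include <windows.h>
#include <cstdint>

// Microseconds since an arbitrary epoch (typically boot); only differences are meaningful.
uint64_t MicrosNow()
{
    LARGE_INTEGER freq, counter;
    QueryPerformanceFrequency(&freq);    // ticks per second, fixed at boot
    QueryPerformanceCounter(&counter);
    // Split into whole seconds and remainder to avoid 64-bit overflow on long uptimes.
    return uint64_t(counter.QuadPart / freq.QuadPart) * 1000000ull
         + uint64_t(counter.QuadPart % freq.QuadPart) * 1000000ull / freq.QuadPart;
}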

Related

Modern high res timer for periodic calls

A huge amount has been said about high resolution timers on stackoverflow. But it's clear that the solution is a bit of a moving target and best practices are changing.
I need to create a high resolution timer that has a callback every 10ms to achieve a consistent 100Hz. The target platform is Windows 7 and later.
This exact question was asked in 2009, but I believe things have probably moved on.
Multimedia timers looked to be a great solution, but MSDN says they are deprecated, replaced by CreateTimerQueueTimer. But other answers on Stack Overflow suggest that CreateTimerQueueTimer is not as accurate as timeSetEvent.
All answers do consistently point out the requirement for setting the Windows timer resolution to a low value using timeBeginPeriod.
So with all that said, what is the best approach to achieve the above desired goal in C today?
but MSDN says they are deprecated
It is pretty important to be able to read between the lines when you see a deprecation warning like this. Yes, most certainly Microsoft would like everybody to stop using multimedia timers. They are heavily abused and are very bad for business. Actually getting programmers to stop using them is, however, a pipe dream, and CreateTimerQueueTimer() is not an alternative.
Bad for business because Microsoft likes to be competitive in mobile computing. And multimedia timers are a very poor match; they are murder on battery life. Most programs that use them jack up the clock interrupt rate to the maximum allowed, 1000 times per second. With a backdoor to get to 2000. And it is very hard to stop them from doing that, especially when their competitors give their software away for free. They have no incentive whatsoever to fix that problem since it makes their mobile OS look good. And Microsoft can't kill very popular apps like that.
Microsoft also has a mobile OS, exposed through the WinRT api. There the deprecation is rock-hard: you cannot get your app approved by the Store validation procedure when you use those timers. But it doesn't get much use; their customers like to keep using their desktop apps.
If you want a 100 Hz update rate then you have to use timeBeginPeriod() and timeSetEvent(); there is no other way. And avoid WinRT. Since it is actually only 1.5 times worse than the default, there is no appreciable reason to worry about power consumption. Set laser to stun and use what works.
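A minimal sketch of that combination for a 10 ms (100 Hz) periodic callback; error handling is omitted and you need to link against winmm.lib:

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

// Invoked by the multimedia timer roughly every 10 ms on a worker thread.
void CALLBACK TimerProc(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser, DWORD_PTR, DWORD_PTR)
{
    // do the 100 Hz work here; keep it short, it runs on the timer's thread
}

int main()
{
    timeBeginPeriod(1);                       // raise the clock interrupt rate to 1 ms
    MMRESULT timerId = timeSetEvent(10, 1, TimerProc, 0,
                                    TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    Sleep(60 * 1000);                         // let it run for a minute
    timeKillEvent(timerId);
    timeEndPeriod(1);                         // always balance timeBeginPeriod
    return 0;
}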
Are you willing to burn CPU as you go? Call QueryPerformanceCounter() in a busy loop. You will get microsecond precision, with the appropriate detrimental effect on battery life. You can also take it to eleven by jacking up your process priority class to REALTIME_PRIORITY_CLASS and the worker thread's priority to THREAD_PRIORITY_TIME_CRITICAL (or a notch or two lower). There are of course negative consequences for doing such things.
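If you do go the busy-loop route, the sketch below shows the priority boosts mentioned above combined with a QueryPerformanceCounter() spin; it pins a core at 100%, so use it deliberately:

#include <windows.h>

// Spin until an absolute QueryPerformanceCounter deadline (in raw ticks).
void SpinUntil(LONGLONG deadlineTicks)
{
    LARGE_INTEGER now;
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < deadlineTicks);
}

int main()
{
    // The boosts described above; they can starve everything else on the machine.
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    LARGE_INTEGER freq, start;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    for (int i = 1; i <= 1000; ++i)                           // 1000 ticks at 100 Hz
        SpinUntil(start.QuadPart + i * freq.QuadPart / 100);  // absolute deadlines avoid drift
    return 0;
}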

Is there no equivalent of millis() from Arduino in C++?

I am currently implementing a PID controller for a project I am doing, but I realized I don't know how to ensure a fixed interval for each iteration. I want the PID controller to run at a frequency of 10 Hz, but I don't want to use any sleep functions or anything that would otherwise slow down the thread it's running in. I've looked around but I cannot for the life of me find any good topics/functions that simply give me an accurate measurement of milliseconds. Those that I have found simply use time_t or clock_t, but time_t only seems to give seconds(?) and clock_t will vary greatly depending on different factors.
Is there any clean and good way to simply see if it's been >= 100 milliseconds since a given point in time in C++? I'm using the Qt5 framework and the OpenCV library, and the program is running on an ODROID X-2, if that's helpful information to anyone.
Thank you for reading, Christian.
I don't know much about the ODROID X-2 platform, but if it's at all unixy you may have access to gettimeofday or clock_gettime, either of which would provide a higher-resolution clock if available on your hardware.
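If the board runs Linux (as ODROIDs typically do), a sketch of the "has 100 ms elapsed?" check using clock_gettime() with the monotonic clock could look like this (the function and loop names are only illustrative):

#include <time.h>
#include <cstdint>

// Monotonic milliseconds; unaffected by wall-clock adjustments.
static int64_t MonotonicMillis()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return int64_t(ts.tv_sec) * 1000 + ts.tv_nsec / 1000000;
}

void ControlLoop()
{
    int64_t next = MonotonicMillis();
    for (;;) {
        if (MonotonicMillis() >= next) {
            next += 100;          // advance by the period, not to "now", so errors don't accumulate
            // runPidIteration(); // hypothetical: the 10 Hz PID update goes here
        }
        // other work (camera capture, Qt event processing, ...) continues here
    }
}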

Why is there no boost::date_time with microsec resolution on Windows?

On Win32 systems, boost::date_time::microsec_clock() is implemented using ftime, which provides only millisecond resolution: Link to doc
There are some questions/answers on Stackoverflow stating this and linking the documentation, but not explaining why that is the case:
Stackoverflow #1
Stackoverflow #2
There seemingly are ways to implement microsecond resolution on Windows:
GetSystemTimePreciseAsFileTime (Win8+)
QueryPerformanceCounter
What I'm interested in is why Boost implemented it that way, when in turn there are possibly solutions that would be more fitting?
QueryPerformanceCounter can't help you with this problem. It gives you a timestamp, but as you don't know when the counter starts, there is no reliable way to calculate an absolute time point from it. boost::date_time is such a (user-understandable) time point.
The other difference is that a counter like QueryPerformanceCounter gives you a steadily increasing timer, while the system time can be influenced by the user and can therefore jump.
So the two things serve different use cases: one represents real (wall-clock) time, the other provides precise timing within the software and is suited to benchmarking.
GetSystemTimePreciseAsFileTime seems to fit the bill for a high-resolution absolute time. I guess it wasn't used because it requires Windows 8.
GetSystemTimePreciseAsFileTime only became available with Windows 8 desktop applications. It mimics Linux's gettimeofday. The implementation uses QueryPerformanceCounter to achieve the microsecond resolution. Timestamps are taken at the time of a system time increment. Subsequent calls to GetSystemTimePreciseAsFileTime take the system time and add the elapsed "performance counter time" (elapsed ticks / performance counter frequency) as the high-resolution part.
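A minimal usage sketch (you need to target Windows 8, i.e. _WIN32_WINNT >= 0x0602, for the declaration):

#include <windows.h>
#include <cstdint>

// System (wall-clock) time in microseconds since 1601-01-01 UTC, sub-millisecond on Windows 8+.
uint64_t PreciseSystemMicros()
{
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft);   // 100 ns FILETIME units
    ULARGE_INTEGER t;
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    return t.QuadPart / 10;                // 100 ns -> microseconds
}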
The functionality of QueryPerformanceCounter in turn depends on platform-specific details (HPET, ACPI PM timer, invariant TSC, etc.). See MSDN: Acquiring high-resolution time stamps and SO: Is QueryPerformanceFrequency acurate when using HPET? for details.
The various versions of Windows do have specific schemes to update the system time. Windows XP has a fixed file time granularity which is independent of the system's timer resolution. Only post-XP versions allow the system time granularity to be modified by changing the system timer resolution.
This can be accomplished by means of the multimedia timer API timeBeginPeriod and/or the hidden API NtSetTimerResolution (see this SO answer for more details about using timeBeginPeriod and NtSetTimerResolution).
As stated, GetSystemTimePreciseAsFileTime is only available for desktop applications. The reason for this is the need for specific hardware.
What I'm interested in is why Boost implemented it that way, when in turn there are possibly solutions that would be more fitting?
Taking the facts stated above into account would make the implementation very complex and the result very platform-specific. Every (!) Windows version has undergone severe changes to its timekeeping. Even the latest small step from 8 to 8.1 changed the timekeeping procedure considerably. However, there is still room to further improve time matters on Windows.
I should mention that GetSystemTimePreciseAsFileTime is, as of Windows 8.1, not giving results as accurate as expected or as specified at MSDN: GetSystemTimePreciseAsFileTime function. It combines the system file time with the result of QueryPerformanceCounter to fill the gap between consecutive file time increments, but it does not take system time adjustments into account. An active system time adjustment, e.g. one done by SetSystemTimeAdjustment, modifies the system time granularity and the progress of the system time. However, the performance counter frequency used to build the result of GetSystemTimePreciseAsFileTime is kept constant. As a result, the microseconds part is off by the adjustment gain set by SetSystemTimeAdjustment.

How to do something every millisecond or better on Windows

This question is not about timing something accurately on Windows (XP or better), but rather about doing something very rapidly via callback or interrupt.
I need to be doing something regularly every 1 millisecond, or preferably even every 100 microseconds. What I need to do is drive some asynchronous hardware (ethernet) at this rate to output a steady stream of packets to the network, and make that stream appear as regular and synchronous as possible. But if the question can be separated from the (ethernet) device, it would be good to know the general answer.
Before you say "don't even think about using Windows!!!!", a little context. Not all real-time systems have the same demands. Most of the time songs and video play acceptably on Windows despite needing to handle blocks of audio or images every 10-16ms or so on average. With appropriate buffering, Windows can have its variable latencies, but the hardware can be broadly immune to them, and keep a steady synchronous stream of events happening. Even so, most of us tolerate the occasional glitch. My application is like that - probably quite tolerant.
The expensive option for me is to port my entire application to Linux. But Linux is simply different software running on the same hardware, so my strong preference is to write some better software, and stick with Windows. I have the luxury of being able to eliminate all competing hardware and software (no internet or other network access, no other applications running, etc). Do I have any prospect of getting Windows to do this? What limitations will I run into?
I am aware that my target hardware has a High Performance Event Timer, and that this timer can be programmed to interrupt, but that there is no driver for it. Can I write one? Are there useful examples out there? I have not found one yet. Would this interfere with QueryPerformanceCounter? Does the fact that I'm going to be using an ethernet device mean that it all becomes simple if I use select() judiciously?
Pointers to useful articles welcomed - I have found dozens of overviews on how to get accurate times, but none yet on how to do something like this other than by using what amounts to a busy wait. Is there a way to avoid a busy wait? Is there a kernel mode or device driver option?
You should consider looking at the multimedia timers. These are timers that are intended to provide the sort of resolution you are looking for.
Have a look here on MSDN.
I did this using DirectX 9 and QueryPerformanceCounter, but you will need to hog at least one core, as task switching will mess you up.
For a good comparison of timers you can look at
http://www.geisswerks.com/ryan/FAQS/timing.html
If you run into timer granularity issues, I would suggest using good old Sleep() with a spin loop. Essentially, the code should do something like:
#include <windows.h>
#include <cstdint>

// Replaces the pseudocode's assumed GetCurrentTime(): microseconds via QueryPerformanceCounter.
static uint64_t GetCurrentTimeMicros()
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return uint64_t(now.QuadPart / freq.QuadPart) * 1000000ull
         + uint64_t(now.QuadPart % freq.QuadPart) * 1000000ull / freq.QuadPart;
}
void PrecisionSleep(uint64_t microSec)
{
    const uint64_t start_time = GetCurrentTimeMicros();
    // Sleep away the whole 10 ms chunks using the coarse OS sleep (Sleep() takes milliseconds).
    Sleep(DWORD(10 * (microSec / 10000)));
    // Spin loop to spend the rest of the time in.
    while (GetCurrentTimeMicros() - start_time < microSec) {}
}
This way, you will have a high-precision sleep which won't tax your CPU much when most of the waits are longer than the scheduling granularity (assumed to be 10 ms). You can send your packets in a loop while you use the high-precision sleep to time them.
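For example, a hypothetical 1 ms packet loop on top of PrecisionSleep() could schedule against absolute deadlines so small sleep errors don't accumulate (SendOnePacket() is a placeholder for whatever actually transmits):

// Hypothetical usage: transmit a packet every 1000 microseconds without cumulative drift.
void SendLoop()
{
    const uint64_t periodMicros = 1000;
    uint64_t next = GetCurrentTimeMicros() + periodMicros;
    for (;;) {
        // SendOnePacket();                   // placeholder for the actual ethernet send
        uint64_t now = GetCurrentTimeMicros();
        if (next > now)
            PrecisionSleep(next - now);       // wait out the remainder of this period
        next += periodMicros;                 // absolute schedule, not "now + period"
    }
}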
The reason audio works fine on most systems is that the audio device has its own clock. You just buffer the audio data to it and it takes care of playing it and interrupts the program when the buffer is empty. In fact, a time skew between the audio card clock and the CPU clock can cause problems if a playback engine relies on the CPU clock.
EDIT:
You can make a timer abstraction out of this by using a thread that keeps a lock-protected min-heap of timed entries (the heap comparison is done on the expiry timestamp); when the PrecisionSleep() to the next timestamp completes, you either invoke the callback() or SetEvent().
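Roughly, that abstraction could look like the sketch below, built on the PrecisionSleep()/GetCurrentTimeMicros() from above, with std::priority_queue as the min-heap and one worker thread; there is no cancellation or shutdown, and a real version would block on an event instead of spinning when the heap is empty:

#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

struct TimedEntry {
    uint64_t expiryMicros;                  // absolute expiry time in microseconds
    std::function<void()> callback;
};
struct LaterFirst {                         // makes priority_queue a min-heap on expiry
    bool operator()(const TimedEntry& a, const TimedEntry& b) const {
        return a.expiryMicros > b.expiryMicros;
    }
};

std::priority_queue<TimedEntry, std::vector<TimedEntry>, LaterFirst> g_heap;
std::mutex g_heapLock;

void ScheduleAt(uint64_t expiryMicros, std::function<void()> cb)
{
    std::lock_guard<std::mutex> guard(g_heapLock);
    g_heap.push({expiryMicros, std::move(cb)});
}

void TimerThread()                          // run this on its own std::thread
{
    for (;;) {
        TimedEntry next;
        {
            std::lock_guard<std::mutex> guard(g_heapLock);
            if (g_heap.empty())
                continue;                   // sketch only; real code would wait on an event here
            next = g_heap.top();
            g_heap.pop();
        }
        uint64_t now = GetCurrentTimeMicros();
        if (next.expiryMicros > now)
            PrecisionSleep(next.expiryMicros - now);
        next.callback();                    // or SetEvent() on a stored event handle instead
    }
}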
Use NtSetTimerResolution when the program starts up to set the timer resolution. Yes, it is an undocumented function, but it works well. You may also use NtQueryTimerResolution to read the timer resolution (before setting, and after setting the new resolution, to be sure).
You need to get the address of these functions dynamically using GetProcAddress on NTDLL.DLL, as they are not declared in any header or LIB file.
Setting the timer resolution this way would affect Sleep, Windows timers, functions that return the current time, etc.
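A sketch of that dynamic lookup; the exports are undocumented, so treat the signatures below as assumptions, and note that the resolutions are in 100 ns units (10000 = 1 ms):

#include <windows.h>
#include <cstdio>

// Assumed signatures of the undocumented ntdll exports (NTSTATUS is a LONG).
typedef LONG (NTAPI *NtSetTimerResolution_t)(ULONG DesiredResolution, BOOLEAN SetResolution,
                                             PULONG CurrentResolution);
typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG MinimumResolution, PULONG MaximumResolution,
                                               PULONG CurrentResolution);

int main()
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    auto setRes   = (NtSetTimerResolution_t)GetProcAddress(ntdll, "NtSetTimerResolution");
    auto queryRes = (NtQueryTimerResolution_t)GetProcAddress(ntdll, "NtQueryTimerResolution");
    if (!setRes || !queryRes)
        return 1;

    ULONG minRes = 0, maxRes = 0, curRes = 0;
    queryRes(&minRes, &maxRes, &curRes);         // see what the platform supports
    printf("current timer resolution: %lu (100 ns units)\n", curRes);

    ULONG actual = 0;
    setRes(5000, TRUE, &actual);                 // request 0.5 ms
    printf("timer resolution now: %lu (100 ns units)\n", actual);
    return 0;
}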

Sleep thread 100.8564 milliseconds in C++ under the Windows platform

Is there any method to sleep the thread for 100.8564 milliseconds under the Windows OS? I am using a multimedia timer, but its minimum resolution is 1 second. Kindly guide me so that I can handle the fractional part of the millisecond.
Yes, you can do it. Use QueryPerformanceCounter() to read an accurate time, and make a busy loop.
This will enable you to make waits with up to 10-nanosecond resolution; however, if the thread scheduler decides to steal control from you at the moment the cycle ends, it will, and there's nothing you can do about it except assigning your process realtime priority.
You may also have a look at this: http://msdn.microsoft.com/en-us/library/ms838340(WinEmbedded.5).aspx
Several frameworks have been developed to do hard realtime on Windows.
Otherwise, your question probably implies that you might be doing something wrong. There are numerous mechanisms to avoid ever needing precise delays, such as using proper bus drivers (in the case of hardware/IO, or the respective DMAs if you are designing a driver), and more.
Please tell us what exactly you are building.
I do not know your use case, but even a high-end realtime operating system would be hard pressed to achieve less than 100 ns jitter on timings.
In most cases I found you do not need that precision in reproducibility, but only over long-term drift. In that respect it is relatively straightforward to keep a timeline and calculate the events at the desired precision, then use that timeline to synchronize the events, which may be off even by tens of ms. As long as these errors do not add up, I found I got adequate performance.
If you need guaranteed latency, you cannot get it with MS Windows. It's not a realtime operating system. It might swap in another thread or process at an inopportune instant. You might get a cache miss. When I did a robot controller a while back, I used an OS called On Time RTOS 32. It has an MS Windows API emulation layer. You can use it with Visual Studio. You'll need something like that.
The resolution of a multimedia timer is much better than one second. It can go down to 1 millisecond when you call timeBeginPeriod(1) first. The timer will automatically adjust its interval for the next call when the callback is delivered late, which is inevitable on a multi-tasking operating system; there is always some kind of kernel thread with a higher priority than yours that will delay the callback.
While it will work pretty well on average, worst-case latency is on the order of hundreds of milliseconds. Clearly, your requirements cannot be met by Windows by a long shot. You'll need some kind of microcontroller to supply that kind of execution guarantee.