C++ call to API function ::GetTickCount() jumps ~18 days

On a few Windows computers I have seen that two consecutive calls to ::GetTickCount() return a difference of 1610619236 ms (around 18 days). This is not due to wrap-around or an int/unsigned int mismatch. I use Visual C++ 2015/2017.
Has anybody else seen this behaviour? Does anybody have any idea about what could cause behaviour like this?
Best regards
John
Code sample that shows the bug:
#include <windows.h>

class CLTemp
{
    DWORD nLastCheck;

public:
    CLTemp()
    {
        nLastCheck = ::GetTickCount();
    }

    // Service is called every 200 ms by a timer
    void Service()
    {
        if (::GetTickCount() - nLastCheck > 20000) // check every 20 sec
        {
            // On some Windows machines, after an uptime of 776 days,
            // ::GetTickCount() - nLastCheck gives a value of 1610619236
            // (corresponding to around 18 days)
            nLastCheck = ::GetTickCount();
        }
    }
};
Update - problem description, a way of recreating it, and the solution:
The Windows API function GetTickCount() unexpectedly jumps 18 days forward in time once 776 days have passed since the last Windows restart.
We have experienced several times that some of our long-running Windows PC applications coded in Microsoft Visual C++ suddenly reported a time-out error. In many of our applications we call GetTickCount() to perform tasks at certain intervals or to watch for a time-out condition. The example code could look like this:
DWORD dwTimeNow, dwPrevTime = ::GetTickCount();
bool bExit = false;
while (!bExit)
{
    dwTimeNow = ::GetTickCount();
    if (dwTimeNow - dwPrevTime >= 5000)
    {
        dwPrevTime = dwTimeNow;
        // Perform my task
    }
    else
    {
        ::Sleep(10);
    }
}
GetTickCount() returns a DWORD, which is an unsigned 32-bit int. GetTickCount() wraps around from its maximum value of 0xFFFFFFFF to zero after approximately 49 days. The wrap-around is easily handled by using unsigned arithmetic and always subtracting the previous value from the new value to calculate the distance. Never compare two values from GetTickCount() against each other.
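For illustration, a minimal sketch of this wrap-around-safe pattern (the helper function and names are placeholders, not taken from our application code):

#include <windows.h>

// Wrap-around-safe check: always subtract the previous sample from the new one.
bool HasIntervalElapsed(DWORD& dwPrev, DWORD dwIntervalMs)
{
    DWORD dwNow = ::GetTickCount();
    if (dwNow - dwPrev >= dwIntervalMs) // unsigned subtraction stays correct across the 49-day wrap
    {
        dwPrev = dwNow;
        return true;
    }
    return false;
}
// Not wrap-safe: if (dwNow >= dwPrev + dwIntervalMs) compares absolute tick values
// and misfires as soon as either side has wrapped past 0xFFFFFFFF.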
So the wrap-around at the maximum value every 49 days is expected and handled. But we have experienced an unexpected wrap-around to zero of GetTickCount() 776 days after the latest Windows restart. In this case GetTickCount() wraps from 0x9FFFFFFF to zero, which is 1610612736 milliseconds too early, corresponding to around 18.6 days. When GetTickCount() is used to check for a time-out condition and it suddenly reports that 18 days have elapsed since the last check, the software reports a false time-out condition. Note that it happens 776 days after a Windows restart. A Windows restart resets the GetTickCount() value to zero; a PC reboot does not, instead the time elapsed while the machine was switched off is added to the initial GetTickCount() value.
We have made a test program that provides evidence of this issue. The test program reads the values of GetTickCount(), GetTickCount64(), InterruptTime(), and UnbiasedInterruptTime() every 5000 milliseconds, scheduled by a Windows timer. Each time, the test program calculates the distance in time for each of the four time functions. If the distance is 10000 milliseconds or more, it is marked as a time-jump event and logged. The program also keeps track of the minimum and maximum distance in time for each time function.
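For reference, a hedged sketch of what one sampling step of such a test could look like, assuming that InterruptTime()/UnbiasedInterruptTime() above refer to the QueryInterruptTime and QueryUnbiasedInterruptTime APIs (which report 100-nanosecond units); the scaffolding below is an illustration, not our actual test program:

#include <windows.h>
#include <realtimeapiset.h> // QueryInterruptTime (Windows 10+), QueryUnbiasedInterruptTime; link mincore.lib or OneCore.lib
#include <cstdio>

// One sampling step: read all four clocks and flag a "time jump" if any of them
// advanced by 10000 ms or more since the previous sample.
void SampleOnce(DWORD& prev32, ULONGLONG& prev64, ULONGLONG& prevInt, ULONGLONG& prevUnbiased)
{
    DWORD now32 = ::GetTickCount();
    ULONGLONG now64 = ::GetTickCount64();
    ULONGLONG intTime = 0, unbiased = 0; // both in 100-ns units
    ::QueryInterruptTime(&intTime);
    ::QueryUnbiasedInterruptTime(&unbiased);

    ULONGLONG d32 = now32 - prev32;               // unsigned 32-bit distance, in ms
    ULONGLONG d64 = now64 - prev64;               // ms
    ULONGLONG dInt = (intTime - prevInt) / 10000; // 100 ns -> ms
    ULONGLONG dUnb = (unbiased - prevUnbiased) / 10000;

    if (d32 >= 10000 || d64 >= 10000 || dInt >= 10000 || dUnb >= 10000)
        std::printf("Time jump: %llu %llu %llu %llu ms\n", d32, d64, dInt, dUnb);

    prev32 = now32; prev64 = now64; prevInt = intTime; prevUnbiased = unbiased;
}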
Before starting the test program, a Windows restart is carried out. Ensure no automatic time synchronization is enabled. Then perform a Windows shutdown. Start the PC again and enter its BIOS setup as it boots. In the BIOS, advance the real-time clock by 776 days. Let the PC boot and start the test program. After about 17 hours the unexpected wrap-around of GetTickCount() occurs (at 776 days, 17 hours, and 21 minutes). Only GetTickCount() shows this behavior; the other time functions do not.
The following excerpt from the log file of the test program shows the start values reported by the four time functions. In this example the time has only been advanced to 775 days after the Windows restart. The format of the log entry is the time-function value converted into: days hh:mm:ss.msec. TickCount32 is the plain GetTickCount(); because it is a 32-bit value it has already wrapped around and shows a different value. From GetTickCount64() we can see the 775 days.
2024-05-14 09:13:27.262 Start times
TickCount32 : 029 08:30:11.591
TickCount64 : 775 00:12:01.031
InterruptTime : 775 00:12:01.036
UnbiasedInterruptTime: 000 00:05:48.411
The next excerpt from the log file shows the unexpected wrap-around of GetTickCount() (TickCount32). The format is: the distance between the previous value and the new value (which should always be around 5000 msec), then the new value converted into days and time, and finally the previous value converted into days and time. We can see that GetTickCount() jumps 1610617752 milliseconds (approx. 18.6 days) while the other three time functions only advance approx. 5000 msec as expected. From TickCount64 one can see that it occurs at 776 days, 17 hours, and 21 minutes.
2024-05-16 02:22:30.394 Time jump *****
TickCount32 : 1610617752 - 000 00:00:00.156 - 031 01:39:09.700
TickCount64 : 5016 - 776 17:21:04.156 - 776 17:20:59.140
InterruptTime : 5015 - 776 17:21:04.165 - 776 17:20:59.150
UnbiasedInterruptTime: 5015 - 001 17:14:51.540 - 001 17:14:46.525
If you advance the real-time clock by roughly two times 776 days and 17 hours, for example 1551 days, the phenomenon shows up once more. It has a cyclic nature.
2026-06-30 06:34:26.663 Start times
TickCount32 : 029 12:41:57.888
TickCount64 : 1551 21:44:51.328
InterruptTime : 1551 21:44:51.334
UnbiasedInterruptTime: 004 21:24:24.593
2026-07-01 19:31:47.641 Time jump *****
TickCount32 : 1610617736 - 000 00:00:04.296 - 031 01:39:13.856
TickCount64 : 5000 - 1553 10:42:12.296 - 1553 10:42:07.296
InterruptTime : 5007 - 1553 10:42:12.310 - 1553 10:42:07.303
UnbiasedInterruptTime: 5007 - 006 10:21:45.569 - 006 10:21:40.562
The only viable solution to this issue seems to be to use GetTickCount64() and abandon the use of GetTickCount() entirely.
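As a minimal sketch, the polling loop from the example above rewritten with GetTickCount64() (same structure, just 64-bit tick values):

#include <windows.h>

void PollWithTickCount64()
{
    ULONGLONG ullPrevTime = ::GetTickCount64();
    bool bExit = false;
    while (!bExit)
    {
        ULONGLONG ullTimeNow = ::GetTickCount64();
        if (ullTimeNow - ullPrevTime >= 5000) // 64-bit counter: no 49-day wrap, no early wrap after 776 days
        {
            ullPrevTime = ullTimeNow;
            // Perform my task
        }
        else
        {
            ::Sleep(10);
        }
    }
}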

Related

WASAPI, Delays on m_AudioClient->Start()

The app captures sound from a microphone using WASAPI.
This code initializes m_AudioClient that is of type IAudioClient*.
const LONG CAPTURE_CLIENT_LATENCY = 50 * 10000; // 50 ms expressed in 100-nanosecond REFERENCE_TIME units
DWORD loopFlag = m_IsLoopback ? AUDCLNT_STREAMFLAGS_LOOPBACK : 0;
hr = m_AudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED
    , AUDCLNT_STREAMFLAGS_EVENTCALLBACK | AUDCLNT_STREAMFLAGS_NOPERSIST | loopFlag
    , CAPTURE_CLIENT_LATENCY, 0, m_WaveFormat->GetRawFormat(), NULL);
Then I use m_AudioClient->Start() and m_AudioClient->Stop() to pause or resume capturing.
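For reference, a hedged sketch of how the per-call latency mentioned below might be measured (the std::chrono wrapper is an illustration, not code from the actual application):

#include <windows.h>
#include <audioclient.h>
#include <chrono>
#include <cstdio>

// Hypothetical helper: times a single IAudioClient::Start() call in milliseconds.
double TimedStart(IAudioClient* audioClient)
{
    auto t0 = std::chrono::steady_clock::now();
    HRESULT hr = audioClient->Start();
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("Start() returned 0x%08lx after %.2f ms\n", static_cast<unsigned long>(hr), ms);
    return ms;
}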
Usually m_AudioClient->Start() takes 5-6 ms, but sometimes it takes about 150 ms, which is too much for the application.
If I call m_AudioClient->Start() once, then subsequent calls to m_AudioClient->Start() during the next 5 seconds are fast, but after about 10-15 seconds the next call to m_AudioClient->Start() takes longer (150 ms). So it looks like it keeps some state for several seconds, and after that it needs to get back into that state, which takes some time.
On another machine these delays never happen; every call to m_AudioClient->Start() takes about 30 ms.
On a third machine the average duration of m_AudioClient->Start() is 140 ms, but peak values are about 1 s.
I run the same code on all 3 machines. The microphone is not exactly the same; in most cases it is a Microphone Array on Realtek High Definition Audio.
Can somebody explain why these peak values for the duration m_AudioClient->Start() happen and how I can fix it?

ColdFusion - odd parseDateTime results

I have a time string:
2018-08-09T13:19:22.479522-05:00
Parsing the string using:
parseDateTime(time, "yyyy-MM-dd'T'HH:mm:ss.SSSSSSXXX")
Yields this result:
2018-08-09 14:27:21
I'm -4 hours from GMT, so I get the hour difference, but why is the minute different?
Update:
I'm certain the problem is the six-digit fractional seconds, but can ColdFusion process this? For now, I'm using left() and right() to work around the issue.
why is the minute different?
It's because java.util.Date (which is what ColdFusion uses along with SimpleDateFormat) doesn't handle microseconds, only milliseconds. The mask ".SSSSSS" only allows CF/Java to extract the extra digits, but once extracted that whole value is treated as a number of milliseconds:
479522 milliseconds ... or
479.522 seconds ... or
7 minutes, 59 seconds and 522 milliseconds
So in this case, instead of adding fractions of a second, it increases the final time by nearly eight minutes. That's why the result isn't quite what you expected.
Base Time 14:19:22.000
+ .522 milliseconds
+ 59.000 seconds
+ 7:00.000 minutes
====================
Final Time 14:27:21.522
tl;dr;
ParseDateTime() can't process that particular date/time string, so you'll have to DIY.

Why does time measurement sometimes return repetitive values (multiples of 15.625ms)?

I've done a lot of searching and found similar questions, but I still can't understand why my code sometimes gets the time right and other times becomes useless and returns repetitive values.
A simple C++ code you can run to test this:
#include <iostream>
#include <chrono>
#include <windows.h>
//#include <unistd.h> // For Unix

void stall(int milliseconds)
{
    auto start = std::chrono::high_resolution_clock::now();
    Sleep(milliseconds);
    //usleep(milliseconds * 1000); // For Unix
    auto finish = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(finish - start).count() / 1000000.0 << " ms\n";
}

int main()
{
    std::cout << "Begin\n";
    for (int i = 1; i < 100; i++)
    {
        stall(i);
    }
}
Running this, the expected output would be something like:
1 ms
2 ms
3 ms
4 ms
...
98 ms
99 ms
100 ms
Sometimes it works, but other times (like, at random), the output looks like this:
15.625 ms
15.62 ms
15.632 ms
7.997 ms
16.713 ms
15.637 ms
31.25 ms
31.263 ms
31.245 ms
31.25 ms
21.985 ms
...
93.718 ms
93.77 ms
93.744 ms
102.263 ms
109.369 ms
96.192 ms
109.367 ms
109.368 ms
How can I eliminate this awful inconsistency? Reducing the number of background processes doesn't seem to have any effect.
I would guess this would be due to your OS' scheduling quantum: If your thread yields or finishes its execution time quantum, some other threads will run for that quantum, and then when your thread runs again, a full quantum (and a bit) has elapsed. So you see advances by noise + either 0 quanta or 1 quantum.
einpoklum suggested that it could be because of my OS's scheduling quantum, which sounds about right. I was about to go crazy, thinking it was something beyond my control (or too complicated to solve), but I ended up discovering a way of manipulating it in my favor. I noticed that if the internet browser was closed, the times returned were a very consistent sequence of multiples of 15.625. But if I had the browser running, it looked like the more tabs I had open, the more inconsistent the times were (though still leaning towards multiples of 15.625). And if a tab had something loading, the numbers started looking like the regular 1-to-1 sequence!
So I concluded that whenever I have to do testing, I'll keep a YouTube or Twitch tab open on the side. It's weird as hell (if there's a better way to do it, I'd like to know), but for now I'll have to combine the useful with the pleasant, lol.

Why do I get such huge jitter in time measurement?

I'm trying to measure a function's performance by measuring the time for each iteration.
During the process, I found even if I do nothing, the results still vary quite a bit.
e.g.
volatile long count = 0;
for (int i = 0; i < N; ++i) {
    measure.begin();
    ++count;
    measure.end();
}
In measure.end(), I measure the time difference and keep an unordered_map to track how many times each time difference occurred.
I've used clock_gettime as well as rdtsc, but there is always about 1% of the data points lying far away from the mean, by a factor of about 1000.
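For reference, a self-contained sketch of the same measurement idea (my measure class is not shown here; this version calls clock_gettime(CLOCK_MONOTONIC) directly and keeps the per-iteration deltas in an unordered_map that is updated outside the timed region):

#include <time.h>
#include <unordered_map>
#include <cstdio>

int main()
{
    const int N = 1000000;
    volatile long count = 0;
    std::unordered_map<long, long> histogram; // delta in ns -> number of occurrences

    for (int i = 0; i < N; ++i)
    {
        timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        ++count;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        // The bookkeeping below is outside the measured interval.
        long delta_ns = (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
        ++histogram[delta_ns];
    }

    for (const auto& bin : histogram)
        std::printf("%ld ns: %ld\n", bin.first, bin.second);
}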
Here's what the measure.begin()/measure.end() loop generates:
T: count percentile
18 117563 11.7563%
19 111821 22.9384%
21 201605 43.0989%
22 541095 97.2084%
23 2136 97.422%
24 2783 97.7003%
...
406 1 99.9994%
3678 1 99.9995%
6662 1 99.9996%
17945 1 99.9997%
18148 1 99.9998%
18181 1 99.9999%
22800 1 100%
mean:21
So whether it's ticks or ns, the worst case of 22800 is about 1000 times bigger than the mean.
I set isolcpus in grub and ran this with taskset. The simple loop does almost nothing, and the hash table used for the time-count statistics is updated outside of the timed measurements.
What am I missing?
I'm running this on a laptop with Ubuntu installed; the CPU is an Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz.
Thank you for all the answers.
The main interrupt that I couldn't stop is the local timer interrupt. It seems the new 3.10 kernel supports full tickless operation, so I'll try that.

How to get an accurate 1ms Timer Tick under WinXP

I am trying to call a function every 1 ms, and I would like to do this on Windows. So I tried the multimedia timer API.
Multimediatimer API
Source
idTimer = timeSetEvent(
    1,
    0,
    TimerProc,
    0,
    TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
My result was that most of the time the 1 ms period was OK, but sometimes I got double the period. See the little bump at around 1.95 ms.
[multimediatimerHistogram: http://www.freeimagehosting.net/uploads/8b78f2fa6d.png]
My first thought was that maybe my method was running too long. But I measured this already and this was not the case.
Queued Timers API
My next try was using the queued timer API with
hTimerQueue = CreateTimerQueue();
if (hTimerQueue == NULL)
{
    printf("Error creating queue: 0x%x\n", GetLastError());
}
BOOL res = CreateTimerQueueTimer(
    &hTimer,
    hTimerQueue,
    TimerProc,
    NULL,
    0,
    1, // 1 ms period
    WT_EXECUTEDEFAULT);
But again the result was not as expected: now I get a 2 ms cycle time most of the time.
[queuedTimer: http://www.freeimagehosting.net/uploads/2a46259a15.png]
Measurement
For measuring the times I used QueryPerformanceCounter and QueryPerformanceFrequency.
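For context, a hedged sketch of what such a measurement could look like inside the callback (the TimerProc body below is an assumption; only timeSetEvent, QueryPerformanceCounter and QueryPerformanceFrequency come from the code above):

#include <windows.h>
#include <cstdio>
#pragma comment(lib, "winmm.lib")

static LARGE_INTEGER g_freq, g_last;

// Multimedia timer callback: print the elapsed time since the previous tick in ms.
void CALLBACK TimerProc(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser, DWORD_PTR dw1, DWORD_PTR dw2)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    printf("%.4f ms\n", 1000.0 * (now.QuadPart - g_last.QuadPart) / g_freq.QuadPart);
    g_last = now;
}

int main()
{
    QueryPerformanceFrequency(&g_freq);
    QueryPerformanceCounter(&g_last);
    MMRESULT idTimer = timeSetEvent(1, 0, TimerProc, 0, TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    Sleep(5000);            // let the timer run for a while
    timeKillEvent(idTimer);
}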
Question
So now my question is if somebody encountered similar problems under windows and maybe even found a solution?
Thanks.
Without going to a real-time OS, you cannot expect to have your function called every 1 ms.
Windows is NOT a real-time OS (and Linux is similar in this respect): a program that repeatedly reads the current time with microsecond precision and stores the consecutive differences in a histogram will have non-empty bins above 10 ms! This means that sometimes you will see 2 ms between your calls, but you can also get more.
You can try calling timeBeginPeriod(1) at program start and timeEndPeriod(1) before quitting. This can probably improve timer precision.
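A minimal sketch of that call pair (linked against winmm.lib; the 1 ms value is the one suggested above):

#include <windows.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1); // request a 1 ms system timer resolution while the program runs
    // ... create the timers and run the periodic work here ...
    timeEndPeriod(1);   // always pair with the same value before exiting
    return 0;
}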
A call to NtQueryTimerResolution() will return a value for ActualResolution. In your case the actual resolution is almost certainly 0.9765625 ms. This is exactly what you show in the first plot.
The second occurrence at about 1.95 ms is more precisely Sleep(1) = 1.9531 ms = 2 x 0.9765625 ms.
I guess the interrupt period runs at something close to 1 ms (0.9765625 ms).
And now the trouble begins: The timer signals when the desired delay expires.
Say ActualResolution is set to 0.9765625 ms: the interrupt heartbeat of the system will run at 0.9765625 ms periods, or 1024 Hz, and a call to Sleep is made with a desired delay of 1 ms. Two scenarios have to be looked at:
The call was made < 1ms (ΔT) ahead of the next interrupt. The next interrupt will not confirm that the desired period of time has expired. Only the following interrupt will cause the call to return. The resulting sleep delay will be ΔT + 0.9765625 ms.
The call was made >= 1ms (ΔT) ahead of the next interrupt. The next interrupt will force the call to return. The resulting sleep delay will be ΔT.
So the result depends a lot on when the call was made and therefore you may observe 0.98ms events as well as 1.95ms events.
Edit: Using CreateTimerQueueTimer pushes the observed delay to 1.95 ms because the timer tick (interrupt period) is 0.9765625 ms. At the first interrupt, the requested duration of 1 ms has not quite expired, so TimerProc will only be triggered after the second interrupt (2 x 0.9765625 ms = 1.953125 ms > 1 ms). Consequently, the queueTimer plot shows the peak at 1.953125 ms.
Note: This behavior strongly depends on the underlying hardware.
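For completeness, a hedged sketch of reading ActualResolution at runtime. NtQueryTimerResolution is exported by ntdll.dll but not declared in the SDK headers, so the typedef below follows the commonly described signature and should be treated as an assumption; the reported values are in 100-nanosecond units:

#include <windows.h>
#include <cstdio>

// Commonly described (undocumented) ntdll export; all values are in 100-ns units.
typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG MinimumResolution,
                                               PULONG MaximumResolution,
                                               PULONG CurrentResolution);

int main()
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    auto pQuery = ntdll ? (NtQueryTimerResolution_t)GetProcAddress(ntdll, "NtQueryTimerResolution") : nullptr;
    if (!pQuery)
        return 1;

    ULONG minRes = 0, maxRes = 0, actual = 0;
    pQuery(&minRes, &maxRes, &actual);
    // An ActualResolution of 9766 corresponds to the 0.9765625 ms mentioned above.
    printf("coarsest %.4f ms, finest %.4f ms, actual %.4f ms\n",
           minRes / 10000.0, maxRes / 10000.0, actual / 10000.0);
    return 0;
}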
More details can be found at the Windows Timestamp Project