I'm wondering what the timeGetTime() function really returns. I powered on my system about 15 minutes ago, and timeGetTime() returns 257052531 milliseconds, which is about 71 hours!
The documentation says:
The timeGetTime function retrieves the system time, in milliseconds. The system time is the time elapsed since Windows was started.
So my system has been running for ONLY ~15 minutes! How could it return ~71 hours!?
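For reference, a minimal sketch of reading the value and converting it to hours (assuming Windows, <windows.h>, and linking against winmm.lib); 257052531 ms / 3,600,000 is indeed about 71.4 hours:

#include <windows.h>
#include <iostream>

int main() {
    DWORD ms = timeGetTime();                    // milliseconds since "Windows was started"
    std::cout << ms << " ms = " << ms / 3600000.0 << " hours\n";
    // GetTickCount64() reports a comparable uptime value and can serve as a cross-check.
    std::cout << GetTickCount64() << " ms from GetTickCount64()\n";
}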
Could someone help me figure out what clock I can use on iOS in C++ to get a steady/monotonic clock that doesn't tick while the app is suspended? I'm trying to measure the approximate time it takes to process something, but a regular clock doesn't help since it inflates the measurement by including time when the app was suspended and doing nothing.
Something equivalent to QueryUnbiasedInterruptTime on Windows.
mach_absolute_time doesn't tick while the iPhone itself is suspended, but I want a clock that stops ticking while the app is suspended.
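For context, a minimal sketch of how mach_absolute_time() ticks are usually converted to nanoseconds via mach_timebase_info (this only illustrates the clock the question already mentions; it is not a clock that stops while the app is suspended):

#include <mach/mach_time.h>
#include <cstdint>
#include <iostream>

int main() {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);                 // numer/denom scale ticks to nanoseconds
    uint64_t start = mach_absolute_time();
    // ... work being measured ...
    uint64_t end = mach_absolute_time();
    uint64_t ns = (end - start) * tb.numer / tb.denom;
    std::cout << ns << " ns elapsed\n";
}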
I am trying to have a 24 hour countdown on my user interface in a QML/Qt project. The time should update every second, like 23:59:59, then 23:59:58. Additionally, I need the time to keep counting down even when the application is not open. So if the time is 23:59:59 when I close the app and I open it two hours later, it should continue counting down from 21:59:59. If the timer had timed out while the app wasn't running, it needs to reset to 24 hours and continue. Does anyone know how I could do this, in either QML or connected C++? Any help would be greatly appreciated.
You need to store the timer's end time (according to the system clock), or equivalent information, somewhere. Then at any moment you can compute the timer's value by taking the difference between the system clock's now() and the timer's end.
Just use std::this_thread::sleep_until to wait until the exact moment you need to update the display for the next second. Don't use sleep_for(1s), as that way you'll accumulate inaccuracies.
Note: the system clock has the issue that it can be adjusted. I don't fully know of a way around it - say your application was turned off; how do you tell how much time passed if the system clock was adjusted in the meantime? You can deal with clock adjustments while the application is running by using sleep_until with steady_clock. C++20 introduces utc_clock; perhaps you can access that somehow, which should solve the issue with daylight saving time adjustments. I don't think it is theoretically possible to deal with all types of clock adjustments unless you have access to a GPS clock.
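A minimal C++ sketch of that approach, assuming the end time is persisted as a Unix timestamp in a hypothetical file (countdown_end.txt); a real Qt project would more likely use QSettings, but the logic is the same:

#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>

using namespace std::chrono;

// Load the stored end time; if it is missing or already expired, reset to now + 24 h.
system_clock::time_point load_or_reset_end() {
    std::ifstream in("countdown_end.txt");
    long long secs = 0;
    if (in >> secs) {
        system_clock::time_point end{seconds{secs}};
        if (end > system_clock::now())
            return end;                           // timer is still running
    }
    auto end = system_clock::now() + hours{24};
    std::ofstream("countdown_end.txt")
        << duration_cast<seconds>(end.time_since_epoch()).count();
    return end;
}

int main() {
    auto end = load_or_reset_end();
    auto next_tick = steady_clock::now();
    while (system_clock::now() < end) {
        auto left = duration_cast<seconds>(end - system_clock::now()).count();
        std::cout << left / 3600 << ':' << (left / 60) % 60 << ':' << left % 60 << '\n';
        next_tick += seconds{1};
        std::this_thread::sleep_until(next_tick); // steady_clock target: no drift accumulates
    }
}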
I am using Ubuntu 18.04 on my machine. My NTP is configured to use gpsd as a source. The time provided by gpsd does not account for leap seconds, but NTP adjusts it and provides UTC with leap seconds applied. So my system clock will be synced to UTC by NTP. According to the documentation, std::chrono::system_clock::now provides time since 1970 and does not count leap seconds.
My question is: does the kernel adjust for leap seconds when we call this? Or does the time queried from std::chrono::system_clock::now actually contain the same time coming from NTP, which has leap seconds adjusted?
system_clock and NTP both "handle" leap seconds the same way. Time simply stops while a leap second is being inserted. Here I'm speaking of the time standard, and not of any particular implementation.
An implementation of NTP might not stop for a whole second during a leap second insertion. Instead it might delay itself by small fractions of a second for hours both before and after a leap second insertion such that the sum of all delays is one second. This is known as a "leap second smear".
So you could say that both system_clock and NTP ignore leap seconds in the sense that if you have two time points t0 and t1 in these systems, and t0 references a time prior to a leap second insertion and t1 references a time after that insertion, then the expression t1-t0 gives you a result that does not count the inserted leap second. The result is 1 less than the number of physical seconds that have actually transpired.
A GPS satellite "ignores" leap seconds in a completely different way than system_clock and NTP. The GPS "clock" keeps ticking right through a leap second, almost completely ignoring it. However GPS weeks are always exactly 604,800 seconds (86,400 * 7), even if a leap second was inserted into UTC that week.
So to convert GPS weeks (and GPS time of week) to UTC, one has to know the total number of leap seconds that have been inserted since the GPS epoch (First Sunday of January 1980). I believe gpsd does this transformation for you when it provides you a UTC time point.
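As an illustration (assuming a C++20 standard library with calendar and leap-second support), the one-second discrepancy described above shows up when the same span is measured with sys_time versus utc_clock across the leap second inserted at the end of 2016-12-31:

#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;
    sys_days d0 = 2016y/December/31;
    sys_days d1 = 2017y/January/1;
    std::cout << duration_cast<seconds>(d1 - d0) << '\n';  // 86400s: leap second not counted

    auto u0 = clock_cast<utc_clock>(d0);
    auto u1 = clock_cast<utc_clock>(d1);
    std::cout << duration_cast<seconds>(u1 - u0) << '\n';  // 86401s: leap second counted
}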
I have the following code:
clock_t tt = clock();
sleep(10);
tt = clock() - tt;
cout << (float)tt / CLOCKS_PER_SEC << " " << CLOCKS_PER_SEC << endl;
When I run the code, it apparently pauses for 10 seconds and the output is:
0.001074 1000000
This indicates that 1074 clock ticks (about 1 ms) passed, which is apparently false.
Why does this happen?
I am using g++ under Linux.
The clock() function returns the processor time consumed by the program. While sleeping, your process consumes essentially no processor time, so this is expected. The small amount of time your program reports probably just comes from the calls to clock() themselves.
clock() doesn't measure elapsed time (what you would measure with a stopwatch); it measures the time your program spends running on the CPU. But sleep() uses almost no CPU - it simply puts your process to sleep. Try replacing sleep(10) with any other value, sleep(1) for example, and you will get roughly the same result.
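If the goal is to measure wall-clock ("stopwatch") time rather than CPU time, a steady clock is the usual tool; a minimal sketch:

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::seconds(10));  // sleeping consumes almost no CPU time
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    std::cout << elapsed.count() << " s elapsed\n";         // prints roughly 10 s
}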
I was just trying the SetTimer method in Win32 with some low values, such as 10 ms, as the timeout period. I calculated the time it took to get 500 timer events and expected it to be around 5 seconds. Surprisingly, I found that it takes about 7.5 seconds to get that many events, which means it is firing roughly every 15-16 ms. Is there any limitation on the value we can set for the timeout period (I couldn't find anything on MSDN)? Also, do the other processes running on my system affect these timer messages?
OnTimer is based on the WM_TIMER message, which has low message priority, meaning it will be sent only when there is no other message waiting.
Also, MSDN explains that you cannot set an interval less than USER_TIMER_MINIMUM, which is 10 ms.
Regardless of that, the scheduler will still honor its time quantum.
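A minimal sketch of the setup in the question (using a thread timer with a callback rather than OnTimer, purely for illustration): it asks SetTimer for 10 ms and measures how long 500 WM_TIMER events actually take, which typically lands near the coarser default timer resolution:

#include <windows.h>
#include <iostream>

static int g_count = 0;
static ULONGLONG g_start = 0;

void CALLBACK TimerProc(HWND, UINT, UINT_PTR id, DWORD) {
    if (++g_count == 500) {
        std::cout << "500 WM_TIMER events in " << (GetTickCount64() - g_start) << " ms\n";
        KillTimer(nullptr, id);
        PostQuitMessage(0);
    }
}

int main() {
    g_start = GetTickCount64();
    SetTimer(nullptr, 0, 10, TimerProc);      // request 10 ms; delivery is usually coarser
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);                // dispatches WM_TIMER to TimerProc
    }
}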
Windows is not a real-time OS and can't guarantee that kind of precision (10 ms intervals). Having said that, there are multiple kinds of timers, and some have better precision than others.
You can alter the granularity of the system timer down to 1 ms - this is intended for MIDI work.
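That granularity change is typically done with the multimedia timer API; a minimal sketch, assuming the program links against winmm.lib:

#include <windows.h>
#include <iostream>

int main() {
    if (timeBeginPeriod(1) == TIMERR_NOERROR) {  // request 1 ms system timer resolution
        // ... Sleep() and SetTimer intervals now resolve closer to 1 ms ...
        Sleep(10);
        timeEndPeriod(1);                        // always pair with timeEndPeriod
    } else {
        std::cout << "1 ms resolution not supported\n";
    }
}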
Basically, my experience on Windows 2000 is that any requested wait period under 13 ms produces waits that oscillate randomly between two values, 0 ms and 13 ms. Timers longer than that are generally very accurate. Of your 500 timer events, some were 0 ms and some were 13 ms (assuming 13 ms is still correct), so the total didn't come out to the time you expected.
As stated - Windows is not a real-time OS. Asking it to do anything and expecting it to happen at a specific time later is a fool's errand. Setting a timer asks Windows nicely to fire the WM_TIMER event as soon after the requested time has passed as possible. This may be after other threads are dealt with. Therefore the actual time at which you see the WM_TIMER event can't realistically be predicted - all you know is that it is later than the time you set...
Check out this article on Windows time.