Timer countdown even when program is not running (QML/C++)

I am trying to have a 24-hour countdown on my user interface in a QML/Qt project. The time should update every second, e.g. 23:59:59, then 23:59:58. Additionally, I need the time to keep going down even when the application is not open. So if the time is 23:59:59 when I close the app and I open it two hours later, it should continue counting down from 21:59:59. If the timer had timed out while the app wasn't running, it needs to reset to 24 hours and continue. Does anyone know how I could do this, either in QML or in the connected C++? Any help would be greatly appreciated.

You need to store the timer's end time according to the system clock (or equivalent information) somewhere persistent. Then at any moment you can compute the timer's value by taking the difference between the system clock's now() and the timer's end time.
Just use std::this_thread::sleep_until to wait until the exact moment you need to update the display for the next second. Don't use sleep_for(1s), as that way you'll accumulate inaccuracies.
Note: the system clock has the problem that it can be adjusted. I don't know of a complete way around it - if your application is turned off, how do you tell how much time passed when the system clock was adjusted in the meantime? You can deal with clock adjustments during the application's run by using sleep_until with steady_clock. C++20 introduces utc_clock, which, if you can access it, should solve the issue with daylight-saving-time adjustments. I don't think it is theoretically possible to deal with all kinds of clock adjustments unless you have access to a GPS clock.
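As a rough sketch of the store-the-end-time idea in Qt (assuming the project already links against QtCore; the settings key, organization/application names, and the reset-to-24-hours policy below are just placeholders):

```cpp
// Minimal sketch: persist the countdown's end time and recompute the
// remaining seconds whenever the app starts (or every second while it runs).
#include <QSettings>
#include <QDateTime>

constexpr qint64 kDaySecs = 24 * 60 * 60;

qint64 remainingSeconds()
{
    QSettings settings("MyOrg", "MyApp");               // hypothetical names
    const qint64 now = QDateTime::currentSecsSinceEpoch();
    qint64 end = settings.value("countdownEnd", 0).toLongLong();

    if (end <= now) {
        // Timer expired while the app was closed (or was never started):
        // reset to a full 24 hours and keep counting down.
        end = now + kDaySecs;
        settings.setValue("countdownEnd", end);
    }
    return end - now;                                    // seconds left for the display
}
```

A QTimer firing once per second (or a sleep_until loop, as suggested above) can then call remainingSeconds() and format the value as HH:MM:SS for the QML side; because the end time lives in QSettings, the countdown picks up where it should even after the application has been closed for hours.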

Related

Steady clock on iOS that ignores sleep/suspended time

Could someone help me figure out which clock I can use on iOS in C++ to get a steady/monotonic clock that doesn't tick while the app is suspended? I'm trying to measure the approximate time it takes to process something, but a regular clock doesn't help since it can inflate times by including time when the app was suspended and doing nothing.
Something equivalent to the QueryUnbiasedInterruptTime on Windows.
mach_absolute_time doesn't tick while the iPhone is suspended, but I want a clock that stops ticking while the app is suspended.
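For context, the measurement described above would typically look something like this sketch on iOS (elapsedNs is just an illustrative helper); mach_absolute_time keeps advancing while the app is suspended but the device stays awake, which is exactly the inflation the question is about:

```cpp
#include <mach/mach_time.h>
#include <cstdint>

// Convert a difference of mach_absolute_time ticks to nanoseconds.
// This clock keeps counting while the *app* is suspended (device awake),
// so processing times measured with it can come out inflated.
uint64_t elapsedNs(uint64_t startTicks, uint64_t endTicks)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    return (endTicks - startTicks) * tb.numer / tb.denom;
}

// Usage:
//   uint64_t t0 = mach_absolute_time();
//   ... process something ...
//   uint64_t dt = elapsedNs(t0, mach_absolute_time());
```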

DirectShow IReferenceClock implementation

How exactly are you meant to implement an IReferenceClock that can be set via IMediaFilter::SetSyncSource?
I have a system that implements GetTime and AdviseTime, UnadviseTime. When a stream starts playing it sets a base time via AdviseTime and then increases Stream Time for each subsequent advise.
However how am I supposed to know when a new graph has run? I need to set a zero point for a given reference clock. Otherwise, if I create a reference clock and then start the graph 10 seconds later, I am in the position where I don't know whether I should be 10 seconds into the playback or whether I should be starting from 0. Obviously the base time will say that I am starting from 0, but have I just stalled for 10 seconds, and do I need to drop a bunch of frames?
I really can't seem to figure out how to write a proper IReferenceClock so any hints or ideas would be hugely appreciated.
Edit: One example of a problem I am having is that I have 2 graphs and 2 videos. The audio from both videos goes to a null renderer, and the video to a standard CLSID_VideoRenderer. Now, if I set the same reference clock on both and then Run graph 1, all seems to be fine. However, if 10 seconds down the line I run graph 2, it will run as though the SetSyncSource is NULL for the first 10 seconds or so, until it has caught up with the other video.
Obviously, if the graphs called GetTime to get their "base time" this would solve the problem, but this is not what I'm seeing happen. Both videos end up with a base time of 0 because that's the point I run them from.
It's worth noting that if I set no clock at all (or call SetDefaultSyncSource), both graphs run as fast as they can. I assume this is due to the lack of an audio renderer...
However how am I supposed to know when a new graph has run?
The clock runs on its own; it is the graph that aligns its operation against the clock, not the other way around. The graph receives the outer Run call, then checks the current clock time and assigns a base time of "current clock time + a little extra for things to take off", which is distributed among the filters. The clock itself doesn't need to have the faintest idea about any of this; its task is simply to keep running and keep incrementing time.
In particular, clock time does not have to reset to zero at any time.
From documentation:
The clock's baseline—the time from which it starts counting—depends on the implementation, so the value returned by GetTime is not inherently meaningful. What matters is the delta from when the graph started running.
When an application calls IMediaControl::Run to run the filter graph, the Filter Graph Manager calls IMediaFilter::Run on each filter. To compensate for the slight amount of time it takes for the filters to start running, the Filter Graph Manager specifies a start time slightly in the future.
The DirectShow BaseClasses offer the CBaseReferenceClock class, which you can use as a reference implementation (in refclock.*).
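To illustrate the point - the clock simply keeps counting in 100 ns units from an arbitrary baseline and never resets - here is a rough sketch of the GetTime side of a custom clock built on QueryPerformanceCounter (CBaseReferenceClock does essentially this, plus the advise bookkeeping; the IUnknown/AdviseTime/AdvisePeriodic/Unadvise plumbing is omitted and MyClockCore is just an illustrative name):

```cpp
#include <windows.h>

typedef LONGLONG REFERENCE_TIME;   // in real code this comes from strmif.h; 100 ns units

class MyClockCore
{
public:
    MyClockCore()
    {
        QueryPerformanceFrequency(&m_freq);
        QueryPerformanceCounter(&m_start);   // arbitrary baseline, never reset
    }

    // Monotonically increasing time since construction, in 100 ns units.
    // Graphs only care about deltas from the moment Run samples this value,
    // so the absolute number is meaningless by design.
    REFERENCE_TIME GetTime() const
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return (now.QuadPart - m_start.QuadPart) * 10000000LL / m_freq.QuadPart;
    }

private:
    LARGE_INTEGER m_freq;
    LARGE_INTEGER m_start;
};
```

The only hard requirement is that the reported time never goes backwards; when Run is called, the Filter Graph Manager samples this clock and derives the base time from whatever value it returns.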
A comment on your edit:
You are obviously not describing the case in full and are omitting important details. There is a simple test: instantiate the standard clock (CLSID_SystemClock) and use it on two regular graphs - they WILL run fine, even with time-separated Run calls.
I suspect that you are doing some syncing or matching between the graphs and that you are timestamping the samples, also using the clock. Presumably you are doing something wrong at that point and are then having a hard time fixing it through the clock.

Sleep Function Error In C

I have a data dump file containing entries with different timestamps. I read the time from each timestamp and sleep my C thread for that interval. The problem is that the actual time difference is 10 seconds, but the data arriving at the receiving end is delayed by almost 14 or 15 seconds. I am using the Windows OS. Kindly guide me.
Sorry for my weak English.
The sleep function will sleep for at least as long as the time you specify, but there is no guarantee that it won't sleep for longer. If you need an accurate interval, you will need to use some other mechanism.
If I understand correctly:
you have a thread that sends data (over a network? what is the source of the data?)
you slow down the sending rhythm using sleep
the received data (at the other end of the network) can be delayed much more (15 s instead of 10 s)
If the above describes what you are doing, your design has several flaws:
sleep is very imprecise: it will wait at least n seconds, but it may wait longer (especially if your system is loaded with other running apps).
networks introduce a buffering delay; you have no guarantee that your data will be sent immediately on the wire (usually it is not).
the trip itself introduces some delay (latency); if your protocol waits for an ACK from the receiving end, you should take that into account.
you should also consider the time necessary to read/build/retrieve the data to send and actually send it over the wire. Depending on what you are doing, this can be negligible or take several seconds...
If you give some more details, it will be easier to diagnose the source of the problem: sleep, as you believe (it is indeed a really poor timer), or some other part of your system.
If your dump is large, I would bet that the additional time comes from reading the data and sending it over the wire. You should measure the time consumed in the sending process (record the time before starting and after finishing sending).
If this is indeed the source of the additional time, you just have to subtract that time from the next wait.
Example: sending the previous block of data took 4 s and the next block is due 10 s later; since you have already consumed 4 s, you only wait for 6 s.
sleep is still quite an imprecise timer, and obviously the above mechanism won't work if the sending time is longer than the delay between sends, but you get the idea.
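A sketch of that correction in C++ using <chrono> (the Record struct, records vector, and send() call are placeholders for however the dump is actually read and transmitted); computing each deadline from the replay start means the time spent reading and sending earlier blocks is subtracted automatically:

```cpp
#include <chrono>
#include <thread>
#include <vector>

struct Record { double offsetSeconds; /* ... payload ... */ };

void replay(const std::vector<Record>& records)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();

    for (const auto& rec : records) {
        // Deadline is computed from the replay start, so time consumed by
        // reading and sending previous records is automatically accounted for.
        auto deadline = start + std::chrono::duration_cast<clock::duration>(
                                    std::chrono::duration<double>(rec.offsetSeconds));
        std::this_thread::sleep_until(deadline);   // no sleep if we are already late
        // send(rec);                              // hypothetical send over the wire
    }
}
```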
Correction: sleep is not as bad in a Windows environment as it is on Unix systems. The accuracy of the Windows Sleep is a millisecond; the accuracy of the Unix sleep is a second. If you do not need high-precision timing (and if a network is involved, high-precision timing is out of reach anyway), sleep should be OK.
No modern multitasking OS scheduler will guarantee exact timings to user apps.
You can try assigning 'realtime' priority to your app somehow, from the Windows Task Manager for instance, and see if it helps.
Another solution is to implement a 'controlled' sleep, i.e. sleep in a series of 500 ms steps, checking the current timestamp between them. That way, if your app sleeps 1 s instead of 500 ms at some step, you will notice it and skip the extra sleep(500 ms); see the sketch below.
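A minimal sketch of such a controlled sleep in C++ (controlledSleep is an illustrative name; 500 ms is the re-check interval from the suggestion above):

```cpp
#include <algorithm>
#include <chrono>
#include <thread>

// Sleep roughly `total`, but wake at most every 500 ms to re-check the clock,
// so a single oversleep cannot silently stretch the whole wait.
void controlledSleep(std::chrono::milliseconds total)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + total;

    while (clock::now() < deadline) {
        auto remaining = deadline - clock::now();
        std::this_thread::sleep_for(
            std::min<clock::duration>(remaining, std::chrono::milliseconds(500)));
    }
}
```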
Try out a Multimedia Timer. It is about as accurate as you can get on a Windows system. There is a good article on CodeProject about them.
The Sleep function can take longer than requested, but never less. Use WinAPI timer functions to get a function called back after an interval from now.
You could also use the Windows Task Scheduler, but that goes outside programmatic standalone options.
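As an illustration of the callback route, a Win32 waitable timer can invoke a completion routine after a relative interval; a rough sketch (the 10-second delay and the message are placeholders, and the thread must be in an alertable wait for the callback to run):

```cpp
#include <windows.h>
#include <cstdio>

// Completion routine, called when the timer fires while the thread is in an
// alertable wait (SleepEx below).
static VOID CALLBACK onTimer(LPVOID arg, DWORD /*lowVal*/, DWORD /*highVal*/)
{
    printf("timer fired: %s\n", static_cast<const char*>(arg));
}

int main()
{
    HANDLE timer = CreateWaitableTimer(nullptr, TRUE, nullptr);

    LARGE_INTEGER due;
    due.QuadPart = -10LL * 10 * 1000 * 1000;   // negative = relative; 100 ns units -> 10 s

    SetWaitableTimer(timer, &due, 0, onTimer,
                     const_cast<char*>("block ready"), FALSE);
    SleepEx(INFINITE, TRUE);                   // alertable wait so the APC can run

    CloseHandle(timer);
    return 0;
}
```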

Timer message in MFC/Win32

I was just trying the SetTimer method in Win32 with some low values, such as 10 ms, as the timeout period. I calculated the time it took to get 500 timer events and expected it to be around 5 seconds. Surprisingly, I found that it takes about 7.5 seconds to get that many events, which means it is timing out at about 16 ms. Is there any limitation on the value we can set for the timeout period (I couldn't find anything on MSDN)? Also, do the other processes running on my system affect these timer messages?
OnTimer is based on the WM_TIMER message, which has a low priority, meaning it will be sent only when there is no other message waiting.
MSDN also explains that you cannot set an interval less than USER_TIMER_MINIMUM, which is 10 ms.
Regardless of that, the scheduler will honor its time quantum.
Windows is not a real-time OS and can't handle that kind of precision (10 ms intervals). Having said that, there are multiple kinds of timers and some have better precision than others.
You can alter the granularity of the system timer down to 1ms - this is intended for MIDI work.
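That granularity change is done with the winmm timeBeginPeriod/timeEndPeriod pair (link against winmm.lib); a minimal sketch, with the work itself left as a placeholder:

```cpp
#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod, link with winmm.lib

// Request 1 ms system timer resolution for the duration of timing-sensitive work.
// This is a system-wide setting, so pair every timeBeginPeriod with timeEndPeriod.
void doTimingSensitiveWork()
{
    if (timeBeginPeriod(1) == TIMERR_NOERROR) {
        // ... SetTimer / Sleep calls now resolve closer to 1 ms ...
        timeEndPeriod(1);
    }
}
```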
Basically, my experience on w2k is that any requested wait period under 13 ms yields a wait that oscillates randomly between two values, 0 ms and 13 ms. Timers longer than that are generally very accurate. Of your 500 timer events, some were 0 ms and some were 13 ms (assuming 13 ms is still correct), so you ended up with an overall timing discrepancy.
As stated, Windows is not a realtime OS. Asking it to do anything and expecting it to happen at a specific time later is a fool's errand. Setting a timer asks Windows nicely to fire the WM_TIMER event as soon as possible after the time has passed. This may be after other threads have been dealt with. Therefore the actual time at which you see the WM_TIMER event can't be realistically predicted - all you know is that it is greater than the time you set...
Check out this article on Windows time.

Concurrency question about program running in OS

Here is what I know about concurrency in an OS.
In order to run multiple tasks, the OS allocates a CPU time slice to each task. While task A is running, the other tasks "sleep", and so on.
Here is my question:
I have a timer program that counts keyboard/mouse inactivity. If the inactivity continues for 15 minutes, a screen saver program pops up.
If the concurrency theory is as I stated above, won't the timer be inaccurate? Each program running in the OS spends some of its time "sleeping", so the timer program will also be "sleeping" at times, but in the real world time does not stop.
You would use services from the OS to provide a timer; you would not try to implement it yourself. If code had to keep running simply to count time, we would still be in the dark ages as far as computing is concerned.
In most operating systems, your task will not only be put to sleep when its time slice has been used but also while it is waiting for I/O (which is much more common for most programs).
Like AnthonyWJones said, use the operating system's concept of the current time.
The OS kernel's time slices are much too short to introduce any noticeable inaccuracy for a screen saver.
I think your waiting process can be very simple:
1. activityTime = time of last keypress or mouse movement [from OS]
2. now = current time [from OS]
3. If now >= 15 minutes after activityTime, start the screensaver
4. Sleep for a few seconds and return to step 1
Because steps 1 and 2 use the OS and not some kind of running counter, you don't care if you get interrupted anytime during this activity.
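On Windows, for example, steps 1 and 2 map directly onto GetLastInputInfo and GetTickCount; here is a rough sketch of the loop (idleWatcher and startScreenSaver are illustrative names, and the 15-minute threshold is the one from the question):

```cpp
#include <windows.h>

// Poll the OS for the time of the last keyboard/mouse input instead of
// counting elapsed time ourselves, so scheduling gaps don't matter.
void idleWatcher()
{
    const DWORD idleLimitMs = 15 * 60 * 1000;   // 15 minutes

    for (;;) {
        LASTINPUTINFO lii = { sizeof(lii) };
        if (GetLastInputInfo(&lii)) {
            DWORD idleMs = GetTickCount() - lii.dwTime;   // unsigned math handles wraparound
            if (idleMs >= idleLimitMs) {
                // startScreenSaver();   // hypothetical
            }
        }
        Sleep(5000);   // step 4: re-check every few seconds
    }
}
```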
This could be language-dependent. In Java, it's not a problem. I suspect that all languages will "do the right thing" here. That's with the caveat that such timers are not extremely accurate anyway, and that usually you can only expect that your timer will sleep at least as long as you specify, but might sleep longer. That is, it might not be the active thread when the time runs out, and would therefore resume processing a little later.
See for example http://www.opengroup.org/onlinepubs/000095399/functions/sleep.html
The suspension time may be longer than requested due to the scheduling of other activity by the system.
The time you specify in sleep() is real time, not the CPU time your process uses (the CPU time is approximately 0 while your program sleeps).