C++ alert timer with little CPU load

I want to write a small alert timer on Windows using C++ and MSVC 2010. The timer needs to trigger a status message after a couple of minutes. I know how to check the system time in C++, and I know there is a Sleep function in the Windows API. How can I implement a timer with very little CPU load? For example, I don't want to check the system time every couple of milliseconds just to trigger the status message when the trigger time is reached. Do I create CPU load when using something like Sleep(600000) in an extra thread, or are there more efficient ways to wait a couple of minutes and execute some code afterwards?

You can indeed poll the time in a loop; even a Sleep(1) between checks is enough that your program's CPU usage will be barely measurable.
I used to do it "back in the day", and even on my PII 233 MHz running multiple threads doing this, it barely made a dent in the CPU usage.

You could create a thread and write a continuous loop inside which you simply sleep for the interval your trigger needs to run at, then print your message. If you only need it to fire after 2 minutes, why use multiple small sleep values and check the time in between? That would be a waste of CPU time.
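A minimal sketch of that approach, assuming a plain Win32 thread is acceptable; the two-minute delay and the message text are placeholders:

    #include <windows.h>
    #include <iostream>

    // Worker thread: blocks in Sleep() for the whole interval, so it uses
    // essentially no CPU, then fires the status message once.
    DWORD WINAPI AlertThread(LPVOID param)
    {
        DWORD delayMs = *static_cast<DWORD*>(param);
        Sleep(delayMs);
        std::cout << "Alert: timer elapsed" << std::endl;
        return 0;
    }

    int main()
    {
        DWORD delayMs = 2 * 60 * 1000;   // two minutes
        HANDLE h = CreateThread(NULL, 0, AlertThread, &delayMs, 0, NULL);
        if (h != NULL)
        {
            // In a real application the main thread would keep doing other work here.
            WaitForSingleObject(h, INFINITE);
            CloseHandle(h);
        }
        return 0;
    }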

Related

What exactly do alarm() and sleep() do?

If my timeslice is 3 seconds, I am guessing the alarm stops the execution of a process every three seconds. What does sleep do? Does it put the process to sleep for 3 seconds? This does not make sense to me - what if there are a lot of processes? Wouldn't it have to sleep for longer?
I am doing this for a round-robin simulation:
while (head != NULL)
{
    alarm(TIMESLICE);
    sleep(TIMESLICE);
}
cout << "no processes left" << endl;
The code works, but I just want to understand what exactly is going on as I am new to this concept.
I am guessing the alarm stops the execution of a process every three seconds.
Sort of. It arranges for a signal to be sent to the process in three seconds. The process can then continue normally and can even ignore the signal if it wants to.
What does sleep do? Does it put the process to sleep for 3 seconds?
Correct.
This does not make sense to me - what if there are a lot of processes? Wouldn't it have to sleep for longer?
No. Even a process that never sleeps isn't guaranteed to get the CPU all the time. A process that isn't sleeping may or may not be scheduled to run on a core at any particular time. Once it's no longer sleeping, it's ready-to-run, and the scheduler will decide when, for how long, and on which core to let it run.
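A small POSIX illustration of that point (compiled as C++; the three- and ten-second values are just for demonstration): alarm() returns immediately and merely schedules SIGALRM for later, while sleep() blocks until the interval elapses or a signal arrives.

    #include <unistd.h>
    #include <csignal>
    #include <cstdio>

    // Signal handler: only async-signal-safe calls such as write() belong here.
    static void onAlarm(int)
    {
        const char msg[] = "SIGALRM delivered\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    int main()
    {
        std::signal(SIGALRM, onAlarm);
        alarm(3);       // ask the kernel to deliver SIGALRM in ~3 seconds; returns at once
        sleep(10);      // blocks, but returns early when the signal interrupts it
        std::puts("sleep returned");
        return 0;
    }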

Using timers with performance-critical software (Qt)

I am developing an application that is responsible for moving and managing robots over a UDP connection.
The application needs to:
Read joystick/user input using SDL.
Generate and send a control packet to the robot every 20 milliseconds (UDP)
Receive and decode response packets from the robot (~20 msecs). This was implemented with the signal/slot mechanism and does not require a timer.
Receive and process robot messages for debugging reasons. This is not time-regulated.
Update the UI regularly to keep the user notified about the status of the robot (e.g. battery voltage). For most cases, I have also used Qt's signal/slot mechanism.
Use a watchdog that disables the robot if no response is received after 1 second. The watchdog is reset when the application receives a robot packet (~20 msecs)
For the moment, I have implemented all of the above. However, the application fails to send the packets regularly when the watchdog is activated or when two or more QTimer objects are used. The application generally works, but I would not consider it "production ready". I have tried the timers' precision flags (Qt::PreciseTimer, Qt::CoarseTimer and Qt::VeryCoarseTimer), but I still experienced problems.
Notes:
The code is generally well organized; there are no "god objects" in the code base (most source files are less than 150 lines long and only create the necessary dependencies).
Most of the time, I use QTimer::singleShot() (e.g. I only send the next packet once the current packet has been sent).
Where we use timers:
To read joystick input (~50 msecs, precise timer)
To send robot packets (~20 msecs, precise timer)
To update some aspects of the UI (~500 msecs, coarse timer)
To update the elapsed time since the robot was enabled (~100 msecs, precise timer)
To implement a watchdog (put the application and robot in safe state if 1000 msecs have passed without a robot response)
Note: the watchdog is fed when we receive a response packet from the robot (~20 msecs)
Do you have any recommendations for using QTimer objects in performance-critical code? (Any idea is welcome.) Note that I have also tried using different threads, but that caused me more problems, since the application would not stay in "sync", thus failing to effectively control the robots that we have tested.
Actually, I seem to have underestimated Qt's timer and event loop performance. On my system I get on average around 20,000 nanoseconds for an event loop cycle plus the overhead from scheduling a queued function call, and a timer with a 1 millisecond interval is rarely late; most of the timeouts are a few thousand nanoseconds short of a millisecond. But this is a high-end system; on embedded hardware it may be a lot worse.
You should take the time to profile your target system and Qt build to determine whether it can indeed run snappily enough, and, based on those measurements, adjust your timings to compensate for the system delays so your events are scheduled closer to on time.
You should definitely keep the timer thread as free as possible, because if you block it with I/O or heavy computation, your timer will not be accurate. Use a dedicated thread to schedule work and extra worker threads to do the actual work. You may also try playing with thread priorities a bit.
Worst case, look for third-party high-performance event loop implementations, or create your own and potentially a faster signaling mechanism as well. As I already mentioned in the comments, Qt's inter-thread queued signals are very slow, at least compared to something like indirect function calls.
Last but not least, if you want to do task X every N units of time, that is only possible if task X takes N units of time or less on your system. You need to make this consideration for each task, and for all tasks running concurrently. And in order to get accurate scheduling, you should measure how long task X took; if that is less than its period, schedule the next execution in the time remaining, otherwise execute immediately.
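A rough sketch of that last point, assuming Qt 5.4+ for the functor overload of QTimer::singleShot(); sendPacket() and the 20 ms period are placeholders for the real work:

    #include <QCoreApplication>
    #include <QTimer>
    #include <QElapsedTimer>
    #include <QDebug>
    #include <QtGlobal>

    static const qint64 PERIOD_MS = 20;   // desired send period (placeholder)

    // Hypothetical stand-in for building and sending one UDP control packet.
    static void sendPacket()
    {
        qDebug() << "packet sent";
    }

    static void scheduleNext()
    {
        QElapsedTimer work;
        work.start();
        sendPacket();                                           // the periodic task
        qint64 remaining = qMax<qint64>(0, PERIOD_MS - work.elapsed());
        // Re-arm only after this run finished, subtracting the time it took.
        QTimer::singleShot(static_cast<int>(remaining), Qt::PreciseTimer, scheduleNext);
    }

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        scheduleNext();
        return app.exec();
    }

This keeps the period roughly constant even when the task itself takes a noticeable fraction of the 20 ms budget.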

Can I set a single thread's priority above 15 for a normal priority process?

I have a data acquisition application running on Windows 7, using VC2010 in C++. One thread is a heartbeat which sends out a change every 0.2 seconds to keep alive some hardware that has a timeout of about 0.9 seconds. Typically the heartbeat call takes 10-20 ms and the thread spends the rest of the time sleeping.
Occasionally however there will be a delay of 1-2 seconds and the hardware will shut down momentarily. The heartbeat thread is running at THREAD_PRIORITY_TIME_CRITICAL which is 15 for a normal priority process. My other threads are running at normal priority, although I use a DLL to control some other hardware and have noticed with Process Explorer that it starts several threads running at level 15.
I can't track down the source of the slowdown, but other threads in my application are seeing the same kind of delays when this happens. I have made several optimizations to the heartbeat code even though it is quite simple, but the occasional failures still happen. Now I wonder if I can increase the priority of this thread beyond 15 without specifying REALTIME_PRIORITY_CLASS for the entire process. If not, are there any downsides I should be aware of when using REALTIME_PRIORITY_CLASS? (Other than this heartbeat thread, the rest of the application doesn't have real-time timing needs.)
(Or does anyone have any ideas about how to track down these slowdowns...not sure if the source could be in my app or somewhere else on the system).
Update: I hadn't actually tried passing 31 into my AfxBeginThread call, and it turns out it ignores that value and sets the thread to normal priority instead of the 15 I get with THREAD_PRIORITY_TIME_CRITICAL.
Update: It turns out that running the Disk Defragmenter is a good way to cause lots of thread delays. Even running the process at REALTIME_PRIORITY_CLASS and the heartbeat thread at THREAD_PRIORITY_TIME_CRITICAL (level 31) doesn't seem to help. The next thing to try is calling AvSetMmThreadCharacteristics("Pro Audio").
Update: Scheduling the heartbeat thread as "Pro Audio" does work to increase the thread's priority beyond 15 (Base = 1, Dynamic = 24), but it doesn't seem to make any real difference when defrag is running. I've been able to correlate many of the slowdowns with the disk defragmenter, so I turned off the weekly scan. I still can't explain some of the delays, so we're going to increase the watchdog timeout to 5-10 seconds.
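For reference, a bare-bones sketch of the AvSetMmThreadCharacteristics call mentioned above (requires avrt.lib; the heartbeat body itself is omitted):

    #include <windows.h>
    #include <avrt.h>
    #pragma comment(lib, "avrt.lib")

    void RunHeartbeat()
    {
        // Register this thread with MMCSS under the "Pro Audio" task so the
        // scheduler can boost it above the normal time-critical level of 15.
        DWORD taskIndex = 0;
        HANDLE mmcss = AvSetMmThreadCharacteristics(TEXT("Pro Audio"), &taskIndex);

        // ... heartbeat loop: poke the hardware, Sleep(200), repeat ...

        if (mmcss != NULL)
            AvRevertMmThreadCharacteristics(mmcss);
    }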
Even if you could, increasing the priority will not help. The highest priority runnable thread gets the processor at all times.
Most likely there is some extended interrupt processing occurring while interrupts are disabled. Interrupts effectively work at a higher priority than any thread.
It could be video, network, disk, serial, USB, etc. It will take some insight to selectively disable drivers or swap in alternates to see whether the problematic system hesitation is affected. Once you find the culprit, figuring out a way to prevent it might range from trivial to impossible, depending on what it is.
Without more knowledge about the system, it is hard to say. Have you tried running it on a different PC?
Officially, you can't use REALTIME threads in a process that does not have REALTIME_PRIORITY_CLASS.
Unofficially, you could play with the undocumented NtSetInformationThread;
see:
http://undocumented.ntinternals.net/UserMode/Undocumented%20Functions/NT%20Objects/Thread/NtSetInformationThread.html
But since I have not tried it, I don't have any more info about this.
On the other hand, as was said before, you can never be sure that the OS will not take its time when your thread's quantum expires. Certain poorly written drivers are often the cause of such latency.
Otherwise, there is software that can tell you if you have misbehaving kernel parts:
http://www.thesycon.de/deu/latency_check.shtml
I would try using CreateWaitableTimer() & SetWaitableTimer() and see if they are subject to the same preemption problems.
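A minimal sketch of that suggestion; the thread blocks in the kernel until the due time, so it adds no measurable CPU load (the 2-second delay is just an example):

    #include <windows.h>
    #include <iostream>

    int main()
    {
        HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);   // manual-reset, unnamed
        if (timer == NULL)
            return 1;

        LARGE_INTEGER due;
        due.QuadPart = -2000LL * 10000LL;   // negative = relative time, in 100 ns units (2 s)
        if (SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE))
        {
            WaitForSingleObject(timer, INFINITE);               // blocks until the timer fires
            std::cout << "timer fired" << std::endl;
        }
        CloseHandle(timer);
        return 0;
    }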

Sleep Function Error In C

I have a data dump file containing data with different timestamps. I get the time from each timestamp and sleep my C thread for that long. The problem is that when the actual time difference is 10 seconds, the data I receive at the receiving end is delayed by almost 14-15 seconds. I am using the Windows OS. Kindly guide me.
Sorry for my weak English.
The sleep function will sleep for at least as long as the time you specify, but there is no guarantee that it won't sleep for longer. If you need an accurate interval, you will need to use some other mechanism.
If I understand correctly:
you have a thread that sends data (over the network? what is the source of the data?)
you slow down the sending rhythm using sleep
the received data (at the other end of the network) can be delayed much more (15 s instead of 10 s)
If the above describes what you are doing, your design has several flaws:
sleep is very imprecise; it will wait at least n seconds, but it may be more (especially if your system is loaded by other running apps).
networks introduce a buffering delay; you have no guarantee that your data will be sent immediately on the wire (usually it is not).
the trip itself introduces some delay (latency); if your protocol waits for an ACK from the receiving end, you should take that into account.
you should also consider the time necessary to read/build/retrieve the data to send and actually send it over the wire. Depending on what you are doing, this can be negligible or take several seconds...
If you give some more details, it will be easier to diagnose the source of the problem: sleep, as you believe (it is indeed a really poor timer), or some other part of your system.
If your dump is large, I would bet that the additional time comes from reading the data and sending it over the wire. You should measure the time consumed in the sending process (read the time before starting and after finishing the send).
If this is indeed the source of the additional time, you just have to subtract that time from the next time to wait.
Example: sending the previous block of data took 4 s, and the next block is due 10 s later; as you have already used 4 s, you just wait for 6 s.
sleep is still quite an imprecise timer, and obviously the above mechanism won't work if the sending time is larger than the delay between sendings, but you get the idea.
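A rough Windows sketch of that compensation, with sendBlock() standing in for whatever actually reads a block from the dump and transmits it:

    #include <windows.h>
    #include <iostream>

    // Hypothetical stand-in for reading the next block from the dump and sending it.
    static void sendBlock()
    {
        std::cout << "block sent" << std::endl;
    }

    // gapMs is the timestamp difference between this block and the previous one.
    static void pacedSend(DWORD gapMs)
    {
        DWORD start = GetTickCount();
        sendBlock();                          // reading + sending may itself take seconds
        DWORD spent = GetTickCount() - start;
        if (spent < gapMs)
            Sleep(gapMs - spent);             // only wait for the time that is still left
    }

    int main()
    {
        pacedSend(10000);   // e.g. a 10-second gap between timestamps
        return 0;
    }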
Correction: sleep is not as bad in a Windows environment as it is on Unix. The granularity of the Windows Sleep is a millisecond, while the granularity of the Unix sleep is a second. If you do not need high-precision timing (and if a network is involved, high-precision timing is out of reach anyway), sleep should be OK.
No modern multitasking OS's scheduler will guarantee exact timings to user applications.
You can try to assign 'realtime' priority to your app somehow, from the Windows Task Manager for instance, and see if it helps.
Another solution is to implement a 'controlled' sleep, i.e. sleep in a series of 500 ms chunks, checking the current timestamp between them. That way, if one of those calls sleeps for 1 s instead of 500 ms at some step, you will notice it and skip the additional sleep(500 ms).
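A small sketch of that 'controlled' sleep, assuming GetTickCount64() (Vista and later) is available:

    #include <windows.h>

    // Sleep in chunks of at most 500 ms and re-check the wall clock each time,
    // so one oversized Sleep() cannot accumulate into a large drift.
    void sleepUntil(ULONGLONG deadlineMs)     // deadline expressed in GetTickCount64() terms
    {
        for (;;)
        {
            ULONGLONG now = GetTickCount64();
            if (now >= deadlineMs)
                break;
            ULONGLONG remaining = deadlineMs - now;
            Sleep(remaining > 500 ? 500 : static_cast<DWORD>(remaining));
        }
    }

    // Usage: sleepUntil(GetTickCount64() + 10000);  // wait about 10 seconds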
Try out a Multimedia Timer. It is about as accurate as you can get on a Windows system. There is a good article on CodeProject about them.
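For illustration, a minimal periodic multimedia timer via the legacy timeSetEvent() API (winmm.lib); Microsoft now steers new code towards timer queues, so treat this as a sketch rather than a recommendation:

    #include <windows.h>
    #include <mmsystem.h>
    #include <iostream>
    #pragma comment(lib, "winmm.lib")

    // The callback runs on a worker thread owned by the multimedia timer service.
    void CALLBACK onTimer(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
    {
        std::cout << "multimedia timer fired" << std::endl;
    }

    int main()
    {
        timeBeginPeriod(1);                                     // request 1 ms resolution
        MMRESULT id = timeSetEvent(20, 1, onTimer, 0, TIME_PERIODIC);
        Sleep(200);                                             // let it fire ~10 times
        if (id != 0)
            timeKillEvent(id);
        timeEndPeriod(1);
        return 0;
    }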
The Sleep function can take longer than requested, but never less. Use WinAPI timer functions to get a function called back after an interval from now.
You could also use the Windows Task Scheduler, but that goes outside programmatic standalone options.

Concurrency question about program running in OS

Here is what I know about concurrency in an OS.
To run multiple tasks, the OS allocates a time slot to each task. While task A is running, the other tasks "sleep", and so on.
Here is my question:
I have a timer program that counts inactivity of the keyboard/mouse. If the inactivity continues for 15 minutes, a screen saver program pops up.
If the concurrency theory is as I stated above, won't the timer be inaccurate? Since each program running in the OS spends some time "sleeping", the timer program also has a chance to be "sleeping", but in the real world time does not stop.
You would use services from the OS to provide a timer; you would not try to implement it yourself. If code had to run simply to count time, we would still be in the dark ages as far as computing is concerned.
In most operating systems, your task will not only be put to sleep when its time slice has been used up, but also while it is waiting for I/O (which is much more common for most programs).
Like AnthonyWJones said, use the operating system's concept of the current time.
The OS kernel's time slices are much too short to introduce any noticeable inaccuracy for a screen saver.
I think your waiting process can be very simple:
1. activityTime = time of last keypress or mouse movement [from the OS]
2. now = current time [from the OS]
3. If now >= 15 minutes after activityTime, start the screensaver
4. Sleep for a few seconds and return to step 1
Because steps 1 and 2 use the OS and not some kind of running counter, you don't care if you get interrupted at any point during this activity.
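A Windows-specific sketch of that loop, assuming GetLastInputInfo() is an acceptable source for the "time of last keypress or mouse movement":

    #include <windows.h>

    // Returns once the user has been idle for at least idleLimitMs milliseconds.
    void waitForInactivity(DWORD idleLimitMs)
    {
        for (;;)
        {
            LASTINPUTINFO lii = { sizeof(LASTINPUTINFO), 0 };
            if (GetLastInputInfo(&lii))
            {
                DWORD idleMs = GetTickCount() - lii.dwTime;   // both are tick counts
                if (idleMs >= idleLimitMs)
                    return;                                   // start the screen saver here
            }
            Sleep(5000);    // a few seconds between checks; being preempted here is harmless
        }
    }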
This could be language-dependent. In Java, it's not a problem, and I suspect that all languages will "do the right thing" here. That's with the caveat that such timers are not extremely accurate anyway, and that usually you can only expect your timer to sleep at least as long as you specify, but possibly longer. That is, your thread might not be the active thread when the time runs out, and would therefore resume processing a little later.
See for example http://www.opengroup.org/onlinepubs/000095399/functions/sleep.html
The suspension time may be longer than requested due to the scheduling of other activity by the system.
The time you specify in sleep() is real (wall-clock) time, not the CPU time your process uses. (The CPU time is approximately 0 while your program sleeps.)