I have a windowless timer (no WM_TIMER message) which fires a callback function only once, when a given time period has elapsed. It is implemented with a SetTimer()/KillTimer() pair. The time periods are fairly small: 100-300 milliseconds.
Is it cheap enough (performance-wise) to call the SetTimer()/KillTimer() pair for every such short time interval?
What if I have 100 such timers, each periodically calling SetTimer()/KillTimer()? How many Windows timer objects can exist simultaneously in the system?
So the question is: should I use a bunch of such timer objects and rely on Windows implementing timers well, or create one Windows timer object that ticks every, say, 30 milliseconds, and subscribe all of the custom 100-300 millisecond one-shot timers to it?
Thanks
The problem with timer messages as you are trying to use them is that they are low-priority messages. Actually, they are fake messages. Timers are associated with an underlying kernel timer object - when the message loop detects the kernel timer is signalled, it simply marks the current thread's message queue with a flag indicating that the next call to GetMessage - WHEN THERE ARE NO OTHER MESSAGES TO PROCESS - should synthesise a WM_TIMER message just in time and return it.
With potentially lots of timer objects, it's not at all obvious that the system will fairly signal timer messages for all the timers equally, and any system load can entirely prevent the generation of WM_TIMER messages for long periods of time.
If you are in control of the message loop, you could maintain your own list of timer events (along with GetTickCount timestamps for when they should occur) and use MsgWaitForMultipleObjects instead of GetMessage to wait for messages. Use the timeout parameter to pass the interval - from now - until the next timer should be signalled, so the wait returns each time you have a timer to process.
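For illustration, a rough sketch of such a loop (RunMessageLoopWithTimers and FireDueTimers are made-up helper names, not Win32 APIs; the deadline list itself is assumed to live behind FireDueTimers):

#include <windows.h>

// Assumed helper: runs whatever callbacks are due and returns the next deadline (GetTickCount64-based).
ULONGLONG FireDueTimers(ULONGLONG now);

void RunMessageLoopWithTimers()
{
    ULONGLONG nextDeadline = FireDueTimers(GetTickCount64());
    for (;;) {
        ULONGLONG now = GetTickCount64();
        DWORD timeout = nextDeadline > now ? (DWORD)(nextDeadline - now) : 0;

        DWORD r = MsgWaitForMultipleObjects(0, nullptr, FALSE, timeout, QS_ALLINPUT);
        if (r == WAIT_OBJECT_0) {                        // a message (or input) arrived
            MSG msg;
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT) return;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
        nextDeadline = FireDueTimers(GetTickCount64());  // on WAIT_TIMEOUT, or after draining messages
    }
}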
And/or you could use waitable timers - either on a GUI thread with MsgWaitForMultipleObjects, or just on a worker thread - to access the lower-level timing functionality directly.
The biggest SetTimer() pitfall is that a timer is actually a USER object (despite not being listed in the MSDN USER objects list), hence it falls under the Windows USER object limits - by default a maximum of 10,000 objects per process and 65,535 objects per session (all running processes).
This can easily be proven with a simple test - just call SetTimer() (the parameters don't matter; windowed and windowless timers behave the same way) and watch the USER objects count increase in Task Manager.
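A minimal sketch of that test (my code, not the poster's), using GetGuiResources() to read the same per-process counter that Task Manager shows:

#include <windows.h>
#include <cstdio>

int main()
{
    printf("USER objects before: %lu\n", GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS));
    for (int i = 0; i < 100; ++i)
        SetTimer(nullptr, 0, 1000, nullptr);   // windowless timers, deliberately never killed
    printf("USER objects after:  %lu\n", GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS));
    return 0;
}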
Also see the ReactOS ntuser.h source and this article. Both state that TYPE_TIMER is one of the USER handle types.
So beware - creating a bunch of timers could exhaust your system resources and make your process crash, or even make the entire system unresponsive.
Here are the details that I feel you're actually after while asking this question:
SetTimer() will first scan the non-kernel timer list (doubly linked list) to see if the timer ID already exists. If the timer exists, it will simply be reset. If not, an HMAllocObject call occurs and creates space for the structure. The timer struct will then be populated and linked to the head of the list.
This will be the total overhead for creating each of your 100 timers. That's exactly what the routine does, save for checking the elapse (dwElapsed) parameter against its minimum and maximum limits.
As far as timer expiration goes, the timer list is scanned at (approximately) the interval of the smallest timer duration seen during the last scan. (What really happens is that a kernel timer is set to the duration of the smallest user timer found, and this kernel timer wakes the thread that checks for user timer expirations, which in turn wakes the respective threads by setting a flag in their message queue status.)
For each timer in the list, the delta between the last time (in ms) the timer list was scanned and the current time (in ms) is subtracted from the timer's remaining time. When one is due (<= 0 remaining), it's flagged as "ready" in its own struct, and a pointer to the thread info is read from the timer struct and used to wake the respective thread by setting the thread's QS_TIMER flag. It then increments your message queue's CurrentTimersReady counter. That's all timer expiration does. No actual messages are posted.
When your main message pump calls GetMessage() and no other messages are available, GetMessage() checks for QS_TIMER in your thread's wake bits and, if set, generates a WM_TIMER message by scanning the full user timer list for the smallest timer flagged READY that is associated with your thread id. It then decrements your thread's CurrentTimersReady count and, if it reaches 0, clears the timer wake bit. Your next call to GetMessage() will cause the same thing to occur until all ready timers are exhausted.
One-shot timers stay instantiated. When they expire, they're flagged as WAITING. The next call to SetTimer() with the same timer ID will simply update and re-activate the original. Both one-shot and periodic timers reset themselves and only die with KillTimer or when your thread or window is destroyed.
The Windows implementation is very basic, and I think it'd be trivial for you to write a more performant implementation.
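For example, the "one master timer" approach from the question could look roughly like this (a sketch; TimerHub, Schedule and OnTick are illustrative names, and the single SetTimer()/WM_TIMER plumbing that calls OnTick() every ~30 ms is left out):

#include <windows.h>
#include <functional>
#include <map>

class TimerHub {
public:
    // Register a one-shot callback to fire roughly delayMs from now.
    void Schedule(DWORD delayMs, std::function<void()> cb) {
        pending_.emplace(GetTickCount64() + delayMs, std::move(cb));
    }
    // Call this from the WM_TIMER handler of a single ~30 ms periodic timer.
    void OnTick() {
        const ULONGLONG now = GetTickCount64();
        while (!pending_.empty() && pending_.begin()->first <= now) {
            auto cb = std::move(pending_.begin()->second);
            pending_.erase(pending_.begin());
            cb();                                   // fire the due one-shot callback
        }
    }
private:
    std::multimap<ULONGLONG, std::function<void()>> pending_;  // deadline -> callback, kept sorted
};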
Related
I have multiple objects (Object1, Object2 and Object3) which MAY want to utilize a callback. If an object decides it wants to be registered for a periodic callback, it will use a 30-second repeat rate. Each object chooses when it registers for the callback, and from that point on it wants the callback at that fixed 30-second interval.
If I wanted to give each object its own internal timer (such as a timer on a separate thread) this would be a simple problem. However, each timer would need to be on a separate thread, and the thread count would grow too much as my object count grows.
So for example:
At T=10 seconds into runtime, Object 1 registers for a callback. Since the callback occurs every 30 seconds, its fire events will be at T=40, then T=70, T=100, etc.
Say 5 seconds later (T=15), Object 2 registers for a callback, meaning its calls are at T=45, T=75, T=105, etc.
Lastly, 1 second after Object 2 (T=16), Object 3 registers for a callback. Its callback should be invoked at T=46, T=76, etc.
A dirty solution I could see is for every object to calculate its delta from the first registered object. So Object 1's delta is 0, Object 2's is 5 and Object 3's is 6. Then, in a constantly running loop, once the 30 seconds have elapsed I know that Object 1's callback can run, 5 seconds from that point I can call Object 2's callback, and so on.
I don't like that this is essentially a busy wait, since a while loop must constantly be running. I guess sleep calls combined with semaphores wouldn't be much different.
Another thought I had was finding the lowest common multiple of the fire times. For example, if I knew that an event might have to fire every 3 seconds, I would keep track of that and only check at that granularity.
I think essentially what I am trying to make is some sort of simple scheduler? I'm sure I am hardly the first person to do this.
I am trying to come up with a performant solution. A while loop or a ton of timers on their own threads would make this easy, but those are not good solutions.
Any ideas? Is there a name for this design?
Normally you would use a priority queue, a heap or similar to manage your timed callbacks with a single timer. You check which callback needs to be called next, and that is the time you set for the timer to wake you up.
But if all callbacks use a constant 30 s repeat, then you can just use a plain queue. New callbacks are added to the end as a pair of callback and (absolute) timestamp, and the next callback to call will always be at the front. Every time you call a callback, you add it back to the queue with its timestamp increased by 30 s.
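A minimal single-threaded sketch of that idea in standard C++ (Scheduler, add and run are made-up names; a real version would need locking if callbacks are registered from other threads):

#include <chrono>
#include <deque>
#include <functional>
#include <thread>
#include <utility>

using Clock = std::chrono::steady_clock;

struct Scheduler {
    // The front is always the next callback due, because every entry repeats at the same 30 s period.
    std::deque<std::pair<Clock::time_point, std::function<void()>>> q;

    void add(std::function<void()> cb) {
        q.emplace_back(Clock::now() + std::chrono::seconds(30), std::move(cb));
    }
    void run() {
        while (!q.empty()) {
            auto item = q.front();                                 // <deadline, callback>
            q.pop_front();
            std::this_thread::sleep_until(item.first);             // sleep, no busy wait
            item.second();                                         // invoke the callback
            q.emplace_back(item.first + std::chrono::seconds(30),  // re-arm 30 s after it was due
                           std::move(item.second));
        }
    }
};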
I want to implement an algorithm that awaits some events and handles them after some delay. Each event has its own predefined delay. The handler may be executed in a separate thread. Issues such as CPU throttling, host overload, etc. may be ignored - it's not intended to be a precise real-time system.
Example.
At moment N, an event with a delay of 1 second arrives. We want to handle it at moment N + 1 sec.
At moment N + 0.5 sec, another event with a delay of 0.3 seconds arrives. We want to handle it at moment N + 0.8 sec.
Approaches.
The only straightforward approach that comes to mind is a loop with the minimal possible delay between iterations, say 10 ms, that checks whether any event on our timeline should be handled now. But it's not a good idea, since the delays may vary in scale from 10 ms to 10 minutes.
Another approach is to have a single thread that sleeps between events. But I can't figure out how to forcefully "wake" it when there is a new event that should be handled between now and the next scheduled wake up.
Also, it's possible to use a thread per event and just sleep, but there may be thousands of simultaneous events, which may effectively lead to running out of threads.
The solution can be language-agnostic, but I would prefer a C++ standard library solution.
Another approach is to have a single thread that sleeps between events. But I can't figure out how to forcefully "wake" it when there is a new event that should be handled between now and the next scheduled wake up.
I suppose the solution to these problems is, at least on *nix systems, poll or epoll with the help of a timer. They allow you to make the thread sleep until some given event occurs. The event may be data appearing on stdin or a timer timeout. Since the question was about the general algorithm/idea and real code would take a lot of space, I am giving just pseudocode:
epoll = create_epoll();
timers = vector<timer>{};
while (true) {
    event = epoll.wait_for_event(timers);
    if (event.is_timer_timeout()) {
        t = timers.find_timed_out();
        t.handle_event();
        timers.erase(t);
    } else if (event.is_incoming_stdin_data()) {
        data = stdin.read();
        timers.push_back(create_timer(data));
    }
}
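For a more concrete (Linux-only) picture of the same idea, here is a small sketch using timerfd and epoll, so the thread really sleeps until either stdin data or a timer expiry arrives (error handling omitted; in a real program you would create one timerfd per pending delay, only one is shown):

#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    int ep = epoll_create1(0);

    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    itimerspec ts{};                          // one-shot: 300 ms from now (it_interval stays 0)
    ts.it_value.tv_nsec = 300 * 1000 * 1000;
    timerfd_settime(tfd, 0, &ts, nullptr);

    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = tfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);

    ev.data.fd = STDIN_FILENO;                // new events arrive on stdin in this example
    epoll_ctl(ep, EPOLL_CTL_ADD, STDIN_FILENO, &ev);

    for (;;) {
        epoll_event out;
        if (epoll_wait(ep, &out, 1, -1) <= 0) // sleeps until a timer expires or stdin has data
            break;
        if (out.data.fd == tfd) {
            uint64_t expirations;
            read(tfd, &expirations, sizeof expirations);
            printf("timer fired\n");          // t.handle_event()
        } else {
            char buf[256];
            if (read(STDIN_FILENO, buf, sizeof buf) <= 0)
                break;
            // parse buf, create another timerfd and add it to epoll here
        }
    }
    close(tfd);
    close(ep);
    return 0;
}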
Two threads that share a priority queue.
Arrivals thread: Wait for an arrival. When an event arrives, calculate the time at which its handler should run. Add the handler to the queue with its run time as the priority (the top of the queue will then be the next event to be handled).
Handler thread: If the current time has reached the run time of the handler at the top of the queue, pop it and run it. Otherwise, sleep for the clock resolution and check again.
Note: check whether your queue is thread-safe. If not, you will have to use a mutex.
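A hedged sketch of that design in standard C++ (schedule and handlerLoop are my names): the queue is guarded by a mutex, and a condition variable both replaces the fixed-resolution sleep and wakes the handler thread early when a new, earlier event arrives:

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;
struct Item { Clock::time_point when; std::function<void()> handler; };
struct Later { bool operator()(const Item& a, const Item& b) const { return a.when > b.when; } };

std::priority_queue<Item, std::vector<Item>, Later> events;   // top() = earliest deadline
std::mutex m;
std::condition_variable cv;

// Arrivals thread: call this whenever an event arrives.
void schedule(std::chrono::milliseconds delay, std::function<void()> handler)
{
    std::lock_guard<std::mutex> lk(m);
    events.push({Clock::now() + delay, std::move(handler)});
    cv.notify_one();                     // wake the handler thread so it re-checks the top
}

// Handler thread body.
void handlerLoop()
{
    std::unique_lock<std::mutex> lk(m);
    for (;;) {
        if (events.empty()) { cv.wait(lk); continue; }
        auto next = events.top().when;
        if (cv.wait_until(lk, next) == std::cv_status::timeout) {
            auto item = events.top();    // deadline reached: run it
            events.pop();
            lk.unlock();
            item.handler();              // run outside the lock
            lk.lock();
        }
        // woken early (new event arrived): loop and re-check the new top
    }
}

The arrivals thread simply calls schedule(); the handler thread runs handlerLoop().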
This looks simple, but there are a lot of gotchas waiting for the inexperienced. So, I would not recommend coding this from scratch; it is better to use a library. The classic is boost::asio. However, it is beginning to show its age and has way more bells and whistles than are needed. So, personally, I use something more lightweight, coded in C++17 - a non-blocking event waiter class I wrote that you can get from https://github.com/JamesBremner/await. Notice the sample application using this class, which does most of what you require: https://github.com/JamesBremner/await/wiki/Event-Server
When I create a QTimer object in Qt 5, and start it using the start() member function, is a separate thread created that keeps track of the time and calls the timeout() function at regular intervals?
For example,
QTimer *timer = new QTimer;
timer->start(10);
connect(timer, SIGNAL(timeout()), someObject, SLOT(someFunction()));
Here, how does the program know when timeout() occurs? I think it would have to run in a separate thread, as I don't see how a sequential program could keep track of the time and continue its execution simultaneously. However, I have been unable to find anything in the Qt documentation or anywhere else to confirm this.
I have read the official documentation, and certain questions on StackOverflow such as this and this seem very related, but I could not get my answer through them.
Could anyone explain the mechanism through which a QTimer object works?
On searching further, I found this answer by Bill, which mentions that
Events are delivered asynchronously by the OS, which is why it appears that there's something else going on. There is, but not in your program.
Does it mean that timeout() is handled by the OS? Is there some hardware that keeps track of the time and sends interrupts at appropriate intervals? But if this is the case, since many timers can run simultaneously and independently, how can each timer be tracked separately?
What is the mechanism?
Thank you.
When I create a QTimer object in Qt 5, and start it using the start() member function, is a separate thread created that keeps track of the time and calls the timeout() function at regular intervals?
No; creating a separate thread would be expensive and it isn't necessary, so that isn't how QTimer is implemented.
Here, how does the program know when timeout() occurs?
The QTimer::start() method can call a system time function (e.g. gettimeofday() or similar) to find out (to within a few milliseconds) what the time was when start() was called. It can then add ten milliseconds (or whatever value you specified) to that time, and now it has a record indicating when the timeout() signal is supposed to be emitted next.
So having that information, what does it then do to make sure that happens?
The key fact to know is that QTimer timeout-signal emission only works if/when your Qt program is executing inside Qt's event loop. Just about every Qt program will have something like this, usually near the bottom of its main() function:
QApplication app(argc, argv);
[...]
app.exec();
Note that in a typical application, almost all of the application's time will be spent inside that exec() call; that is to say, the app.exec() call will not return until it's time for the application to exit.
So what is going on inside that exec() call while your program is running? With a big complex library like Qt it's necessarily complicated, but it's not too much of a simplification to say that it's running an event loop that looks conceptually something like this:
while(1)
{
    SleepUntilThereIsSomethingToDo();   // not a real function name!
    DoTheThingsThatNeedDoingNow();      // this is also a name I made up
    if (timeToQuit) break;
}
So when your app is idle, the process will be put to sleep inside the SleepUntilThereIsSomethingToDo() call, but as soon as an event arrives that needs handling (e.g. the user moves the mouse, presses a key, data arrives on a socket, etc.), SleepUntilThereIsSomethingToDo() will return, and then the code to respond to that event will be executed, resulting in the appropriate action such as widgets updating or the timeout() signal being emitted.
So how does SleepUntilThereIsSomethingToDo() know when it is time to wake up and return? This will vary greatly depending on what OS you are running on, since different OS's have different APIs for handling this sort of thing, but a classic UNIX-y way to implement such a function would be with the POSIX select() call:
int select(int nfds,
           fd_set *readfds,
           fd_set *writefds,
           fd_set *exceptfds,
           struct timeval *timeout);
Note that select() takes three different fd_set arguments, each of which can specify a number of file descriptors; by passing in the appropriate fd_set objects you can cause select() to wake up the instant an I/O operation becomes possible on any one of a set of file descriptors you care to monitor, so that your program can then handle the I/O without delay. However, the interesting part for us is the final argument, which is a timeout argument. In particular, you can pass in a struct timeval object here that says to select(): "If no I/O events have occurred after (this many) microseconds, then you should just give up and return anyway".
That turns out to be very useful, because by using that parameter, the SleepUntilThereIsSomethingToDo() function can do something like this (pseudocode):
void SleepUntilThereIsSomethingToDo()
{
    struct timeval now = gettimeofday();    // get the current time
    struct timeval nextQTimerTime = [...];  // time at which we want to emit a timeout() signal, as was calculated earlier inside QTimer::start()
    struct timeval maxSleepTimeInterval = (nextQTimerTime - now);
    select([...], &maxSleepTimeInterval);   // sleep until the appointed time (or until I/O arrives, whichever comes first)
}

void DoTheThingsThatNeedDoingNow()
{
    // Is it time to emit the timeout() signal yet?
    struct timeval now = gettimeofday();
    if (now >= nextQTimerTime) emit timeout();
    [... do any other stuff that might need doing as well ...]
}
Hopefully that makes sense, and you can see how the event loop uses select()'s timeout argument to allow it to wake up and emit the timeout() signal at (approximately) the time that it had previously calculated when you called start().
Btw if the app has more than one QTimer active simultaneously, that's no problem; in that case, SleepUntilThereIsSomethingToDo() just needs to iterate over all of the active QTimers to find the one with the smallest next-timeout-time stamp, and use only that minimum timestamp for its calculation of the maximum time-interval that select() should be allowed to sleep for. Then after select() returns, DoTheThingsThatNeedDoingNow() also iterates over the active timers and emits a timeout signal only for those whose next-timeout-time stamp is not greater than the current time. The event-loop repeats (as quickly or as slowly as necessary) to give a semblance of multithreaded behavior without actually requiring multiple threads.
Looking at the documentation about timers and at the source code of QTimer and QObject, we can see that the timer runs in the thread/event loop that the object is assigned to. From the docs:
For QTimer to work, you must have an event loop in your application; that is, you must call QCoreApplication::exec() somewhere. Timer events will be delivered only while the event loop is running.
In multithreaded applications, you can use QTimer in any thread that has an event loop. To start an event loop from a non-GUI thread, use QThread::exec(). Qt uses the timer's thread affinity to determine which thread will emit the timeout() signal. Because of this, you must start and stop the timer in its thread; it is not possible to start a timer from another thread.
Internally, QTimer simply uses the QObject::startTimer mechanism to fire after a certain amount of time; that mechanism in turn tells the thread it is running on to deliver a timer event after the given amount of time.
So your program is fine to run continuously and keep track of the timers, as long as you don't block your event queue. If you are worried about your timers not being 100% accurate, try to move long-running callbacks out of the event queue into their own thread, or use a different event queue for the timers.
A QTimer object registers itself with the event dispatcher (QAbstractEventDispatcher), which then takes care of sending QTimerEvent events every time a particular registered QTimer times out. For example, on GNU/Linux there is a private implementation of QAbstractEventDispatcher called QEventDispatcherUNIXPrivate that does the calculations using the platform's time API. The QTimerEvent is sent from QEventDispatcherUNIXPrivate into the event loop queue of the thread where the QTimer object belongs, i.e. was created.
QEventDispatcherUNIXPrivate doesn't fire a QTimerEvent because of some OS system event or clock, but because it periodically checks the timeouts when processEvents is called by the event loop of the thread where the QTimer lives. See here: https://code.woboq.org/qt5/qtbase/src/corelib/kernel/qeventdispatcher_unix.cpp.html#_ZN27QEventDispatcherUNIXPrivateC1Ev
I have implemented my own Timer/Callback classes in C/C++ on Linux, wherein a process requiring a timer to fire either ONE_SHOT or PERIODICally instantiates a timer, instantiates a callback object, and associates the callback with the previously created Timer object. The Callback class implements a triggered() method, and when the timer fires at the appointed timeout, the triggered() method is executed. (Nothing new in terms of functionality.) The way my Timer class works is that I maintain a min-heap of Timer objects and thus always know which timer will fire next. There is a timer task (TimerTask) which itself runs as a separate process (created using fork()) and shares the memory pools from which the Timer objects and the Callback objects are created. The TimerTask has a main while (1) loop which keeps checking whether the root of the Timer object min-heap has a time since epoch that is less than or equal to the current time since epoch. If so, the timer at the root has "fired".
Currently, when the timer fires, the callback is executed in the TimerTask process context. I am changing this behavior to run the callback processing in other tasks (sending them the information that the Timer object has fired via a POSIX message queue - for example, sending the message to the process that created the Timer object), but my question to SO is: what are the principles behind this choice? Executing a callback in the TimerTask context seems like a bad idea if I expect to service a large number of timers. It seems like a good idea to dispatch the callback processing to other processes.
What are the general rules of thumb for deciding which task/process should run the callback? My intention is to process the callback in the receiving task using a pthread, like so:
// pthread start routines must have this signature (return void*)
void *threadFunctionForTimerCallback (void *arg)
{
    while (1)
    {
        // msg_fd, buffer and attr are assumed to be set up elsewhere (mq_open, mq_getattr)
        if ((mq_receive (msg_fd, buffer, attr.mq_msgsize, NULL)) == -1)
            exit (-1);
        else
            printf ("Message received %s\n", buffer);
    }
}
Would this be a reasonable solution? But never mind the actual way of receiving the message from the TimerTask (threads or any other method - it doesn't matter); any discussion of and insight into the problem of choosing which task handles the callback is appreciated.
There is no need to busy-spin in a while(1) loop to implement a timer. One traditional and robust way of implementing timers has been to use a min-heap, as you do, to organize the times to expiry, and then pass the time until the next timer expiry as the timeout argument to select() or epoll(). Using the select() call, a thread can watch for file descriptor readiness, signals and timers all at the same time.
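As a small illustration of that approach (sleep_until_next_timer is an invented helper; next_expiry_ms would come from the root of your min-heap):

#include <sys/select.h>
#include <sys/time.h>

// Sleep until the next timer is due instead of spinning; add your descriptors to
// the fd_sets if you also want to wake up on incoming messages.
void sleep_until_next_timer(long long next_expiry_ms, long long now_ms)
{
    long long delta = next_expiry_ms > now_ms ? next_expiry_ms - now_ms : 0;
    struct timeval tv;
    tv.tv_sec  = delta / 1000;
    tv.tv_usec = (delta % 1000) * 1000;
    select(0, nullptr, nullptr, nullptr, &tv);
}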
Recent kernels support timerfd, which delivers timer expiry events as file-descriptor readiness for read, which again can be handled using select()/epoll(). It obviates the need to maintain the min-heap; however, it requires a system call for each timer add/modify/delete.
Having timer code in another process requires processes to use inter-process communication mechanisms, thereby introducing more complexity, so it can actually make the system less robust, especially when the processes communicate via shared memory and can corrupt it.
Anyway, one can use Unix domain sockets to send messages back and forth between communicating processes on the same host. Again, select()/epoll() are your best friends. Or a higher-level framework can be used for message passing, such as 0MQ.
I'm importing a portion of existing code into my Qt app and noticed a sleep function in there. I see that this type of function has no place in event programming. What should I do instead?
UPDATE: After some thought and feedback, I would say the answer is: only call sleep outside the GUI main thread, and if you need to wait in the GUI thread, use processEvents() or an event loop; this will prevent the GUI from freezing.
It isn't pretty but I found this in the Qt mailing list archives:
The sleep method of QThread is protected, but you can expose it like so:
class SleeperThread : public QThread
{
public:
    static void msleep(unsigned long msecs)
    {
        QThread::msleep(msecs);
    }
};
Then just call:
SleeperThread::msleep(1000);
from any thread.
However, a more elegant solution would be to refactor your code to use a QTimer - this might require you to save some state so you know what to do when the timer goes off.
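For example, from inside one of your QObject-derived classes (continueWork is an illustrative slot of your own, not Qt API):

#include <QTimer>

// Instead of sleeping for a second and then doing the work, schedule the
// follow-up to run a second later and return to the event loop immediately:
QTimer::singleShot(1000, this, SLOT(continueWork()));

// Or, with Qt 5.4 and later, a lambda avoids declaring a slot:
QTimer::singleShot(1000, this, []() { /* ...the code that used to follow the sleep... */ });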
I don't recommend sleeping in an event-based system, but if you want to...
You can use a wait condition; that way you can always interrupt the "sleep" if necessary.
//...
unsigned long waitTime = 1000;        // how long to "sleep", in milliseconds
QMutex dummy;
dummy.lock();
QWaitCondition waitCondition;
waitCondition.wait(&dummy, waitTime); // returns early if another thread calls wakeOne()/wakeAll()
dummy.unlock();
//...
The reason why sleep is a bad idea in event-based programming is that event-based programming is effectively a form of non-preemptive multitasking. By calling sleep, you prevent any other event from becoming active, and therefore block the processing of the thread.
In a request-response scenario for UDP packets, send the request and immediately wait for the response. Qt has good socket APIs which will ensure that the socket does not block while waiting for the event. The event will come when it comes. In your case the QUdpSocket::readyRead() signal is your friend.
If you want to schedule an event for some point in time in the future, use QTimer. This will ensure that other events are not blocked.
It is not necessary to break down the events at all. All I needed to do was call QApplication::processEvents() where sleep() was, and this prevents the GUI from freezing.
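As a sketch, that idea wrapped in a reusable helper (waitWithoutFreezing is my name for it): pump the event loop until the interval has elapsed, so repaints and input keep being processed:

#include <QCoreApplication>
#include <QElapsedTimer>
#include <QEventLoop>

void waitWithoutFreezing(int msecs)
{
    QElapsedTimer timer;
    timer.start();
    while (timer.elapsed() < msecs)
        // Process pending events, waiting up to 20 ms for new ones on each pass.
        QCoreApplication::processEvents(QEventLoop::WaitForMoreEvents, 20);
}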
I don't know how Qt handles events internally, but on most systems, at the lowest level, an application's life goes like this: the main thread code is basically a loop (the message loop) in which, at each iteration, the application calls a function that gives it a new message; usually that function is blocking, i.e. if there are no messages the function does not return and the application is stopped.
Each time the function returns, the application has a new message to process, which usually has some recipient (the window to which it is sent), a meaning (the message code, e.g. the mouse pointer has been moved) and some additional data (e.g. the mouse has been moved to coords 24, 12).
Now, the application has to process the message; the OS or the GUI toolkit usually does this under the hood, so with some black magic the message is dispatched to its recipient and the correct event handler is executed. When the event handler returns, the internal function that called the event handler returns, as does the one that called it, and so on, until control comes back to the main loop, which will now call the magic message-retrieving function again to get another message. This cycle goes on until the application terminates.
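In concrete terms, the classic Win32 form of that loop looks roughly like this (a sketch; Qt's exec() wraps an equivalent loop on each platform):

#include <windows.h>

int RunMessageLoop()
{
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {   // blocks when there is nothing to do
        TranslateMessage(&msg);
        DispatchMessage(&msg);                      // hands the message to the recipient's window procedure
    }
    return (int)msg.wParam;                         // exit code carried by WM_QUIT
}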
Now, I wrote all this to make you understand why sleep is bad in an event-driven GUI application: if you notice, while a message is being processed no other messages can be processed, since the main thread is busy running your event handler, which, after all, is just a function called by the message loop. So, if you make your event handler sleep, the message loop will also sleep, which means that in the meantime the application won't receive and process any other messages, including the ones that make your window repaint, so your application will appear hung from the user's perspective.
Long story short: don't use sleep unless you have to sleep for very short times (a few hundred milliseconds at most), otherwise the GUI will become unresponsive. You have several options to replace the sleeps: you can use a timer (QTimer), but it may require a lot of bookkeeping between one timer event and the next. A popular alternative is to start a separate worker thread: it would just handle the UDP communication and, being separate from the main thread, it would not cause any problems by sleeping when necessary. Obviously you must take care to protect the data shared between the threads with mutexes, and be careful to avoid race conditions and all the other kinds of problems that occur with multithreading.