I want to implement an algorithm that waits for events and handles them after some delay. Each event has its own predefined delay. The handler may be executed in a separate thread. Issues such as CPU throttling, host overload, etc. may be ignored - it's not intended to be a precise real-time system.
Example.
At moment N, an event arrives with a delay of 1 second. We want to handle it at moment N + 1 sec.
At moment N + 0.5 sec, another event arrives with a delay of 0.3 seconds. We want to handle it at moment N + 0.8 sec.
Approaches.
The only straightforward approach that comes to my mind is to use a loop with the minimal possible delay between iterations, say every 10 ms, and check whether any event on our timeline should be handled now. But it's not a good idea, since the delays may vary on a scale from 10 ms to 10 minutes.
Another approach is to have a single thread that sleeps between events. But I can't figure out how to forcefully "wake" it when there is a new event that should be handled between now and the next scheduled wake up.
It's also possible to use a thread per event and just sleep, but there may be thousands of simultaneous events, which could effectively lead to running out of threads.
The solution can be language-agnostic, but I'd prefer a C++ standard library solution.
I suppose the solution to these problems, at least on *nix systems, is poll or epoll with the help of a timer. It allows you to make the thread sleep until some given event occurs. The given event may be data appearing on stdin or a timer timeout. Since the question was about the general algorithm/idea and real code would take a lot of space, I am giving just pseudocode:
epoll = create_epoll();
timers = vector<timer>{};
while (true) {
    event = epoll.wait_for_event(timers);
    if (event.is_timer_timeout()) {
        t = timers.find_timed_out();
        t.handle_event();
        timers.erase(t);
    } else if (event.is_incoming_stdin_data()) {
        data = stdin.read();
        timers.push_back(create_timer(data));
    }
}
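For reference, here is a concrete Linux sketch of the same idea using timerfd_create() and epoll. Error handling is omitted, and the assumption that each line read from stdin is just a delay in milliseconds is purely illustrative:

#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main() {
    int ep = epoll_create1(0);

    // Watch stdin for new "events" (here: one delay in milliseconds per line).
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = STDIN_FILENO;
    epoll_ctl(ep, EPOLL_CTL_ADD, STDIN_FILENO, &ev);

    while (true) {
        epoll_event ready{};
        if (epoll_wait(ep, &ready, 1, -1) <= 0)     // sleep until stdin or a timer fires
            continue;

        if (ready.data.fd == STDIN_FILENO) {
            char line[64] = {};
            ssize_t n = read(STDIN_FILENO, line, sizeof line - 1);
            if (n <= 0) break;
            long delay_ms = std::atol(line);

            // One timerfd per pending event; it becomes readable when it expires.
            int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
            itimerspec spec{};
            spec.it_value.tv_sec  = delay_ms / 1000;
            spec.it_value.tv_nsec = (delay_ms % 1000) * 1000000L;
            timerfd_settime(tfd, 0, &spec, nullptr);

            epoll_event tev{};
            tev.events = EPOLLIN;
            tev.data.fd = tfd;
            epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &tev);
        } else {
            uint64_t expirations;
            read(ready.data.fd, &expirations, sizeof expirations);   // consume the expiration
            std::printf("timer %d fired\n", ready.data.fd);          // handle_event()
            close(ready.data.fd);                                    // timers.erase(t)
        }
    }
}

Each pending event gets its own timer file descriptor, so the epoll_wait() call is the only place the thread sleeps.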
Two threads that share a priority queue.
Arrivals thread: wait for an arrival. When an event arrives, calculate the time at which its handler should run, then add the handler to the queue with that time as its priority (the top of the queue will be the next event to be handled).
Handler thread: if now has reached the time of the handler at the top of the queue, run the handler. Sleep for the clock resolution between checks.
Note: check if your queue is thread safe. If not, then you will have to use a mutex.
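A minimal sketch of that scheme in standard C++ (names are illustrative; the handler thread's "clock resolution" sleep is a fixed 10 ms here):

#include <chrono>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Scheduled {
    Clock::time_point when;
    std::function<void()> handler;
    bool operator>(const Scheduled& other) const { return when > other.when; }
};

std::mutex g_mutex;
std::priority_queue<Scheduled, std::vector<Scheduled>, std::greater<>> g_queue;

// Arrivals thread calls this when an event arrives.
void schedule(std::chrono::milliseconds delay, std::function<void()> handler) {
    std::lock_guard<std::mutex> lock(g_mutex);   // the queue itself is not thread safe
    g_queue.push({Clock::now() + delay, std::move(handler)});
}

// Handler thread: run whatever is due, then sleep for one "tick".
void handler_loop() {
    while (true) {
        std::vector<Scheduled> due;
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            while (!g_queue.empty() && g_queue.top().when <= Clock::now()) {
                due.push_back(g_queue.top());
                g_queue.pop();
            }
        }
        for (auto& job : due) job.handler();     // run outside the lock
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread handler(handler_loop);
    schedule(std::chrono::milliseconds(1000), [] { std::puts("event A handled"); });
    schedule(std::chrono::milliseconds(300),  [] { std::puts("event B handled"); });
    handler.join();                              // the sketch runs forever
}

Replacing the fixed sleep with a std::condition_variable that the arrivals thread notifies (and waiting with wait_until on the earliest deadline) removes the polling entirely and also gives you a way to wake the sleeping thread early when a sooner event arrives.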
This looks simple, but there are a lot of gotchas waiting for the inexperienced. So I would not recommend coding this from scratch; it is better to use a library. The classic is boost::asio. However, it is beginning to show its age and has far more bells and whistles than are needed here. So, personally, I use something more lightweight, coded in C++17 - a non-blocking event waiter class I wrote that you can get from https://github.com/JamesBremner/await. Notice the sample application using this class, which does most of what you require: https://github.com/JamesBremner/await/wiki/Event-Server
Related
I'm developing a C++14 Windows DLL on VS2015 that runs on all Windows versions >= XP.
TL;DR
Is there a limit to the number of events, created with CreateEvent, with different names of course?
Background
I'm writing a thread pool class.
The class interface is simple:
void AddTask(std::function<void()> task);
A task is added to a queue of tasks, and waiting workers (a vector<thread>) execute the task when available.
Requirement
Wait (block) for a task for a little bit before continuing with the flow. Meaning, some users of ThreadPool, after calling AddTask, may want to wait for a while (say 1 second) for the task to end before continuing with the flow. If the task is not done yet, they will continue with the flow anyway.
Problem
The ThreadPool class cannot provide a Wait interface. Not its responsibility.
Solution
ThreadPool will call SetEvent when a task is done.
Users of ThreadPool will wait (or not, depending on their needs) for the event to be signaled.
So I've changed the return value of ThreadPool::AddTask from void to int, where the int is a unique task ID which is essentially the name of the event to be signaled when the task is done.
Question
I don't expect more than ~500 tasks, but I'm afraid that creating hundreds of events is not possible, or is simply bad practice.
So is there a limit? Or a better approach?
Of course there is a limit (if nothing else, at some point the system runs out of memory).
In reality, the limit is around 16 million per process.
You can read more details here: https://blogs.technet.microsoft.com/markrussinovich/2009/09/29/pushing-the-limits-of-windows-handles/
You're asking the wrong question. Fortunately you gave enough background to answer your real question. But before we get to that:
First, if you're asking what's the maximum number of events a process can open or a system can hold, you're probably doing something very very wrong. Same goes for asking what's the maximum number of files a process can open or what's the maximum number of threads a process can create.
You can create 50, 100, 200, 500, 1000... but where does it stop? If you're even considering creating that many of them that you have to ask about a limit, you're on the wrong track.
Second, the answer depends on too many implementation details: OS version, amount of RAM installed, registry settings, and maybe more. Other programs running also affect that "limit".
Third, even if you knew the limit - even if you could somehow calculate it at runtime based on all the relevant factors - it wouldn't allow you to do anything that you can't already do now.
Let's say you find out the limit is L and you have created exactly L events by now. Another task comes in. What do you do? Throw away the task? Execute the task without signaling an event? Wait until there are fewer than L events and only then create an event and start executing the task? Crash the process?
Whatever you decide you can do it just the same when CreateEvent fails. All of this is completely pointless. And this is yet another indication that you're asking the wrong question.
But maybe the most wrong thing you're doing is saying "the thread pool class can't provide wait because it's not its responsibility, so let's have the thread pool class provide an event for each task that the thread pool will signal when the task ends" (in paraphrase).
It looks like by the end of the sentence you forgot the premise from the beginning: It's not the thread pool's responsibility!
If you want to wait for the task to finish have the task itself signal when it's done. There's no reason to complicate the thread pool because someone, sometimes want to wait on tasks. Signaling that the task is done is the task's job:
event evt; ///// this
thread_pool.queue([evt] {
    // whatever
    evt.signal(); ///// and this
});
auto reason = wait(evt, 1s);
if (reason == timeout) {
    log("bummer");
}
The event class could be anything you want - a Windows event, a std::promise and std::future pair, or anything else.
This is so simple and obvious.
Complicating the thread pool infrastructure, taking up valuable system resources for nothing, and signaling synchronization primitives even when no one is listening, just to save the two marked code lines above in the few cases where you actually want to wait for a task, is unjustifiable.
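For instance, here is a sketch with a std::promise/std::future pair playing the role of the event; the run_on_pool lambda is just a stand-in for whatever queue method your real thread pool has:

#include <chrono>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <thread>

int main() {
    // Stand-in for thread_pool.queue(): run the task on some other thread.
    auto run_on_pool = [](std::function<void()> task) {
        std::thread(std::move(task)).detach();
    };

    auto done = std::make_shared<std::promise<void>>();               ///// "event evt"
    auto finished = done->get_future();

    run_on_pool([done] {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));  // whatever
        done->set_value();                                            ///// "evt.signal()"
    });

    if (finished.wait_for(std::chrono::seconds(1)) == std::future_status::timeout)
        std::cout << "bummer\n";                                      // timed out
    else
        std::cout << "task finished in time\n";
}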
I am developing an application that is responsible for moving and managing robots over a UDP connection.
The application needs to:
Read joystick/user input using SDL.
Generate and send a control packet to the robot every 20 milliseconds (UDP)
Receive and decode response packets from the robot (~20 msecs). This was implemented with the signal/slot mechanism and does not require a timer.
Receive and process robot messages for debugging reasons. This is not time-regulated.
Update the UI regularly to keep the user notified about the status of the robot (e.g. battery voltage). For most cases, I have also used Qt's signal/slot mechanism.
Use a watchdog that disables the robot if no response is received after 1 second. The watchdog is reset when the application receives a robot packet (~20 msecs)
For the moment, I have implemented all of the above. However, the application fails to send the packets regularly when the watchdog is activated or when two or more QTimer objects are used. The application generally works, but I would not consider it "production ready". I have tried using the timers' precision flags (Qt::PreciseTimer, Qt::CoarseTimer and Qt::VeryCoarseTimer), but I still experienced problems.
Notes:
The code is generally well organized; there are no "god objects" in the code base (most source files are less than 150 lines long and only create the necessary dependencies).
Most of the time, I use QTimer::singleShot() (e.g. I will only send the next packet once the current packet has been sent).
Where we use timers:
To read joystick input (~50 msecs, precise timer)
To send robot packets (~20 msecs, precise timer)
To update some aspects of the UI (~500 msecs, coarse timer)
To update the elapsed time since the robot was enabled (~100 msecs, precise timer)
To implement a watchdog (put the application and robot in safe state if 1000 msecs have passed without a robot response)
Note: the watchdog is fed when we receive a response packet from the robot (~20 msecs)
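For reference, a minimal sketch of such a restartable watchdog in Qt (class and function names are illustrative, not the actual application code):

#include <QCoreApplication>
#include <QTimer>
#include <QDebug>

// A single-shot timer that is restarted every time a robot packet arrives.
class Watchdog
{
public:
    Watchdog()
    {
        m_timer.setSingleShot(true);
        m_timer.setInterval(1000);                      // 1000 ms without a response
        QObject::connect(&m_timer, &QTimer::timeout, [] {
            qDebug() << "no response from robot - switching to safe state";
        });
        m_timer.start();
    }

    // Call this from the slot that handles incoming robot packets (~20 ms).
    void feed() { m_timer.start(); }                    // restarting resets the countdown

private:
    QTimer m_timer;
};

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);
    Watchdog watchdog;

    // Demo: pretend packets arrive every 20 ms for 3 seconds, then stop,
    // so the watchdog fires one second after the last "packet".
    QTimer feeder;
    QObject::connect(&feeder, &QTimer::timeout, [&watchdog] { watchdog.feed(); });
    feeder.start(20);
    QTimer::singleShot(3000, [&feeder] { feeder.stop(); });

    return app.exec();
}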
Do you have any recommendations for using QTimer objects in performance-critical code? Any idea is welcome. Note that I have also tried using different threads, but that caused me more problems, since the application would not be in "sync", thus failing to effectively control the robots that we have tested.
Actually, I seem to have underestimated Qt's timer and event loop performance. On my system I get, on average, around 20k nanoseconds for an event loop cycle plus the overhead of scheduling a queued function call, and a timer with a 1 millisecond interval is rarely late; most of the timeouts are a few thousand nanoseconds short of a millisecond. But it is a high-end system; on embedded hardware it may be a lot worse.
You should take the time to profile your target system and Qt build to determine whether it can indeed run snappily enough, and, based on those measurements, adjust your timings to compensate for the system delays so that your events are scheduled closer to on time.
You should definitely keep the timer thread as free as possible, because if you block it with IO or extensive computation, your timer will not be accurate. Use a dedicated thread to schedule work and extra worker threads to do the actual work. You may also try playing with thread priorities a bit.
Worst case scenario, look for third-party high-performance event loop implementations, or create your own, and potentially a faster signaling mechanism as well. As I already mentioned in the comments, Qt's inter-thread queued signals are very slow, at least compared to something like indirect function calls.
Last but not least, if you want to do task X every N units of time, that is only possible if task X takes N units of time or less on your system. You need to make this consideration for each task, and for all tasks running concurrently. To get accurate scheduling, you should measure how long task X took and, if that is less than its period, schedule the next execution in the time remaining; otherwise execute it immediately.
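A sketch of that compensation idea with QElapsedTimer and QTimer::singleShot (the 20 ms period matches the control packets above; sendControlPacket() is an illustrative stub, not your real send routine):

#include <QCoreApplication>
#include <QElapsedTimer>
#include <QTimer>
#include <QDebug>
#include <algorithm>

constexpr int PERIOD_MS = 20;              // target: one control packet every 20 ms

void sendControlPacket()                   // illustrative stub for the real send routine
{
    qDebug() << "packet sent";
}

void scheduleNextPacket()
{
    QElapsedTimer stopwatch;
    stopwatch.start();

    sendControlPacket();                   // "task X"

    // Schedule the next run in whatever remains of the period; if the task
    // overran, run again as soon as the event loop allows (0 ms).
    int remaining = PERIOD_MS - static_cast<int>(stopwatch.elapsed());
    QTimer::singleShot(std::max(0, remaining), Qt::PreciseTimer, &scheduleNextPacket);
}

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);
    scheduleNextPacket();                  // kick off the send cycle
    return app.exec();                     // runs until the process is terminated
}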
I have encountered the need to use multithreading in my Windows Forms GUI application using C++. From my research on the topic, it seems background worker threads are the way to go for my purposes. According to example code I have:
System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e)
{
    BackgroundWorker^ worker = dynamic_cast<BackgroundWorker^>(sender);
    e->Result = SomeCPUHungryFunction( safe_cast<Int32>(e->Argument), worker, e );
}
However there are a few things I need to get straight and figure out
Will a background worker thread make my multithreading life easier?
Why do I need e->Result?
What are the arguments passed into the backgroundWorker1_DoWork function for?
What is the purpose of the parameter safe_cast<Int32>(e->Argument)?
What things should I do in my CPUHungryFunction()?
What if my CPUHungryFunction() has a while loop that loops indefinitely?
Do I have control over the processor time my worker thread gets?
Can I more specifically control the number of times the loop runs within a set period? I don't want to be using up CPU looping thousands of times a second when I only need to loop 30 times a second.
Is it necessary to control the rate at which the GUI is updated?
Will a background worker thread make my multithreading life easier?
Yes, very much so. It helps you deal with the fact that you cannot update the UI from a worker thread. Particularly the ProgressChanged event lets you show progress and the RunWorkerCompleted event lets you use the results of the worker thread to update the UI without you having to deal with the cross-threading problem.
Why do I need e->Result?
To pass back the result of the work you did to the UI thread. You get the value back in your RunWorkerCompleted event handler, e->Result property. From which you then update the UI with the result.
What are the arguments passed into the function for?
To tell the worker thread what to do; it is optional. Otherwise identical to passing arguments to any method, just more awkward since you don't get to choose the arguments. You typically pass some kind of value from your UI, for example; use a little helper class if you need to pass more than one. Always favor this over trying to obtain UI values in the worker, that's very troublesome.
What things should I do in my CPUHungryFunction()?
Burn CPU cycles of course. Or in general do something that takes a long time, like a dbase query. Which doesn't burn CPU cycles but takes too long to allow the UI thread to go dead while waiting for the result. Roughly, whenever you need to do something that takes more than a second then you should execute it on a worker thread instead of the UI thread.
What if my CPUHungryFunction() has a while loop that loops indefinitely?
Then your worker never completes and never produces a result. This may be useful but it isn't common. You would not typically use a BGW for this, just a regular Thread that has its IsBackground property set to true.
Do I have control over the processor time my worker thread gets?
You have some by artificially slowing it down by calling Thread.Sleep(). This is not a common thing to do, the point of starting a worker thread is to do work. A thread that sleeps is using an expensive resource in a non-productive way.
Can more specifically control the number of times the loop loops within a set period? I don’t want to be using up cpu looping 1000s of times a second when I only need to loop 30 times a second.
Same as above, you'd have to sleep. Do so by executing the loop 30 times and then sleep for a second.
Is it necessary to control the rate at which the GUI is updated?
Yes, that's very important. ReportProgress() can be a fire-hose, generating many thousands of UI updates per second. You can easily get into a problem with this when the UI thread just can't keep up with that rate. You'll notice, the UI thread stops taking care of its regular duties, like painting the UI and responding to input. Because it keeps having to deal with another invoke request to run the ProgressChanged event handler. The side-effect is that the UI looks frozen, you've got the exact problem back you were trying to solve with a worker. It isn't actually frozen, it just looks that way, it is still running the event handler. But your user won't see the difference.
The one thing to keep in mind is that ReportProgress() only needs to keep human eyes happy. Which cannot see updates that happen more frequently than 20 times per second. Beyond that, it just turns into an unreadable blur. So don't waste time on UI updates that just are not useful anyway. You'll automatically also avoid the fire-hose problem. Tuning the update rate is something you have to program, it isn't built into BGW.
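One way to program that rate limiting inside DoWork is a simple elapsed-time check, shown here in the same C++/CLI style as the question (the 50 ms threshold and the loop are illustrative, and WorkerReportsProgress must be set to true):

System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e)
{
    BackgroundWorker^ worker = dynamic_cast<BackgroundWorker^>(sender);
    System::Diagnostics::Stopwatch^ sw = System::Diagnostics::Stopwatch::StartNew();

    for (int i = 0; i < 1000000; i++)
    {
        // ... do one slice of the CPU hungry work here ...

        // Only bother the UI thread about 20 times per second.
        if (sw->ElapsedMilliseconds >= 50)
        {
            worker->ReportProgress(i / 10000);   // rough percentage
            sw->Restart();
        }
    }
    e->Result = 42;   // whatever the final result of the work is
}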
I will try to answer your questions one by one.
Yes.
DoWork is a void method (and needs to be so). Also, DoWork executes in a different thread from the calling one, so you need a way to return something to the calling thread. The e->Result parameter will be passed to the RunWorkerCompleted event inside the RunWorkerCompletedEventArgs.
The sender argument is the BackgroundWorker itself, which you can use to raise events for the UI thread; the DoWorkEventArgs eventually contains the parameters passed from the calling thread (the one that called RunWorkerAsync(Object)).
Whatever you need to do, paying attention to the user-interface elements, which are not accessible from the DoWork thread. Usually one calculates the percentage of work done, updates the UI (a progress bar or something alike) and calls ReportProgress to communicate with the UI thread. (This requires the WorkerReportsProgress property to be set to true.)
Nothing runs indefinitely. You can always unplug the cord. Seriously, it is just another thread; the OS takes care of it and destroys everything when your app ends.
Not sure what you mean with this, but it is probably related to the next question.
You can use the Thread.Sleep or Thread.Join methods to release CPU time after each loop iteration. The exact time to sleep should be fine-tuned depending on what you are doing, the workload of the current system and the raw speed of your processor.
Please refer to the MSDN docs on the BackgroundWorker and Thread classes.
I have a windowless timer (no WM_TIMER) which fires a callback function only once, when a given time period has elapsed. It is implemented with SetTimer()/KillTimer(). The time periods are fairly small: 100-300 milliseconds.
Is it cheap enough (I mean performance-wise) to call the SetTimer()/KillTimer() pair for every such short time interval?
What if I have 100 such timers which periodically call SetTimer()/KillTimer()? How many Windows timer objects may exist simultaneously in the system?
That is the question:
Use a bunch of such timer objects and rely on a good Windows implementation of timers, or create one Windows timer object that ticks every, say, 30 milliseconds, and subscribe all the custom 100-300 millisecond one-time timers to it.
Thanks
The problem with timer messages as you are trying to use them is that they are low-priority messages. Actually, they are fake messages. Timers are associated with an underlying kernel timer object - when the message loop detects that the kernel timer is signalled, it simply marks the current thread's message queue with a flag indicating that the next call to GetMessage - WHEN THERE ARE NO OTHER MESSAGES TO PROCESS - should synthesise a WM_TIMER message just in time and return it.
With potentially lots of timer objects, it's not at all obvious that the system will fairly signal timer messages for all the timers equally, and any system load can entirely prevent the generation of WM_TIMER messages for long periods of time.
If you are in control of the message loop, you could maintain your own list of timer events (along with GetTickCount timestamps of when they should occur) and use MsgWaitForMultipleObjects - instead of GetMessage - to wait for messages. Use the timeout parameter (dwMilliseconds) to provide the smallest interval - from now - until the next timer should be signalled, so the call returns from waiting for messages each time you have a timer to process.
And/or you could use waitable timers - either on a GUI thread with MsgWaitForMultipleObjects, or just on a worker thread - to access the lower-level timing functionality directly.
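A rough sketch of the first option - your own deadline list plus MsgWaitForMultipleObjects - meant to replace a plain GetMessage loop (the PendingTimer bookkeeping is illustrative):

#include <windows.h>
#include <algorithm>
#include <functional>
#include <vector>

// Illustrative bookkeeping: one entry per pending deadline.
struct PendingTimer {
    ULONGLONG due;                         // GetTickCount64() value when it should fire
    std::function<void()> handler;
};

std::vector<PendingTimer> g_timers;

// Run everything that is due and return how many ms to wait for the next one.
DWORD RunDueTimersAndGetNextWait()
{
    ULONGLONG now = GetTickCount64();
    DWORD wait = INFINITE;
    for (auto it = g_timers.begin(); it != g_timers.end(); ) {
        if (it->due <= now) {
            it->handler();
            it = g_timers.erase(it);
        } else {
            wait = static_cast<DWORD>(std::min<ULONGLONG>(wait, it->due - now));
            ++it;
        }
    }
    return wait;
}

void MessageLoop()
{
    for (;;) {
        DWORD wait = RunDueTimersAndGetNextWait();

        // Sleep until either a message arrives or the next deadline passes.
        DWORD r = MsgWaitForMultipleObjects(0, nullptr, FALSE, wait, QS_ALLINPUT);

        if (r == WAIT_OBJECT_0) {          // input/messages are available
            MSG msg;
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT) return;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
        // On WAIT_TIMEOUT we just loop; the next iteration runs the due timers.
    }
}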
The biggest SetTimer() pitfall is that it actually creates a USER object (despite the fact that it's not listed in the MSDN USER objects list), hence it falls under the Windows USER object limitation - by default max 10,000 objects per process, max 65,535 objects per session (all running processes).
This can easily be proven by a simple test - just call SetTimer() (the parameters don't matter; both windowed and windowless timers act the same way) and watch the USER objects count increase in Task Manager.
Also see ReactOS ntuser.h source and this article. Both of them state that TYPE_TIMER is one of USER handle types.
So beware - creating a bunch of timers could exhaust your system resources and make your process crash, or even make the entire system unresponsive.
Here are the details that I feel you're actually after while asking this question:
SetTimer() will first scan the non-kernel timer list (doubly linked list) to see if the timer ID already exists. If the timer exists, it will simply be reset. If not, an HMAllocObject call occurs and creates space for the structure. The timer struct will then be populated and linked to the head of the list.
This will be the total overhead for creating each of your 100 timers. That's exactly what the routine does, save for checking against the min and max dwElapsed parameters.
As far as timer expiration goes, the timer list is scanned at (approximately) the duration of the smallest timer duration seen during the last timer list scan. (Actually, what really happens is -- a kernel timer is set to the duration of the smallest user timer found, and this kernel timer wakes up the thread that does the checking for user timer expirations and wakes the respective threads via setting a flag in their message queue status.)
For each timer in the list, the current delta between the last time (in ms) the timer list was scanned and the current time (in ms) is decremented from each timer in the list. When one is due (<= 0 remaining), it's flagged as "ready" in its own struct, and a pointer to the thread info is read from the timer struct and used to wake the respective thread by setting the thread's QS_TIMER flag. It then increments your message queue's CurrentTimersReady counter. That's all timer expiration does. No actual messages are posted.
When your main message pump calls GetMessage(), when no other messages are available, GetMessage() checks for QS_TIMER in your thread's wake bits, and if set -- generates a WM_TIMER message by scanning the full user timer list for the smallest timer in the list flagged READY and that is associated with your thread id. It then decrements your thread CurrentTimersReady count, and if 0, clears the timer wake bit. Your next call to GetMessage() will cause the same thing to occur until all timers are exhausted.
One shot timers stay instantiated. When they expire, they're flagged as WAITING. The next call to SetTimer() with the same timer ID will simply update and re-activate the original. Both one shot and periodic timers reset themselves and only die with KillTimer or when your thread or window are destroyed.
The Windows implementation is very basic, and I think it'd be trivial for you to write a more performant implementation.
I'm importing a portion of existing code into my Qt app and noticed a sleep function in there. I see that this type of function has no place in event programming. What should I do instead?
UPDATE: After thought and feedback, I would say the answer is: call sleep only outside the GUI main thread; if you need to wait in the GUI thread, use processEvents() or an event loop, which will prevent the GUI from freezing.
It isn't pretty but I found this in the Qt mailing list archives:
The sleep method of QThread is protected, but you can expose it like so:
class SleeperThread : public QThread
{
public:
static void msleep(unsigned long msecs)
{
QThread::msleep(msecs);
}
};
Then just call:
SleeperThread::msleep(1000);
from any thread.
However, a more elegant solution would be to refactor your code to use a QTimer - this might require you to save some state so you know what to do when the timer goes off.
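A sketch of that refactoring (doFirstPart/doSecondPart are illustrative stand-ins for the code before and after the removed sleep): instead of sleeping between two steps, let a single-shot timer call the next step.

#include <QCoreApplication>
#include <QTimer>
#include <QDebug>

// Before: doFirstPart(); sleep(1000); doSecondPart();
// After: the saved "state" is simply which step the timer should run next.

void doSecondPart() { qDebug() << "second part"; }        // illustrative step

void doFirstPart()
{
    qDebug() << "first part";                             // illustrative step
    // Instead of sleeping, ask the event loop to call the next step in 1 s.
    QTimer::singleShot(1000, &doSecondPart);
}

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);
    doFirstPart();
    QTimer::singleShot(1200, [&app] { app.quit(); });     // end the demo shortly after
    return app.exec();
}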
I don't recommend sleeping in an event-based system, but if you want to...
You can use a wait condition; that way you can always interrupt the sleep if necessary.
//...
QMutex dummy;
dummy.lock();
QWaitCondition waitCondition;
waitCondition.wait(&dummy, waitTime);
//...
The reason why sleep is a bad idea in event-based programming is that event-based programming is effectively a form of non-preemptive multitasking. By calling sleep, you prevent any other event from becoming active, and therefore block the processing of the thread.
In a request-response scenario for UDP packets, send the request and immediately wait for the response. Qt has good socket APIs which will ensure that the socket does not block while waiting for the event. The event will come when it comes. In your case the QUdpSocket::readyRead() signal is your friend.
If you want to schedule an event for some point of time in the future, use QTimer. This will ensure that other events are not blocked.
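A minimal sketch of both pieces with QUdpSocket (the address, port and payload are illustrative):

#include <QCoreApplication>
#include <QHostAddress>
#include <QUdpSocket>
#include <QTimer>
#include <QDebug>

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);
    QUdpSocket socket;
    socket.bind();                         // any local address, ephemeral port

    // Handle the response whenever it arrives, without blocking or sleeping.
    QObject::connect(&socket, &QUdpSocket::readyRead, [&socket] {
        while (socket.hasPendingDatagrams()) {
            QByteArray datagram;
            datagram.resize(int(socket.pendingDatagramSize()));
            socket.readDatagram(datagram.data(), datagram.size());
            qDebug() << "response:" << datagram;
        }
    });

    // Send the request now...
    socket.writeDatagram("request", QHostAddress::LocalHost, 12345);   // illustrative port

    // ...and schedule a future event (e.g. a timeout check) with QTimer, not sleep().
    QTimer::singleShot(1000, [] { qDebug() << "one second later"; });

    return app.exec();
}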
It is not necessary to break the code down into events at all. All I needed to do was call QApplication::processEvents() where sleep() was, and this prevents the GUI from freezing.
I don't know how Qt handles the events internally, but on most systems, at the lowest level, the application's life goes like this: the main thread code is basically a loop (the message loop) in which, at each iteration, the application calls a function that gives it a new message; usually that function is blocking, i.e. if there are no messages the function does not return and the application is stopped.
Each time the function returns, the application has a new message to process, which usually has a recipient (the window it is sent to), a meaning (the message code, e.g. the mouse pointer has been moved) and some additional data (e.g. the mouse has been moved to coords 24, 12).
Now, the application has to process the message; the OS or the GUI toolkit usually does this under the hood, so with some black magic the message is dispatched to its recipient and the correct event handler is executed. When the event handler returns, the internal function that called the event handler returns, so does the one that called it, and so on, until control comes back to the main loop, which will now call the magic message-retrieving function again to get another message. This cycle goes on until the application terminates.
Now, I wrote all this to make you understand why sleep is bad in an event-driven GUI application: if you notice, while a message is being processed no other messages can be processed, since the main thread is busy running your event handler, which, after all, is just a function called by the message loop. So, if you make your event handler sleep, the message loop will sleep too, which means that in the meantime the application won't receive and process any other messages, including the ones that make your window repaint, so your application will look "hung" from the user's perspective.
Long story short: don't use sleep unless you have to sleep for very short times (a few hundred milliseconds at most), otherwise the GUI will become unresponsive. You have several options to replace the sleeps: you can use a timer (QTimer), but it may require you to do a lot of bookkeeping between one timer event and the next. A popular alternative is to start a separate worker thread: it would just handle the UDP communication and, being separate from the main thread, it would not cause any problem by sleeping when necessary. Obviously you must take care to protect the data shared between the threads with mutexes, and be careful to avoid race conditions and all the other kinds of problems that occur with multithreading.