Proper use of timers in Qt - C++

I want to program an interval timer that can be used for training. There should be a stacked widget where the user can enter the times for the training and rest rounds and the number of repetitions, followed by a press on a start button which changes the page and starts the first round countdown, shown in a display.
So if the user enters 20 seconds for training, 10 seconds for rest and 3 repetitions, the numbers
20 to 0, 10 to 0, 20 to 0, 10 to 0 and 20 to 0
should be displayed one after another.
The problem I ran into:
I tried a QTimer and a QThread with a 1-second sleep and a signal-slot connection to the GUI, but with both approaches the GUI froze.

The use of a QTimer will not block the main window. This is the purpose of timers.
Moreover, you don't need to use threads at all; you only have to start a timer with the desired interval (for example, a tick every 10 ms) and connect its timeout() signal to a slot that implements your application behaviour.
In this slot, you just have to handle the countdown and the state changes (working time to break time if the number of repetitions has not been reached, break time to working time, and the finished state).
I have created such an application and it worked well. Maybe I will make it available on GitHub later; if I do, I will edit my answer to provide the link.
I hope this helps.
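For illustration, here is a minimal sketch of that approach, assuming a single QTimer ticking once per second and a QLCDNumber as the display; the class and member names (IntervalTimer, m_display, ...) are my own and not taken from the application mentioned above:

#include <QLCDNumber>
#include <QObject>
#include <QTimer>

// Minimal sketch: one QTimer ticking every second drives the whole countdown.
class IntervalTimer : public QObject
{
    Q_OBJECT
public:
    IntervalTimer(int trainingSecs, int restSecs, int repetitions, QLCDNumber *display)
        : m_training(trainingSecs), m_rest(restSecs), m_repsLeft(repetitions),
          m_display(display)
    {
        connect(&m_timer, &QTimer::timeout, this, &IntervalTimer::tick);
    }

    void start()
    {
        m_inTraining = true;
        m_secondsLeft = m_training;
        m_display->display(m_secondsLeft);
        m_timer.start(1000);                 // one tick per second; the GUI stays responsive
    }

private slots:
    void tick()
    {
        --m_secondsLeft;
        m_display->display(m_secondsLeft);
        if (m_secondsLeft > 0)
            return;                          // still counting the current phase down
        // reached 0: stop after the last training round, otherwise switch phases
        if (m_inTraining && --m_repsLeft == 0) {
            m_timer.stop();
            return;
        }
        m_inTraining = !m_inTraining;        // alternate between training and rest
        // +1 so the next tick displays the full value (e.g. 20 or 10)
        m_secondsLeft = (m_inTraining ? m_training : m_rest) + 1;
    }

private:
    QTimer m_timer;
    int m_training;
    int m_rest;
    int m_repsLeft;
    int m_secondsLeft = 0;
    bool m_inTraining = true;
    QLCDNumber *m_display;
};

The important point is that tick() returns immediately, so the event loop keeps running and the window never freezes.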

I think you have designed the solution in a very complicated way. With no code it's impossible to tell you what went wrong.
If I had to develop a solution for this, it'd be in the form of interconnecting blocks, which can be delay blocks or flow control blocks (child classes of a common block parent).
Each block has a next block and a trigger function. A delay block also has a time. A flow control block may have different functionalities, like pointing back to a previous block for only x repetitions. You can use a single global QTimer: when a delay block is triggered, connect the timer's timeout signal to the trigger function of the next block and start the timer with the current block's time.
For instance, if you wanted to do 3 times 30s exercise, 10s rest, you'd connect two delay blocks with a repeat block.
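A rough sketch of that block idea, using QTimer::singleShot so that each delay block simply schedules its successor (all class and member names here are illustrative, not an existing API):

#include <QTimer>
#include <functional>

// Illustrative block classes: a chain of blocks drives the interval timer.
struct Block {
    Block *next = nullptr;
    virtual ~Block() = default;
    virtual void trigger() = 0;          // run this block, then hand over to 'next'
};

struct DelayBlock : Block {
    int milliseconds;
    std::function<void()> onStart;       // e.g. update the display for this phase
    explicit DelayBlock(int ms) : milliseconds(ms) {}
    void trigger() override {
        if (onStart) onStart();
        // schedule the next block once this block's delay has elapsed
        QTimer::singleShot(milliseconds, [this] { if (next) next->trigger(); });
    }
};

struct RepeatBlock : Block {
    Block *loopStart;                    // block to jump back to
    int repetitions;                     // how many more times to loop back
    RepeatBlock(Block *start, int reps) : loopStart(start), repetitions(reps) {}
    void trigger() override {
        if (repetitions-- > 0)
            loopStart->trigger();        // go around again
        else if (next)
            next->trigger();             // fall through once the loop is done
    }
};

// 3 x (30 s exercise, 10 s rest): exercise -> rest -> repeat -> back to exercise
//   DelayBlock exercise(30000), rest(10000);
//   RepeatBlock repeat(&exercise, 2);  // 2 more passes after the first one
//   exercise.next = &rest; rest.next = &repeat;
//   exercise.trigger();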

Related

Scheduler Design? Multiple Events on the same timeline starting at different times

I have multiple objects (Object1, Object2 and Object3) which MAY want to utilize a callback. If an object decides it wants to be registered for a periodic callback, they will all use a 30-second reset rate. The object chooses when it registers for a callback (which it would then want at that fixed interval of 30 seconds going forward).
If I wanted to give each object its own internal timer (such as a timer on a separate thread), this would be a simple problem. However, each timer would need to be on a separate thread, which would grow too much as my object count grows.
So for example:
At T=10 seconds into runtime, Object 1 registers for a callback. Since the callback occurs every 30 seconds, its next fire event will be at T=40, then T=70, T=100, etc.
Say 5 seconds later (T=15), Object 2 registers for a callback, meaning its next calls are at T=45, T=75, T=105, etc.
Lastly, 1 second after Object 2, Object 3 registers for a callback. Its callback should be invoked at T=46, etc.
A dirty solution I have for this is for everything to calculate its delta from the first registered object. So Object 1's delta is 0, Object 2's is 5 and Object 3's is 6. Then, in a constantly running loop, once the 30 seconds have elapsed I know that Object 1's callback can process, 5 seconds after that point I can call Object 2's callback, and so on.
I don't like that this essentially busy-waits, as a while loop must constantly be running. I guess SystemSleep calls may not be much different from using semaphores.
Another thought I had was finding the lowest common multiple between the fire events. For example, if I knew it was possible that every 3 seconds I may have to fire an event, I would keep track of that.
I think what I am essentially trying to make is some sort of simple scheduler? I'm sure I am hardly the first person to do this.
I am trying to come up with a performant solution. A while loop or a ton of timers on their own threads would make this easy, but that is not a good solution.
Any ideas? Is there a name for this design?
Normally you would use a priority queue, a heap or similar to manage your timed callbacks using a single timer. You check what callback needs to be called next and that is the time you set for the timer to wake you up.
But if all callbacks use a constant 30 s repeat, then you can just use a plain queue. New callbacks are added to the end as a pair of callback and (absolute) timestamp, and the next callback to call will always be at the front. Every time you call a callback, you add it back to the queue with its timestamp increased by 30 s.
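A minimal single-threaded sketch of the fixed-30-second case, using std::chrono and a plain std::deque (the Scheduler name and everything around it are illustrative; it assumes callbacks are registered before run() starts or from within callbacks, since there is no locking):

#include <chrono>
#include <deque>
#include <functional>
#include <thread>
#include <utility>

using Clock = std::chrono::steady_clock;

struct Scheduler {
    // pairs of (next fire time, callback); new registrations go to the back,
    // so the front is always the callback that is due next
    std::deque<std::pair<Clock::time_point, std::function<void()>>> queue;

    void add(std::function<void()> cb) {
        queue.emplace_back(Clock::now() + std::chrono::seconds(30), std::move(cb));
    }

    void run() {
        while (!queue.empty()) {
            auto [when, cb] = queue.front();
            queue.pop_front();
            std::this_thread::sleep_until(when);   // the single "timer": sleep until the front entry is due
            cb();
            // re-arm with a 30 s period and push it to the back again
            queue.emplace_back(when + std::chrono::seconds(30), std::move(cb));
        }
    }
};

For the general case described above, the deque would be replaced by a priority queue ordered by timestamp.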

How to efficiently handle incoming delayed events on a single timeline?

I want to implement an algorithm that waits for some events and handles them after some delay. Each event has its own predefined delay. The handler may be executed in a separate thread. Issues with CPU throttling, host overload, etc. may be ignored - it is not intended to be a precise real-time system.
Example.
At moment N, an event arrives with a delay of 1 second. We want to handle it at moment N + 1 sec.
At moment N + 0.5 sec, another event arrives with a delay of 0.3 seconds. We want to handle it at moment N + 0.8 sec.
Approaches.
The only straightforward approach that comes to my mind is to use a loop with the minimal possible delay between iterations, say every 10 ms, and check whether any event on our timeline should be handled now. But it's not a good idea, since the delays may vary on a scale from 10 ms to 10 minutes.
Another approach is to have a single thread that sleeps between events. But I can't figure out how to forcefully "wake" it when a new event arrives that should be handled between now and the next scheduled wake-up.
It is also possible to use a thread per event and just sleep, but there may be thousands of simultaneous events, which could effectively lead to running out of threads.
The solution can be language-agnostic, but I would prefer a C++ standard library solution.
Another approach is to have a single thread that sleeps between events. But I can't figure out how to forcefully "wake" it when a new event arrives that should be handled between now and the next scheduled wake-up.
I suppose the solution to these problems is, at least on *nix systems, poll or epoll with the help of a timer. It allows you to make the thread sleep until some given event occurs. The given event may be something appearing on stdin or a timer timeout. Since the question was about the general algorithm/idea and real code would take a lot of space, I am giving just pseudocode:
epoll = create_epoll();
timers = vector<timer>{};
while (true) {
    event = epoll.wait_for_event(timers);
    if (event.is_timer_timeout()) {
        t = timers.find_timed_out();
        t.handle_event();
        timers.erase(t);
    } else if (event.is_incoming_stdin_data()) {
        data = stdin.read();
        timers.push_back(create_timer(data));
    }
}
Two threads that share a priority queue.
Arrivals thread: wait for an arrival. When an event arrives, calculate the time at which its handler should run and add the handler to the queue with that time as its priority (so the top of the queue is always the next event to be handled).
Handler thread: if the current time has reached the time of the handler at the top of the queue, run that handler. Otherwise, sleep for the clock resolution and check again.
Note: check whether your queue is thread-safe. If not, you will have to use a mutex.
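A sketch of a variation of that design using only the standard library: instead of sleeping for the clock resolution, the handler thread waits on a std::condition_variable with wait_until, and notify_one() from the arrivals side is what "wakes" it when a new, earlier event arrives (all names here are illustrative):

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;

struct TimedEvent {
    Clock::time_point due;
    std::function<void()> handler;
    bool operator>(const TimedEvent& other) const { return due > other.due; }
};

class DelayedEventQueue {
public:
    // Arrivals side: schedule a handler to run after 'delay' and wake the handler
    // thread in case the new event is due earlier than the one it is waiting for.
    void schedule(std::chrono::milliseconds delay, std::function<void()> handler) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            events_.push({Clock::now() + delay, std::move(handler)});
        }
        cv_.notify_one();
    }

    // Handler side: run this in a dedicated thread; it sleeps until the earliest
    // event is due or until schedule() notifies it about a new event.
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            if (events_.empty()) {
                cv_.wait(lock);                           // nothing scheduled: sleep until notified
            } else if (cv_.wait_until(lock, events_.top().due) == std::cv_status::timeout) {
                TimedEvent event = events_.top();         // the earliest event is now due
                events_.pop();
                lock.unlock();
                event.handler();                          // run the handler outside the lock
                lock.lock();
            }
            // notifications and spurious wake-ups simply re-check the top of the queue
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::priority_queue<TimedEvent, std::vector<TimedEvent>, std::greater<TimedEvent>> events_;
};

Instead of polling every clock tick, the handler thread sleeps exactly until the earliest due time, and a newly scheduled earlier event interrupts that sleep via the condition variable.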
This looks simple, but there are a lot of gotchas waiting for the inexperienced. So I would not recommend coding this from scratch; it is better to use a library. The classic is boost::asio. However, it is beginning to show its age and has way more bells and whistles than are needed. So, personally, I use something more lightweight, coded in C++17 - a non-blocking event waiter class I wrote, which you can get from https://github.com/JamesBremner/await. Notice the sample application using this class, which does most of what you require: https://github.com/JamesBremner/await/wiki/Event-Server

Progress Bar with Gtkmm

Hello, I am looking for a signal in gtkmm. Basically I am doing some simulations, and what I want is something like this:
Assume I do 5 simulations:
progressBar.set_fraction(0);
// simulation 1
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 2
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 3
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 4
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 5
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
But I don't know which signal I have to use or how to achieve this.
Thanks a lot for your help!
The pseudo-code which you presented in your question should actually work - no signal is necessary. However, you could introduce a signal into your simulation for updating the progress bar. IMHO this will not solve your problem, and I will try to explain why and what to do to solve it.
You provided a bit too little context, so I will introduce some assumptions: you have a main window with a button or toolbar item or menu item (or even all of them) which starts the simulation.
Let's imagine you set a breakpoint at Gtk::ProgressBar::set_fraction().
Once the debugger stops at this breakpoint, you will find the following calls on the stack trace (probably with many other calls in between):
Gtk::Main::run()
the signal handler of the widget or action which started the simulation
the function which runs the five simulations
and last the call of Gtk::ProgressBar::set_fraction().
If you could inspect the internals of Gtk::ProgressBar you would notice that everything in Gtk::ProgressBar::set_fraction() is done properly. So what's wrong?
When you call Gtk::ProgressBar::set_fraction() it probably generates an expose event (i.e. it adds an event to the event queue inside Gtk::Main with a request for its own refresh). The problem is that you probably do not process that request until all five runs of the simulation are done. (Remember that Gtk::Main::run(), which is responsible for this, is the uppermost/outermost call of my imaginary stack trace.) Thus, the refresh does not happen until the simulation is over - and that's too late. (Btw., the GTK+ authors state somewhere in the manual that they cleverly optimize events, i.e. there might end up being only one expose event for the Gtk::ProgressBar in the event queue, but this does not make your situation any better.)
Thus, after you call Gtk::ProgressBar::set_fraction() you must somehow flush the event queue before continuing with your simulation.
This sounds like leaving the simulation, leaving the calling widget's signal handler, returning to Gtk::Main::run() for further event processing and finally coming back for the next simulation step - a terrible idea. But it can be done much more simply. For this, we essentially use the following code (in gtkmm 2.4):
while (Gtk::Main::events_pending()) Gtk::Main::iteration(false);
(This should hopefully be the same in the gtkmm version you use but if in doubt consult the manual.)
It should be done immediately after updating the progress bar fraction and before the simulation continues.
This recursively enters (parts of) the main loop and processes all pending events in the event queue of Gtk::Main, and thus the progress bar is exposed before the simulation continues. You may be concerned about "recursively entering the main loop", but I read somewhere in the GTK+ manual that it is allowed (and a reasonable way to solve problems like this), along with what to take care of (i.e. limiting the number of recursions and ensuring a proper "roll-back").
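Put together, the simulation loop would look roughly like this (a sketch assuming gtkmm 2.4 and five equally weighted simulation runs; run_simulation() is a hypothetical placeholder for your own code):

const int runs = 5;
for (int i = 0; i < runs; ++i) {
    run_simulation(i);                                 // the long running work for one step
    progressBar.set_fraction(double(i + 1) / runs);
    // flush pending events so the progress bar is actually redrawn right now
    while (Gtk::Main::events_pending()) Gtk::Main::iteration(false);
}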
What in your case is the simulation, we call in general a long running function. Because such long running functions are algorithms (in libraries for anything and everything) which should not be polluted with any GUI stuff, we built some administrative infrastructure around this basic concept, including
a progress "proxy" object with an update(double) method and a signal slot
a customized progress dialog which can connect a signal handler to such a progress object (i.e. to its signal slot).
The long running function gets a progress object (as an argument) and is responsible for calling the Progress::update() method at appropriate intervals with an appropriate progress factor. (We simply use values in the range [0, 1].)
One issue is how often to call the progress update. If it is called too often, the GUI will slow down your long running function significantly. The opposite case (calling it not often enough) results in a less responsive GUI. Thus, we decided in favour of frequent progress updates. To reduce the time consumed by the GUI, we remember the time of the last update in our progress dialog and skip further refreshes until a certain duration since the last refresh has passed. The long running function then still has some extra effort for the progress updates, but it is not noticeable anymore. (A good refresh interval is IMHO 0.1 s - roughly the perception threshold of humans - but you may choose 0.05 s if in doubt.)
Flushing all pending events also results in the processing of mouse events (and other GTK+ signals). This allows another useful feature: aborting the long running function.
When the "Cancel" button of our progress dialog is pressed, it sets an internal flag. The next time the progress is updated, it checks the flag. If the flag has become true, it throws a special exception. The throw immediately aborts the caller of the progress update (the long running function). This exception must be caught in the signal handler of the button (or whatever called the long running function). Otherwise, it would "fall through" to the event dispatcher in Gtk::Main, where it is definitely caught, which would abort your application. (I saw this often enough whenever I forgot to catch it.) On the other hand, catching the special exception tells you clearly that the long running function was aborted (as opposed to having ended with a regular return). This may or may not be something worth showing in the GUI as well.
Finally, the above solution can cause another issue: it makes it possible to start the simulation (via the GUI) while a simulation is already running, because button presses that start the simulation can be processed during the progress update. To prevent this, there is actually a simple solution: set a flag in the GUI at the start of the simulation, clear it when the simulation has finished, and refuse further starts while the flag is set. Another option is to make the widget/action insensitive while the simulation is running. This topic becomes more complicated if you have multiple distinct long running functions in your application which may or may not exclude each other - it leads to something like an exclusion matrix. Well, we solved it pragmatically... (but without the matrix).
And last but not least, I want to mention that we use a similar concept for the output of log views (e.g. visual logging of infos, warnings and errors while anything long running is in progress). IMHO it is always good to provide some visual action for end users. Otherwise, they might get bored and pick up the telephone to complain about the (too) slow software, which actually steals the time you need to make it faster (a vicious circle you have to break...).

g_main_loop uses 100% CPU

I have built my first application using glibmm. I'm using a lot of threads as it does heavy processing. I have tried to follow the guidelines concerning multithreading, i.e. not doing any GUI updates from threads other than the one where the g_main_loop is running.
I do a lot of graphics rendering in worker threads, but I only ever update a Pixbuf, which is later drawn by the widget's on_draw() from the main loop.
All was fine as long as the data I render was read from files. The problems started when I began streaming data from a server and rendering it at regular intervals.
Every now and then, especially when executing multiple instances of my application simultaneously, I see the main thread take 100% CPU time. Running strace on the process shows that g_main_loop has ended up in an endless loop calling poll:
poll([{fd=3, events=POLLIN}, {fd=4, events=POLLIN}, {fd=10, events=POLLIN}, {fd=8, events=POLLIN}], 4, 100) = 1 ([{fd=10, revents=POLLIN}])
In /proc I get this for file descriptor 10: 10 -> socket:[1132750]
The poll always returns immediately, as file descriptor 10 has something to offer. This goes on forever, so I assume that the file descriptor is never read. The odd thing is that running 5 instances will almost always lead to all 5 ending up in the infinite poll loop after just a couple of minutes, while running only one instance seems to work for more than 30 minutes most of the times I try.
Why is this happening and is there any way to debug this?
My mistake was that I called queue_draw() from one of my worker threads. Given that the function is called "queue", I assumed it would queue up a redraw which would later be executed by the g_main_loop. As it turned out, this was what broke the g_main_loop. I wish the gtkmm reference manual had a little more detail about these multithreading restrictions.
My solution to the problem was adding a Glib::Dispatcher queueRedraw to my widget and connecting it to the queue_draw() function:
queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
Calling queueRedraw() signals the main thread to call the queue_draw() function.
I don't know if this is the best approach, but it solves the problem.
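In context, the pattern looks roughly like this (a sketch assuming a drawing widget named MyWidgetClass, as in the snippet above):

#include <gtkmm.h>

class MyWidgetClass : public Gtk::DrawingArea
{
public:
    MyWidgetClass()
    {
        // the dispatcher must be constructed and connected in the main (GUI) thread;
        // the connected slot runs in the main loop whenever the dispatcher is emitted
        queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
    }

    // called from a worker thread after it has updated the Pixbuf
    void requestRedraw()
    {
        queueRedraw.emit();   // safe across threads: it only signals the main thread
    }

private:
    Glib::Dispatcher queueRedraw;
};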

Some questions on multithreading and BackgroundWorker threads in Windows Forms

I have encountered the need to use multithreading in my Windows Forms GUI application using C++. From my research on the topic, it seems background worker threads are the way to go for my purposes. The example code I have looks like this:
System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e)
{
    BackgroundWorker^ worker = dynamic_cast<BackgroundWorker^>(sender);
    e->Result = SomeCPUHungryFunction( safe_cast<Int32>(e->Argument), worker, e );
}
However, there are a few things I need to get straight and figure out:
Will a background worker thread make my multithreading life easier?
Why do I need e->Result?
What are the arguments passed into the backgroundWorker1_DoWork function for?
What is the purpose of the parameter safe_cast<Int32>(e->Argument)?
What things should I do in my CPUHungryFunction()?
What if my CPUHungryFunction() has a while loop that loops indefinitely?
Do I have control over the processor time my worker thread gets?
Can I more specifically control the number of times the loop loops within a set period? I don't want to be using up CPU looping thousands of times a second when I only need to loop 30 times a second.
Is it necessary to control the rate at which the GUI is updated?
Will a background worker thread make my multithreading life easier?
Yes, very much so. It helps you deal with the fact that you cannot update the UI from a worker thread. Particularly the ProgressChanged event lets you show progress and the RunWorkerCompleted event lets you use the results of the worker thread to update the UI without you having to deal with the cross-threading problem.
Why do I need e->Result?
To pass back the result of the work you did to the UI thread. You get the value back in your RunWorkerCompleted event handler, in the e->Result property, from which you then update the UI with the result.
What are the arguments passed into the function for?
To tell the worker thread what to do; it is optional. Otherwise it is identical to passing arguments to any method, just more awkward since you don't get to choose the arguments. You typically pass some kind of value from your UI, for example; use a little helper class if you need to pass more than one. Always favor this over trying to obtain UI values in the worker, which is very troublesome.
What things should I do in my CPUHungryFunction()?
Burn CPU cycles, of course. Or, in general, do something that takes a long time, like a database query, which doesn't burn CPU cycles but takes too long to let the UI thread go dead waiting for the result. Roughly, whenever you need to do something that takes more than a second, you should execute it on a worker thread instead of the UI thread.
What if my CPUHungryFunction() has a while loop that loops indefinitely?
Then your worker never completes and never produces a result. This may be useful but it isn't common. You would not typically use a BGW for this, just a regular Thread that has its IsBackground property set to true.
Do I have control over the processor time my worker thread gets?
You have some control, by artificially slowing it down with Thread.Sleep(). This is not a common thing to do; the point of starting a worker thread is to do work. A thread that sleeps is using an expensive resource in a non-productive way.
Can I more specifically control the number of times the loop loops within a set period? I don't want to be using up CPU looping thousands of times a second when I only need to loop 30 times a second.
Same as above, you'd have to sleep. Do so by executing the loop 30 times and then sleeping for a second.
Is it necessary to control the rate at which the GUI is updated?
Yes, that's very important. ReportProgress() can be a fire hose, generating many thousands of UI updates per second. You can easily run into a problem where the UI thread just can't keep up with that rate. You'll notice the UI thread stops taking care of its regular duties, like painting the UI and responding to input, because it keeps having to deal with yet another invoke request to run the ProgressChanged event handler. The side effect is that the UI looks frozen - you've got back the exact problem you were trying to solve with a worker. It isn't actually frozen, it just looks that way; it is still running the event handlers. But your user won't see the difference.
The one thing to keep in mind is that ReportProgress() only needs to keep human eyes happy, and they cannot see updates that happen more frequently than about 20 times per second. Beyond that, it just turns into an unreadable blur. So don't waste time on UI updates that are not useful anyway; you'll automatically also avoid the fire-hose problem. Throttling the update rate is something you have to program yourself, it isn't built into BGW.
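A hedged sketch of such throttling in the DoWork handler, reporting only when the integer percentage actually changes (it assumes, like the question's snippet, that the iteration count is passed in through e->Argument and that WorkerReportsProgress is set to true):

System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e)
{
    BackgroundWorker^ worker = dynamic_cast<BackgroundWorker^>(sender);
    int total = safe_cast<Int32>(e->Argument);   // assumed: the UI passed the total iteration count
    int lastPercent = -1;

    for (int i = 0; i < total; i++)
    {
        // ... one chunk of the CPU hungry work goes here ...

        int percent = (i + 1) * 100 / total;
        if (percent != lastPercent)              // throttle: at most 100 reports over the whole run
        {
            worker->ReportProgress(percent);     // raises ProgressChanged on the UI thread
            lastPercent = percent;
        }
    }
    e->Result = total;                           // handed to RunWorkerCompleted as e->Result
}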
I will try to answer you question by question:
1. Yes.
2. DoWork is a void method (and needs to be so). DoWork also executes in a different thread from the calling one, so you need a way to return something to the calling thread. The e->Result parameter will be passed to the RunWorkerCompleted event inside the RunWorkerCompletedEventArgs.
3. The sender argument is the BackgroundWorker itself, which you can use to raise events for the UI thread; the DoWorkEventArgs possibly contains parameters passed from the calling thread (the one that called RunWorkerAsync(Object)).
4. Whatever you need to do, paying attention to the user-interface elements, which are not accessible from the DoWork thread. Usually one calculates the percentage of work done, updates the UI (a progress bar or something alike) and calls ReportProgress to communicate with the UI thread. (You need to have the WorkerReportsProgress property set to true.)
5. Nothing runs indefinitely - you can always unplug the cord. Seriously, it is just another thread; the OS takes care of it and destroys everything when your app ends.
6. Not sure what you mean by this, but it is probably related to the next question.
7. You can use the Thread.Sleep or Thread.Join methods to release the CPU after each loop iteration. The exact sleep time should be fine-tuned depending on what you are doing, the workload of the current system and the raw speed of your processor.
Please refer to the MSDN docs on the BackgroundWorker and Thread classes.