I have built my first application using glibmm. I'm using a lot of threads as it does heavy processing. I have tried to follow the guidelines concerning multithreading, i.e. not doing any GUI updates from other threads than the one where g_main_loop is running.
I do a lot of graphics rendering in worker threads, but I only ever update a Pixbuf, which is later drawn by the widget's on_draw() from the main loop.
All was fine as long as the data I rendered was read from files. The problems started when I began streaming data from a server and rendering it at regular intervals.
Every now and then, especially when executing multiple instances of my application simultaneously, I see that the main thread takes 100% CPU time. Running strace on the process shows that g_main_loop has ended up in an endless loop calling poll:
poll([{fd=3, events=POLLIN}, {fd=4, events=POLLIN}, {fd=10, events=POLLIN}, {fd=8, events=POLLIN}], 4, 100) = 1 ([{fd=10, revents=POLLIN}])
In /proc I get this for file descriptor 10: 10 -> socket:[1132750]
The poll always returns immediately because file descriptor 10 has something to offer. This goes on forever, so I assume the file descriptor is never read. The odd thing is that running 5 instances of the application almost always leads to all 5 ending up in the infinite poll loop after just a couple of minutes, while running only one instance usually works for more than 30 minutes.
Why is this happening and is there any way to debug this?
My mistake was that I called queue_draw() from one of my worker threads. Given that the function is called "queue", I assumed it would queue a redraw which would later be executed by the g_main_loop. As it turned out, this was what broke the g_main_loop. I wish the libgtkmm reference manual had a little more detail about these multithreading restrictions.
My solution to the problem was adding a Glib::Dispatcher member, queueRedraw, to my widget and connecting it to the queue_draw() function:
queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
Calling queueRedraw() signals the main thread to call the queue_draw() function.
I don't know if this is the best approach, but it solves the problem.
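For completeness, here is roughly what the widget wiring looks like; this is only a sketch, and the class name and the request_redraw() helper are made up for the example:

#include <gtkmm.h>

class MyWidgetClass : public Gtk::DrawingArea {
public:
    MyWidgetClass() {
        // The connected slot runs in the thread that owns the default main
        // context, i.e. the GUI thread running g_main_loop.
        queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
    }

    // Called from a worker thread once it has finished updating the Pixbuf.
    void request_redraw() {
        queueRedraw.emit();   // safe to call from any thread
    }

private:
    Glib::Dispatcher queueRedraw;   // must be constructed in the GUI thread
};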
Related
I've got a large C++ function on Linux that calls a whole lot of other functions, making up an algorithm. At various points, given certain bad inputs, the algorithm can get "stuck" and go on forever. Adding a timeout seems appropriate, as all potential "stuck" points cannot be predicted. But despite scouring the Internet for timeout examples, I've only found how to apply timeouts when the thing you're timing is either a separate thread or is reading input. My code is a single thread and does not manipulate file descriptors, so I'm not having any luck. Do I basically have no choice but to thread it?
I am not sure about your situation; server applications or embedded applications often run for years in the background without stopping. One option is to let your program run in the background and log to a file (or the screen) periodically, and, if you really want to stop the program after a certain time, you can use the timeout command or a script to kill your program after that time, say, timeout 15s your-prog.
I have a GUI app that I am creating with wxWidgets. As part of the functionality, I have to run "tasks" simultaneously with manipulation of the GUI window. For example, I may run the code:
long currentTime = wxGetLocalTime();
long stopTime = wxGetLocalTime() + 3;
while (wxGetLocalTime() != stopTime) {}   // busy-waits, so the GUI event loop never runs
wxMessageBox("DONE IN APP");
For the duration of those 3 seconds, my application would essentially be frozen until the wxMessageBox is shown. Is there a way to have this run in the background without the use of multiple threads? Using multiple threads creates problems for the application that I'm developing.
I was wondering if there are some types of event handling that could be used. Any sort of help is greatly appreciated.
There are 3 ways to run time-consuming tasks in GUI wx applications:
By far the most preferred is to use a different thread. The explanation of the application being "very GUI intensive" really doesn't make any sense to me, I think you should seriously reconsider your program design if its GUI intensity (whatever it is) prevents you from using background worker threads. If you do use this approach, it's pretty simple but pay special attention to the thread/program termination issues. In particular, you will need to either wait for the thread to finish (acceptable if it doesn't take a long time to run) or cancel it explicitly before exiting the program.
Use the EVT_IDLE event to perform your task whenever there are no other events to process. This is not too bad for tasks which can be broken into small enough pieces, as you need to be able to resume processing in your handler (see the sketch after this list). Don't forget to call event.RequestMore() to continue getting idle events even when nothing else is happening.
The worst and most dangerous way is to call wxYield(), as suggested by another answer. This can seem simple initially, but you will regret doing it later because it can create extremely difficult-to-debug reentrancy problems in your code. If you do use it, you need to guard against reentrancy everywhere yourself, and you should really understand what exactly this function does.
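For the EVT_IDLE approach, a minimal sketch might look like the following; MyFrame, DoOneChunk() and m_chunksLeft are invented names, not part of the original question:

#include <wx/wx.h>

class MyFrame : public wxFrame {
public:
    MyFrame() : wxFrame(nullptr, wxID_ANY, "Idle demo") {
        Bind(wxEVT_IDLE, &MyFrame::OnIdle, this);
    }

private:
    void OnIdle(wxIdleEvent& event) {
        if (m_chunksLeft > 0) {
            DoOneChunk();           // do one small piece of the task
            event.RequestMore();    // keep idle events coming while work remains
        }
    }

    void DoOneChunk() { --m_chunksLeft; /* real work would go here */ }

    int m_chunksLeft = 100;
};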
Try this:
long currentTime = wxGetLocalTime();
long stopTime = wxGetLocalTime() + 3;
while (wxGetLocalTime() != stopTime) {
    wxYield();   // lets the GUI process pending events while waiting (beware of reentrancy)
}
wxMessageBox("DONE IN APP");
I know this is late to the game, but...
I've successfully used the EVT_IDLE method for YEARS (back in the 90's with Motif originally). The main idea is to break your task up into small pieces, where each piece calls the next piece (think linked-list). The mechanism to do this is using the CallAfter() method (using C++, of course). You just "CallAfter()" as the last step in the piece and that will allow the GUI main loop to run another iteration and possibly update GUI elements and such before calling your next piece. Just remember to keep the pieces small.
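As a minimal sketch of that piece-by-piece chaining (the MyFrame class and the StepOne/StepTwo/StepDone names are made up for the example):

#include <wx/wx.h>

class MyFrame : public wxFrame {
public:
    MyFrame() : wxFrame(nullptr, wxID_ANY, "CallAfter demo") {}

    // Kick off the chain; each piece schedules the next via CallAfter(), so the
    // main loop gets to run between pieces.
    void StartTask() { CallAfter(&MyFrame::StepOne); }

private:
    void StepOne()  { /* small piece of work */ CallAfter(&MyFrame::StepTwo);  }
    void StepTwo()  { /* next small piece   */ CallAfter(&MyFrame::StepDone); }
    void StepDone() { wxMessageBox("DONE IN APP"); }
};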
Using a background thread is really nice, but can be trickier than you imagine... eventually. As long as you know the data you're working on in the background won't be touched/viewed by anything else, you're OK. If you know this is the case, then that is the way to go. This method allows the GUI to remain fully responsive during background calculations (resizing/moving the window, etc.)
In either case, just don't forget to desensitize appropriate GUI elements as the first step so you won't accidentally launch the same background task multiple times (for example, accidentally clicking a push button multiple times in succession that launches the background thread).
I have encountered the need to use multithreading in my Windows Forms GUI application using C++. From my research on the topic, it seems background worker threads are the way to go for my purposes. Following example code, I have:
System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e)
{
    // 'sender' is the BackgroundWorker that raised the event.
    BackgroundWorker^ worker = dynamic_cast<BackgroundWorker^>(sender);
    // e->Argument holds the value passed to RunWorkerAsync(); whatever is assigned
    // to e->Result is handed to the RunWorkerCompleted event when this handler returns.
    e->Result = SomeCPUHungryFunction( safe_cast<Int32>(e->Argument), worker, e );
}
However there are a few things I need to get straight and figure out
Will a background worker thread make my multithreading life easier?
Why do I need e->Result?
What are the arguments passed into the backgroundWorker1_DoWork function for?
What is the purpose of the parameter safe_cast<Int32>(e->Argument)?
What things should I do in my CPUHungryFunction()?
What if my CPUHungryFunction() has a while loop that loops indefinitely?
Do I have control over the processor time my worker thread gets?
Can I more specifically control the number of times the loop runs within a set period? I don't want to be using up CPU looping 1000s of times a second when I only need to loop 30 times a second.
Is it necessary to control the rate at which the GUI is updated?
Will a background worker thread make my multithreading life easier?
Yes, very much so. It helps you deal with the fact that you cannot update the UI from a worker thread. Particularly the ProgressChanged event lets you show progress and the RunWorkerCompleted event lets you use the results of the worker thread to update the UI without you having to deal with the cross-threading problem.
Why do I need e->Result?
To pass back the result of the work you did to the UI thread. You get the value back in your RunWorkerCompleted event handler via the e->Result property, from which you then update the UI with the result.
What are the arguments passed into the function for?
To tell the worker thread what to do; it is optional. Otherwise it is identical to passing arguments to any method, just more awkward since you don't get to choose the arguments. You typically pass some kind of value from your UI, for example; use a little helper class if you need to pass more than one. Always favor this over trying to obtain UI values in the worker, that's very troublesome.
What things should I do in my CPUHungryFunction()?
Burn CPU cycles, of course. Or in general do something that takes a long time, like a database query, which doesn't burn CPU cycles but takes long enough that the UI thread would appear dead while waiting for the result. Roughly, whenever you need to do something that takes more than a second, you should execute it on a worker thread instead of the UI thread.
What if my CPUHungryFunction() has a while loop that loops indefinitely?
Then your worker never completes and never produces a result. This may be useful but it isn't common. You would not typically use a BGW for this, just a regular Thread that has its IsBackground property set to true.
Do I have control over the processor time my worker thread gets?
You have some by artificially slowing it down by calling Thread.Sleep(). This is not a common thing to do, the point of starting a worker thread is to do work. A thread that sleeps is using an expensive resource in a non-productive way.
Can more specifically control the number of times the loop loops within a set period? I don’t want to be using up cpu looping 1000s of times a second when I only need to loop 30 times a second.
Same as above, you'd have to sleep. Do so by executing the loop 30 times and then sleep for a second.
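In plain C++ terms (leaving the BackgroundWorker specifics aside), one way to cap a loop at roughly 30 iterations per second is to pace each iteration and sleep away the rest of its time slot; this is only a sketch, with ThrottledLoop and keepRunning invented for the example:

#include <chrono>
#include <thread>

void ThrottledLoop(const bool& keepRunning) {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::milliseconds(1000 / 30);   // ~33 ms per iteration
    while (keepRunning) {
        const auto start = clock::now();
        // ... do one iteration of work ...
        std::this_thread::sleep_until(start + period);   // yield the CPU until the next slot
    }
}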
Is it necessary to control the rate at which the GUI is updated?
Yes, that's very important. ReportProgress() can be a fire-hose, generating many thousands of UI updates per second. You can easily get into a problem with this when the UI thread just can't keep up with that rate. You'll notice, the UI thread stops taking care of its regular duties, like painting the UI and responding to input. Because it keeps having to deal with another invoke request to run the ProgressChanged event handler. The side-effect is that the UI looks frozen, you've got the exact problem back you were trying to solve with a worker. It isn't actually frozen, it just looks that way, it is still running the event handler. But your user won't see the difference.
The one thing to keep in mind is that ReportProgress() only needs to keep human eyes happy. Which cannot see updates that happen more frequently than 20 times per second. Beyond that, it just turns into an unreadable blur. So don't waste time on UI updates that just are not useful anyway. You'll automatically also avoid the fire-hose problem. Tuning the update rate is something you have to program, it isn't built into BGW.
I will try to answer your questions one by one:
Yes.
DoWork is a void method (and needs to be). DoWork also executes in a different thread from the calling one, so you need a way to return something to the calling thread. The e->Result value is passed to the RunWorkerCompleted event inside the RunWorkerCompletedEventArgs.
The sender argument is the BackgroundWorker itself, which you can use to raise events for the UI thread; the DoWorkEventArgs contains any parameter passed from the calling thread (the one that called RunWorkerAsync(Object)).
Whatever you need to do, paying attention to the user-interface elements, which are not accessible from the DoWork thread. Usually one calculates the percentage of work done, updates the UI (a progress bar or something alike) and calls ReportProgress to communicate with the UI thread. (You need to have the WorkerReportsProgress property set to true.)
Nothing runs indefinitely. You can always unplug the cord. Seriously, it is just another thread; the OS takes care of it and destroys everything when your app ends.
Not sure what you mean by this, but it is probably related to the next question.
You can use the Thread.Sleep or Thread.Join methods to release CPU time after each loop iteration. The exact time to sleep should be fine-tuned depending on what you are doing, the workload of the current system and the raw speed of your processor.
Please refer to the MSDN docs on the BackgroundWorker and Thread classes.
I have a data acquisition application running on Windows 7, using VC2010 in C++. One thread is a heartbeat which sends out a change every .2 seconds to keep-alive some hardware which has a timeout of about .9 seconds. Typically the heartbeat call takes 10-20ms and the thread spends the rest of the time sleeping.
Occasionally however there will be a delay of 1-2 seconds and the hardware will shut down momentarily. The heartbeat thread is running at THREAD_PRIORITY_TIME_CRITICAL which is 15 for a normal priority process. My other threads are running at normal priority, although I use a DLL to control some other hardware and have noticed with Process Explorer that it starts several threads running at level 15.
I can't track down the source of the slowdown, but other threads in my application are seeing the same kind of delays when this happens. I have made several optimizations to the heartbeat code even though it is quite simple, but the occasional failures are still happening. Now I wonder if I can increase the priority of this thread beyond 15 without specifying REALTIME_PRIORITY_CLASS for the entire process. If not, are there any downsides I should be aware of to using REALTIME_PRIORITY_CLASS? (Other than this heartbeat thread, the rest of the application doesn't have real-time timing needs.)
(Or does anyone have any ideas about how to track down these slowdowns...not sure if the source could be in my app or somewhere else on the system).
Update: So I hadn't actually tried passing 31 into my AfxBeginThread call, and it turns out it ignores that value and sets the thread to normal priority instead of the 15 that I get with THREAD_PRIORITY_TIME_CRITICAL.
Update: It turns out that running the Disk Defragmenter is a good way to cause lots of thread delays. Even running the process at REALTIME_PRIORITY_CLASS and the heartbeat thread at THREAD_PRIORITY_TIME_CRITICAL (level 31) doesn't seem to help. The next thing to try is calling AvSetMmThreadCharacteristics("Pro Audio").
Update: Scheduling the heartbeat thread as "Pro Audio" does work to increase the thread's priority beyond 15 (Base = 1, Dynamic = 24), but it doesn't seem to make any real difference when defrag is running. I've been able to correlate many of the slowdowns with the disk defragmenter, so I turned off the weekly scan. I still can't explain some delays, so we're going to increase to a 5-10 second watchdog timeout.
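For reference, this is roughly the shape the heartbeat thread has ended up with. It is only a sketch: the hardware keep-alive call is omitted, g_stop stands in for my real shutdown flag, and it assumes the build links against Avrt.lib.

#include <windows.h>
#include <avrt.h>
#pragma comment(lib, "avrt.lib")

volatile LONG g_stop = 0;   // set from elsewhere to end the heartbeat

DWORD WINAPI HeartbeatThread(LPVOID)
{
    // Register with MMCSS as "Pro Audio" (returns NULL on failure) and also ask
    // for time-critical priority within the process.
    DWORD taskIndex = 0;
    HANDLE mmcss = AvSetMmThreadCharacteristics(TEXT("Pro Audio"), &taskIndex);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    while (!g_stop)
    {
        // ... send the keep-alive to the hardware ...
        Sleep(200);   // heartbeat every 0.2 s
    }

    if (mmcss) AvRevertMmThreadCharacteristics(mmcss);
    return 0;
}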
Even if you could, increasing the priority will not help. The highest priority runnable thread gets the processor at all times.
Most likely there is some extended interrupt processing occurring while interrupts are disabled. Interrupts effectively work at a higher priority than any thread.
It could be video, network, disk, serial, USB, etc. It will take some insight to selectively disable devices or swap in alternate drivers to see whether the system hesitation is affected. Once you find the culprit, figuring out a way to prevent it might range from trivial to impossible, depending on what it is.
Without more knowledge about the system, it is hard to say. Have you tried running it on a different PC?
Officially you can't use REALTIME threads in a process which does not have the REALTIME_PRIORITY_CLASS.
Unofficially, you could play with the undocumented NtSetInformationThread
see:
http://undocumented.ntinternals.net/UserMode/Undocumented%20Functions/NT%20Objects/Thread/NtSetInformationThread.html
But since I have not tried it, I don't have any more info about this.
On the other hand, as was said before, you can never be sure that the OS will not take its time when your thread's quantum expires. Certain poorly written drivers are often the cause of such latency.
Otherwise, there is software that can tell you if you have misbehaving kernel components:
http://www.thesycon.de/deu/latency_check.shtml
I would try using CreateWaitableTimer() & SetWaitableTimer() and see if they are subject to the same preemption problems.
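Something along these lines; this is only a sketch, with the keep-alive call omitted and g_stop standing in for whatever shutdown flag the application uses:

#include <windows.h>

volatile LONG g_stop = 0;

DWORD WINAPI HeartbeatThread(LPVOID)
{
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);      // auto-reset timer
    if (!timer) return 1;

    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -2000000LL;   // first fire in 200 ms (100 ns units, negative = relative)
    SetWaitableTimer(timer, &dueTime, 200, NULL, NULL, FALSE);  // then every 200 ms

    while (!g_stop)
    {
        WaitForSingleObject(timer, INFINITE);   // wake on each timer tick
        // ... send the keep-alive to the hardware ...
    }

    CancelWaitableTimer(timer);
    CloseHandle(timer);
    return 0;
}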
I am facing a strange issue on Windows CE:
Running 3 EXEs:
1) The first EXE does some work every 8 minutes unless the exit event is signaled.
2) The second EXE does some work every 5 minutes unless the exit event is signaled.
3) The third EXE runs a while loop in which it does some work at random times. This while loop continues until the exit event is signaled.
This exit event is a global event and can be signaled by any process.
The Problem is
When I run the first EXE alone, it works fine.
When I run the second EXE alone, it works fine.
When I run the third EXE alone, it works fine.
When I run all the EXEs, only the third EXE runs and no instructions get executed in the first and second.
As soon as the third EXE is terminated, the first and second start processing again.
Can it be the case that the while loop in the third EXE is taking all the CPU cycles?
I haven't tried putting in a Sleep, but I think that might do the trick.
But shouldn't the OS give CPU time to all processes?
Any thoughts?
Put the while loop in the third EXE to Sleep each time through the loop and see what happens. Even if it doesn't fix this particular problem, it isn't ever good practice to poll with a while loop, and even using Sleep inside a loop is a poor substitute for a proper timer.
On MSDN, I also read that CE allows (fewer than) 32 processes simultaneously. (However, the context switches are lightning fast.) Some slots are already taken by system services.
(From Memory) Processes in Windows CE run until completion if there are no higher priority processes running, or they run for their time slice (100ms) if there are other processes of equal priority running. I'm not sure if Windows CE gives the process with the active/foreground window a small priority boost (just like desktop Windows), or not.
In your situation the first two processes are starved of processor time so they never run until the third process exits. Some ways to solve this are:
Make the third process wait/block on some multi-process primitive (mutex, semaphore, etc.) with a short timeout, using WaitForMultipleObjects/WaitForSingleObject, etc. (see the sketch after this answer).
Make the third process wait using a call to Sleep every time around the processing loop.
Boost the priority of the other processes so when they need to run they will interrupt the third process and actually run. I would probably make the least often called process have the highest priority of the three processes.
The other thing to check is that the third process does actually complete its tasks in time, and does not peg the CPU trying to do its thing normally.
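A minimal sketch of the first suggestion, with a hypothetical event name and a 50 ms timeout chosen just for illustration:

#include <windows.h>

int WorkLoop()
{
    // Open/create the global exit event; any process can signal it.
    HANDLE exitEvent = CreateEvent(NULL, TRUE, FALSE, TEXT("MyExitEvent"));

    for (;;)
    {
        // Block for up to 50 ms instead of spinning, so the other processes get CPU time.
        DWORD r = WaitForSingleObject(exitEvent, 50);
        if (r == WAIT_OBJECT_0)
            break;   // exit event was signaled
        // ... do the random-interval work here ...
    }

    CloseHandle(exitEvent);
    return 0;
}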
Yeah, I think that is not a good solution. I may try to use a timer and see the results.