tl;dr: I have a QThread which sends a signal to the main thread whenever new data is available for processing. The main thread then acquires, processes and displays the data. The data arrives more often than the main thread is able to process it, resulting in a frozen GUI and eventually a stack overflow (yay!).
Details
My application acquires frames from a camera for processing and display. The camera signals that a new frame is available through a Windows event. I have a thread which periodically checks for these events and notifies the main thread when a new frame is ready to grab:
void Worker::run()
{
    running_ = true;
    while (running_)
    {
        if (WaitForSingleObject(nextColorFrameEvent, 0) == WAIT_OBJECT_0)
            emit signalColorFrame();
        usleep(15);
    }
}
signalColorFrame is connected to a slot in the Camera class which gets the frame from the camera, does some processing and sends it to MainWindow, which draws it to the screen.
void Camera::onNewColorFrame()
{
    getFrameFromCamera();
    processFrame();
    drawFrame();
}
Now if that method completes before the next frame is available, everything works fine. As the processing gets more complex, though, the Camera class receives new signals before it's done processing the previous frame.
My solution is to block signals from the worker thread for the duration of the processing and to force the event loop to run in between with QCoreApplication::processEvents():
void Camera::onNewColorFrame()
{
    worker_->blockSignals(true);
    getFrameFromCamera();
    processFrame();
    drawFrame();
    QCoreApplication::processEvents(); // this is essential for the GUI to remain responsive
    worker_->blockSignals(false);
}
Does that look like a good way of doing it? Can someone suggest a better solution?
I think before you solve the technical side you should think about the design side of your application. There are several ways your problem can be solved, but first you should decide what to do with frames you don't have time to process in the main thread. Are you going to skip them or save them for later processing? If you save them, keep in mind that the processing queue must still have some size limit, so either way you have to decide what to do with the 'out of bound' data.
In such cases I would personally prefer an intermediate container that holds the received data: the camera thread just notifies the collector that data has arrived, and the collector decides whether to store or skip it. The main loop, whenever it has time, accesses the collector through something like fetchNext() or fetchAll(), depending on what you need, and does the processing. A rough sketch is below.
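Something along these lines (a minimal sketch, not from the original question; Frame is a placeholder type and the class/method names are mine):

#include <QMutex>
#include <QMutexLocker>
#include <QQueue>

struct Frame { /* placeholder for your image data */ };

// Rough sketch of the "collector": the camera thread pushes frames, the main
// thread fetches them when it has time. Only the newest maxPending_ frames are
// kept; older ones are silently dropped.
class FrameCollector
{
public:
    void push(const Frame &frame)      // called from the camera thread
    {
        QMutexLocker lock(&mutex_);
        if (pending_.size() >= maxPending_)
            pending_.dequeue();        // the "out of bound" decision: drop the oldest frame
        pending_.enqueue(frame);
    }

    bool fetchNext(Frame &frame)       // called from the main thread when it has time
    {
        QMutexLocker lock(&mutex_);
        if (pending_.isEmpty())
            return false;
        frame = pending_.dequeue();
        return true;
    }

private:
    QMutex mutex_;
    QQueue<Frame> pending_;
    const int maxPending_ = 2;         // the size limit discussed above
};

The worker thread would then push frames (or just "frame ready" notifications) into the collector, and the main thread would drain it with fetchNext() from a queued slot or a timer, never faster than it can actually process.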
Related
I have inherited a complex program in my current job and am seeking to reduce image flickering from a stream of data coming over a QTcpSocket.
The program receives the continuous stream of data, processes it, then paints it on the screen with a paintEvent.
The processing function runs via a signal/slot connection where the signal is readyRead() from a QTcpSocket and the slot is the data processing function. The stream is continuous, so this signal/slot pair fires continually and updates the painted image on the screen based on the incoming data.
The image flickers constantly, and I assume that the processing in the main event loop could be interfering with the data stream, so my idea was to put the data processing function in its own thread. The data processing function is so thoroughly integrated into the other features of the program that subclassing the data stream at this point so that I could apply a QThread is not a solution; it would mean a complete restructure of the entire program and take tons of time.
So my idea was to use QtConcurrent like so:
void MainWindow::getDataThread() // implemented as a slot
{
    wpFuture = QtConcurrent::run(this, &MainWindow::getData);
}
where getData() is the data processing function connected to the readyRead() signal:
connect(tcpSocket2, SIGNAL(readyRead()), this, SLOT(getData()));
So I replaced SLOT(getData()) with SLOT(getDataThread()) to allow the data processing function to be run on a new thread obtained from the global thread pool. Since the stream is continuous, I believe it is constantly assigning a new thread every time the getData processing function is run. It does seem to reduce the flickering, but after about 30 to 60 seconds the program randomly crashes with no specific error reported.
So my question is: Is there a better method for threading my data processing function, without subclassing the data stream? Is my thinking/understanding wrong in my implementation of QtConcurrent in this specific situation?
Thank you.
From your comment I assume your understanding of the thread pool is wrong.
A thread pool contains a number of threads. Each time you call QtConcurrent::run, a free thread from the global thread pool is taken and handed a task to do (MainWindow::getData). If you call QtConcurrent::run several times, then each time MainWindow::getData will be executed in a (presumably) different thread. If no thread in the pool is currently available, your tasks are queued and handed to threads as they become available. This way you can have several simultaneous tasks running, limited by the number of threads in the thread pool.
Now the problem is that MainWindow::getData is probably not thread safe by design. Calling QtConcurrent::run(this, &MainWindow::getData); several times may result in a data race.
If you want a separate single thread to process data then just use QThread (no need to "subclass" anything):
// A thread and its context are created only once
QThread thread;
QObject context;
context.moveToThread(&thread);
// ...
QObject::connect(tcpSocket2, &QTcpSocket::readyRead, &context, [this] () {
    this->getData();
}, Qt::QueuedConnection);
thread.start();
Now, as long as the context object is alive and the thread is running, the lambda will be executed each time QTcpSocket::readyRead is emitted.
Still, pay attention that your worker thread and your main thread do not collide inside getData.
I have an ordinary GUI thread (Main Window) and want to attach a Worker to it. The Worker will be instantiated, moved to its own thread and then fired off to run independently, running a non-blocking messaging routine.
This is where the worker is created:
void MainWindow::on_connectButton_clicked()
{
    Worker* workwork;
    workwork = new Worker();
    connect(workwork, SIGNAL(invokeTestResultsUpdate(int,quint8)),
            this, SLOT(updateTestResults(int,quint8)), Qt::QueuedConnection);
    connect(this, SIGNAL(emitInit()), workwork, SLOT(init()));
    workwork->startBC();
}
This is where the Worker starts:
void Worker::startBC()
{
    t1553 = new QThread();
    this->moveToThread(t1553);
    connect(t1553, SIGNAL(started()), this, SLOT(run1553Process()));
    t1553->start();
}
I have two problems here, regarding the event queue of the new thread:
The first and minor problem is that, while I can receive signals from the Worker thread (namely invokeTestResultsUpdate), I cannot invoke the init method by emitting the emitInit signal from MainWindow. It just doesn't fire unless I call it directly or connect it via Qt::DirectConnection. Why is this happening? Is it because I have to start the Worker thread's own message loop explicitly, or is it something else I'm not aware of? (I really fail to wrap my head around how threads, event loops and the signal/slot mechanism relate to each other, even though I try. I welcome any fresh perspective here too.)
The second and more obscure problem is that the run1553Process method does some heavy work. By heavy work, I mean a very high data rate. There is a loop running in which I try to receive the data flowing from a device (in real time) as soon as it lands in the buffer, using mostly external API functions, and then emit the aforementioned invokeTestResultsUpdate signal towards the GUI each time a message is received, updating the message counter box. It's nothing more than that.
The thing I'm experiencing is weird: normally the messaging routine runs mostly unhindered, but when I resize the main window, move it, or hide/show it, the Worker thread skips many messages. The resizing itself is also really slow (it doesn't respond very fast). It's driving me crazy.
(Note: I have tried subclassing QThread before, it did not mitigate the problem.)
I've been reading all the "thread affinity" topics and tried to apply them, but it still behaves as if it is somehow interrupted by the GUI thread's events at some point. I can understand MainWindow's troubles, since there are many messages in its queue to be executed (both the invoked slots and the GUI events), but I cannot see why a background thread is affected by the GUI events. I really need an extremely robust and unhindered messaging routine running separately in the background, firing and forgetting the signals and not caring about anything else.
I'm really desperate for any help right now, so any bit of information is useful for me. Please do not hesitate to throw ideas.
TL;DR: call QCoreApplication::processEvents() periodically inside run1553Process.
Full explanation:
Signals from the main thread are put in a queue and executed once the event loop in the second thread takes control. In your implementation you call run1553Process as soon as the thread starts. Control will not go back to the event loop until the end of that function, or until QCoreApplication::processEvents is manually invoked, so the queued signals just sit there waiting for the event loop to pick them up.
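In code, the suggestion boils down to something like this (a sketch only; the loop condition and the receive call are placeholders, not the asker's actual code):

void Worker::run1553Process()
{
    while (keepRunning_)                        // placeholder loop condition
    {
        receiveNextMessage();                   // placeholder for the real device I/O
        emit invokeTestResultsUpdate(msgCount_, status_);
        QCoreApplication::processEvents();      // lets queued slots such as init() run in this thread
    }
}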
P.S.
You are leaking both the worker and the thread in the code above.
P.P.S.
Data streams from devices normally provide an asynchronous API, so you don't have to poll them indefinitely.
I finally found the problem.
The crucial mistake was connecting the QThread's built-in started() signal to the run1553Process() slot. I had thought of this as replacing run() with that method and expected everything to be fine, but this blocked the actual run() method and therefore prevented the event loop from starting.
As stated in qthread.cpp:
void QThread::run()
{
    (void) exec();
}
To fix this, I didn't touch the original started() signal; instead I connected a separate signal of my own to run1553Process(). I first started the thread ordinarily, let the event loop start, and then fired my other signal. That did it: now my Worker receives all the messages.
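In code, the fix looks roughly like this (startProcessing is a hypothetical signal declared in Worker, not part of the original snippets):

void Worker::startBC()
{
    t1553 = new QThread();
    this->moveToThread(t1553);
    // do NOT connect QThread::started() to the heavy slot; use a custom signal instead
    connect(this, SIGNAL(startProcessing()), this, SLOT(run1553Process()), Qt::QueuedConnection);
    t1553->start();          // the default run() calls exec(), so the worker's event loop starts
    emit startProcessing();  // queued; picked up and executed by the worker's event loop in its thread
}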
I think now I understand the relation between threads and events better.
By the way, this solution did not take care of the message skipping problem entirely, but I feel that's caused by another factor (like my message reading implementation).
Thanks everyone for the ideas. I hope the solution helps some other poor guy like me.
I have two threads:
GUI, which does the typical GUI stuff and manages a bunch of flags that affect the Processing thread
Processing, which handles realtime data on a 30Hz period forever
There are lots of examples of how to have one thread wait for another to finish, but none for how to make a temporary roadblock without killing the thread.
There's a function in my GUI thread that contains this:
Scene* scene = getSceneToFadeFrom();
scene->setSelected(false);
///TODO: wait until (!scene->processing)
fadeFrom = scene->dmx;
and one in my Processing thread that contains this while looping through a QList:
if(scene->getSelected())
{
    scene->processing = true;
    scene->run(); //updates scene->dmx
    scene->processing = false;
}
If this were an embedded project on bare metal, I could use the global interrupt enable flag in place of scene->processing (invert the logic) and be done, which dedicates the entire CPU to that task at the expense of all others.
But because this is a desktop project with an operating system to play nice with, how can I achieve the same effect within this project? Basically, pause the GUI thread at that point until scene->processing == false (which it might be already) and guarantee that the Processing thread is actually running while the GUI thread waits for it.
And here's what I came up with. It was actually an XY problem. I'm surprised that I didn't think of this right away because I had already done something similar for deleting a Scene:
GUI thread:
// (sceneToReplace != 0) means there's something for Processing to do
sceneToReplace = getSceneToFadeFrom();
if(sceneToReplace)
{
    sceneToReplace->setSelected(false);
}
Processing thread, same class:
if(sceneToReplace)
{
    fadeFrom = sceneToReplace->dmx;
    sceneToReplace = 0;
}
and I don't even need the processing flag anymore!
fadeFrom gets set a little later than in the original version, but it's not actually needed until then anyway.
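Not part of the original answer, but since sceneToReplace is written by the GUI thread and read by the Processing thread, here is a sketch of the same handoff with an atomic pointer to make the hand-over well defined:

#include <atomic>

std::atomic<Scene*> sceneToReplace{nullptr};

// GUI thread:
Scene *candidate = getSceneToFadeFrom();
if (candidate)
{
    candidate->setSelected(false);
    sceneToReplace.store(candidate);                 // publish the request to the Processing thread
}

// Processing thread:
if (Scene *s = sceneToReplace.exchange(nullptr))     // take the request, if there is one
{
    fadeFrom = s->dmx;
}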
I'm importing a portion of existing code into my Qt app and noticed a sleep function in there. I see that this type of function has no place in event programming. What should I do instead?
UPDATE: After some thought and feedback, I would say the answer is: only call sleep outside the GUI (main) thread; if you need to wait in the GUI thread, use processEvents() or a local event loop, which prevents the GUI from freezing (see the sketch below).
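For example, a minimal sketch of the "wait in the GUI thread with processEvents()" idea (the 1000 ms timeout and the flag values are just examples):

#include <QCoreApplication>
#include <QElapsedTimer>

QElapsedTimer timer;
timer.start();
while (timer.elapsed() < 1000)                                    // instead of sleeping for one second
    QCoreApplication::processEvents(QEventLoop::AllEvents, 50);   // keeps the GUI responsive while waiting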
It isn't pretty but I found this in the Qt mailing list archives:
The sleep method of QThread is protected, but you can expose it like so:
class SleeperThread : public QThread
{
public:
    static void msleep(unsigned long msecs)
    {
        QThread::msleep(msecs);
    }
};
Then just call:
SleeperThread::msleep(1000);
from any thread.
However, a more elegant solution would be to refactor your code to use a QTimer; this might require you to save some state so you know what to do when the timer fires.
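For example, instead of a one-second sleep before the next step, something like this (resumeWork is a hypothetical slot that picks up the saved state):

// instead of SleeperThread::msleep(1000);
QTimer::singleShot(1000, this, SLOT(resumeWork()));   // returns immediately; the event loop keeps running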
I don't recommend sleep in an event-based system, but if you want to...
You can use a wait condition; that way you can always interrupt the sleep if necessary.
//...
QMutex dummy;
dummy.lock();
QWaitCondition waitCondition;
waitCondition.wait(&dummy, waitTime);
//...
The reason sleep is a bad idea in event-based programming is that event-based programming is effectively a form of non-preemptive multitasking. By calling sleep, you prevent any other event from becoming active and therefore block the processing of the thread.
In a request/response scenario for UDP packets, send the request and then simply wait for the response via the event loop. Qt has good socket APIs which ensure that the socket does not block while waiting; the event will come when it comes. In your case the readyRead() signal is your friend.
If you want to schedule an event for some point in the future, use QTimer. This will ensure that other events are not blocked.
It is not necessary to break things down into events at all. All I needed to do was call QApplication::processEvents() where the sleep() was, and this prevents the GUI from freezing.
I don't know how Qt handles events internally, but on most systems, at the lowest level, an application's life goes like this: the main thread code is basically a loop (the message loop) in which, at each iteration, the application calls a function that hands it a new message; usually that function is blocking, i.e. if there are no messages the function does not return and the application just waits.
Each time the function returns, the application has a new message to process, which usually has a recipient (the window it is sent to), a meaning (the message code, e.g. the mouse pointer has been moved) and some additional data (e.g. the mouse has been moved to coordinates 24, 12).
Now the application has to process the message; the OS or the GUI toolkit usually does this under the hood, so with some black magic the message is dispatched to its recipient and the correct event handler is executed. When the event handler returns, the internal function that called it returns, and so does the one that called that, and so on, until control comes back to the main loop, which calls the magic message-retrieving function again to get another message. This cycle goes on until the application terminates.
I wrote all this to make you understand why sleep is bad in an event-driven GUI application: notice that while a message is being processed, no other messages can be processed, since the main thread is busy running your event handler, which, after all, is just a function called by the message loop. So if you make your event handler sleep, the message loop sleeps too, which means that in the meantime the application won't receive and process any other messages, including the ones that make your window repaint, so your application will look hung from the user's perspective.
Long story short: don't use sleep unless you have to sleep for very short times (a few hundred milliseconds at most), otherwise the GUI will become unresponsive. You have several options to replace the sleeps: you can use a timer (QTimer), but that may require a lot of bookkeeping between one timer event and the next. A popular alternative is to start a separate worker thread: it would just handle the UDP communication and, being separate from the main thread, it could sleep when necessary without causing any problem. Obviously you must take care to protect the data shared between the threads with mutexes and be careful to avoid race conditions and all the other kinds of problems that occur with multithreading. A rough sketch of the worker-thread option is below.
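A rough sketch of the worker-thread option, assuming the UDP side uses QUdpSocket (the class, slot and port names are illustrative, not taken from the question):

#include <QObject>
#include <QThread>
#include <QUdpSocket>

class UdpWorker : public QObject
{
    Q_OBJECT
public slots:
    void start()
    {
        socket_ = new QUdpSocket(this);
        socket_->bind(12345);                                            // illustrative port
        connect(socket_, SIGNAL(readyRead()), this, SLOT(onReadyRead()));
    }
    void onReadyRead()
    {
        while (socket_->hasPendingDatagrams())
        {
            QByteArray datagram;
            datagram.resize(int(socket_->pendingDatagramSize()));
            socket_->readDatagram(datagram.data(), datagram.size());
            // process the datagram here; sleeping in this thread no longer freezes the GUI
        }
    }
private:
    QUdpSocket *socket_ = nullptr;
};

// In the main thread:
// QThread *thread = new QThread;
// UdpWorker *worker = new UdpWorker;
// worker->moveToThread(thread);
// QObject::connect(thread, SIGNAL(started()), worker, SLOT(start()));
// thread->start();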
My program does file loading and memcpy'ing in the background while the screen is meant to be updated interactively. The idea is to have async loading of files the program will soon need so that they are ready to be used when the main thread needs them. However, the loads/copies don't seem to happen in parallel with the main thread. The main thread pauses during the loading and will often wait for all loads (can be up to 8 at once) to finish before the next iteration of the main thread's main loop.
I'm using Win32, so I'm using _beginthread for creating the file-loading/copying thread.
The worker thread function:
void fileLoadThreadFunc(void *arglist)
{
    while(true)
    {
        // s_mutex keeps the list from being updated by the main thread
        s_mutex.lock(); // uses WaitForSingleObject INFINITE
        // s_filesToLoad is a list added to from the main thread
        while (s_filesToLoad.size() == 0)
        {
            s_mutex.unlock();
            Sleep(10);
            s_mutex.lock();
        }
        loadObj *obj = s_filesToLoad[0];
        s_filesToLoad.erase(s_filesToLoad.begin());
        s_mutex.unlock();
        obj->loadFileAndMemcpy();
    }
}
main thread startup:
_beginthread(fileLoadThreadFunc, 0, NULL);
code in a class that the main thread uses to "kick" the thread for loading a file:
// I used the commented code to see if main thread was ever blocking
// but the PRINT never printed, so it looks like it never was waiting on the worker
//while(!s_mutex.lock(false))
//{
// PRINT(L"blocked! ");
//}
s_mutex.lock();
s_filesToLoad.push_back(this);
s_mutex.unlock();
Some more notes based on comments:
The loadFileAndMemcpy() function in the worker thread loads via the Win32 ReadFile function - does this cause the main thread to block?
I reduced the worker thread priority to THREAD_PRIORITY_BELOW_NORMAL and then to THREAD_PRIORITY_LOWEST, and that helps a bit, but when I move the mouse around to see how smoothly it moves while the worker thread is working, the mouse still "jumps" a bit (without lowering the priority, it was MUCH worse).
I am running on a Core 2 Duo, so I wouldn't expect to see any mouse lag at all.
Mutex code doesn't seem to be an issue since the "blocked!" never printed in my test code above.
I bumped the sleep up to 100ms, but even 1000ms doesn't seem to help as far as the mouse lag goes.
The data being loaded is tiny: 20 KB .png images (but they are 2048x2048). They are small because this is just test data (a single color per image), so the real data will be much larger.
You will have to show the code for the main thread to indicate how it is notified that a file is loaded. Most likely the blocking issue is there. This is really a good case for using asynchronous I/O instead of threads if you can work it into your main loop. If nothing else, you really need to use condition variables or events: one to tell the file reader thread that there is work to do, and another to signal the main thread that a file has been loaded. A rough sketch of the two-event idea follows.
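For example, with Win32 event objects (the handle names are mine; s_filesToLoad and s_mutex are from the question's code):

HANDLE g_workAvailable = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset, main thread -> loader
HANDLE g_fileLoaded    = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset, loader -> main thread

// loader thread: wait for work instead of polling with Sleep(10)
WaitForSingleObject(g_workAvailable, INFINITE);
// ... pop a file from s_filesToLoad under s_mutex and load it ...
SetEvent(g_fileLoaded);

// main thread: after pushing a file onto s_filesToLoad
SetEvent(g_workAvailable);
// later, a non-blocking check whether a load has finished:
if (WaitForSingleObject(g_fileLoaded, 0) == WAIT_OBJECT_0)
{
    // pick up the loaded file
}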
Edit: Alright, so this is a game, and you're polling to see if the file is done loading as part of the rendering loop. Here's what I would try: use ReadFileEx to initiate an overlapped read. This won't block. Then in your main loop you can check if the read is done by using one of the Wait functions with a zero timeout. This won't block either.
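A minimal sketch of that overlapped idea, using plain ReadFile with an OVERLAPPED structure and an event handle (the file name, buffer size and globals are illustrative; real code should also check for ERROR_IO_PENDING and close the handles):

#include <windows.h>

HANDLE     g_file = INVALID_HANDLE_VALUE;
OVERLAPPED g_overlapped = {};
char       g_buffer[20 * 1024];

void startRead(const wchar_t *path)
{
    g_file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                         OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    g_overlapped.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);           // signaled when the read completes
    ReadFile(g_file, g_buffer, sizeof(g_buffer), NULL, &g_overlapped);    // returns immediately
}

bool readIsDone()   // call once per frame in the rendering loop; never blocks
{
    return WaitForSingleObject(g_overlapped.hEvent, 0) == WAIT_OBJECT_0;
}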
Not sure about your specific problem, but you really should mutex-protect the size() call as well.
void fileLoadThreadFunc(void *arglist) {
    while (true) {
        s_mutex.lock();
        while (s_filesToLoad.size() == 0) {
            s_mutex.unlock();
            Sleep(10);
            s_mutex.lock();
        }
        loadObj *obj = s_filesToLoad[0];
        s_filesToLoad.erase(s_filesToLoad.begin());
        s_mutex.unlock();
        obj->loadFileAndMemcpy();
    }
}
Now, examining your specific problem, I can see nothing wrong with the code you've provided. The main thread and file loader thread should quite happily run side-by-side if that mutex is the only contention between them.
I say that because there may be other points of contention, such as in the standard library, that your sample code doesn't show.
I'd write that loop this way, with less lock/unlock juggling that could get messed up:
void fileLoadThreadFunc(void *arglist)
{
    while(true)
    {
        loadObj *obj = NULL;
        // protect all access to the vector
        s_mutex.lock();
        if(s_filesToLoad.size() != 0)
        {
            obj = s_filesToLoad[0];
            s_filesToLoad.erase(s_filesToLoad.begin());
        }
        s_mutex.unlock();
        if( obj != NULL )
            obj->loadFileAndMemcpy();
        else
            Sleep(10);
    }
}
MSDN on Synchronization
If you can consider open-source options, Java has a blocking queue [link], as does Python [link]. This would reduce your code to the following (queue here is bound inside load_fcn, i.e. used via a closure):
import threading

# "queue" is a blocking Queue instance shared with the main (producer) thread
def load_fcn():
    while True:
        queue.get().loadFileAndMemcpy()

threading.Thread(target=load_fcn).start()
Even though you're probably not supposed to use them, Python 3.0 threads have a _stop() function and Python 2.0 threads have a _Thread__stop function. You could also write a None value to the queue and check for it in load_fcn().
Also, search stackoverflow for "[python] gui" and "[subjective] [gui] [java]" if you wish.
Based on the information available at this point, my guess would be that something in the file-loading handler is interacting with your main loop. I do not know the libraries involved, but based on your description the file handler does something along the following lines:
Load raw binary data for a 20k file
Interpret the 20k as a PNG file
Load into a structure representing a 2048x2048 pixel image
The following possibilities come to mind regarding the libraries you use to achieve these steps:
Could it be that the memory allocation for the uncompressed image data is holding a lock that the main thread needs for any drawing / interactive operations it performs?
Could it be that a call that is responsible for translating the PNG data into pixels actually holds a low-level game library lock that adversely interacts with your main thread?
The best way to get more information would be to model the activity of your file loader handler without using its current code: write a routine yourself that allocates the right size of memory block and does some processing that turns 20 KB of source data into a structure the size of the target block, then add further logic one bit at a time until performance crashes, so you can isolate the culprit.
I think that your problem lies with access to the FilesToLoad object.
As I see it, this object is locked by your worker thread while it is actually processing it (which is every 10 ms according to your code) and by your main thread while it is trying to update the list. This probably means that your main thread has to wait a while to access it, and/or the OS has to sort out any race situations that may occur.
I would suggest that you either start up a worker thread just to load a file when you want it, setting a semaphore (or even a bool value) to show when it has completed, or use _beginthreadex to create a suspended thread for each file and then synchronise them so that as each one completes the next in line is resumed.
If you want a thread to run permanently in the background erasing and loading files, then you could always have it process its own message queue and use Windows messaging to pass data back and forth. This saves a lot of heartache regarding thread locking and race-condition avoidance. A rough sketch of that idea is below.
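A rough sketch of that message-queue idea (the WM_APP offset and names are illustrative; note that PostThreadMessage only succeeds once the worker has created its message queue, e.g. after its first GetMessage or PeekMessage call):

// worker thread:
MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)               // blocks until a message arrives
{
    if (msg.message == WM_APP + 1)                      // "load this file" request
    {
        loadObj *obj = reinterpret_cast<loadObj*>(msg.lParam);
        obj->loadFileAndMemcpy();
    }
}

// main thread, to queue a file for loading:
// (workerThreadId is the loader thread's ID, e.g. from GetThreadId() on its handle)
PostThreadMessage(workerThreadId, WM_APP + 1, 0, reinterpret_cast<LPARAM>(this));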