Synchronizing Objects in Different Threads in Qt - c++

Right now in Qt I am faced with a problem where I have two threads that own two different objects. These objects are not QObjects, so they cannot communicate using signals/slots. The first thread is the primary thread, and the second thread runs an infinite loop that processes command objects from a queue.
The main thread must wait for the processing thread to finish the request.
How would I go about synchronizing the two threads without using a global mutex and wait conditions?

You could use mutexes. Lock every time you pull a request from the queue, and lock every time you want to append to the queue. This way you may have something like this:
#include <QList>
#include <QMutex>
#include <QThread>
#include <QWaitCondition>

class processingThread : public QThread
{
public:
    void appendToQueue(const Request &req)
    {
        sync.lock();
        queue.append(req);
        sync.unlock();
        cond.wakeAll();
    }

protected:
    void run()
    {
        while (true)
        {
            sync.lock();
            while (queue.isEmpty())
                cond.wait(&sync);          // releases sync while sleeping
            Request current = queue.takeFirst();
            sync.unlock();
            // process request
        }
    }

private:
    QList<Request> queue;
    QMutex sync;
    QWaitCondition cond;
};
You can now call processingThread::appendToQueue from any thread and the data stays synchronized. You can use this pattern to synchronize any data between threads; just remember to lock every access to the data you want to keep in sync. Note that the QWaitCondition is there only so the thread does work when there is actually something to do.

Your command object can contain a "sync" object, so the sender can wait on this object and the processor thread can signal it when it has finished. The sync object only needs a boolean and a QWaitCondition, which shouldn't be global.
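A rough sketch of such a per-request sync object (the class and member names here are illustrative, not from the original code):
#include <QMutex>
#include <QWaitCondition>

// Hypothetical completion latch carried by each command object:
// the sender waits on it, the processing thread signals it when done.
class RequestSync
{
public:
    RequestSync() : m_done(false) {}

    // Called by the sender after queueing the request.
    void waitForCompletion()
    {
        QMutexLocker locker(&m_mutex);
        while (!m_done)                 // guards against spurious wake-ups
            m_cond.wait(&m_mutex);
    }

    // Called by the processing thread once the request has been handled.
    void markCompleted()
    {
        QMutexLocker locker(&m_mutex);
        m_done = true;
        m_cond.wakeAll();
    }

private:
    QMutex m_mutex;
    QWaitCondition m_cond;
    bool m_done;
};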

Related

How to execute a function asynchronously in c++?

I'm now working on a C++ queue-managing program. I added my dummy code and sequence diagram. Many Data objects arrive through SomeClass::Handler, so I have to save all of them to my queue. A worker then has to manipulate the data to convert it into a Command instance. So I want to make a thread that generates commands from data, but I want to limit the number of worker threads to one, so that only one command-generating process runs at a time. After generating a command, I want to return it to SomeClass.
I'm totally confused how to implement this design. Any helps will be appreciated.
Edited for more specification.
How do I restrict the number of worker threads without blocking pushes to the queue?
How do I return a Command instance from a worker thread?
void MyClass::notify_new_data()
{
    // if there is no worker, I want to start a new worker
    // But how?
}

// I want to limit the number of workers to one.
void MyClass::worker() {
    // busy loop, so should I add sleep_for(time) at the bottom of this func?
    while (queue.empty() == false) {
        MyData data;
        {
            std::lock_guard<std::mutex> lock(mtx);
            data = queue.front();
            queue.pop();
        }
        // do heavy processing with data
        auto command = do_something_in_this_thread();
        // how to pass this command to SomeClass!?
    }
}

// this method is called from another thread.
void MyClass::push_data(MyData some_data) {
    {
        std::lock_guard<std::mutex> lock(mtx);
        queue.push(some_data);
    }
    notify_new_data();
}

void SomeClass::Handler(Data d) {
    my_class.push_data(d);
}

void SomeClass::OnReceivedCommand(Command cmd) {
    // receive command
}
The question is not very clear. I am assuming that:
You need a single worker thread that executes some operations asynchronously.
You need to retrieve the result of the computation of the worker thread from another thread.
How do I restrict the number of worker threads without blocking pushes to the queue?
Look into "thread pooling". Create a thread pool with a single worker thread that reads from a thread-safe queue. This is pretty much what your MyClass::worker() is doing.
// busy loop, so should I add sleep_for(time) at the bottom of this func?
You can either use condition variables and locking mechanisms to prevent busy waiting, or use a mature lock-free queue implementation like moodycamel::ConcurrentQueue.
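For illustration, here is a minimal sketch of a blocking pop built on std::condition_variable. The queue and mutex are shown as globals for brevity (in your code they would be members of MyClass), and MyData is just a placeholder for your type:
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

struct MyData { /* payload fields */ };   // stand-in for the question's type

std::mutex mtx;
std::condition_variable cv;
std::queue<MyData> data_queue;

void push_data(MyData data)
{
    {
        std::lock_guard<std::mutex> lock(mtx);
        data_queue.push(std::move(data));
    }
    cv.notify_one();    // wake the single worker
}

MyData pop_data()
{
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [] { return !data_queue.empty(); });   // sleeps instead of spinning
    MyData data = std::move(data_queue.front());
    data_queue.pop();
    return data;
}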
// how to pass this command to SomeClass!?
The cleanest and safest way of passing data between threads is using futures and promises. The std::future page on cppreference is a good place to start.
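A hedged sketch of that approach: each queued item carries a std::promise for the resulting Command, and SomeClass waits on the matching std::future. The Job struct and the do_heavy_processing and enqueue helpers are assumptions made for illustration, not part of your code:
#include <future>
#include <utility>

// Each queued item carries the input data plus a promise for the result.
struct Job {
    MyData data;
    std::promise<Command> result;
};

// Worker side: fulfil the promise once the command has been built.
void process(Job &job)
{
    Command cmd = do_heavy_processing(job.data);   // hypothetical helper
    job.result.set_value(std::move(cmd));
}

// Caller side: submit the data and block (or poll) on the future.
Command submit_and_wait(MyData data)
{
    Job job{std::move(data), {}};
    std::future<Command> fut = job.result.get_future();
    enqueue(std::move(job));       // hypothetical thread-safe enqueue into the worker's queue
    return fut.get();              // blocks until the worker calls set_value()
}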
// if there is no worker, I want to start a new worker
I would create a thread pool containing a single active worker before starting your computations, so that you never have to check if a worker is available. If you cannot do that, an std::atomic flag signaling whether or not a worker was created should suffice.
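A sketch of the atomic-flag idea, assuming worker_running is a member of MyClass (the flag name and the detached thread are illustrative choices, not from the original code):
#include <atomic>
#include <thread>

// Assumed member of MyClass:  std::atomic<bool> worker_running{false};

void MyClass::notify_new_data()
{
    bool expected = false;
    // Only the caller that flips the flag from false to true spawns a worker.
    if (worker_running.compare_exchange_strong(expected, true)) {
        std::thread([this] {
            worker();                      // drains the queue
            worker_running.store(false);   // allow a later worker to start
            // A robust version would re-check the queue here so data pushed
            // just as the worker exits is not left unprocessed.
        }).detach();
    }
}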

what if notify() is called before wait()?

I have a situation where a notify() 'can' be called before a wait().
I am trying to make a simulator schedule its next event when I 'notify' it by sending it messages, so I have devised a wait -> notify -> schedule chain:
void Broker::pause()
{
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    {
        std::cout << "pausing the simulation" << std::endl;
        m_cond_cnn.wait(lock);
        std::cout << "Simulation UNpaused" << std::endl;
        // the following line causes the current function to be called at
        // a later time, and a notify() can happen before the current function
        // is called again
        Simulator::Schedule(MilliSeconds(xxx), &Broker::pause, this);
    }
}

void Broker::messageReceiveCallback(std::string message) {
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    {
        m_cond_cnn.notify_one();
    }
}
The problem here is that there can be situations where a notify() is called before its corresponding wait().
Is there a solution for such a situation?
Thank you.
Condition variables can hardly be used alone, if only because, as you noticed, they only wake the currently waiting threads. There's also the matter of spurious wake-ups (ie. the condition variable can sometimes wake up a thread without any corresponding notify having been called). To work properly, condition variables usually need another variable to maintain a more reliable state.
To solve both those problems, in your case you just need to add a boolean flag:
// waiting side (Broker::pause)
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
while (!someFlag)
    m_cond_cnn.wait(lock);
someFlag = false;

// notifying side (Broker::messageReceiveCallback)
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
someFlag = true;
m_cond_cnn.notify_one();
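The same waiting side can be written more compactly with the predicate overload of wait(), which folds the flag check and the loop into a single call (Boost's condition_variable supports this form; the lambda assumes C++11, otherwise use a functor):
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
m_cond_cnn.wait(lock, [&] { return someFlag; });   // loops internally until the flag is set
someFlag = false;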
I think that syam's answer is fine in general but in your specific case where you seem to be using ns-3, I would suggest instead that you restructure your code to use the right primitives in ns-3:
I suspect that you use one of the ns-3 realtime simulator implementations. Good.
Schedule a keepalive event every 0.1s to make sure that the simulator keeps running (it will stop running when there are no events left).
Optionally, use a boolean in this keepalive event to check if you should reschedule the keepalive event or call Simulator::Stop.
Create a thread to run the simulator main loop with Simulator::Run(). The simulator will sleep until the next scheduled event is due to expire or until a new event is externally scheduled.
Use Simulator::ScheduleWithContext to schedule an event externally from another thread.
Keep in mind that the ns-3 API is not thread-safe in general. The only thread-safe ns-3 API is ns3::Simulator::ScheduleWithContext. I can't stress enough how important it is not to use any other API available in the ns3:: namespace from a thread that is not the main thread.

Issue with Concurrent Access of a Queue (Multiple Producers and Consumers) - C++, Boost

I'm writing an application which has an event queue. My intention is to create it in such a way that multiple threads can write to the queue and one thread can read from it, handing the processing of a popped element over to another thread so that the subsequent pop is not blocked. I used a lock and a condition variable for pushing and popping items:
void Publisher::popEvent(boost::shared_ptr<Event>& event) {
    boost::mutex::scoped_lock lock(queueMutex);
    while(eventQueue.empty())
    {
        queueConditionVariable.wait(lock);
    }
    event = eventQueue.front();
    eventQueue.pop();
    lock.unlock();
}

void Publisher::pushEvent(boost::shared_ptr<Event> event) {
    boost::mutex::scoped_lock lock(queueMutex);
    eventQueue.push(event);
    lock.unlock();
    queueConditionVariable.notify_one();
}
In the constructor of the Publisher class (only one instance is created), I start one thread which iterates through a loop until a notify_one() is received, and then starts up another thread to process the event popped from the queue.
In the constructor:
publishthreadGroup = boost::shared_ptr<boost::thread_group> (new boost::thread_group());
publishthreadGroup->create_thread(boost::bind(queueProcessor, this));
queueProcessor method:
void queueProcessor(Publisher* agent) {
    while(true) {
        boost::shared_ptr<Event> event;
        agent->popEvent(event);
        agent->publishthreadGroup->create_thread(boost::bind(dispatcher, agent, event));
    }
}
In the dispatcher method, the relevant processing is done and the processed information is published to a server via Thrift. In another method, called from the main thread before the program exits, I call join_all() so that the main thread waits until the threads are done.
In this implementation, after the thread for the dispatcher is created, the while loop above experiences a deadlock/hang: the running code seems to be stuck. What is the issue in this implementation? And is there a much cleaner, better way of doing what I am trying to do? (Multiple producers and one consumer thread iterating through the queue and handing off the processing of an element to a different thread.)
Thank you!
It seems that the queueProcessor function will run forever and the thread running it will never exit. Any threads created by that function will do their work and exit, but this thread - the first one created in the publishthreadGroup - has a while(true) loop that has no way of stopping. Thus a call to join_all() will wait forever. Can you create some other flag variable that triggers that function to exit the loop and return? That should do the trick!
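One hedged way to give that loop an exit path is a stop flag plus a "poison pill" event so that the blocked popEvent() call also returns. The stopping flag, the shutdown() method and the null-event convention below are assumptions added for illustration:
// Assumed additional member of Publisher:  boost::atomic<bool> stopping;  // initialised to false

void queueProcessor(Publisher* agent) {
    while (!agent->stopping) {
        boost::shared_ptr<Event> event;
        agent->popEvent(event);
        if (!event)                       // empty shared_ptr used as a shutdown sentinel
            break;
        agent->publishthreadGroup->create_thread(boost::bind(dispatcher, agent, event));
    }
}

// Called from the main thread before join_all():
void Publisher::shutdown() {
    stopping = true;
    pushEvent(boost::shared_ptr<Event>());   // wake the consumer with the sentinel
}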

Thread synchronization with boost::condition_variable

I'm doing some experiments with C++ multithreading and I have no idea how to solve one problem. Let's say we have a thread pool that processes user requests using an existing thread and creates a new thread when no free thread is available. I've created a thread-safe command_queue class with push and pop methods. pop waits while the queue is empty and returns only when a command is available or a timeout occurs. Now it's time to implement the thread pool. The idea is to make free threads sleep for some amount of time and kill a thread if there is nothing to do after that period. Here is the implementation:
command_queue::handler_t handler;
while (handler = tasks.pop(timeout))
{
handler();
}
Here we exit the thread procedure if a timeout occurred. That is fine, but there is a problem with new thread creation. Let's say we already have two threads processing user requests; they are busy at the moment, but we need to run some other operation asynchronously.
We call
thread_pool::start(some_operation);
which should start a new thread, because there are no free threads available. When a thread is free it calls timed_wait on the condition variable, so the idea is to check whether there are threads that are waiting:
if (thread_are_free_threads) // ???
condition.notify_one();
else
create_thread(thread_proc);
but how do I check that? The documentation says that if there are no waiting threads, notify_one does nothing. If I could check whether it did nothing, that would be a solution:
if (!condition.notify_one()) // nobody was notified
create_thread(thread_proc);
As far as I see there is no way to check that.
Thanks for your answers.
You need to create another variable (perhaps a semaphore) which knows how many threads are running, then you can check that and create a new thread, if needed, before calling notify.
The other, better option is to just not have your threads exit when they time out. They should stay alive, waiting to be notified. Instead of exiting when the wait times out, check a variable to see if the program is still running or if it is "shutting down". If it's still running, start waiting again.
A more typical thread pool would look like this:
Pool::Pool()
{
runningThreads = 0;
actualThreads = 0;
finished = false;
jobQue.Init();
mutex.Init();
conditionVariable.Init();
for(int loop=0; loop < threadCount; ++loop) { startThread(threadroutine); }
}
Pool::threadroutine()
{
{
// Extra code to count threads so we can add more if required.
RAIILocker doLock(mutex);
++ actualThreads;
++ runningThreads;
}
while(!finished)
{
Job job;
{
RAIILocker doLock(mutex);
while(jobQue.empty())
{
// This is the key.
// Here the thread is suspended (using zero resources)
// until some other thread calls the notify_one on the
// conditionVariable. At this point exactly one thread is release
// and it will start executing as soon as it re-acquires the lock
// on the mutex.
//
-- runningThreads;
conditionVariable.wait(mutex);
++ runningThreads;
}
job = jobQue.getJobAndRemoveFromQue();
}
job.execute();
}
{
// Extra code to count threads so we can add more if required.
RAIILocker doLock(mutex);
-- actualThreads;
-- runningThreads;
}
}
Pool::AddJob(Job job)
{
RAIILocker doLock(mutex);
// This is where you would check to see if you need more threads.
if (runningThreads == actualThreads) // Plus some other conditions.
{
// increment both counts. When it waits we decrease the running count.
startThread(threadroutine);
}
jobQue.push_back(job);
conditionVariable.notify_one(); // This releases one worker thread
// from the call to wait() above.
// Note: The worker thread will not start
// until this thread releases the mutex.
}
I think you need to rethink your design. In a simple model of a dealer thread handing out work to player threads, the dealer places the job onto the message queue and lets one of the players pick it up when it gets a chance.
In your case the dealer is actively managing the thread pool, in that it retains knowledge of which player threads are idle and which are busy. Since the dealer knows which player is idle, it can actively pass the idle player the job and signal the player using a simple semaphore (or condition variable), with one semaphore per player. In such a case, it might make sense for the dealer to destroy idle threads actively by giving the thread a "kill yourself" job to do.
Now I have found one solution, but it's not perfect.
I have a volatile member variable named free that stores the number of free threads in the pool.
void thread_pool::thread_function()
{
    free++;
    command_queue::handler_t handler;
    while (handler = tasks.pop(timeout))
    {
        free--;
        handler();
        free++;
    }
    free--;
}
When I assign a task to the thread pool I do something like this:
if (free == 0)
threads.create_thread(boost::bind(&thread_pool::thread_function, this));
There is still an issue with synchronization: if the context is switched right after free-- in thread_function, we might create a new thread that we don't actually need. Since the task queue is thread-safe there is no correctness problem, just unwanted overhead. Can you suggest a solution to that, and what do you think of the approach? Maybe it's better to leave it as it is rather than adding one more synchronization point here?
Another idea: you can query the length of the message queue. If it gets too long, create a new worker.
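A rough sketch of that idea, reusing the names from the question; the size() method, the backlog threshold and the thread counter are assumptions made for illustration:
void thread_pool::start(command_queue::handler_t handler)
{
    tasks.push(handler);                       // thread-safe queue from the question
    // Hypothetical thread-safe size(); the backlog threshold is arbitrary.
    if (tasks.size() > max_backlog && thread_count < max_threads)
        threads.create_thread(boost::bind(&thread_pool::thread_function, this));
}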

Threading issues in C++

I have asked about this problem on many popular forums but got no concrete response. My application uses serial communication to interface with external systems, each having its own interface protocol. The data that is received from the systems is displayed on a GUI made in Qt 4.2.1.
The structure of the application is as follows:
When the app begins we have a login page with a choice of four modules. This is implemented as a maindisplay class. Each of the four modules is a separate class in itself. The module of concern here is the action class, which is responsible for gathering and displaying data from various systems.
User authentication gets him/her into the action screen. The constructor of the action screen class executes and, apart from mundane initialisation, starts the individual system threads, which are implemented as singletons.
Each system protocol is implemented as a singleton thread of the form:
class SensorProtocol : public QThread {
    static SensorProtocol* s_instance;
    SensorProtocol(){}
    SensorProtocol(const SensorProtocol&);
    SensorProtocol& operator=(const SensorProtocol&);
public:
    static SensorProtocol* getInstance();
    // miscellaneous system-related data to be used for
    // data acquisition and processing
};
In implementation file *.cpp:
SensorProtocol* SensorProtocol::s_instance = 0;

SensorProtocol* SensorProtocol::getInstance()
{
    // DOUBLE-CHECKED LOCKING PATTERN -- I have used singletons
    // without this overrated pattern as well, but just FYI
    if(!s_instance)
    {
        mutex.lock();
        if(!s_instance)
            s_instance = new SensorProtocol();
        mutex.unlock();
    }
    return s_instance;
}
Structure of the run function:
while(!mStop)
{
    mutex.lock();
    while(!WaitCondition.wait(&mutex, 5))
    {
        if(mStop)
            return;
    }
    // code to read from the port when data becomes available,
    // process it and store it in variables
    mutex.unlock();
}
In the action screen class I have defined an InputSignalHandler using sigaction and saio. This is a function pointer which is activated as soon as data arrives on any of the serial ports.
It is a global function (we cannot change this, as it is specific to Linux) which just compares the file descriptor of the serial port where data has arrived against the fds of the sensor systems. If a match is found, WaitCondition.wakeOne() is invoked on that thread, which then comes out of the wait and reads and processes the data.
In the action screen class the individual threads are started as SensorProtocol::getInstance()->start().
Each system's protocol has a frame rate at which it sends data. Based on this, in the action screen we set up update timers that time out at the refresh rate of the protocols. When these timers time out, the UpdateSensorProtocol() function of the action screen is called:
connect(&timer, SIGNAL(timeout()), this, SLOT(UpdateSensorProtocol()));
This grabs the sensor singleton instance as:
SensorProtocol* pSingleton = SensorProtocol::getInstance();
if(pSingleton->mUpdate)
{
    // update data on action screen GUI
    pSingleton->mUpdate = false;   // NOTE: this variable is set to true in the
                                   // singleton thread once one frame has been
                                   // processed completely
}
For all uses of the singleton instance, SensorProtocol::getInstance() is used. Given the above scenario, one of my protocols hangs no matter what changes I make.
The hang occurs while displaying data using UpdateSensorProtocol(). If I comment out the ShowSensorData() call in UpdateSensorProtocol() it works fine, but otherwise it hangs and the GUI freezes. Any suggestions?
Also, since the main thread grabs the running instance of the singleton, is it really multithreading? We are essentially changing mUpdate in the singleton itself, albeit from the action screen.
I am confused about this.
Also, can somebody suggest an alternative design to what I am doing now?
Thanks In Advance
First of all, don't make the systems singletons. Use some kind of context encapsulation for the different systems.
If you ignore this advice and still want to create "singleton" threads, at least use QApplication::instance() as the parent of the thread and put QThread::wait() in the singleton destructor, otherwise your program will crash at exit.
if(!s_instance) {
    QMutexLocker lock(&mutex);
    if(!s_instance)
        s_instance = new SensorProtocol( QApplication::instance() );
}
But this isn't going to solve your problem ...
Qt is event driven, so try to exploit this very nice event-driven architecture and create an event loop for each system thread. Then you can create "SystemProtocols" that live in other threads, and you can create timers, send events between threads and so on without using low-level synchronization objects.
Have a look at the blog entry from Bradley T. Hughes, Threading without the headache.
The code is not compiled but should give you a good idea of where to start...
class GuiComponent : public QWidget {
    Q_OBJECT
    //...
signals:
    void start(int);   // button-triggered signal
    void stop();       // button-triggered signal

public slots:
    // don't forget to register DataPackage with the meta-object system:
    // qRegisterMetaType<DataPackage>();
    void dataFromProtocol( DataPackage ) {
        // update the gui with the new data
    }
};

class ProtocolSystem : public QObject {
    Q_OBJECT
    //...
    int timerId;

signals:
    void dataReady(DataPackage);

public slots:
    void stop() {
        killTimer(timerId);
    }
    void start( int interval ) {
        timerId = startTimer(interval);
    }

protected:
    void timerEvent(QTimerEvent * event) {
        // code to read from the port when data becomes available,
        // process it and store it in dataPackage
        emit dataReady(dataPackage);
    }
};

int main( int argc, char ** argv ) {
    QApplication app( argc, argv );

    // construct the systems and glue them together
    ProtocolSystem protocolSystem;
    GuiComponent gui;
    gui.connect(&protocolSystem, SIGNAL(dataReady(DataPackage)), SLOT(dataFromProtocol(DataPackage)));
    protocolSystem.connect(&gui, SIGNAL(start(int)), SLOT(start(int)));
    protocolSystem.connect(&gui, SIGNAL(stop()), SLOT(stop()));

    // move the communication to its own thread
    QThread protocolThread;
    protocolSystem.moveToThread(&protocolThread);
    protocolThread.start();

    // repeat this for other systems ...

    // start the application
    gui.show();
    app.exec();

    // stop the event loop before closing the application
    protocolThread.quit();
    protocolThread.wait();
    return 0;
}
Now you have totally independent systems; the GUI and the protocols don't know about each other and don't even know that the program is multithreaded. You can unit test all the systems independently in a single-threaded environment, glue them together in the real application and, if you need to, divide them between different threads.
That is the program architecture I would use for this problem: multithreading without a single low-level synchronization element. No race conditions, no locks, ...
Problems:
Use RAII to lock/unlock your mutexes. They are currently not exception safe.
while(!mStop)
{
    mutex.lock();
    while(!WaitCondition.wait(&mutex,5))
    {
        if(mStop)
        {
            // PROBLEM 1: Your mutex is still locked here.
            // So returning here will leave the mutex locked forever.
            return;
        }
        // PROBLEM 2: If you leave here via an exception,
        // this will not fire, and again you will leave the mutex locked forever.
        mutex.unlock();
        // PROBLEM 3: You are using the WaitCondition incorrectly.
        // You unlock the mutex here. The next thing that happens is a call to
        // WaitCondition.wait(), where the mutex MUST be locked.
    }
    // PROBLEM 4
    // You are using the WaitCondition incorrectly.
    // On exit the mutex is always locked. So now the mutex is locked.
What your code should look like:
while(!mStop)
{
    MutexLocker lock(mutex);    // RAII lock and unlock of the mutex
    while(!WaitCondition.wait(&mutex, 5))
    {
        if(mStop)
        {
            return;
        }
    }
    // code to read from the port when data becomes available,
    // process it and store it in variables
}
By using RAII it solves all the problems I spotted above.
On a side note.
Your double-checked locking will not work correctly.
By using the static function variable suggested by 'Anders Karlsson' you solve the problem, because g++ guarantees that static function variables are initialized only once. In addition, this method guarantees that the singleton will be correctly destroyed (via its destructor). Currently, unless you are doing some fancy stuff via onexit(), you are leaking memory.
See here for lots of details about better implementation of singleton.
C++ Singleton design pattern
See here why your double checked locking does not work.
What are all the common undefined behaviours that a C++ programmer should know about?
I would start by using RAII (Resource Acquisition Is Initialization) to improve the safety of your locking code. You have code that looks like this:
mutex.lock();
...logic...
mutex.unlock();
Wrap the mutex code inside a class where the mutex gets acquired in the ctor and released in the dtor. Now your code looks like this:
MyMutex mutex;
...logic...
The major improvement is that if any exceptions throw in the logic part, your mutex still gets released.
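A minimal sketch of such a wrapper around QMutex (Qt itself ships QMutexLocker, which does exactly this, so in practice you would just use that):
#include <QMutex>

// Scoped lock: acquires the mutex in the constructor and releases it in the
// destructor, so it is unlocked even if an exception is thrown in between.
class ScopedLock
{
public:
    explicit ScopedLock(QMutex &m) : m_mutex(m) { m_mutex.lock(); }
    ~ScopedLock() { m_mutex.unlock(); }

private:
    ScopedLock(const ScopedLock&);              // non-copyable
    ScopedLock& operator=(const ScopedLock&);

    QMutex &m_mutex;
};

// Usage:
//     {
//         ScopedLock lock(mutex);
//         ...logic...
//     }   // released here, even if ...logic... throws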
Also, don't let any exceptions leak out of your threads! Catch them even if you don't know how to handle them other than logging it somewhere.
I can't be completely sure what the problem is since I have no clue what the ShowSensorData() function (method?) is doing, but there are some multithreading issues with the code that you have included.
mUpdate should be protected by a mutex if it is accessed by more than one thread.
The run() method looks like it will lock the mutex and never release it if mStop is true.
You should consider using RAII practices to grab and release the mutex. I don't know if you are using Qt mutexes or not but you should look into using QMutexLocker to lock and unlock your mutexes.
I would consider changing your SensorProtocol class to use the condition variable and a flag or some sort of event (not sure what Qt has to offer here) to handle the update inside of a method associated with the object instance. Something like:
/*static*/ void SensorProtocol::updateSensorProtocol() {
    SensorProtocol *inst = SensorProtocol::getInstance();
    inst->update();
}
Then make sure that the update() method grabs the mutex before reading or writing any of the members that are shared between the reader and display.
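For example, update() might look roughly like this; the member names are guesses based on the code in the question:
void SensorProtocol::update() {
    QMutexLocker locker(&mutex);   // same mutex the run() loop uses for the shared data
    if (mUpdate) {
        // copy the shared sensor values into whatever the GUI reads from
        mUpdate = false;
    }
}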
A more complete approach would be to separate your UI display, the sensors, and their linkage using a Model-View-Controller architecture. Refactoring the solution into an MVC architecture would probably simplify things quite a bit. Not to mention that it makes applications like this a lot less error-prone. Take a look at the QAbstractItemView and QAbstractItemDelegate classes for an idea on how this can be implemented. From what I remember, there is a tutorial about implementing MVC using Qt somewhere... it's been quite a few years since I have played with Qt though.
your getInstance method could maybe be written like this as well to avoid having the s_instance var:
SensorProtocol& getInstance()
{
    static SensorProtocol instance;
    return instance;
}
The double checked locking pattern is broken in C++. This is well documented all over the internet. I don't know what your problem is but clearly you will need to resolve this in your code.
Take a look at QextSerialPort:
QextSerialPort is a cross-platform serial port class. This class encapsulates a serial port on both POSIX and Windows systems.
QextSerialPort inherits from QIODevice and makes serial port communications integrate more smoothly with the rest of the Qt API.
Also, you could use a message-passing scheme for communication between the I/O and GUI threads instead of shared memory. This is often much less error-prone. You can use the QApplication::postEvent function to send custom QEvent messages to a QObject, to be processed in the GUI thread with the QObject::customEvent handler. It will take care of synchronization for you and alleviate your deadlock problems.
Here is a quick and dirty example:
class IODataEvent : public QEvent
{
public:
    IODataEvent() : QEvent(QEvent::User) {}
    // put all of your data here
};

class IOThread : public QThread
{
public:
    IOThread(QObject * parent) : QThread(parent) {}
    void run()
    {
        for (;;) {
            // do blocking I/O and protocol parsing
            IODataEvent *event = new IODataEvent;
            // put all of your data for the GUI into the event
            qApp->postEvent(parent(), event);
            // QApplication will take ownership of the event
        }
    }
};

class GUIObject : public QObject
{
public:
    GUIObject() : QObject(), thread(new IOThread(this)) { thread->start(); }
protected:
    void customEvent(QEvent *event)
    {
        if (QEvent::User == event->type()) {
            IODataEvent *data = (IODataEvent *) event;
            // get the data and update the GUI here
            event->accept();
        } else {
            event->ignore();
        }
        // the event loop will release the IODataEvent memory automatically
    }
private:
    IOThread *thread;
};
Also, Qt 4 supports queuing signals and slots across threads.
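For example, a cross-thread queued connection only needs the custom type registered with the meta-object system; the object and type names below (ioObject, guiObject, IOData, handleData) are stand-ins for your own:
// Somewhere during startup, before the first cross-thread emission:
qRegisterMetaType<IOData>("IOData");

// The sender lives in the I/O thread, the receiver lives in the GUI thread.
QObject::connect(ioObject,  SIGNAL(dataReady(IOData)),
                 guiObject, SLOT(handleData(IOData)),
                 Qt::QueuedConnection);   // the slot runs in the receiver's thread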
Have three separate threads for send, receive and display.
Raise an event whenever data is received and handle that within the display thread.
Edit in response to comment 1
I'll admit that I know nothing of Qt, but from what you've said it would still appear that you can create your serial port object, which in turn starts up two worker threads (by use of a start method) for input and output buffer control.
If the serial port class has a "Connect to port" method to gain use of the serial port, an "Open port" method which starts up your worker threads and opens the port, a "Close port" method to shut down the send and receive threads, and a property for setting the "On Data Received" event handler, then you should be all set.
The class shouldn't need to be a singleton, as you'll find that most operating systems won't allow more than one process to control a serial port at any one time; instead you'll get an error (which you need to handle) when you try to connect if the port is already in use. The worker threads ensure that the port is held under your control.
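An interface along those lines might look something like this sketch; the class name, method names and the callback type are illustrative only:
#include <string>

// Hypothetical serial-port wrapper: one instance per physical port,
// with two worker threads managing the input and output buffers.
class SerialPort
{
public:
    typedef void (*DataReceivedHandler)(const std::string &data);

    // Acquire the port; fails if another process already owns it.
    bool connectToPort(const std::string &portName);

    // Open the port and start the send/receive worker threads.
    bool open();

    // Stop both worker threads and release the port.
    void close();

    // Handler invoked from the receive thread whenever data arrives.
    void setOnDataReceived(DataReceivedHandler handler);
};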