How to execute a function asynchronously in C++?

I'm working on a C++ queue-managing program; my dummy code and sequence diagram are below. Many Data items arrive via SomeClass::Handler, so I have to save them all to my queue. A worker then has to process each item and convert it into a Command instance. I want to run this command generation on its own thread, but limit the number of worker threads to one so that only one command is ever being generated at a time. After generating a command, I want to return it to SomeClass.
I'm totally confused about how to implement this design. Any help will be appreciated.
Edited for more specification:
How do I restrict the number of worker threads to one, without blocking pushes to the queue?
How do I return a Command instance from the worker thread?
void MyClass::notify_new_data()
{
    // if there is no worker, I want to start a new worker
    // But how?
}

// I want to limit the number of workers to one.
void MyClass::worker() {
    // busy loop, so should I add sleep_for(time) at the bottom of this func?
    while (queue.empty() == false) {
        MyData data;
        {
            std::lock_guard<std::mutex> lock(mtx);
            data = queue.pop();
        }
        // do heavy processing with data
        auto command = do_something_in_this_thread(data);
        // how to pass this command to SomeClass!?
    }
}

// this function is called from another thread.
void MyClass::push_data(MyData some_data) {
    {
        std::lock_guard<std::mutex> lock(mtx);
        queue.push(some_data);
    }
    notify_new_data();
}

void SomeClass::Handler(Data d) {
    my_class.push_data(d);
}

void SomeClass::OnReceivedCommand(Command cmd) {
    // receive the command
}

The question is not very clear. I am assuming that:
You need a single worker thread that executes some operations asynchronously.
You need to retrieve the result of the worker thread's computation from another thread.
How do I restrict the number of worker threads to one, without blocking pushes to the queue?
Look into "thread pooling". Create a thread pool with a single worker thread that reads from a thread-safe queue. This is pretty much what your MyClass::worker() is doing.
// busy loop, so should I add sleep_for(time) at the bottom of this func?
You can either use condition variables and locking mechanisms to prevent busy waiting, or use a mature lock-free queue implementation like moodycamel::ConcurrentQueue.
// how to pass this command to SomeClass!?
The cleanest and safest way of passing data between threads is using futures and promises. The std::future page on cppreference is a good place to start.
// if there is no worker, I want to start a new worker
I would create a thread pool containing a single active worker before starting your computations, so that you never have to check if a worker is available. If you cannot do that, an std::atomic flag signaling whether or not a worker was created should suffice.
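A minimal sketch of that combination: a single worker thread, a condition variable instead of a busy loop, and a std::promise/std::future pair to hand each result back. MyData, Command, make_command, and SingleWorker are stand-ins for your own types, not part of the original code:
#include <condition_variable>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

struct MyData {};
struct Command {};

// Stand-in for the heavy conversion step.
Command make_command(const MyData&) { return Command{}; }

class SingleWorker {
public:
    SingleWorker() : worker_([this] { run(); }) {}

    ~SingleWorker() {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            stop_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Called from any producer thread; the returned future yields the Command.
    std::future<Command> push_data(MyData data) {
        std::promise<Command> promise;
        std::future<Command> result = promise.get_future();
        {
            std::lock_guard<std::mutex> lock(mtx_);
            queue_.emplace(std::move(data), std::move(promise));
        }
        cv_.notify_one();   // wake the single worker, no busy loop needed
        return result;
    }

private:
    void run() {
        for (;;) {
            std::pair<MyData, std::promise<Command>> item;
            {
                std::unique_lock<std::mutex> lock(mtx_);
                cv_.wait(lock, [this] { return stop_ || !queue_.empty(); });
                if (stop_ && queue_.empty()) return;
                item = std::move(queue_.front());
                queue_.pop();
            }
            // Heavy processing happens outside the lock.
            item.second.set_value(make_command(item.first));
        }
    }

    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<std::pair<MyData, std::promise<Command>>> queue_;
    bool stop_ = false;
    std::thread worker_;   // declared last so it starts after the other members
};
SomeClass::Handler() would call push_data() and keep the returned std::future; whoever later calls get() on that future (or waits on it) can forward the resulting Command to OnReceivedCommand().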

Related

create `wxThread` to call backend function for every `EVT_TREELIST_ITEM_EXPANDED` event

I have the following classes:
BEGIN_EVENT_TABLE(MyFrame, wxFrame)
EVT_TREELIST_ITEM_CHECKED(wxID_ANY, MyFrame::OnItemChecked)
EVT_TREELIST_ITEM_EXPANDED(wxID_ANY, MyFrame::OnItemExpand)
END_EVENT_TABLE()
class MyThread : public wxThread
{
public:
    MyThread(MyFrame *frame, wxTreeListItem &item);
    virtual ExitCode Entry();
    MyFrame *m_frame;
    wxTreeListItem item;
};
class MyFrame : public wxFrame
{
    friend class MyThread;
private:
    wxTreeListCtrl* m_treelist;
public:
    void OnItemExpand(wxTreeListEvent& event);
    DECLARE_EVENT_TABLE()
};
I have to update m_treelist on every EVT_TREELIST_ITEM_EXPANDED event. For that I am calling OnItemExpand().
void MyFrame::OnItemExpand(wxTreeListEvent& event)
{
    wxTreeListItem item = event.GetItem();
    MyThread *thread = new MyThread(this, item);
    if (thread->Create() != wxTHREAD_NO_ERROR)
    {
        dbg.Error(__FUNCTION__, "Can't create thread!");
        return;
    }
    thread->Run();
}
constructor of MyThread class:
MyThread::MyThread(MyFrame *frame, wxTreeListItem &item) : wxThread()
{
m_frame = frame;
this->item = item;
}
Entry function of MyThread:
wxThread::ExitCode MyThread::Entry()
{
    wxTreeListItem root = m_frame->m_treelist->GetRootItem();
    m_frame->m_treelist->CheckItem(root, wxCHK_CHECKED);
    // this back-end call is time consuming
    std::string resp;
    Calltobackend(resp);
    // I have to convert this string resp into XML and append all items of the XML as children of 'item'.
    m_frame->m_treelist->AppendItem(item, "child");
    m_frame->m_treelist->CheckItem(item, wxCHK_CHECKED);
    m_frame->m_treelist->UpdateItemParentStateRecursively(m_frame->m_treelist->GetFirstChild(item));
    return NULL;
}
I want to create a thread for every browse request and update the corresponding item with its children. Is my approach incorrect? How should I achieve this? I was thinking of another approach where I would use the thread only to send the request to the backend and then send the response to the main thread using OnWorkerEvent. But I have to update the item that was expanded with the response returned by the backend. How will that OnWorkerEvent know which item from the tree it has to update with the children returned by the response?
As VZ said, updating the GUI from a different thread is a can of worms. Don't do it.
For your issue: let's say you have to update a control (in your case, items of a tree list) with values that come from a long task.
The idea is simple:
In your user event handler (like OnItemExpand), just create and run the thread. Don't wait for it; make it "detached".
In the thread code, just before it ends, post a message to the main thread with wxQueueEvent(). The value you need may be part of this message. Or you can write to an accessible variable (better protected by a wxMutex) and use the message to inform the main thread that that variable has been updated.
Write a new function (e.g. MyFrame::OnMyThreadEnds) that handles the message and/or variable. This is where you update the GUI.
See http://docs.wxwidgets.org/trunk/classwx_thread.html
You can only use GUI objects from one (usually the main) thread of your application, so your approach simply can't work. It's also not clear at all why you would go to the trouble of creating a thread just for doing this; it's not like there are any time-consuming operations being done in the thread here.
The standard way to use threads in GUI applications is to perform any long-running tasks in background worker threads and post events to the main thread to perform the GUI updates. You should structure your application like this unless you have really good reasons not to do it.
In more detail, the traditional way to do it is for the worker thread to post wxThreadEvents to the main thread, containing the information that the main thread needs to perform the action. Notice that wxThreadEvent has a SetPayload() method which allows you to pass any kind of data between threads, so you just need to call it in the worker and then use GetPayload() in the main thread to extract the information and process it.
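A minimal sketch of that event-based hand-off, applied to the code from the question. The BackendResult struct, the OnBackendResponse handler, and the exact signature of Calltobackend are assumptions made for illustration; bundling the tree item into the payload is also how the handler knows which item to update:
#include <string>
#include <wx/event.h>
#include <wx/string.h>
#include <wx/treelist.h>

// Hypothetical payload: the backend response plus the item it belongs to.
struct BackendResult
{
    wxTreeListItem item;
    wxString       response;
};

wxThread::ExitCode MyThread::Entry()
{
    std::string resp;
    Calltobackend(resp);                         // the long-running call, done off the GUI thread

    wxThreadEvent *evt = new wxThreadEvent(wxEVT_THREAD);
    evt->SetPayload(BackendResult{ item, wxString(resp) });
    wxQueueEvent(m_frame, evt);                  // thread-safe; the frame takes ownership of evt
    return 0;
}

// In the main thread, bind once (e.g. in the MyFrame constructor):
//     Bind(wxEVT_THREAD, &MyFrame::OnBackendResponse, this);
void MyFrame::OnBackendResponse(wxThreadEvent& event)
{
    BackendResult result = event.GetPayload<BackendResult>();
    // Safe to touch the tree list here: this runs in the main thread.
    m_treelist->AppendItem(result.item, "child");
    m_treelist->CheckItem(result.item, wxCHK_CHECKED);
}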
However since wxWidgets 3.0 you have another way to do it with CallAfter(), which is especially convenient if you use C++11 (and you really should). This allows you to write the code you want to execute in the scope of the thread function, but it will actually get executed in the context of the main thread. So you could do this:
wxThread::ExitCode MyThread::Entry()
{
wxGetApp().CallAfter([this] {
wxTreeListItem root = m_frame->m_treelist->GetRootItem();
m_frame->m_treelist->CheckItem(root, wxCHK_CHECKED);
});
...
}
and it would actually work, because the code inside the lambda is run in the main thread. This is extremely convenient and you should do it like this, but make sure you actually understand what this does and that it still uses the same underlying mechanism of posting events to do its magic.

what if notify() is called before wait()?

I have a situation where a notify() 'can' be called before a wait().
I am trying to make a simulator schedule its next event when I 'notify' it by sending it messages, so I have devised a wait->notify->schedule chain:
void Broker::pause()
{
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
{
std::cout << "pausing the simulation" << std::endl;
m_cond_cnn.wait(lock);
std::cout << "Simulation UNpaused" << std::endl;
// the following line causes the current function to be called at
// a later time, and a notify() can happen before the current function
// is called again
Simulator::Schedule(MilliSeconds(xxx), &Broker::pause, this);
}
}
void Broker::messageReceiveCallback(std::string message) {
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
{
m_cond_cnn.notify_one();
}
}
The problem here is that there can be situations where a notify() is called before its wait() is called.
Is there a solution for such a situation?
Thank you.
Condition variables can hardly be used alone, if only because, as you noticed, they only wake the currently waiting threads. There's also the matter of spurious wake-ups (i.e. the condition variable can sometimes wake up a thread without any corresponding notify having been called). To work properly, condition variables usually need another variable to maintain a more reliable state.
To solve both those problems, in your case you just need to add a boolean flag:
// waiting side
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
while (!someFlag)
    m_cond_cnn.wait(lock);
someFlag = false;

// notifying side
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
someFlag = true;
m_cond_cnn.notify_one();
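Applied to the code from the question, a sketch might look like this (m_unpause_requested is an assumed new member of Broker, initialised to false):
void Broker::pause()
{
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    std::cout << "pausing the simulation" << std::endl;

    // If messageReceiveCallback already ran, the flag is already set and we
    // skip the wait, so a notify() that happened "too early" is not lost.
    while (!m_unpause_requested)
        m_cond_cnn.wait(lock);
    m_unpause_requested = false;

    std::cout << "Simulation UNpaused" << std::endl;
    Simulator::Schedule(MilliSeconds(xxx), &Broker::pause, this);
}

void Broker::messageReceiveCallback(std::string message)
{
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    m_unpause_requested = true;
    m_cond_cnn.notify_one();
}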
I think that syam's answer is fine in general, but in your specific case, where you seem to be using ns-3, I would suggest instead that you restructure your code to use the right primitives in ns-3:
I suspect that you use one of the ns-3 realtime simulator implementations. Good.
Schedule a keepalive event every 0.1 s to make sure that the simulator keeps running (it will stop running when there are no events left).
Optionally, use a boolean in this keepalive event to check whether you should reschedule the keepalive event or call Simulator::Stop.
Create a thread to run the simulator main loop with Simulator::Run(). The simulator will sleep until the next scheduled event is due to expire or until a new event is externally scheduled.
Use Simulator::ScheduleWithContext to schedule an event externally, from another thread.
Keep in mind that the ns-3 API is not thread-safe in general. The only thread-safe ns-3 API is ns3::Simulator::ScheduleWithContext. I cannot stress enough how important it is not to use any other API available in the ns3:: namespace from a thread that is not the main thread.
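A rough sketch of that structure, under the stated assumptions (the names KeepAlive, HandleMessage, and g_running are illustrative, and selecting the realtime simulator implementation is assumed to be done elsewhere via SimulatorImplementationType):
#include "ns3/core-module.h"
#include <atomic>
#include <string>
#include <thread>

std::atomic<bool> g_running{true};

void KeepAlive()
{
    // Keep at least one event scheduled so the simulator does not stop early;
    // when asked to shut down, stop the simulator instead of rescheduling.
    if (g_running)
        ns3::Simulator::Schedule(ns3::Seconds(0.1), &KeepAlive);
    else
        ns3::Simulator::Stop();
}

void HandleMessage(std::string message)
{
    // Runs inside the simulator thread, so it may use the full ns-3 API.
}

int main()
{
    ns3::Simulator::Schedule(ns3::Seconds(0.1), &KeepAlive);

    // Run the simulator main loop in its own thread.
    std::thread simThread([] { ns3::Simulator::Run(); });

    // From any other thread, ScheduleWithContext is the only safe ns-3 call.
    ns3::Simulator::ScheduleWithContext(0, ns3::Seconds(0),
                                        &HandleMessage, std::string("hello"));

    // ... later, ask the keepalive event to stop the simulation and wait for it ...
    g_running = false;
    simThread.join();
    ns3::Simulator::Destroy();
    return 0;
}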

Issue with Concurrent Access of a Queue (Multiple Producers and Consumers) - C++, Boost

I'm writing an application which has an event queue. My intention is to create it in such a way that multiple threads can write and one thread can read from the queue, handing over the processing of a popped element to another thread so that the subsequent pop is not blocked. I used a lock and a condition variable for pushing and popping items from the queue:
void Publisher::popEvent(boost::shared_ptr<Event>& event) {
boost::mutex::scoped_lock lock(queueMutex);
while(eventQueue.empty())
{
queueConditionVariable.wait(lock);
}
event = eventQueue.front();
eventQueue.pop();
lock.unlock();
}
void Publisher::pushEvent(boost::shared_ptr<Event> event) {
boost::mutex::scoped_lock lock(queueMutex);
eventQueue.push(event);
lock.unlock();
queueConditionVariable.notify_one();
}
In the constructor of the Publisher class (only one instance is created), I start one thread which loops, waiting until notify_one() wakes it, and then starts up another thread to process the event popped from the queue:
In constructor:
publishthreadGroup = boost::shared_ptr<boost::thread_group> (new boost::thread_group());
publishthreadGroup->create_thread(boost::bind(queueProcessor, this));
queueProcessor method:
void queueProcessor(Publisher* agent) {
while(true) {
boost::shared_ptr<Event> event;
agent->popEvent(event);
agent->publishthreadGroup->create_thread(boost::bind(dispatcher, agent, event));
}
}
and in the dispatcher method, the relevant processing is done and the processed information is published to a server via Thrift. In another method, called in the main thread before the program exits, I call join_all() so that the main thread waits till the threads are done.
With this implementation, after the thread for the dispatcher is created in the while loop above, I have experienced a deadlock/hang: the running code seems to be stuck. What is the issue in this implementation? And is there a much cleaner, better way of doing what I am trying to do? (Multiple producers and one consumer thread iterating through the queue and handing off the processing of each element to a different thread.)
Thank you!
It seems that the queueProcessor function will run forever and the thread running it will never exit. Any threads created by that function will do their work and exit, but this thread - the first one created in the publishthreadGroup - has a while(true) loop that has no way of stopping. Thus a call to join_all() will wait forever. Can you create some other flag variable that triggers that function to exit the loop and return? That should do the trick!
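A sketch of that idea against the code above; the stopping flag and the requestStop() method are assumptions, and popEvent() is changed to report whether it actually delivered an event:
// Assumed additions to Publisher: a bool 'stopping' member (initialised to false)
// and a method to request shutdown.
void Publisher::requestStop() {
    boost::mutex::scoped_lock lock(queueMutex);
    stopping = true;
    queueConditionVariable.notify_all();   // wake the consumer even if the queue is empty
}

bool Publisher::popEvent(boost::shared_ptr<Event>& event) {
    boost::mutex::scoped_lock lock(queueMutex);
    while (eventQueue.empty() && !stopping)
    {
        queueConditionVariable.wait(lock);
    }
    if (eventQueue.empty())
        return false;                      // woken only because we are shutting down
    event = eventQueue.front();
    eventQueue.pop();
    return true;
}

void queueProcessor(Publisher* agent) {
    boost::shared_ptr<Event> event;
    while (agent->popEvent(event)) {
        agent->publishthreadGroup->create_thread(boost::bind(dispatcher, agent, event));
    }
    // Returning here lets join_all() in the main thread complete.
}
The main thread would then call requestStop() before calling join_all().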

Synchronizing Objects in Different Threads in Qt

Right now in Qt I am faced with a problem where I have 2 threads that own 2 different objects. These objects are not QObjects, so they are not able to communicate using signals/slots. The first thread is the primary thread, and the second thread runs an infinite loop that processes command objects from a queue.
The main thread must wait for the processing thread to finish the request.
How would I go about synchronizing the two threads without using a global mutex and wait conditions?
You could use mutexes. Lock every time you pull a request from the queue, and lock every time you want to append to the queue. This way you may have something like this:
#include <QList>
#include <QMutex>
#include <QThread>
#include <QWaitCondition>
class processingThread : public QThread
{
public:
    void appendToQueue(Request req)
    {
        sync.lock();
        queue.append(req);
        sync.unlock();
        cond.wakeAll();
    }
protected:
    void run()
    {
        while (1)
        {
            sync.lock();
            while (queue.isEmpty())
                cond.wait(&sync);   // releases sync while sleeping, re-locks on wake-up
            Request current = queue.takeFirst();
            sync.unlock();
            // process request
        }
    }
private:
    QList<Request> queue;
    QMutex sync;
    QWaitCondition cond;
};
You can now call processingThread::appendToQueue from any thread and the data stays synchronized. You can use this pattern to synchronize any data between threads; just remember to lock every access to the data you want synchronized. Note that the QWaitCondition is only there so that your thread does work only when needed.
Your command object can contain a "sync" object, so the sender can wait on this object and the processor thread can signal it when it has finished. The sync object only needs a boolean and a QWaitCondition, which doesn't have to be global.
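A minimal sketch of such a per-command sync object (the name CommandSync and the usage shown are assumptions): the main thread blocks in waitForDone() while the processing thread calls markDone() once it has finished the request:
#include <QMutex>
#include <QWaitCondition>

class CommandSync
{
public:
    void waitForDone()               // called by the sender (main thread)
    {
        mutex.lock();
        while (!done)
            cond.wait(&mutex);       // releases the mutex while sleeping
        mutex.unlock();
    }

    void markDone()                  // called by the processing thread
    {
        mutex.lock();
        done = true;
        mutex.unlock();
        cond.wakeAll();
    }

private:
    bool done = false;
    QMutex mutex;
    QWaitCondition cond;
};

// Usage sketch: the command carries a pointer to its sync object.
//   Command cmd;
//   worker->appendToQueue(cmd);     // the worker calls cmd.sync->markDone() when finished
//   cmd.sync->waitForDone();        // the main thread blocks until then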

Thread synchronization with boost::condition_variable

I'm doing some experiments with C++ multithreading and I have no idea how to solve one problem. Let's say we have a thread pool that processes user requests using an existing thread and creates a new thread when no free thread is available. I've created a thread-safe command_queue class, which has push and pop methods. pop waits while the queue is empty and returns only when a command is available or a timeout occurs. Now it's time to implement the thread pool. The idea is to make free threads sleep for some amount of time and kill a thread if there is nothing for it to do after that period. Here is the implementation:
command_queue::handler_t handler;
while (handler = tasks.pop(timeout))
{
handler();
}
Here we exit the thread procedure if a timeout occurred. That is fine, but there is a problem with new thread creation. Let's say we already have 2 threads processing user requests; they are busy at the moment, but we need to do some other operation asynchronously.
We call
thread_pool::start(some_operation);
which should start a new thread, because there are no free threads available. When a thread is free it calls timed_wait on the condition variable, so the idea is to check whether there are threads that are waiting:
if (there_are_free_threads) // ???
condition.notify_one();
else
create_thread(thread_proc);
but how do I check that? The documentation says that if there are no waiting threads, notify_one does nothing. If I could check whether or not it did nothing, that would be a solution:
if (!condition.notify_one()) // nobody was notified
create_thread(thread_proc);
As far as I see there is no way to check that.
Thanks for your answers.
You need to create another variable (perhaps a semaphore) which knows how many threads are running; then you can check that and create a new thread, if needed, before calling notify.
The other, better option is to just not have your threads exit when they time out. They should stay alive, waiting to be notified. Instead of exiting when the wait times out, check a variable to see if the program is still running or if it is "shutting down". If it's still running, start waiting again.
A more typical thread pool would look like this:
Pool::Pool()
{
runningThreads = 0;
actualThreads = 0;
finished = false;
jobQue.Init();
mutex.Init();
conditionVariable.Init();
for(int loop=0; loop < threadCount; ++loop) { startThread(threadroutine); }
}
Pool::threadroutine()
{
{
// Extra code to count threads so we can add more if required.
RAIILocker doLock(mutex);
++ actualThreads;
++ runningThreads;
}
while(!finished)
{
Job job;
{
RAIILocker doLock(mutex);
while(jobQue.empty())
{
// This is the key.
// Here the thread is suspended (using zero resources)
// until some other thread calls the notify_one on the
// conditionVariable. At this point exactly one thread is release
// and it will start executing as soon as it re-acquires the lock
// on the mutex.
//
-- runningThreads;
conditionVariable.wait(mutex);
++ runningThreads;
}
job = jobQue.getJobAndRemoveFromQue();
}
job.execute();
}
{
// Extra code to count threads so we can add more if required.
RAIILocker doLock(mutex);
-- actualThreads;
-- runningThreads;
}
}
Pool::AddJob(Job job)
{
RAIILocker doLock(mutex);
// This is where you would check to see if you need more threads.
if (runningThreads == actualThreads) // Plus some other conditions.
{
// increment both counts. When it waits we decrease the running count.
startThread(threadroutine);
}
jobQue.push_back(job);
conditionVariable.notify_one(); // This releases one worker thread
// from the call to wait() above.
// Note: The worker thread will not start
// until this thread releases the mutex.
}
I think you need to rethink your design. In a simple model of a dealer thread handing out work to the player threads, the dealer places the job onto the message queue and lets one of the players pick up the job when it gets a chance.
In your case the dealer actively manages the thread pool, in that it keeps track of which player threads are idle and which are busy. Since the dealer knows which player is idle, it can actively hand the idle player the job and signal the player using a simple semaphore (or condition variable), one per player. In such a case, it might make sense for the dealer to destroy idle threads actively by giving a thread a "kill myself" job to do.
For now I found one solution, but it's not perfect.
I have a volatile member variable named free that stores the number of free threads in the pool.
void thread_pool::thread_function()
{
free++;
command_queue::handler_t handler;
while (handler = tasks.pop(timeout))
{
free--;
handler();
free++;
}
free--;
}
When I assign a task to a thread, I do something like this:
if (free == 0)
threads.create_thread(boost::bind(&thread_pool::thread_function, this));
There is still an issue with synchronization: if the context is switched right after free-- in thread_function, we might create a new thread that we don't actually need. Since the task queue is thread-safe there is no correctness problem, just unwanted overhead. Can you suggest a solution for that, and what do you think about it? Maybe it's better to leave it as it is than to add one more synchronization here?
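As a side note, volatile does not make ++ and -- atomic in C++. A sketch of the same counter using std::atomic, keeping the harmless check-then-create race described above; free_threads is just a renamed stand-in for the free member:
#include <atomic>

std::atomic<int> free_threads{0};   // member of thread_pool, replacing the volatile 'free'

void thread_pool::thread_function()
{
    free_threads++;
    command_queue::handler_t handler;
    while ((handler = tasks.pop(timeout)))
    {
        free_threads--;
        handler();
        free_threads++;
    }
    free_threads--;
}

// When assigning a task:
if (free_threads == 0)
    threads.create_thread(boost::bind(&thread_pool::thread_function, this));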
Another idea: you can query the length of the message queue. If it gets too long, create a new worker.