So I want to make a synchronous event queue in C++ on a custom thread. As far as I can tell, boost::asio::strand is an excellent candidate for this, with one twist: when io_service::run() is called, it only runs while there are events in the strand's queue. The code:
this->control_strand_.reset(new boost::asio::strand(control_io_service_));
control_thread_ = boost::thread(boost::bind(&boost::asio::io_service::run,&control_io_service_));
control_thread_.join();
returns immediately. Now I could go with the answer of Boost Asio - How to know when the handler queue is empty?, but this has a while-loop wait in it. I'd rather have it be more event-based (i.e., wait for a "wrap" call in the while loop when the queue is empty). The only way I can think to do this is to completely wrap the strand class, having it trigger a signal whenever "wrap" is called (something like this pseudocode):
//some member variables
boost::condition_variable cond_var;
boost::mutex mut;
std::unique_ptr<boost::asio::strand> control_strand_;
boost::asio::io_service control_io_service_;

//while loop, running on the event-processing thread
void MessageProcessor()
{
    while (true)
    {
        {
            boost::unique_lock<boost::mutex> lock(mut);
            cond_var.wait(lock);
        }
        control_io_service_.run();
    }
}

//post call, from a different thread
template <typename Handler>
void wrap(Handler hand)
{
    cond_var.notify_all();
    control_strand_->wrap(hand);
}
This will run the queue forever without the while-loop wait (my synchronization is a little off, but that's not an issue atm). Is there a better, more standard, way?
You can use io_service directly; it implements an "implicit strand". To keep it running, just give it an io_service::work object, as shown in the io_service reference (see "Stopping the io_service from running out of work").
Note that io_service is intentionally thread-safe, so you can post() functors to it from external threads.
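A minimal sketch of that approach (the EventQueue class and its member names are illustrative, not from the original answer):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

class EventQueue {
public:
    EventQueue()
        : work_(io_service_),  // keeps run() from returning when the queue is empty
          thread_(boost::bind(&boost::asio::io_service::run, &io_service_))
    {}

    ~EventQueue()
    {
        io_service_.stop();  // lets run() return so the thread can be joined
        thread_.join();
    }

    // Safe to call from any thread; handlers execute serially on thread_.
    template <typename Handler>
    void post(Handler handler)
    {
        io_service_.post(handler);
    }

private:
    boost::asio::io_service io_service_;
    boost::asio::io_service::work work_;
    boost::thread thread_;
};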
Related
Due to fixed requirements, I need to execute some code in a specific thread, and then return a result. The main thread initiating that action should be blocked in the meantime.
void background_thread()
{
    while(1)
    {
        request.lock();
        g_lambda();
        response.unlock();
        request.unlock();
    }
}

void mainthread()
{
    ...
    g_lambda = []()...;
    request.unlock();
    response.lock();
    request.lock();
    ...
}
This should work. But it leaves us with a big problem: the background thread needs to start with the response mutex locked, and the main thread needs to start with the request mutex locked...
How can we accomplish that? I can't think of a good way. And isn't that an anti-pattern anyway?
Passing tasks to the background thread can be accomplished with a producer-consumer queue. A simple C++11 implementation that does not depend on 3rd-party libraries would have a std::condition_variable which is waited on by the background thread and notified by the main thread, a std::queue of tasks, and a std::mutex to guard these.
Getting the result back to the main thread can be done with std::promise/std::future. The simplest way is to use std::packaged_task as the queue's element type, so that the main thread creates a packaged_task, puts it in the queue, notifies the condition_variable, and waits on the packaged_task's future.
You would not actually need std::queue if you create tasks one at a time, from one thread - a single std::unique_ptr<std::packaged_task<...>> would be enough. The queue adds the flexibility to add many background tasks simultaneously.
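A minimal C++11 sketch of that design (TaskQueue and the fixed int() task signature are illustrative choices, not from the original answer):

#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>

class TaskQueue {
public:
    // Main thread: enqueue a task, notify, then block on its future.
    int run_in_background(std::function<int()> fn)
    {
        std::packaged_task<int()> task(std::move(fn));
        std::future<int> result = task.get_future();
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
        return result.get();  // blocks until the background thread ran the task
    }

    // Background thread: wait for a task, run it (this fulfills the future).
    void worker_loop()
    {
        for (;;) {
            std::packaged_task<int()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !tasks_.empty(); });
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::packaged_task<int()>> tasks_;
};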
I am in the process of implementing message passing from one thread to another.
Thread 1: Callback functions are registered with libraries; on callback, the functions are invoked and the data needs to be sent to another thread for processing, as it takes time.
Thread 2: A thread to check if any messages are available (preferably in a queue) and process them.
Is condition_variable usage with a mutex a correct approach to start with, considering that thread 2's processing takes time, during which multiple other messages can be added by thread 1?
Is condition_variable usage with a mutex a correct approach to start with, considering that thread 2's processing takes time, during which multiple other messages can be added by thread 1?
The question is a bit vague about how a condition variable and mutex would be used, but yes, there would definitely be a role for such objects. The high-level view would be something like this:
The mutex would protect access to the message queue. Any read or modification of the queue, by any thread, would be done only while holding the mutex locked.
The message-processing thread would block on the CV in the event that it became ready to process a new message but the queue was empty.
The message-generating thread would signal the CV each time it enqueued a new message.
This is exactly a producer / consumer problem, and you can find a lot of information about such problems using that terminology.
But note also that there are multiple message queue implementations already available to serve exactly your purpose ("message queue" is in fact a standard term for these), so you should consider whether you really want to reinvent this wheel.
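Sketched minimally in C++11 (MessageQueue and Message are illustrative names):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

typedef std::string Message;  // stand-in for the real message type

class MessageQueue {
public:
    // Producer (thread 1): enqueue under the mutex, then signal the CV.
    void push(Message msg)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    // Consumer (thread 2): block on the CV while the queue is empty.
    Message pop()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        Message msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<Message> queue_;
};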
In general, mutexes are intended to control access to shared data between threads, but they are not great for notifying between threads.
If you design Thread2 to wait on the condition, you can simply process messages as they are received from Thread1.
Here is a rough implementation:
void pushFunction(const Message& data)  // Message stands in for the queued type
{
    // Obtain the mutex (a unique_lock, since we unlock manually below)
    std::unique_lock<std::mutex> lock(myMutex);
    const bool empty = myQueue.empty();
    myQueue.push(data);
    lock.unlock();
    if (empty)
    {
        conditionVar.notify_one();
    }
}
In Thread 2:

void waitForMessage()
{
    // wait() requires a unique_lock rather than a lock_guard
    std::unique_lock<std::mutex> lock(myMutex);
    while (myQueue.empty())
    {
        conditionVar.wait(lock);
    }
    rxMessage = myQueue.front();
    myQueue.pop();
}
It's important to note that the wait can wake up spuriously, so the check must stay inside the 'while empty' loop.
See https://en.cppreference.com/w/cpp/thread/condition_variable
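For what it's worth, condition_variable also offers a predicate overload of wait() that folds that loop in; a sketch reusing the names from the code above:

void waitForMessage()
{
    std::unique_lock<std::mutex> lock(myMutex);
    // The predicate overload re-checks the condition after every wakeup,
    // handling spurious wakeups without an explicit loop.
    conditionVar.wait(lock, [&] { return !myQueue.empty(); });
    rxMessage = myQueue.front();
    myQueue.pop();
}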
I have rather a noob question regarding concurrency in C++ (using Boost threads) on which I haven't found a clear answer. I have a worker class which runs in a separate thread. I init the worker at the start of the program, only once. This worker is "lazy" and does some data encoding only when it receives it from the calling thread. In the worker I have a public method:
void PushFrame(byte* data);
which pushes the data onto a std::stack member variable so the worker can access it each time a new data object is pushed there.
What I don't understand is how such an interaction is generally done. Can I just call PushFrame() from the caller thread and pass the argument? Or do I have to access the methods in the worker in some special way?
Usually you use a producer-consumer queue for this type of work.
Whenever the worker thread runs out of work, it wait()s on a boost::condition_variable which is protected by the same boost::mutex as the stack holding the data for the worker thread (you might want to use a queue here instead, to minimize the risk of unfair work scheduling).
Your PushFrame() function now calls notify_one() on that condition variable whenever it inserts new data into the stack. That way, the worker thread will truly sleep (i.e. the OS scheduler will probably not give it any timeslice) until there is actually work to be done.
The easiest thing to get wrong here is the locking of the mutex protecting both the stack and the condition_variable. Besides avoiding races on the data structures, you also need to take care that the condition_variable does not miss a notify call, which could otherwise leave it stuck waiting while there is actually more work available.
class Worker {
    void PushFrame(byte* data)
    {
        boost::lock_guard<boost::mutex> lk(m_mutex);
        // push the data
        // ...
        m_data_cond.notify_one();
    }

    void DoWork()
    {
        while(!done) {
            boost::unique_lock<boost::mutex> lk(m_mutex);
            // we need a loop here as wait() may return spuriously
            while(is_out_of_work()) {
                // wait() will release the mutex and suspend the thread
                m_data_cond.wait(lk);
                // upon returning from wait() the mutex will be locked again
            }
            // do work from the queue
            // ...
        }
    }

    boost::mutex m_mutex;
    boost::condition_variable m_data_cond;
    [...]
};
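For illustration, driving this class might look like the following, assuming the elided [...] members include the data container and the done flag, and that PushFrame()/DoWork() are public:

#include <boost/bind.hpp>
#include <boost/thread.hpp>

typedef unsigned char byte;  // assumed typedef matching the question

int main()
{
    Worker worker;
    boost::thread worker_thread(boost::bind(&Worker::DoWork, &worker));

    byte frame[4096] = {};    // hypothetical frame data
    worker.PushFrame(frame);  // safe to call from the main thread

    // ... eventually set 'done' so DoWork() returns, then:
    worker_thread.join();
    return 0;
}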
Reading the boost::asio documentation, it is still not clear to me when I need to use asio::strand. Suppose that I have one thread using io_service: is it then safe to write on a socket as follows?
void Connection::write(boost::shared_ptr<string> msg)
{
    _io_service.post(boost::bind(&Connection::_do_write, this, msg));
}

void Connection::_do_write(boost::shared_ptr<string> msg)
{
    if(_write_in_progress)
    {
        _msg_queue.push_back(msg);
    }
    else
    {
        _write_in_progress = true;
        boost::asio::async_write(_socket, boost::asio::buffer(*msg),
            boost::bind(&Connection::_handle_write, this,
                        boost::asio::placeholders::error));
    }
}

void Connection::_handle_write(boost::system::error_code const &error)
{
    if(!error)
    {
        if(!_msg_queue.empty())
        {
            boost::shared_ptr<string> msg = _msg_queue.front();
            _msg_queue.pop_front();
            boost::asio::async_write(_socket, boost::asio::buffer(*msg),
                boost::bind(&Connection::_handle_write, this,
                            boost::asio::placeholders::error));
        }
        else
        {
            _write_in_progress = false;
        }
    }
}
where multiple threads call Connection::write(..)? Or do I have to use asio::strand?
Short answer: no, you don't have to use a strand in this case.
Broadly simplified, an io_service contains a list of function objects (handlers). Handlers are put into the list when post() is called on the service, e.g. whenever an asynchronous operation completes, the handler and its arguments are put into the list. io_service::run() executes one handler after another. So if there is only one thread calling run(), as in your case, there are no synchronisation problems and no strands are needed.
Only if multiple threads call run() on the same io_service will multiple handlers be executed at the same time: with N threads, up to N concurrent handlers. If that is a problem, e.g. if there might be two handlers in the queue at the same time that access the same object, you need a strand.
You can see the strand as a kind of lock for a group of handlers. If a thread executes a handler associated to a strand, that strand gets locked, and it gets released after the handler is done. Any other thread can execute only handlers that are not associated to a locked strand.
Caution: this explanation may be over-simplified and technically not accurate, but it gives a basic concept of what happens in the io_service and of the strands.
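A minimal sketch of that behaviour (pre-1.66 Boost naming; the counter handler is illustrative): two threads call run(), yet the strand serializes the posted handlers, so no mutex is needed around the shared counter.

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>

int counter = 0;  // shared state, touched only from strand handlers

void increment()
{
    ++counter;  // safe without a mutex: the strand serializes these handlers
}

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);

    for (int i = 0; i < 1000; ++i)
        strand.post(&increment);

    boost::thread t1(boost::bind(&boost::asio::io_service::run, &io_service));
    boost::thread t2(boost::bind(&boost::asio::io_service::run, &io_service));
    t1.join();
    t2.join();

    std::cout << counter << std::endl;  // always prints 1000
    return 0;
}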
Calling io_service::run() from only one thread will cause all event handlers to execute within the thread, regardless of how many threads are invoking Connection::write(...). Therefore, with no possible concurrent execution of handlers, it is safe. The documentation refers to this as an implicit strand.
On the other hand, if multiple threads are invoking io_service::run(), then a strand would become necessary. This answer covers strands in much more detail.
async_write() is forbidden to be called concurrently from different threads. It sends data in chunks using async_write_some(), and such chunks can be interleaved. So it is up to the user to take care not to call async_write() concurrently.
Is there a nicer solution than this pseudocode?
void send(shared_ptr<char> p) {
    boost::mutex::scoped_lock lock(m_write_mutex);
    async_write(p, handler);
}
I do not like the idea of blocking other threads for quite a long time (there are ~50 MB sends in my application).
Maybe something like this would work?
void handler(const boost::system::error_code& e) {
    if(!e) {
        bool empty = lockfree_pop_front(m_queue);
        if(!empty) {
            shared_ptr<char> p = lockfree_queue_get_first(m_queue);
            async_write(p, handler);
        }
    }
}

void send(shared_ptr<char> p) {
    bool q_was_empty = lockfree_queue_push_back(m_queue, p);
    if(q_was_empty)
        async_write(p, handler);
}
I'd prefer to find a ready-to-use cookbook recipe. Dealing with lock-free code is not easy; a lot of subtle bugs can appear.
async_write() is forbidden to be called concurrently from different threads
This statement is not quite correct. Applications can freely invoke async_write concurrently, as long as they are on different socket objects.
Is there a nicer solution than this pseudocode?

void send(shared_ptr<char> p) {
    boost::mutex::scoped_lock lock(m_write_mutex);
    async_write(p, handler);
}
This likely isn't accomplishing what you intend since async_write returns immediately. If you intend the mutex to be locked for the entire duration of the write operation, you will need to keep the scoped_lock in scope until the completion handler is invoked.
There are nicer solutions for this problem: the library has built-in support using the concept of a strand, which fits this scenario nicely.
A strand is defined as a strictly sequential invocation of event handlers (i.e. no concurrent invocation). Use of strands allows execution of code in a multithreaded program without the need for explicit locking (e.g. using mutexes).
Using an explicit strand here will ensure your handlers are only invoked by a single thread that has invoked io_service::run(). With your example, the m_queue member would be protected by a strand, ensuring atomic access to the outgoing message queue. After adding an entry to the queue, if the size is 1, it means no outstanding async_write operation is in progress and the application can initiate one wrapped through the strand. If the queue size is greater than 1, the application should wait for the async_write to complete. In the async_write completion handler, pop off an entry from the queue and handle any errors as necessary. If the queue is not empty, the completion handler should initiate another async_write from the front of the queue.
This is a much cleaner design than sprinkling mutexes in your classes, since it uses the built-in Asio constructs as they are intended. This other answer I wrote has some code implementing this design.
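A condensed sketch of that design (member names are illustrative, error handling is elided):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <deque>
#include <string>

class Connection {
public:
    explicit Connection(boost::asio::io_service& io_service)
        : strand_(io_service), socket_(io_service) {}

    // May be called from any thread.
    void write(const std::string& msg)
    {
        strand_.post(boost::bind(&Connection::queue_message, this, msg));
    }

private:
    // Runs on the strand, so queue access needs no mutex.
    void queue_message(std::string msg)
    {
        queue_.push_back(msg);
        if (queue_.size() == 1)   // no async_write outstanding
            start_write();
    }

    void start_write()
    {
        // The front element stays in the queue until the write completes,
        // keeping the buffer alive for the duration of async_write.
        boost::asio::async_write(socket_, boost::asio::buffer(queue_.front()),
            strand_.wrap(boost::bind(&Connection::handle_write, this,
                                     boost::asio::placeholders::error)));
    }

    // Also runs on the strand, thanks to strand_.wrap().
    void handle_write(const boost::system::error_code& error)
    {
        queue_.pop_front();
        if (!error && !queue_.empty())
            start_write();        // keep draining the queue
    }

    boost::asio::io_service::strand strand_;
    boost::asio::ip::tcp::socket socket_;
    std::deque<std::string> queue_;
};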
We've solved this problem by having a separate queue of data to be written held in our socket object. When the first piece of data to be written is "queued", we start an async_write(). In our async_write's completion handler, we start subsequent async_write operations if there is still data to be transmitted.