Wait on two conditions [duplicate] - c++

First a little context: I'm in the process of learning about threading in C++11 and for this purpose, I'm trying to build a small actor class, essentially (I left the exception handling and propagation stuff out) like so:
class actor {
private: std::atomic<bool> stop;
private: std::condition_variable interrupt;
private: std::thread actor_thread;
private: message_queue incoming_msgs;
public: actor()
: stop(false),
actor_thread([&]{ run_actor(); })
{}
public: virtual ~actor() {
// if the actor is destroyed, we must ensure the thread dies too
stop = true;
// to this end, we have to interrupt the actor thread which is most probably
// waiting on the incoming_msgs queue:
interrupt.notify_all();
actor_thread.join();
}
private: virtual void run_actor() {
try {
while(!stop)
// wait for new message and process it
// but interrupt the waiting process if interrupt is signaled:
process(incoming_msgs.wait_and_pop(interrupt));
}
catch(interrupted_exception) {
// ...
}
};
private: virtual void process(const message&) = 0;
// ...
};
Every actor runs in its own actor_thread, waits on a new incoming message on incoming_msgs and -- when a message arrives -- processes it.
The actor_thread is created together with the actor and has to die together with it, which is why I need some kind of interrupt mechanism in message_queue::wait_and_pop(std::condition_variable& interrupt).
Essentially, I require that wait_and_pop blocks until either
a) a new message arrives or
b) until the interrupt is fired, in which case -- ideally -- an interrupted_exception is to be thrown.
The arrival of a new message in the message_queue is presently modeled also by a std::condition_variable new_msg_notification:
// ...
// in class message_queue:
message wait_and_pop(std::condition_variable& interrupt) {
std::unique_lock<std::mutex> lock(mutex);
// How to interrupt the following, when interrupt fires??
new_msg_notification.wait(lock,[&]{
return !queue.empty();
});
auto msg(std::move(queue.front()));
queue.pop();
return msg;
}
To cut the long story short, the question is this: How do I interrupt the waiting for a new message in new_msg_notification.wait(...) when the interrupt is triggered (without introducing a time-out)?
Alternatively, the question may be read as: How do I wait until any one of two std::condition_variables are signaled?
One naive approach would be not to use a std::condition_variable at all for the interrupt, but instead an atomic flag std::atomic<bool> interrupted, and then to poll new_msg_notification with a very small time-out until either a new message has arrived or interrupted becomes true. However, I would very much like to avoid busy waiting.
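For reference, the polling variant I have in mind would look roughly like this (reusing the members shown above); this is exactly what I want to avoid:
// Polling variant (to be avoided): wake up every millisecond and check
// both the queue and the atomic interrupted flag.
message wait_and_pop(const std::atomic<bool>& interrupted) {
    std::unique_lock<std::mutex> lock(mutex);
    while(!new_msg_notification.wait_for(lock, std::chrono::milliseconds(1),
                                         [&]{ return !queue.empty(); })) {
        if(interrupted)
            throw interrupted_exception();
    }
    auto msg(std::move(queue.front()));
    queue.pop();
    return msg;
}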
EDIT:
From the comments and the answer by pilcrow, it looks like there are basically two approaches possible.
Enqueue a special "Terminate" message, as proposed by Alan, mukunda and pilcrow. I decided against this option because I have no idea about the size of the queue at the time I want the actor to terminate. It may very well be (as is mostly the case when I want something to terminate quickly) that there are thousands of messages left in the queue, and it seems unacceptable to wait for them all to be processed before the terminate message finally gets its turn.
Implement a custom version of a condition variable that may be interrupted by another thread, which forwards the notification to the condition variable the first thread is waiting on. I opted for this approach.
For those of you interested, my implementation goes as follows. The condition variable in my case is actually a semaphore (because I like them more and because I liked the exercise of writing one). I equipped this semaphore with an associated interrupt which can be obtained from the semaphore via semaphore::get_interrupt(). If one thread is now blocked in semaphore::wait(), another thread can call semaphore::interrupt::trigger() on the semaphore's interrupt, causing the first thread to unblock and propagate an interrupt_exception.
struct
interrupt_exception {};
class
semaphore {
public: class interrupt;
private: mutable std::mutex mutex;
// must be declared after our mutex due to construction order!
private: interrupt* informed_by;
private: std::atomic<long> counter;
private: std::condition_variable cond;
public:
semaphore();
public:
~semaphore() throw();
public: void
wait();
public: interrupt&
get_interrupt() const { return *informed_by; }
public: void
post() {
std::lock_guard<std::mutex> lock(mutex);
counter++;
cond.notify_one(); // never throws
}
public: unsigned long
load () const {
return counter.load();
}
};
class
semaphore::interrupt {
private: semaphore *forward_posts_to;
private: std::atomic<bool> triggered;
public:
interrupt(semaphore *forward_posts_to) : triggered(false), forward_posts_to(forward_posts_to) {
assert(forward_posts_to);
std::lock_guard<std::mutex> lock(forward_posts_to->mutex);
forward_posts_to->informed_by = this;
}
public: void
trigger() {
assert(forward_posts_to);
std::lock_guard<std::mutex> lock(forward_posts_to->mutex); // the guard must be named, otherwise it is a temporary and releases the lock immediately
triggered = true;
forward_posts_to->cond.notify_one(); // never throws
}
public: bool
is_triggered () const throw() {
return triggered.load();
}
public: void
reset () throw() {
return triggered.store(false);
}
};
semaphore::semaphore() : counter(0L), informed_by(new interrupt(this)) {}
// must be declared here because otherwise semaphore::interrupt is an incomplete type
semaphore::~semaphore() throw() {
delete informed_by;
}
void
semaphore::wait() {
std::unique_lock<std::mutex> lock(mutex);
if(0L==counter) {
cond.wait(lock,[&]{
if(informed_by->is_triggered())
throw interrupt_exception();
return counter>0;
});
}
counter--;
}
Using this semaphore, my message queue implementation now looks like this (by using the semaphore instead of the std::condition_variable, I could get rid of the std::mutex):
class
message_queue {
private: std::queue<message> queue;
private: semaphore new_msg_notification;
public: void
push(message&& msg) {
queue.push(std::move(msg));
new_msg_notification.post();
}
public: const message
wait_and_pop() {
new_msg_notification.wait();
auto msg(std::move(queue.front()));
queue.pop();
return msg;
}
public: semaphore::interrupt&
get_interrupt() const { return new_msg_notification.get_interrupt(); }
};
My actor is now able to interrupt its thread with very low latency. The implementation presently looks like this:
class
actor {
private: message_queue
incoming_msgs;
/// must be declared after incoming_msgs due to construction order!
private: semaphore::interrupt&
interrupt;
private: std::thread
my_thread;
private: std::exception_ptr
exception;
public:
actor()
: interrupt(incoming_msgs.get_interrupt()), my_thread(
[&]{
try {
run_actor();
}
catch(...) {
exception = std::current_exception();
}
})
{}
private: virtual void
run_actor() {
while(!interrupt.is_triggered())
process(incoming_msgs.wait_and_pop());
};
private: virtual void
process(const message&) = 0;
public: void
notify(message&& msg_in) {
incoming_msgs.push(std::forward<message>(msg_in));
}
public: virtual
~actor() throw (interrupt_exception) {
interrupt.trigger();
my_thread.join();
if(exception)
std::rethrow_exception(exception);
}
};

You ask,
What is the best way to wait on multiple condition variables in C++11?
You can't, and must redesign. One thread may wait on only one condition variable (and its associated mutex) at a time. In this regard the Windows facilities for synchronization are rather richer than those of the "POSIX-style" family of synchronization primitives.
The typical approach with thread-safe queues is to enqueue a special "all done!" message, or to design a "breakable" (or "shutdown-able") queue. In the latter case, the queue's internal condition variable then protects a complex predicate: either an item is available or the queue has been broken.
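For illustration, a minimal sketch of such a shutdown-able queue (hypothetical names, assuming <mutex>, <condition_variable> and <queue> are included; the shutdown policy is a design choice):
// One condition variable, one compound predicate: "item available OR shut down".
class breakable_queue {
    std::mutex mutex;
    std::condition_variable cond;
    std::queue<message> queue;
    bool shutdown = false;
public:
    void push(message msg) {
        { std::lock_guard<std::mutex> lock(mutex); queue.push(std::move(msg)); }
        cond.notify_one();
    }
    void shut_down() {
        { std::lock_guard<std::mutex> lock(mutex); shutdown = true; }
        cond.notify_all();
    }
    // Returns false instead of a message once the queue has been shut down.
    bool wait_and_pop(message& out) {
        std::unique_lock<std::mutex> lock(mutex);
        cond.wait(lock, [&]{ return shutdown || !queue.empty(); });
        if(shutdown)               // discard pending messages and stop immediately
            return false;
        out = std::move(queue.front());
        queue.pop();
        return true;
    }
};
Checking shutdown first, as above, discards pending messages and terminates immediately; testing the queue first instead would drain it before stopping.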
In a comment you observe that
a notify_all() will have no effect if there is no one waiting
That's true but probably not relevant. wait()ing on a condition variable also implies checking a predicate, and checking it before actually blocking for a notification. So, a worker thread busy processing a queue item that "misses" a notify_all() will see, the next time it inspects the queue condition, that the predicate (a new item is available, or, the queue is all done) has changed.

Recently I resolved this issue with the help of a single condition variable and a separate Boolean flag for each producer/worker.
The predicate passed to the wait function in the consumer thread can check these flags and decide which producer/worker has satisfied the condition.
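A rough sketch of that idea (hypothetical names; both flags are protected by the same mutex that guards the condition variable):
std::mutex mutex;
std::condition_variable cond;
bool producer_a_ready = false;   // set by producer A
bool producer_b_ready = false;   // set by producer B

// consumer: waits until any producer has signalled
void consume() {
    std::unique_lock<std::mutex> lock(mutex);
    cond.wait(lock, [&]{ return producer_a_ready || producer_b_ready; });
    if(producer_a_ready) { producer_a_ready = false; /* handle A's work */ }
    if(producer_b_ready) { producer_b_ready = false; /* handle B's work */ }
}

// producer A: sets its own flag under the mutex, then notifies
void produce_a() {
    { std::lock_guard<std::mutex> lock(mutex); producer_a_ready = true; }
    cond.notify_one();
}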

Maybe this can work:
Get rid of the interrupt.
message wait_and_pop() {
    std::unique_lock<std::mutex> lock(mutex);
    new_msg_notification.wait(lock,[&]{
        return !queue.empty() || stop;
    });
    if( !stop )
    {
        auto msg(std::move(queue.front()));
        queue.pop();
        return msg;
    }
    else
    {
        return message(); // or some dedicated 'terminate' message
    }
}
In the destructor, replace interrupt.notify_all() with new_msg_notification.notify_all().
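A rough sketch of the matching shutdown path (hypothetical request_stop() helper; stop lives inside message_queue and is set under the queue's mutex so the update cannot be missed between the predicate check and the actual block in wait()):
// in class message_queue:
void request_stop() {
    { std::lock_guard<std::mutex> lock(mutex); stop = true; }
    new_msg_notification.notify_all();
}

// in ~actor():
//     incoming_msgs.request_stop();
//     actor_thread.join();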

Related

Avoid busy waiting and mode switches between real-time and non-real-time threading

I have the following problem:
We have a controller implemented with ros_control that runs on a real-time, Xenomai-patched Linux system. The control loop is executed by iteratively calling an update function. I need to communicate some of the controller's internal state, and for this task I'm using LCM, developed at MIT. Regardless of LCM's internal behaviour, the publication method breaks real time, so I've implemented a publication loop in C++11 running on a separate thread. But that loop would publish at an unbounded rate if I don't synchronize the secondary thread with the controller, therefore I'm also using condition variables.
Here's an example for the controller:
MyClass mc;
// This is called just once
void init(){
mc.init();
}
// Control loop function (e.g., called every 5 ms in RT)
void update(const ros::Time& time, const ros::Duration& period) {
double value = time.toSec();
mc.setValue(value);
}
And for the class which is trying to publish:
double myvalue;
std::mutex mutex;
std::condition_variable cond;
bool go = true;
void MyClass::init(){
    // start the publisher thread; detached so the local std::thread object
    // does not call std::terminate when it goes out of scope while joinable
    std::thread(&MyClass::body, this).detach();
}
void MyClass::setValue(double value){
myvalue = value;
{
std::lock_guard<std::mutex> lk(mutex);
go = true;
}
cond.notify_one();
}
void MyClass::body() {
while(true) {
std::unique_lock<std::mutex>lk(mutex);
cond.wait(lk, [this] {return go;});
publish(myvalue); // the dangerous call
go = false;
lk.unlock();
}
}
This code produces mode switches (i.e., it breaks real time), probably because of the mutex associated with the condition variable, which I use to synchronize the secondary thread with the main controller and which both threads contend for. If I do something like this:
void MyClass::body() {
while(true) {
if(go){
publish(myvalue);
go = false;
}
}
}
void MyClass::setValue(double value){
myvalue = value;
go = true;
}
I would not produce mode switches, but it would be unsafe and, above all, the secondary thread would busy-wait.
Is there a way to have non-blocking synch between main thread and secondary thread (i.e., having body doing something only when setValue is called) which is also non-busy waiting?
Use a lock free data structure.
In your case you don't even need a data structure; just use an atomic for go. No locks necessary. You might also look into using a semaphore instead of a condition variable, to avoid the now-unused lock. And if you need a semaphore to avoid using a lock, you're going to end up using your base OS's semaphores, not C++11, since C++11 doesn't even have them.
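A rough sketch of what that could look like on a POSIX system (assumption: plain POSIX semaphores; on Xenomai you would use the corresponding real-time semaphore service instead of the glibc one to stay in the primary domain):
#include <atomic>
#include <semaphore.h>   // POSIX, not part of C++11

std::atomic<double> myvalue;
sem_t sem;               // initialised once elsewhere: sem_init(&sem, 0, 0);

// real-time side: a single atomic store plus sem_post(), no lock taken
void MyClass::setValue(double value) {
    myvalue = value;
    sem_post(&sem);
}

// non-real-time side: sleeps in sem_wait() until poked, no busy waiting
void MyClass::body() {
    for(;;) {
        sem_wait(&sem);
        publish(myvalue);
    }
}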
This isn't perfect, but it should reduce your busy-wait frequency with only an occasional loss of responsiveness.
The idea is to use a naked condition-variable wakeup while passing the message through an atomic.
template<class T>
struct non_blocking_poke {
std::atomic<T> message;
std::atomic<bool> active;
std::mutex m;
std::condition_variable v;
void poke(T t) {
message = t;
active = true;
v.notify_one();
}
template<class Rep, class Period>
T wait_for_poke(const std::chrono::duration<Rep, Period>& busy_time) {
std::unique_lock<std::mutex> l(m);
while( !v.wait_for(l, busy_time, [&]{ return active; } ))
{}
active = false;
return message;
}
};
The waiting thread wakes up every busy_time to see if it missed a message. However, it will usually get a message faster than that (there is a race condition in which it can miss one). In addition, multiple messages can be sent without the receiver getting them. However, if a message is sent, within about one busy_time period the receiver will get that message or a later one.
non_blocking_poke<double> poker;
// in realtime thread:
poker.poke(3.14);
// in non-realtime thread:
while(true) {
using namespace std::literals::chrono_literals;
double d = poker.wait_for_poke( 1s );
std::cout << d << '\n';
}
In an industrial quality solution, you'll also want an abort flag or message to stop the loops.
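For example, a hypothetical shutdown flag checked by the receiving loop:
// Hypothetical shutdown flag for the receiving loop.
std::atomic<bool> done{false};

while(!done) {
    using namespace std::literals::chrono_literals;
    double d = poker.wait_for_poke(1s);
    if(done) break;     // the poke may have been the shutdown signal
    std::cout << d << '\n';
}

// elsewhere, to stop the loop:
// done = true;
// poker.poke(0.0);     // wake the receiver so it notices 'done'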

Can I use a pointer assignment to detect the start of a thread in a safe way?

I quickly wrote some kind of wrapper to ensure that some functionality in a system is always executed in a defined thread context. To keep the code as small as possible, I simply use a pointer assignment to check whether the thread has started.
void waitForStart() {
while (_handler == nullptr) {
msleep(100); // Sleep for 100ms;
}
msleep(100); // Sleep for 100ms to make sure the pointer is assigned
}
In my opinion, this should work in any case, even if the assignment to _handler is for some unknown reason split into two operations by the CPU.
Is my assumption correct? Or did I miss a case where this could go wrong?
For reference a more complete example how the system looks like. There are the System, the Thread and the Handler classes:
class Handler {
public:
void doSomeWork() {
// things are executed here.
}
};
class Thread : public ThreadFromAFramework {
public:
Thread() : _handler(nullptr) {
}
void waitForStart() {
while (_handler == nullptr) {
msleep(100); // Sleep for 100ms;
}
msleep(100); // Sleep for 100ms to make sure the pointer is assigned
}
Handler* handler() const {
return _handler;
}
protected:
virtual void run() { // This method is executed as a new thread
_handler = new Handler();
exec(); // This will go into a event loop
delete _handler;
_handler = nullptr;
}
private:
Handler *_handler;
}
class System {
public:
System() {
_thread = new Thread();
_thread->start(); // Start the thread, this will call run() in the new thread
_thread->waitForStart(); // Make sure we can access the handler.
}
void doSomeWork() {
Handler *handler = _thread->handler();
// "Magically" call doSomeWork() in the context of the thread.
}
private:
Thread *_thread;
}
You missed a case where this can go wrong: the thread might exit 5 ms after it sets the pointer. Accessing any changing variable from two threads is never reliable without synchronization.
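One simple way to make the start-up handshake well defined is to publish the pointer through a std::promise/std::future pair instead of polling a raw pointer. A rough sketch under that assumption (same Handler and framework base class as above, #include <future> required); note that it only solves the start-up side, the pointer still dangles once run() finishes:
class Thread : public ThreadFromAFramework {
public:
    Handler* waitForStart() {
        // blocks until run() has published the pointer; may be called only once
        return _handlerFuture.get();
    }
protected:
    virtual void run() {
        Handler handler;
        _handlerPromise.set_value(&handler);   // publish exactly once, properly synchronized
        exec();                                // event loop
        // handler is destroyed here; callers must not use the pointer afterwards
    }
private:
    std::promise<Handler*> _handlerPromise;
    std::future<Handler*> _handlerFuture = _handlerPromise.get_future();
};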

boost asio asynchronously waiting on a condition variable

Is it possible to perform an asynchronous (read: non-blocking) wait on a condition variable in boost::asio? If it isn't directly supported, any hints on implementing it would be appreciated.
I could implement a timer and fire a wakeup event every few ms, but this approach is vastly inferior. I find it hard to believe that condition variable synchronization is not implemented/documented.
If I understand the intent correctly, you want to launch an event handler, when some condition variable is signaled, in the context of the asio thread pool? I think it would be sufficient to wait on the condition variable at the beginning of the handler, and io_service::post() itself back into the pool at the end, something of this sort:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
boost::asio::io_service io;
boost::mutex mx;
boost::condition_variable cv;
void handler()
{
boost::unique_lock<boost::mutex> lk(mx);
cv.wait(lk);
std::cout << "handler awakened\n";
io.post(handler);
}
void buzzer()
{
for(;;)
{
boost::this_thread::sleep(boost::posix_time::seconds(1));
boost::lock_guard<boost::mutex> lk(mx);
cv.notify_all();
}
}
int main()
{
io.post(handler);
boost::thread bt(buzzer);
io.run();
}
I can suggest a solution based on boost::asio::deadline_timer, which works fine for me. It is a kind of async event in the boost::asio environment.
One very important thing is that the 'handler' must be serialised through the same 'strand_' as 'cancel', because using 'boost::asio::deadline_timer' from multiple threads is not thread safe.
class async_event
{
public:
async_event(
boost::asio::io_service& io_service,
boost::asio::strand<boost::asio::io_context::executor_type>& strand)
: strand_(strand)
, deadline_timer_(io_service, boost::posix_time::ptime(boost::posix_time::pos_infin))
{}
// 'handler' must be serialised through the same 'strand_' as 'cancel' or 'cancel_one'
// because using 'boost::asio::deadline_timer' from multiple threads is not thread safe
template<class WaitHandler>
void async_wait(WaitHandler&& handler) {
deadline_timer_.async_wait(handler);
}
void async_notify_one() {
boost::asio::post(strand_, boost::bind(&async_event::async_notify_one_serialized, this));
}
void async_notify_all() {
boost::asio::post(strand_, boost::bind(&async_event::async_notify_all_serialized, this));
}
private:
void async_notify_one_serialized() {
deadline_timer_.cancel_one();
}
void async_notify_all_serialized() {
deadline_timer_.cancel();
}
boost::asio::strand<boost::asio::io_context::executor_type>& strand_;
boost::asio::deadline_timer deadline_timer_;
};
Unfortunately, Boost ASIO doesn't have an async_wait_for_condvar() method.
In most cases, you also won't need it. Programming the ASIO way usually means that you use strands, not mutexes or condition variables, to protect shared resources. Except for rare cases, which usually centre on correct construction or destruction order at startup and exit, you won't need mutexes or condition variables at all.
When modifying a shared resource, the classic, partially synchronous threaded way is as follows:
Lock the mutex protecting the resource
Update whatever needs to be updated
Signal a condition variable, if further processing by a waiting thread is required
Unlock the mutex
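In code, the classic pattern is roughly (hypothetical names):
{
    std::lock_guard<std::mutex> lock(resource_mutex);  // 1. lock
    resource.update(new_value);                        // 2. update
    resource_ready = true;
    resource_cond.notify_one();                        // 3. signal; 4. unlock at scope exit
}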
The fully asynchronous ASIO way, by contrast, is:
Generate a message, that contains everything, that is needed to update the resource
Post a call to an update handler with that message to the resource's strand
If further processing is needed, let that update handler create further message(s) and post them to the appropriate resources' strands.
If jobs can be executed on fully private data, then post them directly to the io-context instead.
Here is an example of a class some_shared_resource, that receives a string state and triggers some further processing depending on the state received. Please note, that all processing in the private method some_shared_resource::receive_state() is fully thread-safe, as the strand serializes all calls.
Of course, the example is not complete; some_other_resource needs a similar send_code_red() method as some_shared_resource::send_state().
#include <boost/asio.hpp>
#include <memory>
#include <string>
#include <utility>
using asio_context = boost::asio::io_context;
using asio_executor_type = asio_context::executor_type;
using asio_strand = boost::asio::strand<asio_executor_type>;
class some_other_resource;
class some_shared_resource : public std::enable_shared_from_this<some_shared_resource> {
asio_strand strand;
std::shared_ptr<some_other_resource> other;
std::string state;
void receive_state(std::string&& new_state) {
std::string oldstate = std::exchange(state, new_state);
if(state == "red" && oldstate != "red") {
// state transition to "red":
other->send_code_red(true);
} else if(state != "red" && oldstate == "red") {
// state transition from "red":
other->send_code_red(false);
}
}
public:
some_shared_resource(asio_context& ctx, const std::shared_ptr<some_other_resource>& other)
: strand(ctx.get_executor()), other(other) {}
void send_state(std::string&& new_state) {
boost::asio::post(strand, [me = weak_from_this(), new_state = std::move(new_state)]() mutable {
if(auto self = me.lock(); self) {
self->receive_state(std::move(new_state));
}
});
}
};
As you see, posting always into ASIO's strands can be a bit tedious at first. But you can move most of that "equip a class with a strand" code into a template.
The good thing about message passing: as you are not using mutexes, you cannot deadlock yourself anymore, even in extreme situations. Also, with message passing it is often easier to achieve a high level of parallelism than with classical multithreading. On the downside, moving and copying around all these message objects is time consuming, which can slow down your application.
A last note: Using the weak pointer in the message formed by send_state() facilitates the reliable destruction of some_shared_resource objects: otherwise, if A calls B and B calls C and C calls A (possibly only after a timeout or similar), using shared pointers instead of weak pointers in the messages would create cyclic references, which then prevent object destruction. If you are sure that you will never have cycles, and that processing messages from to-be-deleted objects doesn't pose a problem, you can use shared_from_this() instead of weak_from_this(), of course. If you are sure that objects won't get deleted before ASIO has been stopped (and all worker threads have been joined back to the main thread), then you can also directly capture the this pointer instead.
FWIW, I implemented an asynchronous mutex using the rather good continuable library:
class async_mutex
{
cti::continuable<> tail_{cti::make_ready_continuable()};
std::mutex mutex_;
public:
async_mutex() = default;
async_mutex(const async_mutex&) = delete;
const async_mutex& operator=(const async_mutex&) = delete;
[[nodiscard]] cti::continuable<std::shared_ptr<int>> lock()
{
std::shared_ptr<int> result;
cti::continuable<> tail = cti::make_continuable<void>(
[&result](auto&& promise) {
result = std::shared_ptr<int>((int*)1,
[promise = std::move(promise)](auto) mutable {
promise.set_value();
}
);
}
);
{
std::lock_guard _{mutex_};
std::swap(tail, tail_);
}
co_await std::move(tail);
co_return result;
}
};
Usage, e.g.:
async_mutex mutex;
...
{
const auto _ = co_await mutex.lock();
// only one lock per mutex-instance
}

threading-related active object design questions (c++ boost)

I would like some feedback regarding the IService class listed below. From what I know, this type of class is related to the "active-object" pattern. Please excuse/correct if I use any related terminology incorrectly. Basically the idea is that the classes using this active object class need to provide a start and a stop method which control some event loop. This event loop could be implemented with a while loop or with boost asio etc.
This class is responsible for starting a new thread in a non-blocking manner so that events can be handled in/by the new thread. It must also handle all clean-up related code. I first tried an OO approach in which subclasses were responsible for overriding methods to control the event loop, but the cleanup was messy: calling the stop method in the destructor resulted in a pure virtual function call in cases where the calling class had not manually called the stop method. The templated solution seems to be a lot cleaner:
template <typename T>
class IService : private boost::noncopyable
{
typedef boost::shared_ptr<boost::thread> thread_ptr;
public:
IService()
{
}
~IService()
{
/// try stop the service in case it's running
stop();
}
void start()
{
boost::mutex::scoped_lock lock(m_threadMutex);
if (m_pServiceThread && m_pServiceThread->joinable())
{
// already running
return;
}
m_pServiceThread = thread_ptr(new boost::thread(boost::bind(&IService::main, this)));
// need to wait for thread to start: else if destructor is called before thread has started
// Wait for condition to be signaled and then
// try timed wait since the application could deadlock if the thread never starts?
//if (m_startCondition.timed_wait(m_threadMutex, boost::posix_time::milliseconds(getServiceTimeoutMs())))
//{
//}
m_startCondition.wait(m_threadMutex);
// notify main to continue: it's blocked on the same condition var
m_startCondition.notify_one();
}
void stop()
{
// trigger the stopping of the event loop
m_serviceObject.stop();
if (m_pServiceThread)
{
if (m_pServiceThread->joinable())
{
m_pServiceThread->join();
}
// the service is stopped so we can reset the thread
m_pServiceThread.reset();
}
}
private:
/// entry point of thread
void main()
{
boost::mutex::scoped_lock lock(m_threadMutex);
// notify main thread that it can continue
m_startCondition.notify_one();
// Try Dummy wait to allow 1st thread to resume???
m_startCondition.wait(m_threadMutex);
// call template implementation of event loop
m_serviceObject.start();
}
/// Service thread
thread_ptr m_pServiceThread;
/// Thread mutex
mutable boost::mutex m_threadMutex;
/// Condition for signaling start of thread
boost::condition m_startCondition;
/// T must satisfy the implicit service interface and provide a start and a stop method
T m_serviceObject;
};
The class could be used as follows:
class TestObject3
{
public:
TestObject3()
:m_work(m_ioService),
m_timer(m_ioService, boost::posix_time::milliseconds(200))
{
m_timer.async_wait(boost::bind(&TestObject3::doWork, this, boost::asio::placeholders::error));
}
void start()
{
// simple event loop
m_ioService.run();
}
void stop()
{
// signal end of event loop
m_ioService.stop();
}
void doWork(const boost::system::error_code& e)
{
// Do some work here
if (e != boost::asio::error::operation_aborted)
{
m_timer.expires_from_now( boost::posix_time::milliseconds(200) );
m_timer.async_wait(boost::bind(&TestObject3::doWork, this, boost::asio::placeholders::error));
}
}
private:
boost::asio::io_service m_ioService;
boost::asio::io_service::work m_work;
boost::asio::deadline_timer m_timer;
};
Now to my specific questions:
1) Is the use of the boost condition variable correct? It seems like a bit of a hack to me: I wanted to wait for the thread to be launched so I waited on the condition variable. Then once the new thread has launched in the main method, I again wait on the same condition variable to allow the initial thread to continue. Then once the start method of the initial thread is exited, the new thread can continue. Is this ok?
2) Are there any cases in which the thread would not get launched successfully by the OS? I remember reading somewhere that this can occur. If this is possible, I should rather do a timed wait on the condition variable (as is commented out in the start method)?
3) I am aware that of the templated class could not implement the stop method "correctly" i.e. if the event loop fails to stop, the code will block on the joins (either in the stop or in the destructor) but I see no way around this. I guess it is up to the user of the class to make sure that the start and stop method are implemented correctly?
4) I would appreciate any other design mistakes, improvements, etc?
Thanks!
Finally settled on the following:
1) After much testing use of condition variable seems fine
2) This issue hasn't cropped up (yet)
3) The templated class implementation must meet the requirements; unit tests are used to test for correctness
4) Improvements
Added join with lock
Catching exceptions in the spawned thread and rethrowing them in the main thread to avoid crashes and to not lose exception info
Using boost::system::error_code to communicate error codes back to caller
implementation object is set-able
Code:
template <typename T>
class IService : private boost::noncopyable
{
typedef boost::shared_ptr<boost::thread> thread_ptr;
typedef T ServiceImpl;
public:
typedef boost::shared_ptr<IService<T> > ptr;
IService()
:m_pServiceObject(&m_serviceObject)
{
}
~IService()
{
/// try stop the service in case it's running
if (m_pServiceThread && m_pServiceThread->joinable())
{
stop();
}
}
static ptr create()
{
return boost::make_shared<IService<T> >();
}
/// Accessor to service implementation. The handle can be used to configure the implementation object
ServiceImpl& get() { return m_serviceObject; }
/// Mutator to service implementation. The handle can be used to configure the implementation object
void set(ServiceImpl rServiceImpl)
{
// the implementation object cannot be modified once the thread has been created
assert(m_pServiceThread == 0);
m_serviceObject = rServiceImpl;
m_pServiceObject = &m_serviceObject;
}
void set(ServiceImpl* pServiceImpl)
{
// the implementation object cannot be modified once the thread has been created
assert(m_pServiceThread == 0);
// make sure service object is valid
if (pServiceImpl)
m_pServiceObject = pServiceImpl;
}
/// if the service implementation reports an error from the start or stop method call, it can be accessed via this method
/// NB: only the last error can be accessed
boost::system::error_code getServiceErrorCode() const { return m_ecService; }
/// The join method allows the caller to block until thread completion
void join()
{
// protect this method from being called twice (e.g. by user and by stop)
boost::mutex::scoped_lock lock(m_joinMutex);
if (m_pServiceThread && m_pServiceThread->joinable())
{
m_pServiceThread->join();
m_pServiceThread.reset();
}
}
/// This method launches the non-blocking service
boost::system::error_code start()
{
boost::mutex::scoped_lock lock(m_threadMutex);
if (m_pServiceThread && m_pServiceThread->joinable())
{
// already running
return boost::system::error_code(SHARED_INVALID_STATE, shared_category);
}
m_pServiceThread = thread_ptr(new boost::thread(boost::bind(&IService::main, this)));
// Wait for condition to be signaled
m_startCondition.wait(m_threadMutex);
// notify main to continue: it's blocked on the same condition var
m_startCondition.notify_one();
// No error
return boost::system::error_code();
}
/// This method stops the non-blocking service
boost::system::error_code stop()
{
// trigger the stopping of the event loop
//boost::system::error_code ec = m_serviceObject.stop();
assert(m_pServiceObject);
boost::system::error_code ec = m_pServiceObject->stop();
if (ec)
{
m_ecService = ec;
return ec;
}
// The service implementation can return an error code here for more information
// However it is the responsibility of the implementation to stop the service event loop (if running)
// Failure to do so, will result in a block
// If this occurs in practice, we may consider a timed join?
join();
// If exception was thrown in new thread, rethrow it.
// Should the template implementation class want to avoid this, it should catch the exception
// in its start method and then return and error code instead
if( m_exception )
boost::rethrow_exception(m_exception);
return ec;
}
private:
/// runs in its own thread
void main()
{
try
{
boost::mutex::scoped_lock lock(m_threadMutex);
// notify main thread that it can continue
m_startCondition.notify_one();
// Try Dummy wait to allow 1st thread to resume
m_startCondition.wait(m_threadMutex);
// call implementation of event loop
// This will block
// In scenarios where the service fails to start, the implementation can return an error code
m_ecService = m_pServiceObject->start();
m_exception = boost::exception_ptr();
}
catch (...)
{
m_exception = boost::current_exception();
}
}
/// Service thread
thread_ptr m_pServiceThread;
/// Thread mutex
mutable boost::mutex m_threadMutex;
/// Join mutex
mutable boost::mutex m_joinMutex;
/// Condition for signaling start of thread
boost::condition m_startCondition;
/// T must satisfy the implicit service interface and provide a start and a stop method
T m_serviceObject;
T* m_pServiceObject;
// Error code for service implementation errors
boost::system::error_code m_ecService;
// Exception ptr to transport exception across different threads
boost::exception_ptr m_exception;
};
Further feedback/criticism would of course be welcome.