C++ ASIO: Asynchronous sockets and threading

My application is based on the asio chat example and consists of a client and a server:
- Client: connects to the server, receives requests, and responds to them
- Server: has a Qt GUI (main thread) and a network service (separate thread) that listens for connections, sends requests to particular clients, and interprets the responses in the GUI
I want to achieve this in an asynchronous way to avoid a separate thread for each client connection.
In my QT window, I have one io_service instance and one instance of my network service:
io_service_ = new asio::io_service();
asio::ip::tcp::endpoint endpoint(asio::ip::tcp::v4(), 1234); // the port is an unsigned short, not a string
service_ = new Service(*io_service_, endpoint, this);
asio::io_service* ioServicePointer = io_service_;
t = std::thread{ [ioServicePointer](){ ioServicePointer->run(); } };
I want to be able to send data to one client, like this:
service_->send_message(selectedClient.id, msg);
And I am receiving and handling the responses via the observer pattern (the window implements the IStreamListener interface).
Service.cpp:
#include "Service.h"
#include "Stream.h"
void Service::runAcceptor()
{
    acceptor_.async_accept(socket_,
        [this](asio::error_code ec)
        {
            if (!ec)
            {
                std::make_shared<Stream>(std::move(socket_), &streams_)->start();
            }
            runAcceptor();
        });
}
void Service::send_message(std::string streamID, chat_message& msg)
{
    io_service_.post(
        [this, msg, streamID]()
        {
            auto stream = streams_.getStreamByID(streamID);
            stream->deliver(msg);
        });
}
Stream.cpp:
#include "Stream.h"
#include <iostream>
#include "../chat_message.h"
Stream::Stream(asio::ip::tcp::socket socket, StreamCollection* streams)
    : socket_(std::move(socket))
{
    streams_ = streams; // keep a reference to the StreamCollection
    // retrieve the endpoint's IP
    asio::ip::tcp::endpoint remote_ep = socket_.remote_endpoint();
    asio::ip::address remote_ad = remote_ep.address();
    this->ip_ = remote_ad.to_string();
}
void Stream::start()
{
    streams_->join(shared_from_this());
    readHeader();
}
void Stream::deliver(const chat_message& msg)
{
    bool write_in_progress = !write_msgs_.empty();
    write_msgs_.push_back(msg);
    if (!write_in_progress)
    {
        write();
    }
}
std::string Stream::getName()
{
    return name_;
}
std::string Stream::getIP()
{
    return ip_;
}
void Stream::RegisterListener(IStreamListener *l)
{
    m_listeners.insert(l);
}
void Stream::UnregisterListener(IStreamListener *l)
{
    std::set<IStreamListener *>::const_iterator iter = m_listeners.find(l);
    if (iter != m_listeners.end())
    {
        m_listeners.erase(iter);
    }
    else {
        std::cerr << "Could not unregister the specified listener object as it is not registered." << std::endl;
    }
}
void Stream::readHeader()
{
    auto self(shared_from_this());
    asio::async_read(socket_,
        asio::buffer(read_msg_.data(), chat_message::header_length),
        [this, self](asio::error_code ec, std::size_t /*length*/)
        {
            if (!ec && read_msg_.decode_header())
            {
                readBody();
            }
            else if (ec == asio::error::eof || ec == asio::error::connection_reset)
            {
                std::for_each(m_listeners.begin(), m_listeners.end(),
                    [&](IStreamListener *l) { l->onStreamDisconnecting(this->id()); });
                streams_->die(shared_from_this());
            }
            else
            {
                std::cerr << "Exception: " << ec.message();
            }
        });
}
void Stream::readBody()
{
    auto self(shared_from_this());
    asio::async_read(socket_,
        asio::buffer(read_msg_.body(), read_msg_.body_length()),
        [this, self](asio::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                // notify the listener (GUI) that a response has arrived and pass a reference to it
                auto msg = std::make_shared<chat_message>(std::move(read_msg_));
                std::for_each(m_listeners.begin(), m_listeners.end(),
                    [&](IStreamListener *l) { l->onMessageReceived(msg); });
                readHeader();
            }
            else
            {
                streams_->die(shared_from_this());
            }
        });
}
void Stream::write()
{
    auto self(shared_from_this());
    asio::async_write(socket_,
        asio::buffer(write_msgs_.front().data(),
                     write_msgs_.front().length()),
        [this, self](asio::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                write_msgs_.pop_front();
                if (!write_msgs_.empty())
                {
                    write();
                }
            }
            else
            {
                streams_->die(shared_from_this());
            }
        });
}
Interfaces
class IStream
{
public:
    /// Unique stream identifier
    typedef void* TId;
    virtual TId id() const
    {
        return (TId)(this);
    }
    virtual ~IStream() {}
    virtual void deliver(const chat_message& msg) = 0;
    virtual std::string getName() = 0;
    virtual std::string getIP() = 0;
    /// observer pattern
    virtual void RegisterListener(IStreamListener *l) = 0;
    virtual void UnregisterListener(IStreamListener *l) = 0;
};
class IStreamListener
{
public:
    virtual void onStreamDisconnecting(IStream::TId streamId) = 0;
    virtual void onMessageReceived(std::shared_ptr<chat_message> msg) = 0;
};
/*
streamCollection / service delegates
*/
class IStreamCollectionListener
{
public:
    virtual void onStreamDied(IStream::TId streamId) = 0;
    virtual void onStreamCreated(std::shared_ptr<IStream> stream) = 0;
};
StreamCollection is basically a set of IStreams:
class StreamCollection
{
public:
    void join(stream_ptr stream)
    {
        streams_.insert(stream);
        std::for_each(m_listeners.begin(), m_listeners.end(),
            [&](IStreamCollectionListener *l) { l->onStreamCreated(stream); });
    }
    // more events and observer pattern implementation
First of all: The code works as intended so far.
My question:
Is this the way ASIO is supposed to be used for asynchronous programming? I'm especially unsure about the Service::send_message method and the use of io_service.post. What is its purpose in my case? It also worked when I just called async_write directly, without wrapping it in the io_service.post call.
Am I running into problems with this approach?

Asio is designed to be a toolkit rather than a framework. As such, there are various ways to use it successfully. Separating the GUI and network threads, and using asynchronous I/O for scalability, can be a good idea.
Delegating work to the io_service within a public API, such as Service::send_message(), has the following consequences:
- it decouples the caller's thread from the thread(s) servicing the io_service. For example, if Stream::write() performs a time-consuming cryptographic function, the caller's thread (GUI) is not impacted.
- it provides thread safety. The io_service is thread-safe; however, socket is not thread-safe. Additionally, other objects may not be thread-safe, such as write_msgs_. Asio guarantees that handlers will only be invoked from threads that are running the io_service. Consequently, if only one thread is running the io_service, there is no possibility of concurrency, and both socket_ and write_msgs_ will be accessed in a thread-safe manner. Asio refers to this as an implicit strand. If more than one thread is processing the io_service, then one may need to use an explicit strand to provide thread safety. See this answer for more details on strands.
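For illustration, a minimal sketch of what the explicit-strand variant of deliver() might look like, assuming the pre-1.66 io_service API used in the question (the strand_ member is an addition, not part of the original code):
// Hypothetical addition to Stream:
//     asio::io_service::strand strand_; // initialize with the io_service
void Stream::deliver(const chat_message& msg)
{
    auto self(shared_from_this());
    // Marshal the call onto the strand so that socket_ and write_msgs_
    // are never touched by two threads at once.
    strand_.post([this, self, msg]()
    {
        bool write_in_progress = !write_msgs_.empty();
        write_msgs_.push_back(msg);
        if (!write_in_progress)
        {
            write();
        }
    });
}
The completion handlers in readHeader(), readBody() and write() would likewise need to be wrapped with strand_.wrap(...) so that every access to the shared state runs on the same strand.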
Additional Asio considerations:
Observers are invoked within handlers, and handlers run within the network thread. If any observer takes a long time to complete, such as having to synchronize with various shared objects touched by the GUI thread, then it could create poor responsiveness across other operations. Consider using a queue to broker events between the observer and subject components. For instance, one could use another io_service as a queue, run by its own thread, and post into it:
auto msg = std::make_shared<chat_message>(std::move(read_msg_));
for (auto l : m_listeners)
    dispatch_io_service.post([=]() { l->onMessageReceived(msg); });
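For that to work, the dispatch io_service needs its own thread and something to keep run() from returning while the queue is idle. A minimal sketch under the same pre-1.66 API (dispatch_io_service and dispatch_thread are hypothetical names):
asio::io_service dispatch_io_service;
// The work object keeps run() from returning while the queue is empty.
asio::io_service::work dispatch_work(dispatch_io_service);
std::thread dispatch_thread([&]() { dispatch_io_service.run(); });

// ... post observer notifications into dispatch_io_service as shown above ...

// On shutdown: stop the queue and join the thread
// (handlers that have not run yet are dropped by stop()).
dispatch_io_service.stop();
dispatch_thread.join();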
Verify that the container type for write_msgs_ does not invalidate iterators, pointers and references to existing elements on push_back(), nor to other elements on pop_front(). For instance, std::list or std::deque would be safe, but a std::vector may invalidate references to existing elements on push_back().
StreamCollection::die() may be called multiple times for a single Stream. This function should either be idempotent or handle the side effects appropriately (a sketch follows after this list).
On failure for a given Stream, its listeners are informed of a disconnect in only one path: failing to read a header with an error of asio::error::eof or asio::error::connection_reset. Other paths do not invoke IStreamListener::onStreamDisconnecting():
- the header is read, but decoding fails. In this particular case, the entire read chain stops without informing other components. The only indication that a problem has occurred is a print statement to std::cerr.
- there is a failure reading the body.
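Sketches for the last two points, under the assumption that StreamCollection stores its stream_ptrs in a std::set and keeps its listeners in m_listeners. First, an idempotent die():
void StreamCollection::die(stream_ptr stream)
{
    // set::erase returns the number of elements removed, so listeners
    // are notified only on the first call for a given stream
    if (streams_.erase(stream) > 0)
    {
        for (IStreamCollectionListener* l : m_listeners)
            l->onStreamDied(stream->id());
    }
}
Second, a hypothetical helper on Stream that every error branch in readHeader(), readBody() and write() could call, so listeners always hear about the disconnect:
void Stream::shutdown()
{
    for (IStreamListener* l : m_listeners)
        l->onStreamDisconnecting(this->id());
    streams_->die(shared_from_this());
}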

Related

Simple Boost TCP Server, example from the book "C++ Crash Course"

I'm trying to understand std::enable_shared_from_this in the case of TCP connections. The way I see it: when the first connection is accepted in the serve function, an object of class Session is created, and later invocations just create shared_ptrs to the same object, don't they? If I've got that right, I'm not sure it is completely correct to move the socket every time in serve? The example below is like the original one from the book, apart from the connections counter I've added:
using namespace boost::asio;
int connections{};
struct Session : std::enable_shared_from_this<Session> {
    explicit Session(ip::tcp::socket socket) : socket{ std::move(socket) } {}
    void read() {
        async_read_until(socket, dynamic_buffer(message), '\n',
            [self=shared_from_this()] (boost::system::error_code ec,
                                       std::size_t length) {
                if (ec || self->message == "\n") {
                    std::cout << "Ended connection as endline was sent\n";
                    return;
                }
                boost::algorithm::to_upper(self->message);
                self->write();
            });
    }
    void write() {
        async_write(socket, buffer(message),
            [self=shared_from_this()] (boost::system::error_code ec,
                                       std::size_t length) {
                if (ec) return;
                self->message.clear();
                self->read();
            });
    }
private:
    ip::tcp::socket socket;
    std::string message;
};
void serve(ip::tcp::acceptor& acceptor) {
    acceptor.async_accept([&acceptor](boost::system::error_code ec,
                                      ip::tcp::socket socket) {
        serve(acceptor);
        if (ec) return;
        auto session = std::make_shared<Session>(std::move(socket));
        std::cout << "Connection established no " << ++connections << "\n";
        session->read();
    });
}
int main() {
    try {
        io_context io_context;
        ip::tcp::acceptor acceptor{ io_context,
                                    ip::tcp::endpoint(ip::tcp::v4(), 1895) };
        serve(acceptor);
        io_context.run();
    } catch (std::exception& e) {
        std::cerr << e.what() << std::endl;
    }
}
The socket is moved at each invocation of serve because it is a fresh socket for a newly established connection. Note that it is passed by value, and unless it is moved into some long-lived object (the session in this case) it will be destroyed immediately after going out of scope, ending the connection.
"object of the class Session is created and later invocations just create shared_ptr to the same object isn't it" - no: each make_shared invocation creates a new Session object, one per connection. shared_from_this yields a pointer to the current object.

Boost 1.70 io_service deprecation

I'm trying to migrate some old code from io_service to io_context for the basic TCP acceptor, but I am running into issues: switching get_io_service() to get_executor().context() results in the following error:
cannot convert ‘boost::asio::execution_context’ to ‘boost::asio::io_context&’
This is the listener:
ImageServerListener::ImageServerListener(boost::asio::io_context& io)
{
    _acceptor = new boost::asio::ip::tcp::acceptor(io,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), sConfig.net.imageServerPort));
    StartAccept();
}
ImageServerListener::~ImageServerListener()
{
    delete _acceptor;
}
void ImageServerListener::StartAccept()
{
    std::shared_ptr<ImageServerConnection> connection =
        ImageServerConnection::create(_acceptor->get_executor().context());
    _acceptor->async_accept(connection->socket(),
        std::bind(&ImageServerListener::HandleAccept, this, connection));
}
void ImageServerListener::HandleAccept(std::shared_ptr<ImageServerConnection> connection)
{
    connection->Process();
    StartAccept();
}
What would have to be changed in order to return an io_context instead of an execution_context?
You will want to focus on executors rather than contexts.
Passing around executors is cheap; they are copyable, as opposed to contexts.
An executor also abstracts away (via polymorphism) the type of execution context it is attached to, so you don't need to care about it.
However, the static type of the executor is not fixed. This means that the typical way to accept one is by template argument:
struct MyThing {
    template <typename Executor>
    explicit MyThing(Executor ex)
        : m_socket(ex)
    { }
    void do_stuff(std::string caption) {
        post(m_socket.get_executor(),
            [=] { std::cout << ("Doing stuff " + caption + "\n") << std::flush; });
    }
    // ...
private:
    tcp::socket m_socket;
};
Now you can employ it in many ways without changes:
Live On Coliru
int main() {
    boost::asio::thread_pool pool;
    MyThing a(pool.get_executor());
    MyThing b(make_strand(pool));
    a.do_stuff("Pool a");
    b.do_stuff("Pool b");

    boost::asio::io_context ioc;
    MyThing c(ioc.get_executor());
    MyThing d(make_strand(ioc));
    c.do_stuff("IO c");
    d.do_stuff("IO d");

    pool.join();
    ioc.run();
}
Which will print something like
Doing stuff Pool a
Doing stuff Pool b
Doing stuff IO c
Doing stuff IO d
Type Erasure
As you have probably surmised, there's type erasure inside m_socket that stores the executor. If you want to do the same, you can use
boost::asio::any_io_executor ex;
ex = m_socket.get_executor();
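For instance, a class that should not itself be a template could store the erased executor as a member. A sketch (note: any_io_executor exists since Boost 1.74; older versions spell the polymorphic wrapper boost::asio::executor):
class MyErasedThing {
public:
    explicit MyErasedThing(boost::asio::any_io_executor ex)
        : m_ex(std::move(ex)), m_socket(m_ex) {}

    void do_stuff(std::string caption) {
        post(m_ex, [caption] { std::cout << "Doing stuff " << caption << "\n"; });
    }
private:
    boost::asio::any_io_executor m_ex; // type-erased, cheap to copy
    tcp::socket m_socket;
};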

boost::asio and multiple client connections using asynch

I need to establish up to three different TCP connections to different servers. All three connections require different protocols, different handshakes and different heartbeats. Studying http://www.boost.org/doc/libs/1_61_0/doc/html/boost_asio/example/cpp11/chat/chat_client.cpp, reading material here, and following Chris Kohlhoff's advice, I tried to implement it as below.
The problem is that with this architecture I'm getting a bad_weak_ptr exception when calling shared_from_this() in doConnect(), no matter what I do.
Important: these are just snippets of non-running code, which may contain bugs!
I have a base class containing some basic methods.
Connection.h
class Connection : public std::enable_shared_from_this<Connection>
{
public:
    //! Ctor
    inline Connection();
    //! Dtor
    inline virtual ~Connection();
    inline void setReconnectTime(const long &reconnectAfterMilisec)
    {
        m_reconnectTime = boost::posix_time::milliseconds(reconnectAfterMilisec);
    }
    inline void setHandshakePeriod(const long &periodInMilisec)
    {
        m_handshakePeriod = boost::posix_time::milliseconds(periodInMilisec);
    }
    virtual void doConnect() = 0;
    virtual void stop() = 0;
    //... and some more...
};
I then have three classes derived from the base class. Here is just one (also the core part) to depict the approach.
ConnectionA.h
// queues which also contain the age of the messages
typedef std::deque<std::pair<handshakeMsg, boost::posix_time::ptime>> handskMsg_queue;
typedef std::deque<std::pair<errorcodeMsg, boost::posix_time::ptime>> ecMsg_queue;
typedef std::deque<std::pair<A_Msg, boost::posix_time::ptime>> A_Msg_queue;

class ConnectionA : public Connection
{
public:
    ConnectionA();
    ConnectionA(const std::string& IP, const int &port);
    ConnectionA& operator=(const ConnectionA &other);
    virtual ~ConnectionA();
    virtual void stop() override;
    virtual void doConnect() override;
    void doPost(std::string &message);
    void doHandshake();
    void sendErrorCode(const int &ec);
    std::shared_ptr<boost::asio::io_service> m_ioS;
private:
    std::shared_ptr<tcp::socket> m_socket;
    std::shared_ptr<boost::asio::deadline_timer> m_deadlineTimer;  // for reconnections
    std::shared_ptr<boost::asio::deadline_timer> m_handshakeTimer; // for heartbeats
    void deadlineTimer_handler(const boost::system::error_code& error);
    void handshakeTimer_handler(const boost::system::error_code& error);
    void doRead();
    void doWrite();
    std::string m_IP;
    int m_port;
    handskMsg_queue m_handskMsgQueue;
    ecMsg_queue m_ecMsgQueue;
    A_Msg_queue m_AMsgQueue;
};
ConnectionA.cpp
ConnectionA::ConnectionA(const std::string &IP, const int &port)
    : m_ioS()
    , m_socket()
    , m_deadlineTimer()
    , m_handshakeTimer()
    , m_IP(IP)
    , m_port(port)
    , m_handskMsgQueue(10)
    , m_ecMsgQueue(10)
    , m_AMsgQueue(10)
{
    m_ioS = std::make_shared<boost::asio::io_service>();
    m_socket = std::make_shared<tcp::socket>(*m_ioS);
    m_deadlineTimer = std::make_shared<boost::asio::deadline_timer>(*m_ioS);
    m_handshakeTimer = std::make_shared<boost::asio::deadline_timer>(*m_ioS);
    m_deadlineTimer->async_wait(boost::bind(&ConnectionA::deadlineTimer_handler, this, boost::asio::placeholders::error));
    m_handshakeTimer->async_wait(boost::bind(&ConnectionA::handshakeTimer_handler, this, boost::asio::placeholders::error));
}
ConnectionA::~ConnectionA()
{}
void ConnectionA::stop()
{
    m_ioS->post([this]() { m_socket->close(); });
    m_deadlineTimer->cancel();
    m_handshakeTimer->cancel();
}
void ConnectionA::doConnect()
{
    if (m_socket->is_open()) {
        return;
    }
    tcp::resolver resolver(*m_ioS);
    std::string portAsString = std::to_string(m_port);
    auto endpoint_iter = resolver.resolve({ m_IP.c_str(), portAsString.c_str() });
    m_deadlineTimer->expires_from_now(m_reconnectTime);
    // this gives me a bad_weak_ptr exception!!!
    auto self = std::static_pointer_cast<ConnectionA>(static_cast<ConnectionA*>(this)->shared_from_this());
    boost::asio::async_connect(*m_socket, endpoint_iter, [this, self](boost::system::error_code ec, tcp::resolver::iterator) {
        if (!ec)
        {
            doHandshake();
            doRead();
        }
        else {
            // if async_connect failed, make sure the socket is not left open
            if (m_socket->is_open()) {
                m_socket->close();
            }
        }
    });
}
void ConnectionA::doRead()
{
    auto self(shared_from_this());
    boost::asio::async_read(*m_socket,
        boost::asio::buffer(m_readBuf, m_readBufSize),
        [this, self](boost::system::error_code ec, std::size_t) {
            if (!ec) {
                // check server answer for errors
                doRead();
            }
            else {
                stop();
            }
        });
}
void ConnectionA::doPost(std::string &message)
{
    A_Msg newMsg(message);
    auto self(shared_from_this());
    m_ioS->post([this, self, newMsg]() {
        bool writeInProgress = !m_AMsgQueue.empty();
        boost::posix_time::ptime currentTime = time_traits_t::now();
        m_AMsgQueue.push_back(std::make_pair(newMsg, currentTime));
        if (!writeInProgress)
        {
            doWrite();
        }
    });
}
void ConnectionA::doWrite()
{
    while (!m_AMsgQueue.empty())
    {
        if (m_AMsgQueue.front().second + m_maxMsgAge < time_traits_t::now()) {
            m_AMsgQueue.pop_front();
            continue;
        }
        if (!m_socket->is_open()) {
            continue;
        }
        auto self(shared_from_this());
        boost::asio::async_write(*m_socket,
            boost::asio::buffer(m_AMsgQueue.front().first.data(),
                                m_AMsgQueue.front().first.A_lenght),
            [this, self](boost::system::error_code ec, std::size_t /*length*/)
            {
                if (!ec) // successful
                {
                    m_handshakeTimer->expires_from_now(m_handshakePeriod); // reset timer
                    m_AMsgQueue.pop_front();
                    doWrite();
                }
                else {
                    if (m_socket->is_open()) {
                        m_socket->close();
                    }
                }
            });
    }
}
void ConnectionA::deadlineTimer_handler(const boost::system::error_code& error) {
    if (m_stopped) {
        return;
    }
    m_deadlineTimer->async_wait(boost::bind(&ConnectionA::deadlineTimer_handler, this, boost::asio::placeholders::error));
    if (!error && !m_socket->is_open()) // timer expired and no connection was established
    {
        doConnect();
    }
    else if (!error && m_socket->is_open()) { // timer expired and a connection was established
        m_deadlineTimer->expires_at(boost::posix_time::pos_infin); // to reactivate the timer, call doConnect()
    }
}
And finally there is another class that encapsulates these connections to make them more comfortable to use:
TcpConnect.h
class CTcpConnect
{
public:
    //! Ctor
    CTcpConnect();
    //! Dtor
    ~CTcpConnect();
    void initConnectionA(std::string &IP, const int &port);
    void initConnectionB(std::string &IP, const int &port);
    void initConnectionC(std::string &IP, const int &port);
    void postMessageA(std::string &message);
    void run();
    void stop();
private:
    ConnectionA m_AConnection;
    ConnectionB m_BConnection;
    ConnectionC m_CConnection;
};
TcpConnect.cpp
CTcpConnect::CTcpConnect()
    : m_AConnection()
    , m_BConnection()
    , m_CConnection()
{}
CTcpConnect::~CTcpConnect()
{}
void CTcpConnect::run() {
    [this]() { m_AConnection.m_ioS->run(); };
    [this]() { m_BConnection.m_ioS->run(); };
    [this]() { m_CConnection.m_ioS->run(); };
}
void CTcpConnect::stop() {
    m_AConnection.stop();
    m_BConnection.stop();
    m_CConnection.stop();
}
void CTcpConnect::initConnectionA(std::string &IP, const int &port)
{
    m_AConnection = ConnectionA(IP, port);
    m_AConnection.setMaxMsgAge(30000);
    //... set some more parameters
    m_AConnection.doConnect();
}
// initConnectionB & initConnectionC are much the same
void CTcpConnect::postMessageA(std::string &message)
{
    m_AConnection.doWrite(message);
}
In the beginning I also tried having only one io_service (for my approach this would be fine), but holding the service just as a reference gave me some headaches, because my implementation also requires a default constructor for the connections. So now each connection has its own io_service.
Any ideas on how I can get this code running?
Feel free to suggest other architectures, ideally with some snippets. I've been struggling with this implementation for weeks already, and I'm grateful for every hint.
BTW I'm using boost 1.61 with VS12.
This is the problem:
m_AConnection = ConnectionA(IP, port);
That is, ConnectionA derives from Connection which derives from enable_shared_from_this. That means that ConnectionA must be instantiated as a shared pointer for shared_from_this to work.
Try this:
void CTcpConnect::initConnectionA(std::string &IP, const int &port)
{
    m_AConnection = std::make_shared<ConnectionA>(IP, port);
    m_AConnection->setMaxMsgAge(30000);
    //... set some more parameters
    m_AConnection->doConnect();
}
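For that assignment to compile, the connection members of CTcpConnect would have to become shared pointers as well. A sketch:
class CTcpConnect
{
    // ...
private:
    std::shared_ptr<ConnectionA> m_AConnection;
    std::shared_ptr<ConnectionB> m_BConnection;
    std::shared_ptr<ConnectionC> m_CConnection;
};
As a side effect, the default-construction headache mentioned above disappears: an empty shared_ptr is a perfectly good "not yet initialized" state.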
EDIT1:
You are right, that was the issue. Now I realise that the way I'm calling io_service.run() is total crap.
It is very uncommon to use more than one io_service, and extremely uncommon to use one per connection :)
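A sketch of the more common shape: one io_service owned by the wrapper and handed to every connection, assuming the connections' constructors are changed to accept it by reference:
class CTcpConnect
{
    boost::asio::io_service m_ioS; // one service for all three connections
    std::shared_ptr<ConnectionA> m_AConnection;
    std::shared_ptr<ConnectionB> m_BConnection;
    std::shared_ptr<ConnectionC> m_CConnection;
public:
    void initConnectionA(std::string& IP, const int& port)
    {
        m_AConnection = std::make_shared<ConnectionA>(m_ioS, IP, port); // hypothetical ctor
        m_AConnection->doConnect();
    }
    void run() { m_ioS.run(); } // or hand this off to a dedicated thread
};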
However, do you know if I need the cast when calling shared_from_this()? I noticed that async_connect() works fine with and without the cast.
Many Asio examples use shared_from_this() for convenience; I, for example, don't use it in my projects at all. There are certain rules you need to be careful about when working with Asio. One, for example, is that the read and write buffers must not be destroyed before the corresponding callback is executed; if the lambda function captures a shared pointer to the object that holds the buffers, this condition holds trivially.
You could for example do something like this as well:
auto data = std::make_shared<std::vector<uint8_t>>(10);
async_read(socket,
    boost::asio::buffer(*data), // a mutable buffer; the captured shared_ptr keeps the storage alive
    [data](boost::system::error_code, size_t) {});
It would be valid, but it has the performance drawback that you'd be allocating a new std::vector for every read.
Another reason why shared_from_this() is useful can be seen when you look at some of your lambdas; they often have the form:
[this, self,...](...) {...}
That is, you very often want to use this inside them. If you did not capture self as well, you'd need to use other measures to make sure this has not been destroyed when the handler is invoked.
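One such measure, sketched here against the doRead() from the question, is to capture a weak_ptr and lock it inside the handler:
auto self = std::static_pointer_cast<ConnectionA>(shared_from_this());
std::weak_ptr<ConnectionA> weak = self;
boost::asio::async_read(*m_socket,
    boost::asio::buffer(m_readBuf, m_readBufSize),
    [weak](boost::system::error_code ec, std::size_t)
    {
        if (auto self = weak.lock()) // still alive?
        {
            // safe to touch members through self
        }
        // otherwise the object is gone and the handler just returns
    });
Note that this only protects the handler body; the buffer passed to async_read must still outlive the operation, so destruction of the owning object has to be serialized with the handlers (for example by running everything on one thread or one strand).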

Proper cleanup with a suspended coroutine

I'm wondering what the best (cleanest, hardest to mess up) method for cleanup is in this situation.
void MyClass::do_stuff(boost::asio::yield_context context) {
    while (running_) {
        uint32_t data = async_buffer->Read(context);
        // do other stuff
    }
}
Read is a call which asynchronously waits until there is data to be read, then returns that data. If I want to delete this instance of MyClass, how can I make sure I do so properly? Let's say that the asynchronous wait here is performed via a deadline_timer's async_wait. If I cancel the event, I still have to wait for the thread to finish executing the "other stuff" before I know things are in a good state (I can't join the thread, as it's a thread that belongs to the io service that may also be handling other jobs). I could do something like this:
MyClass::~MyClass() {
    running_ = false;
    read_event->CancelEvent(); // some way to cancel the deadline_timer the Read is waiting on
    boost::mutex::scoped_lock lock(finished_mutex_);
    if (!finished_) {
        cond_.wait(lock);
    }
    // any other cleanup
}
void MyClass::do_stuff(boost::asio::yield_context context) {
    while (running_) {
        uint32_t data = async_buffer->Read(context);
        // do other stuff
    }
    boost::mutex::scoped_lock lock(finished_mutex_);
    finished_ = true;
    cond_.notify_one();
}
But I'm hoping to make these stackful coroutines as easy to use as possible, and it's not straightforward for people to recognize that this condition exists and what would need to be done to make sure things are cleaned up properly. Is there a better way? Is what I'm trying to do here wrong at a more fundamental level?
Also, for the event (what I have is basically the same as Tanner's answer here) I need to cancel it in a way that I'd have to keep some extra state (a true cancel vs. the normal cancel used to fire the event) -- which wouldn't be appropriate if there were multiple pieces of logic waiting on that same event. Would love to hear if there's a better way to model the asynchronous event to be used with a coroutine suspend/resume.
Thanks.
EDIT: Thanks @Sehe, I took a shot at a working example; I think this illustrates what I'm getting at:
class AsyncBuffer {
public:
    AsyncBuffer(boost::asio::io_service& io_service) :
            write_event_(io_service) {
        write_event_.expires_at(boost::posix_time::pos_infin);
    }
    void Write(uint32_t data) {
        buffer_.push_back(data);
        write_event_.cancel();
    }
    uint32_t Read(boost::asio::yield_context context) {
        if (buffer_.empty()) {
            write_event_.async_wait(context);
        }
        uint32_t data = buffer_.front();
        buffer_.pop_front();
        return data;
    }
protected:
    boost::asio::deadline_timer write_event_;
    std::list<uint32_t> buffer_;
};
class MyClass {
public:
    MyClass(boost::asio::io_service& io_service) :
            running_(false), io_service_(io_service), buffer_(io_service) {
    }
    void Run(boost::asio::yield_context context) {
        while (running_) {
            boost::system::error_code ec;
            uint32_t data = buffer_.Read(context[ec]);
            // do something with data
        }
    }
    void Write(uint32_t data) {
        buffer_.Write(data);
    }
    void Start() {
        running_ = true;
        boost::asio::spawn(io_service_, boost::bind(&MyClass::Run, this, _1));
    }
protected:
    boost::atomic_bool running_;
    boost::asio::io_service& io_service_;
    AsyncBuffer buffer_;
};
So here, let's say that the buffer is empty and MyClass::Run is currently suspended while making a call to Read, so there's a deadline_timer.async_wait that's waiting for the event to fire to resume that context. It's time to destroy this instance of MyClass, so how do we make sure that it gets done cleanly?
A more typical approach would be to use boost::enable_shared_from_this with MyClass, and run the methods as bound to the shared pointer.
Boost Bind supports binding to boost::shared_ptr<MyClass> transparently.
This way, you can automatically have the destructor run only when the last user disappears.
If you create a SSCCE, I'm happy to change it around, to show what I mean.
UPDATE
To the SSCCE, some remarks:
I imagined a pool of threads running the IO service
The way in which MyClass calls into AsyncBuffer member functions directly is not thread-safe. There is actually no thread-safe way to cancel the event outside the producer thread[1], since the producer already accesses the buffer for Write-ing. This could be mitigated using a strand (in the current setup I don't see how MyClass would otherwise be thread-safe). Alternatively, look at the active object pattern (for which Tanner has an excellent answer[2] on SO).
I chose the strand approach here, for simplicity, so we do:
void MyClass::Write(uint32_t data) {
    strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
}
You ask
Also, for the event (what I have is basically the same as Tanner's answer here) I need to cancel it in a way that I'd have to keep some extra state (a true cancel vs. the normal cancel used to fire the event)
The most natural place for this state is the usual one for the deadline_timer: its deadline. Stopping the buffer is done by resetting the timer:
void AsyncBuffer::Stop() { // not threadsafe!
    write_event_.expires_from_now(boost::posix_time::seconds(-1));
}
This at once cancels the timer, but is detectable because the deadline is in the past.
Here's a simple demo with a group of IO service threads, one "producer coroutine" that produces random numbers, and a "sniper thread" that snipes the MyClass::Run coroutine after 2 seconds. The main thread is the sniper thread.
See it Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/async_result.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <list>
#include <iostream>

// for refcounting:
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>

namespace asio = boost::asio;

class AsyncBuffer {
    friend class MyClass;
protected:
    AsyncBuffer(boost::asio::io_service &io_service) : write_event_(io_service) {
        write_event_.expires_at(boost::posix_time::pos_infin);
    }
    void Write(uint32_t data) {
        buffer_.push_back(data);
        write_event_.cancel();
    }
    uint32_t Read(boost::asio::yield_context context) {
        if (buffer_.empty()) {
            boost::system::error_code ec;
            write_event_.async_wait(context[ec]);
            if (ec != boost::asio::error::operation_aborted || write_event_.expires_from_now().is_negative())
            {
                if (context.ec_)
                    *context.ec_ = boost::asio::error::operation_aborted;
                return 0;
            }
        }
        uint32_t data = buffer_.front();
        buffer_.pop_front();
        return data;
    }
    void Stop() {
        write_event_.expires_from_now(boost::posix_time::seconds(-1));
    }
private:
    boost::asio::deadline_timer write_event_;
    std::list<uint32_t> buffer_;
};

class MyClass : public boost::enable_shared_from_this<MyClass> {
    boost::atomic_bool stopped_;
public:
    MyClass(boost::asio::io_service &io_service) : stopped_(false), buffer_(io_service), strand_(io_service) {}
    void Run(boost::asio::yield_context context) {
        while (!stopped_) {
            boost::system::error_code ec;
            uint32_t data = buffer_.Read(context[ec]);
            if (ec == boost::asio::error::operation_aborted)
                break;
            // do something with data
            std::cout << data << " " << std::flush;
        }
        std::cout << "EOF\n";
    }
    bool Write(uint32_t data) {
        if (!stopped_) {
            strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
        }
        return !stopped_;
    }
    void Start() {
        if (!stopped_) {
            stopped_ = false;
            boost::asio::spawn(strand_, boost::bind(&MyClass::Run, shared_from_this(), _1));
        }
    }
    void Stop() {
        stopped_ = true;
        strand_.post(boost::bind(&AsyncBuffer::Stop, &buffer_));
    }
    ~MyClass() {
        std::cout << "MyClass destructed because no coroutines hold a reference to it anymore\n";
    }
protected:
    AsyncBuffer buffer_;
    boost::asio::strand strand_;
};

int main()
{
    boost::thread_group tg;
    asio::io_service svc;
    {
        // Start the consumer:
        auto instance = boost::make_shared<MyClass>(svc);
        instance->Start();
        // Sniper in 2 seconds :)
        boost::thread([instance]{
            boost::this_thread::sleep_for(boost::chrono::seconds(2));
            instance->Stop();
        }).detach();
        // Start the producer:
        auto producer_coro = [instance, &svc](asio::yield_context c) { // a bound function/function object in C++03
            asio::deadline_timer tim(svc);
            while (instance->Write(rand())) {
                tim.expires_from_now(boost::posix_time::milliseconds(200));
                tim.async_wait(c);
            }
        };
        asio::spawn(svc, producer_coro);
        // Start the service threads:
        for (size_t i = 0; i < boost::thread::hardware_concurrency(); ++i)
            tg.create_thread(boost::bind(&asio::io_service::run, &svc));
    }
    // now `instance` is out of scope, it will self-destruct after the snipe
    // completed
    boost::this_thread::sleep_for(boost::chrono::seconds(3)); // wait longer than the snipe
    std::cout << "This is the main thread _after_ MyClass self-destructed correctly\n";
    // cleanup service threads
    tg.join_all();
}
[1] logical thread, this could be a coroutine that gets resumed on different threads
[2] boost::asio and Active Object

How to handle thread-safe callback registration and execution in C++?

For example, I have an EventGenerator class that calls IEventHandler::onEvent for all registered event handlers:
class IEventHandler {
public:
    virtual void onEvent(...) = 0;
};
class EventGenerator {
private:
    std::vector<IEventHandler*> _handlers;
    std::mutex _mutex; // [1]
public:
    void AddHandler(IEventHandler* handler) {
        std::lock_guard<std::mutex> lck(_mutex); // [2]
        _handlers.push_back(handler);
    }
    void RemoveHandler(IEventHandler* handler) {
        std::lock_guard<std::mutex> lck(_mutex); // [3]
        // remove from "_handlers"
    }
private:
    void threadMainTask() {
        while (true) {
            // Do some work ...
            // Post event to all registered handlers
            {
                std::lock_guard<std::mutex> lck(_mutex); // [4]
                for (auto& h : _handlers) { h->onEvent(...); }
            }
            // Do some work ...
        }
    }
};
The code should be thread-safe in the following manner:
one thread executes EventGenerator::threadMainTask
many threads might access the EventGenerator::AddHandler and EventGenerator::RemoveHandler APIs.
To support this, I have the following synchronization (see the comments in the code):
[1] is the mutex that protects the vector _handlers from multi-threaded access.
[2] and [3] protect against adding and removing handlers simultaneously.
[4] prevents changing the vector while the main thread is posting events.
This code works until... if, for some reason, during the execution of IEventHandler::onEvent(...) the code tries to call EventGenerator::RemoveHandler or EventGenerator::AddHandler, the result is a runtime failure: the non-recursive _mutex is locked again by the thread that already holds it.
What is the best approach for handling registration of the event handlers and executing the event handler callbacks in a thread-safe manner?
>> UPDATE <<
So based on the inputs, I've updated to the following design:
class IEventHandler {
public:
    virtual void onEvent(...) = 0;
};
class EventDelegate {
private:
    IEventHandler* _handler;
    std::atomic<bool> _cancelled;
public:
    EventDelegate(IEventHandler* h) : _handler(h), _cancelled(false) {};
    void Cancel() { _cancelled = true; }
    void Invoke(...) { if (!_cancelled) _handler->onEvent(...); }
};
class EventGenerator {
private:
    std::vector<std::shared_ptr<EventDelegate>> _handlers;
    std::mutex _mutex;
public:
    void AddHandler(std::shared_ptr<EventDelegate> handler) {
        std::lock_guard<std::mutex> lck(_mutex);
        _handlers.push_back(handler);
    }
    void RemoveHandler(std::shared_ptr<EventDelegate> handler) {
        std::lock_guard<std::mutex> lck(_mutex);
        // remove from "_handlers"
    }
private:
    void threadMainTask() {
        while (true) {
            // Do some work ...
            std::vector<std::shared_ptr<EventDelegate>> handlers_copy;
            {
                std::lock_guard<std::mutex> lck(_mutex);
                handlers_copy = _handlers;
            }
            for (auto& h : handlers_copy) { h->Invoke(...); }
            // Do some work ...
        }
    }
};
As you can see, there is an additional class EventDelegate that has two purposes:
hold the event callback
enable cancelling the callback
In threadMainTask, I'm using a local copy of the std::vector<std::shared_ptr<EventDelegate>> and releasing the lock before invoking the callbacks. This approach solves the issue where EventGenerator::{AddHandler,RemoveHandler} is called during IEventHandler::onEvent(...).
Any thoughts about the new design?
A copy-on-write vector implemented with atomic swaps of shared_ptrs (under the assumption that callback registration occurs far less frequently than the events the callbacks are notified about):
using callback_t = std::shared_ptr<std::function<void(event_t const&)>>;
using callbacks_t = std::shared_ptr<std::vector<callback_t>>;
callbacks_t callbacks_;
mutex_t mutex_; // a mutex of your choice

// note: renamed from `register`, which is a reserved keyword in C++
void register_callback(callback_t cb)
{
    // the mutex is to serialize concurrent callback registrations;
    // this is not always necessary, as depending on the application
    // architecture, a single writer may be enforced by design
    scoped_lock lock(mutex_);
    auto callbacks = std::atomic_load(&callbacks_);
    auto new_callbacks = std::make_shared<std::vector<callback_t>>();
    new_callbacks->reserve(callbacks->size() + 1);
    *new_callbacks = *callbacks; // copy the vector, not the pointer
    new_callbacks->push_back(std::move(cb));
    std::atomic_store(&callbacks_, new_callbacks);
}
void invoke(event_t const& evt)
{
    auto callbacks = std::atomic_load(&callbacks_);
    // many people wrap each callback invocation into a try-catch
    // and de-register on exception
    for (auto& cb : *callbacks) (*cb)(evt);
}
Specifically on the subject of asynchronous behavior when a callback is executed while being de-registered: the best approach here is to remember the Separation of Concerns principle.
The callback should not be able to die until it has been executed. This is achieved via another classic trick, an "extra level of indirection". Namely, instead of registering the user-provided callback directly, one wraps it in something like the class below. Callback de-registration, apart from updating the vector, calls the discharge() method defined below on the wrapper, which also tells the de-registering caller whether the callback had already executed.
template <class CB> struct cb_wrapper
{
    mutable std::atomic<bool> done_{false}; // note: initialized, unlike the original
    CB cb_;
    cb_wrapper(CB&& cb) : cb_(std::move(cb)) {} // note: the original had the initializer swapped
    bool discharge()
    {
        bool not_done = false;
        return done_.compare_exchange_strong(not_done, true);
    }
    void operator()(event_t const& evt)
    {
        if (discharge())
        {
            cb_(evt);
        }
    }
};
I can't see a correct approach here. From your update I can see a problem: you are not synchronizing the Invoke method with callback removal. There's an atomic, but it's not enough. Example: just after this line of code:
if (!_cancelled)
another thread calls the remove method. What can happen is that onEvent() is called anyway, even if the remove method has already removed the callback from the list and returned; there is nothing that keeps this execution flow synchronized. The same problem applies to @bobah's answer.
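If the guarantee you need is "after Cancel() returns, onEvent can no longer be executing", the delegate needs a lock rather than a bare flag. A minimal sketch:
class EventDelegate {
private:
    IEventHandler* _handler;
    bool _cancelled = false;
    std::mutex _m;
public:
    explicit EventDelegate(IEventHandler* h) : _handler(h) {}
    void Cancel() {
        std::lock_guard<std::mutex> lck(_m);
        _cancelled = true; // once we hold _m, no Invoke is mid-flight
    }
    void Invoke(/*...*/) {
        std::lock_guard<std::mutex> lck(_m);
        if (!_cancelled) _handler->onEvent(/*...*/);
    }
};
The trade-off: Cancel() now blocks for the duration of a running onEvent, and calling Cancel() from inside onEvent on the same thread would deadlock a std::mutex, so a recursive_mutex or deferred removal would be needed in that case.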