Boost asio handler that does not keep the io_service running - c++

I want to add a signal handler to my boost io_service, allowing the application to shut down cleanly when the user presses Ctrl-C. This is of course easily done by stopping the loop, something like this:
boost::asio::io_service service;
boost::asio::signal_set signals{ service, SIGINT, SIGTERM };
signals.async_wait(std::bind(&boost::asio::io_service::stop, &service));
This stops the loop normally, allowing the destructors to do their routine clean-up behaviour.
The problem is that once the application runs out of work it does not stop, because the signal set still has a handler registered, and thus the io_service never stops running.
I have not found a clean way around this. I could of course do the signal handling myself and then just stop the loop, but this kind of defeats the idea of using boost (portability).

In the following code, http_server has a "listening socket" to accept multiple connections. The listening socket constantly runs async_accept so the io_service never runs out of work. The http_server.shutdown() function closes the listening socket and all open connections, so the io_service has no more work and stops running:
void handle_stop(ASIO_ERROR_CODE const&, // error,
int, // signal_number,
http_server_type& http_server)
{
std::cout << "Shutting down" << std::endl;
http_server.shutdown();
}
...
ASIO::io_service io_service;
http_server_type http_server(io_service);
...
// The signal set is used to register termination notifications
ASIO::signal_set signals_(io_service);
signals_.add(SIGINT);
signals_.add(SIGTERM);
#if defined(SIGQUIT)
signals_.add(SIGQUIT);
#endif // #if defined(SIGQUIT)
// register the handle_stop callback
signals_.async_wait([&http_server]
(ASIO_ERROR_CODE const& error, int signal_number)
{ handle_stop(error, signal_number, http_server); });
...
io_service.run();
std::cout << "io_service.run complete, shutdown successful" << std::endl;
This method also works for thread pools, see: thread_pool_http_server.cpp
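For a thread pool, the only difference is that several threads call run() on the same io_service; the shutdown logic is unchanged. A minimal sketch (assuming C++11 <thread> and <vector>; io_service and http_server as above):
std::vector<std::thread> pool;
for (int i = 0; i != 4; ++i)
    pool.emplace_back([&io_service] { io_service.run(); });
// the signal handler calls http_server.shutdown() as before; once the
// listening socket and connections are closed, every run() call returns
for (auto& t : pool)
    t.join();
std::cout << "all io_service.run() calls complete, shutdown successful" << std::endl;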

I am probably going to hell for this, but I found a workaround to get a handler that doesn't count towards the number of running handlers. It seriously abuses the work guard boost provides, calls destructors by hand, and misuses placement new, but it works.
#pragma once
#include <boost/asio/io_service.hpp>
#include <utility>
#include <memory>
template <typename HANDLER>
class unwork
{
public:
    unwork(boost::asio::io_service &service, HANDLER &&handler) :
        _work_guard(std::make_unique<boost::asio::io_service::work>(service)),
        _handler(std::forward<HANDLER>(handler))
    {
        // post a job that runs once the handler has been installed
        service.post([work_guard = _work_guard.operator->()]() {
            // destroy the work guard twice: once to remove its own
            // outstanding-work count, and once more to cancel out the
            // count added by the now-installed handler
            work_guard->~work();
            work_guard->~work();
        });
    }
    unwork(const unwork &that) :
        // copying re-registers: the delegated constructor posts the balancing job again
        unwork(that._work_guard->get_io_service(), HANDLER(that._handler))
    {}
    unwork(unwork &&that) :
        _work_guard(std::move(that._work_guard)),
        _handler(std::move(that._handler))
    {}
    ~unwork()
    {
        // was the work guard not moved out?
        if (_work_guard) {
            // add the work-guard count and the handler count back, so that the
            // unique_ptr's deleter and the handler removal balance them again
            new (_work_guard.operator->()) boost::asio::io_service::work{ _work_guard->get_io_service() };
            new (_work_guard.operator->()) boost::asio::io_service::work{ _work_guard->get_io_service() };
        }
    }
    template <class ...Arguments>
    auto operator()(Arguments &&...parameters)
    {
        return _handler(std::forward<Arguments>(parameters)...);
    }
private:
    std::unique_ptr<boost::asio::io_service::work> _work_guard;
    HANDLER _handler;
};
// factory function, for pre-C++17 code (no class template argument deduction)
template <typename HANDLER>
unwork<HANDLER> make_unwork(boost::asio::io_service &service, HANDLER &&handler)
{
    // create the new unwork wrapper
    return { service, std::forward<HANDLER>(handler) };
}
It is used by wrapping your handler in a make_unwork() call if you are using C++14. In C++17, class template argument deduction lets you use the unwork constructor directly.
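Applied to the signal handler from the question, usage looks something like this (a minimal sketch; service is the io_service from above):
boost::asio::signal_set signals{ service, SIGINT, SIGTERM };
// the wrapped handler no longer counts towards the io_service's outstanding
// work, so run() returns as soon as the real work is finished
signals.async_wait(make_unwork(service, [&service](boost::system::error_code const&, int) {
    service.stop();
}));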


How to design proper release of a boost::asio socket or wrapper thereof

I am making a few attempts at writing my own simple async TCP server using boost::asio after not having touched it for several years.
The latest example listing I can find is:
http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html
The problem I have with this example listing is that (I feel) it cheats and it cheats big, by making the tcp_connection a shared_ptr, such that it doesn't worry about the lifetime management of each connection. (I think) They do this for brevity, since it is a small tutorial, but that solution is not real world.
What if you wanted to send a message to each client on a timer, or something similar? A collection of client connections is going to be necessary in any real world non-trivial server.
I am worried about the lifetime management of each connection. I figure the natural thing to do would be to keep some collection of tcp_connection objects or pointers to them inside tcp_server. Adding to that collection from the OnConnect callback and removing from that collection OnDisconnect.
Note that OnDisconnect would most likely be called from an actual Disconnect method, which in turn would be called from OnReceive callback or OnSend callback, in the case of an error.
Well, therein lies the problem.
Consider we'd have a callstack that looked something like this:
tcp_connection::~tcp_connection
tcp_server::OnDisconnect
tcp_connection::OnDisconnect
tcp_connection::Disconnect
tcp_connection::OnReceive
This would cause errors as the call stack unwinds and we end up executing code in an object that has had its destructor called... I think, right?
I imagine everyone doing server programming comes across this scenario in some fashion. What is a strategy for handling it?
I hope the explanation is good enough to follow. If not let me know and I will create my own source listing, but it will be very large.
Edit:
Related: Memory management in asynchronous C++ code
IMO not an acceptable answer; it relies on cheating with shared_ptrs outstanding on receive calls and nothing more, and is not real world. What if the server wanted to say "Hi" to all clients every 5 minutes? A collection of some kind is necessary. What if you are calling io_service.run on multiple threads?
I am also asking on the boost mailing list:
http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html
Like I said, I fail to see how using smart pointers is "cheating, and cheating big". I also do not think your assessment that "they do this for brevity" holds water.
Here's a slightly redacted excerpt¹ from our code base that exemplifies how using shared_ptrs doesn't preclude tracking connections.
It shows just the server side of things, with
a very simple connection object in connection.hpp; this uses enable_shared_from_this
just the fixed size connection_pool (we have dynamically resizing pools too, hence the locking primitives). Note how we can do actions on all active connections.
So you'd trivially write something like this to write to all clients, like on a timer (a timer-driven sketch follows this list):
_pool.for_each_active([] (auto const& conn) {
send_message(conn, hello_world_packet);
});
a sample listener that shows how it ties in with the connection_pool (which has a sample method to close all connections)
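Hooked up to a timer, that could look something like the sketch below (hypothetical broadcast_loop; send_message and hello_world_packet are assumed from our code base, as above):
void broadcast_loop(boost::asio::deadline_timer& timer, server_connection_pool& pool)
{
    timer.expires_from_now(boost::posix_time::minutes(5));
    timer.async_wait([&](boost::system::error_code ec) {
        if (ec) return; // e.g. cancelled at shutdown
        pool.for_each_active([](auto const& conn) {
            send_message(conn, hello_world_packet);
        });
        broadcast_loop(timer, pool); // re-arm for the next round
    });
}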
Code Listings
connection.hpp
#pragma once
#include "xxx/net/rpc/protocol.hpp"
#include "log.hpp"
#include "stats_filer.hpp"
#include <memory>
namespace xxx { namespace net { namespace rpc {
struct connection : std::enable_shared_from_this<connection>, protected LogSource {
typedef std::shared_ptr<connection> ptr;
private:
friend struct io;
friend struct listener;
boost::asio::io_service& _svc;
protocol::socket _socket;
protocol::endpoint _ep;
protocol::endpoint _peer;
public:
connection(boost::asio::io_service& svc, protocol::endpoint ep)
: LogSource("rpc::connection"),
_svc(svc),
_socket(svc),
_ep(ep)
{}
void init() {
_socket.set_option(protocol::no_delay(true));
_peer = _socket.remote_endpoint();
g_stats_filer_p->inc_value("asio." + _ep.address().to_string() + ".sockets_accepted");
debug() << "New connection from " << _peer;
}
protocol::endpoint endpoint() const { return _ep; }
protocol::endpoint peer() const { return _peer; }
protocol::socket& socket() { return _socket; }
// TODO encapsulation
int handle() {
return _socket.native_handle();
}
bool valid() const { return _socket.is_open(); }
void cancel() {
_svc.post([this] { _socket.cancel(); });
}
using shutdown_type = boost::asio::ip::tcp::socket::shutdown_type;
void shutdown(shutdown_type what = shutdown_type::shutdown_both) {
_svc.post([=] { _socket.shutdown(what); });
}
~connection() {
g_stats_filer_p->inc_value("asio." + _ep.address().to_string() + ".sockets_disconnected");
}
};
} } }
connection_pool.hpp
#pragma once
#include <mutex>
#include <algorithm>
#include <functional>
#include <vector>
#include "xxx/threads/null_mutex.hpp"
#include "xxx/net/rpc/connection.hpp"
#include "stats_filer.hpp"
#include "log.hpp"
namespace xxx { namespace net { namespace rpc {
// not thread-safe by default, but pass e.g. std::mutex for `Mutex` if you need it
template <typename Ptr = xxx::net::rpc::connection::ptr, typename Mutex = xxx::threads::null_mutex>
struct basic_connection_pool : LogSource {
using WeakPtr = std::weak_ptr<typename Ptr::element_type>;
basic_connection_pool(size_t size, std::string name = "connection_pool")
: LogSource(std::move(name)), _pool(size)
{ }
bool try_insert(Ptr const& conn) {
std::lock_guard<Mutex> lk(_mx);
auto slot = std::find_if(_pool.begin(), _pool.end(), std::mem_fn(&WeakPtr::expired));
if (slot == _pool.end()) {
g_stats_filer_p->inc_value("asio." + conn->endpoint().address().to_string() + ".connections_dropped");
error() << "dropping connection from " << conn->peer() << ": connection pool (" << _pool.size() << ") saturated";
return false;
}
*slot = conn;
return true;
}
template <typename F>
void for_each_active(F action) {
auto locked = [=] {
using namespace std;
lock_guard<Mutex> lk(_mx);
vector<Ptr> locked(_pool.size());
transform(_pool.begin(), _pool.end(), locked.begin(), mem_fn(&WeakPtr::lock));
return locked;
}();
for (auto const& p : locked)
if (p) action(p);
}
constexpr static bool synchronizing() {
return not std::is_same<xxx::threads::null_mutex, Mutex>();
}
private:
void dump_stats(LogSource::LogTx tx) const {
// lock is assumed!
size_t empty = 0, busy = 0, idle = 0;
for (auto& p : _pool) {
switch (p.use_count()) {
case 0: empty++; break;
case 1: idle++; break;
default: busy++; break;
}
}
tx << "usage empty:" << empty << " busy:" << busy << " idle:" << idle;
}
Mutex _mx;
std::vector<WeakPtr> _pool;
};
// TODO FIXME use null_mutex once growing is no longer required AND if
// en-pooling still only happens from the single IO thread (XXX-2535)
using server_connection_pool = basic_connection_pool<xxx::net::rpc::connection::ptr, std::mutex>;
} } }
listener.hpp
#pragma once
#include "xxx/threads/null_mutex.hpp"
#include <mutex>
#include <functional>
#include <fcntl.h>
#include "xxx/net/rpc/connection_pool.hpp"
#include "xxx/net/rpc/io_operations.hpp"
namespace xxx { namespace net { namespace rpc {
struct listener : std::enable_shared_from_this<listener>, LogSource {
typedef std::shared_ptr<listener> ptr;
protocol::acceptor _acceptor;
protocol::endpoint _ep;
listener(boost::asio::io_service& svc, protocol::endpoint ep, server_connection_pool& pool)
: LogSource("rpc::listener"), _acceptor(svc), _ep(ep), _pool(pool)
{
_acceptor.open(ep.protocol());
_acceptor.set_option(protocol::acceptor::reuse_address(true));
_acceptor.set_option(protocol::no_delay(true));
::fcntl(_acceptor.native_handle(), F_SETFD, FD_CLOEXEC); // FIXME use non-racy socket factory?
_acceptor.bind(ep);
_acceptor.listen(32);
}
void accept_loop(std::function<void(connection::ptr conn)> on_accept) {
auto self = shared_from_this();
auto conn = std::make_shared<xxx::net::rpc::connection>(_acceptor.get_io_service(), _ep);
_acceptor.async_accept(conn->_socket, [this,self,conn,on_accept](boost::system::error_code ec) {
if (ec) {
auto tx = ec == boost::asio::error::operation_aborted? debug() : warn();
tx << "failed accept " << ec.message();
} else {
::fcntl(conn->_socket.native_handle(), F_SETFD, FD_CLOEXEC); // FIXME use non-racy socket factory?
if (_pool.try_insert(conn)) {
on_accept(conn);
}
self->accept_loop(on_accept);
}
});
}
void close() {
_acceptor.cancel();
_acceptor.close();
_acceptor.get_io_service().post([=] {
_pool.for_each_active([] (auto const& sp) {
sp->shutdown(connection::shutdown_type::shutdown_both);
sp->cancel();
});
});
debug() << "shutdown";
}
~listener() {
}
private:
server_connection_pool& _pool;
};
} } }
¹ download as gist https://gist.github.com/sehe/979af25b8ac4fd77e73cdf1da37ab4c2
While others have answered similarly to the second half of this answer, the most complete answer I could find came from asking the same question on the Boost mailing list.
http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html
I will summarize here in order to assist those that arrive here from a search in the future.
There are two options:
1) Close the socket in order to cancel any outstanding io, then post a callback for the post-disconnection logic on the io_service, and let the server class be called back when the socket has been disconnected. It can then safely release the connection. As long as only one thread had called io_service::run, other asynchronous operations will already have been resolved when the callback is made. However, if multiple threads had called io_service::run, this is not safe.
2) As others have been pointing out in their answers, using a shared_ptr to manage the connection's lifetime, with outstanding io operations keeping it alive, is viable. We can use a collection of weak_ptrs to the connections in order to access them when we need to. The latter is the tidbit that had been omitted from other posts on the topic, which confused me.
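A minimal sketch of option 2 (the connection type is only forward-declared here; single-threaded access to the collection is assumed):
#include <memory>
#include <vector>

struct connection; // your shared_ptr-managed connection type

std::vector<std::weak_ptr<connection>> clients;

// keep a non-owning reference when a client connects
void on_connect(std::shared_ptr<connection> const& conn)
{
    clients.push_back(conn);
}

// visit the connections that are still alive, e.g. from a timer
template <typename F>
void for_each_client(F f)
{
    for (auto it = clients.begin(); it != clients.end(); )
    {
        if (auto conn = it->lock()) { f(conn); ++it; }
        else it = clients.erase(it); // expired: drop the stale entry
    }
}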
The way that asio solves the "deletion problem" where there are outstanding async methods is that it splits each async-enabled object into 3 classes, e.g.:
server
server_service
server_impl
There is one service per io_loop (see use_service<>). The service creates an impl for the server, which is now a handle class.
This has separated the lifetime of the handle and the lifetime of the implementation.
Now, in the handle's destructor, a message can be sent (via the service) to the impl to cancel all outstanding IO.
The handle's destructor is free to wait for those io calls to be queued if necessary (for example if the server's work is being delegated to a background io loop or thread pool).
It has become a habit of mine to implement all io_service-enabled objects this way, as it makes coding with asio very much simpler.
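A bare-bones sketch of that handle/impl split (illustrative only; a real asio service would also be registered via use_service<>):
#include <boost/asio.hpp>
#include <memory>

// the impl owns the resources and the outstanding operations
struct server_impl : std::enable_shared_from_this<server_impl>
{
    explicit server_impl(boost::asio::io_service& svc) : _svc(svc) {}
    void cancel_all() { /* close sockets, cancel timers, ... */ }
    boost::asio::io_service& _svc;
};

// the handle only refers to the impl; destroying the handle does not
// destroy the impl while posted handlers still hold shared_ptrs to it
class server
{
public:
    explicit server(boost::asio::io_service& svc)
        : _impl(std::make_shared<server_impl>(svc)) {}
    ~server()
    {
        // ask the impl to cancel its outstanding IO; the impl itself dies
        // once the last handler referencing it has run
        auto impl = _impl;
        impl->_svc.post([impl] { impl->cancel_all(); });
    }
private:
    std::shared_ptr<server_impl> _impl;
};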
Connection lifetime is a fundamental issue with boost::asio. Speaking from experience, I can assure you that getting it wrong causes "undefined behaviour"...
The asio examples use shared_ptr to ensure that a connection is kept alive whilst it may have outstanding handlers in an asio::io_service. Note that even in a single thread, an asio::io_service runs asynchronously to the application code, see CppCon 2016: Michael Caisse "Asynchronous IO with Boost.Asio" for an excellent description of the precise mechanism.
A shared_ptr enables the lifetime of a connection to be controlled by the shared_ptr instance count. IMHO it's not "cheating and cheating big"; it's an elegant solution to a complicated problem.
However, I agree with you that just using shared_ptr's to control connection lifetimes is not a complete solution since it can lead to resource leaks.
In my answer here: Boost async_* functions and shared_ptr's, I proposed using a combination of shared_ptr and weak_ptr to manage connection lifetimes. An HTTP server using a combination of shared_ptr's and weak_ptr's can be found here: via-httplib.
The HTTP server is built upon an asynchronous TCP server which uses a collection of (shared_ptr's to) connections, created on connects and destroyed on disconnects as you propose.

boost::asio io_service blocks the rest of my code from running

I have created a Windows application that uses some Qt GUI code to display a widget. Now I want to add boost::asio async TCP code to receive and send data to/from another application.
When I write the code below in my main(), this is what happens:
//Code to initialize QT widgets and working fine.
try
{
boost::asio::io_service io_service;
server s(io_service, 8888); //8888 is a port no.
io_service.run(); // **Even after successfull creation it doesn't look for incoming data**
}
catch(std::exception& e)
{
std::cout << "Exception : " << e.what() << std::endl;
}
//rest of the code for qt widget, which will be blocked by io_service.
I have tried poll() as well to avoid the blocking, but then it does not wait for any incoming data.
Is there any way to achieve both at the same time?
Regards,
Jithendra.
io_service needs a thread of its own so that it does not block the rest of your code. In addition, there is an object called boost::asio::io_service::work that keeps run() from returning when the io_service runs out of work.
Here's how I usually set up an io_service to run:
IoServiceWork.h:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
class IoServiceWork
{
public:
IoServiceWork()
: m_ioService(new boost::asio::io_service()),
m_ioServiceWork(new boost::asio::io_service::work(*m_ioService)),
m_ioWorkThread(new boost::thread(boost::bind(&boost::asio::io_service::run, m_ioService)))
{
}
~IoServiceWork()
{
delete m_ioServiceWork; // destroying the work object allows run() to return
m_ioWorkThread->join(); // wait for the io thread to finish
delete m_ioWorkThread;
delete m_ioService;
}
boost::asio::io_service& ioService()
{
return *m_ioService;
}
private:
boost::asio::io_service* m_ioService;
boost::asio::io_service::work* m_ioServiceWork;
boost::thread* m_ioWorkThread;
};
Then I access my static global io_service object from any .cpp file in my project using this function.
CustomIOService.h:
#include <boost/asio.hpp>
boost::asio::io_service& IOService();
CustomIOService.cpp:
#include "IoServiceWork.h"
boost::asio::io_service& IOService()
{
static IoServiceWork ioServiceWork;
return ioServiceWork.ioService();
}
It is put as a static object inside a function to avoid the static initialization order fiasco.
Then, to create your socket, or any other object requiring an io_service, such as server in your case, all you have to do is:
#include "CustomIOService.h"
server s(IOService(), 8888);

boost asio udp socket async_receive_from does not call the handler

I want to create an autonomous thread devoted only to receiving data from a UDP socket using the boost libraries (asio). This thread should be an infinite loop triggered by data received on the UDP socket. In my application I need to use an asynchronous receive operation.
If I use the synchronous function receive_from everything works as expected.
However if I use async_receive_from the handler is never called. Since I use a semaphore to detect that some data have been received, the program locks and the loop is never triggered.
I have verified (with a network analyzer) that the sender device properly sends the data on the UDP socket.
I have isolated the problem in the following code.
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/interprocess/sync/interprocess_semaphore.hpp>
#include <iostream>
typedef boost::interprocess::interprocess_semaphore Semaphore;
using namespace boost::asio::ip;
class ReceiveUDP
{
public:
boost::thread* m_pThread;
boost::asio::io_service m_io_service;
udp::endpoint m_local_endpoint;
udp::endpoint m_sender_endpoint;
udp::socket m_socket;
size_t m_read_bytes;
Semaphore m_receive_semaphore;
ReceiveUDP() :
m_socket(m_io_service),
m_local_endpoint(boost::asio::ip::address::from_string("192.168.0.254"), 11),
m_sender_endpoint(boost::asio::ip::address::from_string("192.168.0.11"), 5550),
m_receive_semaphore(0)
{
Start();
}
void Start()
{
m_pThread = new boost::thread(&ReceiveUDP::_ThreadFunction, this);
}
void _HandleReceiveFrom(
const boost::system::error_code& error,
size_t received_bytes)
{
m_receive_semaphore.post();
m_read_bytes = received_bytes;
}
void _ThreadFunction()
{
try
{
boost::array<char, 100> recv_buf;
m_socket.open(udp::v4());
m_socket.bind(m_local_endpoint);
m_io_service.run();
while (1)
{
#if 1 // THIS WORKS
m_read_bytes = m_socket.receive_from(
boost::asio::buffer(recv_buf), m_sender_endpoint);
#else // THIS DOESN'T WORK
m_socket.async_receive_from(
boost::asio::buffer(recv_buf),
m_sender_endpoint,
boost::bind(&ReceiveUDP::_HandleReceiveFrom, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
/* The program locks on this wait since _HandleReceiveFrom
is never called. */
m_receive_semaphore.wait();
#endif
std::cout.write(recv_buf.data(), m_read_bytes);
}
m_socket.close();
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
}
};
int main()
{
ReceiveUDP receive_thread;
receive_thread.m_pThread->join();
}
A timed_wait on the semaphore would be preferable; however, for debug purposes I have used a blocking wait in the code above.
Did I miss something? Where is my mistake?
Your call to io_service.run() is exiting because there is no work for the io_service to do. The code then enters the while loop and calls m_socket.async_receive_from. At this point the io_service is not running, ergo it never reads the data and never calls your handler.
You need to schedule the work before calling io_service::run, i.e. (inside ReceiveUDP, before m_io_service.run() is called):
// configure the socket and schedule the first asynchronous read
m_socket.open(udp::v4());
m_socket.bind(m_local_endpoint);
m_socket.async_receive_from(
boost::asio::buffer(recv_buf),
m_sender_endpoint,
boost::bind(&ReceiveUDP::HandleReceiveFrom, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
The handler stores the result, wakes the waiting thread, and schedules the next read; m_io_service.run() is only called after the first read has been scheduled:
void HandleReceiveFrom(
const boost::system::error_code& error,
size_t received_bytes)
{
m_read_bytes = received_bytes; // store the result before waking the consumer
m_receive_semaphore.post();
// schedule the next asynchronous read
m_socket.async_receive_from(
boost::asio::buffer(recv_buf),
m_sender_endpoint,
boost::bind(&ReceiveUDP::HandleReceiveFrom, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
Your thread then simply waits for the semaphore:
while (1)
{
m_receive_semaphore.wait();
std::cout.write(recv_buf.data(), m_read_bytes);
}
Notes:
Do you really need this additional thread? The handler is completely asynchronous, and boost::asio can be used to manage a thread pool (see: think-async)
Please do not use names that begin with an underscore followed by a capital letter for variables / functions. They are reserved.
m_io_service.run() returns immediately, so no one dispatches completion handlers. Note that io_service::run is a kind of "message loop" of an asio-based application, and it should run as long as you want asio functionality to be available (this is a slightly simplified description, but it's good enough for your case).
Besides, you should not invoke the async operation in a loop. Instead, issue each subsequent async operation in the completion handler of the previous one, to ensure that two async reads never run simultaneously.
See asio examples to see the typical asio application design.

boost::asio async condition

The idea is to be able to replace multithreaded code with boost::asio and a thread pool, on a consumer/producer problem. Currently, each consumer thread waits on a boost::condition_variable - when a producer adds something to the queue, it calls notify_one/notify_all to notify all the consumers. Now what happens when you (potentially) have 1k+ consumers? Threads won't scale!
I decided to use boost::asio, but then I ran into the fact that it doesn't have condition variables. And then async_condition_variable was born:
class async_condition_variable
{
private:
boost::asio::io_service& service_;
typedef boost::function<void ()> async_handler;
std::queue<async_handler> waiters_;
public:
async_condition_variable(boost::asio::io_service& service) : service_(service)
{
}
void async_wait(async_handler handler)
{
waiters_.push(handler);
}
void notify_one()
{
if (waiters_.empty())
return; // guard: calling front()/pop() on an empty queue is undefined
service_.post(waiters_.front());
waiters_.pop();
}
void notify_all()
{
while (!waiters_.empty()) {
notify_one();
}
}
};
Basically, each consumer would call async_condition_variable::async_wait(...). Then, a producer would eventually call async_condition_variable::notify_one() or async_condition_variable::notify_all(). Each consumer's handler would be called, and would either act on the condition or call async_condition_variable::async_wait(...) again. Is this feasible or am I being crazy here? What kind of locking (mutexes) should be performed, given the fact that this would be run on a thread pool?
P.S.: Yes, this is more an RFC (Request for Comments) than a question :).
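Regarding the locking question: if async_wait and notify_* can be called from different pool threads, the waiter queue itself needs protection. A minimal sketch of a thread-safe variant (just a boost::mutex member named mutex_ guarding waiters_; everything else as above):
void async_wait(async_handler handler)
{
    boost::mutex::scoped_lock lock(mutex_);
    waiters_.push(handler);
}
void notify_one()
{
    boost::mutex::scoped_lock lock(mutex_);
    if (waiters_.empty())
        return; // nothing is waiting
    service_.post(waiters_.front()); // the handler itself runs on a pool thread, outside the lock
    waiters_.pop();
}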
Have a list of things that need to be done when an event occurs. Have a function to add something to that list and a function to remove something from that list. Then, when the event occurs, have a pool of threads work on the list of jobs that now need to be done. You don't need threads specifically waiting for the event.
Boost::asio can be kind of hard to wrap your head around. At least, I have a difficult time doing it.
You don't need to have the threads wait on anything. They do that on their own when they don't have any work to do. The examples that seemed to look like what you wanted to do had work posted to the io_service for each item.
The following code was inspired by this link. It actually opened my eyes to how you can use it to do a lot of things.
I'm sure this isn't perfect, but I think it gives the general idea. I hope this helps.
Code
#include <iostream>
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
// placeholder declarations for the two kinds of work items
struct WorkObject1;
struct WorkObject2;
class ServerProcessor
{
protected:
void handleWork1(WorkObject1* work)
{
//The code to do task 1 goes in here
}
void handleWork2(WorkObject2* work)
{
//The code to do task 2 goes in here
}
boost::thread_group worker_threads_;
boost::asio::io_service io_service_;
//This is used to keep io_service from running out of work and exiting too soon.
boost::shared_ptr<boost::asio::io_service::work> work_;
public:
void start(int numberOfThreads)
{
boost::shared_ptr<boost::asio::io_service::work> myWork(new boost::asio::io_service::work(io_service_));
work_=myWork;
for (int x=0; x < numberOfThreads; ++x)
worker_threads_.create_thread( boost::bind( &ServerProcessor::threadAction, this ) );
}
void doWork1(WorkObject1* work)
{
io_service_.post(boost::bind(&ServerProcessor::handleWork1, this, work));
}
void doWork2(WorkObject2* work)
{
io_service_.post(boost::bind(&ServerProcessor::handleWork2, this, work));
}
void threadAction()
{
io_service_.run();
}
void stop()
{
work_.reset();
io_service_.stop();
worker_threads_.join_all();
}
};
int main()
{
ServerProcessor s;
std::string input;
std::cout<<"Press f to stop"<<std::endl;
s.start(8);
std::cin>>input;
s.stop();
return 0;
}
How about using boost::signals2?
It is a thread-safe spinoff of boost::signals that lets your clients subscribe callbacks to a signal.
Then, when the signal is emitted asynchronously in an io_service-dispatched job, all the registered callbacks will be executed (on the same thread that emitted the signal).
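A minimal sketch of that idea (the slot here just prints; emitting from a posted job stands in for the producer):
#include <boost/asio.hpp>
#include <boost/signals2.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::signals2::signal<void(int)> data_ready;

    // a consumer subscribes a callback instead of blocking on a condition
    data_ready.connect([](int value) {
        std::cout << "consumer got " << value << std::endl;
    });

    // a producer emits the signal from an io_service job; all connected
    // slots run on the thread that invokes the signal
    io.post([&] { data_ready(42); });

    io.run();
}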

threading-related active object design questions (c++ boost)

I would like some feedback regarding the IService class listed below. From what I know, this type of class is related to the "active object" pattern. Please excuse/correct me if I use any related terminology incorrectly. Basically, the idea is that classes using this active object class need to provide a start and a stop method which control some event loop. This event loop could be implemented with a while loop, with boost asio, etc.
This class is responsible for starting a new thread in a non-blocking manner so that events can be handled in/by the new thread. It must also handle all clean-up related code. I first tried an OO approach in which subclasses were responsible for overriding methods to control the event loop, but the cleanup was messy: calling the stop method in the destructor resulted in a pure virtual function call when the calling class had not manually called the stop method. The templated solution seems to be a lot cleaner:
template <typename T>
class IService : private boost::noncopyable
{
typedef boost::shared_ptr<boost::thread> thread_ptr;
public:
IService()
{
}
~IService()
{
/// try stop the service in case it's running
stop();
}
void start()
{
boost::mutex::scoped_lock lock(m_threadMutex);
if (m_pServiceThread && m_pServiceThread->joinable())
{
// already running
return;
}
m_pServiceThread = thread_ptr(new boost::thread(boost::bind(&IService::main, this)));
// need to wait for thread to start: else if destructor is called before thread has started
// Wait for condition to be signaled and then
// try timed wait since the application could deadlock if the thread never starts?
//if (m_startCondition.timed_wait(m_threadMutex, boost::posix_time::milliseconds(getServiceTimeoutMs())))
//{
//}
m_startCondition.wait(m_threadMutex);
// notify main to continue: it's blocked on the same condition var
m_startCondition.notify_one();
}
void stop()
{
// trigger the stopping of the event loop
m_serviceObject.stop();
if (m_pServiceThread)
{
if (m_pServiceThread->joinable())
{
m_pServiceThread->join();
}
// the service is stopped so we can reset the thread
m_pServiceThread.reset();
}
}
private:
/// entry point of thread
void main()
{
boost::mutex::scoped_lock lock(m_threadMutex);
// notify main thread that it can continue
m_startCondition.notify_one();
// Try Dummy wait to allow 1st thread to resume???
m_startCondition.wait(m_threadMutex);
// call template implementation of event loop
m_serviceObject.start();
}
/// Service thread
thread_ptr m_pServiceThread;
/// Thread mutex
mutable boost::mutex m_threadMutex;
/// Condition for signaling start of thread
boost::condition m_startCondition;
/// T must satisfy the implicit service interface and provide a start and a stop method
T m_serviceObject;
};
The class could be used as follows:
class TestObject3
{
public:
TestObject3()
:m_work(m_ioService),
m_timer(m_ioService, boost::posix_time::milliseconds(200))
{
m_timer.async_wait(boost::bind(&TestObject3::doWork, this, boost::asio::placeholders::error));
}
void start()
{
// simple event loop
m_ioService.run();
}
void stop()
{
// signal end of event loop
m_ioService.stop();
}
void doWork(const boost::system::error_code& e)
{
// Do some work here
if (e != boost::asio::error::operation_aborted)
{
m_timer.expires_from_now( boost::posix_time::milliseconds(200) );
m_timer.async_wait(boost::bind(&TestObject3::doWork, this, boost::asio::placeholders::error));
}
}
private:
boost::asio::io_service m_ioService;
boost::asio::io_service::work m_work;
boost::asio::deadline_timer m_timer;
};
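Putting the two together, intended use looks something like this (a sketch inferred from the interfaces above; the sleep just keeps main alive while the timer fires):
int main()
{
    IService<TestObject3> service; // owns the worker thread and the event loop
    service.start();               // returns once the worker thread is running
    boost::this_thread::sleep(boost::posix_time::seconds(1));
    service.stop();                // stops the event loop and joins the thread
    return 0;
}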
Now to my specific questions:
1) Is the use of the boost condition variable correct? It seems like a bit of a hack to me: I wanted to wait for the thread to be launched so I waited on the condition variable. Then once the new thread has launched in the main method, I again wait on the same condition variable to allow the initial thread to continue. Then once the start method of the initial thread is exited, the new thread can continue. Is this ok?
2) Are there any cases in which the thread would not get launched successfully by the OS? I remember reading somewhere that this can occur. If this is possible, I should rather do a timed wait on the condition variable (as is commented out in the start method)?
3) I am aware that the templated class might not implement the stop method "correctly", i.e. if the event loop fails to stop, the code will block on the joins (either in stop or in the destructor), but I see no way around this. I guess it is up to the user of the class to make sure that the start and stop methods are implemented correctly?
4) I would appreciate any other design mistakes, improvements, etc?
Thanks!
Finally settled on the following:
1) After much testing use of condition variable seems fine
2) This issue hasn't cropped up (yet)
3) The templated class implementation must meet the requirements; unit tests are used to test for correctness
4) Improvements
Added join with lock
Catching exceptions in the spawned thread and rethrowing them in the main thread, to avoid crashes and to not lose exception info
Using boost::system::error_code to communicate error codes back to the caller
The implementation object is settable
Code:
template <typename T>
class IService : private boost::noncopyable
{
typedef boost::shared_ptr<boost::thread> thread_ptr;
typedef T ServiceImpl;
public:
typedef boost::shared_ptr<IService<T> > ptr;
IService()
:m_pServiceObject(&m_serviceObject)
{
}
~IService()
{
/// try stop the service in case it's running
if (m_pServiceThread && m_pServiceThread->joinable())
{
stop();
}
}
static ptr create()
{
return boost::make_shared<IService<T> >();
}
/// Accessor to service implementation. The handle can be used to configure the implementation object
ServiceImpl& get() { return m_serviceObject; }
/// Mutator to service implementation. The handle can be used to configure the implementation object
void set(ServiceImpl rServiceImpl)
{
// the implementation object cannot be modified once the thread has been created
assert(m_pServiceThread == 0);
m_serviceObject = rServiceImpl;
m_pServiceObject = &m_serviceObject;
}
void set(ServiceImpl* pServiceImpl)
{
// the implementation object cannot be modified once the thread has been created
assert(m_pServiceThread == 0);
// make sure service object is valid
if (pServiceImpl)
m_pServiceObject = pServiceImpl;
}
/// if the service implementation reports an error from the start or stop method call, it can be accessed via this method
/// NB: only the last error can be accessed
boost::system::error_code getServiceErrorCode() const { return m_ecService; }
/// The join method allows the caller to block until thread completion
void join()
{
// protect this method from being called twice (e.g. by user and by stop)
boost::mutex::scoped_lock lock(m_joinMutex);
if (m_pServiceThread && m_pServiceThread->joinable())
{
m_pServiceThread->join();
m_pServiceThread.reset();
}
}
/// This method launches the non-blocking service
boost::system::error_code start()
{
boost::mutex::scoped_lock lock(m_threadMutex);
if (m_pServiceThread && m_pServiceThread->joinable())
{
// already running
return boost::system::error_code(SHARED_INVALID_STATE, shared_category);
}
m_pServiceThread = thread_ptr(new boost::thread(boost::bind(&IService::main, this)));
// Wait for condition to be signaled
m_startCondition.wait(m_threadMutex);
// notify main to continue: it's blocked on the same condition var
m_startCondition.notify_one();
// No error
return boost::system::error_code();
}
/// This method stops the non-blocking service
boost::system::error_code stop()
{
// trigger the stopping of the event loop
//boost::system::error_code ec = m_serviceObject.stop();
assert(m_pServiceObject);
boost::system::error_code ec = m_pServiceObject->stop();
if (ec)
{
m_ecService = ec;
return ec;
}
// The service implementation can return an error code here for more information
// However it is the responsibility of the implementation to stop the service event loop (if running)
// Failure to do so, will result in a block
// If this occurs in practice, we may consider a timed join?
join();
// If exception was thrown in new thread, rethrow it.
// Should the template implementation class want to avoid this, it should catch the exception
// in its start method and then return and error code instead
if( m_exception )
boost::rethrow_exception(m_exception);
return ec;
}
private:
/// runs in its own thread
void main()
{
try
{
boost::mutex::scoped_lock lock(m_threadMutex);
// notify main thread that it can continue
m_startCondition.notify_one();
// Try Dummy wait to allow 1st thread to resume
m_startCondition.wait(m_threadMutex);
// call implementation of event loop
// This will block
// In scenarios where the service fails to start, the implementation can return an error code
m_ecService = m_pServiceObject->start();
m_exception = boost::exception_ptr();
}
catch (...)
{
m_exception = boost::current_exception();
}
}
/// Service thread
thread_ptr m_pServiceThread;
/// Thread mutex
mutable boost::mutex m_threadMutex;
/// Join mutex
mutable boost::mutex m_joinMutex;
/// Condition for signaling start of thread
boost::condition m_startCondition;
/// T must satisfy the implicit service interface and provide a start and a stop method
T m_serviceObject;
T* m_pServiceObject;
// Error code for service implementation errors
boost::system::error_code m_ecService;
// Exception ptr to transport exception across different threads
boost::exception_ptr m_exception;
};
Further feedback/criticism would of course be welcome.