Integrate boost::asio into file descriptor based eventloops (select/poll) - c++

If I want to integrate functionality from boost::asio into an event loop that is based on file descriptors (select/poll), how can I achieve that? Other libraries with asynchronous functions hand out a file descriptor that becomes readable as soon as there is work to be done, so that you can integrate it into the select/poll of the event loop and call a processing callback of the library from there (like single-shot event processing).
A great example would be an asynchronous name resolver in a thread pool, as discussed in this question.

Based on the example in this answer, I came up with the solution below. It uses a generic handler that writes into a wake-up pipe and then posts the handler call into another io_service. The read end of the pipe can be watched in a file-descriptor-based event loop, and the callback run_handler() is called from there; it clears the pipe and runs the pending handlers in the main thread.
#include <fcntl.h>  // fcntl, O_NONBLOCK
#include <poll.h>   // poll, pollfd
#include <unistd.h> // pipe, read, write
#include <iostream>
#include <sstream>  // stringstream
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>
/// @brief Type used to emulate asynchronous host resolution with a
/// dedicated thread pool.
class resolver {
public:
resolver(const std::size_t pool_size)
: work_(boost::ref(resolver_service_)) {
// Create wake-up pipe
pipe(pipe_);
fcntl(pipe_[0], F_SETFL, O_NONBLOCK);
// Create pool.
for (std::size_t i = 0; i < pool_size; ++i)
threads_.create_thread(boost::bind(&boost::asio::io_service::run,
&resolver_service_));
}
~resolver() {
work_ = boost::none;
threads_.join_all();
}
template <typename QueryOrEndpoint, typename Handler>
void async_resolve(QueryOrEndpoint query, Handler handler) {
resolver_service_.post(boost::bind(
&resolver::do_async_resolve<QueryOrEndpoint, Handler>, this,
query, handler));
}
// callback for eventloop in main thread
void run_handler() {
char c;
// clear wake-up pipe
while (read(pipe_[0], &c, 1) > 0);
// run handler posted from resolver threads
handler_service_.poll();
handler_service_.reset();
}
// get read end of wake up pipe for polling in eventloop
int fd() {
return pipe_[0];
}
private:
/// @brief Resolve address and invoke continuation handler.
template <typename QueryOrEndpoint, typename Handler>
void do_async_resolve(const QueryOrEndpoint& query, Handler handler) {
typedef typename QueryOrEndpoint::protocol_type protocol_type;
typedef typename protocol_type::resolver resolver_type;
// Resolve synchronously, as synchronous resolution will perform work
// in the calling thread. Thus, it will not use Boost.Asio's internal
// thread that is used for asynchronous resolution.
boost::system::error_code error;
resolver_type resolver(resolver_service_);
typename resolver_type::iterator result = resolver.resolve(query, error);
// post handler callback to service running in main thread
handler_service_.post(boost::bind(handler, error, result));
// wake up eventloop in main thread
write(pipe_[1], "*", 1);
}
private:
boost::asio::io_service resolver_service_;
boost::asio::io_service handler_service_;
boost::optional<boost::asio::io_service::work> work_;
boost::thread_group threads_;
int pipe_[2];
};
template <typename ProtocolType>
void handle_resolve(
const boost::system::error_code& error,
typename ProtocolType::resolver::iterator iterator) {
std::stringstream stream;
stream << "handle_resolve:\n"
" " << error.message() << "\n";
if (!error)
stream << " " << iterator->endpoint() << "\n";
std::cout << stream.str();
std::cout.flush();
}
int main() {
// Resolver will emulate asynchronous host resolution with a pool of 5
// threads.
resolver resolver(5);
namespace ip = boost::asio::ip;
resolver.async_resolve(
ip::udp::resolver::query("localhost", "12345"),
&handle_resolve<ip::udp>);
resolver.async_resolve(
ip::tcp::resolver::query("www.google.com", "80"),
&handle_resolve<ip::tcp>);
resolver.async_resolve(
ip::udp::resolver::query("www.stackoverflow.com", "80"),
&handle_resolve<ip::udp>);
resolver.async_resolve(
ip::icmp::resolver::query("some.other.address", "54321"),
&handle_resolve<ip::icmp>);
pollfd fds;
fds.fd = resolver.fd();
fds.events = POLLIN;
// simple eventloop
while (true) {
if (poll(&fds, 1, 2000)) // waiting for wakeup call
resolver.run_handler(); // call resolve handler
else
break;
}
}
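For completeness, the same wake-up descriptor also works in a select()-based loop. Here is a small sketch of a drop-in replacement for the poll() loop above (it uses the resolver object from main() and additionally needs <sys/select.h>):
// select() variant of the simple eventloop above
fd_set readfds;
while (true) {
    FD_ZERO(&readfds);
    FD_SET(resolver.fd(), &readfds);
    timeval timeout = {2, 0}; // same 2 second timeout as the poll() version
    int ready = select(resolver.fd() + 1, &readfds, NULL, NULL, &timeout);
    if (ready > 0 && FD_ISSET(resolver.fd(), &readfds))
        resolver.run_handler(); // drain the wake-up pipe and run pending handlers
    else
        break;                  // timeout or error
}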

Several of the I/O objects in Boost.Asio expose a native_handle() for scenarios like this.
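For example, a socket's underlying descriptor can be pulled out and watched externally. A minimal sketch (mine, not from the question; note that this only exposes the raw descriptor, it does not by itself dispatch Asio's completion handlers):
#include <boost/asio.hpp>
#include <poll.h>
int main() {
    boost::asio::io_service io;
    boost::asio::ip::udp::socket socket(
        io, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 0));
    pollfd pfd = {};
    pfd.fd = socket.native_handle(); // raw file descriptor of the Asio socket
    pfd.events = POLLIN;
    poll(&pfd, 1, 100); // watch it alongside other descriptors in an external loop
}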

Related

In Asio, all completion handlers of async_* functions called in the same thread will run sequentially, right?

I am new to Asio, so I am a little confused about the control flow of asynchronous operations. Let's look at this server:
class session
{
...
sendMsg()
{
bool idle = msgQueue.empty();
msgQueue.push(msg);
if (idle)
send();
}
send()
{
async_write(write_handler);
}
write_handler()
{
msgQueue.pop()
if (!msgQueue.empty())
send();
}
recvMsg()
{
async_read(read_handler);
}
read_handler()
{
...
recvMsg();
}
...
};
class server
{
...
start()
{
async_accept(accept_handler);
}
accept_handler()
{
auto client = make_shared<session>(move(socket));
client->recvMsg();
...
start();
}
...
};
int main()
{
io_context;
server srv(io_context, 22222);
srv.start();
io_context.run();
return 0;
}
In this case, all completion handlers (accept_handler, read_handler, write_handler) will be called in the thread calling io_context.run(), which is the main thread. If they run in the same thread, that means they run sequentially, not concurrently, right? And further, the msgQueue will be accessed sequentially, so there is no need for a mutex lock on this queue, right?
I think async_* functions tell the operating system to do some work, and this work will run simultaneously in some other threads with their own buffers. Even if several of these finish at the same time (say, at one point a new connection request arrives, a new message from an existing client arrives, and the sending of a message to an existing client completes), the completion handlers (accept_handler, read_handler, write_handler) will still be called sequentially. They will not run concurrently, am I correct?
Thank you so much for your help.
Yes. There's only one thread running the io_context, so all completion handlers will run on that one thread. Indeed this implies a strand (the implicit strand) of execution, namely, all handlers will execute sequentially.
See: https://www.boost.org/doc/libs/1_81_0/doc/html/boost_asio/overview/core/threads.html
and this work will run simultaneously in some other threads with their own buffers
They will run asynchronously, but not usually on another thread. There could be internal threads, or kernel threads, but also just hardware. Their "own" buffer is true, but dangerously worded, because in Asio the operations never own the buffer - you have to make sure it stays valid until the operation completes.
Note:
if there can be multiple threads running (or polling) the io service, you need to make sure access to IO objects is synchronized. In Asio this can be achieved with strand executors (a small sketch follows these notes)
not all IO operations may be outstanding on the same object in overlapping fashion. You seem to be aware of this, given the msgQueue in your pseudo code
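For illustration, here is a minimal sketch of that multi-threaded case (my own, not part of the original answer; it assumes a reasonably recent Asio that provides make_strand). Handlers posted through the strand never run concurrently, even though two threads call run():
#include <boost/asio.hpp>
#include <iostream>
#include <thread>
int main() {
    boost::asio::io_context ioc;
    // The explicit strand serializes all handlers posted through it.
    auto strand = boost::asio::make_strand(ioc);
    for (int i = 0; i < 10; ++i)
        boost::asio::post(strand, [i] { std::cout << "handler " << i << "\n"; });
    std::thread t1([&] { ioc.run(); });
    std::thread t2([&] { ioc.run(); });
    t1.join();
    t2.join();
}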
Bonus
For bonus, let me convert your code into non-pseudo code showing an explicit strand per connection to be future proof:
Live On Coliru
#include <boost/asio.hpp>
#include <deque>
namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;
using namespace std::placeholders;
class session : public std::enable_shared_from_this<session> {
public:
session(tcp::socket s) : s(std::move(s)) {}
void start() {
post(s.get_executor(), [self = shared_from_this()] { self->recvMsg(); });
}
void sendMsg(std::string msg) {
post(s.get_executor(), [=, self = shared_from_this()] { self->do_sendMsg(msg); });
}
private:
//... all private members on strand
void do_sendMsg(std::string msg) {
bool was_idle = msgQueue.empty();
msgQueue.push_back(std::move(msg));
if (was_idle)
do_writeloop();
}
void do_writeloop() {
if (!msgQueue.empty())
async_write(s, asio::buffer(msgQueue.front()),
std::bind(&session::write_handler, shared_from_this(), _1, _2));
}
void write_handler(error_code ec, size_t) {
if (!ec) {
msgQueue.pop_front();
do_writeloop();
}
}
void recvMsg() {
//async_read(s, asio::dynamic_buffer(incoming),
//std::bind(&session::read_handler, shared_from_this(), _1, _2));
async_read_until(s, asio::dynamic_buffer(incoming), "\n",
std::bind(&session::read_handler, shared_from_this(), _1, _2));
}
void read_handler(error_code ec, size_t n) {
if (!ec) {
auto msg = incoming.substr(0, n);
incoming.erase(0, n);
recvMsg();
sendMsg("starting job for " + msg);
sendMsg("finishing job for " + msg);
sendMsg(" -- some other message --\n");
}
}
tcp::socket s;
std::string incoming;
std::deque<std::string> msgQueue;
};
class server {
public:
server(auto ex, uint16_t port) : acc(ex, tcp::v4()) {
acc.set_option(tcp::acceptor::reuse_address(true));
acc.bind({{}, port});
acc.listen();
}
void accept_loop() {
acc.async_accept(make_strand(acc.get_executor()),
std::bind(&server::accept_handler, this, _1, _2));
}
void accept_handler(error_code ec, tcp::socket s) {
if (!ec ){
std::make_shared<session>(std::move(s))->start();
accept_loop();
}
}
private:
tcp::acceptor acc;
};
int main() {
boost::asio::io_context ioc;
server srv(ioc.get_executor(), 22222);
srv.accept_loop();
ioc.run();
}
With a sample client
for a in foo bar qux; do (sleep 1.$RANDOM; echo "command $a")|nc 127.0.0.1 22222 -w2; done
Prints
starting job for command foo
finishing job for command foo
-- some other message --
starting job for command bar
finishing job for command bar
-- some other message --
starting job for command qux
finishing job for command qux
-- some other message --

Nonblocking io_service::run

I'm trying to implement a single C++ application that holds two processing loops. Currently the first processing loop (Boost's io_service::run) blocks the execution of the second one.
Approaches using threads or std::async have failed so far. (I don't have experience/background in multi-threading.)
Is there an elegant way to run io_service::run in another thread, while still executing the callbacks upon incoming UDP datagrams?
Main-File:
class Foo
{
public:
Foo();
void callback(const int&);
private:
// ... (hopefully) non-relevant stuff...
};
int main()
{
Foo foo_obj;
// I need to run this function (blocking), but the constructor already blocks in io_service::run()
run();
return 0;
}
Foo::Foo(){
boost::asio::io_service io;
UDP_Server UDP_Server(io);
// Set function to be called on received message
UDP_Server.add_handler(std::bind(&Foo::callback, this, std::placeholders::_1));
// This function should be non-blocking
// -> tried several things, like threads, async, ... (unfortunately not successful)
io.run();
}
// realization of callback function here (see class definition)
Included custom "library":
class UDP_Server
{
public:
UDP_Server(boost::asio::io_service&);
void add_handler(std::function<void(int)>);
private:
// Function handle
std::function<void(int)> callbackFunctionHandle;
// Functions
void start_receive();
void handle_receive(const boost::system::error_code&, std::size_t);
// ... (hopefully) non-relevant stuff...
};
// Constructor
UDP_Server::UDP_Server(boost::asio::io_service& io_service)
: socket_(io_service, udp::endpoint(udp::v4(), UDP_PORT)){
}
// Store a callback function (class foo) to be called whenever a message is received
void UDP_Server::add_handler(std::function<void(int)> callbackFunction){
try
{
callbackFunctionHandle = callbackFunction;
start_receive();
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
}
// Async receive
void UDP_Server::start_receive()
{
socket_.async_receive_from(
boost::asio::buffer(recv_buffer_), remote_endpoint_,
boost::bind(&UDP_Server::handle_receive, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
// When message is received
void UDP_Server::handle_receive(const boost::system::error_code& error,
std::size_t bytes_transferred)
{
if (!error || error == boost::asio::error::message_size)
{
// ... do smth. with the received data ...
// Call specified function in Foo class
callbackFunctionHandle(some_integer);
start_receive();
}
else{
// ... handle errors
}
}
Have a look at what they did here:
boost::asio::io_service io_service;
/** your code here **/
boost::thread(boost::bind(&boost::asio::io_service::run, &io_service));
ros::spin();
So you basically start the blocking call to io_service::run() in a separate thread from the ros::spin().
If you pin that thread to a single CPU core (so you don't tie up two cores with threads that are mostly waiting), the scheduler can handle the rest.
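Applied to the Foo class from the question, a minimal sketch could look as follows (the member layout and the thread handling are my assumptions; UDP_Server is the asker's class, assumed unchanged):
#include <functional>
#include <thread>
#include <boost/asio.hpp>
class Foo {
public:
    Foo()
        : work_(io_), server_(io_) {
        // Register the member-function callback, then run the service on a worker
        // thread so the constructor returns immediately and io_.run() does not
        // block the main thread.
        server_.add_handler(std::bind(&Foo::callback, this, std::placeholders::_1));
        worker_ = std::thread([this] { io_.run(); });
    }
    ~Foo() {
        io_.stop(); // or destroy a work guard and let run() drain
        if (worker_.joinable())
            worker_.join();
    }
    void callback(const int&);
private:
    boost::asio::io_service io_;         // must outlive the worker thread
    boost::asio::io_service::work work_; // keeps io_.run() alive between datagrams
    UDP_Server server_;
    std::thread worker_;
};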

Stopping boost::asio::io_service::run() from concurrent destructor

Can anybody explain to me why this program does not terminate (see the comments)?
#include <boost/asio/io_service.hpp>
#include <boost/asio.hpp>
#include <memory>
#include <cstdio>
#include <iostream>
#include <future>
class Service {
public:
~Service() {
std::cout << "Destroying...\n";
io_service.post([this]() {
std::cout << "clean and stop\n"; // does not get called
// do some cleanup
// ...
io_service.stop();
std::cout << "Bye!\n";
});
std::cout << "...destroyed\n"; // last printed line, blocks
}
void operator()() {
io_service.run();
std::cout << "run completed\n";
}
private:
boost::asio::io_service io_service;
boost::asio::io_service::work work{io_service};
};
struct Test {
void start() {
f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n";});
}
std::future<void> f;
Service service;
};
int main(int argc, char* argv[]) {
{
Test test;
test.start();
std::string exit;
std::cin >> exit;
}
std::cout << "exiting program\n"; // never printed
}
The real issue is that destruction of io_service is (obviously) not thread-safe.
Just reset the work and join the thread. Optionally, set a flag so your IO operations know shutdown is in progress.
Your Test and Service classes are trying to share responsibility for the IO service; that doesn't work. Here's a much simplified version, merging the classes and dropping the unused future.
Live On Coliru
The trick was to make the work object optional<>:
#include <boost/asio.hpp>
#include <boost/optional.hpp>
#include <iostream>
#include <thread>
struct Service {
~Service() {
std::cout << "clean and stop\n";
io_service.post([this]() {
work.reset(); // let io_service run out of work
});
if (worker.joinable())
worker.join();
}
void start() {
assert(!worker.joinable());
worker = std::thread([this] { io_service.run(); std::cout << "exiting thread\n";});
}
private:
boost::asio::io_service io_service;
std::thread worker;
boost::optional<boost::asio::io_service::work> work{io_service};
};
int main() {
{
Service test;
test.start();
std::cin.ignore(1024, '\n');
std::cout << "Start shutdown\n";
}
std::cout << "exiting program\n"; // never printed
}
Prints
Start shutdown
clean and stop
exiting thread
exiting program
See here: boost::asio hangs in resolver service destructor after throwing out of io_service::run()
I think the trick here is to destroy the work object (the work member) before calling io_service.stop(). I.e. in this case the work could be a unique_ptr, on which you call reset() explicitly before stopping the service.
EDIT: The above helped me some time ago in a case of mine, where io_service::stop() didn't stop and kept waiting for dispatched events that never happened.
However, I reproduced the problem you have on my machine, and this seems to be a race condition inside io_service, a race between io_service::post() and the io_service destruction code (shutdown_service). In particular, if shutdown_service() is triggered before the post() notification wakes up the other thread, the shutdown_service() code removes the operation from the queue (and "destroys" it instead of calling it), so the lambda is never called.
For now it seems to me that you'd need to call io_service.stop() directly in the destructor, not postpone it via post(), as that apparently does not work here because of the race.
I was able to fix the problem by rewriting your code like so:
class Service {
public:
~Service() {
std::cout << "Destroying...\n";
work.reset();
std::cout << "...destroyed\n"; // last printed line, blocks
}
void operator()() {
io_service.run();
std::cout << "run completed\n";
}
private:
boost::asio::io_service io_service;
std::unique_ptr<boost::asio::io_service::work> work = std::make_unique<boost::asio::io_service::work>(io_service);
};
However, this is largely a bandaid solution.
The problem lies in your design ethos; specifically, in choosing not to tie the lifetime of the executing thread directly to the io_service object:
struct Test {
void start() {
f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n";});
}
std::future<void> f; //Constructed First, deleted last
Service service; //Constructed second, deleted first
};
In this particular scenario, the thread is going to continue trying to execute io_service.run() past the lifetime of the io_service object itself. If more than the basic work object were executing on the service, you would very quickly run into undefined behavior from calling member functions of destroyed objects.
You could reverse the order of the member objects in Test:
struct Test {
void start() {
f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n";});
}
Service service;
std::future<void> f;
};
But it still represents a significant design flaw.
The way that I usually implement anything which uses io_service is to tie its lifetime to the threads that are actually going to be executing on it.
class Service {
public:
Service(size_t num_of_threads = 1) :
work(std::make_unique<boost::asio::io_service::work>(io_service))
{
for (size_t thread_index = 0; thread_index < num_of_threads; thread_index++) {
threads.emplace_back([this] {io_service.run(); });
}
}
~Service() {
work.reset();
for (std::thread & thread : threads)
thread.join();
}
private:
boost::asio::io_service io_service;
std::unique_ptr<boost::asio::io_service::work> work;
std::vector<std::thread> threads;
};
Now, if you have any infinite loops active on any of these threads, you'll still need to make sure you properly clean those up, but at least the code specific to the operation of this io_service is cleaned up correctly.
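For instance, a self-rescheduling timer loop can be broken from inside the service before the work guard is reset. A minimal sketch (the timer loop and the stopped_ flag are hypothetical additions of mine, not part of the answer; it assumes a single thread runs the service, so the plain bool needs no further synchronization):
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <memory>
#include <thread>
class LoopingService {
public:
    LoopingService()
        : work_(std::make_unique<boost::asio::io_service::work>(io_service_)),
          timer_(io_service_),
          thread_([this] { io_service_.run(); }) {
        io_service_.post([this] { tick(); }); // start the loop on the service thread
    }
    ~LoopingService() {
        io_service_.post([this] {
            stopped_ = true; // set first, so a handler already in flight stops rescheduling
            timer_.cancel();
        });
        work_.reset();       // then let run() drain its remaining handlers and return
        thread_.join();
    }
private:
    void tick() {
        if (stopped_)
            return;
        timer_.expires_from_now(boost::posix_time::seconds(1));
        timer_.async_wait([this](const boost::system::error_code& ec) {
            if (!ec)    // operation_aborted means we were cancelled during shutdown
                tick(); // otherwise keep rescheduling
        });
    }
    boost::asio::io_service io_service_;
    std::unique_ptr<boost::asio::io_service::work> work_;
    boost::asio::deadline_timer timer_;
    std::thread thread_;
    bool stopped_ = false;
};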

Boost async_write callback as a member function of another class instance

I have a protocol structure where one class takes care of protocol states (Protocol) and another class takes care of sending and receiving messages (Comm).
I'm using boost::asio in asynchronous mode.
So I have the following code structure:
#include <string>
#include <iostream>
#include "boost/asio.hpp"
#include "boost/bind.hpp"
class Comm {
public:
Comm();
void SendMessage(std::string message, void (callback) (const boost::system::error_code& errorCode, std::size_t bytesTranferred));
private:
boost::asio::io_service ioService;
std::shared_ptr<boost::asio::ip::tcp::socket> mySocket;
};
Comm::Comm()
{
boost::asio::ip::tcp::resolver resolver(ioService);
boost::asio::ip::tcp::resolver::query query("192.168.0.1");
boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
mySocket->connect(*iterator);
}
void Comm::SendMessage(std::string message, void (callback) (const boost::system::error_code& errorCode, std::size_t bytesTranferred))
{
mySocket->async_send(boost::asio::buffer(message.c_str(), message.length()), boost::bind(&callback)); // <<< ERROR HERE
}
class Protocol {
public:
void SendMessage(std::string message);
void SendMessageHandler(const boost::system::error_code& errorCode, std::size_t bytesTranferred);
private:
Comm channel;
};
void Protocol::SendMessage(std::string message)
{
channel.SendMessage(message, &SendMessageHandler); // <<< ERROR HERE
}
void Protocol::SendMessageHandler(const boost::system::error_code& errorCode, std::size_t bytesTranferred)
{
if (!errorCode)
std::cout << "Send OK" << std::endl;
else
std::cout << "Send FAIL." << std::endl;
}
As shown, I need the callback of async_send to be a non-static member function of the caller's class, so I have to pass the callback function to SendMessage and use it as a parameter in async_send.
Both of these statements do not compile. I've tried variations, but I can't figure out what's going on here.
Help appreciated.
Try something like this, binding the callback to a class method:
void Comm::SendMessage(std::string message, boost::function< void(const boost::system::error_code& , std::size_t) > callback )
{
mySocket->async_send(boost::asio::buffer(message.c_str(), message.length()), callback);
}
...//later, from inside a Protocol member function
channel.SendMessage(message, boost::bind(&Protocol::SendMessageHandler, this, _1, _2)); // forward both handler arguments via placeholders
Note, more importantly, you have a number of bugs here that this change alone won't fix:
You are taking std::string message by value several times - it will copy the content.
Comm::SendMessage uses a local message object, which will be destroyed before the async operation completes (boost::asio::buffer does not copy the content).
It will be hard to use 2 or more Comm objects, since each has its own ioService (you will not be able to run them all at the same time).
No shared_ptr or any other means of controlling object lifetime: your SendMessageHandler can be called when the Protocol is already destroyed.
Protocol does not control write parallelism, and multiple SendMessage calls can interleave buffers written into the socket; this can/will send complete garbage over the network.
There are more issues, both fatal and minor; no point in listing them all.
Consider taking one of the asio examples as a base usage pattern.
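For the buffer-lifetime point above, here is a minimal sketch (mine, not from the answer; requires <memory> and <boost/function.hpp>) that keeps the data alive by moving it into a shared_ptr owned by the completion handler. It still leaves the object-lifetime and write-parallelism issues unaddressed:
void Comm::SendMessage(std::string message,
                       boost::function<void(const boost::system::error_code&, std::size_t)> callback)
{
    // Own the data in a shared_ptr captured by the handler, so the buffer stays
    // valid until the asynchronous send completes.
    auto buffer = std::make_shared<std::string>(std::move(message));
    mySocket->async_send(boost::asio::buffer(*buffer),
        [buffer, callback](const boost::system::error_code& ec, std::size_t n) {
            callback(ec, n); // buffer is released after the handler returns
        });
}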

boost::asio with boost::unique_future

According to http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/overview/cpp2011/futures.html, we can use boost::asio with std::future. But I couldn't find any information about working with boost::unique_future, which has more functionality, such as then(). How can I use it?
Boost.Asio only provides first-class support for asynchronous operations to return a C++11 std::future or an actual value in stackful coroutines. Nevertheless, the requirements on asynchronous operations documents how to customize the return type for other types, such as Boost.Thread's boost::unique_future. It requires:
A specialization of the handler_type template. This template is used to determine the actual handler to use based on the asynchronous operation's signature.
A specialization of the async_result template. This template is used both to determine the return type and to extract the return value from the handler.
Below is a minimal complete example demonstrating deadline_timer::async_wait() returning boost::unique_future, with a basic calculation being performed over a series of continuations composed with .then(). To keep the example simple, I have opted to only specialize handler_type for the asynchronous operation signatures used in the example. For a complete reference, I highly suggest reviewing use_future.hpp and impl/use_future.hpp.
#include <exception> // current_exception, make_exception_ptr
#include <iostream>  // cout, endl
#include <memory>    // make_shared, shared_ptr
#include <thread>    // thread
#include <utility>   // move
#define BOOST_RESULT_OF_USE_DECLTYPE
#define BOOST_THREAD_PROVIDES_FUTURE_CONTINUATION
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/future.hpp>
/// @brief Class used to indicate an asynchronous operation should return
/// a boost::unique_future.
class use_unique_future_t {};
/// @brief A special value, similar to std::nothrow.
constexpr use_unique_future_t use_unique_future;
namespace detail {
/// @brief Completion handler to adapt a boost::promise as a completion
/// handler.
template <typename T>
class unique_promise_handler;
/// @brief Completion handler to adapt a void boost::promise as a completion
/// handler.
template <>
class unique_promise_handler<void>
{
public:
/// @brief Construct from use_unique_future special value.
explicit unique_promise_handler(use_unique_future_t)
: promise_(std::make_shared<boost::promise<void> >())
{}
void operator()(const boost::system::error_code& error)
{
// On error, convert the error code into an exception and set it on
// the promise.
if (error)
promise_->set_exception(
std::make_exception_ptr(boost::system::system_error(error)));
// Otherwise, set the value.
else
promise_->set_value();
}
//private:
std::shared_ptr<boost::promise<void> > promise_;
};
// Ensure any exceptions thrown from the handler are propagated back to the
// caller via the future.
template <typename Function, typename T>
void asio_handler_invoke(
Function function,
unique_promise_handler<T>* handler)
{
// Guarantee the promise lives for the duration of the function call.
std::shared_ptr<boost::promise<T> > promise(handler->promise_);
try
{
function();
}
catch (...)
{
promise->set_exception(std::current_exception());
}
}
} // namespace detail
namespace boost {
namespace asio {
/// @brief Handler type specialization for use_unique_future.
template <typename ReturnType>
struct handler_type<
use_unique_future_t,
ReturnType(boost::system::error_code)>
{
typedef ::detail::unique_promise_handler<void> type;
};
/// @brief Handler traits specialization for unique_promise_handler.
template <typename T>
class async_result< ::detail::unique_promise_handler<T> >
{
public:
// The initiating function will return a boost::unique_future.
typedef boost::unique_future<T> type;
// Constructor creates a new promise for the async operation, and obtains the
// corresponding future.
explicit async_result(::detail::unique_promise_handler<T>& handler)
{
value_ = handler.promise_->get_future();
}
// Obtain the future to be returned from the initiating function.
type get() { return std::move(value_); }
private:
type value_;
};
} // namespace asio
} // namespace boost
int main()
{
boost::asio::io_service io_service;
boost::asio::io_service::work work(io_service);
// Run io_service in its own thread to demonstrate future usage.
std::thread thread([&io_service](){ io_service.run(); });
// Arm 3 second timer.
boost::asio::deadline_timer timer(
io_service, boost::posix_time::seconds(3));
// Asynchronously wait on the timer, then perform basic calculations
// within the future's continuations.
boost::unique_future<int> result =
timer.async_wait(use_unique_future)
.then([](boost::unique_future<void> future){
std::cout << "calculation 1" << std::endl;
return 21;
})
.then([](boost::unique_future<int> future){
std::cout << "calculation 2" << std::endl;
return 2 * future.get();
})
;
std::cout << "Waiting for result" << std::endl;
// Wait for the timer to trigger and for its continuations to calculate
// the result.
std::cout << result.get() << std::endl;
// Cleanup.
io_service.stop();
thread.join();
}
Output:
Waiting for result
calculation 1
calculation 2
42