How to get boost::asio::io_service current action number - c++

boost::asio::io_service provides "handler tracking" for debugging purposes; it is enabled by defining BOOST_ASIO_ENABLE_HANDLER_TRACKING, but it logs its data to stderr. I'd like to use this tracking information in my application. My question is: what is the best way to get access to the <action> inside my application?
For more context as to why I want to do this: I would like to attach the <action> as a parameter to other async operations so that I can track where the originating request came from.

Asio does not expose its handler tracking data. Attempting to extract the tracking information contained within Asio would be far more of a dirty hack than rolling one's own custom handler.
Here is a snippet from Asio's handler tracking:
namespace boost {
namespace asio {
namespace detail {

class handler_tracking
{
public:
  class completion;

  // Base class for objects containing tracked handlers.
  class tracked_handler
  {
  private:
    // Only the handler_tracking class will have access to the id.
    friend class handler_tracking;
    friend class completion;
    uint64_t id_;
    // ...
  };

  // ...

  class completion
  {
    // ...
  private:
    friend class handler_tracking;
    uint64_t id_;
    bool invoked_;
    completion* next_;
  };

  // ...

private:
  struct tracking_state;
  static tracking_state* get_state();
};

} // namespace detail
} // namespace asio
} // namespace boost
As others have mentioned, passing a GUID throughout the handlers would allow one to associate multiple asynchronous operations. One non-intrusive way to accomplish this is to create a custom tracking handler type that wraps existing handlers and manages the tracking data. For an example of custom handlers, see the Boost.Asio Invocation example.
Also, be aware that if a custom handler type is used, one should be very careful when composing handlers. In particular, the custom handler type's invocation hook (asio_handler_invoke()) may need to account for the context of other handlers. For example, if one does not explicitly account for the wrapped handler returned from strand::wrap(), intermediate handlers of a composed operation will not run in the correct context. To avoid having to handle this explicitly, one can wrap the custom handler with strand::wrap():
boost::asio::async_read(..., strand.wrap(tracker.wrap(&handle_read))); // Good.
boost::asio::async_read(..., tracker.wrap(strand.wrap(&handle_read))); // Bad.
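Alternatively, if the custom handler accounts for the wrapped handler's context itself, its hook can simply forward to it. A minimal sketch (TrackingHandler and its handler_ member are assumed names for illustration, not taken from the example below):
// Forward invocation to the wrapped handler's context so that strand
// guarantees are preserved for composed operations.
template <typename Function, typename Handler>
void asio_handler_invoke(Function function, TrackingHandler<Handler>* this_handler)
{
    using boost::asio::asio_handler_invoke; // default hook as fallback
    asio_handler_invoke(function, &this_handler->handler_);
}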

An example that mimics Asio's debug handler tracking follows. Caveats:
It assumes the ioService is only run from a single thread. I never use it any other way, so I'm not sure what would need to change to lift this limitation.
Access to std::cerr is not thread safe - fixing this is left as an exercise.
Code:
#include <boost/asio.hpp>
#include <boost/atomic.hpp>
#include <cstdint>
#include <functional>
#include <iostream>
class HandlerTracking
{
public:
HandlerTracking()
:
mCount(1)
{ }
template <class Handler>
class WrappedHandler
{
public:
WrappedHandler(HandlerTracking& t, Handler h, std::uint64_t id)
:
mHandlerTracking(t),
mHandler(h),
mId(id)
{ }
WrappedHandler(const WrappedHandler& other)
:
mHandlerTracking(other.mHandlerTracking),
mHandler(other.mHandler),
mId(other.mId),
mInvoked(other.mInvoked)
{
other.mInvoked = true;
}
~WrappedHandler()
{
if (!mInvoked)
std::cerr << '~' << mId << std::endl;
}
template <class... Args>
void operator()(Args... args)
{
mHandlerTracking.mCurrHandler = mId;
std::cerr << '>' << mId << std::endl;
try
{
mInvoked = true;
mHandler(args...);
}
catch(...)
{
std::cerr << '!' << mId << std::endl;
throw;
}
std::cerr << '<' << mId << std::endl;
}
std::uint64_t id() const { return mId; }
private:
HandlerTracking& mHandlerTracking;
Handler mHandler;
const std::uint64_t mId;
mutable bool mInvoked = false;
};
template <class Handler>
WrappedHandler<Handler> wrap(Handler handler)
{
auto next = mCount.fetch_add(1);
std::cerr << mCurrHandler << '*' << next << std::endl;
return WrappedHandler<Handler>(*this, handler, next);
}
boost::atomic<std::uint64_t> mCount;
std::uint64_t mCurrHandler = 0; // Note: If ioService run on multiple threads we need a curr handler per thread
};
// Custom invocation hook for wrapped handlers
//template <typename Function, typename Handler>
//void asio_handler_invoke(Function f, HandlerTracking::WrappedHandler<Handler>* h)
//{
// std::cerr << "Context: " << h << ", " << h->id() << ", " << f.id() << std::endl;
// f();
//}
// Class to demonstrate callback with arguments
class MockSocket
{
public:
MockSocket(boost::asio::io_service& ioService) : mIoService(ioService) {}
template <class Handler>
void async_read(Handler h)
{
mIoService.post([h]() mutable { h(42); }); // we always read 42 bytes
}
private:
boost::asio::io_service& mIoService;
};
int main(int argc, char* argv[])
{
boost::asio::io_service ioService;
HandlerTracking tracking;
MockSocket socket(ioService);
std::function<void()> f1 = [&]() { std::cout << "Handler1" << std::endl; };
std::function<void()> f2 = [&]() { std::cout << "Handler2" << std::endl; ioService.post(tracking.wrap(f1)); };
std::function<void()> f3 = [&]() { std::cout << "Handler3" << std::endl; ioService.post(tracking.wrap(f2)); };
std::function<void()> f4 = [&]() { std::cout << "Handler4" << std::endl; ioService.post(tracking.wrap(f3)); };
std::function<void(int)> s1 = [](int s) { std::cout << "Socket read " << s << " bytes" << std::endl; };
socket.async_read(tracking.wrap(s1));
ioService.post(tracking.wrap(f1));
ioService.post(tracking.wrap(f2));
ioService.post(tracking.wrap(f3));
auto tmp = tracking.wrap(f4); // example handler destroyed without invocation
ioService.run();
return 0;
}
Output:
0*1
0*2
0*3
0*4
0*5
>1
Socket read 42 bytes
<1
>2
Handler1
<2
>3
Handler2
3*6
<3
>4
Handler3
4*7
<4
>6
Handler1
<6
>7
Handler2
7*8
<7
>8
Handler1
<8
~5


Controlling output stream in multi-threaded program

I am trying to control the output prints in my simulation. It prints a lot of output stream information. This is a sample of how I try to control the output stream. Sometimes I want to print information for each thread, and sometimes I do not want a single print from the threads, to reduce the system calls in the simulation. I pass a command-line argument to control the stream. The argument v means no prints. The problem is that this requires a lot of if statements throughout the whole simulator. Is there an easier way to deal with this issue?
#include <iostream>
#include <thread>
void work_to_do_1(char ch)
{
//work for thread 1
if(ch != 'v')
std::cout << "-:Thread 1:-" << std::endl;
}
void work_to_do_2(char ch)
{
if (ch != 'v')
std::cout << "-:Thread 2:-" << std::endl;
}
void work_to_do_3(char ch)
{
if (ch != 'v')
std::cout << "-:Thread 3:-" << std::endl;
}
int main(int argc, char *arg[])
{
std::cout << "You have entered " << argc
<< " arguments:" << "\n";
for (int i = 0; i < argc; ++i)
{
std::cout << arg[i] << "\n";
}
char t = *arg[1];
std::cout << "manager is running" << std::endl;
std::thread t1(work_to_do_1, t);
std::thread t2(work_to_do_2, t);
std::thread t3(work_to_do_3, t);
t1.join();
t2.join();
t3.join();
system("pause");
return 0;
}
Make your own nul stream:
struct cnul_t : std::basic_ostream<char> {} cnul;
template<class T> std::ostream& operator<<(cnul_t& os, T const&) { return os; }
And redirect your output to it to ignore it:
#include <ostream>
#include <iostream>
struct cnul_t : std::basic_ostream<char> {} cnul;
template<class T> std::ostream& operator<<(cnul_t& os, T const&) { return os; }
void maybe_log(bool b)
{
std::ostream& out = b == true ? std::cout : cnul;
out << "Hello, World!\n";
}
int main()
{
maybe_log(true); // outputs Hello, World!
maybe_log(false); // no output
}
Demo: http://coliru.stacked-crooked.com/a/362ecb660283cbff
OK, well, if you have read and understood the comments you will see that the real problem is not what you think it is. The real problem is that your logging code is not thread-safe.
This answer explains the problem very well. Although ostreams are thread-safe in themselves (since C++11), something like std::cout << "-:Thread 1:-" << std::endl; is actually two calls to std::cout.operator<<, and another thread might sneak in between them, thus garbling your output. This, I imagine, you could do without.
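Before the full solution below, note that the simplest mitigation is to format each complete line into a string first and hand it to std::cout in one call; a minimal sketch of my own (not from the linked post):
#include <iostream>
#include <sstream>
#include <string>
// Compose the whole line first, then emit it with a single operator<< call.
// C++11 guarantees this is free of data races; in practice a single call also
// keeps its characters together, so lines no longer get interleaved mid-way.
void log_line(const std::string& text)
{
    std::ostringstream line;
    line << text << '\n';
    std::cout << line.str();
}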
So, stealing code unashamedly from this post I humbly submit the following solution (which also has a global flag, gLogging, to turn logging on or off). This will write lines to std::cout atomically whenever you log std::endl. I wrote this as an exercise to develop my own personal skills and I thought you might like to have it.
See the linked post for an explanation of how std::endl is detected, but the underlying principle is a separate log buffer for each thread which is flushed to std::cout when it has a complete line of output to get rid of. The code includes a manager class (Logger) to take care of the details of creating, destroying and accessing these buffers. You just need to put two lines of initialisation code at the start of each thread as shown and then log to logstream rather than std::cout.
#include <iostream>
#include <sstream>
#include <mutex>
#include <map>
#include <thread>
bool gLogging = true;
constexpr int bufsize = 512; // needs to be big enough for longest logging line expected
// A streambuf that writes atomically to std::cout when (indirectly) it sees std::endl
class LogBuf : public std::stringbuf
{
public:
LogBuf () { setbuf (m_buf = new char [bufsize], bufsize); str (""); }
~LogBuf () { delete [] m_buf; }
protected:
// This gets called when the ostream we are serving sees endl
int sync() override
{
if (gLogging)
{
std::cout << str();
std::cout.flush();
}
str("");
return 0;
}
private:
char *m_buf;
};
// An ostream that uses LogBuf
class LogStream : public std::ostream
{
public:
LogStream () : std::ostream (m_LogBuf = new LogBuf ()) { }
~LogStream () { delete m_LogBuf; }
private:
LogBuf *m_LogBuf;
};
// A class to manage LogStream objects (one per thread)
class Logger
{
public:
void AddThread (void)
{
mutex.lock ();
m_logstreams [std::this_thread::get_id ()] = new LogStream ();
mutex.unlock ();
}
void RemoveThread ()
{
mutex.lock ();
std::thread::id thread_id = std::this_thread::get_id ();
LogStream *logstream = m_logstreams [thread_id];
m_logstreams.erase (m_logstreams.find (thread_id));
mutex.unlock ();
delete logstream;
}
LogStream& GetLogStream ()
{
mutex.lock ();
LogStream *logstream = m_logstreams [std::this_thread::get_id ()];
mutex.unlock ();
return *logstream;
}
private:
static std::mutex mutex;
std::map<const std::thread::id, LogStream *> m_logstreams;
};
std::mutex Logger::mutex;
Logger logger;
// A simple class to make sure we remember to call RemoveThread
class LogStreamHelper
{
public:
LogStreamHelper () { logger.AddThread (); }
~LogStreamHelper () { logger.RemoveThread (); }
inline LogStream &GetLogStream () { return logger.GetLogStream (); }
};
// Test program
void work_to_do_1()
{
LogStreamHelper logstream_helper;
LogStream& logstream = logstream_helper.GetLogStream ();
logstream << "-:Thread 1:-" << std::endl;
}
void work_to_do_2()
{
LogStreamHelper logstream_helper;
LogStream& logstream = logstream_helper.GetLogStream ();
logstream << "-:Thread 2:-" << std::endl;
}
int main ()
{
LogStreamHelper logstream_helper;
LogStream& logstream = logstream_helper.GetLogStream ();
logstream << "Main thread" << std::endl;
std::thread t1 (work_to_do_1);
std::thread t2 (work_to_do_2);
t1.join ();
t2.join ();
return 0;
}
Output:
Main thread
-:Thread 1:-
-:Thread 2:-
Run it at Wandbox.

Boost ASIO: Send message to all connected clients

I'm working on a project that involves a boost::beast websocket/http mixed server, which runs on top of boost::asio. I've heavily based my project off the advanced_server.cpp example source.
It works fine, but right now I'm attempting to add a feature that requires the sending of a message to all connected clients.
I'm not very familiar with boost::asio, but right now I can't see any way to have something like "broadcast" events (if that's even the correct term).
My naive approach would be to see if I can have the construction of websocket_session() attach something like an event listener, and the destructor detach the listener. At that point, I could just fire the event and have all the currently valid websocket sessions (to which the lifetime of websocket_session() is scoped) execute a callback.
There is https://stackoverflow.com/a/17029022/268006, which does more or less what I want by (ab)using a boost::asio::steady_timer, but that seems like a kind of horrible hack to accomplish something that should be pretty straightforward.
Basically, given a stateful boost::asio server, how can I do an operation on multiple connections?
First off: You can broadcast UDP, but that's not to connected clients. That's just... UDP.
Secondly, that link shows how to have a condition-variable (event)-like interface in Asio. That's only a tiny part of your problem. You forgot about the big picture: you need to know about the set of open connections, one way or the other:
e.g. keeping a container of session pointers (weak_ptr) to each connection
each connection subscribing to a signal slot (e.g. Boost Signals).
Option 1. is great for performance, option 2. is better for flexibility (decoupling the event source from subscribers, making it possible to have heterogeneous subscribers, e.g. not from connections).
Because I think Option 1. is much simpler w.r.t. threading, better w.r.t. efficiency (you can e.g. serve all clients from one buffer without copying) and you probably don't need to doubly decouple the signal/slots, let me refer to an answer where I already showed as much for pure Asio (without Beast):
How to design proper release of a boost::asio socket or wrapper thereof
It shows the concept of a "connection pool" - which is essentially a thread-safe container of weak_ptr<connection> objects with some garbage collection logic.
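In outline, such a pool looks like the sketch below (the names are mine, for illustration; the linked answer and the code further down differ in details such as when expired entries are pruned):
#include <cstddef>
#include <memory>
#include <mutex>
#include <vector>
struct connection; // your session type
// Thread-safe container of weak connection handles with lazy garbage
// collection of expired entries while iterating.
class connection_pool {
public:
    void add(std::shared_ptr<connection> const& c) {
        std::lock_guard<std::mutex> lk(mx_);
        pool_.push_back(c); // stored as weak_ptr
    }
    template <typename F> std::size_t for_each_active(F f) {
        std::lock_guard<std::mutex> lk(mx_);
        std::size_t n = 0;
        for (auto it = pool_.begin(); it != pool_.end();) {
            if (auto c = it->lock()) { f(*c); ++n; ++it; } // still alive
            else it = pool_.erase(it);                      // expired: collect
        }
        return n;
    }
private:
    std::mutex mx_;
    std::vector<std::weak_ptr<connection>> pool_;
};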
Demonstration: Introducing Echo Server
After chatting about things I wanted to take the time to actually demonstrate the two approaches, so it's completely clear what I'm talking about.
First let's present a simple, run-of-the-mill asynchronous TCP server with
multiple concurrent connections
each connected session reads from the client line-by-line, and echoes the same back to the client
stops accepting after 3 seconds, and exits after the last client disconnects
master branch on github
#include <boost/asio.hpp>
#include <cassert>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.set_option(tcp::acceptor::reuse_address()); // set before bind so it takes effect
_acc.bind({{}, 6767});
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
private:
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
session->start();
if (!ec)
accept_loop();
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(3s);
s.stop(); // active connections will continue
th.join();
}
Approach 1. Adding Broadcast Messages
So, let's add "broadcast messages" that get sent to all active connections simultaneously. We add two:
one at each new connection (saying "Player ## has entered the game")
one that emulates a global "server event", like you described in the question. It gets triggered from within main:
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
Note how we do this by registering a weak pointer to each accepted connection and operating on each:
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
broadcast is also used directly from main and is simply:
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
using-asio-post branch on github
#include <boost/asio.hpp>
#include <cassert>
#include <memory>
#include <mutex>
#include <vector>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.set_option(tcp::acceptor::reuse_address()); // set before bind so it takes effect
_acc.bind({{}, 6767});
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
private:
using connptr = std::shared_ptr<connection>;
using weakptr = std::weak_ptr<connection>;
std::mutex _mx;
std::vector<weakptr> _registered;
size_t reg_connection(weakptr wp) {
std::lock_guard<std::mutex> lk(_mx);
_registered.push_back(wp);
return _registered.size();
}
template <typename F>
size_t for_each_active(F f) {
std::vector<connptr> active;
{
std::lock_guard<std::mutex> lk(_mx);
for (auto& w : _registered)
if (auto c = w.lock())
active.push_back(c);
}
for (auto& c : active) {
std::cout << "(running action for " << c->_s.remote_endpoint() << ")" << std::endl;
f(*c);
}
return active.size();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
Approach 2. The Same Broadcasts, But With Boost Signals2
The Signals approach is a fine example of Dependency Inversion.
Most salient notes:
signal slots get invoked on the thread that raises the signal ("raising the event")
the scoped_connection is there so subscriptions are automatically removed when the connection is destroyed
there's a subtle difference in the wording of the console message, from "reached # active connections" to "reached # active subscribers".
The difference is key to understanding the added flexibility: the signal owner/invoker does not know anything about the subscribers. That's the decoupling/dependency inversion we're talking about.
using-signals2 branch on github
#include <boost/asio.hpp>
#include <boost/signals2.hpp>
#include <cassert>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
boost::signals2::scoped_connection _subscription;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.set_option(tcp::acceptor::reuse_address()); // set before bind so it takes effect
_acc.bind({{}, 6767});
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
_broadcast_event(msg);
return _broadcast_event.num_slots();
}
private:
boost::signals2::signal<void(std::string const& msg)> _broadcast_event;
size_t reg_connection(connection& c) {
c._subscription = _broadcast_event.connect(
[&c](std::string msg){ c.send(msg, true); }
);
return _broadcast_event.num_slots();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(*session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active subscribers\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
See the diff between Approach 1. and 2.: Compare View on github
A sample of the output when run against 3 concurrent clients with:
(for a in {1..3}; do netcat localhost 6767 < /etc/dictionaries-common/words > echoed.$a& sleep .1; done; time wait)
The answer from @sehe was amazing, so I'll be brief. Generally speaking, to implement an algorithm which operates on all active connections you must do the following:
Maintain a list of active connections. If this list is accessed by multiple threads, it will need synchronization (std::mutex). New connections should be inserted to the list, and when a connection is destroyed or becomes inactive it should be removed from the list.
To iterate the list, synchronization is required if the list is accessed by multiple threads (i.e. more than one thread calling asio::io_context::run, or if the list is also accessed from threads that are not calling asio::io_context::run)
During iteration, if the algorithm needs to inspect or modify the state of any connection, and that state can be changed by other threads, additional synchronization is needed. This includes any internal "queue" of messages that the connection object stores.
A simple way to synchronize a connection object is to use boost::asio::post to submit a function for execution on the connection object's context, which will be either an explicit strand (boost::asio::strand, as in the advanced server examples) or an implicit strand (what you get when only one thread calls io_context::run). Approach 1 provided by @sehe uses post to synchronize in this fashion.
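A minimal sketch of that idea (assuming the session exposes its strand as a strand_ member and a hypothetical queue_message function; neither name comes from the examples above):
// Hand the work to the session's executor; the lambda then runs serialized
// with that session's other handlers, so no extra locking is needed.
boost::asio::post(session->strand_, [session] {
    session->queue_message("broadcast\n"); // hypothetical member function
});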
Another way to synchronize the connection object is to "stop the world." That means: call io_context::stop, wait for all the threads to exit, and then you are guaranteed that no other threads are accessing the list of connections. Then you can read and write connection object state all you want. When you are finished with the list of connections, call io_context::restart and launch the threads which call io_context::run again. Stopping the io_context does not stop network activity; the kernel and network drivers still send and receive data from internal buffers. TCP/IP flow control will take care of things, so the application still operates smoothly even though it becomes briefly unresponsive during the "stop the world." This approach can simplify things, but depending on your particular application you will have to evaluate whether it is right for you.
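A rough sketch of that sequence (ioc, workers_ and connections_ are placeholder names, not from the code above):
// 1. Stop the event loop and wait until no thread is inside run().
ioc.stop();
for (auto& t : workers_) t.join();
// 2. Now effectively single-threaded: safe to inspect or modify connection state.
for (auto& wp : connections_)
    if (auto c = wp.lock())
        c->send("server notice\n");
// 3. Make the context runnable again and relaunch the worker threads.
ioc.restart();
for (auto& t : workers_)
    t = std::thread([&ioc] { ioc.run(); });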
Hope this helps!
Thank you @sehe for the amazing answer. Still, I think there is a small but severe bug in Approach 2. IMHO reg_connection should look like this:
size_t reg_connection(std::shared_ptr<connection> c) {
c->_subscription = _broadcast_event.connect(
[weak_c = std::weak_ptr<connection>(c)](std::string msg){
if(auto c = weak_c.lock())
c->send(msg, true);
}
);
return _broadcast_event.num_slots();
}
Otherwise you can end up with a race condition leading to a server crash: if the connection instance is destroyed during the call to the lambda, the reference becomes invalid.
Similarly, connection::send() should look like this, because otherwise this might be dead by the time the lambda is called:
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(),
[self=shared_from_this(), msg=std::move(msg), at_front] {
if (self->enqueue(std::move(msg), at_front))
self->write_loop();
});
}
PS: I would have posted this as a comment on @sehe's answer, but unfortunately I do not have enough reputation.

What's wrong with this boost::asio and boost::coroutine usage pattern?

In this question I described a boost::asio and boost::coroutine usage pattern which causes random crashes of my application, and I published an extract from my code together with valgrind and GDB output.
In order to investigate the problem further I created a smaller proof-of-concept application which applies the same pattern. The same problem arises in the smaller program, whose source I publish here.
The code starts a few threads and creates a connection pool with a few dummy connections (user-supplied numbers). Additional arguments are unsigned integers which play the role of fake requests. The dummy implementation of the sendRequest function just starts an asynchronous timer that waits a number of seconds equal to the input number and yields from the function.
Can someone see the problem with this code and propose a fix for it?
#include "asiocoroutineutils.h"
#include "concurrentqueue.h"
#include <iostream>
#include <thread>
#include <boost/lexical_cast.hpp>
using namespace std;
using namespace boost;
using namespace utils;
#define id this_thread::get_id() << ": "
// ---------------------------------------------------------------------------
/*!
* \brief This is a fake Connection class
*/
class Connection
{
public:
Connection(unsigned connectionId)
: _id(connectionId)
{
}
unsigned getId() const
{
return _id;
}
void sendRequest(asio::io_service& ioService,
unsigned seconds,
AsioCoroutineJoinerProxy,
asio::yield_context yield)
{
cout << id << "Connection " << getId()
<< " Start sending: " << seconds << endl;
// waiting on this timer is a placeholder for any asynchronous operation
asio::steady_timer timer(ioService);
timer.expires_from_now(chrono::seconds(seconds));
coroutineAsyncWait(timer, yield);
cout << id << "Connection " << getId()
<< " Received response: " << seconds << endl;
}
private:
unsigned _id;
};
typedef std::unique_ptr<Connection> ConnectionPtr;
typedef std::shared_ptr<asio::steady_timer> TimerPtr;
// ---------------------------------------------------------------------------
class ConnectionPool
{
public:
ConnectionPool(size_t connectionsCount)
{
for(size_t i = 0; i < connectionsCount; ++i)
{
cout << "Creating connection: " << i << endl;
_connections.emplace_back(new Connection(i));
}
}
ConnectionPtr getConnection(TimerPtr timer,
asio::yield_context& yield)
{
lock_guard<mutex> lock(_mutex);
while(_connections.empty())
{
cout << id << "There is no free connection." << endl;
_timers.emplace_back(timer);
timer->expires_from_now(
asio::steady_timer::clock_type::duration::max());
_mutex.unlock();
coroutineAsyncWait(*timer, yield);
_mutex.lock();
cout << id << "Connection was freed." << endl;
}
cout << id << "Getting connection: "
<< _connections.front()->getId() << endl;
ConnectionPtr connection = std::move(_connections.front());
_connections.pop_front();
return connection;
}
void addConnection(ConnectionPtr connection)
{
lock_guard<mutex> lock(_mutex);
cout << id << "Returning connection " << connection->getId()
<< " to the pool." << endl;
_connections.emplace_back(std::move(connection));
if(_timers.empty())
return;
auto timer = _timers.back();
_timers.pop_back();
auto& ioService = timer->get_io_service();
ioService.post([timer]()
{
cout << id << "Wake up waiting getConnection." << endl;
timer->cancel();
});
}
private:
mutex _mutex;
deque<ConnectionPtr> _connections;
deque<TimerPtr> _timers;
};
typedef unique_ptr<ConnectionPool> ConnectionPoolPtr;
// ---------------------------------------------------------------------------
class ScopedConnection
{
public:
ScopedConnection(ConnectionPool& pool,
asio::io_service& ioService,
asio::yield_context& yield)
: _pool(pool)
{
auto timer = make_shared<asio::steady_timer>(ioService);
_connection = _pool.getConnection(timer, yield);
}
Connection& get()
{
return *_connection;
}
~ScopedConnection()
{
_pool.addConnection(std::move(_connection));
}
private:
ConnectionPool& _pool;
ConnectionPtr _connection;
};
// ---------------------------------------------------------------------------
void sendRequest(asio::io_service& ioService,
ConnectionPool& pool,
unsigned seconds,
asio::yield_context yield)
{
cout << id << "Constructing request ..." << endl;
AsioCoroutineJoiner joiner(ioService);
ScopedConnection connection(pool, ioService, yield);
asio::spawn(ioService, bind(&Connection::sendRequest,
connection.get(),
std::ref(ioService),
seconds,
AsioCoroutineJoinerProxy(joiner),
placeholders::_1));
joiner.join(yield);
cout << id << "Processing response ..." << endl;
}
// ---------------------------------------------------------------------------
void threadFunc(ConnectionPool& pool,
ConcurrentQueue<unsigned>& requests)
{
try
{
asio::io_service ioService;
while(true)
{
unsigned request;
if(!requests.tryPop(request))
break;
cout << id << "Scheduling request: " << request << endl;
asio::spawn(ioService, bind(sendRequest,
std::ref(ioService),
std::ref(pool),
request,
placeholders::_1));
}
ioService.run();
}
catch(const std::exception& e)
{
cerr << id << "Error: " << e.what() << endl;
}
}
// ---------------------------------------------------------------------------
int main(int argc, char* argv[])
{
if(argc < 3)
{
cout << "Usage: ./async_request poolSize threadsCount r0 r1 ..."
<< endl;
return -1;
}
try
{
auto poolSize = lexical_cast<size_t>(argv[1]);
auto threadsCount = lexical_cast<size_t>(argv[2]);
ConcurrentQueue<unsigned> requests;
for(int i = 3; i < argc; ++i)
{
auto request = lexical_cast<unsigned>(argv[i]);
requests.tryPush(request);
}
ConnectionPoolPtr pool(new ConnectionPool(poolSize));
vector<unique_ptr<thread>> threads;
for(size_t i = 0; i < threadsCount; ++i)
{
threads.emplace_back(
new thread(threadFunc, std::ref(*pool), std::ref(requests)));
}
for_each(threads.begin(), threads.end(), mem_fn(&thread::join));
}
catch(const std::exception& e)
{
cerr << "Error: " << e.what() << endl;
}
return 0;
}
Here are some helper utilities used by the above code:
#pragma once
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/spawn.hpp>
namespace utils
{
inline void coroutineAsyncWait(boost::asio::steady_timer& timer,
boost::asio::yield_context& yield)
{
boost::system::error_code ec;
timer.async_wait(yield[ec]);
if(ec && ec != boost::asio::error::operation_aborted)
throw std::runtime_error(ec.message());
}
class AsioCoroutineJoiner
{
public:
explicit AsioCoroutineJoiner(boost::asio::io_service& io)
: _timer(io), _count(0) {}
void join(boost::asio::yield_context yield)
{
assert(_count > 0);
_timer.expires_from_now(
boost::asio::steady_timer::clock_type::duration::max());
coroutineAsyncWait(_timer, yield);
}
void inc()
{
++_count;
}
void dec()
{
assert(_count > 0);
--_count;
if(0 == _count)
_timer.cancel();
}
private:
boost::asio::steady_timer _timer;
std::size_t _count;
}; // AsioCoroutineJoiner class
class AsioCoroutineJoinerProxy
{
public:
AsioCoroutineJoinerProxy(AsioCoroutineJoiner& joiner)
: _joiner(joiner)
{
_joiner.inc();
}
AsioCoroutineJoinerProxy(const AsioCoroutineJoinerProxy& joinerProxy)
: _joiner(joinerProxy._joiner)
{
_joiner.inc();
}
~AsioCoroutineJoinerProxy()
{
_joiner.dec();
}
private:
AsioCoroutineJoiner& _joiner;
}; // AsioCoroutineJoinerProxy class
} // utils namespace
For completeness of the code the last missing part is ConcurrentQueue class. It is too long to paste it here, but if you want you can find it here.
Example usage of the application is:
./connectionpooltest 3 3 5 7 8 1 0 9 2 4 3 6
where the first number 3 is the fake connection count and the second number 3 is the number of threads used. The numbers after them are fake requests.
The output of valgrind and GDB is the same as in the mentioned above question.
The Boost version used is 1.57. The compiler is GCC 4.8.3. The operating system is CentOS Linux release 7.1.1503.
It seems that all the valgrind errors are caused by the BOOST_USE_VALGRIND macro not being defined, as Tanner Sansbury points out in a comment related to this question. Apart from this, the program appears to be correct.
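For reference, building with the macro defined might look like the line below (an assumption on my part; depending on the Boost version, Boost.Context/Coroutine may also need to be built with valgrind support, e.g. b2 valgrind=on):
g++ -std=c++11 -DBOOST_USE_VALGRIND connectionpooltest.cpp -lboost_coroutine -lboost_context -lboost_system -pthread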

Is there already a thread monitor in C++11 or Boost?

Is there already a thread monitor in C++11 or Boost?
I need to monitor thread execution, and when one fails for any reason I need to start it again.
I am using C++11.
This depends on what constitutes a thread failure. If you mean it could exit, you can package it up:
Let's pretend we have a "long-running" task with a 25% chance of failing midway:
int my_processing_task() // this can randomly fail
{
static const size_t iterations = 1ul << 6;
static const size_t mtbf = iterations << 2; // 25% chance of failure
static auto odds = bind(uniform_int_distribution<size_t>(0, mtbf), mt19937(time(NULL)));
for(size_t iteration = 0; iteration < iterations; ++iteration)
{
// long task
this_thread::sleep_for(chrono::milliseconds(10));
// that could fail
if (odds() == 37)
throw my_failure();
}
// we succeeded!
return 42;
}
If we want to keep running the task, regardless of whether it completed normally, or with an error, we can write a monitoring wrapper:
template <typename F> void monitor_task_loop(F f)
{
while (!shutdown)
try {
f();
++completions;
} catch (exception const& e)
{
std::cout << "handling: '" << e.what() << "'\n";
++failures;
}
std::cout << "shutdown requested\n";
}
In this case, I randomly thought it would be nice to count the number of regular completions, and the number of failures. The shutdown flag enables the thread to be shut down:
auto timeout = async(launch::async, []{ this_thread::sleep_for(chrono::seconds(3)); shutdown = true; });
monitor_task_loop(my_processing_task);
This will run the task monitoring loop for ~3 seconds. A demonstration running three background threads monitoring our task is Live On Coliru.
Added a C++03 version using Boost, Live On Coliru.
The version below uses only standard C++11 features.
#include <thread>
#include <future>
#include <iostream>
#include <random>
#include <functional>
#include <atomic>
#include <chrono>
#include <ctime>
using namespace std;
struct my_failure : virtual std::exception {
char const* what() const noexcept { return "the thread failed randomly"; }
};
int my_processing_task() // this can randomly fail
{
static const size_t iterations = 1ul << 4;
static const size_t mtbf = iterations << 2; // 25% chance of failure
static auto odds = bind(uniform_int_distribution<size_t>(0, mtbf), mt19937(time(NULL)));
for(size_t iteration = 0; iteration < iterations; ++iteration)
{
// long task
this_thread::sleep_for(chrono::milliseconds(10));
// that could fail
if (odds() == 37)
throw my_failure();
}
// we succeeded!
return 42;
}
std::atomic_bool shutdown(false);
std::atomic_size_t failures(0), completions(0);
template <typename F> void monitor_task_loop(F f)
{
while (!shutdown)
try {
f();
++completions;
} catch (exception const& e)
{
std::cout << "handling: '" << e.what() << "'\n";
++failures;
}
std::cout << "shutdown requested\n";
}
int main()
{
auto monitor = [] { monitor_task_loop(my_processing_task); };
thread t1(monitor), t2(monitor), t3(monitor);
this_thread::sleep_for(chrono::seconds(3));
shutdown = true;
t1.join();
t2.join();
t3.join();
std::cout << "completions: " << completions << ", failures: " << failures << "\n";
}

Mutex unlock fails strangely

I am playing around with some sockets, threads and mutexes. My question concerns threads and mutexes:
int ConnectionHandler::addNewSocket(){
this->connectionList_mutex.lock();
std::cout << "test1" << std::endl;
this->connectionList_mutex.unlock();
return 0;
}
int ConnectionHandler::main(){
while(true){
this->connectionList_mutex.lock();
std::cout << "test2" << std::endl;
this->connectionList_mutex.unlock();
}
}
The main function is running in one thread, while addNewSocket is called by another thread. The problem is that when addNewSocket is called once (by the second thread), the next unlock by thread 1 (main) fails with a strange "signal SIGABRT". I have worked on this for two days now but did not manage to get it fixed, sadly. I hope you can help me.
Edit: ConnectionHandler is a class, that has connectionList_mutex as a member.
Edit: Sometimes I also get this error: "Assertion failed: (ec == 0), function unlock, file /SourceCache/libcxx/libcxx-65.1/src/mutex.cpp, line 44." but it occurs randomly.
Edit: This is the whole class (reduced to a minimum; it should be context-independent to a certain degree, but it crashes when I put it right after a client has connected, and works if I put it right at the start):
class ConnectionHandler{
public:
ConnectionHandler();
int addNewSocket();
private:
int main();
static void start(void * pThis);
std::mutex connectionList_mutex;
};
ConnectionHandler::ConnectionHandler(){
std::thread t(&this->start, this);
t.detach();
}
void ConnectionHandler::start(void * pThis){
ConnectionHandler *handlerThis;
handlerThis = (ConnectionHandler *)pThis;
handlerThis->main();
}
int ConnectionHandler::addNewSocket(){
this->connectionList_mutex.lock();
std::cout << "test1" << std::endl;
this->connectionList_mutex.unlock();
return 0;
}
int ConnectionHandler::main(){
while(true){
this->connectionList_mutex.lock();
std::cout << "test2" << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
this->connectionList_mutex.unlock();
}
}
My guess is that your ConnectionHandler object is being destroyed somewhere. Also, you have defined ConnectionHandler::start in a silly way.
First, ConnectionHandler::start should be defined this way:
void ConnectionHandler::start(ConnectionHandler * pThis){
pThis->main();
}
The C++11 ::std::thread class is perfectly capable of preserving the type of the function argument so there is no need to resort to void *.
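In fact, the static trampoline can be dropped entirely; a minimal sketch of the constructor (same class, just starting the thread on the member function directly):
ConnectionHandler::ConnectionHandler(){
    // std::thread accepts a pointer-to-member plus the object pointer,
    // so no static start() helper or void* cast is needed.
    std::thread t(&ConnectionHandler::main, this);
    t.detach();
}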
Secondly, add in this code:
void ConnectionHandler::~ConnectionHandler(){
const void * const meptr = this;
this->connectionList_mutex.lock();
::std::cout << "ConnectionHandler being destroyed at " << meptr << ::std::endl;
this->connectionList_mutex.unlock();
}
And change the constructor to read:
ConnectionHandler::ConnectionHandler(){
const void * const meptr = this;
::std::cout << "ConnectionHandler being created at " << meptr << ::std::endl;
std::thread t(&this->start, this);
t.detach();
}
This will show you when the ConnectionHandler object is being destroyed. And my guess is that your code is destroying it while your detached thread is still running.
The meptr thing is because operator << has an overload for void * that prints out the pointer value. Printing out the pointer value for this will allow you to match up calls to the constructor and destructor if you're creating multiple ConnectionHandler objects.
Edit: Since it turned out I was correct, here is how I would recommend you write this toy ConnectionHandler class:
#include <iostream>
#include <atomic>
#include <thread>
#include <chrono>
#include <mutex>
class ConnectionHandler {
public:
ConnectionHandler();
~ConnectionHandler();
ConnectionHandler(const ConnectionHandler &) = delete;
const ConnectionHandler &operator =(const ConnectionHandler &) = delete;
int addNewSocket();
private:
int main();
static void start(ConnectionHandler * pThis);
::std::mutex connectionList_mutex;
volatile ::std::atomic_bool thread_shutdown;
::std::thread thread;
};
ConnectionHandler::ConnectionHandler()
: thread_shutdown(false), thread(&this->start, this)
{
}
ConnectionHandler::~ConnectionHandler()
{
thread_shutdown.store(true);
thread.join();
}
void ConnectionHandler::start(ConnectionHandler * pThis){
pThis->main();
}
int ConnectionHandler::addNewSocket(){
::std::lock_guard< ::std::mutex> lock(connectionList_mutex);
::std::cout << "test1" << ::std::endl;
return 0;
}
int ConnectionHandler::main(){
while(!thread_shutdown.load()){
::std::lock_guard< ::std::mutex> lock(connectionList_mutex);
::std::cout << "test2" << ::std::endl;
::std::this_thread::sleep_for(::std::chrono::milliseconds(100));
}
return 0;
}