What causes bad file descriptor error with async_write? - c++

I am using Boost Asio for a client/server application and ran into this problem, with a not-so-informative error message (at least to me ;)). I am sending structs as messages back and forth; the sending works perfectly from the client side, but an almost identical attempt from the server's side fails with this error: Send failed: Bad file descriptor
Here's a snippet of the sending part (please ask for any other details in the comments):
void read_from_ts(const char* buf, int len) { // this is the read callback function
if (len <= 0) {
std::cerr << "Error: Connection closed by peer. " << __FILE__ << ":" << __LINE__ << std::endl;
tcp_client_.close(tcp_connection_);
tcp_connection_ = nullptr;
ios_.stop(); // exit
return;
}
const UserLoginRequest *obj = reinterpret_cast<const UserLoginRequest *>(buf);
int tempId = obj->MessageHeaderIn.TemplateID;
Responses r;
switch(tempId)
{
case 10018: //login
const UserLoginRequest *obj = reinterpret_cast<const UserLoginRequest *>(buf);
//std::cout<<"Login request received"<<"\n";
boost::asio::ip::tcp::socket sock_(ios_);
r.login_ack(sock_);
/*will add more*/
}
std::cout << "RX: " << len << " bytes\n";
}
class Responses
{
public:
int login_ack(boost::asio::ip::tcp::socket& socket)
{
//std::cout<<"here"<<"\n";
UserLoginResponse info;
MessageHeaderOutComp mh;
ResponseHeaderComp rh;
rh.MsgSeqNum = 0; //no use for now
rh.RequestTime = 0; //not used at all
mh.BodyLen = 53; //no use
mh.TemplateID = 10019; // IMP
info.MessageHeaderOut = mh;
info.LastLoginTime = 0;
info.DaysLeftForPasswdExpiry = 10; //not used
info.GraceLoginsLeft = 10; //not used
rh.SendingTime = 0;
info.ResponseHeader = rh;
//Pad6 not used
async_write(socket, boost::asio::buffer(&info, sizeof(info)), on_send_completed);
}
static void on_send_completed(boost::system::error_code ec, size_t bytes_transferred) {
if (ec)
std::cout << "Send failed: " << ec.message() << "\n"; //**error shows up here**
else
std::cout << "Send succesful (" << bytes_transferred << " bytes)\n";
}
};
};

UPDATE: Just noticed a third, trivial explanation when reading your code; see the added bullet below.
Typically when the file-descriptor is closed elsewhere.
If you're using Asio, this typically means
the socket¹ object was destructed. This can be a beginner error when the code doesn't extend the lifetime of objects for the duration of asynchronous operations
the file-descriptor was passed to other code (e.g. using native_handle, https://www.boost.org/doc/libs/1_73_0/doc/html/boost_asio/reference/basic_stream_socket/native_handle.html) and that other code closed it (e.g. because it assumed ownership and did error handling).
UPDATE: Or, it can mean your socket was never properly initialized to begin with. In your code I read:
//std::cout<<"Login request received"<<"\n";
boost::asio::ip::tcp::socket sock_(ios_);
r.login_ack(sock_);
However, that just constructs a new socket, never connects or binds it, and then calls login_ack on it. That won't work: login_ack neither binds nor connects the socket before invoking async_write on it.
Did you mean to use tcp_connection_.sock_ or similar?
In general closing file-descriptors in third-party code is an error in multi-threaded code because it invites race conditions which will lead to arbitrary stream corruption (see e.g. How do you gracefully select() on sockets that could be closed on another thread?)
You can use shutdown instead in the majority of cases
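For illustration, a graceful teardown might look like this (a minimal sketch; sock stands for whatever tcp::socket you own):
boost::system::error_code ec;
sock.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec); // tell the peer we're done; outstanding reads complete with eof
if (ec)
    std::cerr << "shutdown: " << ec.message() << "\n";
sock.close(ec); // only close once all asynchronous operations on the socket have completed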
UNDEFINED BEHAVIORS
Also, note that
info doesn't have sufficient lifetime (it goes out of scope before the async_write completes)
your login_ack is declared to return int but never returns a value
Imagining Fixes
This is what I imagine the surrounding code to look like, after removing the above problems.
In fact it could be significantly simpler due to the static nature of the response, but I didn't want to assume all responses would be that simple, so I went with shared-pointer lifetime:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/core/ignore_unused.hpp>
#include <iostream>
using boost::asio::ip::tcp;
struct MyProgram {
boost::asio::io_context ios_;
struct UserLoginRequest {
struct MessageHeaderInComp {
int TemplateID = 10018;
} MessageHeaderIn;
};
struct Connection {
tcp::socket sock_;
template <typename Executor>
Connection(Executor ex) : sock_{ex} {}
};
std::unique_ptr<Connection> tcp_connection_ = std::make_unique<Connection>(ios_.get_executor());
struct {
void close(std::unique_ptr<Connection> const&);
} tcp_client_;
struct Responses {
static auto login_ack() {
struct UserLoginResponse {
struct MessageHeaderOutComp {
int BodyLen = 53; // no use
int TemplateID = 10019; // IMP
} MessageHeaderOut;
int LastLoginTime = 0;
int DaysLeftForPasswdExpiry = 10; // not used
int GraceLoginsLeft = 10; // not used
struct ResponseHeaderComp {
int MsgSeqNum = 0; // no use for now
int RequestTime = 0; // not used at all
int SendingTime = 0;
} ResponseHeader;
};
return std::make_shared<UserLoginResponse>();
}
};
void read_from_ts(const char* buf, int len) { // this is the read callback function
if (len <= 0) {
std::cerr << "Error: Connection closed by peer. " << __FILE__ << ":" << __LINE__ << std::endl;
tcp_client_.close(tcp_connection_);
tcp_connection_ = nullptr;
ios_.stop(); // exit
return;
}
const UserLoginRequest *obj = reinterpret_cast<const UserLoginRequest *>(buf);
int tempId = obj->MessageHeaderIn.TemplateID;
switch(tempId) {
case 10018: //login
const UserLoginRequest *obj = reinterpret_cast<const UserLoginRequest *>(buf);
//std::cout<<"Login request received"<<"\n";
boost::asio::ip::tcp::socket sock_(ios_);
auto response = Responses::login_ack();
async_write(tcp_connection_->sock_, boost::asio::buffer(response.get(), sizeof(*response)),
[response](boost::system::error_code ec, size_t bytes_transferred) {
if (ec)
std::cout << "Send failed: " << ec.message() << "\n"; //**error shows up here**
else
std::cout << "Send succesful (" << bytes_transferred << " bytes)\n";
});
/*will add more*/
boost::ignore_unused(obj);
}
std::cout << "RX: " << len << " bytes\n";
}
};
int main() {
MyProgram p;
}
¹ (or acceptor/posix::stream_descriptor)

Related

Simple client/server using C++/boost socket works under Windows but fails under Linux

I'm trying to write a very simple client/server app with boost::socket. I need a server to run and a single client to connect, send data, disconnect and possibly reconnect later and repeat.
The code reduced to the minimum is here:
Server app:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using boost::asio::ip::tcp;
class TheServer
{
public:
TheServer(int port) : m_port(port)
{
m_pIOService = new boost::asio::io_service;
m_pThread = new boost::thread(boost::bind<void>(&TheServer::run, this));
listenForNewConnection();
}
~TheServer()
{
m_bContinueReading = false;
m_pIOService->stop();
m_pThread->join();
delete m_pThread;
delete m_pSocket;
delete m_pAcceptor;
delete m_pIOService;
}
void listenForNewConnection()
{
if (m_pSocket)
delete m_pSocket;
if (m_pAcceptor)
delete m_pAcceptor;
// start new acceptor operation
m_pSocket = new tcp::socket(*m_pIOService);
m_pAcceptor = new tcp::acceptor(*m_pIOService, tcp::endpoint(tcp::v4(), m_port));
std::cout << "Starting async_accept" << std::endl;
m_pAcceptor->async_accept(*m_pSocket,
boost::bind<void>(&TheServer::readSession, this, boost::asio::placeholders::error));
}
void readSession(boost::system::error_code error)
{
if (!error)
{
std::cout << "Connection established" << std::endl;
while ( m_bContinueReading )
{
static unsigned char buffer[1000];
boost::system::error_code error;
size_t length = m_pSocket->read_some(boost::asio::buffer(&buffer, 1000), error);
if (!error && length != 0)
{
std::cout << "Received " << buffer << std::endl;
}
else
{
std::cout << "Received error, connection likely closed by peer" << std::endl;
break;
}
}
std::cout << "Connection closed" << std::endl;
listenForNewConnection();
}
else
{
std::cout << "Connection error" << std::endl;
}
std::cout << "Ending readSession" << std::endl;
}
void run()
{
while (m_bContinueReading)
m_pIOService->run_one();
std::cout << "Exiting run thread" << std::endl;
}
bool m_bContinueReading = true;
boost::asio::io_service* m_pIOService = NULL;
tcp::socket* m_pSocket = NULL;
tcp::acceptor* m_pAcceptor = NULL;
boost::thread* m_pThread = NULL;
int m_port;
};
int main(int argc, char* argv[])
{
TheServer* server = new TheServer(1900);
std::cout << "Press Enter to quit" << std::endl;
std::string sGot;
getline(std::cin, sGot);
delete server;
return 0;
}
Client app:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
int main(int argc, char* argv[])
{
std::cout << std::endl;
std::cout << "Starting client" << std::endl;
using boost::asio::ip::tcp;
boost::asio::io_service* m_pIOService = NULL;
tcp::socket* m_pSocket = NULL;
try
{
m_pIOService = new boost::asio::io_service;
std::stringstream sPort;
sPort << 1900;
tcp::resolver resolver(*m_pIOService);
tcp::resolver::query query(tcp::v4(), "localhost", sPort.str());
tcp::resolver::iterator iterator = resolver.resolve(query);
m_pSocket = new tcp::socket(*m_pIOService);
m_pSocket->connect(*iterator);
std::cout << "Client conected" << std::endl;
std::string hello = "Hello World";
boost::asio::write( *m_pSocket, boost::asio::buffer(hello.data(), hello.size()) );
boost::this_thread::sleep(boost::posix_time::milliseconds(100));
hello += "(2)";
boost::asio::write(*m_pSocket, boost::asio::buffer(hello.data(), hello.size()));
}
catch (std::exception& e)
{
delete m_pSocket;
m_pSocket = NULL;
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
Note that I use non-blocking async_accept to be able to cleanly stop the server when Enter is pressed.
Under Windows, it works perfectly fine, I run the server, it outputs:
Starting async_accept
Press Enter to quit
For each client app run, it outputs:
Starting client
Client conected
and server app outputs:
Connection established
Received Hello World
Received Hello World(2)
Received error, connection likely closed by peer
Connection closed
Starting async_accept
Ending readSession
Then when I press Enter in server app console, it outputs Exiting run thread and cleanly stops.
Now, when I compile this same code under Linux, the client outputs the same as under Windows, but nothing happens on the server side...
Any idea what's wrong?
There are many questionable elements.
There is a classical data race on m_bContinueReading. You write from another thread, but the other thread may never see the change because of the data race.
The second race condition is likely your problem:
m_pThread = new boost::thread(boost::bind<void>(&TheServer::run, this));
listenForNewConnection();
Here the run thread may complete before you ever post the first work. You can use a work-guard to prevent this. In your specific code you would already fix it by reordering the lines:
listenForNewConnection();
m_pThread = new boost::thread(boost::bind<void>(&TheServer::run, this));
I would not do this, because I would not have those statements in my constructor body. See below for the work guard solution
There is a lot of raw pointer handling and new/delete going on, which merely invites errors.
You use the buffer assuming that it is NUL-terminated. This is especially unwarranted because you use read_some which will read partial messages as they arrive on the wire.
You use a static buffer while the code may have different instances of the class. This is a false optimization. Instead, prevent all the allocations! Combining with the previous item:
char buffer[1000];
while (m_bContinueReading) {
size_t length = m_Socket.read_some(asio::buffer(&buffer, 1000), ec);
std::cout << "Received " << length << " (" << quoted(std::string(buffer, length)) << "), "
<< ec.message() << std::endl;
if (ec.failed())
break;
}
You always start a new acceptor, where there is no need: a single acceptor can accept as many connections as you wish. In fact, the method shown runs into the following problems:
lingering connections can prevent the new acceptor from binding to the same port. You could also alleviate that with
m_Acceptor.set_option(tcp::acceptor::reuse_address(true));
the destroyed acceptor may have backlogged connections, which are discarded
Typically you want to support concurrent connections, so you can split off a "readSession" and immediately accept the next connection. Strangely, your code seems to expect clients to stay connected until the server is prompted to shut down (from the console), yet after that you somehow start listening for new connections (even though you know the service will be stopping, and m_bContinueReading will remain false).
In the grand scheme of things, you don't want to destroy the acceptor unless something invalidated it. In practice this is rare (e.g. on Linux the acceptor will happily survive disabling/re-enabling the network adaptor).
you have spurious explicit template arguments (bind<void>). This is an anti-pattern and may lead to subtle problems
similarly with the buffer: just say asio::buffer(buffer) and you no longer have correctness concerns. In fact, don't use C-style arrays:
std::array<char, 1000> buffer;
size_t n = m_Socket.read_some(asio::buffer(buffer), ec);
std::cout << "Received " << n << " " << quoted(std::string(buffer.data(), n))
<< " (" << ec.message() << ")" << std::endl;
Instead of running a manual run_one() loop (where you forget to handle exceptions), consider "just" letting the service run(). Then you can .cancel() the acceptor to let the service run out of work.
In fact, this subtlety isn't required in your code, since your code already forces "ungraceful" shutdown anyways:
m_IOService.stop(); // nuclear option
m_Thread.join();
More gentle would be e.g.
m_Acceptor.cancel();
m_Socket.cancel();
m_Thread.join();
In which case you can respond to the completion error_code == error::operation_aborted to stop the session/accept loop (see the sketch after this list of issues).
Technically, you may be able to do away with the boolean flag altogether. I keep it because it allows us to handle multiple sessions-per-thread in a "fire-and-forget" manner.
In the client you have many of the same problems, and also a gotcha where you only look at the first resolver result (assuming there was one), ignoring the rest. You can use asio::connect instead of m_Socket.connect to try all resolved entries.
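As a sketch of the operation_aborted point above (reusing the on_accept/do_accept names from the listing below; not a complete implementation), the accept handler could distinguish cancellation from other errors like this:
void on_accept(error_code ec) {
    if (ec == boost::asio::error::operation_aborted)
        return; // acceptor was cancelled during shutdown: end the accept loop quietly
    if (!ec) {
        // ... hand m_Socket off to a session ...
        do_accept(); // and accept the next connection
    }
}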
Addressing the majority of these issues, simplifying the code:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <boost/optional.hpp>
#include <iomanip>
#include <iostream>
namespace asio = boost::asio;
using asio::ip::tcp;
using namespace std::chrono_literals;
using boost::system::error_code;
class TheServer {
public:
TheServer(int port) : m_port(port) {
m_Acceptor.set_option(tcp::acceptor::reuse_address(true));
do_accept();
}
~TheServer() {
m_shutdownRequested = true;
m_Work.reset(); // release the work-guard
m_Acceptor.cancel();
m_Thread.join();
}
private:
void do_accept() {
std::cout << "Starting async_accept" << std::endl;
m_Acceptor.async_accept( //
m_Socket, boost::bind(&TheServer::on_accept, this, asio::placeholders::error));
}
void on_accept(error_code ec) {
if (!ec) {
std::cout << "Connection established " << m_Socket.remote_endpoint() << std::endl;
// leave session running in the background:
std::thread(&TheServer::read_session_thread, this, std::move(m_Socket)).detach();
do_accept(); // and immediately accept new connection(s)
} else {
std::cout << "Connection error (" << ec.message() << ")" << std::endl;
std::cout << "Ending readSession" << std::endl;
}
}
void read_session_thread(tcp::socket sock) {
std::array<char, 1000> buffer;
for (error_code ec;;) {
size_t n = sock.read_some(asio::buffer(buffer), ec);
std::cout << "Received " << n << " " << quoted(std::string(buffer.data(), n)) << " ("
<< ec.message() << ")" << std::endl;
if (ec.failed() || m_shutdownRequested)
break;
}
std::cout << "Connection closed" << std::endl;
}
void thread_func() {
// http://www.boost.org/doc/libs/1_61_0/doc/html/boost_asio/reference/io_service.html#boost_asio.reference.io_service.effect_of_exceptions_thrown_from_handlers
for (;;) {
try {
m_IOService.run();
break; // exited normally
} catch (std::exception const& e) {
std::cerr << "[eventloop] error: " << e.what();
} catch (...) {
std::cerr << "[eventloop] unexpected error";
}
}
std::cout << "Exiting service thread" << std::endl;
}
std::atomic_bool m_shutdownRequested{false};
uint16_t m_port;
asio::io_service m_IOService;
boost::optional<asio::io_service::work> m_Work{m_IOService};
tcp::socket m_Socket{m_IOService};
tcp::acceptor m_Acceptor{m_IOService, tcp::endpoint{tcp::v4(), m_port}};
std::thread m_Thread{boost::bind(&TheServer::thread_func, this)};
};
constexpr uint16_t s_port = 1900;
void run_server() {
TheServer server(s_port);
std::cout << "Press Enter to quit" << std::endl;
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
}
void run_client() {
std::cout << std::endl;
std::cout << "Starting client" << std::endl;
using asio::ip::tcp;
try {
asio::io_service m_IOService;
tcp::resolver resolver(m_IOService);
auto iterator = resolver.resolve("localhost", std::to_string(s_port));
tcp::socket m_Socket(m_IOService);
connect(m_Socket, iterator);
std::cout << "Client connected" << std::endl;
std::string hello = "Hello World";
write(m_Socket, asio::buffer(hello));
std::this_thread::sleep_for(100ms);
hello += "(2)";
write(m_Socket, asio::buffer(hello));
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
}
int main(int argc, char**) {
if (argc>1)
run_server();
else
run_client();
}

Boost Asio and Udp Poll() No incoming data

I have to handle information from 100 ports in parallel, for 100 ms every second.
I am using Ubuntu OS.
I did some research and saw that the poll() function is a good candidate to avoid opening 100 threads to handle the data coming in over UDP in parallel.
I did the main part with Boost and tried to integrate poll() with it.
The problem is that when I send data from the client to the server, I receive nothing.
According to Wireshark, the data arrives at the right host (localhost, port 1234).
Did I miss something or do something wrong?
The test code (server) :
#include <deque>
#include <iostream>
#include <chrono>
#include <thread>
#include <sys/poll.h>
#include <boost/optional.hpp>
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
using boost::asio::ip::udp;
using namespace boost::asio;
using namespace std::chrono_literals;
std::string ip_address = "127.0.0.1";
template<typename T, size_t N>
size_t arraySize( T(&)[N] )
{
return(N);
}
class UdpReceiver
{
using Resolver = udp::resolver;
using Sockets = std::deque<udp::socket>;
using EndPoint = udp::endpoint;
using Buffer = std::array<char, 100>; // receiver buffer
public:
explicit UdpReceiver()
: work_(std::ref(resolver_context)), thread_( [this]{ resolver_context.run(); })
{ }
~UdpReceiver()
{
work_ = boost::none; // using work to keep run active always !
thread_.join();
}
void async_resolve(udp::resolver::query const& query_) {
resolver_context.post([this, query_] { do_resolve(query_); });
}
// callback for event-loop in main thread
void run_handler(int fd_idx) {
// start reading
auto result = read(fd_idx, receive_buf.data(), sizeof(Buffer));
// increment number of received packets
received_packets = received_packets + 1;
std::cout << "Received bytes " << result << " current recorded packets " << received_packets <<'\n';
// run handler posted from resolver threads
handler_context.poll();
handler_context.reset();
}
static void handle_receive(boost::system::error_code error, udp::resolver::iterator const& iterator) {
std::cout << "handle_resolve:\n"
" " << error.message() << "\n";
if (!error)
std::cout << " " << iterator->endpoint() << "\n";
}
// get current file descriptor
int fd(size_t idx)
{
return sockets[idx].native_handle();
}
private:
void do_resolve(boost::asio::ip::udp::resolver::query const& query_) {
boost::system::error_code error;
Resolver resolver(resolver_context);
Resolver::iterator result = resolver.resolve(query_, error);
sockets.emplace_back(udp::socket(resolver_context, result->endpoint()));
// post handler callback to service running in main thread
resolver_context.post(boost::bind(&UdpReceiver::handle_receive, error, result));
}
private:
Sockets sockets;
size_t received_packets = 0;
EndPoint remote_receiver;
Buffer receive_buf {};
io_context resolver_context;
io_context handler_context;
boost::optional<boost::asio::io_context::work> work_;
std::thread thread_;
};
int main (int argc, char** argv)
{
UdpReceiver udpReceiver;
udpReceiver.async_resolve(udp::resolver::query(ip_address, std::to_string(1234)));
//logic
pollfd fds[2] { };
for(int i = 0; i < arraySize(fds); ++i)
{
fds[i].fd = udpReceiver.fd(0);
fds[i].events = 0;
fds[i].events |= POLLIN;
fcntl(fds[i].fd, F_SETFL, O_NONBLOCK);
}
// simple event-loop
while (true) {
if (poll(fds, arraySize(fds), -1)) // waiting for wakeup call. Timeout - inf
{
for(auto &fd : fds)
{
if(fd.revents & POLLIN) // checking if we have something to read
{
fd.revents = 0; // reset kernel message
udpReceiver.run_handler(fd.fd); // call resolve handler. Do read !
}
}
}
}
return 0;
}
This looks like a confused mix of C style poll code and Asio code. The point is
you don't need poll (Asio does it internally, using epoll/select/kqueue/IOCP, whatever is available)
UDP is connectionless, so you don't need more than one socket to receive all "connections" (senders)
I'd replace it all with a single udp::socket on a single thread. You don't even have to manage the thread/work:
net::thread_pool io(1); // single threaded
udp::socket s{io, {{}, 1234}};
Let's run an asynchronous receive loop for 5s:
std::array<char, 100> receive_buffer;
udp::endpoint sender;
std::function<void(error_code, size_t)> read_loop;
read_loop = [&](error_code ec, size_t bytes) {
if (bytes != size_t(-1)) {
//std::cout << "read_loop (" << ec.message() << ")\n";
if (ec)
return;
received_packets += 1;
unique_senders.insert(sender);
//std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
//std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
}
s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
};
read_loop(error_code{}, -1); // prime the async pump
// after 5s stop
std::this_thread::sleep_for(5s);
post(io, [&s] { s.cancel(); });
io.join();
At the end, we can report the statistics:
std::cout << "A total of " << received_packets << " were received from "
<< unique_senders.size() << " unique senders\n";
With a simulated load in bash:
function client() { while read a; do echo "$a" > /dev/udp/localhost/1234 ; done < /etc/dictionaries-common/words; }
for a in {1..20}; do client& done; time wait
We get:
A total of 294808 were received from 28215 unique senders
real 0m5,007s
user 0m0,801s
sys 0m0,830s
This is obviously not optimized; the bottleneck here is likely the many, many bash subshells being launched for the clients.
Full Listing
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <iostream>
#include <set>
namespace net = boost::asio;
using boost::asio::ip::udp;
using boost::system::error_code;
using namespace std::chrono_literals;
int main ()
{
net::thread_pool io(1); // single threaded
udp::socket s{io, {{}, 1234}};
std::set<udp::endpoint> unique_senders;
size_t received_packets = 0;
{
std::array<char, 100> receive_buffer;
udp::endpoint sender;
std::function<void(error_code, size_t)> read_loop;
read_loop = [&](error_code ec, size_t bytes) {
if (bytes != size_t(-1)) {
//std::cout << "read_loop (" << ec.message() << ")\n";
if (ec)
return;
received_packets += 1;
unique_senders.insert(sender);
//std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
//std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
}
s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
};
read_loop(error_code{}, -1); // prime the async pump
// after 5s stop
std::this_thread::sleep_for(5s);
post(io, [&s] { s.cancel(); });
io.join();
}
std::cout << "A total of " << received_packets << " were received from "
<< unique_senders.size() << " unique senders\n";
}

C++ UDP Server io_context running in thread exits before work can start

I'm new to C++ but so far most of the asio stuff has made sense. I am however struggling to get my UDPServer working.
My question is possibly similar to: Trying to write UDP server class, io_context doesn't block
I think my UDPServer stops before work can be given to its io_context. However, I am issuing work to the context before calling io_context.run() so I don't understand why.
Of course, I am not entirely sure if I am even on the right track with the above statement and would appreciate some guidance. Here is my class:
template<typename message_T>
class UDPServer
{
public:
UDPServer(uint16_t port)
: m_socket(m_asioContext, asio::ip::udp::endpoint(asio::ip::udp::v4(), port))
{
m_port = port;
}
virtual ~UDPServer()
{
Stop();
}
public:
// Starts the server!
bool Start()
{
try
{
// Issue a task to the asio context
WaitForMessages();
m_threadContext = std::thread([this]() { m_asioContext.run(); });
}
catch (std::exception& e)
{
// Something prohibited the server from listening
std::cerr << "[SERVER # PORT " << m_port << "] Exception: " << e.what() << "\n";
return false;
}
std::cout << "[SERVER # PORT " << m_port << "] Started!\n";
return true;
}
// Stops the server!
void Stop()
{
// Request the context to close
m_asioContext.stop();
// Tidy up the context thread
if (m_threadContext.joinable()) m_threadContext.join();
// Inform someone, anybody, if they care...
std::cout << "[SERVER # PORT " << m_port << "] Stopped!\n";
}
void WaitForMessages()
{
m_socket.async_receive_from(asio::buffer(vBuffer.data(), vBuffer.size()), m_endpoint,
[this](std::error_code ec, std::size_t length)
{
if (!ec)
{
std::cout << "[SERVER # PORT " << m_port << "] Got " << length << " bytes \n Data: " << vBuffer.data() << "\n" << "Address: " << m_endpoint.address() << " Port: " << m_endpoint.port() << "\n" << "Data: " << m_endpoint.data() << "\n";
}
else
{
std::cerr << "[SERVER # PORT " << m_port << "] Exception: " << ec.message() << "\n";
return;
}
WaitForMessages();
}
);
}
void Send(message_T& msg, const asio::ip::udp::endpoint& ep)
{
asio::post(m_asioContext,
[this, msg, ep]()
{
// If the queue has a message in it, then we must
// assume that it is in the process of asynchronously being written.
bool bWritingMessage = !m_messagesOut.empty();
m_messagesOut.push_back(msg);
if (!bWritingMessage)
{
WriteMessage(ep);
}
}
);
}
private:
void WriteMessage(const asio::ip::udp::endpoint& ep)
{
m_socket.async_send_to(asio::buffer(&m_messagesOut.front(), sizeof(message_T)), ep,
[this, ep](std::error_code ec, std::size_t length)
{
if (!ec)
{
m_messagesOut.pop_front();
// If the queue is not empty, there are more messages to send, so
// make this happen by issuing the task to send the next header.
if (!m_messagesOut.empty())
{
WriteMessage(ep);
}
}
else
{
std::cout << "[SERVER # PORT " << m_port << "] Write Header Fail.\n";
m_socket.close();
}
});
}
void ReadMessage()
{
}
private:
uint16_t m_port = 0;
asio::ip::udp::endpoint m_endpoint;
std::vector<char> vBuffer = std::vector<char>(21);
protected:
TSQueue<message_T> m_messagesIn;
TSQueue<message_T> m_messagesOut;
Message<message_T> m_tempMessageBuf;
asio::io_context m_asioContext;
std::thread m_threadContext;
asio::ip::udp::socket m_socket;
};
}
Code is invoked in the main function for now:
enum class TestMsg {
Ping,
Join,
Leave
};
int main() {
Message<TestMsg> msg; // Message is a pretty basic struct that I'm not using yet. When I was, I was only receiving the first 4 bytes - which led me down this path of investigation
msg.id = TestMsg::Join;
msg << "hello";
UDPServer<Message<TestMsg>> server(60000);
}
When invoked, the server immediately exits before it gets a chance to print "[SERVER] Started".
I'll try adding the work guard as the linked post describes, but I would still like to understand why the io_context is not being primed with work quickly enough.
Update (now that I also read the question, not just the code)
While in WaitForMessages you do start listening by calling m_socket.async_receive_from, that function is asynchronous and returns/unblocks as soon as it has set up the listening. So as long as no client actually sends you something, your server has nothing to do. Only when something has been received will the callback be called, by a thread calling io_context::run. So you need the work guard so that the thread running run() won't return right after start, but blocks as long as the work guard is there.
Usually this is also combined with a try/while pattern in case an exception gets thrown in a handler and you still want your server to carry on.
Also in the code you posted, you never actually call UDPServer::Start!
This was my first idea of an answer:
This is normal behavior of ASIO. The io_context::run function will return as soon as it has no work to do.
So to make the run function block you have to use a boost::asio::executor_work_guard<boost::asio::io_context::executor_type>, i.e. a so-called work guard. Construct that object with a reference to your io_context and hold on to it, i.e. don't let it be destroyed for as long as you want the server to run, i.e. for as long as you do not want io_context::run to return when there is no work.
So given
boost::asio::io_context io_context_;
boost::asio::executor_work_guard<boost::asio::io_context::executor_type> work_guard_;
you could then initialize the work guard in the constructor's member-initializer list and spin up the run threads:
work_guard_{boost::asio::make_work_guard(io_context_)},
const auto thread_count{std::max<unsigned>(std::thread::hardware_concurrency(), 1)};
std::generate_n(std::back_inserter(this->io_run_threads_),
thread_count,
[this]() {
return std::thread{io_run_loop,
std::ref(this->io_context_), std::ref(this->error_handler_)};
});
void io_run_loop(boost::asio::io_context &context,
const std::function<void(std::exception &)> &error_handler) {
while (true) {
try {
context.run();
break;
} catch (std::exception &e) {
error_handler(e);
}
}
}
And then for server shutdown:
work_guard_.reset();
io_context_.stop();
std::for_each(this->io_run_threads_.begin(), this->io_run_threads_.end(), [](auto &thread) {
if (thread.joinable()) thread.join();
});
For a more graceful shutdown you can omit the stop call and instead close all sockets beforehand.
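A minimal sketch of that ordering, assuming a socket_ member next to the work_guard_ and io_run_threads_ shown above (names are illustrative, not a drop-in implementation):
boost::asio::post(io_context_, [this] {
    boost::system::error_code ec;
    socket_.close(ec); // pending async operations now complete with operation_aborted
});
work_guard_.reset();   // allow io_context::run() to return once the remaining handlers drain
for (auto& thread : io_run_threads_)
    if (thread.joinable()) thread.join();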
Looks like you forgot to call server.Start();. Moreover, you will want to make the main thread wait for some amount of time, otherwise the destructor of Server will immediately cause Stop() to be called:
int main()
{
Message<TestMsg> msg;
msg.id = TestMsg::Join;
msg << "hello";
UDPServer<Message<TestMsg>> server(60000);
server.Start();
std::this_thread::sleep_for(30s);
}
Issues
There is a conceptual problem with the Send API.
It takes an endpoint on each call, but it only uses the one that starts the write call chain! This means that if you do
srv.Send(msg1, {mymachine, 60001});
srv.Send(msg1, {otherserver, 5517});
It is likely they both get sent to mymachine:60001.
Then there's how you treat the received buffer. Just using .data() blindly assumes that the data is NUL-terminated. Don't do that:
std::string const data(vBuffer.data(), length);
Also, you seem to have at some time been confused about data and printed m_endpoint.data() - your princess is in another castle.
In reality you probably want ways to extract the typed data. I'm leaving that as beyond the scope of this question for today.
Regardless you should clear the buffer before reuse, because you might be seeing old data in subsequent reads.
vBuffer.assign(vBuffer.size(), '\0');
This is most likely undefined behaviour:
asio::buffer(&m_messagesOut.front(), sizeof(message_T)), ep,
This is only valid if message_T is trivial and standard-layout ("POD" - Plain Old Data). The presence of operator<< strongly suggests that is not the case.
Instead, build a (sequence of) buffer(s) that represents the message as raw bytes, e.g.
auto& msg = m_messagesOut.front();
msg.length = msg.body.size();
m_socket.async_send_to(
std::vector<asio::const_buffer>{
asio::buffer(&msg.id, sizeof(msg.id)),
asio::buffer(&msg.length, sizeof(msg.length)),
asio::buffer(msg.body),
},
// ...
Thread-safe queues seem to be overkill since you have a single service thread; that is an implicit "strand", so you can post to it to get single-threaded semantics.
Here are a few adaptations to make it work so far (except for the exercise-for-the-reader pointed out above):
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <deque>
#include <sstream>
// Library facilities
namespace asio = boost::asio;
using asio::ip::udp;
using boost::system::error_code;
using namespace std::chrono_literals;
/////////////////////////////////
// mock ups:
template <typename message_T> struct Message {
message_T id;
uint16_t length; // automatically filled on send, UDP packets are < 64k
std::string body;
template <typename T> friend Message& operator<<(Message& m, T const& v)
{
std::ostringstream oss;
oss << v;
m.body += oss.str();
//m.body += '\0'; // suggestion for easier message extraction
return m;
}
};
// Thread-safety can be replaced with the implicit strand of a single service
// thread
template <typename T> using TSQueue = std::deque<T>;
// end mock ups
/////////////////////////////////
template <typename message_T> class UDPServer {
public:
UDPServer(uint16_t port)
: m_socket(m_asioContext, udp::endpoint(udp::v4(), port))
{
m_port = port;
}
virtual ~UDPServer() { Stop(); }
public:
// Starts the server!
bool Start()
{
if (m_threadContext.joinable() && !m_asioContext.stopped())
return false;
try {
// Issue a task to the asio context
WaitForMessages();
m_threadContext = std::thread([this]() { m_asioContext.run(); });
} catch (std::exception const& e) {
// Something prohibited the server from listening
std::cerr << "[SERVER # PORT " << m_port
<< "] Exception: " << e.what() << "\n";
return false;
}
std::cout << "[SERVER # PORT " << m_port << "] Started!\n";
return true;
}
// Stops the server!
void Stop()
{
// Tell the context to stop processing
m_asioContext.stop();
// Tidy up the context thread
if (m_threadContext.joinable())
m_threadContext.join();
// Inform someone, anybody, if they care...
std::cout << "[SERVER # PORT " << m_port << "] Stopped!\n";
m_asioContext.reset(); // required in case you want to reuse this Server object
}
void Send(message_T& msg, const udp::endpoint& ep)
{
asio::post(m_asioContext, [this, msg, ep]() {
// If the queue has a message in it, then we must
// assume that it is in the process of asynchronously being written.
bool bWritingMessage = !m_messagesOut.empty();
m_messagesOut.push_back(msg);
if (!bWritingMessage) {
WriteMessage(ep);
}
});
}
private:
void WaitForMessages() // assumed to be on-strand
{
vBuffer.assign(vBuffer.size(), '\0');
m_socket.async_receive_from(
asio::buffer(vBuffer.data(), vBuffer.size()), m_endpoint,
[this](std::error_code ec, std::size_t length) {
if (!ec) {
std::string const data(vBuffer.data(), length);
std::cout << "[SERVER # PORT " << m_port << "] Got "
<< length << " bytes \n Data: " << data << "\n"
<< "Address: " << m_endpoint.address()
<< " Port: " << m_endpoint.port() << "\n"
<< std::endl;
} else {
std::cerr << "[SERVER # PORT " << m_port
<< "] Exception: " << ec.message() << "\n";
return;
}
WaitForMessages();
});
}
void WriteMessage(const udp::endpoint& ep)
{
auto& msg = m_messagesOut.front();
msg.length = msg.body.size();
m_socket.async_send_to(
std::vector<asio::const_buffer>{
asio::buffer(&msg.id, sizeof(msg.id)),
asio::buffer(&msg.length, sizeof(msg.length)),
asio::buffer(msg.body),
},
ep, [this, ep](std::error_code ec, std::size_t length) {
if (!ec) {
m_messagesOut.pop_front();
// If the queue is not empty, there are more messages to
// send, so make this happen by issuing the task to send the
// next header.
if (!m_messagesOut.empty()) {
WriteMessage(ep);
}
} else {
std::cout << "[SERVER # PORT " << m_port
<< "] Write Header Fail.\n";
m_socket.close();
}
});
}
private:
uint16_t m_port = 0;
udp::endpoint m_endpoint;
std::vector<char> vBuffer = std::vector<char>(21);
protected:
TSQueue<message_T> m_messagesIn;
TSQueue<message_T> m_messagesOut;
Message<message_T> m_tempMessageBuf;
asio::io_context m_asioContext;
std::thread m_threadContext;
udp::socket m_socket;
};
enum class TestMsg {
Ping,
Join,
Leave
};
int main()
{
UDPServer<Message<TestMsg>> server(60'000);
if (server.Start()) {
std::this_thread::sleep_for(3s);
{
Message<TestMsg> msg;
msg.id = TestMsg::Join;
msg << "hello PI equals " << M_PI << " in this world";
server.Send(msg, {{}, 60'001});
}
std::this_thread::sleep_for(27s);
}
}
For some reason netcat doesn't work with UDP on Coliru, so here's a "live" demo:
You can see our netcat client messages arriving. You can see the message Sent to 60001 arriving in the tcpdump output.

Making my function which calls async_read asynchronous Boost::asio

I am building a networking application, and being a newbie to Boost Asio and networking as a whole, I have a doubt which might be trivial. I have this application which reads from a file and calls APIs accordingly. I am reading JSON (example):
test.json
{
"commands":
[
{
"type":"login",
"Username": 0,
"Password": "kk"
}
]
}
My main program looks like this :
int main() {
ba::io_service ios;
tcp::socket s(ios);
s.connect({{},8080});
IO io;
io.start_read(s);
io.interact(s);
ios.run();
}
void start_read(tcp::socket& socket) {
char buffer_[MAX_LEN];
socket.async_receive(boost::asio::null_buffers(),
[&](const boost::system::error_code& ec, std::size_t bytes_read) {
(void)bytes_read;
if (likely(!ec)) {
boost::system::error_code errc;
int br = 0;
do {
br = socket.receive(boost::asio::buffer(buffer_, MAX_LEN), 0, errc);
if (unlikely(errc)) {
if (unlikely(errc != boost::asio::error::would_block)) {
if (errc != boost::asio::error::eof)
std::cerr << "asio async_receive: error " << errc.value() << " ("
<< errc.message() << ")" << std::endl;
interpret_read(socket,nullptr, -1);
//close(as);
return;
}
break; // EAGAIN
}
if (unlikely(br <= 0)) {
std::cerr << "asio async_receive: error, read " << br << " bytes" << std::endl;
interpret_read(socket,nullptr, br);
//close(as);
return;
}
interpret_read(socket,buffer_, br);
} while (br == (int)MAX_LEN);
} else {
if (socket.is_open())
std::cerr << "asio async_receive: error " << ec.value() << " (" << ec.message() << ")"
<< std::endl;
interpret_read(socket,nullptr, -1);
//close(as);
return;
}
start_read(socket);
});
}
void interpret_read(tcp::socket& s,const char* buf, int len) {
if(len<0)
{
std::cout<<"some error occured in reading"<<"\n";
}
const MessageHeaderOutComp *obj = reinterpret_cast<const MessageHeaderOutComp *>(buf);
int tempId = obj->TemplateID;
//std::cout<<tempId<<"\n";
switch(tempId)
{
case 10019: //login
{
//const UserLoginResponse *obj = reinterpret_cast<const UserLoginResponse *>(buf);
std::cout<<"*********[SERVER]: LOGIN ACKNOWLEDGEMENT RECEIVED************* "<<"\n";
break;
}
}
std::cout << "RX: " << len << " bytes\n";
if(this->input_type==2)
interact(s);
}
void interact(tcp::socket& s)
{
if(this->input_type == -1){
std::cout<<"what type of input you want ? option 1 : test.json / option 2 : manually through command line :";
int temp;
std::cin>>temp;
this->input_type = temp;
}
if(this->input_type==1)
{
//std::cout<<"reading from file\n";
std::ifstream input_file("test.json");
Json::Reader reader;
Json::Value input;
reader.parse(input_file, input);
for(auto i: input["commands"])
{
std::string str = i["type"].asString();
if(str=="login")
this->login_request(s,i);
}
std::cout<<"File read completely!! \n Do you want to continue or exit?: ";
}
}
The sending works fine: the message is sent and the server responds correctly. What I need to understand is why control never reaches on_send_completed (which prints "sent x bytes"). Nor does it print the message [SERVER]: LOGIN ACKNOWLEDGEMENT RECEIVED. I know I am missing something basic or doing something wrong; please correct me.
login_request function:
void login_request(tcp::socket& socket,Json::Value o) {
/*Some buffer being filled*/
async_write(socket, boost::asio::buffer(&info, sizeof(info)), on_send_completed);
}
Thanks in advance!!
From a cursory scan it looks like you redefined buffer_ that was already a class member (of IO, presumably).
The member is hidden by the local in start_read, which is both UB (because the local's lifetime ends before the asynchronous receive completes) and also means the member buffer_ isn't used.
I see a LOT of confusing code though. Why are you doing synchronous reads from within completion handlers?
I think you might be looking for the composed-operation reads (boost::asio::async_read and boost::asio::async_read_until).
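As a rough sketch of the first of those (assuming buffer_ is the class member referred to above, and that each message starts with a fixed-size MessageHeaderOutComp), a composed read could replace the receive loop entirely:
void start_read(tcp::socket& socket) {
    // read exactly one header's worth of bytes, then hand it to interpret_read
    boost::asio::async_read(socket, boost::asio::buffer(buffer_, sizeof(MessageHeaderOutComp)),
        [this, &socket](boost::system::error_code ec, std::size_t n) {
            if (ec) {
                interpret_read(socket, nullptr, -1);
                return;
            }
            interpret_read(socket, buffer_, static_cast<int>(n));
            start_read(socket); // wait for the next message
        });
}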

Boost ASIO: Send message to all connected clients

I'm working on a project that involves a boost::beast websocket/http mixed server, which runs on top of boost::asio. I've heavily based my project off the advanced_server.cpp example source.
It works fine, but right now I'm attempting to add a feature that requires the sending of a message to all connected clients.
I'm not very familiar with boost::asio, but right now I can't see any way to have something like "broadcast" events (if that's even the correct term).
My naive approach would be to see if I can have the construction of websocket_session() attach something like an event listener, and the destructor detach the listener. At that point, I could just fire the event, and have all the currently valid websocket sessions (to which the lifetime of websocket_session() is scoped) execute a callback.
There is https://stackoverflow.com/a/17029022/268006, which does more or less what I want by (ab)using a boost::asio::steady_timer, but that seems like a kind of horrible hack to accomplish something that should be pretty straightforward.
Basically, given a stateful boost::asio server, how can I do an operation on multiple connections?
First off: You can broadcast UDP, but that's not to connected clients. That's just... UDP.
Secondly, that link shows how to have a condition-variable (event)-like interface in Asio. That's only a tiny part of your problem. You forgot about the big picture: you need to know about the set of open connections, one way or the other:
e.g. keeping a container of session pointers (weak_ptr) to each connection
each connection subscribing to a signal slot (e.g. Boost Signals).
Option 1. is great for performance, option 2. is better for flexibility (decoupling the event source from subscribers, making it possible to have heterogeneous subscribers, e.g. not from connections).
Because I think Option 1. is much simpler w.r.t. threading, better w.r.t. efficiency (you can e.g. serve all clients from one buffer without copying) and you probably don't need to doubly decouple the signal/slots, let me refer to an answer where I already showed as much for pure Asio (without Beast):
How to design proper release of a boost::asio socket or wrapper thereof
It shows the concept of a "connection pool", which is essentially a thread-safe container of weak_ptr<connection> objects with some garbage collection logic.
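In outline, such a pool is little more than the following (a sketch only; connection stands for whatever your session type is, and the garbage collection is the part not repeated in the listings below):
// needs <algorithm>, <memory>, <mutex>, <vector>
struct connection_pool {
    std::mutex mx_;
    std::vector<std::weak_ptr<connection>> pool_;

    void add(std::weak_ptr<connection> wp) {
        std::lock_guard<std::mutex> lk(mx_);
        pool_.push_back(std::move(wp));
    }

    template <typename F> size_t for_each_active(F f) {
        std::vector<std::shared_ptr<connection>> active;
        {
            std::lock_guard<std::mutex> lk(mx_);
            for (auto& w : pool_)
                if (auto c = w.lock())
                    active.push_back(c);
            // garbage collection: drop entries whose connection is gone
            pool_.erase(std::remove_if(pool_.begin(), pool_.end(),
                            [](auto& w) { return w.expired(); }),
                        pool_.end());
        }
        for (auto& c : active)
            f(*c); // invoke outside the lock to avoid surprises
        return active.size();
    }
};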
Demonstration: Introducing Echo Server
After chatting about things I wanted to take the time to actually demonstrate the two approaches, so it's completely clear what I'm talking about.
First let's present a simple, run-of-the-mill asynchronous TCP server with:
multiple concurrent connections
each connected session reads from the client line-by-line, and echoes the same back to the client
stops accepting after 3 seconds, and exits after the last client disconnects
master branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
private:
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
session->start();
if (!ec)
accept_loop();
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(3s);
s.stop(); // active connections will continue
th.join();
}
Approach 1. Adding Broadcast Messages
So, let's add "broadcast messages" that get sent to all active connections simultaneously. We add two:
one at each new connection (saying "Player ## has entered the game")
one that emulates a global "server event" (like you described in the question). It gets triggered from within main:
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
Note how we do this by registering a weak pointer to each accepted connection and operating on each:
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
broadcast is also used directly from main and is simply:
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
using-asio-post branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
private:
using connptr = std::shared_ptr<connection>;
using weakptr = std::weak_ptr<connection>;
std::mutex _mx;
std::vector<weakptr> _registered;
size_t reg_connection(weakptr wp) {
std::lock_guard<std::mutex> lk(_mx);
_registered.push_back(wp);
return _registered.size();
}
template <typename F>
size_t for_each_active(F f) {
std::vector<connptr> active;
{
std::lock_guard<std::mutex> lk(_mx);
for (auto& w : _registered)
if (auto c = w.lock())
active.push_back(c);
}
for (auto& c : active) {
std::cout << "(running action for " << c->_s.remote_endpoint() << ")" << std::endl;
f(*c);
}
return active.size();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
Approach 2: The Same Broadcasts, But With Boost Signals2
The Signals approach is a fine example of Dependency Inversion.
Most salient notes:
signal slots get invoked on the thread invoking the signal ("raising the event")
the scoped_connection is there so subscriptions are automatically removed when the connection is destructed
there's a subtle difference in the wording of the console message from "reached # active connections" to "reached # active subscribers".
The difference is key to understanding the added flexibility: the signal owner/invoker does not know anything about the subscribers. That's the decoupling/dependency inversion we're talking about
using-signals2 branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
#include <boost/signals2.hpp>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
boost::signals2::scoped_connection _subscription;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
_broadcast_event(msg);
return _broadcast_event.num_slots();
}
private:
boost::signals2::signal<void(std::string const& msg)> _broadcast_event;
size_t reg_connection(connection& c) {
c._subscription = _broadcast_event.connect(
[&c](std::string msg){ c.send(msg, true); }
);
return _broadcast_event.num_slots();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(*session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active subscribers\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
See the diff between Approach 1. and 2.: Compare View on github
A sample of the output when run against 3 concurrent clients with:
(for a in {1..3}; do netcat localhost 6767 < /etc/dictionaries-common/words > echoed.$a& sleep .1; done; time wait)
The answer from @sehe was amazing, so I'll be brief. Generally speaking, to implement an algorithm which operates on all active connections you must do the following:
Maintain a list of active connections. If this list is accessed by multiple threads, it will need synchronization (std::mutex). New connections should be inserted into the list, and when a connection is destroyed or becomes inactive it should be removed from the list.
To iterate the list, synchronization is required if the list is accessed by multiple threads (i.e. more than one thread calling asio::io_context::run, or if the list is also accessed from threads that are not calling asio::io_context::run)
During iteration, if the algorithm needs to inspect or modify the state of any connection, and that state can be changed by other threads, additional synchronization is needed. This includes any internal "queue" of messages that the connection object stores.
A simple way to synchronize a connection object is to use boost::asio::post to submit a function for execution on the connection object's context, which will be either an explicit strand (boost::asio::strand, as in the advanced server examples) or an implicit strand (what you get when only one thread calls io_context::run). Approach 1 provided by @sehe uses post to synchronize in this fashion.
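With an explicit strand, that looks roughly like the following (a sketch; strand_, outbox_ and do_write are assumed members of the session class, not names from the code above):
// assumed members of the session class:
//   boost::asio::strand<boost::asio::io_context::executor_type> strand_;
//   std::deque<std::string> outbox_;
void send(std::string msg) {
    boost::asio::post(strand_,
        [self = shared_from_this(), msg = std::move(msg)]() mutable {
            // runs serialized with every other handler bound to strand_
            self->outbox_.push_back(std::move(msg));
            if (self->outbox_.size() == 1)
                self->do_write(); // kick off the write chain if it was idle
        });
}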
Another way to synchronize the connection object is to "stop the world." That means calling io_context::stop and waiting for all the threads to exit, after which you are guaranteed that no other thread is accessing the list of connections. Then you can read and write connection object state all you want. When you are finished with the list of connections, call io_context::restart and launch the threads which call io_context::run again. Stopping the io_context does not stop network activity; the kernel and network drivers still send and receive data from internal buffers. TCP/IP flow control will take care of things, so the application still operates smoothly even though it becomes briefly unresponsive during the "stop the world." This approach can simplify things, but depending on your particular application you will have to evaluate if it is right for you.
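A sketch of that "stop the world" sequence (worker_threads and prune_and_update_connections are placeholder names, not part of the code above):
io_context.stop();                      // ask run() to return as soon as possible
for (auto& t : worker_threads)          // wait until no thread is inside run()
    if (t.joinable()) t.join();

// single-threaded now: safe to inspect/modify the connection list and queues
prune_and_update_connections();         // placeholder for your bookkeeping

io_context.restart();                   // required before run() may be called again
for (auto& t : worker_threads)
    t = std::thread([&] { io_context.run(); });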
Hope this helps!
Thank you @sehe for the amazing answer. Still, I think there is a small but severe bug in Approach 2. IMHO reg_connection should look like this:
size_t reg_connection(std::shared_ptr<connection> c) {
c->_subscription = _broadcast_event.connect(
[weak_c = std::weak_ptr<connection>(c)](std::string msg){
if(auto c = weak_c.lock())
c->send(msg, true);
}
);
return _broadcast_event.num_slots();
}
Otherwise you can end up with a race condition leading to a server crash. In case the connection instance is destroyed during the call to the lambda, the reference becomes invalid.
Similarly, connection::send() should look like this, because otherwise this might be dead by the time the lambda is called:
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(),
[self=shared_from_this(), msg=std::move(msg), at_front] {
if (self->enqueue(std::move(msg), at_front))
self->write_loop();
});
}
PS: I would have posted this as a comment on @sehe's answer, but unfortunately I don't have enough reputation.