Sending struct over network - c++

I'm working with Intel SGX, which has predefined structures. I need to send these structures over a network connection which is operated using boost::asio.
The structure that needs to be sent has the following format:
typedef struct _ra_samp_request_header_t{
uint8_t type; /* set to one of ra_msg_type_t*/
uint32_t size; /*size of request body*/
uint8_t align[3];
uint8_t body[];
} ra_samp_request_header_t;
For sending and receiving, the methods async_write and async_read_some are used:
boost::asio::async_write(socket_, boost::asio::buffer(data_, max_length),
boost::bind(&Session::handle_write, this,
boost::asio::placeholders::error));
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&Session::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
whereas data_ is defined as
enum { max_length = 1024 };
char data_[max_length];
My first approach was to transform the individual structure elements into strings and store them in a vector<string>, which is then further transformed into a char* where each element is separated by \n.
But when reassembling the received char* back into the original structure on the receiver side, I run into trouble.
Is this really the way this should be done, or is there a nicer, more efficient way of transferring the structure?

Do you need it to be portable?
If not:
1. simplistic approach
2. using Boost Serialization
If it needs to be portable:
3. complicate the simplistic approach with ntohl and htonl calls etc.
4. use Boost Serialization with EOS Portable Archives
1. simplistic approach
Just send the struct as POD data (assuming it is actually POD, which given the code in your question is a fair assumption as the struct is clearly not C++).
A simple sample that uses synchronous calls on 2 threads (listener and client) shows how the server sends a packet to the client which the client receives correctly.
Notes:
using async calls is a trivial change (change write and read into async_write and async_read, which just makes control flow a bit less legible unless using coroutines); a rough sketch follows below
I showed how I'd use malloc/free in an (exception-)safe manner in C++11. You may want to make a simple Rule-Of-Zero wrapper instead in your codebase.
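As noted in the first point, the async version is mostly mechanical. A rough sketch of the listener's write (my own fragment, assuming the same packet, socket s and io_service svc as in the synchronous sample below; error handling elided):
ba::async_write(s, ba::buffer(&packet, sizeof(packet) + packet.size),
    [](boost::system::error_code ec, std::size_t written) {
        if (!ec) std::cout << "listener: Written: " << written << "\n";
        else     std::cerr << "listener: " << ec.message() << "\n";
    });
// the client side uses ba::async_read in the same fashion; svc.run() must be
// running for the completion handlers to be invoked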
Live On Coliru
#include <boost/asio.hpp>
#include <cstring>
namespace ba = boost::asio;
using ba::ip::tcp;
typedef struct _ra_samp_request_header_t{
uint8_t type; /* set to one of ra_msg_type_t*/
uint32_t size; /*size of request body*/
uint8_t align[3];
uint8_t body[];
} ra_samp_request_header_t;
#include <iostream>
#include <thread>
#include <memory>
int main() {
auto unique_ra_header = [](uint32_t body_size) {
static_assert(std::is_pod<ra_samp_request_header_t>(), "not pod");
auto* raw = static_cast<ra_samp_request_header_t*>(::malloc(sizeof(ra_samp_request_header_t)+body_size));
new (raw) ra_samp_request_header_t { 2, body_size, {0} };
return std::unique_ptr<ra_samp_request_header_t, decltype(&std::free)>(raw, std::free);
};
auto const& body = "There are many variations of passages of Lorem Ipsum available, but the majority have suffered alteration in some form, by injected humour, or randomised words which don't look even slightly believable. If you are going to use a passage of Lorem Ipsum, you need to be sure there isn't anything embarrassing hidden in the middle of text. All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as necessary, making this the first true generator on the Internet. It uses a dictionary of over 200 Latin words, combined with a handful of model sentence structures, to generate Lorem Ipsum which looks reasonable.";
auto sample = unique_ra_header(sizeof(body));
std::strncpy(reinterpret_cast<char*>(+sample->body), body, sizeof(body));
ba::io_service svc;
ra_samp_request_header_t const& packet = *sample;
auto listener = std::thread([&] {
try {
tcp::acceptor a(svc, tcp::endpoint { {}, 6767 });
tcp::socket s(svc);
a.accept(s);
std::cout << "listener: Accepted: " << s.remote_endpoint() << "\n";
auto written = ba::write(s, ba::buffer(&packet, sizeof(packet) + packet.size));
std::cout << "listener: Written: " << written << "\n";
} catch(std::exception const& e) {
std::cerr << "listener: " << e.what() << "\n";
}
});
std::this_thread::sleep_for(std::chrono::milliseconds(10)); // make sure listener is ready
auto client = std::thread([&] {
try {
tcp::socket s(svc);
s.connect(tcp::endpoint { {}, 6767 });
// this is to avoid the output to get intermingled, only
std::this_thread::sleep_for(std::chrono::milliseconds(200));
std::cout << "client: Connected: " << s.remote_endpoint() << "\n";
enum { max_length = 1024 };
auto packet_p = unique_ra_header(max_length); // slight over allocation for simplicity
boost::system::error_code ec;
auto received = ba::read(s, ba::buffer(packet_p.get(), max_length), ec);
// we expect only eof since the message received is likely not max_length
if (ec != ba::error::eof) ba::detail::throw_error(ec);
std::cout << "client: Received: " << received << "\n";
(std::cout << "client: Payload: ").write(reinterpret_cast<char const*>(packet_p->body), packet_p->size) << "\n";
} catch(std::exception const& e) {
std::cerr << "client: " << e.what() << "\n";
}
});
client.join();
listener.join();
}
Prints
g++ -std=gnu++11 -Os -Wall -pedantic main.cpp -pthread -lboost_system && ./a.out
listener: Accepted: 127.0.0.1:42914
listener: Written: 645
client: Connected: 127.0.0.1:6767
client: Received: 645
client: Payload: There are many variations of passages of Lorem Ipsum available, but the majority have suffered alteration in some form, by injected humour, or randomised words which don't look even slightly believable. If you are going to use a passage of Lorem Ipsum, you need to be sure there isn't anything embarrassing hidden in the middle of text. All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as necessary, making this the first true generator on the Internet. It uses a dictionary of over 200 Latin words, combined with a handful of model sentence structures, to generate Lorem Ipsum which looks reasonable.
1b. simplistic with wrapper
Because for Boost Serialization it would be convenient to have such a wrapper anyways, let's rewrite that using such a "Rule Of Zero" wrapper:
Live On Coliru
namespace mywrappers {
struct ra_samp_request_header {
enum { max_length = 1024 };
// Rule Of Zero - https://rmf.io/cxx11/rule-of-zero
ra_samp_request_header(uint32_t body_size = max_length) : _p(create(body_size)) {}
::ra_samp_request_header_t const& get() const { return *_p; };
::ra_samp_request_header_t& get() { return *_p; };
private:
static_assert(std::is_pod<::ra_samp_request_header_t>(), "not pod");
using Ptr = std::unique_ptr<::ra_samp_request_header_t, decltype(&std::free)>;
Ptr _p;
static Ptr create(uint32_t body_size) {
auto* raw = static_cast<::ra_samp_request_header_t*>(::malloc(sizeof(::ra_samp_request_header_t)+body_size));
new (raw) ::ra_samp_request_header_t { 2, body_size, {0} };
return Ptr(raw, std::free);
};
};
}
2. using Boost Serialization
Without much ado, here's a simplistic way to implement serialization in-class for that wrapper:
friend class boost::serialization::access;
template<typename Ar>
void save(Ar& ar, unsigned /*version*/) const {
ar & _p->type
& _p->size
& boost::serialization::make_array(_p->body, _p->size);
}
template<typename Ar>
void load(Ar& ar, unsigned /*version*/) {
uint8_t type = 0;
uint32_t size = 0;
ar & type & size;
auto tmp = create(size);
*tmp = ::ra_samp_request_header_t { type, size, {0} };
ar & boost::serialization::make_array(tmp->body, tmp->size);
// if no exceptions, swap it out
_p = std::move(tmp);
}
BOOST_SERIALIZATION_SPLIT_MEMBER()
Which then simplifies the test driver to this - using streambuf:
auto listener = std::thread([&] {
try {
tcp::acceptor a(svc, tcp::endpoint { {}, 6767 });
tcp::socket s(svc);
a.accept(s);
std::cout << "listener: Accepted: " << s.remote_endpoint() << "\n";
ba::streambuf sb;
{
std::ostream os(&sb);
boost::archive::binary_oarchive oa(os);
oa << sample;
}
auto written = ba::write(s, sb);
std::cout << "listener: Written: " << written << "\n";
} catch(std::exception const& e) {
std::cerr << "listener: " << e.what() << "\n";
}
});
std::this_thread::sleep_for(std::chrono::milliseconds(10)); // make sure listener is ready
auto client = std::thread([&] {
try {
tcp::socket s(svc);
s.connect(tcp::endpoint { {}, 6767 });
// this is to avoid the output to get intermingled, only
std::this_thread::sleep_for(std::chrono::milliseconds(200));
std::cout << "client: Connected: " << s.remote_endpoint() << "\n";
mywrappers::ra_samp_request_header packet;
boost::system::error_code ec;
ba::streambuf sb;
auto received = ba::read(s, sb, ec);
// we expect only eof since the message received is likely not max_length
if (ec != ba::error::eof) ba::detail::throw_error(ec);
std::cout << "client: Received: " << received << "\n";
{
std::istream is(&sb);
boost::archive::binary_iarchive ia(is);
ia >> packet;
}
(std::cout << "client: Payload: ").write(reinterpret_cast<char const*>(packet.get().body), packet.get().size) << "\n";
} catch(std::exception const& e) {
std::cerr << "client: " << e.what() << "\n";
}
});
All other code is unchanged from the above, see it Live On Coliru. Output unchanged, except packet sizes grew to 683 on my 64-bit machine using Boost 1.62.
3. complicate the simplistic approach
I'm not in the mood to demo this. It feels like being a C programmer instead of a C++ programmer. Of course there are clever ways to avoid writing the endian-ness twiddling etc. For a modern approach see e.g.
I can't find the sample talk/blog post about using Fusion to generate struct portable serialization for your classes (might add later)
https://www.youtube.com/watch?v=zvfPK4ot9uA
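Still, for completeness: the core of the manual approach boils down to a few byte-order conversions around the only multi-byte field. A rough sketch (my own fragment, reusing the sample/packet_p names and the socket s from the sample in 1., and assuming POSIX <arpa/inet.h> for htonl/ntohl):
#include <arpa/inet.h> // htonl/ntohl; the single-byte fields need no conversion

// sender: compute sizes in host byte order, convert just before the write
uint32_t total = sizeof(*sample) + sample->size;
sample->size = htonl(sample->size);              // wire format (big endian)
auto written = ba::write(s, ba::buffer(sample.get(), total));

// receiver: convert back to host order before trusting the value
packet_p->size = ntohl(packet_p->size);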
4. use EOS Portable Archives
This is a simple drop-in exercise using the code from 2.

Related

C++ UDP Server io_context running in thread exits before work can start

I'm new to C++ but so far most of the asio stuff has made sense. I am however struggling to get my UDPServer working.
My question is possibly similar to: Trying to write UDP server class, io_context doesn't block
I think my UDPServer stops before work can be given to its io_context. However, I am issuing work to the context before calling io_context.run() so I don't understand why.
Of course, I am not entirely sure if I am even on the right track with the above statement and would appreciate some guidance. Here is my class:
template<typename message_T>
class UDPServer
{
public:
UDPServer(uint16_t port)
: m_socket(m_asioContext, asio::ip::udp::endpoint(asio::ip::udp::v4(), port))
{
m_port = port;
}
virtual ~UDPServer()
{
Stop();
}
public:
// Starts the server!
bool Start()
{
try
{
// Issue a task to the asio context
WaitForMessages();
m_threadContext = std::thread([this]() { m_asioContext.run(); });
}
catch (std::exception& e)
{
// Something prohibited the server from listening
std::cerr << "[SERVER # PORT " << m_port << "] Exception: " << e.what() << "\n";
return false;
}
std::cout << "[SERVER # PORT " << m_port << "] Started!\n";
return true;
}
// Stops the server!
void Stop()
{
// Request the context to close
m_asioContext.stop();
// Tidy up the context thread
if (m_threadContext.joinable()) m_threadContext.join();
// Inform someone, anybody, if they care...
std::cout << "[SERVER # PORT " << m_port << "] Stopped!\n";
}
void WaitForMessages()
{
m_socket.async_receive_from(asio::buffer(vBuffer.data(), vBuffer.size()), m_endpoint,
[this](std::error_code ec, std::size_t length)
{
if (!ec)
{
std::cout << "[SERVER # PORT " << m_port << "] Got " << length << " bytes \n Data: " << vBuffer.data() << "\n" << "Address: " << m_endpoint.address() << " Port: " << m_endpoint.port() << "\n" << "Data: " << m_endpoint.data() << "\n";
}
else
{
std::cerr << "[SERVER # PORT " << m_port << "] Exception: " << ec.message() << "\n";
return;
}
WaitForMessages();
}
);
}
void Send(message_T& msg, const asio::ip::udp::endpoint& ep)
{
asio::post(m_asioContext,
[this, msg, ep]()
{
// If the queue has a message in it, then we must
// assume that it is in the process of asynchronously being written.
bool bWritingMessage = !m_messagesOut.empty();
m_messagesOut.push_back(msg);
if (!bWritingMessage)
{
WriteMessage(ep);
}
}
);
}
private:
void WriteMessage(const asio::ip::udp::endpoint& ep)
{
m_socket.async_send_to(asio::buffer(&m_messagesOut.front(), sizeof(message_T)), ep,
[this, ep](std::error_code ec, std::size_t length)
{
if (!ec)
{
m_messagesOut.pop_front();
// If the queue is not empty, there are more messages to send, so
// make this happen by issuing the task to send the next header.
if (!m_messagesOut.empty())
{
WriteMessage(ep);
}
}
else
{
std::cout << "[SERVER # PORT " << m_port << "] Write Header Fail.\n";
m_socket.close();
}
});
}
void ReadMessage()
{
}
private:
uint16_t m_port = 0;
asio::ip::udp::endpoint m_endpoint;
std::vector<char> vBuffer = std::vector<char>(21);
protected:
TSQueue<message_T> m_messagesIn;
TSQueue<message_T> m_messagesOut;
Message<message_T> m_tempMessageBuf;
asio::io_context m_asioContext;
std::thread m_threadContext;
asio::ip::udp::socket m_socket;
};
}
Code is invoked in the main function for now:
enum class TestMsg {
Ping,
Join,
Leave
};
int main() {
Message<TestMsg> msg; // Message is a pretty basic struct that I'm not using yet. When I was, I was only receiving the first 4 bytes - which led me down this path of investigation
msg.id = TestMsg::Join;
msg << "hello";
UDPServer<Message<TestMsg>> server(60000);
}
When invoked, the Server immediately exits before it gets a chance to print "[SERVER] Started"
I'll try adding the work guard as the linked post describes, but I would still like to understand why the io_context is not being primed with work quickly enough.
Update (now I also read the question, not just the code)
While in WaitForMessages you do start listening by calling m_socket.async_receive_from, that function is async, so it returns/unblocks as soon as it has set up the listening. As long as no client actually sends you anything, your server has nothing to do. Only when something has been received will the callback be invoked, by a thread calling io_context::run. That is why you need the work guard: so that the thread running run won't unblock right after start, but keeps blocking as long as the work guard is there.
Usually this is also combined with a try/while pattern, so that if an exception gets thrown in a handler your server can still keep going.
Also in the code you posted, you never actually call UDPServer::Start!
This was my first idea of an answer:
This is normal behavior for Asio: the io_context::run function returns as soon as it has no work to do.
To make run block, you have to use a boost::asio::executor_work_guard<boost::asio::io_context::executor_type>, a so-called work guard. Construct that object with a reference to your io_context and hold on to it, i.e. don't let it be destroyed as long as you want the server to run and don't want io_context::run to return when there is no work.
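A minimal single-thread sketch of that idea (illustrative only; the fuller multi-thread version follows below):
boost::asio::io_context ioc;
auto guard = boost::asio::make_work_guard(ioc); // keeps run() from returning early
std::thread th([&ioc] { ioc.run(); });

// ... start async operations / post work here ...

guard.reset(); // let run() return once the remaining handlers have completed
th.join();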
So given
boost::asio::io_context io_context_;
boost::asio::executor_work_guard<boost::asio::io_context::executor_type> work_guard_;
you could then initialize it (e.g. in a constructor's member initializer list):
work_guard_{boost::asio::make_work_guard(io_context_)},
const auto thread_count{std::max<unsigned>(std::thread::hardware_concurrency(), 1)};
std::generate_n(std::back_inserter(this->io_run_threads_),
thread_count,
[this]() {
return std::thread{io_run_loop,
std::ref(this->io_context_), std::ref(this->error_handler_)};
});
void io_run_loop(boost::asio::io_context &context,
const std::function<void(std::exception &)> &error_handler) {
while (true) {
try {
context.run();
break;
} catch (std::exception &e) {
error_handler(e);
}
}
}
And then for server shutdown:
work_guard_.reset();
io_context_.stop();
std::for_each(this->io_run_threads_.begin(), this->io_run_threads_.end(), [](auto &thread) {
if (thread.joinable()) thread.join();
});
For a more graceful shutdown you can omit the stop call and instead close all sockets beforehand.
Looks like you forgot to call server.Start();. Moreover, you will want to make the main thread wait for some amount of time, otherwise the destructor of Server will immediately cause Stop() to be called:
int main()
{
Message<TestMsg> msg;
msg.id = TestMsg::Join;
msg << "hello";
UDPServer<Message<TestMsg>> server(60000);
server.Start();
std::this_thread::sleep_for(30s);
}
Issues
There is a conceptual problem with the Send API.
It takes an endpoint on each call, but it only uses the one that starts the write call chain! This means that if you do
srv.Send(msg1, {mymachine, 60001});
srv.Send(msg1, {otherserver, 5517});
It is likely they both get sent to mymachine:60001 (a sketch of a fix follows after this list of issues).
The way you treat the received buffer: just using .data() blindly assumes that the data is NUL-terminated. Don't do that:
std::string const data(vBuffer.data(), length);
Also, you seem to have at some time been confused about data and printed m_endpoint.data() - your princess is in another castle.
In reality you probably want ways to extract the typed data. I'm leaving that as beyond the scope of this question for today.
Regardless you should clear the buffer before reuse, because you might be seeing old data in subsequent reads.
vBuffer.assign(vBuffer.size(), '\0');
This is most likely undefined behaviour:
asio::buffer(&m_messagesOut.front(), sizeof(message_T)), ep,
This is only valid if message_T is trivial and standard-layout ("POD" - Plain Old Data). The presence of operator<< strongly suggests that is not the case.
Instead, build a (sequence of) buffer(s) that represents the message as raw bytes, e.g.
auto& msg = m_messagesOut.front();
msg.length = msg.body.size();
m_socket.async_send_to(
std::vector<asio::const_buffer>{
asio::buffer(&msg.id, sizeof(msg.id)),
asio::buffer(&msg.length, sizeof(msg.length)),
asio::buffer(msg.body),
},
// ...
Thread safe queues seem to be overkill since you have a single service thread; that is an implicit "strand" so you can post to it to have single-threaded semantics.
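As promised, here is a sketch of one way to fix the Send API issue: store the destination endpoint together with each queued message, so every Send call keeps its own target. This is my own illustrative fragment and not part of the adapted code below, which leaves that fix as an exercise:
// replaces the plain TSQueue<message_T> member
std::deque<std::pair<message_T, udp::endpoint>> m_messagesOut;

void Send(message_T msg, udp::endpoint ep)
{
    asio::post(m_asioContext, [this, msg = std::move(msg), ep]() mutable {
        bool writing = !m_messagesOut.empty();
        m_messagesOut.emplace_back(std::move(msg), ep);
        if (!writing)
            WriteMessage(); // always sends the front message to its own endpoint
    });
}

void WriteMessage()
{
    auto& msg = m_messagesOut.front().first;
    auto& ep  = m_messagesOut.front().second;
    msg.length = msg.body.size();
    m_socket.async_send_to(
        std::vector<asio::const_buffer>{
            asio::buffer(&msg.id, sizeof(msg.id)),
            asio::buffer(&msg.length, sizeof(msg.length)),
            asio::buffer(msg.body),
        },
        ep, [this](std::error_code ec, std::size_t) {
            if (!ec) {
                m_messagesOut.pop_front();
                if (!m_messagesOut.empty())
                    WriteMessage();
            }
        });
}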
Here's a few adaptations to make it work so far (except the exercise-for-the-reader pointed out):
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <deque>
#include <sstream>
// Library facilities
namespace asio = boost::asio;
using asio::ip::udp;
using boost::system::error_code;
using namespace std::chrono_literals;
/////////////////////////////////
// mock ups:
template <typename message_T> struct Message {
message_T id;
uint16_t length; // automatically filled on send, UDP packets are < 64k
std::string body;
template <typename T> friend Message& operator<<(Message& m, T const& v)
{
std::ostringstream oss;
oss << v;
m.body += oss.str();
//m.body += '\0'; // suggestion for easier message extraction
return m;
}
};
// Thread-safety can be replaced with the implicit strand of a single service
// thread
template <typename T> using TSQueue = std::deque<T>;
// end mock ups
/////////////////////////////////
template <typename message_T> class UDPServer {
public:
UDPServer(uint16_t port)
: m_socket(m_asioContext, udp::endpoint(udp::v4(), port))
{
m_port = port;
}
virtual ~UDPServer() { Stop(); }
public:
// Starts the server!
bool Start()
{
if (m_threadContext.joinable() && !m_asioContext.stopped())
return false;
try {
// Issue a task to the asio context
WaitForMessages();
m_threadContext = std::thread([this]() { m_asioContext.run(); });
} catch (std::exception const& e) {
// Something prohibited the server from listening
std::cerr << "[SERVER # PORT " << m_port
<< "] Exception: " << e.what() << "\n";
return false;
}
std::cout << "[SERVER # PORT " << m_port << "] Started!\n";
return true;
}
// Stops the server!
void Stop()
{
// Tell the context to stop processing
m_asioContext.stop();
// Tidy up the context thread
if (m_threadContext.joinable())
m_threadContext.join();
// Inform someone, anybody, if they care...
std::cout << "[SERVER # PORT " << m_port << "] Stopped!\n";
m_asioContext
.reset(); // required in case you want to reuse this Server object
}
void Send(message_T& msg, const udp::endpoint& ep)
{
asio::post(m_asioContext, [this, msg, ep]() {
// If the queue has a message in it, then we must
// assume that it is in the process of asynchronously being written.
bool bWritingMessage = !m_messagesOut.empty();
m_messagesOut.push_back(msg);
if (!bWritingMessage) {
WriteMessage(ep);
}
});
}
private:
void WaitForMessages() // assumed to be on-strand
{
vBuffer.assign(vBuffer.size(), '\0');
m_socket.async_receive_from(
asio::buffer(vBuffer.data(), vBuffer.size()), m_endpoint,
[this](std::error_code ec, std::size_t length) {
if (!ec) {
std::string const data(vBuffer.data(), length);
std::cout << "[SERVER # PORT " << m_port << "] Got "
<< length << " bytes \n Data: " << data << "\n"
<< "Address: " << m_endpoint.address()
<< " Port: " << m_endpoint.port() << "\n"
<< std::endl;
} else {
std::cerr << "[SERVER # PORT " << m_port
<< "] Exception: " << ec.message() << "\n";
return;
}
WaitForMessages();
});
}
void WriteMessage(const udp::endpoint& ep)
{
auto& msg = m_messagesOut.front();
msg.length = msg.body.size();
m_socket.async_send_to(
std::vector<asio::const_buffer>{
asio::buffer(&msg.id, sizeof(msg.id)),
asio::buffer(&msg.length, sizeof(msg.length)),
asio::buffer(msg.body),
},
ep, [this, ep](std::error_code ec, std::size_t length) {
if (!ec) {
m_messagesOut.pop_front();
// If the queue is not empty, there are more messages to
// send, so make this happen by issuing the task to send the
// next header.
if (!m_messagesOut.empty()) {
WriteMessage(ep);
}
} else {
std::cout << "[SERVER # PORT " << m_port
<< "] Write Header Fail.\n";
m_socket.close();
}
});
}
private:
uint16_t m_port = 0;
udp::endpoint m_endpoint;
std::vector<char> vBuffer = std::vector<char>(21);
protected:
TSQueue<message_T> m_messagesIn;
TSQueue<message_T> m_messagesOut;
Message<message_T> m_tempMessageBuf;
asio::io_context m_asioContext;
std::thread m_threadContext;
udp::socket m_socket;
};
enum class TestMsg {
Ping,
Join,
Leave
};
int main()
{
UDPServer<Message<TestMsg>> server(60'000);
if (server.Start()) {
std::this_thread::sleep_for(3s);
{
Message<TestMsg> msg;
msg.id = TestMsg::Join;
msg << "hello PI equals " << M_PI << " in this world";
server.Send(msg, {{}, 60'001});
}
std::this_thread::sleep_for(27s);
}
}
For some reason netcat doesn't work with UDP on Coliru, so here's a "live" demo:
You can see our netcat client messages arriving. You can see the message Sent to 60001 arriving in the tcpdump output.

Use Futures with Boost Thread Pool

I'm implementing a TCP client which reads and sends files and strings, and I'm using Boost as my main library. I'd like to continue reading or sending files while I keep sending strings, which in this case are the commands to send to the server. For this purpose I thought about using a Thread Pool in order to not overload the client. My question is, can I use futures to run callbacks when one of the threads in the pool ends? In case I can't, is there any other solution?
I was doing something like this, where pool_ is a boost::asio::thread_pool
void send_file(std::string const& file_path){
boost::asio::post(pool_, [this, &file_path] {
handle_send_file(file_path);
});
// DO SOMETHING WHEN handle_send_file ENDS
}
void handle_send_file(std::string const& file_path) {
boost::array<char, 1024> buf{};
boost::system::error_code error;
std::ifstream source_file(file_path, std::ios_base::binary | std::ios_base::ate);
if(!source_file) {
std::cout << "[ERROR] Failed to open " << file_path << std::endl;
//TODO handle error
}
size_t file_size = source_file.tellg();
source_file.seekg(0);
std::string file_size_readable = file_size_to_readable(file_size);
// First send file name and file size in bytes to server
boost::asio::streambuf request;
std::ostream request_stream(&request);
request_stream << file_path << "\n"
<< file_size << "\n\n"; // Consider sending readable version, does it change anything?
// Send the request
boost::asio::write(*socket_, request, error);
if(error){
std::cout << "[ERROR] Send request error:" << error << std::endl;
//TODO throw an exception? Here I'll have to check whether the server is working or not
}
if(DEBUG) {
std::cout << "[DEBUG] " << file_path << " size is: " << file_size_readable << std::endl;
std::cout << "[DEBUG] Start sending file content" << std::endl;
}
long bytes_sent = 0;
float percent = 0;
print_percentage(percent);
while(!source_file.eof()) {
source_file.read(buf.c_array(), (std::streamsize)buf.size());
int bytes_read_from_file = source_file.gcount(); //int is fine because i read at most buf's size, 1024 in this case
if(bytes_read_from_file<=0) {
std::cout << "[ERROR] Read file error" << std::endl;
break;
//TODO handle this error
}
percent = std::ceil((100.0 * bytes_sent) / file_size);
print_percentage(percent);
boost::asio::write(*socket_, boost::asio::buffer(buf.c_array(), source_file.gcount()),
boost::asio::transfer_all(), error);
if(error) {
std::cout << "[ERROR] Send file error:" << error << std::endl;
//TODO throw an exception?
}
bytes_sent += bytes_read_from_file;
}
std::cout << "\n" << "[INFO] File " << file_path << " sent successfully!" << std::endl;
}
The operations posted to the pool end without the threads ending. That's the whole purpose of pooling the threads.
void send_file(std::string const& file_path){
post(pool_, [this, &file_path] {
handle_send_file(file_path);
});
// DO SOMETHING WHEN handle_send_file ENDS
}
This has several issues. The largest one is that you should not capture file_path by reference, as the argument is soon out of scope, and the handle_send_file call will run at an unspecified time in another thread. That's a race condition and dangling reference. Undefined Behaviour results.
Then the
// DO SOMETHING WHEN handle_send_file ENDS
is on a line which has no sequence relation with handle_send_file. In fact, it will probably run before that operation ever has a chance to start.
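The capture problem in isolation: take the path by value and capture it by value, so the copy lives as long as the posted task does (the same fix is applied in the simplified version below):
void send_file(std::string file_path) {          // take a copy
    post(pool_, [this, file_path] {              // capture the copy by value
        handle_send_file(file_path);
    });
}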
Simplifying
Here's a simplified version:
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <fstream>
#include <iostream>
namespace asio = boost::asio;
using asio::ip::tcp;
static asio::thread_pool pool_;
struct X {
std::unique_ptr<tcp::socket> socket_;
explicit X(unsigned short port) : socket_(new tcp::socket{ pool_ }) {
socket_->connect({ {}, port });
}
void send_file(std::string file_path) {
post(pool_, [=, this] {
send_file_implementation(file_path);
// DO SOMETHING WHEN send_file_implementation ENDS
});
}
// throws system_error exception
void send_file_implementation(std::string file_path) {
std::ifstream source_file(file_path,
std::ios_base::binary | std::ios_base::ate);
size_t file_size = source_file.tellg();
source_file.seekg(0);
write(*socket_,
asio::buffer(file_path + "\n" + std::to_string(file_size) + "\n\n"));
boost::array<char, 1024> buf{};
while (source_file.read(buf.c_array(), buf.size()) ||
source_file.gcount() > 0)
{
int n = source_file.gcount();
if (n <= 0) {
using namespace boost::system;
throw system_error(errc::io_error, system_category());
}
write(*socket_, asio::buffer(buf), asio::transfer_exactly(n));
}
}
};
Now, you can indeed run several of these operations in parallel (assuming several instances of X, so you have separate socket_ connections).
To do something at the end, just put code where I moved the comment:
// DO SOMETHING WHEN send_file_implementation ENDS
If you don't know what to do there and you wish to make a future ready at that point, you can:
std::future<void> send_file(std::string file_path) {
std::packaged_task<void()> task([=, this] {
send_file_implementation(file_path);
});
return post(pool_, std::move(task));
}
This overload of post magically¹ returns the future from the packaged task. That packaged task will set the internal promise with either the (void) return value or the exception thrown.
See it in action: Live On Coliru
int main() {
// send two files simultaneously to different connections
X clientA(6868);
X clientB(6969);
std::future<void> futures[] = {
clientA.send_file("main.cpp"),
clientB.send_file("main.cpp"),
};
for (auto& fut : futures) try {
fut.get();
std::cout << "Everything completed without error\n";
} catch(std::exception const& e) {
std::cout << "Error occurred: " << e.what() << "\n";
};
pool_.join();
}
I tested this while running two netcats to listen on 6868/6969:
nc -l -p 6868 | head& nc -l -p 6969 | md5sum&
./a.out
wait
Our program prints:
Everything completed without error
Everything completed without error
The netcats print their filtered output:
main.cpp
1907
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <fstream>
#include <iostream>
#include <future>
namespace asio = boost::asio;
using asio::ip::tcp;
7ecb71992bcbc22bda44d78ad3e2a5ef -
¹ not magic: see https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/reference/async_result.html

Reading a serialized struct at the receiver end boost asio

I am new to boost and networking ;). I am making a client-server application with boost::asio. I need to pass structs as messages, so I used boost::serialization for it:
test.h
#pragma once
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/serialization.hpp>
struct Test
{
public:
int a;
int b;
template<typename archive> void serialize(archive& ar, const unsigned version) {
ar & a;
ar & b;
}
};
client side sending:
void send_asynchronously(tcp::socket& socket) {
Test info;
info.a = 1;
info.b = 2;
{
std::ostream os(&buf);
boost::archive::binary_oarchive out_archive(os);
out_archive << info;
}
async_write(socket, buf, on_send_completed);
}
On the receiver side, I read the data into a boost::asio buffer. I want to know a way to parse this buffer and extract the object on the server side. Please help.
You don't show enough code to know how you declared buf or managed the lifetime.
I'm assuming you used boost::asio::streambuf buf; and it has static storage duration (namespace scope) or is a class member (but you didn't show a class).
Either way, whatever you have you can do "the same" in reverse to receive.
Here's a shortened version (that leaves out the async so we don't have to make guesses about the lifetimes of things, like I mentioned above):
Connect
Let's connect to an imaginary server (we can make one below) at port 3001 on localhost:
asio::io_context ioc;
asio::streambuf buf;
tcp::socket s(ioc, tcp::v4());
s.connect({{}, 3001});
Serialize
Basically what you had:
{
std::ostream os(&buf);
boost::archive::binary_oarchive oa(os);
Test req {13,31};
oa << req;
}
Note that the {} scope around the stream/archive makes sure the archive is completed before sending.
Send
/*auto bytes_sent =*/ asio::write(s, buf);
Receive
Let's assume our server sends back another Test object serialized in the same way¹.
Reading into the buffer: assuming no framing, we'll just "read until the end of the stream":
boost::system::error_code ec;
/*auto bytes_received =*/ asio::read(s, buf, ec);
if (ec && ec != asio::error::eof) {
std::cout << "Read error: " << ec.message() << "\n";
return 1;
}
In real life you want timeouts and limits on the amount of data read. Often your protocol will add framing, so you know how much data to read or what boundary marker to expect.
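For example, a minimal length-prefix framing sketch (my own addition, not part of the demo below; it assumes a 4-byte big-endian length header in front of each archive and POSIX <arpa/inet.h> for htonl/ntohl):
// sender: announce the archive size, then send the archive itself
uint32_t len = htonl(static_cast<uint32_t>(buf.size()));
asio::write(s, asio::buffer(&len, sizeof(len)));
asio::write(s, buf);

// receiver: read exactly the header, then exactly the payload
uint32_t netlen = 0;
asio::read(s, asio::buffer(&netlen, sizeof(netlen)));
asio::read(s, buf, asio::transfer_exactly(ntohl(netlen)));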
Deserialize
Test response; // uninitialized
{
std::istream is(&buf);
boost::archive::binary_iarchive ia(is);
ia >> response;
}
Full Demo
Live On Coliru
#include <boost/asio.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <boost/serialization/serialization.hpp>
#include <iostream>
namespace asio = boost::asio;
using tcp = boost::asio::ip::tcp;
struct Test {
int a,b;
template<typename Ar> void serialize(Ar& ar, unsigned) { ar & a & b; }
};
int main() {
asio::io_context ioc;
asio::streambuf buf;
tcp::socket s(ioc, tcp::v4());
s.connect({{}, 3001});
///////////////////
// send a "request"
///////////////////
{
std::ostream os(&buf);
boost::archive::binary_oarchive oa(os);
Test req {13,31};
oa << req;
}
/*auto bytes_sent =*/ asio::write(s, buf);
/////////////////////
// receive "response"
/////////////////////
boost::system::error_code ec;
/*auto bytes_received =*/ asio::read(s, buf, ec);
if (ec && ec != asio::error::eof) {
std::cout << "Read error: " << ec.message() << "\n";
return 1;
}
Test response; // uninitialized
{
std::istream is(&buf);
boost::archive::binary_iarchive ia(is);
ia >> response;
}
std::cout << "Response: {" << response.a << ", " << response.b << "}\n";
}
Using netcat to mock a server with a previously generated response Test{42,99} (base64 encoded here):
base64 -d <<<"FgAAAAAAAABzZXJpYWxpemF0aW9uOjphcmNoaXZlEgAECAQIAQAAAAAAAAAAKgAAAGMAAAA=" | nc -N -l -p 3001
It prints:
Response: {42, 99}
¹ on the same architecture and compiled with the same version of Boost, because Boost's binary archives are not portable. The live demo is a good demonstration of this.
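If you do need the archives to be portable across machines, the text archives are a near drop-in replacement for the binary ones (at the cost of a larger wire format); a sketch of the substitution:
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>

boost::archive::text_oarchive oa(os); // instead of binary_oarchive
boost::archive::text_iarchive ia(is); // instead of binary_iarchive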

Thread-safety when accessing data from N-threads in context of an async TCP-server

As the title says, I have a question concerning the following scenario (simplified example):
Assume that I have an object of the Generator class below, which continuously updates its dataChunk member (running in the main thread).
class Generator
{
void generateData();
uint8_t dataChunk[999];
};
Furthermore I have an async acceptor of TCP connections to which 1-N clients can connect (running in a second thread).
The acceptor starts a new thread for each new client connection, in which an object of the Connection class below receives a request message from the client and provides a fraction of the dataChunk (belonging to the Generator) as an answer. Then it waits for a new request, and so on...
class Connection
{
void setDataChunk(uint8_t* dataChunk);
void handleRequest();
uint8_t* dataChunk;
};
Finally the actual question: the desired behaviour is that the Generator object generates a new dataChunk and waits until all 1-N Connection objects have dealt with their client requests before it generates the next dataChunk.
How do I lock the dataChunk against write access by the Generator object while the Connection objects deal with their requests, given that all Connection objects in their respective threads are supposed to have read access at the same time during their request-handling phase?
On the other hand, the Connection objects are supposed to wait for a new dataChunk after dealing with their respective request, without dropping a new client request.
--> I think a single mutex won't do the trick here.
My first idea was to share a struct between the objects with a semaphore for the Generator and a vector of semaphores for the connections. With these, every object could "understand" the state of the full-system and work accordingly.
What do you guys think? What is best practice in cases like this?
Thanks in advance!
There are several ways to solve it.
You can use std::shared_mutex.
void Connection::handleRequest()
{
while(true)
{
std::shared_lock<std::shared_mutex> lock(GeneratorObj.shared_mutex);
if(GeneratorObj.DataIsAvailable()) // we need to know that data is available
{
// Send to client
break;
}
}
}
void Generator::generateData()
{
std::unique_lock<std::shared_mutex> lock(GeneratorObj.shared_mutex);
// Generate data
}
Or you can use a boost::lockfree::queue, but data structures will be different.
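To illustrate only the queue side of that alternative (my own sketch of the boost::lockfree::queue API; the data flow would still need adapting, since in your scenario every Connection reads the same chunk rather than consuming it):
#include <boost/lockfree/queue.hpp>
#include <cstdint>

// The element type must be trivially copyable, hence raw pointers;
// ownership passes to whoever pops the chunk.
boost::lockfree::queue<uint8_t*> ready_chunks(128);

void generator_side() {
    auto* chunk = new uint8_t[999];
    // ... fill chunk ...
    while (!ready_chunks.push(chunk)) { /* queue full: retry or back off */ }
}

void connection_side() {
    uint8_t* chunk = nullptr;
    if (ready_chunks.pop(chunk)) {
        // ... send chunk to the client ...
        delete[] chunk;
    }
}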
How do I lock the dataChunk against write access by the Generator object while the Connection objects deal with their requests, given that all Connection objects in their respective threads are supposed to have read access at the same time during their request-handling phase?
I'd make a logical chain of operations, that includes the generation.
Here's a sample:
it is completely single threaded
accepts unbounded connections and deals with dropped connections
it uses a deadline_timer object to signal a barrier while waiting for the send of a chunk to (many) connections to complete. This makes it convenient to put the generateData call in an async call chain.
Live On Coliru
#include <boost/asio.hpp>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using Clock = std::chrono::high_resolution_clock;
using Duration = Clock::duration;
using namespace std::chrono_literals;
struct Generator {
void generateData();
uint8_t dataChunk[999];
};
struct Server {
Server(unsigned short port) : _port(port) {
_barrier.expires_at(boost::posix_time::neg_infin);
_acc.set_option(tcp::acceptor::reuse_address());
accept_loop();
}
void generate_loop() {
assert(n_sending == 0);
garbage_collect(); // remove dead connections, don't interfere with sending
if (_socks.empty()) {
std::clog << "No more connections; pausing Generator\n";
} else {
_gen.generateData();
_barrier.expires_at(boost::posix_time::pos_infin);
for (auto& s : _socks) {
++n_sending;
ba::async_write(s, ba::buffer(_gen.dataChunk), [this,&s](error_code ec, size_t written) {
assert(n_sending);
--n_sending; // even if failed, decreases pending operation
if (ec) {
std::cerr << "Write: " << ec.message() << "\n";
s.close();
}
std::clog << "Written: " << written << ", " << n_sending << " to go\n";
if (!n_sending) {
// green light to generate next chunk
_barrier.expires_at(boost::posix_time::neg_infin);
}
});
}
_barrier.async_wait([this](error_code ec) {
if (ec && ec != ba::error::operation_aborted)
std::cerr << "Client activity: " << ec.message() << "\n";
else generate_loop();
});
}
}
void accept_loop() {
_acc.async_accept(_accepting, [this](error_code ec) {
if (ec) {
std::cerr << "Accept fail: " << ec.message() << "\n";
} else {
std::clog << "Accepted: " << _accepting.remote_endpoint() << "\n";
_socks.push_back(std::move(_accepting));
if (_socks.size() == 1) // first connection?
generate_loop(); // start generator
accept_loop();
}
});
}
void run_for(Duration d) {
_svc.run_for(d);
}
void garbage_collect() {
_socks.remove_if([](tcp::socket& s) { return !s.is_open(); });
}
private:
ba::io_service _svc;
unsigned short _port;
tcp::acceptor _acc { _svc, { {}, _port } };
tcp::socket _accepting {_svc};
std::list<tcp::socket> _socks;
Generator _gen;
size_t n_sending = 0;
ba::deadline_timer _barrier {_svc};
};
int main() {
Server s(6767);
s.run_for(3s); // COLIRU
}
#include <fstream>
// synchronously generate random data chunks
void Generator::generateData() {
std::ifstream ifs("/dev/urandom", std::ios::binary);
ifs.read(reinterpret_cast<char*>(dataChunk), sizeof(dataChunk));
std::clog << "Generated chunk: " << ifs.gcount() << "\n";
}
Prints (for just the 1 client):
Accepted: 127.0.0.1:60870
Generated chunk: 999
Written: 999, 0 to go
Generated chunk: 999
[... snip ~4000 lines ...]
Written: 999, 0 to go
Generated chunk: 999
Write: Broken pipe
Written: 0, 0 to go
No more connections; pausing Generator

Boost ASIO: Send message to all connected clients

I'm working on a project that involves a boost::beast websocket/http mixed server, which runs on top of boost::asio. I've heavily based my project off the advanced_server.cpp example source.
It works fine, but right now I'm attempting to add a feature that requires the sending of a message to all connected clients.
I'm not very familiar with boost::asio, but right now I can't see any way to have something like "broadcast" events (if that's even the correct term).
My naive approach would be to see if I can have the construction of websocket_session() attach something like an event listener, and the destructor detach the listener. At that point, I could just fire the event, and have all the currently valid websocket sessions (to which the lifetime of websocket_session() is scoped) execute a callback.
There is https://stackoverflow.com/a/17029022/268006, which does more or less what I want by (ab)using a boost::asio::steady_timer, but that seems like a kind of horrible hack to accomplish something that should be pretty straightforward.
Basically, given a stateful boost::asio server, how can I do an operation on multiple connections?
First off: You can broadcast UDP, but that's not to connected clients. That's just... UDP.
Secondly, that link shows how to have a condition-variable (event)-like interface in Asio. That's only a tiny part of your problem. You forgot about the big picture: you need to know about the set of open connections, one way or the other:
e.g. keeping a container of session pointers (weak_ptr) to each connection
each connection subscribing to a signal slot (e.g. Boost Signals).
Option 1. is great for performance, option 2. is better for flexibility (decoupling the event source from subscribers, making it possible to have heterogeneous subscribers, e.g. not from connections).
Because I think Option 1. is much simpler w.r.t. threading, better w.r.t. efficiency (you can e.g. serve all clients from one buffer without copying) and you probably don't need to doubly decouple the signal/slots, let me refer to an answer where I already showed as much for pure Asio (without Beast):
How to design proper release of a boost::asio socket or wrapper thereof
It shows the concept of a "connection pool" - which is essentially a thread-safe container of weak_ptr<connection> objects with some garbage collection logic.
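The gist of that pool, reduced to a sketch (illustrative; the complete versions appear in the demos below):
#include <memory>
#include <mutex>
#include <vector>

struct connection; // as in the demos below

struct connection_pool {
    void add(std::shared_ptr<connection> const& c) {
        std::lock_guard<std::mutex> lk(mx_);
        conns_.push_back(c);
    }

    template <typename F> size_t for_each_active(F f) {
        std::vector<std::shared_ptr<connection>> active;
        {
            std::lock_guard<std::mutex> lk(mx_);
            for (auto& w : conns_)
                if (auto c = w.lock()) // expired entries are simply skipped
                    active.push_back(std::move(c));
        }
        for (auto& c : active) f(*c);
        return active.size();
    }

  private:
    std::mutex mx_;
    std::vector<std::weak_ptr<connection>> conns_;
};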
Demonstration: Introducing Echo Server
After chatting about things I wanted to take the time to actually demonstrate the two approaches, so it's completely clear what I'm talking about.
First let's present a simple, run-of-the-mill asynchronous TCP server:
with multiple concurrent connections
each connected session reads from the client line-by-line, and echoes the same back to the client
stops accepting after 3 seconds, and exits after the last client disconnects
master branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
private:
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
session->start();
if (!ec)
accept_loop();
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(3s);
s.stop(); // active connections will continue
th.join();
}
Approach 1. Adding Broadcast Messages
So, let's add "broadcast messages" that get sent to all active connections simultaneously. We add two:
one at each new connection (saying "Player ## has entered the game")
one that emulates a global "server event" (like you described in the question). It gets triggered from within main:
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
Note how we do this by registering a weak pointer to each accepted connection and operating on each:
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
broadcast is also used directly from main and is simply:
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
using-asio-post branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
private:
using connptr = std::shared_ptr<connection>;
using weakptr = std::weak_ptr<connection>;
std::mutex _mx;
std::vector<weakptr> _registered;
size_t reg_connection(weakptr wp) {
std::lock_guard<std::mutex> lk(_mx);
_registered.push_back(wp);
return _registered.size();
}
template <typename F>
size_t for_each_active(F f) {
std::vector<connptr> active;
{
std::lock_guard<std::mutex> lk(_mx);
for (auto& w : _registered)
if (auto c = w.lock())
active.push_back(c);
}
for (auto& c : active) {
std::cout << "(running action for " << c->_s.remote_endpoint() << ")" << std::endl;
f(*c);
}
return active.size();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
Approach 2: Those Broadcast But With Boost Signals2
The Signals approach is a fine example of Dependency Inversion.
Most salient notes:
signal slots get invoked on the thread invoking it ("raising the event")
the scoped_connection is there so subscriptions are automatically removed when the connection is destructed
there's a subtle difference in the wording of the console message, from "reached # active connections" to "reached # active subscribers".
The difference is key to understanding the added flexibility: the signal owner/invoker does not know anything about the subscribers. That's the decoupling/dependency inversion we're talking about.
using-signals2 branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
#include <boost/signals2.hpp>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
boost::signals2::scoped_connection _subscription;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
_broadcast_event(msg);
return _broadcast_event.num_slots();
}
private:
boost::signals2::signal<void(std::string const& msg)> _broadcast_event;
size_t reg_connection(connection& c) {
c._subscription = _broadcast_event.connect(
[&c](std::string msg){ c.send(msg, true); }
);
return _broadcast_event.num_slots();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(*session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active subscribers\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
See the diff between Approach 1. and 2.: Compare View on github
A sample of the output when run against 3 concurrent clients with:
(for a in {1..3}; do netcat localhost 6767 < /etc/dictionaries-common/words > echoed.$a& sleep .1; done; time wait)
The answer from @sehe was amazing, so I'll be brief. Generally speaking, to implement an algorithm which operates on all active connections you must do the following:
Maintain a list of active connections. If this list is accessed by multiple threads, it will need synchronization (std::mutex). New connections should be inserted to the list, and when a connection is destroyed or becomes inactive it should be removed from the list.
To iterate the list, synchronization is required if the list is accessed by multiple threads (i.e. more than one thread calling asio::io_context::run, or if the list is also accessed from threads that are not calling asio::io_context::run)
During iteration, if the algorithm needs to inspect or modify the state of any connection, and that state can be changed by other threads, additional synchronization is needed. This includes any internal "queue" of messages that the connection object stores.
A simple way to synchronize a connection object is to use boost::asio::post to submit a function for execution on the connection object's context, which will be either an explicit strand (boost::asio::strand, as in the advanced server examples) or an implicit strand (what you get when only one thread calls io_context::run). Approach 1 provided by @sehe uses post to synchronize in this fashion.
Another way to synchronize the connection object is to "stop the world." That means call io_context::stop, wait for all the threads to exit, and then you are guaranteed that no other threads are accessing the list of connections. Then you can read and write connection object state all you want. When you are finished with the list of connections, call io_context::restart and launch the threads which call io_context::run again. Stopping the io_context does not stop network activity, the kernel and network drivers still send and receive data from internal buffers. TCP/IP flow control will take care of things so the application still operates smoothly even though it becomes briefly unresponsive during the "stop the world." This approach can simplify things but depending on your particular application you will have to evaluate if it is right for you.
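A sketch of that "stop the world" sequence (my own illustration; names are placeholders for your io_context and run() threads):
ioc.stop();                        // ask all run() calls to return
for (auto& t : threads) t.join();  // after this, no handler is executing
// ... safely inspect/modify the list of connections and their state ...
ioc.restart();                     // required before run() may be called again
for (auto& t : threads) t = std::thread([&ioc] { ioc.run(); });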
Hope this helps!
Thank you @sehe for the amazing answer. Still, I think there is a small but severe bug in Approach 2. IMHO reg_connection should look like this:
size_t reg_connection(std::shared_ptr<connection> c) {
c->_subscription = _broadcast_event.connect(
[weak_c = std::weak_ptr<connection>(c)](std::string msg){
if(auto c = weak_c.lock())
c->send(msg, true);
}
);
return _broadcast_event.num_slots();
}
Otherwise you can end up with a race condition leading to a server crash. In case the connection instance is destroyed during the call to the lambda, the reference becomes invalid.
Similarly, connection::send() should look like this, because otherwise this might be dead by the time the lambda is called:
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(),
[self=shared_from_this(), msg=std::move(msg), at_front] {
if (self->enqueue(std::move(msg), at_front))
self->write_loop();
});
}
PS: I would have posted this as a comment on @sehe's answer, but unfortunately I don't have enough reputation.