I'm implementing a TCP client which reads and sends files and strings, and I'm using Boost as my main library. I'd like to continue reading or sending files while I keep sending strings, which in this case are the commands to send to the server. For this purpose I thought about using a thread pool in order to not overload the client. My question is: can I use futures to run callbacks when one of the threads in the pool finishes? In case I can't, is there any other solution?
I was doing something like this, where pool_ is a boost::asio::thread_pool:
void send_file(std::string const& file_path){
    boost::asio::post(pool_, [this, &file_path] {
        handle_send_file(file_path);
    });
    // DO SOMETHING WHEN handle_send_file ENDS
}
void handle_send_file(std::string const& file_path) {
    boost::array<char, 1024> buf{};
    boost::system::error_code error;
    std::ifstream source_file(file_path, std::ios_base::binary | std::ios_base::ate);
    if(!source_file) {
        std::cout << "[ERROR] Failed to open " << file_path << std::endl;
        //TODO handle the error
    }
    size_t file_size = source_file.tellg();
    source_file.seekg(0);
    std::string file_size_readable = file_size_to_readable(file_size);

    // First send file name and file size in bytes to server
    boost::asio::streambuf request;
    std::ostream request_stream(&request);
    request_stream << file_path << "\n"
                   << file_size << "\n\n"; // Consider sending readable version, does it change anything?

    // Send the request
    boost::asio::write(*socket_, request, error);
    if(error){
        std::cout << "[ERROR] Send request error:" << error << std::endl;
        //TODO throw an exception? Here I'll have to check whether the server is still up
    }
    if(DEBUG) {
        std::cout << "[DEBUG] " << file_path << " size is: " << file_size_readable << std::endl;
        std::cout << "[DEBUG] Start sending file content" << std::endl;
    }

    long bytes_sent = 0;
    float percent = 0;
    print_percentage(percent);

    while(!source_file.eof()) {
        source_file.read(buf.c_array(), (std::streamsize)buf.size());
        int bytes_read_from_file = source_file.gcount(); // int is fine because we read at most buf's size, 1024 in this case

        if(bytes_read_from_file<=0) {
            std::cout << "[ERROR] Read file error" << std::endl;
            break;
            //TODO handle this error
        }

        percent = std::ceil((100.0 * bytes_sent) / file_size);
        print_percentage(percent);

        boost::asio::write(*socket_, boost::asio::buffer(buf.c_array(), source_file.gcount()),
                           boost::asio::transfer_all(), error);
        if(error) {
            std::cout << "[ERROR] Send file error:" << error << std::endl;
            //TODO throw an exception?
        }

        bytes_sent += bytes_read_from_file;
    }

    std::cout << "\n" << "[INFO] File " << file_path << " sent successfully!" << std::endl;
}
The operations posted to the pool end without the threads ending. That's the whole purpose of pooling the threads.
void send_file(std::string const& file_path){
    post(pool_, [this, &file_path] {
        handle_send_file(file_path);
    });
    // DO SOMETHING WHEN handle_send_file ENDS
}
This has several issues. The largest one is that you should not capture file_path by reference, as the argument is soon out of scope, and the handle_send_file call will run at an unspecified time in another thread. That's a race condition and dangling reference. Undefined Behaviour results.
Then the
// DO SOMETHING WHEN handle_send_file ENDS
comment is on a line that has no sequencing relationship with handle_send_file. In fact, it will probably run before that operation ever has a chance to start.
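To make the fix concrete before the cleanup below, here is a minimal sketch (using the asker's names) of "capture by value, continue inside the posted task":

void send_file(std::string file_path) {           // take a copy
    boost::asio::post(pool_, [this, file_path] {  // capture the copy by value
        handle_send_file(file_path);
        // anything placed here runs after handle_send_file, on the pool thread
    });
}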
Simplifying
Here's a simplified version:
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <fstream>
#include <iostream>
namespace asio = boost::asio;
using asio::ip::tcp;

static asio::thread_pool pool_;

struct X {
    std::unique_ptr<tcp::socket> socket_;

    explicit X(unsigned short port) : socket_(new tcp::socket{ pool_ }) {
        socket_->connect({ {}, port });
    }

    void send_file(std::string file_path) {
        post(pool_, [=, this] {
            send_file_implementation(file_path);
            // DO SOMETHING WHEN send_file_implementation ENDS
        });
    }

    // throws system_error exception
    void send_file_implementation(std::string file_path) {
        std::ifstream source_file(file_path,
                                  std::ios_base::binary | std::ios_base::ate);
        size_t file_size = source_file.tellg();
        source_file.seekg(0);

        write(*socket_,
              asio::buffer(file_path + "\n" + std::to_string(file_size) + "\n\n"));

        boost::array<char, 1024> buf{};
        while (source_file.read(buf.c_array(), buf.size()) ||
               source_file.gcount() > 0)
        {
            int n = source_file.gcount();
            if (n <= 0) {
                using namespace boost::system;
                throw system_error(errc::io_error, system_category());
            }
            write(*socket_, asio::buffer(buf), asio::transfer_exactly(n));
        }
    }
};
Now, you can indeed run several of these operations in parallel (assuming several instances of X, so you have separate socket_ connections).
To do something at the end, just put code where I moved the comment:
// DO SOMETHING WHEN send_file_implementation ENDS
If you don't know what to do there and you wish to make a future ready at that point, you can:
std::future<void> send_file(std::string file_path) {
    std::packaged_task<void()> task([=, this] {
        send_file_implementation(file_path);
    });
    return post(pool_, std::move(task));
}
This overload of post magically¹ returns the future from the packaged task. That packaged task will set the internal promise with either the (void) return value or the exception thrown.
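As an aside: recent Asio versions can package the task for you via the use_future completion token. A sketch only, based on my reading of use_future's function-adapting overload (verify against your Boost version):

std::future<void> send_file(std::string file_path) {
    // use_future(f) wraps f in a packaged task; the returned future
    // receives f's (void) result or its exception
    return post(pool_, boost::asio::use_future([=, this] {
        send_file_implementation(file_path);
    }));
}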
See it in action: Live On Coliru
int main() {
    // send two files simultaneously to different connections
    X clientA(6868);
    X clientB(6969);

    std::future<void> futures[] = {
        clientA.send_file("main.cpp"),
        clientB.send_file("main.cpp"),
    };

    for (auto& fut : futures) try {
        fut.get();
        std::cout << "Everything completed without error\n";
    } catch(std::exception const& e) {
        std::cout << "Error occurred: " << e.what() << "\n";
    }

    pool_.join();
}
I tested this while running two netcats to listen on 6868/6969:
nc -l -p 6868 | head& nc -l -p 6969 | md5sum&
./a.out
wait
The client prints:
Everything completed without error
Everything completed without error
The netcats print their filtered output:
main.cpp
1907
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <fstream>
#include <iostream>
#include <future>
namespace asio = boost::asio;
using asio::ip::tcp;
7ecb71992bcbc22bda44d78ad3e2a5ef -
¹ not magic: see https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/reference/async_result.html
Related
I have an application where I need to connect to a socket, send a handshake message (send command1, get response, send command2), and then receive data. A deadline timer is set so that on expiry the io_service is stopped and a reconnect is attempted. There is no error from my first async_write, but the following async_read waits until the timer expires, and then it reconnects in an infinite loop.
My code looks like:
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <iostream>
#include <string>
#include <memory>
#include <boost/date_time/posix_time/posix_time.hpp>

using namespace std;
using boost::asio::ip::tcp;

static shared_ptr<boost::asio::io_service> _ios;
static shared_ptr<boost::asio::deadline_timer> timer;
static shared_ptr<boost::asio::ip::tcp::socket> tcp_sock;
static shared_ptr<tcp::resolver> _resolver;
static boost::asio::ip::tcp::resolver::results_type eps;

string buffer(1024,0);

void handle_read(const boost::system::error_code& ec, size_t bytes)
{
    if (ec)
    {
        cout << "error: " << ec.message() << endl;
        _ios->stop();
        return;
    }

    // got first response, send off reply
    if (buffer == "response")
    {
        boost::asio::async_write(*tcp_sock, boost::asio::buffer("command2",7),
            [](auto ec, auto bytes)
            {
                if (ec)
                {
                    cout << "write error: " << ec.message() << endl;
                    _ios->stop();
                    return;
                }
            });
    }
    else
    {
        // parse incoming data
    }

    // attempt next read
    timer->expires_from_now(boost::posix_time::seconds(10));
    boost::asio::async_read(*tcp_sock, boost::asio::buffer(buffer,buffer.size()), handle_read);
}

void get_response()
{
    timer->expires_from_now(boost::posix_time::seconds(10));
    boost::asio::async_read(*tcp_sock, boost::asio::buffer(buffer,buffer.size()), handle_read);
}

void on_connected(const boost::system::error_code& ec, tcp::endpoint)
{
    if (!tcp_sock->is_open())
    {
        cout << "socket is not open" << endl;
        _ios->stop();
    }
    else if (ec)
    {
        cout << "error: " << ec.message() << endl;
        _ios->stop();
        return;
    }
    else
    {
        cout << "connected" << endl;
        // do handshake (no errors?)
        boost::asio::async_write(*tcp_sock, boost::asio::buffer("command1",7),
            [](auto ec, auto bytes)
            {
                if (ec)
                {
                    cout << "write error: " << ec.message() << endl;
                    _ios->stop();
                    return;
                }
                get_response();
            });
    }
}

void check_timer()
{
    if (timer->expires_at() <= boost::asio::deadline_timer::traits_type::now())
    {
        tcp_sock->close();
        timer->expires_at(boost::posix_time::pos_infin);
    }
    timer->async_wait(boost::bind(check_timer));
}

void init(string ip, string port)
{
    // set/reset data and connect
    _resolver.reset(new tcp::resolver(*_ios));
    eps = _resolver->resolve(ip, port);
    timer.reset(new boost::asio::deadline_timer(*_ios));
    tcp_sock.reset(new boost::asio::ip::tcp::socket(*_ios));
    timer->expires_from_now(boost::posix_time::seconds(5));

    // start async connect
    boost::asio::async_connect(*tcp_sock, eps, on_connected);
    timer->async_wait(boost::bind(check_timer));
}

int main(int argc, char** argv)
{
    while (1)
    {
        // start new io context
        _ios.reset(new boost::asio::io_service);
        init(argv[1],argv[2]);
        _ios->run();
        cout << "try reconnect" << endl;
    }
    return 0;
}
Why would I be timing out? When I do the same procedure by hand with netcat, things look OK. The async_write reports no error, and I make sure not to start the async_read for the response until I am in the write handler.
Others have been spot on: you use a "blanket" read, which means it only completes on error (like EOF) or when the buffer is full (docs).
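To make the distinction concrete, a minimal sketch using your names: the first call completes only once all 1024 bytes of buffer have arrived, the second as soon as anything has arrived:

// blanket read: completes on error or when the whole buffer is full
boost::asio::async_read(*tcp_sock,
    boost::asio::buffer(buffer, buffer.size()), handle_read);

// one alternative: complete as soon as at least one byte is available
boost::asio::async_read(*tcp_sock,
    boost::asio::buffer(buffer, buffer.size()),
    boost::asio::transfer_at_least(1), handle_read);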
Besides, your code is over-complicated (excess dynamic allocation, manual new, globals, etc.).
The following simplified/cleaned up version still exhibits your problem: http://coliru.stacked-crooked.com/a/8f5d0820b3cee186
Since it looks like you just want to limit the overall time of the request, I'd suggest dropping the timer and just limiting the time the io_context runs.
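In this code that boils down to a single call (run_for is a stock io_context member):

_ios.run_for(std::chrono::seconds(10)); // returns when out of work or when the budget is spent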
Also showing how to use '\n' for message delimiter and avoid manually managing dynamic buffers:
Live On Coliru
#include <boost/asio.hpp>
#include <cassert>
#include <functional>
#include <iomanip>
#include <iostream>
#include <memory>
#include <string>
#include <thread>

namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;
using namespace std::literals;

struct Client {
    #define HANDLE(memfun) std::bind(&Client::memfun, this, std::placeholders::_1, std::placeholders::_2)

    Client(std::string const& ip, std::string const& port) {
        async_connect(_sock, tcp::resolver{_ios}.resolve(ip, port), HANDLE(on_connected));
    }

    void run() { _ios.run_for(10s); }

  private:
    asio::io_service _ios;
    asio::ip::tcp::socket _sock{_ios};
    std::string _buffer;

    void on_connected(error_code ec, tcp::endpoint) {
        std::cout << "on_connected: " << ec.message() << std::endl;
        if (ec)
            return;
        async_write(_sock, asio::buffer("command1\n"sv), [this](error_code ec, size_t) {
            std::cout << "write: " << ec.message() << std::endl;
            if (!ec)
                get_response();
        });
    }

    void get_response() {
        async_read_until(_sock, asio::dynamic_buffer(_buffer /*, 1024*/), "\n", HANDLE(on_read));
    }

    void on_read(error_code ec, size_t bytes) {
        std::cout << "handle_read: " << ec.message() << " " << bytes << std::endl;
        if (ec)
            return;

        auto cmd = _buffer.substr(0, bytes);
        _buffer.erase(0, bytes);

        // got first response, send off reply
        std::cout << "Handling command " << quoted(cmd) << std::endl;
        if (cmd == "response\n") {
            async_write(_sock, asio::buffer("command2\n"sv), [](error_code ec, size_t) {
                std::cout << "write2: " << ec.message() << std::endl;
            });
        } else {
            // TODO parse cmd
        }

        get_response(); // attempt next read
    }
};

int main(int argc, char** argv) {
    assert(argc == 3);
    while (1) {
        Client(argv[1], argv[2]).run();
        std::this_thread::sleep_for(1s); // for demo on COLIRU
        std::cout << "try reconnect" << std::endl;
    }
}
With output live on coliru:
on_connected: Connection refused
try reconnect
on_connected: Success
write: Success
command1
handle_read: Success 4
Handling command "one
"
handle_read: Success 9
Handling command "response
"
write2: Success
command2
handle_read: Success 6
Handling command "three
"
handle_read: End of file 0
try reconnect
on_connected: Success
write: Success
command1
Sidenote: as long as resolve() isn't happening asynchronously it will not be subject to the timeouts.
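If you do want resolution covered by the same budget, a hedged sketch of an asynchronous variant inside the Client above (the resolver must outlive the operation, so it becomes a member; otherwise this mirrors the listing):

tcp::resolver _resolver{_ios}; // member: must stay alive until the handler runs

void start(std::string const& ip, std::string const& port) {
    _resolver.async_resolve(ip, port,
        [this](error_code ec, tcp::resolver::results_type eps) {
            if (!ec)
                async_connect(_sock, eps, HANDLE(on_connected));
        });
}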
I'm new to the boost::asio and boost::process libraries, and I've come across a problem which I'm struggling to find a solution for...
Consider that I have a small toy program that does the following:
Firstly, fork()s itself into a parent-branch and a child-branch.
The child-branch then uses the boost::process::child class to invoke the unix command ls in an asynchronous context.
The child-branch supplies the boost::process::child class with a boost::process::async_pipe to direct std_out to.
The parent-branch wishes to read what has been written to the pipe, line by line, and process it further.
Currently, my implementation of this works up to a point. However, the read_loop() call in the parent-branch does not terminate. It is almost as if it never reaches EOF, or is blocked. Why is this?
Here is my MWE:
#include <boost/process.hpp>
#include <boost/asio.hpp>
#include <iostream>
#include <string>
#include <unistd.h>

void read_loop(boost::process::async_pipe& pipe)
{
    static boost::asio::streambuf buffer;
    boost::asio::async_read_until(
        pipe,
        buffer,
        '\n',
        [&](boost::system::error_code error_code, std::size_t bytes) {
            if (!error_code) {
                std::istream is(&buffer);
                if (std::string line; std::getline(is, line)) {
                    std::cout << "Read Line: " << line << "\n";
                }
                read_loop(pipe);
            }
            else {
                std::cout << "Error in read_loop()!\n";
                pipe.close();
            }
        }
    );
}

int main(int argc, char* argv[])
{
    boost::asio::io_context io_context{};
    boost::process::async_pipe pipe{ io_context };

    io_context.notify_fork(boost::asio::io_context::fork_prepare);
    pid_t pid{ fork() };

    if (pid == 0) {
        io_context.notify_fork(boost::asio::io_context::fork_child);
        boost::process::child child(
            boost::process::args({ "/usr/bin/ls", "/etc/" }),
            boost::process::std_out > pipe,
            boost::process::on_exit([&](int exit, std::error_code error_code) {
                std::cout << "[Exited with code " << exit << " (" << error_code.message() << ")]\n";
            }),
            io_context
        );
        io_context.run();
    }
    else {
        io_context.notify_fork(boost::asio::io_context::fork_parent);
        read_loop(pipe);
        io_context.run();
    }

    return 0;
}
Which will successfully give the (abridged) output, as expected:
Read Line: adduser.conf
...
[Exited with code 0 (Success)]
...
Read Line: zsh_command_not_found
but will then just hang until it is forcibly killed.
Which leaves the main question, why does my read_loop() function end up blocking/not exiting correctly?
Thanks in advance!
Chasing The Symptom
The process not "seeing" EOF makes me think you have to close either end of the pipe. This is somewhat hacky, but works:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <iostream>

namespace bp = boost::process;

void read_loop(bp::async_pipe& pipe) {
    static boost::asio::streambuf buffer;
    using boost::system::error_code;

    async_read_until( //
        pipe, buffer, '\n', [&](error_code ec, [[maybe_unused]] size_t bytes) {
            // std::cout << "Handler " << ec.message() << " bytes:" << bytes << " ("
            //           << buffer.size() << ")" << std::endl;
            if (!ec) {
                std::istream is(&buffer);
                if (std::string line; std::getline(is, line)) {
                    std::cout << "Read Line: " << line << "\n";
                }
                read_loop(pipe);
            } else {
                std::cout << "Loop exit (" << ec.message() << ")" << std::endl;
                pipe.close();
            }
        });
}

int main() {
    boost::asio::io_context ioc{};
    bp::async_pipe pipe{ioc};

    ioc.notify_fork(boost::asio::io_context::fork_prepare);
    pid_t pid{fork()};

    if (pid == 0) {
        ioc.notify_fork(boost::asio::io_context::fork_child);
        bp::child child( //
            bp::args({"/usr/bin/ls", "/etc/"}), bp::std_out > pipe, bp::std_in.close(),
            bp::on_exit([&](int exit, std::error_code ec) {
                std::cout << "[Exited with code " << exit << " (" << ec.message() << ")]\n";
                pipe.close();
            }),
            ioc);
        ioc.run();
    } else {
        ioc.notify_fork(boost::asio::io_context::fork_parent);
        std::move(pipe).sink().close();
        read_loop(pipe);
        ioc.run();
    }
}
Side note: I guess it would be nice to have a more unhacky way to specify this, like (bp::std_in < pipe).close() or so.
Fixing The Root Cause
When using Boost Process, the fork is completely redundant. Boost Process literally does the fork for you, complete with correct service notification and file descriptor handling.
You'll find the code becomes a lot simpler and also handles the closing correctly (likely because some assumptions within Boost Process's implementation details now hold):
Live On Coliru
#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <iostream>

namespace bp = boost::process;

void read_loop(bp::async_pipe& pipe) {
    static boost::asio::streambuf buffer;
    static std::string line; // re-used because we can

    async_read_until( //
        pipe, buffer, '\n',
        [&](boost::system::error_code ec, size_t /*bytes*/) {
            if (ec) {
                std::cout << "Loop exit (" << ec.message() << ")" << std::endl;
                return;
            }
            if (getline(std::istream(&buffer), line))
                std::cout << "Read Line: " << line << "\n";
            read_loop(pipe);
        });
}

int main() {
    boost::asio::io_context ioc{};
    bp::async_pipe pipe{ioc};

    bp::child child( //
        bp::args({"/bin/ls", "/etc/"}), bp::std_out > pipe,
        bp::on_exit([&](int exit, std::error_code ec) {
            std::cout << "[Exited with " << exit << " (" << ec.message()
                      << ")]\n";
        }));

    read_loop(pipe);
    ioc.run();
}
I have to handle information from 100 ports in parallel for 100ms per second.
I am using Ubuntu OS.
I did some research and saw that the poll() function is a good candidate for avoiding the opening of 100 threads to handle data coming in over UDP in parallel.
I did the main part with Boost, and I tried to integrate poll() with Boost.
The problem is that when I try to send data from the client to the server, I receive nothing.
According to Wireshark, the data arrives at the right host (localhost, port 1234).
Did I miss something, or did I get something wrong?
The test code (server):
#include <array>
#include <deque>
#include <iostream>
#include <chrono>
#include <thread>
#include <fcntl.h>
#include <unistd.h>
#include <sys/poll.h>
#include <boost/optional.hpp>
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>

using boost::asio::ip::udp;
using namespace boost::asio;
using namespace std::chrono_literals;

std::string ip_address = "127.0.0.1";

template<typename T, size_t N>
size_t arraySize( T(&)[N] )
{
    return(N);
}

class UdpReceiver
{
    using Resolver = udp::resolver;
    using Sockets  = std::deque<udp::socket>;
    using EndPoint = udp::endpoint;
    using Buffer   = std::array<char, 100>; // receiver buffer

  public:
    explicit UdpReceiver()
        : work_(std::ref(resolver_context)), thread_( [this]{ resolver_context.run(); })
    { }

    ~UdpReceiver()
    {
        work_ = boost::none; // using work to keep run active always !
        thread_.join();
    }

    void async_resolve(udp::resolver::query const& query_) {
        resolver_context.post([this, query_] { do_resolve(query_); });
    }

    // callback for event-loop in main thread
    void run_handler(int fd_idx) {
        // start reading
        auto result = read(fd_idx, receive_buf.data(), sizeof(Buffer));

        // increment number of received packets
        received_packets = received_packets + 1;
        std::cout << "Received bytes " << result << " current recorded packets " << received_packets <<'\n';

        // run handler posted from resolver threads
        handler_context.poll();
        handler_context.reset();
    }

    static void handle_receive(boost::system::error_code error, udp::resolver::iterator const& iterator) {
        std::cout << "handle_resolve:\n"
                     "  " << error.message() << "\n";
        if (!error)
            std::cout << "  " << iterator->endpoint() << "\n";
    }

    // get current file descriptor
    int fd(size_t idx)
    {
        return sockets[idx].native_handle();
    }

  private:
    void do_resolve(boost::asio::ip::udp::resolver::query const& query_) {
        boost::system::error_code error;
        Resolver resolver(resolver_context);
        Resolver::iterator result = resolver.resolve(query_, error);
        sockets.emplace_back(udp::socket(resolver_context, result->endpoint()));

        // post handler callback to service running in main thread
        resolver_context.post(boost::bind(&UdpReceiver::handle_receive, error, result));
    }

  private:
    Sockets sockets;
    size_t received_packets = 0;
    EndPoint remote_receiver;
    Buffer receive_buf {};

    io_context resolver_context;
    io_context handler_context;
    boost::optional<boost::asio::io_context::work> work_;
    std::thread thread_;
};

int main (int argc, char** argv)
{
    UdpReceiver udpReceiver;
    udpReceiver.async_resolve(udp::resolver::query(ip_address, std::to_string(1234)));

    //logic
    pollfd fds[2] { };
    for(int i = 0; i < arraySize(fds); ++i)
    {
        fds[i].fd = udpReceiver.fd(0);
        fds[i].events = 0;
        fds[i].events |= POLLIN;
        fcntl(fds[i].fd, F_SETFL, O_NONBLOCK);
    }

    // simple event-loop
    while (true) {
        if (poll(fds, arraySize(fds), -1)) // waiting for wakeup call. Timeout - inf
        {
            for(auto &fd : fds)
            {
                if(fd.revents & POLLIN) // checking if we have something to read
                {
                    fd.revents = 0; // reset kernel message
                    udpReceiver.run_handler(fd.fd); // call resolve handler. Do read !
                }
            }
        }
    }
    return 0;
}
This looks like a confused mix of C-style poll code and Asio code. The point is:
you don't need poll(); Asio does it internally (using epoll/select/kqueue/IOCP, whatever is available)
UDP is connectionless, so you don't need more than one socket to receive from all "connections" (senders)
I'd replace it all with a single udp::socket on a single thread. You don't even have to manage the thread/work:
net::thread_pool io(1); // single threaded
udp::socket s{io, {{}, 1234}};
Let's run an asynchronous receive loop for 5s:
std::array<char, 100> receive_buffer;
udp::endpoint sender;

std::function<void(error_code, size_t)> read_loop;
read_loop = [&](error_code ec, size_t bytes) {
    if (bytes != size_t(-1)) {
        //std::cout << "read_loop (" << ec.message() << ")\n";
        if (ec)
            return;
        received_packets += 1;
        unique_senders.insert(sender);
        //std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
        //std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
    }
    s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
};

read_loop(error_code{}, -1); // prime the async pump

// after 5s stop
std::this_thread::sleep_for(5s);
post(io, [&s] { s.cancel(); });
io.join();
At the end, we can report the statistics:
std::cout << "A total of " << received_packets << " were received from "
          << unique_senders.size() << " unique senders\n";
With a simulated load in bash:
function client() { while read a; do echo "$a" > /dev/udp/localhost/1234 ; done < /etc/dictionaries-common/words; }
for a in {1..20}; do client& done; time wait
We get:
A total of 294808 were received from 28215 unique senders
real 0m5,007s
user 0m0,801s
sys 0m0,830s
This is obviously not optimized; the bottleneck here is likely the many, many bash subshells being launched for the clients.
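For comparison, a hypothetical load generator written against Asio itself (not part of the original test; it assumes the same server listening on localhost:1234):

#include <boost/asio.hpp>

int main() {
    using boost::asio::ip::udp;
    boost::asio::io_context io;

    udp::socket s(io, udp::endpoint{udp::v4(), 0}); // bind an ephemeral local port
    udp::endpoint server(boost::asio::ip::make_address("127.0.0.1"), 1234);

    for (int i = 0; i < 100000; ++i) // fire-and-forget datagrams
        s.send_to(boost::asio::buffer("hello\n", 6), server);
}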
Full Listing
#include <boost/asio.hpp>
#include <array>
#include <functional>
#include <iostream>
#include <set>

namespace net = boost::asio;
using boost::asio::ip::udp;
using boost::system::error_code;
using namespace std::chrono_literals;

int main ()
{
    net::thread_pool io(1); // single threaded
    udp::socket s{io, {{}, 1234}};

    std::set<udp::endpoint> unique_senders;
    size_t received_packets = 0;

    {
        std::array<char, 100> receive_buffer;
        udp::endpoint sender;

        std::function<void(error_code, size_t)> read_loop;
        read_loop = [&](error_code ec, size_t bytes) {
            if (bytes != size_t(-1)) {
                //std::cout << "read_loop (" << ec.message() << ")\n";
                if (ec)
                    return;
                received_packets += 1;
                unique_senders.insert(sender);
                //std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
                //std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
            }
            s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
        };

        read_loop(error_code{}, -1); // prime the async pump

        // after 5s stop
        std::this_thread::sleep_for(5s);
        post(io, [&s] { s.cancel(); });
        io.join();
    }

    std::cout << "A total of " << received_packets << " were received from "
              << unique_senders.size() << " unique senders\n";
}
I'm working on a project that involves a boost::beast websocket/http mixed server, which runs on top of boost::asio. I've heavily based my project off the advanced_server.cpp example source.
It works fine, but right now I'm attempting to add a feature that requires the sending of a message to all connected clients.
I'm not very familiar with boost::asio, but right now I can't see any way to have something like "broadcast" events (if that's even the correct term).
My naive approach would be to see if I can have the construction of websocket_session() attach something like an event listener, and the destructor detach the listener. At that point, I could just fire the event, and have all the currently valid websocket sessions (to which the lifetime of websocket_session() is scoped) execute a callback.
There is https://stackoverflow.com/a/17029022/268006, which does more or less what I want by (ab)using a boost::asio::steady_timer, but that seems like a kind of horrible hack to accomplish something that should be pretty straightforward.
Basically, given a stateful boost::asio server, how can I do an operation on multiple connections?
First off: You can broadcast UDP, but that's not to connected clients. That's just... UDP.
Secondly, that link shows how to have a condition-variable (event)-like interface in Asio. That's only a tiny part of your problem. You forgot about the big picture: you need to know about the set of open connections, one way or the other:
e.g. keeping a container of session pointers (weak_ptr) to each connection
each connection subscribing to a signal slot (e.g. Boost Signals).
Option 1. is great for performance, option 2. is better for flexibility (decoupling the event source from subscribers, making it possible to have heterogenous subscribers, e.g. not from connections).
Because I think Option 1. is much simpler w.r.t. threading, better w.r.t. efficiency (you can e.g. serve all clients from one buffer without copying) and you probably don't need to doubly decouple the signal/slots, let me refer to an answer where I already showed as much for pure Asio (without Beast):
How to design proper release of a boost::asio socket or wrapper thereof
It shows the concept of a "connection pool" - which is essentially a thread-safe container of weak_ptr<connection> objects with some garbage collection logic.
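In other words, something along these lines; this is only a sketch, the linked answer has the real thing (Connection stands for whatever session type you use):

#include <memory>
#include <mutex>
#include <vector>

template <typename Connection>
class connection_pool {
  public:
    void add(std::shared_ptr<Connection> const& c) {
        std::lock_guard<std::mutex> lk(_mx);
        _items.push_back(c); // stored as weak_ptr: the pool never keeps sessions alive
    }

    // applies f to every still-alive connection; expired entries are collected
    template <typename F>
    size_t for_each_active(F f) {
        std::vector<std::shared_ptr<Connection>> active;
        {
            std::lock_guard<std::mutex> lk(_mx);
            for (auto it = _items.begin(); it != _items.end();) {
                if (auto c = it->lock()) { active.push_back(c); ++it; }
                else it = _items.erase(it); // garbage collection of dead sessions
            }
        }
        for (auto& c : active) f(*c); // invoke outside the lock
        return active.size();
    }

  private:
    std::mutex _mx;
    std::vector<std::weak_ptr<Connection>> _items;
};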
Demonstration: Introducing Echo Server
After chatting about things I wanted to take the time to actually demonstrate the two approaches, so it's completely clear what I'm talking about.
First let's present a simple, run-of-the-mill asynchronous TCP server with:
multiple concurrent connections
each connected session reads from the client line-by-line, and echoes the same back to the client
stops accepting after 3 seconds, and exits after the last client disconnects
master branch on github
#include <boost/asio.hpp>
#include <cassert>
#include <memory>
#include <list>
#include <iostream>
#include <thread>

namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;

static bool s_verbose = false;

struct connection : std::enable_shared_from_this<connection> {
    connection(ba::io_context& ioc) : _s(ioc) {}

    void start() { read_loop(); }
    void send(std::string msg, bool at_front = false) {
        post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
            if (enqueue(std::move(msg), at_front))
                write_loop();
        });
    }

  private:
    void do_echo() {
        std::string line;
        if (getline(std::istream(&_rx), line)) {
            send(std::move(line) + '\n');
        }
    }

    bool enqueue(std::string msg, bool at_front)
    { // returns true if need to start write loop
        at_front &= !_tx.empty(); // no difference
        if (at_front)
            _tx.insert(std::next(begin(_tx)), std::move(msg));
        else
            _tx.push_back(std::move(msg));

        return (_tx.size() == 1);
    }

    bool dequeue()
    { // returns true if more messages pending after dequeue
        assert(!_tx.empty());
        _tx.pop_front();
        return !_tx.empty();
    }

    void write_loop() {
        ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
            if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
            if (!ec && dequeue()) write_loop();
        });
    }

    void read_loop() {
        ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
            if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
            do_echo();
            if (!ec)
                read_loop();
        });
    }

    friend struct server;
    ba::streambuf _rx;
    std::list<std::string> _tx;
    tcp::socket _s;
};

struct server {
    server(ba::io_context& ioc) : _ioc(ioc) {
        _acc.bind({{}, 6767});
        _acc.set_option(tcp::acceptor::reuse_address());
        _acc.listen();
        accept_loop();
    }

    void stop() {
        _ioc.post([=] {
            _acc.cancel();
            _acc.close();
        });
    }

  private:
    void accept_loop() {
        auto session = std::make_shared<connection>(_acc.get_io_context());
        _acc.async_accept(session->_s, [this,session](error_code ec) {
            auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
            std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;

            session->start();
            if (!ec)
                accept_loop();
        });
    }

    ba::io_context& _ioc;
    tcp::acceptor _acc{_ioc, tcp::v4()};
};

int main(int argc, char** argv) {
    s_verbose = argc>1 && argv[1] == "-v"s;

    ba::io_context ioc;

    server s(ioc);
    std::thread th([&ioc] { ioc.run(); }); // todo exception handling

    std::this_thread::sleep_for(3s);
    s.stop(); // active connections will continue

    th.join();
}
Approach 1. Adding Broadcast Messages
So, let's add "broadcast messages" that get sent to all active connections simultaneously. We add two:
one at each new connection (saying "Player ## has entered the game")
one that emulates a global "server event", like you described in the question. It gets triggered from within main:
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
Note how we do this by registering a weak pointer to each accepted connection and operating on each:
_acc.async_accept(session->_s, [this,session](error_code ec) {
    auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
    std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;

    if (!ec) {
        auto n = reg_connection(session);

        session->start();
        accept_loop();

        broadcast("player #" + std::to_string(n) + " has entered the game\n");
    }
});
broadcast is also used directly from main and is simply:
size_t broadcast(std::string const& msg) {
    return for_each_active([msg](connection& c) { c.send(msg, true); });
}
using-asio-post branch on github
#include <boost/asio.hpp>
#include <cassert>
#include <memory>
#include <list>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;

static bool s_verbose = false;

struct connection : std::enable_shared_from_this<connection> {
    connection(ba::io_context& ioc) : _s(ioc) {}

    void start() { read_loop(); }
    void send(std::string msg, bool at_front = false) {
        post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
            if (enqueue(std::move(msg), at_front))
                write_loop();
        });
    }

  private:
    void do_echo() {
        std::string line;
        if (getline(std::istream(&_rx), line)) {
            send(std::move(line) + '\n');
        }
    }

    bool enqueue(std::string msg, bool at_front)
    { // returns true if need to start write loop
        at_front &= !_tx.empty(); // no difference
        if (at_front)
            _tx.insert(std::next(begin(_tx)), std::move(msg));
        else
            _tx.push_back(std::move(msg));

        return (_tx.size() == 1);
    }

    bool dequeue()
    { // returns true if more messages pending after dequeue
        assert(!_tx.empty());
        _tx.pop_front();
        return !_tx.empty();
    }

    void write_loop() {
        ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
            if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
            if (!ec && dequeue()) write_loop();
        });
    }

    void read_loop() {
        ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
            if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
            do_echo();
            if (!ec)
                read_loop();
        });
    }

    friend struct server;
    ba::streambuf _rx;
    std::list<std::string> _tx;
    tcp::socket _s;
};

struct server {
    server(ba::io_context& ioc) : _ioc(ioc) {
        _acc.bind({{}, 6767});
        _acc.set_option(tcp::acceptor::reuse_address());
        _acc.listen();
        accept_loop();
    }

    void stop() {
        _ioc.post([=] {
            _acc.cancel();
            _acc.close();
        });
    }

    size_t broadcast(std::string const& msg) {
        return for_each_active([msg](connection& c) { c.send(msg, true); });
    }

  private:
    using connptr = std::shared_ptr<connection>;
    using weakptr = std::weak_ptr<connection>;

    std::mutex _mx;
    std::vector<weakptr> _registered;

    size_t reg_connection(weakptr wp) {
        std::lock_guard<std::mutex> lk(_mx);
        _registered.push_back(wp);

        return _registered.size();
    }

    template <typename F>
    size_t for_each_active(F f) {
        std::vector<connptr> active;
        {
            std::lock_guard<std::mutex> lk(_mx);
            for (auto& w : _registered)
                if (auto c = w.lock())
                    active.push_back(c);
        }

        for (auto& c : active) {
            std::cout << "(running action for " << c->_s.remote_endpoint() << ")" << std::endl;
            f(*c);
        }

        return active.size();
    }

    void accept_loop() {
        auto session = std::make_shared<connection>(_acc.get_io_context());
        _acc.async_accept(session->_s, [this,session](error_code ec) {
            auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
            std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;

            if (!ec) {
                auto n = reg_connection(session);

                session->start();
                accept_loop();

                broadcast("player #" + std::to_string(n) + " has entered the game\n");
            }
        });
    }

    ba::io_context& _ioc;
    tcp::acceptor _acc{_ioc, tcp::v4()};
};

int main(int argc, char** argv) {
    s_verbose = argc>1 && argv[1] == "-v"s;

    ba::io_context ioc;

    server s(ioc);
    std::thread th([&ioc] { ioc.run(); }); // todo exception handling

    std::this_thread::sleep_for(1s);

    auto n = s.broadcast("random global event broadcast\n");
    std::cout << "Global event broadcast reached " << n << " active connections\n";

    std::this_thread::sleep_for(2s);
    s.stop(); // active connections will continue

    th.join();
}
Approach 2. The Same Broadcasts, But With Boost Signals2
The Signals approach is a fine example of Dependency Inversion.
Most salient notes:
signal slots get invoked on the thread invoking it ("raising the event")
the scoped_connection is there so subscriptions are automatically removed when the connection is destructed
there's a subtle difference in the wording of the console message from "reached # active connections" to "reached # active subscribers".
The difference is key to understanding the added flexibility: the signal owner/invoker does not know anything about the subscribers. That's the decoupling/dependency inversion we're talking about.
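For readers new to Signals2, a tiny standalone illustration of the scoped_connection behaviour relied on here:

#include <boost/signals2.hpp>
#include <iostream>
#include <string>

int main() {
    boost::signals2::signal<void(std::string const&)> event;
    {
        boost::signals2::scoped_connection sub = event.connect(
            [](std::string const& m) { std::cout << "got: " << m << "\n"; });
        event("one"); // delivered
    } // sub goes out of scope: the slot is automatically disconnected
    event("two"); // no subscribers left, nothing printed
    std::cout << event.num_slots() << "\n"; // prints 0
}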
using-signals2 branch on github
#include <boost/asio.hpp>
#include <cassert>
#include <memory>
#include <list>
#include <iostream>
#include <thread>
#include <boost/signals2.hpp>

namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;

static bool s_verbose = false;

struct connection : std::enable_shared_from_this<connection> {
    connection(ba::io_context& ioc) : _s(ioc) {}

    void start() { read_loop(); }
    void send(std::string msg, bool at_front = false) {
        post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
            if (enqueue(std::move(msg), at_front))
                write_loop();
        });
    }

  private:
    void do_echo() {
        std::string line;
        if (getline(std::istream(&_rx), line)) {
            send(std::move(line) + '\n');
        }
    }

    bool enqueue(std::string msg, bool at_front)
    { // returns true if need to start write loop
        at_front &= !_tx.empty(); // no difference
        if (at_front)
            _tx.insert(std::next(begin(_tx)), std::move(msg));
        else
            _tx.push_back(std::move(msg));

        return (_tx.size() == 1);
    }

    bool dequeue()
    { // returns true if more messages pending after dequeue
        assert(!_tx.empty());
        _tx.pop_front();
        return !_tx.empty();
    }

    void write_loop() {
        ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
            if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
            if (!ec && dequeue()) write_loop();
        });
    }

    void read_loop() {
        ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
            if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
            do_echo();
            if (!ec)
                read_loop();
        });
    }

    friend struct server;
    ba::streambuf _rx;
    std::list<std::string> _tx;
    tcp::socket _s;

    boost::signals2::scoped_connection _subscription;
};

struct server {
    server(ba::io_context& ioc) : _ioc(ioc) {
        _acc.bind({{}, 6767});
        _acc.set_option(tcp::acceptor::reuse_address());
        _acc.listen();
        accept_loop();
    }

    void stop() {
        _ioc.post([=] {
            _acc.cancel();
            _acc.close();
        });
    }

    size_t broadcast(std::string const& msg) {
        _broadcast_event(msg);
        return _broadcast_event.num_slots();
    }

  private:
    boost::signals2::signal<void(std::string const& msg)> _broadcast_event;

    size_t reg_connection(connection& c) {
        c._subscription = _broadcast_event.connect(
            [&c](std::string msg){ c.send(msg, true); }
        );

        return _broadcast_event.num_slots();
    }

    void accept_loop() {
        auto session = std::make_shared<connection>(_acc.get_io_context());
        _acc.async_accept(session->_s, [this,session](error_code ec) {
            auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
            std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;

            if (!ec) {
                auto n = reg_connection(*session);

                session->start();
                accept_loop();

                broadcast("player #" + std::to_string(n) + " has entered the game\n");
            }
        });
    }

    ba::io_context& _ioc;
    tcp::acceptor _acc{_ioc, tcp::v4()};
};

int main(int argc, char** argv) {
    s_verbose = argc>1 && argv[1] == "-v"s;

    ba::io_context ioc;

    server s(ioc);
    std::thread th([&ioc] { ioc.run(); }); // todo exception handling

    std::this_thread::sleep_for(1s);

    auto n = s.broadcast("random global event broadcast\n");
    std::cout << "Global event broadcast reached " << n << " active subscribers\n";

    std::this_thread::sleep_for(2s);
    s.stop(); // active connections will continue

    th.join();
}
See the diff between Approach 1. and 2.: Compare View on github
A sample of the output when run against 3 concurrent clients with:
(for a in {1..3}; do netcat localhost 6767 < /etc/dictionaries-common/words > echoed.$a& sleep .1; done; time wait)
The answer from @sehe was amazing, so I'll be brief. Generally speaking, to implement an algorithm which operates on all active connections you must do the following:
Maintain a list of active connections. If this list is accessed by multiple threads, it will need synchronization (std::mutex). New connections should be inserted to the list, and when a connection is destroyed or becomes inactive it should be removed from the list.
To iterate the list, synchronization is required if the list is accessed by multiple threads (i.e. more than one thread calling asio::io_context::run, or if the list is also accessed from threads that are not calling asio::io_context::run)
During iteration, if the algorithm needs to inspect or modify the state of any connection, and that state can be changed by other threads, additional synchronization is needed. This includes any internal "queue" of messages that the connection object stores.
A simple way to synchronize a connection object is to use boost::asio::post to submit a function for execution on the connection object's context, which will be either an explicit strand (boost::asio::strand, as in the advanced server examples) or an implicit strand (what you get when only one thread calls io_context::run). Approach 1 provided by @sehe uses post to synchronize in this fashion; a minimal sketch follows this list.
Another way to synchronize the connection object is to "stop the world." That means call io_context::stop, wait for all the threads to exit, and then you are guaranteed that no other threads are accessing the list of connections. Then you can read and write connection object state all you want. When you are finished with the list of connections, call io_context::restart and launch the threads which call io_context::run again. Stopping the io_context does not stop network activity; the kernel and network drivers still send and receive data from internal buffers. TCP/IP flow control will take care of things, so the application still operates smoothly even though it becomes briefly unresponsive during the "stop the world." This approach can simplify things, but depending on your particular application you will have to evaluate if it is right for you.
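A minimal sketch of the post-to-a-strand idea from the list above (the names are hypothetical, not taken from the advanced_server example):

#include <boost/asio.hpp>
#include <iostream>
#include <memory>

namespace asio = boost::asio;

// hypothetical minimal connection that owns a strand
struct connection : std::enable_shared_from_this<connection> {
    explicit connection(asio::io_context& io) : strand_(io.get_executor()) {}
    asio::strand<asio::io_context::executor_type> strand_;
    int state = 0;
};

int main() {
    asio::io_context io;
    auto c = std::make_shared<connection>(io);

    // All mutations of c->state go through c->strand_, so they stay serialized
    // even if io.run() were called from many threads.
    for (int i = 0; i < 10; ++i)
        asio::post(c->strand_, [c] { ++c->state; });

    io.run();
    std::cout << c->state << "\n"; // 10
}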
Hope this helps!
Thank you @sehe for the amazing answer. Still, I think there is a small but severe bug in Approach 2. IMHO reg_connection should look like this:
size_t reg_connection(std::shared_ptr<connection> c) {
    c->_subscription = _broadcast_event.connect(
        [weak_c = std::weak_ptr<connection>(c)](std::string msg){
            if (auto c = weak_c.lock())
                c->send(msg, true);
        }
    );

    return _broadcast_event.num_slots();
}
Otherwise you can end up with a race condition leading to a server crash: if the connection instance is destroyed during the call to the lambda, the captured reference becomes invalid.
Similarly connection#send() should look like this, because otherwise this might be dead by the time the lambda is called:
void send(std::string msg, bool at_front = false) {
    post(_s.get_io_service(),
         [self=shared_from_this(), msg=std::move(msg), at_front] {
             if (self->enqueue(std::move(msg), at_front))
                 self->write_loop();
         });
}
PS: I would have posted this as a comment on #sehe's answer, but unfortunately I have not enough reputation.
I translated the example from Programming in Lua by Roberto Ierusalimschy for downloading several files via HTTP using coroutines to C++ using boost::asio and stackful coroutines. Here is the code:
#include <iostream>
#include <chrono>
#include <sstream>
#include <string>
#include <utility>
#include <vector>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>

using namespace std;
using namespace boost::asio;

io_service ioService;

void download(const string& host, const string& file, yield_context& yield)
{
    clog << "Downloading " << host << file << " ..." << endl;
    size_t fileSize = 0;
    boost::system::error_code ec;

    ip::tcp::resolver resolver(ioService);
    ip::tcp::resolver::query query(host, "80");
    auto it = resolver.async_resolve(query, yield[ec]);

    ip::tcp::socket socket(ioService);
    socket.async_connect(*it, yield[ec]);

    ostringstream req;
    req << "GET " << file << " HTTP/1.0\r\n\r\n";
    write(socket, buffer(req.str()));

    while (true)
    {
        char data[8192];
        size_t bytesRead = socket.async_read_some(buffer(data), yield[ec]);
        if (0 == bytesRead) break;
        fileSize += bytesRead;
    }

    socket.shutdown(ip::tcp::socket::shutdown_both);
    socket.close();

    clog << file << " size: " << fileSize << endl;
}

int main()
{
    auto timeBegin = chrono::high_resolution_clock::now();

    vector<pair<string, string>> resources =
    {
        {"www.w3.org", "/TR/html401/html40.txt"},
        {"www.w3.org", "/TR/2002/REC-xhtml1-20020801/xhtml1.pdf"},
        {"www.w3.org", "/TR/REC-html32.html"},
        {"www.w3.org", "/TR/2000/REC-DOM-Level-2-Core-20001113/DOM2-Core.txt"},
    };

    for(const auto& res : resources)
    {
        spawn(ioService, [&res](yield_context yield)
        {
            download(res.first, res.second, yield);
        });
    }

    ioService.run();

    auto timeEnd = chrono::high_resolution_clock::now();
    clog << "Time: " << chrono::duration_cast<chrono::milliseconds>(
        timeEnd - timeBegin).count() << endl;

    return 0;
}
Now I'm trying to translate the code to use stackless coroutines from boost::asio, but the documentation is not enough for me to grok how to organize the code in such a way as to do it. Can someone provide a solution for this?
Here is a solution based on stackless coroutines as provided by Boost. Given that they are essentially a hack, I would not consider the solution particularly elegant. It could probably be done better with C++20, but I think that would be outside the scope of this question.
#include <algorithm>
#include <chrono>
#include <functional>
#include <iostream>
#include <memory>
#include <sstream>
#include <string>
#include <vector>
#include <boost/asio.hpp>
#include <boost/asio/yield.hpp>

using boost::asio::async_write;
using boost::asio::buffer;
using boost::asio::error::eof;
using boost::system::error_code;
using std::placeholders::_1;
using std::placeholders::_2;

/**
 * Stackless coroutine for downloading file from host.
 *
 * The lifetime of the object is limited to one () call. After that,
 * the object will be copied and the old object is discarded. For this
 * reason, the socket_ and resolver_ members are stored as shared_ptrs,
 * so that they can live as long as there is a live copy. An alternative
 * solution would be to manage these objects outside of the coroutine
 * and to pass them here by reference.
 */
class downloader : boost::asio::coroutine {
    using socket_t = boost::asio::ip::tcp::socket;
    using resolver_t = boost::asio::ip::tcp::resolver;

  public:
    downloader(boost::asio::io_service &service, const std::string &host,
               const std::string &file)
        : socket_{std::make_shared<socket_t>(service)},
          resolver_{std::make_shared<resolver_t>(service)}, file_{file},
          host_{host} {}

    void operator()(error_code ec = error_code(), std::size_t length = 0,
                    const resolver_t::results_type &results = {}) {
        // Check if the last yield resulted in an error.
        if (ec) {
            if (ec != eof) {
                throw boost::system::system_error{ec};
            }
        }

        // Jump to after the previous yield.
        reenter(this) {
            yield {
                resolver_t::query query{host_, "80"};
                // Use bind to skip the length parameter not provided by async_resolve
                auto result_func = std::bind(&downloader::operator(), this, _1, 0, _2);
                resolver_->async_resolve(query, result_func);
            }
            yield socket_->async_connect(*results, *this);
            yield {
                std::ostringstream req;
                req << "GET " << file_ << " HTTP/1.0\r\n\r\n";
                async_write(*socket_, buffer(req.str()), *this);
            }
            while (true) {
                yield {
                    char data[8192];
                    socket_->async_read_some(buffer(data), *this);
                }
                if (length == 0) {
                    break;
                }
                fileSize_ += length;
            }
            std::cout << file_ << " size: " << fileSize_ << std::endl;
            socket_->shutdown(socket_t::shutdown_both);
            socket_->close();
        }

        // Uncomment this to show progress and to demonstrate interleaving
        // std::cout << file_ << " size: " << fileSize_ << std::endl;
    }

  private:
    std::shared_ptr<socket_t> socket_;
    std::shared_ptr<resolver_t> resolver_;
    const std::string file_;
    const std::string host_;
    size_t fileSize_{};
};

int main() {
    auto timeBegin = std::chrono::high_resolution_clock::now();

    try {
        boost::asio::io_service service;

        std::vector<std::pair<std::string, std::string>> resources = {
            {"www.w3.org", "/TR/html401/html40.txt"},
            {"www.w3.org", "/TR/2002/REC-xhtml1-20020801/xhtml1.pdf"},
            {"www.w3.org", "/TR/REC-html32.html"},
            {"www.w3.org", "/TR/2000/REC-DOM-Level-2-Core-20001113/DOM2-Core.txt"},
        };

        std::vector<downloader> downloaders{};
        std::transform(resources.begin(), resources.end(),
                       std::back_inserter(downloaders), [&](auto &x) {
                           return downloader{service, x.first, x.second};
                       });
        std::for_each(downloaders.begin(), downloaders.end(),
                      [](auto &dl) { dl(); });

        service.run();
    } catch (std::exception &e) {
        std::cerr << "exception: " << e.what() << "\n";
    }

    auto timeEnd = std::chrono::high_resolution_clock::now();
    std::cout << "Time: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(timeEnd -
                                                                       timeBegin)
                     .count()
              << std::endl;
    return 0;
}
Compiled with Boost 1.72 and g++ -lboost_coroutine -lpthread test.cpp. Example output:
$ ./a.out
/TR/REC-html32.html size: 606
/TR/html401/html40.txt size: 629
/TR/2002/REC-xhtml1-20020801/xhtml1.pdf size: 115777
/TR/2000/REC-DOM-Level-2-Core-20001113/DOM2-Core.txt size: 229699
Time: 1644
The log line at the end of the () function can be uncommented to demonstrate the interleaving of the downloads.
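For completeness, a rough sketch of the C++20 flavour alluded to above (requires Boost 1.70+ and a C++20 compiler; untested here, so treat it as a direction rather than a drop-in replacement):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

namespace asio = boost::asio;
using asio::ip::tcp;

// One coroutine per download; errors surface as exceptions unless redirected.
asio::awaitable<void> download(std::string host, std::string file) {
    auto ex = co_await asio::this_coro::executor;

    tcp::resolver resolver(ex);
    auto eps = co_await resolver.async_resolve(host, "80", asio::use_awaitable);

    tcp::socket socket(ex);
    co_await asio::async_connect(socket, eps, asio::use_awaitable);

    std::string req = "GET " + file + " HTTP/1.0\r\n\r\n";
    co_await asio::async_write(socket, asio::buffer(req), asio::use_awaitable);

    size_t fileSize = 0;
    char data[8192];
    for (boost::system::error_code ec; !ec;) {
        size_t n = co_await socket.async_read_some(
            asio::buffer(data), asio::redirect_error(asio::use_awaitable, ec));
        fileSize += n; // n is 0 on the final (EOF) iteration
    }

    std::cout << file << " size: " << fileSize << std::endl;
}

int main() {
    asio::io_context io;
    asio::co_spawn(io, download("www.w3.org", "/TR/REC-html32.html"), asio::detached);
    io.run();
}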