UDP client pool sending but not receiving - C++

I am creating a UDP client pool. The servers are other applications running on different computers, and they are supposed to be alive from the beginning. Based on a configuration file (not relevant to the problem, so not included in the example), one or several clients are created that connect to those servers (a 1-to-1 relation) bidirectionally, sending and receiving.
Sending can be synchronous because it uses small messages and blocking there is not a problem, but receiving must be asynchronous, because the answer can arrive much later after sending.
In my test with only one socket, it is able to send, but it is not receiving anything at all.
Q1: Where is the problem and how do I fix it?
Q2: I also wonder whether the use of iterators from std::vector in the async calls can be problematic when new connections are pushed into the vector, since it rearranges its elements in memory. Could this be a problem?
Q3: I do not really understand why in all the examples the sender and receiver endpoints (endpoint1 and endpoint2 in the example struct Socket) are different. Couldn't they be the same?
My code is as follows:
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
using boost::asio::ip::udp;
class Pool
{
struct Socket {
std::string id;
udp::socket socket;
udp::endpoint endpoint1;
udp::endpoint endpoint2;
enum { max_length = 1024 };
std::array<char, max_length> data;
};
public:
void create(const std::string& id, const std::string& host, const std::string& port)
{
udp::resolver resolver(io_context);
sockets.emplace_back(Socket{ id, udp::socket{io_context, udp::v4()}, *resolver.resolve(udp::v4(), host, port).begin() });
receive(id);
}
void send(const std::string& id, const std::string& msg)
{
auto it = std::find_if(sockets.begin(), sockets.end(), [&](auto& socket) { return id == socket.id; });
if (it == sockets.end()) return;
it->data = std::array<char, Socket::max_length>{ 'h', 'e', 'l', 'l', 'o' };
auto bytes = it->socket.send_to(boost::asio::buffer(it->data, 5), it->endpoint1);
}
void receive(const std::string& id)
{
auto it = std::find_if(sockets.begin(), sockets.end(), [&](auto& socket) { return id == socket.id; });
if (it == sockets.end()) return;
it->socket.async_receive_from(
boost::asio::buffer(it->data, Socket::max_length),
it->endpoint2,
[this, id](boost::system::error_code error, std::size_t bytes) {
if (!error && bytes)
bool ok = true;//Call to whatever function
receive(id);
}
);
}
void poll()
{
io_context.poll();
}
private:
boost::asio::io_context io_context;
std::vector<Socket> sockets;
};
int main()
{
Pool clients;
clients.create("ID", "localhost", "55000");
while (true) {
clients.poll();
clients.send("ID", "x");
Sleep(5000);
}
}

Q1: Where is the problem and how do I fix it?
You don't really bind to any port, and then you have multiple sockets all receiving unbound UDP packets. Likely they're simply competing and something gets lost in the confusion.
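To make that concrete, here is a minimal standalone sketch (the port numbers are made up, and a blocking receive is used for brevity) of binding the socket to a fixed local port so replies have a known destination:
#include <boost/asio.hpp>
#include <array>
#include <iostream>
using boost::asio::ip::udp;
int main() {
    boost::asio::io_context io;
    // Bind to a fixed local port (55001 here); the server can reply to this port
    // and the datagram will reach this socket. An unbound socket only gets an
    // ephemeral port assigned when the first send happens.
    udp::socket sock(io, udp::endpoint(udp::v4(), 55001));
    udp::endpoint server(boost::asio::ip::make_address("127.0.0.1"), 55000);
    sock.send_to(boost::asio::buffer("hello", 5), server);
    udp::endpoint from; // filled in with the sender's endpoint
    std::array<char, 1024> reply;
    auto n = sock.receive_from(boost::asio::buffer(reply), from); // blocking, for brevity
    std::cout << "got " << n << " bytes from " << from << "\n";
}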
Q2: can std::vector be problematic?
Yes. Use a std::deque (references stay stable as long as you only push/pop at either end, though iterators may still be invalidated). Otherwise, consider a std::list or another node-based container.
In your case a map<id, socket> seems more intuitive.
Actually, map<endpoint, peer> would be a lot more intuitive. Or... you could do without the peers entirely.
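A minimal sketch of the map-based alternative (the names are invented, not taken from the question): node-based containers keep each element at a fixed address, so buffers handed to pending async operations don't move when new peers are added:
#include <boost/asio.hpp>
#include <array>
#include <map>
#include <string>
using boost::asio::ip::udp;
struct Peer {
    udp::socket socket;
    udp::endpoint remote;
    std::array<char, 1024> data;
};
int main() {
    boost::asio::io_context io;
    std::map<std::string, Peer> peers; // node-based: inserting never relocates existing Peers
    udp::resolver resolver(io);
    udp::endpoint ep = *resolver.resolve(udp::v4(), "localhost", "55000").begin();
    auto [it, ok] = peers.emplace("ID", Peer{udp::socket{io, udp::v4()}, ep, {}});
    // it->second stays at the same address even after further emplace() calls,
    // so passing buffer(it->second.data) to async_receive_from remains valid.
}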
Q3: I do not really understand why in all the examples the sender and receiver endpoints (endpoint1 and endpoint2 in the example struct Socket) are different. Couldn't they be the same?
Yeah, they could be "the same" if you don't care about overwriting the original endpoint you had sent to.
Here's my simplified take. As others have said, it's not possible/useful to have many UDP sockets "listening" on the same endpoint. That is, provided that you even bound to an endpoint.
So my sample uses a single _socket with local endpoint :8765.
It can connect to many client endpoints - I chose to replace the id string with the endpoint itself for simplicity. Feel free to add a map<string, endpoint> for some translation.
See it Live On Coliru
#include <boost/asio.hpp>
#include <iomanip>
#include <iostream>
#include <set>
using boost::asio::ip::udp;
using namespace std::chrono_literals;
class Pool {
public:
using Message = std::array<char, 1024>;
using Peers = std::set<udp::endpoint>;
using Id = udp::endpoint;
Pool() { receive_loop(); }
Id create(const std::string& host, const std::string& port)
{
auto ep = *udp::resolver(_io).resolve(udp::v4(), host, port).begin();
/*auto [it,ok] =*/_peers.emplace(ep);
return ep;
}
void send(Id id, const std::string& msg)
{
/*auto bytes =*/
_socket.send_to(boost::asio::buffer(msg), id);
}
void receive_loop()
{
_socket.async_receive_from(
boost::asio::buffer(_incoming), _incoming_ep,
[this](boost::system::error_code error, std::size_t bytes) {
if (!error && bytes)
{
if (_peers.contains(_incoming_ep)) {
std::cout << "Received: "
<< std::quoted(std::string_view(
_incoming.data(), bytes))
<< " from " << _incoming_ep << "\n";
} else {
std::cout << "Ignoring message from unknown peer "
<< _incoming_ep << "\n";
}
}
receive_loop();
});
}
void poll() { _io.poll(); }
private:
boost::asio::io_context _io;
udp::socket _socket{_io, udp::endpoint{udp::v4(), 8765}};
Message _incoming;
udp::endpoint _incoming_ep;
Peers _peers;
};
int main(int argc, char** argv) {
Pool pool;
std::vector<Pool::Id> peers;
for (auto port : std::vector(argv + 1, argv + argc)) {
peers.push_back(pool.create("localhost", port));
}
int message_number = 0;
while (peers.size()) {
pool.poll();
auto id = peers.at(rand() % peers.size());
pool.send(id, "Message #" + std::to_string(++message_number) + "\n");
std::this_thread::sleep_for(1s);
}
}
Live on my machine with some remotes simulated like
sort -R /etc/dictionaries-common/words | while read word; do sleep 5; echo "$word"; done | netcat -u -l -p 8787 -w 1000
Also sending bogus messages from an "other" endpoint to simulate stray/unknown messages.

Related

TCP client socket problem while emplacing in vector (solution and/or improved proposal)

The following code contains a TCP client class that should be instantiated one or more times as defined in a configuration file (hard-coded for the example), emplaced into a std::vector, and then connected to its corresponding server socket.
Godbolt link: https://godbolt.org/z/hzK9jhzjc
#include <chrono>
#include <thread>
#include <fstream>
#include <boost/asio.hpp>
namespace tcpsocket
{
using boost::asio::ip::tcp;
class client
{
public:
void connect(const std::string& host, const std::string& port)
{
if (host.empty() || port.empty()) return;
tcp::resolver resolver{ io_context };
tcp::resolver::results_type endpoints = resolver.resolve(host, port);
boost::asio::async_connect(socket, endpoints, [this](const boost::system::error_code& error, const tcp::endpoint /*endpoint*/)
{
if (!error)
read();
});
}
void read()
{
socket.async_read_some(boost::asio::buffer(data, max_length), [this](const boost::system::error_code& error, std::size_t bytes)
{
if (error) return socket.close();
bytes_received = bytes;
read();
});
}
void write(const std::string& msg)
{
boost::system::error_code error;
size_t bytes = socket.write_some(boost::asio::buffer(msg), error);
if (error) return socket.close();
}
void poll()
{
io_context.poll();
}
private:
std::string host;
std::string port;
size_t bytes_received{};
enum { max_length = 512 };
unsigned char data[max_length];
boost::asio::io_context io_context;
tcp::socket socket{io_context};
};
}//namespace tcpsocket
struct Cfg
{
unsigned id{};
std::string host;
std::string port;
};
struct Client
{
unsigned id{};
tcpsocket::client sck;
};
int main()
{
std::vector<Client> clients;
std::vector<Cfg> config{ {125u, "127.0.0.1", "30000"}, {137u, "127.0.0.1", "30001"} };//In real life, this would come from configuration file
for (auto&[id, host, port] : config)
{
//auto& client = clients.push_back(Client{id, {}});//This is failing (typo error with return value detected by Sehe!!!)
auto& client = clients.emplace_back(id, {});//This is failing
client.sck.connect(host, port);
}
while (true)
{
for (auto&[id, client] : clients)
client.poll();
using namespace std::chrono_literals;
std::this_thread::sleep_for(100ms);
}
}
The program does not compile, due to an error about copying the io_context/socket, as far as I understand, but I may be wrong on this point.
How can I fix this? And is there a better alternative to what I am doing? For example, would it be a better approach to keep some pool of TCP sockets inside the client class and use the same io_context for all of them?
push_back doesn't return a value (its return type is void). If you have C++17, emplace_back can be used like this:
auto& client = clients.emplace_back(Client{id, {}});
But vector can reallocate, which necessitates moving or copying all elements. Since client is neither copyable nor movable, that can't work. And that's a good thing, because otherwise the async_ operations would run into UB when the vector reallocated.
Consider deque or list, which afford reference stability (meaning elements don't get relocated as the container grows, or only in limited cases). std::list is the safer of the two here:
std::list<Client> clients;
This gets you somewhere. However I'd note a few things:
it's not efficient to create separate IO services for each client
manually polling them is not typical
you had host and port members that were never used
bytes_received was being overwritten
write_some doesn't guarantee the whole buffer will be written
you're mixing async and sync operations (async_read vs write_some). This is not always a good idea. I think for tcp::socket this will be fine in the given use case, but don't expect IO objects to support this in general
There's no reason to supply the array length for boost::asio::buffer - it will be deduced. Even better to use std::array instead of C style array
I see your #include <thread>; if you intend to run on multiple threads, be aware of strands: Why do I need strand per connection when using boost::asio?
Here's a simplified, fixed version with the above:
Live On Coliru
#include <boost/asio.hpp>
#include <chrono>
#include <fstream>
#include <iostream>
#include <list>
using namespace std::chrono_literals;
namespace tcpsocket {
using boost::asio::ip::tcp;
using boost::system::error_code;
class client {
public:
client(boost::asio::any_io_executor ex) : socket_(ex) {}
size_t bytes_received() const { return bytes_received_; }
void connect(const std::string& host, const std::string& port) {
post(socket_.get_executor(), [=, this] { do_connect(host, port); });
}
void write(std::string msg) {
post(socket_.get_executor(), [=, this] { do_write(msg); });
}
void read() {
post(socket_.get_executor(), [=, this] { do_read(); });
}
private:
void do_connect(const std::string& host, const std::string& port) {
if (host.empty() || port.empty())
return;
tcp::resolver resolver{socket_.get_executor()};
async_connect(socket_, resolver.resolve(host, port),
[this](error_code ec, tcp::endpoint /*endpoint*/) {
if (!ec)
do_read();
else
std::cerr << ec.message() << std::endl;
});
}
void do_write(const std::string& msg) {
error_code ec;
boost::asio::write(socket_, boost::asio::buffer(msg), ec);
if (ec) {
std::cerr << "Closing (" << ec.message() << ")" << std::endl;
return socket_.close();
}
}
void do_read() {
socket_.async_read_some( //
boost::asio::buffer(data),
[this](error_code ec, std::size_t bytes) {
if (ec)
return socket_.close();
bytes_received_ += bytes;
do_read();
});
}
std::atomic_size_t bytes_received_{0};
std::array<unsigned char, 512> data;
tcp::socket socket_;
};
} // namespace tcpsocket
struct Cfg {
unsigned id{};
std::string host;
std::string port;
};
struct Client {
Client(unsigned id, boost::asio::any_io_executor ex) : id_(id), impl_(ex) {}
unsigned id_;
tcpsocket::client impl_;
};
int main()
{
boost::asio::io_context ioc;
std::list<Client> clients;
std::vector<Cfg> const config{
{125u, "127.0.0.1", "30000"},
{137u, "127.0.0.1", "30001"},
{149u, "127.0.0.1", "30002"},
{161u, "127.0.0.1", "30003"},
{173u, "127.0.0.1", "30004"},
{185u, "127.0.0.1", "30005"},
{197u, "127.0.0.1", "30006"},
{209u, "127.0.0.1", "30007"},
{221u, "127.0.0.1", "30008"},
{233u, "127.0.0.1", "30009"},
{245u, "127.0.0.1", "30010"},
};
for (auto& [id, host, port] : config) {
auto& c = clients.emplace_back(id, make_strand(ioc));
c.impl_.connect(host, port);
c.impl_.write(std::to_string(id) + " connected to " + host + ":" + port + "\n");
}
ioc.run_for(150ms);
for (auto& [id, impl]: clients)
std::cout << id << " received " << impl.bytes_received() << "\n";
}
Prints
(for a in {30000..30010}; do netcat -tlp $a < main.cpp & done)
g++ -std=c++20 -O2 -Wall -pedantic -pthread main.cpp
./a.out
125 connected to 127.0.0.1:30000
149 connected to 127.0.0.1:30002
161 connected to 127.0.0.1:30003
185 connected to 127.0.0.1:30005
197 connected to 127.0.0.1:30006
209 connected to 127.0.0.1:30007
221 connected to 127.0.0.1:30008
233 connected to 127.0.0.1:30009
173 connected to 127.0.0.1:30004
245 connected to 127.0.0.1:30010
137 connected to 127.0.0.1:30001
125 received 3386
137 received 3386
149 received 3386
161 received 3386
173 received 3386
185 received 3386
197 received 3386
209 received 3386
221 received 3386
233 received 3386
245 received 3386
Other Notes
Read operations form a loop (implicit strand).
Note that it is still your responsibility to ensure no write operations overlap. If necessary, introduce a queue so that you can have multiple messages pending. See e.g. How to safely write to a socket from multiple threads?
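A sketch of what such an outbox queue could look like for this client (the names are illustrative and lifetime handling is omitted; it's the same pattern used in the answers further down this page). Only the front message is ever in flight, so writes never overlap:
#include <boost/asio.hpp>
#include <deque>
#include <string>
class queued_writer {
    using error_code = boost::system::error_code;
public:
    explicit queued_writer(boost::asio::any_io_executor ex) : socket_(ex) {}
    void write(std::string msg) { // public, may be called off-strand
        post(socket_.get_executor(),
             [this, msg = std::move(msg)]() mutable { do_write(std::move(msg)); });
    }
private:
    void do_write(std::string msg) { // assumed on the strand
        outbox_.push_back(std::move(msg));
        if (outbox_.size() == 1) // no write currently in flight
            do_write_loop();
    }
    void do_write_loop() {
        if (outbox_.empty())
            return;
        boost::asio::async_write(
            socket_, boost::asio::buffer(outbox_.front()),
            [this](error_code ec, std::size_t /*bytes*/) {
                if (ec)
                    return; // real code would close/log here
                outbox_.pop_front();
                do_write_loop(); // send the next queued message, if any
            });
    }
    boost::asio::ip::tcp::socket socket_;
    std::deque<std::string> outbox_; // front() is the message being written
};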

Server crashing while being interrupted sending large chunk of data

My server crashes when I gracefully close a client that is connected to it, while the client is receiving a large chunk of data. I suspect a possible lifetime bug, as with most bugs in Boost ASIO, but I was not able to spot my mistake myself.
Each client establishes 2 connections with the server: one is for syncing, the other is a long-lived connection for receiving continuous updates. In the "syncing phase" the client receives a large amount of data to sync with the server state ("state" is basically DB data in JSON format). After syncing, the sync connection is closed. The client receives updates to the DB as they happen (these are of course very small compared to the "syncing data") via the other connection.
These are the relevant files:
connection.h
#pragma once
#include <array>
#include <memory>
#include <string>
#include <boost/asio.hpp>
class ConnectionManager;
/// Represents a single connection from a client.
class Connection : public std::enable_shared_from_this<Connection>
{
public:
Connection(const Connection&) = delete;
Connection& operator=(const Connection&) = delete;
/// Construct a connection with the given socket.
explicit Connection(boost::asio::ip::tcp::socket socket, ConnectionManager& manager);
/// Start the first asynchronous operation for the connection.
void start();
/// Stop all asynchronous operations associated with the connection.
void stop();
/// Perform an asynchronous write operation.
void do_write(const std::string& buffer);
int getNativeHandle();
~Connection();
private:
/// Perform an asynchronous read operation.
void do_read();
/// Socket for the connection.
boost::asio::ip::tcp::socket socket_;
/// The manager for this connection.
ConnectionManager& connection_manager_;
/// Buffer for incoming data.
std::array<char, 8192> buffer_;
std::string outgoing_buffer_;
};
typedef std::shared_ptr<Connection> connection_ptr;
connection.cpp
#include "connection.h"
#include <utility>
#include <vector>
#include <iostream>
#include <thread>
#include "connection_manager.h"
Connection::Connection(boost::asio::ip::tcp::socket socket, ConnectionManager& manager)
: socket_(std::move(socket))
, connection_manager_(manager)
{
}
void Connection::start()
{
do_read();
}
void Connection::stop()
{
socket_.close();
}
Connection::~Connection()
{
}
void Connection::do_read()
{
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(buffer_), [this, self](boost::system::error_code ec, std::size_t bytes_transferred) {
if (!ec) {
std::string buff_str = std::string(buffer_.data(), bytes_transferred);
const auto& tokenized_buffer = split(buff_str, ' ');
if(!tokenized_buffer.empty() && tokenized_buffer[0] == "sync") {
/// "syncing connection" sends a specific text
/// hence I can separate between sycing and long-lived connections here and act accordingly.
const auto& exec_json_strs = getExecutionJsons();
const auto& order_json_strs = getOrdersAsJsons();
const auto& position_json_strs = getPositionsAsJsons();
const auto& all_json_strs = exec_json_strs + order_json_strs + position_json_strs + createSyncDoneJson();
/// this is potentially a very large data.
do_write(all_json_strs);
}
do_read();
} else {
connection_manager_.stop(shared_from_this());
}
});
}
void Connection::do_write(const std::string& write_buffer)
{
outgoing_buffer_ = write_buffer;
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(outgoing_buffer_, outgoing_buffer_.size()), [this, self](boost::system::error_code ec, std::size_t transfer_size) {
if (!ec) {
/// everything is fine.
} else {
/// what to do here?
/// server crashes once I get error code 32 (EPIPE) here.
}
});
}
connection_manager.h
#pragma once
#include <set>
#include "connection.h"
/// Manages open connections so that they may be cleanly stopped when the server
/// needs to shut down.
class ConnectionManager
{
public:
ConnectionManager(const ConnectionManager&) = delete;
ConnectionManager& operator=(const ConnectionManager&) = delete;
/// Construct a connection manager.
ConnectionManager();
/// Add the specified connection to the manager and start it.
void start(connection_ptr c);
/// Stop the specified connection.
void stop(connection_ptr c);
/// Stop all connections.
void stop_all();
void sendAllConnections(const std::string& buffer);
private:
/// The managed connections.
std::set<connection_ptr> connections_;
};
connection_manager.cpp
#include "connection_manager.h"
ConnectionManager::ConnectionManager()
{
}
void ConnectionManager::start(connection_ptr c)
{
connections_.insert(c);
c->start();
}
void ConnectionManager::stop(connection_ptr c)
{
connections_.erase(c);
c->stop();
}
void ConnectionManager::stop_all()
{
for (auto c: connections_)
c->stop();
connections_.clear();
}
/// this function is used to keep clients up to date with the changes, not used during syncing phase.
void ConnectionManager::sendAllConnections(const std::string& buffer)
{
for (auto c: connections_)
c->do_write(buffer);
}
server.h
#pragma once
#include <boost/asio.hpp>
#include <string>
#include "connection.h"
#include "connection_manager.h"
class Server
{
public:
Server(const Server&) = delete;
Server& operator=(const Server&) = delete;
/// Construct the server to listen on the specified TCP address and port, and
/// serve up files from the given directory.
explicit Server(const std::string& address, const std::string& port);
/// Run the server's io_service loop.
void run();
void deliver(const std::string& buffer);
private:
/// Perform an asynchronous accept operation.
void do_accept();
/// Wait for a request to stop the server.
void do_await_stop();
/// The io_service used to perform asynchronous operations.
boost::asio::io_service io_service_;
/// The signal_set is used to register for process termination notifications.
boost::asio::signal_set signals_;
/// Acceptor used to listen for incoming connections.
boost::asio::ip::tcp::acceptor acceptor_;
/// The connection manager which owns all live connections.
ConnectionManager connection_manager_;
/// The *NEXT* socket to be accepted.
boost::asio::ip::tcp::socket socket_;
};
server.cpp
#include "server.h"
#include <signal.h>
#include <utility>
Server::Server(const std::string& address, const std::string& port)
: io_service_()
, signals_(io_service_)
, acceptor_(io_service_)
, connection_manager_()
, socket_(io_service_)
{
// Register to handle the signals that indicate when the server should exit.
// It is safe to register for the same signal multiple times in a program,
// provided all registration for the specified signal is made through Asio.
signals_.add(SIGINT);
signals_.add(SIGTERM);
#if defined(SIGQUIT)
signals_.add(SIGQUIT);
#endif // defined(SIGQUIT)
do_await_stop();
// Open the acceptor with the option to reuse the address (i.e. SO_REUSEADDR).
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::endpoint endpoint = *resolver.resolve({address, port});
acceptor_.open(endpoint.protocol());
acceptor_.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen();
do_accept();
}
void Server::run()
{
// The io_service::run() call will block until all asynchronous operations
// have finished. While the server is running, there is always at least one
// asynchronous operation outstanding: the asynchronous accept call waiting
// for new incoming connections.
io_service_.run();
}
void Server::do_accept()
{
acceptor_.async_accept(socket_,
[this](boost::system::error_code ec)
{
// Check whether the server was stopped by a signal before this
// completion handler had a chance to run.
if (!acceptor_.is_open())
{
return;
}
if (!ec)
{
connection_manager_.start(std::make_shared<Connection>(
std::move(socket_), connection_manager_));
}
do_accept();
});
}
void Server::do_await_stop()
{
signals_.async_wait(
[this](boost::system::error_code /*ec*/, int /*signo*/)
{
// The server is stopped by cancelling all outstanding asynchronous
// operations. Once all operations have finished the io_service::run()
// call will exit.
acceptor_.close();
connection_manager_.stop_all();
});
}
/// this function is used to keep clients up to date with the changes, not used during syncing phase.
void Server::deliver(const std::string& buffer)
{
connection_manager_.sendAllConnections(buffer);
}
So, I am repeating my question: My server crashes when I gracefully close a client that is connected to it, while the client is receiving a large chunk of data and I do not know why.
Edit: The crash happens in the async_write function, as soon as I receive the EPIPE error. The application is multithreaded. There are 4 threads that call Server::deliver with their own data as it is produced. deliver() is used for keeping clients up to date; it has nothing to do with the initial syncing, which is done with persistent data fetched from the DB.
I had a single io_service, so I thought that I would not need strands. io_service::run is called on the main thread, so the main thread is blocking.
Reviewing, adding some missing code bits:
namespace /*missing code stubs*/ {
auto split(std::string_view input, char delim) {
std::vector<std::string_view> result;
boost::algorithm::split(result, input,
boost::algorithm::is_from_range(delim, delim));
return result;
}
std::string getExecutionJsons() { return ""; }
std::string getOrdersAsJsons() { return ""; }
std::string getPositionsAsJsons() { return ""; }
std::string createSyncDoneJson() { return ""; }
}
Now the things I notice are:
you have a single io_service, so a single thread. Okay, so no strands should be required unless you have threads in your other code (main, e.g.?).
A particular reason to suspect that threads are at play is that nobody could possibly call Server::deliver because run() is blocking. This means that whenever you call deliver() now, it causes a data race, which leads to Undefined Behaviour.
The casual comment
/// this function is used to keep clients up to date with the changes,
/// not used during syncing phase.
does not do much to remove this concern. The code needs to defend against misuse. Comments do not get executed. Make it better:
void Server::deliver(const std::string& buffer) {
post(io_context_,
[this, buffer] { connection_manager_.broadcast(std::move(buffer)); });
}
you do not check that previous writes are completed before accepting a "new" one. This means that calling Connection::do_write results in Undefined Behaviour for two reasons:
modifying outgoing_buffer_ during an ongoing async operation that uses that buffer is UB
having two overlapping async_write operations on the same IO object is UB (see the docs)
The typical way to fix that is to have a queue of outgoing messages instead.
using async_read_some is rarely what you want, especially since the reads don't accumulate into a dynamic buffer. This means that if your packets get separated at unexpected boundaries, you may not detect commands at all, or incorrectly.
Instead consider asio::async_read_until with a dynamic buffer (e.g.
read directly into std::string so you don't have to copy the buffer into a string
read into streambuf so you can use std::istream(&sbuf_) to parse instead of tokenizing
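A sketch of the async_read_until variant with a std::string dynamic buffer (simplified, not a drop-in replacement for the Connection above; a '\n' delimiter is assumed just for illustration). Partial reads accumulate until a full line is available:
#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>
using boost::asio::ip::tcp;
using boost::system::error_code;
struct LineReader : std::enable_shared_from_this<LineReader> {
    explicit LineReader(tcp::socket s) : socket_(std::move(s)) {}
    void start() { do_read(); }
private:
    void do_read() {
        boost::asio::async_read_until(
            socket_, boost::asio::dynamic_buffer(incoming_), '\n',
            [this, self = shared_from_this()](error_code ec, std::size_t n) {
                if (ec)
                    return; // connection closed or failed
                std::string line = incoming_.substr(0, n - 1); // without the '\n'
                incoming_.erase(0, n); // consume the parsed part, keep any surplus
                std::cout << "command: " << line << "\n";
                do_read();
            });
    }
    tcp::socket socket_;
    std::string incoming_; // grows as needed; may hold data past the delimiter
};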
Concatenating everything into all_json_strs, which forces owning text containers, is wasteful. Instead, use a const-buffer-sequence to combine them all without copying.
Better yet, consider a streaming approach to JSON serialization so not all the JSON needs to be serialized in memory at any given time.
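For the const-buffer-sequence point above, a minimal sketch (synchronous write for brevity; with async_write the four strings would have to stay alive, e.g. as members, until completion):
#include <boost/asio.hpp>
#include <array>
#include <string>
using boost::asio::ip::tcp;
void write_all(tcp::socket& socket, std::string const& execs, std::string const& orders,
               std::string const& positions, std::string const& done) {
    // No concatenation: the array of const_buffers is a ConstBufferSequence,
    // so one write call sends the four pieces back to back.
    std::array<boost::asio::const_buffer, 4> payload{
        boost::asio::buffer(execs),
        boost::asio::buffer(orders),
        boost::asio::buffer(positions),
        boost::asio::buffer(done),
    };
    boost::asio::write(socket, payload);
}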
Don't declare empty destructors (~Connection). They're pessimizations
Likewise for empty constructors (ConnectionManager). If you must, consider
ConnectionManager::ConnectionManager() = default;
The getNativeHandle function raises more questions about other code that may interfere. E.g. it may indicate other libraries doing operations, which again can lead to overlapped reads/writes, or it could be a sign of more code living on threads (as Server::run() is by definition blocking)
Connection manager should probably hold weak_ptr, so Connections could eventually terminate. Now, the last reference is by definition held in the connection manager, meaning nothing ever gets destructed when the peer disconnects or the session fails for some other reason.
This is not idiomatic:
// Check whether the server was stopped by a signal before this
// completion handler had a chance to run.
if (!acceptor_.is_open()) {
return;
}
If you closed the acceptor, the completion handler is called with error::operation_aborted anyways. Simply handle that, e.g. in the final version I'll post later:
// separate strand for each connection - just in case you ever add threads
acceptor_.async_accept(
make_strand(io_context_), [this](error_code ec, tcp::socket sock) {
if (!ec) {
connection_manager_.register_and_start(
std::make_shared<Connection>(std::move(sock),
connection_manager_));
do_accept();
}
});
I notice this comment:
// The server is stopped by cancelling all outstanding asynchronous
// operations. Once all operations have finished the io_service::run()
// call will exit.
In fact you never cancel() any operation on any IO object in your code. Again, comments aren't executed. It's better to indeed do as you say, and let the destructors close the resources. This prevents spurious errors when objects are used-after-close, and also prevents very annoying race conditions when e.g. you closed the handle, some other thread re-opened a new stream on the same file descriptor and you had given out the handle to a third party (using getNativeHandle)... you see where this leads?
Reproducing The Problem?
Having reviewed this way, I tried to repro the issue, so I created fake data:
std::string getExecutionJsons() { return std::string(1024, 'E'); }
std::string getOrdersAsJsons() { return std::string(13312, 'O'); }
std::string getPositionsAsJsons() { return std::string(8192, 'P'); }
std::string createSyncDoneJson() { return std::string(24576, 'C'); }
With some minor tweaks to the Connection class:
std::string buff_str =
std::string(buffer_.data(), bytes_transferred);
const auto& tokenized_buffer = split(buff_str, ' ');
if (!tokenized_buffer.empty() &&
tokenized_buffer[0] == "sync") {
std::cerr << "sync detected on " << socket_.remote_endpoint() << std::endl;
/// "syncing connection" sends a specific text
/// hence I can separate between sycing and long-lived
/// connections here and act accordingly.
const auto& exec_json_strs = getExecutionJsons();
const auto& order_json_strs = getOrdersAsJsons();
const auto& position_json_strs = getPositionsAsJsons();
const auto& all_json_strs = exec_json_strs +
order_json_strs + position_json_strs +
createSyncDoneJson();
std::cerr << "All json length: " << all_json_strs.length() << std::endl;
/// this is potentially a very large data.
do_write(all_json_strs); // already on strand!
}
We get the server outputting
sync detected on 127.0.0.1:43012
All json length: 47104
sync detected on 127.0.0.1:43044
All json length: 47104
And clients faked with netcat:
$ netcat localhost 8989 <<< 'sync me' > expected
^C
$ wc -c expected
47104 expected
Good. Now let's cause premature disconnect:
netcat localhost 8989 -w0 <<< 'sync me' > truncated
$ wc -c truncated
0 truncated
So, it does lead to early close, but server still says
sync detected on 127.0.0.1:44176
All json length: 47104
Let's instrument do_write as well:
async_write( //
socket_, boost::asio::buffer(outgoing_buffer_, outgoing_buffer_.size()),
[/*this,*/ self](error_code ec, size_t transfer_size) {
std::cerr << "do_write completion: " << transfer_size << " bytes ("
<< ec.message() << ")" << std::endl;
if (!ec) {
/// everything is fine.
} else {
/// what to do here?
// FIXME: probably cancel the read loop so the connection
// closes?
}
});
Now we see:
sync detected on 127.0.0.1:44494
All json length: 47104
do_write completion: 47104 bytes (Success)
sync detected on 127.0.0.1:44512
All json length: 47104
do_write completion: 32768 bytes (Operation canceled)
For one disconnected and one "okay" connection.
No sign of crashes/undefined behaviour. Let's check with -fsanitize=address,undefined: clean record, even adding a heartbeat:
int main() {
Server s("127.0.0.1", "8989");
std::thread yolo([&s] {
using namespace std::literals;
int i = 1;
do {
std::this_thread::sleep_for(5s);
} while (s.deliver("HEARTBEAT DEMO " + std::to_string(i++)));
});
s.run();
yolo.join();
}
Conclusion
The only problems highlighted above that weren't addressed were:
additional threading issues not shown (perhaps via getNativeHandle)
the fact that you can have overlapping writes in the Connection do_write. Fixing that:
void Connection::write(std::string msg) { // public, might not be on the strand
post(socket_.get_executor(),
[self = shared_from_this(), msg = std::move(msg)]() mutable {
self->do_write(std::move(msg));
});
}
void Connection::do_write(std::string msg) { // assumed on the strand
outgoing_.push_back(std::move(msg));
if (outgoing_.size() == 1)
do_write_loop();
}
void Connection::do_write_loop() {
if (outgoing_.size() == 0)
return;
auto self(shared_from_this());
async_write( //
socket_, boost::asio::buffer(outgoing_.front()),
[this, self](error_code ec, size_t transfer_size) {
std::cerr << "write completion: " << transfer_size << " bytes ("
<< ec.message() << ")" << std::endl;
if (!ec) {
outgoing_.pop_front();
do_write_loop();
} else {
socket_.cancel();
// This would ideally be enough to free the connection, but
// since `ConnectionManager` doesn't use `weak_ptr` you need to
// force the issue using kind of an "umbilical cord reflux":
connection_manager_.stop(self);
}
});
}
As you can see I also split write/do_write to prevent off-strand invocation. Same with stop.
Full Listing
A full listing with all the remarks/fixes from above:
File connection.h
#pragma once
#include <boost/asio.hpp>
#include <array>
#include <deque>
#include <memory>
#include <string>
using boost::asio::ip::tcp;
class ConnectionManager;
/// Represents a single connection from a client.
class Connection : public std::enable_shared_from_this<Connection> {
public:
Connection(const Connection&) = delete;
Connection& operator=(const Connection&) = delete;
/// Construct a connection with the given socket.
explicit Connection(tcp::socket socket, ConnectionManager& manager);
void start();
void stop();
void write(std::string msg);
private:
void do_stop();
void do_write(std::string msg);
void do_write_loop();
/// Perform an asynchronous read operation.
void do_read();
/// Socket for the connection.
tcp::socket socket_;
/// The manager for this connection.
ConnectionManager& connection_manager_;
/// Buffer for incoming data.
std::array<char, 8192> buffer_;
std::deque<std::string> outgoing_;
};
using connection_ptr = std::shared_ptr<Connection>;
File connection_manager.h
#pragma once
#include <list>
#include "connection.h"
/// Manages open connections so that they may be cleanly stopped when the server
/// needs to shut down.
class ConnectionManager {
public:
ConnectionManager(const ConnectionManager&) = delete;
ConnectionManager& operator=(const ConnectionManager&) = delete;
ConnectionManager() = default; // could be split across h/cpp if you wanted
void register_and_start(connection_ptr c);
void stop(connection_ptr c);
void stop_all();
void broadcast(const std::string& buffer);
// purge defunct connections, returns remaining active connections
size_t garbage_collect();
private:
using handle = std::weak_ptr<connection_ptr::element_type>;
std::list<handle> connections_;
};
File server.h
#pragma once
#include <boost/asio.hpp>
#include <string>
#include "connection.h"
#include "connection_manager.h"
class Server {
public:
Server(const Server&) = delete;
Server& operator=(const Server&) = delete;
/// Construct the server to listen on the specified TCP address and port,
/// and serve up files from the given directory.
explicit Server(const std::string& address, const std::string& port);
/// Run the server's io_service loop.
void run();
bool deliver(const std::string& buffer);
private:
void do_accept();
void do_await_signal();
boost::asio::io_context io_context_;
boost::asio::any_io_executor strand_{io_context_.get_executor()};
boost::asio::signal_set signals_{strand_};
tcp::acceptor acceptor_{strand_};
ConnectionManager connection_manager_;
};
File connection.cpp
#include "connection.h"
#include <boost/algorithm/string.hpp>
#include <iostream>
#include <thread>
#include <utility>
#include <vector>
#include "connection_manager.h"
using boost::system::error_code;
Connection::Connection(tcp::socket socket, ConnectionManager& manager)
: socket_(std::move(socket))
, connection_manager_(manager) {}
void Connection::start() { // always assumed on the strand (since connection
// just constructed)
do_read();
}
void Connection::stop() { // public, might not be on the strand
post(socket_.get_executor(),
[self = shared_from_this()]() mutable {
self->do_stop();
});
}
void Connection::do_stop() { // assumed on the strand
socket_.cancel(); // trust shared pointer to destruct
}
namespace /*missing code stubs*/ {
auto split(std::string_view input, char delim) {
std::vector<std::string_view> result;
boost::algorithm::split(result, input,
boost::algorithm::is_from_range(delim, delim));
return result;
}
std::string getExecutionJsons() { return std::string(1024, 'E'); }
std::string getOrdersAsJsons() { return std::string(13312, 'O'); }
std::string getPositionsAsJsons() { return std::string(8192, 'P'); }
std::string createSyncDoneJson() { return std::string(24576, 'C'); }
} // namespace
void Connection::do_read() {
auto self(shared_from_this());
socket_.async_read_some(
boost::asio::buffer(buffer_),
[this, self](error_code ec, size_t bytes_transferred) {
if (!ec) {
std::string buff_str =
std::string(buffer_.data(), bytes_transferred);
const auto& tokenized_buffer = split(buff_str, ' ');
if (!tokenized_buffer.empty() &&
tokenized_buffer[0] == "sync") {
std::cerr << "sync detected on " << socket_.remote_endpoint() << std::endl;
/// "syncing connection" sends a specific text
/// hence I can separate between sycing and long-lived
/// connections here and act accordingly.
const auto& exec_json_strs = getExecutionJsons();
const auto& order_json_strs = getOrdersAsJsons();
const auto& position_json_strs = getPositionsAsJsons();
const auto& all_json_strs = exec_json_strs +
order_json_strs + position_json_strs +
createSyncDoneJson();
std::cerr << "All json length: " << all_json_strs.length() << std::endl;
/// this is potentially a very large data.
do_write(all_json_strs); // already on strand!
}
do_read();
} else {
std::cerr << "do_read terminating: " << ec.message() << std::endl;
connection_manager_.stop(shared_from_this());
}
});
}
void Connection::write(std::string msg) { // public, might not be on the strand
post(socket_.get_executor(),
[self = shared_from_this(), msg = std::move(msg)]() mutable {
self->do_write(std::move(msg));
});
}
void Connection::do_write(std::string msg) { // assumed on the strand
outgoing_.push_back(std::move(msg));
if (outgoing_.size() == 1)
do_write_loop();
}
void Connection::do_write_loop() {
if (outgoing_.size() == 0)
return;
auto self(shared_from_this());
async_write( //
socket_, boost::asio::buffer(outgoing_.front()),
[this, self](error_code ec, size_t transfer_size) {
std::cerr << "write completion: " << transfer_size << " bytes ("
<< ec.message() << ")" << std::endl;
if (!ec) {
outgoing_.pop_front();
do_write_loop();
} else {
socket_.cancel();
// This would ideally be enough to free the connection, but
// since `ConnectionManager` doesn't use `weak_ptr` you need to
// force the issue using kind of an "umbilical cord reflux":
connection_manager_.stop(self);
}
});
}
File connection_manager.cpp
#include "connection_manager.h"
void ConnectionManager::register_and_start(connection_ptr c) {
connections_.emplace_back(c);
c->start();
}
void ConnectionManager::stop(connection_ptr c) {
c->stop();
}
void ConnectionManager::stop_all() {
for (auto h : connections_)
if (auto c = h.lock())
c->stop();
}
/// this function is used to keep clients up to date with the changes, not used
/// during syncing phase.
void ConnectionManager::broadcast(const std::string& buffer) {
for (auto h : connections_)
if (auto c = h.lock())
c->write(buffer);
}
size_t ConnectionManager::garbage_collect() {
connections_.remove_if(std::mem_fn(&handle::expired));
return connections_.size();
}
File server.cpp
#include "server.h"
#include <signal.h>
#include <utility>
using boost::system::error_code;
Server::Server(const std::string& address, const std::string& port)
: io_context_(1) // THREAD HINT: single threaded
, connection_manager_()
{
// Register to handle the signals that indicate when the server should exit.
// It is safe to register for the same signal multiple times in a program,
// provided all registration for the specified signal is made through Asio.
signals_.add(SIGINT);
signals_.add(SIGTERM);
#if defined(SIGQUIT)
signals_.add(SIGQUIT);
#endif // defined(SIGQUIT)
do_await_signal();
// Open the acceptor with the option to reuse the address (i.e. SO_REUSEADDR).
tcp::resolver resolver(io_context_);
tcp::endpoint endpoint = *resolver.resolve({address, port});
acceptor_.open(endpoint.protocol());
acceptor_.set_option(tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen();
do_accept();
}
void Server::run() {
// The io_service::run() call will block until all asynchronous operations
// have finished. While the server is running, there is always at least one
// asynchronous operation outstanding: the asynchronous accept call waiting
// for new incoming connections.
io_context_.run();
}
void Server::do_accept() {
// separate strand for each connection - just in case you ever add threads
acceptor_.async_accept(
make_strand(io_context_), [this](error_code ec, tcp::socket sock) {
if (!ec) {
connection_manager_.register_and_start(
std::make_shared<Connection>(std::move(sock),
connection_manager_));
do_accept();
}
});
}
void Server::do_await_signal() {
signals_.async_wait([this](error_code /*ec*/, int /*signo*/) {
// handler on the strand_ because of the executor on signals_
// The server is stopped by cancelling all outstanding asynchronous
// operations. Once all operations have finished the io_service::run()
// call will exit.
acceptor_.cancel();
connection_manager_.stop_all();
});
}
bool Server::deliver(const std::string& buffer) {
if (io_context_.stopped()) {
return false;
}
post(io_context_,
[this, buffer] { connection_manager_.broadcast(std::move(buffer)); });
return true;
}
File test.cpp
#include "server.h"
int main() {
Server s("127.0.0.1", "8989");
std::thread yolo([&s] {
using namespace std::literals;
int i = 1;
do {
std::this_thread::sleep_for(5s);
} while (s.deliver("HEARTBEAT DEMO " + std::to_string(i++)));
});
s.run();
yolo.join();
}

Boost TCP client to connect to multiple servers

I want my TCP client to connect to multiple servers (each server has a separate IP and port).
I am using async_connect. I can successfully connect to different servers, but the read/write fails since the server's corresponding tcp::socket object is not available.
Can you please suggest how I could store each server's socket in some data structure? I tried saving the IP and socket in a std::map, but the first server's socket object is not available in memory and the app crashes. I tried making the socket static, but it does not help either.
Please help me!!
Also, I hope I am logically correct in making a single TCP client connect to 2 different servers.
I am sharing the simplified header & cpp files below.
class TCPClient: public Socket
{
public:
TCPClient(boost::asio::io_service& io_service,
boost::asio::ip::tcp::endpoint ep);
virtual ~TCPClient();
void Connect(boost::asio::ip::tcp::endpoint ep, boost::asio::io_service &ioService, void (Comm::*SaveClientDetails)(std::string,void*),
void *pClassInstance);
void TransmitData(const INT8 *pi8Buffer);
void HandleWrite(const boost::system::error_code& err,
size_t szBytesTransferred);
void HandleConnect(const boost::system::error_code &err,
void (Comm::*SaveClientDetails)(std::string,void*),
void *pClassInstance, std::string sIPAddr);
static tcp::socket* CreateSocket(boost::asio::io_service &ioService)
{ return new tcp::socket(ioService); }
static tcp::socket *mSocket;
private:
std::string sMsgRead;
INT8 i8Data[MAX_BUFFER_LENGTH];
std::string sMsg;
boost::asio::deadline_timer mTimer;
};
tcp::socket* TCPClient::mSocket = NULL;
TCPClient::TCPClient(boost::asio::io_service &ioService,
boost::asio::ip::tcp::endpoint ep) :
mTimer(ioService)
{
}
void TCPClient::Connect(boost::asio::ip::tcp::endpoint ep,
boost::asio::io_service &ioService,
void (Comm::*SaveServerDetails)(std::string,void*),
void *pClassInstance)
{
mSocket = CreateSocket(ioService);
std::string sIPAddr = ep.address().to_string();
/* To send connection request to server*/
mSocket->async_connect(ep,boost::bind(&TCPClient::HandleConnect, this,
boost::asio::placeholders::error, SaveServerDetails,
pClassInstance, sIPAddr));
}
void TCPClient::HandleConnect(const boost::system::error_code &err,
void (Comm::*SaveServerDetails)(std::string,void*),
void *pClassInstance, std::string sIPAddr)
{
if (!err)
{
Comm* pInstance = (Comm*) pClassInstance;
if (NULL == pInstance)
{
break;
}
(pInstance->*SaveServerDetails)(sIPAddr,(void*)(mSocket));
}
else
{
break;
}
}
void TCPClient::TransmitData(const INT8 *pi8Buffer)
{
sMsg = pi8Buffer;
if (sMsg.empty())
{
break;
}
mSocket->async_write_some(boost::asio::buffer(sMsg, MAX_BUFFER_LENGTH),
boost::bind(&TCPClient::HandleWrite, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void TCPClient::HandleWrite(const boost::system::error_code &err,
size_t szBytesTransferred)
{
if (!err)
{
std::cout<< "Data written to TCP Client port! ";
}
else
{
break;
}
}
You seem to know your problem: the socket object is unavailable. That's 100% by choice. You chose to make it static, so of course there will be only one instance.
Also, I hope I am logically correct in making a single TCP client connect to 2 different servers.
It sounds wrong to me. You can redefine "client" to mean something having multiple TCP connections. In that case, at the very minimum you expect a container of tcp::socket objects to hold those (or, you know, Connection objects that each contain a tcp::socket).
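As a bare-bones sketch of that idea (the names are invented and it is far simpler than the demo below): keep one connection object per server, owned by the client and keyed by endpoint, instead of a single static socket pointer:
#include <boost/asio.hpp>
#include <map>
#include <memory>
#include <string>
using boost::asio::ip::tcp;
struct Connection {
    explicit Connection(boost::asio::io_service& io) : socket(io) {}
    tcp::socket socket;
    std::string outbox; // kept alive while an async write is in progress
};
class MultiClient {
public:
    explicit MultiClient(boost::asio::io_service& io) : io_(io) {}
    void connect(tcp::endpoint ep) {
        auto conn = std::make_shared<Connection>(io_);
        conns_[ep] = conn;
        conn->socket.async_connect(ep, [conn](boost::system::error_code ec) {
            if (ec) { /* log, maybe retry */ }
            // conn stays alive because the handler holds a copy of the shared_ptr
        });
    }
    void send(tcp::endpoint ep, std::string msg) {
        auto it = conns_.find(ep);
        if (it == conns_.end())
            return;
        auto conn = it->second;
        conn->outbox = std::move(msg); // note: no queue here, so don't overlap writes
        async_write(conn->socket, boost::asio::buffer(conn->outbox),
                    [conn](boost::system::error_code, std::size_t) {});
    }
private:
    boost::asio::io_service& io_;
    std::map<tcp::endpoint, std::shared_ptr<Connection>> conns_; // one socket per server
};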
BONUS: Demo
For fun and glory, here's what I think you should be looking for.
Notes:
no more new, delete
no more void*, reinterpret casts (!!!)
less manual buffer sizing/handling
no more bind
buffer lifetimes are guaranteed for the corresponding async operations
message queues per connection
connections are on a strand for proper synchronized access to shared state in multi-threading environments
I added in a connection max idle time timeout; it also limits the time taken for any async operation (connect/write). I assumed you wanted something like this because (a) it's common and (b) there was an unused deadline_timer in your question code
Note the technique of using shared pointers to have Comm manage its own lifetime. Note also that _socket and _outbox are owned by the individual Comm instance.
Live On Coliru
#include <boost/asio.hpp>
#include <deque>
#include <iostream>
using INT8 = char;
using boost::asio::ip::tcp;
using boost::system::error_code;
//using SaveFunc = std::function<void(std::string, void*)>; // TODO abolish void*
using namespace std::chrono_literals;
using duration = std::chrono::high_resolution_clock::duration;
static inline constexpr size_t MAX_BUFFER_LENGTH = 1024;
using Handle = std::weak_ptr<class Comm>;
class Comm : public std::enable_shared_from_this<Comm> {
public:
template <typename Executor>
explicit Comm(Executor ex, tcp::endpoint ep, // ex assumed to be strand
duration max_idle)
: _ep(ep)
, _max_idle(max_idle)
, _socket{ex}
, _timer{_socket.get_executor()}
{
}
~Comm() { std::cerr << "Comm closed (" << _ep << ")\n"; }
void Start() {
post(_socket.get_executor(), [this, self = shared_from_this()] {
_socket.async_connect(
_ep, [this, self = shared_from_this()](error_code ec) {
std::cerr << "Connect: " << ec.message() << std::endl;
if (!ec)
DoIdle();
else
_timer.cancel();
});
DoIdle();
});
}
void Stop() {
post(_socket.get_executor(), [this, self = shared_from_this()] {
if (not _outbox.empty())
std::cerr << "Warning: some messages may be undelivered ("
<< _ep << ")" << std::endl;
_socket.cancel();
_timer.cancel();
});
}
void TransmitData(std::string_view msg) {
post(_socket.get_executor(),
[this, self = shared_from_this(), msg = std::string(msg.substr(0, MAX_BUFFER_LENGTH))] {
_outbox.emplace_back(std::move(msg));
if (_outbox.size() == 1) { // no send loop already active?
DoSendLoop();
}
});
}
private:
// The DoXXXX functions are assumed to be on the strand
void DoSendLoop() {
DoIdle(); // restart max_idle even after last successful send
if (_outbox.empty())
return;
boost::asio::async_write(
_socket, boost::asio::buffer(_outbox.front()),
[this, self = shared_from_this()](error_code ec, size_t xfr) {
std::cerr << "Write " << xfr << " bytes to " << _ep << " " << ec.message() << std::endl;
if (!ec) {
_outbox.pop_front();
DoSendLoop();
} else
_timer.cancel(); // causes Comm shutdown
});
}
void DoIdle() {
_timer.expires_from_now(_max_idle); // cancels any pending wait
_timer.async_wait([this, self = shared_from_this()](error_code ec) {
if (!ec) {
std::cerr << "Timeout" << std::endl;
_socket.cancel();
}
});
}
tcp::endpoint _ep;
duration _max_idle;
tcp::socket _socket;
boost::asio::high_resolution_timer _timer;
std::deque<std::string> _outbox;
};
class TCPClient {
boost::asio::any_io_executor _ex;
std::deque<Handle> _comms;
public:
TCPClient(boost::asio::any_io_executor ex) : _ex(ex) {}
void Add(tcp::endpoint ep, duration max_idle = 3s)
{
auto pcomm = std::make_shared<Comm>(make_strand(_ex), ep, max_idle);
pcomm->Start();
_comms.push_back(pcomm);
// optionally garbage collect expired handles:
std::erase_if(_comms, std::mem_fn(&Handle::expired));
}
void TransmitData(std::string_view msg) {
for (auto& handle : _comms)
if (auto pcomm = handle.lock())
pcomm->TransmitData(msg);
}
void Stop() {
for (auto& handle : _comms)
if (auto pcomm = handle.lock())
pcomm->Stop();
}
};
int main() {
using std::this_thread::sleep_for;
boost::asio::thread_pool ctx(1);
TCPClient c(ctx.get_executor());
c.Add({{}, 8989});
c.Add({{}, 8990}, 1s); // shorter timeout for demo
c.TransmitData("Hello world\n");
c.Add({{}, 8991});
sleep_for(2s); // times out second connection
c.TransmitData("Three is a crowd\n"); // only delivered to 8989 and 8991
sleep_for(1s); // allow for delivery
c.Stop();
ctx.join();
}
Prints (on Coliru):
for p in {8989..8991}; do netcat -t -l -p $p& done
sleep .5; ./a.out
Hello world
Connect: Success
Connect: Success
Hello world
Connect: Success
Write 12 bytes to 0.0.0.0:8989 Success
Write 12 bytes to 0.0.0.0:8990 Success
Timeout
Comm closed (0.0.0.0:8990)
Write Three is a crowd
17Three is a crowd
bytes to 0.0.0.0:8989 Success
Write 17 bytes to 0.0.0.0:8991 Success
Comm closed (0.0.0.0:8989)
Comm closed (0.0.0.0:8991)
The output is a little out of sequence there.

asio aync_send memory leak

I have the following snippet:
void TcpConnection::Send(const std::vector<uint8_t>& buffer) {
std::shared_ptr<std::vector<uint8_t>> bufferCopy = std::make_shared<std::vector<uint8_t>>(buffer);
auto socket = m_socket;
m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()), [socket, bufferCopy](const boost::system::error_code& err, size_t bytesSent)
{
if (err)
{
logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
// Assume that the communications path is no longer
// valid.
socket->close();
}
});
}
This code leads to a memory leak. If the m_socket->async_send call is commented out, then there is no memory leak. I cannot understand why bufferCopy is not freed after the callback is dispatched. What am I doing wrong?
Windows is used.
Since you don't show any relevant code, and the code shown does not contain a strict problem, I'm going to reason from the code smells.
The smell is that you have a TcpConnection class that is not enable_shared_from_this<TcpConnection> derived. This leads me to suspect you didn't plan ahead, because there's no possible reasonable way to continue using the instance after the completion of any asynchronous operation (like the async_send).
This leads me to suspect you have a crucially simple problem, which is that your completion handler never runs. There's only one situation that could explain this, and that leads me to assume you never run() the io_service instance.
Here's the situation live:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;
#include <iostream>
auto& logwarning = std::clog;
struct TcpConnection {
using Buffer = std::vector<uint8_t>;
void Send(Buffer const &);
TcpConnection(asio::io_service& svc) : m_socket(std::make_shared<tcp::socket>(svc)) {}
tcp::socket& socket() const { return *m_socket; }
private:
std::shared_ptr<tcp::socket> m_socket;
};
void TcpConnection::Send(Buffer const &buffer) {
auto bufferCopy = std::make_shared<Buffer>(buffer);
auto socket = m_socket;
m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()),
[socket, bufferCopy](const boost::system::error_code &err, size_t /*bytesSent*/) {
if (err) {
logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
// Assume that the communications path is no longer
// valid.
socket->close();
}
});
}
int main() {
asio::io_service svc;
tcp::acceptor a(svc, tcp::v4());
a.bind({{}, 6767});
a.listen();
boost::system::error_code ec;
do {
TcpConnection conn(svc);
a.accept(conn.socket(), ec);
char const* greeting = "whale hello there!\n";
conn.Send({greeting, greeting+strlen(greeting)});
} while (!ec);
}
You'll see that any client connecting, e.g. with netcat localhost 6767, will receive the greeting, after which, surprisingly, the connection will stay open instead of being closed.
You'd expect the connection to be closed by the server side either way, either because
a transmission error occurred in async_send
or because after the completion handler is run, it is destroyed and hence the captured shared pointers are destructed. Not only would that free the copied buffer, but it would also run the destructor of socket, which would close the connection.
This clearly confirms that the completion handler never runs. The fix is "easy", find a place to run the service:
int main() {
asio::io_service svc;
tcp::acceptor a(svc, tcp::v4());
a.set_option(tcp::acceptor::reuse_address());
a.bind({{}, 6767});
a.listen();
std::thread th;
{
asio::io_service::work keep(svc); // prevent service running out of work early
th = std::thread([&svc] { svc.run(); });
boost::system::error_code ec;
for (int i = 0; i < 11 && !ec; ++i) {
TcpConnection conn(svc);
a.accept(conn.socket(), ec);
char const* greeting = "whale hello there!\n";
conn.Send({greeting, greeting+strlen(greeting)});
}
}
th.join();
}
This runs 11 connections and exits leak-free.
Better:
It becomes a lot cleaner when the accept loop is also async, and the TcpConnection is properly shared as hinted above:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;
#include <memory>
#include <thread>
#include <iostream>
auto& logwarning = std::clog;
struct TcpConnection : std::enable_shared_from_this<TcpConnection> {
using Buffer = std::vector<uint8_t>;
TcpConnection(asio::io_service& svc) : m_socket(svc) {}
void start() {
char const* greeting = "whale hello there!\n";
Send({greeting, greeting+strlen(greeting)});
}
void Send(Buffer);
private:
friend struct Server;
Buffer m_output;
tcp::socket m_socket;
};
struct Server {
Server(unsigned short port) {
_acceptor.set_option(tcp::acceptor::reuse_address());
_acceptor.bind({{}, port});
_acceptor.listen();
do_accept();
}
~Server() {
keep.reset();
_svc.post([this] { _acceptor.cancel(); });
if (th.joinable())
th.join();
}
private:
void do_accept() {
auto conn = std::make_shared<TcpConnection>(_svc);
_acceptor.async_accept(conn->m_socket, [this,conn](boost::system::error_code ec) {
if (ec)
logwarning << "accept failed: " << ec.message() << "\n";
else {
conn->start();
do_accept();
}
});
}
asio::io_service _svc;
// prevent service running out of work early:
std::unique_ptr<asio::io_service::work> keep{std::make_unique<asio::io_service::work>(_svc)};
std::thread th{[this]{_svc.run();}}; // TODO handle handler exceptions
tcp::acceptor _acceptor{_svc, tcp::v4()};
};
void TcpConnection::Send(Buffer buffer) {
m_output = std::move(buffer);
auto self = shared_from_this();
m_socket.async_send(asio::buffer(m_output),
[self](const boost::system::error_code &err, size_t /*bytesSent*/) {
if (err) {
logwarning << "clientcomms_t::sendNext encountered error: " << err.message() << "\n";
// not holding on to `self` means the socket gets closed
}
// do more with `self` which points to the TcpConnection instance...
});
}
int main() {
Server server(6868);
std::this_thread::sleep_for(std::chrono::seconds(3));
}

Boost asio, single TCP server, many clients

I am creating a TCP server using Boost ASIO that will accept connections from many clients, receive data, and send confirmations. The thing is that I want to be able to accept all the clients, but I want to work with only one at a time. I want all the other transactions to be kept in a queue.
Example:
Client1 connects
Client2 connects
Client1 sends data and asks for reply
Client2 sends data and asks for reply
Client2's request is put into queue
Client1's data is read, server replies, end of transaction
Client2's request is taken from the queue, the server reads the data and replies, end of transaction.
So this is something between an asynchronous server and a blocking server. I want to do just one thing at a time, but at the same time I want to be able to store all client sockets and their requests in a queue.
I was able to create server-client communication with all the functionality that I need, but only on a single thread. Once the client disconnects, the server terminates as well. I don't really know how to start implementing what I mentioned above. Should I open a new thread each time a connection is accepted? Should I use async_accept or a blocking accept?
I have read the boost::asio chat example, where many clients connect to a single server, but it has no queuing mechanism like the one I need here.
I am aware that this post might be a bit confusing, but TCP servers are new to me so I am not familiar enough with the terminology. There is also no source code to post because I am asking only for help with the concept of this project.
Just keep accepting.
You show no code, but it typically looks like
void do_accept() {
acceptor_.async_accept(socket_, [this](boost::system::error_code ec) {
std::cout << "async_accept -> " << ec.message() << "\n";
if (!ec) {
std::make_shared<Connection>(std::move(socket_))->start();
do_accept(); // THIS LINE
}
});
}
If you don't include the line marked // THIS LINE you will indeed not accept more than 1 connection.
If this doesn't help, please include some code we can work from.
For Fun, A Demo
This uses just standard library features for the non-network part.
Network Listener
The network part is as outlined before:
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <istream>
using namespace std::chrono_literals;
using Clock = std::chrono::high_resolution_clock;
namespace Shared {
using PostRequest = std::function<void(std::istream& is)>;
}
namespace Network {
namespace ba = boost::asio;
using ba::ip::tcp;
using error_code = boost::system::error_code;
using Shared::PostRequest;
struct Connection : std::enable_shared_from_this<Connection> {
Connection(tcp::socket&& s, PostRequest poster) : _s(std::move(s)), _poster(poster) {}
void process() {
auto self = shared_from_this();
ba::async_read(_s, _request, [this,self](error_code ec, size_t) {
if (!ec || ec == ba::error::eof) {
std::istream reader(&_request);
_poster(reader);
}
});
}
private:
tcp::socket _s;
ba::streambuf _request;
PostRequest _poster;
};
struct Server {
Server(unsigned port, PostRequest poster) : _port(port), _poster(poster) {}
void run_for(Clock::duration d = 30s) {
_stop.expires_from_now(d);
_stop.async_wait([this](error_code ec) { if (!ec) _svc.post([this] { _a.close(); }); });
_a.listen();
do_accept();
_svc.run();
}
private:
void do_accept() {
_a.async_accept(_s, [this](error_code ec) {
if (!ec) {
std::make_shared<Connection>(std::move(_s), _poster)->process();
do_accept();
}
});
}
unsigned short _port;
PostRequest _poster;
ba::io_service _svc;
ba::high_resolution_timer _stop { _svc };
tcp::acceptor _a { _svc, tcp::endpoint {{}, _port } };
tcp::socket _s { _svc };
};
}
The only "connection" to the work service part is the PostRequest handler that is passed to the server at construction:
Network::Server server(6767, handler);
I've also opted for async operations, so we can have a timer to stop the service, even though we do not use any threads:
server.run_for(3s); // this blocks
The Work Part
This is completely separate, and will use threads. First, let's define a Request, and a thread-safe Queue:
namespace Service {
struct Request {
std::vector<char> data; // or whatever you read from the sockets...
};
Request parse_request(std::istream& is) {
Request result;
result.data.assign(std::istream_iterator<char>(is), {});
return result;
}
struct Queue {
Queue(size_t max = 50) : _max(max) {}
void enqueue(Request req) {
std::unique_lock<std::mutex> lk(mx);
cv.wait(lk, [this] { return _queue.size() < _max; });
_queue.push_back(std::move(req));
cv.notify_one();
}
Request dequeue(Clock::time_point deadline) {
Request req;
{
std::unique_lock<std::mutex> lk(mx);
_peak = std::max(_peak, _queue.size());
if (cv.wait_until(lk, deadline, [this] { return _queue.size() > 0; })) {
req = std::move(_queue.front());
_queue.pop_front();
cv.notify_one();
} else {
throw std::range_error("dequeue deadline");
}
}
return req;
}
size_t peak_depth() const {
std::lock_guard<std::mutex> lk(mx);
return _peak;
}
private:
mutable std::mutex mx;
mutable std::condition_variable cv;
size_t _max = 50;
size_t _peak = 0;
std::deque<Request> _queue;
};
This is nothing special, and doesn't actually use threads yet. Let's make a worker function that accepts a reference to a queue (more than 1 worker can be started if so desired):
void worker(std::string name, Queue& queue, Clock::duration d = 30s) {
auto const deadline = Clock::now() + d;
while(true) try {
auto r = queue.dequeue(deadline);
(std::cout << "Worker " << name << " handling request '").write(r.data.data(), r.data.size()) << "'\n";
}
catch(std::exception const& e) {
std::cout << "Worker " << name << " got " << e.what() << "\n";
break;
}
}
}
The main Driver
Here's where the Queue gets instantiated and both the network server as well as some worker threads are started:
int main() {
Service::Queue queue;
auto handler = [&](std::istream& is) {
queue.enqueue(Service::parse_request(is));
};
Network::Server server(6767, handler);
std::vector<std::thread> pool;
pool.emplace_back([&queue] { Service::worker("one", queue, 6s); });
pool.emplace_back([&queue] { Service::worker("two", queue, 6s); });
server.run_for(3s); // this blocks
for (auto& thread : pool)
if (thread.joinable())
thread.join();
std::cout << "Maximum queue depth was " << queue.peak_depth() << "\n";
}
Live Demo
See It Live On Coliru
With a test load looking like this:
for a in "hello world" "the quick" "brown fox" "jumped over" "the pangram" "bye world"
do
netcat 127.0.0.1 6767 <<< "$a" || echo "not sent: '$a'"&
done
wait
It prints something like:
Worker one handling request 'brownfox'
Worker one handling request 'thepangram'
Worker one handling request 'jumpedover'
Worker two handling request 'Worker helloworldone handling request 'byeworld'
Worker one handling request 'thequick'
'
Worker one got dequeue deadline
Worker two got dequeue deadline
Maximum queue depth was 6
The includes you need. Some may be unnecessary:
boost/asio.hpp, boost/thread.hpp, boost/asio/io_service.hpp
boost/asio/spawn.hpp, boost/asio/write.hpp, boost/asio/buffer.hpp
boost/asio/ip/tcp.hpp, iostream, stdlib.h, array, string
vector, string.h, stdio.h, process.h, iterator
using namespace boost::asio;
using namespace boost::asio::ip;
io_service ioservice;
tcp::endpoint sim_endpoint{ tcp::v4(), 4066 }; //{which connectiontype, portnumber}
tcp::acceptor sim_acceptor{ ioservice, sim_endpoint };
std::vector<tcp::socket> sim_sockets;
static int iErgebnis;
int iSocket = 0;
void do_write(int a) //int a is the postion of the socket in the vector
{
int iWSchleife = 1; //to stay connected with putty or something
static char chData[32000];
std::string sBuf = "Received!\r\n";
while (iWSchleife > 0)
{
boost::system::error_code error;
memset(chData, 0, sizeof(chData)); //clear the char
iErgebnis = sim_sockets[a].read_some(boost::asio::buffer(chData), error); //recv data from client
iWSchleife = iErgebnis; //if iErgebnis is bigger then 0 it will stay in the loop. iErgebniss is always >0 when data is received
if (iErgebnis > 0) {
printf("%d data received from client : \n%s\n\n", iErgebnis, chData);
write(sim_sockets[a], boost::asio::buffer(sBuf), error); //send data to client
}
else {
boost::system::error_code ec;
sim_sockets[a].shutdown(boost::asio::ip::tcp::socket::shutdown_send, ec); //close the socket when no data
if (ec)
{
printf("studown error"); // An error occurred.
}
}
}
}
void do_accept(yield_context yield)
{
while (1) //endless loop to accept limitless clients
{
sim_sockets.emplace_back(ioservice); //look to the link below for more info
sim_acceptor.async_accept(sim_sockets.back(), yield); //waits here to accept an client
boost::thread dosome(do_write, iSocket); //when accepted, starts the thread do_write and passes the parameter iSocket
iSocket++; //to know the position of the socket in the vector
}
}
int main()
{
sim_acceptor.listen();
spawn(ioservice, do_accept); //here you can learn more about Coroutines https://theboostcpplibraries.com/boost.coroutine
ioservice.run(); //from here you jump to do_accept
getchar();
}