asio async_send memory leak - c++

I have the following snippet:
void TcpConnection::Send(const std::vector<uint8_t>& buffer) {
    std::shared_ptr<std::vector<uint8_t>> bufferCopy = std::make_shared<std::vector<uint8_t>>(buffer);

    auto socket = m_socket;
    m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()),
        [socket, bufferCopy](const boost::system::error_code& err, size_t bytesSent)
        {
            if (err)
            {
                logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
                // Assume that the communications path is no longer
                // valid.
                socket->close();
            }
        });
}
This code leads to a memory leak. If the m_socket->async_send call is commented out, there is no memory leak. I cannot understand why bufferCopy is not freed after the callback is dispatched. What am I doing wrong?
Windows is used.

Since you don't show any relevant code, and the code shown does not contain a strict problem, I'm going to reason from the code smells.
The smell is that you have a TcpConnection class that is not derived from enable_shared_from_this<TcpConnection>. This leads me to suspect you didn't plan ahead, because there's no reasonable way to continue using the instance after the completion of any asynchronous operation (like the async_send).
This leads me to suspect you have a crucially simple problem: your completion handler never runs. There's only one situation that could explain this, and that leads me to assume you never run() the io_service instance.
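To illustrate that point with a minimal standalone sketch (not your code, just plain Boost.Asio): completion handlers only ever run on a thread that is executing run() (or poll()) on the service. Without that call, the handler, and the shared_ptrs it captures, simply sit in the queue:

#include <boost/asio.hpp>
#include <iostream>

int main() {
    boost::asio::io_service svc;
    svc.post([] { std::cout << "handler ran\n"; });

    // Comment out the next line and the handler (plus anything it captures)
    // is only released when `svc` itself is destructed - which is exactly
    // what an allocation tracker will report as a "leak":
    svc.run(); // prints "handler ran"
}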
Here's the situation live:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;

#include <iostream>
auto& logwarning = std::clog;

struct TcpConnection {
    using Buffer = std::vector<uint8_t>;
    void Send(Buffer const&);

    TcpConnection(asio::io_service& svc) : m_socket(std::make_shared<tcp::socket>(svc)) {}

    tcp::socket& socket() const { return *m_socket; }

  private:
    std::shared_ptr<tcp::socket> m_socket;
};

void TcpConnection::Send(Buffer const& buffer) {
    auto bufferCopy = std::make_shared<Buffer>(buffer);

    auto socket = m_socket;
    m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()),
        [socket, bufferCopy](const boost::system::error_code& err, size_t /*bytesSent*/) {
            if (err) {
                logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
                // Assume that the communications path is no longer
                // valid.
                socket->close();
            }
        });
}

int main() {
    asio::io_service svc;
    tcp::acceptor a(svc, tcp::v4());
    a.bind({{}, 6767});
    a.listen();

    boost::system::error_code ec;
    do {
        TcpConnection conn(svc);
        a.accept(conn.socket(), ec);

        char const* greeting = "whale hello there!\n";
        conn.Send({greeting, greeting + strlen(greeting)});
    } while (!ec);
}
You'll see that any client connection (e.g. with netcat localhost 6767) will receive the greeting, after which, surprisingly, the connection stays open instead of being closed.
You'd expect the connection to be closed by the server side either way, either because
a transmission error occurred in async_send,
or because after the completion handler is run, it is destroyed, and hence the captured shared pointers are destructed. Not only would that free the copied buffer, but it would also run the destructor of socket, which would close the connection.
This clearly confirms that the completion handler never runs. The fix is "easy": find a place to run the service:
int main() {
    asio::io_service svc;
    tcp::acceptor a(svc, tcp::v4());
    a.set_option(tcp::acceptor::reuse_address());
    a.bind({{}, 6767});
    a.listen();

    std::thread th;
    {
        asio::io_service::work keep(svc); // prevent service running out of work early
        th = std::thread([&svc] { svc.run(); });

        boost::system::error_code ec;
        for (int i = 0; i < 11 && !ec; ++i) {
            TcpConnection conn(svc);
            a.accept(conn.socket(), ec);

            char const* greeting = "whale hello there!\n";
            conn.Send({greeting, greeting + strlen(greeting)});
        }
    }
    th.join();
}
This runs 11 connections and exits leak-free.
Better:
It becomes a lot cleaner when the accept loop is also async, and the TcpConnection is properly shared as hinted above:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;

#include <memory>
#include <thread>
#include <iostream>
auto& logwarning = std::clog;

struct TcpConnection : std::enable_shared_from_this<TcpConnection> {
    using Buffer = std::vector<uint8_t>;

    TcpConnection(asio::io_service& svc) : m_socket(svc) {}

    void start() {
        char const* greeting = "whale hello there!\n";
        Send({greeting, greeting + strlen(greeting)});
    }

    void Send(Buffer);

  private:
    friend struct Server;
    Buffer m_output;
    tcp::socket m_socket;
};

struct Server {
    Server(unsigned short port) {
        _acceptor.set_option(tcp::acceptor::reuse_address());
        _acceptor.bind({{}, port});
        _acceptor.listen();

        do_accept();
    }

    ~Server() {
        keep.reset();
        _svc.post([this] { _acceptor.cancel(); });
        if (th.joinable())
            th.join();
    }

  private:
    void do_accept() {
        auto conn = std::make_shared<TcpConnection>(_svc);
        _acceptor.async_accept(conn->m_socket, [this, conn](boost::system::error_code ec) {
            if (ec)
                logwarning << "accept failed: " << ec.message() << "\n";
            else {
                conn->start();
                do_accept();
            }
        });
    }

    asio::io_service _svc;
    // prevent service running out of work early:
    std::unique_ptr<asio::io_service::work> keep{std::make_unique<asio::io_service::work>(_svc)};
    std::thread th{[this] { _svc.run(); }}; // TODO handle handler exceptions

    tcp::acceptor _acceptor{_svc, tcp::v4()};
};

void TcpConnection::Send(Buffer buffer) {
    m_output = std::move(buffer);

    auto self = shared_from_this();
    m_socket.async_send(asio::buffer(m_output),
        [self](const boost::system::error_code& err, size_t /*bytesSent*/) {
            if (err) {
                logwarning << "clientcomms_t::sendNext encountered error: " << err.message() << "\n";
                // not holding on to `self` means the socket gets closed
            }

            // do more with `self` which points to the TcpConnection instance...
        });
}

int main() {
    Server server(6868);
    std::this_thread::sleep_for(std::chrono::seconds(3));
}

boost::asio - the acceptor doesn't call the callback after the server is stopped and started again

I've created a simple wrapper for the boost::asio library. My wrapper consists of 4 main classes: NetServer (server), NetClient (client), NetSession (client/server session) and Network (a composition class of these three which also includes all callback methods).
The problem is that the first client/server connection works flawlessly, but when I then stop the server, start it again and try to connect the client, the server just doesn't recognize the client. It seems like the acceptor callback isn't called. And the client does connect to the server: first, the connection completes without errors; second, when I close the server's program, the client receives the error message WSAECONNRESET.
I've created a test program which emulates the procedure described above. It does the following:
1. Starts the server
2. Starts the client
3. Client successfully connects to the server
4. Stops the server
5. Client receives the error and disconnects itself
6. Starts the server again
7. Client again successfully connects to the server
8. BUT THE SERVER DOESN'T CALL THE ACCEPTOR CALLBACK ANYMORE
It means that in point 3 the acceptor successfully calls the callback function, but in point 7 the acceptor doesn't call the callback.
I think I'm doing something wrong in the server's stop()/start() methods, but I can't figure out exactly what.
The source of the NetServer class:
NetServer::NetServer(Network& netRef) : net{ netRef }
{
    acceptor = std::make_unique<boost::asio::ip::tcp::acceptor>(ioc);
}

NetServer::~NetServer(void)
{
    ioc.stop();
    if (threadStarted)
    {
        th.join();
        threadStarted = false;
    }
    if (active)
        stop();
}

int NetServer::start(void)
{
    assert(getAcceptHandler() != nullptr);
    assert(getHeaderHandler() != nullptr);
    assert(getDataHandler() != nullptr);
    assert(getErrorHandler() != nullptr);

    closeAll();

    try
    {
        ep = boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), srvPort);
        acceptor->open(ep.protocol());
        acceptor->bind(ep);
        acceptor->listen();
        initAccept();
    }
    catch (system::system_error& e)
    {
        return e.code().value();
    }

    if (!threadStarted)
    {
        th = std::thread([this]()
        {
            ioc.run();
        });
        threadStarted = true;
    }

    active = true;
    return Network::NET_OK;
}

int NetServer::stop(void)
{
    ioc.post(boost::bind(&NetServer::_stop, this));
    return Network::NET_OK;
}

void NetServer::_stop(void)
{
    boost::system::error_code ec;
    acceptor->close(ec);
    for (auto& s : sessions)
        closeSession(s.get(), false);
    active = false;
}

void NetServer::initAccept(void)
{
    sock = std::make_shared<asio::ip::tcp::socket>(ioc);
    acceptor->async_accept(*sock.get(), [this](const boost::system::error_code& error)
    {
        onAccept(error, sock);
    });
}

void NetServer::onAccept(const boost::system::error_code& ec, SocketSharedPtr sock)
{
    if (ec.value() == 0)
    {
        if (accHandler())
        {
            addSession(sock);
            initAccept();
        }
    }
    else
        getErrorHandler()(nullptr, ec);
}

SessionPtr NetServer::addSession(SocketSharedPtr sock)
{
    std::lock_guard<std::mutex> guard(mtxSession);
    auto session = std::make_shared<NetSession>(sock, *this, true);
    sessions.insert(session);
    session->start();
    return session;
}

SessionPtr NetServer::findSession(const SessionPtr session)
{
    for (auto it = std::begin(sessions); it != std::end(sessions); it++)
        if (*it == session)
            return *it;
    return nullptr;
}

bool NetServer::closeSession(const void* session, bool erase /* = true */)
{
    std::lock_guard<std::mutex> guard(mtxSession);
    for (auto it = std::begin(sessions); it != std::end(sessions); it++)
        if (it->get() == session)
        {
            try
            {
                it->get()->getSocket()->cancel();
                it->get()->getSocket()->shutdown(asio::socket_base::shutdown_send);
                it->get()->getSocket()->close();
                it->get()->getSocket().reset();
            }
            catch (system::system_error& e)
            {
                UNREFERENCED_PARAMETER(e);
            }

            if (erase)
                sessions.erase(*it);
            return true;
        }
    return false;
}

void NetServer::closeAll(void)
{
    using namespace boost::placeholders;
    std::lock_guard<std::mutex> guard(mtxSession);
    std::for_each(sessions.begin(), sessions.end(), boost::bind(&NetSession::stop, _1));
    sessions.clear();
}

bool NetServer::write(const SessionPtr session, std::string msg)
{
    if (SessionPtr s = findSession(session); s)
    {
        s->addMessage(msg);
        if (s->canWrite())
            s->write();
        return true;
    }
    return false;
}
This is the output from the server:
Enter 0 - server, 1 - client: 0
1. Server started
3. Client connected to server
Stopping server....
4. Server stopped
Net error, server, acceptor: ERROR_OPERATION_ABORTED
Net error, server, ERROR_OPERATION_ABORTED
Client session deleted
6. Server started again
(HERE SHOULD BE "8. Client again connected to server", but the server didn't recognize the reconnected client!)
And from the client:
Enter 0 - server, 1 - client: 1
2. Client started and connected to server
Net error, client: ERROR_FILE_NOT_FOUND
5. Client disconnected from server
Waiting 3 sec before reconnect...
Connecting to server...
7. Client started and connected to server
(WHEN I CLOSE THE SERVER WINDOW, I RECEIVE HERE THE "Net error, client: WSAECONNRESET" MESSAGE - it means the client was connected to the server anyhow!)
If the code of NetClient, NetSession and Network is necessary, just let me know.
Thanks in advance
Wow. There's a lot to unpack. There is quite a lot of code smell that reminds me of some books on Asio programming that turned out to be... not excellent, in my previous experience.
I couldn't give any real advice without grokking your code, which required me to review it in depth and add missing bits. So let me just provide you with my reviewed/fixed code first; then we'll talk about some of the details.
A few areas where you seemed to have trouble making up your mind:
whether to use a strand or to use mutex locking (see the strand sketch before the demo listing below)
whether to use async or sync (e.g. closeSession is completely synchronous and blocking)
whether to use shared pointers for lifetime or not: on the one hand you have NetSession support shared_from_this, but on the other hand you are keeping them alive in a sessions collection
whether to use smart pointers or raw pointers (sp.get() is a code smell)
whether to use void* pointers or forward-declared structs for opaque implementation
whether to use exceptions or to use error codes. Specifically:
return e.code().value();
is a Very Bad Idea. Just return error_code already. Or just propagate the exception.
Judging from the use, my best bet is that sessions is a std::set<SessionPtr>. Then it's funny that you're doing linear searches. In fact, findSession could be:
SessionPtr findSession(SessionPtr const& session) {
    std::lock_guard guard(mtxSessions);
    return sessions.contains(session) ? session : nullptr;
}
In fact, given some natural invariants, it could just be
auto findSession(SessionPtr s) { return std::move(s); }
Note as well, you had forgotten to lock the mutex in findSession
closeSession completely violates the Law of Demeter, 6*3 times over if you will. In my example I make it so SessHandle is a weak pointer to NetSession, and you can just write:
for (auto& handle : sessions)
    if (auto sess = handle.lock())
        sess->close();
Of course, sess->close() should not block.
Also, it should correctly synchronize on the session, e.g. using the session's strand:
void close() {
    return post(sock_.get_executor(), [this, self = shared_from_this()] {
        error_code ec;
        if (!ec) sock_.cancel(ec);
        if (!ec) sock_.shutdown(tcp::socket::shutdown_send, ec);
        if (!ec) sock_.close(ec);
    });
}
If you insist, you can make it so the caller can still await the result and receive any exceptions:
std::future<void> close() {
    return post(
        sock_.get_executor(),
        std::packaged_task<void()>{[this, self = shared_from_this()] {
            sock_.cancel();
            sock_.shutdown(tcp::socket::shutdown_send);
            sock_.close();
        }});
}
Honestly, that seems overkill since you never look at the return value anyway.
In general, I recommend leaving socket::close() to the destructor. It avoids a specific class of race-conditions on socket handles.
Don't use boolean flags (isThreadActive is better replaced with th.joinable())
Apparently you had NetSession::stop, which I imagine did largely the same as closeSession but in the right place? I replaced it with the new NetSession::close.
Subtly, when accHandler returned false, you would exit the accept loop altogether. I doubt that was on purpose.
try to minimize time under locks;
I will show you how to do without the lock entirely instead.
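To make that concrete before the full listing, here is a minimal standalone sketch (illustrative names, not from the reviewed code) of the strand-instead-of-mutex idea from the first bullet above: every access to the shared container is posted to one strand, so no lock is needed.

#include <boost/asio.hpp>
#include <set>

namespace asio = boost::asio;

struct Registry {
    explicit Registry(asio::io_context& ioc) : strand_(make_strand(ioc)) {}

    void add(int id) {
        // handlers posted to the same strand never run concurrently,
        // so `sessions_` needs no mutex
        post(strand_, [this, id] { sessions_.insert(id); });
    }

    void remove(int id) {
        post(strand_, [this, id] { sessions_.erase(id); });
    }

  private:
    asio::strand<asio::io_context::executor_type> strand_;
    std::set<int> sessions_; // only ever touched from strand_
};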
Demo Listing
#include <boost/asio.hpp>
#include <boost/system/error_code.hpp>
#include <cassert>
#include <cstdint>
#include <deque>
#include <functional>
#include <future>
#include <iomanip>
#include <iostream>
#include <memory>
#include <set>
#include <thread>

using namespace std::chrono_literals;
using namespace std::placeholders;
namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;

static inline std::ostream debug(std::cerr.rdbuf());

struct Network {
    static constexpr error_code NET_OK{};
};

struct NetSession; // opaque forward reference
struct NetServer;

using SessHandle = std::weak_ptr<NetSession>; // opaque handle
using Sessions   = std::set<SessHandle, std::owner_less<>>;

struct NetSession : std::enable_shared_from_this<NetSession> {
    NetSession(tcp::socket&& s, NetServer& srv, bool)
        : sock_(std::move(s))
        , srv_(srv) {
        debug << "New session from " << getPeer() << std::endl;
    }

    void start() {
        post(sock_.get_executor(),
             std::bind(&NetSession::do_read, shared_from_this()));
    }

    tcp::endpoint getPeer() const { return peer_; }

    void close() {
        return post(sock_.get_executor(), [this, self = shared_from_this()] {
            debug << "Closing " << getPeer() << std::endl;
            error_code ec;
            if (!ec) sock_.cancel(ec);
            if (!ec) sock_.shutdown(tcp::socket::shutdown_send, ec);
            // if (!ec) sock_.close(ec);
        });
    }

    void addMessage(std::string msg) {
        post(sock_.get_executor(),
             [this, msg = std::move(msg), self = shared_from_this()] {
                 outgoing_.push_back(std::move(msg));
                 if (canWrite())
                     write_loop();
             });
    }

  private:
    // assumed on (logical) strand
    bool canWrite() const { // FIXME misnomer: shouldStartWriteLoop()?
        return outgoing_.size() == 1;
    }

    void write_loop() {
        if (outgoing_.empty())
            return;

        async_write(sock_, asio::buffer(outgoing_.front()),
                    [this, self = shared_from_this()](error_code ec, size_t) {
                        if (!ec) {
                            outgoing_.pop_front();
                            write_loop();
                        }
                    });
    }

    void do_read() {
        incoming_.clear();
        async_read_until(
            sock_, asio::dynamic_buffer(incoming_), "\n",
            std::bind(&NetSession::on_read, shared_from_this(), _1, _2));
    }

    void on_read(error_code ec, size_t);

    tcp::socket   sock_;
    tcp::endpoint peer_ = sock_.remote_endpoint();
    NetServer&    srv_;

    std::string             incoming_;
    std::deque<std::string> outgoing_;
};

using SessionPtr      = std::shared_ptr<NetSession>;
using SocketSharedPtr = std::shared_ptr<tcp::socket>;

struct NetServer {
    NetServer(Network& netRef) : net{netRef} {}

    ~NetServer()
    {
        if (acceptor.is_open())
            acceptor.cancel(); // TODO seems pretty redundant
        stop();
        if (th.joinable())
            th.join();
    }

    std::function<bool()> accHandler;
    std::function<void(SocketSharedPtr, error_code)> errHandler;
    // TODO headerHandler
    std::function<void(SessionPtr, error_code, std::string)> dataHandler;

    error_code start() {
        assert(accHandler);
        assert(errHandler);
        assert(dataHandler);

        closeAll(sessions);

        error_code ec;
        if (!ec) acceptor.open(tcp::v4(), ec);
        if (!ec) acceptor.bind({{}, srvPort}, ec);
        if (!ec) acceptor.listen(tcp::socket::max_listen_connections, ec);

        if (!ec) {
            do_accept();

            if (!th.joinable()) {
                th = std::thread([this] { ioc.run(); }); // TODO exceptions!
            }
        }

        if (ec && acceptor.is_open())
            acceptor.close();

        return ec;
    }

    void stop() { //
        post(ioc, std::bind(&NetServer::do_stop, this));
    }

    void closeSession(SessHandle handle, bool erase = true) {
        post(acceptor.get_executor(), [=, this] {
            if (auto s = handle.lock()) {
                s->close();
            }
            if (erase) {
                sessions.erase(handle);
            }
        });
    }

    void closeAll() {
        post(acceptor.get_executor(), [this] {
            closeAll(sessions);
            sessions.clear();
        });
    }

    // TODO FIXME is the return value worth it?
    bool write(SessionPtr const& session, std::string msg) {
        return post(acceptor.get_executor(),
                    std::packaged_task<bool()>{std::bind(
                        &NetServer::do_write, this, session, std::move(msg))})
            .get();
    }

    // compare
    void writeAll(std::string msg) {
        post(acceptor.get_executor(),
             std::bind(&NetServer::do_write_all, this, std::move(msg)));
    }

  private:
    Network&         net;
    asio::io_context ioc;
    tcp::acceptor    acceptor{ioc}; // active -> acceptor.is_open()
    std::thread      th;            // threadActive -> th.joinable()
    Sessions         sessions;
    std::uint16_t    srvPort = 8989;
    // std::mutex mtxSessions; // note naming; also replaced by logical strand

    // assumed on acceptor logical strand
    void do_accept() {
        acceptor.async_accept(
            make_strand(ioc), [this](error_code ec, tcp::socket sock) {
                if (ec.failed()) {
                    return errHandler(nullptr, ec);
                }

                if (accHandler()) {
                    auto s = std::make_shared<NetSession>(std::move(sock),
                                                          *this, true);
                    sessions.insert(s);
                    s->start();
                }

                do_accept();
            });
    }

    SessionPtr do_findSession(SessionPtr const& session) {
        return sessions.contains(session) ? session : nullptr;
    }

    bool do_write(SessionPtr session, std::string msg) {
        if (auto s = do_findSession(session)) {
            s->addMessage(std::move(msg));
            return true;
        }
        return false;
    }

    void do_write_all(std::string msg) {
        for (auto& handle : sessions)
            if (auto sess = handle.lock())
                do_write(sess, msg);
    }

    static void closeAll(Sessions const& sessions) {
        for (auto& handle : sessions)
            if (auto sess = handle.lock())
                sess->close();
    }

    void do_stop()
    {
        if (acceptor.is_open()) {
            error_code ec;
            acceptor.close(ec); // TODO error handling?
        }

        closeAll(sessions); // TODO FIXME why not clear sessions?
    }
};

// Implementation must be after NetServer definition:
void NetSession::on_read(error_code ec, size_t) {
    if (srv_.dataHandler)
        srv_.dataHandler(shared_from_this(), ec, std::move(incoming_));
    if (!ec)
        do_read();
}

int main() {
    Network   net;
    NetServer srv{net};

    srv.accHandler = [] { return true; };

    srv.errHandler = [](SocketSharedPtr, error_code ec) {
        debug << "errHandler: " << ec.message() << std::endl;
    };

    srv.dataHandler = [](SessionPtr sess, error_code ec, std::string msg) {
        debug << "dataHandler: " << sess->getPeer() << " " << ec.message()
              << " " << std::quoted(msg) << std::endl;
    };

    srv.start();
    std::this_thread::sleep_for(10s);

    std::cout << "Shutdown started" << std::endl;
    srv.writeAll("We're going to shutdown, take care!\n");
    srv.stop();
}

Boost TCP client to connect to multiple servers

I want my TCP client to connect to multiple servers (each server has a separate IP and port).
I am using async_connect. I can successfully connect to different servers, but the read/write fails since the server's corresponding tcp::socket object is not available.
Can you please suggest how I could store each server's socket in some data structure? I tried saving the (IP, socket) pairs in a std::map, but the first server's socket object is not available in memory and the app crashes. I tried making the socket static, but it does not help either.
Please help me!!
Also, I hope I am logically correct in making a single TCP client connect to 2 different servers.
I am sharing below the simplified header & cpp file.
class TCPClient : public Socket
{
public:
    TCPClient(boost::asio::io_service& io_service,
              boost::asio::ip::tcp::endpoint ep);
    virtual ~TCPClient();

    void Connect(boost::asio::ip::tcp::endpoint ep,
                 boost::asio::io_service& ioService,
                 void (Comm::*SaveClientDetails)(std::string, void*),
                 void* pClassInstance);

    void TransmitData(const INT8* pi8Buffer);
    void HandleWrite(const boost::system::error_code& err,
                     size_t szBytesTransferred);
    void HandleConnect(const boost::system::error_code& err,
                       void (Comm::*SaveClientDetails)(std::string, void*),
                       void* pClassInstance, std::string sIPAddr);

    static tcp::socket* CreateSocket(boost::asio::io_service& ioService)
    { return new tcp::socket(ioService); }

    static tcp::socket* mSocket;

private:
    std::string sMsgRead;
    INT8 i8Data[MAX_BUFFER_LENGTH];
    std::string sMsg;
    boost::asio::deadline_timer mTimer;
};

tcp::socket* TCPClient::mSocket = NULL;

TCPClient::TCPClient(boost::asio::io_service& ioService,
                     boost::asio::ip::tcp::endpoint ep) :
    mTimer(ioService)
{
}

void TCPClient::Connect(boost::asio::ip::tcp::endpoint ep,
                        boost::asio::io_service& ioService,
                        void (Comm::*SaveServerDetails)(std::string, void*),
                        void* pClassInstance)
{
    mSocket = CreateSocket(ioService);
    std::string sIPAddr = ep.address().to_string();

    /* To send connection request to server*/
    mSocket->async_connect(ep, boost::bind(&TCPClient::HandleConnect, this,
                                           boost::asio::placeholders::error, SaveServerDetails,
                                           pClassInstance, sIPAddr));
}

void TCPClient::HandleConnect(const boost::system::error_code& err,
                              void (Comm::*SaveServerDetails)(std::string, void*),
                              void* pClassInstance, std::string sIPAddr)
{
    if (!err)
    {
        Comm* pInstance = (Comm*) pClassInstance;
        if (NULL == pInstance)
        {
            return;
        }
        (pInstance->*SaveServerDetails)(sIPAddr, (void*)(mSocket));
    }
    else
    {
        return;
    }
}

void TCPClient::TransmitData(const INT8* pi8Buffer)
{
    sMsg = pi8Buffer;
    if (sMsg.empty())
    {
        return;
    }

    mSocket->async_write_some(boost::asio::buffer(sMsg, MAX_BUFFER_LENGTH),
                              boost::bind(&TCPClient::HandleWrite, this,
                                          boost::asio::placeholders::error,
                                          boost::asio::placeholders::bytes_transferred));
}

void TCPClient::HandleWrite(const boost::system::error_code& err,
                            size_t szBytesTransferred)
{
    if (!err)
    {
        std::cout << "Data written to TCP Client port! ";
    }
    else
    {
        return;
    }
}
You seem to know your problem: the socket object is unavailable. That's 100% by choice. You chose to make it static, so of course there will be only one instance.
Also, I hope I am logically correct in making a single TCP client connect to 2 different servers.
It sounds wrong to me. You can redefine "client" to mean something having multiple TCP connections. In that case, at the very minimum, you expect a container of tcp::socket objects to hold those (or, you know, a Connection object that contains the tcp::socket).
BONUS: Demo
For fun and glory, here's what I think you should be looking for.
Notes:
no more new, delete
no more void*, reinterpret casts (!!!)
less manual buffer sizing/handling
no more bind
buffer lifetimes are guaranteed for the corresponding async operations
message queues per connection
connections are on a strand for proper synchronized access to shared state in multi-threading environments
I added a connection max-idle-time timeout; it also limits the time taken for any async operation (connect/write). I assumed you wanted something like this because (a) it's common and (b) there was an unused deadline_timer in your question code
Note the technique of using shared pointers to have Comm manage its own lifetime. Note also that _socket and _outbox are owned by the individual Comm instance.
Live On Coliru
#include <boost/asio.hpp>
#include <deque>
#include <functional>
#include <iostream>
#include <memory>
#include <string_view>
#include <thread>

using INT8 = char;
using boost::asio::ip::tcp;
using boost::system::error_code;
// using SaveFunc = std::function<void(std::string, void*)>; // TODO abolish void*
using namespace std::chrono_literals;
using duration = std::chrono::high_resolution_clock::duration;

static inline constexpr size_t MAX_BUFFER_LENGTH = 1024;

using Handle = std::weak_ptr<class Comm>;

class Comm : public std::enable_shared_from_this<Comm> {
  public:
    template <typename Executor>
    explicit Comm(Executor ex, tcp::endpoint ep, // ex assumed to be strand
                  duration max_idle)
        : _ep(ep)
        , _max_idle(max_idle)
        , _socket{ex}
        , _timer{_socket.get_executor()}
    {
    }

    ~Comm() { std::cerr << "Comm closed (" << _ep << ")\n"; }

    void Start() {
        post(_socket.get_executor(), [this, self = shared_from_this()] {
            _socket.async_connect(
                _ep, [this, self = shared_from_this()](error_code ec) {
                    std::cerr << "Connect: " << ec.message() << std::endl;
                    if (!ec)
                        DoIdle();
                    else
                        _timer.cancel();
                });
            DoIdle();
        });
    }

    void Stop() {
        post(_socket.get_executor(), [this, self = shared_from_this()] {
            if (not _outbox.empty())
                std::cerr << "Warning: some messages may be undelivered ("
                          << _ep << ")" << std::endl;
            _socket.cancel();
            _timer.cancel();
        });
    }

    void TransmitData(std::string_view msg) {
        post(_socket.get_executor(),
             [this, self = shared_from_this(),
              msg = std::string(msg.substr(0, MAX_BUFFER_LENGTH))] {
                 _outbox.emplace_back(std::move(msg));
                 if (_outbox.size() == 1) { // no send loop already active?
                     DoSendLoop();
                 }
             });
    }

  private:
    // The DoXXXX functions are assumed to be on the strand

    void DoSendLoop() {
        DoIdle(); // restart max_idle even after last successful send
        if (_outbox.empty())
            return;

        boost::asio::async_write(
            _socket, boost::asio::buffer(_outbox.front()),
            [this, self = shared_from_this()](error_code ec, size_t xfr) {
                std::cerr << "Write " << xfr << " bytes to " << _ep << " "
                          << ec.message() << std::endl;
                if (!ec) {
                    _outbox.pop_front();
                    DoSendLoop();
                } else
                    _timer.cancel(); // causes Comm shutdown
            });
    }

    void DoIdle() {
        _timer.expires_from_now(_max_idle); // cancels any pending wait
        _timer.async_wait([this, self = shared_from_this()](error_code ec) {
            if (!ec) {
                std::cerr << "Timeout" << std::endl;
                _socket.cancel();
            }
        });
    }

    tcp::endpoint _ep;
    duration      _max_idle;
    tcp::socket   _socket;
    boost::asio::high_resolution_timer _timer;
    std::deque<std::string>            _outbox;
};

class TCPClient {
    boost::asio::any_io_executor _ex;
    std::deque<Handle>           _comms;

  public:
    TCPClient(boost::asio::any_io_executor ex) : _ex(ex) {}

    void Add(tcp::endpoint ep, duration max_idle = 3s)
    {
        auto pcomm = std::make_shared<Comm>(make_strand(_ex), ep, max_idle);
        pcomm->Start();
        _comms.push_back(pcomm);

        // optionally garbage collect expired handles:
        std::erase_if(_comms, std::mem_fn(&Handle::expired));
    }

    void TransmitData(std::string_view msg) {
        for (auto& handle : _comms)
            if (auto pcomm = handle.lock())
                pcomm->TransmitData(msg);
    }

    void Stop() {
        for (auto& handle : _comms)
            if (auto pcomm = handle.lock())
                pcomm->Stop();
    }
};

int main() {
    using std::this_thread::sleep_for;

    boost::asio::thread_pool ctx(1);
    TCPClient c(ctx.get_executor());

    c.Add({{}, 8989});
    c.Add({{}, 8990}, 1s); // shorter timeout for demo
    c.TransmitData("Hello world\n");

    c.Add({{}, 8991});
    sleep_for(2s); // times out second connection

    c.TransmitData("Three is a crowd\n"); // only delivered to 8989 and 8991

    sleep_for(1s); // allow for delivery
    c.Stop();
    ctx.join();
}
Prints (on Coliru):
for p in {8989..8991}; do netcat -t -l -p $p& done
sleep .5; ./a.out
Hello world
Connect: Success
Connect: Success
Hello world
Connect: Success
Write 12 bytes to 0.0.0.0:8989 Success
Write 12 bytes to 0.0.0.0:8990 Success
Timeout
Comm closed (0.0.0.0:8990)
Write Three is a crowd
17Three is a crowd
bytes to 0.0.0.0:8989 Success
Write 17 bytes to 0.0.0.0:8991 Success
Comm closed (0.0.0.0:8989)
Comm closed (0.0.0.0:8991)
The output is a little out of sequence there.

boost::asio::ip::tcp::acceptor terminates application when receiving connection request using async_accept

I want to make a simple server that listens for incoming connection requests, makes connections and sends some data. When I start it, the acceptor looks like it's working fine: it waits for incoming connection requests. But when my client tries to connect, the server crashes immediately. I can't even catch any exceptions with catch(...).
The terminal output looks normal when the program starts, but as soon as the client tries to connect, the server goes down and the client application receives an error code.
Is there something fundamentally wrong with my my_acceptor class?
class my_acceptor{
public:
    my_acceptor(asio::io_context& ios, unsigned short port_num) :
        m_ios(ios),
        port{port_num},
        m_acceptor{ios}{}

    //start accepting incoming connection requests
    void Start()
    {
        std::cout << "Acceptor Start" << std::endl;
        boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), port);
        m_acceptor.open(endpoint.protocol());
        m_acceptor.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
        m_acceptor.bind(endpoint);
        m_acceptor.listen();
        InitAccept();
    }

    void Stop(){}

private:
    void InitAccept()
    {
        std::cout << "Acceptor InitAccept" << std::endl;
        std::shared_ptr<asio::ip::tcp::socket> sock{new asio::ip::tcp::socket(m_ios)};
        m_acceptor.async_accept(*sock.get(),
            [this, sock](const boost::system::error_code& error)
            {
                onAccept(error, sock);
            });
    }

    void onAccept(const boost::system::error_code& ec, std::shared_ptr<asio::ip::tcp::socket> sock)
    {
        std::cout << "Acceptor onAccept" << std::endl;
    }

private:
    unsigned short port;
    asio::io_context& m_ios;
    asio::ip::tcp::acceptor m_acceptor;
};
Just in case, this is the Server code that wraps my_acceptor:
class Server{
public:
    Server(){}

    //start the server
    void Start(unsigned short port_num, unsigned int thread_pool_size)
    {
        assert(thread_pool_size > 0);

        //create specified number of threads and add them to the pool
        for(unsigned int i = 0; i < thread_pool_size; ++i)
        {
            std::unique_ptr<std::thread> th(
                new std::thread([this]()
                {
                    m_ios.run();
                }));
            m_thread_pool.push_back(std::move(th));
        }

        //create and start acceptor
        acc.reset(new my_acceptor(m_ios, port_num));
        acc->Start();
    }

    //stop the server
    void Stop()
    {
        work_guard.reset();
        acc->Stop();
        m_ios.stop();

        for(auto& th : m_thread_pool)
        {
            th->join();
        }
    }

private:
    asio::io_context m_ios;
    boost::asio::executor_work_guard<boost::asio::io_context::executor_type> work_guard = boost::asio::make_work_guard(m_ios);
    std::unique_ptr<my_acceptor> acc;
    std::vector<std::unique_ptr<std::thread>> m_thread_pool;
};
There's a threading bug, at least: tcp::acceptor is not thread-safe, and you (potentially) run multiple threads. So you will need to make all acceptor access happen on a strand.
my_acceptor(asio::io_context& ios, unsigned short port_num) :
    m_ios(ios),
    port{port_num},
    m_acceptor{make_strand(ios)}{}
And then any operation involving it must be on that strand. E.g., the missing Stop() code should look like:
void Stop(){
    post(m_acceptor.get_executor(), [this] { m_acceptor.cancel(); });
}
I leave the initial accept as-is because at that point there aren't multiple threads involved.
Likewise, in Start() and Stop() you should check whether acc is null, because acc->Stop() would throw, and simply replacing a running acc would cause Undefined Behaviour by deleting an instance that still has async operations in flight.
As a sidenote, m_ios.stop() should not be necessary if you stop the running acceptor. In the future you might have to signal any client connections to stop, so that the threads can join naturally.
Here's how I'd complete the accept loop:
void onAccept(error_code ec, std::shared_ptr<tcp::socket> sock)
{
    std::cout << "Acceptor onAccept " << ec.message() << " " << sock.get() << std::endl;
    if (!ec) {
        InitAccept();
    }
}
Note how unless the socket is canceled (or otherwise in error), we keep accepting.
I think the threading issue was likely your big problem. The result after my suggestions works:
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <thread>
using namespace std::chrono_literals;

namespace asio = boost::asio;
using boost::system::error_code;
using asio::ip::tcp;

class my_acceptor {
  public:
    my_acceptor(asio::io_context& ios, unsigned short port_num) :
        m_ios(ios),
        port{port_num},
        m_acceptor{make_strand(ios)}{}

    //start accepting incoming connection requests
    void Start()
    {
        std::cout << "Acceptor Start" << std::endl;
        tcp::endpoint endpoint(tcp::v4(), port);
        m_acceptor.open(endpoint.protocol());
        m_acceptor.set_option(tcp::acceptor::reuse_address(true));
        m_acceptor.bind(endpoint);
        m_acceptor.listen();
        InitAccept();
    }

    void Stop(){
        post(m_acceptor.get_executor(), [this] { m_acceptor.cancel(); });
    }

  private:
    void InitAccept()
    {
        std::cout << "Acceptor InitAccept" << std::endl;
        auto sock = std::make_shared<tcp::socket>(m_ios);
        m_acceptor.async_accept(*sock,
            [this, sock](error_code error) { onAccept(error, sock); });
    }

    void onAccept(error_code ec, const std::shared_ptr<tcp::socket>& sock)
    {
        std::cout << "Acceptor onAccept " << ec.message() << " " << sock.get() << std::endl;
        if (!ec) {
            InitAccept();
        }
    }

  private:
    asio::io_context& m_ios;
    unsigned short port;
    tcp::acceptor m_acceptor;
};

class Server{
  public:
    Server() = default;

    //start the server
    void Start(unsigned short port_num, unsigned int thread_pool_size)
    {
        assert(!acc); // otherwise UB results
        assert(thread_pool_size > 0);

        //create specified number of threads and add them to the pool
        for(unsigned int i = 0; i < thread_pool_size; ++i)
        {
            std::unique_ptr<std::thread> th(
                new std::thread([this]() { m_ios.run(); }));
            m_thread_pool.push_back(std::move(th));
        }

        //create and start acceptor
        acc = std::make_unique<my_acceptor>(m_ios, port_num);
        acc->Start();
    }

    //stop the server
    void Stop()
    {
        work_guard.reset();

        if (acc) {
            acc->Stop();
        }

        //m_ios.stop();

        for(auto& th : m_thread_pool) {
            th->join();
        }
        acc.reset();
    }

  private:
    asio::io_context m_ios;
    asio::executor_work_guard<asio::io_context::executor_type>
        work_guard = make_work_guard(m_ios);
    std::unique_ptr<my_acceptor> acc;
    std::vector<std::unique_ptr<std::thread>> m_thread_pool;
};

int main() {
    Server s;
    s.Start(6868, 1);

    std::this_thread::sleep_for(10s);
    s.Stop();
}
Testing with netcat as client:
for msg in one two three; do
    sleep 1
    nc 127.0.0.1 6868 <<< "$msg"
done
Prints
Acceptor Start
Acceptor InitAccept
Acceptor onAccept Success 0x1f26960
Acceptor InitAccept
Acceptor onAccept Success 0x7f59f80009d0
Acceptor InitAccept
Acceptor onAccept Success 0x7f59f8000a50
Acceptor InitAccept
Acceptor onAccept Operation canceled 0x7f59f80009d0

C++ Boost::ASIO: system error 995 after second call to io_context::run

I've got trouble with the following scenario using the asio 1.66.0 Windows implementation:
1. bind socket
2. run io_context
3. stop io_context
4. close socket
5. restart io_context
6. repeat 1-4
A call to io_context::run in the second iteration is followed by system error 995:
The I/O operation has been aborted because of either a thread exit or an application request
Looks like this error comes from the socket closure: asio uses PostQueuedCompletionStatus/GetQueuedCompletionStatus to signal itself that io_context::stop was called. But the I/O operation enqueued by WSARecvFrom in socket_.async_receive_from fails because the socket is closed, and in the next call to io_context::run that failure is the first thing delivered to the handler passed to socket_.async_receive_from.
Is this the intended behavior of asio's io_context? How do I avoid this error in sequential iterations?
If I stop io_context::run by closing the socket instead, everything works, except the same error still appears, which looks a little dirty.
Another odd thing: if I proceed with do_receive after receiving the error, I get as many errors as there were previous iterations, and only then do I receive data from the socket.
// based on boost_asio/example/cpp11/multicast/receiver.cpp
// https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/example/cpp11/multicast/receiver.cpp
#include <array>
#include <iostream>
#include <string>
#include <boost/asio.hpp>
#include <future>
#include <chrono>
#include <thread>

using namespace std::chrono_literals;

constexpr short multicast_port = 30001;

class receiver
{
public:
    explicit receiver(boost::asio::io_context& io_context) : socket_(io_context)
    {}

    ~receiver()
    {
        close();
    }

    void open(
        const boost::asio::ip::address& listen_address,
        const boost::asio::ip::address& multicast_address)
    {
        // Create the socket so that multiple may be bound to the same address.
        boost::asio::ip::udp::endpoint listen_endpoint(
            listen_address, multicast_port);
        socket_.open(listen_endpoint.protocol());
        socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
        socket_.bind(listen_endpoint);

        // Join the multicast group.
        socket_.set_option(
            boost::asio::ip::multicast::join_group(multicast_address));

        do_receive();
    }

    void close()
    {
        if (socket_.is_open())
        {
            socket_.close();
        }
    }

private:
    void do_receive()
    {
        socket_.async_receive_from(
            boost::asio::buffer(data_), sender_endpoint_,
            [this](boost::system::error_code ec, std::size_t length)
            {
                if (!ec)
                {
                    std::cout.write(data_.data(), length);
                    std::cout << std::endl;
                    do_receive();
                }
                else
                {
                    // A call to io_context::run in the second iteration is
                    // followed by system error 995
                    std::cout << ec.message() << std::endl;
                }
            });
    }

    boost::asio::ip::udp::socket socket_;
    boost::asio::ip::udp::endpoint sender_endpoint_;
    std::array<char, 1024> data_;
};

int main(int argc, char* argv[])
{
    try
    {
        const std::string listen_address = "0.0.0.0";
        const std::string multicast_address = "239.255.0.1";

        boost::asio::io_context io_context;
        receiver r(io_context);

        std::future<void> fut;
        for (int i = 5; i > 0; --i)
        {
            io_context.restart();
            r.open(
                boost::asio::ip::make_address(listen_address),
                boost::asio::ip::make_address(multicast_address));

            fut = std::async(std::launch::async, [&](){ io_context.run(); });
            std::this_thread::sleep_for(3s);

            io_context.stop();
            fut.get();
            r.close();
        }
    }
    catch (std::exception& e)
    {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}

Boost::Asio::Ip::Tcp::Iostream questions

Hey all, I'm new to asio and boost. I've been trying to implement a TCP server & client so that I could transmit an std::vector, but I've failed so far. I'm finding the Boost.Asio documentation lacking (to say the least) and hard to understand (English is not my primary language).
In any case, I've been looking at the iostreams examples and trying to implement an object-oriented solution, but I've failed.
The server that I'm trying to implement should be able to accept connections from multiple clients (how do I do that?).
The server should receive the std::vector, /* Do something */ and then return it to the client so that the client can tell that the server received the data intact.
*.h file
class TCP_Server : private boost::noncopyable
{
    typedef boost::shared_ptr<TCP_Connection> tcp_conn_pointer;

public :
    TCP_Server(ba::io_service &io_service, int port);
    virtual ~TCP_Server() {}
    virtual void Start_Accept();

private:
    virtual void Handle_Accept(const boost::system::error_code& e);

private :
    int m_port;
    ba::io_service& m_io_service;          // IO Service
    bi::tcp::acceptor m_acceptor;          // TCP Connections acceptor
    tcp_conn_pointer m_new_tcp_connection; // New connection pointer
};
*.cpp file
TCP_Server::TCP_Server(boost::asio::io_service &io_service, int port) :
    m_io_service(io_service),
    m_acceptor(io_service, bi::tcp::endpoint(bi::tcp::v4(), port)),
    m_new_tcp_connection(TCP_Connection::Create(io_service))
{
    m_port = port;
    Start_Accept();
}

void TCP_Server::Start_Accept()
{
    std::cout << "[TCP_Server][Start_Accept] => Listening on port : " << m_port << std::endl;

    //m_acceptor.async_accept(m_new_tcp_connection->Socket(),
    //                        boost::bind(&TCP_Server::Handle_Accept, this,
    //                                    ba::placeholders::error));

    m_acceptor.async_accept(*m_stream.rdbuf(),
                            boost::bind(&TCP_Server::Handle_Accept,
                                        this,
                                        ba::placeholders::error));
}

void TCP_Server::Handle_Accept(const boost::system::error_code &e)
{
    if(!e)
    {
        /*boost::thread T(boost::bind(&TCP_Connection::Run, m_new_tcp_connection));
        std::cout << "[TCP_Server][Handle_Accept] => Accepting incoming connection. Launching Thread " << std::endl;

        m_new_tcp_connection = TCP_Connection::Create(m_io_service);
        m_acceptor.async_accept(m_new_tcp_connection->Socket(),
                                boost::bind(&TCP_Server::Handle_Accept,
                                            this,
                                            ba::placeholders::error));*/

        m_stream << "Server Response..." << std::endl;
    }
}
How should the client look?
How do I keep the connection alive while both apps "talk"?
AFAIK ASIO iostreams are only for synchronous I/O. But your example gives me a hint that you want to use asynchronous I/O.
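For reference, here is a minimal sketch of the blocking flavour, using the same io_service-era API as the question (the port number is illustrative):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main() {
    using boost::asio::ip::tcp;

    boost::asio::io_service svc;
    tcp::acceptor acceptor(svc, tcp::endpoint(tcp::v4(), 10255));

    tcp::iostream stream;
    acceptor.accept(*stream.rdbuf()); // blocks until a client connects

    std::string line;
    if (std::getline(stream, line))         // blocking read of one line
        stream << "echo: " << line << "\n"; // blocking write of the reply
}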
Here is a small example of a server which uses async I/O to read a request comprising an array of integers preceded by a 4-byte count of the integers in the request.
So in effect I am serializing a vector of integers as:
count (4 bytes)
int
int
...
etc
If reading the vector of ints is successful, the server will write a 4-byte response code (=1) and then issue a read for a new request from the client. Enough said; code follows.
#include <iostream>
#include <vector>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/noncopyable.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <boost/asio.hpp>
#include <windows.h> // for SetConsoleCtrlHandler / Sleep

using namespace boost::asio;
using boost::asio::ip::tcp;

class Connection
{
public:
    Connection(tcp::acceptor& acceptor)
        : acceptor_(acceptor), socket_(acceptor.get_io_service(), tcp::v4())
    {
    }

    void start()
    {
        acceptor_.get_io_service().post(boost::bind(&Connection::start_accept, this));
    }

private:
    void start_accept()
    {
        acceptor_.async_accept(socket_, boost::bind(&Connection::handle_accept, this,
                                                    placeholders::error));
    }

    void handle_accept(const boost::system::error_code& err)
    {
        if (err)
        {
            //Failed to accept the incoming connection.
            disconnect();
        }
        else
        {
            count_ = 0;
            async_read(socket_, buffer(&count_, sizeof(count_)),
                       boost::bind(&Connection::handle_read_count,
                                   this, placeholders::error, placeholders::bytes_transferred));
        }
    }

    void handle_read_count(const boost::system::error_code& err, std::size_t bytes_transferred)
    {
        if (err || (bytes_transferred != sizeof(count_)))
        {
            //Failed to read the element count.
            disconnect();
        }
        else
        {
            elements_.assign(count_, 0);
            async_read(socket_, buffer(elements_),
                       boost::bind(&Connection::handle_read_elements, this,
                                   placeholders::error, placeholders::bytes_transferred));
        }
    }

    void handle_read_elements(const boost::system::error_code& err, std::size_t bytes_transferred)
    {
        if (err || (bytes_transferred != count_ * sizeof(int)))
        {
            //Failed to read the request elements.
            disconnect();
        }
        else
        {
            response_ = 1;
            async_write(socket_, buffer(&response_, sizeof(response_)),
                        boost::bind(&Connection::handle_write_response, this,
                                    placeholders::error, placeholders::bytes_transferred));
        }
    }

    void handle_write_response(const boost::system::error_code& err, std::size_t bytes_transferred)
    {
        if (err)
            disconnect();
        else
        {
            //Start a fresh read
            count_ = 0;
            async_read(socket_, buffer(&count_, sizeof(count_)),
                       boost::bind(&Connection::handle_read_count,
                                   this, placeholders::error, placeholders::bytes_transferred));
        }
    }

    void disconnect()
    {
        socket_.shutdown(tcp::socket::shutdown_both);
        socket_.close();
        socket_.open(tcp::v4());
        start_accept();
    }

    tcp::acceptor& acceptor_;
    tcp::socket socket_;
    std::vector<int> elements_;
    long count_;
    long response_;
};

class Server : private boost::noncopyable
{
public:
    Server(unsigned short port, unsigned short thread_pool_size, unsigned short conn_pool_size)
        : acceptor_(io_service_, tcp::endpoint(tcp::v4(), port), true)
    {
        unsigned short i = 0;
        for (i = 0; i < conn_pool_size; ++i)
        {
            ConnectionPtr conn(new Connection(acceptor_));
            conn->start();
            conn_pool_.push_back(conn);
        }

        // Start the pool of threads to run all of the io_services.
        for (i = 0; i < thread_pool_size; ++i)
        {
            thread_pool_.create_thread(boost::bind(&io_service::run, &io_service_));
        }
    }

    ~Server()
    {
        io_service_.stop();
        thread_pool_.join_all();
    }

private:
    io_service io_service_;
    tcp::acceptor acceptor_;

    typedef boost::shared_ptr<Connection> ConnectionPtr;
    std::vector<ConnectionPtr> conn_pool_;
    boost::thread_group thread_pool_;
};

boost::function0<void> console_ctrl_function;

BOOL WINAPI console_ctrl_handler(DWORD ctrl_type)
{
    switch (ctrl_type)
    {
    case CTRL_C_EVENT:
    case CTRL_BREAK_EVENT:
    case CTRL_CLOSE_EVENT:
    case CTRL_SHUTDOWN_EVENT:
        console_ctrl_function();
        return TRUE;
    default:
        return FALSE;
    }
}

void stop_server(Server* pServer)
{
    delete pServer;
    pServer = NULL;
}

int main()
{
    Server *pServer = new Server(10255, 4, 20);
    console_ctrl_function = boost::bind(stop_server, pServer);
    SetConsoleCtrlHandler(console_ctrl_handler, TRUE);

    while(true)
    {
        Sleep(10000);
    }
}
I believe the code you have posted is a little incomplete/incorrect. Nonetheless, here is some guidance.
1)
Your async_accept() call seems wrong. It should be something like:
m_acceptor.async_accept(m_new_tcp_connection->socket(),...)
2)
Take note that the Handle_Accept() function will be called after the socket is accepted. In other words, when control reaches Handle_Accept(), you simply have to write to the socket. Something like:
void TCP_Server::Handle_Accept(const system::error_code& error)
{
    if(!error)
    {
        //send data to the client
        string message = "hello there!\n";

        //Write data to the socket and then call the handler AFTER that.
        //Note, you will need to define a Handle_Write() function in your
        //TCP_Connection class (and, in real code, make sure `message`
        //outlives the async_write, e.g. by making it a member).
        async_write(m_new_tcp_connection->socket(), buffer(message),
                    bind(&TCP_Connection::Handle_Write, m_new_tcp_connection,
                         placeholders::error, placeholders::bytes_transferred));

        //accept the next connection
        Start_Accept();
    }
}
3)
As for the client, you should take a look here:
http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/tutorial/tutdaytime1.html
If the communication on both ends is realized in C++, you can use the Boost Serialization library to serialize the vector into bytes and transfer those to the other machine. On the opposite end you use the Boost Serialization lib to deserialize the object. I saw at least two approaches doing so.
Advantage of Boost Serialization: this approach works when transferring objects between 32-bit and 64-bit systems as well.
Below are the links:
code project article
boost mailing list ideas
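For completeness, here is a minimal sketch of the serialize/deserialize round trip with Boost.Serialization (assumes linking against boost_serialization; the stringstreams stand in for whatever transport you use, e.g. the TCP connection above):

#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/vector.hpp>
#include <sstream>
#include <vector>

int main() {
    std::vector<int> original{1, 2, 3}, restored;

    // sender side: vector -> text
    std::ostringstream os;
    {
        boost::archive::text_oarchive oa(os);
        oa << original;
    }

    // receiver side: text -> vector (os.str() stands in for received bytes)
    std::istringstream is(os.str());
    {
        boost::archive::text_iarchive ia(is);
        ia >> restored;
    }
}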
Regards,
Ovanes