I don't know why, but I can't wrap my head around the Boost.Beast websocket server and how you can (or should) interact with it.
The basic program I made looks like this, across two classes (WebSocketListener and WebSocketSession):
https://www.boost.org/doc/libs/develop/libs/beast/example/websocket/server/async/websocket_server_async.cpp
Everything works great: I can connect, and it echoes messages. We will only ever have one active session, and I'm struggling to understand how I can interface with this session from outside its class, for example in my int main() or in another class that may be responsible for issuing reads/writes. We will be using a simple Command design pattern: commands come in asynchronously into a buffer, get processed against hardware, and the results get written back out with async_write. The reading and queuing is straightforward and will be done in the WebSocketSession, but everything I see for writing just reads/writes directly inside the session without taking external input.
I've seen examples using things like boost::asio::async_write(socket, buffer, ...), but I'm struggling to understand how I get a reference to said socket when the session is created by the listener itself.
Instead of making something outside the session depend on its socket, I'd make the session depend on your program logic.
That's because the session (connection) will govern its own lifetime, arriving spontaneously and potentially disconnecting spontaneously. Your hardware, most likely, doesn't.
So, borrowing the concept of "Dependency Injection", tell your listener about your application logic, and then call into that from the session. (The listener will "inject" the dependency into each newly created session.)
Let's start from a simplified/modernized version of your linked example.
Now, where we prepare a response, you want your own logic injected, so let's write it how we would imagine it:
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
response_ = logic_->Process(beast::buffers_to_string(buffer_.data()));
ws_.async_write(
net::buffer(response_),
beast::bind_front_handler(&session::on_write, shared_from_this()));
}
Here we declare the members and initialize them from the constructor:
std::string response_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
Now, we need to inject the listener with the logic so we can pass it along:
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(ex)
, logic_(logic) {
So that we can pass it along:
void on_accept(beast::error_code ec, tcp::socket socket) {
if (ec) {
fail(ec, "accept");
} else {
std::make_shared<session>(std::move(socket), logic_)->run();
}
// Accept another connection
do_accept();
}
Now making the real logic in main:
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
std::make_shared<listener>(ioc.get_executor(),
tcp::endpoint{address, port}, logic)
->run();
ioc.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
Demo Logic
Just for fun, let's implement some random logic that might be implemented in hardware in the future:
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::cout << "Processing: " << std::quoted(request) << std::endl;
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
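For example, with the rules above, "adding (1,2,3)" produces "StackOverflow Demo/Fold:6.000000", "multiplying (2,3,4)" produces "StackOverflow Demo/Fold:24.000000", and any other input comes back as "StackOverflow Demo/Invalid Command: ...".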
Now you can see it in action: Full Listing
Where To Go From Here
What if you need multiple responses for one request?
You need a queue. I usually call those an outbox, so searching for outbox_, _outbox etc. will turn up lots of examples.
Those examples will also show how to deal with other situations where writes can be "externally initiated", and how to safely enqueue those. Perhaps a very engaging example is How to batch send unsent messages in asio.
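Before the full update below, here is a minimal, self-contained sketch of that outbox discipline. The actual ws_.async_write is stood in by a plain post() so the example runs without a peer, and the names (post_message, outbox_, do_write_loop) are just the conventions used in this answer:
#include <boost/asio.hpp>
#include <deque>
#include <iostream>
#include <string>
namespace net = boost::asio;
class outbox_demo {
    net::strand<net::io_context::executor_type> strand_;
    std::deque<std::string> outbox_;
  public:
    explicit outbox_demo(net::io_context& ioc)
        : strand_(make_strand(ioc.get_executor())) {}
    // May be called from any thread: hop onto the strand first
    void post_message(std::string msg) {
        post(strand_, [this, msg = std::move(msg)]() mutable {
            outbox_.push_back(std::move(msg));
            if (outbox_.size() == 1) // no write in flight yet
                do_write_loop();
        });
    }
  private:
    void do_write_loop() {
        if (outbox_.empty())
            return;
        // Stand-in for ws_.async_write(net::buffer(outbox_.front()), ...):
        post(strand_, [this] {
            std::cout << "wrote: " << outbox_.front() << "\n";
            outbox_.pop_front();
            do_write_loop(); // start the next write, if any
        });
    }
};
int main() {
    net::io_context ioc;
    outbox_demo d(ioc);
    d.post_message("one");
    d.post_message("two"); // queued behind "one", never interleaved
    ioc.run();
}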
Listing For Reference
In case the links go dead in the future:
#include <boost/algorithm/string/trim.hpp>
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <filesystem>
#include <functional>
#include <iostream>
static std::string g_app_name = "app-logic-service";
#include <boost/core/demangle.hpp> // just for our demo logic
#include <boost/spirit/home/x3.hpp> // idem
#include <numeric> // idem
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
// Report a failure
void fail(beast::error_code ec, char const* what) {
std::cerr << what << ": " << ec.message() << "\n";
}
class session : public std::enable_shared_from_this<session> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
std::string response_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
void run() {
// Get on the correct executor
// strand for thread safety
dispatch(
ws_.get_executor(),
beast::bind_front_handler(&session::on_run, shared_from_this()));
}
private:
void on_run() {
// Set suggested timeout settings for the websocket
ws_.set_option(websocket::stream_base::timeout::suggested(
beast::role_type::server));
// Set a decorator to change the Server of the handshake
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) + " " +
g_app_name);
}));
// Accept the websocket handshake
ws_.async_accept(
beast::bind_front_handler(&session::on_accept, shared_from_this()));
}
void on_accept(beast::error_code ec) {
if (ec)
return fail(ec, "accept");
do_read();
}
void do_read() {
ws_.async_read(
buffer_,
beast::bind_front_handler(&session::on_read, shared_from_this()));
}
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
auto request = boost::algorithm::trim_copy(
beast::buffers_to_string(buffer_.data()));
std::cout << "Processing: " << std::quoted(request) << " from "
<< beast::get_lowest_layer(ws_).socket().remote_endpoint()
<< std::endl;
response_ = logic_->Process(request);
ws_.async_write(
net::buffer(response_),
beast::bind_front_handler(&session::on_write, shared_from_this()));
}
void on_write(beast::error_code ec, std::size_t bytes_transferred) {
boost::ignore_unused(bytes_transferred);
if (ec)
return fail(ec, "write");
// Clear the buffer
buffer_.consume(buffer_.size());
// Do another read
do_read();
}
};
// Accepts incoming connections and launches the sessions
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(ex)
, logic_(logic) {
acceptor_.open(endpoint.protocol());
acceptor_.set_option(tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen(tcp::acceptor::max_listen_connections);
}
// Start accepting incoming connections
void run() { do_accept(); }
private:
void do_accept() {
// The new connection gets its own strand
acceptor_.async_accept(make_strand(ex_),
beast::bind_front_handler(&listener::on_accept,
shared_from_this()));
}
void on_accept(beast::error_code ec, tcp::socket socket) {
if (ec) {
fail(ec, "accept");
} else {
std::make_shared<session>(std::move(socket), logic_)->run();
}
// Accept another connection
do_accept();
}
};
int main(int argc, char* argv[]) {
g_app_name = std::filesystem::path(argv[0]).filename();
if (argc != 4) {
std::cerr << "Usage: " << g_app_name << " <address> <port> <threads>\n"
<< "Example:\n"
<< " " << g_app_name << " 0.0.0.0 8080 1\n";
return 1;
}
auto const address = net::ip::make_address(argv[1]);
auto const port = static_cast<uint16_t>(std::atoi(argv[2]));
auto const threads = std::max<int>(1, std::atoi(argv[3]));
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
std::make_shared<listener>(ioc.get_executor(),
tcp::endpoint{address, port}, logic)
->run();
ioc.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
}
UPDATE
In response to the comments I reified the outbox pattern again. Note some of the comments in the code.
Compiler Explorer
#include <boost/algorithm/string/trim.hpp>
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <deque>
#include <filesystem>
#include <functional>
#include <iostream>
#include <list>
static std::string g_app_name = "app-logic-service";
#include <boost/core/demangle.hpp> // just for our demo logic
#include <boost/spirit/home/x3.hpp> // idem
#include <numeric> // idem
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
// Report a failure
void fail(beast::error_code ec, char const* what) {
std::cerr << what << ": " << ec.message() << "\n";
}
class session : public std::enable_shared_from_this<session> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
void run() {
// Get on the correct executor
// strand for thread safety
dispatch(
ws_.get_executor(),
beast::bind_front_handler(&session::on_run, shared_from_this()));
}
void post_message(std::string msg) {
post(ws_.get_executor(),
[self = shared_from_this(), this, msg = std::move(msg)] {
do_post_message(std::move(msg));
});
}
private:
void on_run() {
// on the strand
// Set suggested timeout settings for the websocket
ws_.set_option(websocket::stream_base::timeout::suggested(
beast::role_type::server));
// Set a decorator to change the Server of the handshake
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) + " " +
g_app_name);
}));
// Accept the websocket handshake
ws_.async_accept(
beast::bind_front_handler(&session::on_accept, shared_from_this()));
}
void on_accept(beast::error_code ec) {
// on the strand
if (ec)
return fail(ec, "accept");
do_read();
}
void do_read() {
// on the strand
buffer_.clear();
ws_.async_read(
buffer_,
beast::bind_front_handler(&session::on_read, shared_from_this()));
}
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
// on the strand
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
auto request = boost::algorithm::trim_copy(
beast::buffers_to_string(buffer_.data()));
std::cout << "Processing: " << std::quoted(request) << " from "
<< beast::get_lowest_layer(ws_).socket().remote_endpoint()
<< std::endl;
do_post_message(logic_->Process(request)); // already on the strand
do_read();
}
std::deque<std::string> _outbox;
void do_post_message(std::string msg) {
// on the strand
_outbox.push_back(std::move(msg));
if (_outbox.size() == 1)
do_write_loop();
}
void do_write_loop() {
// on the strand
if (_outbox.empty())
return;
ws_.async_write( //
net::buffer(_outbox.front()),
[self = shared_from_this(), this] //
(beast::error_code ec, size_t bytes_transferred) {
// on the strand
boost::ignore_unused(bytes_transferred);
if (ec)
return fail(ec, "write");
_outbox.pop_front();
do_write_loop();
});
}
};
// Accepts incoming connections and launches the sessions
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(make_strand(ex)) // NOTE to guard sessions_
, logic_(logic) {
acceptor_.open(endpoint.protocol());
acceptor_.set_option(tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen(tcp::acceptor::max_listen_connections);
}
// Start accepting incoming connections
void run() { do_accept(); }
void broadcast(std::string msg) {
post(acceptor_.get_executor(),
beast::bind_front_handler(&listener::do_broadcast,
shared_from_this(), std::move(msg)));
}
private:
using handle_t = std::weak_ptr<session>;
std::list<handle_t> sessions_;
void do_broadcast(std::string const& msg) {
for (auto handle : sessions_)
if (auto sess = handle.lock())
sess->post_message(msg);
}
void do_accept() {
// The new connection gets its own strand
acceptor_.async_accept(make_strand(ex_),
beast::bind_front_handler(&listener::on_accept,
shared_from_this()));
}
void on_accept(beast::error_code ec, tcp::socket socket) {
// on the strand
if (ec) {
fail(ec, "accept");
} else {
auto sess = std::make_shared<session>(std::move(socket), logic_);
sessions_.emplace_back(sess);
// optionally:
sessions_.remove_if(std::mem_fn(&handle_t::expired));
sess->run();
}
// Accept another connection
do_accept();
}
};
static void emulate_hardware_stuff(std::shared_ptr<listener> srv) {
using std::this_thread::sleep_for;
using namespace std::chrono_literals;
// Extremely simplistic. Instead I'd recommend a `steady_timer` with
// `async_wait` here, but since I'm just making a sketch...
unsigned i = 0;
while (true) {
sleep_for(1s);
srv->broadcast("Hardware thing #" + std::to_string(++i));
}
}
int main(int argc, char* argv[]) {
g_app_name = std::filesystem::path(argv[0]).filename();
if (argc != 4) {
std::cerr << "Usage: " << g_app_name << " <address> <port> <threads>\n"
<< "Example:\n"
<< " " << g_app_name << " 0.0.0.0 8080 1\n";
return 1;
}
auto const address = net::ip::make_address(argv[1]);
auto const port = static_cast<uint16_t>(std::atoi(argv[2]));
auto const threads = std::max<int>(1, std::atoi(argv[3]));
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
auto srv = std::make_shared<listener>( //
ioc.get_executor(), //
tcp::endpoint{address, port}, //
logic);
srv->run();
std::thread something_hardware(emulate_hardware_stuff, srv);
ioc.join();
something_hardware.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
}
I've created a simple wrapper for boost::asio library. My wrapper consists of 4 main classes: NetServer (server), NetClient (client), NetSession (client/server session) and Network (composition class of these three which also includes all callback methods).
The problem is that the first client/server connection works flawlessly, but when I then stop the server, start it again, and try to connect the client, the server just doesn't recognize the client. It seems like the acceptor callback isn't called. And the client does connect to the server: first, the connection goes through without errors; second, when I close the server's program, the client receives the error message WSAECONNRESET.
I've created a test program which emulates the procedure written above. It does the following:
Starts the server
Starts the client
Client successfully connects to server
Stops the server
Client receives the error and disconnects itself
Starts the server again
Client again successfully connects to server
BUT SERVER DOESN'T CALL THE ACCEPTOR CALLBACK ANYMORE
It means that in point 3 the acceptor successfully calls the callback function, but in point 7 the acceptor doesn't call the callback.
I think I'm doing something wrong in the stop()/start() methods of the server, but I can't figure out what exactly is wrong.
The source of the NetServer class:
NetServer::NetServer(Network& netRef) : net{ netRef }
{
acceptor = std::make_unique<boost::asio::ip::tcp::acceptor>(ioc);
}
NetServer::~NetServer(void)
{
ioc.stop();
if (threadStarted)
{
th.join();
threadStarted = false;
}
if (active)
stop();
}
int NetServer::start(void)
{
assert(getAcceptHandler() != nullptr);
assert(getHeaderHandler() != nullptr);
assert(getDataHandler() != nullptr);
assert(getErrorHandler() != nullptr);
closeAll();
try
{
ep = boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), srvPort);
acceptor->open(ep.protocol());
acceptor->bind(ep);
acceptor->listen();
initAccept();
}
catch (system::system_error& e)
{
return e.code().value();
}
if (!threadStarted)
{
th = std::thread([this]()
{
ioc.run();
});
threadStarted = true;
}
active = true;
return Network::NET_OK;
}
int NetServer::stop(void)
{
ioc.post(boost::bind(&NetServer::_stop, this));
return Network::NET_OK;
}
void NetServer::_stop(void)
{
boost::system::error_code ec;
acceptor->close(ec);
for (auto& s : sessions)
closeSession(s.get(), false);
active = false;
}
void NetServer::initAccept(void)
{
sock = std::make_shared<asio::ip::tcp::socket>(ioc);
acceptor->async_accept(*sock.get(), [this](const boost::system::error_code& error)
{
onAccept(error, sock);
});
}
void NetServer::onAccept(const boost::system::error_code& ec, SocketSharedPtr sock)
{
if (ec.value() == 0)
{
if (accHandler())
{
addSession(sock);
initAccept();
}
}
else
getErrorHandler()(nullptr, ec);
}
SessionPtr NetServer::addSession(SocketSharedPtr sock)
{
std::lock_guard<std::mutex> guard(mtxSession);
auto session = std::make_shared<NetSession>(sock, *this, true);
sessions.insert(session);
session->start();
return session;
}
SessionPtr NetServer::findSession(const SessionPtr session)
{
for (auto it = std::begin(sessions); it != std::end(sessions); it++)
if (*it == session)
return *it;
return nullptr;
}
bool NetServer::closeSession(const void *session, bool erase /* = true */)
{
std::lock_guard<std::mutex> guard(mtxSession);
for (auto it = std::begin(sessions); it != std::end(sessions); it++)
if (it->get() == session)
{
try
{
it->get()->getSocket()->cancel();
it->get()->getSocket()->shutdown(asio::socket_base::shutdown_send);
it->get()->getSocket()->close();
it->get()->getSocket().reset();
}
catch (system::system_error& e)
{
UNREFERENCED_PARAMETER(e);
}
if (erase)
sessions.erase(*it);
return true;
}
return false;
}
void NetServer::closeAll(void)
{
using namespace boost::placeholders;
std::lock_guard<std::mutex> guard(mtxSession);
std::for_each(sessions.begin(), sessions.end(), boost::bind(&NetSession::stop, _1));
sessions.clear();
}
bool NetServer::write(const SessionPtr session, std::string msg)
{
if (SessionPtr s = findSession(session); s)
{
s->addMessage(msg);
if (s->canWrite())
s->write();
return true;
}
return false;
}
This is the output from the server:
Enter 0 - server, 1 - client: 0
1. Server started
3. Client connected to server
Stopping server....
4. Server stopped
Net error, server, acceptor: ERROR_OPERATION_ABORTED
Net error, server, ERROR_OPERATION_ABORTED
Client session deleted
6. Server started again
(HERE SHOULD BE "8. Client again connected to server", but the server didn't recognize the reconnected client!)
And from the client:
Enter 0 - server, 1 - client: 1
2. Client started and connected to server
Net error, client: ERROR_FILE_NOT_FOUND
5. Client disconnected from server
Waiting 3 sec before reconnect...
Connecting to server...
7. Client started and connected to server
(WHEN I CLOSE THE SERVER WINDOW, I RECEIVE HERE THE "Net error, client: WSAECONNRESET" MESSAGE - it means the client was connected to the server anyhow!)
If the code of NetClient, NetSession and Network is necessary, just let me know.
Thanks in advance
Wow. There's a lot to unpack. There is quite a lot of code smell that reminds me of some books on Asio programming that turned out to be... not excellent in my previous experience.
I couldn't give any real advice without grokking your code, which requires me to review in-depth and add missing bits. So let me just provide you with my reviewed/fixed code first, then we'll talk about some of the details.
A few areas where you seemed to have trouble making up your mind:
whether to use a strand or to use mutex locking
whether to use async or sync (e.g. closeSession is completely synchronous and blocking)
whether to use shared pointers for lifetime or not: on the one hand you have NetSession support shared_from_this, but on the other hand you are keeping them alive in a sessions collection.
whether to use smart pointers or raw pointers (sp.get() is a code smell)
whether to use void* pointers or forward declared structs for opaque implementation
whether to use exceptions or to use error codes. Specifically:
return e.code().value();
is a Very Bad Idea. Just return error_code already. Or just propagate the exception.
judging from the use, my best bet is that sessions is std::set<SessionPtr>. Then it's funny that you're doing linear searches. In fact, findSession could be:
SessionPtr findSession(SessionPtr const& session) {
std::lock_guard guard(mtxSessions);
return sessions.contains(session)? session: nullptr;
}
In fact, given some natural invariants, it could just be
auto findSession(SessionPtr s) { return std::move(s); }
Note as well, you had forgotten to lock the mutex in findSession
closeSession completely violates Law Of Demeter, 6*3 times over if you will. In my example I make it so SessHandle is a weak pointer to NetSession and you can just write:
for (auto& handle : sessions)
if (auto sess = handle.lock())
sess->close();
Of course, sess->close() should not block
Also, it should correctly synchronize on the session e.g. using the sessions strand:
void close() {
return post(sock_.get_executor(), [this, self = shared_from_this()] {
error_code ec;
if (!ec) sock_.cancel(ec);
if (!ec) sock_.shutdown(tcp::socket::shutdown_send, ec);
if (!ec) sock_.close(ec);
});
}
If you insist, you can make it so the caller can still await the result and receive any exceptions:
std::future<void> close() {
return post(
sock_.get_executor(),
std::packaged_task<void()>{[this, self = shared_from_this()] {
sock_.cancel();
sock_.shutdown(tcp::socket::shutdown_send);
sock_.close();
}});
}
Honestly, that seems overkill since you never look at the return value anyway.
In general, I recommend leaving socket::close() to the destructor. It avoids a specific class of race-conditions on socket handles.
Don't use boolean flags (threadStarted is better replaced with th.joinable(); see the short sketch after this list)
apparently you had NetSession::stop which I imagine did largely the same as closeSession but in the right place? I replaced it with the new NetSession::close
subtly, when accHandler returned false, you would exit the accept loop altogether. I doubt that was on purpose
try to minimize time under locks; I will show you how to do without the lock entirely instead.
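On the boolean-flags point above, a minimal sketch, using only the thread's own joinable() state where the original kept a threadStarted flag:
#include <boost/asio.hpp>
#include <thread>
int main() {
    boost::asio::io_context ioc;
    auto work = make_work_guard(ioc); // keeps run() from returning early
    std::thread th;
    if (!th.joinable()) // instead of if (!threadStarted)
        th = std::thread([&ioc] { ioc.run(); });
    work.reset(); // allow run() to return once it is out of work
    if (th.joinable()) // instead of if (threadStarted) { th.join(); threadStarted = false; }
        th.join();
}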
Demo Listing
#include <boost/asio.hpp>
#include <boost/system/error_code.hpp>
#include <deque>
#include <iostream>
#include <iomanip>
#include <set>
using namespace std::chrono_literals;
using namespace std::placeholders;
namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;
static inline std::ostream debug(std::cerr.rdbuf());
struct Network {
static constexpr error_code NET_OK{};
};
struct NetSession; // opaque forward reference
struct NetServer;
using SessHandle = std::weak_ptr<NetSession>; // opaque handle
using Sessions = std::set<SessHandle, std::owner_less<>>;
struct NetSession : std::enable_shared_from_this<NetSession> {
NetSession(tcp::socket&& s, NetServer& srv, bool)
: sock_(std::move(s))
, srv_(srv) {
debug << "New session from " << getPeer() << std::endl;
}
void start() {
post(sock_.get_executor(),
std::bind(&NetSession::do_read, shared_from_this()));
}
tcp::endpoint getPeer() const { return peer_; }
void close() {
return post(sock_.get_executor(), [this, self = shared_from_this()] {
debug << "Closing " << getPeer() << std::endl;
error_code ec;
if (!ec) sock_.cancel(ec);
if (!ec) sock_.shutdown(tcp::socket::shutdown_send, ec);
// if (!ec) sock_.close(ec);
});
}
void addMessage(std::string msg) {
post(sock_.get_executor(),
[this, msg = std::move(msg), self = shared_from_this()] {
outgoing_.push_back(std::move(msg));
if (canWrite())
write_loop();
});
}
private:
// assumed on (logical) strand
bool canWrite() const { // FIXME misnomer: shouldStartWriteLoop()?
return outgoing_.size() == 1;
}
void write_loop() {
if (outgoing_.empty())
return;
async_write(sock_, asio::buffer(outgoing_.front()),
[this, self = shared_from_this()](error_code ec, size_t) {
if (!ec) {
outgoing_.pop_front();
write_loop();
}
});
}
void do_read() {
incoming_.clear();
async_read_until(
sock_, asio::dynamic_buffer(incoming_), "\n",
std::bind(&NetSession::on_read, shared_from_this(), _1, _2));
}
void on_read(error_code ec, size_t);
tcp::socket sock_;
tcp::endpoint peer_ = sock_.remote_endpoint();
NetServer& srv_;
std::string incoming_;
std::deque<std::string> outgoing_;
};
using SessionPtr = std::shared_ptr<NetSession>;
using SocketSharedPtr = std::shared_ptr<tcp::socket>;
struct NetServer {
NetServer(Network& netRef) : net{netRef} {}
~NetServer()
{
if (acceptor.is_open())
acceptor.cancel(); // TODO seems pretty redundant
stop();
if (th.joinable())
th.join();
}
std::function<bool()> accHandler;
std::function<void(SocketSharedPtr, error_code)> errHandler;
// TODO headerHandler
std::function<void(SessionPtr, error_code, std::string)> dataHandler;
error_code start() {
assert(accHandler);
assert(errHandler);
assert(dataHandler);
closeAll(sessions);
error_code ec;
if (!ec) acceptor.open(tcp::v4(), ec);
if (!ec) acceptor.bind({{}, srvPort}, ec);
if (!ec) acceptor.listen(tcp::socket::max_listen_connections, ec);
if (!ec) {
do_accept();
if (!th.joinable()) {
th = std::thread([this] { ioc.run(); }); // TODO exceptions!
}
}
if (ec && acceptor.is_open())
acceptor.close();
return ec;
}
void stop() { //
post(ioc, std::bind(&NetServer::do_stop, this));
}
void closeSession(SessHandle handle, bool erase = true) {
post(acceptor.get_executor(), [=, this] {
if (auto s = handle.lock()) {
s->close();
}
if (erase) {
sessions.erase(handle);
}
});
}
void closeAll() {
post(acceptor.get_executor(), [this] {
closeAll(sessions);
sessions.clear();
});
}
// TODO FIXME is the return value worth it?
bool write(SessionPtr const& session, std::string msg) {
return post(acceptor.get_executor(),
std::packaged_task<bool()>{std::bind(
&NetServer::do_write, this, session, std::move(msg))})
.get();
}
// compare
void writeAll(std::string msg) {
post(acceptor.get_executor(),
std::bind(&NetServer::do_write_all, this, std::move(msg)));
}
private:
Network& net;
asio::io_context ioc;
tcp::acceptor acceptor{ioc}; // active -> acceptor.is_open()
std::thread th; // threadActive -> th.joinable()
Sessions sessions;
std::uint16_t srvPort = 8989;
// std::mutex mtxSessions; // note naming; also replaced by logical strand
// assumed on acceptor logical strand
void do_accept() {
acceptor.async_accept(
make_strand(ioc), [this](error_code ec, tcp::socket sock) {
if (ec.failed()) {
return errHandler(nullptr, ec);
}
if (accHandler()) {
auto s = std::make_shared<NetSession>(std::move(sock),
*this, true);
sessions.insert(s);
s->start();
}
do_accept();
});
}
SessionPtr do_findSession(SessionPtr const& session) {
return sessions.contains(session) ? session : nullptr;
}
bool do_write(SessionPtr session, std::string msg) {
if (auto s = do_findSession(session)) {
s->addMessage(std::move(msg));
return true;
}
return false;
}
void do_write_all(std::string msg) {
for(auto& handle : sessions)
if (auto sess = handle.lock())
do_write(sess, msg);
}
static void closeAll(Sessions const& sessions) {
for (auto& handle : sessions)
if (auto sess = handle.lock())
sess->close();
}
void do_stop()
{
if (acceptor.is_open()) {
error_code ec;
acceptor.close(ec); // TODO error handling?
}
closeAll(sessions); // TODO FIXME why not clear sessions?
}
};
// Implementation must be after NetServer definition:
void NetSession::on_read(error_code ec, size_t) {
if (srv_.dataHandler)
srv_.dataHandler(shared_from_this(), ec, std::move(incoming_));
if (!ec)
do_read();
}
int main() {
Network net;
NetServer srv{net};
srv.accHandler = [] { return true; };
srv.errHandler = [](SocketSharedPtr, error_code ec) {
debug << "errHandler: " << ec.message() << std::endl;
};
srv.dataHandler = [](SessionPtr sess, error_code ec, std::string msg) {
debug << "dataHandler: " << sess->getPeer() << " " << ec.message()
<< " " << std::quoted(msg) << std::endl;
};
srv.start();
std::this_thread::sleep_for(10s);
std::cout << "Shutdown started" << std::endl;
srv.writeAll("We're going to shutdown, take care!\n");
srv.stop();
}
I am attempting to make a fairly simple client-server program with boost asio. The server class is implemented as follows:
template<class RequestHandler, class RequestClass>
class Server {
public:
typedef std::map<std::string, RequestHandler> CommandMap;
Server(short port, CommandMap commands, RequestClass *request_class_inst)
: acceptor_(io_context_, tcp::endpoint(tcp::v4(), port))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
{
DoAccept();
}
~Server()
{
}
void Run()
{
io_context_.run();
}
void RunInBackground()
{
std::thread t( [this]{ Run(); });
t.detach();
}
void Kill()
{
acceptor_.close();
}
private:
boost::asio::io_context io_context_;
tcp::acceptor acceptor_;
CommandMap commands_;
RequestClass *request_class_inst_;
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (!ec)
std::make_shared<Session<RequestHandler, RequestClass>>
(std::move(socket), commands_, request_class_inst_)->Run();
DoAccept();
});
}
};
In addition to the server class, I implement a basic Client class thusly:
class Client {
public:
/**
* Constructor, initializes JSON parser and serializer.
*/
Client()
: reader_((new Json::CharReaderBuilder)->newCharReader())
{}
Json::Value MakeRequest(const std::string &ip_addr, unsigned short port,
const Json::Value &request)
{
boost::asio::io_context io_context;
std::string serialized_req = Json::writeString(writer_, request);
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
s.connect({ boost::asio::ip::address::from_string(ip_addr), port });
boost::asio::write(s, boost::asio::buffer(serialized_req));
s.shutdown(tcp::socket::shutdown_send);
error_code ec;
char reply[2048];
size_t reply_length = boost::asio::read(s, boost::asio::buffer(reply),
ec);
std::cout << std::string(reply).substr(0, reply_length) << std::endl;
Json::Value json_resp;
JSONCPP_STRING parse_err;
std::string resp_str(reply);
if (reader_->parse(resp_str.c_str(), resp_str.c_str() + resp_str.length(),
&json_resp, &parse_err))
return json_resp;
throw std::runtime_error("Error parsing response.");
}
bool IsAlive(const std::string &ip_addr, unsigned short port)
{
boost::asio::io_context io_context;
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
try {
s.connect({boost::asio::ip::address::from_string(ip_addr), port});
} catch(const boost::wrapexcept<boost::system::system_error> &err) {
s.close();
return false;
}
s.close();
return true;
}
private:
/// Reads JSON.
const std::unique_ptr<Json::CharReader> reader_;
/// Writes JSON.
Json::StreamWriterBuilder writer_;
};
I have implemented a small example to test Client::IsAlive:
int main()
{
auto *request_inst = new RequestClass(1);
std::map<std::string, RequestClassMethod> commands {
{"ADD_1", std::mem_fn(&RequestClass::add_n)},
{"SUB_1", std::mem_fn(&RequestClass::sub_n)}
};
Server<RequestClassMethod, RequestClass> s1(5000, commands, request_inst);
s1.RunInBackground();
std::vector<Client*> clients(6, new Client());
s1.Kill();
// Should output "0" to console.
std::cout << clients.at(1)->IsAlive("127.0.0.1", 5000);
return 0;
}
However, when I attempt to run this, the output varies. About half the time, I receive the correct value and the program exits with code 0, but, on other occasions, the program will either: (1) exit with code 139 (SEGFAULT) before outputting 0 to the console, (2) output 0 to the console and subsequently exit with code 139, (3) output 0 to the console and subsequently hang, or (4) hang before writing anything to the console.
I am uncertain as to what has caused these errors. I expect that it has to do with the destruction of Server::io_context_ and implementation of Server::Kill. Could this pertain to how I am storing Server::io_context_ as a data member?
A minimum reproducible example is shown below:
#define BOOST_ASIO_HAS_MOVE
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>
#include <boost/system/error_code.hpp>
#include <json/json.h>
using boost::asio::ip::tcp;
using boost::system::error_code;
/// NOTE: This class exists exclusively for unit testing.
class RequestClass {
public:
/**
* Initialize class with value n to add sub from input values.
*
* #param n Value to add/sub from input values.
*/
explicit RequestClass(int n) : n_(n) {}
/// Value to add/sub from
int n_;
/**
* Add n to value in JSON request.
*
* #param request JSON request with field "value".
* #return JSON response containing modified field "value" = [original_value] + n.
*/
[[nodiscard]] Json::Value add_n(const Json::Value &request) const
{
Json::Value resp;
resp["SUCCESS"] = true;
// If value is present in request, return value + 1, else return error.
if (request.get("VALUE", NULL) != NULL) {
resp["VALUE"] = request["VALUE"].asInt() + this->n_;
} else {
resp["SUCCESS"] = false;
resp["ERRORS"] = "Invalid value.";
}
return resp;
}
/**
* Sun n from value in JSON request.
*
* #param request JSON request with field "value".
* #return JSON response containing modified field "value" = [original_value] - n.
*/
[[nodiscard]] Json::Value sub_n(const Json::Value &request) const
{
Json::Value resp, value;
resp["SUCCESS"] = true;
// If value is present in request, return value + 1, else return error.
if (request.get("VALUE", NULL) != NULL) {
resp["VALUE"] = request["VALUE"].asInt() - this->n_;
} else {
resp["SUCCESS"] = false;
resp["ERRORS"] = "Invalid value.";
}
return resp;
}
};
typedef std::function<Json::Value(RequestClass, const Json::Value &)> RequestClassMethod;
template<class RequestHandler, class RequestClass>
class Session :
public std::enable_shared_from_this<Session<RequestHandler,
RequestClass>>
{
public:
typedef std::map<std::string, RequestHandler> CommandMap;
Session(tcp::socket socket, CommandMap commands,
RequestClass *request_class_inst)
: socket_(std::move(socket))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
, reader_((new Json::CharReaderBuilder)->newCharReader())
{}
void Run()
{
DoRead();
}
void Kill()
{
continue_ = false;
}
private:
tcp::socket socket_;
RequestClass *request_class_inst_;
CommandMap commands_;
/// Reads JSON.
const std::unique_ptr<Json::CharReader> reader_;
/// Writes JSON.
Json::StreamWriterBuilder writer_;
bool continue_ = true;
char data_[2048];
std::string resp_;
void DoRead()
{
auto self(this->shared_from_this());
socket_.async_read_some(boost::asio::buffer(data_),
[this, self](error_code ec, std::size_t length)
{
if (!ec)
DoWrite(length);
});
}
void DoWrite(std::size_t length)
{
JSONCPP_STRING parse_err;
Json::Value json_req, json_resp;
std::string client_req_str(data_);
if (reader_->parse(client_req_str.c_str(),
client_req_str.c_str() +
client_req_str.length(),
&json_req, &parse_err))
{
try {
// Get JSON response.
json_resp = ProcessRequest(json_req);
json_resp["SUCCESS"] = true;
} catch (const std::exception &ex) {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(ex.what());
}
} else {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(parse_err);
}
resp_ = Json::writeString(writer_, json_resp);
auto self(this->shared_from_this());
boost::asio::async_write(socket_,
boost::asio::buffer(resp_),
[this, self]
(boost::system::error_code ec,
std::size_t bytes_xfered) {
if (!ec) DoRead();
});
}
Json::Value ProcessRequest(Json::Value request)
{
Json::Value response;
std::string command = request["COMMAND"].asString();
// If command is not valid, give a response with an error.
if(commands_.find(command) == commands_.end()) {
response["SUCCESS"] = false;
response["ERRORS"] = "Invalid command.";
}
// Otherwise, run the relevant handler.
else {
RequestHandler handler = commands_.at(command);
response = handler(*request_class_inst_, request);
}
return response;
}
};
template<class RequestHandler, class RequestClass>
class Server {
public:
typedef std::map<std::string, RequestHandler> CommandMap;
Server(short port, CommandMap commands, RequestClass *request_class_inst)
: acceptor_(io_context_, tcp::endpoint(tcp::v4(), port))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
{
DoAccept();
}
~Server()
{
}
void Run()
{
io_context_.run();
}
void RunInBackground()
{
std::thread t( [this]{ Run(); });
t.detach();
}
void Kill()
{
acceptor_.close();
}
private:
boost::asio::io_context io_context_;
tcp::acceptor acceptor_;
CommandMap commands_;
RequestClass *request_class_inst_;
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (!ec)
std::make_shared<Session<RequestHandler, RequestClass>>
(std::move(socket), commands_, request_class_inst_)->Run();
DoAccept();
});
}
};
class Client {
public:
/**
* Constructor, initializes JSON parser and serializer.
*/
Client()
: reader_((new Json::CharReaderBuilder)->newCharReader())
{}
Json::Value MakeRequest(const std::string &ip_addr, unsigned short port,
const Json::Value &request)
{
boost::asio::io_context io_context;
std::string serialized_req = Json::writeString(writer_, request);
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
s.connect({ boost::asio::ip::address::from_string(ip_addr), port });
boost::asio::write(s, boost::asio::buffer(serialized_req));
s.shutdown(tcp::socket::shutdown_send);
error_code ec;
char reply[2048];
size_t reply_length = boost::asio::read(s, boost::asio::buffer(reply),
ec);
std::cout << std::string(reply).substr(0, reply_length) << std::endl;
Json::Value json_resp;
JSONCPP_STRING parse_err;
std::string resp_str(reply);
if (reader_->parse(resp_str.c_str(), resp_str.c_str() + resp_str.length(),
&json_resp, &parse_err))
return json_resp;
throw std::runtime_error("Error parsing response.");
}
bool IsAlive(const std::string &ip_addr, unsigned short port)
{
boost::asio::io_context io_context;
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
try {
s.connect({boost::asio::ip::address::from_string(ip_addr), port});
} catch(const boost::wrapexcept<boost::system::system_error> &err) {
s.close();
return false;
}
s.close();
return true;
}
private:
/// Reads JSON.
const std::unique_ptr<Json::CharReader> reader_;
/// Writes JSON.
Json::StreamWriterBuilder writer_;
};
int main()
{
auto *request_inst = new RequestClass(1);
std::map<std::string, RequestClassMethod> commands {
{"ADD_1", std::mem_fn(&RequestClass::add_n)},
{"SUB_1", std::mem_fn(&RequestClass::sub_n)}
};
Server<RequestClassMethod, RequestClass> s1(5000, commands, request_inst);
s1.RunInBackground();
std::vector<Client*> clients(6, new Client());
Json::Value sub_one_req;
sub_one_req["COMMAND"] = "SUB_1";
sub_one_req["VALUE"] = 1;
s1.Kill();
std::cout << clients.at(1)->IsAlive("127.0.0.1", 5000);
return 0;
}
Using ASan (-fsanitize=address) on that shows:
false
=================================================================
==31232==ERROR: AddressSanitizer: heap-use-after-free on address 0x6110000002c0 at pc 0x561409ca2ea3 bp 0x7efcfbbfdc60 sp 0x7efcfbbfdc50
READ of size 8 at 0x6110000002c0 thread T1
    #0 0x561409ca2ea2 in boost::asio::detail::epoll_reactor::run(long, boost::asio::detail::op_queue<boost::asio::detail::scheduler_operation>&) /home/sehe/custom/boost_1_76_0/boost/asio/detail/impl/epoll_reactor.ipp:504
    #1 0x561409cb442c in boost::asio::detail::scheduler::do_run_one(boost::asio::detail::conditionally_enabled_mutex::scoped_lock&, boost::asio::detail::scheduler_thread_info&, boost::system::error_code const&) /home/sehe/custom/boost_1_76_0/boost/asio/detail/impl/scheduler.ipp:470
    #2 0x561409cf2792 in boost::asio::detail::scheduler::run(boost::system::error_code&) /home/sehe/custom/boost_1_76_0/boost/asio/detail/impl/scheduler.ipp:204
Or on another run:
=================================================================
==31232==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 4 byte(s) in 1 object(s) allocated from:
    #0 0x7efd08fca717 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.6+0xb4717)
    #1 0x561409bc62b5 in main /home/sehe/Projects/stackoverflow/test.cpp:229
SUMMARY: AddressSanitizer: 4 byte(s) leaked in 1 allocation(s).
It already tells you "everything" you need to know. Coincidentally, it was the bug I referred to in my previous answer. To do graceful shutdown you have to synchronize on the thread. Detaching it ruins your chances forever. So, let's not detach it:
void RunInBackground()
{
if (!t_.joinable()) {
t_ = std::thread([this] { Run(); });
}
}
As you can see, this is captured, so you can never allow the thread to run past the destruction of the Server object.
And then in the destructor join it:
~Server()
{
if (t_.joinable()) {
t_.join();
}
}
Now, let's be thorough. We have two threads. They share objects. io_context is thread-safe, so that's fine. But tcp::acceptor is not. Neither might request_class_inst_. You need to synchronize more:
void Kill()
{
post(io_context_, [this] { acceptor_.close(); });
}
Now, note that this is NOT enough! .close() causes .cancel() on the acceptor, but that just makes the completion handler be invoked with error::operation_aborted. So, you need to prevent initiating DoAccept again in that case:
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (ec) {
std::cout << "Accept loop: " << ec.message() << std::endl;
} else {
std::make_shared<Session<RequestHandler, RequestClass>>(
std::move(socket), commands_, request_class_inst_)
->Run();
DoAccept();
}
});
}
I took the liberty of aborting on /any/ error. Err on the safe side: you prefer processes to exit instead of getting stuck in an unresponsive state or a high-CPU loop.
Regardless of this, you should be aware of the race condition between server startup/shutdown and your test client:
s1.RunInBackground();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(0).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to start
// true:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(1).IsAlive("127.0.0.1", 5000) << std::endl;
std::cout << "MakeRequest: " << clients.at(2).MakeRequest(
"127.0.0.1", 5000, {{"COMMAND", "MUL_2"}, {"VALUE", "21"}})
<< std::endl;
s1.Kill();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(3).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to be closed
// false:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(4).IsAlive("127.0.0.1", 5000) << std::endl;
Prints
IsAlive(240): true
IsAlive(245): true
MakeRequest: {"SUCCESS":false,"ERRORS":"not an int64"}
{"SUCCESS":false,"ERRORS":"not an int64"}
IsAlive(252): CLOSING
Accept loop: Operation canceled
THREAD EXIT
false
IsAlive(256): false
Complete Listing
Note that this also fixed the unnecessary leak of the RequestClass instance. You were already assuming copy-ability (because you were passing it by value in various places).
Also note that in MakeRequest we now no longer swallow any errors except EOF.
Like last time, I employ Boost Json for simplicity and to make the sample self-contained for StackOverflow.
Address sanitizer (ASan) and UBSan are silent. Life is good.
Live On Coliru
#include <boost/asio.hpp>
#include <boost/json.hpp>
#include <boost/json/src.hpp>
#include <iostream>
#include <deque>
using boost::asio::ip::tcp;
using boost::system::error_code;
namespace json = boost::json;
using Value = json::object;
using namespace std::chrono_literals;
static auto sleep_for(auto delay) { return std::this_thread::sleep_for(delay); }
/// NOTE: This class exists exclusively for unit testing.
struct RequestClass {
int n_;
Value add_n(Value const& request) const { return impl(std::plus<>{}, request); }
Value sub_n(Value const& request) const { return impl(std::minus<>{}, request); }
Value mul_n(Value const& request) const { return impl(std::multiplies<>{}, request); }
Value div_n(Value const& request) const { return impl(std::divides<>{}, request); }
private:
template <typename Op> Value impl(Op op, Value const& req) const {
return (req.contains("VALUE"))
? Value{{"VALUE", op(req.at("VALUE").as_int64(), n_)},
{"SUCCESS", true}}
: Value{{"ERRORS", "Invalid value."}, {"SUCCESS", false}};
}
};
using RequestClassMethod =
std::function<Value(RequestClass const&, Value const&)>;
template <class RequestHandler, class RequestClass>
class Session
: public std::enable_shared_from_this<
Session<RequestHandler, RequestClass>> {
public:
using CommandMap = std::map<std::string, RequestHandler>;
Session(tcp::socket socket, CommandMap commands,
RequestClass request_class_inst)
: socket_(std::move(socket))
, commands_(std::move(commands))
, request_class_inst_(std::move(request_class_inst))
{
}
void Run() { DoRead(); }
void Kill() { continue_ = false; }
private:
tcp::socket socket_;
CommandMap commands_;
RequestClass request_class_inst_;
bool continue_ = true;
char data_[2048];
std::string resp_;
void DoRead()
{
socket_.async_read_some(
boost::asio::buffer(data_),
[this, self = this->shared_from_this()](error_code ec, std::size_t length) {
if (!ec) {
DoWrite(length);
}
});
}
void DoWrite(std::size_t length)
{
Value json_resp;
try {
auto json_req = json::parse({data_, length}).as_object();
json_resp = ProcessRequest(json_req);
json_resp["SUCCESS"] = true;
} catch (std::exception const& ex) {
json_resp = {{"SUCCESS", false}, {"ERRORS", ex.what()}};
}
resp_ = json::serialize(json_resp);
boost::asio::async_write(socket_, boost::asio::buffer(resp_),
[this, self = this->shared_from_this()](
error_code ec, size_t bytes_xfered) {
if (!ec)
DoRead();
});
}
Value ProcessRequest(Value request)
{
auto command = request.contains("COMMAND")
? request["COMMAND"].as_string() //
: "";
std::string cmdstr(command.data(), command.size());
// If command is not valid, give a response with an error.
return commands_.contains(cmdstr)
? commands_.at(cmdstr)(request_class_inst_, request)
: Value{{"SUCCESS", false}, {"ERRORS", "Invalid command."}};
}
};
template <class RequestHandler, class RequestClass> class Server {
public:
using CommandMap = std::map<std::string, RequestHandler>;
Server(uint16_t port, CommandMap commands, RequestClass request_class_inst)
: acceptor_(io_context_, tcp::endpoint(tcp::v4(), port))
, commands_(std::move(commands))
, request_class_inst_(std::move(request_class_inst))
{
DoAccept();
}
~Server()
{
if (t_.joinable()) {
t_.join();
}
assert(not t_.joinable());
}
void Run()
{
io_context_.run();
}
void RunInBackground()
{
if (!t_.joinable()) {
t_ = std::thread([this] {
Run();
std::cout << "THREAD EXIT" << std::endl;
});
}
}
void Kill()
{
post(io_context_, [this] {
std::cout << "CLOSING" << std::endl;
acceptor_.close(); // causes .cancel() as well
});
}
private:
boost::asio::io_context io_context_;
tcp::acceptor acceptor_;
CommandMap commands_;
RequestClass request_class_inst_;
std::thread t_;
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (ec) {
std::cout << "Accept loop: " << ec.message() << std::endl;
} else {
std::make_shared<Session<RequestHandler, RequestClass>>(
std::move(socket), commands_, request_class_inst_)
->Run();
DoAccept();
}
});
}
};
class Client {
public:
/**
* Constructor, initializes JSON parser and serializer.
*/
Client() {}
Value MakeRequest(std::string const& ip_addr, uint16_t port,
Value const& request)
{
boost::asio::io_context io_context;
std::string serialized_req = serialize(request);
tcp::socket s(io_context);
s.connect({boost::asio::ip::address::from_string(ip_addr), port});
boost::asio::write(s, boost::asio::buffer(serialized_req));
s.shutdown(tcp::socket::shutdown_send);
char reply[2048];
error_code ec;
size_t reply_length = read(s, boost::asio::buffer(reply), ec);
if (ec && ec != boost::asio::error::eof) {
throw boost::system::system_error(ec);
}
// safe method:
std::string_view resp_str(reply, reply_length);
Value res = json::parse({reply, reply_length}).as_object();
std::cout << res << std::endl;
return res;
}
bool IsAlive(std::string const& ip_addr, unsigned short port)
{
boost::asio::io_context io_context;
tcp::socket s(io_context);
error_code ec;
s.connect({boost::asio::ip::address::from_string(ip_addr), port}, ec);
return not ec.failed();
}
};
int main()
{
std::cout << std::boolalpha;
std::deque<Client> clients(6);
Server<RequestClassMethod, RequestClass> s1(
5000,
{
{"ADD_2", std::mem_fn(&RequestClass::add_n)},
{"SUB_2", std::mem_fn(&RequestClass::sub_n)},
{"MUL_2", std::mem_fn(&RequestClass::mul_n)},
{"DIV_2", std::mem_fn(&RequestClass::div_n)},
},
RequestClass{1});
s1.RunInBackground();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(0).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to start
// true:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(1).IsAlive("127.0.0.1", 5000) << std::endl;
std::cout << "MakeRequest: " << clients.at(2).MakeRequest(
"127.0.0.1", 5000, {{"COMMAND", "MUL_2"}, {"VALUE", "21"}})
<< std::endl;
s1.Kill();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(3).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to be closed
// false:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(4).IsAlive("127.0.0.1", 5000) << std::endl;
}
I've been trying to build a messaging application using Boost::asio, and for some reason my modified version of chat_client does not receive any messages. I first thought the io_context was running out of handlers, but that was not the case. Weirdly, if I eliminate controller.cpp and create a chat_client object in main, my problems are solved.
chat_client.cpp (modified from the examples; I've hard-coded the IP of my server and the port for testing):
#include <string.h>
#include <string>
#include "chat_client.h"
using asio::ip::tcp;
typedef std::deque<chat_message> chat_message_queue;
chat_client::chat_client(std::string ip, std::string port)
: socket_(io_context_)
{
tcp::resolver resolver(io_context_);
endpoints = resolver.resolve(ip, port);
do_connect(endpoints);
t = new std::thread([this](){ io_context_.run(); });
}
chat_client::~chat_client()
{
t->join();
}
void chat_client::write(const chat_message& msg)
{
asio::post(io_context_,
[this, msg]()
{
bool write_in_progress = !write_msgs_.empty();
write_msgs_.push_back(msg);
if (!write_in_progress)
{
do_write();
}
});
}
void chat_client::close()
{
asio::post(io_context_, [this]() { socket_.close(); });
}
void chat_client::do_connect(const tcp::resolver::results_type& endpoints)
{
asio::async_connect(socket_, endpoints,
[this](std::error_code ec, tcp::endpoint)
{
if (!ec)
{
do_read_header();
}
});
}
void chat_client::do_read_header()
{
asio::async_read(socket_,
asio::buffer(read_msg_.data(), chat_message::header_length),
[this](std::error_code ec, std::size_t /*length*/)
{
if (!ec && read_msg_.decode_header())
{
do_read_body();
}
else
{
socket_.close();
}
});
}
void chat_client::do_read_body()
{
asio::async_read(socket_,
asio::buffer(read_msg_.body(), read_msg_.body_length()),
[this](std::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
std::cout.write(read_msg_.body(), read_msg_.body_length());
std::cout << "\n";
do_read_header();
}
else
{
socket_.close();
}
});
}
void chat_client::do_write()
{
asio::async_write(socket_,
asio::buffer(write_msgs_.front().data(),
write_msgs_.front().length()),
[this](std::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
write_msgs_.pop_front();
if (!write_msgs_.empty())
{
do_write();
}
}
else
{
socket_.close();
}
});
}
controller.cpp
#include <stdexcept>
#include "controller.h"
//#include "chat_client.h" inside controller.h
/*Constructor*/
Controller::Controller() {}
void Controller::execute_cmd(int cmd)
{
switch(cmd)
{
case 1: //Create chat_client
{
try
{
c = new chat_client("192.168.0.11", "5000");
chat_message msg;
msg.body_length(5);
std::memcpy(msg.body(), "test", msg.body_length());
msg.encode_header();
c->write(msg);
}
catch(std::runtime_error& e)
{
std::cerr << "Cannot create chat_client" << std::endl;
}
}
case 2: //Close chat_client
{
c->close();
delete c;
}
case 3: //TESTING
{
char line[chat_message::max_body_length + 1];
while (std::cin.getline(line, chat_message::max_body_length + 1))
{
chat_message msg;
msg.body_length(std::strlen(line));
std::memcpy(msg.body(), line, msg.body_length());
msg.encode_header();
c->write(msg);
}
}
}
}
main.cpp used for testing
#include "controller.h"
int main(int argc, char** argv)
{
Controller c;
c.execute_cmd(1); //create chat_client
c.execute_cmd(2); //Start sending messages
return 0;
}
Your switch statement in execute_cmd has no break at the end of each case. In a switch statement, if a case does not end with a break, execution continues ("falls through") into the next case. So c.execute_cmd(1) really executes commands 1, 2 and 3. Remember that the read and write I/O are asynchronous, so in command 2 you close the connection while they may still be in flight.
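A minimal, self-contained illustration of the fix (the case labels just mirror controller.cpp; the bodies are stand-ins for your real ones):
#include <iostream>
void execute_cmd(int cmd) {
    switch (cmd) {
    case 1: // Create chat_client
        std::cout << "case 1: create client and send test message\n";
        break; // without this, execution would fall through into case 2
    case 2: // Close chat_client
        std::cout << "case 2: close client\n";
        break;
    case 3: // TESTING
        std::cout << "case 3: read lines and send them\n";
        break;
    }
}
int main() {
    execute_cmd(1); // with the breaks in place, this now runs only case 1
}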
Here's my implementation:
Client A sends a message for Client B.
The server processes the message by async_reading the right amount of data, then waits for new data from Client A (in order not to block Client A).
Afterwards the server will process the information (probably do a MySQL query) and then send the message to Client B with async_write.
The problem is that if Client A sends messages really fast, the async_writes will interleave before the previous async_write's completion handler is called.
Is there a simple way to avoid this problem?
EDIT 1:
If a Client C sends a message to Client B just after Client A does, the same issue should appear...
EDIT 2:
Would this work? It seems to block somewhere, but I can't tell where...
namespace structure {
class User {
public:
User(boost::asio::io_service& io_service, boost::asio::ssl::context& context) :
m_socket(io_service, context), m_strand(io_service), is_writing(false) {}
ssl_socket& getSocket() {
return m_socket;
}
boost::asio::strand getStrand() {
return m_strand;
}
void push(std::string str) {
m_strand.post(boost::bind(&structure::User::strand_push, this, str));
}
void strand_push(std::string str) {
std::cout << "pushing: " << boost::this_thread::get_id() << std::endl;
m_queue.push(str);
if (!is_writing) {
write();
std::cout << "going to write" << std::endl;
}
std::cout << "Already writing" << std::endl;
}
void write() {
std::cout << "writing" << std::endl;
is_writing = true;
// NB: str is a local copy, so the buffer handed to async_write below
// dangles as soon as write() returns, while the operation is still running
std::string str = m_queue.front();
boost::asio::async_write(m_socket,
boost::asio::buffer(str.c_str(), str.size()),
boost::bind(&structure::User::sent, this)
);
}
void sent() {
std::cout << "sent" << std::endl;
m_queue.pop();
if (!m_queue.empty()) {
write();
return;
}
else
is_writing = false;
std::cout << "done sent" << std::endl;
}
private:
ssl_socket m_socket;
boost::asio::strand m_strand;
std::queue<std::string> m_queue;
bool is_writing;
};
}
Is there a simple way to avoid this problem?
Yes: maintain an outgoing queue for each client. Inspect the queue size in the async_write completion handler; if it is non-zero, start another async_write operation. Here is a sample:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <deque>
#include <iostream>
#include <string>
class Connection
{
public:
Connection(
boost::asio::io_service& io_service
) :
_io_service( io_service ),
_strand( _io_service ),
_socket( _io_service ),
_outbox()
{
}
void write(
const std::string& message
)
{
_strand.post(
boost::bind(
&Connection::writeImpl,
this,
message
)
);
}
private:
void writeImpl(
const std::string& message
)
{
_outbox.push_back( message );
if ( _outbox.size() > 1 ) {
// outstanding async_write
return;
}
this->write();
}
void write()
{
const std::string& message = _outbox[0];
boost::asio::async_write(
_socket,
boost::asio::buffer( message.c_str(), message.size() ),
_strand.wrap(
boost::bind(
&Connection::writeHandler,
this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred
)
)
);
}
void writeHandler(
const boost::system::error_code& error,
const size_t bytesTransferred
)
{
_outbox.pop_front();
if ( error ) {
std::cerr << "could not write: " << boost::system::system_error(error).what() << std::endl;
return;
}
if ( !_outbox.empty() ) {
// more messages to send
this->write();
}
}
private:
typedef std::deque<std::string> Outbox;
private:
boost::asio::io_service& _io_service;
boost::asio::io_service::strand _strand;
boost::asio::ip::tcp::socket _socket;
Outbox _outbox;
};
int
main()
{
boost::asio::io_service io_service;
Connection foo( io_service );
}
Some key points:
the boost::asio::io_service::strand protects access to Connection::_outbox
a handler is dispatched through the strand from Connection::write(), since write() is public and may be called from any thread
it wasn't obvious to me whether you were using similar practices in the example in your question, since all of its methods are public
A short usage sketch of this class follows.
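As a usage note, here is a hypothetical driver for the sample above (not part of the original answer): two threads run the io_service, yet write() may be called freely from either, because every touch of _outbox is funnelled through the strand. The socket is never connected here, so the writes simply complete with an error:

#include <thread>
// reuses the Connection class from the sample above

int main()
{
    boost::asio::io_service io_service;
    Connection foo( io_service );
    foo.write( "first message\n" );  // posts writeImpl through the strand
    foo.write( "second message\n" ); // queued behind the first
    std::thread worker( [&io_service]() { io_service.run(); } );
    io_service.run(); // handlers run on both threads, serialized by the strand
    worker.join();
}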
Just trying to improve Sam's great answer. The improvement points are:
async_write tries hard to send every single byte from the buffer(s) before completing, which means you should supply all the input data that you have to the write operation; otherwise the framing overhead may increase because the TCP packets end up smaller than they could have been.
asio::streambuf, while being very convenient to use, is not zero-copy. The example below demonstrates a zero-copy approach: keep the input data chunks where they are and use a scatter/gather overload of async_write that takes in a sequence of input buffers (which are just pointers to the actual input data); a short standalone sketch of that overload follows, before the full source.
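Here is the promised standalone sketch of the scatter/gather overload. The names chunks_, seq_ and socket_ are made up for this fragment, which assumes a connected tcp::socket and that the string data outlives the operation (e.g. all three are class members):

std::vector<std::string> chunks_ { "first chunk, ", "second chunk, ", "third chunk" };
std::vector<boost::asio::const_buffer> seq_;
for (const auto& chunk : chunks_)
    seq_.push_back(boost::asio::buffer(chunk)); // stores pointer + size only, no copy
boost::asio::async_write(socket_, seq_,        // one gather-write sends all chunks
    [](const boost::system::error_code& ec, std::size_t n) {
        if (!ec)
            std::cout << "wrote " << n << " bytes in a single operation\n";
    });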
Full source code:
#include <boost/asio.hpp>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_set>
#include <vector>
using namespace std::chrono_literals;
using boost::asio::ip::tcp;
class Server
{
class Connection : public std::enable_shared_from_this<Connection>
{
friend class Server;
void ProcessCommand(const std::string& cmd) {
if (cmd == "stop") {
server_.Stop();
return;
}
if (cmd == "") {
Close();
return;
}
std::thread t([this, self = shared_from_this(), cmd] {
for (int i = 0; i < 30; ++i) {
Write("Hello, " + cmd + " " + std::to_string(i) + "\r\n");
}
server_.io_service_.post([this, self] {
DoReadCmd();
});
});
t.detach();
}
void DoReadCmd() {
read_timer_.expires_from_now(server_.read_timeout_);
read_timer_.async_wait([this](boost::system::error_code ec) {
if (!ec) {
std::cout << "Read timeout\n";
Shutdown();
}
});
boost::asio::async_read_until(socket_, buf_in_, '\n', [this, self = shared_from_this()](boost::system::error_code ec, std::size_t bytes_read) {
read_timer_.cancel();
if (!ec) {
const char* p = boost::asio::buffer_cast<const char*>(buf_in_.data());
std::string cmd(p, bytes_read - (bytes_read > 1 && p[bytes_read - 2] == '\r' ? 2 : 1));
buf_in_.consume(bytes_read);
ProcessCommand(cmd);
}
else {
Close();
}
});
}
void DoWrite() {
active_buffer_ ^= 1; // switch buffers
for (const auto& data : buffers_[active_buffer_]) {
buffer_seq_.push_back(boost::asio::buffer(data));
}
write_timer_.expires_from_now(server_.write_timeout_);
write_timer_.async_wait([this](boost::system::error_code ec) {
if (!ec) {
std::cout << "Write timeout\n";
Shutdown();
}
});
boost::asio::async_write(socket_, buffer_seq_, [this, self = shared_from_this()](const boost::system::error_code& ec, size_t bytes_transferred) {
write_timer_.cancel();
std::lock_guard<std::mutex> lock(buffers_mtx_);
buffers_[active_buffer_].clear();
buffer_seq_.clear();
if (!ec) {
std::cout << "Wrote " << bytes_transferred << " bytes\n";
if (!buffers_[active_buffer_ ^ 1].empty()) // have more work
DoWrite();
else if (closing_) // a Close() arrived while this write was in flight
Shutdown();
}
else {
Close();
}
});
}
bool Writing() const { return !buffer_seq_.empty(); }
Server& server_;
boost::asio::streambuf buf_in_;
std::mutex buffers_mtx_;
std::vector<std::string> buffers_[2]; // a double buffer
std::vector<boost::asio::const_buffer> buffer_seq_;
int active_buffer_ = 0;
bool closing_ = false;
bool closed_ = false;
boost::asio::deadline_timer read_timer_, write_timer_;
tcp::socket socket_;
public:
Connection(Server& server) : server_(server), read_timer_(server.io_service_), write_timer_(server.io_service_), socket_(server.io_service_) {
}
void Start() {
socket_.set_option(tcp::no_delay(true));
DoReadCmd();
}
void Close() {
closing_ = true;
if (!Writing())
Shutdown();
}
void Shutdown() {
if (!closed_) {
closing_ = closed_ = true;
boost::system::error_code ec;
socket_.shutdown(tcp::socket::shutdown_both, ec);
socket_.close();
server_.active_connections_.erase(shared_from_this());
}
}
void Write(std::string&& data) {
std::lock_guard<std::mutex> lock(buffers_mtx_);
buffers_[active_buffer_ ^ 1].push_back(std::move(data)); // move input data to the inactive buffer
if (!Writing())
DoWrite();
}
};
void DoAccept() {
if (acceptor_.is_open()) {
auto session = std::make_shared<Connection>(*this);
acceptor_.async_accept(session->socket_, [this, session](boost::system::error_code ec) {
if (!ec) {
active_connections_.insert(session);
session->Start();
}
DoAccept();
});
}
}
boost::asio::io_service io_service_;
tcp::acceptor acceptor_;
std::unordered_set<std::shared_ptr<Connection>> active_connections_;
const boost::posix_time::time_duration read_timeout_ = boost::posix_time::seconds(30);
const boost::posix_time::time_duration write_timeout_ = boost::posix_time::seconds(30);
public:
Server(int port) : acceptor_(io_service_, tcp::endpoint(tcp::v6(), port), false) { }
void Run() {
std::cout << "Listening on " << acceptor_.local_endpoint() << "\n";
DoAccept();
io_service_.run();
}
void Stop() {
acceptor_.close();
{
std::vector<std::shared_ptr<Connection>> sessionsToClose;
std::copy(active_connections_.begin(), active_connections_.end(), std::back_inserter(sessionsToClose));
for (auto& s : sessionsToClose)
s->Shutdown();
}
active_connections_.clear();
io_service_.stop();
}
};
int main() {
try {
Server srv(8888);
srv.Run();
}
catch (const std::exception& e) {
std::cerr << "Error: " << e.what() << "\n";
}
}