How do I make this HTTPS connection persistent in Beast? - c++

I'm making about 30,000 queries to a GraphQL server; because I have a high-latency connection, I'm doing many queries in parallel, using threads. Currently each query makes a new connection; I'd like to reuse the connections, which should reduce the time the whole download takes. Here's my code:
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/beast.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/asio/ssl/error.hpp>
#include <boost/asio/ssl/stream.hpp>
#include <chrono>
#include <vector>
#include <array>
#include <iostream>
#include "http.h"
namespace beast=boost::beast;
namespace http=beast::http;
namespace net=boost::asio;
namespace ssl=net::ssl;
using tcp=net::ip::tcp;
using namespace std;
namespace cr=chrono;
struct TimeBytes
/* Used to compute the latency and data rate, which will be used
* to compute the number of I/O threads for the next run.
*/
{
float ms;
int bytes;
};
cr::steady_clock clk;
vector<TimeBytes> timeBytes;
mutex timeBytesMutex;
thread_local string lastProto,lastHost,lastPort;
array<string,4> parseUrl(string url)
// protocol, hostname, port, path. All are strings, including the port.
{
size_t pos0=url.find("://");
size_t pos1;
array<string,4> ret;
ret[0]=url.substr(0,pos0);
if (pos0<url.length())
pos0+=3;
pos1=url.find("/",pos0);
ret[1]=url.substr(pos0,pos1-pos0);
ret[3]=url.substr(pos1);
pos0=ret[1].find(":");
if (pos0<ret[1].length())
{
ret[2]=ret[1].substr(pos0+1);
ret[1]=ret[1].substr(0,pos0);
}
else
if (ret[0]=="https")
ret[2]="443";
else if (ret[0]=="http")
ret[2]="80";
else
ret[2]="0";
return ret;
}
string httpPost(string url,string data)
{
net::io_context context;
ssl::context ctx(ssl::context::tlsv12_client);
tcp::resolver res(context);
tcp::resolver::results_type endpoints;
beast::ssl_stream<beast::tcp_stream> stream(context,ctx);
array<string,4> parsed=parseUrl(url);
http::request<http::string_body> req;
http::response<http::string_body> resp;
beast::flat_buffer buffer;
TimeBytes tb;
cr::nanoseconds elapsed;
cr::time_point<cr::steady_clock> timeStart=clk.now();
//if (parsed[0]==lastProto && parsed[1]==lastHost && parsed[2]==lastPort)
//cout<<"same host\n";
//load_root_certificates(ctx);
try
{
ctx.set_verify_mode(ssl::verify_peer);
endpoints=res.resolve(parsed[1],parsed[2]);
beast::get_lowest_layer(stream).connect(endpoints);
SSL_set_tlsext_host_name(stream.native_handle(),parsed[1].c_str());
if (parsed[0]=="https")
stream.handshake(net::ssl::stream_base::client);
req.method(http::verb::post);
req.target(parsed[3]);
req.set(http::field::host,parsed[1]);
req.set(http::field::connection,"keep-alive");
req.set(http::field::user_agent,BOOST_BEAST_VERSION_STRING);
req.set(http::field::content_type,"application/json");
req.set(http::field::accept,"application/json");
req.body()=data;
req.prepare_payload();
http::write(stream,req);
http::read(stream,buffer,resp);
elapsed=clk.now()-timeStart;
tb.ms=elapsed.count()/1e6;
tb.bytes=req.body().size()+resp.body().size()+7626;
// 7626 accounts for HTTP, TCP, IP, and Ethernet headers.
timeBytesMutex.lock();
timeBytes.push_back(tb);
timeBytesMutex.unlock();
beast::close_socket(beast::get_lowest_layer(stream));
if (DEBUG_QUERY)
{
cout<<parsed[0]<<"|\n"<<parsed[1]<<"|\n"<<parsed[2]<<"|\n"<<parsed[3]<<"|\n";
cout<<data<<"|\n";
}
}
catch (...)
{
}
lastProto=parsed[0];
lastHost=parsed[1];
lastPort=parsed[2];
return resp.body();
}
Most of the requests are to one server. A few GET requests are made to another server (using an httpGet function which is pretty similar to httpPost). After I download the data, I crunch them, so I'd like to close the connections before starting to crunch.
I tried making context, ctx, and stream thread-local, and calling stream.shutdown() and context.restart() before close_socket(), but the program crashed the second time the main thread called httpPost: http::read threw an error. (A worker thread made one query between the main thread's two queries.) At that point I was not yet trying to keep the connection open, just trying to make thread-local work so that I could keep the connection open later.
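For reference, the kind of per-thread connection reuse I'm aiming for might look roughly like the sketch below. This is not the code that crashed; it is a minimal illustration, and names such as ensureConnected, tlsStream and httpPostReusing are made up for the example (certificate verification and TLS shutdown are omitted for brevity).
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/beast.hpp>
#include <boost/beast/ssl.hpp>
#include <optional>
#include <string>
namespace beast=boost::beast;
namespace http=beast::http;
namespace net=boost::asio;
namespace ssl=net::ssl;
using tcp=net::ip::tcp;
// One connection per I/O thread, kept alive between calls.
thread_local net::io_context tlIoc;
thread_local ssl::context tlsCtx{ssl::context::tlsv12_client};
thread_local std::optional<beast::ssl_stream<beast::tcp_stream>> tlsStream;
thread_local std::string tlHost,tlPort; // what the current stream is connected to
// (Re)connect only when there is no usable stream for this host:port.
static void ensureConnected(const std::string &host,const std::string &port)
{
if (tlsStream && host==tlHost && port==tlPort)
return; // reuse the existing connection
tlsStream.emplace(tlIoc,tlsCtx);
tcp::resolver res(tlIoc);
beast::get_lowest_layer(*tlsStream).connect(res.resolve(host,port));
SSL_set_tlsext_host_name(tlsStream->native_handle(),host.c_str());
tlsStream->handshake(ssl::stream_base::client);
tlHost=host;
tlPort=port;
}
std::string httpPostReusing(const std::string &host,const std::string &port,
const std::string &path,const std::string &data)
{
ensureConnected(host,port);
http::request<http::string_body> req{http::verb::post,path,11,data};
req.set(http::field::host,host);
req.set(http::field::connection,"keep-alive");
req.set(http::field::content_type,"application/json");
req.prepare_payload();
http::response<http::string_body> resp;
beast::flat_buffer buffer;
try
{
http::write(*tlsStream,req);
http::read(*tlsStream,buffer,resp);
if (!resp.keep_alive())
tlsStream.reset(); // server will close; reconnect on the next call
}
catch (...)
{
tlsStream.reset(); // drop the broken connection so the next call reconnects
throw;
}
return resp.body();
}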

I'd strongly suggest using async interfaces. Since the majority of time is obviously spent waiting for the IO, you likely can get all the throughput from just a single thread.
Here's an example that does answer your question (how to keep a client open for more than one request) while making the processing asynchronous. Right now, the downside is that all requests on a single client need to be sequenced (that's what I used the _tasks queue for). However this should probably serve as inspiration.
Note that the initiation functions work with all completion handler result types: net::use_future, net::spawn (coroutines) etc.
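For example, a one-off call through a std::future instead of a callback could look like this (illustrative only; it assumes the Client, the io thread pool and the ssl context set up as in the listing below):
auto client = std::make_shared<Client>(make_strand(io.get_executor()),
                                       Url::Spec{"httpbin.org", "443"}, ctx);
// net::use_future turns the (error_code, std::string) completion into a future;
// get() rethrows any failure as boost::system::system_error
std::future<std::string> fut =
    client->async_post("/post", R"({"hello":"world"})", net::use_future);
std::cout << fut.get() << "\n";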
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/beast.hpp>
#include <boost/beast/ssl.hpp>
#include <chrono>
#include <deque>
#include <iomanip>
#include <iostream>
namespace net = boost::asio;
namespace ssl = net::ssl;
namespace beast = boost::beast;
namespace http = beast::http;
using clk = std::chrono::steady_clock;
using net::ip::tcp;
using beast::error_code;
using namespace std::chrono_literals;
/* Used to compute the latency and data rate, which will be used to compute the
* number of I/O threads for the next run. */
struct TimeBytes {
long double ms;
size_t bytes;
};
static std::vector<TimeBytes> timeBytes;
static std::mutex timeBytesMutex;
struct Url {
struct Spec {
std::string hostname, port;
bool operator<(Spec const& rhs) const {
return std::tie(hostname, port) < std::tie(rhs.hostname, rhs.port);
}
};
std::string protocol, hostname, port, path;
Spec specification() const { return {hostname, port}; }
};
#include <boost/spirit/home/x3.hpp>
#include <boost/fusion/adapted/std_tuple.hpp>
namespace x3 = boost::spirit::x3;
Url parseUrl(std::string const& url)
{
Url ret;
std::string hostport;
{
static const auto url_ = *(x3::char_ - "://") >> "://" // protocol
>> +~x3::char_('/') // hostname
>> *x3::char_; // path
auto into = std::tie(ret.protocol, hostport, ret.path);
parse(begin(url), end(url), x3::expect[url_], into);
}
{
static const auto portspec_ = -(':' >> x3::uint_) >> x3::eoi;
static const auto hostport_ =
x3::raw[+(+~x3::char_(':') | !portspec_ >> x3::char_)] //
>> -portspec_;
boost::optional<uint16_t> port;
auto into = std::tie(ret.hostname, port);
parse(begin(hostport), end(hostport), x3::expect[hostport_], into);
if (port.has_value()) { ret.port = std::to_string(*port); }
else if (ret.protocol == "https") { ret.port = "443"; }
else if (ret.protocol == "http") { ret.port = "80"; }
else { ret.port = "0"; }
}
return ret;
}
struct Client : std::enable_shared_from_this<Client> {
public:
Client(net::any_io_executor ex, Url::Spec spec, ssl::context& ctx)
: _executor(ex)
, _spec(spec)
, _sslcontext(ctx)
{
}
template <typename Token>
auto async_request(http::verb verb, std::string const& path,
std::string const& data, Token&& token)
{
using R = typename net::async_result<std::decay_t<Token>,
void(error_code, std::string)>;
using H = typename R::completion_handler_type;
H handler(std::forward<Token>(token));
R result(handler);
auto chain_tasks = [this, h = std::move(handler),
self = shared_from_this()](auto&&... args) mutable {
if (!self->_tasks.empty()) {
dispatch(self->_executor, [this, self] {
if (not _tasks.empty()) _tasks.pop_front();
if (not _tasks.empty()) _tasks.front()->initiate();
});
}
std::move(h)(std::forward<decltype(args)>(args)...);
};
auto task = std::make_shared<RequestOp<decltype(chain_tasks)>>(
this, verb, path, data, chain_tasks);
enqueue(std::move(task));
return result.get();
}
template <typename Token>
auto async_post(std::string const& path, std::string const& data,
Token&& token)
{
return async_request(http::verb::post,path, data, std::forward<Token>(token));
}
template <typename Token>
auto async_get(std::string const& path, Token&& token)
{
return async_request(http::verb::get,path, "", std::forward<Token>(token));
}
private:
template <typename Token> auto async_reconnect(Token&& token)
{
using R = typename net::async_result<std::decay_t<Token>, void(error_code)>;
using H = typename R::completion_handler_type;
H handler(std::forward<Token>(token));
R result(handler);
assert(!_stream.has_value()); // probably a program flow bug
_stream.emplace(_executor, _sslcontext);
std::make_shared<ReconnectOp<H>>(this, std::move(handler))->start();
return result.get();
}
template <typename Handler>
struct ReconnectOp : std::enable_shared_from_this<ReconnectOp<Handler>> {
ReconnectOp(Client* client, Handler h)
: _client{client}
, _handler(std::move(h))
, _resolver(client->_stream->get_executor())
{
}
Client* _client;
Handler _handler;
tcp::resolver _resolver;
bool checked(error_code ec, bool complete = false) {
if (complete || ec)
std::move(_handler)(ec);
if (ec && _client->_stream.has_value())
{
std::cerr << "Socket " << _client->_stream->native_handle()
<< " closed due to " << ec.message() << std::endl;
_client->_stream.reset();
}
return !ec.failed();
}
void start()
{
_resolver.async_resolve(
_client->_spec.hostname, _client->_spec.port,
beast::bind_front_handler(&ReconnectOp::on_resolved,
this->shared_from_this()));
}
void on_resolved(error_code ec, tcp::resolver::results_type ep)
{
if (checked(ec)) {
beast::get_lowest_layer(*_client->_stream)
.async_connect(
ep,
beast::bind_front_handler(&ReconnectOp::on_connected,
this->shared_from_this()));
}
}
void on_connected(error_code ec, tcp::endpoint ep) {
if (checked(ec)) {
std::cerr << "Socket " << _client->_stream->native_handle()
<< " (re)connected to " << ep << std::endl;
auto& hostname = _client->_spec.hostname;
SSL_set_tlsext_host_name(_client->_stream->native_handle(),
hostname.c_str());
_client->_stream->async_handshake(
Stream::client,
beast::bind_front_handler(&ReconnectOp::on_ready,
this->shared_from_this()));
}
}
void on_ready(error_code ec) {
checked(ec, true);
}
};
struct IAsyncTask {
virtual void initiate() = 0;
};
template <typename Handler>
struct RequestOp : IAsyncTask, std::enable_shared_from_this<RequestOp<Handler>> {
RequestOp(Client* client, http::verb verb, std::string const& path,
std::string data, Handler h)
: _client(client)
, _handler(std::move(h))
, _request(verb, path, 11, std::move(data))
{
_request.set(http::field::host, _client->_spec.hostname);
_request.set(http::field::connection, "keep-alive");
_request.set(http::field::user_agent, BOOST_BEAST_VERSION_STRING);
_request.set(http::field::content_type, "application/json");
_request.set(http::field::accept, "application/json");
_request.prepare_payload();
}
Client* _client;
Handler _handler;
http::request<http::string_body> _request;
http::response<http::string_body> _response;
beast::flat_buffer _buffer;
size_t _bandwidth = 0;
clk::time_point _start = clk::now();
bool checked(error_code ec, bool complete = false) {
if (complete || ec)
std::move(_handler)(ec, std::move(_response.body()));
if (ec)
_client->_stream.reset();
return !ec.failed();
}
void initiate() override
{
if (!_client->_stream.has_value()) {
_client->async_reconnect(beast::bind_front_handler(
&RequestOp::on_connected, this->shared_from_this()));
} else {
on_connected(error_code{});
}
}
void on_connected(error_code ec) {
if (!checked(ec)) // the (re)connect failed; propagate the error and bail out
return;
_start = clk::now(); // This matches the start of measurements in
// the original, synchronous code
http::async_write(*_client->_stream, _request,
beast::bind_front_handler(
&RequestOp::on_sent, this->shared_from_this()));
}
void on_sent(error_code ec, size_t transferred) {
_bandwidth += transferred; // measuring actual bytes including HTTP headers
if (checked(ec)) {
http::async_read(
*_client->_stream, _buffer, _response,
beast::bind_front_handler(&RequestOp::on_response,
this->shared_from_this()));
}
}
void on_response(error_code ec, size_t transferred) {
_bandwidth += transferred; // measuring actual bytes including HTTP headers
std::lock_guard lk(timeBytesMutex);
timeBytes.push_back({(clk::now() - _start) / 1.0ms, _bandwidth});
checked(ec, true);
}
};
private:
net::any_io_executor _executor;
Url::Spec _spec;
ssl::context& _sslcontext;
using Stream = beast::ssl_stream<beast::tcp_stream>;
std::optional<Stream> _stream; // nullopt when disconnected
// task queueing
using AsyncTask = std::shared_ptr<IAsyncTask>;
std::deque<AsyncTask> _tasks;
void enqueue(AsyncTask task) {
post(_executor,
[=, t = std::move(task), this, self = shared_from_this()] {
_tasks.push_back(std::move(t));
if (_tasks.size() == 1) {
_tasks.front()->initiate();
}
});
}
};
int main()
{
ssl::context ctx(ssl::context::tlsv12_client);
ctx.set_verify_mode(ssl::verify_peer);
ctx.set_default_verify_paths();
// load_root_certificates(ctx);
net::thread_pool io(1);
std::map<Url::Spec, std::shared_ptr<Client> > pool;
using V = http::verb;
for (auto [url, verb, data] : {
std::tuple //
{"https://httpbin.org/post", V::post, "post data"},
{"https://httpbin.org/delay/5", V::delete_, ""},
{"https://httpbin.org/base64/ZGVjb2RlZCBiYXM2NA==", V::get, ""},
{"https://httpbin.org/delay/7", V::patch, ""},
{"https://httpbin.org/stream/3", V::get, ""},
{"https://httpbin.org/uuid", V::get, ""},
}) //
{
auto parsed = parseUrl(url);
std::cout << std::quoted(parsed.protocol) << " "
<< std::quoted(parsed.hostname) << " "
<< std::quoted(parsed.port) << " "
<< std::quoted(parsed.path) << "\n";
auto spec = parsed.specification();
if (!pool.contains(spec)) {
pool.emplace(spec,
std::make_shared<Client>(
make_strand(io.get_executor()), spec, ctx));
}
pool.at(spec)->async_request(
verb, parsed.path, data,
[=, v = verb, u = url](error_code ec, std::string const& body) {
std::cout << v << " to " << u << ": " << std::quoted(body)
<< std::endl;
});
}
io.join();
for (auto& [time, bytes] : timeBytes) {
std::cout << bytes << " bytes in " << time << "ms\n";
}
}
On my system this prints
"https" "httpbin.org" "443" "/post"
"https" "httpbin.org" "443" "/delay/5"
"https" "httpbin.org" "443" "/base64/ZGVjb2RlZCBiYXM2NA=="
"https" "httpbin.org" "443" "/delay/7"
"https" "httpbin.org" "443" "/stream/3"
"https" "httpbin.org" "443" "/uuid"
Socket 0x7f4ad4001060 (re)connected to 18.232.227.86:443
POST to https://httpbin.org/post: "{
\"args\": {},
\"data\": \"post data\",
\"files\": {},
\"form\": {},
\"headers\": {
\"Accept\": \"application/json\",
\"Content-Length\": \"9\",
\"Content-Type\": \"application/json\",
\"Host\": \"httpbin.org\",
\"User-Agent\": \"Boost.Beast/318\",
\"X-Amzn-Trace-Id\": \"Root=1-618b513c-2c51c112061b10456a5e3d4e\"
},
\"json\": null,
\"origin\": \"163.158.244.77\",
\"url\": \"https://httpbin.org/post\"
}
"
DELETE to https://httpbin.org/delay/5: "{
\"args\": {},
\"data\": \"\",
\"files\": {},
\"form\": {},
\"headers\": {
\"Accept\": \"application/json\",
\"Content-Type\": \"application/json\",
\"Host\": \"httpbin.org\",
\"User-Agent\": \"Boost.Beast/318\",
\"X-Amzn-Trace-Id\": \"Root=1-618b513c-324c97504eb79d8b743c6c5d\"
},
\"origin\": \"163.158.244.77\",
\"url\": \"https://httpbin.org/delay/5\"
}
"
GET to https://httpbin.org/base64/ZGVjb2RlZCBiYXM2NA==: "decoded bas64"
PATCH to https://httpbin.org/delay/7: "{
\"args\": {},
\"data\": \"\",
\"files\": {},
\"form\": {},
\"headers\": {
\"Accept\": \"application/json\",
\"Content-Type\": \"application/json\",
\"Host\": \"httpbin.org\",
\"User-Agent\": \"Boost.Beast/318\",
\"X-Amzn-Trace-Id\": \"Root=1-618b5141-3a8c30e60562df583061fc5a\"
},
\"origin\": \"163.158.244.77\",
\"url\": \"https://httpbin.org/delay/7\"
}
"
GET to https://httpbin.org/stream/3: "{\"url\": \"https://httpbin.org/stream/3\", \"args\": {}, \"headers\": {\"Host\": \"httpbin.org\", \"X-Amzn-Trace-Id\": \"Root=1-618b5148-45fce8a8432930a006c0a574\", \"User-Agent\": \"Boost.Beast/318\", \"Content-Type\": \"application/json\", \"Accept\": \"application/json\"}, \"origin\": \"163.158.244.77\", \"id\": 0}
{\"url\": \"https://httpbin.org/stream/3\", \"args\": {}, \"headers\": {\"Host\": \"httpbin.org\", \"X-Amzn-Trace-Id\": \"Root=1-618b5148-45fce8a8432930a006c0a574\", \"User-Agent\": \"Boost.Beast/318\", \"Content-Type\": \"application/json\", \"Accept\": \"application/json\"}, \"origin\": \"163.158.244.77\", \"id\": 1}
{\"url\": \"https://httpbin.org/stream/3\", \"args\": {}, \"headers\": {\"Host\": \"httpbin.org\", \"X-Amzn-Trace-Id\": \"Root=1-618b5148-45fce8a8432930a006c0a574\", \"User-Agent\": \"Boost.Beast/318\", \"Content-Type\": \"application/json\", \"Accept\": \"application/json\"}, \"origin\": \"163.158.244.77\", \"id\": 2}
"
GET to https://httpbin.org/uuid: "{
\"uuid\": \"4557c909-880e-456c-8ef9-049a72f5fda1\"
}
"
826 bytes in 84.9807ms
752 bytes in 5267.26ms
425 bytes in 84.6031ms
751 bytes in 7085.28ms
1280 bytes in 86.6554ms
434 bytes in 85.0086ms
Note:
httpbin.org has all manner of test urls - some of which generate long delays, hence the timings
there's only 1 connection. In case of an IO error, we disconnect (and things should reconnect on the next request)
HTTP errors are not "errors" in that the connection stays valid
The DNS resolve, connect and handshake are also asynchronous

Related

Boost Beast Async Websocket Server How to interface with session?

I don't know why, but I can't wrap my head around the Boost.Beast websocket server and how you can (or should) interact with it.
The basic program I made looks like this, across two classes (WebSocketListener and WebSocketSession):
https://www.boost.org/doc/libs/develop/libs/beast/example/websocket/server/async/websocket_server_async.cpp
Everything works great: I can connect, and it echoes messages. We will only ever have one active session, and I'm struggling to understand how I can interface with this session from outside its class, for example in my int main() or in another class that may be responsible for issuing reads/writes. We will be using a simple Command design pattern: commands arrive asynchronously into a buffer, get processed against hardware, and the results are written back out with async_write. The reading and queuing is straightforward and will be done in the WebSocketSession, but every example I see just reads/writes directly inside the session and never takes external input.
I've seen examples using things like boost::asio::async_write(socket, buffer, ...) but I'm struggling to understand how I get a reference to said socket when the session is created by the listener itself.
Instead of depending on a socket from outside of the session, I'd depend on your program logic to implement the session.
That's because the session (connection) will govern its own lifetime, arriving spontaneously and potentially disconnecting spontaneously. Your hardware, most likely, doesn't.
So, borrowing the concept of "Dependency Injection" tell your listener about your application logic, and then call into that from the session. (The listener will "inject" the dependency into each newly created session).
Let's start from a simplified/modernized version of your linked example.
Now, where we prepare a response, you want your own logic injected, so let's write it how we would imagine it:
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
response_ = logic_->Process(beast::buffers_to_string(buffer_));
ws_.async_write(
net::buffer(response_),
beast::bind_front_handler(&session::on_write, shared_from_this()));
}
Here we declare the members and initialize them from the constructor:
std::string response_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
Now, we need to inject the listener with the logic so we can pass it along:
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(ex)
, logic_(logic) {
So that we can pass it along:
void on_accept(beast::error_code ec, tcp::socket socket) {
if (ec) {
fail(ec, "accept");
} else {
std::make_shared<session>(std::move(socket), logic_)->run();
}
// Accept another connection
do_accept();
}
Now making the real logic in main:
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
std::make_shared<listener>(ioc.get_executor(),
tcp::endpoint{address, port}, logic)
->run();
ioc.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
Demo Logic
Just for fun, let's implement some random logic, that might be implemented in hardware in the future:
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::cout << "Processing: " << std::quoted(request) << std::endl;
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
Now you can see it in action: Full Listing
Where To Go From Here
What if you need multiple responses for one request?
You need a queue. I usually call it an "outbox", so searching for outbox_, _outbox etc. will turn up lots of examples.
Those examples will also show how to deal with other situations where writes can be "externally initiated", and how to safely enqueue those; the essence of the pattern is sketched just below. A particularly engaging example is here: How to batch send unsent messages in asio
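Condensed to its essence, the pattern looks like this (the same code appears in full in the UPDATE listing further down; everything here runs on the session's strand):
std::deque<std::string> _outbox;
void do_post_message(std::string msg) {
    _outbox.push_back(std::move(msg));
    if (_outbox.size() == 1) // no write in flight yet, start the loop
        do_write_loop();
}
void do_write_loop() {
    if (_outbox.empty())
        return;
    ws_.async_write(net::buffer(_outbox.front()),
        [self = shared_from_this(), this](beast::error_code ec, size_t) {
            if (ec) return fail(ec, "write");
            _outbox.pop_front(); // front message sent, continue with the next
            do_write_loop();
        });
}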
Listing For Reference
In case the links go dead in the future:
#include <boost/algorithm/string/trim.hpp>
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <filesystem>
#include <functional>
#include <iostream>
static std::string g_app_name = "app-logic-service";
#include <boost/core/demangle.hpp> // just for our demo logic
#include <boost/spirit/home/x3.hpp> // idem
#include <numeric> // idem
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
// Report a failure
void fail(beast::error_code ec, char const* what) {
std::cerr << what << ": " << ec.message() << "\n";
}
class session : public std::enable_shared_from_this<session> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
std::string response_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
void run() {
// Get on the correct executor
// strand for thread safety
dispatch(
ws_.get_executor(),
beast::bind_front_handler(&session::on_run, shared_from_this()));
}
private:
void on_run() {
// Set suggested timeout settings for the websocket
ws_.set_option(websocket::stream_base::timeout::suggested(
beast::role_type::server));
// Set a decorator to change the Server of the handshake
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) + " " +
g_app_name);
}));
// Accept the websocket handshake
ws_.async_accept(
beast::bind_front_handler(&session::on_accept, shared_from_this()));
}
void on_accept(beast::error_code ec) {
if (ec)
return fail(ec, "accept");
do_read();
}
void do_read() {
ws_.async_read(
buffer_,
beast::bind_front_handler(&session::on_read, shared_from_this()));
}
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
auto request = boost::algorithm::trim_copy(
beast::buffers_to_string(buffer_.data()));
std::cout << "Processing: " << std::quoted(request) << " from "
<< beast::get_lowest_layer(ws_).socket().remote_endpoint()
<< std::endl;
response_ = logic_->Process(request);
ws_.async_write(
net::buffer(response_),
beast::bind_front_handler(&session::on_write, shared_from_this()));
}
void on_write(beast::error_code ec, std::size_t bytes_transferred) {
boost::ignore_unused(bytes_transferred);
if (ec)
return fail(ec, "write");
// Clear the buffer
buffer_.consume(buffer_.size());
// Do another read
do_read();
}
};
// Accepts incoming connections and launches the sessions
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(ex)
, logic_(logic) {
acceptor_.open(endpoint.protocol());
acceptor_.set_option(tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen(tcp::acceptor::max_listen_connections);
}
// Start accepting incoming connections
void run() { do_accept(); }
private:
void do_accept() {
// The new connection gets its own strand
acceptor_.async_accept(make_strand(ex_),
beast::bind_front_handler(&listener::on_accept,
shared_from_this()));
}
void on_accept(beast::error_code ec, tcp::socket socket) {
if (ec) {
fail(ec, "accept");
} else {
std::make_shared<session>(std::move(socket), logic_)->run();
}
// Accept another connection
do_accept();
}
};
int main(int argc, char* argv[]) {
g_app_name = std::filesystem::path(argv[0]).filename();
if (argc != 4) {
std::cerr << "Usage: " << g_app_name << " <address> <port> <threads>\n"
<< "Example:\n"
<< " " << g_app_name << " 0.0.0.0 8080 1\n";
return 1;
}
auto const address = net::ip::make_address(argv[1]);
auto const port = static_cast<uint16_t>(std::atoi(argv[2]));
auto const threads = std::max<int>(1, std::atoi(argv[3]));
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
std::make_shared<listener>(ioc.get_executor(),
tcp::endpoint{address, port}, logic)
->run();
ioc.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
}
UPDATE
In response to the comments I reified the outbox pattern again. Note some of the comments in the code.
Compiler Explorer
#include <boost/algorithm/string/trim.hpp>
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <deque>
#include <filesystem>
#include <functional>
#include <iostream>
#include <list>
static std::string g_app_name = "app-logic-service";
#include <boost/core/demangle.hpp> // just for our demo logic
#include <boost/spirit/home/x3.hpp> // idem
#include <numeric> // idem
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
// Report a failure
void fail(beast::error_code ec, char const* what) {
std::cerr << what << ": " << ec.message() << "\n";
}
class session : public std::enable_shared_from_this<session> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
void run() {
// Get on the correct executor
// strand for thread safety
dispatch(
ws_.get_executor(),
beast::bind_front_handler(&session::on_run, shared_from_this()));
}
void post_message(std::string msg) {
post(ws_.get_executor(),
[self = shared_from_this(), this, msg = std::move(msg)] {
do_post_message(std::move(msg));
});
}
private:
void on_run() {
// on the strand
// Set suggested timeout settings for the websocket
ws_.set_option(websocket::stream_base::timeout::suggested(
beast::role_type::server));
// Set a decorator to change the Server of the handshake
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) + " " +
g_app_name);
}));
// Accept the websocket handshake
ws_.async_accept(
beast::bind_front_handler(&session::on_accept, shared_from_this()));
}
void on_accept(beast::error_code ec) {
// on the strand
if (ec)
return fail(ec, "accept");
do_read();
}
void do_read() {
// on the strand
buffer_.clear();
ws_.async_read(
buffer_,
beast::bind_front_handler(&session::on_read, shared_from_this()));
}
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
// on the strand
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
auto request = boost::algorithm::trim_copy(
beast::buffers_to_string(buffer_.data()));
std::cout << "Processing: " << std::quoted(request) << " from "
<< beast::get_lowest_layer(ws_).socket().remote_endpoint()
<< std::endl;
do_post_message(logic_->Process(request)); // already on the strand
do_read();
}
std::deque<std::string> _outbox;
void do_post_message(std::string msg) {
// on the strand
_outbox.push_back(std::move(msg));
if (_outbox.size() == 1)
do_write_loop();
}
void do_write_loop() {
// on the strand
if (_outbox.empty())
return;
ws_.async_write( //
net::buffer(_outbox.front()),
[self = shared_from_this(), this] //
(beast::error_code ec, size_t bytes_transferred) {
// on the strand
boost::ignore_unused(bytes_transferred);
if (ec)
return fail(ec, "write");
_outbox.pop_front();
do_write_loop();
});
}
};
// Accepts incoming connections and launches the sessions
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(make_strand(ex)) // NOTE to guard sessions_
, logic_(logic) {
acceptor_.open(endpoint.protocol());
acceptor_.set_option(tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen(tcp::acceptor::max_listen_connections);
}
// Start accepting incoming connections
void run() { do_accept(); }
void broadcast(std::string msg) {
post(acceptor_.get_executor(),
beast::bind_front_handler(&listener::do_broadcast,
shared_from_this(), std::move(msg)));
}
private:
using handle_t = std::weak_ptr<session>;
std::list<handle_t> sessions_;
void do_broadcast(std::string const& msg) {
for (auto handle : sessions_)
if (auto sess = handle.lock())
sess->post_message(msg);
}
void do_accept() {
// The new connection gets its own strand
acceptor_.async_accept(make_strand(ex_),
beast::bind_front_handler(&listener::on_accept,
shared_from_this()));
}
void on_accept(beast::error_code ec, tcp::socket socket) {
// on the strand
if (ec) {
fail(ec, "accept");
} else {
auto sess = std::make_shared<session>(std::move(socket), logic_);
sessions_.emplace_back(sess);
// optionally:
sessions_.remove_if(std::mem_fn(&handle_t::expired));
sess->run();
}
// Accept another connection
do_accept();
}
};
static void emulate_hardware_stuff(std::shared_ptr<listener> srv) {
using std::this_thread::sleep_for;
using namespace std::chrono_literals;
// Extremely simplistic. Instead I'd recommend `steady_timer` with
// `_async_wait` here, but since I'm just making a sketch...
unsigned i = 0;
while (true) {
sleep_for(1s);
srv->broadcast("Hardware thing #" + std::to_string(++i));
}
}
int main(int argc, char* argv[]) {
g_app_name = std::filesystem::path(argv[0]).filename();
if (argc != 4) {
std::cerr << "Usage: " << g_app_name << " <address> <port> <threads>\n"
<< "Example:\n"
<< " " << g_app_name << " 0.0.0.0 8080 1\n";
return 1;
}
auto const address = net::ip::make_address(argv[1]);
auto const port = static_cast<uint16_t>(std::atoi(argv[2]));
auto const threads = std::max<int>(1, std::atoi(argv[3]));
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
auto srv = std::make_shared<listener>( //
ioc.get_executor(), //
tcp::endpoint{address, port}, //
logic);
srv->run();
std::thread something_hardware(emulate_hardware_stuff, srv);
ioc.join();
something_hardware.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
}
With Live Demo:

Boost::Asio : Server code causes SEGFAULT only some of the time, seemingly related to the destruction of io_contexts

I am attempting to make a fairly simple client-server program with boost asio. The server class is implemented as follows:
template<class RequestHandler, class RequestClass>
class Server {
public:
typedef std::map<std::string, RequestHandler> CommandMap;
Server(short port, CommandMap commands, RequestClass *request_class_inst)
: acceptor_(io_context_, tcp::endpoint(tcp::v4(), port))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
{
DoAccept();
}
~Server()
{
}
void Run()
{
io_context_.run();
}
void RunInBackground()
{
std::thread t( [this]{ Run(); });
t.detach();
}
void Kill()
{
acceptor_.close();
}
private:
boost::asio::io_context io_context_;
tcp::acceptor acceptor_;
CommandMap commands_;
RequestClass *request_class_inst_;
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (!ec)
std::make_shared<Session<RequestHandler, RequestClass>>
(std::move(socket), commands_, request_class_inst_)->Run();
DoAccept();
});
}
};
In addition to the server class, I implement a basic Client class thusly:
class Client {
public:
/**
* Constructor, initializes JSON parser and serializer.
*/
Client()
: reader_((new Json::CharReaderBuilder)->newCharReader())
{}
Json::Value MakeRequest(const std::string &ip_addr, unsigned short port,
const Json::Value &request)
{
boost::asio::io_context io_context;
std::string serialized_req = Json::writeString(writer_, request);
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
s.connect({ boost::asio::ip::address::from_string(ip_addr), port });
boost::asio::write(s, boost::asio::buffer(serialized_req));
s.shutdown(tcp::socket::shutdown_send);
error_code ec;
char reply[2048];
size_t reply_length = boost::asio::read(s, boost::asio::buffer(reply),
ec);
std::cout << std::string(reply).substr(0, reply_length) << std::endl;
Json::Value json_resp;
JSONCPP_STRING parse_err;
std::string resp_str(reply);
if (reader_->parse(resp_str.c_str(), resp_str.c_str() + resp_str.length(),
&json_resp, &parse_err))
return json_resp;
throw std::runtime_error("Error parsing response.");
}
bool IsAlive(const std::string &ip_addr, unsigned short port)
{
boost::asio::io_context io_context;
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
try {
s.connect({boost::asio::ip::address::from_string(ip_addr), port});
} catch(const boost::wrapexcept<boost::system::system_error> &err) {
s.close();
return false;
}
s.close();
return true;
}
private:
/// Reads JSON.
const std::unique_ptr<Json::CharReader> reader_;
/// Writes JSON.
Json::StreamWriterBuilder writer_;
};
I have implemented a small example to test Client::IsAlive:
int main()
{
auto *request_inst = new RequestClass(1);
std::map<std::string, RequestClassMethod> commands {
{"ADD_1", std::mem_fn(&RequestClass::add_n)},
{"SUB_1", std::mem_fn(&RequestClass::sub_n)}
};
Server<RequestClassMethod, RequestClass> s1(5000, commands, request_inst);
s1.RunInBackground();
std::vector<Client*> clients(6, new Client());
s1.Kill();
// Should output "0" to console.
std::cout << clients.at(1)->IsAlive("127.0.0.1", 5000);
return 0;
}
However, when I attempt to run this, the output varies. About half the time, I receive the correct value and the program exits with code 0, but, on other occasions, the program will either: (1) exit with code 139 (SEGFAULT) before outputting 0 to the console, (2) output 0 to the console and subsequently exit with code 139, (3) output 0 to the console and subsequently hang, or (4) hang before writing anything to the console.
I am uncertain as to what has caused these errors. I expect that it has to do with the destruction of Server::io_context_ and implementation of Server::Kill. Could this pertain to how I am storing Server::io_context_ as a data member?
A minimum reproducible example is shown below:
#define BOOST_ASIO_HAS_MOVE
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>
#include <boost/system/error_code.hpp>
#include <json/json.h>
using boost::asio::ip::tcp;
using boost::system::error_code;
/// NOTE: This class exists exclusively for unit testing.
class RequestClass {
public:
/**
* Initialize class with value n to add sub from input values.
*
* #param n Value to add/sub from input values.
*/
explicit RequestClass(int n) : n_(n) {}
/// Value to add/sub from
int n_;
/**
* Add n to value in JSON request.
*
* #param request JSON request with field "value".
* #return JSON response containing modified field "value" = [original_value] + n.
*/
[[nodiscard]] Json::Value add_n(const Json::Value &request) const
{
Json::Value resp;
resp["SUCCESS"] = true;
// If value is present in request, return value + 1, else return error.
if (request.get("VALUE", NULL) != NULL) {
resp["VALUE"] = request["VALUE"].asInt() + this->n_;
} else {
resp["SUCCESS"] = false;
resp["ERRORS"] = "Invalid value.";
}
return resp;
}
/**
* Subtract n from value in JSON request.
*
* #param request JSON request with field "value".
* #return JSON response containing modified field "value" = [original_value] - n.
*/
[[nodiscard]] Json::Value sub_n(const Json::Value &request) const
{
Json::Value resp, value;
resp["SUCCESS"] = true;
// If value is present in request, return value + 1, else return error.
if (request.get("VALUE", NULL) != NULL) {
resp["VALUE"] = request["VALUE"].asInt() - this->n_;
} else {
resp["SUCCESS"] = false;
resp["ERRORS"] = "Invalid value.";
}
return resp;
}
};
typedef std::function<Json::Value(RequestClass, const Json::Value &)> RequestClassMethod;
template<class RequestHandler, class RequestClass>
class Session :
public std::enable_shared_from_this<Session<RequestHandler,
RequestClass>>
{
public:
typedef std::map<std::string, RequestHandler> CommandMap;
Session(tcp::socket socket, CommandMap commands,
RequestClass *request_class_inst)
: socket_(std::move(socket))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
, reader_((new Json::CharReaderBuilder)->newCharReader())
{}
void Run()
{
DoRead();
}
void Kill()
{
continue_ = false;
}
private:
tcp::socket socket_;
RequestClass *request_class_inst_;
CommandMap commands_;
/// Reads JSON.
const std::unique_ptr<Json::CharReader> reader_;
/// Writes JSON.
Json::StreamWriterBuilder writer_;
bool continue_ = true;
char data_[2048];
std::string resp_;
void DoRead()
{
auto self(this->shared_from_this());
socket_.async_read_some(boost::asio::buffer(data_),
[this, self](error_code ec, std::size_t length)
{
if (!ec)
DoWrite(length);
});
}
void DoWrite(std::size_t length)
{
JSONCPP_STRING parse_err;
Json::Value json_req, json_resp;
std::string client_req_str(data_);
if (reader_->parse(client_req_str.c_str(),
client_req_str.c_str() +
client_req_str.length(),
&json_req, &parse_err))
{
try {
// Get JSON response.
json_resp = ProcessRequest(json_req);
json_resp["SUCCESS"] = true;
} catch (const std::exception &ex) {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(ex.what());
}
} else {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(parse_err);
}
resp_ = Json::writeString(writer_, json_resp);
auto self(this->shared_from_this());
boost::asio::async_write(socket_,
boost::asio::buffer(resp_),
[this, self]
(boost::system::error_code ec,
std::size_t bytes_xfered) {
if (!ec) DoRead();
});
}
Json::Value ProcessRequest(Json::Value request)
{
Json::Value response;
std::string command = request["COMMAND"].asString();
// If command is not valid, give a response with an error.
if(commands_.find(command) == commands_.end()) {
response["SUCCESS"] = false;
response["ERRORS"] = "Invalid command.";
}
// Otherwise, run the relevant handler.
else {
RequestHandler handler = commands_.at(command);
response = handler(*request_class_inst_, request);
}
return response;
}
};
template<class RequestHandler, class RequestClass>
class Server {
public:
typedef std::map<std::string, RequestHandler> CommandMap;
Server(short port, CommandMap commands, RequestClass *request_class_inst)
: acceptor_(io_context_, tcp::endpoint(tcp::v4(), port))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
{
DoAccept();
}
~Server()
{
}
void Run()
{
io_context_.run();
}
void RunInBackground()
{
std::thread t( [this]{ Run(); });
t.detach();
}
void Kill()
{
acceptor_.close();
}
private:
boost::asio::io_context io_context_;
tcp::acceptor acceptor_;
CommandMap commands_;
RequestClass *request_class_inst_;
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (!ec)
std::make_shared<Session<RequestHandler, RequestClass>>
(std::move(socket), commands_, request_class_inst_)->Run();
DoAccept();
});
}
};
class Client {
public:
/**
* Constructor, initializes JSON parser and serializer.
*/
Client()
: reader_((new Json::CharReaderBuilder)->newCharReader())
{}
Json::Value MakeRequest(const std::string &ip_addr, unsigned short port,
const Json::Value &request)
{
boost::asio::io_context io_context;
std::string serialized_req = Json::writeString(writer_, request);
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
s.connect({ boost::asio::ip::address::from_string(ip_addr), port });
boost::asio::write(s, boost::asio::buffer(serialized_req));
s.shutdown(tcp::socket::shutdown_send);
error_code ec;
char reply[2048];
size_t reply_length = boost::asio::read(s, boost::asio::buffer(reply),
ec);
std::cout << std::string(reply).substr(0, reply_length) << std::endl;
Json::Value json_resp;
JSONCPP_STRING parse_err;
std::string resp_str(reply);
if (reader_->parse(resp_str.c_str(), resp_str.c_str() + resp_str.length(),
&json_resp, &parse_err))
return json_resp;
throw std::runtime_error("Error parsing response.");
}
bool IsAlive(const std::string &ip_addr, unsigned short port)
{
boost::asio::io_context io_context;
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
try {
s.connect({boost::asio::ip::address::from_string(ip_addr), port});
} catch(const boost::wrapexcept<boost::system::system_error> &err) {
s.close();
return false;
}
s.close();
return true;
}
private:
/// Reads JSON.
const std::unique_ptr<Json::CharReader> reader_;
/// Writes JSON.
Json::StreamWriterBuilder writer_;
};
int main()
{
auto *request_inst = new RequestClass(1);
std::map<std::string, RequestClassMethod> commands {
{"ADD_1", std::mem_fn(&RequestClass::add_n)},
{"SUB_1", std::mem_fn(&RequestClass::sub_n)}
};
Server<RequestClassMethod, RequestClass> s1(5000, commands, request_inst);
s1.RunInBackground();
std::vector<Client*> clients(6, new Client());
Json::Value sub_one_req;
sub_one_req["COMMAND"] = "SUB_1";
sub_one_req["VALUE"] = 1;
s1.Kill();
std::cout << clients.at(1)->IsAlive("127.0.0.1", 5000);
return 0;
}
Using ASan (-fsanitize=address) on that shows
false
=================================================================
==31232==ERROR: AddressSanitizer: heap-use-after-free on address 0x6110000002c0 at pc 0x561409ca2ea3 bp 0x7efcfbbfdc60 sp 0x7efcfbbfdc50
READ of size 8 at 0x6110000002c0 thread T1
    #0 0x561409ca2ea2 in boost::asio::detail::epoll_reactor::run(long, boost::asio::detail::op_queue<boost::asio::detail::scheduler_operation>&) /home/sehe/custom/boost_1_76_0/boost/asio/detail/impl/epoll_reactor.ipp:504
    #1 0x561409cb442c in boost::asio::detail::scheduler::do_run_one(boost::asio::detail::conditionally_enabled_mutex::scoped_lock&, boost::asio::detail::scheduler_thread_info&, boost::system::error_code const&) /home/sehe/custom/boost_1_76_0/boost/asio/detail/impl/scheduler.ipp:470
    #2 0x561409cf2792 in boost::asio::detail::scheduler::run(boost::system::error_code&) /home/sehe/custom/boost_1_76_0/boost/asio/detail/impl/scheduler.ipp:204
Or on another run:
=================================================================
==31232==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 4 byte(s) in 1 object(s) allocated from:
    #0 0x7efd08fca717 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.6+0xb4717)
    #1 0x561409bc62b5 in main /home/sehe/Projects/stackoverflow/test.cpp:229
SUMMARY: AddressSanitizer: 4 byte(s) leaked in 1 allocation(s).
It already tells you "everything" you need to know. Coincidentally, it was the bug I referred to in my previous answer. To do a graceful shutdown you have to synchronize on the thread. Detaching it ruins your chances forever. So, let's not detach it:
void RunInBackground()
{
if (!t_.joinable()) {
t_ = std::thread([this] { Run(); });
}
}
As you can see, this is captured, so you can never allow the thread to run past the destruction of the Server object.
And then in the destructor join it:
~Server()
{
if (t_.joinable()) {
t_.join();
}
}
Now, let's be thorough. We have two threads. They share objects. io_context is thread-safe, so that's fine. But tcp::acceptor is not, and request_class_inst_ might not be either. You need to synchronize more:
void Kill()
{
post(io_context_, [this] { acceptor_.close(); });
}
Now, note that this is NOT enough! .close() causes .cancel() on the acceptor, but that just means the completion handler is invoked with error::operation_aborted. So, you need to prevent initiating DoAccept again in that case:
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (ec) {
std::cout << "Accept loop: " << ec.message() << std::endl;
} else {
std::make_shared<Session<RequestHandler, RequestClass>>(
std::move(socket), commands_, request_class_inst_)
->Run();
DoAccept();
}
});
}
I took the liberty of aborting the accept loop on /any/ error. Err on the safe side: you'd rather have a process exit than have it stuck in an unresponsive state or a high-CPU loop.
Regardless of this, you should be aware of the race condition between server startup/shutdown and your test client:
s1.RunInBackground();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(0).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to start
// true:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(1).IsAlive("127.0.0.1", 5000) << std::endl;
std::cout << "MakeRequest: " << clients.at(2).MakeRequest(
"127.0.0.1", 5000, {{"COMMAND", "MUL_2"}, {"VALUE", "21"}})
<< std::endl;
s1.Kill();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(3).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to be closed
// false:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(4).IsAlive("127.0.0.1", 5000) << std::endl;
Prints
IsAlive(240): true
IsAlive(245): true
MakeRequest: {"SUCCESS":false,"ERRORS":"not an int64"}
{"SUCCESS":false,"ERRORS":"not an int64"}
IsAlive(252): CLOSING
Accept loop: Operation canceled
THREAD EXIT
false
IsAlive(256): false
Complete Listing
Note that this also fixed the unnecessary leak of the RequestClass instance. You were already assuming copy-ability (because you were passing it by value in various places).
Also note that in MakeRequest we now no longer swallow any errors except EOF.
Like last time, I employ Boost Json for simplicity and to make the sample self-contained for StackOverflow.
Address sanitizer (ASan) and UBSan are silent. Life is good.
Live On Coliru
#include <boost/asio.hpp>
#include <boost/json.hpp>
#include <boost/json/src.hpp>
#include <iostream>
#include <deque>
using boost::asio::ip::tcp;
using boost::system::error_code;
namespace json = boost::json;
using Value = json::object;
using namespace std::chrono_literals;
static auto sleep_for(auto delay) { return std::this_thread::sleep_for(delay); }
/// NOTE: This class exists exclusively for unit testing.
struct RequestClass {
int n_;
Value add_n(Value const& request) const { return impl(std::plus<>{}, request); }
Value sub_n(Value const& request) const { return impl(std::minus<>{}, request); }
Value mul_n(Value const& request) const { return impl(std::multiplies<>{}, request); }
Value div_n(Value const& request) const { return impl(std::divides<>{}, request); }
private:
template <typename Op> Value impl(Op op, Value const& req) const {
return (req.contains("VALUE"))
? Value{{"VALUE", op(req.at("VALUE").as_int64(), n_)},
{"SUCCESS", true}}
: Value{{"ERRORS", "Invalid value."}, {"SUCCESS", false}};
}
};
using RequestClassMethod =
std::function<Value(RequestClass const&, Value const&)>;
template <class RequestHandler, class RequestClass>
class Session
: public std::enable_shared_from_this<
Session<RequestHandler, RequestClass>> {
public:
using CommandMap = std::map<std::string, RequestHandler>;
Session(tcp::socket socket, CommandMap commands,
RequestClass request_class_inst)
: socket_(std::move(socket))
, commands_(std::move(commands))
, request_class_inst_(std::move(request_class_inst))
{
}
void Run() { DoRead(); }
void Kill() { continue_ = false; }
private:
tcp::socket socket_;
CommandMap commands_;
RequestClass request_class_inst_;
bool continue_ = true;
char data_[2048];
std::string resp_;
void DoRead()
{
socket_.async_read_some(
boost::asio::buffer(data_),
[this, self = this->shared_from_this()](error_code ec, std::size_t length) {
if (!ec) {
DoWrite(length);
}
});
}
void DoWrite(std::size_t length)
{
Value json_resp;
try {
auto json_req = json::parse({data_, length}).as_object();
json_resp = ProcessRequest(json_req);
json_resp["SUCCESS"] = true;
} catch (std::exception const& ex) {
json_resp = {{"SUCCESS", false}, {"ERRORS", ex.what()}};
}
resp_ = json::serialize(json_resp);
boost::asio::async_write(socket_, boost::asio::buffer(resp_),
[this, self = this->shared_from_this()](
error_code ec, size_t bytes_xfered) {
if (!ec)
DoRead();
});
}
Value ProcessRequest(Value request)
{
auto command = request.contains("COMMAND")
? request["COMMAND"].as_string() //
: "";
std::string cmdstr(command.data(), command.size());
// If command is not valid, give a response with an error.
return commands_.contains(cmdstr)
? commands_.at(cmdstr)(request_class_inst_, request)
: Value{{"SUCCESS", false}, {"ERRORS", "Invalid command."}};
}
};
template <class RequestHandler, class RequestClass> class Server {
public:
using CommandMap = std::map<std::string, RequestHandler>;
Server(uint16_t port, CommandMap commands, RequestClass request_class_inst)
: acceptor_(io_context_, tcp::endpoint(tcp::v4(), port))
, commands_(std::move(commands))
, request_class_inst_(std::move(request_class_inst))
{
DoAccept();
}
~Server()
{
if (t_.joinable()) {
t_.join();
}
assert(not t_.joinable());
}
void Run()
{
io_context_.run();
}
void RunInBackground()
{
if (!t_.joinable()) {
t_ = std::thread([this] {
Run();
std::cout << "THREAD EXIT" << std::endl;
});
}
}
void Kill()
{
post(io_context_, [this] {
std::cout << "CLOSING" << std::endl;
acceptor_.close(); // causes .cancel() as well
});
}
private:
boost::asio::io_context io_context_;
tcp::acceptor acceptor_;
CommandMap commands_;
RequestClass request_class_inst_;
std::thread t_;
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (ec) {
std::cout << "Accept loop: " << ec.message() << std::endl;
} else {
std::make_shared<Session<RequestHandler, RequestClass>>(
std::move(socket), commands_, request_class_inst_)
->Run();
DoAccept();
}
});
}
};
class Client {
public:
/**
* Constructor, initializes JSON parser and serializer.
*/
Client() {}
Value MakeRequest(std::string const& ip_addr, uint16_t port,
Value const& request)
{
boost::asio::io_context io_context;
std::string serialized_req = serialize(request);
tcp::socket s(io_context);
s.connect({boost::asio::ip::address::from_string(ip_addr), port});
boost::asio::write(s, boost::asio::buffer(serialized_req));
s.shutdown(tcp::socket::shutdown_send);
char reply[2048];
error_code ec;
size_t reply_length = read(s, boost::asio::buffer(reply), ec);
if (ec && ec != boost::asio::error::eof) {
throw boost::system::system_error(ec);
}
// safe method:
std::string_view resp_str(reply, reply_length);
Value res = json::parse({reply, reply_length}).as_object();
std::cout << res << std::endl;
return res;
}
bool IsAlive(std::string const& ip_addr, unsigned short port)
{
boost::asio::io_context io_context;
tcp::socket s(io_context);
error_code ec;
s.connect({boost::asio::ip::address::from_string(ip_addr), port}, ec);
return not ec.failed();
}
};
int main()
{
std::cout << std::boolalpha;
std::deque<Client> clients(6);
Server<RequestClassMethod, RequestClass> s1(
5000,
{
{"ADD_2", std::mem_fn(&RequestClass::add_n)},
{"SUB_2", std::mem_fn(&RequestClass::sub_n)},
{"MUL_2", std::mem_fn(&RequestClass::mul_n)},
{"DIV_2", std::mem_fn(&RequestClass::div_n)},
},
RequestClass{1});
s1.RunInBackground();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(0).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to start
// true:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(1).IsAlive("127.0.0.1", 5000) << std::endl;
std::cout << "MakeRequest: " << clients.at(2).MakeRequest(
"127.0.0.1", 5000, {{"COMMAND", "MUL_2"}, {"VALUE", "21"}})
<< std::endl;
s1.Kill();
// unspecified, race condition!
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(3).IsAlive("127.0.0.1", 5000) << std::endl;
sleep_for(10ms); // likely enough for acceptor to be closed
// false:
std::cout << "IsAlive(" << __LINE__ << "): " << clients.at(4).IsAlive("127.0.0.1", 5000) << std::endl;
}

Boost::Asio : Why does async_write truncate the buffer when sending it through the given socket?

I'm currently attempting to design a fairly simple boost::asio server. My first unit test is fairly simple: send a JSON request {"COMMAND": "ADD_1", "VALUE" : 1} and receive the following response:
{
"SUCCESS" : true,
"VALUE" : 2
}
However, instead, the reply is truncated by one character after being read from the socket:
Reply is: {
"SUCCESS" : true,
"VALUE" : 2
Process finished with exit code 0
The code to write to the socket is fairly simple, a member function of a class RequestContext:
void RequestContext::DoWrite(std::size_t length)
{
JSONCPP_STRING parse_err;
Json::Value json_req, json_resp;
auto self(this->shared_from_this());
std::string client_req_str(data_);
if (reader_->parse(client_req_str.c_str(),
client_req_str.c_str() +
client_req_str.length(),
&json_req, &parse_err))
{
try {
// Get JSON response.
json_resp = ProcessRequest(json_req);
json_resp["SUCCESS"] = true;
} catch (const std::exception &ex) {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(ex.what());
}
} else {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(parse_err);
}
std::string resp = Json::writeString(writer_, json_resp);
boost::asio::async_write(socket_,
boost::asio::buffer(&resp[0], resp.size()),
[this, self]
(boost::system::error_code ec,
std::size_t bytes_xfered) {
if (!ec) DoRead();
});
}
I have verified that ProcessRequest returns the correct value, so the issue is evidently with async_write. I have tried increasing the value of the second argument to async_write, but it seems to have no effect. What am I doing wrong?
A minimal reproducible example can be found below:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>
#include <boost/system/error_code.hpp>
#include <json/json.h>
using boost::asio::ip::tcp;
using boost::system::error_code;
/// NOTE: This class exists exclusively for unit testing.
class RequestClass {
public:
/**
* Initialize class with value n to add/sub from input values.
*
* @param n Value to add/sub from input values.
*/
explicit RequestClass(int n) : n_(n) {}
/// Value to add/sub from
int n_;
/**
* Add n to value in JSON request.
*
* @param request JSON request with field "value".
* @return JSON response containing modified field "value" = [original_value] + n.
*/
[[nodiscard]] Json::Value add_n(const Json::Value &request) const
{
Json::Value resp;
resp["SUCCESS"] = true;
// If value is present in request, return value + n, else return an error.
if (request.get("VALUE", NULL) != NULL) {
resp["VALUE"] = request["VALUE"].asInt() + this->n_;
} else {
resp["SUCCESS"] = false;
resp["ERRORS"] = "Invalid value.";
}
return resp;
}
/**
* Subtract n from value in JSON request.
*
* @param request JSON request with field "value".
* @return JSON response containing modified field "value" = [original_value] - n.
*/
[[nodiscard]] Json::Value sub_n(const Json::Value &request) const
{
Json::Value resp, value;
resp["SUCCESS"] = true;
// If value is present in request, return value - n, else return an error.
if (request.get("VALUE", NULL) != NULL) {
resp["VALUE"] = request["VALUE"].asInt() - this->n_;
} else {
resp["SUCCESS"] = false;
resp["ERRORS"] = "Invalid value.";
}
return resp;
}
};
typedef std::function<Json::Value(RequestClass, const Json::Value &)> RequestClassMethod;
template<class RequestHandler, class RequestClass>
class RequestContext :
public std::enable_shared_from_this<RequestContext<RequestHandler,
RequestClass>>
{
public:
typedef std::map<std::string, RequestHandler> CommandMap;
RequestContext(tcp::socket socket, CommandMap commands,
RequestClass *request_class_inst)
: socket_(std::move(socket))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
, reader_((new Json::CharReaderBuilder)->newCharReader())
{}
void Run()
{
DoRead();
}
void Kill()
{
continue_ = false;
}
private:
tcp::socket socket_;
RequestClass *request_class_inst_;
CommandMap commands_;
/// Reads JSON.
const std::unique_ptr<Json::CharReader> reader_;
/// Writes JSON.
Json::StreamWriterBuilder writer_;
bool continue_ = true;
char data_[2048];
void DoRead()
{
auto self(this->shared_from_this());
socket_.async_read_some(boost::asio::buffer(data_, 2048),
[this, self](error_code ec, std::size_t length)
{
if (!ec)
{
DoWrite(length);
}
});
}
void DoWrite(std::size_t length)
{
JSONCPP_STRING parse_err;
Json::Value json_req, json_resp;
auto self(this->shared_from_this());
std::string client_req_str(data_);
if (reader_->parse(client_req_str.c_str(),
client_req_str.c_str() +
client_req_str.length(),
&json_req, &parse_err))
{
try {
// Get JSON response.
json_resp = ProcessRequest(json_req);
json_resp["SUCCESS"] = true;
} catch (const std::exception &ex) {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(ex.what());
}
} else {
// If json parsing failed.
json_resp["SUCCESS"] = false;
json_resp["ERRORS"] = std::string(parse_err);
}
std::string resp = Json::writeString(writer_, json_resp);
boost::asio::async_write(socket_,
boost::asio::buffer(&resp[0], resp.size()),
[this, self]
(boost::system::error_code ec,
std::size_t bytes_xfered) {
if (!ec) DoRead();
});
}
Json::Value ProcessRequest(Json::Value request)
{
Json::Value response;
std::string command = request["COMMAND"].asString();
// If command is not valid, give a response with an error.
if(commands_.find(command) == commands_.end()) {
response["SUCCESS"] = false;
response["ERRORS"] = "Invalid command.";
}
// Otherwise, run the relevant handler.
else {
RequestHandler handler = commands_.at(command);
response = handler(*request_class_inst_, request);
}
return response;
}
};
template<class RequestHandler, class RequestClass>
class Server {
public:
typedef std::map<std::string, RequestHandler> CommandMap;
Server(boost::asio::io_context &io_context, short port,
const CommandMap &commands,
RequestClass *request_class_inst)
: acceptor_(io_context, tcp::endpoint(tcp::v4(), port))
, commands_(commands)
, request_class_inst_(request_class_inst)
{
DoAccept();
}
~Server()
{
Kill();
}
void Kill()
{
continue_ = false;
}
private:
tcp::acceptor acceptor_;
bool continue_ = true;
CommandMap commands_;
RequestClass *request_class_inst_;
void DoAccept()
{
acceptor_.async_accept(
[this](boost::system::error_code ec, tcp::socket socket) {
if (!ec)
std::make_shared<RequestContext<RequestHandler, RequestClass>>
(std::move(socket), commands_, request_class_inst_)->Run();
DoAccept();
});
}
};
void RunServer(short port)
{
boost::asio::io_context io_context;
auto *request_inst = new RequestClass(1);
std::map<std::string, RequestClassMethod> commands {
{"ADD_1", std::mem_fn(&RequestClass::add_n)},
{"SUB_1", std::mem_fn(&RequestClass::sub_n)}
};
Server<RequestClassMethod, RequestClass> s(io_context, port, commands,
request_inst);
io_context.run();
}
void RunServerInBackground(short port)
{
std::thread t([port] { RunServer(port); });
t.detach();
}
int main()
{
try
{
RunServerInBackground(5000);
boost::asio::io_context io_context;
tcp::socket s(io_context);
tcp::resolver resolver(io_context);
boost::asio::connect(s, resolver.resolve("127.0.0.1", "5000"));
char request[2048] = R"({"COMMAND": "ADD_1", "VALUE" : 1})";
size_t request_length = std::strlen(request);
boost::asio::write(s, boost::asio::buffer(request, request_length));
char reply[2048];
size_t reply_length = boost::asio::read(s, boost::asio::buffer(reply, request_length));
std::cout << "Reply is: ";
std::cout << reply << std::endl;
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
The outgoing buffer needs to be a class member, just like data_, so that its lifetime is guaranteed until the async_write completes.
You can also spot issues like this with linters or runtime checks such as ASan/UBSan or Valgrind.
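To make that concrete, here is a minimal sketch of the pattern (names like Session and BuildResponse are placeholders of mine, not from the code above; it assumes the usual using boost::asio::ip::tcp;). The complete fixed listing appears further below:
class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
private:
    tcp::socket socket_;
    std::string resp_; // member, so it outlives the async_write
    void DoWrite() {
        resp_ = BuildResponse(); // hypothetical helper producing the serialized reply
        boost::asio::async_write(
            socket_, boost::asio::buffer(resp_),
            [self = shared_from_this()](boost::system::error_code ec, std::size_t /*n*/) {
                // self keeps the Session (and resp_) alive until this handler runs
                if (!ec)
                    self->DoRead();
            });
    }
    void DoRead();               // as in the original session class
    std::string BuildResponse(); // hypothetical
};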
UPDATE
Also
size_t reply_length =
boost::asio::read(s, boost::asio::buffer(reply, request_length));
wrongly uses request_length. As a rule, avoid manually specifying buffer sizes; let boost::asio::buffer deduce the size from the array or container.
Besides, your protocol doesn't provide framing, so you cannot practically keep the same connection open for further requests (you don't know how many bytes to expect before a response is complete). I'll "fix" it here by closing the connection after the first request, so we have a working demo.
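A minimal client-side sketch of that "fix", assuming the same server on 127.0.0.1:5000 (it is the same idea the demo below uses): shut down the sending side and read until EOF, so end-of-stream is the framing:
boost::asio::io_context io;
tcp::socket s(io);
s.connect({{}, 5000});
std::string const request = R"({"COMMAND": "ADD_1", "VALUE" : 1})";
boost::asio::write(s, boost::asio::buffer(request));
s.shutdown(tcp::socket::shutdown_send); // tells the server no more requests are coming
std::string reply;
boost::system::error_code ec;
boost::asio::read(s, boost::asio::dynamic_buffer(reply), ec);
if (ec == boost::asio::error::eof)
    ec = {}; // EOF is how we know the response is complete, not an error
std::cout << "Reply is: " << reply << "\n";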
There's also a race condition with the continue_ flags, but I'll leave that as an exercise for the reader.
Of course, consider not leaking the request class instance.
Oh, I also switched to Boost JSON as it seemed an easier fit:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/json.hpp>
#include <boost/json/src.hpp>
#include <iostream>
using boost::asio::ip::tcp;
using boost::system::error_code;
namespace json = boost::json;
using Value = json::object;
/// NOTE: This class exists exclusively for unit testing.
struct Sample {
int n_;
Value add_n(Value const& request) const { return impl(std::plus<>{}, request); }
Value sub_n(Value const& request) const { return impl(std::minus<>{}, request); }
Value mul_n(Value const& request) const { return impl(std::multiplies<>{}, request); }
Value div_n(Value const& request) const { return impl(std::divides<>{}, request); }
private:
template <typename Op> Value impl(Op op, Value const& req) const {
return (req.contains("VALUE"))
? Value{{"VALUE", op(req.at("VALUE").as_int64(), n_)},
{"SUCCESS", true}}
: Value{{"ERRORS", "Invalid value."}, {"SUCCESS", false}};
}
};
using RequestClassMethod =
std::function<Value(Sample, Value const&)>;
template <class RequestHandler, class RequestClass>
class RequestContext
: public std::enable_shared_from_this<
RequestContext<RequestHandler, RequestClass>> {
public:
using CommandMap = std::map<std::string, RequestHandler>;
RequestContext(tcp::socket socket, CommandMap commands,
RequestClass* request_class_inst)
: socket_(std::move(socket))
, commands_(std::move(commands))
, request_class_inst_(request_class_inst)
{}
void Run() { DoRead(); }
void Kill() { continue_ = false; }
private:
tcp::socket socket_;
CommandMap commands_;
RequestClass* request_class_inst_;
bool continue_ = true;
char data_[2048];
std::string resp_;
void DoRead()
{
socket_.async_read_some(
boost::asio::buffer(data_),
[this, self = this->shared_from_this()](error_code ec, std::size_t length) {
if (!ec) {
DoWrite(length);
}
});
}
void DoWrite(std::size_t length)
{
Value json_resp;
try {
auto json_req = json::parse({data_, length}).as_object();
json_resp = ProcessRequest(json_req);
json_resp["SUCCESS"] = true;
} catch (std::exception const& ex) {
json_resp = {{"SUCCESS", false}, {"ERRORS", ex.what()}};
}
resp_ = json::serialize(json_resp);
boost::asio::async_write(socket_, boost::asio::buffer(resp_),
[this, self = this->shared_from_this()](
error_code ec, size_t bytes_xfered) {
if (!ec)
DoRead();
});
}
Value ProcessRequest(Value request)
{
auto command = request.contains("COMMAND")
? request["COMMAND"].as_string() //
: "";
std::string cmdstr(command.data(), command.size());
// If command is not valid, give a response with an error.
return commands_.contains(cmdstr) && request_class_inst_
? commands_.at(cmdstr)(*request_class_inst_, request)
: Value{{"SUCCESS", false}, {"ERRORS","Invalid command."}};
}
};
template<class RequestHandler, class RequestClass>
class Server {
public:
using CommandMap = std::map<std::string, RequestHandler>;
Server(boost::asio::io_context& io_context, uint16_t port,
const CommandMap& commands, RequestClass* request_class_inst)
: acceptor_(io_context, {{}, port})
, commands_(commands)
, request_class_inst_(request_class_inst)
{
DoAccept();
}
~Server() { Kill(); }
void Kill() { continue_ = false; }
private:
tcp::acceptor acceptor_;
bool continue_ = true;
CommandMap commands_;
RequestClass *request_class_inst_;
void DoAccept()
{
acceptor_.async_accept(
[this](error_code ec, tcp::socket socket) {
if (!ec)
std::make_shared<
RequestContext<RequestHandler, RequestClass>>(
std::move(socket), commands_, request_class_inst_)
->Run();
DoAccept();
});
}
};
void RunServer(uint16_t port)
{
boost::asio::io_context io_context;
Server<RequestClassMethod, Sample> s(
io_context, port,
{{"ADD_2", std::mem_fn(&Sample::add_n)},
{"SUB_2", std::mem_fn(&Sample::sub_n)},
{"MUL_2", std::mem_fn(&Sample::mul_n)},
{"DIV_2", std::mem_fn(&Sample::div_n)}},
new Sample{2});
io_context.run();
}
void RunServerInBackground(uint16_t port)
{
std::thread t([port] { RunServer(port); });
t.detach();
}
int main() try {
RunServerInBackground(5000);
::sleep(1); // avoid startup race
boost::asio::io_context io;
tcp::socket s(io);
s.connect({{}, 5000});
std::string const request = R"({"COMMAND": "MUL_2", "VALUE" : 21})";
std::cout << "Request: " << std::quoted(request, '\'') << std::endl;
boost::asio::write(s, boost::asio::buffer(request));
s.shutdown(tcp::socket::shutdown_send); // avoid framing problems
error_code ec;
char reply[2048];
size_t reply_length = boost::asio::read(s, boost::asio::buffer(reply), ec);
std::cout << "Reply is: "
<< std::quoted(std::string_view(reply, reply_length), '\'')
<< " (" << ec.message() << ")" << std::endl;
} catch (std::exception const& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
Prints
Request: '{"COMMAND": "MUL_2", "VALUE" : 21}'
Reply is: '{"VALUE":42,"SUCCESS":true}' (End of file)

socks4 with asynchronous boost::asio

I'm trying to hack a socks4 client into an existing application. The program uses asynchronous boost::asio.
So far I've worked out that I need to negotiate with the socks4 server first:
boost::asio::ip::tcp::endpoint socks_proxy{boost::asio::ip::make_address("127.0.0.1"),1080};
if( socks_proxy.protocol() != boost::asio::ip::tcp::v4() )
{
throw boost::system::system_error(
boost::asio::error::address_family_not_supported);
}
....
boost::asio::ip::tcp::socket* m_socket;
// negotiate with the socks server
// m_endpoint is an item in std::queue<boost::asio::ip::basic_endpoint<boost::asio::ip::tcp>> m_endpoints
boost::asio::ip::address_v4::bytes_type address_ = m_endpoint.address().to_v4().to_bytes();
unsigned short port = m_endpoint.port();
unsigned char port_high_byte_ = (port >> 8) & 0xff;
unsigned char port_low_byte_ = port & 0xff;
boost::array<boost::asio::const_buffer, 7> send_buffer =
{
{
boost::asio::buffer(&SOCKS_VERSION, 1), // const unsigned char SOCKS_VERSION = 0x04;
boost::asio::buffer(&SOCKS_CONNECT, 1), // const unsigned char SOCKS_CONNECT = 0x01;
boost::asio::buffer(&port_high_byte_, 1),
boost::asio::buffer(&port_low_byte_, 1),
boost::asio::buffer(address_),
boost::asio::buffer("userid"),
boost::asio::buffer(&null_byte_, 1) // unsigned char null_byte_ = 0;
}
};
// initiate socks
boost::asio::write( m_socket, send_buffer );
// check it worked
unsigned char status_;
boost::array<boost::asio::mutable_buffer, 5> reply_buffer =
{
{
boost::asio::buffer(&null_byte_, 1),
boost::asio::buffer(&status_, 1),
boost::asio::buffer(&port_high_byte_, 1),
boost::asio::buffer(&port_low_byte_, 1),
boost::asio::buffer(address_)
}
};
boost::asio::read( m_socket, reply_buffer );
if( ! ( null_byte_ == 0 && status_ == 0x5a ) )
{
std::cout << "Proxy connection failed.\n";
}
However, the existing application code basically does:
boost::asio::ip::tcp::socket* m_socket;
m_nonsecuresocket = std::make_shared<boost::asio::ip::tcp::socket>(m_io_service);
m_socket = m_nonsecuresocket.get();
m_socket->async_connect(m_endpoint,
m_io_strand.wrap(boost::bind(&CLASS::connect_handler, this, _1)));
so that even if I could get it to compile, the async_connect would disconnect the socket anyway.
How can I integrate the socks4 client code into the async_connect()?
As I commented, I think your question requires a lot more focus. However, since this is actually a useful question and it might be good to have an example, I went ahead and implemented a socks4::async_proxy_connect operation:
tcp::socket sock{io};
tcp::endpoint
target({}, 80), // connect to localhost:http
proxy{{}, 1080}; // via SOCKS4 proxy on localhost:1080
socks4::async_proxy_connect(sock, target, proxy, handler);
// continue using sock
Loose ends:
a synchronous version was not implemented at first (it is a lot simpler), but has since been added
it does not include address resolution (just like your question). Integrating that would require quite a bit of the groundwork behind the boost::asio::async_connect overload that takes a resolver query. Sadly, that doesn't seem well factored for reuse; a minimal workaround sketch follows.
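For completeness, a minimal workaround sketch of my own (not part of the listing): resolve the target up front and pass the first IPv4 endpoint to async_proxy_connect, reusing io, sock, proxy, and handler from the usage snippet above; "example.com" is a hypothetical target host:
tcp::resolver resolver(io);
auto const results = resolver.resolve("example.com", "80"); // hypothetical target
tcp::endpoint target = *results.begin(); // SOCKS4 carries an IPv4 address only
socks4::async_proxy_connect(sock, target, proxy, handler);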
Listing
File socks4.hpp
#include <boost/asio.hpp>
#include <boost/endian/arithmetic.hpp>
namespace socks4 { // threw in the kitchen sink for error codes
#ifdef STANDALONE_ASIO
using std::error_category;
using std::error_code;
using std::error_condition;
using std::system_error;
#else
namespace asio = boost::asio;
using boost::system::error_category;
using boost::system::error_code;
using boost::system::error_condition;
using boost::system::system_error;
#endif
enum class result_code {
ok = 0,
invalid_version = 1,
rejected_or_failed = 3,
need_identd = 4,
unconfirmed_userid = 5,
//
failed = 99,
};
auto const& get_result_category() {
struct impl : error_category {
const char* name() const noexcept override { return "result_code"; }
std::string message(int ev) const override {
switch (static_cast<result_code>(ev)) {
case result_code::ok: return "Success";
case result_code::invalid_version: return "SOCKS4 invalid reply version";
case result_code::rejected_or_failed: return "SOCKS4 rejected or failed";
case result_code::need_identd: return "SOCKS4 unreachable (client not running identd)";
case result_code::unconfirmed_userid: return "SOCKS4 identd could not confirm user ID";
case result_code::failed: return "SOCKS4 general unexpected failure";
default: return "unknown error";
}
}
error_condition
default_error_condition(int ev) const noexcept override {
return error_condition{ev, *this};
}
bool equivalent(int ev, error_condition const& condition)
const noexcept override {
return condition.value() == ev && &condition.category() == this;
}
bool equivalent(error_code const& error,
int ev) const noexcept override {
return error.value() == ev && &error.category() == this;
}
} const static instance;
return instance;
}
error_code make_error_code(result_code se) {
return error_code{
static_cast<std::underlying_type<result_code>::type>(se),
get_result_category()};
}
} // namespace socks4
template <>
struct boost::system::is_error_code_enum<socks4::result_code>
: std::true_type {};
namespace socks4 {
using namespace std::placeholders;
template <typename Endpoint> struct core_t {
Endpoint _target;
Endpoint _proxy;
core_t(Endpoint target, Endpoint proxy)
: _target(target)
, _proxy(proxy) {}
#pragma pack(push)
#pragma pack(1)
using ipv4_octets = boost::asio::ip::address_v4::bytes_type;
using net_short = boost::endian::big_uint16_t;
struct alignas(void*) Req {
uint8_t version = 0x04;
uint8_t cmd = 0x01;
net_short port;
ipv4_octets address;
} _request{0x04, 0x01, _target.port(),
_target.address().to_v4().to_bytes()};
struct alignas(void*) Res {
uint8_t reply_version;
uint8_t status;
net_short port;
ipv4_octets address;
} _response;
#pragma pack(pop)
using const_buffer = boost::asio::const_buffer;
using mutable_buffer = boost::asio::mutable_buffer;
auto request_buffers(char const* szUserId) const {
return std::array<const_buffer, 2>{
boost::asio::buffer(&_request, sizeof(_request)),
boost::asio::buffer(szUserId, strlen(szUserId) + 1)};
}
auto response_buffers() {
return boost::asio::buffer(&_response, sizeof(_response));
}
error_code get_result(error_code ec = {}) const {
if (ec)
return ec;
if (_response.reply_version != 0)
return result_code::invalid_version;
switch (_response.status) {
case 0x5a: return result_code::ok; // Request granted
case 0x5B: return result_code::rejected_or_failed;
case 0x5C: return result_code::need_identd;
case 0x5D: return result_code::unconfirmed_userid;
}
return result_code::failed;
}
};
template <typename Socket, typename Completion>
struct async_proxy_connect_op {
using Endpoint = typename Socket::protocol_type::endpoint;
using executor_type = typename Socket::executor_type;
auto get_executor() { return _socket.get_executor(); }
private:
core_t<Endpoint> _core;
Socket& _socket;
std::string _userId;
Completion _handler;
public:
async_proxy_connect_op(Completion handler, Socket& s, Endpoint target,
Endpoint proxy, std::string user_id = {})
: _core(target, proxy)
, _socket(s)
, _userId(std::move(user_id))
, _handler(std::move(handler)) {}
using Self = std::unique_ptr<async_proxy_connect_op>;
void init(Self&& self) { operator()(self, INIT{}); }
private:
// states
struct INIT{};
struct CONNECT{};
struct SENT{};
struct ONRESPONSE{};
struct Binder {
Self _self;
template <typename... Args>
decltype(auto) operator()(Args&&... args) {
return (*_self)(_self, std::forward<Args>(args)...);
}
};
void operator()(Self& self, INIT) {
_socket.async_connect(_core._proxy,
std::bind(Binder{std::move(self)}, CONNECT{}, _1));
}
void operator()(Self& self, CONNECT, error_code ec) {
if (ec) return _handler(ec);
boost::asio::async_write(
_socket,
_core.request_buffers(_userId.c_str()),
std::bind(Binder{std::move(self)}, SENT{}, _1, _2));
}
void operator()(Self& self, SENT, error_code ec, size_t xfer) {
if (ec) return _handler(ec);
auto buf = _core.response_buffers();
boost::asio::async_read(
_socket, buf, boost::asio::transfer_exactly(buffer_size(buf)),
std::bind(Binder{std::move(self)}, ONRESPONSE{}, _1, _2));
}
void operator()(Self& self, ONRESPONSE, error_code ec, size_t xfer) {
_handler(_core.get_result(ec));
}
};
template <typename Socket,
typename Endpoint = typename Socket::protocol_type::endpoint>
error_code proxy_connect(Socket& s, Endpoint ep, Endpoint proxy,
std::string const& user_id, error_code& ec) {
core_t<Endpoint> core(ep, proxy);
ec.clear();
s.connect(core._proxy, ec);
if (!ec)
boost::asio::write(s, core.request_buffers(user_id.c_str()),
ec);
auto buf = core.response_buffers();
if (!ec)
boost::asio::read(s, core.response_buffers(),
boost::asio::transfer_exactly(buffer_size(buf)), ec);
return ec = core.get_result(ec);
}
template <typename Socket,
typename Endpoint = typename Socket::protocol_type::endpoint>
void proxy_connect(Socket& s, Endpoint ep, Endpoint proxy,
std::string const& user_id = "") {
error_code ec;
if (proxy_connect(s, ep, proxy, user_id, ec))
throw system_error(ec);
}
template <typename Socket, typename Token,
typename Endpoint = typename Socket::protocol_type::endpoint>
auto async_proxy_connect(Socket& s, Endpoint ep, Endpoint proxy,
std::string user_id, Token&& token) {
using Result = asio::async_result<std::decay_t<Token>, void(error_code)>;
using Completion = typename Result::completion_handler_type;
Completion completion(std::forward<Token>(token));
Result result(completion);
using Op = async_proxy_connect_op<Socket, Completion>;
// make an owning self ptr, to serve a unique async chain
auto self =
std::make_unique<Op>(completion, s, ep, proxy, std::move(user_id));
self->init(std::move(self));
return result.get();
}
template <typename Socket, typename Token,
typename Endpoint = typename Socket::protocol_type::endpoint>
auto async_proxy_connect(Socket& s, Endpoint ep, Endpoint proxy, Token&& token) {
return async_proxy_connect<Socket, Token, Endpoint>(
s, ep, proxy, "", std::forward<Token>(token));
}
} // namespace socks4
Demo
File test.cpp
#include "socks4.hpp"
#include <boost/beast.hpp>
#include <boost/beast/http.hpp>
#include <iostream>
int main(int argc, char**) {
bool synchronous = argc > 1;
using boost::asio::ip::tcp;
boost::asio::thread_pool ctx(1); // just one thread will do
tcp::socket sock{ctx};
tcp::endpoint target(
boost::asio::ip::address_v4::from_string("173.203.57.63"), 80),
proxy{{}, 1080};
try {
if (synchronous) {
std::cerr << "Using synchronous interface" << std::endl;
socks4::proxy_connect(sock, target,
proxy); // throws system_error if failed
} else {
std::cerr << "Using asynchronous interface" << std::endl;
// using the async interface (still emulating synchronous by using
// future for brevity of this demo)
auto fut = socks4::async_proxy_connect(sock, target, proxy,
boost::asio::use_future);
fut.get(); // throws system_error if failed
}
// Now do a request using beast
namespace beast = boost::beast;
namespace http = beast::http;
{
http::request<http::empty_body> req(http::verb::get, "/", 11);
req.set(http::field::host, "coliru.stacked-crooked.com");
req.set(http::field::connection, "close");
std::cout << "-------\nRequest: " << req << "\n-------\n";
http::write(sock, req);
}
{
http::response<http::string_body> res;
beast::flat_buffer buf;
http::read(sock, buf, res);
std::cout << "\n-------\nResponse: " << res << "\n";
}
} catch(socks4::system_error const& se) {
std::cerr << "Error: " << se.code().message() << std::endl;
}
ctx.join();
}
Output
Using asynchronous interface
-------
Request: GET / HTTP/1.1
Host: coliru.stacked-crooked.com
Connection: close
-------
-------
Response: HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 8616
Server: WEBrick/1.4.2 (Ruby/2.5.1/2018-03-29) OpenSSL/1.0.2g
Date: Thu, 29 Apr 2021 19:05:03 GMT
Connection: close
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN">
<html>
<head>
<title>Coliru</title>
(rest of response omitted)

How to use boost::beast, download a file no blocking and with responses

I have started with this example, so I won't post all the code. My objective is to download a large file without blocking my main thread. The second objective is to get notifications so I can update a progress bar. I do have the code working a couple of ways. The first is to just call ioc.run(); and let it go to work; I get the file downloaded. But I cannot find any way to start the session without blocking.
The second way, I can make the calls down to http::async_read_some and the call works, but I cannot get a response that I can use. I don't know if there is a way to pass a lambda that captures.
The #if 0..#else..#endif switches the methods. I'm sure there is a simple way, but I just cannot see it. I'll clean up the code when I get it working, like setting the local file name. Thanks.
std::size_t on_read_some(boost::system::error_code ec, std::size_t bytes_transferred)
{
if (ec);//deal with it...
if (!bValidConnection) {
std::string_view view((const char*)buffer_.data().data(), bytes_transferred);
auto pos = view.find("Content-Length:");
if (pos == std::string_view::npos)
;//error
file_size = std::stoi(view.substr(pos+sizeof("Content-Length:")).data());
if (!file_size)
;//error
bValidConnection = true;
}
else {
file_pos += bytes_transferred;
response_call(ec, file_pos);
}
#if 0
std::cout << "in on_read_some caller\n";
http::async_read_some(stream_, buffer_, file_parser_, std::bind(
response_call,
std::placeholders::_1,
std::placeholders::_2));
#else
std::cout << "in on_read_some inner\n";
http::async_read_some(stream_, buffer_, file_parser_, std::bind(
&session::on_read_some,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
#endif
return buffer_.size();
}
The main, messy but.....
struct lambda_type {
bool bDone = false;
void operator ()(const boost::system::error_code ec, std::size_t bytes_transferred) {
;
}
};
int main(int argc, char** argv)
{
auto const host = "reserveanalyst.com";
auto const port = "443";
auto const target = "/downloads/demo.msi";
int version = argc == 5 && !std::strcmp("1.0", argv[4]) ? 10 : 11;
boost::asio::io_context ioc;
ssl::context ctx{ ssl::context::sslv23_client };
load_root_certificates(ctx);
//ctx.load_verify_file("ca.pem");
auto so = std::make_shared<session>(ioc, ctx);
so->run(host, port, target, version);
bool bDone = false;
auto const lambda = [](const boost::system::error_code ec, std::size_t bytes_transferred) {
std::cout << "data lambda bytes: " << bytes_transferred << " er: " << ec.message() << std::endl;
};
lambda_type lambda2;
so->set_response_call(lambda);
ioc.run();
std::cout << "not in ioc.run()!!!!!!!!" << std::endl;
so->async_read_some(lambda);
//pseudo message pump when working.........
for (;;) {
std::this_thread::sleep_for(250ms);
std::cout << "time" << std::endl;
}
return EXIT_SUCCESS;
}
And stuff I've added to the class session
class session : public std::enable_shared_from_this<session>
{
using response_call_type = void(*)(boost::system::error_code ec, std::size_t bytes_transferred);
http::response_parser<http::file_body> file_parser_;
response_call_type response_call;
//
bool bValidConnection = false;
std::size_t file_pos = 0;
std::size_t file_size = 0;
public:
auto& get_result() { return res_; }
auto& get_buffer() { return buffer_; }
void set_response_call(response_call_type the_call) { response_call = the_call; }
I've updated this now that I've finally put it to use; I wanted the old method, where I could download to either a file or a string. Here is a link to a great talk on how Asio works:
CppCon 2016 Michael Caisse Asynchronous IO with BoostAsio
As for my misunderstanding of how to pass a lambda, here is Adam Nevraumont's answer
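The gist of it, in a tiny sketch of my own (parameter types simplified): a plain function pointer, like the original response_call_type, cannot hold a capturing lambda, while std::function, as used in the updated code below, can:
#include <cstddef>
#include <functional>
using fn_ptr = void (*)(int, std::size_t);             // roughly the original response_call_type
using fn_obj = std::function<void(int, std::size_t)>;  // roughly the updated response_call_type
int main() {
    int progress = 0;
    // fn_ptr p = [&progress](int, std::size_t n) { ... }; // error: the lambda captures
    fn_obj f = [&progress](int, std::size_t n) { progress += static_cast<int>(n); }; // OK
    f(0, 42);
}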
There are two ways to compile this, using a type to select the method; both are shown at the beginning of main. You can construct either a file downloader or a string downloader by selecting the type of the Beast parser. The parsers don't have the same constructs, so if constexpr compile-time conditions are used. And I checked: a release build of the downloader is about 1K, so it is pretty lightweight for what it does. In the case of a small string you don't have to handle the callbacks; either pass an empty lambda or add the likes of:
if(response_call)
response_call(resp_ok, test);
This looks to be a pretty clean way to get the job done, so I've updated this post as of 11/27/2022.
The code:
//
// Copyright (c) 2016-2019 Vinnie Falco (vinnie dot falco at gmail dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
// Official repository: https://github.com/boostorg/beast
//------------------------------------------------------------------------------
//
// Example: HTTP SSL client, synchronous, usable in a thread with a message pump
// Added code to use from a message pump
// Also useable as body to a file download, or body to string
//
//------------------------------------------------------------------------------
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/ssl/error.hpp>
#include <boost/asio/ssl/stream.hpp>
#include <cstdlib>
#include <iostream>
#include <string>
#include <fstream>
//the boost shipped certificates
#include <boost/../libs/beast/example/common/root_certificates.hpp>
//TODO add your ssl libs as you would like
#ifdef _M_IX86
#pragma comment(lib, "libcrypto.lib")
#pragma comment(lib, "libssl.lib")
#elif _M_X64
#pragma comment(lib, "libcrypto-3-x64.lib")
#pragma comment(lib, "libssl-3-x64.lib")
#endif
namespace downloader {
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
namespace ssl = net::ssl; // from <boost/asio/ssl.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
//specialization if using < c++17; see both 'if constexpr' below.
//this is not needed otherwise
//namespace detail {
// template<typename Type>
// void open_file(http::parser < false, Type>& p, const char* name, boost::system::error_code& file_open_ec) { }
// template<>
// void open_file(http::parser<false, http::file_body>& p, const char* name, boost::system::error_code& file_open_ec) {
// p.get().body().open(name, boost::beast::file_mode::write, file_open_ec);
// }
// template<typename Type>
// std::string get_string(http::parser < false, Type>& p) { return std::string{}; }
// template<>
// std::string get_string(http::parser<false, http::string_body>& p) {
// return p.get().body();
// }
//} //namespace detail
enum responses {
resp_null,
resp_ok,
resp_done,
resp_error,
};
using response_call_type = std::function< void(responses, std::size_t)>;
template<typename ParserType>
struct download {
//as these can be set with array initialization
const char* target_ = "/";
const char* filename_ = "test.txt";
const char* host_ = "lakeweb.net";
std::string body_;
using response_call_type = std::function< void(responses, std::size_t)>;
response_call_type response_call;
boost::asio::io_context ioc_;
ssl::context ctx_{ ssl::context::sslv23_client };
ssl::stream<tcp::socket> stream_{ ioc_, ctx_ };
tcp::resolver resolver_{ ioc_ };
boost::beast::flat_buffer buffer_;
uint64_t file_size_{};
int version{ 11 };
void set_response_call(response_call_type the_call) { response_call = the_call; }
uint64_t get_file_size() { return file_size_; }
void stop() { ioc_.stop(); }
bool stopped() { return ioc_.stopped(); }
std::string get_body() { return std::move(body_); }
void run() {
try {
// TODO should have a timer in case of a hang
load_root_certificates(ctx_);
// Set SNI Hostname (many hosts need this to handshake successfully)
if (!SSL_set_tlsext_host_name(stream_.native_handle(), host_)) {
boost::system::error_code ec{ static_cast<int>(::ERR_get_error()), boost::asio::error::get_ssl_category() };
throw boost::system::system_error{ ec };
}
//TODO the iterator-based connect is deprecated; pass the resolver results/endpoints directly
auto const results = resolver_.resolve(host_, "443");
boost::asio::connect(stream_.next_layer(), results.begin(), results.end());
stream_.handshake(ssl::stream_base::client);
// Set up an HTTP GET request message
http::request<http::string_body> req{ http::verb::get, target_, version };
req.set(http::field::host, host_);
req.set(http::field::user_agent, "mY aGENT");
// Send the HTTP request to the remote host
http::write(stream_, req);
// Read the header
boost::system::error_code file_open_ec;
http::parser<false, ParserType> p;
p.body_limit((std::numeric_limits<std::uint32_t>::max)());
//detail::open_file(p, filename_, file_open_ec);
//or => c++17
if constexpr (std::is_same_v<ParserType, http::file_body>)
p.get().body().open(filename_, boost::beast::file_mode::write, file_open_ec);
http::read_header(stream_, buffer_, p);
file_size_ = p.content_length().has_value() ? p.content_length().value() : 0;
//Read the body
uint64_t test{};
boost::system::error_code rec;
for (;;) {
test += http::read_some(stream_, buffer_, p, rec);
if (test >= file_size_) {
response_call(resp_done, 0);
break;
}
response_call(resp_ok, test);
}
// Gracefully close the stream
boost::system::error_code ec;
stream_.shutdown(ec);
if (ec == boost::asio::error::eof)
{
// Rationale:
// http://stackoverflow.com/questions/25587403/boost-asio-ssl-async-shutdown-always-finishes-with-an-error
ec.assign(0, ec.category());
}
if (ec)
throw boost::system::system_error{ ec };
//value = detail::get_string(p);
//or => c++17
if constexpr (std::is_same_v<ParserType, http::string_body>)
body_ = p.get().body();
}
catch (std::exception const& e)
{
std::cerr << "Error: " << e.what() << std::endl;
response_call(resp_error, -1);
}
ioc_.stop();
}
};
}//namespace downloader
//comment to test with string body
#define THE_FILE_BODY_TEST
int main(int argc, char** argv)
{
using namespace downloader;
#ifdef THE_FILE_BODY_TEST
download<http::file_body> dl{"/Nasiri%20Abarbekouh_Mahdi.pdf", "test.pdf"};
#else //string body test
download<http::string_body> dl{ "/robots.txt" };
#endif
responses dl_response{ resp_null };
size_t cur_size{};
auto static const lambda = [&dl_response, &dl, &cur_size](responses response, std::size_t bytes_transferred) {
if ((dl_response = response) == resp_ok) {
cur_size += bytes_transferred;
size_t sizes = dl.get_file_size() - cur_size;//because size is what is left
//drive your progress bar from here in a GUI app
}
};
dl.set_response_call(lambda);
std::thread thread{ [&dl]() { dl.run(); } };
//thread has started, now the pseudo message pump
bool quit = false; //true: as if a cancel button was pushed; won't finish download
for (int i = 0; ; ++i) {
switch (dl_response) { //ad hoc as if messaged
case resp_ok:
std::cout << "from sendmessage: " << cur_size << std::endl;
dl_response = resp_null;
break;
case resp_done:
std::cout << "from sendmessage: done" << std::endl;
dl_response = resp_null;
break;
case resp_error:
std::cout << "from sendmessage: error" << std::endl;
dl_response = resp_null;
}//switch
if (!(i % 5))
std::cout << "in message pump, stopped: " << std::boolalpha << dl.stopped() << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
if (quit && i == 10) //the cancel message
dl.stop();
if (!(i % 20) && dl.stopped()) {//dl job was quit or error or finished
std::cout << "dl is stopped" << std::endl;
break;
}
}
#ifdef THE_FILE_BODY_TEST
std::cout << "file written named: 'test.txt'" << std::endl;
#else
std::string res = dl.get_body();
std::cout << "body retrieved:\n" << res << std::endl;
#endif
if (thread.joinable())//in the case a thread was never started
thread.join();
std::cout << "exiting, program all done" << std::endl;
return EXIT_SUCCESS;
}
I strongly recommend against using the low-level [async_]read_some functions; instead, use http::[async_]read as intended, with an http::response_parser<http::buffer_body>.
I do have an example of that - which is a little bit complicated by the fact that it also uses Boost Process to concurrently decompress the body data, but regardless it should show you how to use it:
How to read data from Internet using muli-threading with connecting only once?
I guess I could tailor it to your specific example given more complete code, but perhaps the above is good enough? Also see "Relay an HTTP message" in libs/beast/example/doc/http_examples.hpp which I used as "inspiration".
Caution: the buffer arithmetic is not intuitive. I think this is unfortunate and should not have been necessary, so pay (very) close attention to these samples for exactly how that's done.
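To make that recommendation concrete, here is a sketch of mine (not the linked answer) of streaming a response body with http::response_parser<http::buffer_body>, which is what makes per-chunk progress reporting possible without the low-level read_some calls. It assumes the namespace aliases from the listings above, that stream is already connected (and handshaken, for HTTPS), and that the request has been written; "download.bin" is a hypothetical destination:
beast::flat_buffer buffer;
http::response_parser<http::buffer_body> parser;
parser.body_limit((std::numeric_limits<std::uint64_t>::max)());
http::read_header(stream, buffer, parser); // after this, parser.content_length() is available
std::ofstream out("download.bin", std::ios::binary);
char chunk[8192];
std::uint64_t received = 0;
while (!parser.is_done()) {
    parser.get().body().data = chunk;        // hand the parser a scratch buffer
    parser.get().body().size = sizeof(chunk);
    boost::system::error_code ec;
    http::read(stream, buffer, parser, ec);
    if (ec == http::error::need_buffer)
        ec = {};                              // expected: the chunk filled up, more body follows
    if (ec)
        throw boost::system::system_error(ec);
    std::size_t got = sizeof(chunk) - parser.get().body().size; // bytes produced this pass
    out.write(chunk, static_cast<std::streamsize>(got));
    received += got;
    // report `received` (e.g. against parser.content_length()) to a progress callback here
}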