I'm trying to write a function async_read_string_n to asynchronously read a string of exactly n bytes from a socket with Boost.Asio 1.78 (and GCC 11.2).
This is how I want to use the function async_read_string_n:
void run() {
co_spawn (io_context_, [&]() -> awaitable<void> {
auto executor = io_context_.get_executor();
tcp::acceptor acceptor(executor, listen_endpoint_);
auto [ec, socket] = co_await acceptor.async_accept(as_tuple(use_awaitable));
co_spawn(executor, [&]() -> awaitable<void> {
auto [ec, header] = co_await async_read_string_n(socket, 6, as_tuple(use_awaitable));
std::cerr << "received string " << header << "\n";
co_return;
}
, detached);
co_return;
}
, detached);
}
Here is my attempt to write async_read_string_n, following the advice in
https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/asynchronous_operations.html#boost_asio.reference.asynchronous_operations.automatic_deduction_of_initiating_function_return_type
https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/cpp20_coroutines.html#boost_asio.overview.core.cpp20_coroutines.error_handling
(I don't care about memory copying. This isn't supposed to be fast; it's supposed to have a nice API.)
template<class CompletionToken> auto async_read_string_n(tcp::socket& socket, int n, CompletionToken&& token) {
async_completion<CompletionToken, void(boost::system::error_code, std::string)> init(token);
asio::streambuf b;
asio::streambuf::mutable_buffers_type bufs = b.prepare(n);
auto [ec, bytes_transferred] = co_await asio::async_read(socket, bufs, asio::transfer_exactly(n), as_tuple(use_awaitable));
b.commit(n);
std::istream is(&b);
std::string s;
is >> s;
b.consume(n);
init.completion_handler(ec, s);
return init.result.get();
}
Edit
(I had a syntax error and I fixed it.) Here is the compiler error in async_read_string_n which I'm stuck on:
GCC error:
error: 'co_await' cannot be used in a function with a deduced return type
How can I write the function async_read_string_n?
You don't have to use streambuf. Regardless, using the >> extraction will not reliably extract the string (whitespace stops the input).
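If you do keep the streambuf, a minimal sketch of extracting exactly n characters (instead of a whitespace-delimited token), using the b and n from your attempt, would be:
// sketch: read exactly n chars through the istream; this also consumes them from b,
// so no separate b.consume(n) is needed afterwards
std::istream is(&b);
std::string s(n, '\0');
is.read(s.data(), n); // s.data() is writable since C++17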
The bigger problem is that you have to choose whether you want to use
co_await (which requires another kind of signature as your second link correctly shows)
or the async result protocol, which implies that the caller will decide what mechanism to use (a callback, future, group, awaitable etc).
So either make it:
Using Async Result Protocol:
#include <boost/asio.hpp>
#include <boost/asio/awaitable.hpp>
#include <boost/asio/experimental/as_tuple.hpp>
#include <boost/asio/use_awaitable.hpp>
#include <iostream>
#include <iomanip>
namespace net = boost::asio;
using net::ip::tcp;
using boost::system::error_code;
template <typename CompletionToken>
auto async_read_string_n(tcp::socket& socket, int n, CompletionToken&& token)
{
struct Op {
net::async_completion<CompletionToken, void(error_code, std::string)>
init;
std::string buf;
Op(CompletionToken token) : init(token) {}
};
auto op = std::make_shared<Op>(token);
net::async_read(socket, net::dynamic_buffer(op->buf),
net::transfer_exactly(n), [op](error_code ec, size_t n) {
op->init.completion_handler(ec, std::move(op->buf));
});
return op->init.result.get();
}
int main() {
net::io_context ioc;
tcp::socket s(ioc);
s.connect({{}, 8989});
async_read_string_n(s, 10, [](error_code ec, std::string s) {
std::cout << "Read " << ec.message() << ": " << std::quoted(s)
<< std::endl;
});
ioc.run();
}
Prints
NOTE This version affords you the calling semantics that you desire in your sample run() function.
OR Use co_await
Analogous to the sample here:
boost::asio::awaitable<void> echo(tcp::socket socket)
{
char data[1024];
for (;;)
{
auto [ec, n] = co_await socket.async_read_some(boost::asio::buffer(data),
boost::asio::experimental::as_tuple(boost::asio::use_awaitable));
if (!ec)
{
// success
}
// ...
}
}
Thank you @sehe for your answer, which gave me the information I needed to write an async_read_string_n that works with co_await:
asio::awaitable<std::tuple<boost::system::error_code, std::string>> async_read_string_n(tcp::socket& socket, int n) {
std::string buf;
auto [ec, bytes_transferred] = co_await asio::async_read(socket, asio::dynamic_buffer(buf), asio::transfer_exactly(n), as_tuple(use_awaitable));
co_return make_tuple(ec, buf);
}
Use it like this:
auto [ec, string6] = co_await async_read_string_n(socket, 6);
I wrote a post about this: https://github.com/xc-jp/blog-posts/blob/master/_posts/2022-03-03-Asio-Coroutines.md
Current Scheme
I am developing a serial port routine that will regard the current receive transfer as complete if no new data is received for 25 milliseconds. I start the timer on the first read_handler (Boost ASIO callback method) call. For every subsequent read_handler call, I cancel the asynchronous operations that are waiting on the timer and create a new asynchronous operation on the timer.
Problem
The problem I am facing is that, randomly, a receive transfer that was supposed to be one transfer is treated as two separate transfers, because the receive_timeout event (receive_timeout_handler) is triggered (called) multiple times.
I'm not sure whether this is because of my incorrect implementation/usage of the Boost ASIO system_timer or due to a driver issue in my USB-to-serial converter.
I'm currently using an FT4232 module (containing 4 UART/serial ports) to test my routines, whereby I send data (a 4 KB text file) from UART1 and receive it on UART0.
I expect the serial port class to signal the main thread only after all 4 KB of data have been received; however, sometimes this one 4 KB transfer is signaled 2-3 times.
Code :
class SerialPort
{
public:
SerialPort() : io(), port(io), receive_timeout_timer(io) {}
bool open_port(void);
bool read_async(std::int32_t read_timeout = -1);
void read_handler(const boost::system::error_code& error, std::size_t bytes_transferred);
void receive_timeout_handler(const boost::system::error_code& error);
private:
static constexpr std::int64_t bulk_data_receive_complete = 25; // idle timeout in milliseconds
boost::asio::io_context io;
boost::asio::serial_port port;
boost::asio::system_timer receive_timeout_timer;
std::int32_t read_timeout = -1;
std::string received_data;
std::array<std::byte, 8096> read_byte_buffer;
};
bool SerialPort::open_port(void)
{
try
{
this->port.open("COM3");
return true;
}
catch (const std::exception& ex)
{
}
return false;
}
bool SerialPort::read_async(std::int32_t read_timeout)
{
try
{
this->read_byte_buffer.fill(static_cast<std::byte>(0)); //Clear Buffer
if (read_timeout not_eq -1)
{
this->read_timeout = read_timeout;//If read_timeout is not set to ignore_timeout, update the read_timeout else use old read_timeout
}
this->port.async_read_some(
boost::asio::buffer(
this->read_byte_buffer.data(),
this->read_byte_buffer.size()
),
boost::bind(
&SerialPort::read_handler,
this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred
)
);
return true;
}
catch (const std::exception& ex)
{
return false;
}
}
void SerialPort::read_handler(const boost::system::error_code& error, std::size_t bytes_transferred)
{
std::string temporary_recieve_data;
try
{
if (error not_eq boost::system::errc::success) //Error in serial port read
{
return;
}
std::transform(this->read_byte_buffer.begin(), this->read_byte_buffer.begin() + bytes_transferred,
std::back_inserter(temporary_recieve_data), [](std::byte character) {
return static_cast<char>(character);
}
);
this->read_async(); //Again Start the read operation
this->received_data += temporary_recieve_data;
this->receive_timeout_timer.cancel(); // Cancel existing timers if any are running
this->receive_timeout_timer.expires_after(boost::asio::chrono::milliseconds(SerialPort::bulk_data_receive_complete)); // Reset timer to current timestamp + 25 milliseconds
this->receive_timeout_timer.async_wait(boost::bind(&SerialPort::receive_timeout_handler, this, boost::asio::placeholders::error));
}
catch (const std::exception& ex)
{
}
}
void SerialPort::receive_timeout_handler(const boost::system::error_code& error)
{
try
{
if (error not_eq boost::system::errc::success) //Error in serial port read
{
return;
}
// this->signal(this->port_number, SerialPortEvents::read_data, this->received_data); //Signal to main thread that data has been received
}
catch (const std::exception& ex)
{
}
}
read_timer.cancel(); // Cancel existing timers if any are running
read_timer.expires_after(
SerialPort::bulk_data_receive_complete); // Reset timer to current timestamp + 25 milliseconds
Here the cancel is redundant, because setting the expiration cancels any pending wait.
You reschedule the timer regardless of whether it ran out. Your code misses the possibility that both the read and timer could have completed successfully. In that case your main gets signaled multiple times, even though it only "nearly" exceeded 25ms idle.
You would expect to see partially duplicated data, then, because received_data isn't cleared.
To clearly see what is going on, build your code with -DBOOST_ASIO_ENABLE_HANDLER_TRACKING=1 and run the output through handlerviz.pl (see also Cancelling boost asio deadline timer safely).
Suggestions
You could probably avoid the double firing by being explicit about the flow:
To achieve that, only cancel the read from the timeout handler:
void SerialPort::receive_timeout_handler(error_code ec) {
if (!ec.failed()) {
port.cancel(ec);
std::cerr << "read canceled: " << ec.message() << std::endl;
}
}
Then you could move the signal to the read-handler, where you expect the cancellation:
void SerialPort::read_handler(error_code ec, size_t bytes_transferred) {
if (ec == asio::error::operation_aborted) {
signal(port_number, SerialPortEvents::read_data, std::move(received_data));
} else if (ec.failed()) {
std::cerr << "SerialPort read: " << ec.message() << std::endl;
} else {
copy_n(begin(read_buffer), bytes_transferred, back_inserter(received_data));
read_timer.expires_after(bulk_data_receive_complete); // reset timer
read_timer.async_wait(boost::bind(&SerialPort::receive_timeout_handler, this, ph::error));
start_async_read(); // continue reading
}
}
To be completely fool-proof, you can check that the timer wasn't actually expired even on successful read (see again Cancelling boost asio deadline timer safely).
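A minimal sketch of that extra check, assuming the member names from the full demo below (read_timer is an asio::system_timer, so its expiry() can be compared against system_clock::now()):
void SerialPort::read_handler(error_code ec, size_t bytes_transferred) {
    // a successful read that arrives after the deadline is also treated as the
    // end of the current transfer, so a "late" completion cannot double-signal
    bool deadline_passed = read_timer.expiry() <= std::chrono::system_clock::now();
    if (!ec.failed())
        std::copy_n(read_buffer.begin(), bytes_transferred, std::back_inserter(received_data));
    if (ec == asio::error::operation_aborted || deadline_passed) {
        signal(port_number, SerialPortEvents::read_data, std::move(received_data));
    } else if (ec.failed()) {
        std::cerr << "SerialPort read: " << ec.message() << std::endl;
    } else {
        start_async_read(); // continue reading; the timer is rescheduled there
    }
}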
Intuitively, I think it makes even more sense to schedule the timer from start_async_read.
ASIDE #1
Currently your code completely ignores read_timeout (even aside from the unnecessary confusion between the argument read_timeout and the member read_timeout). It is unclear to me whether you want the read_timeout override argument to "stick" for the entire chain of read operations.
If you want it to stick, change the
start_async_read(bulk_data_receive_complete); // continue reading
call to
start_async_read(); // continue reading
below. I kept it as it is because it allows for easier timing demonstrations.
ASIDE #2
I've undone the exception swallowing code. Instead of just squashing all exceptions into a boolean (which you'll then check to change control flow), use the native language feature to change the control flow, retaining error information.
Full Demo
Live On Coliru
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <boost/signals2.hpp>
#include <iomanip>
#include <iostream>
namespace asio = boost::asio;
namespace ph = boost::asio::placeholders;
using boost::system::error_code;
using namespace std::chrono_literals;
enum class SerialPortEvents { read_data };
class SerialPort {
using duration = std::chrono::system_clock::duration;
static constexpr duration //
ignore_timeout = duration::min(), // e.g. -0x8000000000000000ns
bulk_data_receive_complete = 25ms;
public:
SerialPort() : io(), port(io), read_timer(io) {}
void open_port(std::string device);
void start_async_read(duration read_timeout = ignore_timeout);
void run() {
if (io.stopped())
io.restart();
io.run();
}
boost::signals2::signal<void(unsigned, SerialPortEvents, std::string)> signal;
private:
void read_handler(error_code ec, size_t bytes_transferred);
void receive_timeout_handler(error_code ec);
duration read_timeout = bulk_data_receive_complete;
asio::io_context io;
asio::serial_port port;
asio::system_timer read_timer;
std::array<char, 8096> read_buffer;
std::string received_data;
// TODO
unsigned const port_number = 0;
};
void SerialPort::open_port(std::string device) { port.open(device); }
void SerialPort::start_async_read(duration timeout_override) {
read_buffer.fill(0); // Clear Buffer (TODO redundant)
if (timeout_override != ignore_timeout)
read_timeout = timeout_override;
std::cerr << "Expiry: " << read_timeout/1.s << "s from now" << std::endl;
read_timer.expires_after(read_timeout); // reset timer
read_timer.async_wait(boost::bind(&SerialPort::receive_timeout_handler, this, ph::error));
port.async_read_some( //
boost::asio::buffer(read_buffer),
boost::bind(&SerialPort::read_handler, this, ph::error, ph::bytes_transferred));
}
void SerialPort::read_handler(error_code ec, size_t bytes_transferred) {
if (ec == asio::error::operation_aborted) {
signal(port_number, SerialPortEvents::read_data, std::move(received_data));
} else if (ec.failed()) {
std::cerr << "SerialPort read: " << ec.message() << std::endl;
} else {
copy_n(begin(read_buffer), bytes_transferred, back_inserter(received_data));
start_async_read(bulk_data_receive_complete); // continue reading
}
}
void SerialPort::receive_timeout_handler(error_code ec) {
if (!ec.failed()) {
port.cancel(ec);
std::cerr << "read canceled: " << ec.message() << std::endl;
}
}
int main(int argc, char** argv) {
SerialPort sp;
sp.open_port(argc > 1 ? argv[1] : "COM3");
int count = 0;
sp.signal.connect([&count](unsigned port, SerialPortEvents event, std::string data) {
assert(port == 0);
assert(event == SerialPortEvents::read_data);
std::cout << "data #" << ++count << ": " << std::quoted(data) << "\n----" << std::endl;
});
sp.start_async_read(10s);
sp.run();
sp.start_async_read();
sp.run();
}
Testing with
socat -d -d pty,raw,echo=0 pty,raw,echo=0
./build/sotest /dev/pts/7
And various device emulations:
for a in hello world bye world; do sleep .01; echo "$a"; done >> /dev/pts/9
for a in hello world bye world; do sleep .025; echo "$a"; done >> /dev/pts/9
for a in hello world bye world; do sleep 1.0; echo "$a"; done >> /dev/pts/9
cat /etc/dictionaries-common/words >> /dev/pts/9
You can see all the outputs match with the expectations. With the sleep .025 you can see the input split over two read operations, but never with repeated data.
Handler tracking graphs for the various runs were captured as images (1-4, not reproduced here).
The last one (literally throwing the dictionary at it) is way too big to be useful: https://imgur.com/a/I5lHnCV
Simplifying Notes
Note that your entire SerialPort re-implements a composed read operation. You might simplify all that to asio::async_read_until with a MatchCondition.
This has the benefit of allowing directly asio::dynamic_buffer(received_data) as well.
Here's a simpler version that doesn't use a timer, but instead updates the deadline inside the manual run() loop.
It uses a single composed read operation with a MatchCondition that checks when the connection is "idle".
Live On Coliru
#include <boost/asio.hpp>
#include <iomanip>
#include <iostream>
namespace asio = boost::asio;
using namespace std::chrono_literals;
enum class SerialPortEvents { read_data };
class SerialPort {
using Clock = std::chrono::system_clock;
using Duration = Clock::duration;
static constexpr Duration default_idle_timeout = 25ms;
public:
void open_port(std::string device);
void read_till_idle(Duration idle_timeout = default_idle_timeout);
std::function<void(unsigned, SerialPortEvents, std::string)> signal;
private:
asio::io_context io;
asio::serial_port port{io};
std::string received_data;
};
void SerialPort::open_port(std::string device) { port.open(device); }
namespace {
// Asio requires nested result_type to be MatchCondition... :(
template <typename F> struct AsMatchCondition {
using CBT = boost::asio::dynamic_string_buffer<char, std::char_traits<char>,
std::allocator<char>>::const_buffers_type;
using It = asio::buffers_iterator<CBT>;
using result_type = std::pair<It, bool>;
F _f;
AsMatchCondition(F f) : _f(std::move(f)) {}
auto operator()(It f, It l) const { return _f(f, l); }
};
}
void SerialPort::read_till_idle(Duration idle_timeout) {
if (io.stopped())
io.restart();
using T = Clock::time_point;
T start = Clock::now();
auto current_timeout = idle_timeout;
auto deadline = T::max();
auto is_idle = [&](T& new_now) { // atomic w.r.t. a new_now
new_now = Clock::now();
return new_now >= deadline;
};
auto update = [&](int invocation) {
auto previous = start;
bool idle = is_idle(start);
if (invocation > 0) {
current_timeout = default_idle_timeout; // or not, your choice
std::cerr << " [update deadline for current timeout:" << current_timeout / 1ms << "ms after "
<< (start - previous) / 1ms << "ms]" << std::endl;
}
deadline = start + current_timeout;
return idle;
};
int invocation = 0; // to avoid updating current_timeout on first invocation
auto condition = AsMatchCondition([&](auto, auto e) { return std::pair(e, update(invocation++)); });
async_read_until(port, asio::dynamic_buffer(received_data), condition,
[this](auto...) { signal(0, SerialPortEvents::read_data, std::move(received_data)); });
for (T t; !io.stopped(); io.run_for(5ms))
if (is_idle(t))
port.cancel();
}
void data_received(unsigned port, SerialPortEvents event, std::string data) {
static int count = 0;
assert(port == 0);
assert(event == SerialPortEvents::read_data);
std::cout << "data #" << ++count << ": " << std::quoted(data) << std::endl;
}
int main(int argc, char** argv) {
SerialPort sp;
sp.signal = data_received;
sp.open_port(argc > 1 ? argv[1] : "COM3");
sp.read_till_idle(3s);
}
Same local demos:
I don't know why, but I can't wrap my head around the Boost.Beast websocket server and how you can (or should) interact with it.
The basic program I made looks like this, across two classes (WebSocketListener and WebSocketSession):
https://www.boost.org/doc/libs/develop/libs/beast/example/websocket/server/async/websocket_server_async.cpp
Everything works great: I can connect, and it echoes messages. We will only ever have one active session, and I'm struggling to understand how I can interface with this session from outside its class, for example in my int main() or in another class that may be responsible for issuing reads/writes. We will be using a simple Command design pattern: commands asynchronously come into a buffer, get processed against hardware, and the results are then written back out with async_write. The reading and queuing is straightforward and will be done in the WebSocketSession, but everything I see for writes just reads/writes directly inside the session and never takes external input.
I've seen examples using things like boost::asio::async_write(socket, buffer, ...) but I'm struggling to understand how I get a reference to said socket when the session is created by the listener itself.
Instead of depending on a socket from outside of the session, I'd make the session depend on your program logic.
That's because the session (connection) will govern its own lifetime, arriving spontaneously and potentially disconnecting spontaneously. Your hardware, most likely, doesn't.
So, borrowing the concept of "Dependency Injection", tell your listener about your application logic, and then call into that from the session. (The listener will "inject" the dependency into each newly created session.)
Let's start from a simplified/modernized version of your linked example.
Now, where we prepare a response, you want your own logic injected, so let's write it how we would imagine it:
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
response_ = logic_->Process(beast::buffers_to_string(buffer_));
ws_.async_write(
net::buffer(response_),
beast::bind_front_handler(&session::on_write, shared_from_this()));
}
Here we declare the members and initialize them from the constructor:
std::string response_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
Now, we need to inject the listener with the logic so we can pass it along:
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(ex)
, logic_(logic) {
So that we can pass it along:
void on_accept(beast::error_code ec, tcp::socket socket) {
if (ec) {
fail(ec, "accept");
} else {
std::make_shared<session>(std::move(socket), logic_)->run();
}
// Accept another connection
do_accept();
}
Now making the real logic in main:
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
std::make_shared<listener>(ioc.get_executor(),
tcp::endpoint{address, port}, logic)
->run();
ioc.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
Demo Logic
Just for fun, let's implement some random logic, that might be implemented in hardware in the future:
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::cout << "Processing: " << std::quoted(request) << std::endl;
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
Now you can see it in action: Full Listing
Where To Go From Here
What if you need multiple responses for one request?
You need a queue. I usually call those an outbox, so searching for outbox_, _outbox etc. will give lots of examples.
Those examples will also show how to deal with other situations where writes can be "externally initiated", and how to safely enqueue those. Perhaps a very engaging example is here: How to batch send unsent messages in asio
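A minimal sketch of such an outbox inside the session (member names illustrative; the UPDATE below shows the full version wired into this example):
std::deque<std::string> outbox_; // pending messages, FIFO

void post_message(std::string msg) { // safe to call from anywhere
    post(ws_.get_executor(),         // hop onto the session's strand first
         [self = shared_from_this(), msg = std::move(msg)]() mutable {
             self->outbox_.push_back(std::move(msg));
             if (self->outbox_.size() == 1) // no write currently in flight
                 self->do_write_loop();
         });
}

void do_write_loop() { // only ever runs on the strand
    if (outbox_.empty())
        return;
    ws_.async_write(net::buffer(outbox_.front()),
        [self = shared_from_this()](beast::error_code ec, std::size_t) {
            if (ec)
                return fail(ec, "write");
            self->outbox_.pop_front();
            self->do_write_loop(); // send the next queued message, if any
        });
}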
Listing For Reference
In case the links go dead in the future:
#include <boost/algorithm/string/trim.hpp>
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <filesystem>
#include <functional>
#include <iostream>
static std::string g_app_name = "app-logic-service";
#include <boost/core/demangle.hpp> // just for our demo logic
#include <boost/spirit/home/x3.hpp> // idem
#include <numeric> // idem
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
// Report a failure
void fail(beast::error_code ec, char const* what) {
std::cerr << what << ": " << ec.message() << "\n";
}
class session : public std::enable_shared_from_this<session> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
std::string response_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
void run() {
// Get on the correct executor
// strand for thread safety
dispatch(
ws_.get_executor(),
beast::bind_front_handler(&session::on_run, shared_from_this()));
}
private:
void on_run() {
// Set suggested timeout settings for the websocket
ws_.set_option(websocket::stream_base::timeout::suggested(
beast::role_type::server));
// Set a decorator to change the Server of the handshake
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) + " " +
g_app_name);
}));
// Accept the websocket handshake
ws_.async_accept(
beast::bind_front_handler(&session::on_accept, shared_from_this()));
}
void on_accept(beast::error_code ec) {
if (ec)
return fail(ec, "accept");
do_read();
}
void do_read() {
ws_.async_read(
buffer_,
beast::bind_front_handler(&session::on_read, shared_from_this()));
}
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
auto request = boost::algorithm::trim_copy(
beast::buffers_to_string(buffer_.data()));
std::cout << "Processing: " << std::quoted(request) << " from "
<< beast::get_lowest_layer(ws_).socket().remote_endpoint()
<< std::endl;
response_ = logic_->Process(request);
ws_.async_write(
net::buffer(response_),
beast::bind_front_handler(&session::on_write, shared_from_this()));
}
void on_write(beast::error_code ec, std::size_t bytes_transferred) {
boost::ignore_unused(bytes_transferred);
if (ec)
return fail(ec, "write");
// Clear the buffer
buffer_.consume(buffer_.size());
// Do another read
do_read();
}
};
// Accepts incoming connections and launches the sessions
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(ex)
, logic_(logic) {
acceptor_.open(endpoint.protocol());
acceptor_.set_option(tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen(tcp::acceptor::max_listen_connections);
}
// Start accepting incoming connections
void run() { do_accept(); }
private:
void do_accept() {
// The new connection gets its own strand
acceptor_.async_accept(make_strand(ex_),
beast::bind_front_handler(&listener::on_accept,
shared_from_this()));
}
void on_accept(beast::error_code ec, tcp::socket socket) {
if (ec) {
fail(ec, "accept");
} else {
std::make_shared<session>(std::move(socket), logic_)->run();
}
// Accept another connection
do_accept();
}
};
int main(int argc, char* argv[]) {
g_app_name = std::filesystem::path(argv[0]).filename();
if (argc != 4) {
std::cerr << "Usage: " << g_app_name << " <address> <port> <threads>\n"
<< "Example:\n"
<< " " << g_app_name << " 0.0.0.0 8080 1\n";
return 1;
}
auto const address = net::ip::make_address(argv[1]);
auto const port = static_cast<uint16_t>(std::atoi(argv[2]));
auto const threads = std::max<int>(1, std::atoi(argv[3]));
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
std::make_shared<listener>(ioc.get_executor(),
tcp::endpoint{address, port}, logic)
->run();
ioc.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
}
UPDATE
In response to the comments I reified the outbox pattern again. Note some of the comments in the code.
Compiler Explorer
#include <boost/algorithm/string/trim.hpp>
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <deque>
#include <filesystem>
#include <functional>
#include <iostream>
#include <list>
static std::string g_app_name = "app-logic-service";
#include <boost/core/demangle.hpp> // just for our demo logic
#include <boost/spirit/home/x3.hpp> // idem
#include <numeric> // idem
namespace AppDomain {
struct Logic {
std::string banner;
Logic(std::string msg) : banner(std::move(msg)) {}
std::string Process(std::string request) {
std::string result;
auto fold = [&result](auto op, double initial) {
return [=, &result](auto& ctx) {
auto& args = _attr(ctx);
auto v = accumulate(args.begin(), args.end(), initial, op);
result = "Fold:" + std::to_string(v);
};
};
auto invalid = [&result](auto& ctx) {
result = "Invalid Command: " + _attr(ctx);
};
using namespace boost::spirit::x3;
auto args = rule<void, std::vector<double>>{} = '(' >> double_ % ',' >> ')';
auto add = "adding" >> args[fold(std::plus<>{}, 0)];
auto mul = "multiplying" >> args[fold(std::multiplies<>{}, 1)];
auto err = lexeme[+char_][invalid];
phrase_parse(begin(request), end(request), add | mul | err, blank);
return banner + result;
}
};
} // namespace AppDomain
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
// Report a failure
void fail(beast::error_code ec, char const* what) {
std::cerr << what << ": " << ec.message() << "\n";
}
class session : public std::enable_shared_from_this<session> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
explicit session(tcp::socket&& socket,
std::shared_ptr<AppDomain::Logic> logic)
: ws_(std::move(socket))
, logic_(logic) {}
void run() {
// Get on the correct executor
// strand for thread safety
dispatch(
ws_.get_executor(),
beast::bind_front_handler(&session::on_run, shared_from_this()));
}
void post_message(std::string msg) {
post(ws_.get_executor(),
[self = shared_from_this(), this, msg = std::move(msg)] {
do_post_message(std::move(msg));
});
}
private:
void on_run() {
// on the strand
// Set suggested timeout settings for the websocket
ws_.set_option(websocket::stream_base::timeout::suggested(
beast::role_type::server));
// Set a decorator to change the Server of the handshake
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) + " " +
g_app_name);
}));
// Accept the websocket handshake
ws_.async_accept(
beast::bind_front_handler(&session::on_accept, shared_from_this()));
}
void on_accept(beast::error_code ec) {
// on the strand
if (ec)
return fail(ec, "accept");
do_read();
}
void do_read() {
// on the strand
buffer_.clear();
ws_.async_read(
buffer_,
beast::bind_front_handler(&session::on_read, shared_from_this()));
}
void on_read(beast::error_code ec, std::size_t /*bytes_transferred*/) {
// on the strand
if (ec == websocket::error::closed) return;
if (ec.failed()) return fail(ec, "read");
// Process the message
auto request = boost::algorithm::trim_copy(
beast::buffers_to_string(buffer_.data()));
std::cout << "Processing: " << std::quoted(request) << " from "
<< beast::get_lowest_layer(ws_).socket().remote_endpoint()
<< std::endl;
do_post_message(logic_->Process(request)); // already on the strand
do_read();
}
std::deque<std::string> _outbox;
void do_post_message(std::string msg) {
// on the strand
_outbox.push_back(std::move(msg));
if (_outbox.size() == 1)
do_write_loop();
}
void do_write_loop() {
// on the strand
if (_outbox.empty())
return;
ws_.async_write( //
net::buffer(_outbox.front()),
[self = shared_from_this(), this] //
(beast::error_code ec, size_t bytes_transferred) {
// on the strand
boost::ignore_unused(bytes_transferred);
if (ec)
return fail(ec, "write");
_outbox.pop_front();
do_write_loop();
});
}
};
// Accepts incoming connections and launches the sessions
class listener : public std::enable_shared_from_this<listener> {
net::any_io_executor ex_;
tcp::acceptor acceptor_;
std::shared_ptr<AppDomain::Logic> logic_;
public:
listener(net::any_io_executor ex, tcp::endpoint endpoint,
std::shared_ptr<AppDomain::Logic> logic)
: ex_(ex)
, acceptor_(make_strand(ex)) // NOTE to guard sessions_
, logic_(logic) {
acceptor_.open(endpoint.protocol());
acceptor_.set_option(tcp::acceptor::reuse_address(true));
acceptor_.bind(endpoint);
acceptor_.listen(tcp::acceptor::max_listen_connections);
}
// Start accepting incoming connections
void run() { do_accept(); }
void broadcast(std::string msg) {
post(acceptor_.get_executor(),
beast::bind_front_handler(&listener::do_broadcast,
shared_from_this(), std::move(msg)));
}
private:
using handle_t = std::weak_ptr<session>;
std::list<handle_t> sessions_;
void do_broadcast(std::string const& msg) {
for (auto handle : sessions_)
if (auto sess = handle.lock())
sess->post_message(msg);
}
void do_accept() {
// The new connection gets its own strand
acceptor_.async_accept(make_strand(ex_),
beast::bind_front_handler(&listener::on_accept,
shared_from_this()));
}
void on_accept(beast::error_code ec, tcp::socket socket) {
// on the strand
if (ec) {
fail(ec, "accept");
} else {
auto sess = std::make_shared<session>(std::move(socket), logic_);
sessions_.emplace_back(sess);
// optionally:
sessions_.remove_if(std::mem_fn(&handle_t::expired));
sess->run();
}
// Accept another connection
do_accept();
}
};
static void emulate_hardware_stuff(std::shared_ptr<listener> srv) {
using std::this_thread::sleep_for;
using namespace std::chrono_literals;
// Extremely simplistic. Instead I'd recommend `steady_timer` with
// `_async_wait` here, but since I'm just making a sketch...
unsigned i = 0;
while (true) {
sleep_for(1s);
srv->broadcast("Hardware thing #" + std::to_string(++i));
}
}
int main(int argc, char* argv[]) {
g_app_name = std::filesystem::path(argv[0]).filename();
if (argc != 4) {
std::cerr << "Usage: " << g_app_name << " <address> <port> <threads>\n"
<< "Example:\n"
<< " " << g_app_name << " 0.0.0.0 8080 1\n";
return 1;
}
auto const address = net::ip::make_address(argv[1]);
auto const port = static_cast<uint16_t>(std::atoi(argv[2]));
auto const threads = std::max<int>(1, std::atoi(argv[3]));
auto logic = std::make_shared<AppDomain::Logic>("StackOverflow Demo/");
try {
// The io_context is required for all I/O
net::thread_pool ioc(threads);
auto srv = std::make_shared<listener>( //
ioc.get_executor(), //
tcp::endpoint{address, port}, //
logic);
srv->run();
std::thread something_hardware(emulate_hardware_stuff, srv);
ioc.join();
something_hardware.join();
} catch (beast::system_error const& se) {
fail(se.code(), "listener");
}
}
With Live Demo:
I have started with this example so I won't post all the code. My objective is to download a large file without blocking my main thread. The second objective is to get notifications so I can update a progress bar. I do have the code working a couple of ways. First is to just call ioc.run(); and let it go to work; I get the file downloaded. But I can not find any way to start the session without blocking.
The second way, I can make the calls down to http::async_read_some and the call works, but I can not get a response that I can use. I don't know if there is a way to pass a lambda that captures.
The #if 0..#else..#endif switches the methods. I'm sure there is a simple way but I just can not see it. I'll clean up the code when I get it working, like setting the local file name. Thanks.
std::size_t on_read_some(boost::system::error_code ec, std::size_t bytes_transferred)
{
if (ec);//deal with it...
if (!bValidConnection) {
std::string_view view((const char*)buffer_.data().data(), bytes_transferred);
auto pos = view.find("Content-Length:");
if (pos == std::string_view::npos)
;//error
file_size = std::stoi(view.substr(pos+sizeof("Content-Length:")).data());
if (!file_size)
;//error
bValidConnection = true;
}
else {
file_pos += bytes_transferred;
response_call(ec, file_pos);
}
#if 0
std::cout << "in on_read_some caller\n";
http::async_read_some(stream_, buffer_, file_parser_, std::bind(
response_call,
std::placeholders::_1,
std::placeholders::_2));
#else
std::cout << "in on_read_some inner\n";
http::async_read_some(stream_, buffer_, file_parser_, std::bind(
&session::on_read_some,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
#endif
return buffer_.size();
}
The main, messy but.....
struct lambda_type {
bool bDone = false;
void operator ()(const boost::system::error_code ec, std::size_t bytes_transferred) {
;
}
};
int main(int argc, char** argv)
{
auto const host = "reserveanalyst.com";
auto const port = "443";
auto const target = "/downloads/demo.msi";
int version = argc == 5 && !std::strcmp("1.0", argv[4]) ? 10 : 11;
boost::asio::io_context ioc;
ssl::context ctx{ ssl::context::sslv23_client };
load_root_certificates(ctx);
//ctx.load_verify_file("ca.pem");
auto so = std::make_shared<session>(ioc, ctx);
so->run(host, port, target, version);
bool bDone = false;
auto const lambda = [](const boost::system::error_code ec, std::size_t bytes_transferred) {
std::cout << "data lambda bytes: " << bytes_transferred << " er: " << ec.message() << std::endl;
};
lambda_type lambda2;
so->set_response_call(lambda);
ioc.run();
std::cout << "not in ioc.run()!!!!!!!!" << std::endl;
so->async_read_some(lambda);
//pseudo message pump when working.........
for (;;) {
std::this_thread::sleep_for(250ms);
std::cout << "time" << std::endl;
}
return EXIT_SUCCESS;
}
And stuff I've added to the class session
class session : public std::enable_shared_from_this<session>
{
using response_call_type = void(*)(boost::system::error_code ec, std::size_t bytes_transferred);
http::response_parser<http::file_body> file_parser_;
response_call_type response_call;
//
bool bValidConnection = false;
std::size_t file_pos = 0;
std::size_t file_size = 0;
public:
auto& get_result() { return res_; }
auto& get_buffer() { return buffer_; }
void set_response_call(response_call_type the_call) { response_call = the_call; }
I've updated this as I finally put it to use, and I wanted the old method where I could download either to a file or to a string. Here is a link to a great talk about how asio works:
CppCon 2016 Michael Caisse Asynchronous IO with BoostAsio
As for my misunderstanding of how to pass a lambda, here is Adam Nevraumont's answer
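My understanding of the lambda issue, for what it's worth: the original response_call_type was a raw function pointer, which can only bind capture-less lambdas; the updated code below uses std::function instead, which accepts capturing lambdas. A tiny illustration (names here are made up):
#include <boost/system/error_code.hpp>
#include <cstddef>
#include <functional>

// a raw function pointer only binds capture-less lambdas
using response_fn_ptr = void (*)(boost::system::error_code, std::size_t);
// std::function type-erases the callable, so capturing lambdas bind fine
using response_fn = std::function<void(boost::system::error_code, std::size_t)>;

int main() {
    std::size_t total = 0;
    // response_fn_ptr p = [&total](boost::system::error_code, std::size_t n) { total += n; }; // won't compile
    response_fn f = [&total](boost::system::error_code, std::size_t n) { total += n; };
    f({}, 42);
}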
There are two ways to compile this, using a type to select the method. Both are shown at the beginning of main. You can construct either a file downloader or a string downloader by selecting the type of beast parser. The parsers don't have the same constructs, so if constexpr compile-time conditionals are used. And I checked: a release build of the downloader is about 1K, so it is pretty lightweight for what it does. In the case of a small string you don't have to handle the callbacks; either pass an empty lambda or add the likes of:
if(response_call)
response_call(resp_ok, test);
This looks to be a pretty clean way to get the job done, so I've updated this post as of 11/27/2022.
The code:
//
// Copyright (c) 2016-2019 Vinnie Falco (vinnie dot falco at gmail dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
// Official repository: https://github.com/boostorg/beast
//------------------------------------------------------------------------------
//
// Example: HTTP SSL client, synchronous, usable in a thread with a message pump
// Added code to use from a message pump
// Also useable as body to a file download, or body to string
//
//------------------------------------------------------------------------------
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/ssl/error.hpp>
#include <boost/asio/ssl/stream.hpp>
#include <cstdlib>
#include <iostream>
#include <string>
#include <fstream>
//the boost shipped certificates
#include <boost/../libs/beast/example/common/root_certificates.hpp>
//TODO add your ssl libs as you would like
#ifdef _M_IX86
#pragma comment(lib, "libcrypto.lib")
#pragma comment(lib, "libssl.lib")
#elif _M_X64
#pragma comment(lib, "libcrypto-3-x64.lib")
#pragma comment(lib, "libssl-3-x64.lib")
#endif
namespace downloader {
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
namespace ssl = net::ssl; // from <boost/asio/ssl.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
//specialization if using < c++17; see both 'if constexpr' below.
//this is not needed otherwise
//namespace detail {
// template<typename Type>
// void open_file(http::parser < false, Type>& p, const char* name, boost::system::error_code& file_open_ec) { }
// template<>
// void open_file(http::parser<false, http::file_body>& p, const char* name, boost::system::error_code& file_open_ec) {
// p.get().body().open(name, boost::beast::file_mode::write, file_open_ec);
// }
// template<typename Type>
// std::string get_string(http::parser < false, Type>& p) { return std::string{}; }
// template<>
// std::string get_string(http::parser<false, http::string_body>& p) {
// return p.get().body();
// }
//} //namespace detail
enum responses {
resp_null,
resp_ok,
resp_done,
resp_error,
};
using response_call_type = std::function< void(responses, std::size_t)>;
template<typename ParserType>
struct download {
//as these can be set with array initialization
const char* target_ = "/";
const char* filename_ = "test.txt";
const char* host_ = "lakeweb.net";
std::string body_;
using response_call_type = std::function< void(responses, std::size_t)>;
response_call_type response_call;
boost::asio::io_context ioc_;
ssl::context ctx_{ ssl::context::sslv23_client };
ssl::stream<tcp::socket> stream_{ ioc_, ctx_ };
tcp::resolver resolver_{ ioc_ };
boost::beast::flat_buffer buffer_;
uint64_t file_size_{};
int version{ 11 };
void set_response_call(response_call_type the_call) { response_call = the_call; }
uint64_t get_file_size() { return file_size_; }
void stop() { ioc_.stop(); }
bool stopped() { return ioc_.stopped(); }
std::string get_body() { return std::move(body_); }
void run() {
try {
// TODO should have a timer in case of a hang
load_root_certificates(ctx_);
// Set SNI Hostname (many hosts need this to handshake successfully)
if (!SSL_set_tlsext_host_name(stream_.native_handle(), host_)) {
boost::system::error_code ec{ static_cast<int>(::ERR_get_error()), boost::asio::error::get_ssl_category() };
throw boost::system::system_error{ ec };
}
//TODO resolve is deprecated, use endpoint
auto const results = resolver_.resolve(host_, "443");
boost::asio::connect(stream_.next_layer(), results.begin(), results.end());
stream_.handshake(ssl::stream_base::client);
// Set up an HTTP GET request message
http::request<http::string_body> req{ http::verb::get, target_, version };
req.set(http::field::host, host_);
req.set(http::field::user_agent, "mY aGENT");
// Send the HTTP request to the remote host
http::write(stream_, req);
// Read the header
boost::system::error_code file_open_ec;
http::parser<false, ParserType> p;
p.body_limit((std::numeric_limits<std::uint32_t>::max)());
//detail::open_file(p, filename_, file_open_ec);
//or => c++17
if constexpr (std::is_same_v<ParserType, http::file_body>)
p.get().body().open(filename_, boost::beast::file_mode::write, file_open_ec);
http::read_header(stream_, buffer_, p);
file_size_ = p.content_length().has_value() ? p.content_length().value() : 0;
//Read the body
uint64_t test{};
boost::system::error_code rec;
for (;;) {
test += http::read_some(stream_, buffer_, p, rec);
if (test >= file_size_) {
response_call(resp_done, 0);
break;
}
response_call(resp_ok, test);
}
// Gracefully close the stream
boost::system::error_code ec;
stream_.shutdown(ec);
if (ec == boost::asio::error::eof)
{
// Rationale:
// http://stackoverflow.com/questions/25587403/boost-asio-ssl-async-shutdown-always-finishes-with-an-error
ec.assign(0, ec.category());
}
if (ec)
throw boost::system::system_error{ ec };
//value = detail::get_string(p);
//or => c++17
if constexpr (std::is_same_v<ParserType, http::string_body>)
body_ = p.get().body();
}
catch (std::exception const& e)
{
std::cerr << "Error: " << e.what() << std::endl;
response_call(resp_error, -1);
}
ioc_.stop();
}
};
}//namespace downloader
//comment to test with string body
#define THE_FILE_BODY_TEST
int main(int argc, char** argv)
{
using namespace downloader;
#ifdef THE_FILE_BODY_TEST
download<http::file_body> dl{"/Nasiri%20Abarbekouh_Mahdi.pdf", "test.pdf"};
#else //string body test
download<http::string_body> dl{ "/robots.txt" };
#endif
responses dl_response{ resp_null };
size_t cur_size{};
auto static const lambda = [&dl_response, &dl, &cur_size](responses response, std::size_t bytes_transferred) {
if ((dl_response = response) == resp_ok) {
cur_size += bytes_transferred;
size_t sizes = dl.get_file_size() - cur_size;//because size is what is left
//drive your progress bar from here in a GUI app
}
};
dl.set_response_call(lambda);
std::thread thread{ [&dl]() { dl.run(); } };
//thread has started, now the pseudo message pump
bool quit = false; //true: as if a cancel button was pushed; won't finish download
for (int i = 0; ; ++i) {
switch (dl_response) { //ad hoc as if messaged
case resp_ok:
std::cout << "from sendmessage: " << cur_size << std::endl;
dl_response = resp_null;
break;
case resp_done:
std::cout << "from sendmessage: done" << std::endl;
dl_response = resp_null;
break;
case resp_error:
std::cout << "from sendmessage: error" << std::endl;
dl_response = resp_null;
}//switch
if (!(i % 5))
std::cout << "in message pump, stopped: " << std::boolalpha << dl.stopped() << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
if (quit && i == 10) //the cancel message
dl.stop();
if (!(i % 20) && dl.stopped()) {//dl job was quit or error or finished
std::cout << "dl is stopped" << std::endl;
break;
}
}
#ifdef THE_FILE_BODY_TEST
std::cout << "file written named: 'test.txt'" << std::endl;
#else
std::string res = dl.get_body();
std::cout << "body retrieved:\n" << res << std::endl;
#endif
if (thread.joinable())//in the case a thread was never started
thread.join();
std::cout << "exiting, program all done" << std::endl;
return EXIT_SUCCESS;
}
I strongly recommend against using the low-level [async_]read_some functions; instead, use http::[async_]read as intended, with http::response_parser<http::buffer_body>.
I do have an example of that - which is a little bit complicated by the fact that it also uses Boost Process to concurrently decompress the body data, but regardless it should show you how to use it:
How to read data from Internet using muli-threading with connecting only once?
I guess I could tailor it to your specific example given more complete code, but perhaps the above is good enough? Also see "Relay an HTTP message" in libs/beast/example/doc/http_examples.hpp which I used as "inspiration".
Caution: the buffer arithmetic is not intuitive. I think this is unfortunate and should not have been necessary, so pay (very) close attention to these samples for exactly how that's done.
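To make that concrete, here is a minimal (synchronous) sketch of the pattern; the async variant is analogous with http::async_read_header / http::async_read. The names stream and on_progress are placeholders, not taken from your code:
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <cstdint>
#include <limits>

namespace http = boost::beast::http;

template <class SyncReadStream, class Progress>
void read_body_with_progress(SyncReadStream& stream, Progress on_progress) {
    boost::beast::flat_buffer buffer;
    boost::system::error_code ec;

    http::response_parser<http::buffer_body> parser;
    parser.body_limit((std::numeric_limits<std::uint64_t>::max)());
    http::read_header(stream, buffer, parser, ec); // header first
    auto total = parser.content_length();          // boost::optional<std::uint64_t>

    char chunk[4096];
    std::uint64_t received = 0;
    while (!ec && !parser.is_done()) {
        parser.get().body().data = chunk;          // hand the parser a buffer to fill
        parser.get().body().size = sizeof(chunk);
        http::read(stream, buffer, parser, ec);
        if (ec == http::error::need_buffer)        // expected: the chunk was filled
            ec = {};
        // the buffer arithmetic: size is decremented by the number of bytes written
        received += sizeof(chunk) - parser.get().body().size;
        on_progress(received, total);              // e.g. update a progress bar
    }
}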
I have the following snippet:
void TcpConnection::Send(const std::vector<uint8_t>& buffer) {
std::shared_ptr<std::vector<uint8_t>> bufferCopy = std::make_shared<std::vector<uint8_t>>(buffer);
auto socket = m_socket;
m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()), [socket, bufferCopy](const boost::system::error_code& err, size_t bytesSent)
{
if (err)
{
logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
// Assume that the communications path is no longer
// valid.
socket->close();
}
});
}
This code leads to a memory leak. If the m_socket->async_send call is commented out, there is no memory leak. I cannot understand why bufferCopy is not freed after the callback is dispatched. What am I doing wrong?
Windows is used.
Since you don't show any relevant code, and the code shown does not contain a strict problem, I'm going to reason from the code smells.
The smell is that you have a TcpConnection class that is not enable_shared_from_this<TcpConnection> derived. This leads me to suspect you didn't plan ahead, because there's no possible reasonable way to continue using the instance after the completion of any asynchronous operation (like the async_send).
This leads me to suspect you have a crucially simple problem, which is that your completion handler never runs. There's only one situation that could explain this, and that leads me to assume you never run() the io_service instance.
Here's the situation live:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;
#include <iostream>
auto& logwarning = std::clog;
struct TcpConnection {
using Buffer = std::vector<uint8_t>;
void Send(Buffer const &);
TcpConnection(asio::io_service& svc) : m_socket(std::make_shared<tcp::socket>(svc)) {}
tcp::socket& socket() const { return *m_socket; }
private:
std::shared_ptr<tcp::socket> m_socket;
};
void TcpConnection::Send(Buffer const &buffer) {
auto bufferCopy = std::make_shared<Buffer>(buffer);
auto socket = m_socket;
m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()),
[socket, bufferCopy](const boost::system::error_code &err, size_t /*bytesSent*/) {
if (err) {
logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
// Assume that the communications path is no longer
// valid.
socket->close();
}
});
}
int main() {
asio::io_service svc;
tcp::acceptor a(svc, tcp::v4());
a.bind({{}, 6767});
a.listen();
boost::system::error_code ec;
do {
TcpConnection conn(svc);
a.accept(conn.socket(), ec);
char const* greeting = "whale hello there!\n";
conn.Send({greeting, greeting+strlen(greeting)});
} while (!ec);
}
You'll see that any client connecting, e.g. with netcat localhost 6767, will receive the greeting, after which, surprisingly, the connection will stay open instead of being closed.
You'd expect the connection to be closed by the server side either way, either because
a transmission error occurred in async_send
or because after the completion handler is run, it is destroyed and hence the captured shared pointers are destructed. Not only would that free the copied buffer, it would also run the destructor of the socket, which would close the connection.
This clearly confirms that the completion handler never runs. The fix is "easy", find a place to run the service:
int main() {
asio::io_service svc;
tcp::acceptor a(svc, tcp::v4());
a.set_option(tcp::acceptor::reuse_address());
a.bind({{}, 6767});
a.listen();
std::thread th;
{
asio::io_service::work keep(svc); // prevent service running out of work early
th = std::thread([&svc] { svc.run(); });
boost::system::error_code ec;
for (int i = 0; i < 11 && !ec; ++i) {
TcpConnection conn(svc);
a.accept(conn.socket(), ec);
char const* greeting = "whale hello there!\n";
conn.Send({greeting, greeting+strlen(greeting)});
}
}
th.join();
}
This runs 11 connections and exits leak-free.
Better:
It becomes a lot cleaner when the accept loop is also async, and the TcpConnection is properly shared as hinted above:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;
#include <memory>
#include <thread>
#include <iostream>
auto& logwarning = std::clog;
struct TcpConnection : std::enable_shared_from_this<TcpConnection> {
using Buffer = std::vector<uint8_t>;
TcpConnection(asio::io_service& svc) : m_socket(svc) {}
void start() {
char const* greeting = "whale hello there!\n";
Send({greeting, greeting+strlen(greeting)});
}
void Send(Buffer);
private:
friend struct Server;
Buffer m_output;
tcp::socket m_socket;
};
struct Server {
Server(unsigned short port) {
_acceptor.set_option(tcp::acceptor::reuse_address());
_acceptor.bind({{}, port});
_acceptor.listen();
do_accept();
}
~Server() {
keep.reset();
_svc.post([this] { _acceptor.cancel(); });
if (th.joinable())
th.join();
}
private:
void do_accept() {
auto conn = std::make_shared<TcpConnection>(_svc);
_acceptor.async_accept(conn->m_socket, [this,conn](boost::system::error_code ec) {
if (ec)
logwarning << "accept failed: " << ec.message() << "\n";
else {
conn->start();
do_accept();
}
});
}
asio::io_service _svc;
// prevent service running out of work early:
std::unique_ptr<asio::io_service::work> keep{std::make_unique<asio::io_service::work>(_svc)};
std::thread th{[this]{_svc.run();}}; // TODO handle handler exceptions
tcp::acceptor _acceptor{_svc, tcp::v4()};
};
void TcpConnection::Send(Buffer buffer) {
m_output = std::move(buffer);
auto self = shared_from_this();
m_socket.async_send(asio::buffer(m_output),
[self](const boost::system::error_code &err, size_t /*bytesSent*/) {
if (err) {
logwarning << "clientcomms_t::sendNext encountered error: " << err.message() << "\n";
// not holding on to `self` means the socket gets closed
}
// do more with `self` which points to the TcpConnection instance...
});
}
int main() {
Server server(6868);
std::this_thread::sleep_for(std::chrono::seconds(3));
}
Using boost::asio, I'm coding network stuff.
I tried to build a simple send-and-receive-string protocol.
The sender first sends the string size to the receiver. Then the sender sends the actual string to the receiver.
In particular, I designed the following two protocols.
A sender holding a string sends it to a receiver. Upon receiving it, the receiver shows the string.
Execute the above protocol sequentially (two times).
I built the above protocols as shown below:
If I execute this protocol once, that works fine.
However, if I execute this protocol more than once (e.g. two times), the string size that the receiver receives is wrong.
First time : 1365 bytes.
Second time : 779073 bytes. (it actually reads not 779073 but 7790)
I found that os << data_size does not write in a binary way. "779073" is just sent as a 6-byte string, but the receiver reads only 4 bytes of it.
How can I send and receive binary data using boost::asio and boost::asio::streambuf?
Receiver
// socket is already defined
// ** first step: recv data size
boost::asio::streambuf buf;
boost::asio::read(
socket,
buf,
boost::asio::transfer_exactly(sizeof(uint32_t))
);
std::istream iss(&buf);
uint32_t read_len;
iss >> read_len;
// ** second step: recv payload based on the data size
boost::asio::streambuf buf2;
read_len = boost::asio::read(socket, buf2,
boost::asio::transfer_exactly(read_len), error);
cout << " read "<< read_len << " bytes payload" << endl;
std::istream is_payload(&buf2);
std::string str;
is_payload >> str;
cout << str << endl;
Sender
// socket is already defined
string str=...; // some string to be sent
// ** first step: tell the string size to the reciever
uint32_t data_size = str.size();
boost::asio::streambuf send_buf;
std::ostream os(&send_buf);
os << data_size;
size_t sent_byte = boost::asio::write(socket, send_buf.data());
cout << sent_byte << endl; // debug purpose
// ** second step: send the actual string (payload)
sent_byte = boost::asio::write(socket, boost::asio::buffer(reinterpret_cast<const char*>(&str[0]), data_size));
cout << sent_byte << endl; // debug purpose
You can send the size in binary, but that requires you to take architectural differences between devices and operating systems into account¹.
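For completeness, a minimal sketch of the direct fix (a fixed-width, big-endian length prefix via Boost.Endian; the function names here are just for illustration):
#include <boost/asio.hpp>
#include <boost/endian/arithmetic.hpp>
#include <cstdint>
#include <string>
#include <vector>

// sender: write a fixed 4-byte big-endian length, then the payload
void send_string(boost::asio::ip::tcp::socket& socket, std::string const& str) {
    boost::endian::big_uint32_t len = static_cast<std::uint32_t>(str.size());
    std::vector<boost::asio::const_buffer> bufs{
        boost::asio::buffer(&len, sizeof(len)),
        boost::asio::buffer(str)};
    boost::asio::write(socket, bufs);
}

// receiver: read the same 4 bytes back, then exactly that many payload bytes
std::string recv_string(boost::asio::ip::tcp::socket& socket) {
    boost::endian::big_uint32_t len{};
    boost::asio::read(socket, boost::asio::buffer(&len, sizeof(len)));
    std::string payload(static_cast<std::size_t>(len), '\0');
    boost::asio::read(socket, boost::asio::buffer(payload));
    return payload;
}
The reusable version below encodes the same idea, with net_size_t = boost::endian::big_int32_t as the wire format.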
Here's my take on actually coding the protocol reusably:
//#define BOOST_ASIO_ENABLE_HANDLER_TRACKING
#include <boost/asio.hpp>
#include <boost/endian/arithmetic.hpp>
namespace ba = boost::asio;
using ba::ip::tcp;
using error_code = boost::system::error_code;
namespace Protocol { // your library
using net_size_t = boost::endian::big_int32_t; // This protocol uses Big-endian network byte order
template <typename Derived, typename Token, typename Sig = void(error_code, size_t)>
struct base_async_op : std::enable_shared_from_this<Derived> {
using base_type = base_async_op<Derived, Token, Sig>;
template <typename DeducedToken>
base_async_op(DeducedToken &&token) : _token(std::forward<DeducedToken>(token)) {}
using _Token = std::decay_t<Token>;
using _Init = ba::async_completion<_Token, Sig>;
using _Handler = typename _Init::completion_handler_type;
_Token _token;
_Init _init {_token};
auto get_allocator() const noexcept {
return (boost::asio::get_associated_allocator)(_init.completion_handler);
}
using executor_type = ba::associated_executor_t<_Handler>;
executor_type get_executor() const noexcept {
return (boost::asio::get_associated_executor)(_init.completion_handler);
}
Derived& derived() { return static_cast<Derived&>(*this); }
Derived const& derived() const { return static_cast<Derived const&>(*this); }
template <typename F>
auto wrap(F&& f) const {
//std::cout << "WRAP: " << typeid(derived().get_executor()).name() << "\n";
return ba::bind_executor(derived().get_executor(), std::forward<F>(f));
}
};
template <typename Derived, typename Stream, typename Token, typename Sig = void(error_code, size_t)>
struct stream_async_op : base_async_op<Derived, Token, Sig> {
using base_type = stream_async_op<Derived, Stream, Token, Sig>;
template <typename DeducedToken>
stream_async_op(Stream& s, DeducedToken &&token) : base_async_op<Derived, Token, Sig>(std::forward<DeducedToken>(token)), _stream(s) {}
Stream& _stream;
using executor_type = ba::associated_executor_t<typename stream_async_op::_Handler, decltype(std::declval<Stream>().get_executor())>;
executor_type get_executor() const noexcept {
return (boost::asio::get_associated_executor)(this->_init.completion_handler, _stream.get_executor());
}
};
template <typename AsyncStream, typename Buffer, typename Token>
auto async_transmit(AsyncStream& s, Buffer message_buffer, Token&& token) {
struct op : stream_async_op<op, AsyncStream, Token> {
using op::base_type::base_type;
using op::base_type::_init;
using op::base_type::_stream;
net_size_t _length[1];
auto run(Buffer buffer) {
auto self = this->shared_from_this();
_length[0] = ba::buffer_size(buffer);
ba::async_write(_stream, std::vector<ba::const_buffer> { ba::buffer(_length), buffer },
this->wrap([self,this](error_code ec, size_t transferred) { _init.completion_handler(ec, transferred); }));
return _init.result.get();
}
};
return std::make_shared<op>(s, std::forward<Token>(token))->run(message_buffer);
}
template <typename AsyncStream, typename Buffer, typename Token>
auto async_receive(AsyncStream& s, Buffer& output, Token&& token) {
struct op : stream_async_op<op, AsyncStream, Token> {
using op::base_type::base_type;
using op::base_type::_init;
using op::base_type::_stream;
net_size_t _length[1] = {0};
auto run(Buffer& output) {
auto self = this->shared_from_this();
ba::async_read(_stream, ba::buffer(_length), this->wrap([self, this, &output](error_code ec, size_t transferred) {
if (ec)
_init.completion_handler(ec, transferred);
else
ba::async_read(_stream, ba::dynamic_buffer(output), ba::transfer_exactly(_length[0]),
this->wrap([self, this](error_code ec, size_t transferred) {
_init.completion_handler(ec, transferred);
}));
}));
return _init.result.get();
}
};
return std::make_shared<op>(s, std::forward<Token>(token))->run(output);
}
template <typename Output = std::string, typename AsyncStream, typename Token>
auto async_receive(AsyncStream& s, Token&& token) {
struct op : stream_async_op<op, AsyncStream, Token, void(error_code, Output)> {
using op::base_type::base_type;
using op::base_type::_init;
using op::base_type::_stream;
Output _output;
net_size_t _length[1] = {0};
auto run() {
auto self = this->shared_from_this();
ba::async_read(_stream, ba::buffer(_length), [self,this](error_code ec, size_t) {
if (ec)
_init.completion_handler(ec, std::move(_output));
else
ba::async_read(_stream, ba::dynamic_buffer(_output), ba::transfer_exactly(_length[0]),
[self,this](error_code ec, size_t) { _init.completion_handler(ec, std::move(_output)); });
});
return _init.result.get();
}
};
return std::make_shared<op>(s, std::forward<Token>(token))->run();
}
} // Protocol
#include <iostream>
#include <iomanip>
int main() {
ba::io_context io;
tcp::socket sock(io);
sock.connect({tcp::v4(), 6767});
auto cont = [](auto name, auto continuation = []{}) { return [=](error_code ec, size_t transferred) {
std::cout << name << " completed (" << transferred << ", " << ec.message() << ")\n";
if (!ec) continuation();
}; };
auto report = [=](auto name) { return cont(name, []{}); };
// send chain
std::string hello = "Hello", world = "World";
Protocol::async_transmit(sock, ba::buffer(hello),
cont("Send hello", [&] { Protocol::async_transmit(sock, ba::buffer(world), report("Send world")); }
));
#ifndef DEMO_USE_FUTURE
// receive chain
std::string object1, object2;
Protocol::async_receive(sock, object1,
cont("Read object 1", [&] { Protocol::async_receive(sock, object2, report("Read object 2")); }));
io.run();
std::cout << "Response object 1: " << std::quoted(object1) << "\n";
std::cout << "Response object 2: " << std::quoted(object2) << "\n";
#else
// also possible, alternative completion mechanisms:
std::future<std::string> fut = Protocol::async_receive(sock, ba::use_future);
io.run();
std::cout << "Response object: " << std::quoted(fut.get()) << "\n";
#endif
}
When talking to a test server like:
xxd -p -r <<< '0000 0006 4e6f 2077 6179 0000 0005 4a6f 73c3 a90a' | netcat -l -p 6767 | xxd
The program prints
Send hello completed (9, Success)
Send world completed (9, Success)
Read object 1 completed (6, Success)
Read object 2 completed (5, Success)
Response object 1: "No way"
Response object 2: "José"
And the netcat side prints:
00000000: 0000 0005 4865 6c6c 6f00 0000 0557 6f72 ....Hello....Wor
00000010: 6c64 ld
Enabling handler tracking allows you to use handlerviz.pl to visualize the call chains:
Note: You can change big_int32_t to little_int32_t without any further change. Of course, you should change the payload on the server side to match:
xxd -p -r <<< '0600 0000 4e6f 2077 6179 0500 0000 4a6f 73c3 a90a' | netcat -l -p 6767 | xxd
¹ Endianness, e.g. using Boost Endian or ::ntohs, ::ntohl, ::htons and ::htonl