mutex on boost::asio::write not working - C++

I'm trying to make an async TCP client (it won't wait for the result of one request before sending another).
A request method looks like:
std::future<void> AsyncClient::SomeRequestMethod(sometype& numbers)
{
    return std::async(
        std::launch::async,
        [&]()
        {
            // Gonna send a json. ';' at the end of a json separates the requests.
            const std::string requestJson = Serializer::ArraySumRequest(numbers) + ';';
            boost::system::error_code err;
            write(requestJson, err);
            // Other stuff.
        });
}
The write method calls boost::asio::write like this:
void AsyncClient::write(const std::string& strToWrite, boost::system::error_code& err)
{
    // m_writeMutex is a class member I use to synchronize writing.
    std::lock_guard<std::mutex> lock(m_writeMutex);
    boost::asio::write(m_socket,
                       boost::asio::buffer(strToWrite), err);
}
But it looks like multiple threads still write concurrently, because what I receive on the server looks like:
{"Key":"Val{"Key":Value};ue"};
What should I do?

You did put the lock guard around the asio write call, but, as you can see, that guard alone does not guarantee the other end receives each request in one piece. You should instead put the guard where you need it: around producing and writing the JSON, outside of the asio call. First, drop the lock from write():
void AsyncClient::write(const std::string& strToWrite, boost::system::error_code& err)
{
    // m_writeMutex is a class member I use to synchronize writing.
    // std::lock_guard<std::mutex> lock(m_writeMutex);
    boost::asio::write(m_socket,
                       boost::asio::buffer(strToWrite), err);
}
and take it once, around the whole request, in the task instead:
return std::async(
    std::launch::async,
    [&]()
    {
        std::lock_guard<std::mutex> lock(m_writeMutex); // <--- here
        // Gonna send a json. ';' at the end of a json separates the requests.
        const std::string requestJson = Serializer::ArraySumRequest(numbers) + ';';
        boost::system::error_code err;
        write(requestJson, err);
        // Other stuff.
    });
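For completeness, here is a fuller sketch of the fixed request method (assuming the Serializer and m_writeMutex members from the question; taking and capturing the argument by value is an addition here, so the task cannot read a dangling reference once the caller returns):
std::future<void> AsyncClient::SomeRequestMethod(sometype numbers) // by value (assumption)
{
    return std::async(
        std::launch::async,
        [this, numbers]() // capture by value instead of [&]
        {
            // Serialize and send under one lock so concurrent requests cannot interleave.
            std::lock_guard<std::mutex> lock(m_writeMutex);
            const std::string requestJson = Serializer::ArraySumRequest(numbers) + ';';
            boost::system::error_code err;
            write(requestJson, err);
        });
}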


Boost awaitable: write into a socket and await particular response

The problem is a bit complex. I will try to explain the situation, and the tools I imagined to solve it, as clearly as I can.
I am writing a socket application that may write into a socket and expects a response. The protocol makes that easy: each request has a "command id" that is echoed back in the response, so we can have code that reacts to that particular request.
For simplicity, we will assume all communication through the socket is done using JSON.
First, let's assume this session type:
using json = /* assume any json lib */;
struct socket_session {
    auto write(json data) -> boost::asio::awaitable<void>;
    auto read() -> boost::asio::awaitable<json>;
private:
    boost::asio::ip::tcp::socket socket;
};
Usually, I would go with a callback system that goes (very) roughly like this:
using command_id_t = std::uint32_t;
// global incrementing command id
command_id_t command_id = 0;
// All callbacks associated with commands
std::unordered_map<command_id_t, std::function<void(json)>> callbacks;
void write_command_to_socket(
    boost::asio::io_context& ioc,
    socket_session& session,
    json command,
    std::function<void(json)> callback
) {
    boost::asio::co_spawn(ioc, session.write(std::move(command)), boost::asio::detached);
    callbacks.emplace(command_id++, std::move(callback));
}
// ... somewhere in the read loop, we call this:
void call_command(json response) {
    if (auto const& command_id = response["command"]; command_id.is_integer()) {
        if (auto const it = callbacks.find(command_id_t{command_id}); it != callbacks.end()) {
            // We found the callback for this command, call it!
            auto const& [id, callback] = *it;
            callback(response["payload"]);
            callbacks.erase(it);
        }
    }
}
It would be used like this:
write_command_to_socket(ioc, session, json_request, [](json response) {
    // do stuff
});
As I began using coroutines more and more for asynchronous code, I noticed that this kind of system is a golden opportunity for them.
Instead of passing a callback to the write function, it would return a boost::asio::awaitable<json> containing the response payload. I imagined it a bit like this:
auto const json_response = co_await write_command_to_socket(session, json_request);
Okay, here's the problem
So the first step was to transform my code like this:
auto write_command_to_socket(socket_session& session, json command) -> boost::asio::awaitable<json> {
    co_await session.write(command);
    co_return /* response data from the read loop?? */;
}
I noticed that I don't have any means to await the response, as it arrives on another async loop. I was able to imagine a system that looks like what I want, but I have no idea how to translate my mental model to asio with coroutines.
// Type from my mental model: an async promise
template<typename T>
struct promise {
    auto get_value() -> boost::asio::awaitable<T>;
    auto write_value(T value);
};
// Instead of callbacks, my mental model needs promises structured in a similar way:
std::unordered_map<command_id_t, promise<json>> promises;
auto write_command_to_socket(socket_session& session, json command) -> boost::asio::awaitable<json> {
    auto const [it, inserted] = promises.emplace(command_id++, promise<json>{});
    auto& [id, promise] = *it;
    co_await session.write(command);
    // Here we await until the reader loop sets the value
    auto const response_json = co_await promise.get_value();
    co_return response_json;
}
// ... somewhere in the read loop
void call_command(json response) {
    if (auto const& command_id = response["command"]; command_id.is_integer()) {
        if (auto const it = promises.find(command_id_t{command_id}); it != promises.end()) {
            auto const& [id, promise] = *it;
            // Effectively resumes the write_command_to_socket coroutine
            promise.write_value(response["payload"]);
            promises.erase(it);
        }
    }
}
As far as I know, the "promise type" I wrote here as an example doesn't exist in boost. Without that type, I struggle to see how my command system can exist. Would I need to write my own coroutine type for that kind of system? Or is there a way to get away with just boost's coroutine types?
With asio, as I said, the "promise type" doesn't exist. Asio instead uses continuation handlers, which are a kind of callback that may either call a plain callback or resume a coroutine.
To create such a continuation handler, one must first initiate an async operation. An async operation can be resumed by another one if you want, or be composed of many async operations. This is done with the asio::async_initiate function, which takes some parameters regarding the form of the continuation:
// the completion token type can be a callback,
// could be `asio::use_awaitable_t const&` or even `asio::detached_t const&`
// (this is inside a function template taking `CompletionToken&& token`)
return asio::async_initiate<CompletionToken, void(json)>(
    [self = shared_from_this()](auto&& handler) {
        // HERE! `handler` is a callable that resumes the coroutine!
        // We can register it somewhere
        callbacks.emplace(command_id, std::forward<decltype(handler)>(handler));
    },
    token // the completion token itself is passed after the initiation
);
To resume the async operation, you simply have to call the continuation handler:
void call_command(json response) {
    if (auto const& command_id = response["command"]; command_id.is_integer()) {
        if (auto const it = callbacks.find(command_id_t{command_id}); it != callbacks.end()) {
            // We found the continuation handler for this command, call it!
            // It resumes the coroutine with the json as its result
            auto const& [id, callback] = *it;
            callback(response["payload"]);
            callbacks.erase(it);
        }
    }
}
Here's (very roughly) how the rest of the system would look:
using command_id_t = std::uint32_t;
// global incrementing command id
command_id_t command_id = 0;
// All callbacks associated with commands (a move-only function wrapper,
// since the handler may not be copyable)
std::unordered_map<command_id_t, moveable_function<void(json)>> callbacks;
auto write_command_to_socket(
    boost::asio::io_context& ioc,
    socket_session session,
    json command
) -> boost::asio::awaitable<json> {
    return boost::asio::async_initiate<boost::asio::use_awaitable_t<> const&, void(json)>(
        [&ioc, session, command](auto&& handler) mutable {
            callbacks.emplace(command_id++, std::forward<decltype(handler)>(handler));
            boost::asio::co_spawn(ioc, session.write(std::move(command)), boost::asio::detached);
        },
        boost::asio::use_awaitable
    );
}
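At the call site, the coroutine version then reads linearly, for example (a sketch, assuming a read loop that eventually calls call_command is already running on the io_context):
// inside some coroutine
json const response = co_await write_command_to_socket(ioc, session, json_request);
// use response just like the old callback's argument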

multithreading problem in boost asio example

I'm developing a TCP service, and I took an example from Boost.Asio to start (https://www.boost.org/doc/libs/1_73_0/doc/html/boost_asio/example/cpp11/chat/chat_server.cpp), and I'm worried about something. As I understand it, any time you want to send something you have to use the deliver function, which checks the status of the write_msgs_ queue and runs some operations on it (in my code write_msgs_ is a queue of std::byte-based structures):
void deliver(const chat_message& msg)
{
    bool write_in_progress = !write_msgs_.empty();
    write_msgs_.push_back(msg);
    if (!write_in_progress)
    {
        do_write();
    }
}
and inside the do_write() function you will see an asynchronous call wrapping a lambda function:
void do_write()
{
    auto self(shared_from_this());
    boost::asio::async_write(socket_,
        boost::asio::buffer(write_msgs_.front().data(),
                            write_msgs_.front().length()),
        [this, self](boost::system::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                write_msgs_.pop_front();
                if (!write_msgs_.empty())
                {
                    do_write();
                }
            }
            else
            {
                room_.leave(shared_from_this());
            }
        });
}
where the call is constantly sending messages until the queue is empty.
Now, as I understand it, boost::asio::async_write makes the lambda thread safe, but since write_msgs_ is also used in the deliver function, which runs outside the isolation given by the io_context, a mutex is needed. Should I put a mutex around each use of the write queue, or is it cheaper to use boost::asio::post() to call the deliver function, so that all access to write_msgs_ happens inside the io_context?
something like this:
boost::asio::io_service srvc; // this somewhere
void deliver2(const chat_message& msg)
{
    srvc.post(std::bind(&chat_session::deliver, this, msg));
}
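For reference, here is a minimal sketch of that post-based approach using the non-deprecated API (boost::asio::io_context and a lambda instead of io_service::post and std::bind; deliver2 and chat_session are the names from the question):
boost::asio::io_context srvc; // this somewhere
void chat_session::deliver2(const chat_message& msg)
{
    // Hop onto the io_context; deliver() then touches write_msgs_
    // only from handlers executed by that context.
    boost::asio::post(srvc, [self = shared_from_this(), msg] { self->deliver(msg); });
}
Note that this removes the race only if the io_context is run from a single thread; with multiple threads you would post to a boost::asio::strand instead.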

C++: wait for all async operations to end

I have started N identical async operations (e.g. N requests to a database), and I need to do something after all of these operations end. How can I do this? (After each async operation ends, my callback will be called.)
I use C++14.
Example
I use boost.asio to write some data to a socket.
for (int i = 0; i < N; ++i)
{
    boost::asio::async_write(
        m_socket,
        boost::asio::buffer(ptr[i], len[i]),
        [this, callback](const boost::system::error_code& ec, std::size_t)
        {
            callback(ec);
        });
}
So I need to know when all my writes end.
First of all, never call async_write in a loop. Each socket may have only one async_write and one async_read outstanding at any one time.
Boost already has provisions for scatter/gather I/O.
This snippet should give you enough information to go on.
Notice that async_write can take a whole sequence of buffers (built below from the vector of vectors), and it will fire the handler exactly once, after all the buffers have been written.
struct myclass {
    boost::asio::ip::tcp::socket m_socket;
    std::vector<std::vector<char>> pending_buffers;
    std::vector<std::vector<char>> writing_buffers;
    void write_all()
    {
        assert(writing_buffers.size() == 0);
        writing_buffers = std::move(pending_buffers);
        // Build a buffer sequence: one const_buffer per chunk.
        std::vector<boost::asio::const_buffer> bufs;
        bufs.reserve(writing_buffers.size());
        for (auto const& b : writing_buffers)
            bufs.push_back(boost::asio::buffer(b));
        boost::asio::async_write(
            m_socket,
            bufs,
            std::bind(&myclass::write_all_handler,
                      this,
                      std::placeholders::_1,
                      std::placeholders::_2));
    }
    void write_all_handler(const boost::system::error_code& ec, size_t bytes_written)
    {
        writing_buffers.clear();
        // send next load of data
        if (pending_buffers.size())
            write_all();
        // call your callback here
    }
};
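Usage could then look like this (a sketch; queue_write is a hypothetical helper, not part of the snippet above):
void queue_write(myclass& c, std::vector<char> data)
{
    c.pending_buffers.push_back(std::move(data));
    // Start a send only if none is currently in flight;
    // otherwise write_all_handler will pick the data up later.
    if (c.writing_buffers.empty())
        c.write_all();
}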

boost asio async_write with shared buffer across multiple threads

Now I have a Connection class as shown below (irrelevant things are omitted):
class Connection : public std::enable_shared_from_this<Connection> {
public:
    virtual void write() {
        socket_->async_write_some(boost::asio::buffer(buffer_.data(),
                                                      buffer_.size()),
                                  std::bind(&Connection::on_written,
                                            shared_from_this(),
                                            std::placeholders::_1,
                                            std::placeholders::_2));
    }
    void on_written(const boost::system::error_code& e, std::size_t length) {
        if (e) {
            // handle error here
            return;
        }
        buffer_.consume(length);
    }
    void add_to_buf(const std::string& data) {
        // add the string data to buffer_ here
    }
private:
    boost::asio::io_service& service_;
    std::unique_ptr<socket> socket_;
    boost::asio::streambuf buffer_;
};
As you can see, the write() operation writes the data in buffer_, and buffer_ is only consumed in the write operation's completion handler. Here is the problem: I have the following invocation code (note: it is multi-threaded):
Connection conn;
// initialization code here
conn.add_to_buf("first ");
conn.write();
conn.add_to_buf("second");
conn.write();
The output I want is first second, but sometimes the output is first first second. That happens when the second write starts before the first completion handler has been called, so the first chunk is still in the buffer and gets sent again. I have read about strand to serialize things; however, as far as I can tell it only serializes tasks, it cannot serialize a completion handler against a task.
Someone may suggest calling the second write operation from the first one's completion handler, but, per the design, this cannot be achieved.
So, any suggestions? Maybe a lock on buffer_?
Locking the buffer per se won't change anything: if you call write before the first write has completed, it will send the same data again. In my opinion the best way is to drop the add_to_buf method and stick to a single write function that does both: it adds data to the buffer and, if necessary, triggers a send.
class Connection : public std::enable_shared_from_this<Connection> {
public:
    virtual void write(const std::string& data) {
        std::lock_guard<std::mutex> l(lock_);
        bool triggerSend = buffer_.size() == 0;
        // add data to buffer
        std::ostream(&buffer_) << data;
        if (triggerSend) {
            do_send_chunk();
        }
    }
    void on_written(const boost::system::error_code& e, std::size_t length) {
        if (e) {
            // handle error here
            return;
        }
        std::lock_guard<std::mutex> l(lock_);
        buffer_.consume(length);
        if (buffer_.size() > 0) {
            do_send_chunk();
        }
    }
private:
    void do_send_chunk() {
        socket_->async_write_some(boost::asio::buffer(buffer_.data(),
                                                      buffer_.size()),
                                  std::bind(&Connection::on_written,
                                            shared_from_this(),
                                            std::placeholders::_1,
                                            std::placeholders::_2));
    }
    boost::asio::io_service& service_;
    std::unique_ptr<socket> socket_;
    boost::asio::streambuf buffer_;
    std::mutex lock_;
};
The idea is that the write function checks whether there is still data left in the buffer. If there is, it does not have to trigger a do_send_chunk call: sooner or later on_written will be called, which will then trigger another do_send_chunk, because the new data stays in the buffer and the if (buffer_.size() > 0) check inside on_written will be true. If, however, there is no data left, it has to trigger a send operation itself.
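With that interface, the invocation code from the question shrinks to two calls and the chunks can no longer interleave (a sketch, assuming the same setup as the question):
auto conn = std::make_shared<Connection>(/* initialization as in the question */);
// shared ownership is needed for shared_from_this() to work
conn->write("first ");  // buffer was empty: triggers the first async send
conn->write("second");  // send in flight: data is only appended, sent from on_written
// the peer always receives "first second"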

Chaining asynchronous Lambdas with Boost.Asio?

I find myself writing code that basically looks like this:
using boost::system::error_code;
socket.async_connect(endpoint, [&](error_code Error)
{
    if (Error)
    {
        print_error(Error);
        return;
    }
    // Read header
    boost::asio::async_read(socket, somebuffer, [&](error_code Error, std::size_t N)
    {
        if (Error)
        {
            print_error(Error);
            return;
        }
        // Read actual data
        boost::asio::async_read(socket, somebuffer, [&](error_code Error, std::size_t N)
        {
            // Same here...
        });
    });
});
So basically I'm nesting callbacks in callbacks in callbacks, while the logic is simple and "linear".
Is there a more elegant way of writing this, so that the code is both local and in-order?
One elegant solution is to use coroutines. Boost.Asio supports both stackless coroutines, which introduce a small set of pseudo-keywords, and stackful coroutines, which use Boost.Coroutine.
Stackless Coroutines
Stackless coroutines introduce a set of pseudo-keyword preprocessor macros that implement a switch statement using a technique similar to Duff's Device. The documentation covers each of the pseudo-keywords in detail.
The original problem (connect->read header->read body) might look something like the following when implemented with stackless coroutines:
struct session
    : boost::asio::coroutine
{
    boost::asio::ip::tcp::socket socket_;
    std::vector<char> buffer_;
    // ...
    void operator()(boost::system::error_code ec = boost::system::error_code(),
                    std::size_t length = 0)
    {
        // In this example we keep the error handling code in one place by
        // hoisting it outside the coroutine. An alternative approach would be to
        // check the value of ec after each yield for an asynchronous operation.
        if (ec)
        {
            print_error(ec);
            return;
        }
        // On reentering a coroutine, control jumps to the location of the last
        // yield or fork. The argument to the "reenter" pseudo-keyword can be a
        // pointer or reference to an object of type coroutine.
        reenter (this)
        {
            // Asynchronously connect. When control resumes at the following line,
            // the error and length parameters reflect the result of
            // the asynchronous operation.
            yield socket_.async_connect(endpoint_, *this);
            // Loop until an error or shutdown occurs.
            while (!shutdown_)
            {
                // Read header data. When control resumes at the following line,
                // the error and length parameters reflect the result of
                // the asynchronous operation.
                buffer_.resize(fixed_header_size);
                yield boost::asio::async_read(socket_, boost::asio::buffer(buffer_), *this);
                // Received data. Extract the size of the body from the header.
                std::size_t body_size = parse_header(buffer_, length);
                // If there is no body size, then leave the coroutine, as an
                // invalid header was received.
                if (!body_size) return;
                // Read body data. When control resumes at the following line,
                // the error and length parameters reflect the result of
                // the asynchronous operation.
                buffer_.resize(body_size);
                yield boost::asio::async_read(socket_, boost::asio::buffer(buffer_), *this);
                // Invoke the user callback to handle the body.
                body_handler_(buffer_, length);
            }
            // Initiate graceful connection closure.
            socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
        } // end reenter
    }
};
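Note that reenter, yield, and fork are preprocessor macros, not language keywords, so the snippet above assumes Asio's coroutine headers are included, roughly:
#include <boost/asio/coroutine.hpp> // boost::asio::coroutine
#include <boost/asio/yield.hpp>     // defines the reenter/yield/fork pseudo-keywords
Asio also ships <boost/asio/unyield.hpp> to undefine the pseudo-keywords again once they are no longer needed.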
Stackful Coroutines
Stackful coroutines are created using the spawn() function. The original problem may look something like the following when implemented with stackful coroutines:
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield)
{
    boost::system::error_code ec;
    boost::asio::ip::tcp::socket socket(io_service);
    // Asynchronously connect and suspend the coroutine. The coroutine will
    // be resumed automatically when the operation completes.
    socket.async_connect(endpoint, yield[ec]);
    if (ec)
    {
        print_error(ec);
        return;
    }
    // Loop until an error or shutdown occurs.
    std::vector<char> buffer;
    while (!shutdown)
    {
        // Read header data.
        buffer.resize(fixed_header_size);
        std::size_t bytes_transferred = boost::asio::async_read(
            socket, boost::asio::buffer(buffer), yield[ec]);
        if (ec)
        {
            print_error(ec);
            return;
        }
        // Extract the size of the body from the header.
        std::size_t body_size = parse_header(buffer, bytes_transferred);
        // If there is no body size, then leave the coroutine, as an invalid
        // header was received.
        if (!body_size) return;
        // Read body data.
        buffer.resize(body_size);
        bytes_transferred = boost::asio::async_read(
            socket, boost::asio::buffer(buffer), yield[ec]);
        if (ec)
        {
            print_error(ec);
            return;
        }
        // Invoke the user callback to handle the body.
        body_handler_(buffer, bytes_transferred);
    }
    // Initiate graceful connection closure.
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
});
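The stackful variant has its own prerequisites: spawn() lives in its own header and is built on Boost.Coroutine, so the program must link against that library. Roughly:
#include <boost/asio/spawn.hpp> // boost::asio::spawn and yield_context
// link with Boost.Coroutine (and Boost.Context), e.g. -lboost_coroutine -lboost_context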