C++: wait for all async operations to end

I have started N of the same async operations (e.g. N requests to a database), and I need to do something after all of these operations have ended. How can I do this? (After each async operation ends, my callback is called.)
I use C++14.
Example
I use Boost.Asio to write some data to a socket:
for (int i = 0; i < N; ++i)
{
    boost::asio::async_write(
        m_socket,
        boost::asio::buffer(ptr[i], len[i]),
        [this, callback](const boost::system::error_code& ec, std::size_t)
        {
            callback(ec);
        });
}
So I need to know when all my writes have ended.

First of all, never call async_write in a loop. Each socket may have only one async_write and one async_read outstanding at any one time.
Boost already has provision for scatter/gather I/O.
This snippet should give you enough information to go on.
Notice that async_write can take a sequence of buffers (here built over the vector of vectors) and it will fire the handler exactly once, after all the buffers have been written.
struct myclass {
    boost::asio::ip::tcp::socket m_socket;
    std::vector<std::vector<char>> pending_buffers;
    std::vector<std::vector<char>> writing_buffers;

    void write_all()
    {
        assert(writing_buffers.size() == 0);
        writing_buffers = std::move(pending_buffers);
        // Build a buffer sequence over the owned data; the underlying
        // vectors must stay alive until the handler fires.
        std::vector<boost::asio::const_buffer> sequence;
        for (auto const& buf : writing_buffers)
            sequence.push_back(boost::asio::buffer(buf));
        boost::asio::async_write(
            m_socket,
            sequence,
            std::bind(&myclass::write_all_handler,
                      this,
                      std::placeholders::_1,
                      std::placeholders::_2));
    }

    void write_all_handler(const boost::system::error_code& ec, size_t bytes_written)
    {
        writing_buffers.clear();
        // send next load of data
        if (pending_buffers.size())
            write_all();
        // call your callback here
    }
};
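If the operations cannot be merged into a single scatter/gather write (e.g. the N database requests mentioned in the question), a common alternative is a shared completion counter that fires the final callback when the last operation finishes. Below is a minimal self-contained sketch of the idea; it assumes Boost >= 1.66 for io_context/post, and the posted lambda merely stands in for an arbitrary async operation:
#include <atomic>
#include <functional>
#include <iostream>
#include <memory>
#include <boost/asio.hpp>

void run_all(boost::asio::io_context& io, int n, std::function<void()> on_all_done)
{
    // One shared counter for all N operations. The atomic keeps the pattern
    // safe even when handlers run on several threads calling io.run().
    auto remaining = std::make_shared<std::atomic<int>>(n);
    for (int i = 0; i < n; ++i)
    {
        boost::asio::post(io, [remaining, on_all_done]
        {
            // The handler that brings the counter to zero fires the
            // final callback exactly once.
            if (remaining->fetch_sub(1) == 1)
                on_all_done();
        });
    }
}

int main()
{
    boost::asio::io_context io;
    run_all(io, 5, [] { std::cout << "all operations finished\n"; });
    io.run();
}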

Related

Boost async: which completion condition to use with async_read on a socket?

I'm using Boost's async_read method to read from a socket (see code below).
Wireshark shows that packets of various size are received on the socket port, some with size between 256B and 512B (we only focus on these packets below).
When I use transfer_all() completion condition, the handler function is never called, as if all packets were buffered forever and never actually read.
On the other hand, if I use for instance transfer_at_least(8) as the completion condition, the handler function is called twice per packet: once with 256B of data, then once with the remainder. This I can understand (I guess the condition is checked every 256B or something).
What I want is to have the handler called once for each packet with the full data, but I cannot find how to do this.
Note: this question (boost::asio::read with completion condition boost::asio::transfer_at_least(1) won't read until EOF) seems to say transfer_all is the way to go, but what is my issue here?
// Extract of .h file
class ClientSocket
{
    ...
    boost::asio::ip::tcp::socket socket_;
    boost::asio::streambuf input_buffer_;
    ...
};

// Extract of .cpp file
void ClientSocket::start_read()
{
    boost::asio::async_read(socket_, input_buffer_,
        boost::asio::transfer_at_least(8),
        boost::bind(&ClientSocket::handle_read, this, _1));
}

void ClientSocket::handle_read(const boost::system::error_code& ec)
{
    if (!ec)
    {
        const auto buffer = input_buffer_.data();
        std::size_t size = input_buffer_.size();
        // ---> the value of 'size' here shows the issue

        // Copy data
        ...

        // Keep reading
        start_read();
    }
}
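One thing worth spelling out here: TCP is a byte stream, so the "packets" seen in Wireshark are not preserved as message boundaries at the socket level, and no completion condition can recover them. With a streambuf, transfer_all() only completes when the stream ends (or the buffer's maximum size is reached), which is why the handler never fires. The usual fix is explicit length-prefixed framing; the sketch below is one hypothetical way to do that (the 4-byte header, the parse_length/handle_body helpers, and the header_/body_ members are illustrative assumptions, not part of the question's code):
void ClientSocket::start_read()
{
    // Read exactly the fixed-size length header first.
    boost::asio::async_read(socket_, boost::asio::buffer(header_),
        [this](const boost::system::error_code& ec, std::size_t)
        {
            if (ec) return;
            const std::size_t body_size = parse_length(header_); // hypothetical
            body_.resize(body_size);
            // Then read exactly body_size bytes of payload.
            boost::asio::async_read(socket_, boost::asio::buffer(body_),
                boost::asio::transfer_exactly(body_size),
                [this](const boost::system::error_code& ec, std::size_t n)
                {
                    if (ec) return;
                    handle_body(body_, n); // hypothetical
                    start_read();          // wait for the next message
                });
        });
}
// Assumed members: std::array<char, 4> header_; std::vector<char> body_;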

Boost Asio: strange behavior when using streambuf and async_write from multiple threads

My program buffers outgoing data instead of calling async_write directly for every small packet: send() should return quickly, a streambuf serves as the send buffer, and data is flushed in larger chunks.
The problem is that when multiple threads call send at the same time, there is a small chance that the peer receives duplicate packets or garbled data. Here is my code:
void ClientConnection::send(const string* buffer, function<void (bool status)> callback) {
    {
        unique_lock<mutex> lck(*_ioLockPtr);
        ostream os(_sendBufferPtr.get());
        os << *buffer;
    }
    delete buffer;
    callback(true);
    _sendBuffer();
}

void ClientConnection::_sendBuffer() {
    unique_lock<mutex> lck(*_ioLockPtr);
    size_t bufferSize = _sendBufferPtr->size();
    if (!bufferSize || _sendingBufferCount > 0) {
        return;
    }
    ++_sendingBufferCount;
    async_write(*_socketPtr, _sendBufferPtr->data(),
                boost::asio::transfer_exactly(bufferSize),
                boost::bind(&ClientConnection::_handleWrite,
                            shared_from_this(),
                            boost::asio::placeholders::error,
                            boost::asio::placeholders::bytes_transferred));
    _sendBufferPtr->consume(bufferSize);
}

void ClientConnection::_handleWrite(const boost::system::error_code& error, size_t bytes_transferred) {
    if (!error) {
        unique_lock<mutex> lck(*_ioLockPtr);
        size_t bufferSize = _sendBufferPtr->size();
        if (bufferSize) {
            async_write(*_socketPtr, _sendBufferPtr->data(),
                        boost::asio::transfer_exactly(bufferSize),
                        boost::bind(&ClientConnection::_handleWrite,
                                    shared_from_this(),
                                    boost::asio::placeholders::error,
                                    boost::asio::placeholders::bytes_transferred));
            _sendBufferPtr->consume(bufferSize);
        } else {
            --_sendingBufferCount;
        }
    } else {
        {
            unique_lock<mutex> lck(*_ioLockPtr);
            --_sendingBufferCount;
        }
        _close();
    }
}
The relevant variables are defined as follows:
shared_ptr<boost::asio::streambuf> _sendBufferPtr;
uint8_t _sendingBufferCount;
Please help me to understand how to solve this problem, thanks!
The problem encountered now is that when multiple threads call send at the same time
This is strictly prohibited as per the documentation:
This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes.
To serialize the async operations, you may want to use a strand.
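A minimal sketch of that idea, assuming Boost >= 1.66 and C++14; the _io and _strand members are illustrative additions, the payload is passed by value instead of by raw pointer, and the other names mirror the question's code. All buffer mutation and write initiation are funneled through the strand, so they can never run concurrently and the mutex becomes unnecessary:
boost::asio::strand<boost::asio::io_context::executor_type> _strand{_io.get_executor()};

void ClientConnection::send(std::string data, function<void (bool status)> callback) {
    boost::asio::post(_strand,
        [self = shared_from_this(), data = std::move(data), callback] {
            ostream os(self->_sendBufferPtr.get());
            os << data;           // append to the streambuf on the strand
            callback(true);
            self->_sendBuffer();  // safe: only ever runs on the strand
        });
}
The completion handler passed to async_write would likewise be wrapped with boost::asio::bind_executor(_strand, ...) so that _handleWrite is also serialized with respect to send.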

How to call a handler?

I don't understand how I can invoke the handler in the case where the io_context was stopped. Minimal example:
void my_class::async_get_one_scan(
    std::function<void(const boost::system::error_code& ec,
                       std::shared_ptr<my_chunked_packet>)> handler)
{
    asio::spawn(strand_, [this, handler] (asio::yield_context yield)
    {
        const auto work = boost::asio::make_work_guard(io_service_);
        my_chunk_buffer chunks;
        while (!chunks.full()) {
            std::array<uint8_t, 1000> datagram;
            boost::system::error_code ec;
            auto size = socket_.async_receive(asio::buffer(datagram), yield[ec]);
            if (!ec)
                process_datagram(datagram, size, chunks);
            else {
                handler(ec, nullptr);
                return;
            }
        }
        io_service_.post(std::bind(handler, boost::system::error_code(), chunks.packet()));
    });
}
Debug asio output:
#asio|1532525798.533266|6*7|strand#01198ff0.dispatch
#asio|1532525798.533266|>7|
#asio|1532525798.533266|>0|
#asio|1532525798.533266|0*8|socket#008e345c.async_receive
#asio|1532525798.533266|<7|
#asio|1532525798.533266|<6|
#asio|1532525799.550640|0|socket#008e34ac.close
#asio|1532525799.550640|0|socket#008e345c.close
#asio|1532525799.551616|~8|
So the last async_receive() #8 is created; after |<6|, io_context.stop() is called, and then I have no idea how to get the error_code from the yield_context to call the handler.
Question #2: is this even a correct way of asynchronously reading chunks of data to collect the whole packet?
By definition, io_context::stop prevents the event loop from executing further handlers. So there's no way to get the error code into the handler, because the handler never gets invoked.
You probably want to have a "soft-stop" function instead, where you stop admitting new async tasks to the io_context and optionally cancel any pending operations.
If pending operations could take too long, you will want to add a deadline timer that forces the cancellation at some threshold time interval.
The usual way to make the run loop exit is by releasing a work object. See https://www.boost.org/doc/libs/1_67_0/doc/html/boost_asio/reference/io_context__work.html
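A minimal sketch of the work-object approach, using the executor_work_guard API from Boost >= 1.66 (the older io_context::work class from the linked documentation behaves the same way):
#include <thread>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_context io;
    // The guard keeps io.run() from returning while no handlers are pending.
    auto work = boost::asio::make_work_guard(io);
    std::thread runner([&io] { io.run(); });

    // ... start async operations; on shutdown, cancel them per object
    // (e.g. socket.cancel()) so their handlers fire with
    // operation_aborted instead of never running ...

    // Soft stop: release the guard and let run() drain the remaining
    // handlers and return on its own, instead of calling io.stop().
    work.reset();
    runner.join();
}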

boost asio async_write with a shared buffer across multiple threads

Now I have a Connection class as shown below (irrelevant things are omitted):
class Connection : public std::enable_shared_from_this<Connection> {
public:
    virtual void write() {
        socket_->async_write_some(boost::asio::buffer(buffer_.data(),
                                                      buffer_.size()),
                                  std::bind(&Connection::on_written,
                                            shared_from_this(),
                                            std::placeholders::_1,
                                            std::placeholders::_2));
    }

    void on_written(const boost::system::error_code& e, std::size_t length) {
        if (e) {
            // handle error here
            return;
        }
        buffer_.consume(length);
    }

    void add_to_buf(const std::string& data) {
        // add the string data to buffer_ here
    }

private:
    boost::asio::io_service& service_;
    std::unique_ptr<socket> socket_;
    boost::asio::streambuf buffer_;
};
As you can see, write() sends the data in buffer_, and buffer_ is only consumed in the write operation's completion handler. However, the problem arises with the following invocation code (note: it is multi-threaded):
Connection conn;
// initialization code here
conn.add_to_buf("first ");
conn.write();
conn.add_to_buf("second");
conn.write();
The output I want is first second, but sometimes the output is first first second. This happens when the second write starts before the first completion handler has been called. I have read about using a strand to serialize things; however, it can only serialize tasks, it cannot serialize a completion handler relative to a task.
Someone may suggest issuing the second write from the first one's completion handler, but, per the design, this cannot be achieved.
So, any suggestions? Maybe a lock on buffer_?
Locking the buffer per se won't change anything. If you call write before the first write has completed, it will send the same data again. In my opinion, the best way is to drop the add_to_buf method and stick to a write function that does both: add data to the buffer and, if necessary, trigger a send.
class Connection : public std::enable_shared_from_this<Connection> {
public:
    virtual void write(const std::string& data) {
        std::lock_guard<std::mutex> l(lock_);
        bool triggerSend = buffer_.size() == 0;
        // add data to buffer
        if (triggerSend) {
            do_send_chunk();
        }
    }

    void on_written(const boost::system::error_code& e, std::size_t length) {
        if (e) {
            // handle error here
            return;
        }
        std::lock_guard<std::mutex> l(lock_);
        buffer_.consume(length);
        if (buffer_.size() > 0) {
            do_send_chunk();
        }
    }

private:
    void do_send_chunk() {
        socket_->async_write_some(boost::asio::buffer(buffer_.data(),
                                                      buffer_.size()),
                                  std::bind(&Connection::on_written,
                                            shared_from_this(),
                                            std::placeholders::_1,
                                            std::placeholders::_2));
    }

    boost::asio::io_service& service_;
    std::unique_ptr<socket> socket_;
    boost::asio::streambuf buffer_;
    std::mutex lock_;
};
The idea is that the write function checks whether there is still data left in the buffer. If there is, it does not have to trigger a do_send_chunk call: sooner or later on_written will be called, which will then cause another do_send_chunk, since the new data stays in the buffer and the if (buffer_.size() > 0) inside on_written will be true. If, however, no data was left, write has to trigger a send operation itself.
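With that design, the invocation code from the question reduces to the following (the connection must be owned by a shared_ptr for shared_from_this() to be valid):
auto conn = std::make_shared<Connection>(/* ... */);
conn->write("first ");
conn->write("second");  // appended under the lock; flushed after "first "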

Chaining asynchronous Lambdas with Boost.Asio?

I find myself writing code that basically looks like this:
using boost::system::error_code;
socket.async_connect(endpoint, [&](error_code Error)
{
    if (Error)
    {
        print_error(Error);
        return;
    }
    // Read header
    boost::asio::async_read(socket, somebuffer, [&](error_code Error, std::size_t N)
    {
        if (Error)
        {
            print_error(Error);
            return;
        }
        // Read actual data
        boost::asio::async_read(socket, somebuffer, [&](error_code Error, std::size_t N)
        {
            // Same here...
        });
    });
});
So basically I'm nesting callbacks in callbacks in callbacks, while the logic is simple and "linear".
Is there a more elegant way of writing this, so that the code is both local and in-order?
One elegant solution is to use coroutines. Boost.Asio supports both stackless coroutines, which introduce a small set of pseudo-keywords, and stackful coroutines, which use Boost.Coroutine.
Stackless Coroutines
Stackless coroutines introduce a set of pseudo-keyword preprocessor macros that implement a switch statement using a technique similar to Duff's Device. The documentation covers each of the keywords in detail.
The original problem (connect->read header->read body) might look something like the following when implemented with stackless coroutines:
struct session
    : boost::asio::coroutine
{
    boost::asio::ip::tcp::socket socket_;
    std::vector<char> buffer_;
    // ...

    void operator()(boost::system::error_code ec = boost::system::error_code(),
                    std::size_t length = 0)
    {
        // In this example we keep the error handling code in one place by
        // hoisting it outside the coroutine. An alternative approach would be to
        // check the value of ec after each yield for an asynchronous operation.
        if (ec)
        {
            print_error(ec);
            return;
        }

        // On reentering a coroutine, control jumps to the location of the last
        // yield or fork. The argument to the "reenter" pseudo-keyword can be a
        // pointer or reference to an object of type coroutine.
        reenter (this)
        {
            // Asynchronously connect. When control resumes at the following line,
            // the error and length parameters reflect the result of
            // the asynchronous operation.
            yield socket_.async_connect(endpoint_, *this);

            // Loop until an error or shutdown occurs.
            while (!shutdown_)
            {
                // Read header data. When control resumes at the following line,
                // the error and length parameters reflect the result of
                // the asynchronous operation.
                buffer_.resize(fixed_header_size);
                yield boost::asio::async_read(socket_, boost::asio::buffer(buffer_), *this);

                // Received data. Extract the size of the body from the header.
                std::size_t body_size = parse_header(buffer_, length);

                // If there is no body size, then leave the coroutine, as an
                // invalid header was received.
                if (!body_size) return;

                // Read body data. When control resumes at the following line,
                // the error and length parameters reflect the result of
                // the asynchronous operation.
                buffer_.resize(body_size);
                yield boost::asio::async_read(socket_, boost::asio::buffer(buffer_), *this);

                // Invoke the user callback to handle the body.
                body_handler_(buffer_, length);
            }

            // Initiate graceful connection closure.
            socket_.shutdown(tcp::socket::shutdown_both, ec);
        } // end reenter
    }
};
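Note that reenter, yield and fork are macros that must be brought into scope explicitly; the conventional pattern is:
#include <boost/asio/coroutine.hpp>
#include <boost/asio/yield.hpp>   // defines the reenter/yield/fork macros
// ... coroutine code using reenter/yield ...
#include <boost/asio/unyield.hpp> // undefines them again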
Stackful Coroutines
Stackful coroutines are created using the spawn() function. The original problem may look something like the following when implemented with stackful coroutines:
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield)
{
    boost::system::error_code ec;
    boost::asio::ip::tcp::socket socket(io_service);

    // Asynchronously connect and suspend the coroutine. The coroutine will
    // be resumed automatically when the operation completes.
    socket.async_connect(endpoint, yield[ec]);
    if (ec)
    {
        print_error(ec);
        return;
    }

    // Loop until an error or shutdown occurs.
    std::vector<char> buffer;
    while (!shutdown)
    {
        // Read header data.
        buffer.resize(fixed_header_size);
        std::size_t bytes_transferred = boost::asio::async_read(socket,
            boost::asio::buffer(buffer), yield[ec]);
        if (ec)
        {
            print_error(ec);
            return;
        }

        // Extract the size of the body from the header.
        std::size_t body_size = parse_header(buffer, bytes_transferred);

        // If there is no body size, then leave the coroutine, as an invalid
        // header was received.
        if (!body_size) return;

        // Read body data.
        buffer.resize(body_size);
        bytes_transferred = boost::asio::async_read(socket,
            boost::asio::buffer(buffer), yield[ec]);
        if (ec)
        {
            print_error(ec);
            return;
        }

        // Invoke the user callback to handle the body.
        body_handler_(buffer, bytes_transferred);
    }

    // Initiate graceful connection closure.
    socket.shutdown(tcp::socket::shutdown_both, ec);
});
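Note that spawn() is declared in <boost/asio/spawn.hpp> and, unlike the header-only stackless variant, requires linking against Boost.Coroutine (and Boost.Context in recent Boost versions).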