I am working on a UDP socket client. I am noticing a memory leak and have tried several things to squash it, but it still persists. In my main, I have a char* that has been malloc'd. I then call the function below to send the data:
void Send(const char* data, const int size) {
Socket.async_send_to(boost::asio::buffer(data, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
If I run this code, it will always leak memory. However, if I comment out the async_send_to call, the memory stays consistent.
I have tried several variations on this (see below), but they all only appear to speed up the memory leak.
A couple of notes: there is a chance that the char* passed to Send may get freed before the call completes. However, in my variations I have taken precautions to handle that.
Variation 1:
void Send(const char* data, const int size) {
char* buf = (char*)malloc(size);
memcpy(buf, data, size);
Socket.async_send_to(boost::asio::buffer(buf, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error, buf));
}
void HandleSendTo(const boost::system::error_code& ec, char* buf) {
free(buf);
}
Variation 2:
class MulticastSender {
char* Buffer;
public:
void Send(const char* data, const int size) {
Buffer = (char*)malloc(size);
memcpy(Buffer, data, size);
Socket.async_send_to(boost::asio::buffer(Buffer, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
void HandleSendTo(const boost::system::error_code& ec) {
free(Buffer);
}
};
However, both variations seem to only speed up the memory leak. I have also tried removing the async_send_to and just calling boost::asio::buffer(data, size), but as has been explained in other questions, the buffer does not own the memory and thus it is up to the user to safely manage it. Any thoughts on what could be causing this issue and how to resolve it?
EDIT 1:
As suggested in the comments, I have preallocated a single buffer (for test purposes), and I never deallocate it; however, the memory leak still persists.
class MulticastSender {
char* Buffer;
const int MaxSize = 16384;
public:
MulticastSender() {
Buffer = (char*)malloc(MaxSize);
}
void Send(const char* data, const int size) {
memcpy(Buffer, data, size);
Socket.async_send_to(boost::asio::buffer(Buffer, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
void HandleSendTo(const boost::system::error_code& ec) {
}
};
EDIT 2:
As requested, here is an MCVE of the problem. In making this, I have also observed an interesting behavior that I will explain below.
#include <cstring>    // std::memset
#include <string>
#include <iostream>
#include <functional>
#include <thread>
#include <unistd.h>   // usleep
#include <boost/asio.hpp>
#include <boost/bind.hpp>
class MulticastSender {
private:
boost::asio::io_service IOService;
const unsigned short Port;
const boost::asio::ip::address Address;
boost::asio::ip::udp::endpoint Endpoint;
boost::asio::ip::udp::socket Socket;
boost::asio::streambuf Buffer;
void HandleSendTo(const boost::system::error_code& ec) {
if(ec) {
std::cerr << "Error writing data to socket: " << ec.message() << '\n';
}
}
void Run() {
IOService.run();
}
public:
MulticastSender(const std::string& address,
const std::string& multicastaddress,
const unsigned short port) : Address(boost::asio::ip::address::from_string(address)),
Port(port),
Endpoint(Address, port),
Socket(IOService, Endpoint.protocol()) {
std::thread runthread(&MulticastSender::Run, this);
runthread.detach();
}
void Send(const char* data, const int size) {
std::ostreambuf_iterator<char> out(&Buffer);
std::copy(data, data + size, out);
Socket.async_send_to(Buffer.data(), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
};
const int SIZE = 8192;
int main() {
MulticastSender sender("127.0.0.1", "239.255.0.0", 30000);
while(true) {
char* data = (char*)malloc(SIZE);
std::memset(data, 0, SIZE);
sender.Send(data, SIZE);
usleep(250);
free(data);
}
}
The above code still produces a memory leak. I should mention that I am running this on CentOS 6.6 with kernel Linux dev 2.6.32-504.el6.x86_64 and running Boost 1.55.0. I am observing this simply by watching the process in top.
However, if I simply move the creation of the MulticastSender into the while loop, I no longer observe the memory leak. I am concerned about the speed of the application though, so this is not a valid option.
Memory is not leaking, as there is still a handle to the allocated memory. However, there will be continual growth because:
The io_service is not running because run() is returning as there is no work. This results in completion handlers being allocated, queued into the io_service, but neither executed nor freed. Additionally, any cleanup that is expected to occur within the completion handler is not occurring. It is worth noting that during the destruction of the io_service, completion handlers will be destroyed and not invoked; hence, one cannot depend on only performing cleanup within the execution of the completion handler. For more details as to when io_service::run() blocks or unblocks, consider reading this question.
The streambuf's input sequence is never being consumed. Each iteration in the main loop will append to the streambuf, which will then send the prior message content and the newly appended data. See this answer for more details on the overall usage of streambuf.
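If the streambuf approach were kept, the completion handler would also need to consume the bytes that were sent, so that the next send does not re-transmit them. A minimal sketch under that assumption, reusing the Socket, Endpoint, and Buffer members from the question and still assuming at most one send in flight at a time:
void Send(const char* data, const int size) {
    std::ostream out(&Buffer);
    out.write(data, size);
    Socket.async_send_to(Buffer.data(), Endpoint,
        boost::bind(&MulticastSender::HandleSendTo, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}
void HandleSendTo(const boost::system::error_code& ec, std::size_t bytes_transferred) {
    // Remove the sent bytes from the input sequence so the next Send()
    // does not send the previous message content again.
    Buffer.consume(bytes_transferred);
}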
A few other points:
The program fails to meet a requirement of async_send_to(), where ownership of the underlying buffer memory is retained by the caller, who must guarantee that it remains valid until the handler is called. In this case, when copying into the streambuf via the ostreambuf_iterator, the streambuf's input sequence is modified and invalidates the buffer returned from streambuf.data().
During shutdown, some form of synchronization will need to occur against threads that are running the io_service. Otherwise, undefined behavior may be invoked.
To resolve these issues, consider:
Using boost::asio::io_service::work to ensure that the io_service object's run() does not exit when there is no work remaining.
Passing ownership of the memory to the completion handler via std::shared_ptr or another class that manages the memory via the resource acquisition is initialization (RAII) idiom. This allows for proper cleanup and meets the buffer-validity requirements of async_send_to().
Not detaching the worker thread, and instead joining it during shutdown.
Here is a complete example based on the original that demonstrates these changes:
#include <chrono>
#include <cstring>   // std::memset
#include <iostream>
#include <memory>    // std::make_shared
#include <string>
#include <thread>
#include <boost/asio.hpp>
class multicast_sender
{
public:
multicast_sender(
const std::string& address,
const std::string& multicast_address,
const unsigned short multicast_port)
: work_(io_service_),
multicast_endpoint_(
boost::asio::ip::address::from_string(multicast_address),
multicast_port),
socket_(io_service_, boost::asio::ip::udp::endpoint(
boost::asio::ip::address::from_string(address),
0 /* any port */))
{
// Start running the io_service. The work_ object will keep
// io_service::run() from returning even if there is no real work
// queued into the io_service.
auto self = this;
work_thread_ = std::thread([self]()
{
self->io_service_.run();
});
}
~multicast_sender()
{
// Explicitly stop the io_service. Queued handlers will not be run.
io_service_.stop();
// Synchronize with the work thread.
work_thread_.join();
}
void send(const char* data, const int size)
{
// Caller may delete the data before the async operation finishes, so copy
// the buffer and tie it to the completion handler's lifetime. Note that the
// completion handler may not run if the io_service is destroyed, but it
// will still be destroyed, so managing the buffer via an RAII object
// (std::shared_ptr) is ideal.
auto buffer = std::make_shared<std::string>(data, size);
socket_.async_send_to(boost::asio::buffer(*buffer), multicast_endpoint_,
[buffer](
const boost::system::error_code& error,
std::size_t bytes_transferred)
{
std::cout << "Wrote " << bytes_transferred << " bytes with " <<
error.message() << std::endl;
});
}
private:
boost::asio::io_service io_service_;
boost::asio::io_service::work work_;
boost::asio::ip::udp::endpoint multicast_endpoint_;
boost::asio::ip::udp::socket socket_;
std::thread work_thread_;
};
const int SIZE = 8192;
int main()
{
multicast_sender sender("127.0.0.1", "239.255.0.0", 30000);
char* data = (char*) malloc(SIZE);
std::memset(data, 0, SIZE);
sender.send(data, SIZE);
free(data);
// Give some time to allow for the async operation to complete
// before shutting down the io_service.
std::this_thread::sleep_for(std::chrono::seconds(2));
}
Output:
Wrote 8192 bytes with Success
The class variation looks better, and you can use boost::asio::streambuf as a buffer for network I/O (it doesn't leak and doesn't need much maintenance).
// The send function
void
send(char const* data, size_t size)
{
    std::ostreambuf_iterator<char> out(&buffer_);
    std::copy(data, data + size, out);
    socket.async_send_to(buffer_.data(), endpoint,
                         std::bind(&multicast_sender::handle_send,
                                   this, std::placeholders::_1));
}
Moving the socket and endpoint inside the class would be a good idea. Also, bear in mind that the async operation may still be outstanding when your object goes out of scope. I would recommend using enable_shared_from_this (boost or std flavours) and passing shared_from_this() instead of this to the bind function.
The whole solution would look like this:
#include <functional>  // std::bind
#include <memory>      // std::enable_shared_from_this
#include <boost/asio.hpp>
class multicast_sender :
public std::enable_shared_from_this<multicast_sender> {
using udp = boost::asio::ip::udp;
udp::socket socket_;
udp::endpoint endpoint_;
boost::asio::streambuf buffer_;
public:
multicast_sender(boost::asio::io_service& io_service, short port,
udp::endpoint const& remote) :
socket_(io_service, udp::endpoint(udp::v4(), port)),
endpoint_(remote)
{
}
void
send(char const* data, size_t size)
{
std::ostreambuf_iterator<char> out(&buffer_);
std::copy(data, data + size, out);
socket_.async_send_to(buffer_.data(), endpoint_,
std::bind( &multicast_sender::handle_send,
shared_from_this(), std::placeholders::_1 ));
}
void
handle_send(boost::system::error_code const& ec)
{
}
};
EDIT
And since you don't have to do anything in the write handler, you can use a lambda (requires C++11) as the completion callback:
// The send function
void
send(char const* data, size_t size)
{
    std::ostreambuf_iterator<char> out(&buffer_);
    std::copy(data, data + size, out);
    socket.async_send_to(buffer_.data(), endpoint,
        [](boost::system::error_code const& ec, std::size_t /*bytes_transferred*/){
            if (ec) {
                std::cerr << "Error sending: " << ec.message() << "\n";
            }
        });
}
I need to print the contents of the boost::asio::streambuf into the log in the async_write() handler, after it was sent with that same async_write(). But although streambuf::size() returns 95 before async_write(), it returns 0 in the async_write() handler, even though the buffer still contains the exact data that was sent. That makes logging impossible, as we don't know the buffer size (how many characters to log).
As far as I understand, the problem is that after async_write() has executed, the internal pointers of the streambuf are changed because of the "write" operation, and the data in the buffer is "invalidated" after being sent. That's why, despite the fact that streambuf::size() returns 95 before async_write(), it returns 0 in the async_write() handler.
I also noticed that the buffer still contains the needed content in the async_write() handler. One could suggest saving the size of the buffer before sending and reusing it in the handler. Still, I assume I cannot rely on the content always being available in the handler, since the streambuf implementation may discard it whenever it sees fit. Intuitively, such an approach feels unreliable.
Are there any workarounds to safely print the buffer content into the log in async_write() handler?
// Message class.
struct MyTcpMsg {
...
public:
// Buffer "getter".
boost::asio::streambuf& buffer();
private:
// Buffer that keeps the data that is going to be sent over the network.
boost::asio::streambuf m_buffer;
};
// Message queue.
std::deque<std::unique_ptr<MyTcpMsg>> m_messagesOut;
//.. fill m_messagesOut queue with messages... (code omitted)
// Code that executes sending the message.
// Attempting to log the sent data in the handler.
auto& msgStream = m_messagesOut.front()->buffer();
// msgStream.size() is 95 here.
boost::asio::async_write(
getSocket(), msgStream,
[this](boost::system::error_code ec, std::size_t length) {
if (!ec) {
auto& msgStreamOut = m_messagesOut.front()->buffer();
// Log important info. But the buffer size
// in (msgStreamOut.size()) is 0 now, which makes logging impossible,
// although the data is still present in the buffer.
printBufferToLog(msgStreamOut, msgStreamOut.size());
}
});
Thanks in advance
Yeah. You correctly understood the way DynamicBuffer operates. If you don't want that, use a non-dynamic buffer or sequence of buffers.
The good news is that you can get a buffer sequence instance from the streambuf with no effort at all:
auto& sb = m_messagesOut.front()->buffer();
asio::const_buffers_1 buf = sb.data();
So you can update your code:
void stub_send_loop() {
auto& sb = m_messagesOut.front()->buffer();
asio::const_buffers_1 buf = sb.data();
async_write(getSocket(), buf, [=, &sb](error_code ec, size_t length) {
if (!ec) {
// Log important info
(std::cout << "Wrote : ").write(buffer_cast<char const*>(buf), length) << std::endl;
// update stream
sb.consume(length);
}
});
}
Side-note: The exact type of buf is a bit of an implementation detail. I recommend depending on it only indirectly, to make sure that the implementation of the streambuf's buffer sequence is guaranteed to be a single buffer. async_write doesn't care, but your logging code might (as shown). See also is it safe to use boost::asio::streambuf as both an istream and an array as string_view?
Live On Coliru
#include <boost/asio.hpp>
#include <deque>
#include <iostream>
#include <memory>  // std::unique_ptr, std::make_unique
namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;
struct MyTcpMsg {
asio::streambuf& buffer() { return m_buffer; }
template <typename... T> MyTcpMsg(T const&... args) {
(std::ostream(&m_buffer) << ... << args);
}
private:
asio::streambuf m_buffer;
};
struct X {
asio::io_context io;
tcp::socket sock_{io};
std::deque<std::unique_ptr<MyTcpMsg>> m_messagesOut;
X() {
m_messagesOut.push_back(std::make_unique<MyTcpMsg>("Hello world: ", 21 * 2, "."));
m_messagesOut.push_back(std::make_unique<MyTcpMsg>("Bye"));
};
tcp::socket& getSocket() {
if (!sock_.is_open())
sock_.connect({{}, 7878});
return sock_;
}
void stub_send_loop() {
auto& sb = m_messagesOut.front()->buffer();
asio::const_buffers_1 buf = sb.data();
async_write(getSocket(), buf, [=, &sb](error_code ec, size_t length) {
if (!ec) {
// Log important info
(std::cout << "Wrote : ").write(buffer_cast<char const*>(buf), length) << std::endl;
// update stream
sb.consume(length);
}
});
}
};
int main() {
X x;
x.stub_send_loop();
}
Side Note
I think you might want to rethink your design a little. Likely, the use of streambuf is a pessimization. You could "just" return a buffer sequence, which may allow you to avoid allocation. Also, the fact that you expose it by mutable reference (via a quasi-"getter") breaks encapsulation.
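As a rough sketch of that idea (the type and member names here are hypothetical, not taken from the question's code), the message could own its bytes and expose only a read-only buffer view, whose size stays known both before and after the write:
#include <string>
#include <boost/asio.hpp>

// Hypothetical alternative: the message owns a plain byte container and
// hands out a const buffer view instead of a mutable streambuf reference.
struct FixedTcpMsg {
    explicit FixedTcpMsg(std::string payload) : m_payload(std::move(payload)) {}

    // Usable directly as the buffer argument of async_write; the payload is
    // not consumed by the write, so it can still be logged in the handler.
    boost::asio::const_buffer payload() const { return boost::asio::buffer(m_payload); }
    std::size_t size() const { return m_payload.size(); }

private:
    std::string m_payload;
};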
I am struggling to understand why my quite simple UDP receiver is getting a heap-use-after-free error (diagnosed by ASAN). The idea is to listen on a configurable number of local ports for incoming packets.
Here is a simplified version of the class:
UdpReceiver.hpp
class UdpReceiver
{
public:
UdpReceiver(std::vector<int> listen_ports);
void run();
protected:
boost::asio::io_service m_io;
char m_receive_buffer[MAX_RECEIVE_LENGTH];
std::vector<udp::endpoint> m_endpoints;
std::vector<udp::socket> m_sockets;
void handleUdpData(const boost::system::error_code& error, size_t bytes_recvd, int idx);
};
UdpReceiver.cpp
UdpReceiver::UdpReceiver(std::vector<int> listen_ports) :
m_io()
{
int idx = 0;
try {
for (auto port: listen_ports) {
m_endpoints.push_back(udp::endpoint(udp::v4(), port));
m_sockets.push_back(udp::socket(m_io, m_endpoints[idx]));
m_sockets[idx].async_receive_from(
boost::asio::buffer(m_receive_buffer, MAX_RECEIVE_LENGTH), m_endpoints[idx],
boost::bind(&UdpReceiver::handleUdpData, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred,
idx)
);
idx++;
}
} catch(const std::exception &exc)
{
std::cerr <<exc.what();
exit(-1);
}
}
According to ASAN, m_endpoints.push_back(udp::endpoint(udp::v4(), port)) allocates some dynamic memory which is then freed again by a later iteration. This eventually gives me a use-after-free, which messes up my application in an unpredictable way.
I can't really understand why the use of std::vector does not work in this case. Any ideas?
The documentation for async_receive_from says: "Ownership of the sender_endpoint object is retained by the caller, which must guarantee that it is valid until the handler is called."
Your push_backs may reallocate the underlying storage, leaving async_receive_from with a dangling reference.
To avoid reallocation, reserve space for the necessary number of elements before entering the loop:
m_endpoints.reserve(listen_ports.size());
m_sockets.reserve(listen_ports.size());
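Applied to the constructor from the question, a rough sketch looks like this (same members as declared in UdpReceiver.hpp, keeping the question's single shared receive buffer; the reserve calls are the actual fix, and error handling is omitted):
UdpReceiver::UdpReceiver(std::vector<int> listen_ports) :
    m_io()
{
    // Reserve up front so push_back never reallocates and the endpoint
    // references handed to async_receive_from stay valid.
    m_endpoints.reserve(listen_ports.size());
    m_sockets.reserve(listen_ports.size());

    int idx = 0;
    for (auto port : listen_ports) {
        m_endpoints.push_back(udp::endpoint(udp::v4(), port));
        m_sockets.push_back(udp::socket(m_io, m_endpoints[idx]));
        m_sockets[idx].async_receive_from(
            boost::asio::buffer(m_receive_buffer, MAX_RECEIVE_LENGTH), m_endpoints[idx],
            boost::bind(&UdpReceiver::handleUdpData, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred,
                        idx));
        idx++;
    }
}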
I have an async UDP server with boost::asio,
but the problem is:
if I launch it on a thread, the server won't work,
but if I launch it on the main thread (blocking with the service), it works.
I've tried to do it with a fork, but that's not working either.
class Server {
private:
boost::asio::io_service _IO_service;
boost::shared_ptr<boost::asio::ip::udp::socket> _My_socket;
boost::asio::ip::udp::endpoint _His_endpoint;
boost::array<char, 1000> _My_Buffer;
private:
void Handle_send(const boost::system::error_code& error, size_t size, std::string msg) {
//do stuff
};
void start_send(std::string msg) {
_My_socket->async_send_to(boost::asio::buffer(msg), _His_endpoint,
boost::bind(&Server::Handle_send, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred, msg));
};
void Handle_receive(const boost::system::error_code& error, size_t size) {
//do stuff
};
void start_receive(void) {
_My_socket->async_receive_from(
boost::asio::buffer(_My_Buffer), _His_endpoint,
boost::bind(&Server::Handle_receive, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
public:
Server(int port):
_IO_service(),
_My_socket(boost::make_shared<boost::asio::ip::udp::socket>(_IO_service, \
boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port)))
{
start_receive();
};
void Launch() {
_IO_service.run();
};
};
The objective is to call Server::Launch in the background.
First of all, you have undefined behaviour in start_send.
async_send_to returns immediately, so msg, being a local variable, is destroyed when start_send returns. When you call async_send_to, you must ensure that msg is not destroyed before the asynchronous operation has completed, as described in the documentation:
Although the buffers object may be copied as necessary, ownership of
the underlying memory blocks is retained by the caller, which must
guarantee that they remain valid until the handler is called.
You can resolve it in many ways; the easiest is to use a string data member as the buffer for the data being sent:
class Server {
//..
std::string _M_toSend; // buffer for the data being sent
//
void start_send(std::string msg) {
_M_toSend = msg; // store msg into buffer for sending
_My_socket->async_send_to(boost::asio::buffer(_M_toSend), _His_endpoint,
boost::bind(&Server::Handle_send, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred,
_M_toSend));
};
Another solution is to wrap msg in a smart pointer to extend its lifetime:
void Handle_send(const boost::system::error_code& error, size_t size,
boost::shared_ptr<std::string> msg) {
//do stuff
};
void start_send(std::string msg) {
boost::shared_ptr<std::string> msg2 = boost::make_shared<std::string>(msg); // [1]
_My_socket->async_send_to(boost::asio::buffer(*msg2), _His_endpoint,
boost::bind(&Server::Handle_send, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred,
msg2)); // [2]
};
At line [1] we create a shared_ptr that takes the msg content; then at line [2] the shared_ptr's reference counter is increased when bind is called, so the string's lifetime is extended and it is destroyed only after the handler has been called.
Regarding your non-working version based on a thread: you didn't show the code where Launch is called, but maybe you just don't join the thread?
Server s(3456);
boost::thread th(&Server::Launch,&s);
th.join(); // are you calling this line?
Or perhaps your code doesn't work because of the UB in start_send.
I am trying to send a Google Protobuf message over a boost::asio socket via TCP. I recognize that TCP is a streaming protocol and thus I am performing length-prefixing on the messages before they go through the socket. I have the code working, but it only appears to work some of the time, even though I'm repeating the same calls and not changing the environment. On occasion I will receive the following error:
[libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "xxx" because it is missing required fields: Name, ApplicationType, MessageType
The reason is easy to understand, but I cannot single out why this only occurs sometimes and parses just fine the majority of the time. It is very easy to duplicate the error by just having a single client talking to the server and simply restarting the processes.
Below are the socket code snippets.
const int TCP_HEADER_SIZE = 8;
Sender:
bool Write(const google::protobuf::MessageLite& proto) {
char header[TCP_HEADER_SIZE];
int size = proto.ByteSize();
char data[TCP_HEADER_SIZE + size];
sprintf(data, "%i", size);
proto.SerializeToArray(data+TCP_HEADER_SIZE, size);
boost::asio::async_write(Socket,
boost::asio::buffer(data, TCP_HEADER_SIZE + size),
boost::bind(&TCPSender::WriteHandler,
this, _1, _2));
}
Receiver:
std::array<char, TCP_HEADER_SIZE> Header;
std::array<char, 8192> Bytes;
void ReadHandler(const boost::system::error_code &ec,
std::size_t bytes_transferred) {
if(!ec) {
int msgsize = atoi(Header.data());
if(msgsize > 0) {
boost::asio::read(Socket, boost::asio::buffer(Bytes,static_cast<std::size_t>(msgsize)));
ReadFunc(Bytes.data(), msgsize);
}
boost::asio::async_read(Socket, boost::asio::buffer(Header, TCP_HEADER_SIZE),
boost::bind(&TCPReceiver::ReadHandler, this, _1, _2));
}
else {
std::cerr << "Server::ReadHandler::" << ec.message() << '\n';
}
}
ReadFunc:
void HandleIncomingData(const char *data, const std::size_t size) {
xxx::messaging::CMSMessage proto;
proto.ParseFromArray(data, static_cast<int>(size));
}
I should mention that I need this to be as fast as possible, so any optimizations would be very much appreciated as well.
The program invokes undefined behavior as it fails to meet a lifetime requirement for boost::asio::async_write()'s buffers parameter:
[...] ownership of the underlying memory blocks is retained by the caller, which must guarantee that they remain valid until the handler is called.
Within the Write() function, boost::asio::async_write() returns immediately, so data can go out of scope before the asynchronous write operation has completed. To resolve this, consider extending the lifetime of the underlying buffer, such as by associating the buffer with the operation and performing cleanup in the handler, or by making the buffer a data member of TCPSender. A sketch of the first approach follows.
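For illustration, here is a sketch of the first option, where a heap-allocated buffer is captured by the completion handler. This is an assumed rewrite of Write(), not the poster's actual code; the fixed-width "%07i" length prefix is also an assumption (it keeps the header at exactly TCP_HEADER_SIZE bytes), and it needs <memory>, <vector>, and <cstdio>:
bool Write(const google::protobuf::MessageLite& proto) {
    const int size = proto.ByteSize();
    // Heap-allocate header + payload; the shared_ptr keeps the bytes alive
    // until the completion handler (which captures it) has run.
    auto data = std::make_shared<std::vector<char>>(TCP_HEADER_SIZE + size);
    std::snprintf(data->data(), TCP_HEADER_SIZE, "%07i", size); // assumed fixed-width prefix
    proto.SerializeToArray(data->data() + TCP_HEADER_SIZE, size);

    boost::asio::async_write(Socket,
        boost::asio::buffer(*data),
        [data](const boost::system::error_code& ec, std::size_t /*bytes_transferred*/) {
            if (ec) {
                std::cerr << "Write failed: " << ec.message() << '\n';
            }
            // 'data' is released here, after the write has completed.
        });
    return true;
}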
I have a Connection class as shown below (irrelevant parts are omitted):
class Connection : public std::enable_shared_from_this<Connection> {
public:
virtual void write() {
socket_->async_write_some(boost::asio::buffer(buffer_.data(),
buffer_.size()),
std::bind(&Connection::on_written,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
}
void on_written(const boost::system::error_code& e, std::size_t length) {
if(e) {
// handle error here
return;
}
buffer_.consume(length);
}
void add_to_buf(const std::string& data) {
// add the string data to buffer_ here
}
private:
boost::asio::io_service& service_;
std::unique_ptr<socket> socket_;
boost::asio::streambuf buffer_;
};
As you can see, the write() operation sends the data in buffer_, and buffer_ is only cleaned up in the write operation's completion handler. However, here is the problem; I have the following invocation code (note: it is multi-threaded):
Connection conn;
// initialization code here
conn.add_to_buf("first ");
conn.write();
conn.add_to_buf("second");
conn.write();
The output I want is "first second", but sometimes the output is "first first second". It happens when the second write starts before the first completion handler has been called. I have read about strand for serializing things; however, it can only serialize tasks, and cannot serialize a completion handler and a task.
Someone may suggest calling the second write operation from the first one's completion handler, but, per the design, this cannot be done.
So, any suggestions? Maybe a lock on buffer_?
Locking the buffer per se won't change anything. If you call write before the first write has completed, it will send the same data again. In my opinion, the best way is to drop the add_to_buf method and stick to a write function that does both: add data to the buffer and, if necessary, trigger a send.
class Connection : public std::enable_shared_from_this<Connection> {
public:
virtual void write(const std::string& data) {
std::lock_guard<std::mutex> l(lock_);
bool triggerSend = buffer_.size() == 0;
// append the data to the streambuf
std::ostream os(&buffer_);
os << data;
if (triggerSend) {
do_send_chunk();
}
}
void on_written(const boost::system::error_code& e, std::size_t length) {
if (e) {
// handle error here
return;
}
std::lock_guard<std::mutex> l(lock_);
buffer_.consume(length);
if (buffer_.size() > 0) {
do_send_chunk();
}
}
private:
void do_send_chunk() {
socket_->async_write_some(boost::asio::buffer(buffer_.data(),
buffer_.size()),
std::bind(&Connection::on_written,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
}
boost::asio::io_service& service_;
std::unique_ptr<socket> socket_;
boost::asio::streambuf buffer_;
std::mutex lock_;
};
The idea is that the write function checks whether there is still data left in the buffer. In that case it does not have to trigger a do_send_chunk call, because sooner or later on_written will be called, which will then cause another do_send_chunk, since the new data stays in the buffer and the if (buffer_.size() > 0) check will be true inside on_written. If, however, there is no data left, it has to trigger a send operation.
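With that change, the calling code from the question no longer needs add_to_buf; for example (constructor arguments elided, and a shared_ptr is used because the class relies on shared_from_this()):
auto conn = std::make_shared<Connection>(/* ... */);
conn->write("first ");   // buffer was empty, so this triggers a send
conn->write("second");   // only appended; sent from on_written once "first " completes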