Boost.Asio async_write with a shared buffer across multiple threads - C++

I have a Connection class as shown below (irrelevant parts are omitted):
class Connection : public std::enable_shared_from_this<Connection> {
public:
    virtual void write() {
        socket_->async_write_some(boost::asio::buffer(buffer_.data(), buffer_.size()),
                                  std::bind(&Connection::on_written,
                                            shared_from_this(),
                                            std::placeholders::_1,
                                            std::placeholders::_2));
    }

    void on_written(const boost::system::error_code& e, std::size_t length) {
        if (e) {
            // handle error here
            return;
        }
        buffer_.consume(length);
    }

    void add_to_buf(const std::string& data) {
        // add the string data to buffer_ here
    }

private:
    boost::asio::io_service& service_;
    std::unique_ptr<socket> socket_;
    boost::asio::streambuf buffer_;
};
As you can see, write() sends the data in buffer_, and buffer_ is only consumed in the write operation's completion handler. Here is the problem: I have the following invocation code (note: it is multi-threaded):
Connection conn;
// initialization code here
conn.add_to_buf("first ");
conn.write();
conn.add_to_buf("second");
conn.write();
The output I want is "first second", but sometimes the output is "first first second". This happens when the second write starts before the first completion handler has been called. I have read about strand for serializing things; however, it can only serialize tasks, it cannot serialize a completion handler and a task.
Someone may suggest calling the second write operation from the first one's completion handler, but, per the design, this cannot be done.
So, any suggestions? Maybe a lock on buffer_?

Locking the buffer per se won't change anything. If you call write() before the first write has completed, it will send the same data again. In my opinion the best way is to drop the add_to_buf method and stick to a single write function that does both: add data to the buffer and, if necessary, trigger a send.
class Connection : public std::enable_shared_from_this<Connection> {
public:
    virtual void write(const std::string& data) {
        std::lock_guard<std::mutex> l(lock_);
        bool triggerSend = buffer_.size() == 0;
        // add data to buffer
        if (triggerSend) {
            do_send_chunk();
        }
    }

    void on_written(const boost::system::error_code& e, std::size_t length) {
        if (e) {
            // handle error here
            return;
        }
        std::lock_guard<std::mutex> l(lock_);
        buffer_.consume(length);
        if (buffer_.size() > 0) {
            do_send_chunk();
        }
    }

private:
    void do_send_chunk() {
        socket_->async_write_some(boost::asio::buffer(buffer_.data(), buffer_.size()),
                                  std::bind(&Connection::on_written,
                                            shared_from_this(),
                                            std::placeholders::_1,
                                            std::placeholders::_2));
    }

    boost::asio::io_service& service_;
    std::unique_ptr<socket> socket_;
    boost::asio::streambuf buffer_;
    std::mutex lock_;
};
The idea is that the write function checks whether there is still data left in the buffer. If there is, it does not have to trigger a do_send_chunk call: sooner or later on_written will be called, the new data will still be in the buffer, and the if (buffer_.size() > 0) check inside on_written will cause another do_send_chunk. If, however, no data was left, write has to trigger the send operation itself.
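For completeness, one way to fill in the "// add data to buffer" placeholder above is to write into the streambuf through a std::ostream. This is only a minimal sketch of write() with that step filled in; any method that appends to buffer_'s input sequence works just as well:

virtual void write(const std::string& data) {
    std::lock_guard<std::mutex> l(lock_);
    bool triggerSend = buffer_.size() == 0;
    std::ostream os(&buffer_);  // wraps the streambuf (needs <ostream> or <iostream>)
    os << data;                 // appends the string, growing buffer_.size()
    if (triggerSend) {
        do_send_chunk();
    }
}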

Related

UDP server with Boost not working in a separate thread, but working on the main thread

I have an async UDP server with boost::asio,
but the problem is:
if I launch it on a thread, the server won't work,
but if I launch it on the main thread (blocking on the service), it works.
I've tried to do it with a fork, but that doesn't work either.
class Server {
private:
    boost::asio::io_service _IO_service;
    boost::shared_ptr<boost::asio::ip::udp::socket> _My_socket;
    boost::asio::ip::udp::endpoint _His_endpoint;
    boost::array<char, 1000> _My_Buffer;

private:
    void Handle_send(const boost::system::error_code& error, size_t size, std::string msg) {
        //do stuff
    };
    void start_send(std::string msg) {
        _My_socket->async_send_to(boost::asio::buffer(msg), _His_endpoint,
            boost::bind(&Server::Handle_send, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred, msg));
    };
    void Handle_receive(const boost::system::error_code& error, size_t size) {
        //do stuff
    };
    void start_receive(void) {
        _My_socket->async_receive_from(
            boost::asio::buffer(_My_Buffer), _His_endpoint,
            boost::bind(&Server::Handle_receive, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

public:
    Server(int port):
        _IO_service(),
        _My_socket(boost::make_shared<boost::asio::ip::udp::socket>(_IO_service,
            boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port)))
    {
        start_receive();
    };
    void Launch() {
        _IO_service.run();
    };
};
The objective is to call Server::Launch in the background.
First of all, you have undefined behaviour in start_send.
async_send_to returns immediately, so msg, being a local variable, is destroyed when start_send returns. When you call async_send_to you must ensure that msg is not destroyed before the asynchronous operation has completed. This is described in the documentation:
Although the buffers object may be copied as necessary, ownership of the underlying memory blocks is retained by the caller, which must guarantee that they remain valid until the handler is called.
You can resolve this in many ways; the easiest is to use a string data member as the buffer for the data being sent:
class Server {
    //..
    std::string _M_toSend;
    //
    void start_send(std::string msg) {
        _M_toSend = msg; // store msg in the member buffer used for sending
        _My_socket->async_send_to(boost::asio::buffer(_M_toSend), _His_endpoint,
            boost::bind(&Server::Handle_send, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred,
                        _M_toSend));
    };
Another solution is to wrap msg in a smart pointer to extend its lifetime:
void Handle_send(const boost::system::error_code& error, size_t size,
                 boost::shared_ptr<std::string> msg) {
    //do stuff
};
void start_send(std::string msg) {
    boost::shared_ptr<std::string> msg2 = boost::make_shared<std::string>(msg); // [1]
    _My_socket->async_send_to(boost::asio::buffer(*msg2), _His_endpoint,
        boost::bind(&Server::Handle_send, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred,
                    msg2)); // [2]
};
On line [1] we create a shared_ptr that takes a copy of msg's content; on line [2] the shared_ptr's reference counter is increased when bind is called, so the string's lifetime is extended and it is destroyed only after the handler has been called.
Regarding your non-working version based on a thread: you didn't show the code where Launch is called, but maybe you just don't join the thread?
Server s(3456);
boost::thread th(&Server::Launch,&s);
th.join(); // are you calling this line?
Or perhaps your code fails because of the undefined behaviour in start_send.

C++ Boost ASIO async_send_to memory leak

I am working on a UDP socket client. I am noticing a memory leak and I've tried several things to squash it, but it still persists. In my main function I have a char* that has been malloc'd. I then call the function below to send the data:
void Send(const char* data, const int size) {
    Socket.async_send_to(boost::asio::buffer(data, size), Endpoint,
        boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
If I run this code, it always leaks memory. However, if I comment out the async_send_to call, memory usage stays consistent.
I have tried several variations (see below), but they all only appear to speed up the memory leak.
A couple of notes: there is a chance that the char* passed to Send gets freed before the call completes. However, in my variations I have taken precautions to handle that.
Variation 1:
void Send(const char* data, const int size) {
    char* buf = (char*)malloc(size);
    memcpy(buf, data, size);
    Socket.async_send_to(boost::asio::buffer(buf, size), Endpoint,
        boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error, buf));
}
void HandleSendTo(const boost::system::error_code& ec, char* buf) {
    free(buf);
}
Variation 2:
class MulticastSender {
    char* Buffer;
public:
    void Send(const char* data, const int size) {
        Buffer = (char*)malloc(size);
        memcpy(Buffer, data, size);
        Socket.async_send_to(boost::asio::buffer(Buffer, size), Endpoint,
            boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
    }
    void HandleSendTo(const boost::system::error_code& ec) {
        free(Buffer);
    }
};
However, both variations only seem to speed up the memory leak. I have also tried removing the async_send_to and just calling boost::asio::buffer(data, size), but as has been explained in other questions, the buffer does not own the memory, so it is up to the user to manage it safely. Any thoughts on what could be causing this issue and how to resolve it?
EDIT 1:
As suggested in the comments, I have preallocated a single buffer (for test purposes) and I never deallocate it; however, the memory leak still persists.
class MulticastSender {
    char* Buffer;
    const int MaxSize = 16384;
public:
    MulticastSender() {
        Buffer = (char*)malloc(MaxSize);
    }
    void Send(const char* data, const int size) {
        memcpy(Buffer, data, size);
        Socket.async_send_to(boost::asio::buffer(Buffer, size), Endpoint,
            boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
    }
    void HandleSendTo(const boost::system::error_code& ec) {
    }
};
EDIT 2:
As requested, here is an MCVE of the problem. In making it I have also observed an interesting behavior that I will explain below.
#include <string>
#include <iostream>
#include <functional>
#include <thread>
#include <cstring>   // std::memset
#include <cstdlib>   // malloc / free
#include <unistd.h>  // usleep
#include <boost/asio.hpp>
#include <boost/bind.hpp>

class MulticastSender {
private:
    boost::asio::io_service IOService;
    const unsigned short Port;
    const boost::asio::ip::address Address;
    boost::asio::ip::udp::endpoint Endpoint;
    boost::asio::ip::udp::socket Socket;
    boost::asio::streambuf Buffer;

    void HandleSendTo(const boost::system::error_code& ec) {
        if (ec) {
            std::cerr << "Error writing data to socket: " << ec.message() << '\n';
        }
    }
    void Run() {
        IOService.run();
    }

public:
    MulticastSender(const std::string& address,
                    const std::string& multicastaddress,
                    const unsigned short port)
        : Address(boost::asio::ip::address::from_string(address)),
          Port(port),
          Endpoint(Address, port),
          Socket(IOService, Endpoint.protocol()) {
        std::thread runthread(&MulticastSender::Run, this);
        runthread.detach();
    }
    void Send(const char* data, const int size) {
        std::ostreambuf_iterator<char> out(&Buffer);
        std::copy(data, data + size, out);
        Socket.async_send_to(Buffer.data(), Endpoint,
            boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
    }
};

const int SIZE = 8192;

int main() {
    MulticastSender sender("127.0.0.1", "239.255.0.0", 30000);
    while (true) {
        char* data = (char*)malloc(SIZE);
        std::memset(data, 0, SIZE);
        sender.Send(data, SIZE);
        usleep(250);
        free(data);
    }
}
The above code still produces a memory leak. I should mention that I am running this on CentOS 6.6 with kernel Linux dev 2.6.32-504.el6.x86_64 and running Boost 1.55.0. I am observing this simply by watching the process in top.
However, if I simply move the creation of the MulticastSender into the while loop, I no longer observe the memory leak. I am concerned about the speed of the application though, so this is not a valid option.
Memory is not leaking, as there is still a handle to the allocated memory. However, there will be continual growth because:
The io_service is not running because run() is returning as there is no work. This results in completion handlers being allocated, queued into the io_service, but neither executed nor freed. Additionally, any cleanup that is expected to occur within the completion handler is not occurring. It is worth noting that during the destruction of the io_service, completion handlers will be destroyed and not invoked; hence, one cannot depend on only performing cleanup within the execution of the completion handler. For more details as to when io_service::run() blocks or unblocks, consider reading this question.
The streambuf's input sequence is never being consumed. Each iteration in the main loop will append to the streambuf, which will then send the prior message content and the newly appended data. See this answer for more details on the overall usage of streambuf.
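To illustrate that second point: if the streambuf were kept as the send buffer, the completion handler would also need to consume whatever was sent. A minimal sketch, which assumes the handler is changed to receive bytes_transferred (the MCVE's bind would also need boost::asio::placeholders::bytes_transferred added):

void HandleSendTo(const boost::system::error_code& ec, std::size_t bytes_transferred) {
    if (!ec) {
        Buffer.consume(bytes_transferred);  // remove the sent bytes from the input sequence
    }
}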
A few other points:
The program fails to meet a requirement of async_send_to(), where ownership of the underlying buffer memory is retained by the caller, who must guarantee that it remains valid until the handler is called. In this case, when copying into the streambuf via the ostreambuf_iterator, the streambuf's input sequence is modified and invalidates the buffer returned from streambuf.data().
During shutdown, some form of synchronization will need to occur against threads that are running the io_service. Otherwise, undefined behavior may be invoked.
To resolve these issues, consider:
Using boost::asio::io_service::work to ensure that the io_service object's run() does not exit when there is no work remaining.
Passing ownership of the memory to the completion handler via std::shared_ptr or another class that manages the memory via the resource acquisition is initialization (RAII) idiom. This allows for proper cleanup and meets the requirements on buffer validity for async_send_to().
Joining the worker thread rather than detaching it.
Here is a complete example based on the original that demonstrates these changes:
#include <string>
#include <iostream>
#include <thread>
#include <chrono>
#include <cstring>
#include <cstdlib>
#include <memory>
#include <boost/asio.hpp>

class multicast_sender
{
public:
    multicast_sender(
        const std::string& address,
        const std::string& multicast_address,
        const unsigned short multicast_port)
        : work_(io_service_),
          multicast_endpoint_(
              boost::asio::ip::address::from_string(multicast_address),
              multicast_port),
          socket_(io_service_, boost::asio::ip::udp::endpoint(
              boost::asio::ip::address::from_string(address),
              0 /* any port */))
    {
        // Start running the io_service. The work_ object will keep
        // io_service::run() from returning even if there is no real work
        // queued into the io_service.
        auto self = this;
        work_thread_ = std::thread([self]()
        {
            self->io_service_.run();
        });
    }

    ~multicast_sender()
    {
        // Explicitly stop the io_service. Queued handlers will not be run.
        io_service_.stop();
        // Synchronize with the work thread.
        work_thread_.join();
    }

    void send(const char* data, const int size)
    {
        // The caller may delete the data before the async operation finishes,
        // so copy the buffer and associate it with the completion handler's
        // lifetime. Note that the completion handler may not run if the
        // io_service is destroyed first, but the handler object itself will
        // be destroyed, so managing the buffer via a RAII object
        // (std::shared_ptr) is ideal.
        auto buffer = std::make_shared<std::string>(data, size);
        socket_.async_send_to(boost::asio::buffer(*buffer), multicast_endpoint_,
            [buffer](
                const boost::system::error_code& error,
                std::size_t bytes_transferred)
            {
                std::cout << "Wrote " << bytes_transferred << " bytes with " <<
                    error.message() << std::endl;
            });
    }

private:
    boost::asio::io_service io_service_;
    boost::asio::io_service::work work_;
    boost::asio::ip::udp::endpoint multicast_endpoint_;
    boost::asio::ip::udp::socket socket_;
    std::thread work_thread_;
};

const int SIZE = 8192;

int main()
{
    multicast_sender sender("127.0.0.1", "239.255.0.0", 30000);
    char* data = (char*) malloc(SIZE);
    std::memset(data, 0, SIZE);
    sender.send(data, SIZE);
    free(data);
    // Give some time to allow the async operation to complete
    // before shutting down the io_service.
    std::this_thread::sleep_for(std::chrono::seconds(2));
}
Output:
Wrote 8192 bytes with Success
The class variation looks better, and you can use boost::asio::streambuf as a buffer for network io (it doesn't leak and doesn't need much maintenance).
// The send function
void
send(char const* data, size_t size)
{
    std::ostreambuf_iterator<char> out(&buffer_);
    std::copy(data, data + size, out);
    socket.async_send_to(buffer_.data(), endpoint,
        std::bind(&multicast_sender::handle_send,
                  this, std::placeholders::_1));
}
Moving the socket and endpoint inside the class would be a good idea. Also bear in mind that the async operation may complete after your object has gone out of scope. I would recommend using enable_shared_from_this (Boost or std flavour) and passing shared_from_this() instead of this to the bind call.
The whole solution would look like this:
#include <functional>
#include <memory>
#include <boost/asio.hpp>

class multicast_sender :
    public std::enable_shared_from_this<multicast_sender> {
    using udp = boost::asio::ip::udp;

    udp::socket socket_;
    udp::endpoint endpoint_;
    boost::asio::streambuf buffer_;

public:
    multicast_sender(boost::asio::io_service& io_service, short port,
                     udp::endpoint const& remote) :
        socket_(io_service, udp::endpoint(udp::v4(), port)),
        endpoint_(remote)
    {
    }

    void
    send(char const* data, size_t size)
    {
        std::ostreambuf_iterator<char> out(&buffer_);
        std::copy(data, data + size, out);
        socket_.async_send_to(buffer_.data(), endpoint_,
            std::bind(&multicast_sender::handle_send,
                      shared_from_this(), std::placeholders::_1));
    }

    void
    handle_send(boost::system::error_code const& ec)
    {
    }
};
EDIT
And as long as you don't have to do anything in the write handler, you can use a lambda (requires C++11) as the completion callback:
// The send function
void
send(char const* data, size_t size)
{
    std::ostreambuf_iterator<char> out(&buffer_);
    std::copy(data, data + size, out);
    socket_.async_send_to(buffer_.data(), endpoint_,
        [](boost::system::error_code const& ec, std::size_t /*bytes_transferred*/){
            if (ec)
                std::cerr << "Error sending: " << ec.message() << "\n";
        });
}
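One caveat worth adding (my own note, in line with the streambuf discussion earlier on this page): if buffer_ is reused across sends, the completion handler should also consume what was just written, otherwise each send retransmits the previous contents as well. A sketch that also keeps the object alive via shared_from_this, as recommended above:

auto self = shared_from_this();  // keep the object alive until the handler runs
socket_.async_send_to(buffer_.data(), endpoint_,
    [self](boost::system::error_code const& ec, std::size_t bytes_transferred){
        if (ec)
            std::cerr << "Error sending: " << ec.message() << "\n";
        self->buffer_.consume(bytes_transferred);  // drop the bytes that were just sent
    });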

C++: wait for all async operations to end

I have started N identical async operations (e.g. N requests to a database), and I need to do something after all of these operations have ended. How can I do this? (After each async operation ends, my callback is called.)
I use C++14.
Example
I use boost.asio to write some data to a socket.
for (int i = 0; i < N; ++i)
{
    boost::asio::async_write(
        m_socket,
        boost::asio::buffer(ptr[i], len[i]),
        [this, callback](const boost::system::error_code& ec, std::size_t)
        {
            callback(ec);
        });
}
So I need to know when all my writes have ended.
First of all, never call async_write in a loop. Each socket may have only one async_write and one async_read outstanding at any one time.
Boost already has provision for scatter/gather I/O.
This snippet should give you enough information to go on.
Notice that async_write can take a whole sequence of buffers (built here from a vector of vectors) and it will fire the handler exactly once, after all the buffers have been written.
struct myclass {
    boost::asio::ip::tcp::socket m_socket;
    std::vector<std::vector<char>> pending_buffers;
    std::vector<std::vector<char>> writing_buffers;

    void write_all()
    {
        assert(writing_buffers.size() == 0);
        writing_buffers = std::move(pending_buffers);
        pending_buffers.clear();  // ensure the moved-from vector is empty before reuse

        // Build a buffer sequence referring to every chunk being written.
        std::vector<boost::asio::const_buffer> buffers;
        for (auto const& chunk : writing_buffers)
            buffers.push_back(boost::asio::buffer(chunk));

        boost::asio::async_write(
            m_socket,
            buffers,
            std::bind(&myclass::write_all_handler,
                      this,
                      std::placeholders::_1,
                      std::placeholders::_2));
    }

    void write_all_handler(const boost::system::error_code& ec, size_t bytes_written)
    {
        writing_buffers.clear();
        // send next load of data
        if (pending_buffers.size())
            write_all();
        // call your callback here
    }
};
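To tie the snippet together, a hypothetical entry point for queueing data might look like the following; queue_write is my own name, not part of the original snippet, and it assumes all calls are made on the io_service thread (or serialized through a strand):

void queue_write(std::vector<char> data)
{
    pending_buffers.push_back(std::move(data));
    if (writing_buffers.empty())   // no write currently in flight, so start one
        write_all();
}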

Correct use of Boost::asio inside of a separate thread

I am writing a DLL plugin for the Orbiter space simulator, which allows for UDP communication with an external system. I've chosen boost::asio for the task, as it allows me to abstract from the low-level stuff.
The "boundary conditions" are as follows:
I can create any threads or call any API functions from my DLL
I can modify the data inside of the simulation only inside the callback passed to my DLL (each frame), due to lack of other thread safety.
Hence, I chose the following architecture for the NetworkClient class I'm using for communications:
Upon construction, it initializes the UDP socket (boost::socket+boost::io_service) and starts a thread, which calls io_service.run()
Incoming messages are put asynchronously into a queue (made thread-safe via a CriticalSection)
The callback processing function can pull messages from the queue and process them
However, I have run into some strange exception upon running the implementation:
boost::exception_detail::clone_impl > at memory location 0x01ABFA00.
The exception arises in the io_service.run() call.
Can anyone please point out what I am missing? The code listings for my classes are below.
NetworkClient declaration:
class NetworkClient {
public:
    NetworkClient(udp::endpoint server_endpoint);
    ~NetworkClient();
    void Send(shared_ptr<NetworkMessage> message);
    inline bool HasMessages() { return incomingMessages.HasMessages(); };
    inline shared_ptr<NetworkMessage> GetQueuedMessage() { return incomingMessages.GetQueuedMessage(); };
private:
    // Network send/receive stuff
    boost::asio::io_service io_service;
    udp::socket socket;
    udp::endpoint server_endpoint;
    udp::endpoint remote_endpoint;
    boost::array<char, NetworkBufferSize> recv_buffer;
    // Queue for incoming messages
    NetworkMessageQueue incomingMessages;
    void start_receive();
    void handle_receive(const boost::system::error_code& error, std::size_t bytes_transferred);
    void handle_send(boost::shared_ptr<std::string> /*message*/, const boost::system::error_code& /*error*/, std::size_t /*bytes_transferred*/) {}
    void run_service();
    NetworkClient(NetworkClient&); // block default copy constructor
};
Methods implementation:
NetworkClient::NetworkClient(udp::endpoint server_endpoint) : socket(io_service, udp::endpoint(udp::v4(), 28465)) {
    this->server_endpoint = server_endpoint;
    boost::thread* th = new boost::thread(boost::bind(&NetworkClient::run_service, this));
    start_receive();
}

void NetworkClient::start_receive()
{
    socket.async_receive_from(boost::asio::buffer(recv_buffer), remote_endpoint,
        boost::bind(&NetworkClient::handle_receive, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)
    );
}

void NetworkClient::run_service()
{
    this->io_service.run();
}
There's nothing wrong with your architecture that I can see. You should catch exceptions thrown from io_service::run(); that is likely the source of your problem.
void NetworkClient::run_service()
{
    while (1) {
        try {
            this->io_service.run();
        } catch (const std::exception& e) {
            std::cerr << e.what() << std::endl;
        }
    }
}
You'll also want to fix whatever is throwing the exception.
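As a side note (my addition, not part of the original answer): if run() ever returns normally because the io_service runs out of work, the while(1) loop above will spin, since repeated calls to run() return immediately until reset() is called. A common way to keep run() blocked for the client's whole lifetime is an io_service::work object, the same technique used in the memory-leak answer above. A minimal sketch:

#include <boost/asio.hpp>
#include <boost/thread.hpp>

static boost::asio::io_service io_service;

static void run_service()
{
    io_service.run();   // blocks for as long as the work object below exists
}

int main()
{
    // The work object keeps run() from returning even before any
    // receive has been queued.
    boost::asio::io_service::work work(io_service);

    boost::thread worker(&run_service);

    // ... construct the socket, call start_receive(), post sends, etc. ...

    io_service.stop();  // at shutdown, make run() return
    worker.join();      // synchronize with the worker thread
}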

Am I getting a race condition with my boost asio async_read?

bool Connection::Receive() {
    std::vector<uint8_t> buf(1000);
    boost::asio::async_read(socket_, boost::asio::buffer(buf, 1000),
        boost::bind(&Connection::handler, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
    int rcvlen = buf.size();
    ByteBuffer b((std::shared_ptr<uint8_t>)buf.data(), rcvlen);
    if (rcvlen <= 0) {
        buf.clear();
        return false;
    }
    OnReceived(b);
    buf.clear();
    return true;
}
The method works fine but only when I make a breakpoint inside it. Is there an issue with timing as it waits to receive? Without the breakpoint, nothing is received.
You are trying to read from the receive buffer immediately after starting the asynchronous operation, without waiting for it to complete, that is why it works when you set a breakpoint.
The code after your async_read belongs in Connection::handler, since that is the callback you told async_read to invoke after receiving some data.
What you usually want is a start_read and a handle_read_some function:
void connection::start_read()
{
    socket_->async_read_some(boost::asio::buffer(read_buffer_),
        boost::bind(&connection::handle_read_some, shared_from_this(),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void connection::handle_read_some(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        // Use the data here!
        start_read();
    }
}
Note the shared_from_this; it's important if you want the lifetime of your connection to be automatically taken care of by the number of outstanding I/O requests. Make sure to derive your class from boost::enable_shared_from_this<connection> and to only create it with make_shared<connection>.
To enforce this, your constructor should be private and you can add a friend declaration (C++0x version; if your compiler does not support this, you will have to insert the correct number of arguments yourself):
template<typename T, typename... Arg> friend boost::shared_ptr<T> boost::make_shared(const Arg&...);
Also make sure your receive buffer is still alive by the time the callback is invoked, preferably by using a statically sized buffer member variable of your connection class.
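Putting those last points together, a minimal skeleton along these lines might look like the following. The names and the 1000-byte buffer size are illustrative choices of mine, and the constructor is left public here for brevity rather than using the friend declaration above:

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>

class connection : public boost::enable_shared_from_this<connection>
{
public:
    explicit connection(boost::asio::io_service& io_service)
        : socket_(io_service) {}

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    void start_read()
    {
        socket_.async_read_some(boost::asio::buffer(read_buffer_),
            boost::bind(&connection::handle_read_some, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

private:
    void handle_read_some(const boost::system::error_code& error,
                          size_t bytes_transferred)
    {
        if (!error)
        {
            // read_buffer_[0 .. bytes_transferred) holds the received data
            start_read();
        }
    }

    boost::asio::ip::tcp::socket socket_;
    boost::array<char, 1000> read_buffer_;  // member buffer: stays alive as long
                                            // as any outstanding read references it
};

It would then be created with boost::make_shared<connection>(boost::ref(io_service)) and kept alive automatically by the shared_ptr copies bound into each outstanding read.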