Invalid instance variable in Asio completion handler - c++

I've set up a simple async tcp server using Asio (non-boost), which pretty much follows the code used here: http://think-async.com/Asio/asio-1.11.0/doc/asio/tutorial/tutdaytime3.html
I'm experiencing an issue where attempting to access a variable of the current tcp_connection instance inside the completion handler for async_read_some/async_receive throws an error. The variable in question is simply a pointer to an instance of an encryption class that I have created. It seems that this pointer becomes invalid (address of 0xFEEEFEEE) once the completion handler is called. Here's the tcp_connection class that gets created once a connection from a client is made:
class tcp_connection
: public enable_shared_from_this<tcp_connection> {
public:
typedef shared_ptr<tcp_connection> pointer;
static pointer create(asio::io_service &ios) {
return pointer(new tcp_connection(ios));
}
tcp::socket &socket() {
return socket_;
}
void start() {
byte* buf = new byte[4096];
socket_.async_receive(asio::buffer(buf, 4096), 0,
bind(&tcp_connection::handle_receive, this,
buf,
std::placeholders::_1, std::placeholders::_2));
}
private:
tcp_connection(asio::io_service &ios)
: socket_(ios) {
crypt_ = new crypt();
}
void handle_receive(byte* data, const asio::error_code &err, size_t len) {
cout << "Received packet of length: " << len << endl;
crypt_->decrypt(data, 0, len); // This line causes a crash, as the crypt_ pointer is invalid.
for (int i = 0; i < len; ++i)
cout << hex << setfill('0') << setw(2) << (int)data[i] << ", ";
cout << endl;
}
tcp::socket socket_;
crypt* crypt_;
};
I'm assuming this has something to do with the way Asio uses threads internally. I would have thought that the completion handler (handle_receive) would be invoked with the current tcp_connection instance, though.
Is there something I'm missing? I'm not too familiar with Asio. Thanks in advance.

Firstly, you should use shared_from_this to prevent the tcp_connection from being destroyed ("collected") while async operations on it are still outstanding:
socket_.async_receive(asio::buffer(buf, 4096), 0,
bind(&tcp_connection::handle_receive, shared_from_this()/*HERE!!*/,
buf,
std::placeholders::_1, std::placeholders::_2));
Secondly, your tcp_connection class should implement the Rule of Three (at the very least, clean up crypt_ in the destructor and prohibit copying and assignment).
You also never free buf in your current sample.
Of course, in general, just use smart pointers for all of these.
Live On Coliru
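(The Coliru listing is not reproduced here.) A minimal sketch of the cleaned-up class, assuming byte and crypt are the asker's own types and that crypt::decrypt takes a pointer, an offset and a length as in the question:
class tcp_connection
    : public enable_shared_from_this<tcp_connection> {
public:
    typedef shared_ptr<tcp_connection> pointer;
    static pointer create(asio::io_service& ios) {
        return pointer(new tcp_connection(ios));
    }
    tcp::socket& socket() { return socket_; }
    void start() {
        auto self = shared_from_this();        // keeps the connection alive until the handler has run
        socket_.async_receive(asio::buffer(buf_), 0,
            [this, self](const asio::error_code& err, size_t len) {
                handle_receive(err, len);
            });
    }
private:
    tcp_connection(asio::io_service& ios)
        : socket_(ios), buf_(4096), crypt_(new crypt()) {}
    void handle_receive(const asio::error_code& err, size_t len) {
        if (err) return;
        crypt_->decrypt(buf_.data(), 0, len);  // crypt_ and buf_ are still valid here
        // ... print/process the decrypted bytes, then issue the next receive ...
    }
    tcp::socket socket_;
    std::vector<byte> buf_;                    // owned by the connection, freed automatically
    std::unique_ptr<crypt> crypt_;             // no manual delete; also makes the class non-copyable
};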

Related

Receiving too few chars from asio::async_write

I am calling my server via TCP/IP. I am sending a couple of chars and want to receive an answer that is sent by my acknowledge function. But I only receive as many chars as I sent to the server.
constexpr int maxInputBufferLength{1024};
using boost::asio::ip::tcp;
struct session
: public std::enable_shared_from_this<session>
{
public:
session(tcp::socket socket)
: socket_(std::move(socket))
{
}
void start(){
do_read();
}
private:
void do_read(){
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(data_, maxInputBufferLength),
[this, self](boost::system::error_code ec, std::size_t length)
{
command_t currentCommand{data_};
if(currentCommand.commandStringVector.front()==driveCommand){
acknowledge("driveCommand triggered by TCP");
}
....
}
void acknowledge(std::string ackmessage){
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(ackmessage, maxInputBufferLength),
[this, self, ackmessage](boost::system::error_code ec, std::size_t length)
{
std::cout << ackmessage <<"\n";
if(ec.value()){
std::cerr << ec.category().name() << ':' << ec.value() << "\n";
}
});
}
char data_[maxInputBufferLength];
};
To my limited knowledge, my acknowledge function, which calls async_write with its own boost::asio::buffer, should send the whole buffer, which should contain ackmessage and have a length of maxInputBufferLength (a global constexpr int).
You have undefined behaviour because you pass a local variable into buffer. buffer is only a wrapper around the string; it can be thought of as a pair of pointer-to-data and data size, and it does not make a copy of ackmessage. You need to ensure that ackmessage exists until the handler for async_write is called. async_write returns immediately, so what you have now looks like this:
acknowledge()
{
ackmessage // local variable
// async_write is called; it creates a buffer over ackmessage and stores only a pointer to the string
// async_write returns immediately
// ackmessage is destroyed here, while the write may still be pending
}
Fix:
1) consider using a synchronous write operation
2) if you want to stay async, you could store ackmessage as a data member of session and pass that member to buffer; either way, you need some way of extending its lifetime until the handler is invoked
A nice way to resolve your issue is to create a shared_ptr from ackmessage and capture it in your handler lambda; that extends the lifetime of the string being written (see the sketch below).
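A minimal sketch of that shared_ptr approach, using the session class from the question (member and variable names are illustrative):
void acknowledge(std::string ackmessage){
    auto self(shared_from_this());
    // The shared_ptr keeps the string alive until the completion handler has run.
    auto msg = std::make_shared<std::string>(std::move(ackmessage));
    boost::asio::async_write(socket_, boost::asio::buffer(*msg),
        [this, self, msg](boost::system::error_code ec, std::size_t /*length*/)
        {
            if(ec){
                std::cerr << ec.category().name() << ':' << ec.value() << "\n";
            } else {
                std::cout << *msg << "\n";
            }
        });
}
Note that buffer(*msg) sends only the characters the string actually contains, rather than maxInputBufferLength bytes.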

Broken pipe after writing to socket

In my network library I can do asynchronous writes to the network if I run() and restart() the io_context manually.
I'm now trying to make things scale by adding a thread pool:
.hpp
struct pool : public std::enable_shared_from_this<pool> {
pool(const pool &) = delete;
auto operator=(const pool &) -> pool & = delete;
explicit pool(pool_parameters config, db_parameters params) noexcept;
asio::io_context m_io_context;
asio::thread_pool m_workers;
asio::executor_work_guard<asio::io_context::executor_type> m_work_guard;
/// \brief Container to hold connections.
std::vector<std::unique_ptr<dbc::connection>> m_connections;
};
.cpp
pool::pool(pool_parameters config, db_parameters params) noexcept
: m_config{std::move(config)},
m_params{std::move(params)},
m_work_guard{asio::make_work_guard(m_io_context)},
m_workers{m_config.thread_pool_size} {
m_connections.reserve(m_config.connection_pool_size);
asio::post(m_workers, [&]() { m_io_context.run(); });
}
Which manages connections:
.hpp
struct abstract_connection : connection {
explicit abstract_connection(const std::shared_ptr<pool> &pool) noexcept;
~abstract_connection() override;
packet m_buffer;
asio::local::stream_protocol::endpoint m_endpoint;
asio::generic::stream_protocol::socket m_socket;
asio::io_context::strand m_strand;
};
.cpp
abstract_connection::abstract_connection(const std::shared_ptr<pool> &pool) noexcept
: m_params{pool->m_params},
m_config{pool->m_config},
m_endpoint{pool->m_config.socket},
m_socket{pool->m_io_context},
m_strand{pool->m_io_context} {
m_socket.connect(m_endpoint);
m_socket.non_blocking(true);
}
abstract_connection::~abstract_connection() {
std::error_code ec;
m_socket.shutdown(asio::generic::stream_protocol::socket::shutdown_both, ec);
m_socket.close();
}
Now comes the confusing part. In the constructor of a concrete connection object I need to do a handshake, along with another handshake in the destructor of the same class. This does not happen, because the socket object seems to be behaving in odd ways:
If I send data asynchronously, nothing gets written to the socket and sometimes I get a broken pipe error:
asio::dispatch(m_strand, [&]() {
m_buffer = write::startup(m_params);
asio::async_write(m_socket, asio::buffer(m_buffer), [](std::error_code ec, std::size_t len) {});
});
If I do a synchronous write I get a broken pipe error before I can read from the socket:
std::error_code ec;
auto startup = write::startup(m_params);
asio::write(m_socket, asio::buffer(startup), ec);
if (set_error(ec)) {
std::cerr << " XXX " << ec.message() << std::endl;
return;
}
m_buffer.reserve(327);
asio::read(m_socket, asio::buffer(m_buffer), ec);
std::cerr << ec.message() << std::endl;
std::cerr << m_buffer.size() << std::endl;
The connection is being done over a unix socket and I have socat sitting between both, so I can see data coming and going, along with the broken pipe messages. Trying to connect to the remote using a third party tool works, with all relevant data appearing in socat, so I believe the problem is in my code.
How can I debug what is going on with the socket?
Based on the code you posted, it seems your boost::asio::thread_pool just goes out of scope too early. Your abstract_connection class takes a const std::shared_ptr<pool> &pool, which means your abstract_connection instances do not hold a reference count on your pool, and therefore not on the thread pool and io_context it owns. References to std::shared_ptr rarely make sense, for exactly this reason: let your abstract_connection take the std::shared_ptr<pool> by value in its constructor and copy or move it into a member of the same type, so each connection keeps the pool alive (see the sketch below).
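A minimal sketch of that change, assuming the connection needs non-const access to the pool's io_context (the m_params/m_config members from the original are omitted for brevity):
struct abstract_connection : connection {
    explicit abstract_connection(std::shared_ptr<pool> p)
        : m_pool{std::move(p)},            // shared ownership keeps the pool and its io_context alive
          m_endpoint{m_pool->m_config.socket},
          m_socket{m_pool->m_io_context},
          m_strand{m_pool->m_io_context} {
        m_socket.connect(m_endpoint);
    }
    std::shared_ptr<pool> m_pool;          // declared first so it is initialized before the members below
    asio::local::stream_protocol::endpoint m_endpoint;
    asio::generic::stream_protocol::socket m_socket;
    asio::io_context::strand m_strand;
};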
I solved the problem by making the socket blocking (non_blocking(false)) which I would not have thought of without Superlokkus' answer.

Multithreading issue with UDP Client class implemented using C++

Since I am committed to developing some small audio applications that share audio content over the network through the UDP protocol, I am currently drafting the code for a UDP client class.
This class should receive the audio content of the other clients connected to the network and also send the audio content processed on the local machine; all these contents are exchanged with a server that works as a kind of content router.
Since audio content is generated by a process() method that is periodically called by the audio application, in order not to lose packets, each audio application should have a kind of UDP listener that is independent of the process() method and that should always be active; they should only share a buffer, or memory allocation, where audio data can be temporarily saved and later processed.
Taking all this into account, I coded this method:
void udp_client::listen_to_packets() {
while (udp_client::is_listening) {
if ((udp_client::message_len = recvfrom(udp_client::socket_file_descr, udp_client::buffer, _BUFFER_SIZE, MSG_WAITALL, (struct sockaddr*) &(udp_client::client_struct), &(udp_client::address_len))) < 0) {
throw udp_client_exception("Error on receiving message.");
}
std::cout << "New message received!" << std::endl;
}
std::cout << "Stop listenig for messages!" << std::endl;
}
As you can see, the function uses udp_client::buffer, which is the shared memory allocation I previously mentioned. In order to keep it always active, I was thinking of starting a new thread or process at class construction and stopping its execution at class destruction:
udp_client::udp_client():
is_listening(true) {
std::cout << "Constructing udp_client..." << std::endl;
std::thread listener = std::thread(udp_client::listen_to_packets);
}
udp_client::~udp_client() {
std::cout << "Destructing udp_client..." << std::endl;
udp_client::is_listening = false;
}
Of course, the code listed above doesn't work, and as #user4581301 suggested, the listener and is_listening variable definitions have been moved to class attributes:
private:
std::atomic<bool> is_listening;
std::thread listener;
Furthermore the constructor and destructor have been modified a little:
udp_client::udp_client():
listener(&udp_client::listen_to_packets, this),
is_listening(true) {
std::cout << "Constructing udp_client..." << std::endl;
}
udp_client::~udp_client() {
std::cout << "Destructing udp_client..." << std::endl;
udp_client::is_listening = false;
listener.join();
}
Unfortunately, g++ still returns an error, saying that there is no matching constructor for std::thread taking these two arguments:
error: no matching constructor for initialization of 'std::thread'
listener(&udp_client::listen_to_packets, this)
So, what should I modify to make the code work properly?
Here you can see the implementation of the class (hoping that this link is allowed under Stack Overflow rules):
https://www.dropbox.com/sh/lzxlp3tyvoncvxo/AAApN5KLf3YAsOD0PV7wJJO4a?dl=0

C++ Boost ASIO async_send_to memory leak

I am currently working on a UDP socket client. I am noticing a memory leak and have tried several things in hopes of squashing it, but it still persists. In my main, I have a char* that has been malloc'd. I then call the function below to send the data:
void Send(const char* data, const int size) {
Socket.async_send_to(boost::asio::buffer(data, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
If I run this code, it will always leak memory. However, if I comment out the async_send_to call, the memory stays consistent.
I have tried several variations (see below), but they all only appear to speed up the memory leak.
A couple of notes: there is a chance that the char* passed to Send gets free'd before the call completes. However, in my variations I have taken precautions to handle that.
Variation 1:
void Send(const char* data, const int size) {
char* buf = (char*)malloc(size);
memcpy(buf, data, size);
Socket.async_send_to(boost::asio::buffer(buf, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error, buf));
}
void HandleSendTo(const boost::system::error_code& ec, const char* buf) {
free(buf);
}
Variation 2:
class MulticastSender {
char* Buffer;
public:
void Send(const char* data, const int size) {
Buffer = (char*)malloc(size);
memcpy(Buffer, data, size);
Socket.async_send_to(boost::asio::buffer(Buffer, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
void HandleSendTo(const boost::system::error_code& ec) {
free(Buffer);
}
}
However, both variations seem to only speed up the memory leak. I have also tried removing the async_send_to and just calling boost::asio::buffer(data, size), but as has been explained in other questions, the buffer does not own the memory and thus it is up to the user to safely manage it. Any thoughts on what could be causing this issue and how to resolve it?
EDIT 1:
As suggested in the comments, I have preallocated a single buffer (for test purposes) and never deallocate it; however, the memory leak still persists.
class MulticastSender {
char* Buffer;
const int MaxSize = 16384;
public:
MulticastSender() {
Buffer = (char*)malloc(MaxSize);
}
void Send(const char* data, const int size) {
memcpy(Buffer, data, size);
Socket.async_send_to(boost::asio::buffer(Buffer, size), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
void HandleSendTo(const boost::system::error_code& ec) {
}
}
EDIT 2:
As requested here is an MCVE of the problem. In making this I have also observed an interesting behavior that I will explain below.
#include <string>
#include <iostream>
#include <functional>
#include <thread>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
class MulticastSender {
private:
boost::asio::io_service IOService;
const unsigned short Port;
const boost::asio::ip::address Address;
boost::asio::ip::udp::endpoint Endpoint;
boost::asio::ip::udp::socket Socket;
boost::asio::streambuf Buffer;
void HandleSendTo(const boost::system::error_code& ec) {
if(ec) {
std::cerr << "Error writing data to socket: " << ec.message() << '\n';
}
}
void Run() {
IOService.run();
}
public:
MulticastSender(const std::string& address,
const std::string& multicastaddress,
const unsigned short port) : Address(boost::asio::ip::address::from_string(address)),
Port(port),
Endpoint(Address, port),
Socket(IOService, Endpoint.protocol()) {
std::thread runthread(&MulticastSender::Run, this);
runthread.detach();
}
void Send(const char* data, const int size) {
std::ostreambuf_iterator<char> out(&Buffer);
std::copy(data, data + size, out);
Socket.async_send_to(Buffer.data(), Endpoint, boost::bind(&MulticastSender::HandleSendTo, this, boost::asio::placeholders::error));
}
};
const int SIZE = 8192;
int main() {
MulticastSender sender("127.0.0.1", "239.255.0.0", 30000);
while(true) {
char* data = (char*)malloc(SIZE);
std::memset(data, 0, SIZE);
sender.Send(data, SIZE);
usleep(250);
free(data);
}
}
The above code still produces a memory leak. I should mention that I am running this on CentOS 6.6 with kernel Linux dev 2.6.32-504.el6.x86_64 and running Boost 1.55.0. I am observing this simply by watching the process in top.
However, if I simply move the creation of the MulticastSender into the while loop, I no longer observe the memory leak. I am concerned about the speed of the application though, so this is not a valid option.
Memory is not leaking, as there is still a handle to the allocated memory. However, there will be continual growth because:
The io_service is not running because run() is returning as there is no work. This results in completion handlers being allocated, queued into the io_service, but neither executed nor freed. Additionally, any cleanup that is expected to occur within the completion handler is not occurring. It is worth noting that during the destruction of the io_service, completion handlers will be destroyed and not invoked; hence, one cannot depend on only performing cleanup within the execution of the completion handler. For more details as to when io_service::run() blocks or unblocks, consider reading this question.
The streambuf's input sequence is never being consumed. Each iteration in the main loop will append to the streambuf, which will then send the prior message content and the newly appended data. See this answer for more details on the overall usage of streambuf.
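Setting aside the buffer-invalidation problem raised in the next points, if the streambuf approach were kept, the completion handler would also have to consume the bytes that were sent so the next Send() does not resend them. A minimal sketch against the MCVE (the handler would then also need boost::asio::placeholders::bytes_transferred bound to it):
void HandleSendTo(const boost::system::error_code& ec, std::size_t bytes_transferred) {
    // Drop the bytes that were just written from the streambuf's input sequence.
    Buffer.consume(bytes_transferred);
    if(ec) {
        std::cerr << "Error writing data to socket: " << ec.message() << '\n';
    }
}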
A few other points:
The program fails to meet a requirement of async_send_to(), where ownership of the underlying buffer memory is retained by the caller, who must guarantee that it remains valid until the handler is called. In this case, when copying into the streambuf via the ostreambuf_iterator, the streambuf's input sequence is modified and invalidates the buffer returned from streambuf.data().
During shutdown, some form of synchronization will need to occur against threads that are running the io_service. Otherwise, undefined behavior may be invoked.
To resolve these issues, consider:
Using boost::asio::io_service::work to ensure that the io_service object's run() does not exit when there is no work remaining.
Passing ownership of the memory to the completion handler via std::shared_ptr or another class that will manage the memory via the resource acquisition is initialization (RAII) idiom. This will allow for proper cleanup and meet the requirements of buffer validity for async_send_to().
Not detaching and joining upon the worker thread.
Here is a complete example based on the original that demonstrates these changes:
#include <string>
#include <iostream>
#include <thread>
#include <boost/asio.hpp>
class multicast_sender
{
public:
multicast_sender(
const std::string& address,
const std::string& multicast_address,
const unsigned short multicast_port)
: work_(io_service_),
multicast_endpoint_(
boost::asio::ip::address::from_string(multicast_address),
multicast_port),
socket_(io_service_, boost::asio::ip::udp::endpoint(
boost::asio::ip::address::from_string(address),
0 /* any port */))
{
// Start running the io_service. The work_ object will keep
// io_service::run() from returning even if there is no real work
// queued into the io_service.
auto self = this;
work_thread_ = std::thread([self]()
{
self->io_service_.run();
});
}
~multicast_sender()
{
// Explicitly stop the io_service. Queued handlers will not be ran.
io_service_.stop();
// Synchronize with the work thread.
work_thread_.join();
}
void send(const char* data, const int size)
{
// Caller may delete before the async operation finishes, so copy the
// buffer and associate it to the completion handler's lifetime. Note
// that the completion handler may not run in the event the io_service is
// destroyed, but the handler object will still be destroyed, so managing via
// a RAII object (std::shared_ptr) is ideal.
auto buffer = std::make_shared<std::string>(data, size);
socket_.async_send_to(boost::asio::buffer(*buffer), multicast_endpoint_,
[buffer](
const boost::system::error_code& error,
std::size_t bytes_transferred)
{
std::cout << "Wrote " << bytes_transferred << " bytes with " <<
error.message() << std::endl;
});
}
private:
boost::asio::io_service io_service_;
boost::asio::io_service::work work_;
boost::asio::ip::udp::endpoint multicast_endpoint_;
boost::asio::ip::udp::socket socket_;
std::thread work_thread_;
};
const int SIZE = 8192;
int main()
{
multicast_sender sender("127.0.0.1", "239.255.0.0", 30000);
char* data = (char*) malloc(SIZE);
std::memset(data, 0, SIZE);
sender.send(data, SIZE);
free(data);
// Give some time to allow for the async operation to complete
// before shutting down the io_service.
std::this_thread::sleep_for(std::chrono::seconds(2));
}
Output:
Wrote 8192 bytes with Success
The class variation looks better, and you can use boost::asio::streambuf as a buffer for network io (it doesn't leak and doesn't need much maintenance).
// The send function
void
send(char const* data, size_t size)
{
std::ostreambuf_iterator<char> out(&buffer_);
std::copy(data, data + size, out);
socket.async_send_to(buffer_.data(), endpoint,
std::bind( &multicast_sender::handle_send,
this, std::placeholders::_1 ));
}
Moving the socket and endpoint inside the class would be a good idea. Also, bear in mind that the async operation can complete after your object has gone out of scope. I would recommend using enable_shared_from_this (boost or std flavour) and passing shared_from_this() instead of this to the bind function.
The whole solution would look like this:
#include <boost/asio.hpp>
class multicast_sender :
public std::enable_shared_from_this<multicast_sender> {
using udp = boost::asio::ip::udp;
udp::socket socket_;
udp::endpoint endpoint_;
boost::asio::streambuf buffer_;
public:
multicast_sender(boost::asio::io_service& io_service, short port,
udp::endpoint const& remote) :
socket_(io_service, udp::endpoint(udp::v4(), port)),
endpoint_(remote)
{
}
void
send(char const* data, size_t size)
{
std::ostreambuf_iterator<char> out(&buffer_);
std::copy(data, data + size, out);
socket_.async_send_to(buffer_.data(), endpoint_,
std::bind( &multicast_sender::handle_send,
shared_from_this(), std::placeholders::_1 ));
}
void
handle_send(boost::system::error_code const& ec)
{
}
};
EDIT
And since you don't have to do anything in the write handler, you can use a lambda (requires C++11) as the completion callback:
// The send function
void
send(char const* data, size_t size)
{
std::ostreambuf_iterator<char> out(&buffer_);
std::copy(data, data + size, out);
socket.async_send_to(buffer_.data(), endpoint,
[](boost::system::error_code const& ec, std::size_t /*bytes_transferred*/){
if (ec)
std::cerr << "Error sending: " << ec.message() << "\n";
});
}

Boost ASIO async_write "Vector iterator not dereferencable"

I've been working on an async boost server program, and so far I've got it to connect. However I'm now getting a "Vector iterator not dereferencable" error.
I suspect the vector gets destroyed (or its storage deallocated) before the packet gets sent, thus causing the error.
void start()
{
Packet packet;
packet.setOpcode(SMSG_PING);
send(packet);
}
void send(Packet packet)
{
cout << "DEBUG> Transferring packet with opcode " << packet.GetOpcode() << endl;
async_write(m_socket, buffer(packet.write()), boost::bind(&Session::writeHandler, shared_from_this(), placeholders::error, placeholders::bytes_transferred));
}
void writeHandler(const boost::system::error_code& errorCode, size_t bytesTransferred)
{
cout << "DEBUG> Transfered " << bytesTransferred << " bytes to " << m_socket.remote_endpoint().address().to_string() << endl;
}
Start gets called once a connection is made.
packet.write() returns a uint8_t vector
Would it matter if I'd change
void send(Packet packet)
to
void send(Packet& packet)
Not in relation to this problem, but performance-wise.
All this depends on how your Packet class is implemented and how it is copied. Does the copy of the Packet class do a deep copy or just a default one? If it is a default copy and your Packet class is not a POD, this can be the reason, and you will need to do a deep copy.
In general it is better to pass a class parameter by const&, so maybe you should try:
void send(Packet const& packet);
I have found a solution: since the vector would get destroyed, I made a queue that holds the resulting packets, and they get processed one by one. Now nothing gets destroyed prematurely, so the problem is solved.
I might want to change my queue to hold the Packet class instead of the result, but that's just a detail (a sketch of the idea is below).
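A minimal sketch of that queue idea (member and helper names here are illustrative, not the asker's actual code): each serialized packet is stored in a member deque so its bytes stay alive until its async_write completes, and writes are chained one at a time:
void send(const Packet& packet)
{
    bool write_in_progress = !m_outbox.empty();
    m_outbox.push_back(packet.write());      // the serialized bytes now live in the deque
    if (!write_in_progress)
        do_write();
}
void do_write()
{
    async_write(m_socket, buffer(m_outbox.front()),
        boost::bind(&Session::handle_write, shared_from_this(),
            placeholders::error, placeholders::bytes_transferred));
}
void handle_write(const boost::system::error_code& errorCode, size_t /*bytesTransferred*/)
{
    m_outbox.pop_front();                    // safe to release the buffer of the completed write
    if (!errorCode && !m_outbox.empty())
        do_write();                          // start the next queued write
}
std::deque<std::vector<uint8_t>> m_outbox;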