Segmentation fault with boost::asio, asynchronous udp-server with deadline_timer - c++

I am having trouble with a server program that uses the boost::asio library.
The Server class is very much like the one presented in the boost asio tutorial "asynchronous UDP server".
The class has a public method, sendMessageTo, which is called by a message-processor object. The segmentation fault occurs there when the method is invoked from the deadline_timer thread, specifically at the call to new std::string(msg, len), which puzzles me: msg and len both contain what they should.
void Server::sendMessageTo(const char* msg, size_t len, udp::endpoint to)
{
    boost::shared_ptr<std::string> message( new std::string(msg, len) );
    socket.async_send_to(boost::asio::buffer(*message), to,
        boost::bind(&Server::handleSend, this, message,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
When the method "sendMessageTo" is called on the first attempt, everything works fine: It is called later in the same thread, that is opened by the "handleReceive" method of the server class.
My message-processor object is some kind of state-machine, that keeps the remote-endpoint, and in some states periodically wants to send some udp Messages back to the endpoint. Therefor a asio::deadline_timer is used.
The deadline timer is created with the same io_service, the udp-server runs on.
When the timer is revoked for the first time, the state_handling method inside the message_processor object calls the "sendMessageTo" method an segmentation fault occurs.
All arguments of "sendMessageTo" are valid and contain the expected values.
The constructor header of my message-processor class (called Transaction):
Transaction::Transaction(ClientReference *cli, ServerReference *serv)
    : timer(*(serv->getIOService()), boost::posix_time::milliseconds(TRANSACTION_THREAD_SLEEP_MILLISEC)),
      clientEndpoint(serv->getEndpoint())
timer is the asio::deadline_timer object, and clientEndpoint is the udp::endpoint.
The server response is sent inside the method Transaction::runThread():
server->sendMessageTo(&encryptedMsgBuf[0], size, clientEndpoint);
encryptedMsgBuf is a char array buffer that stores the encrypted message; it is part of the Transaction object.
At the end of Transaction::runThread(), the deadline_timer is set up to call runThread() again, until the final state is reached:
if (state != done && state != expired)
    timer.async_wait(boost::bind(&Transaction::runThread, this));
Thank you in advance.

I'm not 100% sure on this one, since I can't locally reproduce your error from what you've posted, but I strongly suspect your problem is due to the scoping of the message string variable. I have had some issues with boost::shared_ptr in the past where the shared_ptr was destructed earlier than expected. If that is the case here, the shared_ptr message may be getting destructed at the end of the call to Server::sendMessageTo(), and when the asynchronous transmission actually attempts to start, that memory has been deallocated, causing a segfault.
In general, I like to keep the buffers that I actually transmit from and receive into as private members of my server and client classes, to ensure they are statically scoped and won't vanish on me unexpectedly halfway through a transmit or receive. It can cost a bit in memory footprint, but I find it gives me a lot of peace of mind. If this approach doesn't give you any joy, let me know and I'll see if I can reproduce the error locally. (At the moment my 'local reproduction' attempts have consisted of hacking an old "server-client using ASIO" example to allocate the TX buffer as you've indicated above, then thrashing some memory so that if the TX is doing further heap access it should segfault.)
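For reference, a minimal sketch of the member-buffer approach I mean. The member name m_sendBuffer is my own (a std::vector<char>), not from your code, and handleSend would no longer take the shared_ptr argument; it also assumes only one send is in flight at a time, otherwise you still need per-message buffers:
void Server::sendMessageTo(const char* msg, size_t len, udp::endpoint to)
{
    // Copy the outgoing bytes into a member buffer so they stay valid for
    // the whole duration of the asynchronous send.
    m_sendBuffer.assign(msg, msg + len);
    socket.async_send_to(boost::asio::buffer(m_sendBuffer), to,
        boost::bind(&Server::handleSend, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}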

Related

asio underlying behavior in async_receive

I have worked with the asio library on a few projects and have always managed to get it to work, but I feel there are some things about it that I have not entirely/clearly understood so far.
I am wondering how async_receive works.
I googled around a bit and had a look at the implementation, but didn't understand it very well. This is the way I often use async communication:
socket.async_receive(receive_buffer, receiveHandler);
where receiveHandler is the function that will be called upon the arrival of data on the socket.
I know that the async_receive call returns immediately. So here are my questions:
Does async_receive create a new thread each time it is called?
If not, does it mean that there is a thread responsible to waiting for data and when it arrives, it calls the handler function? When does this thread get created?
If I were to turn this call into a recursive call by using a lambda function like this:
void cyclicReceive() {
    // Imagine the whole thing is in a class, so "this" means something here and
    // "receiveHandler" is still a valid function that gets called.
    socket.async_receive(receive_buffer,
        [this](const asio::error_code& error_code, const std::size_t num_bytes)
        {
            receiveHandler(error_code, num_bytes);
            cyclicReceive();
        });
}
is there any danger of stack overflow? Why not?
I tried to show a minimal example by removing unnecessary details, so the exact syntax might be a bit wrong.
Asio does not implicitly create any new threads. In general it is based on a queue of completion handlers: when you call io.run(), the framework takes handlers from the queue and executes them until the queue is empty. All the async_ operations in Asio push new handlers onto this internal queue.
Therefore there is no risk of stack overflow: the nested async_receive only enqueues the next handler, so each handler returns before the next one runs and the stack does not grow. The worst possible, but not really probable, scenario is a std::bad_alloc exception when there is no memory left for the handler queue (which is very unlikely).
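A minimal sketch illustrating the point (the names are my own, and io_service::post stands in for the socket operation; the queuing behaviour is the same):
#include <boost/asio.hpp>

boost::asio::io_service io;

void tick(int n)
{
    if (n == 0) return;
    // Queue the next iteration instead of calling tick() recursively:
    // the current handler returns before the next one is executed.
    io.post([n] { tick(n - 1); });
}

int main()
{
    tick(1000000);   // a million "iterations", constant stack depth
    io.run();        // pops and runs queued handlers until the queue is empty
}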

Async send automatic variable using boost::asio. Is it possible?

I'm still trying to understand how the boost::asio C++ library works.
According to the answer to my previous question, async_write() enqueues the message in the network stack and immediately returns. However, the documentation says it is wrong to do this:
void dont_do_this()
{
    std::string msg = "Hello, world!";
    boost::asio::async_write(socket, boost::asio::buffer(msg), my_handler);
}
They insist that we need to ensure the buffer for the operation is valid until the completion handler is called. The question is WHY? By the time async_write returns, we have already put our message into the network stack, we don't need the buffer any longer, and the automatic variable msg can be destroyed without serious consequences. Where am I wrong?
async_write does not really queue the message in the network stack. Instead it queues the write onto the asynchronous task queue held by the io_service. The write to the network stack actually happens later, when you call run on the io_service. In short, there is an intermediate queue.
In your case, boost::asio::buffer keeps a reference to msg, not a copy of it. If msg goes out of scope, then by the time your message is handed to the network stack the buffer is pointing at a dangling string.
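One common fix (a sketch, not the only option) is to bind a shared_ptr that owns the buffer into the completion handler, so the string lives until the write completes:
#include <boost/asio.hpp>
#include <boost/make_shared.hpp>
#include <string>

void do_this_instead(boost::asio::ip::tcp::socket& socket)
{
    boost::shared_ptr<std::string> msg =
        boost::make_shared<std::string>("Hello, world!");
    boost::asio::async_write(socket, boost::asio::buffer(*msg),
        [msg](const boost::system::error_code& /*ec*/, std::size_t /*bytes*/)
        {
            // 'msg' is captured by value; the string outlives the write.
        });
}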

boost::asio acceptor avoid memory leak

Using boost::asio, I use async_accept to accept connections. This works well, but there is one issue and I need a suggestion on how to deal with it. Using the typical async_accept:
Listener::Listener(int port)
    : acceptor(io, ip::tcp::endpoint(ip::tcp::v4(), port))
    , socket(io) {
    start_accept();
}

void Listener::start_accept() {
    Request *r = new Request(io);
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, this, r, placeholders::error));
}
It works fine, but there is an issue: the Request object is created with plain new, so it can "leak" memory. Not really a leak, since it only leaks at program stop, but I want to make valgrind happy.
Sure, there is an option: I can replace it with a shared_ptr and pass it to every event handler. This will work until the program stops; when the asio io_service is stopping, all objects will be destroyed and the Request will be freed. But this way I must always have an active asio event for the Request, or it will be destroyed! I think that is a direct way to crash, so I don't like this variant either.
UPD: A third variant: the Listener holds a list of shared_ptrs to active connections. Looks great, and I prefer to use this unless some better way is found. The drawback: since this scheme allows "garbage collection" of idle connections, it is not safe; removing a connection pointer from the Listener will immediately destroy it, which can lead to a segfault when one of the connection's handlers is active in another thread. Using a mutex can't fix this, because in that case we would have to lock nearly everything.
Is there a way to make the acceptor work with connection management in some beautiful and safe way? I would be glad to hear any suggestions.
The typical recipe for avoiding memory leaks when using this library is to use a shared_ptr; the io_service documentation specifically mentions this:
Remarks
The destruction sequence described above permits programs to simplify
their resource management by using shared_ptr<>. Where an object's
lifetime is tied to the lifetime of a connection (or some other
sequence of asynchronous operations), a shared_ptr to the object would
be bound into the handlers for all asynchronous operations associated
with it. This works as follows:
When a single connection ends, all associated asynchronous operations
complete. The corresponding handler objects are destroyed, and all
shared_ptr references to the objects are destroyed. To shut down the
whole program, the io_service function stop() is called to terminate
any run() calls as soon as possible. The io_service destructor defined
above destroys all handlers, causing all shared_ptr references to all
connection objects to be destroyed.
For your scenario, change your Listener::handle_accept() method to take a boost::shared_ptr<Request> parameter. Your second concern
removing connection pointer from Listener will immediately destroy it,
what can lead to segfault when some of connection's handler is active
in other thread. Using mutex cant fix this cus in this case we must
lock nearly anything.
is mitigated by inheriting from the boost::enable_shared_from_this template in your classes:
class Listener : public boost::enable_shared_from_this<Listener>
{
...
};
then when you dispatch handlers, use shared_from_this() instead of this when binding to member functions of Listener.
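A short sketch of how that might look with the names from the question (Request::start() here is hypothetical, standing in for whatever kicks off the session):
void Listener::start_accept() {
    boost::shared_ptr<Request> r(new Request(io));
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, shared_from_this(),
                    r, placeholders::error));
}

void Listener::handle_accept(boost::shared_ptr<Request> r,
                             const boost::system::error_code& error) {
    if (!error)
        r->start();     // hypothetical: begin the session's first async operation
    start_accept();     // keep accepting; the bound shared_ptr keeps r alive
}
When the last handler holding the shared_ptr<Request> completes, the Request is destroyed automatically, which is exactly the lifetime model the quoted documentation describes.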
If anyone is interested, I found another way. The Listener holds a list of shared_ptrs to active connections. Ending/terminating a connection is done via io_service::post, which calls Listener::FinishConnection wrapped with an asio::strand. I usually wrap the Request's methods with a strand anyway; it's safer in terms of DDoS and/or thread safety. So calling FinishConnection from post, through the strand, protects against the segfault in the other thread.
Not sure whether this is directly related to your issue, but I was also having similar memory leaks when using the Boost Asio libraries, in particular with the same acceptor object you mentioned. It turned out that I was not shutting down the service correctly; some connections would stay open and their corresponding objects would not be freed from memory. Calling the following got rid of the leaks reported by Valgrind:
acceptor.close();
Hope this can be useful for someone!
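For completeness, a minimal shutdown sketch (my own ordering, not from the answer above): close the acceptor first so no new connections arrive, then stop the io_service so run() returns and the remaining handlers, together with any shared_ptrs bound into them, are destroyed:
void Listener::shutdown() {
    boost::system::error_code ec;
    acceptor.close(ec);   // cancels the pending async_accept (operation_aborted)
    io.stop();            // makes io.run() return in the worker thread(s)
}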

boost::asio async_read doesn't receive data or doesn't use callback

I am trying to receive data from a server application using boost asio's async_read() free function, but the callback I set up for when data is received is never called.
The client code is like this:
Client::Client()
{
    m_oIoService.run(); // member boost::asio::io_service
    m_pSocket = new boost::asio::ip::tcp::socket(m_oIoService);
    // Connection to the server
    [...]
    // First read
    boost::asio::async_read(*m_pSocket,
        boost::asio::buffer((void*)&m_oData, sizeof(m_oData)),
        boost::bind(&Client::handleReceivedData, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
I tried with small data (a short string) and I can't get it to work. When I use the synchronous read function (boost::asio::read()) with the same first two parameters, everything works perfectly.
Am I missing something with the use of the io_service? I am still unsure about how it works.
boost::asio::io_service::run() is a blocking call. In your example it may or may not return immediately. If it doesn't return, you are blocked before you even create the socket, and you never call read, so you cannot expect a callback. If it does return (because there is no outstanding work yet), the dispatch loop has exited, so no callbacks are ever delivered.
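A sketch of an ordering that does have work queued before run() is called (names mirror the question; m_oIoThread is a hypothetical boost::thread member, and running the io_service on its own thread is my addition so the constructor doesn't block):
Client::Client()
    : m_pSocket(new boost::asio::ip::tcp::socket(m_oIoService))
{
    // ... connect to the server ...
    boost::asio::async_read(*m_pSocket,
        boost::asio::buffer((void*)&m_oData, sizeof(m_oData)),
        boost::bind(&Client::handleReceivedData, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
    // Only now start dispatching; there is already pending work.
    m_oIoThread = boost::thread([this]() { m_oIoService.run(); });
}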
Read more about boost::asio::io_service::run(). I recommend you check out the documentation, including the tutorial, examples and reference. It is worth going through it in full to understand the concepts.
Hope it helps!
P.S.: On a side note, your code is not exception safe. Beware that if the constructor of a class fails with an exception, the destructor of that class instance is never called. Thus, you may leak at least m_pSocket if its type is not one of the "smart pointers". You should consider making it exception safe, moving the code into another method that the user calls explicitly, or even wrapping this functionality in a free function.

boost::asio asynchronous operations and resources

So I've made a socket class that uses boost::asio library to make asynchronous reads and writes. It works, but I have a few questions.
Here's a basic code example:
class Socket
{
public:
    void doRead()
    {
        m_sock->async_receive_from(boost::asio::buffer(m_recvBuffer), m_from,
            boost::bind(&Socket::handleRecv, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

    void handleRecv(boost::system::error_code e, int bytes)
    {
        if (e.value() || !bytes)
        {
            handle_error();
            return;
        }
        // do something with the data read
        do_something(m_recvBuffer);
        doRead(); // read another packet
    }

protected:
    boost::array<char, 1024> m_recvBuffer;
    boost::asio::ip::udp::endpoint m_from;
};
It seems that the program will read a packet, handle it, then prepare to read another. Simple.
But what if I set up a thread pool? Should the next call to doRead() come before or after handling the read data? It seems that if it is put before do_something(), the program can immediately begin reading another packet, and if it is put after, the thread is tied up doing whatever do_something() does, which could possibly take a while. If I put the doRead() before the handling, does that mean the data in m_recvBuffer might change while I'm handling it?
Also, if I'm using async_send_to(), should I copy the data to be sent into a temporary buffer, because the actual send might not happen until after the data has fallen out of scope? i.e.
void send()
{
    char data[] = {1, 2, 3, 4, 5};
    m_sock->async_send_to(boost::asio::buffer(&data[0], 5), someEndpoint, someHandler);
} // "data" gets deallocated, but the write might not have happened yet!
Additionally, when the socket is closed, handleRecv() will be called with an error indicating it was interrupted. If I do
Socket* mySocket = new Socket()...
...
mySocket->close();
delete mySocket;
could it cause an error, because there is a chance that mySocket will be deleted before handleRecv() gets called/finished?
Lots of questions here, I'll try to address them one at a time.
But what if I set up a thread pool?
The traditional way to use a thread pool with Boost.Asio is to invoke io_service::run() from multiple threads. Beware that this isn't a one-size-fits-all answer, though: there can be scalability or performance issues, but this methodology is by far the easiest to implement. There are many similar questions on Stack Overflow with more information.
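A minimal sketch of that approach (the thread count and the work guard are my own choices, not requirements):
#include <boost/asio.hpp>
#include <boost/thread.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::work work(io);   // keeps run() from returning while idle

    boost::thread_group pool;
    for (std::size_t i = 0; i < 4; ++i)
        pool.create_thread([&io] { io.run(); });   // handlers run on any pool thread

    // ... start async operations / post work that uses 'io' here ...

    io.stop();        // for the sketch: let the run() calls return
    pool.join_all();
    return 0;
}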
Should the next call to doRead be before or after handling the read
data? It seems that if it is put before do_something(), the program
can immediately begin reading another packet, and if it is put after,
the thread is tied up doing whatever do_something does, which could
possibly take a while.
This really depends on what do_something() needs to do with m_recvBuffer. If you wish to invoke do_something() in parallel with doRead() using io_service::post() you will likely need to make a copy of m_recvBuffer.
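For example, a sketch of that copy-then-post approach (m_io and processPacket are hypothetical names, not from the question):
void Socket::handleRecv(const boost::system::error_code& e, std::size_t bytes)
{
    if (e || bytes == 0) { handle_error(); return; }

    // Copy only the bytes that were actually received, then hand the copy to
    // the pool; the posted task no longer depends on m_recvBuffer.
    std::vector<char> packet(m_recvBuffer.begin(), m_recvBuffer.begin() + bytes);
    m_io.post([this, packet]() { processPacket(packet); });

    doRead();   // safe to re-arm immediately; the posted task owns its own copy
}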
If I put the doRead() before the handling, does
that mean the data in m_readBuffer might change while I'm handling it?
As I mentioned previously, yes, this can and will happen.
Also, if I'm using async_send_to(), should I copy the data to be sent
into a temporary buffer, because the actual send might not happen
until after the data has fallen out of scope?
As the documentation describes, it is up to the caller (you) to ensure the buffer remains in scope for the duration of the asynchronous operation. As you suspected, your current example invokes undefined behavior because data[] will go out of scope.
Additionally, when the socket is closed, the handleRecv() will be called
with an error indicating it was interrupted.
If you wish to continue to use the socket, use cancel() to interrupt outstanding asynchronous operations. Otherwise, close() will work. The error passed to outstanding asynchronous operations in either scenario is boost::asio::error::operation_aborted.
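A small sketch of checking for that error in the handler from the question (the structure is otherwise unchanged):
void Socket::handleRecv(const boost::system::error_code& e, std::size_t bytes)
{
    if (e == boost::asio::error::operation_aborted)
        return;                 // cancel()/close() was called; don't re-arm the read
    if (e || bytes == 0)
    {
        handle_error();
        return;
    }
    do_something(m_recvBuffer);
    doRead();
}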