boost::asio acceptor avoid memory leak - c++

Using boost::asio, I use async_accept to accept connections. This works well, but there is one issue and I need a suggestion on how to deal with it. A typical async_accept looks like this:
Listener::Listener(int port)
    : acceptor(io, ip::tcp::endpoint(ip::tcp::v4(), port))
    , socket(io) {
    start_accept();
}

void Listener::start_accept() {
    Request *r = new Request(io);
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, this, r, placeholders::error));
}
It works fine, but there is an issue: the Request object is created with a plain new, so it can "leak" memory. It is not really a leak, since it only leaks at program stop, but I want to make Valgrind happy.
Sure, there is an option: I can replace it with a shared_ptr and pass it to every event handler. This will work until program stop: when the asio io_service is stopping, all handlers are destroyed and the Request will be freed. But this way I must always have an active asio operation referring to the Request, or it will be destroyed! I think that is a direct way to a crash, so I don't like this variant either.
UPD Third variant: the Listener holds a list of shared_ptrs to the active connections. This looks great and I prefer to use it unless a better way is found. The drawback: since this scheme allows "garbage collection" of idle connections, it is not safe: removing the connection pointer from the Listener will destroy it immediately, which can lead to a segfault while one of the connection's handlers is active in another thread. A mutex can't fix this, because in that case we would have to lock nearly everything.
Is there a way to make the acceptor work with connection management in an elegant and safe way? I would be glad to hear any suggestions.

The typical recipe for avoiding memory leaks when using this library is to use a shared_ptr; the io_service documentation specifically mentions this:
Remarks
The destruction sequence described above permits programs to simplify
their resource management by using shared_ptr<>. Where an object's
lifetime is tied to the lifetime of a connection (or some other
sequence of asynchronous operations), a shared_ptr to the object would
be bound into the handlers for all asynchronous operations associated
with it. This works as follows:
When a single connection ends, all associated asynchronous operations
complete. The corresponding handler objects are destroyed, and all
shared_ptr references to the objects are destroyed. To shut down the
whole program, the io_service function stop() is called to terminate
any run() calls as soon as possible. The io_service destructor defined
above destroys all handlers, causing all shared_ptr references to all
connection objects to be destroyed.
For your scenario, change your Listener::handle_accept() method to take a boost::shared_ptr<Request> parameter. Your second concern
removing the connection pointer from the Listener will destroy it immediately, which can lead to a segfault while one of the connection's handlers is active in another thread. A mutex can't fix this, because in that case we would have to lock nearly everything.
is mitigated by inheriting from the boost::enable_shared_from_this template in your classes:
class Listener : public boost::enable_shared_from_this<Listener>
{
...
};
Then, when you dispatch handlers, use shared_from_this() instead of this when binding to member functions of Listener.
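A minimal sketch of how the accept path might look with those changes, assuming Request exposes socket() and some start() method, and that the Listener itself is owned by a shared_ptr (otherwise shared_from_this() is not usable):

void Listener::start_accept() {
    // The shared_ptr is copied into the bound handler, so the Request
    // stays alive at least until handle_accept runs.
    boost::shared_ptr<Request> r(new Request(io));
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, shared_from_this(),
                    r, placeholders::error));
}

void Listener::handle_accept(boost::shared_ptr<Request> r,
                             const boost::system::error_code &ec) {
    if (!ec)
        r->start();     // hypothetical: Request binds shared_ptr copies of itself into its own handlers
    start_accept();     // keep accepting
}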

In case anyone is interested, I found another way. The Listener holds a list of shared_ptrs to the active connections. Ending/terminating a connection is done via io_service::post, which calls Listener::FinishConnection wrapped with an asio::strand. I usually wrap the Request's methods with a strand anyway - it is safer in terms of DDoS and/or thread safety. So, calling FinishConnection from post through the strand protects against a segfault in another thread.
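Roughly, the shape of that approach might be the following sketch (the connections container, terminate(), and the listener/listener_strand members are illustrative names, not the actual code):

// Listener side: all removals of connections go through one strand, so
// FinishConnection never races with handlers running on other threads.
void Listener::FinishConnection(boost::shared_ptr<Request> r) {
    connections.erase(r);   // may release the last shared_ptr and destroy the Request
}

// Request side: instead of deleting itself, the connection asks the
// Listener to forget it, posted through the Listener's strand.
void Request::terminate() {
    // listener and listener_strand are assumed members of Request,
    // referring back to the Listener and to a strand owned by it.
    io.post(listener_strand.wrap(
        boost::bind(&Listener::FinishConnection, listener, shared_from_this())));
}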

Not sure whether this is directly related to your issue, but I was also having similar memory leaks when using the Boost Asio libraries, in particular the same acceptor object you mentioned. It turned out that I was not shutting down the service correctly; some connections would stay open and their corresponding objects would not be freed from memory. Calling the following got rid of the leaks reported by Valgrind:
acceptor.close();
Hope this can be useful for someone!

Related

Cancel connection in boost::asio

From the standard SSL client example, say I call this function:
boost::asio::async_connect(socket_.lowest_layer(), endpoint_iterator,
    boost::bind(&SSLClient::handle_connect, this,
        boost::asio::placeholders::error));
But then the function is called, and the program is connecting. I would like to cancel my request and stop the connection! How can I do that?
Special case: Say I have those objects in a thread. Is there a way to do it in this case?
Now if I try to do this, the program simply doesn't respond. I don't see a way to force it to stop!
There are several ways to achieve what you want ¹.
You could hard-stop the service (service.stop()). But this leaves you no control over all running operations. It's the "nuclear" approach, so to say.
The controlled way would be to call cancel()
Cancel all asynchronous operations associated with the socket.
socket_.cancel()
Now, you have the additional task of maintaining the lifetime of your connection object (presumably the this in your bound completion handler). A very common pattern to use is to make the connection class derive from enable_shared_from_this and bind the completion handler to shared_from_this() instead of just this.
That way, the shared connection object will automatically "go away" after the last pending async operation has been canceled, and you don't have to worry about leaking connection objects.
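A rough sketch of that pattern applied to the SSL client above (the constructor, start() and cancel() names are assumptions; only the async_connect call and the handler come from the original snippet):

class SSLClient : public boost::enable_shared_from_this<SSLClient> {
public:
    SSLClient(boost::asio::io_service &io, boost::asio::ssl::context &ctx)
        : socket_(io, ctx) {}

    void start(boost::asio::ip::tcp::resolver::iterator endpoint_iterator) {
        // shared_from_this() keeps the client alive while the connect is pending
        boost::asio::async_connect(socket_.lowest_layer(), endpoint_iterator,
            boost::bind(&SSLClient::handle_connect, shared_from_this(),
                boost::asio::placeholders::error));
    }

    // To cancel from another thread, post this call through the io_service
    // (or a strand) so it runs on an io_service thread.
    void cancel() { socket_.lowest_layer().cancel(); }

private:
    void handle_connect(const boost::system::error_code &error) {
        if (error) return;   // includes the "operation aborted" error after cancel()
        // ... handshake, reads, writes ...
    }

    boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_;
};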
¹ short of exit, abort, quick_exit etc. :)

Memory management in asynchronous C++ code

I have been working with boost::asio for a while now, and while I do understand the concept of the asynchronous calls, I am still somewhat befuddled by the memory management implications. In normal synchronous code the object lifetime is clear. But consider a scenario similar to the case of the daytime server:
There might be multiple active connections which have been accepted. Each connection now sends and receives some data from a socket, does some work internally and then decides to close the connection. It is safe to assume that the data related to the connection needs to stay accessible during the processing but the memory can be freed as soon as the connection is closed. But how can I implement the creation/destruction of the data correctly? Assuming that I use classes and bind the callback to member functions, should I create a class using new and call delete this; as soon as the processing is done or is there a better way?
But how can I implement the creation/destruction of the data correctly?
Use shared_ptr.
Assuming that I use classes and bind the callback to member functions, should I create a class using new and call delete this; as soon as the processing is done or is there a better way?
Make your class inherit from enable_shared_from_this, create instances of your classes using make_shared, and when you bind your callbacks bind them to shared_from_this() instead of this. The destruction of your instances will be done automatically when they have gone out of the last scope where they are needed.
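A minimal sketch of that pattern in the daytime-server style (the Connection class, its message and the create() helper are illustrative, not from the tutorial verbatim):

class Connection : public boost::enable_shared_from_this<Connection> {
public:
    typedef boost::shared_ptr<Connection> pointer;

    static pointer create(boost::asio::io_service &io) {
        return boost::make_shared<Connection>(boost::ref(io));
    }

    explicit Connection(boost::asio::io_service &io)
        : socket_(io), message_("hello\n") {}

    boost::asio::ip::tcp::socket &socket() { return socket_; }

    void start() {
        // The copy of shared_from_this() bound into the handler keeps
        // this Connection alive while the write is in flight.
        boost::asio::async_write(socket_, boost::asio::buffer(message_),
            boost::bind(&Connection::handle_write, shared_from_this(),
                boost::asio::placeholders::error));
    }

private:
    void handle_write(const boost::system::error_code & /*ec*/) {
        // No further operations are started here: when this handler is
        // destroyed, the last shared_ptr goes away and the Connection is deleted.
    }

    boost::asio::ip::tcp::socket socket_;
    std::string message_;
};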

how to make a lock-free producer/consumer thread exchange more exception-safe with QThreads

The gripe I have with this otherwise good example: https://www.qt.io/blog/2006/12/04/threading-without-the-headache is that it is exchanging naked pointers and it is not using Qt::QueuedConnection.
Edit: here is the code snippet the above link shows (in case the link goes down before this post)
// create the producer and consumer and plug them together
Producer producer;
Consumer consumer;
producer.connect(&consumer, SIGNAL(consumed()), SLOT(produce()));
consumer.connect(&producer, SIGNAL(produced(QByteArray *)), SLOT(consume(QByteArray *)));
// they both get their own thread
QThread producerThread;
producer.moveToThread(&producerThread);
QThread consumerThread;
consumer.moveToThread(&consumerThread);
// go!
producerThread.start();
consumerThread.start();
If I used a unique_ptr in the producer, releasing it when I emit the produced signal, and directly put the naked pointer into another unique_ptr in the connected consume slot, it would be somewhat safer - especially after some maintenance programmer has a go at the code ;)
void calculate()
{
    std::unique_ptr<std::vector<int>> pi(new std::vector<int>());
    ...
    produced(pi.release());
    // produced is a signal, the connected slot destroys the object
    // a slot must be connected or the objects are leaked
    // if multiple slots are connected the objects are double deleted
}

void consume(std::vector<int> *piIn)
{
    std::unique_ptr<std::vector<int>> pi(piIn);
    ...
}
This still has a few major problems:
I am not protecting against leaks when the slot is not connected.
I am not protecting against double deletes if multiple slots were to be connected (this would be a logic error on the part of the programmer if it happened, but I would like to detect it).
I don't know the inner workings of Qt well enough to be sure that nothing leaks in transit.
If I were to use a shared pointer to const, it would solve all my problems but be slower, and as far as I know I would have to register it with the meta-object system as described here: http://qt-project.org/doc/qt-4.8/qt.html#ConnectionType-enum. Is this a good idea?
Is there a better way of doing this that I'm not thinking of?
You shouldn't pass pointers in a signal while expecting a slot to destroy them, because the slot may not be available.
Pass a const reference instead, allowing the slot to copy the object. If you use Qt's container classes, this should not hinder performance, as Qt's container classes implement copy-on-write.
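A minimal sketch of the const-reference variant, assuming the same Producer/Consumer pair as above (the slot signatures here are illustrative, not the blog's code):

#include <QObject>
#include <QByteArray>

class Producer : public QObject {
    Q_OBJECT
public slots:
    void produce() {
        QByteArray data;
        // ... fill data ...
        emit produced(data);   // queued across threads as a cheap copy-on-write copy
    }
signals:
    void produced(const QByteArray &data);
};

class Consumer : public QObject {
    Q_OBJECT
public slots:
    void consume(const QByteArray &data) {
        // ... use data; nothing to delete, no ownership to hand over ...
        emit consumed();
    }
signals:
    void consumed();
};

// QByteArray is a built-in metatype, so no qRegisterMetaType() call is
// needed for the queued connections between the two threads.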

Cleaning up threads referencing an object when deleting the object (in C++)

I have an object (Client * client) which starts multiple threads to handle various tasks (such as processing incoming data). The threads are started like this:
// Start the thread that will process incoming messages and stuff them into the appropriate queues.
mReceiveMessageThread = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)receiveRtpMessageFunction, this, 0, 0);
These threads all have references back to the initial object, like so:
// Thread initialization function for receiving RTP messages from a newly connected client.
static int WINAPI receiveRtpMessageFunction(LPVOID lpClient)
{
    LOG_METHOD("receiveRtpMessageFunction");
    Client *client = (Client *)lpClient;
    while (client->isConnected())
    {
        if (client->receiveMessage() == ERROR)
        {
            Log::log("receiveRtpMessageFunction Failed to receive message");
        }
    }
    return SUCCESS;
}
Periodically, the Client object gets deleted (for various good and sufficient reasons). But when that happens, the processing threads that still have references to the (now deleted) object throw exceptions of one sort or another when trying to access member functions on that object.
So I'm sure that there's a standard way to handle this situation, but I haven't been able to figure out a clean approach. I don't want to just terminate the thread, as that doesn't allow for cleaning up resources. I can't set a property on the object, as it's precisely properties on the object that become inaccessible.
Thoughts on the best way to handle this?
I would solve this problem by introducing a reference count to your object. The worker thread would hold a reference, and so would the creator of the object. Instead of using delete, you decrement the reference count, and whoever drops the last reference is the one that actually calls delete.
You can use existing reference counting mechanisms (shared_ptr etc.), or you can roll your own with the Win32 APIs InterlockedIncrement() and InterlockedDecrement() or similar (maybe the reference count is a volatile DWORD starting out at 1...).
The only other thing that's missing is that when the main thread releases its reference, it should signal to the worker thread to drop its own reference. One way you can do this is by an event; you can rewrite the worker thread's loop as calls to WaitForMultipleObjects(), and when a certain event is signalled, you take that to mean that the worker thread should clean up and drop the reference.
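A rough sketch of that idea using shared_ptr plus a stop event (ThreadArgs, stopEvent and the ownership handoff are assumptions for illustration, not the original code):

// Both the creator and the worker hold a shared_ptr<Client>; whoever
// releases the last reference is the one that actually deletes it.
struct ThreadArgs {
    std::shared_ptr<Client> client;
    HANDLE stopEvent;   // signalled by the creator when the Client should go away
};

static int WINAPI receiveRtpMessageFunction(LPVOID lpParam)
{
    std::unique_ptr<ThreadArgs> args(static_cast<ThreadArgs *>(lpParam));
    while (WaitForSingleObject(args->stopEvent, 0) == WAIT_TIMEOUT) {
        if (args->client->receiveMessage() == ERROR)
            Log::log("receiveRtpMessageFunction Failed to receive message");
    }
    return SUCCESS;   // args is destroyed here, dropping the thread's reference
}

// Creator-side shutdown: SetEvent(stopEvent), wait for the thread to exit,
// then reset the creator's own shared_ptr. The Client is deleted by
// whichever side dropped the last reference.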
You don't have much leeway because of the running threads.
No combination of shared_ptr + weak_ptr can save you here... you may call a method on the object while it is still valid and have its destruction ordered at the same time from another thread (using only shared_ptr would save you).
The only thing I can imagine is to first terminate the various threads and then destroy the object. This way you ensure that each thread terminates gracefully, cleaning up its own mess if necessary (and it might need the object to do that).
This means that you cannot delete the object out of hand, since you must first resynchronize with those who use it, and that you need some event handling for the synchronization part (since you basically want to tell the threads to stop, and not wait indefinitely for them).
I leave the synchronization part to you; there are many alternatives (events, flags, etc.) and we don't have enough data.
You can deal with the actual cleanup from either the destructor itself or by overloading the various delete operations, whichever suits you.
You'll need to have some other state object the threads can check to verify that the "client" is still valid.
One option is to encapsulate your client reference inside some other object that remains persistent, and provide a reference to that object from your threads.
You could use the observer pattern with proxy objects for the client in the threads. The proxies act like smart pointers, forwarding access to the real client. When you create them, they register themselves with the client, so that it can invalidate them from its destructor. Once they're invalidated, they stop forwarding and just return errors.
This could be handled by passing a (boost) weak pointer to the threads.
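For example, a weak_ptr-based worker loop might look roughly like this (the heap-allocated weak_ptr handoff is just one way to get it into the thread; only the loop body comes from the original code):

static int WINAPI receiveRtpMessageFunction(LPVOID lpParam)
{
    LOG_METHOD("receiveRtpMessageFunction");
    // The creator heap-allocates a weak_ptr<Client> and passes it to the
    // thread, which takes ownership of that small wrapper object.
    boost::scoped_ptr<boost::weak_ptr<Client> > weakClient(
        static_cast<boost::weak_ptr<Client> *>(lpParam));
    for (;;)
    {
        // lock() pins the Client only for the duration of this iteration;
        // it returns an empty shared_ptr once the owner has released it.
        boost::shared_ptr<Client> client = weakClient->lock();
        if (!client || !client->isConnected())
            break;
        if (client->receiveMessage() == ERROR)
            Log::log("receiveRtpMessageFunction Failed to receive message");
    }
    return SUCCESS;
}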

is it safe to destroy a socket object while an async_read might be going on in Boost.Asio?

In the following code:
tcp::socket socket(io_service);
tcp::endpoint ep(boost::asio::ip::address::from_string(addr), i);
socket.async_connect(ep, &connect_handler);
socket.close();
is it correct to close the socket object, or should I close it only in the connect_handler(), or resort to a shared_ptr to prolong the life of the socket object?
Thanks.
Closing the socket isn't much of an issue, but the socket being destructed and deallocated is. One way to deal with it is to just make sure the socket outlives the io_service where work is being done. In other words, you just make sure to not delete it until after the io_service has exited. Obviously this won't work in every situation.
In a variety of conditions it can be difficult or even impossible to tell when all work on the socket is really done while it is active within the io_service, and ASIO doesn't provide any mechanism to explicitly remove or disconnect the object's callbacks so they don't get called. So you should consider holding the connection in a shared_ptr, which will keep the connection object alive until the last reference inside the io_service has been released.
Meanwhile your handler functors should handle all possible errors passed in, including the connection being destroyed.
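A read handler on such a shared_ptr-managed connection might look roughly like this (Connection, process() and start_read() are illustrative names):

void Connection::handle_read(const boost::system::error_code &ec, std::size_t bytes)
{
    if (ec)
    {
        // Covers close()/cancel() (operation aborted), the peer disconnecting,
        // and any other failure. By returning without starting a new operation,
        // the shared_ptr bound into this handler is released; once no handler
        // holds a reference any more, the Connection is destroyed.
        return;
    }
    process(bytes);   // hypothetical: consume the data that was read
    start_read();     // hypothetical: queue the next async_read, re-binding shared_from_this()
}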
It is safe. The connect_handler will give you ec == boost::asio::error::connection_aborted. Of course, you need to do io_service.run() for the handler to be invoked.
As already answered by Chila, it's safe to close the socket whenever you want. If the socket in question has an outstanding operation at the time, the handler/callback will be invoked to notify you that you've cancelled the operation. That's where connection_aborted shows up.
As for your question about shared_ptr, I consider it a big win if you have another thread or other objects referencing your sockets; however, it isn't required in many cases. All you have to do is dynamically allocate them and deallocate them when they're no longer needed. Of course, if you have other objects or threads referencing your socket, you must update them prior to deleting/deallocating it. Doing so, you avoid invalid memory access to an object that no longer exists (see dangling pointer).