I have found that boost::signals2 uses sort of a lazy deletion of connected slots, which makes it difficult to use connections as something that manages lifetimes of objects. I am looking for a way to force slots to be deleted directly when disconnected. Any ideas on how to work around the problem by designing my code differently are also appreciated!
This is my scenario: I have a Command class responsible for doing something that takes time asynchronously, looking something like this (simplified):
class ActualWorker {
public:
boost::signals2::signal<void ()> OnWorkComplete;
};
class Command : public boost::enable_shared_from_this<Command> {
public:
...
void Execute() {
m_WorkerConnection = m_MyWorker.OnWorkComplete.connect(boost::bind(&Command::Handle_OnWorkComplete, shared_from_this()));
// launch asynchronous work here and return
}
boost::signals2::signal<void ()> OnComplete;
private:
void Handle_OnWorkComplete() {
// get a shared_ptr to ourselves to make sure that we live through
// this function but don't keep ourselves alive if an exception occurs.
boost::shared_ptr<Command> me = shared_from_this();
// Disconnect from the signal, ideally deleting the slot object
m_WorkerConnection.disconnect();
OnComplete();
// the shared_ptr now goes out of scope, ideally deleting this
}
ActualWorker m_MyWorker;
boost::signals2::connection m_WorkerConnection;
};
The class is invoked about like this:
...
boost::shared_ptr<Command> cmd(new Command);
cmd->OnComplete.connect( foo );
cmd->Execute();
// now go do something else, forget all about the cmd variable etcetera.
The Command class keeps itself alive by getting a shared_ptr to itself, which is bound into the ActualWorker signal using boost::bind.
When the worker completes, the handler in Command is invoked. Since I would now like the Command object to be destroyed, I disconnect from the signal, as can be seen in the code above. The problem is that the actual slot object is not deleted when disconnected; it is only marked as invalid and deleted at a later time. That cleanup in turn appears to depend on the signal firing again, which it never does in my case, so the slot never expires. The boost::bind object thus never goes out of scope, holding a shared_ptr to my object that will never be released.
I can work around this by binding the this pointer instead of a shared_ptr and keeping my object alive with a member shared_ptr that I release in the handler function, but that makes the design feel a bit overcomplicated. Is there a way to force signals2 to delete the slot when disconnecting? Or is there something else I could do to simplify the design?
Any comments are appreciated!
boost::signals2 does clean up the slots during connect/invoke.
So if all the slots disconnect themselves from the signal, invoking the signal a second time will not call anything but it should clean up the slots.
To answer your comment, yes, invoking the signal again is not safe if there are other slots connected, as they will be invoked again. In that case I suggest you go the other way around and connect a dummy slot, then disconnect it when your "real" slot is invoked. Connecting another slot will clean up stale connections, so your slot should be released.
Just make sure that you don't keep any references that need freeing in the dummy slot, or you're back where you started.
This is an incredibly annoying aspect of boost::signals2.
The approach I took to resolve it is to store the signal in a scoped_ptr, and when I want to force disconnection of all slots, I delete the signal. This only works in cases when you want to forcefully disconnect all connections to a signal.
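A minimal sketch of that approach, with made-up class and member names (only the Boost headers and types are real APIs):
#include <boost/scoped_ptr.hpp>
#include <boost/signals2.hpp>

class Worker {
public:
    typedef boost::signals2::signal<void ()> SignalType;

    Worker() : OnWorkComplete(new SignalType) {}

    // The signal lives behind a scoped_ptr so it can be destroyed on demand.
    boost::scoped_ptr<SignalType> OnWorkComplete;

    // Forcefully drop *all* connections by destroying the signal itself;
    // every slot it still holds, stale or not, is released here.
    void DropAllSlots() {
        OnWorkComplete.reset(new SignalType);
    }
};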
Is the behaviour any more strict with a scoped_connection?
So, rather than:
void Execute() {
m_WorkerConnection = m_MyWorker.OnWorkComplete.connect(boost::bind
(&Command::Handle_OnWorkComplete, shared_from_this()));
// launch asynchronous work here and return
}
...
boost::signals2::connection m_WorkerConnection;
Instead, use:
void Execute() {
boost::signals2::scoped_connection m_WorkerConnection
(m_MyWorker.OnWorkComplete.connect(boost::bind
(&Command::Handle_OnWorkComplete, shared_from_this())));
// launch asynchronous work here and return
} // connection falls out of scope
(copy-constructed from a boost::signals2::connection)
I've not used any sort of signalling so it's more of a guess than anything else, but following Execute() you wouldn't need to disconnect(), since scoped_connection handles it for you. That's more of a 'simplify the design' rather than actually solving your problem. But it may mean that you can Execute() and then immediately ~Command() (or delete the shared_ptr).
Hope that helps.
EDIT: And by Execute() then immediately ~Command() I obviously mean from outside your Command object. When you construct the Command to execute it, you should then be able to say:
cmd->Execute();
cmd.reset();
Or similar.
I ended up doing my own (subset) implementation of a signal, the main requirement being that a slot should be destroyed by a call to connection::disconnect().
The implementation stores all slots in a map from slot-implementation pointer to a shared_ptr to the slot implementation (instead of a list/vector), giving quick access to individual slots without having to iterate over all of them. A slot implementation is, in my case, basically a boost::function.
Connections hold a weak_ptr to the signal's internal implementation class and a weak_ptr to the slot implementation. This allows the signal to go out of scope, and the slot pointer serves both as the key into the signal's map and as an indication of whether the connection is still active (a raw pointer won't do, since the address could be reused).
When disconnect is called, both weak pointers are converted to shared_ptrs, and if both succeed, the signal implementation is asked to disconnect the slot given by the pointer. This is done by simply erasing it from the map.
The map is protected by a mutex to allow for multithreaded use. To prevent deadlocks, the mutex is not held while calling the slots, however this means that a slot may be disconnected from a different thread just prior to being called by the signal. This is also the case with regular boost::signals2 and in both of these scenarios one needs to be able to handle a callback from a signal even after one has disconnected.
To simplify the code for firing the signal, I force all slots to be disconnected at that point. This is different from boost::signals2, which copies the list of slots before calling them in order to handle disconnections/connections while the signal is firing.
The above works well for my scenario, where the signal of interest is fired very seldom (and in that case only once) but there are a lot of short-lived connections that otherwise use up a lot of memory even when using the trick outlined in the question.
For other scenarios, I've been able to replace the signal with just a boost::function (which means only a single connection is possible), or I've stuck with the workaround in the question, where the listener itself manages its lifetime.
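For reference, here is a condensed sketch of the design described above; it is not the production code and the class names are invented, but it shows the map of shared_ptr slots, the weak_ptr-based connection, and the disconnect-everything-on-fire behaviour:
#include <boost/function.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <boost/thread/mutex.hpp>
#include <map>

class EagerSignal {
    typedef boost::function<void ()> Slot;
    typedef std::map<Slot*, boost::shared_ptr<Slot> > SlotMap;

    struct Impl {
        boost::mutex mutex;
        SlotMap slots;
    };
    boost::shared_ptr<Impl> m_Impl;

public:
    class Connection {
        boost::weak_ptr<Impl> m_Signal;
        boost::weak_ptr<Slot> m_Slot;
    public:
        Connection() {}
        Connection(const boost::shared_ptr<Impl> &sig, const boost::shared_ptr<Slot> &slot)
            : m_Signal(sig), m_Slot(slot) {}

        // Erasing the map entry destroys the slot right here, unlike signals2.
        void disconnect() {
            boost::shared_ptr<Impl> sig = m_Signal.lock();
            boost::shared_ptr<Slot> slot = m_Slot.lock();
            if (sig && slot) {
                boost::mutex::scoped_lock lock(sig->mutex);
                sig->slots.erase(slot.get());
            }
        }
    };

    EagerSignal() : m_Impl(new Impl) {}

    Connection connect(const Slot &s) {
        boost::shared_ptr<Slot> slot(new Slot(s));
        boost::mutex::scoped_lock lock(m_Impl->mutex);
        m_Impl->slots[slot.get()] = slot;
        return Connection(m_Impl, slot);
    }

    // Firing disconnects everything: take the slots under the lock, clear the
    // map, then invoke them without holding the mutex (so a slot may still be
    // called just after it has disconnected from another thread).
    void operator()() {
        SlotMap pending;
        {
            boost::mutex::scoped_lock lock(m_Impl->mutex);
            pending.swap(m_Impl->slots);
        }
        for (SlotMap::iterator it = pending.begin(); it != pending.end(); ++it)
            (*it->second)();
    }
};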
I stumbled upon the same problem and I really miss some kind of explicit cleanup in the API.
In my scenario I am unloading some plug-in DLLs, and I have to ensure there are no dangling objects (slots) which refer to code (vtables or the like) living in the unloaded DLL. Simply disconnecting slots didn't work because of the lazy deletion.
My first workaround was a signal wrapper which tweaks the disconnecting code a little bit:
template <typename Signature>
struct MySignal
{
// ...
template <typename Slot>
void disconnect (Slot&& s)
{
mPrivate.disconnect (std::forward<Slot> (s));
// connect/disconnect dummy slot to force cleanup of s
mPrivate.connect (&MySignal::foo);
mPrivate.disconnect (&MySignal::foo);
}
private:
// static dummy slot function matching Signature
// static ... foo (...);
private:
::boost::signals2::signal<Signature> mPrivate;
};
Unfortunately this didn't work because connect() only does some cleanup. It doesn't guarantee cleanup of all unconnected slots. Signal invocation on the other hand does a full cleanup but a dummy invocation would also be an unacceptable behavioral change (as already mentioned by others).
In the absence of alternatives I ended up patching the original signal class (edit: I would really appreciate a built-in solution; this patch was my last resort). My patch is around 10 lines of code and adds a public cleanup_connections() method to signal. My signal wrapper invokes the cleanup at the end of its disconnecting methods. This approach solved my problems and I haven't encountered any performance problems so far.
Edit: Here is my patch for Boost 1.53
Index: signals2/detail/signal_template.hpp
===================================================================
--- signals2/detail/signal_template.hpp
+++ signals2/detail/signal_template.hpp
@@ -220,6 +220,15 @@
typedef mpl::bool_<(is_convertible<T, group_type>::value)> is_group;
do_disconnect(slot, is_group());
}
+ void cleanup_connections () const
+ {
+ unique_lock<mutex_type> list_lock(_mutex);
+ if(_shared_state.unique() == false)
+ {
+ _shared_state.reset(new invocation_state(*_shared_state, _shared_state->connection_bodies()));
+ }
+ nolock_cleanup_connections_from(false, _shared_state->connection_bodies().begin());
+ }
// emit signal
result_type operator ()(BOOST_SIGNALS2_SIGNATURE_FULL_ARGS(BOOST_SIGNALS2_NUM_ARGS))
{
@@ -690,6 +699,10 @@
{
(*_pimpl).disconnect(slot);
}
+ void cleanup_connections ()
+ {
+ (*_pimpl).cleanup_connections();
+ }
result_type operator ()(BOOST_SIGNALS2_SIGNATURE_FULL_ARGS(BOOST_SIGNALS2_NUM_ARGS))
{
return (*_pimpl)(BOOST_SIGNALS2_SIGNATURE_ARG_NAMES(BOOST_SIGNALS2_NUM_ARGS));
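With the patch applied, the wrapper shown earlier no longer needs the dummy connect/disconnect trick; a sketch of its disconnecting method (same wrapper and member names as above):
template <typename Slot>
void disconnect (Slot&& s)
{
    mPrivate.disconnect (std::forward<Slot> (s));
    // added by the patch: releases stale slots immediately
    mPrivate.cleanup_connections ();
}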
Related
I tried to break down my problem into a small example. The real problem involves more complex communication:
I have a function that triggers the communication, connecting and sending messages to a server. If there is an answer, the Client class emits a signal containing the answer.
void communicate()
{
client.setUpMessage(); // the answer is emitted as a signal and
                       // processed in the slot 'reactToAnswer(...)'
client.sendMessage("HelloWorld");
}
void reactToAnswer(QString answer)
{
parser.parseAnswer(); // an error could occur
}
What if an error is detected in the slot in which the response is processed? I would like to stop the execution of the function communicate(). This means that the function client.sendMessage("HelloWorld") should no longer be executed.
In my naivety I tried to handle the problem with exceptions:
void communicate()
{
try
{
client.setUpMessage(); // the answer is emitted as a signal and
                       // processed in the slot 'reactToAnswer(...)'
client.sendMessage("HelloWorld");
}
catch(myException)
{
// do something
    }
}
void reactToAnswer(QString answer)
{
if( !parser.parseAnswer() )
{
throw myException();
}
}
This does not work: throwing an exception from a slot invoked by a Qt signal is undefined behaviour. The usual way is to reimplement QApplication::notify() or QCoreApplication::notify(), but this does not work for me. There is already a QApplication for the GUI, and I want the communication class (a QObject) to stand alone; everything should be handled within this class.
I hope I explained the problem comprehensibly. I am not set on using exceptions; other ways to stop the communication are also fine with me.
Thanks in advance!
I'm not sure that what you are trying to accomplish is a particularly good fit for the signals-and-slots paradigm... perhaps you want to go with just a regular old function call instead? i.e. something like:
void communicate()
{
QString theAnswer; // will be written to by setupMessage() unless error occurs
if (client.setUpMessage(theAnswer))
{
reactToAnswer(theAnswer);
client.sendMessage("HelloWorld");
}
}
The reason that signals-and-slots aren't a good fit is that signals are designed to be connectable to multiple slots at once, and the order in which the slot-methods are called is undefined -- so if a slot-method tries to interfere with the signal-emitting process in the way you describe, the behavior is rather unpredictable (because you don't know how many other connected slot-methods, if any, had already been called as part of the signal-emission, before your particular slot-method hit the brakes). And of course if you ever go to queued/asynchronous signals, then it won't work at all, because the slot will be called in a different context entirely, long after the signal-emitting function has already returned.
That said, if you absolutely must use signals-and-slots for this, you can have your slot emit its own error-has-occurred signal, which can be connected back to a slot in the original signal-emitting class. That slot could then set a boolean (or whatever), and your communicate() method could then check the state of that boolean (right after client.setUpMessage() has returned) to decide whether or not to continue executing or return early.
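A rough sketch of that flag-based variant; errorOccurred(), onErrorOccurred() and m_ErrorOccurred are invented names, and a direct (same-thread) connection is assumed so the flag is set before setUpMessage() returns:
void Handler::reactToAnswer(QString answer)
{
    if (!parser.parseAnswer())
        emit errorOccurred();           // report the error instead of throwing
}

void Communication::onErrorOccurred()   // connected to Handler::errorOccurred
{
    m_ErrorOccurred = true;
}

void Communication::communicate()
{
    m_ErrorOccurred = false;
    client.setUpMessage();              // direct connections run the slots now
    if (m_ErrorOccurred)
        return;                         // an error was reported, stop here
    client.sendMessage("HelloWorld");
}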
(I don't recommend that though -- signals-and-slots are there to make your program less complicated, and in this case I think using them instead of a regular function call actually makes your program more complicated, with no corresponding benefit)
I've read the documentation for QObject::connect (for Qt 5.4), but I have a question about the overload
QMetaObject::Connection QObject::connect(const QObject * sender, PointerToMemberFunction signal, const QObject * context, Functor functor, Qt::ConnectionType type = Qt::AutoConnection)
What exactly is the context parameter? What is its purpose? Can it be used to build connections in local event loops in threads?
Can someone provide examples of how/when to use this overload (when the context is not this)?
The context object is used in two scenarios.
Automatic disconnection
Let's first do a step back and ask ourselves: when does Qt break a connection?
With the usual connect(sender, signal, receiver, slot) connect, there are three possibilities:
When someone explicitly calls disconnect;
When sender is deleted;
When receiver is deleted.
Especially in cases #2 and #3, it just makes sense for Qt to behave that way (actually, it must behave that way, otherwise you'd have resource leaks and/or crashes).
Now: when using the connect overload taking a functor, when does Qt break a connection?
Note that without the context parameter, there's only one QObject involved: the sender. Hence the answer is:
When someone explicitly calls disconnect;
When sender is deleted.
Of course, there's no receiver object here! So only the sender automatically controls the lifetime of a connection.
Now, the problem is that the functor may capture some extra state that can become invalid, in which case it is desirable that the connection gets broken automatically. The typical case is with lambdas:
connect(sender, &Sender::signal,
[&object1, &object2](Param p)
{
use(object1, object2, p);
}
);
What happens if object1 or object2 get deleted? The connection will still be alive, therefore emitting the signal will still invoke the lambda, which in turn will access destroyed objects. And that's kind of bad...
For this reason, when it comes to functors, a connect overload taking a context object has been introduced. A connection established using that overload will also be disconnected automatically when the context object is deleted.
You're probably right that, a good number of times, the context will be the very same "main" object used in the functor, for instance
connect(button,
&QPushButton::clicked,
otherWidget,
[otherWidget]()
{
otherWidget->doThis(); otherWidget->doThat();
}
);
That's just a pattern in Qt -- when setting up connections for sub-objects, you typically connect them to slots on this object, hence this is probably the most common context. However, in general, you may also end up with something like
// manages the lifetime of the resources; they will never outlive this object
struct ResourceManager : QObject
{
Resource res1; // non-QObjects
OtherResource res2;
};
ResourceManager manager;
connect(sender, signal, &manager, [&manager](){ use(manager.res1, ...); });
// or, directly capture the resources, not the handle
So, you're capturing part of the state of manager.
In the most general case, when no context object is available, if there's a chance that the connection outlives the objects captured by the lambda, then you must capture them through weak pointers and lock those pointers inside the lambda before accessing them.
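A minimal sketch of that weak-pointer guard, using QPointer as the weak reference for a QObject (button and label are made-up objects):
QPointer<QLabel> guard(label);
QObject::connect(button, &QPushButton::clicked, [guard]() {
    if (guard)                       // the label may have been deleted meanwhile
        guard->setText("clicked");
});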
Running a functor in a specific thread/event loop
Very shortly: when specifying a context object, the functor will be run in the context's thread, just like normal connections employing a receiver object. Indeed, note that the connect overload that takes a context also takes a connection type (while the one without a context doesn't take one -- the connection is always direct).
Again, this is useful because QObject is not thread-safe, and you should only touch a QObject from the thread it lives in. If your functor accesses an object living in another thread, it must be executed in that thread; specifying that object as the context solves exactly that issue.
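A short sketch of that point; Sender::dataReady, worker and process() are made-up names, and the lambda runs in workerThread's event loop because worker lives there:
QThread workerThread;
QObject *worker = new QObject;
worker->moveToThread(&workerThread);
workerThread.start();

QObject::connect(sender, &Sender::dataReady, worker, [](const QByteArray &data) {
    process(data);   // executed in workerThread (queued, since the threads differ)
});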
When a QObject-derived object is being destructed, is it OK to emit a signal from its destructor? I tried it and it seems to work, but I'm not sure if it should be done.
For example, this code
class MyClass : public QObject {
    Q_OBJECT
signals:
    void mySignal(const QString &str);
public:
    QString myString;
    ~MyClass() { emit mySignal(myString); }
};
would pass a const reference to an object that might be out of scope by the time the connected slot is executed.
Emission is generally fine (QObject itself does it with the destroyed signal), including a case like yours. When the connection is direct, the string is still alive; when it is a QueuedConnection, the string is copied when the call is queued.
If you ask whether it is OK: yes, it will not cause any problem in itself.
If you ask whether it is a generally safe thing to do in Qt: definitely not. You have to be very mindful of what you do if you emit from a destructor, and have a good understanding of the Qt event system.
Remember how, when a QObject descendant is destructed, it disconnects all signals, so the destructed object does not get any more calls to its slots? Well, there is a catch: destruction order. The QObject destructor does that disconnect, and it is the LAST to run, meaning that during the destruction chain events might still arrive at the "half-dead" object, causing access violations when accessing virtual functions and members of already-destructed descendants. The possibility is present if you use the event system and any of these conditions are met:
In a multi-threaded environment, if the object is not destructed on its own thread.
In a multi-threaded environment, if the object's destruction chain triggers a processEvents() run on any code path.
In a multi-threaded environment, if any object on another thread has a direct connection to this object and fails to react to its destroyed signal via a direct connection.
In a single-threaded environment, when the destructor sends signals
that might come back to the object through a direct-connection chain.
I call this effect "life during death", and emitting signals or running any form of processEvents() (typically accidentally) in the destructor increases the chance of creating such an error.
Of course, if you can somehow guarantee that no present or future code will actually trigger any slots during destruction, it's perfectly safe to emit from the destructor, but it's very hard to give such a guarantee, and I'd advise simply avoiding it whenever possible.
The gripe I have with this otherwise good example: https://www.qt.io/blog/2006/12/04/threading-without-the-headache is that it is exchanging naked pointers and it is not using Qt::QueuedConnection.
Edit: here is the code snippet the above link shows (in case the link goes down before this post)
// create the producer and consumer and plug them together
Producer producer;
Consumer consumer;
producer.connect(&consumer, SIGNAL(consumed()), SLOT(produce()));
consumer.connect(&producer, SIGNAL(produced(QByteArray *)), SLOT(consume(QByteArray *)));
// they both get their own thread
QThread producerThread;
producer.moveToThread(&producerThread);
QThread consumerThread;
consumer.moveToThread(&consumerThread);
// go!
producerThread.start();
consumerThread.start();
If I used a unique_ptr in the producer, releasing it when I emit the produced signal, and directly put the naked pointer into another unique_ptr in the connected consume slot, it would be somewhat safer. Especially after some maintenance programmer has a go at the code ;)
void calculate()
{
std::unique_ptr<std::vector<int>> pi(new std::vector<int>());
...
produced(pi.release());
//produced is a signal, the connected slot destroys the object
//a slot must be connected or the objects are leaked
//if multiple slots are connected the objects are double deleted
}
void consume(std::vector<int> *piIn)
{
std::unique_ptr<std::vector<int>> pi(piIn);
...
}
this still has a few major problems:
I am not protecting against leaks when the slot is not connected
I am not protecting against double deletes if multiple slots were to be connected (should be a logic error on the part of the programmer if it happens, but I would like to detect it)
I don't know the inner workings of Qt well enough to be sure that nothing leaks in transit.
If I were to use a shared pointer to const, it would solve all my problems but be slower, and as far as I know I would have to register it with the meta-object system as described here: http://qt-project.org/doc/qt-4.8/qt.html#ConnectionType-enum. Is this a good idea?
Is there a better way of doing this that I'm not thinking of?
You shouldn't pass pointers in a signal while expecting a slot to destroy them, because the slot may not be available.
Pass a const reference instead, allowing the slot to copy the object. If you use Qt's container classes, this should not hinder performance, as Qt's container classes implement copy-on-write.
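A minimal sketch of that suggestion, switching the payload to an implicitly shared Qt container (QByteArray, as in the original blog snippet) and passing it by const reference; the connection is made explicitly queued to address the other gripe:
// Producer declares:  signals: void produced(const QByteArray &data);
void Producer::produce()
{
    QByteArray data;
    // ... fill data ...
    emit produced(data);    // a queued connection stores its own (cheap,
                            // copy-on-write) copy of the argument
}

// Consumer declares:  slots: void consume(const QByteArray &data);
void Consumer::consume(const QByteArray &data)
{
    // the slot works on its own copy: no delete, no double delete, and nothing
    // leaks if no slot (or more than one) happens to be connected
}

QObject::connect(&producer, SIGNAL(produced(QByteArray)),
                 &consumer, SLOT(consume(QByteArray)), Qt::QueuedConnection);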
Using boost::asio, I use async_accept to accept connections. This works well, but there is one issue and I need a suggestion on how to deal with it. Here is a typical async_accept:
Listener::Listener(int port)
: acceptor(io, ip::tcp::endpoint(ip::tcp::v4(), port))
, socket(io) {
start_accept();
}
void Listener::start_accept() {
Request *r = new Request(io);
acceptor.async_accept(r->socket(),
boost::bind(&Listener::handle_accept, this, r, placeholders::error));
}
Works fine, but there is an issue: the Request object is created with plain new, so it can "leak" memory. Not really a leak, as it only leaks at program exit, but I want to make valgrind happy.
Sure, there is an option: I can replace it with a shared_ptr and pass it to every event handler. This will work until the program stops: when the asio io_service shuts down, all handlers are destroyed and the Request will be freed. But this way I must always have an active asio event for the Request, or it will be destroyed! I think that is a direct way to a crash, so I don't like this variant either.
UPD Third variant: the Listener holds a list of shared_ptrs to the active connections. Looks great, and I prefer to use this unless a better way is found. The drawback: since this scheme allows "garbage collection" of idle connections, it is not safe: removing a connection pointer from the Listener will immediately destroy it, which can lead to a segfault when one of the connection's handlers is active in another thread. Using a mutex can't fix this, because then we would have to lock nearly everything.
Is there a way to make the acceptor work with connection management in a clean and safe way? I would be glad to hear any suggestions.
The typical recipe for avoiding memory leaks when using this library is to use a shared_ptr; the io_service documentation specifically mentions this:
Remarks
The destruction sequence described above permits programs to simplify
their resource management by using shared_ptr<>. Where an object's
lifetime is tied to the lifetime of a connection (or some other
sequence of asynchronous operations), a shared_ptr to the object would
be bound into the handlers for all asynchronous operations associated
with it. This works as follows:
When a single connection ends, all associated asynchronous operations
complete. The corresponding handler objects are destroyed, and all
shared_ptr references to the objects are destroyed. To shut down the
whole program, the io_service function stop() is called to terminate
any run() calls as soon as possible. The io_service destructor defined
above destroys all handlers, causing all shared_ptr references to all
connection objects to be destroyed.
For your scenario, change your Listener::handle_accept() method to take a boost::shared_ptr<Request> parameter. Your second concern
removing a connection pointer from the Listener will immediately destroy it,
which can lead to a segfault when one of the connection's handlers is active
in another thread. Using a mutex can't fix this, because then we would have to
lock nearly everything.
is mitigated by inheriting from the boost::enable_shared_from_this template in your classes:
class Listener : public boost::enable_shared_from_this<Listener>
{
...
};
then when you dispatch handlers, use shared_from_this() instead of this when binding to member functions of Listener.
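A rough sketch of that recipe, assuming Request derives from boost::enable_shared_from_this<Request>, exposes socket(), and has a start() method that chains its own asynchronous operations:
void Listener::start_accept() {
    boost::shared_ptr<Request> r(new Request(io));
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, this, r,
                    boost::asio::placeholders::error));
}

void Listener::handle_accept(boost::shared_ptr<Request> r,
                             const boost::system::error_code &error) {
    if (!error) {
        r->start();        // Request binds shared_from_this() into its own
                           // handlers, so it stays alive while work is pending
    }
    start_accept();        // keep accepting further connections
}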
If anyone is interested, I found another way. The Listener holds a list of shared_ptrs to the active connections. Ending/terminating a connection is done via io_service::post, which calls Listener::FinishConnection wrapped in an asio::strand. I usually wrap the Request's methods in a strand anyway - it's safer in terms of DDoS and/or thread safety. So calling FinishConnection from post, through the strand, protects against a segfault in another thread.
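A rough sketch of that scheme; m_Connections, m_Strand and FinishConnection are names assumed from the description:
void Listener::FinishConnection(boost::shared_ptr<Request> r) {
    // may drop the last shared_ptr, but only ever on the strand
    m_Connections.erase(r);
}

// from a Request handler (or anywhere else), schedule the removal:
io.post(m_Strand.wrap(boost::bind(&Listener::FinishConnection, this, r)));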
Not sure whether this is directly related to your issue, but I was also having similar memory leaks when using the Boost Asio libraries, in particular with the same acceptor object you mentioned. It turned out that I was not shutting down the service correctly; some connections would stay open and their corresponding objects would never be freed. Calling the following got rid of the leaks reported by Valgrind:
acceptor.close();
Hope this can be useful for someone!