I've been bashing my head for the last two nights trying to figure this out, with no positive results. Here is the thing: in Boost signals, every time I want to connect, say, a member function of one class to another class's signal, I have to save the resulting connection in a variable if I want to disconnect later. If later on I want to connect the same member function to some other class's signal (while the member function is still connected to the previous signal), I have to save this new connection in order to manage it too. My question is: is there any way to avoid this?
You shouldn't need to keep connection instances around; you should be able to disconnect from a signal by passing the original callable to signal::disconnect, as described in the Boost.Signals tutorial. With member functions the problem is rather that you cannot pass them directly to signal: you either wrap them in custom function objects, which are then also available as arguments to signal::disconnect, or you use Boost.Bind, which by itself wouldn't be very useful since you cannot conveniently declare its return type. That problem, however, can be solved by using Boost.Bind together with Boost.Function.
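As an illustrative sketch (using Boost.Signals2 here; the Logger and LogSlot types are made up), an equality-comparable function object can be passed to both connect and disconnect, so no connection object has to be stored:

#include <string>
#include <boost/signals2.hpp>

struct Logger {
    void log(const std::string& msg) { /* ... */ }
};

// Hypothetical comparable wrapper around the member call; the operator== is what
// allows signal::disconnect(slot) to find and remove the matching connection.
struct LogSlot {
    Logger* target;
    void operator()(const std::string& msg) const { target->log(msg); }
    bool operator==(const LogSlot& other) const { return target == other.target; }
};

int main() {
    boost::signals2::signal<void(const std::string&)> sig;
    Logger logger;

    sig.connect(LogSlot{&logger});     // no connection object kept
    sig("hello");
    sig.disconnect(LogSlot{&logger});  // disconnect by passing an equal slot
}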
I hope I answered your question.
Scoped Connections
Alternatively you could assign the returned connection to a variable of type signal::scoped_connection. It's a type of connection which automatically disconnects on destruction or reassignment. This effectively limits a signal-slot connection lifetime to a particular scope.
For example when you reassign myConnection, the previous connection is automatically disconnected:
scoped_connection myConnection = someObject.Signal.connect(MyHandler);
myConnection = totallyDifferentObject.Signal.connect(MyHandler);
Automatic Connection Management
In our project, we usually declare member variables as scoped connections, so their scope matches the lifetime of the particular object instance they belong to. This is a convenient way to automatically disconnect any signals an object instance is connected to when it is destructed. Without scoped connections you have to disconnect manually in the destructor. If you neglect to disconnect instances when they're destroyed, you'll end up invoking invalid signal handlers, which will crash your program.
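For illustration, a minimal sketch of that member-variable pattern (using Boost.Signals2; the Observer class and its members are made up):

#include <boost/signals2.hpp>

class Observer {
public:
    explicit Observer(boost::signals2::signal<void()>& sig)
        : connection_(sig.connect([this] { onNotify(); })) {}
    // no manual disconnect needed: destroying Observer destroys connection_,
    // which breaks the signal-slot connection automatically

private:
    void onNotify() { /* react to the signal */ }

    boost::signals2::scoped_connection connection_;
};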
Related
In early Qt 5 versions I had to disconnect lambdas from signals as shown here: "Disconnecting lambda functions in Qt5".
Here I found the following statement:
There is no automatic disconnection when the 'receiver' is destroyed because it's a functor with no QObject. However, since 5.2 there is an overload which adds a "context object". When that object is destroyed, the connection is broken (the context is also used for the thread affinity: the lambda will be called in the thread of the event loop of the object used as context).
Does this mean I no longer have to disconnect lambdas with Qt 5.2 or later?
Do I have to pass that context or is that done automatically?
Qt automatically removes all connections to or from an object when it is destroyed through QObject::~QObject(). So if you create a connection to a lambda, when the sending object is deleted, the connection is automatically cleaned up. You do not, and have not previously needed to, disconnect it yourself.
The context object that you are referring to is used when you require more fine grained control over the lifetime of the connection. This is used when you want the connection to be removed when another object is destroyed (the context object). This makes it easier to remove the connection if you need to do so before the sender is destroyed.
In summary: You do not need to manually disconnect lambdas, they are cleaned up automatically. You can use context objects to remove the connection before the sender object is destroyed.
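As a small, self-contained sketch of that context-object behaviour (the objects here are purely illustrative):

#include <QCoreApplication>
#include <QTimer>
#include <QDebug>

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);

    QTimer timer;
    timer.start(100);

    {
        QObject context;  // stands in for whatever receiver-like object you have

        // Qt >= 5.2: the lambda is disconnected automatically when 'context'
        // is destroyed, even though the lambda itself is not a QObject slot.
        QObject::connect(&timer, &QTimer::timeout, &context, []() {
            qDebug() << "tick";
        });
    }   // 'context' goes out of scope here, so the connection above is already gone

    QTimer::singleShot(500, &app, [&app]() { app.quit(); });
    return app.exec();  // no "tick" is ever printed
}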
I've read the documentation for QObject::connect (for Qt 5.4), but I have a question about the overload
QMetaObject::Connection QObject::connect(const QObject * sender, PointerToMemberFunction signal, const QObject * context, Functor functor, Qt::ConnectionType type = Qt::AutoConnection)
What exactly is the context parameter? What is its purpose? Can it be used to build connections in local event loops in threads?
Can someone provide examples of how/when to use this overload (when the context is not this)?
The context object is used in two scenarios.
Automatic disconnection
Let's first take a step back and ask ourselves: when does Qt break a connection?
With the usual connect(sender, signal, receiver, slot) overload, there are three possibilities:
When someone explicitly calls disconnect;
When sender is deleted;
When receiver is deleted.
Especially in cases #2 and #3, it just makes sense for Qt to behave that way (actually, it must behave that way, otherwise you'd have resource leaks and/or crashes).
Now: when using the connect overload taking a functor, when does Qt break a connection?
Note that without the context parameter, there's only one QObject involved: the sender. Hence the answer is:
When someone explicitly calls disconnect;
When sender is deleted.
Of course, there's no receiver object here! So only the sender automatically controls the lifetime of a connection.
Now, the problem is that the functor may capture some extra state that can become invalid, in which case it is desirable that the connection gets broken automatically. The typical case is with lambdas:
connect(sender, &Sender::signal,
        [&object1, &object2](Param p)
        {
            use(object1, object2, p);
        }
);
What happens if object1 or object2 get deleted? The connection will still be alive, therefore emitting the signal will still invoke the lambda, which in turn will access destroyed objects. And that's kind of bad...
For this reason, when it comes to functors, a connect overload taking a context object has been introduced. A connection established using that overload will also be disconnected automatically when the context object is deleted.
You're probably right when you say that a good number of times you're going to see the very same "main" object used both as the context and inside the functor, for instance
connect(button,
        &QPushButton::clicked,
        otherWidget,
        [otherWidget]()
        {
            otherWidget->doThis();
            otherWidget->doThat();
        }
);
That's just a pattern in Qt -- when setting up connections for sub-objects, you typically connect them to slots on this object, hence this is probably the most common context. However, in general, you may also end up with something like
// manages the lifetime of the resources; they will never outlive this object
struct ResourceManager : QObject
{
    Resource res1; // non-QObjects
    OtherResource res2;
};

ResourceManager manager;

connect(sender, signal, &manager, [&manager]() { use(manager.res1, ...); });
// or, directly capture the resources, not the handle
So, you're capturing part of the state of manager.
In the most general case, when no context object is available, if there's a chance that the objects captured by the lambda do not outlive the connection, then you must capture them via weak pointers, and try to lock those pointers inside the lambda before accessing them.
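A brief sketch of that weak-pointer fallback, using QPointer as the weak pointer for QObjects (the function and widget names are made up):

#include <QPointer>
#include <QPushButton>
#include <QLabel>

void wire(QPushButton* button, QLabel* label)
{
    QPointer<QLabel> weakLabel(label);   // weak, auto-nulled when the QLabel dies
    QObject::connect(button, &QPushButton::clicked, [weakLabel]() {
        if (weakLabel)                   // still alive?
            weakLabel->setText("clicked");
    });
}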
Running a functor in a specific thread/event loop
Very shortly: when specifying a context object, the functor will be run in the context's thread, just like normal connections employing a receiver object. Indeed, note that the connect overload that takes a context also takes a connection type (while the one without a context doesn't take one: the connection is always direct).
Again, this is useful because QObject is not reentrant or thread safe, and you must use a QObject only in the thread it lives in. If your functor accesses an object living in another thread, it must be executed in that thread; specifying that object as the context solves the issue.
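Here is a small sketch of that thread-affinity use of the context parameter (all object names are illustrative): the timer fires in the main thread, but because the context object lives in a worker thread, the lambda is queued and runs in that thread's event loop.

#include <QCoreApplication>
#include <QThread>
#include <QTimer>
#include <QDebug>

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);

    QThread thread;
    thread.start();

    QObject* worker = new QObject;   // any QObject can act as the context
    worker->moveToThread(&thread);
    QObject::connect(&thread, &QThread::finished, worker, &QObject::deleteLater);

    QTimer timer;                    // lives in (and fires from) the main thread
    QObject::connect(&timer, &QTimer::timeout, worker, []() {
        // because 'worker' lives in 'thread', this lambda runs in that thread,
        // not in the thread that emitted the signal
        qDebug() << "handled in" << QThread::currentThread();
    });
    timer.start(100);

    // stop everything after half a second
    QTimer::singleShot(500, &app, [&]() {
        thread.quit();
        thread.wait();
        app.quit();
    });
    return app.exec();
}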
I have been working with boost::asio for a while now, and while I do understand the concept of asynchronous calls, I am still somewhat befuddled by the memory management implications. In normal synchronous code the object lifetime is clear. But consider a scenario similar to the case of the daytime server:
There might be multiple active connections which have been accepted. Each connection now sends and receives some data from a socket, does some work internally and then decides to close the connection. It is safe to assume that the data related to the connection needs to stay accessible during the processing but the memory can be freed as soon as the connection is closed. But how can I implement the creation/destruction of the data correctly? Assuming that I use classes and bind the callback to member functions, should I create a class using new and call delete this; as soon as the processing is done or is there a better way?
But how can I implement the creation/destruction of the data correctly?
Use shared_ptr.
Assuming that I use classes and bind the callback to member functions, should I create a class using new and call delete this; as soon as the processing is done or is there a better way?
Make your class inherit from enable_shared_from_this, create instances of your classes using make_shared, and when you bind your callbacks bind them to shared_from_this() instead of this. The destruction of your instances will be done automatically when they have gone out of the last scope where they are needed.
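A condensed sketch of that pattern (the Connection class and its members are made up, loosely following the Asio tutorial style): every pending handler holds a shared_ptr obtained from shared_from_this(), so the object stays alive exactly as long as it has outstanding operations and is destroyed automatically afterwards.

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

using boost::asio::ip::tcp;

class Connection : public boost::enable_shared_from_this<Connection>
{
public:
    typedef boost::shared_ptr<Connection> pointer;

    static pointer create(boost::asio::io_service& io)
    {
        return pointer(new Connection(io));
    }

    tcp::socket& socket() { return socket_; }

    void start()
    {
        socket_.async_read_some(boost::asio::buffer(data_),
            boost::bind(&Connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

private:
    explicit Connection(boost::asio::io_service& io) : socket_(io) {}

    void handle_read(const boost::system::error_code& ec, std::size_t /*bytes*/)
    {
        if (!ec) {
            // process data_ and possibly issue the next async operation here;
            // each new operation binds shared_from_this() again
        }
        // no explicit delete: once no handler references this Connection anymore,
        // the shared_ptr count drops to zero and the object is destroyed
    }

    tcp::socket socket_;
    boost::array<char, 1024> data_;
};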
Using boost::asio I use async_accept to accept connections. This works well, but there is one issue and I need a suggestion for how to deal with it. Using the typical async_accept:
Listener::Listener(int port)
    : acceptor(io, ip::tcp::endpoint(ip::tcp::v4(), port))
    , socket(io) {
    start_accept();
}

void Listener::start_accept() {
    Request *r = new Request(io);
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, this, r, placeholders::error));
}
It works fine, but there is an issue: the Request object is created with a plain new, so it can "leak" memory. Not really a leak, since it only leaks at program stop, but I want to make valgrind happy.
Sure, there is an option: I can replace it with a shared_ptr and pass it to every event handler. This will work until program stop, when the asio io_service is shutting down: all objects will be destroyed and the Request will be freed. But this way I must always have an active asio event for the Request, or it will be destroyed! I think that is a direct way to crash, so I don't like this variant either.
UPD Third variant: the Listener holds a list of shared_ptrs to the active connections. Looks great, and I prefer to use this unless some better way is found. The drawback: since this scheme allows doing "garbage collection" on idle connections, it is not safe: removing the connection pointer from the Listener will immediately destroy it, which can lead to a segfault when one of the connection's handlers is active in another thread. Using a mutex can't fix this, because in that case we would have to lock nearly everything.
Is there a way to make the acceptor work with connection management in a clean and safe way? I will be glad to hear any suggestions.
The typical recipe for avoiding memory leaks when using this library is to use a shared_ptr; the io_service documentation specifically mentions this:
Remarks

The destruction sequence described above permits programs to simplify their resource management by using shared_ptr<>. Where an object's lifetime is tied to the lifetime of a connection (or some other sequence of asynchronous operations), a shared_ptr to the object would be bound into the handlers for all asynchronous operations associated with it. This works as follows:

When a single connection ends, all associated asynchronous operations complete. The corresponding handler objects are destroyed, and all shared_ptr references to the objects are destroyed. To shut down the whole program, the io_service function stop() is called to terminate any run() calls as soon as possible. The io_service destructor defined above destroys all handlers, causing all shared_ptr references to all connection objects to be destroyed.
For your scenario, change your Listener::handle_accept() method to take a boost::shared_ptr<Request> parameter. Your second concern
removing the connection pointer from the Listener will immediately destroy it, which can lead to a segfault when one of the connection's handlers is active in another thread. Using a mutex can't fix this, because in that case we would have to lock nearly everything.
is mitigated by inheriting from the boost::enable_shared_from_this template in your classes:
class Listener : public boost::enable_shared_from_this<Listener>
{
...
};
then when you dispatch handlers, use shared_from_this() instead of this when binding to member functions of Listener.
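Put together, the suggested change might look roughly like this (Request's constructor and socket() are taken from your code, the start() method is a hypothetical entry point, and Listener itself has to be owned by a shared_ptr for shared_from_this() to work):

void Listener::start_accept() {
    boost::shared_ptr<Request> r(new Request(io));
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, shared_from_this(),
                    r, boost::asio::placeholders::error));
}

void Listener::handle_accept(boost::shared_ptr<Request> r,
                             const boost::system::error_code& error) {
    if (!error) {
        r->start();   // the Request keeps itself alive through its own handlers
    }
    start_accept();   // 'r' goes out of scope here; no delete, no leak at shutdown
}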
If anyone is interested, I found another way. The Listener holds a list of shared_ptrs to the active connections. Ending/terminating a connection is done via io_service::post, which calls Listener::FinishConnection wrapped with an asio::strand. I usually wrap the Request's methods with a strand anyway, since it is safer in terms of DDoS and/or thread safety. So calling FinishConnection from post, through the strand, protects against a segfault in another thread.
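A rough sketch of how that could be structured (member names are made up; Request is the connection class from the question): removal is posted through the strand, so it cannot run while another handler serialized on the same strand is touching the Request.

#include <set>
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

class Request;   // the connection class from the question

class Listener : public boost::enable_shared_from_this<Listener>
{
public:
    explicit Listener(boost::asio::io_service& io) : io_(io), strand_(io) {}

    // may be called from any thread / any handler
    void RequestFinished(const boost::shared_ptr<Request>& r)
    {
        io_.post(strand_.wrap(
            boost::bind(&Listener::FinishConnection, shared_from_this(), r)));
    }

private:
    void FinishConnection(boost::shared_ptr<Request> r)
    {
        connections_.erase(r);   // may release the last shared_ptr to the Request
    }

    boost::asio::io_service& io_;
    boost::asio::io_service::strand strand_;
    std::set<boost::shared_ptr<Request> > connections_;
};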
Not sure whether this is directly related to your issue, but I was also having similar memory leaks using the Boost Asio libraries, in particular with the same acceptor object you mentioned. It turned out that I was not shutting down the service correctly; some connections would stay open and their corresponding objects would never be freed from memory. Calling the following got rid of the leaks reported by Valgrind:
acceptor.close();
Hope this can be useful for someone!
Is it possible to send signals to a slot without connecting them?
There is a class that has a SLOT which shows some logs.
For now we don't have any information about how many classes will be used to send signals to this log slot, and we won't be able to make their objects known to each other, but every object might send a logging request.
You can call an object's (public) slot just like you call a normal member function. A connection is not necessary.
Besides, you don't need to know in advance who will connect to a given slot. The connection can happen outside your class. (For public slots at least.)
Yes you may, in a few ways.
You may call the slot like any other C++ function (if it is public). Slots are still C++ functions. The downside is that the caller needs to know the receiver's interface at compile time.
logger.log("The frobnitz could not be quuxed");
You may invoke the slot via QMetaObject::invokeMethod. With this method, the caller doesn't need any compile-time info about the recipient other than the fact that it is a QObject*.
if (!QMetaObject::invokeMethod(logger, "log", Q_ARG(QString, "The frobnitz could not be quuxed"))) {
    qWarning("Internal error: logging failed (did someone change the logger API?)");
}
I think there is no such possibility. But maybe you could just make the log() method static, so you would be able to call it without referencing a logger object?