Is there a way to check if a QObject-pointer is still valid in Qt? - c++

I have a scenario where an anonymous QObject starts an asynchronous operation by emitting a signal. The receiving slot stores the QObject pointer and sets a property of this object later. The object could be gone in the meantime.
So, is there a safe way to check whether this pointer is still valid?
P.S.:
I'm aware of the QObject::destroyed signal, which I could connect to the object that is supposed to call setProperty on that pointer. But I wonder if there is an easier way.

This is a great question, but it is the wrong question.
Is there a way to check if the pointer is valid? Yes. QPointer is designed specifically to do that.
But the answer to this question is useless if the object lives in another thread! You only know whether it's valid at a single point in time - the answer is not valid immediately afterwards.
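For the single-threaded case, here is a minimal sketch of that check (the object and property names are placeholders, not from the question's code):

QPointer<QObject> guard(target);        // 'target' is some QObject* we don't own

// ... later, in the same thread ...
if (guard)                              // non-null only while the object is still alive
    guard->setProperty("answer", 42);
else
    qDebug() << "object already destroyed, skipping the update";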
Absent other mechanisms, it is useless to hold a QPointer to an object in a different thread - it won't help you. Why? Look at this scenario:
Thread A: 1. QPointer returns a non-zero pointer.
Thread B: 2. Deletes the object.
Thread A: 3. Uses the now-dangling pointer.
I'm aware of the QObject::destroyed signal, which I could connect to the object that is supposed to call setProperty on that pointer. But I wonder if there is an easier way.
The destroyed signal is useless when delivered using queued connections - whether within a thread or across thread boundaries. It is meant to be used within one thread, with a direct connection.
By the time the target thread's event loop picks up the slot call, the originating object is long gone. Worse - this is always the case in a single-threaded application. The reason for the problem is the same as with the QPointer: the destroyed signal indicates that the object is no longer valid, but it doesn't mean that it was valid before you received the signal unless you're using a direct connection (and are in the same thread) or you're using a blocking queued connection.
Using the blocking queued connection, the requesting object's thread will block until the async thread finishes reacting to object's deletion. While this certainly "works", it forces the two threads to synchronize on a resource with sparse availability - the front spot in the async thread's event loop. Yes, this is literally what you compete for - a single spot in a queue that can be arbitrarily long. While this might be OK for debugging, it has no place in production code unless it's OK to block either thread to synchronize.
You are trying to work very hard around the fact that you're passing a QObject pointer between threads, and the object's lifetime, from the point of view of the receiving thread, is uncontrolled. That's your problem. You'd solve everything by not passing a raw object pointer. Instead, you could pass a shared smart pointer, or use signal-slot connections: those vanish whenever either end of the connection is destroyed. That's what you'd want.
In fact, Qt's own design patterns hint at this. QNetworkReply is a QObject not only because it is a QIODevice, but because it must be to support direct indications of finished requests across thread boundaries. In light of a multitude of requests being processed, connecting to QNetworkAccessManager::finished(QNetworkReply*) can be a premature pessimization. Your object gets notified of a possibly very large number of replies, but it really is only interested in one or very few of them. Thus there must be a way to notify the requester directly that its one and only request is done - and that's what QNetworkReply::finished is for.
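For illustration, a sketch of that per-reply pattern (nam, requester and handleReply are assumed names, not part of the answer's code):

QNetworkReply *reply = nam->get(QNetworkRequest(QUrl("http://example.com")));
QObject::connect(reply, &QNetworkReply::finished, requester, [reply, requester]{
    // Runs in requester's thread; the connection disappears if either end is destroyed.
    requester->handleReply(reply->readAll());   // handleReply is hypothetical
    reply->deleteLater();
});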
So, a simple way to proceed is to make the Request a QObject with a done signal. When you prepare the request, connect the requesting object to that signal. You can also connect a functor, but make sure that the functor executes in the requesting object's context:
// CORRECT
connect(request, &Request::done, requester, [...](...){...});
// WRONG
connect(request, &Request::done, [...](...){...});
The below demonstrates how it could be put together. The requests' lifetimes are managed through the use of a shared (reference-counting) smart pointer. This makes life rather easy. We check that no requests exist at the time main returns.
#include <QtCore>

class Request;
typedef QSharedPointer<Request> RequestPtr;

class Request : public QObject {
   Q_OBJECT
public:
   static QAtomicInt m_count;
   Request() { m_count.ref(); }
   ~Request() { m_count.deref(); }
   int taxIncrease;
   Q_SIGNAL void done(RequestPtr);
};
Q_DECLARE_METATYPE(RequestPtr)
QAtomicInt Request::m_count(0);

class Requester : public QObject {
   Q_OBJECT
   Q_PROPERTY(int catTax READ catTax WRITE setCatTax NOTIFY catTaxChanged)
   int m_catTax;
public:
   Requester(QObject * parent = 0) : QObject(parent), m_catTax(0) {}
   Q_SLOT int catTax() const { return m_catTax; }
   Q_SLOT void setCatTax(int t) {
      if (t != m_catTax) {
         m_catTax = t;
         emit catTaxChanged(t);
      }
   }
   Q_SIGNAL void catTaxChanged(int);
   Q_SIGNAL void hasRequest(RequestPtr);
   void sendNewRequest() {
      RequestPtr req(new Request);
      req->taxIncrease = 5;
      connect(req.data(), &Request::done, this, [this, req]{
         setCatTax(catTax() + req->taxIncrease);
         qDebug() << objectName() << "has cat tax" << catTax();
         QCoreApplication::quit();
      });
      emit hasRequest(req);
   }
};

class Processor : public QObject {
   Q_OBJECT
public:
   Q_SLOT void process(RequestPtr req) {
      QThread::msleep(50); // Pretend to do some work.
      req->taxIncrease--;  // Figure we don't need so many cats after all...
      emit req->done(req);
      emit done(req);
   }
   Q_SIGNAL void done(RequestPtr);
};

struct Thread : public QThread { ~Thread() { quit(); wait(); } };

int main(int argc, char ** argv) {
   struct C { ~C() { Q_ASSERT(Request::m_count == 0); } } check;
   QCoreApplication app(argc, argv);
   qRegisterMetaType<RequestPtr>();
   Processor processor;
   Thread thread;
   processor.moveToThread(&thread);
   thread.start();
   Requester requester1;
   requester1.setObjectName("requester1");
   QObject::connect(&requester1, &Requester::hasRequest, &processor, &Processor::process);
   requester1.sendNewRequest();
   {
      Requester requester2;
      requester2.setObjectName("requester2");
      QObject::connect(&requester2, &Requester::hasRequest, &processor, &Processor::process);
      requester2.sendNewRequest();
   } // requester2 is destructed here
   return app.exec();
}
#include "main.moc"

It is impossible to check whether that pointer is still valid. So the only safe way here is to inform the receiving part about the deletion of that QObject (and, in the multithreaded case, to check and lock the object before accessing it, to be sure it will not be deleted in another thread right after the check). The reason is simple:
Theoretically it is possible that after the original object is deleted, the system places another object at that memory address (so the pointer will look valid).
Or the object may be deleted but its memory not yet overwritten by anything else, so it will still look valid (while in fact it is invalid).
So there is no way to detect whether the pointer is valid if all you have is the pointer. You need something more.
Also, it is not safe to just send a signal about the object's deletion in the multithreaded case (or to use QObject::destroyed as you suggested). Why? Because things may happen in this order:
the QObject sends the message "I am going to be deleted",
the QObject is deleted,
your receiving code uses that pointer (and this is wrong and dangerous),
your receiving code receives the message "I am going to be deleted" (too late).
So, in the single-threaded case you need QPointer. Otherwise you need something like QSharedPointer or QWeakPointer (both of them are thread-safe) - see Kuba Ober's answer.
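A minimal sketch of the shared/weak pointer approach (the type and member names are illustrative only):

QSharedPointer<Worker> owner(new Worker);   // the owning side keeps a strong reference
QWeakPointer<Worker> weak = owner;          // other threads keep only weak references

// In the other thread, promote before every use:
if (QSharedPointer<Worker> strong = weak.toStrongRef()) {
    strong->doWork();                       // the object is kept alive while 'strong' exists
} else {
    // the object has already been destroyed
}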

Related

Recovering from error in Qt

I'm implementing a system that uses 3 threads (one is GUI, one is TCP client for data acquisition and one analysis thread for calculations).
I'm having a hard time handling an exception for either one. The case that I'm trying to solve now is what happens if some calculation goes wrong, and I need to 'freeze' the system. The problem is that in some scenarios, I have data waiting in the analysis thread's event loop. How can I clear this queue safely, without handling all the events (as I said, something went wrong so I don't want any more calculations done).
Is there a way to clear an event loop for a specific thread? When can I delete the objects safely?
Thanks
Your question is somewhat low on details, but I assume you're using a QThread and embedding a QEventLoop in it?
You can call QEventLoop::exit(-1), which is thread safe.
The value passed to exit is the exit status, and will be the value returned from QEventLoop::exec(). I've chosen -1, which is typically used to denote an error condition.
You can then check the return code from exec(), and act accordingly.
class AnalysisThread : public QThread
{
    Q_OBJECT
public:
    void run() override
    {
        int res = _loop.exec();
        if (res == -1)
        {
            // delete objects
        }
    }
    void exit()
    {
        _loop.exit(-1);
    }
private:
    QEventLoop _loop;
};
Elsewhere, in your exception handler
try
{
    // ...
}
catch (const CalculationError& e)
{
    _analysis_thread.exit();
}

How to access QWidget from other threads

I have
struct MyWidget : QWidget {
    // non-GUI related stuff:
    int data;
    int doSth();
};
I need to access a MyWidget instance from another thread (i.e. not the main thread). Is there any way to do that safely? I understand that I cannot access GUI-related functions, because some backends (e.g. MacOSX/Cocoa) don't support that. However, I only need to access data or doSth() in this example. But from what I understand, there is simply no way to guarantee the lifetime of the object - i.e. if the parent window with that widget closes, the MyWidget instance gets deleted.
Or is there a way to guarantee the lifetime? I guess QSharedPointer doesn't work because the QWidget does its lifetime handling internally, depending on the parent widget. QPointer of course also doesn't help because it is only weak and there is no locking mechanism.
My current workaround is basically:
int widget_doSth(QPointer<MyWidget> w) {
    int ret = -1;
    execInMainThread_sync([&]() {
        if (w)
            ret = w->doSth();
    });
    return ret;
}
(execInMainThread_sync works by using QMetaMethod::invoke to call a method in the main thread.)
However, that workaround doesn't work anymore for some specific reason (I will explain later why, but that doesn't matter here). Basically, I am not able to execute something in the main thread at that point (for some complicated deadlock reasons).
Another workaround I'm currently thinking about is to add a global mutex which will guard the MyWidget destructor, and in the destructor, I'm cleaning up other weak references to the MyWidget. Then, elsewhere, when I need to ensure the lifetime, I just lock that mutex.
The reason why my current workaround doesn't work anymore (and that is still a simplified version of the real situation):
In MyWidget, the data is actually a PyObject*.
In the main thread, some Python code gets called. (It's not really possible to avoid any Python code calls at all in the main thread in my app.) That Python code ends up doing some import, which is guarded by some Python-import-mutex (Python doesn't allow parallel imports.)
In some other Python thread, some other import is called. That import now locks the Python-import-mutex. And while it's doing its thing, it does some GC cleanup at some point. That GC cleanup calls the traverse function of some object which holds that MyWidget. Thus, it must access the MyWidget. However, execInMainThread_sync (or equivalently working solutions) will deadlock because the main thread currently waits for the Python-import-lock.
Note: The Python global interpreter lock is not really the problem. Of course it gets unlocked before any execInMainThread_sync call. However, I cannot really check for any other potential Python/whatever locks. Esp. I am not allowed to just unlock the Python-import-lock -- it's there for a reason.
One solution you might think of is to really just avoid any Python code at all in the main thread. But that has a lot of drawbacks, e.g. it will be slow, complicated and ugly (the GUI basically only shows data from Python, so there need to be a huge proxy/wrapper around it all). And I think I still need to wait at some points for the Python data, so I just introduce the possible deadlock-situation at some other point.
Also, all the problems would just go away if I could access MyWidget safely from another thread. Introducing a global mutex is the much cleaner and shorter solution, compared to above.
You can use the signal/slot mechanism, but it can be tedious if the number of GUI controls is large. I'd recommend a single signal and slot to control the GUI. Send over a struct with all the info needed for updating the GUI.
void SomeWidget::updateGUISlot(struct Info const& info)
{
    firstControl->setText(info.text);
    secondControl->setValue(info.value);
}
You don't need to worry about emitting signals if the recipient is deleted; that detail is handled by Qt. Alternatively, you can wait for your threads to exit after exiting the GUI thread's event loop. You'll need to register the struct with Qt.
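Registering the struct could look roughly like this (Info with text/value members is the hypothetical struct from the snippet above):

struct Info {
    QString text;
    int value;
};
Q_DECLARE_METATYPE(Info)            // at namespace scope, after the struct definition

// Early at runtime (e.g. in main), so queued connections can copy it across threads:
qRegisterMetaType<Info>("Info");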
EDIT:
From what I've read of your extended question, your problems are related to communication between threads. Try pipes, (POSIX) message queues, sockets or POSIX signals instead of Qt signals for inter-thread communication.
Personally, I don't like designs where GUI stuff (i.e. a widget) carries non-GUI related stuff... I think you should separate the two from each other. Qt needs to keep GUI objects on the main thread, but anything else (QObject-derived) can be moved to a thread (QObject::moveToThread).
It seems that what you're explaining has nothing at all to do with widgets, Qt, or anything like that. It's a problem inherent to Python and its threading and the lock structure that doesn't make sense if you're multithreading. Python basically presumes that any object can be accessed from any thread. You'd have the same problem using any other toolkit. There may be a way of telling Python not to do that - I don't know enough about the cpython implementation's details, but that's where you'd need to look.
That GC cleanup calls the traverse function of some object which holds that MyWidget
That's your problem. You must ensure that such cross-thread GC cleanup can't happen. I have no idea how you'd go about it :(
My worry is that you've quietly and subtly shot yourself in the foot by using Python, in spite of everyone claiming that only C/C++ lets you do it at such a grand scale.
My solution:
struct MyWidget : QWidget {
    // some non-GUI related stuff:
    int someData;
    virtual void doSth();

    // We reset that in the destructor. When you hold its mutex-lock,
    // the ref is either NULL or a valid pointer to this MyWidget.
    struct LockedRef {
        boost::mutex mutex;
        MyWidget* ptr;
        LockedRef(MyWidget& w) : ptr(&w) {}
        void reset() {
            boost::mutex::scoped_lock lock(mutex);
            ptr = NULL;
        }
    };
    boost::shared_ptr<LockedRef> selfRef;

    struct WeakRef;
    struct ScopedRef {
        boost::shared_ptr<LockedRef> _ref;
        MyWidget* ptr;
        bool lock;
        ScopedRef(WeakRef& ref);
        ~ScopedRef();
        operator bool() { return ptr; }
        MyWidget* operator->() { return ptr; }
    };

    struct WeakRef {
        typedef boost::weak_ptr<LockedRef> Ref;
        Ref ref;
        WeakRef() {}
        WeakRef(MyWidget& w) { ref = w.selfRef; }
        ScopedRef scoped() { return ScopedRef(*this); }
    };

    MyWidget();
    ~MyWidget();
};

MyWidget::ScopedRef::ScopedRef(WeakRef& ref) : ptr(NULL), lock(true) {
    _ref = ref.ref.lock();
    if (_ref) {
        lock = (QThread::currentThread() == qApp->thread());
        if (lock) _ref->mutex.lock();
        ptr = _ref->ptr;
    }
}

MyWidget::ScopedRef::~ScopedRef() {
    if (_ref && lock)
        _ref->mutex.unlock();
}

MyWidget::~MyWidget() {
    selfRef->reset();
    selfRef.reset();
}

MyWidget::MyWidget() {
    selfRef = boost::shared_ptr<LockedRef>(new LockedRef(*this));
}
Now, everywhere I need to pass around a MyWidget pointer, I'm using:
MyWidget::WeakRef widget;
And I can use it from another thread like this:
MyWidget::ScopedRef widgetRef(widget);
if (widgetRef)
    widgetRef->doSth();
This is safe. As long as ScopedRef exists, MyWidget cannot be deleted. It will block in its destructor. Or it is already deleted and ScopedRef::ptr == NULL.

How and why one would use Boost signals2?

Learning c++ and trying to get familiar with some patterns. The signals2 doc clearly has a vast array of things I can do with slots and signals. What I don't understand is what types of applications (use cases) I should use it for.
I'm thinking along the lines of a state machine dispatching change events. Coming from a dynamically typed background (C#,Java etc) you'd use an event dispatcher or a static ref or a callback.
Are there difficulties in c++ with using cross-class callbacks? Is that essentially why signals2 exists?
One of the example cases is a document/view. How is this pattern better suited than, say, using a vector of functions and calling each one in a loop, or a lambda that calls state changes in registered listening class instances?
class Document
{
public:
    typedef boost::signals2::signal<void ()> signal_t;

public:
    Document()
    {}

    /* Connect a slot to the signal which will be emitted whenever
       text is appended to the document. */
    boost::signals2::connection connect(const signal_t::slot_type &subscriber)
    {
        return m_sig.connect(subscriber);
    }

    void append(const char* s)
    {
        m_text += s;
        m_sig();
    }

    const std::string& getText() const
    {
        return m_text;
    }

private:
    signal_t m_sig;
    std::string m_text;
};
and
class TextView
{
public:
    TextView(Document& doc): m_document(doc)
    {
        m_connection = m_document.connect(boost::bind(&TextView::refresh, this));
    }

    ~TextView()
    {
        m_connection.disconnect();
    }

    void refresh() const
    {
        std::cout << "TextView: " << m_document.getText() << std::endl;
    }

private:
    Document& m_document;
    boost::signals2::connection m_connection;
};
Boost.Signals2 is not just "an array of callbacks", it has a lot of added value. IMO, the most important points are:
Thread-safety: several threads may connect/disconnect/invoke the same signal concurrently, without introducing race conditions. This is especially useful when communicating with an asynchronous subsystem, like an Active Object running in its own thread.
connection and scoped_connection handles that allow disconnection without having direct access to the signal. Note that this is the only way to disconnect incomparable slots, like boost::function (or std::function).
Temporary slot blocking. Provides a clean way to temporarily disable a listening module (e.g. when a user requests to pause receiving messages in a view); see the sketch after this list.
Automatic slot lifespan tracking: a signal disconnects automatically from "expired" slots. Consider the situation when a slot is a binder referencing a non-copyable object managed by shared_ptrs:
shared_ptr<listener> l = listener::create();
auto slot = bind(&listener::listen, l.get()); // we don't want aSignal_ to affect `listener` lifespan
aSignal_.connect(your_signal_type::slot_type(slot).track(l)); // but do want to disconnect automatically when it gets destroyed
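To illustrate points 2 and 3 against the Document class above (a sketch; the lambda slot is made up):

// RAII disconnection: the connection is dropped when 'conn' goes out of scope.
boost::signals2::scoped_connection conn =
    doc.connect([]{ std::cout << "document changed\n"; });

// Temporary blocking: while 'block' is alive the slot is not invoked.
{
    boost::signals2::shared_connection_block block(conn);
    doc.append("this change is not reported to the blocked slot");
}   // 'block' destroyed here, the slot is active again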
Certainly, one can re-implement all of the above functionality on one's own "using a vector of functions and calling each one in a loop" etc., but the question is whether that would be any better than Boost.Signals2. Re-inventing the wheel is rarely a good idea.

Is it safe to work on an object that deleteLater was called on

I was thinking about writing method like this:
QString getData() {
    QNetworkReply *reply = getReply();
    reply->deleteLater();
    return QString::fromUtf8(reply->readAll()).trimmed();
}
Is it safe?
If I'm forced to write this like this:
QString getData() {
    QNetworkReply *reply = getReply();
    QString result = QString::fromUtf8(reply->readAll()).trimmed();
    reply->deleteLater();
    return result;
}
I'm copying the QString twice (am I? once when it's put into result and again when returning it by value), which I wanted to avoid.
From the deleteLater docs:
Schedules this object for deletion.
The object will be deleted when control returns to the event loop. If the event loop is not running when this function is called (e.g. deleteLater() is called on an object before QCoreApplication::exec()), the object will be deleted once the event loop is started.
So what you are doing there is safe. Obviously handing out references or pointers to that object (or its members) that might be persisted is wrong. But if you're returning copies, you're fine.
But what you're doing might or might not do what you want to do. readAll doesn't block, it returns the data currently available. Meaning that a single call to readAll might only read a partial response - unless you've ensured that all data has arrived through other means.
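If you do need the complete payload in this synchronous style, one option (just a sketch, reusing the question's hypothetical getReply()) is to wait for the reply's finished() signal before reading:

QString getData() {
    QNetworkReply *reply = getReply();
    if (!reply->isFinished()) {
        QEventLoop loop;
        QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
        loop.exec();    // spin a local event loop until finished() fires
    }
    reply->deleteLater();
    return QString::fromUtf8(reply->readAll()).trimmed();
}

Note that this spins a nested event loop, with the caveats discussed below.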
Other things to note, from the docs:
Note that entering and leaving a new event loop (e.g., by opening a modal dialog) will not perform the deferred deletion; for the object to be deleted, the control must return to the event loop from which deleteLater() was called.
So the only thing to worry about when doing this type of thing would be calling functions that somehow re-enters the "current" event loop. But that won't happen if that is done via QCoreApplication::processEvents:
In the event that you are running a local loop which calls this function continuously, without an event loop, the DeferredDelete events will not be processed.
So that's covered too. The deferred deletion logic is pretty complex, but safe under normal circumstances. If you're digging very deep into the Qt internals (or calling code that might do something fishy there), be defensive. But for normal code flow, deleteLater is safe as long as you don't have dangling references (or pointers) that might persist.
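And if you ever do find yourself in such a hand-rolled loop and want pending deferred deletions flushed, one common idiom (a sketch, not something required by the code above) is to dispatch them explicitly:

// Deliver any pending DeferredDelete events for objects living in this thread.
QCoreApplication::sendPostedEvents(nullptr, QEvent::DeferredDelete);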
What does deleteLater do? From its name, I would expect that it registers the object for deletion at some later point in time (end of transaction? end of session?). If so, you can safely use it as long as that later point in time has not occurred. The only issue is knowing when that point occurs, but for things like end of transaction or end of session, you're probably safe: if your function was called within a transaction or session, the transaction or session will not end until you return.
It is safe, but you'd better not use deleteLater at all, because
The object will be deleted when control returns to the event loop. If
the event loop is not running when this function is called (e.g.
deleteLater() is called on an object before QCoreApplication::exec()),
the object will be deleted once the event loop is started.
means that the object may end up never being deleted. This pretends to work like a GC, but it is even worse:
class A: public QObject
{
    char x[10000000];
};

void process()
{
    A* a = new A();
    //delete a;
    a->deleteLater();
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    for (int k = 0; k < 1000000; ++k) {
        process();
    }
    return a.exec();
}
At the very least it is not idiomatic C++, which uses RAII.
On the other side, copying a QString is a cheap operation, because QString uses the copy-on-write idiom.
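To illustrate the copy-on-write point (a trivial sketch):

QString a = QStringLiteral("a rather long reply body ...");
QString b = a;      // no character data is copied; both strings share one buffer
b += " extra";      // only now does 'b' detach and get its own deep copy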

How can I protect a QThread function so it will not be called again until finished its previous work?

I'm using a QThread and inside its run method I have a timer invoking a function that performs some heavy actions that take some time. Usually more than the interval that triggers the timer (but not always).
What I need is to protect this method so it can be invoked only if it has completed its previous job.
Here is the code:
NotificationThread::NotificationThread(QObject *parent)
    : QThread(parent),
      bWorking(false),
      m_timerInterval(0)
{
}

NotificationThread::~NotificationThread()
{
}

void NotificationThread::fire()
{
    if (!bWorking)
    {
        m_mutex.lock(); // <-- This is not protecting the GetUpdateTime method from being invoked over and over.
        bWorking = true;
        int size = groupsMarkedForUpdate.size();
        if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
        {
            bWorking = false;
            emit UpdateNotifications();
        }
        m_mutex.unlock();
    }
}

void NotificationThread::run()
{
    m_NotificationTimer = new QTimer();
    connect(m_NotificationTimer,
            SIGNAL(timeout()),
            this,
            SLOT(fire()),
            Qt::DirectConnection);
    int interval = val.toInt();
    m_NotificationTimer->setInterval(3000);
    m_NotificationTimer->start();
    QThread::exec();
}

// This method is invoked from the main class
void NotificationThread::Execute(const QStringList batchReqList)
{
    m_batchReqList = batchReqList;
    start();
}
You could always have the thread that needs to run the method connect to an onDone signal that alerts all subscribers that it is complete. Then you should not run into the problems associated with double-checked locking and memory reordering. Maintain the run state in each thread.
I'm assuming you want to protect your thread from calls from another thread. Am I right? If yes, then..
This is what QMutex is for. QMutex gives you an interface to "lock" a section of code until it is "unlocked", thus serializing access to it. You can choose to keep it locked until the work is done. But use it at your own risk: QMutex presents its own problems when used incorrectly. Refer to the documentation for more information on this.
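For this particular question, a sketch of how QMutex::tryLock could guard fire() against overlapping runs without blocking the timer thread (member names taken from the question, everything else assumed):

void NotificationThread::fire()
{
    if (!m_mutex.tryLock())     // a previous run hasn't finished yet: skip this tick
        return;
    if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
        emit UpdateNotifications();
    m_mutex.unlock();           // let the next timer tick do work again
}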
But there are many more ways to solve your problem. For example, @Beached suggests a simpler one: your QThread instance would emit a signal when it's done. Or better yet, keep a bool isDone inside your thread which would be true if it's done, or false if it's not. If it's true, then it's safe to call the method. But make sure you do not manipulate isDone outside the thread that owns it; I suggest you only manipulate isDone inside your QThread.
Here's the class documentation: link
LOL, I seriously misinterpreted your question. Sorry. It seems you've already done my second suggestion with bWorking.