Qt C++ destructor taking a long time to return

I'm working on a pretty standard Qt mobile app (written in C++, targeted at Symbian devices), and am finding that sometimes when the app is closed (i.e. via a call to QApplication::quit), the final destructor in the app can take a long time to return (30 seconds plus). By this I mean that all clean-up operations in the destructor have completed (quickly, all well within a second) and we've reached the point where execution is leaving the destructor and returning to the code that implicitly called it (i.e. where we delete the object).
Obviously at that point I'd expect execution to return to just after the call to delete the object, virtually instantly, but as I say sometimes this is taking an age!
This long closure time happens both in debug and release builds, with logging enabled or disabled, so I don't think that's a factor here. When we reach the end of the destructor I'm pretty certain no file handles are left open, nor any other resources (network connections etc.)...though even if they were, surely this wouldn't present itself as a problem on exiting the destructor (?).
This happens on deleting the application's QMainWindow object. Currently the call to do this is in a slot connected to QApplication::aboutToQuit, though I've tried deleting that object in the app's main() function too.
The length of the delay we experience seems proportional to the amount of activity in the app before we exit. That sort of makes me think memory leaks may be a problem here; however, we're not aware of any (which doesn't mean there aren't any, of course), and I've never seen this behaviour before with leaked memory.
Has anyone any ideas what might be going on here?
Cheers

If your final destructor is for a class that inherits QObject, then the QObject destructor will be called immediately after the destructor of your final object. Presumably this object is the root of a possibly large object tree, and destroying it triggers a number of actions, including calling the destructor of every child QObject. Since you state that the problem is compounded by the amount of activity, there are likely a very large number of children being added to the object tree that are all deleted at this time, perhaps more than you intended. Instead of adding every object to one giant tree to be deleted all at once, identify objects that are created often but don't need to persist for the entire execution. Instead of creating those objects with a parent, start a new tree that can be deleted earlier (parent = 0). Also look at QObject::deleteLater(), which defers deletion of the objects in these independent trees until control returns to the event loop.
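For what it's worth, here's a minimal sketch of that suggestion (TransientWorker and startSomeWork are made-up names, not anything from your code): objects that are created often but don't need to live until exit get no parent, so they form their own small trees and can be handed to deleteLater() long before shutdown.

#include <QObject>

class TransientWorker : public QObject
{
public:
    TransientWorker() : QObject(0) {}   // parent = 0: root of its own small tree

    void finish()
    {
        // ... release whatever this worker holds ...
        deleteLater();   // deleted when control returns to the event loop,
                         // instead of piling up under the QMainWindow
    }
};

void startSomeWork()
{
    TransientWorker *worker = new TransientWorker;   // deliberately not parented to the main window
    // ... use worker ...
    worker->finish();
}

That way the final QMainWindow destructor only has to tear down the objects that genuinely live for the whole run.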

Related

QGestureRecognizer automatically destroyed by QGestureManager?

I have recently experienced exit code 255 in my Qt 5.7 application. This happened right after I added my custom QGestureRecognizer. I have debugged into Qt's sources and came to the conclusion that the QGestureManager automatically disposes of all QGestureRecognizer instances. The line that causes the issue is inside the destructor of the widget where the recognizer is created and registered:
Demo::~Demo() {
    // delete other stuff
    delete recognizer;
}
The thing is that QGestureRecognizer doesn't support (at least according to the documentation and judging by the constructor's signature) Qt's parent-child relationship, since it's not derived from QObject (or any subclass of that fundamental Qt class). This means that one cannot pass a parent to its constructor, hence QCustomGestureRecognizer *recognizer = new QCustomGestureRecognizer(this) isn't possible. Continuing this line of thought, it means that one has to trigger the destructor manually by calling delete recognizer. Or so I thought...
At the end of my application's life the QGestureManager destructor is called. In there, there is a list of recognizers called m_recognizers. It contains a bunch of the built-in recognizers (such as the one for the Tap gesture) along with the registered custom recognizer (in my case it was registered as 257). The destructor of QGestureManager iterates through the list and deletes its entries.
When the delete recognizer line is present I get a segmentation fault when qDeleteAll(...) (for m_recognizers) reaches the custom recognizer's entry, since it attempts to delete something that has already been deleted.
After I commented out the delete recognizer line in my widget's destructor I no longer face the issue; however, I'm still uncertain whether I'm breaking my code somewhere else. The exit code is now (as expected) 0, but the information on how recognizers are disposed of is completely missing from the official documentation.
Has anyone encountered this problem? I'm not excluding the possibility that the issue arises from some other part of my code, although that seems quite unlikely considering that it appears when the default QWidget destructor is called. As per the C++ standard, when inheriting from a class, the derived class's destructor is called first (in my case this is the Demo custom widget - no issues there) and then the base class's.
If you use
Qt::GestureType QGestureRecognizer::registerRecognizer(QGestureRecognizer *recognizer)
the system takes ownership of the object and you should not delete it yourself.
Excerpt from the documentation:
The application takes ownership of the recognizer and returns the
gesture type ID associated with it.
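Roughly, the registration then looks like this (MyRecognizer is a placeholder for your custom recognizer class, and myGestureType is assumed to be a Qt::GestureType member of Demo):

#include <QGestureRecognizer>
#include <QWidget>

Demo::Demo(QWidget *parent) : QWidget(parent)
{
    QGestureRecognizer *recognizer = new MyRecognizer;
    myGestureType = QGestureRecognizer::registerRecognizer(recognizer); // ownership passes to Qt
    grabGesture(myGestureType);   // let this widget receive the custom gesture
}

Demo::~Demo()
{
    // No `delete recognizer;` here: QGestureManager deletes every registered
    // recognizer at application exit, so deleting it yourself double-frees.
    // To dispose of it earlier, unregister it instead:
    // QGestureRecognizer::unregisterRecognizer(myGestureType);
}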

Assertion when operator delete is called

When my objects are destroyed, I keep getting an assertion failure in dbgheap.c line 1399:
_ASSERTE(pHead->nBlockUse == nBlockUse);
I can't find any reason why this happens. The pointers are properly initialized to NULL in the constructor:
CPlayThread()
{
    m_pPlayer[0] = NULL;
    m_pPlayer[1] = NULL;
}
This is the code that actually creates the objects.
if (pParam->m_pPlayer[0] == NULL)   // pParam is a CPlayThread*
{
    if (config.m_nPlayerMode == modeSocket)
        pParam->m_pPlayer[0] = new CSocketPlayer();
}
The objects get destroyed when the thread is destroyed, and this is where the assertion occurs.
~CPlayThread()
{
    if (m_pPlayer[0])
        delete m_pPlayer[0];
    m_pPlayer[0] = NULL;

    if (m_pPlayer[1])
        delete m_pPlayer[1];
    m_pPlayer[1] = NULL;
}
I'm at a total loss here. It used to work fine, and somehow it started crashing at a client's location after three or four days of running continuously. At the same time my debug executable started asserting every single time a player was destroyed. There are up to 96 threads that might be playing at any given time (with two players per thread, alternating - the players were created and destroyed as needed). So after looking for a solution and not finding one, I decided to just keep the objects around for the duration of the application's execution. Now I only get the assertion when I close the debug version of the program (and presumably there is an unnoticed crash on closing the release version, which is never, because this should run 24/7).
I just need to know what I am doing wrong. Any help would be appreciated.
What type is m_pPlayer[0] (like David asked)? Is it a base type of CSocketPlayer, or is it CSocketPlayer itself? If it's a base type, you need to make the destructor in the base class virtual; that might be related to your problem. If not, then the problem must be that you have already deleted the object. This can be due to a race condition, where two threads run the destructor with the same pointers.
Another possibility is that the new or delete operator is overloaded, for example allocating from another heap. That's probably far-fetched, but possible.
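If it is indeed the missing virtual destructor, the fix is small (CPlayer is a placeholder for whatever base class m_pPlayer is declared as):

class CPlayer
{
public:
    virtual ~CPlayer() {}   // virtual: `delete m_pPlayer[0];` through a CPlayer*
                            // now also runs the derived destructor
    // ... common player interface ...
};

class CSocketPlayer : public CPlayer
{
public:
    ~CSocketPlayer() { /* close sockets, free buffers, ... */ }
};

Without the virtual destructor, deleting a CSocketPlayer through a base-class pointer is undefined behaviour, which is exactly the kind of thing the debug heap assertion tends to catch.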

Place critical functions in destructor to "enhance atomicity"?

Say I have two C++ functions foo1() and foo2(), and I want to minimize the likelihood that foo1() starts execution but foo2() is not called due to some external event. I don't mind if neither is called, but foo2() must execute if foo1() was called. Both functions can be called consecutively and do not throw exceptions.
Is there any benefit / drawback to wrapping the functions in an object and calling both in the destructor? Would things change if the application was multi-threaded (say the parent thread crashes)? Are there any other options for ensuring foo2() is called so long as foo1() is called?
I thought having them in a destructor might help with e.g. SIGINT, though I learned SIGINT will stop execution immediately, even in the middle of the destructor.
Edit:
To clarify: both foo1() and foo2() will be abstracted away, so I'm not concerned about someone else calling them in the wrong order. My concern is solely related to crashes, exceptions, or other interruptions during the execution of the application (e.g. someone sending SIGINT with Ctrl+C, another thread crashing, etc.).
If another thread crashes (and there is no relevant signal handler, so the whole application exits), there is not much you can do to guarantee that your application does something - it's up to what the OS does. And there are ALWAYS cases where the system will kill your app without your actual knowledge (e.g. a bug that causes your app to use "all" memory, and the OS out-of-memory killer killing your process).
The only time your destructor is guaranteed to be executed is if the object is constructed and a C++ exception is thrown. Signals and such make no such guarantees, and continuing to execute [in the same thread] after, for example, SIGSEGV or SIGBUS is well into the "undefined" parts of the world - nothing much you can do about that, since the SEGV typically means "you tried to do something to memory that doesn't exist [or that you can't access in the way you tried, e.g. write to code-memory]", and the processor would have aborted the current instruction. Attempting to continue where you were will either lead to the same instruction being executed again, or the instruction being skipped [if you continue at the next instruction - and I'm ignoring the trouble of determining where that is for now]. And of course, there are situations where it's IMPOSSIBLE to continue even if you wanted to - say, for example, the stack pointer has been corrupted [restored from memory that was overwritten, etc.].
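To illustrate the one guarantee you do get, a guard object along the lines the question suggests (FooGuard and doWork are made-up names) behaves like this:

void foo1();
void foo2();

class FooGuard
{
public:
    FooGuard()  { foo1(); }
    ~FooGuard() { foo2(); }
};

void doWork()
{
    FooGuard guard;          // foo1() has now run
    // ... work that might throw a C++ exception ...
}                            // foo2() runs here, on normal return or during
                             // stack unwinding - but not after a fatal signal,
                             // abort(), or the process being killed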
In short, don't spend much time trying to come up with something that tries to avoid these sorts of scenarios, because it's unlikely to work. Spend your time coming up with schemes where you don't need to know whether you completed something or not [for example transaction-based or "commit-based" programming (not sure if that's the right term): you do some steps, then "commit" the work done so far, then do some further steps, and so on. Only work that has been committed is sure to be complete; uncommitted work is discarded next time around, so something is either completely done or completely discarded, depending on whether it finished].
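A rough illustration of that commit idea, using a scratch file and a rename as the commit point (the file names are placeholders):

#include <cstdio>
#include <fstream>
#include <string>

bool saveStateCommitted(const std::string &data)
{
    {
        std::ofstream tmp("state.tmp", std::ios::trunc);
        tmp << data;
        if (!tmp.good())
            return false;                 // nothing committed, old state.dat untouched
    }                                     // scratch file flushed and closed here
    // Commit point: on POSIX filesystems the rename is atomic, so the next run
    // sees either the old state.dat or the new one, never a partial write.
    return std::rename("state.tmp", "state.dat") == 0;
}

If the process dies anywhere before the rename, the partially written scratch file is simply discarded and the last committed state still stands.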
Separating "sensitive" and "not sensitive" parts of your application into separate processes can be another way to achieve some more safety.

C++ Iterating a huge std::multimap while handling many elements going MIA

I have a situation where objects will add events (a struct containing a function pointer to a function like object::do_something) to a "chain of events" (std::multimap) in their constructor. My interpreter reads the chain of events (sorted by depth) every time the game updates and executes each one sequentially. When an object is destroyed, it will remove all its events from the chain in its destructor automatically (to prevent possible leaks of events).
Because events are sorted by depth, it's possible that an object might register multiple events which are "next" to each other in the chain. When an object destroys itself, it unlinks all its events and immediately stops running its share of code (when something is destroyed, it can't do anything). I've cunningly produced a way of doing this: the particular function which deletes an object, instance_destroy(), will throw an exception which my event interpreter can catch, continuing with the next event in the chain.
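For reference, here is a rough sketch of the setup described above (the names are illustrative, and std::function stands in for the raw function pointers): objects insert their events on construction and erase exactly their own entries on destruction.

#include <functional>
#include <map>
#include <vector>

typedef double real_t;
struct event { std::function<void()> fn; };                 // the "do_something" callback
typedef std::multimap<real_t, event> events_by_depth_t;

extern events_by_depth_t g_chain;                            // the global chain of events

class GameObject
{
public:
    explicit GameObject(real_t depth)
    {
        // Remember the iterator so only this object's entries are removed later.
        m_entries.push_back(
            g_chain.insert(std::make_pair(depth, event{ [this] { do_something(); } })));
    }

    ~GameObject()
    {
        // Unlink exactly this object's events; any iterator held elsewhere to
        // these elements is invalidated, which is the crux of the problem below.
        for (events_by_depth_t::iterator it : m_entries)
            g_chain.erase(it);
    }

    void do_something() { /* possibly calls instance_destroy() on something */ }

private:
    std::vector<events_by_depth_t::iterator> m_entries;
};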
I've come to realize:
Unpredictable numbers of events can be unlinked from the chain, and the current iterator is likely to be invalidated when an object destroys itself.
Objects can destroy other objects in their lifetime, as well as themselves. I can't simply keep a copy of the next iterator that doesn't belong to the current object in case of destruction, as it could also be removed!
When control is passed back to the interpreter (say, via exception) and heaps of events have been removed, including possibly the current iterator, I have no way of knowing what to execute next. I can't start the map from the beginning -- that would cause undefined behaviour in the game; things would be executed twice. I also can't copy the map -- it's absolutely HUGE -- it would come at an enormous performance penalty. I can't redesign the way the system should work either, as it's not my protocol.
Consider the following data structure;
typedef std::multimap<real_t, event> events_by_depth_t;
How can I iterate it given my requirements above?
I'm using C++11.

Is it safe to emit a sigc++ signal in a destructor?

My current project has a mechanism that tracks/proxies C++ objects to safely expose them to a script environment. Part of its function is to be informed when a C++ object is destroyed so it can safely clean up references to that object in the script environment.
To achieve this, I have defined the following class:
class DeleteEmitter {
public:
    virtual ~DeleteEmitter() {
        onDelete.emit();
    }
    sigc::signal<void> onDelete;
};
I then have any class that may need to be exposed to the script environment inherit from this class. When the proxy layer is invoked it connects a callback to the onDelete signal and is thus informed when the object is destroyed.
Light testing shows that this works, but in live tests I'm seeing peculiar memory corruptions (read: crashes in malloc/free) in unrelated parts of the code. Running under valgrind suggests there may be a double-free or continued use of an object after it's been freed, so it's possible that there is an old bug in a class that was only exposed after DeleteEmitter was added to its inheritance hierarchy.
During the course of my investigation it has occurred to me that it might not be safe to emit a sigc++ signal during a destructor. Obviously it would be a bad thing to do if the callback tried to use the object being deleted, but I can confirm that is not what's happening here. Assuming that, does anyone know if this is a safe thing to do? And is there a more common pattern for achieving the same result?
The C++ spec guarantees that the data members in your object will not be destroyed until your destructor returns, so the onDelete object is untouched at that point. If you're confident that the signal won't indirectly result in any reads, writes or method calls on the object(s) being destroyed (multiple objects if the DeleteEmitter is part of another object) or generate C++ exceptions, then it's "safe." Assuming, of course, that you're not in a multi-threaded environment, in which case you also have to ensure other threads aren't interfering.
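In code, the safe pattern on the proxy side looks roughly like this (ScriptProxy is a placeholder name; your actual proxy layer will differ): derive the listener from sigc::trackable so the connection is cleaned up automatically if the proxy dies first, and make the slot only clear its stored pointer without touching the dying object.

#include <sigc++/sigc++.h>

class ScriptProxy : public sigc::trackable
{
public:
    ScriptProxy() : m_obj(0) {}

    void watch(DeleteEmitter *obj)
    {
        m_obj = obj;
        obj->onDelete.connect(sigc::mem_fun(*this, &ScriptProxy::onTargetDeleted));
    }

private:
    void onTargetDeleted()
    {
        // Only drop the reference and clean up the script-side handle; per the
        // answer above, do not read from or call back into the dying object.
        m_obj = 0;
    }

    DeleteEmitter *m_obj;
};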