I'm checking for memory leaks in my Qt program using QtCreator and Valgrind. I am deleting a few entries in a QHash in my destructor like this:
QHash<QString, QVariant*> m_Hash;
/**
* @brief
* Destruct a Foo class instance
*/
Foo::~Foo()
{
// Do Cleanup here
// Delete hash leftovers
foreach( QString key, m_Hash.keys() )
{
qDebug() << "Deleting an entry..";
// Delete the hash item
delete m_Hash.take(key);
}
}
If I debug with Valgrind this code is fine and deletes the contents when the destructor is called:
>> Deleting an entry..
>> Deleting an entry..
If I launch with GDB within QtCreator, launch without GDB from QtCreator, or just run my Qt App from the command line I get Segmentation Faults!
Signal name: SIGSEGV
Signal meaning: Segmentation fault
If I comment out the 'delete' line, I can run my app just fine using any method, but then I leak memory.
What gives? Does valgrind introduce some sort of delay that allows my destructor to work? How can I solve this?
hyde's answer is correct; however, the simplest possible way to clear your particular hash is as follows:
#include <QtAlgorithms>
Foo::~Foo()
{
qDeleteAll(m_Hash);
m_Hash.clear();
}
Note that the above technique would not work if the key of the hash table was a pointer (e.g. QHash<QString*, QVariant>).
You cannot modify the container you iterate over with foreach. Use iterators instead. Correct code using the method iterator QHash::erase(iterator pos):
QHash<QString, QVariant*>::iterator it = m_Hash.begin();
// auto it = m_Hash.begin(); // in C++11
while (it != m_Hash.end()) {
delete it.value();
it = m_Hash.erase(it);
}
Also, any particular reason why you are storing QVariant pointers, instead of values? QVariant is usually suitable for keeping as value, since most data you'd store in QVariant is either implicitly shared, or small.
The documentation does not explicitly mention it, but it is a problem that you are mutating the container you are iterating over.
The code to foreach is here.
Looking at that code, your loop does basically the same as if you'd written:
for (QHash<QString, QVariant*>::iterator it = m_Hash.begin(), end = m_Hash.end();
it != end;
++it)
{
delete m_Hash.take(it.key());
}
however, the take member function may invalidate existing iterators (it and end), so your iterators might have become dangling, yielding undefined behavior.
Possible solutions:
* do not modify the container you iterate over while iterating
* make sure it is valid before the next iteration begins, and don't store an end-iterator (this solution forbids the use of foreach)
Maybe there is a problem with the foreach keyword. Try replacing:
foreach( QString key, m_Hash.keys() )
{
qDebug() << "Deleting an entry..";
delete m_Hash.take(key); // take changes the m_Hash object
}
with:
for (QHash<QString, QVariant*>::iterator it = m_Hash.begin();
it != m_Hash.end(); ++it)
{
qDebug() << "Deleting an entry..";
delete it.value(); // we delete only what it.value() points to, but the
// m_Hash object remains intact.
}
m_Hash.clear();
This way the hash table remains unchanged while you iterate through it. It is possible that the foreach macro expands into a construct where you "delete the hash table from under your feet". That is, the macro probably creates an iterator, which becomes invalid or "dangling" as a side effect of calling
m_Hash.take(key);
Related
In my class I have a member variable:
QProcess* p1;
Inside some function, I initialize and use it as:
p1 = new QProcess();
It works fine. Now I have a situation where I have many of these processes to start. One option is to declare all of them as member variables:
QProcess* p1;
QProcess* p2;
QProcess* p3;
...
And then initialize all of them when needed. However, this is too much redundant work, so I tried to create a list and initialize it in a loop like this:
QList<QProcess*> procList;
for(int i=0; i<len; i++){
procList[i] = new QProcess();
}
It compiles fine but then crashes. Is there something missing, or what am I doing wrong here?
I also tried to add all the member variables to this list, like:
for(int i=0; i<len; i++){
switch(i){
case 0:
procList[i] = p1;
break;
}
}
But this also has the same result as above.
EDIT:
Based on your suggestion, I tried:
procList.append(new QProcess());
as well as procList.append(p1);
But the result is the same: it compiles but crashes at run time.
EDIT:
So I found the issue was totally unrelated. The class where I was using this code (a custom class I made) had no default constructor, and as I learned, without a default constructor initializing the pointer somehow just crashes... strange. After specifying a default constructor it works fine now.
You are accessing memory that is not allocated: your list is empty, and you are accessing an element at an invalid index.
As per documentation: https://doc.qt.io/qt-5/qlist.html#operator-5b-5d
T &QList::operator[](int i): Returns the item at index position i as a modifiable reference. i must be a valid index position in the list (i.e., 0 <= i < size()).
If this function is called on a list that is currently being shared, it will trigger a copy of all elements. Otherwise, this function runs in constant time. If you do not want to modify the list you should use QList::at().
Try to use append or push_back, see https://doc.qt.io/qt-5/qlist.html#push_back
PS: Not an expert in Qt, but you might want to look into a list of std::unique_ptr to manage your memory if possible. Otherwise you risk forgetting to delete the heap-allocated elements whose pointers are stored in the list.
Edit: OP reports that the suggestion did not work. I wrote a small example myself (disregarding possible leaks). The following example crashes when using operator[] but works with append in debug mode (I used Qt Creator for Windows with Qt 5.13.0 for MinGW 64-bit). What OP is experiencing is either some issue with the toolchain or some undefined behavior triggered before the append. I suggest OP try copy/pasting my code into a clean project and running it.
#include <QList>
#include <QProcess>
int main()
{
QList<QProcess*> list;
for(int i = 0; i < 10; ++i){
QProcess * p = new QProcess();
//Decomment and crash
//list[i] = p;
//does not crash
list.append(p);
}
//Here you should cleanup
}
May I ask for help confirming whether my issue comes from a design problem, or whether there is a possible clean solution to the following:
Entity.h
class CLEntity3D
{
public:
CLEntity3D();
virtual ~CLEntity3D();
virtual void update() = 0;
static std::vector<CLEntity3D*> vecEntity;
};
Entity.cpp
std::vector<CLEntity3D*> CLEntity3D::vecEntity;
CLEntity3D::CLEntity3D()
{
vecEntity.push_back(this);
}
CLEntity3D::~CLEntity3D()
{
vecEntity.erase((std::remove(vecEntity.begin(), vecEntity.end(), this)), vecEntity.end());
}
Various derived classes create/delete different Entity objects throughout the program; this all works fine.
In a Scene class, I have the following methods:
void CLScene::Update()
{
for (auto& iter : CLEntity3D::vecEntity) {
iter->update();
}
}
void CLScene::ClearScene()
{
for (auto& iter : CLEntity3D::vecEntity) {
delete(iter); iter = nullptr;
}
CLEntity3D::vecEntity.clear();
}
Update is ok, the issue is with ClearScene(). I get a "Vector Iterators incompatible" debug assertion.
From my research, the common cause seems to be that the iterators are from different vectors, which I don't think is the issue here. I think the problem is that when ClearScene() is called, every delete(iter) changes the size of vecEntity through the CLEntity3D destructor, and therefore invalidates the iterator in the ClearScene loop. Am I right?
My question would then be:
Is there a way to delete all CLEntity3D objects from CLScene with that design?
I guess I could have CLScene hold vecEntity, which would eliminate the problem, but this would mean that CLScene would have to manage all creation/deletion of entities, and therefore not be as versatile...
PS: I know this example is not one to compile, but since my question is more about the concept... please advise if I should provide otherwise.
The problem is, you can't remove anything from the underlying vector while inside a range-based for loop.
The loop in your ClearScene method deletes CLEntity3D instances, each of which in its destructor changes the same vector you used in your for loop.
A relatively easy fix would be to change your ClearScene to something like this:
void CLScene::ClearScene()
{
auto vectorCopy = CLEntity3D::vecEntity;
for (auto& iter : vectorCopy) {
delete iter;
}
}
This works because the loop operates on a copy, and the remove happens on the original.
Note that there is no need to clear the original vector after the loop, since the destructors ensure that the vector will be empty after deleting every item.
Or as suggested by a comment, you could avoid the copy by using a while loop:
while (!CLEntity3D::vecEntity.empty())
{
delete CLEntity3D::vecEntity.front();
}
I'm getting a segfault on MacOSX ("Segmentation fault: 11", in gdb "Program received signal SIGSEGV, Segmentation fault"), appearing in the destructor in which a container is looped over with an iterator and memory deleted. I've tried with clang++, g++ (both part of LLVM) and homebrew g++. The segfault appears when the iterator is incremented for the first time, with the gdb message (having compiled with clang++)
"0x000000010001196d in std::__1::__tree_node_base<void*>* std::__1::__tree_next<std::__1::__tree_node_base<void*>*>(std::__1::__tree_node_base<void*>*) ()"
When starting the program in gdb I also get warnings saying "warning: Could not open OSO archive file".
On a cluster linux node, with gcc 4.8.1, I don't get a segfault. Any ideas what might be wrong and how I can avoid the segfault on my mac (preferably with clang)? I really don't know much about compilers and such.
EDIT:
I think I found the problem, however I'd like to understand still why this works on one platform but not another. Here's a minimal example:
class Word:
#ifndef WORD_H
#define WORD_H
#include <string>
#include <map>
class Word {
public:
/*** Constructor ***/
Word(std::string w) : m_word(w) {
// Add word to index map, if it's not already in there
std::map<std::string, Word*>::iterator it = index.find(w);
if (it == index.end()) {
index[w] = this;
}
}
~Word() { index.erase(m_word); } // Remove from index
static void DeleteAll() { // Clear index, delete all allocated memory
for (std::map<std::string, Word*>::const_iterator it = index.begin();
it != index.end();
++it)
{ delete it->second; }
}
private:
std::string m_word;
static std::map<std::string, Word*> index; // Index holding all words initialized
};
#endif
WordHandler class:
#ifndef _WORDHANDLER_H_
#define _WORDHANDLER_H_
#include <string>
#include "Word.h"
class WordHandler {
public:
WordHandler() {}
~WordHandler() { Word::DeleteAll(); } // clear memory
void NewWord(const std::string word) {
Word* w = new Word(word);
}
};
#endif
Main program:
#include <iostream>
#include "WordHandler.h"
int main () {
std::cout << "Welcome to the WordHandler. " << std::endl;
WordHandler wh;
wh.NewWord("hallon");
wh.NewWord("karl");
std::cout << "About to exit WordHandler after having added two new words " << std::endl;
return 0;
}
So the segfault occurs upon exiting the program, when the destructor ~WordHandler is called. The reason, I found, is the Word destructor: the Word object erases itself from the map, which makes the DeleteAll() function behave strangely because the map is altered while it's being iterated over (some sort of double delete, I suppose). The segfault disappears either by removing DeleteAll() completely, or by removing the Word destructor.
So I'm still wondering why the segfault doesn't appear on linux with g++ from gcc 4.8.1. (Also, I guess off topic, I'm wondering about the programming itself – what would be the proper way to treat index erasing/memory deletion in this code?)
EDIT 2:
I don't think this is a duplicate of Vector.erase(Iterator) causes bad memory access, because my original question had to do with why I get a segfault on one platform and not another. It's possible that the other question explains the segfault per se (not sure how get around this problem... perhaps removing the Word destructor and calling erase from DeleteAll() rather than "delete"? But that destructor makes sense to me though...), but if it's truly a bug in the code why isn't picked up by gcc g++?
This is a problem:
~Word() { index.erase(m_word); } // Remove from index
static void DeleteAll() { // Clear index, delete all allocated memory
for (std::map<std::string, Word*>::const_iterator it = index.begin();
it != index.end();
++it)
{ delete it->second; }
}
delete it->second invokes ~Word which erases from the map that you are iterating over. This invalidates your active iterator, leading to undefined behaviour. Because it is UB, the fact that it works on one platform but not another is basically just luck (or lack thereof).
To fix this, you can either make a copy of index and iterate over that, consider a different design that doesn't mutate the index as you delete it, or use the fact that erase returns the next valid iterator to make the loop safe (which means hoisting the erase into DeleteAll).
I have a map declared as
std::map<std::string, Texture*> textureMap;
which I use for pairing the path to a texture file to the actual texture so I can reference the texture by the path without loading the same texture a bunch of times for individual sprites. What I don't know how to do is properly destroy the textures in the destructor for the ResourceManager class (where the map is).
I thought about using a loop with an iterator like this:
ResourceManager::~ResourceManager()
{
for(std::map<std::string, Texture*>::iterator itr = textureMap.begin(); itr != textureMap.end(); itr++)
{
delete (*itr);
}
}
But that doesn't work, it says delete expected a pointer. It's pretty late so I'm probably just missing something obvious, but I wanted to get this working before bed. So am I close or am I totally in the wrong direction with this?
As far as your sample code goes, you need to do this inside the loop:
delete itr->second;
Each element of the map is a key-value pair, and you need to delete the second member: in your case, itr->first is a std::string and itr->second is a Texture*.
If you need to delete a particular entry, you could do something like this:
std::map<std::string, Texture*>::iterator itr = textureMap.find("some/path.png");
if (itr != textureMap.end())
{
// found it - delete it
delete itr->second;
textureMap.erase(itr);
}
You have to make sure that the entry exists in the map (hence the check against end()); dereferencing the end iterator and deleting through it would be undefined behavior.
An alternative might be to use std::shared_ptr instead of a raw pointer, then you could use a simpler syntax for removing an item from the map and let the std::shared_ptr handle the deletion of the underlying object when appropriate. That way, you can use erase() with a key argument, like so:
// map using shared_ptr
std::map<std::string, std::shared_ptr<Texture>> textureMap;
// ... delete an entry ...
textureMap.erase("some/path.png");
That will do two things:
Remove the entry from the map, if it exists
If there are no other references to the Texture*, the object will be deleted
In order to use std::shared_ptr you'll either need a recent C++11 compiler, or Boost.
The answer didn't fully address the looping issue. At least Coverity (TM) doesn't allow erasing through the iterator within the loop and then still using it to continue looping. Anyway, after deleting the memory, calling clear() on the map should do the rest:
ResourceManager::~ResourceManager()
{
for(std::map<std::string, Texture*>::iterator itr = textureMap.begin(); itr != textureMap.end(); itr++)
{
delete (itr->second);
}
textureMap.clear();
}
You're not using the right tool for the job.
Pointers should not "own" data.
Use boost::ptr_map<std::string, Texture> instead.
I was wondering if this is an acceptable practice:
struct Item { };
std::list<std::shared_ptr<Item>> Items;
std::list<std::shared_ptr<Item>> RemovedItems;
void Update()
{
Items.push_back(std::make_shared<Item>()); // sample item
for (auto ItemIterator=Items.begin();ItemIterator!=Items.end();ItemIterator++)
{
if (true) { // a complex condition, (true) is for demo purposes
RemovedItems.push_back(std::move(*ItemIterator)); // move ownership
*ItemIterator=nullptr; // set current item to nullptr
}
// One of the downsides, is that we have to always check if
// the current iterator value is not a nullptr
if (*ItemIterator!=nullptr) {
// A complex loop where Items collection could be modified
}
}
// After the loop is done, we can now safely remove our objects
RemovedItems.clear(); // calls destructors on objects
//finally clear the items that are nullptr
Items.erase( std::remove_if( Items.begin(), Items.end(),
[](const std::shared_ptr<Item>& ItemToCheck){
return ItemToCheck==nullptr;
}), Items.end() );
}
The idea here is that we're marking Items container could be effected by outside sources. When an item is removed from the container, it's simply set to nullptr but moved to RemovedItems before that.
Something like an event might affect the Items and add/remove items, so I had to come up with this solution.
Does this seem like a good idea?
I think you are complicating things too much. If you are in a multi-threaded situation (you didn't mention it in your question), you would certainly need locks guarding reads from other threads that access your modified lists. Since there are no concurrent data structures in the Standard Library, you would need to add such protection yourself.
For single-threaded code, you can simply call the std::list member remove_if with your predicate. There is no need to set pointers to null, store them, and do multiple passes over your data.
#include <algorithm>
#include <list>
#include <memory>
#include <iostream>
using Item = int;
int main()
{
auto lst = std::list< std::shared_ptr<Item> >
{
std::make_shared<int>(0),
std::make_shared<int>(1),
std::make_shared<int>(2),
std::make_shared<int>(3),
};
// shared_ptrs to even elements
auto x0 = *std::next(begin(lst), 0);
auto x2 = *std::next(begin(lst), 2);
// erase even numbers
lst.remove_if([](std::shared_ptr<int> p){
return *p % 2 == 0;
});
// even numbers have been erased
for (auto it = begin(lst); it != end(lst); ++it)
std::cout << **it << ",";
std::cout << "\n";
// shared pointers to even members are still valid
std::cout << *x0 << "," << *x2;
}
Live Example.
Note that the elements have been really erased from the list, not just put at the end of the list. The latter effect is what the standard algorithm std::remove_if would do, and after which you would have to call the std::list member function erase. This two-step erase-remove idiom looks like this
// move even numbers to the end of the list in an unspecified state
auto res = std::remove_if(begin(lst), end(lst), [](std::shared_ptr<int> p){
return *p % 2 == 0;
});
// erase even numbers
lst.erase(res, end(lst));
Live Example.
However, in both cases, the underlying Item elements have not been deleted, since each still has a shared pointer associated with it. Only when the reference counts drop to zero are those former list elements actually deleted.
If I was reviewing this code I would say it's not acceptable.
What is the purpose of the two-stage removal? An unusual decision like that needs comments explaining its purpose. Despite repeated requests you have failed to explain the point of it.
The idea here is that we're marking Items container could be effected by outside sources.
Do you mean "The idea here is that while we're marking Items container could be effected by outside sources." ? Otherwise that sentence doesn't make sense.
How could it be affected? Your explanation isn't clear:
Think of a Root -> Parent -> Child relationship. An event might trigger in a Child that could remove Parent from Root. So the loop might break in the middle and iterator will be invalid.
That doesn't explain anything, it's far too vague, using very broad terms. Explain what you mean.
A "parent-child relationship" could mean lots of different things. Do you mean the types are related, by inheritance? Objects are related, by ownership? What?
What kind of "event"? Event can mean lots of things, I wish people on StackOverflow would stop using the word "event" to mean specific things and assuming everyone else knows what meaning they intend. Do you mean an asynchronous event, e.g. in another thread? Or do you mean destroying an Item could cause the removal of other elements from the Items list?
If you mean an asynchronous event, your solution completely fails to address the problem. You cannot safely iterate over any standard container if that container can be modified at the same time. To make that safe you must do something (e.g. lock a mutex) to ensure exclusive access to the container while modifying it.
Based on this comment:
// A complex loop where Items collection could be modified
I assume you don't mean an asynchronous event (but then why do you say "outside sources" could alter the container), in which case your solution does ensure that iterators remain valid while the "complex loop" iterates over the list. But why do you need the actual Item objects to remain valid, rather than just keeping the iterators valid? Couldn't you just set the element to nullptr without putting it in RemovedItems, then do Items.remove_if([](shared_ptr<Item> const& p) { return !p; }); at the end? You need to explain a bit more about what your "complex loop" can do to the container or to the items.
Why is RemovedItems not a local variable in the Update() function? It doesn't seem to be needed outside that function. Why not use the new C++11 range-based for loop to iterate over the list?
Finally, why is everything named with a capital letter?! Naming local variables and functions with a capital letter is just weird, and if everything is named that way then it's pointless because the capitalisation doesn't help distinguish different types of names (e.g. using a capital letter just for types makes it clear which names are types and which are not ... using it for everything is useless.)
I feel like this only complicates things a lot by having to check for nullptr everywhere. Also, moving a shared_ptr is a little bit silly.
edit:
I think I understand the problem now and this is how I would solve it:
struct Item {
std::list<std::shared_ptr<Item>> Children;
std::set<std::shared_ptr<Item>, std::owner_less<std::shared_ptr<Item>>> RemovedItems;
void Update();
void Remove(std::shared_ptr<Item>);
};
void Item::Update()
{
for (auto child : Children){
if (true) { // a complex condition, (true) is for demo purposes
RemovedItems.insert(child);
}
// A complex loop where children collection could be modified but
// only by calling Item::remove, Item::add or similar
}
auto oless = std::owner_less<std::shared_ptr<Item>>();
Children.sort(oless); // std::sort needs random-access iterators; std::list provides its own sort()
std::list<std::shared_ptr<Item>> kept; // set_difference's output may not overlap its inputs
std::set_difference(Children.begin(), Children.end(),
RemovedItems.begin(), RemovedItems.end(),
std::back_inserter(kept), oless); // needs <iterator> for back_inserter
Children.swap(kept);
RemovedItems.clear(); // may call destructors on objects
}
void Item::Remove(std::shared_ptr<Item> element){
RemovedItems.insert(element);
}