I have a map declared as
std::map<std::string, Texture*> textureMap;
which I use to pair the path of a texture file with the actual texture, so I can reference a texture by its path without loading the same texture multiple times for individual sprites. What I don't know how to do is properly destroy the textures in the destructor of the ResourceManager class (where the map lives).
I thought about using a loop with an iterator like this:
ResourceManager::~ResourceManager()
{
for(std::map<std::string, Texture*>::iterator itr = textureMap.begin(); itr != textureMap.end(); itr++)
{
delete (*itr);
}
}
But that doesn't work; the compiler says delete expected a pointer. It's pretty late so I'm probably just missing something obvious, but I wanted to get this working before bed. So am I close, or am I heading in totally the wrong direction with this?
As far as your sample code goes, you need to do this inside the loop:
delete itr->second;
Each map entry is a pair, and you need to delete its second member. In your case, itr->first is the std::string key and itr->second is the Texture* value.
If you need to delete a particular entry, you could do something like this:
std::map<std::string, Texture*>::iterator itr = textureMap.find("some/path.png");
if (itr != textureMap.end())
{
// found it - delete it
delete itr->second;
textureMap.erase(itr);
}
You have to make sure that the entry actually exists in the map (hence the check against end()); dereferencing the end iterator to delete the texture pointer is undefined behavior, not a catchable exception.
An alternative might be to use std::shared_ptr instead of a raw pointer; then you could use a simpler syntax for removing an item from the map and let the std::shared_ptr handle the deletion of the underlying object when appropriate. That way, you can use erase() with a key argument, like so:
// map using shared_ptr
std::map<std::string, std::shared_ptr<Texture>> textureMap;
// ... delete an entry ...
textureMap.erase("some/path.png");
That will do two things:
Remove the entry from the map, if it exists
If there are no other references to the Texture*, the object will be deleted
In order to use std::shared_ptr you'll either need a recent C++11 compiler, or Boost.
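For illustration, here is a minimal sketch of a lookup-or-load helper built on that shared_ptr map; loadTextureFromFile is a hypothetical loading function standing in for whatever your engine actually uses:
std::shared_ptr<Texture> getTexture(const std::string& path)
{
    auto itr = textureMap.find(path);
    if (itr != textureMap.end())
        return itr->second;                                   // already cached
    std::shared_ptr<Texture> tex(loadTextureFromFile(path));  // hypothetical loader
    textureMap[path] = tex;                                   // cache for later lookups
    return tex;
}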
The answer didn't fully address the looping issue. At least Coverity (TM) doesn't allow erasing through the iterator within the loop and then continuing to use it for the iteration. Anyway, after deleting the memory, calling clear() on the map should do the rest:
ResourceManager::~ResourceManager()
{
for(std::map<std::string, Texture*>::iterator itr = textureMap.begin(); itr != textureMap.end(); itr++)
{
delete (itr->second);
}
textureMap.clear();
}
You're not using the right tool for the job.
Pointers should not "own" data.
Use boost::ptr_map<std::string, Texture> instead.
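A rough sketch of what that looks like, assuming Boost.PointerContainer is available and that Texture can be constructed this way purely for illustration; the map owns the textures and deletes them automatically:
#include <boost/ptr_container/ptr_map.hpp>

boost::ptr_map<std::string, Texture> textureMap;

std::string path = "some/path.png";      // ptr_map::insert wants an lvalue key
textureMap.insert(path, new Texture());  // the map takes ownership of the pointer
// ...
textureMap.erase("some/path.png");       // erasing also deletes the owned Texture
// when textureMap is destroyed, any remaining Textures are deleted as well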
Related
Well I am creating a vector like this
vector<Member*> emp;
Then I am creating heap objects of the member class like this
Member* memObj = new Member();
Then using push_back like this
emp.push_back(memObj);
Well, after using all my functions, do I have to free the memory by iterating like this?
for( vector<Member*>::iterator iter = emp.begin();
iter != emp.end(); )
{
Member* mem = *iter;
iter = emp.erase (iter);
delete mem;
//iter++;
}
Is there any more effective way than iterating through each value? The clear() function only destroys the pointers stored in the vector; it does not free the Member objects they point to. I wish to achieve polymorphism here. I am new to C++, so please help. Thanks in advance. :) I am not using C++11.
If you are able to use a C++11 compiler, you can use one of the smart pointers.
std::unique_ptr
std::vector<std::unique_ptr<Member>> emp;
or
std::shared_ptr
std::vector<std::shared_ptr<Member>> emp;
EDIT
If you are not able to use a C++11 compiler (VS 2005 is definitely too old to support C++11), you will have to delete the objects manually, as you have shown.
However, I would add a helper class to help with deleting the Member objects.
struct MemberDeleteHelper
{
MemberDeleteHelper(std::vector<Member*>& emp) : emp_(emp) {}
~MemberDeleteHelper()
{
for( std::vector<Member*>::iterator iter = emp_.begin();
iter != emp_.end(); ++iter )
{
delete *iter;
}
}
std::vector<Member*>& emp_;
};
and use it as:
vector<Member*> emp;
MemberDeleteHelper deleteHelper(emp);
With this in place, the elements of emp will be deleted no matter how you return from the function. If an exception gets thrown from a nested function call, the stack will be unwound and the elements of emp will still be deleted.
EDIT 2
Do not use auto_ptr objects in std::vector. The pitfalls of using auto_ptr in STL containers are discussed at http://www.devx.com/tips/Tip/13606 (thanks to @pstrjds for the link).
Unless your intent is to add instances of types derived from Member to the vector, there's no need for vector<Member*>; just use vector<Member>.
If you actually need dynamic allocation, use vector<unique_ptr<Member>>. The smart pointer will automatically delete the instances when you clear() the vector. If this is the case, don't forget that Member needs a virtual destructor.
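A minimal sketch of that polymorphic case, using a hypothetical Derived type just for illustration; note the virtual destructor on Member:
#include <memory>
#include <vector>

struct Member { virtual ~Member() {} };
struct Derived : Member { };  // hypothetical derived type

int main()
{
    std::vector<std::unique_ptr<Member>> emp;
    emp.push_back(std::unique_ptr<Member>(new Derived()));
    emp.clear();  // every object is destroyed through Member's virtual destructor
}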
Similar options for pre-C++11 compilers are std::vector<std::tr1::shared_ptr<Member>> or boost::ptr_vector<Member>.
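For the pre-C++11 route, a boost::ptr_vector sketch might look like this (assuming Boost is available); the container owns the Member objects and deletes them in its destructor:
#include <boost/ptr_container/ptr_vector.hpp>

boost::ptr_vector<Member> emp;
emp.push_back(new Member());  // the container takes ownership
// emp[0] is a Member&, so the usual member access syntax still works
// clear() or the container's destructor deletes every Member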
Finally, your current code has a bug. vector::erase returns an iterator to the next element, so if you also increment the iterator manually within the loop you will skip every other element. And I don't understand why you're going through the trouble of storing the pointer in a temporary variable. Your loop should be
for( vector<Member*>::iterator iter = emp.begin(); iter != emp.end(); )
{
delete *iter;
iter = emp.erase(iter);
}
or just delete all the elements first and then clear the vector
for( vector<Member*>::iterator iter = emp.begin(); iter != emp.end(); ++iter)
{
delete *iter;
}
emp.clear();
I was wondering if this is an acceptable practice:
struct Item { };
std::list<std::shared_ptr<Item>> Items;
std::list<std::shared_ptr<Item>> RemovedItems;
void Update()
{
Items.push_back(std::make_shared<Item>()); // sample item
for (auto ItemIterator=Items.begin();ItemIterator!=Items.end();ItemIterator++)
{
if (true) { // a complex condition, (true) is for demo purposes
RemovedItems.push_back(std::move(*ItemIterator)); // move ownership
*ItemIterator=nullptr; // set current item to nullptr
}
// One of the downsides, is that we have to always check if
// the current iterator value is not a nullptr
if (*ItemIterator!=nullptr) {
// A complex loop where Items collection could be modified
}
}
// After the loop is done, we can now safely remove our objects
RemovedItems.clear(); // calls destructors on objects
//finally clear the items that are nullptr
Items.erase( std::remove_if( Items.begin(), Items.end(),
[](const std::shared_ptr<Item>& ItemToCheck){
return ItemToCheck==nullptr;
}), Items.end() );
}
The idea here is that we're marking Items container could be effected by outside sources. When an item is removed from the container, it's simply set to nullptr but moved to RemovedItems before that.
Something like an event might affect the Items and add/remove items, so I had to come up with this solution.
Does this seem like a good idea?
I think you are complicating things too much. If you are in a multi-threaded situation (you didn't mention it in your question), you would certainly need some locks guarding reads from other threads that access your modified lists. Since there are no concurrent data structures in the Standard Library, you would need to add such stuff yourself.
For single-threaded code, you can simply call the std::list member remove_if with your predicate. There is no need to set pointers to null, store them, and do multiple passes over your data.
#include <algorithm>
#include <list>
#include <memory>
#include <iostream>
using Item = int;
int main()
{
auto lst = std::list< std::shared_ptr<Item> >
{
std::make_shared<int>(0),
std::make_shared<int>(1),
std::make_shared<int>(2),
std::make_shared<int>(3),
};
// shared_ptrs to even elements
auto x0 = *std::next(begin(lst), 0);
auto x2 = *std::next(begin(lst), 2);
// erase even numbers
lst.remove_if([](std::shared_ptr<int> p){
return *p % 2 == 0;
});
// even numbers have been erased
for (auto it = begin(lst); it != end(lst); ++it)
std::cout << **it << ",";
std::cout << "\n";
// shared pointers to even members are still valid
std::cout << *x0 << "," << *x2;
}
Live Example.
Note that the elements have really been erased from the list, not just moved toward the end. The latter is roughly what the standard algorithm std::remove_if would do (it shifts the kept elements to the front and leaves the tail in an unspecified state), after which you would have to call the std::list member function erase. This two-step erase-remove idiom looks like this
// move even numbers to the end of the list in an unspecified state
auto res = std::remove_if(begin(lst), end(lst), [](std::shared_ptr<int> p){
return *p % 2 == 0;
});
// erase even numbers
lst.erase(res, end(lst));
Live Example.
However, in both cases, the underlying Item elements have not been deleted, since each of them still has a shared pointer associated with it. Only when their reference counts drop to zero will those former list elements actually be deleted.
If I was reviewing this code I would say it's not acceptable.
What is the purpose of the two-stage removal? An unusual decision like that needs comments explaining its purpose. Despite repeated requests you have failed to explain the point of it.
The idea here is that we're marking Items container could be effected by outside sources.
Do you mean "The idea here is that while we're marking Items container could be effected by outside sources." ? Otherwise that sentence doesn't make sense.
How could it be affected? Your explanation isn't clear:
Think of a Root -> Parent -> Child relationship. An event might trigger in a Child that could remove Parent from Root. So the loop might break in the middle and iterator will be invalid.
That doesn't explain anything, it's far too vague, using very broad terms. Explain what you mean.
A "parent-child relationship" could mean lots of different things. Do you mean the types are related, by inheritance? Objects are related, by ownership? What?
What kind of "event"? Event can mean lots of things, I wish people on StackOverflow would stop using the word "event" to mean specific things and assuming everyone else knows what meaning they intend. Do you mean an asynchronous event, e.g. in another thread? Or do you mean destroying an Item could cause the removal of other elements from the Items list?
If you mean an asynchronous event, your solution completely fails to address the problem. You cannot safely iterate over any standard container if that container can be modified at the same time. To make that safe you must do something (e.g. lock a mutex) to ensure exclusive access to the container while modifying it.
Based on this comment:
// A complex loop where Items collection could be modified
I assume you don't mean an asynchronous event (but then why do you say "outside sources" could alter the container), in which case your solution does ensure that iterators remain valid while the "complex loop" iterates over the list, but why do you need the actual Item objects to remain valid, rather than just keeping iterators valid? Couldn't you just set the element to nullptr without putting it in RemovedItems, then do Items.remove_if([](shared_ptr<Item> const& p) { return !p; }) at the end? You need to explain a bit more about what your "complex loop" can do to the container or to the items.
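To make that concrete, here is a rough sketch of the simpler variant (reusing the Items list from your question); note that with this version an Item may be destroyed as soon as it is nulled out, which may or may not be acceptable for your "complex loop":
for (auto& item : Items) {
    if (/* complex removal condition */ false) {
        item = nullptr;  // the Item is destroyed here if nothing else owns it
        continue;
    }
    // complex per-item work on a still-valid item
}
// drop the null entries in one pass
Items.remove_if([](const std::shared_ptr<Item>& p) { return !p; });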
Why is RemovedItems not a local variable in the Update() function? It doesn't seem to be needed outside that function. Why not use the new C++11 range-based for loop to iterate over the list?
Finally, why is everything named with a capital letter?! Naming local variables and functions with a capital letter is just weird, and if everything is named that way then it's pointless because the capitalisation doesn't help distinguish different types of names (e.g. using a capital letter just for types makes it clear which names are types and which are not ... using it for everything is useless.)
I feel like this only complicates things a lot by having to check for nullptr everywhere. Also, moving a shared_ptr is a little bit silly.
edit:
I think I understand the problem now and this is how I would solve it:
struct Item {
std::list<std::shared_ptr<Item>> Children;
std::set<std::shared_ptr<Item>, std::owner_less<std::shared_ptr<Item>>> RemovedItems;
void Update();
void Remove(std::shared_ptr<Item>);
};
void Item::Update()
{
for (auto child : Children){
if (true) { // a complex condition, (true) is for demo purposes
RemovedItems.insert(child);
}
// A complex loop where children collection could be modified but
// only by calling Item::remove, Item::add or similar
}
auto oless = std::owner_less<std::shared_ptr<Item>>();
Children.sort(oless); // std::list has no random-access iterators, so use the member sort
std::list<std::shared_ptr<Item>> kept; // set_difference must not write over its own input
std::set_difference(Children.begin(), Children.end(),
RemovedItems.begin(), RemovedItems.end(),
std::back_inserter(kept), oless);
Children.swap(kept);
RemovedItems.clear(); // may call destructors on objects
}
void Item::Remove(std::shared_ptr<Item> element){
RemovedItems.insert(element);
}
I have a std::list and a std::map that I would like to empty and call delete on all the pointers.
After reading this question I am using this for the std::list:
mylist.remove_if([](myThingy* thingy) -> bool { delete thingy; return true; });
Is there something similarly concise for std::map?
Note that I cannot use a range-based for loop because it isn't supported by my compiler (VC10). If it were possible, I assume
for(auto* thingy : myMap) { delete thingy; }
myMap.clear();
would work. Please correct me if I am wrong.
Is there something similarly concise for std::map?
You could do it this way (supposing your thingy is the mapped value, and not the key):
for_each(myMap.begin(), myMap.end(),
[] (decltype(myMap)::value_type const& p) { delete p.second; });
myMap.clear();
Concerning the range-based for loop:
If it would be possible I assume
for(auto* thingy : myMap) { delete thingy; }
myMap.clear();
would work. Please correct me if I am wrong.
More or less. You still need to keep in mind that the values of a map are actually pairs - but you could fix the range-based for to take that into account, yes:
for (auto&& p : myMap) { delete p.second; }
myMap.clear();
Anyway, please consider using smart pointers instead of performing manual memory management through raw pointers, new, and delete. This way you would avoid this kind of problems.
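For instance, a minimal sketch of the smart-pointer version (with myThingy as a stand-in for your type); clearing the map then releases every object without any explicit delete:
#include <map>
#include <memory>
#include <string>

struct myThingy { };  // stand-in for your type

std::map<std::string, std::shared_ptr<myThingy>> myMap;

void fillAndClear()
{
    myMap["key"] = std::make_shared<myThingy>();
    myMap.clear();  // shared_ptr deletes the objects; no manual delete needed
}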
I don't think your remove_if solution is a good idea for the other types, either. Just use a simple loop (or foreach):
for ( auto current = myMap.begin(); current != myMap.end(); ++ current ) {
delete current->second;
}
myMap.clear();
Note that you must not delete current->first; that would invalidate the keys in the map. And unless you are doing a clear() immediately afterwards (or are destructing the map), set the deleted pointer to NULL.
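In code, that last point (for the case where the map lives on) would look something like:
delete current->second;
current->second = NULL;  // avoid leaving a dangling pointer in the map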
With regards to your second question:
for ( auto* thingy : myMap )
would certainly not work. The value type of a map is a pair, not a pointer, so you'd need something like:
for ( auto thingy : myMap ) { delete thingy.second; }
(I think. I've not been able to experiment with this either.)
Here's my code for updating a list of items in a vector and removing some of them:
std::vector<Particle*> particles;
...
int i = 0;
while ( i < particles.size() ) {
bool shouldRemove = particles[ i ]->update();
if ( shouldRemove ) {
delete particles[ i ];
particles[ i ] = particles.back();
particles.pop_back();
} else {
i++;
}
}
When I find an item that should be removed, I replace it with the last item from the vector to avoid potentially copying the rest of the backing array multiple times. Yes, I know it is premature optimization...
Is this a valid way of removing items from the vector? I get some occasional (!) crashes somewhere around this area but can't track them down precisely (LLDB fails to show me the line), so I would like to make sure this part is OK. Or is it... ?
UPDATE: I found the bug and indeed it was in another part of my code.
Yes, this is a valid way. But if it is not a performance bottleneck in your program then it's better to use smart pointers to manage the lifetime of Particle objects.
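For instance, a sketch of the same swap-and-pop loop with owning smart pointers (keeping your Particle::update() interface as an assumption); no manual delete is needed:
#include <memory>
#include <utility>
#include <vector>

std::vector<std::unique_ptr<Particle>> particles;

void updateAll()
{
    std::size_t i = 0;
    while (i < particles.size()) {
        if (particles[i]->update()) {
            std::swap(particles[i], particles.back());  // move the doomed element to the end
            particles.pop_back();                       // the unique_ptr deletes the Particle here
        } else {
            ++i;
        }
    }
}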
Take a look at std::remove_if.
Also, might be good to use a shared pointer as it may make life easier :-)
typedef std::shared_ptr< Particle > ParticlePtr;
auto newend = std::remove_if( particles.begin(), particles.end(), [](ParticlePtr p) {return p->update();} );
particles.erase( newend, particles.end() );
You are iterating over an STL vector, so use iterators; that's what they're for.
std::vector<Particle*>::iterator particle = particles.begin();
while ( particle != particles.end() ) {
bool shouldRemove = (*particle)->update(); // particle is an iterator to Particle*
if ( shouldRemove ) {
delete *particle;                      // free the object before removing its pointer
particle = particles.erase(particle);  // erase returns the next iterator
} else {
++particle;
}
}
Or, even better, use smart pointers with the erase/remove idiom. With raw owning pointers, std::partition is the safer tool: it does what you describe, moving the members to be removed to the back of the vector and returning an iterator to the first of them (std::remove_if would leave that tail in an unspecified state, so deleting through it risks double deletes). Passing that iterator and the vector's end() to erase then removes the whole contiguous block. In your scenario, you would have to delete each element before calling erase:
auto deleteBegin = std::partition(
particles.begin(), particles.end(),
[](Particle* part){ return !part->update(); }); // keep the ones not to be removed
for(auto deleteIt = deleteBegin; deleteIt != particles.end(); ++deleteIt)
delete *deleteIt;
particles.erase(deleteBegin, particles.end());
Or pre C++11:
bool ShouldKeep(Particle* part) {
return !part->update();
}
typedef vector<Particle*> ParticlesPtrVec;
ParticlesPtrVec::iterator deleteBegin = std::partition(
particles.begin(), particles.end(), ShouldKeep);
for(ParticlesPtrVec::iterator deleteIt = deleteBegin;
deleteIt != particles.end(); ++deleteIt)
delete *deleteIt;
particles.erase(deleteBegin, particles.end());
Then test the whole code for performance and optimise wherever the actual bottlenecks are.
I don't see any direct issue in the code. You are probably having some issues with the actual pointers inside the vector.
Try running valgrind on your code to detect any hidden memory access problems, or switch to smart pointers.
I'm checking for memory leaks in my Qt program using QtCreator and Valgrind. I am deleting a few entries in a QHash in my destructor like this:
QHash<QString, QVariant*> m_Hash;
/**
* @brief Destruct a Foo class instance
*/
Foo::~Foo()
{
// Do Cleanup here
// Delete hash leftovers
foreach( QString key, m_Hash.keys() )
{
qDebug() << "Deleting an entry..";
// Delete the hash item
delete m_Hash.take(key);
}
}
If I debug with Valgrind this code is fine and deletes the contents when the destructor is called:
>> Deleting an entry..
>> Deleting an entry..
If I launch with GDB within QtCreator, launch without GDB from QtCreator, or just run my Qt App from the command line I get Segmentation Faults!
Signal name :
SIGSEGV
Signal meaning :
Segmentation fault
If I comment out the 'delete' line then I can run my app just fine using any method, but I do leak memory.
What gives? Does valgrind introduce some sort of delay that allows my destructor to work? How can I solve this?
hyde's answer is correct; however, the simplest possible way to clear your particular hash is as follows:
#include <QtAlgorithms>
Foo::~Foo()
{
qDeleteAll(m_Hash);
m_Hash.clear();
}
Note that the above technique would not work if the key of the hash table was a pointer (e.g. QHash<QString*, QVariant>).
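In that pointer-keyed case you could still delete through the key list; a rough sketch with a hypothetical hash:
// qDeleteAll(container) deletes the container's values, so for pointer keys
// delete through keys() instead.
QHash<QString*, QVariant> pointerKeyedHash;
qDeleteAll(pointerKeyedHash.keys());  // deletes each QString* key
pointerKeyedHash.clear();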
You cannot modify the container you iterate over with foreach. Use iterators instead. Corrected code using the iterator overload QHash::erase(iterator pos):
QHash<QString, QVariant*>::iterator it = m_Hash.begin();
// auto it = m_Hash.begin(); // in C++11
while (it != m_Hash.end()) {
delete it.value();
it = m_Hash.erase(it);
}
Also, any particular reason why you are storing QVariant pointers, instead of values? QVariant is usually suitable for keeping as value, since most data you'd store in QVariant is either implicitly shared, or small.
The documentation does not explicitly mention it, but it is a problem that you are mutating the container you are iterating over.
The code to foreach is here.
Looking at that code, the macro and your code basically do the same thing as if you'd written:
for (QHash<QString, QVariant*>::iterator it = m_Hash.begin(), end = m_Hash.end();
it != end;
++it)
{
delete m_Hash.take(it.key());
}
however, the take member function may trigger an invalidation of existing iterators (it and end), so your iterators might have become dangling, yielding undefined behavior.
Possible solutions:
* do not modify the container you iterator over while iterating
* make sure it is valid before the next iteration begins, and don't store an end-iterator (this solution forbids the use of foreach)
Maybe there is a problem with the foreach keyword. Try replacing:
foreach( QString key, m_Hash.keys() )
{
qDebug() << "Deleting an entry..";
delete m_Hash.take(key); // take changes the m_Hash object
}
with:
for (QHash<QString, QVariant*>::iterator it = m_Hash.begin();
it != m_Hash.end(); ++it)
{
qDebug() << "Deleting an entry..";
delete it.value(); // we delete only what it.value() points to, but the
// m_Hash object remains intact.
}
m_Hash.clear();
This way the hash table remains unchanged while you iterate through it. It is possible that the foreach macro expands into a construct where you "delete the hashtable from under your feet". That is the macro probably creates an iterator, which becomes invalid or "dangling" as a side effect of calling
m_Hash.take(key);