I recently finished fixing a bug in the following function, and the answer surprised me. Here is the function as it was before I found the bug:
void Level::getItemsAt(vector<item::Item>& vect, const Point& pt)
{
vector<itemPtr>::iterator it; // itemPtr is a typedef for a std::tr1::shared_ptr<item::Item>
for(it=items.begin(); it!=items.end(); ++it)
{
if((*it)->getPosition() == pt)
{
item::Item item(**it);
items.erase(it);
vect.push_back(item);
}
}
}
This function finds all Item objects in the 'items' vector that have a certain position, removes them from 'items', and puts them in 'vect'. Later, a function named putItemsAt does the opposite and adds items back to 'items'. The first time through, getItemsAt works fine. After putItemsAt is called, though, the for loop in getItemsAt runs off the end of 'items': 'it' ends up pointing at an invalid Item pointer, and getPosition() segfaults. On a hunch, I changed it!=items.end() to it<items.end(), and it worked. Can anyone tell me why? Looking around SO suggests it involves erase invalidating the iterator, but it still doesn't make sense to me why it would work the first time through.
I'm also curious because I plan to change 'items' from a vector to a list, since list's erase is more efficient. I know I'd have to use != for a list, as it doesn't have a < operator. Would I run into the same problem using a list?
When you call erase(), that iterator becomes invalidated. Since that is your loop iterator, calling the '++' operator on it after invalidating it is undefined behavior. erase() returns a new valid iterator that points to the next item in the vector. You need to use that new iterator from that point onwards in your loop, i.e.:
void Level::getItemsAt(vector<item::Item>& vect, const Point& pt)
{
vector<itemPtr>::iterator it = items.begin();
while( it != items.end() )
{
if( (*it)->getPosition() == pt )
{
item::Item item(**it);
it = items.erase(it);
vect.push_back(item);
}
else
++it;
}
}
You're invoking undefined behavior. Calling erase on a vector invalidates the erased iterator and every iterator after it, and from that point an implementation is free to do whatever it wants.
When you call items.erase(it);, it is now invalid. To conform to the standard, you must assume that it is dead.
You invoke undefined behavior by then reusing that dead iterator as the tracking variable of your for loop: the ++it at the top of the next iteration increments an invalidated iterator.
You can make your code valid by separating the copying from the removal: copy the matching items into vect first, then strip them out of items with std::remove_if and the erase–remove idiom.
class ItemIsAtPoint : public std::unary_function<itemPtr, bool>
{
    Point pt;
public:
    ItemIsAtPoint(const Point& inPt) : pt(inPt) {}
    bool operator()(const itemPtr& input) const
    {
        return input->getPosition() == pt;
    }
};
void Level::getItemsAt(vector<item::Item>& vect, const Point& pt)
{
    ItemIsAtPoint atPoint(pt);
    // Copy the matching items (by value) into vect first...
    for (vector<itemPtr>::const_iterator it = items.begin(); it != items.end(); ++it)
    {
        if (atPoint(*it))
            vect.push_back(**it);
    }
    // ...then remove them from 'items' in one pass (needs <algorithm>).
    items.erase(std::remove_if(items.begin(), items.end(), atPoint), items.end());
}
You can make this a lot prettier if you are using boost::bind, but this works.
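If you can use C++11, the hand-written functor collapses into a lambda. Here is a minimal sketch of the same copy-then-remove approach, assuming the same items/itemPtr members as in the question:
// Sketch only: assumes C++11 and <algorithm>.
void Level::getItemsAt(vector<item::Item>& vect, const Point& pt)
{
    for (const itemPtr& p : items)      // copy the matching Items out first
        if (p->getPosition() == pt)
            vect.push_back(*p);

    items.erase(std::remove_if(items.begin(), items.end(),
                               [&pt](const itemPtr& p) { return p->getPosition() == pt; }),
                items.end());           // then drop them from 'items'
}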
I'll go with Remy Lebeau's explanation about iterator invalidation, and just add that you can make your code valid and asymptotically faster (linear time instead of quadratic time) by using a std::list instead of a std::vector. (With std::list, erasing only invalidates iterators to the erased element, and insertions don't invalidate any iterators.)
You can also predictably identify iterator invalidation while debugging by activating your STL implementation's debug mode. On GCC, you do this with the compiler flag -D_GLIBCXX_DEBUG (which comes with some caveats).
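To make the std::list suggestion concrete, here is a minimal sketch of what the loop could look like, assuming 'items' is changed to a std::list<itemPtr>; erase still returns the iterator to the next element, so the pattern is the same as in Remy Lebeau's answer above:
// Sketch only: assumes 'items' is declared as std::list<itemPtr>.
void Level::getItemsAt(vector<item::Item>& vect, const Point& pt)
{
    std::list<itemPtr>::iterator it = items.begin();
    while (it != items.end())
    {
        if ((*it)->getPosition() == pt)
        {
            vect.push_back(**it);   // copy the Item out before erasing
            it = items.erase(it);   // only the erased node's iterator is invalidated
        }
        else
            ++it;
    }
}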
Related
We're currently using Coverity (Synopsys) static analysis, which runs over the code base and flags lines of code that could cause bugs.
I have this code:
auto it = std::find_if(my_container.my_list.begin(), my_container.my_list.end(),
[&](my_struct temp)
{
return temp._id == id;
});
/*To erase duplicates*/
if (it != my_container.my_list.end())
{
my_container.my_list.erase(it);
}
The erase call is being flagged with "erase invalidates iterator" followed by "Using invalid iterator (INVALIDATE_ITERATOR)". I'm not sure I understand why. The iterator is not used after this code, so it should be safe, right?
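For comparison, a minimal sketch of the erase–remove form, assuming my_container.my_list is a std::vector<my_struct>: it removes every element with a matching _id in one pass and never touches an iterator after the erase.
// Sketch only: assumes a std::vector<my_struct> and that <algorithm> is included.
auto new_end = std::remove_if(my_container.my_list.begin(), my_container.my_list.end(),
                              [&](const my_struct& temp) { return temp._id == id; });
my_container.my_list.erase(new_end, my_container.my_list.end());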
This question already has answers here:
Why does a push_back on an std::list change a reverse iterator initialized with rbegin?
I am currently using std::list in my application and trying to get rid of a problem with going out of range.
I really need to use pop_back in one object's method while iterating. I think it may change std::list::end in some way. Is there any possibility to make it work properly? The code looks similar to this:
Edit: I'm using GCC 6.1
Edit 2: If you have the same problem with reverse_iterator, I recommend redesigning the application as far as possible and using a regular iterator. It's more intuitive.
#include <list>
struct Object;
std::list<Object*> list;
struct Object
{
Object(bool state) : state(state) {}
bool state;
void method()
{
if(state) list.pop_back();
}
};
int main()
{
list.push_back(new Object(false));
list.push_back(new Object(true));
list.push_back(new Object(false));
for(auto it = list.rbegin(); it != list.rend(); ++it)
{
(*it)->method();
}
return 0;
}
This is only a problem when the element being popped is the one your current reverse iterator's base points to. pop_back() always removes the last element (the first one you visit with reverse iterators), so the dangerous case is when you are visiting the element right after it in reverse order: the loop then increments a reverse iterator whose base was invalidated by pop_back(), which is undefined behavior.
If that is your case, one solution is to advance the iterator first, keep a copy of its old value, and call method() through the copy.
The problem is that it (the loop iterator) is invalidated the moment you call pop_back(), causing undefined behavior when you then try to advance it. Advance the iterator before calling your method, keeping a copy for the call:
for(auto it = list.rbegin(); it != list.rend();)
{
auto itr = it++;
(*itr)->method();
}
In my program, I have classes I use for handling projectiles in a game.
class Projectile
{
bool IsActive;
bool GetActive();
//....
};
class Game
{
std::vector<Projectile*> ProjectilesToUpdate;
//....
};
Of course, there is more to it than that, however I'm trying to stay relevant to my current problem.
I want to use std::sort so that all projectiles where IsActive == true are at the beginning and every projectile that isn't active is at the end.
How would I go about doing this?
Basically, you want to create a partition:
std::partition(std::begin(ProjectilesToUpdate),
std::end(ProjectilesToUpdate),
[](Projectile const* p) { return p->GetActive(); }
);
As for the subsidiary questions:
I had to remove the "const" part in the code to make it compile.
That's because your GetActive() method should be const:
bool GetActive() const { return IsActive; }
See Meaning of "const" last in a C++ method declaration?
how can I use this to delete every single object (and pointer to object) that is no longer needed?
You could use smart pointers (such as std::shared_ptr) and no longer worry about delete. Then you could use the erase–remove idiom as follows:
std::vector<std::shared_ptr<Projectile>> ProjectilesToUpdate;
// :
// :
auto it = std::remove_if(
std::begin(ProjectilesToUpdate),
std::end(ProjectilesToUpdate),
[](std::shared_ptr<Projectile> const& p) { return !p->GetActive(); } // mind the negation
);
ProjectilesToUpdate.erase(it, std::end(ProjectilesToUpdate));
Related question: What is a smart pointer and when should I use one?
If you don't want to use smart pointers, you can use the returned iterator, which points to the first element of the second group (i.e. the non-active ones), and iterate until the end of the vector:
auto begin = std::begin(ProjectilesToUpdate);
auto end = std::end(ProjectilesToUpdate);
auto start = std::partition(begin, end,
[](Projectile const* p) { return p->GetActive(); }
);
for (auto it = start; it != end; ++it) {
delete *it;
}
ProjectilesToUpdate.erase(start, end);
Note that I'm not calling erase inside the loop since it invalidates iterators.
And of course, this last solution is more complex than using smart pointers.
So that's what I have tried so far:
class menu_item
{
private:
// ....
std::vector<std::string> options_;
std::vector<std::string>::iterator current_;
public:
menu_item(std::string name, std::vector<std::string> options)
: name_(name), options_(options)
{
current_ = begin(options_);
}
// ....
const int curr_opt_id()
{
return current_ - begin(options_);
}
};
But curr_opt_id() returns -24. Does anybody know what I am doing wrong here?
When you add to a vector, there's a chance that the internal storage will be reallocated which will invalidate all existing iterators. Doing arithmetic on an invalid iterator isn't going to end well.
See Iterator invalidation rules
Iterators of a vector get invalidated upon reallocation, which happens when the current capacity is not sufficient to hold the actual content plus a newly added element.
What is most likely happening here is that the current_ iterator, which is initialized at construction time, gets invalidated by subsequent insertions into options_, which gives you undefined behavior when evaluating the expression:
current_ - begin(options_)
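A minimal sketch of one way to avoid this, assuming the rest of the class stays as shown: store an index instead of an iterator, since an index survives reallocation (the member name current_idx_ is hypothetical):
// Sketch only: needs <string>, <vector> and <cstddef>.
class menu_item
{
private:
    std::string name_;
    std::vector<std::string> options_;
    std::size_t current_idx_;   // an index stays valid across reallocation; an iterator does not
public:
    menu_item(std::string name, std::vector<std::string> options)
        : name_(name), options_(options), current_idx_(0)
    {
    }

    int curr_opt_id() const
    {
        return static_cast<int>(current_idx_);
    }
};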
Here's my code for updating a list of items in a vector and removing some of them:
std::vector<Particle*> particles;
...
int i = 0;
while ( i < particles.size() ) {
bool shouldRemove = particles[ i ]->update();
if ( shouldRemove ) {
delete particles[ i ];
particles[ i ] = particles.back();
particles.pop_back();
} else {
i++;
}
}
When I find an item that should be removed, I replace it with the last item from the vector to avoid potentially copying the rest of the backing array multiple times. Yes, I know it is premature optimization...
Is this a valid way of removing items from the vector? I get some occasional (!) crashes somewhere around this area but can't track them down precisely (LLDB fails to show me the line), so I would like to make sure this part is OK. Or is it... ?
UPDATE: I found the bug and indeed it was in another part of my code.
Yes, this is a valid way. But if it is not a performance bottleneck in your program then it's better to use smart pointers to manage the lifetime of Particle objects.
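For example, here is a minimal sketch of the same swap-and-pop loop with std::unique_ptr, assuming C++11 and the Particle::update() contract from the question; the manual delete disappears because destroying the unique_ptr frees the Particle:
// Sketch only: needs <memory>, <utility> and <vector>.
std::vector<std::unique_ptr<Particle>> particles;

std::size_t i = 0;
while (i < particles.size()) {
    if (particles[i]->update()) {                  // true means "remove this particle"
        std::swap(particles[i], particles.back()); // move the doomed particle to the back
        particles.pop_back();                      // its unique_ptr destroys it here
    } else {
        ++i;
    }
}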
Take a look at std::remove_if.
Also, it might be good to use a shared pointer, as it may make life easier :-)
typedef std::shared_ptr< Particle > ParticlePtr;
auto newend = std::remove_if( particles.begin(), particles.end(), [](ParticlePtr p) {return p->update();} );
particles.erase( newend, particles.end() );
You are iterating over an STL vector, so use iterators; that's what they're for.
std::vector<Particle*>::iterator particle = particles.begin();
while ( particle != particles.end() ) {
    bool shouldRemove = (*particle)->update();   // the iterator refers to a Particle*, so dereference twice
    if ( shouldRemove ) {
        delete *particle;                        // free the Particle itself
        particle = particles.erase(particle);    // erase returns the iterator to the next element
    } else {
        ++particle;
    }
}
Or, even better, use smart pointers, in which case the erase/remove idiom just works (as shown above). With raw owning pointers there is a trap: std::remove_if moves the elements you want to keep to the front and returns an iterator to the start of the leftover tail, but the tail elements are left in a valid but unspecified state, so you cannot safely delete through them (a kept pointer may show up there again). std::partition only reorders the elements, so every pointer is still present exactly once: partition on "should keep", delete the tail, then erase it (use std::stable_partition if the relative order of the kept particles matters):
auto deleteBegin = std::partition(
    particles.begin(), particles.end(),
    [](Particle* part){ return !part->update(); });   // true = keep this particle
for(auto deleteIt = deleteBegin; deleteIt != particles.end(); ++deleteIt)
    delete *deleteIt;
particles.erase(deleteBegin, particles.end());
Or, pre-C++11:
bool ShouldKeep(Particle* part) {
    return !part->update();   // true = keep this particle
}
typedef vector<Particle*> ParticlesPtrVec;
ParticlesPtrVec::iterator deleteBegin = std::partition(
    particles.begin(), particles.end(), ShouldKeep);
for(ParticlesPtrVec::iterator deleteIt = deleteBegin;
    deleteIt != particles.end(); ++deleteIt)
    delete *deleteIt;
particles.erase(deleteBegin, particles.end());
Then test the whole code for performance and optimise wherever the actual bottlenecks are.
I don't see any direct issue in the code. You are probably having some issues with the actual pointers inside the vector.
Try running valgrind on your code to detect any hidden memory access problems, or switch to smart pointers.