I am fairly new to C++, but I have a question about vectors. My goal is to remove an element from the vector with erase once I hit my out-of-bounds condition. This all seems to work fine, except that when I call erase, it points at the first element but deletes the last. Basically, the loop keeps iterating and deletes every element in my vector. I am using push_back to add Lasers to the vector elsewhere in the code.
std::vector<Laser> m_Lasers;

for (int i = 0; i != m_Lasers.size(); i++)
{
    m_Lasers[i].ClearLaser();
    if (m_Lasers[i].GetX() > m_ScreenWidth || m_Lasers[i].GetX() < 0 || m_Lasers[i].GetY() < 0)
    {
        // erase this element from the vector
        m_Lasers.erase(m_Lasers.begin() + i);
        i--;
    }
}
My operator= is defined as:
void operator=(const Laser& L){};
in my Laser class. I think my issue may be with this.
Thank you so much for your help!
What vector::erase does is move all elements after the erased element forward using assignment, and then destroy the last element. It has to do this to maintain the invariant that a vector's elements are stored contiguously.
For example, given a vector v of size 4, erasing v[1] essentially does v[1] = v[2]; v[2] = v[3]; v.pop_back(); (These are actually move assignments; std::move is omitted for clarity.)
If your assignment is a no-op (which, by the way, is not allowed by erase's requirements), then this just ends up destroying the last element.
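The simplest fix is to delete that empty operator= entirely and let the compiler generate one that copies every member; if you do keep a user-defined one, it has to actually assign the members. A minimal sketch, assuming Laser stores its position in members named m_X and m_Y (names assumed for illustration only):

#include <cstdint>

class Laser {
public:
    // Option 1: write no operator= at all and rely on the compiler-generated one,
    // which copies every member.

    // Option 2: if you do write one, make it actually copy the state.
    Laser& operator=(const Laser& other) {
        m_X = other.m_X;
        m_Y = other.m_Y;
        return *this;
    }

private:
    // Hypothetical members, mirroring GetX()/GetY() from the question.
    float m_X = 0.0f;
    float m_Y = 0.0f;
};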
I am trying to do a double loop across a std::vector to explore all combinations of items in the vector. If a result is good, I add it to the vector for another pass. This is for an association-rule problem, but I made a smaller demonstration for this question. It seems as though when I push_back, it sometimes changes the vector such that the original iterator no longer works. For example:
std::vector<int> nums{1,2,3,4,5};
auto nextStart = nums.begin();

while (nextStart != nums.end()){
    auto currentStart = nextStart;
    auto currentEnd = nums.end();
    nextStart = currentEnd;

    for (auto a = currentStart; a != currentEnd-1; a++){
        for (auto b = currentStart+1; b != currentEnd; b++){
            auto sum = (*a) + (*b);
            if (sum < 10) nums.push_back(sum);
        }
    }
}
On some iterations, currentStart points to a location that is outside the array and provides garbage data. What is causing this and what is the best way to avoid this situation? I know that modifying something you iterate over is an invitation for trouble...
nums.push_back(sum);
push_back invalidates all existing iterators to the vector if push_back ends up reallocating the vector.
That's just how a vector works. Initially some additional space is reserved for the vector's growth: the internal buffer that holds its contents has some extra room to spare. But when that buffer is full, the next call to push_back allocates a bigger buffer, moves the existing contents into it, and then deletes the old buffer.
The shown code creates and uses iterators for the vector, but any call to push_back will invalidate the whole lot, and the next invalidated vector dereference results in undefined behavior.
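A tiny sketch illustrating the rule (the capacity comparison is only for demonstration; in real code, simply treat every iterator as invalid after a push_back that might exceed the current capacity):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    auto it = v.begin();
    auto oldCapacity = v.capacity();

    v.push_back(4); // may reallocate

    if (v.capacity() != oldCapacity) {
        // The buffer was reallocated, so 'it' (and every other iterator into v) is invalid.
        std::cout << "reallocated: old iterators must not be used\n";
        it = v.begin(); // re-obtain a valid iterator
    }
}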
You have two basic options:
Replace the vector with some other container that does not invalidate its existing iterators when additional values get added to the container
Reimplement the entire logic using vector indexes instead of iterators (see the sketch below).
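A minimal sketch of the index-based rewrite (the second option) of the loop from the question; positions survive reallocation where iterators do not:

#include <cstddef>
#include <vector>

int main() {
    std::vector<int> nums{1, 2, 3, 4, 5};
    std::size_t nextStart = 0;

    while (nextStart != nums.size()) {
        std::size_t currentStart = nextStart;
        std::size_t currentEnd = nums.size();
        nextStart = currentEnd;

        for (std::size_t a = currentStart; a != currentEnd - 1; a++) {
            for (std::size_t b = currentStart + 1; b != currentEnd; b++) {
                auto sum = nums[a] + nums[b];
                if (sum < 10) nums.push_back(sum); // indexes stay valid even if this reallocates
            }
        }
    }
}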
I have to copy the first size elements from a set of Solution (a class) named population into an array of Solution named parents. I have some problems with iterators because I'm doing a hybrid between a normal for loop and a loop with iterators. The idea is this: at the i-th iteration of the for loop I declare a new iterator pointing to the beginning of population, then I advance this iterator to the i-th position, take that element, and copy it into parents[i].
Solution* parents;             //it is filled somewhere else
std::set<Solution> population; //it is filled somewhere else

for (int i = 0; i < size; i++) {
    auto it = population.begin();
    advance(it, i);
    parents[i] = *it;
}
Two error messages pop up: 'Expression: cannot dereference end map/set iterator'
and 'Expression: cannot advance end map/set iterator'.
Any idea how to pull off this trick? I know it's kind of bad mixing an array and a set; should I use a vector instead of an array?
Use std::copy_n:
#include <algorithm>
#include <set>

extern Solution* parents;             //it is filled somewhere else
extern std::set<Solution> population; //it is filled somewhere else
std::copy_n(population.begin(), size, parents);
It seems like size may be incorrectly set. To ensure that your code behaves as expected, you should just use the collection's size directly:
auto it = population.begin();
for (int i = 0; i < population.size(); i++) {
    parents[i] = *it;
    ++it;
}
This can also be solved with a much simpler expression:
std::copy(population.begin(), population.end(), parents);
I have to copy the first size element from a set [..] to an array
You can use std::copy_n.
for (int i = 0; i < size; i++) {
    auto it = population.begin();
    advance(it, i);
The problem with this is that you walk through the node-based set from its beginning on every iteration of the loop. That turns the copy operation from its normal linear complexity into quadratic complexity.
'Expression: cannot dereference end map/set iterator'
The problem here appears to be that your set doesn't contain at least size elements. You cannot copy size elements if there aren't that many; I suggest copying fewer elements when the set is smaller.
should I use vector instead of array?
Probably. Is the array very large? Is the size of the vector not known at compile time? If so, use a vector.
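Putting both answers together, here is a hedged sketch that copies at most as many elements as the set actually holds (the std::min clamp is an addition not in the original answers, and Solution here is just a placeholder for the asker's class):

#include <algorithm>
#include <cstddef>
#include <set>
#include <vector>

struct Solution {
    int value = 0;
    bool operator<(const Solution& other) const { return value < other.value; }
};

int main() {
    std::set<Solution> population{{1}, {2}, {3}};
    std::size_t size = 5; // requested count, possibly larger than the set

    // Copy min(size, population.size()) elements into a destination of exactly that size.
    std::vector<Solution> parents(std::min(size, population.size()));
    std::copy_n(population.begin(), parents.size(), parents.begin());
}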
#include <iostream>
#include <vector>

int main() {
    std::vector<std::vector<int>> v;
    v.push_back({1,2,3,4});
    auto it = v.at(0).begin();
    int size = v.at(0).size();
    std::cout << size << std::endl;
    for (int i = 0; i < size; ++it)
    {
        v.push_back({5,6,7,8});
        //std::cout<<*it<<std::endl;
    }
    return 0;
}
The iterator is broken when I push some elements into the outer container.
What should I do if I really want to iterate over an element of the outer container while at the same time pushing back new elements? Many thanks!
When the outer vector resizes, it must do one of two things:
copy the elements
move the elements
It can only move the elements if the type has non-throwing move semantics (its move constructor is marked noexcept, etc.).
In this case, the element is a vector holding ints, and the property depends recursively on its elements. Since int cannot throw, the inner vector is noexcept-movable too. The standard requires iterators into such a vector to remain valid when it is moved.
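A quick way to check that property on your implementation is a static_assert (just a sketch):

#include <type_traits>
#include <vector>

static_assert(std::is_nothrow_move_constructible<std::vector<int>>::value,
              "vector<int> can be moved without throwing, so reallocating the "
              "outer vector moves the inner vectors instead of copying them");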
You say the iterator is broken in your example, but the real problem is a bug in your code:
for (int i = 0; i < size; ++it) // << HERE
{
    v.push_back({5,6,7,8});
    //std::cout<<*it<<std::endl;
}
You don't increment the loop variable; you increment the iterator, so this is an infinite loop.
What should I do if I really want to iterate over an element of the outer container while at the same time pushing back new elements?
I see 3 possible solutions:
use a container that does not relocate its elements when new ones are added, for example std::map<size_t,std::vector<int>>; be aware of the different memory usage and access speed
use an index into the inner array instead of an iterator (see the sketch after this list)
add data to a temporary vector inside the loop and, after you are done, append it all to the original vector
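A minimal sketch of the second option, reusing the layout from the question (re-indexing through the outer vector on every use instead of holding an iterator):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<std::vector<int>> v;
    v.push_back({1, 2, 3, 4});
    std::size_t size = v.at(0).size();

    for (std::size_t i = 0; i < size; ++i) {
        v.push_back({5, 6, 7, 8});          // may reallocate the outer vector
        std::cout << v.at(0).at(i) << '\n'; // re-index each time, so nothing stale is kept
    }
}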
I've written some code to decrease the capacity of a templated container class. After an element is removed from the container, the erase function checks whether at most 25% of the total space is in use, and whether halving the capacity would still keep it at or above the default size I've set. If both checks pass, the downsize function runs. However, if this happens while I'm in the middle of a const_iterator loop, I get a segfault.
edit again: I'm thinking it's because the const_iterator pointer is pointing to the old array and needs to be pointed to the new one created by downsize()...now how to go about doing that...
template <class T>
void sorted<T>::downsize(){
    // Run the same process as resize, except
    // in reverse (sort of).
    int newCapacity = (m_capacity / 2);
    T *temp_array = new T[newCapacity];

    for (int i = 0; i < m_size; i++)
        temp_array[i] = m_data[i];

    // Frees memory, points m_data at the
    // new, smaller array, sets the capacity
    // to the proper (lower) value.
    delete [] m_data;
    m_data = temp_array;
    setCap(newCapacity);

    cout << "Decreased array capacity to " << newCapacity << "." << endl;
}
// Implementation of the const_iterator erase method.
template <class T>
typename sorted<T>::const_iterator sorted<T>::erase(const_iterator itr){
    // This section is reused from game.cpp, a file provided in the
    // Cruno project. It handles erasing the element pointed to
    // by the constant iterator.
    T *end = &m_data[m_capacity]; // one past the end of data
    T *ptr = itr.m_current;       // element to erase

    // to erase element at ptr, shift elements from ptr+1 to
    // the end of the array down one position
    while ( ptr+1 != end ) {
        *ptr = *(ptr+1);
        ptr++;
    }
    m_size--;

    // Once the element is removed, check to
    // see if a size reduction of the array is
    // necessary.
    // Initialized some new values here to make
    // sure downsize only runs when the correct
    // conditions are met.
    double capCheck = m_capacity;
    double sizeCheck = m_size;
    double usedCheck = (sizeCheck / capCheck);
    int boundCheck = (m_capacity / 2);

    if ((usedCheck <= ONE_FOURTH) && (boundCheck >= DEFAULT_SIZE))
        downsize();

    return itr;
}
// Chunk from main that erases.
int i = 0;
for (itr = x.begin(); itr != x.end(); itr++) {
    if (i < 7) x.erase(itr);
    i++;
}
To prevent issues with invalidated iterators during an erase loop, you can use:
x.erase(itr++);
instead of separate erase and increment calls (obviously, if you're not erasing everything you loop over, you'd need an else branch to increment past the non-erased items). Note this is a case where you need to use post-increment rather than pre-increment.
The whole thing looks a bit inefficient, though. The way you convert to an array suggests it will only work with certain container types anyway. If you expect lots of erasing from the middle of the container, you might be able to avoid the memory issue simply by choosing a different container; a list, maybe.
If you choose vector, you might find that calling shrink_to_fit is what you want.
Also note that erase has a return value that will point to the (new) location of the element after the erased one.
If you go down the line of returning an iterator into a newly created, down-sized version of the container, note that comparing it to the original container's end() iterator wouldn't work.
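For comparison, this is what the erase-return-value pattern mentioned above looks like with a standard container (a sketch using std::vector rather than the asker's sorted<T> class):

#include <vector>

int main() {
    std::vector<int> x{0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    int i = 0;

    for (auto itr = x.begin(); itr != x.end(); ) {
        if (i < 7)
            itr = x.erase(itr); // erase returns an iterator to the element after the erased one
        else
            ++itr;              // only advance when nothing was erased
        i++;
    }
    // x now holds {7, 8, 9}
}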
I figured out the segfault problem!
What was going on was that my x.end() was using the size of the array (NOT the capacity), where the size is the number of data elements stored in the array, instead of the capacity which is actually the size of the array. So when it was looking for the end of the array, it was seeing it as numerous elements before the actual end.
Now on to more interesting problems!
So I have a vector of unsigned ints (vector<unsigned int>, called vector1). I have another vector of a struct I created (vector<struct>, called vector2). vector1 holds integers that are indexes into vector2. For example, let's say that vector1 = {5, 17, 18, 19}. That means vector2.at(5) == vector2.at(vector1.at(0)).
In the struct, I have a bool variable called var. In most cases, var is false. I want to delete all of the elements in vector1 that have var = true.
What I did was:
for (unsigned int i = 0; i < vector1.size(); i++)
{
    if (vector2.at(vector1.at(i)).var)
        vector1.erase(vector1.begin() + i);
}
The only problem with this is that it does not delete all of the true elements. I have to run the for loop multiple times for all the values to be deleted. Is this the correct behavior? If it is not, where did I go wrong?
You have to use the erase-remove idiom to delete elements from a vector.
v.erase(std::remove(v.begin(), v.end(), value), v.end());
std::remove shifts the elements you want to keep to the front of the vector and returns an iterator to the new logical end; erase then removes everything from that point up to the real end.
Alternatively, you can keep a temporary vector that is a copy of vector1, iterate over the copy in the for loop, and erase from vector1.
You are erasing elements from the vector while at the same time iterating over it. So whenever you erase an element, you jump over the next one, because you increase i right after having shortened the vector at position i (it would be even worse had you used a proper iterator loop instead of an index loop). The best way to do this is to separate the two operations: first "mark" (or rather reorder) the elements for removal, then erase them from the vector.
In practice this is best done with the erase-remove idiom (vector.erase(std::remove(...), vector.end())), which first uses std::remove(_if) to reorganize the data so the non-removed elements sit at the beginning and returns the new end of that range, which is then used to actually delete the removed elements (effectively just shortening the whole vector) via std::vector::erase. With a C++11 lambda, the removal condition can be expressed quite easily:
vector1.erase(std::remove_if(                     // erase range starting here
                  vector1.begin(), vector1.end(), // iterate over whole vector
                  [&vector2](unsigned int i)      // call this for each element
                  { return vector2.at(i).var; }), // return true to remove
              vector1.end());                     // erase up to old end
EDIT: And by the way, as always, consider whether you really need std::vector::at instead of plain [], and keep in mind the implications of both (in particular the bounds-checking overhead of the former and the lack of any check in the latter).
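For reference, a self-contained sketch of that approach; the struct name Item and its member var are stand-ins for the asker's unnamed struct:

#include <algorithm>
#include <vector>

struct Item { bool var = false; }; // stand-in for the asker's struct

int main() {
    std::vector<Item> vector2(20);
    vector2.at(5).var = true; // mark the entry that vector1's first index refers to

    std::vector<unsigned int> vector1{5, 17, 18, 19};

    vector1.erase(std::remove_if(vector1.begin(), vector1.end(),
                                 [&vector2](unsigned int i) { return vector2.at(i).var; }),
                  vector1.end());
    // vector1 now holds {17, 18, 19}
}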