Why iterator is not dereferenced as an lvalue - C++

Apologies if my question does not contain all relevant info. Please comment and I will amend accordingly.
I use CLion on Win7 with MinGW and gcc
I have been experimenting with circular buffers and came across boost::circular_buffer, but for the size of my project I want to use the circular buffer by Pete Goodliffe, which seems like a solid implementation in just one .hpp.
Note: I am aware of how to reduce Boost dependencies thanks to Boost dependencies and bcp.
However, the following example with Pete's implementation does not behave as expected, i.e. the result of std::adjacent_difference(cbuf.begin(),cbuf.end(),df.begin()); comes out empty. I would like to understand why and, if possible, correct its behaviour.
Follows a MWE:
#include "circular.h"
#include <iostream>
#include <algorithm>
typedef circular_buffer<int> cbuf_type;
void print_cbuf_contents(cbuf_type &cbuf){
    std::cout << "Printing cbuf size("
              << cbuf.size() << "/" << cbuf.capacity() << ") contents...\n";
    for (size_t n = 0; n < cbuf.size(); ++n)
        std::cout << " " << n << ": " << cbuf[n] << "\n";
    if (!cbuf.empty()) {
        std::cout << " front()=" << cbuf.front()
                  << ", back()=" << cbuf.back() << "\n";
    } else {
        std::cout << " empty\n";
    }
}
int main()
{
    cbuf_type cbuf(5);
    for (int n = 0; n < 3; ++n) cbuf.push_back(n);
    print_cbuf_contents(cbuf);

    cbuf_type df(5);
    std::adjacent_difference(cbuf.begin(), cbuf.end(), df.begin());
    print_cbuf_contents(df);
}
Which prints the following:
Printing cbuf size(3/5) contents...
0: 0
1: 1
2: 2
front()=0, back()=2
Printing cbuf size(0/5) contents...
empty
Unfortunately, being new to C++, I can't figure out why the df.begin() iterator is not dereferenced as an lvalue.
I suspect the culprit is (or I don't completely understand) the dereference operator of circular_buffer_iterator on line 72 of Pete's circular.h:
elem_type &operator*() { return (*buf_)[pos_]; }
Any help is very much appreciated.

The iterator you pass as the output iterator is dereferenced and treated as an lvalue, and most probably the data you expect is actually stored in the circular buffer's storage.
The problem is that, apart from the actual storage, most containers also maintain some internal bookkeeping state (for instance: how many elements are in the buffer, how much free space is left, etc.).
Dereferencing and incrementing the iterator doesn't update that internal state, so the container does not "know" that new data has been added.
Consider the following code:
std::vector<int> v;
v.reserve(3);
auto i = v.begin();
*(i++) = 1; // this simply writes to memory
*(i++) = 2; // but doesn't update the internal
*(i++) = 3; // state of the vector
assert(v.size() == 0); // so the vector still "thinks" it's empty
Using push_back would work as expected:
std::vector<int> v;
v.reserve(3);
v.push_back(1); // adds to the storage AND updates internal state
v.push_back(2);
v.push_back(3);
assert(v.size() == 3); // so the vector "knows" it has 3 elements
In your case, you should use std::back_inserter, an output iterator that calls push_back on the container every time a value is assigned through it:
std::adjacent_difference(
cbuf.begin(), cbuf.end(),
std::back_inserter(df));

std::adjacent_difference writes to the result iterator. In your case, that result iterator points into df, which has a size of 0 and a capacity of 5. Those writes will be into the reserved memory of df, but will not change the size of the container, so size will still be 0, and the first 3 ints of the reserved container space will have your difference. In order to see the results, the container being written into must already have data stored in the slots being written to.
So to see the results, you must either put data into the circular buffer before computing the difference, or resize the container to the appropriate size afterwards (based on the iterator returned by adjacent_difference).
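For illustration, here is a minimal sketch using std::vector (not the circular_buffer from the question) showing both ways of receiving the output: through std::back_inserter, and into a pre-sized destination that is then trimmed using the iterator returned by std::adjacent_difference:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> src{0, 1, 2};

    // Approach 1: let back_inserter grow the destination as values are written.
    std::vector<int> d1;
    std::adjacent_difference(src.begin(), src.end(), std::back_inserter(d1));

    // Approach 2: make the destination large enough up front, then shrink it
    // to the range actually written, using the returned iterator.
    std::vector<int> d2(src.size());
    auto end = std::adjacent_difference(src.begin(), src.end(), d2.begin());
    d2.resize(end - d2.begin());

    for (int x : d1) std::cout << x << ' ';   // prints: 0 1 1
    std::cout << '\n';
    for (int x : d2) std::cout << x << ' ';   // prints: 0 1 1
    std::cout << '\n';
}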

Related

list<pair<float,float>> iterating through a list that holds pairs?

As a part of runtime analysis I've got a small game that after calculating every Frame puts a new element in this list:
typedef std::list<std::pair<float, float>> PairList;
PairList Frames; //in pair: index 0 = elapsed time, index 1 = frames
The txt file is later used to draw a graph.
I decided to use a list, because while playing I do not need to process data held in the list and I think lists are the fastest containers when it comes to only adding or deleting items. As a next step I want to write the frames in an external txt file.
void WriteStats(PairList &pairList)
{
// open a file in write mode.
std::ofstream outfile;
outfile.open("afile.dat");
PairList::iterator itBegin = pairList.begin();
PairList::iterator itEnd = pairList.end();
for (auto it = itBegin; it != itEnd; ++it)
{
outfile << *it.first << "\t" << *it.second;
}
outfile.close();
}
With normal lists, dereferencing it should return the item, right?
Except Visual Studio says pair<float, float>* does not have a member called first.
How do I do it then, when access via my iterator does not work? Is it because I pass in the reference to the list?
*it.first is parsed as *(it.first).
You need (*it).first or, better yet it->first.
Or, better yet, use a range-based for:
for (auto& elem : pairList)
{
    float a = elem.first;
}
I decided to use a list, because [...] I think lists are the fastest containers when it comes to only adding or deleting items.
The first go-to container should be std::vector. In practice it will outperform std::list even on algorithms that on paper should be faster on std::list, because of cache locality. So I would test your theory with some good old benchmarking if performance is a concern.
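As a rough illustration (not from the original answer), a minimal timing sketch along these lines could look like the following; the exact numbers will of course depend on the compiler and machine:
#include <chrono>
#include <iostream>
#include <list>
#include <utility>
#include <vector>

int main()
{
    const int N = 1000000;

    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::pair<float, float>> v;
    for (int i = 0; i < N; ++i) v.push_back(std::make_pair(float(i), float(i)));
    auto t1 = std::chrono::steady_clock::now();

    std::list<std::pair<float, float>> l;
    for (int i = 0; i < N; ++i) l.push_back(std::make_pair(float(i), float(i)));
    auto t2 = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> vectorTime = t1 - t0;
    std::chrono::duration<double, std::milli> listTime = t2 - t1;
    std::cout << "vector push_back: " << vectorTime.count() << " ms\n"
              << "list push_back:   " << listTime.count() << " ms\n";
}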
The issue is one of operator precedence. Specifically, the member access operator '.' has higher precedence than indirection '*' so *it.first is effectively parsed as...
*(it.first)
Hence the error. Instead use...
it->first
Use a range-based for loop instead of messing with iterators:
void WriteStats(const PairList &pairList)
{
    // open a file in write mode.
    std::ofstream outfile("afile.dat");
    for (const auto &elem : pairList) {
        outfile << elem.first << "\t" << elem.second << '\n';
    }
}

How to chain delete pairs from a vector in C++?

I have this text file where I am reading each line into a std::vector<std::pair>,
handgun bullets
bullets ore
bombs ore
turret bullets
The first item depends on the second item. And I am writing a delete function where, when the user inputs an item name, it deletes the pair containing the item as second item. Since there is a dependency relationship, the item depending on the deleted item should also be deleted since it is no longer usable. For example, if I delete ore, bullets and bombs can no longer be usable because ore is unavailable. Consequently, handgun and turret should also be removed since those pairs are dependent on bullets which is dependent on ore i.e. indirect dependency on ore. This chain should continue until all dependent pairs are deleted.
I tried to do this for the current example and came up with the following pseudocode,
for vector_iterator_1 = vector.begin to vector.end
{
    if user_input == vector_iterator_1->second
    {
        for vector_iterator_2 = vector.begin to vector.end
        {
            if vector_iterator_1->first == vector_iterator_2->second
            {
                delete pair_of_vector_iterator_2
            }
        }
        delete pair_of_vector_iterator_1
    }
}
Not a very good algorithm, but it explains what I intend to do. In the example, if I delete ore, then bullets and bombs gets deleted too. Subsequently, pairs depending on ore and bullets will also be deleted (bombs have no dependency). Since, there is only one single length chain (ore-->bullets), there is only one nested for loop to check for it. However, there may be zero or large number of dependencies in a single chain resulting in many or no nested for loops. So, this is not a very practical solution. How would I do this with a chain of dependencies of variable length? Please tell me. Thank you for your patience.
P. S. : If you didn't understand my question, please let me know.
One (naive) solution:
Create a queue of items-to-delete
Add in your first item (user-entered)
While(!empty(items-to-delete)) loop through your vector
Every time you find your current item as the second-item in your list, add the first-item to your queue and then delete that pair
Easy optimizations:
Ensure you never add an item to the queue twice (hash table/etc)
Personally, I would just use the standard library for removal:
vec.erase(std::remove_if(vec.begin(), vec.end(), [](const std::pair<std::string, std::string>& p){ return p.second == "ore"; }), vec.end());
remove_if() give you an iterator to the elements matching the criteria, so you could have a function that takes in a .second value to erase, and erases matching pairs while saving the .first values in those being erased. From there, you could loop until nothing is removed.
For your solution, it might be simpler to use find_if inside a loop, but either way, the standard library has some useful things you could use here.
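As a rough sketch of that find_if-in-a-loop idea (names here are illustrative, not from the original post):
#include <algorithm>
#include <queue>
#include <string>
#include <utility>
#include <vector>

void chain_delete(std::vector<std::pair<std::string, std::string>>& v,
                  const std::string& start)
{
    std::queue<std::string> pending;
    pending.push(start);
    while (!pending.empty())
    {
        std::string current = pending.front();
        pending.pop();
        // Repeatedly look for a pair that depends on `current` and erase it,
        // remembering its .first so its own dependents get removed later.
        for (;;)
        {
            auto it = std::find_if(v.begin(), v.end(),
                [&current](const std::pair<std::string, std::string>& p)
                { return p.second == current; });
            if (it == v.end()) break;
            pending.push(it->first);
            v.erase(it);
        }
    }
}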
I couldn't help but write a solution using standard algorithms and data structures from the C++ standard library. I'm using a std::set to remember which objects we delete (I prefer it since it has logarithmic access and does not contain duplicates). The algorithm is basically the same as the one proposed by @Beth Crane.
#include <iostream>
#include <vector>
#include <utility>
#include <algorithm>
#include <string>
#include <set>
int main()
{
    std::vector<std::pair<std::string, std::string>> v
        { {"handgun", "bullets"},
          {"bullets", "ore"},
          {"bombs", "ore"},
          {"turret", "bullets"} };

    std::cout << "Initially: " << std::endl << std::endl;
    for (auto && elem : v)
        std::cout << elem.first << " " << elem.second << std::endl;

    // let's remove "ore", this is our "queue"
    std::set<std::string> to_remove{"ore"}; // unique elements
    while (!to_remove.empty()) // loop as long as we still have elements to remove
    {
        // "pop" an element, then remove it via the erase-remove idiom
        // and a bit of lambdas
        std::string obj = *to_remove.begin();
        v.erase(
            std::remove_if(v.begin(), v.end(),
                [&to_remove](const std::pair<std::string, std::string>& elem) -> bool
                {
                    // is it on the first position?
                    if (to_remove.find(elem.first) != to_remove.end())
                    {
                        return true;
                    }
                    // is its dependency in the queue?
                    if (to_remove.find(elem.second) != to_remove.end())
                    {
                        // add the first element to the queue
                        to_remove.insert(elem.first);
                        return true;
                    }
                    return false;
                }
            ),
            v.end()
        );
        to_remove.erase(obj); // delete it from the queue once we're done with it
    }

    std::cout << std::endl << "Finally: " << std::endl << std::endl;
    for (auto && elem : v)
        std::cout << elem.first << " " << elem.second << std::endl;
}
#vsoftco I looked at Beth's answer and went off to try the solution. I did not see your code until I came back. On closer examination of your code, I see that we have done pretty much the same thing. Here's what I did,
std::string Node;
std::cout << "Enter Node to delete: ";
std::cin >> Node;

std::queue<std::string> Deleted_Nodes;
Deleted_Nodes.push(Node);

while (!Deleted_Nodes.empty())
{
    std::vector<std::pair<std::string, std::string>>::iterator Current_Iterator = Pair_Vector.begin();
    while (Current_Iterator != Pair_Vector.end())
    {
        if (Deleted_Nodes.front() == Current_Iterator->second)
        {
            Deleted_Nodes.push(Current_Iterator->first);
            // erase() invalidates iterators at and after the erased element,
            // so continue from the iterator it returns
            Current_Iterator = Pair_Vector.erase(Current_Iterator);
        }
        else if (Deleted_Nodes.front() == Current_Iterator->first)
        {
            Current_Iterator = Pair_Vector.erase(Current_Iterator);
        }
        else
        {
            ++Current_Iterator;
        }
    }
    Deleted_Nodes.pop();
}
To answer your question in the comment of my question, that's what the else if statement is for. It's supposed to be a directed graph so it removes only next level elements in the chain. Higher level elements are not touched.
1 --> 2 --> 3 --> 4 --> 5
Remove 5: 1 --> 2 --> 3 --> 4
Remove 3: 1 --> 2 4 5
Remove 1: 2 3 4 5
Although my code is similar to yours, I am no expert in C++ (yet). Tell me if I made any mistakes or overlooked anything. Thanks. :-)

Setting vector elements in range-based for loop [duplicate]

This question already has answers here:
How can I modify values in a map using range based for loop?
(4 answers)
Closed 1 year ago.
I have come across what I consider weird behaviour with the C++11 range-based for loop when assigning to elements of a dynamically allocated std::vector. I have the following code:
int arraySize = 1000;
std::string fname = "aFileWithLoadsOfNumbers.bin";
CTdata = new std::vector<short int>(arraySize, 0);
std::ifstream dataInput(fname.c_str(), std::ios::binary);
if(dataInput.is_open())
{
    std::cout << "File opened successfully" << std::endl;
    for(auto n: *CTdata)
    {
        dataInput.read(reinterpret_cast<char*>(&n), sizeof(short int));
        // If I do "cout << n << endl;" here, I get sensible results
    }
    // However, if I do something like "cout << CTdata->at(500) << endl;" here, I get 0
}
else
{
    std::cerr << "Failed to open file." << std::endl;
}
If I change the loop to a more traditional for(int i=0; i<arraySize; i++) and use &CTdata->at(i) in place of &n in the read function, things do as I would expect.
What am I missing?
Change this loop statement
for(auto n: *CTdata)
to
for(auto &n : *CTdata)
that is, you have to use a reference to the elements of the vector.
You have to write
for( auto& n : *CTdata )
because auto n means short int n, when you need short int& n.
I recommend reading about the difference between decltype and auto.
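For example, here is a small sketch (not from the original answers) showing the difference:
std::vector<short int> data(3, 0);

for (auto n : data)      // n is a copy of each element
    n = 1;               // modifies the copy; data stays {0, 0, 0}

for (auto &n : data)     // n refers to each element
    n = 1;               // modifies the vector; data becomes {1, 1, 1}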
The reason your loop fails is because you reference vector elements by value. However, in this case you can eliminate the loop altogether:
dataInput.read(reinterpret_cast<char*>(CTdata->data()), arraySize*sizeof(short int));
This reads the content into the vector in a single call.
Vlad's answer perfectly answers your question.
However, consider this for a moment. Instead of filling your array with zeroes from the beginning, you could call vector<>::reserve(), which pre-allocates your backing buffer without changing the visible size of the vector.
You can then call vector<>::push_back() like normal, with no performance penalty, while keeping the logic of your source code clear. Coming from a C# background, looping over your vector like that looks like an abomination to me, not to mention you set each element twice. Plus, if at any point your element generation fails, you'll have a bunch of zeroes that weren't supposed to be there in the first place.
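A minimal sketch of that reserve()/push_back() pattern, assuming the same file and variables as in the question but using a plain vector rather than a pointer to one:
std::vector<short int> CTdata;
CTdata.reserve(arraySize);                     // allocate capacity, size() stays 0
std::ifstream dataInput(fname.c_str(), std::ios::binary);
short int value;
while (CTdata.size() < static_cast<size_t>(arraySize)
       && dataInput.read(reinterpret_cast<char*>(&value), sizeof value))
{
    CTdata.push_back(value);                   // size() grows only for values actually read
}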

Finding the intersection of two vectors of strings

I have two vectors of strings and want to find the strings which are present in both, filling a third vector with the common elements. EDIT: I've added the complete code listing with the respective output so that things are clear.
std::cout << "size " << m_HLTMap->size() << std::endl;
/// Vector to store the wanted, present and found triggers
std::vector<std::string> wantedTriggers;
wantedTriggers.push_back("L2_xe25");
wantedTriggers.push_back("L2_vtxbeamspot_FSTracks_L2Star_A");
std::vector<std::string> allTriggers;
// Push all the trigger names to a vector
std::map<std::string, int>::iterator itr = m_HLTMap->begin();
std::map<std::string, int>::iterator itrLast = m_HLTMap->end();
for(; itr != itrLast; ++itr)
{
    allTriggers.push_back((*itr).first);
}; // End itr

/// Sort the list of trigger names and find the intersection
/// Build a typedef to make things clearer
std::vector<std::string>::iterator wFirst = wantedTriggers.begin();
std::vector<std::string>::iterator wLast = wantedTriggers.end();
std::vector<std::string>::iterator aFirst = allTriggers.begin();
std::vector<std::string>::iterator aLast = allTriggers.end();
std::vector<std::string> foundTriggers;
for(; aFirst != aLast; ++aFirst)
{
    std::cout << "Found:" << (*aFirst) << std::endl;
};
std::vector<std::string>::iterator it;
std::sort(wFirst, wLast);
std::sort(aFirst, aLast);
std::set_intersection(wFirst, wLast, aFirst, aLast, back_inserter(foundTriggers));
std::cout << "Found this many triggers: " << foundTriggers.size() << std::endl;
for(it = foundTriggers.begin(); it != foundTriggers.end(); ++it)
{
    std::cout << "Found in both" << (*it) << std::endl;
}; // End for intersection
The output is then
Here is the partial output, there are over 1000 elements in the vector so I didn't include the full output:
Found:L2_te1400
Found:L2_te1600
Found:L2_te600
Found:L2_trk16_Central_Tau_IDCalib
Found:L2_trk16_Fwd_Tau_IDCalib
Found:L2_trk29_Central_Tau_IDCalib
Found:L2_trk29_Fwd_Tau_IDCalib
Found:L2_trk9_Central_Tau_IDCalib
Found:L2_trk9_Fwd_Tau_IDCalib
Found:L2_vtxbeamspot_FSTracks_L2Star_A
Found:L2_vtxbeamspot_FSTracks_L2Star_B
Found:L2_vtxbeamspot_activeTE_L2Star_A_peb
Found:L2_vtxbeamspot_activeTE_L2Star_B_peb
Found:L2_vtxbeamspot_allTE_L2Star_A_peb
Found:L2_vtxbeamspot_allTE_L2Star_B_peb
Found:L2_xe25
Found:L2_xe35
Found:L2_xe40
Found:L2_xe45
Found:L2_xe45T
Found:L2_xe55
Found:L2_xe55T
Found:L2_xe55_LArNoiseBurst
Found:L2_xe65
Found:L2_xe65_tight
Found:L2_xe75
Found:L2_xe90
Found:L2_xe90_tight
Found:L2_xe_NoCut_allL1
Found:L2_xs15
Found:L2_xs30
Found:L2_xs45
Found:L2_xs50
Found:L2_xs60
Found:L2_xs65
Found:L2_zerobias_NoAlg
Found:L2_zerobias_Overlay_NoAlg
Found this many triggers: 0
Possible Reason
I am starting to think that the way in which I compile my code is to blame. I am currently compiling with ROOT (the physics data analysis framework) instead of doing a standalone compile. I get the feeling that it doesn't work all that well with the STL algorithm library and that's the cause of the issue, especially given how many people seem to have the code working for them. I will try a standalone compilation and re-run.
Passing foundTriggers.begin(), with foundTriggers empty, as the output argument will not cause the output to be pushed onto foundTriggers. Instead, it will increment the iterator past the end of the vector without resizing it, randomly corrupting memory.
You want to use an insert iterator:
std::set_intersection(wFirst, wLast, aFirst, aLast,
std::back_inserter(foundTriggers));
UPDATE: As pointed out in the comments, the vector is resized to be at least large enough for the result, so your code should work. Note that you should use the iterator returned from set_intersection to indicate the end of the intersection - your code ignores it, so you will also iterate over the empty strings left at the end of the output.
Could you post a complete test case so that we can see whether the intersection is actually empty or not?
Your allTriggers vector is empty, after all. You never reset itr to the beginning of the map when you're filling it.
EDIT:
Actually, you never reset aFirst:
for(;aFirst!=aLast;++aFirst)
{
std::cout << "Found:" << (*aFirst) << std::endl;
};
// here aFirst == aLast
std::vector<std::string>::iterator it;
std::sort(wFirst, wLast);
std::sort(aFirst, aLast); // **** sorting empty range ****
std::set_intersection(wFirst, wLast, aFirst, aLast, back_inserter(foundTrigger));
// ^^^^^^^^^^^^^^
// ***** empty range *****
I hope you can now see why it is good practice to narrow down the scope of your variables.
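For reference, a minimal reworking of the relevant part (using the variable names from the question) that calls begin()/end() fresh, so the sort and the intersection see the full ranges:
std::sort(wantedTriggers.begin(), wantedTriggers.end());
std::sort(allTriggers.begin(), allTriggers.end());

std::vector<std::string> foundTriggers;
std::set_intersection(wantedTriggers.begin(), wantedTriggers.end(),
                      allTriggers.begin(), allTriggers.end(),
                      std::back_inserter(foundTriggers));
std::cout << "Found this many triggers: " << foundTriggers.size() << std::endl;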
You never use the return value of set_intersection. In this case you could use it to resize foundTriggers after set_intersection has returned, or as the upper limit of the for loop. Otherwise your code seems to work. Can we see a full compilable program and its actual output please?

Erasing multiple objects from a std::vector?

Here is my issue, lets say I have a std::vector with ints in it.
let's say it has 50,90,40,90,80,60,80.
I know I need to remove the second, fifth and third elements. I don't necessarily always know the order of elements to remove, nor how many. The issue is by erasing an element, this changes the index of the other elements. Therefore, how could I erase these and compensate for the index change. (sorting then linearly erasing with an offset is not an option)
Thanks
I am offering several methods:
1. A fast method that does not retain the original order of the elements:
Assign the current last element of the vector to the element to erase, then erase the last element. This will avoid big moves and all indexes except the last will remain constant. If you start erasing from the back, all precomputed indexes will be correct.
void quickDelete( int idx )
{
vec[idx] = vec.back();
vec.pop_back();
}
I see this essentially is a hand-coded version of the erase-remove idiom pointed out by Klaim ...
2. A slower method that retains the original order of the elements:
Step 1: Mark all vector elements to be deleted, i.e. with a special value. This has O(|indexes to delete|).
Step 2: Erase all marked elements using v.erase( remove (v.begin(), v.end(), special_value), v.end() );. This has O(|vector v|).
The total run time is thus O(|vector v|), assuming the index list is shorter than the vector.
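A minimal sketch of that mark-then-erase idea for a vector of ints (assuming some value, here -1, is guaranteed never to occur in the real data; <algorithm> is needed for std::remove):
std::vector<int> v = {50, 90, 40, 90, 80, 60, 80};
std::vector<size_t> indexesToDelete = {1, 4, 2};   // any order, no sorting needed

const int special_value = -1;                      // must not occur in the data
for (size_t idx : indexesToDelete)                 // step 1: mark, O(|indexes|)
    v[idx] = special_value;
v.erase(std::remove(v.begin(), v.end(), special_value), v.end());  // step 2: O(|v|)
// v is now {50, 90, 60, 80}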
3. Another slower method that retains the original order of the elements:
Use a predicate and remove_if as described in https://stackoverflow.com/a/3487742/280314. To make this efficient and respect the requirement of
not "sorting then linearly erasing with an offset", my idea is to implement the predicate using a hash table and adjust the indexes stored in the hash table as the deletion proceeds on returning true, as Klaim suggested.
Using a predicate and the algorithm remove_if you can achieve what you want : see http://www.cplusplus.com/reference/algorithm/remove_if/
Don't forget to erase the item (see remove-erase idiom).
Your predicate would simply hold the indexes of the values to remove and decrement all the indexes it keeps each time it returns true.
That said if you can afford just removing each object using the remove-erase idiom, just make your life simple by doing it.
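As a rough sketch of that predicate idea (one variation that avoids shifting the stored indexes is to track the original position with a counter captured by reference, so copies of the lambda share it; names here are illustrative, and this relies on remove_if applying the predicate once per element in order, which is what typical implementations do):
std::vector<int> v = {50, 90, 40, 90, 80, 60, 80};
std::set<size_t> indexesToDelete = {1, 4, 2};

size_t current = 0;   // original position of the element being tested
v.erase(std::remove_if(v.begin(), v.end(),
                       [&current, &indexesToDelete](const int&)
                       { return indexesToDelete.count(current++) > 0; }),
        v.end());
// v is now {50, 90, 60, 80}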
Erase the items backwards. In other words erase the highest index first, then next highest etc. You won't invalidate any previous iterators or indexes so you can just use the obvious approach of multiple erase calls.
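A short sketch of that approach (sorting a copy of the indexes in descending order so earlier erases never shift positions that still have to be erased):
std::vector<int> v = {50, 90, 40, 90, 80, 60, 80};
std::vector<size_t> indexesToDelete = {1, 4, 2};

std::sort(indexesToDelete.rbegin(), indexesToDelete.rend());  // descending: 4, 2, 1
for (size_t idx : indexesToDelete)
    v.erase(v.begin() + idx);
// v is now {50, 90, 60, 80}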
I would move the elements which you don't want to erase to a temporary vector and then replace the original vector with this.
While this answer by Peter G. in variant one (the swap-and-pop technique) is the fastest when you do not need to preserve the order, here is the unmentioned alternative which maintains the order.
With C++17 and C++20 the removal of multiple elements from a vector is possible with standard algorithms. The run time is O(N * Log(N)) due to std::stable_partition. There are no external helper arrays, no excessive copying, everything is done inplace. Code is a "one-liner":
template <class T>
inline void erase_selected(std::vector<T>& v, const std::vector<int>& selection)
{
    v.resize(std::distance(
        v.begin(),
        std::stable_partition(v.begin(), v.end(),
            [&selection, &v](const T& item) {
                return !std::binary_search(
                    selection.begin(),
                    selection.end(),
                    static_cast<int>(static_cast<const T*>(&item) - &v[0]));
            })));
}
The code above assumes that selection vector is sorted (if it is not the case, std::sort over it does the job, obviously).
To break this down, let us declare a number of temporaries:
// We need an explicit item index of an element
// to see if it should be in the output or not
int itemIndex = 0;

// The checker lambda returns `true` if the element is NOT in `selection`,
// i.e. if it should be kept
auto filter = [&itemIndex, &selection](const T& item) {
    return !std::binary_search(
        selection.begin(),
        selection.end(),
        itemIndex++);
};
This checker lambda is then fed to std::stable_partition algorithm which is guaranteed to call this lambda only once for each element in the original (unpermuted !) array v.
auto end_of_selected = std::stable_partition(
v.begin(),
v.end(),
filter);
The end_of_selected iterator points right after the last element which should remain in the output array, so we can now resize v down. To calculate the number of elements we use std::distance on the two iterators.
v.resize(std::distance(v.begin(), end_of_selected));
This is different from the code at the top (it uses itemIndex to keep track of the array element). To get rid of the itemIndex, we capture the reference to source array v and use pointer arithmetic to calculate itemIndex internally.
Over the years (on this and other similar sites) multiple solutions have been proposed, but usually they employ multiple "raw loops" with conditions and some erase/insert/push_back calls. The idea behind stable_partition is explained beautifully in this talk by Sean Parent.
This link provides a similar solution (and it does not assume that selection is sorted - std::find_if instead of std::binary_search is used), but it also employs a helper (incremented) variable which disables the possibility to parallelize processing on larger arrays.
Starting from C++17, there is a new first argument to std::stable_partition (the ExecutionPolicy) which allows auto-parallelization of the algorithm, further reducing the run-time for big arrays. To make yourself believe this parallelization actually works, there is another talk by Hartmut Kaiser explaining the internals.
Would this work:
void DeleteAll(vector<int>& data, const vector<int>& deleteIndices)
{
    vector<bool> markedElements(data.size(), false);
    vector<int> tempBuffer;
    tempBuffer.reserve(data.size() - deleteIndices.size());

    for (vector<int>::const_iterator itDel = deleteIndices.begin(); itDel != deleteIndices.end(); itDel++)
        markedElements[*itDel] = true;

    for (size_t i = 0; i < data.size(); i++)
    {
        if (!markedElements[i])
            tempBuffer.push_back(data[i]);
    }
    data = tempBuffer;
}
It's an O(n) operation, no matter how many elements you delete. You could gain some efficiency by reordering the vector in place (but I think this way it's more readable).
This is non-trivial because, as you delete elements from the vector, the indexes change.
[0] hi
[1] you
[2] foo
>> delete [1]
[0] hi
[1] foo
If you keep a counter of times you delete an element and if you have a list of indexes you want to delete in sorted order then:
int counter = 0;
for (int k : IndexesToDelete) {
    events.erase(events.begin() + k + counter);
    counter -= 1;
}
You can use this method, if the order of the remaining elements doesn't matter
#include <iostream>
#include <vector>
using namespace std;
int main()
{
    vector<int> vec;
    vec.push_back(1);
    vec.push_back(-6);
    vec.push_back(3);
    vec.push_back(4);
    vec.push_back(7);
    vec.push_back(9);
    vec.push_back(14);
    vec.push_back(25);

    cout << "The elements before " << endl;
    for(int i = 0; i < vec.size(); i++) cout << vec[i] << endl;

    vector<bool> toDeleted;
    int YesOrNo = 0;
    for(int i = 0; i < vec.size(); i++)
    {
        cout << "You need to delete this element? " << vec[i] << ", if yes enter 1 else enter 0" << endl;
        cin >> YesOrNo;
        if(YesOrNo)
            toDeleted.push_back(true);
        else
            toDeleted.push_back(false);
    }

    //Deleting, beginning from the last element to the first one
    for(int i = toDeleted.size() - 1; i >= 0; i--)
    {
        if(toDeleted[i])
        {
            vec[i] = vec.back();
            vec.pop_back();
        }
    }

    cout << "The elements after" << endl;
    for(int i = 0; i < vec.size(); i++) cout << vec[i] << endl;
    return 0;
}
Here's an elegant solution in case you want to preserve the indices, the idea is to replace the values you want to delete with a special value that is guaranteed not be used anywhere, and then at the very end, you perform the erase itself:
std::vector<int> vec = {1, 2, 3, 4, 5, 6, 7, 8, 9};
// marking 3 elements to be deleted
vec[2] = std::numeric_limits<int>::lowest();
vec[5] = std::numeric_limits<int>::lowest();
vec[3] = std::numeric_limits<int>::lowest();
// erase
vec.erase(std::remove(vec.begin(), vec.end(), std::numeric_limits<int>::lowest()), vec.end());
// print values => 1 2 5 7 8 9
for (const auto& value : vec) std::cout << ' ' << value;
std::cout << std::endl;
It's very quick if you delete a lot of elements because the deletion itself is happening only once. Items can also be deleted in any order that way.
If you use a struct instead of an int, then you can still mark an element of that struct, e.g. dead = true, and then use remove_if instead of remove:
struct MyObj
{
    int x;
    bool dead = false;
};
std::vector<MyObj> objs = {{1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9}};
objs[2].dead = true;
objs[5].dead = true;
objs[3].dead = true;
objs.erase(std::remove_if(objs.begin(), objs.end(), [](const MyObj& obj) { return obj.dead; }), objs.end());
// print values => 1 2 5 7 8 9
for (const auto& obj : objs) std::cout << ' ' << obj.x;
std::cout << std::endl;
This one is a bit slower, around 80% the speed of the remove.