Improvement to remove-erase idiom when order need not be preserved - c++

The popular remove_if-erase idiom preserves the order of the elements kept in the container. I have a case in which I would like to remove some elements but don't care about the order of the remaining ones, because they are going to be moved around later anyway. So I thought that, instead of using remove_if-erase, I could scan the vector and, whenever an element to remove is found, swap it with the last valid element of the vector. I call this idiom swap-erase, and it can easily be implemented as follows:
template<typename Object, typename Condition>
void swap_erase(std::vector<Object>& v, const Condition& condition) {
    // Points one past the last element we want to keep.
    auto iter_to_last = v.end();
    for(auto it = v.begin(); it < iter_to_last; ++it) {
        // If the erasure condition is fulfilled...
        if(condition(*it)) {
            // Grow the "tail" of elements to get rid of by one to the
            // left, swap the two elements, and rewind the current
            // iterator by 1, so that at the next iteration we test the
            // element we just swapped in.
            std::iter_swap(it--, --iter_to_last);
        }
    }
    // Erase the elements we pushed to the end of the vector.
    v.erase(iter_to_last, v.end());
}
Since there is no need to shift elements, I would expect this code to be consistently faster than remove_if-erase on vectors which are large, or contain large objects.
However, a quick benchmark shows that the two are roughly equivalent when compiled by gcc 7.3.0 with -Ofast, on my i7 at 2.6GHz.
Am I wrong in my assumptions, in my implementation, or in the way I am benchmarking?
Edit: it turns out I was wrong in my assumption. This is a possible implementation of remove_if which clearly shows that it need not shift any elements:
template<typename ForwardIterator, typename Predicate>
ForwardIterator remove_if(ForwardIterator first, ForwardIterator last, Predicate pred) {
    first = std::find_if(first, last, pred);
    if(first == last) {
        return first;
    }
    ForwardIterator result = first;
    ++first;
    for(; first != last; ++first) {
        if(!pred(*first)) { // keep this element: move it into place
            *result = std::move(*first);
            ++result;
        }
    }
    return result;
}

Your assumption about how remove_if works might be wrong. Perhaps you should state it explicitly.
Basically, remove_if moves each element that is kept at most once, so it is particularly fast if most elements are being deleted. (It might optimise by first scanning over the initial part of the array that is not being deleted, in which case it is also fast when few elements are deleted and the first deleted element is near the end.)
Your swap algorithm does one swap for each element being deleted, so it is fastest if few elements are being deleted. But the swap is unnecessary, and unnecessarily slow in some cases, since it requires three moves. You could just move the last element over the top of the element being deleted, saving two of the three moves.
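For illustration, here is a minimal sketch of that move-based variant; the name unordered_erase_if is my own, and the guard against moving an element onto itself is an assumption about how one would handle the last element:
#include <utility>
#include <vector>

template <typename Object, typename Condition>
void unordered_erase_if(std::vector<Object>& v, Condition condition) {
    for (auto it = v.begin(); it != v.end();) {
        if (condition(*it)) {
            if (it != v.end() - 1)
                *it = std::move(v.back()); // one move instead of a three-move swap
            v.pop_back();
            // Don't advance: re-test the element just moved in.
        } else {
            ++it;
        }
    }
}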

Related

c++11: Erase multiple occurrences from vector. Which is best practice?

I understand that erase effectively advances the iterator, so when removing multiple occurrences I need to account for this in order to compare contiguous elements. That's why I usually do:
auto i = vect.begin();
while (i != vect.end())
    if (*i == someValue)
        vect.erase(i);
    else
        ++i;
But I was wondering if I could also do it with a for loop, like this:
for (auto i = vec.begin(); i != vec.end(); ++i)
    if (*i == someValue) {
        vec.erase(i);
        --i;
    }
The --i part looks a bit weird, but it works. Would that be bad practice? Bad code? Prone to errors? Or is it fine to use either option?
Thanks.
Use the remove-erase idiom:
auto new_end = std::remove(v.begin(), v.end(), some_value);
v.erase(new_end, v.end());
The code above has O(n) complexity, and since C++17 it can be executed in parallel (provided there are no data races) with
template< class ExecutionPolicy, class ForwardIt, class T >
ForwardIt remove( ExecutionPolicy&& policy, ForwardIt first, ForwardIt last, const T& value );
or with parallelism TS
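For example, a minimal usage sketch of that overload (this assumes a standard library that actually ships the parallel algorithms; it requires the <execution> header):
#include <algorithm>
#include <execution>
#include <vector>

void remove_all(std::vector<int>& v, int some_value) {
    // Same remove-erase idiom, but remove runs under a parallel policy.
    auto new_end = std::remove(std::execution::par, v.begin(), v.end(), some_value);
    v.erase(new_end, v.end());
}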
Your code has a problem, because vector.modifiers#3 says:
Effects: Invalidates iterators and references at or after the point of the erase
The standard says that the iterator is invalidated.
In reality, however, most implementations keep the iterator pointing at the old position, which is now either the end (if it was the last element) or the next element. Even so, your code has O(n^2) complexity, because it loops n times and each erase takes up to n more operations to shift the data. It also can't be executed in parallel.
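For completeness, a sketch of the loop form that relies only on guaranteed behavior, using the iterator returned by erase (still O(n^2) overall, so the remove-erase idiom above remains preferable):
for (auto i = vect.begin(); i != vect.end();) {
    if (*i == someValue)
        i = vect.erase(i); // erase returns an iterator to the next element
    else
        ++i;
}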

`std::list<>::sort()` - why the sudden switch to top-down strategy?

I remember that since the beginning of times the most popular approach to implementing std::list<>::sort() was the classic Merge Sort algorithm implemented in bottom-up fashion (see also What makes the gcc std::list sort implementation so fast?).
I remember seeing someone aptly refer to this strategy as "onion chaining" approach.
At least that's the way it is in GCC's implementation of the C++ standard library (see, for example, here). And this is how it was in the old Dinkumware STL in the MSVC version of the standard library, as well as in all versions of MSVC all the way to VS2013.
However, the standard library supplied with VS2015 suddenly no longer follows this sorting strategy. The library shipped with VS2015 uses a rather straightforward recursive implementation of top-down Merge Sort. This strikes me as strange, since top-down approach requires access to the mid-point of the list in order to split it in half. Since std::list<> does not support random access, the only way to find that mid-point is to literally iterate through half of the list. Also, at the very beginning it is necessary to know the total number of elements in the list (which was not necessarily an O(1) operation before C++11).
Nevertheless, std::list<>::sort() in VS2015 does exactly that. Here's an excerpt from that implementation that locates the mid-point and performs recursive calls
...
iterator _Mid = _STD next(_First, _Size / 2);
_First = _Sort(_First, _Mid, _Pred, _Size / 2);
_Mid = _Sort(_Mid, _Last, _Pred, _Size - _Size / 2);
...
As you can see, they just nonchalantly use std::next to walk through the first half of the list and arrive at _Mid iterator.
What could be the reason behind this switch, I wonder? All I see is a seemingly obvious inefficiency of repetitive calls to std::next at each level of recursion. Naive logic says that this is slower. If they are willing to pay this kind of price, they probably expect to get something in return. What are they getting then? I don't immediately see this algorithm as having better cache behavior (compared to the original bottom-up approach). I don't immediately see it as behaving better on pre-sorted sequences.
Granted, since C++11 std::list<> is basically required to store its element count, which makes the above slightly more efficient, since we always know the element count in advance. But that still does not seem to be enough to justify the sequential scan on each level of recursion.
(Admittedly, I haven't tried to race the implementations against each other. Maybe there are some surprises there.)
Note this answer has been updated to address all of the issues mentioned in the comments below and after the question, by making the same change from an array of lists to an array of iterators, while retaining the faster bottom up merge sort algorithm, and eliminating the small chance of stack overflow due to recursion with the top down merge sort algorithm.
Initially I assumed that Microsoft would not have switched to a less efficient top down merge sort when it switched to using iterators unless it was necessary, so I was looking for alternatives. It was only when I tried to analyze the issues (out of curiosity) that I realized that the original bottom up merge sort could be modified to work with iterators.
In #sbi's comment, he asked the author of the top down approach, Stephan T. Lavavej, why the change to iterators was made. Stephan's response was "to avoid memory allocation and default constructing allocators". VS2015 introduced non-default-constructible and stateful allocators, which presents an issue when using the prior version's array of lists, as each instance of a list allocates a dummy node, and a change would be needed to handle no default allocator.
Lavavej's solution was to switch to using iterators to keep track of run boundaries within the original list instead of an internal array of lists. The merge logic was changed to use 3 iterator parameters, 1st parameter is iterator to start of left run, 2nd parameter is iterator to end of left run == iterator to start of right run, 3rd parameter is iterator to end of right run. The merge process uses std::list::splice to move nodes within the original list during merge operations. This has the added benefit of being exception safe. If a caller's compare function throws an exception, the list will be re-ordered, but no loss of data will occur (assuming splice can't fail). With the prior scheme, some (or most) of the data would be in the internal array of lists if an exception occurred, and data would be lost from the original list.
I changed bottom up merge sort to use an array of iterators instead of an array of lists, where array[i] is an iterator to the start of a sorted run with 2^i nodes, or it is empty (using std::list::end to indicate empty, since iterators can't be null). Similar to the top down approach, the array of iterators is only used to keep track of sorted run boundaries within the original linked list, with the same merge logic as top down that uses std::list::splice to move nodes within the original linked list.
A single scan of the list is done, building up sorted runs to the left of the current scan.next position according to the sorted run boundaries in the array, until all nodes are merged into the sorted runs. Then the sorted runs are merged, resulting in a sorted list.
For example, for a list with 7 nodes, after the scan:
array index:   2                       1           0
list:        run0->run0->run0->run0->run1->run1->run2->end
Then the 3 sorted runs are merged right to left via merge(left, right), so that the sort is stable.
If a linked list is large and the nodes are scattered, there will be a lot of cache misses, and top down will be about 40% to 50% slower than bottom up depending on the processor. Then again, if there's enough memory, it would usually be faster to move the list to an array or vector, sort the array or vector, then create a new list from the sorted array or vector.
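As a rough sketch of that last suggestion (assuming the elements are cheap to copy):
#include <algorithm>
#include <list>
#include <vector>

template <typename T>
void sort_via_vector(std::list<T>& lst) {
    std::vector<T> v(lst.begin(), lst.end()); // copy the list into a vector
    std::sort(v.begin(), v.end());            // sort with random access
    lst.assign(v.begin(), v.end());           // copy back into the list
}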
Example C++ code:
template <typename T>
typename std::list<T>::iterator Merge(std::list<T> &ll,
                                      typename std::list<T>::iterator li,
                                      typename std::list<T>::iterator ri,
                                      typename std::list<T>::iterator ei);

// iterator array size
#define ASZ 32

template <typename T>
void SortList(std::list<T> &ll)
{
    if (ll.size() < 2)      // return if nothing to do
        return;
    typename std::list<T>::iterator ai[ASZ]; // array of iterators (bgn lft)
    typename std::list<T>::iterator ri;      // right iterator (end lft, bgn rgt)
    typename std::list<T>::iterator ei;      // end iterator (end rgt)
    size_t i;
    for (i = 0; i < ASZ; i++)   // "clear" array
        ai[i] = ll.end();
    // merge nodes into array of runs
    for (ei = ll.begin(); ei != ll.end();) {
        ri = ei++;
        for (i = 0; (i < ASZ) && ai[i] != ll.end(); i++) {
            ri = Merge(ll, ai[i], ri, ei);
            ai[i] = ll.end();
        }
        if (i == ASZ)
            i--;
        ai[i] = ri;
    }
    // merge array of runs into single sorted list
    // ei = ll.end();
    for (i = 0; (i < ASZ) && ai[i] == ei; i++);
    ri = ai[i++];
    while (1) {
        for (; (i < ASZ) && ai[i] == ei; i++);
        if (i == ASZ)
            break;
        ri = Merge(ll, ai[i++], ri, ei);
    }
}

template <typename T>
typename std::list<T>::iterator Merge(std::list<T> &ll,
                                      typename std::list<T>::iterator li,
                                      typename std::list<T>::iterator ri,
                                      typename std::list<T>::iterator ei)
{
    typename std::list<T>::iterator ni;
    (*ri < *li) ? ni = ri : ni = li;
    while (1) {
        if (*ri < *li) {
            ll.splice(li, ll, ri++);
            if (ri == ei)
                return ni;
        } else {
            if (++li == ri)
                return ni;
        }
    }
}
Example replacement code for VS2019's std::list::sort(), in the include file <list>. The merge logic was made into a separate internal function, since it's now used in two places. The call to _Sort from std::list::sort() is _Sort(begin(), end(), _Pred, this->_Mysize());, where _Pred is a pointer to the compare function (defaulting to std::less()).
private:
    template <class _Pr2>
    iterator _Merge(_Pr2 _Pred, iterator _First, iterator _Mid, iterator _Last) {
        iterator _Newfirst = _First;
        for (bool _Initial_loop = true;;
             _Initial_loop = false) { // [_First, _Mid) and [_Mid, _Last) are sorted and non-empty
            if (_DEBUG_LT_PRED(_Pred, *_Mid, *_First)) { // consume _Mid
                if (_Initial_loop) {
                    _Newfirst = _Mid; // update return value
                }
                splice(_First, *this, _Mid++);
                if (_Mid == _Last) {
                    return _Newfirst; // exhausted [_Mid, _Last); done
                }
            }
            else { // consume _First
                ++_First;
                if (_First == _Mid) {
                    return _Newfirst; // exhausted [_First, _Mid); done
                }
            }
        }
    }

    template <class _Pr2>
    void _Sort(iterator _First, iterator _Last, _Pr2 _Pred,
               size_type _Size) { // order [_First, _Last), using _Pred
                                  // _Size must be distance from _First to _Last
        if (_Size < 2) {
            return; // nothing to do
        }
        const size_t _ASZ = 32; // array size
        iterator _Ai[_ASZ];     // array of iterators to runs (bgn lft)
        iterator _Mi;           // middle iterator to run (end lft, bgn rgt)
        iterator _Li;           // last (end) iterator to run (end rgt)
        size_t _I;              // index to _Ai
        for (_I = 0; _I < _ASZ; _I++) // "empty" array
            _Ai[_I] = _Last;          // _Ai[] == _Last => empty entry
        // merge nodes into array of runs
        for (_Li = _First; _Li != _Last;) {
            _Mi = _Li++;
            for (_I = 0; (_I < _ASZ) && _Ai[_I] != _Last; _I++) {
                _Mi = _Merge(_Pass_fn(_Pred), _Ai[_I], _Mi, _Li);
                _Ai[_I] = _Last;
            }
            if (_I == _ASZ)
                _I--;
            _Ai[_I] = _Mi;
        }
        // merge array of runs into single sorted list
        for (_I = 0; _I < _ASZ && _Ai[_I] == _Last; _I++);
        _Mi = _Ai[_I++];
        while (1) {
            for (; _I < _ASZ && _Ai[_I] == _Last; _I++);
            if (_I == _ASZ)
                break;
            _Mi = _Merge(_Pass_fn(_Pred), _Ai[_I++], _Mi, _Last);
        }
    }
The remainder of this answer is historical, and only left for the historical comments, otherwise it is no longer relevant.
I was able to reproduce the issue (old sort fails to compile, new one works) based on a demo from #IgorTandetnik:
#include <iostream>
#include <list>
#include <memory>

template <typename T>
class MyAlloc : public std::allocator<T> {
public:
    MyAlloc(T) {} // suppress default constructor
    template <typename U>
    MyAlloc(const MyAlloc<U>& other) : std::allocator<T>(other) {}
    template <class U> struct rebind { typedef MyAlloc<U> other; };
};

int main()
{
    std::list<int, MyAlloc<int>> l(MyAlloc<int>(0));
    l.push_back(3);
    l.push_back(0);
    l.push_back(2);
    l.push_back(1);
    l.sort();
    return 0;
}
I noticed this change back in July, 2016 and emailed P.J. Plauger about this change on August 1, 2016. A snippet of his reply:
Interestingly enough, our change log doesn't reflect this change. That
probably means it was "suggested" by one of our larger customers and
got by me on the code review. All I know now is that the change came
in around the autumn of 2015. When I reviewed the code, the first
thing that struck me was the line:
iterator _Mid = _STD next(_First, _Size / 2);
which, of course, can take a very long time for a large list.
The code looks a bit more elegant than what I wrote in early 1995(!),
but definitely has worse time complexity. That version was modeled
after the approach by Stepanov, Lee, and Musser in the original STL.
They are seldom found to be wrong in their choice of algorithms.
I'm now reverting to our latest known good version of the original code.
I don't know if P.J. Plauger's reversion to the original code dealt with the new allocator issue, or if or how Microsoft interacts with Dinkumware.
For a comparison of the top down versus bottom up methods, I created a linked list with 4 million elements, each consisting of one 64 bit unsigned integer, assuming I would end up with a doubly linked list of nearly sequentially ordered nodes (even though they were dynamically allocated). I filled them with random numbers, then sorted them. The nodes don't move, only the linkage changes, but now traversing the list accesses the nodes in random order. I then filled those randomly ordered nodes with another set of random numbers and sorted them again. I compared the 2015 top down approach with the prior bottom up approach, modified to match the other changes made for 2015 (sort() now calls sort() with a predicate compare function, rather than having two separate functions). These are the results. Update: I added a node pointer based version, and also noted the time for simply creating a vector from the list, sorting the vector, and copying back.
sequential nodes: 2015 version 1.6 seconds, prior version 1.5 seconds
random nodes: 2015 version 4.0 seconds, prior version 2.8 seconds
random nodes: node pointer based version 2.6 seconds
random nodes: create vector from list, sort, copy back 1.25 seconds
For sequential nodes, the prior version is only a bit faster, but for random nodes, the prior version is 30% faster, and the node pointer version 35% faster, and creating a vector from the list, sorting the vector, then copying back is 69% faster.
Below is the first replacement code for std::list::sort() I used to compare the prior bottom up with small array (_Binlist[]) method versus VS2015's top down approach. I wanted the comparison to be fair, so I modified a copy of <list>.
void sort()
{   // order sequence, using operator<
    sort(less<>());
}

template<class _Pr2>
void sort(_Pr2 _Pred)
{   // order sequence, using _Pred
    if (2 > this->_Mysize())
        return;
    const size_t _MAXBINS = 25;
    _Myt _Templist, _Binlist[_MAXBINS];
    while (!empty())
    {
        // _Templist = next element
        _Templist._Splice_same(_Templist.begin(), *this, begin(),
                               ++begin(), 1);
        // merge with array of ever larger bins
        size_t _Bin;
        for (_Bin = 0; _Bin < _MAXBINS && !_Binlist[_Bin].empty(); ++_Bin)
            _Templist.merge(_Binlist[_Bin], _Pred);
        // don't go past end of array
        if (_Bin == _MAXBINS)
            _Bin--;
        // update bin with merged list, empty _Templist
        _Binlist[_Bin].swap(_Templist);
    }
    // merge bins back into caller's list
    for (size_t _Bin = 0; _Bin < _MAXBINS; _Bin++)
        if (!_Binlist[_Bin].empty())
            this->merge(_Binlist[_Bin], _Pred);
}
I made some minor changes. The original code kept track of the actual maximum bin in a variable named _Maxbin, but the overhead in the final merge is small enough that I removed the code associated with _Maxbin. During the array build, the original code's inner loop merged into a _Binlist[] element, followed by a swap into _Templist, which seemed pointless. I changed the inner loop to just merge into _Templist, only swapping once an empty _Binlist[] element is found.
Below is a node pointer based replacement for std::list::sort() I used for yet another comparison. It eliminates allocation related issues. If a compare exception is possible and occurs, all the nodes in the array and temp list (pNode) would have to be appended back to the original list; alternatively, a compare exception could be treated as a less-than compare.
void sort()
{   // order sequence, using operator<
    sort(less<>());
}

template<class _Pr2>
void sort(_Pr2 _Pred)
{   // order sequence, using _Pred
    const size_t _NUMBINS = 25;
    _Nodeptr aList[_NUMBINS]; // array of lists
    _Nodeptr pNode;
    _Nodeptr pNext;
    _Nodeptr pPrev;
    if (this->size() < 2)     // return if nothing to do
        return;
    this->_Myhead()->_Prev->_Next = 0; // set last node ->_Next = 0
    pNode = this->_Myhead()->_Next;    // set ptr to start of list
    size_t i;
    for (i = 0; i < _NUMBINS; i++)     // zero array
        aList[i] = 0;
    while (pNode != 0)                 // merge nodes into array
    {
        pNext = pNode->_Next;
        pNode->_Next = 0;
        for (i = 0; (i < _NUMBINS) && (aList[i] != 0); i++)
        {
            pNode = _MergeN(_Pred, aList[i], pNode);
            aList[i] = 0;
        }
        if (i == _NUMBINS)
            i--;
        aList[i] = pNode;
        pNode = pNext;
    }
    pNode = 0;                         // merge array into one list
    for (i = 0; i < _NUMBINS; i++)
        pNode = _MergeN(_Pred, aList[i], pNode);
    this->_Myhead()->_Next = pNode;    // update sentinel node links
    pPrev = this->_Myhead();           // and _Prev pointers
    while (pNode)
    {
        pNode->_Prev = pPrev;
        pPrev = pNode;
        pNode = pNode->_Next;
    }
    pPrev->_Next = this->_Myhead();
    this->_Myhead()->_Prev = pPrev;
}

template<class _Pr2>
_Nodeptr _MergeN(_Pr2 &_Pred, _Nodeptr pSrc1, _Nodeptr pSrc2)
{
    _Nodeptr pDst = 0;       // destination head ptr
    _Nodeptr *ppDst = &pDst; // ptr to head or prev->_Next
    if (pSrc1 == 0)
        return pSrc2;
    if (pSrc2 == 0)
        return pSrc1;
    while (1)
    {
        if (_DEBUG_LT_PRED(_Pred, pSrc2->_Myval, pSrc1->_Myval))
        {
            *ppDst = pSrc2;
            pSrc2 = *(ppDst = &pSrc2->_Next);
            if (pSrc2 == 0)
            {
                *ppDst = pSrc1;
                break;
            }
        }
        else
        {
            *ppDst = pSrc1;
            pSrc1 = *(ppDst = &pSrc1->_Next);
            if (pSrc1 == 0)
            {
                *ppDst = pSrc2;
                break;
            }
        }
    }
    return pDst;
}
#sbi asked Stephan T. Lavavej, MSVC's standard library maintainer, who responded:
I did that to avoid memory allocation and default constructing
allocators.
To this I'll add "free basic exception safety".
To elaborate: the pre-VS2015 implementation suffers from several defects:
_Myt _Templist, _Binlist[_MAXBINS]; creates a bunch of intermediate lists (_Myt is simply a typedef for the current instantiation of list; a less confusing spelling for that is, well, list) to hold the nodes during sorting, but these lists are default constructed, which leads to a multitude of problems:
1. If the allocator used is not default constructible (and there is no requirement that allocators be default constructible), this simply won't compile, because the default constructor of list will attempt to default construct its allocator.
2. If the allocator used is stateful, then a default-constructed allocator may not compare equal to this->get_allocator(), which means that the later splices and merges are technically undefined behavior and may well break in debug builds. ("Technically", because the nodes are all merged back in the end, so you don't actually deallocate with the wrong allocator if the function successfully completes.)
3. Dinkumware's list uses a dynamically allocated sentinel node, which means that the above will perform _MAXBINS + 1 dynamic allocations. I doubt that many people expect sort to potentially throw bad_alloc. If the allocator is stateful, then these sentinel nodes may not even be allocated from the right place (see #2).
4. The code is not exception safe. In particular, the comparison is allowed to throw, and if it throws while there are elements in the intermediate lists, those elements are simply destroyed with the lists during stack unwinding. Users of sort don't expect the list to be sorted if sort throws an exception, of course, but they probably also don't expect the elements to go missing. This interacts very poorly with #2 above, because now it's not just technical undefined behavior: the destructor of those intermediate lists will be deallocating and destroying the nodes spliced into them with the wrong allocator.
Are those defects fixable? Probably. #1 and #2 can be fixed by passing get_allocator() to the constructor of the lists:
_Myt _Templist(get_allocator());
_Myt _Binlist[_MAXBINS] = { _Myt(get_allocator()), _Myt(get_allocator()),
_Myt(get_allocator()), /* ... repeat _MAXBINS times */ };
The exception safety problem can be fixed by surrounding the loop with a try-catch that splices all the nodes in the intermediate lists back into *this without regard to order if an exception is thrown.
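A hedged sketch of that repair, reusing the _Templist/_Binlist names from the excerpt above (this is my own illustration, not Dinkumware's code):
try {
    // ... the merge loop from the pre-VS2015 implementation ...
} catch (...) {
    // Splice everything back into *this so no elements are lost;
    // the order is unspecified, but basic exception safety holds.
    this->splice(this->end(), _Templist);
    for (size_t _Bin = 0; _Bin < _MAXBINS; ++_Bin)
        this->splice(this->end(), _Binlist[_Bin]);
    throw;
}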
Fixing #3 is harder, because that means not using list at all as the holder of nodes, which probably requires a decent amount of refactoring, but it's doable.
The question is: is it worth jumping through all these hoops to improve the performance of a container that has reduced performance by design? After all, someone who really cares about performance probably won't be using list in the first place.

Insert multiple values into vector

I have a std::vector<T> variable. I also have two variables of type T, the first of which represents the value in the vector after which I am to insert, while the second represents the value to insert.
So let's say I have this container: 1,2,1,1,2,2
And the two values are 2 and 3 with respect to their definitions above. Then I wish to write a function which will update the container to instead contain:
1,2,3,1,1,2,3,2,3
I am using C++98 and Boost. What std or boost functions might I use to implement this function?
Iterating over the vector and using std::vector::insert is one way, but it gets messy once you realize that you need to remember to hop over the value you just inserted.
This is what I would probably do:
vector<T> copy;
for (vector<T>::iterator i = original.begin(); i != original.end(); ++i)
{
    copy.push_back(*i);
    if (*i == first)
        copy.push_back(second);
}
original.swap(copy);
Put a call to reserve in there if you want. You know you need room for at least original.size() elements. You could also do an initial iteration over the vector (or use std::count) to determine the exact number of elements to reserve, but without testing, I don't know whether that would improve performance.
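A sketch of that exact-reserve variant, using std::count from <algorithm> (whether the extra pass pays for itself would need measuring):
copy.reserve(original.size() +
             std::count(original.begin(), original.end(), first));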
I propose a solution that works in place, in O(n) memory and O(2n) time, instead of the O(n^2) time of the solution proposed by Laethnes and the O(2n) memory of the solution proposed by Benjamin.
// First pass: count elements equal to first.
std::size_t elems = std::count(data.begin(), data.end(), first);
// Resize so we'll add without reallocating the elements.
data.resize(data.size() + elems);
vector<T>::reverse_iterator end = data.rbegin() + elems;
// Iterate from the end, moving elements toward the new end
// (making room for the elements to insert).
for (vector<T>::reverse_iterator new_end = data.rbegin();
     end != data.rend() && elems > 0; ++new_end, ++end)
{
    // If the current element is the one we search for, insert
    // second first (we are iterating from the end).
    if (*end == first)
    {
        *new_end = second;
        ++new_end;
        --elems;
    }
    // Copy the data toward the end.
    *new_end = *end;
}
This algorithm may be buggy, but the idea is to copy each element only once:
First, count how many elements we'll need to insert.
Second, go through the data from the end, moving each element to the new end.
This is what I probably would do:
typedef ::std::vector<int> MyList;
typedef MyList::iterator MyListIter;

MyList data;
// ... fill data ...

const int searchValue = 2;
const int addValue = 3;

// Find first occurrence of searched value.
MyListIter iter = ::std::find(data.begin(), data.end(), searchValue);
while (iter != data.end())
{
    // We want to add our value after the searched one.
    ++iter;
    // Insert the value; insert returns an iterator pointing to the
    // inserted position (the original iterator is invalid now).
    iter = data.insert(iter, addValue);
    // This is needed only if we want to be sure that our value won't be
    // matched again - for example, if searchValue == addValue, the code
    // would otherwise create an infinite loop.
    ++iter;
    // Search for the next value.
    iter = ::std::find(iter, data.end(), searchValue);
}
but as you can see, I couldn't avoid the incrementation you mentioned. I don't think that's a bad thing, though: I would put this code in a separate function (probably in some kind of "core/utils" module) and, of course, implement it as a template, so I would write it only once. Worrying about the increment in only one place is IMHO acceptable. Very acceptable.
template <class ValueType>
void insertAfter(::std::vector<ValueType> &io_data,
                 const ValueType &i_searchValue,
                 const ValueType &i_insertAfterValue);
or even better (IMHO)
template <class ListType, class ValueType>
void insertAfter(ListType &io_data,
                 const ValueType &i_searchValue,
                 const ValueType &i_insertAfterValue);
EDIT:
Well, I would actually solve the problem a little differently: first count the number of occurrences of the searched value (preferably storing it in some kind of cache that can be kept and reused), so the new array can be prepared beforehand (only one allocation), then use memcpy to move the original values (for types like int only, of course) or memmove (if the vector's allocated size is already sufficient).
In place, O(1) additional memory and O(n) time (Live at Coliru):
template <typename T, typename A>
void do_thing(std::vector<T, A>& vec, T target, T inserted) {
    using std::swap;
    typedef typename std::vector<T, A>::size_type size_t;
    const size_t occurrences = std::count(vec.begin(), vec.end(), target);
    if (occurrences == 0) return;
    const size_t original_size = vec.size();
    vec.resize(original_size + occurrences, inserted);
    for (size_t i = original_size - 1, end = i + occurrences; i > 0; --i, --end) {
        if (vec[i] == target) {
            --end;
        }
        swap(vec[i], vec[end]);
    }
}

Accessing elements of a list of lists in C++

I have a list of lists like this:
std::list<std::list<double> > list;
I filled it with some lists with doubles in them (actually quite a lot, which is why I am not using a vector. All this copying takes up a lot of time.)
Say I want to access the element that could be accessed like list[3][3] if the list were not a list but a vector or two-dimensional array. How would I do that?
I know that accessing elements in a list is accomplished by using an iterator. I couldn't figure out how to get out the double though.
double item = *std::next(std::begin(*std::next(std::begin(list), 3)), 3);
Using a vector would usually have much better performance, though; accessing element n of a list is O(n).
If you're concerned about performance of splicing the interior of the container, you could use deque, which has operator[], amortized constant insertion and deletion from either end, and linear time insertion and deletion from the interior.
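A quick sketch of that deque alternative (same nested access a vector would give):
#include <deque>

std::deque<std::deque<double> > d;
// ... fill d ...
double item = d[3][3]; // direct indexing, no iterator walking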
For C++03 compilers, you can implement begin and next yourself:
template<typename Container>
typename Container::iterator begin(Container &container)
{
    return container.begin();
}

template<typename Container>
typename Container::const_iterator begin(const Container &container)
{
    return container.begin();
}

template<typename T, int n>
T *begin(T (&array)[n])
{
    return &array[0];
}

template<typename Iterator>
Iterator next(Iterator it,
              typename std::iterator_traits<Iterator>::difference_type n = 1)
{
    std::advance(it, n);
    return it;
}
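With those helpers in scope, the earlier one-liner works essentially unchanged under C++03 (unqualified calls, so the hand-written begin/next are found instead of the std versions):
double item = *next(begin(*next(begin(list), 3)), 3);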
To actually answer your question, you should probably look at std::advance.
To strictly answer your question, Joachim Pileborg's answer is the way to go:
std::list<std::list<double> >::iterator it = list.begin();
std::advance(it, 3);
std::list<double>::iterator it2 = (*it).begin();
std::advance(it2, 3);
double d = *it2;
Now, from your question and further comments it is not clear whether you always add elements to the end of the lists or they can be added anywhere. If you always add to the end, vector<double> will work better. A vector<T> does not need to be copied every time its size increases; only whenever its capacity increases, which is a very different thing.
In addition to this, using reserve(), as others said before, will help a lot with the reallocations. You don't need to reserve for the combined size of all vectors, but only for each individual vector. So:
std::vector<std::vector<double> > v;
v.reserve(512); // If you are inserting 400 vectors, with a little extra just in case
And you would also reserve for each vector<double> inside v. That's all.
Take into account that your list of lists will take up much more space. For each double in the internal lists, it will have to allocate at least two additional pointers, plus two additional pointers for each list inside the outer list. This means the total memory taken by your container will be roughly three times that of the vector, and all this allocation and management also takes extra runtime.

What's wrong with my vector<T>::erase here?

I have two vector<T> in my program, called active and non_active respectively. This refers to the objects it contains, as to whether they are in use or not.
I have some code that loops the active vector and checks for any objects that might have gone non active. I add these to a temp_list inside the loop.
Then after the loop, I take my temp_list and do non_active.insert of all elements in the temp_list.
After that, I do call erase on my active vector and pass it the temp_list to erase.
For some reason, however, the erase crashes.
This is the code:
non_active.insert(non_active.begin(), temp_list.begin(), temp_list.end());
active.erase(temp_list.begin(), temp_list.end());
I get this assertion:
Expression:("_Pvector == NULL || (((_Myvec*)_Pvector)->_Myfirst <= _Ptr && _Ptr <= ((_Myvect*)_Pvector)->_Mylast)",0)
I've looked online and seen that there is an erase-remove idiom; however, I'm not sure how I'd apply that to removing a range of elements from a vector<T>.
I'm not using C++11.
erase expects a range of iterators passed to it that lie within the current vector. You cannot pass iterators obtained from a different vector to erase.
Here is a possible, but inefficient, C++11 solution using a lambda (note that the lambda must capture temp_list):
active.erase(std::remove_if(active.begin(), active.end(), [&](const T& x)
{
    return std::find(temp_list.begin(), temp_list.end(), x) != temp_list.end();
}), active.end());
And here is the equivalent C++03 solution without the lambda:
template<typename Container>
class element_of
{
    Container& container;
public:
    element_of(Container& container) : container(container) {}

    template<typename T>
    bool operator()(const T& x) const
    {
        return std::find(container.begin(), container.end(), x)
            != container.end();
    }
};

// ...

active.erase(std::remove_if(active.begin(), active.end(),
                            element_of<std::vector<T> >(temp_list)),
             active.end());
If you replace temp_list with a std::set and the std::find with the set's find member function, the performance should be acceptable.
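A sketch of that std::set variant in C++03 style (the predicate name in_set and the variable temp_set are my own):
#include <algorithm>
#include <set>

template<typename T>
struct in_set
{
    const std::set<T>* s;
    explicit in_set(const std::set<T>& s) : s(&s) {}
    bool operator()(const T& x) const
    {
        return s->find(x) != s->end(); // O(log n) per lookup
    }
};

// active.erase(std::remove_if(active.begin(), active.end(),
//                             in_set<T>(temp_set)), active.end());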
The erase method is intended to accept iterators into the same container object. You're trying to pass iterators into temp_list to erase elements from active, which is not allowed for good reasons: a sequence's range-erase method specifies a range in that sequence to remove. It's important that the iterators be into that sequence, because otherwise we'd be specifying a range of values to erase rather than a range within the same container, which is a much more costly operation.
The type of logic you're trying to perform suggests to me that a set or list might be better suited for the purpose. That is, you're trying to erase various elements from the middle of a container that match a certain condition and transfer them to another container, and you could eliminate the need for temp_list this way.
With list, for example, it could be as easy as this:
for (ActiveList::iterator it = active.begin(); it != active.end();)
{
    if (it->no_longer_active())
    {
        inactive.push_back(*it);
        it = active.erase(it);
    }
    else
        ++it;
}
However, sometimes vector can outperform these solutions, and maybe you have need for vector for other reasons (like ensuring contiguous memory). In that case, std::remove_if is your best bet.
Example:
bool not_active(const YourObjectType& obj);

active_list.erase(
    remove_if(active_list.begin(), active_list.end(), not_active),
    active_list.end());
More info on this can be found under the topic 'erase-remove idiom', and you may need predicate function objects depending on what external state is required to determine whether an object is no longer active.
You can actually make the erase/remove idiom usable for your case. You just need to move the value over to the other container before std::remove_if possibly shuffles it around: in the predicate.
template<class OutIt, class Pred>
struct copy_if_predicate {
    copy_if_predicate(OutIt dest, Pred p)
        : dest(dest), pred(p) {}

    template<class T>
    bool operator()(T const& v) {
        if (pred(v)) {
            *dest++ = v;
            return true;
        }
        return false;
    }

    OutIt dest;
    Pred pred;
};

template<class OutIt, class Pred>
copy_if_predicate<OutIt, Pred> copy_if_pred(OutIt dest, Pred pred) {
    return copy_if_predicate<OutIt, Pred>(dest, pred);
}
Live example on Ideone. (I directly used bools to make the code shorter, not bothering with output and the like.)
The function std::vector::erase requires the iterators to be iterators into this vector, but you are passing iterators from temp_list. You cannot erase elements from a container that are in a completely different container.
active.erase(temp_list.begin(), temp_list.end());
You are trying to erase elements from one vector, but you are passing in iterators from a second vector. Iterators from one container do not correspond to positions in another.
I would like to suggest that this is an example of where std::list should be used. You can splice members from one list to another; look at std::list::splice() for this.
Do you need random access? If not, then you don't need a std::vector.
Note that with list, when you splice, your iterators, and references to the objects in the list remain valid.
If you don't mind making the implementation "intrusive", your objects can store their own iterator value, so they know where they are. Then, when they change state, they can automate their own "moving" from one list to the other, and you don't need to traverse the whole list for them. (If you want this sweep to happen later, you can have them "register" themselves for later moving.)
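A minimal sketch of that intrusive idea (the member and function names are illustrative; storing a std::list<Object>::iterator inside Object relies on the library supporting incomplete element types, which std::list guarantees only since C++17 but which major implementations have long allowed):
#include <list>

struct Object;
typedef std::list<Object> ObjList;

struct Object {
    ObjList::iterator self; // our own position in whichever list holds us
};

void deactivate(ObjList& active, ObjList& inactive, Object& obj) {
    // Move the node without copying; obj.self stays valid afterwards
    // and now refers to the node inside 'inactive'.
    inactive.splice(inactive.end(), active, obj.self);
}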
I will now write an algorithm that runs through one collection and, where a condition holds, effects a std::remove_if while also copying each removed element into your "inserter".
// FwdIterator must be writable.
template< typename FwdIterator, typename OutputIterator, typename Pred >
FwdIterator copy_and_remove_if( FwdIterator inp, FwdIterator end, OutputIterator outp, Pred pred )
{
    for( FwdIterator test = inp; test != end; ++test )
    {
        if( pred(*test) ) // copy out (element is being removed)
        {
            *outp = *test;
            ++outp;
        }
        else // keep
        {
            if( test != inp )
            {
                *inp = *test;
            }
            ++inp;
        }
    }
    return inp;
}
This is a bit like std::remove_if, but it copies the elements being removed into an alternative collection. You would invoke it like this (for a vector), where isInactive is a valid predicate indicating that the element should be moved:
active.erase( copy_and_remove_if( active.begin(), active.end(), std::back_inserter(inactive), isInactive ), active.end() );
The iterators you pass to erase() should point into the vector itself; the assertion is telling you that they don't. This version of erase() is for erasing a range out of the vector.
You need to iterate over temp_list yourself and call active.erase() on the result of dereferencing the iterator at each step.
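Taken literally, since vector::erase takes an iterator rather than a value, that means locating each value in active first. A sketch, assuming the element type is called T and each value occurs in active at most once (std::find is from <algorithm>):
for (std::vector<T>::iterator it = temp_list.begin(); it != temp_list.end(); ++it)
{
    std::vector<T>::iterator pos = std::find(active.begin(), active.end(), *it);
    if (pos != active.end())
        active.erase(pos); // erase by position within 'active' itself
}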