Will std::remove_if always call the predicate on each element in order (according to the iterator's order) or could it be called out of order?
Here is a toy example of what I would like to do:
#include <algorithm>
#include <iostream>
#include <vector>

void processVector(std::vector<int> values)
{
    values.erase(std::remove_if(values.begin(), values.end(), [](int v)
    {
        if (v % 2 == 0)
        {
            std::cout << v << "\n"; // side effect: print in original order
            return true;            // mark even values for removal
        }
        return false;
    }), values.end()); // note: the erase(first, last) overload is needed here
}
I need to process and remove all elements of a vector that meet certain criteria, and erase + remove_if seems perfect for that. However, the processing I will do has side effects, and I need to make sure that processing happens in order (in the toy example, suppose that I want to print the values in the order they appear in the original vector).
Is it safe to assume that my predicate will be called on each item in order?
I assume that C++17's execution policies would disambiguate this, but since C++17 isn't out yet that obviously doesn't help me.
Edit: Also, is this a good idea? Or is there a better way to accomplish this?
The standard makes no guarantees on the order of calling the predicate.
What you ought to use is stable_partition. You partition the sequence based on your predicate. Then you can walk the partitioned sequence to perform whatever "side effect" you wanted to do, since stable_partition ensures the relative order of both sets of data. Then you can erase the elements from the vector.
stable_partition has to be used here because remove_if leaves the moved-from ("removed") elements at the end of the range with valid but unspecified values.
In code:
void processVector(std::vector<int> &values)
{
    // Keep the odd values in front; stable_partition preserves the relative
    // order within both groups.
    auto it = std::stable_partition(begin(values), end(values),
                                    [](int v) { return v % 2 != 0; });
    // Process the partitioned-out ("removed") values in their original order.
    std::for_each(it, end(values), [](int v) { std::cout << v << "\n"; });
    values.erase(it, end(values));
}
A bit late to the party, but here's my take:
While the order is not specified, an implementation would have to jump through hoops to visit elements in any order other than first-to-last, due to the following:
The complexity is specified to be "exactly std::distance(first, last) applications of the predicate", which requires visiting each element exactly once.
The iterators are ForwardIterators, which means they can only be incremented, never moved backwards.
[C++17 and above] To rule out parallel processing, one can use the overload that accepts an execution policy and pass std::execution::seq (a sketch follows below).
Given the above, I believe that a (non-parallel) implementation that follows a different order would be convoluted and have no advantage over the straightforward one.
Source: https://en.cppreference.com/w/cpp/algorithm/remove
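For illustration, here is a minimal sketch of the sequential-policy call mentioned above (C++17, header <execution>; note that std::execution::seq forbids parallelization but still does not formally promise first-to-last invocation order):

#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values{1, 2, 3, 4, 5, 6};
    // The execution-policy overload of remove_if, forced to run sequentially.
    auto it = std::remove_if(std::execution::seq,
                             values.begin(), values.end(),
                             [](int v) { return v % 2 == 0; });
    values.erase(it, values.end());
    for (int v : values) std::cout << v << "\n"; // prints 1 3 5
}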
They should be processed in order, but it is not guaranteed.
std::remove_if() does not actually remove anything: it shifts the elements to be kept toward the front of the range and leaves the tail elements with valid but unspecified values; nothing is removed from the container until erase() is called. Both operations can potentially invalidate existing iterators into a std::vector.
Related: I am asking this because the other relevant questions on SO seem to address older versions of the C++ standard, not mention any form of parallelization, or focus on keeping the ordering/indexing the same as elements are removed.
I have a vector of potentially hundreds of thousands or millions of elements (fairly light structures, ~20 bytes assuming they're packed down).
Due to other restrictions, it must be a std::vector; other containers (like std::forward_list) would not work, or would be even less optimal in other uses.
I recently switched from the simple it = myVec.erase(it) approach to swap-and-pop, using something like this:
for (std::size_t i = 0; i < myVec.size();) {
    // Do calculations to determine if element must be removed
    // ...
    // Remove if needed
    if (elementMustBeRemoved) {
        myVec[i] = myVec.back(); // overwrite with the last element...
        myVec.pop_back();        // ...then shrink; index i is re-examined
    } else {
        i++;
    }
}
This works, and was a significant improvement. It cut the runtime of the method down to ~61% of what it was previously. But I would like to improve this further.
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently? Like passing a vector of indices to erase() and having C++ do some magic under the hood to minimize the movement of data?
If so, I could have threads individually gather indices that must be removed in parallel, and then combine them and pass them to erase().
Take a look at the std::remove_if algorithm. You could use it like this:
auto firstToErase = std::remove_if(myVec.begin(), myVec.end(),
    [](const T &x) {
        // Do calculations to determine if element must be removed
        // ...
        return elementMustBeRemoved;
    });
myVec.erase(firstToErase, myVec.end());
cppreference says that the following code is a possible implementation of remove_if:
template<class ForwardIt, class UnaryPredicate>
ForwardIt remove_if(ForwardIt first, ForwardIt last, UnaryPredicate p)
{
first = std::find_if(first, last, p);
if (first != last)
for(ForwardIt i = first; ++i != last; )
if (!p(*i))
*first++ = std::move(*i);
return first;
}
Instead of swapping with the last element, it moves through the container once, shifting each kept element forward and letting the range of to-be-erased slots accumulate at the very end of the vector. This looks like a more cache-friendly solution, and you might notice a performance improvement on a very big vector.
If you want to experiment with a parallel version, there is an overload (4) that allows you to specify an execution policy.
Or, since C++20, you can type slightly less and use std::erase_if.
However, in that case you lose the option to choose an execution policy.
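A minimal sketch of the C++20 one-liner (the function name prune and the int predicate are placeholders, not from the question):

#include <vector>

void prune(std::vector<int> &myVec)
{
    // Removes and erases in a single call; returns the number of erased elements.
    std::erase_if(myVec, [](int x) { return x % 2 == 0; });
}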
Is there an even faster approach than swap-and-pop for erasing from std::vector?
Ever since C++11, the optimal way to remove a single element from a vector without preserving order has been move-and-pop rather than swap-and-pop.
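A minimal sketch of move-and-pop (the helper name unordered_erase is an assumption, not a standard facility):

#include <utility>
#include <vector>

template <class T>
void unordered_erase(std::vector<T> &v, typename std::vector<T>::iterator it)
{
    // One move assignment instead of the three moves of a swap.
    if (it != v.end() - 1)          // avoid a self-move on the last element
        *it = std::move(v.back());
    v.pop_back();
}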
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently?
The remove-erase idiom (or std::erase_if in C++20) is the most efficient approach the standard provides. std::remove_if does preserve order, and if you don't care about that, a more efficient algorithm is possible. But the standard library does not come with an unstable remove out of the box. The algorithm goes as follows:
Find the first element to be removed (a)
Find the last element to be kept (b)
Move b into a's slot
Repeat between a and b until the iterators meet
There is a proposal, P0048, to add such an algorithm to the standard library, and there is a demo implementation at https://github.com/WG21-SG14/SG14/blob/6c5edd5c34e1adf42e69b25ddc57c17d99224bb4/SG14/algorithm_ext.h#L84
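A sketch of that algorithm (the name unstable_remove_if is an assumption; it requires bidirectional iterators, which std::vector provides):

#include <algorithm>
#include <utility>

template <class BidirIt, class UnaryPred>
BidirIt unstable_remove_if(BidirIt first, BidirIt last, UnaryPred pred)
{
    while (true) {
        // (a) first element that should be removed
        first = std::find_if(first, last, pred);
        if (first == last)
            return first;
        // (b) last element that should be kept
        do {
            if (first == --last)
                return first;
        } while (pred(*last));
        // Move b into a's slot and keep going.
        *first = std::move(*last);
        ++first;
    }
}

Erase the tail with v.erase(ret, v.end()) as usual; unlike std::remove_if, the relative order of the kept elements is not preserved.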
I'm writing a piece of C++ that checks to see whether particular elements of a vector return true, and uses remove_if() to remove them if not. After this, I use vector.size() to check to see if there are any elements remaining in the vector, and then return the function if not.
At the moment, I call vector.erase() after remove_if(), since remove_if() doesn't actually reduce the size of the vector. However, this code needs to run fast, and repeatedly shrinking a vector in memory is probably not ideal. On the other hand, returning early when the vector is empty (instead of running the rest of the function) probably also saves time.
Is there a nice way to check how many elements remain in the vector without erasing?
Here's the code:
auto remove = remove_if(sight.begin(), sight.end(), [](const Glance *a) {
return a->occupied;
});
sight.erase(remove, sight.end());
if (sight.size() == 0) {
// There's nowhere to move
return;
}
EDIT:
thanks for the help + the guidance. From the answers it's clear that the wording of the question isn't quite correct: erase() changes the vector's size() but leaves its capacity (the memory actually allocated) untouched, so no reallocation takes place. I had mis-remembered the explanation from this post, which nicely articulates why calling erase() element-by-element is slower than remove_if() for multiple removals (you end up shifting the same elements multiple times).
I used Instruments to benchmark my original code against Johannes' suggestion, and the difference was marginal, though Johannes' was consistently slightly faster (~9.8% vs ~8.3% weight, all other code being the same). The linked article should explain why.
You can use std::distance(sight.begin(), remove) to get the number of remaining elements:
auto remove = remove_if(sight.begin(), sight.end(), [](const Glance *a) {
return a->occupied;
});
size_t remaining = std::distance(sight.begin(), remove);
if (remaining == 0) {
// There's nowhere to move
return;
}
But if you are only interested in whether zero elements remain, you can do:
auto remove = remove_if(sight.begin(), sight.end(), [](const Glance *a) {
return a->occupied;
});
if (remove == sight.begin()) {
// There's nowhere to move
return;
}
Instead of erasing the elements that satisfy a criterion and then checking the number of remaining elements just to find out how many did not satisfy it, you could simply iterate the container and count those elements. The standard library has an algorithm for that: std::count_if.
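A sketch of that alternative, reusing the question's Glance type (std::count_if lives in <algorithm>):

auto alive = std::count_if(sight.begin(), sight.end(),
                           [](const Glance *a) { return !a->occupied; });
if (alive == 0) {
    // There's nowhere to move
    return;
}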
I am trying to hold multiple iterators into a somewhat complex range (using the range-v3 library) -- a manually implemented cartesian product using filter, for_each and yield. However, when I try to hold multiple iterators into such a range, they share a common value. For example:
#include <vector>
#include <iostream>
#include <range/v3/view/for_each.hpp>
#include <range/v3/view/filter.hpp>
int main() {
std::vector<int> data1{1,5,2,7,6};
std::vector<int> data2{1,5,2,7,6};
auto range =
data1
| ranges::v3::view::filter([](int v) { return v%2; })
| ranges::v3::view::for_each([&data2](int v) {
return data2 | ranges::v3::view::for_each([v](int v2) {
return ranges::v3::yield(std::make_pair(v,v2));
});
});
auto it1 = range.begin();
for (auto it2 = range.begin(); it2 != range.end(); ++it2) {
std::cout << "[" << it1->first << "," << it1->second << "] [" << it2->first << "," << it2->second << "]\n";
}
return 0;
}
I expected the iterator it1 to keep pointing at the beginning of the range, while the iterator it2 goes through the whole sequence. To my surprise, it1 is incremented as well! I get the following output:
[1,1] [1,1]
[1,5] [1,5]
[1,2] [1,2]
[1,7] [1,7]
[1,6] [1,6]
[5,1] [5,1]
[5,5] [5,5]
[5,2] [5,2]
[5,7] [5,7]
[5,6] [5,6]
[7,1] [7,1]
[7,5] [7,5]
[7,2] [7,2]
[7,7] [7,7]
[7,6] [7,6]
Why is that?
How can I avoid this?
How can I keep multiple, independent iterators pointing in various locations of the range?
Should I implement a cartesian product in a different way? (that's my previous question)
While it is not reflected in the MCVE above, consider a use case where someone tries to implement something similar to std::max_element -- returning an iterator to the highest-valued pair in the cross product. While looking for the highest value you need to store an iterator to the current best candidate. It must not change while you search, and it would be cumbersome to manage the iterators if you need a copy of the range (as suggested in one of the answers).
Materialising the whole cross product is not an option either, as it requires a lot of memory. After all, the whole point of using ranges with filters and other on-the-fly transformations is to avoid such materialisation.
It seems that the resulting view stores state in such a way that it turns out to be single-pass. You can work around that by simply making as many copies of the view as you need:
int main() {
std::vector<int> data1{1,5,2,7,6};
std::vector<int> data2{1,5,2,7,6};
auto range =
data1
| ranges::v3::view::filter([](int v) { return v%2; })
| ranges::v3::view::for_each([&data2](int v) {
return data2 | ranges::v3::view::for_each([v](int v2) {
return ranges::v3::yield(std::make_pair(v,v2));
});
});
auto range1= range; // Copy the view adaptor
auto it1 = range1.begin();
for (auto it2 = range.begin(); it2 != range.end(); ++it2) {
std::cout << "[" << it1->first << "," << it1->second << "] [" << it2->first << "," << it2->second << "]\n";
}
std::cout << '\n';
for (; it1 != range1.end(); ++it1) { // Consume the copied view
std::cout << "[" << it1->first << "," << it1->second << "]\n";
}
return 0;
}
Another option would be materializing the view into a container, as mentioned in the comments.
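A minimal way to materialize it without relying on range-v3's conversion helpers (note this stores the entire cartesian product, i.e. O(n²) memory):

std::vector<std::pair<int, int>> materialized;
for (auto p : range)               // one pass over the single-pass view
    materialized.push_back(p);
// materialized now offers ordinary random-access, multi-pass iterators.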
Keeping in mind the aforementioned limitation of single-pass views, it is not really hard to implement a max_element function that returns an iterator, with the important drawback of having to compute the sequence one and a half times.
Here's a possible implementation:
template <typename InputRange,typename BinaryPred = std::greater<>>
auto my_max_element(InputRange &range1,BinaryPred &&pred = {}) -> decltype(range1.begin()) {
auto range2 = range1;
auto it1 = range1.begin();
std::ptrdiff_t pos = 0L;
for (auto it2 = range2.begin(); it2 != range2.end(); ++it2) {
if (pred(*it2,*it1)) {
ranges::advance(it1,pos); // Computing again the sequence as the iterator advances!
pos = 0L;
}
++pos;
}
return it1;
}
What is going on here?
The entire problem originates in the fact that std::max_element requires its arguments to be LegacyForwardIterators, while the ranges created by ranges::v3::yield apparently (obviously?) only provide LegacyInputIterators. Unfortunately, the range-v3 docs do not explicitly mention the iterator categories one can expect (at least I haven't found them mentioned). That would indeed be a huge enhancement, as all standard library algorithms do explicitly state which iterator categories they require.
In the particular case of std::max_element you are not the first to stumble over this counterintuitive requirement of ForwardIterator rather than just InputIterator; see Why does std::max_element require a ForwardIterator? for example. In summary, the requirement does make sense: std::max_element does not (despite what its name suggests) return the max element, but an iterator to the max element. Hence it is, in particular, the multipass guarantee, which InputIterator lacks, that std::max_element needs.
For this reason, std::max_element does not work with many other parts of the standard library either, e.g. std::istreambuf_iterator, which is a real pity: you simply cannot get the max element of a file with the existing standard library! You either have to load the entire file into memory first, or you have to use your own max algorithm.
The standard library is simply missing an algorithm that really returns the max element rather than an iterator pointing to it. Such an algorithm could work with InputIterators as well. Of course, this is easy to implement manually, but it would still be handy to have it in the standard library. I can only speculate why it doesn't exist. Maybe one reason is that it would require the value_type to be copy constructible, because an InputIterator is not required to return references to the elements, and it might in turn seem counterintuitive for a max algorithm to make a copy...
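Such an algorithm is easy to write. A sketch, assuming a copyable value type (the name max_value is made up):

#include <functional>
#include <iterator>
#include <optional>

// Returns a copy of the largest element, or std::nullopt for an empty range.
// Works with plain input iterators, e.g. std::istreambuf_iterator.
template <class InputIt, class Compare = std::less<>>
auto max_value(InputIt first, InputIt last, Compare cmp = {})
    -> std::optional<typename std::iterator_traits<InputIt>::value_type>
{
    if (first == last)
        return std::nullopt;
    auto best = *first;
    for (++first; first != last; ++first)
        if (cmp(best, *first))
            best = *first;
    return best;
}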
So, now regarding your actual questions:
Why is this? (i.e. why does your range only return InputIterators?)
Obviously, yield creates the values on the fly. This is by design; it's the very reason to use yield in the first place: to avoid creating (and thus storing) the range upfront. Hence, I do not see how yield could be implemented in a way that fulfills the multipass guarantee; especially the second bullet gives me headaches:
If a and b compare equal (a == b is contextually convertible to true) then either they are both non-dereferenceable or *a and *b are references bound to the same object
Technically, I could imagine implementing yield in a way that all iterators created from one range share a common internal storage that is filled on the fly during the first traversal. Then it would be possible for different iterators to hand out references to the same underlying objects. But then std::max_element would silently consume O(n²) memory (all elements of your cartesian product). So, in my opinion, it is definitely better not to do this and instead make users materialize the range themselves, so that they are aware of it happening.
How can I avoid this?
Well, as metalfox already said, you can copy your view, which results in different ranges and thus independent iterators. Still, that wouldn't make std::max_element work. So, given the nature of yield, the answer to this question unfortunately is: you simply cannot avoid this with yield or any other technique that creates values on the fly.
How can I keep multiple, independent iterators pointing in various locations of the range?
This is related to the previous question, and it basically answers itself: if you want to point independent iterators at various locations, those locations have to exist somewhere in memory. So you need to materialize at least the elements that once had an iterator pointing to them, which in the case of std::max_element means all of them.
Should I implement a cartesian product in a different way?
I can imagine many different implementations, but none of them will be able to provide both of these properties together:
return ForwardIterators
require less than O(n²) memory
Technically, it could be possible to implement an iterator that is specialized for the usage with std::max_element, meaning that it keeps only the current max element in memory so that it can be referenced... But this would be somewhat ridiculous, wouldn't it? We cannot expect a general purpose library like range-v3 to come up with such highly specialized iterator categories.
Summary
You are saying
After all, I don't think my use case is such a rare outlier and ranges
are planned to be added to the C++20 standard - so there should be
some reasonable way to achieve this without traps...
I definitely agree that "this is not a rare outlier"! However, that doesn't necessarily imply that "there should be some reasonable way to achieve this without traps". Consider e.g. NP-hard problems: it is not a rare outlier to face one, yet it is impossible (unless P=NP) to solve them in polynomial time. In your case, it is simply not possible to use std::max_element without ForwardIterators, and it is not possible to implement a ForwardIterator (as defined by the standard library) over a cartesian product without consuming O(n²) memory.
For the particular case of std::max_element I would suggest implementing your own version that returns the max element rather than an iterator pointing to it, as sketched above.
However, if I understand your question correctly, your concern is more general and std::max_element is just an example. So I have to disappoint you: even with the existing standard library, some trivial things are impossible due to incompatible iterator categories (again, std::istreambuf_iterator is an existing example). If range-v3 is adopted, there will simply be a few more such examples.
So, finally, my recommendation is to go with your own algorithms where possible, and to swallow the pill of materializing the view otherwise.
An iterator is like a pointer to an element of the sequence; in this case, it1 points at the beginning of the range. Two iterators that refer to the same position compare equal; however, you can normally hold multiple iterators pointing at different locations of a vector. Hope this answers your question.
I have a C++11 list of complex elements that are defined by a structure node_info. A node_info element, in particular, contains a field time and is inserted into the list in ascending order of its time field value. That is, the list contains various node_info elements that are time ordered. I want to remove from this list all the nodes that satisfy a specific condition specified by coincidence_detect, which I am currently implementing as a predicate for a remove_if operation.
Since my list can be very large (on the order of 100k -- 10M elements), and since, by the way I build my list, this coincidence_detect condition is only satisfied by a few (thousands of) elements near the "lower" end of the list -- those whose time value is less than some t_xv -- I thought that, to speed up my code, I don't need to run remove_if through the whole list but can restrict it to the elements with time < t_xv.
remove_if(), however, does not seem to let the user control up to which point to iterate through the list.
My current code.
The list elements:
struct node_info {
    const char *type = "x"; // string literals require const char*
    int ID = -1;
    double time = 0.0;
    bool spk = true;
};
The predicate/condition for remove_if:
// Remove all events occurring at t_event
class coincident_events {
    double t_event; // Event time
    bool spk;       // Spike condition
public:
    coincident_events(double time, bool spk_) : t_event(time), spk(spk_) {}
    bool operator()(const node_info &node_event) const { // take by const reference
        return (node_event.time == t_event) && (node_event.spk == spk)
            && (strcmp(node_event.type, "x") != 0); // strcmp needs <cstring>
    }
};
The actual removing from the list:
void remove_from_list(double t_event, bool spk_){
// Remove all events occurring at t_event
coincident_events coincidence(t_event,spk_);
event_heap.remove_if(coincidence);
}
Pseudo main:
int main(){
// My list
std::list<node_info> event_heap;
...
// Populate list with elements with random time values, yet ordered in ascending order
...
remove_from_list(0.5, true);
return 0;
}
It seems that remove_if may not be ideal in this context. Should I consider instead instantiating an iterator and run an explicit for cycle as suggested for example in this post?
It seems that remove_if may not be ideal in this context. Should I consider instead instantiating an iterator and run an explicit for loop?
Yes and yes. Don't fight to use code that is preventing you from reaching your goals. Keep it simple. Loops are nothing to be ashamed of in C++.
First of all, comparing doubles for exact equality is not a good idea, as you are subject to floating-point error.
You could find the point up to which you want to search using std::lower_bound (I assume your list is properly sorted).
Then you could use the free-function algorithm std::remove_if on that prefix, followed by a call to erase() to remove the items between the iterator returned by remove_if and the one returned by lower_bound (a sketch follows below).
However, doing that would make multiple passes over the data and move nodes around, so it would hurt performance.
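For reference, that approach would look roughly like this (a sketch, assuming the list is sorted by time in ascending order, and reusing coincidence and t_xv from the question; the algorithms live in <algorithm>):

// Find the first node with time >= t_xv; only the prefix before it is searched.
auto bound = std::lower_bound(
    event_heap.begin(), event_heap.end(), t_xv,
    [](const node_info &n, double t) { return n.time < t; });
// Shift the kept nodes forward within that prefix, then erase the tail of it.
auto tail = std::remove_if(event_heap.begin(), bound, coincidence);
event_heap.erase(tail, bound);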
See also: https://en.cppreference.com/w/cpp/algorithm/remove
So in the end, it is probably preferable to write your own loop over the whole container and, for each element, check whether it needs to be removed; if not, check whether you should break out of the loop.
for (auto it = event_heap.begin(); it != event_heap.end(); )
{
if (coincidence(*it))
{
auto itErase = it;
++it;
event_heap.erase(itErase);
}
else if (it->time < t_xv)
{
++it;
}
else
{
break;
}
}
As you can see, the code can easily become quite long for something that should be simple. Thus, if you need that kind of algorithm often, consider writing your own generic algorithm.
Also, in practice you might not need a complete search for the end with the first solution if you process your data in increasing time order.
Finally, you might consider using a std::set instead. It could lead to simpler and more optimized code.
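A sketch of the std::set idea (here a std::multiset keyed on time; the comparator by_time and the function name remove_from_set are assumptions):

#include <set>

struct by_time {
    bool operator()(const node_info &a, const node_info &b) const {
        return a.time < b.time;
    }
};

void remove_from_set(std::multiset<node_info, by_time> &events,
                     double t_event, bool spk_)
{
    coincident_events coincidence(t_event, spk_);
    node_info probe;
    probe.time = t_event;
    // O(log n) jump to the end of the candidate prefix.
    auto stop = events.upper_bound(probe);
    for (auto it = events.begin(); it != stop; ) {
        if (coincidence(*it))
            it = events.erase(it);  // erase returns the next iterator
        else
            ++it;
    }
}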
Thanks. I used your comments and came up with this solution, which seemingly increases speed by a factor of 5-to-10.
void remove_from_list(double t_event,bool spk_){
coincident_events coincidence(t_event,spk_);
for(auto it=event_heap.begin();it!=event_heap.end();){
if(t_event>=it->time){
if(coincidence(*it)) {
it = event_heap.erase(it);
}
else
++it;
}
else
break;
}
}
The idea of using the iterator returned by erase (which already advances it, like ++it) was suggested by this other post. Note that in this implementation I am actually erasing all list elements up to the t_event value (meaning, t_event plays the role of t_xv).
I would like to traverse a map in C++ with iterators but not all the way to the end.
The problem is that even though we can do basic operations with iterators, we cannot add integers to or subtract integers from a map iterator.
How can I write the following instructions? (final is a map; window, an integer)
for (it=final.begin(); it!=final.end()-window; it++)
You cannot subtract from a map iterator directly, because map iterators are bidirectional rather than random-access, so subtraction would be an expensive operation (in practice applying --iter the required number of times). If you really want to do it anyway, you can use the standard library function std::advance.
map<...>::iterator end = final.end();
std::advance(end, -window);
That will give you the end of your window.
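Putting it together (a sketch; the int key/value types and the function name process are placeholders, and window must not exceed final.size()):

#include <iterator>
#include <map>

void process(std::map<int, int> &final, int window)
{
    auto stop = final.end();
    std::advance(stop, -window);   // OK: map iterators are bidirectional
    for (auto it = final.begin(); it != stop; ++it) {
        // loop body
    }
}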
std::map<T1, T2>::iterator it = final.begin();
for (int i = 0; i < final.size()-window; ++i, ++it)
{
// TODO: add your normal loop body
}
Replace T1 and T2 with the actual types of the keys and values of the map.
Why don't you make 'it' an iterator as well?
See the example here: http://www.cplusplus.com/reference/stl/map/begin/
Another solution:
size_t count=final.size();
size_t processCount=(window<count?count-window:0);
for (it=final.begin(); processCount && it!=final.end(); ++it, --processCount)
{
// loop body
}
This one is a bit safer:
It takes care of the case when your map is actually smaller than the value of window.
It will process at most processCount elements, even if you change the size of your map inside your loop (e.g. add new elements)
In C++98, size() was allowed to take O(n) time to compute (although typical implementations did it in O(1)); since C++11 it is required to be O(1). To be on the safe side, it is better not to call size() many times if it is not necessary.
end(), on the other hand, has amortized constant time, so it should be OK to have it in the for-loop condition.
++it may be faster than it++. The post-increment operator creates a temporary object, while the pre-increment operator does not. When the variable is a simple integral type the compiler can optimize the temporary away, but with iterators that is not always the case.