Why erase+remove is more efficient than remove alone - C++

I'm coding C++ to solve this problem from Leetcode: https://leetcode.com/problems/remove-element/
Given an array nums and a value val, remove all instances of that
value in-place and return the new length.
Do not allocate extra space for another array, you must do this by
modifying the input array in-place with O(1) extra memory.
The order of elements can be changed. It doesn't matter what you leave
beyond the new length.
Example 1:
Given nums = [3,2,2,3], val = 3,
Your function should return length = 2, with the first two elements of
nums being 2.
It doesn't matter what you leave beyond the returned length.
I have two solutions:
Solution A:
int removeElement(vector<int>& nums, int val) {
    nums.erase(remove(begin(nums), end(nums), val), end(nums));
    return nums.size();
}
Solution B:
int removeElement(vector<int>& nums, int val) {
    auto it = std::remove(nums.begin(), nums.end(), val);
    return it - nums.begin();
}
In my opinion, Solution B should be faster than Solution A. However, the result is the opposite:
Solution A spent 0 ms, whereas Solution B spent 4 ms.
I don't know why remove + erase is faster than remove.

For a vector of a trivially destructible type (int is one such type), erase(it, end()) is usually just a decrement of a size member (or pointer member, depending on the implementation strategy) that takes almost no time. 4 milliseconds is a very, very small difference; it can easily be caused by the state of the machine, and I wouldn't expect such a small difference to be reproducible.
If you want to really remove the elements from the vector, go with the first version. If you really want to do what std::remove does (you probably don't), go with the second version. Performance is not the problem here.
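A small sketch (not from the original post) makes the semantic difference visible: after a bare std::remove the vector's size() is unchanged, whereas the erase-remove idiom actually shrinks it:

#include <algorithm>
#include <vector>

int main() {
    std::vector<int> a = {3, 2, 2, 3};
    std::vector<int> b = a;

    // Solution A: erase-remove idiom; a.size() becomes 2.
    a.erase(std::remove(a.begin(), a.end(), 3), a.end());

    // Solution B: remove only; b.size() is still 4, the "new length" is
    // the distance to the returned iterator.
    auto it = std::remove(b.begin(), b.end(), 3);
    int new_length = static_cast<int>(it - b.begin()); // == 2, yet b.size() == 4
    (void)new_length; // silence unused-variable warnings
    return 0;
}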


Is std::sort the best choice to do in-place sort for a huge array with limited integer value?

I want to sort an array with a huge number (millions or even billions) of elements, while the values are integers within a small range (1 to 100 or 1 to 1000). In such a case, are std::sort and the parallelized version __gnu_parallel::sort the best choice for me?
Actually, I want to sort a vector of my own class with an integer member representing the processor index.
As there are other members inside the class, two objects with the same integer member used for comparison are not necessarily regarded as the same data.
Counting sort would be the right choice if you know that your range is so limited. If the range is [0, m), the most efficient way to do it is to have a vector in which the index represents the element and the value the count. For example:
vector<int> to_sort;
vector<int> counts;
for (int i : to_sort) {
    if (counts.size() <= (size_t)i) {
        counts.resize(i + 1, 0);
    }
    counts[i]++;
}
Note that the count at i is lazily initialized but you can resize once if you know m.
If you are sorting objects by some field and they are all distinct, you can modify the above as:
vector<T> to_sort;
vector<vector<const T*>> count_sorted;
for (const T& t : to_sort) {
    const int i = t.sort_field();
    if (count_sorted.size() <= (size_t)i) {
        count_sorted.resize(i + 1, {});
    }
    count_sorted[i].push_back(&t);
}
Now the main difference is that your space requirements grow substantially because you need to store the vectors of pointers: the space complexity goes from O(m) to O(n + m). Time complexity is the same. Note that the algorithm is stable. The code above assumes that to_sort stays in scope during the life cycle of count_sorted. If your Ts implement move semantics you can store the objects themselves and move them in. If you need count_sorted to outlive to_sort you will need to do so or make copies.
If your range is [-l, m) instead, the substance does not change much: index i now represents the value i - l (you offset every value by l when counting), and you need to know l beforehand.
Finally, it should be trivial to simulate an iteration through the sorted array by iterating through the counts array, taking the value of each count into account. If you want STL-like iterators you might need a custom data structure that encapsulates that behavior.
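For instance, a minimal sketch (not part of the original answer) of expanding the counts back into a sorted sequence of plain ints:

vector<int> sorted;
sorted.reserve(to_sort.size());
for (int value = 0; value < (int)counts.size(); ++value) {
    for (int k = 0; k < counts[value]; ++k) {
        sorted.push_back(value); // emit 'value' exactly counts[value] times
    }
}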
Note: in a previous version of this answer I mentioned multiset as a way to use a data structure for counting sort. That would be efficient in some Java implementations (I believe Guava's multiset would be) but not in C++, where the keys in the red-black tree are simply repeated many times.
You say "in-place", I therefore assume that you don't want to use O(n) extra memory.
First, count the number of objects with each value (as in Gionvanni's and ronaldo's answers). You still need to get the objects into the right locations in-place. I think the following works, but I haven't implemented or tested it:
Create a cumulative sum from your counts, so that you know what index each object needs to go to. For example, if the counts are 1: 3, 2: 5, 3: 7, then the cumulative sums are 1: 0, 2: 3, 3: 8, 4: 15, meaning that the first object with value 1 in the final array will be at index 0, the first object with value 2 will be at index 3, and so on.
The basic idea now is to go through the vector, starting from the beginning. Get the element's processor index, and look up the corresponding cumulative sum. This is where you want it to be. If it's already in that location, move on to the next element of the vector and increment the cumulative sum (so that the next object with that value goes in the next position along). If it's not already in the right location, swap it with the correct location, increment the cumulative sum, and then continue the process for the element you swapped into this position in the vector.
There's a potential problem when you reach the start of a block of elements that have already been moved into place. You can solve that by remembering the original cumulative sums, "noticing" when you reach one, and jump ahead to the current cumulative sum for that value, so that you don't revisit any elements that you've already swapped into place. There might be a cleverer way to deal with this, but I don't know it.
Finally, compare the performance (and correctness!) of your code against std::sort. This has better time complexity than std::sort, but that doesn't mean it's necessarily faster for your actual data.
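A minimal sketch of such a check, assuming a hypothetical in_place_counting_sort() implementing the scheme above and an Item type with a processor_index() key (both names are placeholders, not from the question):

#include <algorithm>
#include <cassert>
#include <vector>

void check_against_std_sort(std::vector<Item> input) {
    std::vector<Item> expected = input;
    in_place_counting_sort(input); // hypothetical implementation of the scheme above
    std::stable_sort(expected.begin(), expected.end(),
                     [](const Item& a, const Item& b) {
                         return a.processor_index() < b.processor_index();
                     });
    // The scheme above is not stable, so compare keys rather than whole objects.
    for (std::size_t i = 0; i < input.size(); ++i) {
        assert(input[i].processor_index() == expected[i].processor_index());
    }
}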
You definitely want to use counting sort. But not the one you're thinking of. Its main selling point is that its time complexity is O(N+X) where X is the maximum value you allow the sorting of.
Regular old counting sort (as seen in some other answers) can only sort integers, or has to be implemented with a multiset or some other data structure (becoming O(N log N)). But a more general version of counting sort can be used to sort (in place) anything that can provide an integer key, which is perfectly suited to your use case.
The algorithm is somewhat different though, and it's also known as American Flag Sort. Just like regular counting sort, it starts off by calculating the counts.
After that, it builds a prefix sums array of the counts. This is so that we can know how many elements should be placed behind a particular item, thus allowing us to index into the right place in constant time.
Since we know the correct final position of each item, we can just swap it into place. Doing only that would work if there were no repetitions, but since repetitions are almost certain, we have to be more careful.
First: when we put something into its place, we have to increment the value in the prefix sums so that the next element with the same value doesn't evict the previous element from its place.
Second: either
keep track of how many elements of each value we have already put into place, so that we don't keep moving elements whose value has already reached its final position; this requires a second copy of the counts array (taken before calculating the prefix sums), as well as a "move count" array; or
keep a copy of the prefix sums shifted over by one, so that we stop moving elements once the stored position of the latest element reaches the first position of the next value.
Even though the first approach is somewhat more intuitive, I chose the second method (because it's faster and uses less memory).
#include <algorithm> // std::iter_swap
#include <iterator>  // std::distance

template<class It, class KeyOf>
void countsort(It begin, It end, KeyOf key_of) {
    constexpr int max_value = 1000;
    int final_destination[max_value] = {}; // zero initialized
    int destination[max_value] = {};       // zero initialized
    // Record counts
    for (It it = begin; it != end; ++it)
        final_destination[key_of(*it)]++;
    // Build prefix sums of the counts
    for (int i = 1; i < max_value; ++i) {
        final_destination[i] += final_destination[i-1];
        destination[i] = final_destination[i-1];
    }
    for (auto it = begin; it != end; ++it) {
        auto key = key_of(*it);
        // while the item is not in its correct position
        while (std::distance(begin, it) != destination[key] &&
               // and not all items of this value have reached their final position
               final_destination[key] != destination[key]) {
            // swap it into the right place
            std::iter_swap(it, begin + destination[key]);
            // tidy up for the next iteration
            ++destination[key];
            key = key_of(*it);
        }
    }
}
Usage:
vector<Person> records = populateRecords();
countsort(records.begin(), records.end(), [](Person const& p){
    return p.id() - 1; // map [1, 1000] -> [0, 1000)
});
This can be further generalized to become MSD Radix Sort,
here's a talk by Malte Skarupke about it: https://www.youtube.com/watch?v=zqs87a_7zxw
Here's a neat visualization of the algorithm: https://www.youtube.com/watch?v=k1XkZ5ANO64
The answer given by Giovanni Botta is perfect, and Counting Sort is definitely the way to go. However, I personally prefer not to go resizing the vector progressively, but I'd rather do it this way (assuming your range is [0-1000]):
vector<int> to_sort;
vector<int> counts(1001);
int maxvalue = 0;
for (int i : to_sort) {
    if (i > maxvalue) maxvalue = i;
    counts[i]++;
}
counts.resize(maxvalue + 1);
It is essentially the same, but no need to be constantly managing the size of the counts vector. Depending on your memory constraints, you could use one solution or the other.

push_back/append or appending a vector with a loop in C++ Armadillo

I would like to create a vector (arma::uvec) of integers - I do not know the size of the vector ex ante. I could not find an appropriate function in the Armadillo documentation, and moreover I was not successful in creating the vector with a loop. I think the issue is in initializing the vector or in keeping track of its length.
arma::uvec foo(arma::vec x){
    arma::uvec vect;
    int nn = x.size();
    vect(0) = 1;
    int ind = 0;
    for (int i = 0; i < nn; i++){
        if ((x(i) > 0)){
            ind = ind + 1;
            vect(ind) = i;
        }
    }
    return vect;
}
The error message is: Error: Mat::operator(): index out of bounds.
I would not want to assign 1 to the first element of the vector, but could live with that if necessary.
PS: I would really like to know how to obtain the vector of unknown length by appending, so that I could use it even in more general cases.
Repeatedly appending elements to a vector is a really bad idea from a performance point of view, as it can cause repeated memory reallocations and copies.
There are two main solutions to that.
Set the size of the vector to the theoretical maximum length of your operation (nn in this case), and then use a loop to set some of the values in the vector. You will need to keep a separate counter for the number of set elements in the vector so far. After the loop, take a subvector of the vector, using the .head() function. The advantage here is that there will be only one copy.
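For illustration, a minimal sketch of that first approach applied to the question's function (the name positive_indices is just illustrative, not from the original post):

#include <armadillo>

arma::uvec positive_indices(const arma::vec& x){
    arma::uvec vect(x.n_elem);  // theoretical maximum length
    arma::uword count = 0;      // number of elements set so far
    for (arma::uword i = 0; i < x.n_elem; ++i){
        if (x(i) > 0){
            vect(count++) = i;
        }
    }
    return vect.head(count);    // take only the used part (one copy)
}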
An alternative solution is to use two loops, to reduce memory usage. In the first loop work out the final length of the vector. Then set the size of the vector to the final length. In the second loop set the elements in the vector. Obviously using two loops is less efficient than one loop, but it's likely that this is still going to be much faster than appending.
If you still want to be a lazy coder and inefficiently append elements, use the .insert_rows() function.
As a sidenote, your foo(arma::vec x) is already making an unnecessary copy of the input vector. Arguments in C++ are passed by value by default, which basically means C++ will make a copy of x before running your function. To avoid this unnecessary copy, change your function to foo(const arma::vec& x), which means "take a constant reference to x". The & is critical here.
In addition to mtall's answer, which I agree with, for a case in which performance wasn't needed I used this:
void uvec_push(arma::uvec& v, unsigned int value) {
    arma::uvec av(1);
    av.at(0) = value;
    v.insert_rows(v.n_rows, av.row(0));
}
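For completeness, a hedged usage sketch (not from the original answers) showing the question's foo() rewritten with this helper; fine when performance is not a concern:

arma::uvec foo(const arma::vec& x){
    arma::uvec vect;
    for (arma::uword i = 0; i < x.n_elem; ++i){
        if (x(i) > 0){
            uvec_push(vect, i); // append one index at a time
        }
    }
    return vect;
}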

Fastest way to copy one vector into another conditionally

This question is related to existing Question: fast way to copy one vector into another
I have a source vector S and I want to create a destination vector D which has only those elements of S that satisfy a particular condition (say, the element is even). Note that the source vector is constant.
I can think of two STL algorithms to do this :
copy_if
remove_if
In both methods, I will need to make sure the destination vector D is big enough, so I will need to initially create vector D with the same size as S. Also, in both methods, I want to compact vector D to the same length as the number of elements in it. I do not know which of them is faster or more convenient, and I don't know of any better way to copy a vector conditionally.
The simplest way is:
auto const predicate = [](int const value) { return value % 2 == 0; };
std::copy_if(begin(src), end(src), back_inserter(dest), predicate);
which relies on push_back.
Now, indeed, this may trigger memory reallocation. However, I'd like to underline that push_back has amortized constant complexity, meaning that on average it is O(1); this is achieved by an exponential growth strategy (so the number of allocations performed is O(log N)).
On the other hand, if you have 1 million elements, only 5 of which are even, it will not allocate 4 MB of memory up front only to relinquish all but 20 bytes of it later on.
Therefore:
it's optimal when the distribution is skewed toward odd numbers, because it does not over-allocate much
it's close to optimal otherwise, because it does not reallocate much
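To make the growth behavior concrete, here is a small sketch (not part of the original answer) that prints each reallocation during a run of push_back; the capacities grow geometrically, which is why the number of allocations is only O(log N):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last_capacity = v.capacity();
    for (int i = 0; i < 1000000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) { // a reallocation happened
            last_capacity = v.capacity();
            std::cout << "size " << v.size()
                      << " -> capacity " << last_capacity << "\n";
        }
    }
    return 0;
}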
Even more interesting, if you have an idea of the distribution up front, you can use reserve and shrink_to_fit:
// 90% of the time, 30% of the numbers are even:
dest.reserve(src.size() * 3 / 10);
auto const predicate = [](int const value) { return value % 2 == 0; };
std::copy_if(begin(src), end(src), back_inserter(dest), predicate);
dest.shrink_to_fit();
This way:
if there were less than 30%, shrink_to_fit might trim the excess
if there were 30%, bingo
if there were more than 30%, re-allocations are triggered as necessary, still following that O(log N) pattern anyway
Personal experience tells me that the call to reserve is rarely (if ever) worth it, amortized constant complexity being really good at keeping costs down.
Note: shrink_to_fit is non-binding, there is no guaranteed way to get the capacity to be equal to the size, the implementation chooses what's best.
Well, you could use back_inserter:
std::vector<int> foo = {...whatever...};
std::vector<int> bar;
std::back_insert_iterator< std::vector<int> > back_it (bar);
std::copy_if (foo.begin(), foo.end(), back_it, MyPredicate);
or count the elements first:
std::vector<int> foo = {...whatever...};
int mycount = count_if (foo.begin(), foo.end(), MyPredicate);
std::vector<int> bar (mycount);
std::copy_if (foo.begin(), foo.end(), bar.begin(), MyPredicate );
A third solution:
std::vector<int> foo = {...whatever...};
std::vector<int> bar (foo.size());
auto it = std::copy_if (foo.begin(), foo.end(), bar.begin(), MyPredicate );
bar.resize(std::distance(bar.begin(),it));
copy_if and remove_if have different semantics. The former needs a separate destination vector for the matching items
copy_if(begin(src), end(src), back_inserter(dst), myPred());
whereas the latter removes the non-matching items but they still have to be erased ( the remove-erase idiom)
src.erase(remove_if(begin(src), end(src), std::not1(myPred())), end(src));
If you want to have a separate destination vector, you need to
remove_copy_if(begin(src), end(src), back_inserter(dst), std::not1(myPred()));
This should be equally expensive as copy_if. I would find it more confusing because of the double negative (remove if not, vs copy if).
I would personally recommend using copy_if(). The nice thing about it is that it returns the output iterator at which it stopped copying. Here's an example for the even number case you mentioned:
vector<int> src;
// initialize to the numbers 0 -> 9
for(int i = 0; i < 10; ++i) {
    src.push_back(i);
}
// set initial size to src.size()
vector<int> dest(src.size());
// use copy_if
auto it = copy_if(src.begin(), src.end(), dest.begin(), [](int val){
    return val % 2 == 0;
});
dest.resize(it - dest.begin());
In this way, you will only need to resize once.

Inserting into a vector at the front

iterator insert ( iterator position, const T& x );
is the declaration of the insert member function of the std::vector class.
This function returns an iterator pointing to the inserted element. My question is: given this return type, what is the most efficient way of inserting at the beginning? (This is part of a larger program where speed is of the essence, so I am looking for the most computationally efficient approach.) Is it the following?
//Code 1
vector<int> intvector;
vector<int>::iterator it;
it = intvector.begin();
for(int i = 1; i <= 100000; i++){
    it = intvector.insert(it, i);
}
Or,
//Code 2
vector<int> intvector;
for(int i = 1; i <= 100000; i++){
    intvector.insert(intvector.begin(), i);
}
Essentially, in Code 2, is the parameter,
intvector.begin()
"Costly" to evaluate computationally as compared to using the returned iterator in Code 1 or should both be equally cheap/costly?
If one of the critical needs of your program is to insert elements at the beginning of a container, then you should use a std::deque and not a std::vector. std::vector is only good at inserting elements at the end.
Other containers have been introduced in C++11. I should start to find an updated graph with these new containers and insert it here.
The efficiency of obtaining the insertion point won't matter in the least - it will be dwarfed by the inefficiency of constantly shuffling the existing data up every time you do an insertion.
Use std::deque for this, that's what it was designed for.
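For reference, a minimal sketch (not from the original answers) of the question's loop rewritten against std::deque, where push_front does not shuffle the existing elements:

#include <deque>

std::deque<int> intdeque;
for (int i = 1; i <= 100000; ++i) {
    intdeque.push_front(i); // constant time at the front
}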
An old thread, but it showed up at a coworker's desk as the first search result for a Google query.
There is one alternative to using a deque that is worth considering:
std::vector<T> foo;
for (int i = 0; i < 100000; ++i)
foo.push_back(T());
std::reverse( foo.begin(), foo.end() );
You still use a vector which is significantly more engineered than deque for performance. Also, swaps (which is what reverse uses) are quite efficient. On the other hand, the complexity, while still linear, is increased by 50%.
As always, measure before you decide what to do.
If you're looking for a computationally efficient way of inserting at the front, then you probably want to use a deque instead of a vector.
Most likely deque is the appropriate solution, as suggested by others. But just for completeness, suppose that you need to do this front-insertion just once, that elsewhere in the program you don't need to do other operations on the front, and that otherwise vector provides the interface you need. If all of those are true, you could add the items with the very efficient push_back and then reverse the vector to get everything in order. That would have linear complexity rather than the quadratic complexity of repeatedly inserting at the front.
When you use a vector, you usually know the actual number of elements it is going to have. In this case, resizing the vector to the needed number of elements (100000 in the case you show) and filling them by using the [] operator is the fastest way. If you really need an efficient insert at the front, you can use deque or list, depending on your algorithms.
You may also consider inverting the logic of your algorithm and inserting at the end, that is usually faster for vectors.
I think you should change the type of your container if you really want to insert data at the beginning. That is the reason why vector does not have a push_front() member function.
Intuitively, I agree with @Happy Green Kid Naps and ran a small test showing that for small sizes (1 << 10 elements of a primitive data type) it doesn't matter. For larger container sizes (1 << 20), however, std::deque seems to perform better than reversing an std::vector. So, benchmark before you decide. Another factor might be the element type of the container.
Test 1: push_front (a) 1<<10 or (b) 1<<20 uint64_t into std::deque
Test 2: push_back (a) 1<<10 or (b) 1<<20 uint64_t into std::vector followed by std::reverse
Results:
Test 1 - deque (a) 19 µs
Test 2 - vector (a) 19 µs
Test 1 - deque (b) 6339 µs
Test 2 - vector (b) 10588 µs
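For anyone who wants to repeat the comparison, here is a rough sketch (not the benchmark used above) of how the two tests might be timed; exact numbers will of course depend on compiler, flags and machine:

#include <algorithm>
#include <chrono>
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t n = 1 << 20;
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    std::deque<std::uint64_t> d;                  // Test 1: deque push_front
    for (std::uint64_t i = 0; i < n; ++i) d.push_front(i);
    auto t1 = clock::now();

    std::vector<std::uint64_t> v;                 // Test 2: vector push_back + reverse
    for (std::uint64_t i = 0; i < n; ++i) v.push_back(i);
    std::reverse(v.begin(), v.end());
    auto t2 = clock::now();

    using us = std::chrono::microseconds;
    std::cout << "deque push_front:         "
              << std::chrono::duration_cast<us>(t1 - t0).count() << " us\n"
              << "vector push_back+reverse: "
              << std::chrono::duration_cast<us>(t2 - t1).count() << " us\n";
    return 0;
}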
You can support:
Insertion at the front.
Insertion at the end.
Changing the value at any position (won't be present in a deque).
Accessing the value at any index (won't be present in a deque).
All of the above operations in O(1) time complexity.
Note: you just need to know an upper bound on how far it can grow to the left and to the right (max_size).
#include <iostream>
using namespace std;

class Vector{
public:
    int front, end;
    int arr[100100]; // you should set this according to 2*max_size
    Vector(int initialize){
        arr[100100/2] = initialize; // initializing value
        front = end = 100100/2;
        front--; end++;
    }
    void push_back(int val){
        arr[end] = val;
        end++;
    }
    void push_front(int val){
        if(front < 0){ return; } // you should set the initial size accordingly
        arr[front] = val;
        front--;
    }
    int value(int idx){
        return arr[front + 1 + idx]; // front points at the next free slot on the left
    }
    // similarly, create a function to change the value at any index
};
int main(){
    Vector v(2);
    for(int i = 1; i < 100; i++){
        // O(1)
        v.push_front(i);
    }
    for(int i = 0; i < 20; i++){
        // access the value in O(1)
        cout << v.value(i) << " ";
    }
    return 0;
}
This may draw the ire of some because it does not directly answer the question, but it may help to keep in mind that retrieving the items from a std::vector in reverse order is both easy and fast.