I'm looking for the simplest way (algorithm?) to push an entire vector onto a queue and then delete the vector. I think there are a few ways to do this but I'm not sure which is best, or if all of them are correct. Option 1 is to use vector.pop_back(), but I'd have to go backwards through the for loop in this case, which isn't a problem since the order in which the objects go from the vector into the queue doesn't matter:
for(int i = vector.size() - 1; i >= 0; i--){
    queue.push(vector[i]);
    vector.pop_back();
}
Option 2 is to use vector.erase(). Also is it okay to do i < vector.size()? Because when I looked online for iterating through vectors I found a lot of i != vector.size() instead
for(unsigned i = 0; i < vector.size(); i++){
    queue.push(vector[i]);
    vector.erase[i];
}
My issue here is that if I erase vector[i], does vector[i+1] now become vector[i]? Or does vector[i] become a null value?
My 3rd option would be to just erase it all at the end
for(unsigned i = 0; i < vector.size(); i++){
    queue.push(vector[i]);
}
vector.erase(vector.begin(), vector.end());
Just for clarity, I don't want to get rid of the vector variable itself, just empty it after putting it into the queue, because it will eventually store a bunch of new things to dump into a queue again and again.
If you don't mind the objects being present in both the queue and the vector for a while, just do the simplest thing: your 3rd option, just with a clear() instead, to be explicit about what you're doing:
for(size_t i = 0; i < vector.size(); i++){
    queue.push(vector[i]);
}
vector.clear();
Of course, in C++11, you could use a range-based for loop, and even move the items out of the vector to avoid needless copies:
for (auto &elem : vector) {
    queue.push(std::move(elem));
}
vector.clear();
A)
for(int i = vector.size() - 1; i >= 0; i--){
    queue.push(vector[i]);
    vector.pop_back();
}
This should be a bit less efficient than C).
B)
for(unsigned i = 0; i < vector.size(); i++){
    queue.push(vector[i]);
    vector.erase[i];
}
"My issue here is that if I erase vector[i], does vector [i+1] now become vector[i]? Or does vector[i] become a Null value?"
Erase[i] doesn't work at all. You could use erase(vector.begin()) to pull elements from the vector's head one by one, but it's not very efficient: the whole loop ends up doing roughly N^2/2 element shifts, i.e. O(N^2), since you're always deleting from the vector's head.
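For what it's worth, that head-erasing variant would look roughly like this (a sketch reusing the question's vector/queue names); every erase at the front shifts all remaining elements left, which is where the quadratic cost comes from:
while (!vector.empty()) {
    queue.push(vector.front());
    vector.erase(vector.begin()); // O(size) per call: everything behind it shifts left
}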
C)
for(unsigned i = 0; i < vector.size(); i++){
    queue.push(vector[i]);
}
vector.clear();
Should be the most efficient way to go.
Note
The results of A and C differ, since in A you're pulling elements from the tail, while C takes them from the head.
Unless the element type is large, you can't do much (except move, which @Angew suggested). If the element size is small, there is no benefit from move either: the memory layouts of vector and queue are different, and for small types a move is just a copy anyway. If the element type is large, you may consider storing pointers to the elements (in a list or vector).
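A rough sketch of that pointer idea (my own illustration, with a hypothetical large type Big; only the pointers ever move between containers):
#include <memory>
#include <queue>
#include <vector>
struct Big { char payload[4096]; }; // hypothetical large element type
int main() {
    std::vector<std::unique_ptr<Big>> vec;
    vec.push_back(std::unique_ptr<Big>(new Big()));
    vec.push_back(std::unique_ptr<Big>(new Big()));
    std::queue<std::unique_ptr<Big>> q;
    for (auto &p : vec)
        q.push(std::move(p)); // moves the pointer, never the large payload
    vec.clear();
}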
If you were to use a deque instead of a queue, then you could call insert() and add all the elements at once.
Something like
template <class T, template <typename, typename> class Container>
class BlockQueue : public Container<T, std::allocator<T>>
{
public:
    BlockQueue() : Container<T, std::allocator<T>>()
    {
    }
    void push( T val )
    {
        this->push_back( val );
    }
    void push( const std::vector<T>& newData )
    {
        this->insert( this->end(), newData.begin(), newData.end() );
    }
};
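For instance, it could be used roughly like this (my own usage sketch, instantiating the Container parameter of the BlockQueue above with std::deque):
#include <deque>
#include <vector>
int main() {
    BlockQueue<int, std::deque> q; // the BlockQueue template defined above
    std::vector<int> data;
    data.push_back(1);
    data.push_back(2);
    data.push_back(3);
    q.push(data); // bulk insert: all elements appended in one call
    data.clear();
    q.push(4);    // single-element push still works
}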
I am trying to write code that will delete all the elements equal to a given value when an array holds that value at several indices. It works fine for deleting a single element, or for matches at odd indices (i.e. 1, 3, 5, etc.), but it skips one element when consecutive indices hold the same value.
I have just tried this to get my hands on arrays:
for(int i=0;i<n;i++) //for deletion
{
    if(arr[i]==_delete)
    {
        arr[i]=arr[i+1];
        --n;
    }
}
I suggest you use std::vector as a container for your objects.
std::vector<TYPE> vec ;
// initialise vector
You can use
vec.erase(std::remove_if(vec.begin(), vec.end(),
    [_delete](const auto & item){ return item == _delete; }), vec.end());
Alternatively, you can use std::list: erasing a single element from a list is constant time, and its remove() member function drops all matching elements in one linear pass.
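For instance, a minimal sketch (my own, assuming int elements and the value to remove stored in _delete):
#include <list>
int main() {
    std::list<int> lst = {1, 2, 3, 2, 4};
    int _delete = 2;
    lst.remove(_delete); // drops every element equal to _delete in one linear pass
    // lst now holds 1, 3, 4
}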
As an additional solution, if you want to deal with built-in C++ arrays, the standard std::remove algorithm can be rewritten like this:
void remove(int _delete) {
    int j = 0;
    for (int i = 0; i < n; ++i) {
        if (arr[i] != _delete) {
            arr[j++] = arr[i];
        }
    }
    // update the size!
    n = j;
}
It's quite neat: we keep only the elements we actually need and overwrite the ones we are not interested in (everything from position j to the end is leftover, whether or not it equals _delete).
I am a new programmer and I am trying to sort a vector of integers by their parities - put even numbers in front of odds. The order inside of the odd or even numbers themselves doesn't matter. For example, given an input [3,1,2,4], the output can be [2,4,3,1] or [4,2,1,3], etc. Below is my C++ code; sometimes I get lucky and the vector gets sorted properly, sometimes it doesn't. I exported the odd and even vectors and they look correct, but when I try to combine them together it is just messed up. Can someone please help me debug?
class Solution {
public:
    vector<int> sortArrayByParity(vector<int>& A) {
        unordered_multiset<int> even;
        unordered_multiset<int> odd;
        vector<int> result(A.size());
        for(int C:A)
        {
            if(C%2 == 0)
                even.insert(C);
            else
                odd.insert(C);
        }
        merge(even.begin(),even.end(),odd.begin(),odd.end(),result.begin());
        return result;
    }
};
If you just need even values before odds and not a complete sort I suggest you use std::partition. You give it two iterators and a predicate. The elements where the predicate returns true will appear before the others. It works in-place and should be very fast.
Something like this:
std::vector<int> sortArrayByParity(std::vector<int>& A)
{
    std::partition(A.begin(), A.end(), [](int value) { return value % 2 == 0; });
    return A;
}
That happens because std::merge assumes the two input ranges are already sorted (it is the merging step of merge sort). Instead, you should just use the insert function of vector:
result.insert(result.end(), even.begin(), even.end());
result.insert(result.end(), odd.begin(), odd.end());
return result;
There is no need to create three separate containers. Since you have already allocated enough space in the result vector, that vector itself can hold the separated odd and even numbers.
The value of using a vector, which under the covers is an array, is to avoid inserts and moves. Arrays/Vectors are fast because they allow immediate access to memory as an offset from the beginning. Take advantage of this!
The code simply keeps an index to the next odd and even indices and then assigns the correct cell accordingly.
class Solution {
public:
    // As this function does not access any members, it can be made static
    static std::vector<int> sortArrayByParity(std::vector<int>& A) {
        std::vector<int> result(A.size());
        std::size_t even_index = 0;
        std::size_t odd_index = A.size() - 1;
        for(int element : A)
        {
            if(element % 2 == 0)
                result[even_index++] = element;  // evens fill the front
            else
                result[odd_index--] = element;   // odds fill the back
        }
        return result;
    }
};
Taking advantage of the fact that you don't care about the order among the even or odd numbers themselves, you could use a very simple algorithm to sort the array in-place:
// Assume helper function is_even() and is_odd() are defined.
void sortArrayByParity(std::vector<int>& A)
{
    int i = 0;            // scanning from beginning
    int j = A.size() - 1; // scanning from end
    do {
        while (i < j && is_even(A[i])) ++i; // A[i] is an even at the front
        while (i < j && is_odd(A[j])) --j;  // A[j] is an odd at the back
        if (i >= j) break;
        // Now A[i] must be an odd number in front of an even number A[j]
        std::swap(A[i], A[j]);
        ++i;
        --j;
    } while (true);
}
Note that the function above returns void, since the vector is sorted in-place. If you do want to return a sorted copy of the input vector, you'd need to define a new vector inside the function and copy the elements right before every ++i and --j above. Of course you would then not use std::swap but copy the elements cross-wise instead, and you would pass A as const std::vector<int>& A.
// Assume helper function is_even() and is_odd() are defined.
std::vector<int> sortArrayByParity(const std::vector<int>& A)
{
    std::vector<int> B(A.size());
    int i = 0;            // scanning from beginning
    int j = A.size() - 1; // scanning from end
    do {
        while (i < j && is_even(A[i])) {
            B[i] = A[i];
            ++i;
        }
        while (i < j && is_odd(A[j])) {
            B[j] = A[j];
            --j;
        }
        if (i >= j) {
            if (i == j) B[i] = A[i]; // the element both scans stopped on still needs copying
            break;
        }
        // Now A[i] must be an odd number in front of an even number A[j]
        B[i] = A[j];
        B[j] = A[i];
        ++i;
        --j;
    } while (true);
    return B;
}
In both cases (in-place or out-of-place) above, the function has complexity O(N), N being number of elements in A, much better than the general O(N log N) for sorting N elements. This is because the problem doesn't actually sort much -- it only separates even from odd. There's therefore no need to invoke a full-fledged sorting algorithm.
I am using a very simple function in C++, vector.erase(); here's what I have (I'm trying to erase all instances of these three keywords from a .txt file).
First I use it in two separate for loops to erase all instances of <event> and </event>; this works perfectly and outputs the edited text file with no more instances of those words.
for (int j = 0; j < N-counter; j++) {
    if(myvec[j] == "<event>") {
        myvec.erase(myvec.begin()+j);
    }
}
for (int j = 0; j < N-counter; j++) {
    if(myvec[j] == "</event>") {
        myvec.erase(myvec.begin()+j);
    }
}
However, when I add a third for loop to do the EXACT same thing, literally just copy and paste with a new keyword as follows:
for (int j = 0; j < N-counter; j++) {
    if(myvec[j] == "</LesHouchesEvents>") {
        myvec.erase(myvec.begin()+j);
    }
}
It compiles and executes, however it completely destroys the .txt file, making it completely un-openable, and when I cat it, I just get a bunch of crazy symbols.
I have tried switching the order of these for loops, even getting rid of the first two for loops entirely, everything I can think of, alas it just will not work for the keyword </LesHouchesEvents> for some strange reason.
Your loops are not taking into account that when you erase() an element from a vector, the indexes of the remaining elements will decrement accordingly. So your loops will eventually exceed the bounds of the vector once you have erased at least 1 element. You need to take that into account:
std::string word = ...;
size_t count = N-counter;
for (size_t j = 0; j < count;) {
    if(myvec[j] == word) {
        myvec.erase(myvec.begin()+j);
        --count;
    }
    else {
        ++j;
    }
}
With that said, it would be safer to use iterators instead of indexes. erase() returns an iterator to the element that immediately follows the removed element. You can use std::find() for the actual searching:
#include <algorithm>
std::vector<std::string>::iterator iter = std::find(myvec.begin(), myvec.end(), word);
while (iter != myvec.end())
{
    iter = myvec.erase(iter);
    iter = std::find(iter, myvec.end(), word);
}
Or, you could just use std::remove() instead:
#include <algorithm>
myvec.erase(std::remove(myvec.begin(), myvec.end(), word), myvec.end());
I don't know if this is your specific problem or not, but this loop is almost surely not what you want.
Note the documentation for erase - it "shifts" left the remaining elements. Unfortunately, your code still increments j, meaning you're skipping the next element:
for (int j = 0; j< N-counter; j++) { // <- Don't increment j here
...
myvec.erase(myvec.begin()+j); // <- increment it only if this didn't happen.
}
You'll also need to adjust your loop's halting condition.
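A corrected version of that index-based loop might look like this (a sketch using the question's myvec and testing against the vector's current size rather than N-counter):
for (size_t j = 0; j < myvec.size(); ) {   // note: no j++ here
    if (myvec[j] == "</LesHouchesEvents>")
        myvec.erase(myvec.begin() + j);    // the next element slides into slot j
    else
        ++j;                               // only advance when nothing was erased
}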
Even assuming you got it working, this is nearly the worst possible way to remove items from a vector.
You almost certainly want the remove/erase idiom here, and you probably want to do all the comparisons in a single pass, so it's something like this:
std::vector<std::string> bad = {
    "<event>",
    "</event>",
    "</LesHouchesEvents>"
};
myvec.erase(std::remove_if(myvec.begin(), myvec.end(),
                [&](std::string const &s) {
                    return std::find(bad.begin(), bad.end(), s) != bad.end();
                }),
            myvec.end());
Here is a simple question I have been wondering about for a long time:
When I do a loop such as this one :
for (int i = 0; i < myVector.size() ; ++i) {
// my loop
}
As the condition i < myVector.size() is checked each time, should I store the size of the array inside a variable before the loop to prevent the call to size() each iteration ? Or is the compiler smart enough to do it itself ?
size_t mySize = myVector.size();
for (int i = 0; i < mySize ; ++i) {
// my loop
}
And I would extend the question with a more complex condition such as i < myVector.front()/myVector.size()
Edit: I don't use myVector inside the loop, it is just here to give the ending condition. And what about the more complex condition?
The answer depends mainly on the contents of your loop: it may modify the vector during processing, thus modifying its size.
However if the vector is just scanned you can safely store its size in advance:
for (int i = 0, mySize = myVector.size(); i < mySize ; ++i) {
// my loop
}
although in most classes the functions like 'get current size' are just inline getters:
class XXX
{
public:
    int size() const { return mSize; }
    ....
private:
    int mSize;
    ....
};
so the compiler can easily reduce the call to just reading the int variable; consequently, prefetching the length gives no gain.
If you are not changing anything in the vector (adding/removing elements) during the for loop (which is the normal case), I would use a range-based for loop:
for (auto object : myVector)
{
//here some code
}
or, if you cannot use C++11, I would use iterators (spelling out the iterator type, since auto itself is C++11):
for (std::vector</*element type*/>::iterator it = myVector.begin(); it != myVector.end(); ++it)
{
//here some code
}
I'd say that
for (int i = 0; i < myVector.size() ; ++i) {
// my loop
}
is a bit safer than
size_t mySize = myVector.size();
for (int i = 0; i < mySize ; ++i) {
// my loop
}
because the value of myVector.size() may change (as a result of, e.g., push_back(value) inside the loop), and thus you might miss some of the elements.
If you are 100% sure that the value of myVector.size() is not going to change, then both are the same thing.
Yet, the first one is a bit more flexible than the second (another developer may be unaware that the loop iterates over a fixed size and might change the vector's size). Don't worry about the compiler, it's smarter than both of us combined.
The overhead is very small.
vector.size() does not recalculate anything; it simply returns the stored size (or the difference of two internal pointers), so the call is O(1) either way.
It is safer than pre-buffering the value, because the vector's internal size changes whenever an element is pushed to or popped from the vector.
A compiler can optimize the call away if, and only if, it can prove that nothing changes the vector while the for loop runs.
That is difficult to guarantee if threads are involved, but without any threading going on it is an easy optimization.
Any smart compiler will probably optimize this out. However, just to be sure, I usually lay out my for loops like this:
for (int i = myvector.size() -1; i >= 0; --i)
{
}
A couple of things are different:
The iteration is done the other way around, although this shouldn't be a problem in most cases. If it is, I prefer David Haim's method.
The --i is used rather than i--. In theory --i is faster, although on most compilers it won't make a difference.
If you don't care about the index this:
for (int i = myvector.size(); i > 0; --i)
{
}
Would also be an option. Although in general I don't use it, because it is a bit more confusing than the first and will not gain you any performance.
For a type like std::vector or std::list, an iterator is the preferred method:
for (std::vector</*vectortype here*/>::iterator i = myVector.begin(); i != myVector.end(); ++i)
{
}
I have a vector of items items, and a vector of indices that should be deleted from items:
std::vector<T> items;
std::vector<size_t> indicesToDelete;
items.push_back(a);
items.push_back(b);
items.push_back(c);
items.push_back(d);
items.push_back(e);
indicesToDelete.push_back(3);
indicesToDelete.push_back(0);
indicesToDelete.push_back(1);
// given these 2 data structures, I want to remove items so it contains
// only c and e (deleting indices 3, 0, and 1)
// ???
What's the best way to perform the deletion, knowing that with each deletion, it affects all other indices in indicesToDelete?
A couple ideas would be to:
Copy items to a new vector one item at a time, skipping if the index is in indicesToDelete
Iterate items and for each deletion, decrement all items in indicesToDelete which have a greater index.
Sort indicesToDelete first, then iterate indicesToDelete, and for each deletion increment an indexCorrection which gets subtracted from subsequent indices.
All seem like I'm over-thinking such a seemingly trivial task. Any better ideas?
Edit: Here is the solution, basically a variation of #1 but using iterators to define blocks to copy to the result.
template<typename T>
inline std::vector<T> erase_indices(const std::vector<T>& data, std::vector<size_t>& indicesToDelete/* can't assume copy elision, don't pass-by-value */)
{
    if(indicesToDelete.empty())
        return data;
    std::vector<T> ret;
    ret.reserve(data.size() - indicesToDelete.size());
    std::sort(indicesToDelete.begin(), indicesToDelete.end());
    // now we can assume there is at least 1 element to delete. copy blocks at a time.
    typename std::vector<T>::const_iterator itBlockBegin = data.begin();
    for(std::vector<size_t>::const_iterator it = indicesToDelete.begin(); it != indicesToDelete.end(); ++it)
    {
        typename std::vector<T>::const_iterator itBlockEnd = data.begin() + *it;
        if(itBlockBegin != itBlockEnd)
        {
            std::copy(itBlockBegin, itBlockEnd, std::back_inserter(ret));
        }
        itBlockBegin = itBlockEnd + 1;
    }
    // copy last block.
    if(itBlockBegin != data.end())
    {
        std::copy(itBlockBegin, data.end(), std::back_inserter(ret));
    }
    return ret;
}
I would go for 1/3, that is: sort the indices vector, then create two iterators into the data vector, one for reading and one for writing. Initialize the writing iterator to the first element to be removed, and the reading iterator to one beyond that one. Then in each step of the loop advance the iterators to the next value (writing) and the next value not to be skipped (reading) and copy/move the elements. At the end of the loop call erase to discard the elements beyond the last written-to position.
BTW, this is the approach implemented in the remove/remove_if algorithms of the STL, with the difference that you maintain the condition in a separate ordered vector.
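A minimal sketch of that read/write iterator idea, as I read it (not the answerer's actual code; it assumes the indices are unique and within range):
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>
template <typename T>
void erase_indices_in_place(std::vector<T>& data, std::vector<std::size_t> indices)
{
    std::sort(indices.begin(), indices.end());
    std::size_t next = 0;        // next entry of 'indices' to skip
    auto write = data.begin();   // where the next kept element goes
    for (std::size_t read = 0; read < data.size(); ++read) {
        if (next < indices.size() && indices[next] == read) {
            ++next;                                // this position is deleted: skip it
        } else {
            if (write != data.begin() + read)
                *write = std::move(data[read]);    // shift the kept element left
            ++write;
        }
    }
    data.erase(write, data.end());                 // drop the moved-from tail
}
With the question's data, erase_indices_in_place(items, indicesToDelete) leaves just c and e, and it does so in a single pass without any extra allocation.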
std::sort() the indicesToDelete in descending order and then delete from the items in a normal for loop. No need to adjust indices then.
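For illustration, that could look something like this (my own sketch with the question's names, assuming the indices are unique and valid):
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>
template <typename T>
void erase_by_descending_index(std::vector<T>& items, std::vector<std::size_t> indicesToDelete)
{
    // Highest index first, so an erase never shifts a position we still have to erase.
    std::sort(indicesToDelete.begin(), indicesToDelete.end(), std::greater<std::size_t>());
    for (std::size_t idx : indicesToDelete)
        items.erase(items.begin() + idx);
}
Note that each erase still shifts the tail of the vector, so this is simple rather than fast; for large inputs the block-copy approach above scales better.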
It might even be option 4:
If you are deleting a few items from a large number, and know that there will never be a high density of deleted items:
Replace each of the items at indices which should be deleted with 'tombstone' values, indicating that there is nothing valid at those indices, and make sure that whenever you access an item, you check for a tombstone.
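A toy illustration of the tombstone idea (not part of the answer; it uses C++17 std::optional as the tombstone and the deletion indices from the question):
#include <cstddef>
#include <iostream>
#include <optional>
#include <string>
#include <vector>
int main() {
    std::vector<std::optional<std::string>> items = {"a", "b", "c", "d", "e"};
    std::vector<std::size_t> indicesToDelete = {3, 0, 1};
    for (std::size_t idx : indicesToDelete)
        items[idx].reset();             // tombstone: the slot stays, the value goes
    for (const auto &item : items)
        if (item)                       // every access has to check for a tombstone
            std::cout << *item << '\n'; // prints c and e
}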
It depends on the numbers you are deleting.
If you are deleting many items, it may make sense to copy the items that are not deleted to a new vector and then replace the old vector with the new vector (after sorting the indicesToDelete). That way, you will avoid compressing the vector after each delete, which is an O(n) operation, possibly making the entire process O(n^2).
If you are deleting a few items, perhaps do the deletion in reverse index order (assuming the indices are sorted), then you do not need to adjust them as items get deleted.
Since the discussion has somewhat transformed into a performance related question, I've written up the following code. It uses remove_if and vector::erase, which should move the elements a minimal number of times. There's a bit of overhead, but for large cases, this should be good.
However, if you don't care about the relative order of elements, then this will not be all that fast.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
#include <set>
using std::vector;
using std::string;
using std::remove_if;
using std::cout;
using std::endl;
using std::set;
struct predicate {
public:
    predicate(const vector<string>::iterator & begin, const vector<size_t> & indices) {
        m_begin = begin;
        m_indices.insert(indices.begin(), indices.end());
    }
    bool operator()(string & value) {
        const int index = distance(&m_begin[0], &value);
        set<size_t>::iterator target = m_indices.find(index);
        return target != m_indices.end();
    }
private:
    vector<string>::iterator m_begin;
    set<size_t> m_indices;
};
int main() {
    vector<string> items;
    items.push_back("zeroth");
    items.push_back("first");
    items.push_back("second");
    items.push_back("third");
    items.push_back("fourth");
    items.push_back("fifth");
    vector<size_t> indicesToDelete;
    indicesToDelete.push_back(3);
    indicesToDelete.push_back(0);
    indicesToDelete.push_back(1);
    vector<string>::iterator pos = remove_if(items.begin(), items.end(), predicate(items.begin(), indicesToDelete));
    items.erase(pos, items.end());
    for (int i = 0; i < items.size(); ++i)
        cout << items[i] << endl;
}
The output for this would be:
second
fourth
fifth
There is a bit of a performance overhead that can still be reduced. In remove_if (at least on gcc), the predicate is copied by value for each element in the vector. This means that we're possibly invoking the copy constructor of the set m_indices each time. If the compiler is not able to get rid of this, then I would recommend passing the indices in as a set, and storing it as a const reference.
We could do that as follows:
struct predicate {
public:
    predicate(const vector<string>::iterator & begin, const set<size_t> & indices) : m_begin(begin), m_indices(indices) {
    }
    bool operator()(string & value) {
        const int index = distance(&m_begin[0], &value);
        set<size_t>::iterator target = m_indices.find(index);
        return target != m_indices.end();
    }
private:
    const vector<string>::iterator & m_begin;
    const set<size_t> & m_indices;
};
int main() {
    vector<string> items;
    items.push_back("zeroth");
    items.push_back("first");
    items.push_back("second");
    items.push_back("third");
    items.push_back("fourth");
    items.push_back("fifth");
    set<size_t> indicesToDelete;
    indicesToDelete.insert(3);
    indicesToDelete.insert(0);
    indicesToDelete.insert(1);
    vector<string>::iterator pos = remove_if(items.begin(), items.end(), predicate(items.begin(), indicesToDelete));
    items.erase(pos, items.end());
    for (int i = 0; i < items.size(); ++i)
        cout << items[i] << endl;
}
Basically the key to the problem is remembering that if you delete the object at index i, and don't use a tombstone placeholder, then the vector must make a copy of all of the objects after i. This applies to every possibility you suggested except for #1. Copying to a new list makes one copy no matter how many you delete, making it by far the fastest answer.
And as David Rodríguez said, sorting the list of indexes to be deleted allows for some minor optimizations, but it may only be worth it if you're deleting more than 10-20 (please profile first).
Here is my solution for this problem which keeps the order of the original "items":
create a "vector mask" and initialize (fill) it with "false" values.
change the values of mask to "true" for all the indices you want to remove.
loop over all members of "mask" and erase from both vectors "items" and "mask" the elements with "true" values.
Here is the code sample:
#include <iostream>
#include <vector>
using namespace std;
int main()
{
    vector<unsigned int> items(12);
    vector<unsigned int> indicesToDelete(3);
    indicesToDelete[0] = 3;
    indicesToDelete[1] = 0;
    indicesToDelete[2] = 1;
    for(int i=0; i<12; i++) items[i] = i;
    for(int i=0; i<items.size(); i++)
        cout << "items[" << i << "] = " << items[i] << endl;
    // removing indices
    vector<bool> mask(items.size());
    vector<bool>::iterator mask_it;
    vector<unsigned int>::iterator items_it;
    for(size_t i = 0; i < mask.size(); i++)
        mask[i] = false;
    for(size_t i = 0; i < indicesToDelete.size(); i++)
        mask[indicesToDelete[i]] = true;
    mask_it = mask.begin();
    items_it = items.begin();
    while(mask_it != mask.end()){
        if(*mask_it){
            items_it = items.erase(items_it);
            mask_it = mask.erase(mask_it);
        }
        else{
            mask_it++;
            items_it++;
        }
    }
    for(int i=0; i<items.size(); i++)
        cout << "items[" << i << "] = " << items[i] << endl;
    return 0;
}
This is not a fast implementation for use with large data sets: each call to erase() takes time to rearrange the vector after eliminating an element.