Fast algorithm to remove odd elements from vector - c++

Given a vector of integers, I want to write a fast (not the obvious O(n^2)) algorithm to remove all odd elements from it.
My idea is: iterate through the vector until the first odd element, copy everything before it to the end of the vector (calling push_back), and repeat until all the original elements (except the copied ones) have been examined; then erase all of the original elements, so that only the vector's tail survives.
I wrote the following code to implement it:
void RemoveOdd(std::vector<int> *data) {
    size_t i = 0, j, start, end;
    uint l = (*data).size();
    start = 0;
    for (i = 0; i < l; ++i)
    {
        if ((*data)[i] % 2 != 0)
        {
            end = i;
            for (j = start, j < end, ++j)
            {
                (*data).push_back((*data)[j]);
            }
            start = i + 1;
        }
    }
    (*data).erase((*data).begin(), i);
}
but it gives me lots of errors that I can't fix. I'm very new to programming, so I expect all of them are elementary and stupid.
Please help me with error corrections or another algorithm implementation. Any suggestions and explanations would be much appreciated. It would also be better not to use the algorithm library.

You can use the remove-erase idiom.
data.erase(std::remove_if(data.begin(), data.end(),
                          [](int item) { return item % 2 != 0; }),
           data.end());
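For instance, a complete minimal program using the idiom (the sample values are just for illustration):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> data{1, 2, 3, 4, 5, 6, 7};
    // std::remove_if shifts the kept elements to the front and returns an
    // iterator to the new logical end; erase then trims the leftovers.
    data.erase(std::remove_if(data.begin(), data.end(),
                              [](int item) { return item % 2 != 0; }),
               data.end());
    for (int v : data) std::cout << v << ' ';  // prints: 2 4 6
}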

You don't really need to push_back anything (or erase elements at the front, which requires repositioning all that follows) to remove elements according to a predicate... Try to understand the "classic" in-place removal algorithm (which ultimately is how std::remove_if is generally implemented):
void RemoveOdd(std::vector<int> & data) {
int rp = 0, wp = 0, sz = data.size();
for(; rp<sz; ++rp) {
if(data[rp] % 2 == 0) {
// if the element is a keeper, write it in the "write pointer" position
data[wp] = data[rp];
// increment so that next good element won't overwrite this
wp++;
}
}
// shrink to include only the good elements
data.resize(wp);
}
rp is the "read" pointer - it's the index to the current element; wp is the "write" pointer - it always points to the location where we'll write the next "good" element, which is also the "current length" of the "new" vector. Every time we have a good element we copy it in the write position and increment the write pointer. Given that wp <= rp always (as rp is incremented once at each iteration, and wp at most once per iteration), you are always overwriting either an element with itself (so no harm is done), or an element that has already been examined and either has been moved to its correct final position, or had to be discarded anyway.
This version is done with specific types (vector<int>), a specific predicate, with indexes and with "regular" (non-move) assignment, but it can be easily generalized to any container with forward iterators (as is done in std::remove_if) and erase.
Even if the generic standard library algorithm works well in most cases, this is still an important algorithm to keep in mind; there are often cases where the generic library version isn't sufficient, and knowing the underlying idea is useful for implementing your own version.
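To make that generalization concrete, here is a sketch of such a generic version; the name remove_if_sketch is made up for illustration, and this is not the standard library's actual code:
#include <utility>  // std::move

template <typename ForwardIt, typename Predicate>
ForwardIt remove_if_sketch(ForwardIt first, ForwardIt last, Predicate pred)
{
    ForwardIt write = first;              // next slot for an element we keep
    for (; first != last; ++first)
    {
        if (!pred(*first))                // keeper: move it into place
        {
            if (write != first)           // avoid self-move-assignment
                *write = std::move(*first);
            ++write;
        }
    }
    return write;                         // new logical end of the range
}
You would then erase the tail exactly as before: v.erase(remove_if_sketch(v.begin(), v.end(), pred), v.end());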

With a pure algorithm implementation, you don't need to push back elements at all. With your approach, in the worst-case scenario (all data odd) you would do more than n^2 copies.
Keep two indices: one for iterating (i) and one for placing (placed). Iterate over the whole vector (i++), and if (*data)[i] is even, write it to (*data)[placed] and increment placed. At the end, reduce the length to placed; all elements after it are unnecessary.
remove_if does this for you ;)

void DeleteOdd(std::vector<int>& m_vec) {
    for (int i = 0; i < (int)m_vec.size(); ++i) {
        if (m_vec[i] & 0x01)  // low bit set => the value is odd
        {
            m_vec.erase(m_vec.begin() + i);
            i--;  // the next element shifted into slot i; revisit it
        }
    }
    // Note: erasing inside the loop shifts the tail each time, so this is O(n^2).
}

Related

How to remove elements from a vector that uses the vector size in the for loop

I have a game where I shoot bullets at objects. I delete an object when it gets hit by a bullet, and I also delete bullets that go off screen.
For example:
std::vector<object> object_list;
for (size_t i = 0; i < object_list.size(); i++)
{
    if (object_list[i].hit())
    {
        object_list.erase(object_list.begin() + i);
    }
    else
        object_list[i].draw();
}
The problem with this is that when I remove an object, the size of the vector decreases, so the index check goes wrong and I get an error such as "vector subscript out of range." I could just choose not to render the hit objects, rendering only those that haven't been hit, but the problem with that is that the number of objects increases when one is hit (it splits up), so eventually the program would get slower. I've used a similar concept for the off-screen bullets, but I can't find a way around it. I'm looking for a solution to this, or a better way of removing elements.
Both object and bullet are classes.
You should split the for loop into 2 parts:
remove all "hit" elements:
object_list.erase(std::remove_if(object_list.begin(),
                                 object_list.end(),
                                 [](auto&& item) { return item.hit(); }),
                  object_list.end());
draw remaining:
std::for_each(object_list.begin(), object_list.end(), [](auto&& item) { item.draw(); });
It's safer and more readable.
Same idea as the other answers, but this code is a little easier with iterators:
for (auto i = object_list.begin(); i != object_list.end(); )
{
    if (i->hit())
    {
        i = object_list.erase(i);
    }
    else
    {
        i->draw();
        ++i;
    }
}
vector::erase returns an iterator to the next element, which you can use to continue the loop.
Functional approach using the range-v3 library (C++20)
[...] I'm looking for a solution to this or better way of removing elements.
Using the ranges::actions::remove_if action from the range-v3 library, you can use a functional programming style approach to mutate the object_list container in-place:
object_list |= ranges::actions::remove_if(
    [](const auto& obj) { return obj.hit(); });
followed by a subsequent ranges::for_each invocation to draw the objects:
ranges::for_each(object_list, [](const auto& obj){ obj.draw(); });
You could do something like this:
for (size_t i = 0; i < object_list.size(); )
{
    if (object_list[i].hit())
        object_list.erase(object_list.begin() + i);
    else
    {
        object_list[i].draw();
        ++i;
    }
}
Let us say you are at i=5 and that object has been hit; after deleting that element, the object at i=6 shifts to i=5, and you haven't checked it, so just add i--; after your erase statement.
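For illustration, here is the question's loop with that fix applied:
for (size_t i = 0; i < object_list.size(); i++)
{
    if (object_list[i].hit())
    {
        object_list.erase(object_list.begin() + i);
        // Step back so the element that shifted into slot i is checked too.
        // (With an unsigned i this wraps at i == 0, but the loop's i++
        // immediately undoes it, which is well-defined for unsigned types.)
        i--;
    }
    else
        object_list[i].draw();
}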
Another way to do it would be -
for (size_t i = 0; i < object_list.size(); )
{
    if (object_list[i].hit())
    {
        object_list.erase(object_list.begin() + i);
    }
    else
    {
        object_list[i].draw();
        i++;
    }
}
Also, it could possibly be faster to remove the object from the vector in the code that marks it as hit; that way you only need to draw the objects left in the list. Some more background on how you are doing all this would help in suggesting something more specific :)
The shown code does not fail or give a vector subscript out of range - it just does not consider every object, as it skips over the element after the removed one.
For very short and concise solutions employing concepts from C++11 and later, see the answer by Equod or the one by dfri
To better understand the issue, and/or if you have to stick to for loops with indices, you basically have two options:
Iterate over the vector in reverse direction (i.e. start at the last element); then the shifting of items after the current one is not a problem:
for (int i = object_list.size() - 1; i >= 0; --i)
{
    if (object_list[i].hit())
    {
        object_list.erase(object_list.begin() + i);
    }
    else
    {
        object_list[i].draw();
    }
}
Or, if the order is important (as I could imagine with items to draw), and you have to iterate from front to back, then only increase the counter i if you have not erased the current element:
for (int i=0; i<object_list.size(); /* No increase here... */ )
{
if (object_list[i].hit())
{
object_list.erase(object_list.begin() + i);
}
else
{
object_list[i].draw();
++i; // ...just here if we didn't remove the element
}
}
I suspect that std::vector is not the container you want (but, of course, I don't know your entire code). Each call to erase has to shift every element after the erased one toward the front (copying or moving your objects), which can be very costly. Your actual problem is a symptom of a design problem.
From what I see, std::list is probably better:
std::list<object> objects;
// ...
for (std::list<object>::iterator it = objects.begin(); it != objects.end();)
{
    if (it->hit())
        objects.erase(it++); // no object copied
    else
    {
        it->draw();
        ++it;
    }
}

Removing first three elements of 2d array C++

So here's my problem: I have a 2D vector of 2-char card strings, e.g.
9D 5C 6S 9D KS 4S 9D
9S
If 3 are found, I need to delete the first 3 based on the first char of the card.
My problem is that almost anything I do segfaults...
pool is the 2d vector
selection = "9S";
while (col != GameBoard::pool.size()) {
    while (GameBoard::pool[col][0].at(0) == selection.at(0) || cardsRem != 0) {
        if (GameBoard::pool[col].size() == 1) {
            GameBoard::pool.erase(GameBoard::pool.begin() + col);
            cardsRem--;
        }
        else {
            GameBoard::pool[col].pop_back();
            cardsRem--;
        }
    }
    if (GameBoard::pool[col][0].at(0) != selection.at(0)) {
        col++;
    }
}
I've tried a series of for loops etc, and no luck! Any thoughts would save my sanity!
So I've tried to pull out a code segment to replicate it, but I can't...
If I run my whole program in a loop, it will eventually throw a segfault. If I run that exact code in the same circumstances, it doesn't... I'm trying to figure out what I'm missing. I'll post again if I figure out exactly where my issue is.
So in the end the issue is not this code itself; I've got memory errors or something somewhere that add up to eventually crash my program... and it tends to be in the same method each time, I guess.
The safest and most efficient way to erase some elements from a container is to apply the erase-remove idiom.
For instance, your snippet can be rewritten as the following:
using card_t = std::string;

std::vector<std::vector<card_t>> decks = {
    {"9D", "5C", "6S", "9D", "KS", "4S", "9D"},
    {"9S"}
};

card_t selection{"9S"};

// Predicate specifying which cards should be removed
auto has_same_rank = [rank = selection.at(0)] (card_t const& card) {
    return card.at(0) == rank;
};

auto& deck = decks.at(0);

// 'std::remove_if' removes all the elements satisfying the predicate from the range
// by moving the elements that are not to be removed to the beginning of the range,
// and returns a past-the-end iterator for the new end of the range.
// 'std::vector::erase' removes from the vector the elements from the iterator
// returned by 'std::remove_if' up to the end iterator. Note that it invalidates
// iterators and references at or after the point of the erase, including the
// end() iterator (that's the most common cause of errors in code like the OP's).
deck.erase(std::remove_if(deck.begin(), deck.end(), has_same_rank),
           deck.end());
So for anyone else in the future who comes across this...
The problem was that I was deleting elements from the vector inside a loop whose stop condition was the vector's size. The size was captured beforehand, and while the code tried to account for that, it still left open the possibility of a while (array.size()) condition seeing a stale value: locked in at 8 by the loop while the actual size had already dropped to 6.
The solution was to save the positions to delete and then delete them outside of the loop. I imagine there is a better, more technical answer to this, but it works as intended now!
for (size_t col = 0; col < size; ++col)
{
    if (GameBoard::pool[col][0].at(0) == selection.at(0)) {
        while (GameBoard::pool[col][0].at(0) == selection.at(0) && cardsRem != 0) {
            if (GameBoard::pool[col].size() > 1) {
                GameBoard::pool[col].pop_back();
                cardsRem--;
            }
            if (GameBoard::pool[col].size() < 2) {
                // Remember the column for later; inserting at the front keeps
                // toDel in descending order, so the erases below don't shift
                // the positions still waiting to be erased.
                toDel.insert(toDel.begin(), col);
                //GameBoard::pool.erase(GameBoard::pool.begin() + col);
                cardsRem--;
                size--;
            }
        }
    }
}
for (size_t i = 0; i < toDel.size(); i++) {
    GameBoard::pool.erase(GameBoard::pool.begin() + toDel[i]);
}

How to remove elements from a vector based on a condition in another vector?

I have two equal length vectors from which I want to remove elements based on a condition in one of the vectors. The same removal operation should be applied to both so that the indices match.
I have come up with a solution using erase, but it is extremely slow:
vector<myClass> a = ...;
vector<otherClass> b = ...;
assert(a.size() == b.size());

for (size_t i = 0; i < a.size(); i++)
{
    if (!a[i].alive())
    {
        a.erase(a.begin() + i);
        b.erase(b.begin() + i);
        i--;
    }
}
Is there a way that I can do this more efficiently and preferably using stl algorithms?
If order doesn't matter you could swap the elements to the back of the vector and pop them.
for (size_t i = 0; i < a.size(); )
{
    if (!a[i].alive())
    {
        std::swap(a[i], a.back());
        a.pop_back();
        std::swap(b[i], b.back());
        b.pop_back();
    }
    else
        ++i;
}
If you have to maintain the order, you can use std::remove_if. See this answer for how to get the index of the dereferenced element inside the remove predicate. Note that b has to be filtered first, while a still holds the alive() flags at the original indices:
b.erase(remove_if(begin(b), end(b),
                  [&](const otherClass& d) { return !a[&d - &*begin(b)].alive(); }),
        end(b));
a.erase(remove_if(begin(a), end(a),
                  [](const myClass& d) { return !d.alive(); }),
        end(a));
The reason it's slow is probably the O(n^2) complexity. Why not use a list instead? Making a pair of a and b is a good idea too.
A quick win would be to run the loop backwards: i.e. start at the end of the vector. This tends to minimise the number of backward shifts due to element removal.
Another approach would be to consider std::vector<std::unique_ptr<myClass>> etc.: then you'll be essentially moving pointers rather than values.
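A minimal sketch of the backward loop over both vectors, assuming the same alive() condition as in the question:
// Walk from the back; erasing at index i never disturbs the not-yet-visited
// elements at indices < i, and erases near the tail shift fewer elements.
for (size_t i = a.size(); i-- > 0; )
{
    if (!a[i].alive())
    {
        a.erase(a.begin() + i);
        b.erase(b.begin() + i);
    }
}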
I propose you create 2 new vectors, reserve memory, and swap the vectors' contents at the end.
vector<myClass> a = ...;
vector<otherClass> b = ...;

vector<myClass> new_a;
vector<otherClass> new_b;  // note: otherClass, matching b
new_a.reserve(a.size());
new_b.reserve(b.size());

assert(a.size() == b.size());

for (size_t i = 0; i < a.size(); i++)
{
    if (a[i].alive())
    {
        new_a.push_back(a[i]);
        new_b.push_back(b[i]);
    }
}

swap(a, new_a);
swap(b, new_b);
It consumes more memory, but it should be fast.
Erasing from the middle of a vector is slow because everything after the deletion point has to be shuffled back, so consider using another container that makes erasing quicker. It depends on your use case: will you be iterating often? Does the data need to stay in order? If you aren't iterating often, consider a list. If you need to maintain order, consider a set. If you iterate often and need to maintain order, then depending on the number of elements it may be quicker to push back all alive elements into a new vector and point a/b at that instead.
Also, since the data is intrinsically linked, it seems to make sense to have just one vector whose elements hold a and b together in a pair or a small struct, as sketched below.
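A sketch of that combined layout, assuming alive() is const (a small named struct would read even better than a pair):
std::vector<std::pair<myClass, otherClass>> ab = ...;
// One remove_if keeps both halves of each record in sync automatically.
ab.erase(std::remove_if(ab.begin(), ab.end(),
                        [](const auto& p) { return !p.first.alive(); }),
         ab.end());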
For performance reasons, use a
vector<pair<myClass, otherClass>>
as #Basheba says, and keep it sorted with std::sort using a custom comparison predicate. Then do not enumerate from 0 to n: since the vector is sorted, use std::lower_bound to find elements instead. Insert elements the way CashCow describes in "how do you insert the value in a sorted vector?"
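A sketch of the sorted-insert part; the ordering key is an assumption here, since the question doesn't say what the pairs would be sorted by:
// 'cmp' is whatever predicate the vector is kept sorted by.
auto cmp = [](const auto& lhs, const auto& rhs) {
    return lhs.first.key() < rhs.first.key();  // hypothetical key accessor
};
// Binary-search for the insertion point, then insert there to keep order.
auto pos = std::lower_bound(ab.begin(), ab.end(), item, cmp);
ab.insert(pos, item);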
I had a similar problem where I had two vectors:
std::vector<Eigen::Vector3d> points;
std::vector<Eigen::Vector3d> colors;
for 3D point clouds in Open3D. After removing the floor, I wanted to delete all points, and their colors, if the point's z coordinate was greater than 0.05. I ended up overwriting the points based on the index and resizing the vectors afterward.
bool invert = true;
std::vector<bool> mask = std::vector<bool>(points.size(), invert);
size_t pos = 0;
// Mark the points to drop (mask stays true for the ones we keep).
for (auto& point : points) {
    if (point(2) < CONSTANTS::FLOOR_HEIGHT) {
        mask.at(pos) = false;
    }
    ++pos;
}

// Compact both vectors in one pass: copy each kept element into the
// next free slot, then shrink to the number of kept elements.
size_t counter = 0;
for (size_t i = 0; i < points.size(); i++) {
    if (mask[i]) {
        points.at(counter) = points.at(i);
        colors.at(counter) = colors.at(i);
        ++counter;
    }
}
points.resize(counter);
colors.resize(counter);
This maintains order and, at least in my case, worked almost twice as fast as the remove_if method from the accepted answer:
for 921600 points the runtimes were:
33 ms for the accepted answer
17 ms for this approach.

How to iterate through a list while adding items to it

I have a list of line segments (a std::vector<std::pair<int, int>>) that I'd like to iterate through and subdivide. The algorithm would be, in pseudocode:
for segment in vectorOfSegments:
    firstPoint = segment.first;
    secondPoint = segment.second;
    newMidPoint = (firstPoint + secondPoint) / 2.0
    vectorOfSegments.remove(segment);
    vectorOfSegments.push_back(std::make_pair(firstPoint, newMidPoint));
    vectorOfSegments.push_back(std::make_pair(newMidPoint, secondPoint));
The issue that I'm running into is how I can push_back new elements (and remove the old elements) without iterating over this list forever.
It seems like the best approach may be to make a copy of this vector first, and use the copy as a reference, clear() the original vector, and then push_back the new elements to the recently emptied vector.
Is there a better approach to this?
It seems like the best approach may be to make a copy of this vector first, and use the copy as a reference, clear() the original vector, and then push_back the new elements to the recently emptied vector.
Almost. You don't need to copy-and-clear; move instead!
// Move data from `vectorOfSegments` into new vector `original`.
// This is an O(1) operation that more than likely just swaps
// two pointers.
std::vector<std::pair<int, int>> original{std::move(vectorOfSegments)};
// Original vector is now in "a valid but unspecified state".
// Let's run `clear()` to get it into a specified state, BUT
// all its elements have already been moved! So this should be
// extremely cheap if not a no-op.
vectorOfSegments.clear();
// We expect twice as many elements to be added to `vectorOfSegments`
// as it had before. Let's reserve some space for them to get
// optimal behaviour.
vectorOfSegments.reserve(original.size() * 2);
// Now iterate over `original`, adding to `vectorOfSegments`...
Don't remove elements while you insert new segments. Then, when finished with inserting, you can remove the originals:
int len = vectorOfSegments.size();
for (int i = 0; i < len; i++)
{
    std::pair<int, int>& segment = vectorOfSegments[i];
    int firstPoint = segment.first;
    int secondPoint = segment.second;
    int newMidPoint = (firstPoint + secondPoint) / 2;
    vectorOfSegments.push_back(std::make_pair(firstPoint, newMidPoint));
    vectorOfSegments.push_back(std::make_pair(newMidPoint, secondPoint));
}
vectorOfSegments.erase(vectorOfSegments.begin(), vectorOfSegments.begin() + len);
Or, if you want to replace one segment by two new segments in one pass, you could use iterators like here:
for (auto it = vectorOfSegments.begin(); it != vectorOfSegments.end(); ++it)
{
    std::pair<int, int>& segment = *it;
    int firstPoint = segment.first;
    int secondPoint = segment.second;
    int newMidPoint = (firstPoint + secondPoint) / 2;
    it = vectorOfSegments.erase(it);
    it = vectorOfSegments.insert(it, std::make_pair(firstPoint, newMidPoint));
    it = vectorOfSegments.insert(it + 1, std::make_pair(newMidPoint, secondPoint));
}
As Lightning Racis in Orbit pointed out, you should do a reserve before either of these approaches. In the first case do reserve(vectorOfSegments.size()*3), in the latter reserve(vectorOfSegments.size()*2+1).
This is most easily solved by using an explicit index variable, like this:
for (size_t i = 0; i < segments.size(); i++) {
    ... // other code
    if (/* condition when to split segments */) {
        Point midpoint = ...;
        segments[i] = Segment(..., midpoint); // replace the segment by the first subsegment
        segments.emplace_back(Segment(midpoint, ...)); // add the second subsegment to the end of the vector
        i--; // reconsider the first subsegment
    }
}
Notes:
segments.size() is called in each iteration of the loop, so we really do reconsider all appended segments.
The explicit index means that the std::vector<> is free to reallocate in the emplace_back() call; there are no iterators/pointers/references that could become invalid.
I assumed that you don't care about the order of your vector, because you add the new segments to its end. If you do care, you might want to use a linked list to avoid the quadratic complexity of your algorithm, as insertion into / deletion from an std::vector<> has linear complexity. In my code I avoid insertion/deletion by replacing the old segment.
Another approach to retain order would be to ignore order at first and then reestablish it via sorting. Assuming a good sorting algorithm, that is O(n*log(n)), which is still better than the naive O(n^2) but worse than the O(n) of the linked-list approach.
If you don't want to reconsider the new segments, just use a constant size and omit the counter decrement:
size_t count = segments.size();
for (size_t i = 0; i < count; i++) {
    ... // other code
    if (/* condition when to split segments */) {
        Point midpoint = ...;
        segments[i] = Segment(..., midpoint); // replace the segment by the first subsegment
        segments.emplace_back(Segment(midpoint, ...)); // add the second subsegment to the end of the vector
    }
}

priority queue with limited space: looking for a good algorithm

This is not homework.
I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest values. This is a bit slow - O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any item that wouldn't fit, but I would still like to reduce the number of operations further.
I'm looking for a priority queue algorithm that matches the following requirements:
the queue can be implemented as an array of fixed size that _cannot_ grow; dynamic memory allocation during any queue operation is strictly forbidden.
Anything that doesn't fit into the array is discarded, but the queue keeps the smallest elements ever encountered.
O(log(N)) insertion time (i.e. adding an element to the queue should take at most O(log(N))).
(optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item will be discarded first, and I'll need it to reduce the number of operations).
Easy to implement/understand. Ideally something similar to binary search - once you understand it, you remember it forever.
Elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once. So technically it doesn't have to be a queue; I just need the N smallest values to be stored.
I initially thought about using binary heaps (they can be easily implemented via arrays), but apparently they don't behave well when the array can't grow any more. Linked lists and arrays would require extra time for moving things around. An stl priority queue grows and uses dynamic allocation (I may be wrong about that, though).
So, any other ideas?
--EDIT--
I'm not interested in an STL implementation. An STL implementation (suggested by a few people) works a bit slower than the currently used linear array due to the high number of function calls.
I'm interested in priority queue algorithms, not implementations.
Array-based heaps seem ideal for your purpose. I am not sure why you rejected them.
You use a max-heap.
Say you have an N element heap (implemented as an array) which contains the N smallest elements seen so far.
When an element comes in you check against the max (O(1) time), and reject if it is greater.
If the value coming in is lower, you modify the root to be the new value and sift-down this changed value - worst case O(log N) time.
The sift-down process is simple: starting at the root, at each step you exchange the value with its larger child until the max-heap property is restored.
So you will not have to do any deletes, which you probably would have to do if you used std::priority_queue. Depending on the implementation of std::priority_queue, those could cause memory allocation/deallocation.
So you can have the code as follows:
Allocated Array of size N.
Fill it up with the first N elements you see.
heapify (you should find this in standard textbooks; it uses sift-down). This is O(N).
Now any new element you get, you either reject it in O(1) time or insert by sifting-down in worst case O(logN) time.
On average, though, you probably will not have to sift the new value all the way down and might get a better-than-O(log N) average insert time (though I haven't tried proving it).
You only allocate size N array once and any insertion is done by exchanging elements of the array, so there is no dynamic memory allocation after that.
Check out the wiki page which has pseudo code for heapify and sift-down: http://en.wikipedia.org/wiki/Heapsort
Use std::priority_queue with the largest item at the head. For each new item, discard it if it is >= the head item, otherwise pop the head item and insert the new item.
Side note: Standard containers will only grow if you make them grow. As long as you remove one item before inserting a new item (after it reaches its maximum size, of course), this won't happen.
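A sketch of that scheme for keeping the N smallest values (the default std::priority_queue<int> is a max-heap, so top() is the largest value currently kept):
#include <cstddef>
#include <queue>

// Keep at most 'limit' values: the smallest seen so far.
void addCapped(std::priority_queue<int>& pq, std::size_t limit, int value)
{
    if (pq.size() < limit) {
        pq.push(value);
    } else if (value < pq.top()) {  // smaller than the largest kept value
        pq.pop();                   // drop the current largest...
        pq.push(value);             // ...and keep the new, smaller one
    }
    // Otherwise discard 'value': it cannot be among the 'limit' smallest.
}
Because the pop happens before the push, the underlying container never grows past 'limit' elements once it is full, so no reallocation occurs after the warm-up phase.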
Most priority queues I work with are based on linked lists. If you have a pre-determined number of priority levels, you can easily create a priority queue with O(1) insertion by having an array of linked lists - one linked list per priority level. Items of the same priority will of course degenerate into a FIFO, but that can be considered acceptable.
Adding and removal then becomes something like (your API may vary) ...
listItemAdd (&list[priLevel], &item); /* Add to tail */
pItem = listItemRemove (&list[priLevel]); /* Remove from head */
Getting the first item in the queue then becomes a problem of finding the highest-priority non-empty linked list. That may be O(N), but there are several tricks you can use to speed it up.
In your priority queue structure, keep a pointer or index or something to the linked list with the current highest priority. This would need to be updated each time an item is added or removed from the priority queue.
Use a bitmap to indicate which linked lists are non-empty. Combined with a find-most-significant-bit or find-least-significant-bit algorithm, you can usually test up to 32 lists at once (see the sketch below). Again, this would need to be updated on each add/remove.
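A sketch of those two tricks combined, assuming 32 fixed priority levels with level 0 as the highest; std::deque stands in for the linked lists, and std::countr_zero (C++20) plays the find-first-set role:
#include <array>
#include <bit>      // std::countr_zero (C++20)
#include <cstdint>
#include <deque>

struct PrioQueue32 {
    std::array<std::deque<int>, 32> lists;  // one FIFO per priority level
    std::uint32_t nonEmpty = 0;             // bit p set <=> lists[p] non-empty

    void add(int pri, int item) {           // O(1)
        lists[pri].push_back(item);         // add to tail
        nonEmpty |= (1u << pri);
    }

    int removeHighest() {                   // precondition: nonEmpty != 0
        int pri = std::countr_zero(nonEmpty);  // lowest set bit = best level
        int item = lists[pri].front();      // remove from head
        lists[pri].pop_front();
        if (lists[pri].empty())
            nonEmpty &= ~(1u << pri);
        return item;
    }
};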
Hope this helps.
If the number of priorities is small and fixed, you can use a ring buffer for each priority. That wastes space if the objects are big, but if their size is comparable to a pointer/index, then variants that store additional pointers inside the objects may increase the size of the array in the same way.
Or you can use a simple singly-linked list inside the array and store 2*M+1 pointers/indexes: one points to the first free node, and the other pairs point to the head and tail of each priority level. In that case you'll have to compare on average O(M) priorities before taking out the next node in O(1), and insertion takes O(1).
If you construct an STL priority queue at the maximum size (perhaps from a vector initialized with placeholders), and then check the size before inserting (removing an item beforehand if necessary), you'll never have dynamic allocation during insert operations. The STL implementation is quite efficient.
See Matters Computational, page 158. The implementation there is quite good, and you can even tweak it a little without making it less readable. For example, with 1-based heap indexing, once you have computed the left child:
int left = 2*i;
you can compute the right child like so:
int right = left + 1;
Found a solution ("difference" means "priority" in the code, and maxRememberedResults is 255, though it could be any 2^n - 1):
template <typename T> inline void swap(T& a, T& b){
    T c = a;
    a = b;
    b = c;
}

struct MinDifferenceArray{
    enum{ maxSize = maxRememberedResults };
    int size;
    DifferenceData data[maxSize];

    void add(const DifferenceData& val){
        if (size >= maxSize){
            // Full: reject anything not smaller than the current max (the root).
            if (data[0].difference <= val.difference)
                return;
            // Replace the root and sift down to restore the max-heap property.
            // Because maxSize is 2^n - 1, a full heap is a complete tree, so
            // every non-leaf node has both children and data[next+1] is safe.
            data[0] = val;
            for (int i = 0; (2*i + 1) < maxSize; ){
                int next = 2*i + 1;
                if (data[next].difference < data[next+1].difference)
                    next++;
                if (data[i].difference < data[next].difference)
                    swap(data[i], data[next]);
                else
                    break;
                i = next;
            }
        }
        else{
            // Not full yet: append at the end and sift up.
            data[size++] = val;
            for (int i = size - 1; i > 0; ){
                int parent = (i - 1)/2;
                if (data[parent].difference < data[i].difference){
                    swap(data[parent], data[i]);
                    i = parent;
                }
                else
                    break;
            }
        }
    }

    void clear(){
        size = 0;
    }

    MinDifferenceArray()
        : size(0){
    }
};
Build a max-based heap (root is largest).
Until it is full, fill it up normally.
When it is full, for every new element:
check if the new element is smaller than the root;
if it is larger than or equal to the root, reject it;
otherwise, replace the root with the new element and perform a normal heap "sift-down".
And we get O(log(N)) insertion as the worst case.
It is the same solution as the one provided by the user with the nickname "Moron".
Thanks to everyone for replies.
P.S. Apparently programming without sleeping enough wasn't a good idea.
It's better to implement your own class using std::array and heap algorithms.
#include <algorithm>  // std::min_element, std::push_heap, std::pop_heap
#include <array>

template<class T, int fixed_size = 5>
class fixed_size_arr_pqueue_v2
{
    std::array<T, fixed_size> _data;
    int _size = 0;

    int parent(int i)
    {
        return (i - 1)/2;
    }

    void heapify(int i, bool downward = false)
    {
        int l = 2*i + 1;
        int r = 2*i + 2;
        int largest = 0;
        if (l < size() && _data[l] > _data[i])
            largest = l;
        else
            largest = i;
        if (r < size() && _data[r] > _data[largest])
            largest = r;
        if (largest != i)
        {
            std::swap(_data[largest], _data[i]);
            if (!downward)
                heapify(parent(i));
            else
                heapify(largest, true);
        }
    }

public:
    void push(T& d)
    {
        if (_size == fixed_size)
        {
            // min elements in a max-heap lie at the leaves only
            auto minItr = std::min_element(begin(_data) + _size/2, end(_data));
            auto minPos {minItr - _data.begin()};
            auto min {*minItr};
            if (d > min)
            {
                _data.at(minPos) = d;
                if (_data[parent(minPos)] > d)
                {
                    // unlikely to happen in our case, as this position is a leaf
                    heapify(minPos, true);
                }
                else
                    heapify(parent(minPos));
            }
            return;
        }
        _data.at(_size++) = d;
        std::push_heap(_data.begin(), _data.begin() + _size);
    }

    T pop()
    {
        T d = _data.front();
        std::pop_heap(_data.begin(), _data.begin() + _size);
        _size--;
        return d;
    }

    T top()
    {
        return _data.front();
    }

    int size() const
    {
        return _size;
    }
};