I have a custom struct:
struct mydata
{
double distance;
string label;
};
I will generate lots of mydata items in a loop, and I want to keep the top-K minimum-distance items, with the constraint that their labels must be unique.
Right now I am using a max heap to solve this problem. My algorithm looks like this:
// get topK items with unique label
for i = 1:N
{
mydata item = generate_a_data();
if (max_heap.size() < K)
{
insert_to_max_heap(item);
}
else // max_heap is full
{
if (item.distance < max_heap.top().distance)
{
insert_to_max_heap(item);
}
}
}
The problem is in insert_to_max_heap(): because of the unique-label constraint, I cannot simply replace the top node of the max heap with the new item. I have to iterate over all elements in the heap to check whether the same label already exists; if a node with the same label exists, I just update that node's distance. Pseudocode:
insert_to_max_heap(item)
{
for_each node in max_heap
{
if (node.label == item.label)
{
if (node.distance > item.distance)
{
// update min distance
node.distance = item.distance;
}
return;
}
}
// no identical label, replace the top node
max_heap.top = item;
sort_max_heap();
}
Is there a more efficient way to improve my algorithm, or a new idea to solve the problem? The algorithm should be as fast as possible, and I don't have enough space to store all the items generated in the loop.
I think you need to maintain a hash map in which the key is the label and the value is the position (or a pointer) of the struct in your max heap.
When a new mydata is generated, first check whether a struct with the same label already exists in the hash map. If it does, use the distance to decide whether to substitute it (after substituting, sift it down in the heap if necessary). Otherwise, decide whether to insert the new mydata into your heap, and don't forget to update your hash map at the same time.
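A minimal sketch of that idea, with std::set standing in for a hand-rolled indexed max heap (the set's largest element plays the role of the heap top); TopKUnique and offer() are hypothetical names, while mydata's fields come from the question:
#include <cstddef>
#include <iterator>
#include <set>
#include <string>
#include <unordered_map>
#include <utility>

struct TopKUnique {
    std::size_t K;
    std::set<std::pair<double, std::string>> byDistance;  // ordered stand-in for the max heap
    std::unordered_map<std::string, double> bestByLabel;  // label -> best distance, O(1) lookup

    void offer(double distance, const std::string& label) {
        auto it = bestByLabel.find(label);
        if (it != bestByLabel.end()) {             // label already tracked
            if (distance < it->second) {           // better distance: update in place
                byDistance.erase({it->second, label});
                byDistance.insert({distance, label});
                it->second = distance;
            }
            return;
        }
        if (byDistance.size() < K) {               // still room: just insert
            byDistance.insert({distance, label});
            bestByLabel[label] = distance;
            return;
        }
        auto worst = std::prev(byDistance.end());  // largest distance, the "heap top"
        if (distance < worst->first) {             // evict the worst entry
            bestByLabel.erase(worst->second);
            byDistance.erase(worst);
            byDistance.insert({distance, label});
            bestByLabel[label] = distance;
        }
    }
};
Each generated item costs O(log K), and storage stays at K entries, which matches the no-room-for-all-items constraint.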
The task is to implement an O(1) Least Recently Used (LRU) cache.
Here is the question on leetcode:
https://leetcode.com/problems/lru-cache/
Here is my solution. While it is O(1), it is not the fastest implementation. Could you give some feedback and maybe ideas on how I can optimize this? Thank you!
#include <unordered_map>
#include <list>
using namespace std; // the code below uses unqualified unordered_map, list, and pair
class LRUCache {
// umap<key,<value,listiterator>>
// store the key,value, position in list(iterator) where push_back occurred
private:
unordered_map<int,pair<int,list<int>::iterator>> umap;
list<int> klist;
int cap = -1;
public:
LRUCache(int capacity):cap(capacity){
}
int get(int key) {
// if the key exists in the unordered map
if(umap.count(key)){
// remove it from the old position
klist.erase(umap[key].second);
klist.push_back(key);
list<int>::iterator key_loc = klist.end();
umap[key].second = --key_loc;
return umap[key].first;
}
return -1;
}
void put(int key, int value) {
// if key already exists delete it from the the umap and klist
if(umap.count(key)){
klist.erase(umap[key].second);
umap.erase(key);
}
// if the unordered map is at max capacity
if(umap.size() == cap){
umap.erase(klist.front());
klist.pop_front();
}
// finally update klist and umap
klist.push_back(key);
list<int>::iterator key_loc = klist.end();
umap[key].first = value;
umap[key].second = --key_loc;
return;
}
};
/**
* Your LRUCache object will be instantiated and called as such:
* LRUCache* obj = new LRUCache(capacity);
* int param_1 = obj->get(key);
* obj->put(key,value);
*/
Here are some optimizations that might help:
Take this segment of code from the get function:
if(umap.count(key)){
// remove it from the old position
klist.erase(umap[key].second);
The above will look up key in the map twice: once in the count method to see whether it exists, and again in the [] operator to fetch its value. Save a few cycles by doing this:
auto itor = umap.find(key);
if (itor != umap.end()) {
// remove it from the old position
klist.erase(itor->second);
In the put function, you do this:
if(umap.count(key)){
klist.erase(umap[key].second);
umap.erase(key);
}
Same as in get: you can avoid the redundant search through umap. Additionally, there's no reason to invoke umap.erase only to add that same key back into the map a few lines later.
Further, this is also inefficient:
umap[key].first = value;
umap[key].second = --key_loc;
Similar to the above, this redundantly looks up key twice in the map. The first assignment statement finds that the key is not in the map, so it default-constructs a new value pair. The second assignment does yet another lookup in the map.
Let's restructure your put function as follows:
void put(int key, int value) {
auto itor = umap.find(key);
bool reinsert = (itor != umap.end());
// if key already exists delete it from the klist only
if (reinsert) {
klist.erase(itor->second.second); // reuse the iterator, no second lookup
}
else {
// if the unordered map is at max capacity
if (umap.size() == cap) {
umap.erase(klist.front());
klist.pop_front();
}
}
// finally update klist and umap
klist.push_back(key);
list<int>::iterator key_loc = klist.end();
auto endOfList = --key_loc;
if (reinsert) {
itor->second.first = value;
itor->second.second = endOfList;
}
else {
const pair<int, list<int>::iterator> itempair = { value, endOfList };
umap.emplace(key, itempair);
}
}
That's about as far as you can go with this structure, with one more refinement: you don't actually have to erase a node and push it back to move it, because std::list::splice can relink an existing node to the front (or back) in O(1), with no memory allocation and without invalidating iterators. Failing that, you could use your own doubly-linked list type and fix up the prev/next pointers yourself.
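For instance, here is a sketch of get() from the code above rewritten with splice, assuming the umap and klist members as declared in the question:
int get(int key) {
    auto it = umap.find(key);
    if (it == umap.end()) return -1;
    // Relink the node holding this key to the back of klist in O(1).
    // splice preserves iterator validity, so it->second.second still
    // points at the same node in its new position; no update needed.
    klist.splice(klist.end(), klist, it->second.second);
    return it->second.first;
}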
Gonna take on selbie's point here:
Every instance of if(umap.count(key)) searches for the key, and every umap[key] performs an equivalent search. You can avoid the double search by obtaining an iterator to the entry with a single std::unordered_map::find() operation.
selbie already gave the code for int get()'s search, here's the one for void put()'s one:
auto it = umap.find(key);
if (it != umap.end())
{
klist.erase(it->second.second); // it->second is the pair; .second is the list iterator
umap.erase(it); // erase by iterator, avoiding yet another lookup
}
Sidecase:
Not applicable for your code as of now due to lack of input and output work, but in case you use std::cin and std::cout, you can disable the synchronization between C and C++ streams, and untie cin from cout as an optimization: (they are tied together by default)
// If you're using cin/cout for I/O
ios::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
I'm reading a book about AI and am currently learning about pathfinding (currently doing Dijkstra's algorithm).
In the sample code the author uses something he calls an IndexedPriorityQueue, implemented as a two-way heap. I couldn't find any information on what a two-way heap is on Google.
This search algorithm is implemented using an indexed priority queue.
A priority queue, or PQ for short, is a queue that keeps its elements
sorted in order of priority (no surprises there then). This type of
data structure can be utilized to store the destination nodes of the
edges on the search frontier, in order of increasing distance (cost)
from the source node. This method guarantees that the node at the
front of the PQ will be the node not already on the SPT that is
closest to the source node.
This is how it gets implemented:
#include <cassert>
#include <vector>

//----------------------- IndexedPriorityQLow ---------------------------
//
// Priority queue based on an index into a set of keys. The queue is
// maintained as a 2-way heap.
//
// The priority in this implementation is the lowest valued key
//------------------------------------------------------------------------
template<class KeyType>
class IndexedPriorityQLow
{
private:
std::vector<KeyType>& m_vecKeys;
std::vector<int> m_Heap;
std::vector<int> m_invHeap;
int m_iSize,
m_iMaxSize;
void Swap(int a, int b)
{
int temp = m_Heap[a]; m_Heap[a] = m_Heap[b]; m_Heap[b] = temp;
//change the handles too
m_invHeap[m_Heap[a]] = a; m_invHeap[m_Heap[b]] = b;
}
void ReorderUpwards(int nd)
{
//move up the heap swapping the elements until the heap is ordered
while ( (nd>1) && (m_vecKeys[m_Heap[nd/2]] > m_vecKeys[m_Heap[nd]]) )
{
Swap(nd/2, nd);
nd /= 2;
}
}
void ReorderDownwards(int nd, int HeapSize)
{
//move down the heap from node nd swapping the elements until
//the heap is reordered
while (2*nd <= HeapSize)
{
int child = 2 * nd;
//set child to smaller of nd's two children
if ((child < HeapSize) && (m_vecKeys[m_Heap[child]] > m_vecKeys[m_Heap[child+1]]))
{
++child;
}
//if this nd is larger than its child, swap
if (m_vecKeys[m_Heap[nd]] > m_vecKeys[m_Heap[child]])
{
Swap(child, nd);
//move the current node down the tree
nd = child;
}
else
{
break;
}
}
}
public:
//you must pass the constructor a reference to the std::vector the PQ
//will be indexing into and the maximum size of the queue.
IndexedPriorityQLow(std::vector<KeyType>& keys,
int MaxSize):m_vecKeys(keys),
m_iMaxSize(MaxSize),
m_iSize(0)
{
m_Heap.assign(MaxSize+1, 0);
m_invHeap.assign(MaxSize+1, 0);
}
bool empty()const{return (m_iSize==0);}
//to insert an item into the queue it gets added to the end of the heap
//and then the heap is reordered from the bottom up.
void insert(const int idx)
{
assert (m_iSize+1 <= m_iMaxSize);
++m_iSize;
m_Heap[m_iSize] = idx;
m_invHeap[idx] = m_iSize;
ReorderUpwards(m_iSize);
}
//to get the min item the first element is exchanged with the lowest
//in the heap and then the heap is reordered from the top down.
int Pop()
{
Swap(1, m_iSize);
ReorderDownwards(1, m_iSize-1);
return m_Heap[m_iSize--];
}
//if the value of one of the client key's changes then call this with
//the key's index to adjust the queue accordingly
void ChangePriority(const int idx)
{
ReorderUpwards(m_invHeap[idx]);
}
};
Can anyone give me more information on what a 2-way heap is?
"Two-way heap" simply refers to the standard heap data structure. This code shows a very common way of implementing it, namely by flattening the tree structure of the heap into an array, in such a way that the index of a node's parent is always half of the index of the node (rounded down).
It is implemented as a two-way heap in the sense that index 0 of the heap is omitted to make the parent/child calculation easier, while the key vector begins at index 0. The inverted heap stores, for each key index, its position in the heap, and the heap stores indexes into the key vector; so to get the appropriate value from keyVector you would do something like keyVector[heap[invHeap[itemIndex]]]. The rest of the code is just a standard binary heap implementation.
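A hypothetical usage sketch, pairing the class above with a cost vector the way Dijkstra's algorithm would (the node indices and costs here are made up):
#include <iostream>
#include <vector>

int main()
{
    std::vector<double> costs = {0.0, 7.5, 3.2, 9.1};   // one cost per graph node
    IndexedPriorityQLow<double> pq(costs, static_cast<int>(costs.size()));

    for (int i = 0; i < static_cast<int>(costs.size()); ++i)
        pq.insert(i);              // the heap orders node indices by costs[i]

    costs[3] = 1.0;                // an edge relaxation found a cheaper path to node 3
    pq.ChangePriority(3);          // re-sift node 3 upwards to its new place

    while (!pq.empty())
        std::cout << pq.Pop() << '\n';  // prints 0, 3, 2, 1 (increasing cost)
    return 0;
}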
I'm trying to implement an efficient hash table where collisions are solved using linear probing with step. This function has to be as efficient as possible. No needless = or == operations. My code is working, but not efficient. This efficiency is evaluated by an internal company system. It needs to be better.
There are two classes representing a key/value pair: CKey and CValue. These classes each have a standard constructor, copy constructor, and overridden operators = and ==. Both of them contain a getValue() method returning the value of an internal private variable. There is also the method getHashLPS() inside CKey, which returns the hashed position in the hash table.
int getHashLPS(int tableSize, int step, int collision) const
{
return ((value + (collision * step)) % tableSize);
}
The hash table:
class CTable
{
struct CItem {
CKey key;
CValue value;
};
CItem **table;
int tableSize; // number of buckets
int step; // linear probing step
int valueCounter;
};
Methods
// return collisions count
int insert(const CKey& key, const CValue& val)
{
int position, collision = 0;
while(true)
{
position = key.getHashLPS(tableSize, step, collision); // get position
if(table[position] == NULL) // free space
{
table[position] = new CItem; // save item
table[position]->key = CKey(key);
table[position]->value = CValue(val);
valueCounter++;
break;
}
if(table[position]->key == key) // same keys => overwrite value
{
table[position]->value = val;
break;
}
collision++; // current positions is full, try another
if(collision >= tableSize) // full table
return -1;
}
return collision;
}
// return collisions count
int remove(const CKey& key)
{
int position, collision = 0;
while(true)
{
position = key.getHashLPS(tableSize, step, collision);
if(table[position] == NULL) // free position - the key isn't in the table, or is unreachable because of wrong rehashing
return -1;
if(table[position]->key == key) // found
{
delete table[position]; // free the item before clearing the slot
table[position] = NULL; // remove it
valueCounter--;
int newPosition, collisionRehash = 0;
for(int i = 0; i < tableSize; i++, collisionRehash = 0) // rehash table
{
if(table[i] != NULL) // if there is a item, rehash it
{
while(true)
{
newPosition = table[i]->key.getHashLPS(tableSize, step, collisionRehash++);
if(newPosition == i) // same position like before
break;
if(table[newPosition] == NULL) // new position and there is a free space
{
table[newPosition] = table[i]; // copy from old, insert to new
table[i] = NULL; // remove from old
break;
}
}
}
}
break;
}
collision++; // the current position holds some other item, try the next one
if(collision >= valueCounter) // item isn't in table
return -1;
}
return collision;
}
Both functions return collisions count (for my own purpose) and they return -1 when the searched CKey isn't in the table or the table is full.
Tombstones are forbidden. Rehashing after removing is a must.
The biggest change for improvement I see is in the removal function. You shouldn't need to rehash the entire table. You only need to rehash starting from the removal point until you reach an empty bucket. Also, when re-hashing, remove and store all of the items that need to be re-hashed before doing the re-hashing so that they don't get in the way when placing them back in.
Another thing: with all hash tables, the quickest way to increase efficiency is to decrease the load factor (the ratio of elements to backing-array size). This reduces the number of collisions, which means less iterating looking for an open spot and less rehashing on removal. In the limit, as the load factor approaches 0, the collision probability approaches 0 and the table behaves more and more like an array, though of course memory use goes up.
Update
You only need to rehash starting from the removal point and moving forward by your step size until you reach a null. The reason for this is that those are the only objects that could possibly change their location due to the removal. All other objects would wind up hashing to the exact same place, since they don't belong to the same "collision run".
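A minimal sketch of that removal, under the question's assumptions (the fixed getHashLPS above, a probe step shared by all keys, and the CTable members as declared):
// return collisions count
int remove(const CKey& key)
{
    int position, collision = 0;
    while (true)                                   // find the key, as before
    {
        position = key.getHashLPS(tableSize, step, collision);
        if (table[position] == NULL) return -1;    // key isn't in the table
        if (table[position]->key == key) break;    // found it
        if (++collision >= tableSize) return -1;
    }
    delete table[position];
    table[position] = NULL;
    valueCounter--;

    // Only the collision run behind the removed slot can move. Lift each
    // item out first, then re-insert it, so it cannot block itself.
    int probe = (position + step) % tableSize;
    while (table[probe] != NULL)
    {
        CItem* item = table[probe];
        table[probe] = NULL;
        valueCounter--;                            // insert() re-counts it
        insert(item->key, item->value);
        delete item;                               // insert() made its own copy
        probe = (probe + step) % tableSize;
    }
    return collision;
}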
A possible improvement would be to pre-allocate an array of CItems; that would avoid the news and deletes, and the table member would change to "CItem *table;" (see the sketch after the next paragraph).
But again: what you want is basically a smooth ride in a car with square wheels.
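A minimal sketch of that pre-allocated layout, assuming CKey and CValue are default-constructible (the question says they have standard constructors); a used flag replaces the NULL-pointer test:
class CTable
{
    struct CItem {
        CKey key;
        CValue value;
        bool used; // replaces the table[position] == NULL check
    };
    CItem *table;      // one flat allocation, no per-item new/delete
    int tableSize;
    int step;
    int valueCounter;
public:
    explicit CTable(int size, int probeStep)
        : table(new CItem[size]), tableSize(size),
          step(probeStep), valueCounter(0)
    {
        for (int i = 0; i < tableSize; i++) table[i].used = false;
    }
    ~CTable() { delete[] table; }
};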
My goal: I have multiple lists of elements available, and I want to be able to store all of these elements in a resultant list in sorted order.
Some of the ideas that come to mind are:
a) Keep the result as a set (std::set), but then the underlying balanced tree needs to be rebalanced every now and then.
b) Store all the elements in a list and sort the list at the end.
But I thought: why not store them in sorted order as and when we add the items to the resultant list?
Here is my function that does the job of maintaining the results in sorted order. Is there a more efficient way to do the same?
void findItemToInsertAt(std::list<int>& dataSet, int itemToInsert, std::list<int>::iterator& location)
{
std::list<int>::iterator fromBegin = dataSet.begin();
std::list<int>::iterator fromEnd = dataSet.end() ;
// Have two pointers namely end and begin
if ( !dataSet.empty() )
--fromEnd;
// Set the location to the beginning, so that if the dataset is empty, it can return the appropriate value
location = fromBegin;
while ( fromBegin != dataSet.end() )
{
// If the left pointer points to lesser value, move to the next element
if ( *fromBegin < itemToInsert )
{
++fromBegin;
// If the end is greater than the item to be inserted then move to the previous element
if ( *fromEnd > itemToInsert )
{
--fromEnd;
}
else
{
// We move only if the element to be inserted is greater than the end, so that end points to the
// right location
if ( *fromEnd < itemToInsert )
{
location = ++fromEnd;
}
else
{
location = fromEnd;
}
break;
}
}
else
{
location = fromBegin;
break;
}
}
}
And, here is the caller of the function
void storeListToResults(const std::list<int>& dataset, std::list<int>& resultset)
{
std::list<int>::const_iterator curloc;
std::list<int>::iterator insertAt;
// For each item in the data set, find the location to be inserted into
// and insert the item.
for (curloc = dataset.begin(); curloc != dataset.end() ; ++curloc)
{
// Find the iterator to be inserted at
findItemToInsertAt(resultset,*curloc,insertAt);
// If we have reached the end, then the element to be inserted is at the end
if ( insertAt == resultset.end() )
{
resultset.push_back(*curloc);
}
else if ( *insertAt != *curloc ) // If the elements do not exist already, then insert it.
{
resultset.insert(insertAt,*curloc);
}
}
}
At a glance, your code looks like it's doing a linear search of the list in order to find the place to insert the item. While it's true that std::set will have to balance its tree (I think it's a Red-Black Tree) in order to maintain efficiency, chances are it'll do so much more efficiently than what you're proposing.
Answering the question asked:
Is there an efficient way to do the same?
Yes. Use std::set.
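For instance, the caller above collapses to this sketch, which keeps the original's skip-duplicates behaviour since std::set rejects duplicates on its own:
#include <list>
#include <set>

void storeListToResults(const std::list<int>& dataset, std::set<int>& resultset)
{
    // O(log n) per insert; duplicates are silently ignored,
    // and iterating resultset yields sorted order.
    for (int value : dataset)
        resultset.insert(value);
}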
I would sort the individual lists and then use the STL's list::merge to create the result list. Then, if the list is fairly big, it could pay to transfer the result to a vector.
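A sketch of that approach (mergeListsSorted is a hypothetical name; unique() is added to match the duplicate-skipping in the original code):
#include <list>

void mergeListsSorted(std::list<std::list<int>>& inputs, std::list<int>& result)
{
    for (std::list<int>& l : inputs)
    {
        l.sort();        // O(n log n), relinks nodes rather than copying elements
        result.merge(l); // splices the sorted nodes in; l is left empty
    }
    result.unique();     // drop adjacent duplicates
}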
Is there a way to find a nonexisting key in a map?
I am using std::map<int,myclass>, and I want to automatically generate a key for new items. Items may be deleted from the map in different order from their insertion.
The myclass items may, or may not be identical, so they can not serve as a key by themself.
During the run time of the program, there is no limit to the number of items that are generated and deleted, so I can not use a counter as a key.
An alternative data structure that have the same functionality and performance will do.
Edit
I trying to build a container for my items - such that I can delete/modify items according to their keys, and I can iterate over the items. The key value itself means nothing to me, however, other objects will store those keys for their internal usage.
The reason I can not use incremental counter, is that during the life-span of the program they may be more than 2^32 (or theoretically 2^64) items, however item 0 may theoretically still exist even after all other items are deleted.
It would be nice to be able to ask std::map for the lowest-valued unused key, so I could use it for new items, instead of using a vector or some other external storage for unused keys.
I'd suggest a combination of counter and queue. When you delete an item from the map, add its key to the queue. The queue then keeps track of the keys that have been deleted from the map so that they can be used again. To get a new key, you first check if the queue is empty. If it isn't, pop the top index off and use it, otherwise use the counter to get the next available key.
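A minimal sketch of that scheme (KeyAllocator is a hypothetical name):
#include <queue>

class KeyAllocator {
    int counter = 0;          // next never-used key
    std::queue<int> recycled; // keys freed when items were erased
public:
    int allocate() {
        if (!recycled.empty()) {
            int key = recycled.front(); // reuse a freed key first
            recycled.pop();
            return key;
        }
        return counter++;               // otherwise mint a fresh one
    }
    void release(int key) { recycled.push(key); }
};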
Let me see if I understand. What you want to do is:
look for a key;
if not present, insert an element;
items may be deleted.
Keep a counter (wait, wait) and a vector. The vector will keep the ids of the deleted items.
When you are about to insert the new element, look for a key in the vector. If the vector is not empty, remove a key and use it. If it is empty, take one from the counter (counter++).
However, if you never remove items from the map, you are just stuck with a counter.
Alternative:
How about using the memory address of the element as a key?
I would say that in the general case, when the key can have any type allowed by map, this is not possible. Even the ability to say whether some unused key exists requires some knowledge about the type.
If we consider the situation with int, you can store a std::set of contiguous segments of unused keys (since these segments do not overlap, natural ordering can be used: simply compare their starting points). When a new key is needed, you take the first segment, cut off its first index, and place the remainder back in the set (if the remainder is not empty). When some key is released, you check whether there are neighbouring segments in the set (thanks to the set's nature this is possible with O(log n) complexity) and merge them if needed; otherwise you simply put the [n,n] segment into the set.
This way you definitely have the same order of time complexity and memory consumption as the map itself, independently of the request history (because the number of segments cannot exceed map.size()+1).
something like this:
#include <iterator>
#include <limits>
#include <set>
#include <stdexcept>
#include <utility>

class TKeyManager
{
public:
    TKeyManager()
    {
        FreeKeys.insert(
            std::make_pair(
                std::numeric_limits<int>::min(),
                std::numeric_limits<int>::max()));
    }
    int AllocateKey()
    {
        if(FreeKeys.empty())
            throw std::runtime_error("no free keys left");
        const std::pair<int,int> freeSegment = *FreeKeys.begin();
        FreeKeys.erase(FreeKeys.begin()); // cut the first index off this segment
        if(freeSegment.second > freeSegment.first)
            FreeKeys.insert(std::make_pair(freeSegment.first + 1, freeSegment.second));
        return freeSegment.first;
    }
    void ReleaseKey(int key)
    {
        auto position = FreeKeys.insert(std::make_pair(key, key)).first;
        if(position != FreeKeys.begin())
        {   // try to merge with the left neighbour; set elements are
            // immutable, so merging means erase-and-reinsert
            auto left = std::prev(position);
            if(left->second + 1 == key)
            {
                std::pair<int,int> merged(left->first, key);
                FreeKeys.erase(left);
                FreeKeys.erase(position);
                position = FreeKeys.insert(merged).first;
            }
        }
        auto right = std::next(position);
        if(right != FreeKeys.end() && right->first == position->second + 1)
        {   // try to merge with the right neighbour
            std::pair<int,int> merged(position->first, right->second);
            FreeKeys.erase(position);
            FreeKeys.erase(right);
            FreeKeys.insert(merged);
        }
    }
private:
    std::set<std::pair<int,int>> FreeKeys;
};
Is there a way to find a nonexisting key in a map?
I'm not sure what you mean here. How can you find something that doesn't exist? Do you mean, is there a way to tell if a map does not contain a key?
If that's what you mean, you simply use the find function, and if the key doesn't exist it will return an iterator pointing to end().
if (my_map.find(555) == my_map.end()) { /* do something */ }
You go on to say...
I am using std::map, and I want to automatically generate a key for new items. Items may be deleted from the map in a different order from their insertion. The myclass items may or may not be identical, so they cannot serve as a key by themselves.
It's a bit unclear to me what you're trying to accomplish here. It seems your problem is that you want to store instances of myclass in a map, but since you may have duplicate values of myclass, you need some way to generate a unique key. Rather than doing that, why not just use std::multiset<myclass> and store the duplicates? When you look up a particular value of myclass, the multiset will return an iterator range over all the instances of myclass that have that value. You'll just need to implement a comparison functor for myclass.
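A sketch of that suggestion; this myclass and its comparison functor are assumptions, since the question doesn't show the real class:
#include <set>

struct myclass {
    int value; // stand-in for the real members
};

struct myclassLess {
    bool operator()(const myclass& a, const myclass& b) const {
        return a.value < b.value; // the comparison functor
    }
};

int main()
{
    std::multiset<myclass, myclassLess> items;
    items.insert({42});
    items.insert({42});                      // duplicates are allowed
    auto range = items.equal_range({42});    // all instances with that value
    (void)range;
    return 0;
}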
Could you please clarify why you cannot use a simple incremental counter as the auto-generated key (incremented on insert)? It seems there's no problem doing that.
Suppose you have decided how to generate non-counter-based keys and found that generating them in bulk is much more effective than generating them one by one.
Since this generator has to be "infinite" and "stateful" (that is your requirement), you can create a second fixed-size container with, say, 1000 unused keys.
Supply new map entries with keys from this container, and return keys to it for recycling.
Set some low threshold so you can react when the key container runs low, and refill the keys in bulk using the "infinite" generator.
The actual posted problem still remains: how to make an efficient generator that is not counter-based. You may want to take a second look at the "infinity" requirement and check whether, say, a 64-bit or 128-bit counter could still satisfy your algorithms for some limited period of time, like 1000 years.
Use uint64_t as the key type of the sequence, or, if you think even that will not be enough:
#include <cstdint>

struct sequence_key_t {
uint64_t upper;
uint64_t lower;
sequence_key_t& operator++();
bool operator<(const sequence_key_t& other) const;
};
Like:
sequence_key_t global_counter;
std::map<sequence_key_t,myclass> my_map;
my_map.insert(std::make_pair(++global_counter, myclass()));
and you will not have any problems.
Like others, I am having difficulty figuring out exactly what you want. It sounds like you want to create an item if it is not found. std::map::operator[](const key_type& x) will do this for you.
std::map<myclass, int> Map; // keyed on myclass, since instances are used as keys below
myclass instance1, instance2;
Map[instance1] = 5;
Map[instance2] = 6;
Is this what you are thinking of?
Going along with the other answers, I'd suggest a simple counter for generating the ids. If you're worried about being perfectly correct, you could use an arbitrary-precision integer for the counter rather than a built-in type, or something like the following, which will iterate through all possible strings.
#include <string>

void string_increment(std::string& counter)
{
bool carry=true;
for (size_t i=0;i<counter.size();++i)
{
unsigned char original=static_cast<unsigned char>(counter[i]);
if (carry)
{
++counter[i];
}
if (original>static_cast<unsigned char>(counter[i]))
{
carry=true;
}
else
{
carry=false;
}
}
if (carry)
{
counter.push_back(0);
}
}
e.g. so that you have:
std::string counter; // empty string
string_increment(counter); // now counter=="\x00"
string_increment(counter); // now counter=="\x01"
...
string_increment(counter); // now counter=="\xFF"
string_increment(counter); // now counter=="\x00\x00"
string_increment(counter); // now counter=="\x01\x00"
...
string_increment(counter); // now counter=="\xFF\x00"
string_increment(counter); // now counter=="\x00\x01"
string_increment(counter); // now counter=="\x01\x01"
...
string_increment(counter); // now counter=="\xFF\xFF"
string_increment(counter); // now counter=="\x00\x00\x00"
string_increment(counter); // now counter=="\x01\x00\x00"
// etc..
Another option, if the working set actually held in the map is small enough, would be to use an incrementing key and then re-generate the keys when the counter is about to wrap. This solution only requires temporary extra storage; lookup performance is unchanged, and key generation stays just an if and an increment.
The number of items in the current working set really determines whether this approach is viable or not.
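A sketch of that re-keying pass (compact is a hypothetical helper; it returns an old-to-new remapping because, per the question, other objects hold the old keys):
#include <map>
#include <utility>

template <typename Value>
std::map<int, int> compact(std::map<int, Value>& items, int& counter)
{
    std::map<int, Value> fresh;
    std::map<int, int> remap;            // old key -> new key
    int next = 0;
    for (auto& kv : items) {
        remap.emplace(kv.first, next);
        fresh.emplace(next, std::move(kv.second));
        ++next;
    }
    items.swap(fresh);                   // temporary storage lives only here
    counter = next;                      // key generation resumes from here
    return remap;
}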
I liked Jon Benedicto's and Tom's answers very much. To be fair, the other answers that only used counters may have been the starting point.
Problem with only using counters
You always have to increment higher and higher; never trying to fill the empty gaps.
Once you run out of numbers and wrap around, you have to do log(n) iterations to find unused keys.
Problem with the queue for holding used keys
It is easy to imagine lots and lots of used keys being stored in this queue.
My Improvement to queues!
Rather than storing single used keys in the queue; we store ranges of unused keys.
Interface
#include <boost/container/flat_set.hpp>
#include <cstddef>
#include <functional>
#include <vector>

using Key = wchar_t; //In my case
struct Range
{
Key first;
Key last;
size_t size() { return last - first + 1; }
};
bool operator< (const Range&,const Range&);
bool operator< (const Range&,Key);
bool operator< (Key,const Range&);
struct KeyQueue__
{
public:
virtual ~KeyQueue__() = default; // instances are deleted through this base pointer
virtual void addKey(Key)=0;
virtual Key getUniqueKey()=0;
virtual bool shouldMorph()=0;
protected:
Key counter = 0;
friend class Morph;
};
struct KeyQueue : KeyQueue__
{
public:
void addKey(Key)override;
Key getUniqueKey()override;
bool shouldMorph()override;
private:
std::vector<Key> pool;
friend class Morph;
};
struct RangeKeyQueue : KeyQueue__
{
public:
void addKey(Key)override;
Key getUniqueKey()override;
bool shouldMorph()override;
private:
boost::container::flat_set<Range,std::less<>> pool;
friend class Morph;
};
void morph(KeyQueue__*&);
struct Morph
{
static void morph(const KeyQueue &from,RangeKeyQueue &to);
static void morph(const RangeKeyQueue &from,KeyQueue &to);
};
Implementation
Note: keys being added are assumed not to be present in the queue already
// Assumes that Range is valid. first <= last
// Assumes that Ranges do not overlap
bool operator< (const Range &l,const Range &r)
{
return l.first < r.first;
}
// Assumes that Range is valid. first <= last
bool operator< (const Range &l,Key r)
{
int diff_1 = l.first - r;
int diff_2 = l.last - r;
return diff_1 < -1 && diff_2 < -1;
}
// Assumes that Range is valid. first <= last
bool operator< (Key l,const Range &r)
{
int diff = l - r.first;
return diff < -1;
}
void KeyQueue::addKey(Key key)
{
if(counter - 1 == key) counter = key;
else pool.push_back(key);
}
Key KeyQueue::getUniqueKey()
{
if(pool.empty()) return counter++;
else
{
Key key = pool.back();
pool.pop_back();
return key;
}
}
bool KeyQueue::shouldMorph()
{
return pool.size() > 10;
}
void RangeKeyQueue::addKey(Key key)
{
if(counter - 1 == key) counter = key;
else
{
auto elem = pool.find(key);
if(elem == pool.end()) pool.insert({key,key});
else // Expand existing range
{
Range &range = (Range&)*elem;
// Note at this point, key is 1 value less or greater than range
if(range.first > key) range.first = key;
else range.last = key;
}
}
}
Key RangeKeyQueue::getUniqueKey()
{
if(pool.empty()) return counter++;
else
{
Range &range = (Range&)*pool.begin();
Key key = range.first++;
if(range.first > range.last) // exhausted all keys in range
pool.erase(pool.begin());
return key;
}
}
bool RangeKeyQueue::shouldMorph()
{
return pool.size() == 0 || (pool.size() == 1 && pool.begin()->size() < 4);
}
void morph(KeyQueue__ *&obj) // takes the pointer by reference so the caller sees the swap
{
if(KeyQueue *queue = dynamic_cast<KeyQueue*>(obj))
{
RangeKeyQueue *new_queue = new RangeKeyQueue();
Morph::morph(*queue,*new_queue);
delete queue; // free the old queue instead of leaking it
obj = new_queue;
}
else if(RangeKeyQueue *queue = dynamic_cast<RangeKeyQueue*>(obj))
{
KeyQueue *new_queue = new KeyQueue();
Morph::morph(*queue,*new_queue);
delete queue;
obj = new_queue;
}
}
void Morph::morph(const KeyQueue &from,RangeKeyQueue &to)
{
to.counter = from.counter;
for(Key key : from.pool) to.addKey(key);
}
void Morph::morph(const RangeKeyQueue &from,KeyQueue &to)
{
to.counter = from.counter;
for(Range range : from.pool)
while(range.first <= range.last)
to.addKey(range.first++);
}
Usage:
#include <cstdlib>
#include <ctime>
#include <vector>

int main()
{
std::vector<Key> keys;
KeyQueue__ *keyQueue = new KeyQueue();
srand(time(NULL));
bool insertKey = true;
for(int i=0; i < 1000; ++i)
{
if(insertKey)
{
Key key = keyQueue->getUniqueKey();
keys.push_back(key);
}
else if(!keys.empty()) // guard: don't remove when no keys are held
{
int index = rand() % keys.size();
Key key = keys[index];
keys.erase(keys.begin()+index);
keyQueue->addKey(key);
}
if(keyQueue->shouldMorph())
{
morph(keyQueue);
}
insertKey = rand() % 3; // more chances of insert
}
}