I need to loop over all key-values in RocksDB in order to fill my POD collection. I don't need to store key-values after retrieval. What should I use - DeleteRange() after loop or Delete() within loop? If it is DeleteRange(), then what end iterator must be passed as a parameter?
QScopedPointer<Iterator> it(m_db->NewIterator(ReadOptions()));
for (it->SeekToFirst(); it->Valid(); it->Next())
{
// filling POD collection
}
You can use DeleteRange(start, end), where start is inclusive and end is exclusive. It is applied atomically and may be faster than calling Delete() inside the loop. Note that the parameters are key Slices, not iterators: to clear everything, pass the first key as start and, as end, any key that sorts strictly after the last key in the database (for example, the last key with an extra byte appended).
Related
I would like to know what the most suitable data structure is for the following problem in C++
I am wanting to store 100 floats ordered by recency. So when I add (push) a new item the other elements are moved up one position. Every time an event is triggered I receive a value and then add it to my data structure.
When the number of elements reaches 100, I would like to remove (pop) the item at the end (the oldest).
I want to able to iterate over all the elements and perform some mathematical operations on them.
I have looked at all the standard C++ containers but none of them fulfill all my needs. What's the easiest way to achieve this with standard C++ code?
You want a circular buffer. You can use Boost's implementation or make your own by allocating an array, and keeping track of the beginning and end of the used range. This boils down to doing indexing modulo 100.
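A minimal sketch of such a circular buffer, assuming a fixed capacity of 100 (the class and member names are illustrative, not from any library):

```cpp
#include <array>
#include <cstddef>

// Fixed-capacity ring buffer of 100 floats. Once full, each push
// overwrites the oldest element; all indexing is modulo 100.
class RingBuffer {
public:
    void push(float f) {
        buf_[head_] = f;
        head_ = (head_ + 1) % buf_.size();   // wrap around
        if (count_ < buf_.size()) ++count_;
    }
    std::size_t size() const { return count_; }
    // Element 0 is the newest, element size()-1 the oldest.
    float operator[](std::size_t i) const {
        return buf_[(head_ + buf_.size() - 1 - i) % buf_.size()];
    }
private:
    std::array<float, 100> buf_{};
    std::size_t head_ = 0;   // next slot to write
    std::size_t count_ = 0;  // number of valid elements
};
```

Iterating for the mathematical operations is then a plain loop from 0 to size()-1; no elements are ever shifted.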
Without creating your own or using a library, std::vector is the most efficient standard data structure for this. Once it has reached its maximum size, there will be no more dynamic memory allocations. The cost of shifting 100 floats is trivial compared to the cost of a dynamic memory allocation (this is why std::list is a slow data structure for this). There is no push_front function for vector; instead you have to use v.insert(v.begin(), f).
Of course this assumes what you are doing is performance-critical, which it probably isn't. In that case I would use std::deque for more convenient usage.
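The std::deque variant can be sketched like this (the helper function name is illustrative):

```cpp
#include <deque>

// Keep the newest value at the front; drop the oldest once the
// buffer would exceed 100 elements.
void addValue(std::deque<float>& recent, float value) {
    recent.push_front(value);
    if (recent.size() > 100)
        recent.pop_back();  // evict the oldest
}
```

Both ends of a deque support constant-time insertion and removal, so no elements are shifted on either operation.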
Just saw that you need to iterate over them. Use a list.
Your basic function would look something like this
std::list<int> list100;

void addToList(int value) {
    list100.push_back(value);      // newest at the back
    if (list100.size() > 100) {
        list100.pop_front();       // drop the oldest
    }
}
Iterating over them is easy as well:
int sum = 0;
for (int val : list100) {
    sum += val;
}
// Average, or whatever you need to do
Obviously, if you're using something besides int, you'll need to change that. Although this adds a little bit more functionality than you need, it's very efficient since it's a doubly linked list.
http://www.cplusplus.com/reference/list/list/
You can use std::array, std::deque, std::list, or std::priority_queue.
A std::map should be able to solve your requirement. Use the object as the key and, as the value, the current push number nPushCount, which is incremented whenever you add an element to the map.
When adding a new element to the map, if you have fewer than 100 elements, just add the number to the map as the key with nPushCount as the value.
If you already have 100 elements, check whether the number exists in the map and do the following:
If the number already exists in the map, re-add it as the key with the updated nPushCount as the value.
If it doesn't, delete the entry with the lowest nPushCount value and then add the desired number with the updated nPushCount.
I have std::set having large number unique objects as its elements.
In the main thread of program:
I take some objects from the set
Assign data to be processed to each of them
Remove those objects from set
And finally pass the objects to threads in threadpool for processing
Once those threads finish processing the objects, they add them back to the set (so that in the next iteration, the main thread can again assign the next batch of data to those objects for processing).
This arrangement works perfectly. But if I encounter an error while adding an object back to the set (for example, std::set::insert() throws bad_alloc), then everything falls apart.
If I ignore that error and proceed, there is no way for the object to get back into the processing set; it stays out of the program flow forever, causing a memory leak.
To address this issue I tried not removing objects from the set at all. Instead, each object has a member flag that indicates it is 'being processed'. But then the main thread encounters 'being processed' objects again and again while iterating over the set, and that badly hampers performance (the number of objects in the set is quite large).
What are better alternatives here?
Can std::list be used instead of std::set? A list would not have the bad_alloc problem when adding an element back, as it just needs to assign pointers. But how can we make the list elements unique? And if we achieve that, will it be as efficient as std::set?
Instead of removing elements from the std::set and adding them back, is there any way to move an element to the start or end of the set, so that unprocessed and processed objects accumulate at opposite ends?
Any other solution please?
I have 15,000,000 std::vectors of 6 integers each.
Those 15M vectors contain duplicates.
Duplicate example:
(4,3,2,0,4,23)
(4,3,2,0,4,23)
I need to obtain a list of the unique sequences with their associated counts. (A sequence that is present only once would have a count of 1.)
Is there an algorithm in the C++ standard library (C++11 is fine) that does this in one shot?
Windows, 4GB RAM, 30+GB hdd
There is no single algorithm in the standard library that does exactly this, but it is very easy with a single loop once you choose the proper data structure.
For this you want std::unordered_map, which is typically a hash map. It has expected constant time per access (insert and look-up) and is thus the first choice for huge data sets.
The following access-and-increment trick will automatically insert a new entry into the counter map if it's not yet there; then it increments the count in place.
typedef std::vector<int> VectorType; // Please consider std::array<int,6>!

// Note: std::unordered_map has no default hash for vectors, so a
// hasher such as boost::hash<VectorType> must be supplied.
std::unordered_map<VectorType, int, boost::hash<VectorType> > counters;
for (const VectorType& vec : vectors) {
    counters[vec]++;
}
For further processing, you most probably want to sort the entries by the number of occurrences. For this, either write them out into a vector of pairs (each pairing the number vector with its occurrence count), or into an (ordered) map with key and value swapped, so it is automatically ordered by the counter.
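The vector-of-pairs variant can be sketched as follows (shown with std::map for a deterministic test; the same code works with the unordered counter map above):

```cpp
#include <algorithm>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

typedef std::vector<int> Sequence;
typedef std::pair<Sequence, int> Entry;

// Copy the (sequence, count) entries out of the counter map and sort
// them by descending occurrence count.
std::vector<Entry> sortByCount(const std::map<Sequence, int>& counters) {
    std::vector<Entry> entries(counters.begin(), counters.end());
    std::sort(entries.begin(), entries.end(),
              [](const Entry& a, const Entry& b) { return a.second > b.second; });
    return entries;
}
```

After this, entries.front() holds the most frequent sequence.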
In order to reduce the memory footprint of this solution, try this:
If you don't need to get the keys back out of this hash map, you can use one that stores only the hashes instead of the keys. For this, use std::size_t as the key type, an identity function as the internal hash, and compute each vector's hash manually on access. (C++11 provides neither an identity hasher nor std::hash for vectors, so both must be supplied; also note that distinct vectors whose hashes collide will be counted together.)
struct IdentityHash {
    std::size_t operator()(std::size_t h) const { return h; }
};

std::unordered_map<std::size_t, int, IdentityHash> counters;
boost::hash<VectorType> hashFunc;
for (const VectorType& vec : vectors) {
    counters[hashFunc(vec)]++;
}
This reduces memory but requires additional effort to interpret the results, as you have to loop over the original data structure a second time to find the original vectors (then look them up in your hash map by hashing them again).
Yes: first std::sort the list (std::vector compares lexicographically, so the first element is the most significant), then loop with std::adjacent_find to find duplicates. When a duplicate is found, use std::adjacent_find again but with an inverted comparator (e.g. std::not_equal_to) to find the first non-duplicate.
Alternately, you could use std::unique with a custom comparator that flags when a duplicate is found, and maintains a count through the successive calls. This also gives you a deduplicated list.
The advantage of these approaches over std::unordered_map is space complexity proportional to the number of duplicates. You don't have to copy the entire original dataset or add a seldom-used field for dup-count.
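A sketch of the sort-then-scan idea, written as a single pass over runs of equal elements rather than with adjacent_find (the function name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

typedef std::vector<int> Sequence;

// After std::sort, equal sequences are adjacent, so one linear scan
// yields each unique sequence together with its count.
std::vector<std::pair<Sequence, int>> countDuplicates(std::vector<Sequence> data) {
    std::sort(data.begin(), data.end());  // lexicographic order for vectors
    std::vector<std::pair<Sequence, int>> result;
    for (std::size_t i = 0; i < data.size(); ) {
        std::size_t j = i;
        while (j < data.size() && data[j] == data[i]) ++j;  // end of run
        result.emplace_back(data[i], int(j - i));
        i = j;
    }
    return result;
}
```

This sorts in place, so beyond the output it needs no extra storage proportional to the input.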
You could convert each vector to a string such as "4,3,2,0,4,23", element by element.
Then add each string to a new string vector, first checking for existence with find().
If you need the original vectors, convert the string vector back into integer sequence vectors.
Duplicated elements are simply skipped while building the string vector.
I am using an STL queue to implement a BFS (breadth first search) on a graph. I need to push a node in the queue if that node already doesn't exist in the queue. However, STL queue does not allow iteration through its elements and hence I cannot use the STL find function.
I could use a flag for each node to mark it as visited and push nodes only when the flag is false. However, I need to run BFS multiple times, and after each run I would have to reset all the flags, so I ended up using a counter instead of a flag. Still, I would like to know whether there is a standard way of finding an item in a queue.
I assume you're implementing the concept of a "closed set" in your BFS? The standard way of doing that is to simply maintain a separate std::set or std::unordered_set of elements already encountered. That way, you get O(lg n) or O(1) lookup, while iterating through a queue, if it were supported, would take O(n) time.
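A minimal sketch of that closed-set pattern over an adjacency-list graph (the function name and graph representation are illustrative):

```cpp
#include <queue>
#include <unordered_set>
#include <vector>

// BFS with a separate "closed set". A node is inserted into `seen`
// when it is enqueued, so it can never be pushed twice, and no
// per-node flags need resetting between runs.
std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<int> order;
    std::queue<int> frontier;
    std::unordered_set<int> seen;
    frontier.push(start);
    seen.insert(start);
    while (!frontier.empty()) {
        int node = frontier.front();
        frontier.pop();
        order.push_back(node);
        for (int next : adj[node]) {
            if (seen.insert(next).second)  // true only on first insertion
                frontier.push(next);
        }
    }
    return order;
}
```

Checking `insert(...).second` combines the membership test and the insertion in one hash look-up.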
the accepted answer is silly.
each of the items in a BFS search will go through three states: not-visited, visited-but-not-complete, visited. For search purposes you can trim this down to visited or not-visited...
you simply need to use a flag...
the flag just alternates.
Well, with the first traversal of the tree, each node starts at false (not-visited) and goes to true (visited). On the second traversal, true means not-visited and each node goes back to false (visited).
so all you have to do is to keep a separate flag that simply changes state on each traversal...
then your logic is
if ( visitedFlag ^ alternatingFlagStateOfTree )
{
visitedFlag ^= 1;
}
basically the alternatingFlagStateOfTree is just used to indicate if a true is visited or a false is visited state. Each run alternates so we just swap them around.
This entirely eliminates the need for the set, with all its memory overhead, and also eliminates any need to reset the flag values between runs.
This technique can be used for more complex states, so long as there is a consistent ending state across all items being traversed. You simply do some math to reset what the flag value is to return the state back to base-state.
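The alternating-flag idea described above can be sketched like this (the struct and member names are illustrative):

```cpp
// `treeFlag` records what "visited" means for the current traversal,
// so no reset pass over the nodes is needed between runs.
struct Node {
    bool visitedFlag = false;
};

struct Traversal {
    bool treeFlag = false;  // meaning of "visited" for this run

    void beginRun() { treeFlag = !treeFlag; }  // flip the interpretation

    // Returns true the first time a node is seen in the current run.
    bool markVisited(Node& n) {
        if (n.visitedFlag != treeFlag) {  // same test as (visitedFlag ^ treeFlag)
            n.visitedFlag = treeFlag;     // now counts as visited this run
            return true;
        }
        return false;
    }
};
```

Each run simply calls beginRun() once; every node's leftover flag value from the previous run automatically means "not visited" again.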
We have a C++ application whose performance we are trying to improve. We identified that data retrieval takes a lot of time and want to cache the data. We can't store all of it in memory as it is huge, so we want to keep up to 1000 items in memory. These items can be indexed by a long key. However, when the cache size goes over 1000, we want to remove the item that was not accessed for the longest time, as we assume some sort of "locality of reference": items in the cache that were recently accessed will probably be accessed again.
Can you suggest a way to implement it?
My initial implementation was to have a map<long, CacheEntry> to store the cache, and add an accessStamp member to CacheEntry which will be set to an increasing counter whenever an entry is created or accessed. When the cache is full and a new entry is needed, the code will scan the entire cache map and find the entry with the lowest accessStamp, and remove it.
The problem with this is that once the cache is full, every insertion requires a full scan of the cache.
Another idea was to hold a list of CacheEntries in addition to the cache map, and on each access move the accessed entry to the top of the list, but the problem was how to quickly find that entry in the list.
Can you suggest a better approach?
Thanks,
splintor
Have your map<long,CacheEntry> but instead of having an access timestamp in CacheEntry, put in two links to other CacheEntry objects to make the entries form a doubly-linked list. Whenever an entry is accessed, move it to the head of the list (this is a constant-time operation). This way you will both find the cache entry easily, since it's accessed from the map, and are able to remove the least-recently used entry, since it's at the end of the list (my preference is to make doubly-linked lists circular, so a pointer to the head suffices to get fast access to the tail as well). Also remember to put in the key that you used in the map into the CacheEntry so that you can delete the entry from the map when it gets evicted from the cache.
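The same idea can be sketched with std::list standing in for the intrusive links (the class name is illustrative; the map stores list iterators, which std::list keeps valid across splices):

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

class LruCache {
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns a pointer to the cached value, or nullptr on a miss.
    int* get(long key) {
        auto it = index_.find(key);
        if (it == index_.end()) return nullptr;
        // Move the entry to the front (most recently used) in O(1).
        entries_.splice(entries_.begin(), entries_, it->second);
        return &it->second->second;
    }

    void put(long key, int value) {
        if (int* existing = get(key)) { *existing = value; return; }
        if (entries_.size() == capacity_) {
            index_.erase(entries_.back().first);  // evict least recently used
            entries_.pop_back();
        }
        entries_.emplace_front(key, value);       // each entry stores its key
        index_[key] = entries_.begin();
    }

private:
    std::size_t capacity_;
    std::list<std::pair<long, int>> entries_;     // front = most recent
    std::unordered_map<long, std::list<std::pair<long, int>>::iterator> index_;
};
```

Storing the key inside each list node is what allows the map entry to be erased when the node is evicted from the tail.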
Scanning a map of 1000 elements will take very little time, and the scan will only be performed when the item is not in the cache which, if your locality of reference ideas are correct, should be a small proportion of the time. Of course, if your ideas are wrong, the cache is probably a waste of time anyway.
An alternative implementation that might make the 'aging' of the elements easier, at the cost of lower search performance, would be to keep your CacheEntry elements in a std::list (or use a std::pair<long, CacheEntry>). The newest element gets added at the front of the list, so entries 'migrate' towards the end of the list as they age. When you check whether an element is already present in the cache, you scan the list (admittedly an O(n) operation, as opposed to O(log n) in a map). If you find it, you remove it from its current location and re-insert it at the start of the list. If the list length exceeds 1000 elements, you remove the required number of elements from the end of the list to trim it back below 1000.
Update: I got it now...
This should be reasonably fast. Warning, this is only a sketch.
// accesses contains a list of ids. The most recently used id is at the
// front, the least recently used id is at the back.
std::vector<long> accesses;
std::map<long, CachedItem*> cache;

CachedItem* get(long id) {
    std::map<long, CachedItem*>::iterator found = cache.find(id);
    if (found != cache.end()) {
        // In cache. Move id to the front of accesses.
        std::vector<long>::iterator pos =
            std::find(accesses.begin(), accesses.end(), id);
        if (pos != accesses.begin()) {
            accesses.erase(pos);
            accesses.insert(accesses.begin(), id);
        }
        return found->second;
    }
    // Not in cache: fetch and add it.
    CachedItem* item = noncached_fetch(id);
    accesses.insert(accesses.begin(), id);
    cache[id] = item;
    if (accesses.size() > 1000) {
        // Evict the least recently used item.
        cache.erase(accesses.back());
        accesses.pop_back();
    }
    return item;
}
The inserts and erases may be a little expensive, but may also not be too bad given the locality (few cache misses). Anyway, if they become a big problem, one could change to std::list.
In my approach, you need a hash table to look up stored objects quickly, and a linked list to maintain the order of last use.
When an object is requested:
1) Try to find the object in the hash table.
2a) If found (the value holds a pointer to the object's node in the linked list), move that node to the top of the linked list.
2b) If not found, remove the last object from the linked list and remove its data from the hash table as well; then put the new object into the hash table and at the top of the linked list.
For example
Let's say we have a cache memory only for 3 objects.
The request sequence is 1 3 2 1 4.
1) Hash-table : [1]
Linked-list : [1]
2) Hash-table : [1, 3]
Linked-list : [3, 1]
3) Hash-table : [1,2,3]
Linked-list : [2,3,1]
4) Hash-table : [1,2,3]
Linked-list : [1,2,3]
5) Hash-table : [1,2,4]
Linked-list : [4,1,2] => 3 out
Create a std::priority_queue<map<int, CacheEntry>::iterator> with a comparer on the access stamp. For an insert, first pop the last item off the queue and erase it from the map. Then insert the new item into the map, and finally push its iterator onto the queue.
I agree with Neil, scanning 1000 elements takes no time at all.
But if you want to do it anyway, you could just use the additional list you propose and, in order to avoid scanning the whole list each time, instead of storing just the CacheEntry in your map, you could store the CacheEntry and a pointer to the element of your list that corresponds to this entry.
As a simpler alternative, you could create a map that grows indefinitely and clears itself out every 10 minutes or so (adjust time for expected traffic).
You could also log some very interesting stats this way.
I believe this is a good candidate for treaps. The priority would be the time (virtual or otherwise), in ascending order (older at the root), with the long key as the key.
There's also the second chance algorithm, that's good for caches. Although you lose search ability, it won't be a big impact if you only have 1000 items.
The naïve method would be to have a map associated with a priority queue, wrapped in a class. You use the map to search and the queue to remove (first remove from the queue, grabbing the item, and then remove by key from the map).
Another option might be to use boost::multi_index. It is designed to separate index from data and by that allowing multiple indexes on the same data.
I am not sure this would really be faster than scanning through 1000 items, though. It might use more memory than is good, or slow down search and/or insert/remove.