From what I read in the documentation, std::map is typically implemented as a binary search tree, and it keeps its elements sorted.
I need to insert and retrieve elements rapidly. I also need to get the lowest N elements from time to time.
I was thinking about using a std::map; is it a good choice? If it is, how long would it take to retrieve the lowest N elements? O(n*log n)?
Given that you need both retrieval and the N smallest elements, I would say std::map is a reasonable choice. But depending on the exact access pattern, a sorted std::vector might be a good choice too.
I am not sure what you mean by retrieve. The time to read k elements is O(k) (provided you do it sequentially using an iterator); the time to remove them is O(k log n), where n is the total number of elements, even if you do it sequentially using iterators.
You can use iterators to rapidly read through the lowest N elements. Going from begin() to the (N-1)th element will take O(N) time (advancing the iterator is amortised constant time for a std::map).
I'd note, however, that it is often actually faster to implement what it sounds like you are doing with a sorted std::vector and a binary search, so depending on your exact requirements this might be worth investigating.
The C++ standard requires that all required iterator operations (including iterator increment) be amortized constant time. Consequently, getting the first N items in a container must take amortized O(N) time.
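For illustration, a minimal sketch of reading the N smallest entries of a std::map this way (the map contents below are made up):

#include <cstddef>
#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<int, std::string> m = {{3, "c"}, {1, "a"}, {4, "d"}, {2, "b"}};

    // Walk the N smallest keys: each iterator increment is amortised
    // constant, so the whole loop is O(N).
    const std::size_t N = 2;
    std::size_t i = 0;
    for (auto it = m.begin(); it != m.end() && i < N; ++it, ++i)
        std::cout << it->first << " -> " << it->second << '\n';
}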
I would say yes to both questions.
In a C++ std::set (often implemented using red-black trees), the elements are automatically kept sorted, and key lookups and deletions at arbitrary positions take O(log n) time.
In a sorted C++ std::vector, lookups are also fast (actually probably a bit faster than std::set), but insertions are slow (since maintaining sortedness takes time O(n)).
However, sorted C++ std::vectors have another property: they can find the number of elements in a range quickly (in time O(log n)).
i.e., a sorted C++ std::vector can quickly answer: how many elements lie between two given values x and y?
std::set can quickly find iterators to the start and end of the range, but gives no clue how many elements are within.
So, is there a data structure that allows all the speed of a C++ std::set (fast lookups and deletions), but also allows fast computation of the number of elements in a given range?
(By fast, I mean time O(log n), or maybe a polynomial in log n, or maybe even sqrt(n). Just as long as it's faster than O(n), since O(n) is almost the same as the trivial O(n log n) to search through everything).
(If not possible, even an estimate of the number to within a fixed factor would be useful. For integers a trivial upper bound is y-x+1, but how to get a lower bound? For arbitrary objects with an ordering there's no such estimate).
EDIT: I have just seen the related question, which essentially asks whether one can compute the number of preceding elements. (Sorry, my fault for not seeing it before.) This is trivially equivalent to this question (to get the number in a range, just compute the counts for the start and end elements and subtract).
However, that question also allows the data to be computed once and then be fixed, unlike here, so that question (and the sorted vector answer) isn't actually a duplicate of this one.
The data structure you're looking for is an Order Statistic Tree
It's typically implemented as a binary search tree in which each node additionally stores the size of its subtree.
Unfortunately, I'm pretty sure the STL doesn't provide one.
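If you happen to be using GCC, its non-standard policy-based data structures extension does ship one; a sketch, assuming a reasonably recent libstdc++:

#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <functional>
#include <iostream>

// Red-black tree whose nodes additionally track subtree sizes.
using ordered_set = __gnu_pbds::tree<
    int,
    __gnu_pbds::null_type,
    std::less<int>,
    __gnu_pbds::rb_tree_tag,
    __gnu_pbds::tree_order_statistics_node_update>;

int main()
{
    ordered_set s;
    s.insert(1); s.insert(4); s.insert(7); s.insert(10); s.insert(15);

    // order_of_key(x) = number of stored elements strictly less than x,
    // so the number of elements in the closed range [4, 10] is:
    std::cout << s.order_of_key(11) - s.order_of_key(4) << '\n';  // prints 3
}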
All data structures have their pros and cons, which is why the standard library offers a whole range of containers.
And the rule is that there is often a trade-off between the speed of modifications and the speed of data extraction. Here you would like to quickly obtain the number of elements in a range. A possibility in a tree-based structure would be to cache in each node the number of elements of its subtree. That would add on average log(N) extra operations (the height of the tree) to each insertion or deletion, but would greatly speed up the computation of the number of elements in a range. Unfortunately, few classes from the C++ standard library are designed for derivation (and AFAIK std::set is not), so you will have to implement your tree from scratch.
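For what it's worth, the counting part of such a hand-rolled tree might look like the sketch below; the node layout and function names are hypothetical, and the extra work is keeping size up to date on every insertion, deletion and rotation:

#include <cstddef>

// Hypothetical node of a hand-rolled BST augmented with subtree sizes.
struct Node
{
    int         key;
    std::size_t size  = 1;        // nodes in this subtree, including this one
    Node*       left  = nullptr;
    Node*       right = nullptr;
};

std::size_t subtree_size(const Node* n) { return n ? n->size : 0; }

// Number of keys strictly less than x, in O(height) steps.
std::size_t count_less(const Node* n, int x)
{
    if (!n)
        return 0;
    if (x <= n->key)
        return count_less(n->left, x);
    return subtree_size(n->left) + 1 + count_less(n->right, x);
}

// Elements in the half-open range [a, b): count_less(root, b) - count_less(root, a).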
Maybe you are looking for a C++ alternative to Java's LinkedHashSet: https://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashSet.html.
We have 4,816,703 entries in this format.
1 abc
2 def
...
...
4816702 blah
4816703 blah_blah
Since the number of entries is quite big, I am worried that std::map would take a lot of time during insertion, since it needs to rebalance for each insertion.
Only inserting these entries into the map takes a lot of time. I am doing
map[first] = second;
Two questions:
1. Am I correct in using std::map for this kind of case?
2. Am I correct in inserting the above way, or should I use map.insert()?
I am sorry for not running the experiments and reporting absolute numbers, but we want a general consensus on whether we are doing the right thing or not.
Also, the keys are not always consecutive.
P.S. Of course, later we will also need to access the map to get the values corresponding to the keys.
If you don’t need to insert into the map afterwards, you can construct an unsorted vector of your data, sort it according to the key, and then search using functions like std::equal_range.
It’s the same complexity as std::map, but with far fewer allocations.
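A rough sketch of that approach, reusing the sample entries from the question:

#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // Build everything first, then sort once by key.
    std::vector<std::pair<int, std::string>> v = {
        {4816703, "blah_blah"}, {1, "abc"}, {2, "def"}};
    std::sort(v.begin(), v.end(),
              [](const auto& a, const auto& b) { return a.first < b.first; });

    // Binary search by key (keys are assumed unique here).
    const int key = 2;
    auto range = std::equal_range(
        v.begin(), v.end(), std::make_pair(key, std::string{}),
        [](const auto& a, const auto& b) { return a.first < b.first; });
    if (range.first != range.second)
        std::cout << range.first->second << '\n';  // prints "def"
}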
Use an std::unordered_map, which has much better insertion time complexity than std::map, as the reference mentions:
Complexity
Single element insertions:
Average case: constant.
Worst case: linear in container size.
Multiple elements insertion:
Average case: linear in the number of elements inserted.
Worst case: N*(size+1): number of elements inserted times the container size plus one.
May trigger a rehash (not included in the complexity above).
That's better than the logarithmic time complexity of std::map's insertion.
Note: std::map's insertion can enjoy "amortized constant if a hint is given and the position given is the optimal.". If that's the case for you, then use a map (if a vector is not applicable).
@n.m. provides a representative Live demo
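For the unordered_map route, a sketch of the bulk insert; calling reserve() up front sizes the bucket array so the insertions do not trigger the rehashes mentioned in the complexity note above:

#include <string>
#include <unordered_map>

int main()
{
    std::unordered_map<int, std::string> map;
    map.reserve(4816703);   // expected number of entries, avoids rehashing

    // For each (first, second) pair read from the input:
    map[1] = "abc";
    map[2] = "def";
    // ...
}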
I am developing a time critical application and am looking for the best container to handle a collection of elements of the following type:
class Element
{
    int  weight;
    Data data;
};
Considering that the time critical steps of my application, periodically performed in a unique thread, are the following:
the Element with the lowest weight is extracted from the container, and data is processed;
a number n >= 0 of new Elements, with random(*) weights, are inserted into the container.
Some Element of the container may have the same weight. The total number of elements in the container at any time is quite high and almost stationary in average (several hundreds of thousands). The time needed for the extract/process/insert sequence described above must be as low as possible. (Note(*): new weight is actually computed from data but is considered as random here to simplify.)
After some searching and trying different STL containers, I ended up using the std::multiset container, which performed about 5 times faster than an ordered std::vector and 16 times faster than an ordered std::list. But still, I am wondering whether I could achieve even better performance, considering that the bottleneck of my application remains the extract/insert operations.
Notice that, though I only tried ordered containers, I did not mention "ordered container" in my requirements. I do not need the Elements to be ordered in the container; I only need to perform the "extract lowest weighted element"/"insert new elements" operations as fast as possible. I am not limited to STL containers and can go for boost, or any other implementation, if suited.
Thanks for help.
I do not need the Element to be ordered in the container, I only need to perform the "extract lowest weighted element"/"insert new elements" operations as fast as possible.
Then you should try priority_queue<T>, or use make_heap/push_heap/pop_heap operations on a vector<T>.
Since you are looking for a min-heap, not a max-heap, you would need to supply a custom comparator that orders your Element objects in reverse.
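A sketch of the min-heap variant (the Data member is left out, and the names are placeholders):

#include <queue>
#include <vector>

struct Element
{
    int weight;
    // Data data;   // payload omitted in this sketch
};

// Reversed comparison so that the lowest weight ends up on top of the heap.
struct LowestWeightFirst
{
    bool operator()(const Element& a, const Element& b) const
    {
        return a.weight > b.weight;
    }
};

int main()
{
    std::priority_queue<Element, std::vector<Element>, LowestWeightFirst> queue;
    queue.push({42});
    queue.push({7});

    Element lowest = queue.top();   // weight 7
    queue.pop();                    // O(log N)
    (void)lowest;                   // process the extracted element here
}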
I think that within the STL, a lazily sorted std::vector will give the best results.
A suggested pseudo-code may look like this:
emplace new elements at the back of the vector;
only when you want the smallest element, sort the vector and take the first element.
This way you get the amortized insertion time of a vector, a relatively small number of memory allocations, and good cache locality.
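A sketch of that idea; std::nth_element or std::min_element could avoid the full sort if the rest of the order is never needed:

#include <algorithm>
#include <vector>

struct Element
{
    int weight;
};

int main()
{
    std::vector<Element> elements;

    // Cheap insertions: just append at the end.
    elements.push_back({42});
    elements.push_back({7});

    // Only when the smallest element is needed, order the vector.
    std::sort(elements.begin(), elements.end(),
              [](const Element& a, const Element& b) { return a.weight < b.weight; });

    Element lowest = elements.front();  // weight 7
    (void)lowest;
}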
It is instructive to consider different candidates and how your assumptions would impact the final selection. When your requirements change, it then becomes easier to switch containers.
Generally, containers of size N have roughly three complexity categories for their basic access/modification operations: (amortized) O(1), O(log N) and O(N).
Your first requirement (finding the lowest weight element) gives you roughly three candidates with O(1) complexity, and one candidate with O(N) complexity per element:
O(1) for std::priority_queue<Element, LowestWeightCompare>
O(1) for std::multiset<Element, LowestWeightCompare>
O(1) for boost::flat_multiset<Element, LowestWeightCompare>
O(N) for std::unordered_multiset<Element>
Your second requirement (randomized insertion of new elements) gives you the following complexity per element for each of the above four choices
O(log N) for std::priority_queue
O(log N) for std::multiset
O(N) for boost::flat_multiset
amortized O(1) for std::unordered_multiset
Among the first three choices, boost::flat_multiset should be dominated by the other two for large N. Among the remaining two, the better caching behavior of std::priority_queue over std::multiset might prevail. But: measure, measure, measure.
It is a priori ambiguous whether std::unordered_multiset is competitive with the other three. Depending on the number n of randomly inserted elements, the total cost per batch of find(1)-insert(n) would be O(N) search + O(n) insertion for std::unordered_multiset, and O(1) search + O(n log N) insertion for std::multiset. Again, measure, measure, measure.
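For concreteness, the find(1)-insert(n) batch with std::multiset might look like this sketch (Data omitted):

#include <set>

struct Element
{
    int weight;
    // Data data;
};

struct LowestWeightCompare
{
    bool operator()(const Element& a, const Element& b) const
    {
        return a.weight < b.weight;
    }
};

int main()
{
    std::multiset<Element, LowestWeightCompare> elements;
    elements.insert({42});
    elements.insert({7});

    // find(1): the lowest weight element sits at begin(), O(1).
    auto lowest = elements.begin();
    Element e = *lowest;        // process e here
    elements.erase(lowest);     // erase at a known position, amortized O(1)

    // insert(n): each randomized insertion costs O(log N).
    elements.insert({13});
    (void)e;
}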
How robust are these considerations with respect to your requirements? The story would change as follows if you had to find the k lowest-weight elements in each batch. Then you'd have to compare the costs of find(k)-insert(n). The search costs would scale roughly as
O(k log N) for std::priority_queue
O(k) for std::multiset
O(k) for boost::flat_multiset
O(k N) for std::unordered_multiset
Note that a priority_queue only gives efficient access to its top element; to reach its k top elements you have to actually call pop() on them, at O(log N) per call. If you expect that your code would likely change from a find(1)-insert(n) batch mode to a find(k)-insert(n) one, then it might be a good idea to choose std::multiset, or at least to document what kind of interface changes it would require.
Bonus: the best of both worlds?! You might also want to experiment a bit with Boost.MultiIndex and use something like (check the documentation to get the syntax correct)
boost::multi_index_container<
    Element,
    indexed_by<
        ordered_non_unique<member<Element, int, &Element::weight>>,
        hashed_non_unique<member<Element, int, &Element::weight>>
    >
>
The above code creates a node-based container that maintains two pointer structures: one keeps track of the ordering by Element weight, and the other allows quick hashed insertion. This gives O(1) lookup of the lowest-weight Element and also allows O(n) random insertion of n new elements.
For large N, it should scale better than the four previously mentioned containers, but again, for moderate N, cache effects induced by pointer chasing into random memory might spoil its theoretical advantage over std::priority_queue. Did I mention the mantra of measure, measure, measure?
Try either of these:
std::map<int,std::vector<Data>>
or
std::unordered_map<int,std::vector<Data>>
The int above is the weight.
These both have different speeds for find, remove and add depending on many different factors, such as whether the element is there or not. (If it is there, unordered_map's find is faster; if not, map's find is faster.)
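With the ordered std::map variant, extracting a lowest-weight element might look like the sketch below; the unordered_map variant would instead need to track the minimum weight separately, since its iteration order is unspecified:

#include <map>
#include <vector>

struct Data { /* ... */ };

int main()
{
    std::map<int, std::vector<Data>> byWeight;   // key is the weight
    byWeight[42].push_back(Data{});
    byWeight[7].push_back(Data{});

    // The lowest weight bucket is simply the first entry of the ordered map.
    auto lowest = byWeight.begin();
    Data d = lowest->second.back();
    lowest->second.pop_back();
    if (lowest->second.empty())
        byWeight.erase(lowest);
    (void)d;
}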
What would be an efficient implementation for a std::set insert member function? Because the data structure sorts elements based on std::less (operator < needs to be defined for the element type), it is conceptually easy to detect a duplicate.
How does it actually work internally? Does it make use of the red-black tree data structure (an implementation detail mentioned in Josuttis's book)?
Implementations of the standard data structures may vary...
I have a problem where I am forced to have (generally speaking) sets of integers which should be unique. The length of the sets varies, so I need a dynamic data structure (based on my narrow knowledge, this narrows things down to list or set). The elements do not necessarily need to be sorted, but there may be no duplicates. Since the candidate sets always have a lot of duplicates (the sets are small, up to 64 elements), will trying to insert duplicates into a std::set with the insert member function cause a lot of overhead compared to a std::list plus some other algorithm that does not rely on keeping the elements sorted?
Additional: the output set has a fixed size of 27 elements. Sorry, I forgot this... it applies to a special case of the problem. For other cases, the length is arbitrary (smaller than the input set).
If you're creating the entire set all at once, you could try using std::vector to hold the elements, std::sort to sort them, and std::unique to prune out the duplicates.
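The usual sort/unique/erase idiom, as a sketch:

#include <algorithm>
#include <vector>

int main()
{
    std::vector<int> values = {3, 7, 3, 1, 7, 7, 1};

    std::sort(values.begin(), values.end());
    values.erase(std::unique(values.begin(), values.end()), values.end());
    // values is now {1, 3, 7}
}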
The complexity of std::set::insert is O(log n), or amortized O(1) if you use the "positional" insert and get the position correct (see e.g. http://cplusplus.com/reference/stl/set/insert/).
The underlying mechanism is implementation-dependent. It's often a red-black tree, but this is not mandated. You should look at the source code for your favourite implementation to find out what it's doing.
For small sets, it's possible that e.g. a simple linear search on a vector will be cheaper, due to spatial locality. But the insert itself will require all the following elements to be copied. The only way to know for sure is to profile each option.
When you only have 64 possible values known ahead of time, just take a bit field and flip on the bits for the elements actually seen. That works in n+O(1) steps, and you can't get less than that.
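A sketch of that, assuming the candidate values can be mapped onto bit positions 0 through 63:

#include <cstdint>

int main()
{
    const int input[] = {3, 7, 3, 1, 7};   // duplicates are simply absorbed
    std::uint64_t seen = 0;

    for (int v : input)
        seen |= std::uint64_t{1} << v;      // flip on the bit for this value

    // Membership test afterwards is O(1).
    bool hasThree = (seen >> 3) & 1u;
    (void)hasThree;
}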
Inserting into a std::set of size m takes O(log(m)) time and comparisons, meaning that using an std::set for this purpose will cost O(n*log(n)) and I wouldn't be surprised if the constant were larger than for simply sorting the input (which requires additional space) and then discarding duplicates.
Doing the same thing with an std::list would take O(n^2) average time, because finding the insertion place in a list needs O(n).
Inserting one element at a time into an std::vector would also take O(n^2) average time – finding the insertion place is doable in O(log(m)), but elements need to be moved to make room. If the number of elements in the final result is much smaller than the input, that drops down to O(n*log(n)), with close to no space overhead.
If you have a C++11 compiler or use boost, you could also use a hash table. I'm not sure about the insertion characteristics, but if the number of elements in the result is small compared to the input size, you'd only need O(n) time – and unlike the bit field, you don't need to know the potential elements or the size of the result a priori (although knowing the size helps, since you can avoid rehashing).
I want to add the natural numbers from A to B to a set. Currently I am inserting every number from A to B, one by one, into the set like this:
set<int> s;
for (int j = A; j <= B; j++)
    s.insert(j);
But it takes O(n) time (here n = (B - A)+1). Is there any pre-defined way in STL to do it in O(1) time?
Thanks
Allocating memory to hold n numbers is always going to take at least O(n), so I think you're out of luck.
Technically I believe this is O(n log n), because set::insert is O(log n). O(n) is the best you can do, I think, but for that you would need an unsorted container like a vector or a list.
No. The shortest amount of time it takes to fill a container with sequential values is O(n) time.
With the STL set container you will never get O(1) time. You may be able to reduce the running time by using the set(InputIterator f, InputIterator l, const key_compare& comp) constructor and passing in a custom iterator that iterates over the given integer range. The reason this may run faster (depends on stl implementation, compiler, etc) is that you are reducing the call stack depth. In your snippet, you go all the way down from your .insert() call to the actual insertion and back for each integer. Using the alternate constructor, your increment operation is moved down into the frame in which the insertion is performed. The increment operation would now have the possible overhead of a function call if your compiler can't inline it. You should benchmark this before taking this approach though. It may be slower if your stl implementation has a shallow call stack for .insert().
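If Boost is available, boost::counting_iterator can play the role of that custom iterator; a sketch (the construction is of course still at least linear overall, as noted in the other answers):

#include <set>
#include <boost/iterator/counting_iterator.hpp>

int main()
{
    const int A = 5, B = 12;

    // Range constructor fed by a counting iterator instead of a hand-written loop.
    std::set<int> s(boost::counting_iterator<int>(A),
                    boost::counting_iterator<int>(B + 1));
}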
In general though, if you need a set of a contiguous range of integers, you could see massive performance gains by implementing a specialized set class that can store and compare only the upper and lower bounds of each set.
O(1) is only true for the default constructor.
O(n) for the copy constructor and for sorted-sequence insertion using iterators.
O(log n!) (which is O(n log n)) for unsorted-sequence insertion using iterators.
Well, if you want to go completely out of the box, you could design a "lazy-loaded" array, custom to this task. Basically, upon access, if the value had not previously been set, it would determine the correct value.
This would allow the setup to be O(1) (assuming initializing the "not previously set" flags is itself O(1)), but it wouldn't speed up the overall operation -- it would just scatter that time over the rest of the run (it would probably take longer overall).