STL: Set of natural numbers from A to B - c++

I want to add the natural numbers from A to B to a set. Currently I am inserting every number from A to B, one by one, like this:
set<int> s;
for (int j = A; j <= B; j++)
    s.insert(j);
But it takes O(n) time (here n = (B - A)+1). Is there any pre-defined way in STL to do it in O(1) time?
Thanks

Allocating memory to hold n numbers is always going to take at least O(n) time, so I think you're out of luck.

Technically I believe this is O(n log n), because set::insert is O(log n). O(n) is the best you can do, I think, but for that you would need to use an unsorted container like a vector or a list.

No. The shortest amount of time it takes to fill a container with sequential values is O(n) time.

With the STL set container you will never get O(1) time. You may be able to reduce the running time by using the set(InputIterator f, InputIterator l, const key_compare& comp) constructor and passing in a custom iterator that iterates over the given integer range. The reason this may run faster (depends on stl implementation, compiler, etc) is that you are reducing the call stack depth. In your snippet, you go all the way down from your .insert() call to the actual insertion and back for each integer. Using the alternate constructor, your increment operation is moved down into the frame in which the insertion is performed. The increment operation would now have the possible overhead of a function call if your compiler can't inline it. You should benchmark this before taking this approach though. It may be slower if your stl implementation has a shallow call stack for .insert().
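For example, a minimal sketch of that idea using the two-iterator range constructor; the hand-rolled counting iterator below is purely illustrative (boost::counting_iterator or C++20's std::views::iota give you the same thing ready-made):

#include <cstddef>
#include <iterator>
#include <set>

// Purely illustrative counting iterator that yields consecutive ints.
struct IntRangeIter {
    using iterator_category = std::input_iterator_tag;
    using value_type = int;
    using difference_type = std::ptrdiff_t;
    using pointer = const int*;
    using reference = const int&;

    int value;

    const int& operator*() const { return value; }
    IntRangeIter& operator++() { ++value; return *this; }
    IntRangeIter operator++(int) { IntRangeIter tmp = *this; ++value; return tmp; }
    bool operator==(const IntRangeIter& o) const { return value == o.value; }
    bool operator!=(const IntRangeIter& o) const { return value != o.value; }
};

// Because the values arrive already sorted, the range constructor is required
// to run in O(n) rather than O(n log n); the increment of j now happens inside
// the library's own insertion loop.
std::set<int> make_range_set(int A, int B) {
    return std::set<int>(IntRangeIter{A}, IntRangeIter{B + 1});
}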
In general though, if you need a set of a contiguous range of integers, you could see massive performance gains by implementing a specialized set class that can store and compare only the upper and lower bounds of each set.
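For example, a hypothetical sketch of such a class, storing nothing but the two bounds:

#include <cstddef>

// Hypothetical sketch: a "set" of the contiguous range [lo, hi] that stores
// only its bounds, so construction and membership tests are O(1).
struct IntervalSet {
    int lo, hi;  // inclusive bounds
    bool contains(int x) const { return lo <= x && x <= hi; }
    std::size_t size() const { return static_cast<std::size_t>(hi) - lo + 1; }
};
// Usage: IntervalSet s{A, B}; s.contains(x);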

O(1) is only true for the default constructor.
O(n) for the copy constructor and for sorted-sequence insertion using iterators.
O(log n!), i.e. O(n log n), for unsorted-sequence insertion using iterators.

Well, if you want to go completely outside the box, you could design a "lazy-loaded" array, custom to this task. Basically, upon access, if the value had not been previously set, it would determine the correct value.
This would allow the setup to be O(1) (assuming initializing the "not previously set" flags is itself O(1)), but it wouldn't speed up the overall operation -- it would just scatter that time over the rest of the run (it would probably take longer overall).
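A rough sketch of what that could look like (purely illustrative; note that default-constructing the flag vector below is itself O(n), so this version only defers the per-element work rather than achieving a true O(1) setup):

#include <cstddef>
#include <optional>
#include <vector>

// Illustrative lazy array over [A, B]: nothing is written up front, and a
// slot's value is computed (A + index) the first time it is read.
class LazyRange {
    int A_;
    std::vector<std::optional<int>> slots_;  // "not previously set" flags + values
public:
    LazyRange(int A, int B) : A_(A), slots_(static_cast<std::size_t>(B - A + 1)) {}
    int get(std::size_t i) {
        if (!slots_[i]) slots_[i] = A_ + static_cast<int>(i);  // fill on first access
        return *slots_[i];
    }
};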

Related

Remove last n elements of vector<int> in O(1) complexity C++?

I want to remove the last n elements of a vector of integers in constant time complexity O(1). This should be doable I think because only some arithmetic for the end pointer is needed. However, erase, resize, and remove all have O(n) complexity. What function is there to use?
The C++ standard (I believe) says that the cost of removing k items from the end of a std::vector must have time complexity O(k), but that's an upper bound on the amount of work done, not a lower bound.
The C++ compiler, when generating code for std::vector<int>::erase, can inspect the types and realize that there's no need to do any per-element work when removing ints; it can just adjust the logical size of the std::vector and call it a day. In fact, it looks like g++, with optimization turned on, does just that. The generated assembly here doesn't involve any loops and simply changes the size of the std::vector.
If you don't want to rely on the compiler to do this, another option would be to store your own separate size variable and then just "pretend" that you've removed items from the std::vector by never using them again. That takes time O(1).
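For example, a minimal sketch of that "own size variable" idea (names are illustrative):

#include <cstddef>
#include <vector>

// Illustrative sketch: keep your own logical size and simply never look at the
// elements past it, instead of actually erasing them from the std::vector.
struct TruncatedVector {
    std::vector<int> data;
    std::size_t logical_size = 0;

    void drop_last(std::size_t n) {                 // O(1): just adjust the count
        logical_size -= (n < logical_size ? n : logical_size);
    }
    std::size_t size() const { return logical_size; }
    int& operator[](std::size_t i) { return data[i]; }
};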
Hope this helps!

STL priority_queue<pair> vs. map

I need a priority queue that will store a value for every key, not just the key. I think the viable options are std::multimap<K,V>, since it iterates in key order, or std::priority_queue<std::pair<K,V>>, since it sorts on K before V. Is there any reason I should prefer one over the other, other than personal preference? Are they really the same, or did I miss something?
A priority queue is heapified initially, in O(N) time, and then popping all the elements in decreasing order takes O(N log N) time. It is stored in a std::vector behind the scenes, so there's only a small constant factor on top of the big-O behavior. Part of that, though, is moving the elements around inside the vector; if sizeof(K) or sizeof(V) is large, it will be a bit slower.
std::map is a red-black tree (in universal practice), so it takes O(N log N) time to insert the elements, keeping them sorted after each insertion. They are stored as linked nodes, so each item incurs malloc and free overhead. Then it takes O(N) time to iterate over them and destroy the structure.
The priority queue overall should usually have better performance, but it's more constraining on your usage: the data items will move around during iteration, and you can only iterate once.
If you don't need to insert new items while iterating, you can use std::sort with a std::vector, of course. This should outperform the priority_queue by some constant factor.
As with most things in performance, the only way to judge for sure is to try it both ways (with real-world testcases) and measure.
By the way, to maximize performance, you can define a custom comparison function to ignore the V and compare only the K within the pair<K,V>.
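For example, a sketch of such a comparator (assuming K is int and V is std::string here, but any types work):

#include <queue>
#include <string>
#include <utility>
#include <vector>

// Illustrative comparator: order pair<K, V> entries by the key alone, so V
// never needs to be comparable and never participates in the ordering.
struct ByKeyOnly {
    template <class K, class V>
    bool operator()(const std::pair<K, V>& a, const std::pair<K, V>& b) const {
        return a.first < b.first;   // max-heap on K; use > instead for a min-heap
    }
};

using Entry = std::pair<int, std::string>;
std::priority_queue<Entry, std::vector<Entry>, ByKeyOnly> pq;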

Efficiently inserting values into a map. Better incrementing or decrementing keys?

I have a vector of pairs ordered by key in decrementing order.
I want to efficiently transform it to a map.
This is what I currently do:
int size = vect.size();
for (int i = 0; i < size; i++)
    map[vect[i].first] = vect[i].second;
Is there a point in traversing the vector backwards and inserting values with lowest keys first? I'm not sure how insert works internally and whether it even matters...
How about using the map's range constructor and just passing the vector into that instead of looping? That would recreate the map each time, vs. the map.clear() that I currently do between runs.
I read a few other SO answers about [key]=val being about the same as insert() but none deal with insertion order.
std::map is usually implemented as a red-black tree. Therefore, it doesn't really matter whether you increment or decrement the keys: each insertion still performs a search with O(log n) complexity plus rebalancing.
What you can do to speed up your insertion is use either insert or emplace_hint with "hint", which is an iterator used as a suggestion as to where to insert the new element.
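For the decrementing-order vector from the question, the right hint is the beginning of the map each time, since every new key is the smallest seen so far. A rough sketch (names illustrative):

#include <map>
#include <utility>
#include <vector>

// Sketch: vect is sorted by key in decreasing order, so every new key belongs
// right before m.begin(); passing begin() as the hint makes each insertion
// amortized constant instead of O(log n).
template <class V>
std::map<int, V> build_map(const std::vector<std::pair<int, V>>& vect) {
    std::map<int, V> m;
    for (const auto& p : vect)
        m.emplace_hint(m.begin(), p.first, p.second);
    return m;
}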
Constructing map with a range won't make a difference.
It is hard to recommend the best data structure for you without knowing details about the program and the data it handles. Generally, an RB-tree is the best you can get for the general case (and that's why it is the implementation of choice for std::map).
Hope it helps. Good Luck!
I decided this was interesting enough (an outright bug in the standard that lasted 13 years) to add as an answer.
Section 23.1.2 of the C++03 specification says, concerning the "hinted" version insert(p,t), that the complexity is:
logarithmic in general, but amortized constant if t is inserted right after p
What this means is that if you insert n elements in sorted order, providing the correct hint each time, then the total time will be O(n), not O(n log n). Even though some individual insertions will take logarithmic time, the average time per insertion will still be constant.
C++11 finally fixed the wording to read "right before p" instead of "right after p", which is almost certainly what was meant in the first place... And the corrected wording actually makes it possible to use the "hint" when inserting elements in either forward or reverse order (i.e. passing container.end() or container.begin() as the hint).
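So, for example, filling a set with already-sorted increasing values while hinting at end() costs amortized constant time per element; a minimal sketch:

#include <set>

// Sketch: values arrive in increasing order, so each one belongs right before
// s.end(); with the corrected wording, hinting end() makes every insertion
// amortized constant, O(n) total for n elements.
std::set<int> fill_sorted(int A, int B) {
    std::set<int> s;
    for (int j = A; j <= B; ++j)
        s.insert(s.end(), j);   // hinted insert
    return s;
}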

std::map get the lowest n elements time

As I read in the documentation, std::map is implemented with a binary search tree, and it keeps its elements sorted too.
I need to insert rapidly and retrieve rapidly elements. I also need to get the first lowest N elements from time to time.
I was thinking about using a std::map; is it a good choice? If it is, how long would it take to retrieve the lowest N elements? O(n*log n)?
Given you need both retrieval and the n smallest, I would say std::map is a reasonable choice. But depending on the exact access pattern, a sorted std::vector might be a good choice too.
I am not sure what you mean by retrieve. The time to read k elements is O(k) (provided you do it sequentially using an iterator); the time to remove them is O(k log n) (n is the total number of elements), even if you do it sequentially using iterators.
You can use iterators to rapidly read through the lowest N elements. Going from begin() to the (N-1)th element takes O(N) time, because getting the next element is amortized constant time for a std::map.
I'd note, however, that it is often actually faster to use a sorted std::vector with a binary-chop search to implement what it sounds like you are doing, so depending on your exact requirements this might be worth investigating.
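For example, a minimal sketch of pulling out the lowest N entries by walking from begin():

#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Sketch: copy out the N smallest key/value pairs; each ++it is amortized
// constant, so the walk is O(N) overall.
template <class K, class V>
std::vector<std::pair<K, V>> lowest_n(const std::map<K, V>& m, std::size_t n) {
    std::vector<std::pair<K, V>> out;
    auto it = m.begin();
    for (std::size_t i = 0; i < n && it != m.end(); ++i, ++it)
        out.push_back(*it);
    return out;
}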
The C++ standard requires that all required iterator operations (including iterator increment) be amortized constant time. Consequently, getting the first N items in a container must take amortized O(N) time.
I would say yes to both questions.

std::set<T>::insert, duplicate elements

What would be an efficient implementation for a std::set insert member function? Because the data structure sorts elements based on std::less (operator < needs to be defined for the element type), it is conceptually easy to detect a duplicate.
How does it actually work internally? Does it make use of the red-black tree data structure (an implementation detail mentioned in Josuttis's book)?
Implementations of the standard data structures may vary...
I have a problem where I am forced to have (generally speaking) sets of integers which should be unique. The length of the sets varies, so I need a dynamic data structure (with my limited knowledge, this narrows things down to list or set). The elements do not necessarily need to be sorted, but there may be no duplicates. Since the candidate sets always contain a lot of duplicates (the sets are small, up to 64 elements), will trying to insert duplicates into a std::set with the insert member function cause a lot of overhead compared to a std::list plus some other algorithm that doesn't keep the elements sorted?
Additional: the output set has a fixed size of 27 elements. Sorry, I forgot this... this works for a special case of the problem. For other cases, the length is arbitrary (lower than the input set).
If you're creating the entire set all at once, you could try using std::vector to hold the elements, std::sort to sort them, and std::unique to prune out the duplicates.
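For example, a minimal sketch of that sort/unique approach:

#include <algorithm>
#include <vector>

// Sketch: collect all candidates first, then sort and drop adjacent duplicates.
void dedupe(std::vector<int>& v) {
    std::sort(v.begin(), v.end());                      // O(n log n)
    v.erase(std::unique(v.begin(), v.end()), v.end());  // O(n)
}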
The complexity of std::set::insert is O(log n), or amortized O(1) if you use the "positional" insert and get the position correct (see e.g. http://cplusplus.com/reference/stl/set/insert/).
The underlying mechanism is implementation-dependent. It's often a red-black tree, but this is not mandated. You should look at the source code for your favourite implementation to find out what it's doing.
For small sets, it's possible that e.g. a simple linear search on a vector will be cheaper, due to spatial locality. But the insert itself will require all the following elements to be copied. The only way to know for sure is to profile each option.
When you only have 64 possible values known ahead of time, just take a bit field and flip on the bits for the elements actually seen. That works in n+O(1) steps, and you can't get less than that.
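For example, a minimal sketch, assuming you can map each possible value to a bit position in [0, 64):

#include <cstdint>

// Sketch: map each of the (at most 64) possible values to a fixed bit
// position; "inserting" a duplicate just re-sets a bit that is already set.
struct BitSet64 {
    std::uint64_t bits = 0;
    void insert(int i)         { bits |= std::uint64_t{1} << i; }  // i in [0, 64)
    bool contains(int i) const { return (bits >> i) & 1u; }
};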
Inserting into a std::set of size m takes O(log(m)) time and comparisons, meaning that using an std::set for this purpose will cost O(n*log(n)) and I wouldn't be surprised if the constant were larger than for simply sorting the input (which requires additional space) and then discarding duplicates.
Doing the same thing with an std::list would take O(n^2) average time, because finding the insertion place in a list needs O(n).
Inserting one element at a time into a std::vector would also take O(n^2) average time – finding the insertion place is doable in O(log(m)), but elements need to be moved to make room. If the number of elements in the final result is much smaller than the input, that drops down to O(n*log(n)), with close to no space overhead.
If you have a C++11 compiler or use boost, you could also use a hash table. I'm not sure about the insertion characteristics, but if the number of elements in the result is small compared to the input size, you'd only need O(n) time – and unlike the bit field, you don't need to know the potential elements or the size of the result a priori (although knowing the size helps, since you can avoid rehashing).
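For example, a minimal sketch of the hash-table route with std::unordered_set (C++11):

#include <unordered_set>
#include <vector>

// Sketch: duplicate inserts are simply no-ops, O(1) on average per element;
// reserving up front avoids rehashing if the result size is roughly known.
std::unordered_set<int> unique_of(const std::vector<int>& input) {
    std::unordered_set<int> s;
    s.reserve(input.size());
    for (int x : input) s.insert(x);
    return s;
}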