iterator++ complexity for stl map [closed] - c++

Closed as opinion-based 9 years ago.
What's the complexity of the iterator++ operation for an STL RB-tree (set or map)?
I always thought they would maintain extra links so the answer would be O(1), but I recently read the VC10 implementation and was shocked to find that they do not.
To find the next element in an ordered RB-tree, it takes time to search for the smallest element in the right subtree, or, if the node has no right child, to climb to the first ancestor whose left subtree contains the node. This introduces an iterative walk, and I believe the ++ operator takes O(lg n) time in the worst case.
Am I right? And is this the case for all stl implementations or just visual C++?
Is it really difficult to maintain indices for an RB-Tree? As long as I see, by holding two extra pointers in the node structure we can maintain a doubly linked list as long as the RB-Tree. Why don't they do that?

The amortized complexity when incrementing the iterator over the whole container is O(1) per increment, which is all that's required by the standard. You're right that a single increment takes O(log n) in the worst case, since the depth of the tree is of that order.
It seems likely to me that other RB-tree implementations of map will be similar. As you've said, the worst-case complexity for operator++ could be improved, but the cost isn't trivial.
It's quite possible that the total time to iterate the whole container would be improved by the linked list, but it's not certain, since bigger node structures tend to result in more cache misses.
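The successor walk described above can be sketched on a plain parent-linked BST node. This is a hypothetical layout for illustration; real implementations such as MSVC's add a header sentinel and a color bit, but the stepping logic is the same:

```cpp
#include <cassert>

// Hypothetical parent-linked tree node; an RB-tree node would also
// carry a color bit, which doesn't matter for iteration.
struct Node {
    int key;
    Node* parent;
    Node* left;
    Node* right;
};

// One iterator increment: O(height) in the worst case, but each edge
// of the tree is crossed at most twice over a full traversal, so
// iterating the whole container is O(n), i.e. amortized O(1) per step.
Node* successor(Node* x) {
    if (x->right) {                 // smallest key in the right subtree
        Node* y = x->right;
        while (y->left) y = y->left;
        return y;
    }
    Node* y = x->parent;            // climb until we come up from a left child
    while (y && x == y->right) {
        x = y;
        y = y->parent;
    }
    return y;                       // nullptr: x was the maximum
}
```

Note that no per-node list pointers are needed for this; the parent pointer alone is enough to get the amortized O(1) traversal.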

Related

C++ binary search tree implementation, dynamic array or structs/class? [closed]

Closed as needing more focus 5 years ago.
I'm tasked with implementing a binary search tree and was going the usual struct route:
struct Node {
    int value;
    Node* left;
    Node* right;
    Node( int val ) {
        ...
    }
};
Then I thought about implementing it using a dynamic array, with index arithmetic to find the left and right children. My question is: will an array implementation change the time and space complexity of the operations (insert, delete, in-order walk, et al.) for better or worse?
I can see how the delete operation might be an issue (reorganizing the array while keeping the tree's structure), but the tree is small, a hundred nodes max.
Will the time and space complexity of the operations (insert, delete, inorder walk, et al.) change?
Inserting into or removing from a non-leaf position in an array-based tree requires moving every element that comes after it in the array. This changes the complexity of those operations from O(log n) to O(n).
will an array implementation be a better use of memory than using structs?
Yes, without a doubt. Array-based trees are friendlier to the cache, need fewer allocations, and don't have to store pointers in every node.
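The "arithmetic to figure out the left and right nodes" is the usual implicit layout for a complete binary tree in a 0-based array (the same layout std::make_heap assumes):

```cpp
#include <cassert>
#include <cstddef>

// Implicit tree layout in a flat 0-based array: the children of the
// node at index i live at 2i+1 and 2i+2, and its parent at (i-1)/2.
std::size_t left_child(std::size_t i)  { return 2 * i + 1; }
std::size_t right_child(std::size_t i) { return 2 * i + 2; }
std::size_t parent_of(std::size_t i)   { return (i - 1) / 2; }
```

The catch is that a BST built by ordinary insertions is generally not complete, so the array either keeps holes for absent nodes (up to 2^height slots) or shifts elements around, which is where the O(n) insert/delete cost above comes from.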

What is the underlying structure of an std::map? [closed]

Closed as needing more focus 6 years ago.
Somebody told me yesterday that the underlying structure of an ordered map is a binary search tree. This does not make sense to me since you cannot have O(1) retrieval if that were the case. Can anyone explain?
Also, if one were to implement a hash table in C++ without using the stdlib, what would be the best way to do so?
std::map lookup time is not O(1); it's O(log n).
std::unordered_map has a lookup time of O(1) on average.
std::unordered_map and std::unordered_set are hashtables.
The underlying data structure is implementation-defined. It is most commonly a Red-Black tree, which is a self-balancing binary search tree. The time complexity for getting an element is O(log n).
I would just read an implementation of std::unordered_map as a starting point. I assume this is a learning activity, so reading and understanding a working STL implementation would be a good exercise. If it's not an exercise, then just use std::unordered_map.
std::map uses a Red-Black tree because it gives a reasonable trade-off between the complexity of node insertion/deletion and searching.
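As for the second half of the question, implementing a hash table by hand, a toy separate-chaining table might look like this (fixed bucket count, int keys only; all names here are made up for illustration). A real table would also grow and rehash to keep its load factor, and therefore its average chain length, bounded, which is what makes lookup O(1) on average:

```cpp
#include <cassert>
#include <cstddef>

// Toy separate-chaining hash set of ints: an array of singly linked
// bucket chains. Illustrative sketch only, not production code.
class IntSet {
    static constexpr std::size_t kBuckets = 16;
    struct Entry { int key; Entry* next; };
    Entry* buckets_[kBuckets] = {};

    std::size_t bucket(int key) const {
        return static_cast<std::size_t>(key) % kBuckets;
    }
public:
    ~IntSet() {
        for (Entry* e : buckets_)
            while (e) { Entry* n = e->next; delete e; e = n; }
    }
    void insert(int key) {
        if (contains(key)) return;
        std::size_t b = bucket(key);
        buckets_[b] = new Entry{key, buckets_[b]};  // push onto the chain
    }
    bool contains(int key) const {
        for (const Entry* e = buckets_[bucket(key)]; e; e = e->next)
            if (e->key == key) return true;
        return false;
    }
};
```

Keys 5 and 21 hash to the same bucket here (21 % 16 == 5), so it is the chain, not the table, that resolves the collision.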

Container with fast inserts and index? [closed]

Closed as off-topic (seeking recommendations) 8 years ago.
I am looking for a C++ container class that is indexed like an std::vector, but has fast insertions, deletions and indexing. For example, a vector interface implemented on top of a balanced tree would have O(log N) insertions/deletions and O(log N) indexing as well.
To be clear: I am not looking for std::map<int, T>. Inserting an element at index N should increment indices of all subsequent elements in the array which would not be the case with std::map<int, T>.
I have found AVL Array which does exactly what I am looking for. It has the right interface, but I would like to see if there are other options.
Do you know any other (production-quality) implementations? Maybe something more popular (does boost have something of the sort?). Something with a smaller memory footprint? (A node holding a pointer in the AVL Array is 64 bytes on my machine.)
Have you thought about using skip lists yet? Basically a skip list is a linked list with multiple levels of shortcuts added on top, organised like a tree. Inserting means no shuffling of nodes, just a few pointer updates, and the shortcuts let you search the list much faster. One of my favorites.
http://openmymind.net/Building-A-Skiplist/
http://en.wikipedia.org/wiki/Skip_list
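A toy version of the structure described in those links, with a fixed maximum level, coin-flip promotion, and no delete, just to show the "levels of shortcuts" idea; this is a sketch under those simplifying assumptions, not production code:

```cpp
#include <cassert>
#include <cstdlib>

// Minimal skip list over ints: a sorted linked list plus random
// "express lanes" above it.
class SkipList {
    static constexpr int kMaxLevel = 8;
    struct Node {
        int key;
        Node* next[kMaxLevel];   // next[i] = successor at level i
    };
    Node head_{};                // sentinel; its key is unused

    static int random_level() {  // promote with probability 1/2 per level
        int lvl = 1;
        while (lvl < kMaxLevel && (std::rand() & 1)) ++lvl;
        return lvl;
    }
public:
    ~SkipList() {
        Node* n = head_.next[0];
        while (n) { Node* t = n->next[0]; delete n; n = t; }
    }
    void insert(int key) {
        Node* update[kMaxLevel];         // last node before `key` per level
        Node* x = &head_;
        for (int i = kMaxLevel - 1; i >= 0; --i) {
            while (x->next[i] && x->next[i]->key < key) x = x->next[i];
            update[i] = x;
        }
        int lvl = random_level();
        Node* n = new Node{key, {}};
        for (int i = 0; i < lvl; ++i) {  // splice in, no node shuffling
            n->next[i] = update[i]->next[i];
            update[i]->next[i] = n;
        }
    }
    bool contains(int key) const {
        const Node* x = &head_;
        for (int i = kMaxLevel - 1; i >= 0; --i)
            while (x->next[i] && x->next[i]->key < key) x = x->next[i];
        x = x->next[0];
        return x && x->key == key;
    }
};
```

Search descends from the sparsest level to level 0, which is what gives the expected O(log n) lookup; note, though, that a plain skip list still doesn't give positional indexing unless each link also stores its width.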

Priority queues O(1) insert, delete-min and decrease-key? [closed]

Closed as off-topic 9 years ago.
I saw this "Hopeless challenge" about priority queues in an algorithms book :
" O(1) insert, delete-min and decrease-key. Why is it impossible?"
Is it because the only way is to implement it with some sort of heap, and heaps always take log n time for delete-min (even if amortized)?
I assume the well-known fact that comparison-based sorting of n elements requires Ω(n log n) time, and I assume that delete-min actually finds the minimum (and could, for instance, return it).
Toward a contradiction, suppose we had a data structure such as the one you describe. Then, to sort n integers, we first insert all of them into the data structure, taking time O(n). We then repeatedly delete-min until the structure is empty. This gives us the sorted integers in O(n) total time, contradicting the sorting lower bound. Therefore, such a data structure cannot exist.
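The reduction above can be run with std::priority_queue (a binary heap, so push and pop are O(log n)); with the hypothetical O(1) structure in its place, this same loop would be an O(n) comparison sort, which is impossible in general:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Any priority queue sorts: n inserts followed by n delete-mins
// emit the keys in ascending order.
std::vector<int> pq_sort(std::vector<int> keys) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;  // min-heap
    for (int k : keys) pq.push(k);   // n inserts
    std::vector<int> out;
    while (!pq.empty()) {            // n delete-mins
        out.push_back(pq.top());
        pq.pop();
    }
    return out;
}
```

This is why the lower bound on sorting transfers to priority queues: the total cost of the n inserts plus n delete-mins must be Ω(n log n) in the comparison model.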

Datastructure for O(logn) deletion and index access in O(logn) [closed]

Closed as needing more focus 9 years ago.
Please recommend a data structure with O(log n) deletion that can also give me the index of an element in O(1) or O(log n).
Most self-balancing ordered binary trees can be modified to keep the number of children in each node, and maintaining that count in lg(n) time per operation is pretty easy.
They clearly modify fewer than lg(n) nodes per operation, and in my experience the nodes they modify are often "vertically related". It isn't free, but it tends not to be expensive. 1
Once you have that data in the nodes of the tree, finding the nth element is easy: if n is no bigger than the number of nodes in the left subtree, recurse into the left subtree with n unchanged; if it is exactly one more, you've found the element; otherwise subtract the left subtree's count plus one from n and recurse into the right subtree.
This would also work for non-binary self balancing trees, such as B-trees.
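The selection step just described can be sketched like this, using 0-based ranks and assuming each node's subtree size is kept up to date by the rebalancing code:

```cpp
#include <cassert>
#include <cstddef>

// BST node augmented with its subtree size (an "order-statistic tree").
struct Node {
    int key;
    std::size_t size;   // nodes in this subtree, including this one
    Node* left;
    Node* right;
};

std::size_t count(const Node* n) { return n ? n->size : 0; }

// 0-based n-th smallest key: compare n with the left subtree's size,
// exactly as described above. O(height) = O(log n) for a balanced tree.
int nth(const Node* t, std::size_t n) {
    std::size_t l = count(t->left);
    if (n < l) return nth(t->left, n);
    if (n == l) return t->key;
    return nth(t->right, n - l - 1);
}
```

The reverse query (index of a given element) is the same walk run from the node up to the root, summing the sizes of the left subtrees it passes.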
As far as I know, no std container supports random logarithmic delete, insert and index operations. I looked for one a while back. I also did a quick check of boost, even looking at the multi-index containers, and couldn't figure out a way to get it to work.
Footnotes:
1 When you modify a tree where you want the cost of getting the number of children of a node to be O(1) at the node, you have to modify nodes from your change all the way to the root. There are at most lg(n) of them per modified node. If the nodes are, however, "vertically related" to each other, the nodes you need to fix will be almost all the same on each node change.
On the other hand, if your tree-rebalancing algorithm somehow managed to modify lg(n) utterly unrelated nodes, the cost could be as high as lg(n)*lg(n) to maintain the counts.