I'm tasked with implementing a binary search tree and was going the usual struct route:
struct Node {
    int value;
    Node* left;
    Node* right;

    Node( int val ) {
        // ...
    }
};
Then I thought about implementing it with a dynamic array instead, using arithmetic to figure out the left and right children. My question is: will an array implementation change the time and space complexity of the operations (insert, delete, inorder walk, et al.) for better or worse?
I can see how the delete operation might be an issue, since I'd have to reorganize the array to keep the tree's structure, but the tree is small: a hundred nodes at most.
Will the time and space complexity of the operations (insert, delete, inorder walk, et al.) change?
Inserting into or removing from a non-leaf position in an array-based tree requires moving every element that comes after it in the array. That changes the per-operation complexity from O(log n) to O(n).
will an array implementation be a better use of memory than using structs?
Yes, without a doubt. Array-based trees are friendlier to the cache and require fewer allocations, and there's no need to store two pointers per node.
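For illustration, here is a minimal sketch of the index arithmetic such an array layout typically uses; the struct name, the std::optional slots, and the growth policy are assumptions for the example, not something from the question:

#include <cstddef>
#include <optional>
#include <vector>

// Sketch: the tree lives in a vector, and the children of the node at index i
// live at 2*i + 1 and 2*i + 2. Empty slots are nullopt. No balancing and no
// deletion; this only illustrates the index arithmetic.
struct ArrayBST {
    std::vector<std::optional<int>> slots{std::nullopt};

    static std::size_t left(std::size_t i)  { return 2 * i + 1; }
    static std::size_t right(std::size_t i) { return 2 * i + 2; }

    void insert(int value) {
        std::size_t i = 0;
        while (i < slots.size() && slots[i].has_value())
            i = (value < *slots[i]) ? left(i) : right(i);
        if (i >= slots.size())
            slots.resize(i + 1);        // grow the array far enough to reach the new slot
        slots[i] = value;
    }

    bool contains(int value) const {
        std::size_t i = 0;
        while (i < slots.size() && slots[i].has_value()) {
            if (*slots[i] == value) return true;
            i = (value < *slots[i]) ? left(i) : right(i);
        }
        return false;
    }
};

One caveat worth noting: with this indexing a node at depth d lands near index 2^d, so a badly unbalanced tree makes the array grow exponentially even though it holds few values, which is part of why the memory win isn't automatic.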
I am looking for an efficient C++ implementation of a min-heap augmented with a hash table.
There is a counterpart in Python called pqdict (Priority Queue Dictionary): https://pqdict.readthedocs.io/
To be more specific, I want to use this data structure as the open list for an efficient A* search implementation.
I hope one already exists so I do not need to re-implement it.
I assume you want this kind of data structure to support the decrease_key operation...
When I implement A* or Dijkstra's algorithm, I just don't do it that way.
In C++, I would:
- Put (node*, priority) records in a std::priority_queue, and also store the current priority in each node.
- When decreasing the priority of a node, just insert another record into the priority queue and leave the old one where it is.
- When popping a record off the priority queue, check whether its priority is still accurate. If it isn't, discard it and pop again.
- Keep track of the number of invalid records in the priority queue. When/if the number of invalid records grows to half the size of the priority queue, clear and rebuild the priority queue with only the valid records.
This sort of system is easy to implement, doesn't affect the complexity of Dijkstra's algorithm or A*, and uses less memory than most of the kinds of data structure you're asking for.
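A rough sketch of that scheme, assuming nodes are identified by dense integer ids and each id has a current best priority; the OpenList name and fields are mine, and the periodic rebuild from the last point is omitted to keep it short:

#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Lazy decrease-key: the heap may hold stale (priority, id) records, and a
// record is simply discarded when it surfaces if it no longer matches the
// node's best known priority.
using Entry = std::pair<double, int>;                              // (priority, node id)
using MinHeap = std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>>;

struct OpenList {
    MinHeap heap;
    std::vector<double> best;                                      // best[id] = current priority of node id

    explicit OpenList(std::size_t node_count)
        : best(node_count, std::numeric_limits<double>::infinity()) {}

    void push_or_decrease(int id, double priority) {
        if (priority < best[id]) {
            best[id] = priority;
            heap.emplace(priority, id);                            // the old record, if any, stays behind
        }
    }

    int pop_min() {                                                // returns -1 when exhausted
        while (!heap.empty()) {
            auto [priority, id] = heap.top();
            heap.pop();
            if (priority == best[id]) return id;                   // record is still accurate
            // otherwise it was superseded by a later decrease; skip it
        }
        return -1;
    }
};

The rebuild step from the answer would just drain the heap and re-push only the records whose priority still equals best[id], once the invalid records make up about half the queue.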
Somebody told me yesterday that the underlying structure of an ordered map is a binary search tree. This does not make sense to me since you cannot have O(1) retrieval if that were the case. Can anyone explain?
Also, if one were to implement a hash table in C++ without using the stdlib, what would be the best way to do so?
std::map lookup time is not O(1); it's O(log n).
std::unordered_map has an average lookup time of O(1).
std::unordered_map and std::unordered_set are hash tables.
The underlying data structure of std::map is implementation-defined. It is most commonly a red-black tree, which is a self-balancing binary search tree. The time complexity for getting an element is O(log n).
I would just read an implementation of std::unordered_map as a starting point. I assume this is a learning activity, so reading and understanding a working STL implementation would be a good exercise. If it's not an exercise, then just use std::unordered_map.
std::map typically uses a red-black tree because it gives a reasonable trade-off between the cost of node insertion/deletion and the cost of searching.
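If you do want to write one yourself for learning purposes (reading "without the stdlib" as "without std::unordered_map"), a minimal separate-chaining table might look roughly like this; the fixed bucket count and the lack of rehashing are deliberate simplifications:

#include <cstddef>
#include <list>
#include <string>
#include <utility>
#include <vector>

// Minimal separate-chaining hash map from string to int.
// The bucket count is fixed and there is no rehashing, so performance
// degrades once the table holds many more entries than buckets.
class SimpleHashMap {
    static constexpr std::size_t kBuckets = 101;
    std::vector<std::list<std::pair<std::string, int>>> buckets;

    static std::size_t bucket_of(const std::string& key) {
        unsigned long long h = 1469598103934665603ull;              // FNV-1a offset basis
        for (unsigned char c : key) { h ^= c; h *= 1099511628211ull; }
        return static_cast<std::size_t>(h % kBuckets);
    }

public:
    SimpleHashMap() : buckets(kBuckets) {}

    void insert(const std::string& key, int value) {
        auto& chain = buckets[bucket_of(key)];
        for (auto& kv : chain)
            if (kv.first == key) { kv.second = value; return; }     // overwrite existing key
        chain.emplace_back(key, value);
    }

    int* find(const std::string& key) {
        for (auto& kv : buckets[bucket_of(key)])
            if (kv.first == key) return &kv.second;
        return nullptr;                                              // not present
    }
};

A real table would also track its load factor and rehash into more buckets as it grows, which is the part std::unordered_map handles for you.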
What's the complexity of the iterator++ operation for an STL RB-tree (set or map)?
I always thought they would use indices, so the answer should be O(1), but recently I read the VC10 implementation and was shocked to find that they don't.
To find the next element in an ordered RB-tree, it takes time to search for the smallest element in the right subtree, or, if the node has no right subtree, to walk up to the nearest ancestor whose left subtree contains the node. This is a recursive process, and I believe the ++ operator takes O(lg n) time.
Am I right? And is this the case for all STL implementations, or just Visual C++?
Is it really that difficult to maintain indices for an RB-tree? As far as I can see, by holding two extra pointers in the node structure we can maintain a doubly linked list alongside the RB-tree. Why don't they do that?
The amortized complexity when incrementing the iterator across the whole container is O(1) per increment, which is all the standard requires. You're right that a single increment can take O(log n), since the depth of the tree is of that order.
It seems likely to me that other RB-tree implementations of map will be similar. As you've said, the worst-case complexity for operator++ could be improved, but the cost isn't trivial.
It's quite possible that the total time to iterate the whole container would be improved by the linked list, but it's not certain, since bigger node structures tend to result in more cache misses.
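To make the cost of a single step concrete, here is roughly what an in-order successor looks like in a pointer-based tree with parent pointers; the TreeNode layout is an assumption, and real std::map nodes differ in their details:

struct TreeNode {
    int key;
    TreeNode* left;
    TreeNode* right;
    TreeNode* parent;
};

// In-order successor: essentially what operator++ does in a pointer-based
// red-black tree iterator.
TreeNode* successor(TreeNode* x) {
    if (x->right) {                          // smallest node of the right subtree
        x = x->right;
        while (x->left) x = x->left;
        return x;
    }
    TreeNode* p = x->parent;                 // otherwise climb until we leave a left subtree
    while (p && x == p->right) {
        x = p;
        p = p->parent;
    }
    return p;                                // null when x was the last element
}

Both loops are bounded by the height of the tree, so a single step is O(log n), but a full traversal crosses each parent-child link at most twice, which is where the O(1) amortized cost per increment comes from.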
I saw this "hopeless challenge" about priority queues in an algorithms book:
"O(1) insert, delete-min and decrease-key. Why is it impossible?"
Is it because the only way to implement it is with some sort of heap, and heaps always take log n time for delete-min (even amortized)?
I assume the well-known fact that comparison-based sorting of n integers requires Ω(n log n) time, and I assume that delete-min actually finds the minimum (and could, for instance, return it).
Toward a contradiction, suppose we had a data structure such as the one you describe. Then, to sort n integers, we would first insert all of them, taking O(n) time, and then repeatedly delete-min until the structure is empty, which also takes O(n) time and yields the integers in sorted order. That would sort n integers in O(n) time, contradicting the Ω(n log n) lower bound, so such a data structure cannot exist.
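The shape of that reduction, written out with std::priority_queue standing in for the hypothetical structure (the function name is mine):

#include <functional>
#include <queue>
#include <vector>

// The reduction: n inserts followed by n delete-mins yields a sorted sequence.
// With a real heap this runs in O(n log n); if every operation were O(1), the
// whole function would run in O(n), beating the comparison-sorting lower bound.
std::vector<int> sort_with_pq(const std::vector<int>& input) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;   // min-heap stand-in
    for (int x : input) pq.push(x);          // n inserts
    std::vector<int> out;
    out.reserve(input.size());
    while (!pq.empty()) {                    // n delete-mins
        out.push_back(pq.top());
        pq.pop();
    }
    return out;
}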
Please recommend a data structure with O(log n) deletion where I can also find the index (rank) of an element in O(1) or O(log n).
Most self-balancing ordered binary trees can be modified to store, in each node, the number of nodes in its subtree, and maintaining that count in lg(n) time per operation is pretty easy.
They modify at most O(lg(n)) nodes per operation, and in my experience the nodes they modify are often "vertically related". It isn't free, but it tends not to be expensive.[1]
Once you have that data in the nodes of the tree, finding the nth element is easy: if n is larger than the number of nodes in the left subtree plus one, subtract that amount from n and recurse into the right subtree; if it is exactly one more than the left subtree's size, the current node is the answer; otherwise recurse into the left subtree with n unchanged.
This would also work for non-binary self-balancing trees, such as B-trees.
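A sketch of that selection step on a node augmented with a subtree size; the field names and the 1-based rank convention are assumptions:

#include <cstddef>

struct OSNode {
    int key;
    std::size_t subtree_size;   // number of nodes in the subtree rooted here
    OSNode* left;
    OSNode* right;
};

std::size_t size_of(const OSNode* n) { return n ? n->subtree_size : 0; }

// Return the n-th smallest key (1-based), assuming 1 <= n <= size_of(root).
int select_nth(const OSNode* root, std::size_t n) {
    std::size_t left_size = size_of(root->left);
    if (n <= left_size)
        return select_nth(root->left, n);               // answer is in the left subtree
    if (n == left_size + 1)
        return root->key;                               // this node is the n-th element
    return select_nth(root->right, n - left_size - 1);  // skip the left subtree and this node
}

The insert/delete side is the part that needs care: every rebalancing step has to keep subtree_size consistent, which is the lg(n)-per-operation maintenance the footnote below talks about.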
As far as I know, no standard container supports logarithmic delete, insert and index-by-rank operations all at once. I looked for one a while back. I also did a quick check of Boost, even looking at the multi-index containers, and couldn't figure out a way to get it to work.
Footnotes:
[1] When you modify a tree in which you want each node's subtree count to be readable in O(1), you have to update the counts from the changed node all the way up to the root, and there are at most lg(n) of those per modified node. If the modified nodes are "vertically related" to each other, however, the nodes whose counts need fixing are almost all shared between the individual changes.
On the other hand, if the rebalancing algorithm somehow managed to modify lg(n) utterly unrelated nodes, the cost of maintaining the counts could be as high as lg(n)*lg(n).