I saw this "Hopeless challenge" about priority queues in an algorithms book:
"O(1) insert, delete-min and decrease-key. Why is it impossible?"
Is it because the only way is to implement it with some sort of heap, and heaps always take O(log n) time to delete-min (even amortized)?
I assume the well-known fact that comparison-based sorting of n integers requires Ω(n log n) time, and I assume that delete-min actually finds the minimum (and could, for instance, return it).
Toward a contradiction, suppose we had a data structure with the operations you describe. Then, to sort n integers, we first insert all of them into the structure, taking O(n) time, and then repeatedly delete-min until the structure is empty, taking another O(n) time. This produces the integers in sorted order in O(n) total time, contradicting the Ω(n log n) lower bound for comparison sorting. Therefore, such a data structure cannot exist.
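To make the reduction concrete, here is a minimal sketch using std::priority_queue (a binary heap, so its real operations are O(log n); the point is only the shape of the reduction: n inserts followed by n delete-mins yield a sorted sequence, so if every operation were O(1), the whole sort would run in O(n)):

```cpp
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::vector<int> values = {5, 1, 4, 2, 3};

    // min-heap: delete-min is pop() on a std::greater<> priority_queue
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;
    for (int v : values) pq.push(v);  // n inserts

    while (!pq.empty()) {             // n delete-mins
        std::cout << pq.top() << ' ';
        pq.pop();
    }
    std::cout << '\n';                // prints: 1 2 3 4 5
}
```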
After learning bubble sort, I went on to learn other sorting algorithms.
Then I wondered: since there are better sorting algorithms, do we still need bubble sort?
In the worst case its time complexity is O(n²), and for the naive version even the best case is O(n²).
Is there any reason why we need bubble sort at all?
Bubble sort is easy to understand and implement. It is not very efficient, but it can be useful in certain situations, such as when the input is very small or the data is nearly sorted: with an early-exit check (stop after a pass that makes no swaps), it finishes already-sorted input in O(n), as the sketch below shows.
Bubble sort can also be adapted easily to data stored in other structures, such as linked lists, because it only compares and swaps adjacent elements.
Bubble sort is a stable sorting algorithm, which means that it preserves the relative order of items with equal keys.
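A minimal sketch of bubble sort with the early-exit optimization (the function name and test data are just for illustration):

```cpp
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

// Bubble sort with early exit: if a full pass performs no swaps,
// the range is already sorted, so nearly-sorted input finishes in
// O(n) instead of O(n^2). Equal elements are never swapped past
// each other, so the sort is stable.
void bubble_sort(std::vector<int>& a) {
    for (std::size_t n = a.size(); n > 1; --n) {
        bool swapped = false;
        for (std::size_t i = 0; i + 1 < n; ++i) {
            if (a[i] > a[i + 1]) {
                std::swap(a[i], a[i + 1]);
                swapped = true;
            }
        }
        if (!swapped) break;  // no swaps: already sorted, stop early
    }
}

int main() {
    std::vector<int> v = {3, 1, 2, 2, 5};
    bubble_sort(v);
    for (int x : v) std::cout << x << ' ';  // prints: 1 2 2 3 5
}
```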
So we know that the basic structures which form the backbone of C++ algorithms are:
tree, set, queue, linked list, array, vector, map, unordered_map, and pair.
My question is: which data structure is suitable for which application? For instance, I know that for database indexing and searching the preferred choices are B+ trees and hash tables. Can anyone shed some more light on this?
This is not only a C++ problem but also a general algorithms question. It may be too broad, but I can give you some advice; a short sketch of the most common choices follows the list.
set and map: ordered containers, suited to workloads that mix many inserts and reads. Insert, delete, and lookup all finish in O(log n) time.
vector: a dynamic array, used when you mostly push_back at the end; if you have no reason to use something else, use vector.
deque: much like vector, but it can also push_front in O(1) time.
list: used when you need frequent insertion and deletion but little random access.
unordered_map and unordered_set: hash tables, with O(1) average-time lookup.
array: used for a structure whose size is fixed.
pair and tuple: bind several objects into one value; nothing special.
Besides all of this, there are also containers and utility types that meet other requirements, which you can look up, e.g. std::any and std::optional.
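Here is a minimal sketch of the three workhorse containers in action (the keys and values are just placeholders):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    std::vector<int> seq;       // dynamic array; push_back is amortized O(1)
    seq.push_back(42);

    std::map<std::string, int> ordered;  // red-black tree; O(log n) ops, sorted iteration
    ordered["apple"] = 1;

    std::unordered_map<std::string, int> hashed;  // hash table; O(1) average lookup
    hashed["banana"] = 2;

    std::cout << seq[0] << ' ' << ordered["apple"] << ' ' << hashed["banana"] << '\n';
}
```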
Somebody told me yesterday that the underlying structure of an ordered map is a binary search tree. That does not make sense to me, since you couldn't have O(1) retrieval if that were the case. Can anyone explain?
Also, if one were to implement a hash table in C++ without using the stdlib, what would be the best way to do so?
std::map lookup time is not O(1); it's O(log n).
std::unordered_map has an average lookup time of O(1).
std::unordered_map and std::unordered_set are hash tables.
The underlying data structure of std::map is implementation-defined, but it is most commonly a red-black tree, which is a self-balancing binary search tree. The time complexity for finding an element is O(log n).
I would just read an implementation of std::unordered_map as a starting point. I assume this is a learning activity, so reading and understanding a working standard-library implementation would be a good exercise; a bare-bones sketch of the core idea follows below. If it's not an exercise, then use std::unordered_map.
std::map uses a red-black tree because it gives a reasonable trade-off between the complexity of node insertion/deletion and searching.
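For the second part of the question, here is a bare-bones separate-chaining hash table, a sketch only (the class and method names are made up for illustration; real implementations add iterators, erase, and load-factor tuning):

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <list>
#include <string>
#include <utility>
#include <vector>

// Separate chaining: an array of buckets, each a list of key/value
// pairs. Lookup hashes the key to pick a bucket, then scans it,
// giving O(1) average time while the load factor stays bounded.
template <typename K, typename V>
class HashTable {
    std::vector<std::list<std::pair<K, V>>> buckets_;
    std::size_t size_ = 0;

    std::size_t bucket_for(const K& key) const {
        return std::hash<K>{}(key) % buckets_.size();
    }

public:
    explicit HashTable(std::size_t bucket_count = 16) : buckets_(bucket_count) {}

    void insert(const K& key, const V& value) {
        if (size_ + 1 > buckets_.size()) rehash(buckets_.size() * 2);
        auto& bucket = buckets_[bucket_for(key)];
        for (auto& kv : bucket)
            if (kv.first == key) { kv.second = value; return; }  // overwrite
        bucket.emplace_back(key, value);
        ++size_;
    }

    V* find(const K& key) {
        for (auto& kv : buckets_[bucket_for(key)])
            if (kv.first == key) return &kv.second;
        return nullptr;  // not found
    }

    void rehash(std::size_t new_count) {
        std::vector<std::list<std::pair<K, V>>> fresh(new_count);
        for (auto& bucket : buckets_)
            for (auto& kv : bucket) {
                std::size_t idx = std::hash<K>{}(kv.first) % new_count;
                fresh[idx].push_back(std::move(kv));
            }
        buckets_ = std::move(fresh);
    }
};

int main() {
    HashTable<std::string, int> table;
    table.insert("alpha", 1);
    table.insert("beta", 2);
    if (int* v = table.find("beta")) std::cout << *v << '\n';  // prints 2
}
```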
A few days ago I wanted to use the C++ sort() function to sort an array of strings, and I ran into some questions.
What algorithm does it use to sort the array? Is it deterministic, or may it use different algorithms based on the type of the array?
Also, is there a clear time complexity analysis of it?
Does this function use the same algorithm for sorting an array of numbers and an array of strings?
It might or it might not; the standard does not specify the algorithm, only that std::sort performs O(n log n) comparisons (a worst-case guarantee since C++11), regardless of the element type. In practice, the major implementations use introsort, a hybrid of quicksort, heapsort, and insertion sort.
And if we use it to sort an array of strings whose total size is less than 100,000 characters, would it finish in less than 1 second in the worst case?
It might or it might not; it depends on the machine you're running the program on. Even if it did run in less than 1 second in the worst case on a particular machine, that would be difficult to prove. But you can get a decent estimate by measuring. A measurement only applies to the machine it was performed on, of course.
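For example, a rough measurement sketch (the data shape, 10,000 strings of 10 characters each, is just one arbitrary way to total about 100,000 characters):

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <string>
#include <vector>

int main() {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> ch('a', 'z');

    // 10,000 strings of length 10: ~100,000 characters total
    std::vector<std::string> data(10000);
    for (auto& s : data) {
        s.resize(10);
        for (auto& c : s) c = static_cast<char>(ch(rng));
    }

    auto start = std::chrono::steady_clock::now();
    std::sort(data.begin(), data.end());
    auto stop = std::chrono::steady_clock::now();

    // wall-clock time on this machine only; not a worst-case proof
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}
```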
What's the complexity of the iterator++ operation for an STL RB-tree (set or map)?
I always thought they would use indices, so the answer should be O(1), but I recently read the VC10 implementation and was shocked to find that they do not.
To find the next element in an ordered RB-tree, it takes time to find the smallest element in the right subtree, or, if the node has no right child, to climb the parent pointers until reaching an ancestor whose left subtree contains the node. This is a walk up or down the tree, and I believe the ++ operator takes O(log n) time.
Am I right? And is this the case for all STL implementations, or just Visual C++?
Is it really difficult to maintain indices for an RB-tree? As far as I can see, by holding two extra pointers in the node structure we could maintain a doubly linked list alongside the RB-tree. Why don't they do that?
The amortized complexity when incrementing the iterator over the whole container is O(1) per increment, which is all that's required by the standard. You're right that a single increment can take O(log n) in the worst case, since the depth of the tree is in that complexity class.
It seems likely to me that other RB-tree implementations of map are similar. As you say, the worst-case complexity of operator++ could be improved with extra pointers, but the cost in node size isn't trivial.
It's quite possible that the total time to iterate the whole container would be improved by the linked list, but it's not certain, since bigger node structures tend to result in more cache misses.
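To illustrate what operator++ actually does, here is a sketch of the in-order successor step; the Node layout with parent pointers is an assumption typical of std::map nodes, not any particular library's code. Over a full traversal, each edge is walked at most twice, which is where the O(1) amortized bound comes from:

```cpp
#include <iostream>

// Illustrative node layout with a parent pointer, as tree-based
// map implementations commonly have.
struct Node {
    int key;
    Node* left = nullptr;
    Node* right = nullptr;
    Node* parent = nullptr;
};

// In-order successor: the step operator++ performs.
Node* successor(Node* x) {
    if (x->right) {                    // case 1: min of the right subtree
        x = x->right;
        while (x->left) x = x->left;
        return x;
    }
    Node* p = x->parent;               // case 2: climb until we leave a left subtree
    while (p && x == p->right) { x = p; p = p->parent; }
    return p;                          // nullptr means x was the maximum
}

int main() {
    // Tiny tree:   b(2)
    //             /    \
    //           a(1)   c(3)
    Node a{1}, b{2}, c{3};
    b.left = &a; b.right = &c;
    a.parent = &b; c.parent = &b;

    for (Node* n = &a; n; n = successor(n))
        std::cout << n->key << ' ';    // prints: 1 2 3
}
```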