If there is a list of 1000 elements, what is the worst-case number of comparisons needed to search for an element?
O(1)
O(n)
1000
I came across this question in a placement session, worded exactly as I have written it above. I know it looks like a very trivial question, but I have a doubt:
Is this a question about the number of comparisons (because 1000 is also given
in the options) or about the time complexity of searching for an element?
According to me, the answer is O(1), because it takes constant time to search an element in a constant-size list.
Thanks in advance.
Is it possible to perform binary search on a doubly-linked list in Θ(log n) time?
My answer is yes, because if the list is already somewhat ordered, it could be faster than just O(n).
In order to do a binary search on a doubly-linked list, you're going to have to first iterate to the halfway-point of the list, so that you can do your first recursion on the two halves of the list.
Iterating to the halfway-point of a linked list is already an O(n) operation, since the time it takes to iterate to the halfway-point will grow linearly as the list itself gets longer.
So you're already at O(n) time, even before you've done any actual searching. Hence, the answer is no.
As you have asked the question, the answer is no. You cannot get O(lg(n)) time for a linked list, since traversal is linear; it cannot be better than O(n) in general. Binary search would actually be worse than a linear scan in that case, since it must iterate back and forth multiple times to "jump" around. It would be better to do a single linear scan to find the element.
However, the C++ standard specifies that std::lower_bound algorithm (which does a binary search) has the following complexity:
[lower.bound]
Complexity: At most log2(last - first) + O(1) comparisons and projections.
That is, the standard is counting element comparisons, not time, if you are measuring time by the number of iterator advancements. std::lower_bound finds the proper place by calling std::advance() on an iterator many times; on a list each of those calls costs O(N) iterator advancements, whereas on a random-access container it is constant, and each call to advance has a corresponding call to the comparator.
That is why it is always so important to be clear about what big-O notation is measuring. Often the comparisons are a proxy for time, but not always!
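To make the distinction concrete, here is a small sketch of my own (not from the standard text) that runs std::lower_bound over a std::list with a counting comparator: the comparator fires only about log2(n) times, even though the iterator advancements underneath remain linear.

    #include <algorithm>
    #include <iostream>
    #include <list>

    int main() {
        std::list<int> xs{1, 3, 5, 7, 9, 11, 13, 15};

        long comparisons = 0;
        auto counting_less = [&comparisons](int a, int b) {
            ++comparisons;          // count comparator calls only
            return a < b;
        };

        // lower_bound works on the list's bidirectional iterators, but every
        // "jump" it makes is done by std::advance stepping one node at a time,
        // so node visits stay O(n) even though comparisons are O(log n).
        auto it = std::lower_bound(xs.begin(), xs.end(), 9, counting_less);

        std::cout << "found " << *it << " after "
                  << comparisons << " comparisons\n";   // roughly log2(8) = 3
    }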
We have 4,816,703 entries in this format:
1 abc
2 def
...
...
4816702 blah
4816703 blah_blah
Since the number of entries is quite big, I am worried that std::map would take a lot of time during insertion, since it needs to rebalance on every insertion as well.
Just inserting these entries into the map takes a lot of time. I am doing
map[first] = second;
Two questions:
1. Am I correct in using std::map for these kinds of cases?
2. Am I correct in inserting the above way, or should I use map.insert()?
I am sorry for not doing the experiments and writing down the absolute numbers, but we want a general consensus on whether we are doing the right thing or not.
Also, the keys are not always consecutive.
P.S. Of course, later we will also need to access that map to get the values corresponding to the keys.
If you don't need to insert into the map afterwards, you can build an unsorted vector of your data, sort it by key, and then search it using functions like std::equal_range.
It's the same asymptotic complexity as std::map, but with far fewer allocations.
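As a rough sketch of that approach (the sample data is just the question's entries, and I use std::lower_bound here, which behaves like std::equal_range for unique keys):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // Illustrative sample of the (key, value) entries from the question.
        std::vector<std::pair<int, std::string>> data = {
            {2, "def"}, {1, "abc"}, {4816703, "blah_blah"}, {4816702, "blah"}};

        // One sort after bulk loading: O(n log n) total and one contiguous
        // buffer, instead of a node allocation plus rebalancing per insert.
        std::sort(data.begin(), data.end(),
                  [](const auto& a, const auto& b) { return a.first < b.first; });

        // Look a key up afterwards with a binary search over the sorted vector.
        const int key = 4816702;
        auto it = std::lower_bound(
            data.begin(), data.end(), key,
            [](const auto& entry, int k) { return entry.first < k; });

        if (it != data.end() && it->first == key)
            std::cout << key << " -> " << it->second << '\n';
    }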
Use an std::unordered_map, which has much better insertion time complexity than std::map, as the reference mentions:
Complexity
Single element insertions:
Average case: constant.
Worst case: linear in container size.
Multiple elements insertion:
Average case: linear in the number of elements inserted.
Worst case: N*(size+1): number of elements inserted times the container size plus one.
May trigger a rehash (not included in the complexity above).
That's better than the logarithmic time complexity of std::map's insertion.
Note: std::map's insertion can enjoy "amortized constant if a hint is given and the position given is the optimal.". If that's the case for you, then use a map (if a vector is not applicable).
@n.m. provides a representative live demo.
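A minimal sketch of the unordered_map route, assuming the roughly 4.8 million keys from the question; the reserve() call is my own addition to avoid rehashing during the bulk load:

    #include <iostream>
    #include <string>
    #include <unordered_map>

    int main() {
        std::unordered_map<int, std::string> m;

        // Reserving buckets up front avoids repeated rehashing while the
        // ~4.8 million entries are loaded; each insert is then O(1) on average.
        m.reserve(4816703);

        // Same style as in the question; operator[] default-constructs the
        // value and then assigns, while emplace() builds it in place.
        m[1] = "abc";
        m[2] = "def";
        m.emplace(4816703, "blah_blah");

        std::cout << m.at(4816703) << '\n';
    }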
I'm going through an algorithm lecture and it is telling me that I can "look up the i-th element of a list in constant time".
Can someone explain why this is the case? Shouldn't the worst-case scenario be linear time, O(n)? What if the i-th element is outside of the list? Then it goes through all the elements of the list and realizes it is not there, hence O(n), where n is the size of the list.
Let me know your thoughts.
You certainly can't achieve lookup in constant time for a classical linked list (as you have to traverse the nodes somehow).
But you can achieve constant time access with a contiguous array.
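A small illustration of the difference (my own example, not from the lecture): indexing a contiguous std::vector is one address computation, while reaching the same position in a std::list means following the links node by node.

    #include <iostream>
    #include <iterator>
    #include <list>
    #include <vector>

    int main() {
        std::vector<int> v{10, 20, 30, 40, 50};
        std::list<int>   l{10, 20, 30, 40, 50};

        // Contiguous array: element i sits at a computable address, so this is
        // one memory access, O(1), no matter how large the container is.
        std::cout << v[3] << '\n';

        // Linked list: no address arithmetic is possible, so std::advance has
        // to follow i next-pointers one by one, which is O(i).
        auto it = l.begin();
        std::advance(it, 3);
        std::cout << *it << '\n';
    }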
std::map should be implemented with a binary search tree, as I read in the documentation, and it keeps its elements sorted too.
I need to insert and retrieve elements rapidly. I also need to get the lowest N elements from time to time.
I was thinking about using a std::map; is it a good choice? If it is, how much time would I need to retrieve the lowest N elements? O(n*log n)?
Given that you need both retrieval and the N smallest elements, I would say std::map is a reasonable choice. But depending on the exact access pattern, a std::vector with sorting might be a good choice too.
I am not sure what you mean by retrieve. The time to read k elements is O(k) (provided you do it sequentially using an iterator); the time to remove them is O(k log n), where n is the total number of elements (even if you do it sequentially using iterators).
You can use iterators to rapidly read through the lowest N elements. Going from begin() to the (N-1)-th element will take O(N) time (getting the next element is amortized constant time for a std::map).
I'd note, however, that it is often actually faster to use a sorted std::vector with a binary-chop search to implement what it sounds like you are doing, so depending on your exact requirements this might be worth investigating.
The C++ standard requires that all required iterator operations (including iterator increment) be amortized constant time. Consequently, getting the first N items in a container must take amortized O(N) time.
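For example, a minimal sketch (with invented sample data) of walking the lowest N entries of a std::map via its ordered iteration:

    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<int, std::string> m{
            {5, "e"}, {1, "a"}, {4, "d"}, {2, "b"}, {3, "c"}};

        // std::map iterates in key order, so the lowest N elements are just
        // the first N reached from begin(); increment is amortized O(1), so
        // the whole walk is O(N).
        const std::size_t N = 3;
        std::size_t count = 0;
        for (auto it = m.begin(); it != m.end() && count < N; ++it, ++count)
            std::cout << it->first << " -> " << it->second << '\n';
    }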
I would say yes to both questions.
We know that lookup on a singly linked list is O(n) given a head pointer. Let us say I maintain a pointer at half the linked list at all times. Would I be improving any lookup times?
Yes, it can reduce the complexity by a constant factor of 2, provided you have some way of determining whether to start from the beginning or middle of the list (typically, but not necessarily, the list being sorted). This is, however, a constant factor, so in terms of big-O complexity, it's irrelevant.
To be relevant to big-O complexity, you need more than a constant factor change. If, for example, you had a pointer to bisect each half, and again each half of that, and so on, you'd end up with logarithmic complexity instead of linear -- and you'd have transformed your "linked list" into an (already well known) threaded tree.
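As a rough sketch of that idea (the Node type and the contains() helper are hypothetical, and the list is assumed to be sorted), keeping a pointer to the middle node halves the portion that has to be scanned, but the scan itself is still linear:

    #include <iostream>

    // Minimal singly linked list node, for illustration only.
    struct Node {
        int value;
        Node* next;
    };

    // Hypothetical helper: assumes the list is sorted and that `mid` points at
    // its middle node. Comparing the target with the middle value lets us scan
    // only half the list, a 2x constant-factor saving, but the scan itself is
    // still linear, so the big-O stays O(n).
    bool contains(Node* head, Node* mid, int target) {
        Node* start = (mid && target >= mid->value) ? mid : head;
        Node* stop  = (start == head) ? mid : nullptr;   // first half ends at mid
        for (Node* cur = start; cur != stop; cur = cur->next)
            if (cur->value == target) return true;
        return false;
    }

    int main() {
        Node e{9, nullptr}, d{7, &e}, c{5, &d}, b{3, &c}, a{1, &b};
        std::cout << contains(&a, &c, 7) << ' '   // 1: search starts at the middle (5)
                  << contains(&a, &c, 2) << '\n'; // 0: search stays in the first half
    }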
Nice thought, but this still does not improve the search operation. No matter how many pointers you keep into different portions of the list, you still have to examine each element in the list. However, you could use two threads to search the two halves of the list, making the operation twice as fast in theory.
Only if your linked list's data is sorted. Otherwise, as already said in the other reply.
It would, but asymptotically it would still be the same. However, there is a data structure that uses this idea: it is called a skip list. A skip list is a linked list where some nodes have extra pointers that point, in some sense, into the middle of the rest of the list. This structure usually has logarithmic insert, find, and delete.
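Here is a minimal, hand-built sketch of the skip-list search idea (the SkipNode layout and the tiny two-level example are my own, not a production implementation): the search walks the top "express" level first and drops down a level whenever the next node would overshoot.

    #include <iostream>
    #include <vector>

    // Hypothetical minimal skip-list node: forward[k] is the next node on
    // level k. Level 0 is the ordinary linked list; higher levels skip ahead.
    struct SkipNode {
        int key;
        std::vector<SkipNode*> forward;
        SkipNode(int k, int levels) : key(k), forward(levels, nullptr) {}
    };

    // Search walks the highest level first and drops down a level whenever
    // the next node would overshoot the target; with randomized levels this
    // gives expected O(log n) comparisons.
    bool contains(SkipNode* head, int levels, int target) {
        SkipNode* cur = head;
        for (int lvl = levels - 1; lvl >= 0; --lvl)
            while (cur->forward[lvl] && cur->forward[lvl]->key < target)
                cur = cur->forward[lvl];
        cur = cur->forward[0];
        return cur && cur->key == target;
    }

    int main() {
        // Hand-built two-level example. Level 0: 1 -> 3 -> 5 -> 7 -> 9.
        // Level 1 ("express lane"): 1 -> 5 -> 9.
        const int levels = 2;
        SkipNode head(-1, levels), n1(1, levels), n3(3, 1), n5(5, levels),
                 n7(7, 1), n9(9, levels);
        head.forward[0] = &n1; n1.forward[0] = &n3; n3.forward[0] = &n5;
        n5.forward[0]   = &n7; n7.forward[0] = &n9;
        head.forward[1] = &n1; n1.forward[1] = &n5; n5.forward[1] = &n9;

        std::cout << contains(&head, levels, 7) << ' '   // 1
                  << contains(&head, levels, 4) << '\n'; // 0
    }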