We know that lookup on a singly linked list is O(n) given a head pointer. Let us say I maintain a pointer at half the linked list at all times. Would I be improving any lookup times?
Yes, it can reduce the complexity by a constant factor of 2, provided you have some way of determining whether to start from the beginning or middle of the list (typically, but not necessarily, the list being sorted). This is, however, a constant factor, so in terms of big-O complexity, it's irrelevant.
To be relevant to big-O complexity, you need more than a constant factor change. If, for example, you had a pointer to bisect each half, and again each half of that, and so on, you'd end up with logarithmic complexity instead of linear -- and you'd have transformed your "linked list" into an (already well known) threaded tree.
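To make the constant-factor idea concrete, here is a minimal sketch, assuming the list holds sorted ints and that the caller somehow keeps a pointer positioned at (roughly) the middle node; the Node type and the find helper are made up for illustration:

```cpp
struct Node {
    int value;
    Node* next;
};

// Hypothetical lookup using a cached pointer to the middle node.
// Assumes the list is sorted ascending and that `mid` really points
// at (roughly) the middle element; keeping `mid` up to date across
// inserts and deletes is the caller's problem.
Node* find(Node* head, Node* mid, int target) {
    // Start from the middle if the target can only be in the second half.
    Node* cur = (mid != nullptr && target >= mid->value) ? mid : head;
    while (cur != nullptr && cur->value < target)
        cur = cur->next;
    return (cur != nullptr && cur->value == target) ? cur : nullptr;
}
```

At best this halves the number of nodes scanned, which is exactly the constant factor of 2 mentioned above; the lookup is still O(n).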
Nice thought, but this still does not improve the search operation. No matter how many pointers you have into different portions of the list, you still have to examine each element in the list. However, you could use two threads to search each half of the list, making the operation twice as fast in theory.
Only if your linked list's data is sorted. Otherwise, it's as already said in the other reply.
It would, but asymptotically it would still be the same. However, there is a data structure that uses this idea; it is called a skip list. A skip list is a linked list where some nodes have extra pointers that skip ahead, in some sense, to the middle of the rest of the list. This structure usually has logarithmic insert, find and delete.
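For illustration, here is a rough sketch of what a skip-list node and its search loop could look like. SkipNode and search are hypothetical names, and the code assumes head is a sentinel node allocated with the maximum number of levels:

```cpp
#include <vector>

// A skip-list node carries several forward pointers, one per level.
// forward[0] is the ordinary "next" pointer; higher levels skip over
// progressively larger stretches of the list.
struct SkipNode {
    int value;
    std::vector<SkipNode*> forward;
};

// Search: start at the highest level of the sentinel and drop down a
// level whenever the next node at the current level would overshoot.
// With randomized node heights this takes O(log n) steps on average.
SkipNode* search(SkipNode* head, int target) {
    if (head == nullptr || head->forward.empty())
        return nullptr;
    SkipNode* cur = head;
    for (int lvl = static_cast<int>(head->forward.size()) - 1; lvl >= 0; --lvl) {
        while (cur->forward[lvl] != nullptr && cur->forward[lvl]->value < target)
            cur = cur->forward[lvl];
    }
    cur = cur->forward[0];
    return (cur != nullptr && cur->value == target) ? cur : nullptr;
}
```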
Related
Is it possible to perform binary search on a doubly-linked list in Θ(log n) time?
My answer is yes, because if the list is already sorted it seems like the search could be faster than just O(n).
In order to do a binary search on a doubly-linked list, you're going to have to first iterate to the halfway-point of the list, so that you can do your first recursion on the two halves of the list.
Iterating to the halfway-point of a linked list is already an O(n) operation, since the time it takes to iterate to the halfway-point will grow linearly as the list itself gets longer.
So you're already at O(n) time, even before you've done any actual searching. Hence, the answer is no.
As you asked the question, the answer is no. You cannot get O(log n) time for a linked list, since traversal is linear; it cannot be better than O(n) in general. Binary search would actually be worse than a linear scan in that case, since it must iterate back and forth multiple times to "jump" around. It would be better to do a single linear scan to find the element.
However, the C++ standard specifies that std::lower_bound algorithm (which does a binary search) has the following complexity:
[lower.bound]
Complexity: At most log2(last - first) + O(1) comparisons and projections.
That is, it is counting element comparisons, not time, if you are measuring time by the number of iterator advancements. The algorithm finds the proper place by calling std::advance() on an iterator many times; on a list those calls add up to O(n) iterator advancements in total, whereas on a random-access container each one is constant time. For each call to advance there is a corresponding call to the comparator, which is why only O(log n) comparisons are made.
That's why it is always so important to be clear what big-oh notation is measuring. Often the comparisons are a proxy for time, but not always!
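To see the difference in practice, here is a small example (assuming C++11) that calls std::lower_bound on both container types; the complexity notes are in the comments:

```cpp
#include <algorithm>
#include <iostream>
#include <list>
#include <vector>

int main() {
    std::vector<int> v{1, 3, 5, 7, 9, 11};
    std::list<int>   l{1, 3, 5, 7, 9, 11};

    // Random-access iterators: each advance inside lower_bound is O(1),
    // so the whole call is O(log n) in both comparisons and time.
    auto vit = std::lower_bound(v.begin(), v.end(), 7);

    // List iterators: still only O(log n) comparisons, but the iterator
    // advancements add up to O(n) steps, so in practice it is slower
    // than a plain linear scan over the list.
    auto lit = std::lower_bound(l.begin(), l.end(), 7);

    std::cout << *vit << ' ' << *lit << '\n';   // prints: 7 7
}
```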
One of the topics in my comp sci class concerns time complexity, using arrays and linked lists to compare how well each container handles certain operations, so you can choose the appropriate data structure.
I understand the reasoning behind most of the operations but I'm unsure about one and that is inserting and appending in an array.
The worst-case scenario for both of these is O(n). I believe I understand why inserting is O(n): in the worst case you insert at the front, which forces you to shift all elements over to the right, meaning it's linear and dependent on the number of elements in the array.
For appending, I was curious why it was not O(1) since it takes one operation no matter the size to add an element at the end, given that there is space.
Is that the issue, if there isn't enough space you have to copy the array to a larger one for its worst case scenario?
[...] if there isn't enough space you have to copy the array to a larger one
for its worst case scenario?
Bingo.
A typical array is a chunk of contiguous memory with a definite size, which is determined either at compile time or at run time. There is no such thing as removing elements from or inserting elements into an array; you simply write into the already-allocated memory.
A linked list is a non-contiguous collection of memory chunks, which are connected by means of their addresses. There is such a thing as removing and inserting elements into a linked list.
The benefits of an array over a linked list are easier traversal and compactness (extra memory to store the address of the next [or previous] element is unnecessary). However, unlike a linked list, an array cannot be extended as easily.
Nevertheless, in order for us to more precisely talk about the time complexities of the algorithms inherent to a data structure, we need to first define the data structure.
Doubly linked lists? Do we store the addresses of the first and last elements (like a queue)? Binary trees (which are also built from linked nodes)?
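To illustrate the append case asked about above, here is a toy sketch of how a growable array might implement push_back. IntArray is purely illustrative (it is not how a real std::vector is written), but it shows why appending is usually O(1) and occasionally O(n):

```cpp
#include <cstddef>

// Minimal growable array, just enough to show the cost of appending.
struct IntArray {
    int*        data     = nullptr;
    std::size_t size     = 0;
    std::size_t capacity = 0;

    void push_back(int x) {
        if (size == capacity) {                       // out of space: the O(n) case
            std::size_t newCap = (capacity == 0) ? 1 : capacity * 2;
            int* bigger = new int[newCap];
            for (std::size_t i = 0; i < size; ++i)    // copy every existing element
                bigger[i] = data[i];
            delete[] data;
            data = bigger;
            capacity = newCap;
        }
        data[size++] = x;                             // the usual O(1) case
    }

    ~IntArray() { delete[] data; }
};
```

Because the capacity doubles each time, the expensive copies happen rarely enough that the cost averages out to O(1) per append (amortized), even though a single append can be O(n).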
Does anybody know why inserting an element into the middle of a list is faster than inserting an element into the middle of a vector?
I prefer to use vector but have been told to use list if I can.
Can anybody explain why?
And is it always recommended to use list over vector?
If I take the question verbatim, finding the middle of an array (std::vector) is a simple operation: you divide the length by two and then round up or down to get the index. Finding the middle of a doubly-linked list (std::list) requires walking through all elements; even if you know its size, you still need to walk over half of the elements. Therefore std::vector is faster than std::list at finding the middle, in other words one is O(1) while the other is O(n).
Inserting at a known position requires shuffling the adjacent elements for an array, but just linking in another node for a doubly linked list, as others explained here. Therefore, std::list with O(1) is faster than std::vector with O(n).
Together, to insert in the exact middle, we have O(1) + O(n) for the array and O(n) + O(1) for the doubly linked list, making inserting in the middle O(n) for both container types. All this leaves out things like CPU caches and allocator speed though; it just compares the number of "simple" operations. In summary, you need to find out how you use the container. If you really insert/delete at random positions a lot, std::list might be better. If you only do so rarely and then mostly read the container, std::vector might be better. If you only have ten elements, all the O(x) is probably worthless anyway and you should go with the one you like best.
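As a small illustration of those two halves of the cost, here is a sketch that inserts into the exact middle of both containers (the sizes and values are arbitrary):

```cpp
#include <iterator>
#include <list>
#include <vector>

int main() {
    std::vector<int> v(1000, 0);
    std::list<int>   l(1000, 0);

    // Vector: finding the middle is O(1), but insert() has to shift
    // about n/2 elements to the right, so the insertion is O(n).
    v.insert(v.begin() + v.size() / 2, 42);

    // List: finding the middle costs O(n) iterator steps, but the
    // insertion itself only relinks a couple of pointers, O(1).
    auto mid = std::next(l.begin(), l.size() / 2);
    l.insert(mid, 42);
}
```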
Inserting into the middle of the vector requires all the elements after the insertion point to be shuffled along to make space, potentially involving lots of copying.
The list is implemented as a linked list with each node occupying its own space in memory with references to neighboring nodes, so adding a new node just requires changing 2 references to point to the new node.
Depending on the data type you use, a vector may well perform much faster than a list. But the more complex the object is to copy, the worse a vector gets.
In simple terms, a vector is an array, so its elements are stored in consecutive memory locations (i.e., one next to the other). The main difference is that a vector allows resizing at run time without causing data loss.
Now, to insert into a list, you identify the node, then create the new element (anywhere in memory), store the value and connect the pointers.
But in the case of the vector (array), you must physically move elements from one cell to the next in order to create space for the new element. That physical movement is what causes the delay, particularly if many elements need to be moved. Strictly speaking, you are not moving the array cells themselves; rather, it is their contents that get copied.
Ulrich Eckhardt's answer is pretty good. I don't have enough reputation to add a comment, so I will write an answer myself. Like Ulrich said, the speed of insertion in the middle for both the list and the vector is O(n) in theory. In practice, modern CPUs have a thing called a "prefetcher", and it's pretty good at fetching contiguous data. Since the vector is contiguous in memory, moving lots of elements is pretty fast because of the prefetcher. You need to be manipulating really, really big vectors for them to be slower at inserting than the list. For more details check this awesome blog post:
http://gameprogrammingpatterns.com/data-locality.html
I am looking for the best data structure in C++ in which insertion and deletion can take place very efficiently and fast.
Traversal should also be very easy for this data structure. Which one should I go with?
What about std::set in C++?
A linked list provides efficient insertion and deletion of arbitrary elements. Deletion here is deletion by iterator, not by value. Traversal is quite fast.
A deque provides efficient insertion and deletion only at the ends, but those are faster than for a linked list, and traversal is faster as well.
A set only makes sense if you want to find elements by their value, e.g. to remove them. Otherwise the overhead of checking for duplicates, as well as that of keeping things sorted, will be wasted.
It depends on what you want to put into this data structure. If the items are unordered, or you only care about their insertion order, list<> could be used. If you want them kept in sorted order, set<> or multiset<> (the latter allows multiple identical elements) could be an alternative.
list<> is typically a doubly-linked list, so insertion and deletion can be done in constant time, provided you know the position. Traversal over all elements is also fast, but accessing a specific element (either by value or by position) could be slow.
set<> and its family are typically binary trees, so insertion, deletion and searching for elements mostly take logarithmic time (when you know where to insert/delete, it's constant time). Traversal over all elements is also fast.
(Note: boost and C++11 both have data structures based on hash-tables, which could also be an option)
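As a quick sketch of those trade-offs (the element values are arbitrary, and the comments only state the usual complexities):

```cpp
#include <deque>
#include <list>
#include <set>

int main() {
    std::list<int>  l{1, 2, 3};
    std::deque<int> d{1, 2, 3};
    std::set<int>   s{1, 2, 3};

    // list<>: O(1) insertion/deletion anywhere, given an iterator.
    auto it = l.begin();
    ++it;               // it points at the 2
    l.insert(it, 42);   // link 42 in before the 2; it stays valid
    l.erase(it);        // unlink the 2

    // deque<>: O(1) only at the two ends, but with better locality.
    d.push_front(0);
    d.pop_back();

    // set<>: O(log n) insert/erase/find by value, kept sorted.
    s.insert(42);
    s.erase(2);
}
```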
I would say a linked list, depending on whether or not your deletions are at specific positions and happen often. You can traverse it with an iterator.
It occurs to me that you need a tree.
I'm not sure about the exact structure (since you didn't provide detailed info), but if you can put your data into a binary tree, you can achieve decent speed at searching, deleting and inserting elements (O(log n) average and O(n) worst case).
Note that I'm talking about the data structure here, you can implement it in different ways.
To insert/delete a node with a particular value in a DLL (doubly linked list), the entire list may need to be traversed to find the location, hence these operations should be O(n).
If that's the case, then how come the STL list (most likely implemented using a DLL) is able to provide these operations in constant time?
Thanks everyone for making it clear to me.
Insertion and deletion at a known position is O(1). However, finding that position is O(n), unless it is the head or tail of the list.
When we talk about insertion and deletion complexity, we generally assume we already know where that's going to occur.
It's not. The STL methods take an iterator to the position where insertion is to happen, so strictly speaking, they ARE O(1), because you're giving them the position. You still have to find the position yourself in O(n) however.
Deleting an arbitrary value (rather than a node) will indeed be O(n) as it will need to find the value. Deleting a node (i.e. when you start off knowing the node) is O(1).
Inserting based on the value - e.g. inserting into a sorted list - will be O(n). Inserting before or after an existing, known node is O(1).
Inserting to the head or tail of the list will always be O(1) - because those are just special cases of the above.
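To illustrate the distinction made in this thread, here is a small sketch: the O(n) part is finding the iterator, and the O(1) part is the insert/erase once you have it:

```cpp
#include <algorithm>
#include <list>

int main() {
    std::list<int> l{10, 20, 30, 40};

    // Finding a position by value is the O(n) part...
    auto pos = std::find(l.begin(), l.end(), 30);

    // ...but once you hold the iterator, insert and erase are O(1):
    // the list just relinks the neighbouring nodes.
    l.insert(pos, 25);   // list is now 10 20 25 30 40
    l.erase(pos);        // list is now 10 20 25 40
}
```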