diff between ADT list and linked list - c++

What is the real (significant) difference between an ADT list implementation and a linked list implementation with respect to a queue?
Moreover, can you suggest any website with visual examples of these types of lists?

It is REALLY hard to understand this question, but in an attempt to work out what is actually being asked, I believe I have figured it out. My assumption is that the question is: "What is the difference between std::list and std::queue?" #fatai: Please correct me if I am wrong.
The std::list is a doubly-linked list. Each element of the list "knows" the next and previous element, and the list "knows" its beginning and end. Look here: http://www.cplusplus.com/reference/stl/list/
The std::queue is a container adaptor (by default built on top of std::deque) with deliberately restricted functionality: you can easily insert elements at the back and remove elements from the front, i.e. first in, first out. Have a look here:
http://www.cplusplus.com/reference/stl/queue/
If you only want that minimal functionality, I'd use queue. The queue is optimized for its purpose, and it also prevents you from accidentally doing things wrong (such as removing an element from the middle).
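As a minimal sketch of that restricted interface (the values are just illustrative):

#include <iostream>
#include <queue>

int main() {
    std::queue<int> q;
    q.push(1);   // insert at the back
    q.push(2);
    q.push(3);
    while (!q.empty()) {
        std::cout << q.front() << '\n';  // look at the front element
        q.pop();                         // remove from the front
    }                                    // prints 1, 2, 3: first in, first out
}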
I hope that answers your (confusing) question. ;-)

Erasing from or inserting into the middle of an array-backed list through an index has O(n) complexity, because in the background all the subsequent elements have to be shifted (it follows the model of a vector ADT; a linked list, by contrast, doesn't even let you access an element through an index).
In linked lists, erasing and inserting at a known position have O(1) complexity: the elements don't need to be shifted for these operations. Searching for an element in a linked list still has O(n) complexity, though, just like in the list ADT.

Related

Why is the run time of add-before O(1) for doubly-linked lists?

In data structures, we say that inserting an element before a given node of a singly-linked list is an O(n) operation: since there are no backward pointers, we have to walk through the list from the head to reach the node we want to insert before. Therefore it has a linear run time.
Then, when we introduce doubly-linked lists, we say the problem is resolved: now that we have pointers in both directions, add-before becomes a constant-time O(1) operation.
I understand the logic, but something still confuses me. Since we do NOT have constant-time access to the elements of the list, we have to walk through the preceding elements to find the one we want to insert before. It is true that in a doubly-linked list the add-before step itself is faster, but the action of finding the key of interest is still O(n). So why do we say that with a doubly-linked list add-before becomes O(1)?
Thanks,
In C++, the std::list::insert() function takes an iterator to indicate where the insert should occur. That means the caller already has this iterator, and the insert operation is not doing a search and therefore runs in constant time.
The find() algorithm, however, is linear, and is the normal way to search for a list element. If you need to find+insert, the combination is O(n).
However, there is no requirement to do a search before an insert. For example, if you have a cached (valid) iterator, you can insert in front of (or delete) the element it corresponds with in constant time.
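A small sketch of that distinction (the container contents are just illustrative): obtaining the iterator is linear, but every insert through the already-obtained iterator is constant time.

#include <algorithm>
#include <list>

int main() {
    std::list<int> xs = {1, 2, 4, 5};
    // The search part is linear: one O(n) pass to obtain an iterator.
    auto it = std::find(xs.begin(), xs.end(), 4);
    // Each insert through the cached iterator is O(1): no search, no shifting.
    xs.insert(it, 3);   // xs is now 1 2 3 4 5
    xs.insert(it, 30);  // xs is now 1 2 3 30 4 5; the iterator stayed valid
}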

What is the purpose of sorting a linked list?

I am wondering what the purpose of sorting a linked list is, because whether the linked list is sorted or unsorted, finding an element still takes O(n).
Please forgive me if my question is stupid.
The purpose of sorting isn't always to be able to search in logarithmic time. There are lots of other applications of sorted data, obviously.
Suppose you have to de-duplicate (remove the duplicate elements from) a large linked list, and you don't have enough space to load the list items into a hash table because the list is very big. In this case, you can sort the list and then remove consecutive elements if they are the same, and thus de-duplicate the list.
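With std::list that is exactly two member-function calls; a minimal sketch (values illustrative):

#include <list>

int main() {
    std::list<int> xs = {3, 1, 3, 2, 1, 2};
    xs.sort();    // merge sort on the nodes: O(n log n), no big temporary array
    xs.unique();  // drops consecutive duplicates: xs is now 1 2 3
}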
If you want to insert an element at its appropriate position in a sorted container, a sorted linked list is very handy: it guarantees linear time and constant space complexity. With an array, you would need a temporary array, or you would have to move all the subsequent elements over one by one. In fact, an LRU cache is a doubly-linked list under the hood, kept ordered by the most recent hit on each item. A newly used item, or an old item that has just been accessed again, is inserted at the front, which keeps the already-ordered list ordered. If an array-like structure were used here, the LRU cache couldn't offer constant complexity.
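A minimal LRU-cache sketch along those lines (the class and member names are my own invention, not from any particular library): a std::list keeps the entries in recency order, and a hash map stores iterators into it so any hit can be relinked to the front in O(1).

#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

class LruCache {
    std::size_t capacity_;
    std::list<std::pair<int, int>> items_;  // (key, value), most recent first
    std::unordered_map<int, std::list<std::pair<int, int>>::iterator> index_;

public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    // On a hit, relink the entry to the front in O(1) and return its value.
    bool get(int key, int& value) {
        auto it = index_.find(key);
        if (it == index_.end()) return false;
        items_.splice(items_.begin(), items_, it->second);  // O(1), no copying
        value = it->second->second;
        return true;
    }

    void put(int key, int value) {
        auto it = index_.find(key);
        if (it != index_.end()) {          // existing key: update and refresh
            it->second->second = value;
            items_.splice(items_.begin(), items_, it->second);
            return;
        }
        if (items_.size() == capacity_) {  // full: evict the least recent (back)
            index_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(key, value);
        index_[key] = items_.begin();
    }
};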
These are just a few classic applications; you can find a lot of others.
Suppose a linked list is used to implement a priority queue. We can add elements of different priorities at random, but we want to process the elements of the queue according to priority, so it is useful to maintain a sorted linked list: the top-priority items appear at the beginning, and removing them from the queue is an easy operation. This is not exactly sorting the list; rather, as each item is inserted, it is placed in its correct position based on its priority, similar to insertion sort of an array.
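A sketch of that sorted insertion, assuming larger numbers mean higher priority (the function name is my own):

#include <list>

// Insert `priority` just before the first smaller element, keeping the list
// sorted in descending order (highest priority at the front).
void insertByPriority(std::list<int>& pq, int priority) {
    auto it = pq.begin();
    while (it != pq.end() && *it >= priority) ++it;  // O(n) walk to the spot
    pq.insert(it, priority);                         // O(1) once the spot is known
}

int main() {
    std::list<int> pq;
    insertByPriority(pq, 2);
    insertByPriority(pq, 9);
    insertByPriority(pq, 5);  // pq is now 9 5 2; process by popping the front
}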

iterating through queue with circular linked lists

I have to implement a queue using a circular linked list with only one iterator. My question is: which is better in terms of performance, maintaining an iterator to the first item or to the last item?
Well, if you have a pointer to the first item, then operations on the end of the list are going to be O(n). With a pointer to the last item, you can reach both the beginning (one step forward, since the list is circular) and the end in O(1). Generally, if you have a circularly linked list, you want to be able to reach both the beginning and the end, so the answer is that your performance will be better with a pointer to the last item.
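A minimal sketch of such a queue holding only the tail pointer (the names are my own): since tail->next is the head, both enqueue and dequeue are O(1).

#include <cassert>

struct Node {
    int value;
    Node* next;
};

struct CircularQueue {
    Node* tail = nullptr;  // tail->next is the front of the queue

    void enqueue(int v) {          // O(1): link in behind the tail
        Node* n = new Node{v, nullptr};
        if (!tail) n->next = n;    // a single element points at itself
        else { n->next = tail->next; tail->next = n; }
        tail = n;
    }

    int dequeue() {                // O(1): unlink tail->next (the head)
        assert(tail != nullptr);
        Node* head = tail->next;
        int v = head->value;
        if (head == tail) tail = nullptr;  // queue became empty
        else tail->next = head->next;
        delete head;
        return v;
    }
};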

trivial singly linked list complexity query

We know that lookup in a singly-linked list is O(n) given a head pointer. Let us say I maintain a pointer to the middle of the linked list at all times. Would I be improving any lookup times?
Yes, it can reduce the complexity by a constant factor of 2, provided you have some way of determining whether to start from the beginning or middle of the list (typically, but not necessarily, the list being sorted). This is, however, a constant factor, so in terms of big-O complexity, it's irrelevant.
To be relevant to big-O complexity, you need more than a constant factor change. If, for example, you had a pointer to bisect each half, and again each half of that, and so on, you'd end up with logarithmic complexity instead of linear -- and you'd have transformed your "linked list" into an (already well known) threaded tree.
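A sketch of the constant-factor version on a sorted singly-linked list, using one extra pointer to the middle node (the setup is my own):

struct Node {
    int value;
    Node* next;
};

// Search a sorted singly-linked list, using an extra pointer to the middle
// node to skip half the walk on average. Still O(n), just a smaller constant.
bool find(Node* head, Node* mid, int target) {
    Node* start = (mid && target >= mid->value) ? mid : head;
    for (Node* cur = start; cur; cur = cur->next) {
        if (cur->value == target) return true;
        if (cur->value > target) return false;  // sorted, so we passed it
    }
    return false;
}

int main() {
    Node n3{7, nullptr}, n2{5, &n3}, n1{2, &n2};
    return find(&n1, &n2, 7) ? 0 : 1;  // starts at the middle node (5 -> 7)
}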
Nice thought, but this still does not improve the search operation: no matter how many pointers you have into different portions of the list, you may still have to examine every element. However, you could use two threads to search each half of the list, making the operation twice as fast in theory.
Only if your linked list's data is sorted. Otherwise, as already said in the other reply.
It would, but asymptotically it would still be the same. However, there is a data structure that uses this idea: it is called a skip list. A skip list is a linked list in which some nodes have extra pointers that skip ahead, pointing in some sense into the middle of the rest of the list. This structure usually has logarithmic insert, find, and delete.
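A search-only sketch of the idea over a hand-built two-level skip list (the layout and values are my own; real skip lists assign node levels randomly):

#include <iostream>

struct Node {
    int val;
    Node* next;  // next node on the same level
    Node* down;  // same key, one (denser) level below
};

// Classic skip-list search: move right while the next key is still <= target,
// otherwise drop down a level. Expected O(log n) when levels are random.
bool skipFind(Node* top, int target) {
    for (Node* cur = top; cur; cur = cur->down) {
        while (cur->next && cur->next->val <= target) cur = cur->next;
        if (cur->val == target) return true;
    }
    return false;
}

int main() {
    // Bottom level: 1 -> 4 -> 6 -> 7 -> 9; top "express lane": 1 -> 7.
    Node b9{9, nullptr, nullptr}, b7{7, &b9, nullptr}, b6{6, &b7, nullptr};
    Node b4{4, &b6, nullptr}, b1{1, &b4, nullptr};
    Node t7{7, nullptr, &b7}, t1{1, &t7, &b1};
    std::cout << skipFind(&t1, 6) << ' ' << skipFind(&t1, 5) << '\n';  // 1 0
}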

Is the linked list only of limited use?

I was having a nice look at my STL options today. Then I thought of something.
It seems a linked list (a std::list) is only of limited use. Namely, it only really seems useful if
The sequential order of elements in my container matters, and
I need to erase or insert elements in the middle.
That is, if I just want a lot of data and don't care about its order, I'm better off using an std::set (a balanced tree) if I want O(log n) lookup or a std::unordered_map (a hash map) if I want O(1) expected lookup or a std::vector (a contiguous array) for better locality of reference, or a std::deque (a double-ended queue) if I need to insert in the front AND back.
OTOH, if the order does matter, I am better off using a std::vector for better locality of reference and less overhead or a std::deque if a lot of resizing needs to occur.
So, am I missing something? Or is a linked list just not that great? With the exception of middle insertion/erasure, why on earth would someone want to use one?
Any insertion/deletion at a known position is O(1). Even std::vector isn't O(1) for appends; it is amortized O(1): most of the time an append is constant, but sometimes you have to grow the array.
It's also very good at handling bulk insertion and deletion. If you have 1 million records and want to append 1 million records from another list (concatenation by splicing), it's O(1). With every other structure (assuming standard/naive implementations), that is at least O(n) (where n is the number of elements added).
Order is important very often. When it is, linked lists are good. If you have a growing collection, you have the option of linked lists, array lists (vector in C++) and double-ended queues (deque). Use linked lists if you want to modify (add, delete) elements anywhere in the list often. Use array lists if fast retrieval is important. Use double-ended queues if you want to add stuff to both ends of the data structure and fast retrieval is important. For the deque vs vector question: use vector unless inserting/removing things from the beginning is important, in which case use deque. See here for an in-depth look at this.
If order isn't important, linked lists aren't normally ideal.
std::list is notable for its splice() method, which allows you to move one or more elements from one list to another in constant time, without copying or allocating any elements or list nodes.
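A minimal sketch of splice() (values illustrative):

#include <list>

int main() {
    std::list<int> a = {1, 2, 3};
    std::list<int> b = {4, 5, 6};
    // Relink all of b's nodes onto the end of a in O(1); nothing is copied
    // or allocated. Afterwards a is 1 2 3 4 5 6 and b is empty.
    a.splice(a.end(), b);
}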
This question reminds me of this infamous one. Read it for the parallels as to why such simple data structures are important.
Linked List is a fundamental data structure. Other data structures, like hash maps, may use linked lists internally.
Two different algorithms may both have O(1) time complexity for a lookup, but that doesn't mean they have the same performance. For example, the first one may be 10 or 100 times faster than the second.
Whenever you need to store, iterate over, and do something with a bunch of data, the normal (and fast) data structure for that task is the linked list. More complex data structures are for special cases; e.g., a set is suitable when you don't want repeated values.
std::list has the following properties:
Sequence
Front Sequence
Back Sequence
Forward Container
Reverse Container
Of these properties, std::vector lacks Front Sequence (there is no O(1) push_front()),
while std::set does not support any of the sequence properties (although it does still offer rbegin() and rend()).
So what does this mean?
Well, a Back Sequence supports O(1) push_back() and pop_back(), a Front Sequence supports O(1) push_front() and pop_front(), and a Reverse Container supports rbegin() and rend(), etc.
For full information see:
What are the complexity guarantees of the standard containers?
Linked lists are immutable, recursive data structures, whereas arrays are mutable and imperative (= change-based). In functional programming there are usually no arrays: you don't change elements, you transform lists into new lists. Because the unchanged tail can be shared between the old and new list, linked lists often don't even need additional memory for this, which isn't possible efficiently with arrays.
You can easily build or decompose lists without having to change any value.
-- Builds a new list with every element doubled; the original is untouched.
double :: Num a => [a] -> [a]
double [] = []
double (x:rest) = (2 * x) : double rest
In C++, which is an imperative language, you won't use lists that often. One example could be a list of spaceships in a game from which you can easily remove all spaceships that have been destroyed since the previous frame.
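A sketch of that spaceship example in std::list terms (the struct and the destroyed flag are my own):

#include <list>

struct Spaceship {
    bool destroyed = false;
    // ... position, sprite, and so on
};

int main() {
    std::list<Spaceship> ships(10);
    ships.begin()->destroyed = true;  // pretend one ship was hit this frame
    // Unlink every destroyed ship in one pass; iterators and references to
    // the surviving ships remain valid, unlike with a vector.
    ships.remove_if([](const Spaceship& s) { return s.destroyed; });
}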