Why and when to add element to the end of singly linked list? - singly-linked-list

Is there any reason to add an element to the end of a singly linked list instead of adding it to the beginning? Adding to the end, you have to traverse the whole list every time, so the bigger the list gets, the longer it takes to add a new item.

Yes. For example, if you're keeping the list sorted, you might need to insert your new node at any point, including at the end.

Related

Run-times of sorted linked list

What is the run time to find an element, insert an element, and remove an element in a sorted linked list?
I believe that they're all O(n) since you have to go through each link regardless. Am I right?
Yes, you are right. Think about it: regardless of whether or not you know the order of the nodes, the only way to actually reach any node is to go through the node before it. That means that in the worst case you have to walk through all N nodes, making each operation O(N).

before and after pointers, doubly linked list in c++

Is anyone familiar with before* and after* pointers when using a doubly linked list with two dummy nodes in C++? I am trying to account for all the special cases of insertion (empty list, insert at the very front, insert at the very back, insert in the middle) using before* and after* as my iterators.
How do I correctly use before* and after* to determine where to insert?
Any feedback is greatly appreciated. Thanks in advance.
With two dummy nodes, there are no special cases. Since you always have a dummy node in front and a dummy node at the end, you never operate on an empty list. You never insert at the very front. You never insert at the very back. All insertions and deletions are in the middle -- that's the point of the two sentinel nodes.

iterating through queue with circular linked lists

I have to implement a queue using a circular linked list with only one iterator. My question is which is better in terms of performance: maintaining an iterator to the first item or to the last item?
Well, if you have a pointer to the first item, then operations on the end of the list are going to be O(N). With a pointer to the end of the list, you can do operations on both the beginning and the end in O(1). Generally, if you have a circularly linked list, you want to be able to reach both the beginning and the end, so your performance will be better with a pointer to the end.

What is the best data structure for removing an element from the middle?

The data structure must be like a stack, with only one difference: I want to pop from any index, not only the last. When I have popped element n, the elements with indexes N > n must shift to N-1. Any ideas?
P.S. Pushing element n to the last index of the stack, then popping it out, then deleting stack[n] is a bad idea.
I think you're looking for a linked list.
A linked list (std::list) will allow you to remove an element from the middle with O(1) complexity and automatically "pull up" the elements after it. You can use a linked list like a stack via push_front. However, be aware that accessing element n in a linked list is O(n): you have to start at the head and walk the links from one element to the next until you reach element n, so there is no O(1) indexing.
Basically you would need to:
create an iterator,
advance it to position n,
get the element from the iterator,
erase the element the iterator is currently pointing to.
Some example code can be found here.
You need to implement a linked list. Unlike an array, the order in a linked list is determined by a pointer in each object, so you cannot use indices to access the elements.

Delete duplicates from Doubly linked list

Hello,
I stumbled upon the following question:
You are given an unsorted doubly linked list. You should find and delete the duplicates from the doubly linked list.
What is the best way to do it with minimum algorithmic complexity?
Thank you.
If space is abundant and you really need to optimize for time, you can use a hash set (std::unordered_set in C++). You read each element and insert it into the hash set. If the hash set reports a duplicate, you simply delete that node.
The complexity is O(n)
Think of it as two singly linked lists instead of one doubly linked list, with one set of links going first to last and another set going last to first. You can sort the second list with a merge sort, which will be O(n log n). Now traverse the list using the first link. For each node, check if (node.back)->key==node.key and if so remove it from the list. Restore the back pointer during this traversal so that the list is properly doubly linked again.
This isn't necessarily the fastest method, but it doesn't use any extra space.
Assuming that the potential employer believes in the C++ library:
#include <list>
#include <set>

// untested, O(n log n); note std::set requires operator< and drops the original order
template <class T>
void DeDup(std::list<T>& l) {
    std::set<T> s(l.begin(), l.end());
    std::list<T>(s.begin(), s.end()).swap(l);
}
With minimum complexity? Simply traverse the list up to X times (where X is the number of items), starting at the head each time, and delete (and reassign pointers) down the list. That is O(n²) in the worst case, but really easy to code.