Are these time complexities correct - c++

I wanted to know whether the following time complexities, along with my reasoning, are correct or not.
Insertion at the end of a dynamic array is O(1), and anywhere else it is O(n) (since elements might need to be copied and moved), resembling a std::vector.
Searching through a singly linked list is O(n) since it's linear.
Insertion and deletion in a singly linked list can be either O(1) or O(n). It is O(1) if the address of the node is available; otherwise it's O(n), since a linear search would be required.
I would appreciate feedback on this.

Insertion at the end of a dynamic array is O(1), and anywhere else it is O(n) (since elements might need to be copied and moved), resembling a std::vector.
Amortized time complexity of insertion at the end of a dynamic array is O(1).
https://stackoverflow.com/a/249695/1866301
Searching through a singly linked list is O(n) since it's linear.
Yes
Insertion and deletion in a singly linked list can be either O(1) or O(n). It is O(1) if the address of the node is available; otherwise it's O(n), since a linear search would be required.
Yes. If the nodes of the linked list are indexed by their addresses, you can get O(1); otherwise you will have to search through the list, which is O(n). There are other variants, like the skip list, for which search is logarithmic, O(log n).
http://en.wikipedia.org/wiki/Skip_list

1) Insertion at the end of a dynamic array is amortized O(1). The reason is that, if the insertion forces a reallocation, the existing elements need to be moved to a new block of memory, which is O(n). If you make sure the array is grown by a constant factor every time an allocation occurs, the moves eventually become infrequent enough to be insignificant (see the capacity sketch after this list). Inserting into the middle of a dynamic array will be O(n), to shift the elements after the insertion point over.
2) Correct; searching through a linked list, sorted or not, is O(n), because in the worst case every element of the list will be visited during the search.
3) This is also correct. If you don't need a sorted list, insertion into a singly linked list is usually implemented as adding to the head of the list, so you can just update the list head to the new node and the next pointer of the new node to be the old head of the list (see the head-insertion sketch after this list). This means insertion into an unsorted singly linked list will often be indicated as O(1) without much discussion.
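
To make the amortized claim in 1) concrete, here is a minimal sketch that watches a std::vector reallocate as it grows. The exact capacities printed depend on your standard library's growth factor, so treat the output as illustrative:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    auto last_capacity = v.capacity();

    // Each push_back is O(1) unless capacity is exhausted; then the
    // vector reallocates and moves every element, which is O(n).
    // Geometric growth keeps those expensive moves rare, so the
    // average (amortized) cost per push_back stays O(1).
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) {
            last_capacity = v.capacity();
            std::cout << "size " << v.size()
                      << " -> capacity " << last_capacity << '\n';
        }
    }
}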
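And a minimal sketch of the O(1) head insertion from 3), using a hand-rolled node type (the names Node and push_front are mine, for illustration):

struct Node {
    int   value;
    Node* next;
};

// O(1): no traversal, just one allocation and one pointer update.
Node* push_front(Node* head, int value) {
    return new Node{value, head};
}

int main() {
    Node* head = nullptr;
    for (int i = 1; i <= 5; ++i)
        head = push_front(head, i);  // list is now 5 4 3 2 1

    // Freeing the list is O(n), like any full traversal.
    while (head) {
        Node* next = head->next;
        delete head;
        head = next;
    }
}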

Take a look at this cheat sheet for algorithm complexity; I use it as a reference.

Related

Which is better, a sorted ArrayList or a sorted LinkedList?

When adding an element to a sorted ArrayList, you can find the position in O(log N) using binary search, but you still have to move all the elements after it in order to add it; in the worst case, in which the position is the first element, every element of the list has to be moved. Not to mention that it is possible there is no space left, so the reAllocate() method has to be called, which I understand adds another O(N) in time complexity. So would the worst-case time complexity be O(log N) (binary search) + O(N) (moving all the elements) + O(N) (reAllocate())?
In a LinkedList you could not use binary search, but you would not have to worry about moving the elements forward one position or about calling reAllocate(). I don't know how to conclude which one is better, or whether I'm missing or misunderstanding something.
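
For what it's worth, the array half of this trade-off looks like the following in C++, with std::vector standing in for ArrayList (a sketch; sorted_insert is my name for it). The binary search costs O(log N), but the insert still shifts every later element, so the whole operation is O(N):

#include <algorithm>
#include <vector>

// Insert value into an already-sorted vector, keeping it sorted.
void sorted_insert(std::vector<int>& v, int value) {
    // O(log n) comparisons to find the insertion point...
    auto pos = std::lower_bound(v.begin(), v.end(), value);
    // ...but O(n) element moves to open up the slot, plus a
    // possible reallocation if capacity is exhausted.
    v.insert(pos, value);
}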

Binary Search on a Doubly Linked List

Is it possible to perform binary search on a doubly linked list in Θ(log n) time?
My answer is yes, because if the list is already somewhat ordered, it could be faster than just O(n).
In order to do a binary search on a doubly linked list, you first have to iterate to the halfway point of the list, so that you can recurse on the two halves.
Iterating to the halfway point of a linked list is already an O(n) operation, since the time it takes grows linearly with the length of the list.
So you're already at O(n) time, even before you've done any actual searching. Hence, the answer is no.
As you asked the question, the answer is no. You cannot get O(log n) time for a linked list, since traversal is linear and cannot be better than O(n) in general. In fact, binary search would be worse than a linear scan in that case, since it must iterate over the list multiple times to "jump" around; it would be better to do a single linear scan to find the element.
However, the C++ standard specifies that std::lower_bound algorithm (which does a binary search) has the following complexity:
[lower.bound]
Complexity: At most log2(last - first) + O(1) comparisons and projections.
That is, it is counting element comparisons, not time, if you measure time by the number of iterator advancements. The algorithm finds the proper place by calling std::advance() on an iterator many times; for a list, those calls add up to O(n) iterator advancements in total, whereas on a random-access container each advance is constant time. For each call to advance there is a corresponding call to the comparator.
That's why it is always so important to be clear about what big-O notation is measuring. Often the comparisons are a proxy for time, but not always!
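
A small experiment makes the distinction concrete (the counting comparator is mine, for illustration). std::lower_bound performs O(log n) comparisons on both containers, but on the list the iterator advancements hidden behind those comparisons still add up to O(n):

#include <algorithm>
#include <iostream>
#include <list>
#include <vector>

int main() {
    std::vector<int> v{1, 3, 5, 7, 9, 11, 13, 15};
    std::list<int>   l(v.begin(), v.end());

    long comparisons = 0;
    auto counting_less = [&comparisons](int a, int b) {
        ++comparisons;
        return a < b;
    };

    // Logarithmic comparisons, constant-time advances.
    std::lower_bound(v.begin(), v.end(), 11, counting_less);
    std::cout << "vector comparisons: " << comparisons << '\n';

    // Still logarithmic comparisons, but the advances behind
    // the scenes now cost O(n) in total.
    comparisons = 0;
    std::lower_bound(l.begin(), l.end(), 11, counting_less);
    std::cout << "list comparisons: " << comparisons << '\n';
}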

Why is the run-time of add-before for doubly linked lists O(1)?

In data structures, we say pushing an element before a node in a singly linked list is an O(n) operation: since there are no backward pointers, we have to walk through the list to reach the node we want to insert the new element before. Therefore, it has a linear run time.
Then, when we introduce doubly linked lists, we say the problem is resolved: now that we have pointers in both directions, pushing before a node becomes a constant-time operation, O(1).
I understand the logic, but something still confuses me! Since we DO NOT have constant-time access to the elements of the list, we have to walk through the preceding elements to find the one we want to add before! It is true that in a doubly linked list it is now faster to implement add-before, but the act of finding the key of interest is still O(n)! So why do we say that with a doubly linked list the add-before operation becomes O(1)?
Thanks,
In C++, the std::list::insert() function takes an iterator to indicate where the insert should occur. That means the caller already has this iterator, and the insert operation is not doing a search and therefore runs in constant time.
The find() algorithm, however, is linear, and is the normal way to search for a list element. If you need to find+insert, the combination is O(n).
However, there is no requirement to do a search before an insert. For example, if you have a cached (valid) iterator, you can insert in front of (or delete) the element it corresponds with in constant time.
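
A short sketch of that split (the values are mine, for illustration): the find() is the O(n) part; once you hold the iterator, every insert in front of it is O(1) and does not invalidate it:

#include <algorithm>
#include <list>

int main() {
    std::list<int> l{10, 20, 40, 50};

    // The linear search is the O(n) part.
    auto it = std::find(l.begin(), l.end(), 40);

    // The insert itself is O(1): the list just rewires the
    // prev/next pointers around the given position.
    l.insert(it, 30);   // l is now 10 20 30 40 50

    // The cached iterator remains valid, so a later insert
    // (or erase) at this position is also O(1), with no search.
    l.insert(it, 35);   // l is now 10 20 30 35 40 50
}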

Looking for clarification on Hashing and BST functions and Big O notation

So I am trying to understand the data types and Big O notation of some functions for a BST and Hashing.
So first off, how are BSTs and hash tables stored? Are BSTs usually arrays, or are they linked structures, since they have to point to their left and right children?
What about Hashing? I've had the most trouble finding clear information regarding Hashing in terms of computation-based searching. I understand that Hashing is best implemented with an array of chains. Is this for faster searching or to decrease overhead on creating the allocated data type?
The following question might just be bad interpretation on my part, but what makes a traversal function different from a search function in BSTs, Hashing, and STL containers?
Is traversal O(N) for BSTs because you're actually visiting each node/data member, whereas search() can reduce its time by eliminating half the search field?
And somewhat related, why is it that in the STL, list.insert() and list.erase() are O(1) whereas the vector and deque counterparts are O(N)?
Lastly, why would vector.push_back() be O(N)? I thought the function could be done in O(1), something along the lines of the following, but I've come across text saying it is O(N):
vector<int> vic(2,3);
vector<int>::iterator IT = vic.end();
// wanna insert 4 at the end using push_back
IT++;
(*IT) = 4;
Hopefully this works. I'm a bit tired, but I would love any explanation of why something like that wouldn't be efficient or plausible. Thanks
BSTs (ordered binary trees) are a series of nodes where a parent node points to its two children, which in turn point to their (at most two) children, and so on. They're traversed in O(n) time because traversal visits every node. Lookups take O(log n) time in a balanced tree. Inserting at a known position takes O(1) time, because internally the tree doesn't need to shift a bunch of existing nodes; it just allocates some memory and re-aims the pointers. :)
Hashes (unordered_map) use a hashing algorithm to assign elements to buckets. Usually buckets contain a linked list, so that hash collisions just result in several elements in the same bucket. Traversal will again be O(n), as expected. Lookups and inserts will be amortized O(1). Amortized means that on average the cost is O(1), though an individual insert might trigger a rehash (a redistribution of buckets to minimize collisions). But over time the average complexity is O(1). Note, however, that big-O notation doesn't really deal with the "constant" aspect, only the order of growth. The constant overhead in hashing algorithms can be high enough that, for some data sets, the O(log n) binary trees outperform the hashes. Nevertheless, the hash's advantage is that its operations have amortized constant time complexity.
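
A small illustration with std::unordered_map (the keys are mine, for illustration): each key hashes into one of the buckets, and a lookup hashes the key, jumps to its bucket, and scans only the few colliding entries there:

#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, int> ages{
        {"alice", 30}, {"bob", 25}, {"carol", 41}};

    // Each key is hashed into one of bucket_count() buckets;
    // colliding keys share a bucket (typically a linked list).
    std::cout << "buckets: " << ages.bucket_count() << '\n';
    for (const auto& [name, age] : ages)
        std::cout << name << " (" << age << ") -> bucket "
                  << ages.bucket(name) << '\n';

    // Amortized O(1): hash, jump to the bucket, scan collisions.
    std::cout << "bob is " << ages.at("bob") << '\n';
}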
Search functions take advantage (in the case of binary trees) of the notion of "order": a search through a BST has the same characteristics as a basic binary search over an ordered array, with O(log n) growth. Hashes don't really "search"; they compute the bucket and then quickly run through the collisions to find the target. That's why lookups are constant time.
As for insert and erase: in array-based sequence containers, all elements that come after the target have to be bumped over to the right. Move semantics in C++11 can improve the performance, but the operation is still O(n). For linked sequence containers (list, forward_list, trees), insertion and erasure just mean fiddling with some pointers internally; it's a constant-time process.
push_back() will be O(1) until you exceed the existing allocated capacity of the vector. Once the capacity is exceeded, a new allocation takes place to produce a container that is large enough to accept more elements. All the elements then need to be moved into the larger memory region, which is an O(n) process. I believe move semantics can help here as well, but it's still going to be O(n). Vectors and strings are implemented such that, as they allocate space for a growing data set, they allocate more than they need, in anticipation of additional growth. This is an efficiency safeguard; it means that the typical push_back() won't trigger a new allocation and a move of the entire data set into a larger container. But eventually, after enough push_backs, the limit will be reached, and the vector's elements will be copied into a larger container, which again has some extra headroom left over for more efficient push_backs.
Traversal refers to visiting every node, whereas search is only to find a particular node, so your intuition is spot on there. O(N) complexity because you need to visit N nodes.
std::vector::insert is for inserting in the middle, and it involves copying all subsequent elements over by one slot in order to make room for the element being inserted, hence O(N). A linked list doesn't have this issue, hence O(1). Similar logic applies to erase. deque's properties are similar to vector's.
std::vector::push_back is an O(1) operation for the most part; it only deviates when capacity is exceeded and a reallocation plus copy is needed.

Is time complexity for insertion/deletion in a doubly linked list of order O(n)?

To insert/delete a node with a particular value in a DLL (doubly linked list), the entire list needs to be traversed to find its location; hence these operations should be O(n).
If that's the case, then how come the STL list (most likely implemented using a DLL) is able to provide these operations in constant time?
Thanks everyone for making it clear to me.
Insertion and deletion at a known position is O(1). However, finding that position is O(n), unless it is the head or tail of the list.
When we talk about insertion and deletion complexity, we generally assume we already know where that's going to occur.
It's not. The STL methods take an iterator to the position where the insertion is to happen, so strictly speaking they ARE O(1), because you're giving them the position. You still have to find that position yourself in O(n), however.
Deleting an arbitrary value (rather than a node) will indeed be O(n), as the value needs to be found first. Deleting a node (i.e. when you already know the node) is O(1).
Inserting based on a value, e.g. inserting into a sorted list, will be O(n). Inserting after or before an existing, known node is O(1).
Inserting at the head or tail of the list will always be O(1), because those are just special cases of the above.
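
A short sketch that separates the two costs (the values are mine, for illustration): erasing by value pays for the search, while erasing or inserting at an already-known position does not:

#include <algorithm>
#include <list>

int main() {
    std::list<int> l{1, 2, 3, 4, 5};

    // Erasing by value: the O(n) search dominates.
    auto it = std::find(l.begin(), l.end(), 3);   // O(n)
    if (it != l.end())
        l.erase(it);                              // O(1)

    // Erasing at a known position: O(1), no search needed.
    l.erase(l.begin());                           // head is always known

    // Inserting by value into a sorted list: the position
    // search makes it O(n), even though linking the node is O(1).
    auto pos = std::find_if(l.begin(), l.end(),
                            [](int x) { return x > 3; });
    l.insert(pos, 3);                             // l is now 2 3 4 5
}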