I ran into the following interview question:
We need a data structure to keep n points on the X-axis such that we get efficient implementations of
Insert(x), Delete(x), and Find(a, b) (returning the number of points in the interval [a, b]). Assume that the maximum number returned by Find(a, b) is k.
(1) We can create a data structure that performs all three operations in O(log n).
(2) We can create a data structure that performs Insert and Delete in O(log n) and Find in O(k + log n).
I know from general information that Find is like a range query on 1D points (except that in this question we only need to count the points in the range, not report them). If we use, for example, an AVL tree, then we get the time complexities of option (2).
But I was surprised when told that (1) is the correct answer. Why is (1) the right answer?
The answer is indeed (1).
The idea of an AVL tree is fine, and your conclusions are right. But you can augment the AVL tree so that each node stores one extra property: the number of nodes in its subtree. You have to take care in the AVL operations (including rotations) that this property is kept up to date, but that is a constant overhead per node touched, so it does not impact the time complexities of Insert or Delete.
Then Find can search for the node with value a (or the node with the greatest value less than a), and do the same for value b. During each of these searches you accumulate, from the subtree sizes, how many values precede the node you end up at; subtracting the two counts gives the required result. There are some boundary cases to take into consideration: when a itself is present in the tree, that node should be counted too, otherwise not; and if no node with a value less than or equal to a is found, the corresponding count is taken as 0 in the subtraction.
Clearly this makes Find independent of the count it returns (which can be as large as k). The two binary searches give it a time complexity of O(log n).
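To make this concrete, here is a minimal sketch of the two rank queries over a size-augmented BST. The node layout and function names are illustrative, not from the original answer, and the AVL rebalancing (which must keep the size fields up to date) is omitted:

```cpp
// Hypothetical node of a size-augmented balanced BST (e.g. AVL).
struct Node {
    double key;
    int size;          // number of nodes in this subtree
    Node *left, *right;
};

int size(const Node* t) { return t ? t->size : 0; }

// Number of keys strictly less than x: descend from the root, adding the
// sizes of subtrees known to lie entirely below x.
int countLess(const Node* t, double x) {
    int count = 0;
    while (t) {
        if (x <= t->key) {
            t = t->left;
        } else {
            count += size(t->left) + 1;  // left subtree and t itself are < x
            t = t->right;
        }
    }
    return count;
}

// Same descent for "less than or equal to x".
int countLeq(const Node* t, double x) {
    int count = 0;
    while (t) {
        if (x < t->key) {
            t = t->left;
        } else {
            count += size(t->left) + 1;
            t = t->right;
        }
    }
    return count;
}

// Find(a, b): number of points in [a, b], via two O(log n) descents.
int find(const Node* root, double a, double b) {
    return countLeq(root, b) - countLess(root, a);
}
```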
Related
Given an input stream of numbers ranging from 1 to 10^5 (non-repeating), we need to be able to tell at each point how many numbers smaller than the current one have been previously encountered.
I tried using a std::set in C++ to maintain the elements already encountered and then calling upper_bound on the set for the current number. But upper_bound gives me an iterator, and I then have to step through the set or use std::distance, which is again linear in time.
Can I maintain some other data structure or follow some other algorithm in order to achieve this task more efficiently?
EDIT: Found an older question about Fenwick trees that is helpful here. By the way, I have since solved this problem using segment trees, taking hints from #doynax's comment.
How to use Binary Indexed tree to count the number of elements that is smaller than the value at index?
Regardless of the container you are using, it is a very good idea to keep the elements sorted as you insert them, so that at any point we can get an element's index (or iterator) and thus know how many elements come before it.
You need to implement your own binary search tree, where each node stores two counters: the number of nodes in its left and in its right subtree.
Insertion into the binary tree takes O(log n). During the insertion, the counters of all ancestors of the new element are updated, which is also O(log n).
The number of elements smaller than the new element can then be derived from the stored counters in O(log n).
So the total running time is O(n log n).
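Since the values in this problem are bounded by 10^5, the Fenwick tree (binary indexed tree) from the question linked in the edit above is a particularly compact way to maintain these counters. A minimal sketch, assuming 1-based values up to 10^5 (function names are illustrative):

```cpp
// Fenwick tree (binary indexed tree) over the value range 1..10^5.
const int MAXV = 100000;
int bit[MAXV + 1];

void update(int i, int delta) {     // add delta to the count of value i
    for (; i <= MAXV; i += i & -i) bit[i] += delta;
}

int query(int i) {                  // total count of values in 1..i
    int s = 0;
    for (; i > 0; i -= i & -i) s += bit[i];
    return s;
}

// For each incoming x: how many previously seen numbers are smaller than x.
int smallerSeen(int x) {
    int answer = query(x - 1);      // values < x seen so far
    update(x, 1);                   // record x itself
    return answer;
}
```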
Keep your table sorted at each step and use binary search. When you look up the number just given to you by the input stream, binary search finds either the next greater or the next smaller element; either way, you obtain the position where the current input belongs, and that index is exactly the count of numbers less than it. Because every insertion into the sorted array has to shift elements, this algorithm takes O(n^2) time overall.
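A minimal sketch of this sorted-array approach (the function name is illustrative):

```cpp
#include <algorithm>
#include <vector>

// For each streamed value x: report how many previously seen values are
// smaller, then insert x so the vector stays sorted. lower_bound is
// O(log n), but the insertion shifts elements, so the whole run is O(n^2).
int countAndInsert(std::vector<int>& sorted, int x) {
    auto it = std::lower_bound(sorted.begin(), sorted.end(), x);
    int smaller = static_cast<int>(it - sorted.begin());  // elements < x
    sorted.insert(it, x);
    return smaller;
}
```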
What if you used insertion sort to store each number into a linked list? Then you can count the number of elements less than the new one when finding where to put it in the list.
It depends on whether you want to use std or not. In certain situations, some parts of std are inefficient. (For example, std::vector can be considered inefficient in some cases due to the amount of dynamic allocation that occurs.) It's a case-by-case type of thing.
One possible solution here might be to use a skip list (a relative of the linked list), as it is easier and more efficient to insert an element into a skip list than into an array.
The skip-list approach lets you use what is effectively a binary search to place each new element (which you cannot do on a plain linked list). If you track the list's length with an accumulator, returning the number of larger elements is as simple as length - index.
One more point about this approach: std::set::insert() is already O(log n) even without a hint, so any improvement over it comes down to constant factors.
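For the curious, here is a hedged sketch of the indexable skip list idea from above: each forward link stores a "width" (the number of list positions it skips), so summing widths during the usual search yields the rank of the new key in expected O(log n) time. The layout and names are illustrative, and deletion is omitted:

```cpp
#include <cstdlib>
#include <vector>

struct IndexableSkipList {
    static const int MAX_LEVEL = 16;

    struct Node {
        int key;
        std::vector<Node*> next;
        std::vector<int> width;   // positions skipped by next[i]
        Node(int k, int lvl) : key(k), next(lvl, nullptr), width(lvl, 1) {}
    };

    Node head;
    int length;

    IndexableSkipList() : head(0, MAX_LEVEL), length(0) {}

    static int randomLevel() {
        int lvl = 1;
        while (lvl < MAX_LEVEL && (std::rand() & 1)) ++lvl;
        return lvl;
    }

    // Insert key and return the number of stored keys smaller than it.
    int insert(int key) {
        Node* update[MAX_LEVEL];
        int rankAt[MAX_LEVEL];     // rank of update[i] on each level
        Node* x = &head;
        int rank = 0;              // real nodes passed so far
        for (int i = MAX_LEVEL - 1; i >= 0; --i) {
            while (x->next[i] && x->next[i]->key < key) {
                rank += x->width[i];
                x = x->next[i];
            }
            update[i] = x;
            rankAt[i] = rank;
        }
        int lvl = randomLevel();
        Node* n = new Node(key, lvl);
        for (int i = 0; i < lvl; ++i) {
            n->next[i] = update[i]->next[i];
            update[i]->next[i] = n;
            // split the old link's width around the new node
            n->width[i] = rankAt[i] + update[i]->width[i] - rank;
            update[i]->width[i] = rank + 1 - rankAt[i];
        }
        for (int i = lvl; i < MAX_LEVEL; ++i)
            ++update[i]->width[i]; // higher links now skip one more node
        ++length;
        return rank;               // = index of the new node = smaller count
    }
};
```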
I am faced with an application where I have to design a container that has random access (or at least better than O(n)), has inexpensive (O(1)) insert and removal, and stores the data according to the order (rank) specified at insertion.
For example if I have the following array:
[2, 9, 10, 3, 4, 6]
I can call remove on index 2 to remove 10, and I can also call insert on index 1 to insert 13.
After those two operations I would have:
[2, 13, 9, 3, 4, 6]
The numbers are stored in a sequence and insert/remove operations require an index parameter to specify where the number should be inserted or which number should be removed.
My question is: what kinds of data structures, besides a linked list and a vector, could maintain something like this? I am leaning towards a heap that prioritizes on the next available index, but I have been seeing the Fusion tree mentioned as useful (though more in a theoretical sense).
What kind of data structure would give me the best running time while still keeping memory consumption down? I have been playing around with an insertion-order-preserving hash table, but it has been unsuccessful so far.
The reason I am tossing out using a std::vector straight up is that I must construct something that outperforms a vector at these basic operations. The size of the container has the potential to grow to hundreds of thousands of elements, so committing to shifts in a std::vector is out of the question. The same problem lies with a linked list (even a doubly linked one): traversing it to a given index would take in the worst case n/2 steps, which is still O(n).
I was thinking of a doubly linked list that kept head, tail, and middle pointers, but I felt that it wouldn't be much better.
For basic usage, to be able to insert and delete at an arbitrary position, you can use linked lists. They allow for O(1) insert/remove, but only provided that you have already located the position in the list where the insertion should happen. You can insert "after a given element" (that is, given a pointer to an element), but you cannot as efficiently insert "at a given index".
To be able to insert and remove an element given its index, you will need a more advanced data structure. There exist at least two such structures that I am aware of.
One is the rope structure, which is available in some C++ extensions (SGI STL, or in GCC via #include <ext/rope>). It allows for O(log N) insert/remove at an arbitrary position.
Another structure allowing O(log N) insert/remove is the implicit treap (a.k.a. implicit Cartesian tree); you can find some information at http://codeforces.com/blog/entry/3767 or in "Treap with implicit keys" at https://codereview.stackexchange.com/questions/70456/treap-with-implicit-keys.
An implicit treap can also be modified to report the minimal value it contains (and to support many more operations). I am not sure whether a rope can handle this.
UPD: In fact, I guess that you can adapt any O(log N) binary search tree (such as an AVL or red-black tree) for this purpose by converting it to an "implicit key" scheme. A general outline is as follows.
Imagine a binary search tree which, at each given moment, stores the consecutive numbers 1, 2, ..., N as its keys (N being the number of nodes in the tree). Every time we changed the tree (inserted or removed a node), we would recalculate all the stored keys so that they again run from 1 to the new value of N. This would allow insert/remove at an arbitrary position, since the key now is the position, but updating all the keys would take far too much time.
To avoid this, we do not store the keys explicitly. Instead, each node stores the number of nodes in its subtree. As a result, any time we walk from the root downwards, we can keep track of the index (position) of the current node: we just sum the sizes of the subtrees that we leave to our left. This lets us, given k, locate the node with index k (that is, the k-th node in the standard in-order traversal) in O(log N) time. After that, we can perform an insert or delete at this position using the standard binary tree procedure; we only have to update the subtree sizes of the nodes changed along the way, which is easily done in O(1) per changed node, so the total insert or remove time is O(log N), as in the original binary search tree.
So this approach allows us to insert/remove/access nodes at a given position in O(log N) time using any O(log N) binary search tree as a basis. You can of course store any additional information ("values") you need in the nodes, and you can even compute the minimum of these values in the tree just by keeping, in each node, the minimum value over its subtree.
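A small sketch of the descent just described; the node layout and names are illustrative, and insertion, deletion, and size maintenance during rebalancing are omitted:

```cpp
// Node of an "implicit key" balanced BST: no key field is needed for
// positioning, only the subtree size.
struct Node {
    int value;
    int size;              // number of nodes in this subtree
    Node *left, *right;
};

int size(const Node* t) { return t ? t->size : 0; }

// Locate the node at 0-based position k in the in-order sequence,
// O(log N) for a balanced tree.
const Node* kth(const Node* t, int k) {
    while (t) {
        int leftSize = size(t->left);
        if (k < leftSize) {
            t = t->left;           // position lies in the left subtree
        } else if (k == leftSize) {
            return t;              // t itself is the k-th node
        } else {
            k -= leftSize + 1;     // skip the left subtree and t
            t = t->right;
        }
    }
    return nullptr;                // k out of range
}
```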
However, the aforementioned treap and rope are more advanced, as they also allow split and merge operations (taking a substring/subarray and concatenating two strings/arrays).
Consider a skip list, which in its "indexable" variation supports rank operations in logarithmic time.
For algorithms (pseudocode), see A Skip List Cookbook, by Pugh.
It may be that the "implicit key" binary search tree method outlined by #Petr above is easier to implement, and it may even perform better.
I was thinking about this and was encountering a lot of bugs while trying to do this. Is it possible?
I believe you are asking whether you can do the following update:
Given the update A B C, you want to add C to all elements from A to B.
The problem is that performing such an update on the segment tree element by element would normally take O(N log N) time, where N is the maximum number of elements. However, the key idea behind a segment tree is that you want to support range queries, and normally you are not interested in all O(N^2) possible ranges but in a much smaller subset of them.
You can speed up range updates using lazy propagation, which generally means that you perform an update but do not touch all of the affected nodes in the segment tree: you update down to some point and do not continue further down the tree, since it is not needed.
Say that you have updated everything down to a node K which is responsible for the range [10;30], for example. Later, you do a "get info" query on [20;40]. Obviously you will have to visit node K, but you are not interested in its whole range, only in [20;30], which is actually its right child.
What you have to do is "push" the pending update of node K to its left child and then its right child, and continue downward as needed.
In general this means that when doing an update, you descend only until you find node(s) whose ranges are entirely contained in the interval you are updating, and you do not go any further. This yields O(log N) time per update.
Then, when querying, you propagate the saved updates down the tree whenever you reach a node that holds one. This also yields O(log N) time per query.
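Putting the two halves together, here is a minimal sketch of a lazy segment tree supporting "add C to every element in [A, B]" and range-sum queries; the structure and names are illustrative, not taken from any particular source:

```cpp
#include <vector>

// Node 1 covers [0, n-1]. A pending addition is parked in lazy[] and
// pushed to the children only when a later operation must descend past it.
struct LazySegTree {
    int n;
    std::vector<long long> sum, lazy;

    explicit LazySegTree(int n) : n(n), sum(4 * n, 0), lazy(4 * n, 0) {}

    void apply(int node, int len, long long c) {
        sum[node] += c * len;   // the node's whole range grows by c
        lazy[node] += c;        // remember to forward c to children later
    }

    void push(int node, int lo, int hi) {
        if (lazy[node] == 0) return;
        int mid = (lo + hi) / 2;
        apply(2 * node, mid - lo + 1, lazy[node]);
        apply(2 * node + 1, hi - mid, lazy[node]);
        lazy[node] = 0;
    }

    // add c to every element in [a, b]
    void update(int node, int lo, int hi, int a, int b, long long c) {
        if (b < lo || hi < a) return;
        if (a <= lo && hi <= b) {       // fully covered: stop here, lazily
            apply(node, hi - lo + 1, c);
            return;
        }
        push(node, lo, hi);
        int mid = (lo + hi) / 2;
        update(2 * node, lo, mid, a, b, c);
        update(2 * node + 1, mid + 1, hi, a, b, c);
        sum[node] = sum[2 * node] + sum[2 * node + 1];
    }

    // sum of the elements in [a, b]
    long long query(int node, int lo, int hi, int a, int b) {
        if (b < lo || hi < a) return 0;
        if (a <= lo && hi <= b) return sum[node];
        push(node, lo, hi);
        int mid = (lo + hi) / 2;
        return query(2 * node, lo, mid, a, b) +
               query(2 * node + 1, mid + 1, hi, a, b);
    }
};
```

With this, the update A B C becomes st.update(1, 0, n - 1, A, B, C), and reading back a range sum is st.query(1, 0, n - 1, A, B).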
Some good material: Lazy propagation on segment trees
I am currently writing code to combine two heaps that satisfy the min-heap property with the shape invariant of a complete binary tree. However, I'm not sure whether what I'm doing is the accepted method of merging two such heaps.
Here is what I think:
Given two priority queues represented as min heaps, I insert the nodes of the second tree one by one into the first tree and fix the heap property after each insertion. I continue until all of the nodes of the second tree are in the first tree.
From what I can see, this is an O(n log n) algorithm, since I have to go through all the elements of the second tree and each insert takes about log n time (the height of a complete binary tree is at most log n). But I think there must be a faster way; I'm just not sure what other method is possible.
I was thinking that I could simply attach the second tree as a whole, but that breaks the shape invariant and the order invariant. Is my method the only way?
In fact, building a heap is possible in linear time, and the standard function std::make_heap guarantees linear time. The method is explained in the Wikipedia article on binary heaps.
This means that you can merge the heaps simply by calling std::make_heap on a range containing the elements of both. This is asymptotically optimal if the heaps are of similar size. There might be a way to exploit the preexisting structure to reduce the constant factor, but I find it unlikely.
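A minimal sketch (the function name is illustrative):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Merge two min-heaps stored as arrays: concatenate, then rebuild.
// std::make_heap runs in linear time, so this is O(n + m) overall.
std::vector<int> mergeMinHeaps(std::vector<int> a, const std::vector<int>& b) {
    a.insert(a.end(), b.begin(), b.end());
    std::make_heap(a.begin(), a.end(), std::greater<int>());  // min-heap
    return a;
}
```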
I have to implement a data structure which supports the following three functions. The data is a pair (a, b) of two double values, and the data is concentrated in a particular region, say with values of a in the range 500-600.
Insert(double a, double b) - Insert the data, a pair(double,double) in the data structure. If the first element of the pair already exists, update its second element to the new value.
Delete(double a) - Delete the data containing the first element = a.
PrintData(int count) - Print the value of the data which has the count-th largest value. Values are compared according to data.second.
The input file contains a series of Insert, Delete and PrintData operations. Currently, I have implemented the data structure as a height-balanced binary search tree using an STL map, but it is not fast enough.
Is there any other implementation which is faster than a map?
We can use caching to store the most common PrintData queries.
I'd recommend two binary search trees (BSTs): one being the map from a to b (sorted by a), the other sorted by b.
The second will need to be a custom BST in which each node stores a count of the number of nodes in the subtree rooted at it. These counts can be maintained in O(log n) per update and allow O(log n) queries for the k-th largest element.
When doing an insert, first look up a in the first BST to find any existing b value, remove that old value from the second BST, then update the first BST and insert the new b into the second.
For a delete, look up a's pair in the first BST (and remove it), then remove its b value from the second.
All mentioned operations should take O(log n).
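Here is a hedged sketch of this two-structure design. Instead of a hand-rolled counted BST it uses GCC's policy-based tree (__gnu_pbds with tree_order_statistics_node_update) as the rank-queryable second structure, and it assumes for simplicity that all b values are distinct (otherwise key the second tree on the pair (b, a)):

```cpp
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <map>

using namespace __gnu_pbds;

// Order-statistic set over the b values, sorted descending so that
// find_by_order(count - 1) yields the count-th largest.
typedef tree<double, null_type, std::greater<double>,
             rb_tree_tag, tree_order_statistics_node_update> OrderedSet;

std::map<double, double> byA;   // a -> b (the first BST)
OrderedSet byB;                 // b values with rank queries (the second)

void Insert(double a, double b) {
    auto it = byA.find(a);
    if (it != byA.end()) {      // a exists: replace its old b
        byB.erase(it->second);
        it->second = b;
    } else {
        byA.emplace(a, b);
    }
    byB.insert(b);
}

void Delete(double a) {
    auto it = byA.find(a);
    if (it == byA.end()) return;
    byB.erase(it->second);
    byA.erase(it);
}

double PrintData(int count) {   // count-th largest b, 1-based
    return *byB.find_by_order(count - 1);
}
```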
Caching
If you are, for instance, only ever going to query the top 10 elements, you could maintain another BST containing just those 10 elements (or even an optionally sorted array, since there are only 10 elements), which we then query instead of the second BST above.
When inserting, also insert into this structure if the value is greater than its smallest element, and drop that smallest element.
When removing, we need to look up the next largest value and insert it into the small BST. This could also be done lazily: when removing, just remove the value from this BST without refilling it to 10. Then, when querying, if there are enough elements in this BST we just use it; otherwise we first pull from the big BST whatever values are needed to fill this BST up, and then query.
This would result in best-case O(1) query (worst-case O(log n)), while the other operations will still be O(log n).
Although the added complexity is probably not worth it - O(log n) is pretty fast, even for a large n.
Building on this idea, we could keep only this small BST along with the BST mapping a to b. That would require scanning all the values to refill the small BST after a removal, so it would only really be beneficial if there aren't a whole lot of removals.
I would recommend an indexed skip list. That will give you O(log n) insert and delete, and O(log n) access to the nth largest value (assuming that you maintain the list in descending order).
A skip list isn't any more difficult to implement than a self-balancing binary tree, and it gives much better performance in some situations. Well worth considering.
The original skip list paper.