Missing number(s) Interview Question Redux - C++

The common interview problem of determining the missing value in a range from 1 to N has been done a thousand times over. Variations include 2 missing values up to K missing values.
Example problem: Range [1,10] (1 2 4 5 7 8 9 10) = {3,6}
Here is an example of the various solutions:
Easy interview question got harder: given numbers 1..100, find the missing number(s)
My question: given that the simple case of one missing value is O(n), and that the complexity of the larger cases converges at roughly something larger than O(n log n):
Wouldn't it be easier to answer the question by sorting the range (mergesort, say) and iterating over it, observing the missing elements?
This solution should take no more than O(n log n) and can handle ranges other than 1-to-N, such as 10-to-1000 or -100 to +100, etc.
Is there any reason to believe that the solutions in the SO link above will beat the sorting-based solution for larger numbers of missing values?
Note: many of the common solutions to this problem assume a purely number-theoretic approach. If one is asked such a question in a software-engineering interview, wouldn't it be prudent to use a more computer-science/algorithmic approach, assuming it is on par with the number-theoretic solution's complexity?
More related links:
https://mathoverflow.net/questions/25374/duplicate-detection-problem
How to tell if an array is a permutation in O(n)?

You are only specifying the time complexity, but the space complexity is also important to consider.
The problem complexity can be specified in terms of N (the length of the range) and K (the number of missing elements).
In the question you link, the solution using equations is O(K) in space (or perhaps a bit more?), as you need one equation per unknown value.
There is also the question of preservation: may you alter the list of known elements? In many cases this is undesirable, in which case any solution involving reordering or consuming the elements must first make a copy, which is O(N-K) in space.
I cannot see anything faster than a linear solution: you need to read all known elements (N-K) and output all unknown elements (K). Therefore you cannot do better than O(N) in time.
Let us break down the solutions
Destroying, O(N) space, O(N log N) time: in-place sort
Preserving, O(K) space ?, O(N log N) time: equation system
Preserving, O(N) space, O(N) time: counting sort
Personally, though I find the equation system solution clever, I would probably use either of the sorting solutions. Let's face it: they are much simpler to code, especially the counting sort one!
And as far as time goes, in a real execution, I think the "counting sort" would beat all other solutions hands down.
Note: the counting sort does not require the range to be [0, X); any range will do, as any finite range can be shifted to the [0, X) form by a simple translation.
EDIT:
Changed the sort's space to O(N): one needs to have all the elements available in order to sort them.
Having had some time to think about the problem, I also have another solution to propose. As noted, when N grows (dramatically) the space required might explode. However, if K is small, then we could change our representation of the list, using intervals:
{4, 5, 3, 1, 7}
can be represented as
[1,1] U [3,5] U [7,7]
In the average case, maintaining a sorted list of intervals is much less costly than maintaining a sorted list of elements, and it's as easy to deduce the missing numbers too.
The time complexity is easy: O(N log N), after all it's basically an insertion sort.
Of course, what's really interesting is that there is no need to actually store the list; you can thus feed the algorithm with a stream.
On the other hand, I have quite a hard time figuring out the average space complexity. The "final" space occupied is O(K) (at most K+1 intervals), but during construction there will be many more intervals, since we introduce the elements in no particular order.
The worst case is easy enough: N/2 intervals (think odd vs even numbers). I cannot, however, figure out the average case. My gut feeling tells me it should be better than O(N), but I am not that trusting.
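For illustration, here is a minimal C++ sketch of this interval idea (the structure and names are my own, not from any linked answer): intervals are kept in a std::map keyed by their lower bound, adjacent intervals are merged as elements stream in, and the gaps between intervals are the missing numbers.

#include <iostream>
#include <iterator>
#include <map>

// Disjoint, sorted intervals [lo, hi], keyed by lo.
class IntervalSet {
    std::map<int, int> ivals;
public:
    void insert(int x) {
        auto next = ivals.upper_bound(x); // first interval with lo > x
        if (next != ivals.begin()) {
            auto prev = std::prev(next);
            if (x <= prev->second) return;   // already covered
            if (x == prev->second + 1) {     // extend prev rightwards
                prev->second = x;
                if (next != ivals.end() && next->first == x + 1) {
                    prev->second = next->second; // merge with next
                    ivals.erase(next);
                }
                return;
            }
        }
        if (next != ivals.end() && next->first == x + 1) { // extend next leftwards
            int hi = next->second;
            ivals.erase(next);
            ivals[x] = hi;
        } else {
            ivals[x] = x; // isolated new interval
        }
    }

    // The gaps between consecutive intervals are the missing numbers.
    void printGaps() const {
        if (ivals.empty()) return;
        for (auto it = ivals.begin(), nxt = std::next(it);
             nxt != ivals.end(); ++it, ++nxt)
            for (int m = it->second + 1; m < nxt->first; ++m)
                std::cout << m << ' ';
        std::cout << '\n';
    }
};

int main() {
    IntervalSet s;
    for (int x : {4, 5, 3, 1, 7}) s.insert(x);
    s.printGaps(); // prints: 2 6
}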

Whether the given solution is theoretically better than the sorting one depends on N and K. While your solution has complexity O(N*log(N)), the given solution is O(N*K). I think that the given solution, like the sorting solution, can handle any range [A, B] simply by transforming [A, B] to [1, N].

What about this?
create your own set containing all the numbers
remove the given set of numbers from your set (no need to sort)
What's left in your set are the missing numbers.
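A minimal C++ sketch of this approach, assuming the range [1, 10] from the example (names are illustrative):

#include <iostream>
#include <set>
#include <vector>

int main() {
    // Start with the full range [1, 10] in a set.
    std::set<int> missing;
    for (int i = 1; i <= 10; ++i) missing.insert(i);

    // Remove the given numbers; no sorting needed.
    std::vector<int> given = {1, 2, 4, 5, 7, 8, 9, 10};
    for (int x : given) missing.erase(x);

    // What's left are the missing numbers.
    for (int m : missing) std::cout << m << ' '; // prints: 3 6
    std::cout << '\n';
}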

My question: given that the [...] cases converge at roughly
something larger than O(n log n) [...]
In 2011 (after you posted this question) Caf posted a simple answer that solves the problem in O(n) time and O(k) space [where the array size is n - k].
Importantly, unlike in other solutions, Caf's answer has no hidden memory requirements (using bit arrays, adding numbers to elements, multiplying elements by -1 -- these would all require O(log(n)) space).
Note: The question here (and the original question) didn't ask about the streaming version of the problem, and the answer here doesn't handle that case.
Regarding the other answers: I agree that many of the proposed "solutions" to this problem have dubious complexity claims, and if their time complexities aren't better in some way than either:
count sort (O(n) time and space)
compare (heap) sort (O(n*log(n)) time, O(1) space)
...then you may as well just solve the problem by sorting.
However, we can get better complexities (and more importantly, genuinely faster solutions):

Because the numbers are taken from a small, finite range, they can be 'sorted' in linear time.
All we do is initialize an array of 100 booleans, and for each input, set the boolean corresponding to each number in the input, and then step through reporting the unset booleans.
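As a sketch (generalizing the 100 to an arbitrary N; the function name is my own):

#include <iostream>
#include <vector>

// Report the numbers in [1, N] absent from the input: O(N) time and space.
void reportMissing(const std::vector<int>& input, int N) {
    std::vector<bool> seen(N + 1, false);
    for (int x : input) seen[x] = true;  // 'sort' by marking
    for (int i = 1; i <= N; ++i)
        if (!seen[i]) std::cout << i << ' ';
    std::cout << '\n';
}

int main() {
    reportMissing({1, 2, 4, 5, 7, 8, 9, 10}, 10); // prints: 3 6
}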

If there are N elements in total, where each number x satisfies 1 <= x <= N, then we can solve this in O(n log n) time and O(1) space.
First sort the array in place (e.g. heapsort; note that mergesort would cost O(n) extra space).
Scan through the sorted array: if the difference between the previously scanned number a and the current number b is 2 (b - a = 2), then the missing number is a + 1. This extends to the case where b - a > 2.
Time complexity is O(n log n) + O(n), which is essentially O(n log n).
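A sketch of this sort-then-scan approach in C++ (this version also handles the b - a > 2 case by printing whole runs of missing values; missing values at the very ends of the range would need an extra check):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = {10, 2, 8, 1, 5, 4, 9, 7}; // [1,10] minus {3, 6}
    std::sort(a.begin(), a.end());                  // O(n log n)

    // O(n) scan: any gap between neighbours is a run of missing numbers.
    for (std::size_t i = 1; i < a.size(); ++i)
        for (int m = a[i - 1] + 1; m < a[i]; ++m)
            std::cout << m << ' ';
    std::cout << '\n'; // prints: 3 6
}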

I already answered it HERE
You can also create a boolean array of size last_element_in_the_existing_array + 1.
In one for loop, mark as true every element that is present in the existing array.
In another for loop, print the indices that still contain false, i.e. the missing ones.
Time complexity: O(last_element_in_the_existing_array)
Space complexity: O(last_element_in_the_existing_array)

If the range is given to you in advance (here [1,10]), you can XOR your range with the numbers given to you. Since XOR is commutative and x XOR x = 0, every number present in both lists cancels out:
(1 2 3 4 5 6 7 8 9 10) XOR (1 2 4 5 7 8 9 10) leaves 3 XOR 6
With one missing number the result is that number itself; with two missing numbers, as here, you are left with their combined XOR and need one more bit-partition pass (split by any set bit of the XOR and XOR each bucket separately) to recover {3, 6}.
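A sketch of the single-missing-number case in C++ (the two-missing case needs the extra partition step just mentioned):

#include <iostream>
#include <vector>

int main() {
    const int N = 10;
    std::vector<int> given = {1, 2, 4, 5, 6, 7, 8, 9, 10}; // 3 is missing

    int acc = 0;
    for (int i = 1; i <= N; ++i) acc ^= i; // XOR of the full range
    for (int x : given) acc ^= x;          // pairs cancel: x ^ x == 0

    std::cout << acc << '\n'; // prints: 3
}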

Related

Performance: find the indices of the max value in an array (ties allowed)

Just as the title says; BTW, it's just out of curiosity, not a homework question. It might seem trivial to people with a CS major. The problem: I would like to find the indices of the max value in an array. Basically I have two approaches.
scan over and find the maximum, then scan a second time to collect the vector of indices
scan over and find the maximum; along this scan, construct the indices array and abandon it whenever a better value appears.
May I know how I should weigh these two approaches in terms of performance (mainly time complexity, I suppose)? It is hard for me because I have no idea what the worst case would even be for the second approach! It's not a hard problem per se, but I just want to know how to approach this type of problem, or how I should google it to find the answer.
In terms of complexity:
scan over and find the maximum,
then scan twice to get the vector of indices
First scan is O(n).
Second scan is O(n) + k insertions (with k the number of occurrences of the max value).
vector::push_back has amortized complexity of O(1),
so a total of O(2 * n + k), which simplifies to O(n) since k <= n.
scan over and find the maximum,
along this scan construct indices array and abandon if a better one is there.
Scan is O(n).
The number of insertions is more complicated to compute.
The number of clears (and the number of elements cleared) is more complicated to compute too (a clear's complexity is at most the number of elements removed).
But both are bounded above by n, so the complexity is at most O(3 * n) = O(n); it is also at least O(n) (the scan), so it is O(n) too.
So for both methods, complexity is the same: O(n).
For performance timing, as always, you have to measure.
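For concreteness, a hedged C++ sketch of the second (single-scan) method; the function name is illustrative:

#include <iostream>
#include <vector>

// One pass: keep indices of the current max, clearing them on a new max.
std::vector<std::size_t> maxIndices(const std::vector<int>& a) {
    std::vector<std::size_t> idx;
    int best = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (idx.empty() || a[i] > best) {
            best = a[i];
            idx.clear();      // a better value invalidates earlier indices
            idx.push_back(i);
        } else if (a[i] == best) {
            idx.push_back(i); // a tie: remember this index too
        }
    }
    return idx;
}

int main() {
    for (std::size_t i : maxIndices({1, 3, 2, 3, 3})) std::cout << i << ' ';
    std::cout << '\n'; // prints: 1 3 4
}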
For the single-scan method, you can set a condition for adding an index to the array: whenever the max changes, you clear the array. You don't need to iterate twice.
For the two-scan method, the implementation is easier: you find the max on the first pass, then collect the matching indices on the second.
As stated in a previous answer, complexity is O(n) in both cases, and measures are needed to compare performances.
However, I would like to add two points:
The first one is that the performance comparison may depend on the compiler and on how optimisation is performed.
The second point is more critical: performance may depend on the input array.
For example, let us consider the corner case 1, 1, 1, ..., 1, 2, i.e. a huge number of 1s followed by one 2. With your second approach, you will create a huge temporary array of indices only to return an array of one element at the end. It is possible to shrink the memory allocated to this array afterwards, but I don't like the idea of creating an unnecessarily huge temporary vector, independently of the time-performance concern. Note that such an array could suffer several reallocations, which would hurt time performance.
This is why, in the general case, without any knowledge of the input, I would prefer your first approach, two scans. The situation could be different if you were implementing a function dedicated to a specific type of data.

Is it possible for 1 to be O(n)

I'm learning O-notation, and I thought that 1 is O(1) because, since 1 is a constant, its Big-O would be 1. However, I'm reading that it can be O(n) as well. How is this possible? Would it be because, if n = 1, they are the same?
Yes, a function that is O(1) is also O(n) -- and O(n^2), and O(e^n), and so on. (And mathematically, you can think of 1 as a function that always has the same value.)
If you look at the formal definition of Big-O notation, you'll see (roughly stated) that a function f(x) is O(g(x)) if some constant multiple of g(x) eventually bounds f(x) as x goes to infinity. Quoting the linked article:
Let f and g be two functions defined on some subset of the real numbers. One writes
f(x) = O(g(x)) as x → ∞
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| ≤ M |g(x)| for all x ≥ x0.
However, we rarely say that an O(1) function or algorithm is O(n), since saying it's O(n) is misleading and doesn't convey as much information. For example, we say that the Quicksort algorithm is O(n log n), and Bubblesort is O(n^2). It's strictly true that Quicksort is also O(n^2), but there's no point in saying so -- largely because a lot of people aren't familiar with the exact mathematical meaning of Big-O notation.
There's another notation called Big Theta (Θ) which applies tighter bounds.
Actually, Big O Notation shows how the program complexity (it may be time, memory etc.) depends on the problem size.
O(1) means that the program complexity is independent of problem size. e.g. Accessing an array element. No matter which index you select, the time of accessing will be independent of the index.
O(n) means that the program complexity depends linearly on the problem size, e.g. linearly searching for an element in an array: you need to traverse most of the elements of the array, and in the worst case, if the element is not present, you will traverse the complete array.
If we increase the size of the array, the complexity (say, time complexity) grows accordingly: it will take more time to traverse 100 elements than to traverse only 10.
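For example (a minimal sketch in C++):

#include <vector>

// O(1): a single array access, independent of the array's size.
int firstElement(const std::vector<int>& a) {
    return a[0];
}

// O(n): linear search may have to touch every element in the worst case.
int indexOf(const std::vector<int>& a, int target) {
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] == target) return static_cast<int>(i);
    return -1; // not found: traversed the complete array
}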
I hope this helps you a bit.
Big O notation is a mathematical way to describe the behaviour of a function near a point or towards infinity. It is a tool used in computer science to analyse the complexity of an algorithm, which helps you judge whether your algorithm suits your situation and processes your logic in a reasonable time.
But that does not answer your question. Asking what happens if n equals 1 doesn't quite make sense in Big O notation: as the name says, it's a notation, not a way of calculating in mathematics. Big O notation evaluates the behaviour of the algorithm near infinity to tell which part of it is the most significant. For example, if an algorithm's behaviour can be represented by the function 2x^2 + 3x, Big O says to take each part of this function, evaluate it near infinity, and keep the most significant one. Evaluating 2x^2 and 3x, we see that 2x^2 grows far faster than 3x. So, eliminating the coefficients (which are not the variable part of the function), we are left with two complexities, O(x^2) and O(x), i.e. O(n^2) and O(n); the most significant is O(n^2).
It's the same thing if, inside a piece of code, you have two complexities, O(1) and O(n): the O(n) part determines your algorithm's complexity.
But if an O(n) process handles only one element, its behaviour is equivalent to O(1). That doesn't mean, however, that your algorithm has O(1) complexity.

Linear algorithm to find minimum subset sum over a threshold

I have a collection of N positive integers, each bounded by a (relatively small) constant C. I want to find a subset of these numbers with the smallest sum greater than (or equal to) a value K.
The numbers involved aren't terribly large (<100), but I need good performance even in the worst case. I thought maybe I could adapt Pisinger's dynamic programming algorithm to the task; it runs in O(NC) time, and I happen to meet the requirements of bounded, positive numbers.
[Edit]: The numbers are not sorted and there may be duplicates.
However, I don't understand the algorithm well enough to do this myself. In fact, I'm not even certain if the assumptions it is based on still hold...
-Is it possible to adapt this algorithm to my needs?
-Or is there another linear algorithm I could use that is similarly efficient?
-Could anyone provide pseudocode or a detailed explanation?
Thanks.
Link to the Subset-Sum code I was investigating:
Fast solution to Subset sum algorithm by Pisinger
(Apologies if this is poorly worded/formatted/etc. I'm still new to StackOverflow...)
Pisinger's algorithm gives you the largest sum less than or equal to the capacity of the knapsack. To solve your problem, use Pisinger to figure out what not to put in the subset. Formally, let the items be w_1, ..., w_n and the minimum be K. Give w_1, ..., w_n and w_1 + ... + w_n - K to Pisinger, then take every item that Pisinger does not.
Well one solution is to:
T = {0}
for x in V:
    for t in T:
        T.insert(x + t)
for i in K .. max(T):
    if T.contains(i):
        return i
fail
This gives you the sum achieved by the subset, and you can adapt it to output the members.
The maximum size of T is O(N) (because of the C bound), so the running time is O(N^2) and the space is O(N). You can use a bit array of length N*C as the backing store of T.
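A hedged C++ sketch of this dynamic program, using a bitset of reachable sums (MAXSUM is an assumed compile-time bound on N*C; the shift-or adds x to every previously reachable sum at once):

#include <bitset>
#include <iostream>
#include <vector>

int main() {
    const int MAXSUM = 10000;             // assumed bound: N * C
    std::vector<int> v = {7, 3, 9, 2, 8}; // the collection
    const int K = 15;                     // the threshold

    std::bitset<MAXSUM + 1> reach;
    reach[0] = true;
    for (int x : v)
        reach |= reach << x; // every old sum, plus x, becomes reachable

    // Smallest reachable sum >= K.
    for (int s = K; s <= MAXSUM; ++s)
        if (reach[s]) { std::cout << s << '\n'; return 0; } // prints: 15
    std::cout << "no subset sums to K or more\n";
}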

Find that unique element from the 10^5 array size [duplicate]

What would be the best algorithm for finding a number that occurs only once in a list which has all other numbers occurring exactly twice.
So, in the list of integers (lets take it as an array) each integer repeats exactly twice, except one. To find that one, what is the best algorithm.
The fastest (O(n)) and most memory efficient (O(1)) way is with the XOR operation.
In C:
#include <stdio.h>

int main(void) {
    int arr[] = {3, 2, 5, 2, 1, 5, 3};
    int num = 0, i;
    for (i = 0; i < 7; i++)
        num ^= arr[i];
    printf("%i\n", num);
    return 0;
}
This prints "1", which is the only one that occurs once.
This works because the first time you hit a number it marks the num variable with itself, and the second time it unmarks num with itself (more or less). The only one that remains unmarked is your non-duplicate.
By the way, you can expand on this idea to very quickly find two unique numbers among a list of duplicates.
Let's call the unique numbers a and b. First take the XOR of everything, as Kyle suggested. What we get is a^b. We know a^b != 0, since a != b. Choose any 1 bit of a^b, and use that as a mask -- in more detail: choose x as a power of 2 so that x & (a^b) is nonzero.
Now split the list into two sublists -- one sublist contains all numbers y with y&x == 0, and the rest go in the other sublist. By the way we chose x, we know that a and b are in different buckets. We also know that each pair of duplicates is still in the same bucket. So we can now apply ye olde "XOR-em-all" trick to each bucket independently, and discover what a and b are completely.
Bam.
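A sketch of that two-unique-numbers trick in C++ (names are illustrative):

#include <iostream>
#include <utility>
#include <vector>

// Every value occurs twice except a and b; recover both in O(n) time.
std::pair<int, int> twoUnique(const std::vector<int>& arr) {
    int xorAll = 0;
    for (int v : arr) xorAll ^= v; // pairs cancel, leaving a ^ b

    int mask = xorAll & -xorAll;   // lowest set bit: a and b differ here

    int a = 0, b = 0;
    for (int v : arr) {            // split by that bit, XOR each bucket
        if (v & mask) a ^= v;
        else          b ^= v;
    }
    return {a, b};
}

int main() {
    auto [a, b] = twoUnique({3, 2, 5, 2, 9, 5, 3, 1});
    std::cout << a << ' ' << b << '\n'; // prints 9 and 1, in some order
}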
O(N) time, O(N) memory
HT = hash table
HT.clear()
go over the list in order; for each item you see:
    if HT.contains(item):
        HT.remove(item)
    else:
        HT.add(item)
at the end, the item left in the HT is the item you are looking for.
Note (credit @Jared Updike): this system will find all items occurring an odd number of times.
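In C++ the same toggle is a few lines with std::unordered_set (a sketch):

#include <iostream>
#include <unordered_set>
#include <vector>

int main() {
    std::vector<int> arr = {3, 2, 5, 2, 1, 5, 3};

    std::unordered_set<int> seen;
    for (int x : arr)
        if (!seen.erase(x)) // erase returns 0 if x wasn't there yet...
            seen.insert(x); // ...in which case this is the first sighting

    for (int x : seen) std::cout << x << '\n'; // prints: 1
}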
Comment: I don't see how people can vote up solutions that give N log N performance; in which universe is that "better"?
I am even more shocked that the accepted answer is an N log N solution...
I do agree, however, that if memory is required to be constant, then N log N would be (so far) the best solution.
Kyle's solution would obviously not catch situations where the data set does not follow the rules. If all numbers were in pairs, the algorithm would return zero, the exact same value as if zero were the only value with a single occurrence.
If there were multiple single-occurrence values, or triples, the result would be erroneous as well.
Testing the data set might well end up with a more costly algorithm, either in memory or in time.
Csmba's solution does detect some erroneous data sets (none, or more than one, single-occurrence value), but not others (quadruples). Regarding his solution, depending on the implementation of HT, either memory and/or time is more than O(n).
If we cannot be sure about the correctness of the input set, sorting and counting, or using a hash table that counts occurrences with the integer itself as the hash key, would both be feasible.
I would say that using a sorting algorithm and then going through the sorted list to find the number is a good way to do it.
And now the problem is finding "the best" sorting algorithm. There are a lot of sorting algorithms, each of them with its strong and weak points, so this is quite a complicated question. The Wikipedia entry seems like a nice source of info on that.
Implementation in Ruby:
a = [1,2,3,4,123,1,2,.........]
t = a.length - 1
for i in 0..t
  s = a.index(a[i]) + 1
  b = a[s..t]
  w = b.include? a[i]
  if w == false
    puts a[i]
  end
end
You need to specify what you mean by "best" - to some, speed is all that matters and would qualify an answer as "best" - for others, they might forgive a few hundred milliseconds if the solution was more readable.
"Best" is subjective unless you are more specific.
That said:
Iterate through the numbers; for each number, search the list for it, and when you reach the number whose search returns only one result, you are done.
Seems like the best you could do is to iterate through the list, and for every item add it to a list of "seen" items, or else remove it from the "seen" list if it's already there; at the end, your "seen" list will contain the singular element. This is O(n) in time and O(n) in space (in the worst case; it will be much better if the list is sorted).
The fact that they're integers doesn't really factor in, since there's nothing special you can do with adding them up... is there?
I don't understand why the selected answer is "best" by any standard. O(N*lgN) > O(N), and it changes the list (or else creates a copy of it, which is still more expensive in space and time). Am I missing something?
That depends on how large/small/diverse the numbers are, though. A radix sort might be applicable, which would reduce the sorting time of the O(N log N) solution by a large degree.
The sorting method and the XOR method have the same time complexity. The XOR method is only O(n) if you assume that bitwise XOR of two strings is a constant time operation. This is equivalent to saying that the size of the integers in the array is bounded by a constant. In that case you can use Radix sort to sort the array in O(n).
If the numbers are not bounded, then bitwise XOR takes time O(k) where k is the length of the bit string, and the XOR method takes O(nk). Now again Radix sort will sort the array in time O(nk).
You could simply put the elements in the set into a hash until you find a collision. In ruby, this is a one-liner.
def find_dupe(array)
  h = {}
  array.detect { |e| h[e] || (h[e] = true; false) }
end
So, find_dupe([1,2,3,4,5,1]) would return 1.
This is actually a common "trick" interview question though. It is normally about a list of consecutive integers with one duplicate. In this case the interviewer is often looking for you to use the Gaussian sum of n-integers trick e.g. n*(n+1)/2 subtracted from the actual sum. The textbook answer is something like this.
def find_dupe_for_consecutive_integers(array)
  n = array.size - 1 # subtract one from array.size because of the dupe
  array.sum - n * (n + 1) / 2
end

Algorithm to find a duplicate entry in constant space and O(n) time

Given an array of N integers in which exactly one integer is repeated, find the repeated integer in O(n) time and constant space. There is no bound on the value of the integers or on N.
For example given an array of 6 integers as 23 45 67 87 23 47. The answer is 23
(I hope this covers ambiguous and vague part)
I searched the net but was unable to find any such question in which the range of the integers was not fixed.
Also, here is an example that answers a similar question to mine, except that there a hash table is sized by the highest integer value in C++; C++ does not allow creating an array with 2^64 elements (on a 64-bit computer).
I am sorry I didn't mention it before: the array is immutable.
Jun Tarui has shown that any duplicate finder using O(log n) space requires at least Ω(log n / log log n) passes, which exceeds linear time. I.e. your question is provably unsolvable even if you allow logarithmic space.
There is an interesting algorithm by Gopalan and Radhakrishnan that finds duplicates in one pass over the input and O((log n)^3) space, which sounds like your best bet a priori.
Radix sort has time complexity O(kn), where k > log_2 n often gets viewed as a constant, albeit a large one. You obviously cannot implement a radix sort in constant space, but you could perhaps reuse your input data's space.
There are numerical tricks if you assume features about the numbers themselves. If almost all numbers between 1 and n are present, then simply add them up and subtract n(n+1)/2. If all the numbers are primes, you could cheat by ignoring the running time of division.
As an aside, there is a well-known lower bound of Ω(log_2(n!)) on comparison sorting, which suggests that google might help you find lower bounds on simple problems like finding duplicates as well.
If the array isn't sorted, you can only do it in O(nlogn).
Some approaches can be found here.
If the range of the integers is bounded, you can perform a counting sort variant in O(n) time. The space complexity is O(k) where k is the upper bound on the integers(*), but that's a constant, so it's O(1).
If the range of the integers is unbounded, then I don't think there's any way to do this, but I'm not an expert at complexity puzzles.
(*) It's O(k) since there's also a constant upper bound on the number of occurrences of each integer, namely 2.
In the case where the entries are bounded by the length of the array, then you can check out Find any one of multiple possible repeated integers in a list and the O(N) time and O(1) space solution.
The generalization you mention is discussed in this follow up question: Algorithm to find a repeated number in a list that may contain any number of repeats and the O(n log^2 n) time and O(1) space solution.
The approach that would come closest to O(N) in time is probably a conventional hash table, where the hash entries are simply the numbers, used as keys. You'd walk through the list, inserting each entry in the hash table, after first checking whether it was already in the table.
Not strictly O(N), however, since hash search/insertion gets slower as the table fills up. And in terms of storage it would be expensive for large lists -- at least 3x and possibly 10-20x the size of the array of numbers.
As was already mentioned by others, I don't see any way to do it in O(n).
However, you can try a probabilistic approach by using a Bloom Filter. It will give you O(n) if you are lucky.
Since extra space is not allowed, this can't be done without comparisons. The lower bound on the time complexity of comparison sorting can be applied here to prove that the problem in its original form can't be solved in O(n) in the worst case.
We can also do it by sorting first, although the sort makes this O(n log n) rather than linear:

import java.util.Arrays;

public class DuplicateInOnePass {

    public static void duplicate() {
        int[] ar = {6, 7, 8, 8, 7, 9, 9, 10};
        Arrays.sort(ar);
        // After sorting, any duplicate sits next to its twin.
        for (int i = 0; i < ar.length - 1; i++) {
            if (ar[i] == ar[i + 1])
                System.out.println("Duplicate element: " + ar[i]);
        }
    }

    public static void main(String[] args) {
        duplicate();
    }
}