A large array array[n] of integers is given as input, along with two indices start and end. The goal is to find, very quickly, the min and max of the range [start, end] (inclusive) and the max of the rest of the array (excluding [start, end]).
For example:
array - 3 4 2 2 1 3 12 5 7 9 7 10 1 5 2 3 1 1
start,end - 2,7
min,max in [2,7] -- 1,12
max in rest - 10
I cannot think of anything better than linear. But this is not good enough as n is of order 10^5 and the number of such find operations is also of the same order.
Any help would be highly appreciated.
The way I understand your question is that you want to do some preprocessing on a fixed array that then makes your find max operation very fast.
This answer describes an approach that does O(n log n) preprocessing work, followed by O(1) work for each query.
Preprocessing: O(n log n)
The idea is to prepare two 2d arrays BIG[a,k] and SMALL[a,k] where
1. BIG[a,k] is the max of the 2^k elements starting at a
2. SMALL[a,k] is the min of the 2^k elements starting at a
You can compute these arrays incrementally, starting at k == 0 and building each higher level by combining two entries from the previous level:
BIG[a,k] = max(BIG[a,k-1] , BIG[a+2^(k-1),k-1])
SMALL[a,k] = min(SMALL[a,k-1] , SMALL[a+2^(k-1),k-1])
Lookup: O(1) per query
You are then able to instantly find the max and min for any range by combining two of the precomputed answers.
Suppose you want to find the max for elements from 100 to 133.
You already know the max of 32 elements 100 to 131 (in BIG[100,5]) and also the max of 32 elements from 102 to 133 (in BIG[102,5]) so you can find the largest of these to get the answer.
The same logic applies for the minimum. You can always find two overlapping prepared answers that will combine to give the answer you need.
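A minimal sketch of this sparse-table idea in C++ (the struct and function names are my own, not from the answer):
#include <vector>
#include <algorithm>
#include <cstdio>

// big[k][a] / small[k][a] hold the max / min of the 2^k elements starting at index a.
struct MinMaxSparseTable {
    std::vector<std::vector<int>> big, small;

    explicit MinMaxSparseTable(const std::vector<int>& v) {
        int n = v.size(), levels = 1;
        while ((1 << levels) <= n) ++levels;
        big.assign(levels, v);      // level 0 is the array itself; higher levels are filled below
        small.assign(levels, v);
        for (int k = 1; k < levels; ++k)
            for (int a = 0; a + (1 << k) <= n; ++a) {
                big[k][a]   = std::max(big[k-1][a],   big[k-1][a + (1 << (k-1))]);
                small[k][a] = std::min(small[k-1][a], small[k-1][a + (1 << (k-1))]);
            }
    }

    // Max / min over the inclusive range [lo, hi], combining two overlapping blocks.
    int rangeMax(int lo, int hi) const {
        int k = 31 - __builtin_clz(hi - lo + 1);   // largest k with 2^k <= range length
        return std::max(big[k][lo], big[k][hi - (1 << k) + 1]);
    }
    int rangeMin(int lo, int hi) const {
        int k = 31 - __builtin_clz(hi - lo + 1);
        return std::min(small[k][lo], small[k][hi - (1 << k) + 1]);
    }
};

int main() {
    std::vector<int> a = {3,4,2,2,1,3,12,5,7,9,7,10,1,5,2,3,1,1};
    MinMaxSparseTable t(a);
    std::printf("%d %d\n", t.rangeMin(2, 7), t.rangeMax(2, 7));   // prints "1 12"
}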
You're asking for a data structure that will answer min and max queries for intervals on an array quickly.
You want to build two segment trees on your input array; one for answering interval minimum queries and one for answering interval maximum queries. This takes linear preprocessing, linear extra space, and allows queries to take logarithmic time.
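A compact sketch of one such segment tree, shown here for range maximum only (the minimum tree is identical with min and an INT_MAX sentinel); the struct name and layout are my own:
#include <vector>
#include <algorithm>
#include <climits>
#include <cstdio>

// Iterative segment tree for range-maximum queries on a fixed array.
struct MaxSegTree {
    int n;
    std::vector<int> t;   // t[n..2n-1] are the leaves, t[1..n-1] are internal nodes

    explicit MaxSegTree(const std::vector<int>& v) : n(v.size()), t(2 * v.size(), INT_MIN) {
        for (int i = 0; i < n; ++i) t[n + i] = v[i];
        for (int i = n - 1; i >= 1; --i) t[i] = std::max(t[2 * i], t[2 * i + 1]);
    }

    // Maximum over the inclusive range [l, r].
    int query(int l, int r) const {
        int res = INT_MIN;
        for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = std::max(res, t[l++]);
            if (r & 1) res = std::max(res, t[--r]);
        }
        return res;
    }
};

int main() {
    std::vector<int> a = {3,4,2,2,1,3,12,5,7,9,7,10,1,5,2,3,1,1};
    MaxSegTree st(a);
    std::printf("%d\n", st.query(2, 7));   // prints "12"
}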
I am afraid there is no faster way. Your data is completely random, so you have to go through every value.
Even sorting won't help you, because it is at best O(n log n), which is slower. You can't use bisection, because the data are not sorted. If you start building data structures (like a heap), it will again be O(n log n) at best.
If the array is very large, then split it into partitions and use threads to do a linear check of each partition. Then do min/max with the results from the threads.
Searching for min and max in an unsorted array can only be optimized by taking two values at a time and comparing them to each other first:
register int min, max, i;
min = max = array[0];
for (i = 1; i + 1 < length; i += 2)
{
    if (array[i] < array[i+1])
    {
        if (min > array[i])   min = array[i];
        if (max < array[i+1]) max = array[i+1];
    }
    else
    {
        if (min > array[i+1]) min = array[i+1];
        if (max < array[i])   max = array[i];
    }
}
if (i < length)   /* odd number of elements: one element left to check */
{
    if (min > array[i]) min = array[i];
    if (max < array[i]) max = array[i];
}
But I don't believe it's actually faster. Consider writing it in assembly.
EDIT:
When comparing strings, this algorithm could make the difference!
If you roughly know the min, you can test from x up to the min whether the value exists in the array. If you roughly know the max, you can test (backwards) from y down to the max; once a value exists in the array, you have found the max.
For example, from your array, I will assume you have only positive integers:
array - 3 4 2 2 1 3 12 5 7 9 7 10 1 5 2 3 1 1
You set x to 0 and test whether 0 exists; it doesn't, so you change it to 1, and you find 1. There is your min.
You set y to 15 (an arbitrarily large number): exists? No. Set to 14. Exists? No. Set to 13. Exists? No. Set to 12. Exists? Yes! There is your max! I just made 4 comparisons.
If y exists on the first try, you might have tested a value inside the array. So you test it again with y + length / 2. Assume you found the center of the array, so shift it a bit. If you again find the value on the first try, it might be within the array.
If you have negative and/or float values, this technique does not work :)
Of course it is not possible to have a sub-linear algorithm (as far as I know) to search the way you want. However, you can achieve sub-linear time in some cases by storing the min-max of fixed ranges; with some knowledge of the query range you can improve the search time.
E.g. if you know that 'most' of the time the search range will be, say, 10, then you can store the min-max of blocks of 10/2 = 5 elements separately and index those ranges. During a search you have to find the set of stored ranges that together cover the search range.
e.g. in the example
array - 3 4 2 2 1 3 12 5 7 9 7 10 1 5 2 3 1 1
start,end - 2,7
min,max in [2,7] -- 1,12
if you 'know' that most of the time the search range will be about 5 elements, then you can index the min-max beforehand in blocks of 5/2 = 2 elements:
0-1 min-max (3,4)
2-3 min-max (2,2)
4-5 min-max (1,3)
6-7 min-max (5,12)
...
I think this method will work better when the ranges are large, so that the stored min-max values save more searching.
To find the min-max of [2, 7] you look up the stored blocks 2/2 = 1 to 7/2 = 3:
the min of the mins (2, 1, 5) gives you the minimum (1) and the max of the maxes (2, 3, 12) gives you the maximum (12). If the query range only partially overlaps a block at either end, you have to search just those corner elements linearly. It could still avoid several searches, I think.
It is possible that this algorithm is slower than linear search (because linear search has a very good locality of reference) so I would advise you to measure them first.
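A rough sketch of this block idea in C++ (the block size, struct, and function names are my own; block size 2 is chosen only to match the example above):
#include <vector>
#include <algorithm>
#include <climits>
#include <cstdio>

// Fixed-size blocks of width B; blockMin/blockMax hold the min/max of each block.
// A query checks the partial (corner) elements at the two ends one by one and uses
// the precomputed values for every block fully covered by the range.
struct BlockMinMax {
    int B;
    std::vector<int> a, blockMin, blockMax;

    BlockMinMax(const std::vector<int>& v, int blockSize) : B(blockSize), a(v) {
        int nb = (a.size() + B - 1) / B;
        blockMin.assign(nb, INT_MAX);
        blockMax.assign(nb, INT_MIN);
        for (int i = 0; i < (int)a.size(); ++i) {
            blockMin[i / B] = std::min(blockMin[i / B], a[i]);
            blockMax[i / B] = std::max(blockMax[i / B], a[i]);
        }
    }

    // min and max over the inclusive range [lo, hi]
    std::pair<int, int> query(int lo, int hi) const {
        int mn = INT_MAX, mx = INT_MIN;
        int i = lo;
        while (i <= hi) {
            if (i % B == 0 && i + B - 1 <= hi) {       // whole block inside the range
                mn = std::min(mn, blockMin[i / B]);
                mx = std::max(mx, blockMax[i / B]);
                i += B;
            } else {                                   // corner element, check directly
                mn = std::min(mn, a[i]);
                mx = std::max(mx, a[i]);
                ++i;
            }
        }
        return {mn, mx};
    }
};

int main() {
    std::vector<int> a = {3,4,2,2,1,3,12,5,7,9,7,10,1,5,2,3,1,1};
    BlockMinMax b(a, 2);                    // block size 2, as in the example above
    auto [mn, mx] = b.query(2, 7);
    std::printf("%d %d\n", mn, mx);         // prints "1 12"
}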
Linear is the best you can do, and it's relatively easy to prove it.
Assume an infinite amount of instantaneous memory storage and costless access, just so we can ignore them.
Furthermore, we'll treat finding the min and finding the max as essentially the exact same mechanical problem: one just magically keeps track of the smaller number in each comparison, the other magically keeps track of the bigger one. This action is assumed to be costless.
Let's also assume away the min/max-of-the-sub-array part, because it is just the same problem as the min/max of any array, and magically assume it is solved as part of our general action of finding the max of the bigger array. We can do this by assuming that the biggest number in the entire array is, by some magical fluke, the first number we look at, that it is also the biggest number in the sub-array, and that it also happens to be the smallest number in the sub-array; we just don't happen to know how lucky we are. How can we find out?
The least work we have to do is one comparison between it and every other number in the array to prove it is the biggest/smallest. This is the only action we are assuming has a cost.
How many comparisons do we have to do? Let N be the length of the array; the total number of operations for any length N is N - 1. As we add elements to the array, the number of comparisons scales at the same rate, even if all of our wildly outrageous assumptions held true.
So we've arrived at the point where N is both the length of the array, and the determinant of the increasing cost of the best possible operation in our wildly unrealistic best case scenario.
Your operation scales with N in the best case scenario. I'm sorry.
/sorting the inputs must be more expensive than this minimal operation, so it would only be applicable if you were doing the operation multiple times, and had no way of storing the actual results, which doesn't seem likely, because 10^5 answers is not exactly taxing.
//multithreading and the like is all well and good too, just assume away any cost of doing so, and divide N by the number of threads. The best algorithm possible still scales linearly however.
///I'm guessing it would in fact have to be a particularly curious phenomenon for anything to ever scale better than linearly without assuming things about the data...stackoverflowers?
Related
I am looking to generate derangements uniformly at random. In other words: shuffle a vector so that no element stays in its original place.
Requirements:
uniform sampling (each derangement is generated with equal probability)
a practical implementation is faster than the rejection method (i.e. keep generating random permutations until we find a derangement)
None of the answers I found so far are satisfactory in that they either don't sample uniformly (or fail to prove uniformity) or do not make a practical comparison with the rejection method. About 1/e ≈ 37% of permutations are derangements, which gives a clue about what performance one might expect at best relative to the rejection method.
The only reference I found which makes a practical comparison is in this thesis which benchmarks 7.76 s for their proposed algorithm vs 8.25 s for the rejection method (see page 73). That's a speedup by a factor of only 1.06. I am wondering if something significantly better (> 1.5) is possible.
I could implement and verify various algorithms proposed in papers, and benchmark them. Doing this correctly would take quite a bit of time. I am hoping that someone has done it, and can give me a reference.
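For concreteness, here is a minimal sketch of the rejection baseline I am comparing against (shuffle until no element is fixed); the function name and setup are mine:
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Keep shuffling until no element remains in its original position.
// About a fraction 1/e of shuffles succeed, so the expected number of
// shuffles per derangement is e (roughly 2.72). Assumes n >= 2.
std::vector<int> derangement_by_rejection(int n, std::mt19937& rng) {
    std::vector<int> p(n);
    while (true) {
        std::iota(p.begin(), p.end(), 0);
        std::shuffle(p.begin(), p.end(), rng);
        bool ok = true;
        for (int i = 0; i < n; ++i)
            if (p[i] == i) { ok = false; break; }
        if (ok) return p;
    }
}

int main() {
    std::mt19937 rng(12345);
    std::vector<int> d = derangement_by_rejection(10, rng);
    (void)d;   // d is now a uniformly random derangement of 0..9
}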
Here is an idea for an algorithm that may work for you. Generate the derangement in cycle notation. So (1 2) (3 4 5) represents the derangement 2 1 4 5 3. (That is (1 2) is a cycle and so is (3 4 5).)
Put the first element in the first place (in cycle notation you can always do this) and take a random permutation of the rest. Now we just need to find out where the parentheses go for the cycle lengths.
As https://mathoverflow.net/questions/130457/the-distribution-of-cycle-length-in-random-derangement notes, in a permutation a random cycle is uniformly distributed in length. Cycle lengths are not uniformly distributed in derangements. But the number of derangements of length m is m!/e rounded up for even m and down for odd m. So what we can do is pick a length uniformly distributed in the range 2..n and accept it with the probability that the remaining elements would, proceeding randomly, form a derangement. This cycle length will then be correctly distributed. And once we have the first cycle length, we repeat for the next until we are done.
The procedure done the way I described is simpler to implement but mathematically equivalent to taking a random derangement (by rejection), and writing down the first cycle only. Then repeating. It is therefore possible to prove that this produces all derangements with equal probability.
With this approach done naively, we will be taking an average of 3 rolls before accepting a length. However we then cut the problem in half on average. So the number of random numbers we need to generate for placing the parentheses is O(log(n)). Compared with the O(n) random numbers for constructing the permutation, this is a rounding error. However it can be optimized by noting that the highest probability for accepting is 0.5. So if we accept with twice the probability of randomly getting a derangement if we proceeded, our ratios will still be correct and we get rid of most of our rejections of cycle lengths.
If most of the time is spent in the random number generator, for large n this should run at approximately 3x the rate of the rejection method. In practice it won't be as good because switching from one representation to another is not actually free. But you should get speedups of the order of magnitude that you wanted.
This is just an idea, but I think it can produce uniformly distributed derangements.
You need a helper buffer of at most around N/2 elements, where N is the number of items to be arranged.
First, choose a random(1, N) position for value 1.
Note: 1 to N instead of 0 to N-1 for simplicity.
Then for value 2, the position will be random(1, N-1) if value 1 fell on position 2, and random(1, N-2) otherwise.
The algorithm walks the list and counts only the not-yet-used positions until it reaches the chosen random position for value 2; position 2 itself is of course skipped.
For value 3, the algorithm checks whether position 3 is already used. If it is used, pos3 = random(1, N-2); if not, pos3 = random(1, N-3).
Again, the algorithm walks the list and counts only the not-yet-used positions until the count reaches pos3, and then places value 3 there.
This continues for the remaining values until all of them have been placed.
And that, I think, will generate derangements with uniform probability.
The optimization effort should focus on how to reach pos# quickly.
Instead of walking the list and counting the not-yet-used positions one by one, the algorithm could use some heap-like search over the unused positions, or any other method. This is a separate problem to solve: how to reach an unused item given its position-count within the list of unused items.
I'm curious ... and mathematically uninformed. So I ask innocently, why wouldn't a "simple shuffle" be sufficient?
for i from array_size - 1 downto 1:   # assume zero-based arrays
    j = random(0, i-1)
    swap_elements(i, j)
Since the random function will never produce a value equal to i it will never leave an element where it started. Every element will be moved "somewhere else."
Let d(n) be the number of derangements of an array A of length n.
d(n) = (n-1) * (d(n-1) + d(n-2))
The d(n) derangements are achieved by:
1. First, swapping A[0] with one of the remaining n-1 elements.
2. Next, either deranging all n-1 remaining elements, or deranging the n-2 remaining elements that exclude whichever index received A[0] in step 1.
How can we generate a derangement uniformly at random?
1. Perform the swap of step 1 above.
2. Randomly decide which path we're taking in step 2,
with probability d(n-1)/(d(n-1)+d(n-2)) of deranging all remaining elements.
3. Recurse down to derangements of size 2-3 which are both precomputed.
Wikipedia has d(n) = floor(n!/e + 0.5) (exactly). You can use this to calculate the probability of step 2 exactly in constant time for small n. For larger n the factorial can be slow, but all you need is the ratio. It's approximately (n-1)/n. You can live with the approximation, or precompute and store the ratios up to the max n you're considering.
Note that (n-1)/n converges very quickly.
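A small sketch of the bookkeeping this needs, computing d(n) from the recurrence and the step-2 branch probability; the use of doubles and the cutoff are my own choices, since only the ratio matters:
#include <cstdio>
#include <vector>

// d[n] = number of derangements of n elements: d(n) = (n-1) * (d(n-1) + d(n-2)).
// Doubles suffice here because only the ratio d(n-1) / (d(n-1) + d(n-2)) is needed;
// they overflow past n of roughly 170, where the (n-1)/n approximation mentioned
// above is already extremely accurate.
int main() {
    const int N = 20;
    std::vector<double> d(N + 1);
    d[0] = 1; d[1] = 0;
    for (int n = 2; n <= N; ++n)
        d[n] = (n - 1) * (d[n - 1] + d[n - 2]);

    for (int n = 3; n <= N; ++n) {
        double p = d[n - 1] / (d[n - 1] + d[n - 2]);   // probability of deranging all n-1
        std::printf("n=%2d  p=%.6f  (n-1)/n=%.6f\n", n, p, (n - 1.0) / n);
    }
}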
I would like to write an algorithm to find the min and max of 100000 arrays of size 1000, each containing random numbers from 1 to 1000. The algorithm is supposed to return the average number of comparisons.
Suppose I use a naive solution with a complexity of O(n): what should the average number of comparisons be, 1999 or 2000 (for min and max)?
I would also like to ask how to create a random array in C++.
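For the side question about creating a random array in C++, a minimal sketch using <random> (the function name and seed handling are illustrative):
#include <random>
#include <vector>

// Fill a vector with uniformly distributed random integers in [1, 1000].
std::vector<int> make_random_array(std::size_t size, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> dist(1, 1000);
    std::vector<int> a(size);
    for (int& x : a) x = dist(rng);
    return a;
}

int main() {
    std::vector<int> a = make_random_array(1000, 42);   // one array of size 1000
    (void)a;
}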
You have to compare every element twice (once to the current min, once to the current max).
That's not "naive", it's the optimal way to find min and max of unsorted numbers.
There is a big difference between naive and simple/optimal. It's always good to look for other solutions, but you are not always going to find a better or more optimal one.
As for your question: you have to compare each element twice, once against the min and once against the max.
Well, I disagree with Sid's answer that checking each element against the max and the min is optimal.
First, you can take the first two elements, compare them, and set one as the minimum and the other as the maximum.
Then, in a loop, you can take 2 elements at a time, compare them with each other, and check the lower one against the minimum and the bigger one against the maximum.
That way you spend only 3 comparisons per 2 elements.
It is better than checking each number against both the minimum and the maximum, which costs 4 comparisons per 2 elements.
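For reference, the standard library's std::minmax_element already uses this pairing trick; the C++ standard guarantees at most floor(3(N-1)/2) comparisons. A minimal usage sketch:
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> a = {5, 3, 8, 1, 9, 2};
    // Returns a pair of iterators to the smallest and the largest element.
    auto [mn, mx] = std::minmax_element(a.begin(), a.end());
    std::printf("min=%d max=%d\n", *mn, *mx);   // prints "min=1 max=9"
}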
I'm studying binary trees and I have a problem with this homework.
I have to use binary trees to solve this problem.
Here is the problem:
You are given a list of integers. You then need to answer a number of questions of the form: "What is the maximum value of the elements of the list between index A and index B?".
example :
INPUT :
10
2 4 3 5 7 19 3 8 6 7
4
1 5
3 6
8 10
3 9
OUTPUT:
7
19
8
19
TIME LIMITS AND MEMORY (Language: C++)
Time: 0.5s on a 1GHz machine.
Memory: 16000 KB
CONSTRAINTS
1 <= N <= 100000, where N is the number of elements in the list.
1 <= A, B <= N, where A, B are the limits of a range.
1 <= I <= 10 000, where I is the number of intervals.
Please do not give me the solution, just a hint!
Thanks so much!
As already discussed in the comments, to make things simple, you can add entries to the array to make its size a power of two, so the binary tree has the same depth for all leaves. It doesn't really matter what elements you add to this list, as you won't use these computed values in the actual algorithm.
In the binary tree, you have to compute the maxima in a bottom-up manner. These values then tell you the maximum of the whole range these nodes are representing; this is the major idea of the tree.
What remains is splitting a query into such tree nodes, so that they represent the original interval using fewer nodes than the size of the interval. Figure out "the pattern" of the intervals the tree nodes represent. Then figure out a way to split the input interval into as few nodes as possible. Maybe start with the trivial solution: just split the input into leaf nodes, i.e. single elements. Then figure out how you can "combine" multiple elements from the interval using inner nodes of the tree. Find an algorithm that does this splitting without visiting every element of the interval individually (that would require time linear in the number of elements, but the whole idea of the tree is to make it logarithmic).
Write some code which works with an interval of size 0. It will be very simple.
Then write some for an interval of size 1. It will still be simple.
Then write some for an interval of size 2. It may need a comparison. It will still be simple.
Then write some for an interval of size 3. It may involve a choice of which interval of size 2 to compare. This isn't too hard.
Once you've done this, it should be easy to make it work with any interval size.
An array would be the best data structure for this problem.
But given you need to use a binary tree, I would store (index, value) in the binary tree and key on index.
I have a small algorithm problem. I have an array, for example:
array[10] = {23,54,10,63,52,36,41,7,20,22};
Now, given an input number, for example 189, I want to know in which slot it should lie.
For example, this input should lie at index 4 in the array, because
23+54+10+63 = 150, and if we add 52 the sum becomes 202, which covers the range where 189 lies. So the answer should be 4.
I want an amortized constant-time algorithm; maybe in a first step we do some preprocessing on the array so that all subsequent queries can be answered in constant time.
The input number will always be between 1 and the sum of all the entries in the array.
Thanks
If you really need constant time, create a second array whose size is the largest sum value and whose entries are indexes into the original array. So new_array[189] = 4;
The natural solution would be to first build an array with the cumulative sums. This would look like
sums[10] = {23,77,87,...}
and then use a binary search such as the lower_bound algorithm to find where to insert. That would be O(log(n)). Assuming that your number of slots is constant, this solution also runs in constant time. But I guess you want the lookup to be O(1) in terms of the number of slots. In this case, you will have to make a full lookup table. Since the size of these numbers is relatively small, that is perfectly doable:
int lookup[N + 1];                      /* N = sum of all array entries */
for (i = 0, j = 1; i < 10; i++)         /* numbers 1..N map to their slot */
    for (k = 0; k < array[i]; k++, j++)
        lookup[j] = i;
Using this, the slot number is simply lookup[number].
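For the binary-search variant mentioned at the top of this answer, a minimal sketch using std::lower_bound (the prefix-sum vector and names are mine):
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> a = {23, 54, 10, 63, 52, 36, 41, 7, 20, 22};

    // Cumulative sums: sums[i] = a[0] + ... + a[i]
    std::vector<int> sums(a.size());
    std::partial_sum(a.begin(), a.end(), sums.begin());

    int x = 189;
    // First slot whose cumulative sum reaches x.
    int slot = std::lower_bound(sums.begin(), sums.end(), x) - sums.begin();
    std::printf("%d\n", slot);   // prints 4
}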
I think the best you can get is to use the cumulative array and run in logarithmic time using binary search. I am not sure a constant-time solution exists. Are you sure there is one?
If you know that the number is always between 1 and the sum of all items in the array, then the trivial constant time algorithm is to build an array of [1..sum], each entry containing the proper slot for each number. Building the array, which you only have to do once, is O(N). Lookup is then O(1).
This assumes, of course, that you have enough memory for the array.
Other than that, I think the best you'll be able to do is O(log(N)) using binary search on the sums.
Assuming "The input number will always be between 1 and the sum of all the entries in the array":
int total(0), i(0);
for (; total < inputValue; ++i)
{
    total += array[i];   // accumulate until the running sum reaches inputValue
}
// your answer is i - 1
Please look at this picture:
Is it possible to find the per-column sums for all columns faster than in O(n^2)?
At first I thought it would be possible to make it n * log(n) by regrouping the summation like this (summing 2 rows at a time, then the resulting rows 2 at a time, and so on):
But then I counted the number of plus operations and it came out equal in both cases: 7 = 7 from both pictures.
So is it possible to compute such sums in n * log(n) time, or have I fooled myself (I know there are FHT- or FFT-like transforms, so that might be the case)?
No: the input size is O(n^2), so the algorithm cannot be faster than that (because we must use all the input values).
This assumes that n is the number of rows, that the matrix is square (giving n^2 entries), and that there is no special relation between the elements.
No. You need to read (at least) n^2 items from memory, which takes (at least) O(n^2) time.[1]
[1] Assuming n is the number of columns (or the number of rows).
It cannot be done better than O(n^2) unless you have more knowledge about the matrix.
You need to read every element in the matrix to get the correct sum for each column, so you get a lower bound of Omega(n^2).
Also, note that your idea is still O(n^2), because even the first round of pairwise row additions already performs n * (n/2) sum ops, which is O(n^2).