Given a sorted array of N integers, I need to consider all pairs with different indexes (i != j). Specifically, over all pairs with j > i, I need the maximum of (a[j] + a[i] - 1) and the minimum of (a[j] - a[i] + 1). The numbers aren't necessarily unique, and equal values may form a pair, but an element can't be paired with itself.
What I'm doing right now:
for(i = 0; i < n; i++)
{
    for(j = i + 1; j < n; j++)
    {
        MAX = max(MAX, a[j] + a[i] - 1);
        MIN = min(MIN, a[j] - a[i] + 1);
    }
}
This gives a time complexity of O(n^2). Is there a way to reduce it to O(n log n) or even less?
To find the max you just need to add the elements at indexes n-1 and n-2: since the array is already sorted, the two biggest elements are at the end of the array, and no other pair can have a larger sum.
MAX = a[n-1] + a[n-2] - 1;
Time complexity : O(1)
For finding the min, you should pick a pivot in the array; I choose to start from a[0]. If space is not a constraint, create another array of the same size and populate it with the delta values from your pivot.
int[] b = new int[n];
for(int i = 1; i < n; i++)
{
    b[i] = a[i] - a[0];
}
Now the second array holds the delta values from your pivot. All you have to find is the pair of closest values in array b, which (since b inherits the sorted order of a) is the minimum difference between adjacent elements. Those two are the closest values to each other, and hence their difference will also be the least.
Time Complexity : O(n) + O(n) = O(n)
Space Complexity : O(n) as a new array of same size has to be created.
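Equivalently, since the input is already sorted, the smallest difference is always between two adjacent elements, so a single pass gives the same O(n) result without the helper array. A minimal sketch, assuming the sorted input is a std::vector<int> named a (the function name is mine):

#include <algorithm>
#include <climits>
#include <vector>

// MIN = minimum of a[j] - a[i] + 1 over all pairs with j > i; in a sorted
// array this is achieved by some pair of adjacent elements.
int minPairValue(const std::vector<int>& a)
{
    int best = INT_MAX;
    for (std::size_t i = 0; i + 1 < a.size(); ++i)
        best = std::min(best, a[i + 1] - a[i] + 1);
    return best;
}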
Algorithm:
insert element counts in a map
start from the smallest value, first
if first is present in the map, append it to the output array as many times as its count, then increment first
if first is not in the map, advance first to the next number that is present in the map
vector<int> sort(vector<int>& can) {
    unordered_map<int,int> mp;
    int first = INT_MAX;
    int last = INT_MIN;
    for(auto &n : can) {
        first = min(first, n);
        last = max(last, n);
        mp[n]++;
    }
    vector<int> out;
    while(first <= last) {
        while(mp.find(first) == mp.end()) first++;
        int cnt = mp[first];
        while(cnt--) out.push_back(first);
        first++;
    }
    return out;
}
Complexity: O(max element in array), which is linear, so O(n).
No, it's not O(n). The while loop iterates last - first + 1 times, and this quantity depends on the array's contents, not the array's length.
Usually we use n to mean the length of the array that the algorithm works on. To describe the range (i.e. the difference between the largest and smallest values in the array), we could introduce a different variable r, and then the time complexity is O(n + r), because the first loop populating the map iterates O(n) times, the second loop populating the vector iterates O(r) times, and its inner loop which counts down from cnt iterates O(n) times in total.
Another more formal way to define n is the "size of the input", typically measured in the number of bits that it takes to encode the algorithm's input. Suppose the input is an array of length 2, containing just the numbers 0 and M for some number M. In this case, if the number of bits used to encode the input is n, then the number M can be on the order of O(2^n), and the second loop does that many iterations; so by this formal definition the time complexity is exponential.
Suppose you are given an array A of size n and an integer k.
Now you have to compute this function:
long long sum(int k)
{
    long long sum = 0;
    for(int i = 0; i < n; i++){
        sum += min(A[i], k);
    }
    return sum;
}
What is the most efficient way to compute this sum?
EDIT: if I am given m (<= 100000) queries, each with a different k, this becomes very time consuming.
If the array changes with every query then you can't do better than O(n) per query. Your only options for optimizing are to use multiple threads (each thread sums some region of the array), or at least to ensure that your loop is properly vectorized by the compiler (or to write a vectorized version manually using intrinsics).
But if the array is fixed and only k changes between queries, then you can answer each query in O(log n) by using the following optimization.
Preprocess the array. This is done only once, for all values of k:
Sort the elements
Make another array of the same length which contains partial sums
For example:
inputArray: 5 1 3 8 7
sortedArray: 1 3 5 7 8
partialSums: 1 4 9 16 24
Now, when a new k is given, you need to perform the following steps:
Do a binary search for the given k in sortedArray -- this gives i, the number of elements that are <= k
The result is partialSums[i] + (partialSums.length - i) * k, where partialSums[i] is the sum of the first i sorted elements (and 0 when i = 0)
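Here is a minimal C++ sketch of this approach (the names preprocess, query, sortedArray and partialSums are mine, and the input is assumed to fit in a std::vector<int>):

#include <algorithm>
#include <cstddef>
#include <vector>

std::vector<int> sortedArray;        // sorted copy of A
std::vector<long long> partialSums;  // partialSums[i] = sum of the first i sorted elements

// Done once, for all queries: O(n log n).
void preprocess(std::vector<int> A) {
    std::sort(A.begin(), A.end());
    sortedArray = A;
    partialSums.assign(A.size() + 1, 0);
    for (std::size_t i = 0; i < A.size(); ++i)
        partialSums[i + 1] = partialSums[i] + A[i];
}

// Per query: O(log n). i = number of elements <= k; they contribute their own
// values (partialSums[i]), and the remaining n - i elements are capped at k.
long long query(int k) {
    std::size_t i = std::upper_bound(sortedArray.begin(), sortedArray.end(), k)
                    - sortedArray.begin();
    return partialSums[i] + (long long)(sortedArray.size() - i) * k;
}

For the example above, query(5) returns 9 + 2*5 = 19.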
You can do way better than that if you can sort the array A and prepare a secondary array once.
The idea is:
Count how many items are greater than or equal to k; each of them contributes exactly k, so that part of the sum is simply count*k
Prepare a helper array of prefix sums which gives you the sum of the items strictly below k directly
Preparation
Step 1: sort the array
std::sort(begin(A), end(A));
Step 2: prepare a helper array of prefix sums
std::vector<long long> p_sums(A.size());
std::partial_sum(begin(A), end(A), begin(p_sums));
Query
long long query(int k) {
    // first item whose value is not below k; everything before it keeps its own value
    auto it = std::lower_bound(begin(A), end(A), k);
    // compute the distance (number of items strictly below k)
    auto index = std::distance(begin(A), it);
    // items below k contribute their prefix sum, the remaining ones are capped at k
    long long result = (index > 0 ? p_sums[index - 1] : 0)
                     + (long long)(A.size() - index) * k;
    return result;
}
The complexity of the query is: O(log(N)) where N is the length of the array A.
The complexity of the preparation is: O(N*log(N)). We could go down to O(N) with a radix sort but I don't think it is useful in your case.
References
std::sort()
std::partial_sum()
std::lower_bound()
What you do seems absolutely fine, unless this is really absolutely time critical (that is, customers complain that your app is too slow, you measured it, and this function is the problem), in which case you can try some non-portable vector instructions, for example.
Often you can do things more efficiently by looking at them from a higher level. For example, if I write
for (n = 0; n < 1000000; ++n)
    printf ("%lld\n", sum (100));
then this will take an awfully long time (half a trillion additions) and could be done a lot quicker. The same applies if you change one element of the array A at a time and recalculate sum each time.
Suppose there are x elements of array A which are no larger than k, and let B be the set of those elements of A which are larger than k.
Then the result of the function sum(k) equals
sum_a + k * |B|
where sum_a is the sum of the x elements that are no larger than k (each element of B contributes exactly k).
You can first sort the array A, and calculate the array pre_A, where
pre_A[i] = pre_A[i - 1] + A[i] (i > 0),
or 0 (i = 0);
Then for each query k, use binary search on A to find the largest element u which is no larger than k. Let the (1-based) index of u in the sorted array be index_u; then sum(k) equals
pre_A[index_u] + k * (n - index_u).
The time complexity for each query is O(log n).
In case the array A may change dynamically, you can use a BST to handle it.
I had the following interview question.
There is an array of nxn elements. The array is partially sorted i.e the biggest element in row i is smaller than the smallest element in row i+1.
How can you find a given element with complexity O(n)?
Here is my take on this:
You should go to row n/2 and start comparing. For example, if you search for 100 and the first number you see is 110, then you know it's either in this row or in the rows above. Now you go to row n/4, and so on.
From the comments:
Isn't it O(n * log n) in total? He has to parse through every row that he reaches per binary search, therefore the number of linear searches is multiplied with the number of rows he will have to scan in average. -- Martin Matysiak
I am not sure that this is the right solution. Does anyone have something better?
Your solution indeed takes O(n log n) assuming you're searching each row you parse. If you don't search each row, then you can't accurately perform the binary step.
O(n) solution:
Pick the n/2 row. Instead of searching the entire row, we simply take the first element of the previous row and the first element of the next row. O(1).
We know that all elements of the n/2 row must be between these selected values (this is the key observation). If our target value lies in that interval, then search all three rows (3*O(n) = O(n)).
If our value is outside this range, then continue in the binary search manner by selecting the n/4 row if our value was less than the range, and the 3n/4 row if it was greater, again comparing against one element of the adjacent rows.
Finding the right block of 3 rows will cost O(1) * O(log n), and finding the element will cost O(n).
In total O(log n) + O(n) = O(n).
Here is a simple implementation. Since we need O(n) for finding an element within a row anyhow, I left out the bin-search and locate the candidate rows with a linear scan...
void search(int n[][], int el) {
    int minrow = 0, maxrow;
    while (minrow < n.length && el >= n[minrow][0])
        ++minrow;
    minrow = Math.max(0, minrow - 1);
    maxrow = Math.min(n.length - 1, minrow + 1);
    for (int row = minrow; row <= maxrow; ++row) {
        for (int col = 0; col < n[row].length; ++col) {
            if (n[row][col] == el) {
                System.out.printf("found at %d,%d\n", row, col);
            }
        }
    }
}
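For reference, a sketch of the same search with the row located by binary search rather than the linear walk (this is my own C++ rendering; it assumes the matrix is given as a vector of rows). The overall cost is still O(n), since scanning the two candidate rows dominates the O(log n) row lookup.

#include <algorithm>
#include <cstdio>
#include <vector>

// Binary-search the last row whose first element is <= el, then scan that row
// and the one after it; given the partial ordering, the element can only be there.
void search(const std::vector<std::vector<int>>& m, int el)
{
    int lo = 0, hi = (int)m.size() - 1, minrow = 0;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (m[mid][0] <= el) { minrow = mid; lo = mid + 1; }
        else                 { hi = mid - 1; }
    }
    int maxrow = std::min((int)m.size() - 1, minrow + 1);
    for (int row = minrow; row <= maxrow; ++row)
        for (std::size_t col = 0; col < m[row].size(); ++col)
            if (m[row][col] == el)
                std::printf("found at %d,%zu\n", row, col);
}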
Given an unsorted number array where there can be duplicates, pre-process the array so that the count of numbers within a given range can be found in O(1) time.
For example, 7,2,3,2,4,1,4,6. The count of numbers both >= 2 and <= 5 is 5. (2,2,3,4,4).
Sort the array. For each element in the sorted array, insert that element into a hash table, with the value of the element as the key, and its position in the array as the associated value. Any values that are skipped, you'll need to insert as well.
To find the number of items in a range, look up the position of the value at each end of the range in the hash table, and subtract the lower from the upper to find the size of the range.
This sounds suspiciously like one of those clever interview questions some interviewers like to ask, which is usually associated with hints along the way to see how you think.
Regardless... one possible way of implementing this is to make a list of the counts of numbers equal to or less than the list index.
For example, from your list above, generate the list: 0, 1, 3, 4, 6, 6, 7, 8. Then you can count the numbers between 2 and 5 by subtracting list[1] from list[5].
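A minimal sketch of that cumulative-count idea (the function names are mine; it assumes the values are non-negative and bounded by a known maxValue):

#include <vector>

// counts[v] = how many input numbers are <= v, for v in [0, maxValue]
std::vector<int> buildCounts(const std::vector<int>& input, int maxValue)
{
    std::vector<int> counts(maxValue + 1, 0);
    for (int x : input)
        ++counts[x];                    // exact counts first
    for (int v = 1; v <= maxValue; ++v)
        counts[v] += counts[v - 1];     // turn them into running totals
    return counts;
}

// count of numbers in [lo, hi]; O(1) per query after the preprocessing above
int countInRange(const std::vector<int>& counts, int lo, int hi)
{
    return counts[hi] - (lo > 0 ? counts[lo - 1] : 0);
}

For the example input, buildCounts produces 0, 1, 3, 4, 6, 6, 7, 8 and countInRange(counts, 2, 5) returns 6 - 1 = 5.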
Since we need O(1) access, the data structure needed would be memory-intensive.
With a hash table, access would take O(n) in the worst case.
My Solution:
Build a 2D matrix.
array = {2,3,2,4,1,4,6}; range of numbers = 0 to 6, so n = 7
So we have to create an n x n matrix.
array[i][i] represents total count of element = i
so array[4][4] = 2 (since 4 appears 2 times in array)
array[5][5] = 0
array[5][2] = count of numbers both >= 2 and <= 5 = 5
//preprocessing stage 1: populate a[i][i] with the total count of element = i
a[n][n] = {0};
for(each element x of the input array){
    a[x][x]++;
}
//stage 2
for(i = 1; i < n; i++)
    for(j = 0; j < i; j++)
        a[i][j] = a[i-1][j] + a[i][i];
//we are just adding the count of element = i to each value in the (i-1)th row to get the ith row.
Now the query (5,2) would just read a[5][2] and give the answer in O(1).
#include <stdio.h>
#include <string.h>

int main()
{
    int arr[8] = {7,2,3,2,4,1,4,6};
    int count[9];
    int total = 0;
    memset(count, 0, sizeof(count));
    for(int i = 0; i < 8; i++)
        count[arr[i]]++;
    for(int k = 0; k < 9; k++)
    {
        if(k >= 2 && k <= 5 && count[k] > 0)
        {
            total = total + count[k];
        }
    }
    printf("%d:", total);
    return 0;
}
What is the best way to solve this?
A balancing point of an N-element array A is an index i such that all elements at lower indexes have values <= A[i] and all elements at higher indexes have values >= A[i].
For example, given:
A[0]=4 A[1]=2 A[2]=7 A[3]=11 A[4]=9
one of the correct solutions is 2: all elements before A[2] are less than A[2], and all elements after A[2] are greater than A[2].
One solution that came to my mind is an O(n^2) solution. Is there any better solution?
Start by assuming A[0] is a pole. Then start walking the array, comparing each element A[i] in turn against A[0], and also tracking the current maximum.
As soon as you find an i such that A[i] < A[0], you know that A[0] can no longer be a pole, and by extension, neither can any of the elements up to and including A[i]. So now continue walking until you find the next value that's bigger than the current maximum. This then becomes the new proposed pole.
Thus, an O(n) solution!
In code:
int i_pole = 0;
int i_max = 0;
bool have_pole = true;
for (int i = 1; i < N; i++)
{
    if (A[i] < A[i_pole])
    {
        have_pole = false;
    }
    if (A[i] > A[i_max])
    {
        i_max = i;
        if (!have_pole)
        {
            i_pole = i;
        }
        have_pole = true;
    }
}
If you want to know where all the poles are, an O(n log n) solution would be to create a sorted copy of the array, and look to see where you get matching values.
EDIT: Sorry, but this doesn't actually work. One counterexample is [2, 5, 3, 1, 4].
Make two auxiliary arrays, each with as many elements as the input array, called MIN and MAX.
Each element M of MAX contains the maximum of all the elements in the input from 0..M. Each element M of MIN contains the minimum of all the elements in the input from M..N-1.
For each element M of the input array, compare its value to the corresponding values in MIN and MAX. If INPUT[M] == MIN[M] and INPUT[M] == MAX[M] then M is a balancing point.
Building MIN takes N steps, and so does MAX. Testing the array then takes N more steps. This solution has O(N) complexity and finds all balancing points. In the case of sorted input every element is a balancing point.
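A minimal sketch of this approach (the function and array names are mine):

#include <algorithm>
#include <vector>

// Returns the indices of all balancing points in O(N) time and O(N) extra space.
std::vector<int> balancingPoints(const std::vector<int>& input)
{
    int n = (int)input.size();
    std::vector<int> maxPrefix(n), minSuffix(n), result;

    // maxPrefix[i] = maximum of input[0..i]
    for (int i = 0; i < n; ++i)
        maxPrefix[i] = (i == 0) ? input[0] : std::max(maxPrefix[i - 1], input[i]);

    // minSuffix[i] = minimum of input[i..n-1]
    for (int i = n - 1; i >= 0; --i)
        minSuffix[i] = (i == n - 1) ? input[n - 1] : std::min(minSuffix[i + 1], input[i]);

    // i is a balancing point iff input[i] matches both running extremes at i
    for (int i = 0; i < n; ++i)
        if (input[i] == maxPrefix[i] && input[i] == minSuffix[i])
            result.push_back(i);

    return result;
}

For the example array 4, 2, 7, 11, 9 it returns just the index 2.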
Create a doubly-linked list such that the i-th node of the list contains A[i] and i. Traverse this list while the elements grow (keeping track of the maximum seen so far). If some A[bad] < maxSoFar, it can't be a balancing point. Remove it and go backward, removing elements until you find A[good] < A[bad] or reach the head of the list. Continue (starting with maxSoFar as the maximum) until you reach the end of the list. Every element in the resulting list is a balancing point, and every balancing point is in this list. The complexity is O(n), since the maximum number of steps is performed for a descending array: n steps forward and n removals.
Update
Oh my, I confused "any" with "every" in problem definition :).
You can combine bmcnett's and Oli's answers to find all the poles as quickly as possible.
std::vector<int> i_poles;
i_poles.push_back(0);
int i_max = 0;
for (int i = 1; i < N; i++)
{
    while (!i_poles.empty() && A[i] < A[i_poles.back()])
    {
        i_poles.pop_back();
    }
    if (A[i] >= A[i_max])
    {
        i_poles.push_back(i);
        i_max = i;  // keep track of the running maximum
    }
}
You could use an array preallocated to size N if you wanted to avoid reallocations.