Maximum element after M operations - c++

We are given 3 elements N1, N2, N3.
We can perform an operation on these elements.
The operation is as follows:
In a single operation, we choose one element and halve its value (i.e. if the element's value is x, it becomes x/2, where the division is integer division, e.g. 3/2=1 and 4/2=2). Meanwhile, the values of the other two elements each increase by one.
We need to minimise the maximum element, given that we can perform this operation for AT MOST M seconds and in each second we can perform it at most once.
Example: Let N1=1, N2=2, N3=3, M=1. The answer here is 3.
Explanation: We can pick the 3rd element and halve it. Note that the first and second elements increase by 1. So the values become 2, 3, 1. The maximum of these values is 3, hence the answer is 3.
My approach: every time, pick the largest element, halve it, and increase the other two by 1.
Code:
long long ans = max(N1, max(N2, N3));
for (int i = 0; i < m; i++) {
    if (N1 >= N2 && N1 >= N3) {
        N1 /= 2;
        N2++;
        N3++;
    } else if (N2 >= N1 && N2 >= N3) {
        N2 /= 2;
        N1++;
        N3++;
    } else {
        N1++;
        N2++;
        N3 /= 2;
    }
    ans = min(ans, max(N1, max(N2, N3)));
}
Failure:
But let N1=8, N2=8, N3=4, M=3. Then the answer is 5 and this approach goes wrong, as according to the algorithm above the steps would have been:
8 8 4 -> 4 9 5 -> 5 4 6 -> 6 5 3
But the correct sequence is:
8 8 4 -> 9 9 2 -> 4 10 3 -> 5 5 4
Constraints: M is between 1 and 100. N1, N2 and N3 can go up to 10^9.
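Since M is tiny in this counterexample, one way to double-check the expected answer is a plain exhaustive search over all 3^M choice sequences (which element to halve each second). This is only a verification sketch of my own, not a solution for M up to 100, and the names are made up.

#include <algorithm>
#include <cstdio>

// Exhaustive check: try every sequence of "halve this element" choices and
// keep the best (smallest) maximum seen at any point, matching the original
// code's use of the minimum over all intermediate maxima.
long long best;

void search(long long n1, long long n2, long long n3, int secondsLeft) {
    best = std::min(best, std::max(n1, std::max(n2, n3)));
    if (secondsLeft == 0) return;
    search(n1 / 2, n2 + 1, n3 + 1, secondsLeft - 1);   // halve the first element
    search(n1 + 1, n2 / 2, n3 + 1, secondsLeft - 1);   // halve the second element
    search(n1 + 1, n2 + 1, n3 / 2, secondsLeft - 1);   // halve the third element
}

int main() {
    best = 1LL << 62;
    search(8, 8, 4, 3);                                // the failing case above
    std::printf("%lld\n", best);                       // prints 5
}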


Can we really avoid extra space when all the values are non-negative?

This question is a follow-up of another one I had asked quite a while ago:
We have been given an array of integers and another number k, and we need to find the total number of continuous subarrays whose sum equals k. For example, for the input [1,1,1] and k=2, the expected output is 2.
In the accepted answer, @talex says:
PS: BTW if all values are non-negative there is better algorithm. it doesn't require extra memory.
While I didn't think much about it then, I am curious about it now. IMHO, we will require extra memory. In the event that all the input values are non-negative, our running (prefix) sum will keep increasing, and as such, sure, we don't need an unordered_map to store the frequency of a particular sum. But we will still need extra memory (perhaps an unordered_set) to store the running (prefix) sums that we get along the way. This obviously contradicts what @talex said.
Could someone please confirm if we absolutely do need extra memory or if it could be avoided?
Thanks!
Let's start with a slightly simpler problem: all values are positive (no zeros). In this case the subarrays can overlap, but they cannot contain one another.
I.e.: arr = 2 1 5 1 1 5 1 2, Sum = 8
2 1 5 1 1 5 1 2
|---|
  |-----|
      |-----|
          |---|
But this situation can never occur:
* * * * * * *
  |-------|
    |---|
With this in mind there is an algorithm that doesn't require extra space (well... O(1) space) and has O(n) time complexity. The idea is to keep left and right indexes indicating the current sequence, together with the sum of the current sequence:
- if the sum is k, increment the counter and advance left and right
- if the sum is less than k, advance right
- otherwise advance left
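A minimal sketch of this in C++ (my own formulation of the three rules, written as a loop over the right end; it assumes all values are strictly positive and k > 0):

#include <cstddef>
#include <vector>

// Count subarrays with sum == k when every value is strictly positive.
// Sliding window: O(n) time, O(1) extra space.
long long countSubarraysPositive(const std::vector<int>& a, long long k) {
    long long count = 0, sum = 0;
    std::size_t left = 0;
    for (std::size_t right = 0; right < a.size(); ++right) {
        sum += a[right];                 // extend the window to the right
        while (sum > k)                  // too large: shrink it from the left
            sum -= a[left++];
        if (sum == k) ++count;           // exactly one possible left for this right
    }
    return count;
}

For strictly positive values each right end has at most one matching left end, which is why a single counter increment per match is enough.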
Now if there are zeros, the intervals can contain one another, but only if the zeros are on the margins of the interval.
To adapt to non-negative numbers:
Do as above, except:
- skip zeros when advancing left
- if the sum is k:
  - count the consecutive zeros to the right of right, let's say zeros_right_count
  - count the consecutive zeros to the left of left, let's say zeros_left_count
  - instead of incrementing the counter as before, increase it by (zeros_left_count + 1) * (zeros_right_count + 1)
Example:
... 7 0 0 5 1 2 0 0 0 9 ...
          ^   ^
        left  right
Here we have 2 zeros to the left and 3 zeros to the right. This makes (2 + 1) * (3 + 1) = 12 sequences with sum 8 here:
5 1 2
5 1 2 0
5 1 2 0 0
5 1 2 0 0 0
0 5 1 2
0 5 1 2 0
0 5 1 2 0 0
0 5 1 2 0 0 0
0 0 5 1 2
0 0 5 1 2 0
0 0 5 1 2 0 0
0 0 5 1 2 0 0 0
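Extending the sketch above with this zero handling might look like the following (again my own code, assuming k > 0; the a[right] != 0 test makes sure each "core" window with non-zero endpoints is counted exactly once, and its zero extensions are covered by the two counts):

#include <cstddef>
#include <vector>

// Count subarrays with sum == k when all values are non-negative.
long long countSubarraysNonNegative(const std::vector<int>& a, long long k) {
    long long count = 0, sum = 0;
    std::size_t left = 0, n = a.size();
    for (std::size_t right = 0; right < n; ++right) {
        sum += a[right];
        while (sum > k)                        // shrink while the sum is too large
            sum -= a[left++];
        if (sum == k && a[right] != 0) {       // count once per core window
            while (a[left] == 0) ++left;       // skip zeros when advancing left
            long long zerosLeft = 0, zerosRight = 0;
            for (std::size_t i = left; i > 0 && a[i - 1] == 0; --i) ++zerosLeft;
            for (std::size_t j = right + 1; j < n && a[j] == 0; ++j) ++zerosRight;
            count += (zerosLeft + 1) * (zerosRight + 1);
        }
    }
    return count;
}

The zero-counting loops are kept separate here for clarity; they can be folded into the main scan if you want the strict O(n) behaviour described above.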
I think this algorithm would work, using O(1) space.
We maintain two pointers to the beginning and end of the current subsequence, as well as the sum of the current subsequence. Initially, both pointers point to array[0], and the sum is obviously set to array[0].
Advance the end pointer (thus extending the subsequence to the right) and add the value it points to into the sum until the sum exceeds k. Then advance the start pointer (thus shrinking the subsequence from the left) and subtract its value from the sum until the sum drops to k or below. Keep doing this until the end pointer reaches the end of the array, and keep track of the number of times the sum was exactly k (checking after each single-step advance).

What is the maximum number of comparisons to heapify an array?

Is there a general formula to calculate the maximum number of comparisons to heapify n elements?
If not, is 13 the max number of comparisons to heapify an array of 8 elements?
My reasoning is as follows:
at h = 0: 1 node, 0 comparisons each, 1*0 = 0 comparisons
at h = 1: 2 nodes, 1 comparison each, 2*1 = 2 comparisons
at h = 2: 4 nodes, 2 comparisons each, 4*2 = 8 comparisons
at h = 3: 1 node, 3 comparisons each, 1*3 = 3 comparisons
Total = 0 + 2 + 8 + 3 = 13
Accepted theory is that build-heap requires at most (2N - 2) comparisons. So the maximum number of comparisons required should be 14. We can confirm that easily enough by examining a heap of 8 elements:
       7
      / \
     3   1
    / \ / \
   5  4 8  2
  /
 6
Here, the 4 leaf nodes will never move down. The nodes 5 and 1 can move down 1 level. 3 could move down two levels. And 7 could move down 3 levels. So the maximum number of level moves is:
(0*4)+(1*2)+(2*1)+(3*1) = 7
Every level move requires 2 comparisons, so the maximum number of comparisons would be 14.
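To experiment with actual counts, here is a small sketch (mine, not from the answer) of Floyd's bottom-up build-heap with a comparison counter. Note that a particular input, including the one above, can need fewer comparisons than the 2N - 2 worst-case bound, e.g. when a node has only one child or stops sinking early.

#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

static long comparisons = 0;

// Sift a[i] down within the heap a[0..n-1], counting element comparisons.
void siftDown(std::vector<int>& a, std::size_t i, std::size_t n) {
    while (2 * i + 1 < n) {
        std::size_t child = 2 * i + 1;
        if (child + 1 < n) {                  // two children: pick the larger one
            ++comparisons;
            if (a[child + 1] > a[child]) ++child;
        }
        ++comparisons;                        // compare the larger child with the parent
        if (a[child] <= a[i]) break;
        std::swap(a[i], a[child]);
        i = child;
    }
}

void buildHeap(std::vector<int>& a) {
    for (std::size_t i = a.size() / 2; i-- > 0; )   // last internal node down to the root
        siftDown(a, i, a.size());
}

int main() {
    std::vector<int> a = {7, 3, 1, 5, 4, 8, 2, 6};  // the tree above, in level order
    buildHeap(a);
    std::printf("comparisons: %ld\n", comparisons);
}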

Downscale array for decimal factor

Is there an efficient way to downscale the number of elements in an array by a decimal factor?
I want to downsize the elements of one array by a certain factor.
Example:
If I have 10 elements and need to scale down by factor 2.
1 2 3 4 5 6 7 8 9 10
scaled to
1.5 3.5 5.5 7.5 9.5
Group them 2 by 2 and use the arithmetic mean.
My problem is: what if I need to downsize an array with 10 elements to 6 elements? In theory I should group 10/6 ≈ 1.67 elements and find their arithmetic mean, but how do I do that?
Before suggesting a solution, let's define "downsize" in a more formal way. I would suggest this definition:
Downsizing starts with an array a[N] and produces an array b[M] such that the following is true:
M <= N - otherwise it would be upsizing, not downsizing
SUM(b) = (M/N) * SUM(a) - The sum is reduced proportionally to the number of elements
Elements of a participate in computation of b in the order of their occurrence in a
Let's consider your example of downsizing 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 to six elements. The total for your array is 55, so the total for the new array would be (6/10)*55 = 33. We can achieve this total in two steps:
Walk the array a totaling its elements until we've reached the integer part of N/M fraction (it must be an improper fraction by rule 1 above)
Let's say that a[i] was the last element of a that we could take as a whole in the current iteration. Take the fraction of a[i+1] equal to the fractional part of N/M
Continue to the next number starting with the remaining fraction of a[i+1]
Once you are done, your array b will contain M numbers totaling SUM(a). Walk the array once more, and divide each element by N/M (i.e. scale by M/N).
Here is how it works with your example:
b[0] = a[0] + (2/3)*a[1] = 2.33333
b[1] = (1/3)*a[1] + a[2] + (1/3)*a[3] = 5
b[2] = (2/3)*a[3] + a[4] = 7.66666
b[3] = a[5] + (2/3)*a[6] = 10.6666
b[4] = (1/3)*a[6] + a[7] + (1/3)*a[8] = 13.3333
b[5] = (2/3)*a[8] + a[9] = 16
--------
Total = 55
Scaling down by 6/10 produces the final result:
1.4 3 4.6 6.4 8 9.6 (Total = 33)
Here is a simple implementation in C++:
double need = ((double)a.size()) / b.size();
double have = 0;
size_t pos = 0;
for (size_t i = 0 ; i != a.size() ; i++) {
if (need >= have+1) {
b[pos] += a[i];
have++;
} else {
double frac = (need-have); // frac is less than 1 because of the "if" condition
b[pos++] += frac * a[i]; // frac of a[i] goes to current element of b
have = 1 - frac;
b[pos] += have * a[i]; // (1-frac) of a[i] goes to the next position of b
}
}
for (size_t i = 0 ; i != b.size() ; i++) {
b[i] /= need;
}
Demo.
You will need to resort to some form of interpolation, as the number of elements to average isn't an integer.
You can consider computing prefix sums of the array, with a leading zero so that S(t) is the sum of the first t elements. For

1 2 3 4 5 6 7 8 9 10

this yields by summation

t:    0  1  2  3  4   5   6   7   8   9   10
S(t): 0  1  3  6  10  15  21  28  36  45  55

Then perform linear interpolation to get the intermediate values that you are lacking, at 0*, 10/6, 20/6, 30/6*, 40/6, 50/6 and 60/6*. (Those with an asterisk are readily available.)

S(10/6) = 7/3, S(20/6) = 22/3, S(30/6) = 15, S(40/6) = 77/3, S(50/6) = 39, S(60/6) = 55

Now you get the fractional sums by subtracting consecutive values, and each average by dividing that difference by the group width 10/6. The first average is

(7/3 - 0) / (10/6) = 7/5 = 1.4
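For completeness, here is a sketch of this prefix-sum + interpolation approach in C++ (my own code, not from the answer; names are made up):

#include <cstddef>
#include <cstdio>
#include <vector>

// Downscale a to m elements: sample the interpolated prefix sum S at the bin
// boundaries j * (n/m) and turn the differences into per-bin averages.
std::vector<double> downscale(const std::vector<double>& a, std::size_t m) {
    const std::size_t n = a.size();
    std::vector<double> S(n + 1, 0.0);
    for (std::size_t i = 0; i < n; ++i) S[i + 1] = S[i] + a[i];   // prefix sums, S[0] = 0

    auto sampleS = [&](double t) {                 // S(t) by linear interpolation
        std::size_t lo = static_cast<std::size_t>(t);
        if (lo >= n) return S[n];
        return S[lo] + (t - lo) * (S[lo + 1] - S[lo]);
    };

    const double width = static_cast<double>(n) / m;   // input elements per output bin
    std::vector<double> b(m);
    for (std::size_t j = 0; j < m; ++j)
        b[j] = (sampleS((j + 1) * width) - sampleS(j * width)) / width;
    return b;
}

int main() {
    std::vector<double> a = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    for (double x : downscale(a, 6)) std::printf("%g ", x);   // expected: 1.4 3 4.6 6.4 8 9.6
    std::printf("\n");
}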
I can't think of anything in the C++ library that will crank out something like this, all fully cooked and ready to go.
So you'll have to, pretty much, roll up your sleeves and go to work. At this point, the question of what's the "efficient" way of doing it boils down to its very basics. Which means:
1) Calculate how big the output array should be. Based on the description of the issue, you should be able to make that calculation even before looking at the values in the input array. You know the input array's size(), you can calculate the size() of the destination array.
2) So, you resize() the destination array up front. Now, you no longer need to worry about the time wasted in growing the size of the dynamic output array, incrementally, as you go through the input array, making your calculations.
3) So what's left is the actual work: iterating over the input array, and calculating the downsized values.
auto b = input_array.begin();
auto e = input_array.end();
auto p = output_array.begin();
Don't see many other options here, besides brute force iteration and calculations. Iterate from b to e, getting your samples, calculating each downsized value, and saving the resulting value into *p++.

Max number of ways to jump to the last element

I had a question from a contest and would like to know the solution.
The question is about finding the max number of unique ways to jump to the last element. I am thinking about a solution with dynamic programming but couldn't figure it out.
You can jump a maximum of 3 steps from any position. The number of steps n will be given, and our program should calculate the number of unique ways to reach position n+1.
For example:
n=4: the number of unique ways to reach position n+1 should be 7
Jump1: 1 2 1
Jump2: 1 1 2
Jump3: 2 1 1
Jump4: 1 3
Jump5: 3 1
Jump6: 2 2
Jump7: 1 1 1 1
Thank you
The longest journey, says the proverb, starts with a single step.
In this case, there are three possible first steps in the journey to the end: a hop of 1, 2 or 3 spots. In each case, the journey will continue from a closer point, either 1, 2 or 3 steps closer to the end. So if we know the number of possible paths from the closer points, we can simply add them up:
paths(n) = paths(n-1)   // First hop was one, n-1 elements left
         + paths(n-2)   // First hop was two, n-2 elements left
         + paths(n-3)   // First hop was three, n-3 elements left.
The similarity to the Fibonacci recursion is not coincidental. This sequence is often called the "Tribonacci sequence", and you can easily look that up in the usual places (mathworld, wikipedia, oeis, etc.) to find a variety of computation techniques, including the one below.
Clearly, you can compute the Tribonacci function in O(n) by starting at the end and working backwards (defining f(0) = 1, f(-1) = 0, f(-2) = 0 to provide a starting position.) But it's easy to do better than that, using the same technique that can be used to compute Fibonacci numbers in O(log n) operations.
Here's the Fibonacci algorithm. We start with the observation that the matrix product:
           | 1  1 |
[ a  b ] × |      | = [ a+b  a ]
           | 1  0 |
Let's use F(n) for the nth Fibonacci number (F(0) = 0, F(1) = 1), and call the matrix of 1s and 0s above MF. We can see that

[ F(n+1)  F(n) ] = [ F(1)  F(0) ] × MF × MF × … × MF    (n products)

But since matrix multiplication is associative, we can rewrite that as:

[ F(n+1)  F(n) ] = [ 1  0 ] × MF^n
Again, since matrix multiplication is associative, we can compute MF^n in O(log n) steps. For example, we could use the recursion:

M^n = M^(n/2) × M^(n/2)                  if n is even
M^n = M × M^((n-1)/2) × M^((n-1)/2)      if n is odd
Similarly, for the Tribonacci numbers T(n), we can define the matrix MT:
     | 1  1  0 |
MT = | 1  0  1 |
     | 1  0  0 |
and by the same logic as above (using T(0) = 1, T(-1) = T(-2) = 0, as defined earlier):

[ T(n)  T(n-1)  T(n-2) ] = [ 1  0  0 ] × MT^n
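Here is a compact sketch of that matrix-power computation (my own code; plain unsigned 64-bit arithmetic, so it overflows for large n, where you would typically reduce modulo a prime):

#include <array>
#include <cstdint>
#include <cstdio>

using Mat = std::array<std::array<std::uint64_t, 3>, 3>;

// 3x3 matrix product.
Mat mul(const Mat& x, const Mat& y) {
    Mat r{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k)
            for (int j = 0; j < 3; ++j)
                r[i][j] += x[i][k] * y[k][j];
    return r;
}

// m^e by repeated squaring: O(log e) matrix multiplications.
Mat matpow(Mat m, std::uint64_t e) {
    Mat r{};
    for (int i = 0; i < 3; ++i) r[i][i] = 1;   // identity matrix
    while (e) {
        if (e & 1) r = mul(r, m);
        m = mul(m, m);
        e >>= 1;
    }
    return r;
}

std::uint64_t tribonacci(std::uint64_t n) {
    const Mat MT = {{{{1, 1, 0}}, {{1, 0, 1}}, {{1, 0, 0}}}};
    // [ T(n) T(n-1) T(n-2) ] = [ 1 0 0 ] × MT^n, so T(n) is the top-left entry.
    return matpow(MT, n)[0][0];
}

int main() {
    std::printf("%llu\n", (unsigned long long)tribonacci(4));   // prints 7, matching the example
}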
Do you know the number of ways for n = 0, n = 1 and n = 2?
For any larger value N, the number of ways = the number of ways for N - 1 + the number of ways for N - 2 + the number of ways for N - 3.
You should not calculate the number of ways for a given n more than once. (Remember it in a dp array.)
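A short bottom-up sketch of that idea (my own code):

#include <cstdio>
#include <vector>

// ways[i] = ways[i-1] + ways[i-2] + ways[i-3], with ways[0] = 1
// (one way to cover distance 0: take no steps).
long long countWays(int n) {
    std::vector<long long> ways(n + 1, 0);
    ways[0] = 1;
    for (int i = 1; i <= n; ++i)
        for (int step = 1; step <= 3 && step <= i; ++step)
            ways[i] += ways[i - step];
    return ways[n];
}

int main() {
    std::printf("%lld\n", countWays(4));   // prints 7
}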
The important function is going to be (number_of_elements)!/product((number_repeated_characters)!)
For instance, if you know 2211 is one of your paths, then 4!/(2!*2!) = 6, so there are 6 path combinations for 2 "2"s and 2 "1"s.
Since you're only going up to a maximum of 3 steps, it's really not too bad once you know that formula. Really you're just looking for the combinations of 2s and 3s that can replace the 1s in your input. I suggest starting with 1 3 and then going through each 2 that fills in the remainder. Then repeat for 2 3s and so on. If you precompute and save all the factorials, it should run very fast, although I'm sure there are additional optimizations.
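A sketch of that counting approach (my own code; it uses plain 64-bit factorials, so it only works while those fit):

#include <cstdio>

unsigned long long factorial(int k) {      // fine for small k; overflows past 20!
    unsigned long long f = 1;
    for (int i = 2; i <= k; ++i) f *= i;
    return f;
}

// Choose how many 3-steps (c3) and 2-steps (c2) appear, fill the rest with
// 1-steps, and sum the multinomial (c1+c2+c3)! / (c1! * c2! * c3!).
unsigned long long countWays(int n) {
    unsigned long long total = 0;
    for (int c3 = 0; 3 * c3 <= n; ++c3)
        for (int c2 = 0; 3 * c3 + 2 * c2 <= n; ++c2) {
            int c1 = n - 3 * c3 - 2 * c2;  // distance left over for the 1-steps
            total += factorial(c1 + c2 + c3) / (factorial(c1) * factorial(c2) * factorial(c3));
        }
    return total;
}

int main() {
    std::printf("%llu\n", countWays(4));   // prints 7
}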

How to balance between two arrays such that the difference is minimized?

I have arrays A[]={3,2,5,11,17} and B[]={2,3,6}; the size of B is always less than the size of A. Now I have to map every element of B to a distinct element of A such that the total difference sum( abs(Bi-Aj) ) is minimized (where Bi has been mapped to Aj). What type of algorithm is this?
For the example input, I could select 2->2=0, 3->3=0 and then 6->5=1. So the total cost is 0+0+1 = 1. I have been thinking of sorting both arrays and then taking the first size-of-B elements from A. Will this work?
It can be thought of as an unbalanced Assignment Problem.
The cost matrix shall be the difference in values of B[i] and A[j]. You can add dummy elements to B so that the problem becomes balanced, and set the costs associated with them very high.
Then the Hungarian Algorithm can be applied to solve it.
For the example case A[]={3,2,5,11,17} and B[]={2,3,6}, the cost matrix shall be:
.    3   2   5   11  17
2    1   0   3   9   15
3    0   1   2   8   14
6    3   4   1   5   11
d1   16  16  16  16  16
d2   16  16  16  16  16
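The Hungarian algorithm itself is a bit long to sketch here, but for a small input like this you can sanity-check the optimal assignment by brute force over all ways of picking distinct elements of A (my own code, not part of the answer):

#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    std::vector<int> A = {3, 2, 5, 11, 17};
    std::vector<int> B = {2, 3, 6};

    // Try every permutation of A's indices; B[i] is matched to A[idx[i]].
    std::vector<int> idx(A.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = (int)i;

    long long best = -1;
    do {
        long long cost = 0;
        for (std::size_t i = 0; i < B.size(); ++i)
            cost += std::llabs((long long)B[i] - A[idx[i]]);
        if (best < 0 || cost < best) best = cost;
    } while (std::next_permutation(idx.begin(), idx.end()));

    std::printf("minimum total difference: %lld\n", best);   // prints 1 for this example
}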