I'm trying to solve the MaxDoubleSliceSum problem without Kadane's bidirectional algorithm.
Problem Definition:
A non-empty array A consisting of N integers is given.
A triplet (X, Y, Z), such that 0 ≤ X < Y < Z < N, is called a double
slice.
The sum of double slice (X, Y, Z) is the total of A[X + 1] + A[X + 2]
+ ... + A[Y − 1] + A[Y + 1] + A[Y + 2] + ... + A[Z − 1].
For example, array A such that:
A[0] = 3
A[1] = 2
A[2] = 6
A[3] = -1
A[4] = 4
A[5] = 5
A[6] = -1
A[7] = 2
The goal is to find the maximal sum of any double slice.
Write a function:
int solution(vector<int> &A);
that, given a non-empty array A consisting of N integers, returns the
maximal sum of any double slice.
For example, given:
A[0] = 3
A[1] = 2
A[2] = 6
A[3] = -1
A[4] = 4
A[5] = 5
A[6] = -1
A[7] = 2
the function should return 17, because no double slice of array A has
a sum greater than 17 (for instance, the double slice (0, 3, 6) has sum A[1] + A[2] + A[4] + A[5] = 2 + 6 + 4 + 5 = 17).
I have figured out the following idea:
I take a slice and put a lever (the value in the middle that gets dropped) at the lowest value included in this slice. If I notice that the next value lowers the total sum, I move the lever to it and reduce the sum by the values before the last lever (including the old lever).
int solution(vector<int> &A) {
    if(A.size() < 4)
        return 0;

    int lever = A[1];
    int sum = -lever;
    int presliceValue = 0;
    int maxVal = A[1];

    for(int i = 1; i < A.size() - 1; i++){
        if(sum + A[i] < sum || A[i] < lever){
            sum += lever;
            if(presliceValue < 0)
                sum = sum - presliceValue;
            lever = A[i];
            presliceValue = sum + lever;
        }
        else
            sum = sum + A[i];

        if(sum > maxVal)
            maxVal = sum;
    }
    return maxVal;
}
This solution returns a wrong value on a few test cases (unfortunately I cannot tell what the tested values are): I cannot reproduce the failure locally, and Codility does not share its test values.
Failed test cases:
many the same small sequences, length = ~100,000
large random: random, length = ~100,000
random, numbers from -30 to 30, length = 300
random, numbers from -10^4 to 10^4, length = 70
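Since the failing inputs aren't available, a small random stress test against a brute-force reference is one way to try to reproduce a counterexample locally. Here is a hedged sketch (my own, not part of the original post); it assumes the solution() above is defined in the same file:

#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;

// Brute force over all triplets (X, Y, Z), using prefix sums for the slice totals.
long long bruteForce(const vector<int> &A) {
    int n = A.size();
    // pre[i] = A[0] + ... + A[i-1]
    vector<long long> pre(n + 1, 0);
    for (int i = 0; i < n; ++i) pre[i + 1] = pre[i] + A[i];
    long long best = 0;
    for (int x = 0; x < n; ++x)
        for (int y = x + 1; y < n; ++y)
            for (int z = y + 1; z < n; ++z)
                best = max(best, (pre[y] - pre[x + 1]) + (pre[z] - pre[y + 1]));
    return best;
}

int main() {
    srand(42);
    for (int t = 0; t < 100000; ++t) {
        int n = 4 + rand() % 8;                 // lengths 4..11
        vector<int> A(n);
        for (int &v : A) v = rand() % 21 - 10;  // values in [-10, 10]
        if (bruteForce(A) != solution(A)) {     // mismatch: print the counterexample
            for (int v : A) cout << v << ' ';
            cout << '\n';
            return 0;
        }
    }
    cout << "no mismatch found\n";
}

If the attempted solution is wrong on some input in this range, the loop prints a concrete array that can then be traced by hand.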
What data type or what ways can I use to store large integers possibly greater than 10^18, and how can I efficiently improve my approach to the problem?
I am currently working on a problem that asks to find the sum of all divisors d(k) given that:
S(N) = Σ_{i=1..N} Σ_{j=1..N} d(i * j)
with the largest value of N = 10^9 and the largest argument of d being 10^9 * 10^9. Stored in:
long long int
The program works, but it slows down at N = 10^3, and anything higher takes up too much memory and crashes.
I used a for loop over the values of i and j that computes each d(k) = d(i * j) and stores it in a vector:
{d(1 * 1), d(1 * 2), ... , d(N * N)}
Then a separate function finds all the divisors of each k and adds them up:
d(1) = 1
d(2) = 1 + 2 = 3
d(3) = 1 + 3 = 4
d(4) = 1 + 2 + 4 = 7
...
d(N * N)
S(N) = d(1) + d(2) + d(3) + d(4) + ... + d(N * N)
For any value of N greater than 10^5, the result gets displayed as S(N) mod 10^9.
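For reference, here is a minimal C++ sketch of the brute-force approach as I read the description above (trial-division divisor sums; this is not the original code):

#include <iostream>
using namespace std;

// d(k): sum of all divisors of k, by trial division up to sqrt(k).
long long d(long long k) {
    long long s = 0;
    for (long long i = 1; i * i <= k; ++i) {
        if (k % i == 0) {
            s += i;
            if (i != k / i) s += k / i;
        }
    }
    return s;
}

int main() {
    const long long MOD = 1000000000LL;  // S(N) reported mod 10^9 for large N
    long long N = 1000;                  // brute force is only feasible for small N
    long long S = 0;
    for (long long i = 1; i <= N; ++i)
        for (long long j = 1; j <= N; ++j)
            S = (S + d(i * j)) % MOD;
    cout << S << '\n';
}

The double loop alone is N^2 iterations, each with a divisor scan up to sqrt(i * j), which is why it bogs down around N = 10^3, far below N = 10^9.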
I have a number n and I have to split it into k numbers such that all k numbers are distinct, the sum of the k numbers equals n, and k is maximal. For example, if n is 9 then the answer should be 1, 2, 6. If n is 15 then the answer should be 1, 2, 3, 4, 5.
This is what I've tried -
void findNum(int l, int k, vector<int>& s)
{
    if (k <= 2 * l) {
        s.push_back(k);
        return;
    }
    else if (l == 1) {
        s.push_back(l);
        findNum(l + 1, k - 1, s);
    }
    else if (l == 2) {
        s.push_back(l);
        findNum(l + 2, k - 2, s);
    }
    else {
        s.push_back(l);
        findNum(l + 1, k - l, s);
    }
}
Initially k = n and l = 1. The resulting numbers are stored in s. This solution does return the number n as a sum of distinct numbers, but it is not the optimal solution (k is not maximal). For example, the output for n = 15 is 1, 2, 4, 8. What changes should be made to get the correct result?
A greedy algorithm works for this problem. Keep summing 1 + 2 + ... until adding the next number m would make the sum exceed n; then merge that m into the previous term and subtract the overshoot, i.e. the last term becomes (m-1) + m - (sum(1...m) - n). The numbers 1, 2, ..., m-2 together with that last term are the answer. (If the sum ever equals n exactly, the answer is simply 1, 2, ..., m.)
eg.
18
1+2+3+4+5 < 18
+6 = 21 > 18
So, answer: 1+2+3+4+(5+6-(21-18))
28
1+2+3+4+5+6+7 = 28
So, answer: 1+2+3+4+5+6+7
Pseudocode (constant time, complexity O(1), if m is obtained from the quadratic formula)
Find the smallest m such that m * (m+1) > 2 * n
Number of terms = m - 1
Terms: 1, 2, 3, ..., m-2, (m-1 + m - (sum(1...m) - n))
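A minimal C++ sketch of this greedy (my own rendering, assuming n >= 1): take 1, 2, 3, ... while the running sum stays at most n, then add the leftover to the last term.

#include <iostream>
#include <vector>
using namespace std;

// Split n into the maximum number of distinct positive integers (assumes n >= 1).
vector<int> splitDistinct(int n) {
    vector<int> s;
    int sum = 0;
    for (int m = 1; sum + m <= n; ++m) {
        s.push_back(m);
        sum += m;
    }
    s.back() += n - sum;   // the leftover is at most the last term, so all terms stay distinct
    return s;
}

int main() {
    for (int v : splitDistinct(18)) cout << v << ' ';   // 1 2 3 4 8
    cout << '\n';
    for (int v : splitDistinct(15)) cout << v << ' ';   // 1 2 3 4 5
    cout << '\n';
}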
sum can be partitioned into k distinct terms from {1, ..., m} if min(k) <= sum <= max(k, m), with
min(k) = 1 + 2 + ... + k = (k*(k+1))/2
max(k, m) = m + (m-1) + ... + (m-k+1) = k*m - (k*(k-1))/2
So, you can use the following pseudo-code:
fn solve(n, k, sum) -> set or error
    s = new_set()
    for m from n down to 1:
        # will the problem still be solvable if we add m to s?
        if min(k-1) <= sum-m <= max(k-1, m-1) then
            s.add(m), sum -= m, k -= 1
    if sum = 0 and k = 0 then s else error()
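A possible C++ rendering of this pseudo-code (a sketch; the helper names are mine):

#include <iostream>
#include <vector>
using namespace std;

long long minSum(long long k)              { return k * (k + 1) / 2; }          // 1 + 2 + ... + k
long long maxSum(long long k, long long m) { return k * m - k * (k - 1) / 2; }  // m + (m-1) + ... + (m-k+1)

// Pick k distinct terms from {1, ..., n} that add up to sum, greedily from the
// largest value down. Returns an empty vector on failure.
vector<long long> solve(long long n, long long k, long long sum) {
    vector<long long> s;
    for (long long m = n; m >= 1; --m) {
        // Will the problem still be solvable if we add m to s?
        if (k > 0 && minSum(k - 1) <= sum - m && sum - m <= maxSum(k - 1, m - 1)) {
            s.push_back(m);
            sum -= m;
            --k;
        }
    }
    return (sum == 0 && k == 0) ? s : vector<long long>{};
}

int main() {
    for (long long v : solve(10, 3, 15)) cout << v << ' ';   // prints: 10 4 1
    cout << '\n';
}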
I need to write a program which displays all possible change combinations given an array of denominations [1 , 2, 5, 10, 20, 50, 100, 200] // 1 = 1 cent
Value to make the change from = 300
I'm basing my code on the solution from this site http://www.geeksforgeeks.org/dynamic-programming-set-7-coin-change/
#include <stdio.h>

int count( int S[], int m, int n )
{
    int i, j, x, y;

    // We need n+1 rows as the table is constructed in a bottom-up manner using
    // the base case of value 0 (n = 0)
    int table[n+1][m];

    // Fill the entries for the 0 value case (n = 0)
    for (i = 0; i < m; i++)
        table[0][i] = 1;

    // Fill the rest of the table entries in a bottom-up manner
    for (i = 1; i < n+1; i++)
    {
        for (j = 0; j < m; j++)
        {
            // Count of solutions including S[j]
            x = (i-S[j] >= 0) ? table[i - S[j]][j] : 0;

            // Count of solutions excluding S[j]
            y = (j >= 1) ? table[i][j-1] : 0;

            // Total count
            table[i][j] = x + y;
        }
    }
    return table[n][m-1];
}
// Driver program to test above function
int main()
{
    int arr[] = {1, 2, 5, 10, 20, 50, 100, 200}; // coins array
    int m = sizeof(arr)/sizeof(arr[0]);
    int n = 300; // value to make change from
    printf(" %d ", count(arr, m, n));
    return 0;
}
The program runs fine. It displays the number of all possible combinations, but I need it to be more advanced. The way I need it to work is to display the result in the following fashion:
1 cent: n number of possible combinations.
2 cents:
5 cents:
and so on...
How can I modify the code to achieve that?
Greedy Algorithm Approach
Keep these denominations in an int array, say int den[] = {1, 2, 5, 10, 20, 50, 100, 200}
Iterate over this array
For each iteration do the following
Take the current element of the denominations array
Divide the amount of change still to be allotted by that denomination
If the change amount is perfectly divisible by the denomination, then you are done with the change for that amount
If it is not perfectly divisible, then record how many of this coin fit, take the remainder, and repeat the same steps with the next denomination
Exit the inner iteration once the allotted value equals the change amount
Do the same for the next denomination available in the denomination array
Explained with an example (a short code sketch follows after the walkthrough):
den = [1, 2, 5, 10, 20, 50, 100, 200]
Change to be allotted: 270, let us call it x
and y be a temporary variable
Change map z[coin denomination, count of coins]
int y, z[];
First iteration :
den = 1
x = 270
y = 270/1;
if x is equal to y*den
then z[den, y] // z[1, 270]
Iteration completed
Second Iteration:
den = 2
x = 270
y = 270/2;
if x is equal to y*den
then z[den , y] // [2, 135]
Iteration completed
Let's take an odd number:
x = 217 and den = 20
y= 217/20;
now x is not equal to y*den
then update z[den, y] // [20, 10]
find new x = x - den*y = 17
x = 17; greedily identify the next denomination, which would be 10
den = 10
y = 17/10
now x is not equal to y*den
then update z[den, y] // [10, 1]
find new x = x - den*y = 7
then do the same, and your map would have the following entries:
[20, 10]
[10, 1]
[5, 1]
[2, 1]
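For what it's worth, here is a rough C++ sketch of the inner greedy step described above (taking denominations largest-first; the function name is mine). Note that it produces one particular way of making the change, which is different from the per-denomination combination counts the question asks about:

#include <iostream>
#include <map>
using namespace std;

// z maps coin denomination -> count of coins, as in the walkthrough above.
map<int, int> greedyChange(int x) {
    int den[] = {200, 100, 50, 20, 10, 5, 2, 1};   // largest first
    map<int, int> z;
    for (int d : den) {
        if (x >= d) {
            z[d] = x / d;      // how many of this coin fit
            x -= (x / d) * d;  // the remainder carries on to smaller coins
        }
    }
    return z;
}

int main() {
    for (auto &p : greedyChange(270))
        cout << p.second << " coin(s) of " << p.first << '\n';   // 1 of 20, 1 of 50, 1 of 200
}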
Question:
A non-empty zero-indexed array A consisting of N integers is given.
A monotonic pair is a pair of integers (P, Q), such that 0 ≤ P ≤ Q < N and A[P] ≤ A[Q].
The goal is to find the monotonic pair whose indices are the furthest apart. More precisely, we should maximize the value Q − P. It is sufficient to find only the distance.
For example, consider array A such that:
A[0] = 5
A[1] = 3
A[2] = 6
A[3] = 3
A[4] = 4
A[5] = 2
There are eleven monotonic pairs: (0,0), (0, 2), (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (3, 3), (3, 4), (4, 4), (5, 5). The biggest distance is 3, in the pair (1, 4).
Write a function:
int solution(vector<int> &A);
that, given a non-empty zero-indexed array A of N integers, returns the biggest distance within any of the monotonic pairs.
For example, given:
A[0] = 5
A[1] = 3
A[2] = 6
A[3] = 3
A[4] = 4
A[5] = 2
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..300,000];
each element of array A is an integer within the range [−1,000,000,000..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
Here is my solution of MaxDistanceMonotonic:
int solution(vector<int> &A) {
    long int result;
    long int max = A.size() - 1;
    long int min = 0;

    while(A.at(max) < A.at(min)){
        max--;
        min++;
    }
    result = max - min;

    while(max < (long int)A.size()){
        while(min >= 0){
            if(A.at(max) >= A.at(min) && max - min > result){
                result = max - min;
            }
            min--;
        }
        max++;
    }
    return result;
}
My results look like this: what's wrong with my answer for the last test?
If you have:
index: 0  1  2  3  4  5
value: 31 2 10 11 12 30
your algorithm outputs 3, but the correct answer is 4 = 5 - 1.
This happens because your min goes to -1 on the first full run of the inner while loop and is never reset, so the pair (1, 5) never gets checked: max starts out at 4 when entering the nested whiles, and by the time max reaches 5 the inner loop no longer runs.
Note that the problem description expects O(n) extra storage, while you use O(1). I don't think it's possible to solve the problem with O(1) extra storage and O(n) time.
I suggest you rethink your approach. If you give up, there is an official solution here.
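For reference, one common O(N)-time, O(N)-extra-space approach (not necessarily the official one) precomputes prefix minima and suffix maxima and then walks two pointers; a sketch (the name solution_prefix is mine):

#include <algorithm>
#include <vector>
using namespace std;

int solution_prefix(vector<int> &A) {
    int n = A.size();
    // minLeft[i] = smallest value in A[0..i]; maxRight[j] = largest value in A[j..n-1]
    vector<int> minLeft(n), maxRight(n);
    minLeft[0] = A[0];
    for (int i = 1; i < n; ++i) minLeft[i] = min(minLeft[i - 1], A[i]);
    maxRight[n - 1] = A[n - 1];
    for (int j = n - 2; j >= 0; --j) maxRight[j] = max(maxRight[j + 1], A[j]);

    // A pair (P, Q) with P <= i, Q >= j and A[P] <= A[Q] exists iff minLeft[i] <= maxRight[j],
    // so the answer is the largest j - i with minLeft[i] <= maxRight[j]. Both arrays are
    // non-increasing, so the j pointer only ever moves forward.
    int best = 0, j = 0;
    for (int i = 0; i < n; ++i) {
        if (j < i) j = i;   // (i, i) is always a valid pair
        while (j + 1 < n && minLeft[i] <= maxRight[j + 1]) ++j;
        best = max(best, j - i);
    }
    return best;
}

On the example in the question this returns 3, and on the array 31 2 10 11 12 30 above it returns 4.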
I have a big matrix as input, and I have the size of a smaller matrix. I have to compute the sum of all possible smaller matrices which can be formed out of the bigger matrix.
Example.
Input matrix size: 4 × 4
Matrix:
1 2 3 4
5 6 7 8
9 9 0 0
0 0 9 9
Input smaller matrix size: 3 × 3 (not necessarily a square)
Smaller matrices possible:

1 2 3
5 6 7
9 9 0

5 6 7
9 9 0
0 0 9

2 3 4
6 7 8
9 0 0

6 7 8
9 0 0
0 9 9
Their sum, final output
14 18 22
29 22 15
18 18 18
I did this:
int** matrix_sum(int **M, int n, int r, int c)
{
    int **res = new int*[r];
    for(int i=0 ; i<r ; i++) {
        res[i] = new int[c];
        memset(res[i], 0, sizeof(int)*c);
    }

    for(int i=0 ; i<=n-r ; i++)
        for(int j=0 ; j<=n-c ; j++)
            for(int k=i ; k<i+r ; k++)
                for(int l=j ; l<j+c ; l++)
                    res[k-i][l-j] += M[k][l];

    return res;
}
I guess this is too slow; can anyone please suggest a faster way?
Your current algorithm is O((m - p + 1) * (n - q + 1) * p * q). The worst case is when p = m / 2 and q = n / 2.
The algorithm I'm going to describe will be O(m * n + p * q), which will be O(m * n) regardless of p and q.
The algorithm consists of 2 steps.
Let the input matrix A's size be m x n and the window matrix's size be p x q.
First, you will create a precomputed matrix B of the same size as the input matrix. Each element of the precomputed matrix B contains the sum of all the elements in the sub-matrix, whose top-left element is at coordinate (1, 1) of the original matrix, and the bottom-right element is at the same coordinate as the element that we are computing.
B[i, j] = Sum[k = 1..i, l = 1..j]( A[k, l] ) for all 1 <= i <= m, 1 <= j <= n
This can be done in O(m * n), by using this relation to compute each element in O(1):
B[i, j] = B[i - 1, j] + Sum[k = 1..j-1]( A[i, k] ) + A[i, j] for all 2 <= i <= m, 1 <= j <= n
B[i - 1, j], which is everything of the sub-matrix we are computing except the current row, has been computed previously. You keep a prefix sum of the current row, so that you can use it to quickly compute the sum of the current row.
This is another way to compute B[i, j] in O(1), using the property of the 2D prefix sum:
B[i, j] = B[i - 1, j] + B[i, j - 1] - B[i - 1, j - 1] + A[i, j] for all 1 <= i <= m, 1 <= j <= n, where any entry with index 0 is treated as 0
Then, the second step is to compute the result matrix S, whose size is p x q. If you look closely, S[i, j] is the sum of all elements in the sub-matrix of size (m - p + 1) x (n - q + 1) whose top-left coordinate is (i, j) and bottom-right coordinate is (i + m - p, j + n - q).
Using the precomputed matrix B, you can compute the sum of any sub-matrix in O(1). Apply this to compute the result matrix S:
SubMatrixSum(top-left = (x1, y1), bottom-right = (x2, y2))
= B[x2, y2] - B[x1 - 1, y2] - B[x2, y1 - 1] + B[x1 - 1, y1 - 1]
Therefore, the complexity of the second step will be O(p * q).
The final complexity is as mentioned above, O(m * n), since p <= m and q <= n.
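Here is a rough C++ sketch of the two steps above, using the same matrix_sum(M, n, r, c) interface as the question (the name matrix_sum_fast is mine):

#include <vector>
using namespace std;

// n x n input M, r x c window. Step 1 builds the 2D prefix-sum matrix B,
// step 2 reads each entry of the r x c result in O(1).
int** matrix_sum_fast(int **M, int n, int r, int c)
{
    // B[i][j] = sum of M[0..i-1][0..j-1] (row/column 0 of B is all zeros).
    vector<vector<long long> > B(n + 1, vector<long long>(n + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j)
            B[i][j] = B[i-1][j] + B[i][j-1] - B[i-1][j-1] + M[i-1][j-1];

    int **res = new int*[r];
    for (int i = 0; i < r; ++i)
        res[i] = new int[c];

    // res[i][j] = sum of M over rows i..i+n-r and columns j..j+n-c (0-based), i.e. the
    // total contribution of window cell (i, j) across all window positions.
    for (int i = 0; i < r; ++i)
        for (int j = 0; j < c; ++j) {
            int x2 = i + n - r, y2 = j + n - c;   // bottom-right corner (0-based)
            res[i][j] = (int)(B[x2+1][y2+1] - B[i][y2+1] - B[x2+1][j] + B[i][j]);
        }
    return res;
}

On the question's 4 x 4 example with a 3 x 3 window this reproduces the expected output (14 18 22 / 29 22 15 / 18 18 18).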