SPOJ GSS1 WA - Segment tree - c++

I am trying to solve SPOJ problem GSS1 (Can you answer these queries I) using a segment tree. I am using an 'init' method to initialise the tree and a 'query' method to get the maximum in a range [i,j].
Limits: |A[i]| <= 15707 and 1 <= N (number of elements) <= 50000.
int A[50500], tree[100500];

void init(int n, int b, int e) // n = node index (call with 1), [b, e] = range covered by this node
{
    if (b == e)
    {
        tree[n] = A[b];
        return;
    }
    init(2 * n, b, (b + e) / 2);
    init(2 * n + 1, ((b + e) / 2) + 1, e);
    tree[n] = (tree[2 * n] > tree[2 * n + 1]) ? tree[2 * n] : tree[2 * n + 1];
}

int query(int n, int b, int e, int i, int j) // n = node index (call with 1), [b, e] = node range, [i, j] = query range
{
    if (i > e || j < b)       // no overlap
        return -20000;
    if (b >= i && e <= j)     // node range completely inside the query range
        return tree[n];
    int p1 = query(2 * n, b, (b + e) / 2, i, j);
    int p2 = query(2 * n + 1, ((b + e) / 2) + 1, e, i, j);
    return (p1 > p2) ? p1 : p2;
}
The program is giving Wrong Answer. I debugged the code for most of the cases (negative numbers, odd/even N) but I am unable to figure out what is wrong with the algorithm.
If anyone can point me in the right direction, I would be grateful.
Thanks

I fear the (accepted) answer has missed a very important point here. The problem is with the algorithm itself. The code says the answer for a node is the maximum of its children's values, but it is entirely possible that the maximum subarray lies partially in BOTH children. For example:
-1 -2 3 4 5 6 -5 -10 (n=8)
The code will output 11 (the best subarray lying entirely inside one half, 5 + 6), while the answer is 18 (3 + 4 + 5 + 6, which straddles the two halves).
You will need to handle this case as well to beat WA.
(I am answering this because the accepted answer is not entirely right and doesn't answer this question correctly.)
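To make the crossing case concrete, here is a minimal sketch (my own, not the poster's code) of the node data a GSS1-style segment tree usually stores and how two children are merged:

#include <algorithm>

struct Node {
    int sum;   // total sum of the segment
    int pref;  // best prefix sum (starting at the left end)
    int suf;   // best suffix sum (ending at the right end)
    int best;  // best subarray sum anywhere in the segment
};

Node make_leaf(int value) {
    return Node{value, value, value, value};
}

Node combine(const Node &L, const Node &R) {
    Node res;
    res.sum  = L.sum + R.sum;
    res.pref = std::max(L.pref, L.sum + R.pref);
    res.suf  = std::max(R.suf, R.sum + L.suf);
    // the best subarray is either entirely inside one child or crosses the border
    res.best = std::max(std::max(L.best, R.best), L.suf + R.pref);
    return res;
}

A range query over [i, j] then combines the canonical nodes covering [i, j] with the same combine function and reports best.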

Edit: it seems your implementation is also correct; mine is just different. And we both misread the problem statement.
I guess you call your query function with the parameters
query( 1, 0, n-1, x-1, y-1 );
I believe it's wrong to handle a segment tree this way when your n is not a power of 2.
I suggest that you:
enlarge the tree array to 131072 elements (2^17) and A to 65536 (2^16);
find the smallest k that is not smaller than n and is a power of 2;
initialize elements from n (0-based) to k-1 with -20000;
make n equal to k;
make sure you call init(1,0,n-1);
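Put together, the suggested steps might look roughly like this (a sketch only; it assumes the enlarged arrays from step 1, the init from the question, and a global n):

// find the smallest power of 2 that is not smaller than n
int k = 1;
while (k < n)
    k *= 2;
// pad with a value that can never win a maximum query (|A[i]| <= 15707)
for (int i = n; i < k; i++)
    A[i] = -20000;
n = k;
init(1, 0, n - 1);   // build the tree over the padded, power-of-two-sized array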
Hope that'll help you to beat WA.

My segment tree update function doesn't work properly

The problem:
In this task, you need to write a regular segment tree for the sum.
Input: The first line contains two integers n and m (1≤n,m≤100000), the size of the array and the number of operations. The next line contains n numbers a_i, the initial state of the array (0≤a_i≤10^9). The following lines contain the descriptions of the operations, each of which is one of the following:
1 i v: set the element with index i to v (0≤i<n, 0≤v≤10^9).
2 l r: calculate the sum of elements with indices from l to r−1 (0≤l<r≤n).
Output: For each operation of the second type print the corresponding sum.
I'm trying to implement a segment tree, and all my functions work properly except for the update function:
void update(int i, int delta, int v = 0, int tl = 0, int tr = n - 1)
{
    if (tl == i && tr == i)
        t[v] += delta;
    else if (tl <= i && i <= tr)
    {
        t[v] += delta;
        int m = (tl + tr) / 2;
        int left = 2 * v + 1;
        int right = left + 1;
        update(i, delta, left, tl, m);
        update(i, delta, right, m + 1, tr);
    }
}
I got WA on segment tree problem, meanwhile with this update function I got accepted:
void update(int i, int new_value, int v = 0, int tl = 0, int tr = n - 1)
{
    if (tl == i && tr == i)
        t[v] = new_value;
    else if (tl <= i && i <= tr)
    {
        int m = (tl + tr) / 2;
        int left = 2 * v + 1;
        int right = left + 1;
        update(i, new_value, left, tl, m);
        update(i, new_value, right, m + 1, tr);
        t[v] = t[left] + t[right];
    }
}
I really don't understand why my first version is not working. I thought maybe I had some kind of overflow problem and decided to change everything to long long, but it didn't help, so the problem is in the update algorithm itself. But it seems OK to me: for every segment that includes i, I add some delta to the sum of that segment (the delta can be negative; for example, if I had the number 5 and decided to change it to 3, then the delta will be -2). So what's the problem? I really don't see it :(
There are 2 problems with your first solution:
First, the question expects you to do a point update (set the element to a new value). The condition (tl == i && tr == i) checks whether you are at the leaf node of the tree, and at the leaf you have to actually replace the value instead of adding something to it, which is what you do in your second solution.
Secondly, you can only update a non-leaf node after all of its child nodes are updated. Updating t[v] before making the recursive calls will result in a wrong answer.

How to determine time complexity of this code?

I have just started learning algorithms. Consider the following function:
int findMinPath(vector<vector<int> > &V, int r, int c) {
    int R = V.size();
    int C = V[0].size();
    if (r >= R || c >= C) return 100000000; // Infinity
    if (r == R - 1 && c == C - 1) return 0;
    return V[r][c] + min(findMinPath(V, r + 1, c), findMinPath(V, r, c + 1));
}
I think the answer should be O(RC), but the right answer is O(2^(RC)). I cannot understand why. Please explain.
When you have multiple recursive calls, a good approximation is the number of branches (calls) raised to the power of the height of the "tree".
In your case, you have two branches:
findMinPath(V, r+1, c)
findMinPath(V, r, c+1)
So we'll start with a base of 2.
Then the height (or depth) of your "tree" is determined by how many elements are in your vector; in your case, you have R elements, each of which has C elements in it, so our power is RC.
Thus your runtime will, in the worst case, approximate O(2^(RC)).
In the worst case, findMinPath calls itself twice in this line:
return V[r][c] + min(findMinPath(V, r + 1, c), findMinPath(V, r, c + 1));
So, in the worst case, every call involves two further calls until the recursion ends. Hence the two-to-the-power behaviour.
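If you want to see the blow-up directly, one illustrative experiment (the calls counter and the size loop are mine, not part of the original code) is to count how many times the function is entered:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

long long calls = 0; // hypothetical global counter, added only for the experiment

int findMinPath(vector<vector<int> > &V, int r, int c) {
    ++calls;
    int R = V.size();
    int C = V[0].size();
    if (r >= R || c >= C) return 100000000; // Infinity
    if (r == R - 1 && c == C - 1) return 0;
    return V[r][c] + min(findMinPath(V, r + 1, c), findMinPath(V, r, c + 1));
}

int main() {
    for (int n = 2; n <= 12; ++n) {
        vector<vector<int> > V(n, vector<int>(n, 1));
        calls = 0;
        findMinPath(V, 0, 0);
        // the number of calls grows exponentially with the grid size
        cout << n << "x" << n << ": " << calls << " calls\n";
    }
}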

Recursive algorithm for cos taylor series expansion c++

I recently wrote a Computer Science exam where they asked us to give a recursive definition for the cos Taylor series expansion. This is the series:
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! ...
and the function signature looks as follows:
float cos(int n , float x)
where n represents the term in the series up to which the user would like to calculate, and x represents the value of x in the cos function.
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help me get started somewhere?
All the answers so far recompute the factorial every time. I surely wouldn't do that. Instead you can write:
float cos(int n, float x)
{
    if (n > MAX)   // MAX is a predefined constant: the highest term index you want to include
        return 1;
    return 1 - x*x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos(n, x) returns the following (the dots stand for the remaining alternating terms):
cos(n, x) = 1 - x²/((2n-1)(2n)) + x⁴/((2n-1)(2n)(2n+1)(2n+2)) - ...
You can check that this is true for n > MAX, n = MAX, and so on; the alternating signs and the powers of x are easy to see.
Finally, at n = 1 the denominators become the full factorials 2!, 4!, ..., so calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By developing (easier to see when it has few terms), you can see the first formula is equivalent to the following:
cos(n, x) = 1 - x² / ((2n-1)(2n)) · cos(n+1, x), with cos(n, x) = 1 for n > MAX
For n > 0, cos(n-1, x) takes the previous result, divides it by (2n-3)(2n-2), multiplies it by x² and subtracts that from 1. You can see that when n = MAX+1 this formula is 1, with n = MAX it is 1 - x²/((2MAX-1)·2MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so:
float cos(int n, float x) { return cos_helper(1, x, n); }
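Spelled out, that helper-based variant could look like this (a sketch following the signature suggested just above):

float cos_helper(int n, float x, int MAX)
{
    if (n > MAX)
        return 1;
    return 1 - x * x / ((2 * n - 1) * (2 * n)) * cos_helper(n + 1, x, MAX);
}

float cos(int n, float x) { return cos_helper(1, x, n); }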
Edit: To reverse the meaning of n from the degree of the evaluated term (as in this answer so far) to the number of terms (as in the question, and below), but still not recompute the whole factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0,x) = 0 and cos(1,x) = 1, and aim for cos(n,x) to be, in general, the sum of the first n terms of the Taylor series.
Then for each n > 0, we can write cos(n,x) in terms of cos(n-1,x):
cos(n,x) = cos(n-1,x) + (-1)^(n-1) · x^(2n-2) / (2n-2)!
Now for n > 1, we try to make the last term of cos(n-1,x) appear (because it is the closest term to the one we want to add):
cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) · ( (-1)^(n-2) · x^(2n-4) / (2n-4)! )
Since, by the previous formula applied to n-1 instead of n, that bracketed term is exactly cos(n-1,x) - cos(n-2,x), we get:
cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) · ( cos(n-1,x) - cos(n-2,x) )
We now have a purely recursive definition of cos(n,x), without helper function, without recomputing the factorial, and with n the number of terms in the sum of the Taylor decomposition.
However, I must stress that the following code will perform terribly:
performance-wise, unless some optimization avoids re-evaluating a cos(n-1,x) that was already evaluated at the previous step as cos((n-1)-1, x);
precision-wise, because of cancellation effects: the precision with which we recover x^(2n-4) / (2n-4)! (as the difference cos(n-1,x) - cos(n-2,x)) is very bad.
Now that this disclaimer is in place, here comes the code:
float cos(int n, float x)
{
    if (n < 2)
        return n;                                   // cos(0,x) = 0 and cos(1,x) = 1
    float c = x * x / ((2 * n - 3) * (2 * n - 2));  // magnitude of the ratio between the n-th and (n-1)-th terms
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! ...
       = 1 - x^2/(1*2) * (1 - x^2/(3*4) + x^4/(3*4*5*6) - x^6/(3*4*5*6*7*8))
       = 1 - x^2/(1*2) * (1 - x^2/(3*4) * (1 - x^2/(5*6) + x^4/(5*6*7*8)))
       = 1 - x^2/(1*2) * (1 - x^2/(3*4) * (1 - x^2/(5*6) * (1 - x^2/(7*8))))
// Call with n = 2*k to sum the terms up to x^(2k), e.g. cos_series_recursion(x, 8) uses terms up to x^8.
double cos_series_recursion(double x, int n, double r = 1) {
    if (n > 0) {
        r = 1 - ((x * x * r) / (n * (n - 1)));
        return cos_series_recursion(x, n - 2, r);
    } else
        return r;
}
A simple approach that makes use of static variables:
double cos(double x, int n) {
    // p and f carry the current power of x and factorial between the recursive calls;
    // being static, they are NOT reset between top-level calls, so this works for a single call per run.
    static double p = 1, f = 1;
    double r;
    if (n == 0)
        return 1;
    r = cos(x, n - 1);
    p = (p * x) * x;              // p = x^(2n)
    f = f * (2 * n - 1) * 2 * n;  // f = (2n)!
    if (n % 2 == 0) {
        return r + p / f;
    } else {
        return r - p / f;
    }
}
Notice that I'm multiplying by (2*n-1) and 2*n in that operation to get the next factorial.
If n were aligned with the factorial we need, this would be easy to do in 2 operations: f = f * (n - 1), then f = f * n.
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion you could use something like the code below:
#include <iostream>
#include <cmath>   // for pow
using namespace std;

int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n - 1);   // note: overflows int for n >= 13, so keep the term count small
}

float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n - 1, x) + (n % 2 ? -1 : 1) * pow(x, 2 * n) / fact(2 * n);
}

int main()
{
    cout << Cos(6, 3.14/6);
}
Just do it like the sum.
The parameter n in float cos(int n, float x) is the upper limit of the sum, and now just do it...
Some pseudocode:
float cos(int n, float x)
{
    // the n-th term of the sum
    float sum = pow(-1, n) * pow(x, 2*n) / factorial(2*n);
    if (n <= /*Some predefined maximum*/)
        return sum + cos(n + 1, x);
    return sum;
}
The usual technique, when you want to recurse but the function arguments don't carry the information that you need, is to introduce a helper function to do the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and that you then also need some index that works itself up to n. So that's one good candidate for an extra argument for the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
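As a rough sketch of that idea (the helper name cos_aux and the choice of carrying the current term as the extra arguments are mine):

// k is the index of the current term, n the number of terms requested,
// and term already holds (-1)^k * x^(2k) / (2k)!, so nothing is recomputed.
float cos_aux(int k, int n, float x, float term)
{
    if (k >= n)
        return 0;
    float next = -term * x * x / ((2 * k + 1) * (2 * k + 2));
    return term + cos_aux(k + 1, n, x, next);
}

float cos(int n, float x)
{
    return cos_aux(0, n, x, 1.0f);   // the 0-th term is 1
}

Each step derives the next term from the current one, so no factorial is ever recomputed.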

Cut rectangle in minimum number of squares

I'm trying to solve the following problem:
A rectangular paper sheet of M*N is to be cut down into squares such that:
The paper is cut along a line that is parallel to one of the sides of the paper.
The paper is cut such that the resultant dimensions are always integers.
The process stops when the paper can't be cut any further.
What is the minimum number of pieces the paper can be cut into such that all of them are squares?
Limits: 1 <= N <= 100 and 1 <= M <= 100.
Example: Let N=1 and M=2; then the answer is 2, as the minimum number of squares the paper can be cut into is 2 (the paper is cut in the middle, along its smaller side).
My code:
cin >> n >> m;
int N = min(n, m);
int M = max(n, m);
int ans = 0;
while (N != M) {
    ans++;            // cut off an N x N square, leaving an (M - N) x N rectangle
    int x = M - N;
    int y = N;
    M = max(x, y);
    N = min(x, y);
}
if (N == M && M != 0)
    ans++;
But I can't figure out what's wrong with this approach, as it's giving me a wrong answer.
I think both the DP and greedy solutions are not optimal. Here is a counterexample for the DP solution:
Consider a rectangle of size 13 x 11. The DP solution gives 8 as the answer, but the optimal solution uses only 6 squares.
This thread has many counterexamples: https://mathoverflow.net/questions/116382/tiling-a-rectangle-with-the-smallest-number-of-squares
Also, have a look at this for correct solution: http://int-e.eu/~bf3/squares/
I'd write this as a dynamic (recursive) program.
Write a function which tries to split the rectangle at some position. Call the function recursively for both parts. Try all possible splits and take the one with the minimum result.
The base case would be when both sides are equal, i.e. the input is already a square, in which case the result is 1.
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    return min { min_hor, min_ver }
To improve performance, you can cache the recursive results:
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // check if we already cached this
    if cache contains (m, n):
        return cache(m, n)
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    // put in cache and return
    result := min { min_hor, min_ver }
    cache(m, n) := result
    return result
In a concrete C++ implementation, you could use int cache[100][100] for the cache data structure since your input size is limited. Put it as a static local variable, so it will automatically be initialized with zeroes. Then interpret 0 as "not cached" (as it can't be the result of any inputs).
Possible C++ implementation: http://ideone.com/HbiFOH
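A minimal sketch along those lines (not necessarily the same code as behind the ideone link) could be:

#include <algorithm>
#include <iostream>
using namespace std;

int min_squares(int m, int n) {
    static int cache[101][101] = {};        // 0 means "not cached yet"
    if (m == n) return 1;                   // already a square
    int &res = cache[m][n];
    if (res != 0) return res;
    res = m * n;                            // upper bound: cut everything into 1x1 squares
    for (int i = 1; i <= n / 2; ++i)        // vertical splits
        res = min(res, min_squares(m, i) + min_squares(m, n - i));
    for (int i = 1; i <= m / 2; ++i)        // horizontal splits
        res = min(res, min_squares(i, n) + min_squares(m - i, n));
    return res;
}

int main() {
    int n, m;
    cin >> n >> m;
    cout << min_squares(n, m) << endl;
}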
The greedy algorithm is not optimal. On a 6x5 rectangle, it uses a 5x5 square and 5 1x1 squares. The optimal solution uses 2 3x3 squares and 3 2x2 squares.
To get an optimal solution, use dynamic programming. The brute-force recursive solution tries all possible horizontal and vertical first cuts, recursively cutting the two pieces optimally. By caching (memoizing) the value of the function for each input, we get a polynomial-time dynamic program (O(m n max(m, n))).
This problem can be solved using dynamic programming.
Assume we have a rectangle with width N and height M.
If N == M, it is already a square and nothing more needs to be done.
Otherwise, we can divide the rectangle into two smaller ones, (N - x, M) and (x, M), so it can be solved recursively.
Similarly, we can also divide it into (N, M - x) and (N, x).
Pseudo code:
int[][] dp;
boolean[][] check;

int cutNeeded(int n, int m)
    if (n == m)
        return 1;
    if (check[n][m])
        return dp[n][m];
    check[n][m] = true;
    int result = n * m;
    for (int i = 1; i <= n / 2; i++)
        int tmp = cutNeeded(n - i, m) + cutNeeded(i, m);
        result = min(tmp, result);
    for (int i = 1; i <= m / 2; i++)
        int tmp = cutNeeded(n, m - i) + cutNeeded(n, i);
        result = min(tmp, result);
    return dp[n][m] = result;
Here is a greedy implementation. As @David mentioned, it is not optimal and is completely wrong in some cases, so the dynamic approach (with caching) is the best.
def greedy(m, n):
    if m == n:
        return 1
    if m < n:
        m, n = n, m
    cuts = 0
    while n:
        cuts += m / n
        m, n = n, m % n
    return cuts

print greedy(2, 7)
Here is a DP attempt in Python:
import sys

def cache(f):
    db = {}
    def wrap(*args):
        key = str(args)
        if key not in db:
            db[key] = f(*args)
        return db[key]
    return wrap

@cache
def squares(m, n):
    if m == n:
        return 1
    xcuts = sys.maxint
    ycuts = sys.maxint
    x, y = 1, 1
    while x * 2 <= n:
        xcuts = min(xcuts, squares(m, x) + squares(m, n - x))
        x += 1
    while y * 2 <= m:
        ycuts = min(ycuts, squares(y, n) + squares(m - y, n))
        y += 1
    return min(xcuts, ycuts)
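As a quick sanity check (these example calls are mine, reusing the 6x5 counterexample mentioned in another answer), the greedy and the DP versions disagree:

print greedy(6, 5)   # 6: one 5x5 square plus five 1x1 squares
print squares(6, 5)  # 5: e.g. two 3x3 squares plus three 2x2 squares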
This is essentially the classic integer or 0-1 knapsack problem, which can be solved using a greedy or dynamic programming approach. You may refer to: Solving the Integer Knapsack

Sum of all Maximum and minimum in all segments

I am doing a problem in which I need to find (the sum of the maximum elements over all segments) - (the sum of the minimum elements over all segments). I tried using a Sparse Table, but it is too slow for the time limit. So I did something like this:
If n=4, the segments are [1,2],[1,3],[1,4],[2,3],[2,4],[3,4].
The problem is similar to an RMQ problem, but I have to do it for all segments and find
sum = [max(a[1],a[2]) + max(a[1],a[2],a[3]) + max(a[1],a[2],a[3],a[4]) + max(a[2],a[3]) + max(a[2],a[3],a[4]) + max(a[3],a[4])] - [min(a[1],a[2]) + min(a[1],a[2],a[3]) + min(a[1],a[2],a[3],a[4]) + min(a[2],a[3]) + min(a[2],a[3],a[4]) + min(a[3],a[4])]
for (i = 1; i < n; i++)
{
    maxtilli[i - 1] = INT_MIN;
    mintilli[i - 1] = INT_MAX;
    for (k = 1, j = i; j <= n; k++, j++)
    {
        if (a[j] > maxtilli[k - 1])
        {
            maxtilli[k] = a[j];
        }
        else
        {
            maxtilli[k] = maxtilli[k - 1];
        }
        if (a[j] < mintilli[k - 1])
        {
            mintilli[k] = a[j];
        }
        else
        {
            mintilli[k] = mintilli[k - 1];
        }
        if (i != j)
        {
            ans += (maxtilli[k] - mintilli[k]);
        }
    }
}
Here n is of the order of 100,000. So is there any way to optimize it?
Suppose n=4; then the segments are [1,2],[1,3],[1,4],[2,3],[2,4],[3,4].
The thing required is
sum = (the sum of max(a[l..r]) over all segments [l,r]) - (the sum of min(a[l..r]) over all segments [l,r])
We can first deal with the first sub-problem: the sum of the maximum value over all segments.
Algorithm
First, find the position i of the maximum value a[i] in the whole sequence. Every segment that contains position i has a[i] as its maximum, so add a[i] multiplied by the number of such segments to the answer; for the whole array [1, n] that count is i * (n - i + 1) - 1 (choices for the left endpoint times choices for the right endpoint, minus the single-element segment [i, i]). Then the problem splits into the two smaller sequences [1, i - 1] and [i + 1, n], which you handle in the same way.
Code
void cal(int L, int R) {
    if (L >= R) return;                          // fewer than two elements: no segment to count
    int max_index = find_max(L, R);              // O(logN), using a Sparse Table or Segment Tree
    // number of segments [l, r] with l <= max_index <= r and l < r
    long long all_segments = 1LL * (max_index - L + 1) * (R - max_index + 1) - 1;
    ans += a[max_index] * all_segments;
    cal(L, max_index - 1);
    cal(max_index + 1, R);
}
// cal() is entered O(N) times and each call does one O(logN) find_max, so the total complexity is O(N * logN)
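To get the quantity the question actually asks for, run the same divide-and-conquer a second time for the minima and subtract; a sketch (find_min and ans_min are hypothetical names, and the RMQ structures over a[] are assumed to exist):

long long ans_min = 0;                 // "ans" above accumulates the sum of maxima

void cal_min(int L, int R) {           // mirror of cal(), driven by the minimum instead
    if (L >= R) return;
    int min_index = find_min(L, R);    // assumed O(logN) range-minimum query
    long long all_segments = 1LL * (min_index - L + 1) * (R - min_index + 1) - 1;
    ans_min += a[min_index] * all_segments;
    cal_min(L, min_index - 1);
    cal_min(min_index + 1, R);
}

// final answer = ans - ans_min, after calling cal(1, n) and cal_min(1, n)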