I have just started learning algorithms, and I am trying to work out the time complexity of this function:
int findMinPath(vector<vector<int> > &V, int r, int c) {
    int R = V.size();
    int C = V[0].size();
    if (r >= R || c >= C) return 100000000; // Infinity
    if (r == R - 1 && c == C - 1) return 0;
    return V[r][c] + min(findMinPath(V, r + 1, c), findMinPath(V, r, c + 1));
}
I think the answer should be O(RC), but the right answer is O(2^(RC)), and I cannot understand why. Please explain.
When you have multiple recursive calls, a good approximation of the running time is the number of branches (calls) raised to the power of the height of the recursion "tree".
In your case, you have two branches:
findMinPath(V, r+1, c)
findMinPath(V, r, c+1)
So we'll start with a base of 2.
Then the height (or depth) of your "tree" is determined by how many elements are in your vector; in your case, you have R rows, each containing C elements, so the power is RC.
Thus, in the worst case, your runtime approximates to O(2^(RC)).
In the worst case, findMinPath calls itself twice in this line:
return V[r][c] + min(findMinPath(V, r + 1, c), findMinPath(V, r, c + 1));
So, in the worst case, every call will involve two further calls until the recursion ends. Hence, two-to-the-power.
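If it helps to see the blow-up concretely, here is a small, illustrative variant of your function (the call counter is my addition, not part of the original code) that counts how many times it is entered on a tiny grid:

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

long long calls = 0;  // how many times findMinPath is entered

int findMinPath(vector<vector<int> > &V, int r, int c) {
    ++calls;
    int R = V.size();
    int C = V[0].size();
    if (r >= R || c >= C) return 100000000; // Infinity
    if (r == R - 1 && c == C - 1) return 0;
    return V[r][c] + min(findMinPath(V, r + 1, c), findMinPath(V, r, c + 1));
}

int main() {
    vector<vector<int> > V(10, vector<int>(10, 1)); // a 10 x 10 grid of ones
    findMinPath(V, 0, 0);
    cout << calls << " calls\n"; // already hundreds of thousands of calls for 100 cells
}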
Related
I am working on a project for school and I have run into a problem. Our goal is to calculate the last six decimal digits of the sum of i^i for i = 0 to n, using modular arithmetic. I have found a recursive method online to help me better understand how modular exponentiation works.
int exponentMod(int A, int B, int C)
{
    // Base cases
    if (A == 0)
        return 0;
    if (B == 0)
        return 1;

    // Use a 64-bit type so y * y cannot overflow before the % C
    long long y;
    // If B is even
    if (B % 2 == 0) {
        y = exponentMod(A, B / 2, C);
        y = (y * y) % C;
    }
    // If B is odd
    else {
        y = A % C;
        y = (y * exponentMod(A, B - 1, C) % C) % C;
    }
    return (int)((y + C) % C);
}
This method works fine when I am just calculating one exponent; however, I'm not sure how to make this work for the series I have been given. The point of modular exponentiation is that we are working with large exponents, so I cannot just add the terms up normally and then take the result mod 1000000. I also can't add up the numbers output by the above function, because once the numbers get large they are not the full values I would need to get the correct summation. Can someone point me in the right direction for finding the last six digits of the summation using modular arithmetic?
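Not a complete solution, but the key property is that (a + b) mod m = ((a mod m) + (b mod m)) mod m, so you can reduce every term and the running sum as you go. Below is a minimal sketch of that idea, reusing the exponentMod function above; the loop bound and the treatment of the i = 0 term are assumptions, so check what convention your assignment uses for 0^0:

// Last six decimal digits of the sum of i^i, using exponentMod from above.
int lastSixDigits(int n)
{
    const int M = 1000000;   // we only ever need the last six digits
    long long sum = 0;
    for (int i = 1; i <= n; ++i)
        sum = (sum + exponentMod(i % M, i, M)) % M;  // reduce each term and the running sum
    return (int)sum;         // add the i = 0 term separately if your convention requires it
}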
I run a simulation with a lot of particles (up to 100000) in a periodic domain (box), and in order for the particles to stay inside the box, I use the modulo function with float or double numbers.
In Matlab everything works great with the mod function. However, in C++ I found out that fmod is not equivalent to Matlab's mod function:
mod(-0.5,10)=9.5 - I want this result in C++
fmod(-0.5,10)=-0.5 - I don't want this.
I can, of course, solve my problem with if statements. However, I think it will affect efficiency (an if statement in a critical loop). Is there a way to implement this function without an if statement? Maybe some other function?
Thanks.
Just use a conditional. It will not meaningfully affect efficiency.
inline double realmod(double x, double y)
{
    double result = fmod(x, y);
    return result >= 0 ? result : result + y;
}
fmod() uses the FPREM assembly instruction, which takes 16-64 cycles (according to the Pentium manual, http://www.intel.com/design/pentium/manuals/24143004.pdf). The jump for the conditional and the floating-point addition only amount to about 5 cycles.
When your code has floating point division, you don't need to sweat the small stuff.
Either use floor and regular division:
float modulo(float a, float q)
{
float b = a / q;
return (b - floor(b)) * q;
}
or you can add the divisor to the result of fmod without branching:
float modulo(float a, float q)
{
float m = fmod(a, q);
return m + q * (m < 0.f);
}
Based on the Matlab mod(a, m) documentation and @QuestionC's answer, here is a general solution that behaves exactly like Matlab, also for a negative or zero divisor.
Tested against multiple values:
static inline double MatlabMod(double q, double m)
{
    if (m == 0)
        return q;
    double result = fmod(q, m);
    // Add m only when the remainder is non-zero and its sign differs from m's.
    return (result != 0 && ((result < 0) != (m < 0))) ? result + m : result;
}
Tested with Matlab for:
(q, m) -> result
(54, 321) -> 54
(-50, 512) -> 462
(54, -152) -> -98
(-53, -500) -> -53
(-500, 300) -> 100
(-5000, 400) -> 200
(-1000, -360) -> -280
(500, 360) -> 140
(1000, 360) -> 280
(-1000, 360) -> 80
(-5051, 0) -> -5051
(512, 0) -> 512
(0, 52) -> 0
(0, -58) -> 0
Just add the divisor to the number you want to keep in the interval before you apply the modulo operator:
return fmod(a+q,q);
This requires no branching at all.
If you have to worry about a dropping below -q between two updates, you can make it more robust, e.g.:
return fmod(a+q*10,q);
which would work for a >= -10*q.
The most straightforward approach that works for both floats and ints without any branching:
// b = MOD(a, m), assuming m > 0
double t = a - m * floor(a / m) + m;  // shift into a non-negative range
double b = t - m * floor(t / m);      // reduce once more into [0, m)
I recently wrote a Computer Science exam where they asked us to give a recursive definition for the cos Taylor series expansion. This is the series:
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! ...
and the function signature looks as follows
float cos(int n , float x)
where n represents how many terms of the series the user would like to calculate, and x represents the value of x in the cos function.
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help me get started somewhere?
All answers so far recompute the factorial every time. I surely wouldn't do that. Instead you can write :
float cos(int n, float x)
{
    if (n > MAX)
        return 1;
    return 1 - x * x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos(n, x) returns the following nested product:

cos(n, x) = 1 - x²/((2n-1)*2n) * (1 - x²/((2n+1)*(2n+2)) * ( ... * (1 - x²/((2*MAX-1)*2*MAX)) ... ))

You can see that this is true for n > MAX, n = MAX, and so on. The alternating sign and the powers of x are easy to see.
Finally, at n=1 you get 0! = 1, so calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By expanding it (easier to see when it has few terms), you can see the first formula is equivalent to the partial Taylor sum:

cos(n, x) = 1 - x²/((2n-1)*2n) + x⁴/((2n-1)*2n*(2n+1)*(2n+2)) - ...

For n > 0, going to cos(n-1, x) divides the previous result by (2n-3)(2n-2), multiplies it by x², and subtracts it from 1. You can see that when n = MAX+1 this formula is 1, with n = MAX it is 1 - x²/((2*MAX-1)*2*MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so :
float cos(int n, float x) { return cos_helper(1, x, n); }
Edit: To reverse the meaning of n from the degree of the evaluated term (as in this answer so far) to the number of terms (as in the question, and below), but still not recompute the whole factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0,x) = 0 and cos(1,x) = 1, and aim for cos(n,x) to be, in general, the sum of the first n terms of the Taylor series.
Then for each n > 0, we can write cos(n,x) from cos(n-1,x) by adding the next term of the series (note the alternating sign):

cos(n,x) = cos(n-1,x) + (-1)^(n-1) * x^(2n-2) / (2n-2)!

Now for n > 1, we try to make the last term of cos(n-1,x) appear (because it is the closest term to the one we want to add); the new term is that last term multiplied by x² and divided by -(2n-3)(2n-2):

cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) * ( (-1)^(n-2) * x^(2n-4) / (2n-4)! )

Since that last term of cos(n-1,x) is exactly cos(n-1,x) - cos(n-2,x), we get:

cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) * ( cos(n-1,x) - cos(n-2,x) )
We now have a purely recursive definition of cos(n,x), without a helper function, without recomputing the factorial, and with n being the number of terms in the sum of the Taylor decomposition.
However, I must stress that the following code will perform terribly:
performance-wise, unless some optimization avoids re-evaluating the cos(n-1,x) that was already evaluated at the previous step as cos((n-1)-1, x);
precision-wise, because of cancellation effects: the precision with which we get x^(2n-2) / (2n-2)! is very bad.
Now that this disclaimer is in place, here comes the code:
float cos(int n, float x)
{
    if (n < 2)
        return n;   // cos(0,x) = 0, cos(1,x) = 1
    float c = x * x / ((2 * n - 3) * (2 * n - 2));
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! ...
       = 1 - x^2/2 * (1 - x^2/(3*4) + x^4/(3*4*5*6) - x^6/(3*4*5*6*7*8))
       = 1 - x^2/2 * (1 - x^2/(3*4) * (1 - x^2/(5*6) + x^4/(5*6*7*8)))
       = 1 - x^2/2 * (1 - x^2/(3*4) * (1 - x^2/(5*6) * (1 - x^2/(7*8))))
// n is the highest even power to include; e.g. cos_series_recursion(x, 8)
// sums the terms up to x^8/8!.
double cos_series_recursion(double x, int n, double r = 1) {
    if (n > 0) {
        r = 1 - ((x * x * r) / (n * (n - 1)));
        return cos_series_recursion(x, n - 2, r);
    } else {
        return r;
    }
}
A simple approach that makes use of static variables:
double cos(double x, int n) {
    // p and f accumulate x^(2n) and (2n)! as the recursion unwinds.
    // Note: being static, they are not reset between top-level calls.
    static double p = 1, f = 1;
    double r;
    if (n == 0)
        return 1;
    r = cos(x, n - 1);
    p = (p * x) * x;
    f = f * (2 * n - 1) * 2 * n;
    if (n % 2 == 0) {
        return r + p / f;
    } else {
        return r - p / f;
    }
}
Notice that I'm multiplying by (2*n - 1) and then 2*n in that operation to get the next factorial.
Having n align to the factorial we need makes this easy to do in 2 operations: f = f * (n - 1) then f = f * n.
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion, you could use something like the code below:
#include <iostream>
#include <cmath>   // for pow
using namespace std;

int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n - 1);
}

float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n - 1, x) + (n % 2 ? -1 : 1) * pow(x, 2 * n) / fact(2 * n);
}

int main()
{
    cout << Cos(6, 3.14 / 6);
}
Just write it like the sum.
The parameter n in float cos(int n, float x) plays the role of the summation index; now just translate the sum into code...
Some pseudocode:
float cos(int n, float x)
{
    // the sum part: the n-th term of the series
    float sum = pow(-1, n) * pow(x, 2 * n) / factorial(2 * n);
    if (n <= /* some predefined maximum */)
        return sum + cos(n + 1, x);
    return sum;
}
The usual technique when you want to recurse but the function arguments don't carry the information you need is to introduce a helper function that does the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and that you then also need some index that works itself up to n. So that's one good candidate for an extra argument to the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
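For illustration only (this is my sketch, not the exam's expected answer), the auxiliary function could carry the current term index and the current term's value, so nothing needs to be recomputed:

#include <iostream>

// k is the index of the current term; term holds (-1)^k * x^(2k) / (2k)!.
static float cos_aux(int k, int n, float x, float term)
{
    if (k >= n) return 0.0f;  // we have summed n terms: stop
    // next term = current term * (-x^2) / ((2k+1)(2k+2))
    float next = term * (-x * x) / ((2 * k + 1) * (2 * k + 2));
    return term + cos_aux(k + 1, n, x, next);
}

float cos(int n, float x)
{
    return cos_aux(0, n, x, 1.0f);   // the first term of the series is 1
}

int main()
{
    std::cout << cos(6, 3.14f / 6) << '\n';  // about 0.866, i.e. cos(30 degrees)
}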
I need to find n! % 1000000009.
n is of the form 2^k for k in the range 1 to 20.
The function I'm using is:
#define llu unsigned long long
#define MOD 1000000009

llu mulmod(llu a, llu b) // This function calculates (a*b)%MOD, caring about overflows
{
    llu x = 0, y = a % MOD;
    while (b > 0)
    {
        if (b % 2 == 1)
        {
            x = (x + y) % MOD;
        }
        y = (y * 2) % MOD;
        b /= 2;
    }
    return (x % MOD);
}

llu fun(int n) // This function returns the answer to my query, i.e. n!%MOD
{
    llu ans = 1;
    for (int j = 1; j <= n; j++)
    {
        ans = mulmod(ans, j);
    }
    return ans;
}
I need to call the function 'fun' about n/2 times. My code runs too slowly for values of k around 15. Is there a way to go faster?
EDIT:
Actually, I'm calculating 2*[(i-1)C(2^(k-1)-1)]*[((2^(k-1))!)^2] for all i in the range 2^(k-1) to 2^k, so my program needs (nCr) % MOD, taking care of overflows.
EDIT: I need an efficient way to find nCr%MOD for large n.
The mulmod routine can be sped up by a large factor.
1) The '%' is overkill, since a and b are both already less than MOD; it's enough to evaluate c = a + b; if (c >= MOD) c -= MOD;
2) Multiple bits can be processed at once; see the optimization of the "Russian peasant's algorithm".
3) a * b is actually small enough to fit in a 64-bit unsigned long long without overflow (see the sketch after this list).
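A minimal sketch of point 3) (my code, not from the original post): after reduction, a % MOD and b % MOD are both below 2^30, so their 64-bit product cannot overflow and the whole addition loop collapses to one multiplication:

#define llu unsigned long long
#define MOD 1000000009

// Both operands are below 2^30 after reduction, so the product fits in 64 bits.
llu mulmod(llu a, llu b)
{
    return (a % MOD) * (b % MOD) % MOD;
}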
Since the actual problem is about nCr mod M, the high level optimization requires using the recurrence
(n+1)Cr mod M = nCr * (n+1) / (n+1-r) mod M.
Because the numerator ((nCr) mod M)*(n+1) is generally not divisible by (n+1-r), the division needs to be implemented as multiplication with the modular inverse (n+1-r)^(-1). By Fermat's little theorem, the modular inverse is b^(-1) = b^(M-2) mod M when M is prime. (Otherwise it's b^(phi(M)-1), where phi is Euler's totient function.)
The modular exponentiation is most commonly implemented with repeated squaring, which requires in this case ~45 modular multiplications per divisor.
If you can use the recurrence
nC(r+1) mod M = nCr * (n-r) / (r+1) mod M
it's only necessary to calculate (r+1)^(M-2) mod M once.
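For concreteness, here is a minimal sketch (my code; the names are illustrative) of division mod the prime M via Fermat's little theorem, i.e. multiplying by b^(M-2):

#include <cstdint>

const uint64_t M = 1000000009ULL;  // the prime modulus from the question

// Repeated squaring: b^e mod m.
uint64_t power_mod(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t result = 1;
    b %= m;
    while (e > 0) {
        if (e & 1) result = result * b % m;  // safe: m < 2^32, so products fit in 64 bits
        b = b * b % m;
        e >>= 1;
    }
    return result;
}

// a / b (mod M), assuming M is prime and b is not a multiple of M.
uint64_t div_mod(uint64_t a, uint64_t b)
{
    return a % M * power_mod(b, M - 2, M) % M;
}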
Since you are looking for nCr for multiple sequential values of n you can make use of the following:
(n+1)Cr = (n+1)! / ((r!)*(n+1-r)!)
(n+1)Cr = n!*(n+1) / ((r!)*(n-r)!*(n+1-r))
(n+1)Cr = n! / ((r!)*(n-r)!) * (n+1)/(n+1-r)
(n+1)Cr = nCr * (n+1)/(n+1-r)
This saves you from explicitly calling the factorial function for each i.
Furthermore, to save that first call to nCr you can use:
nC(n-1) = n //where n in your case is 2^(k-1).
EDIT:
As Aki Suihkonen pointed out, (a/b) % m != (a%m) / (b%m), so the method above won't work right out of the box. There are two different solutions to this:
1000000009 is prime; this means that (a/b) % m == (a*c) % m, where c is the inverse of b modulo m. You can find an explanation of how to calculate it here, and follow the link to the Extended Euclidean Algorithm for more on how to calculate it.
The other option which might be easier is to recognize that since nCr * (n+1)/(n+1-r) must give an integer, it must be possible to write n+1-r == a*b where a | nCr and b | n+1 (the | here means divides, you can rewrite that as nCr % a == 0 if you like). Without loss of generality, let a = gcd(n+1-r,nCr) and then let b = (n+1-r) / a. This gives (n+1)Cr == (nCr / a) * ((n+1) / b) % MOD. Now your divisions are guaranteed to be exact, so you just calculate them and then proceed with the multiplication as before. EDIT As per the comments, I don't believe this method will work.
Another thing I might try is changing your llu mulmod(llu a, llu b):
llu mulmod(llu a, llu b)
{
    llu q = a * b;
    if (q < a || q < b) // Overflow!
    {
        llu x = 0, y = a % MOD;
        while (b > 0)
        {
            if (b % 2 == 1)
            {
                x = (x + y) % MOD;
            }
            y = (y * 2) % MOD;
            b /= 2;
        }
        return (x % MOD);
    }
    else
    {
        return q % MOD;
    }
}
That could also save some precious time.
I am trying to solve SPOJ problem GSS1 (Can you answer these queries I) using segment tree. I am using 'init' method to initialise a tree and 'query' method to get maximum in a range [i,j].
Limits: |A[i]| <= 15707 and 1 <= N (number of elements) <= 50000.
int A[50500], tree[100500];

void init(int n, int b, int e) // n=1, b=lower, e=end
{
    if (b == e)
    {
        tree[n] = A[b];
        return;
    }
    init(2 * n, b, (b + e) / 2);
    init(2 * n + 1, ((b + e) / 2) + 1, e);
    tree[n] = (tree[2 * n] > tree[2 * n + 1]) ? tree[2 * n] : tree[2 * n + 1];
}

int query(int n, int b, int e, int i, int j) // n=1, b=lower, e=end, [i,j]=range
{
    if (i > e || j < b)
        return -20000;
    if (b >= i && e <= j)
        return tree[n];
    int p1 = query(2 * n, b, (b + e) / 2, i, j);
    int p2 = query(2 * n + 1, ((b + e) / 2) + 1, e, i, j);
    return (p1 > p2) ? p1 : p2;
}
The program is giving Wrong Answer. I debugged the code for most of the cases (negative numbers, odd/even N), but I am unable to figure out what is wrong with the algorithm.
If anyone can point me in the right direction, I would be grateful.
Thanks
I fear the (accepted) answer has missed a very important point here. The problem is with the algorithm in the code itself. The code says the answer for a node is the max of its children's values, but it is very much possible that the maximum subarray lies partially in BOTH children. For example:
-1 -2 3 4 5 6 -5 -10 (n=8)
The code will output 11 while the answer is 18.
You will need to look into this case also to beat WA.
(I am answering this because the accepted answer is not entirely right and doesn't answer this question correctly.)
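For reference, a common way to handle the "spans both children" case (this is the standard technique for this problem, not code from either answer) is to store four values per node and merge them; a minimal sketch:

#include <algorithm>

// Per-node data for a maximum-subarray-sum segment tree.
struct Node {
    long long total;   // sum of the whole segment
    long long prefix;  // best sum of a prefix of the segment
    long long suffix;  // best sum of a suffix of the segment
    long long best;    // best sum of any non-empty subarray in the segment
};

Node make_leaf(int v) { return {v, v, v, v}; }

Node merge(const Node &l, const Node &r)
{
    Node p;
    p.total  = l.total + r.total;
    p.prefix = std::max(l.prefix, l.total + r.prefix);
    p.suffix = std::max(r.suffix, r.total + l.suffix);
    // the best subarray is either inside one child or spans the boundary
    p.best   = std::max(std::max(l.best, r.best), l.suffix + r.prefix);
    return p;
}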
Edit: it seems your implementation is also correct; mine is simply different. And we both misread the problem statement.
I guess you call your query function with params
query( 1, 0, n-1, x-1, y-1 );
I believe it's wrong to handle the segment tree this way when your n is not a power of 2.
I suggest you (a sketch of these steps follows below):
enlarge the tree array to 131072 elements (2^17) and A to 65536 (2^16);
find the smallest k that is not smaller than n and is a power of 2;
initialize the elements from n (0-based) to k-1 with -20000;
make n equal to k;
make sure you call init(1, 0, n-1);
Hope that'll help you to beat WA.
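A minimal sketch of those steps (my code; it reuses the A, tree, and init from the question, with the enlarged sizes):

int A[65536], tree[131072];        // enlarged as suggested above

void init(int n, int b, int e);    // the same init as in the question

void build(int n)                  // n = the real number of elements, stored in A[0..n-1]
{
    int k = 1;
    while (k < n) k *= 2;          // smallest power of two not smaller than n
    for (int i = n; i < k; ++i)
        A[i] = -20000;             // padding that can never win a maximum query
    init(1, 0, k - 1);             // build the tree over the padded array
}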