Complexity of a recursive algorithm - C++

I have a recursive algorithm like :
int alg(int n) {
    if (n == 0)
        return 2;
    for (int i = 0; i <= n-1; ++i)
        alg(i);
}
Obviously the n == 0 case is Θ(1). However, I am having trouble understanding how it really works. I mean, it must be something like:
T(n) = T(0) + T(1) + ... + T(n-1).
And then we have to express T(1), T(2), ..., T(n-1) in terms of T(0). I can understand the way it goes for n = 0 and n = 1, but for bigger n it goes wrong.

You can observe that:
T(n) = T(0) + T(1) + ... + T(n-2) + T(n-1)
T(n - 1) = T(0) + T(1) + ... + T(n-2)
Therefore
T(n) = 2 * T(n-1)
At this point, we can conclude that the complexity is O(2^n). Actually, it is Θ(2^n).

Big O Notation Calculation

I'm stuck determining the big-O notation for the code fragment below; the given expression is part of what I'm trying to figure out. I know that two plain, default for loops result in O(n^2), but this one is entirely different. Here are the instructions.
The algorithm of
for (j = 0; j < n; j++)
{
    for (k = j; k < n; k++)
    {
    }
}
will result in a number of iterations given by the expression:
n + (n-1) + (n-2) + (n-3) + ... + (n - n)
Reduce the above series expression to an algebraic expression, without summation.
After determining the algebraic expression express the performance in Big O Notation.
You can use this method (supposedly applied by Gauss when he was a wee lad).
If you sum all the numbers twice, you have
  1   +   2   +   3   + ... +   n
+ n   + (n-1) + (n-2) + ... +   1
---------------------------------
(n+1) + (n+1) + (n+1) + ... + (n+1) = n(n+1)
Thus,
1 + 2 + 3 + ... + n = n(n+1)/2
and n(n+1)/2 is (n^2)/2 + n/2, so it is in O(n^2).

Time complexity of recursive algorithm with two recursive calls

I am trying to analyze the time complexity of a recursive algorithm that solves the "Generate all sequences of bits within Hamming distance t" problem. The algorithm is this:
// str is the bitstring, i the current length, and changesLeft the
// desired Hamming distance (see linked question for more)
void magic(char* str, int i, int changesLeft) {
    if (changesLeft == 0) {
        // assume that this is constant
        printf("%s\n", str);
        return;
    }
    if (i < 0) return;
    // flip current bit
    str[i] = str[i] == '0' ? '1' : '0';
    magic(str, i-1, changesLeft-1);
    // or don't flip it (flip it again to undo)
    str[i] = str[i] == '0' ? '1' : '0';
    magic(str, i-1, changesLeft);
}
What is the time complexity of this algorithm?
I found myself pretty rusty when it comes to this, and here is my attempt, which I feel is nowhere near the truth:
t(0) = 1
t(n) = 2t(n - 1) + c
t(n) = t(n - 1) + c
= t(n - 2) + c + c
= ...
= (n - 1) * c + 1
~= O(n)
where n is the length of the bit string.
Related questions: 1, 2.
It's exponential:
t(0) = 1
t(n) = 2 t(n - 1) + c
t(n) = 2 (2 t(n - 2) + c) + c = 4 t (n - 2) + 3 c
= 2 (2 (2 t(n - 3) + c) + c) + c = 8 t (n - 3) + 7 c
= ...
= 2^i t(n-i) + (2^i - 1) c [at any step i]
= ...
= 2^n t(0) + (2^n - 1) c = 2^n + (2^n - 1) c
~= O(2^n)
Or, using WolframAlpha: https://www.wolframalpha.com/input/?i=t(0)%3D1,+t(n)%3D2+t(n-1)+%2B+c
The reason it's exponential is that your recursive calls are reducing the problem size by 1, but you're making two recursive calls. Your recursive calls are forming a binary tree.

Big-O complexity of this algorithm

CODE:
void fun(int n) {
    if (n > 2) {
        for (int i = 0; i < n; i++) {
            int j = 0;  // note: j needs a declaration for this to compile
            while (j < n) {
                cout << j;
                j++;
            }
        }
        fun(n/2);
    }
}
Here's what I think:
The recursive part runs log(n) times?
And during each recursive call, the nested loops run n^2 times, with n halving in each recursive call.
So is it n^2 + (n^2)/4 + (n^2)/16 + ... + 1?
You are right, so the big-O is n^2, since the sum of the series n^2 + (n^2)/4 + (n^2)/16 + ... + 1 never exceeds 2n^2.
The number of writes to cout is given by the following recurrence:
T(N) = N² + T(N/2).
By educated guess, T(N) can be a quadratic polynomial. Hence
T(N) = aN²+bN+c = N² + T(N/2) = N² + aN²/4+bN/2+c.
By identification, we have
3a/4 = 1
b/2 = 0
c = c.
and
T(N) = 4N²/3 + c.
With T(2)= 0,
T(N) = 4(N²-4)/3
which is obviously O(N²).
This is simple mathematics. The complexity is n^2 + (n^2)/4 + (n^2)/16 + ... + 1, which is n² * (1 + 1/4 + 1/16 + ...). And the maths says that the infinite series converges to 4/3 (the formula is 1 / (1 - 1/4)).
It actually gives O(n²).

Calculating the summation of powers of a number modulo a number

There are 3 numbers: T, N, M. 1 ≤ T, M ≤ 10^9, 1 ≤ N ≤ 10^18 .
What is asked in the problem is to compute [Σ(T^i)] mod M, where i varies from 0 to N. Obviously, O(N) or O(M) solutions wouldn't work because of the 1-second time limit. How should I proceed?
As pointed out in previous answers, you may use the formula for the sum of a geometric progression. However, there is a small problem: if m is not prime, computing (T^n - 1) / (T - 1) cannot be done directly, because division is not a well-defined operation modulo m. In fact there is a solution that handles even non-prime moduli, with complexity O(log(n) * log(n)). The approach is similar to binary exponentiation. Here is my code in C++ (note that my solution uses binary exponentiation internally):
typedef long long ll;

ll binary_exponent(ll x, ll y, ll mod) {
    ll res = 1;
    ll p = x;
    while (y) {
        if (y % 2) {
            res = (res * p) % mod;
        }
        p = (p * p) % mod;
        y /= 2;
    }
    return res;
}

// n can be as large as 10^18, so it has to be a 64-bit type,
// and the bit masks below have to be 64-bit (1LL) as well.
ll gp_sum(ll a, ll n, ll mod) {
    ll A = 1;
    int num = 0;
    ll res = 0;
    ll degree = 1;
    while (n) {
        if (n & (1LL << num)) {
            n &= ~(1LL << num);
            res = (res + (A * binary_exponent(a, n, mod)) % mod) % mod;
        }
        A = (A + (A * binary_exponent(a, degree, mod)) % mod) % mod;
        degree *= 2;
        num++;
    }
    return res;
}
In this solution A stores consecutively the values 1, 1 + a, 1 + a + a^2 + a^3, ..., 1 + a + a^2 + ... + a^(2^k - 1): at step k it holds the sum of the first 2^k powers of a.
Also, just like in binary exponentiation, to compute the sum of the first n powers of a, I split n into a sum of powers of two (essentially using the binary representation of n). Having the above sequence of values for A, I choose the appropriate lengths (the ones that correspond to the 1 bits of the binary representation of n) and multiply each partial sum by the appropriate power of a, accumulating the result in res. Computing the values of A takes O(log(n)) time, and for each value I may have to compute a power of a, which costs another O(log(n)); thus overall we have O(log(n) * log(n)).
Let's take an example: we want to compute 1 + a + a^2 + ... + a^10. In this case, we call gp_sum(a, 11, mod).
On the first iteration n & (1 << 0) is not zero, as the lowest bit of 11 (1011 in binary) is 1. So I turn off this bit, setting n to 10, and accumulate into res: 0 + 1 * a^10 = a^10. A is now 1 + a.
The second-lowest bit is also set in 10 (1010 in binary), so n becomes 8 and res becomes a^10 + (1 + a) * a^8 = a^10 + a^9 + a^8. A is now 1 + a + a^2 + a^3.
The next bit is 0, so res stays the same, but A becomes 1 + a + a^2 + ... + a^7.
On the last iteration the bit is 1, so we have:
res = a^10 + a^9 + a^8 + a^0 * (1 + a + a^2 + ... + a^7) = 1 + a + ... + a^10.
One can use an algorithm which is similar to binary exponentiation:
// Returns a pair <t^n mod m, sum of t^0..t^n mod m>.
// I assume that the integer type is big enough to hold all values without overflowing.
pair<int, int> calc(int t, int n, int m)
    if n == 0 // Base case. t^0 is always 1.
        return (1 % m, 1 % m)
    if n % 2 == 1
        // We just compute the result for n - 1 and then add t^n.
        (prevPow, prevSum) = calc(t, n - 1, m)
        curPow = prevPow * t % m
        curSum = (prevSum + curPow) % m
        return (curPow, curSum)
    // If n is even, we compute the sum for the first half.
    (halfPow, halfSum) = calc(t, n / 2, m)
    curPow = halfPow * halfPow % m // t^n = (t^(n/2))^2
    // halfSum * halfPow covers t^(n/2)..t^n and halfSum covers t^0..t^(n/2),
    // so the doubly counted t^(n/2) term must be subtracted once.
    curSum = (halfSum * halfPow + halfSum - halfPow + m) % m
    return (curPow, curSum)
The time complexity is O(log n) (the analysis is the same as for the binary exponentiation algorithm). Why is it better than a closed-form formula for the geometric progression? The latter involves division by (t - 1), but it is not guaranteed that t - 1 has a multiplicative inverse mod m.
You can use this:
a^1 + a^2 + ... + a^n = a(1 - a^n) / (1 - a)
so you just need to compute:
a * (1 - a^n) / (1 - a) mod M
and there is an O(log N) way to compute a^n mod M.
It's a geometric series, whose sum is equal to (T^(N+1) - 1) / (T - 1).

Running time of nested for loop

Running time for the following algorithm:
int b = 0;
for (i = 0; i < n; i++)
    for (j = 0; j < i * n; j++)
        b = b + 5;
I know that the first loop is O(n) but that's about as far as I've gotten. I think that the second loop may be O(n^2) but the more I think about it the less sense it makes. Any guidance would be much appreciated.
We want to express the running time of this code as a function of n. Call this T(n).
We can say that T(n) = U(0,n) + U(1,n) + ... + U(n-1,n), where U(i,n) is the running time of the inner loop as a function of i and n.
The inner loop will run i * n times. So U(i,n) is just i * n.
So we get that T(n) = 0*n + 1*n + 2*n + ... + (n-1)*n = n * (1 + 2 + ... + (n-1)).
The closed form for (1 + 2 + ... + (n-1)) is just (n^2 - n)/2: http://www.wolframalpha.com/input/?i=1+%2B+2+%2B+...+%2B+(n-1)
So we get that T(n) = n * (1 + 2 + ... + (n-1)) = n * ((n^2 - n)/2) = (n^3 - n^2) / 2,
which is O(n^3).
The easiest way is to use an example.
Assume n = 10.
The 1st for loop runs 10 times: O(n).
The 2nd loop runs 0 times for i = 0,
10 times for i = 1,
20 times for i = 2,
30 times for i = 3,
... and 90 times for i = 9: up to O(n^2) per outer iteration, so O(n^3) in total.
Hope it helps you.
The outer loop runs for n iterations.
When i is 0, the inner loop executes 0*n = 0 times
When i is 1, the inner loop executes 1*n = n times
When i is 2, the inner loop executes 2*n = 2n times
When i is 3, the inner loop executes 3*n = 3n times
...
When i is n-1, the inner loop executes (n-1)*n times
So the inner loop executes a total of:
0 + n + 2n + 3n + ... + (n-1)*n
iterations, which sums to n * (n(n-1)/2), giving approximately O(n^3) complexity.
Statements                    | Iterations
for (i = 0; i < n; i++)       | n+1
for (j = 0; j < i * n; j++)   | 0 + n + 2n + ... + (n-1)*n = n*n*(n-1)/2
b = b + 5;                    | n*n*(n-1)/2
So overall: O(n^3)