CODE:
#include <iostream>

void fun(int n) {
    if (n > 2) {
        for (int i = 0; i < n; i++) {
            int j = 0;              // j was undeclared in the original
            while (j < n) {
                std::cout << j;     // n writes per outer iteration
                j++;
            }
        }
        fun(n / 2);                 // recurse on half the input
    }
}
Here's what I think:
The recursion runs about log(n) times, and during each recursive call the nested loops run n^2 times, with n halving on each call.
So is it n^2 + (n^2)/4 + (n^2)/16 + ... + 1?
You are right, so the big-O is n^2, since the sum of the geometric series n^2 + (n^2)/4 + (n^2)/16 + ... + 1 never exceeds 2n^2.
The number of writes to cout is given by the following recurrence:
T(N) = N² + T(N/2).
By educated guess, T(N) can be a quadratic polynomial. Hence
T(N) = aN² + bN + c = N² + T(N/2) = N² + aN²/4 + bN/2 + c.
By identification of coefficients, we have
3a/4 = 1
b/2 = 0
c = c,
so a = 4/3, b = 0, and c is free:
T(N) = 4N²/3 + c.
With T(2) = 0 we get c = -16/3, so
T(N) = 4(N² - 4)/3,
which is obviously O(N²).
This is simple mathematics. The cost is n^2 + (n^2)/4 + (n^2)/16 + ... + 1, i.e. n² * (1 + 1/4 + 1/16 + ...). The math says that the infinite series converges to 4/3 (the geometric-series formula: 1 / (1 - 1/4)), so the total is at most (4/3)n².
It actually gives O(n²).
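As a quick sanity check (my own sketch, not from the answers above), here is an instrumented copy of fun() that counts the writes and compares them to the closed form 4(N² - 4)/3; the formula is exact when N is a power of two, so the recursion bottoms out exactly at 2:

#include <iostream>

long long writes = 0;

// Instrumented copy of fun(): count the writes instead of printing.
void countWrites(int n) {
    if (n > 2) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                writes++;           // stands in for cout << j
        countWrites(n / 2);
    }
}

int main() {
    for (int n : {4, 8, 16, 1024}) {
        writes = 0;
        countWrites(n);
        long long closedForm = 4LL * ((long long)n * n - 4) / 3;
        std::cout << n << ": " << writes << " == " << closedForm << '\n';
    }
}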
If inserting an element into a map takes log(N) time, where N is the size of the map, then inserting the elements of an array one by one costs
log(1) + log(2) + ... + log(N) = log(N!).
But the usual best complexity for getting elements sorted is N log(N). Where am I going wrong?
Nowhere, O(log n!) == O(n log n). The proof is a bit of math.
First, we have log(n!) = log(1) + log(2) + ... + log(n) <= log(n) + log(n) + ... + log(n) = n log(n). On the other hand, we also get the following
2 log(n!) = 2*(log(1) + log(2) + ... + log(n))
= (log(1) + log(n)) + (log(2) + log(n-1)) + ... + (log(i) + log(n-i+1)) + ... + (log(n) + log(1))
= log(1*n) + log(2*(n-1)) + ... + log(i*(n-i+1)) + ... log(n*1)
>= log(n) + ... + log(n) = n log(n)
We get the inequality because i*(n-i+1) = i*n - i*i + i >= n for all 1 <= i <= n (rearranging, it reduces to n(i-1) >= i(i-1), which holds since n >= i; it seems a bit mysterious, but it basically says that products grow faster than sums).
So we have log(n!) <= n log(n) <= 2 log(n!). By the definition of the O-notation this means that O(log(n!)) = O(n log(n)).
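For intuition, here is a small numeric sketch (my own check, not part of the proof) that evaluates the sandwich log(n!) <= n log(n) <= 2 log(n!):

#include <cmath>
#include <iostream>

int main() {
    for (long long n : {10LL, 1000LL, 100000LL}) {
        double logFactorial = 0.0;      // log(n!) = log(1) + ... + log(n)
        for (long long k = 1; k <= n; ++k)
            logFactorial += std::log((double)k);
        double nLogN = n * std::log((double)n);
        std::cout << "n=" << n
                  << "  log(n!)=" << logFactorial
                  << "  n log n=" << nLogN
                  << "  2 log(n!)=" << 2 * logFactorial << '\n';
    }
}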
I'm stuck determining the big-O notation for the code fragment below; the given expression is part of what I'm trying to figure out. I know that two plain, default for loops result in O(n^2), but this one is different. Here are the instructions.
The algorithm of
for (j = 0; j < n; j++)
{
    for (k = j; k < n; k++)
    {
    }
}
will result in a number of iterations given by the expression:
= n + (n-1) + (n-2) + (n-3) + ........ + (n - n)
Reduce the above series expression to an algebraic expression, without summation.
After determining the algebraic expression express the performance in Big O Notation.
You can use this method (supposedly applied by Gauss when he was a wee lad).
If you sum all the numbers twice, you have
  1   +   2   +   3   + ... +   n
+ n   + (n-1) + (n-2) + ... +   1
—————————————————————————————————
(n+1) + (n+1) + (n+1) + ... + (n+1) = n(n+1)
Thus,
1 + 2 + 3 + ... + n = n(n+1)/2
and n(n+1)/2 is (n^2)/2 + n/2, so it is in O(n^2).
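To tie the closed form back to the code, here is a quick counting sketch (my own, under the assumption that each inner-loop pass is one unit of work):

#include <iostream>

int main() {
    for (int n : {5, 100, 1000}) {
        long long iterations = 0;
        for (int j = 0; j < n; j++)
            for (int k = j; k < n; k++)
                iterations++;       // one unit of work per inner iteration
        std::cout << "n=" << n << "  count=" << iterations
                  << "  n(n+1)/2=" << (long long)n * (n + 1) / 2 << '\n';
    }
}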
Can anyone help me solve the recurrence relation of a divide-and-conquer algorithm with the following equation? I am pretty sure you can't use the master theorem here because it is not in the form T(n/b), but I may be forgetting a simple math rule. Please help.
T(n) = T(√n) + log n.
Notice that for some k > 0 (the depth at which repeated square roots reach the base case, roughly log log n) we have
T(n) = log n + log n^{1/2} + log n^{1/4} + ... + log n^{1/2^k}
     = log n + (1/2)*log n + (1/4)*log n + ... + (1/2^k)*log n
     = (1 + 1/2 + 1/4 + ... + 1/2^k) log n
     = (1 + 2^{-1} + 2^{-2} + ... + 2^{-k}) log n
     <= 2 log n,
from which it follows that T(n) = O(log n). The bound <= 2 log n holds because 1 + 1/2 + 1/4 + 1/8 + 1/16 + ... = 2 in the limit.
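A small numeric sketch of the recurrence (assuming a base case of T(n) = 0 for n <= 2, which the question does not specify):

#include <cmath>
#include <iostream>

// T(n) = T(sqrt(n)) + log2(n), with an assumed base case T(n) = 0 for n <= 2.
double T(double n) {
    if (n <= 2.0) return 0.0;
    return T(std::sqrt(n)) + std::log2(n);
}

int main() {
    for (double n : {16.0, 256.0, 65536.0, 1e18}) {
        std::cout << "n=" << n << "  T(n)=" << T(n)
                  << "  2*log2(n)=" << 2.0 * std::log2(n) << '\n';
    }
}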
Running time for the following algorithm:
int b = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < i * n; j++)
        b = b + 5;
I know that the first loop is O(n) but that's about as far as I've gotten. I think that the second loop may be O(n^2) but the more I think about it the less sense it makes. Any guidance would be much appreciated.
We want to express the running time of this code as a function of n. Call this T(n).
We can say that T(n) = U(0,n) + U(1,n) + ... + U(n-1,n), where U(i,n) is the running time of the inner loop as a function of i and n.
The inner loop will run i * n times. So U(i,n) is just i * n.
So we get that T(n) = 0*n + 1*n + 2*n + ... + (n-1)*n = n * (1 + 2 + ... + (n-1)).
The closed form for (1 + 2 + ... + (n-1)) is just (n^2 - n)/2 (see http://www.wolframalpha.com/input/?i=1+%2B+2+%2B+...+%2B+(n-1)).
So we get that T(n) = n * (1 + 2 + ... + (n-1)) = n * ((n^2 - n)/2) = (n^3 - n^2) / 2,
which is O(n^3).
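A quick empirical check of that closed form (my own sketch, not part of the answer):

#include <iostream>

int main() {
    for (int n : {10, 100, 500}) {
        long long count = 0;
        for (int i = 0; i < n; i++)
            for (long long j = 0; j < (long long)i * n; j++)
                count++;            // one unit of work per inner iteration
        long long closedForm = ((long long)n * n * n - (long long)n * n) / 2;
        std::cout << "n=" << n << "  count=" << count
                  << "  (n^3 - n^2)/2=" << closedForm << '\n';
    }
}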
The easiest way is to use an example.
Assume n = 10.
The 1st for loop runs 10 times: O(n).
The 2nd loop runs 0 times for i = 0,
10 times for i = 1,
20 times for i = 2,
30 times for i = 3,
... and 90 times for i = 9, which is O(n^2) at worst.
Hope it helps you.
The outer loop runs for n iterations.
When i is 0, the inner loop executes 0*n = 0 times
When i is 1, the inner loop executes 1*n = n times
When i is 2, the inner loop executes 2*n = 2n times
When i is 3, the inner loop executes 3*n = 3n times
...
When i is n-1, the inner loop executes (n-1)*n times
So the inner loop executes a total of:
0 + n + 2n + 3n + ... + (n-1)*n = n * (0 + 1 + ... + (n-1)) ≈ n * n²/2
This sum already accounts for every outer iteration, so it directly gives approximately O(n^3) complexity.
Statement                       | Iterations
for (i = 0; i < n; i++)         | n+1
for (j = 0; j < i * n; j++)     | 0 + n + 2n + ... + (n-1)*n ≈ n*n(n-1)/2
b = b + 5;                      | ≈ n*n(n-1)/2
So overall: O(n^3)
std::sort performs approximately N*log2(N) comparisons of elements, where N is the distance (source: http://www.cplusplus.com/), so its complexity is N*log2(N).
Please help me calculate the complexity of the following code:
#include <algorithm>
#include <vector>

void func(std::vector<float> & Storage)
{
    for (int i = 0; i < (int)Storage.size() - 1; ++i)
    {
        std::sort(Storage.begin() + i, Storage.end());
        Storage[i+1] += Storage[i];
    }
}
Is the complexity N^2*log2(N), or 2*log2(2) + 3*log2(3) + ... + N*log2(N)?
Thank you.
The proper way to compute the complexity is to evaluate the complexity of repeated O(K Log K) problems of linearly increasing sizes K = 1 ... N. This can be done either by computing the sum, or by just computing the integral
Integrate[K Log[K], {K, 0, N}]
with e.g. Mathematica, and you get
1/4 N^2 (-1 + 2 Log[N])
which is of O(N^2 Log N).
Even though it holds for polynomial and logarithmic functions, in general it is not true that the total cost of K = 1 ... N subproblems of complexity f(K) equals N f(N). E.g. the sum of K = 1 ... N subproblems of complexity Exp[K] is simply O(Exp[N]), not N Exp[N].
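As a numeric sanity check (my own sketch, not part of the answer), we can compare the exact sum of K ln K for K = 1..N against the integral (N²/4)(2 ln N - 1); the ratio tends to 1:

#include <cmath>
#include <iostream>

int main() {
    for (long long N : {100LL, 10000LL, 1000000LL}) {
        double sum = 0.0;               // exact sum of K * ln(K)
        for (long long K = 1; K <= N; ++K)
            sum += K * std::log((double)K);
        double integral = 0.25 * (double)N * (double)N
                        * (2.0 * std::log((double)N) - 1.0);
        std::cout << "N=" << N << "  sum=" << sum
                  << "  integral=" << integral
                  << "  ratio=" << sum / integral << '\n';
    }
}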
I would agree with N^2*log2(N), as the sort algorithm is run N times. In Big-O terms, where c is a constant:
c * N * N*log2(N) => O(N^2 * log2(N))
It will be asymptotically O((N^2)*(log2(N))).
We need the sum of k*log2(k) for k from 1 to N.
You are summing up logarithmic functions:
complexity <- 0
for i = 1..N
    complexity += i Log(i)
Resulting in the summation:
Log(1) + 2 Log(2) + ... + N Log(N)
from http://en.wikipedia.org/wiki/Logarithm:
the logarithm of a product is the sum of the logarithms of the factors:
thus:
the summation becomes:
Log(1) + Log(2^2) + Log(3^3) + ... + Log(N^N)
further simplifying:
Log(1 * 2^2 * 3^3 * ... * N^N)
Since each factor i^i is at most N^i, the product is at most N^(1+2+...+N) = N^(N(N+1)/2), so the sum is bounded by (N(N+1)/2) * Log(N), which is O(N^2 * Log(N)).