Is this O(N^2) or O(N log N)? Isn't it N^2 when there are nested loops?
extern int a[], N;   // declared here, defined elsewhere
int f1() {
    int i, j, sum = 0;
    for (i = 1; ; i = 2 * i)
    {
        if (i >= N) return sum;
        for (j = 1; j < 2 * i; j++) sum += a[i];
    }
}
This is O(N log N), as the outer loop doubles the value of i in every iteration. So the complexity of the outer loop is O(log N) instead of O(N).
If you had i++ or similar instead of i = 2*i, then the time complexity of the two loops together would have been O(N^2).
Edit: this is a simplified analysis. Please see the answer from R Sahu for a more rigorous analysis.
Is this O(N^2) or O(N log N)?
It is neither.
Isn't it N^2 when there are nested loops?
That is true when you iterate over the items linearly. That is not true in your case.
In your case ...
The values of i are: 1 2 4 8 16 ... N
The inner loop is executed 2 + 4 + 8 + 16 + 32 ... N times.
That is a geometric series. The sum of a geometric series is a(1 - r^n)/(1 - r).
In your case, a is 2, r is 2, and n is log2(N) (log with base 2). Hence the sum, after some simplification, is about 2*2^(log2(N)), which is the same as 2*N.
I.e. your algorithm's complexity is O(N).
Thanks are due to @LedHead for correcting the error in the initial post.
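To double-check that result empirically, here is a small counting harness (my own sketch, not part of either answer; the helper name innerIterations is made up) that tallies how many times the inner loop body of f1 runs:

#include <iostream>

// Tally how many times the inner loop body of f1() executes for a given N.
long long innerIterations(int N) {
    long long count = 0;
    for (long long i = 1; ; i *= 2) {
        if (i >= N) return count;
        for (long long j = 1; j < 2 * i; j++) count++;
    }
}

int main() {
    for (int N : {1000, 2000, 4000, 8000}) {
        std::cout << "N = " << N
                  << ", inner iterations = " << innerIterations(N)
                  << " (2N = " << 2LL * N << ")\n";
    }
}

For every N the count stays just under 2N, matching the geometric-series argument.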
Related
I wrote the following code:
class Solution {
public:
    int countPrimes(int n) {
        if (n <= 2)              // n == 2 would also read res[2] out of bounds below
            return 0;
        int counter = n - 2;     // candidates in [2, n-1]
        vector<bool> res(n, true);
        for (int i = 2; i <= sqrt(n) + 1; ++i)
        {
            if (res[i] == false)
                continue;
            for (int j = i * i; j < n; j += i)
            {
                if (res[j] == true)
                {
                    --counter;
                    res[j] = false;
                }
            }
        }
        return counter;
    }
};
but I couldn't find its complexity. The inner loop, according to my calculations, runs n/2 + n/3 + ... + n/sqrt(n) times.
OK, let's try to get the sum from your formula first (I am going to keep your naming of the variables):

sum = n/2 + n/3 + ... + n/sqrt(n) = sum over k = 2..sqrt(n) of n/k

Now, please note that n is a constant in the sum, so it can be moved outside the summation:

sum = n * (sum over k = 2..sqrt(n) of 1/k)

Now we have one part which is linear (the factor n) and one part that we still need to estimate, but if you look closely it is very similar to the harmonic series; indeed, as n goes to infinity it is the harmonic series minus 1. Its growth rate is well known: ln(n) + 1 (https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)).
So the complexity of the algorithm is n*ln(n).
Update
Beta's answer has the correct result (using the correct starting point); I will leave the answer above because the procedure remains the same and, IMHO, it is still useful.
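As a quick numerical sanity check of the estimate above (my own sketch, not part of the original answer): evaluate S(n) = n/2 + n/3 + ... + n/floor(sqrt(n)) for a few n and compare it with n*ln(n). The ratio settles near a constant (about 0.5, since the partial harmonic sum up to sqrt(n) grows like ln(sqrt(n)) = ln(n)/2), which is consistent with Theta(n log n).

#include <cmath>
#include <iostream>

int main() {
    // Compare S(n) = sum over k = 2..sqrt(n) of n/k with n * ln(n).
    for (long long n : {1000000LL, 4000000LL, 16000000LL}) {
        double s = 0;
        for (long long k = 2; k * k <= n; ++k) s += double(n) / k;
        std::cout << "n = " << n << ", S(n) = " << s
                  << ", S(n) / (n * ln n) = " << s / (n * std::log(n)) << "\n";
    }
}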
"...The inner loop according to my calculations runs n/2 + n/3 + ... + n/sqrt(n)"
Ah, be careful with that ellipsis. It actually runs
n/2 + n/3 + n/5 + n/7 + n/11 + ... + n/sqrt(n)
This is not n times the harmonic series, this is n times the sum of the reciprocals of the primes, a sum which grows as log(log(greatest denominator)).
So the complexity of the algorithm is O(n log log(n)).
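To see the difference in practice (again a sketch of mine, not part of Beta's answer), one can count the crossing-off steps the sieve actually performs, using the equivalent loop bound i*i < n, and compare them with n*log(log(n)):

#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Count the inner-loop steps of the sieve; the ratio to
    // n * log(log(n)) should stay roughly constant as n grows.
    for (int n : {100000, 1000000, 10000000}) {
        std::vector<bool> res(n, true);
        long long steps = 0;
        for (long long i = 2; i * i < n; ++i) {
            if (!res[i]) continue;
            for (long long j = i * i; j < n; j += i) {
                res[j] = false;
                ++steps;
            }
        }
        std::cout << "n = " << n << ", steps = " << steps
                  << ", steps / (n * log(log n)) = "
                  << steps / (n * std::log(std::log(double(n)))) << "\n";
    }
}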
According to me, the time complexity should be O(n log n), as the outer loop works until n/2^k = 1 and the inner loop works n times. Can anyone tell me if I'm correct or not?
while (n) {
    j = n;
    while (j > 1) {
        j -= 1;
    }
    n /= 2;
}
The inner cycle does n iterations, and each iteration of the outer cycle divides n by 2, so there are n + n/2 + n/4 + ... = 2n total iterations of the inner cycle, and the time complexity is O(n), not O(n log n).
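A quick empirical check (my addition; the helper name totalIterations is made up) confirms the 2n bound:

#include <iostream>

// Count the total number of inner-cycle iterations.
long long totalIterations(long long n) {
    long long count = 0;
    while (n) {
        long long j = n;
        while (j > 1) {
            --j;
            ++count;
        }
        n /= 2;
    }
    return count;
}

int main() {
    for (long long n : {1000, 10000, 100000}) {
        std::cout << "n = " << n << ", iterations = " << totalIterations(n)
                  << " (2n = " << 2 * n << ")\n";
    }
}

The printed counts stay just below 2n, so the total work is linear in n.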
As a beginner programmer, I've always had trouble seeing the complexity of even simple code. The code in question is:
k = 1;
while (k <= n) {
    cout << k << endl;
    k = k * 2;
}
At first I thought the complexity was O(log n) due to the k = k * 2 line. I ran the code as a test and kept track of how many times it looped relative to the size of n, and the count stayed quite low even for large n. I am also fairly sure it is not O(n), because that would have taken much longer to run, but I could be wrong there, which is why I'm asking the question.
Thanks!
It is O(log n).
Each iteration, k doubles, which means that after log(n) iterations it will be equal to or greater than n.
In your example k doesn't increase by 1 (k++); it doubles every time, so the loop finishes in log(n) steps. Remember that logarithms are the inverse operation of exponentiation. Logarithms appear whenever something is repeatedly halved or doubled, such as k in your example.
As you suggested, the provided example would be O(log n) due to the fact that k is being multiplied by a constant regardless of the size of n. This behavior can also be observed by comparing the necessary traversals of two very simple test cases.
For instance, if n = 10, it is easy to demonstrate that the program iterates through the loop 4 times (k = 1, 2, 4, 8).
Yet if you double the value of n so that n = 20, the program will require only one more traversal, whereas you would expect a program that is O(n) to require roughly twice as many traversals as the original test case.
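Here is a small counting helper (mine, not part of the answer; the name traversals is made up) that performs that comparison:

#include <iostream>

// Count how many times the doubling loop body executes.
int traversals(int n) {
    int count = 0;
    for (int k = 1; k <= n; k *= 2) ++count;
    return count;
}

int main() {
    for (int n : {10, 20, 40, 80, 160}) {
        std::cout << "n = " << n << " -> " << traversals(n) << " traversals\n";
    }
}

Each doubling of n adds exactly one traversal (4, 5, 6, 7, 8), which is the signature of logarithmic growth.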
Example: 1~9

          1
        /   \
       2     3
      / \   / \
     4   5 6   7
    / \
   8   9

The depth of the tree (or focus on 1, 2, 4, 8, ...) is always ⌊log2(n)⌋ + 1, so the complexity is O(log n).
I am learning at my own pace online. I was solving some examples but I can't wrap my mind around this one:
int i = 1, sum = 0;    // the original snippet does not initialize i; assume it starts at 1
while (i < n)
{
    for (int j = 1; j <= i; j++)
        sum = sum + 1;
    i *= 2;
}
I think the answer should be O(2^n) but my friend says O(n log n).
Can someone find the big-O for this loop and explain to me how to do so?
The outer loop will enter its body log2(n) times, because i increases exponentially and thereby reaches n faster and faster. For example, if n were 1024, it would need only 10 iterations; with n = 65536, it would be 16 iterations. The accurate count is log2(n), but in terms of runtime complexity the logarithmic behaviour is enough. So here the complexity is O(log(n)).
The inner loop for (int j = 1; j <= i; j++), each time it is evaluated, runs up to the current i. It can be shown that the average width of a run is about n / log2(n), since i takes the values 1, 2, 4, ..., n over log2(n) steps. For example, if n is 31, i is 1, 2, 4, 8, 16; the sum is 31 over 5 steps. So it is permissible to take the complexity O(n / log(n)) here.
The overall complexity is then O(log(n) * n / log(n)), which is O(n).
It's n.
If n = 2^k, then the while loop runs k times.
Second loop: 2^0 + 2^1 + ... + 2^k = 2^(k+1) - 1 ~= 2^(k+1)
2^(k+1) = 2*n
We can assume without loss of generality that N is equal to 2^k + 1. We need to find the number of iterations of the inner loop. There will be k + 1 iterations of the outer loop, with 2^0, 2^1, ..., 2^k iterations of the inner loop. Let's sum up these values: 2^0 + 2^1 + ... + 2^k = 2^(k+1) - 1, which is about 2N, i.e. O(N).
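All three answers can be verified with a short counter (my own sketch; the helper name innerCount is made up):

#include <iostream>

// Count how many times "sum = sum + 1" executes in the loop above.
long long innerCount(long long n) {
    long long count = 0;
    for (long long i = 1; i < n; i *= 2)
        count += i;              // the inner loop runs exactly i times
    return count;
}

int main() {
    for (long long n : {1000, 100000, 10000000}) {
        std::cout << "n = " << n << ", inner iterations = " << innerCount(n)
                  << " (2n = " << 2 * n << ")\n";
    }
}

The count never exceeds 2n, consistent with O(n) rather than O(n log n) or O(2^n).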
Is the Big-O for the following code O(n) or O(log n)?
for (int i = 1; i < n; i *= 2)
    sum++;
It looks like O(n) or am I missing this completely?
It is O(log n), since i is doubled each time. So overall you need to iterate k times, until 2^k = n, which happens when k = log(n) (since 2^log(n) = n).
Simple example: Assume n = 100 - then:
iter1: i = 1
iter2: i = 2
iter3: i = 4
iter4: i = 8
iter5: i = 16
iter6: i = 32
iter7: i = 64
iter8: i = 128 > 100
It is easy to see that an iteration will be added when n is doubled, which is logarithmic behavior, while linear behavior is adding iterations for a constant increase of n.
P.S. (EDIT): mathematically speaking, the algorithm is indeed O(n), since big-O notation gives an asymptotic upper bound and your algorithm runs asymptotically "faster" than O(n); so it is indeed O(n), but that is not a tight bound (it is not Theta(n)), and I doubt that is actually what you are looking for.
The complexity is O(log n) because the loop runs about log2(n) times.
O(log(n)), as you only loop ~log2(n) times
No, the complexity is not linear. Try to play through a few scenarios: how many iterations does this cycle do for n = 2, n = 4, n = 16, n = 1024? How about for n = 1024 * 1024? Maybe this will help you get the correct answer.
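Playing through those scenarios takes only a few lines (a throwaway sketch of mine):

#include <iostream>

int main() {
    // Count iterations of "for (int i = 1; i < n; i *= 2)" for several n.
    for (int n : {2, 4, 16, 1024, 1024 * 1024}) {
        int iters = 0;
        for (int i = 1; i < n; i *= 2) ++iters;
        std::cout << "n = " << n << " -> " << iters << " iterations\n";
    }
}

Squaring n (going from 1024 to 1024 * 1024) merely doubles the iteration count from 10 to 20; linear growth would have multiplied it by 1024.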
The for-loop check runs lg(n) + 1 times and the loop body runs lg(n) times, so the complexity is O(lg n), which is the same as O(log n) (the base of the logarithm changes only a constant factor).
If n==8, the following is how the code will run:
i=1
i=2
i=4
i=8 --Exit condition
It is O(log(n)).
Look at the line sum++;
It executes O(log(n)) times.