What's the time-complexity function [ T(n) ] for these loops? - c++

j = n;
while (j >= 1) {
    i = j;
    while (i <= n) { cout << "Printed"; i *= 2; }
    j /= 2;
}
My goal is to find T(n) (the function that gives the exact number of times the statement executes), whose order is expected to be n*log(n), but I need the exact function, which should work at least for n = 1 to n = 10.
I have tried to predict the function; I finally ended up with T(n) = floor((n-1)*log(n)) + n,
which is correct only for n = 1 and n = 2.
I should mention that I found that inaccurate function by converting the original code to the for-loop version below:
for (j = 1; j <= n; j *= 2) {
    for (i = j; i <= n; i *= 2) {
        cout << "Printed";
    }
}
Thanks in advance for your help in finding the exact T(n). 🙏

Here log(x) denotes the floor of the base-2 logarithm.
1.)
For a given j, the inner loop executes 1 + log(N) - log(j) times, and the outer loop executes 1 + log(N) times, with j taking the values 1, 2, 4, ..., N. Summing the inner counts over those values of j gives the overall complexity:
T(N) = (1 + log(N)) * (1 + log(N)) - (log(1) + log(2) + log(4) + ... + log(N))
     = log(N)^2 + 2*log(N) + 1 - (0 + 1 + 2 + ... + log(N))
     = log(N)^2 + 2*log(N) + 1 - log(N)*(log(N) + 1)/2
     = log(N)^2/2 + 3*log(N)/2 + 1
2.) The same applies to the for-loop version; it is just the same loops in reverse order.
I know this is no proof, but it may be easier to follow than the math: godbolt. Play with n; it always returns 0.
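I don't have the exact godbolt snippet at hand, but a minimal sketch of that kind of check could look like the following (count_prints and closed_form are names I made up); it returns 0 as long as the closed form above matches the actual count:

#include <iostream>

// Count how many times "Printed" would be emitted by the loops in the question.
long long count_prints(long long n) {
    long long count = 0;
    for (long long j = n; j >= 1; j /= 2)
        for (long long i = j; i <= n; i *= 2)
            ++count;
    return count;
}

// Closed form derived above, with L = floor(log2(n)):
// T(n) = log(n)^2/2 + 3*log(n)/2 + 1 = (L + 1) * (L + 2) / 2
long long closed_form(long long n) {
    long long L = 0;
    while ((2LL << L) <= n) ++L;        // L = floor(log2(n))
    return (L + 1) * (L + 2) / 2;
}

int main() {
    long long mismatches = 0;
    for (long long n = 1; n <= 1000; ++n)
        if (count_prints(n) != closed_form(n)) ++mismatches;
    std::cout << mismatches << '\n';    // prints 0 when the formula holds for every n
    return (int) mismatches;            // returns 0, like the godbolt example
}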

Outer loop and inner loop are both O(log₂ N).
So total time is
O(log₂ N * log₂ N) == O((log₂ N)²)
which just gets written as O(lg² N).

Related

Complexity of a function with 1 loop

Can anyone tell me the complexity of the function below, and how to calculate it?
I suspect it is O(log(n)) or O(sqrt(n)).
My reasoning was based on taking the examples n=4, n=8, and n=16, and I found that the loop takes about log(n) iterations, but I don't think that is conclusive, since sqrt(n) gives roughly the same values for such small inputs; I would need to try much bigger values of n, and I am not sure how to approach this.
I had this function in the exam today.
void f(int n) {
    int i = 1;
    int j = 1;
    while (j <= n) {
        i += 1;
        j += i;
    }
}
The sequence j goes through is 1 3 6 10 15 21, aka the triangular numbers, aka n*(n+1)/2.
Expanded, this is (n^2 + n)/2. We can ignore the constant factor (/2) and the lower-order term (+n), which leaves us with n^2.
j grows as an n^2 polynomial, so the loop will stop after the inverse of that growth:
The time complexity is O(sqrt(n))
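Spelling out that inversion step (a sketch in my own notation, with k as the iteration count): after k iterations we have i = k + 1 and j equal to the (k+1)-st triangular number, so the loop stops at the first k with

j_k = \frac{(k+1)(k+2)}{2} > n \quad\Longrightarrow\quad k \approx \sqrt{2n} = O(\sqrt{n})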
For what it's worth, I wrote a small program that attempts to illustrate whether this is O(log(N)) or O(sqrt(N)) by actually counting how many iterations of your loop execute. This seemed a reasonable approximation given that the body of the loop is largely negligible (simply incrementing two integer variables).
#include <stdio.h>
#include <math.h>

int f(int n)
{
    int i = 1;
    int j = 1;
    int count = 0;
    while (j <= n) {
        i += 1;
        j += i;
        count++;
    }
    return count;
}

int main()
{
    for (int ii = 0; ii < 10; ii++) {
        int count = pow(10, ii);
        int rc = f(count);
        const char *fmt = "N=%d^%-2d -> %d, log(N)=%.2f, sqrt(N)=%.2f\n";
        printf(fmt, 10, ii, rc, log(count), sqrt(count));
    }
    return 0;
}
Running this code results in the following output:
N=10^0 -> 1, log(N)=0.00, sqrt(N)=1.00
N=10^1 -> 4, log(N)=2.30, sqrt(N)=3.16
N=10^2 -> 13, log(N)=4.61, sqrt(N)=10.00
N=10^3 -> 44, log(N)=6.91, sqrt(N)=31.62
N=10^4 -> 140, log(N)=9.21, sqrt(N)=100.00
N=10^5 -> 446, log(N)=11.51, sqrt(N)=316.23
N=10^6 -> 1413, log(N)=13.82, sqrt(N)=1000.00
N=10^7 -> 4471, log(N)=16.12, sqrt(N)=3162.28
N=10^8 -> 14141, log(N)=18.42, sqrt(N)=10000.00
N=10^9 -> 44720, log(N)=20.72, sqrt(N)=31622.78
So, for example, you can see that when N=10^9, the number of iterations is 44720, which is much greater than log(N) (20.72) but quite close to sqrt(N) (31622.78).
It depends on your loop condition. In other words, the time complexity is O(log n).
How many statements are executed, relative to input size n? Often,
but NOT always, we can get an idea from the number of times a loop
iterates. The loop body executes for i= 2^0 + 2^1 + 2^2 + .... + 2^n; and this
sequence has O(log n) values.
Check the "Introduction to Algorithms" book about more details.

What's the time complexity of for (int i = 2; i < n; i = i*i)?

What would be the time complexity of the following loop?
for (int i = 2; i < n; i = i * i) {
    ++a;
}
While practicing runtime complexities, I came across this code and can't find the answer. I thought this would be sqrt(n), though it doesn't seem correct, since the loop has the sequence of 2, 4, 16, 256, ....
To understand the answer, you must understand that the inverse of exponentiation is not SQRT; log is.
This loop is multiplying i by itself (i.e. squaring it) and will stop only when i >= n, therefore the complexity would be O(log(log(n))) (with the inner log to the base 2, to be precise, because i = 2 at initialization).
To illustrate this:
In the above image, you can see that SQRT gives the correct number of steps only when i is an even power of 2, whereas log2 gives the accurate number of steps every time.
Each time, i is raised to the power of 2 (squared). Hence, if A(k) denotes the value of i after step k, it can be written recursively as follows (suppose n is a power of 2):
A(k) = A(k-1)^2
Now, you can expand it to find a pattern:
A(k) = A(k-2)^4 = A(k-3)^8 = ... = A(1)^(2^(k-1)) = 2^(2^(k-1))
Hence, the loop iterates k steps, where k satisfies n = 2^(2^(k-1)). Therefore, this loop iterates Theta(log(log(n))) times.
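For what it's worth, here is a small counting sketch (my own, not from the answer above) that compares the actual iteration count with log2(log2(n)); doubles are used only so very large bounds don't overflow:

#include <cmath>
#include <cstdio>

// Count the iterations of: for (int i = 2; i < n; i = i * i) ++a;
int steps(double n) {
    int k = 0;
    for (double i = 2; i < n; i = i * i)
        ++k;
    return k;
}

int main() {
    // Square the bound each time and compare the count with log2(log2(n)).
    for (double n = 4; n < 1e150; n = n * n)
        std::printf("n = %.0e   steps = %d   log2(log2 n) = %.2f\n",
                    n, steps(n), std::log2(std::log2(n)));
    return 0;
}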

What is the time complexity of this code? Is it O(logn) or O(loglogn)?

int n = 8; // In the video n = 8
int p = 0;
for (int i = 1; i < n; i *= 2) { // In the video i = 1
    p++;
}
for (int j = 1; j < p; j *= 2) { // In the video j = 1
    //code;
}
This is code from the Abdul Bari YouTube channel (link to the video). They said the time complexity of this is O(log log n), but I think it is O(log n). What is the correct answer?
Fix the initial value. 0 multiplied by 2 will never end the loop.
The last loop is O(log log N) because p == log(n). However, the first loop is O(log N), hence in total it is also O(log N).
On the other hand, once you put some code in place of //code then the first loop can be negligible compared to the second and we have:
O(log N + X * log log N), where the log N term comes from the first loop and the X * log log N term from the second,
and when X is just big enough, one can consider the whole thing O(log log N) in total. Strictly speaking that is wrong, though, because complexity is about asymptotic behavior: no matter how big X is, as N goes to infinity, log N will eventually be bigger than X * log log N.
PS: I assumed that //code does not depend on N, i.e. it has constant complexity. The above consideration changes if this is not the case.
PPS: In general complexity is important when designing algorithms. When using an algorithm it is rather irrelevant. In that case you rather care about actual runtime for your specific value of N. Complexity can be misleading and even lead to wrong expectations for a specific use case with given N.
You are correct, the time complexity of the complete code is O(log(n)).
But Abdul Bari Sir is also correct, because:
In the video, he is finding the time complexity of the second for loop only, not the time complexity of the whole code. Take a look at the video again and listen carefully to what he is saying at this timestamp: https://youtu.be/9SgLBjXqwd4?t=568
Once again, what he has derived is the time complexity of the second loop and not the time complexity of the complete code. Please listen to what he says at 9 mins and 28 secs in the video.
If your confusion is clear, please mark this as correct.
The time complexity of
int n;
int p = 0;
for (int i = 1; i < n; i *= 2) { // start at 1, not at 0
    p++;
}
is O(log(n)), because you do p++ log2(n) times. The logarithm's base does not matter in big O notation, because it only changes the count by a constant factor.
for (int j = 1; j < p; j *= 2) {
    //code;
}
is O(log(log(n))), because you only loop up to p = log(n) by doubling j, so you get O(log(p)) = O(log(log(n))).
However, both together are still O(log(n)), because O(log(n) + log(log(n))) = O(log(n)).
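A quick empirical sketch of that (my own code, not from the video): p tracks log2(n), while the second loop's count tracks log2(p), which is roughly log2(log2(n)):

#include <cstdio>

int main() {
    // Count iterations of both loops from the question for a few growing values of n.
    for (long long n = 16; n <= (1LL << 40); n *= 256) {
        long long p = 0, second = 0;
        for (long long i = 1; i < n; i *= 2) p++;        // first loop: p ends up == log2(n)
        for (long long j = 1; j < p; j *= 2) second++;   // second loop: ~log2(p) iterations
        std::printf("n = 2^%lld   p = %lld   second loop ran %lld times\n", p, p, second);
    }
    return 0;
}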

Calculating Time Complexity of Maximum Subsequence Sum

Hello everyone, I am trying to calculate the time complexity of Maximum Subsequence Sum.
Actually, I know the answer, which is O(n^3), and it follows from the function (n^3 + 3n^2 + 2n)/6.
My question is how is that function obtained.
Quite simply, actually: just look at the loops in the code.
for (int i = 0; i < n; i++)
    for (j = i; j < n; j++) {
        ...
        for (int k = i; k <= j; k++)
            XXX;
    }
The line XXX is executed about n^3 times (modulo some constant factor and some lower powers of n): the outer loop obviously runs from 0 to n-1, and the "middle" loop runs from i (which starts out at 0, 1, ...) to n-1, meaning that the inner loop is "started" approximately n^2 times. Both i and j depend on n (e.g., i will be 0 and j = n-1 at the end of the first outer iteration), so line XXX runs n times (for the inner loop) times n^2 times (for the two outer loops), resulting in a total of n^3.
To get the concrete function (n^3 + 3n^2 + 2n)/6, you'd have to be more thorough in your calculation and take care of all those factors I simply omitted above.
Here is how:
i=0
j=0 k=0 (count=1 )
j=1 k=0,1 (count =2)
j=2 k=0,1,2 (count = 3)
...
j=n-1 k=0,1,2,...n-1 (count = n)
Total number of times code executed = 1+2+3+...+n = n(n+1)/2
i=1
j=1 k=1 (count=1 )
j=2 k=1,2 (count =2)
j=3 k=1,2, 3 (count = 3)
...
j=n-1 k=1,2,...n-1 (count = n-1)
Total number of times code executed = 1+2+3+...+n-1 = (n-1)n/2
...
i=n-1
j=n-1 k=n-1 ( count = 1)
Total number of times code executed = 1 = 1(1+1)/2
Now if we sum for all the values of i
n(n+1)/2 + (n-1)((n-1)+1)/2 + ... + 1(1+1)/2
= ∑ N(N+1)/2
= 1/2 ∑ (N^2 + N)
= 1/2 (∑ N^2 + ∑ N)
= 1/2 { N(N+1)(2N+1)/6 + N(N+1)/2 }
= 1/2 { (2N^3 + 3N^2 + N)/6 + (N^2 + N)/2 }
= (N^3 + 3N^2 + 2N)/6
Check this solution suggested by Mark Allen Weiss (in his book).
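For a quick empirical confirmation of that closed form, here is a short counting sketch (my own, using the loop bounds from the snippet above):

#include <iostream>

int main() {
    // Count executions of the innermost statement and compare with (n^3 + 3n^2 + 2n)/6.
    long long mismatches = 0;
    for (long long n = 1; n <= 100; ++n) {
        long long count = 0;
        for (long long i = 0; i < n; ++i)
            for (long long j = i; j < n; ++j)
                for (long long k = i; k <= j; ++k)
                    ++count;                              // stands in for line XXX
        if (count != (n * n * n + 3 * n * n + 2 * n) / 6)
            ++mismatches;
    }
    std::cout << "mismatches: " << mismatches << '\n';    // prints "mismatches: 0"
    return 0;
}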

Why can the KMP failure function be computed in O(n) time?

Wikipedia claims that the failure function table can be computed in O(n) time.
Let's look at its 'canonical' implementation (in C++):
vector<int> prefix_function(string s) {
    int n = (int) s.length();
    vector<int> pi(n);
    for (int i = 1; i < n; ++i) {
        int j = pi[i-1];
        while (j > 0 && s[i] != s[j])
            j = pi[j-1];
        if (s[i] == s[j]) ++j;
        pi[i] = j;
    }
    return pi;
}
Why does it work in O(n) time, even though there is an inner while-loop? I'm not really strong at the analysis of algorithms, so could somebody explain it?
This line: if (s[i] == s[j]) ++j; is executed at most O(n) times.
It causes an increase in the value of pi[i]. Note that pi[i] starts at the same value as pi[i-1] (j is initialized to pi[i-1]).
Now this line: j = pi[j-1]; causes a decrease of that value by at least one. And since the value was increased at most O(n) times in total (we also count the increases and decreases made for previous indices), it cannot be decreased more than O(n) times.
So it is also executed at most O(n) times.
Thus the whole time complexity is O(n).
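To see that bound in actual numbers, here is the same prefix_function with two counters bolted on (the instrumentation and the test strings are mine, not part of the original code); j = pi[j-1] never runs more often than ++j has run before it, and both counts stay below n:

#include <iostream>
#include <string>
#include <vector>

void prefix_function_counted(const std::string& s) {
    int n = (int) s.length();
    std::vector<int> pi(n);
    long long increases = 0, decreases = 0;
    for (int i = 1; i < n; ++i) {
        int j = pi[i-1];
        while (j > 0 && s[i] != s[j]) {
            j = pi[j-1];                              // each pass here decreases j
            ++decreases;
        }
        if (s[i] == s[j]) { ++j; ++increases; }       // at most one increase per i
        pi[i] = j;
    }
    std::cout << "n = " << n << ", increases = " << increases
              << ", decreases = " << decreases << '\n';
}

int main() {
    prefix_function_counted("abababcababab");
    prefix_function_counted(std::string(2000, 'a'));
    prefix_function_counted("aabaaabaaaabaaaaab");
    return 0;
}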
There are already two answers here that are correct, but I often think a fully laid out proof can make things clearer. You said you wanted an answer for a 9-year-old, but I don't think it's feasible (it's easy to be fooled into thinking it's true without actually having any intuition for why it's true). Maybe working through this answer will help.
First off, the outer loop runs n times clearly because i is not modified
within the loop. The only code within the loop that could run more than once is
the block
while (j > 0 && s[i] != s[j])
{
    j = pi[j-1];
}
So how many times can that run? Well notice that every time that condition is
satisfied we decrease the value of j which, at this point, is at most
pi[i-1]. If it hits 0 then the while loop is done. To see why this is important,
we first prove a lemma (you're a very smart 9-year-old):
pi[i] <= i
This is done by induction. pi[0] <= 0 since it's set once in the initialization of pi and never touched again. Then inductively we let 0 < k < n and assume
the claim holds for 0 <= a < k. Consider the value of pi[k]. It's set
precisely once in the line pi[i] = j. Well how big can j be? It's initialized
to pi[k-1] <= k-1 by induction. In the while block it then may be updated to pi[j-1] <= j-1 < pi[k-1]. By another mini-induction you can see that j will never increase past pi[k-1]. Hence after the
while loop we still have j <= k-1. Finally it might be incremented once so we have
j <= k and so pi[k] = j <= k (which is what we needed to prove to finish our induction).
Now returning back to the original point, we ask "how many times can we decrease the value of
j"? Well with our lemma we can now see that every iteration of the while loop will
monotonically decrease the value of j. In particular we have:
pi[j-1] <= j-1 < j
So how many times can this run? At most pi[i-1] times. The astute reader might think
"you've proven nothing! We have pi[i-1] <= i-1 but it's inside the while loop so
it's still O(n^2)!". The slightly more astute reader notices this extra fact:
However many times we run j = pi[j-1] we then decrease the value of pi[i] which shortens the next iteration of the loop!
For example, let's say j = pi[i-1] = 10. But after ~6 iterations of the while loop we have
j = 3 and let's say it gets incremented by 1 in the s[i] == s[j] line so j = 4 = pi[i].
Well then at the next iteration of the outer loop we start with j = 4... so we can only execute the while at most 4 times.
The final piece of the puzzle is that ++j runs at most once per loop. So it's not like we can have
something like this in our pi vector:
0 1 2 3 4 5 1 6 1 7 1 8 1 9 1
If this could happen, each of those drops back down to 1 would mean multiple iterations of the while loop.
To make this actually formal you might establish the invariants described above and then use induction
to show that the total number of times that while loop is run, summed with pi[i], is at most i.
From that, it follows that the total number of times the while loop is run is O(n) which means that the entire outer loop has complexity:
O(n) // from the rest of the outer loop excluding the while loop
+ O(n) // from the while loop
=> O(n)
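One compact way to state the invariant hinted at above (my own formalization, not part of the original answer): if W_i is the number of while-loop iterations performed for index i, then every such iteration decreases j by at least 1, j never drops below 0, and ++j raises it by at most 1 per index, so

\sum_{i=1}^{n-1} W_i \;\le\; \text{total increases of } j \;\le\; n-1 \;=\; O(n)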
Let's start with the fact that the outer loop executes n times, where n is the length of the pattern we seek. Each iteration of the inner loop decreases the value of j by at least 1, since pi[j-1] < j. The loop terminates at the latest when j == 0, therefore it can decrease the value of j at most as often as j has previously been increased by ++j in the outer loop. Since ++j is executed in the outer loop at most n times, the overall number of executions of the inner while loop is limited to n. The preprocessing algorithm therefore requires O(n) steps.
If you care, consider this simpler implementation of the preprocessing stage:
/* ff stands for 'failure function': */
void kmp_table(const char *needle, int *ff, size_t nff)
{
    size_t pos = 2;
    int cnd = 0;
    if (nff == 0)          /* nothing to fill in */
        return;
    ff[0] = -1;
    if (nff > 1)
        ff[1] = 0;
    while (pos < nff) {
        if (needle[pos - 1] == needle[cnd]) {
            ff[pos++] = ++cnd;
        } else if (cnd > 0) {
            cnd = ff[cnd]; /* Amortized O(1), for the reasons above. */
        } else {
            ff[pos++] = 0;
        }
    }
}
from which it is painfully obvious the failure function is O(n), where n is the length of the pattern sought.
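As a quick sanity check (my own usage example; the pattern "ABCDABD" is an arbitrary choice), calling it on a short needle produces the familiar partial-match table -1 0 0 0 0 1 2:

#include <cstdio>
#include <cstddef>

void kmp_table(const char *needle, int *ff, size_t nff);  // defined above

int main() {
    const char *needle = "ABCDABD";
    int ff[7];
    kmp_table(needle, ff, 7);
    for (int i = 0; i < 7; ++i)
        std::printf("ff[%d] = %d\n", i, ff[i]);            // -1 0 0 0 0 1 2
    return 0;
}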