If I have the for loops below:
for (int k = 1; k <= n; ++k) {
    for (int j = 1; j * j <= k; ++j) {
        // O(1) operations
    }
}
I know the outer loop will iterate n times, and the inner loop will iterate floor(sqrt(k)) times on the k-th iteration of the outer loop.
Therefore, to determine the time complexity, we have something like the summation
\sum_{k=1}^{n} \lfloor \sqrt{k} \rfloor
I'm not sure how to proceed and get a closed-form time complexity in terms of n.
I would say that you need to integrate sqrt(x) = x^(1/2); the result is on the order of n^(3/2). Forget about the floor function: it changes each term by less than 1, so it changes the whole sum by at most n, which does not affect the order of growth.
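To make the integral step concrete, here is the standard comparison written out (my own elaboration, not part of the original answer):

\sum_{k=1}^{n} \lfloor \sqrt{k} \rfloor \le \sum_{k=1}^{n} \sqrt{k} \le \int_{1}^{n+1} \sqrt{x}\,dx = \frac{2}{3}\left((n+1)^{3/2} - 1\right) = O(n^{3/2})

For the other direction, \sum_{k=1}^{n} \sqrt{k} \ge \int_{0}^{n} \sqrt{x}\,dx = \frac{2}{3} n^{3/2} and \lfloor \sqrt{k} \rfloor > \sqrt{k} - 1, so the sum is \Theta(n^{3/2}).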
The outer loop iterates n times, and the inner loop iterates at most sqrt(n) times per outer iteration. When one loop is nested inside another, you multiply their complexities. Therefore it runs in O(n^1.5).
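If you want to sanity-check the bound empirically, here is a small throwaway C++ program (my own sketch, not from either answer) that counts the actual number of inner-loop executions and compares it against n^1.5; the ratio should hover near 2/3, matching (2/3) n^(3/2):

#include <cmath>
#include <cstdio>

int main() {
    for (long long n : {1000LL, 10000LL, 100000LL}) {
        long long count = 0;
        for (long long k = 1; k <= n; ++k)
            for (long long j = 1; j * j <= k; ++j)
                ++count;  // one execution of the O(1) body
        std::printf("n = %6lld  count = %9lld  count / n^1.5 = %.4f\n",
                    n, count, count / std::pow((double)n, 1.5));
    }
}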
I understand how to get a general picture of the big O of a nested loop, but what exactly would the operation counts be for each loop in a nested for loop?
If we have:
for (int i = 0; i < n; i++)
{
    for (int j = i + 1; j < 1000; j++)
    {
        // do something of constant time
    }
}
How exactly would we get T(n)? The outer for loop would be n operations, the inner would be 1000(n-1), and the body would just be c. Is that right?
So T(n) = cn(1000(n-1)). Is that right?
You want to collapse the loops and do a double summation. When i = 0, the inner loop runs 1000 - 1 times; when i = 1, it runs 1000 - 2 times, and so on. This is equivalent to the sum from i = 0 to n - 1 of the series 999 - i. Note that you can separate the terms and get 999n - n(n-1)/2.
This is a pretty strange formula, because once i reaches 999 the inner loop immediately short-circuits and does nothing. In this case, then, the asymptotic time complexity is actually O(n), because for large values of n the code just skips the inner loop in constant time per outer iteration.
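A quick empirical check of the short-circuit behaviour (my own sketch, not from the original post): count the inner-loop executions and watch the total flatten once n passes 1000, after which only the empty outer iterations grow.

#include <cstdio>

int main() {
    for (int n : {500, 1000, 2000, 4000}) {
        long long inner = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < 1000; j++)
                inner++;  // the constant-time body
        std::printf("n = %4d  inner-loop executions = %lld\n", n, inner);
    }
}

For every n >= 1000 the count stays frozen at 999 * 1000 / 2 = 499500, so the remaining growth is the O(n) cost of the outer loop alone.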
I am new to algorithms and data structures. I am referring to a book, and it has some questions that I am having difficulty understanding.
I am required to find the running time of the following programs (the comments are from the book):
function(int n) {
    for (int i = 1; i <= n/3; i++) {       // will execute n/3 times
        for (int j = 1; j <= n; j += 4) {  // will execute n/4 times
            printf("*");
        }
    }
}
Answer: O(n^2)
How is it n^2? The first loop will execute n/3 times and the second one n/4 times, and n/3 * n/4 = n^2/12. How is it n^2? Please help me understand.
Question 2
function(int n) {
    for (int i = 0; i < n; i++) {              // will execute n times
        for (int j = i; j < i*i; j += 4) {     // will execute n*n times ?????? (How?)
            if (j % i == 0) {
                for (int k = 0; k < j; k++) {  // will execute j times
                    printf("*");
                }
            }
        }
    }
}
Answer: O(n^5)
The first loop executes n times. Fine.
How does the second loop execute for n * n times? Here, the value of j is initialized to i, so shouldn't it be (n * n)-i times? If j was initialized to 0, it would have been n * n times, right?
The third loop executes j times because k runs from 0 to j-1.
Please help me understand why the 2nd loop (j) will execute n*n times. Thank you.
The book deals with big-Oh. A complete introduction to big-Oh would be too long, but in big-Oh-land it holds that:
O(a * f(n)) = O(f(n))
with a a constant and f(n) an arbitrary function. Another rule is that:
O(a_k * n^k + a_(k-1) * n^(k-1) + ... + a_0) = O(n^k)
That answers your first question: n/3 * n/4 = n^2/12 = (1/12) * n^2, and the constant factor 1/12 is simply dropped, leaving O(n^2).
About the second question: the second loop runs from i to i*i (in steps of 4, but a constant step only changes the count by a constant factor). Since i will reach n-1, which is of size O(n), the loop will be executed about (n-1)*(n-1) times in the last run. Since j will eventually reach something of the order O(n^2) and the third loop runs from 0 to j-1, the third (innermost) loop has a time complexity of O(n^2) as well. This means that the total time complexity of the loops is:
O(n) * O(n^2) * O(n^2) = O(n^5)
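If it helps, here is the same coarse upper bound written as a summation (my own elaboration; it deliberately ignores the step of 4 and the j % i test, both of which only reduce the count by constant factors or more):

\sum_{i=1}^{n} \sum_{j=i}^{i^2} O(j) \le \sum_{i=1}^{n} i^2 \cdot O(i^2) = O\left(\sum_{i=1}^{n} i^4\right) = O(n^5)

The middle sum has at most i^2 terms, each of cost O(i^2), which is exactly the "n*n iterations, each with an O(n^2) inner loop" argument above.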
Hello, I have spent many hours trying to figure out how this example given in my tutorial works. There are a few things I don't understand, and yes, I have searched the web for help, but there is not much on this specific example, which I really want to understand.
The first thing I don't understand is that i and j both start at 2 and both for loops have i++ and j++; won't that make i and j equal all the time?
So in the second for loop, if j has to be less than or equal to, e.g., 4/4 = 1, then it has to be less than 1? When it has been initialized to 2?
int i, j;
for (i = 2; i < 100; i++)
{
    for (j = 2; j <= (i/j); j++)
    {
        if (!(i%j))
            break; // if factor found, not prime
        if (j > (i/j))
            cout << i << " is prime\n";
    }
}
both the for loops has i++ and j++, won't that make 'i' and 'j' equal all the time?
Nope! i++ increments the outer loop, and j++ increments the inner loop. For each round of the outer loop, the inner loop can be iterated (and thus incremented) several times. So for every round of the outer loop, j goes through values from 2 to i/j in the inner loop.
I recommend you to try this code out in a debugger, or simulate it on pen and paper to understand what's happening.
The for loop on j will execute its full range for each and every value of i, so no, they will not always be equal.
And yes, when the value of i is low, the loop on j will not even get started, but as i takes on progressively higher values, the loop on j runs a little longer for each value of i.
Just for example, think of the case i == 81: j starts at 2 and could run as far as 9 (since 9*9 <= 81), though here it actually breaks at j == 3 because 81 % 3 == 0.
The code is searching for all the prime numbers between 2 and 99, so i and j are both initialized to 2.
Understand that the first for loop tries every number between 2 and 99 for primality, using the second for loop, which searches for divisors of i.
If the second for loop doesn't find a divisor, then i is prime.
The two nested loop variables don't have the same value because the loops are nested! When i = 2, j starts at 2; then (once i is large enough for the inner loop to run) j becomes 3 while i is unchanged, then 4, and so on until the inner condition fails; only then does the outer loop increment i, and the inner loop restarts at j = 2. Hope I've been clear :) Ask if you have doubts!
It doesn't look like the existing code will actually ever declare i to be prime, because of the upper limit on j. The cout statement that declares i to be prime triggers when j > (i/j), but j is only incremented up to (i/j) (it currently can never be greater than (i/j), even if i is prime).
Try adjusting the inner loop to be:
for (j = 2; j <= ceilf(float(i)/float(j)) + 1; j++)
or something along those lines.
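For comparison, here is the more conventional restructuring (my own sketch, not the tutorial's code): keep the same divisor bound, but make the primality decision after the inner loop finishes, so it can actually fire when no factor is found.

#include <iostream>

int main() {
    for (int i = 2; i < 100; i++) {
        bool is_prime = true;
        for (int j = 2; j <= i / j; j++) { // same bound as the original, i.e. j*j <= i
            if (i % j == 0) {              // factor found, not prime
                is_prime = false;
                break;
            }
        }
        if (is_prime)
            std::cout << i << " is prime\n";
    }
}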
Wikipedia claims that the failure function table can be computed in O(n) time.
Let's look at its "canonical" implementation (in C++):
vector<int> prefix_function (string s) {
    int n = (int) s.length();
    vector<int> pi (n);
    for (int i = 1; i < n; ++i) {
        int j = pi[i-1];        // start from the border length of the previous prefix
        while (j > 0 && s[i] != s[j])
            j = pi[j-1];        // fall back to the next-shorter border
        if (s[i] == s[j]) ++j;  // extend the border by one character
        pi[i] = j;
    }
    return pi;
}
Why does it work in O(n) time, even though there is an inner while-loop? I'm not really strong at the analysis of algorithms, so could somebody explain it?
This line: if (s[i] == s[j]) ++j; is executed at most n times over the whole run, and it is the only place where j increases.
Note that j starts each iteration at the value it had at the end of the previous one, since it is initialized to pi[i-1].
Now this line: j = pi[j-1]; decreases j by at least one each time it runs (because pi[j-1] <= j-1). Since j never goes below 0 and was increased at most n times in total, it cannot be decreased more than n times in total either (counting across all iterations of the outer loop).
So that line is also executed at most O(n) times.
Thus the whole time complexity is O(n).
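To watch this amortized behaviour on a concrete input, you can run the function from the question on a short string (the driver below is mine; the expected table is easy to verify by hand):

#include <iostream>
#include <string>
#include <vector>
using namespace std;

// paste prefix_function from the question here

int main() {
    vector<int> pi = prefix_function("aabaaab");
    for (int v : pi) cout << v << ' ';  // prints: 0 1 0 1 2 2 3
    cout << '\n';
}

At i = 5 the while loop falls back once (j goes 2 -> 1) before extending again, yet the total number of fallbacks over the whole string stays below n, exactly as the counting argument predicts.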
There are already two correct answers here, but I often think a fully laid out proof can make things clearer. You said you wanted an answer for a 9-year-old, but I don't think that's feasible (I think it's easy to be fooled into thinking it's true without actually having any intuition for why it's true). Maybe working through this answer will help.
First off, the outer loop clearly runs n times, because i is not modified within the loop. The only code within the loop that could run more than once is the block
while (j > 0 && s[i] != s[j])
{
    j = pi[j-1];
}
So how many times can that run? Well notice that every time that condition is
satisfied we decrease the value of j which, at this point, is at most
pi[i-1]. If it hits 0 then the while loop is done. To see why this is important,
we first prove a lemma (you're a very smart 9-year-old):
pi[i] <= i
This is done by induction. pi[0] <= 0 since it's set once in the initialization of pi and never touched again. Then inductively we let 0 < k < n and assume
the claim holds for 0 <= a < k. Consider the value of pi[k]. It's set
precisely once in the line pi[i] = j. Well how big can j be? It's initialized
to pi[k-1] <= k-1 by induction. In the while block it then may be updated to pi[j-1] <= j-1 < pi[k-1]. By another mini-induction you can see that j will never increase past pi[k-1]. Hence after the
while loop we still have j <= k-1. Finally it might be incremented once so we have
j <= k and so pi[k] = j <= k (which is what we needed to prove to finish our induction).
Now returning back to the original point, we ask "how many times can we decrease the value of
j"? Well with our lemma we can now see that every iteration of the while loop will
monotonically decrease the value of j. In particular we have:
pi[j-1] <= j-1 < j
So how many times can this run? At most pi[i-1] times. The astute reader might think
"you've proven nothing! We have pi[i-1] <= i-1 but it's inside the while loop so
it's still O(n^2)!". The slightly more astute reader notices this extra fact:
However many times we run j = pi[j-1], we correspondingly decrease the value that pi[i] ends up being, which shortens the next iteration of the loop!
For example, let's say j = pi[i-1] = 10. But after ~6 iterations of the while loop we have
j = 3 and let's say it gets incremented by 1 in the s[i] == s[j] line so j = 4 = pi[i].
Well then at the next iteration of the outer loop we start with j = 4... so we can only execute the while at most 4 times.
The final piece of the puzzle is that ++j runs at most once per iteration of the outer loop. So it's not like we can have
something like this in our pi vector:
0 1 2 3 4 5 1 6 1 7 1 8 1 9 1
            ^   ^   ^   ^   ^
Those spots might mean multiple iterations of the while loop if this
could happen
To make this actually formal you might establish the invariants described above and then use induction
to show that the total number of times that while loop is run, summed with pi[i] is at most i.
From that, it follows that the total number of times the while loop is run is O(n) which means that the entire outer loop has complexity:
O(n) // from the rest of the outer loop excluding the while loop
+ O(n) // from the while loop
=> O(n)
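If you prefer the invariant written as one telescoping sum (my own condensation of the argument above): let D_i be the number of while-iterations during step i. Each while-iteration strictly decreases j, which starts at pi[i-1] and ends at either pi[i] or pi[i] - 1, so

D_i \le pi[i-1] - pi[i] + 1

and summing over i = 1, ..., n-1 telescopes:

\sum_{i=1}^{n-1} D_i \le pi[0] - pi[n-1] + (n-1) \le n - 1

so the while loop runs O(n) times in total.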
Let's start with the fact that the outer loop executes n times, where n is the length of the pattern we seek. Each iteration of the inner loop decreases the value of j by at least 1, since pi[j-1] < j. The loop terminates at the latest when j == 0, therefore it can decrease the value of j at most as often as j has previously been increased by ++j in the outer loop. Since ++j is executed at most once per outer iteration, i.e. at most n times in total, the overall number of executions of the inner while loop is limited to n. The preprocessing algorithm therefore requires O(n) steps.
If you care, consider this simpler implementation of the preprocessing stage:
/* ff stands for 'failure function': */
void kmp_table(const char *needle, int *ff, size_t nff)
{
    size_t pos = 2;
    int cnd = 0;

    if (nff == 0)
        return;  /* nothing to fill in; guards the ff[0] write below */
    ff[0] = -1;
    if (nff > 1)
        ff[1] = 0;

    while (pos < nff) {
        if (needle[pos - 1] == needle[cnd]) {
            ff[pos++] = ++cnd;
        } else if (cnd > 0) {
            cnd = ff[cnd]; /* Amortized O(1), for the reasons above. */
        } else {
            ff[pos++] = 0;
        }
    }
}
from which it is painfully obvious the failure function is O(n), where n is the length of the pattern sought.
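A tiny driver to exercise it (my own, not part of the answer; the expected table is easy to check by hand):

#include <cstdio>
#include <cstring>

// kmp_table as defined above

int main() {
    const char *needle = "ABABAC";
    size_t n = std::strlen(needle);  // 6
    int ff[6];
    kmp_table(needle, ff, n);
    for (size_t i = 0; i < n; i++)
        std::printf("%d ", ff[i]);   // prints: -1 0 0 1 2 3
    std::printf("\n");
}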
I have this loop running inside a program:
for (int i = 0; i < n; i++) {
    for (int it = 0; it < m; it++) {
        // access vector.at(it + 1) and add a number plus vector.at(it)
    }
}
Both n and m are user input. What I want to do is run the inner loop over the size of the vector (m) and store information; the outer loop says to repeat that process n times.
So would my big O notation be O(m^n), since I'm repeating the m-loop however many times n is?
Thanks.
You're performing 2 operations in the inner loop, thus you are doing a total of 2 * n * m operations, which gives an O(n*m) complexity.
It would actually be O(M x N)
O(M^N) is very very slow :)
It is O(mn), assuming that the operation inside the inner loop is O(1).
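Written out as the double summation behind all three answers (my own one-liner, assuming the loop body costs a constant c):

T(n, m) = \sum_{i=0}^{n-1} \sum_{it=0}^{m-1} c = c \cdot n \cdot m = O(nm)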