void f(int n)
{
    for(int i = 1; i <= n; i++){
        if(i % (int)sqrt(n) == 0){
            for(int k = 0; k < pow(i, 3); k++){
                // do something
            }
        }
    }
}
My thinking process:
Number of times the if statement executes: sum from i = 1 to n of Theta(1).
Number of times the body of the if executes: sum from i = 1 to sqrt(n) (for loop).
Number of times the inner for loop runs: sum from k = 0 to i^3 of Theta(1) = i^3.
This gives me: Theta(n) + sum from i = 1 to sqrt(n) of Theta(i^3) = Theta(n) + Theta(n^2),
which gives me Theta(n^2).
The answer key he gave is Theta(n^3.5).
I am just wondering if I made any mistake in my thinking process. I have asked my professor twice about this question and just want to see if there is anything I missed before I bother him again. Thanks!
Using sigma notation, I came up with the exact closed form. Note that the formula assumes the cost of the iterations that fail the if test (and therefore never reach the innermost loop) is negligible. Because of the floor and square-root functions involved, deriving tight order-of-growth bounds takes some care.
Further details here: https://math.stackexchange.com/questions/1840414/summation-with-floor-and-square-root-functions-tight-bounds
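As a sketch of how such a sum can be set up (my own notation, not necessarily the exact closed form from the linked answer): the innermost loop runs only when i is a multiple of ⌊√n⌋, so the total work is

```latex
\sum_{\substack{1 \le i \le n \\ \lfloor\sqrt{n}\rfloor \,\mid\, i}} i^3
  \;=\; \sum_{m=1}^{\lfloor n/\lfloor\sqrt{n}\rfloor \rfloor} \bigl(m\lfloor\sqrt{n}\rfloor\bigr)^3
  \;=\; \lfloor\sqrt{n}\rfloor^{3} \sum_{m=1}^{\Theta(\sqrt{n})} m^3
  \;=\; \Theta\!\bigl(n^{3/2}\cdot n^{2}\bigr)
  \;=\; \Theta\!\bigl(n^{7/2}\bigr).
```

The last step uses the fact that the sum of the first t cubes is Theta(t^4), with t = Theta(sqrt(n)).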
void f(int n) {
    for(int i = 1; i <= n; i++){                 // --- n times
        if(i % (int)sqrt(n) == 0){               // --- true only when i is a multiple of floor(sqrt(n)), about sqrt(n) times
            for(int k = 0; k < pow(i, 3); k++){  // --- i^3 times when entered
                // do something
            }
        }
    }
}
Assuming "do something" is constant, let
c = cost of "do something" plus the running cost of a single iteration of the inner loop,
a = running cost of a single iteration of the outermost loop,
b = running cost of one if test.
The if test succeeds only for i = m * floor(sqrt(n)), m = 1, ..., about sqrt(n), and each such i makes the inner loop run i^3 times. The total cost is therefore
c * (sqrt(n)^3 + (2*sqrt(n))^3 + ... + n^3) + a*n + b*n
= c * n^(3/2) * (1^3 + 2^3 + ... + sqrt(n)^3) + a*n + b*n
= c * Theta(n^(3/2) * n^2) + a*n + b*n.
Taking the highest-order term, the running time is
Theta(n^(3/2) * n^2) = Theta(n^3.5)
What is the time complexity (big O) of this function, and how do I calculate it?
I think it's O(N^3) but am not sure.
int DAA(int n){
    int i, j, k, x = 0;
    for(i=1; i <= n; i++){
        for(j=1; j <= i*i; j++){
            if(j % i == 0){
                for(k=1; k <= j; k++){
                    x += 10;
                }
            }
        }
    }
    return x;
}
The complexity is O(n^4), but not because you can blindly drop the iterations that do no work.
It's because, when you count all instructions, O(n + n^3 + n^4) = O(n^4).
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++)                // O(n)
        for(int j=1; j <= i*i; j++)          // O(1^2 + 2^2 + ... + n^2) = O(n^3)
            if(j % i == 0)                   // O(n^3), same as loop j
                for(int k=1; k <= j; k++)    // O(n^4), see below
                    x += 10;                 // O(n^4), same as loop k
    return x;
}
Complexity of the conditioned inner loop
The loop over k executes only when j % i == 0, i.e. for j in {i, 2i, 3i, ..., i*i}.
So, counting only the iterations in which the innermost loop executes, the algorithm is effectively:
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++)            // O(n)
        for(int t=1; t <= i; t++)        // O(1+2+...+n) = O(n^2)
            for(int k=1; k <= t*i; k++)  // O(n^4)
                x += 10;
    return x;
}
Why doesn't simply dropping the unused iterations work?
Suppose the condition were instead:
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++)            // O(n)
        for(int j=1; j <= i*i; j++)      // O(1^2 + 2^2 + ... + n^2) = O(n^3)
            if(j == i)
                for(int k=1; k <= j; k++)
                    x += 10;             // oops! this only runs O(n^2) times
    return x;
}
// if(j == i*log(n)) would likewise make loop k run O((n^2)log(n)) times
// or, well, if(false) :P
Although the innermost instruction runs only O(n^2) times, the program still executes if(j == i) (and j++, j <= i*i) O(n^3) times, which makes the whole algorithm O(n^3).
Time complexity can be easier to compute if you get rid of do-nothing iterations. The middle loop does not do anything unless j is a multiple of i. So we could force j to be a multiple of i and eliminate the if statement, which makes the code easier to analyze.
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++){
        for(int m=1; m <= i; m++){ // New variable to avoid the if statement
            int j = m*i;           // The values for which the inner loop executes
            for(int k=1; k <= j; k++){
                x += 10;
            }
        }
    }
    return x;
}
The outer loop iterates n times. O(n) so far.
The middle loop iterates 1 time, then 2 times, then... n times. One might recognize this setup from the O(n^2) sorting algorithms. The loop executes n times, and the number of iterations increases to n, leading to O(n×n) complexity.
The inner loop is executed on the order of n×n times (the complexity of the middle loop). The number of iterations for each execution increases to n×n (the maximum value of j). Similar to how the middle loop multiplied its number of executions and largest number of iterations to get its complexity, the complexity of the inner loop – hence of the code as a whole – should become O(n^4), but I'll leave the precise proof as an exercise.
The above does assume that the time complexity represents the number of times that x += 10; is executed. That is, it assumes that the main work of the innermost loop overwhelms the rest of the work. This is usually what is of interest, but there are some caveats.
The first caveat is that adding 10 is not overwhelmingly more work than incrementing a loop counter. If the line x += 10; is not a convenient stand-in for "do work", then it might be that the time complexity should include all iterations, even those that do no work.
The second caveat is that the condition in the if statement is cheap relative to the innermost loop. In some cases, the conditional might be expensive, so the time complexity should include the number of times the if statement is executed. Eliminating the if statement does interfere with this.
If you happen to fall into one of these caveats, you'll need a count of what was omitted. The modified code omits i^2 − i iterations of the middle loop on each of its n executions. So the omitted iterations would contribute n times n^2 − n, or O(n^3), towards the overall complexity.
Therefore, the complexity of the original code is O(n^4 + n^3), which is the same as O(n^4).
Keep in mind that the following pseudocode is similar to C++, so I will use a C++ tag.
void matrixmult (int n, const number A[][], const number B[][], number C[][])
{
    index i, j, k;
    for(i = 1; i <= n; i++)             // the i for loop will run n + 1 times
        for(j = 1; j <= n; j++){        // the j for loop will run n(n+1) times
            C[i][j] = 0;                // this will run (n-1)n times
            for(k = 1; k <= n; k++)     // the k for loop will run (n-1)(n+1) times
                C[i][j] = C[i][j] + A[i][k] * B[k][j]; // this will run n((n-1)(n+1)) times
        }
}
I was instructed by my professor to find the time-complexity function of the very last line of the code above.
I believe that the time-complexity function is T(n) = n(n-1)(n+1).
I need someone to double-check my work: did I make a mistake somewhere? Did I even get the correct time complexity here?
Any help will be appreciated.
You have three nested loops, looping n steps each, so it's n^3.
Getting more detailed: depending on the model of computation, you could instead count the number of assignments, comparisons, multiplications, and even memory accesses.
I'm trying to find runtime functions and corresponding big-O notations for two different algorithms that both find spans for each element on a stack. The X passed in is the list that the span is to be computed from, and the S passed in is the list for the span.

I think I know how to find most of what goes into the runtime functions, and once I have that, I have a good understanding of how to get to big-O notation. What I need to understand is how to figure out the while loops involved. I think they usually involve logarithms, although I can't see why here: I've been working with the worst case being that each element is larger than the previous one, so the spans keep getting bigger, and I see no connection to logs. Here is what I have so far:
void span1(My_stack<int> X, My_stack<int> &S) {      // Algorithm 1
    int j = 0;                                       // +1
    for(int i = 0; i < X.size(); ++i) {              // Find span for each index // n
        j = 1;                                       // +1
        while((j <= i) && (X.at(i-j) <= X.at(i))) {  // Check if span is larger // ???
            ++j;                                     // 1
        }
        S.at(i) = j;                                 // +1
    }
}
void span2(My_stack<int> X, My_stack<int> &S) {        // Algorithm 2
    My_stack<int> A;                                   // empty stack // +1
    for(int i = 0; i < (X.size()); ++i) {              // Find span for each index // n
        while(!A.empty() && (X.at(A.top()) <= X.at(i))) { // ???
            A.pop();                                   // 1
        }
        if(A.empty())                                  // +1
            S.at(i) = i+1;
        else
            S.at(i) = i - A.top();
        A.push(i);                                     // +1
    }
}
span1: f(n) = 1+n(1+???+1)
span2: f(n) = 1+n(???+1+1)
Assuming all stack operations are O(1):
span1: The outer loop executes n times. The inner loop executes up to i times for each value of i from 0 to n. Hence the total time is proportional to the sum of the integers from 1 to n, i.e. O(n^2).
span2: We need to think about this differently, since the scope of A is function-wide. A starts as empty, so can only be popped as many times as something is pushed onto it, i.e. the inner while loop can only be executed as many times as A.push is called, over the entirety of the function's execution time. However A.push is only called once every outer loop, i.e. n times - so the while loop can only execute n times. Hence the overall complexity is O(n).
I have the following algorithm:
for(int i = n; i > 0; i--){
    for(int j = 1; j < n; j *= 2){
        for(int k = 0; k < j; k++){
            ... // constant number C of operations
        }
    }
}
I need to calculate the algorithm's running time complexity.
I'm pretty sure the outer loop runs O(n) times, the middle loop runs O(log(n)) times, and the inner loop runs O(log(n)) times as well, but I'm not so sure about it.
The final result of the running time complexity is O(n^2), but I have no idea how.
Hope someone can give me a short explanation. Thanks!
For each i, the second loop runs j through the powers of 2 until it exceeds n: 1, 2, 4, 8, ..., 2^h, where h = int(log2(n)). So the body of the innermost loop runs 2^0 + 2^1 + ... + 2^h = 2^(h+1) - 1 times per outer iteration. And 2^(h+1) - 1 = 2^(int(log2(n))+1) - 1, which is O(n).
Now, the outer loop executes n times. This gives complexity of the whole thing O(n*n).
My Computer Science II final is tomorrow, and I need some help understanding how to find the Big-Oh for segments of code. I've searched the internet and haven't been able to find any examples of how I need to understand it.
Here's a problem from our sample final:
for(int pass = 1; i <= n; pass++)
{
    for(int index = 0; index < n; index++)
        for(int count = 1; count < n; count++)
        {
            //O(1) things here.
        }
}
We are supposed to find the order (Big-Oh) of the algorithm.
I think that it would be O(n^3), and here is how I came to that conclusion
for(int pass = 1; i <= n; pass++)  // Evaluates n times
{
    for(int index = 0; index < n; index++)  // Evaluates n * (n+1) times
        for(int count = 1; count < n; count++)  // Evaluates n * n * (n) times
        {
            //O(1) things here.
        }
}
// T(n) = (n) + (n^2 + n) + n^3
// T(n) = n^3 + n^2 + 2n
// T(n) <= c*f(x)
// n^3 + n^2 + 2n <= c * (n^3)
// O(n) = n^3
I'm just not sure if I'm doing it correctly. Can someone explain how to evaluate code like this and/or confirm my answer?
Yes, it is O(n^3). However:
for(int pass = 1; pass <= n; pass++)  // Evaluates n times
{                                     // ^^ i should be pass
    for(int index = 0; index < n; index++)  // Evaluates n times
        for(int count = 1; count < n; count++)  // Evaluates n-1 times
        {
            //O(1) things here.
        }
}
Since you have three layers of nested for loops, the innermost body will be evaluated n * n * (n-1) times. Each operation inside the innermost loop takes O(1) time, so in total you have n^3 - n^2 constant-time operations, which is O(n^3) in order of growth.
A good summary of how to measure order of growth in Big O notation can be found here:
Big O Notation MIT
Quoting part from the above file:
Nested loops
for I in 1 .. N loop
for J in 1 .. M loop
sequence of statements
end loop;
end loop;
The outer loop executes N times. Every time the outer loop executes, the inner loop
executes M times. As a result, the statements in the inner loop execute a total of N * M
times. Thus, the complexity is O(N * M).
In a common special case where the stopping condition of the inner loop is J < N instead
of J < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N^2).
Similar rationale can be applied in your case.
You are absolutely correct. It is O(n^3) for your example.
To find the Big Oh running time of any segment of code, you should think about how many times the piece of code does O(1) things.
Let me simplify your example to give a better idea of this:
for(int index = 0; index < n; index++)      // Evaluates n times
    for(int count = 1; count < n; count++)  // Evaluates n-1 times per outer iteration
    {
        //O(1) things here.
    }
In the above case, the inner loop runs n times for each run of the outer loop, and the outer loop itself runs n times. This means you're doing n things n times, making it O(n^2).
One other thing to be careful about is that Big Oh is an upper bound. This means you should always think about what happens to the code when you have a large input (in your case, a large value of n). Another implication of this fact is that multiplying or adding by constants has no effect on the Big Oh bound. For example:
for(int index = 0; index < n; index++)        // Evaluates n times
    for(int count = 1; count < 2*n; count++)  // Runs 2*n times
    {
        //O(1) things here.
    }
The Big Oh running time of this code is also O(n^2) since O(n*(2n)) = O(n^2).
Also check this out: http://ellard.org/dan/www/Q-97/HTML/root/node7.html