// Every iteration of the loop handles triplets whose
// first element is arr[i].
for (int i = 0; i < n - 2; i++) {
    // Initialize the other two elements as the corner
    // elements of the subarray arr[j+1..k]
    int j = i + 1, k = n - 1;

    // Two-pointer scan from both ends
    while (j < k) {
        // If the sum of the current triplet is greater or equal,
        // move the right corner to look for smaller values
        if (arr[i] + arr[j] + arr[k] >= sum)
            k--;
        // Else move the left corner
        else {
            // This is important. For the current i and j,
            // there are k-j possible third elements in total.
            for (int x = j + 1; x <= k; x++)
                cout << arr[i] << ", " << arr[j]
                     << ", " << arr[x] << endl;
            j++;
        }
    }
}
What is the time complexity of this algorithm? Is it O(n^2)?
This is a problem from the GeeksforGeeks website. How do you handle the loop inside the else statement?
Break the code into parts and find the runtime of each.
The outer for loop is O(n).
Every iteration of that loop runs the while loop once, so you multiply the complexities of the two loops.
To figure out the runtime of the while loop, look at the if and else blocks. If the if block ran every time, the while loop would be O(n). If the else block ran every time, the while loop would be O(n^2), because each pass through the else also prints up to k - j triplets.
Since we are considering the worst case, take the more expensive branch and ignore the if block's runtime. So the while loop's runtime is O(n^2), and it is run n times by the outer for loop.
Thus, the runtime of it all is O(n^3).
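For contrast, if the code only needed to count the qualifying triplets instead of printing them, the inner for loop would disappear and each value of i would cost only O(n), giving O(n^2) overall. Here is a minimal sketch of that counting variant (my own illustration with a hypothetical countTriplets helper, not the GeeksforGeeks code):
int countTriplets(int arr[], int n, int sum) {
    // Counts triplets in a sorted array whose sum is strictly less than `sum`.
    int count = 0;
    for (int i = 0; i < n - 2; i++) {
        int j = i + 1, k = n - 1;
        while (j < k) {
            if (arr[i] + arr[j] + arr[k] >= sum)
                k--;            // sum too large: try a smaller arr[k]
            else {
                count += k - j; // every x in (j, k] completes a triplet with i and j
                j++;
            }
        }
    }
    return count;               // each i does at most n pointer moves, so O(n^2) total
}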
Related
What is the time complexity (big O) of this function, and how do you calculate it?
I think it's O(N^3), but I am not sure.
int DAA(int n){
    int i, j, k, x = 0;
    for(i=1; i <= n; i++){
        for(j=1; j <= i*i; j++){
            if(j % i == 0){
                for(k=1; k <= j; k++){
                    x += 10;
                }
            }
        }
    }
    return x;
}
The complexity is O(n^4), but not because you blindly drop the unused iterations.
It's because when you consider all the instructions, O(n + n^3 + n^4) = O(n^4):
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++)              // O(n)
        for(int j=1; j <= i*i; j++)        // O(1^2 + 2^2 + ... + n^2) = O(n^3)
            if(j % i == 0)                 // O(n^3), same as loop j
                for(int k=1; k <= j; k++)  // O(n^4), see below
                    x += 10;               // O(n^4), same as loop k
    return x;
}
Complexity of the conditioned inner loop
Loop k only executes when j % i == 0, i.e. for j in {i, 2i, 3i, ..., i*i},
so for the iterations where the innermost loop does execute, the algorithm is effectively:
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++)              // O(n)
        for(int t=1; t <= i; t++)          // O(1+2+...+n) = O(n^2)
            for(int k=1; k <= t*i; k++)    // O(n^4)
                x += 10;
    return x;
}
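To pin down that O(n^4), count how many times x += 10 runs in this rewritten version (a quick tally of my own, not part of the original answer):
\sum_{i=1}^{n} \sum_{t=1}^{i} t \cdot i \;=\; \sum_{i=1}^{n} i \cdot \frac{i(i+1)}{2} \;=\; \frac{1}{2} \sum_{i=1}^{n} \left( i^3 + i^2 \right) \;=\; \Theta(n^4)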
Why doesn't simply dropping the unused iterations work?
Let's say the code is now:
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++)              // O(n)
        for(int j=1; j <= i*i; j++)        // O(1^2 + 2^2 + ... + n^2) = O(n^3)
            if(j == i)
                for(int k=1; k <= j; k++)
                    x += 10;               // oops! this only runs O(n^2) times
    return x;
}
// if(j == i*log(n)) would likewise make loop k O((n^2)log(n))
// or, well, if(false) :P
Although the innermost instruction only runs O(n^2) times, the program still evaluates if(j == i) (and j++, j <= i*i) O(n^3) times, which makes the whole algorithm O(n^3).
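Writing out both kinds of work for the if(j == i) variant makes the same point explicit (again my own tally, assuming each comparison and each addition costs O(1)):
\underbrace{\sum_{i=1}^{n} i^2}_{\text{comparisons}} \;+\; \underbrace{\sum_{i=1}^{n} i}_{\text{additions}} \;=\; \Theta(n^3) + \Theta(n^2) \;=\; \Theta(n^3)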
Time complexity can be easier to compute if you get rid of do-nothing iterations. The middle loop does not do anything unless j is a multiple of i. So we could force j to be a multiple of i and eliminate the if statement, which makes the code easier to analyze.
int DAA(int n){
    int x = 0;
    for(int i=1; i <= n; i++){
        for(int m=1; m <= i; m++){   // New variable to avoid the if statement
            int j = m*i;             // The values for which the inner loop executes
            for(int k=1; k <= j; k++){
                x += 10;
            }
        }
    }
    return x;
}
The outer loop iterates n times. O(n) so far.
The middle loop iterates 1 time, then 2 times, then... n times. One might recognize this setup from the O(n^2) sorting algorithms. The loop is executed n times and its number of iterations grows up to n, leading to O(n*n) = O(n^2) complexity.
The inner loop is executed on the order of n*n times (the complexity of the middle loop). The number of iterations for each execution grows up to n*n (the maximum value of j). Similar to how the middle loop multiplied its number of executions and its largest number of iterations to get its complexity, the complexity of the inner loop, and hence of the code as a whole, should come out as O(n^4), but I'll leave the precise proof as an exercise.
The above does assume that the time complexity represents the number of times that x += 10; is executed. That is, it assumes that the main work of the innermost loop overwhelms the rest of the work. This is usually what is of interest, but there are some caveats.
The first caveat is that adding 10 is not overwhelmingly more work than incrementing a loop counter. If the line x += 10; is not a convenient stand-in for "do work", then it might be that the time complexity should include all iterations, even those that do no work.
The second caveat is that the condition in the if statement is cheap relative to the innermost loop. In some cases, the conditional might be expensive, so the time complexity should include the number of times the if statement is executed. Eliminating the if statement does interfere with this.
If you happen to fall into one of these caveats, you'll need a count of what was omitted. The modified code omits i^2 - i iterations of the middle loop on each of its n executions. So the omitted iterations would contribute n times n^2 - n, or O(n^3), towards the overall complexity.
Therefore, the complexity of the original code is O(n^4 + n^3), which is the same as O(n^4).
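For anyone who wants an empirical sanity check on the O(n^4) claim (this is my own throwaway test with a hypothetical innermostCount helper, not part of either answer), counting the executions of x += 10 directly shows the count growing in proportion to n^4:
#include <iostream>

// Counts how many times the innermost statement of DAA(n) would execute.
long long innermostCount(int n) {
    long long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i * i; j++)
            if (j % i == 0)
                for (int k = 1; k <= j; k++)
                    count++;
    return count;
}

int main() {
    // The ratio count / n^4 should settle near a constant (about 1/8).
    for (int n = 25; n <= 200; n *= 2) {
        double ratio = static_cast<double>(innermostCount(n)) /
                       (static_cast<double>(n) * n * n * n);
        std::cout << "n = " << n << ", count / n^4 = " << ratio << "\n";
    }
    return 0;
}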
Say I have a for loop as:
for(int i=0, j=i+1; i<n-1, j<n; j++)
{
    //some code
    if(condition)
    {
        i++;
        j=i;
    }
}
What will be the time complexity and why?
Edited:
void printAllAPTriplets(int arr[], int n)
{
    for (int i = 1; i < n - 1; i++)
    {
        // Search other two elements of
        // AP with arr[i] as middle.
        for (int j = i - 1, k = i + 1; j >= 0 && k < n;)
        {
            // if a triplet is found
            if (arr[j] + arr[k] == 2 * arr[i])
            {
                cout << arr[j] << " " << arr[i]
                     << " " << arr[k] << endl;

                // Since elements are distinct,
                // arr[k] and arr[j] cannot form
                // any more triplets with arr[i]
                k++;
                j--;
            }
            // If middle element is more move to
            // higher side, else move lower side.
            else if (arr[j] + arr[k] < 2 * arr[i])
                k++;
            else
                j--;
        }
    }
}
What would be the time complexity of this particular function and why? (#walnut #DeducibleSteak #Acorn) This is the code for "Printing all triplets in sorted array that form AP".
O(n^2) is when you iterate through all the possible values of one variable for every value of the second one, as in:
for(int i = 0; i < n; i++){
    for (int j = 0; j < m; j++){
        //Do some action
    }
}
In your example, even though you're using two variables, it's still O(n).
Assume that increasing i by one takes one second and that assigning the new i to j takes one second too; then the complexity is O(2n). Since constant factors are insignificant when speaking about complexities, the complexity of your code is still O(n).
The loop you have written does not make sense, because you are using the comma operator and discarding the first condition, so the loop condition is equivalent to just j < n.
Even if the if condition gets triggered many times (but a constant number of times with respect to n, i.e. not growing as n grows), you can easily show that you will do at most k*n iterations, which means O(n) iterations.
If that is not true, but the condition is at least side-effect free, then you can only bound it by O(n^2), e.g. as #walnut suggests with j == n - 1 (like walking a triangular matrix).
If you allow side effects in the condition (e.g. j = 0, with an assignment rather than a comparison), then it can be an infinite loop, so there is no possible bound.
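To see where that O(n^2) bound comes from in the side-effect-free case, here is a small test program of my own that plugs walnut's suggested condition j == n - 1 into the questioner's loop and counts iterations:
#include <iostream>

int main() {
    // Whenever j reaches the end, i advances and j restarts from i,
    // so the loop body runs (n-1) + (n-2) + ... + 1 = n*(n-1)/2 times.
    int n = 100;
    long long iterations = 0;
    for (int i = 0, j = i + 1; j < n; j++) {  // the comma condition reduces to j < n
        iterations++;
        if (j == n - 1) {
            i++;
            j = i;
        }
    }
    std::cout << iterations << " iterations for n = " << n
              << " (n*(n-1)/2 = " << static_cast<long long>(n) * (n - 1) / 2 << ")\n";
    return 0;
}
With a condition that only triggers a bounded number of times, the same counter stays within a constant multiple of n, matching the O(n) case described above.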
I'm trying to find runtime functions and the corresponding big-O notations for two different algorithms that both find the span for each element on a stack. The X passed in is the list that the spans are computed from, and the S passed in is the list that receives the spans. I think I know how to find most of what goes into the runtime functions, and once I have that, I have a good understanding of how to get to big-O notation. What I need to understand is how to handle the while loops. I thought while loops usually involve logarithms, but I can't see why that would apply here: I've been assuming the worst case is that each element is larger than the previous one, so the spans keep getting bigger, and I see no connection to logs. Here is what I have so far:
void span1(My_stack<int> X, My_stack<int> &S) {                // Algorithm 1
    int j = 0;                                                 // +1
    for(int i = 0; i < X.size(); ++i) {                        // Find span for each index  // n
        j = 1;                                                 // +1
        while((j <= i) && (X.at(i-j) <= X.at(i))) {            // Check if span is larger   // ???
            ++j;                                               // 1
        }
        S.at(i) = j;                                           // +1
    }
}

void span2(My_stack<int> X, My_stack<int> &S) {                // Algorithm 2
    My_stack<int> A;                                           // empty stack  // +1
    for(int i = 0; i < (X.size()); ++i) {                      // Find span for each index  // n
        while(!A.empty() && (X.at(A.top()) <= X.at(i))) {      // ???
            A.pop();                                           // 1
        }
        if(A.empty())                                          // +1
            S.at(i) = i+1;
        else
            S.at(i) = i - A.top();
        A.push(i);                                             // +1
    }
}
span1: f(n) = 1+n(1+???+1)
span2: f(n) = 1+n(???+1+1)
Assuming all stack operations are O(1):
span1: The outer loop executes n times. The inner loop runs up to i times for each value of i from 0 to n. Hence the total time is proportional to the sum of the integers from 1 to n, i.e. O(n^2).
span2: We need to think about this differently, since the scope of A is function-wide. A starts out empty, so it can only be popped as many times as something is pushed onto it; in other words, the inner while loop can only execute as many times as A.push is called over the entirety of the function's execution. But A.push is called only once per outer iteration, i.e. n times, so the while loop body can only execute n times in total. Hence the overall complexity is O(n).
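To make the amortized argument for span2 concrete, here is an instrumented sketch of mine (using std::vector in place of My_stack, with made-up sample data) that counts pushes and pops; the pop total can never exceed the push total, which is exactly n:
#include <iostream>
#include <vector>

int main() {
    // Same logic as span2, with counters added.
    std::vector<int> X = {6, 3, 4, 5, 2, 7, 1, 8};
    std::vector<int> S(X.size());
    std::vector<int> A;                      // stack of indices
    long long pushes = 0, pops = 0;

    for (std::size_t i = 0; i < X.size(); ++i) {
        while (!A.empty() && X[A.back()] <= X[i]) {
            A.pop_back();                    // each pop removes something pushed earlier
            ++pops;
        }
        S[i] = A.empty() ? static_cast<int>(i) + 1
                         : static_cast<int>(i) - A.back();
        A.push_back(static_cast<int>(i));    // exactly one push per outer iteration
        ++pushes;
    }

    // pushes == n and pops <= pushes, so the while loop does O(n) work in total.
    std::cout << "pushes = " << pushes << ", pops = " << pops << std::endl;
    return 0;
}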
I have been working on an assignment question for days and cannot seem to get the correct output (I've tried so many things!) The question is:
Write a program that uses two nested for loops and the modulus operator (%) to detect and print the prime numbers from 1 to 10,000.
I have been testing with 1 to 10 as a small check to ensure it's working. I am getting 2, 3, 5, 7, 9 as my output, so I know something is wrong. When I increase the limit from 10 to 20 it prints 2 plus all the odd numbers. I am including my code below. Thanks!!
int main() {
    for (int i=2; i <=10; i++){
        for (int j=2; j<=i; j++){
            if (i%j==0 && j!=i) {
                break;
            }
            else {
                cout<< i <<endl;
                break;
            }
        }
    }
}
In addition to Sumit Jindal's answer, the inner for loop can also be written this way:
for(int j=2; j*j<=i; j++)
If we think about every ordered pair (x, y) that satisfies x*y = i, the maximum possible value of x is the square root of i.
The problem lies in the if-else branch. Your inner loop will run exactly once, because both branches of the if-else break out of it.
When you first enter the inner loop, the value of j is 2, so the condition tests whether i is divisible by 2. If it is, the loop breaks. Otherwise (the else branch) it prints the value of i and breaks out.
Hence the odd numbers get printed.
Break out of the inner loop, and check whether j equals i in the outer loop. You have to make j available to the outer loop.
Your print statement is within the inner loop, and it should not be - it's only a prime if you run all the way through the inner loop without finding a divisor.
As a second point, you only need to check for divisors up to the square root of i, not all the way up to i.
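Putting those two points together, a corrected sketch (my own version, still using only nested for loops and the % operator, not necessarily the exact shape the assignment expects) might look like this:
#include <iostream>
using namespace std;

int main() {
    for (int i = 2; i <= 10000; i++) {
        bool hasDivisor = false;
        // Only test divisors up to sqrt(i); j*j <= i avoids calling sqrt().
        for (int j = 2; j * j <= i; j++) {
            if (i % j == 0) {
                hasDivisor = true;
                break;
            }
        }
        // Print only after the inner loop finished without finding a divisor.
        if (!hasDivisor)
            cout << i << endl;
    }
    return 0;
}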
You are breaking out of the inner loop after its first iteration, which only checks whether the number (i.e. i) is different from j and divisible by 2, since j is 2 on the first iteration.
"I am getting 2,3,5,7,9 as my output"
This is because every odd number fails the if and gets printed by the else branch.
A minor correction to your code: add a flag. Also, you don't need to run the inner loop i times; in fact i/2 times is sufficient. This is simple mathematics, but it will save a significant number of CPU cycles (~5000 fewer iterations in your case).
#include <iostream>

int main()
{
    int n = 10;
    for(int i=2; i<=n; i++){
        bool isPrime = true;
        for(int j=2; j<=i/2; j++){
            if(i!=j && i%j==0){
                isPrime = false;
                break;
            }
        }
        if(isPrime)
            std::cout << i << " ";
    }
    return 0;
}
Another version, if you don't mind output in reverse order.
int n = 10;
for (int i = n; i > 1; --i)
{
    int factorCount = 0;
    for (int j = 2; j <= n; ++j)
    {
        if (i % j == 0)
            factorCount++;
        if (factorCount > 1)
            break;
    }
    if (factorCount == 1)
        cout << i << endl;
}
int main() {
    for (int i = 2; i <= 100; i++) {
        for (int j = 2; j <= i; j++) {
            if (j == i)            // no j in [2, i-1] divided i, so i is prime
                cout << i << endl;
            else if (i % j == 0)   // found a divisor, so i is not prime
                break;
        }
    }
    return 0;
}
My Computer Science II final is tomorrow, and I need some help understanding how to find the Big-Oh for segments of code. I've searched the internet and haven't been able to find any examples of how I need to understand it.
Here's a problem from our sample final:
for(int pass = 1; i <= n; pass++)
{
    for(int index = 0; index < n; index++)
        for(int count = 1; count < n; count++)
        {
            //O(1) things here.
        }
}
We are supposed to find the order (Big-Oh) of the algorithm.
I think that it would be O(n^3), and here is how I came to that conclusion
for(int pass = 1; i <= n; pass++)                 // Evaluates n times
{
    for(int index = 0; index < n; index++)        // Evaluates n * (n+1) times
        for(int count = 1; count < n; count++)    // Evaluates n * n * (n) times
        {
            //O(1) things here.
        }
}
// T(n) = (n) + (n^2 + n) + n^3
// T(n) = n^3 + n^2 + 2n
// T(n) <= c*f(n)
// n^3 + n^2 + 2n <= c * (n^3)
// T(n) = O(n^3)
I'm just not sure if I'm doing it correctly. Can someone explain how to evaluate code like this and/or confirm my answer?
Yes, it is O(n^3). However:
for(int pass = 1; pass <= n; pass++)              // Evaluates n times
{   // ^^ i should be pass
    for(int index = 0; index < n; index++)        // Evaluates n times
        for(int count = 1; count < n; count++)    // Evaluates n-1 times
        {
            //O(1) things here.
        }
}
Since you have three layers of nested for loops, the innermost body is evaluated n * n * (n-1) times. Each operation inside the innermost for loop takes O(1) time, so in total you have n^3 - n^2 constant-time operations, which is O(n^3) in order of growth.
A good summary of how to measure order of growth in Big O notation can be found here:
Big O Notation MIT
Quoting part from the above file:
Nested loops
for I in 1 .. N loop
    for J in 1 .. M loop
        sequence of statements
    end loop;
end loop;
The outer loop executes N times. Every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times. Thus, the complexity is O(N * M).
In a common special case where the stopping condition of the inner loop is J < N instead of J < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N^2).
Similar rationale can be applied in your case.
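As a quick illustration of the N * M rule quoted above (a throwaway snippet of mine, not from the MIT handout):
#include <iostream>

int main() {
    // The innermost statement runs exactly N * M times.
    int N = 7, M = 5;
    long long count = 0;
    for (int i = 1; i <= N; i++)
        for (int j = 1; j <= M; j++)
            count++;
    std::cout << count << " == " << static_cast<long long>(N) * M << std::endl; // 35 == 35
    return 0;
}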
You are absolutely correct. It is O(n^3) for your example.
To find the Big Oh running time of any segment of code, you should think about how many times the piece of code does O(1) things.
Let me simplify your example to give a better idea of this:
for(int index = 0; index < n; index++)       // Evaluates n times
    for(int count = 1; count < n; count++)   // Evaluates n * (n-1) times in total
    {
        //O(1) things here.
    }
In the above case, the inner loop runs n times for each run of the outer loop, and the outer loop itself runs n times. This means you're doing n things n times, making it O(n^2).
One other thing to keep in mind is that Big Oh is an upper bound. This means you should always think about what happens to the code when you have a large input (in your case, a large value of n). Another implication of this fact is that multiplying or adding constants has no effect on the Big Oh bound. For example:
for(int index = 0; index < n; index++)        // Evaluates n times
    for(int count = 1; count < 2*n; count++)  // Runs 2*n times
    {
        //O(1) things here.
    }
The Big Oh running time of this code is also O(n^2) since O(n*(2n)) = O(n^2).
Also check this out: http://ellard.org/dan/www/Q-97/HTML/root/node7.html