Finding running time of program - c++

I am new to algorithms and data structures. I am working from a book, and it has some questions that I am having difficulty understanding.
I am required to find the running time of the following programs (the comments are from the book):
void function(int n) {
    for (int i = 1; i <= n / 3; i++) {      // will execute n/3 times
        for (int j = 1; j <= n; j += 4) {   // will execute n/4 times
            printf("*");
        }
    }
}
Answer: O(n^2)
How is it O(n^2)? The first loop will execute n/3 times and the second one n/4 times, and n/3 * n/4 = n^2/12. How does that become O(n^2)? Please help me understand.
Question 2
void function(int n) {
    for (int i = 0; i < n; i++) {             // will execute n times
        for (int j = i; j < i * i; j += 4) {  // will execute n*n times ?????? (How?)
            if (j % i == 0) {
                for (int k = 0; k < j; k++) { // will execute j times
                    printf("*");
                }
            }
        }
    }
}
Answer: O(n^5)
The first loop executes n times. Fine.
How does the second loop execute n * n times? Here the value of j is initialized to i, so shouldn't it be (n * n) - i times? If j were initialized to 0, it would have been n * n times, right?
The third loop executes j times because k runs from 0 to j - 1; that part I follow.
Please help me understand why the 2nd loop (j) will execute n*n times. Thank you.

The book deals with big-Oh. A complete introduction to big-Oh would be too long, but in big-Oh-land it holds that:
O(a * f(n)) = O(f(n))
with a a constant and f(n) an arbitrary function.
Another rule is that:
O(a_k * n^k + a_(k-1) * n^(k-1) + ... + a_0) = O(n^k)
i.e. in a polynomial only the highest power of n survives.
That settles your first question: n/3 * n/4 = n^2/12 = (1/12) * n^2, and the constant factor 1/12 is dropped by the first rule, so the running time is O(n^2).
About the second question: the second loop runs from i up to i*i. Since i reaches n-1, in the last pass of the outer loop the second loop executes on the order of (n-1)*(n-1) times, which is O(n^2). And since j eventually reaches values of order n^2 and the third loop runs from 0 to j-1, the third (innermost) loop has a time complexity of O(n^2) as well. The total time complexity of the loops is thus:
O(n) * O(n^2) * O(n^2) = O(n^5)
(Multiplying the worst case of every loop like this gives an upper bound.)
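To make the constant-dropping concrete, here is a small counting sketch (my own addition, not from the book) that replaces the printf in the first snippet with a counter; count_ops is a hypothetical helper name. The count should track n^2/12, which big-Oh reports as O(n^2):

#include <cstdio>

// Counts the iterations of the first snippet instead of printing stars.
long long count_ops(int n) {
    long long ops = 0;
    for (int i = 1; i <= n / 3; i++)
        for (int j = 1; j <= n; j += 4)
            ops++;  // stands in for printf("*")
    return ops;
}

int main() {
    for (int n : {300, 600, 1200}) {
        // ops grows quadratically; the 1/12 is the constant big-Oh discards
        printf("n=%5d  ops=%8lld  n^2/12=%8d\n", n, count_ops(n), n * n / 12);
    }
}

Doubling n quadruples the count, which is the signature of O(n^2); the constant 1/12 never changes that shape.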

Determining the time complexity c++

Given the statements:
for (int i = 1; i < n; i = i * 2)
    for (int j = 0; j < i; j++)
how is this O(n) runtime?
I am pretty sure the first for loop runs in O(log n), but I am not sure how to interpret the second loop.
To start, I will assume that n is one more than a power of 2, so that i actually reaches n-1.
Given that, in the last iteration of the outer loop we have i = n-1, and the inner loop iterates n-1 times.
In the second-to-last iteration of the outer loop, i was half that value, so back then the inner loop had (n-1)/2 iterations.
Putting the steps of each inner loop in reverse, the total number of steps is
(n-1) + (n-1)/2 + (n-1)/4 + (n-1)/8 + ... + 1
This geometric sum equals 2*(n-1) - 1, which is in Θ(n).
(If n is not of this form, the last value of i is the largest power of 2 below n, and we stay in the same complexity class.)
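As a sanity check (my own sketch, not part of the original answer), one can count the inner-loop steps directly and compare against 2n:

#include <cstdio>

int main() {
    int n = 1025;  // one more than a power of 2, as assumed above
    long long steps = 0;
    for (int i = 1; i < n; i = i * 2)
        for (int j = 0; j < i; j++)
            steps++;  // one unit of inner-loop work
    // 1 + 2 + 4 + ... + 1024 = 2*1024 - 1 = 2047, comfortably below 2n
    printf("n=%d  steps=%lld  2n=%d\n", n, steps, 2 * n);
}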

Big O - Nested For Loop Breakdown Loop by Loop

I understand how to get a general picture of the big O of a nested loop, but how would I count the operations contributed by each loop in a nested for loop?
If we have:
for (int i = 0; i < n; i++)
{
    for (int j = i + 1; j < 1000; j++)
    {
        do something of constant time;
    }
}
How exactly would we get T(n)? The outer for loop would be n operations, the inner would be 1000(n-1), and the body would just be c. Is that right?
So is T(n) = cn(1000(n-1))?
You want to collapse the loops and do a double summation. When i = 0, the inner loop runs 1000 - 1 times; when i = 1, it runs 1000 - 2 times, and so on up to i = n - 1. This is the sum, for i from 0 to n - 1, of (999 - i). Separating the terms, you get 999n - n(n-1)/2 (valid while n <= 1000).
This is a pretty strange formula, because once n hits 1,000, the inner loop immediately short-circuits and does nothing. In that case the asymptotic time complexity is actually O(n), because for large values of n the code just skips the inner loop, spending constant time on each outer iteration.
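A quick empirical check of both points (my own sketch): the closed form 999n - n(n-1)/2 holds while n <= 1000, and beyond that the inner-loop count stops growing, leaving only the O(n) outer loop:

#include <cstdio>

// Counts how many times the constant-time body of the snippet runs.
long long t(int n) {
    long long ops = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < 1000; j++)
            ops++;
    return ops;
}

int main() {
    for (int n : {10, 100, 1000, 2000}) {
        // matches 999n - n(n-1)/2 up to n = 1000, then freezes at 499500;
        // the outer loop still costs O(n) on its own
        printf("n=%4d  ops=%7lld\n", n, t(n));
    }
}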

Runtime Analysis a single for-loop with i not incrementing by 1

double sum_skip7(double array[], int n)
// n: size of the array. Assume n is divisible by 7.
{
    double sum = 0;
    for (int i = 0; i < n; i = i + 7)
        sum = sum + array[i];
    return sum;
}
I understand that if the for-loop increments i by one, the number of times the loop statement runs is n+1 (i = 0, 1, 2, ..., n). But since i is being incremented by 7, will it still be n+1 times? Or will it be (n-7)+1 times? The second answer seems to make more sense, but I am not willing to bet on it.
No, it's n/7, because i is incremented by 7 each time.
Since i goes up by 7 on each iteration, the loop runs n/7 times. And since constant factors are ignored in run-time analysis, this still has a runtime of O(n).
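A one-minute check (my own sketch), counting the loop trips directly:

#include <cstdio>

int main() {
    int n = 700;  // divisible by 7, as the exercise assumes
    int iterations = 0;
    for (int i = 0; i < n; i = i + 7)
        iterations++;
    printf("n=%d  iterations=%d  n/7=%d\n", n, iterations, n / 7);  // 100 == 100
}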

finding big oh for prime algorithm

void print(int num)
{
    for (int i = 2; i < sqrt(num); i++)  // VS for(int i=2; i<num/2; i++)
    {
        // (strictly this should be i <= sqrt(num), or perfect squares like 9
        // are reported as prime; it does not change the asymptotics)
        if (num % i == 0)
        {
            cout << "not prime\n";
            exit(0);
        }
    }
    cout << "prime\n";
}
I know that these algorithms are slow for finding primes, but I hope to learn about big-O using these examples.
I'm assuming that the algorithm that goes from i=2 to i<sqrt(num) is the faster one.
Can someone explain the running time of both algorithms in terms of the input num using big-O notation?
As only constant-time statements are inside the if-statement, the total time complexity is determined by the for-loop.
for (int i = 2; i < sqrt(num); i++)
This means it will run sqrt(num) - 2 times (in the worst case, when num is prime), so the total complexity is O(sqrt(num)).
And you will easily see that if the for-loop changes to:
for (int i = 2; i < num / 2; i++)
it will run num/2 - 2 times, and thus the total complexity will be O(num).
If you run this, you will actually go through the loop sqrt(num) - 2 times in the worst case, i.e. for i == 2 up to i == sqrt(num), increasing by 1 each step.
Thus, in terms of the size of num, this algorithm's running time is O(sqrt(num)).
As stated in other answers, the cost of the algorithm that iterates from 2 to sqrt(n) is O(sqrt(n)), and the cost of the algorithm that iterates from 2 to n/2 is O(n). However, these bounds apply to the worst case, and the worst case happens when n is prime.
On average both algorithms finish much faster: half of all numbers are even and exit after the first test, a third are multiples of 3 and exit within two tests, a quarter are multiples of 4, and so on, so for most composite inputs the loop stops after only a handful of iterations.
First we have to specify our task. What we want is to find a function
f(N) = number_of_steps
where N is the num argument passed to the function. From this point forward we assume that every statement that doesn't depend on the size of the input takes a constant number of computational steps, C.
We add up the individual number of steps of the function:
f(N) = (steps of the for-loop) + C
Now, how many times will the for-loop be executed? sqrt(N) - 2 times, each iteration doing constant work, so:
f(N) = C * (sqrt(N) - 2) + C
f(num) = O(sqrt(num))
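To see the gap between the two loop bounds concretely, here is a small counting sketch (mine, not from any of the answers above). It uses a prime num, i.e. the worst case, so neither loop would exit early:

#include <cstdio>
#include <cmath>

int main() {
    int num = 1000003;  // a prime, i.e. the worst case: no divisor is found
    long long cheap = 0, costly = 0;
    for (int i = 2; i < std::sqrt((double)num); i++)
        cheap++;   // the i < sqrt(num) variant: ~1,000 iterations
    for (int i = 2; i < num / 2; i++)
        costly++;  // the i < num/2 variant: ~500,000 iterations
    printf("i < sqrt(num): %lld iterations\n", cheap);
    printf("i < num/2:     %lld iterations\n", costly);
}

The counts differ by a factor of roughly 500, which is exactly the O(sqrt(num)) versus O(num) gap for this input size.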

Confusion with determining Big-O notation?

So, I really don't get Big O notation. I have been tasked with determining the "O value" for this code segment.
for (int count = 1; count < n; count++)  // Runs n times, so linear, or O(n)
{
    int count2 = 1;  // Declares an integer, so constant, O(1)
    while (count2 < count)  // Here's where I get confused. I recognize that it is a nested loop, but does that make it O(n^2)?
    {
        count2 = count2 * 2;  // I would expect this statement to be constant as well, O(1)
    }
}
g(n) = O(f(n))
This means that there are constants c and k such that g(n) <= c * f(n) whenever n > k; in other words, f(n) gives an upper bound for g(n).
When you are asked to find big-O for some code:
1) Count the number of computations performed in terms of n; this gives you g(n).
2) Then find an upper-bound function for g(n). That is your answer.
Let's apply this procedure to your code.
Let's count the number of computations made. The declaration and the multiplication by 2 each take O(1) time, but they are executed repeatedly, and we need to find how many times.
The outer loop executes n times, hence the declaration executes n times. The number of times the inner loop executes depends on count: for a given value of count, it runs about log(count) times, since count2 doubles from 1 until it reaches count.
Now let's count the total number of computations performed:
log(1) + log(2) + log(3) + ... + log(n) + n
Note that the trailing n is for the declaration statement. Simplifying the series we get:
= log(1*2*3*...*n) + n
= log(n!) + n
We have
g(n)=log(n!) + n
Let's find an upper bound for log(n!). Since
1*2*3*...*n < n*n*n*...*n (n times),
we have
log(n!) < log(n^n) = n*log(n) for n > 1,
which implies
log(n!) = O(n log n).
(A formal proof, including the matching lower bound, follows from Stirling's approximation.) Since n log n grows faster than n, we therefore have:
O(n log n + n) = O(n log n)
Hence your final answer is O(n log n).
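If you want to verify the O(n log n) result empirically, here is a counting sketch (my addition) that tallies how often the doubling statement runs and compares it to n * log2(n):

#include <cstdio>
#include <cmath>

// Counts executions of count2 = count2 * 2 in the snippet above.
long long steps(int n) {
    long long total = 0;
    for (int count = 1; count < n; count++)
        for (int count2 = 1; count2 < count; count2 = count2 * 2)
            total++;
    return total;
}

int main() {
    for (int n : {1000, 2000, 4000, 8000}) {
        // the ratio steps / (n * log2(n)) settles near a constant,
        // the signature of Θ(n log n) growth
        printf("n=%5d  steps=%8lld  n*log2(n)=%9.0f\n", n, steps(n), n * std::log2(n));
    }
}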