Worst-case time complexity of a recursive dividing function? - c++

Given this algorithm:
void turtle_down(double val){
    if (val >= 1.0)
        turtle_down(val/2.0);
}
From what I know, T(n) = T(n/2) + O(1).
O(1) is the worst-case time complexity of the base case, which is the test val != 0.0 (am I getting this right?).
And then the recursive call gives a time complexity of T(n/2) since we divide n before the recursive call. Is that right?
But I don't understand how to do the math here. I don't see how we arrive at O(log n) (base 2). Anyone care to explain or show me the math?

void turtle_down(double val){
    if (val != 0.0)
        turtle_down(val/2.0);
}
In the above code, the test condition if (val != 0.0) may not give you the expected result. Consider the case val = 32: mathematically, repeated division by 2 never makes the value exactly 0, so the recursion runs far longer than intended (with an actual double it only stops once the value underflows to 0, after roughly a thousand calls).
But if you replace the test condition with, say, if (val >= 1), then the recurrence relation for the given function is T(n) = T(n/2) + O(1).
In this case the time complexity is T(n) = O(log n).
To get this result you can use the Master Theorem: here a = 1, b = 2 and f(n) = Θ(1) = Θ(n^(log_b a)), so T(n) = Θ(log n).
To understand the given complexity, consider val = 32. You can divide val repeatedly by 2 five times until it becomes 1. Notice that log2(32) = 5. From this we can see that the number of calls made to the function is about log n.
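To see the call count concretely, here is a minimal instrumented sketch (my own, not from the question) that counts how many times the function is entered; the count grows like log2 of the starting value:
#include <cmath>
#include <cstdio>
static int calls = 0;  // counts how many times the function is entered
void turtle_down(double val){
    ++calls;
    if (val >= 1.0)
        turtle_down(val/2.0);
}
int main(){
    for (double n : {32.0, 1024.0, 1000000.0}) {
        calls = 0;
        turtle_down(n);
        // e.g. n = 32 gives 7 calls; for powers of two the count is log2(n) + 2,
        // i.e. it grows logarithmically in n.
        std::printf("n = %.0f -> %d calls, log2(n) = %.1f\n", n, calls, std::log2(n));
    }
}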


Calculating Big-O Runtime

I'm in dire need of some guidance with calculating Big-O runtime for the following C++ function:
Fraction Polynomial::solve(const Fraction& x) const{
    Fraction rc;
    auto it=poly_.begin();
    while(it!=poly_.end()){
        Term t=*it;
        //find x^exp
        Fraction curr(1,1);
        for(int i=0;i<t.exponent_;i++){
            curr=curr*x;
        }
        rc+=t.coefficient_*curr;
        it++;
    }
    return rc;
}
This is still a new concept to me, so I'm having a bit of trouble getting it right. I'm assuming that there are at least two operations that happen once (auto it = poly_.begin(), and the return rc at the end), but I am not sure how to count the number of operations in the while loop. According to my professor, the correct runtime is not O(n). If anyone could offer any guidance, it would be greatly appreciated. I want to understand how to answer this question, but I couldn't find anything else like this function online, so here I am. Thank you.
I assume you want to evaluate a certain polynomial (let us say A_n*X^n + ... + A_0) at a given point (a rational value, since it is given as a Fraction).
The outer while loop iterates through all the individual terms of your polynomial. For an n-degree polynomial, that yields n + 1 iterations, so the outer loop alone takes O(n) time.
However, for every term (let us say of rank i) of the polynomial, you have to compute the value of X^i, and that is what your inner for loop does. It computes X^i by repeated multiplication, yielding linear complexity: O(i).
Since you have two nested loops, the overall complexity is obtained by multiplying their worst-case time complexities: O(n) * O(n) = O(n^2). (The first factor is the complexity of the while loop; the second is the worst-case cost of computing X^i, which is O(n) when i == n.) More precisely, the total number of multiplications is 0 + 1 + ... + n = n(n+1)/2, which is still Θ(n^2).
Assuming this is an n-th order polynomial (the highest term is raised to the power of n):
In the outer while loop, you will iterate through n+1 terms (0 to n, inclusive on both sides).
For each term, in the inner for loop, you are going to perform the multiplication m times, where m is the power of the current term. Since this is an n-th order polynomial, m ranges from 0 to n. On average, you are going to perform the multiplication n/2 times.
The overall complexity will be O((n+1) * (n/2)) = O(n^2)
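Since the asker's Polynomial, Term and Fraction classes are not shown in full, here is a stand-alone sketch (with a plain vector of (coefficient, exponent) pairs standing in for poly_, purely as an assumption) that counts the multiplications the same loop structure performs; for degree n the count comes out to n(n+1)/2, i.e. quadratic:
#include <cstdio>
#include <utility>
#include <vector>
// Hypothetical stand-in for the term list: (coefficient, exponent) pairs.
long long count_multiplications(const std::vector<std::pair<double,int>>& terms, double x){
    long long mults = 0;
    double rc = 0.0;
    for (const auto& t : terms) {            // outer loop: one pass per term, O(n)
        double curr = 1.0;
        for (int i = 0; i < t.second; i++) { // inner loop: t.second multiplications
            curr = curr * x;
            ++mults;
        }
        rc += t.first * curr;                // accumulate coefficient * x^exponent
    }
    return mults;
}
int main(){
    for (int n : {10, 100, 1000}) {
        std::vector<std::pair<double,int>> poly;
        for (int e = 0; e <= n; e++)
            poly.push_back({1.0, e});        // dense polynomial of degree n
        // Prints n(n+1)/2 multiplications, e.g. 55 for n = 10 -> O(n^2) overall.
        std::printf("degree %d: %lld multiplications\n", n, count_multiplications(poly, 2.0));
    }
}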

Time complexity in recursive function in which recursion reduces size

I have to estimate time complexity of Solve():
// These methods and list<Element> Elements belong to the Solver class
void Solver::Solve()
{
    while(List is not empty)
        Recursive();
}
void Solver::Recursive(some parameters)
{
    // O(n) in List size. When called directly from Solve() it will always return a valid
    // element; when called recursively it may or may not return an element.
    Element WhatCanISolve = WhatCanISolve(some parameters);
    if(WhatCanISolve == null)
        return;
    //We reduce the GLOBAL problem size by one.
    List.remove(Element); //This is a list, and Element is pointed to by an iterator, so O(1)
    //Some simple O(1) operations
    //Now we call the recursive function twice.
    Recursive(some other parameters 1);
    Recursive(some other parameters 2);
}
//This function performs a search with the given parameters
Element Solver::WhatCanISolve(some parameters)
{
    //Iterates through the whole List, so O(n) in List size
    //Returns the first element matching the parameters
    //Returns a single Element or null
}
My first thought was that it should be somewhere around O(n^2).
Then I thought of
T(n) = n + 2T(n-1)
which (according to WolframAlpha) expands to:
O(2^n)
However, I think the second idea is false, since n is reduced between the recursive calls.
I also did some benchmarking with large sets. Here are the results:
     N    t(N) in ms
 10000           480
 20000          1884
 30000          4500
 40000          8870
 50000         15000
 60000         27000
 70000         44000
 80000         81285
 90000        128000
100000        204380
150000        754390
Your algorithm is still O(2^n), even though it reduces the problem size by one item each time. Your difference equation
T(n) = n + 2T(n-1)
does not account for the removal of an item at each step, but since it only removes one item, the equation should be T(n) = n + 2T(n-1) - 1. Following your example and saving the algebra by using WolframAlpha to solve this gives the solution T(n) = (c1 + 4)*2^(n-1) - n - 2, which is still O(2^n). Removing one item per step is not a considerable saving given the other factors (especially the recursion).
A similar example that comes to mind is an n*n 2D matrix that you only use as a triangular matrix. Even though you drop one row for each column you process, iterating through every remaining element still has complexity O(n^2), the same as if all elements were used (i.e. a square matrix).
For further evidence, I present a plot of your own collected running time data:
Presumably the time is quadratic. If WhatCanISolve returns nullptr if and only if the list is empty, then every call
Recursive(some other parameters 2);
will finish in O(1), because it runs on an empty list. This means the correct recurrence is actually
T(n) = C*n + T(n-1)
which gives T(n) = O(n^2), and that corresponds well to what we see in the plot.
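Expanding that recurrence step by step (a quick check, not part of the original answer) makes the quadratic bound explicit:
T(n) = C*n + T(n-1)
     = C*n + C*(n-1) + T(n-2)
     = ...
     = C*(n + (n-1) + ... + 1) + T(0)
     = C*n(n+1)/2 + T(0)
     = O(n^2)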

Determining complexity for functions (Big O notation)

I had this question on my midterm and I'm not sure of my answer, which was O(n^2). I want the answer with an explanation, thank you.
int recursiveFun1(int n)
{
    for(i=0;i<n;i+=1)
        do something;
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun1(n-1);
}
First, here is your code with different indentation:
int recursiveFun1(int n)
{
    for(i=0;i<n;i+=1)   // this is bounded by O(n)
        do something;   // I assume this part is O(1)
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun1(n-1);
}
The first thing to say is that each time recursiveFun1() is called, O(n) is paid due to the for loop. Although n decreases at each call, the time is still bounded by O(n).
The second thing is to count how many times recursiveFun1() is called. Clearly (to me) it will be called exactly n + 1 times, until the parameter n reaches zero.
So the time is n + (n-1) + (n - 2) + ... + 1 + 0 which is ((n+1)n)/2 which is O(n^2).
Denote by R(n) the execution time of this recursive function for input n. Then, if n is greater than 0, it does the following:
n times do something - assuming the "something" has constant running time, it consumes c1*n time
Various checks and bookkeeping work - constant time c2
Calculating for input n-1 - once. The running time of this is R(n-1) (by definition)
So
R(n) = c1*n + c2 + R(n-1)
This equation has a solution, which is O(n^2). You can prove it by induction, or just by guessing a solution in the form a*n^2 + b*n + c.
Note: I assumed that "do something" has constant run time. This seems reasonable. However, if it's not true (e.g. it contains a recursive call), your complexity is going to be greater - maybe much greater, depending on what the "something" is doing.
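As a sanity check, here is a small sketch of my own (with "do something" replaced by a counter increment, which is an assumption) that counts how many times the loop body runs across all recursive calls; the total is n + (n-1) + ... + 1 = n(n+1)/2, i.e. quadratic:
#include <cstdio>
static long long ops = 0;          // counts executions of the "do something" step
int recursiveFun1(int n)
{
    for (int i = 0; i < n; i += 1)
        ++ops;                     // stand-in for "do something"
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun1(n - 1);
}
int main()
{
    for (int n : {10, 100, 1000}) {
        ops = 0;
        recursiveFun1(n);
        // Prints n(n+1)/2, e.g. 55 for n = 10 and 500500 for n = 1000.
        std::printf("n = %d -> %lld loop-body executions\n", n, ops);
    }
}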

Confusion with determining Big-O notation?

So, I really don't get Big O notation. I have been tasked with determining "O value" for this code segment.
for (int count = 1; count < n; count++) // Runs n times, so linear, or O(N)
{
    int count2 = 1; // Declares an integer, so constant, O(1)
    while (count2 < count) // Here's where I get confused. I recognize that it is a nested loop, but does that make it O(N^2)?
    {
        count2 = count2 * 2; // I would expect this to be constant as well, O(N)
    }
}
g(n) = O(f(n))
This means that there are constants c and k such that g(n) <= c*f(n) whenever n > k. In other words, f(n) gives an upper bound for the function g(n).
When you are asked to find Big O for some code:
1) Count the number of computations being performed in terms of n; this gives you g(n).
2) Then find an upper bound function for g(n). That will be your answer.
Let's apply this procedure to your code.
Let's count the number of computations made. The declaration and the multiplication by 2 each take O(1) time, but they are executed repeatedly, so we need to find how many times they are executed.
The outer loop executes n times, hence the first statement executes n times. The number of times the inner loop body executes depends on the current value of count: for a given count it executes about log(count) times.
Now let's count the total number of computations performed:
log(1) + log(2) + log(3) + ... + log(n) + n
Note that the trailing n is for the first statement. Simplifying the above series we get:
= log(1*2*3*...*n) + n
= log(n!) + n
We have
g(n)=log(n!) + n
Let's find an upper bound for log(n!).
Since
1*2*3*...*n < n*n*n*...*n (n times)
we have
log(n!) < log(n^n) = n*log(n) for n > 1,
which implies
log(n!) = O(n log n).
(If you want a formal, tight bound, Stirling's approximation gives log(n!) = Theta(n log n).) Since n log n increases faster than n, we therefore have:
O(n log n + n) = O(n log n)
Hence your final answer is O(n log n).
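As a quick empirical check (my own sketch, not from the answer), you can count how many times the inner statement executes and compare it with n*log2(n):
#include <cmath>
#include <cstdio>
int main()
{
    for (int n : {1000, 100000, 10000000}) {
        long long inner = 0;                 // executions of count2 = count2 * 2
        for (int count = 1; count < n; count++) {
            int count2 = 1;
            while (count2 < count) {
                count2 = count2 * 2;
                ++inner;
            }
        }
        // The measured count stays within a small constant factor of n*log2(n).
        std::printf("n = %d: inner statement runs %lld times, n*log2(n) ~ %.0f\n",
                    n, inner, n * std::log2((double)n));
    }
}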

What is the algorithmic complexity of the code below

Is the Big-O for the following code O(n) or O(log n)?
for (int i = 1; i < n; i *= 2)
    sum++;
It looks like O(n) or am I missing this completely?
It is O(log n), since i is doubled each time. Overall you need to iterate k times, until 2^k >= n, and that happens when k = log n (since 2^(log n) = n).
Simple example: Assume n = 100 - then:
iter1: i = 1
iter2: i = 2
iter3: i = 4
iter4: i = 8
iter5: i = 16
iter6: i = 32
iter7: i = 64
iter8: i = 128 > 100, so the loop exits (7 iterations in total)
It is easy to see that one extra iteration is needed only when n is doubled, which is logarithmic behavior, whereas linear behavior would add iterations for each constant increase of n.
P.S. (EDIT): Mathematically speaking, the algorithm is indeed O(n), since big-O notation gives an asymptotic upper bound and your algorithm runs asymptotically "faster" than O(n). So it is indeed O(n), but that is not a tight bound (it is not Theta(n)), and I doubt that is what you are actually looking for.
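For completeness, a tiny sketch of my own (not from the answer) that counts the iterations for a few values of n; the count tracks ceil(log2(n)):
#include <cmath>
#include <cstdio>
int main()
{
    for (int n : {100, 1024, 1000000}) {
        int sum = 0;
        for (int i = 1; i < n; i *= 2)
            sum++;                      // the loop body from the question
        // sum is the iteration count; e.g. n = 100 gives 7, and ceil(log2(100)) = 7.
        std::printf("n = %d -> %d iterations, ceil(log2(n)) = %.0f\n",
                    n, sum, std::ceil(std::log2((double)n)));
    }
}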
The complexity is O(log n) because the loop runs about log2(n) times.
O(log(n)), as you only loop ~log2(n) times
No, the complexity is not linear. Try playing through a few scenarios: how many iterations does this loop do for n = 2, n = 4, n = 16, n = 1024? How about for n = 1024 * 1024? Maybe this will help you get to the correct answer.
The for-loop condition is checked lg(n) + 1 times, and the loop body runs lg(n) times. So the complexity is O(lg n), i.e. O(log n), not O(n).
If n==8, the following is how the code will run:
i=1
i=2
i=4
i=8 --Exit condition
It is O(log(n)).
Look at the statement sum++;
It executes O(log(n)) times.