void f(int n)
{
    doOh(n);
    if (n < 1) return;
    for (int i = 0; i < 2; i++)
    {
        f(n / 2);
    }
}
The time complexity of doOh(n) is O(n).
How can we compute the time complexity of the given function?
Denoting the complexity as T(n), the code says
T(n) = T0 if n = 0
= O(n) + 2 T(n/2) otherwise
We can get an upper bound by replacing O(n) with c n for some c*, and we expand
T(n) = c n + 2 T(n/2)
= c n + 2 c n/2 + 4 T(n/4)
= c n + 2 c n/2 + 4 c n/4 + 8 T(n/8)
= ...
The expansion stops after l levels, where 2^l > n and l is the number of significant bits of n (roughly log2 n). Hence we conclude that
T(n) = O(l c n + 2^l T0) = O(l n) = O(n log n).
*Technically, we can only do that for n greater than some N. But this does not change the conclusion regarding the asymptotic behavior of T.
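A quick way to sanity-check this is to evaluate the recurrence directly. The sketch below assumes a simplified cost model in which doOh(n) costs exactly n units and the base case costs T0 = 1; these constants are assumptions, not part of the original question.

#include <cmath>
#include <iostream>

// Assumed cost model: T(0) = 1 and T(n) = n + 2*T(n/2),
// matching the recurrence above with c = 1 and T0 = 1.
long long T(int n)
{
    if (n < 1) return 1;
    return n + 2 * T(n / 2);
}

int main()
{
    for (int n : {16, 256, 4096, 65536})
        std::cout << n << ": T(n) = " << T(n)
                  << ", n*log2(n) = " << (long long)(n * std::log2(n)) << '\n';
    // T(n) grows in proportion to n*log2(n), as derived above.
}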
Complexity = O(nlog2n)
The reason is
Every time f(n) is called, doOh(n) is called; its time complexity is O(n) for the n passed to f(n), as you mentioned.
f(n) then calls f(n/2) two times, so the cost of f(n) is O(n) plus twice the cost of f(n/2).
The argument halves on every level: f(n) calls f(n/2), f(n/2) calls f(n/4) and so on, so the recursion is about log2 n levels deep.
At level k there are 2^k calls, each running doOh on an input of size n/2^k, so the doOh work on that level totals 2^k * (n/2^k) = n, i.e. O(n) per level (see the summation below).
Hence the total time complexity is O(log n) levels * O(n) per level, which is of order n log n base 2 = O(n log2 n).
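Written as an explicit sum over the levels of the recursion tree (a sketch, with c standing for the constant hidden in doOh's O(n)):

sum over k = 0 .. log2(n) of [ 2^k calls * c*(n/2^k) work per call ] = c*n*(log2(n) + 1) = O(n log2 n)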
Related
Insertion of a new element in a heap through an insert function "insert(A,n)" takes O(log n) time (where n is the number of elements in array 'A'). The insert function is given below:
void insert(int A[], int n)
{
    int temp, i = n;
    cout << "Enter the element you want to insert";
    cin >> A[n];                        // read the new element into position n
    temp = A[n];
    while (i > 0 && temp > A[(i-1)/2])  // sift up: compare with the parent at (i-1)/2
    {
        A[i] = A[(i-1)/2];              // move the smaller parent down
        i = (i-1)/2;
    }
    A[i] = temp;                        // drop the new element into its final position
}
The time complexity for the insert function is O(log n).
A function that will convert an array to a heap array is given as:
void create_heap()
{
    int A[50] = {10, 20, 30, 25, 5, 6, 7};
    // I have not taken input in array A from user for simplicity.
    int i;
    for (i = 1; i < 7; i++)
    {
        insert(A, i);
    }
}
It was given that the time complexity of this function is O(nlogn).
-> But in each call the insert function only has 'i' elements already in the heap to compare against, i.e. a single call of the loop takes O(log i) time.
-> So first it's log1, then log2, then log3 and so on till log6.
-> So for n elements of an array, the total time complexity would be
log2 + log3 + log4 +....logn
-> This will be log(2x3x4x...xn) = log(n!)
So why is the time complexity not O(log(n!)) but O(nlogn) ??
log(n!) is bounded by log(n^n), which by the log rules is n*log n.
1*2*3*4*....*n = n!
n*n*n*n*....*n = n^n
Clearly n! < n^n
Then why use O(nlogn) when O(log n!) looks like a tighter bound? Because n log n is itself bounded by a constant times log(n!), so the two bounds are asymptotically the same. Surprising, is it not?
log(1*2*3*4*....*n) = log(1) + log(2) + ... + log(n)
Let's throw away the first half of the terms:
log(1*2*3*4*....*n) > log(n/2) + log((n/2) + 1) + log((n/2)+2) + ... + log(n)
> log(n/2) + log(n/2) + ... + log(n/2)
= (n/2)*log(n/2) = Omega(nlogn)
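A quick numeric check of these bounds (not part of the original answer; it simply sums log2(i) directly and compares it with the lower and upper bounds used above):

#include <cmath>
#include <iostream>

int main()
{
    // Compare log2(n!) = log2(2) + ... + log2(n) with (n/2)*log2(n/2) and n*log2(n).
    for (int n : {10, 100, 1000, 10000}) {
        double logFactorial = 0.0;
        for (int i = 2; i <= n; i++)
            logFactorial += std::log2(i);
        std::cout << "n = " << n
                  << "  log2(n!) = " << logFactorial
                  << "  (n/2)*log2(n/2) = " << (n / 2.0) * std::log2(n / 2.0)
                  << "  n*log2(n) = " << n * std::log2(n) << '\n';
    }
    // log2(n!) always lies between the two bounds, so it is Theta(n log n).
}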
I need to predict the algorithm's average-case efficiency with respect to the size of its inputs, using summation/sigma notation to arrive at the final answer. Many resources use summation to predict the worst case, but I couldn't find anyone explaining how to predict the average case, so step-by-step answers are appreciated.
The algorithm contains a nested for loop, with the basic operation inside the innermost loop:
[code redacted]
EDIT: The basic operation always executes inside the second for loop once that loop has been entered, and it contains no break or return statements. HOWEVER: the end of the first for loop has a return statement that depends on the value produced by the basic operation, so the contents of the array do affect how many times in total the basic operation is executed on each run of the algorithm.
The array passed to the algorithm has randomly generated contents
I think the predicted average case efficiency is (n^2)/2, making it n^2 order of growth/big Theta of n^2, but I don't know how to theoretically prove this using summation.
Answers are very appreciated!
TL;DR: Your code's average-case complexity is Θ(n²), provided the "basic operation" is Θ(1) and contains no return, break or goto operators.
Explanation: the average-case complexity is just the expectation of the number of operations in your code, given the size of the input.
Let's say T(A, n) is the number of operations your code performs given an array A of size n. It's easy to see that
T(A, n) = 1 + // int k = ceil(size/2.0);
n * 2 + 1 + // for (int i = 0; i < size; i++){
n * (n * 2 + 1) + // for(int j = 0; j < size; j++){
n * n * X + // //Basic operation
1 // return (some int);
where X is the number of operations in your "basic operation". As we can see, T(A, n) does not depend on the actual contents of the array A. Thus, the expected number of operations for a given array size (which is simply the arithmetic mean of T(A, n) over all possible A for that n) is exactly equal to each of them:
T(n) = T(A, n) = 3 + 3 n + n * n * (2 + X)
If we assume that X = Θ(1), this expression is Θ(n²).
Even without this assumption we can have an estimate: if X = Θ(f(n)), then your code complexity is T(n) = Θ(f(n)n²). For example, if X is Θ(log n), T(n) = Θ(n² log n)
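The code in the question is redacted, so the sketch below is only an assumed reconstruction based on the operation count above; the names algorithm and basicOperation are hypothetical stand-ins.

#include <cmath>

// Hypothetical reconstruction of the redacted algorithm, following the
// operation count above. basicOperation stands in for the real loop body.
int basicOperation(const int a[], int i, int j) { return a[i] + a[j]; }

int algorithm(const int a[], int size)
{
    int k = static_cast<int>(std::ceil(size / 2.0)); // 1 operation
    int result = 0;
    for (int i = 0; i < size; i++) {           // 2n + 1 operations
        for (int j = 0; j < size; j++) {       // n * (2n + 1) operations
            result += basicOperation(a, i, j); // n * n * X operations
        }
    }
    return result + k;                         // 1 operation
}

Whatever contents the array holds, the loops always run size * size times, which is why the average case equals the worst case here.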
What is the complexity of a loop which goes like this?
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < log(i); j++)
    {
        // do something
    }
}
As I see it, the inner loop will run log(1)+log(2)+log(3)+...+log(n) times in total, so how do I calculate its complexity?
So, you have a sum log(1) + log(2) + log(3) + ... + log(n) = log(n!). By using Stirling's approximation and the fact that ln(x) = log(x) / log(e) one can get
log(n!) = log(e) * ln(n!) = log(e) (n ln(n) - n + O(ln(n)))
which gives the same complexity O(n ln(n)) as in the other answer (with slightly better understanding of the constants involved).
Without doing this in a formal manner, such complexity calculations can be "guessed" using integrals. The integrand is the complexity of do_something, which is assumed to be O(1); integrating that O(1) cost over an interval of length log N gives log N for the inner loop, and integrating log x over the outer range up to N gives the overall complexity O(N log N). So between linear and quadratic.
Note: this assumes that "do something" is O(1) (in terms of N, it could be of a very high constant of course).
Let's start with log(1)+log(2)+log(3)+...+log(n). Roughly half of the terms of this sum are greater than or equal to log(n/2) = log(n) - log(2). Hence a lower bound on the sum is (n/2) * (log(n) - log(2)) = Omega(n log(n)). To get an upper bound, simply multiply n by the largest term, which is log(n); hence O(n log(n)).
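If you want to convince yourself numerically, this sketch simply counts the inner-loop iterations for the loop in the question and compares the count with n*ln(n):

#include <cmath>
#include <iostream>

int main()
{
    // Count how often the inner body runs and compare with n*ln(n).
    for (int n : {100, 1000, 10000, 100000}) {
        long long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < std::log(i); j++) // same bound as in the question
                ++count;
        std::cout << "n = " << n
                  << "  inner iterations = " << count
                  << "  n*ln(n) = " << (long long)(n * std::log(n)) << '\n';
    }
}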
So, I really don't get Big O notation. I have been tasked with determining the "O value" of this code segment.
for (int count = 1; count < n; count++) // Runs n times, so linear, or O(N)
{
    int count2 = 1; // Declares an integer, so constant, O(1)
    while (count2 < count) // Here's where I get confused. I recognize that it is a nested loop, but does that make it O(N^2)?
    {
        count2 = count2 * 2; // I would expect this to be constant as well, O(1)
    }
}
g(n) = O(f(n))
This means that for some constant c and some value k, g(n) < c*f(n) whenever n > k. In other words, f(n) gives an upper bound for the function g(n).
When you are asked to find Big O for some code,
1) Try to count the number of computations being performed in terms of n, thus obtaining g(n).
2) Now estimate an upper-bound function f(n) for g(n). That will be your answer.
Let's apply this procedure to your code.
Let's count the number of computations made. The declaration statement and the multiplication by 2 each take O(1) time, but they are executed repeatedly; we need to find how many times.
The outer loop executes n times, so the first statement executes n times. The number of times the inner loop executes depends on the current value of count: for a given count it runs about log(count) times, doubling count2 until it reaches count.
Now let's count the total number of computations performed:
log(1) + log(2) + log(3) +.... log(n) + n
Note that the last n is for the first statement. Simplifying the above series we get:
= log(1*2*3*...n) + n
= log(n!) + n
We have
g(n)=log(n!) + n
Let's guess the upper bound for log(n!).
Since,
1*2*3*4*...*n < n*n*n*...*n (n times)
Hence,
log(n!) < log(n^n) for n>1
which implies
log(n!) = O(nlogn).
A formal proof of this bound can be obtained via Stirling's approximation. Since nlogn increases faster than n, we therefore have:
O(nlogn + n) = O(nlogn)
Hence your final answer is O(nlogn).
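A quick empirical check of this result (a sketch that just counts how many times the doubling statement runs and compares the count with n*log2(n)):

#include <cmath>
#include <iostream>

int main()
{
    // Count executions of "count2 = count2 * 2" and compare with n*log2(n).
    for (int n : {100, 10000, 1000000}) {
        long long doublings = 0;
        for (int count = 1; count < n; count++) {
            int count2 = 1;
            while (count2 < count) {
                count2 = count2 * 2;
                ++doublings;
            }
        }
        std::cout << "n = " << n
                  << "  doublings = " << doublings
                  << "  n*log2(n) = " << (long long)(n * std::log2(n)) << '\n';
    }
    // The count stays within a small constant factor of n*log2(n).
}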
1) Although I have studied big-O notation, I couldn't understand how we calculate the time complexity of this function in terms of big-O notation. Can you explain in detail?
2) For the recursive function: why do we call it with len-2?
bool isPalindrome(char *s, int len) {
    if (len <= 1) {
        return true;
    }
    else
        return ((s[0] == s[len-1]) && isPalindrome(s+1, len-2));
}
What is the time complexity of this function in terms of Big-O notation?
T(0) = 1 // base case
T(1) = 1 // base case
T(n) = 1 + T(n-2)// general case
T(n-2)=1+T(n-4)
T(n) = 2 + T(n-4)
T(n) = 3 + T(n-6)
T(n) = k + T(n-2k); the expansion stops when n-2k = 1, i.e. k = (n-1)/2
T(n) = (n-1)/2 + T(1) = O(n)
You call the recursive function with len-2 because in each execution you remove 2 characters from the word (the first and last). Hence len-2.
T(n) = 1 + T(n-2) = 1 + 1 + T(n-4) = 1 + 1 + 1 + T(n-6) = ... = n/2 + T(1) = O(n)
A function g(n) is O(f(n)) if there exists a constant c and a number n0 so that for n>n0
g(n) < c*f(n).
The big-O notation is just an upper limit, so that function is O(n) but also O(n^2) and so on.
The function starts with a string of length n and reduces it by 2 on every recursive call until it is all reduced away.
The number of calls is therefore proportional to length/2, i.e. O(n/2) => O(n).
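To see the linear call count concretely, here is an instrumented copy of isPalindrome that counts its own recursive calls; the extra calls parameter and the sample word are additions for illustration, not part of the original code.

#include <iostream>

// Instrumented copy of isPalindrome that counts recursive calls,
// as a quick sanity check that the count grows like len/2.
bool isPalindrome(const char *s, int len, int &calls)
{
    ++calls;
    if (len <= 1)
        return true;
    return (s[0] == s[len - 1]) && isPalindrome(s + 1, len - 2, calls);
}

int main()
{
    const char *word = "abcdeffedcba";   // a 12-character palindrome
    int calls = 0;
    bool result = isPalindrome(word, 12, calls);
    std::cout << "palindrome: " << result << ", calls: " << calls << '\n';
    // Prints 7 calls for length 12: the initial call plus len/2 recursive calls.
}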