Time complexity of a function in terms of Big-O notation? - C++

1) Although I have studied Big-O notation, I couldn't understand how we calculate the time complexity of this function in terms of Big-O notation. Can you explain in detail?
2) For the recursive function: why do we call it with len-2?
bool isPalindrome(char *s, int len) {
    if (len <= 1) {
        return true;
    }
    else
        return ((s[0] == s[len-1]) && isPalindrome(s+1, len-2));
}
What is the time complexity of this function in terms of Big-O notation?
T(0) = 1 // base case
T(1) = 1 // base case
T(n) = 1 + T(n-2)// general case
T(n-2)=1+T(n-4)
T(n) = 2 + T(n-4)
T(n) = 3 + T(n-6)
T(n) = k + T(n-2k) ... stop when n-2k = 1, i.e. k = (n-1)/2
T(n) = (n-1)/2 + T(1) = O(n)

You call the recursive function with len-2 because each call removes 2 characters from the word (the first and the last). Hence len-2.
T(n) = 1 + T(n-2) = 1 + 1 + T(n-4) = 1 + 1 + 1 + T(n-6) = ... = n/2 + T(1) = O(n)
A function g(n) is O(f(n)) if there exist a constant c and a number n0 such that, for all n > n0,
g(n) < c*f(n).
Big-O notation is just an upper bound, so that function is O(n) but also O(n^2) and so on.
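As a concrete instance of that definition (my own numbers, not from the answer): take g(n) = (n-1)/2 + 1, the cost derived above. Then

$$g(n) = \frac{n+1}{2} < n \quad \text{for all } n > 1,$$

so c = 1 and n0 = 1 witness g(n) = O(n); the same c with f(n) = n^2 works just as well, which is why the function is also O(n^2).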

The function starts with a string of length n, and reduces it by 2 every time around the loop until it's all reduced away.
The number of iterations is therefore proportional to length/2, i.e. O(n/2) = O(n).
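An equivalent iterative version makes that loop explicit (a sketch of my own, not the poster's code); it performs at most len/2 character comparisons, which is where the O(n) comes from:

// Iterative equivalent: compare the outermost pair and move inward,
// so at most len/2 comparisons are made -- O(n) overall.
bool isPalindromeIter(const char *s, int len) {
    for (int i = 0, j = len - 1; i < j; ++i, --j) {
        if (s[i] != s[j])
            return false;   // mismatch: not a palindrome
    }
    return true;
}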


How can we compute the time complexity of the below function

void f(int n)
{
    doOh(n);
    if (n < 1) return;
    for (int i = 0; i < 2; i++)
    {
        f(n/2);
    }
}
Time complexity of doOh(n) is O(N).
How can we compute the time complexity of the given function.
Denoting the complexity as T(n), the code says
T(n) = T0                  if n = 0
     = O(n) + 2 T(n/2)     otherwise
We can get an upper bound by replacing O(n) with c n for some c*, and we expand
T(n) = c n + 2 T(n/2)
= c n + 2 c n/2 + 4 T(n/4)
= c n + 2 c n/2 + 4 c n/4 + 8 T(n/8)
= ...
The summation stops when n < 2^l, where l is the number of significant bits of n. Hence we conclude that
T(n) = O(l c n + 2^l T0) = O(l n), and since l = Θ(log n) (and 2^l ≤ 2n), this is O(n log n).
*Technically, we can only do that as of n > some N. But this does not change the conclusion regarding the asymptotic behavior of T.
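If you want to see that n log n bound numerically, here is a throwaway sketch (my own, with arbitrary c = 1 and T0 = 1) that evaluates the recurrence directly and compares it against n·log2 n; the ratio settles toward a constant, consistent with Θ(n log n):

#include <cmath>
#include <cstdio>

// Evaluate T(n) = c*n + 2*T(n/2) with T(0) = t0 (c and t0 are arbitrary here).
long long T(long long n, long long c = 1, long long t0 = 1) {
    if (n == 0) return t0;
    return c * n + 2 * T(n / 2, c, t0);
}

int main() {
    for (long long n = 1 << 10; n <= (1LL << 20); n <<= 5) {
        std::printf("n=%8lld  T(n)=%10lld  T(n)/(n*log2 n)=%.3f\n",
                    n, T(n), T(n) / (n * std::log2((double)n)));
    }
    return 0;
}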
Complexity = O(n log2 n)
The reason is:
Every time f(n) is called, doOh(n) is called; its time complexity is O(n) when we pass n to f(n), as you mentioned.
f(n) calls f(n/2) twice, so the cost satisfies T(n) = O(n) + 2·T(n/2).
f(n) becomes f(n/2), f(n/2) becomes f(n/4), and so on, which means the recursion is log2 n levels deep.
At each level the calls to doOh together touch about n elements (n at the top, 2 · n/2 at the next level, 4 · n/4 after that, ...), i.e. O(n) work per level, repeated log n times.
Hence the total time complexity is O(log n) * O(n) = O(n log2 n).
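Written as a per-level sum (my own expansion of that argument): at depth k there are 2^k calls, each invoking doOh on an input of size n/2^k, so the total work is

$$\sum_{k=0}^{\log_2 n} 2^k \cdot c\,\frac{n}{2^k} \;=\; c\,n\,(\log_2 n + 1) \;=\; \Theta(n \log n).$$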

How to find a recurrence relation from an algorithm

I'm trying to understand recurrence relations. I've found a way to determine the maximum element in an array of integers through recursion. Below is the function. The first time it is called, n is the size of the array.
int ArrayMax(int array[], int n) {
    if (n == 1)
        return array[0];
    int result = ArrayMax(array, n-1);
    if (array[n-1] > result)
        return array[n-1];
    else
        return result;
}
Now I want to understand the recurrence relation and how to get to big-O notation from there. I know that T(n) = aT(n/b) + f(n), but I don't see how to get what a and b should be.
a is "how many recursive calls there are", and b is "how many pieces you split the data into", intuitively. Note that the parameter inside the recursive call doesn't have to be n divided by something, in general it's any function of n that describes how the magnitude of your data has been changed.
For example binary search does one recursive call at each layer, splits the data into 2, and does constant work at each layer, so it has T(n) = T(n/2) + c. Merge sort splits the data in two each time (the split taking work proportional to n) and recurses on both subarrays - so you get T(n) = 2T(n/2) + cn.
In your example, you'd have T(n) = T(n-1) + c, as you're making one recursive call and "splitting the data" by reducing its size by 1 each time.
To get the big O notation from this, you just make substitutions or expand. With your example it's easy:
T(n) = T(n-1) + c = T(n-2) + 2c = T(n-3) + 3c = ... = T(0) + nc
If you assume T(0) = c0, some "base constant", then you get T(n) = nc + c0, which means the work done is in O(n).
The binary search example is similar, but you've got to make a substitution - try letting n = 2^m, and see where you can get with it. Finally, deriving the big O notation of eg. T(n) = T(sqrt(n)) + c is a really cool exercise.
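For reference, the binary-search substitution works out like this (assuming T(1) is a constant):

$$T(n) = T(2^m) = T(2^{m-1}) + c = T(2^{m-2}) + 2c = \cdots = T(1) + mc,$$

and since m = log2 n, that gives T(n) = O(log n).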
Edit: There are other ways to solve recurrence relations - the Master Theorem is a standard method. But the proof isn't particularly nice and the above method works for every recurrence I've ever applied it to. And... well, it's just more fun than plugging values into a formula.
In your case the recurrence relation is:
T(n) = T(n-1) + constant
And the Master theorem applies to recurrences of the form:
T(n) = aT(n/b) + f(n), where a >= 1 and b > 1
Here the Master theorem cannot be applied, because it requires
b to be greater than 1 (b > 1),
and your recurrence shrinks the problem by subtraction rather than division - in effect b = 1.

I don't understand the shell sort complexity with shell gap 8,4,2,1 [duplicate]

First, here's my Shell sort code (using Java):
public char[] shellSort(char[] chars) {
    int n = chars.length;
    int increment = n / 2;
    while (increment > 0) {
        int last = increment;
        while (last < n) {
            int current = last - increment;
            while (current >= 0) {
                if (chars[current] > chars[current + increment]) {
                    // swap
                    char tmp = chars[current];
                    chars[current] = chars[current + increment];
                    chars[current + increment] = tmp;
                    current -= increment;
                }
                else { break; }
            }
            last++;
        }
        increment /= 2;
    }
    return chars;
}
Is this a correct implementation of Shell sort (forgetting for now about the most efficient gap sequence - e.g., 1,3,7,21...)? I ask because I've heard that the best-case time complexity for Shell Sort is O(n). (See http://en.wikipedia.org/wiki/Sorting_algorithm). I can't see this level of efficiency being realized by my code. If I added heuristics to it, then yeah, but as it stands, no.
That being said, my main question now - I'm having difficulty calculating the Big-O time complexity of my Shell sort implementation. I identified the outer-most loop as O(log n), the middle loop as O(n), and the inner-most loop also as O(n), but I realize the inner two loops would not actually be O(n) - they would be much less than that - what should they be? Because obviously this algorithm runs much more efficiently than O((log n)·n^2).
Any guidance is much appreciated as I'm very lost! :P
The worst-case of your implementation is Θ(n^2) and the best-case is O(nlogn) which is reasonable for shell-sort.
The best case ∊ O(nlogn):
The best case is when the array is already sorted. That would mean the inner if statement is never true, making the inner while loop a constant-time operation. Using the bounds you've used for the other loops gives O(n log n). The best case of O(n) is reached by using a constant number of increments.
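Spelling that bound out (my own arithmetic, for the gaps n/2, n/4, ..., 1): on an already-sorted array the inner while does a single failed comparison per position, so the pass with gap g contributes about n - g comparisons, and

$$\sum_{j=1}^{\log_2 n}\left(n - \frac{n}{2^j}\right) \;=\; n\log_2 n - (n - 1) \;=\; O(n\log n).$$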
The worst case ∊ O(n^2):
Given your upper bound for each loop you get O((log n)·n^2) for the worst case. But introduce a variable g for the gap size. The number of compare/exchanges needed in the inner while is now <= n/g, so the number of compare/exchanges of the middle while is <= n^2/g. Adding the upper bounds on the number of compare/exchanges for each gap together gives n^2 + n^2/2 + n^2/4 + ... <= 2n^2 ∊ O(n^2). This matches the known worst-case complexity for the gaps you've used.
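The same bound written out as a sum over the gap sizes:

$$\sum_{g \in \{1, 2, 4, \ldots, n/2\}} \frac{n^2}{g} \;=\; n^2\left(1 + \frac{1}{2} + \frac{1}{4} + \cdots + \frac{2}{n}\right) \;\le\; 2n^2.$$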
The worst case ∊ Ω(n^2):
Consider an array where all the even-positioned elements are greater than the median. The odd and even elements are not compared until we reach the last increment of 1. The number of compare/exchanges needed for that last iteration is Ω(n^2).
Insertion Sort
If we analyse plain insertion sort first:
static void sort(int[] ary) {
    int i, j, insertVal;
    int aryLen = ary.length;
    for (i = 1; i < aryLen; i++) {
        insertVal = ary[i];
        j = i;
        /*
         * the while loop exits as soon as it finds an element to the left
         * that is less than or equal to insertVal
         */
        while (j >= 1 && ary[j - 1] > insertVal) {
            ary[j] = ary[j - 1];
            j--;
        }
        ary[j] = insertVal;
    }
}
Hence in the average case the while loop will exit in the middle,
i.e. 1/2 + 2/2 + 3/2 + 4/2 + .... + (n-1)/2 = Theta(n^2).
You can see we still end up at Theta(n^2): the division by two only halves the constant, it doesn't change the order.
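That sum evaluates to (just the arithmetic behind the Theta(n^2) claim):

$$\sum_{i=1}^{n-1}\frac{i}{2} \;=\; \frac{n(n-1)}{4} \;=\; \Theta(n^2).$$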
Shell sort is essentially insertion sort run repeatedly with gaps n/2, n/4, n/8, ..., 2, 1.
This means it takes advantage of insertion sort's best case (the early exit of the while loop): after the first few gap passes the data is roughly ordered, so each element finds a smaller element to its left very quickly, and that keeps the total execution time down.
There are about log2 n gap passes, and once the array is nearly sorted each pass costs on the order of n comparisons, which already accounts for roughly n·log n of the work.
Hence its time complexity is something close to n(log n)^2.

Writing and solving a recurrence that counts the number of multiplications in this code?

Let M(n) be the number of multiplications that the function fct does.
// precondition: n > 0
int fct(const int A[], int n) {
    if (n == 1)
        return A[0] * A[0];
    else
        return A[n-1] * fct(A, n-1) * A[n-1];
}
1) Write a recurrence relation for M(n), where n is the number of elements in the array.
2) Solve the recurrence relation to obtain M(n) in terms of n.
3) Write the resulting expression of part 2 in big-O notation.
So this was a quiz, and I have the answer key, but I'm not too sure how this was computed: M(n) = 2n - 1, which is O(n). I do not know how this was determined - can someone explain? Thanks.
Let's look at what each call does. Each call, when n > 1,
Does exactly two multiplications, then
Makes a recursive call on a problem of size n - 1
Therefore, we can write the recurrence as
M(1) = 1
M(n) = 2 + M(n-1)
If you use the iteration method, you'll notice this pattern:
M(1) = 1
M(2) = M(1) + 2 = 3
M(3) = M(2) + 2 = 5
M(4) = M(3) + 2 = 7
...
M(n) = 2n - 1
Now that you have this, you can write it asymptotically as M(n) = Θ(n).
Hope this helps!
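A quick sanity check of M(n) = 2n - 1 (a throwaway sketch with a hypothetical global counter, not part of the quiz):

#include <cstdio>

static int mulCount = 0;  // hypothetical counter, added just for this check

int fct(const int A[], int n) {
    if (n == 1) {
        ++mulCount;                 // A[0] * A[0]
        return A[0] * A[0];
    }
    mulCount += 2;                  // A[n-1] * fct(...) * A[n-1]
    return A[n-1] * fct(A, n-1) * A[n-1];
}

int main() {
    const int A[] = {1, 2, 3, 4, 5};
    for (int n = 1; n <= 5; ++n) {
        mulCount = 0;
        fct(A, n);
        std::printf("n=%d  M(n)=%d  2n-1=%d\n", n, mulCount, 2*n - 1);
    }
    return 0;
}

For n = 1..5 this prints M(n) = 1, 3, 5, 7, 9, matching 2n - 1.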

Bit Counting from 0 ~ N in O(logn)

The following is the C++ code I saw on the Interview Street website, which counts the 1 bits of every number from 0 ~ a (the input number) - we could just as well say 1 ~ a, since 0 has no 1 bits. This code's time complexity is O(log n), achieved through recursion.
I just don't understand the logic. Can anybody explain why? Thanks!
long long solve(int a)
{
    if (a == 0) return 0;
    if (a % 2 == 0) return solve(a - 1) + __builtin_popcount(a);
    return ((long long)a + 1) / 2 + 2 * solve(a / 2);
}
BTW, __builtin_popcount() is a built-in function that GNU provides for counting the bits that are 1.
I'll take a stab at the O(lg n) complexity. Note that I don't quite understand what the function does, though the argument about the running time should still hold.
Given our recurrence relationship:
T(a) = 0                    if a == 0
T(a) = T(a - 1) + O(1)      if a is divisible by 2
T(a) = T(a / 2) + O(1)      otherwise
I'll use the iteration method here (ignoring the even case for the moment - it only ever adds a single extra O(1) step before the next halving):
T(a) = T(a/2) + O(1)
T(a) = T(a/2^2) + O(1) + O(1)
T(a) = T(a/2^k) + k·O(1)
// continue unrolling until we reach the base case
T(a) = 0 + O(1) + ... + O(1) + O(1) = k·O(1)
// k here corresponds to lg a, since we kept dividing the problem in half
T(a) = (lg a)·O(1)
T(a) = O(lg a)
Q.E.D.
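To see that bound concretely, here is a hedged sketch (the calls counter is my addition, not in the original code) that counts recursive calls and compares them against 2·log2 a - the factor two because the even branch can add one extra call per halving:

#include <cmath>
#include <cstdio>

static long long calls = 0;  // added instrumentation, not in the original

long long solve(int a) {
    ++calls;
    if (a == 0) return 0;
    if (a % 2 == 0) return solve(a - 1) + __builtin_popcount(a);
    return ((long long)a + 1) / 2 + 2 * solve(a / 2);
}

int main() {
    for (int a = 10; a <= 10000000; a *= 10) {
        calls = 0;
        solve(a);
        std::printf("a=%9d  calls=%3lld  2*log2(a)=%5.1f\n",
                    a, calls, 2 * std::log2((double)a));
    }
    return 0;
}

The call count stays proportional to lg a rather than growing with a, which matches the O(lg n) bound derived above. (This needs GCC or Clang for __builtin_popcount, as the question notes.)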