What is the time complexity of
if (true) ?
while (true && false) ?
Simple operations such as multiplication, addition, or Boolean evaluation take O(1) time; they are considered constant-time operations.
An if-else statement is likewise O(1): the condition is evaluated once.
true && false evaluates to false, so the body of the while loop never executes in this situation; evaluating the condition once is all the work done, and the complexity is again O(1).
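As a minimal illustration (a throwaway sketch of my own, not from the question), neither construct's cost depends on any input size n:

#include <iostream>

int main() {
    // One evaluation of the condition, one branch: O(1).
    if (true)
        std::cout << "branch taken once\n";

    // true && false evaluates to false, so the body never runs;
    // the single condition check is all the work: O(1).
    while (true && false)
        std::cout << "never printed\n";
}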
Below is a program which finds the length of the longest substring without repeating characters in a given string str.
#include <string>
#include <unordered_set>
using namespace std;

int test(string str) {
    int left = 0, right = 0, ans = 0;
    unordered_set<char> set;  // characters currently inside the window [left, right)
    while (left < str.size() && right < str.size()) {
        if (set.find(str[right]) == set.end()) {
            set.insert(str[right]);  // extend the window to the right
        } else {
            // duplicate found: shrink the window from the left
            // until the earlier occurrence of str[right] is skipped
            while (str[left] != str[right]) {
                set.erase(str[left]);
                left++;
            }
            left++;
        }
        right++;
        ans = (ans > set.size() ? ans : set.size());
    }
    return ans;
}
What is the time complexity of the above solution? Is it O(n^2) or O(n), where n is the length of the string?
Please note that I have gone through multiple questions on the internet and also read about big-O, but I am still confused. To me it looks like O(n^2) due to the two nested while loops, but I want to confirm with the experts here.
It's O(n) on average.
What you see here is a sliding window technique (with variable window size, also called "two pointers technique").
Yes, there are two loops, but notice that every iteration of either loop increases one of the two pointers (either left or right).
The outer loop may or may not enter the inner loop, but it increases right on every iteration. The inner loop always increases left.
Both left and right can take at most n different values (both loops stop once right >= n, and left never passes right).
So the outer loop executes n times (all the values of right from 0 to n-1) and the inner loop executes at most n times in total (all the possible values of left), which gives a worst case of 2n = O(n) iterations.
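To make the "at most 2n" argument concrete, here is an instrumented copy of the function (the steps counter and the test_counted name are my additions, purely for illustration) that counts every iteration of either loop:

#include <cstdio>
#include <string>
#include <unordered_set>
using namespace std;

// Same logic as test(), plus a counter for every loop iteration.
int test_counted(string str, long &steps) {
    int left = 0, right = 0, ans = 0;
    unordered_set<char> set;
    while (left < (int)str.size() && right < (int)str.size()) {
        ++steps;  // outer iteration: right advances below
        if (set.find(str[right]) == set.end()) {
            set.insert(str[right]);
        } else {
            while (str[left] != str[right]) {
                ++steps;  // inner iteration: left advances
                set.erase(str[left]);
                left++;
            }
            left++;
        }
        right++;
        ans = (ans > (int)set.size() ? ans : (int)set.size());
    }
    return ans;
}

int main() {
    long steps = 0;
    string s = "abcabcbb";  // n = 8
    int ans = test_counted(s, steps);
    printf("ans=%d steps=%ld 2n=%zu\n", ans, steps, 2 * s.size());  // steps <= 2n
}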
Worst case complexity
For the sake of completeness, please note that I wrote O(n) on average. The reason is that set.find has O(1) average complexity but O(n) worst-case complexity, and the same goes for set.erase. This is because unordered_set is implemented with a hash table, and in the very unlikely case of all your items landing in the same bucket, it has to iterate over all of them.
So even though we have O(n) iterations of the loop, some individual iterations could cost O(n), which means that in some very unlikely cases the execution could go up to O(n^2). You shouldn't really worry about it: the probability of this happening is close to 0, and although I don't know exactly how char is hashed in C++, I would bet that we will never end up with all the characters in the same bucket.
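If you're curious what that degenerate case looks like, here is a small sketch (BadHash is a deliberately pathological hash of my own, for illustration only) that forces every char into a single bucket, which is exactly the situation where find and erase degrade to O(n):

#include <cstdio>
#include <unordered_set>

// Deliberately terrible hash: every key lands in bucket 0.
struct BadHash {
    size_t operator()(char) const { return 0; }
};

int main() {
    std::unordered_set<char, BadHash> s;
    for (char c = 'a'; c <= 'z'; ++c)
        s.insert(c);
    // All 26 elements share one bucket, so a lookup walks a 26-element chain.
    printf("elements=%zu, bucket of 'a' holds %zu\n",
           s.size(), s.bucket_size(s.bucket('a')));
}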
I'm really having trouble calculating big-O. I get the basics, but when it gets to nested loops my mind just blanks out. I was asked to write down the complexity of the following algorithm, which I have no clue how to do. The input string contains only the characters A, B, C and D.
string solution(string &S) {
    int length = S.length();
    int i = 0;
    while (i < length - 1) {
        // remove an adjacent "AB" or "BA" pair, then restart the scan
        if ((S[i] == 'A' && S[i+1] == 'B') || (S[i] == 'B' && S[i+1] == 'A')) {
            S.erase(i, 2);
            i = 0;
            length = S.length();
            continue;  // re-check the loop condition so we never read past the end
        }
        // remove an adjacent "CD" or "DC" pair, then restart the scan
        if ((S[i] == 'C' && S[i+1] == 'D') || (S[i] == 'D' && S[i+1] == 'C')) {
            S.erase(i, 2);
            i = 0;
            length = S.length();
            continue;
        }
        i++;
    }
    return S;
}
What would the big O of this algorithm be?
It is O(n^2).
Consider an input like:
DDDDDDDDDDDDDDDDDDDABABABABABABABABABABABAB
The first n/2 characters are D
The last n/2 characters are AB pairs
For each AB pair (there are n/4 of them), the algorithm is:
resetting i, i.e. scanning again from the start: O(n)
shifting all successive elements to fill the gap created by the erase: O(n)
Total:
O(n) * (O(n) + O(n)) = O(n^2)
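If you want to see the quadratic growth empirically, here is a sketch (solution_steps is my instrumented variant of the function above; the counter is an addition for illustration) that builds exactly this kind of input and counts loop iterations; doubling the input size roughly quadruples the count:

#include <cstdio>
#include <string>
using namespace std;

// Instrumented variant of solution(): counts while-loop iterations only
// (the element shifting inside erase is extra work on top of this).
long solution_steps(string S) {
    long steps = 0;
    int length = S.length();
    int i = 0;
    while (i < length - 1) {
        ++steps;
        if ((S[i] == 'A' && S[i+1] == 'B') || (S[i] == 'B' && S[i+1] == 'A')) {
            S.erase(i, 2);
            i = 0;
            length = S.length();
            continue;
        }
        if ((S[i] == 'C' && S[i+1] == 'D') || (S[i] == 'D' && S[i+1] == 'C')) {
            S.erase(i, 2);
            i = 0;
            length = S.length();
            continue;
        }
        i++;
    }
    return steps;
}

int main() {
    // n/2 D's followed by n/4 "AB" pairs, as in the example above.
    for (int k = 8; k <= 32; k *= 2) {
        string s(2 * k, 'D');
        for (int j = 0; j < k; ++j)
            s += "AB";
        printf("n=%zu steps=%ld\n", s.size(), solution_steps(s));
    }
}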
It's easy to get hung up on the precise details of how efficient an algorithm is. Fundamentally, though, all you're concerned about is whether the operation is:
Constant time
Proportional to the number of elements
Proportional to the square of the number of elements
etc...
Look at this for guidance on how to estimate the Big-O for a compound operation:
https://hackernoon.com/big-o-for-beginners-622a64760e2
Big-O essentially describes the worst-case complexity of a method, with particular regard to the behaviour observed as n becomes very large. On the face of it you consider how many times you repeat an operation, but you also need to consider whether any methods you invoke (e.g. string erase, string length) have complexity that's constant time, proportional to the number of elements, proportional to the number of elements squared, and so on.
So if your outer loop performs n scans but also invokes methods which themselves perform n scans over up to every item, then you end up with O(n^2).
The main concern is the highest power of n; you could have a very time-consuming linear-complexity operation alongside a very fast, say, power-of-4 one. In such a case it's considered to be O(n^4) (as opposed to O(20000n + n^4)), because as n tends to infinity all the lower-order terms become insignificant. See here: https://en.wikipedia.org/wiki/Big_O_notation#Properties
So in your case, you have the following loops:
The repetition of the scan (setting i=0), whose frequency is proportional to the number of matches; worst case n, for argument's sake (even if it's only a fraction of n, it remains significant as n grows to infinity). Although this is not syntactically the outer loop, it fundamentally governs how many times the other scans are performed.
The string scan, whose frequency is proportional to the length (n), PLUS the loop embodied in the string erase, also n in the worst case. Note these operations are performed one after the other, together governed by the frequency of the aforementioned repetition. As stated elsewhere, O(n) + O(n) reduces to O(n) because we only care about the highest power.
So in this case the complexity is O(n^2).
A separate consideration when assessing the performance of any algorithm is how cache-friendly it is; algorithms using hash maps, linked lists, etc. are considered prima facie to be more efficient, but in some cases an O(n^2) algorithm that operates within a cache line and doesn't incur page faults or cache flushes can execute a lot faster than a supposedly more efficient algorithm whose memory is scattered all over the place.
I guess this would be O(n) because there is one loop that's going through the string.
The longer the string, the more time it takes, so I would say O(n).
In big-O notation, you give the answer for the worst case. Here the worst case is that the string does not satisfy any of the if statements, and then the time complexity is O(n) because there is only one loop.
Given this algorithm:
void turtle_down(double val){
    if (val != 0.0)
        turtle_down(val/2.0);
}
From what I know, T(n) = T(n/2) + O(1).
O(1) is the worst-case time complexity of the base case test, which is val != 0.0 (am I getting this right?).
And the recursive call gives a time complexity of T(n/2), since we halve n before the recursive call. Is that right?
But I don't understand how to do the math here. I don't know how we arrive at O(log n) (base 2). Can anyone explain or show me the math?
void turtle_down(double val){
    if (val != 0.0)
        turtle_down(val/2.0);
}
In the above code, the test condition if (val != 0.0) will not give you the expected result; the function effectively never terminates. Consider the case val = 32: in exact arithmetic, repeated division by 2 never reaches 0 (and even with IEEE doubles, the value only underflows to exactly 0.0 after more than a thousand divisions).
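Out of curiosity, here is a quick experiment (my own sketch, not part of the question) showing just how long val != 0.0 takes to become false with IEEE doubles:

#include <cstdio>

int main() {
    // Count halvings until a double actually underflows to exactly 0.0.
    double val = 32.0;
    int halvings = 0;
    while (val != 0.0) {
        val /= 2.0;
        ++halvings;
    }
    printf("reached 0.0 after %d halvings\n", halvings);  // about 1080 for 32.0
}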
But if you replace the test condition with, say, if (val >= 1), then the recurrence relation for the given function is T(n) = T(n/2) + O(1).
In this case the time complexity is T(n) = O(log n).
To get this result you can use the Master Theorem.
To understand the result, consider val = 32: you can divide val by 2 five times before it drops below 1, and log2(32) = 5. From this we can see that the number of calls made to the function is about log2(n).
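If you prefer to see the math rather than invoke the Master Theorem, unroll the recurrence, assuming each call does a constant amount of work c:
T(n) = T(n/2) + c = T(n/4) + 2c = ... = T(n/2^k) + kc
The base case is reached when n/2^k = 1, i.e. after k = log2(n) halvings, so T(n) = T(1) + c*log2(n) = O(log n).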
Should the time complexity be calculated as T(n-1, m-1) and T(n-1, m), or as T(n-2) and T(n-1)?
def isDeelRijRecursief(lijst1, lijst2):
    if len(lijst1) == 1:  # comparison; extract a constant from this
        if len(lijst2) > 1:
            return False
    if len(lijst2) == 1:  # comparison
        for i in range(len(lijst1)):
            if lijst2[0] == lijst1[i]:
                return True
        return False
    else:
        if lijst1[0] == lijst2[0]:  # comparison
            return isDeelRijRecursief(lijst1[1:], lijst2[1:])  # T(n-1,m-1)? or T(n-2)?
        else:
            return isDeelRijRecursief(lijst1[1:], lijst2)  # or T(n-1,m)? or T(n-1)?
Time complexity is usually expressed in big-O notation to signify the asymptotic complexity of the evaluated function, that is, its complexity in the limit.
Let's say that the sizes of the first and second lists are N and M respectively. The recursive step of your function always creates a new sublist of size N-1, which means the recursion bottoms out after at most N steps. The termination condition of the function also performs at most N operations (the for loop). This means the time complexity of the function is O(N): the number of operations this algorithm performs is on the order of N, i.e. it is asymptotically linear in the size of its first argument.
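You can see this by unrolling the recurrence, counting one unit of work c per call (a simplification: in CPython the slice lijst1[1:] itself copies N-1 elements, which would add another factor of N if you charged for it):
T(N) = T(N-1) + c = T(N-2) + 2c = ... = T(1) + (N-1)c = O(N)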
Is the Big-O for the following code O(n) or O(log n)?
for (int i = 1; i < n; i *= 2)
    sum++;
It looks like O(n) or am I missing this completely?
It is O(log n), since i is doubled each time. Overall you iterate k times, until 2^k = n, which happens when k = log2(n) (since 2^log2(n) = n).
Simple example: Assume n = 100 - then:
iter1: i = 1
iter2: i = 2
iter3: i = 4
iter4: i = 8
iter5: i = 16
iter6: i = 32
iter7: i = 64
iter8: i = 128 > 100, so the loop exits (only 7 iterations ran)
It is easy to see that one iteration is added each time n is doubled, which is logarithmic behavior, whereas linear behavior would add iterations for each constant increase of n.
P.S. (EDIT): mathematically speaking, the algorithm is indeed O(n), since big-O notation gives an asymptotic upper bound and your algorithm runs asymptotically "faster" than O(n); so it is indeed O(n), but it is not a tight bound (it is not Theta(n)), and I doubt that is actually what you are looking for.
The complexity is O(log n) because the loop runs roughly log2(n) times.
O(log(n)), as you only loop ~log2(n) times
No, the complexity is not linear. Try to play through a few scenarios: how many iterations does this loop perform for n = 2, n = 4, n = 16, n = 1024? How about n = 1024 * 1024? Maybe this will help you get the correct answer.
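Here is a throwaway sketch of that experiment (my own, not from the question) counting the iterations for those values of n:

#include <cstdio>

int main() {
    int ns[] = {2, 4, 16, 1024, 1024 * 1024};
    for (int n : ns) {
        int iters = 0;
        for (int i = 1; i < n; i *= 2)
            ++iters;
        printf("n=%d iters=%d\n", n, iters);  // 1, 2, 4, 10, 20
    }
}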
The loop condition is checked lg(n) + 1 times and the loop body runs lg(n) times. So the complexity is O(lg n), which is the same as O(log n), since changing the base of the logarithm only changes a constant factor.
If n==8, the following is how the code will run:
i=1
i=2
i=4
i=8 --Exit condition
It is O(log(n)).
Look at the line sum++;
It executes O(log(n)) times.