So I am preparing for an exam, and 25% of it covers Big-O. I'm lost on how to derive the complexity and Big-O of an algorithm. Below are examples with the answers; I just need an explanation of how the answers came to be and the reasoning behind why some things are done. This is the best explanation I can give because, as mentioned above, I don't know this very well:
int i = n;      // this is 1 because it is an assignment (=)
while (i > 0) { // this is log10(n)*(1 or 2), because while
    i /= 10;    // 2 because of / and =    // loops are log base (whatever is being /='d)
}               // the answer to this one is 1 + log10(n)*(1 or 2), or O(log n)
// so I know how to do this one, but I'm confused when while and for
// loops are nested in each other
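One way to convince yourself of the log rule is to count iterations directly. A minimal sketch (the test values and the counter are my own, not from the exam):
#include <iostream>

int main() {
    // Count how many times the divide-by-10 loop runs for several n.
    for (int n : {10, 100, 1000, 1000000}) {
        int i = n, iterations = 0;
        while (i > 0) {
            i /= 10;
            ++iterations;
        }
        // Prints floor(log10(n)) + 1: here 2, 3, 4, 7.
        std::cout << "n = " << n << " -> " << iterations << " iterations\n";
    }
    return 0;
}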
int i = n; int s = 0;
while (i > 0) {
    for (j = 1; j <= i; j++) s++;
    i /= 2;
}               // the answer to this one is 2n + log2(n) + 2, or O(n)
// also, the i /= 2 is outside the for loop for this one and the next one
int i = n; int s = 0;
while (i > 0) {
    for (j = 1; j <= n; ++j) s++;
    i /= 2;
}               // answer: 1 + n*log2(n), or O(n log n)
int i = n;
for (j = 1; j <= n; j++)
    while (i > 0) i /= 2;
//answer is 1+log2(n) or O(log(n))
for (j = 1; j <= n; ++j) {
    int i = n;
    while (i > 0) i /= 2;
}               // answer: O(n log n)
Number 4: the for loop counts from 1 to N, so it is at least O(n). The while loop takes O(log n) the first time through, but since i never gets reset, the while body does no work on each successive pass of the for loop (only its condition is checked). So it is basically O(n + log n), which simplifies to O(n).
Number 5: same as above, but now i does get reset each time, so you do O(log n) work n times: O(n log n).
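To see the difference between number 4 and number 5 concretely, you can instrument both patterns with counters; a small sketch (the counter names and the value of n are my own):
#include <iostream>

int main() {
    const int n = 1 << 20;  // 1,048,576

    // Number 4: i is NOT reset, so the while body only does real work
    // on the first pass; afterwards only its condition is checked.
    long count4 = 0;
    int i = n;
    for (int j = 1; j <= n; j++)
        while (i > 0) { i /= 2; ++count4; }

    // Number 5: i IS reset inside the for loop, so the log2(n) halving
    // happens on every one of the n passes.
    long count5 = 0;
    for (int j = 1; j <= n; j++) {
        int k = n;
        while (k > 0) { k /= 2; ++count5; }
    }

    // Expect 21 (log2(n)+1) vs 22,020,096 (n * (log2(n)+1)).
    std::cout << count4 << " vs " << count5 << "\n";
    return 0;
}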
I am having trouble understanding how this code is O(N). Is the inner while loop O(1)? If so, why? When is a while/for loop considered O(N), and when is it O(1)?
int minSubArrayLen(int target, vector<int>& nums)
{
    int left = 0;
    int right = 0;
    int n = nums.size();
    int sum = 0;
    int ans = INT_MAX;
    int flag = 0;
    while (right < n)
    {
        sum += nums[right];
        if (sum >= target)
        {
            while (sum >= target)
            {
                flag = 1;
                sum = sum - nums[left];
                left++;
            }
            ans = min(ans, right - left + 2);
        }
        right++;
    }
    if (flag == 0)
    {
        return 0;
    }
    return ans;
}
Both the inner and outer loop are O(n) on their own.
But consider the whole function and count the number of accesses to nums:
The outer loop does:
sum+=nums[right];
right++;
No element of nums is accessed more than once through right. So that is O(n) accesses and loop iterations.
Now the tricky one, the inner loop:
sum=sum-nums[left];
left++;
No element of nums is accessed more than once through left. So while the inner loop may run many times on individual outer iterations, summed over the whole function it is O(n).
So overall that is O(2n) = O(n) accesses to nums, and O(n) runtime for the whole function.
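You can check the O(2n) bound empirically by counting the accesses; a minimal sketch of the same algorithm with a counter bolted on (the all-ones test input is my own choice):
#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

// The same two-pointer algorithm, with a counter on every access to nums.
int minSubArrayLenCounted(int target, const std::vector<int>& nums, long& accesses) {
    int left = 0, right = 0, n = nums.size();
    int sum = 0, ans = INT_MAX;
    while (right < n) {
        sum += nums[right]; ++accesses;        // access through right
        if (sum >= target) {
            while (sum >= target) {
                sum -= nums[left]; ++accesses; // access through left
                ++left;
            }
            ans = std::min(ans, right - left + 2);
        }
        ++right;
    }
    return ans == INT_MAX ? 0 : ans;
}

int main() {
    std::vector<int> nums(1000, 1);  // all-equal input keeps the inner loop busy
    long accesses = 0;
    minSubArrayLenCounted(5, nums, accesses);
    // Stays below 2n: prints 1996 accesses for n = 1000.
    std::cout << accesses << " accesses for n = " << nums.size() << "\n";
    return 0;
}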
The outer while loop goes from 0 to n, so its time complexity is O(n).
O(1):
int sum = 0;
for (int x = 0; x < 10; x++) sum += x;
Every time you run this loop, it will run exactly 10 times, so it takes constant time. So the time complexity is O(1).
O(n):
int sum = 0;
for (int x = 0; x < n; x++) sum += x;
The time complexity of this loop is O(n), because the number of iterations varies with the value of n.
Consider this scenario: the array is filled with the same value x, and target (the required sum) is also x. Then at every iteration of the outer while loop the condition sum >= target is satisfied, which invokes the inner while loop at every iteration. It is easy to see that in this case both the right and left pointers move together towards the end of the array. The two pointers therefore move n positions in total; the outer loop just checks a condition that hands control to the inner loop, and the two pointers are moved independently.
You can consider any other case, and in every case you will make the same observation: two independent pointers control the loops, and each performs O(n) operations, so the overall complexity is O(n).
O(n) or O(1) is just notation for the time complexity of an algorithm.
O(n) is linear time; that means that if we have n elements, it takes on the order of n operations to perform the task.
O(1) is constant time; that means the number of operations does not depend on n.
It is also worth mentioning that your code does not cover one edge case: when target is equal to zero.
Your code has linear complexity, because it scans all the elements of the array, so at least n operations will be performed.
Here is a little refactored code:
int minSubArrayLen(int target, const std::vector<int>& nums) {
    int left = 0, right = 0, size = nums.size();
    int total = 0, answer = INT_MAX;
    bool found = false;
    while (right < size) {
        total += nums[right];
        if (total >= target) {
            found = true;
            while (total >= target) {
                total -= nums[left];
                ++left;
            }
            answer = std::min(answer, right - left + 2);
        }
        ++right;
    }
    return found ? answer : -1;
}
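As a sanity check: for the classic LeetCode example, target = 7 and nums = {2, 3, 1, 2, 4, 3}, this returns 2 (the subarray {4, 3}). Note one interface change in the refactor: it returns -1 instead of the original's 0 when no window reaches the target.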
I was wondering if the time complexity of the following code snippet is O(n^2):
class Solution {
public:
    int numSquares(int n) {
        if (n <= 0)
            return 0;
        vector<int> dp(n + 1, INT_MAX);
        dp[0] = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j * j <= i; j++) {
                // +1 because you are adding the current `j`
                dp[i] = min(dp[i], dp[i - j * j] + 1);
            }
        }
        return dp[n];
    }
};
I am not sure, because the inner loop checks perfect squares less than i, and there are far fewer of those than i itself (so few, I thought, that they could be treated as constant). In that case the complexity would be just O(n). So, can I say that the complexity is O(n), or is it O(n^2)?
Note: The code snippet is a solution to a question from LeetCode.com which apparently has a collection of interview questions.
The outer loop is O(N).
The inner loop is O(sqrt(i)).
The sum will be:
1 + sqrt(2) + ... + sqrt(N)
It's greater than O(N) but less than O(N^2).
Without going into a very accurate computation of the above sum, I would say, it's close to O(N*sqrt(N)).
Update
From http://ramanujan.sirinudi.org/Volumes/published/ram09.pdf, the above sum is:
C1 + (2/3)*N*sqrt(N) + (1/2)*sqrt(N) + ...
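You can sanity-check the N*sqrt(N) estimate by counting the inner-loop iterations directly; a small sketch (the loop bounds mirror the dp code, the rest is my own):
#include <cmath>
#include <iostream>

int main() {
    // Count inner-loop iterations of the dp solution for growing N.
    for (long n : {1000L, 10000L, 100000L}) {
        long count = 0;
        for (long i = 1; i <= n; i++)
            for (long j = 1; j * j <= i; j++)  // same bounds as the dp loops
                ++count;
        // Compare against the leading term of the sum: (2/3) * N * sqrt(N).
        std::cout << "N = " << n << ": " << count << " iterations, estimate "
                  << (2.0 / 3.0) * n * std::sqrt((double)n) << "\n";
    }
    return 0;
}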
I am very confused by the computation of algorithm complexity. For one assignment, we are given the following function and asked to find its complexity.
int selectkth(int a[], int k, int n) {
    int i, j, mini, tmp;
    for (i = 0; i < k; i++) {
        mini = i;
        for (j = i + 1; j < n; j++)
            if (a[j] < a[mini])
                mini = j;
        tmp = a[i];
        a[i] = a[mini];
        a[mini] = tmp;
    }
    return a[k-1];
}
The assignment itself asks to "Find the complexity of the function used to find the k-th smallest integer in an unordered array of integers."
Additionally we were asked to write our f function as well as our g function.
From what I understand, for the f function I would add up all the assignments and operations in the function. Do I include the variables k or n in this f function?
As a best guess, I would say that f(n) = 6n + 4n^2, as there are 6 operations looped in the first for loop, followed by 4 operations in the nested for loop.
For further understanding, would the Big O complexity of this function be O(n^2)? I say that because there is a nested for loop, and that would mean a worst case scenario of going through every item, every time.
I apologise if I'm not being clear. I am quite confused with how this works.
Here goes a simple analysis:
The outer loop does k iterations.
The inner loop does up to n-1 iterations, and it does that k times.
So we have O(k*(n-1)) = O(kn - k).
Since k can be equal to n (we can ask for the n-th smallest integer in an array), the expression becomes O(n*n - n) = O(n^2 - n) = O(n^2).
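A quick empirical check of this analysis: strip the loops down to a comparison counter (the function below is my own sketch, not part of the assignment):
#include <iostream>

// Count the a[j] < a[mini] comparisons selectkth performs for given k and n.
long countComparisons(int k, int n) {
    long count = 0;
    for (int i = 0; i < k; i++)
        for (int j = i + 1; j < n; j++)
            ++count;  // one comparison per inner-loop iteration
    return count;
}

int main() {
    int n = 1000;
    std::cout << countComparisons(1, n) << " comparisons for k = 1\n";  // n - 1 = 999
    std::cout << countComparisons(n, n) << " comparisons for k = n\n";  // n(n-1)/2 = 499500
    return 0;
}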
For more help with Big O notation, check out: http://web.mit.edu/16.070/www/lecture/big_o.pdf
Can someone tell me the complexity of the code below?
std::cin >> n1;
int ctr = 0;
for (int i = 2; i <= n1; i++)
{
    if (i >= n/2 && ctr == 0)
    {
        cout << " You entered a prime no";
        break;
    }
    else if (n1 % i == 0)
    {
        ctr++;
        cout << i << " ";
        n1 /= i;
    }
}
Can someone suggest how to calculate the complexity of such loops which involve multiple if-else conditions?
The inner loop is O(1). The complexity of the outer loop depends on what the code does with n, and you didn't show the code, so it could be anything.
As to a general guideline: asymptotic complexity is always with respect to a quantity. Usually, this is taken to be input size (whatever that means for the problem being solved) and denoted n. In your case, it could very well be the variable n, seeing as it's used for the loop stop condition.
Once you know the quantity (the n) with respect to which you want complexity, it's simple. Operations which don't depend on n are O(1). Operations which do O(f) amount of work for each value of n are O(n * f), where f can itself be a function of n. It gets trickier with recursion, but this is the basic overview.
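As a concrete illustration of the O(n * f) rule, here is a hypothetical nested loop of my own where the inner work f is itself a function of n:
#include <iostream>

int main() {
    int n = 1 << 10;  // 1024
    long ops = 0;
    for (int i = 0; i < n; ++i)         // the outer loop runs n times
        for (int j = n; j > 0; j /= 2)  // each pass does f = O(log n) work
            ++ops;
    // Total is O(n * log n): prints 1024 * 11 = 11264.
    std::cout << ops << "\n";
    return 0;
}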
int n;
std::cin >> n;
// O(oo), i.e. O(infinite)
while (n > 0) {
    // the for loop is O(1): it always does exactly 9 iterations
    for (int i = 1; i <= 9; i++) {
        if (n % i == 0) {
            // piece of code involving O(1) complexity
        }
    }
    // this break makes the while loop O(1)
    if (n == 10000000000000) {
        break;
    }
}
This algorithm is O(1).
The complexity of a for loop is O(n), where n is the number of iterations...
Here n = 9, but it would be wrong to conclude that a for loop in general is constant (O(1)) regardless of its number of iterations.
I have a question about the quicksort algorithm. I implemented quicksort and experimented with it.
The elements of the initial unsorted array are random numbers chosen from a certain range.
I find that the range of the random numbers affects the running time. For example, sorting 1,000,000 random numbers chosen from the range 1-2,000 takes 40 seconds, while it takes 9 seconds if the 1,000,000 numbers are chosen from the range 1-10,000.
But I do not know how to explain it. In class, we talked about how the pivot value can affect the depth of the recursion tree.
In my implementation, the last value of the array is chosen as the pivot value. I do not use a randomized scheme to select the pivot.
#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>
using namespace std;

int partition(vector<int>& vec, int p, int r) {
    int x = vec[r];            // last element as pivot (Lomuto-style)
    int i = (p - 1);
    int j = p;
    while (1) {
        if (vec[j] <= x) {
            i = (i + 1);
            int temp = vec[j];
            vec[j] = vec[i];
            vec[i] = temp;
        }
        j = j + 1;
        if (j == r)
            break;
    }
    int temp = vec[i + 1];     // move the pivot into its final position
    vec[i + 1] = vec[r];
    vec[r] = temp;
    return i + 1;
}

void quicksort(vector<int>& vec, int p, int r) {
    if (p < r) {
        int q = partition(vec, p, r);
        quicksort(vec, p, q - 1);
        quicksort(vec, q + 1, r);
    }
}

void random_generator(int num, int* array) {
    srand((unsigned)time(0));
    int random_integer;
    for (int index = 0; index < num; index++) {
        random_integer = (rand() % 10000) + 1;
        *(array + index) = random_integer;
    }
}

int main() {
    const int array_size = 1000000;
    // heap allocation: a local int[1000000] is a non-standard VLA and risks stack overflow
    vector<int> input_array(array_size);
    random_generator(array_size, input_array.data());
    vector<int> vec(input_array.begin(), input_array.end());
    clock_t t1, t2;
    t1 = clock();
    quicksort(vec, 0, array_size - 1); // call quick sort
    t2 = clock();
    float diff = ((float)t2 - (float)t1);
    cout << diff << endl;
    cout << diff / CLOCKS_PER_SEC << endl;
}
Most likely it's not performing well because quicksort doesn't handle lots of duplicates very well and may still result in swapping them (order of key-equal elements isn't guaranteed to be preserved). You'll notice that the number of duplicates per number is 100 for 10000 or 500 for 2000, while the time factor is also approximately a factor of 5.
Have you averaged the runtimes over at least 5-10 runs at each size to give it a fair shot of getting a good starting pivot?
As a comparison have you checked to see how std::sort and std::stable_sort also perform on the same data sets?
Finally for this distribution of data (unless this is a quicksort exercise) I think counting sort would be much better - 40K memory to store the counts and it runs in O(n).
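For reference, a counting-sort sketch for this value range (values assumed to lie in 1..10000, matching the question's generator):
#include <vector>

// Counting sort for values known to lie in [1, maxValue]:
// O(n + maxValue) time, O(maxValue) extra memory
// (about 40 KB of ints for maxValue = 10000).
void countingSort(std::vector<int>& vec, int maxValue) {
    std::vector<int> counts(maxValue + 1, 0);
    for (int v : vec)
        ++counts[v];                       // tally each value
    int out = 0;
    for (int v = 1; v <= maxValue; ++v)    // rewrite vec in sorted order
        for (int c = 0; c < counts[v]; ++c)
            vec[out++] = v;
}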
It probably has to do with how well sorted the input is. Quicksort is O(n log n) if the input is reasonably random. If it's in reverse order, performance can degrade to O(n^2). You're probably getting closer to the O(n^2) behavior with the smaller data range.
Late answer - the effect of duplicates depends on the partition scheme. The example code in the question is a variation of Lomuto partition scheme, which takes more time as the number of duplicates increases, due to the partitioning getting worse. In the case of all equal elements, Lomuto only reduces the size by 1 element with each level of recursion.
If instead Hoare partition scheme was used (with middle value as pivot), it generally takes less time as the number of duplicates increases. Hoare will needlessly swap values equal to the pivot, due to duplicates, but the partitioning will approach the ideal case of splitting an array in nearly equally sized parts. The swap overhead is somewhat masked by memory cache. Link to Wiki example of Hoare partition scheme:
https://en.wikipedia.org/wiki/Quicksort#Hoare_partition_scheme
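For reference, a sketch of the Hoare scheme with a middle pivot, adapted from the Wikipedia pseudocode (the names are mine):
#include <utility>
#include <vector>

// Hoare partition with the middle element as pivot.
int hoarePartition(std::vector<int>& vec, int lo, int hi) {
    int pivot = vec[lo + (hi - lo) / 2];
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { ++i; } while (vec[i] < pivot);  // scan right past smaller values
        do { --j; } while (vec[j] > pivot);  // scan left past larger values
        if (i >= j)
            return j;
        std::swap(vec[i], vec[j]);  // values equal to the pivot may be swapped too
    }
}

// Note the recursion bounds differ from Lomuto: [lo, p] and [p+1, hi].
void quicksortHoare(std::vector<int>& vec, int lo, int hi) {
    if (lo < hi) {
        int p = hoarePartition(vec, lo, hi);
        quicksortHoare(vec, lo, p);
        quicksortHoare(vec, p + 1, hi);
    }
}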