Finding the runtime function of a while loop - C++

I'm trying to find runtime functions and the corresponding big-O notations for two algorithms that both compute the span of each element on a stack. The X passed in is the list the spans are computed from, and the S passed in is the list that receives the spans. I think I know how to account for most of what goes into the runtime functions, and once I have those, I understand how to get to big-O notation. What I can't figure out is how to handle the while loops. I've heard they often involve logarithms, but I can't see why that would apply here: taking the worst case, where each element is larger than the previous one, the spans keep getting bigger, and I see no connection to logs. Here is what I have so far:
void span1(My_stack<int> X, My_stack<int> &S) {    // Algorithm 1
    int j = 0;                                     // +1
    for(int i = 0; i < X.size(); ++i) {            // Find span for each index // n
        j = 1;                                     // +1
        while((j <= i) && (X.at(i-j) <= X.at(i))) { // Check if span is larger // ???
            ++j;                                   // 1
        }
        S.at(i) = j;                               // +1
    }
}
void span2(My_stack<int> X, My_stack<int> &S) {    // Algorithm 2
    My_stack<int> A;                               // empty stack // +1
    for(int i = 0; i < X.size(); ++i) {            // Find span for each index // n
        while(!A.empty() && (X.at(A.top()) <= X.at(i))) { // ???
            A.pop();                               // 1
        }
        if(A.empty())                              // +1
            S.at(i) = i + 1;
        else
            S.at(i) = i - A.top();
        A.push(i);                                 // +1
    }
}
span1: f(n) = 1+n(1+???+1)
span2: f(n) = 1+n(???+1+1)

Assuming all stack operations are O(1):
span1: The outer loop executes n times. The inner loop runs up to i times for each value of i from 0 to n-1. Hence the total time is proportional to the sum of the integers from 1 to n, i.e. O(n^2).
span2: We need to think about this differently, since the scope of A is function-wide. A starts empty, so it can only be popped as many times as something is pushed onto it: the total number of times the inner while loop's body executes is bounded by the number of A.push calls over the entire run of the function. But A.push is called exactly once per outer-loop iteration, i.e. n times, so the while loop body can execute at most n times in total. Hence the overall complexity is O(n).

Related

Time complexity (How is this O(n))

I am having trouble understanding how this code is O(N). Is the inner while loop O(1)? If so, why? When is a while/for loop considered O(N), and when is it O(1)?
int minSubArrayLen(int target, vector<int>& nums)
{
    int left = 0;
    int right = 0;
    int n = nums.size();
    int sum = 0;
    int ans = INT_MAX;
    int flag = 0;
    while(right < n)
    {
        sum += nums[right];
        if(sum >= target)
        {
            while(sum >= target)
            {
                flag = 1;
                sum = sum - nums[left];
                left++;
            }
            ans = min(ans, right - left + 2);
        }
        right++;
    }
    if(flag == 0)
    {
        return 0;
    }
    return ans;
}
Both the inner and outer loop are O(n) on their own.
But consider the whole function and count the number of accesses to nums:
The outer loop does:
sum+=nums[right];
right++;
No element of nums is accessed more than once through right. So that is O(n) accesses and loop iterations.
Now the tricky one, the inner loop:
sum=sum-nums[left];
left++;
No element of nums is accessed more than once through left. So while the inner loop may run many times on individual outer iterations, summed over the whole function it is O(n).
So overall is O(2n) == O(n) accesses to nums and O(n) runtime for the whole function.
The outer while loop goes from 0 up to n, so its time complexity is O(n).
O(1):
int sum= 0;
for(int x=0 ; x<10 ; x++) sum+=x;
Every time you run this loop, it runs 10 times, so it takes constant time. So the time complexity is O(1).
O(n):
int sum=0;
for(int x=0; x<n; x++) sum+=x;
The time complexity of this loop is O(n) because the number of iterations varies with the value of n.
Consider the scenario
The array is filled with the same value x and the target (required sum) is also x. So at every iteration of the outer while loop the condition sum >= target is satisfied, which invokes the inner while loop at every iteration. It is easy to see that in this case both the right and left pointers move together towards the end of the array. The two pointers therefore move n positions in all; the outer loop just checks a condition that calls the inner loop, and each pointer is moved independently.
You can consider any other case, and in every case you would make the same observation: two independent pointers control the loops, each performing O(n) operations, so the overall complexity is O(n).
O(n) or O(1) is just a notation for time complexity of an algorithm.
O(n) is linear time, that means, that if we have n elements, it will take n operations to perform the task.
O(1) is constant time, that means, that amount of operations is indifferent to n.
It is also worth mentioning, that your code does not cover one edge case - when target is equal to zero.
Your code has linear complexity, because it scans all the element of the array, so at least n operations will be performed.
Here is a little refactored code:
int minSubArrayLen(int target, const std::vector<int>& nums) {
    int left = 0, right = 0, size = nums.size();
    int total = 0, answer = INT_MAX;
    bool found = false;
    while (right < size) {
        total += nums[right];
        if (total >= target) {
            found = true;
            while (total >= target) {
                total -= nums[left];
                ++left;
            }
            answer = std::min(answer, right - left + 2);
        }
        ++right;
    }
    return found ? answer : -1;
}

What is the use of the outer loop (i loop) in the bubble sort algorithm?

#include <iostream>
#include <utility>
using namespace std;

int main() {
    int arr[5];
    for (int i = 0; i < 5; i++) {
        cin >> arr[i];
    }
    for (int i = 0; i < 5; i++) {
        for (int j = 1; j < 5; j++) {
            if (arr[j] < arr[j-1]) swap(arr[j], arr[j-1]);
        }
    }
    for (int i = 0; i < 5; i++) cout << " " << arr[i] << " " << endl;
}
/* take an element; if the adjacent element is smaller, swap it with that, and repeat this process */
I tried removing the j loop and replacing j, j-1 with i, i-1, but it throws an error that I am not able to figure out. This is the bubble sort algorithm; I want to understand the purpose of the two loops.
The bubble sort algorithm consists of sinking each number until it reaches its position. To do this, the inner loop (j loop) swaps each pair of adjacent numbers. To ensure that the full list is ordered, you need to repeat this process a certain number of times.
In the worst case (reversed order), the algorithm has to repeat the swapping process n times (one pass per element in the list); the outer loop (i loop) takes care of this.
So, the inner loop (j loop) swaps pairs of adjacent elements, and the outer loop (i loop) ensures that the swapping process is repeated n times. (If the list becomes ordered before reaching n iterations of the outer loop, you can stop the algorithm early and save time.)
This bubble sort algorithm works like so: find the element with the maximum value, and place it at the end of your arr list. Then find the next element with the maximum value, and place it right before the previous value. And so on, and so forth.
Removing the second for loop (the one with j), means you'll only find the first maximum value once, and place it at the end.
Hence, the use of the second for loop is to find each next maximum value, and place them in order.

Time complexity of a loop with a nested if-else

// Every iteration of the loop counts triplets with
// the first element as arr[i].
for (int i = 0; i < n - 2; i++) {
    // Initialize the other two elements as corner
    // elements of the subarray arr[j+1..k]
    int j = i + 1, k = n - 1;

    // Use the meet-in-the-middle concept
    while (j < k) {
        // If the sum of the current triplet is more or equal,
        // move the right corner to look for smaller values
        if (arr[i] + arr[j] + arr[k] >= sum)
            k--;
        // Else move the left corner
        else {
            // This is important. For the current i and j,
            // there are k-j possible third elements.
            for (int x = j + 1; x <= k; x++)
                cout << arr[i] << ", " << arr[j]
                     << ", " << arr[x] << endl;
            j++;
        }
    }
}
What is the time complexity of this algorithm? Is it O(n*n)?
This is a problem from the GeeksforGeeks website. How do you handle the loop inside the else statement?
Break the code into parts and find the runtime of each.
The outer for loop is O(n).
For every iteration of that loop, the while loop is called, so you multiply the complexities of the two loops.
To figure out the runtime of the while loop, look at the if and else blocks. If the if branch ran every time, the while loop would have O(n) complexity. If the else branch ran every time, the while loop would have O(n^2) complexity (up to n iterations, each running an inner for loop of up to n steps).
Since we are considering the worst case, ignore the if branch's runtime. So the while loop's runtime is O(n^2), and it is executed n times by the outer for loop.
Thus, the runtime of the whole thing is O(n^3).

Running-time complexity of two nested loops: quadratic or linear?

int Solution::diffPossible(vector<int> &A, int B) {
    for (int i = 0; i < A.size(); i++) {
        for (int j = i+1; j < A.size(); j++)
            if ((A[j] - A[i]) == B)
                return 1;
    }
    return 0;
}
This is the solution to a simple question where we are supposed to write code with time complexity at most O(n). I think the time complexity of this code is O(n^2), but it still got accepted. So I am in doubt; please tell me the right answer.
Let's analyze the worst-case scenario, i.e. when the condition of the if-statement in the inner loop, (A[j]-A[i]) == B, is never fulfilled, and therefore the statement return 1 is never executed.
If we denote A.size() as n, the comparison in the inner loop is performed n-1 times for the first iteration of the outer loop, then n-2 times for the second iteration, and so on...
So, the number of the comparisons performed in the inner loop for this worst-case scenario is (by calculating the sum of the resulting arithmetic progression below):
n-1 + n-2 + ... + 1 = (n-1)n/2 = (n^2 - n)/2
(n-1 terms in total)
Therefore, the running-time complexity is quadratic, i.e., O(n^2), and not O(n).

I need help understanding how to find the Big-Oh of a code segment

My Computer Science II final is tomorrow, and I need some help understanding how to find the Big-Oh for segments of code. I've searched the internet and haven't been able to find any examples of how I need to understand it.
Here's a problem from our sample final:
for(int pass = 1; i <= n; pass++)
{
    for(int index = 0; index < n; index++)
        for(int count = 1; count < n; count++)
        {
            //O(1) things here.
        }
}
We are supposed to find the order (Big-Oh) of the algorithm.
I think that it would be O(n^3), and here is how I came to that conclusion
for(int pass = 1; i <= n; pass++)              // Evaluates n times
{
    for(int index = 0; index < n; index++)     // Evaluates n * (n+1) times
        for(int count = 1; count < n; count++) // Evaluates n * n * (n) times
        {
            //O(1) things here.
        }
}
// T(n) = (n) + (n^2 + n) + n^3
// T(n) = n^3 + n^2 + 2n
// T(n) <= c*f(x)
// n^3 + n^2 + 2n <= c * (n^3)
// O(n) = n^3
I'm just not sure if I'm doing it correctly. Can someone explain how to evaluate code like this and/or confirm my answer?
Yes, it is O(n^3). However:
for(int pass = 1; pass <= n; pass++)           // Evaluates n times
{                                              // ^^ i should be pass
    for(int index = 0; index < n; index++)     // Evaluates n times
        for(int count = 1; count < n; count++) // Evaluates n-1 times
        {
            //O(1) things here.
        }
}
Since you have three layer of nested for loops, the nested loop will be evaluated n *n * (n-1) times, each operation inside the most inner for loop takes O(1) time, so in total you have n^3 - n^2 constant operations, which is O(n^3) in order of growth.
A good summary of how to measure order of growth in Big O notation can be found here:
Big O Notation MIT
Quoting part from the above file:
Nested loops
for I in 1 .. N loop
    for J in 1 .. M loop
        sequence of statements
    end loop;
end loop;
The outer loop executes N times. Every time the outer loop executes, the inner loop
executes M times. As a result, the statements in the inner loop execute a total of N * M
times. Thus, the complexity is O(N * M).
In a common special case where the stopping condition of the inner loop is J < N instead of J < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N^2).
Similar rationale can be applied in your case.
You are absolutely correct. It is O(n^3) for your example.
To find the Big Oh running time of any segment of code, you should think about how many times the piece of code does O(1) things.
Let me simplify your example to give a better idea of this:
for(int index = 0; index < n; index++)     // Evaluates n times
    for(int count = 1; count < n; count++) // Evaluates n-1 times per outer iteration
    {
        //O(1) things here.
    }
In the above case, the inner loop runs n times for each run of the outer loop. And your outer loop also runs n times. This means you're doing n things, n number of times. Making it O(n^2).
One other thing to take care of is that Big Oh is an upper bound. This means that you should always think about what happens to the code when you have a large input (in your case, a large value of n). Another implication of this fact is that multiplying or adding by constants has no effect on the Big Oh bound. For example:
for(int index = 0; index < n; index++)       // Evaluates n times
    for(int count = 1; count < 2*n; count++) // Runs about 2n times
    {
        //O(1) things here.
    }
The Big Oh running time of this code is also O(n^2) since O(n*(2n)) = O(n^2).
Also check this out: http://ellard.org/dan/www/Q-97/HTML/root/node7.html