I am very confused by the computation of algorithm complexity. For one assignment, we are given the following function and asked to find its complexity.
int selectkth(int a[], int k, int n) {
    int i, j, mini, tmp;
    for (i = 0; i < k; i++) {
        mini = i;
        for (j = i+1; j < n; j++)
            if (a[j] < a[mini])
                mini = j;
        tmp = a[i];
        a[i] = a[mini];
        a[mini] = tmp;
    }
    return a[k-1];
}
The assignment itself asks to "Find the complexity of the function used to find the k-th smallest integer in an unordered array of integers."
Additionally, we were asked to write our f function as well as our g function.
From what I understand, for the f function, I would add all the assignments and operations in the function. Do I include the variables k or n in this f function?
As a best guess, I would say that f(n) = 6n + 4n^2, as there are 6 operations performed in the first for loop, followed by 4 operations in the nested for loop.
For further understanding, would the Big O complexity of this function be O(n^2)? I say that because there is a nested for loop, and that would mean a worst case scenario of going through every item, every time.
I apologise if I'm not being clear. I am quite confused with how this works.
Here is a simple analysis:
The outer loop does k iterations.
The inner loop does at most n-1 iterations, and it does that k times.
So we have O(k*(n-1)) = O(kn - k).
Since k can be equal to n (we can ask for the n-th smallest integer in the array), the expression becomes O(n*n - n) = O(n^2 - n) = O(n^2).
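If it helps to make the count concrete, here is a small test harness (my own sketch, not part of the assignment) that counts the inner-loop comparisons; with k == n it prints n(n-1)/2, which is exactly the quadratic growth the analysis predicts:

#include <iostream>

long comparisons = 0;   // total inner-loop comparisons performed

int selectkth(int a[], int k, int n) {
    int i, j, mini, tmp;
    for (i = 0; i < k; i++) {
        mini = i;
        for (j = i+1; j < n; j++) {
            comparisons++;              // one comparison per inner iteration
            if (a[j] < a[mini])
                mini = j;
        }
        tmp = a[i];
        a[i] = a[mini];
        a[mini] = tmp;
    }
    return a[k-1];
}

int main() {
    int a[100];
    for (int i = 0; i < 100; i++) a[i] = 100 - i;   // reverse-sorted input
    selectkth(a, 100, 100);                         // k == n, the worst case
    std::cout << comparisons << "\n";               // prints 4950 == 100*99/2
}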
For more help with Big O notation check out: http://web.mit.edu/16.070/www/lecture/big_o.pdf
Given some array of numbers, e.g. [5,11,13,26,2,5,1,9,...]
What is the time complexity of these loops? The first loop is O(n), but what is the second loop? It iterates the number of times specified at each index in the array.
for (int i = 0; i < nums.size(); i++) {
    for (int j = 0; j < nums[i]; j++) {
        // ...
    }
}
This loop has time complexity O(N*M) (using * to denote multiplication).
N is the number of items in your list; M is either the average value of your numbers or the maximum possible value (both yield the same order here, so use whichever is easier).
That arises because the number of times the loop body (the // ... part) runs is proportional to both N and M. It also assumes the body has constant complexity; if not, you need to multiply by the complexity of the body.
It is O(nums.size() + sum(nums[i])): the outer loop runs nums.size() times, and the inner body runs sum(nums[i]) times in total.
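Here is a quick way to see that the two views agree, counting how often the inner body actually runs (a sketch using the sample numbers from the question):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> nums = {5, 11, 13, 26, 2, 5, 1, 9};
    long body = 0;
    for (std::size_t i = 0; i < nums.size(); i++)
        for (int j = 0; j < nums[i]; j++)
            body++;                     // one unit of work per inner iteration
    // body == 5+11+13+26+2+5+1+9 == 72 == sum(nums[i]),
    // which is at most nums.size() * max(nums[i]) == 8 * 26.
    std::cout << body << "\n";
}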
What is the complexity of the two algorithms below, where size is the length of each dimension?
void a(int** arr, int size) {
    int k = 0;
    for (int i = 0; i < size; ++i)
    {
        for (int j = 0; j < size; ++j)
        {
            arr[i][j] += 1;
        }
    }
    print(arr, size);
}

void b(int*** arr, int size) {
    int m = 0;
    for (int i = 0; i < size; ++i)
    {
        for (int j = 0; j < size; ++j)
        {
            for (int k = 0; k < size; ++k)
            {
                arr[i][j][k] += 1;
            }
        }
    }
    print(arr, size);
}
I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
For an N-dimensional array where each dimension has size N, I am saying the complexity will be N!. Is this correct?
Time complexity:
The time complexity of the first function is O(size^2).
The time complexity of the second function is O(size^3).
For a similar function over an N-dimensional array with each dimension of size N, the time complexity would be O(N^N), since the iteration count is N * N * N ... repeated N times.
So you were correct on the first two, O(N^2) and O(N^3), if by N you meant size. The last statement, however, was incorrect: N! grows more slowly than N^N, so N! would be wrong as an upper bound. It should be O(N^N).
I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
Yes, it is N * N for the first, and N * N * N for the second
For any N-D array of N size I am saying the complexity will be N!. Is this correct?
Not exactly. The complexity will be N^N (N to the Nth power), which is higher:
N^N = N * N * .... * N
N! = N * (N - 1) * ... * 1
(To find the ratio between the two, you can use Stirling's approximation, incidentally.)
I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
For any N-D array of N size I am saying the complexity will be N!. Is this correct?
I think you skipped an important step in your analysis. You started by looking at two sample cases (2-D and 3-D). So far, so good. You analyzed the complexity in those two cases, deciding the 2-D case is O(N^2) and the 3-D is O(N^3). Also good. But then you skipped a step.
The next step should be to generalize to arbitrary dimension D. You looked at two sample cases, you see the 2 and the 3 appearing in the formulas, so it is reasonable to theorize that you can replace that with D. The theory is that for an array of dimension D, the complexity is O(N^D). Ideally you do some more work to either prove this or at least check that it holds in a case you have not looked at yet, like 4-D. Once you have confidence in this result, you are ready to move on.
It is only after getting the formula for the arbitrary-dimension case that you should specialize to the case where the dimension equals the size. This last step is easy: D == N means it is valid to replace D with N in your formula, so the complexity is O(N^N).
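If you want to check that generalization without writing a 4-D function by hand, here is a sketch (visits is my own name) that simulates D nested loops of N iterations each with recursion and counts how many times the innermost body executes:

#include <iostream>

// Counts how many times the innermost body of D nested loops,
// each running N times, would execute.
long visits(int N, int D) {
    if (D == 0)
        return 1;                        // the body itself runs once
    long total = 0;
    for (int i = 0; i < N; i++)
        total += visits(N, D - 1);       // one full (D-1)-deep sweep per iteration
    return total;
}

int main() {
    std::cout << visits(4, 2) << "\n";   // 16  == 4^2, the 2-D case
    std::cout << visits(4, 3) << "\n";   // 64  == 4^3, the 3-D case
    std::cout << visits(4, 4) << "\n";   // 256 == 4^4, the D == N case
}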
int Solution::diffPossible(vector<int> &A, int B) {
    for (int i = 0; i < A.size(); i++) {
        for (int j = i+1; j < A.size(); j++)
            if ((A[j] - A[i]) == B)
                return 1;
    }
    return 0;
}
This is the solution to a simple question where we are supposed to write code with time complexity at most O(n). I think the time complexity of this code is O(n^2), but it still got accepted, so I am in doubt. Please tell me the right answer.
Let's analyze the worst-case scenario, i.e. when the condition of the if-statement in the inner loop, (A[j]-A[i]) == B, is never fulfilled, and therefore the statement return 1 is never executed.
If we denote A.size() as n, the comparison in the inner loop is performed n-1 times for the first iteration of the outer loop, then n-2 times for the second iteration, and so on...
So, the number of the comparisons performed in the inner loop for this worst-case scenario is (by calculating the sum of the resulting arithmetic progression below):
n-1 + n-2 + ... + 1 = (n-1)n/2 = (n^2 - n)/2, where the left-hand side has n-1 terms.
Therefore, the running-time complexity is quadratic, i.e., O(n^2), and not O(n).
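For what it's worth, an O(n) average-case solution is possible with a hash set, at the cost of O(n) extra space. This is only a sketch of that idea (diffPossibleLinear is my own name, not the accepted solution):

#include <vector>
#include <unordered_set>
using namespace std;

// Same contract as the original: return 1 if A[j] - A[i] == B for some j > i.
// Single pass: when visiting x == A[j], check whether some earlier element
// equals x - B. Hash-set lookups are O(1) on average, so this is O(n).
int diffPossibleLinear(const vector<int> &A, int B) {
    unordered_set<int> seen;
    for (int x : A) {
        if (seen.count(x - B))           // an earlier A[i] == x - B exists
            return 1;
        seen.insert(x);
    }
    return 0;
}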
I'm trying to find runtime functions and corresponding big-O notations for two different algorithms that both find spans for each element on a stack. The X passed in is the list that the span is to be computed from, and the S passed in is the list for the span.

I think I know how to find most of what goes into the runtime functions, and once I know that, I have a good understanding of how to get to big-O notation. What I need to understand is how to figure out the while loops involved. I thought they usually involve logarithms, although I can't see why that would apply here: I've been working with the worst case being that each element is larger than the previous one, so the spans keep getting bigger, and I see no connection to logs.

Here is what I have so far:
void span1(My_stack<int> X, My_stack<int> &S) {            // Algorithm 1
    int j = 0;                                             // +1
    for (int i = 0; i < X.size(); ++i) {                   // Find span for each index: n
        j = 1;                                             // +1
        while ((j <= i) && (X.at(i-j) <= X.at(i))) {       // Check if span is larger: ???
            ++j;                                           // 1
        }
        S.at(i) = j;                                       // +1
    }
}

void span2(My_stack<int> X, My_stack<int> &S) {            // Algorithm 2
    My_stack<int> A;                                       // empty stack: +1
    for (int i = 0; i < X.size(); ++i) {                   // Find span for each index: n
        while (!A.empty() && (X.at(A.top()) <= X.at(i))) { // ???
            A.pop();                                       // 1
        }
        if (A.empty())                                     // +1
            S.at(i) = i + 1;
        else
            S.at(i) = i - A.top();
        A.push(i);                                         // +1
    }
}
span1: f(n) = 1+n(1+???+1)
span2: f(n) = 1+n(???+1+1)
Assuming all stack operations are O(1):
span1: The outer loop executes n times, and the inner loop runs up to i times for each value of i from 0 to n-1. Hence the total time is proportional to the sum of the integers from 1 to n, i.e. O(n^2).
span2: We need to think about this one differently, since the scope of A is function-wide. A starts out empty, so it can only be popped as many times as something has been pushed onto it; that is, over the entire execution of the function, the inner while loop can only run as many times as A.push is called. But A.push is called exactly once per outer-loop iteration, i.e. n times, so the while loop can only execute n times in total. Hence the overall complexity is O(n).
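To make that amortized argument tangible, here is span2 rewritten as a sketch using std::stack and std::vector in place of My_stack (assuming they behave alike), instrumented to count pushes and pops:

#include <iostream>
#include <stack>
#include <vector>

int main() {
    std::vector<int> X = {6, 3, 4, 5, 2};    // arbitrary sample input
    std::vector<int> S(X.size());
    std::stack<int> A;
    long pushes = 0, pops = 0;

    for (int i = 0; i < (int)X.size(); ++i) {
        while (!A.empty() && X[A.top()] <= X[i]) {
            A.pop();
            ++pops;                           // every pop undoes one earlier push
        }
        S[i] = A.empty() ? i + 1 : i - A.top();
        A.push(i);
        ++pushes;                             // exactly one push per outer iteration
    }
    // pushes == n and pops <= pushes, so the while-loop body runs
    // at most n times over the entire call: O(n) overall.
    std::cout << "pushes: " << pushes << ", pops: " << pops << "\n";
}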
My Computer Science II final is tomorrow, and I need some help understanding how to find the Big-Oh for segments of code. I've searched the internet and haven't been able to find any examples of how I need to understand it.
Here's a problem from our sample final:
for (int pass = 1; i <= n; pass++)
{
    for (int index = 0; index < n; index++)
        for (int count = 1; count < n; count++)
        {
            // O(1) things here.
        }
}
We are supposed to find the order (Big-Oh) of the algorithm.
I think it would be O(n^3); here is how I came to that conclusion:
for (int pass = 1; i <= n; pass++)                 // Evaluates n times
{
    for (int index = 0; index < n; index++)        // Evaluates n * (n+1) times
        for (int count = 1; count < n; count++)    // Evaluates n * n * (n) times
        {
            // O(1) things here.
        }
}

// T(n) = (n) + (n^2 + n) + n^3
// T(n) = n^3 + n^2 + 2n
// T(n) <= c*f(x)
// n^3 + n^2 + 2n <= c * (n^3)
// O(n) = n^3
I'm just not sure if I'm doing it correctly. Can someone explain how to evaluate code like this and/or confirm my answer?
Yes, it is O(n^3). However:
for (int pass = 1; pass <= n; pass++)              // Evaluates n times
{                                                  // ^^ i should be pass
    for (int index = 0; index < n; index++)        // Evaluates n times
        for (int count = 1; count < n; count++)    // Evaluates n-1 times
        {
            // O(1) things here.
        }
}
Since you have three layers of nested for loops, the innermost body is evaluated n * n * (n-1) times. Each operation inside the innermost loop takes O(1) time, so in total you have n^3 - n^2 constant-time operations, which is O(n^3) in order of growth.
A good summary of how to measure order of growth in Big O notation can be found here:
Big O Notation MIT
Quoting part from the above file:
Nested loops
for I in 1 .. N loop
    for J in 1 .. M loop
        sequence of statements
    end loop;
end loop;
The outer loop executes N times. Every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times. Thus, the complexity is O(N * M).
In a common special case where the stopping condition of the inner loop is J < N instead of J < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N^2).
Similar rationale can be applied in your case.
You are absolutely correct. It is O(n^3) for your example.
To find the Big Oh running time of any segment of code, you should think about how many times the piece of code does O(1) things.
Let me simplify your example to give a better idea of this:
for (int index = 0; index < n; index++)            // Evaluates n times
    for (int count = 1; count < n; count++)        // Evaluates n-1 times per outer iteration
    {
        // O(1) things here.
    }
In the above case, the inner loop runs n-1 times for each run of the outer loop, and the outer loop itself runs n times. This means you're doing on the order of n things, n times, making it O(n^2).
One other thing to take care of is that Big Oh is an upper bound. This means you should always think about what happens to the code when you have a large input (in your case, a large value of n). Another implication of this fact is that multiplying or adding by constants has no effect on the Big Oh bound. For example:
for (int index = 0; index < n; index++)            // Evaluates n times
    for (int count = 1; count < 2*n; count++)      // Runs about 2n times
    {
        // O(1) things here.
    }
The Big Oh running time of this code is also O(n^2) since O(n*(2n)) = O(n^2).
Also check this out: http://ellard.org/dan/www/Q-97/HTML/root/node7.html