Big O notation of an N-dimensional array - C++

What is the complexity of the two algorithms below (size is the length of each dimension)?
void a(int** arr, int size) {
    int k = 0;
    for (int i = 0; i < size; ++i)
    {
        for (int j = 0; j < size; ++j)
        {
            arr[i][j] += 1;
        }
    }
    print(arr, size);
}
void b(int*** arr, int size) {
    int m = 0;
    for (int i = 0; i < size; ++i)
    {
        for (int j = 0; j < size; ++j)
        {
            for (int k = 0; k < size; ++k)
            {
                arr[i][j][k] += 1;
            }
        }
    }
    print(arr, size);
}
I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
For any N-D array of N size I am saying the complexity will be N!. Is this correct?

Time complexity:
The time complexity of the first function is O(size^2).
The time complexity of the second function is O(size^3).
For an N-dimensional array with each dimension of size N, a similar function would be O(N^N), since the iterations required would be N * N * N ... up to N times.
So you were correct on the first two, O(N^2) and O(N^3), if by N you meant size. The last statement, however, was incorrect: N! grows slower than N^N, so N! as the bound would be wrong. It should be O(N^N).

I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
Yes: it is N * N for the first, and N * N * N for the second.
For any N-D array of N size I am saying the complexity will be N!. Is this correct?
Not exactly. The complexity will be N^N (N to the Nth power), which is higher:
N^N = N * N * .... * N
N! = N * (N - 1) * ... * 1
(To find the ratio between the two, you can use Stirling's approximation, incidentally.)

I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
For any N-D array of N size I am saying the complexity will be N!. Is this correct?
I think you skipped an important step in your analysis. You started by looking at two sample cases (2-D and 3-D). So far, so good. You analyzed the complexity in those two cases, deciding the 2-D case is O(N^2) and the 3-D is O(N^3). Also good. But then you skipped a step.
The next step should be to generalize to arbitrary dimension D. You looked at two sample cases, you see the 2 and the 3 appearing in the formulas, so it is reasonable to theorize that you can replace that with D. The theory is that for an array of dimension D, the complexity is O(N^D). Ideally you do some more work to either prove this or at least check that it holds in a case you have not looked at yet, like 4-D. Once you have confidence in this result, you are ready to move on.
It is only after getting the formula for the arbitrary-dimension case that you should specialize to the case where the dimension equals the size. This result is rather easy, as assuming D == N means it is valid to replace D with N in your formula; the complexity is O(N^N).

Related

Time complexity of a function in C++

What will be the time complexity for this function, can someone explain?
void fun(int n) {
    int i, j, k, count = 0;
    for (i = n/2; i <= n; i++)
        for (j = 1; j <= n; j = 2*j)
            for (k = 1; k <= n; k++)
                count++;
}
I am trying to find the time complexity for the given function. I think that second loop is O(n) but some said that it is O(log(n)).
The outer loop will perform n/2 iterations. On each of its iterations
the middle loop will perform log(n) iterations, since on every step j gets multiplied by factor of 2.
On each of its iterations the inner loop will perform n steps.
So complexity is O(n/2 * log(n) * n) = O(n^2 * log(n)).
Your outer loop (i) has a complexity of O(n/2).
The middle loop (j) has a complexity of O(log(n)).
The inner loop (k) has a complexity of O(n).
As they are nested, the total complexity of the function in terms of n is
(n/2) * log(n) * n = n² * log(sqrt(n))
which, asymptotically in big-O notation, gives O(n² * log(n)).

What is the time complexity of these nested for loops?

Given some array of numbers i.e. [5,11,13,26,2,5,1,9,...]
What is the time complexity of these loops? The first loop is O(n), but what is the second loop? It iterates the number of times specified at each index in the array.
for (int i = 0; i < nums.size(); i++) {
    for (int j = 0; j < nums[i]; j++) {
        // ...
    }
}
This loop has time complexity O(N*M) (using * to denote multiplication).
N is the number of items in your list; M is the average value of your numbers (N * average equals the exact total number of inner iterations) or the maximum possible value (giving an upper bound). Use whichever is easier.
That arises because the number of times ... runs is proportional to both N and M. It also assumes the ... body has constant complexity; if not, you need to multiply by the complexity of that body.
It is O(nums.size() + sum(nums[i])): the outer loop contributes nums.size() iterations, and the inner body runs sum(nums[i]) times in total.

Running-time complexity of two nested loops: quadratic or linear?

int Solution::diffPossible(vector<int> &A, int B) {
    for (int i = 0; i < A.size(); i++) {
        for (int j = i+1; j < A.size(); j++)
            if ((A[j] - A[i]) == B)
                return 1;
    }
    return 0;
}
This is the solution to a simple question where we are supposed to write code with time complexity less than or equal to O(n). I think the time complexity of this code is O(n^2), but it still got accepted. So I am in doubt; please tell me the right answer.
Let's analyze the worst-case scenario, i.e. when the condition of the if-statement in the inner loop, (A[j]-A[i]) == B, is never fulfilled, and therefore the statement return 1 is never executed.
If we denote A.size() as n, the comparison in the inner loop is performed n-1 times for the first iteration of the outer loop, then n-2 times for the second iteration, and so on...
So, the number of the comparisons performed in the inner loop for this worst-case scenario is (by calculating the sum of the resulting arithmetic progression below):
(n-1) + (n-2) + ... + 1   (a sum of n-1 terms)   =   (n-1)n/2   =   (n^2 - n)/2
Therefore, the running-time complexity is quadratic, i.e., O(n^2), and not O(n).

Count number of sub-sequences of given array such that their sum is less or equal to given number?

I have an array of size n of integer values and a given number S.
1<=n<=30
I want to find the total number of sub-sequences whose element sum is at most S.
For example: let n=3, S=5, and the array's elements be {1,2,3}; then its total sub-sequences are 7:
{1},{2},{3},{1,2},{1,3},{2,3},{1,2,3}
but the required sub-sequences are:
{1},{2},{3},{1,2},{1,3},{2,3}
That is, {1,2,3} is not taken because its element sum is (1+2+3)=6, which is greater than S (6>5). The others are taken because their element sums are at most S.
So the total number of possible sub-sequences is 6, and my answer is that count, 6.
I have tried a recursive method, but its time complexity is 2^n.
Please help me do it in polynomial time.
You can solve this in reasonable time (probably) using the pseudo-polynomial algorithm for the knapsack problem, if the numbers are restricted to be positive (or, technically, zero, but I'm going to assume positive). It is called pseudo-polynomial because it runs in O(nS) time. This looks polynomial, but it is not, because the problem has two complexity parameters: the first is n, and the second is the "size" of S, i.e. the number of bits in S, call it M. So this algorithm is actually O(n * 2^M).
To solve this problem, let's define a two-dimensional matrix A with n rows and S columns. We will say that A[i][j] is the number of sub-sequences that can be formed using the first i elements and having sum at most j. Immediately observe that the bottom-right element of A is the solution, i.e. A[n][S] (yes, we are using 1-based indexing).
Now we want a formula for A[i][j]. Observe that every subsequence using the first i elements either includes the ith element or does not. The number of subsequences that don't is just A[i-1][j]. The number of subsequences that do is A[i-1][j-v[i]], where v[i] is the value of the ith element, plus one for the subsequence containing only the ith element (A counts non-empty subsequences, so the singleton is not included in A[i-1][j-v[i]]). That's because by including the ith element, we need the rest of the sum to be at most j-v[i]. So by adding those numbers, we combine the subsequences that do and don't include the ith element to get the total. This leads us to the following algorithm (note: I use zero-based indexing for elements and i, but 1-based for j):
std::vector<int> elements{1,2,3};
int S = 5;
int N = elements.size();
std::vector<std::vector<int>> A;
A.resize(N);
for (auto& v : A) {
    v.resize(S+1); // 1-based indexing for j/S, otherwise too annoying
}
// Number of subsequences using only the first element is either 0 or 1
for (int j = 1; j != S+1; ++j) {
    A[0][j] = (elements[0] <= j);
}
for (int i = 1; i != N; ++i) {
    for (int j = 1; j != S+1; ++j) {
        A[i][j] = A[i-1][j]; // sequences that don't use the ith element
        auto leftover = j - elements[i];
        if (leftover >= 0) ++A[i][j]; // sequence with only the ith element, if it fits
        if (leftover >= 1) { // sequences with the ith and other elements
            A[i][j] += A[i-1][leftover];
        }
    }
}
Running this program and then outputting A[N-1][S] yields 6 as required. If this program does not run fast enough you can significantly improve performance by using a single vector instead of a vector of vectors (and you can save a bit of space/perf by not wasting a column in order to 1-index, as I did).
Yes. This problem can be solved in pseudo-polynomial time.
Let me redefine the problem statement as "count the number of subsets whose sum is less than K" (so to count sums <= S, call it with K = S+1).
Given below is a solution that works in O(N * K),
where N is the number of elements and K is the target bound.
#include <string.h> // for memset

int countSubsets(int set[], int N, int K) {
    int dp[N][K];
    memset(dp, 0, sizeof(dp)); // the table must start out all zero
    //1. Iterate through all the elements in the set.
    for (int i = 0; i < N; i++) {
        if (set[i] < K) {
            dp[i][set[i]] = 1; // the subset containing only set[i]
        }
        if (i == 0) continue;
        //2. Include the count of subsets that don't include the element set[i]
        for (int k = 1; k < K; k++) {
            dp[i][k] += dp[i-1][k];
        }
        //3. Now count subsets that include set[i] together with earlier elements
        for (int k = 1; k < K; k++) {
            if (k + set[i] >= K) {
                break;
            }
            dp[i][k+set[i]] += dp[i-1][k];
        }
    }
    //4. Return the sum of the last row of the dp table. The table only ever
    // counts non-empty subsets, so no adjustment for the empty subset is needed.
    int count = 0;
    for (int k = 1; k < K; k++) {
        count += dp[N-1][k];
    }
    return count;
}

I need help understanding how to find the Big-Oh of a code segment

My Computer Science II final is tomorrow, and I need some help understanding how to find the Big-Oh for segments of code. I've searched the internet and haven't been able to find any examples of how I need to understand it.
Here's a problem from our sample final:
for (int pass = 1; i <= n; pass++)
{
    for (int index = 0; index < n; index++)
        for (int count = 1; count < n; count++)
        {
            //O(1) things here.
        }
}
We are supposed to find the order (Big-Oh) of the algorithm.
I think that it would be O(n^3), and here is how I came to that conclusion:
for (int pass = 1; i <= n; pass++) // Evaluates n times
{
    for (int index = 0; index < n; index++) // Evaluates n * (n+1) times
        for (int count = 1; count < n; count++) // Evaluates n * n * (n) times
        {
            //O(1) things here.
        }
}
// T(n) = (n) + (n^2 + n) + n^3
// T(n) = n^3 + n^2 + 2n
// T(n) <= c*f(x)
// n^3 + n^2 + 2n <= c * (n^3)
// O(n) = n^3
I'm just not sure if I'm doing it correctly. Can someone explain how to evaluate code like this and/or confirm my answer?
Yes, it is O(n^3). However:
for (int pass = 1; pass <= n; pass++) // Evaluates n times
{   // ^^ i should be pass
    for (int index = 0; index < n; index++) // Evaluates n times
        for (int count = 1; count < n; count++) // Evaluates n-1 times
        {
            //O(1) things here.
        }
}
Since you have three layers of nested for loops, the innermost body will be evaluated n * n * (n-1) times. Each operation inside the innermost loop takes O(1) time, so in total you have n^3 - n^2 constant operations, which is O(n^3) in order of growth.
A good summary of how to measure order of growth in Big O notation can be found here:
Big O Notation MIT
Quoting part from the above file:
Nested loops
for I in 1 .. N loop
for J in 1 .. M loop
sequence of statements
end loop;
end loop;
The outer loop executes N times. Every time the outer loop executes, the inner loop
executes M times. As a result, the statements in the inner loop execute a total of N * M
times. Thus, the complexity is O(N * M).
In a common special case where the stopping condition of the inner loop is J < N instead
of J < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N^2).
Similar rationale can be applied in your case.
You are absolutely correct. It is O(n^3) for your example.
To find the Big Oh running time of any segment of code, you should think about how many times the piece of code does O(1) things.
Let me simplify your example to give a better idea of this:
for (int index = 0; index < n; index++) // Evaluates n times
    for (int count = 1; count < n; count++) // Evaluates ~n times per outer iteration
    {
        //O(1) things here.
    }
In the above case, the inner loop runs n times for each run of the outer loop, and your outer loop also runs n times. This means you're doing n things n times, making it O(n^2).
One other thing to note is that Big Oh is an upper bound. This means you should always think about what happens to the code when you have a large input (in your case, a large value of n). Another implication of this fact is that multiplying or adding by constants has no effect on the Big Oh bound. For example:
for (int index = 0; index < n; index++) // Evaluates n times
    for (int count = 1; count < 2*n; count++) // Runs ~2n times per outer iteration
    {
        //O(1) things here.
    }
The Big Oh running time of this code is also O(n^2) since O(n*(2n)) = O(n^2).
Also check this out: http://ellard.org/dan/www/Q-97/HTML/root/node7.html