Calculating Time Complexity of Maximum Subsequence Sum - c++

Hello everyone, I am trying to calculate the time complexity of the Maximum Subsequence Sum algorithm.
I already know the answer, which is O(n^3), and that it follows from the function (n^3 + 3n^2 + 2n)/6.
My question is: how is that function obtained?

Quite simply, actually: just look at the loops in the code.
for (int i = 0; i < n; i++)
    for (int j = i; j < n; j++) {
        ...
        for (int k = i; k <= j; k++)
            XXX;
    }
The line XXX is executed n^3 times (up to a constant factor and some lower powers of n). The outer loop obviously runs from 0 to n-1, and the "middle" loop runs from i (which starts out at 0, 1, ...) to n-1, meaning the inner loop is "started" approximately n^2 times. Both i and j depend on n (e.g., i will be 0 and j will be n-1 at the end of the first outer iteration), so line XXX runs n times (for the inner loop) times n^2 times (for the two outer loops), for a total of n^3.
To get the concrete function (n^3 + 3n^2 + 2n)/6, you'd have to be more thorough in your calculation and take care of all those factors I simply omitted above.

Here is how:
i=0
j=0: k=0 (count = 1)
j=1: k=0,1 (count = 2)
j=2: k=0,1,2 (count = 3)
...
j=n-1: k=0,1,2,...,n-1 (count = n)
Total number of times the code executes = 1+2+3+...+n = n(n+1)/2
i=1
j=1: k=1 (count = 1)
j=2: k=1,2 (count = 2)
j=3: k=1,2,3 (count = 3)
...
j=n-1: k=1,2,...,n-1 (count = n-1)
Total number of times the code executes = 1+2+3+...+(n-1) = (n-1)n/2
...
i=n-1
j=n-1: k=n-1 (count = 1)
Total number of times the code executes = 1 = 1(1+1)/2
Now if we sum over all the values of i:
n(n+1)/2 + (n-1)((n-1)+1)/2 + ... + 1(1+1)/2
= ∑ (N=1 to n) N(N+1)/2
= (1/2)∑(N^2 + N)
= (1/2)(∑N^2 + ∑N)
= (1/2){ n(n+1)(2n+1)/6 + n(n+1)/2 }
= (1/2){ (2n^3 + 3n^2 + n)/6 + (n^2 + n)/2 }
= (n^3 + 3n^2 + 2n)/6
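To sanity-check the closed form, here is a minimal C++ sketch (my own, not from the original answer) that counts how often the innermost statement executes and compares the count with (n^3 + 3n^2 + 2n)/6:

#include <iostream>

int main() {
    for (long long n = 1; n <= 15; ++n) {
        long long count = 0;
        // Same triple-loop structure as the subsequence-sum code above.
        for (long long i = 0; i < n; ++i)
            for (long long j = i; j < n; ++j)
                for (long long k = i; k <= j; ++k)
                    ++count;                    // stands in for line XXX
        long long formula = (n * n * n + 3 * n * n + 2 * n) / 6;
        std::cout << "n=" << n << " count=" << count
                  << " formula=" << formula << '\n';
    }
}

The two columns agree for every n, matching the derivation above.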

Check this solution suggested by Mark Allen Weiss (in his book).

Related

What's the time-complexity function [ T(n) ] for these loops?

j = n;
while (j >= 1) {
    i = j;
    while (i <= n) {
        cout << "Printed";
        i *= 2;
    }
    j /= 2;
}
My goal is to find T(n) (the function that gives the number of times the statement executes), whose order is expected to be n·log(n), but I need the exact function, one that works at least for n = 1 to n = 10.
I have tried to predict the function, and I ended up with T(n) = floor((n-1)·log(n)) + n,
which is correct only for n = 1 and n = 2.
I should mention that I found that inaccurate function by converting the original code to the for-loops below:
for (j = 1; j <= n; j *= 2) {
    for (i = j; i <= n; i *= 2) {
        cout << "Printed";
    }
}
Thanks in advance for any help finding the exact T(n). 🙏
In the following, log(x) denotes the floor of the base-2 logarithm of x.
1.)
For a given j, the inner loop executes 1 + log(N) - log(j) times, and the outer loop executes 1 + log(N) times, for j = 1, 2, 4, ..., N. The overall count is therefore
T(N) = Σ over j = 1, 2, 4, ..., N of (1 + log(N) - log(j))
= (1 + log(N))^2 - (0 + 1 + 2 + ... + log(N))
= log(N)^2 + 2·log(N) + 1 - log(N)(log(N) + 1)/2
= log(N)^2/2 + 3·log(N)/2 + 1
2.) The for-loop version is the same, just traversed in the reverse order, so the count is identical.
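As a quick cross-check (my own sketch, not part of the original answer), the snippet below counts the prints directly and compares them with the closed form, rewritten as (L+1)(L+2)/2 where L = floor(log2(n)), which equals log(N)^2/2 + 3·log(N)/2 + 1:

#include <cmath>
#include <iostream>

int main() {
    for (int n = 1; n <= 10; ++n) {
        long long count = 0;
        for (int j = n; j >= 1; j /= 2)            // outer halving loop
            for (long long i = j; i <= n; i *= 2)  // inner doubling loop
                ++count;                           // one "Printed" per pass
        long long L = (long long) std::log2(n);    // floor of log base 2
        long long formula = (L + 1) * (L + 2) / 2; // log(n)^2/2 + 3*log(n)/2 + 1
        std::cout << "n=" << n << " count=" << count
                  << " T(n)=" << formula << '\n';
    }
}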
I know it is not a proof, but it may be easier to follow than the math: godbolt. Play with n; it always returns 0.
The outer loop and the inner loop each run O(log₂ N) times.
So the total time is
O(log₂ N * log₂ N)
which is O(log² N), consistent with the exact count above.

Counting the basic operations of a given program

I am looking at the following Operations Counting Example, which is supposed to present the operation count of the following pseudocode:
Algorithm prefixAverages(A)
    Input: array A of n numbers
    Output: array B of n numbers such that B[i] is the average
            of elements A[0], A[1], ..., A[i]
    for i = 0 to n - 1 do
        b = 0
        for j = 0 to i do
            b = b + A[j]
            j++
        B[i] = b / (i + 1)
    return B
But I don't see how the counts for the inner for loop are reached. It says that for the case i = 0, j = 0, the inner for loop runs twice? It strikes me that it should only run once, to see that 0 < 0. Can anyone provide insight into where the given operation count comes from, or provide their own operation count?
This is under the assumption that primitive operations are:
Assignment
Array access
Mathematical operators (+, -, /, *)
Comparison
Increment/Decrement (math in disguise)
Return statements
Let me know if anything is unclear or if you need more information.
When the article you are following says "for var <- 0 to var2", it means "for (var = 0; var <= var2; var++)". So yes, when i = 0, the inner loop's comparison is evaluated twice: once when j = 0, and again when j = 1, at which point the loop exits.
Edit and improvement: when I calculate the complexity of a program, the only thing that interests me is the big-O complexity. In this case, the 'i' loop runs n times and the 'j' loop runs i times, so the inner body runs (1+2+3+...+n) times, which is n(n+1)/2 times, and that is O(n^2) complexity.
In the first line, you have an assignment (i = something) and a comparison (i <= n-1), i.e. 2 operations for each value of i. Since the last value tested is i = n, those 2 operations happen for i = 0 through i = n, which is n+1 values, so this line does 2(n+1) operations.
The second line is straightforward: it executes once per iteration, i.e. n times (from i = 0 to i = n-1).
On the second loop, it does 2 things per value, an assignment and a comparison (just as in the first loop), and it does this i+2 times (for example, when i = 0 it enters the loop once, but it still has to do the j = 1 assignment and the 1 <= 0 comparison, so 2 times in total). So this line does 2(i+2) operations, for i = 0 through i = n-1. To calculate all of this, we do the sum:
Σ (i=0 to n-1) 2(i+2) = 2(Σi + Σ2) = 2(n(n-1)/2 + 2n) = n(n-1) + 4n = n^2 - n + 4n = n^2 + 3n.
I'll continue this later; I hope my answer so far is helpful.
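For reference, here is a small C++ sketch of my own that simulates prefixAverages and tallies the primitive operations under the counting model above (the per-line charges are my reading of that model, so treat the totals as illustrative):

#include <iostream>
#include <vector>

int main() {
    int n = 4;
    std::vector<double> A = {1, 2, 3, 4}, B(n);
    long long ops = 0;
    int i = 0;              ops += 1;  // assignment i = 0
    while (true) {
        ops += 1;                      // comparison i <= n-1
        if (!(i <= n - 1)) break;
        double b = 0;       ops += 1;  // assignment b = 0
        int j = 0;          ops += 1;  // assignment j = 0
        while (true) {
            ops += 1;                  // comparison j <= i
            if (!(j <= i)) break;
            b = b + A[j];   ops += 3;  // array access + addition + assignment
            ++j;            ops += 1;  // increment
        }
        B[i] = b / (i + 1); ops += 4;  // addition + division + array access + assignment
        ++i;                ops += 1;  // increment
    }
    ops += 1;                          // the return statement
    std::cout << "primitive operations for n=" << n << ": " << ops << '\n';
}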

Finding growth Function

I'm trying to find the growth function of the following code.
int sum = 0;
for (int k = n; k > 0; k /= 2) {
    cout << k << endl;
    for (int i = 0; i < k; i++) {
        sum++;
        cout << i << endl;
    }
}
but I'm stuck on the first loop, for (int k = n; k > 0; k /= 2). It executes like this:
for n = 5, it executes 3 times
n = 10, 4 times
n = 100, 7 times
n = 1000, 10 times
How can I generalize this?
First, 10 is about log_2 of 1000. There are about log_2(n) iterations of the outer loop. However, that doesn't tell you the total number of steps because you do a variable number of steps inside.
n + n/2 + n/4 + n/8 + ... = 2n = O(n). You are doing a constant number of things inside the loops, so the total number of steps is O(n). About half of the time is spent on the first iteration of the outer loop, when k=n.
At each step, k is divided by two: cut in half. How many cuts do you need to get down to nothing?
After 1 cut you have n/2.
After 2 cuts you have n/4.
After 3 cuts you have n/8.
After 4 cuts you have n/16.
After 5 cuts you have n/32.
After x cuts you have n/2^x.
So, how long until n/2^x shrinks to 1, i.e. n = 2^x?
The answer is simple: x = log2(n).
So your loop runs about log n times.
But the inner loop runs over parts of these sizes: the first run has size n, the second n/2, the third n/4, and so on. The sum over all runs of the inner loop is:
n + n/2 + n/4 + n/8 + ... = 2n.
Thus the total run time is O(n) (thanks, Douglas Zare!).
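A quick empirical check (my own sketch): count the outer iterations and the total inner iterations, and compare them with floor(log2(n)) + 1 and the 2n bound:

#include <cmath>
#include <iostream>

int main() {
    for (int n : {5, 10, 100, 1000}) {
        int outer = 0;
        long long inner = 0;
        for (int k = n; k > 0; k /= 2) {
            ++outer;        // one outer iteration per halving of k
            inner += k;     // the inner loop runs k times for this k
        }
        std::cout << "n=" << n
                  << " outer=" << outer
                  << " floor(log2(n))+1=" << (int) std::log2(n) + 1
                  << " inner=" << inner << " (bound 2n=" << 2 * n << ")\n";
    }
}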

Time complexity of nested loop: where does cn(n+1)/2 come from?

Consider the following loop:
for (i = 1; i <= n; i++) {
    for (j = 1; j <= i; j++) {
        k = k + i + j;
    }
}
The outer loop executes n times. For i = 1, 2, ..., n, the inner loop executes one time, two times, ..., and n times respectively. Thus, the time complexity of the loop is
T(n) = c + 2c + 3c + 4c + ... + nc
= cn(n+1)/2
= (c/2)n^2 + (c/2)n
= O(n^2)
OK, so I don't understand how the time complexity T(n) even becomes c + 2c + 3c, etc., and then cn(n+1)/2. Where did that come from?
The sum 1 + 2 + 3 + 4 + ... + n is equal to n(n+1)/2; this is the classic Gauss sum. Therefore,
c + 2c + 3c + ... + nc
= c(1 + 2 + 3 + ... + n)
= cn(n+1) / 2
This summation comes up a lot in algorithmic analysis and is useful to know when working with big-O notation.
Or is your question where the summation comes from at all?
Hope this helps!
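If it helps to see this concretely, here is a short C++ sketch (mine, not from the answer) that counts how often the inner statement runs and checks the count against n(n+1)/2:

#include <iostream>

int main() {
    for (int n = 1; n <= 10; ++n) {
        long long count = 0;
        for (int i = 1; i <= n; ++i)
            for (int j = 1; j <= i; ++j)
                ++count;               // one execution of k = k + i + j
        std::cout << "n=" << n << " count=" << count
                  << " n(n+1)/2=" << n * (n + 1) / 2 << '\n';
    }
}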

Why can the KMP failure function be computed in O(n) time?

Wikipedia claims that the failure function table can be computed in O(n) time.
Let's look at its "canonical" implementation (in C++):
vector<int> prefix_function(string s) {
    int n = (int) s.length();
    vector<int> pi(n);
    for (int i = 1; i < n; ++i) {
        int j = pi[i-1];            // longest border of the prefix ending at i-1
        while (j > 0 && s[i] != s[j])
            j = pi[j-1];            // fall back to the next shorter border
        if (s[i] == s[j]) ++j;      // extend the border by one character
        pi[i] = j;
    }
    return pi;
}
Why does it work in O(n) time, even though there is an inner while-loop? I'm not really strong at the analysis of algorithms, so could somebody explain it?
This line: if (s[i] == s[j]) ++j; is executed at most O(n) times, and it is the only thing that ever increases j. Note that at the start of each outer iteration, j equals pi[i-1].
Now this line: j = pi[j-1]; decreases j by at least one. Since j can only have been increased at most O(n) times in total (counting increases across all previous outer iterations), it cannot be decreased more than O(n) times.
So that line is also executed at most O(n) times.
Thus the whole time complexity is O(n).
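To watch the amortized bound in action, here is an instrumented version of prefix_function (my own sketch) that counts every execution of the fallback line; the count never exceeds n:

#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main() {
    string s = "abacababc";            // any pattern works here
    int n = (int) s.length();
    vector<int> pi(n);
    long long fallbacks = 0;           // executions of j = pi[j-1]
    for (int i = 1; i < n; ++i) {
        int j = pi[i-1];
        while (j > 0 && s[i] != s[j]) {
            j = pi[j-1];
            ++fallbacks;
        }
        if (s[i] == s[j]) ++j;
        pi[i] = j;
    }
    cout << "n = " << n << ", fallback executions = " << fallbacks << endl;
}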
There are already two answers here that are correct, but I often think a fully laid out proof can make things clearer. You said you wanted an answer for a 9-year-old, but I don't think that's feasible (it's easy to be fooled into thinking the claim is true without actually having any intuition for why it's true). Maybe working through this answer will help.
First off, the outer loop clearly runs n times, because i is not modified within the loop. The only code within the loop that could run more than once is the block
while (j > 0 && s[i] != s[j])
{
    j = pi[j-1];
}
So how many times can that run? Well, notice that every time that condition is satisfied, we decrease the value of j, which at this point is at most pi[i-1]. If j hits 0, the while loop is done. To see why this is important, we first prove a lemma (you're a very smart 9-year-old):
pi[i] <= i
This is done by induction. pi[0] <= 0, since it's set once in the initialization of pi and never touched again. Then inductively we let 0 < k < n and assume the claim holds for all 0 <= a < k. Consider the value of pi[k]. It's set precisely once, in the line pi[i] = j. How big can j be? It's initialized to pi[k-1] <= k-1 by induction. In the while block it may then be updated to pi[j-1] <= j-1 < pi[k-1]. By another mini-induction you can see that j will never increase past pi[k-1]. Hence after the while loop we still have j <= k-1. Finally, it might be incremented once, so we have j <= k, and therefore pi[k] = j <= k (which is what we needed to prove to finish our induction).
Now, returning to the original point, we ask: "how many times can we decrease the value of j?" With our lemma we can now see that every iteration of the while loop strictly decreases the value of j. In particular we have:
pi[j-1] <= j-1 < j
So how many times can this run? At most pi[i-1] times. The astute reader might think
"you've proven nothing! We have pi[i-1] <= i-1 but it's inside the while loop so
it's still O(n^2)!". The slightly more astute reader notices this extra fact:
However many times we run j = pi[j-1], we also decrease the value that ends up in pi[i], which shortens the next run of the while loop!
For example, let's say j = pi[i-1] = 10. After ~6 iterations of the while loop we have j = 3, and say it then gets incremented by 1 in the s[i] == s[j] line, so j = 4 = pi[i].
At the next iteration of the outer loop we start with j = 4, so the while loop can execute at most 4 times.
The final piece of the puzzle is that ++j runs at most once per outer iteration, so pi can climb by at most one per step. That means we cannot get something like this in our pi vector:
0 1 2 3 4 5 1 6 1 7 1 8 1 9 1
If that were possible, each of the drops back to 1 could stand for several iterations of the while loop; but climbing from 1 back up to 6, 7, 8, 9 would require multiple increments within a single outer iteration, which cannot happen.
To make this actually formal, you might establish the invariants described above and then use induction to show that the total number of times the while loop has run, summed with pi[i], is at most i.
From that it follows that the total number of times the while loop runs is O(n), which means the entire outer loop has complexity:
O(n) // from the rest of the outer loop excluding the while loop
+ O(n) // from the while loop
=> O(n)
Let's start with the fact that the outer loop executes n times, where n is the length of the pattern we seek. The inner loop decreases the value of j by at least 1 each time, since pi[j-1] < j. The loop terminates at the latest when j == 0, so it can decrease the value of j at most as often as j has previously been increased by ++j in the outer loop. Since ++j is executed at most once per outer iteration, i.e. at most n times in total, the overall number of executions of the inner while loop is limited to n. The preprocessing algorithm therefore requires O(n) steps.
If you care, consider this simpler implementation of the preprocessing stage:
/* ff stands for 'failure function': */
void kmp_table(const char *needle, int *ff, size_t nff)
{
    size_t pos = 2;
    int cnd = 0;
    if (nff == 0)
        return;            /* guard against an empty table */
    ff[0] = -1;
    if (nff > 1)
        ff[1] = 0;
    while (pos < nff) {
        if (needle[pos - 1] == needle[cnd]) {
            ff[pos++] = ++cnd;
        } else if (cnd > 0) {
            cnd = ff[cnd]; /* amortized O(1), for the reasons above */
        } else {
            ff[pos++] = 0;
        }
    }
}
from which it is painfully obvious that the failure function is computed in O(n) time, where n is the length of the pattern sought.
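For completeness, a hypothetical driver (the pattern and buffer size are my own choices) showing how kmp_table might be called:

#include <cstdio>
#include <cstring>

/* kmp_table as defined above */

int main(void)
{
    const char *needle = "abacabab";
    size_t n = strlen(needle);
    int ff[8];                         /* needs at least n entries */
    kmp_table(needle, ff, n);
    for (size_t i = 0; i < n; i++)
        printf("ff[%zu] = %d\n", i, ff[i]);
    return 0;
}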