Intuition behind initializing both pointers at the beginning versus one at the beginning and the other at the end - c++

I solved a problem a few days ago:
Given an unsorted array A containing N integers and an integer B, find if there exists a pair of elements in the array whose difference is B. Return true if any such pair exists, else return false. For A = [2, 3, 5, 10, 50, 80] and B = 40, it should return true.
as:
int Solution::solve(vector<int> &A, int B) {
    if (A.size() == 1) return false;
    size_t i = 0, j = 0; // note: both initialized at the beginning
    sort(begin(A), end(A));
    while (i < A.size() && j < A.size()) {
        if (A[j] - A[i] == B && i != j) return true;
        if (A[j] - A[i] < B) j++;
        else i++;
    }
    return false;
}
While solving this problem, the mistake I had made earlier was initializing i=0 and j=A.size()-1. Because of that, decrementing j and incrementing i both decreased the difference, so valid differences were missed. On initializing both at the beginning as above, I was able to solve the problem.
Now I am solving a follow-up 3sum problem:
Given an integer array nums, return all the triplets [nums[i], nums[j], nums[k]] such that i != j, i != k, and j != k, and nums[i] + nums[j] + nums[k] == 0. Notice that the solution set must not contain duplicate triplets. If nums = [-1,0,1,2,-1,-4], output should be: [[-1,-1,2],[-1,0,1]] (any order works).
A solution to this problem is given as:
vector<vector<int>> threeSum(vector<int>& nums) {
    sort(nums.begin(), nums.end());
    vector<vector<int>> res;
    for (unsigned int i = 0; i < nums.size(); i++) {
        if ((i > 0) && (nums[i] == nums[i-1]))
            continue;
        int l = i + 1, r = (int)nums.size() - 1; // note: unlike `l`, `r` points to the end
        while (l < r) {
            int s = nums[i] + nums[l] + nums[r];
            if (s > 0) r--;
            else if (s < 0) l++;
            else {
                res.push_back(vector<int>{nums[i], nums[l], nums[r]});
                while (l < r && nums[l] == nums[l+1]) l++; // l < r guards added so the
                while (l < r && nums[r] == nums[r-1]) r--; // duplicate-skips can't run out of range
                l++; r--;
            }
        }
    }
    return res;
}
The logic is pretty straightforward: each nums[i] from the outer loop fixes the 'target' (-nums[i]) that the inner while loop then searches for, using a two-pointer approach like in the first code at the top.
What I don't follow is the logic behind initializing r=nums.size()-1 and working backwards - how are valid sums (the analogue of the differences above) not being missed?
Edit1: Both problems contain negative and positive numbers, as well as zeroes.
Edit2: I understand how both snippets work. My question specifically is the reasoning behind r=nums.size()-1 in code #2: as we see in code #1 above it, starting r from the end misses some valid pairs (http://cpp.sh/36y27 - the valid pair (10,50) is missed); so why do we not miss valid pair(s) in the second code?
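For reference, here is a self-contained version of the failing variant from that link (reconstructed, in case the link rots); it prints false because both possible moves only shrink the difference:
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> A = {2, 3, 5, 10, 50, 80}; // already sorted
    int B = 40;
    int i = 0, j = (int)A.size() - 1;
    bool found = false;
    while (i < j) {
        if (A[j] - A[i] == B) { found = true; break; }
        if (A[j] - A[i] > B) j--; // both branches shrink the difference,
        else i++;                 // so the pair (10, 50) is stepped over
    }
    cout << boolalpha << found << endl; // prints false
}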

Reformulating the problem
The difference between the two algorithms boils down to addition and subtraction, not 3 vs 2 sums.
Your 3-sum variant asks for the sum of 3 numbers matching a target. When you fix one number in the outer loop, the inner loop reduces to a 2-sum (i.e. addition). The "2-sum" variant in your top code is really a 2-difference (i.e. subtraction).
You're comparing a 2-sum (A[i] + A[j] == B s.t. i != j) to a 2-difference (A[j] - A[i] == B s.t. i != j). I'll use those terms going forward, and treat the outer loop in 3-sum as the red herring it is.
2-sum
Why L = 0, R = length - 1 works for 2-sum
For 2-sum, you probably already see the intuition of starting at the ends and working towards the middle, but it's worth making the logic explicit.
At any iteration of the loop, if A[L] + A[R] > B, then we have no choice but to decrement the right pointer. Incrementing the left pointer is guaranteed to increase the sum or leave it the same, taking us further and further from the target and potentially closing off the chance of finding the solution pair, which may well still include A[L].
On the other hand, if A[L] + A[R] < B, then you must increase the sum by moving the left pointer forward to a larger number. There's a chance A[R] is still part of the solution -- we can't rule it out until A[L] + A[R] > B.
The key takeaway is that there is no decision to be made at each step: either the answer was found or one of the two numbers at either index can be definitively eliminated from further consideration.
Why L = 0, R = 0 doesn't work for 2-sum
This explains why starting both pointers at 0 won't help for 2-sum. What rule would you use to move the pointers forward? There's no way to know which pointer needs to move and which should wait. Both moves increase the sum at best and neither decreases it (the start is the minimum sum, A[0] + A[0]). Moving the wrong one could prohibit finding the solution later on, and there's no way to definitively eliminate either number.
You're back to keeping left at 0 and moving the right pointer forward to the first element that causes A[R] + A[L] > B, then running the tried-and-true original two-pointer logic. You might as well just start R at length - 1.
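To make that concrete, here is a minimal sketch of the 2-sum two-pointer just described (the function name is mine; A is assumed sorted ascending):
#include <vector>
using namespace std;

bool twoSum(const vector<int>& A, int B) {
    int L = 0, R = (int)A.size() - 1;
    while (L < R) {
        int s = A[L] + A[R];
        if (s == B) return true;
        if (s > B) R--; // A[R] can't be in any solution: even its smallest partner overshoots
        else L++;       // A[L] can't be in any solution: even its largest partner falls short
    }
    return false;
}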
2-difference
Why L = 0, R = length - 1 doesn't work for 2-difference
Now that we understand how 2-sum works, let's look at 2-difference. Why is it that the same approach starting from both ends and working towards the middle won't work?
The reason is that when you're subtracting two numbers, you lose the all-important guarantee from 2-sum that moving the left pointer forward always increases the total and moving the right pointer backwards always decreases it.
For the difference of two numbers in a sorted array, A[R] - A[L] s.t. R > L, the difference decreases regardless of whether you move L forward or R backwards, even in an array of only positive numbers. This means that at a given pair of indices, there's no way to know which pointer needs to move to find the correct pair later on, breaking the algorithm for the same reason as 2-sum with both pointers starting at 0.
Why L = 0, R = 0 works for 2-difference
Finally, why does starting both pointers at 0 work on 2-difference? The reason is that you're back to the 2-sum guarantee that moving one pointer increases the difference while the other decreases the difference. Specifically, if A[R] - A[L] < B, then L++ is guaranteed to decrease the difference, while R++ is guaranteed to increase it.
We're back in business: there is no choice or magical oracle necessary to decide which index to move. We can systematically eliminate values that are either too large or too small and home in on the target. The logic works for the same reasons L = 0, R = length - 1 works on 2-sum.
As an aside, the first solution is suboptimal: O(n log(n)) instead of O(n) time with O(n) space. You can use a hash set to keep track of the items seen so far, then perform a lookup for every item in the array: if A[i] - B or A[i] + B for some i is in the set, you found your pair.
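A minimal sketch of that hash-based approach, adapted to the 2-difference problem (the function name is mine; one pass suffices if we look up before inserting):
#include <unordered_set>
#include <vector>
using namespace std;

bool hasPairWithDifference(const vector<int>& A, int B) {
    unordered_set<int> seen;
    for (int x : A) {
        // a previously seen y pairs with x if x - y == B or y - x == B
        if (seen.count(x - B) || seen.count(x + B)) return true;
        seen.insert(x);
    }
    return false;
}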

Consider this:
A = {2, 3, 5, 10, 50, 80}
B = 40
i = 0, j = 5;
When you have something like
while (i < j) {
    if (A[j] - A[i] == B && i != j) return true;
    if (A[j] - A[i] > B) j--;
    else i++;
}
consider the case when if(A[j]-A[i]==B && i!=j) is not true. Your code makes an incorrect assumption that if the difference of the two endpoints is > B then one should decrement j. Given a sorted array, you don't know whether decrementing j and then taking the difference would give you the target difference, or incrementing i and then taking the difference would, since it can go both ways. In your example, when A[5] - A[0] != 40, you could have gone either way: to A[4] - A[0] (which is what you do) or to A[5] - A[1]. Both would still give you a difference greater than the target. In short, the presumption in your algorithm is incorrect, and hence it isn't the right way to go about it.
In the second approach, that's not the case. When the triplet nums[i]+nums[l]+nums[r] doesn't sum to 0, you know that, the array being sorted, if the sum was more than 0 then nums[r] has to be decremented, since incrementing l would only increase the sum because nums[l+1] >= nums[l].

Your question boils down to the following:
For a sorted array in ascending order A, why is it that we perform a different two-pointer search for t for the problem A[i] + A[j] == t versus A[j] - A[i] == t, where j > i?
It's more intuitive why, for the first problem, we can fix i and j at opposite ends and then decrease j or increase i, so I'll focus on the second problem.
With array problems it's sometimes easiest to draw out the solution space, then come up with the algorithm from there. First, let's draw out the solution space B, where B[i][j] = -(A[i] - A[j]) (defined only for j > i):
B, for A of length N
    j ---------------------------->
i   B[0][0]       B[0][1]       ...  B[0][N - 1]
|   B[1][0]       B[1][1]       ...  B[1][N - 1]
|       .             .                   .
|       .             .                   .
|       .             .                   .
v   B[N - 1][0]   B[N - 1][1]   ...  B[N - 1][N - 1]
---
In terms of A:
X -(A[0] - A[1]) -(A[0] - A[2]) ... -(A[0] - A[N - 2]) -(A[0] - A[N - 1])
X X -(A[1] - A[2]) ... -(A[1] - A[N - 2]) -(A[1] - A[N - 1])
. . . . .
. . . . .
. . . . .
X X X ... X -(A[N - 2] - A[N - 1])
X X X ... X X
Notice that B[i][j] = A[j] - A[i], so the rows of B are in ascending order and the columns of B are in descending order. Let's compute B for A = [2, 3, 5, 10, 50, 80].
B = [
    j------------------------>
i   X   1   3   8   48  78
|   X   X   2   7   47  77
|   X   X   X   5   45  75
|   X   X   X   X   40  70
|   X   X   X   X   X   30
v   X   X   X   X   X   X
]
Now the equivalent problem is searching for t = 40 in B. Note that if we start with i = 0 and j = N - 1 = 5 (the top-right corner), there's no good/guaranteed way to reach 40: both available moves decrease the current value. However, if we start in a position where we can always increment/decrement our current element in B in small steps, we can guarantee that we'll get as close to t as possible.
In this case, the small steps we take involve traversing right/downwards in the matrix, starting from the top left (we could equivalently traverse left/upwards from the bottom right), which corresponds to incrementing both i and j from the beginning of the original array A.
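Here is a hedged sketch of that staircase walk over the implicit matrix B (the function name is mine; t is assumed non-negative). B is never materialized: incrementing j moves right along a row, growing the difference, while incrementing i moves down a column, shrinking it:
#include <algorithm>
#include <vector>
using namespace std;

bool hasDifference(vector<int> A, int t) {
    sort(A.begin(), A.end());
    size_t i = 0, j = 1; // top-left defined cell of B
    while (j < A.size()) {
        int diff = A[j] - A[i]; // current cell B[i][j]
        if (diff == t) return true;
        if (diff < t) ++j;     // move right: the difference grows
        else {
            ++i;               // move down: the difference shrinks
            if (i == j) ++j;   // stay strictly above the diagonal
        }
    }
    return false;
}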

Related

Proving that a two-pointer approach works (pair sum)

I was trying to solve the pair sum problem, i.e., given a sorted array, we need to determine if there exist two indices i and j such that i != j and a[i] + a[j] == k for a given k.
One of the approaches to this problem is running two nested for loops, resulting in a complexity of O(n^2).
Another way to solve it is using a two-pointer technique. I wasn't able to solve the problem using the two-pointer method and therefore looked it up, but I couldn't understand why it works. How do I prove that it works?
#define lli long long
// A is a global sorted array, n is its size
bool f(lli sum) {
    int l = 0, r = n - 1;
    while (l < r) {
        if (A[l] + A[r] == sum) return 1;
        else if (A[l] + A[r] > sum) r--;
        else l++;
    }
    return 0;
}
Well, think of it this way:
You have a sorted array (you didn't mention that the array is sorted, but for this problem, that is generally the case):
{ -1,4,8,12 }
The algorithm starts by choosing the first element in the array and the last element, adding them together and comparing them to the sum you are after.
If our initial sum matches the sum we are looking for, great!! If not, well, we need to continue looking at possible sums either greater than or less than the sum we started with. By starting with the smallest and the largest value in the array for our initial sum, we can eliminate one of those elements as being part of a possible solution.
Let's say we are looking for the sum 3. We see that 3 < 11. Since our big number (12) is paired with the smallest possible number (-1), the fact that our sum is too large means that 12 cannot be part of any possible solution, since any other sum using 12 would have to be larger than 11 (12 + 4 > 12 - 1, 12 + 8 > 12 - 1).
So we know we cannot possibly make a sum of 3 using 12 + one other number in the array; they would all be too big. So we can eliminate 12 from our search by moving down to the next largest number, 8. We do the same thing here. We see 8 + -1 is still too big, so we move down to the next number, 4, and voila! We find a match.
The same logic applies if the sum we get is too small. We can eliminate our small number, because any sum we can get using our current smallest number has to be less than or equal to the sum we get when it is paired with our current largest number.
We keep doing this until we find a match, or until the indices cross each other, since, after they cross, we are simply adding up pairs of numbers we have already checked (i.e. 4 + 8 = 8 + 4).
This may not be a mathematical proof, but hopefully it illustrates how the algorithm works.
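As a runnable illustration of this walk-through (my own demo, with the globals from the question's snippet filled in):
#include <iostream>
#include <vector>
using namespace std;

vector<long long> A = {-1, 4, 8, 12}; // sorted, as in the example above
int n = 4;

bool f(long long sum) {
    int l = 0, r = n - 1;
    while (l < r) {
        if (A[l] + A[r] == sum) return true;
        else if (A[l] + A[r] > sum) r--; // eliminate the big number
        else l++;                        // eliminate the small number
    }
    return false;
}

int main() {
    cout << boolalpha << f(3) << endl; // true: eliminates 12, then 8, then matches -1 + 4
}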
Stephen Docy did a great job tracing the program's execution and explaining the rationale behind its decisions. Maybe making the answer closer to a mathematical proof of the algorithm's correctness could make it easier to generalize to problems like the one mentioned by zzzzzzz in the comments.
We are given a sorted array A of length n and an integer sum. We need to find if there are any two indices i and j such that i != j and A[i] + A[j] == sum.
The solutions (i, j) and (j, i) are equivalent, so we can assume that i < j without loss of generality. In the program, the current guess at i is called l and the current guess at j is called r.
We iteratively slice the array till we find a slice that has the two summands that sum to sum at its boundary, or we find there is no such slice. The slice starts at index l and ends at index r and I will write it as (l, r).
Initially, the slice is the whole array. In each iteration, the length of the slice is decreased by 1: either the left boundary index l increases or the right boundary index r decreases. When the slice length decreases to 1 (l == r), there are no pairs of different indexes inside the slice, so false is returned. This means that the algorithm halts for any input. The O(n) complexity is also immediately clear. The correctness remains to be proven.
We can assume there is a solution; if there is none, the analysis in the above paragraph applies and the branch returning true can never be executed.
The loop has an invariant (statement that holds true regardless of how many iterations have been done yet): When a solution exists, it is either (l, r) itself or its sub-slice. Mathematically, such an invariant is a lemma -- something that is not very useful by itself but makes a stepping stone in the overall proof. We get the overall correctness by initially making (l, r) the whole array and observing that as each iteration makes the slice shorter, the invariant ensures that we will eventually find the solution. Now, we just need to prove the invariant.
We will prove the invariant by induction. The induction base is trivial -- the initial slice (l, r) either is the solution, or contains it as a sub-slice. The hard part is the induction step, i.e. proving that when (l, r) contains the solution, either it is the solution itself or the slice for the next iteration contains the solution as a sub-slice.
When A[l] + A[r] == sum, (l, r) is the solution itself; the first condition in the loop is triggered, true is returned, and everyone is happy.
When A[l] + A[r] > sum, the slice for the next iteration is (l, r - 1), which still contains the solution. Let's prove that by contradiction, assuming (l, r - 1) does not contain the solution. How could that happen, when (l, r) contained the solution (by induction hypothesis)? The only way would be that the solution (i, j) has j == r (r is the only index we removed from the slice). Because by definition A[i] + A[j] == sum, we get A[i] + A[r] == sum < A[l] + A[r] in this branch. When we subtract A[r] from both sides of the inequality, we get A[i] < A[l]. But A[l] is the smallest value in the (l, r) slice (the array is sorted), so this is a contradiction.
When A[l] + A[r] < sum, the slice for the next iteration is (l + 1, r). The argument is symmetric to the previous case.
∎
The algorithm may be easily rewritten as recursive, which simplifies the analysis at the expense of actual performance. This is the functional programming approach.
#define lli long long
// A is a global sorted array, n is its size
bool g(lli sum, int l, int r); // forward declaration, needed since f calls g

bool f(lli sum) {
    return g(sum, 0, n - 1);
}

bool g(lli sum, int l, int r) {
    if (l >= r) return 0;
    else if (A[l] + A[r] == sum) return 1;
    else if (A[l] + A[r] > sum) return g(sum, l, r - 1);
    else return g(sum, l + 1, r);
}
The f function still contains the initialization, but it calls the new g function, which implements the original loop. Instead of keeping the state in local variables, it uses its parameters. Each call of the g function corresponds to a single iteration of the original loop.
The g function is a solution to a more general problem than the original one: Given a sorted array A, are there any two indices i and j such that i != j and A[i] + A[j] == sum and both i and j are between l and r (inclusive)?
This makes reading the analysis even simpler. The loop invariant is actually the proof of correctness of g and the structure of g guides the proof.

[Competitive Programming]: How do I optimise this brute force method? [duplicate]

If n numbers are given, how would I find the total number of possible triangles? Is there any method that does this in less than O(n^3) time?
I am considering a+b>c, b+c>a and a+c>b conditions for being a triangle.
Assume there are no equal numbers among the given n, and that it's allowed to use one number more than once. For example, given the numbers {1,2,3}, we can create 7 triangles:
1 1 1
1 2 2
1 3 3
2 2 2
2 2 3
2 3 3
3 3 3
If any of those assumptions isn't true, it's easy to modify the algorithm.
Here I present an algorithm which takes O(n^2) time in the worst case:
Sort numbers (ascending order).
We will take triples ai <= aj <= ak, such that i <= j <= k.
For each i, j you need to find the largest k that satisfies ak < ai + aj. Then all triples (ai, aj, al) with j <= l <= k are triangles (because ak >= aj >= ai, the only inequality that can be violated is ak < ai + aj).
Consider two pairs (i, j1) and (i, j2) with j1 <= j2. It's easy to see that k2 (found in step 2 for (i, j2)) >= k1 (found in step 2 for (i, j1)). It means that as you iterate over j, you only need to check numbers starting from the previous k. This gives O(n) time for each particular i, which implies O(n^2) for the whole algorithm.
C++ source code:
int Solve(int* a, int n)
{
    int answer = 0;
    std::sort(a, a + n);
    for (int i = 0; i < n; ++i)
    {
        int k = i;
        for (int j = i; j < n; ++j)
        {
            while (n > k && a[i] + a[j] > a[k])
                ++k;
            answer += k - j;
        }
    }
    return answer;
}
Update for downvoters:
This definitely is O(n^2)! Please read carefully the chapter on Amortized Analysis in "Introduction to Algorithms" by Thomas H. Cormen et al. (17.2 in the second edition).
Finding complexity by counting nested loops is sometimes completely wrong.
Here I try to explain it as simply as I can. Fix the variable i. Then for that i we iterate j from i to n (an O(n) operation), and the internal while loop iterates k from i to n in total (also an O(n) operation). Note: I don't restart the while loop from the beginning for each j. We also need to do this for each i from 0 to n. So it gives us n * (O(n) + O(n)) = O(n^2).
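As a quick sanity check (my own), compiling the following together with the Solve function above should print the 7 triangles counted in the {1,2,3} example:
#include <iostream>

int Solve(int* a, int n); // defined above

int main() {
    int a[] = {1, 2, 3};
    std::cout << Solve(a, 3) << std::endl; // expected output: 7
}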
There is a simple algorithm in O(n^2*logn).
Assume you want all triangles as triples (a, b, c) where a <= b <= c.
There are 3 triangle inequalities but only a + b > c suffices (others then hold trivially).
And now:
Sort the sequence in O(n * logn), e.g. by merge-sort.
For each pair (a, b), a <= b the remaining value c needs to be at least b and less than a + b.
So you need to count the number of items in the interval [b, a+b).
This can be done by binary-searching for a+b (O(logn)) and counting the number of items in [b, a+b), which is the difference between the two positions found.
All together O(n * logn + n^2 * logn) which is O(n^2 * logn). Hope this helps.
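A hedged sketch of this O(n^2 * logn) approach (the function name is mine; it counts triples of distinct indices and assumes positive side lengths, using long long so a + b cannot overflow):
#include <algorithm>
#include <vector>
using namespace std;

long long countTriangles(vector<long long> v) {
    sort(v.begin(), v.end());
    long long count = 0;
    for (size_t i = 0; i + 2 < v.size(); ++i)
        for (size_t j = i + 1; j + 1 < v.size(); ++j) {
            // the third side c must satisfy v[j] <= c < v[i] + v[j];
            // searching only to the right of j counts each triple once
            auto first = v.begin() + j + 1;
            auto last = lower_bound(first, v.end(), v[i] + v[j]);
            count += last - first;
        }
    return count;
}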
If you use a binary sort, that's O(n log(n)), right? Keep your binary tree handy, and for each pair (a, b) where a <= b, count the values c with c >= b and c < a + b.
Let a, b and c be the three sides. The conditions below must hold for a triangle (the sum of any two sides must be greater than the third side):
i) a + b > c
ii) b + c > a
iii) a + c > b
Following are the steps to count triangles.
1. Sort the array in non-decreasing order.
2. Initialize two pointers 'i' and 'j' to the first and second elements respectively, and initialize the count of triangles as 0.
3. Fix 'i' and 'j' and find the rightmost index 'k' (or largest 'arr[k]') such that 'arr[i] + arr[j] > arr[k]'. The number of triangles that can be formed with 'arr[i]' and 'arr[j]' as two sides is 'k - j'. Add 'k - j' to the count of triangles.
Let us consider 'arr[i]' as 'a', 'arr[j]' as 'b' and all elements between 'arr[j+1]' and 'arr[k]' as 'c'. The above-mentioned conditions (ii) and (iii) are satisfied because 'arr[i] < arr[j] < arr[k]', and we check condition (i) when we pick 'k'.
4. Increment 'j' to fix the second element again.
Note that in step 3, we can use the previous value of 'k'. The reason is simple: if we know that 'arr[i] + arr[j-1]' is greater than 'arr[k]', then 'arr[i] + arr[j]' will also be greater than 'arr[k]', because the array is sorted in increasing order.
5. If 'j' has reached the end, then increment 'i', initialize 'j' as 'i + 1' and 'k' as 'i + 2', and repeat steps 3 and 4.
Time Complexity: O(n^2).
The time complexity looks higher because of the 3 nested loops. If we take a closer look at the algorithm, we observe that 'k' is initialized only once per iteration of the outermost loop. The innermost loop executes at most O(n) times in total for each iteration of the outermost loop, because 'k' starts from 'i+2' and only ever moves forward up to 'n' across all values of 'j'. Therefore, the time complexity is O(n^2).
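A hedged sketch of the steps above (the function name is mine; distinct indices and non-negative inputs assumed):
#include <algorithm>
#include <vector>
using namespace std;

int countTrianglesTwoPointer(vector<int> arr) {
    sort(arr.begin(), arr.end());
    int n = (int)arr.size(), count = 0;
    for (int i = 0; i < n - 2; ++i) {
        int k = i + 2; // k is set once per i and only ever moves forward
        for (int j = i + 1; j < n - 1; ++j) {
            if (k < j + 1) k = j + 1;
            while (k < n && arr[i] + arr[j] > arr[k]) ++k;
            count += k - j - 1; // arr[j+1..k-1] are all valid third sides
        }
    }
    return count;
}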
I have worked out an algorithm that runs in O(n^2 lgn) time. I think it's correct...
The code is written in C++...
// Returns the index of the element closest to c in the sorted array A[p..q]
int Search_Closest(const vector<int>& A, int p, int q, int c)
{
    if (p < q)
    {
        int r = (p + q) / 2;
        if (c == A[r])
            return r;
        if (p == r)
            return r;
        if (c < A[r])
            return Search_Closest(A, p, r, c); // returns added: the results of the
        else                                   // recursive calls were being discarded
            return Search_Closest(A, r, q, c);
    }
    else
        return p;
}
int no_of_triangles(vector<int>& A, int p, int q) // Returns the no of triangles possible in A[p..q]
{
    int sum = 0;
    Quicksort(A, p, q); // Sorts the array A[p..q] in O(n lg n) expected case time
    for (int i = p; i <= q; i++)
        for (int j = i + 1; j <= q; j++)
        {
            int c = A[i] + A[j];
            int k = Search_Closest(A, j, q, c);
            /* no of triangles formed with A[i] and A[j] as two sides is
               (k+1)-2 if A[k] is smaller than or equal to c, else (k+1)-3;
               as indexing starts from zero we add 1 to the count */
            if (A[k] > c)
                sum += k - 2;
            else
                sum += k - 1;
        }
    return sum;
}
Hope it helps.
A possible answer: we can use binary search to find the value of 'k' and hence improve the time complexity!
Sort the given numbers N0, N1, N2, ..., Nn-1 in descending order as X0 >= X1 >= X2 >= ... >= Xn-1.
Choose each X0 (down to Xn-3) in turn and take the next two items X1 and X2.
For the candidate (X0, X1, X2), check X0 < X1 + X2.
If it holds, a triangle is found; continue with the next choice.
If not, skip the remaining choices for this X0, since no smaller pair can work.
It seems there is no algorithm better than O(n^3). In the worst case, the result set itself has O(n^3) elements.
For Example, if n equal numbers are given, the algorithm has to return n*(n-1)*(n-2) results.

Simplest bubble sort possible

Suppose I want to sort an array of integers of size n, and suppose I have a swap method.
Is this bubble sort implementation of mine correct?
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        if (array[i] < array[j]) swap(array[i], array[j]);
(I just want to know if it's correct or not, I don't care about inefficiency)
It's not correct for a descending-order sort.
Think about array = [2, 1]: it outputs [1, 2].
You can make it sort in descending order by changing j=0 to j=i+1:
for (int i = 0; i < n; i++)
    for (int j = i + 1; j < n; j++)
        if (array[i] < array[j]) swap(array[i], array[j]);
But it is correct for an ascending-order sort.
A simple proof:
Suppose that after each step of the outer for loop we have a[0] <= a[1] <= ... <= a[i-1] <= a[i]; call this suppose_i.
suppose_i holds for i = 0.
If suppose_i holds for 0 <= i < M <= N, then when i = M we start with a[0] <= a[1] <= ... <= a[M-2] <= a[M-1]. After the inner loop runs j from 0 to M, we get a[0] <= a[1] <= ... <= a[M-1] <= a[M]. As the inner loop continues with j from M+1 to N-1, a[M] can only become larger. So suppose_i also holds for i = M.
Yes, it's correct. A proof can be constructed along the following lines.
Whenever the j-loop (the inner one) completes (so j = n and i is about to be increased), a[i] is the maximum and the part before a[i] is in ascending order (proofs below). So when the outer loop is about to complete with i = n-1, a[i] is the max and the items up to index i are ordered; since none of the preceding items is greater than the max, the whole array is ordered.
Proving that a[i] is always the max after the j-loop is simple: i does not change during the j-loop, and whenever j encounters an item larger than a[i], that item is swapped into a[i]; since j scans the whole array, no element larger than a[i] can remain elsewhere.
Proving that the items up to i are ordered is full induction. We will use the above statement about a[i] being the max.
For i = 0 it's trivial (there are no preceding elements): a[0] is the max and "it is ordered".
i = 1 (just for fun): one item got to a[0] (we don't care about its value; it cannot be greater than the max), and a[1] is the max. So a[0..1] is sorted.
Now, if the theses are satisfied after the j-loop ending at i = k, then the following happens:
i <- k+1
Let's say the current item is a[i] = q.
j scans a[] up to k. Since a[k] is the max, it will be swapped to position i. The items beyond i are not touched yet. So essentially the max moves up by one, and one item, in particular q, is added to the first part of the array. Let's see how:
The sorted part up to the max is scanned by j until it finds an item at index m that is larger than a[i] (it will find a[i-1] in the worst case). The items up to m are sorted. Now a[i] is inserted here, and all items in the range [m..i-1] move up by one. Since m is the right place to insert a[i], a[0..i] will be ordered after the move. The only thing left to prove is that the j-loop over [m..i] really performs this move:
At the beginning, the sequence a[i], a[m..i-1] is ordered, so every comparison in this interval triggers a swap: a[i] is always the smallest item in the a[j..i] part. Each swap (i with j) puts the j-th element in its right place (the minimal item to the front), and j steps on to the remaining part of the interval.
So j reaches i = k+1 (no swap there), and a[k+1] is the max, so there are no more swaps in this j-loop; at the end a[0..k+1] is sorted.
So finally, if the theses hold for i = k, then they hold for i = k+1 after a j-loop. We've established that they hold for i = 0 after one j-loop, and the i-loop shows that there will be n j-loops altogether, so the theses hold for i = n-1, which is just what we promised to prove in the first paragraph.
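A small randomized check (my own) of what the two proofs above establish, namely that the j=0 variant sorts ascending:
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    for (int trial = 0; trial < 1000; ++trial) {
        vector<int> a(10);
        for (int& x : a) x = rand() % 100;
        vector<int> expected = a;
        sort(expected.begin(), expected.end());
        int n = (int)a.size();
        for (int i = 0; i < n; i++)       // the original j=0 bubble sort
            for (int j = 0; j < n; j++)
                if (a[i] < a[j]) swap(a[i], a[j]);
        if (a != expected) { cout << "mismatch!" << endl; return 1; }
    }
    cout << "all trials sorted ascending" << endl;
}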

find triangular triplet in an array [duplicate]

A zero-indexed array A consisting of N integers is given. A triplet (P, Q, R) is triangular if 0 <= P < Q < R < N and:
A[P] + A[Q] > A[R],
A[Q] + A[R] > A[P],
A[R] + A[P] > A[Q].
For example, consider array A such that
A[0] = 10 A[1] = 2 A[2] = 5
A[3] = 1 A[4] = 8 A[5] = 20
Triplet (0, 2, 4) is triangular.
Write a function
int triangle(const vector<int> &A);
that, given a zero-indexed array A consisting of N integers, returns 1 if there exists a triangular triplet for this array and returns 0 otherwise.
Assume that:
N is an integer within the range [0..100,000];
each element of array A is an integer within the range [-2,147,483,648..2,147,483,647].
For example, given array A such that
A[0] = 10 A[1] = 2 A[2] = 5
A[3] = 1 A[4] = 8 A[5] = 20
the function should return 1, as explained above. Given array A such that
A[0] = 10 A[1] = 50 A[2] = 5
A[3] = 1
the function should return 0.
Expected worst-case time complexity: O(N*log(N));
Expected worst-case space complexity: O(1)
First claim
First of all, there is no point in taking non-positive numbers into account: there's no chance of satisfying the triangle inequalities if at least one of the numbers is negative or zero. This is obvious; nevertheless, here is the proof:
Assume A, B, C obey the triangle inequality, where C <= 0. Then you have
A + C > B. Hence A > B.
B + C > A. Hence B > A.
(Contradiction.)
Second claim
Suppose A, B, C obey the triangle inequalities, where C is the largest among A, B, C. Then any A2 between A and C, and any B2 between B and C, will also obey the triangle inequalities.
In other words:
A,B,C obey triangle inequalities.
C >= A
C >= B
C >= A2 >= A
C >= B2 >= B
Then A2,B2,C also obey triangle inequalities.
The proof is trivial; it's enough to write out the inequalities explicitly.
The consequence of this is that if C is the largest number for which you want to satisfy the triangle inequality, you should check only the two largest numbers from the set not exceeding C, and test whether A + B > C.
Third claim
If 0 < A <= B <= C do not obey the triangle inequalities, then C >= 2*A.
The proof is trivial as well: A + B <= C, hence A + A <= C, hence C >= 2*A.
The algorithm
1. Pick the 2 largest numbers B and C (B <= C).
2. Pick the largest number A not exceeding B, such that A <= B <= C. Make sure it's not the same element as B or C, and take only positive integers into account. If unable to pick such a number - done (no triangulars).
3. Check if A, B, C obey the triangle inequality: test whether A + B > C (done if they do).
4. Discard the largest number C. Substitute C = B, then B = A.
5. Go to step 2.
Fourth claim
The above algorithm is logarithmic in the maximum integer size; in other words, it's linear in the bitness of the data type. Its worst-case complexity is independent of the input length, hence it's O(1) in the input length.
Proof:
At every iteration (that does not find a solution) we have A <= C/2 (by the third claim). After two such iterations, A becomes the new C. This means that after every two iterations the largest number becomes at least 2 times smaller.
This gives us an upper bound on the number of iterations. Given that our integers are limited to 31 bits (we ignore negatives), whereas the minimum interesting largest C is 1, this gives us no more than 2 * (31 - 1) = 60 iterations.
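A hedged sketch of the algorithm above (mine; it copies and sorts to stand in for "pick the largest remaining numbers", so it runs in O(N log N) overall even though, by the fourth claim, the scan itself stops after a bounded number of steps):
#include <algorithm>
#include <functional>
#include <vector>
using namespace std;

int triangle(const vector<int>& A) {
    vector<long long> v; // long long so A + B cannot overflow
    for (int x : A)
        if (x > 0) v.push_back(x); // first claim: ignore non-positive values
    sort(v.begin(), v.end(), greater<long long>());
    // second claim: for each candidate C, only the two largest numbers
    // not exceeding it need to be tested
    for (size_t i = 0; i + 2 < v.size(); ++i)
        if (v[i + 1] + v[i + 2] > v[i]) return 1;
    return 0;
}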
If O(N³) is acceptable time complexity then the Pseudocode below should work. If you have stricter time complexity requirements then you'll have to specify them.
for (P in A){
    for (Q in A){
        for (R in A){
            if(A[P] > 0 && A[Q] > 0 && A[R] > 0){
                if(A[P] > A[R] - A[Q] && A[Q] > A[P] - A[R] && A[R] > A[Q] - A[P]){
                    return 1;
                }
            }
        }
    }
}
return 0;
The reasoning behind the if statements is this:
Since the ints can be anything up to the maximum int, you have to deal with overflow: adding two of them together could wrap around if there are two very large ints in the array. So instead we test that they are positive, and then rewrite the formulae to perform the same checks with subtraction. We don't need to do anything if any of the values is negative or 0, since:
Assume x <= 0
Assume x+y > z
Assume x+z > y
Then y > z and z > y, which is a contradiction.
So no negative or zero valued ints can be part of a triple.
Sorting would be very cool, but the const vector and the O(1) space requirement don't allow it.
(because this is homework) Some hint: triangular numbers are close to each other.
A hint: if you pick just two members of the array then what are the limits on the possible value of the third member of a triangular triplet? Any number outside those limits can be rejected immediately.
There are many in-place sorts; use one of them to sort the array - say comb sort for smaller arrays (time complexity O(N^2)) or heap sort (complexity O(N log(N))).
Once you have a sorted array, the problem should reduce to whether there is a set of 3 consecutive numbers where A[X] > (A[X-1] + A[X+1]) / 2, i.e. the middle number is greater than the average of the preceding and succeeding numbers (sadly this is a guess, I don't have a real basis - if it's incorrect I hope someone corrects me, but there should be some good way to restate the 'triangle' requirement so it is more easily checked).
Now you just have an O(N) scan over the sorted array to check whether the condition holds, hence the overall complexity will be that of the sorting algorithm (best case N logN).
