Question Description : Given an array arr[] of length N, the task is to find the XOR of the pairwise sums of every possible unordered pair of the array.
I tried to solve this question using the method described in this post.
My Code :
int xorAllSum(int a[], int n)
{
    int curr, prev = 0;
    int ans = 0;
    for (int k = 0; k < 32; k++) {
        int o = 0, z = 0;
        for (int i = 0; i < n; i++) {
            if (a[i] & (1 << k)) {
                o++;
            }
            else {
                z++;
            }
        }
        curr = o * z + prev;
        if (curr & 1) {
            ans = ans | (1 << k);
        }
        prev = o * (o - 1) / 2;
    }
    return ans;
}
Code Description : I am finding out, at each bit, whether our answer will have that bit set or not. To do this, for each bit position I find the count of all the numbers which have a set bit at that position (represented by 'o' in the code) and the count of numbers having an unset bit at that position (represented by 'z').
Now if we pair up these numbers (a number having a set bit with a number having an unset bit), then we will get a set bit in their sum (because we need the XOR of all pair sums).
The factor of 'prev' is included to account for the carry-over bits. Now we know that the answer will have a set bit at the current position only if the number of set bits is odd, as we are doing an XOR operation.
But I am not getting the correct output. Can anyone please help me?
Test Cases :
n = 3, a[] = {1, 2, 3} => (1 + 2) ^ (1 + 3) ^ (2 + 3)
=> 3 ^ 4 ^ 5 = 2
=> Output : 2
n = 6
a[] = {1, 2, 10, 11, 18, 20}
Output : 50
n = 8
a[] = {10, 26, 38, 44, 51, 70, 59, 20}
Output : 182
Constraints : 2 <= n <= 10^8
Also, here we need to consider UNORDERED pairs, not ordered pairs, for the answer.
PS : I know that the same question has been asked before but I couldn't explain my problem with this much detail in the comments so I created a new post. I am new here, so please pardon me and give me your feedback :)
I suspect that the idea in the post you referred to is missing important details, if it could work at all with the stated complexity. (I would be happy to better understand and be corrected should that author wish to clarify their method further.)
Here's my understanding of at least one author's intention for an O(n * log n * w) solution, where w is the number of bits in the largest sum, as well as JavaScript code with a random comparison to brute force to show that it works (easily translatable to C or Python).
The idea is to examine the contribution of each bit one at a time. Since in any one iteration, we are only interested in whether the kth bit in the sums is set, we can remove all parts of the numbers that include higher bits, taking them each modulo 2^(k + 1).
Now the sums that would necessarily have the kth bit set are in the intervals [2^k, 2^(k + 1)) (that's when the kth bit is the highest) and [2^(k+1) + 2^k, 2^(k+2) − 2] (when we have both the kth and (k+1)th bits set). So in the iteration for each bit, we sort the input list (modulo 2^(k + 1)), and for each left summand, we decrement a pointer to the end of each of the two intervals, and binary search the relevant start index.
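As a quick sanity check of those intervals (a worked example of my own, for k = 1): each summand taken modulo 2^2 = 4 lies in [0, 3], so a pair sum lies in [0, 6]; bit 1 of the sum is set exactly for the sums 2 and 3 (the interval [2^1, 2^2)) and for the sum 6 (the interval [2^2 + 2^1, 2^3 − 2]).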
// https://stackoverflow.com/q/64082509
// Returns the lowest index of a value
// greater than or equal to the target
function lowerIdx(a, val, left, right){
  if (left >= right)
    return left;
  const mid = left + ((right - left) >> 1);
  if (a[mid] < val)
    return lowerIdx(a, val, mid+1, right);
  else
    return lowerIdx(a, val, left, mid);
}
function bruteForce(A){
  let answer = 0;
  for (let i=1; i<A.length; i++)
    for (let j=0; j<i; j++)
      answer ^= A[i] + A[j];
  return answer;
}
function f(A, W){
  const n = A.length;
  const _A = new Array(n);
  let result = 0;
  for (let k=0; k<W; k++){
    for (let i=0; i<n; i++)
      _A[i] = A[i] % (1 << (k + 1));
    _A.sort((a, b) => a - b);
    let pairs_with_kth_bit = 0;
    let l1 = 1 << k;
    let r1 = 1 << (k + 1);
    let l2 = (1 << (k + 1)) + (1 << k);
    let r2 = (1 << (k + 2)) - 2;
    let ptr1 = n - 1;
    let ptr2 = n - 1;
    for (let i=0; i<n-1; i++){
      // Interval [2^k, 2^(k+1))
      while (ptr1 > i+1 && _A[i] + _A[ptr1] >= r1)
        ptr1 -= 1;
      const idx1 = lowerIdx(_A, l1-_A[i], i+1, ptr1);
      let sum = _A[i] + _A[idx1];
      if (sum >= l1 && sum < r1)
        pairs_with_kth_bit += ptr1 - idx1 + 1;
      // Interval [2^(k+1)+2^k, 2^(k+2)−2]
      while (ptr2 > i+1 && _A[i] + _A[ptr2] > r2)
        ptr2 -= 1;
      const idx2 = lowerIdx(_A, l2-_A[i], i+1, ptr2);
      sum = _A[i] + _A[idx2];
      if (sum >= l2 && sum <= r2)
        pairs_with_kth_bit += ptr2 - idx2 + 1;
    }
    if (pairs_with_kth_bit & 1)
      result |= 1 << k;
  }
  return result;
}
var As = [
  [1, 2, 3], // 2
  [1, 2, 10, 11, 18, 20], // 50
  [10, 26, 38, 44, 51, 70, 59, 20] // 182
];
for (let A of As){
  console.log(JSON.stringify(A));
  console.log(`DP, brute force: ${ f(A, 10) }, ${ bruteForce(A) }`);
  console.log('');
}
var numTests = 500;
for (let i=0; i<numTests; i++){
  const W = 8;
  const A = [];
  const n = 12;
  for (let j=0; j<n; j++){
    const num = Math.floor(Math.random() * (1 << (W - 1)));
    A.push(num);
  }
  const fA = f(A, W);
  const brute = bruteForce(A);
  if (fA != brute){
    console.log('Mismatch:');
    console.log(A);
    console.log(fA, brute);
    console.log('');
  }
}
console.log("Done testing.");
Related
I need to find the minimum sum of the distances between an element of the array and a set of k other elements of the array, not including that element itself.
For example:
arr = {5, 7, 4, 9}
k = 2
min_sum(5) = |5-4| + |5-7| = 3
min_sum(7) = |7-9| + |7-5| = 4
min_sum(4) = |4-5| + |4-7| = 4
min_sum(9) = |9-7| + |9-5| = 6
So, a naive solution would be to take the absolute difference between the i-th element and each element of the array, then sort these differences and sum the first k of them. But it takes too long... I believe this is a DP problem or something like that (maybe treaps).
Input:
n - number of array elements
k - number of elements in a set
array
Constraints:
2 <= n <= 350 000
1 <= k < n
1 <= a[i] <= 10^9
time limit: 2 seconds
Input:
4
2
5 7 4 9
Output:
3 4 4 6
What is the most efficient way to solve this problem? How to optimize the search for the minimum sum?
This is my code in C++, and it runs for about 3 minutes for n = 350 000, k = 150 000:
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, k, tp;
    unsigned long long temp;
    cin >> n >> k;
    vector<unsigned int> org;
    vector<unsigned int> a;
    vector<unsigned long long> cum(n, 0);
    //unordered_map<int, long long> ans;
    unordered_map<int, long long> mp;
    for (int i = 0; i < n; i++) {
        cin >> tp;
        org.push_back(tp);
        a.push_back(tp);
    }
    /*
    srand(time(0));
    for (int i = 0; i < n; i++) {
        org.push_back(rand());
        a.push_back(org[i]);
    }
    */
    sort(a.begin(), a.end());
    partial_sum(a.begin(), a.end(), cum.begin());
    mp[a[0]] = cum[k] - cum[0] - a[0] * k;
    //ans[a[0]] = mp[a[0]];
    for (int i = 1; i <= k; i++) {
        mp[a[i]] = a[i] * i - cum[i-1] + cum[k] - cum[i] - a[i] * (k-i);
    }
    for (int i = 1; i < n-k; i++) {
        for (int j = 0; j <= k; j++) {
            //if (ans.find(a[i+j]) != ans.end()) { continue; }
            temp = ( (a[i+j] * j) - (cum[i+j-1] - cum[i-1]) ) + ( cum[i+k] - cum[i+j] - a[i+j] * (k-j) );
            if (mp.find(a[i+j]) == mp.end()) { mp[a[i+j]] = temp; }
            else if (mp[a[i+j]] > temp) { mp[a[i+j]] = temp; }
            //else { ans[a[i+j]] = mp[a[i+j]]; }
        }
    }
    for (int i = 0; i < n; i++) {
        cout << mp[org[i]] << " ";
    }
    return 0;
}
We can solve this problem efficiently by taking the sliding window approach.
It seems safe to assume that there are no duplicates in the array. If it contains duplicates, then we can simply discard them with the help of a HashSet.
The next step is to sort the array to guarantee that the closest k elements will be within the window [i - k; i + k] for each index i.
We will keep three variables for the window: left, right and currentSum. They will be adjusted accordingly at each iteration. Initially, left = 0 and right = k (since the element at index 0 doesn't have elements to its left) and currentSum = the result for index 0.
The key consideration is that the variables left and right are unlikely to change 'significantly' during the iteration. To be more precise, at each iteration we should attempt to move the window to the right by comparing the distances nums[i + right + 1] - nums[i] vs nums[i] - nums[i - left]. (You can prove mathematically that there is no point in trying to move the window to the left.) If the former is less than the latter, we increment right and decrement left while updating currentSum at the same time.
In order to recalculate currentSum, I would suggest writing down expressions for two adjacent iterations and looking closer at the difference between them.
For instance, if
result[i] = nums[i + 1] + ... + nums[i + right] - (nums[i - 1] + ... + nums[i - left]) + (left - right) * nums[i], then
result[i + 1] = nums[i + 2] + ... + nums[i + right] - (nums[i] + ... + nums[i - left]) + (left - right + 2) * nums[i + 1].
As we can see, these expressions are quite similar. The time complexity of this solution is O(n * log(n)); my solution in Java for n ~ 500_000 and k ~ 400_000 runs within 300 ms. I hope this, together with the considerations above, will help you.
Assuming that we have sorted the original array nums and computed the mapping element->its index in the sorted array(for instance, through binary search), we can proceed with finding the distances.
public long[] findMinDistances(int[] nums, int k) {
    long[] result = new long[nums.length];
    long currentSum = 0;
    for (int i = 1; i <= k; i++) {
        currentSum += nums[i];
    }
    result[0] = currentSum - (long) k * nums[0];
    int left = 0;
    int right = k;
    currentSum = result[0];
    for (int i = 1; i < nums.length; i++) {
        int current = nums[i];
        int previous = nums[i - 1];
        currentSum -= (long) (left - right) * previous;
        currentSum -= previous;
        if (right >= 1) {
            currentSum -= current;
            left++;
            right--;
        } else {
            currentSum += nums[i - 1 - left];
        }
        currentSum += (long) (left - right) * current;
        while (i + right + 1 < nums.length && i - left >= 0 &&
               nums[i + right + 1] - current < current - nums[i - left]) {
            currentSum += nums[i + right + 1] - current;
            currentSum -= current - nums[i - left];
            right++;
            left--;
        }
        result[i] = currentSum;
    }
    return result;
}
For every element e in the original array, its minimal sum of distances will be result[mapping.get(e)].
I think this one is better:
Sort the array first; then you can use this fact:
for every element i in the array, the k minimum distances to other elements will be the distances to the elements within k places around it in the sorted array
(of course, maybe to the right, to the left, or from both sides).
So for every element i, to calculate min_sum(a[i]), do this:
First, min_sum(a[i]) = 0.
Then, walk with two indexes; let's mark them r (to the right of i) and l (to the left of i),
and compare the distance (a[r] - a[i]) with the distance (a[i] - a[l]).
Add the smaller one to min_sum(a[i]); if it was the right one, then
increase the index r, and if it was the left one, then decrease the index l.
Of course, if l goes below 0 or r reaches n, you must take the distances to elements from the other side only.
Anyway, do that until you have summed k elements, and that's it.
This way you don't sort anything but the main array.
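Here is a minimal C++ sketch of that idea (the function name is mine; it runs in O(n*k) after the sort, so it illustrates the method rather than meeting the 2-second limit at the largest constraints):
#include <bits/stdc++.h>
using namespace std;

// Hypothetical helper: for each element of the sorted array, greedily
// take the nearer of the two neighbouring candidates k times.
vector<long long> minSums(vector<long long> a, int k) {
    sort(a.begin(), a.end());
    int n = a.size();
    vector<long long> res(n, 0);
    for (int i = 0; i < n; i++) {
        int l = i - 1, r = i + 1;
        for (int taken = 0; taken < k; taken++) {
            if (l < 0)                           res[i] += a[r++] - a[i]; // left side exhausted
            else if (r >= n)                     res[i] += a[i] - a[l--]; // right side exhausted
            else if (a[i] - a[l] <= a[r] - a[i]) res[i] += a[i] - a[l--]; // left neighbour closer
            else                                 res[i] += a[r++] - a[i]; // right neighbour closer
        }
    }
    return res; // res[i] is min_sum of the i-th smallest element
}
For the example above, minSums({5, 7, 4, 9}, 2) returns {4, 3, 4, 6} in sorted order (4, 5, 7, 9).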
If I have a 3x4 multiplication table
1 2 3 4
2 4 6 8
3 6 9 12
and put all these numbers in sorted order:
1 2 2 3 3 4 4 6 6 8 9 12
What number is at the K-th position?
For example, if K = 5, then this is the number 3.
N and M are in the range 1 to 500 000. K is always less than N * M.
I've tried to use binary search like in this (If an NxM multiplication table is put in order, what is the number in the middle?) solution, but there is some mistake if the desired value is not in the middle of the sequence.
long findK(long n, long m, long k)
{
    long min = 1;
    long max = n * m;
    long ans = 0;
    long prev_sum = 0;
    while (min <= max) {
        ans = (min + max) / 2;
        long sum = 0;
        for (int i = 1; i <= m; i++)
        {
            sum += std::min(ans / i, n);
        }
        if (prev_sum + 1 == sum) break;
        sum--;
        if (sum < k) min = ans - 1;
        else if (sum > k) max = ans + 1;
        else break;
        prev_sum = sum;
    }
    long sum = 0;
    for (int i = 1; i <= m; i++)
        sum += std::min((ans - 1) / i, n);
    if (sum == k) return ans - 1;
    else return ans;
}
For example, when N = 1000, M = 1000, K = 876543; expected value is 546970, but returned 546972.
I believe that the breakthrough will lie with counting the quantity of factorizations of each integer up to the desired point. For each integer prod, you need to count how many simple factorizations i*j there are with i <= m, j <= n. See the divisor functions.
You need to iterate prod until you reach the desired point, midpt = N*M / 2. Cumulatively subtract σ0(prod) from midpt until you reach 0. Note that once prod passes min(m, n), you need to start cropping the divisor count, due to running off the edge of the multiplication table.
Is that enough to get you started?
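As a rough sketch of the counting step (a hypothetical helper of my own, in C++): for a given value prod, count the pairs (i, j) with i <= m, j <= n and i * j == prod by walking the divisors up to sqrt(prod).
// Hypothetical helper: count factorizations of prod that fit an m x n table.
long long countFactorizations(long long prod, long long m, long long n) {
    long long count = 0;
    for (long long d = 1; d * d <= prod; d++) {
        if (prod % d != 0) continue;
        long long e = prod / d;
        if (d <= m && e <= n) count++;           // (d, e) fits inside the table
        if (d != e && e <= m && d <= n) count++; // the mirrored pair (e, d)
    }
    return count;
}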
The code of the third method from this site (https://leetcode.com/articles/kth-smallest-number-in-multiplication-table/#) solves the problem.
bool enough(int x, int m, int n, int k) {
    int count = 0;
    for (int i = 1; i <= m; i++) {
        count += std::min(x / i, n);
    }
    return count >= k;
}

int findK(int m, int n, int k) {
    int lo = 1, hi = m * n;
    while (lo < hi) {
        int mi = lo + (hi - lo) / 2;
        if (!enough(mi, m, n, k)) lo = mi + 1;
        else hi = mi;
    }
    return lo;
}
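For the failing case from the question, findK(1000, 1000, 876543) should then return the expected 546970.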
I tried to create a function which takes two variables, n and k.
The function returns the number of positive integers that have all prime factors less than or equal to k. The number of positive integers is limited by n, which is the largest positive integer considered.
For example, if k = 4 and n = 10, the positive integers which have all prime factors less than or equal to 4 are 1, 2, 3, 4, 6, 8, 9, 12... (1 is always included even though it's not prime), but since n is 10, 12 and higher numbers are ignored.
So the function will return 7. The code I wrote works for smaller values of n, but it just keeps on running for larger values.
How can I optimize this code? Should I start from scratch and come up with a better algorithm?
#include <iostream>
#include <vector>
using namespace std;

bool is_prime(int x); // forward declaration (is_prime is defined below)

int generalisedHammingNumbers(int n, int k)
{
    vector<int> store;
    vector<int> each_prime = {};
    for (int i = 1; i <= n; ++i)
    {
        for (int j = 1; j <= i; ++j)
        {
            if (i % j == 0 && is_prime(j))
            {
                each_prime.push_back(j); // temporary vector of prime factors for each integer (i)
            }
        }
        for (int m = 0; m < each_prime.size(); ++m)
        {
            while (each_prime[m] <= k && m < each_prime.size() - 1) // search for prime factor greater than k
            {
                ++m;
            }
            if (each_prime[m] > k); // do nothing for prime factor greater than k
            else store.push_back(i); // if no prime factor greater than k, i is valid, store i
        }
        each_prime = {};
    }
    return (store.size() + 1);
}
bool is_prime(int x)
{
    vector<int> test;
    if (x != 1)
    {
        for (int i = 2; i < x; ++i)
        {
            if (x % i == 0) test.push_back(i);
        }
        if (test.size() == 0) return true;
        else return false;
    }
    return false;
}
int main()
{
    long n;
    int k;
    cin >> n >> k;
    long result = generalisedHammingNumbers(n, k);
    cout << result << endl;
}
Should I start from scratch and come up with a better algorithm?
Yes... I think so.
This seems to me a job for the Sieve of Eratosthenes.
So I propose to:
1) create a std::vector<bool> to detect, through Eratosthenes, the primes up to n
2) remove the primes starting from k+1, and their multiples, from the pool of your numbers (another std::vector<bool>)
3) count the remaining true values in the pool vector
The following is a full working example
#include <cmath>
#include <vector>
#include <iostream>
#include <algorithm>

std::size_t foo (std::size_t n, std::size_t k)
{
    std::vector<bool> primes(n+1U, true);
    std::vector<bool> pool(n+1U, true);
    std::size_t const sqrtOfN = std::sqrt(n);
    // first remove the non-primes from the primes list (Sieve of Eratosthenes)
    for ( auto i = 2U ; i <= sqrtOfN ; ++i )
        if ( primes[i] )
            for ( auto j = i << 1 ; j <= n ; j += i )
                primes[j] = false;
    // then remove from the pool the primes bigger than k, and their multiples
    for ( auto i = k+1U ; i <= n ; ++i )
        if ( primes[i] )
            for ( auto j = i ; j <= n ; j += i )
                pool[j] = false;
    // last, count the true values in the pool (excluding the zero)
    return std::count(pool.begin()+1U, pool.end(), true);
}
int main ()
{
    std::cout << foo(10U, 4U) << std::endl;
}
Generate the primes using a Sieve of Eratosthenes, and then use a modified coin-change algorithm to find numbers which are products of only those primes. In fact, one can do both simultaneously, like this (in Python, but easily convertible to C++):
def limited_prime_factors(n, k):
    ps = [False] * (k+1)
    r = [True] * 2 + [False] * n
    for p in xrange(2, k+1):
        if ps[p]: continue
        for i in xrange(p, k+1, p):
            ps[i] = True
        for i in xrange(p, n+1, p):
            r[i] = r[i//p]
    return [i for i, b in enumerate(r) if b]

print limited_prime_factors(100, 3)
The output is:
[0, 1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, 48, 54, 64, 72, 81, 96]
Here, each time we find a prime p, we strike out all multiples of p in the ps array (as in a standard Sieve of Eratosthenes), and then, in the r array, we mark each multiple i of p as valid exactly when i/p is valid, i.e. when all of its prime factors are less than or equal to p.
It runs in O(n) space and O(n log log k) time, assuming n>k.
A simpler O(n log k) solution tests if all the factors of a number are less than or equal to k:
def limited_prime_factors(n, k):
    r = [True] * 2 + [False] * n
    for p in xrange(2, k+1):
        for i in xrange(p, n+1, p):
            r[i] = r[i//p]
    return [i for i, b in enumerate(r) if b]
Here's an Eulerian version in Python (it seems about 1.5 times faster than Paul Hankin's). We generate only the numbers themselves, by multiplying a list by each prime and its powers in turn.
import time

start = time.time()
n = 1000000
k = 100
total = 1
a = [None for i in range(0, n+1)]
s = []
p = 1
while (p < k):
    p = p + 1
    if a[p] is None:
        #print("\n\nPrime: " + str(p))
        a[p] = True
        total = total + 1
        s.append(p)
        limit = n / p
        new_s = []
        for i in s:
            j = i
            while j <= limit:
                new_s.append(j)
                #print j*p
                a[j * p] = True
                total = total + 1
                j = j * p
        s = new_s
print("\n\nGilad's answer: " + str(total))
end = time.time()
print(end - start)
# Paul Hankin's solution
def limited_prime_factors(n, k):
    ps = [False] * (k+1)
    r = [True] * 2 + [False] * n
    for p in xrange(2, k+1):
        if ps[p]: continue
        for i in xrange(p, k+1, p):
            ps[i] = True
        for i in xrange(p, n+1, p):
            r[i] = r[i//p]
    return len([i for i, b in enumerate(r) if b]) - 1

start = time.time()
print "\nPaul's answer:" + str(limited_prime_factors(1000000, 100))
end = time.time()
print(end - start)
I have an array with the elements {7, 2, 1}, and the idea is to compute 7 * 2 + 7 * 1 + 2 * 1, which is basically this algorithm:
for (int i = 0; i < n - 1; ++i)
    for (int k = i + 1; k < n; ++k)
        sum += a[i] * a[k];
Here, a is the array in which I have the numbers and n is the number of elements. I need a more efficient algorithm for doing this, and I have no clue how to do it. Can someone give me a hand?
Thank you!
You can do better in the general case. Time to do some math. Let's look at the 3-element version; we have:
ab + ac + bc
= 1/2 * (2ab + 2ac + 2bc)
= 1/2 * (2ab + 2ac + 2bc + a^2 + b^2 + c^2 - (a^2 + b^2 + c^2))
= 1/2 * ((a+b+c)^2 - (a^2 + b^2 + c^2))
That is:
int sum = 0;
int sum_sq = 0;
for (int i : arr) {
    sum += i;
    sum_sq += i*i;
}
int result = (sum*sum - sum_sq) / 2;
This is O(n) multiplications, instead of O(n^2). This will certainly be better than the naive implementation at some point; whether or not it's better for just 3 elements is something I haven't timed.
#chux's suggestion is essentially to redistribute operations:
a[i]*a[i+1] + a[i]*a[i+2] + ... + a[i]*a[n-1]
-->
a[i] * (a[i+1] + ... + a[n-1])
combined with avoiding unnecessary recomputation of the partial sums of the (a[i+1] + ... + a[n-1]) terms, by leveraging the fact that each differs from the next by the value of one element of the input array.
Here's a one-pass implementation with O(1) overhead:
int psum(size_t n, int array[n]) {
    int result = 0;
    int rsum = array[n - 1];
    for (int i = n - 2; i >= 0; i--) {
        result += array[i] * rsum;
        rsum += array[i];
    }
    return result;
}
The sum of all elements to the right of index i is maintained from iteration to iteration in the variable rsum. It's unnecessary to track its various values in an array, because we need each value for only one iteration of the loop.
This scales linearly with the number of elements in the input array. You'll see that the number and type of operations are quite similar to #Barry's answer, but nothing analogous to his final step is required, which saves a few operations.
As #Barry observes in comments, the iteration can also be run in the other direction, in conjunction with tracking the left-hand partial sums instead of the right-hand ones. That would diverge a bit more from #chux's description, but it relies on exactly the same principles.
We have (a + b + c + ...)^2 = (a^2 + b^2 + c^2 + ...) + 2(ab + bc + ca + ...).
You want the sum S = ab + bc + ca + ..., which has O(n^2) pairs (using 2 nested loops).
You can do 2 separate loops: one calculates P = a^2 + b^2 + c^2 + ... in O(n) time, and another calculates Q = (a + b + c + ...)^2, also in O(n) time. Then take S = (Q - P) / 2.
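For reference, here is a minimal C++ sketch of this identity (the function name is mine), using the standard accumulate/inner_product algorithms:
#include <numeric>
#include <vector>

// Hypothetical helper: S = (Q - P) / 2, where Q = (sum of elements)^2
// and P = sum of squares.
long long pairwiseProductSum(const std::vector<long long>& a) {
    long long s = std::accumulate(a.begin(), a.end(), 0LL);               // a + b + c + ...
    long long p = std::inner_product(a.begin(), a.end(), a.begin(), 0LL); // a^2 + b^2 + c^2 + ...
    return (s * s - p) / 2;                                               // S = (Q - P) / 2
}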
Make 1 pass: walk from the end of [a] to the front and form a sum of all the elements "to the right".
In a 2nd pass, multiply a[i] * sum[i].
O(n).
#include <stdio.h>
#include <stdlib.h>

long sum0(int a[], int n) {
    long sum = 0;
    for (int i = 0; i < n - 1; ++i)
        for (int k = i + 1; k < n; ++k)
            sum += a[i] * a[k];
    return sum;
}

long sum1(int a[], int n) {
    int long sums[n];
    sums[n - 1] = 0;
    for (int i = n - 2; i >= 0; i--) {
        sums[i] = a[i+1] + sums[i + 1];
    }
    long sum = 0;
    for (int i = 0; i < n - 1; ++i)
        sum += a[i] * sums[i];
    return sum;
}

void test(int a[], int n) {
    long s0 = sum0(a, n);
    long s1 = sum1(a, n);
    if (s0 != s1) printf("%9ld %9ld\n", s0, s1);
}

void tests(int k) {
    while (k--) {
        int n = rand() % 10 + 2;
        int a[n + 1];
        for (int m = 0; m < n; m++)
            a[m] = rand() % 256;
        test(a, n);
    }
}

int main() {
    int a[3] = { 7, 2, 1 };
    printf("%ld\n", sum1(a, 3));
    tests(1000000);
    puts("Done");
}
As it turns out, the sums[] array is not needed either, as the running sum needs only 1 location. This effectively makes this answer similar to the others:
long sum1(int a[], int n) {
    int long sums = 0;
    long sum = 0;
    for (int i = n - 2; i >= 0; i--) {
        sums = a[i+1] + sums;
        sum += a[i] * sums;
    }
    return sum;
}
I stumbled upon this problem in the Codility Lessons; here is the description:
A non-empty zero-indexed array A consisting of N integers is given.
A triplet (X, Y, Z), such that 0 ≤ X < Y < Z < N, is called a double slice.
The sum of double slice (X, Y, Z) is the total of A[X + 1] + A[X + 2] + ... + A[Y − 1] + A[Y + 1] + A[Y + 2] + ... + A[Z − 1].
For example, array A such that:
A[0] = 3
A[1] = 2
A[2] = 6
A[3] = -1
A[4] = 4
A[5] = 5
A[6] = -1
A[7] = 2
contains the following example double slices:
double slice (0, 3, 6), sum is 2 + 6 + 4 + 5 = 17,
double slice (0, 3, 7), sum is 2 + 6 + 4 + 5 − 1 = 16,
double slice (3, 4, 5), sum is 0.
The goal is to find the maximal sum of any double slice.
Write a function:
int solution(vector<int> &A);
that, given a non-empty zero-indexed array A consisting of N integers, returns the maximal sum of any double slice.
For example, given:
A[0] = 3
A[1] = 2
A[2] = 6
A[3] = -1
A[4] = 4
A[5] = 5
A[6] = -1
A[7] = 2
the function should return 17, because no double slice of array A has a sum of greater than 17.
Assume that:
N is an integer within the range [3..100,000];
each element of array A is an integer within the range [−10,000..10,000].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
I have already read about the algorithm that counts MaxSum starting at index i and ending at index i, but I don't know why my approach sometimes gives bad results. The idea is to compute MaxSum ending at index i, omitting the minimum value in the range 0..i. And here is my code:
int solution(vector<int> &A) {
    int n = A.size();
    int end = 2;
    int ret = 0;
    int sum = 0;
    int min = A[1];
    while (end < n-1)
    {
        if (A[end] < min)
        {
            sum = max(0, sum + min);
            ret = max(ret, sum);
            min = A[end];
            ++end;
            continue;
        }
        sum = max(0, sum + A[end]);
        ret = max(ret, sum);
        ++end;
    }
    return ret;
}
I would be glad if you could help me point out the flaw!
My solution based on bidirectional Kadane's algorithm. More details on my blog here. Scores 100/100.
public int solution(int[] A) {
    int N = A.length;
    int[] K1 = new int[N];
    int[] K2 = new int[N];
    for (int i = 1; i < N-1; i++) {
        K1[i] = Math.max(K1[i-1] + A[i], 0);
    }
    for (int i = N-2; i > 0; i--) {
        K2[i] = Math.max(K2[i+1] + A[i], 0);
    }
    int max = 0;
    for (int i = 1; i < N-1; i++) {
        max = Math.max(max, K1[i-1] + K2[i+1]);
    }
    return max;
}
Here is my code:
#include <bits/stdc++.h>
using namespace std;

int get_max_sum(const vector<int>& a) {
    int n = a.size();
    vector<int> best_pref(n);
    vector<int> best_suf(n);
    // Compute the best sum among all x values assuming that y = i.
    int min_pref = 0;
    int cur_pref = 0;
    for (int i = 1; i < n - 1; i++) {
        best_pref[i] = max(0, cur_pref - min_pref);
        cur_pref += a[i];
        min_pref = min(min_pref, cur_pref);
    }
    // Compute the best sum among all z values assuming that y = i.
    int min_suf = 0;
    int cur_suf = 0;
    for (int i = n - 2; i > 0; i--) {
        best_suf[i] = max(0, cur_suf - min_suf);
        cur_suf += a[i];
        min_suf = min(min_suf, cur_suf);
    }
    // Check all y values (y = i) and return the answer.
    int res = 0;
    for (int i = 1; i < n - 1; i++)
        res = max(res, best_pref[i] + best_suf[i]);
    return res;
}

int get_max_sum_dummy(const vector<int>& a) {
    // Try all possible values of x, y and z.
    int res = 0;
    int n = a.size();
    for (int x = 0; x < n; x++)
        for (int y = x + 1; y < n; y++)
            for (int z = y + 1; z < n; z++) {
                int cur = 0;
                for (int i = x + 1; i < z; i++)
                    if (i != y)
                        cur += a[i];
                res = max(res, cur);
            }
    return res;
}

bool test() {
    // Generate a lot of small test cases and compare the output of
    // a brute force and the actual solution.
    bool ok = true;
    for (int test = 0; test < 10000; test++) {
        int size = rand() % 20 + 3;
        vector<int> a(size);
        for (int i = 0; i < size; i++)
            a[i] = rand() % 20 - 10;
        if (get_max_sum(a) != get_max_sum_dummy(a))
            ok = false;
    }
    for (int test = 0; test < 10000; test++) {
        int size = rand() % 20 + 3;
        vector<int> a(size);
        for (int i = 0; i < size; i++)
            a[i] = rand() % 20;
        if (get_max_sum(a) != get_max_sum_dummy(a))
            ok = false;
    }
    return ok;
}
The actual solution is the get_max_sum function (the other two are a brute-force solution and a tester function that generates random arrays and compares the output of the brute force and the actual solution; I used them for testing purposes only).
The idea behind my solution is to compute the maximum sum in a subarray that starts somewhere before i and ends at i - 1, then do the same thing for suffixes (best_pref[i] and best_suf[i], respectively). After that I just iterate over all i and return the best value of best_pref[i] + best_suf[i]. It works correctly because best_pref[y] finds the best x for a fixed y, best_suf[y] finds the best z for a fixed y, and all possible values of y are checked.
def solution(A):
    n = len(A)
    K1 = [0] * n
    K2 = [0] * n
    for i in range(1, n-1, 1):
        K1[i] = max(K1[i-1] + A[i], 0)
    for i in range(n-2, 0, -1):
        K2[i] = max(K2[i+1] + A[i], 0)
    maximum = 0
    for i in range(1, n-1, 1):
        maximum = max(maximum, K1[i-1] + K2[i+1])
    return maximum

def main():
    A = [3, 2, 6, -1, 4, 5, -1, 2]
    print(solution(A))

if __name__ == '__main__': main()
Ruby 100%
def solution(a)
  max_starting = (a.length - 2).downto(0).each.inject([[], 0]) do |(acc, max), i|
    [acc, acc[i] = [0, a[i] + max].max]
  end.first
  max_ending = 1.upto(a.length - 3).each.inject([[], 0]) do |(acc, max), i|
    [acc, acc[i] = [0, a[i] + max].max]
  end.first
  max_ending.each_with_index.inject(0) do |acc, (el, i)|
    [acc, el.to_i + max_starting[i + 2].to_i].max
  end
end