A zero-indexed array A consisting of N different integers is given. The array contains integers in the range [1..(N + 1)], which means that exactly one element is missing.
Your goal is to find that missing element.
Write a function:
int solution(int A[], int N);
that, given a zero-indexed array A, returns the value of the missing element.
For example, given array A such that:
A[0] = 2 A[1] = 3 A[2] = 1 A[3] = 5
the function should return 4, as it is the missing element.
Assume that:
N is an integer within the range [0..100,000];
the elements of A are all distinct;
each element of array A is an integer within the range [1..(N + 1)].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(1), beyond input storage (not counting the storage required for input arguments).
It doesn't work for the case where there are two elements:
int solution(vector<int> &A) {
    sort(A.begin(), A.end());
    int missingIndex = 0;
    for (int i = 0; i < A.size(); i++)
    {
        if (i != A[i] - 1)
        {
            missingIndex = i + 1;
        }
    }
    return missingIndex;
}
Since your array is zero-indexed and the numbers are from 1 to N+1, the statement should be:
if ( i != A[i]-1)
Also, you should break out of the for loop immediately after updating missingIndex, because all entries beyond the missing element will have (i != A[i]-1); without the break, the last mismatch overwrites the answer.
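For reference, a minimal corrected sketch of the sort-based version (my rewrite, not the original poster's code), with the early return:

int solution(vector<int> &A) {
    sort(A.begin(), A.end());
    for (int i = 0; i < (int)A.size(); i++) {
        if (i != A[i] - 1) {
            return i + 1;   // first gap found: this is the missing element
        }
    }
    return A.size() + 1;    // no gap, so N+1 is the missing element
}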
Moreover, because of the sorting, your solution is O(N log N) and not O(N).
Instead, you can sum all the elements in the array (using an unsigned long long int) and check its difference from (N+1)(N+2)/2, the sum of 1..(N+1).
You can use the simple math formula for an arithmetic progression to get the sum of all numbers from 1 to N+1. Then iterate over all the given numbers and calculate that sum. The missing element will be the difference between the two sums.
int solution(std::vector<int> &a) {
    uint64_t sum = (a.size() + 1) * (a.size() + 2) / 2;
    uint64_t actual = 0;
    for (int element : a) {
        actual += element;
    }
    return static_cast<int>(sum - actual);
}
Use all the power of STL:

#include <numeric>
#include <functional>

int solution(vector<int> &A) {
    // init - A[0] - A[1] - ... = (sum of 1..N+1) - (sum of A) = the missing element;
    // do the arithmetic in 64 bits so the initial sum doesn't overflow for large N
    return std::accumulate(A.begin(), A.end(),
                           (long long)(A.size() + 1) * (A.size() + 2) / 2,
                           std::minus<long long>());
}
This solution uses the sign of the values as a flag. It needs at worst two passes over the elements; the sum-formula solution needs exactly one pass.
int solution(vector<int> &a) {
    int n = (int)a.size();
    // First pass: for each value k, negate the element at index |k|-1 to mark
    // that the value |k| is present. The value n+1 has no slot, so skip it.
    for (auto k : a)
    {
        int i = abs(k) - 1;
        if (i != n)
            a[i] = -a[i];
    }
    // Second pass: the first still-positive slot marks the missing value.
    for (int i = 0; i < n; ++i)
        if (a[i] > 0)
            return i + 1;
    return n + 1;
}
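A quick check of the sign-flag version with the example array from the problem statement (the driver is mine):

#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;

int solution(vector<int> &a);  // the sign-flag version above

int main() {
    vector<int> a = { 2, 3, 1, 5 };
    cout << solution(a) << '\n';  // prints 4
    return 0;
}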
I solved it this way and thought of posting it here for my own future reference and for others :)
#include <cstdint>
#include <numeric>

int solution(vector<int> &A) {
    uint64_t sumAll = (A.size() + 1) * (A.size() + 2) / 2;
    // The accumulator must be 64-bit, hence the uint64_t{0} initial value;
    // a plain 0 would make accumulate sum into an int and overflow for large N.
    uint64_t sumA = std::accumulate(A.begin(), A.end(), uint64_t{0});
    return sumAll - sumA;
}
I solved it with this solution. Maybe there is something better, but I tested it with different values and it works fine, while the other solutions give me strange results. For example:
std::vector<int> A = { 12,13,11,14,16 };
std::vector<int> A2 = { 112,113,111,114,116 };
int Solution(std::vector<int> &A)
{
    int temp;
    for (int i = 0; i < A.size(); ++i)
    {
        for (int j = i + 1; j < A.size(); ++j)
        {
            if (A[i] > A[j])
            {
                temp = A[i];
                A[i] = A[j];
                A[j] = temp;
            }
        }
    }
    for (int i = 0; i < A.size() - 1; ++i)
    {
        if (A[i] + 1 != A[i + 1])
        {
            return A[i] + 1;
        }
        if (i + 1 == A.size() - 1)
            return A[i + 1] + 1;
    }
}
Now everything is fine, but if I use the array above with the methods below, I get wrong values except with small numbers (<10):
std::vector<int> A = { 12,13,11,14,16 };
int Solution_2(std::vector<int> &A)
{
    unsigned int n = A.size() + 1;
    long long int estimated = n * (n + 1) / 2;
    long long int total = 0;
    for (unsigned int i = 0; i < n - 1; i++) total += A[i];
    return estimated - total;
}
I get this result: -45.
I also get the same result with this one if I use array A:
std::vector<int> A = { 12,13,11,14,16 };
int Solution_3(std::vector<int> &A)
{
    uint64_t sumAll = (A.size() + 1) * (A.size() + 2) / 2;
    uint64_t sumA = std::accumulate(A.begin(), A.end(), 0);
    return sumAll - sumA;
}
I hope someone can explain why this happens.
I have this problem I'm curious about, where I have an Array and I need to compute the sum of this function:
Arr[L] + (Arr[L] ^ Arr[L+1]) + ... + (Arr[L] ^ Arr[L+1] ^ ... ^ Arr[R])
Example:
If the Array given was: [1, 2, 3, 5] and I asked what's the sum on the range [L = 1, R = 3] (assuming 1-based Index), then it'd be:
Sum = 1 + (1 ^ 2) + (1 ^ 2 ^ 3) = 4
In this problem, the Array, the size of the Array, and the Ranges are given. My approach for this is too slow.
There's also a variable called Q which indicates the number of Queries that would process each [L, R].
What I have:
I XOR'ed each element and then added it to a sum variable within the range [L, R]. Is there any faster way to compute this if the elements in the Array are, say, 1e18 or 1e26 or larger?
#include <iostream>
#include <array>

int main(int argc, const char** argv)
{
    long long int N, L, R;
    std::cin >> N;
    long long int Arr[N];
    for (long long int i = 0; i < N; i++)
    {
        std::cin >> Arr[i];
    }
    std::cin >> L >> R;
    long long int Summation = 0, Answer = 0;
    for (long long int i = L; i <= R; i++)
    {
        Answer = Answer ^ Arr[i - 1];
        Summation += Answer;
    }
    std::cout << Summation << '\n';
    return 0;
}
There are two loops in your code:
for (long long int i = 0; i < N; i++)
{
    std::cin >> Arr[i];
}

long long int Summation = 0, Answer = 0;
for (long long int i = L; i <= R; i++)
{
    Answer = Answer ^ Arr[i - 1];
    Summation += Answer;
}
The second loop is smaller and only does two operations per element (^ and +). These are native CPU instructions, so this loop will be memory bound on the sequential access of Arr[]. You can't speed it up: you need all the elements, and it doesn't get faster than a single sequential scan. The CPU prefetcher will hit maximum memory bandwidth.
However, the killer is the first loop. Parsing digits takes many, many more operations, and that loop's range is even larger.
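If input parsing dominates, a standard mitigation (my addition, not part of the answer above) is to decouple the C++ streams from C stdio and untie cin from cout before reading:

#include <iostream>

int main() {
    // Makes std::cin parsing considerably faster on most implementations.
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);
    // ... read N, Arr[] and L, R as in the original code ...
    return 0;
}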
Disclaimer: NOT A FASTER SOLUTION!
Changing the subject a bit by making L and R valid indices of an integer array (range [0, size)), the following function works for me:
size_t xor_rec(size_t* array, size_t size, size_t l, size_t r) {
    if (r >= size || l > r) {
        return 0; // error control (l is unsigned, so an "l < 0" check would be dead code)
    }
    if (r > l + 1) {
        // Note: this double recursion recomputes overlapping subproblems,
        // so it is exponential without memoization -- see the disclaimer.
        size_t prev_to_prev_sum = xor_rec(array, size, l, r - 2);
        size_t prev_sum = xor_rec(array, size, l, r - 1);
        return prev_sum + ((prev_sum - prev_to_prev_sum) ^ array[r]);
    }
    if (r == l + 1) {
        return array[r - 1] + (array[r - 1] ^ array[r]);
    }
    if (r == l) {
        return array[r];
    }
    return 0;
}
Edit: changed int to size_t.
If indices are 0-based, that is, L=0 refers to the first element (Arr[0] is the first element in the array), then it's simply this:
int sum = 0;
int prev = 0;
for (int i = L; i <= R; i++)
{
    int current = (prev ^ Arr[i]);
    sum += current;
    prev = current;
}
If it's 1-based, where L=1 really means Arr[0], then it's a quick adjustment:
int sum = 0;
int prev = 0;
for (int i = L; i <= R; i++)
{
    int current = (prev ^ Arr[i - 1]);
    sum += current;
    prev = current;
}
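As a sanity check against the example from the question (a sketch assuming the 1-based variant above):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> Arr = { 1, 2, 3, 5 };
    int L = 1, R = 3;          // 1-based range, as in the question
    long long sum = 0;
    int prev = 0;
    for (int i = L; i <= R; i++) {
        int current = prev ^ Arr[i - 1];
        sum += current;        // adds 1, then 1^2, then 1^2^3
        prev = current;
    }
    std::cout << sum << '\n';  // prints 4
    return 0;
}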
I have a sorted array of length n, where 1 < n <= 1e5. How can I find the kth smallest difference between two elements of the array?
For example, if I have {1,4,9,16} and k equal to 5, then the differences are {3,5,7,8,12,15} and the result is 12.
I couldn't find any solution other than computing all pairwise differences, and that algorithm takes Θ(n²).
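For reference, a minimal Θ(n²) brute force that reproduces the example (the function name and driver are mine; it treats repeated difference values as distinct entries):

#include <algorithm>
#include <iostream>
#include <vector>

// Collect all n*(n-1)/2 pairwise differences, sort them,
// and take the kth smallest (1-based). Θ(n²) time and space.
int kth_diff_bruteforce(const std::vector<int>& a, int k) {
    std::vector<int> diffs;
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = i + 1; j < a.size(); j++)
            diffs.push_back(a[j] - a[i]);  // a is sorted, so this is non-negative
    std::sort(diffs.begin(), diffs.end());
    return diffs[k - 1];
}

int main() {
    std::vector<int> a = { 1, 4, 9, 16 };
    std::cout << kth_diff_bruteforce(a, 5) << '\n';  // prints 12
    return 0;
}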
It is unclear to me how you intend to handle duplicate differences. Consider the array {1, 2, 3, 4}. Do you say that the differences are {1, 2, 3}? Or would you say that they are {1, 1, 1, 2, 2, 3}?
If the latter, then the following code will take average time O(n log(n)) and worst case time O(n log(n)^2). It is based on a binary search of the differences.
I am ahem not a C++ programmer.
#include <iostream>
#include <vector>
#include <utility>
using namespace std;

template <typename my_type>
my_type kth_diff(my_type a[], int n, int k) {
    // {j, {m, n}} represents a[m] - a[j], a[m+1] - a[j], ..., a[n] - a[j]
    vector<pair<int, pair<int, int>>> diff_range;
    for (int i = 0; i+1 < n; i++) {
        diff_range.push_back({i, {i+1, n-1}});
    }
    while (0 < diff_range.size()) {
        int i = diff_range[0].first;
        int j = (diff_range[0].second.first + diff_range[0].second.second) / 2;
        my_type pivot = a[j] - a[i];
        // And back up over the max values that make a pivot.
        while (0 < j && a[j-1] == a[j]) {
            j--;
        }
        int count_below = 0;
        int count_at = 0;
        vector<pair<int, pair<int, int>>> diff_range_low;
        vector<pair<int, pair<int, int>>> diff_range_high;
        vector<pair<int, pair<int, int>>>::iterator it;
        for (it = diff_range.begin(); it != diff_range.end(); it++) {
            i = it->first;
            j = max(it->second.first, j);
            while (j < n && a[j] - a[i] < pivot) {
                j++;
            }
            count_below += j - it->second.first;
            if (it->second.first < j) {
                // If the pivot is too small, use this.
                diff_range_low.push_back({i, {it->second.first, j-1}});
            }
            while (j < n && a[j] - a[i] == pivot) {
                j++;
                count_at++;
            }
            if (j <= it->second.second) {
                // If the pivot is too big, use this.
                diff_range_high.push_back({i, {j, it->second.second}});
            }
        }
        if (count_below + count_at <= k) {
            // We only need to count ranges past the pivot.
            diff_range = diff_range_high;
            // Keep track of the number below that are accounted for.
            k -= count_below + count_at;
        }
        else if (k < count_below) {
            // We only need to count ranges before the pivot.
            diff_range = diff_range_low;
        }
        else {
            return pivot;
        }
    }
    return a[0];
}

int main() {
    int a[] = {1, 4, 9, 16};
    int n = sizeof(a) / sizeof(a[0]);
    for (int k = 0; k < n*(n-1)/2; k++) {
        cout << k << "\t" << kth_diff(a, n, k) << endl;
    }
}
I look at this problem like this.
For [a,b,c,d] such that a<b<c<d and x,y,z>0 with b = a+x, c = b+y, d = c+z, we get b-a = x, c-b = y, c-a = x+y, d-b = y+z, d-a = x+y+z.
What does that tell us? The further apart the elements are, the bigger the difference, and it adds up with every step.
It is unknown whether x<y or x>y, but for sure x+y>x and x+y>y, so the differences can be divided into distance classes:
diff_1 = {d-c, c-b, b-a}, diff_2 = {d-b, c-a}, diff_3 = {d-a}
What you can probably see by now is min(diff_1) < min(diff_2) < min(diff_3), so to find the second smallest difference you don't need to check min(diff_3), because it is at best the third smallest element.
So what you do is to implement something like this pseudocode:
int findLeastDiff<k>(std::vector<int> v)
{
    assert(v_contains_k_distinct_elements(v));
    std::vector<int> result;
    std::sort(v.begin(), v.end());                                   // O(nlog(n))
    v.erase(std::unique(v.begin(), v.end()), v.end());               // O(n)
    for<<int i=1; i<=k; ++i>>
    {
        adjacent_difference<i>(v.begin(), v.end(), std::back_inserter(result)); // O(n-i)
    } // O(k(n-k/2)) = O(k*n)
    std::sort(result.begin(), result.end());                         // O(nlog(n))
    result.erase(std::unique(result.begin(), result.end()), result.end()); // O(n)
    return result[k];
}
The above is just a concept and can surely be optimized a lot. What is the complexity? O(nlog(n) + n + k*n + nlog(n) + n) = O((2log(n) + k + 2) * n).
It probably can be optimized with some clever way of reducing the search space by removing ranges with already too big differences.
Given an integer n and an array a, find the maximum of (a[i]+a[j])*(j-i) with 1 <= i <= n-1 and i+1 <= j <= n.
Example:
Input
5
1 3 2 5 4
Output
21
Explanation: with i=2 and j=5, the maximum of (a[i]+a[j])*(j-i) is (3+4)*(5-2) = 21.
Constraints:
n<=10^6
a[i]>0 with 1<=i<=n
I can solve this problem with n<=10^4, but what should I do if n is too large, like the constraints?
First, let's establish the "brute force" algorithm as a reference. It has some issues, which I will call out below, but it is a correct solution.
struct Result
{
    size_t i;
    size_t j;
    int64_t value;
};

Result findBestBruteForce(const vector<int>& a)
{
    size_t besti = 0;
    size_t bestj = 0;
    int64_t bestvalue = INT64_MIN;
    for (size_t i = 0; i < a.size(); i++)
    {
        for (size_t j = i + 1; j < a.size(); j++)
        {
            // do the math in 64-bit space to avoid overflow
            int64_t value = (a[i] + (int64_t)a[j]) * (j - i);
            if (value > bestvalue)
            {
                bestvalue = value;
                besti = i;
                bestj = j;
            }
        }
    }
    return { besti, bestj, bestvalue };
}
The problem with the above code is that it runs in O(N²). More precisely, for the N iterations of the outer for-loop (where i goes from 0 to N), there are on average N/2 iterations of the inner for-loop. If N is small, this isn't a problem.
On my PC, with full optimizations turned on, when N is under 20000 the run time is less than a second. Once N approaches 100000, it takes several seconds to process the roughly 5 billion iterations. Let's just go with "a billion operations per second" as an expected rate: if N were 1000000, the maximum the OP outlined, it would probably take 500 seconds. Such is the nature of an N-squared algorithm.
So how can we speed it up? Here's an interesting observation. Let's say our array was this:
10 5 4 15 13 100 101 6
On the first iteration of the outer loop above, where i=0, we'd be computing this on each iteration of the inner loop:
for each j: (a[0]+a[j])*(j-0)
for each j: (10+a[j])*(j-0)
for each j: [15*1, 14*2, 25*3, 23*4, 110*5, 111*6, 16*7]
          = [15, 28, 75, 92, 550, 666, 112]
Hence, for i=0, where a[i] = 10, the largest value computed from that set is 666.
Since a[0] is 10, and we're tracking a current "best" value, there's no incentive to iterate over all the values again for i=1, since a[1] == 5 is less than 10. There's no j index that would compute a value of (a[1]+a[j])*(j-1) larger than what's already been found, because (5+a[j])*(j-1) will always be less than (10+a[j])*(j-0). (This assumes all values in the array are non-negative.)
So to generalize, the outer loop can skip over any index of i where A[best_i] > A[i]. And that's a real simple alteration to our above code:
Result findBestOptimized(const std::vector<int>& a)
{
    if (a.size() < 2)
    {
        return { 0, 0, INT64_MIN };
    }
    size_t besti = 0;
    size_t bestj = 0;
    int64_t bestvalue = INT64_MIN;
    int minimum = INT_MIN;
    for (size_t i = 0; i < a.size(); i++)
    {
        if (a[i] <= minimum)
        {
            continue;
        }
        for (size_t j = i + 1; j < a.size(); j++)
        {
            int64_t value = (a[i] + (int64_t)a[j]) * (j - i);
            if (value > bestvalue)
            {
                bestvalue = value;
                besti = i;
                bestj = j;
                minimum = a[i];
            }
        }
    }
    return { besti, bestj, bestvalue };
}
Above, we introduce a minimum value that A[i] must exceed before we consider doing the full inner-loop enumeration.
I benchmarked this with build optimizations on. On a random array of a million items, it runs in under a second.
But wait... there's another optimization!
Even if the inner loop fails to find an index j such that value > bestvalue, we already know that the current A[i] is greater than minimum (otherwise we would have skipped it). Hence, we can raise minimum to A[i] unconditionally at the end of the inner loop.
Now, I'll present the final solution:
Result findBestOptimizedEvenMore(const std::vector<int>& a)
{
    if (a.size() < 2)
    {
        return { 0, 0, INT64_MIN };
    }
    size_t besti = 0;
    size_t bestj = 0;
    int64_t bestvalue = INT64_MIN;
    int minimum = INT_MIN;
    for (size_t i = 0; i < a.size(); i++)
    {
        if (a[i] <= minimum)
        {
            continue;
        }
        for (size_t j = i + 1; j < a.size(); j++)
        {
            int64_t value = (a[i] + (int64_t)a[j]) * (j - i);
            if (value > bestvalue)
            {
                bestvalue = value;
                besti = i;
                bestj = j;
            }
        }
        minimum = a[i]; // since we know a[i] > minimum, we can do this
    }
    return { besti, bestj, bestvalue };
}
I benchmarked the above solution on different array sizes from N=100 to N=1000000. It does all iterations in under 25 milliseconds.
In the above solution, there's likely a worst case runtime of O(N²) again when all the items in the array are in ascending order. But I believe the average case should be on the order of O(N lg N) or better. I'll do some more analysis later if anyone is interested.
Note: Some notation for variables and the Result class in the code have been copied from #selbie's excellent answer.
Here's another O(n^2) worst-case solution with (likely provable) O(n) expected performance on random permutations and room for optimization.
Suppose [i, j] are our array bounds for an optimal pair. By the problem definition, this means all elements left of i must be strictly less than A[i], and all elements right of j must be strictly less than A[j].
This means we can compute the left-maxima of A: all elements strictly greater than all previous elements, as well as the right-maxima of A. Then, we only need to consider left endpoints from the left-maxima and right endpoints from the right-maxima.
I don't know the expectation of the product of the sizes of left and right maxima sets, but we can get an upper bound. The size of left maxima is at most the size of the longest increasing subsequence (LIS) of A. The right maxima are at most the size of the longest decreasing subsequence. These aren't independent, but I'm taking as an (unproven) assumption that the LIS and LDS lengths are inversely correlated with each other for random permutations. The right-maxima must start after the left-maxima end, so this seems like a safe assumption.
The length of the LIS for random permutations follows the Tracy-Widom distribution, so it has mean about 2*sqrt(N) and standard deviation of order N^(1/6). The expected square of the size is therefore about 4N + O(N^(1/3)), so ~4N, which is linear in N. This isn't exactly the proof we wanted, since you'd need to sum over the partial density function to be rigorous, but the LIS is already an upper bound on the left-maxima size, so I think the conclusion is still true.
C++ code (Result class and some variable names taken from selbie's post, as mentioned):
struct Result
{
    size_t i;
    size_t j;
    int64_t value;
};

Result find_best_sum_size_product(const std::vector<int>& nums)
{
    /* Given: list of positive integers nums
       Returns: Tuple with (best_i, best_j, best_product)
                where best_i and best_j maximize the product
                (nums[i]+nums[j])*(j-i) over 0 <= i < j < n
       Runtime: O(n^2) worst case,
                O(n) average on random permutations.
    */
    int n = nums.size();
    if (n < 2)
    {
        return { 0, 0, INT64_MIN };
    }
    std::vector<int> left_maxima_indices;
    left_maxima_indices.push_back(0);
    for (int i = 1; i < n; i++) {
        if (nums.at(i) > nums.at(left_maxima_indices.back())) {
            left_maxima_indices.push_back(i);
        }
    }
    std::vector<int> right_maxima_indices;
    right_maxima_indices.push_back(n - 1);
    for (int i = n - 1; i >= 0; i--) {
        if (nums.at(i) > nums.at(right_maxima_indices.back())) {
            right_maxima_indices.push_back(i);
        }
    }
    size_t best_i = 0;
    size_t best_j = 0;
    int64_t best_product = INT64_MIN;
    int i = 0;
    int j = 0;
    for (size_t left_idx = 0; left_idx < left_maxima_indices.size(); left_idx++)
    {
        i = left_maxima_indices.at(left_idx);
        for (size_t right_idx = 0; right_idx < right_maxima_indices.size(); right_idx++)
        {
            j = right_maxima_indices.at(right_idx);
            if (i == j) continue;
            int64_t value = (nums.at(i) + (int64_t)nums.at(j)) * (j - i);
            if (value > best_product)
            {
                best_product = value;
                best_i = i;
                best_j = j;
            }
        }
    }
    return { best_i, best_j, best_product };
}
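A quick check against the example from the question (the driver is mine):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = { 1, 3, 2, 5, 4 };
    Result r = find_best_sum_size_product(a);
    // 0-based (1, 4) corresponds to the question's 1-based (2, 5): (3+4)*(5-2) = 21
    std::cout << "(" << r.i << ", " << r.j << ") -> " << r.value << '\n';
    return 0;
}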
I started from the two excellent answers by #selbie and #kcsquared.
Their solutions gave impressive results for random inputs, but it was not clear how they behave in the worst case. What sequence would correspond to the worst case?
I finally found a critical sequence for these two answers: a triangle sequence, one that slowly increases up to a maximum and then slowly decreases. With such a sequence and n = 10^5, for example, these answers take more than 10 s.
My solution starts from #selbie's and adds two improvements:
I add #kcsquared's trick: to the right of j, there can be only lower elements.
When considering a new left element a[i], it is useless to start from i+1 to find the second element; we can start from the current best_j.
With these tricks, I was able to improve the performance of the two posted answers a little. However, it still fails to solve the triangle-sequence issue: about 10 s for n = 10^5.
#include <iostream>
#include <vector>
#include <string>
#include <cstdlib>
#include <ctime>
#include <chrono>
#include <cstdint>   // INT64_MIN
#include <climits>   // INT_MIN

struct Result {
    size_t i;
    size_t j;
    int64_t value;
};

void print(const Result& res, const std::string& prefix = "") {
    std::cout << prefix;
    std::cout << "(" << res.i << ", " << res.j << ") -> " << res.value << std::endl;
}

Result findBest(const std::vector<int>& a) {
    if (a.size() < 2) {
        return { 0, 0, INT64_MIN };
    }
    int n = a.size();
    // next_max[i] = index of the maximum element of a[i..n-1]
    std::vector<int> next_max(n, -1);
    int current_max = n - 1;
    for (int i = n - 1; i >= 0; --i) {
        if (a[i] > a[current_max]) {
            current_max = i;
        }
        next_max[i] = current_max;
    }
    size_t besti = 0;
    size_t bestj = 0;
    int64_t bestvalue = INT64_MIN;
    int minimum = INT_MIN;
    for (size_t i = 0; i < a.size(); i++) {
        if (a[i] <= minimum) {
            continue;
        }
        minimum = a[i];
        size_t jmin = (bestj > i) ? bestj : i + 1;
        for (size_t j = jmin; j < a.size(); j++) {
            j = next_max[j];  // jump to the next suffix maximum
            int64_t value = (a[i] + (int64_t)a[j]) * (j - i);
            if (value > bestvalue) {
                bestvalue = value;
                besti = i;
                bestj = j;
            }
        }
    }
    return { besti, bestj, bestvalue };
}
int main() {
    int n = 1000000;
    int vmax = 100000000;
    std::vector<int> A(n);
    std::srand(std::time(0));
    for (int i = 0; i < n; ++i) {
        A[i] = rand() % vmax + 1;
    }
    std::cout << "n = " << n << std::endl;

    auto t0 = std::chrono::high_resolution_clock::now();
    auto res = findBest(A);
    auto t1 = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    print(res, "Random: ");
    std::cout << "time = " << duration / 1000 << " ms" << std::endl;

    // Triangle sequence: increases up to a peak, then decreases.
    int i_max = n / 2;
    for (int i = 0; i < i_max; ++i) A[i] = i + 1;
    A[i_max] = 10 * i_max;
    for (int i = i_max + 1; i < n; ++i) {
        A[i] = 2 * i_max - i;
    }
    t0 = std::chrono::high_resolution_clock::now();
    res = findBest(A);
    t1 = std::chrono::high_resolution_clock::now();
    duration = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    print(res, "Triangle sequence: ");
    std::cout << "time = " << duration / 1000 << " ms" << std::endl;
    return 0;
}
There is a problem that I can't solve. Here are two unordered arrays:
int a1[] = { 5, 7, 14, 0, 6, 2, 9, 11, 3 }; int n = 9;
int b[] = { 6, 4, 3, 10, 9, 15, 7 }; int m = 7;
I want to compare them and remove the elements of a1[] that can be found in b[]. The following code doesn't return the correct value of n to me: the correct value should be 4, but it gives me 5, even though I successfully sorted array a1[]. It gave me a result like this:
a1[] = { 5, 2, 14, 0, 11 }
There is a slight difference between my result and the model answer, namely the order of the elements in a1[]. The model answer is:
a1[] = { 5, 11, 14, 0, 2 }
Can you guys help me figure out the problem?
int removeAll_unordered(int *a, int& n, const int *b, int m)
{
    for (int i = 0; i < m; i++) {
        int j = 0;
        for (j = 0; j < n; j++)
        {
            if (b[i] == a[j])
            {
                a[j] = a[n - 1];
                n -= 1;
            }
        }
    }
    return n;
}
If you write code in C++, you should use what the standard library provides for you; in your case, std::vector and the std::remove_if algorithm:
#include <algorithm>
#include <vector>

void removeAll_unordered( std::vector<int> &a, const std::vector<int> &b )
{
    auto end = std::remove_if( a.begin(), a.end(), [&b]( int i ) {
        return std::find( b.begin(), b.end(), i ) != b.end();
    } );
    a.erase( end, a.end() );
}
But this usage is very inefficient, so, again using the standard library, which provides std::unordered_set (a hash set), we can easily optimize it:
#include <algorithm>
#include <unordered_set>
#include <vector>

void removeAll_unordered( std::vector<int> &a, const std::vector<int> &b )
{
    auto end = std::remove_if( a.begin(), a.end(),
        [set = std::unordered_set<int>( b.begin(), b.end() )]( int i ) {
            return set.count( i );
        } );
    a.erase( end, a.end() );
}
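A quick check with the arrays from the question (the driver is mine); std::remove_if is stable, so the surviving elements keep their original relative order:

#include <iostream>
#include <vector>
// ... removeAll_unordered (either version above) ...

int main() {
    std::vector<int> a = { 5, 7, 14, 0, 6, 2, 9, 11, 3 };
    std::vector<int> b = { 6, 4, 3, 10, 9, 15, 7 };
    removeAll_unordered( a, b );
    for ( int x : a ) std::cout << x << ' ';  // prints: 5 14 0 2 11
    std::cout << '\n';
    return 0;
}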
I found one problem in your code. I couldn't compile it myself, but the fix should work.
In your code,
if (b[i] == a[j])
{
    a[j] = a[n - 1];
    n -= 1;
}
When an element of b is found in a, you replace that value with a[n-1]. This is okay, but the swapped-in value is never compared with b[i], because j gets incremented. So I corrected this part; if you run the code with different inputs, you will be able to catch this problem.
int removeAll_unordered(int *a, int& n, const int *b, int m)
{
    for (int i = 0; i < m; i++)
    {
        for (int j = 0; j < n;)
        {
            if (a[j] == b[i]) // replace a[j] with a[n-1] and decrease n
            {
                a[j] = a[n - 1];
                n--;
            }
            else
                j++; // otherwise increase j
        }
    }
    return n;
}
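A quick check with the arrays from the question (the driver is mine); this version returns 5 and produces exactly the order the original poster saw:

#include <iostream>
// ... removeAll_unordered as corrected above ...

int main()
{
    int a1[] = { 5, 7, 14, 0, 6, 2, 9, 11, 3 }; int n = 9;
    int b[]  = { 6, 4, 3, 10, 9, 15, 7 };       int m = 7;
    removeAll_unordered(a1, n, b, m);
    for (int i = 0; i < n; i++) std::cout << a1[i] << ' ';  // prints: 5 2 14 0 11
    std::cout << "\nn = " << n << '\n';                     // n = 5
    return 0;
}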
To get the exact answer (the order of the elements in a after the removal), here is the modified code:
int duplicates = 0; // counts how many numbers were removed from a[]
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < m;)
    {
        if (a[i] == b[j]) // replace a[i] with a[n-1] and decrease n
        {
            if (i == n - 1) // the last element of a also matches b
            {
                n--;          // update length of a[]
                duplicates++; // one more removed
                break;
            }
            a[i] = a[n - 1];
            n--;              // update length of a[]
            duplicates++;     // one more removed
            j = 0;            // recheck the swapped-in value against all of b
        }
        else
            j++; // otherwise increase j
    }
}
return duplicates; // total count of removed numbers
I found the problem in your code, and I changed the variables in the for loops: the first for loop now uses n as its maximum value, and the second uses m.
Also, you only decrease n, but you never re-check the new i-th value, which has just changed, so you need to examine that position again; to do that you can decrease i as well.
And you mentioned above that your answer is 5, not 4. That is the correct answer for the code as written, because it is written in terms of the 9 elements of the array, not indices 0 to 8. If you want 4, you can decrease the final value by one.
The modified code is given below.
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < m; j++)
    {
        if (b[j] == a[i])
        {
            a[i] = a[n - 1];
            n -= 1;
            i = i - 1; // step back so the swapped-in value gets checked again
        }
    }
}
return n;
I need a way to solve the classic 5SUM problem without hashing, or with a memory-efficient way of hashing.
The problem asks you to find how many subsequences of five elements in a given array of length N have a sum equal to S.
Ex:
Input
6 5
1 1 1 1 1 1
Output
6
The restrictions are:
N <= 1000 ( size of the array )
S <= 400000000 ( the sum of the subsequence )
Memory usage <= 5555 kB
Execution time 2.2s
I'm pretty sure the expected complexity is O(N³). Due to the memory limitations, hashing doesn't provide actual O(1) time.
The best I got was 70 points using this code. ( I got TLE on 6 tests )
#include <iostream>
#include <fstream>
#include <algorithm>
#include <vector>
#define MAX 1003
#define MOD 10472
using namespace std;

ifstream in("take5.in");
ofstream out("take5.out");

vector<pair<int, int>> has[MOD];
int v[MAX];
int pnt;
vector<pair<int, int>>::iterator it;

inline void ins(int val) {
    pnt = val % MOD;
    it = lower_bound(has[pnt].begin(), has[pnt].end(), make_pair(val, -1));
    if (it == has[pnt].end() || it->first != val) {
        has[pnt].push_back({val, 1});
        sort(has[pnt].begin(), has[pnt].end());
        return;
    }
    it->second++;
}

inline int get(int val) {
    pnt = val % MOD;
    it = lower_bound(has[pnt].begin(), has[pnt].end(), make_pair(val, -1));
    if (it == has[pnt].end() || it->first != val)
        return 0;
    return it->second;
}

int main() {
    int n, S;
    int ach = 0;
    int am = 0;
    int rez = 0;
    in >> n >> S;
    for (int i = 1; i <= n; i++)
        in >> v[i];
    sort(v + 1, v + n + 1);
    for (int i = n; i >= 1; i--) {
        if (v[i] > S)
            continue;
        for (int j = i + 1; j <= n; j++) {
            if (v[i] + v[j] > S)
                break;
            ins(v[i] + v[j]);
        }
        int I = i - 1;
        if (S - v[I] < 0)
            continue;
        for (int j = 1; j <= I - 1; j++) {
            if (S - v[I] - v[j] < 0)
                break;
            for (int k = 1; k <= j - 1; k++) {
                if (S - v[I] - v[j] - v[k] < 0)
                    break;
                ach = S - v[I] - v[j] - v[k];
                rez += get(ach);
            }
        }
    }
    out << rez << '\n';
    return 0;
}
I think it can be done. We are looking for all subsets of 5 items in the array arr with the correct SUM. We have an array with indexes 0..N-1. The third item of those five can have index i in the range 2..N-3. We cycle through all those indexes. For every index i we generate all combinations of two numbers with indexes in the range 0..i-1, to the left of index i, and all combinations of two numbers with indexes in the range i+1..N-1, to the right of index i. For every index i there are fewer than N*N combinations on the left plus the right side. We store only the sum of every combination, so it takes no more than 1000 * 1000 * 4 = 4 MB.
Now we have two sequences of numbers (the sums) and the task is this: take one number from the first sequence and one number from the second sequence so that their sum equals Si = SUM - arr[i]. How many combinations are there? To do it efficiently, the sequences have to be sorted. Say the first is sorted ascending and has numbers a, a, a, b, c, .... The second is sorted descending and has numbers Z, Z, Y, X, W, .... If a + Z > Si, we can throw Z away, because we have no smaller number left to match it. If a + Z < Si, we can throw away a, because we have no bigger number left to match it. And if a + Z = Si, we have 3 * 2 = 6 new combinations, and we get rid of all the a's and all the Z's. If we got the sorting for free, this would be a nice O(N³) algorithm.
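Here is a standalone sketch of that counting step (the names are mine), assuming the first sequence is sorted ascending and the second descending:

#include <cstdint>
#include <vector>

// Count pairs (x, y), x from 'asc' (sorted ascending), y from 'desc'
// (sorted descending), with x + y == target. Two pointers, O(|asc| + |desc|).
int64_t countPairs(const std::vector<int>& asc,
                   const std::vector<int>& desc, int target) {
    int64_t count = 0;
    size_t a = 0, b = 0;
    while (a < asc.size() && b < desc.size()) {
        int64_t s = (int64_t)asc[a] + desc[b];
        if (s > target) {
            ++b;                  // y is too big and no smaller x is left
        } else if (s < target) {
            ++a;                  // x is too small and no bigger y is left
        } else {
            // Count the runs of equal values on both sides and multiply.
            size_t a2 = a, b2 = b;
            while (a2 < asc.size() && asc[a2] == asc[a]) ++a2;
            while (b2 < desc.size() && desc[b2] == desc[b]) ++b2;
            count += (int64_t)(a2 - a) * (b2 - b);
            a = a2;
            b = b2;
        }
    }
    return count;
}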
Sorting is not free, though, so this is O(N * N² * log(N²)) = O(N³ * log(N)). We need to do the sorting in linear time. That is not possible in general. Or is it? At index i+1 we can reuse the sequences from index i: there are only a few new combinations for i+1, namely those that involve the number arr[i] together with some number at an index in 0..i-1. If we sort just those (and we can, because there are at most N of them, not N*N), all we need is to merge two sorted sequences, and that can be done in linear time. We can even avoid sorting completely if we sort arr at the beginning; then we just merge.
For the second sequence the merging involves removing rather than adding, but it is very similar.
The implementation seems to work, but I expect there is an off-by-one error somewhere ;-)
#include <iostream>
#include <fstream>
#include <algorithm>
#include <vector>
using namespace std;

int Generate(int arr[], int i, int sums[], int N, int NN)
{
    int p1 = 0;
    for (int i1 = 0; i1 < i - 1; ++i1)
    {
        int ai = arr[i1];
        for (int i2 = i1 + 1; i2 < i; ++i2)
        {
            sums[p1++] = ai + arr[i2];
        }
    }
    sort(sums, sums + p1);
    return p1;
}

int Combinations(int n, int sums[], int p1, int p2, int NN)
{
    int cnt = 0;
    int a = 0;
    int b = NN - p2;
    do
    {
        int state = sums[a] + sums[b] - n;
        if (state > 0) { ++b; }
        else if (state < 0) { ++a; }
        else
        {
            int cnta = 0;
            int lastA = sums[a];
            while (a < p1 && sums[a] == lastA) { a++; cnta++; }
            int cntb = 0;
            int lastB = sums[b];
            while (b < NN && sums[b] == lastB) { b++; cntb++; }
            cnt += cnta * cntb;
        }
    } while (b < NN && a < p1);
    return cnt;
}

int Add(int arr[], int i, int sums[], int p2, int N, int NN)
{
    int ii = N - 1;
    int n = arr[i];
    int nn = n + arr[ii--];
    int ip = NN - p2;
    int newP2 = p2 + N - i - 1;
    for (int p = NN - newP2; p < NN; ++p)
    {
        if (ip < NN && (ii < i || sums[ip] > nn))
        {
            sums[p] = sums[ip++];
        }
        else
        {
            sums[p] = nn;
            nn = n + arr[ii--];
        }
    }
    return newP2;
}

int Remove(int arr[], int i, int sums[], int p1)
{
    int ii = 0;
    int n = arr[i];
    int nn = n + arr[ii++];
    int pp = 0;
    int p = 0;
    for (; p < p1 - i; ++p)
    {
        while (ii <= i && sums[pp] == nn)
        {
            ++pp;
            nn = n + arr[ii++];
        }
        sums[p] = sums[pp++];
    }
    return p;
}

int main() {
    ifstream in("take5.in");
    ofstream out("take5.out");
    int N, SUM;
    in >> N >> SUM;
    int* arr = new int[N];
    for (int i = 0; i < N; i++)
        in >> arr[i];
    sort(arr, arr + N);
    int NN = (N - 3) * (N - 4) / 2 + 1;
    int* sums = new int[NN];
    int combinations = 0;
    int p1 = 0;
    int p2 = 1;
    for (int i = N - 3; i >= 2; --i)
    {
        if (p1 == 0)
        {
            p1 = Generate(arr, i, sums, N, NN);
            sums[NN - 1] = arr[N - 1] + arr[N - 2];
        }
        else
        {
            p1 = Remove(arr, i, sums, p1);
            p2 = Add(arr, i + 1, sums, p2, N, NN);
        }
        combinations += Combinations(SUM - arr[i], sums, p1, p2, NN);
    }
    out << combinations << '\n';
    return 0;
}