Split an array into two sets? - C++

I have an array W with elements indexed 0..N-1.
I need to split its elements into two sets, of K and N-K elements.
But the condition is: sum(the set of N-K elements) - sum(the set of K elements) should be maximum.
How do I approach this?
I tried doing this:
Sort the array - std::sort(W,W+N), and then:
for(int i=0; i<K; ++i) less+=W[i];
for(int i=K; i<N; ++i) more+=W[i];
And then more-less
But I don't think this is the optimal way, and it may even be wrong for some cases.
Thanks.
UPDATE:
We have to choose K elements from W such that the difference between sum(the K chosen elements) and sum(the remaining elements) is maximum.

Edit: Note that in your posted question, you seem to be expecting sort to sort from high-to-low. Both std::sort and std::nth_element put the low elements first. I have replaced K with (N-K) in the answer below to correct that.
Edit after UPDATE: Do the below twice, once for K and once for (N-K). Choose the optimal answer.
For your purposes, std::nth_element is more efficient than std::sort.
std::nth_element( W, W+(N-K), W+N );
Your use of std::sort will use O(n log n) complexity to order all the elements within both your sets, which you don't need.
std::nth_element will use O(n) complexity to partition without completely sorting.
Note: your for loops may also be replaced with std::accumulate
less = std::accumulate( W, W+(N-K), 0 );
more = std::accumulate( W+(N-K), W+N, 0 );
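Putting those pieces together, here is a minimal sketch (the function name, the long long accumulators, and the helper lambda are ours); per the UPDATE it tries both split sizes and keeps the better result:
#include <algorithm>
#include <numeric>

// Returns the best achievable difference, trying both split sizes (K and N-K).
long long best_difference(int* W, int N, int K)
{
    auto diff_for = [&](int small_count) {
        // the small_count smallest elements end up in [W, W + small_count)
        std::nth_element(W, W + small_count, W + N);
        long long less = std::accumulate(W, W + small_count, 0LL);
        long long more = std::accumulate(W + small_count, W + N, 0LL);
        return more - less;
    };
    return std::max(diff_for(K), diff_for(N - K));
}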

You are to split the set of elements into two distinct, non-overlapping subsets A and B, and you want sum(A) - sum(B) to be as high as possible.
Therefore, you want sum(A) to be as high as possible and sum(B) as low as possible.
That means set 'A' should contain the highest elements and set 'B' the lowest elements.
By sorting the input by value and assigning the lowest elements to B and the highest elements to A, you are guaranteed that sum(A) - sum(B) is the maximum possible.
I do not see any cases where your approach would be wrong.
As to optimality, I did not analyze it at all; Drew's note seems quite plausible.

It can be done using a max heap in O(n + n log k) time.
Make a max heap of size k: we have to find the lowest k elements of the array, and the root of the heap will be the highest element currently in it. Build the heap from the first k elements.
Now iterate through the rest of the array, comparing each element with the root of the max heap. If it is smaller than the root, replace the root with it and heapify again. This takes O(n log k) time.
Find the sum of the elements in the heap.
Now find the sum of the rest of the elements of the array and take the difference (O(n) time).
Total time: O(n + n log k).
EDIT: You can compute the sum of all the elements of the array while traversing it for the heap. This saves the extra O(n) pass, so the whole thing can be solved in O(n log k).
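A minimal sketch of that idea, assuming W is a std::vector<int> and k <= W.size(); std::priority_queue is a max heap by default, and the function name is ours. The required difference is then total - 2 * sum_of_k_smallest(W, k), where total is the sum of all elements.
#include <queue>
#include <vector>

// Sum of the k smallest elements of W in O(n log k), per the answer above.
long long sum_of_k_smallest(const std::vector<int>& W, int k)
{
    // max heap holding the k smallest elements seen so far
    std::priority_queue<int> heap(W.begin(), W.begin() + k);
    for (std::size_t i = k; i < W.size(); ++i) {
        if (W[i] < heap.top()) {      // smaller than the largest kept element
            heap.pop();
            heap.push(W[i]);
        }
    }
    long long sum = 0;
    while (!heap.empty()) { sum += heap.top(); heap.pop(); }
    return sum;
}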


Nearest permutation to given array

Question
I have two arrays of integers A[] and B[]. Array B[] is fixed; I need to find the permutation of A[] which is lexicographically smaller than B[] and nearest to B[]. Here is what I mean:
for i in (0 <= i < n)
abs(B[i]-A[i]) is minimum and A[] should be smaller than B[] lexicographically.
For Example:
A[]={1,3,5,6,7}
B[]={7,3,2,4,6}
So, a possible nearest permutation of A[] to B[] is
A[]={7,3,1,6,5}
My Approach
Try all permutations of A[] and compare each with B[]. But the time complexity would be O(n! * n).
So is there any way to optimize this?
EDIT
n can be as large as 10^5
For better understanding
First, build an ordered map of the counts of the distinct elements of A.
Then, iterate forward through array indices (0 to n−1), "withdrawing" elements from this map. At each point, there are three possibilities:
If i < n-1, and it's possible to choose A[i] == B[i], do so and continue iterating forward.
Otherwise, if it's possible to choose A[i] < B[i], choose the greatest possible value for A[i] < B[i]. Then proceed by choosing the largest available values for all subsequent array indices. (At this point you no longer need to worry about maintaining A[i] <= B[i], because we're already after an index where A[i] < B[i].) Return the result.
Otherwise, we need to backtrack to the last index where it was possible to choose A[i] < B[i], then use the approach in the previous bullet-point.
Note that, despite the need for backtracking, the very worst case here is three passes: one forward pass using the logic in the first bullet-point, one backward pass in backtracking to find the last index where A[i] < B[i] was possible, and then a final forward pass using the logic in the second bullet-point.
Because of the overhead of maintaining the ordered map, this requires O(n log m) time and O(m) extra space, where n is the total number of elements of A and m is the number of distinct elements. (Since m ≤ n, we can also express this as O(n log n) time and O(n) extra space.)
Note that if there's no solution, then the backtracking step will reach all the way to i == -1. You'll probably want to raise an exception if that happens.
Edited to add (2019-02-01):
In a now-deleted answer, גלעד ברקן summarizes the goal this way:
To be lexicographically smaller, the array must have an initial optional section from left to right where A[i] = B[i] that ends with an element A[j] < B[j]. To be closest to B, we want to maximise the length of that section, and then maximise the remaining part of the array.
So, with that summary in mind, another approach is to do two separate loops, where the first loop determines the length of the initial section, and the second loop actually populates A. This is equivalent to the above approach, but may make for cleaner code. So:
Build an ordered map of the counts of the distinct elements of A.
Initialize initial_section_length := -1.
Iterate through the array indices 0 to n−1, "withdrawing" elements from this map. For each index:
If it's possible to choose an as-yet-unused element of A that's less than the current element of B, set initial_section_length equal to the current array index. (Otherwise, don't.)
If it's not possible to choose an as-yet-unused element of A that's equal to the current element of B, break out of this loop. (Otherwise, continue looping.)
If initial_section_length == -1, then there's no solution; raise an exception.
Repeat step #1: re-build the ordered map.
Iterate through the array indices from 0 to initial_section_length-1, "withdrawing" elements from the map. For each index, choose an as-yet-unused element of A that's equal to the current element of B. (The existence of such an element is ensured by the first loop.)
For array index initial_section_length, choose the greatest as-yet-unused element of A that's less than the current element of B (and "withdraw" it from the map). (The existence of such an element is ensured by the first loop.)
Iterate through the array indices from initial_section_length+1 to n−1, continuing to "withdraw" elements from the map. For each index, choose the greatest element of A that hasn't been used yet.
This approach has the same time and space complexities as the backtracking-based approach.
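For concreteness, here is a minimal sketch of this two-pass approach using a std::multiset in place of the ordered count map; the function name nearest_smaller_permutation and the choice of exception are ours:
#include <iterator>
#include <set>
#include <stdexcept>
#include <vector>

std::vector<int> nearest_smaller_permutation(const std::vector<int>& A,
                                             const std::vector<int>& B)
{
    const int n = (int)A.size();
    std::multiset<int> pool(A.begin(), A.end());

    // Pass 1: find the longest prefix that can match B exactly and still be
    // followed by a strictly smaller element.
    int initial_section_length = -1;
    for (int i = 0; i < n; ++i) {
        if (pool.lower_bound(B[i]) != pool.begin())   // something < B[i] is available
            initial_section_length = i;
        auto eq = pool.find(B[i]);                    // can we keep matching B?
        if (eq == pool.end()) break;
        pool.erase(eq);
    }
    if (initial_section_length == -1)
        throw std::runtime_error("no lexicographically smaller permutation exists");

    // Pass 2: build the answer.
    pool = std::multiset<int>(A.begin(), A.end());
    std::vector<int> result(n);
    for (int i = 0; i < initial_section_length; ++i) {
        auto eq = pool.find(B[i]);                    // guaranteed by pass 1
        result[i] = *eq;
        pool.erase(eq);
    }
    auto lt = pool.lower_bound(B[initial_section_length]);
    --lt;                                             // greatest element < B[i], guaranteed
    result[initial_section_length] = *lt;
    pool.erase(lt);
    for (int i = initial_section_length + 1; i < n; ++i) {
        auto last = std::prev(pool.end());            // largest remaining element
        result[i] = *last;
        pool.erase(last);
    }
    return result;
}
With A = {1,3,5,6,7} and B = {7,3,2,4,6} this yields 7 3 1 6 5, matching the example in the question.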
There are n! permutations of A[n] (fewer if there are repeating elements).
Use binary search over the range 0..n!-1 to determine the k-th lexicographic permutation of A[] (there are standard algorithms for finding it) which is the closest lower one to B[].
Perhaps in C++ you can exploit std::lower_bound
Based on the discussion in the comment section to your question, you seek an array made up entirely of elements of the vector A that is -- in lexicographic ordering -- closest to the vector B.
For this scenario, the algorithm becomes quite straightforward. The idea is the same as already mentioned in the answer of #ruakh (although his answer refers to an earlier and more complicated version of your question -- which is still displayed in the OP -- and is therefore more complicated):
Sort A (in descending order).
Loop over B and select the largest element of A that is less than or equal to B[i] (i.e., the one closest to B[i] without exceeding it). Remove that element from the list.
If no element in A is smaller than or equal to B[i], pick the largest remaining element.
Here is the basic implementation:
#include <algorithm>
#include <functional>
#include <vector>

auto get_closest_array(std::vector<int> A, std::vector<int> const& B)
{
    // sort A in descending order
    std::sort(std::begin(A), std::end(A), std::greater<>{});

    // pick the largest remaining element <= i, or the overall largest
    // remaining element if none qualifies, and remove it from A
    auto select_closest_and_remove = [&](int i)
    {
        auto it = std::find_if(std::begin(A), std::end(A),
                               [&](auto x) { return x <= i; });
        if (it == std::end(A))
        {
            it = std::max_element(std::begin(A), std::end(A));
        }
        auto ret = *it;
        A.erase(it);
        return ret;
    };

    std::vector<int> ret(B.size());
    for (int i = 0; i < (int)B.size(); ++i)
    {
        ret[i] = select_closest_and_remove(B[i]);
    }
    return ret;
}
Applied to the problem in the OP one gets:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> A = {1, 3, 5, 6, 7};
    std::vector<int> B = {7, 3, 2, 4, 6};
    auto C = get_closest_array(A, B);
    for (auto i : C)
    {
        std::cout << i << " ";
    }
    std::cout << std::endl;
}
and it displays
7 3 1 6 5
which seems to be the desired result.

What's the efficient way to sum up the elements of an array in the following way?

Suppose you are given an array A of size n and an integer k.
Now you have to follow this function:
long long sum(int k)
{
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        sum += min(A[i], k);
    }
    return sum;
}
What is the most efficient way to compute this sum?
EDIT: if I am given m (<= 100000) queries, each with a different k, it becomes very time consuming.
If the array changes with each query, then you can't do better than O(n). Your only options for optimizing are to use multiple threads (each thread sums some region of the array) or at least to ensure that your loop is properly vectorized by the compiler (or to write a vectorized version manually using intrinsics).
But if the array is fixed and only k changes between queries, then you can answer each query in O(log n) by using the following optimization.
Preprocess the array. This is done only once, for all values of k:
Sort elements
Make another array of the same length which contains partial sums
For example:
inputArray: 5 1 3 8 7
sortedArray: 1 3 5 7 8
partialSums: 1 4 9 16 24
Now, when a new k is given, perform the following steps:
Binary-search sortedArray for k; let i be the number of elements that are <= k.
The result is partialSums[i-1] + (partialSums.length - i) * k (or simply partialSums.length * k when i is 0).
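As a minimal sketch of that query step (assuming partialSums is 0-indexed and holds inclusive prefix sums of sortedArray; the function name is ours):
#include <algorithm>
#include <vector>

long long query(const std::vector<int>& sortedArray,
                const std::vector<long long>& partialSums, int k)
{
    // i = number of elements <= k
    auto i = std::upper_bound(sortedArray.begin(), sortedArray.end(), k)
             - sortedArray.begin();
    long long small = (i > 0) ? partialSums[i - 1] : 0;      // they keep their own value
    return small + (long long)(sortedArray.size() - i) * k;  // the rest are capped at k
}
For the example above and k = 5 this gives 9 + 2*5 = 19, i.e. min(5,5)+min(1,5)+min(3,5)+min(8,5)+min(7,5).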
You can do way better than that if you can sort the array A and prepare a secondary array once.
The idea is:
Count how many items are greater than or equal to k; each of them contributes exactly k, i.e. count*k in total
Prepare a helper array which will give you the sum of the items strictly below k directly
Preparation
Step 1: sort the array
std::sort(begin(A), end(A));
Step 2: prepare a helper array of prefix sums
std::vector<long long> p_sums(A.size());
std::partial_sum(begin(A), end(A), begin(p_sums));
Query
long long query(int k) {
    // find the first item whose value is >= k
    auto it = std::lower_bound(begin(A), end(A), k);
    // number of items strictly below k: they contribute their own value
    auto index = std::distance(begin(A), it);
    long long below = (index > 0) ? p_sums[index - 1] : 0;
    // every other item is capped at k
    long long result = below + (long long)(A.size() - index) * k;
    return result;
}
The complexity of the query is: O(log(N)) where N is the length of the array A.
The complexity of the preparation is: O(N*log(N)). We could go down to O(N) with a radix sort but I don't think it is useful in your case.
References
std::sort()
std::partial_sum()
std::lower_bound()
What you do seems absolutely fine, unless this is really time critical (that is, customers complain that your app is too slow, you measured it, and this function is the problem), in which case you can try some non-portable vector instructions, for example.
Often you can do things more efficiently by looking at them from a higher level. For example, if I write
for (n = 0; n < 1000000; ++n)
    printf ("%lld\n", sum (100));
then this will take an awful long time (half a trillion additions) and can be done a lot quicker. Same if you change one element of the array A at a time and recalculate sum each time.
Suppose there are x elements of array A which are no larger than k, and let set B contain the elements of A which are larger than k.
Then the result of the function sum(k) equals
sum_a + k * (n - x)
, where sum_a is the sum of the x elements no larger than k.
You can first sort the array A and calculate the array pre_A, where
pre_A[i] = pre_A[i - 1] + A[i] (i > 0),
or 0 (i = 0)
(treating A as 1-indexed, so pre_A[i] is the sum of the first i elements).
Then for each query k, use binary search on A to find the largest element u which is no larger than k. If its (1-based) index is index_u, then sum(k) equals
pre_A[index_u] + k * (n - index_u)
(with index_u = 0 when no element is no larger than k). The time complexity of each query is O(log n).
In case array A may be dynamically changed, you can use BST to handle it.

What algorithm is used to find the nth sorted subarray of an unordered array?

I had this question recently in an interview and I failed, and now search for the answer.
Let's say I have a big array of n integers, all different.
If this array were ordered, I could subdivide it into x smaller arrays, all of size y, except maybe the last one, which could be smaller.
I could then extract the nth subarray and return it, already sorted.
Example: Array 4 2 5 1 6 3. If y=2 and I want the 2nd subarray, it would be 3 4.
Now what I did is simply sort the array and return the nth subarray, which takes O(n log n). But I was told that there exists a way to do it in O(n + y log y). I searched on the internet and didn't find anything. Ideas?
The algorithm you are looking for is the selection algorithm, which lets you find the k-th order statistic in linear time. The algorithm is quite complex, but the standard C++ library conveniently provides an implementation of it (std::nth_element).
The algorithm for finding k-th sorted interval that the interviewers had in mind went like this:
Find the b = (k-1)*y-th order statistic in O(N)
Find the e = k*y-th order statistic in O(N)
There will be y numbers between b and e. Store them in a separate array of size y. This operation takes O(N)
Sort that array of size y at a cost of O(y log y)
The overall cost is O(N + N + N + y log y), i.e. O(N + y log y)
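A minimal sketch of that recipe (the function name and the 1-based k are our choices; bounds checking is omitted):
#include <algorithm>
#include <vector>

std::vector<int> kth_sorted_block(std::vector<int> a, int k, int y)
{
    int b = (k - 1) * y;                          // first index of the block
    int e = std::min<int>(k * y, (int)a.size());  // one past the last index
    std::nth_element(a.begin(), a.begin() + b, a.end());            // O(N)
    std::nth_element(a.begin() + b, a.begin() + (e - 1), a.end());  // O(N)
    std::vector<int> block(a.begin() + b, a.begin() + e);           // copy y numbers
    std::sort(block.begin(), block.end());                          // O(y log y)
    return block;
}
For the example in the question (4 2 5 1 6 3, y = 2, k = 2) this returns {3, 4}.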
You can combine std::nth_element and std::sort for this:
std::vector<int> vec = muchData();
// Fix those bound iterators as needed
auto lower = vec.begin() + k*y;
auto upper = lower + y;
// put right element at lower and partition vector by it
std::nth_element(vec.begin(), lower, vec.end());
// Same for upper, but don't mess up lower
std::nth_element(lower + 1, upper - 1, vec.end());
// Now sort the subarray
std::sort(lower, upper);
[lower, upper) is now the k-th sorted subarray of length y, with the desired complexity on average.
To be checked for special cases like y = 1 before real world use, but this is the general idea.

Find pair of elements in integer array such that abs(v[i]-v[j]) is minimized

Let's say we have an int array with 5 elements: 1, 2, 3, 4, 5
What I need to do is find the minimum absolute value of the difference between any two of the array's elements.
We need to check like this:
1-2 2-3 3-4 4-5
1-3 2-4 3-5
1-4 2-5
1-5
And find the minimum absolute value of these subtractions. We can find it with two for loops. The question is: is there any algorithm for finding this value with one and only one for loop?
Sort the list, then the answer is the minimum difference between two neighbouring elements.
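A minimal sketch of that idea (the function name is ours); after sorting, only adjacent elements need to be compared:
#include <algorithm>
#include <climits>
#include <vector>

int min_adjacent_diff(std::vector<int> v)
{
    std::sort(v.begin(), v.end());
    int best = INT_MAX;
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        best = std::min(best, v[i + 1] - v[i]);  // already non-negative after sorting
    return best;
}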
The provably best-performing solution is asymptotically linear, O(n), up to constant factors.
This means that the time taken is proportional to the number of elements in the array (which of course is the best we can do, as we at least have to read every element of the array, which already takes O(n) time).
Here is one such O(n) solution (which also uses O(1) space if the list can be modified in-place):
int mindiff(std::vector<int>& v)   // sorted in place, so O(1) extra space
{
    IntRadixSort(v.begin(), v.end());
    int best = INT_MAX;                         // from <climits>
    for (std::size_t i = 0; i + 1 < v.size(); i++)
    {
        int diff = v[i + 1] - v[i];             // non-negative once sorted
        if (diff < best)
            best = diff;
    }
    return best;
}
IntRadixSort is a linear time fixed-width integer sorting algorithm defined here:
http://en.wikipedia.org/wiki/Radix_sort
The concept is that you leverage the fixed bit-width nature of ints by partitioning them in a series of fixed passes over the bit positions, i.e. partition them on the high bit (32nd), then on the next highest (31st), then on the next (30th), and so on - which only takes linear time.
The problem is equivalent to sorting. Any sorting algorithm could be used, and at the end, return the difference between the nearest elements. A final pass over the data could be used to find that difference, or it could be maintained during the sort. Before the data is sorted the min difference between adjacent elements will be an upper bound.
So to do it without two loops, use a sorting algorithm that does not have two loops. In a way it feels like semantics, but recursive sorting algorithms will do it with only one loop. If the issue is the n(n-1)/2 subtractions required by the simple two-loop case, you can use an O(n log n) algorithm.
No; unless you know the list is sorted, you need two loops.
It's simple: iterate in one for loop.
Keep four variables, "minpos", "maxpos", "minneg" and "maxneg". Check the sign of each value you encounter: store the maximum positive value in maxpos and the minimum positive value in minpos, and do the same in the negative branch for minneg and maxneg. Now take the difference maxpos - minpos and the difference maxneg - minneg, and print the larger of the two; you will get the desired result.
I believe you definitely know how to find a max and a min in one for loop.
Correction: the above finds the maximum difference. For the minimum difference you need to take the max and the second max instead of the max and the min :)
This might help you:
int a[5] = {1, 2, 3, 4, 5};
int subtractmin = INT_MAX;      // smallest difference found so far
int m = 0;                      // first index of the current pair
int end = 4;                    // largest offset still to try for this m
for (int i = 1; i <= end; i++) {
    if (abs(a[m] - a[i + m]) < subtractmin)
        subtractmin = abs(a[m] - a[i + m]);
    if (i == end && m < 3) {    // offsets for this m exhausted: advance m
        m = m + 1;
        end = end - 1;
        i = 0;                  // the loop's i++ restarts the offsets at 1
    }
}

Generate a new element different from 1000 elements of an array

I was asked this question in an interview. Consider the scenario of punched cards, where each punched card has a 64-bit pattern. It was suggested that I treat each card as an int, since an int is a collection of bits.
Also, consider that I have an array which already contains 1000 such cards. I have to generate a new element every time which is different from the previous 1000 cards. The integers (aka cards) in the array are not necessarily sorted.
Moreover, the question was for C++: where does the 64-bit int come from, and how can I generate this new card so that the element generated is different from all the elements already present in the array?
There are 2^64 64-bit integers, a number that is so much larger than 1000 that the simplest solution would be to just generate a random 64-bit number, and then verify that it isn't in the table of already generated numbers. (The probability that it is is infinitesimal, but you might as well be sure.)
Since most random number generators do not generate 64-bit values, you are left with either writing your own, or (much simpler) combining values, say by generating 8 random bytes and memcpying them into a uint64_t.
As for verifying that the number isn't already present, std::find is just fine for one or two new numbers; if you have to do a lot of lookups, sorting the table and using a binary search would be worthwhile. Or some sort of a hash table.
I may be missing something, but most of the other answers appear to me to be overly complicated.
Just sort the original array and then start counting from zero: if the current count is in the array skip it, otherwise you have your next number. This algorithm is O(n), where n is the number of newly generated numbers: both sorting the array and skipping existing numbers are constants. Here's an example:
#include <algorithm>
#include <iostream>

unsigned array[] = { 98, 1, 24, 66, 20, 70, 6, 33, 5, 41 };
unsigned count = 0;
unsigned index = 0;

int main() {
    std::sort(array, array + 10);
    while ( count < 100 ) {
        if ( index < 10 && count > array[index] )
            ++index;
        else {
            if ( index >= 10 || count < array[index] )
                std::cout << count << std::endl;
            ++count;
        }
    }
}
Here's an O(n) algorithm:
int64 generateNewValue(list_of_cards)
{
    return find_max(list_of_cards) + 1;
}
Note: As #amit points out below, this will fail if INT64_MAX is already in the list.
As far as I'm aware, this is the only way you're going to get O(n). If you want to deal with that (fairly important) edge case, then you're going to have to do some kind of proper sort or search, which will take you to O(n log n).
#arne is almost there. What you need is a self-balancing interval tree, which can be built in O(n lg n) time.
Then take the top node, which will store some interval [i, j]. By the properties of an interval tree, both i-1 and j+1 are valid candidates for a new key, unless i = UINT64_MIN or j = UINT64_MAX. If both are true, then you've stored 2^64 elements and you can't possibly generate a new element. Store the new element, which takes O(lg n) worst-case time.
I.e.: init takes O(n lg n), generate takes O(lg n). Both are worst-case figures. The greatest thing about this approach is that the top node will keep "growing" (storing larger intervals) and merging with its successor or predecessor, so the tree will actually shrink in terms of memory use and eventually the time per operation decays to O(1). You also won't waste any numbers, so you can keep generating until you've got 2^64 of them.
This algorithm has O(N lg N) initialisation, O(1) query and O(N) memory usage. I assume you have some integer type which I will refer to as int64 and that it can represent the integers [0, int64_max].
Sort the numbers
Create a linked list containing intervals [u, v]
Insert [0, first number - 1] (skip it if it would be empty)
For each of the remaining numbers, insert [prev number + 1, current number - 1] (again skipping empty intervals)
Insert [last number + 1, int64_max]
You now have a list representing the numbers which are not used. You can simply iterate over them to generate new numbers.
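A minimal sketch of that interval list (the struct and member names are ours; duplicates are removed up front and the value range starts at 0):
#include <algorithm>
#include <cstdint>
#include <list>
#include <utility>
#include <vector>

struct FreeRanges {
    // inclusive intervals [u, v] of values not present in the input
    std::list<std::pair<std::uint64_t, std::uint64_t>> ranges;

    explicit FreeRanges(std::vector<std::uint64_t> used)
    {
        std::sort(used.begin(), used.end());
        used.erase(std::unique(used.begin(), used.end()), used.end());
        std::uint64_t next = 0;
        for (std::uint64_t x : used) {
            if (x > next) ranges.push_back({next, x - 1});
            next = x + 1;                   // wraps to 0 only if x == UINT64_MAX
        }
        if (used.empty() || next != 0)
            ranges.push_back({next, UINT64_MAX});
    }

    // Hands out a value from the front interval; assumes some interval is left.
    std::uint64_t generate()
    {
        auto& r = ranges.front();
        std::uint64_t value = r.first;
        if (r.first == r.second) ranges.pop_front();
        else ++r.first;
        return value;
    }
};
Every value handed out is absent from the original array by construction, and each generate() call is O(1) after the O(N lg N) build.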
I think the way to go is to use some kind of hashing. So you store your cards in buckets based on, let's say, a MOD operation. Until you create some sort of indexing you are stuck with looping over the whole array.
If you have a look at the HashSet implementation in Java you might get a clue.
Edit: I assumed you wanted them to be random numbers; if you don't mind a sequence, the MAX+1 approach elsewhere in this thread is a good solution :)
You could build a binary tree of the already existing elements and traverse it until you find a node whose depth is not 64 and which has fewer than two child nodes. You can then construct a "missing" child node and have a new element. This should be fairly quick, on the order of O(n) if I'm not mistaken.
bool seen[1001] = { false };
for each element of the original array
    if the element is in the range 0..1000
        seen[element] = true
find the index for the first false value in seen
Initialization:
Don't sort the list.
Create a new array 1000 long containing 0..999.
Iterate the list and, if any number is in the range 0..999, invalidate it in the new array by replacing the value in the new array with the value of the first item in the list.
Insertion:
Use an incrementing index into the new array. If the value in the new array at this index is not the value of the first element in the list, add it to the list; otherwise check the value at the next position in the new array.
When the new array is used up, refill it using 1000..1999, invalidating existing values as above. Yes, this means looping over the list, but it doesn't have to be done for each insertion.
Near O(1) until the list gets so large that occasionally iterating it to invalidate the 'new' new array becomes significant. Maybe you could mitigate this by using a new array that grows, maybe always the size of the list?
Rgds,
Martin
Put them all into a hash table of size > 1000, and find the empty cell (this is the parking problem). Generate a key for that. This will of course work better for bigger table size. The table needs only 1-bit entries.
EDIT: this is the pigeonhole principle.
This needs "modulo tablesize" (or some other "semi-invertible" function) for a hash function.
unsigned hashtab[1001] = { 0, };
unsigned long long numbers[1000] = { ... };

void init(void)
{
    unsigned idx;
    for (idx = 0; idx < 1000; idx++) {
        hashtab[ numbers[idx] % 1001 ] += 1;
    }
}

unsigned long long generate(void)
{
    unsigned idx;
    for (idx = 0; idx < 1001; idx++) {
        if ( !hashtab[idx] ) break;
    }
    // idx is an unused remainder; any value congruent to it mod 1001 is new
    return idx + (unsigned long long)rand() * 1001;
}
Based on the solution here: question on array and number
Since there are 1000 numbers, if we consider their remainders with 1001, at least one remainder will be missing. We can pick that as our missing number.
So we maintain an array of counts: C[1001], which will maintain the number of integers with remainder r (upon dividing by 1001) in C[r].
We also maintain a set of numbers for which C[j] is 0 (say using a linked list).
When we move the window over, we decrement the count of the first element (say remainder i), i.e. decrement C[i]. If C[i] becomes zero we add i to the set of numbers. We update the C array with the new number we add.
If we need one number, we just pick a random element from the set of j for which C[j] is 0.
This is O(1) for new numbers and O(n) initially.
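A sketch of that bookkeeping (the struct and method names are ours); with at most 1000 numbers in the window there is always at least one empty residue class, so fresh() always succeeds:
#include <cstdint>
#include <set>
#include <vector>

struct MissingRemainder {
    int C[1001] = {0};        // C[r] = how many window elements have remainder r
    std::set<int> empty;      // residues r with C[r] == 0

    explicit MissingRemainder(const std::vector<std::uint64_t>& window)
    {
        for (int r = 0; r <= 1000; ++r) empty.insert(r);
        for (std::uint64_t x : window) add(x);
    }
    void add(std::uint64_t x)    { int r = (int)(x % 1001); if (C[r]++ == 0) empty.erase(r); }
    void remove(std::uint64_t x) { int r = (int)(x % 1001); if (--C[r] == 0) empty.insert(r); }

    // Any number congruent to this residue (e.g. the residue itself) is new.
    std::uint64_t fresh() const { return (std::uint64_t)*empty.begin(); }
};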
This is similar to other solutions but not quite.
How about something simple like this:
1) Partition the array into numbers that are at most 1000 and numbers that are above 1000
2) If all the numbers fit within the lower partition then choose 1001 (or any number greater than 1000) and we're done.
3) Otherwise we know that there must exist a number between 1 and 1000 that doesn't exist within the lower partition.
4) Create a 1000 element array of bools, or a 1000-element long bitfield, or whatnot and initialize the array to all 0's
5) For each integer in the lower partition, use its value as an index into the array/bitfield and set the corresponding bool to true (ie: do a radix sort)
6) Go over the array/bitfield and pick any unset value's index as the solution
This works in O(n) time, or since we've bounded everything by 1000, technically it's O(1), but O(n) time and space in general. There are three passes over the data, which isn't necessarily the most elegant approach, but the complexity remains O(n).
You can create a new array with the numbers that are not in the original array, then just pick one from this new array.
O(1)?