Consider an array arr.
s1 and s2 are two subsets that together partition the original array arr. We should partition arr so that these two subsets have the minimum difference, i.e., minimize sum(s1) - sum(s2) where sum(s1) >= sum(s2).
I'm supposed to print these two subsets, s1 and s2, as my output.
You can store the subsets in a vector.
For example, if arr[] = [0,1,5,6], then the minimal difference is 0 and the two subsets are s1[] = [0,1,5] and s2[] = [6].
Another example could be arr[] = [16,14,13,13,12,10,9,3], where the two subsets would be s1[] = [16,13,13,3] and s2[] = [14,12,10,9], again with a minimum difference of 0.
I can't seem to figure out how to get these two subsets.
Much appreciated!
Note: I know how to obtain the minimum difference of the two subsets using DP, but I am unable to proceed further. It's recovering the two subsets (with the minimal difference) that I'm unable to do.
Just an algorithm with a nudge in the right direction would do.
#include <iostream>
#include <vector>
#include <climits>   // INT_MAX
using namespace std;

int min_subset_sum_diff(int a[], int n, int sum) {
    // go[i][j] == true if some subset of the first i elements sums to j
    vector<vector<bool>> go(n + 1, vector<bool>(sum + 1, false));
    for (int i = 0; i < n + 1; ++i) {
        go[i][0] = true;
    }
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= sum; ++j) {
            if (a[i - 1] <= j) {
                go[i][j] = go[i - 1][j - a[i - 1]] || go[i - 1][j];
            }
            else {
                go[i][j] = go[i - 1][j];
            }
        }
    }
    for (int j = (sum / 2); j >= 0; --j) {
        if (go[n][j]) { // only need the last row, since all n elements are considered
            return sum - 2 * j;
        }
    }
    return INT_MAX;
}

int main() {
    int a[] = { 3, 100, 4, 4 };
    int n = sizeof(a) / sizeof(a[0]);
    int sum = 0;
    for (auto i : a) {
        sum += i;
    }
    cout << min_subset_sum_diff(a, n, sum);
}
s1 = sum(first_subset)
s2 = sum(second_subset)
Assuming s2 >= s1:
s1 + s2 = sum_arr
s2 = sum_arr - s1   // substituting this into the next equation
s2 - s1 = difference
(sum_arr - s1) - s1 = difference
sum_arr - 2*s1 = difference
This is my underlying logic.
sum(s1) and sum(s2) lie between 0..sum_arr.
Since I assume s1 <= s2, s1 can only take values between 0..(sum_arr/2).
My aim is to minimize sum_arr - 2*s1, which is achieved by the largest s1 for which go[n][s1] is true.
Make a parallel table T[][] of int with the same dimensions as go[][].
At the point where you make a decision, i.e.
if (a[i - 1] <= j)
store in T[i][j] some kind of information pointing to the true precursor coordinates of go[i][j].
After filling the tables, search for the best true entry in row go[n] of the boolean table. When it is found, take the value from the same cell of the T table and follow it to its precursor, then to the precursor of that precursor, and so on ("unwind" the subset information).
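If you would rather not maintain a second table, a closely related alternative (just a sketch of the idea, not the only way to do it) is to backtrack through go[][] itself once it is filled: whenever go[i-1][j] is already true, element a[i-1] can be left out and goes to s2; otherwise it must have been taken, so it goes to s1 and j is reduced by it. A minimal sketch, reusing the DP from the question:

#include <iostream>
#include <vector>
using namespace std;

// Sketch: fills go[][] as in the question, then walks back through it
// to recover the two subsets with minimal sum difference.
void min_subset_partition(const vector<int>& a, vector<int>& s1, vector<int>& s2) {
    int n = a.size(), sum = 0;
    for (int x : a) sum += x;

    vector<vector<bool>> go(n + 1, vector<bool>(sum + 1, false));
    for (int i = 0; i <= n; ++i) go[i][0] = true;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= sum; ++j)
            go[i][j] = go[i - 1][j] || (a[i - 1] <= j && go[i - 1][j - a[i - 1]]);

    // Largest reachable subset sum not exceeding sum/2.
    int j = sum / 2;
    while (j > 0 && !go[n][j]) --j;

    // Unwind: if j was already reachable without a[i-1], leave it out (s2);
    // otherwise it must have been taken (s1), so subtract it from j.
    for (int i = n; i >= 1; --i) {
        if (go[i - 1][j]) {
            s2.push_back(a[i - 1]);
        } else {
            s1.push_back(a[i - 1]);
            j -= a[i - 1];
        }
    }
}

int main() {
    vector<int> a = {0, 1, 5, 6}, s1, s2;
    min_subset_partition(a, s1, s2);
    for (int x : s1) cout << x << ' ';
    cout << "| ";
    for (int x : s2) cout << x << ' ';
    cout << '\n';   // one valid split with difference 0, e.g. 5 1 | 6 0
}

An explicit T table achieves the same thing by storing the precursor coordinates instead of re-testing go[i-1][j] during the walk back.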
I want to calculate the number of ways of colouring blocks in a row of m blocks, where the coloured (red) blocks must appear in groups of at least n consecutive blocks.
Let C(m) be the number of ways of colouring m blocks (fixing n).
If m<n, then C(m)=1, since no red blocks can appear.
Otherwise, a colouring either ends in a grey block, is all red, or ends with a grey block followed by n or more red blocks.
That is, if m>=n, C(m)=C(m-1)+1+sum(C(m-i-1) for i=n..m-1). Rewriting this gives C(m)=1+C(m-1)+sum(C(i) for i=0..(m-n-1)).
With a bit of care, we can compute this in O(m) time and space. The care is needed to note that the sum can be incrementally computed.
Here's one way to do it (python, but should be easy to convert to whatever language you want):
def count(m, n):
    C = [0] * (m+1)
    s = 0
    for i in range(m+1):
        if i-n-1 >= 0:
            s += C[i-n-1]
        C[i] = 1 if i < n else C[i-1] + 1 + s
    return C[m]
print(count(7, 3))
With a little more care, we can note that we only access the last n+1 elements of C at any point in time, so we can reduce the space usage to O(n):
def count(m, n):
    C = [0] * (n+1)
    s = 0
    for i in range(m+1):
        if i-n-1 >= 0:
            s += C[(i-n-1) % (n+1)]
        C[i % (n+1)] = 1 if i < n else C[(i+n) % (n+1)] + 1 + s
    return C[m % (n+1)]
print(count(7, 3))
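If you prefer to stay in C++, the same O(m)-time recurrence translates directly; a quick sketch (count_colourings is my own name for it):

#include <iostream>
#include <vector>

// Direct translation of the O(m)-time Python version above.
long long count_colourings(int m, int n) {
    std::vector<long long> C(m + 1, 0);
    long long s = 0;
    for (int i = 0; i <= m; ++i) {
        if (i - n - 1 >= 0)
            s += C[i - n - 1];
        C[i] = (i < n) ? 1 : C[i - 1] + 1 + s;
    }
    return C[m];
}

int main() {
    std::cout << count_colourings(7, 3) << std::endl;   // prints 17
}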
This is a possible program for that:
#include <iostream>
#include <vector>

int count_rec(int m, int n, std::vector<int>& cache)
{
    // Count starts with 1 for no blocks filled
    int c = 1;
    // i = starting position of current block
    for (int i = 0; i <= m - n; i++)
    {
        // j = number of filled blocks
        for (int j = n; j <= m - i; j++)
        {
            // r = number of blocks available after this block
            int r = m - j - i;
            // Check if number of combinations for r has already been calculated
            if (cache[r] < 0)
            {
                // If not, calculate it (subtract 1 block gap)
                cache[r] = count_rec(r - 1, n, cache);
            }
            // Get cached value
            c += cache[r];
        }
    }
    return c;
}

int count(int m, int n)
{
    if (m <= 0) return 0;
    if (n <= 0)
    {
        n = 1;
    }
    std::vector<int> cache(m, -1);
    return count_rec(m, n, cache);
}

int main()
{
    std::cout << "count(7, 3) = " << count(7, 3) << std::endl;
    return 0;
}
Output:
count(7, 3) = 17
I have recently learned the FFT algorithm.
I applied it to the problem of fast multiplication of very large natural numbers, following this pseudocode.
Let A be an array of length m, and let w be a primitive m-th root of unity.
Goal: produce the DFT F(A): the evaluation of A at 1, w, w^2, ..., w^{m-1}.
FFT(A, m, w)
{
    if (m == 1) return vector(a_0)
    else {
        A_even = (a_0, a_2, ..., a_{m-2})
        A_odd  = (a_1, a_3, ..., a_{m-1})
        F_even = FFT(A_even, m/2, w^2)   // w^2 is a primitive m/2-th root of unity
        F_odd  = FFT(A_odd, m/2, w^2)
        F = new vector of length m
        x = 1
        for (j = 0; j < m/2; ++j) {
            F[j]       = F_even[j] + x*F_odd[j]
            F[j + m/2] = F_even[j] - x*F_odd[j]
            x = x * w
        }
        return F
    }
}
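For concreteness, that pseudocode maps almost line for line onto C++; a direct, unoptimized transcription (fft_rec is just an illustrative name) looks like this:

#include <cmath>
#include <complex>
#include <iostream>
#include <vector>
using namespace std;
typedef complex<double> base;

// Recursive FFT following the pseudocode: m (= A.size()) must be a power of two,
// w must be a primitive m-th root of unity.
vector<base> fft_rec(const vector<base>& A, base w) {
    int m = A.size();
    if (m == 1) return A;
    vector<base> A_even(m / 2), A_odd(m / 2);
    for (int k = 0; k < m / 2; ++k) {
        A_even[k] = A[2 * k];
        A_odd[k]  = A[2 * k + 1];
    }
    vector<base> F_even = fft_rec(A_even, w * w);   // w^2 is a primitive (m/2)-th root
    vector<base> F_odd  = fft_rec(A_odd, w * w);
    vector<base> F(m);
    base x = 1;
    for (int j = 0; j < m / 2; ++j) {
        F[j]         = F_even[j] + x * F_odd[j];
        F[j + m / 2] = F_even[j] - x * F_odd[j];
        x *= w;
    }
    return F;
}

int main() {
    vector<base> A = {1.0, 2.0, 3.0, 4.0};                // coefficients a_0 .. a_3
    base w = polar(1.0, 2 * acos(-1.0) / A.size());       // a primitive 4th root of unity
    for (const base& f : fft_rec(A, w)) cout << f << '\n';
}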
It works great, but I found better code that does the same job without recursion and also runs much faster.
I have tried to figure out how it works line by line, but I failed.
I would really appreciate it if you could explain to me in detail what is happening in the first two for loops (not the math part).
Below is the new code:
#include <cmath>     // cos, sin, M_PI
#include <complex>
#include <vector>
using namespace std;

typedef complex<double> base;

void fft(vector<base> &a, bool invert)
{
    int n = a.size();
    // First loop: bit-reversal permutation of the input.
    for (int i = 1, j = 0; i < n; i++){
        int bit = n >> 1;
        for (; j >= bit; bit >>= 1) j -= bit;
        j += bit;
        if (i < j) swap(a[i], a[j]);
    }
    // Second loop: butterfly passes, combining half-blocks of length len/2
    // into blocks of length len, for len = 2, 4, 8, ..., n.
    for (int len = 2; len <= n; len <<= 1){
        double ang = 2 * M_PI / len * (invert ? -1 : 1);
        base wlen(cos(ang), sin(ang));
        for (int i = 0; i < n; i += len){
            base w(1);
            for (int j = 0; j < len / 2; j++){
                base u = a[i + j], v = a[i + j + len / 2] * w;
                a[i + j] = u + v;
                a[i + j + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
    if (invert)
    {
        for (int i = 0; i < n; i++)
            a[i] /= n;
    }
}
The Cooley–Tukey FFT implementation has been described hundreds of times; see the Wikipedia page section on the non-recursive method.
The first loop is the bit-reversal part: the code repacks the source array, swapping the element at index i with the element at the bit-reversed index of i (so for length 8, index 6 = 110b is swapped with index 3 = 011b, and index 5 = 101b remains in place).
This reordering allows the array to be processed in place, making the calculations on pairs separated by 1, 2, 4, 8, ... indexes (the len/2 step here) with the corresponding trigonometric coefficients.
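To make that concrete, here is a tiny standalone sketch (my own illustration, with the length fixed to n = 8) that runs exactly the same index arithmetic as the first loop and prints which positions end up swapped:

#include <iostream>

int main() {
    const int n = 8;   // must be a power of two, as in the FFT
    for (int i = 1, j = 0; i < n; i++) {
        int bit = n >> 1;
        for (; j >= bit; bit >>= 1) j -= bit;
        j += bit;
        // j is now the bit-reversal of i within log2(n) = 3 bits
        std::cout << i << " -> " << j << (i < j ? "  (swap)" : "") << '\n';
    }
}

For length 8 it reports the swaps 1 <-> 4 and 3 <-> 6, while indices such as 0, 2, 5 and 7 stay where they are, matching the description above.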
P.S. Your question carries the onlinejudge tag, so such a compact implementation is quite good for your purposes. But for real work it is worth using a highly optimized library like FFTW, etc.
Working on a USACO programming problem, I got stuck when using a brute-force approach.
From a list of N elements, I need to compute all distinct pair-configurations.
My problem is twofold.
How do I express such a configuration in, let's say, an array?
How do I go about computing all distinct combinations?
I only resorted to the brute-force approach after I gave up solving it analytically. Although this is context-specific, I got as far as noting that one can quickly rule out the rows where there is only a single so-called "wormhole", since it cannot effectively be part of an infinite cycle.
Update
I'll express them with a tree structure. Set N = 6; {A,B,C,D,E,F}.
By constructing the following trees in order, all combinations are listed.
A --> B,C,D,E,F;
B --> C,D,E,F;
C --> D,E,F;
D --> E,F;
E --> F.
Check: in total there are 6 choose 2 = 6!/(2!*4!) = 15 combinations.
Note: once a lower node is selected, it should be discarded as a top node; it can only appear in a single pair.
Next, select them and loop over all configurations.
Here is some sample code (in C/C++):
const int N = 6;   // number of elements to pair up
int c[N];          // c[i] = index of the element paired with i, or -1 if unpaired

void LoopOverAll(int n)
{
    if (n == N)
    {
        // output: the array c now contains a configuration
        // do anything you want here
        return;
    }
    if (c[n] != -1)
    {
        // this wormhole is already paired with someone
        LoopOverAll(n + 1);
        return;
    }
    for (int i = n + 1; i < N; i++)
    {
        if (c[i] != -1)
        {
            // this wormhole is already paired with someone
            continue;
        }
        c[i] = n; c[n] = i; LoopOverAll(n + 1);
        c[i] = -1;
    }
    c[n] = -1;
}

int main()
{
    for (int i = 0; i < N; i++)
        c[i] = -1;
    LoopOverAll(0);
    return 0;
}
Given an array, arr, of length n, find how many subsets of arr there are such that XOR(^) of those subsets is equal to a given number, ans.
I have this DP approach, but is there a way to improve its time complexity? ans is always less than 1024.
Here ans is the number such that the XOR (^) of a subset should equal it.
arr[1..n] contains all the numbers (1-based, as in the code below).
memset(dp, 0, sizeof(dp));
dp[0][0] = 1;
for (i = 1; i <= n; i++) {
    for (j = 0; j < 1024; j++) {
        dp[i][j] = dp[i-1][j] + dp[i-1][j ^ arr[i]];
    }
}
cout << dp[n][ans];
From user3386109's comment, building on top of your code:
/* Warning: Untested */
// Group equal values: if a value v occurs z times, there are
// even_[v] = 2^(z-1) ways to take an even number of its copies (XOR contribution 0)
// and odd_[v] = 2^(z-1) ways to take an odd number of its copies (XOR contribution v).
int counts[1024] = {0};
long long even_[1024], odd_[1024];
for (int i = 1; i <= n; ++i) counts[arr[i]] += 1;
for (int v = 0; v < 1024; ++v) {
    const int z = counts[v];
    // Look for overflow here
    even_[v] = z == 0 ? 1 : (long long)(1ULL << (z - 1));
    odd_[v]  = z == 0 ? 0 : (long long)(1ULL << (z - 1));
}
// dp must be sized at least [1025][1024] here
memset(dp, 0, sizeof(dp));
dp[0][0] = 1;
for (int v = 0; v < 1024; ++v) {
    for (int j = 0; j < 1024; j++) {
        // Check for overflow
        dp[v + 1][j] += even_[v] * dp[v][j];
        dp[v + 1][j ^ v] += odd_[v] * dp[v][j];
    }
}
cout << dp[1024][ans];
For calculating odd_ and even_, you can also use the following:
nC0 + nC2 + ... = nC1 + nC3 + ... = 2^(n-1)
(for example, with n = 4: 1 + 6 + 1 = 4 + 4 = 8 = 2^3),
because the number of ways to select an odd number of items = the number of ways to reject an odd number of items = the number of ways to select an even number of items.
You can also optimize the space by keeping just two rows of the dp array and reusing them, since the dp[i-2][*] values are never needed again.
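A minimal sketch of that two-row idea, applied to the recurrence from the question (assuming, as there, a 1-based arr whose values are all below 1024; long long is used to delay overflow):

#include <cstring>
#include <iostream>
using namespace std;

long long solve(const int arr[], int n, int ans) {
    static long long dp[2][1024];
    memset(dp, 0, sizeof(dp));
    dp[0][0] = 1;
    for (int i = 1; i <= n; i++) {
        int cur = i & 1, prev = cur ^ 1;   // row i and row i-1, reused alternately
        for (int j = 0; j < 1024; j++)
            dp[cur][j] = dp[prev][j] + dp[prev][j ^ arr[i]];
    }
    return dp[n & 1][ans];
}

int main() {
    int arr[] = {0, 4, 6, 2};              // 1-based as in the question: arr[1..3]
    cout << solve(arr, 3, 2) << '\n';      // subsets {2} and {4, 6} -> prints 2
}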
The idea behind dynamic programming is to (1) never compute the same result twice and (2) only compute results on demand, rather than precomputing the whole table as you do.
So a solution is needed for solve(arr, n, ans) with ans < 1024, n < 1000000 and arr = array[n]. The idea of having dp[n][ans] hold the number of results is reasonable, so dp needs the size dp = array[n+1][1024]. What we need is a way to distinguish not-yet-computed results from available results: memset(dp, -1, sizeof(dp)), and then, as you already did, dp[0][0] = 1.
solve(arr, n, ans):
    if (dp[n][ans] == -1)
        if (n == 0)   // and ans != 0, since that case was initialized already
            dp[n][ans] = 0
        else
            // combine the results with and without the current array element
            dp[n][ans] = solve(arr + 1, n - 1, ans) + solve(arr + 1, n - 1, ans XOR arr[0])
    return dp[n][ans]
The advantage is that the dp array is only partially computed on the way to your solution, which might save some time.
Depending on the stack size and n, it might be necessary to translate this from a recursive to an iterative solution.
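For reference, a rough, compilable C++ version of the recursive sketch above (my own translation; it assumes all values in arr are below 1024 so that ans XOR arr[0] stays a valid index):

#include <iostream>
#include <vector>
using namespace std;

vector<vector<long long>> dp;   // dp[n][ans], -1 = not computed yet

long long solve(const int* arr, int n, int ans) {
    if (dp[n][ans] == -1) {
        if (n == 0)
            dp[n][ans] = 0;     // dp[0][0] was already initialized to 1
        else
            dp[n][ans] = solve(arr + 1, n - 1, ans) +
                         solve(arr + 1, n - 1, ans ^ arr[0]);
    }
    return dp[n][ans];
}

int main() {
    int arr[] = {4, 6, 2};
    int n = 3, ans = 2;
    dp.assign(n + 1, vector<long long>(1024, -1));
    dp[0][0] = 1;
    cout << solve(arr, n, ans) << '\n';   // subsets {2} and {4, 6} -> prints 2
}

Memoizing on the pair (n, ans) is valid here because the remaining elements are always the last n entries of the array, so the pair uniquely identifies a subproblem.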