Just for fun, I'm trying to implement the pseudocode from this StackOverflow answer for the highly factorized Sieve of Eratosthenes in C++. I can't figure out why my code returns both prime and non-prime numbers. Am I implementing these for loops incorrectly? Should I be using while loops instead? I suspect that I'm not incrementing the for loops properly. Any help would be greatly appreciated. I've spent several hours trying to hunt down the flaw.
GordonBGood's pseudocode is inserted as comments, and I've used all the same variable names.
#include <iostream>
#include <vector>
#include <cmath>
const int limit = 1000000000;
const std::vector<int> r {23,29,31,37,41,43,47,53, 59,61,67,71,73,79,83, //positions + 19
89,97,101,103,107,109,113,121,127, 131,137,139,
143,149,151,157,163,167,169,173,179,181,187,191,193,
197,199,209,211,221,223,227,229};
int main()
{
// an array of length 11 times 13 times 17 times 19 = 46189 wheels initialized
// so that it doesn't contain multiples of the large wheel primes
// for n where n ← 210 × w + x where w ∈ {0,...46189}, x in r: // already
// if (n mod cp) not equal to 0 where cp ∈ {11,13,17,19}: // no 2,3,5,7
// mstr(n) ← true else mstr(n) ← false // factors
std::vector<bool> mstr(limit);
int n;
for (int w=0; w <= 46189; ++w) {
for (auto x = begin(r); x != end(r); ++x) {
n = 210*w + *x;
if (n % 11 != 0 && n % 13 != 0 && n % 17 != 0 && n % 19 != 0)
mstr[n]=true;
else
mstr[n]=false;
}
}
// Initialize the sieve as an array of the smaller wheels with
// enough wheels to include the representation for limit
// for n where n ← 210 × w + x, w ∈ {0,...(limit - 19) ÷ 210}, x in r:
// sieve(n) ← mstr(n mod (210 × 46189)) // init pre-culled primes.
std::vector<bool> sieve(limit+1000);
for (int w=0; w <= (limit-19)/210; ++w) {
for (auto x = begin(r); x != end(r); ++x) {
n = 210*w + *x;
sieve[n] = mstr[(n % (210*46189))];
}
}
// Eliminate composites by sieving, only for those occurrences on the
// wheel using wheel factorization version of the Sieve of Eratosthenes
// for n² ≤ limit when n ← 210 × k + x where k ∈ {0..}, x in r
// if sieve(n):
// // n is prime, cull its multiples
// s ← n² - n × (x - 23) // zero'th modulo cull start position
// while c0 ≤ limit when c0 ← s + n × m where m in r:
// c0d ← (c0 - 23) ÷ 210, cm ← (c0 - 23) mod 210 + 23 //means cm in r
// while c ≤ limit for c ← 210 × (c0d + n × j) + cm
// where j ∈ {0,...}:
// sieve(c) ← false // cull composites of this prime
int s, c, c0, c0d, cm, j;
for ( auto x = begin(r); x != end(r); ++x) {
for ( int k=0; (n=210*k + (*x)) <= sqrt(limit); ++k){
if (sieve[n]) {
s = n*n - n*((*x)-23);
for ( auto m = begin(r); (c0=s+n*(*m)) <= limit && m != end(r); ++m) {
c0d = (c0-23)/210;
cm = (c0-23)%210 + 23;
for ( int j=0; (c=210*(c0d+n*j)+cm) <= limit; ++j) {
sieve[c] = false;
}
}
}
}
}
// output 2, 3, 5, 7, 11, 13, 17, 19,
// for n ≤ limit when n ← 210 × k + x where k ∈ {0..}, x in r:
// if sieve(n): output n
std::cout << "2\n3\n5\n7\n11\n13\n17\n19\n";
for ( auto x = begin(r); x != end(r); ++x) {
for ( int k = 0; (n=210*k + (*x)) <= limit; ++k) {
if (sieve[n]);
std::cout << n << std::endl;
}
}
std::cout << std::endl;
return 0;
}
The body of the problem is as follows:
Let n be a positive integer. Let v be an array with n positions, indexed from 1 to n, whose elements are distinct numbers from 1 to n.
Consider n to be a power of 2 (n = 2^m, with m a positive integer). The array v has the property that for any i from 1 to m and any j from 1 to 2^(m-i), there is a k from 1 to 2^(m-i) such that the positions in v from 2^i * (j-1)+1 to 2^i * j hold the integers from 2^i * (k-1)+1 to 2^i * k, in some order. Write a program that sorts the array v in ascending order, changing the order of the elements of v only through the operation FLIP(n, v, 2^i * (j-1)+1, 2^i * j), with i from 1 to m and j from 1 to 2^(m-i), using the property of the array v.
The FLIP operation:
void FLIP(int v[], int n, int i, int j) {
    while (i < j) {
        int aux = v[i];
        v[i] = v[j];
        v[j] = aux;
        i++;
        j--;
    }
}
Example of input:
n = 16
v = [14 13 15 16 11 12 9 10 2 1 4 3 8 7 6 5]
Output:
v = [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16]
What I found out is that if you group the elements of array v as follows:
2 by 2, the resulting groups each hold 2 consecutive values: (14, 13), (15, 16)
4 by 4, the resulting groups each hold 4 consecutive values: (14, 13, 15, 16)
2^i by 2^i, the resulting groups each hold 2^i consecutive values.
So my mind goes to a divide and conquer approach, but I don't know how to implement it.
I think I got it: basically a modification of merge sort, without the merging step, instead using the FLIP function that was required. It took me a while; I had an off-by-one error in there.
#include <iostream>
using namespace std;
void FLIP(int v[], int n, int i, int j) {
    while (i < j) {
        int aux = v[i];
        v[i] = v[j];
        v[j] = aux;
        i++;
        j--;
    }
}

// Sorts the segment [l, r) of v (n is the size of that segment) using only FLIP.
void run(int v[], int n, int l, int r) {
    if (n < 2) {
        return;
    }
    // Each half of the segment holds a contiguous block of values, so comparing
    // the two endpoints tells us whether the halves are in the right order.
    if (v[l] > v[r - 1]) FLIP(v, n, l, r - 1);
    int m = (l + r) / 2;
    run(v, n / 2, l, m);
    run(v, n / 2, m, r);
}

int main(int argc, char** argv) {
    int v[] = { 14,13,15,16,11,12,9,10,2,1,4,3,8,7,6,5 };
    run(v, 16, 0, 16);
    for (int& x : v) cout << x << ' ';
    cout << endl;
}
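For the example array from the problem statement this prints 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16, the expected output.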
Question description: Given an array arr[] of length N, the task is to find the XOR of the pairwise sums over every possible unordered pair of the array.
I solved this question using the method described in this post.
My Code :
int xorAllSum(int a[], int n)
{
int curr, prev = 0;
int ans = 0;
for (int k = 0; k < 32; k++) {
int o = 0, z = 0;
for (int i = 0; i < n; i++) {
if (a[i] & (1 << k)) {
o++;
}
else {
z++;
}
}
curr = o * z + prev;
if (curr & 1) {
ans = ans | (1 << k);
}
prev = o * (o - 1) / 2;
}
return ans;
}
Code description: I am finding out, for each bit, whether our answer will have that bit set or not. To do this for each bit position, I find the count of all the numbers which have a set bit at that position (represented by 'o' in the code) and the count of numbers having an unset bit at that position (represented by 'z').
Now if we pair up these numbers (one with a set bit and one with an unset bit at this position), the pair's sum has a set bit at this position (and we need the XOR of all pair sums).
The 'prev' term is included to account for the carry-over bits. The answer will have a set bit at the current position only if the number of such set bits is odd, since we are doing an XOR operation.
But I am not getting the correct output. Can anyone please help me?
Test Cases :
n = 3, a[] = {1, 2, 3} => (1 + 2) ^ (1 + 3) ^ (2 + 3)
=> 3 ^ 4 ^ 5 = 2
=> Output : 2
n = 6
a[] = {1 2 10 11 18 20}
Output : 50
n = 8
a[] = {10 26 38 44 51 70 59 20}
Output : 182
Constraints : 2 <= n <= 10^8
Also, note that we need to consider unordered pairs, not ordered pairs, for the answer.
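For reference while debugging, here is a tiny brute force over unordered pairs (a minimal sketch; the function name is mine). It reproduces the expected outputs listed above:
#include <iostream>

// Brute force: XOR of (a[i] + a[j]) over all unordered pairs i < j.
// Only intended for small n, as a reference while debugging.
int xorAllSumBrute(const int a[], int n)
{
    int ans = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            ans ^= a[i] + a[j];
    return ans;
}

int main()
{
    int a1[] = {1, 2, 3};
    int a2[] = {1, 2, 10, 11, 18, 20};
    int a3[] = {10, 26, 38, 44, 51, 70, 59, 20};
    std::cout << xorAllSumBrute(a1, 3) << "\n";  // 2
    std::cout << xorAllSumBrute(a2, 6) << "\n";  // 50
    std::cout << xorAllSumBrute(a3, 8) << "\n";  // 182
}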
PS : I know that the same question has been asked before but I couldn't explain my problem with this much detail in the comments so I created a new post. I am new here, so please pardon me and give me your feedback :)
I suspect that the idea in the post you referred to is missing important details, if it could work at all with the stated complexity. (I would be happy to better understand and be corrected should that author wish to clarify their method further.)
Here's my understanding of at least one author's intention for an O(n * log n * w) solution, where w is the number of bits in the largest sum, as well as JavaScript code with a random comparison to brute force to show that it works (easily translatable to C or Python).
The idea is to examine the contribution of each bit one at a time. Since in any one iteration we are only interested in whether the kth bit of the sums is set, we can discard all parts of the numbers that include higher bits, taking each of them modulo 2^(k + 1).
Now the sums that necessarily have the kth bit set lie in the intervals [2^k, 2^(k + 1)) (where the kth bit is the highest set bit) and [2^(k+1) + 2^k, 2^(k+2) − 2] (where both the kth and (k+1)th bits are set). So in the iteration for each bit, we sort the input list (modulo 2^(k + 1)), and for each left summand we decrement a pointer to the end of each of the two intervals and binary-search the relevant start index.
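For example, with A = {1, 2, 3} and k = 1 we work modulo 4, so the values stay 1, 2, 3 and the pair sums are 3, 4, 5. The intervals for bit 1 are [2, 4) and [6, 6]; only the sum 3 falls inside, so exactly one pair contributes, an odd count, and bit 1 of the result is set. Repeating this for bits 0 and 2 gives even counts, so the result is 2, matching the expected output.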
// https://stackoverflow.com/q/64082509
// Returns the lowest index of a value
// greater than or equal to the target
function lowerIdx(a, val, left, right){
if (left >= right)
return left;
mid = left + ((right - left) >> 1);
if (a[mid] < val)
return lowerIdx(a, val, mid+1, right);
else
return lowerIdx(a, val, left, mid);
}
function bruteForce(A){
let answer = 0;
for (let i=1; i<A.length; i++)
for (let j=0; j<i; j++)
answer ^= A[i] + A[j];
return answer;
}
function f(A, W){
const n = A.length;
const _A = new Array(n);
let result = 0;
for (let k=0; k<W; k++){
for (let i=0; i<n; i++)
_A[i] = A[i] % (1 << (k + 1));
_A.sort((a, b) => a - b);
let pairs_with_kth_bit = 0;
let l1 = 1 << k;
let r1 = 1 << (k + 1);
let l2 = (1 << (k + 1)) + (1 << k);
let r2 = (1 << (k + 2)) - 2;
let ptr1 = n - 1;
let ptr2 = n - 1;
for (let i=0; i<n-1; i++){
// Interval [2^k, 2^(k+1))
while (ptr1 > i+1 && _A[i] + _A[ptr1] >= r1)
ptr1 -= 1;
const idx1 = lowerIdx(_A, l1-_A[i], i+1, ptr1);
let sum = _A[i] + _A[idx1];
if (sum >= l1 && sum < r1)
pairs_with_kth_bit += ptr1 - idx1 + 1;
// Interval [2^(k+1)+2^k, 2^(k+2)−2]
while (ptr2 > i+1 && _A[i] + _A[ptr2] > r2)
ptr2 -= 1;
const idx2 = lowerIdx(_A, l2-_A[i], i+1, ptr2);
sum = _A[i] + _A[idx2]
if (sum >= l2 && sum <= r2)
pairs_with_kth_bit += ptr2 - idx2 + 1;
}
if (pairs_with_kth_bit & 1)
result |= 1 << k;
}
return result;
}
var As = [
[1, 2, 3], // 2
[1, 2, 10, 11, 18, 20], // 50
[10, 26, 38, 44, 51, 70, 59, 20] // 182
];
for (let A of As){
console.log(JSON.stringify(A));
console.log(`DP, brute force: ${ f(A, 10) }, ${ bruteForce(A) }`);
console.log('');
}
var numTests = 500;
for (let i=0; i<numTests; i++){
const W = 8;
const A = [];
const n = 12;
for (let j=0; j<n; j++){
const num = Math.floor(Math.random() * (1 << (W - 1)));
A.push(num);
}
const fA = f(A, W);
const brute = bruteForce(A);
if (fA != brute){
console.log('Mismatch:');
console.log(A);
console.log(fA, brute);
console.log('');
}
}
console.log("Done testing.");
Problem
Given an array A = a0,a1,...an, with size up to N ≤ 10^5, and 0 ≤ ai ≤ 10^9.
And a number 0 < M ≤ 10^9.
The task is to find the maximum of (ai + ai+1 + ⋯ + aj) % M over all ranges (i, j), and how many different ranges (i, j) attain that maximum.
The complexity has to be better than O(N^2); that is too slow.
Example
N = 3, M = 5
A = {2, 4, 3}
The maximum sum mod M is 4, and there are 2 ranges that achieve it: a0 to a2, and a1 alone.
My attempt
Let's define s[j] = (a0 + a1 + ... + aj) % M. If you want the best sum that ends at j, you have to choose an s[i] with i < j such that s[i] is the smallest prefix sum strictly greater than s[j].
Because if s[i] > s[j], we can write s[i] = M − K with K < M − s[j]; then the resulting range sum is (s[j] − s[i] + M) % M = s[j] + K, and since K < M − s[j] this is larger than s[j] itself; the closer s[i] is to s[j] from above, the larger the result mod M.
That is the idea of my attempt: first calculate all the prefix sums that start at index 0 and end at an index i; then you can quickly find the smallest value greater than the current one with the binary search the map already provides (lower_bound), and count how many times you could form a sum with the value that you found. You have to keep each sum in the map, together with how many times it has occurred, to do that counting.
#include <iostream>
#include <map>
#define optimizar_io ios_base::sync_with_stdio(false);cin.tie(NULL);
using namespace std;
const int LN = 1e5;
long long N, M, num[LN];
map < long long, int > sum;
int main() {
optimizar_io
cin >> N >> M;
sum[0]++;
long long cont = 0, tmax = 0, res = 1, val;
map < long long, int > :: iterator best;
for (int i = 0; i < N; i++)
{
cin >> num[i];
cont = (cont + num[i]) % M;
if (tmax == cont)
res += sum[0];
if (tmax < cont)
tmax = cont, res = sum[0];
best = sum.lower_bound(cont + 1);
if (best != sum.end())
{
val = cont - (*best).first + M;
if (tmax == val)
res += (*best).second;
if (tmax < val)
tmax = val, res = (*best).second;
}
sum[cont]++;
}
cout << tmax << " " << res;
return 0;
}
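With the example above this reads 3 5 and then 2 4 3 and prints 4 2: the maximum 4 is first found at i = 1 through lower_bound (the range consisting of a1 alone) and matched again at i = 2 by the full prefix a0 to a2, giving two ranges.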
I tried to create a function which takes two variables n and k.
The function returns the number of positive integers up to n whose prime factors are all less than or equal to k; n is the largest positive integer considered.
For example, if k = 4 and n = 10, the positive integers whose prime factors are all less than or equal to 4 are 1, 2, 3, 4, 6, 8, 9, 12, ... (1 is always counted, for some reason, even though it is not prime), but since n is 10, 12 and higher numbers are ignored.
So the function will return 7. The code I wrote works for smaller values of n, but it just keeps on running for larger values.
How can I optimize this code? Should I start from scratch and come up with a better algorithm?
int generalisedHammingNumbers(int n, int k)
{
vector<int>store;
vector<int>each_prime = {};
for (int i = 1; i <= n; ++i)
{
for (int j = 1; j <= i; ++j)
{
if (i%j == 0 && is_prime(j))
{
each_prime.push_back(j); //temporary vector of prime factors for each integer(i)
}
}
for (int m = 0; m<each_prime.size(); ++m)
{
while(each_prime[m] <= k && m<each_prime.size()-1) //search for prime factor greater than k
{
++m;
}
if (each_prime[m] > k); //do nothing for prime factor greater than k
else store.push_back(i); //if no prime factor greater than k, i is valid, store i
}
each_prime = {};
}
return (store.size()+1);
}
bool is_prime(int x)
{
vector<int>test;
if (x != 1)
{
for (int i = 2; i < x; ++i)
{
if (x%i == 0)test.push_back(i);
}
if (test.size() == 0)return true;
else return false;
}
return false;
}
int main()
{
long n;
int k;
cin >> n >> k;
long result = generalisedHammingNumbers(n, k);
cout << result << endl;
}
Should I start from scratch and come up with a better algorithm?
Yes... I think so.
This looks to me like a job for the Sieve of Eratosthenes.
So I propose to
1) create a std::vector<bool> to detect, via Eratosthenes, the primes up to n
2) remove the primes starting from k+1, and their multiples, from the pool of your numbers (another std::vector<bool>)
3) count the remaining true values in the pool vector
The following is a full working example
#include <vector>
#include <iostream>
#include <algorithm>
#include <cmath>     // for std::sqrt
std::size_t foo (std::size_t n, std::size_t k)
{
std::vector<bool> primes(n+1U, true);
std::vector<bool> pool(n+1U, true);
std::size_t const sqrtOfN = std::sqrt(n);
// first remove the not primes from primes list (Sieve of Eratosthenes)
for ( auto i = 2U ; i <= sqrtOfN ; ++i )
if ( primes[i] )
for ( auto j = i << 1 ; j <= n ; j += i )
primes[j] = false;
// then remove from pool primes, bigger than k, and multiples
for ( auto i = k+1U ; i <= n ; ++i )
if ( primes[i] )
for ( auto j = i ; j <= n ; j += i )
pool[j] = false;
// last count the true value in pool (excluding the zero)
return std::count(pool.begin()+1U, pool.end(), true);
}
int main ()
{
std::cout << foo(10U, 4U) << std::endl;
}
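For the example in the question this prints 7 (counting 1, 2, 3, 4, 6, 8, 9).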
Generate the primes using a Sieve of Eratosthenes, and then use a modified coin-change algorithm to find numbers which are products of only those primes. In fact, one can do both simultaneously, like this (in Python, but it is easily convertible to C++):
def limited_prime_factors(n, k):
    ps = [False] * (k+1)
    r = [True] * 2 + [False] * n
    for p in xrange(2, k+1):
        if ps[p]: continue
        for i in xrange(p, k+1, p):
            ps[i] = True
        for i in xrange(p, n+1, p):
            r[i] = r[i//p]
    return [i for i, b in enumerate(r) if b]
print limited_prime_factors(100, 3)
The output is:
[0, 1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, 48, 54, 64, 72, 81, 96]
Here, each time we find a prime p, we strike out all multiples of p in the ps array (as in a standard Sieve of Eratosthenes), and then, in the r array, each multiple i of p inherits the mark of i/p, so r[i] records whether all of i's prime factors are less than or equal to p.
It runs in O(n) space and O(n log log k) time, assuming n>k.
A simpler O(n log k) solution tests if all the factors of a number are less than or equal to k:
def limited_prime_factors(n, k):
    r = [True] * 2 + [False] * n
    for p in xrange(2, k+1):
        for i in xrange(p, n+1, p):
            r[i] = r[i//p]
    return [i for i, b in enumerate(r) if b]
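Since the question is in C++, here is one possible direct translation of the first version above (just a sketch; the function and variable names are mine), returning the count instead of the list:
#include <iostream>
#include <vector>

// Counts the integers in [1, n] whose prime factors are all <= k
// (1 is counted, as in the question). Direct port of the Python above.
long long countLimitedPrimeFactors(int n, int k)
{
    std::vector<bool> composite(k + 1, false);  // "ps" in the Python
    std::vector<bool> good(n + 1, false);       // "r" in the Python
    if (n >= 1) good[1] = true;                 // 1 has no prime factors
    for (int p = 2; p <= k; ++p) {
        if (composite[p]) continue;             // skip composites; p is prime here
        for (int i = p; i <= k; i += p) composite[i] = true;
        for (int i = p; i <= n; i += p) good[i] = good[i / p];
    }
    long long count = 0;
    for (int i = 1; i <= n; ++i)
        if (good[i]) ++count;
    return count;
}

int main()
{
    std::cout << countLimitedPrimeFactors(10, 4) << '\n';  // 7, as in the question
}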
Here's an Eulerian version in Python (seems about 1.5 times faster than Paul Hankin's). We generate only the numbers themselves by multiplying a list by each prime and its powers in turn.
import time
start = time.time()
n = 1000000
k = 100
total = 1
a = [None for i in range(0, n+1)]
s = []
p = 1
while (p < k):
    p = p + 1
    if a[p] is None:
        #print("\n\nPrime: " + str(p))
        a[p] = True
        total = total + 1
        s.append(p)
        limit = n / p
        new_s = []
        for i in s:
            j = i
            while j <= limit:
                new_s.append(j)
                #print j*p
                a[j * p] = True
                total = total + 1
                j = j * p
        s = new_s
print("\n\nGilad's answer: " + str(total))
end = time.time()
print(end - start)
# Paul Hankin's solution
def limited_prime_factors(n, k):
    ps = [False] * (k+1)
    r = [True] * 2 + [False] * n
    for p in xrange(2, k+1):
        if ps[p]: continue
        for i in xrange(p, k+1, p):
            ps[i] = True
        for i in xrange(p, n+1, p):
            r[i] = r[i//p]
    return len([i for i, b in enumerate(r) if b]) - 1
start = time.time()
print "\nPaul's answer:" + str(limited_prime_factors(1000000, 100))
end = time.time()
print(end - start)
I am trying to solve the following problem:
Find the smallest n-bit integer c that has k 1-bits and is the sum of two n-bit integers that have g and h bits set to 1, respectively. g, h, k <= n.
To start with, an n-bit integer here means that we may use all n bits, i.e. the maximum value of such an integer is 2^n − 1. The described integer may not exist at all.
It is obvious that the case k > g + h has no solution, and for g + h = k the answer is just 2^k − 1 (the low k bits are 1-bits, with n − k zeroes in front).
As for some examples of what the program is supposed to do:
g = h = k = 4, n = 10 :
0000001111 + 0000001111 = 0000011110
15 + 15 = 30 (30 should be the output)
(g, h, k, n) = (4, 6, 5, 10):
0000011110 + 0000111111 = 0001011101
30 + 63 = 93
(g, h, k, n) = (30, 1, 1, 31):
1 + (2^30 - 1) = 2^30
As I think of it, this is a dynamic programming problem and I've chosen the following approach:
Let dp[g][h][k][n][c] be the described integer, where c is an optional carry bit. I try to reconstruct possible sums depending on the lowest-order bits.
So, dp[g][h][k][n + 1][0] is the minimum of
(0, 0): dp[g][h][k][n][0]
(0, 0): 2^n + dp[g][h][k - 1][n][1]
(1, 0): 2^n + dp[g - 1][h][k - 1][n][0]
(0, 1): 2^n + dp[g][h - 1][k - 1][n][0]
Similarly, dp[g][h][k][n + 1][1] is the minimum of
(1, 1): dp[g - 1][h - 1][k][n][0]
(1, 1): dp[g - 1][h - 1][k - 1][n][1] + 2^n
(1, 0): dp[g - 1][h][k][n][1]
(0, 1): dp[g][h - 1][k][n][1]
The idea isn't that hard, but I'm not really experienced with such things and my algorithm doesn't work even for the simplest cases. I've chosen a top-down approach. It's hard for me to consider all the corner cases, and I don't really know whether I've properly chosen the base of the recursion, etc. My algorithm doesn't even work for the most basic case g = h = k = 1, n = 2 (the answer is 01 + 01 = 10). There shouldn't be an answer for g = h = k = 1, n = 1, but the algorithm gives 1 (which is basically why the former example outputs 1 instead of 2).
So, here goes my awful code (only very basic C++):
int solve(int g, int h, int k, int n, int c = 0) {
if (n <= 0) {
return 0;
}
if (dp[g][h][k][n][c]) {
return dp[g][h][k][n][c];
}
if (!c) {
if (g + h == k) {
return dp[g][h][k][n][c] = (1 << k) - 1;
}
int min, a1, a2, a3, a4;
min = a1 = a2 = a3 = a4 = std::numeric_limits<int>::max();
if (g + h > k && k <= n - 1) {
a1 = solve(g, h, k, n - 1, 0);
}
if (g + h >= k - 1 && k - 1 <= n - 1) {
a2 = (1 << (n - 1)) + solve(g, h, k - 1, n - 1, 1);
}
if (g - 1 + h >= k - 1 && k - 1 <= n - 1) {
a3 = (1 << (n - 1)) + solve(g - 1, h, k - 1, n - 1, 0);
}
if (g + h - 1 >= k - 1 && k - 1 <= n - 1) {
a4 = (1 << (n - 1)) + solve(g, h - 1, k - 1, n - 1, 0);
}
min = std::min({a1, a2, a3, a4});
return dp[g][h][k][n][c] = min;
} else {
int min, a1, a2, a3, a4;
min = a1 = a2 = a3 = a4 = std::numeric_limits<int>::max();
if (g - 2 + h >= k && k <= n - 1) {
a1 = solve(g - 1, h - 1, k, n - 1, 0);
}
if (g - 2 + h >= k - 1 && k - 1 <= n - 1) {
a2 = (1 << (n - 1)) + solve(g - 1, h - 1, k - 1, n - 1, 1);
}
if (g - 1 + h >= k && k <= n - 1) {
a3 = solve(g - 1, h, k, n - 1, 1);
}
if (g - 1 + h >= k && k <= n - 1) {
a4 = solve(g, h - 1, k, n - 1, 1);
}
min = std::min({a1, a2, a3, a4});
return dp[g][h][k][n][c] = min;
}
}
You can construct the smallest sum based on the bit counts g, h, and k, without doing any dynamic programming at all. Assuming that g ≥ h (switch them otherwise) these are the rules:
k ≤ h ≤ g
11111111 <- g ones
111100000111 <- h-k ones + g-k zeros + k ones
1000000000110 <- n must be at least h+g-k+1
h ≤ k ≤ g
1111111111 <- g ones
11111100 <- h ones + k-h zeros
10011111011 <- n must be at least g+1
h ≤ g ≤ k
1111111100000 <- g ones + k-g zeros
1100000011111 <- g+h-k ones, k-h zeros, k-g ones
11011111111111 <- n must be at least k+1, or k if g+h=k
Example: all values of k for n=10, g=6 and h=4:
k=1 k=2 k=3 k=4
0000111111 0000111111 0000111111 0000111111
0111000001 0011000011 0001000111 0000001111
---------- ---------- ---------- ----------
1000000000 0100000010 0010000110 0001001110
k=4 k=5 k=6
0000111111 0000111111 0000111111
0000001111 0000011110 0000111100
---------- ---------- ----------
0001001110 0001011101 0001111011
k=6 k=7 k=8 k=9 k=10
0000111111 0001111110 0011111100 0111111000 1111110000
0000111100 0001110001 0011000011 0100000111 0000001111
---------- ---------- ---------- ---------- ----------
0001111011 0011101111 0110111111 1011111111 1111111111
Or, going straight to the value of c without calculating a and b first:
k ≤ h ≤ g
c = (1 << (g + h - k)) + ((1 << k) - 2)
h ≤ k ≤ g
c = (1 << g) + ((1 << k) - 1) - (1 << (k - h))
h ≤ g ≤ k
c = ((1 << (k + 1)) - 1) - (1 << ((g - h) + 2 * (k - g)))
h + g = k
c = (1 << k) - 1
which results in this disappointingly mundane code:
int smallest_sum(unsigned n, unsigned g, unsigned h, unsigned k) {
    if (g < h) {unsigned swap = g; g = h; h = swap;}
    if (k == 0) return (g > 0 || h > 0 || n < 1) ? -1 : 0;
    if (h == 0) return (g != k || n < k) ? -1 : (1 << k) - 1;
    if (k <= h) return (n <= h + g - k) ? -1 : (1 << (g + h - k)) + ((1 << k) - 2);
    if (k <= g) return (n <= g) ? -1 : (1 << g) + ((1 << k) - 1) - (1 << (k - h));
    if (k < g + h) return (n <= k) ? -1 : (1 << (k + 1)) - 1 - (1 << (2 * k - g - h));
    if (k == g + h) return (n < k) ? -1 : (1 << k) - 1;
    return -1;
}
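As a quick check of the formulas against the question's second example, take (g, h, k, n) = (4, 6, 5, 10): after swapping so that g ≥ h we have g = 6, h = 4, k = 5, the h ≤ k ≤ g case, and c = (1 << 6) + ((1 << 5) − 1) − (1 << 1) = 64 + 31 − 2 = 93, i.e. 0001011101, matching the question.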
Some example results:
n=31, g=15, h=25, k=10 -> 1,073,742,846 (1000000000000000000001111111110)
n=31, g=15, h=25, k=20 -> 34,602,975 (0000010000011111111111111011111)
n=31, g=15, h=25, k=30 -> 2,146,435,071 (1111111111011111111111111111111)
(I compared the results with those of a brute-force algorithm for every value of n, g, h and k from 0 to 20 to check correctness, and found no differences.)
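For anyone who wants to repeat that kind of check, a brute-force comparison could look roughly like the sketch below (not the original test harness; it assumes smallest_sum() from the snippet above is in scope, and it enumerates all pairs of values below 2^n, so keep n small):
#include <cstdio>

// Portable popcount helper (GCC/Clang users could use __builtin_popcount instead).
static int popcount(unsigned x) {
    int c = 0;
    for (; x; x &= x - 1) ++c;
    return c;
}

// Smallest c = a + b with a, b < 2^n, popcount(a) = g, popcount(b) = h,
// popcount(c) = k and c < 2^n; returns -1 if no such c exists.
int smallestSumBrute(unsigned n, unsigned g, unsigned h, unsigned k) {
    int best = -1;
    for (unsigned a = 0; a < (1u << n); ++a) {
        if (popcount(a) != (int)g) continue;
        for (unsigned b = 0; b < (1u << n); ++b) {
            if (popcount(b) != (int)h) continue;
            unsigned c = a + b;
            if (c < (1u << n) && popcount(c) == (int)k)
                if (best < 0 || c < (unsigned)best) best = (int)c;
        }
    }
    return best;
}

int main() {
    // Compare the closed-form smallest_sum() against the brute force for small parameters.
    for (unsigned n = 1; n <= 8; ++n)
        for (unsigned g = 0; g <= n; ++g)
            for (unsigned h = 0; h <= n; ++h)
                for (unsigned k = 0; k <= n; ++k)
                    if (smallest_sum(n, g, h, k) != smallestSumBrute(n, g, h, k))
                        std::printf("mismatch at n=%u g=%u h=%u k=%u\n", n, g, h, k);
    std::printf("done\n");
}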
I'm not too convinced about the dynamic programming approach. If I understand correctly, you would need to define how to go to dp[g + 1][h][k][n], dp[g][h + 1][k][n], dp[g][h][k + 1][n] and dp[g][h][k][n + 1], with and without the carry bit, as a function of previous computations, and I'm not sure what the right rules are for all of those.
I think an easier way to think of the problem is as an A* search tree, where each node contains two partial candidate numbers to add, let's call them G and H. You start with a node with G = 0 and H = 0 at level m = 0, and work as follows:
If G + H has n or fewer bits and k 1 bits, that's the solution, you found it!
Otherwise, if
n - m < number of 1 bits in G + H - k
discard the node (no solution possible).
Otherwise, if
(g + h) - (number of 1 bits in G + number of 1 bits in H) < k - number of 1 bits in G + H
discard the node (not viable candidates).
Otherwise, branch the node into a new level. Generally you make up to four children of each node, prefixing G and H with 0 and 0, 0 and 1, 1 and 0 or 1 and 1 respectively. However:
You can only precede G with a 1 if the number of 1 bits in G is fewer than g, and similarly for H and h.
At level m (G and H have m bits), you can only precede G with a 0 if
n - m > g - number of 1 bits in G
and similarly for H and h.
If G == H and g == h, you can skip one of 0 and 1 and 1 and 0, since they will lead to the same subtree.
Continue to the next node and repeat until you find a solution or you don't have any more nodes to visit.
The order in which you visit the nodes is important. You should store the nodes in a priority queue/heap such that the next node is always the first one that could potentially lead to the best solution. This is actually easy: for each node, take G + H and prefix it with the number of 1 bits still needed to reach k; that is the best possible solution reachable from there.
There are possibly better rules to discard invalid nodes (steps 2 and 3), but the idea of the algorithm is the same.