Computing Rand error efficiently - C++

I'm trying to compare two image segmentations to one another.
In order to do so, I transform each image into a vector of unsigned short values and calculate the Rand error according to the following formula:

Rand error = (a + b) / (total number of pixel pairs)

where:

a = the number of pixel pairs that carry the same label in both segmentations,
b = the number of pixel pairs that carry different labels in both segmentations,
and the total number of pixel pairs is n*(n-1)/2 for n pixels.
Here is my code (the rand error calculation part):
cv::Mat im1, im2;
//code for acquiring data for im1, im2
//code for copying im1(:)->v1, im2(:)->v2

int N = v1.size();
double a = 0;
double b = 0;

for (int i = 0; i < N; i++)
{
    for (int j = 0; j < i; j++)
    {
        unsigned short l1 = v1[i];
        unsigned short l2 = v1[j];
        unsigned short gt1 = v2[i];
        unsigned short gt2 = v2[j];

        if (l1 == l2 && gt1 == gt2)
        {
            a++;
        }
        else if (l1 != l2 && gt1 != gt2)
        {
            b++;
        }
    }
}
double NPairs = (double)N * N / 2; // cast before multiplying so N*N does not overflow int
double res = (a + b) / NPairs;
My problem is that the length of each vector is 307,200.
Therefore the total number of iterations is 47,185,920,000.
This makes the running time of the entire process very slow (a few minutes to compute).
Do you have any idea how can I improve it?
Thanks!

Let's assume that we have P distinct labels in the first image and Q distinct labels in the second image. The key observation for efficient computation of Rand error, also called Rand index, is that the number of distinct labels is usually much smaller than the number of pixels (i.e. P, Q << n).
Step 1
First, pre-compute the following auxiliary data:
the vector s1, with size P, such that s1[p] is the number of pixel positions i with v1[i] = p.
the vector s2, with size Q, such that s2[q] is the number of pixel positions i with v2[i] = q.
the matrix M, with size P x Q, such that M[p][q] is the number of pixel positions i with v1[i] = p and v2[i] = q.
The vectors s1, s2 and the matrix M can be computed by passing once through the input images, i.e. in O(n).
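A minimal sketch of this pre-computation (a fragment, assuming the labels have already been remapped to the ranges 0..P-1 and 0..Q-1, and reusing n, v1 and v2 from above):

std::vector<long long> s1(P, 0), s2(Q, 0);
std::vector<std::vector<long long>> M(P, std::vector<long long>(Q, 0));
for (int i = 0; i < n; i++)
{
    s1[v1[i]]++;        // how many pixels carry label p in image 1
    s2[v2[i]]++;        // how many pixels carry label q in image 2
    M[v1[i]][v2[i]]++;  // joint histogram of label pairs
}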
Step 2
Once s1, s2 and M are available, a and b can be computed efficiently:

a = sum over all (p, q) of M[p][q] * (M[p][q] - 1) / 2

This holds because each pair of pixels (i, j) that we are interested in has the property that both its pixels have the same label in image 1, i.e. v1[i] = v1[j] = p, and the same label in image 2, i.e. v2[i] = v2[j] = q. Since v1[i] = p and v2[i] = q, the pixel i will contribute to the bin M[p][q], and the same does the pixel j. Therefore, for each combination of labels p and q we need to consider the number of pairs of pixels that fall into the M[p][q] bin, and then sum them up for all possible labels p and q.
Similarly, for b we have:

b = (1/2) * sum over all (p, q) of M[p][q] * (n - s1[p] - s2[q] + M[p][q])
Here, we are counting how many pairs are formed with one of the pixels falling into the bin M[p][q]. Such a pixel can form a good pair with each pixel that is falling into a bin M[p'][q'], with the condition that p != p' and q != q'. Summing over all such M[p'][q'] is equivalent to subtracting from the sum over the entire matrix M (this sum is n) the sum on row p (i.e. s1[p]) and the sum on the column q (i.e. s2[q]). However, after subtracting the row and column sums, we have subtracted M[p][q] twice, and this is why it is added at the end of the expression above. Finally, this is divided by 2 because each pair was counted twice (once for each of its two constituent pixels as being part of a bin M[p][q] in the argument above).
The Rand error (Rand index) can now be computed as:

res = (a + b) / (n * (n - 1) / 2)
The overall complexity of this method is O(n) + O(PQ), with the first term usually being the dominant one.
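For completeness, a sketch of Step 2 on top of the arrays from the Step 1 fragment above (same assumptions; n is the total number of pixels):

double a = 0, b = 0;
for (int p = 0; p < P; p++)
{
    for (int q = 0; q < Q; q++)
    {
        double m = (double)M[p][q];
        a += m * (m - 1) / 2;                          // pairs agreeing in both labelings
        b += m * ((double)n - s1[p] - s2[q] + m) / 2;  // pairs disagreeing in both labelings
    }
}
double nPairs = (double)n * (n - 1) / 2;
double res = (a + b) / nPairs;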

After reading your comments, I tried the following approach:
calculate the intersections for each possible pair of values.
use the intersection results to calculate the error.
I performed the calculation straight on the cv::Mat objects, without converting them into std::vector objects. That gave me the ability to use OpenCV functions and achieve a faster runtime.
Code:
double a = 0, b = 0; //init variables
//unique function finds all the unique value of a matrix, with an optional input mask
std::set<unsigned short> m1Vals = unique(mat1);
for (unsigned short s1 : m1Vals)
{
    cv::Mat mask1 = (mat1 == s1);
    std::set<unsigned short> m2ValsInRoi = unique(mat2, mask1);
    for (unsigned short s2 : m2ValsInRoi)
    {
        cv::Mat mask2 = mat2 == s2;
        cv::Mat andMask = mask1 & mask2;
        double andVal = cv::countNonZero(andMask);
        a += (andVal * (andVal - 1)) / 2;
        b += (andVal * (double)cv::countNonZero(~mask1 & ~mask2)) / 2;
    }
}
double NPairs = (double)(N*(N-1)) / 2;
double res = (a + b) / NPairs;
The runtime is now reasonable (only a few milliseconds vs a few minutes), and the output is the same as the code above.
Example:
I ran the code on the following matrices:
//mat1 = [1 1 2]
cv::Mat mat1 = cv::Mat::ones(cv::Size(3, 1), CV_16U);
mat1.at<ushort>(cv::Point(2, 0)) = 2;
//mat2 = [1 2 1]
cv::Mat mat2 = cv::Mat::ones(cv::Size(3, 1), CV_16U);
mat2.at<ushort>(cv::Point(1, 0)) = 2;
In this case a = 0 (no pair has the same label in both matrices), and b = 1 (the pair i=2, j=3 has different labels in both). The algorithm result:
a = 0
b = 1
NPairs = 3
result = 0.3333333
Thank you all for your help!

Related

Need help understanding this line in an FFT algorithm

In my program I have a function that performs the fast Fourier transform. I know there are very good implementations freely available, but this is a learning thing so I don't want to use those. I ended up finding this comment with the following implementation (it originated from the Italian entry for the FFT):
void transform(complex<double>* f, int N) //
{
    ordina(f, N); //first: reverse order
    complex<double> *W;
    W = (complex<double> *)malloc(N / 2 * sizeof(complex<double>));
    W[1] = polar(1., -2. * M_PI / N);
    W[0] = 1;
    for(int i = 2; i < N / 2; i++)
        W[i] = pow(W[1], i);
    int n = 1;
    int a = N / 2;
    for(int j = 0; j < log2(N); j++) {
        for(int k = 0; k < N; k++) {
            if(!(k & n)) {
                complex<double> temp = f[k];
                complex<double> Temp = W[(k * a) % (n * a)] * f[k + n];
                f[k] = temp + Temp;
                f[k + n] = temp - Temp;
            }
        }
        n *= 2;
        a = a / 2;
    }
    free(W);
}
I've made a lot of changes by now but this was my starting point. One of the changes I made was to not cache the twiddle factors, because I decided to see if it's needed first. Now I've decided I do want to cache them. The way this implementation seems to do it is it has this array W of length N/2, where every index k has the value e^(-i·2πk/N). What I don't understand is this expression:
W[(k * a) % (n * a)]
Note that n * a is always equal to N/2. I get that this is supposed to be equal to e^(-i·2πk/(2n)) (the twiddle factor for the current stage), and I can see that k/(2n) = k·a/N (since a = N/(2n)), which this relies on. I also get that modulo can be used here because the twiddle factors are cyclic. But there's one thing I don't get: this is a length-N DFT, and yet only N/2 twiddle factors are ever calculated. Shouldn't the array be of length N, and the modulo should be by N?
The twiddle factors are equally spaced points on the unit circle, and there is an even number of points because N is a power of two. After going around half of the circle (starting at 1, going counterclockwise above the X-axis), the second half is a repeat of the first half, but this time below the X-axis (the points can be reflected through the origin). That is why Temp is subtracted the second time: that subtraction is the negation of the twiddle factor.
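To spell that out: W[k + N/2] = e^(-i·2π(k + N/2)/N) = e^(-iπ) · e^(-i·2πk/N) = -W[k]. So the upper half of a full-length table would contain only the negations of the lower half; the code gets the same effect by reusing W[k] and computing temp - Temp instead of doing a second lookup.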

Cut rectangle in minimum number of squares

I'm trying to solve the following problem:
A rectangular paper sheet of M*N is to be cut down into squares such that:
The paper is cut along a line that is parallel to one of the sides of the paper.
The paper is cut such that the resultant dimensions are always integers.
The process stops when the paper can't be cut any further.
What is the minimum number of paper pieces cut such that all are squares?
Limits: 1 <= N <= 100 and 1 <= M <= 100.
Example: Let N=1 and M=2, then answer is 2 as the minimum number of squares that can be cut is 2 (the paper is cut horizontally along the smaller side in the middle).
My code:
cin >> n >> m;
int N = min(n,m);
int M = max(n,m);
int ans = 0;
while (N != M) {
    ans++;
    int x = M - N;
    int y = N;
    M = max(x, y);
    N = min(x, y);
}
if (N == M && M != 0)
    ans++;
But I cannot figure out what's wrong with this approach, as it gives a wrong answer.
I think both the DP and greedy solutions are not optimal. Here is a counterexample for the DP solution:
Consider a rectangle of size 13 x 11. The DP solution gives 8 as the answer, but the optimal solution uses only 6 squares.
This thread has many counterexamples: https://mathoverflow.net/questions/116382/tiling-a-rectangle-with-the-smallest-number-of-squares
Also, have a look at this for a correct solution: http://int-e.eu/~bf3/squares/
I'd write this as a dynamic (recursive) program.
Write a function which tries to split the rectangle at some position. Call the function recursively for both parts. Try all possible splits and take the one with the minimum result.
The base case would be when both sides are equal, i.e. the input is already a square, in which case the result is 1.
function min_squares(m, n):
    // base case:
    if m == n: return 1

    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }

    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }

    return min { min_hor, min_ver }
To improve performance, you can cache the recursive results:
function min_squares(m, n):
    // base case:
    if m == n: return 1

    // check if we already cached this
    if cache contains (m, n):
        return cache(m, n)

    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }

    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }

    // put in cache and return
    result := min { min_hor, min_ver }
    cache(m, n) := result
    return result
In a concrete C++ implementation, you could use int cache[100][100] for the cache data structure since your input size is limited. Put it as a static local variable, so it will automatically be initialized with zeroes. Then interpret 0 as "not cached" (as it can't be the result of any inputs).
Possible C++ implementation: http://ideone.com/HbiFOH
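For reference, a minimal C++ sketch of that memoized recursion (illustrative only; not necessarily identical to the linked implementation):

#include <algorithm>

// Memoized recursion; cache[m][n] == 0 means "not computed yet".
int min_squares(int m, int n)
{
    static int cache[101][101]; // zero-initialized, inputs limited to 100
    if (m == n) return 1;
    int &c = cache[m][n];
    if (c != 0) return c;
    int best = m * n;                  // upper bound: cut everything into 1x1 squares
    for (int i = 1; i <= n / 2; i++)   // vertical splits
        best = std::min(best, min_squares(m, i) + min_squares(m, n - i));
    for (int i = 1; i <= m / 2; i++)   // horizontal splits
        best = std::min(best, min_squares(i, n) + min_squares(m - i, n));
    return c = best;
}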
The greedy algorithm is not optimal. On a 6x5 rectangle, it uses a 5x5 square and 5 1x1 squares. The optimal solution uses 2 3x3 squares and 3 2x2 squares.
To get an optimal solution, use dynamic programming. The brute-force recursive solution tries all possible horizontal and vertical first cuts, recursively cutting the two pieces optimally. By caching (memoizing) the value of the function for each input, we get a polynomial-time dynamic program (O(m n max(m, n))).
This problem can be solved using dynamic programming.
Assume we have a rectangle with width N and height M.
If N == M, it is already a square and nothing more needs to be done.
Otherwise, we can divide the rectangle into two smaller ones, (N - x, M) and (x, M), so it can be solved recursively.
Similarly, we can also divide it into (N, M - x) and (N, x).
Pseudo code:
int[][] dp;
boolean[][] check;

int cutNeeded(int n, int m)
    if (n == m)
        return 1;
    if (check[n][m])
        return dp[n][m];
    check[n][m] = true;

    int result = n*m;
    for (int i = 1; i <= n/2; i++)
        int tmp = cutNeeded(n - i, m) + cutNeeded(i, m);
        result = min(tmp, result);
    for (int i = 1; i <= m/2; i++)
        int tmp = cutNeeded(n, m - i) + cutNeeded(n, i);
        result = min(tmp, result);

    return dp[n][m] = result;
Here is a greedy implementation. As @David mentioned, it is not optimal and is completely wrong in some cases, so the dynamic approach (with caching) is the best.
def greedy(m, n):
    if m == n:
        return 1
    if m < n:
        m, n = n, m
    cuts = 0
    while n:
        cuts += m/n
        m, n = n, m % n
    return cuts

print greedy(2, 7)
Here is a DP attempt in Python:
import sys

def cache(f):
    db = {}
    def wrap(*args):
        key = str(args)
        if key not in db:
            db[key] = f(*args)
        return db[key]
    return wrap

@cache
def squares(m, n):
    if m == n:
        return 1
    xcuts = sys.maxint
    ycuts = sys.maxint
    x, y = 1, 1
    while x * 2 <= n:
        xcuts = min(xcuts, squares(m, x) + squares(m, n - x))
        x += 1
    while y * 2 <= m:
        ycuts = min(ycuts, squares(y, n) + squares(m - y, n))
        y += 1
    return min(xcuts, ycuts)
This is essentially the classic integer or 0-1 knapsack problem, which can be solved using a greedy or dynamic programming approach. You may refer to: Solving the Integer Knapsack

How to compute sum of evenly spaced binomial coefficients

How to find sum of evenly spaced Binomial coefficients modulo M?
i.e. (C(n,a) + C(n,a+r) + C(n,a+2r) + C(n,a+3r) + ... + C(n,a+kr)) % M = ?
given: 0 <= a < r, a + kr <= n < a + (k+1)r, n < 10^5, r < 100
My first attempt was:
int res = 0;
int mod=1000000009;
for (int k = 0; a + r*k <= n; k++) {
res = (res + mod_nCr(n, a+r*k, mod)) % mod;
}
but this is not efficient. So after reading here
and this paper I found out the above sum is equivalent to:
summation of [ ω^(-ja) * (1 + ω^j)^n / r ] for 0 <= j < r, where ω = e^(i·2π/r) is a primitive r-th root of unity.
What can be the code to find this sum in Order(r)?
Edit:
n can go up to 10^5 and r can go up to 100.
Original problem source: https://www.codechef.com/APRIL14/problems/ANUCBC
Editorial for the problem from the contest: https://discuss.codechef.com/t/anucbc-editorial/5113
After revisiting this post 6 years later, I'm unable to recall how I transformed the original problem statement into my version; nonetheless, I shared the link to the original editorial in case anyone wants to have a look at the correct solution approach.
Binomial coefficients are coefficients of the polynomial (1+x)^n. The sum of the coefficients of x^a, x^(a+r), etc. is the coefficient of x^a in (1+x)^n in the ring of polynomials mod x^r-1. Polynomials mod x^r-1 can be specified by an array of coefficients of length r. You can compute (1+x)^n mod (x^r-1, M) by repeated squaring, reducing mod x^r-1 and mod M at each step. This takes about log_2(n)r^2 steps and O(r) space with naive multiplication. It is faster if you use the Fast Fourier Transform to multiply or exponentiate the polynomials.
For example, suppose n=20 and r=5.
(1+x) = {1,1,0,0,0}
(1+x)^2 = {1,2,1,0,0}
(1+x)^4 = {1,4,6,4,1}
(1+x)^8 = {1,8,28,56,70,56,28,8,1}
{1+56,8+28,28+8,56+1,70}
{57,36,36,57,70}
(1+x)^16 = {3249,4104,5400,9090,13380,9144,8289,7980,4900}
{3249+9144,4104+8289,5400+7980,9090+4900,13380}
{12393,12393,13380,13990,13380}
(1+x)^20 = (1+x)^16 (1+x)^4
= {12393,12393,13380,13990,13380}*{1,4,6,4,1}
{12393,61965,137310,191440,211585,203373,149620,67510,13380}
{215766,211585,204820,204820,211585}
This tells you the sums for the 5 possible values of a. For example, for a=1, 211585 = 20c1+20c6+20c11+20c16 = 20+38760+167960+4845.
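A rough C++ sketch of this repeated squaring in the ring of polynomials mod (x^r - 1, M), using the naive O(r^2) multiplication described above (function names are illustrative):

#include <vector>
#include <cstdint>
using std::vector;

// Multiply two polynomials mod (x^r - 1, M).
vector<int64_t> polymul(const vector<int64_t>& p, const vector<int64_t>& q, int64_t M)
{
    int r = p.size();
    vector<int64_t> res(r, 0);
    for (int i = 0; i < r; i++)
        for (int j = 0; j < r; j++)
            res[(i + j) % r] = (res[(i + j) % r] + p[i] * q[j]) % M;
    return res;
}

// Coefficients of (1+x)^n mod (x^r - 1, M); entry a holds the required sum for that a.
vector<int64_t> binom_sums(int64_t n, int r, int64_t M)
{
    vector<int64_t> result(r, 0), base(r, 0);
    result[0] = 1;       // the constant polynomial 1
    base[0] = 1;
    base[1 % r] += 1;    // the polynomial 1 + x, reduced mod x^r - 1
    while (n > 0) {
        if (n & 1) result = polymul(result, base, M);
        base = polymul(base, base, M);
        n >>= 1;
    }
    return result;
}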
Something like this, but you have to check a, n and r, because I just put in arbitrary values without regard for the constraints:
#include <complex>
#include <cmath>
#include <iostream>
using namespace std;

int main( void )
{
    const int r = 10;
    const int a = 2;
    const int n = 4;

    complex<double> i(0., 1.), res(0., 0.);
    complex<double> w = exp( i * 2. * M_PI / (double)r ); // primitive r-th root of unity
    for( int j(0); j < r; ++j )
    {
        res += pow( w, -j * a ) * pow( 1. + pow( w, j ), n ) / (double)r;
    }
    cout << res.real() << endl; // the sum, up to floating point error
    return 0;
}
The mod operation is expensive; try to avoid it as much as possible:
uint64_t res = 0;
int mod = 1000000009;
for (int k = 0; a + r*k <= n; k++) {
    res += mod_nCr(n, a + r*k, mod);
    if (res >= mod)
        res %= mod;
}
I did not test this code
I don't know whether you got anywhere with this question, but the key to implementing this formula is to realize that the powers w^i behave like independent symbols and therefore form a ring. In simpler terms, you should think of computing
(1+x)^n % (x^r - 1), i.e. finding (1+x)^n in the ring Z[x]/(x^r - 1).
If that is confusing, here is an easy implementation.
Make a vector of size r. O(r) space + O(r) time.
Initialize this vector with zeros everywhere. O(r) space + O(r) time.
Make the first two elements of that vector 1. O(1).
Calculate (x+1)^n using the fast exponentiation method; each multiplication takes O(r^2) and there are log n multiplications, therefore O(r^2 log(n)).
Return the first element of the vector. O(1).
Complexity: O(r^2 log(n)) time and O(r) space.
The r^2 factor can be reduced to r log(r) using the Fourier transform.
How is the multiplication done? It is regular polynomial multiplication, with the exponents taken mod r:
vector<long long> p1(r, 0);
vector<long long> p2(r, 0);
p1[0] = p1[1] = 1;
p2[0] = p2[1] = 1;
now we want to do the multiplication
vector<long long> res(r, 0);
for (int i = 0; i < r; i++)
{
    for (int j = 0; j < r; j++)
    {
        res[(i + j) % r] += (p1[i] * p2[j]); // reduce mod M here as well if the coefficients can overflow
    }
}
return res[0];
I have implemented this part before. If you are still confused about something, let me know. I would prefer that you implement the code yourself, but if you need the code let me know.

Generating incomplete iterated function systems

I am doing this assignment for fun.
http://groups.csail.mit.edu/graphics/classes/6.837/F04/assignments/assignment0/
There are sample outputs at the site if you want to see how it is supposed to look. It involves iterated function systems, whose algorithm according to the assignment is:
for "lots" of random points (x0, y0)
for k=0 to num_iters
pick a random transform fi
(xk+1, yk+1) = fi(xk, yk)
display a dot at (xk, yk)
I am running into trouble with my implementation, which is:
void IFS::render(Image& img, int numPoints, int numIterations){
    Vec3f color(0,1,0);
    float x,y;
    float u,v;
    Vec2f myVector;
    for(int i = 0; i < numPoints; i++){
        x = (float)(rand()%img.Width())/img.Width();
        y = (float)(rand()%img.Height())/img.Height();
        myVector.Set(x,y);
        for(int j = 0; j < numIterations;j++){
            float randomPercent = (float)(rand()%100)/100;
            for(int k = 0; k < num_transforms; k++){
                if(randomPercent < range[k]){
                    matrices[k].Transform(myVector);
                }
            }
        }
        u = myVector.x()*img.Width();
        v = myVector.y()*img.Height();
        img.SetPixel(u,v,color);
    }
}
This is how I pick a random transform from the input matrices:
fscanf(input,"%d",&num_transforms);
matrices = new Matrix[num_transforms];
probablility = new float[num_transforms];
range = new float[num_transforms+1];
for (int i = 0; i < num_transforms; i++) {
    fscanf (input,"%f",&probablility[i]);
    matrices[i].Read3x3(input);
    if(i == 0) range[i] = probablility[i];
    else range[i] = probablility[i] + range[i-1];
}
My output shows only the beginnings of a Sierpinski triangle (1000 points, 1000 iterations):
My dragon is better, but still needs some work (1000 points, 1000 iterations):
If you have RAND_MAX=4 and picture width 3, an evenly distributed sequence like [0,1,2,3,4] from rand() will be mapped to [0,1,2,0,1] by your modulo code, i.e. some numbers will occur more often than others. You need to cut off the values at or above the highest multiple of the target range that still fits, i.e. at or above ((RAND_MAX / 3) * 3) in this example. Just check for this limit and call rand() again.
Since you have to fix that error in several places, consider writing a utility function. Then, reduce the scope of your variables. The u,v declaration makes it hard to see that these two are just used in three lines of code. Declare them as "unsigned const u = ..." to make this clear and additionally get the compiler to check that you don't accidentally modify them afterwards.
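A sketch of such a utility function, using rejection sampling so that every value in [0, upper) is equally likely (the name is illustrative):

#include <cstdlib>

// Uniform random integer in [0, upper): reject the raw rand() values at or above
// the largest multiple of `upper` that fits, then take the remainder.
int uniform_rand(int upper)
{
    int limit = (RAND_MAX / upper) * upper;
    int r;
    do {
        r = rand();
    } while (r >= limit);
    return r % upper;
}

The call sites then become, for example, x = (float)uniform_rand(img.Width()) / img.Width();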

Constructing fractions Interview challenge

I recently came across the following interview question. I was wondering whether a dynamic programming approach would work, and/or whether there is some kind of mathematical insight that would make the solution easier... It's very similar to how IEEE 754 doubles are constructed.
Question:
There is a vector V of N double values, where the value at the ith index is equal to 1/2^(i+1), e.g. 1/2, 1/4, 1/8, 1/16, etc.
You're to write a function that takes one double 'r' as input, where 0 < r < 1, and outputs to stdout the indexes of V that, when summed, give a value closer to 'r' than any other combination of indexes from the vector V.
Furthermore, the number of indexes should be a minimum, and in the event there are two solutions, the solution closest to zero should be preferred.
void getIndexes(std::vector<double>& V, double r)
{
....
}
int main()
{
std::vector<double> V;
// populate V...
double r = 0.3;
getIndexes(V,r);
return 0;
}
Note: It seems like there are a few SO'ers who aren't in the mood to read the question completely. So let's all note the following:
The solution, i.e. the sum, may be larger than r; hence any strategy that incrementally subtracts fractions from r until it hits zero or near zero is wrong.
There are examples of r where there will be 2 solutions, that is |r-s0| == |r-s1| and s0 < s1; in this case s0 should be selected. This makes the problem slightly more difficult, as the knapsack-style solutions tend to greedily overestimate first.
If you believe this problem is trivial, you most likely haven't understood it. Hence it would be a good idea to read the question again.
EDIT (Matthieu M.): 2 examples for V = {1/2, 1/4, 1/8, 1/16, 1/32}
r = 0.3, S = {1, 3}
r = 0.256652, S = {1}
Algorithm
Consider a target number r and a set F of fractions {1/2, 1/4, ... 1/(2^N)}. Let the smallest fraction, 1/(2^N), be denoted P.
Then the optimal sum will be equal to:
S = P * round(r/P)
That is, the optimal sum S will be some integer multiple of the smallest fraction available, P. The maximum error, err = r - S, is ± 1/2 * 1/(2^N). No better solution is possible because this would require the use of a number smaller than 1/(2^N), which is the smallest number in the set F.
Since the fractions F are all power-of-two multiples of P = 1/(2^N), any integer multiple of P can be expressed as a sum of the fractions in F. To obtain the list of fractions that should be used, encode the integer round(r/P) in binary and read off 1 in the kth binary place as "include the kth fraction in the solution".
Example:
Take r = 0.3 and F as {1/2, 1/4, 1/8, 1/16, 1/32}.
Multiply the entire problem by 32.
Take r = 9.6, and F as {16, 8, 4, 2, 1}.
Round r to the nearest integer.
Take r = 10.
Encode 10 as a binary integer (five places)
10 = 0b 0 1 0 1 0 ( 8 + 2 )
^ ^ ^ ^ ^
| | | | |
| | | | 1
| | | 2
| | 4
| 8
16
Associate each binary bit with a fraction.
= 0b 0 1 0 1 0 ( 1/4 + 1/16 = 0.3125 )
^ ^ ^ ^ ^
| | | | |
| | | | 1/32
| | | 1/16
| | 1/8
| 1/4
1/2
Proof
Consider transforming the problem by multiplying all the numbers involved by 2**N so that all the fractions become integers.
The original problem:
Consider a target number r in the range 0 < r < 1, and a list of fractions {1/2, 1/4, ..., 1/(2**N)}. Find the subset of the list of fractions that sums to S such that error = r - S is minimised.
Becomes the following equivalent problem (after multiplying by 2**N):
Consider a target number r in the range 0 < r < 2**N and a list of integers {2**(N-1), 2**(N-2), ... , 4, 2, 1}. Find the subset of the list of integers that sums to S such that error = r - S is minimised.
Choosing powers of two that sum to a given number (with as little error as possible) is simply binary encoding of an integer. This problem therefore reduces to binary encoding of an integer.
Existence of solution: Any positive floating point number r, 0 < r < 2**N, can be cast to an integer and represented in binary form.
Optimality: The maximum error in the integer version of the solution is the round-off error of ±0.5. (In the original problem, the maximum error is ±0.5 * 1/2**N.)
Uniqueness: for any positive (floating point) number there is a unique integer representation and therefore a unique binary representation. (Possible exception: the 0.5 case; see below.)
Implementation (Python)
This function converts the problem to the integer equivalent, rounds off r to an integer, then reads off the binary representation of r as an integer to get the required fractions.
def conv_frac (r,N):
    # Convert to equivalent integer problem.
    R = r * 2**N
    S = int(round(R))

    # Convert integer S to N-bit binary representation (i.e. a character string
    # of 1's and 0's.) Note use of [2:] to trim leading '0b' and zfill() to
    # zero-pad to required length.
    bin_S = bin(S)[2:].zfill(N)

    nums = list()
    for index, bit in enumerate(bin_S):
        k = index + 1
        if bit == '1':
            print "%i : 1/%i or %f" % (index, 2**k, 1.0/(2**k))
            nums.append(1.0/(2**k))

    S = sum(nums)
    e = r - S

    print """
Original number `r` : %f
Number of fractions `N` : %i (smallest fraction 1/%i)
Sum of fractions `S` : %f
Error `e` : %f
""" % (r,N,2**N,S,e)
Sample output:
>>> conv_frac(0.3141,10)
1 : 1/4 or 0.250000
3 : 1/16 or 0.062500
8 : 1/512 or 0.001953
Original number `r` : 0.314100
Number of fractions `N` : 10 (smallest fraction 1/1024)
Sum of fractions `S` : 0.314453
Error `e` : -0.000353
>>> conv_frac(0.30,5)
1 : 1/4 or 0.250000
3 : 1/16 or 0.062500
Original number `r` : 0.300000
Number of fractions `N` : 5 (smallest fraction 1/32)
Sum of fractions `S` : 0.312500
Error `e` : -0.012500
Addendum: the 0.5 problem
If r * 2**N ends in 0.5, then it could be rounded up or down. That is, there are two possible representations as a sum-of-fractions.
If, as in the original problem statement, you want the representation that uses fewest fractions (i.e. the least number of 1 bits in the binary representation), just try both rounding options and pick whichever one is more economical.
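A small sketch of that tie-break, assuming R = r * 2**N landed exactly on a .5 boundary (hypothetical helper, not part of the code above):

#include <bitset>
#include <cmath>
#include <cstdint>

// Choose between rounding down and rounding up: prefer the candidate with fewer
// set bits (fewer fractions); on a tie, prefer the smaller integer (the sum closer to zero).
uint32_t pick_rounding(double R)
{
    uint32_t lo = (uint32_t)std::floor(R);
    uint32_t hi = lo + 1;
    size_t bitsLo = std::bitset<32>(lo).count();
    size_t bitsHi = std::bitset<32>(hi).count();
    if (bitsLo != bitsHi) return (bitsLo < bitsHi) ? lo : hi;
    return lo;
}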
Perhaps I am dumb...
The only trick I can see here is that the sum of (1/2)^(i+1) for i in [0..n), where n tends towards infinity, gives 1. This simple fact proves that (1/2)^i is always greater than the sum of (1/2)^j for j in [i+1, n), whatever n is.
So, when looking for our indices, it does not seem we have much choice. Let's start with i = 0:
either r is greater than 2^-(i+1), and thus we need it,
or it is smaller, and we need to choose whether 2^-(i+1) or the sum of 2^-j for j in [i+2, N] is closer (deferring to the latter in case of equality).
The only step that could be costly is obtaining the sum, but it can be precomputed once and for all (and even precomputed lazily).
#include <vector>
#include <algorithm>
#include <cmath>
#include <boost/foreach.hpp>

// The resulting vector contains at index i the sum of 2^-j for j in [i+1, N]
// and is padded with one 0 to get the same length as `v`
static std::vector<double> partialSums(std::vector<double> const& v) {
    std::vector<double> result;

    // When summing doubles, we need to start with the smaller ones
    // because of the precision of representations...
    double sum = 0;
    BOOST_REVERSE_FOREACH(double d, v) {
        sum += d;
        result.push_back(sum);
    }

    result.pop_back(); // there is a +1 offset in the indexes of the result
    std::reverse(result.begin(), result.end());
    result.push_back(0); // pad the vector to have the same length as `v`

    return result;
}

// The resulting vector contains the indexes elected
static std::vector<size_t> getIndexesImpl(std::vector<double> const& v,
                                          std::vector<double> const& ps,
                                          double r)
{
    std::vector<size_t> indexes;

    for (size_t i = 0, max = v.size(); i != max; ++i) {
        if (r >= v[i]) {
            r -= v[i];
            indexes.push_back(i);
            continue;
        }

        // We favor the closest to 0 in case of equality
        // which is the sum of the tail as per the theorem above.
        if (std::fabs(r - v[i]) < std::fabs(r - ps[i])) {
            indexes.push_back(i);
            return indexes;
        }
    }

    return indexes;
}

std::vector<size_t> getIndexes(std::vector<double>& v, double r) {
    std::vector<double> const ps = partialSums(v);
    return getIndexesImpl(v, ps, r);
}
The code runs (with some debug output) at ideone. Note that for 0.3 it gives:
0.3:
1: 0.25
3: 0.0625
=> 0.3125
which is slightly different from the other answers.
At the risk of downvotes, this problem seems to be rather straightforward. Just start with the largest and smallest numbers you can produce out of V, adjust each index in turn until you have the two possible closest answers. Then evaluate which one is the better answer.
Here is untested code (in a language that I don't write):
void printIndexes(std::vector<int>& ind);

void getIndexes(std::vector<double>& V, double r)
{
    double v_lower = 0;
    double v_upper = 1.0 - std::pow(0.5, V.size());
    std::vector<int> index_lower;
    std::vector<int> index_upper;

    if (v_upper <= r)
    {
        // The answer is trivial.
        for (int i = 0; i < V.size(); i++)
            cout << i;
        return;
    }

    for (int i = 0; i < V.size(); i++)
    {
        if (v_lower + V[i] <= r)
        {
            v_lower += V[i];
            index_lower.push_back(i);
        }

        if (r <= v_upper - V[i])
            v_upper -= V[i];
        else
            index_upper.push_back(i);
    }

    if (r - v_lower < v_upper - r)
        printIndexes(index_lower);
    else if (v_upper - r < r - v_lower)
        printIndexes(index_upper);
    else if (index_upper.size() < index_lower.size())
        printIndexes(index_upper);
    else
        printIndexes(index_lower);
}

void printIndexes(std::vector<int>& ind)
{
    for (int i = 0; i < ind.size(); i++)
    {
        cout << ind[i];
    }
}
Did I get the job! :D
(Please note, this is horrible code that relies on our knowing exactly what V has in it...)
I will start by saying that I do believe that this problem is trivial...
(waits until all stones have been thrown)
Yes, I did read the OP's edit that says that I have to re-read the question if I think so. Therefore I might be missing something that I fail to see - in this case please excuse my ignorance and feel free to point out my mistakes.
I don't see this as a dynamic programming problem. At the risk of sounding naive, why not try keeping two estimations of r while searching for indices - namely an under-estimation and an over-estimation. After all, if r does not equal any sum that can be computed from elements of V, it will lie between some two sums of the kind. Our goal is to find these sums and to report which is closer to r.
I threw together some quick-and-dirty Python code that does the job. The answer it reports is correct for the two test cases that the OP provided. Note that the return is structured such that at least one index is always returned, even if the best estimation is no indices at all.
def estimate(V, r):
    lb = 0 # under-estimation (lower-bound)
    lbList = []

    ub = 1 - 0.5**len(V) # over-estimation = sum of all elements of V
    ubList = range(len(V))

    # calculate closest under-estimation and over-estimation
    for i in range(len(V)):
        if r == lb + V[i]:
            return (lbList + [i], lb + V[i])
        elif r == ub:
            return (ubList, ub)
        elif r > lb + V[i]:
            lb += V[i]
            lbList += [i]
        elif lb + V[i] < ub:
            ub = lb + V[i]
            ubList = lbList + [i]

    return (ubList, ub) if ub - r < r - lb else (lbList, lb) if lb != 0 else ([len(V) - 1], V[len(V) - 1])

# populate V
N = 5 # number of elements
V = []
for i in range(1, N + 1):
    V += [0.5**i]

# test
r = 0.484375 # this value is equidistant from both under- and over-estimation
print "r:", r
estimate = estimate(V, r)
print "Indices:", estimate[0]
print "Estimate:", estimate[1]
Note: after finishing writing my answer I noticed that this answer follows the same logic. Alas!
I don't know if you have test cases; try the code below. It is a dynamic-programming approach.
1] exp: given 1/2^i, find the largest i as exp. Eg. 1/32 returns 5.
2] max: 10^exp where exp=i.
3] create an array of size max+1 to hold all possible sums of the elements of V.
Actually the array holds the indexes, since that's what you want.
4] dynamically compute the sums (all invalids remain null)
5] the last while loop finds the nearest correct answer.
Here is the code:
import java.util.ArrayList;
import java.util.List;

public class Subset {
public static List<Integer> subsetSum(double[] V, double r) {
int exp = exponent(V);
int max = (int) Math.pow(10, exp);
//list to hold all possible sums of the elements in V
List<Integer> indexes[] = new ArrayList[max + 1];
indexes[0] = new ArrayList();//base case
//dynamically compute the sums
for (int x=0; x<V.length; x++) {
int u = (int) (max*V[x]);
for(int i=max; i>=u; i--) if(null != indexes[i-u]) {
List<Integer> tmp = new ArrayList<Integer>(indexes[i - u]);
tmp.add(x);
indexes[i] = tmp;
}
}
//find the best answer
int i = (int)(max*r);
int j=i;
while(null == indexes[i] && null == indexes[j]) {
i--;j++;
}
return indexes[i]==null || indexes[i].isEmpty()?indexes[j]:indexes[i];
}// subsetSum
private static int exponent(double[] V) {
double d = V[V.length-1];
int i = (int) (1/d);
String s = Integer.toString(i,2);
return s.length()-1;
}// summation
public static void main(String[] args) {
double[] V = {1/2.,1/4.,1/8.,1/16.,1/32.};
double r = 0.6, s=0.3,t=0.256652;
System.out.println(subsetSum(V,r));//[0, 3, 4]
System.out.println(subsetSum(V,s));//[1, 3]
System.out.println(subsetSum(V,t));//[1]
}
}// class
Here are results of running the code:
For 0.600000 get 0.593750 => [0, 3, 4]
For 0.300000 get 0.312500 => [1, 3]
For 0.256652 get 0.250000 => [1]
For 0.700000 get 0.687500 => [0, 2, 3]
For 0.710000 get 0.718750 => [0, 2, 3, 4]
This solution implements a polynomial-time approximate algorithm. The output of the program is the same as the outputs of the other solutions.
#include <math.h>
#include <stdio.h>

#include <vector>
#include <algorithm>
#include <functional>

void populate(std::vector<double> &vec, int count)
{
    double val = .5;
    vec.clear();
    for (int i = 0; i < count; i++) {
        vec.push_back(val);
        val *= .5;
    }
}

void remove_values_with_large_error(const std::vector<double> &vec, std::vector<double> &res, double r, double max_error)
{
    std::vector<double>::const_iterator iter;
    double min_err, err;

    min_err = 1.0;
    for (iter = vec.begin(); iter != vec.end(); ++iter) {
        err = fabs(*iter - r);
        if (err < max_error) {
            res.push_back(*iter);
        }
        min_err = std::min(err, min_err);
    }
}

void find_partial_sums(const std::vector<double> &vec, std::vector<double> &res, double r)
{
    std::vector<double> svec, tvec, uvec;
    std::vector<double>::const_iterator iter;
    int step = 0;

    svec.push_back(0.);
    for (iter = vec.begin(); iter != vec.end(); ++iter) {
        step++;
        printf("step %d, svec.size() %d\n", step, (int)svec.size());

        tvec.clear();
        std::transform(svec.begin(), svec.end(), back_inserter(tvec),
            std::bind2nd(std::plus<double>(), *iter));

        uvec.clear();
        uvec.insert(uvec.end(), svec.begin(), svec.end());
        uvec.insert(uvec.end(), tvec.begin(), tvec.end());

        sort(uvec.begin(), uvec.end());
        uvec.erase(unique(uvec.begin(), uvec.end()), uvec.end());

        svec.clear();
        remove_values_with_large_error(uvec, svec, r, *iter * 4);
    }

    sort(svec.begin(), svec.end());
    svec.erase(unique(svec.begin(), svec.end()), svec.end());

    res.clear();
    res.insert(res.end(), svec.begin(), svec.end());
}

double find_closest_value(const std::vector<double> &sums, double r)
{
    std::vector<double>::const_iterator iter;
    double min_err, res, err;

    min_err = fabs(sums.front() - r);
    res = sums.front();

    for (iter = sums.begin(); iter != sums.end(); ++iter) {
        err = fabs(*iter - r);
        if (err < min_err) {
            min_err = err;
            res = *iter;
        }
    }
    printf("found value %lf with err %lf\n", res, min_err);
    return res;
}

void print_indexes(const std::vector<double> &vec, double value)
{
    std::vector<double>::const_iterator iter;
    int index = 0;

    printf("indexes: [");
    for (iter = vec.begin(); iter != vec.end(); ++iter, ++index) {
        if (value >= *iter) {
            printf("%d, ", index);
            value -= *iter;
        }
    }
    printf("]\n");
}

int main(int argc, char **argv)
{
    std::vector<double> vec, sums;
    double r = .7;
    int n = 5;
    double value;

    populate(vec, n);
    find_partial_sums(vec, sums, r);
    value = find_closest_value(sums, r);
    print_indexes(vec, value);

    return 0;
}
Sort the vector and search for the closest fraction available to r. Store that index, subtract the value from r, and repeat with the remainder of r. Iterate until r is reached, or no such index can be found.
Example:
0.3 - the biggest value available would be 0.25 (index 2). The remainder is now 0.05.
0.05 - the biggest value available would be 0.03125 - the remainder will be 0.01875.
etc.
Every step is an O(log N) search in a sorted array. The number of steps will also be O(log N), so the total complexity will be O(log^2 N).
This is not a dynamic programming question.
The output should rather be a vector of ints (indexes), not a vector of doubles.
This might be off by 0-2 in exact values; this is just the concept:
A) Output index zero until r0 (r minus the index values already output) is bigger than 1/2.
B) Inspect the internal representation of the double r0:
    x (1st bit shift) = -Exponent; // the bigger the exponent, the smaller the numbers (the bigger the x in 1/2^(x) you begin with)
    Inspect the bit representation of the fraction part of the float in a cycle with body
    (direction depends on little/big endian):
    {
        if (bit is 1)
            output index x;
        x++;
    }
Complexity of each step is constant, so overall it is O(n) where n is the size of the output.
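A portable sketch of the same idea in C++, using frexp instead of poking at the raw bit pattern (frexp splits r into m * 2^e with 0.5 <= m < 1, so the bits of m are exactly the bits to output; like the steps above, this truncates rather than rounds):

#include <cmath>
#include <cstdio>

// Hypothetical helper: prints the indexes i (V[i] = 1/2^(i+1)) present in the
// truncated binary expansion of r, for 0 < r < 1.
void printBitsOfR(double r, int N)
{
    int e;
    double m = std::frexp(r, &e);   // r == m * 2^e, with 0.5 <= m < 1
    int x = -e;                     // index of the first possible 1-bit
    for (int i = x; i < N; i++) {
        m *= 2.0;
        if (m >= 1.0) {             // bit i is set, i.e. 1/2^(i+1) is used
            std::printf("%d ", i);
            m -= 1.0;
        }
    }
    std::printf("\n");
}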
To paraphrase the question, what are the one bits in the binary representation of r (after the binary point)? N is the 'precision', if you like.
In Cish pseudo-code
for (int i=0; i<N; i++) {
    if (r>V[i]) {
        print(i);
        r -= V[i];
    }
}
You could add an extra test for r == 0 to terminate the loop early.
Note that this gives the least binary number closest to 'r', i.e. the one closer to zero if there are two equally 'right' answers.
If the Nth digit was a one, you'll need to add '1' to the 'binary' number obtained and check both against the original 'r'. (Hint: construct vectors a[N], b[N] of 'bits', set '1' bits instead of 'print'ing above. Set b = a and do a manual add, digit by digit from the end of 'b' until you stop carrying. Convert to double and choose whichever is closer.)
Note that a[] <= r <= a[] + 1/2^N and that b[] = a[] + 1/2^N.
The 'least number of indexes [sic]' is a red-herring.
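A sketch of that hint in C++ (hypothetical helper names; a[] is the truncated expansion, b[] adds 1/2^N via a manual ripple-carry add, and the closer of the two is returned, with a winning ties since it is the one closer to zero):

#include <cmath>
#include <vector>

// A 1 at position i of the result means "index i is in the output".
std::vector<int> closestBits(double r, int N)
{
    std::vector<int> a(N, 0);
    double rest = r;
    for (int i = 0; i < N; i++) {                 // truncated binary expansion of r
        double f = std::ldexp(1.0, -(i + 1));     // 1/2^(i+1), i.e. V[i]
        if (rest >= f) { a[i] = 1; rest -= f; }
    }

    std::vector<int> b = a;                       // b = a + 1/2^N, manual ripple-carry add
    for (int i = N - 1; i >= 0; i--) {
        if (b[i] == 0) { b[i] = 1; break; }
        b[i] = 0;
    }

    auto value = [N](const std::vector<int>& bits) {
        double s = 0;
        for (int i = 0; i < N; i++)
            if (bits[i]) s += std::ldexp(1.0, -(i + 1));
        return s;
    };

    double errA = std::fabs(r - value(a));
    double errB = std::fabs(value(b) - r);
    return (errB < errA) ? b : a;                 // on a tie, a (closer to zero) wins
}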