Range Reduction Poor Precision For Single Precision Floating Point - c++

I am trying to implement range reduction as the first step of implementing the sine function.
I am following the method described in the paper "ARGUMENT REDUCTION FOR HUGE ARGUMENTS" by K.C. Ng.
I am getting errors as large as 0.002339146 when using an input range of x from 0 to 20000. The error obviously shouldn't be that large, and I'm not sure how to reduce it. I noticed that the error magnitude grows with the magnitude of the input theta passed to cosine/sine.
I was able to obtain the nearpi.c code that the paper mentions, but I'm not sure how to use it for single-precision floating point. If anyone is interested, the nearpi.c file can be found at this link: nearpi.c
Here is my MATLAB code:
x = 0:0.1:20000;
% Perform range reduction
% Store constant 2/pi
twooverpi = single(2/pi);
% Compute y
y = (x.*twooverpi);
% Compute k (round to nearest integer)
k = round(y);
% Solve for f
f = single(y-k);
% Solve for r
r = single(f*single(pi/2));
% Find last two bits of k
n = bitand(fi(k,1,32,0),fi(3,1,32,0));
n = single(n);
% Preallocate for speed
z(length(x)) = 0;
for i = 1:length(x)
    switch(n(i))
        case 0
            z(i) = sin(r(i));
        case 1
            z(i) = single(cos(r(i)));
        case 2
            z(i) = -sin(r(i));
        case 3
            z(i) = single(-cos(r(i)));
        otherwise
    end
end
maxerror = max(abs(single(z - single(sin(single(x))))))
minerror = min(abs(single(z - single(sin(single(x))))))
I have edited the program nearpi.c so that it compiles. However, I am not sure how to interpret the output. The file also expects an input, which I had to enter by hand, and I am not sure of the significance of that input.
Here is the working nearpi.c:
/*
 ============================================================================
 Name        : nearpi.c
 Description : Finds machine-representable numbers near integer multiples
               of pi/2, for use in argument reduction.
 ============================================================================
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
/*
* Global macro definitions.
*/
# define hex( double ) *(1 + ((long *) &double)), *((long *) &double)
# define sgn(a) (a >= 0 ? 1 : -1)
# define MAX_k 2500
# define D 56
# define MAX_EXP 127
# define THRESHOLD 2.22e-16
/*
* Global Variables
*/
int CFlength, /* length of CF including terminator */
binade;
double e,
f; /* [e,f] is the range of D-bit unsigned ints of the form 1X...X */
// Function Prototypes
int dbleCF (double i[], double j[]);
void input (double i[]);
void nearPiOver2 (double i[]);
/*
* This is the start of the main program.
*/
int main (void)
{
int k; /* subscript variable */
double i[MAX_k],
j[MAX_k]; /* i and j are continued fractions (coeffs) */
// fp = fopen("/src/cfpi.txt", "r");
/*
* Compute global variables e and f, where
*
* e = 2 ^ (D-1), i.e. the D bit number 10...0
* and
* f = 2 ^ D - 1, i.e. the D bit number 11...1 .
*/
e = 1;
for (k = 2; k <= D; k = k + 1)
e = 2 * e;
f = 2 * e - 1;
/*
* Compute the continued fraction for (2/e)/(pi/2) , i.e.
* q's starting value for the first binade, given the continued
* fraction for pi as input; set the global variable CFlength
* to the length of the resulting continued fraction (including
* its negative valued terminator). One should use as many
* partial coefficients of pi as necessary to resolve numbers
* of the width of the underflow plus the overflow threshold.
* A rule of thumb is 0.97 partial coefficients are generated
* for every decimal digit of pi .
*
* Note: for radix B machines, subroutine input should compute
* the continued fraction for (B/e)/(pi/2) where e = B ^ (D - 1).
*/
input (i);
/*
* Begin main loop over all binades:
* For each binade, find the nearest multiples of pi/2 in that binade.
*
* [ Note: for hexadecimal machines ( B = 16 ), the rest of the main
* program simplifies(!) to
*
* B_ade = 1;
* while (B_ade < MAX_EXP)
* {
* dbleCF (i, j);
* dbleCF (j, i);
* dbleCF (i, j);
* CFlength = dbleCF (j, i);
* B_ade = B_ade + 1;
* }
*
* because the alternation of source & destination are no longer necessary. ]
*/
binade = 1;
while (binade < MAX_EXP)
{
/*
* For the current (odd) binade, find the nearest multiples of pi/2.
*/
nearPiOver2 (i);
/*
* Double the continued fraction to get to the next (even) binade.
* To save copying arrays, i and j will alternate as the source
* and destination for the continued fractions.
*/
CFlength = dbleCF (i, j);
binade = binade + 1;
/*
* Check for main loop termination again because of the
* alternation.
*/
if (binade >= MAX_EXP)
break;
/*
* For the current (even) binade, find the nearest multiples of pi/2.
*/
nearPiOver2 (j);
/*
* Double the continued fraction to get to the next (odd) binade.
*/
CFlength = dbleCF (j, i);
binade = binade + 1;
}
return 0;
} /* end of Main Program */
/*
* Subroutine DbleCF doubles a continued fraction whose partial
* coefficients are i[] into a continued fraction j[], where both
* arrays are of a type sufficient to do D-bit integer arithmetic.
*
* In my case ( D = 56 ) , I am forced to treat integers as double
* precision reals because my machine does not have integers of
* sufficient width to handle D-bit integer arithmetic.
*
* Adapted from a Basic program written by W. Kahan.
*
* Algorithm based on Hurwitz's method of doubling continued
* fractions (see Knuth Vol. 3, p.360).
*
* A negative value terminates the last partial quotient.
*
* Note: for the non-C programmers, the statement break
* exits a loop and the statement continue skips to the next
* case in the same loop.
*
* The call modf ( l / 2, &l0 ) assigns the integer portion of
* half of L to L0.
*/
int dbleCF (double i[], double j[])
{
double k,
l,
l0,
j0;
int n,
m;
n = 1;
m = 0;
j0 = i[0] + i[0];
l = i[n];
while (1)
{
if (l < 0)
{
j[m] = j0;
break;
};
modf (l / 2, &l0);
l = l - l0 - l0;
k = i[n + 1];
if (l0 > 0)
{
j[m] = j0;
j[m + 1] = l0;
j0 = 0;
m = m + 2;
};
if (l == 0) {
/*
* Even case.
*/
if (k < 0)
{
m = m - 1;
break;
}
else
{
j0 = j0 + k + k;
n = n + 2;
l = i[n];
continue;
};
}
/*
* Odd case.
*/
if (k < 0)
{
j[m] = j0 + 2;
break;
};
if (k == 0)
{
n = n + 2;
l = l + i[n];
continue;
};
j[m] = j0 + 1;
m = m + 1;
j0 = 1;
l = k - 1;
n = n + 1;
continue;
};
m = m + 1;
j[m] = -99999;
return (m);
}
/*
* Subroutine input computes the continued fraction for
* (2/e) / (pi/2) , where e = 2 ^ (D-1) , given pi 's
* continued fraction as input. That is, double the continued
* fraction of pi D-3 times and place a zero at the front.
*
* One should use as many partial coefficients of pi as
* necessary to resolve numbers of the width of the underflow
* plus the overflow threshold. A rule of thumb is 0.97
* partial coefficients are generated for every decimal digit
* of pi . The last coefficient of pi is terminated by a
* negative number.
*
* I'll be happy to supply anyone with the partial coefficients
* of pi . My ARPA address is mcdonald#ucbdali.BERKELEY.ARPA .
*
* I computed the partial coefficients of pi using a method of
* Bill Gosper's. I need only compute with integers, albeit
* large ones. After writing the program in bc and Vaxima ,
* Prof. Fateman suggested FranzLisp . To my surprise, FranzLisp
* ran the fastest! the reason? FranzLisp's Bignum package is
* hand coded in assembler. Also, FranzLisp can be compiled.
*
*
* Note: for radix B machines, subroutine input should compute
* the continued fraction for (B/e)/(pi/2) where e = B ^ (D - 1).
* In the case of hexadecimal ( B = 16 ), this is done by repeated
* doubling the appropriate number of times.
*/
void input (double i[])
{
int k;
double j[MAX_k];
/*
* Read in the partial coefficients of pi from a precalculated file
* until a negative value is encountered.
*/
k = -1;
do
{
k = k + 1;
scanf ("%lE", &i[k]);
printf("hello\n");
printf("%d", k);
} while (i[k] >= 0);
/*
* Double the continued fraction for pi D-3 times using
* i and j alternately as source and destination. On my
* machine D = 56 so D-3 is odd; hence the following code:
*
* Double twice (D-3)/2 times,
*/
for (k = 1; k <= (D - 3) / 2; k = k + 1)
{
dbleCF (i, j);
dbleCF (j, i);
};
/*
* then double once more.
*/
dbleCF (i, j);
/*
* Now append a zero on the front (reciprocate the continued
* fraction) and the return the coefficients in i .
*/
i[0] = 0;
k = -1;
do
{
k = k + 1;
i[k + 1] = j[k];
} while (j[k] >= 0);
/*
* Return the length of the continued fraction, including its
* terminator and initial zero, in the global variable CFlength.
*/
CFlength = k;
}
/*
* Given a continued fraction's coefficients in an array i ,
* subroutine nearPiOver2 finds all machine representable
* values near a integer multiple of pi/2 in the current binade.
*/
void nearPiOver2 (double i[])
{
int k, /* subscript for recurrences (see handout) */
K; /* like k , but used during cancellation elimination */
double p[MAX_k], /* product of the q's (see handout) */
q[MAX_k], /* successive tail evaluations of the CF (see handout) */
j[MAX_k], /* like convergent numerators (see handout) */
tmp, /* temporary used during cancellation elimination */
mk0, /* m[k - 1] (see handout) */
mk, /* m[k] is one of the few ints (see handout) */
mkAbs, /* absolute value of m sub k */
mK0, /* like mk0 , but used during cancellation elimination */
mK, /* like mk , but used during cancellation elimination */
z, /* the object of our quest (the argument) */
m0, /* the mantissa of z as a D-bit integer */
x; /* the reduced argument (see handout) */
/* ldexp, fabs, floor and ceil are declared in <math.h>. */
/*
* Compute the q's by evaluating the continued fraction from
* bottom up.
*
* Start evaluation with a big number in the terminator position.
*/
q[CFlength] = 1.0e+30;
for (k = CFlength - 1; k >= 0; k = k - 1)
q[k] = i[k] + 1 / q[k + 1];
/*
* Let THRESHOLD be the biggest | x | that we are interested in
* seeing.
*
* Compute the p's and j's by the recurrences from the top down.
*
* Stop when
*
*     1 / (2 |j[k]|)  >=  THRESHOLD  >  1 / (2 |j[k+1]|) .
*/
p[0] = 1;
j[0] = 0;
j[1] = 1;
k = 0;
do
{
p[k + 1] = -q[k + 1] * p[k];
if (k > 0)
j[1 + k] = j[k - 1] - i[k] * j[k];
k = k + 1;
} while (1 / (2 * fabs (j[k])) >= THRESHOLD);
/*
* Then mk runs through the integers between
*
*     (-1)^k * e / p[k] ± 1/2   and   (-1)^k * f / p[k] ± 1/2 .
*/
for (mkAbs = floor (e / fabs (p[k]));
mkAbs <= ceil (f / fabs (p[k])); mkAbs = mkAbs + 1)
{
mk = mkAbs * sgn (p[k]);
/*
* For each mk , mk0 runs through the integers between
*
*     mk * q[k]  ±  p[k] * THRESHOLD .
*/
for (mk0 = floor (mk * q[k] - fabs (p[k]) * THRESHOLD);
mk0 <= ceil (mk * q[k] + fabs (p[k]) * THRESHOLD);
mk0 = mk0 + 1)
{
/*
* For each pair { mk , mk0 } , check that
*
*     m0 = (-1)^k * ( j[k-1] * mk - j[k] * mk0 )
*/
m0 = (k & 1 ? -1 : 1) * (j[k - 1] * mk - j[k] * mk0);
/*
* lies between e and f .
*/
if (e <= fabs (m0) && fabs (m0) <= f)
{
/*
* If so, then we have found an
*
*     x = ( (-1)^k * m0 / p[k] - mk ) / j[k]
*
*       = ( mk * q[k] - mk0 ) / p[k] .
*
* But this latter formula can suffer cancellation. Therefore,
* run the recurrence for the mk's to get mK with minimal
* | mK | + | mK0 | in the hope mK is 0 .
*/
K = k;
mK = mk;
mK0 = mk0;
while (fabs (mK) > 0)
{
p[K + 1] = -q[K + 1] * p[K];
tmp = mK0 - i[K] * mK;
if (fabs (tmp) > fabs (mK0))
break;
mK0 = mK;
mK = tmp;
K = K + 1;
};
/*
* Then
*
*     x = ( mK * q[K] - mK0 ) / p[K]
*
* as accurately as one could hope.
*/
x = (mK * q[K] - mK0) / p[K];
/*
* To return z and m0 as positive numbers,
* x must take the sign of m0 .
*/
x = x * sgn (m0);
m0 = fabs (m0);
/*
* Set z = m0 * 2 ^ (binade+1-D) .
*/
z = ldexp (m0, binade + 1 - D);
/*
* Print z (hex), z (dec), m0 (dec), binade+1-D, x (hex), x (dec).
*/
printf ("%08lx %08lx Z=%22.16E M=%17.17G L+1-%d=%3d %08lx %08lx x=%23.16E\n", hex (z), z, m0, D, binade + 1 - D, hex (x), x);
}
}
}
}

Theory
First, let's note the difference that using single-precision arithmetic makes.
[Equation 8] The minimal value of f can be larger. As the double-precision numbers are a superset of the single-precision numbers, the closest single to a multiple of 2/pi can only be farther away than ~2.98e-19; therefore the number of leading zeros in the fixed-point representation of f is at most 61 (but will probably be fewer). Denote this quantity fdigits.
[Equation Before 9] Consequently, instead of 121 bits, y must be accurate to fdigits + 24 (the number of significant bits in single precision) + 7 (extra guard bits) = fdigits + 31 bits, which is at most 92.
[Equation 9] Therefore, together with the width of x's exponent, 2/pi must contain 127 (the maximal exponent of single) + 31 + fdigits bits, i.e. 158 + fdigits, and at most 219 bits.
[Subsection 2.5] The size of A is determined by the number of zeros in x before the binary point (and is unaffected by the move to single), while the size of C is determined by Equation Before 9.
For large x (x>=2^24), x looks like this: [24 bits, M zeros]. Multiplying it by A, whose size is the first M bits of 2/pi, will result in an integer (the zeros of x will just shift everything into the integers).
Choosing C to start from the (M+d)-th bit of 2/pi will make the product x*C of size at most d-24. In double precision, d is chosen to be 174 (and instead of 24 we have 53) so that the product is of size at most 121. In single, it is enough to choose d such that d-24 <= 92, or more precisely d-24 <= fdigits+31. That is, d can be chosen as fdigits+55, or at most 116.
As a result, B should be of size at most 116 bits.
We are therefore left with three problems:
Computing fdigits. This involves reading ref 6 from the linked paper and understanding it. Might not be that easy. :) As far as I can see, that's the only place where nearpi.c is used.
Computing B, the relevant bits of 2/pi. Since M is bounded above by 127, we can just compute the first 127+116 bits of 2/pi offline and store them in an array. See Wikipedia.
Computing y=x*B. This involves multiplying x by a 116-bit number. This is where Section 3 is used. The size of the blocks is chosen to be 24 because 2*24 + 2 (multiplying two 24-bit numbers and adding 3 such numbers) is smaller than the precision of double, 53 (and because 24 divides 96). We can use blocks of size 11 bits for single arithmetic for similar reasons.
Note - the trick with B only applies to numbers whose exponents are positive (x>=2^24).
To summarize - first, you have to solve the problem in double precision. Your Matlab code doesn't work in double precision either (try removing single and computing sin(2^53)), because your twooverpi has only 53 significant bits, not 175 (and in any case, you can't directly multiply such precise numbers in Matlab). Second, the scheme should be adapted to work with single, and again the key problem is representing 2/pi precisely enough and supporting multiplication of highly precise numbers. Last, when everything works, you can try to figure out a better fdigits to reduce the number of bits you have to store and multiply.
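Below is a sketch of mine (not from the paper) of that first step: let double precision carry the extra bits of the reduction for float arguments. The constants are correctly rounded doubles; the structure mirrors the Matlab code above. For huge arguments the 53-bit product eventually loses the bits that matter, which is exactly where the paper's 2/pi bit-extraction becomes necessary.
#include <cmath>
#include <cstdint>

// Sketch: range reduction for a float argument carried out in double.
float sinf_range_reduced(float x) {
    const double twooverpi = 0.63661977236758134;  // 2/pi rounded to double
    double y = (double)x * twooverpi;              // y = x * 2/pi
    double k = std::nearbyint(y);                  // round to nearest integer
    double f = y - k;                              // |f| <= 1/2 by construction
    float r = (float)(f * 1.5707963267948966);     // r = f * pi/2
    switch ((std::int64_t)k & 3) {                 // last two bits of k
        case 0:  return  std::sin(r);
        case 1:  return  std::cos(r);
        case 2:  return -std::sin(r);
        default: return -std::cos(r);
    }
}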
Hopefully I'm not completely off - comments and contradictions are welcome.
Example
As an example, let us compute sin(x) where x = single(2^24-1), which has no zeros after the significant bits (M = 0). This simplifies finding B, as B consists of the first 116 bits of 2/pi. Since x has precision of 24 bits and B of 116 bits, the product
y = x * B
will have 92 bits of precision, as required.
Section 3 in the linked paper describes how to perform this product with enough precision; the same algorithm, with blocks of size 11, can be used to compute y in our case. Since this is drudgery, I hope I'm excused for not doing it explicitly, and instead relying on Matlab's Symbolic Math Toolbox. This toolbox provides the vpa function, which allows us to specify the precision of a number in decimal digits. So,
vpa('2/pi', ceil(116*log10(2)))
will produce an approximation of 2/pi with at least 116 bits of precision. Because vpa accepts only integers for its precision argument, we usually can't specify the binary precision of a number exactly, so we use the next-best decimal precision.
The following code computes sin(x) according to the paper, in single precision:
x = single(2^24-1);
y = x * vpa('2/pi', ceil(116*log10(2))); % Precision = 103.075
k = round(y);
f = single(y - k);
r = f * single(pi) / 2;
switch mod(k, 4)
case 0
s = sin(r);
case 1
s = cos(r);
case 2
s = -sin(r);
case 3
s = -cos(r);
end
sin(x) - s % Expected value: exactly zero.
(The precision of y is obtained using Mathematica, which turned out to be a much better numerical tool than Matlab :) )
In libm
The other answer to this question (which has since been deleted) led me to an implementation in libm which, although it works on double-precision numbers, follows the linked paper very thoroughly.
See the file s_sin.c for the wrapper (Table 2 from the linked paper appears as a switch statement at the end of the file), and e_rem_pio2.c for the argument-reduction code (of particular interest is an array containing the first 396 hex digits of 2/pi, starting at line 69).

Related

C++ and Eigen: How do I handle this 1x1 matrix?

Consider this excerpt:
for(int i = 0; i < 600*100*100; i++) {
( 1 / 2 * (1 - a) / a * x.transpose() * y * (z + (1 - a) *
z.transpose() * y(i) / z.sum() ) * x.transpose() * z );
}
In the code above, x, y, and z are objects of the class MatrixXd in Eigen, and a is a double. Through these multiplications, the outcome is eventually a scalar. The entire for loop took less than a second.
However, if I change my code:
for(int i = 0; i < 600*100*100; i++) {
F(i) = F(i) + ( 1 / 2 * (1 - a) / a * x.transpose() * y * (z + (1 - a) *
z.transpose() * y(i) / z.sum() ) * x.transpose() * z );
}
The for loop then takes 6 seconds. F is an ArrayXd. I'm trying to update each element of F through a loop, and in each iteration I do a series of simple matrix multiplications (which result in a scalar).
I'm not sure what's wrong. How can I speed it up? I tried using .noalias(), but that didn't help. This could have to do with the fact that the series of matrix multiplications results in a 1x1 MatrixXd, and Eigen has issues adding a MatrixXd to a number.
Update
Per @mars, I tried eval():
for(int i = 0; i < 600*100*100; i++) {
( 1 / 2 * (1 - a) / a * x.transpose() * y * (z + (1 - a) *
z.transpose() * y(i) / z.sum() ) * x.transpose() * z ).eval();
}
And it takes ~6 seconds as well. Does that mean there's no way to optimize?
Also, I used -O3 to compile.
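A guess at the mechanism, plus a sketch (mine, not from the thread): in the first loop the expression is never assigned anywhere, so Eigen's lazy evaluation lets the compiler discard most of the work; assigning to F(i) forces a full evaluation per iteration. Independently of that, extracting the 1x1 result as a plain double with .value() avoids mixing 1x1-matrix temporaries into scalar arithmetic:
#include <Eigen/Dense>
#include <iostream>
using Eigen::MatrixXd;

int main() {
    MatrixXd x = MatrixXd::Random(3, 1), y = MatrixXd::Random(3, 3);
    // x.transpose() * y * x is a 1x1 matrix expression, not a double.
    // .value() extracts the single coefficient, so everything after it
    // is plain double arithmetic.
    double s = (x.transpose() * y * x).value();
    std::cout << 0.5 * s + 1.0 << '\n';
    return 0;
}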

Using series to approximate log(2)

double k = 0;
int l = 1;
double digits = pow(0.1, 5);
do
{
k += (pow(-1, l - 1)/l);
l++;
} while((log(2)-k)>=digits);
I'm trying to write a little program based on an example I've seen that uses the series Σ (l = 1..∞) pow(-1, l - 1)/l to estimate log(2).
It's supposed to be a guess-refinement thing, where each iteration gets closer and closer to the right value until so many digits match.
The above is what I tried, but it's not coming out right. After messing with it for quite a while, I can't figure out where I'm messing up.
I assume that you are trying to estimate the natural logarithm of 2 by its Taylor series expansion:
ln(x) = Σ (n = 1..∞) (-1)^(n+1) / n * (x - 1)^n
One of the problems with your code is the condition chosen to stop the iterations at a specified precision:
do { ... } while((log(2)-k)>=digits);
Besides using log(2) directly (aren't you supposed to find it yourself instead of using a library function?), at the second iteration (and at every other even iteration) log(2) - k becomes negative (-0.3068...), ending the loop.
A possible (but not optimal) fix could be to use std::abs(log(2) - k) instead, or to end the loop when the absolute value of 1.0 / l (which is the difference between two consecutive iterations) is small enough.
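For illustration, here is the original loop with just that second fix applied (a sketch of mine; it still uses pow, which the next remark improves on):
#include <cmath>
#include <iostream>

int main()
{
    double k = 0;
    int l = 1;
    double digits = std::pow(0.1, 5);
    do
    {
        k += (std::pow(-1, l - 1) / l);
        l++;
    } while (1.0 / l >= digits); // the next term bounds the error of an alternating series
    std::cout << k << '\n';      // close to log(2) = 0.693147...
    return 0;
}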
Also, using pow(-1, l - 1) to calculate the sequence 1, -1, 1, -1, ... is really a waste, especially in a series with such a slow convergence rate.
A more efficient series (see here) is:
ln(x) = 2 Σ (n = 0..∞) 1 / (2n + 1) * ((x - 1) / (x + 1))^(2n+1)
You can estimate it without using pow:
double x = 2.0; // I want to calculate ln(2)
int n = 1;
double eps = 0.00001,
kpow = (x - 1.0) / (x + 1.0),
kpow2 = kpow * kpow,
dk,
k = 2 * kpow;
do {
n += 2;
kpow *= kpow2;
dk = 2 * kpow / n;
k += dk;
} while ( std::abs(dk) >= eps );

Difference between ldexp(1, x) and exp2(x)

It seems that if the floating-point representation has radix 2 (i.e. FLT_RADIX == 2), both std::ldexp(1, x) and std::exp2(x) raise 2 to the given power x.
Does the standard define or mention any expected behavioral and/or performance difference between them? What is the practical experience over different compilers?
exp2(x) and ldexp(x,i) perform two different operations. The former computes 2^x, where x is a floating-point number, while the latter computes x * 2^i, where i is an integer. For integer values of x, exp2(x) and ldexp(1, int(x)) would be equivalent, provided the conversion of x to integer doesn't overflow.
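The distinction is easy to see in a small demo (mine, not part of the original answer):
#include <cmath>
#include <cstdio>

int main() {
    // ldexp scales by an integer power of two; exp2 accepts any
    // floating-point exponent.
    std::printf("%g\n", std::ldexp(1.0, 10)); // 1 * 2^10 = 1024
    std::printf("%g\n", std::exp2(10.0));     // 2^10 = 1024, the same value
    std::printf("%g\n", std::exp2(0.5));      // 2^0.5 ~ 1.41421, not reachable as ldexp(1, i)
    return 0;
}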
The question about the relative efficiency of these two functions doesn't have a clear-cut answer. It will depend on the capabilities of the hardware platform and the details of the library implementation. While conceptually ldexpf() looks like a simple manipulation of the exponent part of a floating-point operand, it is actually a bit more complicated once one considers overflow and gradual underflow via denormals. The latter case involves rounding of the significand (mantissa) of the floating-point number.
As ldexp() is generally an infrequently used function, it is in my experience fairly common that less of an optimization effort is applied to it by math library writers than to other math functions.
On some platforms, ldexp(), or a faster (custom) version of it, will be used as a building block in the software implementation of exp2(). The following code provides an exemplary implementation of this approach for float arguments:
#include <cmath>
/* Compute exponential base 2. Maximum ulp error = 0.86770 */
float my_exp2f (float a)
{
const float cvt = 12582912.0f; // 0x1.8p23
const float large = 1.70141184e38f; // 0x1.0p127
float f, r;
int i;
// exp2(a) = exp2(i + f); i = rint (a)
r = (a + cvt) - cvt;
f = a - r;
i = (int)r;
// approximate exp2(f) on interval [-0.5,+0.5]
r = 1.53720379e-4f; // 0x1.426000p-13f
r = fmaf (r, f, 1.33903872e-3f); // 0x1.5f055ep-10f
r = fmaf (r, f, 9.61817801e-3f); // 0x1.3b2b20p-07f
r = fmaf (r, f, 5.55036031e-2f); // 0x1.c6af7ep-05f
r = fmaf (r, f, 2.40226522e-1f); // 0x1.ebfbe2p-03f
r = fmaf (r, f, 6.93147182e-1f); // 0x1.62e430p-01f
r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+00f
// exp2(a) = 2**i * exp2(f);
r = ldexpf (r, i);
if (!(fabsf (a) < 150.0f)) {
r = a + a; // handle NaNs
if (a < 0.0f) r = 0.0f;
if (a > 0.0f) r = large * large; // + INF
}
return r;
}
Most real-life implementations of exp2() do not invoke ldexp(), but rather a custom version, for example when fast bit-wise transfer between integer and floating-point data is supported. That is represented here by the internal functions __float_as_int() and __int_as_float(), which re-interpret an IEEE-754 binary32 as an int32 and vice versa:
/* For a in [0.5, 4), compute a * 2**i, -250 < i < 250 */
float fast_ldexpf (float a, int i)
{
int ia = (i << 23) + __float_as_int (a); // scale by 2**i
a = __int_as_float (ia);
if ((unsigned int)(i + 125) > 250) { // |i| > 125
i = (i ^ (125 << 23)) - i; // ((i < 0) ? -125 : 125) << 23
a = __int_as_float (ia - i); // scale by 2**(+/-125)
a = a * __int_as_float ((127 << 23) + i); // scale by 2**(+/-(i%125))
}
return a;
}
On other platforms, the hardware provides a single-precision version of exp2() as a fast hardware instruction. Internal to the processor these are typically implemented by a table lookup with linear or quadratic interpolation. On such hardware platforms, ldexp(float) may be implemented in terms of exp2(float), for example:
float my_ldexpf (float x, int i)
{
float r, fi, fh, fq, t;
fi = (float)i;
/* NaN, Inf, zero require argument pass-through per ISO standard */
if (!(fabsf (x) <= 3.40282347e+38f) || (x == 0.0f) || (i == 0)) {
r = x;
} else if (abs (i) <= 126) {
r = x * exp2f (fi);
} else if (abs (i) <= 252) {
fh = (float)(i / 2);
r = x * exp2f (fh) * exp2f (fi - fh);
} else {
fq = (float)(i / 4);
t = exp2f (fq);
r = x * t * t * t * exp2f (fi - 3.0f * fq);
}
return r;
}
Lastly, there are platforms that basically provide both exp2() and ldexp() functionality in hardware, such as the x87 instructions F2XM1 and FSCALE on x86 processors.

Fast approximate float division

On modern processors, float division is a good order of magnitude slower than float multiplication (when measured by reciprocal throughput).
I'm wondering if there are any algorithms out there for computing a fast approximation to x/y, given certain assumptions and tolerance levels. For example, if you assume that 0 < x < y, and are willing to accept any output that is within 10% of the true value, are there algorithms faster than the built-in FDIV operation?
I hope that this helps, because this is probably as close as you're going to get to what you are looking for.
__inline__ double __attribute__((const)) divide( double y, double x ) {
// calculates y/x
union {
double dbl;
unsigned long long ull;
} u;
u.dbl = x; // x = x
u.ull = ( 0xbfcdd6a18f6a6f52ULL - u.ull ) >> (unsigned char)1;
// pow( x, -0.5 )
u.dbl *= u.dbl; // pow( pow(x,-0.5), 2 ) = pow( x, -1 ) = 1.0/x
return u.dbl * y; // (1.0/x) * y = y/x
}
See also:
Another post about reciprocal approximation.
The Wikipedia page.
FDIV is usually much slower than FMUL simply because it can't be pipelined like multiplication and requires multiple clock cycles for the hardware's iterative convergence process.
The easiest way is to recognize that division is nothing more than multiplication of the dividend y by the inverse of the divisor x. The not-so-straightforward part is remembering that a float value x = m * 2^e has the inverse x^-1 = (1/m) * 2^(-e) = (2/m) * 2^(-e-1) = p * 2^q, and approximating the new mantissa p = 2/m ≈ 3 - m for 1 <= m < 2. This gives a rough piecewise-linear approximation of the inverse function; however, we can do a lot better by using the iterative Newton root-finding method to improve that approximation.
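To make that seed concrete, here is a small sketch of mine (not the answerer's code) building the piecewise-linear approximation with frexp/ldexp instead of bit tricks:
#include <cmath>
#include <cstdio>

// Approximate 1/x as (3 - m) * 2^(-e-1), where x = m * 2^e with 1 <= m < 2.
float seed_inverse(float x) {
    int e;
    float m = std::frexp(x, &e); // x = m * 2^e with 0.5 <= m < 1
    m *= 2.0f;                   // renormalize so that 1 <= m < 2 ...
    e -= 1;                      // ... adjusting the exponent to match
    return std::ldexp(3.0f - m, -e - 1);
}

int main() {
    const float xs[] = {1.0f, 1.5f, 3.0f, 10.0f};
    for (float x : xs)
        std::printf("x=%g seed=%g exact=%g\n", x, seed_inverse(x), 1.0f / x);
    return 0;
}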
Let w = f(x) = 1/x. The inverse of this function is found by solving for x in terms of w, i.e. x = f^(-1)(w) = 1/w. To improve the output with the root-finding method, we must first create a function whose zero reflects the desired output, i.e. g(w) = 1/w - x, with d/dw(g(w)) = -1/w^2.
w[n+1] = w[n] - g(w[n])/g'(w[n]) = w[n] + w[n]^2 * (1/w[n] - x) = w[n] * (2 - x*w[n])
Note the fixed point: when w[n] = 1/x, w[n+1] = (1/x) * (2 - x * (1/x)) = 1/x. For example, with x = 3 and initial guess w[0] = 0.3: w[1] = 0.3 * (2 - 0.9) = 0.33, then w[2] = 0.33 * (2 - 0.99) = 0.3333, roughly doubling the number of correct digits per iteration.
These components then combine into the final piece of code:
#include <cstdint> // for the uint32_t used in the bit-level reinterpretation

float inv_fast(float x) {
union { float f; int i; } v;
float w, sx;
int m;
sx = (x < 0) ? -1:1;
x = sx * x;
v.i = (int)(0x7EF127EA - *(uint32_t *)&x);
w = x * v.f;
// Efficient Iterative Approximation Improvement in horner polynomial form.
v.f = v.f * (2 - w); // Single iteration, Err = -3.36e-3 * 2^(-flr(log2(x)))
// v.f = v.f * ( 4 + w * (-6 + w * (4 - w))); // Second iteration, Err = -1.13e-5 * 2^(-flr(log2(x)))
// v.f = v.f * (8 + w * (-28 + w * (56 + w * (-70 + w *(56 + w * (-28 + w * (8 - w))))))); // Third Iteration, Err = +-6.8e-8 * 2^(-flr(log2(x)))
return v.f * sx;
}
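A quick test harness of mine (not the answerer's) comparing the single-iteration version against the exact reciprocal:
#include <cstdio>

int main() {
    const float xs[] = {0.5f, 1.0f, 3.0f, 100.0f, 10000.0f};
    for (float x : xs) {
        float approx = inv_fast(x);
        // relative error: (approx - 1/x) / (1/x) = (approx - 1/x) * x
        std::printf("x=%-8g inv_fast=%-12g exact=%-12g rel err=% .2e\n",
                    x, approx, 1.0f / x, (approx - 1.0f / x) * x);
    }
    return 0;
}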

Calculating the summation of powers of a number modulo a number

There are 3 numbers: T, N, M. 1 ≤ T, M ≤ 10^9, 1 ≤ N ≤ 10^18 .
What is asked in the problem is to compute [Σ(T^i)] mod M, where i varies from 0 to N. Obviously, O(N) or O(M) solutions wouldn't work because of the 1-second time limit. How should I proceed?
As pointed out in previous answers, you may use the formula for the sum of a geometric progression. However, there is a small problem - if m is not prime, computing (T^n - 1) / (T - 1) cannot be done directly, because the division is not a well-defined operation. In fact there is a solution that can handle even non-prime moduli, with complexity O(log(n) * log(n)). The approach is similar to binary exponentiation. Here is my code, written in C++ (note that my solution uses binary exponentiation internally):
typedef long long ll;
ll binary_exponent(ll x, ll y, ll mod) {
ll res = 1;
ll p = x;
while (y) {
if (y % 2) {
res = (res * p) % mod;
}
p = (p * p) % mod;
y /= 2;
}
return res;
}
// n is ll rather than int because N can be as large as 10^18.
ll gp_sum(ll a, ll n, ll mod) {
ll A = 1;
int num = 0;
ll res = 0;
ll degree = 1;
while (n) {
if (n & (1 << num)) {
n &= (~(1 << num));
res = (res + (A * binary_exponent(a, n, mod)) % mod) % mod;
}
A = (A + (A * binary_exponent(a, degree, mod)) % mod) % mod;
degree *= 2;
num++;
}
return res;
}
In this solution A successively stores the values 1, 1 + a, 1 + a + a^2 + a^3, ..., 1 + a + a^2 + ... + a^(2^n - 1).
Also, just like in binary exponentiation, to compute the sum of n powers of a I split n into a sum of powers of two (essentially using the binary representation of n). Having the above sequence of values for A, I choose the appropriate lengths (the ones that correspond to the 1 bits of the binary representation of n) and multiply each by an appropriate power of a, accumulating the result in res. Computing the values of A takes O(log(n)) time, and for each value I may have to compute a power of a, which costs another O(log(n)) - thus overall we have O(log(n) * log(n)).
Let's take an example - we want to compute 1 + a + a^2 .... + a ^ 10. In this case, we call gp_sum(a, 11, mod).
On the first iteration, n & (1LL << 0) is non-zero, as the lowest bit of 11 (1011 in binary) is 1. Thus I turn this bit off, setting n to 10, and accumulate into res: 0 + 1 * a^10 = a^10. A is now 1 + a.
The next bit is also set in 10 (1010 in binary), so n becomes 8 and res becomes a^10 + (1 + a) * a^8 = a^10 + a^9 + a^8. A is now 1 + a + a^2 + a^3.
The next bit is 0, so res stays the same, but A becomes 1 + a + a^2 + ... + a^7.
On the last iteration the bit is 1, so we have:
res = a^10 + a^9 + a^8 + a^0 * (1 + a + a^2 + ... + a^7) = 1 + a + ... + a^10.
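A quick numeric check of this walkthrough (a driver of mine, using the gp_sum above): with a = 2 and mod = 1000, the sum 1 + 2 + ... + 2^10 = 2^11 - 1 = 2047, so the call should print 47:
#include <cstdio>

int main() {
    printf("%lld\n", gp_sum(2, 11, 1000)); // 2047 mod 1000 = 47
    return 0;
}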
One can use an algorithm which is similar to binary exponentiation:
#include <utility>

// Returns a pair <t^n mod m, (t^0 + ... + t^n) mod m>.
// I assume that long long is big enough to hold all values without overflowing.
std::pair<long long, long long> calc(long long t, long long n, long long m)
{
    if (n == 0) // Base case. t^0 is always 1.
        return std::make_pair(1 % m, 1 % m);
    if (n % 2 == 1) {
        // We just compute the result for n - 1 and then add t^n.
        std::pair<long long, long long> prev = calc(t, n - 1, m);
        long long curPow = prev.first * t % m;
        long long curSum = (prev.second + curPow) % m;
        return std::make_pair(curPow, curSum);
    }
    // If n is even, we compute the sum for the first half and shift it up
    // by t^(n/2); t^(n/2) belongs to both halves, so it is subtracted once.
    std::pair<long long, long long> half = calc(t, n / 2, m);
    long long curPow = half.first * half.first % m; // t^n = (t^(n/2))^2
    long long curSum = (half.second * half.first + half.second - half.first + m) % m;
    return std::make_pair(curPow, curSum);
}
The time complexity is O(log n) (the analysis is the same as for the binary exponentiation algorithm). Why is it better than the closed-form formula for a geometric progression? The latter involves division by (t - 1), but it is not guaranteed that t - 1 has an inverse modulo m.
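For instance (a small driver of mine for the calc above): 3^0 + 3^1 + ... + 3^5 = 364, so modulo 100 the result is 64:
#include <cstdio>

int main() {
    std::pair<long long, long long> r = calc(3, 5, 100);
    std::printf("%lld\n", r.second); // prints 64
    return 0;
}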
You can use this:
a^1 + a^2 + ... + a^n = a * (1 - a^n) / (1 - a)
so you just need to calculate:
a * (1 - a^n) / (1 - a) mod M
and you can find an O(log N) way to compute a^n mod M.
It's a geometric series whose sum is equal to (T^(N+1) - 1) / (T - 1).