Difference between ldexp(1, x) and exp2(x) - c++

It seems that, if the floating-point representation has radix 2 (i.e. FLT_RADIX == 2), both std::ldexp(1, x) and std::exp2(x) raise 2 to the given power x.
Does the standard define or mention any expected behavioral and/or performance difference between them? What is the practical experience over different compilers?

exp2(x) and ldexp(x,i) perform two different operations. The former computes 2^x, where x is a floating-point number, while the latter computes x * 2^i, where i is an integer. For integer values of x, exp2(x) and ldexp(1, int(x)) would be equivalent, provided the conversion of x to integer doesn't overflow.
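To make the distinction concrete, here is a minimal sketch (my own illustration, assuming FLT_RADIX == 2):

#include <cmath>
#include <cstdio>

int main() {
    // exp2 takes a floating-point exponent; ldexp scales an arbitrary
    // floating-point value by an integer power of 2.
    std::printf("%g\n", std::exp2(3.0));     // 2**3       = 8
    std::printf("%g\n", std::ldexp(1.0, 3)); // 1.0 * 2**3 = 8, same result for integer exponents
    std::printf("%g\n", std::ldexp(5.0, 3)); // 5.0 * 2**3 = 40, no direct exp2 equivalent
    std::printf("%g\n", std::exp2(0.5));     // 2**0.5 ~= 1.41, not expressible as ldexp(1, i)
}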
The question about the relative efficiency of these two functions doesn't have a clear-cut answer. It will depend on the capabilities of the hardware platform and the details of the library implementation. While conceptually, ldexpf() looks like simple manipulation of the exponent part of a floating-point operand, it is actually a bit more complicated than that, once one considers overflow and gradual underflow via denormals. The latter case involves the rounding of the significand (mantissa) part of the floating-point number.
As ldexp() is generally an infrequently used function, it is, in my experience, fairly common that math library writers apply less optimization effort to it than to other math functions.
On some platforms, ldexp(), or a faster (custom) version of it, will be used as a building block in the software implementation of exp2(). The following code provides an exemplary implementation of this approach for float arguments:
#include <cmath>

/* Compute exponential base 2. Maximum ulp error = 0.86770 */
float my_exp2f (float a)
{
    const float cvt = 12582912.0f;      // 0x1.8p23
    const float large = 1.70141184e38f; // 0x1.0p127
    float f, r;
    int i;

    // exp2(a) = exp2(i + f); i = rint (a)
    r = (a + cvt) - cvt;
    f = a - r;
    i = (int)r;
    // approximate exp2(f) on interval [-0.5,+0.5]
    r =             1.53720379e-4f;  // 0x1.426000p-13f
    r = fmaf (r, f, 1.33903872e-3f); // 0x1.5f055ep-10f
    r = fmaf (r, f, 9.61817801e-3f); // 0x1.3b2b20p-07f
    r = fmaf (r, f, 5.55036031e-2f); // 0x1.c6af7ep-05f
    r = fmaf (r, f, 2.40226522e-1f); // 0x1.ebfbe2p-03f
    r = fmaf (r, f, 6.93147182e-1f); // 0x1.62e430p-01f
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+00f
    // exp2(a) = 2**i * exp2(f);
    r = ldexpf (r, i);
    if (!(fabsf (a) < 150.0f)) {
        r = a + a; // handle NaNs
        if (a < 0.0f) r = 0.0f;
        if (a > 0.0f) r = large * large; // + INF
    }
    return r;
}
Most real-life implementations of exp2() do not invoke ldexp(), but a custom version of it, for example when fast bit-wise transfer between integer and floating-point data is supported. Here the bit transfers are represented by the internal functions __float_as_int() and __int_as_float(), which re-interpret an IEEE-754 binary32 as an int32 and vice versa:
/* For a in [0.5, 4), compute a * 2**i, -250 < i < 250 */
float fast_ldexpf (float a, int i)
{
    int ia = (i << 23) + __float_as_int (a); // scale by 2**i
    a = __int_as_float (ia);
    if ((unsigned int)(i + 125) > 250) { // |i| > 125
        i = (i ^ (125 << 23)) - i; // ((i < 0) ? -125 : 125) << 23
        a = __int_as_float (ia - i); // scale by 2**(+/-125)
        a = a * __int_as_float ((127 << 23) + i); // scale by 2**(+/-(i%125))
    }
    return a;
}
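The bit-transfer helpers themselves are not standard functions. As a sketch, assuming IEEE-754 binary32 float and a 32-bit int, they could be defined portably via memcpy:

#include <cstring>

// Hypothetical portable definitions of the helpers used above
static inline int __float_as_int (float a) {
    int i;
    std::memcpy (&i, &a, sizeof i); // reinterpret the bits without aliasing UB
    return i;
}
static inline float __int_as_float (int i) {
    float a;
    std::memcpy (&a, &i, sizeof a);
    return a;
}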
On other platforms, the hardware provides a single-precision version of exp2() as a fast hardware instruction. Internal to the processor these are typically implemented by a table lookup with linear or quadratic interpolation. On such hardware platforms, ldexp(float) may be implemented in terms of exp2(float), for example:
#include <cmath>
#include <cstdlib>

float my_ldexpf (float x, int i)
{
    float r, fi, fh, fq, t;
    fi = (float)i;
    /* NaN, Inf, zero require argument pass-through per ISO standard */
    if (!(fabsf (x) <= 3.40282347e+38f) || (x == 0.0f) || (i == 0)) {
        r = x;
    } else if (abs (i) <= 126) {
        r = x * exp2f (fi);
    } else if (abs (i) <= 252) {
        fh = (float)(i / 2);
        r = x * exp2f (fh) * exp2f (fi - fh);
    } else {
        fq = (float)(i / 4);
        t = exp2f (fq);
        r = x * t * t * t * exp2f (fi - 3.0f * fq);
    }
    return r;
}
Lastly, there are platforms that basically provide both exp2() and ldexp() functionality in hardware, such as the x87 instructions F2XM1 and FSCALE on x86 processors.

Related

how to wrap radians between -pi and pi with mod? [duplicate]

I'm looking for some nice C code that will accomplish effectively:
while (deltaPhase >= M_PI) deltaPhase -= M_TWOPI;
while (deltaPhase < -M_PI) deltaPhase += M_TWOPI;
What are my options?
Edit Apr 19, 2013:
Modulo function updated to handle boundary cases as noted by aka.nice and arr_sea:
#include <cmath>
#include <limits>

static const double _PI= 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348;
static const double _TWO_PI= 6.2831853071795864769252867665590057683943387987502116419498891846156328125724179972560696;

// Floating-point modulo
// The result (the remainder) has same sign as the divisor.
// Similar to matlab's mod(); Not similar to fmod() - Mod(-3,4)= 1   fmod(-3,4)= -3
template<typename T>
T Mod(T x, T y)
{
    static_assert(!std::numeric_limits<T>::is_exact, "Mod: floating-point type expected");

    if (0. == y)
        return x;

    double m= x - y * floor(x/y);

    // handle boundary cases resulting from floating-point cut-off:
    if (y > 0)              // modulo range: [0..y)
    {
        if (m>=y)           // Mod(-1e-16, 360.): m= 360.
            return 0;

        if (m<0)
        {
            if (y+m == y)
                return 0;   // just in case...
            else
                return y+m; // Mod(106.81415022205296, _TWO_PI): m= -1.421e-14
        }
    }
    else                    // modulo range: (y..0]
    {
        if (m<=y)           // Mod(1e-16, -360.): m= -360.
            return 0;

        if (m>0)
        {
            if (y+m == y)
                return 0;   // just in case...
            else
                return y+m; // Mod(-106.81415022205296, -_TWO_PI): m= 1.421e-14
        }
    }

    return m;
}

// wrap [rad] angle to [-PI..PI)
inline double WrapPosNegPI(double fAng)
{
    return Mod(fAng + _PI, _TWO_PI) - _PI;
}

// wrap [rad] angle to [0..TWO_PI)
inline double WrapTwoPI(double fAng)
{
    return Mod(fAng, _TWO_PI);
}

// wrap [deg] angle to [-180..180)
inline double WrapPosNeg180(double fAng)
{
    return Mod(fAng + 180., 360.) - 180.;
}

// wrap [deg] angle to [0..360)
inline double Wrap360(double fAng)
{
    return Mod(fAng, 360.);
}
One-liner constant-time solution:
Okay, it's a two-liner if you count the second function for [min,max) form, but close enough — you could merge them together anyways.
#include <math.h>

/* change to `float/fmodf` or `long double/fmodl` or `int/%` as appropriate */

/* wrap x -> [0,max) */
double wrapMax(double x, double max)
{
    /* integer math: `(max + x % max) % max` */
    return fmod(max + fmod(x, max), max);
}

/* wrap x -> [min,max) */
double wrapMinMax(double x, double min, double max)
{
    return min + wrapMax(x - min, max - min);
}
Then you can simply use deltaPhase = wrapMinMax(deltaPhase, -M_PI, +M_PI).
The solution is constant-time: the time it takes does not depend on how far your value is from [-PI,+PI), for better or for worse.
Verification:
Now, I don't expect you to take my word for it, so here are some examples, including boundary conditions. I'm using integers for clarity, but it works much the same with fmod() and floats:
Positive x:
wrapMax(3, 5) == 3: (5 + 3 % 5) % 5 == (5 + 3) % 5 == 8 % 5 == 3
wrapMax(6, 5) == 1: (5 + 6 % 5) % 5 == (5 + 1) % 5 == 6 % 5 == 1
Negative x:
Note: These assume that integer modulo copies left-hand sign; if not, you get the above ("Positive") case.
wrapMax(-3, 5) == 2: (5 + (-3) % 5) % 5 == (5 - 3) % 5 == 2 % 5 == 2
wrapMax(-6, 5) == 4: (5 + (-6) % 5) % 5 == (5 - 1) % 5 == 4 % 5 == 4
Boundaries:
wrapMax(0, 5) == 0: (5 + 0 % 5) % 5 == (5 + 0) % 5 == 5 % 5 == 0
wrapMax(5, 5) == 0: (5 + 5 % 5) % 5 == (5 + 0) % 5 == 5 % 5 == 0
wrapMax(-5, 5) == 0: (5 + (-5) % 5) % 5 == (5 + 0) % 5 == 5 % 5 == 0
Note: Possibly -0 instead of +0 for floating-point.
The wrapMinMax function works much the same: wrapping x to [min,max) is the same as wrapping x - min to [0,max-min), and then (re-)adding min to the result.
I don't know what would happen with a negative max, but feel free to check that yourself!
If ever your input angle can reach arbitrarily high values, and if continuity matters, you can also try
atan2(sin(x),cos(x))
This will preserve continuity of sin(x) and cos(x) better than modulo for high values of x, especially in single precision (float).
Indeed, exact_value_of_pi - double_precision_approximation ~= 1.22e-16
On the other hand, most library/hardware implementations use a high-precision approximation of PI for applying the modulo when evaluating trigonometric functions (though the x86 family is known to use a rather poor one).
The result might be in [-pi,pi]; you'll have to check the exact bounds.
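A minimal sketch of this approach (the function name is mine):

#include <cmath>

// Wrap to approximately [-pi, pi] by reconstructing the angle from its
// sine and cosine; robust for very large x, at the cost of three libm calls.
double wrap_atan2 (double x) {
    return std::atan2 (std::sin (x), std::cos (x));
}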
Personally, I would prevent any angle from reaching several revolutions by wrapping systematically, and stick to an fmod solution like the one in Boost.
There is also the fmod function in math.h, but the sign causes trouble, so that a subsequent operation is needed to make the result fit in the proper range (like you already do with the whiles). For big values of deltaPhase this is probably faster than subtracting/adding M_TWOPI hundreds of times.
deltaPhase = fmod(deltaPhase, M_TWOPI);
EDIT:
I didn't try it intensively but I think you can use fmod this way by handling positive and negative values differently:
if (deltaPhase > 0)
    deltaPhase = fmod(deltaPhase + M_PI, 2.0 * M_PI) - M_PI;
else
    deltaPhase = fmod(deltaPhase - M_PI, 2.0 * M_PI) + M_PI;
The computational time is constant (unlike the while solution, which gets slower as the absolute value of deltaPhase increases).
I would do this:
double wrap(double x) {
    return x - 2 * M_PI * floor(x / (2 * M_PI) + 0.5);
}
There will be significant numerical errors. The best solution to the numerical errors is to store your phase scaled by 1/PI or by 1/(2*PI) and, depending on what you are doing, store them as fixed point.
Instead of working in radians, use angles scaled by 1/(2π) and use modf, floor etc. Convert back to radians to use library functions.
This also has the effect that rotating ten thousand and a half revolutions is the same as rotating half then ten thousand revolutions, which is not guaranteed if your angles are in radians, as you have an exact representation in the floating point value rather than summing approximate representations:
#include <iostream>
#include <cmath>

float wrap_rads ( float r )
{
    while ( r > M_PI ) {
        r -= 2 * M_PI;
    }
    while ( r <= -M_PI ) {
        r += 2 * M_PI;
    }
    return r;
}

float wrap_grads ( float r )
{
    float i;
    r = modff ( r, &i );
    if ( r > 0.5 ) r -= 1;
    if ( r <= -0.5 ) r += 1;
    return r;
}

int main ()
{
    for ( int rotations = 1; rotations < 100000; rotations *= 10 ) {
        {
            float pi = ( float ) M_PI;
            float two_pi = 2 * pi;
            float a = pi;
            a += rotations * two_pi;
            std::cout << rotations << " and a half rotations in radians " << a << " => " << wrap_rads ( a ) / two_pi << '\n';
        }
        {
            float pi = ( float ) 0.5;
            float two_pi = 2 * pi;
            float a = pi;
            a += rotations * two_pi;
            std::cout << rotations << " and a half rotations in grads " << a << " => " << wrap_grads ( a ) / two_pi << '\n';
        }
        std::cout << '\n';
    }
}
Here is a version for other people finding this question that can use C++ with Boost:
#include <boost/math/constants/constants.hpp>
#include <boost/math/special_functions/sign.hpp>

template<typename T>
inline T normalizeRadiansPiToMinusPi(T rad)
{
    // copy the sign of the value in radians to the value of pi
    T signedPI = boost::math::copysign(boost::math::constants::pi<T>(), rad);
    // set the value of rad to the appropriate signed value between pi and -pi
    rad = fmod(rad + signedPI, (2 * boost::math::constants::pi<T>())) - signedPI;
    return rad;
}
C++11 version, no Boost dependency:
#include <cmath>

// Bring the 'difference' between two angles into [-pi; pi].
template <typename T>
T normalizeRadiansPiToMinusPi(T rad) {
    // Copy the sign of the value in radians to the value of pi.
    T signed_pi = std::copysign(M_PI, rad);
    // Set the value of difference to the appropriate signed value between pi and -pi.
    rad = std::fmod(rad + signed_pi, (2 * M_PI)) - signed_pi;
    return rad;
}
I encountered this question when searching for how to wrap a floating-point value (or a double) between two arbitrary numbers. It didn't answer specifically for my case, so I worked out my own solution, which can be seen here. This will take a given value and wrap it between lowerBound and upperBound, where upperBound perfectly meets lowerBound such that they are equivalent (i.e. 360 degrees == 0 degrees, so 360 would wrap to 0).
Hopefully this answer is helpful to others stumbling across this question looking for a more generic bounding solution.
#include <cmath>
#include <utility>

double boundBetween(double val, double lowerBound, double upperBound){
    if (lowerBound > upperBound) { std::swap(lowerBound, upperBound); }
    val -= lowerBound; //adjust to 0
    double rangeSize = upperBound - lowerBound;
    if (rangeSize == 0) { return upperBound; } //avoid dividing by 0
    return val - (rangeSize * std::floor(val/rangeSize)) + lowerBound;
}
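Hypothetical usage, with the results worked out by hand:

double a = boundBetween(370.0, 0.0, 360.0); // 10.0
double b = boundBetween(-10.0, 0.0, 360.0); // 350.0
double c = boundBetween(360.0, 0.0, 360.0); // 0.0 (360 wraps to 0)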
A related question for integers is available here:
Clean, efficient algorithm for wrapping integers in C++
A two-liner, non-iterative, tested solution for normalizing arbitrary angles to [-π, π):
double normalizeAngle(double angle)
{
    double a = fmod(angle + M_PI, 2 * M_PI);
    return a >= 0 ? (a - M_PI) : (a + M_PI);
}
Similarly, for [0, 2π):
double normalizeAngle(double angle)
{
    double a = fmod(angle, 2 * M_PI);
    return a >= 0 ? a : (a + 2 * M_PI);
}
In the case where fmod() is implemented through truncated division, so that the result has the same sign as the dividend, this can be taken advantage of to solve the general problem as follows:
For the case of (-PI, PI]:
if (x > 0) x = x - 2PI * ceil(x/2PI) #Shift to the negative regime
return fmod(x - PI, 2PI) + PI
And for the case of [-PI, PI):
if (x < 0) x = x - 2PI * floor(x/2PI) #Shift to the positive regime
return fmod(x + PI, 2PI) - PI
[Note that this is pseudocode; my original was written in Tcl, and I didn't want to torture everyone with that. I needed the first case, so had to figure this out.]
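A direct C++ rendering of the two cases might look like this (a sketch; the function names are mine, and it assumes fmod() truncates toward zero, as C99/C++11 require):

#include <cmath>

// wrap to (-PI, PI]
double wrapPiInclusive(double x) {
    const double TWO_PI = 2.0 * M_PI;
    if (x > 0) x -= TWO_PI * std::ceil(x / TWO_PI);  // shift to the negative regime
    return std::fmod(x - M_PI, TWO_PI) + M_PI;
}

// wrap to [-PI, PI)
double wrapPiExclusive(double x) {
    const double TWO_PI = 2.0 * M_PI;
    if (x < 0) x -= TWO_PI * std::floor(x / TWO_PI); // shift to the positive regime
    return std::fmod(x + M_PI, TWO_PI) - M_PI;
}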
deltaPhase -= floor(deltaPhase/M_TWOPI)*M_TWOPI;
The way you suggested is best. It is fastest for small deflections. If angles in your program are constantly being deflected into the proper range, then you should run into big out-of-range values only rarely. Therefore paying the cost of complicated modular-arithmetic code on every round seems wasteful. Comparisons are cheap compared to modular arithmetic (http://embeddedgurus.com/stack-overflow/2011/02/efficient-c-tip-13-use-the-modulus-operator-with-caution/).
In C99:
float unwindRadians( float radians )
{
    const bool radiansNeedUnwinding = radians < -M_PI || M_PI <= radians;

    if ( radiansNeedUnwinding )
    {
        if ( signbit( radians ) )
        {
            radians = -fmodf( -radians + M_PI, 2.f * M_PI ) + M_PI;
        }
        else
        {
            radians = fmodf( radians + M_PI, 2.f * M_PI ) - M_PI;
        }
    }

    return radians;
}
If linking against glibc's libm (including newlib's implementation) you can access
__ieee754_rem_pio2f() and __ieee754_rem_pio2() private functions:
extern __int32_t __ieee754_rem_pio2f (float, float*);

float wrapToPI(float xf){
    const float p[4] = {0, M_PI_2, M_PI, -M_PI_2};
    float yf[2];
    int q;
    int qmod4;

    q = __ieee754_rem_pio2f(xf, yf);
    /* xf = q * M_PI_2 + yf[0] + yf[1];
     * yf[1] << yf[0], not sure if it could be ignored */
    qmod4 = q & 3; /* bit-mask rather than %, so negative q also indexes p[] correctly */
    if (qmod4 == 2)
        /* (yf[0] > 0) defines interval (-pi,pi] */
        return ( (yf[0] > 0) ? -p[2] : p[2] ) + yf[0] + yf[1];
    else
        return p[qmod4] + yf[0] + yf[1];
}
Edit: Just realised that you need to link against libm.a; I couldn't find the symbols declared in libm.so.
I have used (in python):
def WrapAngle(Wrapped, UnWrapped):
    TWOPI = math.pi * 2
    TWOPIINV = 1.0 / TWOPI
    return UnWrapped + round((Wrapped - UnWrapped) * TWOPIINV) * TWOPI
c-code equivalent:
#define TWOPI 6.28318531

double WrapAngle(const double dWrapped, const double dUnWrapped )
{
    const double TWOPIINV = 1.0 / TWOPI;
    return dUnWrapped + round((dWrapped - dUnWrapped) * TWOPIINV) * TWOPI;
}
Notice that this brings the result into the wrapped domain of +/- 2pi, so for the +/- pi domain you need to handle that afterwards, like:
if angle > math.pi:
    angle -= 2 * math.pi

Exact value of a floating-point number as a rational

I'm looking for a method to convert the exact value of a floating-point number to a rational quotient of two integers, i.e. a / b, where b is not larger than a specified maximum denominator b_max. If satisfying the condition b <= b_max is impossible, then the result falls back to the best approximation which still satisfies the condition.
Hold on. There are a lot of questions/answers here about the best rational approximation of a truncated real number which is represented as a floating-point number. However I'm interested in the exact value of a floating-point number, which is itself a rational number with a different representation. More specifically, the mathematical set of floating-point numbers is a subset of rational numbers. In case of IEEE 754 binary floating-point standard it is a subset of dyadic rationals. Anyway, any floating-point number can be converted to a rational quotient of two finite precision integers as a / b.
So, for example assuming IEEE 754 single-precision binary floating-point format, the rational equivalent of float f = 1.0f / 3.0f is not 1 / 3, but 11184811 / 33554432. This is the exact value of f, which is a number from the mathematical set of IEEE 754 single-precision binary floating-point numbers.
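For instance, the exact rational can be read off with frexp(); a minimal sketch, assuming IEEE-754 binary32 (the fraction is not reduced to lowest terms):

#include <cmath>
#include <cstdio>

int main() {
    float f = 1.0f / 3.0f;
    int expo;
    float m = std::frexp(f, &expo);             // f = m * 2**expo, with m in [0.5, 1)
    long long num = (long long)(m * (1 << 24)); // 24-bit significand as an integer
    long long den = 1LL << (24 - expo);         // so that f = num / 2**(24 - expo)
    std::printf("%lld / %lld\n", num, den);     // prints: 11184811 / 33554432
}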
Based on my experience, traversing (by binary search) the Stern-Brocot tree is not useful here, since that is more suitable for approximating the value of a floating-point number when it is interpreted as a truncated real instead of an exact rational.
Possibly, continued fractions are the way to go.
Another problem here is integer overflow. Suppose we want to represent the rational as the quotient of two int32_t values, where the maximum denominator is b_max = INT32_MAX. We cannot rely on a stopping criterion like b > b_max. So the algorithm must never overflow, or it must detect overflow.
What I found so far is an algorithm from Rosetta Code, which is based on continued fractions, but its source mentions it is "still not quite complete". Some basic tests gave good results, but I cannot confirm its overall correctness and I think it can easily overflow.
// https://rosettacode.org/wiki/Convert_decimal_number_to_rational#C
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <stdint.h>

/* f : number to convert.
 * num, denom: returned parts of the rational.
 * md: max denominator value. Note that machine floating point number
 * has a finite resolution (10e-16 ish for 64 bit double), so specifying
 * a "best match with minimal error" is often wrong, because one can
 * always just retrieve the significand and return that divided by
 * 2**52, which is in a sense accurate, but generally not very useful:
 * 1.0/7.0 would be "2573485501354569/18014398509481984", for example.
 */
void rat_approx(double f, int64_t md, int64_t *num, int64_t *denom)
{
    /* a: continued fraction coefficients. */
    int64_t a, h[3] = { 0, 1, 0 }, k[3] = { 1, 0, 0 };
    int64_t x, d, n = 1;
    int i, neg = 0;

    if (md <= 1) { *denom = 1; *num = (int64_t) f; return; }

    if (f < 0) { neg = 1; f = -f; }

    while (f != floor(f)) { n <<= 1; f *= 2; }
    d = f;

    /* continued fraction and check denominator each step */
    for (i = 0; i < 64; i++) {
        a = n ? d / n : 0;
        if (i && !a) break;

        x = d; d = n; n = x % n;

        x = a;
        if (k[1] * a + k[0] >= md) {
            x = (md - k[0]) / k[1];
            if (x * 2 >= a || k[1] >= md)
                i = 65;
            else
                break;
        }

        h[2] = x * h[1] + h[0]; h[0] = h[1]; h[1] = h[2];
        k[2] = x * k[1] + k[0]; k[0] = k[1]; k[1] = k[2];
    }
    *denom = k[1];
    *num = neg ? -h[1] : h[1];
}
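Hypothetical usage, reproducing the example from the question:

int64_t num, den;
rat_approx((double)(1.0f / 3.0f), 1000000000, &num, &den);
/* expected: num = 11184811, den = 33554432, the exact value of 1.0f/3.0f */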
All finite doubles are rational numbers, as OP well stated.
Use frexp() to break the number into its fraction and exponent. The end result still needs to use double to represent whole-number values, due to range requirements. Some numbers are too small (x smaller than 1.0/pow(2.0, DBL_MAX_EXP)), and infinity and not-a-number are issues.
The frexp functions break a floating-point number into a normalized fraction and an integral power of 2. ... interval [1/2, 1) or zero ...
C11 §7.12.6.4 2/3
#include <math.h>
#include <float.h>
#include <stdio.h>

_Static_assert(FLT_RADIX == 2, "TBD code for non-binary FP");

// Return error flag
int split(double x, double *numerator, double *denominator) {
    if (!isfinite(x)) {
        *numerator = *denominator = 0.0;
        if (x > 0.0) *numerator = 1.0;
        if (x < 0.0) *numerator = -1.0;
        return 1;
    }
    int bdigits = DBL_MANT_DIG;
    int expo;
    *denominator = 1.0;
    *numerator = frexp(x, &expo) * pow(2.0, bdigits);
    expo -= bdigits;
    if (expo > 0) {
        *numerator *= pow(2.0, expo);
    }
    else if (expo < 0) {
        expo = -expo;
        if (expo >= DBL_MAX_EXP - 1) {
            *numerator /= pow(2.0, expo - (DBL_MAX_EXP - 1));
            *denominator *= pow(2.0, DBL_MAX_EXP - 1);
            return fabs(*numerator) < 1.0;
        } else {
            *denominator *= pow(2.0, expo);
        }
    }
    while (*numerator && fmod(*numerator, 2) == 0 && fmod(*denominator, 2) == 0) {
        *numerator /= 2.0;
        *denominator /= 2.0;
    }
    return 0;
}

void split_test(double x) {
    double numerator, denominator;
    int err = split(x, &numerator, &denominator);
    printf("e:%d x:%24.17g n:%24.17g d:%24.17g q:%24.17g\n",
            err, x, numerator, denominator, numerator / denominator);
}

int main(void) {
    volatile float third = 1.0f / 3.0f;
    split_test(third);
    split_test(0.0);
    split_test(0.5);
    split_test(1.0);
    split_test(2.0);
    split_test(1.0 / 7);
    split_test(DBL_TRUE_MIN);
    split_test(DBL_MIN);
    split_test(DBL_MAX);
    return 0;
}
Output
e:0 x: 0.3333333432674408 n: 11184811 d: 33554432 q: 0.3333333432674408
e:0 x: 0 n: 0 d: 9007199254740992 q: 0
e:0 x: 0.5 n: 1 d: 2 q: 0.5
e:0 x: 1 n: 1 d: 1 q: 1
e:0 x: 2 n: 2 d: 1 q: 2
e:0 x: 0.14285714285714285 n: 2573485501354569 d: 18014398509481984 q: 0.14285714285714285
e:1 x: 4.9406564584124654e-324 n: 4.4408920985006262e-16 d: 8.9884656743115795e+307 q: 4.9406564584124654e-324
e:0 x: 2.2250738585072014e-308 n: 2 d: 8.9884656743115795e+307 q: 2.2250738585072014e-308
e:0 x: 1.7976931348623157e+308 n: 1.7976931348623157e+308 d: 1 q: 1.7976931348623157e+308
Leave the b_max consideration for later.
More expedient code is possible by replacing pow(2.0, expo) with ldexp(1, expo) (@gammatester) or exp2(expo) (@Bob__).
while (*numerator && fmod(*numerator,2) == 0 && fmod(*denominator,2) == 0) could also use some performance improvements. But first, let us get the functionality as needed.
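As a sketch, the suggested substitution would change the scaling steps of split() like so (ldexp() and exp2() scale by a power of two directly, avoiding the general-purpose pow()):

*numerator = frexp(x, &expo) * ldexp(1.0, bdigits); /* was: * pow(2.0, bdigits) */
*denominator *= ldexp(1.0, expo);                   /* was: *= pow(2.0, expo)   */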

Division with negative dividend, but rounded towards negative infinity?

Consider the following code (in C++11):
int a = -11, b = 3;
int c = a / b; // now c == -3
The C++11 specification says that integer division with a negative dividend is rounded toward zero.
It would be quite useful for there to be an operator or function to do division with rounding toward negative infinity (e.g. for consistency with positive dividends when iterating a range), so is there a function or operator in the standard library that does what I want? Or perhaps a compiler-defined function/intrinsic that does it in modern compilers?
I could write my own, such as the following (works only for positive divisors):
int div_neg(int dividend, int divisor){
    if (dividend >= 0) return dividend / divisor;
    else return (dividend - divisor + 1) / divisor;
}
But it would not be as descriptive of my intent, and possibly not as optimized as a standard library function or compiler intrinsic (if one exists).
I'm not aware of any intrinsics for it. I would simply apply a correction to standard division retrospectively.
int div_floor(int a, int b)
{
    int res = a / b;
    int rem = a % b;
    // Correct division result downwards if up-rounding happened
    // (for non-zero remainder of sign different than the divisor).
    int corr = (rem != 0 && ((rem < 0) != (b < 0)));
    return res - corr;
}
Note it also works for pre-C99 and pre-C++11, i.e. without standardization of rounding division towards zero.
Here's another possible variant, valid for positive divisors and arbitrary dividends.
int div_floor(int n, int d) {
    return n >= 0 ? n / d : -1 - (-1 - n) / d;
}
Explanation: in the case of negative n, write q for (-1 - n) / d, then -1 - n = qd + r for some r satisfying 0 <= r < d. Rearranging gives n = (-1 - q)d + (d - 1 - r). It's clear that 0 <= d - 1 - r < d, so d - 1 - r is the remainder of the floor division operation, and -1 - q is the quotient.
Note that the arithmetic operations here are all safe from overflow, regardless of the internal representation of signed integers (two's complement, ones' complement, sign-magnitude).
Assuming two's complement representation for signed integers, a good compiler should optimise the two -1-* operations to bitwise negation operations. On my x86-64 machine, the second branch of the conditional gets compiled to the following sequence:
notl %edi
movl %edi, %eax
cltd
idivl %esi
notl %eax
The standard library has only one function that can be used to do what you want: floor. The division you're after can be expressed as floor((double) n / d). However, this assumes that double has enough precision to represent both n and d exactly. If not, then this may introduce rounding errors.
Personally, I'd go with a custom implementation. But you can use the floating point version too, if that's easier to read and you've verified that the results are correct for the ranges you're calling it for.
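A minimal sketch of the floating-point variant (the name is mine). For 32-bit int operands it is in fact exact: since |n| < 2**53, the rounded double quotient can never land on the wrong side of an integer boundary:

#include <cmath>

int div_floor_fp(int n, int d) {
    return (int) std::floor((double) n / d);
}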
C++11 has std::div(a, b), which returns both a % b and a / b in a struct with rem and quot members (so both remainder and quotient primitives), and for which modern processors have a single instruction. C++11 does truncated division.
To do floored division for both the remainder and the quotient, you can write:
// http://stackoverflow.com/a/4609795/819272
#include <cassert>
#include <cstdlib>

int signum(int n) noexcept
{
    return static_cast<int>(0 < n) - static_cast<int>(n < 0);
}

std::div_t floored_div(int D, int d) // Throws: Nothing.
{
    assert(d != 0);
    auto const divT = std::div(D, d);
    auto const I = signum(divT.rem) == -signum(d) ? 1 : 0;
    auto const qF = divT.quot - I;
    auto const rF = divT.rem + I * d;
    assert(D == d * qF + rF);
    assert(abs(rF) < abs(d));
    assert(signum(rF) == signum(d));
    return std::div_t{qF, rF};
}
Finally, it's handy to also have Euclidean division (for which the remainder is always non-negative) in your own library:
std::div_t euclidean_div(int D, int d) // Throws: Nothing.
{
    assert(d != 0);
    auto const divT = std::div(D, d);
    auto const I = divT.rem >= 0 ? 0 : (d > 0 ? 1 : -1);
    auto const qE = divT.quot - I;
    auto const rE = divT.rem + I * d;
    assert(D == d * qE + rE);
    assert(abs(rE) < abs(d));
    assert(signum(rE) != -1);
    return std::div_t{qE, rE};
}
There is a Microsoft research paper discussing the pros and cons of the 3 versions.
When the operands are both positive, the / operator does floored division.
When the operands are both negative, the / operator does floored division.
When exactly one of the operands is negative, the / operator does ceiling division.
For the last case, the quotient must be adjusted when exactly one operand is negative and there is a remainder (with no remainder, floored division and ceiling division give the same result).
int floored_div(int numer, int denom) {
    int div = numer / denom;
    int n_negatives = (numer < 0) + (denom < 0);
    div -= (n_negatives == 1) && (numer % denom != 0);
    return div;
}

Fast approximate float division

On modern processors, float division is a good order of magnitude slower than float multiplication (when measured by reciprocal throughput).
I'm wondering if there are any algorithms out there for computing a fast approximation to x/y, given certain assumptions and tolerance levels. For example, if you assume that 0 < x < y, and are willing to accept any output that is within 10% of the true value, are there algorithms faster than the built-in FDIV operation?
I hope that this helps, because this is probably as close as you're going to get to what you are looking for.
__inline__ double __attribute__((const)) divide( double y, double x ) {
    // calculates y/x
    union {
        double dbl;
        unsigned long long ull;
    } u;
    u.dbl = x;                                               // x = x
    u.ull = ( 0xbfcdd6a18f6a6f52ULL - u.ull ) >> (unsigned char)1;
                                                             // pow( x, -0.5 )
    u.dbl *= u.dbl;                                          // pow( pow(x,-0.5), 2 ) = pow( x, -1 ) = 1.0/x
    return u.dbl * y;                                        // (1.0/x) * y = y/x
}
See also:
Another post about reciprocal approximation.
The Wikipedia page.
FDIV is usually exceptionally slower than FMUL simply because it cannot be pipelined like multiplication and requires multiple clock cycles for the iterative-convergence process in hardware.
The easiest way is to simply recognize that division is nothing more than multiplication of the dividend y by the inverse of the divisor x. The not-so-straightforward part is remembering that a float value x = m * 2^e has the inverse x^-1 = (1/m) * 2^(-e) = (2/m) * 2^(-e-1) = p * 2^q, and approximating this new mantissa p = 2/m ~= 3 - m, for 1 <= m < 2. This gives a rough piecewise-linear approximation of the inverse function; however, we can do a lot better by using an iterative Newton root-finding method to improve that approximation.
Let w = f(x) = 1/x; the inverse of this function is found by solving for x in terms of w, i.e. x = f^(-1)(w) = 1/w. To improve the output with the root-finding method, we must first create a function whose zero reflects the desired output, i.e. g(w) = 1/w - x, with derivative g'(w) = -1/w^2. Newton's iteration then gives:
w[n+1]= w[n] - g(w[n])/g'(w[n]) = w[n] + w[n]^2 * (1/w[n] - x) = w[n] * (2 - x*w[n])
w[n+1] = w[n] * (2 - x*w[n]), when w[n]=1/x, w[n+1]=1/x*(2-x*1/x)=1/x
These components then add to get the final piece of code:
#include <stdint.h>

float inv_fast(float x) {
    union { float f; int i; } v;
    float w, sx;

    sx = (x < 0) ? -1 : 1;
    x = sx * x;

    v.i = (int)(0x7EF127EA - *(uint32_t *)&x);
    w = x * v.f;

    // Efficient iterative approximation improvement in Horner polynomial form.
    v.f = v.f * (2 - w);                                     // Single iteration, Err = -3.36e-3 * 2^(-flr(log2(x)))
    // v.f = v.f * (4 + w * (-6 + w * (4 - w)));             // Second iteration, Err = -1.13e-5 * 2^(-flr(log2(x)))
    // v.f = v.f * (8 + w * (-28 + w * (56 + w * (-70 + w * (56 + w * (-28 + w * (8 - w))))))); // Third iteration, Err = +-6.8e-8 * 2^(-flr(log2(x)))

    return v.f * sx;
}

Fast Inverse Square Root on x64

I found Fast Inverse Square Root on the net at http://en.wikipedia.org/wiki/Fast_inverse_square_root. Does it work properly on x64?
Has anyone used it and tested it seriously?
Originally Fast Inverse Square Root was written for a 32-bit float, so as long as you operate on IEEE-754 floating-point representation, there is no way the x64 architecture will affect the result.
Note that for "double" precision floating point (64-bit) you should use another constant:
...the "magic number" for 64 bit IEEE754 size type double ... was shown to be exactly 0x5fe6eb50c7b537a9
Here is an implementation for double precision floats:
#include <cstdint>

double invsqrtQuake( double number )
{
    double y = number;
    double x2 = y * 0.5;
    std::int64_t i = *(std::int64_t *) &y;
    // The magic number for doubles is from https://cs.uwaterloo.ca/~m32rober/rsqrt.pdf
    i = 0x5fe6eb50c7b537a9 - (i >> 1);
    y = *(double *) &i;
    y = y * (1.5 - (x2 * y * y));    // 1st iteration
    // y = y * (1.5 - (x2 * y * y)); // 2nd iteration, this can be removed
    return y;
}
I did a few tests and it seems to work fine.
Yes, it works if using the correct magic number and corresponding integer type. In addition to the answers above, here's a C++11 implementation that works for both double and float. Conditionals should optimise out at compile time.
#include <cstdint>
#include <type_traits>

template <typename T, char iterations = 2> inline T inv_sqrt(T x) {
    static_assert(std::is_floating_point<T>::value, "T must be floating point");
    static_assert(iterations == 1 or iterations == 2, "iterations must equal 1 or 2");
    typedef typename std::conditional<sizeof(T) == 8, std::int64_t, std::int32_t>::type Tint;
    T y = x;
    T x2 = y * 0.5;
    Tint i = *(Tint *)&y;
    i = (sizeof(T) == 8 ? 0x5fe6eb50c7b537a9 : 0x5f3759df) - (i >> 1);
    y = *(T *)&i;
    y = y * (1.5 - (x2 * y * y));
    if (iterations == 2)
        y = y * (1.5 - (x2 * y * y));
    return y;
}
As for testing, I use the following doctest in my project:
#ifdef DOCTEST_LIBRARY_INCLUDED
TEST_CASE_TEMPLATE("inv_sqrt", T, double, float) {
    std::vector<T> vals = {0.23, 3.3, 10.2, 100.45, 512.06};
    for (auto x : vals)
        CHECK(inv_sqrt<T>(x) == doctest::Approx(1.0 / std::sqrt(x)));
}
#endif