CGAL: convert quotient to double - c++

I am having a problem converting the result of the CGAL QP solver to a double:
typedef CGAL::Gmpzf ET;
...define a quadratic program qp here...
Solution s = CGAL::solve_quadratic_program(qp, ET());
assert (s.solves_quadratic_program(qp));
cout<<"QP objective = "<<s.objective_value()<<endl;
// The above returns a value of type CGAL::Quotient<ET>
// and I need to convert it to double
double n = s.objective_value_numerator().to_double();
double d = s.objective_value_denominator().to_double();
cout<<"QP objective 2 = "<<n/d<<endl;
I got:
QP objective = -2.57497e-22/2.01459e-22
QP objective 2 = -nan
I checked and observed that n = -inf and d = inf.
How do we properly convert a Quotient into double?
Thank you in advance for any suggestion!!

CGAL has a function CGAL::to_double that can be used on most number types and in particular on Quotient. It has special code exactly for this case where numerator and denominator would overflow. It does not have code for underflow, which cannot happen with a quotient of integers, but could happen with Gmpzf, yielding 0/0.
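For reference, a minimal sketch of what that looks like with the s from the question (assuming the QP solver setup above; CGAL::to_double on a Quotient is provided by the number-type utilities, e.g. <CGAL/number_utils.h>):
#include <CGAL/number_utils.h>  // CGAL::to_double (assumed include; adjust to your setup)

// objective_value() returns a CGAL::Quotient<ET>; CGAL::to_double scales
// numerator and denominator together, avoiding the overflow/underflow seen
// when converting them separately.
double obj = CGAL::to_double(s.objective_value());
std::cout << "QP objective = " << obj << std::endl;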

Related

How does Cpp work with large numbers in calculations?

I have code that numerically computes the integral of a function over a given interval using the trapezoidal rule (see the formula in Trapezoid method). For the function sin(x) on the interval [-pi/2.0, pi/2.0], the integral is expected to be zero.
In this case I take the number of partitions n equal to 4. The problem is that with pi to 20 decimal places the result is zero, with 14 decimal places it is 8.72e-17, with 11 decimal places it is zero, with 8 decimal places it is 8.72e-17, and with 3 decimal places it is zero. In other words, the integral is zero or nearly zero for the different approximations of pi, but there is no clear trend.
I would appreciate help understanding why this happens. (I ran it in Dev-C++.)
#include <iostream>
#include <math.h>
using namespace std;

#define pi 3.14159265358979323846
//Pi: 3.14159265358979323846

double func(double x){
    return sin(x);
}

int main() {
    double x0 = -pi/2.0, xf = pi/2.0;
    int n = 4;
    double delta_x = (xf-x0)/(n*1.0);
    double sum = (func(x0)+func(xf))/2.0;
    double integral;
    for (int k = 1; k<n; k++){
        // cout<<"func: "<<func(x0+(k*delta_x))<<" "<<"last sum: "<<sum<<endl;
        sum = sum + func(x0+(k*delta_x));
        // cout<<"func + last sum= "<<sum<<endl;
    }
    integral = delta_x*sum;
    cout<<"The value for the integral is: "<<integral<<endl;
    return 0;
}
OP is integrating y=sin(x) from -a to +a. The various tests use different values of a, all near pi/2.
The approach sums values starting near -1.0, passing through 0, and ending near +1.0.
This summation is sensitive to calculation error in the last terms, since the mathematically exact sum is 0.0. Because the start/end value a varies between tests, the error varies too.
A more stable result would be had by adding the extreme sin(x(k)) values first, e.g. sum += sin(x(k=1)), then sum += sin(x(k=3)), then sum += sin(x(k=2)), rather than k=1, 2, 3. In particular, the formation of the term x(k=3) is likely a bit off from the exact negative of the earlier x(k=1), further compounding the issue.
Welcome to the world of numerical analysis.
The problem would still exist if the code used all float or all long double, just to different degrees.
The problem is not due to using an inexact value of pi (an exact value is impossible in floating point, since pi is irrational and all finite floating-point values are rational).
Much of it is due to the formation of x. Try the below to form x symmetrically about 0.0, and compare the exact x generated this way to x formed the original way.
x = (x0 + xf)/2 + ((k - n/2)*delta_x)
Print out the exact values computed for deeper understanding.
printf("x:%a y:%a\n", x0+(k*delta_x), func(x0+(k*delta_x)));

OpenCL kernel float division gives different result

I have an OpenCL kernel for some computation. I found that only one thread gives a different result from the CPU code. I am using VS2010 x64 release mode.
By checking the OpenCL code with some examples, I found some interesting results. Here are the test cases in the kernel code.
I tested 3 cases in the OpenCL kernel; the precision is checked with printf("%.10f", fval);
case 1:
float fval = (10296184.0) / (float)(x*y*z); // which gives result fval = 3351.6225585938
float fval = (10296184.0f) / (float)(x*y*z); // which gives result fval = 3351.6225585938
The variables int x, y, z are computed by some earlier operations, and their values are x=12, y=16, z=16;
case 2:
float fval = (10296184.0) / (float)(12*16*16); // which gives result fval = 3351.6223144531
float fval = (10296184.0f) / (float)(12*16*16); // which gives result fval = 3351.6223144531
case 3:
However, when I compute the difference between the two expressions above, the result is 0 when using 10296184.0:
float fval = (10296184.0) / (float)(x*y*z) - (10296184.0) / (float)(12*16*16); // which gives result fval = 0.0000000000
float fval = (10296184.0f) / (float)(x*y*z) - (10296184.0f) / (float)(12*16*16); // which gives result fval = 0.0001812663
Could anyone explain the reason or give me some hints?
Some observations:
The two float values differ by 1 ULP. So the results differ by a minimum amount.
// Float ULP in the 2's place here
// v
0x1.a2f3ea0000000p+11 3351.622314... // OP's lower float value
0x1.a2f3eaaaaaaabp+11 3351.622395... // higher precision quotient
0x1.a2f3ec0000000p+11 3351.622558... // OP's higher float value
(10296184.0) / (float)(12*16*16) is calculated at compile time and is the closer result to the expected mathematical answer.
float fval = (10296184.0) / (float)(x*y*z) is calculated at run time.
Considering that float variables are being used, it is surprising that the code does this division with double math: a double constant divided by a double (the promotion of the float product) yields a double quotient, which is then converted to a float and saved. I'd expect 10296184.0f (note the f) to have been used, so the math could all have been done as floats.
C allows different rounding modes, denoted by FLT_ROUNDS. These may differ between compile time and run time, which may explain the difference. Knowing the result of fegetround() (the function gets the current rounding direction) would help.
OP may have employed various compiler optimizations that sacrifice precision for speed.
C does not specify the precision of math operations, yet results good to the last ULP should be expected from *, /, +, -, sqrt() and modf() on quality platforms. I suspect the code suffers from a weak math implementation.
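For reference, a small host-side sketch of the double-vs-float distinction made above (plain C++ rather than OpenCL; on a host with correctly rounded division both expressions typically agree, which is what makes the device's run-time result suspicious):
#include <cstdio>

int main() {
    int x = 12, y = 16, z = 16;

    // Double constant / (float product promoted to double): the division is
    // done in double, then the quotient is rounded a second time to float.
    float a = (10296184.0)  / (float)(x*y*z);

    // All-float math: a single rounding of the float quotient.
    float b = (10296184.0f) / (float)(x*y*z);

    // %a shows the exact bit patterns, as in the ULP comparison above.
    std::printf("%.10f  %a\n%.10f  %a\n", a, a, b, b);
    return 0;
}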

Floating point math rounding weird in C++ compared to mathematica

The following post is solved; the problem occurred because of a misinterpretation of the formula on http://www.cplusplus.com/reference/random/piecewise_constant_distribution/ . The reader is strongly encouraged to consult the page http://en.cppreference.com/w/cpp/numeric/random/piecewise_constant_distribution instead.
I have the following strange phenomenon which puzzles me:
I have a piecewise constant probability density given as
using RandomGenType = std::mt19937_64;
RandomGenType gen(51651651651);

using PREC = long double;
std::array<PREC,5> intervals {0.59, 0.7, 0.85, 1, 1.18};
std::array<PREC,4> weights {1.36814, 1.99139, 0.29116, 0.039562};

// integral over the pdf to normalize:
PREC normalization = 0;
for(unsigned int i = 0; i < 4; i++){
    normalization += weights[i]*(intervals[i+1]-intervals[i]);
}
std::cout << std::setprecision(30) << "Normalization: " << normalization << std::endl;

// normalize all weights (such that the integral gives 1)!
for(auto & w : weights){
    w /= normalization;
}

std::piecewise_constant_distribution<PREC>
    distribution (intervals.begin(), intervals.end(), weights.begin());
When I draw n random numbers (sphere radii in millimeters) from this distribution, compute the mass of each sphere and sum them up like:
unsigned int n = 1000000;
double density = 2400;
double mass = 0;
for(int i = 0; i < n; i++){
    auto d = 2*distribution(gen)*1e-3;
    mass += d*d*d/3.0*M_PI_2*density;
}
I get mass = 4.3283 kg (see LIVE here)
Doing the exact identical thing in Mathematica (see the code below)
gives the presumably correct value of 4.5287 kg. (see mathematica)
These values are not the same, and with different seeds C++ and Mathematica never match! Is that numeric inaccuracy? I doubt it is...
Question: What the heck is wrong with the sampling in C++?
Simple Mathematica Code:
pdf[r_] = 2*Piecewise[{{0, r < 0.59}, {1.36814, 0.59 <= r <= 0.7},
{1.99139, Inequality[0.7, Less, r, LessEqual, 0.85]},
{0.29116, Inequality[0.85, Less, r, LessEqual, 1]},
{0.039562, Inequality[1, Less, r, LessEqual, 1.18]},
{0, r > 1.18}}];
pdfr[r_] = pdf[r] / Integrate[pdf[r], {r, 0, 3}];(*normalize*)
Plot[pdf[r], {r, 0.4, 1.3}, Filling -> Axis]
PDFr = ProbabilityDistribution[pdfr[r], {r, 0, 1.18}];
(*if you put 1.18=2 then we dont get 4.52??*)
SeedRandom[100, Method -> "MersenneTwister"]
dataR = RandomVariate[PDFr, 1000000, WorkingPrecision -> MachinePrecision];
Fold[#1 + (2*#2*10^-3)^3 Pi/6 2400 &, 0, dataR]
(*Analytical Solution*)
PDFr = ProbabilityDistribution[pdfr[r], {r, 0, 3}];
1000000 Integrate[ 2400 (2 InverseCDF[PDFr, p] 10^-3)^3 Pi/6, {p, 0, 1}]
Update:
I did some analysis:
Read the numbers (64-bit doubles) generated by Mathematica into C++ -> calculated the sum and it gives the same result as Mathematica.
Mass computed by reduction: 4.52528010260687096888432279229
Read the numbers generated by C++ (64-bit doubles) into Mathematica -> calculated the sum and it gives the same 4.32402.
I am almost led to conclude that the sampling with std::piecewise_constant_distribution is inaccurate (or only as accurate as it gets with 64-bit floats), or that it has a bug... OR there is something wrong with my weights?
The densities are calculated wrongly by std::piecewise_constant_distribution in http://coliru.stacked-crooked.com/a/ca171bf600b5148f ===> It seems to be a bug!
Histogram plot of the C++-generated values compared to the wanted distribution:
file = NotebookDirectory[] <> "numbersCpp.bin";
dataCPP = BinaryReadList[file, "Real64"];
Hpdf = HistogramDistribution[dataCPP];
h = DiscretePlot[ PDF[ Hpdf, x], {x, 0.4, 1.2, 0.001},
PlotStyle -> Red];
Show[h, p, PlotRange -> All]
The file is generated here: Number generation CPP
It seems that the formula for the probabilities is written wrongly for std::piecewise_constant_distribution on
http://www.cplusplus.com/reference/random/piecewise_constant_distribution/
The summation of the weights is done without multiplying in the interval lengths!
The correct formula is given at:
http://en.cppreference.com/w/cpp/numeric/random/piecewise_constant_distribution
This resolves every quirk previously suspected to be a bug or floating-point error!
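In code, the fix implied by the cppreference formula looks roughly like this (a sketch reusing the intervals and densities from the question; the distribution computes density_i = w_i / (S * (b_{i+1} - b_i)) with S the sum of the weights, so the intended densities are recovered by passing w_i = density_i * (b_{i+1} - b_i)):
#include <array>
#include <cstddef>
#include <iostream>
#include <random>

int main() {
    using PREC = long double;
    std::array<PREC,5> intervals {0.59L, 0.7L, 0.85L, 1.0L, 1.18L};
    std::array<PREC,4> densities {1.36814L, 1.99139L, 0.29116L, 0.039562L};

    // Pre-multiply each density by its interval length before handing the
    // values to the distribution as weights.
    std::array<PREC,4> weights;
    for (std::size_t i = 0; i < weights.size(); ++i)
        weights[i] = densities[i] * (intervals[i+1] - intervals[i]);

    std::piecewise_constant_distribution<PREC>
        distribution(intervals.begin(), intervals.end(), weights.begin());

    // The distribution's own densities should now match the intended ones,
    // up to the overall normalization it applies internally.
    for (auto d : distribution.densities()) std::cout << d << ' ';
    std::cout << '\n';
}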
Mathematica may or may not use IEEE 754 floating point numbers. From the Wolfram documentation:
The Wolfram Language has sophisticated built-in automatic numerical precision and accuracy control. But for special-purpose optimization of numerical computations, or for studying numerical analysis, the Wolfram Language also allows detailed control over precision and accuracy.
and
The Wolfram Language handles both integers and real numbers with any number of digits, automatically tagging numerical precision when appropriate. The Wolfram Language internally uses several highly optimized number representations, but nevertheless provides a uniform interface for digit and precision manipulation, while allowing numerical analysts to study representation details when desired.

The result of my own double-precision cos() implementation in a shader is NaN, but it works well on the CPU. What is going wrong?

As I said, I want to implement my own double-precision cos() function in a compute shader with GLSL, because there is only a built-in version for float.
This is my code:
double faculty[41]; // values are calculated at the beginning of main()

double myCOS(double x)
{
    double sum, tempExp, sign;
    sum = 1.0;
    tempExp = 1.0;
    sign = -1.0;

    for(int i = 1; i <= 30; i++)
    {
        tempExp *= x;
        if(i % 2 == 0){
            sum = sum + (sign * (tempExp / faculty[i]));
            sign *= -1.0;
        }
    }
    return sum;
}
The result of this code is that the sum turns out to be NaN on the shader, but on the CPU the algorithm works well.
I tried to debug this code and got the following information:
faculty[i] is positive and not zero for all entries
tempExp is positive in each step
none of the other variables are NaN during each step
the first time sum is NaN is at the step with i=4
And now my question: what exactly can go wrong if each variable is a number and nothing is divided by zero, especially when the algorithm works on the CPU?
Let me guess:
First you determined the problem is in the loop, and you use only the following operations: +, *, /.
The rules for generating NaN from these operations are:
The divisions 0/0 and ±∞/±∞
The multiplications 0×±∞ and ±∞×0
The additions ∞ + (−∞), (−∞) + ∞ and equivalent subtractions
You ruled out the possibility for 0/0 and ±∞/±∞ by stating that faculty[] is correctly initialized.
The variable sign is always 1.0 or -1.0 so it cannot generate the NaN through the * operation.
What remains is the + operation, if tempExp ever becomes ±∞.
So probably x is too large on entry to your function and tempExp becomes ±∞, which makes sum ±∞ too. At the next iteration you trigger the NaN-generating operation ∞ + (−∞). This is because you multiply one side of the addition by sign, and sign switches between positive and negative at each iteration.
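A small C++ demo of that failure mode (plain C++, not GLSL; the absurdly large input and the divisor are just stand-ins for illustration): tempExp overflows to +inf, and the alternating-sign sum then performs inf + (-inf), which is NaN.
#include <cstdio>

int main() {
    double x = 1e80;                 // deliberately huge input
    double tempExp = 1.0, sum = 1.0, sign = -1.0;

    for (int i = 1; i <= 6; i++) {
        tempExp *= x;                // overflows to +inf after a few steps
        if (i % 2 == 0) {
            sum = sum + sign * (tempExp / 2.0);  // stand-in for faculty[i]
            sign *= -1.0;
            std::printf("i=%d  tempExp=%g  sum=%g\n", i, tempExp, sum);
        }
    }
    return 0;  // prints a finite sum, then inf, then nan
}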
You're trying to approximate cos(x) around 0.0, so you should use the properties of the cos() function to reduce your input to a value near 0.0, ideally in the range [0, pi/4]. For instance, remove multiples of 2*pi, and get the values of cos() on [pi/4, pi/2] by computing sin() around 0.0, and so on.
What can go dramatically wrong is a loss of precision. cos(x) is usually implemented by range reduction followed by a dedicated implementation for the range [0, pi/2]. Range reduction uses cos(x + 2*pi) = cos(x), but this reduction isn't perfect; for starters, pi cannot be represented exactly in finite floating-point math.
Now what happens if you try something as absurd as cos(1<<30)? It's quite possible that the range reduction algorithm introduces an error in x that's larger than 2*pi, in which case the outcome is meaningless. Returning NaN in such cases is reasonable.
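Building on both answers, here is a rough sketch of the range-reduction idea (plain C++, not GLSL, and only illustrative: my_cos is a made-up name, and an accurate reduction for very large |x| needs extended precision, e.g. Payne-Hanek):
#include <cmath>
#include <cstdio>

double my_cos(double x)
{
    const double two_pi = 6.28318530717958647692;

    // Naive range reduction: bring x into roughly [-pi, pi].
    x -= two_pi * std::round(x / two_pi);

    // The Taylor series around 0 is now safe: |x| <= ~pi, so the terms
    // shrink quickly and the running power never overflows.
    double sum = 1.0, term = 1.0;
    for (int n = 1; n <= 10; ++n) {
        term *= -x * x / ((2.0 * n - 1) * (2.0 * n));  // next even-power term
        sum += term;
    }
    return sum;
}

int main() {
    std::printf("%f %f\n", my_cos(100000.0), std::cos(100000.0));
    return 0;
}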

C++ double division by 0.0 versus DBL_MIN

When finding the inverse square root of a double, is it better to clamp invalid non-positive inputs to 0.0 or to DBL_MIN? (In my example below, double b may end up being negative due to floating-point rounding errors and because the laws of physics are slightly fudged in the game.)
Both clamping to 0.0 and clamping to DBL_MIN produce the same outcome in the game, because 1/0.0 and 1/DBL_MIN are effectively infinity. My intuition says DBL_MIN is the better choice, but would there be any case for using 0.0? Perhaps sqrt(0.0), 1/0.0 and multiplication by 1.#INF000000000000 execute faster because they are special cases.
double b = 1 - v.length_squared()/(c*c);
#ifdef CLAMP_BY_0
if (b < 0.0) b = 0.0;
#endif
#ifdef CLAMP_BY_DBL_MIN
if (b <= 0.0) b = DBL_MIN;
#endif
double lorentz_factor = 1/sqrt(b);
double division in MSVC:
1/0.0 = 1.#INF000000000000
1/DBL_MIN = 4.4942328371557898e+307
When dealing with floating point math, "infinity" and "effectively infinity" are quite different. Once a number stops being finite, it tends to stay that way. So while the value of lorentz_factor is "effectively" the same for both methods, depending on how you use that value, later computations can be radically different. sqrt(lorentz_factor) for instance remains infinite if you clamp to 0, but will actually be calculated if you clamp to some very very small number.
So the answer will largely depend on what you plan on doing with that value once you've clamped it.
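A tiny illustration of that point (a sketch using the same DBL_MIN clamp as the question):
#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    double lf_zero = 1.0 / std::sqrt(0.0);      // inf: 1/sqrt(0) -> 1/0 -> inf
    double lf_tiny = 1.0 / std::sqrt(DBL_MIN);  // large but finite (~6.7e153)

    // Later computations diverge: the infinity sticks around, while the
    // finite value keeps participating in ordinary arithmetic.
    std::printf("%g %g\n", std::sqrt(lf_zero), std::sqrt(lf_tiny));
    std::printf("%g %g\n", 1.0 / lf_zero,      1.0 / lf_tiny);
    return 0;
}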
Why not just assign INF to lorentz_factor directly, avoiding both the sqrt call and the division?
double lorentz_factor;
if (b <= 0.0)
    lorentz_factor = std::numeric_limits<double>::infinity();
else
    lorentz_factor = 1/sqrt(b);
You'll need to #include <limits> for this.
You can also use ::max() instead of ::infinity(), if that's what you need.