Quadratic roots show NaN - C++

I referred to this post for complex-number roots of a quadratic equation: Solve Quadratic Equation in C++.
So I wrote something similar in C++ using OpenCV and the standard library, but I always get NaN and don't know why.
cv::Vec3f coefficients(1, -1, 1);
cv::Vec<std::complex<float>, 2> result_manual = {{0, 0}, {0, 0}};
float c = coefficients(0);
float b = coefficients(1);
float a = coefficients(2);
std::cout << "---------manual method solving quadratic equation\n";
double delta = std::pow(b, 2) - 4 * a * c;
if (delta < 0) {
    result_manual[0].real(-b / (2 * a));
    result_manual[1].real(-b / (2 * a));
    result_manual[0].imag((float)std::sqrt(delta) / (2 * a));
    result_manual[1].imag((float)-std::sqrt(delta) / (2 * a));
}
else {
    result_manual[0].real((float)(-b + std::sqrt(delta)) / 2 * a);
    result_manual[1].real((float)(-b - std::sqrt(delta)) / 2 * a);
}
std::cout << result_manual[0] << std::endl;
std::cout << result_manual[1] << std::endl;
Result
---------manual method solving quadratic equation
(0.5,-nan)
(0.5,nan)

Answering myself just for completeness, after many useful comments.
The implementation linked in the question is wrong: std::sqrt of a negative number is not defined and yields NaN. The correct implementation takes the square root of the absolute value of the discriminant:
result_manual[0].imag((float)-std::sqrt(std::abs(delta))/(2*a));
result_manual[1].imag((float)std::sqrt(std::abs(delta))/(2*a));
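For reference, here is a minimal self-contained sketch of the whole corrected computation. It keeps the coefficient order used above (c, b, a) but uses plain std::complex<float> instead of cv::Vec, so it compiles without OpenCV:

#include <cmath>
#include <complex>
#include <iostream>

int main() {
    // same polynomial as above: a*x^2 + b*x + c with a = 1, b = -1, c = 1
    float c = 1.0f, b = -1.0f, a = 1.0f;
    std::complex<float> roots[2];

    double delta = b * b - 4.0 * a * c;
    if (delta < 0) {
        // complex-conjugate pair: sqrt of |delta| gives the imaginary part
        float re = -b / (2 * a);
        float im = (float)(std::sqrt(std::abs(delta)) / (2 * a));
        roots[0] = {re, -im};
        roots[1] = {re, im};
    } else {
        // note the parentheses around (2 * a): writing "/ 2 * a", as in the
        // question's else branch, divides by 2 and then multiplies by a
        roots[0] = {(float)((-b + std::sqrt(delta)) / (2 * a)), 0.0f};
        roots[1] = {(float)((-b - std::sqrt(delta)) / (2 * a)), 0.0f};
    }
    std::cout << roots[0] << "\n" << roots[1] << "\n";
}

For the coefficients above this prints (0.5,-0.866025) and (0.5,0.866025), the expected complex roots.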

Related

Eigen::Quaternion FromTwoVectors() does not return a valid quaternion

I have two vectors in 3D space and I'm trying to use the function of Eigen::Quaternion FromTwoVectors() to calculate the rotation between them. Despite the fact that there is no unique solution to this problem, I'm getting a quaternion with almost 4 zero values (probably just numerical errors) e.g.: 6.95251e-310, 6.90758e-310, 9.88131e-324, 6.90758e-310. And this is certainly not a valid quaternion.
Eigen::Vector3d normal1;
Eigen::Vector3d normal2;
Eigen::Quaterniond quat;
quat.FromTwoVectors(normal1, normal2);
std::cout << quat.x() << quat.y() << quat.z() << quat.w() << std::endl;
Using the same vectors to compute AngleAxis and then converting to a quaternion gives valid values:
Eigen::Vector3d axis = normal1.cross(normal2);
axis.normalize();
double angle = acos(normal1.dot(normal2));
Eigen::AngleAxisd aa(angle,axis);
Eigen::Quaterniond quat(aa);
std::cout << quat.x() << quat.y() << quat.z() << quat.w() << std::endl;
0.000407447, 0.00621866, 0.0035146, 0.999974
What is going wrong here?
See the documentation for FromTwoVectors: https://eigen.tuxfamily.org/dox/classEigen_1_1Quaternion.html#title13
FromTwoVectors is a static method, so calling it on quat does not modify quat; you need to use the return value. You should get the correct answer with Eigen::Quaterniond out = Eigen::Quaterniond::FromTwoVectors(normal1, normal2). (The near-zero values in the question are simply quat's uninitialized memory, since it is never assigned.)
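A minimal sketch of the corrected call (the two vectors here are made-up placeholders, normalized for good measure):

#include <Eigen/Geometry>
#include <iostream>

int main() {
    // placeholder inputs; a 90-degree case so the result is easy to check
    Eigen::Vector3d normal1 = Eigen::Vector3d(1, 0, 0).normalized();
    Eigen::Vector3d normal2 = Eigen::Vector3d(0, 1, 0).normalized();

    // FromTwoVectors is static: assign its return value instead of calling
    // it on an existing quaternion and discarding the result
    Eigen::Quaterniond quat = Eigen::Quaterniond::FromTwoVectors(normal1, normal2);
    std::cout << quat.x() << " " << quat.y() << " "
              << quat.z() << " " << quat.w() << std::endl;
    // expected: a 90-degree rotation about z, i.e. 0 0 0.707107 0.707107
}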

Strange behaviours in porting code from Eigen2 to Eigen3

I'm considering to use this library to perform spectral clustering in my research project.
But, to do so, I need to port it from Eigen2 to Eigen3 (which is what I use in my code).
There's a portion of code that is causing me some troubles.
This is for Eigen2:
double Evrot::evqual(Eigen::MatrixXd& X) {
    // take the square of all entries and find max of each row
    Eigen::MatrixXd X2 = X.cwise().pow(2);
    Eigen::VectorXd max_values = X2.rowwise().maxCoeff();
    // compute cost
    for (int i = 0; i < mNumData; i++) {
        X2.row(i) = X2.row(i) / max_values[i];
    }
    double J = 1.0 - (X2.sum() / mNumData - 1.0) / mNumDims;
    if (DEBUG)
        std::cout << "Computed quality = " << J << std::endl;
    return J;
}
as explained here, Eigen3 replaces .cwise() with the slightly different .array() functionality.
So, I wrote:
double Evrot::evqual(Eigen::MatrixXd& X) {
    // take the square of all entries and find max of each row
    Eigen::MatrixXd X2 = X.array().pow(2);
    Eigen::VectorXd max_values = X2.rowwise().maxCoeff();
    // compute cost
    for (int i = 0; i < mNumData; i++) {
        X2.row(i) = X2.row(i) / max_values[i];
    }
    double J = 1.0 - (X2.sum() / mNumData - 1.0) / mNumDims;
    if (DEBUG)
        std::cout << "Computed quality = " << J << std::endl;
    return J;
}
and I got no compiler errors.
But if I give the two programs the same input (and check that they're actually getting consistent inputs), in the first case I get numbers and in the second only NaNs.
My idea is that max_values is badly computed, and then using this vector in a division produces all the NaNs. But I have no clue how to fix that.
Can someone please explain how to solve this problem?
Thanks!
Have you checked where the values start to diverge? Are you sure there are no empty rows and that X^2 does not underflow? In any case, you should add a guard before dividing by max_values[i]. Moreover, to avoid underflow in the squaring, you could rewrite it like this:
VectorXd max_values = X.array().abs().rowwise().maxCoeff();
double sum = 0;
for (int i = 0; i < mNumData; i++) {
    if (max_values[i] > 0)
        sum += (X.row(i) / max_values[i]).squaredNorm();
}
double J = 1.0 - (sum / mNumData - 1.0) / mNumDims;
This will work even if X.abs().maxCoeff()==1e-170, whereas your code would underflow and produce NaNs. Of course, if you are in such a case, maybe you should check your inputs first, as you are already on the dangerous side regarding numerical issues.
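Putting the suggestion together, a sketch of the fully ported function might look like this (mNumData, mNumDims and DEBUG are taken from the original class and assumed to be in scope):

double Evrot::evqual(Eigen::MatrixXd& X) {
    // use |X| row maxima directly, so nothing is squared before the division
    Eigen::VectorXd max_values = X.array().abs().rowwise().maxCoeff();
    double sum = 0;
    for (int i = 0; i < mNumData; i++) {
        if (max_values[i] > 0)  // guard against all-zero rows
            sum += (X.row(i) / max_values[i]).squaredNorm();
    }
    double J = 1.0 - (sum / mNumData - 1.0) / mNumDims;
    if (DEBUG)
        std::cout << "Computed quality = " << J << std::endl;
    return J;
}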

Advice on method to integrate Bessel functions in C++ [duplicate]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 7 years ago.
How can I numerically integrate an equation involving Bessel functions from 0 to infinity in Fortran and/or C?
I did it in MATLAB, but it fails for larger inputs: beyond certain argument values the Bessel functions give completely wrong results (there is a range restriction in MATLAB).
There's a large number of analytic results for various integrals of the Bessel functions (see DLMF, Sect. 10.22), including definite integrals over precisely this range. You'd be much better off, and almost certainly faster and more accurate, trying hard to recast your expression into something that's integrable and using an exact result.
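For example, one classical closed form of exactly this type is

\[ \int_0^\infty J_\nu(t)\,dt = 1, \qquad \operatorname{Re}\nu > -1 \]

(paraphrased from DLMF Sect. 10.22; check the precise conditions there before relying on it). If your integrand can be massaged into such a shape, no quadrature is needed at all.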
Last time I dealt with such things, it was state of the art to do simple integration over the intervals defined by the zero crossings. That is relatively stable in most cases, and easy to do if the integrand approaches zero reasonably fast.
As a starting point for playing around, I've included a bit of code. Of course you need to work on the convergence detection and error checking. This is no production code, but I thought it might provide a starting point for you. It uses GSL.
On my iMac this code takes about 2 µs per iteration. It will not become faster by including a hardcoded table for the intervals.
I hope this is of some use to you.
#include <iostream>
#include <cmath>
#include <gsl/gsl_sf_bessel.h>
#include <gsl/gsl_integration.h>

// integrand: J0(x) / (1 + x)
double f(double x, void* params) {
    return 1.0 / (1.0 + x) * gsl_sf_bessel_J0(x);
}

int main(int argc, const char* argv[]) {
    double sum = 0;
    double delta = 0.00001;  // stop once a partial integral falls below this
    int max_steps = 1000;
    gsl_integration_workspace* w = gsl_integration_workspace_alloc(max_steps);
    gsl_function F;
    F.function = &f;
    F.params = 0;
    double result, error;
    double a, b = 0.0;
    for (int n = 0; n < max_steps; n++) {
        // integrate between consecutive zeros of J0;
        // the first interval starts at 0
        a = b;
        b = gsl_sf_bessel_zero_J0(n + 1);
        gsl_integration_qag(&F,    // function
                            a,     // from
                            b,     // to
                            0,     // eps absolute
                            1e-4,  // eps relative
                            max_steps,
                            GSL_INTEG_GAUSS15,
                            w,
                            &result,
                            &error);
        sum += result;
        std::cout << n << " " << result << " " << sum << "\n";
        if (std::abs(result) < delta)
            break;
    }
    gsl_integration_workspace_free(w);
    return 0;
}
You can pretty much google and find lots of Bessel functions implemented in C already.
http://www.atnf.csiro.au/computing/software/gipsy/sub/bessel.c
http://jean-pierre.moreau.pagesperso-orange.fr/c_bessel.html
https://msdn.microsoft.com/en-us/library/h7zkk1bz.aspx
In the end, these use the built-in types and will be limited to the ranges they can represent (just as MATLAB is). At best, expect 15 digits of precision using double-precision floating point. So, for large numbers, they will appear to be rounded, e.g. 1237846464123450000000000.00000.
And, of course, others on Stack Overflow have looked into it.
C++ Bessel function for complex numbers

Increase precision in SelfAdjointEigenSolver in Eigen

I am trying to determine the eigenvalues and eigenvectors of a sparse array in Eigen. Since I need to compute all the eigenvectors and eigenvalues, and I could not get the unsupported ArpackSupport module working, I chose to convert the system to a dense matrix and compute the eigensystem using SelfAdjointEigenSolver (I know my matrix is real and has real eigenvalues). This works well until the matrices reach a size of 1024x1024, but then I start getting deviations from the expected results.
In the documentation of this module (https://eigen.tuxfamily.org/dox/classEigen_1_1SelfAdjointEigenSolver.html) from what I understood it is possible to change the number of max iterations:
static const int m_maxIterations
Maximum number of iterations. The algorithm terminates if it does not converge within m_maxIterations * n iterations, where n denotes the size of the matrix. This value is currently set to 30 (copied from LAPACK).
However, I do not understand how do you implement this, using their examples:
SelfAdjointEigenSolver<Matrix4f> es;
Matrix4f X = Matrix4f::Random(4, 4);
Matrix4f A = X + X.transpose();
es.compute(A);
cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl;
es.compute(A + Matrix4f::Identity(4, 4)); // re-use es to compute eigenvalues of A+I
cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl;
How would you modify it in order to change the maximum number of iterations?
Additionally, will this solve my problem or should I try to find an alternative function or algorithm to solve the eigensystem?
My thanks in advance.
Increasing the number of iterations is unlikely to help. On the other hand, moving from float to double will help a lot!
If that does not help, please, be more specific on "deviations from the expected results".
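For instance, a minimal sketch of the double-precision variant (the random symmetric matrix below is a placeholder for the asker's actual system):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // placeholder data: a random symmetric 1024x1024 matrix in double precision
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(1024, 1024);
    Eigen::MatrixXd A = X + X.transpose();  // symmetrize

    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(A);
    if (es.info() != Eigen::Success)
        std::cerr << "eigendecomposition did not converge\n";
    std::cout << es.eigenvalues().head(5).transpose() << std::endl;
}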
m_maxIterations is a static const int variable, and as such it can be considered an intrinsic property of the type. Changing such a type property would usually be done via a specific template parameter; in this case, however, it is hard-coded to the constant 30, so that is not possible.
Therefore, your only choice is to change the value in the header file and recompile your program.
However, before doing that, I would try the singular value decomposition. According to the homepage, its accuracy is "Excellent-Proven". Moreover, it can overcome problems caused by matrices that are not numerically completely symmetric.
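A sketch of what that could look like with Eigen's JacobiSVD (for a symmetric positive semi-definite matrix the singular values coincide with the eigenvalues; for an indefinite matrix they are only the eigenvalue magnitudes, so treat this as illustrative):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // placeholder: a symmetric PSD matrix built from random data
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(100, 100);
    Eigen::MatrixXd A = X * X.transpose();

    // ComputeThinU is enough if only the left singular vectors are needed
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU);
    std::cout << svd.singularValues().head(5).transpose() << std::endl;
}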
I solved the problem by writing the Jacobi algorithm adapted from the book Numerical Recipes:
void ROTATy(MatrixXd& a, int i, int j, int k, int l, double s, double tau)
{
    double g = a(i, j);
    double h = a(k, l);
    a(i, j) = g - s * (h + g * tau);
    a(k, l) = h + s * (g - h * tau);
}

void jacoby(int n, MatrixXd& a, MatrixXd& v, VectorXd& d)
{
    int j, iq, ip, i;
    double tresh, theta, tau, t, sm, s, h, g, c;
    VectorXd b(n);
    VectorXd z(n);
    v.setIdentity();
    z.setZero();
    for (ip = 0; ip < n; ip++) {
        d(ip) = a(ip, ip);
        b(ip) = d(ip);
    }
    for (i = 0; i < 50; i++) {
        // sum of off-diagonal magnitudes; zero means we have converged
        sm = 0.0;
        for (ip = 0; ip < n - 1; ip++) {
            for (iq = ip + 1; iq < n; iq++)
                sm += fabs(a(ip, iq));
        }
        if (sm == 0.0)
            break;
        if (i < 3)
            tresh = 0.2 * sm / (n * n);  // skip tiny elements on early sweeps
        else
            tresh = 0.0;
        for (ip = 0; ip < n - 1; ip++) {
            for (iq = ip + 1; iq < n; iq++) {
                g = 100.0 * fabs(a(ip, iq));
                // after four sweeps, zero the off-diagonal element if it is
                // negligible compared to both diagonal entries
                if (i > 3 && (fabs(d(ip)) + g) == fabs(d(ip))
                          && (fabs(d(iq)) + g) == fabs(d(iq)))
                    a(ip, iq) = 0.0;
                else if (fabs(a(ip, iq)) > tresh) {
                    h = d(iq) - d(ip);
                    if ((fabs(h) + g) == fabs(h)) {
                        t = a(ip, iq) / h;
                    }
                    else {
                        theta = 0.5 * h / a(ip, iq);
                        t = 1.0 / (fabs(theta) + sqrt(1.0 + theta * theta));
                        if (theta < 0.0)
                            t = -t;
                    }
                    // apply the rotation regardless of how t was obtained
                    c = 1.0 / sqrt(1 + t * t);
                    s = t * c;
                    tau = s / (1.0 + c);
                    h = t * a(ip, iq);
                    z(ip) -= h;
                    z(iq) += h;
                    d(ip) -= h;
                    d(iq) += h;
                    a(ip, iq) = 0.0;
                    for (j = 0; j < ip; j++)
                        ROTATy(a, j, ip, j, iq, s, tau);
                    for (j = ip + 1; j < iq; j++)
                        ROTATy(a, ip, j, j, iq, s, tau);
                    for (j = iq + 1; j < n; j++)
                        ROTATy(a, ip, j, iq, j, s, tau);
                    for (j = 0; j < n; j++)
                        ROTATy(v, j, ip, j, iq, s, tau);
                }
            }
        }
        // end of sweep: fold the accumulated corrections into d and reset z
        for (ip = 0; ip < n; ip++) {
            b(ip) += z(ip);
            d(ip) = b(ip);
            z(ip) = 0.0;
        }
    }
}
The function jacoby receives the size n of the square matrix, the matrix we want to solve (a), a matrix v that will receive the eigenvectors in its columns, and a vector d that will receive the eigenvalues. It is a bit slower, so I tried to parallelize it with OpenMP (see: Parallelization of Jacobi algorithm using eigen c++ using openmp), but for 4096x4096 matrices that unfortunately did not improve the computation time.
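A hypothetical usage sketch, assuming ROTATy and jacoby above are in scope (the 4x4 matrix is made up for illustration):

#include <Eigen/Dense>
#include <iostream>
using Eigen::MatrixXd;
using Eigen::VectorXd;

int main() {
    int n = 4;
    MatrixXd a(n, n);
    a << 4, 1, 0, 0,   // symmetric test matrix (made up)
         1, 3, 1, 0,
         0, 1, 2, 1,
         0, 0, 1, 1;
    MatrixXd v(n, n);  // receives eigenvectors in its columns
    VectorXd d(n);     // receives eigenvalues
    jacoby(n, a, v, d);  // note: the upper triangle of a is destroyed
    std::cout << d.transpose() << std::endl;
}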

Eigen - Check if matrix is Positive (Semi-)Definite

I'm implementing a spectral clustering algorithm and I have to ensure that a matrix (laplacian) is positive semi-definite.
A check if the matrix is positive definite (PD) is enough, since the "semi-" part can be seen in the eigenvalues. The matrix is pretty big (nxn where n is in the order of some thousands) so eigenanalysis is expensive.
Is there any check in Eigen that gives a bool result in runtime?
Matlab can give a result with the chol() method by throwing an exception if a matrix is not PD. Following this idea, Eigen returns a result without complaining for LLL.llt().matrixL(), although I was expecting some warning/error.
Eigen also has the method isPositive, but due to a bug it is unusable for systems with an old Eigen version.
You can use a Cholesky decomposition (LLT), whose info() reports Eigen::NumericalIssue if the matrix is not positive (semi-)definite; see the documentation.
Example below:
#include <Eigen/Dense>
#include <iostream>
#include <stdexcept>

int main()
{
    Eigen::MatrixXd A(2, 2);
    A << 1, 0, 0, -1; // non positive semi-definite matrix
    std::cout << "The matrix A is" << std::endl << A << std::endl;
    Eigen::LLT<Eigen::MatrixXd> lltOfA(A); // compute the Cholesky decomposition of A
    if (lltOfA.info() == Eigen::NumericalIssue) {
        throw std::runtime_error("Possibly non positive semi-definite matrix!");
    }
}
In addition to @vsoftco's answer, we should also check the matrix for symmetry, since the definition of PD/PSD requires a symmetric matrix.
Eigen::LLT<Eigen::MatrixXd> A_llt(A);
if (!A.isApprox(A.transpose()) || A_llt.info() == Eigen::NumericalIssue) {
    throw std::runtime_error("Possibly non positive semi-definite matrix!");
}
This check is important, e.g. some Eigen solvers (like LDLT) require a PSD (or NSD) matrix input. In fact, there exist non-symmetric, and hence non-PSD, matrices A that pass the A_llt.info() != Eigen::NumericalIssue test. Consider the following example (numbers taken from Jiuzhang Suanshu, Chapter 8, Problem 1):
Eigen::Matrix3d A;
Eigen::Vector3d b;
Eigen::Vector3d x;

// A is full rank and all its eigenvalues >= 0.
// However, A is not symmetric, thus not PSD.
A << 3, 2, 1,
     2, 3, 1,
     1, 2, 3;
b << 39, 34, 26;

// This alone doesn't check matrix symmetry, so it can't guarantee PSD
Eigen::LLT<Eigen::Matrix3d> A_llt(A);
std::cout << (A_llt.info() == Eigen::NumericalIssue)
          << std::endl; // false, no issue detected

// ldlt solver requires PSD, wrong answer
x = A.ldlt().solve(b);
std::cout << x << std::endl; // wrong solution [10.625, 1.5, 4.125]
std::cout << b.isApprox(A * x) << std::endl; // false

// ColPivHouseholderQR doesn't assume PSD, right answer
x = A.colPivHouseholderQr().solve(b);
std::cout << x << std::endl; // correct solution [9.25, 4.25, 2.75]
std::cout << b.isApprox(A * x) << std::endl; // true
Note: to be more exact, one could apply the definition of PSD by checking that A is symmetric and all of A's eigenvalues are >= 0. But as mentioned in the question, this could be computationally expensive.
You have to test that the matrix is symmetric (A.isApprox(A.transpose())), then create the LDLT (and not LLT, because LDLT takes care of the case where one of the eigenvalues is 0, i.e. not strictly positive), then test for numerical issues and positiveness:
template <class MatrixT>
bool isPsd(const MatrixT& A) {
    if (!A.isApprox(A.transpose())) {
        return false;
    }
    const auto ldlt = A.template selfadjointView<Eigen::Upper>().ldlt();
    if (ldlt.info() == Eigen::NumericalIssue || !ldlt.isPositive()) {
        return false;
    }
    return true;
}
I tested this on
1 2
2 3
which has a negative eigenvalue (hence it is not PSD); without the isPositive() test, isPsd() incorrectly returns true here. I also tested it on
1 2
2 4
which has a null eigenvalue (hence it is PSD but not PD).
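A minimal sketch of a test harness for the isPsd() template above, using the two matrices from this answer:

#include <Eigen/Dense>
#include <iostream>

// ... isPsd() as defined above ...

int main() {
    Eigen::Matrix2d indefinite, psd;
    indefinite << 1, 2,
                  2, 3;  // eigenvalues 2 +/- sqrt(5): one negative, not PSD
    psd << 1, 2,
           2, 4;         // eigenvalues 0 and 5: PSD but not PD
    std::cout << std::boolalpha
              << isPsd(indefinite) << "\n"  // false
              << isPsd(psd) << "\n";        // true
}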