Eigen: How to check if Matrix is invertible - c++

I would like to generate a random invertible matrix using Eigen that fulfills the following criteria:
Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> res(M, N + 1);
Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> y(M, 1);
y.setRandom();
while (true) {
    res.setRandom();
    Eigen::FullPivLU<Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>> lu(res.transpose() * res);
    if (lu.isInvertible()) {
        break;
    }
}
So res^T * res should be invertible, but I need a random res.

Your isInvertible check should work here. As a diagnostic, you can also look at the absolute value of the determinant or at the condition number of your matrix. The determinant is available through the determinant() member function, see: https://eigen.tuxfamily.org/dox/classEigen_1_1MatrixBase.html#a7ad8f77004bb956b603bb43fd2e3c061
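A minimal diagnostic sketch of these checks (my own addition, not from the answer; rcond() assumes Eigen 3.3 or newer, and the helper name diagnose is hypothetical):

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

// Prints invertibility diagnostics for a square matrix.
void diagnose(const Eigen::MatrixXd& A) {
    Eigen::FullPivLU<Eigen::MatrixXd> lu(A);
    std::cout << "isInvertible: " << lu.isInvertible() << "\n";
    std::cout << "|det|:        " << std::abs(A.determinant()) << "\n";
    std::cout << "rcond:        " << lu.rcond() << "\n"; // reciprocal condition number estimate
    // Condition number from the singular values (largest / smallest):
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);
    const auto& s = svd.singularValues();
    std::cout << "cond:         " << s(0) / s(s.size() - 1) << "\n";
}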


C++ (Eigen) - Coefficientwise boolean between two differently sized vectors

Suppose I have two Eigen Arrays defined as
Array<float, 10, 1> a;
Array<float, 4, 1> b;
Now, I would like to get an Array<bool, 10, 4> result whose entry (i, j) reads true whenever the i-th of the 10 entries of a is larger than the j-th of the four entries of b. One way I thought of to achieve that was:
Array<float, 10, 1> a;
Array<float, 4, 1> b;
// ... fill the arrays
Array<bool, 10, 4> result = a.replicate(1, 4) > b.transpose().replicate(10, 1);
In reality, the matrix dimensions are much larger though. This approach works, but I have two questions.
First: if I replace Array<bool, 10, 4> result = ... by auto result = ..., using the result is much slower for some reason (I use it afterwards to do some calculations based on the outcome of the bool).
Second: While this works, I wonder whether this is the most efficient approach, as I don't know whether replicate requires copying. I could instead iterate over one of the two dimensions:
Array<float, 10, 1> a;
Array<float, 4, 1> b;
// ... fill the arrays
Array<bool, 10, 4> result;
for (int i = 0; i < 4; i++) {
    result.col(i) = a > b(i, 0);
}
which would remove the necessity of using replicate at the cost of adding an iteration.
Any help is greatly appreciated!
EDIT:
Here's the chunk of code of interest. It gets called very often and so speedup is of great concern. Right now, this eats up 90% of the overall execution time. It is part of line intersection detection for thousands of lines. My approach was to write all the checks in matrix expressions to avoid iterating over all line pairs. To do that, I build a matrix with n rows for all n lines and m columns for all lines the other n lines could collide with. Then, the line intersection formula can be applied coefficientwise over the big matrices which I hope brings speedup.
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x1 = rays.col(0).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x2 = (rays.col(2) + rays.col(0)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y1 = rays.col(1).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y2 = (rays.col(3) + rays.col(1)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x3 = obstacles.col(0).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x4 = obstacles.col(2).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y3 = obstacles.col(1).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y4 = obstacles.col(3).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t_den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / t_den;
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> u = ((x1-x3)*(y1-y2) - (y1-y3)*(x1-x2));
Eigen::Array<bool, Eigen::Dynamic, Eigen::Dynamic> col_r = 0 <= t && 0 <= u && u <= t_den;
t_rays = col_r.select(t, 1000).rowwise().minCoeff();
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x1_ = ship_b_boxs.col(0).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x2_ = (ship_b_boxs.col(2)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y1_ = ship_b_boxs.col(1).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y2_ = (ship_b_boxs.col(3)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x3_ = x3(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x4_ = x4(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y3_ = y3(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y4_ = y4(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t_den_ = (x1_-x2_)*(y3_-y4_) - (y1_-y2_)*(x3_-x4_);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t_ = ((x1_-x3_)*(y3_-y4_) - (y1_-y3_)*(x3_-x4_)) / t_den_;
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> u_ = ((x1_-x3_)*(y1_-y2_) - (y1_-y3_)*(x1_-x2_));
Eigen::Array<bool, Eigen::Dynamic, Eigen::Dynamic> col_s = (0 <= t_ && t_ <= 1 && 0 <= u_ && u_ <= t_den_).rowwise().maxCoeff();
with 85% taken by the inlined 'Eigen::Array::Array'. I can see that all the temporary array constructions probably take a lot of time now. However, storing these temporary variables is beneficial as they are used more than once. Is there any way to speed this up?
First: if I replace Array<bool, 10, 4> result = ... by auto result = ..., using the result is much slower for some reason (I use it after to do some calculations based on the outcome of the bool).
As mentioned by @Homer512 in a comment, do not use auto in Eigen expressions: auto deduces the lazy expression type rather than an evaluated array, so the comparison is re-evaluated every time the result is used.
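A minimal sketch of the pitfall, reusing a and b from the question (the function name is illustrative):

#include <Eigen/Dense>
using Eigen::Array;

void demo(const Array<float, 10, 1>& a, const Array<float, 4, 1>& b) {
    // `lazy` is the unevaluated comparison expression: every coefficient
    // access re-runs replicate() and operator> on the fly.
    auto lazy = a.replicate(1, 4) > b.transpose().replicate(10, 1);
    // Assigning to a concrete Array evaluates the expression exactly once.
    Array<bool, 10, 4> evaluated = lazy;
}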
Second: While this works, I wonder if this is the most efficient approach as I don't know whether replicate requires copying
No: if you use replicate within an expression and you do not actively store it in some intermediate variable or otherwise force the expression to be evaluated, there is no copying involved. In typical Eigen style, replicate only returns an expression of the replication (see the doc); an actual object containing a full replica is only created if absolutely necessary.
In short, this involves copies:
Array<float, 10, 4> a4r = a.replicate(1, 4);
Array<float, 10, 4> b4r = b.transpose().replicate(10, 1);
Array<bool, 10, 4> result = a4r > b4r;
while your expression does not:
Array<bool, 10, 4> result = a.replicate(1, 4) > b.transpose().replicate(10, 1);
So I really think it's the most efficient way to do it, or close to it.
How is this magic possible? It is well explained in the doc.
Side note:
if your matrices are big you should consider switching to dynamic size, see Eigen's own recommendations "Fixed vs. Dynamic size"
on the other hand, if you do know the sizes at compile time, you could consider the template version of replicate: a.replicate<1, 4>() > b.transpose().replicate<10, 1>();
If you are still not convinced that no copying is involved, you can check the source code, as of version 3.4.0:
Eigen/src/Core/Replicate.h
template<typename MatrixType,int RowFactor,int ColFactor> class Replicate
  : public internal::dense_xpr_base< Replicate<MatrixType,RowFactor,ColFactor> >::type
{
    [...]
    template<typename OriginalMatrixType>
    EIGEN_DEVICE_FUNC
    inline Replicate(const OriginalMatrixType& matrix, Index rowFactor, Index colFactor)
      : m_matrix(matrix), m_rowFactor(rowFactor), m_colFactor(colFactor)
    {
        EIGEN_STATIC_ASSERT((internal::is_same<typename internal::remove_const<MatrixType>::type,OriginalMatrixType>::value),
                            THE_MATRIX_OR_EXPRESSION_THAT_YOU_PASSED_DOES_NOT_HAVE_THE_EXPECTED_TYPE)
    }

    EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR
    inline Index rows() const { return m_matrix.rows() * m_rowFactor.value(); }
    EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR
    inline Index cols() const { return m_matrix.cols() * m_colFactor.value(); }

    EIGEN_DEVICE_FUNC
    const _MatrixTypeNested& nestedExpression() const
    {
        return m_matrix;
    }

  protected:
    MatrixTypeNested m_matrix;
    const internal::variable_if_dynamic<Index, RowFactor> m_rowFactor;
    const internal::variable_if_dynamic<Index, ColFactor> m_colFactor;
};
Eigen/src/Core/DenseBase.h
const Replicate<Derived, Dynamic, Dynamic> replicate(Index rowFactor, Index colFactor) const
{
    return Replicate<Derived, Dynamic, Dynamic>(derived(), rowFactor, colFactor);
}
Eigen/src/Core/CoreEvaluators.h
template<typename ArgType, int RowFactor, int ColFactor>
struct unary_evaluator<Replicate<ArgType, RowFactor, ColFactor> >
  : evaluator_base<Replicate<ArgType, RowFactor, ColFactor> >
{
    [...]
    EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE
    explicit unary_evaluator(const XprType& replicate)
      : m_arg(replicate.nestedExpression()),
        m_argImpl(m_arg),
        m_rows(replicate.nestedExpression().rows()),
        m_cols(replicate.nestedExpression().cols())
    {}

    EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE
    CoeffReturnType coeff(Index row, Index col) const
    {
        // try to avoid using modulo; this is a pure optimization strategy
        const Index actual_row = internal::traits<XprType>::RowsAtCompileTime==1 ? 0
                               : RowFactor==1 ? row
                               : row % m_rows.value();
        const Index actual_col = internal::traits<XprType>::ColsAtCompileTime==1 ? 0
                               : ColFactor==1 ? col
                               : col % m_cols.value();
        return m_argImpl.coeff(actual_row, actual_col);
    }
    [...]

  protected:
    const ArgTypeNested m_arg;
    evaluator<ArgTypeNestedCleaned> m_argImpl;
    const variable_if_dynamic<Index, ArgType::RowsAtCompileTime> m_rows;
    const variable_if_dynamic<Index, ArgType::ColsAtCompileTime> m_cols;
};
I know the code is heavy and difficult to interpret if you are not familiar with this kind of mechanism; that's why I've removed all but the most relevant parts. What you need to retain is this: replicate only creates an expression of type Replicate, which simply stores a reference to the original object, the number of row copies, and the number of column copies; then, when you want to retrieve a coefficient from the expression, the evaluator computes the appropriate index in the original matrix using modulo and returns the corresponding element.
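To make the index mapping concrete, here is a small self-contained check (my own sketch, not part of the original answer):

#include <Eigen/Dense>
#include <cassert>

int main() {
    Eigen::Array<float, 10, 1> a = Eigen::Array<float, 10, 1>::Random();
    auto rep = a.replicate(1, 4); // an expression; no 10x4 buffer is allocated
    for (Eigen::Index col = 0; col < 4; ++col)
        for (Eigen::Index row = 0; row < 10; ++row)
            // every coefficient is fetched from `a` through the modulo mapping
            assert(rep(row, col) == a(row % 10, 0));
    return 0;
}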

Eigen LLT (Cholesky) fails, while SVD works

I'm trying to reproduce some numpy code on Gaussian Processes (from here) using Eigen. Basically, I need to sample from a multivariate normal distribution:
samples = np.random.multivariate_normal(mu.ravel(), cov, 1)
The mean vector is currently zero, while the covariance matrix is a square matrix generated via the isotropic squared exponential kernel:
sqdist = np.sum(X1**2, 1).reshape(-1, 1) + np.sum(X2**2, 1) - 2 * np.dot(X1, X2.T)
return sigma_f**2 * np.exp(-0.5 / l**2 * sqdist)
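Written out, this is the squared exponential (RBF) kernel; the sqdist line uses the expansion \(\lVert x_i - x_j \rVert^2 = \lVert x_i \rVert^2 + \lVert x_j \rVert^2 - 2\,x_i^\top x_j\), so the covariance is

k(x_i, x_j) = \sigma_f^2 \exp\!\left( -\frac{\lVert x_i - x_j \rVert^2}{2 l^2} \right)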
I can generate the covariance matrix just fine for now (it can probably be cleaned but for now it's a POC):
Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
    auto dists = ((-2.0 * (x1 * x2.transpose())).colwise()
                  + x1.rowwise().squaredNorm()).rowwise()
                  + x2.rowwise().squaredNorm().transpose();
    return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}
However, my problems start when I need to sample the multivariate normal.
I've tried using the solution proposed in this accepted answer; however, the decomposition only works with covariance matrices of size up to 30x30; beyond that, LLT fails to decompose the matrix. The alternative version provided in the answer also does not work and creates NaNs. I tried LDLT as well, but it also breaks (D contains negative values, so sqrt gives NaN).
Then, I got curious, and I looked into how numpy does this. Turns out the numpy implementation uses SVD decomposition (with LAPACK), rather than Cholesky. So I tried copying their implementation:
// SVD on the covariance matrix generated via kernel function
Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
normTransform = (-solver.matrixV().transpose()).array().colwise() * solver.singularValues().array().sqrt();
// Generate gaussian samples, "randN" is from the multivariate StackOverflow answer
Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();
The various minuses are me trying to exactly reproduce numpy's results.
In any case, this works perfectly fine, even with large dimensions. I was wondering why Eigen is not able to do LLT, but SVD works. The covariance matrix I use is the same. Is there something I can do to simply use LLT?
EDIT: Here is my full example:
#include <iostream>
#include <random>
#include <Eigen/Cholesky>
#include <Eigen/SVD>
#include <Eigen/Eigenvalues>

using Matrix2D = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor | Eigen::AutoAlign>;
using Vector = Eigen::Matrix<double, Eigen::Dynamic, 1>;

/*
  We need a functor that can pretend it's const,
  but to be a good random number generator
  it needs mutable state.
*/
namespace Eigen {
namespace internal {

template<typename Scalar>
struct scalar_normal_dist_op
{
    static std::mt19937 rng;                        // The uniform pseudo-random algorithm
    mutable std::normal_distribution<Scalar> norm;  // The gaussian combinator

    EIGEN_EMPTY_STRUCT_CTOR(scalar_normal_dist_op)

    template<typename Index>
    inline const Scalar operator() (Index, Index = 0) const { return norm(rng); }
};

template<typename Scalar> std::mt19937 scalar_normal_dist_op<Scalar>::rng;

template<typename Scalar>
struct functor_traits<scalar_normal_dist_op<Scalar> >
{ enum { Cost = 50 * NumTraits<Scalar>::MulCost, PacketAccess = false, IsRepeatable = false }; };

} // end namespace internal
} // end namespace Eigen

Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
    auto dists = ((- 2.0 * (x1 * x2.transpose())).colwise() + x1.rowwise().squaredNorm()).rowwise() + x2.rowwise().squaredNorm().transpose();
    return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}

int main() {
    unsigned size = 50;
    unsigned seed = 1;

    Matrix2D X = Vector::LinSpaced(size, -5.0, 4.8);

    Eigen::internal::scalar_normal_dist_op<double> randN;           // Gaussian functor
    Eigen::internal::scalar_normal_dist_op<double>::rng.seed(seed); // Seed the rng

    Vector means = Vector::Zero(X.rows());
    auto covs = kernel(X, X);

    Eigen::LLT<Matrix2D> cholSolver(covs);

    // We can only use the cholesky decomposition if
    // the covariance matrix is symmetric, pos-definite.
    // But a covariance matrix might be pos-semi-definite.
    // In that case, we'll go to an EigenSolver
    Eigen::MatrixXd normTransform;
    if (cholSolver.info() == Eigen::Success) {
        std::cout << "Used LLT\n";
        // Use cholesky solver
        normTransform = cholSolver.matrixL();
    } else {
        std::cout << "Broken\n";
        Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
        normTransform = (-solver.matrixV().transpose()).array().colwise() * solver.singularValues().array().sqrt();
    }

    Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
    Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();

    return 0;
}
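Not part of the original question, but a common workaround worth sketching: for this kernel the covariance matrix is positive definite in theory, yet for larger sizes its smallest eigenvalues drop below machine precision and the matrix becomes numerically semi-definite, which is why LLT fails. Adding a tiny diagonal "jitter" usually restores numerical positive definiteness (1e-10 below is an illustrative value):

// Hedged sketch: regularize covs before the Cholesky factorization.
Matrix2D jittered = covs + 1e-10 * Matrix2D::Identity(covs.rows(), covs.cols());
Eigen::LLT<Matrix2D> chol(jittered);
if (chol.info() == Eigen::Success) {
    normTransform = chol.matrixL(); // the LLT path is now usable
}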

copying elements from an OpenCV matrix to an Eigen matrix

I have a std::vector filled with 3x3 OpenCV matrices. I want to unfold the matrices and write them into an n x 9 Eigen matrix.
std::vector<cv::Mat1d> cvMatrix;
// some code that generates a std::vector with 3880 cv matrices, the cv matrices have the size 3x3
Eigen::Matrix<double, Eigen::Dynamic, 9> eigenMatrix;
for (int i = 0; i < curvatures.size(); i++) {
    eigenMatrix.resize(i + 1, 9);
    for (int j = 0; j < 9; j++) {
        eigenMatrix(i, j) = cvMatrix[i](j / 3, j % 3);
    }
}
If I check the elements right after they are written (e.g. printing the values of eigenMatrix when i == 10), everything seems to be fine, but after the for loop is finished that does not hold anymore. Most of the elements in eigenMatrix seem to contain zeros. Can anyone explain what happens here?
eigenMatrix.resize(i + 1, 9); destroys the content of eigenMatrix.
Since you already know the final dimension at the beginning, just write
Eigen::Matrix<double, Eigen::Dynamic, 9> eigenMatrix;
eigenMatrix.resize(curvatures.size(), 9);
or even just
Eigen::Matrix<double, Eigen::Dynamic, 9> eigenMatrix(curvatures.size(), 9);
before starting the for-loop.
If you need to resize a matrix, but keep the content, you can use conservativeResize() -- but that should be avoided since it requires a full copy for each resizing.
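Putting it together, a sketch of the fixed loop from the question (assuming, as there, that curvatures.size() equals cvMatrix.size()):

// Allocate the full n x 9 matrix once, then only write coefficients in the loop.
Eigen::Matrix<double, Eigen::Dynamic, 9> eigenMatrix(curvatures.size(), 9);
for (int i = 0; i < curvatures.size(); i++) {
    for (int j = 0; j < 9; j++) {
        eigenMatrix(i, j) = cvMatrix[i](j / 3, j % 3); // unfold the 3x3 cv::Mat1d row by row
    }
}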

How to convert an std::vector to a matrix in Eigen?

I am somewhat new to Stack Overflow and C++ so feel free to correct any errors in my code and the formatting of this question.
I am trying to make a linear regression calculator using the normal equation, which involves transposing matrices and multiplying vectors (and taking inverses). The program is supposed to read from a CSV file, pass the information from that file into a matrix, and calculate the regression line. To make the job easier, I decided to use a library called Eigen for matrix-matrix multiplication.
The problem that I have run into is that the Map function can only take in an array as opposed to a std::vector.
This is what I have so far:
float feature_data[] = { 1, 1, 1, 1, 1, 1,
                         2, 4.5, 3, 1, 4, 5 };
float labels[] = { 1, 4, 3, 2, 5, 7 };
//maps the array to a matrix called "feature_data"
MatrixXf mFeatures = Map< Matrix<float, 6, 2> >(feature_data);
MatrixXf mLabels = Map< Matrix<float, 6, 1> >(labels);
//use the toArray function
std::vector<float> test_vector = { 2,1,3 };
float* test_array = toArray(test_vector);
calcLinReg(mFeatures, mLabels);
const int n = 2;
int arr[n];
system("pause");
For context, the toArray function is my unsuccessful attempt to make an array from a vector (in all honesty, it works but it returns a pointer which you can't pass into the Map function in Eigen.) calcLinReg does exactly what it sounds like: calculates the linear regression line parameters.
Is there any way I can convert a vector to an array or convert a vector to a matrix in Eigen?
How about using the vector's data() method, which gives you access to the memory array used internally by the vector, like this:
std::vector<float> test_vector = { 2,1,3 };
float* test_array = test_vector.data();
Eigen::MatrixXf test = Eigen::Map<Eigen::Matrix<float, 3, 1> >(test_array);
Or shorter:
std::vector<float> test_vector = { 2,1,3 };
Eigen::MatrixXf test = Eigen::Map<Eigen::Matrix<float, 3, 1> >(test_vector.data());
Beware: the assignment actually copies the data, therefore this is safe. However, you can also directly use the data of the vector, like this:
std::vector<float> test_vector(3,2);
Eigen::Map<Eigen::Matrix<float, 3, 1> > dangerousVec (test_vector.data());
If the vector goes out of scope, the memory is deallocated and dangerousVec's data is dangling.
Someone in a comment is asking for the case of dynamic numbers of rows and columns. This is possible, as follows:
typedef Eigen::Matrix<double,Eigen::Dynamic,Eigen::Dynamic> MyMatrix;
size_t nrow = ...;
size_t ncol = ...;
MyMatrix M = Eigen::Map<MyMatrix>(test_vector.data(), nrow, ncol);
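For instance, a self-contained usage sketch (sizes and values are my own):

#include <Eigen/Dense>
#include <vector>

int main() {
    std::vector<double> test_vector = {1, 2, 3, 4, 5, 6};
    // View the 6 values as a 2x3 matrix; Eigen is column-major by default,
    // so the first two values become the first column.
    Eigen::MatrixXd M = Eigen::Map<Eigen::MatrixXd>(test_vector.data(), 2, 3);
    return 0;
}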

convert T* array (Jet* or float*) to cv::Mat<CV_32f>

I am using ceres-solver with AutoDiffCostFunction. My cost function takes a 1x3 parameter vector and outputs a 1x1 residual.
How can I create an OpenCV Mat out of my T* parameter vector? It may be either Jet or float.
I tried the following code, but get the error "cannot convert from Jet to float":
struct ErrorFunc
{
    template <typename T>
    bool operator()(const T * const Kparams, T * residual) const // Kparams - [f, u, v]
    {
        cv::Mat K = cv::Mat::eye(3, 3, CV_32F);
        K.at<float>(0, 0) = float(Kparams[0]); // error
        K.at<float>(0, 2) = float(Kparams[1]); // error
        K.at<float>(1, 1) = float(Kparams[0]); // error
        K.at<float>(1, 2) = float(Kparams[2]); // error
        cv::Mat Hdot = K.inv() * H * K;
        cv::decomposeHomographyMat(Hdot, K, rot, tr, norm); // want to call this opencv function
        residual[0] = calcResidual(norm);
        return true;
    }
    cv::Mat H;
};
There is a way to get an Eigen matrix out of a T* matrix:
const Eigen::Matrix< T, 3, 3, Eigen::RowMajor> hom = Eigen::Map< const Eigen::Matrix< T, 3, 3, Eigen::RowMajor> >(Matrix)
but I want to call cv::decomposeHomographyMat . How can I do this?
You cannot use an OpenCV method in a ceres::AutoDiffCostFunction this way. The OpenCV method is not templated on the type T, as required by ceres to do the automatic differentiation. The float cast cannot be done because a ceres Jet is not a scalar: it carries the value together with a vector of partial derivatives.
You have two options:
1) Use numerical differentiation (a sketch follows below): see http://ceres-solver.org/nnls_tutorial.html#numeric-derivatives
2) Use a templated library (e.g. Eigen, http://eigen.tuxfamily.org/index.php?title=Main_Page) to rewrite the required homography decomposition.
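For option 1, a hedged sketch of what the numeric-diff variant could look like (ErrorFuncNumeric is a hypothetical rename; calcResidual is the question's own helper, declared here only to keep the sketch self-contained):

#include <ceres/ceres.h>
#include <opencv2/calib3d.hpp>
#include <vector>

double calcResidual(const std::vector<cv::Mat>& norm); // as in the question

// With numeric differentiation the functor works on plain doubles,
// so calling OpenCV inside operator() is fine.
struct ErrorFuncNumeric
{
    bool operator()(const double* const Kparams, double* residual) const
    {
        cv::Mat K = cv::Mat::eye(3, 3, CV_64F);
        K.at<double>(0, 0) = Kparams[0]; // f
        K.at<double>(0, 2) = Kparams[1]; // u
        K.at<double>(1, 1) = Kparams[0]; // f
        K.at<double>(1, 2) = Kparams[2]; // v
        cv::Mat Hdot = K.inv() * H * K;
        std::vector<cv::Mat> rot, tr, norm;
        cv::decomposeHomographyMat(Hdot, K, rot, tr, norm);
        residual[0] = calcResidual(norm);
        return true;
    }
    cv::Mat H;
};

// Registration with ceres (1 residual, 3 parameters):
// ceres::CostFunction* cost =
//     new ceres::NumericDiffCostFunction<ErrorFuncNumeric, ceres::CENTRAL, 1, 3>(
//         new ErrorFuncNumeric{H});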