The following is my code:
Eigen::Matrix3d first_rotation = firstPoint.q.matrix();
Eigen::Vector3d first_trans = firstPoint.t;
for (auto &iter : in_points)
{
    iter.second.t = first_rotation / (iter.second.t - first_trans).array();
}
However, VS Code says "no operator / matches the operands" for the division.
How can I divide a vector by a matrix?
In Matlab, the line was t2 = R1 \ (R2 - t1);
Matlab defines the / and \ operators, when applied to matrices, as solving linear systems of equations, as you can read in their operator documentation. In particular,
x = A\B solves the system of linear equations A*x = B
Eigen doesn't do this, and I don't think most other languages or libraries do either. The main reason is that there are multiple ways to decompose a matrix for solving, each with its own pros and cons; you can read up on them in the Eigen documentation.
In your case, you can save time by reusing the decomposition multiple times. Something like this:
Eigen::PartialPivLU<Eigen::Matrix3d> first_rotation =
    firstPoint.q.matrix().partialPivLu();
for (auto &iter : in_points)
    iter.second.t = first_rotation.solve((iter.second.t - first_trans).eval());
Note: I picked LU over e.g. householderQr because that is what Matlab does for general square matrices. If you know more about your input matrix, e.g. that it is symmetric or not invertible, try something else.
Note 2: I'm not sure it is necessary in this case, but I added a call to eval() so that the left side of the assignment is not an alias of anything on the right side. With dynamically sized vectors this would introduce an extra memory allocation, but here it is fine.
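One more thought, under an assumption the question does not state: first_rotation comes from a quaternion, so it is presumably a pure rotation, i.e. an orthogonal matrix. If that holds, its inverse is simply its transpose and no decomposition is needed at all. A minimal sketch:
Eigen::Matrix3d rot_inv = firstPoint.q.matrix().transpose(); // inverse of an orthogonal matrix is its transpose
for (auto &iter : in_points)
    iter.second.t = rot_inv * (iter.second.t - first_trans);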
Consider the following code:
#include <Eigen/Core>
using Matrix = Eigen::Matrix<float, 2, 2>;
Matrix func1(const Matrix& mat) { return mat + 0.5; }
Matrix func2(const Matrix& mat) { return mat / 0.5; }
func1() does not compile; you need to replace mat with mat.array() in the function body to fix it ([1]). However, func2() does compile as-is.
My question has to do with why the API is designed this way. Why are addition-with-scalar and division-by-scalar treated differently? What problems would arise if the following method were added to the Matrix class, and why haven't those problems already arisen for the operator/ method?:
auto operator+(Scalar s) const { return this->array() + s; }
From a mathematics perspective, a scalar added to a matrix "should" be the same as adding the scalar only to the diagonal. That is, a math text would usually use M + 0.5 to mean M + 0.5I, for I the identity matrix. There are many ways to justify this. For example, you can appeal to the analogy I = 1, or you can appeal to the desire to say Mx + 0.5x = (M + 0.5)x whenever x is a vector, etc.
Alternatively, you could take M + 0.5 to add 0.5 to every element. This is what you think is right if you don't think of matrices from a "linear algebra mindset" and treat them as just collections (arrays) of numbers, where it is natural to just "broadcast" scalar operations.
Since there are multiple "obvious" ways to handle + between a scalar and a matrix, where someone expecting one may be blindsided by the other, it is a good idea to do as Eigen does and ban such expressions. You are then forced to signify what you want in a less ambiguous way.
The natural definition of / from an algebra perspective coincides with the array perspective, so there is no reason to ban it.
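To make the two readings concrete, here is how each one is spelled out explicitly in Eigen (a small sketch reusing the Matrix typedef from above; the function names are mine):
Matrix addToDiagonal(const Matrix& mat) {
    // "linear algebra" reading: M + 0.5 means M + 0.5*I
    return mat + 0.5f * Matrix::Identity();
}
Matrix addToEveryElement(const Matrix& mat) {
    // "array" reading: broadcast the scalar to every element
    return (mat.array() + 0.5f).matrix();
}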
I'm getting unexpected and invalid results from a linear solve computation in Eigen. I have a vector x and a matrix P.
In MATLAB notation, I'm doing this:
x'/P*x
(similar to x'*inv(P)*x, but no direct inversion)
which is a quadratic form and should yield a positive result. The matrix P is positive definite and not ill-conditioned. In MATLAB the result is positive, though large.
In C++ Eigen, my slash operator is like this:
x/P is implemented as:
(P.transpose()).colPivHouseholderQr().solve(x.transpose).transpose()
This gives the correct answer in general, but appears to be more fragile than MATLAB. In the case I'm looking at now, it gives a very large negative number for x'/P*x, as though there were overflow and wraparound or some such.
Is there a way to defragilize this?
EDIT: Doing some experimentation, I see that Eigen also fails to invert this matrix, whereas MATLAB has no problem. The matrix condition number is 3e7, which is not bad. Eigen gives the same (wrong) answer with the linear solve and with a simple x.transpose() * P.inverse() * x. What's going on?
The following is wrong (besides the () you missed in your question):
(P.transpose()).colPivHouseholderQr().solve(x.transpose()).transpose();
because
x'*inv(P) = ((x'*inv(P))')'
          = (inv(P)'*(x')')'
          = (inv(P)*x)'    % Note: P = P'
Your expression should actually raise an assertion in Eigen when built without -DNDEBUG, since x.transpose() has only one row but P has (usually) more.
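As an aside, once you use the symmetry P = P', the quadratic form can be computed directly, with no transposing at all. A one-line sketch using the same decomposition you already chose:
// x'*inv(P)*x = x . (P \ x), valid because P is symmetric
double quadform = x.dot(P.colPivHouseholderQr().solve(x));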
To compute x'*inv(P)*x for symmetric positive definite P, I suggest using the Cholesky decomposition L*L'=P which gives you x'*inv(P)*x = (L\x)'*(L\x) which in Eigen is:
#include <Eigen/Cholesky> // for Eigen::LLT

typedef Eigen::LLT<Eigen::MatrixXd> Chol;
Chol chol(P); // can be reused if x changes but P stays the same
// Handle a non-positive-definite covariance somehow:
if (chol.info() != Eigen::Success) throw "decomposition failed!";
const Chol::Traits::MatrixL& L = chol.matrixL();
double quadform = L.solve(x).squaredNorm();
As for the matrix that Eigen failed to invert (but Matlab inverted): it would be interesting to see it.
In Matlab, if I write
A = B*inv(C)
(with A, B and C being square matrices), I get a warning that the matrix inversion should be replaced with a matrix "right division" (which is numerically more stable and accurate), like
A = B/C
In my Eigen C++ project I have the following code:
Eigen::MatrixXd A = B * C.inverse();
and I was wondering if there is an equivalent replacement for taking the matrix inverse in Eigen, analogous to the Matlab one mentioned above?
I know that matrix "left division" can be expressed by solving a system of equations for expressions like
A = inv(C)*B
but what about
A = B*inv(C)
in Eigen?
At the moment, the most efficient way to do this is to rewrite your equation as
A^T = inv(C^T) * B^T
A = (inv(C^T) * B^T)^T
which can be implemented in Eigen as
SomeDecomposition decompC(C); // decompose C with a suiting decomposition
Eigen::MatrixXd A = decompC.transpose().solve(B.transpose()).transpose();
There were/are plans that, eventually, one will be able to write
A = B * decompC.inverse();
and Eigen will evaluate this in the most efficient way.
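In the meantime, here is a concrete instance of the sketch above (SomeDecomposition is a placeholder; I pick a partial-pivoting LU, which assumes C is invertible, and decompC.transpose() requires Eigen 3.3 or newer):
Eigen::PartialPivLU<Eigen::MatrixXd> decompC(C);
// A = B/C = B*inv(C), computed by solving the transposed system C^T * A^T = B^T
Eigen::MatrixXd A = decompC.transpose().solve(B.transpose()).transpose();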
I'm new to Eigen and I'm working on a sparse LU problem.
I found that if I create a vector b(n), Eigen can compute x(n) for the equation Ax = b.
Questions:
How do I display L and U, the factorization result of the original matrix A?
How do I insert non-zeros in Eigen? Right now I am just testing with small sparse matrices, so I insert non-zeros one by one; but if I have a large-scale matrix, how can I input the matrix into my program?
I realize that this question was asked a long time ago. Apparently, according to the Eigen documentation:
an expression of the matrix L, internally stored as supernodes. The only operation available with this expression is the triangular solve
So there is no way to convert this to an actual sparse matrix in order to display it. Eigen::FullPivLU performs a dense decomposition and is of no use to us here: with a large sparse matrix, we would quickly run out of memory converting it to dense, and the time required to compute the factorization would increase by several orders of magnitude.
An alternative solution is to use the CSparse library from SuiteSparse, like so:
extern "C" { // we are in C++ now, since you are using Eigen
#include <csparse/cs.h>
}
const cs *p_matrix = ...; // perhaps possible to use Eigen::internal::viewAsCholmod()
css *p_symbolic_decomposition;
csn *p_factor;
p_symbolic_decomposition = cs_sqr(2, p_matrix, 0); // 1 = ordering A + AT, 2 = ATA
p_factor = cs_lu(p_matrix, p_symbolic_decomposition, 1.0); // tol = 1.0 for ATA ordering, or use A + AT with a small tol if the matrix has a mostly symmetric nonzero pattern and large enough entries on its diagonal
// calculate ordering, symbolic decomposition and numerical decomposition
cs *L = p_factor->L, *U = p_factor->U;
// there they are (perhaps can use Eigen::internal::viewAsEigen())
cs_sfree(p_symbolic_decomposition); cs_nfree(p_factor);
// clean up (deletes the L and U matrices)
Note that although this does not use explicit vectorization as some Eigen functions do, it is still fairly fast. CSparse is also very compact: just a single header and about thirty .c files with no external dependencies, so it is easy to incorporate into any C++ project. There is no need to include all of SuiteSparse.
If you apply Eigen::FullPivLU::matrixLU() to the original matrix, you receive the combined LU decomposition matrix. To display L and U separately, you can use the triangularView<mode>() method; you can find a good example of it in the Eigen documentation. How you insert nonzeros into a matrix depends on the values you want to put in. Eigen has convenient syntax, so you can easily insert values in a loop:
for (int i = 0; i < size; i++)
{
    for (int j = size - 1; j > someNumber; j--)
    {
        matrix(i, j) = yourClass.getNextNumber();
    }
}
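For the large-scale case, inserting coefficients one by one into an Eigen::SparseMatrix is slow. The usual pattern is to collect the entries as triplets and build the matrix in one call; a self-contained sketch with a made-up tridiagonal fill:
#include <Eigen/Sparse>
#include <vector>

const int n = 1000;
std::vector<Eigen::Triplet<double>> entries;
entries.reserve(3 * n); // reserve roughly the number of nonzeros
for (int i = 0; i < n; ++i) {
    entries.emplace_back(i, i, 2.0);                     // diagonal
    if (i > 0)     entries.emplace_back(i, i - 1, -1.0); // subdiagonal
    if (i < n - 1) entries.emplace_back(i, i + 1, -1.0); // superdiagonal
}
Eigen::SparseMatrix<double> A(n, n);
A.setFromTriplets(entries.begin(), entries.end()); // duplicate entries are summed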
I am attempting to implement a complex-valued matrix equation in OpenCV. I've prototyped in MATLAB which works fine. Starting off with the equation (exact MATLAB implementation):
kernel = exp(1i .* k .* Circ3D) .* z ./ (1i .* lambda .* Circ3D .* Circ3D)
In which
1i = the imaginary unit
k = constant (float)
Circ3D = real-valued matrix of known size
lambda = constant (float)
.* = element-wise multiplication
./ = element-wise division
The result is a complex-valued matrix. I succeeded in generating the necessary Circ3D matrix as CV_32F; however, the multiplication by the complex number i is giving me trouble. From the OpenCV documentation I understand that a complex matrix is simply a two-channel matrix (CV_32FC2).
The real trouble comes from how to define i. I've tried several options, among which defining i as
cv::Vec2d complex = cv::Vec2d(0,1);
and then multiplying by the matrix
kernel = complex * Circ3D
But this doesn't work (although I didn't expect it to). I suspect I need to do something with std::complex but I have no idea what (http://docs.opencv.org/modules/core/doc/basic_structures.html).
Thanks in advance for any help.
Edit: Just after writing this post I did make some progress, by defining i as follows:
std::complex<float> complex(0,1);
I am then able to assign complex values as follows:
kernel.at<std::complex<float>>(i,j) = std::exp(complex * k * Circ3D.at<float>(i,j)) * z /
    (complex * lambda * std::pow(Circ3D.at<float>(i,j), 2.0f));
However, this works in a loop, which makes the procedure incredibly slow. Any way to do it in one go?
OpenCV treats std::complex just like a simple pair of numbers (see the example in the documentation); no special rules for arithmetic operations are applied. You overcome this by multiplying std::complex values directly. So basically it is simple: you either choose automatic complex arithmetic (as you are doing now), or automatic vectorization (when using OpenCV functions on matrices).
I think in your case you should carry out all the complex arithmetic yourself: store a matrix of complex values C{a*i + b} as two matrices A{a} and B{b}, and implement the exponentiation yourself. Multiplication by scalars and addition shouldn't be a problem.
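Here is a sketch of that idea for your kernel (makeKernel is just a name I picked). With exp(i*theta) = cos(theta) + i*sin(theta) and 1/i = -i, the kernel exp(i*k*r)*z/(i*lambda*r^2) becomes mag*(sin(theta) - i*cos(theta)), where theta = k*r and mag = z/(lambda*r^2), so cv::polarToCart can produce both planes in one vectorized call:
#include <opencv2/core.hpp>

cv::Mat makeKernel(const cv::Mat& Circ3D, float k, float lambda, float z)
{
    cv::Mat theta = k * Circ3D; // phase theta = k*r
    cv::Mat mag;
    cv::divide(z / lambda, Circ3D.mul(Circ3D), mag); // magnitude z/(lambda*r^2)
    cv::Mat x, y; // x = mag*cos(theta), y = mag*sin(theta)
    cv::polarToCart(mag, theta, x, y);
    cv::Mat neg_x = -x;
    cv::Mat planes[] = { y, neg_x }; // real = mag*sin(theta), imag = -mag*cos(theta)
    cv::Mat kernel;
    cv::merge(planes, 2, kernel); // CV_32FC2 complex matrix
    return kernel;
}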
There is also the function mulSpectrums, which lets you do element-wise multiplication of complex matrices. So if K is your kernel matrix and I is some complex matrix, that is, CV_32FC2 (float, two channels), you can do the following to compute the element-wise multiplication:
// Make K a complex (two-channel) matrix with a zero imaginary plane
cv::Mat Ktmp[] = {cv::Mat_<float>(K), cv::Mat::zeros(K.size(), CV_32FC1)};
cv::Mat Kc;
cv::merge(Ktmp, 2, Kc);
// Element-wise complex multiplication; the result overwrites I
cv::mulSpectrums(Kc, I, I, 0);