Eigen complex matrix-vector multiplication - c++

I have these Eigen complex matrices:
Eigen::MatrixXcd matrix;
Eigen::VectorXcd rhs;
Eigen::VectorXcd solution;
I can successfully load values and compute the solution, but if I try:
rhs = matrix*solution;
I get compiler errors related to overloaded "+=" operator and double/complex conversion in Eigen files GeneralProduct.h and CxRealAbs.h
Similar issues with trying to compute a residual.
Is there an easy fix?
Help??
thanks
Kevin

According to their documentation, Eigen checks the validity of operations:
Validity of operations
Eigen checks the validity of the operations that you perform. When possible, it checks them at compile-time, producing compilation errors. These error messages can be long and ugly, but Eigen writes the important message in UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example:
Matrix3f m;
Vector4f v;
v = m*v; // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
Of course, in many cases, for example when checking dynamic sizes, the check cannot be performed at compile time. Eigen then uses runtime assertions. This means that the program will abort with an error message when executing an illegal operation if it is run in "debug mode", and it will probably crash if assertions are turned off.
MatrixXf m(3,3);
VectorXf v(4);
v = m * v; // Run-time assertion failure here: "invalid matrix product"
Your RHS is a vector, and you are trying to assign it the result of a matrix-vector product. Even from a mathematical perspective, you should ask yourself whether this is a valid type of operation.
Consider that M and N are matrices and A and B are vectors; what are the results of the following operations?
M = AB // outer product, not a dot product
M = AM, MA, BM, MB // yields a transformed matrix using the original matrix
M = AN, NA, BN, NB // yields a transformed matrix using a supplied matrix
A = AA, AB, BA // outer product, not the dot product
A = AM, MA, AN, NA, BM, MB, BN, NB // yields what?
So within Eigen at compile time, it is checking the validity of your operators, and it is telling you that += is not defined for these operands and that you have not provided an overloaded version of one. It simply doesn't know how to perform the intended operation. It shouldn't matter what the underlying type of the matrix and vector classes is; it pertains to the fact that the operators you are trying to use are not defined somewhere.
Edit
Here is a link that describes matrix-vector multiplication. Intuitively we would assume that this yields a vector, and that is implicitly understood; but from a matrix perspective the vector can be a row or a column, so the returned vector could be either a 1xM or an Mx1 matrix.
Here is a link that shows the different interpretations, even though they return the same result; this can be found in section 3.2.3 of the documentation: Matrix-Vector Product
The number of operations differs between the two interpretations, although they produce the same final result. Without explicitly stating whether it should be a row or a column product, this could lead to some ambiguity. I don't know how Eigen determines which method it uses; I would assume it chooses the one with the fewest operations.
This isn't the main issue of your problem, though. At the end of the day, the compiler is still generating an error message that an overloaded operator is not defined. I don't know where this is coming from within your code; it could be within the library itself when it tries to perform the operations on its version of the complex types. Without being able to compile and run your full source code, I cannot easily determine what exactly is generating this compiler error, and I also don't know which compiler you are using.

The problem was mine. I had overloaded operator* for std::complex * real because some earlier versions of the standard library were incomplete. The first error listed was inside Eigen, which led me astray.
GeneralProduct.h(287): no viable overloaded '+='
CxRealAbs.h(111): cannot convert 'const Eigen::Map<Eigen::Matrix<std::complex<double>, -1, 1, 0, -1, 1>, 1, Eigen::Stride<0, 0> >' to 'double' without a conversion
Removing my overload, it compiles and runs OK. No problem with Eigen dynamic matrices.
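For anyone hitting the same thing, here is a hypothetical reconstruction of the kind of overload that causes the clash (the exact original isn't shown above); <complex> already supplies complex<double> * double, so an extra candidate like this can confuse overload resolution deep inside Eigen's product expressions:
#include <complex>

// Hypothetical example of the problematic kind of global overload.
// The standard library already provides this operator, so a second
// candidate can break overload resolution inside Eigen's products
// (e.g. in GeneralProduct.h).
inline std::complex<double> operator*(const std::complex<double>& lhs, double rhs)
{
    return { lhs.real() * rhs, lhs.imag() * rhs };
}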
thanks for the replies.
Kevin

Related

Eigen equivalent to Octave/MATLAB mldivide for rectangular matrices

I'm using Eigen v3.2.7.
I have a medium-sized rectangular matrix X (170x17) and a column vector Y (170x1), and I'm trying to solve X\Y using Eigen. Octave solves this problem fine using X\Y, but Eigen returns incorrect values for these matrices (but not for smaller ones); however, I suspect it's how I'm using Eigen rather than Eigen itself.
auto X = Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>{170, 17};
auto Y = Eigen::Matrix<T, Eigen::Dynamic, 1>{170};
// Assign their values...
const auto theta = X.colPivHouseholderQr().solve(Y).eval(); // Wrong!
According to the Eigen documentation, the ColPivHouseholderQR solver is for general matrices and pretty robust, but to make sure I've also tried the FullPivHouseholderQR. The results were identical.
Is there some special magic that Octave's mldivide does that I need to implement manually for Eigen?
Update
This spreadsheet has the two input matrices, plus Octave's and my result matrices.
Replacing auto doesn't make a difference, nor would I expect it to, because construction cannot be a lazy operation. I have to call .eval() on the solve result because the next thing I do with the result matrix is access the raw data (using .data()) of tail and head operations. The expression-template versions of those block operations do not have a .data() member, so I have to force evaluation beforehand; in other words, theta is already the concrete type, not an expression template.
The result for (X*theta-Y).norm()/Y.norm() is:
2.5365e-007
And the result for (X.transpose()*X*theta-X.transpose()*Y).norm() / (X.transpose()*Y).norm() is:
2.80096e-007
As I'm currently using single precision float for my basic numerical type, that's pretty much zero for both.
According to your verifications, the solution you get is perfectly fine. If you want more accuracy, then use double-precision floating-point numbers. Note that MATLAB/Octave use double precision by default.
Moreover, it may well be that your problem is not full rank, in which case it admits an infinite number of solutions. ColPivHouseholderQR picks one, somewhat arbitrarily. On the other hand, mldivide will pick the minimal-norm one, which you can also obtain with Eigen::BDCSVD (Eigen 3.3) or the slower Eigen::JacobiSVD.
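As a sketch of that last suggestion (assuming Eigen 3.3 or later for BDCSVD; the fill-in values here are placeholders):
#include <Eigen/Dense>

int main()
{
    Eigen::MatrixXd X(170, 17);
    Eigen::VectorXd Y(170);
    X.setRandom();  // stand-in for the real data
    Y.setRandom();

    // Minimum-norm least-squares solution, matching mldivide's
    // choice for rank-deficient problems.
    Eigen::VectorXd theta =
        X.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(Y);

    // Relative residual, as in the question's verification.
    double rel = (X * theta - Y).norm() / Y.norm();
    return rel < 1e-4 ? 0 : 1;
}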

Unit Test a 3X3 or 4X4 Determinant

Are there any mathematical properties, similar to A * A^-1 = I, that can be used to test the calculation of the determinant in a unit-test-like format?
Calculate the determinant of a known array (or arrays) manually and compare your result to that number.
Try arrays of different sizes, arrangements, etc.
By the way, I would NOT use A * A^-1 = I as a definitive test of inverse or multiplication. Unit tests typically test one thing against a specific, constant result. Testing two offsetting operations could lead to false positives - e.g. your "multiply" code could just return the constant identity array and your test would not fail.
You may want to check http://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant.
Some of those are fairly straightforward to check in a unit test (e.g. det(I) = 1, det(A^T) = det(A), and det(cA) = c^n det(A) for an n x n matrix), either directly or used to derive specific 'corner cases'.
Other properties depend on the correct implementation of other matrix manipulations, which can make them slightly less interesting for unit-testing purposes, since you can't as easily pinpoint a test failure.
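A minimal sketch of such property checks, using Eigen's determinant() as a stand-in for the routine under test (swap in your own implementation):
#include <Eigen/Dense>
#include <cassert>
#include <cmath>

int main()
{
    const int n = 3;
    const double c = 2.5;
    const double tol = 1e-9;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);

    // det(I) = 1
    assert(std::abs(Eigen::MatrixXd::Identity(n, n).determinant() - 1.0) < tol);
    // det(A^T) = det(A)
    assert(std::abs(A.transpose().determinant() - A.determinant()) < tol);
    // det(cA) = c^n det(A)
    assert(std::abs((c * A).determinant() - std::pow(c, n) * A.determinant()) < tol);
    return 0;
}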
You could get some test cases using Sylvester's determinant theorem. In the notation of the link, if you take A to be a column vector and B to be a row vector, then the right-hand side is the determinant of a scalar, which is just that scalar.
More explicitly, I'm saying that if A and B are vectors and M is the matrix
M[i,j] = I[i,j] + A[i]*B[j]
(I being the identity matrix), then
det(M) = 1 + Sum_i A[i]*B[i]
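In code, that gives a cheap test case with a closed-form expected value (again with Eigen's determinant() standing in for the implementation under test):
#include <Eigen/Dense>
#include <cassert>
#include <cmath>

int main()
{
    Eigen::Vector3d a(1.0, 2.0, 3.0);
    Eigen::Vector3d b(4.0, 5.0, 6.0);

    // M = I + a*b^T, so det(M) = 1 + Sum_i a[i]*b[i] = 1 + a.dot(b)
    Eigen::Matrix3d M = Eigen::Matrix3d::Identity() + a * b.transpose();
    double expected = 1.0 + a.dot(b);

    assert(std::abs(M.determinant() - expected) < 1e-9);
    return 0;
}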

OpenCV error due possibly to different "step"

I have converted some code from the old OpenCV API to the C++ version, and I get an error at matrix multiplication.
OpenCV Error: Sizes of input arguments do not match (The operation is neither
'array op array' (where arrays have the same size and the same number of channels),
nor 'array op scalar', nor 'scalar op array')
On the web, this error seems to be associated with having a different number of channels - mine are all 1.
What I did find different, though, is a "step": on one it is 24, on the other it is 32.
Where does this step come from?
I created both input matrices using
cv::Mat YYY(3, 4, CV_64FC1); // step 32
cv::Mat XXX(3, 3, CV_64FC1); // step 24
Yet they seem to have different steps?
Could this be the culprit for the error in cv::multiply(XXX, YYY, DDD)?
Is it not possible to perform operations (like a mask) between different types?
Thank you
cv::multiply() performs element-wise multiplication of two matrices. As the error states, your matrices are not the same size.
You may be looking for matrix multiplication, which is accomplished via the * operator. Thus
cv::Mat DDD = XXX * YYY;
will compile and run correctly.
For the record, this has nothing (directly) to do with the step field, which for your matrices is the number of columns times sizeof(double), since they are of type CV_64FC1: 4 x 8 = 32 for YYY and 3 x 8 = 24 for XXX. Things get more complicated if the matrices are not continuous, but that is not the case for you.
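A short sketch of the distinction, reusing the matrix names from the question:
#include <opencv2/core.hpp>

int main()
{
    cv::Mat XXX(3, 3, CV_64FC1, cv::Scalar(1.0));
    cv::Mat YYY(3, 4, CV_64FC1, cv::Scalar(2.0));

    // Matrix product: (3x3) * (3x4) -> (3x4); inner dimensions must agree.
    cv::Mat DDD = XXX * YYY;

    // Element-wise product: both operands must have identical sizes.
    cv::Mat EEE;
    cv::multiply(XXX, XXX, EEE);      // OK: both are 3x3
    // cv::multiply(XXX, YYY, EEE);   // would fail: 3x3 vs 3x4

    return 0;
}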

C++ - How to find the rank of a matrix

I'm having difficulty coming up with the method by which a program can find the rank of a matrix. In particular, I don't fully understand how you can make sure the program would catch all cases of linear combinations resulting in dependencies.
The general idea of how to solve this is what I'm interested in. However, if you want to take the answer a step further, I'm specifically looking for the solution in regard to square matrices only. Also, the code would be in C++.
Thanks for your time!
General process:
matrix = 'your matrix you want to find rank of'
m2 = rref(matrix)
rank = number_non_zero_rows(m2)
where rref(matrix) is a function that does your run-of-the-mill Gaussian elimination, and
number_non_zero_rows(m2) is a function that counts the rows with non-zero entries.
Your concern about all cases of linear combinations resulting in dependencies is taken care of by the rref (Gaussian elimination) step. Incidentally, this works no matter what the dimensions of the matrix are; see the sketch below.
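A sketch of that process in plain C++ (Gaussian elimination with partial pivoting, plus a small epsilon to absorb floating-point noise; the function name is just illustrative):
#include <cmath>
#include <vector>

// Rank via reduction to row echelon form. The matrix is taken
// by value and reduced in place.
int matrixRank(std::vector<std::vector<double>> a, double eps = 1e-9)
{
    const int rows = static_cast<int>(a.size());
    if (rows == 0) return 0;
    const int cols = static_cast<int>(a[0].size());

    int rank = 0;
    for (int col = 0; col < cols && rank < rows; ++col) {
        // Partial pivoting: pick the largest remaining entry in this column.
        int pivot = rank;
        for (int r = rank + 1; r < rows; ++r)
            if (std::abs(a[r][col]) > std::abs(a[pivot][col]))
                pivot = r;
        if (std::abs(a[pivot][col]) < eps)
            continue;  // column is numerically zero from this row down
        std::swap(a[pivot], a[rank]);

        // Eliminate everything below the pivot.
        for (int r = rank + 1; r < rows; ++r) {
            const double f = a[r][col] / a[rank][col];
            for (int c = col; c < cols; ++c)
                a[r][c] -= f * a[rank][c];
        }
        ++rank;  // one more non-zero row in the echelon form
    }
    return rank;
}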

CUBLAS - matrix addition.. how?

I am trying to use CUBLAS to sum two big matrices of unknown size. I need fully optimized code (if possible), so I chose not to rewrite the matrix-addition code (simple) but to use CUBLAS, in particular the cublasSgemm function, which allows summing A and C (if B is a unit matrix): C = alpha*op(A)*op(B) + beta*C
The problem is: C and C++ store matrices in row-major format, while cublasSgemm is intended (for Fortran compatibility) to work in column-major format. You can specify whether A and B are to be transposed first, but you can NOT indicate that C should be transposed. So I'm unable to complete my matrix addition..
I can't transpose the C matrix by myself because the matrix is something like 20000x20000 maximum size.
Any idea on how to solve please?
cublas<t>geam (e.g. cublasSgeam) has been added in CUBLAS 5.0.
It computes the weighted sum of two optionally transposed matrices.
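A sketch of the call, assuming a valid handle and device pointers dA, dB, dC to m x n matrices (for a plain addition the row- vs column-major question doesn't matter, as long as all three matrices use the same layout):
#include <cublas_v2.h>

// C = 1*A + 1*B for m x n matrices already resident on the device.
void addMatrices(cublasHandle_t handle, int m, int n,
                 const float* dA, const float* dB, float* dC)
{
    const float alpha = 1.0f;
    const float beta  = 1.0f;
    cublasSgeam(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n,
                &alpha, dA, m,
                &beta,  dB, m,
                dC, m);
}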
If you're just adding the matrices, it doesn't actually matter. You give it alpha, Aij, beta, and Cij. It thinks you're giving it alpha, Aji, beta, and Cji, and gives you what it thinks is Cji = beta Cji + alpha Aji. But that's the correct Cij as far as you're concerned. My worry is when you start going to things which do matter -- like matrix products. There, there's likely no working around it.
But more to the point, you don't want to be using GEMM to do matrix addition -- you're doing a completely pointless matrix multiplication (which takes ~20,000^3 operations and many passes through memory) for an operation which should only require ~20,000^2 operations and a single pass! Treat the matrices as 20,000^2-long vectors and use saxpy.
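A sketch of that approach with the cuBLAS v2 API (the function name is illustrative; note that SAXPY computes y = alpha*x + y in place, so dC must already hold the second operand):
#include <cublas_v2.h>

// In-place matrix addition dC += dA, treating both m x n matrices
// as contiguous vectors of length m*n: one pass through memory.
void addInPlace(cublasHandle_t handle, int m, int n,
                const float* dA, float* dC)
{
    const float alpha = 1.0f;
    cublasSaxpy(handle, m * n, &alpha, dA, 1, dC, 1);
}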
Matrix multiplication is memory-bandwidth intensive, so there is a huge (factors of 10x or 100x) difference in performance between coding it yourself and using a tuned version. Ideally, you'd change the structures in your code to match the library. If you can't, in this case you can manage just by using linear-algebra identities. The C-vs-Fortran ordering means that when you pass in A, CUBLAS "sees" A^T (A transpose). That is fine; we can work around it. If what you want is C = A.B, pass in the matrices in the opposite order, B.A. Then the library sees (B^T . A^T) and calculates C^T = (A.B)^T; when it passes back C^T, you get (in your ordering) C. Test it and see.
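For the product case, the swap looks like this (a sketch; A is m x k, B is k x n, all buffers row-major on the device, and the function name is illustrative):
#include <cublas_v2.h>

// Row-major C = A*B without any explicit transposes: ask cuBLAS for
// B*A in its column-major view. What it writes out is C^T in
// column-major order, which is exactly C in row-major order.
void gemmRowMajor(cublasHandle_t handle, int m, int n, int k,
                  const float* dA, const float* dB, float* dC)
{
    const float alpha = 1.0f;
    const float beta  = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, m, k,
                &alpha, dB, n,
                dA, k,
                &beta, dC, n);
}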