OpenCV Assertion failed on Matrix multiplication - c++

I'm multiplying two matrices with OpenCV: A is N×M and B is M×P.
According to the documentation:
All the arrays must have the same type and the same size (or ROI
size). For types that have limited range this operation is saturating.
However, by the theory of matrix multiplication:
Assume two matrices are to be multiplied (the generalization to any
number is discussed below). If A is an n×m matrix and B is an m×p
matrix, their product AB is an n×p matrix, defined only if the number
of columns m in A equals the number of rows m in B.
shouldn't this code be working?
- (CvMat *)multMatrix:(CvMat *)AMatrix BMatrix:(CvMat *)BMatrix
{
    CvMat *result = cvCreateMat(AMatrix->rows, BMatrix->cols, kMatrixType);
    cvMul(AMatrix, BMatrix, result, 1.0);
    return result;
}
I get the following exception:
OpenCV Error: Assertion failed (src1.size == dst.size &&
src1.channels() == dst.channels()) in cvMul, file
/Users/Aziz/Documents/Projects/opencv_sources/trunk/modules/core/src/arithm.cpp,
line 2728
kMatrixType is CV_32F, A is 6x234, B is 234x5 and result is 6x5...
Am I doing something wrong? Or is this an OpenCV restriction on matrix multiplication?

You are doing element-wise multiplication with cvMul.
You should look at cvMatMul for doing proper matrix multiplication.
http://opencv.willowgarage.com/wiki/Matrix_operations

Related

Matrix inverse calculation of upper triangular matrix gives error for large matrix dimensions

I have a recursive function to calculate the inverse of an upper triangular matrix. I have divided the matrix into Top, Bottom and Corner sections and then followed the methodology as laid down in https://math.stackexchange.com/a/2333418. Here is a pseudocode form:
//A diagram of the matrix structure
Matrix = [Top   Corner]
         [0     Bottom]

Matrix multiply_matrix(Matrix A, Matrix B){
    // Simple code to multiply two matrices and return a Matrix
}
Matrix simple_inverse(Matrix A){
    // Simple code to get the inverse of a 2x2 Matrix
}
Matrix inverse_matrix(Matrix A){
    // Creating an empty A_inv matrix of dimension equal to A
    Matrix A_inv;
    if(A.dimension == 2){
        A_inv = simple_inverse(A)
    }
    else{
        Top_inv = inverse_matrix(Top);
        // (Code to check Top*Top_inv == Identity Matrix)
        Bottom_inv = inverse_matrix(Bottom);
        // (Code to check Bottom*Bottom_inv == Identity Matrix)
        Corner_inv = multiply_matrix(Top_inv, Corner);
        Corner_inv = multiply_matrix(Corner_inv, Bottom_inv);
        Corner_inv = negate(Corner_inv); // Just a function for negation of the matrix elements
        // Code to copy Top_inv, Bottom_inv and Corner_inv to A_inv
        ...
    }
    return A_inv;
}
int main(){
    Matrix A = {An upper triangular matrix with random integers between 1 and 9};
    A_inv = inverse_matrix(A);
    test_matrix = multiply_matrix(A, A_inv);
    // (Code to raise an error if test_matrix != Identity matrix)
}
For simplicity I have implemented the code such that only power of 2 dimension matrices are supported.
My problem is that I have tested this code for matrix dimensions of 2, 4, 8, 16, 32 and 64, and all of these pass the assertion checks shown in the code.
But for a matrix dimension of 128 the assertion in main() fails. When I check, I observe that test_matrix is not the identity matrix: some off-diagonal elements are not equal to 0.
I am wondering what could be the reason for this. Some notes:
- I am using C++ std::vector<std::vector<double>> for the matrix representation.
- Since the data type is double, the off-diagonal elements of test_matrix for the cases 2, 4, 8, ..., 64 are nonzero but very small, e.g. -9.58122e-14.
- All my matrices at every recursion stage are square.
- I am checking that Top*Top_inv == Identity and Bottom*Bottom_inv == Identity.
- Finally, for dimensions 2, 4, ..., 64 I generated random numbers (between 1 and 10) to create my upper triangular matrix. Since these cases passed, I assume my mathematical implementation is correct.
I feel like there is some aspect of the C++ double type that I am unaware of that could be causing the error. Otherwise the sudden failure from 64 to 128 doesn't make sense.
Could you please elaborate on how the matrix == identity operation is implemented?
My guess is that the problem comes down to floating-point comparison.
Matrix inversion can be O(n^3) in the worst case. This means that as the matrix size increases, the amount of computation involved also increases. Real numbers cannot be represented exactly even with 64-bit floating point; they are always an approximation.
For operations such as matrix inversion this can cause numerical error propagation, due to the loss of precision in the accumulated multiply-add operations.
This has already been discussed on Stack Overflow: How should I do floating point comparison?
EDIT: Another thing to consider is whether the full matrix is actually invertible.
Perhaps the Top and/or Bottom matrices are invertible, but the full matrix (when composing with the Corner matrix) is not.
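As a sketch of the kind of comparison the linked discussion recommends, here is a tolerance-based identity check in plain C++ (the helper name is made up). The tolerance grows with the dimension, since each entry of A*A_inv accumulates rounding error over n multiply-adds:

```cpp
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Approximate check for M == I: instead of exact equality, allow a small
// tolerance that scales with the dimension n, because every entry of the
// product A * A_inv is the result of n floating-point multiply-adds.
bool is_identity(const Matrix& m, double base_eps = 1e-9) {
    const std::size_t n = m.size();
    const double tol = base_eps * static_cast<double>(n);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            const double expected = (i == j) ? 1.0 : 0.0;
            if (std::fabs(m[i][j] - expected) > tol) return false;
        }
    return true;
}
```

An off-diagonal entry like -9.58122e-14 passes this check, while an exact == comparison would (correctly but unhelpfully) reject it.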

Why doesn't Eigen converge for tiny constant matrix?

I am running repeated matrix diagonalization routines for small complex valued square matrices (dimension < 10), but encountered a failure on a small constant value matrix. The ComplexEigenSolver doesn't converge, returning empty objects for the eigenvalues and eigenvectors.
I have checked this problem by trying to solve a matrix with all values 1, and that works fine, so my problem must be related to the small values in my matrix.
MatrixXcd matrix(2,2);
matrix(0,0) = std::complex<double>(1.4822e-322, 0);
matrix(0,1) = std::complex<double>(1.4822e-322, 0);
matrix(1,0) = std::complex<double>(1.4822e-322, 0);
matrix(1,1) = std::complex<double>(1.4822e-322, 0);
ComplexEigenSolver<MatrixXcd> ces;
ces.compute(matrix);
ces.eigenvalues();
ces.eigenvectors();
ces.info();
This gives empty eigenvalues and eigenvectors, and returns 2 from ces.info().
I expect it to simply give eigenvalues with entries 0 and 2.96e-322 (a scaled version of the matrix of ones given here: https://en.wikipedia.org/wiki/Matrix_of_ones)
Are the values too small?
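For what it's worth, 1.4822e-322 is far below the smallest normalized double (DBL_MIN, about 2.225e-308), so these entries are subnormal, a regime where iterative eigensolvers can plausibly fail to converge. A small self-contained check, plus a sketch of the scale-solve-rescale workaround (valid because eigenvalues scale linearly: eig(c*M) = c*eig(M); the function names are made up):

```cpp
#include <cfloat>
#include <cmath>

// True if x is subnormal, i.e. below the smallest normalized double,
// where precision degrades sharply.
bool is_subnormal(double x) {
    return std::fpclassify(x) == FP_SUBNORMAL;
}

// Workaround sketch: lift the matrix entries into the normal range by a
// fixed factor, solve the scaled eigenproblem there, then divide the
// resulting eigenvalues by the same factor afterwards.
double lift_into_normal_range(double x, double scale = 1e300) {
    return x * scale;  // 1.4822e-322 * 1e300 = 1.4822e-22, a normal double
}
```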

How to get all eigenvalues from sparse matrix with eigs_gen

I'm using Armadillo. The eigs_gen function (for SpMat sparse matrices) has a parameter k for the number of eigenvalues to compute.
I have a 3x3 matrix my_matrix; when I run
arma::cx_fvec values;
arma::cx_fmat vectors;
arma::eigs_gen (values, vectors, my_matrix, 3);
I get the following exception
eigs_gen(): n_eigvals + 1 must be less than the number of rows in the matrix
Having 3 eigenvalues for a 3x3 matrix is well-defined in general, so I don't understand this restriction.
On the other hand, the eig_gen function, which computes all eigenvalues, only compiles for the dense matrix Mat type.
How do I find all eigenvalues for a sparse matrix with Armadillo?

Opencv Multiplication of Large matrices

I have 2 matrices of dimension 1x280000.
I want to multiply one matrix by the transpose of the other using OpenCV.
I tried to multiply them using the multiplication operator (*).
But it gives me the error: 'The total size matrix does not fit to size_t type',
since after the multiplication the result will be a 280000x280000 matrix.
So I am thinking the multiplication should be 32-bit.
Is there any method to do the 32-bit multiplication?
Why do you want to multiply them like that? Since this is an answer, though, I would rather help you think it through than just do it.
Suppose you have the two matrices A and B (A.size() == B.size() == [1 x 280000]), and you want the product C = A.t() * B (C is the 280000 x 280000 result).
Then each row of C is the row vector B scaled by the corresponding element of the column vector A.t():
C = [ A[0][0]*B
      A[0][1]*B
      ...
      A[0][279999]*B ]
Equivalently, each column of C is the column vector A.t() scaled by the corresponding element of B.
Hope that this helps with what you are doing... Using a for loop you can print, store, or otherwise process each piece of the result without ever forming the whole matrix.
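The row-at-a-time idea can be sketched in plain C++ (std::vector stands in for cv::Mat here, and the function names are made up): instead of materializing the full n x n result, compute each row as B scaled by one element of A and process it immediately:

```cpp
#include <cstddef>
#include <vector>

using Row = std::vector<double>;

// Row i of C = A^T * B, where A and B are both 1 x n row vectors:
// C[i][j] = A[0][i] * B[0][j], so the whole row is just B scaled by a[i].
Row outer_product_row(const Row& a, const Row& b, std::size_t i) {
    Row row(b.size());
    for (std::size_t j = 0; j < b.size(); ++j)
        row[j] = a[i] * b[j];
    return row;
}

// Stream the (conceptual) n x n result one row at a time, never storing it.
// 'consume' is whatever you need to do with each row: print it, write it
// to disk, accumulate a statistic, etc.
template <typename F>
void for_each_result_row(const Row& a, const Row& b, F consume) {
    for (std::size_t i = 0; i < a.size(); ++i)
        consume(i, outer_product_row(a, b, i));
}
```

With n = 280000, each row is only about 2.2 MB of doubles, while the full product would need on the order of 600 GB, which is why streaming rows (or scaling this idea to chunks) is the practical route.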

c++ eigenvalue and eigenvector corresponding to the smallest eigenvalue

I am trying to find the eigenvalues, and the eigenvector corresponding to the smallest eigenvalue. I have a matrix A (n x 2) and I have computed B = transpose(A) * A. When I use Eigen's compute() function in C++ and print the eigenvalues of matrix B, it shows something like this:
(4.4, 0)
(72.1, 0)
Printing the eigenvectors it gives output:
(-0.97, 0) (0.209, 0)
(-0.209, 0) (-0.97, 0)
I am confused. Eigenvectors can't be zero I guess. So, for the smallest eigenvalue 4.4, is the corresponding eigenvector (-0.97, -0.209)?
P.S. - when I print
mysolution.eigenvalues()[0]
it prints (4.4, 0). And when I print
mysolution.eigenvectors().col(0)
it prints (-0.97, 0) (0.209, 0). That's why I guess I can assume that for eigenvalue 4.4, the corresponding eigenvector is (-0.97, -0.209).
I guess you are correct.
None of your eigenvalues is null, though. It seems that you are working with complex numbers.
Could it be that you selected a complex floating point matrix to do your computations? Something along the lines of MatrixX2cf or MatrixX2cd.
Every square matrix has a set of eigenvalues. But even if the matrix itself only consists of real numbers, the eigenvalues and -vectors might contain complex numbers (take (0 1;-1 0) for example)
If Eigen knows nothing about your matrix structure (i.e. is it symmetric/self-adjoint? Is it orthonormal/unitary?) but still wants to provide you with exact eigenvalues, the only general type that can hold all possible eigenvalues is a complex number.
Thus, Eigen always returns complex numbers which are represented as pairs (a, b) for a + bi. Eigen will only return real numbers if the matrix is self-adjoint, i.e. SelfAdjointView is used to access the matrix.
If you know for a fact that your matrix only has real eigenvalues, you can just extract the real part with eigenvalue.real(), since Eigen returns std::complex values.
EDIT: I just realized that if your matrix A has no complex entries, B = transpose(A) * A is self-adjoint, so you could just use a SelfAdjointView of the matrix to compute the real eigenvalues and eigenvectors.
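To illustrate that last point without pulling in Eigen itself, here is a self-contained sketch: for real A, B = transpose(A) * A is symmetric, and the eigenvalues of a symmetric 2x2 matrix fall out of the characteristic polynomial with a discriminant (p - r)^2 + 4q^2 that is never negative, so they are always real (the helper name is made up):

```cpp
#include <array>
#include <cmath>

// Eigenvalues of a symmetric 2x2 matrix [[p, q], [q, r]], in ascending order.
// The characteristic polynomial is x^2 - (p + r)x + (pr - q^2); its
// discriminant simplifies to (p - r)^2 + 4q^2 >= 0, so the roots are always
// real -- this is why a self-adjoint solver can return real eigenvalues.
std::array<double, 2> symmetric_eigenvalues(double p, double q, double r) {
    double mean = 0.5 * (p + r);
    double radius = std::sqrt(0.25 * (p - r) * (p - r) + q * q);
    return {mean - radius, mean + radius};
}
```

This is the structural fact SelfAdjointView encodes for Eigen: knowing B is self-adjoint, it can skip the complex-valued general path entirely.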