Calculating the eigenvector from a complex eigenvalue in OpenCV - C++

I am trying to calculate an eigenvector of a 4x4 matrix in OpenCV.
For this I first calculate the eigenvalues according to this formula:
det( A - lambda * identity matrix ) = 0
from the Wikipedia article on eigenvalues and eigenvectors.
After solving this, it gives me 4 eigenvalues that look something like this:
0.37789 + 1.91687i
0.37789 - 1.91687i
0.412312 + 1.87453i
0.412312 - 1.87453i
From these 4 eigenvalues I take the highest one and I want to use it with this formula:
( A - lambda * identity matrix ) v = 0
I tried to use my original matrix A with the OpenCV function cv::eigen(), but this doesn't give me the results I am looking for.
I also tried to use RREF (reduced row echelon form); however, I don't know how to do this with complex eigenvalues.
So my question is: how would you calculate this eigenvector?
I plugged my data into Wolfram Alpha to see what my results should be.

OpenCV already has a function for calculating eigenvalues and eigenvectors, cv::eigen(). I advise using it instead of writing the algorithm yourself. Note, however, that cv::eigen() only handles symmetric matrices, which is why it cannot return the complex eigenvalues in your example.
Here is a good blog that explains how to do this in C, C++ and Python.
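For reference, a minimal sketch of how cv::eigen() is used on a symmetric matrix (my own example, not taken from the blog):
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // cv::eigen() expects a symmetric, single-channel floating-point matrix.
    cv::Mat A = (cv::Mat_<double>(3, 3) << 2, 1, 0,
                                           1, 2, 1,
                                           0, 1, 2);
    cv::Mat eigenvalues, eigenvectors;
    cv::eigen(A, eigenvalues, eigenvectors); // eigenvectors are stored as rows
    std::cout << "eigenvalues:\n" << eigenvalues << std::endl;
    std::cout << "eigenvectors (as rows):\n" << eigenvectors << std::endl;
    return 0;
}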

So I solved the problem using the 'ComplexEigenSolver' from the Eigen library.
// Headers needed for this snippet:
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp> // cv2eigen()
#include <Eigen/Eigenvalues>      // ComplexEigenSolver
using namespace cv;
using namespace Eigen;
using namespace std;

// Create a two-channel (complex) matrix from the real 4x4 matrix "a"
Mat a_com = Mat::zeros(4, 4, CV_32FC2);
for (int i = 0; i < 4; i++)
{
    for (int j = 0; j < 4; j++)
    {
        a_com.at<Vec2f>(i, j)[0] = a.at<double>(i, j); // real part
        a_com.at<Vec2f>(i, j)[1] = 0;                  // imaginary part
    }
}
MatrixXcf eigenA;
cv2eigen(a_com, eigenA); // convert the OpenCV matrix to an Eigen one
ComplexEigenSolver<MatrixXcf> ces;
ces.compute(eigenA);
cout << "The eigenvalues of A are:\n" << ces.eigenvalues() << endl;
cout << "The matrix of eigenvectors, V, is:\n" << ces.eigenvectors() << endl;
This gives me the following output (which is more or less what I was looking for):
The eigenvalues of A are:
(0.3951,-1.89571)
(0.3951,1.89571)
(0.3951,1.89571)
(0.3951,-1.89571)
The matrix of eigenvectors, V, is:
(-0.704546,0) (-5.65862e-009,-0.704546) (-0.064798,-0.0225427) (0.0167534,0.0455606)
(-2.22328e-008,0.707107) (0.707107,-1.65536e-008) (0.0206999,-0.00474562) (-0.0145628,-0.0148895)
(-6.07644e-011,0.0019326) (0.00193259,-4.52426e-011) (-0.706729,6.83797e-005) (-0.000121153,0.706757)
(-1.88954e-009,0.0600963) (0.0600963,-1.40687e-009) (0.00200449,0.703827) (-0.70548,-0.00151068)
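For completeness, a minimal sketch of how one might then pick out the eigenvector that belongs to the eigenvalue with the largest magnitude; the column index into ces.eigenvectors() matches the index into ces.eigenvalues():
// Find the index of the eigenvalue with the largest magnitude.
int best = 0;
for (int k = 1; k < ces.eigenvalues().size(); k++)
{
    if (abs(ces.eigenvalues()[k]) > abs(ces.eigenvalues()[best]))
        best = k;
}
// The matching eigenvector is the corresponding column of V.
VectorXcf v = ces.eigenvectors().col(best);
cout << "Eigenvector for the largest-magnitude eigenvalue:\n" << v << endl;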

Related

2D FFT: what to do after converting both matrices into FFT-ed form?

Assume that I have 2 matrices: image and filter, with sizes MxM and NxN.
My regular convolution looks like this and produces an output matrix of size (M-N+1)x(M-N+1). Basically it places the top-left corner of the filter on a pixel, convolves, then assigns the sum to that pixel:
for (int i = 0; i <= M-N; i++)
    for (int j = 0; j <= M-N; j++)
    {
        float sum = 0;
        for (int u = 0; u < N; u++)
            for (int v = 0; v < N; v++)
                sum += image[i+u][j+v] * filter[u][v];
        output[i][j] = sum;
    }
Next, to perform FFT:
Apply zero-padding to both image and filter to the right and bottom (that is, add more zero columns to the right and zero rows to the bottom). Now both have size (M+N)x(M+N); the original image is at
image[0..M-1][0..M-1].
(Do the same for both matrices.) Calculate the FFT of each row into a new matrix, then calculate the FFT of each column of that new matrix.
Now I have 2 matrices, imageFreq and filterFreq, both of size (M+N)x(M+N), which are the FFT-ed forms of the image and the filter.
But how can I get the convolution values that I need (as described in the sample code) from them?
Convolution between A and B using the FFT is done by element-wise multiplication in the frequency domain, so in 1D it is something like this:
Convert A and B by FFT
Assuming the sizes of A[N] and B[M] are N and M, first zero-pad to a common size Q which is a power of 2 and at least M+N in size, and then apply the FFT:
Q = exp2(ceil(log2(M+N)));
zeropad(A,Q);
zeropad(B,Q);
a = FFT(A);
b = FFT(B);
Convolve
In the frequency domain just use element-wise multiplication:
for (i=0;i<Q;i++) a[i]*=b[i];
Reconstruct the result
Simply apply the IFFT (inverse of FFT)...
AB = IFFT(a); // crop to first N (real) elements
and use only the first N elements (unless the algorithm you use needs more; that depends on what you are doing...).
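Here is a small self-contained C++ sketch of these 1D steps (my own illustration with a simple recursive radix-2 FFT, not production code):
#include <complex>
#include <vector>
#include <cmath>
#include <iostream>

using cd = std::complex<double>;

// Recursive radix-2 FFT; invert=true computes the scaled inverse transform.
void fft(std::vector<cd>& a, bool invert)
{
    const std::size_t n = a.size(); // n must be a power of two
    if (n == 1) return;
    std::vector<cd> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; i++) { even[i] = a[2*i]; odd[i] = a[2*i+1]; }
    fft(even, invert);
    fft(odd, invert);
    const double ang = 2.0 * std::acos(-1.0) / n * (invert ? -1 : 1);
    for (std::size_t k = 0; k < n / 2; k++)
    {
        cd w = std::polar(1.0, ang * k) * odd[k];
        a[k]         = even[k] + w;
        a[k + n / 2] = even[k] - w;
        if (invert) { a[k] /= 2; a[k + n / 2] /= 2; } // scale on the way back
    }
}

std::vector<double> convolve(const std::vector<double>& A, const std::vector<double>& B)
{
    std::size_t Q = 1;
    while (Q < A.size() + B.size()) Q <<= 1;           // Q = exp2(ceil(log2(N+M)))
    std::vector<cd> a(A.begin(), A.end()), b(B.begin(), B.end());
    a.resize(Q);                                       // zeropad(A,Q)
    b.resize(Q);                                       // zeropad(B,Q)
    fft(a, false);                                     // a = FFT(A)
    fft(b, false);                                     // b = FFT(B)
    for (std::size_t i = 0; i < Q; i++) a[i] *= b[i];  // element-wise multiplication
    fft(a, true);                                      // AB = IFFT(a)
    std::vector<double> AB(A.size());                  // crop to the first N (real) elements
    for (std::size_t i = 0; i < AB.size(); i++) AB[i] = a[i].real();
    return AB;
}

int main()
{
    std::vector<double> A = { 1, 2, 3, 4 }, B = { 1, 0, -1 };
    for (double v : convolve(A, B)) std::cout << v << " "; // prints: 1 2 2 2
    std::cout << std::endl;
    return 0;
}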
For 2D you can either convolve directly in 2D (using 2 nested for loops) or convolve each axis separately. Beware that convolving the axes separately also requires normalizing the result by some constant (which depends on the dimensionality, resolution and kernel used).
So when put together (also assuming square resolutions NxN and MxM), first zero-pad both to QxQ and then:
Q = exp2(ceil(log2(M+N)));
zeropad(A,Q,Q);
zeropad(B,Q,Q);
a = FFT(A);
b = FFT(B);
for (i=0;i<Q;i++)
for (j=0;j<Q;j++) a[i][j]*=b[i][j];
AB = IFFT(a); // crop to first NxN (real) elements
And again crop AB to NxN size (unless ...). For more info see:
How to compute Discrete Fourier Transform?
and all the sublinks there... Also here, at the end, is a 1D convolution example using NTT (it's a special form of FFT) to compute bignum multiplication:
Modular arithmetics and NTT (finite field DFT) optimizations
Also, if you want a real result then just use the real parts of the result (ignore the imaginary part).
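And a sketch of the 2D version (my own illustration, reusing the fft() helper and the cd typedef from the 1D sketch above): a 2D FFT is just a 1D FFT over every row followed by a 1D FFT over every column.
// 2D FFT built from the 1D fft() above; m must be Q x Q with Q a power of two.
void fft2(std::vector<std::vector<cd>>& m, bool invert)
{
    const std::size_t Q = m.size();
    for (std::size_t i = 0; i < Q; i++) fft(m[i], invert);   // FFT of each row
    for (std::size_t j = 0; j < Q; j++)                      // FFT of each column
    {
        std::vector<cd> col(Q);
        for (std::size_t i = 0; i < Q; i++) col[i] = m[i][j];
        fft(col, invert);
        for (std::size_t i = 0; i < Q; i++) m[i][j] = col[i];
    }
}
// The convolution then mirrors the 1D case: zero-pad both matrices to Q x Q,
// apply fft2 to both, multiply element-wise, apply fft2 with invert = true,
// and crop the real part of the top-left N x N block.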

Intel MKL: mismatching results from LAPACKE_dgesvd

I have in my code a call to the LAPACKE_dgesvd function. This code is covered by autotests. Upon a compiler migration we decided to upgrade MKL too, from 11.3.4 to 2019.0.5.
And the tests became red. After a deep investigation I found that this function no longer returns the same U & V matrices.
I extracted the code and ran it in a separate env/project and made the same observation: the first column of U and the first row of V have the opposite sign.
Could you please tell me what I'm doing wrong there? Or how should I use the new version to get the old results?
I made a simple project that makes it easy to reproduce the issue. Here is the code:
// MKL.cpp : This file contains the 'main' function. Program execution begins and ends there
#include <iostream>
#include <algorithm>
#include <mkl.h>
int main()
{
const int rows(3), cols(3);
double covarMatrix[rows*cols] = { 0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
-0.6088405718211041, 0.9992441421012869, -0.3357678733652218,
-0.4935146797825398, -0.3357678733652218, 0.9992441421012761};
double U[rows*rows] = { -1,-1,-1,
-1,-1,-1,
-1,-1,-1 };
double V[cols*cols] = { -1,-1,-1,
-1,-1,-1,
-1,-1,-1 };
double superb[std::min(rows, cols) - 1];
double eigenValues[std::max(rows, cols)];
MKL_INT info = LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'A', 'A',
rows, cols, covarMatrix, cols, eigenValues, U, rows, V, cols, superb);
if (info > 0)
std::cout << "not converged!\n";
std::cout << "U\n";
for (int row(0); row < rows; ++row)
{
for (int col(0); col < rows; ++col)
std::cout << U[row * rows + col] << " ";
std::cout << std::endl;
}
std::cout << "V\n";
for (int row(0); row < cols; ++row)
{
for (int col(0); col < cols; ++col)
std::cout << V[row * cols + col] << " ";
std::cout << std::endl;
}
std::cout << "Converged!\n";
}
Here are more numerical details:
A = 0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
-0.6088405718211041, 0.9992441421012869, -0.3357678733652218,
-0.4935146797825398, -0.3357678733652218, 0.9992441421012761
Results:

On 11.3.4:
U
-0.765774 -0.13397 0.629
0.575268 -0.579935 0.576838
0.2875 0.803572 0.521168
V
-0.765774 0.575268 0.2875
-0.13397 -0.579935 0.803572
0.629 0.576838 0.521168

On 2019.0.5 & 2020.1.216:
U
0.765774 -0.13397 0.629
-0.575268 -0.579935 0.576838
-0.2875 0.803572 0.521168
V
0.765774 -0.575268 -0.2875
-0.13397 -0.579935 0.803572
0.629 0.576838 0.521168
I tested using scipy and the result is identical to the one from version 11.3.4.
from scipy import linalg
from numpy import array
A = array([[0.9992441421012894, -0.6088405718211041, -0.4935146797825398], [-0.6088405718211041, 0.9992441421012869, -0.3357678733652218], [-0.4935146797825398, -0.3357678733652218, 0.9992441421012761]])
print(A)
u,s,vt,info = linalg.lapack.dgesvd(A)
print(u)
print(s)
print(vt)
print(info)
Thanks for your help and best regards
Mokhtar
The singular value decomposition is not unique. For example, if we have an SVD decomposition (i.e. a set of matrices U, S, V) such that A = U * S * V^T, then the set of matrices (-U, S, -V) is also an SVD decomposition, because (-U) * S * (-V)^T = U * S * V^T = A. Moreover, if D is a diagonal matrix whose diagonal entries are equal to -1 or 1, then the set of matrices (U*D, S, V*D) is also an SVD decomposition, because (U*D) * S * (V*D)^T = U * (D * S * D) * V^T = U * S * V^T = A.
Because of this it is not a good idea to validate an SVD decomposition by comparing two sets of matrices. The LAPACK User's Guide, like many other publications, recommends checking the following conditions for the computed SVD decomposition:
1. || A*V - U*S || / || A || should be small enough
2. || U^T * U - I || should be close to zero
3. || V^T * V - I || should be close to zero
4. all diagonal entries of the diagonal matrix S must be positive and sorted in decreasing order
The error bounds for all the expressions given above can be found at https://www.netlib.org/lapack/lug/node97.html
Both MKL versions mentioned in the post return singular values and singular vectors that satisfy all 4 error bounds. Because of that, and because the SVD is not unique, both results are correct. The change of sign in the first singular vectors happened because, for very small matrices, another, faster method for the reduction to bidiagonal form started to be used.
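To illustrate, a small sketch of these residual checks on the matrix from the question (my own example, using Eigen's JacobiSVD for brevity rather than MKL): both sign choices of the first singular vectors pass all of them.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Matrix3d A;
    A << 0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
        -0.6088405718211041,  0.9992441421012869, -0.3357678733652218,
        -0.4935146797825398, -0.3357678733652218,  0.9992441421012761;

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(A, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d U = svd.matrixU();
    Eigen::Matrix3d V = svd.matrixV();
    Eigen::Vector3d S = svd.singularValues();

    // 1. || A*V - U*S || / || A || should be small
    std::cout << (A * V - U * S.asDiagonal()).norm() / A.norm() << std::endl;
    // 2./3. || U^T*U - I || and || V^T*V - I || should be close to zero
    std::cout << (U.transpose() * U - Eigen::Matrix3d::Identity()).norm() << std::endl;
    std::cout << (V.transpose() * V - Eigen::Matrix3d::Identity()).norm() << std::endl;

    // Flipping the sign of the first column of U and V (as the newer MKL does)
    // leaves all residuals unchanged, so both results are valid SVDs.
    U.col(0) *= -1.0;
    V.col(0) *= -1.0;
    std::cout << (A * V - U * S.asDiagonal()).norm() / A.norm() << std::endl;
    return 0;
}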

From Matlab to C++ Eigen matrix operations - vector normalization

Converting some Matlab code to C++.
Questions (how to do these in C++):
1. Concatenate two vectors in a matrix (already found the solution).
2. Normalize each "pts" column by dividing it by its 3rd value.
Matlab code for 1 and 2:
% 1. A 3x1 vector. d0, d1 double.
B = [d0*A (d0+d1)*A]; % B is 3x2
% 2. Normalize a set of 3D points
% Divide each col by its 3rd value
% pts 3xN. C 3xN.
% If N = 1 you can do: C = pts./pts(3); if not:
C = bsxfun(@rdivide, pts, pts(3,:));
C++ code for 1 and 2:
// 1. Found the solution for that one!
B << d0*A, (d0 + d1)*A;
// 2.
for (int i = 0; i < N; i++)
{
    // Something like this, but Eigen may have a better solution that I don't know.
    C.block<3,1>(0,i) = C.block<3,1>(0,i) / C(2,i);
}
Edit:
I hope the question is clearer now.
For #2:
C = C.array().rowwise() / C.row(2).array();
Only arrays have multiplication and division operators defined for row-wise and column-wise operations. The array expression converts back to a matrix when you assign it back into C.
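A minimal self-contained sketch of that one-liner (hypothetical data; pts is 3xN as in the question):
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXf pts(3, 2);
    pts << 2, 8,
           4, 6,
           2, 2;
    // Divide every column by its 3rd entry, like bsxfun(@rdivide, pts, pts(3,:)).
    Eigen::MatrixXf C = pts.array().rowwise() / pts.row(2).array();
    std::cout << C << std::endl; // columns now end in 1: [1 4; 2 3; 1 1]
    return 0;
}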

Principal Component Analysis with Eigen Library

I'm trying to compute the 2 major principal components from a dataset in C++ with Eigen.
The way I do it at the moment is to normalize the data to [0, 1] and then center it on the mean. After that I compute the covariance matrix and run an eigenvalue decomposition on it. I know SVD is faster, but I'm confused about the computed components.
Here is the main code showing how I do it (where traindata is my MxN sized input matrix):
Eigen::VectorXf normalize(Eigen::VectorXf vec) {
for (int i = 0; i < vec.size(); i++) { // normalize each feature.
vec[i] = (vec[i] - minCoeffs[i]) / scalingFactors[i];
}
return vec;
}
// Calculate normalization coefficients (globals of type Eigen::VectorXf).
maxCoeffs = traindata.colwise().maxCoeff();
minCoeffs = traindata.colwise().minCoeff();
scalingFactors = maxCoeffs - minCoeffs;
// For each datapoint.
for (int i = 0; i < traindata.rows(); i++) { // Normalize each datapoint.
traindata.row(i) = normalize(traindata.row(i));
}
// Mean centering data.
Eigen::VectorXf featureMeans = traindata.colwise().mean();
Eigen::MatrixXf centered = traindata.rowwise() - featureMeans;
// Compute the covariance matrix.
Eigen::MatrixXf cov = centered.adjoint() * centered;
cov = cov / (traindata.rows() - 1);
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> eig(cov);
// Normalize eigenvalues to make them represent percentages.
Eigen::VectorXf normalizedEigenValues = eig.eigenvalues() / eig.eigenvalues().sum();
// Get the two major eigenvectors and omit the others.
Eigen::MatrixXf evecs = eig.eigenvectors();
Eigen::MatrixXf pcaTransform = evecs.rightCols(2);
// Map the dataset in the new two dimensional space.
traindata = traindata * pcaTransform;
The result of this code is a plot of the projected data (not shown here).
To confirm my results, I tried the same thing with WEKA. What I did was to use the normalize and the center filter, in this order, then the principal component filter, and save + plot the output (also not shown here).
Technically I should have done the same thing; however, the outcome is very different. Can anyone see if I made a mistake?
When scaling to [0, 1], you modify the local variable vec but forget to update traindata.
Moreover, this can be done more easily this way:
RowVectorXf maxCoeffs = traindata.colwise().maxCoeff();
RowVectorXf minCoeffs = traindata.colwise().minCoeff();
RowVectorXf scalingFactors = maxCoeffs - minCoeffs;
traindata = (traindata.rowwise()-minCoeffs).array().rowwise() / scalingFactors.array();
that is, using row-vectors and array features.
Let me also add that the symmetric eigenvalue decomposition is actually faster than the SVD. The true advantage of the SVD in this case is that it avoids squaring the entries, but since your input data are normalized and centered, and since you only care about the largest eigenvalues, there is no accuracy concern here.
The reason was that Weka standardized the dataset. This means it scales each feature to unit variance. When I did this, the plots looked the same. Technically, my approach was correct as well.
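For reference, a small sketch (my own, with made-up data) of that standardization step in Eigen: scale every column to zero mean and unit variance.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXf traindata(4, 2); // hypothetical 4x2 dataset (rows = samples)
    traindata << 1, 10,
                 2, 20,
                 3, 30,
                 4, 40;
    Eigen::RowVectorXf mean = traindata.colwise().mean();
    Eigen::MatrixXf centered = traindata.rowwise() - mean;
    // Sample standard deviation of each column.
    Eigen::RowVectorXf stddev =
        (centered.array().square().colwise().sum() / float(traindata.rows() - 1)).sqrt();
    Eigen::MatrixXf standardized = centered.array().rowwise() / stddev.array();
    std::cout << standardized << std::endl; // each column now has unit variance
    return 0;
}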

Total Least Squares algorithm in C/C++

Given a set of points P I need to find a line L that best approximates these points. I have tried to use the function gsl_fit_linear from the GNU Scientific Library. However my data set often contains points that have a line of best fit with undefined slope (x=c), so gsl_fit_linear returns NaN. It is my understanding that it is best to use total least squares for this sort of thing because it is fast, robust and it gives the equation in terms of r and theta (so x=c can still be represented). I can't seem to find any C/C++ code out there for this problem. Does anyone know of a library or something that I can use? I've read a few research papers on this but the topic is still a little fuzzy, so I don't feel confident implementing my own.
Update:
I made a first attempt at programming my own with Armadillo, using the code given on this Wikipedia page. Alas, I have so far been unsuccessful.
This is what I have so far:
void pointsToLine(vector<Point> P)
{
Row<double> x(P.size());
Row<double> y(P.size());
for (int i = 0; i < P.size(); i++)
{
x << P[i].x;
y << P[i].y;
}
int m = P.size();
int n = x.n_cols;
mat Z = join_rows(x, y);
mat U;
vec s;
mat V;
svd(U, s, V, Z);
mat VXY = V(span(0, (n-1)), span(n, (V.n_cols-1)));
mat VYY = V(span(n, (V.n_rows-1)) , span(n, (V.n_cols-1)));
mat B = (-1*VXY) / VYY;
cout << B << endl;
}
The output for B is always 0.5504, even when my data set changes. Also, I thought the output should be two values, so I'm definitely doing something very wrong.
Thanks!
To find the line that minimises the sum of the squares of the (orthogonal) distances of the points from the line, you can proceed as follows:
The line is the set of points p + r*t, where p and t are vectors to be found and r varies along the line. We restrict t to unit length. While there is another, simpler description in two dimensions, this one works in any dimension.
The steps are
1/ compute the mean p of the points
2/ accumulate the covariance matrix C
C = Sum{ i | (q[i]-p)*(q[i]-p)' } / N
(where you have N points and ' denotes transpose)
3/ diagonalise C and take as t the eigenvector corresponding to the largest eigenvalue.
All this can be justified starting from the (orthogonal) squared distance of a point q from a line represented as above, which is
d2(q) = (q-p)'*(q-p) - ((q-p)'*t)^2
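A compact sketch of this recipe using Eigen (the question used Armadillo and GSL, but the steps are the same; the points here are made up and 2D, though the approach works in any dimension):
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd q(4, 2); // hypothetical points, one per row
    q << 0, 0,
         1, 1.1,
         2, 1.9,
         3, 3.05;

    // 1/ the mean p of the points
    Eigen::RowVector2d p = q.colwise().mean();

    // 2/ the covariance matrix C = Sum{ (q[i]-p)*(q[i]-p)' } / N
    Eigen::MatrixXd centered = q.rowwise() - p;
    Eigen::Matrix2d C = centered.transpose() * centered / double(q.rows());

    // 3/ t = eigenvector of C belonging to the largest eigenvalue
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> eig(C);
    Eigen::Vector2d t = eig.eigenvectors().col(1); // eigenvalues are sorted in increasing order

    // The fitted line is p + r*t; vertical data simply gives t = (0, 1)'.
    std::cout << "p = " << p << ", t = " << t.transpose() << std::endl;
    return 0;
}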