Access eigenvalues - C++

I am trying to get the smallest eigenvalue and the corresponding eigenvector of a covariance matrix:
Eigen::Matrix3d covariance_matrix; // has to be Matrix3d
double minEigenValue = 0;
int minEigenVectorIndex = 0;
// compute covariance matrix
Eigen::EigenSolver<Eigen::Matrix3d> solver(covariance_matrix);
Eigen::Matrix eigenvalues = solver.eigenvalues();
// Eigen::Matrix3d eigenvalues = solver.eigenvalues(); results in an error
for(int i = 0; i < 3; i++)
{
    // How do I access the eigenvalues? This fails. eigenvalues[0][i] also fails
    if(eigenvalues(0,i) > minEigenValue)
    {
        minEigenValue = eigenvalues(0,i);
        minEigenVectorIndex = i;
    }
}
// somehow get pair of vector[0], vector[1], vector[2]:
// solver.eigenvectors().col(minEigenVectorIndex);
I have read through the documentation quite a lot but couldn't find a clear example or explanation.
How do I access the eigenvectors and eigenvalues?

Eigen::Matrix<std::complex<double>,3,1> eigenvalues = solver.eigenvalues();
Eigen::Matrix<std::complex<double>,3,3> eigenvec = solver.eigenvectors();
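The eigenvalues() of a general Eigen::EigenSolver are complex, which is why the complex types above are required; individual entries are read as eigenvalues(i), e.g. eigenvalues(i).real(). Since a covariance matrix is symmetric, a simpler alternative is Eigen::SelfAdjointEigenSolver: its eigenvalues are real and sorted in increasing order, so the smallest eigenvalue and its eigenvector sit at index 0. A minimal sketch (the example matrix is illustrative):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Matrix3d covariance_matrix;
    covariance_matrix << 4, 1, 0,
                         1, 3, 0,
                         0, 0, 1; // illustrative symmetric matrix

    // eigenvalues of a self-adjoint solver are real and sorted ascending
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(covariance_matrix);

    double minEigenValue = solver.eigenvalues()(0);                // smallest eigenvalue
    Eigen::Vector3d minEigenVector = solver.eigenvectors().col(0); // its eigenvector

    std::cout << "smallest eigenvalue: " << minEigenValue << "\n"
              << "eigenvector:\n" << minEigenVector << std::endl;
    return 0;
}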

Related

How to write Multiplicative Update Rules for Matrix Factorization when one doesn't have access to the whole matrix?

So we want to approximate the matrix A, with m rows and n columns, by the product of two matrices P and Q of dimensions m×k and k×n respectively. Here is an implementation of the multiplicative update rule due to Lee and Seung in C++, using the Eigen library.
void multiplicative_update()
{
    Q = Q.cwiseProduct((P.transpose()*matrix).cwiseQuotient(P.transpose()*P*Q));
    P = P.cwiseProduct((matrix*Q.transpose()).cwiseQuotient(P*Q*Q.transpose()));
}
where P, Q, and the matrix (matrix = A) are global variables in the class mat_fac. I then train them using the following method:
void train_2()
{
    double error_trial = 0;
    for (int count = 0; count < num_iterations; count++)
    {
        multiplicative_update();
        error_trial = (matrix - P*Q).squaredNorm();
        if (error_trial < 0.001)
        {
            break;
        }
    }
}
where num_iterations is also a global variable in the class mat_fac.
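For reference, the full-matrix updates above can be exercised with a minimal self-contained sketch like this (sizes and random initialization are illustrative):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int m = 6, n = 5, k = 2;
    // random nonnegative A, P, Q (strictly positive with probability 1)
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(m, n).cwiseAbs();
    Eigen::MatrixXd P = Eigen::MatrixXd::Random(m, k).cwiseAbs();
    Eigen::MatrixXd Q = Eigen::MatrixXd::Random(k, n).cwiseAbs();

    for (int it = 0; it < 500; ++it)
    {
        // Lee-Seung multiplicative updates: elementwise product and quotient
        Q = Q.cwiseProduct((P.transpose() * A).cwiseQuotient(P.transpose() * P * Q));
        P = P.cwiseProduct((A * Q.transpose()).cwiseQuotient(P * Q * Q.transpose()));
    }
    std::cout << "squared error: " << (A - P * Q).squaredNorm() << std::endl;
    return 0;
}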
The problem is that I am working with very large matrices and, in particular, I do not have access to the entire matrix. Given a triple (i, j, matrix[i][j]), I have access to the row vector P[i][:] and the column vector Q[:][j]. So my goal is to rewrite the multiplicative update rule in such a way that I update these two vectors every time I see a non-zero matrix value.
In code, I want to have something like this:
void multiplicative_update(int i, int j, double mat_value)
{
    Eigen::VectorXd q_vect = get_vector(1, j); // get_vector returns Q[:][j] as a column vector
    Eigen::VectorXd p_vect = get_vector(0, i); // get_vector returns P[i][:] as a column vector
    // Somehow compute coeff_AQ_t, coeff_PQQ_t, coeff_P_tA and coeff_P_tPQ.
    for (int t = 0; t < k; t++)
    {
        p_vect[t] = p_vect[t] * coeff_AQ_t / coeff_PQQ_t;
        q_vect[t] = q_vect[t] * coeff_P_tA / coeff_P_tPQ;
    }
}
Thus the problem boils down to computing the required coefficients given the two vectors. Is this possible? If not, what more data do I need for the multiplicative update to work in this manner?

Eigen compound addition gives wrong result

I declared two Eigen::RowVectorXd variables in the program as below. I get wrong results from the compound addition statement sdf_grad += gradval: only the first two elements are added, and the rest of the elements in the sdf_grad vector become 1e19. I don't have any clue why this is happening. Please help.
Eigen::RowVectorXd sdf_grad(24);
Eigen::VectorXd stress_dof = get_stress_dof();
Eigen::VectorXd strain_dof = get_strain_dof();
for(unsigned int i = 0; i != qn.size(); i++)
{
    for(unsigned int j = 0; j != qn.size(); j++)
    {
        double sval = qn[i];
        double tval = qn[j];
        if(!m_shape->m_set_coordinate)
            m_shape->add_coordinates(this->get_xcoords(), this->get_ycoords());
        m_shape->update_shapefn(sval, tval);
        Eigen::MatrixXd Bs = get_bsmat_local(i, j);
        Eigen::Vector3d stress = Bs * stress_dof;
        Eigen::MatrixXd Bd = get_bmat(sval, tval);
        Eigen::Vector3d strain = Bd * strain_dof;
        Eigen::Vector3d cnfn = m_material->get_constitutive_function(stress, strain);
        auto WxJ = qw[i] * qw[j] * m_shape->get_detJ();
        double delval = cnfn.norm();
        objval += delval * WxJ;
        // SETTING GRADIENT OF STRESS DOF
        Eigen::MatrixXd CxBs = m_material->get_cmat() * Bs;
        Eigen::MatrixXd Bstrans = CxBs.transpose();
        Eigen::RowVectorXd gradval = (-WxJ / delval) * Bstrans * cnfn;
        sdf_grad += gradval; // Wrong result.
    }
}
You did not zero-initialize your vector: Eigen::RowVectorXd sdf_grad(24) allocates 24 coefficients but leaves them uninitialized, so += accumulates into garbage. Write this instead of the first line:
Eigen::RowVectorXd sdf_grad = Eigen::RowVectorXd::Zero(24);
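Equivalently, you can zero the vector after construction with setZero():

Eigen::RowVectorXd sdf_grad(24);
sdf_grad.setZero(); // without this, the coefficients hold garbage such as 1e19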

Orthogonalization in QR Factorization outputting slightly inaccurate orthogonalized matrix

I am writing code for QR Factorization and for some reason my orthogonal method does not work as intended. Basically, my proj() method is outputting random projections. Here is the code:
apmatrix<double> proj(apmatrix<double> v, apmatrix<double> u)
//Projection of u onto v
{
    //proj(v,u) = [(u dot v)/(v dot v)]*v
    double a = mult(transpose(u,u),v)[0][0], b = mult(transpose(v,v),v)[0][0], c = (a/b);
    apmatrix<double> k;
    k.resize(v.numrows(), v.numcols());
    for(int i = 0; i < v.numrows(); i++)
    {
        for(int j = 0; j < v.numcols(); j++)
        {
            k[i][j] = v[i][j] * c;
        }
    }
    return k;
}
I tested the method by itself with manual matrix inputs, and it seems to work fine. Here is my orthogonal method:
apmatrix<double> orthogonal(apmatrix<double> A) //Orthogonal
{
    /*
    n = (number of columns of A)-1
    x = columns of A
    v0 = x0
    v1 = x1 - proj(v0,x1)
    vn = xn - proj(v0,xn) - proj(v1,xn) - ... - proj(v(n-1),xn)
    V = {v0, v1, ..., vn} or [v0 v1 ... vn]
    */
    apmatrix<double> V, x, v;
    int n = A.numcols();
    V.resize(A.numrows(), n);
    x.resize(A.numrows(), 1);
    v.resize(A.numrows(), 1);
    for(int i = 0; i < A.numrows(); i++)
    {
        x[i][0] = A[i][1];
        v[i][0] = A[i][0];
        V[i][0] = A[i][0];
    }
    for (int c = 1; c < n; c++) //Iterates through each col of A as if each was its own matrix
    {
        apmatrix<double> vn, vc; //vn = orthogonalized v (avoids overwriting v); vc = previously orthogonalized v
        vn = x;
        vc.resize(v.numrows(), 1);
        for(int i = 0; i < c; i++) //vn = xn - sigma(t=0..c-1, proj(vt, xn))
        {
            for(int k = 0; k < V.numrows(); k++)
                vc[k][0] = V[k][i]; //Sets vc to the designated v column
            apmatrix<double> temp = proj(vc, x);
            for(int j = 0; j < A.numrows(); j++)
            {
                vn[j][0] -= temp[j][0]; //orthogonalize matrix
            }
        }
        for(int k = 0; k < V.numrows(); k++)
        {
            V[k][c] = vn[k][0]; //Writes the orthogonalized column into V
            v[k][0] = V[k][c];  //v is redundant; more of a placeholder
        }
        if((c+1) < A.numcols()) //Matrix out-of-bounds check
        {
            for(int k = 0; k < A.numrows(); k++)
            {
                vn[k][0] = 0;
                vc[k][0] = 0;
                x[k][0] = A[k][c+1]; //Moves x onto the next column
            }
        }
    }
    system("PAUSE");
    return V;
}
For testing purposes, I have been using the 2D Array: [[1,1,4],[1,4,2],[1,4,2],[1,1,0]]. Each column is its own 4x1 matrix. The matrices should be outputted as: [1,1,1,1]T, [-1.5,1.5,1.5,-1.5]T, and [2,0,0,-2]T respectively. What's happening now is that the first column comes out correctly (it's the same matrix), but the second and third come out to something that is potentially similar but not equal to their intended values.
Again, each time I call the orthogonal method, it outputs something different. I think it's due to the numbers fed into the proj() method, but I am not fully sure.
The apmatrix class is from the AP College Board, back when they taught C++. It is similar to vectors in C++ or ArrayLists in Java.
Here is a link to apmatrix.cpp and to the documentation or conditions (probably more useful), apmatrix.h.
Here is a link to the full code (I added visual markers to see what the computer is doing).
It's fair to assume that all custom methods work as intended (except maybe the matrix regressions, but those are irrelevant here). Be sure to enter the matrix using the enter method before trying to factorize. The code might be inefficient, partly because I taught myself C++ not too long ago and have been trying different ways to fix it. Thank you for the help!
As said in the comments:

@AhmedFasih After doing more tests today, I have found that it is in fact some memory issue. I found that for some reason, if a variable or an apmatrix object is declared within a loop, initialized, then that loop is reiterated, the memory does not entirely wipe the value stored in that variable or object. This is noted in two places in my code. For whatever reason, I had to set the doubles a, b, and c to 0 in the proj method and apmatrixdh to 0 in the mult method, or they would store some value in the next iteration. Thank you so much for your help!
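For comparison, here is a minimal classical Gram-Schmidt sketch in plain C++ (std::vector instead of apmatrix) that reproduces the expected columns for the test matrix above, with every accumulator explicitly initialized:

#include <cstdio>
#include <vector>

// Classical Gram-Schmidt over a set of columns; every accumulator is
// explicitly initialized, avoiding the stale-value problem described above.
std::vector<std::vector<double> > gramSchmidt(const std::vector<std::vector<double> >& cols)
{
    std::vector<std::vector<double> > V;
    for (size_t c = 0; c < cols.size(); ++c)
    {
        std::vector<double> v = cols[c];       // vn starts as the current column x
        for (size_t t = 0; t < V.size(); ++t)  // subtract proj(vt, x) for each prior vt
        {
            double ux = 0.0, uu = 0.0;         // zero-initialized on every pass
            for (size_t i = 0; i < v.size(); ++i)
            {
                ux += V[t][i] * cols[c][i];
                uu += V[t][i] * V[t][i];
            }
            const double coef = ux / uu;
            for (size_t i = 0; i < v.size(); ++i)
                v[i] -= coef * V[t][i];
        }
        V.push_back(v);
    }
    return V;
}

int main()
{
    // columns of the 4x3 test matrix from the question
    const double A[4][3] = { {1,1,4}, {1,4,2}, {1,4,2}, {1,1,0} };
    std::vector<std::vector<double> > cols(3, std::vector<double>(4));
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 3; ++j)
            cols[j][i] = A[i][j];

    std::vector<std::vector<double> > V = gramSchmidt(cols);
    for (size_t c = 0; c < V.size(); ++c)
    {
        for (size_t i = 0; i < V[c].size(); ++i)
            std::printf("%g ", V[c][i]);
        std::printf("\n"); // expected: 1 1 1 1 / -1.5 1.5 1.5 -1.5 / 2 0 0 -2
    }
    return 0;
}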

How to find the Euclidean distance between keypoints of a single image in OpenCV

I want to get a distance vector d for each keypoint in the image. The distance vector should consist of the distances from that keypoint to all other keypoints in the image.
Note: keypoints are found using SIFT.
I'm pretty new to OpenCV. Is there a library function in C++ that can make this task easy?
If you aren't interested in the position distance but rather the descriptor distance, you can use this:
cv::Mat SelfDescriptorDistances(cv::Mat descr)
{
    cv::Mat selfDistances = cv::Mat::zeros(descr.rows, descr.rows, CV_64FC1);
    for(int keyptNr = 0; keyptNr < descr.rows; ++keyptNr)
    {
        for(int keyptNr2 = 0; keyptNr2 < descr.rows; ++keyptNr2)
        {
            double euclideanDistance = 0;
            for(int descrDim = 0; descrDim < descr.cols; ++descrDim)
            {
                double tmp = descr.at<float>(keyptNr, descrDim) - descr.at<float>(keyptNr2, descrDim);
                euclideanDistance += tmp*tmp;
            }
            euclideanDistance = sqrt(euclideanDistance);
            selfDistances.at<double>(keyptNr, keyptNr2) = euclideanDistance;
        }
    }
    return selfDistances;
}
which will give you an N x N matrix (N = number of keypoints), where Mat(i,j) = the Euclidean distance between keypoints i and j.
With this input (image omitted) I get these outputs (images omitted): an image where keypoints with a distance of less than 0.05 are marked, and an image that corresponds to the matrix, where white pixels are dist < 0.05.
REMARK: you can optimize many things in the computation of the matrix, since distances are symmetric!
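As a usage sketch (assuming descriptors is a CV_32F matrix with one row per keypoint, as computed in the sample code further below):

cv::Mat dists = SelfDescriptorDistances(descriptors);
double d01 = dists.at<double>(0, 1); // descriptor distance between keypoints 0 and 1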
UPDATE:
Here is another way to do it.
From your chat I know that you would need 13 GB of memory to hold the distance information for 41381 keypoints (which you tried). If you want only the N best matches instead, try this code:
// choose double here if you are worried about precision!
#define intermediatePrecision float
//#define intermediatePrecision double
//
void NBestMatches(cv::Mat descriptors1, cv::Mat descriptors2, unsigned int n, std::vector<std::vector<float> > & distances, std::vector<std::vector<int> > & indices)
{
    // TODO: check whether descriptor dimensions and types are the same for both!
    // clear vectors and get enough space to hold the n best matches
    distances.clear();
    distances.resize(descriptors1.rows);
    indices.clear();
    indices.resize(descriptors1.rows);
    for(int i = 0; i < descriptors1.rows; ++i)
    {
        // references to current elements:
        std::vector<float> & cDistances = distances.at(i);
        std::vector<int> & cIndices = indices.at(i);
        // initialize:
        cDistances.resize(n, FLT_MAX);
        cIndices.resize(n, -1); // -1 = "no match found"
        // now find the n best matches for descriptor i:
        for(int j = 0; j < descriptors2.rows; ++j)
        {
            intermediatePrecision euclideanDistance = 0;
            for(int dim = 0; dim < descriptors1.cols; ++dim)
            {
                intermediatePrecision tmp = descriptors1.at<float>(i, dim) - descriptors2.at<float>(j, dim);
                euclideanDistance += tmp*tmp;
            }
            euclideanDistance = sqrt(euclideanDistance);
            float tmpCurrentDist = euclideanDistance;
            int tmpCurrentIndex = j;
            // update the current n best matches:
            for(unsigned int k = 0; k < n; ++k)
            {
                if(tmpCurrentDist < cDistances.at(k))
                {
                    int tmpI2 = cIndices.at(k);
                    float tmpD2 = cDistances.at(k);
                    // update the current k-th best match
                    cDistances.at(k) = tmpCurrentDist;
                    cIndices.at(k) = tmpCurrentIndex;
                    // the previous k-th best shifts down to (k+1)-th best // TODO: a simple memcpy would be faster, I guess.
                    tmpCurrentDist = tmpD2;
                    tmpCurrentIndex = tmpI2;
                }
            }
        }
    }
}
It computes the N best matches for each keypoint of the first descriptor set against the second descriptor set. So if you want to do that for the same keypoints, set descriptors1 = descriptors2 in your call, as shown below. Remember: the function doesn't know that both descriptor sets are identical, so the first best match (or at least one of them) will always be the keypoint itself, with distance 0! Keep that in mind when using the results!
Here's sample code to generate an image similar to the one above:
int main()
{
    cv::Mat input = cv::imread("../inputData/MultiLena.png");
    cv::Mat gray;
    cv::cvtColor(input, gray, CV_BGR2GRAY);
    cv::SiftFeatureDetector detector( 7500 );
    cv::SiftDescriptorExtractor describer;
    std::vector<cv::KeyPoint> keypoints;
    detector.detect( gray, keypoints );
    // draw keypoints
    cv::drawKeypoints(input, keypoints, input);
    cv::Mat descriptors;
    describer.compute(gray, keypoints, descriptors);
    int n = 4;
    std::vector<std::vector<float> > dists;
    std::vector<std::vector<int> > indices;
    // compute the N best matches between the descriptors and themselves.
    // REMIND: ONE best match will always be the keypoint itself in this setting!
    NBestMatches(descriptors, descriptors, n, dists, indices);
    for(unsigned int i = 0; i < dists.size(); ++i)
    {
        for(unsigned int j = 0; j < dists.at(i).size(); ++j)
        {
            if(dists.at(i).at(j) < 0.05)
                cv::line(input, keypoints[i].pt, keypoints[indices.at(i).at(j)].pt, cv::Scalar(255,255,255) );
        }
    }
    cv::imshow("input", input);
    cv::waitKey(0);
    return 0;
}
Create a 2D vector (its size will be N×N):
std::vector< std::vector< float > > item;
Create two for loops running up to the number of keypoints (N) you have, and calculate the distances as suggested by a-Jays:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
Add each distance to the row vector using push_back, for each keypoint (N times).
The KeyPoint class has a member called pt, which in turn has x and y (the (x, y) location of the point) as its own members.
Given two keypoints kp1 and kp2, it's then easy to calculate the Euclidean distance as:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
In your case, it is going to be a double loop iterating over all the keypoints.
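Putting that together, here is a minimal sketch of the position-distance version (the function name keypointDistances is illustrative), exploiting the symmetry remarked on earlier:

#include <opencv2/core/core.hpp>
#include <cmath>
#include <vector>

// Pairwise Euclidean distances between keypoint positions (not descriptors).
std::vector<std::vector<float> > keypointDistances(const std::vector<cv::KeyPoint>& kps)
{
    const size_t n = kps.size();
    std::vector<std::vector<float> > d(n, std::vector<float>(n, 0.f));
    for (size_t i = 0; i < n; ++i)
    {
        for (size_t j = i + 1; j < n; ++j) // distances are symmetric: fill both halves
        {
            cv::Point2f diff = kps[i].pt - kps[j].pt;
            float dist = std::sqrt(diff.x * diff.x + diff.y * diff.y);
            d[i][j] = dist;
            d[j][i] = dist;
        }
    }
    return d;
}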

matrix inversion in boost

I am trying to do a simple matrix inversion operation using Boost. Basically, what I am trying to find is inverted_matrix = inverse(trans(matrix) * matrix), but I am getting this error:

Check failed in file boost_1_53_0/boost/numeric/ublas/lu.hpp at line 299:
detail::expression_type_check (prod (triangular_adaptor<const_matrix_type, upper> (m), e), cm2)
terminate called after throwing an instance of 'boost::numeric::ublas::internal_logic'
  what(): internal logic
Aborted (core dumped)
My attempt:
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <boost/numeric/ublas/vector_proxy.hpp>
#include <boost/numeric/ublas/triangular.hpp>
#include <boost/numeric/ublas/lu.hpp>

namespace ublas = boost::numeric::ublas;

template<class T>
bool InvertMatrix(const ublas::matrix<T>& input, ublas::matrix<T>& inverse)
{
    using namespace boost::numeric::ublas;
    typedef permutation_matrix<std::size_t> pmatrix;
    // create a working copy of the input
    matrix<T> A(input);
    // create a permutation matrix for the LU-factorization
    pmatrix pm(A.size1());
    // perform LU-factorization
    int res = lu_factorize(A, pm);
    if(res != 0)
        return false;
    // create identity matrix of "inverse"
    inverse.assign(ublas::identity_matrix<T>(A.size1()));
    // backsubstitute to get the inverse
    lu_substitute(A, pm, inverse);
    return true;
}

int main()
{
    using namespace boost::numeric::ublas;
    matrix<double> m(4,5);
    vector<double> v(4);
    vector<double> thetas;
    m(0,0) = 1; m(0,1) = 2104; m(0,2) = 5; m(0,3) = 1; m(0,4) = 45;
    m(1,0) = 1; m(1,1) = 1416; m(1,2) = 3; m(1,3) = 2; m(1,4) = 40;
    m(2,0) = 1; m(2,1) = 1534; m(2,2) = 3; m(2,3) = 2; m(2,4) = 30;
    m(3,0) = 1; m(3,1) = 852;  m(3,2) = 2; m(3,3) = 1; m(3,4) = 36;
    std::cout << m << std::endl;
    matrix<double> product = prod(trans(m), m);
    std::cout << product << std::endl;
    matrix<double> inversion(5,5);
    bool inverted;
    inverted = InvertMatrix(product, inversion);
    std::cout << inversion << std::endl;
}
Boost uBLAS has runtime checks to ensure, among other things, numerical stability.
If you look at the source of the error, you can see that it tries to make sure that U*X = B, X = U^-1*B, U*X = B (or something like that) hold to within some epsilon. If your values deviate too much numerically, this check will not pass.
You can disable the checks with -DBOOST_UBLAS_NDEBUG, or tweak BOOST_UBLAS_TYPE_CHECK_EPSILON and BOOST_UBLAS_TYPE_CHECK_MIN.
As m has only 4 rows, prod(trans(m), m) cannot have a rank higher than 4; since the product is a 5x5 matrix, it must be singular (i.e. its determinant is 0), and calculating the inverse of a singular matrix is like dividing by 0. Add independent rows to m to solve this singularity problem.
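For example, here is a sketch (reusing the includes and the InvertMatrix helper from the question) in which m has five independent rows, so prod(trans(m), m) has full rank and the inversion succeeds:

int main()
{
    using namespace boost::numeric::ublas;
    // five independent rows: upper-triangular with a unit diagonal is nonsingular
    matrix<double> m = identity_matrix<double>(5);
    m(0,1) = 2; m(1,2) = 3;
    matrix<double> product = prod(trans(m), m); // now rank 5
    matrix<double> inversion(5,5);
    if (InvertMatrix(product, inversion))
        std::cout << inversion << std::endl;
    else
        std::cout << "singular matrix" << std::endl;
    return 0;
}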
I think your matrix dimensions, 4 by 5, caused the error. As Maarten Hilferink mentioned, you might try a square matrix such as 5 by 5. Here are the requirements for a matrix to have an inverse:
The matrix must be square (same number of rows and columns).
The determinant of the matrix must not be zero. Just as a real number must be nonzero to have a reciprocal, a matrix must have a nonzero determinant to have an inverse.