I would like to translate [vec,val] = eig(A) from MATLAB to C++ using the Eigen library, but I cannot reproduce the same result.
I tried EigenSolver, ComplexEigenSolver and SelfAdjointEigenSolver. None of them gives me the same result as eig(A) in MATLAB.
Sample matrices:
Tv(:,:,223) =
0.8648 -1.9658 -0.2785
-1.9658 4.9142 0.8646
-0.2785 0.8646 0.3447
Tv(:,:,224) =
1.9735 -0.4218 1.0790
-0.4218 3.3012 0.1855
1.0790 0.1855 3.7751
Tv(:,:,225) =
2.4948 1.0185 1.1633
1.0185 1.1732 -0.4479
1.1633 -0.4479 4.3289
Tv(:,:,226) =
0.3321 0.0317 0.1617
0.0317 0.0020 -0.0139
0.1617 -0.0139 0.5834
Eigen:
MatrixXcd vec(3 * n, 3);
VectorXcd val(3);
for (int k = 0; k < n; k++) {
    EigenSolver<Matrix3d> eig(Tv.block<3, 3>(3 * k, 0));
    vec.block<3, 3>(3 * k, 0) = eig.eigenvectors();
    cout << endl << vec.block<3, 3>(3 * k, 0) << endl;
    val = eig.eigenvalues();
    cout << "val= " << endl << val << endl;
}
//results
(0.369152,0) (-0.830627,0) (-0.416876,0)
(-0.915125,0) (-0.403106,0) (-0.00717218,0)
(-0.162088,0) (0.384142,0) (-0.908935,0)
val=
(5.86031,0)
(0.0396418,0)
(0.223765,0)
(0.881678,0) (0.204005,0) (0.425472,0)
(0.23084,0) (-0.97292,0) (-0.011858,0)
(-0.411531,0) (-0.108671,0) (0.904894,0)
val=
(1.35945,0)
(3.41031,0)
(4.27996,0)
(0.526896,0) (-0.726801,0) (0.440613,0)
(-0.813164,0) (-0.581899,0) (0.0125466,0)
(-0.247274,0) (0.364902,0) (0.897609,0)
val=
(0.377083,0)
(2.72623,0)
(4.89367,0)
(0.88992,0) (-0.43968,0) (0.121341,0)
(0.13406,0) (-0.00214387,0) (-0.990971,0)
(-0.43597,0) (-0.898152,0) (-0.0570358,0)
val=
(0.257629,0)
(0.662467,0)
(-0.00267575,0)
MATLAB:
for k = 1:n
    [u,d] = eig(Tv(:,:,k))
end
%results
u =
0.8306 -0.4169 -0.3692
0.4031 -0.0072 0.9151
-0.3841 -0.9089 0.1621
d =
0.0396 0 0
0 0.2238 0
0 0 5.8603
u =
0.8817 0.2040 0.4255
0.2308 -0.9729 -0.0119
-0.4115 -0.1087 0.9049
d =
1.3594 0 0
0 3.4103 0
0 0 4.2800
u =
-0.5269 0.7268 0.4406
0.8132 0.5819 0.0125
0.2473 -0.3649 0.8976
d =
0.3771 0 0
0 2.7262 0
0 0 4.8937
u =
-0.1213 -0.8899 0.4397
0.9910 -0.1341 0.0021
0.0570 0.4360 0.8982
d =
-0.0027 0 0
0 0.2576 0
0 0 0.6625
What's your suggestion?
I don't get your question: looking at your results, they all return the same thing. Recall that the eigen-decomposition of a matrix is not completely unique:
eigenvalues/vectors can be arbitrarily reordered
if v is an eigenvector, then -v is also a valid eigenvector
Since your matrices are symmetric, you should use SelfAdjointEigenSolver to get the eigenvalues automatically sorted as in MATLAB. Then the eigenvectors will only differ in their sign, and you will have to live with that.
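For illustration, here is a minimal sketch of the loop from the question rewritten with SelfAdjointEigenSolver (assuming Tv and n as defined above); the eigenvalues then come out sorted in increasing order and everything stays real:
MatrixXd vec(3 * n, 3);   // real storage is enough for symmetric matrices
VectorXd val(3);
for (int k = 0; k < n; k++) {
    SelfAdjointEigenSolver<Matrix3d> eig(Tv.block<3, 3>(3 * k, 0));
    vec.block<3, 3>(3 * k, 0) = eig.eigenvectors();   // columns ordered to match the eigenvalues
    val = eig.eigenvalues();                          // sorted in increasing order, as eig gives here in MATLAB
    cout << "val= " << endl << val << endl;
}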
Well.... the results are the same....
Result eigen:
(0.369152,0) (-0.830627,0) (-0.416876,0)
(-0.915125,0) (-0.403106,0) (-0.00717218,0)
(-0.162088,0) (0.384142,0) (-0.908935,0)
val=
(5.86031,0)
(0.0396418,0)
(0.223765,0)
result matlab:
u =
0.8306 -0.4169 -0.3692
0.4031 -0.0072 0.9151
-0.3841 -0.9089 0.1621
d =
0.0396 0 0
0 0.2238 0
0 0 5.8603
I have good news....
The vectors are THE SAME, but unordered.....
eigV1 from eigen is -eigV3 from Matlab,
eigV2 from eigen is -eigV1 from Matlab,
eigV3 from eigen is -eigV2 from Matlab,
The eigenvalues are reordered correspondingly....
I am developing a program that makes heavy use of the Armadillo library. I have version 10.8.2, linked against Intel oneAPI MKL 2022.0.2. At some point, I need to perform many
sparse-matrix times dense-vector multiplications, where both operands are defined as Armadillo structures. I have found this to be a probable bottleneck and was curious whether replacing the Armadillo multiplication with "bare bones" sparse BLAS routines from MKL (mkl_sparse_d_mv) would speed things up. But in order to do so, I need to convert from Armadillo's SpMat to something that MKL understands. As per the Armadillo docs, sparse matrices are stored in CSC format, so I have tried
mkl_sparse_d_create_csc. My attempt at this is below:
#include <iostream>
#include <armadillo>
#include "mkl.h"

int main()
{
    arma::umat locations = {{0, 0, 1, 3, 2}, {0, 1, 0, 2, 3}};
    // arma::vec vals = {0.5, 2.5, 2.5, 4.5, 4.5};
    arma::vec vals = {0.5, 2.5, 3.5, 4.5, 5.5};
    arma::sp_mat X(locations, vals);
    std::cout << "X = \n" << arma::mat(X) << std::endl;

    arma::vec v = {1, 1, 1, 1};
    arma::vec v2;
    v2.resize(4);

    std::cout << "v = \n" << v << std::endl;
    std::cout << "X * v = \n" << X * v << std::endl;

    MKL_INT *cols_beg = static_cast<MKL_INT *>(mkl_malloc(X.n_cols * sizeof(MKL_INT), 64));
    MKL_INT *cols_end = static_cast<MKL_INT *>(mkl_malloc(X.n_cols * sizeof(MKL_INT), 64));
    MKL_INT *row_idx = static_cast<MKL_INT *>(mkl_malloc(X.n_nonzero * sizeof(MKL_INT), 64));
    double *values = static_cast<double *>(mkl_malloc(X.n_nonzero * sizeof(double), 64));

    for (MKL_INT i = 0; i < X.n_cols; i++)
    {
        cols_beg[i] = static_cast<MKL_INT>(X.col_ptrs[i]);
        cols_end[i] = static_cast<MKL_INT>((--X.end_col(i)).pos());
        std::cout << cols_beg[i] << " --- " << cols_end[i] << std::endl;
    }
    std::cout << std::endl;

    for (MKL_INT i = 0; i < X.n_nonzero; i++)
    {
        row_idx[i] = static_cast<MKL_INT>(X.row_indices[i]);
        values[i] = X.values[i];
        std::cout << row_idx[i] << " --- " << values[i] << std::endl;
    }
    std::cout << std::endl;

    sparse_matrix_t X_mkl = NULL;
    sparse_status_t res = mkl_sparse_d_create_csc(&X_mkl, SPARSE_INDEX_BASE_ZERO,
                                                  X.n_rows, X.n_cols, cols_beg, cols_end, row_idx, values);
    if (res == SPARSE_STATUS_SUCCESS) std::cout << "Constructed mkl representation of X" << std::endl;

    matrix_descr dsc;
    dsc.type = SPARSE_MATRIX_TYPE_GENERAL;
    sparse_status_t stat = mkl_sparse_d_mv(SPARSE_OPERATION_NON_TRANSPOSE, 1.0, X_mkl, dsc, v.memptr(), 0.0, v2.memptr());
    std::cout << "Multiplication status = " << stat << std::endl;
    if (stat == SPARSE_STATUS_SUCCESS)
    {
        std::cout << "Calculated X*v via mkl" << std::endl;
        std::cout << v2;
    }

    mkl_free(cols_beg);
    mkl_free(cols_end);
    mkl_free(row_idx);
    mkl_free(values);
    mkl_sparse_destroy(X_mkl);
    return 0;
}
I am compiling this code (with the help of the Link Line Advisor) with
icpc -g testing.cpp -o intel_testing.out -DARMA_ALLOW_FAKE_GCC -O3 -xhost -Wall -Wextra -L${MKLROOT}/lib/intel64 -liomp5 -lpthread -lm -DMKL_ILP64 -qmkl=parallel -larmadillo
on Pop!_OS 21.10.
It compiles and runs without any problems. The output is as follows:
X =
0.5000 2.5000 0 0
3.5000 0 0 0
0 0 0 5.5000
0 0 4.5000 0
v =
1.0000
1.0000
1.0000
1.0000
X * v =
3.0000
3.5000
5.5000
4.5000
0 --- 1
2 --- 2
3 --- 3
4 --- 4
0 --- 0.5
1 --- 3.5
0 --- 2.5
3 --- 4.5
2 --- 5.5
Constructed mkl representation of X
Multiplication status = 0
Calculated X*v via mkl
0.5000
0
0
0
As we can see, the result of Armadillo's multiplication is correct, whereas the one from MKL is wrong. My question is this: am I making a mistake somewhere, or is there something wrong with MKL? I suspect the former, of course, but after spending a considerable amount of time I cannot find anything. Any help would be much appreciated!
EDIT
As CJR and Vidyalatha_Intel suggested, I have changed cols_end to
cols_end[i] = static_cast<MKL_INT>((X.end_col(i)).pos());
The result is now
X =
0.5000 2.5000 0 0
3.5000 0 0 0
0 0 0 5.5000
0 0 4.5000 0
v =
1.0000
1.0000
1.0000
1.0000
X * v =
3.0000
3.5000
5.5000
4.5000
0 --- 2
2 --- 3
3 --- 4
4 --- 5
0 --- 0.5
1 --- 3.5
0 --- 2.5
3 --- 4.5
2 --- 5.5
Constructed mkl representation of X
Multiplication status = 0
Calculated X*v via mkl
4.0000
2.5000
0
0
cols_end is indeed 2, 3, 4, 5 as suggested, but the result is still wrong.
Yes, the cols_end array is incorrect, as pointed out by CJR. It should contain 2, 3, 4, 5. Please see the documentation for this parameter of the function mkl_sparse_d_create_csc:
cols_end:
This array contains col indices, such that cols_end[i] - ind - 1 is the last index of col i in the arrays values and row_indx. ind takes 0 for zero-based indexing and 1 for one-based indexing.
https://www.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/blas-and-sparse-blas-routines/inspector-executor-sparse-blas-routines/matrix-manipulation-routines/mkl-sparse-create-csc.html
Change this line
cols_end[i] = static_cast<MKL_INT>((--X.end_col(i)).pos());
to
cols_end[i] = static_cast<MKL_INT>((X.end_col(i)).pos());
Now recompile and run the code. I've tested it and it is showing the correct results. Image with results and compilation command
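For reference, working out the CSC layout of the 4x4 X above by hand (zero-based indexing) gives the arrays mkl_sparse_d_create_csc expects; they match what the edited code now prints:
// Hand-derived CSC arrays for the X in the question (zero-based indexing)
MKL_INT cols_beg_expected[] = {0, 2, 3, 4};     // index of the first entry of each column
MKL_INT cols_end_expected[] = {2, 3, 4, 5};     // one past the last entry of each column
MKL_INT row_idx_expected[]  = {0, 1, 0, 3, 2};  // row index of each stored value
double  values_expected[]   = {0.5, 3.5, 2.5, 4.5, 5.5};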
I would like to generate a matrix in C++ using Armadillo that behaves like a "truth table", for example:
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
I was thinking of a loop of this kind, but I'm not very familiar with Armadillo and its data structures.
imat A = zeros<imat>(8, 3);
/* fill each row */
for(int i=0; i < 8; i++)
{
A.row(i) = (i/(pow(2, i)))%2 * ones<ivec>(3).t(); //
}
cout << "A = \n" << A << endl;
Any ideas?
If you need a large truth-table matrix (~2^30 x 30), as you said here, then from a memory point of view you should implement a function that quickly calculates the values you want, rather than storing them in a matrix.
This is easily done using std::bitset as follows.
Note that N must be determined at compile-time in this method.
Then you can get the value of your A(i,j) by matrix<3>(i,j):
#include <bitset>
template <std::size_t N>
std::size_t matrix(std::size_t i, std::size_t j)
{
return std::bitset<N>(i)[N-j-1];
}
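A quick check against the table in the question (pasting this main() below the function above): row i is simply the binary representation of i, read from the most significant bit down:
#include <iostream>

// matrix<N> as defined above

int main()
{
    // Print the 8 x 3 truth table from the question without ever storing it.
    for (std::size_t i = 0; i < 8; ++i)
    {
        for (std::size_t j = 0; j < 3; ++j)
            std::cout << matrix<3>(i, j) << ' ';
        std::cout << '\n';
    }
}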
I would like to find a mapping f: X --> N, with multiple discrete natural variables X of varying dimension, where f produces a unique number between 0 and the product of all dimensions. For example, assume X = {a,b,c}, with dimensions |a| = 2, |b| = 3, |c| = 2. f should produce values from 0 to 11 (2*3*2 = 12 values).
a b c | f(X)
0 0 0 | 0
0 0 1 | 1
0 1 0 | 2
0 1 1 | 3
0 2 0 | 4
0 2 1 | 5
1 0 0 | 6
1 0 1 | 7
1 1 0 | 8
1 1 1 | 9
1 2 0 | 10
1 2 1 | 11
This is easy when all dimensions are equal. Assume binary for example:
f(a=1,b=0,c=1) = 1*2^2 + 0*2^1 + 1*2^0 = 5
Using this naively with varying dimensions we would get overlapping values:
f(a=0,b=1,c=1) = 0*2^2 + 1*3^1 + 1*2^0 = 4
f(a=1,b=0,c=0) = 1*2^2 + 0*3^1 + 0*2^0 = 4
A computationally fast function is preferred as I intend to use/implement it in C++. Any help is appreciated!
OK, the most important part here is the math and the algorithmics. You have variable dimensions of sizes (from lowest order to highest) d0, d1, ..., dn. A tuple (x0, x1, ..., xn) with xi < di will represent the following number: x0 + d0*x1 + ... + (d0*d1*...*dn-1)*xn
In pseudo-code, I would write:
result = 0
loop for i=n to 0 step -1
result = result * d[i] + x[i]
To implement it in C++, my advice would be to create a class whose constructor takes the number of dimensions and the dimensions themselves (or simply a vector<int> containing the dimensions), plus a method that accepts an array or a vector of the same size containing the values. Optionally, you could check that no input value is greater than or equal to its dimension.
A possible C++ implementation could be:
#include <stdexcept>
#include <vector>

using std::vector;

class F {
    vector<int> dims;
public:
    F(vector<int> d) : dims(d) {}

    int to_int(vector<int> x) {
        if (x.size() != dims.size()) {
            throw std::invalid_argument("Wrong size");
        }
        int result = 0;
        // accumulate from the highest-order variable down to the lowest
        for (int i = dims.size() - 1; i >= 0; i--) {
            if (x[i] >= dims[i]) {
                throw std::invalid_argument("Value >= dimension");
            }
            result = result * dims[i] + x[i];
        }
        return result;
    }
};
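A quick usage sketch for the example in the question (pasted below the class above). Note that with this loop the first vector element is the lowest-order variable, so the tuple a=1, b=1, c=0 is passed as {c, b, a} with dimensions {2, 3, 2}:
#include <iostream>

// class F as defined above

int main() {
    F f({2, 3, 2});                    // dimensions of c, b, a (lowest-order first)
    std::cout << f.to_int({0, 1, 1});  // c=0, b=1, a=1 -> prints 8, matching the table
}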
I don't understand the result I get when I try to iterate over valuePtr of a sparse matrix. Here is my code.
#include <iostream>
#include <vector>
#include <Eigen/Sparse>
using namespace Eigen;
int main()
{
    SparseMatrix<double> sm(4, 5);

    std::vector<int> cols = {0, 1, 4, 0, 4, 0, 4};
    std::vector<int> rows = {0, 0, 0, 2, 2, 3, 3};
    std::vector<double> values = {0.2, 0.4, 0.6, 0.3, 0.7, 0.9, 0.2};

    for (int i = 0; i < cols.size(); i++)
        sm.insert(rows[i], cols[i]) = values[i];

    std::cout << sm << std::endl;

    int nz = sm.nonZeros();
    std::cout << "non_zeros : " << nz << std::endl;

    for (auto it = sm.valuePtr(); it != sm.valuePtr() + nz; ++it)
        std::cout << *it << std::endl;

    return 0;
}
Output:
0.2 0.4 0 0 0.6 // The values are in the matrix
0 0 0 0 0
0.3 0 0 0 0.7
0.9 0 0 0 0.2
non_zeros : 7
0.2 // but valuePtr() does not point to them
0.3 // I expected: 0.2, 0.3, 0.9, 0.4, 0.6, 0.7, 0.2
0.9
0
0.4
0
0
I don't understand why I am getting zeros, what's going on here?
According to the documentation for SparseMatrix:
Unlike the compressed format, there might be extra space inbetween the
nonzeros of two successive columns (resp. rows) such that insertion of
new non-zero can be done with limited memory reallocation and copies.
[...]
A call to the function makeCompressed() turns the matrix into the standard compressed format compatible with many library.
For example:
This storage scheme is better explained on an example. The following
matrix
0 3 0 0 0
22 0 0 0 17
7 5 0 1 0
0 0 0 0 0
0 0 14 0 8
and one of its possible sparse, column major representation:
Values: 22 7 _ 3 5 14 _ _ 1 _ 17 8
InnerIndices: 1 2 _ 0 2 4 _ _ 2 _ 1 4
[...]
The "_" indicates available free space to quickly insert new elements.
Since valuePtr() simply returns a pointer to the Values array, you'll see the empty spaces (the zeros that got printed) unless you make the matrix compressed.
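For example, a minimal change to the program above: add one call right after the insertion loop, and the existing valuePtr() loop then visits exactly nonZeros() stored values in column-major order:
sm.makeCompressed();  // squeeze out the reserved free space; storage becomes plain compressed column storage

// The valuePtr() loop from the question now prints:
// 0.2 0.3 0.9 0.4 0.6 0.7 0.2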
// Allocate the "main" array
array_2D = new ushort * [nx];
// Allocate each member of the "main" array
for (ii = 0; ii < nx; ii++)
    array_2D[ii] = new ushort[ny];

// Allocate the "main" array
array_3D = new ushort ** [numexp];
// Allocate each member of the "main" array
for (kk = 0; kk < numexp; kk++)
    array_3D[kk] = new ushort * [nx];

for (kk = 0; kk < numexp; kk++)
    for (ii = 0; ii < nx; ii++)
        array_3D[kk][ii] = new ushort[ny];
The values of numexp, nx and ny are obtained from the user.
Is this the correct form of dynamic allocation for a 3D array? We know that the code works for the 2D array. If this is not correct, can anyone suggest a better method?
I think the simplest way to allocate and deal with a multidimensional array is to use one big 1-d array (or better yet a std::vector) and provide an interface that indexes into it correctly.
This is easiest to think about first in 2 dimensions. Consider a 2D array with "x" and "y" axes:
      x=0  x=1  x=2
y=0    a    b    c
y=1    d    e    f
y=2    g    h    i
We can represent this using a 1-d array, rearranged as follows:
y= 0 0 0 1 1 1 2 2 2
x= 0 1 2 0 1 2 0 1 2
array: a b c d e f g h i
So our 2d array is simply
unsigned int maxX = 0;
unsigned int maxY = 0;
std::cout << "Enter x and y dimensions";
std::cin >> maxX >> maxY;
int* array = new int[maxX*maxY];

// write to the location where x = 1, y = 2
int x = 1;
int y = 2;
array[y*maxX /*jump to correct row*/ + x /*shift into correct column*/] = 0;
The most important thing is to wrap the access up in a neat interface so you only have to figure this out once.
In a similar way we can work with 3-d arrays (here 2 x 3 x 3):
z =    0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
y =    0 0 0 1 1 1 2 2 2 0 0 0 1 1 1 2 2 2
x =    0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2
array: a b c d e f g h i j k l m n o p q r
Once you figure out how to index into the array correctly and put this code in a common place, you don't have to deal with the nastiness of pointers to arrays of pointers to arrays of pointers. You'll only have to do one delete [] at the end.
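A sketch of the 3-d version of the same idea (the sizes below are placeholders for the user-supplied dimensions):
unsigned int maxX = 3, maxY = 3, maxZ = 2;   // example sizes
int* array3d = new int[maxZ * maxY * maxX];  // one flat allocation

// write to the location where x = 1, y = 2, z = 1
unsigned int x = 1, y = 2, z = 1;
array3d[z*maxY*maxX /*jump to correct z-slab*/ + y*maxX /*jump to correct row*/ + x /*shift into correct column*/] = 0;

delete [] array3d;                           // a single delete [] frees everything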
Looks fine to me, as long as an array of arr[numexp][nx][ny] is what you wanted.
A little tip: you can put the allocation of the third dimension inside the loop over the second dimension, i.e. allocate each third-dimension array while its parent sub-array is being allocated:
ushort*** array_3D = new ushort**[nx];
for(int i=0; i<nx; ++i){
array_3D[i] = new ushort*[ny];
for(int j=0; j<ny; ++j)
array_3D[i][j] = new ushort[nz];
}
And of course, the general hint: Do that with std::vectors to not have to deal with that nasty (de)allocation stuff. :)
#include <vector>

int main(){
    using namespace std;
    typedef unsigned short ushort;
    typedef vector<ushort> usvec;

    // numexp, nx and ny come from the user; example values used here
    size_t numexp = 2, nx = 3, ny = 4;

    vector<vector<usvec> > my3DVector(numexp, vector<usvec>(nx, vector<ushort>(ny)));
    // size of:   dimension 1 ^^^^^^          dimension 2 ^^   dimension 3 ^^^^^^^^
}
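Element access inside main() then looks the same as with raw pointers (the indices below are only illustrative, valid for the example sizes above):
my3DVector[1][2][3] = 42;             // indices along dimensions 1, 2 and 3
ushort value = my3DVector[1][2][3];   // read it back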