I am looking for a built-in way with the Eigen library to perform coordinate transformations by normal vectors in 2D space.
Mathematically, it's not difficult: let v = (v_x, v_y) be a 2D column vector and n = (n_x, n_y) a normal vector; the transformation I am looking for is by a rotation matrix:
v_T = N * v, with v_T being the transformed vector and N being the rotation matrix, which is
|  n_x  n_y |
| -n_y  n_x |
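As a sanity check, writing a unit normal as n = (cos θ, sin θ) makes N the rotation matrix R(−θ), and N * n = (1, 0)ᵀ: the transformation rotates each vector so that its normal vector lands on the x-axis.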
In my case, the data I need to transform is stored in an Array2Xd and the normal vectors in a Matrix2Xd, each column holding the x- and y-components. I need to transform each column of the array by the corresponding normal vector in the matrix.
Currently, I'm doing it like this:
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;
/* transform a single vector, just for illustration */
Array2d transform_s( const Ref<const Array2d>& v, const Ref<const Vector2d>& n ){
    return {
        n.dot( v.matrix() ),
        -n.y() * v.x() + n.x() * v.y()
    };
}
/* transform multiple columns */
Array2Xd transform_m( const Ref<const Array2Xd>& v, const Ref<const Array2Xd>& n ){
    Array2Xd transformed ( 2, v.cols() );
    /* colwise dot product for first row */
    transformed.row(0) = (n * v).colwise().sum();
    /* even less elegant calculation for the second row */
    transformed.row(1) = n.row(0) * v.row(1) - n.row(1) * v.row(0);
    return transformed;
}
int main(){
    Array2Xd vals (2, 3);
    vals <<
        2, 0,-1,
        0, 3, 2;

    Matrix2Xd n;
    n.resizeLike(vals);
    n <<
        0, 0, 1,
        1,-1, 1;
    n.colwise().normalize();

    std::cout
        << "single column:\n" << transform_s( vals.col(0), n.col(0) )
        << "\nall columns:\n"  << transform_m( vals, n.array() )
        << "\n";

    return 0;
}
I'm aware of Eigen::Rotation2D, but it appears to require either an angle or a rotation matrix. I am specifically looking for a way to provide only the normal vectors; otherwise I need to build the rotation matrices from the normal vectors myself, which doesn't really reduce the complexity on my end.
If there's no way to do this with Eigen, I'll accept that as an answer. In that case, I'd be very happy about a more efficient implementation of what I wrote above.
What you are doing is essentially a complex multiplication with conj(n).
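Writing v as the complex number v_x + i·v_y and n as n_x + i·n_y gives conj(n) * v = (n_x·v_x + n_y·v_y) + i·(n_x·v_y − n_y·v_x), which is exactly the pair of rows computed above.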
There is no elegant way to reinterpret a Vector2d/Array2Xd as a complex<double>/ArrayXcd, but you can hack something together using Maps:
Array2Xd transform_complex( const Ref<const Array2Xd>& v, const Ref<const Array2Xd>& n ){
    Array2Xd transformed(2, v.cols());
    ArrayXcd::Map(reinterpret_cast<std::complex<double>*>(transformed.data()), v.cols())
        = ArrayXcd::Map(reinterpret_cast<std::complex<double> const*>(v.data()), v.cols())
        * ArrayXcd::Map(reinterpret_cast<std::complex<double> const*>(n.data()), n.cols()).conjugate();
    return transformed;
}
You could write yourself a helper function which takes a const Ref<const Array2Xd>& and returns a Map<ArrayXcd> with the same content.
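For example, a minimal sketch of such a helper (the name as_complex is mine; it assumes the referenced data is contiguous, i.e. an outer stride of 2, which holds for a plain Array2Xd but not for every block expression):
Map<const ArrayXcd> as_complex( const Ref<const Array2Xd>& a ){
    /* reinterpret the 2xN array of (x, y) pairs as N complex numbers */
    return { reinterpret_cast<const std::complex<double>*>(a.data()), a.cols() };
}
With that, the body of transform_complex reduces to mapping the output buffer and assigning as_complex(v) * as_complex(n).conjugate() to it.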
My task is to find all sub-matrices inside a matrix such that each sub-matrix counted satisfies a certain condition and also is not a part of another sub-matrix that works.
My first thought was to write a recursive procedure so that we could simply return from the current sub-matrix whenever we find that it works (to prevent any sub-matrices of that sub-matrix from being tested). Here is my code that attempts to do this:
void find(int xmin, int xmax, int ymin, int ymax){
    if(xmin > xmax || ymin > ymax){ return; }
    else if(works(xmin, xmax, ymin, ymax)){ ++ANS; return; }
    find(xmin + 1, xmax, ymin, ymax);
    find(xmin, xmax - 1, ymin, ymax);
    find(xmin, xmax, ymin + 1, ymax);
    find(xmin, xmax, ymin, ymax - 1);
}
The problem with my current method seems to be that it allows sub-matrices to be visited more than once, meaning the return statements are ineffective: they don't actually prevent sub-matrices of working sub-matrices from being counted, because those are still reached from other sub-matrices. I think I have the right idea with writing a recursive procedure, though. Can someone please point me in the right direction?
Obviously, you need a way to check whether a submatrix was evaluated before or is contained in a larger solution. You also need to account for the fact that after a solution is found, a larger solution covering it may be found later.
One way of doing this is to use a structure called an R*-tree, which allows querying spatial (or geometric) data efficiently. For this purpose, you could use the R-tree implementation from Boost. By using a box (rectangle) type to represent a submatrix, you can then query the R-tree with boost::geometry::index::contains (to find previously found solutions which include the submatrix under consideration) and boost::geometry::index::within (to find previously found solutions which are contained within a newly found solution).
Here is a working example in C++11, which is based on your idea:
#include <vector>
#include <numeric>
#include <iostream>

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/register/point.hpp>
#include <boost/geometry/geometries/register/box.hpp>
#include <boost/geometry/index/rtree.hpp>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

struct Element
{
    int x, y;
};

struct Box
{
    Element lt, rb;
};

BOOST_GEOMETRY_REGISTER_POINT_2D(Element, long, cs::cartesian, x, y)
BOOST_GEOMETRY_REGISTER_BOX(Box, Element, lt, rb)

template<class M>
bool works(M&& matrix, Box box) {
    // Dummy function to check if the sum of a submatrix is divisible by 7
    long s = 0;
    for (int y = box.lt.y; y < box.rb.y; y++)
        for (int x = box.lt.x; x < box.rb.x; x++)
            s += matrix[y][x];
    return s % 7 == 0;
}

template <class T, class M>
void find(T& tree, M&& matrix, Box box, T& result) {
    if (box.lt.x >= box.rb.x || box.lt.y >= box.rb.y) return;
    // Skip this submatrix if it is contained in an already processed one
    for (auto it = tree.qbegin(bgi::contains(box)); it != tree.qend(); ++it) return;
    if (works(matrix, box)) {
        // Found a new solution
        // Remove any working submatrices which are within the new solution
        std::vector<Box> temp;
        for (auto it = result.qbegin(bgi::within(box)); it != result.qend(); ++it)
            temp.push_back(*it);
        result.remove(std::begin(temp), std::end(temp));
        // Remember the new solution
        result.insert(box);
        tree.insert(box);
        return;
    }
    // Recursion: shrink the box by one row/column on each side
    find(tree, matrix, Box{{box.lt.x+1,box.lt.y},{box.rb.x,box.rb.y}}, result);
    find(tree, matrix, Box{{box.lt.x,box.lt.y+1},{box.rb.x,box.rb.y}}, result);
    find(tree, matrix, Box{{box.lt.x,box.lt.y},{box.rb.x-1,box.rb.y}}, result);
    find(tree, matrix, Box{{box.lt.x,box.lt.y},{box.rb.x,box.rb.y-1}}, result);
    tree.insert(box);
}

template <class T>
void show(const T& vec) {
    for (auto box : vec) {
        std::cout << "Start at (" << box.lt.x << ", " << box.lt.y << "), width="
                  << box.rb.x-box.lt.x << ", height=" << box.rb.y-box.lt.y << "\n";
    }
}

int main()
{
    // Initialize R-trees (tree memoizes visited boxes, result holds solutions)
    const size_t rtree_max_size = 20000;
    bgi::rtree<Box, bgi::rstar<rtree_max_size> > tree, result;

    // Initialize sample matrix
    const int width = 4;
    const int height = 3;
    int matrix[height][width];
    std::iota((int*)matrix, (int*)matrix + height*width, 1);

    // Calculate result
    find(tree, matrix, Box{{0,0},{width,height}}, result);

    // Output
    std::cout << "Resulting submatrices:\n";
    show(result);
    return 0;
}
In this example the following matrix is considered:
1 2 3 4
5 6 7 8
9 10 11 12
And the program will find all submatrices for which the sum of their elements is divisible by 7. Output:
Resulting submatrices:
Start at (0, 2), width=4, height=1
Start at (1, 0), width=3, height=3
Start at (0, 0), width=2, height=2
Start at (0, 1), width=1, height=2
What I liked about your recursive approach is that it works very fast even for large matrices of 1000×1000 elements.
I'm trying to reproduce some numpy code on Gaussian Processes (from here) using Eigen. Basically, I need to sample from a multivariate normal distribution:
samples = np.random.multivariate_normal(mu.ravel(), cov, 1)
The mean vector is currently zero, while the covariance matrix is a square matrix generated via the isotropic squared exponential kernel:
sqdist = np.sum(X1**2, 1).reshape(-1, 1) + np.sum(X2**2, 1) - 2 * np.dot(X1, X2.T)
return sigma_f**2 * np.exp(-0.5 / l**2 * sqdist)
I can generate the covariance matrix just fine (it can probably be cleaned up, but for now it's a POC):
Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
    auto dists = ((-2.0 * (x1 * x2.transpose())).colwise()
                 + x1.rowwise().squaredNorm()).rowwise()
                 + x2.rowwise().squaredNorm().transpose();
    return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}
However, my problems start when I need to sample the multivariate normal.
I've tried using the solution proposed in this accepted answer; however, the decomposition only works with covariance matrices of size up to 30x30; beyond that, LLT fails to decompose the matrix. The alternative version provided in the answer also does not work and produces NaNs. I tried LDLT as well, but it also breaks (D contains negative values, so sqrt gives NaN).
Then I got curious and looked into how numpy does this. It turns out the numpy implementation uses an SVD decomposition (via LAPACK) rather than Cholesky. So I tried copying their implementation:
// SVD on the covariance matrix generated via kernel function
Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
normTransform = (-solver.matrixV().transpose()).array().colwise() * solver.singularValues().array().sqrt();
// Generate gaussian samples, "randN" is from the multivariate StackOverflow answer
Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();
The various minuses are me trying to exactly reproduce numpy's results.
In any case, this works perfectly fine, even with large dimensions. I was wondering why Eigen is not able to do LLT, but SVD works. The covariance matrix I use is the same. Is there something I can do to simply use LLT?
EDIT: Here is my full example:
#include <iostream>
#include <random>

#include <Eigen/Cholesky>
#include <Eigen/SVD>
#include <Eigen/Eigenvalues>

using Matrix2D = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor | Eigen::AutoAlign>;
using Vector = Eigen::Matrix<double, Eigen::Dynamic, 1>;

/*
    We need a functor that can pretend it's const,
    but to be a good random number generator
    it needs mutable state.
*/
namespace Eigen {
    namespace internal {
        template<typename Scalar>
        struct scalar_normal_dist_op
        {
            static std::mt19937 rng;                       // The uniform pseudo-random algorithm
            mutable std::normal_distribution<Scalar> norm; // The gaussian combinator

            EIGEN_EMPTY_STRUCT_CTOR(scalar_normal_dist_op)

            template<typename Index>
            inline const Scalar operator() (Index, Index = 0) const { return norm(rng); }
        };

        template<typename Scalar> std::mt19937 scalar_normal_dist_op<Scalar>::rng;

        template<typename Scalar>
        struct functor_traits<scalar_normal_dist_op<Scalar> >
        { enum { Cost = 50 * NumTraits<Scalar>::MulCost, PacketAccess = false, IsRepeatable = false }; };
    } // end namespace internal
} // end namespace Eigen

Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
    auto dists = ((-2.0 * (x1 * x2.transpose())).colwise() + x1.rowwise().squaredNorm()).rowwise()
                 + x2.rowwise().squaredNorm().transpose();
    return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}

int main() {
    unsigned size = 50;
    unsigned seed = 1;

    Matrix2D X = Vector::LinSpaced(size, -5.0, 4.8);

    Eigen::internal::scalar_normal_dist_op<double> randN;           // Gaussian functor
    Eigen::internal::scalar_normal_dist_op<double>::rng.seed(seed); // Seed the rng

    Vector means = Vector::Zero(X.rows());
    auto covs = kernel(X, X);

    Eigen::LLT<Matrix2D> cholSolver(covs);

    // We can only use the cholesky decomposition if
    // the covariance matrix is symmetric, pos-definite.
    // But a covariance matrix might be pos-semi-definite.
    // In that case, we'll go to an EigenSolver
    Eigen::MatrixXd normTransform;
    if (cholSolver.info() == Eigen::Success) {
        std::cout << "Used LLT\n";
        // Use cholesky solver
        normTransform = cholSolver.matrixL();
    } else {
        std::cout << "Broken\n";
        Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
        normTransform = (-solver.matrixV().transpose()).array().colwise()
                        * solver.singularValues().array().sqrt();
    }

    Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
    Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();
    return 0;
}
Given a sparse matrix A and a vector b, I would like to obtain a solution x to the equation A * x = b as well as the kernel of A.
One possibility is to convert A to a dense representation.
#include <iostream>

#include <Eigen/Dense>
#include <Eigen/SparseQR>

int main()
{
    // This is a toy problem. My actual matrix
    // is of course bigger and sparser.
    Eigen::SparseMatrix<double> A(2,2);
    A.insert(0,0) = 1;
    A.insert(0,1) = 2;
    A.insert(1,0) = 4;
    A.insert(1,1) = 8;
    A.makeCompressed();

    Eigen::Vector2d b;
    b << 3, 12;

    Eigen::SparseQR<Eigen::SparseMatrix<double>,
                    Eigen::COLAMDOrdering<int> > solver;
    solver.compute(A);
    std::cout << "Solution:\n" << solver.solve(b) << std::endl;

    Eigen::Matrix2d A_dense(A);
    std::cout << "Kernel:\n" << A_dense.fullPivLu().kernel() << std::endl;
    return 0;
}
Is it possible to do the same directly in the sparse representation? I could not find a function kernel() anywhere except in FullPivLu.
I think #chtz's answer is almost correct, except that we need to take the last A.cols() - qr.rank() columns. Here is a mathematical derivation.
Say we do a QR decomposition of your matrix Aᵀ as
Aᵀ * P = [Q₁ Q₂] * [R; 0] = Q₁ * R
where P is the permutation matrix, thus
Aᵀ = Q₁ * R * P⁻¹.
We can see that Range(Aᵀ) = Range(Q₁ * R * P⁻¹) = Range(Q₁) (because both P and R are invertible).
Since Aᵀ and Q₁ have the same range space, this implies that A and Q₁ᵀ will also have the same null space, namely Null(A) = Null(Q₁ᵀ). (Here we use the property that Range(M) and Null(Mᵀ) are complements to each other for any matrix M, hence Null(A) = complement(Range(Aᵀ)) = complement(Range(Q₁)) = Null(Q₁ᵀ)).
On the other hand, since the matrix [Q₁ Q₂] is orthogonal, Null(Q₁ᵀ) = Range(Q₂); thus Null(A) = Range(Q₂), i.e., kernel(A) = Q₂.
Since Q₂ consists of the last A.cols() - qr.rank() columns of Q, you can call rightCols(A.cols() - qr.rank()) to retrieve the kernel of A.
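A minimal sketch of that recipe with Eigen's SparseQR (the helper name kernel_of is mine; matrixQ() is returned in implicit form, so it is multiplied by an identity matrix to materialize Q):
Eigen::MatrixXd kernel_of(const Eigen::SparseMatrix<double>& A)
{
    // QR-decompose the transpose: Aᵀ * P = Q * R
    Eigen::SparseMatrix<double> At = A.transpose();
    At.makeCompressed();
    Eigen::SparseQR<Eigen::SparseMatrix<double>,
                    Eigen::COLAMDOrdering<int> > qr(At);
    // Materialize the full A.cols() x A.cols() matrix Q
    Eigen::MatrixXd Q = qr.matrixQ() * Eigen::MatrixXd::Identity(A.cols(), A.cols());
    // The last A.cols() - qr.rank() columns of Q span the kernel of A
    return Q.rightCols(A.cols() - qr.rank());
}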
For more information on the kernel space, you could refer to https://en.wikipedia.org/wiki/Kernel_(linear_algebra)
I have a ~3000x3000 covariance-like matrix on which I compute the eigenvalue-eigenvector decomposition (it's an OpenCV matrix, and I use cv::eigen() to get the job done).
However, I actually only need, say, the first 30 eigenvalues/vectors, and I don't care about the rest. In theory, this should speed up the computation significantly, right? I mean, that's 2970 fewer eigenvectors to compute.
Which C++ library will allow me to do that? Please note that OpenCV's eigen() method does have parameters for that, but the documentation says they are ignored, and I tested it myself: they are indeed ignored :D
UPDATE:
I managed to do it with ARPACK: I compiled it for Windows and got it working. The results look promising; an illustration can be seen in this toy example:
#include "ardsmat.h"
#include "ardssym.h"
int n = 3; // Dimension of the problem.
double* EigVal = NULL; // Eigenvalues.
double* EigVec = NULL; // Eigenvectors stored sequentially.
int lowerHalfElementCount = (n*n+n) / 2;
//whole matrix:
/*
2 3 8
3 9 -7
8 -7 19
*/
double* lower = new double[lowerHalfElementCount]; //lower half of the matrix
//to be filled with COLUMN major (i.e. one column after the other, always starting from the diagonal element)
lower[0] = 2; lower[1] = 3; lower[2] = 8; lower[3] = 9; lower[4] = -7; lower[5] = 19;
//params: dimensions (i.e. width/height), array with values of the lower or upper half (sequentially, row major), 'L' or 'U' for upper or lower
ARdsSymMatrix<double> mat(n, lower, 'L');
// Defining the eigenvalue problem.
int noOfEigVecValues = 2;
//int maxIterations = 50000000;
//ARluSymStdEig<double> dprob(noOfEigVecValues, mat, "LM", 0, 0.5, maxIterations);
ARluSymStdEig<double> dprob(noOfEigVecValues, mat);
// Finding eigenvalues and eigenvectors.
int converged = dprob.EigenValVectors(EigVec, EigVal);
for (int eigValIdx = 0; eigValIdx < noOfEigVecValues; eigValIdx++) {
std::cout << "Eigenvalue: " << EigVal[eigValIdx] << "\nEigenvector: ";
for (int i = 0; i < n; i++) {
int idx = n*eigValIdx+i;
std::cout << EigVec[idx] << " ";
}
std::cout << std::endl;
}
The results are:
9.4298, 24.24059
for the eigenvalues, and
-0.523207, -0.83446237, -0.17299346
0.273269, -0.356554, 0.893416
for the 2 eigenvectors respectively (one eigenvector per row)
The code fails to find 3 eigenvectors here (it can only find 1-2 in this case; an assert() makes sure of that, but well, that's not a problem).
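(This is expected: ARPACK's symmetric driver requires the number of requested eigenvalues nev to satisfy 0 < nev < n, so for n = 3 at most 2 eigenpairs can be computed.)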
In this article, Simon Funk shows a simple, effective way to estimate a singular value decomposition (SVD) of a very large matrix. In his case, the matrix is sparse, with dimensions 17,000 x 500,000.
Now, looking here, one can see how eigenvalue decomposition is closely related to SVD. Thus, you might benefit from considering a modified version of Simon Funk's approach, especially if your matrix is sparse. Furthermore, your matrix is not only square but also symmetric (if that is what you mean by covariance-like), which likely leads to additional simplification.
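(Concretely, for a symmetric positive semi-definite matrix M, such as a covariance matrix, the eigendecomposition M = Q Λ Qᵀ is already an SVD: U = V = Q, and the singular values are the eigenvalues.)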
... Just an idea :)
It seems that Spectra will do the job with good performance.
Here is an example from their documentation that computes the first 3 eigenvalues of a dense symmetric matrix M (like your covariance matrix):
#include <Eigen/Core>
#include <Spectra/SymEigsSolver.h>
// <Spectra/MatOp/DenseSymMatProd.h> is implicitly included
#include <iostream>

using namespace Spectra;

int main()
{
    // We are going to calculate the eigenvalues of M
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 10);
    Eigen::MatrixXd M = A + A.transpose();

    // Construct matrix operation object using the wrapper class DenseSymMatProd
    DenseSymMatProd<double> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    SymEigsSolver< double, LARGEST_ALGE, DenseSymMatProd<double> > eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    int nconv = eigs.compute();

    // Retrieve results
    Eigen::VectorXd evalues;
    if(eigs.info() == SUCCESSFUL)
        evalues = eigs.eigenvalues();

    std::cout << "Eigenvalues found:\n" << evalues << std::endl;

    return 0;
}
The Eigen library can map existing memory into Eigen matrices.
float array[3];
Map<Vector3f>(array, 3).fill(10);
int data[4] = {1, 2, 3, 4};
// Three equivalent ways to build a 2x2 matrix from the data:
Matrix2i mat2x2(data);
MatrixXi mat2x2b = Map<Matrix2i>(data);
MatrixXi mat2x2c = Map<MatrixXi>(data, 2, 2);
My question is, how can we get a C array (e.g. float a[]) from an Eigen matrix (e.g. Matrix3f m)? What is the actual layout of an Eigen matrix? Is the data stored as in a normal C array?
You can use the data() member function of the Eigen Matrix class. The layout by default is column-major, not row-major as for a multidimensional C array (the layout can be chosen when creating a Matrix object). For sparse matrices, the preceding sentence obviously doesn't apply.
Example:
ArrayXf v = ArrayXf::LinSpaced(11, 0.f, 10.f);
// vc is the corresponding C array. Here's how you can use it yourself:
float *vc = v.data();
cout << vc[3] << endl; // 3.0
// Or you can give it to some C api call that takes a C array:
some_c_api_call(vc, v.size());
// Be careful not to use this pointer after v goes out of scope! If
// you still need the data after this point, you must copy vc. This can
// be done in the usual C manner, or with Eigen's Map<> class.
To convert a normal data type to an Eigen matrix type
double *X; // non-NULL pointer to some data
You can create an nRows x nCols size double matrix using the Map functionality like this:
MatrixXd eigenX = Map<MatrixXd>( X, nRows, nCols );
To convert an Eigen matrix type into a normal data type
MatrixXd resultEigen; // Eigen matrix with some result (non NULL!)
double *resultC; // NULL pointer <-- WRONG INFO from the site. resultC must be preallocated!
Map<MatrixXd>( resultC, resultEigen.rows(), resultEigen.cols() ) = resultEigen;
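For instance, a corrected sketch with the buffer preallocated (the sizes are made up for illustration):
MatrixXd resultEigen = MatrixXd::Random(3, 4);    // Eigen matrix with some result
double *resultC = new double[resultEigen.size()]; // preallocated, NOT a NULL pointer
Map<MatrixXd>( resultC, resultEigen.rows(), resultEigen.cols() ) = resultEigen;
// ... use resultC ...
delete[] resultC;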
In this way you can get data into and out of an Eigen matrix. Full credit goes to http://dovgalecs.com/blog/eigen-how-to-get-in-and-out-data-from-eigen-matrix/
If the array is two-dimensional, one needs to pay attention to the storage order. By default, Eigen stores matrices in column-major order. However, a row-major order is needed for the direct conversion of an array into an Eigen matrix. If such conversions are performed frequently in the code, it might be helpful to use a corresponding typedef.
using namespace Eigen;
typedef Matrix<int, Dynamic, Dynamic, RowMajor> RowMatrixXi;
With such a definition one can obtain an Eigen matrix from an array in a simple and compact way, while preserving the order of the original array.
From C array to Eigen::Matrix
const int nrow = 2, ncol = 3;
int arr[nrow][ncol] = { {1, 2, 3}, {4, 5, 6} };
Map<RowMatrixXi> eig(&arr[0][0], nrow, ncol);
std::cout << "Eigen matrix:\n" << eig << std::endl;
// Eigen matrix:
// 1 2 3
// 4 5 6
In the opposite direction, the elements of an Eigen matrix can be transferred directly to a C-style array by using Map.
From Eigen::Matrix to C array
int arr2[nrow][ncol];
Map<RowMatrixXi>(&arr2[0][0], nrow, ncol) = eig;
std::cout << "C array:\n";
for (int i = 0; i < nrow; ++i) {
for (int j = 0; j < ncol; ++j) {
std::cout << arr2[i][j] << " ";
}
std::cout << "\n";
}
// C array:
// 1 2 3
// 4 5 6
Note that in this case the original matrix eig does not need to be stored in row-major layout. It is sufficient to specify the row-major order in Map.
You need to use the Map function again. Please see the example here:
http://forum.kde.org/viewtopic.php?f=74&t=95457
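Presumably it goes along these lines (a sketch; the destination array must be preallocated and large enough, otherwise it crashes, as the next answer notes):
Eigen::Matrix3f m = Eigen::Matrix3f::Random();
float a[9];                         // preallocated destination C array
Eigen::Map<Eigen::Matrix3f>(a) = m; // copies m into a (column-major)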
The solution with Map above segfaults when I try it (see comment above).
Instead, here's a solution that works for me, copying the data into a std::vector from an Eigen::Matrix. I pre-allocate space in the vector to store the result of the Map/copy.
Eigen::MatrixXf m(2, 2);
m(0, 0) = 3;
m(1, 0) = 2.5;
m(0, 1) = -1;
m(1, 1) = 0;
cout << m << "\n";
// Output:
// 3 -1
// 2.5 0
// Segfaults with this code:
//
// float* p = nullptr;
// Eigen::Map<Eigen::MatrixXf>(p, m.rows(), m.cols()) = m;
// Better code, which also copies into a std::vector:
// Note that I initialize vec with the matrix size to begin with:
std::vector<float> vec(m.size());
Eigen::Map<Eigen::MatrixXf>(vec.data(), m.rows(), m.cols()) = m;
for (const auto& x : vec)
cout << x << ", ";
cout << "\n";
// Output: 3, 2.5, -1, 0
I tried this: passing the address of the element at (0,0) and iterating forward.
Eigen::Matrix<double, 3, 8> coordinates3d;
coordinates3d << 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0,
                 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0,
                 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0;
double *p = &coordinates3d(0,0);
std::vector<double> x2y2;
x2y2.assign(p, p + coordinates3d.size());
for(int i = 0; i < coordinates3d.size(); i++) {
    std::cout << x2y2[i];
}
This is the output: 001011111101000010110100
Reading it against the matrix above, the output lists one column after another (001, 011, 111, 101, ...), so the data is stored column-major, which is Eigen's default.
ComplexEigenSolver<MyMatrix> es;
es.compute(H);
// eigenvalues() returns a complex column vector; data() exposes the raw array
const std::complex<double>* eseig = es.eigenvalues().data();