I'm creating a circuit analysis library in C++ (also to learn C++, so I'm very new to it).
After getting familiar with Eigen, I'd like to have a matrix where each cell hosts a 3x3 complex matrix.
So far I've tried this very simple proof of principle:
typedef Eigen::MatrixXcd cx_mat;
typedef Eigen::SparseMatrix<cx_mat> sp_mat_mat;
void test(cx_mat Z1){
    sp_mat_mat Y(2, 2);
    Y(0, 0) = Z1;
    Y(2, 2) = Z1;
    cout << "\n\nY:\n" << Y << endl;
}
Testing this simple example fails, probably because Eigen expects a number instead of a structure.
As a matter of fact, the matrix of matrices is prone to be sparse, hence the sparse matrix structure.
Is there any way to make this work?
Any help is appreciated.
I don't believe Eigen will give you a way to make this work. If you think about the other functions which are connected to Matrix or SparseMatrix, like:
inverse()
norm()
m.row()*m.col()
what should Eigen do when a matrix element number is replaced by a matrix?
What I can understand is that you want to have a data structure that stores your Eigen::MatrixXcd in a memory-efficient way.
You could also realize this using the map container:
#include <map>
typedef Eigen::MatrixXcd cx_mat;
cx_mat Z1;
std::map<int, Eigen::MatrixXcd> sp_mat_mat; // the key encodes (row, col) as row*cols + col
int cols = 2;
sp_mat_mat[0*cols + 0] = Z1;
sp_mat_mat[2*cols + 2] = Z1;
Less memory efficient, but perhaps easier to access would be using the vector container:
#include <vector>
std::vector<std::vector<Eigen::MatrixXcd>> mat_mat;
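For example, a minimal sketch (the 2x2 grid of 3x3 blocks is taken from the question; everything else here is illustrative):

#include <vector>
#include <Eigen/Dense>

typedef Eigen::MatrixXcd cx_mat;

// Allocate a 2x2 grid where each cell is an (initially empty) complex matrix.
std::vector<std::vector<cx_mat>> mat_mat(2, std::vector<cx_mat>(2));
mat_mat[0][0] = cx_mat::Zero(3, 3);    // assign a 3x3 block
mat_mat[1][1] = cx_mat::Random(3, 3);

Note that every cell exists even when it holds an empty matrix, which is why this is less memory-efficient than the map for very sparse block structures.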
Have you found a way to create a matrix of matrices?
I see that we can use a 2-D array to create a matrix of matrices.
For example,
#include <Eigen/Dense>
using Eigen::MatrixXd;

MatrixXd A = MatrixXd::Random(3, 3);
MatrixXd B = MatrixXd::Random(3, 4);
MatrixXd C = MatrixXd::Random(4, 4);

MatrixXd D[2][2];
D[0][0] = A;
D[0][1] = B;
D[1][0] = B.transpose();
D[1][1] = C;
I don't know if this way is memory-efficient or not. Let's check it out.
You asked "sparse matrix structure. Is there any way to make this work?" I would say no, because it is not easy to translate a circuit design into a "matrix of matrices" in the first place.. if you want to simulate something, you choose a representation close to it,. In case of an electronic circuit diagram, the schema in memory should IMHO be a directed graph, with linked-list items. At each node/junction, there is a matrix representing the behaviour of a particular component input to output transfer (eg resistor, capacitor, transistor) and you propagate the signal through the matrices assigned to each component. The transformed signal eventually arrives at an output, through the connections in your connected graph. In software, it should work similarly.. Suggested further reading: https://core.ac.uk/download/pdf/53745212.pdf
The following code works as expected:
matrix.cpp
// [[Rcpp::depends(RcppEigen)]]
#include <RcppEigen.h>

// [[Rcpp::export]]
SEXP eigenMatTrans(Eigen::MatrixXd A){
    Eigen::MatrixXd C = A.transpose();
    return Rcpp::wrap(C);
}

// [[Rcpp::export]]
SEXP eigenMatMult(Eigen::MatrixXd A, Eigen::MatrixXd B){
    Eigen::MatrixXd C = A * B;
    return Rcpp::wrap(C);
}

// [[Rcpp::export]]
SEXP eigenMapMatMult(const Eigen::Map<Eigen::MatrixXd> A, Eigen::Map<Eigen::MatrixXd> B){
    Eigen::MatrixXd C = A * B;
    return Rcpp::wrap(C);
}
This uses the C++ Eigen library for matrices; see https://eigen.tuxfamily.org/dox
In R, I can access those functions.
library(Rcpp);
Rcpp::sourceCpp('matrix.cpp');
A <- matrix(rnorm(10000), 100, 100);
B <- matrix(rnorm(10000), 100, 100);
library(microbenchmark);
microbenchmark(eigenMatTrans(A), t(A), A%*%B, eigenMatMult(A, B), eigenMapMatMult(A, B))
This shows that R performs pretty well on transposition. Multiplication has some advantages with Eigen.
Using the Matrix library, I can convert a normal matrix to a sparse matrix.
Example from https://cmdlinetips.com/2019/05/introduction-to-sparse-matrices-in-r/
library(Matrix);
data<- rnorm(1e6)
zero_index <- sample(1e6)[1:9e5]
data[zero_index] <- 0
A = matrix(data, ncol=1000)
A.csr = as(A, "dgRMatrix");
B.csr = t(A.csr);
A.csc = as(A, "dgCMatrix");
B.csc = t(A.csc);
So if I wanted to multiply A.csr by B.csr using Eigen, how would I do that in C++? I do not want to convert types if I don't have to; it is a memory-size thing.
A.csr %*% B.csr is not yet implemented.
A.csc %*% B.csc works.
I would like to microbenchmark the different options and see how matrix size affects which is most efficient. In the end, I will have a matrix that is about 1% nonzero with 5 million rows and cols ...
There's a reason the dgRMatrix cross-product functions are not yet implemented; in fact, they should not be implemented, because they would enable bad practice.
There are a few performance considerations when working with sparse matrices:
Accessing marginal views against the major marginal orientation is highly inefficient. For instance, a column iterator in a dgRMatrix and a row iterator in a dgCMatrix need to loop through almost all elements of the matrix to find the ones in just that column or row (see the sketch after this list). See this Rcpp gallery post for additional enlightenment.
A matrix cross-product is simply a dot product between all combinations of columns. This means the penalty of using a column iterator in a dgRMatrix (vs. a column iterator in a dgCMatrix) is multiplied by the number of column combinations.
Cross-product functions in R are highly optimized and are not (in my experience) significantly faster than Eigen, Armadillo, or equivalent STL variants. They are parallelized, and the Matrix package takes wonderful advantage of these optimized algorithms. I have written parallelized C++ STL cross-product variants using Rcpp structures and I don't see any increase in performance.
If you're really going this route, check out my Rcpp gallery post on sparse matrix structures in Rcpp. This is to be preferred to Eigen and Armadillo sparse matrices if memory is a concern, as Eigen and Armadillo perform a deep copy rather than referencing an R object already in memory.
At 1% density, the inefficiencies of row iterators will be greater than at say 5 or 10% density. I do most of my tests at 5% density and generally binary operations take 5-10x longer for row iterators than for column iterators.
There may be applications where row-major ordering shines (e.g. see the work by Dmitry Selivanov on CSR matrices and irlba svd), but this is absolutely not one of them; so much so that you are better off doing an in-place conversion to get a CSC matrix.
tl;dr: a column-wise cross-product on a row-major matrix is about as inefficient as it gets.
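The same CSR-versus-CSC asymmetry is easy to see with Eigen's sparse iterators; here is a minimal C++ sketch (Eigen 3.x assumed, the sizes and indices are arbitrary) of cheap along-storage access versus an expensive cross-storage scan:

#include <Eigen/Sparse>
#include <iostream>

int main() {
    typedef Eigen::SparseMatrix<double, Eigen::RowMajor> CsrMatrix;
    CsrMatrix A(1000, 1000);
    A.insert(3, 7) = 1.0;   // a single stored element
    A.makeCompressed();

    // Cheap: iterate along the storage order (one row of a CSR matrix).
    double row_sum = 0.0;
    for (CsrMatrix::InnerIterator it(A, 3); it; ++it)
        row_sum += it.value();

    // Expensive: extracting one column forces a scan over every row.
    double col_sum = 0.0;
    for (int r = 0; r < A.outerSize(); ++r)
        for (CsrMatrix::InnerIterator it(A, r); it; ++it)
            if (it.col() == 7) col_sum += it.value();

    std::cout << row_sum << " " << col_sum << "\n";
}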
I'm trying to do a real-valued 2d Fourier Transform with FFTW. My data is stored in a dynamically sized Eigen Matrix. Here's the wrapper class I wrote:
FFT2D.h:
#include <Eigen/Dense>
#include <fftw3.h>
#include "Defs.h" // defines the EMatrix typedef, shown below

class FFT2D {
public:
    enum FFT_TYPE {FORWARD=0, REVERSE=1};
    FFT2D(EMatrix &input, EMatrix &output, FFT_TYPE type_ = FORWARD);
    ~FFT2D();
    void execute();
private:
    EMatrix& input;
    EMatrix& output;
    fftw_plan plan;
    FFT_TYPE type;
};
FFT2D.cpp:
#include "FFT2D.h"
#include <fftw3.h>
#include "Defs.h"
FFT2D::FFT2D(EMatrix &input_, EMatrix &output_, FFT_TYPE type_)
    : input(input_), output(output_), type(type_) {
    if (type == FORWARD)
        plan = fftw_plan_dft_2d((int) input.rows(), (int) input.cols(),
                                (fftw_complex *) &input(0), (fftw_complex *) &output(0),
                                FFTW_FORWARD, FFTW_ESTIMATE);
    else
        {} // placeholder for the inverse 2D transform, not written yet
}
FFT2D::~FFT2D() {
    fftw_destroy_plan(plan);
}

void FFT2D::execute() {
    fftw_execute(plan); // seg-fault here
}
And a definition for EMatrix:
typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> EMatrix;
The problem is I'm getting a seg fault in FFT2D::execute(). I know I'm setting something up wrong in the constructor, and I've tried a number of different ways, but I can't seem to make any headway on this.
Things I've tried include: changing EMatrix typedef to Eigen::ColMajor, passing (fftw_complex *) input.data() to fftw_plan_dft_2d, using different fftw plans (fftw_plan_dft_r2c_2d).
My C++ is (clearly) rusty, but at the end of the day what I need is to do a 2D FT on a real-valued 2D Eigen Matrix of doubles. Thanks in advance.
The major problem here is that there is no such thing as a "real-valued Fourier transform". It's just a Fourier transform of something with zero imaginary part, but the zeroes still have to be there, as you can see from the fftw_complex definition:
typedef double fftw_complex[2];
This makes sense, as the output can (and probably will) have a non-zero imaginary part.
The output will have a symmetry property though: for real input the transform is conjugate-symmetric, i.e. in the case of a 1D transform the real part is an even function and the imaginary part is odd.
As a result the (fftw_complex *) &input(0) cast doesn't really work - FFTW expects twice as many double values as you pass to it.
The solution is to interleave your matrix's raw data with zeroes, and there are a number of ways to do that. A few examples:
You could copy the whole matrix into a new array before passing it to FFTW, adding the zeroes in the process.
You could reserve the space for zeroes in the matrix itself - this way you'll be able to avoid copying, but it will probably require a lot of refactoring :)
The best way I can think of is to use std::complex<double> as the scalar type. This will somewhat hurt your notion of a "real-valued FFT", but again, there is hardly such a thing in the first place. Instead you'll be able to keep all your real-value operations as they are, and the layout of std::complex will fit fftw_complex perfectly.
There could be some other things to consider here, like storage order (FFTW operates on arrays in row-major order, so Eigen matrices should comply) and the validity of linear access to Eigen matrix data (seems OK to me).
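To illustrate the third option, a minimal sketch (assumes FFTW3; the output matrix must be preallocated to the same size as the input, and error handling is omitted):

#include <Eigen/Dense>
#include <fftw3.h>
#include <complex>

// Row-major to match FFTW's array layout; std::complex<double> is
// layout-compatible with fftw_complex, so the casts below are well defined.
typedef Eigen::Matrix<std::complex<double>, Eigen::Dynamic, Eigen::Dynamic,
                      Eigen::RowMajor> CMatrix;

void fft2d(CMatrix& in, CMatrix& out) {
    fftw_plan plan = fftw_plan_dft_2d(
        (int) in.rows(), (int) in.cols(),
        reinterpret_cast<fftw_complex*>(in.data()),
        reinterpret_cast<fftw_complex*>(out.data()),
        FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);
    fftw_destroy_plan(plan);
}

Real-valued data then lives in the real parts, and the imaginary parts simply stay zero on input.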
Is there a way to set up a dynamically sized vector or matrix in the Eigen library? If not, is there a way to still use the Eigen library in conjunction with another class like std::vector?
For example, let's say I have an n x 1 matrix called MatrixXd S(n,1). Now for simplicity let n = 3 and S = (4, 2, 6). Pretend that the elements in S are future stock prices and let K = 2, which will be the strike price. (Don't worry, you won't need to understand the terminology of an option.) Now say I want to know at what positions of S we have S - K > 0, and say I want to store these positions in a vector called b.
Clearly, depending on the elements of S, the vector b will be of a different size. Thus, b needs to be dynamically sized. The only class I am familiar with that allows this is the vector class, i.e., #include <vector>.
My question is as follows: is it okay to use the Eigen library and std::vector together? Note that I will be performing operations on b with the Eigen vectors and matrices I have created.
If I am not making sense, or if my question is unclear please let me know and I will clarify as much as possible.
Yes, there is. It's presented in the "A simple first program" section of Getting started:
#include <iostream>
#include <Eigen/Dense>

using Eigen::MatrixXd;

int main()
{
    MatrixXd m(2,2);
    m(0,0) = 3;
    m(1,0) = 2.5;
    m(0,1) = -1;
    m(1,1) = m(1,0) + m(0,1);
    std::cout << m << std::endl;
}
You do need to pass the size to the constructor, but it works like a vector. You can resize it later on too.
MatrixXd is a convenient typedef for a Matrix template that uses Dynamic as the template value for both Rows and Cols. It's basically Matrix<double, Dynamic, Dynamic>.
So you can have not only dynamic sized vectors and matrices, but also arbitrarily large fixed-size ones. Eigen does pretty nifty optimizations for small matrices, so using a fixed size there might be beneficial.
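And yes, it is fine to mix Eigen types with std::vector. For the S - K > 0 example from the question, a minimal sketch:

#include <Eigen/Dense>
#include <vector>
#include <iostream>

int main() {
    Eigen::MatrixXd S(3, 1);
    S << 4, 2, 6;          // future stock prices
    double K = 2;          // strike price

    std::vector<int> b;    // grows as needed alongside the Eigen types
    for (int i = 0; i < S.rows(); ++i)
        if (S(i, 0) - K > 0)
            b.push_back(i);

    for (int i : b) std::cout << i << " ";  // prints: 0 2
}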
I'm new to Eigen and I'm working on a sparse LU problem.
I found that if I create a vector b(n), Eigen can compute x(n) for the equation Ax = b.
Questions:
How to display the L & U, which is the factorization result of the original matrix A?
How do I insert non-zeros in Eigen? Right now I am just testing with some small sparse matrices, so I insert non-zeros one by one, but if I have a large-scale matrix, how can I build the matrix in my program?
I realize that this question was asked a long time ago. Apparently, referring to the Eigen documentation:
an expression of the matrix L, internally stored as supernodes. The only operation available with this expression is the triangular solve.
So there is no way to convert this to an actual sparse matrix to display it. Eigen::FullPivLU performs a dense decomposition and is of no use to us here. Using it on a large sparse matrix, we would quickly run out of memory while converting it to dense, and the time required to compute the factorization would increase by several orders of magnitude.
An alternative solution is to use the CSparse library from SuiteSparse:

extern "C" { // we are in C++ now, since you are using Eigen
#include <csparse/cs.h>
}

const cs *p_matrix = ...; // perhaps possible to use Eigen::internal::viewAsCholmod()
css *p_symbolic_decomposition;
csn *p_factor;
// calculate ordering, symbolic decomposition and numerical decomposition
p_symbolic_decomposition = cs_sqr(2, p_matrix, 0); // 1 = ordering A + AT, 2 = ATA
p_factor = cs_lu(p_matrix, p_symbolic_decomposition, 1.0); // tol = 1.0 for ATA ordering,
    // or use A + AT with a small tol if the matrix has a mostly symmetric nonzero
    // pattern and large enough entries on its diagonal
cs *L = p_factor->L, *U = p_factor->U;
// there they are (perhaps one can use Eigen::internal::viewAsEigen())
cs_sfree(p_symbolic_decomposition);
cs_nfree(p_factor); // clean up (this deletes the L and U matrices)
Note that although this does not use explicit vectorization as some Eigen functions do, it is still fairly fast. CSparse is also very compact; it is just a single header and about thirty .c files with no external dependencies, so it is easy to incorporate in any C++ project. There is no need to include all of SuiteSparse.
If you apply Eigen::FullPivLU::matrixLU() to the original matrix, you'll get the combined LU decomposition matrix. To display L and U separately, you can use the method triangularView<mode>. In the Eigen documentation you can find a good example of it. How to insert non-zeros into matrices depends on the numbers you want to put in. Eigen has convenient syntax, so you can easily insert values in a loop:
for(int i = 0; i < size; i++)
{
    for(int j = size - 1; j > someNumber; j--)
    {
        matrix(i, j) = yourClass.getNextNumber();
    }
}
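For a SparseMatrix, though, element-wise assignment like this doesn't apply; the pattern recommended by the Eigen sparse tutorial is to collect the entries in a triplet list and build the matrix in one pass. A minimal sketch (the diagonal fill is just a stand-in for your real entries):

#include <Eigen/Sparse>
#include <vector>

Eigen::SparseMatrix<double> build(int n) {
    std::vector<Eigen::Triplet<double>> triplets;
    triplets.reserve(n); // reserve roughly the expected number of non-zeros
    for (int i = 0; i < n; ++i)
        triplets.push_back(Eigen::Triplet<double>(i, i, 2.0));
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(triplets.begin(), triplets.end());
    return A;
}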
I need to convert a MATLAB code into C++, and I'm stuck with this instruction:
a = K\F
where K is a sparse matrix of size n x n, and F is a column vector of size n.
I know it's easy to solve that using the Eigen library - I have tried the fullPivLu() method, and I've been able to build a working snippet using a Matrix and a Vector.
However, my K is a SparseMatrix<double> (while F is a VectorXd). My declarations:
SparseMatrix<double> K(nec, nec);
VectorXd F(nec);
and it seems that SparseMatrix doesn't have the fullPivLu() method, nor the lu() one.
I've tried, in fact, these two different approaches, taken from the documentation:
// 1.
MatrixXd x = K.fullPivLu().solve(F);

// 2.
VectorXf x;
K.lu().solve(F, &x);
They don't work, because fullPivLu() and lu() are not members of 'Eigen::SparseMatrix<_Scalar>'
So, I am asking: is there a way to solve a system of linear equations (the MATLAB's mldivide, or '\'), using Eigen for C++, with K being a sparse matrix?
Thank you for any help.
Would Eigen::SparseLU work for you?
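A minimal sketch of that route (error handling abbreviated; the function name is mine):

#include <Eigen/Sparse>
#include <Eigen/SparseLU>

Eigen::VectorXd mldivide(const Eigen::SparseMatrix<double>& K,
                         const Eigen::VectorXd& F) {
    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(K); // ordering + factorization
    if (solver.info() != Eigen::Success) {
        // decomposition failed; handle the error here
    }
    return solver.solve(F); // the sparse analogue of K \ F
}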