I'm doing some physics simulation in C++ using Armadillo. I need to calculate a product looking like:
Q = R * exp(neg_i*Lambda*t) * R.t() * Q
Where Q and R are cx_mat objects of the same size, Lambda is a mat of the same size as Q and R and is diagonal, neg_i is the complex number -i, and t is a double. I should get a unitary matrix as a result, but what I'm getting is non-unitary. Does the exponential function work correctly with complex matrices? If not, what should I replace it with?
You need to use the expmat() function for a matrix exponential; exp() calculates an element-wise exponential.
For example, some code I'm currently using for a physics simulation:
arma::cx_mat f;                         // A Hermitian matrix
double delta_t;                         // A time step
std::complex<double> i_imag(0.0, 1.0);  // i, the imaginary unit
std::vector<arma::cx_mat> U;            // A vector of complex unitary matrices
U.push_back(arma::expmat(-i_imag * delta_t * f));
I have tested this code, taking the matrix exponential of anti-Hermitian matrices to get unitary transformations, and it works fine.
I have been trying to reconstruct the input data fed to my RBM program, written in C++ with the aid of the Eigen library. In order to keep the elements of the reconstructed matrix within a specific range, I need to apply a sigmoid function to them.
When I do so I get a conversion error and I don't know the way around it.
Here is my sigmoid function, defined in a header file:
double sigmoid(double x)
{
    return 1.0 / (1.0 + exp(-x));
}
And here is how I compute the reconstruction:
MatrixXd V;
double well[36];
Map<MatrixXd>( well, V.rows(), V.cols() ) = V;
V = sigmoid(H * result3Eigen.transpose() + onesmat*result2Eigen.transpose());
Finally, here is the error message I get when compiling the code:
error C2664:'utils::sigmoid':cannot convert parameter 1 from 'Eigen::MatrixXd'
to 'double'
Thank you for any hints on solving this issue.
If you want to apply a function to each element of an Eigen matrix, you can use the unaryExpr function:
V = my_matrix.unaryExpr(&sigmoid);
This will run the sigmoid function on each element of the Eigen matrix my_matrix, and then return another matrix as the result.
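Applied to the expression from the question, a minimal, self-contained sketch might look like the following (it reuses the sigmoid from the question's header; the sizes of H, result3Eigen, result2Eigen, and onesmat are made up, since they are not given in the question):
#include <Eigen/Dense>
#include <cmath>

double sigmoid(double x)
{
    return 1.0 / (1.0 + std::exp(-x));
}

int main()
{
    // Hypothetical sizes; use the dimensions of your actual RBM data.
    Eigen::MatrixXd H = Eigen::MatrixXd::Random(6, 4);
    Eigen::MatrixXd result3Eigen = Eigen::MatrixXd::Random(3, 4);
    Eigen::MatrixXd result2Eigen = Eigen::MatrixXd::Random(3, 6);
    Eigen::MatrixXd onesmat = Eigen::MatrixXd::Ones(6, 6);

    // Evaluate the linear part, then apply the sigmoid element-wise.
    Eigen::MatrixXd V = (H * result3Eigen.transpose()
                         + onesmat * result2Eigen.transpose()).unaryExpr(&sigmoid);
}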
I'm new to Eigen and I'm working on a sparse LU problem.
I found that if I create a vector b(n), Eigen can compute x(n) for the equation Ax=b.
Questions:
How do I display L and U, the factorization result of the original matrix A?
How do I insert non-zeros in Eigen? Right now I am just testing with a small sparse matrix, so I insert the non-zeros one by one, but if I have a large-scale matrix, how can I feed the matrix into my program?
I realize that this question was asked a long time ago. Apparently, according to the Eigen documentation, matrixL() returns:
an expression of the matrix L, internally stored as supernodes. The only operation available with this expression is the triangular solve.
So there is no way to convert this to an actual sparse matrix in order to display it. Eigen::FullPivLU performs a dense decomposition and is of no use to us here: using it on a large sparse matrix, we would quickly run out of memory while converting it to dense, and the time required to compute the factorization would increase by several orders of magnitude.
An alternative solution is to use the CSparse library from SuiteSparse, as follows:
extern "C" { // we are in C++ now, since you are using Eigen
#include <csparse/cs.h>
}
const cs *p_matrix = ...; // perhaps possible to use Eigen::internal::viewAsCholmod()
css *p_symbolic_decomposition;
csn *p_factor;
p_symbolic_decomposition = cs_sqr(2, p_matrix, 0); // 1 = ordering A + AT, 2 = ATA
p_factor = cs_lu(p_matrix, p_symbolic_decomposition, 1.0); // tol = 1.0 for ATA ordering, or use A + AT with a small tol if the matrix has a mostly symmetric nonzero pattern and large enough entries on its diagonal
// calculate ordering, symbolic decomposition and numerical decomposition
cs *L = p_factor->L, *U = p_factor->U;
// there they are (perhaps can use Eigen::internal::viewAsEigen())
cs_sfree(p_symbolic_decomposition); cs_nfree(p_factor);
// clean up (deletes the L and U matrices)
Note that although this does not use explicit vectorization as some Eigen functions do, it is still fairly fast. CSparse is also very compact; it is just a single header and about thirty .c files with no external dependencies, so it is easy to incorporate into any C++ project. There is no need to pull in all of SuiteSparse.
If you apply Eigen::FullPivLU::matrixLU() to the original matrix, you get the combined LU decomposition matrix. To display L and U separately, you can use the triangularView<mode> method; the Eigen documentation has a good example of this (see also the sketch after the loop below). How you insert nonzeros into a matrix depends on the values you want to put in. Eigen has a convenient syntax, so you can easily insert values in a loop:
for (int i = 0; i < size; i++)
{
    for (int j = size - 1; j > someNumber; j--)
    {
        matrix(i, j) = yourClass.getNextNumber();
    }
}
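For the first question (displaying L and U), a minimal sketch of the matrixLU()/triangularView approach mentioned above, with a made-up dense matrix A, might look like this:
#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Made-up 4x4 matrix, just for illustration.
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);

    Eigen::FullPivLU<Eigen::MatrixXd> lu(A);
    Eigen::MatrixXd combined = lu.matrixLU(); // L and U stored in one matrix

    // U is the upper triangular part, including the diagonal.
    Eigen::MatrixXd U = combined.triangularView<Eigen::Upper>();

    // L is unit lower triangular: ones on the diagonal, strictly lower part from matrixLU().
    Eigen::MatrixXd L = Eigen::MatrixXd::Identity(4, 4);
    L.triangularView<Eigen::StrictlyLower>() = combined;

    std::cout << "L:\n" << L << "\nU:\n" << U << std::endl;
}
For the second question, if the matrix is large and sparse, inserting coefficients one by one into an Eigen::SparseMatrix is slow; the usual approach is to collect the entries in a std::vector<Eigen::Triplet<double>> and call setFromTriplets() once at the end.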
I need to compute two quantities involving the trace and the inverse of density matrices of size around 30000 x 30000. The equations are
-trace( W_i %*% C)
and
-trace(W_i %*% C %*% W_j %*% C)
I know W_i, W_j, and the inverse of C. These equations are related to Pearson estimating functions. I am trying to use R and the Matrix package, but I could not compute the C matrix using solve(), or chol() and chol2inv(). I do not know whether it is possible to use solve() to solve a system of equations and then compute the trace. It is common to use the solve function to compute something like C^{-1} W = solve(C, W), but my equation is a little bit different. Any help is welcome. I am thinking about using RcppArmadillo, but I am not sure whether it can compute my equations.
You can use RcppArmadillo, but you have to be careful about memory usage. If you save the code below as arma_test.cpp, it can be sourced via Rcpp::sourceCpp('arma_test.cpp'). Obviously the matrix data is dummy, but it should give you a starting point. Also check out alternative methods for inverting C rather than using .i(), such as pinv(), etc.
See this question about large matrices with RcppArmadillo
#include <RcppArmadillo.h>
using namespace Rcpp;
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
Rcpp::List arma_calc() {
    arma::mat C_inv = arma::mat(30000, 30000, arma::fill::randu);
    arma::mat W_i = arma::mat(30000, 30000, arma::fill::randu);
    arma::mat W_j = arma::mat(30000, 30000, arma::fill::randu);
    double tr_1 = -arma::trace(W_i * C_inv.i());
    double tr_2 = -arma::trace(W_i * C_inv.i() * W_j * C_inv.i());
    return Rcpp::List::create(Rcpp::Named("Trace1") = tr_1, Rcpp::Named("Trace2") = tr_2);
}
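As a side note on the .i() remark above, here is a sketch of a variant that never forms C = C_inv.i() explicitly, using arma::solve() and the cyclic property trace(AB) = trace(BA). The function name and the idea of passing the matrices in from R are my own additions; it can go in the same arma_test.cpp or in a file of its own:
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// trace(W_i * C) = trace(C * W_i) = trace(solve(C_inv, W_i)), since C = inv(C_inv).
// [[Rcpp::export]]
Rcpp::List arma_calc_solve(const arma::mat& C_inv, const arma::mat& W_i, const arma::mat& W_j) {
    arma::mat CWi = arma::solve(C_inv, W_i);   // = C * W_i
    arma::mat CWj = arma::solve(C_inv, W_j);   // = C * W_j
    double tr_1 = -arma::trace(CWi);           // -trace(W_i * C)
    double tr_2 = -arma::trace(CWj * CWi);     // -trace(W_i * C * W_j * C), by cyclic permutation
    return Rcpp::List::create(Rcpp::Named("Trace1") = tr_1,
                              Rcpp::Named("Trace2") = tr_2);
}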
I am trying to solve a simple least-squares problem of the type Ax = b. The C++ Eigen library offers several facilities for this, and I have seen some related solutions here: Solving system Ax=b in linear least squares fashion with complex elements and lower-triangular square A matrix, and here: Least Squares Solution of Linear Algerbraic Equation Ax = By in Eigen C++
What I want is to use dynamic versions of the matrix A and the vector b. The elements of A are floating-point values in my case, and A has 3 columns, but the number of data items (i.e. rows) will be dynamic (determined inside a loop).
It would be helpful to have a short code snippet showing the basic declaration of A and b and how to fill in the values.
If you need dynamic matrices/vectors, just use:
MatrixXd m1(5,7); // double
VectorXd v1(23); // double
MatrixXf m2(3,5); // float
VectorXf v2(12); // float
Those variables will all be allocated on the heap.
If you need square matrices or vectors of fixed size (but be careful, they aren't dynamic!), use the following syntax:
Matrix3d m3; // double, size 3x3
Vector3d v3; // double, size 3x1
Matrix4d m4; // double, size 4x4
Vector4d v4; // double, size 4x1
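To connect this to the actual least-squares question: a minimal sketch of declaring a dynamic n-by-3 matrix A and a vector b, filling them in a loop, and solving Ax = b in the least-squares sense with one of Eigen's QR decompositions might look like this (the data values are made up):
#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int n = 10;                    // number of data items, known only at run time
    Eigen::MatrixXd A(n, 3);             // dynamic number of rows, 3 columns
    Eigen::VectorXd b(n);

    for (int i = 0; i < n; ++i)
    {
        // Made-up measurements; replace with your real data.
        A(i, 0) = 1.0;
        A(i, 1) = static_cast<double>(i);
        A(i, 2) = static_cast<double>(i) * i;
        b(i)    = 2.0 * i + 1.0;
    }

    // Least-squares solution of the overdetermined system A x = b.
    Eigen::VectorXd x = A.colPivHouseholderQr().solve(b);
    std::cout << x << std::endl;
}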
I am attempting to implement a complex-valued matrix equation in OpenCV. I've prototyped it in MATLAB, which works fine. I am starting from the equation (exact MATLAB implementation):
kernel = exp(1i .* k .* Circ3D) .* z ./ (1i .* lambda .* Circ3D .* Circ3D)
In which
1i = complex number
k = constant (float)
Circ3D = real-valued matrix of known size
lambda = constant (float)
.* = element-wise multiplication
./ = element-wise division
The result is a complex-valued matrix. I succeeded in generating the necessary Circ3D matrix as a CV_32F; however, the multiplication by the complex number i is giving me trouble. From the OpenCV documentation I understand that a complex matrix is simply a two-channel matrix (CV_32FC2).
The real trouble comes from how to define i. I've tried several options, among which defining i as
cv::Vec2d complex = cv::Vec2d(0,1);
and then multiplying by the matrix
kernel = complex * Circ3D
But this doesn't work (although I didn't expect it to). I suspect I need to do something with std::complex but I have no idea what (http://docs.opencv.org/modules/core/doc/basic_structures.html).
Thanks in advance for any help.
Edit: Just after writing this post I did make some progress, by defining i as follows:
std::complex<float> complex(0, 1);
I am then able to assign complex values as follows:
kernel.at<std::complex<float>>(i,j) = std::exp(complex * k * Circ3D.at<float>(i,j)) *
    z / (complex * lambda * pow(Circ3D.at<float>(i,j), 2));
However, this works in a loop, which makes the procedure incredibly slow. Any way to do it in one go?
OpenCV treats std::complex just like a simple pair of numbers (see the example in the documentation); no special rules for arithmetic operations are applied. You work around this by operating on std::complex values directly. So basically the choice is simple: you either use automatic complex arithmetic (as you are doing now) or automatic vectorization (when using OpenCV functions on whole matrices).
I think in your case you should carry out all the complex arithmetic yourself: store the matrix of complex values C {a·i + b} as two real matrices A {a} and B {b}, and implement the exponentiation yourself. Multiplication by scalars and addition shouldn't be a problem.
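A minimal sketch of that idea, assuming Circ3D is a CV_32F matrix and k, z, and lambda are floats as in the question (the function name makeKernel is made up): since exp(i*x) / i = sin(x) - i*cos(x), the kernel's real part is (z/(lambda*Circ3D.^2)).*sin(k*Circ3D) and its imaginary part is -(z/(lambda*Circ3D.^2)).*cos(k*Circ3D), both of which can be built with whole-matrix OpenCV calls and no per-element loop:
#include <opencv2/opencv.hpp>

// Builds the complex-valued kernel in one go, using exp(i*x) / i = sin(x) - i*cos(x).
cv::Mat makeKernel(const cv::Mat& Circ3D, float k, float z, float lambda)
{
    cv::Mat mag, angle, c, s;
    cv::divide(z / lambda, Circ3D.mul(Circ3D), mag); // mag = z ./ (lambda .* Circ3D.^2)
    angle = k * Circ3D;                              // k .* Circ3D
    cv::polarToCart(mag, angle, c, s);               // c = mag.*cos(angle), s = mag.*sin(angle)
    cv::Mat neg_c = -c;                              // imaginary part: -mag .* cos(k .* Circ3D)
    cv::Mat planes[] = { s, neg_c };                 // { real part, imaginary part }
    cv::Mat kernel;
    cv::merge(planes, 2, kernel);                    // CV_32FC2 complex-valued result
    return kernel;
}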
There is also the function mulSpectrums, which lets you do element-wise multiplication of complex matrices. So if K is your kernel matrix and I is some complex matrix of type CV_32FC2 (float, two channels), you can do the following to compute the element-wise multiplication:
// Make K a complex matrix
cv::Mat Ktmp[] = {cv::Mat_<float>(K), cv::Mat::zeros(K.size(), CV_32FC1)};
cv::Mat Kc;
cv::merge(Ktmp,2,Kc);
// Element-wise complex multiplication
cv::mulSpectrums(Kc,I,I,0);