I am trying to convert some MATLAB code to C++ using Armadillo, and one of the MATLAB functions used is interp1. "Easy enough", I thought, since Armadillo has interp1. "Linear should be good enough." Well, I was wrong. So I searched for the interp1.m source code and found the Octave source: it uses pchip.m, whose Octave source I also found, but pchip.m calls pchip_deriv.cc, which in turn seems to call a Fortran function.
So before I really start diving into converting pchip, are there any other libraries or sources out there that include pchip that I could use?
In case anyone else ever needs this: I looked here to find the equations. For large data sets it does not seem to work properly, so using the matrix formula I created a sliding-window PCHIP interpolation. It still did not match MATLAB's, so I average it with a linear interpolation, and the result is pretty close. Is it the right way to do this? Probably not... Does it work? Yup!
#include <armadillo>
using namespace arma;

// Sliding-window polynomial interpolation (PCHIP-like), blended with
// Armadillo's linear interp1. M is the window width in samples
// (default 4, i.e. a local cubic fit). y_custom must be pre-sized
// to x_new.n_elem by the caller.
void pchip_slide(const vec& x, const vec& y, const vec& x_new,
                 vec& y_custom, const int M = 4)
{
    vec y_interp = zeros<vec>(size(x_new));
    interp1(x, y, x_new, y_interp);  // linear baseline for the blend

    for (uword ii = 0; ii < x_new.n_elem; ii++) {
        // centre a window of M samples on the closest x value
        const int I = int(index_min(abs(x - x_new(ii))));
        const int start_interp = std::max(I - 2, 0);
        const int end_interp = std::min(start_interp + M - 1, int(x.n_elem) - 1);

        const vec x_mini = x(span(start_interp, end_interp));
        const vec y_mini = y(span(start_interp, end_interp));
        const uword n = x_mini.n_elem;

        // Vandermonde matrix: row ll holds x^(n-1-ll); the final row stays ones
        mat x_mat = ones<mat>(n, n);
        for (uword ll = 0; ll < n - 1; ll++) {
            x_mat.row(ll) = pow(x_mini.t(), n - 1 - ll);
        }
        const vec c_mini = solve(x_mat, y_mini, solve_opts::fast + solve_opts::no_approx);

        // evaluate the local polynomial at x_new(ii)
        rowvec x_pchip_mini = ones<rowvec>(n);
        for (uword ll = 0; ll < n - 1; ll++) {
            x_pchip_mini(ll) = pow(x_new(ii), n - 1 - ll);
        }
        y_custom(ii) = as_scalar(x_pchip_mini * c_mini);

        // inside the data range, blend with the linear interpolation
        if (x_new(ii) >= x(0) && x_new(ii) <= x(x.n_elem - 1)) {
            y_custom(ii) = y_custom(ii) * (1.0 / M) + y_interp(ii) * ((M - 1.0) / M);
        }
    }
}
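For anyone trying this out, a hypothetical call (the sine data here is made up purely for illustration):

arma::vec x = arma::linspace(0.0, 10.0, 21);       // coarse sample grid
arma::vec y = arma::sin(x);                        // values to interpolate
arma::vec x_new = arma::linspace(0.0, 10.0, 101);  // finer query grid
arma::vec y_new(x_new.n_elem, arma::fill::zeros);  // output, pre-sized
pchip_slide(x, y, x_new, y_new);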
I am a beginner with Rcpp. I wrote some Rcpp code that operates on two 3-dimensional arrays, Array1 and Array2. Suppose Array1 has dimension (1000, 100, 40) and Array2 has dimension (1000, 96, 40).
I would like to perform wilcox.test using:
wilcox.test(Array1[i, j,], Array2[i,,])
In R, I wrote nested for loops that completed the calculation in about a half hour.
Then, I wrote it in Rcpp. The calculation in Rcpp took an hour to achieve the same results. I thought it should be faster since it is written in C++. I guess my coding style is the cause of the low efficiency.
The following is my Rcpp code. Would you mind helping me find out what improvements I should make? I appreciate it!
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector Cal(NumericVector Array1, NumericVector Array2, Function wilc) {
    NumericVector vecArray1(Array1);
    IntegerVector arrayDims1 = vecArray1.attr("dim");
    NumericVector vecArray2(Array2);
    IntegerVector arrayDims2 = vecArray2.attr("dim");

    // wrap the R arrays as Armadillo cubes without copying (copy_aux_mem = false)
    arma::cube cubeArray1(vecArray1.begin(), arrayDims1[0], arrayDims1[1], arrayDims1[2], false);
    arma::cube cubeArray2(vecArray2.begin(), arrayDims2[0], arrayDims2[1], arrayDims2[2], false);

    arma::mat STORE(arrayDims1[0], arrayDims1[1]);

    for (int i = 0; i < arrayDims1[1]; i++) {
        for (int j = 0; j < arrayDims1[0]; j++) {
            // tube (j, i, :) of the first array ...
            arma::vec v_cl = cubeArray1.subcube(arma::span(j), arma::span(i), arma::span::all);
            // ... against slice (j, :, :) of the second, flattened
            arma::vec v_ct = arma::vectorise(cubeArray2.subcube(arma::span(j), arma::span::all, arma::span::all));

            // call back into R's wilcox.test; resu[2] is the p.value component
            Rcpp::List resu = wilc(v_cl, v_ct);
            STORE(j, i) = resu[2];
        }
    }
    return Rcpp::wrap(STORE);  // returned to R as a matrix
}
The function wilc will be wilcox.test from R.
The following is part of my R code implementing the above idea, where CELLS and CTRLS are two 3D arrays in R.
for (i in 1:ncol(CELLS)) {
    if (T) { print(i) }
    for (j in 1:dim(CELLS)[1]) {
        wtest = wilcox.test(CELLS[j, i, ], CTRLS[j, , ])
        TSTAT_clcl[j, i] = wtest$p.value
    }
}
Then, I wrote it in Rcpp. The calculation in Rcpp took an hour to achieve the same results. I thought it should be faster since it is written in C++.
The required disclaimer:
Embedding R code in C++ and expecting a speed-up is a fool's game. You will need to rewrite wilcox.test fully in C++ instead of making a call to R; otherwise, you lose whatever speed advantage you would gain.
In particular, I wrote up a post illustrating this conundrum regarding the use of the diff function in R. In the post, I compared a pure C++ implementation, a C++ implementation calling an R function within the routine, and a pure R implementation. Borrowing the microbenchmark from that post illustrates the issue:
expr min lq mean median uq max neval
arma_fun 26.117 27.318 37.54248 28.218 29.869 751.087 100
r_fun 127.883 134.187 212.81091 138.390 151.148 1012.856 100
rcpp_fun 250.663 265.972 356.10870 274.228 293.590 1430.426 100
Thus, a pure C++ implementation had the largest speed-up.
Hence, the takeaway is the need to translate the wilcox.test R routine into a pure C++ implementation to cut the run time. Otherwise, there is little point in writing the code in C++, because the C++ component must stop and await results from R before continuing, and that round trip traditionally carries a lot of overhead to make sure the data is well protected.
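To make that concrete, here is a minimal sketch of a rank-sum p-value computed entirely in C++ via the normal approximation (which wilcox.test itself switches to for larger samples). The helper name wilcox_p_approx is mine, and the sketch uses average ranks for ties but skips the tie-variance and continuity corrections, so it will differ slightly from wilcox.test on tied data:

// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
#include <cmath>

// two-sided Wilcoxon rank-sum p-value, normal approximation
double wilcox_p_approx(const arma::vec& a, const arma::vec& b) {
    const arma::uword n1 = a.n_elem, n2 = b.n_elem;
    arma::vec pooled = arma::join_cols(a, b);

    // average ranks: tied values share the mean rank of their group
    arma::uvec order = arma::sort_index(pooled);
    arma::vec ranks(pooled.n_elem);
    arma::uword i = 0;
    while (i < pooled.n_elem) {
        arma::uword j = i;
        while (j + 1 < pooled.n_elem && pooled(order(j + 1)) == pooled(order(i))) ++j;
        const double avg = 0.5 * double(i + j) + 1.0;  // 1-based mean rank of the tie group
        for (arma::uword k = i; k <= j; ++k) ranks(order(k)) = avg;
        i = j + 1;
    }

    // U statistic for the first sample, then standardise
    const double U = arma::accu(ranks.head(n1)) - double(n1) * (n1 + 1) / 2.0;
    const double mu = double(n1) * n2 / 2.0;
    const double sigma = std::sqrt(double(n1) * n2 * (n1 + n2 + 1) / 12.0);
    const double z = (U - mu) / sigma;
    return 2.0 * R::pnorm(-std::abs(z), 0.0, 1.0, 1, 0);
}

With something like this in place, the two lines Rcpp::List resu = wilc(v_cl, v_ct); STORE(j, i) = resu[2]; become STORE(j, i) = wilcox_p_approx(v_cl, v_ct);, which removes every call back into R from the inner loop.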
I have some MATLAB scripts to translate into C++, so I decided to use the Armadillo library for the linear algebra parts.
But I'm stuck on the conv() function. I tried this:
hist2 = arma::hist(X2, nbins);

arma::vec g(smoothingWindowWidth, arma::fill::zeros);
int halfWidth = smoothingWindowWidth / 2;
for (int i = 0; i < smoothingWindowWidth; i++)
{
    int n = i - halfWidth;
    g[i] = exp(-0.5 * ((n / ((double)halfWidth)) * (n / ((double)halfWidth))));
}
g = g / arma::sum(g);

arma::vec hist3 = arma::conv(hist2, g, "same");
When I try to compile I get the following error: "no matching function for call to 'conv(arma::uvec&, arma::vec&, int)'".
hist2 has been defined previously as a uvec using the hist() function.
X2 is a vec and nbins an int.
I'm not sure I understand the error: it seems that conv() doesn't accept vec or uvec as parameters, but according to the Armadillo website it should.
I tried to convert the uvec into a vec, but it didn't change anything.
Thank you for your help!
OK, finally the answer was quite simple: conv() doesn't allow mixed types, so I had to use two vec instead of one vec and one uvec.
I was pretty sure I had already tried this, but maybe there was a problem with my installation at the time.
So I reinstalled Armadillo properly, making sure that both LAPACK and BLAS were found by Armadillo.
Then I added #define ARMA_DONT_USE_WRAPPER just before #include <armadillo>.
After doing this, the conv() example given in the documentation worked.
So I modified my code by converting the histogram into a vec:
arma::vec hist2 = arma::conv_to<arma::vec>::from(arma::hist(X2, nbins));
Then it worked!
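Putting the pieces together, a minimal sketch of the fixed version (the random data and the nbins / smoothingWindowWidth values are made up for illustration):

#include <armadillo>

int main() {
    arma::vec X2 = arma::randn<arma::vec>(1000);  // sample data for illustration
    const int nbins = 50;
    const int smoothingWindowWidth = 9;

    // hist() returns a uvec; convert so conv() sees two vec arguments
    arma::vec hist2 = arma::conv_to<arma::vec>::from(arma::hist(X2, nbins));

    // normalised Gaussian smoothing kernel
    arma::vec g(smoothingWindowWidth, arma::fill::zeros);
    const int halfWidth = smoothingWindowWidth / 2;
    for (int i = 0; i < smoothingWindowWidth; i++) {
        const double n = double(i - halfWidth) / halfWidth;
        g(i) = std::exp(-0.5 * n * n);
    }
    g /= arma::sum(g);

    arma::vec hist3 = arma::conv(hist2, g, "same");  // same length as hist2
    hist3.print("smoothed histogram:");
    return 0;
}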
I'm trying to calculate the autocorrelation of a vector of doubles using Armadillo, as follows:
QVector<double> calculateAutocorrelation(QVector<double> samples) {
    arma::Row<double> armadillo_samples(samples.toStdVector());  // convert samples to an Armadillo vector
    arma::Row<double> armadillo_autocorrelation = cor(armadillo_samples);  // compute the autocorrelation; returns a 1x1 matrix!
    QVector<double> ret(samples.size());
    for (int i = 0; i < samples.size(); i++)
        ret[i] = armadillo_autocorrelation(i);  // copy back into a QVector
    return ret;
}
However, as the comment on the second line notes, cor(armadillo_samples) returns a 1x1 matrix instead of another vector, as I would expect.
I have downloaded the latest stable release of Armadillo from their website (5.100.1) and tried this code on Linux with MKL enabled and on Windows with the precompiled BLAS/LAPACK libraries enabled.
Am I misunderstanding how this function works / using it wrong?
Relevant links:
- Armadillo documentation of cor
- Autocorrelation on Wikipedia (there's a link to MathWorld in the Armadillo documentation which is useful too, but I can't link to it)
To convert a 1x1 matrix to a pure scalar in Armadillo, use the as_scalar() function. For example:
mat X(1,1, fill::ones);
double val = as_scalar(X);
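As for the autocorrelation itself: cor() treats a plain vector as a single variable, which is why it collapses to 1x1. A hand-rolled sketch of a lag-based sample autocorrelation (my own helper, not an Armadillo function):

#include <armadillo>

// biased sample autocorrelation for lags 0..n-1, normalised so r(0) == 1
arma::vec autocorrelation(const arma::vec& s) {
    const arma::uword n = s.n_elem;
    const arma::vec d = s - arma::mean(s);  // remove the mean
    const double denom = arma::dot(d, d);   // lag-0 normaliser
    arma::vec r(n, arma::fill::zeros);
    for (arma::uword k = 0; k < n; ++k) {
        // sum of products of the series with itself shifted by k samples
        r(k) = arma::dot(d.head(n - k), d.tail(n - k)) / denom;
    }
    return r;
}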
I am new to the Eigen library and am trying to solve a generalized eigenvalue problem. As per the documentation of the GeneralizedEigenSolver template class in the Eigen library here, I am able to get the eigenvalues but not the eigenvectors. It seems the eigenvectors() member function is not implemented. Is there any other way I can generate the eigenvectors once I know the eigenvalues? I am using Eigen 3.2.4.
It's strange that this isn't implemented; the docs suggest that it is. It's definitely worth asking on the Eigen mailing list or filing a ticket, as maybe somebody is working on this and it's in the latest repository.
I have in the past used the GeneralizedSelfAdjointEigenSolver and it definitely produces eigenvectors. So if you know that both your matrices are symmetric, you can use that.
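For reference, a minimal sketch of that solver on small made-up symmetric matrices (B must also be positive definite):

#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd A(2, 2), B(2, 2);
    A << 2, 1,
         1, 3;
    B << 4, 1,
         1, 2;

    // solves A x = lambda * B x for self-adjoint A and positive-definite B
    Eigen::GeneralizedSelfAdjointEigenSolver<Eigen::MatrixXd> ges(A, B);
    std::cout << "eigenvalues:\n"  << ges.eigenvalues()  << "\n";
    std::cout << "eigenvectors:\n" << ges.eigenvectors() << "\n";
    return 0;
}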
If your matrices are very small, as a quick fix you could apply the standard EigenSolver to M^{-1} A since
A x = lambda * M x <==> M^{-1} A x = lambda * x,
but obviously this requires you to compute the inverse of your right-hand side matrix which is very expensive, so this is really a last resort.
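A sketch of that last-resort reduction (the random matrices are purely illustrative; M is shifted to keep it safely invertible):

#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(3, 3);
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(3, 3);
    M = M * M.transpose() + 3.0 * Eigen::MatrixXd::Identity(3, 3);  // well conditioned

    // reduce A x = lambda * M x to the standard problem (M^{-1} A) x = lambda * x
    Eigen::EigenSolver<Eigen::MatrixXd> es(M.inverse() * A);
    std::cout << "eigenvalues:\n"  << es.eigenvalues()  << "\n";
    std::cout << "eigenvectors:\n" << es.eigenvectors() << "\n";
    return 0;
}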
If all else fails, you could pull in a dedicated eigensolver library, say, FEAST, or use the LAPACK routines.
It doesn't appear to be implemented yet. At the end of the compute function there is:
m_eigenvectorsOk = false;//computeEigenvectors;
indicating that they're not actually calculated. Additionally, the eigenvectors() function is commented out and looks like this (note the TODO):
//template<typename MatrixType>
//typename GeneralizedEigenSolver<MatrixType>::EigenvectorsType GeneralizedEigenSolver<MatrixType>::eigenvectors() const
//{
// eigen_assert(m_isInitialized && "EigenSolver is not initialized.");
// eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues.");
// Index n = m_eivec.cols();
// EigenvectorsType matV(n,n);
// // TODO
// return matV;
//}
If you want the eigenvalues and eigenvectors of a single matrix, you could use EigenSolver like this:

#include <iostream>
#include <Eigen/Dense>

int main(int argc, char* argv[]) {
    Eigen::EigenSolver<Eigen::MatrixXf> es;
    Eigen::MatrixXf A = Eigen::MatrixXf::Random(4, 4);
    es.compute(A);
    std::cout << es.eigenvectors() << std::endl;
    return 0;
}
So I have a function like

int f(int i, int j, int c, double d) {
    /* ...any operations with i, j, c, d affect the int we return */
}

Is there anything in Boost or the standard library that would take my function and find the input arguments that minimize its output?
I assume you're trying to do a "simple" mathematical multi-dimensional minimization.
GSL has some functions to help you with this. I wouldn't look any further ;)
I understand you to be looking for code to perform mathematical optimization.
Boost does not have anything that does this as far as I know, and neither does the standard library; however, NLopt may be what you're looking for.
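A rough sketch of how that might look with NLopt's C++ API (the toy objective and the rounding of the integer arguments are my own assumptions; NLopt optimizes continuous variables, so integer inputs make this crude at best):

#include <nlopt.hpp>
#include <cmath>
#include <iostream>
#include <vector>

// toy stand-in for the original f(int, int, int, double)
int f(int i, int j, int c, double d) {
    return (i - 1) * (i - 1) + (j - 2) * (j - 2) + (c - 3) * (c - 3)
           + int(100.0 * (d - 0.5) * (d - 0.5));
}

// NLopt works on doubles, so the integer arguments are rounded inside the wrapper
double objective(const std::vector<double>& x, std::vector<double>&, void*) {
    return double(f(int(std::lround(x[0])), int(std::lround(x[1])),
                    int(std::lround(x[2])), x[3]));
}

int main() {
    nlopt::opt opt(nlopt::LN_NELDERMEAD, 4);  // derivative-free local search
    opt.set_min_objective(objective, nullptr);
    opt.set_xtol_rel(1e-6);

    std::vector<double> x = {0.0, 0.0, 0.0, 0.0};  // starting point
    double minf = 0.0;
    opt.optimize(x, minf);
    std::cout << "min " << minf << " at (" << x[0] << ", " << x[1]
              << ", " << x[2] << ", " << x[3] << ")\n";
    return 0;
}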
You can use Brent's algorithm to minimise simple functions.
http://www.boost.org/doc/libs/1_65_0/libs/math/doc/html/math_toolkit/roots/brent_minima.html
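A minimal sketch of that routine on a toy quadratic (note that brent_find_minima handles a single variable, so a multi-argument function like the one above would need a different approach or a coordinate-at-a-time loop):

#include <iostream>
#include <limits>
#include <utility>
#include <boost/math/tools/minima.hpp>

int main() {
    // minimise f(x) = (x - 2)^2 + 1 over the bracket [-5, 5]
    auto f = [](double x) { return (x - 2.0) * (x - 2.0) + 1.0; };

    const int bits = std::numeric_limits<double>::digits / 2;  // requested precision
    std::pair<double, double> r =
        boost::math::tools::brent_find_minima(f, -5.0, 5.0, bits);

    std::cout << "x = " << r.first << ", f(x) = " << r.second << std::endl;
    return 0;
}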