I have just started using the Rcpp package in R; my learning is inspired by the Advanced R course by Hadley Wickham.
Within RStudio I have the following .cpp file. The question is more general, but this example helps.
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector runifC(int n, double min=0, double max=1) {
  NumericVector out(n);
  for(int i = 0; i < n; ++i) {
    out[i] = min + ((double) rand() / (RAND_MAX)) * (max - min);
  }
  return out;
}
/*** R
library(microbenchmark)
microbenchmark(
'R unif-1' = runif(1),
'C++ unif-1' = runifC(1),
'R unif-100' = runif(100),
'C++ unif-100' = runifC(100),
'R unif-1000' = runif(1000),
'C++ unif-1000' = runifC(1000),
'R unif-100000' = runif(100000),
'C++ unif-100000' = runifC(100000)
)
*/
When I source/save the file it shows me the performance output.
Unit: nanoseconds
expr min lq mean median uq max neval
R unif-1 2061 2644.5 4000.71 3456.0 4297.0 15402 100
C++ unif-1 710 1190.0 1815.11 1685.0 2168.5 5776 100
R unif-100 4717 5566.5 6794.14 6563.0 7435.5 16600 100
C++ unif-100 1450 1997.5 2663.29 2591.5 3107.0 5307 100
R unif-1000 28210 29584.5 31310.54 30380.0 31599.0 52879 100
C++ unif-1000 8292 8951.0 10113.78 9462.5 10121.5 25099 100
R unif-100000 2642581 2975117.0 3104580.62 3030938.5 3119489.0 5435046 100
C++ unif-100000 699833 990924.0 1058855.49 1034430.5 1075078.0 1530351 100
I would expect that runif would be a very optimised function but the C++ code runs much more efficiently. I might be naive here, but if there is such a difference in performance then why aren't all applicable R functions rewritten in C++?
It seems so obvious that there are a lot of improvements possible that I feel as if I am missing a huge reason why not all R functions can be blindly copied to C++ for performance.
Edit: for this example it has been shown that the C++ implementation based on rand() is slightly flawed. The performance gap that I noticed is mostly down to the rand() function; the performance of other functions doesn't seem as drastic, so I changed the title of the question.
Please DO NOT USE rand(). Doing so will also get your package kicked off CRAN, should you submit it.
See e.g. this C++ reference page for a warning:
Notes
There are no guarantees as to the quality of the random sequence produced. In the past, some implementations of rand() have had serious shortcomings in the randomness, distribution and period of the sequence produced (in one well-known example, the low-order bit simply alternated between 1 and 0 between calls).
If you are interested in alternative random number generators and timing, see the Rcpp Gallery.
In general, use the generators provided by R which are of excellent statistical quality, and offered in both scalar and vectorised form ("Rcpp Sugar") by Rcpp.
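For example, a minimal sketch of the same generator written with Rcpp sugar's vectorised runif() might look like the following (the function name runifSugar is just an illustration). Because it draws from R's own RNG, it respects set.seed() and inherits R's statistical quality.
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector runifSugar(int n, double min = 0, double max = 1) {
  // Rcpp sugar's runif() calls R's uniform RNG rather than the C library's rand()
  return runif(n, min, max);
}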
As of R-3.1.1, runif uses the .External interface, which copies its arguments. Luke Tierney changed this to use the .Call interface in R-devel in revision 66110. The .Call interface does not copy its arguments. Rcpp uses the .Call interface.
Your C++ code is still faster under R-devel (using the .Call interface). This is likely because of differences in the random number generator being used. Also, R's functions will generally have more checks than whatever specialized code you write; and those checks take time.
Related
I am a beginner with Rcpp. I recently wrote some Rcpp code that operates on two 3-dimensional arrays, Array1 and Array2. Suppose Array1 has dimension (1000, 100, 40) and Array2 has dimension (1000, 96, 40).
I would like to perform wilcox.test using:
wilcox.test(Array1[i, j,], Array2[i,,])
In R, I wrote nested for loops that completed the calculation in about a half hour.
Then I rewrote it with Rcpp. The calculation within Rcpp took an hour to achieve the same results. I thought it should be faster since it is written in C++, so I guess my coding style is the cause of the low efficiency.
The following is my Rcpp code. Would you mind helping me find out what improvements I should make? I appreciate it!
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector Cal(NumericVector Array1, NumericVector Array2, Function wilc) {
  NumericVector vecArray1(Array1);
  IntegerVector arrayDims1 = vecArray1.attr("dim");
  NumericVector vecArray2(Array2);
  IntegerVector arrayDims2 = vecArray2.attr("dim");
  arma::cube cubeArray1(vecArray1.begin(), arrayDims1[0], arrayDims1[1], arrayDims1[2], false);
  arma::cube cubeArray2(vecArray2.begin(), arrayDims2[0], arrayDims2[1], arrayDims2[2], false);
  arma::mat STORE = arma::mat(arrayDims1[0], arrayDims1[1]);
  for (int i = 0; i < arrayDims1[1]; i++) {
    for (int j = 0; j < arrayDims1[0]; j++) {
      arma::vec v_cl = cubeArray1.subcube(arma::span(j), arma::span(i), arma::span::all);
      // arma::mat tem = cubeArray2.subcube(arma::span(j), arma::span::all, arma::span::all);
      // arma::vec v_ct = arma::vectorise(tem);
      arma::vec v_ct = arma::vectorise(cubeArray2.subcube(arma::span(j), arma::span::all, arma::span::all));
      Rcpp::List resu = wilc(v_cl, v_ct);
      STORE(j, i) = resu[2];
    }
  }
  return Rcpp::wrap(STORE);
}
The function wilc is wilcox.test from R.
The following is part of my R code implementing the above idea, where CELLS and CTRLS are two 3D arrays in R.
for (i in 1:ncol(CELLS)) {
  if (T) { print(i) }
  for (j in 1:dim(CELLS)[1]) {
    wtest = wilcox.test(CELLS[j,i,], CTRLS[j,,])
    TSTAT_clcl[j,i] = wtest$p.value
  }
}
Then, I wrote it into Rcpp. The calculation within Rcpp took an hour to achieve the same results. I thought it should be faster since it is written in C++.
The required disclaimer:
Embedding R code in C++ and expecting a speed-up is a fool's game. You will need to rewrite wilcox.test fully in C++ instead of making a call to R; otherwise, you lose whatever speed advantage you would gain.
In particular, I wrote up a post illustrating this conundrum regarding the use of the diff function in R. Within the post, I compared a pure C++ implementation, a C++ implementation that calls an R function within the routine, and a pure R implementation. Stealing the microbenchmark from that post illustrates the issue.
expr min lq mean median uq max neval
arma_fun 26.117 27.318 37.54248 28.218 29.869 751.087 100
r_fun 127.883 134.187 212.81091 138.390 151.148 1012.856 100
rcpp_fun 250.663 265.972 356.10870 274.228 293.590 1430.426 100
Thus, the pure C++ implementation had the largest speed-up.
Hence, the takeaway is the need to translate the wilcox.test R routine into a pure C++ implementation to drop the run time. Otherwise, writing the code in C++ is pointless, because the C++ component must stop and await results from R before continuing, and that hand-off traditionally carries a lot of overhead to ensure the data is well protected.
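To make that concrete, below is a rough sketch of what a "pure C++" rank-sum computation could look like. It only uses the normal approximation, ignores ties and skips the continuity correction that wilcox.test() applies by default, so it illustrates the idea rather than being a drop-in replacement; the name wilcox_approx is made up.
#include <Rcpp.h>
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>
using namespace Rcpp;
// Sketch only: Wilcoxon rank-sum p-value via a plain normal approximation,
// no tie handling and no continuity correction.
// [[Rcpp::export]]
double wilcox_approx(NumericVector x, NumericVector y) {
  int n1 = x.size(), n2 = y.size(), n = n1 + n2;
  std::vector< std::pair<double, int> > all(n);
  for (int i = 0; i < n1; ++i) all[i] = std::make_pair(x[i], 0);
  for (int j = 0; j < n2; ++j) all[n1 + j] = std::make_pair(y[j], 1);
  std::sort(all.begin(), all.end());
  double rank_sum = 0.0;                      // rank sum of the first sample
  for (int k = 0; k < n; ++k)
    if (all[k].second == 0) rank_sum += k + 1;
  double U  = rank_sum - n1 * (n1 + 1.0) / 2.0;
  double mu = n1 * n2 / 2.0;
  double sd = std::sqrt(n1 * n2 * (n + 1.0) / 12.0);
  double z  = (U - mu) / sd;
  return 2.0 * R::pnorm(-std::fabs(z), 0.0, 1.0, 1, 0);
}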
I read here that it is possible (and, as I interpreted it, straightforward) to call Stan routines from a C++ program.
I have some complex log-likelihood functions which I have coded up in C++ and really have no idea how I could code them using the Stan language. Is it possible to call the Monte Carlo routines in Stan using the log-likelihood function I have already coded in C++? If so are there any examples of this?
It seems like quite a natural thing to do but I cannot find any examples or pointers as to how to do this.
Upon further review (you may want to unaccept my previous answer), you could try this: Write a .stan program with a user-defined function in the functions block that has the correct signature (and parses) but basically does nothing. Like this
functions {
  real foo_log(real[] y, vector beta, matrix X, real sigma) {
    return not_a_number(); // replace this after parsing to C++
  }
}
data {
  int<lower=1> N;
  int<lower=1> K;
  matrix[N,K] X;
  real y[N];
}
parameters {
  vector[K] beta;
  real<lower=0> sigma;
}
model {
  y ~ foo(beta, X, sigma);
  // priors here
}
Then, use CmdStan to compile that model, which will generate a .hpp file as an intermediate step. Edit that .hpp file inside the body of foo_log to call your templated C++ function and also #include the header file(s) where your stuff is defined. Then recompile and execute the binary.
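For what it is worth, the kind of header that step assumes could look roughly like the sketch below. Everything in it is hypothetical (the file name my_loglik.hpp, the function my_loglik, and its arguments, which would have to match whatever your edited foo_log forwards); the only point it illustrates is that the function should be templated on the scalar type so that Stan's autodiff types can pass through it.
// my_loglik.hpp (hypothetical): an independent-normal log-likelihood,
// templated on the scalar type T so it works for double and for Stan's
// autodiff scalar types alike.
#ifndef MY_LOGLIK_HPP
#define MY_LOGLIK_HPP
#include <cmath>
#include <cstddef>
#include <vector>
template <typename T>
T my_loglik(const std::vector<double>& y,
            const std::vector<T>& mu,
            const T& sigma) {
  using std::log;                        // fall back to std::log when T is double
  T lp = 0;
  for (std::size_t i = 0; i < y.size(); ++i) {
    T z = (y[i] - mu[i]) / sigma;
    lp += -0.5 * z * z - log(sigma);     // constant terms dropped
  }
  return lp;
}
#endif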
That might actually work for you, but if whatever you are doing is somewhat widely useful, we would love for you to contribute the C++ stuff.
I think your question is a bit different from the one you linked to. He had a complete Stan program and wanted to drive it from C++, whereas you are asking if you could circumvent writing a Stan program by calling an external C++ function to evaluate the log-likelihood. But that would not get you very far because you still have to pass in the data in a form that Stan can handle, declare to Stan what are the unknown parameters (plus their support), etc. So, I don't think you can (or should) evade learning the Stan language.
But it is fairly easy to expose a C++ function to the Stan language, which essentially just involves adding your my_loglikelihood.hpp file in the right place under ${STAN_HOME}/lib/stan_math_${VERSION}/stan/math/, adding an include statement to the math.hpp file in that subdirectory, and editing ${STAN_HOME}/src/stan/lang/function_signatures.h. At that point, your .stan program could look as simple as
data {
  // declare data like y, X, etc.
}
parameters {
  // declare parameters like theta
}
model {
  // call y ~ my_loglikelihood_log(theta, X)
}
But I think the real answer to your question is that if you have already written a C++ function to evaluate the log-likelihood, then rewriting it in the Stan language shouldn't take more than a few minutes. The Stan language is very C-like so that it is easier to parse the .stan file into a C++ source file. Here is a Stan function I wrote for the log-likelihood of a conditionally Gaussian outcome in a regression context:
functions {
  /**
   * Increments the log-posterior with the logarithm of a multivariate normal
   * likelihood with a scalar standard deviation for all errors
   * Equivalent to y ~ normal(intercept + X * beta, sigma) but faster
   * @param beta vector of coefficients (excluding intercept)
   * @param b precomputed vector of OLS coefficients (excluding intercept)
   * @param middle matrix (excluding ones) typically precomputed as crossprod(X)
   * @param intercept scalar (assuming X is centered)
   * @param ybar precomputed sample mean of the outcome
   * @param SSR positive precomputed value of the sum of squared OLS residuals
   * @param sigma positive value for the standard deviation of the errors
   * @param N integer equal to the number of observations
   */
  void ll_mvn_ols_lp(vector beta, vector b, matrix middle,
                     real intercept, real ybar,
                     real SSR, real sigma, int N) {
    increment_log_prob( -0.5 * (quad_form_sym(middle, beta - b) +
                        N * square(intercept - ybar) + SSR) /
                        square(sigma) - # 0.91... is log(sqrt(2 * pi()))
                        N * (log(sigma) + 0.91893853320467267) );
  }
}
which is basically just me dumping what could otherwise be C-syntax into the body of a function in the Stan language that is then callable in the model block of a .stan program.
So, in short, I think it would probably be easiest for you to rewrite your C++ function as a Stan function. However, it is possible that your log-likelihood involves something exotic for which there is currently no corresponding Stan syntax. In that case, you could fall back to exposing that C++ function to the Stan language and ideally making pull requests to the math and stan repositories on GitHub under stan-dev so that other people could use it (although then you would also have to write unit-tests, documentation, etc.).
I have two multi-class data sets with 5 labels, one for training, and the other for cross validation. These data sets are stored as .csv files, so they act as a control in this experiment.
I have a C++ wrapper for libsvm, and the MATLAB functions for libsvm.
For both C++ and MATLAB:
Using a C-type SVM with an RBF kernel, I iterate over 2 lists of C and Gamma values. For each parameter combination, I train on the training data set and then predict the cross validation data set. I store the accuracy of the prediction in a 2D map which correlates to the C and Gamma value which yielded the accuracy.
I've recreated different training and cross validation data sets many, many times. Each time, the C++ and MATLAB accuracies are different; sometimes by a lot! Mostly MATLAB produces higher accuracies, but sometimes the C++ implementation is better.
What could be accounting for these differences? The C/Gamma values I'm trying are the same, as are the remaining SVM parameters (default).
There should be no significant differences, as both the C++ and Matlab code use the same svm.c file underneath. So what can be the reason?
- an implementation error in your code(s); this is unfortunately the most probable one
- the wrapper you use has some bug and/or uses a different version of libsvm than your Matlab code (libsvm is written in pure C and comes with Python, Matlab and Java wrappers, so your C++ wrapper is "not official"), or your wrapper assumes some additional default values which are not the defaults in the C/Matlab/Python/Java implementations
- you perform cross validation in a somewhat randomized form (shuffling the data and then folding, which is completely correct and reasonable, but will lead to different results in two different runs)
- there is some rounding/conversion performed while loading data from .csv in one (or both) of your codes, which leads to inconsistencies (really not likely to happen, yet still possible)
I trained an SVC using scikit-learn (sklearn.svm.SVC) within a Python Jupyter notebook. I wanted to use the trained classifier in MATLAB v2022a and in C++. I needed to verify that all three versions' predictions matched for each implementation of the kernel, decision, and prediction functions. I found some useful guidance from bcorso's implementation of the original libsvm C++ code.
Exporting the structure that represents the trained model is explained in bcorso's post and is required in order to call his prediction function implementation:
predict(params, sv, nv, a, b, cs, X)
for it to match sklearn's version for a trained classifier instance, clf:
clf.predict(X)
Once I established this match, I created MATLAB versions of bcorso's kernel,
function [k] = kernel_svm(params, sv, X)
    k = zeros(1,length(sv));
    if strcmp(params.kernel,'linear')
        for i = 1:length(sv)
            k(i) = dot(sv(i,:),X);
        end
    elseif strcmp(params.kernel,'rbf')
        for i = 1:length(sv)
            k(i) = exp(-params.gamma*dot(sv(i,:)-X,sv(i,:)-X));
        end
    else
        uiwait(msgbox('kernel not defined','Error','modal'));
    end
    k = k';
end
decision,
function [d] = decision_svm(params, sv, nv, a, b, X)
    %% calculate the kernels
    kvalue = kernel_svm(params, sv, X);
    %% define the start and end index for support vectors for each class
    nr_class = length(nv);
    start = zeros(1,nr_class);
    start(1) = 1;
    %% First Class Loop
    for i = 1:(nr_class-1)
        start(i+1) = start(i) + nv(i)-1;
    end
    %% Other Classes Nested Loops
    for i = 1:nr_class
        for j = i+1:nr_class
            sum = 0;
            si = start(i); %first class start
            sj = start(j); %first class end
            ci = nv(i)+1;  %next class start
            cj = ci + nv(j)-1; %next class end
            for k = si:sj
                sum = sum + a(k) * kvalue(k);
            end
            sum1 = sum;
            sum = 0;
            for k = ci:cj
                sum = sum + a(k) * kvalue(k);
            end
            sum2 = sum;
        end
    end
    %% Add class sums and the intercept
    sumd = sum1 + sum2;
    d = -(sumd + b);
end
and predict functions.
function [class, classIndex] = predict_svm(params, sv, nv, a, b, cs, X)
    dec_value = decision_svm(params, sv, nv, a, b, X);
    if dec_value <= 0
        class = cs(1);
        classIndex = 1;
    else
        class = cs(2);
        classIndex = 0;
    end
end
Translating the Python comprehension syntax into a MATLAB/C++ equivalent of the summations required nested for loops in the decision function.
It is also necessary to account for MATLAB indexing (base 1) vs. Python/C++ indexing (base 0).
The trained classifier model is conveyed by params, sv, nv, a, b, cs, which can be gathered within a structure after having exported the sv and a matrices as .csv files from the Python notebook. I simply created a wrapper MATLAB function svcInfo that builds the structure:
svcStruct = svcInfo();
params = svcStruct.params;
sv= svcStruct.sv;
nv = svcStruct.nv;
a = svcStruct.a;
b = svcStruct.b;
cs = svcStruct.cs;
Or one can save the structure contents from the MATLAB workspace within a .mat file.
The new case for prediction is provided as a vector X,
%Classifier input feature vector
X=[x1 x2...xn];
A simplified C++ implementation that follows bcorso's Python version is fairly similar to this MATLAB implementation, in that it uses the nested for loops within the decision function, but it uses zero-based indexing.
Once tested, I may expand this post with the C++ version of the MATLAB code shared above.
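In the meantime, as a rough, untested illustration (not the tested C++ version referred to above), a zero-based kernel evaluation along the lines of the MATLAB kernel_svm could look like this:
#include <cmath>
#include <cstddef>
#include <string>
#include <vector>
// Illustrative only: evaluates the linear or RBF kernel between each support
// vector and the input X, mirroring the MATLAB kernel_svm() but zero-based.
std::vector<double> kernel_svm(const std::string& kernel, double gamma,
                               const std::vector< std::vector<double> >& sv,
                               const std::vector<double>& X) {
    std::vector<double> k(sv.size(), 0.0);
    for (std::size_t i = 0; i < sv.size(); ++i) {
        if (kernel == "linear") {
            for (std::size_t d = 0; d < X.size(); ++d)
                k[i] += sv[i][d] * X[d];               // dot(sv(i,:), X)
        } else if (kernel == "rbf") {
            double sq = 0.0;                           // squared distance ||sv(i,:) - X||^2
            for (std::size_t d = 0; d < X.size(); ++d) {
                double diff = sv[i][d] - X[d];
                sq += diff * diff;
            }
            k[i] = std::exp(-gamma * sq);
        }
    }
    return k;
}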
Inspired by Herb Sutter's compelling lecture Not your father's C++, I decided to take another look at the latest version of C++ using Microsoft's Visual Studio 2010. I was particularly interested by Herb's assertion that C++ is "safe and fast" because I write a lot of performance-critical code.
As a benchmark, I decided to try to write the same simple FFT algorithm in a variety of languages.
I came up with the following C++11 code that uses the built-in complex type and vector collection:
#include <complex>
#include <vector>
using namespace std;
// Must provide type or MSVC++ barfs with "ambiguous call to overloaded function"
double pi = 4 * atan(1.0);
void fft(int sign, vector<complex<double>> &zs) {
    unsigned int j = 0;
    // Warning about signed vs unsigned comparison
    for (unsigned int i = 0; i < zs.size() - 1; ++i) {
        if (i < j) {
            auto t = zs.at(i);
            zs.at(i) = zs.at(j);
            zs.at(j) = t;
        }
        int m = zs.size() / 2;
        j ^= m;
        while ((j & m) == 0) { m /= 2; j ^= m; }
    }
    for (unsigned int j = 1; j < zs.size(); j *= 2)
        for (unsigned int m = 0; m < j; ++m) {
            auto t = pi * sign * m / j;
            auto w = complex<double>(cos(t), sin(t));
            for (unsigned int i = m; i < zs.size(); i += 2 * j) {
                complex<double> zi = zs.at(i), t = w * zs.at(i + j);
                zs.at(i) = zi + t;
                zs.at(i + j) = zi - t;
            }
        }
}
Note that this function only works for n-element vectors where n is an integral power of two. Anyone looking for fast FFT code that works for any n should look at FFTW.
As I understand it, the traditional xs[i] syntax from C for indexing a vector does not do bounds checking and, consequently, is not memory safe and can be a source of memory errors such as non-deterministic corruption and memory access violations. So I used xs.at(i) instead.
Now, I want this code to be "safe and fast" but I am not a C++11 expert so I'd like to ask for improvements to this code that would make it more idiomatic or efficient?
I think you are being overly "safe" in your use of at(). In most of your cases the index used is trivially verifiable as being constrained by the size of the container in the for loop.
e.g.
for (unsigned int i = 0; i < zs.size() - 1; ++i) {
    ...
    auto t = zs.at(i);
The only ones I'd leave as at()s are the (i + j)s. It's not immediately obvious whether they would always be constrained (although if I was really unsure I'd probably manually check - but I'm not familiar with FFTs enough to have an opinion in this case).
There are also some fixed computations being repeated for each loop iteration:
int m=zs.size()/2;
pi * sign
2*j
And the zs.at(i + j) is computed twice.
It's possible that the optimiser may catch these - but if you are treating this as performance critical, and you have your timers testing it, I'd hoist them out of the loops (or, in the case of zs.at(i + j), just take a reference on first use) and see if that impacts the timer.
Talking of second-guessing the optimiser: I'm sure that the calls to .size() will be inlined as, at least, a direct call to an internal member variable - but given how many times you call it I'd also experiment with introducing local variables for zs.size() and zs.size()-1 upfront. They're more likely to be put into registers that way too.
I don't know how much of a difference (if any) all of this will have on your total runtime - some of it may already be caught by the optimiser, and the differences may be small compared to the computations involved - but worth a shot.
As for being idiomatic, my only comment, really, is that size() returns a std::size_t (which is usually a typedef for an unsigned integer type - but it's more idiomatic to use that type instead). If you did want to use auto but avoid the warning you could try adding the ul suffix to the 0 - not sure I'd say that is idiomatic, though. I suppose you're already less than idiomatic in not using iterators here, but I can see why you can't do that (easily).
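Putting those suggestions together, here is an untested sketch of what the hoisted version might look like (same algorithm as the question's fft(), with the size, pi * sign and 2*j pulled out of the loops and plain indexing where the bounds are clear; measure before trusting any of it):
#include <cmath>
#include <complex>
#include <vector>
using namespace std;
void fft_hoisted(int sign, vector<complex<double>> &zs) {
    static const double pi = 4 * atan(1.0);
    const size_t n = zs.size();                  // cache zs.size() once
    const size_t half = n / 2;                   // hoisted out of the bit-reversal loop
    size_t j = 0;
    for (size_t i = 0; i < n - 1; ++i) {
        if (i < j) swap(zs[i], zs[j]);           // swap the two elements in place
        size_t m = half;
        j ^= m;
        while ((j & m) == 0) { m /= 2; j ^= m; }
    }
    for (size_t jj = 1; jj < n; jj *= 2) {
        const double base = pi * sign / static_cast<double>(jj);  // pi * sign hoisted
        const size_t step = 2 * jj;                               // 2*j hoisted
        for (size_t m = 0; m < jj; ++m) {
            const complex<double> w = polar(1.0, base * m);       // same as {cos(t), sin(t)}
            for (size_t i = m; i < n; i += step) {
                const complex<double> zi = zs[i], t = w * zs[i + jj];
                zs[i] = zi + t;
                zs[i + jj] = zi - t;
            }
        }
    }
}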
Update
I gave all my suggestions a try and they all had a measurable performance improvement - except the i+j and 2*j precalcs - they actually caused a slight slowdown! I presume they either prevented a compiler optimisation or prevented it from using registers for some things.
Overall I got a >10% perf. improvement with those suggestions.
I suspect more could be had if the second block of loops was refactored a little to avoid the jumps - and having done so enabling SSE2 instruction set may give a significant boost (I did try it as is and saw a slight slowdown).
I think that refactoring, along with using something like MKL for the cos and sin calls should give greater, and less brittle, improvements. And neither of those things would be language dependent (I know this was originally being compared to an F# implementation).
Update 2
I forgot to mention that pre-calculating zs.size() did make a difference.
Update 3
Also forgot to say (until reminded by @xeo in a comment to the OP) that the block following the i < j check can be boiled down to a std::swap. This is more idiomatic and at least as performant - in the worst case it should inline to the same code as written. Indeed, when I did it I saw no change in the performance. In other cases it can lead to a performance gain if move constructors are available.
Since last night I have been trying out Rcpp and inline, and so far I am really enjoying it. But I am kinda new to C in general and can only do basic stuff so far, and I am having a hard time finding help online on things like functions.
Something I was working on was a function that finds the minimum of a vector in the global environment. I came up with:
library("inline")
library("Rcpp")
foo <- rnorm(100)
bar <- cxxfunction( signature(),
'
  Environment e = Environment::global_env();
  NumericVector foo = e["foo"];
  int min = 0;   // index of the smallest element seen so far
  for (int i = 0; i < foo.size(); i++)
  {
    if ( foo[i] < foo[min] ) min = i;
  }
  return wrap(min + 1);
', plugin = "Rcpp")
bar()
But it seems like there should be an easier way to do this, and it is quite a bit slower than which.min():
system.time(replicate(100000,bar()))
user system elapsed
0.27 0.00 0.26
system.time(replicate(100000,which.min(foo)))
user system elapsed
0.2 0.0 0.2
Am I overlooking a basic C++ or Rcpp function that does this? And if so, where could I find a list of such functions?
I guess this question is related to:
Where can I learn how to write C code to speed up slow R functions?
but different in that I am not really interested in how to incorporate C++ into R, but more in how and where to learn basic C++ code that is usable in R.
Glad you are finding Rcpp useful.
The first comment by Billy is quite correct. There is overhead in the function lookup and there is overhead in the [] lookup for each element etc.
Also, a much more common approach is to take a vector you have in R, pass it to a compiled function you create via inline and Rcpp, and have it return the result. Try that. There are plenty of examples in the package and scattered over the rcpp-devel mailing list archives.
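For instance, a small sketch of that approach using Rcpp attributes (the function name minidx is made up): the vector is passed in as an argument rather than fetched from the global environment, and the 1-based index of the minimum is returned, like which.min().
#include <Rcpp.h>
using namespace Rcpp;
// Hypothetical minidx(): no environment lookup on every call; the vector arrives
// as a function argument and the 1-based index of its minimum is returned.
// [[Rcpp::export]]
int minidx(NumericVector x) {
  int idx = 0;
  for (int i = 1; i < x.size(); ++i) {
    if (x[i] < x[idx]) idx = i;          // remember the smallest element seen so far
  }
  return idx + 1;                        // convert to R's 1-based indexing
}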
Edit: I could not resist trying to set up a very C++ / STL style answer.
R> src <- '
+ Rcpp::NumericVector x(xs);
+ Rcpp::NumericVector::iterator it = // iterator type
+ std::min_element(x.begin(), x.end()); // STL algo
+ return Rcpp::wrap(it - x.begin()); '
R> minfun <- cxxfunction(signature(xs="numeric"), body=src, plugin="Rcpp")
R> minfun(c(7:20, 3:5))
[1] 14
R>
That is not exactly the easiest answer but it shows how by using what C++ offers you can find a minimum element without an (explicit) loop even at the C++ level. But the builtin min() function is still faster.
Edit 2: Corrected as per Romain's comment below.