cbind in R is relatively time consuming when called repeatedly, but it is also powerful in that it handles various data types.
I have written code that is 3X faster than cbind when binding two matrices, and bind_cols in the dplyr package is about 100X faster than cbind; it is only a pity that it cannot take a matrix as input. Can someone make the code below even faster? Also, how can I quickly bind sparse matrices (see the sketch after the code below)? Here is the code I used:
require( Rcpp )
func <- 'NumericMatrix mmult(NumericMatrix a, NumericMatrix b) {
  // the column count of the first matrix
  int acoln = a.ncol();
  // the column count of the second matrix
  int bcoln = b.ncol();
  // build a new matrix with dimensions a.nrow() x (acoln + bcoln)
  NumericMatrix out(a.nrow(), acoln + bcoln);
  for (int j = 0; j < acoln + bcoln; j++) {
    if (j < acoln) {
      out(_, j) = a(_, j);
    } else {
      // copy the columns of the second matrix into the new matrix
      out(_, j) = b(_, j - acoln);
    }
  }
  return out;
}'
a <- matrix(rep(1,2000*100),2000)
b <- matrix(rep(2,2000*10),2000)
cppFunction(func)
system.time(for (i in seq(1,800)) {mmult(a,b)})
system.time(for (i in seq(1,800)) {cbind(a,b)})
identical(mmult(a,b),cbind(a,b))
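(Aside on the sparse-matrix part of the question: the Matrix package provides cbind2(), which keeps the result sparse. A minimal sketch of mine, not benchmarked against the dense versions here:)
library(Matrix)
sa <- Matrix(rbinom(2000 * 100, 1, 0.01), nrow = 2000, sparse = TRUE)
sb <- Matrix(rbinom(2000 * 10, 1, 0.01), nrow = 2000, sparse = TRUE)
sc <- cbind2(sa, sb)   # sparse result: columns of sa followed by columns of sb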
Borrowing an idea from this comment by Romain Francois on one of my previous Rcpp adventures, you can skip zero-initialising the output matrix with no_init_matrix():
func1 <- 'NumericMatrix mmult1(NumericMatrix a, NumericMatrix b) {
  int acoln = a.ncol();
  int bcoln = b.ncol();
  NumericMatrix out = no_init_matrix(a.nrow(), acoln + bcoln);
  for (int j = 0; j < acoln + bcoln; j++) {
    if (j < acoln) {
      out(_, j) = a(_, j);
    } else {
      out(_, j) = b(_, j - acoln);
    }
  }
  return out;
}'
cppFunction(func1)
set.seed(42)
a <- matrix(rnorm(1e7), 1e3)
b <- matrix(runif(1e7), 1e3)
identical(mmult(a, b), mmult1(a, b))
#TRUE
library(microbenchmark)
microbenchmark(mmult(a, b),
mmult1(a, b),
cbind(a, b),
times = 10)
#Unit: milliseconds
# expr min lq mean median uq max neval
# mmult(a, b) 69.64 70.52 89.71 72.28 128.8 136.6 10
# mmult1(a, b) 50.84 50.95 69.65 51.43 111.6 114.4 10
# cbind(a, b) 192.35 194.67 201.13 195.30 196.1 255.9 10
Not a huge deal, but not bad either for such a trivial change.
Related
First of all, I am a novice user, so forgive my general ignorance. I am looking for a faster alternative to the %*% operator in R. Even though older posts suggest the use of RcppArmadillo, I have tried for 2 hours to make RcppArmadillo work without success: I always run into lexical issues that yield 'unexpected ...' errors. I have found the following Rcpp function, which I can make work:
library(Rcpp)
func <- '
NumericMatrix mmult(NumericMatrix m, NumericMatrix v, bool byrow = true)
{
  if (m.nrow() != v.nrow()) stop("Non-conformable arrays");
  if (m.ncol() != v.ncol()) stop("Non-conformable arrays");
  NumericMatrix out(m);
  for (int i = 0; i < m.nrow(); i++)
  {
    for (int j = 0; j < m.ncol(); j++)
    {
      out(i, j) = m(i, j) * v(i, j);
    }
  }
  return out;
}
'
This function, however, performs element-wise multiplication and does not behave as %*%. Is there an easy way to modify the above code to achieve the intended result?
EDIT:
I have come up with a function using RcppEigen that seems to beat %*%:
library(inline)   # cxxfunction() comes from the inline package
etest <- cxxfunction(signature(tm = "NumericMatrix",
                               tm2 = "NumericMatrix"),
                     plugin = "RcppEigen",
                     body = "
  NumericMatrix tm22(tm2);
  NumericMatrix tmm(tm);
  const Eigen::Map<Eigen::MatrixXd> ttm(as<Eigen::Map<Eigen::MatrixXd> >(tmm));
  const Eigen::Map<Eigen::MatrixXd> ttm2(as<Eigen::Map<Eigen::MatrixXd> >(tm22));
  Eigen::MatrixXd prod = ttm * ttm2;
  return wrap(prod);
")
set.seed(123)
M1 <- matrix(sample(1e3),ncol=50)
M2 <- matrix(sample(1e3),nrow=50)
identical(etest(M1,M2), M1 %*% M2)
[1] TRUE
res <- microbenchmark(
  etest(M1, M2),
  M1 %*% M2,
  times = 10000L)
res
Unit: microseconds
expr min lq mean median uq max neval
etest(M1, M2) 5.709 6.61 7.414607 6.611 7.211 49.879 10000
M1 %*% M2 11.718 12.32 13.505272 12.621 13.221 58.592 10000
There are good reasons to rely on existing libraries / packages for standard tasks. The routines in the libraries are:
- optimized
- thoroughly tested
- a good means to keep the code compact, human-readable, and easy to maintain
Therefore I think that using RcppArmadillo or RcppEigen should be preferable here. However, to answer your question, below is a possible Rcpp code to perform a matrix multiplication:
library(Rcpp)
cppFunction('NumericMatrix mmult(const NumericMatrix& m1, const NumericMatrix& m2) {
  if (m1.ncol() != m2.nrow()) stop("Incompatible matrix dimensions");
  NumericMatrix out(m1.nrow(), m2.ncol());
  NumericVector rm1, cm2;
  for (int i = 0; i < m1.nrow(); ++i) {
    rm1 = m1(i, _);
    for (int j = 0; j < m2.ncol(); ++j) {
      cm2 = m2(_, j);
      out(i, j) = std::inner_product(rm1.begin(), rm1.end(), cm2.begin(), 0.);
    }
  }
  return out;
}')
Let's test it:
A <- matrix(c(1:6),ncol=2)
B <- matrix(c(0:7),nrow=2)
mmult(A,B)
# [,1] [,2] [,3] [,4]
#[1,] 4 14 24 34
#[2,] 5 19 33 47
#[3,] 6 24 42 60
identical(mmult(A,B), A %*% B)
#[1] TRUE
Hope this helps.
As benchmark tests show, the above Rcpp code is slower than R's built-in %*% operator. I assume that, while my Rcpp code can certainly be improved, it will be hard to beat the optimized code behind %*% in terms of performance:
library(microbenchmark)
set.seed(123)
M1 <- matrix(rnorm(1e4),ncol=100)
M2 <- matrix(rnorm(1e4),nrow=100)
identical(M1 %*% M2, mmult(M1,M2))
#[1] TRUE
res <- microbenchmark(
mmult(M1,M2),
M1 %*% M2,
times=1000L)
#> res
#Unit: microseconds
# expr min lq mean median uq max neval cld
# mmult(M1, M2) 1466.855 1484.8535 1584.9509 1494.0655 1517.5105 2699.643 1000 b
# M1 %*% M2 602.053 617.9685 687.6863 621.4335 633.7675 2774.954 1000 a
I would encourage you to try to work out your issues with RcppArmadillo. Using it is as simple as this example, which was also created by calling RcppArmadillo.package.skeleton():
// another simple example: outer product of a vector,
// returning a matrix
//
// [[Rcpp::export]]
arma::mat rcpparma_outerproduct(const arma::colvec & x) {
arma::mat m = x * x.t();
return m;
}
// and the inner product returns a scalar
//
// [[Rcpp::export]]
double rcpparma_innerproduct(const arma::colvec & x) {
double v = arma::as_scalar(x.t() * x);
return v;
}
There is actually more code in the example but this should give you an idea.
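If building a full package feels like overkill while you sort out the lexical errors, a minimal sketch of compiling such a function on the fly with Rcpp::cppFunction() and its depends argument (my own illustration, reusing the outer-product snippet above):
library(Rcpp)
cppFunction('
  arma::mat rcpparma_outerproduct(const arma::colvec & x) {
    arma::mat m = x * x.t();
    return m;
  }', depends = "RcppArmadillo")
rcpparma_outerproduct(c(1, 2, 3))   # returns the 3 x 3 outer-product matrix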
The following approach can also be used:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericMatrix mmult(NumericMatrix m, NumericMatrix v)
{
  Environment base("package:base");
  Function mat_Mult = base["%*%"];
  return mat_Mult(m, v);
}
With this approach, we simply call R's own %*% operator from C++.
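Assuming the function above is saved in a file (the name mmult_base.cpp below is just an illustration) with the #include and // [[Rcpp::export]] lines shown, it can be compiled and checked like this:
library(Rcpp)
sourceCpp("mmult_base.cpp")   # hypothetical file containing the function above
A <- matrix(rnorm(6), ncol = 2)
B <- matrix(rnorm(8), nrow = 2)
all.equal(mmult(A, B), A %*% B)   # should be TRUE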
As a follow up to this question, I've decided to go down the route of Rcpp vs convoluted syntax in R. I think this will provide better readability (and possibly also be faster).
Let's say I have a list of data.frames (which I can easily convert to matrices via as()). Given prior answers, this seems the best approach.
# input data
my_list <- vector("list", length= 10)
set.seed(65L)
for (i in 1:10) {
my_list[[i]] <- data.frame(matrix(rnorm(10000),ncol=10))
# alternatively
# my_list[[i]] <- matrix(rnorm(10000),ncol=10)
}
What's the appropriate way to extract rows from the matrices? The goal is to create a list with each list element containing a list of the nrth row of each of the original list's data.frames. I've tried several different syntaxes and keep getting errors:
#include <Rcpp.h>
using namespace Rcpp;
using namespace std;

List foo(const List& my_list, const int& n_geo) {
  int n_list = my_list.size();
  std::vector<std::vector<double> > list2(n_geo);
  // needed code....
  return wrap(list2);
}
Options I have tried:
for (int i = 0; i < n_list; i++) {
for (int nr = 0; nr < n_geo; nr++) {
list2[nr][i] = my_list[i].row(nr);
// or list2[nr].push_back(my_list[i].row(nr));
// or list2[nr].push_back(as<double>(my_list[i].row(nr)));
// or list2[nr].push_back(as<double>(my_list[i](nr, _)));
}
}
// or:
NumericMatrix a = my_list[1]
...
NumericMatrix j = my_list[10]
for (int nr = 0; nr < n_geo; nr++) {
list2[nr][1] = // as above
}
None of these are working for me. What am I doing wrong? Here are the errors I receive from my above syntax choices.
error: no matching function for call to 'as(Rcpp::Matrix<14>::Row)'
or
error: cannot convert 'Rcpp::Matrix<14>::Row {aka Rcpp::MatrixRow<14>}' to 'double' in assignment
Here is one way to do it:
#include <Rcpp.h>
// x[[nx]][ny,] -> y[[ny]][[nx]]
// [[Rcpp::export]]
Rcpp::List Transform(Rcpp::List x) {
  R_xlen_t nx = x.size(), ny = Rcpp::as<Rcpp::NumericMatrix>(x[0]).nrow();
  Rcpp::List y(ny);
  for (R_xlen_t iy = 0; iy < ny; iy++) {
    Rcpp::List tmp(nx);
    for (R_xlen_t ix = 0; ix < nx; ix++) {
      Rcpp::NumericMatrix mtmp = Rcpp::as<Rcpp::NumericMatrix>(x[ix]);
      tmp[ix] = mtmp.row(iy);
    }
    y[iy] = tmp;
  }
  return y;
}
/*** R
L1 <- lapply(1:10, function(x) {
matrix(rnorm(20), ncol = 5)
})
L2 <- lapply(1:nrow(L1[[1]]), function(x) {
lapply(L1, function(y) unlist(y[x,]))
})
all.equal(L2, Transform(L1))
#[1] TRUE
microbenchmark::microbenchmark(
"R" = lapply(1:nrow(L1[[1]]), function(x) {
lapply(L1, function(y) unlist(y[x,]))
}),
"Cpp" = Transform(L1),
times = 200L)
#Unit: microseconds
#expr min lq mean median uq max neval
# R 254.660 316.627 383.92739 347.547 392.7705 1909.097 200
#Cpp 18.314 26.007 71.58795 30.230 38.8650 945.167 200
*/
I'm not sure how this will scale; I think it is just an inherently inefficient transformation. As per my comment at the top of the source, it seems like you are just doing a sort of coordinate swap -- the nyth row of the nxth element of the input list becomes the nxth element of the nyth element of the output list:
x[[nx]][ny,] -> y[[ny]][[nx]]
To address the errors you were getting, Rcpp::List is a generic object - technically an Rcpp::Vector<VECSXP> - so when you try to do, e.g.
my_list[i].row(nr)
the compiler doesn't know that my_list[i] is a NumericMatrix. Therefore, you have to make an explicit cast with Rcpp::as<>,
Rcpp::NumericMatrix mtmp = Rcpp::as<Rcpp::NumericMatrix>(x[ix]);
tmp[ix] = mtmp.row(iy);
I just used matrix elements in the example data to simplify things. In practice you are probably better off coercing data.frames to matrix objects directly in R than trying to do it in C++; it will be much simpler, and most likely, the coercion is just calling underlying C code, so there isn't really anything to be gained trying to do it otherwise.
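For example, a short sketch tying this back to the my_list object from the question (coercion done once in R, so the C++ function only ever sees matrices):
my_list_mat <- lapply(my_list, as.matrix)   # data.frames -> numeric matrices
res <- Transform(my_list_mat)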
I should also point out that if you are using a Rcpp::List of homogeneous types, you can squeeze out a little more performance with Rcpp::ListOf<type>. This will allow you to skip the Rcpp::as<type> conversions done above:
typedef Rcpp::ListOf<Rcpp::NumericMatrix> MatList;
// [[Rcpp::export]]
Rcpp::List Transform2(MatList x) {
  R_xlen_t nx = x.size(), ny = x[0].nrow();
  Rcpp::List y(ny);
  for (R_xlen_t iy = 0; iy < ny; iy++) {
    Rcpp::List tmp(nx);
    for (R_xlen_t ix = 0; ix < nx; ix++) {
      tmp[ix] = x[ix].row(iy);
    }
    y[iy] = tmp;
  }
  return y;
}
/*** R
L1 <- lapply(1:10, function(x) {
matrix(rnorm(20000), ncol = 100)
})
L2 <- lapply(1:nrow(L1[[1]]), function(x) {
lapply(L1, function(y) unlist(y[x,]))
})
microbenchmark::microbenchmark(
"R" = lapply(1:nrow(L1[[1]]), function(x) {
lapply(L1, function(y) unlist(y[x,]))
}),
"Transform" = Transform(L1),
"Transform2" = Transform2(L1),
times = 200L)
#Unit: microseconds
# expr min lq mean median uq max neval
# R 6049.594 6318.822 7604.871 6707.242 8592.510 64005.190 200
# Transform 928.468 1041.936 3130.959 1166.819 1659.745 71552.284 200
#Transform2 850.912 957.918 1694.329 1061.183 2856.724 4502.065 200
*/
I wish to implement a simple split-apply-combine routine in Rcpp where a dataset (matrix) is split up into groups, and then the groupwise column sums are returned. This is a procedure easily implemented in R, but often takes quite some time. I have managed to implement an Rcpp solution that beats the performance of R, but I wonder if I can further improve upon it. To illustrate, here some code, first for the use of R:
n <- 50000
k <- 50
set.seed(42)
X <- matrix(rnorm(n*k), nrow=n)
g <- rep(1:8, length.out = n)
use.for <- function(mat, ind){
  sums <- matrix(NA, nrow = length(unique(ind)), ncol = ncol(mat))
  for(i in seq_along(unique(ind))){
    sums[i,] <- colSums(mat[ind == i,])
  }
  return(sums)
}
use.apply <- function(mat, ind){
  apply(mat, 2, function(x) tapply(x, ind, sum))
}
use.dt <- function(mat, ind){ # based on Roland's answer
  dt <- as.data.table(mat)
  dt[, cvar := ind]
  dt2 <- dt[, lapply(.SD, sum), by = cvar]
  as.matrix(dt2[, cvar := NULL])
}
It turns out that the for-loop is actually quite fast and is the easiest (for me) to implement with Rcpp. It works by creating a submatrix for each group and then calling colSums on that submatrix. This is implemented using RcppArmadillo:
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp;
using namespace arma;
// [[Rcpp::export]]
arma::mat use_arma(arma::mat X, arma::colvec G){
  arma::colvec gr = arma::unique(G);
  int gr_n = gr.n_rows;
  int ncol = X.n_cols;
  arma::mat out = zeros(gr_n, ncol);
  for(int g = 0; g < gr_n; g++){
    int g_id = gr(g);
    arma::uvec subvec = find(G == g_id);
    arma::mat submat = X.rows(subvec);
    arma::rowvec res = sum(submat, 0);
    out.row(g) = res;
  }
  return out;
}
However, based on answers to this question, I learned that creating copies is expensive in C++ (just as in R), but that loops are not as bad as they are in R. Since the arma-solution relies on creating matrixes (submat in the code) for each group, my guess is that avoiding this will speed up the process even further. Hence, here a second implementation based on Rcpp only using a loop:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericMatrix use_Rcpp(NumericMatrix X, IntegerVector G){
  IntegerVector gr = unique(G);
  std::sort(gr.begin(), gr.end());
  int gr_n = gr.size();
  int nrow = X.nrow(), ncol = X.ncol();
  NumericMatrix out(gr_n, ncol);
  for(int g = 0; g < gr_n; g++){
    int g_id = gr(g);
    for (int j = 0; j < ncol; j++) {
      double total = 0;
      for (int i = 0; i < nrow; i++) {
        if (G(i) != g_id) continue; // not sure how else to do this
        total += X(i, j);
      }
      out(g, j) = total;
    }
  }
  return out;
}
Benchmarking these solutions, including the use.dt version provided by #Roland (my previous version discriminated unfairly against data.table), as well as the dplyr solution suggested by #beginneR, yields the following:
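(The use.dplyr function itself is not shown in the question; a plausible reconstruction of mine, using current dplyr verbs, might look like this:)
library(dplyr)
use.dplyr <- function(mat, ind){
  df <- as.data.frame(mat)
  df$cvar <- ind
  out <- df %>% group_by(cvar) %>% summarise(across(everything(), sum))
  as.matrix(out[, -1])
}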
library(rbenchmark)
benchmark(use.for(X, g), use.apply(X, g), use.dt(X, g), use.dplyr(X, g), use_arma(X, g), use_Rcpp(X, g),
          columns = c("test", "replications", "elapsed", "relative"), order = "relative", replications = 1000)
test replications elapsed relative
# 5 use_arma(X, g) 1000 29.65 1.000
# 4 use.dplyr(X, g) 1000 42.05 1.418
# 3 use.dt(X, g) 1000 56.94 1.920
# 1 use.for(X, g) 1000 60.97 2.056
# 6 use_Rcpp(X, g) 1000 113.96 3.844
# 2 use.apply(X, g) 1000 301.14 10.156
My intuition (that use_Rcpp would beat use_arma) did not turn out to be right. Having said that, I guess the line if (G(i) != g_id) continue; in my use_Rcpp function slows everything down. I am happy to learn about alternatives for setting this up.
I am happy that I have achieved the same task in half the time it takes R to do it, but maybe the many 'Rcpp is much faster than R' examples have messed with my expectations, and I am wondering if I can speed this up even more. Does anyone have an idea? I also welcome any programming / coding comments in general, since I am relatively new to Rcpp and C++.
No, it's not the for loop that you need to beat:
library(data.table)
#it doesn't seem fair to include calls to library in benchmarks
#you need to do that only once in your session after all
use.dt2 <- function(mat, ind){
  dt <- as.data.table(mat)
  dt[, cvar := ind]
  dt2 <- dt[, lapply(.SD, sum), by = cvar]
  as.matrix(dt2[, cvar := NULL])
}
all.equal(use.dt(X,g), use.dt2(X,g))
#TRUE
benchmark(use.for(X,g), use.apply(X,g), use.dt(X,g), use.dt2(X,g),
columns = c("test", "replications", "elapsed", "relative"),
order = "relative", replications = 50)
# test replications elapsed relative
#4 use.dt2(X, g) 50 3.12 1.000
#1 use.for(X, g) 50 4.67 1.497
#3 use.dt(X, g) 50 7.53 2.413
#2 use.apply(X, g) 50 17.46 5.596
Maybe you're looking for (the oddly named) rowsum
library(microbenchmark)
use.rowsum = rowsum
and
> all.equal(use.for(X, g), use.rowsum(X, g), check.attributes=FALSE)
[1] TRUE
> microbenchmark(use.for(X, g), use.rowsum(X, g), times=5)
Unit: milliseconds
expr min lq median uq max neval
use.for(X, g) 126.92876 127.19027 127.51403 127.64082 128.06579 5
use.rowsum(X, g) 10.56727 10.93942 11.01106 11.38697 11.38918 5
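Note that rowsum() orders its result by sort(unique(group)) and attaches the group values as row names, which is why check.attributes = FALSE is needed in the comparison above. A tiny illustration (my own example, not part of the benchmark):
rowsum(matrix(1:8, ncol = 2), group = c(2, 1, 2, 1))
#   [,1] [,2]
# 1    6   14
# 2    4   12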
Here are my critiques, with in-line comments, of your Rcpp solution.
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericMatrix use_Rcpp(NumericMatrix X, IntegerVector G){

  // Rcpp has a sort_unique() function, which combines the
  // sort and unique steps into one, and is often faster than
  // performing the operations separately. Try `sort_unique(G)`
  IntegerVector gr = unique(G);
  std::sort(gr.begin(), gr.end());

  int gr_n = gr.size();
  int nrow = X.nrow(), ncol = X.ncol();

  // This constructor zero-initializes memory (kind of like
  // making a copy). You should use:
  //
  //     NumericMatrix out = no_init(gr_n, ncol)
  //
  // to ensure the memory is allocated, but not zeroed.
  //
  // EDIT: We don't have no_init for matrices right now, but you can hack
  // around that with:
  //
  //     NumericMatrix out(Rf_allocMatrix(REALSXP, gr_n, ncol));
  NumericMatrix out(gr_n, ncol);

  for(int g = 0; g < gr_n; g++){

    // subsetting with operator[] is cheaper, so use gr[g] when
    // you can be sure bounds checks are not necessary
    int g_id = gr(g);

    for (int j = 0; j < ncol; j++) {
      double total = 0;
      for (int i = 0; i < nrow; i++) {

        // similarly here
        if (G(i) != g_id) continue; // not sure how else to do this
        total += X(i, j);
      }
      // IIUC, you are filling the matrix row-wise. This is slower, as
      // R matrices are stored in column-major format, and so filling
      // matrices column-wise will be faster.
      out(g, j) = total;
    }
  }
  return out;
}
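Putting those comments together, a revised version might look like the sketch below (my own combination of the suggestions above: sort_unique(), the Rf_allocMatrix() workaround, operator[], and a column-wise fill; the name use_Rcpp2 is mine and I have not benchmarked it here):
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericMatrix use_Rcpp2(NumericMatrix X, IntegerVector G){
  IntegerVector gr = sort_unique(G);                       // sort + unique in one step
  int gr_n = gr.size();
  int nrow = X.nrow(), ncol = X.ncol();
  NumericMatrix out(Rf_allocMatrix(REALSXP, gr_n, ncol));  // allocated but not zeroed
  for (int j = 0; j < ncol; j++) {                         // fill column-wise
    for (int g = 0; g < gr_n; g++) {
      int g_id = gr[g];                                    // operator[]: no bounds check
      double total = 0;
      for (int i = 0; i < nrow; i++) {
        if (G[i] == g_id) total += X(i, j);
      }
      out(g, j) = total;                                   // every cell gets assigned
    }
  }
  return out;
}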
I am new to C++ programming (using Rcpp for seamless integration into R), and I would appreciate some advice on how to speed up some calculations.
Consider the following example:
testmat <- matrix(1:9, nrow=3)
testvec <- 1:3
testmat*testvec
# [,1] [,2] [,3]
#[1,] 1 4 7
#[2,] 4 10 16
#[3,] 9 18 27
Here, R recycled testvec so that, loosely speaking, testvec "became" a matrix of the same dimensions as testmat for the purpose of this multiplication, and the Hadamard product is returned. I wish to implement this behavior using Rcpp: I want each element of the i-th row of testmat to be multiplied by the i-th element of testvec. My benchmarks tell me that my implementations are extremely slow, and I would appreciate advice on how to speed them up. Here is my code:
First, using Eigen:
#include <RcppEigen.h>
// [[Rcpp::depends(RcppEigen)]]
using namespace Rcpp;
using namespace Eigen;
// [[Rcpp::export]]
NumericMatrix E_matvecprod_elwise(NumericMatrix Xs, NumericVector ys){
  Map<MatrixXd> X(as<Map<MatrixXd> >(Xs));
  Map<VectorXd> y(as<Map<VectorXd> >(ys));
  int k = X.cols();
  int n = X.rows();
  MatrixXd Y(n, k);
  // here, I emulate R's recycling. I did not find an easier way of doing this. Any hint appreciated.
  for(int i = 0; i < k; ++i) {
    Y.col(i) = y;
  }
  MatrixXd out = X.cwiseProduct(Y);
  return wrap(out);
}
Here is my implementation using Armadillo (adjusted to follow Dirk's example; see the answer below):
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp;
using namespace arma;
// [[Rcpp::export]]
arma::mat A_matvecprod_elwise(const arma::mat & X, const arma::vec & y){
  int k = X.n_cols;
  arma::mat Y = repmat(y, 1, k);
  arma::mat out = X % Y;
  return out;
}
Benchmarking these solutions (plain R, Eigen, and Armadillo) shows that both the Eigen and the Armadillo versions are about 2 times slower than R. Is there a way to speed these computations up, or to get at least as fast as R? Are there more elegant ways of setting this up? Any advice is appreciated and welcome. (I also encourage tangential remarks about programming style in general, as I am new to Rcpp / C++.)
Here are some reproducible benchmarks:
# for comparison, define R function:
R_matvecprod_elwise <- function(mat, vec) mat*vec
n <- 50000
k <- 50
X <- matrix(rnorm(n*k), nrow=n)
e <- rnorm(n)
library(rbenchmark)
benchmark(R_matvecprod_elwise(X, e), A_matvecprod_elwise(X, e), E_matvecprod_elwise(X, e),
          columns = c("test", "replications", "elapsed", "relative"), order = "relative", replications = 1000)
This yields
test replications elapsed relative
1 R_matvecprod_elwise(X, e) 1000 10.89 1.000
2 A_matvecprod_elwise(X, e) 1000 26.87 2.467
3 E_matvecprod_elwise(X, e) 1000 27.73 2.546
As you can see, my Rcpp-solutions perform quite miserably. Any way to do it better?
If you want to speed up your calculations you will have to be a little careful about not making copies. This usually means sacrificing readability. Here is a version which makes no copies and modifies the matrix X in place.
// [[Rcpp::export]]
NumericMatrix Rcpp_matvecprod_elwise(NumericMatrix & X, NumericVector & y){
  unsigned int ncol = X.ncol();
  unsigned int nrow = X.nrow();
  int counter = 0;
  for (unsigned int j = 0; j < ncol; j++) {
    for (unsigned int i = 0; i < nrow; i++) {
      X[counter++] *= y[i];
    }
  }
  return X;
}
Here is what I get on my machine
> library(microbenchmark)
> microbenchmark(R=R_matvecprod_elwise(X, e), Arma=A_matvecprod_elwise(X, e), Rcpp=Rcpp_matvecprod_elwise(X, e))
Unit: milliseconds
expr min lq median uq max neval
R 8.262845 9.386214 10.542599 11.53498 12.77650 100
Arma 18.852685 19.872929 22.782958 26.35522 83.93213 100
Rcpp 6.391219 6.640780 6.940111 7.32773 7.72021 100
> all.equal(R_matvecprod_elwise(X, e), Rcpp_matvecprod_elwise(X, e))
[1] TRUE
For starters, I'd write the Armadillo version (interface) as
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp;
using namespace arma;
// [[Rcpp::export]]
arma::mat A_matvecprod_elwise(const arma::mat & X, const arma::vec & y){
  int k = X.n_cols;
  arma::mat Y = repmat(y, 1, k);
  arma::mat out = X % Y;
  return out;
}
as you're doing an additional conversion in and out (though the wrap() gets added by the glue code). The const & is notional (as you learned via your last question, a SEXP is a pointer object that is lightweight to copy) but better style.
You didn't show your benchmark results, so I can't comment on the effect of matrix size etc. I suspect you might get better answers on rcpp-devel than here. Your pick.
Edit: If you really want something cheap and fast, I would just do this:
// [[Rcpp::export]]
mat cheapHadamard(mat X, vec y) {
  // should check the row dimension of X against the length of y here
  for (unsigned int i = 0; i < y.n_elem; i++) X.row(i) *= y(i);
  return X;
}
which allocates no new memory and will hence be faster, and probably be competitive with R.
Test output:
R> cheapHadamard(testmat, testvec)
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 4 10 16
[3,] 9 18 27
R>
My apologies for giving an essentially C answer to a C++ question, but as has been suggested the solution generally lies in the efficient BLAS implementation of things. Unfortunately, BLAS itself lacks a Hadamard multiply so you would have to implement your own.
Here is a pure Rcpp implementation that basically calls C code. If you want to make it proper C++, the worker function can be templated but for most applications using R that isn't a concern. Note that this also operates "in-place", which means that it modifies X without copying it.
// it may be necessary on your system to uncomment one of the following
//#define restrict __restrict__ // gcc/clang
//#define restrict __restrict // MS Visual Studio
//#define restrict // remove it completely
#include <Rcpp.h>
using namespace Rcpp;
#include <cstdlib>
using std::size_t;
void hadamardMultiplyMatrixByVectorInPlace(double* restrict x,
                                           size_t numRows, size_t numCols,
                                           const double* restrict y)
{
  if (numRows == 0 || numCols == 0) return;

  for (size_t col = 0; col < numCols; ++col) {
    double* restrict x_col = x + col * numRows;

    for (size_t row = 0; row < numRows; ++row) {
      x_col[row] *= y[row];
    }
  }
}

// [[Rcpp::export]]
NumericMatrix C_matvecprod_elwise_inplace(NumericMatrix& X,
                                          const NumericVector& y)
{
  // do some dimension checking here

  hadamardMultiplyMatrixByVectorInPlace(X.begin(), X.nrow(), X.ncol(),
                                        y.begin());
  return X;
}
Here is a version that makes a copy first. I don't know Rcpp well enough to do this natively and not incur a substantial performance hit. Creating and returning a NumericMatrix(numRows, numCols) on the stack causes the code to run about 30% slower.
#include <Rcpp.h>
using namespace Rcpp;
#include <cstdlib>
using std::size_t;
#include <R.h>
#include <Rdefines.h>
void hadamardMultiplyMatrixByVector(const double* restrict x,
                                    size_t numRows, size_t numCols,
                                    const double* restrict y,
                                    double* restrict z)
{
  if (numRows == 0 || numCols == 0) return;

  for (size_t col = 0; col < numCols; ++col) {
    const double* restrict x_col = x + col * numRows;
    double* restrict z_col = z + col * numRows;

    for (size_t row = 0; row < numRows; ++row) {
      z_col[row] = x_col[row] * y[row];
    }
  }
}

// [[Rcpp::export]]
SEXP C_matvecprod_elwise(const NumericMatrix& X, const NumericVector& y)
{
  size_t numRows = X.nrow();
  size_t numCols = X.ncol();

  // do some dimension checking here

  SEXP Z = PROTECT(Rf_allocVector(REALSXP, (int) (numRows * numCols)));
  SEXP dimsExpr = PROTECT(Rf_allocVector(INTSXP, 2));
  int* dims = INTEGER(dimsExpr);
  dims[0] = (int) numRows;
  dims[1] = (int) numCols;
  Rf_setAttrib(Z, R_DimSymbol, dimsExpr);

  hadamardMultiplyMatrixByVector(X.begin(), X.nrow(), X.ncol(), y.begin(), REAL(Z));

  UNPROTECT(2);
  return Z;
}
If you're curious about the usage of restrict, it means that you as the programmer enter a contract with the compiler that different bits of memory do not overlap, allowing the compiler to make certain optimizations. The restrict keyword is part of C99 but not of standard C++; however, many compilers support extensions such as __restrict__ or __restrict for C++ (hence the #define options at the top of the code).
Some R code to benchmark:
require(rbenchmark)
n <- 50000
k <- 50
X <- matrix(rnorm(n*k), nrow=n)
e <- rnorm(n)
R_matvecprod_elwise <- function(mat, vec) mat*vec
all.equal(R_matvecprod_elwise(X, e), C_matvecprod_elwise(X, e))
X_dup <- X + 0
all.equal(R_matvecprod_elwise(X, e), C_matvecprod_elwise_inplace(X_dup, e))
benchmark(R_matvecprod_elwise(X, e),
C_matvecprod_elwise(X, e),
C_matvecprod_elwise_inplace(X, e),
columns = c("test", "replications", "elapsed", "relative"),
order = "relative", replications = 1000)
And the results:
test replications elapsed relative
3 C_matvecprod_elwise_inplace(X, e) 1000 3.317 1.000
2 C_matvecprod_elwise(X, e) 1000 7.174 2.163
1 R_matvecprod_elwise(X, e) 1000 10.670 3.217
Finally, the in-place version may actually be faster, as the repeated multiplications into the same matrix can cause some overflow mayhem.
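One way to keep the in-place timing honest (my own suggestion, reusing the X + 0 copy trick from the correctness check above; note that the copy itself then gets timed too) is to hand the function a fresh copy in each replication:
library(microbenchmark)
microbenchmark(
  inplace_fresh = C_matvecprod_elwise_inplace(X + 0, e),  # X + 0 forces a new copy each call
  times = 100L)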
Edit:
Removed the loop unrolling, as it provided no benefit and was otherwise distracting.
I need to compute a similarity measure call the Dice coefficient over large matrices (600,000 x 500) of binary vectors in R. For speed I use C / Rcpp. The function runs great but as I am not a computer scientist by background I would like to know if it could run faster. This code is suitable for parallelisation but I have no experience parallelising C code.
The Dice coefficient is a simple measure of similarity / dissimilarity (depending on how you take it). It is intended to compare asymmetric binary vectors, meaning that one of the combinations (usually 0-0) is not important and that agreements (1-1 pairs) carry more weight than disagreements (1-0 or 0-1 pairs). Imagine the following contingency table:
      1   0
  1   a   b
  0   c   d
The Dice coefficient is: (2*a) / (2*a + b + c)
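As a quick sanity check of the formula, here is a toy example of mine (not taken from the question's data):
u <- c(1, 1, 0, 1, 0)
v <- c(1, 0, 0, 1, 1)
a <- sum(u == 1 & v == 1)    # 1-1 agreements: 2
b <- sum(u == 1 & v == 0)    # 1-0 disagreements: 1
c <- sum(u == 0 & v == 1)    # 0-1 disagreements: 1
2 * a / (2 * a + b + c)      # (2*2) / (2*2 + 1 + 1) = 0.667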
Here is my Rcpp implementation:
library(Rcpp)
cppFunction('
NumericMatrix dice(NumericMatrix binaryMat){
  int nrows = binaryMat.nrow(), ncols = binaryMat.ncol();
  NumericMatrix results(ncols, ncols);
  for(int i = 0; i < ncols - 1; i++){   // columns fixed
    for(int j = i + 1; j < ncols; j++){ // columns moving
      double a = 0;
      double d = 0;
      for (int l = 0; l < nrows; l++) {
        if(binaryMat(l, i) > 0){
          if(binaryMat(l, j) > 0){
            a++;
          }
        } else {
          if(binaryMat(l, j) < 1){
            d++;
          }
        }
      }
      // compute Dice coefficient
      double abc = nrows - d;
      double bc = abc - a;
      results(j, i) = (2*a) / (2*a + bc);
    }
  }
  return wrap(results);
}
')
And here is a running example:
x <- rbinom(1:200000, 1, 0.5)
X <- matrix(x, nrow = 200, ncol = 1000)
system.time(dice(X))
user system elapsed
0.814 0.000 0.814
The solution proposed by Roland was not entirely satisfying for my use case, so based on the source code of the arules package I implemented a much faster version. The code in arules relies on an algorithm from Leisch (2005) using the tcrossprod() function in R.
First, I wrote an Rcpp / RcppEigen version of crossprod that is 2-3 times faster. This is based on the example code in the RcppEigen vignette.
library(Rcpp)
library(RcppEigen)
library(inline)
crossprodCpp <- '
using Eigen::Map;
using Eigen::MatrixXi;
using Eigen::Lower;
const Map<MatrixXi> A(as<Map<MatrixXi> >(AA));
const int m(A.rows()), n(A.cols());
MatrixXi AtA(MatrixXi(n, n).setZero().selfadjointView<Lower>().rankUpdate(A.adjoint()));
return wrap(AtA);
'
fcprd <- cxxfunction(signature(AA = "matrix"), crossprodCpp, "RcppEigen")
Then I wrote a small R function to compute the Dice coefficient.
diceR <- function(X){
  a <- fcprd(X)
  nx <- ncol(X)
  rsx <- colSums(X)
  c <- matrix(rsx, nrow = nx, ncol = nx) - a
  # b <- matrix(rsx, nrow = nx, ncol = nx, byrow = TRUE) - a
  b <- t(c)
  m <- (2 * a) / (2 * a + b + c)
  return(m)
}
This new function is ~8 times faster than the old one and ~3 times faster than the one in arules.
m <- microbenchmark(dice(X), diceR(X), dissimilarity(t(X), method="dice"), times=100)
m
# Unit: milliseconds
# expr min lq median uq max neval
# dice(X) 791.34558 809.8396 812.19480 814.6735 910.1635 100
# diceR(X) 62.98642 76.5510 92.02528 159.2557 507.1662 100
# dissimilarity(t(X), method = "dice") 264.07997 342.0484 352.59870 357.4632 520.0492 100
I cannot run your function at work, but is the result the same as this?
library(arules)
plot(dissimilarity(X,method="dice"))
system.time(dissimilarity(X,method="dice"))
#user system elapsed
#0.04 0.00 0.04