RcppArmadillo: Lambda expression with each_slice (C++)

I have a three dimensional array with positive definite matrices and I would like to obtain an array of the same size with the Cholesky factors of all matrices. I am using the Armadillo library and the cube type, for which there is the convenient function each_slice which I'm trying to use. But I am not getting the lambda expression to work correctly, so hopefully someone can help me and point out my mistake.
Here is a minimal example:
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
// [[Rcpp::export]]
arma::cube chol_array(arma::cube Sigma) {
arma::cube Sigma_chol = Sigma;
Sigma_chol.each_slice([](arma::mat X) {return arma::chol(X);});
return Sigma_chol;
}
// [[Rcpp::export]]
arma::cube chol_array2(arma::cube Sigma) {
arma::cube Sigma_chol(size(Sigma));
for (arma::uword i = 0; i < Sigma.n_slices; i++) {
Sigma_chol.slice(i) = arma::chol(Sigma.slice(i));
}
return Sigma_chol;
}
/*** R
Sigma <- array(crossprod(matrix(rnorm(9), 3, 3)), dim = c(3, 3, 2))
chol_array(Sigma)
chol_array2(Sigma)
*/
The function chol_array2 does the job, but chol_array just returns the original matrices. What am I missing?

The issue here is the lack of a reference in the .each_slice() call. Armadillo's lambda support requires the function to take a reference and update the object in place; a returned value is ignored. In particular, the documentation states:
For form 3:
apply the given lambda_function to each slice; the function must accept a reference to a Mat object with the same element type as the underlying cube
So, change:
Sigma_chol.each_slice([](arma::mat X) {return arma::chol(X);});
to:
Sigma_chol.each_slice([](arma::mat& X) {X = arma::chol(X);});
Fixed Code
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
// Enable lambda expressions....
// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::export]]
arma::cube chol_array(arma::cube Sigma) {
arma::cube Sigma_chol = Sigma;
// NOTE: the '&' and saving _back_ into the object are crucial
Sigma_chol.each_slice( [](arma::mat& X) { X = arma::chol(X); } );
return Sigma_chol;
}
Test code
set.seed(1113)
Sigma = array(crossprod(matrix(rnorm(9), 3, 3)), dim = c(3, 3, 2))
all.equal(chol_array(Sigma), chol_array2(Sigma))
# [1] TRUE
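A side note: since the exported function typically receives its own copy of the data when the R array is converted to an arma::cube, the extra Sigma_chol copy can be skipped by transforming the argument in place. A minimal sketch of that variant (the name chol_array3 is just for illustration):
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::export]]
arma::cube chol_array3(arma::cube Sigma) {
  // Overwrite each slice of the (already copied) cube with its Cholesky factor
  Sigma.each_slice( [](arma::mat& X) { X = arma::chol(X); } );
  return Sigma;
}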

Armadillo - no member named i in matrix expression

According to Armadillo docs:
.i()
Member function of any matrix expression
Provides an inverse of the matrix expression
...
However, when I try to compile this snippet:
#include <armadillo>
#include <iostream>
arma::sp_mat linReg(arma::sp_mat X, arma::sp_mat Y) {
return (X.t() * X).i() * X.t() * Y;
}
int main() {
arma::sp_mat X = arma::sprandu(1000, 10, 0.3);
arma::sp_mat y = arma::sprandu(1000, 10, 0.3);
std::cout << linReg(X,y).t() << std::endl;
}
I get the following error
lreg.cpp: In function ‘arma::sp_mat linReg(arma::sp_mat, arma::sp_mat)’:
lreg.cpp:6:24: error: ‘arma::enable_if2<true, const arma::SpGlue<arma::SpOp<arma::SpMat<double>, arma::spop_htrans>, arma::SpMat<double>, arma::spglue_times> >::result’ {aka ‘const class arma::SpGlue<arma::SpOp<arma::SpMat<double>, arma::spop_htrans>, arma::SpMat<double>, arma::spglue_times>’} has no member named ‘i’
    6 | return (X.t() * X).i() * X.t() * Y;
      |
I already tried with mat and it works fine. Any clue why it's not working with sparse matrices? And how can we calculate the inverse of a sparse matrix?
Taking the inverse of a sparse matrix is often not desired, as you end up with a dense matrix; the explicit inverse is usually not required anyway.
Instead of taking the inverse here, maybe treat the problem as solving a system of linear equations. Then reformulate using solve() or spsolve(). Below is an untested example demonstrating the general approach:
arma::mat linReg(const arma::sp_mat& X, const arma::sp_mat& Y) {
arma::sp_mat A = X.t() * X;
arma::mat B = arma::mat(X.t() * Y); // convert to dense matrix
arma::mat result;
bool ok = arma::spsolve(result, A, B);
if(ok == false) {
// handle failure here
}
return result;
}
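To sanity-check the approach, the sparse-based solve can be compared against an all-dense reference. A small, untested sketch (it assumes Armadillo was built with SuperLU support, which spsolve's default solver uses; alternatively the "lapack" solver can be requested as a fourth argument to spsolve):
#include <armadillo>
#include <iostream>

int main() {
  arma::sp_mat X = arma::sprandu(1000, 10, 0.3);
  arma::sp_mat Y = arma::sprandu(1000, 10, 0.3);

  // Sparse-based solve of (X'X) B = X'Y
  arma::sp_mat A = X.t() * X;
  arma::mat B = arma::mat(X.t() * Y);
  arma::mat via_spsolve;
  bool ok = arma::spsolve(via_spsolve, A, B);

  // All-dense reference solve
  arma::mat via_solve = arma::solve(arma::mat(A), B);

  if(ok) {
    std::cout << arma::norm(via_spsolve - via_solve, "fro") << std::endl; // should be ~0
  }
  return 0;
}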

Eigen LLT (Cholesky) fails, while SVD works

I'm trying to reproduce some numpy code on Gaussian Processes (from here) using Eigen. Basically, I need to sample from a multivariate normal distribution:
samples = np.random.multivariate_normal(mu.ravel(), cov, 1)
The mean vector is currently zero, while the covariance matrix is a square matrix generated via the isotropic squared exponential kernel:
sqdist = np.sum(X1**2, 1).reshape(-1, 1) + np.sum(X2**2, 1) - 2 * np.dot(X1, X2.T)
return sigma_f**2 * np.exp(-0.5 / l**2 * sqdist)
I can generate the covariance matrix just fine for now (it can probably be cleaned but for now it's a POC):
Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
auto dists = ((- 2.0 * (x1 * x2.transpose())).colwise()
+ x1.rowwise().squaredNorm()).rowwise()
+ x2.rowwise().squaredNorm().transpose();
return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}
However, my problems start when I need to sample the multivariate normal.
I've tried using the solution proposed in this accepted answer; however, the decomposition only works with covariance matrices of size up to 30x30; more than that and LLT fails to decompose the matrix. The alternative version provided in the answer also does not work, and creates NaNs. I tried LDLT as well but it also breaks (D contains negative values, so sqrt gives NaN).
Then, I got curious, and I looked into how numpy does this. Turns out the numpy implementation uses SVD decomposition (with LAPACK), rather than Cholesky. So I tried copying their implementation:
// SVD on the covariance matrix generated via kernel function
Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
normTransform = (-solver.matrixV().transpose()).array().colwise() * solver.singularValues().array().sqrt();
// Generate gaussian samples, "randN" is from the multivariate StackOverflow answer
Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();
The various minuses are me trying to exactly reproduce numpy's results.
In any case, this works perfectly fine, even with large dimensions. I was wondering why Eigen is not able to do LLT, but SVD works. The covariance matrix I use is the same. Is there something I can do to simply use LLT?
EDIT: Here is my full example:
#include <iostream>
#include <random>
#include <Eigen/Cholesky>
#include <Eigen/SVD>
#include <Eigen/Eigenvalues>
using Matrix2D = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor | Eigen::AutoAlign>;
using Vector = Eigen::Matrix<double, Eigen::Dynamic, 1>;
/*
We need a functor that can pretend it's const,
but to be a good random number generator
it needs mutable state.
*/
namespace Eigen {
namespace internal {
template<typename Scalar>
struct scalar_normal_dist_op
{
static std::mt19937 rng; // The uniform pseudo-random algorithm
mutable std::normal_distribution<Scalar> norm; // The gaussian combinator
EIGEN_EMPTY_STRUCT_CTOR(scalar_normal_dist_op)
template<typename Index>
inline const Scalar operator() (Index, Index = 0) const { return norm(rng); }
};
template<typename Scalar> std::mt19937 scalar_normal_dist_op<Scalar>::rng;
template<typename Scalar>
struct functor_traits<scalar_normal_dist_op<Scalar> >
{ enum { Cost = 50 * NumTraits<Scalar>::MulCost, PacketAccess = false, IsRepeatable = false }; };
} // end namespace internal
} // end namespace Eigen
Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
auto dists = ((- 2.0 * (x1 * x2.transpose())).colwise() + x1.rowwise().squaredNorm()).rowwise() + x2.rowwise().squaredNorm().transpose();
return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}
int main() {
unsigned size = 50;
unsigned seed = 1;
Matrix2D X = Vector::LinSpaced(size, -5.0, 4.8);
Eigen::internal::scalar_normal_dist_op<double> randN; // Gaussian functor
Eigen::internal::scalar_normal_dist_op<double>::rng.seed(seed); // Seed the rng
Vector means = Vector::Zero(X.rows());
auto covs = kernel(X, X);
Eigen::LLT<Matrix2D> cholSolver(covs);
// We can only use the cholesky decomposition if
// the covariance matrix is symmetric, pos-definite.
// But a covariance matrix might be pos-semi-definite.
// In that case, we'll go to an EigenSolver
Eigen::MatrixXd normTransform;
if (cholSolver.info()==Eigen::Success) {
std::cout << "Used LLT\n";
// Use cholesky solver
normTransform = cholSolver.matrixL();
} else {
std::cout << "Broken\n";
Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
normTransform = (-solver.matrixV().transpose()).array().colwise() * solver.singularValues().array().sqrt();
}
Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();
return 0;
}

Calling numDeriv::hessian() with a multiple-parameter objective function in Rcpp

My aim is to call the hessian() function from the numDeriv R package from a cpp file (using Rcpp).
A toy example:
I want to calculate a hessian of a one-dimensional function x^n at the point x=1 with parameter n=3.
R code:
H = call_1D_hessian_in_C(K=1)
print(H)
Cpp code:
#include <Rcpp.h>
double one_dimensional(double X, double N){
return pow(X,N);
}
// [[Rcpp::export]]
double call_1D_hessian_in_C(double K) {
Rcpp::Environment numDeriv("package:numDeriv");
Rcpp::Function hessian = numDeriv["hessian"];
double param = 3;
Rcpp::List hessian_results =
hessian(
Rcpp::_["func"] = Rcpp::InternalFunction(one_dimensional),
Rcpp::_["x"] = 1.0,
Rcpp::_["N"] = param
);
return hessian_results[0];
}
This works fine and I indeed get "6" at the output.
However, my true goal is to calculate Hessians of K-dimensional functions, i.e. K != 1. I try the following:
H = call_KD_hessian_in_C(K=2)
print(H)
And in Cpp:
NumericVector k_dimensional(NumericVector X, double N){
return pow(X,N);
}
// [[Rcpp::export]]
double call_KD_hessian_in_C(double K) {
Rcpp::Environment numDeriv("package:numDeriv");
Rcpp::Function hessian = numDeriv["hessian"];
double param = 3;
Rcpp::NumericVector x = rep(1.0,K);
Rcpp::List hessian_results =
hessian(
Rcpp::_["func"] = Rcpp::InternalFunction(k_dimensional),
Rcpp::_["x"] = x,
Rcpp::_["N"] = param
);
return hessian_results[0];
}
But now I get "invalid pointer" errors. I am not sure how to supply the hessian() call with a C++ function that takes a vector of parameters at which to evaluate the partial derivatives...
A couple of quick notes:
Try the implementation in R first and then move it to C++. This provides a reference point and makes sure that everything works as intended.
Search paths and names matter: explicitly load the numDeriv package before compiling, and respect capitalization (X vs. x).
Ensure output types are accurate: from ?numDeriv::hessian, the output is an N x N matrix, so use Rcpp::NumericMatrix instead of Rcpp::List.
Implementing in R
Coding the example and running it in pure R would give:
k = 2
k_dimensional = function(x, N) {
x ^ N
}
numDeriv::hessian(k_dimensional, x = rep(1, k), N = 3)
Error in hessian.default(k_dimensional, x = rep(1, k), N = 3) :
Richardson method for hessian assumes a scalar valued function.
So, immediately, this means that the k_dimensional() function is missing a reduction down to a scalar (i.e. a single value).
Environment Runtime Error with the C++ Variant
After compiling the original code, a runtime error appears when the function is called. For example, we have:
Rcpp::sourceCpp("path/to/call_KD_hessian_in_C.cpp")
call_KD_hessian_in_C(K = 2)
This provides the error of:
Error in call_KD_hessian_in_C(2) :
Cannot convert object to an environment: [type=character; target=ENVSXP].
As we are using an R function found in a package not loaded by default, we must explicitly load it via library() or require() before calling the function.
Therefore, the process to avoid an environment issue should be:
# Compile the routine
Rcpp::sourceCpp("path/to/call_KD_hessian_in_C.cpp")
# Load numDeriv to ensure it is on the search path
library("numDeriv")
# Call function
call_KD_hessian_in_C(2)
Cleaned Up C++ Implementation
From prior discussion, note that we've:
Changed the function passed to hessian() to return a scalar (a double) instead of a vector of values (a NumericVector).
Ensured that before the function call the numDeriv R package is loaded.
Changed the return type expected from the hessian() function from Rcpp::List to Rcpp::NumericMatrix.
This results in the following C++ code:
#include <Rcpp.h>
double k_dimensional_cpp(Rcpp::NumericVector x, double N){
// ^^ Change return type from NumericVector to double
// Speed up the process by writing the _C++_ loop
// instead of relying on Rcpp sugar.
double total = 0;
for(int i = 0 ; i < x.size(); ++i) {
total += std::pow(x[i], N);
}
// Return the accumulated total
return total;
}
// [[Rcpp::export]]
Rcpp::NumericMatrix call_KD_hessian_in_C(double K) {
// Ensure that numDeriv package is loaded prior to calling this function
Rcpp::Environment numDeriv("package:numDeriv");
Rcpp::Function hessian = numDeriv["hessian"];
double param = 3;
Rcpp::NumericVector x = Rcpp::rep(1.0, K);
// Switched from Rcpp::List to Rcpp::NumericMatrix
Rcpp::NumericMatrix hessian_results =
hessian(
Rcpp::_["func"] = Rcpp::InternalFunction(k_dimensional_cpp),
Rcpp::_["x"] = x, // use lower case x to match function signature.
Rcpp::_["N"] = param
);
// Return the calculated hessian
return hessian_results;
}
Testing the routine gives:
# Ensure numDeriv is on search path
library("numDeriv")
# Call function
call_KD_hessian_in_C(K = 2)
# [,1] [,2]
# [1,] 6.000000e+00 3.162714e-12
# [2,] 3.162714e-12 6.000000e+00
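As a sanity check, for f(x) = sum(x_i^N) the analytic Hessian is diagonal with entries N * (N - 1) * x_i^(N - 2); at x = (1, 1) with N = 3 this gives diag(6, 6), which matches the output up to numerical noise.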

Rcppparallel bootstrap

I presume, or rather hope, that I have a single fixable problem, or perhaps many smaller ones, in which case I should give up. Either way I am relatively new to Rcpp and extremely uninformed on parallel computation, and can't find a solution online.
The problem is typically a 'fatal error' in R, or R gets stuck in a loop: something like 5 minutes for 10 iterations, when the non-parallel version will do 5K iterations in roughly the same time.
As this algorithm fits into a much larger project, I call several other functions; these are all in Rcpp, and I rewrote them with only 'arma' objects, as that seemed to help other people here. I also ran the optimization part with a 'heat map' optimizer I wrote in Rcpp, again exclusively in 'arma', without improvement. I should also point out that it returned an 'arma::vec'.
// [[Rcpp::depends("RcppArmadillo")]]
// [[Rcpp::depends("RcppParallel")]]
#include <RcppArmadillo.h>
#include <RcppParallel.h>
using namespace Rcpp;
using namespace std;
using namespace arma;
using namespace RcppParallel;
struct Boot_Worker : public Worker {
//Generate Inputs
// Source vector to keep track of the number of bootstraps
const arma::vec Boot_reps;
// Initial non-linear theta parameter values
const arma::vec init_val;
// Decimal date vector
const arma::colvec T_series;
// Generate the price series observational vector
const arma::colvec Y_est;
const arma::colvec Y_res;
// Generate the optimization constants
const arma::mat U;
const arma::colvec C;
const int N;
// Generate Output Matrix
arma::mat Boots_out;
// Initialize with the proper input and output
Boot_Worker( const arma::vec Boot_reps, const arma::vec init_val, const arma::colvec T_series, const arma::colvec Y_est, const arma::colvec Y_res, const arma::mat U, const arma::colvec C, const int N, arma::mat Boots_out)
: Boot_reps(Boot_reps), init_val(init_val), T_series(T_series), Y_est(Y_est), Y_res(Y_res), U(U), C(C), N(N), Boots_out(Boots_out) {}
void operator()(std::size_t begin, std::size_t end){
//load necessary stuffs from around
Rcpp::Environment stats("package:stats");
Rcpp::Function constrOptim = stats["constrOptim"];
Rcpp::Function SDK_pred_mad( "SDK_pred_mad");
arma::mat fake_data(N,2);
arma::colvec index(N);
for(unsigned int i = begin; i < end; i ++){
// Need a nested loop to create and fill the fake data matrix
arma::vec pool = arma::regspace(0, N-1) ;
std::random_shuffle(pool.begin(), pool.end());
for(int k = 0; k <= N-1; k++){
fake_data(k, 0) = Y_est[k] + Y_res[ pool[k] ];
fake_data(k, 1) = T_series[k];
}
// Call the optimization
Rcpp::List opt_results = constrOptim(Rcpp::_["theta"] = init_val,
Rcpp::_["f"] = SDK_pred_mad,
Rcpp::_["data_in"] = fake_data,
Rcpp::_["grad"] = "NULL",
Rcpp::_["method"] = "Nelder-Mead",
Rcpp::_["ui"] = U,
Rcpp::_["ci"] = C );
/// fill the output matrix ///
// need to create a placeholder arma vector for the parameter output
arma::vec opt_param = Rcpp::as<arma::vec>(opt_results[0]);
Boots_out(i, 0) = opt_param[0];
Boots_out(i, 1) = opt_param[1];
Boots_out(i, 2) = opt_param[2];
// for the cost function value at optimization
arma::vec opt_value = Rcpp::as<arma::vec>(opt_results[1]);
Boots_out(i, 3) = opt_value[0];
// for the number of function calls (?)
arma::vec counts = Rcpp::as<arma::vec>(opt_results[2]);
Boots_out(i, 4) = counts[0];
// for the convergence code
arma::vec convergence = Rcpp::as<arma::vec>(opt_results[3]);
Boots_out(i, 5) = convergence[0];
}
}
};
// [[Rcpp::export]]
arma::mat SDK_boots_test(arma::vec init_val, arma::mat data_in, int boots_n){
//First establish theta_sp, estimate and residuals
const int N = arma::size(data_in)[0];
// Create the constraints for the constrained optimization
// Make a boundary condition matrix of the form Ui*theta - ci >= 0
arma::mat U(6, 3);
U(0, 0) = 1;
U(1, 0) = -1;
U(2, 0) = 0;
U(3, 0) = 0;
U(4, 0) = 0;
U(5, 0) = 0;
U(0, 1) = 0;
U(1, 1) = 0;
U(2, 1) = 1;
U(3, 1) = -1;
U(4, 1) = 0;
U(5, 1) = 0;
U(0, 2) = 0;
U(1, 2) = 0;
U(2, 2) = 0;
U(3, 2) = 0;
U(4, 2) = 1;
U(5, 2) = -1;
arma::colvec C(6);
C[0] = 0;
C[1] = -data_in(N-1, 9)-0.5;
C[2] = 0;
C[3] = -3;
C[4] = 0;
C[5] = -50;
Rcpp::Function SDK_est( "SDK_est");
Rcpp::Function SDK_res( "SDK_res");
arma::vec Y_est = as<arma::vec>(SDK_est(init_val, data_in));
arma::vec Y_res = as<arma::vec>(SDK_res(init_val, data_in));
// Generate feed items for the Bootstrap Worker
arma::vec T_series = data_in( span(0, N-1), 9);
arma::vec Boots_reps(boots_n+1);
// Allocate the output matrix
arma::mat Boots_out(boots_n, 6);
// Pass input and output the Bootstrap Worker
Boot_Worker Boot_Worker(Boots_reps, init_val, T_series, Y_est, Y_res, U, C, N, Boots_out);
// Now finally call the parallel for loop
parallelFor(0, Boots_reps.size(), Boot_Worker);
return Boots_out;
}
So I swapped back in my 'heat algorithm' to solve the optimization. This is entirely in Rcpp/Armadillo, which simplifies the code massively as the constraints are written into the optimizer. Additionally, I removed the randomization, so it just has to solve the same optimization each time, to see if that was the only problem. Without fail I am still getting the same 'fatal error'.
As it stands, here is the code:
// [[Rcpp::depends("RcppArmadillo")]]
// [[Rcpp::depends("RcppParallel")]]
#include <RcppArmadillo.h>
#include <RcppParallel.h>
#include <random>
using namespace Rcpp;
using namespace std;
using namespace arma;
using namespace RcppParallel;
struct Boot_Worker : public Worker {
//Generate Inputs
// Source vector to keep track of the number of bootstraps
const arma::vec Boot_reps;
// Initial non-linear theta parameter values
const arma::vec init_val;
// Decimal date vector
const arma::colvec T_series;
// Generate the price series observational vector
const arma::colvec Y_est;
const arma::colvec Y_res;
const int N;
// Generate Output Matrix
arma::mat Boots_out;
// Initialize with the proper input and output
Boot_Worker( const arma::vec Boot_reps, const arma::vec init_val, const arma::colvec T_series, const arma::colvec Y_est, const arma::colvec Y_res, const int N, arma::mat Boots_out)
: Boot_reps(Boot_reps), init_val(init_val), T_series(T_series), Y_est(Y_est), Y_res(Y_res), N(N), Boots_out(Boots_out) {}
void operator()(std::size_t begin, std::size_t end){
//load necessary stuffs from around
Rcpp::Function SDK_heat( "SDK_heat");
arma::mat fake_data(N,2);
arma::colvec index(N);
for(unsigned int i = begin; i < end; i ++){
// Need a nested loop to create and fill the fake data matrix
//arma::vec pool = arma::shuffle( arma::regspace(0, N-1) );
for(int k = 0; k <= N-1; k++){
fake_data(k, 0) = Y_est[k] + Y_res[ k ];
//fake_data(k, 0) = Y_est[k] + Y_res[ pool[k] ];
fake_data(k, 1) = T_series[k];
}
// Call the optimization
arma::vec opt_results = Rcpp::as<arma::vec>( SDK_heat(Rcpp::_["data_in"] = fake_data, Rcpp::_["tol"] = 0.1) );
/// fill the output matrix ///
// need to create a placeholder arma vector for the parameter output
Boots_out(i, 0) = opt_results[0];
Boots_out(i, 1) = opt_results[1];
Boots_out(i, 2) = opt_results[2];
// for the cost function value at optimization
Boots_out(i, 3) = opt_results[3];
}
}
};
// [[Rcpp::export]]
arma::mat SDK_boots_test(arma::vec init_val, arma::mat data_in, int boots_n){
//First establish theta_sp, estimate and residuals
const int N = arma::size(data_in)[0];
Rcpp::Function SDK_est( "SDK_est");
Rcpp::Function SDK_res( "SDK_res");
const arma::vec Y_est = as<arma::vec>(SDK_est(init_val, data_in));
const arma::vec Y_res = as<arma::vec>(SDK_res(init_val, data_in));
// Generate feed items for the Bootstrap Worker
const arma::vec T_series = data_in( span(0, N-1), 9);
arma::vec Boots_reps(boots_n+1);
// Allocate the output matrix
arma::mat Boots_out(boots_n, 4);
// Pass input and output the Bootstrap Worker
Boot_Worker Boot_Worker(Boots_reps, init_val, T_series, Y_est, Y_res, N, Boots_out);
// Now finally call the parallel for loop
parallelFor(0, Boots_reps.size(), Boot_Worker);
return Boots_out;
}
Looking at your code I see the following:
struct Boot_Worker : public Worker {
[...]
void operator()(std::size_t begin, std::size_t end){
//load necessary stuffs from around
Rcpp::Environment stats("package:stats");
Rcpp::Function constrOptim = stats["constrOptim"];
Rcpp::Function SDK_pred_mad( "SDK_pred_mad");
[...]
// Call the optimization
Rcpp::List opt_results = constrOptim(Rcpp::_["theta"] = init_val,
Rcpp::_["f"] = SDK_pred_mad,
Rcpp::_["data_in"] = fake_data,
Rcpp::_["grad"] = "NULL",
Rcpp::_["method"] = "Nelder-Mead",
Rcpp::_["ui"] = U,
Rcpp::_["ci"] = C );
You are calling an R function from a multi-threaded C++ context. That's something you should not do. R is single-threaded so this will lead to undefined behavior or crashes:
API Restrictions
The code that you write within parallel workers should not call the R or Rcpp API in any fashion. This is because R is single-threaded and concurrent interaction with its data structures can cause crashes and other undefined behavior. Here is the official guidance from Writing R Extensions:
Calling any of the R API from threaded code is ‘for experts only’: they will need to read the source code to determine if it is thread-safe. In particular, code which makes use of the stack-checking mechanism must not be called from threaded code.
Besides, calling back to R from C++ even in a single-threaded context is not the best thing you can do for performance. It should be more efficient to use an optimization library that offers a direct C(++) interface. One possibility might be the development version of nlopt, cf. this issue for a discussion and references to examples. In addition, std::random_shuffle is not only deprecated in C++14 and removed in C++17, but it is also not thread-safe.
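For the shuffle itself, a thread-safe alternative is std::shuffle with an engine that is local to the worker rather than shared global state. A minimal sketch (the per-call seeding is an illustrative assumption, not a recommendation for statistical quality):
#include <algorithm>
#include <random>
#include <vector>

// Build a shuffled index pool 0..N-1 using std::shuffle with a local
// engine, instead of the deprecated, non-thread-safe std::random_shuffle.
std::vector<int> shuffled_indices(int N, unsigned int seed) {
  std::vector<int> pool(N);
  for(int k = 0; k < N; ++k) pool[k] = k;
  std::mt19937 rng(seed);                      // engine local to this call
  std::shuffle(pool.begin(), pool.end(), rng);
  return pool;
}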
In your second example, you say that the function SDK_heat is actually implemented in C++. In that case you can call it directly:
Remove the import of the corresponding R function, i.e. the Rcpp::Function SDK_heat( "SDK_heat"); line.
Make sure that the compiler knows the declaration of the C++ function and that the linker has the actual function:
Quick and dirty: Copy the function definition into your cpp file before the definition of Boot_Worker.
For a cleaner approach, see section "1.10 Sharing code" in the Rcpp attributes vignette
Call the function like any other C++ function, i.e. using positional arguments with types compatible to the function declaration.
All this assumes you are using sourceCpp as indicated by your usage of [[Rcpp::depends(...)]]. You are reaching a complexity that warrants building a package from this.
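Putting the pieces together, a sketch of the worker loop with the direct call (it assumes a C++ signature along the lines of arma::vec SDK_heat(const arma::mat&, double); the actual declaration must match your definition):
// Declaration visible to the compiler; the definition must also be
// available to the linker, e.g. copied into the same cpp file.
arma::vec SDK_heat(const arma::mat& data_in, double tol);

void operator()(std::size_t begin, std::size_t end){
  arma::mat fake_data(N, 2);
  for(std::size_t i = begin; i < end; i++){
    for(int k = 0; k <= N-1; k++){
      fake_data(k, 0) = Y_est[k] + Y_res[k];
      fake_data(k, 1) = T_series[k];
    }
    // Plain C++ call with positional arguments -- no R API inside the worker
    arma::vec opt_results = SDK_heat(fake_data, 0.1);
    Boots_out(i, 0) = opt_results[0];
    Boots_out(i, 1) = opt_results[1];
    Boots_out(i, 2) = opt_results[2];
    Boots_out(i, 3) = opt_results[3];
  }
}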

Rcpp - Using the optim function

How do I use varargs functions of the R language, as in the case of the optim() function?
Consider the code below where I want to maximize the log-likelihood function verossimilhanca:
#include <Rcpp.h>
#include <RInside.h>
using namespace Rcpp;
// [[Rcpp::export]]
double verossimilhanca(Function pdf, NumericVector par, NumericVector x){
NumericVector log_result = log(pdf(par,x));
double soma =0;
for(int i = 0; i < log_result.size(); i++){
soma += log_result[i];
}
return -1*soma;
}
// [[Rcpp::export]]
List bootC(NumericVector x, NumericVector init_val){
Rcpp::Environment stats("package:stats");
Rcpp::Function optim = stats["optim"];
R["my_objective_fn"] = Rcpp::InternalFunction(&verossimilhanca);
Rcpp::List opt_results = optim(Rcpp::_["par"] = init_val,
Rcpp::_["fn"] = Rcpp::InternalFunction(&verossimilhanca),
Rcpp::_["method"] = "BFGS", x);
return opt_results;
// x is a data vector.
}
In summary, I have a log-likelihood function and I want to maximize this function and x is my data set. I know that RInside allows me to create instances of R in C++ but I want to solve this problem only by using the Rcpp.h library without resorting to RInside.h.
Replace x with Rcpp::_["x"] = x in the arguments of the optim() function.
It bothered me too until I found the answer by @coatless.
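In other words, only the optim() call changes; a sketch of the corrected statement (optim() forwards extra named arguments to fn through its ... argument):
Rcpp::List opt_results = optim(Rcpp::_["par"] = init_val,
                               Rcpp::_["fn"] = Rcpp::InternalFunction(&verossimilhanca),
                               Rcpp::_["method"] = "BFGS",
                               Rcpp::_["x"] = x);  // named, so optim's ... passes it to fn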