I'm trying to use sourceCpp but I want to use C++17 or C++2a. Here is my .cpp file:
#include <Rcpp.h>
#include <array>
#include <random>
#include "../include/brownian_bridge_min.hpp"
#include "../include/inverse_gauss.hpp"
using namespace Rcpp;
// [[Rcpp::plugins(cpp17)]]
// [[Rcpp::export]]
double M_function(const double &a, const double &x, const double &y, const double &s, const double &t) {
  // M function that is used to simulate a minimum of a Brownian bridge
  return exp(-2.0 * (a-x) * (a-y) / (t-s));
}

// [[Rcpp::export]]
std::array<double, 2> min_sampler(const double &x, const double &y,
                                  const double &s, const double &t,
                                  const double &low_bound, const double &up_bound,
                                  std::mt19937 &RNG)
{
  // function simulates a minimum of a Brownian bridge between (low_bound) and (up_bound)
  // first element returned is the simulated minimum
  // second element returned is the simulated time at which the minimum occurs
  // calculate bounds for u1
  double low_M {M_function(low_bound, x, y, s, t)};
  double up_M {M_function(up_bound, x, y, s, t)};
  // simulate uniform random variables
  std::uniform_real_distribution<double> u1(low_M, up_M);
  std::uniform_real_distribution<double> u2(0.0, 1.0);
  // set simulated minimum value
  double min {x - (0.5*(sqrt((y-x)*(y-x) - 2.0*(t-s)*log(u1(RNG))) - y + x))};
  // condition for setting V
  double condition {(x-min)/(x+y-(2.0*min))};
  // simulating from Inverse Gaussian
  double mu, lambda, V;
  if (u2(RNG) < condition) {
    mu = (y-min)/(x-min);
    lambda = (y-min)*(y-min)/(t-s);
    V = inv_gauss_sampler(mu, lambda, RNG);
  } else {
    mu = (x-min)/(y-min);
    lambda = (x-min)*(x-min)/(t-s);
    V = 1.0 / inv_gauss_sampler(mu, lambda, RNG);
  }
  // set tau (time of simulated minimum)
  double tau {((s*V)+t)/(1.0+V)};
  // setting simulated minimum and tau in array
  std::array<double, 2> simulated_min {min, tau};
  return simulated_min;
}
If I don't try to specify the C++ standard I want, I get the following syntax errors:
brownian_bridge_rcpp.cpp:34:15: error: expected ';' at end of declaration
double low_M {M_function(low_bound, x, y, s, t)};
But this type of initialisation is fine in C++11 and later. I tried adding '// [[Rcpp::plugins(cpp17)]]'
to my .cpp file, but now I get the following error:
/usr/local/clang4/bin/clang++ -std=gnu++17 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I"/Users/rchan/Library/R/3.5/library/Rcpp/include" -I"/Users/rchan/OneDrive - The Alan Turing Institute/freefor7/bessel_layers_cpp/R" -I/usr/local/include -fPIC -Wall -g -O2 -c brownian_bridge_rcpp.cpp -o brownian_bridge_rcpp.o
error: invalid value 'gnu++17' in '-std=gnu++17'
make: *** [brownian_bridge_rcpp.o] Error 1
Error in Rcpp::sourceCpp("brownian_bridge_rcpp.cpp") :
  Error 1 occurred building shared library.
Does anyone know how I can fix this?
According to Armadillo docs:
.i()
Member function of any matrix expression
Provides an inverse of the matrix expression
...
However, when I try to compile this snippet:
#include <armadillo>
#include <iostream>
arma::sp_mat linReg(arma::sp_mat X, arma::sp_mat Y) {
return (X.t() * X).i() * X.t() * Y;
}
int main() {
arma::sp_mat X = arma::sprandu(1000, 10, 0.3);
arma::sp_mat y = arma::sprandu(1000, 10, 0.3);
std::cout << linReg(X,y).t() << std::endl;
}
I get the following error
lreg.cpp: In function ‘arma::sp_mat linReg(arma::sp_mat, arma::sp_mat)’:
lreg.cpp:6:24: error: ‘arma::enable_if2<true, const arma::SpGlue<arma::SpOp<arma::SpMat<double>, arma::spop_htrans>, arma::SpMat<double>, arma::spglue_times> >::result’ {aka ‘const class arma::SpGlue<arma::SpOp<arma::SpMat<double>, arma::spop_htrans>, arma::SpMat<double>, arma::spglue_times>’} has no member named ‘i’
    6 |     return (X.t() * X).i() * X.t() * Y;
      |
I already tried with mat and it works fine. Any clue why it's not working with a sparse matrix? And if it isn't supported, how can we calculate the inverse of a sparse matrix?
Taking the inverse of a sparse matrix is usually undesirable, because you end up with a dense matrix; in most cases the explicit inverse is not required anyway.
Instead of taking the inverse here, treat the problem as solving a system of linear equations and reformulate it using solve() or spsolve(). Below is an untested example demonstrating the general approach:
arma::mat linReg(const arma::sp_mat& X, const arma::sp_mat& Y) {
    arma::sp_mat A = X.t() * X;
    arma::mat B = arma::mat(X.t() * Y); // convert to dense matrix
    arma::mat result;
    bool ok = arma::spsolve(result, A, B);
    if(ok == false) {
        // handle failure here
    }
    return result;
}
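For completeness, a possible usage sketch (untested, simply mirroring the main() from the question and reusing the linReg() defined above) might look like this:
#include <armadillo>
#include <iostream>

// linReg() as defined in the answer above

int main() {
    arma::sp_mat X = arma::sprandu(1000, 10, 0.3);
    arma::sp_mat Y = arma::sprandu(1000, 10, 0.3);
    std::cout << linReg(X, Y).t() << std::endl;
    return 0;
}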
I am working on a problem in C++17 where I am building a root solver that allows the user to pass a user-defined function to the root-solving function. An example of the class is shown below for the .cpp file and the prototype is in the .hpp file.
// root.cpp
#include <iostream>
#include <string>
#include <cstdlib>
// #include the .hpp file that declares the RootSolver class here

double RootSolver::newton(double guess, double right_side,
                          double (*func)(double),
                          double unc, int iter)
/**
Finds a specific root of a function using the Newton iteration
method
#param guess      An initial guess for the value of the root
#param right_side The value of the right side of the function.
#param func       The function for which the root will be determined
#param unc        The uncertainty or tolerance in the accepted
                  solution. Defaulted to 0.001
#param iter       The number of iterations to try before the
                  function fails. Defaulted to 150.
#return root
*/
{
    double x1, x2, x3, y1, y2, slope;
    x1 = guess;
    x2 = x1 + 0.0000001;
    for (int i = 0; i < iter; i++)
    {
        y1 = func(x1) - right_side;
        y2 = func(x2) - right_side;
        slope = (y2 - y1) / (x2 - x1);
        x3 = x1 - (y1 / slope);
        if (func(x3) - right_side <= unc and
            func(x3) - right_side >= -unc) return x3;
        x1 = x3;
        x2 = x1 + 0.0000001;
    }
    exit_program(iter);
}
// ================================================================
// RootSolver PRIVATE FUNCTIONS
[[noreturn]] void RootSolver::exit_program(int iter)
{
    std::string one("Function did not converge within ");
    std::string two(" iterations");
    std::cout << one << iter << two << std::endl;
    exit(EXIT_FAILURE);
}
The main file looks like this:
#include <cmath>
// #include the RootSolver header here as well

double func1(double x);
double func2(double x, double a, double b);

int main() {
    RootSolver q;
    double guess = 2.0;
    double right_side = 0.0;
    // This function works fine
    double result = q.newton(guess, right_side, func1);
    // Not sure how to reformat RootSolver.newton so I can pass it
    // func1 as well as func2 so it can accept the arguments a and b
    return 0;
}

double func1(double x)
{
    return pow(x, 6) - x - 1.0;
}

double func2(double x, double a, double b)
{
    return pow(x, 6) - a * x - b * 1.0;
}
The code shown above works great for func1, since x is the only argument; however, I am not sure how to reformat the RootSolver.newton function so it will take func1 with no arguments except x and accept func2 and the arguments a and b. Does anyone know how I can pass arguments to the function newton such that it is not hardcoded for a specific input function?
Based on the loose description, it sounds like a caller-side lambda solves your problem:
result = q.newton(guess, right_side, [](double x) {
    return func2(x, 0, 0); // Replace 0s with values of a and b.
});
This lambda is converted to double(*)(double) as needed. Note that this will not work if you need to capture something because function pointers can't store additional state. There are two easy ways to handle that.
Make a template (and put the definition in the header):
template<typename F>
// requires std::is_invocable_r_v<double, F, double> // C++20 constraint option A
// requires requires(F f, double x) { { f(x) } -> std::convertible_to<double>; } // C++20 constraint option B - can be extracted into a concept
double RootSolver::newton(double guess, double right_side,
                          F func,
                          double unc, int iter)
Use std::function at the cost of some performance when calling it:
double RootSolver::newton(double guess, double right_side,
                          const std::function<double(double)>& func,
                          double unc, int iter)
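With either of those two variants, a capturing lambda can then forward the extra arguments. A small illustrative sketch, reusing the names from the question (a and b are just example values):
double a = 1.0, b = 1.0;   // example values for the extra arguments of func2
result = q.newton(guess, right_side, [a, b](double x) {
    return func2(x, a, b); // the captured state travels with the callable
});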
You can use function overloading in this case.
You can pass the function name and x, a and b as parameters in overloaded versions, somewhat like this (I am just considering func, x, a and b for now, but you get the idea):
1) An overloaded version that accepts func1 and its parameter
double newton(...<other parameters>..., double (*func)(double), double x)
2) An overloaded version that accepts func2 and its parameters
double newton(...<other parameters>..., double (*func)(double, double, double), double x, double a, double b)
Now when you wish to call with func1, use:
newton(...., func1, x)
when you wish to call with func2, use:
newton(..., func2, x, a, b)
I am trying to numerically integrate a nonlinear system using boost::odeint. The system has time-varying parameters that are generated externally, and I want to incorporate this into my program. Is this possible with odeint? In Matlab, if you were to do something similar, you would need to interpolate the values as they become available.
Thank you in advance for your help!
Edit:
You can solve a nonlinear time-varying system easily with odeint. The following example of a nonlinear time-varying system is taken from Applied Nonlinear Control by Slotine.
Notice that we can put 6 sin(t) inside the ode safely, since it depends only on t and nothing else has to be updated at each time step. If your system has a controller that depends on the time step, like a PID controller that requires a delta time to compute derivatives, then don't put it inside the ode, because ode() is called several times per step by the solver. This is my code for solving the system.
#include <iostream>
#include <fstream>
#include <cmath>
#include <functional>
#include <vector>
#include <boost/math/constants/constants.hpp>
#include <boost/numeric/odeint.hpp>

std::ofstream data("data.txt");

using namespace boost::numeric::odeint;
typedef std::vector< double > state_type;

class System
{
public:
    System(const double& deltaT);
    void updateODE();
    void updateSystem();
private:
    double t, dt;
    runge_kutta_dopri5< state_type > stepper;
    state_type y;
    void ode(const state_type &y, state_type &dy, double t);
};

System::System(const double& deltaT) : dt(deltaT), t(0.0), y(2)
{
    /*
    x   = y[0]
    dx  = y[1] = dy[0]
    ddx = dy[1] = ode equation
    */
    // initial values
    y[0] = 2.0; // x1
    y[1] = 0.0; // x2
}

void System::updateODE()
{
    // store data for plotting
    data << t << " " << y[0] << std::endl;
    //=========================================================================
    using namespace std::placeholders;
    stepper.do_step(std::bind(&System::ode, this, _1, _2, _3), y, t, dt);
    t += dt;
}

void System::updateSystem()
{
    // you can utilize this function in case you have a controller and
    // you need to update the controller at a fixed step size.
}

void System::ode(const state_type &y, state_type &dy, double t)
{
    //#####################( ODE Equation )################################
    dy[0] = y[1];
    dy[1] = 6.0*sin(t) - 0.1*y[1] - pow(y[0],5);
}

int main(int argc, char **argv)
{
    const double dt(0.001);
    System sys(dt);
    for (double t(0.0); t <= 50.0; t += dt){
        // update time-varying parameters of the system
        //sys.updateSystem();
        // solve the ODE one step forward.
        sys.updateODE();
    }
    return 0;
}
The result (plot omitted here) is the same result presented in the aforementioned book.
Yes, this is possible. But you might need to interpolate the values too. You interact with the solver via the system function. It can be a functor and it can contain a link to the data, for example:
struct ode
{
    void operator()( const state_type& x , state_type& dxdt , double t ) const
    {
        // get the parameter from time_varying_parameters
        // (e.g. look up / interpolate the value corresponding to time t)
    }
    std::vector<double> time_varying_parameters;
};
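For illustration, here is a minimal self-contained sketch of that idea (my own example, not from the original answer): the functor stores uniformly spaced parameter samples, linearly interpolates them at the requested time, and uses the value in a simple decay equation.
#include <vector>
#include <boost/numeric/odeint.hpp>

typedef std::vector<double> state_type;

struct ode
{
    // parameter samples taken every sample_dt time units, starting at t = 0
    std::vector<double> time_varying_parameters;
    double sample_dt;

    double parameter_at(double t) const
    {
        // linear interpolation between the two surrounding samples
        double pos = t / sample_dt;
        std::size_t i = static_cast<std::size_t>(pos);
        if (i + 1 >= time_varying_parameters.size())
            return time_varying_parameters.back();
        double frac = pos - static_cast<double>(i);
        return (1.0 - frac) * time_varying_parameters[i]
               + frac * time_varying_parameters[i + 1];
    }

    void operator()( const state_type& x , state_type& dxdt , double t ) const
    {
        double p = parameter_at(t);   // time-varying parameter
        dxdt[0] = -p * x[0];          // example right-hand side using it
    }
};

int main()
{
    ode sys;
    sys.time_varying_parameters = {1.0, 0.8, 0.6, 0.5}; // externally generated values
    sys.sample_dt = 0.5;

    state_type x(1, 1.0);  // initial condition x(0) = 1
    boost::numeric::odeint::integrate_const(
        boost::numeric::odeint::runge_kutta4<state_type>(),
        sys, x, 0.0, 1.5, 0.01);
    return 0;
}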
I'm trying to speed up my R code, which performs some computationally expensive tasks, with C++ and Rcpp. My problem involves a system of approximately 100 equations, so any hint to speed up the computation is welcome.
What I need is to import a matrix MX created in R into a C++ script. The C++ script has to use the rows of MX as x0 (the initial values) in a system of ODEs.
To simplify the explanation of my problem, the code below is based on the Lorenz system.
As is clear from the quality of my code, I'm new to C++ (and Rcpp).
For clarity, I don't post all my test code, which is terrible, but I really need your help to try to solve this problem.
Any help will be really, really appreciated!
Thanks in advance.
#include <boost/array.hpp>
#include <boost/numeric/odeint.hpp>
#include <Rcpp.h>
// [[Rcpp::depends(BH)]]
// [[Rcpp::plugins(cpp11)]]
using namespace std;
using namespace boost::numeric::odeint;
double theta [] = {10.000,28,2.5};
typedef boost::array< double , 3 > state_type;
void lorenz( const state_type &x , state_type &dxdt , double t ) {
dxdt[0] = theta[0] * ( x[1] - x[0] );
dxdt[1] = theta[1] * x[0] - x[1] - x[0] * x[2];
dxdt[2] = -theta[2] * x[2] + x[0] * x[1];
}
struct foo { std::vector<double> a, b, c; };
struct foo f;
// observer should be a function that appends a single output row for each input row of mx corresponding to the last integration step.
void append_lorenz(const state_type &x , const double t ) {
f.a.push_back(x[0]);
f.b.push_back(x[1]);
f.c.push_back(x[2]);
}
using namespace Rcpp;
// [[Rcpp::export]]
DataFrame callMain(NumericMatrix mx){
int n = mx.nrow();
NumericMatrix total(mx);
for(int i = 0; i < n; ++i) {
// state_type x should be mx rows
state_type x = total.row(i); // initial conditions
const double dt =0.1;
integrate(lorenz , x , 0.0 , 1.0 , dt , append_lorenz );
}
return DataFrame::create(Named("a") = f.a, Named("b") = f.b, Named("c") = f.c);
}
/*** R
mx=matrix(1:9,3,3)
res <- callMain(mx)
print((res))
*/
The error I get is:
error: conversion from ‘Rcpp::Matrix<14>::Row {aka Rcpp::MatrixRow<14>}’ to non-scalar type ‘state_type {aka boost::array<double, 3>}’ requested
state_type x = total.row(i); // initial conditions
I think the error message is clear enough.
state_type x = total.row(i);
There is no conversion between an Rcpp object and a boost::array, so you need to write your own.
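As a possible direction (an untested sketch of mine, assuming mx always has exactly three columns so that a row fits the boost::array<double, 3> state), the failing line could be replaced with an element-wise copy:
// inside callMain(), replacing: state_type x = total.row(i);
state_type x;                                 // boost::array<double, 3>
Rcpp::NumericMatrix::Row row = total.row(i);  // i-th row of the R matrix
for (std::size_t j = 0; j < x.size(); ++j) {
    x[j] = row[j];                            // copy element by element
}
Something like std::copy(row.begin(), row.end(), x.begin()) should also work, since the Rcpp row proxy provides iterators.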
I would like to use the runge_kutta4 method in the odeint C++ library. I've solved the problem in Matlab. My Matlab code to solve x'' = -x - g*x', with initial values x1 = 1, x2 = 0, is as follows
main.m
clear all
clc
t = 0:0.1:10;
x0 = [1; 0];
[t, x] = ode45('ODESolver', t, x0);
plot(t, x(:,1));
title('Position');
xlabel('time (sec)');
ylabel('x(t)');
ODESolver.m
function dx = ODESolver(t, x)
dx = zeros(2,1);
g = 0.15;
dx(1) = x(2);
dx(2) = -x(1) - g*x(2);
end
I've installed the odeint library. My C++ code is as follows
#include <iostream>
#include <boost/numeric/odeint.hpp>
using namespace std;
using namespace boost::numeric::odeint;
/* The type of container used to hold the state vector */
typedef std::vector< double > state_type;
const double gam = 0.15;
/* The rhs of x' = f(x) */
void lorenz( const state_type &x , state_type &dx , double t )
{
dx[0] = x[1];
dx[1] = -x[0] - gam*x[1];
}
int main(int argc, char **argv)
{
const double dt = 0.1;
runge_kutta_dopri5<state_type> stepper;
state_type x(2);
x[0] = 1.0;
x[1] = 0.0;
double t = 0.0;
cout << x[0] << endl;
for ( size_t i(0); i <= 100; ++i){
stepper.do_step(lorenz, x , t, dt );
t += dt;
cout << x[0] << endl;
}
return 0;
}
The result differs from the Matlab output. My question is: why do the results vary? Is there something wrong with my C++ code?
These are the first values from both methods
Matlab C++
-----------------
1.0000 0.9950
0.9950 0.9803
0.9803 0.9560
0.9560 0.9226
0.9226 0.8806
0.8806 0.8304
0.8304 0.7728
0.7728 0.7084
0.7083 0.6379
Update:
The problem is that I forgot to include the initial value in my C++ code. Thanks to @horchler for noticing it. After including the proper values and using runge_kutta_dopri5 as @horchler suggested, the result is
Matlab C++
-----------------
1.0000 1.0000
0.9950 0.9950
0.9803 0.9803
0.9560 0.9560
0.9226 0.9226
0.8806 0.8806
0.8304 0.8304
0.7728 0.7728
0.7083 0.7084
I've updated the code to reflect these modifications.
The runge_kutta4 stepper in odeint is nothing like Matlab's ode45, which is an adaptive scheme based on the Dormand-Prince method. To replicate Matlab's results, you should probably try the runge_kutta_dopri5 stepper. Also, make sure that your C++ code uses the same absolute and relative tolerances as ode45 (defaults are 1e-6 and 1e-3, respectively). Lastly, it looks like you may not be saving/printing your initial condition in your C++ results.
If you're confused at why ode45 is not taking fixed steps even though you specified t = 0:0.1:10;, see my detailed answer here.
If you must use the fixed-step runge_kutta4 stepper, then you'll need to reduce the integration step-size in your C++ code to match Matlab's output.
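For what it's worth, a rough sketch of that fixed-step approach (my own untested example, keeping the damped-oscillator right-hand side from the question, shrinking the internal step to 0.01 and still reporting every 0.1 time units) could look like this:
#include <iostream>
#include <vector>
#include <boost/numeric/odeint.hpp>

using namespace boost::numeric::odeint;

typedef std::vector<double> state_type;
const double gam = 0.15;

void damped_osc(const state_type &x, state_type &dx, double t) {
    dx[0] = x[1];
    dx[1] = -x[0] - gam * x[1];
}

int main() {
    runge_kutta4<state_type> stepper;   // fixed-step classical Runge-Kutta
    state_type x(2);
    x[0] = 1.0;
    x[1] = 0.0;
    const double dt = 0.01;             // internal step, smaller than the 0.1 output interval
    double t = 0.0;
    std::cout << x[0] << std::endl;     // print the initial condition as well
    for (std::size_t i = 0; i < 1000; ++i) {
        stepper.do_step(damped_osc, x, t, dt);
        t += dt;
        if ((i + 1) % 10 == 0)          // report every 10 internal steps = 0.1 time units
            std::cout << x[0] << std::endl;
    }
    return 0;
}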
The Matlab ode45 function already includes error control and, I think, also interpolation (dense output). To compare with Boost.odeint you should use the same functionality there. Boost.odeint provides integrate functions that perform step-size control and dense output if the underlying stepper algorithm provides this functionality. The following code shows how this is used with the default error-control parameters from Matlab given by horchler:
#include <iostream>
#include <vector>
#include <boost/numeric/odeint.hpp>

using namespace std;
using namespace boost::numeric::odeint;

/* The type of container used to hold the state vector */
typedef std::vector< double > state_type;

const double gam = 0.15;

/* The rhs of x' = f(x) */
void damped_osc( const state_type &x , state_type &dx , const double t )
{
    dx[0] = x[1];
    dx[1] = -x[0] - gam*x[1];
}

void print( const state_type &x, const double t )
{
    cout << x[0] << endl;
}

int main(int argc, char **argv)
{
    cout.precision(16); // full precision output
    const double dt = 0.1;
    typedef runge_kutta_dopri5<state_type> stepper_type;
    state_type x(2);
    x[0] = 1.0;
    x[1] = 0.0;
    integrate_const(make_dense_output<stepper_type>( 1E-6, 1E-3 ),
                    damped_osc, x, 0.0, 10.0, dt, print);
    return 0;
}
Please note that the results might still not be exactly the same (as in all 16 digits) because the error control in Boost.odeint might not be implemented exactly as in Matlab's ode45.