boost::unit_test::data::random(-FLT_MAX, FLT_MAX) only generates +Infinity

I'm using boost::unit_test::data::random (with boost-1.61.0_1 installed) and I'm having trouble generating random floats with boost::unit_test::data::random(-FLT_MAX, FLT_MAX): it only seems to generate +Infinity.
Through trial and error, I found that I can generate random floats in [-FLT_MAX, -FLT_MAX * 2^-25) and [-FLT_MAX * 2^-25, FLT_MAX) separately, which gives me a possible workaround, but I'm still curious what I'm doing wrong when trying to generate floats in [-FLT_MAX, FLT_MAX).
#define BOOST_TEST_MODULE example
#include <boost/test/included/unit_test.hpp>
#include <boost/test/data/monomorphic.hpp>
#include <boost/test/data/test_case.hpp>
#include <cfloat>
inline void in_range(float const & min, float const & x, float const & max) {
    BOOST_TEST_REQUIRE(min <= x);
    BOOST_TEST_REQUIRE(x < max);
}
static constexpr float lo{-FLT_MAX / (1024.0 * 1024.0 * 32.0)};
namespace bdata = boost::unit_test::data;
// this test passes
BOOST_DATA_TEST_CASE(low_floats, bdata::random(-FLT_MAX, lo) ^ bdata::xrange(100), x,
                     index) {
#pragma unused(index)
    in_range(-FLT_MAX, x, lo);
}
// this test passes
BOOST_DATA_TEST_CASE(high_floats, bdata::random(lo, FLT_MAX) ^ bdata::xrange(100), x,
                     index) {
#pragma unused(index)
    in_range(lo, x, FLT_MAX);
}
// this test fails
BOOST_DATA_TEST_CASE(all_floats, bdata::random(-FLT_MAX, FLT_MAX) ^ bdata::xrange(100), x,
                     index) {
#pragma unused(index)
    in_range(-FLT_MAX, x, FLT_MAX);
}
results in:
$ ./example
Running 300 test cases...
example.cpp:9: fatal error: in "all_floats": critical check x < max has failed [inf >= 3.40282347e+38]
Failure occurred in a following context:
x = inf; index = 0;
...
example.cpp:9: fatal error: in "all_floats": critical check x < max has failed [inf >= 3.40282347e+38]
Failure occurred in a following context:
x = inf; index = 99;
*** 100 failures are detected in the test module "example"

boost::unit_test::data::random uses std::uniform_real_distribution, which has the requirement:
Requires: a ≤ b and b - a ≤ numeric_limits<RealType>::max().
In your case, b - a is 2 * FLT_MAX, which overflows to +Inf in float, so the requirement is violated.
You could keep your workaround, or you could generate in double and cast the result back to float.
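For the second option, here's a minimal sketch of the idea outside of Boost.Test (hypothetical code, not from the question; wiring it back into a bdata dataset is left out):
#include <cfloat>
#include <iostream>
#include <random>

int main() {
    // b - a == 2 * FLT_MAX is representable as a double, so the
    // precondition b - a <= numeric_limits<double>::max() holds.
    std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> dist(-static_cast<double>(FLT_MAX),
                                                static_cast<double>(FLT_MAX));
    for (int i = 0; i < 5; ++i) {
        // The narrowing cast cannot overflow: values never exceed FLT_MAX.
        std::cout << static_cast<float>(dist(gen)) << '\n';
    }
}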

Related

Cuba library on Arm64 Mac

I'm switching from an old Intel x86-64 MacBook Air to a new M1 MacBook Pro.
I'm having some problems using the Cuba library, in particular the (ll)Vegas integrator: on the former machine everything worked properly, while on the latter (with the same compiler, g++-12, and the same extra libraries required by my code) I get a generic "segmentation fault" error after a bunch of "nan" results from the integration. The very same code worked just fine on the older machine, so I guess it may be due to the architecture in some way.
I found people suggesting to increase the nstart and nincrease parameters of the integrator by an order of magnitude or so, but nothing changes.
I'm using a brew-installed g++-12 as mentioned, running on macOS Monterey version 12.3.
Thanks everyone!
Edit 1: I'm adding the reduced code as well as the error message.
#include <stdlib.h>
#include <stdexcept>
#include <iostream>
#include <stdio.h>
#include <string>
#include <sstream>
#include <fstream>
#include <algorithm> // std::min
#include <math.h>
#include <cmath>
#include <memory>
#include <boost/multiprecision/float128.hpp>
#include <boost/math/special_functions/bessel.hpp>
#include "cuba.h"
#include "gsl/gsl_rng.h" // scan
using boost::multiprecision::float128;
// ----------------------------------------------------- Physical constants
double eps2, qf, qD, mf, delta;
double T,sminima,spminima,saminima;
double TCB,Tfo;
double ma,mA,mAY,mZhatY,mAL,mZhatL;
double gfLAY,gfLZY,guLAY,guLZY,gdLAY,gdLZY,gfRAY,gfRZY,guRAY,guRZY,gdRAY,gdRZY,gnuLAY,gnuLZY,gXAY,gXZY,eps2Y;
double gfLAL,gfLZL,guLAL,guLZL,gdLAL,gdLZL,gfRAL,gfRZL,guRAL,guRZL,gdRAL,gdRZL,gnuLAL,gnuLZL,gXAL,gXZL,eps2L;
#define mX 0.01
#define m mX
#define s0 2970
#define rhocr 1.054e-05
#define T0 2.32e-13 //2.7 Kelvin in GeV
#define Tfr (mX/10.) // freeze-out temperature, xfr = 10
#define TCMB T0*(1) //CMB formation
#define mnu 1.e-9 // neutrino mass
#define mqt 173.21
#define mqb 4.18
#define mqc 1.27
#define mqs 96.e-3
#define mqd 4.7e-3
#define mqu 2.2e-3
#define mpionC 139.57e-3
#define mpion0 134.98e-3
#define mtau 1.78
#define mmu 105.66e-3
#define me 0.51e-3
#define mZ 91.19
#define mW 80.38
#define mH 125.09
#define pi 3.1415927
#define MP 1.22e19
#define qe 0.303
#define gquark 12.
#define ggluon 2.
#define Lqcd 0.217
#define gpion 3.
#define gchargedlepton 4.
#define gneutrino 2.
#define gmassiveboson 3.
#define gmasslessboson 2.
#define gscalar 1.
#define cW 0.8815
#define sW 0.4721
#define thetaW 0.4917 //radiant
// step function
int StepF(double x) {
    int step;
    if (x > 0.) { step = 1; }
    else { step = 0; }
    return step;
}
// dof function
double gstar(double x, int i) {
/* function computing the 2 effective d.o.f
i=1: ge, i=2:gs
*/
double TnuoT, res;
if (x > me) {
TnuoT = 1.;
}
else {
if (i == 1) {TnuoT = pow(4./11,4./3);}
else if (i == 2) {TnuoT = 4./11;}
}
res = StepF(x-Lqcd)*( gmasslessboson + 8.*ggluon + 7./8.*gquark*( StepF(x-mqt) + StepF(x-mqb) + StepF(x-mqc) + StepF(x-mqs) + StepF(x-mqd) + StepF(x-mqu) ) + gscalar*(StepF(x-mH) ) + gmassiveboson*( StepF(x-mA) + StepF(x-mZ) + 2.*StepF(x-mW) ) + 7./8.*( gchargedlepton*( StepF(x-me) + StepF(x-mmu) + StepF(x-mtau) ) + 3.*gneutrino + gneutrino*StepF(x-mX) ) ) +
StepF(Lqcd-x)*( gmasslessboson + gmassiveboson*StepF(x-mA) + gpion*( 2.*StepF(x-mpionC) + StepF(x-mpion0)) + 7./8.*(gchargedlepton*( StepF(x-me) + StepF(x-mmu)) + 3.*gneutrino*TnuoT + gneutrino*StepF(x-mX) ) );
return res;
}
// bessel functions
// NOTE: as posted, the body called plain cyl_bessel_k, i.e. itself, which
// recurses until the stack overflows if this wrapper is ever used; the
// integrand below calls boost::math::cyl_bessel_k directly, so delegate
// explicitly here as well:
template <class T1, class T2>
long double cyl_bessel_k(T1 v, T2 x){
    return boost::math::cyl_bessel_k(v, x);
}
double Power(double x, double y){
return pow(x,y);
}
double Sqrt(double x){
return sqrt(x);
}
// ----------------------------------------------------- Gamma_A' U(1)_em x U(1)_d
double GammaA(double massA) {
double tw = 0.;
double PSe = 0.;
if(massA>2.*me)
PSe = eps2*pow(qe*qf,2.)*(1.+2.*me*me/(massA*massA))*sqrt(1.-4.*me*me/(massA*massA));
else PSe = 0.;
double PSmu = 0.;
if(massA>2.*mmu)
PSmu = eps2*pow(qe*qf,2.)*(1.+2.*mmu*mmu/(massA*massA))*sqrt(1.-4.*mmu*mmu/(massA*massA));
else PSmu = 0.;
double PStau = 0.;
if(massA>2.*mtau)
PStau = eps2*pow(qe*qf,2.)*(1.+2.*mtau*mtau/(massA*massA))*sqrt(1.-4.*mtau*mtau/(massA*massA));
else PStau = 0.;
double PSqu = 0.;
if(massA>2.*mqu)
PSqu = eps2*Power(qe*qf*2./3.,2)*(1.+2.*mqu*mqu/(massA*massA))*sqrt(1.-4.*mqu*mqu/(massA*massA));
else PSqu = 0.;
double PSqc = 0.;
if(massA>2.*mqc)
PSqc = eps2*Power(qe*qf*2./3.,2)*(1.+2.*mqc*mqc/(massA*massA))*sqrt(1.-4.*mqc*mqc/(massA*massA));
else PSqc = 0.;
double PSqt = 0.;
if(massA>2.*mqt)
PSqt = eps2*Power(qe*qf*2./3.,2)*(1.+2.*mqt*mqt/(massA*massA))*sqrt(1.-4.*mqt*mqt/(massA*massA));
else PSqt = 0.;
double PSqd = 0.;
if(massA>2.*mqd)
PSqd = eps2*Power(qe*qf*1./3.,2)*(1.+2.*mqd*mqd/(massA*massA))*sqrt(1.-4.*mqd*mqd/(massA*massA));
else PSqd = 0.;
double PSqs = 0.;
if(massA>2.*mqs)
PSqs = eps2*Power(qe*qf*1./3.,2)*(1.+2.*mqs*mqs/(massA*massA))*sqrt(1.-4.*mqs*mqs/(massA*massA));
else PSqs = 0.;
double PSqb = 0.;
if(massA>2.*mqb)
PSqb = eps2*Power(qe*qf*1./3.,2)*(1.+2.*mqb*mqb/(massA*massA))*sqrt(1.-4.*mqb*mqb/(massA*massA));
else PSqb = 0.;
double PSX = 0.;
if(massA>2.*mX)
PSX = qD*qD*(1.+2.*mX*mX/(massA*massA))*sqrt(1.-4.*mX*mX/(massA*massA));
else PSX = 0.;
tw = massA/4./pi*( PSe + PSmu + PStau + PSX + 3.*(PSqu + PSqc + PSqt + PSqd + PSqs + PSqb)) ;
return tw;
}
// ----------------------------------------------------- Integral XX -> ee (em x d)
static int IntegrandEXXee(const int *ndim, const double xx[],
const int *ncomp, double ff[], void *userdata)
{
sminima = 4.*m*m;
#define st1 xx[1] // x
#define Tt1 xx[0] // y
#define f1 ff[0] // integrand: factor of yield
#define Tmax1 Tfr
#define Tmin1 T0
#define smax1 1000
#define smin1 sminima
#define T1 (Tmin1*exp( Tt1*log(Tmax1/Tmin1) ))
#define JacT1 (Tmin1*log(Tmax1/Tmin1)*exp(Tt1*log(Tmax1/Tmin1)))
#define s1 (smin1*exp( st1*log(smax1/smin1) ))
#define JacS1 (smin1*log(smax1/smin1)*exp(st1*log(smax1/smin1)))
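// NOTE: the macros above are plain object-like #defines and are never
// #undef'd, so they stay visible for the rest of the translation unit.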
long double func1;
long double neqX = gchargedlepton*mX*mX*T1*boost::math::cyl_bessel_k(2, mX/T1)/(2.*pi*pi);
long double sigma_XXff = Power(qe*qf*qD,2)*eps2*(s1+2.*m*m)*(s1+2.*mf*mf)*Sqrt(1-4.*mf*mf/s1)/(12.*pi*s1*Sqrt(1-4.*m*m/s1)*(Power(s1-mA*mA,2)+mA*mA*GammaA(mA)*GammaA(mA)));
func1 = JacT1 * JacS1;
func1 = func1 * Sqrt(pi/45) * MP * 2. * Power(pi,2.) * gchargedlepton/ Power(2.*pi,6.);
func1 = func1 * gstar(T1,2) * T1 /(sqrt(gstar(T1,1)) * neqX * neqX);
func1 = func1 * (sqrt(s1) * boost::math::cyl_bessel_k(1, sqrt(s1)/T1) * (s1-4.*m*m) * sigma_XXff);
f1 = func1;
return 0;
}
// ----------------------------------------------------- Common parameters
#define ndim 2
#define ncomp 1
#define userdata NULL
#define epsrel 1.e-1
#define epsabs 0
#define flags 6
//#define VERBOSE 2
//#define LAST 4
#define seed 1
#define mineval 1000
#define maxeval 300000
#define statefile NULL
// ----------------------------------------------------- Vegas-specific parameters
#define nstart 10000
#define nincrease 5000
#define nbatch 1000
#define gridno 1
int main(int numb, char** array)
{
double xtest;
long long int neval;
int fail;
double integralEXXee[ncomp], errorEXXee[ncomp],probEXXee[ncomp];
double numbd;
int nregions;
#define resultXXee integralEXXee[0]
#define errorXXee errorEXXee[0]
qD = 1.;
qf = 1.;
mA = 1.e-1;
eps2 = 1.e-8;
double neqX = gchargedlepton*mX*mX*Tfr*boost::math::cyl_bessel_k(2, mX/Tfr)/(2.*pi*pi);
double stot = 2.*pi*pi/45.*gstar(Tfr,2)*Tfr*Tfr*Tfr;
llVegas(ndim, ncomp, IntegrandEXXee, userdata, 1 /* nvec */,
epsrel, epsabs, flags, seed, mineval, maxeval, nstart, nincrease,
nbatch, gridno, statefile, NULL,&neval, &fail, integralEXXee, errorEXXee, probEXXee);
}
The error I get on the M1:
Iteration 1: 10000 integrand evaluations so far
[1] nan +- 2.22056e-18 chisq nan (0 df)
zsh: segmentation fault ./draft
The same routine works just fine on x86-64.
Edit 2:
Regardless of the previous comment, I managed to figure out that the problem isn't related to the Cuba libraries; I'm guessing it's something related to the arm64 architecture itself: if I remove the Bessel function from the integrand a solution is found, whilst keeping it gives a segmentation fault. There's something different in the way precision is treated at this point.
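One way to narrow this down further (a hypothetical standalone check, independent of Cuba and not from the original post) is to evaluate the same Bessel call over the range of arguments the integrand produces and see whether it already misbehaves on arm64:
#include <boost/math/special_functions/bessel.hpp>
#include <iostream>

int main() {
    // mX/T1 runs from ~10 up to ~4e10, because T1 is sampled log-uniformly
    // between T0 = 2.32e-13 and Tfr = mX/10 = 1e-3 with mX = 0.01.
    for (double arg = 10.; arg < 1e10; arg *= 10.) {
        std::cout << "K_2(" << arg << ") = "
                  << boost::math::cyl_bessel_k(2, arg) << '\n';
    }
}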

Unit tests with boost::multiprecision

Some of my unit tests have started failing since adapting some code to enable multi-precision. Header file:
#ifndef SCRATCH_UNITTESTBOOST_INCLUDED
#define SCRATCH_UNITTESTBOOST_INCLUDED
#include <boost/multiprecision/cpp_dec_float.hpp>
// typedef double FLOAT;
typedef boost::multiprecision::cpp_dec_float_50 FLOAT;
const FLOAT ONE(FLOAT(1));
struct Rect
{
Rect(const FLOAT &width, const FLOAT &height) : Width(width), Height(height){};
FLOAT getArea() const { return Width * Height; }
FLOAT Width, Height;
};
#endif
Main test file:
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE RectTest
#include <boost/test/unit_test.hpp>
#include <iomanip>
#include <iostream>
#include "SCRATCH_UnitTestBoost.h"
namespace utf = boost::unit_test;
// Failing
BOOST_AUTO_TEST_CASE(AreaTest1)
{
Rect R(ONE / 2, ONE / 3);
FLOAT expected_area = (ONE / 2) * (ONE / 3);
std::cout << std::setprecision(std::numeric_limits<FLOAT>::digits10) << std::showpoint;
std::cout << "Expected: " << expected_area << std::endl;
std::cout << "Actual : " << R.getArea() << std::endl;
// BOOST_CHECK_EQUAL(expected_area, R.getArea());
BOOST_TEST(expected_area == R.getArea());
}
// Tolerance has no effect?
BOOST_AUTO_TEST_CASE(AreaTestTol, *utf::tolerance(1e-40))
{
Rect R(ONE / 2, ONE / 3);
FLOAT expected_area = (ONE / 2) * (ONE / 3);
BOOST_TEST(expected_area == R.getArea());
}
// Passing
BOOST_AUTO_TEST_CASE(AreaTest2)
{
Rect R(ONE / 7, ONE / 2);
FLOAT expected_area = (ONE / 7) * (ONE / 2);
BOOST_CHECK_EQUAL(expected_area, R.getArea());
}
Note that when defining FLOAT as the double type, all the tests pass. What confuses me is that when printing the exact expected and actual values (see AreaTest1), we see the same result. But the error reported by BOOST_TEST is:
error: in "AreaTest1": check expected_area == R.getArea() has failed
[0.16666666666666666666666666666666666666666666666666666666666666666666666666666666 !=
0.16666666666666666666666666666666666666666666666666666666666666666666666672236366]
Compiling with g++ SCRATCH_UnitTestBoost.cpp -o utb.o -lboost_unit_test_framework.
Questions:
Why is the test failing?
Why does the use of tolerance in AreaTestTol not give outputs as documented here?
Related info:
Tolerances with floating point comparison
Gotchas with multiprecision types
Two Issues:
where does the difference come from
how to apply the epsilon?
Where The Difference Comes From
Boost Multiprecision uses expression templates to defer evaluation.
Also, you're choosing some rational fractions that cannot be exactly represented in base 10 (cpp_dec_float uses decimal, so base 10).
This means that when you do
T x = T(1) / 3;
T y = T(1) / 7;
That will actually approximate both fractions inexactly.
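For instance, a tiny illustration (hypothetical, not from the question):
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    using T = boost::multiprecision::cpp_dec_float_50;
    T third = T(1) / 3;  // rounded to the type's 50 decimal digits
    std::cout << std::setprecision(std::numeric_limits<T>::digits10)
              << third << '\n';  // prints finitely many 3s, not the exact 1/3
}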
Doing this:
T z = T(1) / 3 * (T(1) / 7);
will actually evaluate the right-hand side as an expression template; instead of calculating temporaries like x and y beforehand, the right-hand side has a type of:
expression<detail::multiplies, detail::expression<?>, detail::expression<?>, [2 * ...]>
That's shortened from the actual type:
boost::multiprecision::detail::expression<
boost::multiprecision::detail::multiplies,
boost::multiprecision::detail::expression<
boost::multiprecision::detail::divide_immediates,
boost::multiprecision::number<boost::multiprecision::backends::cpp_dec_float<50u,
int, void>, (boost::multiprecision::expression_template_option)1>, int,
void, void>,
boost::multiprecision::detail::expression<
boost::multiprecision::detail::divide_immediates,
boost::multiprecision::number<boost::multiprecision::backends::cpp_dec_float<50u,
int, void>, (boost::multiprecision::expression_template_option)1>, int,
void, void>,
void, void>
Long story short, this is what you want: it saves you work and keeps better accuracy, because the expression is first normalized to 1/(3*7), i.e. 1/21.
This is where your difference comes from in the first place. Fix it by either:
turning off expression templates (see the side-by-side sketch after this list)
using T = boost::multiprecision::number<
    boost::multiprecision::cpp_dec_float<50>,
    boost::multiprecision::et_off>;
rewriting the expression to be equivalent to your implementation:
T expected_area = T(ONE / 7) * T(ONE / 2);
T expected_area = (ONE / 7).eval() * (ONE / 2).eval();
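As a quick side-by-side (hypothetical sketch, assuming both typedefs compile; the exact printed digits depend on the library version):
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>
#include <limits>

namespace mp = boost::multiprecision;
using ET   = mp::cpp_dec_float_50;                          // expression templates on
using NoET = mp::number<mp::cpp_dec_float<50>, mp::et_off>; // expression templates off

int main() {
    ET   a = ET(1) / 2 * (ET(1) / 3);     // RHS stays one deferred expression
    NoET b = NoET(1) / 2 * (NoET(1) / 3); // temporaries materialized per step
    std::cout << std::setprecision(std::numeric_limits<ET>::digits10)
              << a << '\n' << b << '\n';
}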
Applying The Tolerance
I find it hard to parse the Boost Unit Test docs on this, but here's empirical data:
BOOST_CHECK_EQUAL(expected_area, R.getArea());
T const eps = std::numeric_limits<T>::epsilon();
BOOST_CHECK_CLOSE(expected_area, R.getArea(), eps);
BOOST_TEST(expected_area == R.getArea(), tt::tolerance(eps));
This fails the first, and passes the last two. Indeed, in addition, the following two also fail:
BOOST_CHECK_EQUAL(expected_area, R.getArea());
BOOST_TEST(expected_area == R.getArea());
So it appears that something has to be done before the utf::tolerance decorator takes effect. Testing with native doubles tells me that only BOOST_TEST applies the tolerance implicitly. So I dived into the preprocessed expansion:
::boost::unit_test::unit_test_log.set_checkpoint(
::boost::unit_test::const_string(
"/home/sehe/Projects/stackoverflow/test.cpp",
sizeof("/home/sehe/Projects/stackoverflow/test.cpp") - 1),
static_cast<std::size_t>(42));
::boost::test_tools::tt_detail::report_assertion(
(::boost::test_tools::assertion::seed()->*a == b).evaluate(),
(::boost::unit_test::lazy_ostream::instance()
<< ::boost::unit_test::const_string("a == b", sizeof("a == b") - 1)),
::boost::unit_test::const_string(
"/home/sehe/Projects/stackoverflow/test.cpp",
sizeof("/home/sehe/Projects/stackoverflow/test.cpp") - 1),
static_cast<std::size_t>(42), ::boost::test_tools::tt_detail::CHECK,
::boost::test_tools::tt_detail::CHECK_BUILT_ASSERTION, 0);
} while (::boost::test_tools::tt_detail::dummy_cond());
Digging in a lot more, I ran into:
/*!@brief Indicates if a type can be compared using a tolerance scheme
*
* This is a metafunction that should evaluate to @c mpl::true_ if the type
* @c T can be compared using a tolerance based method, typically for floating point
* types.
*
* This metafunction can be specialized further to declare user types that are
* floating point (eg. boost.multiprecision).
*/
template <typename T>
struct tolerance_based : tolerance_based_delegate<T, !is_array<T>::value && !is_abstract_class_or_function<T>::value>::type {};
There we have it! But no,
static_assert(boost::math::fpc::tolerance_based<double>::value);
static_assert(boost::math::fpc::tolerance_based<cpp_dec_float_50>::value);
Both already pass. Hmm.
Looking at the decorator I noticed that the tolerance injected into the fixture context is typed.
Experimentally I have reached the conclusion that the tolerance decorator needs to have the same static type argument as the operands in the comparison for it to take effect.
This may actually be very useful (you can have different implicit tolerances for different floating point types), but it is pretty surprising as well.
TL;DR
Here's the full test set fixed and live for your enjoyment:
take into account evaluation order and the effect on accuracy
use the static type in utf::tolerance(v) to match your operands
do not use BOOST_CHECK_EQUAL for tolerance-based comparison
I'd suggest using explicit test_tools::tolerance instead of relying on "ambient" tolerance. After all, we want to be testing our code, not the test framework.
Live On Coliru
template <typename T> struct Rect {
Rect(const T &width, const T &height) : width(width), height(height){};
T getArea() const { return width * height; }
private:
T width, height;
};
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE RectTest
#include <boost/multiprecision/cpp_dec_float.hpp>
using DecFloat = boost::multiprecision::cpp_dec_float_50;
#include <boost/test/unit_test.hpp>
namespace utf = boost::unit_test;
namespace tt = boost::test_tools;
namespace {
template <typename T>
static inline const T Eps = std::numeric_limits<T>::epsilon();
template <typename T> struct Fixture {
T const epsilon = Eps<T>;
T const ONE = 1;
using Rect = ::Rect<T>;
void checkArea(int wdenom, int hdenom) const {
auto w = ONE/wdenom; // could be expression templates
auto h = ONE/hdenom;
Rect const R(w, h);
T expect = w*h;
BOOST_TEST(expect == R.getArea(), "1/" << wdenom << " x " << "1/" << hdenom);
// I'd prefer an explicit tolerance
BOOST_TEST(expect == R.getArea(), tt::tolerance(epsilon));
}
};
}
BOOST_AUTO_TEST_SUITE(Rectangles)
BOOST_FIXTURE_TEST_SUITE(Double, Fixture<double>, *utf::tolerance(Eps<double>))
BOOST_AUTO_TEST_CASE(check2_3) { checkArea(2, 3); }
BOOST_AUTO_TEST_CASE(check7_2) { checkArea(7, 2); }
BOOST_AUTO_TEST_CASE(check57_31) { checkArea(57, 31); }
BOOST_AUTO_TEST_SUITE_END()
BOOST_FIXTURE_TEST_SUITE(MultiPrecision, Fixture<DecFloat>, *utf::tolerance(Eps<DecFloat>))
BOOST_AUTO_TEST_CASE(check2_3) { checkArea(2, 3); }
BOOST_AUTO_TEST_CASE(check7_2) { checkArea(7, 2); }
BOOST_AUTO_TEST_CASE(check57_31) { checkArea(57, 31); }
BOOST_AUTO_TEST_SUITE_END()
BOOST_AUTO_TEST_SUITE_END()
Prints

R code containing Rcpp function runs twice as fast on Mac, than on a much more powerful Windows machine

I have written R code containing an Rcpp function, which in turn calls other cpp functions through inline, on my Mac. I have switched to a Windows machine with a much more powerful CPU and more RAM, but the same code takes on average twice as long to run on the new machine.
my R session info on Mac is here
and that of the Windows machine is here
As a clear example, this small function in my code (lik2altcpp.cpp)
#ifndef __lik2altcpp__
#define __lik2altcpp__
// [[Rcpp::depends(RcppArmadillo)]]
#include "RcppArmadillo.h"
#include "FactorialLog.cpp"
using namespace arma;
using namespace Rcpp;
// [[Rcpp::export]]
inline vec lik2altF(vec p,int k,double eps) {
wall_clock timer;
timer.tic();
double ptie=0,arg11=0,arg12=0,arg21=0,arg22=0,ptb=0,pu1=0,pu2=0;
double p1,p2,ps;
vec prob(2);
if (p(0)==0) p1=10e-20; else p1=p(0);
if (p(1)==0) p2=10e-20; else p2=p(1);
if (p(2)==0) ps=10e-20; else ps=p(2);
for (int i=0;i<=k;i++)
{
if (i!=0)
{
ptie=ptie+ exp(FactorialLog(2*k-i-1)-(FactorialLog(i-1)+FactorialLog(k-i)+FactorialLog(k-i))+i*log(ps)+(k-i)*log(p1)+(k-i)*log(p2));
}
if(i!=k)
{
arg11=arg11+exp(FactorialLog(k+i-1)-(FactorialLog(i)+FactorialLog(k-1))+k*log(p1)+i*log(p2)); //first argument of the P(1)
arg12=arg12+exp(FactorialLog(k+i-1)-(FactorialLog(i)+FactorialLog(k-1))+i*log(p1)+k*log(p2)); //first argument of the P(2)
}
if((i!=0) && (i!=k))
{
for(int j=0; j<=(k-i-1);j++)
{
arg21=arg21+ exp(FactorialLog(k+j-1)-(FactorialLog(i-1)+FactorialLog(k-i)+FactorialLog(j))+ i*log(ps)+(k-i)*log(p1)+j*log(p2)) + exp(FactorialLog(k+j-1)-(FactorialLog(i)+FactorialLog(k-i-1)+FactorialLog(j)) + i*log(ps)+(k-i)*log(p1)+j*log(p2)); //second argument of the P(1)
arg22=arg22+ exp(FactorialLog(k+j-1)-(FactorialLog(i-1)+FactorialLog(k-i)+FactorialLog(j))+ i*log(ps)+j*log(p1)+(k-i)*log(p2)) + exp(FactorialLog(k+j-1)-(FactorialLog(i)+FactorialLog(k-i-1)+FactorialLog(j)) + i*log(ps)+j*log(p1)+(k-i)*log(p2)); //second argument of the P(2)
}
}
}
//summing up the terms of the prob. formula
pu1=arg11+arg21 ;
pu2=arg12+arg22;
// ptb=(p1+eps*ps)/(p1+p2+2*eps*ps); //the actual formula for ptb
///////////REVERT THE CHANGES AFTER THE TEST ////////////////
ptb=.5;
//Calculating probabilities
prob(0)=pu1+ptb*ptie;
prob(1)=pu2+(1-ptb)*ptie;
double n = timer.toc();
cout << "number of seconds: " << n;
return prob;
}
#endif //__lik2altcpp__
along with the function it includes (FactorialLog.cpp):
#ifndef __FactorialLog__
#define __FactorialLog__
#include "RcppArmadillo.h"
using namespace arma;
using namespace Rcpp;
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
inline double FactorialLog(int n)
{
if (n < 0)
{
std::stringstream os;
os << "Invalid input argument (" << n
<< "); may not be negative";
throw std::invalid_argument( os.str() );
}
else if (n > 254)
{
const double Pi = 3.141592653589793;
double x = n + 1;
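// Stirling series for log Gamma(x) at x = n + 1, i.e. log(n!):
// log(n!) ~ (x - 0.5)*log(x) - x + 0.5*log(2*pi) + 1/(12*x)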
return (x - 0.5)*log(x) - x + 0.5*log(2*Pi) + 1.0/(12.0*x);
}
else
{
double lf[] =
{
0.000000000000000,
0.000000000000000,
0.693147180559945,
1.791759469228055,
3.178053830347946,
4.787491742782046,
6.579251212010101,
8.525161361065415,
10.604602902745251,
12.801827480081469,
15.104412573075516,
17.502307845873887,
19.987214495661885,
22.552163853123421,
25.191221182738683,
27.899271383840894,
30.671860106080675,
33.505073450136891,
36.395445208033053,
39.339884187199495,
42.335616460753485,
45.380138898476908,
48.471181351835227,
51.606675567764377,
54.784729398112319,
58.003605222980518,
61.261701761002001,
64.557538627006323,
67.889743137181526,
71.257038967168000,
74.658236348830158,
78.092223553315307,
81.557959456115029,
85.054467017581516,
88.580827542197682,
92.136175603687079,
95.719694542143202,
99.330612454787428,
102.968198614513810,
106.631760260643450,
110.320639714757390,
114.034211781461690,
117.771881399745060,
121.533081515438640,
125.317271149356880,
129.123933639127240,
132.952575035616290,
136.802722637326350,
140.673923648234250,
144.565743946344900,
148.477766951773020,
152.409592584497350,
156.360836303078800,
160.331128216630930,
164.320112263195170,
168.327445448427650,
172.352797139162820,
176.395848406997370,
180.456291417543780,
184.533828861449510,
188.628173423671600,
192.739047287844900,
196.866181672889980,
201.009316399281570,
205.168199482641200,
209.342586752536820,
213.532241494563270,
217.736934113954250,
221.956441819130360,
226.190548323727570,
230.439043565776930,
234.701723442818260,
238.978389561834350,
243.268849002982730,
247.572914096186910,
251.890402209723190,
256.221135550009480,
260.564940971863220,
264.921649798552780,
269.291097651019810,
273.673124285693690,
278.067573440366120,
282.474292687630400,
286.893133295426990,
291.323950094270290,
295.766601350760600,
300.220948647014100,
304.686856765668720,
309.164193580146900,
313.652829949878990,
318.152639620209300,
322.663499126726210,
327.185287703775200,
331.717887196928470,
336.261181979198450,
340.815058870798960,
345.379407062266860,
349.954118040770250,
354.539085519440790,
359.134205369575340,
363.739375555563470,
368.354496072404690,
372.979468885689020,
377.614197873918670,
382.258588773060010,
386.912549123217560,
391.575988217329610,
396.248817051791490,
400.930948278915760,
405.622296161144900,
410.322776526937280,
415.032306728249580,
419.750805599544780,
424.478193418257090,
429.214391866651570,
433.959323995014870,
438.712914186121170,
443.475088120918940,
448.245772745384610,
453.024896238496130,
457.812387981278110,
462.608178526874890,
467.412199571608080,
472.224383926980520,
477.044665492585580,
481.872979229887900,
486.709261136839360,
491.553448223298010,
496.405478487217580,
501.265290891579240,
506.132825342034830,
511.008022665236070,
515.890824587822520,
520.781173716044240,
525.679013515995050,
530.584288294433580,
535.496943180169520,
540.416924105997740,
545.344177791154950,
550.278651724285620,
555.220294146894960,
560.169054037273100,
565.124881094874350,
570.087725725134190,
575.057539024710200,
580.034272767130800,
585.017879388839220,
590.008311975617860,
595.005524249382010,
600.009470555327430,
605.020105849423770,
610.037385686238740,
615.061266207084940,
620.091704128477430,
625.128656730891070,
630.172081847810200,
635.221937855059760,
640.278183660408100,
645.340778693435030,
650.409682895655240,
655.484856710889060,
660.566261075873510,
665.653857411105950,
670.747607611912710,
675.847474039736880,
680.953419513637530,
686.065407301994010,
691.183401114410800,
696.307365093814040,
701.437263808737160,
706.573062245787470,
711.714725802289990,
716.862220279103440,
722.015511873601330,
727.174567172815840,
732.339353146739310,
737.509837141777440,
742.685986874351220,
747.867770424643370,
753.055156230484160,
758.248113081374300,
763.446610112640200,
768.650616799717000,
773.860102952558460,
779.075038710167410,
784.295394535245690,
789.521141208958970,
794.752249825813460,
799.988691788643450,
805.230438803703120,
810.477462875863580,
815.729736303910160,
820.987231675937890,
826.249921864842800,
831.517780023906310,
836.790779582469900,
842.068894241700490,
847.352097970438420,
852.640365001133090,
857.933669825857460,
863.231987192405430,
868.535292100464630,
873.843559797865740,
879.156765776907600,
884.474885770751830,
889.797895749890240,
895.125771918679900,
900.458490711945270,
905.796028791646340,
911.138363043611210,
916.485470574328820,
921.837328707804890,
927.193914982476710,
932.555207148186240,
937.921183163208070,
943.291821191335660,
948.667099599019820,
954.046996952560450,
959.431492015349480,
964.820563745165940,
970.214191291518320,
975.612353993036210,
981.015031374908400,
986.422203146368590,
991.833849198223450,
997.249949600427840,
1002.670484599700300,
1008.095434617181700,
1013.524780246136200,
1018.958502249690200,
1024.396581558613400,
1029.838999269135500,
1035.285736640801600,
1040.736775094367400,
1046.192096209724900,
1051.651681723869200,
1057.115513528895000,
1062.583573670030100,
1068.055844343701400,
1073.532307895632800,
1079.012946818975000,
1084.497743752465600,
1089.986681478622400,
1095.479742921962700,
1100.976911147256000,
1106.478169357800900,
1111.983500893733000,
1117.492889230361000,
1123.006317976526100,
1128.523770872990800,
1134.045231790853000,
1139.570684729984800,
1145.100113817496100,
1150.633503306223700,
1156.170837573242400,
};
return lf[n];
}
}
#endif //__FactorialLog__
runs three times as fast on Mac as it does on Windows. You can try it e.g. with these inputs:
> lik2altF(p=c(.3,.3,.4),k=10000,eps=1000)

C++ Single Layer Multi Output Perceptron Weird Behaviour

Some background:
I wrote a single layer multi output perceptron class in C++. It uses the typical WX + b discriminant function and allows for user-defined activation functions. I have tested everything pretty thoroughly and it all seems to be working as I expect it to. I noticed a small logical error in my code, and when I attempted to fix it the network performed much worse than before. The error is as follows:
I evaluate the value at each output neuron using the following code:
output[i] =
activate_(std::inner_product(weights_[i].begin(), weights_[i].end(),
features.begin(), -1 * biases_[i]));
Here I treat the bias input as a fixed -1, but when I apply the learning rule to each bias, I treat the input as +1.
// Bias can be treated as a weight with a constant feature value of 1.
biases_[i] = weight_update(1, error, learning_rate_, biases_[i]);
So I attempted to fix my mistake by changing the call to weight_update to be consistent with the output evaluation:
biases_[i] = weight_update(-1, error, learning_rate_, biases_[i]);
But doing so results in a 20% drop in accuracy!
I have been pulling my hair out for the past few days trying to find some other logical error in my code which might explain this strange behaviour, but have come up empty-handed. Can anyone with more knowledge than me provide any insight? I have provided the entire class below for reference. Thank you in advance.
#ifndef SINGLE_LAYER_PERCEPTRON_H
#define SINGLE_LAYER_PERCEPTRON_H
#include <cassert>
#include <functional>
#include <numeric>
#include <vector>
#include "functional.h"
#include "random.h"
namespace qp {
namespace rf {
namespace {
template <typename Feature>
double weight_update(const Feature& feature, const double error,
const double learning_rate, const double current_weight) {
return current_weight + (learning_rate * error * feature);
}
template <typename T>
using Matrix = std::vector<std::vector<T>>;
} // namespace
template <typename Feature, typename Label, typename ActivationFn>
class SingleLayerPerceptron {
public:
// For testing only.
SingleLayerPerceptron(const Matrix<double>& weights,
const std::vector<double>& biases, double learning_rate)
: weights_(weights),
biases_(biases),
n_inputs_(weights.front().size()),
n_outputs_(biases.size()),
learning_rate_(learning_rate) {}
// Initialize the layer with random weights and biases in [-1, 1].
SingleLayerPerceptron(std::size_t n_inputs, std::size_t n_outputs,
double learning_rate)
: n_inputs_(n_inputs),
n_outputs_(n_outputs),
learning_rate_(learning_rate) {
weights_.resize(n_outputs_);
std::for_each(
weights_.begin(), weights_.end(), [this](std::vector<double>& wv) {
generate_back_n(wv, n_inputs_,
std::bind(random_real_range<double>, -1, 1));
});
generate_back_n(biases_, n_outputs_,
std::bind(random_real_range<double>, -1, 1));
}
std::vector<double> predict(const std::vector<Feature>& features) const {
std::vector<double> output(n_outputs_);
for (auto i = 0ul; i < n_outputs_; ++i) {
output[i] =
activate_(std::inner_product(weights_[i].begin(), weights_[i].end(),
features.begin(), -1 * biases_[i]));
}
return output;
}
void learn(const std::vector<Feature>& features,
const std::vector<double>& true_output) {
const auto actual_output = predict(features);
for (auto i = 0ul; i < n_outputs_; ++i) {
const auto error = true_output[i] - actual_output[i];
for (auto weight = 0ul; weight < n_inputs_; ++weight) {
weights_[i][weight] = weight_update(
features[weight], error, learning_rate_, weights_[i][weight]);
}
// Bias can be treated as a weight with a constant feature value of 1.
biases_[i] = weight_update(1, error, learning_rate_, biases_[i]);
}
}
private:
Matrix<double> weights_; // n_outputs x n_inputs
std::vector<double> biases_; // 1 x n_outputs
std::size_t n_inputs_;
std::size_t n_outputs_;
ActivationFn activate_;
double learning_rate_;
};
struct StepActivation {
double operator()(const double x) const { return x > 0 ? 1 : -1; }
};
} // namespace rf
} // namespace qp
#endif /* SINGLE_LAYER_PERCEPTRON_H */
I ended up figuring it out...
My fix was indeed correct and the loss of accuracy was just a consequence of having a lucky (or unlucky) dataset.
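For completeness, the consistency argument can be spelled out (this only repeats code from the question): prediction computes activate(w.x - b), i.e. it treats the bias as one more weight whose feature value is fixed at -1, so the generic update rule must be fed that same feature value.
// predict(): the bias enters the sum as  w.x + biases_[i] * (-1)
output[i] =
    activate_(std::inner_product(weights_[i].begin(), weights_[i].end(),
                                 features.begin(), -1 * biases_[i]));
// learn(): the bias update therefore uses feature = -1, exactly like
// every other weight uses its own feature value:
biases_[i] = weight_update(-1, error, learning_rate_, biases_[i]);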

Error C2668: 'boost::bind' : ambiguous call to overloaded function

I am trying to build QuantLib on VS2013 in Release x64 mode.
I added the Boost libraries using the Property Manager, then went to Solution Explorer and clicked Build.
The final output was: Build: 18 succeeded, 1 failed. 0 up-to-date, 0 skipped.
When I double-clicked on the error, this file opened up (convolvedstudentt.cpp):
/*
Copyright (C) 2014 Jose Aparicio
This file is part of QuantLib, a free-software/open-source library
for financial quantitative analysts and developers - http://quantlib.org/
QuantLib is free software: you can redistribute it and/or modify it
under the terms of the QuantLib license. You should have received a
copy of the license along with this program; if not, please email
<quantlib-dev@lists.sf.net>. The license is also available online at
<http://quantlib.org/license.shtml>.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the license for more details.
*/
#include <ql/experimental/math/convolvedstudentt.hpp>
#include <ql/errors.hpp>
#include <ql/math/factorial.hpp>
#include <ql/math/distributions/normaldistribution.hpp>
#include <ql/math/solvers1d/brent.hpp>
#include <boost/function.hpp>
#if defined(__GNUC__) && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 8)) || (__GNUC__ > 4))
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-local-typedefs"
#endif
#include <boost/bind.hpp>
#include <boost/math/distributions/students_t.hpp>
#if defined(__GNUC__) && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 8)) || (__GNUC__ > 4))
#pragma GCC diagnostic pop
#endif
namespace QuantLib {
CumulativeBehrensFisher::CumulativeBehrensFisher(
const std::vector<Integer>& degreesFreedom,
const std::vector<Real>& factors
)
: degreesFreedom_(degreesFreedom), factors_(factors),
polyConvolved_(std::vector<Real>(1, 1.)), // value to start convolution
a_(0.)
{
QL_REQUIRE(degreesFreedom.size() == factors.size(),
"Incompatible sizes in convolution.");
for(Size i=0; i<degreesFreedom.size(); i++) {
QL_REQUIRE(degreesFreedom[i]%2 != 0,
"Even degree of freedom not allowed");
QL_REQUIRE(degreesFreedom[i] >= 0,
"Negative degree of freedom not allowed");
}
for(Size i=0; i<degreesFreedom_.size(); i++)
polynCharFnc_.push_back(polynCharactT((degreesFreedom[i]-1)/2));
// adjust the polynomial coefficients by the factors in the linear
// combination:
for(Size i=0; i<degreesFreedom_.size(); i++) {
Real multiplier = 1.;
for(Size k=1; k<polynCharFnc_[i].size(); k++) {
multiplier *= std::abs(factors_[i]);
polynCharFnc_[i][k] *= multiplier;
}
}
//convolution, here it is a product of polynomials and exponentials
for(Size i=0; i<polynCharFnc_.size(); i++)
polyConvolved_ =
convolveVectorPolynomials(polyConvolved_, polynCharFnc_[i]);
// trim possible zeros that might have arised:
std::vector<Real>::reverse_iterator it = polyConvolved_.rbegin();
while(it != polyConvolved_.rend()) {
if(*it == 0.) {
polyConvolved_.pop_back();
it = polyConvolved_.rbegin();
}else{
break;
}
}
// cache 'a' value (the exponent)
for(Size i=0; i<degreesFreedom_.size(); i++)
a_ += std::sqrt(static_cast<Real>(degreesFreedom_[i]))
* std::abs(factors_[i]);
a2_ = a_ * a_;
}
Disposable<std::vector<Real> >
CumulativeBehrensFisher::polynCharactT(Natural n) const {
Natural nu = 2 * n +1;
std::vector<Real> low(1,1.), high(1,1.);
high.push_back(std::sqrt(static_cast<Real>(nu)));
if(n==0) return low;
if(n==1) return high;
for(Size k=1; k<n; k++) {
std::vector<Real> recursionFactor(1,0.); // 0 coef
recursionFactor.push_back(0.); // 1 coef
recursionFactor.push_back(nu/((2.*k+1.)*(2.*k-1.))); // 2 coef
std::vector<Real> lowUp =
convolveVectorPolynomials(recursionFactor, low);
//add them up:
for(Size i=0; i<high.size(); i++)
lowUp[i] += high[i];
low = high;
high = lowUp;
}
return high;
}
Disposable<std::vector<Real> >
CumulativeBehrensFisher::convolveVectorPolynomials(
const std::vector<Real>& v1,
const std::vector<Real>& v2) const {
#if defined(QL_EXTRA_SAFETY_CHECKS)
QL_REQUIRE(!v1.empty() && !v2.empty(),
"Incorrect vectors in polynomial.");
#endif
const std::vector<Real>& shorter = v1.size() < v2.size() ? v1 : v2;
const std::vector<Real>& longer = (v1 == shorter) ? v2 : v1;
Size newDegree = v1.size()+v2.size()-2;
std::vector<Real> resultB(newDegree+1, 0.);
for(Size polyOrdr=0; polyOrdr<resultB.size(); polyOrdr++) {
for(Size i=std::max<Integer>(0, polyOrdr-longer.size()+1);
i<=std::min(polyOrdr, shorter.size()-1); i++)
resultB[polyOrdr] += shorter[i]*longer[polyOrdr-i];
}
return resultB;
}
Probability CumulativeBehrensFisher::operator()(const Real x) const {
// 1st & 0th terms with the table integration
Real integral = polyConvolved_[0] * std::atan(x/a_);
Real squared = a2_ + x*x;
Real rootsqr = std::sqrt(squared);
Real atan2xa = std::atan2(-x,a_);
if(polyConvolved_.size()>1)
integral += polyConvolved_[1] * x/squared;
for(Size exponent = 2; exponent <polyConvolved_.size(); exponent++) {
integral -= polyConvolved_[exponent] *
Factorial::get(exponent-1) * std::sin((exponent)*atan2xa)
/std::pow(rootsqr, static_cast<Real>(exponent));
}
return .5 + integral / M_PI;
}
Probability
CumulativeBehrensFisher::density(const Real x) const {
Real squared = a2_ + x*x;
Real integral = polyConvolved_[0] * a_ / squared;
Real rootsqr = std::sqrt(squared);
Real atan2xa = std::atan2(-x,a_);
for(Size exponent=1; exponent <polyConvolved_.size(); exponent++) {
integral += polyConvolved_[exponent] *
Factorial::get(exponent) * std::cos((exponent+1)*atan2xa)
/std::pow(rootsqr, static_cast<Real>(exponent+1) );
}
return integral / M_PI;
}
InverseCumulativeBehrensFisher::InverseCumulativeBehrensFisher(
const std::vector<Integer>& degreesFreedom,
const std::vector<Real>& factors,
Real accuracy)
: normSqr_(std::inner_product(factors.begin(), factors.end(),
factors.begin(), 0.)),
accuracy_(accuracy), distrib_(degreesFreedom, factors) { }
Real InverseCumulativeBehrensFisher::operator()(const Probability q) const {
Probability effectiveq;
Real sign;
// since the distrib is symmetric solve only on the right side:
if(q==0.5) {
return 0.;
}else if(q < 0.5) {
sign = -1.;
effectiveq = 1.-q;
}else{
sign = 1.;
effectiveq = q;
}
Real xMin =
InverseCumulativeNormal::standard_value(effectiveq) * normSqr_;
// inversion will fail at the Brent's bounds-check if this is not enough
// (q is very close to 1.), in a bad combination fails around 1.-1.e-7
Real xMax = 1.e6;
return sign *
Brent().solve(boost::bind(std::bind2nd(std::minus<Real>(),
effectiveq), boost::bind<Real>(
&CumulativeBehrensFisher::operator (),
distrib_, _1)), accuracy_, (xMin+xMax)/2., xMin, xMax);
}
}
The error seems to be in the third line from the bottom. That's the one that's highlighted.
effectiveq), boost::bind<Real>(
&CumulativeBehrensFisher::operator (),
distrib_, _1)), accuracy_, (xMin+xMax)/2., xMin, xMax);
When I hover the mouse over it, it says:
Error: more than one instance of overloaded function "boost::bind" matches the argument list: function template "boost::_bi::bind_t" etc. Please see the attached screenshot.
How can I fix this? Please help.
This came up quite a few times lately on the QuantLib mailing list. In short, the code worked with Boost 1.57 (the latest version at the time of the QuantLib 1.5 release) but broke with Boost 1.58.
There's a fix for this in the QuantLib master branch on GitHub, but it hasn't made it into a release yet. If you want to (or have to) use Boost 1.58, you can check out the latest code from there. If you want to use a released QuantLib version instead, the workaround is to downgrade to Boost 1.57.
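If you need to stay on Boost 1.58 with QuantLib 1.5, the gist of the fix is to sidestep the ambiguous boost::bind composition. A minimal sketch (assuming a C++11 compiler; not necessarily identical to the committed patch) replaces the bound expression in InverseCumulativeBehrensFisher::operator() with a lambda:
// distrib_(x) is CumulativeBehrensFisher::operator(), so the root of
// CDF(x) - effectiveq is what Brent has to find:
return sign * Brent().solve(
    [&](Real x) { return distrib_(x) - effectiveq; },
    accuracy_, (xMin + xMax) / 2., xMin, xMax);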