Unit tests with boost::multiprecision - c++

Some of my unit tests have started failing since adapting some code to enable multi-precision. Header file:
#ifndef SCRATCH_UNITTESTBOOST_INCLUDED
#define SCRATCH_UNITTESTBOOST_INCLUDED
#include <boost/multiprecision/cpp_dec_float.hpp>
// typedef double FLOAT;
typedef boost::multiprecision::cpp_dec_float_50 FLOAT;
const FLOAT ONE(FLOAT(1));
struct Rect
{
Rect(const FLOAT &width, const FLOAT &height) : Width(width), Height(height){};
FLOAT getArea() const { return Width * Height; }
FLOAT Width, Height;
};
#endif
Main test file:
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE RectTest
#include <boost/test/unit_test.hpp>
#include "SCRATCH_UnitTestBoost.h"
namespace utf = boost::unit_test;
// Failing
BOOST_AUTO_TEST_CASE(AreaTest1)
{
Rect R(ONE / 2, ONE / 3);
FLOAT expected_area = (ONE / 2) * (ONE / 3);
std::cout << std::setprecision(std::numeric_limits<FLOAT>::digits10) << std::showpoint;
std::cout << "Expected: " << expected_area << std::endl;
std::cout << "Actual : " << R.getArea() << std::endl;
// BOOST_CHECK_EQUAL(expected_area, R.getArea());
BOOST_TEST(expected_area == R.getArea());
}
// Tolerance has no effect?
BOOST_AUTO_TEST_CASE(AreaTestTol, *utf::tolerance(1e-40))
{
Rect R(ONE / 2, ONE / 3);
FLOAT expected_area = (ONE / 2) * (ONE / 3);
BOOST_TEST(expected_area == R.getArea());
}
// Passing
BOOST_AUTO_TEST_CASE(AreaTest2)
{
Rect R(ONE / 7, ONE / 2);
FLOAT expected_area = (ONE / 7) * (ONE / 2);
BOOST_CHECK_EQUAL(expected_area, R.getArea());
}
Note that when defining FLOAT as the double type, all the tests pass. What confuses me is that when printing the exact expected and actual values (see AreaTest1) we see the same result. But the error reported from BOOST_TEST is:
error: in "AreaTest1": check expected_area == R.getArea() has failed
[0.16666666666666666666666666666666666666666666666666666666666666666666666666666666 !=
0.16666666666666666666666666666666666666666666666666666666666666666666666672236366]
Compiling with g++ SCRATCH_UnitTestBoost.cpp -o utb.o -lboost_unit_test_framework.
Questions:
Why is the test failing?
Why does the use of tolerance in AreaTestTol not give outputs as documented here?
Related info:
Tolerances with floating point comparison
Gotchas with multiprecision types

Two issues:
where does the difference come from?
how do we apply the epsilon?
Where The Difference Comes From
Boost Multiprecision uses expression templates to defer evaluation.
Also, you're choosing some rational fractions that cannot be represented exactly in base 10 (cpp_dec_float is a decimal type, so base 10).
This means that when you do
T x = T(1) / 3;
T y = T(1) / 7;
both fractions are approximated inexactly.
Doing this:
T z = (T(1) / 3) * (T(1) / 7);
the whole right-hand-side expression template is evaluated at once. So instead of calculating temporaries like x and y first, the right-hand side has a type of:
expression<detail::multiplies, detail::expression<?>, detail::expression<?>, [2 * ...]>
That's shortened from the actual type:
boost::multiprecision::detail::expression<
boost::multiprecision::detail::multiplies,
boost::multiprecision::detail::expression<
boost::multiprecision::detail::divide_immediates,
boost::multiprecision::number<boost::multiprecision::backends::cpp_dec_float<50u,
int, void>, (boost::multiprecision::expression_template_option)1>, int,
void, void>,
boost::multiprecision::detail::expression<
boost::multiprecision::detail::divide_immediates,
boost::multiprecision::number<boost::multiprecision::backends::cpp_dec_float<50u,
int, void>, (boost::multiprecision::expression_template_option)1>, int,
void, void>,
void, void>
Long story short, this is what you want: it saves you work and keeps better accuracy, because the expression is first normalized to 1/(3*7), so 1/21.
This is where your difference comes from in the first place. Fix it by either:
turning off expression templates
using T = boost::multiprecision::number<
    boost::multiprecision::cpp_dec_float<50>,
    boost::multiprecision::et_off>;
rewriting the expression to be equivalent to your implementation:
T expected_area = T(ONE / 7) * T(ONE / 2);
T expected_area = (ONE / 7).eval() * (ONE / 2).eval();
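Applied to the failing test from the question, forcing the same evaluation order on both sides should make the comparison exact again. A sketch reusing the question's FLOAT, ONE and Rect:
BOOST_AUTO_TEST_CASE(AreaTest1Fixed)
{
    FLOAT w = ONE / 2; // evaluated and rounded once, just like Rect::Width
    FLOAT h = ONE / 3; // evaluated and rounded once, just like Rect::Height
    Rect R(w, h);
    FLOAT expected_area = w * h; // same rounding as Width * Height inside getArea()
    BOOST_TEST(expected_area == R.getArea()); // should now compare equal
}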
Applying The Tolerance
I find it hard to parse the Boost Unit Test docs on this, but here's empirical data:
BOOST_CHECK_EQUAL(expected_area, R.getArea());
T const eps = std::numeric_limits<T>::epsilon();
BOOST_CHECK_CLOSE(expected_area, R.getArea(), eps);
BOOST_TEST(expected_area == R.getArea(), tt::tolerance(eps));
The first of these fails and the last two pass. In addition, the following two also fail:
BOOST_CHECK_EQUAL(expected_area, R.getArea());
BOOST_TEST(expected_area == R.getArea());
So it appears that something has to be done before the utf::tolerance decorator takes effect. Testing with native doubles tells me that only BOOST_TEST applies the tolerance implicitly. So I dived into the preprocessed expansion:
::boost::unit_test::unit_test_log.set_checkpoint(
::boost::unit_test::const_string(
"/home/sehe/Projects/stackoverflow/test.cpp",
sizeof("/home/sehe/Projects/stackoverflow/test.cpp") - 1),
static_cast<std::size_t>(42));
::boost::test_tools::tt_detail::report_assertion(
(::boost::test_tools::assertion::seed()->*a == b).evaluate(),
(::boost::unit_test::lazy_ostream::instance()
<< ::boost::unit_test::const_string("a == b", sizeof("a == b") - 1)),
::boost::unit_test::const_string(
"/home/sehe/Projects/stackoverflow/test.cpp",
sizeof("/home/sehe/Projects/stackoverflow/test.cpp") - 1),
static_cast<std::size_t>(42), ::boost::test_tools::tt_detail::CHECK,
::boost::test_tools::tt_detail::CHECK_BUILT_ASSERTION, 0);
} while (::boost::test_tools::tt_detail::dummy_cond());
Digging in a lot more, I ran into:
/*!@brief Indicates if a type can be compared using a tolerance scheme
*
* This is a metafunction that should evaluate to @c mpl::true_ if the type
* @c T can be compared using a tolerance based method, typically for floating point
* types.
*
* This metafunction can be specialized further to declare user types that are
* floating point (eg. boost.multiprecision).
*/
template <typename T>
struct tolerance_based : tolerance_based_delegate<T, !is_array<T>::value && !is_abstract_class_or_function<T>::value>::type {};
There we have it! But no,
static_assert(boost::math::fpc::tolerance_based<double>::value);
static_assert(boost::math::fpc::tolerance_based<cpp_dec_float_50>::value);
Both already pass. Hmm.
Looking at the decorator I noticed that the tolerance injected into the fixture context is typed.
Experimentally I have reached the conclusion that the tolerance decorator's argument needs to have the same static type as the operands in the comparison for it to take effect.
This may actually be very useful (you can have different implicit tolerances for different floating point types), but it is pretty surprising as well.
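In other words, with the question's FLOAT (cpp_dec_float_50), the double literal 1e-40 only sets a tolerance for double comparisons. Something along these lines should make it kick in for the multiprecision operands (a sketch reusing the question's FLOAT, ONE and Rect):
BOOST_AUTO_TEST_CASE(AreaTestTolTyped, *utf::tolerance(FLOAT("1e-40")))
{
    Rect R(ONE / 2, ONE / 3);
    FLOAT expected_area = (ONE / 2) * (ONE / 3);
    BOOST_TEST(expected_area == R.getArea()); // the FLOAT-typed tolerance now applies
}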
TL;DR
Here's the full test set fixed and live for your enjoyment:
take into account evaluation order and the effect on accuracy
use the static type in utf::tolerance(v) to match your operands
do not use BOOST_CHECK_EQUAL for tolerance-based comparison
I'd suggest using explicit test_tools::tolerance instead of relying on "ambient" tolerance. After all, we want to be testing our code, not the test framework.
Live On Coliru
template <typename T> struct Rect {
Rect(const T &width, const T &height) : width(width), height(height){};
T getArea() const { return width * height; }
private:
T width, height;
};
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE RectTest
#include <boost/multiprecision/cpp_dec_float.hpp>
using DecFloat = boost::multiprecision::cpp_dec_float_50;
#include <boost/test/unit_test.hpp>
namespace utf = boost::unit_test;
namespace tt = boost::test_tools;
namespace {
template <typename T>
static inline const T Eps = std::numeric_limits<T>::epsilon();
template <typename T> struct Fixture {
T const epsilon = Eps<T>;
T const ONE = 1;
using Rect = ::Rect<T>;
void checkArea(int wdenom, int hdenom) const {
auto w = ONE/wdenom; // could be expression templates
auto h = ONE/hdenom;
Rect const R(w, h);
T expect = w*h;
BOOST_TEST(expect == R.getArea(), "1/" << wdenom << " x " << "1/" << hdenom);
// I'd prefer explicit tolerance
BOOST_TEST(expect == R.getArea(), tt::tolerance(epsilon));
}
};
}
BOOST_AUTO_TEST_SUITE(Rectangles)
BOOST_FIXTURE_TEST_SUITE(Double, Fixture<double>, *utf::tolerance(Eps<double>))
BOOST_AUTO_TEST_CASE(check2_3) { checkArea(2, 3); }
BOOST_AUTO_TEST_CASE(check7_2) { checkArea(7, 2); }
BOOST_AUTO_TEST_CASE(check57_31) { checkArea(57, 31); }
BOOST_AUTO_TEST_SUITE_END()
BOOST_FIXTURE_TEST_SUITE(MultiPrecision, Fixture<DecFloat>, *utf::tolerance(Eps<DecFloat>))
BOOST_AUTO_TEST_CASE(check2_3) { checkArea(2, 3); }
BOOST_AUTO_TEST_CASE(check7_2) { checkArea(7, 2); }
BOOST_AUTO_TEST_CASE(check57_31) { checkArea(57, 31); }
BOOST_AUTO_TEST_SUITE_END()
BOOST_AUTO_TEST_SUITE_END()
Prints


How to wrap several boolean flags into struct to pass them to a function with a convenient syntax

In some testing code there's a helper function like this:
auto make_condiment(bool salt, bool pepper, bool oil, bool garlic) {
// assumes that first bool is salt, second is pepper,
// and so on...
//
// Make up something according to flags
return something;
};
which essentially builds up something based on some boolean flags.
What concerns me is that the meaning of each bool is hardcoded in the name of the parameters, which is bad because at the call site it's hard to remember which parameter means what (yeah, the IDE can likely eliminate the problem entirely by showing those names when tab completing, but still...):
// at the call site:
auto obj = make_condiment(false, false, true, true); // what ingredients am I using and what not?
Therefore, I'd like to pass a single object describing the settings. Furthermore, just aggregating them in an object, e.g. a std::array<bool,4>, would not help much, since the meaning of each position would still be implicit.
I would like, instead, to enable a syntax like this:
auto obj = make_smart_condiment(oil + garlic);
which would generate the same obj as the previous call to make_condiment.
This new function would be:
auto make_smart_condiment(Ingredients ingredients) {
// retrieve the individual flags from the input
bool salt = ingredients.hasSalt();
bool pepper = ingredients.hasPepper();
bool oil = ingredients.hasOil();
bool garlic = ingredients.hasGarlic();
// same body as make_condiment, or simply:
return make_condiment(salt, pepper, oil, garlic);
}
Here's my attempt:
struct Ingredients {
public:
enum class INGREDIENTS { Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };
explicit Ingredients() : flags{0} {};
explicit Ingredients(INGREDIENTS const& f) : flags{static_cast<int>(f)} {};
private:
explicit Ingredients(int fs) : flags{fs} {}
int flags; // values 0-15
public:
bool hasSalt() const {
return flags % 2;
}
bool hasPepper() const {
return (flags / 2) % 2;
}
bool hasOil() const {
return (flags / 4) % 2;
}
bool hasGarlic() const {
return (flags / 8) % 2;
}
Ingredients operator+(Ingredients const& f) {
return Ingredients(flags + f.flags);
}
} const
salt{Ingredients::INGREDIENTS::Salt},
pepper{Ingredients::INGREDIENTS::Pepper},
oil{Ingredients::INGREDIENTS::Oil},
garlic{Ingredients::INGREDIENTS::Garlic};
However, I have the feeling that I am reinventing the wheel.
Is there any better, or standard, way of accomplishing the above?
Is there maybe a design pattern that I could/should use?
I think you can remove some of the boilerplate by using a std::bitset. Here is what I came up with:
#include <bitset>
#include <cstdint>
#include <iostream>
class Ingredients {
public:
enum Option : uint8_t {
Salt = 0,
Pepper = 1,
Oil = 2,
Max = 3
};
bool has(Option o) const { return value_[o]; }
Ingredients(std::initializer_list<Option> opts) {
for (const Option& opt : opts)
value_.set(opt);
}
private:
std::bitset<Max> value_ {0};
};
int main() {
Ingredients ingredients{Ingredients::Salt, Ingredients::Pepper};
// prints "10"
std::cout << ingredients.has(Ingredients::Salt)
<< ingredients.has(Ingredients::Oil) << "\n";
}
You don't get the + type syntax, but it's pretty close. It's unfortunate that you have to keep an Option::Max, but not too bad. Also I decided to not use an enum class so that it can be accessed as Ingredients::Salt and implicitly converted to an int. You could explicitly access and cast if you wanted to use enum class.
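If the combining syntax from the question matters, it can be layered on top of the same bitset idea. Here's a self-contained sketch (the name IngredientSet and the operator| are illustrative additions, not part of the answer above):
#include <bitset>
#include <cstdint>
#include <initializer_list>
#include <iostream>
class IngredientSet {
public:
    enum Option : uint8_t { Salt, Pepper, Oil, Garlic, Max };
    IngredientSet() = default;
    IngredientSet(std::initializer_list<Option> opts) {
        for (Option o : opts) value_.set(o);
    }
    bool has(Option o) const { return value_[o]; }
    // Union of two sets, enabling call sites like make_smart_condiment(oil | garlic).
    friend IngredientSet operator|(IngredientSet a, const IngredientSet& b) {
        a.value_ |= b.value_;
        return a;
    }
private:
    std::bitset<Max> value_;
};
// Named constants give the call site the syntax from the question.
const IngredientSet salt{IngredientSet::Salt}, pepper{IngredientSet::Pepper},
    oil{IngredientSet::Oil}, garlic{IngredientSet::Garlic};
int main() {
    IngredientSet mix = oil | garlic;
    std::cout << mix.has(IngredientSet::Oil) << mix.has(IngredientSet::Salt) << "\n"; // prints "10"
}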
If you want to use enums as flags, the usual way is to merge them with operator| and check them with operator&:
#include <iostream>
enum Ingredients{ Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };
// If you want to use operator +
Ingredients operator + (Ingredients a,Ingredients b) {
return Ingredients(a | b);
}
int main()
{
using std::cout;
cout << bool( Salt & Ingredients::Salt );   // has salt
cout << bool( Salt & Ingredients::Pepper ); // doesn't have pepper
auto sp = Ingredients::Salt + Ingredients::Pepper;
cout << bool( sp & Ingredients::Salt );   // has salt
cout << bool( sp & Ingredients::Garlic ); // doesn't have garlic
}
Note: the current code (with only operator+ defined) would not work if you mix | and +, as in (Salt|Salt)+Salt, because the built-in | yields an int rather than an Ingredients.
You can also use an enum class; you just need to define the operators yourself, as sketched below.
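For instance, a minimal sketch of those operator definitions for a scoped enum (the Ingredient type here is hypothetical, just to illustrate):
#include <cstdint>
#include <iostream>
enum class Ingredient : uint8_t { Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };
// Combine flags.
constexpr Ingredient operator|(Ingredient a, Ingredient b) {
    return static_cast<Ingredient>(static_cast<uint8_t>(a) | static_cast<uint8_t>(b));
}
// Test flags: true if any bit of b is set in a.
constexpr bool operator&(Ingredient a, Ingredient b) {
    return static_cast<uint8_t>(a) & static_cast<uint8_t>(b);
}
int main() {
    auto sp = Ingredient::Salt | Ingredient::Pepper;
    std::cout << (sp & Ingredient::Salt)    // 1: has salt
              << (sp & Ingredient::Garlic)  // 0: no garlic
              << "\n";
}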
I would look at a strong typing library like:
https://github.com/joboccara/NamedType
For a really good video talking about this:
https://www.youtube.com/watch?v=fWcnp7Bulc8
When I first saw this, I was a little dismissive, but because the advice came from people I respected, I gave it a chance. The video convinced me.
If you look at CPP Best Practices and dig deeply enough, you'll see the general advice to avoid boolean parameters, especially strings of them. And Jonathan Boccara gives good reasons why your code will be stronger if you don't directly use the raw types, for the very reason that you've already identified.
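To make the idea concrete, here's a hand-rolled sketch of strong-typed flags (not the NamedType API, just the same principle): each flag gets its own type, so the call site documents itself and arguments cannot be swapped silently.
struct Salt   { bool value = false; };
struct Pepper { bool value = false; };
struct Oil    { bool value = false; };
struct Garlic { bool value = false; };
int make_condiment(Salt salt, Pepper pepper, Oil oil, Garlic garlic) {
    // ... build something from salt.value, pepper.value, oil.value, garlic.value ...
    return salt.value + pepper.value + oil.value + garlic.value; // placeholder body
}
// Call site reads unambiguously:
// auto obj = make_condiment(Salt{false}, Pepper{false}, Oil{true}, Garlic{true});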

C++ Single Layer Multi Output Perceptron Weird Behaviour

Some background:
I wrote a single-layer, multi-output perceptron class in C++. It uses the typical WX + b discriminant function and allows for user-defined activation functions. I have tested everything pretty thoroughly and it all seems to be working as I expect it to. I noticed a small logical error in my code, and when I attempted to fix it the network performed much worse than before. The error is as follows:
I evaluate the value at each output neuron using the following code:
output[i] =
activate_(std::inner_product(weights_[i].begin(), weights_[i].end(),
features.begin(), -1 * biases_[i]));
Here I treat the bias input as a fixed -1, but when I apply the learning rule to each bias, I treat the input as +1.
// Bias can be treated as a weight with a constant feature value of 1.
biases_[i] = weight_update(1, error, learning_rate_, biases_[i]);
So I attempted to fix my mistake by changing the call to weight_update to be consistent with the output evaluation:
biases_[i] = weight_update(-1, error, learning_rate_, biases_[i]);
But doing so results in a 20% drop in accuracy!
I have been pulling my hair out for the past few days trying to find some other logical error in my code which might explain this strange behaviour, but have come up empty handed. Can anyone with more knowledge than I provide any insight into this? I have provided the entire class below for reference. Thank you in advance.
#ifndef SINGLE_LAYER_PERCEPTRON_H
#define SINGLE_LAYER_PERCEPTRON_H
#include <cassert>
#include <functional>
#include <numeric>
#include <vector>
#include "functional.h"
#include "random.h"
namespace qp {
namespace rf {
namespace {
template <typename Feature>
double weight_update(const Feature& feature, const double error,
const double learning_rate, const double current_weight) {
return current_weight + (learning_rate * error * feature);
}
template <typename T>
using Matrix = std::vector<std::vector<T>>;
} // namespace
template <typename Feature, typename Label, typename ActivationFn>
class SingleLayerPerceptron {
public:
// For testing only.
SingleLayerPerceptron(const Matrix<double>& weights,
const std::vector<double>& biases, double learning_rate)
: weights_(weights),
biases_(biases),
n_inputs_(weights.front().size()),
n_outputs_(biases.size()),
learning_rate_(learning_rate) {}
// Initialize the layer with random weights and biases in [-1, 1].
SingleLayerPerceptron(std::size_t n_inputs, std::size_t n_outputs,
double learning_rate)
: n_inputs_(n_inputs),
n_outputs_(n_outputs),
learning_rate_(learning_rate) {
weights_.resize(n_outputs_);
std::for_each(
weights_.begin(), weights_.end(), [this](std::vector<double>& wv) {
generate_back_n(wv, n_inputs_,
std::bind(random_real_range<double>, -1, 1));
});
generate_back_n(biases_, n_outputs_,
std::bind(random_real_range<double>, -1, 1));
}
std::vector<double> predict(const std::vector<Feature>& features) const {
std::vector<double> output(n_outputs_);
for (auto i = 0ul; i < n_outputs_; ++i) {
output[i] =
activate_(std::inner_product(weights_[i].begin(), weights_[i].end(),
features.begin(), -1 * biases_[i]));
}
return output;
}
void learn(const std::vector<Feature>& features,
const std::vector<double>& true_output) {
const auto actual_output = predict(features);
for (auto i = 0ul; i < n_outputs_; ++i) {
const auto error = true_output[i] - actual_output[i];
for (auto weight = 0ul; weight < n_inputs_; ++weight) {
weights_[i][weight] = weight_update(
features[weight], error, learning_rate_, weights_[i][weight]);
}
// Bias can be treated as a weight with a constant feature value of 1.
biases_[i] = weight_update(1, error, learning_rate_, biases_[i]);
}
}
private:
Matrix<double> weights_; // n_outputs x n_inputs
std::vector<double> biases_; // 1 x n_outputs
std::size_t n_inputs_;
std::size_t n_outputs_;
ActivationFn activate_;
double learning_rate_;
};
struct StepActivation {
double operator()(const double x) const { return x > 0 ? 1 : -1; }
};
} // namespace rf
} // namespace qp
#endif /* SINGLE_LAYER_PERCEPTRON_H */
I ended up figuring it out...
My fix was indeed correct and the loss of accuracy was just a consequence of having a lucky (or unlucky) dataset.
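That makes sense: the two conventions are just different parameterizations of the same model, and flipping the sign used for the bias input only flips the sign of the bias that gets learned. A tiny self-contained sketch of the equivalence (illustrative numbers, not taken from the question):
#include <iostream>
#include <numeric>
#include <vector>
int main() {
    std::vector<double> w{0.3, -0.2}, x{1.0, 2.0};
    double b = 0.5;
    // Convention A: bias input is +1 in prediction, feature +1 in the update.
    double outA = std::inner_product(w.begin(), w.end(), x.begin(), b);
    // Convention B: bias input is -1 in prediction, feature -1 in the update,
    // with the stored bias negated relative to A.
    double bB = -b;
    double outB = std::inner_product(w.begin(), w.end(), x.begin(), -1 * bB);
    std::cout << outA << " == " << outB << "\n"; // identical outputs
}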

Issue passing template type to function and using for local variable assignment c++

I have the following code:
template<typename T> void computeFractalDimensionData(RandomWalkMethods::LatticeType latticeType, gsl_rng* randNumGen) {
int nD = 0;
// if T is of type std::pair<int,int> then set no. of dimensions to 2
if (typeid(T) == typeid(std::pair<int, int>)) {
nD = 2;
}
// else if T is of type RWM::Triple<int,int,int> then set no. of dimensions to 3
else if (typeid(T) == typeid(RandomWalkMethods::Triple<int, int, int>)) {
nD = 3;
}
else {
return;
}
// Create vector of T structs to store DLA structure results
std::vector<T> aggResults;
// Initialise particle spawning type and attractor type for DLA system
RandomWalkMethods::ParticleSpawnType spawn = RandomWalkMethods::CONSTANT_RANDOM_BOUNDINGBOX_EDGE;
RandomWalkMethods::AttractorDLAType attractor = RandomWalkMethods::POINT;
// Under-estimate for fractal dimension of the DLA
const double fractalDimUnderestimateRecip = 1 / 1.65;
for (int i = 100; i <= 1000; i += 100) {
// initialise spawnDiameter using: exp(log(n)/fDUR) = n^{1/fDUR}
int spawnDiam = 2*static_cast<int>(std::pow(i, fractalDimUnderestimateRecip));
// if system is 2-dimensional, compute DLA for 2D on given lattice
if (nD == 2) {
aggResults = RandomWalkMethods::diffusionLimitedAggregateRandomWalk2D(i, spawn, spawnDiam, latticeType, randNumGen, attractor);
}
// else if system is 3 dimensional, compute DLA for 3D on given lattice
else if (nD == 3) {
aggResults = RandomWalkMethods::diffusionLimitedAggregateRandomWalk3D(i, spawn, spawnDiam, latticeType, randNumGen, attractor);
}
// compute the minimum bounding radius which encloses all particles in the DLA structure
double boundingRadius = std::sqrt(maxMagnitudeVectorOfMultiples< double, T >(aggResults));
}
}
which I may call with a statement such as
computeFractalDimensionData< std::pair<int,int> >(lattice, randNumGen);
or
computeFractalDimensionData< RandomWalkMethods::Triple<int,int,int> >(lattice, randNumGen);
where Triple is simply a struct I defined with 3 elements (essentially the same as std::pair but extended for 3 fields). Also, the functions diffusionLimitedAggregateRandomWalk2D and diffusionLimitedAggregateRandomWalk3D return types of std::vector<std::pair<int,int>> and std::vector<Triple<int,int,int>> respectively.
The issue is that when I call with either statement above I get the following errors (occurring at the assignment statements aggResults = ...):
binary '=': no operator found which takes a right-hand operand of type 'std::vector<std::pair<int,int>,std::allocator<_Ty>>' (or there is no acceptable conversion)
and similarly for the case of Triple<int,int,int>. From what I understand, this implies that I'd need an overloaded assignment operator for these 2 structs - however I do not think that is the issue here as the following statement has been used correctly before in my program:
std::vector< std::pair<int,int> > aggResults = RandomWalkMethods::diffusionLimitedAggregateRandomWalk2D(nParticles, boundingBox, spawnDiam, latticeType, randNumGen, attractor, &diffLimAggFile);
So I know that I can assign the result of the DLA methods to variables of the correct type however the compiler complains if I try it through the use of passing a type to a template function as was shown above.
What is happening here and how would I go about solving this issue?
This comes from the fact that
aggResults = diffusionLimitedAggregateRandomWalk2D(i, spawn, spawnDiam, latticeType, randNumGen, attractor);
with aggResults being a std::vector<T>, is compiled for every T, even when T is Triple<int, int, int>, while diffusionLimitedAggregateRandomWalk2D returns a std::vector<std::pair<int, int>>.
Suggested solution: declare a function template and specialize it for the supported types T.
template<typename T>
void computeFractalDimensionData(RandomWalkMethods::LatticeType latticeType, gsl_rng* randNumGen);
template<>
void computeFractalDimensionData<std::pair<int, int>>(RandomWalkMethods::LatticeType latticeType, gsl_rng* randNumGen)
{
// ...
}
template<>
void computeFractalDimensionData<Triple<int, int, int>>(RandomWalkMethods::LatticeType latticeType, gsl_rng* randNumGen)
{
// ...
}
It makes for more readable code and fails to compile the following line with a helpful compilation error:
computeFractalDimensionData<void>(lattice, randNumGen);
YSC's solution is good. I just want you to notice that the following code in your function is a misuse of templates:
// if system is 2-dimensional, compute DLA for 2D on given lattice
if (nD == 2) {
aggResults = RandomWalkMethods::diffusionLimitedAggregateRandomWalk2D(i, spawn, spawnDiam, latticeType, randNumGen, attractor);
}
// else if system is 3 dimensional, compute DLA for 3D on given lattice
else if (nD == 3) {
aggResults = RandomWalkMethods::diffusionLimitedAggregateRandomWalk3D(i, spawn, spawnDiam, latticeType, randNumGen, attractor);
}
Templates are for static polymorphism, yet you are using dynamic dispatch (those if (nD == ...) branches) inside a template function. A proper static-polymorphism approach would resolve the dimensionality at compile time, for example by making it a template parameter; see the sketch below.
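As a rough sketch of that idea (assuming C++17 is available and reusing the names from the question), if constexpr keeps only the branch that matches T, so the mismatched assignment is never instantiated:
#include <cmath>
#include <type_traits>
#include <utility>
#include <vector>
template <typename T>
void computeFractalDimensionData(RandomWalkMethods::LatticeType latticeType, gsl_rng* randNumGen) {
    static_assert(std::is_same_v<T, std::pair<int, int>> ||
                  std::is_same_v<T, RandomWalkMethods::Triple<int, int, int>>,
                  "T must be std::pair<int,int> (2D) or Triple<int,int,int> (3D)");
    std::vector<T> aggResults;
    RandomWalkMethods::ParticleSpawnType spawn = RandomWalkMethods::CONSTANT_RANDOM_BOUNDINGBOX_EDGE;
    RandomWalkMethods::AttractorDLAType attractor = RandomWalkMethods::POINT;
    const double fractalDimUnderestimateRecip = 1 / 1.65;
    for (int i = 100; i <= 1000; i += 100) {
        int spawnDiam = 2 * static_cast<int>(std::pow(i, fractalDimUnderestimateRecip));
        if constexpr (std::is_same_v<T, std::pair<int, int>>) {
            aggResults = RandomWalkMethods::diffusionLimitedAggregateRandomWalk2D(i, spawn, spawnDiam, latticeType, randNumGen, attractor);
        } else {
            aggResults = RandomWalkMethods::diffusionLimitedAggregateRandomWalk3D(i, spawn, spawnDiam, latticeType, randNumGen, attractor);
        }
        // ... compute the bounding radius from aggResults as before ...
    }
}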

Expect a value within a given range using Google Test

I want to specify an expectation that a value is between an upper and lower bound, inclusively.
Google Test provides LT, LE, GT, GE, but no way of testing a range that I can see. You could use EXPECT_NEAR and juggle the operands, but in many cases this isn't as clear as explicitly setting upper and lower bounds.
Usage should resemble:
EXPECT_WITHIN_INCLUSIVE(1, 3, 2); // 2 is in range [1,3]
How would one add this expectation?
Google mock has richer composable matchers:
EXPECT_THAT(x, AllOf(Ge(1),Le(3)));
Maybe that would work for you.
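A minimal self-contained version of that (assuming the test links against both gtest and gmock, since EXPECT_THAT and the matchers come from Google Mock):
#include <gmock/gmock.h>
#include <gtest/gtest.h>
using ::testing::AllOf;
using ::testing::Ge;
using ::testing::Le;
TEST(RangeCheck, ValueIsWithinInclusiveBounds) {
    int x = 2;
    // Passes when 1 <= x <= 3; on failure the matcher prints both bounds.
    EXPECT_THAT(x, AllOf(Ge(1), Le(3)));
}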
Using just Google Test (not Google Mock), the simple, obvious answer is:
EXPECT_TRUE((a >= 1) && (a <= 3)); // a is between 1 and 3 inclusive
I find this more readable than some of the Mock based answers.
--- begin edit ---
The simple answer above does not provide any useful diagnostics when it fails.
You can use AssertionResult to define a custom assertion that does produce a useful error message, like this:
#include <gtest/gtest.h>
::testing::AssertionResult IsBetweenInclusive(int val, int a, int b)
{
if((val >= a) && (val <= b))
return ::testing::AssertionSuccess();
else
return ::testing::AssertionFailure()
<< val << " is outside the range " << a << " to " << b;
}
TEST(testing, TestPass)
{
auto a = 2;
EXPECT_TRUE(IsBetweenInclusive(a, 1, 3));
}
TEST(testing, TestFail)
{
auto a = 5;
EXPECT_TRUE(IsBetweenInclusive(a, 1, 3));
}
There is a nice example in the Google Mock cheat sheet:
using namespace testing;
MATCHER_P2(IsBetween, a, b,
std::string(negation ? "isn't" : "is") + " between " + PrintToString(a)
+ " and " + PrintToString(b))
{
return a <= arg && arg <= b;
}
Then to use it:
TEST(MyTest, Name) {
EXPECT_THAT(42, IsBetween(40, 46));
}
I would define these macros:
#define EXPECT_IN_RANGE(VAL, MIN, MAX) \
EXPECT_GE((VAL), (MIN)); \
EXPECT_LE((VAL), (MAX))
#define ASSERT_IN_RANGE(VAL, MIN, MAX) \
ASSERT_GE((VAL), (MIN)); \
ASSERT_LE((VAL), (MAX))
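Hypothetical usage, just to show what the expansion gives you (note that, unlike the do { ... } while (0) macro in the next answer, these expand to two separate statements, so brace them if used as the body of an if):
TEST(RangeCheck, RatioStaysNormalized) {
    double ratio = 0.5;
    EXPECT_IN_RANGE(ratio, 0.0, 1.0); // expands to EXPECT_GE + EXPECT_LE
}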
In the end I created a macro to do this that resembles other macros in the Google Test lib.
#define EXPECT_WITHIN_INCLUSIVE(lower, upper, val) \
do { \
EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val, lower); \
EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val, upper); \
} while (0)
You can also use an existing boolean function in Google Test, which doesn't need Google Mock. The linked documentation is quite specific.
Here is the example.
// Returns true iff m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }
const int a = 3;
const int b = 4;
const int c = 10;
the assertion EXPECT_PRED2(MutuallyPrime, a, b); will succeed, while
the assertion EXPECT_PRED2(MutuallyPrime, b, c); will fail with the
message
!MutuallyPrime(b, c) is false, where
b is 4
c is 10

NLOpt with windows forms

I am running into serious problems while trying to use the NLopt library (http://ab-initio.mit.edu/wiki/index.php/NLopt_Tutorial) in a Windows Forms application. I have created the following namespace, which runs perfectly in a console application.
#include "math.h"
#include "nlopt.h"
namespace test
{
typedef struct {
double a, b;
} my_constraint_data;
double myfunc(unsigned n, const double *x, double *grad, void *my_func_data)
{
if (grad) {
grad[0] = 0.0;
grad[1] = 0.5 / sqrt(x[1]);
}
return sqrt(x[1]);
}
double myconstraint(unsigned n, const double *x, double *grad, void *data)
{
my_constraint_data *d = (my_constraint_data *) data;
double a = d->a, b = d->b;
if (grad) {
grad[0] = 3 * a * (a*x[0] + b) * (a*x[0] + b);
grad[1] = -1.0;
}
return ((a*x[0] + b) * (a*x[0] + b) * (a*x[0] + b) - x[1]);
}
int comp()
{
double lb[2] = { -HUGE_VAL, 0 }; /* lower bounds */
nlopt_opt opt;
opt = nlopt_create(NLOPT_LD_MMA, 2); /* algorithm and dimensionality */
nlopt_set_lower_bounds(opt, lb);
nlopt_set_min_objective(opt, myfunc, NULL);
my_constraint_data data[2] = { {2,0}, {-1,1} };
nlopt_add_inequality_constraint(opt, myconstraint, &data[0], 1e-8);
nlopt_add_inequality_constraint(opt, myconstraint, &data[1], 1e-8);
nlopt_set_xtol_rel(opt, 1e-4);
double x[2] = { 1.234, 5.678 }; /* some initial guess */
double minf; /* the minimum objective value, upon return */
int a=nlopt_optimize(opt, x, &minf) ;
return 1;
}
}
It optimizes a simple nonlinear constrained minimization problem. The problem arises when I try to use this namespace in a Windows Forms application. I constantly get an unhandled exception in myfunc, which for some reason sees "x" as an empty pointer and therefore fails when trying to access its contents. I believe the problem is somehow caused by the fact that Windows Forms uses the CLR, but I don't know whether it is solvable or not. I am using Visual Studio 2008, and the test programs are a simple console project (which works fine) and a Windows Forms project (which causes the aforementioned errors).
My test code is based on the C tutorial from the provided link. I also tried the C++ version, which once again works fine in a console application but gives a "debug assertion failed" error in a Windows Forms application.
So I guess my question is: I have a working Windows Forms application and I would like to use NLopt. Is there a way to make this work?