Expect a value within a given range using Google Test

I want to specify an expectation that a value is between an upper and lower bound, inclusively.
Google Test provides LT, LE, GT, and GE, but no way of testing a range that I can see. You could use EXPECT_NEAR and juggle the operands, but in many cases this isn't as clear as explicitly setting upper and lower bounds.
Usage should resemble:
EXPECT_WITHIN_INCLUSIVE(1, 3, 2); // 2 is in range [1,3]
How would one add this expectation?

Google Mock has richer, composable matchers:

EXPECT_THAT(x, AllOf(Ge(1), Le(3)));
Maybe that would work for you.
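For instance, a complete test using these matchers (a sketch, assuming the gMock headers are available alongside Google Test):

#include <gmock/gmock.h>
#include <gtest/gtest.h>

using ::testing::AllOf;
using ::testing::Ge;
using ::testing::Le;

TEST(RangeTest, ValueIsWithinInclusiveBounds) {
    int x = 2;
    // Passes because 1 <= x and x <= 3; on failure the matcher prints
    // both the value and the violated bound.
    EXPECT_THAT(x, AllOf(Ge(1), Le(3)));
}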

Using just Google Test (not Google Mock), the simple, obvious answer is:
EXPECT_TRUE((a >= 1) && (a <= 3)); // a is between 1 and 3 inclusive
I find this more readable than some of the Mock-based answers.
Edit:
The simple answer above doesn't provide any useful diagnostics on failure. You can use AssertionResult to define a custom assertion that does produce a useful error message, like this:
#include <gtest/gtest.h>

::testing::AssertionResult IsBetweenInclusive(int val, int a, int b)
{
    if ((val >= a) && (val <= b))
        return ::testing::AssertionSuccess();
    else
        return ::testing::AssertionFailure()
               << val << " is outside the range " << a << " to " << b;
}

TEST(testing, TestPass)
{
    auto a = 2;
    EXPECT_TRUE(IsBetweenInclusive(a, 1, 3));
}

TEST(testing, TestFail)
{
    auto a = 5;
    EXPECT_TRUE(IsBetweenInclusive(a, 1, 3));
}

There is a nice example in the Google Mock cheat sheet:
using namespace testing;
MATCHER_P2(IsBetween, a, b,
           std::string(negation ? "isn't" : "is") + " between " + PrintToString(a)
               + " and " + PrintToString(b))
{
    return a <= arg && arg <= b;
}
Then to use it:
TEST(MyTest, Name) {
    EXPECT_THAT(42, IsBetween(40, 46));
}

I would define these macros:
#define EXPECT_IN_RANGE(VAL, MIN, MAX) \
    EXPECT_GE((VAL), (MIN));           \
    EXPECT_LE((VAL), (MAX))

#define ASSERT_IN_RANGE(VAL, MIN, MAX) \
    ASSERT_GE((VAL), (MIN));           \
    ASSERT_LE((VAL), (MAX))

In the end I created a macro to do this that resembles other macros in the Google Test lib.
#define EXPECT_WITHIN_INCLUSIVE(lower, upper, val)                         \
    do {                                                                   \
        EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val, lower); \
        EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val, upper); \
    } while (0)
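Note that ::testing::internal::CmpHelperGE and CmpHelperLE are internal Google Test helpers, so this macro may break across library versions. A hypothetical usage mirroring the syntax from the question:

TEST(RangeTest, WithinInclusive) {
    EXPECT_WITHIN_INCLUSIVE(1, 3, 2); // passes: 2 is in [1,3]
    EXPECT_WITHIN_INCLUSIVE(1, 3, 5); // fails the upper-bound check
}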

You can use an existing Boolean function in Google Test; this doesn't need Google Mock. The linked documentation ("Using an Existing Boolean Function") is quite specific. Here is the example:
// Returns true iff m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }

const int a = 3;
const int b = 4;
const int c = 10;

The assertion EXPECT_PRED2(MutuallyPrime, a, b); will succeed, while the assertion EXPECT_PRED2(MutuallyPrime, b, c); will fail with the message

!MutuallyPrime(b, c) is false, where
b is 4
c is 10
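For completeness, here is a runnable version of that documentation snippet; the gcd-based body is my own filling of the elided "...":

#include <gtest/gtest.h>
#include <numeric> // std::gcd (C++17)

// Returns true iff m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { return std::gcd(m, n) == 1; }

TEST(PredicateDemo, MutuallyPrime) {
    const int a = 3;
    const int b = 4;
    const int c = 10;
    EXPECT_PRED2(MutuallyPrime, a, b); // succeeds: gcd(3, 4) == 1
    EXPECT_PRED2(MutuallyPrime, b, c); // fails and prints the values of b and c
}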


How to wrap several boolean flags into struct to pass them to a function with a convenient syntax

In some testing code there's a helper function like this:
auto make_condiment(bool salt, bool pepper, bool oil, bool garlic) {
    // assumes that first bool is salt, second is pepper,
    // and so on...
    //
    // Make up something according to flags
    return something;
};
which essentially builds up something based on some boolean flags.
What concerns me is that the meaning of each bool is hardcoded in the names of the parameters, which is bad because at the call site it's hard to remember which parameter means what (yes, the IDE can likely eliminate the problem entirely by showing those names during completion, but still...):
// at the call site:
auto obj = make_condiment(false, false, true, true); // what ingredients am I using and what not?
Therefore, I'd like to pass a single object describing the settings; merely aggregating the flags in an object such as std::array<bool,4> wouldn't help, since the call site would still need to know which element means what.
I would like, instead, to enable a syntax like this:
auto obj = make_smart_condiment(oil + garlic);
which would generate the same obj as the previous call to make_condiment.
This new function would be:
auto make_smart_condiment(Ingredients ingredients) {
    // retrieve the individual flags from the input
    bool salt   = ingredients.hasSalt();
    bool pepper = ingredients.hasPepper();
    bool oil    = ingredients.hasOil();
    bool garlic = ingredients.hasGarlic();
    // same body as make_condiment, or simply:
    return make_condiment(salt, pepper, oil, garlic);
}
Here's my attempt:
struct Ingredients {
public:
    enum class INGREDIENTS { Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };

    explicit Ingredients() : flags{0} {};
    explicit Ingredients(INGREDIENTS const& f) : flags{static_cast<int>(f)} {};

private:
    explicit Ingredients(int fs) : flags{fs} {}
    int flags; // values 0-15

public:
    bool hasSalt() const {
        return flags % 2;
    }
    bool hasPepper() const {
        return (flags / 2) % 2;
    }
    bool hasOil() const {
        return (flags / 4) % 2;
    }
    bool hasGarlic() const {
        return (flags / 8) % 2;
    }
    Ingredients operator+(Ingredients const& f) {
        return Ingredients(flags + f.flags);
    }
} // global instances declared directly after the struct definition:
salt{Ingredients::INGREDIENTS::Salt},
pepper{Ingredients::INGREDIENTS::Pepper},
oil{Ingredients::INGREDIENTS::Oil},
garlic{Ingredients::INGREDIENTS::Garlic};
However, I have the feeling that I am reinventing the wheel.
Is there any better, or standard, way of accomplishing the above?
Is there maybe a design pattern that I could/should use?
I think you can remove some of the boilerplate by using a std::bitset. Here is what I came up with:
#include <bitset>
#include <cstdint>
#include <initializer_list>
#include <iostream>

class Ingredients {
public:
    enum Option : uint8_t {
        Salt   = 0,
        Pepper = 1,
        Oil    = 2,
        Max    = 3
    };

    bool has(Option o) const { return value_[o]; }

    Ingredients(std::initializer_list<Option> opts) {
        for (const Option& opt : opts)
            value_.set(opt);
    }

private:
    std::bitset<Max> value_ {0};
};

int main() {
    Ingredients ingredients{Ingredients::Salt, Ingredients::Pepper};
    // prints "10"
    std::cout << ingredients.has(Ingredients::Salt)
              << ingredients.has(Ingredients::Oil) << "\n";
}
You don't get the + syntax, but it's pretty close. It's unfortunate that you have to keep an Option::Max, but not too bad. Also, I decided not to use an enum class so that the options can be accessed as Ingredients::Salt and implicitly converted to an int. You could access and cast explicitly if you wanted to use enum class.
If you want to use enums as flags, the usual way is to merge them with operator | and check them with operator &:
#include <iostream>

enum Ingredients { Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };

// If you want to use operator +
Ingredients operator + (Ingredients a, Ingredients b) {
    return Ingredients(a | b);
}

int main()
{
    using std::cout;
    cout << bool( Salt & Ingredients::Salt );   // has salt
    cout << bool( Salt & Ingredients::Pepper ); // doesn't have pepper

    auto sp = Ingredients::Salt + Ingredients::Pepper;
    cout << bool( sp & Ingredients::Salt );   // has salt
    cout << bool( sp & Ingredients::Garlic ); // doesn't have garlic
}
Note: the current code (with only operator +) would not work if you mix | and +, e.g. (Salt|Salt)+Salt.
You can also use an enum class; you just need to define the operators yourself, as sketched below.
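A minimal sketch of that enum class variant (the helper names are my own, not from the answer):

#include <cstdint>
#include <iostream>

enum class Ingredient : uint8_t { Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };

// enum class values don't convert to int implicitly, so the operators
// must be defined explicitly.
constexpr Ingredient operator|(Ingredient a, Ingredient b) {
    return Ingredient(uint8_t(a) | uint8_t(b));
}
constexpr bool has(Ingredient set, Ingredient flag) {
    return (uint8_t(set) & uint8_t(flag)) != 0;
}

int main() {
    auto mix = Ingredient::Oil | Ingredient::Garlic;
    std::cout << has(mix, Ingredient::Oil)    // 1
              << has(mix, Ingredient::Salt);  // 0
}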
I would look at a strong typing library like:
https://github.com/joboccara/NamedType
For a really good video talking about this:
https://www.youtube.com/watch?v=fWcnp7Bulc8
When I first saw this, I was a little dismissive, but because the advice came from people I respected, I gave it a chance. The video convinced me.
If you look at CPP Best Practices and dig deeply enough, you'll see the general advice to avoid boolean parameters, especially strings of them. And Jonathan Boccara gives good reasons why your code will be stronger if you don't directly use the raw types, for the very reason that you've already identified.

Unit tests with boost::multiprecision

Some of my unit tests have started failing since adapting some code to enable multi-precision. Header file:
#ifndef SCRATCH_UNITTESTBOOST_INCLUDED
#define SCRATCH_UNITTESTBOOST_INCLUDED

#include <boost/multiprecision/cpp_dec_float.hpp>

// typedef double FLOAT;
typedef boost::multiprecision::cpp_dec_float_50 FLOAT;

const FLOAT ONE(FLOAT(1));

struct Rect
{
    Rect(const FLOAT &width, const FLOAT &height) : Width(width), Height(height){};
    FLOAT getArea() const { return Width * Height; }
    FLOAT Width, Height;
};

#endif
Main test file:

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE RectTest

#include <boost/test/unit_test.hpp>
#include <iomanip>   // std::setprecision
#include <iostream>  // std::cout
#include <limits>    // std::numeric_limits
#include "SCRATCH_UnitTestBoost.h"

namespace utf = boost::unit_test;

// Failing
BOOST_AUTO_TEST_CASE(AreaTest1)
{
    Rect R(ONE / 2, ONE / 3);
    FLOAT expected_area = (ONE / 2) * (ONE / 3);
    std::cout << std::setprecision(std::numeric_limits<FLOAT>::digits10) << std::showpoint;
    std::cout << "Expected: " << expected_area << std::endl;
    std::cout << "Actual  : " << R.getArea() << std::endl;
    // BOOST_CHECK_EQUAL(expected_area, R.getArea());
    BOOST_TEST(expected_area == R.getArea());
}

// Tolerance has no effect?
BOOST_AUTO_TEST_CASE(AreaTestTol, *utf::tolerance(1e-40))
{
    Rect R(ONE / 2, ONE / 3);
    FLOAT expected_area = (ONE / 2) * (ONE / 3);
    BOOST_TEST(expected_area == R.getArea());
}

// Passing
BOOST_AUTO_TEST_CASE(AreaTest2)
{
    Rect R(ONE / 7, ONE / 2);
    FLOAT expected_area = (ONE / 7) * (ONE / 2);
    BOOST_CHECK_EQUAL(expected_area, R.getArea());
}
Note that when defining FLOAT as the double type, all the tests pass. What confuses me is that when printing the exact expected and actual values (see AreaTest1) we see the same result. But the error reported from BOOST_TEST is:
error: in "AreaTest1": check expected_area == R.getArea() has failed
[0.16666666666666666666666666666666666666666666666666666666666666666666666666666666 !=
0.16666666666666666666666666666666666666666666666666666666666666666666666672236366]
Compiling with g++ SCRATCH_UnitTestBoost.cpp -o utb.o -lboost_unit_test_framework.
Questions:
Why is the test failing?
Why does the use of tolerance in AreaTestTol not give outputs as documented here?
Related info:
Tolerances with floating point comparison
Gotchas with multiprecision types
Two issues:

Where does the difference come from?
How do we apply the epsilon?

Where The Difference Comes From
Boost Multiprecision uses template expressions to defer evaluation.
Also, you're choosing some rational fractions that cannot be exactly represented base-10 (cpp_dec_float uses decimal, so base-10).
This means that when you do

T x = ONE / 3;
T y = ONE / 7;

both fractions will actually be approximated inexactly.
Doing this:

T z = (ONE / 3) * (ONE / 7);

will actually evaluate the right-hand side as an expression template, so instead of calculating temporaries like x and y first, the right-hand side has a type of:
expression<detail::multiplies, detail::expression<?>, detail::expression<?>, [2 * ...]>
That's shortened from the actual type:
boost::multiprecision::detail::expression<
boost::multiprecision::detail::multiplies,
boost::multiprecision::detail::expression<
boost::multiprecision::detail::divide_immediates,
boost::multiprecision::number<boost::multiprecision::backends::cpp_dec_float<50u,
int, void>, (boost::multiprecision::expression_template_option)1>, int,
void, void>,
boost::multiprecision::detail::expression<
boost::multiprecision::detail::divide_immediates,
boost::multiprecision::number<boost::multiprecision::backends::cpp_dec_float<50u,
int, void>, (boost::multiprecision::expression_template_option)1>, int,
void, void>,
void, void>
Long story short, this is what you want, because it saves you work and keeps better accuracy: the expression is first normalized to 1/(3*7), i.e. 1/21.
This is where your difference comes from in the first place. Fix it by either:

turning off expression templates:

using T = boost::multiprecision::number<
    boost::multiprecision::cpp_dec_float<50>,
    boost::multiprecision::et_off>;

or rewriting the expression to be equivalent to your implementation (either line works):

T expected_area = T(ONE / 7) * T(ONE / 2);
T expected_area = (ONE / 7).eval() * (ONE / 2).eval();
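A minimal runnable sketch (my own, not from the original answer) of the et_off fix, using the question's numbers:

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

// et_off disables expression templates: every subexpression is evaluated
// eagerly, in the same order as the Rect member computation, so both sides
// of the comparison go through identical arithmetic.
using T = boost::multiprecision::number<
    boost::multiprecision::cpp_dec_float<50>,
    boost::multiprecision::et_off>;

int main() {
    const T ONE = 1;
    T w = ONE / 2, h = ONE / 3;
    T expected = (ONE / 2) * (ONE / 3); // no lazy normalization any more
    std::cout << (expected == w * h) << "\n"; // prints 1
}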
Applying The Tolerance
I find it hard to parse the Boost Unit Test docs on this, but here's empirical data:
BOOST_CHECK_EQUAL(expected_area, R.getArea());
T const eps = std::numeric_limits<T>::epsilon();
BOOST_CHECK_CLOSE(expected_area, R.getArea(), eps);
BOOST_TEST(expected_area == R.getArea(), tt::tolerance(eps));
This fails the first, and passes the last two. Indeed, in addition, the following two also fail:
BOOST_CHECK_EQUAL(expected_area, R.getArea());
BOOST_TEST(expected_area == R.getArea());
So it appears that something has to be done before the utf::tolerance decorator takes effect. Testing with native doubles tells me that only BOOST_TEST applies the tolerance implicitly. So I dived into the preprocessed expansion:
::boost::unit_test::unit_test_log.set_checkpoint(
::boost::unit_test::const_string(
"/home/sehe/Projects/stackoverflow/test.cpp",
sizeof("/home/sehe/Projects/stackoverflow/test.cpp") - 1),
static_cast<std::size_t>(42));
::boost::test_tools::tt_detail::report_assertion(
(::boost::test_tools::assertion::seed()->*a == b).evaluate(),
(::boost::unit_test::lazy_ostream::instance()
<< ::boost::unit_test::const_string("a == b", sizeof("a == b") - 1)),
::boost::unit_test::const_string(
"/home/sehe/Projects/stackoverflow/test.cpp",
sizeof("/home/sehe/Projects/stackoverflow/test.cpp") - 1),
static_cast<std::size_t>(42), ::boost::test_tools::tt_detail::CHECK,
::boost::test_tools::tt_detail::CHECK_BUILT_ASSERTION, 0);
} while (::boost::test_tools::tt_detail::dummy_cond());
Digging in a lot more, I ran into:
/*!@brief Indicates if a type can be compared using a tolerance scheme
 *
 * This is a metafunction that should evaluate to @c mpl::true_ if the type
 * @c T can be compared using a tolerance based method, typically for floating point
 * types.
 *
 * This metafunction can be specialized further to declare user types that are
 * floating point (eg. boost.multiprecision).
 */
template <typename T>
struct tolerance_based : tolerance_based_delegate<T, !is_array<T>::value && !is_abstract_class_or_function<T>::value>::type {};
There we have it! But no,
static_assert(boost::math::fpc::tolerance_based<double>::value);
static_assert(boost::math::fpc::tolerance_based<cpp_dec_float_50>::value);
Both already pass. Hmm.
Looking at the decorator I noticed that the tolerance injected into the fixture context is typed.
Experimentally I have reached the conclusion that the tolerance decorator needs to have the same static type argument as the operands in the comparison for it to take effect.
This may actually be very useful (you can have different implicit tolerances for different floating point types), but it is pretty surprising as well.
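To make this concrete, here is a sketch against the question's original test file (the test name is mine; the key point is constructing the tolerance with the operands' static type instead of the double literal 1e-40):

// utf::tolerance(1e-40) injects a double-typed tolerance, which never applies
// to cpp_dec_float_50 comparisons. A FLOAT-typed tolerance does.
BOOST_AUTO_TEST_CASE(AreaTestTolTyped,
                     *utf::tolerance(std::numeric_limits<FLOAT>::epsilon()))
{
    Rect R(ONE / 2, ONE / 3);
    FLOAT expected_area = (ONE / 2) * (ONE / 3);
    BOOST_TEST(expected_area == R.getArea()); // tolerance now takes effect
}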
TL;DR
Here's the full test set fixed and live for your enjoyment:
take into account evaluation order and the effect on accuracy
use the static type in utf::tolerance(v) to match your operands
do not use BOOST_CHECK_EQUAL for tolerance-based comparison
I'd suggest using explicit test_tools::tolerance instead of relying on "ambient" tolerance; after all, we want to be testing our code, not the test framework
Live On Coliru
template <typename T> struct Rect {
    Rect(const T &width, const T &height) : width(width), height(height){};
    T getArea() const { return width * height; }

  private:
    T width, height;
};

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE RectTest

#include <boost/multiprecision/cpp_dec_float.hpp>
using DecFloat = boost::multiprecision::cpp_dec_float_50;

#include <boost/test/unit_test.hpp>
namespace utf = boost::unit_test;
namespace tt = boost::test_tools;

namespace {
    template <typename T>
    static inline const T Eps = std::numeric_limits<T>::epsilon();

    template <typename T> struct Fixture {
        T const epsilon = Eps<T>;
        T const ONE = 1;
        using Rect = ::Rect<T>;

        void checkArea(int wdenom, int hdenom) const {
            auto w = ONE/wdenom; // could be expression templates
            auto h = ONE/hdenom;
            Rect const R(w, h);
            T expect = w*h;

            BOOST_TEST(expect == R.getArea(), "1/" << wdenom << " x " << "1/" << hdenom);

            // I'd prefer explicit tolerance
            BOOST_TEST(expect == R.getArea(), tt::tolerance(epsilon));
        }
    };
}

BOOST_AUTO_TEST_SUITE(Rectangles)

BOOST_FIXTURE_TEST_SUITE(Double, Fixture<double>, *utf::tolerance(Eps<double>))
    BOOST_AUTO_TEST_CASE(check2_3) { checkArea(2, 3); }
    BOOST_AUTO_TEST_CASE(check7_2) { checkArea(7, 2); }
    BOOST_AUTO_TEST_CASE(check57_31) { checkArea(57, 31); }
BOOST_AUTO_TEST_SUITE_END()

BOOST_FIXTURE_TEST_SUITE(MultiPrecision, Fixture<DecFloat>, *utf::tolerance(Eps<DecFloat>))
    BOOST_AUTO_TEST_CASE(check2_3) { checkArea(2, 3); }
    BOOST_AUTO_TEST_CASE(check7_2) { checkArea(7, 2); }
    BOOST_AUTO_TEST_CASE(check57_31) { checkArea(57, 31); }
BOOST_AUTO_TEST_SUITE_END()

BOOST_AUTO_TEST_SUITE_END()
Prints

Get the variable values at runtime using reflection in Dlang

Is it possible to get or set the value of a class/struct/other variable at runtime in D? If yes, how? Please provide an example.
Also, is it possible to look up a variable's value by name at runtime?
Ex:

class S { int svariable = 5; }
class B { int bvariable = 10; }

void printValue(T)(T instanceVariable, string variableName) {
    writeln("Value of ", variableName, " = ", instanceVariable.variableName);
}

Output:

Value of svariable = 5
Value of bvariable = 10
There is a library named witchcraft that allows for runtime reflection. There are examples of how to use it on that page.
I'd first recommend trying a reflection library like the one @mitch_ mentioned. However, if you want to do without an external library, you can use getMember to get and set fields as well as invoke functions:
struct S {
    int i;
    int fun(int val) { return val * 2; }
}

unittest {
    S s;
    __traits(getMember, s, "i") = 5;                 // set a field
    assert(__traits(getMember, s, "i") == 5);        // get a field
    assert(__traits(getMember, s, "fun")(12) == 24); // call a method
}

Z3 Optimizer Unsatisfiability with Real Constraints Using C++ API

I'm running into a problem when trying to use the Z3 optimizer to solve graph partitioning problems. Specifically, the code below fails to produce a satisfying model:
namespace z3 {
    expr ite(context& con, expr cond, expr then_, expr else_) {
        return to_expr(con, Z3_mk_ite(con, cond, then_, else_));
    }
}
bool smtPart(void) {
    // Graph setup
    vector<int32_t> nodes = {{ 4, 2, 1, 1 }};
    vector<tuple<node_pos_t, node_pos_t, int32_t>> edges;
    GraphType graph(nodes, edges);

    // Z3 setup
    z3::context con;
    z3::optimize opt(con);
    string n_str = "n", sub_p_str = "_p";

    // Re-usable constants
    z3::expr zero = con.int_val(0);

    // Create the sort representing the different partitions.
    const char* part_sort_names[2] = { "P0", "P1" };
    z3::func_decl_vector part_consts(con), part_preds(con);
    z3::sort part_sort =
        con.enumeration_sort("PartID",
                             2,
                             part_sort_names,
                             part_consts,
                             part_preds);

    // Create the constants that represent partition choices.
    vector<z3::expr> part_vars;
    part_vars.reserve(graph.numNodes());
    z3::expr p0_acc = zero,
             p1_acc = zero;
    typename GraphType::NodeData total_weight = typename GraphType::NodeData();
    for (const auto& node : graph.nodes()) {
        total_weight += node.data;
        ostringstream name;
        name << n_str << node.id << sub_p_str;
        z3::expr nchoice = con.constant(name.str().c_str(), part_sort);
        part_vars.push_back(nchoice);
        p0_acc = p0_acc + z3::ite(con,
                                  nchoice == part_consts[0](),
                                  con.int_val(node.data),
                                  zero);
        p1_acc = p1_acc + z3::ite(con,
                                  nchoice == part_consts[1](),
                                  con.int_val(node.data),
                                  zero);
    }

    z3::expr imbalance = con.int_const("imbalance");
    opt.add(imbalance ==
            z3::ite(con,
                    p0_acc > p1_acc,
                    p0_acc - p1_acc,
                    p1_acc - p0_acc));

    z3::expr imbalance_limit = con.real_val(total_weight, 100);
    opt.add(imbalance <= imbalance_limit);

    z3::expr edge_cut = zero;
    for (const auto& edge : graph.edges()) {
        edge_cut = edge_cut +
                   z3::ite(con,
                           (part_vars[edge.node0().pos()] ==
                            part_vars[edge.node1().pos()]),
                           zero,
                           con.int_val(edge.data));
    }
    opt.minimize(edge_cut);
    opt.minimize(imbalance);

    z3::check_result opt_result = opt.check();
    if (opt_result == z3::check_result::sat) {
        auto mod = opt.get_model();
        size_t node_id = 0;
        for (z3::expr& npv : part_vars) {
            cout << "Node " << node_id++ << ": " << mod.eval(npv) << endl;
        }
        return true;
    } else if (opt_result == z3::check_result::unsat) {
        cerr << "Constraints are unsatisfiable." << endl;
        return false;
    } else {
        cerr << "Result is unknown." << endl;
        return false;
    }
}
If I remove the minimize commands and use a solver instead of an optimize it will find a satisfying model with 0 imbalance. I can also get an optimize to find a satisfying model if I either:
Remove the constraint imbalance <= imbalance_limit or
Make the imbalance limit reducible to an integer. In this example the total weight is 8. If the imbalance limit is set to 8/1, 8/2, 8/4, or 8/8 the optimizer will find satisfying models.
I have tried to_real(imbalance) <= imbalance_limit to no avail. I also considered the possibility that Z3 is using the wrong logic (one that doesn't include theories for real numbers) but I haven't found a way to set that using the C/C++ API.
If anyone could tell me why the optimizer fails in the presence of the real valued constraint or could suggest improvements to my encoding it would be much appreciated. Thanks in advance.
Could you reproduce the result by using opt.to_string() to dump the state (just before the check())? This would create a string formatted in SMT-LIB2 with optimization commands. It is then easier to exchange benchmarks. You should see that it reports unsat with the optimization commands and sat if you comment out the optimization commands.
If you are able to reproduce the bug, then post an issue on GitHub.com/z3prover/z3.git with the repro.
If not, you can use Z3_open_log before you create the z3 context and record a rerunnable log file. It is possible (but not as easy) to dig into unsoundness bugs that way.
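A minimal sketch of both suggestions (the file names are mine; z3++.h provides stream insertion for z3::optimize, backed by Z3_optimize_to_string):

#include <z3++.h>
#include <fstream>
#include <iostream>

int main() {
    Z3_open_log("z3.log"); // must be called before the context is created

    z3::context con;
    z3::optimize opt(con);
    z3::expr x = con.int_const("x");
    opt.add(x >= 1);
    opt.minimize(x);

    // Dump the optimizer state as SMT-LIB2 (including the optimization
    // commands) just before check(), giving a self-contained repro that
    // can be attached to a bug report.
    std::ofstream repro("repro.smt2");
    repro << opt;

    std::cout << opt.check() << "\n";
}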
It turns out that this was a bug in Z3. I created an Issue on GitHub and they have since responded with a patch. I'm compiling and testing the fix now, but I expect it to work.
Edit: Yup, that patch fixed the issue for the command line tool and the C++ API.

BOOST_CHECK_EQUAL (and derivatives) Add custom message

We recently started using the Boost Test framework, and like it so far.
However, there are certain tests where it would be great if we could add custom messages to an existing helper.
For example, I can get the output in mytest and mytest2, but have found no way to get the output in mytest3:
#define BOOST_TEST_MODULE mytests
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(myunit)

BOOST_AUTO_TEST_CASE(mytest)
{
    // This gives a nice output [2+2 != 5]
    BOOST_CHECK_EQUAL(2+2, 5);
}

BOOST_AUTO_TEST_CASE(mytest2)
{
    // This gives only the custom output
    BOOST_CHECK_MESSAGE(2+2 == 5, "comparison error");
}

BOOST_AUTO_TEST_CASE(mytest3)
{
    // Ideally, it should output [2+2 != 5] comparison error
    BOOST_CHECK_EQUAL_WITH_ADDITIONAL_MESSAGE(2+2, 5, "comparison error");
}

BOOST_AUTO_TEST_SUITE_END()
The reason I want this is that I wish to have test cases like this:

BOOST_AUTO_TEST_CASE(mytest4)
{
    for (int i = 0; i < 10; ++i)
    {
        BOOST_CHECK_EQUAL_WITH_ADDITIONAL_MESSAGE(i%3, 0, i);
    }
}
In that case, there is no way to know for which i the test failed.
I have tried to "duplicate" the BOOST_CHECK_EQUAL macro as follows, in hopes that Boost would append to the passed message, since the original macro passes an empty literal:

#define BOOST_CHECK_EQUAL2( L, R ) \
    BOOST_CHECK_WITH_ARGS_IMPL( ::boost::test_tools::tt_detail::equal_impl_frwd(), "hello world", CHECK, CHECK_EQUAL, (L)(R) )

However, "hello world" is overwritten somewhere in the test implementation with the failed condition.
Is there any (easy and clean) way to solve this?
UPDATE: It appears as though the check_impl() implementation in test_tools.ipp doesn't utilize the check_descr parameter for the equality checks. Is there an elegant way to override/provide my own?
UPDATE 2020: BOOST_TEST_CONTEXT() and BOOST_TEST_INFO(), if you can use a reasonably recent Boost version, should now be the preferred method, as framework-provided operations are of course a lot cleaner.
OK, I would just like to post this for reference, in case someone else runs into this: I solved it like this:
//____________________________________________________________________________//

#define BOOST_TEST_REL_EQ_MESSAGE_EXTENSION(L, R, M, CMP, ICMP, CT)        \
{                                                                          \
    auto _1(L);                                                            \
    auto _2(R);                                                            \
    std::stringstream ss;                                                  \
    ss << "check " << BOOST_TEST_STRINGIZE(L) << " "                       \
       << BOOST_TEST_STRINGIZE(CMP) << " " << BOOST_TEST_STRINGIZE(R)      \
       << " failed [" << _1 << " " << BOOST_TEST_STRINGIZE(ICMP) << " "    \
       << _2 << "] : " << M;                                               \
    BOOST_CHECK_IMPL( (_1 CMP _2), ss.str(), CT, CHECK_MSG );              \
}                                                                          \
/**/
#define BOOST_CHECK_EQUAL_MESSAGE(L, R, M) BOOST_TEST_REL_EQ_MESSAGE_EXTENSION(L, R, M, ==, !=, CHECK )
#define BOOST_WARN_EQUAL_MESSAGE(L, R, M) BOOST_TEST_REL_EQ_MESSAGE_EXTENSION(L, R, M, ==, !=, WARN )
#define BOOST_REQUIRE_EQUAL_MESSAGE(L, R, M) BOOST_TEST_REL_EQ_MESSAGE_EXTENSION(L, R, M, ==, !=, REQUIRE )
While this may not be optimal (mostly due to the stringstream being constructed on every iteration in mytest4 above), it seems as though this provides a reasonably clean and non-intrusive solution for the few cases where the extra message is required.
UPDATE 2017-08
For newer Boost Test versions we can use BOOST_TEST_INFO() for outputting the message, which is much cleaner:
#define BOOST_CHECK_EQUAL_MESSAGE(L, R, M) { BOOST_TEST_INFO(M); BOOST_CHECK_EQUAL(L, R); }
#define BOOST_WARN_EQUAL_MESSAGE(L, R, M) { BOOST_TEST_INFO(M); BOOST_WARN_EQUAL(L, R); }
#define BOOST_REQUIRE_EQUAL_MESSAGE(L, R, M) { BOOST_TEST_INFO(M); BOOST_REQUIRE_EQUAL(L, R); }
For the need that you are describing, you should use the notion of a context, as shown below.
Otherwise, the BOOST_TEST assertion supports a string as its second argument that can be used for displaying a custom message.
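A hedged sketch of the context approach, applied to the mytest4 loop from the question (the test name is mine):

BOOST_AUTO_TEST_CASE(mytest4_with_context)
{
    for (int i = 0; i < 10; ++i)
    {
        // BOOST_TEST_CONTEXT attaches "i = <value>" to any assertion
        // failure reported inside its scope, so the failing iteration
        // is visible in the log.
        BOOST_TEST_CONTEXT("i = " << i)
        {
            BOOST_TEST(i % 3 == 0);
        }
    }
}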