First, I am new to the googletest framework, so be kind.
I have a function
void setId(const int id)
{
ID = id;
}
where ID is a global unsigned int. (Yes, globals are bad; I am just trying to figure out what I am doing.)
My unit test looks like this:
TEST_F(TempTests, SetId)
{
// Arrange
int id = -99;
// Act
setId(id);
// Assert
EXPECT_EQ(id, ID);
}
The problem is that my unit test always passes, but I need it to fail, because ID should have been a signed int, not an unsigned int. If I hadn't caught the error visually, the unit test would have passed and the mistake could have caused errors later on.
To make sure this doesn't happen in the future, it would be best if the unit test comparison failed in this case.
I have tried static-casting id and ID to signed and unsigned int in various ways.
I have also tried EXPECT_TRUE(id == ID), again with the variables static-cast to signed and unsigned int in various ways.
But in all of these cases the result is a passing test.
So how can I get gtest to compare the signed value of id with the unsigned value of ID so that the test fails, given that id will be -99 and ID will be 4294967197?
The comparison passes because the compiler converts both operands to a common type before comparing them. I recommend reading this related answer.
You might be able to create a custom googletest comparator. Even if not, you can at the very least use a helper like this:
#include <iostream>
#include <cstdint>
#include <limits>
#include <type_traits>
#include <typeinfo>
template <class T>
class SignedUnsignedIntCompare final /* final, so the non-virtual dtor is safe; drop final and make the dtor virtual if you ever need to inherit */ {
public:
const T & v;
SignedUnsignedIntCompare(const T & v) : v(v) {}
SignedUnsignedIntCompare(SignedUnsignedIntCompare && move_ctor) = default;
SignedUnsignedIntCompare(const SignedUnsignedIntCompare & copy_ctor) = default;
SignedUnsignedIntCompare & operator=(SignedUnsignedIntCompare && move_assign) = default;
SignedUnsignedIntCompare & operator=(const SignedUnsignedIntCompare & copy_assign) = default;
~SignedUnsignedIntCompare() = default;
template <class TT>
bool operator==(const TT & i) const {
// When the signedness differs, the plain comparison is not enough: additionally require
// that each value is representable in the other type, so -99 == 4294967197u comes out false.
if ( std::numeric_limits<T>::is_signed != std::numeric_limits<TT>::is_signed ) {
return ((v == i) && (T(v) <= std::numeric_limits<TT>::max()) && (TT(i) <= std::numeric_limits<T>::max()));
}
return (v == i);
}
};
typedef SignedUnsignedIntCompare<int> SignedIntCompare;
typedef SignedUnsignedIntCompare<unsigned> UnsignedIntCompare;
int main() {
int i = -99;
unsigned int u = i;
std::cout << (i == u) << " vs " << (SignedIntCompare(i) == u) << std::endl;
return 0;
}
At that point, you can then use EXPECT_TRUE or similar boolean checks, like this:
TEST(foo, bar) {
int id = -99;
setId(id);
EXPECT_TRUE(SignedUnsignedIntCompare<decltype(ID)>(ID) == id);
}
So I'm not sure how to give credit, but I ended up using a combination of inetknght's and mkk's suggestions.
TEST_F(TempTests, SetId)
{
// Arrange
int id = -99;
// Act
setId(id);
// Assert
EXPECT_TRUE(std::numeric_limits<decltype(id)>::is_signed == std::numeric_limits<decltype(ID)>::is_signed);
EXPECT_EQ(id, ID);
}
Per inetknght's suggestion, checking the signedness of the two types makes the test fail while the types do not match. And per mkk's suggestion, using decltype picks up the declared types of the variables, so the unit test will not need modifying when the type of ID is corrected; once it is corrected, the test passes.
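If C++20 is available, <utility> already provides safe heterogeneous integer comparisons, so the signedness check is not even needed; here is a sketch of that variant (my addition, not part of the original answers, using a separate test name so it does not clash with the one above):
#include <utility>  // std::cmp_equal (C++20)
TEST_F(TempTests, SetIdValueCompare)
{
    // Arrange
    int id = -99;
    // Act
    setId(id);
    // Assert: std::cmp_equal compares the mathematical values, so -99 vs 4294967197 fails
    // as desired while ID is unsigned, and passes once ID becomes a signed int.
    EXPECT_TRUE(std::cmp_equal(id, ID));
}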
Edit
Per Adrian McCarthy's suggestion I have also added -Werror=conversion to my compiler flags.
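For reference, here is a minimal sketch of what that flag catches (my file layout, assuming GCC or Clang). Note that with GCC in C++ the signed/unsigned case may additionally need -Wsign-conversion, so enabling both as errors is the safer choice.
unsigned int ID = 0;
void setId(const int id)
{
    ID = id;   // value-changing conversion: diagnosed, and rejected under -Werror=...
}
// Illustrative build line:
//   g++ -c -Werror=conversion -Werror=sign-conversion setid.cpp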
Related
The situation: occasionally I write a function that can take a number of boolean parameters, and instead of writing something like this:
void MyFunc(bool useFoo, bool useBar, bool useBaz, bool useBlah);
[...]
// hard to tell what this means (requires looking at the .h file)
// not obvious if/when I got the argument-ordering wrong!
MyFunc(true, true, false, true);
I like to be able to specify them using a bit-chord of defined bits-indices, like this:
enum {
MYARG_USE_FOO = 0,
MYARG_USE_BAR,
MYARG_USE_BAZ,
MYARG_USE_BLAH,
NUM_MYARGS
};
void MyFunc(unsigned int myArgsBitChord);
[...]
// easy to see what this means
// "argument" ordering doesn't matter
MyFunc((1<<MYARG_USE_FOO)|(1<<MYARG_USE_BAR)|(1<<MYARG_USE_BLAH));
That works fine, in that it allows me to pass around a lot of boolean arguments easily (as a single unsigned long rather than a long list of separate bools), and I can easily see what my call to MyFunc() is specifying (without having to refer back to a separate header file).
It also allows me to iterate over the defined bits if I want to, which is sometimes useful:
unsigned int allDefinedBits = 0;
for (int i=0; i<NUM_MYARGS; i++) allDefinedBits |= (1<<i);
The main downside is that it can be a bit error-prone. In particular, it's easy to mess up and do this by mistake:
// This will compile but do the wrong thing at run-time!
MyFunc(MYARG_USE_FOO | MYARG_USE_BAR | MYARG_USE_BLAH);
... or even to make this classic forehead-slapping mistake:
// This will compile but do the wrong thing at run-time!
MyFunc((1<<MYARG_USE_FOO) | (1<<MYARG_USE_BAR) || (1<<MYARG_USE_BLAH));
My question is, is there a recommended "safer" way to do this? i.e. one where I can still easily pass multiple defined booleans as a bit-chord in a single argument, and can iterate over the defined bit-values, but where "dumb mistakes" like the ones shown above will be caught by the compiler rather than causing unexpected behavior at runtime?
#include <iostream>
#include <type_traits>
#include <cstdint>
enum class my_options_t : std::uint32_t {
foo,
bar,
baz,
end
};
using my_options_value_t = std::underlying_type<my_options_t>::type;
inline constexpr auto operator|(my_options_t const & lhs, my_options_t const & rhs)
{
return (1 << static_cast<my_options_value_t>(lhs)) | (1 << static_cast<my_options_value_t>(rhs));
}
inline constexpr auto operator|(my_options_value_t const & lhs, my_options_t const & rhs)
{
return lhs | (1 << static_cast<my_options_value_t>(rhs));
}
inline constexpr auto operator&(my_options_value_t const & lhs, my_options_t const & rhs)
{
return lhs & (1 << static_cast<my_options_value_t>(rhs));
}
void MyFunc(my_options_value_t options)
{
if (options & my_options_t::bar)
std::cout << "yay!\n\n";
}
int main()
{
MyFunc(my_options_t::foo | my_options_t::bar | my_options_t::baz);
}
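As a side note (mine, not part of this answer): because a scoped enum has no implicit conversions, the "forehead-slapping" mistakes from the question stop compiling altogether. A self-contained sketch:
#include <cstdint>

enum class opt : std::uint32_t { foo, bar, baz };

// Minimal operator| for two options, mirroring the approach above.
inline constexpr std::uint32_t operator|(opt lhs, opt rhs)
{
    return (1u << static_cast<std::uint32_t>(lhs)) | (1u << static_cast<std::uint32_t>(rhs));
}

int main()
{
    auto chord = opt::foo | opt::bar;       // OK: goes through the overloaded operator|
    // auto bad = opt::foo || opt::bar;     // error: no operator|| and no conversion to bool
    // std::uint32_t worse = opt::baz;      // error: no implicit conversion to an integer
    return static_cast<int>(chord);
}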
How about a template...
#include <iostream>
template <typename T>
constexpr unsigned int chordify(const T& v) {
return (1 << v);
}
template <typename T1, typename... Ts>
constexpr unsigned int chordify(const T1& v1, const Ts&... rest) {
return (1 << v1) | chordify(rest... );
}
enum {
MYARG_USE_FOO = 0,
MYARG_USE_BAR,
MYARG_USE_BAZ,
MYARG_USE_BLAH,
NUM_MYARGS
};
int main() {
static_assert(__builtin_constant_p( // __builtin_constant_p is a GCC/Clang extension
chordify(MYARG_USE_FOO, MYARG_USE_BAZ, MYARG_USE_BLAH)
), "chordify yields a compile-time constant");
std::cout << chordify(MYARG_USE_FOO, MYARG_USE_BAZ, MYARG_USE_BLAH);
}
That outputs 13, and it's a compile-time constant.
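If C++17 is available, a fold expression collapses the recursive version into a single variadic function; a sketch (my addition, with a hypothetical name so it does not collide with the answer's chordify):
// Same bit-chord as the recursive chordify, written as a C++17 fold expression.
template <typename... Ts>
constexpr unsigned int chordify17(const Ts&... vs) {
    return ((1u << vs) | ... | 0u);
}

static_assert(chordify17(0, 2, 3) == 13u, "matches the recursive version's output");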
You can use a bit field, which allows you to efficiently construct a structure with individually-named bit flags.
For example, you could pass a struct myOptions to the function, defined as:
struct myOptions {
unsigned char foo:1;
unsigned char bar:1;
unsigned char baz:1;
};
Then, when you have to construct the values to send to the function, you'd do something like this:
myOptions opt;
opt.foo = 1;
opt.bar = 0;
opt.baz = 1;
MyFunc(opt);
Bit fields are compact and efficient, yet allow you to name the bits (or groups of bits) as if they were independent variables.
By the way, given the verbosity of the declaration, this is one place where I might break the common style of only declaring one variable per statement, and declare the struct as follows:
struct myOptions {
unsigned char foo:1, bar:1, baz:1;
};
And, in C++20 you can add initializers:
struct myOptions {
unsigned char foo:1{0}, bar:1{0}, baz:1{0};
};
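With that C++20 form, designated initializers also give a self-describing call site without naming a temporary first; a sketch (my addition, assuming a MyFunc overload that takes the struct):
struct myOptions {
    unsigned char foo : 1 {0}, bar : 1 {0}, baz : 1 {0};
};

void MyFunc(myOptions opts) { /* read opts.foo, opts.bar, opts.baz */ }

void caller()
{
    MyFunc(myOptions{.foo = 1, .baz = 1});   // bar keeps its default of 0
}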
As in the title. As an exercise, I wanted to create an int that would enforce constraints on its value and would disallow being set to values outside its specified range.
Here is how I tried to approach this:
#include <cassert>
#include <cstdint>
#include <iostream>
using namespace std;
int main();
template<typename valtype, valtype minval, valtype maxval>
class ConstrainedValue
{
valtype val;
static bool checkval (valtype val)
{
return minval <= val && val <= maxval;
}
public:
ConstrainedValue() : val{minval} // so that we're able to write ConstrainedValue i;
{
assert(checkval(val));
}
ConstrainedValue(valtype val) : val{val}
{
assert(checkval(val));
}
ConstrainedValue &operator = (valtype val)
{
assert(checkval(val));
this->val = val;
return *this;
}
operator const valtype&() // Not needed here but can be; safe since it returns a const reference
{
return val;
}
friend ostream &operator << (ostream& out, const ConstrainedValue& v) // Needed because otherwise if valtype is char the output could be bad
{
out << +v.val;
return out;
}
friend istream &operator >> (istream& in, ConstrainedValue& v) // this is horrible ugly; I'd love to know how to do it better
{
valtype hlp;
auto hlp2 = +hlp;
in >> hlp2;
assert(checkval(hlp2));
v.val = hlp2;
return in;
}
};
int main()
{
typedef ConstrainedValue<uint_least8_t, 0, 100> ConstrainedInt;
ConstrainedInt i;
cin >> i;
cout << i;
return 0;
}
The problem is that... this is not working. If this custom integer is given values that overflow its underlying type it just sets erroneous values.
For example, let's assume that we have range constraints of [0; 100] and the underlying type is uint_least8_t, as in the example above. uint_least8_t evaluates to char or unsigned char, I'm not sure which. Let's try feeding this program with different values:
10
10
Nice. Works.
101
test: test.cpp:52: std::istream& operator>>(std::istream&, ConstrainedValue<unsigned int, 0u, 100u>&): Assertion `checkval(hlp2)' failed.
Aborted
Haha! Exactly what I wanted.
But:
257
1
Yeah. Overflow, truncate, wrong value, failed to check range correctly.
How to fix this problem?
I think that you have a specification problem that, unfortunately, the implementation did not automatically solve.
As soon as you write ConstrainedValue(valtype val) : val{val}, you lose any hope of detecting overflow, because the conversion to valtype happens before your code is called. If uint_least8_t is translated to unsigned char, which seems to happen in your (and my) implementation, then (uint_least8_t) 257 is 1.
To be able to detect overflow, you must use greater integral types in your constructor and operator = methods.
IMHO, you should use a templated constructor, operator = and checkval:
template<typename valtype, valtype minval, valtype maxval>
class ConstrainedValue
{
valtype val;
// lt(a, b) actually tests a <= b without signed/unsigned surprises; both parameters are
// templated so checkval can compare in either direction without narrowing either operand.
template<typename A, typename B> static bool lt(A v, B other) {
if (v <= 0) {
if (other >= 0) return true;
else return static_cast<long long>(v) <= static_cast<long long>(other);
}
else {
if (other <= 0) return false;
else return static_cast<unsigned long long>(v)
<= static_cast<unsigned long long>(other);
}
}
template <typename T> static bool checkval (T val)
{
// minval <= val && val <= maxval, each half evaluated in a wide enough type
return lt(minval, val) && lt(val, maxval);
}
public:
ConstrainedValue() : val{minval} // so that we're able to write ConstrainedValue i;
{
assert(checkval(val));
}
// The explicit cast avoids a narrowing error when T is wider than valtype;
// checkval below still sees the untruncated value.
template<typename T> ConstrainedValue(T val) : val(static_cast<valtype>(val))
{
assert(checkval(val));
}
template<typename T> ConstrainedValue &operator = (T val)
{
assert(checkval(val));
this->val = val;
return *this;
}
operator const valtype&() // Not needed here but can be; safe since it returns a const reference
{
return val;
}
That way, the compiler automatically chooses the proper type and avoids early overflow: checkval sees the original type, and the comparisons are done in long long or unsigned long long, with care taken over signed/unsigned comparisons (no compilation warning)!
In fact, lt could be written more simply if you accept a possible (harmless) signed/unsigned mismatch warning:
template<typename A, typename B> static bool lt(A v, B other) {
if (v <= 0 && other >= 0) return true;
else if (v >= 0 && other <= 0) return false;
else return v <= other;
}
The warning could arise if one of valtype and T is signed while the other is unsigned. It is harmless because the cases where v and other have opposite signs are explicitly handled, and if both are negative, they must both be signed. So it can only happen when one is signed and the other unsigned but both are positive. In that case, Clause 5 (Expressions, §10) of the C++ standard guarantees that the biggest type will be used, with unsigned taking precedence, which is correct for positive values. And it avoids forcing a possibly useless conversion to unsigned long long.
But there is still a case that I cannot handle properly: the stream extraction operator (operator >>). Until you decode the input, you cannot be sure whether it should be parsed as a long long or as an unsigned long long (assuming they are the biggest possible integral types). The cleanest way I can imagine is to read the value as a string and decode it by hand. As there are many corner cases, I would advise you to:
first get it as a string
if the first character is a minus sign, decode it as a long long
else decode it as an unsigned long long
It will still give weird results for really big numbers, but it is the best I can do:
// (uses std::string and std::stringstream, so <string> and <sstream> are needed)
friend std::istream &operator >> (std::istream& in, ConstrainedValue& v)
{
std::string hlp;
in >> hlp;
std::stringstream str(hlp);
if (hlp[0] == '-') {
long long hlp2;
str >> hlp2;
assert(checkval(hlp2));
v.val = static_cast<valtype>(hlp2);
}
else {
unsigned long long hlp2;
str >> hlp2;
assert(checkval(hlp2));
v.val = static_cast<valtype>(hlp2);
}
return in;
}
I don't think it's possible to detect overflow on the way in to your constructor, because it is carried out in order to fulfil your constructor argument, so once it reaches your constructor body it has already overflowed.
A possible workaround would be to accept large types in your interface, then carry out the check. For example, you could take long ints in your interface, then store them internally as valtype. Since you'll be carrying out bounds checking anyway, it should be fairly safe.
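A sketch of that idea (my names, and assuming the whole range fits in intmax_t): the interface accepts the widest signed integer type, so a value like 257 arrives intact, is range-checked, and only then narrowed to the storage type.
#include <cassert>
#include <cstdint>

template <typename valtype, valtype minval, valtype maxval>
class WideCheckedValue
{
    valtype val;
public:
    WideCheckedValue(std::intmax_t wide)   // 257 arrives as 257, not as 1
    {
        assert(wide >= static_cast<std::intmax_t>(minval) &&
               wide <= static_cast<std::intmax_t>(maxval));
        val = static_cast<valtype>(wide);  // safe: already validated
    }
    operator valtype() const { return val; }
};

// Usage: WideCheckedValue<std::uint_least8_t, 0, 100> v(257);  // assert fires instead of storing 1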
Use the biggest integer type available for the comparison. Let's assume it's intmax_t. Change your code as below:
template<typename valtype, intmax_t minval, intmax_t maxval>
class ConstrainedValue
{
valtype val;
// In special case of `uintmax_t` the comparison should happen with `uintmax_t` only, in other cases it will be `intmax_t`
using Compare = typename std::conditional<std::is_same<valtype, uintmax_t>::value, uintmax_t, intmax_t>::type;
static bool checkval (valtype val)
{ // this will cover all the scenarios of smaller integer values
return Compare(minval) <= Compare(val) && Compare(val) <= Compare(maxval);
}
...
This should resolve the int size problem. I also see problems with other parts of the code, which deserve a new question.
I have composed a solution from Serge Ballesta's answer and from my research into how operator >> works. It looks like this:
#include <cassert>
#include <cstdint>
#include <iostream>
using namespace std;
int main();
template<typename valtype, valtype minval, valtype maxval>
class ConstrainedValue
{
valtype val;
// lt(a, b) tests a <= b without signed/unsigned surprises; both parameters are templated
// so checkval can compare in either direction without narrowing either operand.
template<typename A, typename B> static bool lt(A v, B other) {
if (v <= 0) {
if (other >= 0) return true;
else return static_cast<long long>(v) <= static_cast<long long>(other);
}
else {
if (other <= 0) return false;
else return static_cast<unsigned long long>(v)
<= static_cast<unsigned long long>(other);
}
}
template <typename T> static bool checkval (T val)
{
// minval <= val && val <= maxval, each half evaluated in a wide enough type
return lt(minval, val) && lt(val, maxval);
}
public:
ConstrainedValue() : val{minval}
{
assert(checkval(val));
}
// The explicit cast avoids a narrowing error when T is wider than valtype;
// checkval below still sees the untruncated value.
template <typename T>
ConstrainedValue(T val) : val(static_cast<valtype>(val))
{
assert(checkval(val));
}
template <typename T>
ConstrainedValue &operator = (T val)
{
assert(checkval(val));
this->val = val;
return *this;
}
operator const valtype&()
{
return val;
}
friend ostream &operator << (ostream& out, const ConstrainedValue& v)
{
out << +v.val;
return out;
}
friend istream &operator >> (istream& in, ConstrainedValue& v)
{
auto hlp = +v.val; // I think it's safe to assume that hlp will have at least as much precision as v.val?
in >> hlp;
assert(in.good()); // In case of input overflow this fails.
assert(checkval(hlp));
v.val = hlp;
return in;
}
};
int main()
{
typedef ConstrainedValue<uint_least8_t, 0, 100> ConstrainedInt;
ConstrainedInt i;
cin >> i;
cout << i;
return 0;
}
I think it covers both problems: passing overflowing values to the constructor or operator =, and overflowing input. In the latter case, according to http://www.cplusplus.com/reference/locale/num_get/get/ , extraction will store numeric_limits::max() or numeric_limits::lowest() in the target variable, which should cover most scenarios; and in case maxval equals numeric_limits::max() or minval equals numeric_limits::lowest(), we can check in.good(), which will necessarily be false in those situations. Of course, there is always the problem that v.val might be a char type; operator >> then actually reads into auto hlp = +v.val, which is of a larger type, and that may prevent in.good() from detecting overflow of v.val itself. However, such cases are handled by Serge's improved checkval() function.
This should hopefully work, assuming that auto hlp = +v.val will necessarily be of an at least as large type as v.val. If the standard says otherwise, or if I have overlooked some possible scenarios, please correct me.
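For what it's worth, that assumption can be checked at compile time (my addition): unary + applies the integral promotions, so for a narrow valtype such as uint_least8_t the type of +v.val is int (or unsigned int on unusual platforms), never anything with less range than valtype itself.
#include <cstdint>
#include <type_traits>
#include <utility>

static_assert(std::is_same<decltype(+std::declval<std::uint_least8_t&>()), int>::value ||
              std::is_same<decltype(+std::declval<std::uint_least8_t&>()), unsigned int>::value,
              "+x on a narrow unsigned type promotes to (unsigned) int");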
Is it possible to perform a unique string to int mapping at compile time?
Let's say I have a template like this for profiling:
template <int profilingID>
class Profile{
public:
Profile(){ /* start timer */ }
~Profile(){ /* stop timer */ }
};
which I place at the beginning of function calls like this:
void myFunction(){
Profile<0> profile_me;
/* some computations here */
}
Now I'm trying to do something like the following, which is not possible since string literals cannot be used as a template argument:
void myFunction(){
Profile<"myFunction"> profile_me; // or PROFILE("myFunction")
/* some computations here */
}
I could declare global variables to overcome this issue, but I think it would be more elegant to avoid previous declarations. A simple mapping of the form
"myFunction" → 0
"myFunction1" → 1
…
"myFunctionN" → N
would be sufficient. But so far I could not find a way to accomplish such a mapping using constexpr, template metaprogramming, or macros. Any ideas?
As #harmic has already mentioned in the comments, you should probably just pass the name to the constructor. This might also help reduce code bloat because you don't generate a new type for each function.
However, I don't want to miss the opportunity to show a dirty hack that might be useful in situations where the string cannot be passed to the constructor. If your strings have a maximum length that is known at compile-time, you can encode them into integers. In the following example, I'm only using a single integer which limits the maximum string length to 8 characters on my system. Extending the approach to multiple integers (with the splitting logic conveniently hidden by a small macro) is left as an exercise to the reader.
The code makes use of the C++14 feature of allowing arbitrary control structures in constexpr functions. In C++11, you'd have to write wrap as a slightly less straightforward recursive function (a sketch of that is shown after the output below).
#include <climits>
#include <cstdint>
#include <cstdio>
#include <type_traits>
template <typename T = std::uintmax_t>
constexpr std::enable_if_t<std::is_integral<T>::value, T>
wrap(const char *const string) noexcept
{
constexpr auto N = sizeof(T);
T n {};
std::size_t i {};
while (string[i] && i < N)
n = (n << CHAR_BIT) | string[i++];
return (n << (N - i) * CHAR_BIT);
}
template <typename T>
std::enable_if_t<std::is_integral<T>::value>
unwrap(const T n, char *const buffer) noexcept
{
constexpr auto N = sizeof(T);
constexpr auto lastbyte = static_cast<char>(~0);
for (std::size_t i = 0UL; i < N; ++i)
buffer[i] = ((n >> (N - i - 1) * CHAR_BIT) & lastbyte);
buffer[N] = '\0';
}
template <std::uintmax_t Id>
struct Profile
{
char name[sizeof(std::uintmax_t) + 1];
Profile()
{
unwrap(Id, name);
std::printf("%-8s %s\n", "ENTER", name);
}
~Profile()
{
std::printf("%-8s %s\n", "EXIT", name);
}
};
It can be used like this:
void
function()
{
const Profile<wrap("function")> profiler {};
}
int
main()
{
const Profile<wrap("main")> profiler {};
function();
}
Output:
ENTER main
ENTER function
EXIT function
EXIT main
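For completeness, here is the C++11-style recursive wrap mentioned above; a sketch of mine (hypothetical name wrap11), with the same behaviour and the same assumption as the original that the string is non-empty and that only the first sizeof(T) characters are significant:
#include <climits>
#include <cstddef>
#include <cstdint>

template <typename T = std::uintmax_t>
constexpr T wrap11(const char *s, std::size_t i = 0, T n = 0)
{
    // Single-return recursion: stop after sizeof(T) characters or at the terminator,
    // left-shifting the remaining (unused) byte positions just like wrap() does.
    return i == sizeof(T)
        ? n
        : (s[i] == '\0'
            ? static_cast<T>(n << (sizeof(T) - i) * CHAR_BIT)
            : wrap11<T>(s, i + 1, static_cast<T>((n << CHAR_BIT) | static_cast<unsigned char>(s[i]))));
}

static_assert(wrap11("main") != 0, "evaluated at compile time");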
In principle you can. However, I doubt any option is practical.
You can make your key type a constexpr value type (this excludes std::string); initializing the value type you implement is not a problem either: just give it a constexpr constructor from an array of chars. However, you also need to implement a constexpr map or hash table, and a constexpr hashing function. Implementing a constexpr map is the hard part. Still doable.
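To make the "constexpr hashing function" part concrete, here is a sketch of mine (not from this answer): a C++11 recursive FNV-1a hash over the string literal, usable directly as a non-type template argument. The collision-checked constexpr map remains the hard part.
#include <cstdint>

// 64-bit FNV-1a: offset basis 14695981039346656037, prime 1099511628211.
constexpr std::uint64_t fnv1a(const char *s,
                              std::uint64_t h = 14695981039346656037ull)
{
    return *s ? fnv1a(s + 1, (h ^ static_cast<unsigned char>(*s)) * 1099511628211ull) : h;
}

static_assert(fnv1a("") == 14695981039346656037ull, "empty string hashes to the offset basis");

// Usage with the question's template (after widening its parameter type):
//   Profile<fnv1a("myFunction")> profile_me;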
You could create a table:
struct Int_String_Entry
{
unsigned int id;
const char * text;
};
static const Int_String_Entry my_table[] =
{
{0, "My_Function"},
{1, "My_Function1"},
//...
};
const unsigned int my_table_size =
sizeof(my_table) / sizeof(my_table[0]);
Maybe what you want is a lookup table with function pointers.
typedef void (*Function_Pointer)(void);
struct Int_vs_FP_Entry
{
unsigned int func_id;
Function_Pointer p_func;
};
static const Int_vs_FP_Entry func_table[] =
{
{ 0, My_Function},
{ 1, My_Function1},
//...
};
For completeness, you can combine all three attributes into another structure and create another table.
Note: since the tables are declared static const, they are built at compile time.
Why not just use an enum, like:
enum ProfileID{myFunction = 0,myFunction1 = 1, myFunction2 = 2 };
?
Your strings are not needed at runtime, so I don't see the reason for using strings here.
It is an interesting question.
It is possible to statically-initialize a std::map as follows:
static const std::map<int, int> my_map {{1, 2}, {3, 4}, {5, 6}};
but I get that such initialization is not what you are looking for, so I took another approach after looking at your example.
A global registry holds a mapping between function name (an std::string) and run time (an std::size_t representing the number of milliseconds).
An AutoProfiler is constructed providing the name of the function, and it will record the current time. Upon destruction (which will happen as we exit the function) it will calculate the elapsed time and record it in the global registry.
When the program ends we print the contents of the map (to do so we utilize the std::atexit function).
The code looks as follows:
#include <cstdlib>
#include <iostream>
#include <map>
#include <string>
#include <chrono>
#include <cmath>
using ProfileMapping = std::map<std::string, std::size_t>;
ProfileMapping& Map() {
static ProfileMapping map;
return map;
}
void show_profiles() {
for(const auto & pair : Map()) {
std::cout << pair.first << " : " << pair.second << std::endl;
}
}
class AutoProfiler {
public:
AutoProfiler(std::string name)
: m_name(std::move(name)),
m_beg(std::chrono::high_resolution_clock::now()) { }
~AutoProfiler() {
auto end = std::chrono::high_resolution_clock::now();
auto dur = std::chrono::duration_cast<std::chrono::milliseconds>(end - m_beg);
Map().emplace(m_name, dur.count());
}
private:
std::string m_name;
std::chrono::time_point<std::chrono::high_resolution_clock> m_beg;
};
void foo() {
AutoProfiler ap("foo");
long double x {1};
for(std::size_t k = 0; k < 1000000; ++k) {
x += std::sqrt(k);
}
}
void bar() {
AutoProfiler ap("bar");
long double x {1};
for(std::size_t k = 0; k < 10000; ++k) {
x += std::sqrt(k);
}
}
void baz() {
AutoProfiler ap("baz");
long double x {1};
for(std::size_t k = 0; k < 100000000; ++k) {
x += std::sqrt(k);
}
}
int main() {
std::atexit(show_profiles);
foo();
bar();
baz();
}
I compiled it as:
$ g++ AutoProfile.cpp -std=c++14 -Wall -Wextra
and obtained:
$ ./a.out
bar : 0
baz : 738
foo : 7
You do not need -std=c++14, but you will need at least -std=c++11.
I realize this is not what you are looking for, but I liked your question and decided to pitch in my $0.02.
And notice that if you use the following definition:
using ProfileMapping = std::multimap<std::string, std::size_t>;
you can record every access to each function (instead of ditching the new results once the first entry has been written, or overwriting the old results).
You could do something similar to the following. It's a bit awkward, but may do what you want a little more directly than mapping to an integer:
#include <iostream>
template <const char *name>
class Profile{
public:
Profile() {
std::cout << "start: " << name << std::endl;
}
~Profile() {
std::cout << "stop: " << name << std::endl;
}
};
constexpr const char myFunction1Name[] = "myFunction1";
void myFunction1(){
Profile<myFunction1Name> profile_me;
/* some computations here */
}
int main()
{
myFunction1();
}
The problem I am trying to solve is that, for readability of my code, I would like to use string literals instead of numerals.
These should be converted to numerals at compile time (without additional preprocessing of the code).
In principle this should not be a problem nowadays, and actually the following seems to work:
constexpr unsigned long bogus_hash(char const *input) {
return input[0]+input[89];
}
constexpr unsigned long compute_hash(const char* a) {
return bogus_hash(a);
}
class HashedString {
public:
constexpr HashedString(const char* a): my_hash(compute_hash(a)) {};
constexpr HashedString(unsigned long a): my_hash(a) {};
constexpr unsigned long const& get() const {return my_hash;}
constexpr bool operator ==(const char* b) {
return my_hash == compute_hash(b);
};
protected:
unsigned long my_hash;
};
Almost: of course, the hash function in use is not a good hash function. (The 89 in there is due to my test code, shown below.)
int fun_new(HashedString a) {
return (a == "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaab");
}
int fun_old(std::string a) {
return (a == "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaab");
}
//#define CHOOSE fun_old
#define CHOOSE fun_new
int main() {
long res = 0;
for (long i = 0; i < 1000*1000*100; ++i) {
res += CHOOSE("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaac"); // will add 1
res += CHOOSE("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaab"); // will add 0
}
std::cout << "res = " << res << std::endl; // should return number of iterations in loop
return 0;
}
(Note that the internal use of hashes instead of the string literal is transparent to the caller of the function.)
My questions are:
Does the code actually do what I want it to? Timing the code shows that it does run much faster when using HashedString (fun_new) instead of std::string (fun_old). (I am using g++ 4.8.2, with options -std=c++0x -O0. Using -O1 seems to get rid of the loop completely?)
Is there a compile-time hash function I can use for this purpose? All other (constexpr) hash functions I tried made the code run slower than before (likely because the constexpr is evaluated not at compile time but at run time). I tried to use these: 1, 2, 3
My question regards coding style and the decomposition of complicated expressions in C++.
My program has a complicated hierarchy of classes composed of classes composed of classes, etc. Many of the program's objects hold pointers to, or indices into, other objects. There are a few namespaces. As I said, it's complicated—by which I mean that it is a pretty typical 10,000-line C++ program.
A problem of scale emerges. My code is starting to have lots of unreadable expressions like p->a(b.c(r).d()).q(s)->f(g(h.i())). (As I said, it's unreadable. I have trouble reading it, and I was the one who wrote it! You can just look at it to catch the mood.) I have tried rewriting such expressions as
{
const I ii = h.i();
const G &gg = g(ii);
const C &cc = b.c(r);
// ..
T *const qq = aa.q(s);
qq->f(gg);
}
All those locally scoped symbols arguably make the code more readable, but I admit that I do not care for the overall style. After all, some of the local symbols are const & references, while others represent copies of data. What if I accidentally omitted one of the &, thereby invoking an unnecessary copy of some large object? My compiler would hardly warn me.
Besides, the version with the local symbols is verbose.
Neither solution suits. Does there not exist a more idiomatic, less bug-prone way to decompose unreadable expressions in C++?
ILLUSTRATION
If a minimal, compilable illustration of the problem helps, then here is one.
#include <iostream>
class A {
private:
int m1;
int n1;
public:
int m() const { return m1; }
int n() const { return n1; }
A(const int m0, const int n0) : m1(m0), n1(n0) {}
};
class B {
private:
A a1;
public:
const A &a() const { return a1; }
B(const A &a0) : a1(a0) {}
};
B f(int k) {
return B(A(k, -k));
}
int main() {
const B &my_b = f(15);
{
// Here is a local scope in which to decompose an expression
// like my_b.a().m() via intermediate symbols.
const A &aa = my_b.a();
const int mm = aa.m();
const int nn = aa.n();
std::cout << "m == " << mm << ", n == " << nn << "\n";
}
return 0;
}
ADDITIONAL INFORMATION
I doubt that it is relevant to the question, but in case it is: My program defines several templates, but does not presently use any run-time polymorphism.
AN ACTUAL EXAMPLE
One of the commenters has kindly requested an actual example out of my code. Here it is:
bool Lgl::is_order_legal_for_movement(
const Mb::Mapboard &mb, const size_t ix, Chains *const p_chns1
) {
if (!mb.accepts_orders()) return false;
const Basic_mapboard &bmb = mb.basic();
if (!is_location_valid(bmb, ix, false)) return false;
const Space &x = mb.spc(ix);
if (!x.p_unit()) return true;
const size_t cx = x.i_cst_occu();
const Basic_space &bx = x.basic();
const Unit &u = x.unit();
const bool army = u.is_army();
const bool fleet = u.is_fleet();
const Order order = u.order();
const size_t is = u.source();
const Location lt = u.target_loc();
const size_t it = lt.i_spc;
const size_t ct = lt.i_cst;
// ...
{
const Space &s = mb.spc(is);
const Basic_space &bs = s.basic();
result = (
(army_s && (
bs.nbor_land(it) || count_chains_if(
Does_chain_include(ix), chns_s, false
)
)) || (fleet_s && (
// By rule, support for a fleet can name a coast,
// but need not.
ct == NO_CST
? is_nbor_sea_no_cst(bs, cs, it)
: is_nbor_sea (bs, cs, lt)
))
) && is_nbor_no_cst(army, fleet, bx, cx, it);
}
// ...
}
For your actual code example, I can see why you'd like to make it more readable. I'd probably recode it something like this:
if (army_s) {
result = bs.nbor_land(it) ||
count_chains_if(Does_chain_include(ix), chns_s, false);
} else if (fleet_s) {
// By rule, support for a fleet can name a coast,
// but need not.
if (ct == NO_CST)
result = is_nbor_sea_no_cst(bs, cs, it);
else
result = is_nbor_sea(bs, cs, lt);
} else {
// Neither an army nor a fleet: the original compound expression is false here too.
result = false;
}
result = result && is_nbor_no_cst(army, fleet, bx, cx, it);
It executes the same logic, including the short-circuit evaluations, but is a little easier for a human to interpret, I think. I have also encountered compilers that generate better code with this style than with the very complex compound expression the original code contained.
Hope that helps.