I'm wondering what the difference is between using a static const and an enum hack when using template metaprogramming techniques.
EX: (Fibonacci via TMP)
template< int n > struct TMPFib {
    static const int val =
        TMPFib< n-1 >::val + TMPFib< n-2 >::val;
};

template<> struct TMPFib< 1 > {
    static const int val = 1;
};

template<> struct TMPFib< 0 > {
    static const int val = 0;
};
vs.
template< int n > struct TMPFib {
    enum {
        val = TMPFib< n-1 >::val + TMPFib< n-2 >::val
    };
};

template<> struct TMPFib< 1 > {
    enum { val = 1 };
};

template<> struct TMPFib< 0 > {
    enum { val = 0 };
};
Why use one over the other? I've read that the enum hack was used before static const was supported inside classes, but why use it now?
Enums aren't lvalues; static member values are, and if passed by reference the static member will be instantiated (it needs an actual definition with an address):
void f(const int&);
f(TMPFib<1>::val);
If you want to do pure compile-time calculations etc., this is an undesired side effect.
The main historic difference is that enums also work for compilers where in-class initialization of static member values is not supported; this should be fixed in most compilers now.
There may also be differences in compilation speed between enum and static consts.
There are some details in the boost coding guidelines and an older thread in the boost archives regarding the subject.
To some the former may seem less of a hack and more natural. Also, it has memory allocated for itself if it is used, so you can, for example, take the address of val.
The latter is better supported by some older compilers.
On the flip side to @Georg's answer, when a structure that contains a static const member is defined as a template specialization, the member needs a definition in a source file so the linker can find it and actually give it an address to be referenced. This may unnecessarily (depending on the desired effects) cause inelegant code, especially if you're trying to create a header-only library. You could solve it by converting the values to functions that return the value, which would also open the templates up to run-time information.
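For example (a sketch of the pre-C++17 rules, reusing the TMPFib specializations from the question): once val is bound to a const int&, each specialization's member needs exactly one out-of-class definition, and for the explicit specializations that definition cannot simply live in a header that is included into several translation units:

// TMPFib.cpp -- definitions required once val is ODR-used (pre-C++17)
const int TMPFib<1>::val;   // member of an explicit specialization, no initializer here
const int TMPFib<0>::val;

// The primary template's member may stay in the header instead:
template< int n > const int TMPFib<n>::val;

Since C++17, declaring the member as static constexpr (or inline) makes it implicitly inline, and these separate definitions are no longer needed.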
"enum hack" is a more constrained and close-enough to #define and that helps to initialise the enum once and it's not legal to take the address of an enum anywhere in the program and it's typically not legal to take the address of a #define, either. If you don't want to let people get a pointer or reference to one of your integral constants, an enum is a good way to enforce that constraint. To see how to implies to TMP is that during recursion, each instance will have its own copy of the enum { val = 1 } during recursion and each of those val will have proper place in it's loop. As #Kornel Kisielewicz mentioned "enum hack" also supported by older compilers those forbid the in-class specification of initial values to those static const.
Assuming I have the following:
struct A
{
    unsigned x, y;
    char b[4];
};

template <unsigned N> struct B : public A
{
    static constexpr unsigned L = N + sizeof(A::b);
    char e[N];
};
Should I assume that the fixed-size array from B will be appended to the array from A, such that I could treat the array from A as having a size that also includes the array from B?
For example, the following would output "than 4 bytes":
using T = B<60>;
T o;
snprintf(o.b, T::L, "more than 4 bytes");
puts(o.e);
Which it does. But I'm not an expert in how a more complex compiler actually decides the layout of the structures, or in what order it might arrange those types in memory, depending on the requested optimizations.
Which is why I'm asking if this might have unexpected results. And if so, under what circumstances. And what should I expect?
Leaving aside the warnings given by the compiler for "out of range access" (if any).
Also, this is not the actual use case. But rather an example to better describe my question.
The behavior is undefined by the C++ standard (and not just because of alignment). However, on mainstream compilers, IF you add __attribute__((packed)) to both structures (packed is a GCC thing, but other compilers have analogs), then it should work. Your code will depend on compiler implementation details though; you'll need a static assertion to safeguard against breakage:
#include <cstddef>   // offsetof

// dummy struct B_ used in the static assertion in B, since the value
// of offsetof(B, e) is not yet available at that point
template <unsigned N> struct B_ : public A
{
    char e[sizeof(A::b) + N];
} __attribute__((packed));

template <unsigned N> struct B : public A
{
    static_assert(offsetof(B_<N>, b) + sizeof(A::b) == offsetof(B_<N>, e),
                  "array e must immediately follow array b");
    char e[sizeof(A::b) + N];
} __attribute__((packed));
But it's all so ugly. I recommend asking a question closer to your actual code; there are likely easier ways to accomplish what you want.
I'm attempting to do the following:
Say I have a class, A, with a static member variable, static_bit_id, that is a bitset<128>. At compile time, I want to create an ID that is a one-hot bit of my choosing, using bitset. I have a function that does this for me, onehotbits(int n). For example:
bitset<128> b = onehotbits(4); // 0....01000
At compile time I want to assign to a static member of class A in such a way:
// in A.h
class A {
public:
    const static bitset<128> bits;
};

// in A.cpp
const bitset<128> A::bits = onehotbits(1);
Previously this pattern worked with a constexpr function that, instead of bitset, took uint64_t's and shifted them. Doing the same with bitsets violates constexpr rules, since operator<< for bitset<T> is not constexpr.
I can't think of a way to accomplish this using constexpr, or in a namespace safe way in general. I can initialize bitset<128> with a string, but that violates constexpr.
I'm currently getting around this problem via:
const bitset<128> A::bits = bitset<128>(1) << n;
which seems to violate DRY; the only way to get rid of this problem appears to be a macro, which wouldn't be necessary if I could just use the << operator for bitset in a constant expression.
Note I want to avoid using other libraries for this, especially not boost, that is overkill.
Note: while my question is similar to Initializing c++ std::bitset at compile time, it is not the same, since the solution there does not work for my problem (I'm not simply using a literal, but a value that would have to be created at compile time from some input).
Does this solve your problem?
#include <bitset>

constexpr auto onehotbits(int i)
{
    return 1ULL << (i - 1);   // unsigned long long, so positions beyond 31 don't overflow
}

class A {
public:
    constexpr static std::bitset<128> bits{onehotbits(4)};
};

constexpr std::bitset<128> A::bits;
Calling it via
#include <iostream>

int main()
{
    for (int i = 0; i < 128; ++i)
    {
        std::cout << A::bits[i] << " ";
    }
}
yields
0 0 0 1 0 0 0 0 ...
With regard to the question of whether the shift operator is constexpr, see here. I haven't figured that out so far... if not, the same behavior can be attained via a template class.
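In case that constructor route didn't work out, a rough sketch of the "template class" idea could look like the following (the OneHot helper is made up for illustration and, like the shift-based original, only covers the first 64 bits):

#include <bitset>

// Hypothetical helper: the one-hot value is computed as a compile-time
// constant and then handed to bitset's constexpr unsigned long long constructor.
template <unsigned I>
struct OneHot
{
    static constexpr unsigned long long value = 1ULL << (I - 1);
};

class A
{
public:
    static const std::bitset<128> bits;
};

// bit I-1 ends up set; no runtime shifting of the bitset is involved
const std::bitset<128> A::bits{OneHot<4>::value};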
It is possible to use itk::NumericTraits to get 0 and 1 of some type. Thus we can see this kind of code in the wild:
const PixelType ZERO = itk::NumericTraits<PixelType>::Zero;
const PixelType ONE = itk::NumericTraits<PixelType>::One;
This feels heavy and hard to read. As a programmer, I would prefer a more pragmatic version like:
const PixelType ZERO = 0;
const PixelType ONE = 1;
But is it entirely equivalent? I think the cast is done during compilation, so both versions should be identical in terms of speed. If that's the case, why would anyone want to use itk::NumericTraits to get 0 and 1? There must be an advantage I'm not seeing.
Traits are typically used (and useful) in the context of generic programming; they are heavily used in the STL.
Let's say your NumericTraits looks like this:
template <typename PixelT>
struct NumericTraits {
    static const int ZERO = 0;
    static const int ONE  = 1;
};
In addition to this, you could also constrain the template instantiations to a particular kind of type, using enable_if et al.
Now, suppose there comes a particular type of pixel that is special. How would you define ZERO and ONE for it? Just specialize NumericTraits:
template <>
struct NumericTraits<SpecialPixel> {
    static const int ZERO = 10;
    static const int ONE  = 20;
};
Got the idea and the usefulness? Another benefit of this is converting a value to a type and then using it for tag dispatching:
void func(int some_val, std::true_type)  { /* ... */ }
void func(int some_val, std::false_type) { /* ... */ }
And call it like:
func(42, typename std::conditional<NumericTraits<PixelType>::ONE == 1, std::true_type, std::false_type>::type());
Which overload to call is decided at compile time here, relieving you from doing if-else checks and thereby probably improving performance :)
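Putting the pieces together, a minimal self-contained sketch (SpecialPixel and the ZERO/ONE values follow the snippets above; everything else is made up for illustration):

#include <iostream>
#include <type_traits>

struct SpecialPixel {};   // stand-in for the "special" pixel type

template <typename PixelT>
struct NumericTraits {
    static const int ZERO = 0;
    static const int ONE  = 1;
};

template <>
struct NumericTraits<SpecialPixel> {
    static const int ZERO = 10;
    static const int ONE  = 20;
};

void func(int some_val, std::true_type)  { std::cout << "generic path: " << some_val << "\n"; }
void func(int some_val, std::false_type) { std::cout << "special path: " << some_val << "\n"; }

int main()
{
    using PixelType = SpecialPixel;
    // ONE == 20 here, so the false_type overload is selected at compile time
    func(42, std::conditional<NumericTraits<PixelType>::ONE == 1,
                              std::true_type, std::false_type>::type());
}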
A standard tuple in C++11 allows access by an integer template parameter like this:
tuple<int, double> test;
test.get<1>();
but what if I want access by a string template parameter:
test.get<"first">()
how can I implement it?
You can create a custom constexpr cast function. I just wanted to show that what the OP wants is (almost) possible.
#include <tuple>
#include <cstring>
constexpr size_t my_cast(const char * text)
{
    return !std::strcmp(text, "first")  ? 1 :
           !std::strcmp(text, "second") ? 2 :
           !std::strcmp(text, "third")  ? 3 :
           !std::strcmp(text, "fourth") ? 4 :
           5;
}
int main()
{
    std::tuple<int, double> test;
    std::get<my_cast("first")>(test);
    return 0;
}
This can be compiled with C++11 (C++14) in GCC 4.9.2. Doesn't compile in Visual Studio 2015.
First of all, std::tuple::get is not a member function. There is a non-member function std::get.
Given,
std::tuple<int, double> test;
You cannot get the first element by using:
std::get<"first">(test);
You can use other mnemonics:
const int First = 0;
const int Second = 1;
std::get<First>(test);
std::get<Second>(test);
if that makes the code more readable for you.
R Sahu gives a couple of good mnemonics, but I wanted to add another. You can use a C-style enum (i.e. a non-class enum):
enum TupleColumns { FIRST, SECOND };
std::get<FIRST>(test);
If you combine enums with a smart enum reflection library like so: https://github.com/krabicezpapundeklu/smart_enum, then you can create a set of enums that have automatic conversions to and from string. So you could automatically convert column names into enums and access your tuple that way.
All this requires you to commit to your column names and orders at compile time. In addition, you'll always need to use string literals or constexpr functions, so that you can get the enum value as constexpr to use it this way.
constexpr TupleColumns f(const char *);
constexpr auto e = f("first");
std::get<e>(test);
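A minimal sketch of what such a constexpr f might look like (assuming C++11; the recursive string-comparison helper is made up for illustration):

#include <tuple>

enum TupleColumns { FIRST, SECOND };

// constexpr string equality, written as a single return for C++11
constexpr bool same(const char* a, const char* b)
{
    return *a == *b && (*a == '\0' || same(a + 1, b + 1));
}

constexpr TupleColumns f(const char* name)
{
    return same(name, "first")  ? FIRST  :
           same(name, "second") ? SECOND :
           throw "unknown column";   // hitting this in a constant expression fails compilation
}

int main()
{
    std::tuple<int, double> test;
    std::get<f("first")>(test);   // equivalent to std::get<FIRST>(test)
}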
I should probably add a warning at this point: this is all a fairly deep rabbit hole, and fairly strong C++ is required. I would probably look for a different solution in the bigger picture, but I don't know your bigger picture well enough, nor the C++ level of you or your colleagues (assuming you have them).
I'm building a toy interpreter and I have implemented a token class which holds the token type and value.
The token type is usually an integer, but how should I abstract the ints?
What would be the better idea:
// #defines
#define T_NEWLINE 1
#define T_STRING 2
#define T_BLAH 3
/**
* Or...
*/
// enum
enum TokenTypes
{
    t_newline = 1,
    t_string  = 2,
    t_blah    = 3
};
Enums can be cast to ints; furthermore, they're the preferred way of enumerating lists of predefined values in C++. Unlike #defines, they can be put in namespaces, classes, etc.
Additionally, if you need the first index to start with 1, you can use:
enum TokenTypes
{
    t_newline = 1,
    t_string,
    t_blah
};
Enums work in debuggers (e.g. saying "print x" will print the "English" value). #defines don't (i.e. you're left with the numeric value and have to refer to the source to do the mapping yourself).
Therefore, use enums.
There are various solutions here.
The first, using #define, dates back to the old days of C. It's usually considered bad practice in C++ because symbols defined this way don't obey scope rules and are replaced by the preprocessor, which does not perform any kind of syntax check... leading to hard-to-understand errors.
The other solutions are about creating global constants. The net benefit is that instead of being interpreted by the preprocessor they will be interpreted by the compiler, and thus obey syntax checks and scope rules.
There are many ways to create global constants:
// (each of the following blocks shows an alternative approach)

// ints
const int T_NEWLINE = 1;
struct Tokens { static const int T_FOO = 2; };

// enums
enum { T_BAR = 3 };          // anonymous enum
enum Token { T_BLAH = 4 };   // named enum

// Strong Typing
BOOST_STRONG_TYPEDEF(int, Token);
const Token NewLine(1);
const Token Foo(2);

// Other Strong Typing
class Token
{
public:
    static const Token NewLine; // defined as Token("NewLine")
    static const Token Foo;     // defined as Token("Foo")

    bool operator<(Token rhs) const  { return mValue < rhs.mValue; }
    bool operator==(Token rhs) const { return mValue == rhs.mValue; }
    bool operator!=(Token rhs) const { return mValue != rhs.mValue; }

    friend std::string toString(Token t) { return t.mValue; } // for printing

private:
    explicit Token(const char* value);
    const char* mValue;
};
All have their strengths and weaknesses.
int lacks type safety: you can easily use one category of constants where another is expected.
enums support auto-incrementing, but you don't get pretty printing and they are still not very type-safe (though a bit better).
The strong typedef I prefer over enums; you can still get back to the int when needed.
Creating your own class is the best option: there you get pretty printing for your messages, for example, but it's also a bit more work (not much, but still).
Also, the int and enum approaches are likely to generate code as efficient as the #define approach: compilers substitute the const values with their actual values whenever possible.
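For completeness, a sketch of the out-of-class pieces that the last (class-based) variant would need; the names simply follow the comments in the snippet above:

// Token.cpp -- sketch of the out-of-class definitions
#include <iostream>

Token::Token(const char* value) : mValue(value) {}

const Token Token::NewLine("NewLine");
const Token Token::Foo("Foo");

// example use: comparison and pretty printing
void handle(Token t)
{
    if (t == Token::NewLine)
        std::cout << toString(t) << "\n";   // prints "NewLine"
}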
In cases like the one you've described I prefer using an enum, since enums are much easier to maintain, especially if the numerical representation doesn't have any specific meaning.
An enum is type-safe, easier to read, easier to debug, and well supported by IntelliSense. I would say use an enum whenever possible, and resort to #define only when you have to.
See this related discussion on const versus define in C/C++; my answer to that post also lists when you have to use the #define preprocessor:
Shall I prefer constants over defines?
I vote for enum.
#defines aren't type-safe and can be redefined if you aren't careful.
Another reason for enums: They are scoped, so if the label t_blah is present in another namespace (e.g. another class), it doesn't interfere with t_blah in your current namespace (or class), even if they have different int representations.
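A small sketch of that point (the class names are made up):

// Each t_blah lives in its own scope, so the values can differ freely;
// a #define T_BLAH would clash across the whole translation unit.
struct Lexer   { enum TokenTypes { t_blah = 3 }; };
struct Printer { enum TokenTypes { t_blah = 7 }; };

int a = Lexer::t_blah;    // 3
int b = Printer::t_blah;  // 7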
enums provide type safety, readability, and debugger support. These are very important, as already mentioned.
Another thing that an enum provides is a collection of possibilities, e.g.:
enum color
{
    red,
    green,
    blue,
    unknown
};
I don't think this is possible with #define (or consts, for that matter).
OK, many answers have been posted already, so I'll come up with something a little bit different: C++0x strongly typed enumerators :)
enum class Color /* Note the "class" */
{
    Red,
    Blue,
    Yellow
};
Characteristics, advantages and differences from the old enums
Type-safe: int color = Color::Red; will be a compile-time error. You would have to use Color color or cast Red to int.
Change the underlying type: You can change its underlying type (many compilers offer extensions to do this in C++98 too): enum class Color : unsigned short. unsigned short will be the type.
Explicit scoping (my favorite): in the example above, Red will be undefined; you must use Color::Red. Imagine the new enums as being sort of namespaces too, so they don't pollute your current namespace with what is probably going to be a common name ("red", "valid", "invalid", etc.).
Forward declaration: enum class Color; tells the compiler that Color is an enum and you can start using it (but not values, of course); sort of like class Test; and then use Test *.
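A small sketch illustrating those points (names made up, C++11 required):

#include <iostream>

enum class Color : unsigned short   // explicit underlying type
{
    Red,
    Blue,
    Yellow
};

enum class Fruit;                   // forward declaration; underlying type defaults to int

int main()
{
    Color c = Color::Red;           // enumerators must be qualified
    // int i = Color::Red;          // error: no implicit conversion to int
    int i = static_cast<int>(c);    // an explicit cast is required
    std::cout << i << "\n";         // prints 0
}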