C++11 standard conformant bitmasks using enum class - c++

Can you implement standard-conformant (as described in 17.5.2.1.3 of the n3242 draft) type-safe bitmasks using enum class? The way I read it, a type T is a bitmask if it supports the |, &, ^, ~, |=, &= and ^= operators, and furthermore you can do if(l&r) where l and r are of type T. Missing from the list are the operators != and ==, and to allow sorting one probably also wants to overload <.
Getting the operators to work is just annoying boilerplate code, but I do not see how to do if(l&r). At least the following does not compile with GCC (besides being extremely dangerous, as it allows an erroneous implicit conversion to int):
enum class Foo{
operator bool(){
return (unsigned)*this;
}
};
EDIT: I now know for certain that enum classes cannot have members. The actual question of how to do if(l&r) remains, though.

I think you can...
You'll have to add operators for bitmasky things. I didn't do it here but you could easily overload any relational operator.
// NOTE: I changed to a more descriptive and consistent name
// This needs to be a real bitmask type.
enum class file_permissions : int
{
no_perms = 0,
owner_read = 0400,
owner_write = 0200,
owner_exe = 0100,
owner_all = 0700,
group_read = 040,
group_write = 020,
group_exe = 010,
group_all = 070,
others_read = 04,
others_write = 02,
others_exe = 01,
others_all = 07,
all_all = owner_all | group_all | others_all, // 0777
set_uid_on_exe = 04000,
set_gid_on_exe = 02000,
sticky_bit = 01000,
perms_mask = all_all | set_uid_on_exe | set_gid_on_exe | sticky_bit, // 07777
perms_not_known = 0xffff,
add_perms = 0x1000,
remove_perms = 0x2000,
symlink_perms = 0x4000
};
inline constexpr file_permissions
operator&(file_permissions x, file_permissions y)
{
return static_cast<file_permissions>
(static_cast<int>(x) & static_cast<int>(y));
}
inline constexpr file_permissions
operator|(file_permissions x, file_permissions y)
{
return static_cast<file_permissions>
(static_cast<int>(x) | static_cast<int>(y));
}
inline constexpr file_permissions
operator^(file_permissions x, file_permissions y)
{
return static_cast<file_permissions>
(static_cast<int>(x) ^ static_cast<int>(y));
}
inline constexpr file_permissions
operator~(file_permissions x)
{
return static_cast<file_permissions>(~static_cast<int>(x));
}
inline file_permissions &
operator&=(file_permissions & x, file_permissions y)
{
x = x & y;
return x;
}
inline file_permissions &
operator|=(file_permissions & x, file_permissions y)
{
x = x | y;
return x;
}
inline file_permissions &
operator^=(file_permissions & x, file_permissions y)
{
x = x ^ y;
return x;
}
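For instance, a sketch of one such relational overload in the same style (as mentioned above, any relational operator can be overloaded the same way, here comparing by underlying value):
inline constexpr bool
operator<(file_permissions x, file_permissions y)
{
return static_cast<int>(x) < static_cast<int>(y);
}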

I'm not entirely sure what your acceptance criteria are, but you can just make operator & return a wrapper class with appropriate conversions and an explicit operator bool:
#include <type_traits>
template<typename T> using Underlying = typename std::underlying_type<T>::type;
template<typename T> constexpr Underlying<T>
underlying(T t) { return Underlying<T>(t); }
template<typename T> struct TruthValue {
T t;
constexpr TruthValue(T t): t(t) { }
constexpr operator T() const { return t; }
constexpr explicit operator bool() const { return underlying(t); }
};
enum class Color { RED = 0xff0000, GREEN = 0x00ff00, BLUE = 0x0000ff };
constexpr TruthValue<Color>
operator&(Color l, Color r) { return Color(underlying(l) & underlying(r)); }
All your other operators can continue to return Color, of course:
constexpr Color
operator|(Color l, Color r) { return Color(underlying(l) | underlying(r)); }
constexpr Color operator~(Color c) { return Color(~underlying(c)); }
int main() {
constexpr Color YELLOW = Color::RED | Color::GREEN;
constexpr Color WHITE = Color::RED | Color::GREEN | Color::BLUE;
static_assert(YELLOW == (WHITE & ~Color::BLUE), "color subtraction");
return (YELLOW & Color::BLUE) ? 1 : 0;
}

Scoped enumerations (those created with either enum class or enum struct) are not classes. They cannot have member functions, they just provide enclosed enumerators (not visible at namespace level).

Missing from the list are the operator != and ==
Those operators are already supported by enumeration types, integer types and std::bitset so there's no need to overload them.
and to allow sorting one probably also wants to overload <.
Why do you want to sort bitmasks? Is (a|b) greater than (a|c)? Is std::ios::in less than std::ios::app? Does it matter? The relational operators are always defined for enumeration types and integer types anyway.
To answer the main question, you would implement & as an overloaded non-member function:
Foo operator&(Foo l, Foo r)
{
typedef std::underlying_type<Foo>::type ut;
return static_cast<Foo>(static_cast<ut>(l) & static_cast<ut>(r));
}
I believe all the required operations for bitmask types could be defined for scoped enums, but not all of the requirements could be satisfied, such as:
Ci & Ci is nonzero and Ci & Cj is zero
and:
The value Y is set in the object X if the expression X & Y is nonzero.
Since scoped enums don't support implicit conversion to integer types, you cannot reliably test whether the result is nonzero or not. You would need to write if ((X&Y) != bitmask{}), and I don't think that's the intention of the committee.
(I initially thought they could be used to define bitmask types, then remembered I'd tried to implement one using scoped enums and encountered the problem with testing for zero/nonzero.)
Edit: I've just remembered that std::launch is a scoped enum type and a bitmask type ... so apparently scoped enums can be bitmask types!
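For example, a minimal sketch using std::launch as a bitmask (the two standard launch policies combined with |):
#include <future>
int main()
{
auto fut = std::async(std::launch::async | std::launch::deferred, []{ return 42; });
return fut.get() == 42 ? 0 : 1;
}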

A short example of enum-flags below.
#include "enum_flags.h"
ENUM_FLAGS(foo_t)
enum class foo_t
{
none = 0x00
,a = 0x01
,b = 0x02
};
ENUM_FLAGS(foo2_t)
enum class foo2_t
{
none = 0x00
,d = 0x01
,e = 0x02
};
int _tmain(int argc, _TCHAR* argv[])
{
if(flags(foo_t::a & foo_t::b)) {};
// if(flags(foo2_t::d & foo_t::b)) {}; // Type safety test - won't compile if uncomment
};
ENUM_FLAGS(T) is a macro, defined in enum_flags.h (less than 100 lines, free to use with no restrictions).

Related

Is there a recommended "safe" way to specify a bit-chord made out of constant flag-values?

The situation: occasionally I write a function that can take a number of boolean parameters, and instead of writing something like this:
void MyFunc(bool useFoo, bool useBar, bool useBaz, bool useBlah);
[...]
// hard to tell what this means (requires looking at the .h file)
// not obvious if/when I got the argument-ordering wrong!
MyFunc(true, true, false, true);
I like to be able to specify them using a bit-chord of defined bits-indices, like this:
enum {
MYARG_USE_FOO = 0,
MYARG_USE_BAR,
MYARG_USE_BAZ,
MYARG_USE_BLAH,
NUM_MYARGS
};
void MyFunc(unsigned int myArgsBitChord);
[...]
// easy to see what this means
// "argument" ordering doesn't matter
MyFunc((1<<MYARG_USE_FOO)|(1<<MYARG_USE_BAR)|(1<<MYARG_USE_BLAH));
That works fine, in that it allows me to pass around a lot of boolean arguments easily (as a single unsigned long, rather than a long list of separate bools), and I can easily see what my call to MyFunc() is specifying (without having to refer back to a separate header file).
It also allows me to iterate over the defined bits if I want to, which is sometimes useful:
unsigned int allDefinedBits = 0;
for (int i=0; i<NUM_MYARGS; i++) allDefinedBits |= (1<<i);
The main downside is that it can be a bit error-prone. In particular, it's easy to mess up and do this by mistake:
// This will compile but do the wrong thing at run-time!
MyFunc(MYARG_USE_FOO | MYARG_USE_BAR | MYARG_USE_BLAH);
... or even to make this classic forehead-slapping mistake:
// This will compile but do the wrong thing at run-time!
MyFunc((1<<MYARG_USE_FOO) | (1<<MYARG_USE_BAR) || (1<<MYARG_USE_BLAH));
My question is, is there a recommended "safer" way to do this? i.e. one where I can still easily pass multiple defined booleans as a bit-chord in a single argument, and can iterate over the defined bit-values, but where "dumb mistakes" like the ones shown above will be caught by the compiler rather than causing unexpected behavior at runtime?
#include <iostream>
#include <type_traits>
#include <cstdint>
enum class my_options_t : std::uint32_t {
foo,
bar,
baz,
end
};
using my_options_value_t = std::underlying_type<my_options_t>::type;
inline constexpr auto operator|(my_options_t const & lhs, my_options_t const & rhs)
{
return (1 << static_cast<my_options_value_t>(lhs)) | (1 << static_cast<my_options_value_t>(rhs));
}
inline constexpr auto operator|(my_options_value_t const & lhs, my_options_t const & rhs)
{
return lhs | (1 << static_cast<my_options_value_t>(rhs));
}
inline constexpr auto operator&(my_options_value_t const & lhs, my_options_t const & rhs)
{
return lhs & (1 << static_cast<my_options_value_t>(rhs));
}
void MyFunc(my_options_value_t options)
{
if (options & my_options_t::bar)
std::cout << "yay!\n\n";
}
int main()
{
MyFunc(my_options_t::foo | my_options_t::bar | my_options_t::baz);
}
How about a template...
#include <iostream>
template <typename T>
constexpr unsigned int chordify(const T& v) {
return (1 << v);
}
template <typename T1, typename... Ts>
constexpr unsigned int chordify(const T1& v1, const Ts&... rest) {
return (1 << v1) | chordify(rest... );
}
enum {
MYARG_USE_FOO = 0,
MYARG_USE_BAR,
MYARG_USE_BAZ,
MYARG_USE_BLAH,
NUM_MYARGS
};
int main() {
static_assert(__builtin_constant_p(
chordify(MYARG_USE_FOO, MYARG_USE_BAZ, MYARG_USE_BLAH)
));
std::cout << chordify(MYARG_USE_FOO, MYARG_USE_BAZ, MYARG_USE_BLAH);
}
That outputs 13, and it's a compile-time constant.
You can use a bit field, which allows you to efficiently construct a structure with individually-named bit flags.
For example, you could pass a struct myOptions to the function, where it is defined as:
struct myOptions {
unsigned char foo:1;
unsigned char bar:1;
unsigned char baz:1;
};
Then, when you have to construct the values to send to the function, you'd do something like this:
myOptions opt;
opt.foo = 1;
opt.bar = 0;
opt.baz = 1;
MyFunct(opt);
Bit fields are compact and efficient, yet allow you to name the bits (or groups of bits) as if they were independent variables.
By the way, given the verbosity of the declaration, this is one place where I might break the common style of only declaring one variable per statement, and declare the struct as follows:
struct myOptions {
unsigned char foo:1, bar:1, baz:1;
};
And, in C++20 you can add initializers:
struct myOptions {
unsigned char foo:1{0}, bar:1{0}, baz:1{0};
};

How can I Initialize a div_t Object?

So the order of the members returned from the div functions seems to be implementation defined.
Is quot the 1st member or is rem?
Let's say that I'm doing something like this:
generate(begin(digits), end(digits), [i = div_t{ quot, 0 }]() mutable {
i = div(i.quot, 10);
return i.rem;
});
Of course the problem here is that I don't know whether I initialized i.quot or i.rem in my lambda capture. Is initializing i with div(quot, 1) the only cross-platform way to do this?
You're right that the order of the members is unspecified. The definition is inherited from C, which explicitly states it is (emphasis mine):
7.20.6.2 The div, ldiv, and lldiv functions
3 [...] The structures shall contain (in either order) the members quot (the quotient) and rem (the remainder), each of which has the same type as the arguments numer and denom. [...]
In C, the fact that the order is unspecified doesn't matter, and an example is included specifically regarding div_t:
6.7.8 Initialization
34 EXAMPLE 10 Structure members can be initialized to nonzero values without depending on their order:
div_t answer = { .quot = 2, .rem = -1 };
Unfortunately, C++ never adopted this syntax.
I'd probably go for simple assignment in a helper function:
div_t make_div_t(int quot, int rem) {
div_t result;
result.quot = quot;
result.rem = rem;
return result;
}
For plain int values, whether you use initialisation or assignment doesn't really matter, they have the same effect.
Your division by 1 is a valid option as well.
To quote the C11 Standard Draft N1570 ยง7.22.6.2
The div, ldiv, and lldiv functions return a structure of type div_t, ldiv_t, and lldiv_t, respectively, comprising both the quotient and the remainder. The structures shall contain (in either order) the members quot (the quotient) and rem (the remainder), each of which has the same type as the arguments numer and denom.
So in this case div_t is a plain POD struct, consisting of two ints.
So you can initialize it like every plain struct, your way would be something I would have done too. It's also portable.
Otherwise I can't find any special mechanism to initialize them, neither in the C nor in the C++ standard. But for PODs (Plain Old Data types), there isn't any need for one.
EDIT:
I think the VS workaround could look like this:
#include <cstdlib>
#include <type_traits>
template<class T>
struct DTMaker {
using D = decltype(div(T{}, T{}));
static constexpr D dt = D{0,1};
static constexpr auto quot = dt.quot;
};
template <class T, typename std::enable_if<DTMaker<T>::quot == 0>::type* = nullptr>
typename DTMaker<T>::D make_div(const T &quot, const T& rem) { return {quot, rem}; }
template <class T, typename std::enable_if<DTMaker<T>::quot == 1>::type* = nullptr>
typename DTMaker<T>::D make_div(const T &quot, const T &rem) { return {rem, quot}; }
int main() {
div_t d_t = make_div(1, 2);
}
[live demo]
OLD ANSWER:
If you are using c++17 you could also try to use structured binding, constexpr function and SFINAE overloading to detect which field is declared first in the structure:
#include <cstdlib>
#include <algorithm>
#include <iterator>
constexpr bool first_quot() {
auto [x, y] = std::div_t{1, 0};
(void)y;
return x;
}
template <bool B = first_quot()>
std::enable_if_t<B, std::div_t> foo() {
int quot = 1;
int rem = 0;
return {quot, rem};
}
template <bool B = first_quot()>
std::enable_if_t<!B, std::div_t> foo() {
int quot = 1;
int rem = 0;
return {rem, quot};
}
int main() {
foo();
}
[live demo]
Or even simpler use if constexpr:
#include <cstdlib>
#include <algorithm>
#include <iterator>
constexpr bool first_quot() {
auto [x, y] = std::div_t{1, 0};
(void)y;
return x;
}
std::div_t foo() {
int quot = 1;
int rem = 0;
if constexpr(first_quot())
return {quot, rem};
else
return {rem, quot};
}
int main() {
foo();
}
[live demo]
Try something like this:)
int quot = 10;
auto l = [i = [=] { div_t tmp{}; tmp.quot = quot; return tmp; }()]() mutable
{
i = div(i.quot, 10);
return i.rem;
};
It looks like using a compound literal in C.:)
or you can simplify the task by defining the variable i outside the lambda expression and use it in the lambda by reference.
For example
int quot = 10;
div_t i = {};
i.quot = quot;
auto l = [&i]()
{
i = div(i.quot, 10);
return i.rem;
};
You can use a ternary to initialize this:
generate(rbegin(digits), rend(digits), [i = div_t{ 1, 0 }.quot ? div_t{ quot, 0 } : div_t{ 0, quot }]() mutable {
i = div(i.quot, 10);
return i.rem;
});
gcc6.3 for example will compile identical code with the ternary and without the ternary.
On the other hand clang3.9 compiles longer code with the ternary than it does without the ternary.
So whether the ternary is optimized out will vary between compilers. But in all cases it will give you implementation independent code that doesn't require a secondary function to be written.
Incidentally if you are into creating a helper function to create a div_t (or any of the other div returns) you can do it like this:
template <typename T>
enable_if_t<decltype(div(declval<T>(), declval<T>())){ 1, 0 }.quot != 0, decltype(div(declval<T>(), declval<T>()))> make_div(const T quot, const T rem) { return { quot, rem }; }
template <typename T>
enable_if_t<decltype(div(declval<T>(), declval<T>())){ 1, 0 }.quot == 0, decltype(div(declval<T>(), declval<T>()))> make_div(const T quot, const T rem) { return { rem, quot }; }
Note this does work on gcc but fails to compile on Visual Studio because of some non-conformity.
My solution uses a constexpr function that itself wraps and executes a lambda function that determines and initializes the correct div_t depending on the template parameters.
template <typename T>
constexpr auto make_div(const T quot, const T rem)
{
return [&]() {
decltype(std::div(quot, rem)) result;
result.quot = quot;
result.rem = rem;
return result;
}();
}
This works with MSVC15, gcc 6.3 and clang 3.9.1.
http://rextester.com/AOBCH32388
The lambda allows us to initialize a value step by step within a constexpr function. So we can set quot and rem correctly and independently from the order they appear in the datatype itself.
By wrapping it into a constexpr function we allow the compiler to completely optimize the call to make_div:
clang: https://godbolt.org/g/YdZGkX
gcc: https://godbolt.org/g/sA61LK

Combining enum values in C++ [duplicate]

Treating enums as flags works nicely in C# via the [Flags] attribute, but what's the best way to do this in C++?
For example, I'd like to write:
enum AnimalFlags
{
HasClaws = 1,
CanFly =2,
EatsFish = 4,
Endangered = 8
};
seahawk.flags = CanFly | EatsFish | Endangered;
However, I get compiler errors regarding int/enum conversions. Is there a nicer way to express this than just blunt casting? Preferably, I don't want to rely on constructs from 3rd party libraries such as boost or Qt.
EDIT: As indicated in the answers, I can avoid the compiler error by declaring seahawk.flags as int. However, I'd like to have some mechanism to enforce type safety, so someone can't write seahawk.flags = HasMaximizeButton.
The "correct" way is to define bit operators for the enum, as:
enum AnimalFlags
{
HasClaws = 1,
CanFly = 2,
EatsFish = 4,
Endangered = 8
};
inline AnimalFlags operator|(AnimalFlags a, AnimalFlags b)
{
return static_cast<AnimalFlags>(static_cast<int>(a) | static_cast<int>(b));
}
And so on for the rest of the bit operators. Modify as needed if the enum range exceeds the int range.
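For instance, a sketch of two more of those operators in the same style:
inline AnimalFlags operator&(AnimalFlags a, AnimalFlags b)
{
return static_cast<AnimalFlags>(static_cast<int>(a) & static_cast<int>(b));
}
inline AnimalFlags& operator|=(AnimalFlags& a, AnimalFlags b)
{
return a = a | b;
}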
Note (also a bit off topic): another way to make unique flags is to use a bit shift. I, myself, find this easier to read.
enum Flags
{
A = 1 << 0, // binary 0001
B = 1 << 1, // binary 0010
C = 1 << 2, // binary 0100
D = 1 << 3 // binary 1000
};
It can hold values up to an int, so that is, most of the time, 32 flags, which is clearly reflected in the shift amount.
Note if you are working in Windows environment, there is a DEFINE_ENUM_FLAG_OPERATORS macro defined in winnt.h that does the job for you. So in this case, you can do this:
enum AnimalFlags
{
HasClaws = 1,
CanFly =2,
EatsFish = 4,
Endangered = 8
};
DEFINE_ENUM_FLAG_OPERATORS(AnimalFlags)
seahawk.flags = CanFly | EatsFish | Endangered;
For lazy people like me, here is a templated solution to copy & paste:
template<class T> inline T operator~ (T a) { return (T)~(int)a; }
template<class T> inline T operator| (T a, T b) { return (T)((int)a | (int)b); }
template<class T> inline T operator& (T a, T b) { return (T)((int)a & (int)b); }
template<class T> inline T operator^ (T a, T b) { return (T)((int)a ^ (int)b); }
template<class T> inline T& operator|= (T& a, T b) { return (T&)((int&)a |= (int)b); }
template<class T> inline T& operator&= (T& a, T b) { return (T&)((int&)a &= (int)b); }
template<class T> inline T& operator^= (T& a, T b) { return (T&)((int&)a ^= (int)b); }
What type is the seahawk.flags variable?
In standard C++, enumerations are not type-safe. They are effectively integers.
AnimalFlags should NOT be the type of your variable. Your variable should be int and the error will go away.
Putting hexadecimal values like some other people suggested is not needed. It makes no difference.
The enum values ARE of type int by default. So you can surely bitwise OR combine them and put them together and store the result in an int.
The enum type is a restricted subset of int whose value is one of its enumerated values. Hence, when you make some new value outside of that range, you can't assign it without casting to a variable of your enum type.
You can also change the enum value types if you'd like, but there is no point for this question.
EDIT: The poster said they were concerned with type safety and they don't want a value that should not exist inside the int type.
But it would be type unsafe to put a value outside of AnimalFlags's range inside a variable of type AnimalFlags.
There is a safe way to check for out of range values though inside the int type...
int iFlags = HasClaws | CanFly;
//InvalidAnimalFlagMaxValue-1 gives you a value of all the bits
// smaller than itself set to 1
//This check makes sure that no other bits are set.
assert((iFlags & ~(InvalidAnimalFlagMaxValue-1)) == 0);
enum AnimalFlags {
HasClaws = 1,
CanFly =2,
EatsFish = 4,
Endangered = 8,
// put new enum values above here
InvalidAnimalFlagMaxValue = 16
};
The above doesn't stop you from putting an invalid flag from a different enum that has the value 1,2,4, or 8 though.
If you want absolute type safety then you could simply create a std::set and store each flag inside there. It is not space efficient, but it is type safe and gives you the same ability as a bitflag int does.
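A minimal sketch of that std::set alternative (C++11 for the brace initialization), reusing the AnimalFlags enumerators above:
#include <set>
void example()
{
std::set<AnimalFlags> seahawkFlags = { CanFly, EatsFish, Endangered };
if (seahawkFlags.count(CanFly))
{
// the "flag" is present; only AnimalFlags values can be inserted
}
}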
C++0x note: Strongly typed enums
In C++0x you can finally have type safe enum values....
enum class AnimalFlags {
CanFly = 2,
HasClaws = 4
};
if(CanFly == 2) { }//Compiling error
I find the currently accepted answer by eidolon too dangerous. The compiler's optimizer might make assumptions about possible values in the enum and you might get garbage back with invalid values. And usually nobody wants to define all possible permutations in flags enums.
As Brian R. Bondy states below, if you're using C++11 (which everyone should, it's that good) you can now do this more easily with enum class:
enum class ObjectType : uint32_t
{
ANIMAL = (1 << 0),
VEGETABLE = (1 << 1),
MINERAL = (1 << 2)
};
constexpr enum ObjectType operator |( const enum ObjectType selfValue, const enum ObjectType inValue )
{
return (enum ObjectType)(uint32_t(selfValue) | uint32_t(inValue));
}
// ... add more operators here.
This ensures a stable size and value range by specifying a type for the enum, inhibits automatic downcasting of enums to ints etc. by using enum class, and uses constexpr to ensure the code for the operators gets inlined and thus just as fast as regular numbers.
For people stuck with pre-11 C++ dialects
If I was stuck with a compiler that doesn't support C++11, I'd go with wrapping an int-type in a class that then permits only use of bitwise operators and the types from that enum to set its values:
template<class ENUM,class UNDERLYING=typename std::underlying_type<ENUM>::type>
class SafeEnum
{
public:
SafeEnum() : mFlags(0) {}
SafeEnum( ENUM singleFlag ) : mFlags(singleFlag) {}
SafeEnum( const SafeEnum& original ) : mFlags(original.mFlags) {}
SafeEnum& operator |=( ENUM addValue ) { mFlags |= addValue; return *this; }
SafeEnum operator |( ENUM addValue ) { SafeEnum result(*this); result |= addValue; return result; }
SafeEnum& operator &=( ENUM maskValue ) { mFlags &= maskValue; return *this; }
SafeEnum operator &( ENUM maskValue ) { SafeEnum result(*this); result &= maskValue; return result; }
SafeEnum operator ~() { SafeEnum result(*this); result.mFlags = ~result.mFlags; return result; }
explicit operator bool() { return mFlags != 0; }
protected:
UNDERLYING mFlags;
};
You can define this pretty much like a regular enum + typedef:
enum TFlags_
{
EFlagsNone = 0,
EFlagOne = (1 << 0),
EFlagTwo = (1 << 1),
EFlagThree = (1 << 2),
EFlagFour = (1 << 3)
};
typedef SafeEnum<enum TFlags_> TFlags;
And usage is similar as well:
TFlags myFlags;
myFlags |= EFlagTwo;
myFlags |= EFlagThree;
if( myFlags & EFlagTwo )
std::cout << "flag 2 is set" << std::endl;
if( (myFlags & EFlagFour) == EFlagsNone )
std::cout << "flag 4 is not set" << std::endl;
And you can also override the underlying type for binary-stable enums (like C++11's enum foo : type) using the second template parameter, i.e. typedef SafeEnum<enum TFlags_,uint8_t> TFlags;.
I marked the operator bool override with C++11's explicit keyword to prevent it from resulting in int conversions, as those could cause sets of flags to end up collapsed into 0 or 1 when writing them out. If you can't use C++11, leave that overload out and rewrite the first conditional in the example usage as (myFlags & EFlagTwo) == EFlagTwo.
The easiest way to do this is as shown here, using the standard library class bitset.
To emulate the C# feature in a type-safe way, you'd have to write a template wrapper around the bitset, replacing the int arguments with an enum given as a type parameter to the template. Something like:
template <class T, int N>
class FlagSet
{
bitset<N> bits;
public:
FlagSet(T enumVal)
{
bits.set(enumVal);
}
// etc.
};
enum MyFlags
{
FLAG_ONE,
FLAG_TWO
};
FlagSet<MyFlags, 2> myFlag;
In my opinion none of the answers so far are ideal. To be ideal, I would expect the solution to:
1. Support the ==, !=, =, &, &=, |, |= and ~ operators in the conventional sense (i.e. a & b)
2. Be type safe, i.e. not permit non-enumerated values such as literals or integer types to be assigned (except for bitwise combinations of enumerated values), nor allow an enum variable to be assigned to an integer type
3. Permit expressions such as if (a & b)...
4. Not require evil macros, implementation specific features or other hacks
Most of the solutions thus far fall over on point 2 or 3. WebDancer's is the closest in my opinion, but it fails at point 3 and needs to be repeated for every enum.
My proposed solution is a generalized version of WebDancer's that also addresses point 3:
#include <cstdint>
#include <type_traits>
template<typename T, typename = typename std::enable_if<std::is_enum<T>::value, T>::type>
class auto_bool
{
T val_;
public:
constexpr auto_bool(T val) : val_(val) {}
constexpr operator T() const { return val_; }
constexpr explicit operator bool() const
{
return static_cast<std::underlying_type_t<T>>(val_) != 0;
}
};
template <typename T, typename = typename std::enable_if<std::is_enum<T>::value, T>::type>
constexpr auto_bool<T> operator&(T lhs, T rhs)
{
return static_cast<T>(
static_cast<typename std::underlying_type<T>::type>(lhs) &
static_cast<typename std::underlying_type<T>::type>(rhs));
}
template <typename T, typename = typename std::enable_if<std::is_enum<T>::value, T>::type>
constexpr T operator|(T lhs, T rhs)
{
return static_cast<T>(
static_cast<typename std::underlying_type<T>::type>(lhs) |
static_cast<typename std::underlying_type<T>::type>(rhs));
}
enum class AnimalFlags : uint8_t
{
HasClaws = 1,
CanFly = 2,
EatsFish = 4,
Endangered = 8
};
enum class PlantFlags : uint8_t
{
HasLeaves = 1,
HasFlowers = 2,
HasFruit = 4,
HasThorns = 8
};
int main()
{
AnimalFlags seahawk = AnimalFlags::CanFly; // Compiles, as expected
AnimalFlags lion = AnimalFlags::HasClaws; // Compiles, as expected
PlantFlags rose = PlantFlags::HasFlowers; // Compiles, as expected
// rose = 1; // Won't compile, as expected
if (seahawk != lion) {} // Compiles, as expected
// if (seahawk == rose) {} // Won't compile, as expected
// seahawk = PlantFlags::HasThorns; // Won't compile, as expected
seahawk = seahawk | AnimalFlags::EatsFish; // Compiles, as expected
lion = AnimalFlags::HasClaws | // Compiles, as expected
AnimalFlags::Endangered;
// int eagle = AnimalFlags::CanFly | // Won't compile, as expected
// AnimalFlags::HasClaws;
// int has_claws = seahawk & AnimalFlags::CanFly; // Won't compile, as expected
if (seahawk & AnimalFlags::CanFly) {} // Compiles, as expected
seahawk = seahawk & AnimalFlags::CanFly; // Compiles, as expected
return 0;
}
This creates overloads of the necessary operators but uses SFINAE to limit them to enumerated types. Note that in the interests of brevity I haven't defined all of the operators, but the only one that is any different is the &. The operators are currently global (i.e. they apply to all enumerated types), but this could be reduced either by placing the overloads in a namespace (what I do), or by adding additional SFINAE conditions (perhaps using particular underlying types, or specially created type aliases). underlying_type_t is a C++14 feature, but it seems to be well supported and is easy to emulate for C++11 with a simple template<typename T> using underlying_type_t = typename std::underlying_type<T>::type;
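A minimal sketch of the namespace option mentioned above (the namespace name flag_ops is illustrative, not part of the answer's code):
namespace flag_ops
{
template <typename T, typename = typename std::enable_if<std::is_enum<T>::value, T>::type>
constexpr T operator|(T lhs, T rhs)
{
return static_cast<T>(
static_cast<typename std::underlying_type<T>::type>(lhs) |
static_cast<typename std::underlying_type<T>::type>(rhs));
}
// ... the remaining operators from above go here as well ...
}
// The operators only apply where they are pulled in explicitly:
using namespace flag_ops;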
Edit: I incorporated the change suggested by Vladimir Afinello. Tested with GCC 10, CLANG 13 and Visual Studio 2022.
Only syntactic sugar. No additional metadata.
namespace UserRole // groups
{
constexpr uint8_t dea = 1;
constexpr uint8_t red = 2;
constexpr uint8_t stu = 4;
constexpr uint8_t kie = 8;
constexpr uint8_t adm = 16;
constexpr uint8_t mas = 32;
}
Flag operators on the integral type just work.
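A small usage sketch (UserRole as defined above; uint8_t comes from <cstdint>):
#include <cstdint>
void example()
{
uint8_t roles = UserRole::stu | UserRole::adm;
if (roles & UserRole::adm)
{
// the administrator bit is set
}
}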
The C++ standard explicitly talks about this, see section "17.5.2.1.3 Bitmask types":
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3485.pdf
Given this "template" you get:
enum AnimalFlags : unsigned int
{
HasClaws = 1,
CanFly = 2,
EatsFish = 4,
Endangered = 8
};
constexpr AnimalFlags operator|(AnimalFlags X, AnimalFlags Y) {
return static_cast<AnimalFlags>(
static_cast<unsigned int>(X) | static_cast<unsigned int>(Y));
}
AnimalFlags& operator|=(AnimalFlags& X, AnimalFlags Y) {
X = X | Y; return X;
}
And similar for the other operators.
Also note the "constexpr", it is needed if you want the compiler to be able to execute the operators compile time.
If you are using C++/CLI and want to able assign to enum members of ref classes you need to use tracking references instead:
AnimalFlags% operator|=(AnimalFlags% X, AnimalFlags Y) {
X = X | Y; return X;
}
NOTE: This sample is not complete, see section "17.5.2.1.3 Bitmask types" for a complete set of operators.
I use the following macro:
#define ENUM_FLAG_OPERATORS(T) \
inline T operator~ (T a) { return static_cast<T>( ~static_cast<std::underlying_type<T>::type>(a) ); } \
inline T operator| (T a, T b) { return static_cast<T>( static_cast<std::underlying_type<T>::type>(a) | static_cast<std::underlying_type<T>::type>(b) ); } \
inline T operator& (T a, T b) { return static_cast<T>( static_cast<std::underlying_type<T>::type>(a) & static_cast<std::underlying_type<T>::type>(b) ); } \
inline T operator^ (T a, T b) { return static_cast<T>( static_cast<std::underlying_type<T>::type>(a) ^ static_cast<std::underlying_type<T>::type>(b) ); } \
inline T& operator|= (T& a, T b) { return reinterpret_cast<T&>( reinterpret_cast<std::underlying_type<T>::type&>(a) |= static_cast<std::underlying_type<T>::type>(b) ); } \
inline T& operator&= (T& a, T b) { return reinterpret_cast<T&>( reinterpret_cast<std::underlying_type<T>::type&>(a) &= static_cast<std::underlying_type<T>::type>(b) ); } \
inline T& operator^= (T& a, T b) { return reinterpret_cast<T&>( reinterpret_cast<std::underlying_type<T>::type&>(a) ^= static_cast<std::underlying_type<T>::type>(b) ); }
It is similar to the ones mentioned above but has several improvements:
It is type safe (it does not assume that the underlying type is an int)
It does not require you to manually specify the underlying type (as opposed to LunarEclipse's answer)
It does need <type_traits> to be included:
#include <type_traits>
I found myself asking the same question and came up with a generic C++11 based solution, similar to soru's:
template <typename TENUM>
class FlagSet {
private:
using TUNDER = typename std::underlying_type<TENUM>::type;
std::bitset<std::numeric_limits<TUNDER>::max()> m_flags;
public:
FlagSet() = default;
template <typename... ARGS>
FlagSet(TENUM f, ARGS... args) : FlagSet(args...)
{
set(f);
}
FlagSet& set(TENUM f)
{
m_flags.set(static_cast<TUNDER>(f));
return *this;
}
bool test(TENUM f)
{
return m_flags.test(static_cast<TUNDER>(f));
}
FlagSet& operator|=(TENUM f)
{
return set(f);
}
};
The interface can be improved to taste. Then it can be used like so:
FlagSet<Flags> flags{Flags::FLAG_A, Flags::FLAG_C};
flags |= Flags::FLAG_D;
If your compiler doesn't support strongly typed enums yet, you can have a look at the following article from The C++ Source:
From the abstract:
This article presents a solution to the problem of constraining bit operations to
allow only safe and legitimate ones, and turn all invalid bit manipulations into
compile-time errors. Best of all, the syntax of bit operations remains unchanged,
and the code working with bits does not need to be modified, except possibly to
fix errors that had as yet remained undetected.
Here's an option for bitmasks if you don't actually have a use for the individual enum values (e.g. you don't need to switch on them)... and if you aren't worried about maintaining binary compatibility, i.e. you don't care where your bits live... which you probably are. Also, you'd better not be too concerned with scoping and access control. Hmm, enums have some nice properties for bit-fields... wonder if anyone has ever tried that :)
struct AnimalProperties
{
bool HasClaws : 1;
bool CanFly : 1;
bool EatsFish : 1;
bool Endangered : 1;
};
union AnimalDescription
{
AnimalProperties Properties;
int Flags;
};
void TestUnionFlags()
{
AnimalDescription propertiesA;
propertiesA.Properties.CanFly = true;
AnimalDescription propertiesB = propertiesA;
propertiesB.Properties.EatsFish = true;
if( propertiesA.Flags == propertiesB.Flags )
{
cout << "Life is terrible :(";
}
else
{
cout << "Life is great!";
}
AnimalDescription propertiesC = propertiesA;
if( propertiesA.Flags == propertiesC.Flags )
{
cout << "Life is great!";
}
else
{
cout << "Life is terrible :(";
}
}
We can see that life is great: we have our discrete values, and we have a nice int to & and | to our hearts' content, which still has context of what its bits mean. Everything is consistent and predictable... for me... as long as I keep using Microsoft's VC++ compiler w/ Update 3 on Win10 x64 and don't touch my compiler flags :)
Even though everything is great... we have some context as to the meaning of the flags now, since it's in a union w/ the bit field. But in the terrible real world, where your program may be responsible for more than a single discrete task, you could still accidentally (quite easily) smash two flags fields of different unions together (say, AnimalProperties and ObjectProperties, since they're both ints), mixing up all your bits, which is a horrible bug to trace down... and how I know many people on this post don't work with bitmasks very often, since building them is easy and maintaining them is hard.
class AnimalDefinition {
public:
static AnimalDefinition *GetAnimalDefinition( AnimalFlags flags ); //A little too obvious for my taste... NEXT!
static AnimalDefinition *GetAnimalDefinition( AnimalProperties properties ); //Oh I see how to use this! BORING, NEXT!
static AnimalDefinition *GetAnimalDefinition( int flags ); //hmm, wish I could see how to construct a valid "flags" int without CrossFingers+Ctrl+Shift+F("Animal*"). Maybe just hard-code 16 or something?
AnimalFlags animalFlags; //Well this is *way* too hard to break unintentionally, screw this!
int flags; //PERFECT! Nothing will ever go wrong here...
//wait, what values are used for this particular flags field? Is this AnimalFlags or ObjectFlags? Or is it RuntimePlatformFlags? Does it matter? Where's the documentation?
//Well luckily anyone in the code base and get confused and destroy the whole program! At least I don't need to static_cast anymore, phew!
private:
AnimalDescription m_description; //Oh I know what this is. All of the mystery and excitement of life has been stolen away :(
};
So then you make your union declaration private to prevent direct access to "Flags", and have to add getters/setters and operator overloads, then make a macro for all that, and you're basically right back where you started when you tried to do this with an Enum.
Unfortunately if you want your code to be portable, I don't think there is any way to either A) guarantee the bit layout or B) determine the bit layout at compile time (so you can track it and at least correct for changes across versions/platforms etc)
Offset in a struct with bit fields
At runtime you can play tricks w/ setting the fields and XORing the flags to see which bits did change; sounds pretty crappy to me though versus having a 100% consistent, platform independent, and completely deterministic solution, i.e. an ENUM.
TL;DR:
Don't listen to the haters. C++ is not English. Just because the literal definition of an abbreviated keyword inherited from C might not fit your usage doesn't mean you shouldn't use it when the C and C++ definition of the keyword absolutely includes your use case. You can also use structs to model things other than structures, and classes for things other than school and social caste. You may use float for values which are grounded. You may use char for variables which are neither un-burnt nor a person in a novel, play, or movie. Any programmer who goes to the dictionary to determine the meaning of a keyword before the language spec is a... well I'll hold my tongue there.
If you do want your code modeled after spoken language you'd be best off writing in Objective-C, which incidentally also uses enums heavily for bitfields.
I'd like to elaborate on Uliwitness' answer, fixing his code for C++98 and using the Safe Bool idiom, for lack of the std::underlying_type<> template and the explicit keyword in C++ versions below C++11.
I also modified it so that the enum values can be sequential without any explicit assignment, so you can have
enum AnimalFlags_
{
HasClaws,
CanFly,
EatsFish,
Endangered
};
typedef FlagsEnum<AnimalFlags_> AnimalFlags;
seahawk.flags = AnimalFlags() | CanFly | EatsFish | Endangered;
You can then get the raw flags value with
seahawk.flags.value();
Here's the code.
template <typename EnumType, typename Underlying = int>
class FlagsEnum
{
typedef Underlying FlagsEnum::* RestrictedBool;
public:
FlagsEnum() : m_flags(Underlying()) {}
FlagsEnum(EnumType singleFlag):
m_flags(1 << singleFlag)
{}
FlagsEnum(const FlagsEnum& original):
m_flags(original.m_flags)
{}
FlagsEnum& operator |=(const FlagsEnum& f) {
m_flags |= f.m_flags;
return *this;
}
FlagsEnum& operator &=(const FlagsEnum& f) {
m_flags &= f.m_flags;
return *this;
}
friend FlagsEnum operator |(const FlagsEnum& f1, const FlagsEnum& f2) {
return FlagsEnum(f1) |= f2;
}
friend FlagsEnum operator &(const FlagsEnum& f1, const FlagsEnum& f2) {
return FlagsEnum(f1) &= f2;
}
FlagsEnum operator ~() const {
FlagsEnum result(*this);
result.m_flags = ~result.m_flags;
return result;
}
operator RestrictedBool() const {
return m_flags ? &FlagsEnum::m_flags : 0;
}
Underlying value() const {
return m_flags;
}
protected:
Underlying m_flags;
};
Currently there is no language support for enum flags; metaclasses might inherently add this feature if they ever become part of the C++ standard.
My solution would be to create enum-only instantiated template functions adding support for type-safe bitwise operations for enum class using its underlying type:
File: EnumClassBitwise.h
#pragma once
#ifndef _ENUM_CLASS_BITWISE_H_
#define _ENUM_CLASS_BITWISE_H_
#include <type_traits>
//unary ~operator
template <typename Enum, typename std::enable_if_t<std::is_enum<Enum>::value, int> = 0>
constexpr inline Enum operator~ (Enum val)
{
return static_cast<Enum>(~static_cast<std::underlying_type_t<Enum>>(val));
}
// & operator
template <typename Enum, typename std::enable_if_t<std::is_enum<Enum>::value, int> = 0>
constexpr inline Enum operator& (Enum lhs, Enum rhs)
{
return static_cast<Enum>(static_cast<std::underlying_type_t<Enum>>(lhs) & static_cast<std::underlying_type_t<Enum>>(rhs));
}
// &= operator
template <typename Enum, typename std::enable_if_t<std::is_enum<Enum>::value, int> = 0>
constexpr inline Enum& operator&= (Enum& lhs, Enum rhs)
{
lhs = static_cast<Enum>(static_cast<std::underlying_type_t<Enum>>(lhs) & static_cast<std::underlying_type_t<Enum>>(rhs));
return lhs;
}
//| operator
template <typename Enum, typename std::enable_if_t<std::is_enum<Enum>::value, int> = 0>
constexpr inline Enum operator| (Enum lhs, Enum rhs)
{
return static_cast<Enum>(static_cast<std::underlying_type_t<Enum>>(lhs) | static_cast<std::underlying_type_t<Enum>>(rhs));
}
//|= operator
template <typename Enum, typename std::enable_if_t<std::is_enum<Enum>::value, int> = 0>
constexpr inline Enum& operator|= (Enum& lhs, Enum rhs)
{
lhs = static_cast<Enum>(static_cast<std::underlying_type_t<Enum>>(lhs) | static_cast<std::underlying_type_t<Enum>>(rhs));
return lhs;
}
#endif // _ENUM_CLASS_BITWISE_H_
For convenience and for reducing mistakes, you might want to wrap your bit flags operations for enums and for integers as well:
File: BitFlags.h
#pragma once
#ifndef _BIT_FLAGS_H_
#define _BIT_FLAGS_H_
#include "EnumClassBitwise.h"
template<typename T>
class BitFlags
{
public:
constexpr inline BitFlags() = default;
constexpr inline BitFlags(T value) { mValue = value; }
constexpr inline BitFlags operator| (T rhs) const { return mValue | rhs; }
constexpr inline BitFlags operator& (T rhs) const { return mValue & rhs; }
constexpr inline BitFlags operator~ () const { return ~mValue; }
constexpr inline operator T() const { return mValue; }
constexpr inline BitFlags& operator|=(T rhs) { mValue |= rhs; return *this; }
constexpr inline BitFlags& operator&=(T rhs) { mValue &= rhs; return *this; }
constexpr inline bool test(T rhs) const { return (mValue & rhs) == rhs; }
constexpr inline void set(T rhs) { mValue |= rhs; }
constexpr inline void clear(T rhs) { mValue &= ~rhs; }
private:
T mValue;
};
#endif //#define _BIT_FLAGS_H_
Possible usage:
#include <cstdint>
#include <BitFlags.h>
int main()
{
enum class Options : uint32_t
{
NoOption = 0 << 0
, Option1 = 1 << 0
, Option2 = 1 << 1
, Option3 = 1 << 2
, Option4 = 1 << 3
};
const uint32_t Option1 = 1 << 0;
const uint32_t Option2 = 1 << 1;
const uint32_t Option3 = 1 << 2;
const uint32_t Option4 = 1 << 3;
//Enum BitFlags
BitFlags<Options> optionsEnum(Options::NoOption);
optionsEnum.set(Options::Option1 | Options::Option3);
//Standard integer BitFlags
BitFlags<uint32_t> optionsUint32(0);
optionsUint32.set(Option1 | Option3);
return 0;
}
Xaqq has provided a really nice type-safe way to use enum flags here, via a flag_set class.
I published the code in GitHub, usage is as follows:
#include "flag_set.hpp"
enum class AnimalFlags : uint8_t {
HAS_CLAWS,
CAN_FLY,
EATS_FISH,
ENDANGERED,
_
};
int main()
{
flag_set<AnimalFlags> seahawkFlags(AnimalFlags::HAS_CLAWS
| AnimalFlags::EATS_FISH
| AnimalFlags::ENDANGERED);
if (seahawkFlags & AnimalFlags::ENDANGERED)
cout << "Seahawk is endangered";
}
Another macro solution, but unlike the existing answers this does not use reinterpret_cast (or a C-cast) to cast between Enum& and Int&, which is forbidden in standard C++ (see this post).
#define MAKE_FLAGS_ENUM(TEnum, TUnder) \
TEnum operator~ ( TEnum a ) { return static_cast<TEnum> (~static_cast<TUnder> (a) ); } \
TEnum operator| ( TEnum a, TEnum b ) { return static_cast<TEnum> ( static_cast<TUnder> (a) | static_cast<TUnder>(b) ); } \
TEnum operator& ( TEnum a, TEnum b ) { return static_cast<TEnum> ( static_cast<TUnder> (a) & static_cast<TUnder>(b) ); } \
TEnum operator^ ( TEnum a, TEnum b ) { return static_cast<TEnum> ( static_cast<TUnder> (a) ^ static_cast<TUnder>(b) ); } \
TEnum& operator|= ( TEnum& a, TEnum b ) { a = static_cast<TEnum>(static_cast<TUnder>(a) | static_cast<TUnder>(b) ); return a; } \
TEnum& operator&= ( TEnum& a, TEnum b ) { a = static_cast<TEnum>(static_cast<TUnder>(a) & static_cast<TUnder>(b) ); return a; } \
TEnum& operator^= ( TEnum& a, TEnum b ) { a = static_cast<TEnum>(static_cast<TUnder>(a) ^ static_cast<TUnder>(b) ); return a; }
Losing the reinterpret_cast means we can't rely on the x |= y syntax any more, but by expanding these into their x = x | y forms we no longer need it.
Note: You can use std::underlying_type to obtain TUnder, I've not included it for brevity.
You are confusing objects and collections of objects. Specifically, you are confusing binary flags with sets of binary flags. A proper solution would look like this:
// These are individual flags
enum AnimalFlag // Flag, not Flags
{
HasClaws = 0,
CanFly,
EatsFish,
Endangered
};
class AnimalFlagSet
{
int m_Flags;
public:
AnimalFlagSet() : m_Flags(0) { }
void Set( AnimalFlag flag ) { m_Flags |= (1 << flag); }
void Clear( AnimalFlag flag ) { m_Flags &= ~ (1 << flag); }
bool Get( AnimalFlag flag ) const { return (m_Flags >> flag) & 1; }
};
Here is my solution without needing any bunch of overloading or casting:
namespace EFoobar
{
enum
{
FB_A = 0x1,
FB_B = 0x2,
FB_C = 0x4,
};
typedef long Flags;
}
void Foobar(EFoobar::Flags flags)
{
if (flags & EFoobar::FB_A)
// do sth
;
if (flags & EFoobar::FB_B)
// do sth
;
}
void ExampleUsage()
{
Foobar(EFoobar::FB_A | EFoobar::FB_B);
EFoobar::Flags otherflags = 0;
otherflags|= EFoobar::FB_B;
otherflags&= ~EFoobar::FB_B;
Foobar(otherflags);
}
I think it's ok, because we identify (non strongly typed) enums and ints anyway.
Just as a (longer) side note: if you want to use strongly typed enums, don't need heavy bit fiddling with your flags, and performance is not an issue, then I would come up with this:
#include <set>
enum class EFoobarFlags
{
FB_A = 1,
FB_B,
FB_C,
};
void Foobar(const std::set<EFoobarFlags>& flags)
{
if (flags.find(EFoobarFlags::FB_A) != flags.end())
// do sth
;
if (flags.find(EFoobarFlags::FB_B) != flags.end())
// do sth
;
}
void ExampleUsage()
{
Foobar({EFoobarFlags::FB_A, EFoobarFlags::FB_B});
std::set<EFoobarFlags> otherflags{};
otherflags.insert(EFoobarFlags::FB_B);
otherflags.erase(EFoobarFlags::FB_B);
Foobar(otherflags);
}
using C++11 initializer lists and enum class.
Copy-pasteable "evil" macro based on some of the other answers in this thread:
#include <type_traits>
/*
* Macro to allow enum values to be combined and evaluated as flags.
* * Based on:
* - DEFINE_ENUM_FLAG_OPERATORS from <winnt.h>
* - https://stackoverflow.com/a/63031334/1624459
*/
#define MAKE_ENUM_FLAGS(TEnum) \
inline TEnum operator~(TEnum a) { \
using TUnder = typename std::underlying_type_t<TEnum>; \
return static_cast<TEnum>(~static_cast<TUnder>(a)); \
} \
inline TEnum operator|(TEnum a, TEnum b) { \
using TUnder = typename std::underlying_type_t<TEnum>; \
return static_cast<TEnum>(static_cast<TUnder>(a) | static_cast<TUnder>(b)); \
} \
inline TEnum operator&(TEnum a, TEnum b) { \
using TUnder = typename std::underlying_type_t<TEnum>; \
return static_cast<TEnum>(static_cast<TUnder>(a) & static_cast<TUnder>(b)); \
} \
inline TEnum operator^(TEnum a, TEnum b) { \
using TUnder = typename std::underlying_type_t<TEnum>; \
return static_cast<TEnum>(static_cast<TUnder>(a) ^ static_cast<TUnder>(b)); \
} \
inline TEnum& operator|=(TEnum& a, TEnum b) { \
using TUnder = typename std::underlying_type_t<TEnum>; \
a = static_cast<TEnum>(static_cast<TUnder>(a) | static_cast<TUnder>(b)); \
return a; \
} \
inline TEnum& operator&=(TEnum& a, TEnum b) { \
using TUnder = typename std::underlying_type_t<TEnum>; \
a = static_cast<TEnum>(static_cast<TUnder>(a) & static_cast<TUnder>(b)); \
return a; \
} \
inline TEnum& operator^=(TEnum& a, TEnum b) { \
using TUnder = typename std::underlying_type_t<TEnum>; \
a = static_cast<TEnum>(static_cast<TUnder>(a) ^ static_cast<TUnder>(b)); \
return a; \
}
Usage
enum class Passability : std::uint16_t { // uint16_t: the flag values below need 9 bits
Clear = 0,
GroundUnit = 1 << 1,
FlyingUnit = 1 << 2,
Building = 1 << 3,
Tree = 1 << 4,
Mountain = 1 << 5,
Blocked = 1 << 6,
Water = 1 << 7,
Coastline = 1 << 8
};
MAKE_ENUM_FLAGS(Passability)
Advantages
Only applies to chosen enums when used explicitly.
No use of illegal reinterpret_cast.
No need to specify the underlying type.
Notes
Replace std::underlying_type_t<TEnum> with std::underlying_type<TEnum>::type if you are using a C++ standard earlier than C++14.
Here's a lazy C++11 solution that doesn't change the default behavior of enums. It also works for enum struct and enum class, and is constexpr.
#include <type_traits>
template<class T = void> struct enum_traits {};
template<> struct enum_traits<void> {
struct _allow_bitops {
static constexpr bool allow_bitops = true;
};
using allow_bitops = _allow_bitops;
template<class T, class R = T>
using t = typename std::enable_if<std::is_enum<T>::value and
enum_traits<T>::allow_bitops, R>::type;
template<class T>
using u = typename std::underlying_type<T>::type;
};
template<class T>
constexpr enum_traits<>::t<T> operator~(T a) {
return static_cast<T>(~static_cast<enum_traits<>::u<T>>(a));
}
template<class T>
constexpr enum_traits<>::t<T> operator|(T a, T b) {
return static_cast<T>(
static_cast<enum_traits<>::u<T>>(a) |
static_cast<enum_traits<>::u<T>>(b));
}
template<class T>
constexpr enum_traits<>::t<T> operator&(T a, T b) {
return static_cast<T>(
static_cast<enum_traits<>::u<T>>(a) &
static_cast<enum_traits<>::u<T>>(b));
}
template<class T>
constexpr enum_traits<>::t<T> operator^(T a, T b) {
return static_cast<T>(
static_cast<enum_traits<>::u<T>>(a) ^
static_cast<enum_traits<>::u<T>>(b));
}
template<class T>
constexpr enum_traits<>::t<T, T&> operator|=(T& a, T b) {
a = a | b;
return a;
}
template<class T>
constexpr enum_traits<>::t<T, T&> operator&=(T& a, T b) {
a = a & b;
return a;
}
template<class T>
constexpr enum_traits<>::t<T, T&> operator^=(T& a, T b) {
a = a ^ b;
return a;
}
To enable bitwise operators for an enum:
enum class my_enum {
Flag1 = 1 << 0,
Flag2 = 1 << 1,
Flag3 = 1 << 2,
// ...
};
// The magic happens here
template<> struct enum_traits<my_enum> :
enum_traits<>::allow_bitops {};
constexpr my_enum foo = my_enum::Flag1 | my_enum::Flag2 | my_enum::Flag3;
As above (Kai), or do the following. Enums really are "enumerations"; what you want is a set, so you should really use std::set:
enum AnimalFlags
{
HasClaws = 1,
CanFly =2,
EatsFish = 4,
Endangered = 8
};
int main(void)
{
AnimalFlags seahawk;
//seahawk= CanFly | EatsFish | Endangered;
seahawk= static_cast<AnimalFlags>(CanFly | EatsFish | Endangered);
}
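A minimal sketch of the std::set variant suggested above (the cast in main keeps the bitwise style instead):
#include <set>
void example()
{
std::set<AnimalFlags> seahawk = { CanFly, EatsFish, Endangered };
if (seahawk.count(Endangered))
{
// Endangered is in the set
}
}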
Maybe like NS_OPTIONS of Objective-C.
#define ENUM(T1, T2) \
enum class T1 : T2; \
inline T1 operator~ (T1 a) { return (T1)~(int)a; } \
inline T1 operator| (T1 a, T1 b) { return static_cast<T1>((static_cast<T2>(a) | static_cast<T2>(b))); } \
inline T1 operator& (T1 a, T1 b) { return static_cast<T1>((static_cast<T2>(a) & static_cast<T2>(b))); } \
inline T1 operator^ (T1 a, T1 b) { return static_cast<T1>((static_cast<T2>(a) ^ static_cast<T2>(b))); } \
inline T1& operator|= (T1& a, T1 b) { return reinterpret_cast<T1&>((reinterpret_cast<T2&>(a) |= static_cast<T2>(b))); } \
inline T1& operator&= (T1& a, T1 b) { return reinterpret_cast<T1&>((reinterpret_cast<T2&>(a) &= static_cast<T2>(b))); } \
inline T1& operator^= (T1& a, T1 b) { return reinterpret_cast<T1&>((reinterpret_cast<T2&>(a) ^= static_cast<T2>(b))); } \
enum class T1 : T2
ENUM(Options, short) {
FIRST = 1 << 0,
SECOND = 1 << 1,
THIRD = 1 << 2,
FOURTH = 1 << 3
};
#include <iostream>
using namespace std;
int main() {
auto options = Options::FIRST | Options::SECOND;
options |= Options::THIRD;
if ((options & Options::SECOND) == Options::SECOND)
cout << "Contains second option." << endl;
if ((options & Options::THIRD) == Options::THIRD)
cout << "Contains third option." << endl;
return 0;
}
// Output:
// Contains second option.
// Contains third option.
C++20 Type-Safe Enum Operators
TL;DR
#include <concepts>
#include <type_traits>

template<typename T>
requires std::is_enum_v<T> and
requires (std::underlying_type_t<T> x) {
{ x | x } -> std::same_as<std::underlying_type_t<T>>;
T(x);
}
T operator|(T left, T right)
{
using U = std::underlying_type_t<T>;
return T( U(left) | U(right) );
}
template<typename T>
requires std::is_enum_v<T> and
requires (std::underlying_type_t<T> x) {
{ x | x } -> std::same_as<std::underlying_type_t<T>>;
T(x);
}
T operator&(T left, T right)
{
using U = std::underlying_type_t<T>;
return T( U(left) & U(right) );
}
template<typename T>
requires std::is_enum_v<T> and requires (T x) { { x | x } -> std::same_as<T>; }
T & operator|=(T &left, T right)
{
return left = left | right;
}
template<typename T>
requires std::is_enum_v<T> and requires (T x) { { x & x } -> std::same_as<T>; }
T & operator&=(T &left, T right)
{
return left = left & right;
}
Rationale
With type trait std::is_enum we can test some type T for whether it is an enumeration type.
This includes both unscoped and scoped enums (i.e. enum and enum class).
With type trait std::underlying_type we can get the underlying type of an enum.
With C++20 concepts and constraints it is quite easy to then provide overloads for bitwise operations.
Scoped vs. Unscoped
If the operations should only be overloaded for either scoped or unscoped enums, std::is_scoped_enum can be used to extend the template constraints accordingly.
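A sketch of that scoped-only constraint (note that std::is_scoped_enum is a C++23 trait, so this assumes a C++23 compiler):
template<typename T>
requires std::is_scoped_enum_v<T>
T operator|(T left, T right)
{
using U = std::underlying_type_t<T>;
return T( U(left) | U(right) );
}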
C++23
With C++23 we get std::to_underlying to convert an enum value to its underlying type more easily.
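For example, a sketch of operator| rewritten with std::to_underlying (declared in <utility>):
template<typename T>
requires std::is_enum_v<T>
T operator|(T left, T right)
{
return T( std::to_underlying(left) | std::to_underlying(right) );
}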
Move Semantics & Perfect Forwarding
Should you get in the bizarre situation that your underlying type has different semantics for copy vs. move or it does not provide a copy c'tor, then you should do perfect forwarding of the operands with std::forward.
You can use struct as follow:
struct UiFlags2 {
static const int
FULLSCREEN = 0x00000004, //api 16
HIDE_NAVIGATION = 0x00000002, //api 14
LAYOUT_HIDE_NAVIGATION = 0x00000200, //api 16
LAYOUT_FULLSCREEN = 0x00000400, //api 16
LAYOUT_STABLE = 0x00000100, //api 16
IMMERSIVE_STICKY = 0x00001000; //api 19
};
and use as this:
int flags = UiFlags2::FULLSCREEN | UiFlags2::HIDE_NAVIGATION;
So you don't need any int casting, and it is directly usable.
It is also scope-separated, like an enum class.
I prefer using magic_enum as it helps automate converting strings to enums and vice versa.
It is a header-only library which is written in C++17 standard.
magic_enum already has template functions for enum bitwise operators.
See documentation.
Usage:
#include <magic_enum.hpp>
enum Flag { ... };
Flag flag{};
Flag value{};
using namespace magic_enum::bitwise_operators;
flag |= value;

How does one use an enum class as a set of flags?

Let's say I have a set of flags and a class like this:
/// <summary>Options controlling a search for files.</summary>
enum class FindFilesOptions : unsigned char
{
LocalSearch = 0,
RecursiveSearch = 1,
IncludeDotDirectories = 2
};
class FindFiles : boost::noncopyable
{
/* omitted */
public:
FindFiles(std::wstring const& pattern, FindFilesOptions options);
/* omitted */
};
and I want a caller to be able to select more than one option:
FindFiles handle(Append(basicRootPath, L"*"),
FindFilesOptions::RecursiveSearch | FindFilesOptions::IncludeDotDirectories);
Is it possible to support this in a strongly-typed way with C++11 enum class, or do I have to revert to untyped enumerations?
(I know the caller could static_cast to the underlying type and static_cast back, but I don't want the caller to have to do that)
It is certainly possible to use enum classes for bitmaps. It is, unfortunately, a bit painful to do so: you need to define the necessary bit operations on your type. Below is an example of how this could look. It would be nice if enum classes could derive from some other type which could live in a suitable namespace defining the necessary operator boilerplate code.
#include <iostream>
#include <type_traits>
enum class bitmap: unsigned char
{
a = 0x01,
b = 0x02,
c = 0x04
};
bitmap operator& (bitmap x, bitmap y)
{
typedef std::underlying_type<bitmap>::type uchar;
return bitmap(uchar(x) & uchar(y));
}
bitmap operator| (bitmap x, bitmap y)
{
typedef std::underlying_type<bitmap>::type uchar;
return bitmap(uchar(x) | uchar(y));
}
bitmap operator^ (bitmap x, bitmap y)
{
typedef std::underlying_type<bitmap>::type uchar;
return bitmap(uchar(x) ^ uchar(y));
}
bool test(bitmap x)
{
return std::underlying_type<bitmap>::type(x);
}
int main()
{
bitmap v = bitmap::a | bitmap::b;
if (test(v & bitmap::a)) {
std::cout << "a ";
}
if (test(v & bitmap::b)) {
std::cout << "b ";
}
if (test(v & bitmap::c)) {
std::cout << "c ";
}
std::cout << '\n';
}
Templates play well with enum class so you can define sets of operators that work on sets of similar enumeration types. The key is to use a traits template to specify what interface(s) each enumeration conforms/subscribes to.
As a start:
enum class mood_flag {
jumpy,
happy,
upset,
count // size of enumeration
};
template<>
struct enum_traits< mood_flag > {
static constexpr bool bit_index = true;
};
template< typename t >
struct flag_bits : std::bitset< static_cast< int >( t::count ) > {
flag_bits( t bit ) // implicit
{ this->set( static_cast< int >( bit ) ); }
// Should be explicit but I'm lazy to type:
flag_bits( typename flag_bits::bitset set )
: flag_bits::bitset( set ) {}
};
template< typename e >
typename std::enable_if< enum_traits< e >::bit_index,
flag_bits< e > >::type
operator | ( flag_bits< e > set, e next )
{ return set | flag_bits< e >( next ); }
template< typename e >
typename std::enable_if< enum_traits< e >::bit_index,
flag_bits< e > >::type
operator | ( e first, e next )
{ return flag_bits< e >( first ) | next; }
http://ideone.com/kJ271Z
GCC 4.9 reported that some implicit member functions were constexpr while I was getting this to compile, so the templates should probably be so as well.
This should probably also have a free function to_scalar or something which returns an unsigned integer type given either an individual flag or a flag_bits set.
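A possible sketch of that to_scalar helper, under the assumption that it should return the bitmask value (the name to_scalar is the answer's own suggestion; the return type here is illustrative):
template< typename e >
unsigned long to_scalar( flag_bits< e > const & set )
{ return set.to_ulong(); }

template< typename e >
unsigned long to_scalar( e flag )
{ return to_scalar( flag_bits< e >( flag ) ); }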
How about defining FindFiles so that it takes std::initializer_list of FindFilesOptions.
void FindFiles(std::wstring const& pattern, std::initializer_list<FindFilesOptions> options)
{
auto has_option = [&](FindFilesOptions const option)
{
return std::find(std::begin(options), std::end(options), option) != std::end(options);
};
if (has_option(FindFilesOptions::LocalSearch))
{
// ...
}
if (has_option(FindFilesOptions::RecursiveSearch))
{
// ...
}
if (has_option(FindFilesOptions::IncludeDotDirectories))
{
// ...
}
}
Then you could call it like so:
FindFiles({}, {FindFilesOptions::RecursiveSearch, FindFilesOptions::IncludeDotDirectories});
The problem is not the explicit enum type but the class scope.
With C++11, enums as compile-time constants lose a lot of their interest compared to a bunch of constexpr values when you need to operate on the value (bitwise operations, incrementation, etc.).
If you don't care about performance, change your options to set<FindFilesOptions>!

How to overload operators with a built-in return type?

Say I have a class, that wraps some mathematic operation. Lets use a toy example
class Test
{
public:
Test( float f ) : mFloat( f ), mIsFloat( true ) {}
float mFloat;
int mInt;
bool mIsFloat;
};
I'd like to create an operator overload with the following prototype:
float operator=( const Test& test )
{
if ( !test.mIsFloat ) return *this; // in this case don't actually do the assignment
return test.mFloat; // in this case do it.
}
So my questions are: can I overload operator= with a built-in return type?
and if so, is there a way to refer to the built-in type?
I know I could do this if I wrapped the built-ins with a class. But in this case I want to have the assignment operator work with built-in types on the LHS
Example of usage:
Test t( 0.5f );
float f = t; // f == 0.5
int i = 0;
i = t; // i stays 0.
UPDATE: Thanks so much for the help. Expanding a little bit from the toy example so people understand what I'm really trying to do.
I have a configuration system that allows me to get config parameters from a tree of parameters with different type ( they can be integers, floats, strings, arrays etc. ).
I can get items from the tree with operations like this:
float updateTime = config["system.updateTime"];
But it is possible that "system.updateTime" does not exist. Or is of the wrong type. Generally for configuration I have a block of defaults, and then code to overide the defaults from the config:
float updateTime = 10;
const char* logFile = "tmp.log";
... etc etc...
I want to do something like:
updateTime = config["system.updateTime"];
Where the operation succeeds if there is an override. So generally the assignment doesn't happen if the return from operator[] is an "invalid" node in the tree.
Right now I solve it with a function like:
getConfig( config, "system.updateTime", updateTime );
But I would prefer to use assignment operator.
I could do this if I were willing to create classes to wrap the builtins.
class MyFloat
{
MyFloat& operator=( const Test& test ) { if ( test.isValidNode() ) f = test.mFloat; return *this; }
float f;
};
But obviously it would be preferable not to wrap built-ins with trivial classes just to overload assignment. The question is: is this possible in C++?
Based on your example, what you really want is an implicit conversion operator:
class Test
{
// ...
public:
operator float() const;
};
inline Test::operator float() const
{
return mIsFloat ? mFloat : mInt;
}
If you want to conditionally do the assignment, then you need to take another approach. A named method would probably be the best option, all things considered... something like this:
class Test
{
public:
bool try_assign(float & f) const;
};
inline bool Test::try_assign(float & f) const
{
if (mIsFloat) {
f = mFloat;
}
return mIsFloat;
}
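A hypothetical usage in the config scenario from the question, assuming config[...] returns such a Test-like node:
float updateTime = 10; // default from the question
config["system.updateTime"].try_assign(updateTime); // overwritten only if the node actually holds a float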
Whatever you do, be careful that the syntactic sugar you introduce doesn't result in unreadable code.
You already have one implicit conversion from float to Test, in the form of the convert constructor
class Test
{
public:
/* ... */
Test( float f ) : mFloat( f ) /*...*/ {}
};
Which will support conversions such as:
Test t(0.5f);
You will likely also want an assignment operator taking a float in order to make further implicit conversions from float to Test possible:
class Test
{
public:
Test& operator=(float f) { mFloat = f; return *this; }
};
t = 0.75; // This is possible now
In order to support implicit conversion from Test to float you don't use operator=, but a custom cast operator, declared & implemented thusly:
class Test
{
public:
/* ... */
operator float () const { return mFloat; }
};
This makes implicit conversions posslible, such as:
float f = t;
As an aside, you have another implicit conversion happening here you may not even be aware of. In this code:
Test t( 0.5 );
The literal value 0.5 isn't a float, but a double. In order to call the convert constructor this value must be converted to float, which may result in loss of precision. In order to specify a float literal, use the f suffix:
Test t( 0.5f );
Using template specialization:
class Config {
public:
template<typename T>
void setValue(const std::string& index, T& value); // sets the value if available
};
template<>
void Config::setValue<float>(const std::string& index, float& value){...} // only sets float values
template<>
void Config::setValue<int>(const std::string& index, int& value){...} // only sets int values
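Hypothetical usage (assuming setValue simply leaves the value untouched when the key is missing or the type doesn't match):
Config config;
float updateTime = 10;
config.setValue("system.updateTime", updateTime); // updateTime keeps its default on a miss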
You can't "overload/add" operators for basic types, but you can for your type Type. But this shall not be operator = but operator >> - like in istreams.
class Test
{
public:
float mFloat;
int mInt;
bool mIsFloat;
Test& operator >> (float& v) { if (mIsFloat) v = mFloat; return *this; }
Test& operator >> (int& v) { if (!mIsFloat) v = mInt; return *this; }
};
Then you can:
int main() {
float v = 2;
Test t = { 1.0, 2, false };
t >> v; // no effect
t.mIsFloat = true;
t >> v; // now v is changed
}
Update
I want to do something like:
updateTime = config["system.updateTime"];
Then with my proposal, you can:
config["system.updateTime"] >> updateTime;