In some testing code there's a helper function like this:
auto make_condiment(bool salt, bool pepper, bool oil, bool garlic) {
// assumes that first bool is salt, second is pepper,
// and so on...
//
// Make up something according to flags
return something;
};
which essentially builds up something based on some boolean flags.
What concerns me is that the meaning of each bool is hardcoded in the name of the parameters, which is bad because at the call site it's hard to remember which parameter means what (yeah, the IDE can likely eliminate the problem entirely by showing those names when tab completing, but still...):
// at the call site:
auto obj = make_condiment(false, false, true, true); // what ingredients am I using and what not?
Therefore, I'd like to pass a single object describing the settings. But rather than just aggregating them in an object, e.g. std::array<bool,4>, I would like to enable a syntax like this:
auto obj = make_smart_condiment(oil + garlic);
which would generate the same obj as the previous call to make_condiment.
This new function would be:
auto make_smart_condiment(Ingredients ingredients) {
// retrieve the individual flags from the input
bool salt = ingredients.hasSalt();
bool pepper = ingredients.hasPepper();
bool oil = ingredients.hasOil();
bool garlic = ingredients.hasGarlic();
// same body as make_condiment, or simply:
return make_condiment(salt, pepper, oil, garlic);
}
Here's my attempt:
struct Ingredients {
public:
enum class INGREDIENTS { Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };
explicit Ingredients() : flags{0} {};
explicit Ingredients(INGREDIENTS const& f) : flags{static_cast<int>(f)} {};
private:
explicit Ingredients(int fs) : flags{fs} {}
int flags; // values 0-15
public:
bool hasSalt() const {
return flags % 2;
}
bool hasPepper() const {
return (flags / 2) % 2;
}
bool hasOil() const {
return (flags / 4) % 2;
}
bool hasGarlic() const {
return (flags / 8) % 2;
}
Ingredients operator+(Ingredients const& f) {
return Ingredients(flags + f.flags);
}
} // the struct definition directly declares four named global instances:
salt{Ingredients::INGREDIENTS::Salt},
pepper{Ingredients::INGREDIENTS::Pepper},
oil{Ingredients::INGREDIENTS::Oil},
garlic{Ingredients::INGREDIENTS::Garlic};
However, I have the feeling that I am reinventing the wheel.
Is there any better, or standard, way of accomplishing the above?
Is there maybe a design pattern that I could/should use?
I think you can remove some of the boilerplate by using a std::bitset. Here is what I came up with:
#include <bitset>
#include <cstdint>
#include <iostream>
class Ingredients {
public:
enum Option : uint8_t {
Salt = 0,
Pepper = 1,
Oil = 2,
Max = 3
};
bool has(Option o) const { return value_[o]; }
Ingredients(std::initializer_list<Option> opts) {
for (const Option& opt : opts)
value_.set(opt);
}
private:
std::bitset<Max> value_ {0};
};
int main() {
Ingredients ingredients{Ingredients::Salt, Ingredients::Pepper};
// prints "10"
std::cout << ingredients.has(Ingredients::Salt)
<< ingredients.has(Ingredients::Oil) << "\n";
}
You don't get the + type syntax, but it's pretty close. It's unfortunate that you have to keep an Option::Max, but not too bad. Also I decided to not use an enum class so that it can be accessed as Ingredients::Salt and implicitly converted to an int. You could explicitly access and cast if you wanted to use enum class.
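For completeness, here is a rough sketch of the enum class variant mentioned above (my illustration, not part of the original answer); the explicit casts are what the scoped enum forces on you:
#include <bitset>
#include <cstddef>
#include <initializer_list>

class Ingredients {
public:
    enum class Option : std::size_t { Salt, Pepper, Oil, Max };

    Ingredients(std::initializer_list<Option> opts) {
        for (Option opt : opts)
            value_.set(static_cast<std::size_t>(opt)); // explicit cast required with enum class
    }
    bool has(Option o) const { return value_[static_cast<std::size_t>(o)]; }

private:
    std::bitset<static_cast<std::size_t>(Option::Max)> value_;
};

// usage: Ingredients ingredients{Ingredients::Option::Salt, Ingredients::Option::Pepper};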
If you want to use enums as flags, the usual way is to merge them with operator| and check them with operator&:
#include <iostream>
enum Ingredients{ Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };
// If you want to use operator +
Ingredients operator + (Ingredients a,Ingredients b) {
return Ingredients(a | b);
}
int main()
{
using std::cout;
cout << bool( Salt & Ingredients::Salt ); // has salt
cout << bool( Salt & Ingredients::Pepper ); // doesn't have pepper
auto sp = Ingredients::Salt + Ingredients::Pepper;
cout << bool( sp & Ingredients::Salt ); // has salt
cout << bool( sp & Ingredients::Garlic ); // doesn't have garlic
}
note: the current code (with only the operator +) would not work if you mix | and + like (Salt|Salt)+Salt.
You can also use enum class; you just need to define the operators yourself.
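A minimal sketch of what those operators could look like for an enum class (my illustration; the names mirror the example above):
enum class Ingredient : unsigned { Salt = 1, Pepper = 2, Oil = 4, Garlic = 8 };

constexpr Ingredient operator|(Ingredient a, Ingredient b) {
    return static_cast<Ingredient>(static_cast<unsigned>(a) | static_cast<unsigned>(b));
}
constexpr bool operator&(Ingredient a, Ingredient b) {
    return (static_cast<unsigned>(a) & static_cast<unsigned>(b)) != 0;
}

// auto sp = Ingredient::Salt | Ingredient::Pepper;
// bool hasSalt = sp & Ingredient::Salt;   // true
// bool hasOil  = sp & Ingredient::Oil;    // false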
I would look at a strong typing library like:
https://github.com/joboccara/NamedType
For a really good video talking about this:
https://www.youtube.com/watch?v=fWcnp7Bulc8
When I first saw this, I was a little dismissive, but because the advice came from people I respected, I gave it a chance. The video convinced me.
If you look at CPP Best Practices and dig deeply enough, you'll see the general advice to avoid boolean parameters, especially strings of them. And Jonathan Boccara gives good reasons why your code will be stronger if you don't directly use the raw types, for the very reason that you've already identified.
I'm currently trying to come up with a clever way of implementing flags that include the states "default" and (optional) "toggle" in addition to the usual "true" and "false".
The general problem with flags is that one has a function and wants to define its behaviour (either "do something" or "don't do something") by passing certain parameters.
Single flag
With a single (boolean) flag the solution is simple:
void foo(...,bool flag){
if(flag){/*do something*/}
}
Here it is especially easy to add a default, by just changing the function to
void foo(...,bool flag=true)
and call it without the flag parameter.
Multiple flags
Once the number of flags increases, the solution I usually see and use is something like this:
typedef int Flag;
static const Flag Flag1 = 1<<0;
static const Flag Flag2 = 1<<1;
static const Flag Flag3 = 1<<2;
void foo(/*other arguments ,*/ Flag f){
if(f & Flag1){/*do whatever Flag1 indicates*/}
/*check other flags*/
}
//call like this:
foo(/*args ,*/ Flag1 | Flag3)
This has the advantage that you don't need a parameter for each flag, which means the user can set the flags he likes and simply forget about the ones he doesn't care about. In particular, you don't get a call like foo(/*args*/, true, false, true) where you have to count which true/false denotes which flag.
The problem here is:
If you set a default argument, it is overwritten as soon as the user specifies any flag. It is not possible to do something like Flag1=true, Flag2=false, Flag3=default.
Obviously, if we want to have 3 options (true, false, default) we need to pass at least 2 bits per flag. So while it might not be necessary, I guess it should be easy for any implementation to use the 4th state to indicate a toggle (= !default).
I have 2 approaches to this, but I'm not really happy with either of them:
Approach 1: Defining 2 Flags
Up to now I have been using something like this:
typedef int Flag;
static const Flag Flag1 = 1<<0;
static const Flag Flag1False= 1<<1;
static const Flag Flag1Toggle = Flag1 | Flag1False;
static const Flag Flag2= 1<<2;
static const Flag Flag2False= 1<<3;
static const Flag Flag2Toggle = Flag2 | Flag2False;
void applyDefault(Flag& f){
//do nothing for flags with default false
//for flags with default true:
f = ( f & Flag1False)? f & ~Flag1 : f | Flag1;
//if the false bit is set, it is either false or toggle, anyway: clear the bit
//if its not set, its either true or default, anyway: set
}
void foo(/*args ,*/ Flag f){
applyDefault(f);
if (f & Flag1) //do whatever Flag1 indicates
}
However, what I don't like about this is that two different bits are used for each flag. This leads to different code for "default-true" and "default-false" flags, and to the necessary if instead of some nice bitwise operation in applyDefault().
Approach 2: Templates
By defining a template-class like this:
struct Flag{
virtual bool apply(bool prev) const =0;
};
template<bool mTrue, bool mFalse>
struct TFlag: public Flag{
inline bool apply(bool prev) const{
return (!prev&&mTrue)||(prev&&!mFalse);
}
};
TFlag<true,false> fTrue;
TFlag<false,true> fFalse;
TFlag<false,false> fDefault;
TFlag<true,true> fToggle;
I was able to condense apply() into a single bitwise operation, with all but one argument known at compile time. Using TFlag::apply directly therefore compiles (with gcc) to the same machine code as a plain return true;, return false;, return prev; or return !prev; would, which is pretty efficient. But that would mean I have to use template functions if I want to pass a TFlag as an argument. Inheriting from Flag and using a const Flag& as the argument adds the overhead of a virtual function call, but saves me from using templates.
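As a usage sketch for the single-flag case (my illustration, assuming the flag should default to true):
void foo(/*args,*/ const Flag& f1 = fDefault) {
    bool flag1 = f1.apply(true);  // true is the default; the virtual call resolves the four cases
    if (flag1) { /* do whatever Flag1 indicates */ }
}

// foo();         -> default, flag1 == true
// foo(fFalse);   -> flag1 == false
// foo(fToggle);  -> !default, so flag1 == false here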
However, I have no idea how to scale this up to multiple flags...
Question
So the question is:
How can I implement multiple flags in a single argument in C++, so that a user can easily set each of them to "true", "false" or "default" (by not setting the specific flag), or (optionally) indicate "whatever is not default"?
Is a class with two ints, using a bitwise operation similar to the template approach and providing its own bitwise operators, the way to go? And if so, is there a way to give the compiler the option to do most of the bitwise operations at compile time?
Edit for clarification:
I don't want to pass the 4 distinct flags "true", "false", "default", "toggle" to a function.
E.g. think of a circle that gets drawn where the flags are used for "draw border", "draw center", "draw fill color", "blurry border", "let the circle hop up and down", "do whatever other fancy stuff you can do with a circle", ....
And for each of those "properties" I want to pass a flag whose value is either true, false, default or toggle.
So the function might decide to draw the border, fill color and center by default, but none of the rest. A call, roughly like this:
draw_circle (DRAW_BORDER | DONT_DRAW_CENTER | TOGGLE_BLURRY_BORDER) //or
draw_circle (BORDER=true, CENTER=false, BLURRY=toggle)
//or whatever nice syntax you come up with....
should draw the border (specified by flag), not draw the center (specified by flag), blur the border (the flag says: not the default) and draw the fill color (not specified, but it's the default).
If I later decide not to draw the center by default anymore but to blur the border by default, the call should draw the border (specified by flag), not draw the center (specified by flag), not blur the border (now blurring is the default, but we don't want the default) and draw the fill color (no flag for it, but it's the default).
Not exactly pretty, but very simple (building from your Approach 1):
#include <iostream>
using Flag = int;
static const Flag Flag1 = 1<<0;
static const Flag Flag2 = 1<<2;
// add more flags to turn things off, etc.
class Foo
{
bool flag1 = true; // default true
bool flag2 = false; // default false
void applyDefault(Flag& f)
{
if (f & Flag1)
flag1 = true;
if (f & Flag2)
flag2 = true;
// apply off flags
}
public:
void operator()(/*args ,*/ Flag f)
{
applyDefault(f);
if (flag1)
std::cout << "Flag 1 ON\n";
if (flag2)
std::cout << "Flag 2 ON\n";
}
};
void foo(/*args ,*/ Flag flags)
{
Foo f;
f(flags);
}
int main()
{
foo(Flag1); // Flag1 ON
foo(Flag2); // Flag1 ON\nFlag2 ON
foo(Flag1 | Flag2); // Flag1 ON\nFlag2 ON
return 0;
}
Your comments and answers pointed me towards a solution that I like and want to share with you:
using uint = unsigned int; // uint is not a standard type, so define it explicitly

struct Default_t{} Default;
struct Toggle_t{} Toggle;
struct FlagSet{
uint m_set;
uint m_reset;
constexpr FlagSet operator|(const FlagSet other) const{
return {
~m_reset & other.m_set & ~other.m_reset |
~m_set & other.m_set & other.m_reset |
m_set & ~other.m_set,
m_reset & ~other.m_reset |
~m_set & ~other.m_set & other.m_reset|
~m_reset & other.m_set & other.m_reset};
}
constexpr FlagSet& operator|=(const FlagSet other){
*this = *this|other;
return *this;
}
};
struct Flag{
const uint m_bit;
constexpr FlagSet operator= (bool val) const{
return {(uint)val<<m_bit,(!(uint)val)<<m_bit};
}
constexpr FlagSet operator= (Default_t) const{
return {0u,0u};
}
constexpr FlagSet operator= (Toggle_t) const {
return {1u<<m_bit,1u<<m_bit};
}
constexpr uint operator& (FlagSet i) const{
return i.m_set & (1u<<m_bit);
}
constexpr operator FlagSet() const{
return {1u<<m_bit,0u}; //= set
}
constexpr FlagSet operator|(const Flag other) const{
return (FlagSet)*this|(FlagSet)other;
}
constexpr FlagSet operator|(const FlagSet other) const{
return (FlagSet)*this|other;
}
};
constexpr uint operator& (FlagSet i, Flag f){
return f & i;
}
So basically the FlagSet holds two integers. One for set, one for reset. Different combinations represent different actions for that particular bit:
{false,false} = Default (D)
{true ,false} = Set (S)
{false,true } = Reset (R)
{true ,true } = Toggle (T)
The operator| is using a rather complex bitwise operation, designed to fulfil
D|D = D
D|R = R|D = R
D|S = S|D = S
D|T = T|D = T
T|T = D
T|R = R|T = S
T|S = S|T = R
S|S = S
R|R = R
S|R = S (*)
R|S = R (*)
The non-commutative behaviour in (*) is due to the fact that we somehow need the ability to decide which one is the "default" and which one is the "user defined" one. So in case of conflicting values, the left one takes precedence.
The Flag class represents a single flag, basically one of the bits. Using the different operator=() overloads enables some kind of "Key-Value-Notation" to convert directly to a FlagSet with the bit-pair at position m_bit set to one of the previously defined pairs. By default (operator FlagSet()) this converts to a Set(S) action on the given bit.
The class also provides some overloads for bitwise-OR that auto convert to FlagSet and operator&() to actually compare the Flag with a FlagSet. In this comparison both Set(S) and Toggle(T) are considered true while both Reset(R) and Default(D) are considered false.
Using this is incredibly simple and very close to the "usual" Flag-implementation:
constexpr Flag Flag1{0};
constexpr Flag Flag2{1};
constexpr Flag Flag3{2};
constexpr auto NoFlag1 = (Flag1=false); //Just for convenience, not really needed;
void foo(FlagSet f={0,0}){
f |= Flag1|Flag2; //This sets the default. Remember: default right, user left
cout << ((f & Flag1)?"1":"0");
cout << ((f & Flag2)?"2":"0");
cout << ((f & Flag3)?"3":"0");
cout << endl;
}
int main() {
foo();
foo(Flag3);
foo(Flag3|(Flag2=false));
foo(Flag3|NoFlag1);
foo((Flag1=Toggle)|(Flag2=Toggle)|(Flag3=Toggle));
return 0;
}
Output:
120
123
103
023
003
Test it on ideone
One last word about efficiency: while I didn't test it without all the constexpr keywords, with them this code:
bool test1(){
return Flag1&((Flag1=Toggle)|(Flag2=Toggle)|(Flag3=Toggle));
}
bool test2(){
FlagSet f = Flag1|Flag2 ;
return f & Flag1;
}
bool test3(FlagSet f){
f |= Flag1|Flag2 ;
return f & Flag1;
}
compiles to (using gcc 5.3 on gcc.godbolt.org)
test1():
movl $1, %eax
ret
test2():
movl $1, %eax
ret
test3(FlagSet):
movq %rdi, %rax
shrq $32, %rax
notl %eax
andl $1, %eax
ret
and while I'm not totally familiar with assembly code, this looks like very basic bitwise operations and is probably the fastest you can get without inlining the test functions.
If I understand the question, you can solve the problem by creating a simple class with an implicit constructor from bool and a default constructor:
class T
{
public:
    T(bool value) : def(false), value(value) {} // implicit constructor from bool
    T() : def(true), value(false) {}            // default constructor: marks the flag as "default"
    bool def;   // is flag default
    bool value; // flag value if flag isn't default
};
and using it in a function like this:
void f(..., T flag = T());

f(..., true); // call with flag = true
f(...);       // call with flag = default
If I understand correctly, you want a simple way to pass one or more flags to a function as a single parameter, and/or a simple way for an object to keep track of one or more flags in a single variable, correct? A simple approach would be to specify the flags as a typed enum, with an unsigned underlying type large enough to hold all the flags you need. For example:
/* Assuming C++11 compatibility. If you need to work with an older compiler, you'll have
* to manually insert the body of flag() into each BitFlag's definition, and replace
* FLAG_INVALID's definition with something like:
* FLAG_INVALID = static_cast<flag_t>(-1) -
* (FFalse + FTrue + FDefault + FToggle),
*/
#include <climits> // For CHAR_BIT.
#include <cstdint> // For uint8_t.
// Underlying flag type. Change as needed. Should remain unsigned.
typedef uint8_t flag_t;
// Helper functions, to provide cleaner syntax to the enum.
// Not actually necessary, they'll be evaluated at compile time either way.
constexpr flag_t flag(int f) { return 1 << f; }
constexpr flag_t fl_validate(int f) {
return (f ? (1 << f) + fl_validate(f - 1) : 1);
}
constexpr flag_t register_invalids(int f) {
// The static_cast is a type-independent maximum value for unsigned ints. The compiler
// may or may not complain.
// (f - 1) compensates for bits being zero-indexed.
return static_cast<flag_t>(-1) - fl_validate(f - 1);
}
// List of available flags.
enum BitFlag : flag_t {
FFalse = flag(0), // 0001
FTrue = flag(1), // 0010
FDefault = flag(2), // 0100
FToggle = flag(3), // 1000
// ...
// Number of defined flags.
FLAG_COUNT = 4,
// Indicator for invalid flags. Can be used to make sure parameters are valid, or
// simply to mask out any invalid ones.
FLAG_INVALID = register_invalids(FLAG_COUNT),
// Maximum number of available flags.
FLAG_MAX = sizeof(flag_t) * CHAR_BIT
};
// ...
void func(flag_t f);
// ...
class CL {
flag_t flags;
// ...
};
Note that this assumes that FFalse and FTrue should be distinct flags, both of which can be specified at the same time. If you want them to be mutually exclusive, a couple small changes would be necessary:
// ...
constexpr flag_t register_invalids(int f) {
// Compensate for 0th and 1st flags using the same bit.
return static_cast<flag_t>(-1) - fl_validate(f - 2);
}
// ...
enum BitFlag : flag_t {
FFalse = 0, // 0000
FTrue = flag(0), // 0001
FDefault = flag(1), // 0010
FToggle = flag(2), // 0100
// ...
Alternatively, instead of modifying the enum itself, you could modify flag():
// ...
constexpr flag_t flag(int f) {
// Give bit 0 special treatment as "false", shift all other flags down to compensate.
return (f ? 1 << (f - 1) : 0);
}
// ...
constexpr flag_t register_invalids(int f) {
return static_cast<flag_t>(-1) - fl_validate(f - 2);
}
// ...
enum BitFlag : flag_t {
FFalse = flag(0), // 0000
FTrue = flag(1), // 0001
FDefault = flag(2), // 0010
FToggle = flag(3), // 0100
// ...
While I believe this to be the simplest approach, and possibly the most memory-efficient if you choose the smallest possible underlying type for flag_t, it is likely also the least useful. [Also, if you end up using this or something similar, I would suggest hiding the helper functions in a namespace, to prevent unnecessary clutter in the global namespace.]
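For instance, that hiding could look roughly like this (a sketch; flag_detail is just a placeholder name):
namespace flag_detail {
    constexpr flag_t flag(int f)              { return 1 << f; }
    constexpr flag_t fl_validate(int f)       { return (f ? (1 << f) + fl_validate(f - 1) : 1); }
    constexpr flag_t register_invalids(int f) { return static_cast<flag_t>(-1) - fl_validate(f - 1); }
}

enum BitFlag : flag_t {
    FFalse = flag_detail::flag(0), // 0001
    FTrue  = flag_detail::flag(1), // 0010
    // ... remaining entries as before, just with qualified calls
};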
A simple example: is there a reason we cannot use an enum for this? Here is a solution that I have used recently:
// Example program
#include <iostream>
#include <string>
enum class Flag : int8_t
{
F_TRUE = 0x0, // Explicitly defined for readability
F_FALSE = 0x1,
F_DEFAULT = 0x2,
F_TOGGLE = 0x3
};
struct flags
{
Flag flag_1;
Flag flag_2;
Flag flag_3;
Flag flag_4;
};
int main()
{
flags my_flags;
my_flags.flag_1 = Flag::F_TRUE;
my_flags.flag_2 = Flag::F_FALSE;
my_flags.flag_3 = Flag::F_DEFAULT;
my_flags.flag_4 = Flag::F_TOGGLE;
std::cout << "size of flags: " << sizeof(flags) << "\n";
std::cout << (int)(my_flags.flag_1) << "\n";
std::cout << (int)(my_flags.flag_2) << "\n";
std::cout << (int)(my_flags.flag_3) << "\n";
std::cout << (int)(my_flags.flag_4) << "\n";
}
Here, we get the following output:
size of flags: 4
0
1
2
3
It's not quite memory efficient this way: each Flag is 8 bits, compared to the two bits (two one-bit bools) actually needed, for a 4x memory increase. However, we are afforded the benefits of enum class, which prevents some stupid programmer mistakes.
Now, I have another solution for when memory is critical. Here we pack 4 flags into an 8-bit struct. I came up with this one for a data editor, and it worked perfectly for my uses. However, there may be downsides that I am not aware of.
// Example program
#include <iostream>
#include <string>
enum Flag
{
F_TRUE = 0x0, // Explicitly defined for readability
F_FALSE = 0x1,
F_DEFAULT = 0x2,
F_TOGGLE = 0x3
};
struct PackedFlags
{
public:
bool flag_1_0:1;
bool flag_1_1:1;
bool flag_2_0:1;
bool flag_2_1:1;
bool flag_3_0:1;
bool flag_3_1:1;
bool flag_4_0:1;
bool flag_4_1:1;
public:
Flag GetFlag1()
{
return (Flag)(((int)flag_1_1 << 1) + (int)flag_1_0);
}
Flag GetFlag2()
{
return (Flag)(((int)flag_2_1 << 1) + (int)flag_2_0);
}
Flag GetFlag3()
{
return (Flag)(((int)flag_3_1 << 1) + (int)flag_3_0);
}
Flag GetFlag4()
{
return (Flag)(((int)flag_4_1 << 1) + (int)flag_4_0);
}
void SetFlag1(Flag flag)
{
flag_1_0 = (flag & (1 << 0));
flag_1_1 = (flag & (1 << 1));
}
void SetFlag2(Flag flag)
{
flag_2_0 = (flag & (1 << 0));
flag_2_1 = (flag & (1 << 1));
}
void SetFlag3(Flag flag)
{
flag_3_0 = (flag & (1 << 0));
flag_3_1 = (flag & (1 << 1));
}
void SetFlag4(Flag flag)
{
flag_4_0 = (flag & (1 << 0));
flag_4_1 = (flag & (1 << 1));
}
};
int main()
{
PackedFlags my_flags;
my_flags.SetFlag1(F_TRUE);
my_flags.SetFlag2(F_FALSE);
my_flags.SetFlag3(F_DEFAULT);
my_flags.SetFlag4(F_TOGGLE);
std::cout << "size of flags: " << sizeof(my_flags) << "\n";
std::cout << (int)(my_flags.GetFlag1()) << "\n";
std::cout << (int)(my_flags.GetFlag2()) << "\n";
std::cout << (int)(my_flags.GetFlag3()) << "\n";
std::cout << (int)(my_flags.GetFlag4()) << "\n";
}
Output:
size of flags: 1
0
1
2
3
Before you ask, I've looked and looked for this on SO, and cannot find a solid answer.
I need to be able to dynamically iterate over an enum that has non-incremental values, as an example:
typedef enum {
CAPI_SUBTYPE_NULL = 0, /* Null subtype. */
CAPI_SUBTYPE_DIAG_DFD = 1, /* Data Flow diag. */
CAPI_SUBTYPE_DIAG_ERD = 2, /* Entity-Relationship diag. */
CAPI_SUBTYPE_DIAG_STD = 3, /* State Transition diag. */
CAPI_SUBTYPE_DIAG_STC = 4, /* Structure Chart diag. */
CAPI_SUBTYPE_DIAG_DSD = 5, /* Data Structure diag. */
CAPI_SUBTYPE_SPEC_PROCESS = 6, /* Process spec. */
CAPI_SUBTYPE_SPEC_MODULE = 7, /* Module spec. */
CAPI_SUBTYPE_SPEC_TERMINATOR = 8, /* Terminator spec. */
CAPI_SUBTYPE_DD_ALL = 13, /* DD Entries (All). */
CAPI_SUBTYPE_DD_COUPLE = 14, /* DD Entries (Couples). */
CAPI_SUBTYPE_DD_DATA_AREA = 15, /* DD Entries (Data Areas). */
CAPI_SUBTYPE_DD_DATA_OBJECT = 16, /* DD Entries (Data Objects). */
CAPI_SUBTYPE_DD_FLOW = 17, /* DD Entries (Flows). */
CAPI_SUBTYPE_DD_RELATIONSHIP = 18, /* DD Entries (Relationships). */
CAPI_SUBTYPE_DD_STORE = 19, /* DD Entries (Stores). */
CAPI_SUBTYPE_DIAG_PAD = 35, /* Physical architecture diagram. */
CAPI_SUBTYPE_DIAG_BD = 36, /* Behaviour diagram. */
CAPI_SUBTYPE_DIAG_UCD = 37, /* UML Use case diagram. */
CAPI_SUBTYPE_DIAG_PD = 38, /* UML Package diagram. */
CAPI_SUBTYPE_DIAG_COD = 39, /* UML Collaboration diagram. */
CAPI_SUBTYPE_DIAG_SQD = 40, /* UML Sequence diagram. */
CAPI_SUBTYPE_DIAG_CD = 41, /* UML Class diagram. */
CAPI_SUBTYPE_DIAG_SCD = 42, /* UML State chart. */
CAPI_SUBTYPE_DIAG_ACD = 43, /* UML Activity chart. */
CAPI_SUBTYPE_DIAG_CPD = 44, /* UML Component diagram. */
CAPI_SUBTYPE_DIAG_DPD = 45, /* UML Deployment diagram. */
CAPI_SUBTYPE_DIAG_PFD = 47, /* Process flow diagram. */
CAPI_SUBTYPE_DIAG_HIER = 48, /* Hierarchy diagram. */
CAPI_SUBTYPE_DIAG_IDEF0 = 49, /* IDEF0 diagram. */
CAPI_SUBTYPE_DIAG_AID = 50, /* AID diagram. */
CAPI_SUBTYPE_DIAG_SAD = 51, /* SAD diagram. */
CAPI_SUBTYPE_DIAG_ASG = 59 /* ASG diagram. */
} CAPI_SUBTYPE_E ;
The reason I'd like to be able to do this is that the enum is given in an API (which I cannot change, obviously), and I would prefer to be able to iterate over these values regardless of the API version.
Any direction is appreciated.
With C++, the only way to iterate through enums is to store them in an array and iterate over that. The main challenge is how to keep the enum declaration and the array declaration in the same order.
You can automate keeping that order in sync between the enum and the array. I feel that this is a decent way:
// CAPI_SUBTYPE_E_list.h
// This header file contains all the enum in the order
// Whatever order is set will be followed everywhere
NAME_VALUE(CAPI_SUBTYPE_NULL, 0), /* Null subtype. */
NAME_VALUE(CAPI_SUBTYPE_DIAG_DFD, 1), /* Data Flow diag. */
NAME_VALUE(CAPI_SUBTYPE_DIAG_ERD, 2), /* Entity-Relationship diag. */
...
NAME_VALUE(CAPI_SUBTYPE_DD_ALL, 13), /* DD Entries (All). */
NAME_VALUE(CAPI_SUBTYPE_DD_COUPLE, 14), /* DD Entries (Couples). */
...
NAME_VALUE(CAPI_SUBTYPE_DIAG_ASG, 59) /* ASG diagram. */
Now you #include this file in your enum declaration and array declaration both places with macro redefinition:
// Enum.h
typedef enum {
#define NAME_VALUE(NAME,VALUE) NAME = VALUE
#include"CAPI_SUBTYPE_E_list.h"
#undef NAME_VALUE
}CAPI_SUBTYPE_E;
And put the same file for array with other macro definition:
// array file
// Either this array can be declared `static` or inside unnamed `namespace` to make
// ... it visible through a header file; Or it should be declared `extern` and keep ...
// ... the record of its size; declare a getter method for both array and the size
unsigned int CAPI_SUBTYPE_E_Array [] = {
#define NAME_VALUE(NAME,VALUE) NAME
#include"CAPI_SUBTYPE_E_list.h"
#undef NAME_VALUE
};
Now iterate in C++03 as:
for(unsigned int i = 0, size = sizeof(CAPI_SUBTYPE_E_Array)/sizeof(CAPI_SUBTYPE_E_Array[0]);
i < size; ++i)
or, more simply, in C++11:
for(auto i : CAPI_SUBTYPE_E_Array)
It is a bit tricky and more of a C practice than a C++ one, but you can use X macros. It is very ugly and you need to keep the TABLE in the right order. In C++ I believe we don't really need to iterate over enumerations, and moreover we shouldn't need to assign explicit values to them (ostensibly the enumeration values are arbitrary in every compilation). So think of it as a joke :)
#include <iostream>
#define CAPI_SUBTYPE_TABLE \
CAPI_SUBTYPE_X(CAPI_SUBTYPE_NULL, 0 ) \
CAPI_SUBTYPE_X(CAPI_SUBTYPE_DIAG_DFD, 1 ) \
CAPI_SUBTYPE_X(CAPI_SUBTYPE_DD_ALL, 13)
#define CAPI_SUBTYPE_X(name, value) name = value,
enum CAPI_SUBTYPE
{
CAPI_SUBTYPE_TABLE
CAPI_SUBTYPE_END
};
#undef CAPI_SUBTYPE_X
#define CAPI_SUBTYPE_X(name, value) name,
CAPI_SUBTYPE subtype_iteratable[] =
{
CAPI_SUBTYPE_TABLE
CAPI_SUBTYPE_END
};
#undef CAPI_SUBTYPE_X
#define CAPI_SUBTYPE_SIZE (sizeof(subtype_iteratable) / sizeof(subtype_iteratable[0]) - 1)
int main()
{
for (unsigned i = 0; i < CAPI_SUBTYPE_SIZE; ++i)
std::cout << subtype_iteratable[i] << std::endl; // 0, 1, 13
}
I agree with the already given statements that this isn't possible without either altering or copying the definitions of the enum. However, in C++11 (maybe even C++03?) you can go as far as providing a syntax where all you have to do (literally) is to copy and paste the enumerator definitions from the enum into a macro. This works as long as every enumerator has an explicit definition (using =).
Edit: You can expand this to work even if not every enumerator has an explicit definition, but this shouldn't be required in this case.
I've once developed this for some physicists, so the example is about particles.
Usage example:
// required for this example
#include <iostream>
enum ParticleEnum
{
PROTON = 11,
ELECTRON = 42,
MUON = 43
};
// define macro (see below)
MAKE_ENUM(
ParticleEnum, // name of enum type
particle_enum_detail, // some namespace to place some types in
all_particles, // name of array to list all enumerators
// paste the enumerator definitions of your enum here
PROTON = 11,
ELECTRON = 42,
MUON = 43
) // don't forget the macro's closing parenthesis
int main()
{
for(ParticleEnum p : all_particles)
{
std::cout << p << ", ";
}
}
The macro expands (effectively) to:
namespace particle_enum_detail
{
// definition of a type and some constants
constexpr ParticleEnum all_particles[] = {
PROTON,
ELECTRON,
MUON
};
}
using particle_enum_detail::all_particles;
macro definition
#define MAKE_ENUM(ENUM_TYPE, NAMESPACE, ARRAY_NAME, ...) \
namespace NAMESPACE \
{ \
struct iterable_enum_ \
{ \
using storage_type = ENUM_TYPE; \
template < typename T > \
constexpr iterable_enum_(T p) \
: m{ static_cast<storage_type>(p) } \
{} \
constexpr operator storage_type() \
{ return m; } \
template < typename T > \
constexpr iterable_enum_ operator= (T p) \
{ return { static_cast<storage_type>(p) }; } \
private: \
storage_type m; \
}; \
\
/* the "enumeration" */ \
constexpr iterable_enum_ __VA_ARGS__; \
/* the array to store all "enumerators" */ \
constexpr ENUM_TYPE ARRAY_NAME[] = { __VA_ARGS__ }; \
} \
using NAMESPACE::ARRAY_NAME; // macro end
Note: the type iterable_enum_ could as well be defined once outside the macro.
macro explanation
The idea is to allow a syntax like proton = 11, electron = 12 within the macro invocation. This works very easily for any kind of declaration, yet it causes problems when storing the names:
#define MAKE_ENUM(ASSIGNMEN1, ASSIGNMENT2) \
enum my_enum { ASSIGNMENT1, ASSIGNMENT2 }; \
my_enum all[] = { ASSIGNMENT1, ASSIGNMENT2 };
MAKE_ENUM(proton = 11, electron = 22);
expands to:
enum my_enum { proton = 11, electron = 22 }; // would be OK
my_enum all[] = { proton = 11, electron = 22 }; // cannot assign to enumerator
As with many syntactical tricks, operator overloading provides a way to overcome this problem; but the assignment operator has to be a member function - and enums are not classes.
So why not use some constant objects instead of an enum?
enum my_enum { proton = 11, electron = 22 };
// alternatively
constexpr int proton = 11, electron = 12;
// the `constexpr` here is equivalent to a `const`
This does not yet solve our problem, it just demonstrates we can easily replace enums by a list of constants if we don't need the auto-increment feature of enumerators.
Now, the syntactical trick with operator overloading:
struct iterable_enum_
{
// the trick: a constexpr assignment operator
constexpr iterable_enum_ operator= (int p) // (op)
{ return {p}; }
// we need a ctor for the syntax `object = init`
constexpr iterable_enum_(int p) // (ctor)
: m{ static_cast<ParticleEnum>(p) }
{}
private:
ParticleEnum m;
};
constexpr iterable_enum_ proton = 11, electron = 22; // (1)
iterable_enum_ all_particles[] = { proton = 11, electron = 22 }; // (2)
The trick is: in line (1) the = designates a copy-initialisation, which is done by converting the number (11, 22) to a temporary of type iterable_enum_ using the (ctor) and copying/moving the temporary via an implicitly-defined ctor to the destination object (proton, electron).
In contrast, the = in line (2) is resolved to an operator call to (op), which effectively returns a copy of the object on which it has been called (*this). The constexpr stuff allows these variables to be used at compile time, e.g. in a template declaration. Due to restrictions on constexpr functions, we cannot simply return *this in the (op) function. Additionally, constexpr implies all restrictions of const.
By providing an implicit conversion operator, you can create the array in line (2) of type ParticleEnum:
// in struct iterable_enum_
constexpr operator ParticleEnum() { return m; }
// in namespace particle_enum_detail
ParticleEnum all_particles[] = { proton = 11, electron = 22 };
Based on the articles given at the beginning of the question, I derived a solution based on the assumption that you know the invalid ranges.
I would really like to know whether this is a good solution.
First, end your enum with something like CAPI_END = 60; it helps with iterating. So my code is:
#include <iostream>
using std::cout;
using std::endl;

typedef enum {
CAPI_SUBTYPE_NULL = 0, /* Null subtype. */
CAPI_SUBTYPE_DIAG_DFD = 1, /* Data Flow diag. */
CAPI_SUBTYPE_DIAG_ERD = 2, /* Entity-Relationship diag. */
CAPI_SUBTYPE_DIAG_STD = 3, /* State Transition diag. */
CAPI_SUBTYPE_DIAG_STC = 4, /* Structure Chart diag. */
CAPI_SUBTYPE_DIAG_DSD = 5, /* Data Structure diag. */
CAPI_SUBTYPE_SPEC_PROCESS = 6, /* Process spec. */
CAPI_SUBTYPE_SPEC_MODULE = 7, /* Module spec. */
CAPI_SUBTYPE_SPEC_TERMINATOR = 8, /* Terminator spec. */
CAPI_SUBTYPE_DD_ALL = 13, /* DD Entries (All). */
CAPI_SUBTYPE_DD_COUPLE = 14, /* DD Entries (Couples). */
CAPI_SUBTYPE_DD_DATA_AREA = 15, /* DD Entries (Data Areas). */
CAPI_SUBTYPE_DD_DATA_OBJECT = 16, /* DD Entries (Data Objects). */
CAPI_SUBTYPE_DD_FLOW = 17, /* DD Entries (Flows). */
CAPI_SUBTYPE_DD_RELATIONSHIP = 18, /* DD Entries (Relationships). */
CAPI_SUBTYPE_DD_STORE = 19, /* DD Entries (Stores). */
CAPI_SUBTYPE_DIAG_PAD = 35, /* Physical architecture diagram. */
CAPI_SUBTYPE_DIAG_BD = 36, /* Behaviour diagram. */
CAPI_SUBTYPE_DIAG_UCD = 37, /* UML Use case diagram. */
CAPI_SUBTYPE_DIAG_PD = 38, /* UML Package diagram. */
CAPI_SUBTYPE_DIAG_COD = 39, /* UML Collaboration diagram. */
CAPI_SUBTYPE_DIAG_SQD = 40, /* UML Sequence diagram. */
CAPI_SUBTYPE_DIAG_CD = 41, /* UML Class diagram. */
CAPI_SUBTYPE_DIAG_SCD = 42, /* UML State chart. */
CAPI_SUBTYPE_DIAG_ACD = 43, /* UML Activity chart. */
CAPI_SUBTYPE_DIAG_CPD = 44, /* UML Component diagram. */
CAPI_SUBTYPE_DIAG_DPD = 45, /* UML Deployment diagram. */
CAPI_SUBTYPE_DIAG_PFD = 47, /* Process flow diagram. */
CAPI_SUBTYPE_DIAG_HIER = 48, /* Hierarchy diagram. */
CAPI_SUBTYPE_DIAG_IDEF0 = 49, /* IDEF0 diagram. */
CAPI_SUBTYPE_DIAG_AID = 50, /* AID diagram. */
CAPI_SUBTYPE_DIAG_SAD = 51, /* SAD diagram. */
CAPI_SUBTYPE_DIAG_ASG = 59, /* ASG diagram. */
CAPI_END = 60 /* just to mark the end of your enum */
} CAPI_SUBTYPE_E ;
CAPI_SUBTYPE_E& operator++(CAPI_SUBTYPE_E& capi)
{
const int ranges = 2; // you have 2 invalid ranges in your example
int invalid[ranges][2] = {{8, 12}, {19, 34}}; // {last valid value before the gap, last invalid value in the gap}
CAPI_SUBTYPE_E next = CAPI_SUBTYPE_NULL;
for (int i = 0; i < ranges; i++)
if ( capi >= invalid[i][0] && capi < invalid[i][1] ) {
next = static_cast<CAPI_SUBTYPE_E>(invalid[i][1] + 1);
break;
} else {
next = static_cast<CAPI_SUBTYPE_E>(capi + 1);
}
// if ( next > CAPI_END )
// throw an exception
return capi = next;
}
int main()
{
for(CAPI_SUBTYPE_E i = CAPI_SUBTYPE_NULL; i < CAPI_END; ++i)
cout << i << endl;
cout << endl;
}
I'm providing only a pre-increment operator. A post-increment operator is left to be implemented later.
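For reference, the post-increment could be written in terms of the pre-increment in the usual way (my sketch):
CAPI_SUBTYPE_E operator++(CAPI_SUBTYPE_E& capi, int) // postfix version
{
    CAPI_SUBTYPE_E old = capi;
    ++capi;          // reuse the prefix operator defined above
    return old;
}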
The answer is "no, you cannot iterate over the elements of an enum in C++03 or C++11".
Now, you can describe the set of values of an enum in a way that can be understood at compile time.
template<typename E, E... Es>
struct TypedEnumList {};
typedef TypedEnumList<
CAPI_SUBTYPE_E,
CAPI_SUBTYPE_NULL, // etc
// ...
CAPI_SUBTYPE_DIAG_ASG
> CAPI_SUBTYPE_E_LIST;
which gives you a type CAPI_SUBTYPE_E_LIST which encapsulates the list of enum values.
We can then populate an array with these easily:
template<typename T, T... Es>
std::array<T, sizeof...(Es)> GetRuntimeArray( TypedEnumList<T, Es... > ) {
return { Es... };
}
auto Capis = GetRuntimeArray( CAPI_SUBTYPE_E_LIST() );
if you really need it. But this is just a special case of the more general case of being able to generate code for each element of your enum CAPI_SUBTYPE_E -- directly building a for loop isn't needed.
Amusingly, with a compliant C++11 compiler, we could write code that would generate our CAPI_SUBTYPE_E_LIST with particular enum elements iff those elements are actually in CAPI_SUBTYPE_E using SFINAE. This would be useful because we can take the most recent version of the API we can support, and have it auto-degrade (at compile time) if the API we compile against is more primitive.
To demonstrate the technique, I'll start with a toy enum
enum Foo { A = 0, /* B = 1 */ };
Imagine that B=1 is uncommented in the most modern version of the API, but is not there in the more primitive.
#include <type_traits> // for std::enable_if

template<int index, typename EnumList, typename=void>
struct AddElementN: AddElementN<index-1, EnumList> {};
template<typename EnumList>
struct AddElementN<-1, EnumList, void> {
typedef EnumList type;
};
template<typename Enum, Enum... Es>
struct AddElementN<0, TypedEnumList<Enum, Es...>, typename std::enable_if< Enum::A == Enum::A >::type >:
AddElementN<-1, TypedEnumList<Enum, A, Es...>>
{};
template<typename Enum, Enum... Es>
struct AddElementN<1, TypedEnumList<Enum, Es...>, typename std::enable_if< Enum::B == Enum::B >::type >:
AddElementN<0, TypedEnumList<Enum, B, Es...>>
{};
// specialize this for your enum to call AddElementN:
template<typename Enum>
struct BuildTypedList;
template<>
struct BuildTypedList<CAPI_SUBTYPE_E>:
AddElementN<1, TypedEnumList<CAPI_SUBTYPE_E>>
{};
template<typename Enum>
using TypedList = typename BuildTypedList<Enum>::type;
Now, if I wrote that right, TypedList<CAPI_SUBTYPE_E> contains B iff B is defined as an element of CAPI_SUBTYPE_E. This lets you compile against more than one version of the library and get a different set of elements in your enum element list depending on what is in the library. You do have to maintain that annoying boilerplate (which could probably be made easier with macros or code generation) against the "final" version of the enum's elements, but it should automatically handle previous versions at compile time.
This sadly requires lots of maintenance to work.
Finally, your requirement that this be dynamic: the only practical way for this to be dynamic is to wrap the 3rd party API in code that knows what the version of the API is, and exposes a different buffer of enum values (I'd put it in a std::vector) depending on what the version the API is. Then when you load the API, you also load this helper wrapper, which then uses the above techniques to build the set of elements of the enum, which you iterate over.
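A rough sketch of that wrapper idea (my illustration; get_api_version() is a hypothetical accessor, not part of the real API):
#include <vector>

std::vector<CAPI_SUBTYPE_E> capi_subtypes_for(int api_version /* hypothetical */)
{
    std::vector<CAPI_SUBTYPE_E> values = {
        CAPI_SUBTYPE_NULL, CAPI_SUBTYPE_DIAG_DFD, CAPI_SUBTYPE_DIAG_ERD
        // ... subtypes present in every supported version
    };
    if (api_version >= 2) {                   // hypothetical version gate
        values.push_back(CAPI_SUBTYPE_DIAG_ASG);
        // ... subtypes added in later versions
    }
    return values;
}

// for (CAPI_SUBTYPE_E v : capi_subtypes_for(get_api_version())) { /* ... */ }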
Some of this boilerplate can be made easier to write with some horrible macros, like ones that build the various AddElementN type SFINAE code by using the __LINE__ to index the recursive types. But that would be horrible.
Somewhat clearer (???) with a bit of boost preprocessing.
You define your enums by a sequence
#define CAPI_SUBTYPE_E_Sequence \
(CAPI_SUBTYPE_NULL)(0) \
(CAPI_SUBTYPE_DIAG_DFD)(1) ...
then you can automate (through macros) the declaration of the enum,
DECL_ENUM(CAPI_SUBTYPE_E) ;
the table that indexes it
DECL_ENUM_TABLE(CAPI_SUBTYPE_E);
the number of enums / size of the table
ENUM_SIZE(CAPI_SUBTYPE_E)
and access to it:
ITER_ENUM_i(i,CAPI_SUBTYPE_E)
Here is the full text.
#include <boost/preprocessor.hpp>
// define your enum as (name)(value) sequence
#define CAPI_SUBTYPE_E_Sequence \
(CAPI_SUBTYPE_NULL)(0) /* Null subtype. */ \
(CAPI_SUBTYPE_DIAG_DFD)(1) /* Data Flow diag. */ \
(CAPI_SUBTYPE_DIAG_ERD)(2) /* Entity-Relationship diag. */ \
(CAPI_SUBTYPE_DIAG_DSD)(5) /* Data Structure diag. */ \
(CAPI_SUBTYPE_DD_ALL)(13) /* DD Entries (All). */
// # enums
#define ENUM_SIZE(name) \
BOOST_PP_DIV(BOOST_PP_SEQ_SIZE(BOOST_PP_CAT(name,_Sequence)),2)
#define ENUM_NAME_N(N,seq) BOOST_PP_SEQ_ELEM(BOOST_PP_MUL(N,2),seq)
#define ENUM_VALUE_N(N,seq) BOOST_PP_SEQ_ELEM(BOOST_PP_INC(BOOST_PP_MUL(N,2)),seq)
// declare Nth enum
#define DECL_ENUM_N(Z,N,seq) \
BOOST_PP_COMMA_IF(N) ENUM_NAME_N(N,seq) = ENUM_VALUE_N(N,seq)
// declare whole enum
#define DECL_ENUM(name) \
typedef enum { \
BOOST_PP_REPEAT( ENUM_SIZE(name) , DECL_ENUM_N , BOOST_PP_CAT(name,_Sequence) ) \
} name
DECL_ENUM(CAPI_SUBTYPE_E) ;
// declare Nth enum value
#define DECL_ENUM_TABLE_N(Z,N,seq) \
BOOST_PP_COMMA_IF(N) ENUM_NAME_N(N,seq)
// declare table
#define DECL_ENUM_TABLE(name) \
static const name BOOST_PP_CAT(name,_Table) [ENUM_SIZE(name)] = { \
BOOST_PP_REPEAT( ENUM_SIZE(name) , DECL_ENUM_TABLE_N , BOOST_PP_CAT(name,_Sequence) ) \
}
DECL_ENUM_TABLE(CAPI_SUBTYPE_E);
#define ITER_ENUM_i(i,name) BOOST_PP_CAT(name,_Table) [i]
// demo
// outputs : [0:0] [1:1] [2:2] [3:5] [4:13]
#include <iostream>
int main() {
for (int i=0; i<ENUM_SIZE(CAPI_SUBTYPE_E) ; i++)
std::cout << "[" << i << ":" << ITER_ENUM_i(i,CAPI_SUBTYPE_E) << "] ";
return 0;
}
// bonus : check enums are unique and in-order
#include <boost/preprocessor/stringize.hpp>
#include <boost/static_assert.hpp>
#define CHECK_ENUM_N(Z,N,seq) \
BOOST_PP_IF( N , \
BOOST_STATIC_ASSERT_MSG( \
ENUM_VALUE_N(BOOST_PP_DEC(N),seq) < ENUM_VALUE_N(N,seq) , \
BOOST_PP_STRINGIZE( ENUM_NAME_N(BOOST_PP_DEC(N),seq) ) " not < " BOOST_PP_STRINGIZE( ENUM_NAME_N(N,seq) ) ) \
, ) ;
#define CHECK_ENUM(name) \
namespace { void BOOST_PP_CAT(check_enum_,name) () { \
BOOST_PP_REPEAT( ENUM_SIZE(name) , CHECK_ENUM_N , BOOST_PP_CAT(name,_Sequence) ) } }
// enum OK
CHECK_ENUM(CAPI_SUBTYPE_E)
#define Bad_Enum_Sequence \
(one)(1)\
(five)(5)\
(seven)(7)\
(three)(3)
// enum not OK : enum_iter.cpp(81): error C2338: seven not < three
CHECK_ENUM(Bad_Enum)
You cannot iterate over an arbitrary enum in C++. For iterating, the values should be put in some container. You can automate maintaining such a container using 'enum classes' as described here: http://www.drdobbs.com/when-enum-just-isnt-enough-enumeration-c/184403955
Use Higher Order macros
Here's the technique we've been using in our projects.
Concept:
The idea is to generate a macro called LISTING which contains the definition of name-value pairs and it takes another macro as an argument. In the example below I defined two such helper macros. 'GENERATE_ENUM' to generate the enum and 'GENERATE_ARRAY' to generate an iteratable array. Of course this can be extended as necessary. I think this solution gives you the most bang for the buck.
Conceptually it's very similar to iammilind's solution.
Example:
// helper macros
#define GENERATE_ENUM(key, value) key = value
#define GENERATE_ARRAY(name, value) name
// Since this is C++, I took the liberty of wrapping everything in a namespace.
// This is done mostly for aesthetic reasons; you don't have to if you don't want to.
namespace CAPI_SUBTYPES
{
// I define a macro containing the key value pairs
#define LISTING(m) \
m(NONE, 0), /* Note: I can't use NULL here because it conflicts */ \
m(DIAG_DFD, 1), \
m(DIAG_ERD, 2), \
...
m(DD_ALL, 13), \
m(DD_COUPLE, 14), \
...
m(DIAG_SAD, 51), \
m(DIAG_ASG, 59)
typedef enum {
LISTING(GENERATE_ENUM)
} Enum;
const Enum At[] = {
LISTING(GENERATE_ARRAY)
};
const unsigned int Count = sizeof(At)/sizeof(At[0]);
}
Usage:
Now in code you can refer to the enum like this:
CAPI_SUBTYPES::Enum eVariable = CAPI_SUBTYPES::DIAG_STD;
You can iterate over the enumeration like this:
for (unsigned int i=0; i<CAPI_SUBTYPES::Count; i++) {
...
CAPI_SUBTYPES::Enum eVariable = CAPI_SUBTYPES::At[i];
...
}
Note:
If memory serves me right, C++11 scoped enums (enum class) live in their own scope (like in Java or C#), so with a scoped enum the above usage wouldn't work; you would have to refer to the enum values as CAPI_SUBTYPES::Enum::FooBar.
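As an aside on extending the LISTING technique, a third expander (my sketch; the GENERATE_STRING name is made up here) can emit printable names in the same order as the enum and the array:
#define GENERATE_STRING(key, value) #key

namespace CAPI_SUBTYPES
{
    const char* const Names[] = {
        LISTING(GENERATE_STRING)
    };
}

// Names[i] is then the textual name of At[i], e.g. "DIAG_DFD".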
Put them into an array or other container and iterate over that. If you modify the enum, you will have to update the code that puts them in the container.
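A minimal sketch of that approach (my illustration):
static const CAPI_SUBTYPE_E kAllSubtypes[] = {
    CAPI_SUBTYPE_NULL,
    CAPI_SUBTYPE_DIAG_DFD,
    // ... list every enumerator; this is the part to update when the enum changes
    CAPI_SUBTYPE_DIAG_ASG
};

// for (CAPI_SUBTYPE_E v : kAllSubtypes) { /* use v */ }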
The beginnings of a solution involving no macros and (almost) no runtime overhead:
#include <iostream>
#include <type_traits> // for std::integral_constant
#include <utility>
#include <boost/mpl/vector.hpp>
#include <boost/mpl/find.hpp>
template<int v> using has_value = std::integral_constant<int, v>;
template<class...EnumValues>
struct better_enum
{
static constexpr size_t size = sizeof...(EnumValues);
using value_array = int[size];
static const value_array& values() {
static const value_array _values = { EnumValues::value... };
return _values;
}
using name_array = const char*[size];
static const name_array& names() {
static const name_array _names = { EnumValues::name()... };
return _names;
}
using enum_values = boost::mpl::vector<EnumValues...>;
struct iterator {
explicit iterator(size_t i) : index(i) {}
const char* name() const {
return names()[index];
}
int value() const {
return values()[index];
}
operator int() const {
return value();
}
void operator++() {
++index;
}
bool operator==(const iterator& it) const {
return index == it.index;
}
bool operator!=(const iterator& it) const {
return index != it.index;
}
const iterator& operator*() const {
return *this;
}
private:
size_t index;
};
friend std::ostream& operator<<(std::ostream& os, const iterator& iter)
{
os << "{ " << iter.name() << ", " << iter.value() << " }";
return os;
}
template<class EnumValue>
static iterator find() {
using iter = typename boost::mpl::find<enum_values, EnumValue>::type;
static_assert(iter::pos::value < size, "attempt to find a value which is not part of this enum");
return iterator { iter::pos::value };
}
static iterator begin() {
return iterator { 0 };
}
static iterator end() {
return iterator { size };
}
};
struct Pig : has_value<0> { static const char* name() { return "Pig";} };
struct Dog : has_value<7> { static const char* name() { return "Dog";} };
struct Cat : has_value<100> { static const char* name() { return "Cat";} };
struct Horse : has_value<90> { static const char* name() { return "Horse";} };
struct Monkey : has_value<1000> { static const char* name() { return "Monkey";} };
using animals = better_enum<
Pig,
Dog,
Cat,
Horse
>;
using namespace std;
auto main() -> int
{
cout << "size : " << animals::size << endl;
for (auto v : animals::values())
cout << v << endl;
for (auto v : animals::names())
cout << v << endl;
cout << "full iteration:" << endl;
for (const auto& i : animals())
{
cout << i << endl;
}
cout << "individials" << endl;
auto animal = animals::find<Dog>();
cout << "found : " << animal << endl;
while (animal != animals::find<Horse>()) {
cout << animal << endl;
++animal;
}
// will trigger the static_assert auto xx = animals::find<Monkey>();
return 0;
}
output:
size : 4
0
7
100
90
Pig
Dog
Cat
Horse
full iteration:
{ Pig, 0 }
{ Dog, 7 }
{ Cat, 100 }
{ Horse, 90 }
individuals
found : { Dog, 7 }
{ Dog, 7 }
{ Cat, 100 }
Since an enum does not allow iteration you have to create an alternative representation of the enum and its range of values.
The approach that I would take would be a simple table lookup embedded in a class. The problem is that as the API modifies its enum with new entries, you would also need to update the constructor for this class.
The simple class that I would use would be comprised of a constructor to build the table along with a couple of methods to iterate over the table. Since you also want to know if there is a problem with the table size when adding items, you can have an assert() fire in debug mode. In the source example below I use the preprocessor to test whether this is a debug compile and whether assert has been included, so as to provide a mechanism for basic consistency checking.
I have borrowed an idea from P. J. Plauger in his book The Standard C Library: using a simple lookup table for ANSI character operations, in which the character is used to index into a table.
To use this class you would do something like the following which uses a for loop to iterate over the set of values in the table. Within the body of the loop you would do whatever you want to do with the enum values.
CapiEnum myEnum;
for (CAPI_SUBTYPE_E jj = myEnum.Begin(); !myEnum.End(); jj = myEnum.Next()) {
// do stuff with the jj enum value
}
Since this class enumerates over the values, I have arbitrarily chosen to return the value of CAPI_SUBTYPE_NULL in those cases where we have reached the end of the enumeration. So the return value in the case of a table lookup error is in the valid range; however, it cannot be depended upon. Therefore a check on the End() method should be done to see if the end of an iteration has been reached. Also, after constructing the object one can check the m_TableError data member to see if there was an error during construction.
The source example for the class follows. You would need to update the constructor with the enum values for the API as they change. Unfortunately there is not much that can be done to automate the check against an updated enum; however, we do have tests in place in a debug compile to check that the table is large enough and that each enum value being put into the table is within range of the table size.
class CapiEnum {
public:
CapiEnum (void); // constructor
CAPI_SUBTYPE_E Begin (void); // method to call to begin an iteration
CAPI_SUBTYPE_E Next (void); // method to get the next in the series of an iteration
bool End (void); // method to indicate if we have reached the end or not
bool Check (CAPI_SUBTYPE_E value); // method to see if value specified is in the table
bool m_TableError;
private:
static const int m_TableSize = 256; // set the lookup table size
static const int m_UnusedTableEntry = -1;
int m_iIterate;
bool m_bEndReached;
CAPI_SUBTYPE_E m_CapiTable[m_TableSize];
};
#if defined(_DEBUG)
#if defined(assert)
#define ADD_CAPI_ENUM_ENTRY(capi) (((capi) < m_TableSize && (capi) > m_UnusedTableEntry) ? (m_CapiTable[(capi)] = (capi)) : assert(((capi) < m_TableSize) && ((capi) > m_UnusedTableEntry)))
#else
#define ADD_CAPI_ENUM_ENTRY(capi) (((capi) < m_TableSize && (capi) > m_UnusedTableEntry) ? (m_CapiTable[(capi)] = (capi)) : (m_TableError = true))
#endif
#else
#define ADD_CAPI_ENUM_ENTRY(capi) (m_CapiTable[(capi)] = (capi))
#endif
CapiEnum::CapiEnum (void) : m_bEndReached(true), m_iIterate(0), m_TableError(false)
{
for (int iLoop = 0; iLoop < m_TableSize; iLoop++) m_CapiTable[iLoop] = static_cast <CAPI_SUBTYPE_E> (m_UnusedTableEntry);
ADD_CAPI_ENUM_ENTRY(CAPI_SUBTYPE_NULL);
// .....
ADD_CAPI_ENUM_ENTRY(CAPI_SUBTYPE_DIAG_ASG);
}
CAPI_SUBTYPE_E CapiEnum::Begin (void)
{
m_bEndReached = false;
for (m_iIterate = 0; m_iIterate < m_TableSize; m_iIterate++) {
if (m_CapiTable[m_iIterate] > m_UnusedTableEntry) return m_CapiTable[m_iIterate];
}
m_bEndReached = true;
return CAPI_SUBTYPE_NULL;
}
CAPI_SUBTYPE_E CapiEnum::Next (void)
{
if (!m_bEndReached) {
for (m_iIterate++; m_iIterate < m_TableSize; m_iIterate++) {
if (m_CapiTable[m_iIterate] > m_UnusedTableEntry) return m_CapiTable[m_iIterate];
}
}
m_bEndReached = true;
return CAPI_SUBTYPE_NULL;
}
bool CapiEnum::End (void)
{
return m_bEndReached;
}
bool CapiEnum::Check (CAPI_SUBTYPE_E value)
{
return (value >= 0 && value < m_TableSize && m_CapiTable[value] > m_UnusedTableEntry);
}
And if you like you could add an additional method to retrieve the current value of the iteration. Notice that rather than incrementing to the next entry, the Current() method uses whatever the iteration index is currently at and starts searching from the current position. So if the current position is a valid value it just returns it; otherwise it will find the next valid value. Alternatively, you could make this just return the table value currently pointed to by the index and, if the value is invalid, set an error indicator.
CAPI_SUBTYPE_E CapiEnum::Current (void)
{
if (!m_bEndReached) {
for (; m_iIterate < m_TableSize; m_iIterate++) {
if (m_CapiTable[m_iIterate] > m_UnusedTableEntry) return m_CapiTable[m_iIterate];
}
}
m_bEndReached = true;
return CAPI_SUBTYPE_NULL;
}
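The alternative mentioned above (return the table value the index currently points at, and set an error indicator when that slot is invalid) could look roughly like this; the m_bValueError member is a hypothetical addition of this sketch:
CAPI_SUBTYPE_E CapiEnum::Current (void)
{
    if (!m_bEndReached && m_iIterate < m_TableSize && m_CapiTable[m_iIterate] > m_UnusedTableEntry) {
        m_bValueError = false;   // the current slot holds a valid enum value
        return m_CapiTable[m_iIterate];
    }
    m_bValueError = true;        // invalid slot, or the iteration has already finished
    return CAPI_SUBTYPE_NULL;
}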
Here's one more approach. One bonus is that your compiler may warn you if you omit an enum value in a switch:
template<typename T>
void IMP_Apply(const int& pSubtype, T& pApply) {
switch (pSubtype) {
case CAPI_SUBTYPE_NULL :
case CAPI_SUBTYPE_DIAG_DFD :
case CAPI_SUBTYPE_DIAG_ERD :
case CAPI_SUBTYPE_DIAG_STD :
case CAPI_SUBTYPE_DIAG_STC :
case CAPI_SUBTYPE_DIAG_DSD :
case CAPI_SUBTYPE_SPEC_PROCESS :
case CAPI_SUBTYPE_SPEC_MODULE :
case CAPI_SUBTYPE_SPEC_TERMINATOR :
case CAPI_SUBTYPE_DD_ALL :
case CAPI_SUBTYPE_DD_COUPLE :
case CAPI_SUBTYPE_DD_DATA_AREA :
case CAPI_SUBTYPE_DD_DATA_OBJECT :
case CAPI_SUBTYPE_DD_FLOW :
case CAPI_SUBTYPE_DD_RELATIONSHIP :
case CAPI_SUBTYPE_DD_STORE :
case CAPI_SUBTYPE_DIAG_PAD :
case CAPI_SUBTYPE_DIAG_BD :
case CAPI_SUBTYPE_DIAG_UCD :
case CAPI_SUBTYPE_DIAG_PD :
case CAPI_SUBTYPE_DIAG_COD :
case CAPI_SUBTYPE_DIAG_SQD :
case CAPI_SUBTYPE_DIAG_CD :
case CAPI_SUBTYPE_DIAG_SCD :
case CAPI_SUBTYPE_DIAG_ACD :
case CAPI_SUBTYPE_DIAG_CPD :
case CAPI_SUBTYPE_DIAG_DPD :
case CAPI_SUBTYPE_DIAG_PFD :
case CAPI_SUBTYPE_DIAG_HIER :
case CAPI_SUBTYPE_DIAG_IDEF0 :
case CAPI_SUBTYPE_DIAG_AID :
case CAPI_SUBTYPE_DIAG_SAD :
case CAPI_SUBTYPE_DIAG_ASG :
/* do something. just `applying`: */
pApply(static_cast<CAPI_SUBTYPE_E>(pSubtype));
return;
}
std::cout << "Skipped: " << pSubtype << '\n';
}
template<typename T>
void Apply(T& pApply) {
const CAPI_SUBTYPE_E First(CAPI_SUBTYPE_NULL);
const CAPI_SUBTYPE_E Last(CAPI_SUBTYPE_DIAG_ASG);
for (int idx(static_cast<int>(First)); idx <= static_cast<int>(Last); ++idx) {
IMP_Apply(idx, pApply);
}
}
int main(int argc, const char* argv[]) {
class t_apply {
public:
void operator()(const CAPI_SUBTYPE_E& pSubtype) const {
std::cout << "Apply: " << static_cast<int>(pSubtype) << '\n';
}
};
t_apply apply;
Apply(apply);
return 0;
}
I'm using this type of construction to define my own enums:
#include <iostream>
#include <set>
#include <string>
#include <boost/foreach.hpp>
#include <boost/noncopyable.hpp>
#include <boost/unordered_map.hpp>
namespace enumeration
{
struct enumerator_base : boost::noncopyable
{
typedef
boost::unordered_map<int, std::string>
kv_storage_t;
typedef
kv_storage_t::value_type
kv_type;
typedef
std::set<int>
entries_t;
typedef
entries_t::const_iterator
iterator;
typedef
entries_t::const_iterator
const_iterator;
kv_storage_t const & kv() const
{
return storage_;
}
const char * name(int i) const
{
kv_storage_t::const_iterator it = storage_.find(i);
if(it != storage_.end())
return it->second.c_str();
return "empty";
}
iterator begin() const
{
return entries_.begin();
}
iterator end() const
{
return entries_.end();
}
iterator begin()
{
return entries_.begin();
}
iterator end()
{
return entries_.end();
}
void register_e(int val, std::string const & desc)
{
storage_.insert(std::make_pair(val, desc));
entries_.insert(val);
}
protected:
kv_storage_t storage_;
entries_t entries_;
};
template<class T>
struct enumerator;
template<class D>
struct enum_singleton : enumerator_base
{
static enumerator_base const & instance()
{
static D inst;
return inst;
}
};
}
#define QENUM_ENTRY(K, V, N) K, N register_e((int)K, V);
#define QENUM_ENTRY_I(K, I, V, N) K = I, N register_e((int)K, V);
#define QBEGIN_ENUM(NAME, C) \
enum NAME \
{ \
C \
} \
}; \
} \
#define QEND_ENUM(NAME) \
}; \
namespace enumeration \
{ \
template<> \
struct enumerator<NAME>\
: enum_singleton< enumerator<NAME> >\
{ \
enumerator() \
{
QBEGIN_ENUM(test_t,
QENUM_ENTRY(test_entry_1, "number uno",
QENUM_ENTRY_I(test_entry_2, 10, "number dos",
QENUM_ENTRY(test_entry_3, "number tres",
QEND_ENUM(test_t)))))
int _tmain(int argc, _TCHAR* argv[])
{
BOOST_FOREACH(int x, enumeration::enumerator<test_t>::instance())
std::cout << enumeration::enumerator<test_t>::instance().name(x) << "=" << x << std::endl;
return 0;
}
Also, you can replace the storage_ type with boost::bimap to get a bidirectional correspondence int <==> string.
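A rough sketch of that bimap variant (my illustration):
#include <boost/bimap.hpp>

typedef boost::bimap<int, std::string> kv_storage_t;   // would replace the unordered_map

// register_e would then insert with:
//     storage_.insert(kv_storage_t::value_type(val, desc));
// and lookups work in both directions:
//     storage_.left.find(i)->second       // int  -> name
//     storage_.right.find(name)->second   // name -> int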
There are a lot of answers to this question already, but most of them are either very complicated or inefficient in that they don't directly address the requirement of iterating over an enum with gaps. Everyone so far has said that this is not possible, and they are sort of correct in that there is no language feature to allow you to do this. That certainly does not mean you can't, and as we can see by all the answers so far, there are many different ways to do it. Here is my way, based on the enum you have provided and the assumption that its structure won't change much. Of course this method can be adapted as needed.
typedef enum {
CAPI_SUBTYPE_NULL = 0, /* Null subtype. */
CAPI_SUBTYPE_DIAG_DFD = 1, /* Data Flow diag. */
CAPI_SUBTYPE_DIAG_ERD = 2, /* Entity-Relationship diag. */
CAPI_SUBTYPE_DIAG_STD = 3, /* State Transition diag. */
CAPI_SUBTYPE_DIAG_STC = 4, /* Structure Chart diag. */
CAPI_SUBTYPE_DIAG_DSD = 5, /* Data Structure diag. */
CAPI_SUBTYPE_SPEC_PROCESS = 6, /* Process spec. */
CAPI_SUBTYPE_SPEC_MODULE = 7, /* Module spec. */
CAPI_SUBTYPE_SPEC_TERMINATOR = 8, /* Terminator spec. */
CAPI_SUBTYPE_DD_ALL = 13, /* DD Entries (All). */
CAPI_SUBTYPE_DD_COUPLE = 14, /* DD Entries (Couples). */
CAPI_SUBTYPE_DD_DATA_AREA = 15, /* DD Entries (Data Areas). */
CAPI_SUBTYPE_DD_DATA_OBJECT = 16, /* DD Entries (Data Objects). */
CAPI_SUBTYPE_DD_FLOW = 17, /* DD Entries (Flows). */
CAPI_SUBTYPE_DD_RELATIONSHIP = 18, /* DD Entries (Relationships). */
CAPI_SUBTYPE_DD_STORE = 19, /* DD Entries (Stores). */
CAPI_SUBTYPE_DIAG_PAD = 35, /* Physical architecture diagram. */
CAPI_SUBTYPE_DIAG_BD = 36, /* Behaviour diagram. */
CAPI_SUBTYPE_DIAG_UCD = 37, /* UML Use case diagram. */
CAPI_SUBTYPE_DIAG_PD = 38, /* UML Package diagram. */
CAPI_SUBTYPE_DIAG_COD = 39, /* UML Collaboration diagram. */
CAPI_SUBTYPE_DIAG_SQD = 40, /* UML Sequence diagram. */
CAPI_SUBTYPE_DIAG_CD = 41, /* UML Class diagram. */
CAPI_SUBTYPE_DIAG_SCD = 42, /* UML State chart. */
CAPI_SUBTYPE_DIAG_ACD = 43, /* UML Activity chart. */
CAPI_SUBTYPE_DIAG_CPD = 44, /* UML Component diagram. */
CAPI_SUBTYPE_DIAG_DPD = 45, /* UML Deployment diagram. */
CAPI_SUBTYPE_DIAG_PFD = 47, /* Process flow diagram. */
CAPI_SUBTYPE_DIAG_HIER = 48, /* Hierarchy diagram. */
CAPI_SUBTYPE_DIAG_IDEF0 = 49, /* IDEF0 diagram. */
CAPI_SUBTYPE_DIAG_AID = 50, /* AID diagram. */
CAPI_SUBTYPE_DIAG_SAD = 51, /* SAD diagram. */
CAPI_SUBTYPE_DIAG_ASG = 59 /* ASG diagram. */
} CAPI_SUBTYPE_E;
struct ranges_t
{
int start;
int end;
};
ranges_t ranges[] =
{
{CAPI_SUBTYPE_NULL, CAPI_SUBTYPE_NULL},
{CAPI_SUBTYPE_DIAG_DFD, CAPI_SUBTYPE_DIAG_DSD},
{CAPI_SUBTYPE_SPEC_PROCESS, CAPI_SUBTYPE_SPEC_TERMINATOR},
{CAPI_SUBTYPE_DD_ALL, CAPI_SUBTYPE_DD_STORE},
{CAPI_SUBTYPE_DIAG_PAD, CAPI_SUBTYPE_DIAG_SAD},
{CAPI_SUBTYPE_DIAG_ASG, CAPI_SUBTYPE_DIAG_ASG},
};
int numRanges = sizeof(ranges) / sizeof(*ranges);
for( int rangeIdx = 0; rangeIdx < numRanges; ++rangeIdx )
{
for( int enumValue = ranges[rangeIdx].start; enumValue <= ranges[rangeIdx].end; ++enumValue )
{
processEnumValue( enumValue );
}
}
Or something along those lines.
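For reference, here is the same idea as a compact, self-contained program with a toy enum, so it can be compiled and run directly; processValue is a hypothetical stand-in for whatever per-value work you need:
#include <cstdio>
enum Subtype { SUB_A = 0, SUB_B = 1, SUB_C = 2, SUB_X = 13, SUB_Y = 14, SUB_Z = 35 };
struct range_t { int start; int end; };
static const range_t ranges[] = { {SUB_A, SUB_C}, {SUB_X, SUB_Y}, {SUB_Z, SUB_Z} };
static const int numRanges = sizeof(ranges) / sizeof(*ranges);
static void processValue(int v) { std::printf("%d\n", v); }  // stand-in for real work
int main() {
    for (int r = 0; r < numRanges; ++r)
        for (int v = ranges[r].start; v <= ranges[r].end; ++v)
            processValue(v);   // visits 0, 1, 2, 13, 14, 35 and skips the gaps
    return 0;
}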
The only real 'solution' I finally came up with for this problem is to create a pre-build script that reads the C/C++ file(s) containing the enums and generates a class file holding a list of all the enum values as vectors. This is very much the same way Visual Studio supports T4 templates. In the .NET world it's pretty common practice, but since I can't work in that environment I was forced to do it this way.
The script I wrote is in Ruby, but you could do it in whatever language. If anyone wants the source script, I uploaded it here. It's by no means a perfect script, but it fit the bill for my project. I encourage anyone to improve on it and post tips here.
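Purely as an illustration of the kind of output such a script can emit (every name below is invented; the real script generates whatever your project needs), the generated header might look roughly like this:
// enum_lists.generated.h -- hypothetical output of the pre-build script
#pragma once
#include <string>
#include <vector>
namespace generated {
    static const std::vector<int> CAPI_SUBTYPE_E_values = {
        0, 1, 2, 3, 4, 5, 6, 7, 8, 13, 14, 15, 16, 17, 18, 19, 35, /* ... */ 59
    };
    static const std::vector<std::string> CAPI_SUBTYPE_E_names = {
        "CAPI_SUBTYPE_NULL", "CAPI_SUBTYPE_DIAG_DFD", /* ... */ "CAPI_SUBTYPE_DIAG_ASG"
    };
}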
I have a set of bit flags that are used in a program I am porting from C to C++.
To begin...
The flags in my program were previously defined as:
/* Define feature flags for this DCD file */
#define DCD_IS_CHARMM 0x01
#define DCD_HAS_4DIMS 0x02
#define DCD_HAS_EXTRA_BLOCK 0x04
...Now I've gathered that #defines for constants (versus class constants, etc.) are generally considered bad form.
This raises questions about how best to store bit flags in C++, and why C++ doesn't support assignment of binary text to an int the way it allows hex numbers to be assigned (via "0x"). These questions are summarized at the end of this post.
I could see one simple solution is to simply create individual constants:
namespace DCD {
const unsigned int IS_CHARMM = 1;
const unsigned int HAS_4DIMS = 2;
const unsigned int HAS_EXTRA_BLOCK = 4;
};
Let's call this idea 1.
Another idea I had was to use an integer enum:
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 1,
HAS_4DIMS = 2,
HAS_EXTRA_BLOCK = 8
};
};
But one thing that bothers me about this is that it seems less intuitive when it comes to higher values, i.e.
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 1,
HAS_4DIMS = 2,
HAS_EXTRA_BLOCK = 8,
NEW_FLAG = 16,
NEW_FLAG_2 = 32,
NEW_FLAG_3 = 64,
NEW_FLAG_4 = 128
};
};
Let's call this approach option 2.
I'm considering using Tom Torf's macro solution:
#define B8(x) ((int) B8_(0x##x))
#define B8_(x) \
( ((x) & 0xF0000000) >> ( 28 - 7 ) \
| ((x) & 0x0F000000) >> ( 24 - 6 ) \
| ((x) & 0x00F00000) >> ( 20 - 5 ) \
| ((x) & 0x000F0000) >> ( 16 - 4 ) \
| ((x) & 0x0000F000) >> ( 12 - 3 ) \
| ((x) & 0x00000F00) >> ( 8 - 2 ) \
| ((x) & 0x000000F0) >> ( 4 - 1 ) \
| ((x) & 0x0000000F) >> ( 0 - 0 ) )
converted to inline functions, e.g.
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
....
/* TAKEN FROM THE C++ LITE FAQ [39.2]... */
class BadConversion : public std::runtime_error {
public:
BadConversion(std::string const& s)
: std::runtime_error(s)
{ }
};
inline unsigned int convertToUI(std::string const& s)
{
std::istringstream i(s);
unsigned int x;
if (!(i >> x))
throw BadConversion("convertToUI(\"" + s + "\")");
return x;
}
/** END CODE **/
inline unsigned int B8(std::string x) {
unsigned int my_val = convertToUI(x.insert(0,"0x").c_str());
return ((my_val) & 0xF0000000) >> ( 28 - 7 ) |
((my_val) & 0x0F000000) >> ( 24 - 6 ) |
((my_val) & 0x00F00000) >> ( 20 - 5 ) |
((my_val) & 0x000F0000) >> ( 16 - 4 ) |
((my_val) & 0x0000F000) >> ( 12 - 3 ) |
((my_val) & 0x00000F00) >> ( 8 - 2 ) |
((my_val) & 0x000000F0) >> ( 4 - 1 ) |
((my_val) & 0x0000000F) >> ( 0 - 0 );
}
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = B8("00000001"),
HAS_4DIMS = B8("00000010"),
HAS_EXTRA_BLOCK = B8("00000100"),
NEW_FLAG = B8("00001000"),
NEW_FLAG_2 = B8("00010000"),
NEW_FLAG_3 = B8("00100000"),
NEW_FLAG_4 = B8("01000000")
};
};
Is this crazy? Or does it seem more intuitive? Let's call this choice 3.
So to recap, my over-arching questions are:
1. Why doesn't C++ support a "0b" prefix for binary values, similar to "0x" for hex?
2. Which is the best style to define flags...
i. Namespace wrapped constants.
ii. Namespace wrapped enum of unsigned ints assigned directly.
iii. Namespace wrapped enum of unsigned ints assigned using readable binary string.
Thanks in advance! And please don't close this thread as subjective, because I really want to get help on what the best style is and why C++ lacks built-in binary assignment capability.
EDIT 1
A bit of additional info. I will be reading a 32-bit bitfield from a file and then testing it with these flags. So bear that in mind when you post suggestions.
Prior to C++14, binary literals had been discussed off and on over the years, but as far as I know, nobody had ever written up a serious proposal to get it into the standard, so it never really got past the stage of talking about it.
For C++14, somebody finally wrote up a proposal, and the committee accepted it, so if you can use a current compiler, the basic premise of the question is false: you can use binary literals, which have the form 0b01010101.
In C++11, instead of adding binary literals directly, they added a much more general mechanism to allow general user-defined literals, which you could use to support binary, or base 64, or other kinds of things entirely. The basic idea is that you specify a number (or string) literal followed by a suffix, and you can define a function that will receive that literal, and convert it to whatever form you prefer (and you can maintain its status as a "constant" too...)
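To make that concrete, here is a minimal sketch of the user-defined-literal mechanism being described; the suffix name _b and the helper functions are made up for illustration, and with C++14 you would just write 0b1010 instead:
#include <cstdint>
// Base case: no digits left to consume.
constexpr std::uint64_t parse_binary() { return 0; }
// Peel off the leading digit; its place value is 2^(number of remaining digits).
template <typename... Rest>
constexpr std::uint64_t parse_binary(char digit, Rest... rest) {
    return (static_cast<std::uint64_t>(digit - '0') << sizeof...(rest)) | parse_binary(rest...);
}
// Literal operator template: the digits of e.g. 1010_b arrive as individual chars.
template <char... Digits>
constexpr std::uint64_t operator"" _b() {
    return parse_binary(Digits...);
}
static_assert(1010_b == 10, "binary digits parsed at compile time");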
As to which to use: if you can, the binary literals built into C++14 or above are the obvious choice. If you can't use them, I'd generally prefer a variation of option 2: an enum with initializers in hex:
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 0x1,
HAS_4DIMS = 0x2,
HAS_EXTRA_BLOCK = 0x8,
NEW_FLAG = 0x10,
NEW_FLAG_2 = 0x20,
NEW_FLAG_3 = 0x40,
NEW_FLAG_4 = 0x80
};
};
Another possibility is something like:
#define bit(n) (1<<(n))
enum e_feature_flags {
IS_CHARMM = bit(0),
HAS_4DIMS = bit(1),
HAS_EXTRA_BLOCK = bit(3),
NEW_FLAG = bit(4),
NEW_FLAG_2 = bit(5),
NEW_FLAG_3 = bit(6),
NEW_FLAG_4 = bit(7)
};
With option two, you can use a left shift, which is perhaps a bit less "unintuitive":
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 1,
HAS_4DIMS = (1 << 1),
HAS_EXTRA_BLOCK = (1 << 2),
NEW_FLAG = (1 << 3),
NEW_FLAG_2 = (1 << 4),
NEW_FLAG_3 = (1 << 5),
NEW_FLAG_4 = (1 << 6)
};
};
Just as a note, Boost (as usual) provides an implementation of this idea.
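Presumably the Boost facility meant here is BOOST_BINARY from <boost/utility/binary.hpp>, which fakes binary literals with a preprocessor macro; a minimal sketch using the same flag names:
#include <boost/utility/binary.hpp>
namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = BOOST_BINARY( 0000 0001 ),
        HAS_4DIMS       = BOOST_BINARY( 0000 0010 ),
        HAS_EXTRA_BLOCK = BOOST_BINARY( 0000 1000 ),
        NEW_FLAG        = BOOST_BINARY( 0001 0000 )
    };
}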
Why not use a bitfield struct?
struct preferences {
unsigned int likes_ice_cream : 1;
unsigned int plays_golf : 1;
unsigned int watches_tv : 1;
unsigned int reads_stackoverflow : 1;
};
struct preferences fred;
fred.likes_ice_cream = 1;
fred.plays_golf = 0;
fred.watches_tv = 0;
fred.reads_stackoverflow = 1;
if (fred.likes_ice_cream == 1)
/* ... */
GCC has an extension making it capable of binary assignment:
int n = 0b01010101;
Edit: As of C++14, this is now an official part of the language.
What's wrong with hex for this use case?
enum Flags {
FLAG_A = 0x00000001,
FLAG_B = 0x00000002,
FLAG_C = 0x00000004,
FLAG_D = 0x00000008,
FLAG_E = 0x00000010,
// ...
};
I guess the bottom line is that it's not really necessary.
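And for completeness, a small usage sketch of those hex flags; the variable and flag names are arbitrary, only the | / & idiom matters:
#include <cstdio>
enum Flags {
    FLAG_A = 0x00000001,
    FLAG_B = 0x00000002,
    FLAG_C = 0x00000004,
    FLAG_D = 0x00000008
};
int main() {
    unsigned int flags = FLAG_A | FLAG_D;   // combine with |
    if (flags & FLAG_D)                     // test with &
        std::printf("FLAG_D is set\n");
    if (!(flags & FLAG_B))
        std::printf("FLAG_B is not set\n");
    return 0;
}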
If you just want to use binary values as flags, the approach below is how I typically do it. After the original definitions you never have to worry about looking at the "messier" larger powers of 2 that you mentioned:
const int FLAG_1 = 1;
const int FLAG_2 = 2;
const int FLAG_3 = 4;
...
const int FLAG_N = 256;
you can easily check them with
if((someVariable & FLAG_3) == FLAG_3) {
// the flag is set
}
And by the way, depending on your compiler (I'm using the GNU GCC compiler), it may support "0b" binary literals as an extension.
Note: edited to answer the question.
How could I make a function with flags like how Windows' CreateWindow(...style | style,...), for example, a createnum function:
int CreateNum(flag flags) //???
{
int num = 0;
if(flags == GREATER_THAN_TEN)
num = 11;
if(flags == EVEN && ((num % 2) == 1))
num++;
else if(flags == ODD && ((num % 2) == 0))
num++;
return num;
}
//called like this
int Number = CreateNum(GREATER_THAN_TEN | EVEN);
Is this possible, and if so, how?
You can define an enum specifying "single bit" values (note that the enclosing struct is acting here only as a naming context, so that you can write e.g. MyFlags::EVEN):
struct MyFlags{
enum Value{
EVEN = 0x01,
ODD = 0x02,
ANOTHER_FLAG = 0x04,
YET_ANOTHER_FLAG = 0x08,
SOMETHING_ELSE = 0x10,
SOMETHING_COMPLETELY_DIFFERENT = 0x20
};
};
and then use it like this:
int CreateNum(MyFlags::Value flags){
if (flags & MyFlags::EVEN){
// do something...
}
}
int main(){
CreateNum((MyFlags::Value)(MyFlags::EVEN | MyFlags::ODD));
}
or simply like this:
int CreateNum(int flags){
if (flags & MyFlags::EVEN){
// do something...
}
}
int main(){
CreateNum(MyFlags::EVEN | MyFlags::ODD);
}
You could also simply declare integer constants, but the enum is clearer in my opinion.
Note: I updated the post to take some comments into account, thanks!
I upvoted orsogufo's answer, but I always liked doing the following for defining the values:
enum Value{
EVEN = (1<<0),
ODD = (1<<1),
ANOTHER_FLAG = (1<<2),
YET_ANOTHER_FLAG = (1<<3),
SOMETHING_ELSE = (1<<4),
SOMETHING_COMPLETELY_DIFFERENT = (1<<5),
ANOTHER_EVEN = EVEN|ANOTHER_FLAG
};
<< is the shift operator. Incrementing the right side lets you generate sequential bit masks by moving the 1 over, one bit at a time. This has the same values for the bare flags, but reads easier to my eyes and makes it obvious if you skip or duplicate a value.
I also like combining some common flag combinations when appropriate.
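One detail worth spelling out for combined values like ANOTHER_EVEN: testing whether any bit of the combination is set and testing whether all of its bits are set are different checks. A small self-contained sketch:
#include <cstdio>
enum Value {
    EVEN         = (1 << 0),
    ODD          = (1 << 1),
    ANOTHER_FLAG = (1 << 2),
    ANOTHER_EVEN = EVEN | ANOTHER_FLAG
};
int main() {
    int flags = EVEN;   // only one bit of the combination is set
    if (flags & ANOTHER_EVEN)                    // any bit of the combo set: true here
        std::printf("some of ANOTHER_EVEN is set\n");
    if ((flags & ANOTHER_EVEN) == ANOTHER_EVEN)  // all bits of the combo set: false here
        std::printf("all of ANOTHER_EVEN is set\n");
    return 0;
}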
You can use const int like this:
const int FLAG1 = 0x0001;
const int FLAG2 = 0x0010;
const int FLAG3 = 0x0100;
// ...
And when you use it:
int CreateNum(int flags)
{
if( flags & FLAG1 )
// FLAG1 is present
if( flags & FLAG2 )
// FLAG2 is present
// ...
}
Of course you can combine one or more flags in the flags argument using the | operator.
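For instance, assuming the CreateNum function and FLAG constants sketched above, a call site might look like this:
int result = CreateNum(FLAG1 | FLAG3);   // FLAG1 and FLAG3 present, FLAG2 absent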
Use powers of two as the individual constants, like
enum Flags { EVEN = 0x1, ODD = 0x2, GREATER_THAN_TEN = 0x4 };
and you use the bitwise AND operator '&' for testing, like
if( flags & GREATER_THAN_TEN)
num = 11;
if( (flags & EVEN) && (num % 2) == 1 )
num++;
else if ( (flags & ODD) && (num % 2) == 0 )
num++;
return num;
You've got your tests wrong. What you want is something like (flags & EVEN), where EVEN is an integer with a single bit set (1, 2, 4, 8, 16 - some power of 2). (The integer can be an int or an enum. You could have a macro, but that's generally not a good idea.)
You can use the notation you listed, by overloading flags::operator==(flagvalue f), but it's a bad idea.
enum flags {
EVEN = 0x0100,
ODD = 0x0200,
BELOW_TEN = 0x0400,
ABOVETEN = 0x0800,
HUNDRED = 0x1000,
MASK = 0xff00
};
void some_func(int id_and_flags)
{
int the_id = id_and_flags & ~MASK;
int flags = id_and_flags & MASK;
if ((flags & EVEN) && (the_id % 2) == 1)
++the_id;
if ((flags & ODD) && (the_id % 2) == 0)
++the_id;
// etc
}
This also illustrates masking of bit fields, which can be useful when you just need to bolt on a small bit of extra functionality without adding any extra data structure.
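A hypothetical call site, just to show the id and the flags travelling in the same int (the values here are chosen arbitrarily):
some_func(42 | EVEN);           // id 42, ask for it to be made even
some_func(7 | ODD | HUNDRED);   // several flags OR'd in alongside the id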