How could I make a function that takes flags, like Windows' CreateWindow(..., style | style, ...)? For example, a CreateNum function:
int CreateNum(flag flags) //???
{
    int num = 0;
    if(flags == GREATER_THAN_TEN)
        num = 11;
    if(flags == EVEN && ((num % 2) == 1))
        num++;
    else if(flags == ODD && ((num % 2) == 0))
        num++;
    return num;
}
//called like this
int Number = CreateNum(GREATER_THAN_TEN | EVEN);
Is this possible, and if so, how?
You can define an enum specifying "single bit" values (note that the enclosing struct is acting here only as a naming context, so that you can write e.g. MyFlags::EVEN):
struct MyFlags{
enum Value{
EVEN = 0x01,
ODD = 0x02,
ANOTHER_FLAG = 0x04,
YET_ANOTHER_FLAG = 0x08,
SOMETHING_ELSE = 0x10,
SOMETHING_COMPLETELY_DIFFERENT = 0x20
};
};
and then use it like this:
int CreateNum(MyFlags::Value flags){
    if (flags & MyFlags::EVEN){
        // do something...
    }
    // ...
    return 0;
}
int main(){
    CreateNum((MyFlags::Value)(MyFlags::EVEN | MyFlags::ODD));
}
or simply like this:
int CreateNum(int flags){
    if (flags & MyFlags::EVEN){
        // do something...
    }
    // ...
    return 0;
}
int main(){
    CreateNum(MyFlags::EVEN | MyFlags::ODD);
}
You could also simply declare integer constants, but the enum is clearer in my opinion.
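If the explicit cast in the first variant bothers you, one option is to overload the bitwise operators for the enum; this is only a sketch I'm adding here, not something from the original answer:
// Overload operator| for the enum so the OR result keeps the MyFlags::Value
// type and no cast is needed at the call site.
inline MyFlags::Value operator|(MyFlags::Value a, MyFlags::Value b){
    return static_cast<MyFlags::Value>(static_cast<int>(a) | static_cast<int>(b));
}
// now this works without the cast:
// CreateNum(MyFlags::EVEN | MyFlags::ODD);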
Note: I updated the post to take some comments into account, thanks!
I upvoted orsogufo's answer, but I always liked doing the following for defining the values:
enum Value{
    EVEN = (1<<0),
    ODD = (1<<1),
    ANOTHER_FLAG = (1<<2),
    YET_ANOTHER_FLAG = (1<<3),
    SOMETHING_ELSE = (1<<4),
    SOMETHING_COMPLETELY_DIFFERENT = (1<<5),
    ANOTHER_EVEN = EVEN|ANOTHER_FLAG
};
<< is the left-shift operator. Incrementing the right-hand side generates sequential bit masks by moving the 1 over one bit at a time. This gives the same values as the bare hex flags above, but it reads more easily to my eyes and makes it obvious if you skip or duplicate a value.
I also like combining some common flag combinations when appropriate.
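As a side note (my sketch, not part of the original answer): when testing a combined mask such as ANOTHER_EVEN, flags & ANOTHER_EVEN is nonzero if any of its bits is set, so an explicit comparison is needed when you want all of them:
// flags is some int holding the combined value
bool has_all = (flags & ANOTHER_EVEN) == ANOTHER_EVEN; // both EVEN and ANOTHER_FLAG set
bool has_any = (flags & ANOTHER_EVEN) != 0;            // at least one of them set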
You can use const int like this:
const int FLAG1 = 0x0001;
const int FLAG2 = 0x0010;
const int FLAG3 = 0x0100;
// ...
And when you use it:
int CreateNum(int flags)
{
    if( flags & FLAG1 ) {
        // FLAG1 is present
    }
    if( flags & FLAG2 ) {
        // FLAG2 is present
    }
    // ...
    return 0;
}
Of course you can pass one or more flags in your flags argument by combining them with the | operator.
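For example, a call might look like this (just a sketch using the constants above):
int n = CreateNum(FLAG1 | FLAG3); // FLAG1 and FLAG3 set, FLAG2 not set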
Use powers of two as the individual constants, like
enum Flags { EVEN = 0x1, ODD = 0x2, GREATER_THAN_TEN = 0x4 };
and use the bitwise AND operator '&' for testing, like
if( flags & GREATER_THAN_TEN)
num = 11;
if( (flags & EVEN) && (num % 2) == 1 )
num++;
else if ( (flags & ODD) && (num % 2) == 0 )
num++;
return num;
You've got your tests wrong. What you want is something like (flags & EVEN), where EVEN is an integer with a single bit set (1, 2, 4, 8, 16 - some power of 2). (The integer can be an int or an enum. You could have a macro, but that's generally not a good idea.)
You can use the notation you listed, by overloading flags::operator==(flagvalue f), but it's a bad idea.
enum flags {
EVEN = 0x0100,
ODD = 0x0200,
BELOW_TEN = 0x0400,
ABOVETEN = 0x0800,
HUNDRED = 0x1000,
MASK = 0xff00
};
void some_func(int id_and_flags)
{
int the_id = id_and_flags & ~MASK;
int flags = id_and_flags & MASK;
if ((flags & EVEN) && (the_id % 2) == 1)
++the_id;
if ((flags & ODD) && (the_id % 2) == 0)
++the_id;
// etc
}
This also illustrates masking off a bit field, which can be useful when you just need to bolt a simple bit of extra functionality onto an existing integer parameter without adding any extra data structure.
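For instance, a call might pack an id and some flags together like this (the id value 7 is purely illustrative, not from the original answer):
// Pack an id into the low bits and flags into the high, masked bits of one int.
some_func(7 | ODD | ABOVETEN); // inside: the_id == 7, ODD and ABOVETEN are set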
I'm a Rust beginner who comes from C/C++. To start off, I tried to create a simple "Hello, World"-like program for Microsoft Windows using user32.MessageBox, where I stumbled upon a problem related to bit fields. Disclaimer: all code snippets were written in the SO editor and might contain errors.
MessageBox "Hello-World" in C
The consolidated C declarations needed to call the UTF-16LE version of the function are:
enum MessageBoxResult {
IDFAILED,
IDOK,
IDCANCEL,
IDABORT,
IDRETRY,
IDIGNORE,
IDYES,
IDNO,
IDTRYAGAIN = 10,
IDCONTINUE
};
enum MessageBoxType {
// Normal enumeration values.
MB_OK,
MB_OKCANCEL,
MB_ABORTRETRYIGNORE,
MB_YESNOCANCEL,
MB_YESNO,
MB_RETRYCANCEL,
MB_CANCELTRYCONTINUE,
MB_ICONERROR = 0x10UL,
MB_ICONQUESTION = 0x20UL,
MB_ICONEXCLAMATION = 0x30UL,
MB_ICONINFORMATION = 0x40UL,
MB_DEFBUTTON1 = 0x000UL,
MB_DEFBUTTON2 = 0x100UL,
MB_DEFBUTTON3 = 0x200UL,
MB_DEFBUTTON4 = 0x300UL,
MB_APPLMODAL = 0x0000UL,
MB_SYSTEMMODAL = 0x1000UL,
MB_TASKMODAL = 0x2000UL,
// Flag values.
MB_HELP = 1UL << 14,
MB_SETFOREGROUND = 1UL << 16,
MB_DEFAULT_DESKTOP_ONLY = 1UL << 17,
MB_TOPMOST = 1UL << 18,
MB_RIGHT = 1UL << 19,
MB_RTLREADING = 1UL << 20,
MB_SERVICE_NOTIFICATION = 1UL << 21
};
MessageBoxResult __stdcall MessageBoxW(
HWND hWnd,
const wchar_t * lpText,
const wchar_t * lpCaption,
MessageBoxType uType
);
Usage:
MessageBoxType mbType = MB_YESNO | MB_ICONEXCLAMATION | MB_DEFBUTTON3 | MB_TOPMOST;
if (((mbType & 0x0F) == MB_YESNO)              /* All bits for buttons */
    && ((mbType & 0xF0) == MB_ICONEXCLAMATION) /* All bits for icons */
    && ((mbType & 0xF00) == MB_DEFBUTTON3)     /* All bits for default buttons */
    && ((mbType & MB_TOPMOST) != 0)) {
    MessageBoxW(NULL, L"Text", L"Title", mbType);
}
The MessageBoxType enumeration contains both plain enumeration values and flag values. The problem with that is that MB_DEFBUTTON2 and MB_DEFBUTTON3 can be used together and "unexpectedly" result in MB_DEFBUTTON4. Also, access is quite error-prone and ugly: I have to |, & and shift everything manually when checking for flags in the value.
MessageBox "Hello-World" in C++
In C++ the same enumeration can be cleverly put into a structure, which has the same size as the enumeration and makes access much easier, safer and prettier. It makes use of bit fields; the layout of bit fields is not defined by the standard, but since I only want to use it for x86 Windows it is always the same, so I can rely on it.
enum class MessageBoxResult : std::uint32_t {
Failed,
Ok,
Cancel,
Abort,
Retry,
Ignore,
Yes,
No,
TryAgain = 10,
Continue
};
enum class MessageBoxButton : std::uint32_t {
Ok,
OkCancel,
AbortRetryIgnore,
YesNoCancel,
YesNo,
RetryCancel,
CancelTryContinue
};
enum class MessageBoxDefaultButton : std::uint32_t {
One,
Two,
Three,
Four
};
// Union so one can access all flags as a value and all boolean values separately.
union MessageBoxFlags {
enum class Flags : std::uint32_t {
None,
Help = 1UL << 0,
SetForeground = 1UL << 2,
DefaultDesktopOnly = 1UL << 3,
TopMost = 1UL << 4,
Right = 1UL << 5,
RtlReading = 1UL << 6,
ServiceNotification = 1UL << 7
};
// Flags::operator|, Flags::operator&, etc. omitted here.
Flags flags;
struct {
bool help : 1;
char _padding0 : 1;
bool setForeground : 1;
bool defaultDesktopOnly : 1;
bool topMost : 1;
bool right : 1;
bool rtlReading : 1;
bool serviceNotification : 1;
char _padding1 : 8;
char _padding2 : 8;
char _padding3 : 8;
};
constexpr MessageBoxFlags(const Flags flags = Flags::None)
: flags(flags) {}
};
enum class MessageBoxIcon : std::uint32_t {
None,
Stop,
Question,
Exclamation,
Information
};
enum class MessageBoxModality : std::uint32_t {
Application,
System,
Task
};
union MessageBoxType {
std::uint32_t value;
struct { // Used bits Minimum (Base 2) Maximum (Base 2) Min (Base 16) Max (Base 16)
MessageBoxButton button : 4; // 0000.0000.0000.0000|0000.0000.0000.XXXX 0000.0000.0000.0000|0000.0000.0000.0000 - 0000.0000.0000.0000|0000.0000.0000.0110 : 0x0000.0000 - 0x0000.0006
MessageBoxIcon icon : 4; // 0000.0000.0000.0000|0000.0000.XXXX.0000 0000.0000.0000.0000|0000.0000.0001.0000 - 0000.0000.0000.0000|0000.0000.0100.0000 : 0x0000.0010 - 0x0000.0040
MessageBoxDefaultButton defaultButton : 4; // 0000.0000.0000.0000|0000.XXXX.0000.0000 0000.0000.0000.0000|0000.0001.0000.0000 - 0000.0000.0000.0000|0000.0011.0000.0000 : 0x0000.0100 - 0x0000.0300
MessageBoxModality modality : 2; // 0000.0000.0000.0000|00XX.0000.0000.0000 0000.0000.0000.0000|0001.0000.0000.0000 - 0000.0000.0000.0000|0010.0000.0000.0000 : 0x0000.1000 - 0x0000.2000
MessageBoxFlags::Flags flags : 8; // 0000.0000.00XX.XXXX|XX00.0000.0000.0000 0000.0000.0000.0000|0100.0000.0000.0000 - 0000.0000.0010.0000|0000.0000.0000.0000 : 0x0000.4000 - 0x0020.0000
std::uint32_t _padding0 : 10; // XXXX.XXXX.XX00.0000|0000.0000.0000.0000
};
MessageBoxType(
const MessageBoxButton button,
const MessageBoxIcon icon = MessageBoxIcon::None,
const MessageBoxDefaultButton defaultButton = MessageBoxDefaultButton::One,
const MessageBoxModality modality = MessageBoxModality::Application,
const MessageBoxFlags::Flags flags = MessageBoxFlags::Flags::None
) : button(button), icon(icon), defaultButton(defaultButton), modality(modality), flags(flags), _padding0(0) {}
MessageBoxType() : value(0) {}
};
MessageBoxResult __stdcall MessageBoxW(
HWND parentWindow,
const wchar_t * text,
const wchar_t * caption,
MessageBoxType type
);
Usage:
auto mbType = MessageBoxType(MessageBoxButton::YesNo, MessageBoxIcon::Exclamation, MessageBoxDefaultButton::Three, MessageBoxModality::Application, MessageBoxFlags::Flags::TopMost);
if (mbType.button == MessageBoxButton::YesNo && mbType.icon == MessageBoxIcon::Exclamation && mbType.defaultButton == MessageBoxDefaultButton::Three && mbType.flags.topMost) {
MessageBoxW(nullptr, L"Text", L"Title", mbType);
}
With this C++ version I can access flags as boolean values and have enumeration classes for the other types, while everything is still a simple std::uint32_t in memory. But I struggled to implement this in Rust.
MessageBox "Hello-World" in Rust
#[repr(u32)]
enum MessageBoxResult {
Failed,
Ok,
Cancel,
Abort,
Retry,
Ignore,
Yes,
No,
TryAgain = 10,
Continue
}
#[repr(u32)]
enum MessageBoxButton {
Ok,
OkCancel,
AbortRetryIgnore,
YesNoCancel,
YesNo,
RetryCancel,
CancelTryContinue
}
#[repr(u32)]
enum MessageBoxDefaultButton {
One,
Two,
Three,
Four
}
#[repr(u32)]
enum MessageBoxIcon {
None,
Stop,
Question,
Exclamation,
Information
}
#[repr(u32)]
enum MessageBoxModality {
Application,
System,
Task
}
// MessageBoxFlags and MessageBoxType ?
I know about the winapi crate, which to my understanding is generated automatically from the VC++ header files; that doesn't help, because I would have the same problems as in C. I also saw the bitflags macro, but it seems to me it doesn't handle this kind of "complexity".
How would I implement MessageBoxFlags and MessageBoxType in Rust, so I can access it in a nice (not necessarily the same) way as in my C++ implementation?
The bitfield crate @Boiethios mentioned is kind of what I wanted. I created my first own macro crate, bitfield, which allows me to write the following:
#[bitfield::bitfield(32)]
struct Styles {
#[field(size = 4)] button: Button,
#[field(size = 4)] icon: Icon,
#[field(size = 4)] default_button: DefaultButton,
#[field(size = 2)] modality: Modality,
style: Style
}
#[derive(Copy, Clone, bitfield::Flags)]
#[repr(u8)]
enum Style {
Help = 14,
Foreground = 16,
DefaultDesktopOnly,
TopMost,
Right,
RightToLeftReading,
ServiceNotification
}
#[derive(Clone, Copy, bitfield::Field)]
#[repr(u8)]
enum Button {
Ok,
OkCancel,
AbortRetryIgnore,
YesNoCancel,
YesNo,
RetryCancel,
CancelTryContinue
}
#[derive(Clone, Copy, bitfield::Field)]
#[repr(u8)]
enum DefaultButton {
One,
Two,
Three,
Four
}
#[derive(Clone, Copy, bitfield::Field)]
#[repr(u8)]
enum Icon {
None,
Stop,
Question,
Exclamation,
Information
}
#[derive(Clone, Copy, bitfield::Field)]
#[repr(u8)]
enum Modality {
Application,
System,
Task
}
I can then use the code like this:
// Verbose:
let styles = Styles::new()
.set_button(Button::CancelTryContinue)
.set_icon(Icon::Exclamation)
.set_style(Style::Foreground, true)
.set_style(Style::TopMost, true);
// Alternatively:
let styles = Styles::new() +
Button::CancelTryContinue +
Icon::Exclamation +
Style::Foreground +
Style::TopMost;
let result = user32::MessageBoxW(/* ... */, styles);
I'm currently trying to come up with a clever way of implementing flags that include the states "default" and (optional) "toggle" in addition to the usual "true" and "false".
The general problem with flags is that one has a function and wants to define its behaviour (either "do something" or "don't do something") by passing certain parameters.
Single flag
With a single (boolean) flag the solution is simple:
void foo(...,bool flag){
if(flag){/*do something*/}
}
Here it is especially easy to add a default, by just changing the function to
void foo(...,bool flag=true)
and call it without the flag parameter.
Multiple flags
Once the number of flags increases, the solution I usually see and use is something like this:
typedef int Flag;
static const Flag Flag1 = 1<<0;
static const Flag Flag2 = 1<<1;
static const Flag Flag3 = 1<<2;
void foo(/*other arguments ,*/ Flag f){
if(f & Flag1){/*do whatever Flag1 indicates*/}
/*check other flags*/
}
//call like this:
foo(/*args ,*/ Flag1 | Flag3)
This has the advantage that you don't need a parameter for each flag, which means users can set the flags they like and simply forget about the ones they don't care about. In particular, you don't get a call like foo(/*args*/, true, false, true) where you have to count which true/false denotes which flag.
The problem here is:
If you set a default argument, it is overwritten as soon as the user specifies any flag. It is not possible to do something like Flag1=true, Flag2=false, Flag3=default.
Obviously, if we want to have 3 options (true, false, default) we need to pass at least 2 bits per flag. So while it might not be necessary, I guess it should be easy for any implementation to use the 4th state to indicate a toggle (= !default).
I have 2 approaches to this, but I'm not really happy with either of them:
Approach 1: Defining 2 Flags
I have been using something like this up to now:
typedef int Flag;
static const Flag Flag1 = 1<<0;
static const Flag Flag1False= 1<<1;
static const Flag Flag1Toggle = Flag1 | Flag1False;
static const Flag Flag2= 1<<2;
static const Flag Flag2False= 1<<3;
static const Flag Flag2Toggle = Flag2 | Flag2False;
void applyDefault(Flag& f){
//do nothing for flags with default false
//for flags with default true:
f = ( f & Flag1False)? f & ~Flag1 : f | Flag1;
//if the false bit is set, it is either false or toggle, anyway: clear the bit
//if its not set, its either true or default, anyway: set
}
void foo(/*args ,*/ Flag f){
applyDefault(f);
if (f & Flag1) //do whatever Flag1 indicates
}
However, what I don't like about this is that two different bits are used for each flag. This leads to different code for "default-true" and "default-false" flags, and to the necessary conditional instead of some nice bitwise operation in applyDefault().
Approach 2: Templates
By defining a template-class like this:
struct Flag{
virtual bool apply(bool prev) const =0;
};
template<bool mTrue, bool mFalse>
struct TFlag: public Flag{
inline bool apply(bool prev) const{
return (!prev&&mTrue)||(prev&&!mFalse);
}
};
TFlag<true,false> fTrue;
TFlag<false,true> fFalse;
TFlag<false,false> fDefault;
TFlag<true,true> fToggle;
I was able to condense the apply into a single bitwise operation, with all but one argument known at compile time. Using TFlag::apply directly compiles (with gcc) to the same machine code as a return true;, return false;, return prev; or return !prev; would, which is pretty efficient, but it means I have to use template functions if I want to pass a TFlag as an argument. Inheriting from Flag and using a const Flag& argument adds the overhead of a virtual function call, but saves me from using templates.
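For illustration, a single-flag call site might look like this (a sketch of mine, the function names setState/setStatePoly are made up):
// Sketch: template version is resolved at compile time, base-class version
// pays for one virtual call.
template<bool mTrue, bool mFalse>
void setState(bool& state, TFlag<mTrue, mFalse> f){
    state = f.apply(state); // resolved at compile time
}
void setStatePoly(bool& state, const Flag& f){
    state = f.apply(state); // virtual call
}
// usage:
// bool visible = false;
// setState(visible, fToggle);      // template version
// setStatePoly(visible, fDefault); // polymorphic version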
However, I have no idea how to scale this up to multiple flags...
Question
So the question is:
How can I implement multiple flags in a single argument in C++, so that a user can easily set them to "true", "false" or "default" (by not setting the specific flag), or (optionally) indicate "whatever is not default"?
Is a class with two ints, using a bitwise operation similar to the template approach and providing its own bitwise operators, the way to go? And if so, is there a way to let the compiler do most of the bitwise operations at compile time?
Edit for clarification:
I don't want to pass the 4 distinct flags "true", "false", "default", "toggle" to a function.
E.g. think of a circle that gets drawn where the flags are used for "draw border", "draw center", "draw fill color", "blurry border", "let the circle hop up and down", "do whatever other fancy stuff you can do with a circle", ....
And for each of those "properties" I want to pass a flag with value either true, false, default or toggle.
So the function might decide to draw the border, fill color and center by default, but none of the rest. A call, roughly like this:
draw_circle (DRAW_BORDER | DONT_DRAW_CENTER | TOGGLE_BLURRY_BORDER) //or
draw_circle (BORDER=true, CENTER=false, BLURRY=toggle)
//or whatever nice syntax you come up with....
should draw the border (specified by flag), not draw the center (specified by flag), blur the border (the flag says: not the default) and draw the fill color (not specified, but it's the default).
If I later decide not to draw the center by default anymore but to blur the border by default, the call should draw the border (specified by flag), not draw the center (specified by flag), not blur the border (now blurring is the default, but we don't want the default) and draw the fill color (no flag for it, but it's the default).
Not exactly pretty, but very simple (building from your Approach 1):
#include <iostream>
using Flag = int;
static const Flag Flag1 = 1<<0;
static const Flag Flag2 = 1<<2;
// add more flags to turn things off, etc.
class Foo
{
bool flag1 = true; // default true
bool flag2 = false; // default false
void applyDefault(Flag& f)
{
if (f & Flag1)
flag1 = true;
if (f & Flag2)
flag2 = true;
// apply off flags
}
public:
void operator()(/*args ,*/ Flag f)
{
applyDefault(f);
if (flag1)
std::cout << "Flag 1 ON\n";
if (flag2)
std::cout << "Flag 2 ON\n";
}
};
void foo(/*args ,*/ Flag flags)
{
Foo f;
f(flags);
}
int main()
{
foo(Flag1); // Flag1 ON
foo(Flag2); // Flag1 ON\nFlag2 ON
foo(Flag1 | Flag2); // Flag1 ON\nFlag2 ON
return 0;
}
Your comments and answers pointed me towards a solution that I like and want to share with you:
struct Default_t{} Default;
struct Toggle_t{} Toggle;
struct FlagSet{
uint m_set;
uint m_reset;
constexpr FlagSet operator|(const FlagSet other) const{
return {
~m_reset & other.m_set & ~other.m_reset |
~m_set & other.m_set & other.m_reset |
m_set & ~other.m_set,
m_reset & ~other.m_reset |
~m_set & ~other.m_set & other.m_reset|
~m_reset & other.m_set & other.m_reset};
}
constexpr FlagSet& operator|=(const FlagSet other){
*this = *this|other;
return *this;
}
};
struct Flag{
const uint m_bit;
constexpr FlagSet operator= (bool val) const{
return {(uint)val<<m_bit,(!(uint)val)<<m_bit};
}
constexpr FlagSet operator= (Default_t) const{
return {0u,0u};
}
constexpr FlagSet operator= (Toggle_t) const {
return {1u<<m_bit,1u<<m_bit};
}
constexpr uint operator& (FlagSet i) const{
return i.m_set & (1u<<m_bit);
}
constexpr operator FlagSet() const{
return {1u<<m_bit,0u}; //= set
}
constexpr FlagSet operator|(const Flag other) const{
return (FlagSet)*this|(FlagSet)other;
}
constexpr FlagSet operator|(const FlagSet other) const{
return (FlagSet)*this|other;
}
};
constexpr uint operator& (FlagSet i, Flag f){
return f & i;
}
So basically the FlagSet holds two integers. One for set, one for reset. Different combinations represent different actions for that particular bit:
{false,false} = Default (D)
{true ,false} = Set (S)
{false,true } = Reset (R)
{true ,true } = Toggle (T)
The operator| uses a rather complex bitwise operation, designed to fulfill
D|D = D
D|R = R|D = R
D|S = S|D = S
D|T = T|D = T
T|T = D
T|R = R|T = S
T|S = S|T = R
S|S = S
R|R = R
S|R = S (*)
R|S = R (*)
The non-commutative behaviour in (*) is due to the fact that we need a way to decide which one is the "default" and which one is the "user-defined" one. In case of conflicting values, the left one takes precedence.
The Flag class represents a single flag, basically one of the bits. Using the different operator=() overloads enables some kind of "Key-Value-Notation" to convert directly to a FlagSet with the bit-pair at position m_bit set to one of the previously defined pairs. By default (operator FlagSet()) this converts to a Set(S) action on the given bit.
The class also provides some overloads for bitwise-OR that auto convert to FlagSet and operator&() to actually compare the Flag with a FlagSet. In this comparison both Set(S) and Toggle(T) are considered true while both Reset(R) and Default(D) are considered false.
Using this is incredibly simple and very close to the "usual" Flag-implementation:
constexpr Flag Flag1{0};
constexpr Flag Flag2{1};
constexpr Flag Flag3{2};
constexpr auto NoFlag1 = (Flag1=false); //Just for convenience, not really needed;
void foo(FlagSet f={0,0}){
f |= Flag1|Flag2; //This sets the default. Remember: default right, user left
cout << ((f & Flag1)?"1":"0");
cout << ((f & Flag2)?"2":"0");
cout << ((f & Flag3)?"3":"0");
cout << endl;
}
int main() {
foo();
foo(Flag3);
foo(Flag3|(Flag2=false));
foo(Flag3|NoFlag1);
foo((Flag1=Toggle)|(Flag2=Toggle)|(Flag3=Toggle));
return 0;
}
Output:
120
123
103
023
003
Test it on ideone
One last word about efficiency: while I didn't test it without all the constexpr keywords, with them this code:
bool test1(){
return Flag1&((Flag1=Toggle)|(Flag2=Toggle)|(Flag3=Toggle));
}
bool test2(){
FlagSet f = Flag1|Flag2 ;
return f & Flag1;
}
bool test3(FlagSet f){
f |= Flag1|Flag2 ;
return f & Flag1;
}
compiles to (using gcc 5.3 on gcc.godbolt.org)
test1():
movl $1, %eax
ret
test2():
movl $1, %eax
ret
test3(FlagSet):
movq %rdi, %rax
shrq $32, %rax
notl %eax
andl $1, %eax
ret
and while I'm not totally familiar with assembly code, this looks like very basic bitwise operations and is probably the fastest you can get without inlining the test functions.
If I understand the question you can solve the problem by creating a simple class with implicit constructor from bool and default constructor:
class T
{
public:
    T(bool value) : def(false), value(value) {} // implicit constructor from bool
    T() : def(true), value(false) {}
    bool def;   // is flag default
    bool value; // flag value if flag isn't default
};
and use it in a function like this:
void f(..., T flag = T()); // declaration
f(..., true); // call with flag = true
f(...);       // call with flag = default
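Inside f, the three cases might be resolved like this (a sketch of mine; picking true as the built-in default is just an assumption for illustration):
// Resolve the tri-state flag; "true" here stands for whatever default
// the function wants, it is not part of the original answer.
void f(/* ... , */ T flag = T())
{
    bool effective = flag.def ? true : flag.value;
    if (effective) {
        // do whatever the flag controls
    }
}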
If I understand correctly, you want a simple way to pass one or more flags to a function as a single parameter, and/or a simple way for an object to keep track of one or more flags in a single variable, correct? A simple approach would be to specify the flags as a typed enum, with an unsigned underlying type large enough to hold all the flags you need. For example:
/* Assuming C++11 compatibility. If you need to work with an older compiler, you'll have
* to manually insert the body of flag() into each BitFlag's definition, and replace
* FLAG_INVALID's definition with something like:
* FLAG_INVALID = static_cast<flag_t>(-1) -
* (FFalse + FTrue + FDefault + FToggle),
*/
#include <climits>
// For CHAR_BIT.
#include <cstdint>
// For uint8_t.
// Underlying flag type. Change as needed. Should remain unsigned.
typedef uint8_t flag_t;
// Helper functions, to provide cleaner syntax to the enum.
// Not actually necessary, they'll be evaluated at compile time either way.
constexpr flag_t flag(int f) { return 1 << f; }
constexpr flag_t fl_validate(int f) {
return (f ? (1 << f) + fl_validate(f - 1) : 1);
}
constexpr flag_t register_invalids(int f) {
// The static_cast is a type-independent maximum value for unsigned ints. The compiler
// may or may not complain.
// (f - 1) compensates for bits being zero-indexed.
return static_cast<flag_t>(-1) - fl_validate(f - 1);
}
// List of available flags.
enum BitFlag : flag_t {
FFalse = flag(0), // 0001
FTrue = flag(1), // 0010
FDefault = flag(2), // 0100
FToggle = flag(3), // 1000
// ...
// Number of defined flags.
FLAG_COUNT = 4,
// Indicator for invalid flags. Can be used to make sure parameters are valid, or
// simply to mask out any invalid ones.
FLAG_INVALID = register_invalids(FLAG_COUNT),
// Maximum number of available flags.
FLAG_MAX = sizeof(flag_t) * CHAR_BIT
};
// ...
void func(flag_t f);
// ...
class CL {
flag_t flags;
// ...
};
Note that this assumes that FFalse and FTrue should be distinct flags, both of which can be specified at the same time. If you want them to be mutually exclusive, a couple small changes would be necessary:
// ...
constexpr flag_t register_invalids(int f) {
// Compensate for 0th and 1st flags using the same bit.
return static_cast<flag_t>(-1) - fl_validate(f - 2);
}
// ...
enum BitFlag : flag_t {
FFalse = 0, // 0000
FTrue = flag(0), // 0001
FDefault = flag(1), // 0010
FToggle = flag(2), // 0100
// ...
Alternatively, instead of modifying the enum itself, you could modify flag():
// ...
constexpr flag_t flag(int f) {
// Give bit 0 special treatment as "false", shift all other flags down to compensate.
return (f ? 1 << (f - 1) : 0);
}
// ...
constexpr flag_t register_invalids(int f) {
return static_cast<flag_t>(-1) - fl_validate(f - 2);
}
// ...
enum BitFlag : flag_t {
FFalse = flag(0), // 0000
FTrue = flag(1), // 0001
FDefault = flag(2), // 0010
FToggle = flag(3), // 0100
// ...
While I believe this to be the simplest approach, and possibly the most memory-efficient if you choose the smallest possible underlying type for flag_t, it is likely also the least useful. [Also, if you end up using this or something similar, I would suggest hiding the helper functions in a namespace, to prevent unnecessary clutter in the global namespace.]
A simple example.
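Here is a hedged sketch of what using these flags could look like (the body of func below is my illustration, not from the answer above):
// Combine flags at the call site, mask out unknown bits, then test them.
void func(flag_t f) {
    f &= static_cast<flag_t>(~FLAG_INVALID); // drop any bits we don't recognize
    if (f & FToggle) {
        // toggle whatever the current state is
    } else if (f & FTrue) {
        // force the state on
    }
}
// caller:
// func(FTrue | FToggle);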
Is there a reason we cannot use an enum for this? Here is a solution that I have used recently:
// Example program
#include <iostream>
#include <string>
enum class Flag : int8_t
{
F_TRUE = 0x0, // Explicitly defined for readability
F_FALSE = 0x1,
F_DEFAULT = 0x2,
F_TOGGLE = 0x3
};
struct flags
{
Flag flag_1;
Flag flag_2;
Flag flag_3;
Flag flag_4;
};
int main()
{
flags my_flags;
my_flags.flag_1 = Flag::F_TRUE;
my_flags.flag_2 = Flag::F_FALSE;
my_flags.flag_3 = Flag::F_DEFAULT;
my_flags.flag_4 = Flag::F_TOGGLE;
std::cout << "size of flags: " << sizeof(flags) << "\n";
std::cout << (int)(my_flags.flag_1) << "\n";
std::cout << (int)(my_flags.flag_2) << "\n";
std::cout << (int)(my_flags.flag_3) << "\n";
std::cout << (int)(my_flags.flag_4) << "\n";
}
Here, we get the following output:
size of flags: 4
0
1
2
3
It's not quite memory efficient this way. Each Flag is 8 bits compared to two bools at one bit each, for a 4x memory increase. However, we are afforded the benefits of enum class which prevents some stupid programmer mistakes.
Now, I have another solution for when memory is critical. Here we pack 4 flags into an 8-bit struct. This one I came up with for a data editor, and it worked perfectly for my uses. However, there may be downsides that I am not aware of.
// Example program
#include <iostream>
#include <string>
enum Flag
{
F_TRUE = 0x0, // Explicitly defined for readability
F_FALSE = 0x1,
F_DEFAULT = 0x2,
F_TOGGLE = 0x3
};
struct PackedFlags
{
public:
bool flag_1_0:1;
bool flag_1_1:1;
bool flag_2_0:1;
bool flag_2_1:1;
bool flag_3_0:1;
bool flag_3_1:1;
bool flag_4_0:1;
bool flag_4_1:1;
public:
Flag GetFlag1()
{
return (Flag)(((int)flag_1_1 << 1) + (int)flag_1_0);
}
Flag GetFlag2()
{
return (Flag)(((int)flag_2_1 << 1) + (int)flag_2_0);
}
Flag GetFlag3()
{
return (Flag)(((int)flag_3_1 << 1) + (int)flag_3_0);
}
Flag GetFlag4()
{
return (Flag)(((int)flag_4_1 << 1) + (int)flag_4_0);
}
void SetFlag1(Flag flag)
{
flag_1_0 = (flag & (1 << 0));
flag_1_1 = (flag & (1 << 1));
}
void SetFlag2(Flag flag)
{
flag_2_0 = (flag & (1 << 0));
flag_2_1 = (flag & (1 << 1));
}
void SetFlag3(Flag flag)
{
flag_3_0 = (flag & (1 << 0));
flag_3_1 = (flag & (1 << 1));
}
void SetFlag4(Flag flag)
{
flag_4_0 = (flag & (1 << 0));
flag_4_1 = (flag & (1 << 1));
}
};
int main()
{
PackedFlags my_flags;
my_flags.SetFlag1(F_TRUE);
my_flags.SetFlag2(F_FALSE);
my_flags.SetFlag3(F_DEFAULT);
my_flags.SetFlag4(F_TOGGLE);
std::cout << "size of flags: " << sizeof(my_flags) << "\n";
std::cout << (int)(my_flags.GetFlag1()) << "\n";
std::cout << (int)(my_flags.GetFlag2()) << "\n";
std::cout << (int)(my_flags.GetFlag3()) << "\n";
std::cout << (int)(my_flags.GetFlag4()) << "\n";
}
Output:
size of flags: 1
0
1
2
3
At our organization we receive a daily blacklist (much bigger; this is just a snippet) in the following format:
172.44.12.0
198.168.1.5
10.10.0.0
192.168.78.6
192.168.22.22
111.111.0.0
222.222.0.0
12.12.12.12
When I run the program after the code compiles I receive:
1
1
1
1
1
1
1
1
I am using C++ in a Linux/Unix environment.
So far, I am just spitting it out to make sure I have it formatted correctly.
The name of the file is blacklist.txt, which contains the IPs listed above for now. I am only using cout to make sure my variables are defined correctly.
#include <iostream>
#include <vector>
#include <fstream>
#include <string>
#include <netinet/in.h>
#include <stdint.h>
#include <arpa/inet.h>
using namespace std;
bool is_match(std::string &hay_stack, std::string &srcip) {
in_addr_t _ip = inet_addr(hay_stack.c_str());
in_addr_t _IP = inet_addr(srcip.c_str());
_ip = ntohl(_ip);
_IP = ntohl(_IP);
uint32_t mask=(_ip & 0x00ffffff == 0) ? 0xff000000 :
(_ip & 0x0000ffff == 0 ? 0xffff0000 : 0);
return ( (_ip & mask) == (_IP & mask) );
}
int main()
{
vector<std::string> lines;
lines.reserve(5000); //Assuming that the file to read can have max 5K lines
string fileName("blacklist.txt");
ifstream file;
file.open(fileName.c_str());
if(!file.is_open())
{
cerr<<"Error opening file : "<<fileName.c_str()<<endl;
return -1;
}
//Read the lines and store it in the vector
string line;
while(getline(file,line))
{
lines.push_back(line);
}
file.close();
//Dump all the lines in output
for(unsigned int i = 0; i < lines.size(); i++)
{
string h = lines[i];
string mi = "10.10.10.10";
cout<<is_match(h,mi)<<endl;
}
return 0;
}
I am expecting the output to be 10.10.10.10 (some sort of host subnet here) 10.10.0.0 (and some sort of subnet mask here)
This is where your problem is:
uint32_t mask=(_ip & 0x00ffffff == 0) ? 0xff000000 :
(_ip & 0x0000ffff == 0 ? 0xffff0000 : 0);
return ( (_ip & mask) == (_IP & mask) );
If _ip is in the form x.0.0.0, it only compares x in _IP,
and if _ip is in the form x.y.0.0, it only compares x and y in _IP,
which is fine.
But if _ip isn't in either format you set the mask to 0 <- this is the problem.
When you take (_ip & 0) the result is always 0, likewise with (_IP & 0).
This means you always return true for addresses of the form a.b.c.d where c != 0 or d != 0.
Instead, make the default mask equal 0xffffffff to check for a complete match.
But it turns out that's not the big problem. The big problem is that == has a higher operator precedence than &, so your code is actually working like this:
uint32_t mask=(_ip & (0x00ffffff == 0)) ? 0xff000000 :
(_ip & (0x0000ffff == 0) ? 0xffff0000 : 0);
return ( (_ip & mask) == (_IP & mask) );
As a result, you will always get 0 for the mask. You need to add parentheses to fix this.
So in conclusion, your code should change to look like this:
uint32_t mask=( (_ip & 0x00ffffff) == 0) ? 0xff000000 :
( (_ip & 0x0000ffff) == 0 ? 0xffff0000 : 0xffffffff);
return ( (_ip & mask) == (_IP & mask) );
Responding to the implicit question, "Why doesn't my program work the way I expect?"
I am expecting the output to be 10.10.10.10 (some sort of host subnet here) 10.10.0.0 (and some sort of subnet mask here)
I don't know why you are expecting that. Your code (if the file opens successfully) only has one print statement in it:
cout<<is_match(h,mi)<<endl;
The function is_match always returns a bool, either true or false. When printed, it will always be either 1 or 0, respectively. There simply isn't any code in your program which could print an IP address or netmask.
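If the goal is to see which blacklist entries match, a minimal sketch of the loop might look like this (keeping the rest of the program unchanged):
//Print only the blacklist entries that match the test address
for(unsigned int i = 0; i < lines.size(); i++)
{
    string h = lines[i];
    string mi = "10.10.10.10";
    if(is_match(h, mi))
        cout << mi << " matches blacklist entry " << h << endl;
}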
I have a DWORD variable and I want to test whether specific bits are set in it. I have my code below, but I am not sure if I am transferring the bits from the Win32 data type KBDLLHOOKSTRUCT to my lparam data type correctly.
See MSDN that documents the DWORD flag variable: http://msdn.microsoft.com/en-us/library/ms644967(v=vs.85).aspx
union KeyState
{
LPARAM lparam;
struct
{
unsigned nRepeatCount : 16;
unsigned nScanCode : 8;
unsigned nExtended : 1;
unsigned nReserved : 4;
unsigned nContext : 1;
unsigned nPrev : 1;
unsigned nTrans : 1;
};
};
KBDLLHOOKSTRUCT keyInfo = *((KBDLLHOOKSTRUCT*)lParam);
KeyState myParam;
myParam.nRepeatCount = 1;
myParam.nScanCode = keyInfo.scanCode;
myParam.nExtended = keyInfo.flags && LLKHF_EXTENDED; // maybe it should be keyInfo.flags & LLKHF_EXTENDED or keyInfo.flags >> LLKHF_EXTENDED
myParam.nReserved = 0;
myParam.nContext = keyInfo.flags && LLKHF_ALTDOWN;
myParam.nPrev = 0; // can store the last key pressed as virtual key/code, then check against this one, if its the same then set this to 1 else do 0
myParam.nTrans = keyInfo.flags && LLKHF_UP;
// Or maybe I should do this to transfer bits...
myParam.nRepeatCount = 1;
myParam.nScanCode = keyInfo.scanCode;
myParam.nExtended = keyInfo.flags & 0x01;
myParam.nReserved = (keyInfo.flags >> 0x01) & (1<<3)-1;
myParam.nContext = keyInfo.flags & 0x05;
myParam.nPrev = 0; // can store the last key pressed as virtual key/code, then check against this one, if its the same then set this to 1 else do 0
myParam.nTrans = keyInfo.flags & 0x07;
Rather than
myParam.nExtended = keyInfo.flags && LLKHF_EXTENDED
you need
myParam.nExtended = (keyInfo.flags & LLKHF_EXTENDED) != 0;
It's & not && because you want a bitwise and not a logical and. And the !=0 ensures the answer is either 0 or 1 (rather than 0 or some-other-nonzero-value) so it can be represented in your one-bit bitfield.
bool CheckBits(DWORD var, DWORD mask)
{
    DWORD setbits = var & mask;      // Keep only the bits of var that are in the mask
    DWORD diffbits = setbits ^ mask; // Find the mask bits that are NOT set in var
    return diffbits == 0;            // Return true if all the specified bits are set
}
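A usage sketch (keyInfo.flags and the LLKHF_* masks come from the question above):
// True only when both LLKHF_UP and LLKHF_EXTENDED are set in keyInfo.flags.
if (CheckBits(keyInfo.flags, LLKHF_UP | LLKHF_EXTENDED))
{
    // both bits are set
}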
If you want to merge two bits, you would use | (bitwise OR) operator:
myParam.nExtended = keyInfo.flags | LLKHF_EXTENDED;
myParam.nExtended = keyInfo.flags | 0x01;
To check if bit was set, you would use & (bitwise AND) operator:
if(myParam.nExtended & LLKHF_EXTENDED) ...
I have a set of bit flags that are used in a program I am porting from C to C++.
To begin...
The flags in my program were previously defined as:
/* Define feature flags for this DCD file */
#define DCD_IS_CHARMM 0x01
#define DCD_HAS_4DIMS 0x02
#define DCD_HAS_EXTRA_BLOCK 0x04
...Now I've gathered that #defines for constants (versus class constants, etc.) are generally considered bad form.
This raises questions about how best to store bit flags in C++, and why C++ doesn't support assignment of binary literals to an int the way it allows hex numbers to be assigned (via "0x"). These questions are summarized at the end of this post.
I could see one simple solution is to simply create individual constants:
namespace DCD {
const unsigned int IS_CHARMM = 1;
const unsigned int HAS_4DIMS = 2;
const unsigned int HAS_EXTRA_BLOCK = 4;
};
Let's call this idea 1.
Another idea I had was to use an integer enum:
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 1,
HAS_4DIMS = 2,
HAS_EXTRA_BLOCK = 8
};
};
But one thing that bothers me about this is that it's less intuitive when it comes to higher values, it seems... i.e.
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 1,
HAS_4DIMS = 2,
HAS_EXTRA_BLOCK = 8,
NEW_FLAG = 16,
NEW_FLAG_2 = 32,
NEW_FLAG_3 = 64,
NEW_FLAG_4 = 128
};
};
Let's call this approach option 2.
I'm considering using Tom Torf's macro solution:
#define B8(x) ((int) B8_(0x##x))
#define B8_(x) \
( ((x) & 0xF0000000) >> ( 28 - 7 ) \
| ((x) & 0x0F000000) >> ( 24 - 6 ) \
| ((x) & 0x00F00000) >> ( 20 - 5 ) \
| ((x) & 0x000F0000) >> ( 16 - 4 ) \
| ((x) & 0x0000F000) >> ( 12 - 3 ) \
| ((x) & 0x00000F00) >> ( 8 - 2 ) \
| ((x) & 0x000000F0) >> ( 4 - 1 ) \
| ((x) & 0x0000000F) >> ( 0 - 0 ) )
converted to inline functions, e.g.
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
....
/* TAKEN FROM THE C++ LITE FAQ [39.2]... */
class BadConversion : public std::runtime_error {
public:
BadConversion(std::string const& s)
: std::runtime_error(s)
{ }
};
inline unsigned int convertToUI(std::string const& s)
{
std::istringstream i(s);
unsigned int x;
if (!(i >> x))
throw BadConversion("convertToUI(\"" + s + "\")");
return x;
}
/** END CODE **/
inline unsigned int B8(std::string x) {
unsigned int my_val = convertToUI(x.insert(0,"0x").c_str());
return ((my_val) & 0xF0000000) >> ( 28 - 7 ) |
((my_val) & 0x0F000000) >> ( 24 - 6 ) |
((my_val) & 0x00F00000) >> ( 20 - 5 ) |
((my_val) & 0x000F0000) >> ( 16 - 4 ) |
((my_val) & 0x0000F000) >> ( 12 - 3 ) |
((my_val) & 0x00000F00) >> ( 8 - 2 ) |
((my_val) & 0x000000F0) >> ( 4 - 1 ) |
((my_val) & 0x0000000F) >> ( 0 - 0 );
}
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = B8("00000001"),
HAS_4DIMS = B8("00000010"),
HAS_EXTRA_BLOCK = B8("00000100"),
NEW_FLAG = B8("00001000"),
NEW_FLAG_2 = B8("00010000"),
NEW_FLAG_3 = B8("00100000"),
NEW_FLAG_4 = B8("01000000")
};
};
Is this crazy? Or does it seem more intuitive? Let's call this choice 3.
So to recap, my over-arching questions are:
1. Why doesn't C++ support a "0b" binary literal prefix, similar to "0x"?
2. Which is the best style to define flags...
i. Namespace wrapped constants.
ii. Namespace wrapped enum of unsigned ints assigned directly.
iii. Namespace wrapped enum of unsigned ints assigned using readable binary string.
Thanks in advance! And please don't close this thread as subjective, because I really want to get help on what the best style is and why C++ lacks built-in binary literal capability.
EDIT 1
A bit of additional info. I will be reading a 32-bit bitfield from a file and then testing it with these flags. So bear that in mind when you post suggestions.
Prior to C++14, binary literals had been discussed off and on over the years, but as far as I know, nobody had ever written up a serious proposal to get it into the standard, so it never really got past the stage of talking about it.
For C++14, somebody finally wrote up a proposal, and the committee accepted it, so if you can use a current compiler, the basic premise of the question is false: you can use binary literals, which have the form 0b01010101.
In C++11, instead of adding binary literals directly, they added a much more general mechanism to allow general user-defined literals, which you could use to support binary, or base 64, or other kinds of things entirely. The basic idea is that you specify a number (or string) literal followed by a suffix, and you can define a function that will receive that literal, and convert it to whatever form you prefer (and you can maintain its status as a "constant" too...)
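As a rough illustration of the C++11 route (a hedged sketch, not from the original answer; the _b suffix and the parse_binary helper are my invention):
// A C++11 user-defined literal that parses its token as binary at compile time.
constexpr unsigned long long parse_binary(const char* s, unsigned long long acc = 0)
{
    return *s == '\0' ? acc
         : *s == '0'  ? parse_binary(s + 1, acc << 1)
         : *s == '1'  ? parse_binary(s + 1, (acc << 1) | 1)
         : throw "binary literal may only contain 0 and 1";
}
constexpr unsigned long long operator"" _b(const char* s) // raw literal operator
{
    return parse_binary(s);
}
static_assert(10101_b == 21, "binary 10101 is 21");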
As to which to use: if you can, the binary literals built into C++14 or above are the obvious choice. If you can't use them, I'd generally prefer a variation of option 2: an enum with initializers in hex:
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 0x1,
HAS_4DIMS = 0x2,
HAS_EXTRA_BLOCK = 0x8,
NEW_FLAG = 0x10,
NEW_FLAG_2 = 0x20,
NEW_FLAG_3 = 0x40,
NEW_FLAG_4 = 0x80
};
};
Another possibility is something like:
#define bit(n) (1<<(n))
enum e_feature_flags {
    IS_CHARMM = bit(0),
    HAS_4DIMS = bit(1),
    HAS_EXTRA_BLOCK = bit(3),
    NEW_FLAG = bit(4),
    NEW_FLAG_2 = bit(5),
    NEW_FLAG_3 = bit(6),
    NEW_FLAG_4 = bit(7)
};
With option two, you can use the left shift, which is perhaps a bit less "unintuitive":
namespace DCD {
enum e_Feature_Flags {
IS_CHARMM = 1,
HAS_4DIMS = (1 << 1),
HAS_EXTRA_BLOCK = (1 << 2),
NEW_FLAG = (1 << 3),
NEW_FLAG_2 = (1 << 4),
NEW_FLAG_3 = (1 << 5),
NEW_FLAG_4 = (1 << 6)
};
};
Just as a note, Boost (as usual) provides an implementation of this idea.
Why not use a bitfield struct?
struct preferences {
unsigned int likes_ice_cream : 1;
unsigned int plays_golf : 1;
unsigned int watches_tv : 1;
unsigned int reads_stackoverflow : 1;
};
struct preferences fred;
fred.likes_ice_cream = 1;
fred.plays_golf = 0;
fred.watches_tv = 0;
fred.reads_stackoverflow = 1;
if (fred.likes_ice_cream == 1)
/* ... */
GCC has an extension making it capable of binary assignment:
int n = 0b01010101;
Edit: As of C++14, this is now an official part of the language.
What's wrong with hex for this use case?
enum Flags {
FLAG_A = 0x00000001,
FLAG_B = 0x00000002,
FLAG_C = 0x00000004,
FLAG_D = 0x00000008,
FLAG_E = 0x00000010,
// ...
};
I guess the bottom line is that it's not really necessary.
If you just want to use binary for flags, the approach below is how I typically do it. After the original definitions you never have to worry about looking at the "messier" bigger powers of 2 you mentioned:
int FLAG_1 = 1;
int FLAG_2 = 2;
int FLAG_3 = 4;
...
int FLAG_N = 256;
you can easily check them with
if((someVariable & FLAG_3) == FLAG_3) {
    // the flag is set
}
And by the way, depending on your compiler (I'm using the GNU GCC compiler) it may support "0b".