I came across the following declaration while working on some sample code in C++. Can anyone explain the use of "|" in this declaration?
static const DWORD c_FaceFrameFeatures = FaceFrameFeatures::FaceFrameFeatures_BoundingBoxInColorSpace
| FaceFrameFeatures::FaceFrameFeatures_PointsInColorSpace
| FaceFrameFeatures::FaceFrameFeatures_RotationOrientation
| FaceFrameFeatures::FaceFrameFeatures_Happy;
Note that DWORD is an alias for unsigned int.
This snippet is taken from the FaceBasicsD2D sample for Kinect V2.
What you have there is the creation of a "bitmask": several const "enum" values combined using "bitwise OR" (the | operator).
Typically this is done when several "flags" are wanted in a compact, somewhat extensible representation. Only somewhat extensible, because a DWORD is 32 bits and so holds at most 32 flags.
Given that the flags usually have values which are all-bits-zero except one bit, you can also simply add them, though this is less conventional:
static const DWORD c_FaceFrameFeatures = FaceFrameFeatures::FaceFrameFeatures_BoundingBoxInColorSpace
+ FaceFrameFeatures::FaceFrameFeatures_PointsInColorSpace
+ FaceFrameFeatures::FaceFrameFeatures_RotationOrientation
+ FaceFrameFeatures::FaceFrameFeatures_Happy;
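One reason OR is the convention anyway: it is idempotent, while addition is not. A two-line sketch, using an illustrative flag value of 0x04:
unsigned viaOr  = 0x04 | 0x04;   // OR is idempotent: result is still 0x04
unsigned viaAdd = 0x04 + 0x04;   // addition is not: result is 0x08, the wrong bit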
In short: The bitwise OR operator is setting multiple flags at once.
c_FaceFrameFeatures is an unsigned int, and each bit of that int is used individually as a flag. So one bit defines whether the option PointsInColorSpace is enabled, another bit defines RotationOrientation, and so on. The intent of this code snippet is to set several flags at once in the declaration.
You could look at the documentation's list of FaceFrameFeatures flags, and note how each flag is defined as a single bit in hex notation.
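For illustration, here is a minimal, self-contained sketch of the same pattern. The flag values below are made up for the example and are not the real Kinect constants:
#include <cstdio>

typedef unsigned int DWORD;   // as noted above, DWORD is an alias for unsigned int

// Illustrative single-bit flag values; the real FaceFrameFeatures constants may differ.
namespace FaceFrameFeatures {
    const DWORD BoundingBoxInColorSpace = 0x01;
    const DWORD PointsInColorSpace      = 0x02;
    const DWORD RotationOrientation     = 0x10;
    const DWORD Happy                   = 0x80;
}

int main() {
    const DWORD features = FaceFrameFeatures::BoundingBoxInColorSpace
                         | FaceFrameFeatures::Happy;

    // Bitwise AND isolates one flag's bit so it can be tested.
    if (features & FaceFrameFeatures::Happy)
        std::puts("Happy is set");
    if (!(features & FaceFrameFeatures::PointsInColorSpace))
        std::puts("PointsInColorSpace is not set");
}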
I'm trying to figure out how this code works, but I haven't managed to get a clear answer.
#define testbit(x, y) ( ( ((const char*) & (x))[(y)>>3] & 0x80 >> ((y)&0x07)) >> (7-((y)&0x07) ) )
I'm new to pointers, so if you can explain this in plain English, I would really appreciate it.
It belongs to a segment of code for an X-Plane Plug-in found at https://code.google.com/p/xplugins/source/browse/trunk/Xsaitekpanels/SwitchPanel.cpp?r=38 line=19
The macro tests the value of the y-th bit in x. You can't directly address bits, so the code starts by treating x as an array of bytes (the const char* cast).
It then looks up the byte where the bit lives. There are 8 bits in a byte, so it divides by 8. Chasing performance, instead of simply dividing by 8, the code uses the binary trick of shifting right 3 places. In general, for unsigned x and y, x >> y = x/2^y, and x << y = x*2^y.
At this point you need to test the bit within the byte, so you get the remainder of y/8. Yet another bit trick, using y & 7 instead of the clearer y % 8.
With this information you can make a mask: a single set bit, 0x80, shifted into position to test the (y % 8)-th bit. The mask is ANDed against the byte, and a non-zero result here means the bit was set to 1, otherwise 0.
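Putting it together, here is a minimal check of the macro's behavior; note that it numbers bits starting from the most significant bit of byte 0 (the array contents below are just example data):
#include <cassert>
#include <cstdio>

// The macro from the question, verbatim.
#define testbit(x, y) ( ( ((const char*) & (x))[(y)>>3] & 0x80 >> ((y)&0x07)) >> (7-((y)&0x07) ) )

int main() {
    unsigned char data[2] = { 0xA0, 0x01 };  // bits: 10100000 00000001
    assert(testbit(data, 0) == 1);   // MSB of byte 0
    assert(testbit(data, 1) == 0);
    assert(testbit(data, 2) == 1);
    assert(testbit(data, 15) == 1);  // LSB of byte 1
    std::puts("all bit tests passed");
}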
Completing #RhythmicFistman's answer
#RhythmicFistman's answer is missing one small part: the last step in the shifts.
The >> (7-((y)&0x07)) step ensures that you only ever get a result of 1 or 0. With this code it is safe to do comparisons like:
if (testbit(variable, 6) == 1) {
// do something
}
Without that step, testbit would instead return either 0 or a mask in which only the 6th bit could be set, with all other bits always 0. That normalization is the intent, but it is not implemented in what is considered a portable way; see Warning 3 below.
Possible issues with using this code
Now to add something to the other answers: they have not pointed out two keywords that should be mentioned here, strict aliasing and arithmetic right shift. My elaboration comes in the form of the warnings below.
Warning 1: Endianness
This code assumes that you are using a big endian architecture or only wish to get the correct bit from an array of chars.
The reason is that if you convert an int into an array of chars (bytes) you will get different results on a big endian machine vs a little endian machine.
Warning 2: Strict Aliasing
The macro makes use of a cast (const char*) &(x) which is designed to change the type, a.k.a. alias, of (x) so that it is easier to get to the correct bits.
This is dangerous and the reason why is explained beautifully in this SO answer. The short version is that if you compile this code with optimisations strange things can happen.
The Wikipedia pages on Aliasing and Pointer Aliasing are also useful and should be read.
Warning 3: Arithmetic Right Shift
In addition, there is a potential issue with the way this code uses the right shift operator >>. This operator behaves differently depending on whether the value it operates on is signed or unsigned. As long as you never shift negative numbers you will be safe, but this code will not protect you against that mistake. I suspect, though, that you're unlikely to make such a mistake anyway, so it should be OK to use.
Also worth mentioning: the macro shifts a plain char, which may be signed, to the right. Though this works, I would prefer unsigned char, which improves portability because it cannot produce an arithmetic right shift even when char and int are the same width (which is almost never the case in practice, granted). It works at all because char is promoted to int for the shift; see this SO answer for an explanation.
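To make the warnings concrete, here is a sketch of a variant that sidesteps Warnings 2 and 3: accessing any object through an unsigned char pointer is permitted by the aliasing rules, and the final & 1u makes the signedness of the shift irrelevant. The macro name is mine, not from the original code, and Warning 1 (endianness) still applies:
// Shift the target bit down to position 0, then mask; returns exactly 0 or 1.
#define testbit_safe(x, y) \
    ( ( ((const unsigned char*) & (x))[(y)>>3] >> (7-((y)&0x07)) ) & 1u )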
What you see is a macro that performs the following steps, in order:
Shift y right by 3 bits to get a byte index.
Take the address of x, treat it as an array of chars, and pick the char at that index.
Build a mask by shifting 0x80 right by (y & 0x07).
AND the selected char with that mask.
Shift that result right by 7 - (y & 0x07).
Well, does this help you? I don't think so, because this macro is clearly improper and kind of tricky: bit masks, bitwise operations, bit shifts...
If you can explain more precisely what you want to understand here, maybe I can be more helpful.
My question: I'm looking at the Characteristics member of the IMAGE_SECTION_HEADER struct. I want to know if a certain section is executable or not. How would I go about checking this? The Characteristics member is a DWORD, and I want to be able to know if it contains the value IMAGE_SCN_MEM_EXECUTE (0x20000000). What would the calculation for this look like? I'm guessing I have to use the modulo operator, but have no idea how.
if (imageSectionHeader.Characteristics & IMAGE_SCN_MEM_EXECUTE)
{
// Do work here...
}
This is called masking. You're masking the Characteristics value with the IMAGE_SCN_MEM_EXECUTE mask to see if those specific bits are set. The condition above will be true only if all the bits set in the IMAGE_SCN_MEM_EXECUTE mask are also set in the Characteristics value.
It looks like IMAGE_SECTION_HEADER::Characteristics is a bit field. You want to check if the bit denoted by IMAGE_SCN_MEM_EXECUTE is set. To do that, you do the bitwise AND between Characteristics and IMAGE_SCN_MEM_EXECUTE:
header.Characteristics & IMAGE_SCN_MEM_EXECUTE
When converted to bool, this expression will be true only if the IMAGE_SCN_MEM_EXECUTE bit is set.
I found some facts about the Windows flag design:
Let's assume flag A is 0x0001000, B is 0x0002000, and C is 0x0003000.
Characteristics may contain multiple flags. Suppose an exe contains flags A and B.
Then the Characteristics value will be 0x0003000.
Checking (Characteristics & (A|B)) works as expected, but (Characteristics & C) will also return true, even though C was never set, because C happens to equal A|B.
But Microsoft designed the flags in such a way that no combination of flags can come together and form a third flag.
If you look closely at the possible values of Characteristics, some intermediate values are skipped to avoid exactly this issue.
So bitwise AND (&) will always work for flag checking.
For safety, one can also write the check as an exact match; to check for Flag1 and Flag2 in Characteristics:
(Characteristics & (Flag1 | Flag2)) == (Flag1 | Flag2)
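A small sketch of the aliasing pitfall described above, using the hypothetical A/B/C values from this answer (they are not real IMAGE_SCN_* constants):
#include <cassert>

const unsigned FlagA = 0x0001000;
const unsigned FlagB = 0x0002000;
const unsigned FlagC = 0x0003000;  // happens to equal FlagA | FlagB

int main() {
    unsigned characteristics = FlagA | FlagB;  // C was never set

    assert((characteristics & FlagC) != 0);    // naive test: "true" for C anyway

    // Exact-match test: true only when every requested bit is present.
    assert((characteristics & (FlagA | FlagB)) == (FlagA | FlagB));
}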
This is my first time trying to create a bitmask, and although it is seemingly simple, I am having trouble visualizing everything.
Keep in mind I cannot use std::bitset
First, I have read that accessing raw bits is undefined behavior (so using a union with a char would be bad, because the bit order might be reversed under a different compiler).
Most code I've looked at uses a struct to define each bit, and this way of structuring data should be compiler-independent because the first bit will always be the LSB (I assume). Here is an example:
struct foo
{
unsigned char a : 1;
unsigned char b : 1;
unsigned char unused : 6;
};
Now the question is... could you use more than one bit for a variable in the struct AND have it still be compiler-independent? It seems like the answer is yes, but I have had some odd answers and want to be sure. Something like:
struct foo
{
unsigned char ab : 2;
unsigned char unused : 6;
};
It seems like regardless if the raw structure is reversed, the first bit accessed from the struct is always the LSB, so how many bits you use should not matter.
The C standard does not specify the ordering of fields within a unit -- there's no guarantee that a, in your example, is in the LSB. If you want fully portable behavior, you need to do the bit manipulation yourself, using unsigned integral types, and (if using unsigned integral types bigger than a byte) you need to worry about the endianness when reading/writing them from external sources.
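For instance, here is a sketch of handling the two-bit ab field by hand with an unsigned type, which pins down the bit layout regardless of compiler (the mask and shift values are my choice for the example):
#include <cstdint>

const unsigned AB_SHIFT = 0;                    // place ab in the two lowest bits
const std::uint8_t AB_MASK = 0x03 << AB_SHIFT;  // two-bit-wide mask

std::uint8_t set_ab(std::uint8_t byte, unsigned value) {
    // Clear the field, then write the new value into it.
    return (std::uint8_t)((byte & ~AB_MASK) | ((value << AB_SHIFT) & AB_MASK));
}

unsigned get_ab(std::uint8_t byte) {
    return (byte & AB_MASK) >> AB_SHIFT;
}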
The behaviour does not depend on the bit order. What you have written corresponds to the language standard and therefore behaves the same on all platforms.
Bitfields cannot be portably used to access specific bits in an external block of data (like a hardware register or data serialized in a stream of bytes). So bitfields aren't useful in this context - at least for portable code.
But if you're talking about using the bitfield within the program and not trying to have it model some external bit representation, then it's 100% portable. Not super useful, but portable.
I've spent a career twiddling bits in C/C++, and maybe because of this issue, I never see it done this way. We always use unsigned variables and apply bit masks to them:
#define BITMASK_A 0x01
#define BITMASK_B 0x02
unsigned char bitfield;
Then when you want to access a, you use (bitfield & BITMASK_A)
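The usual companions to that test, as a sketch using the masks defined above:
bitfield |= BITMASK_A;      /* set a    */
bitfield &= ~BITMASK_B;     /* clear b  */
bitfield ^= BITMASK_A;      /* toggle a */
if (bitfield & BITMASK_B) { /* b is set */ }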
But to answer your question, there should be no logical difference between your two examples: if the compiler places ab at the low end, then the first example should also place a at the LSB.
screen = SDL_SetVideoMode(1000,1000,32, SDL_HWSURFACE | SDL_FULLSCREEN);
What does the | do in SDL_HWSURFACE | SDL_FULLSCREEN? (I tried googling this, but Google won't accept special characters.)
C / C++ APIs often set up bit 'flags' when there are a lot of different, non-mutually exclusive options that can be set for a given function call. Each of the flags will be assigned a bit position in a value (often a DWORD or other large integer type). One or more of these bits can then be set by bitwise OR-ing a collection of defined constants that represent the options and give you a more clear label to identify them than a raw numeric constant. The resultant value is passed as a single argument to the API function, which helps to keep the signature manageable.
In this particular case, SDL_HWSURFACE and SDL_FULLSCREEN represent options that can be passed in to the SDL_SetVideoMode call. There are probably several other possible options available as well that were not set in this case. This particular call sets the two options by bitwise OR-ing the flag constants together, and passing the result as the last parameter.
That's the bitwise OR operator. It applies it to SDL_HWSURFACE and SDL_FULLSCREEN.
The other answers have explained it's a bitwise OR, but you probably want to know how that works:
Each flag is passed as a binary number with a single bit set, like 00001000 or 01000000, and each bit position represents an individual flag: one bit says whether HW_SURFACE is on or off, and another bit says whether FULLSCREEN is on. (Note that these are example values; I'm not sure about the actual bits.)
So what the bitwise OR does is combine those two flags by comparing each bit position and asking "is this bit 1 in either value?"; if either is 1, the result bit is set to 1. This produces the result 01001000, which SDL can then parse to set the appropriate flags.
The | operator means bitwise or. That means each bit in the result is set if the corresponding bit in the left hand or the right hand value is set. So 1 | 2 = 3, because 1 in binary is 01 and 2 in binary is 10. So the result in binary is 11 which is 3 in decimal.
In your example this is used to pass a lot of different on/off options to a function. Each of the constants has exactly one bit set. The function then looks at the value you pass and using the bitwise and operator & to check which ones you specified.
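As a sketch of what happens on the receiving side (the constants and function below are illustrative, not SDL's real internals):
#include <cstdio>

const unsigned HWSURFACE  = 0x00000001;  // made-up values for illustration
const unsigned FULLSCREEN = 0x80000000;

// How an API can unpack flags that were OR-ed together by the caller.
void set_video_mode(unsigned flags) {
    if (flags & HWSURFACE)  std::puts("hardware surface requested");
    if (flags & FULLSCREEN) std::puts("fullscreen requested");
}

int main() {
    set_video_mode(HWSURFACE | FULLSCREEN);  // both branches fire
}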
As per this website, I wish to represent a Maze with a 2 dimensional array of 16 bit integers.
Each 16 bit integer needs to hold the following information:
Here's one way to do it (this is by no means the only way): a 12x16 maze grid can be represented as an array m[16][12] of 16-bit integers. Each array element would contain all the information for a single corresponding cell in the grid, with the integer bits mapped like this:
(source: mazeworks.com)
To knock down a wall, set a border, or create a particular path, all we need to do is flip bits in one or two array elements.
How do I use bitwise flags on 16-bit integers so I can set each one of those bits and check whether they are set?
I'd like to do it in an easily readable way (i.e., Border.W, Border.E, Walls.N, etc.).
How is this generally done in C++? Do I use hexadecimal to represent each one (i.e., Walls.N = 0x02, Walls.E = 0x04, etc.)? Should I use an enum?
See also How do you set, clear, and toggle a single bit?.
If you want to use bitfields then this is an easy way:
typedef struct MAZENODE
{
bool backtrack_north:1;
bool backtrack_south:1;
bool backtrack_east:1;
bool backtrack_west:1;
bool solution_north:1;
bool solution_south:1;
bool solution_east:1;
bool solution_west:1;
bool maze_north:1;
bool maze_south:1;
bool maze_east:1;
bool maze_west:1;
bool walls_north:1;
bool walls_south:1;
bool walls_east:1;
bool walls_west:1;
} MAZENODE;
Then your code can just test each one for true or false.
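A quick sketch of using it (MAZENODE as typedef'd above):
MAZENODE cell = {};          // all flags start cleared
cell.walls_north = true;
if (cell.walls_north && !cell.solution_north) {
    // the north wall is up and not on the solution path
}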
Use std::bitset
Use hex constants/enums and bitwise operations if you care about which particular bits mean what.
Otherwise, use C++ bitfields (but be aware that the ordering of bits in the integer will be compiler-dependent).
Learn your bitwise operators: &, |, ^, and ~.
At the top of a lot of C/C++ files I have seen flags defined in hex to mask each bit.
#define ONE 0x0001
To see if a bit is turned on, you AND the value with the mask. To turn it on, you OR it with the mask. To toggle it like a switch, XOR it with the mask.
To manipulate sets of bits, you can also use ....
std::bitset<N>
#include <bitset>
#include <cassert>

std::bitset<4*4> bits;   // 16 bits, all initially 0
bits[ 10 ] = false;      // indexed access (bit 10 is already 0)
bits.set(10);            // set bit 10 to 1
bits.flip();             // invert every bit, so bit 10 becomes 0
assert( !bits.test(10) );
You can do it with hexadecimal flags or enums as you suggested, but the most readable/self-documenting is probably to use what are called "bitfields" (for details, Google for C++ bitfields).
Yes, a good way is to use hexadecimal to represent the bit patterns. Then you use the bitwise operators to manipulate your 16-bit ints.
For example:
if(x & 0x01){} // tests if bit 0 is set using bitwise AND
x ^= 0x02; // toggles bit 1 (0 based) using bitwise XOR
x |= 0x10; // sets bit 4 (0 based) using bitwise OR
I'm not a huge fan of bitset. It's just more typing, in my opinion, and it doesn't hide what you are doing anyway: you still have to AND and OR bits, unless you are picking out just one bit. That may work for small groups of flags. Not that we need to hide what we are doing, but the intention of a class is usually to make something easier for its users, and I don't think this class accomplishes that.
Say, for instance, you have a flag system with 64 flags, and you want to test, I don't know, 39 of them in one if statement to see if they are all on. Using bitfields is a huge pain: you have to type them all out. (Of course, I'm assuming you use only the bitfield functionality and don't mix and match methods.) The same goes for bitset: unless I am missing something in the class, which is quite possible since I rarely use it, I don't see a way to test all 39 flags without typing out the whole thing or resorting to "standard methods" (using enum flag lists or some defined value for the 39 bits and bitset's & operator). This can start to get messy depending on your approach.
And I know 64 flags sounds like a lot, and it is, depending on what you are doing. Personally speaking, most of the projects I'm involved with depend on flag systems, so 64 is not that unheard of, though 16~32 is far more common in my experience. I'm actually helping out right now on a project where one flag system has 640 bits. It's basically a privilege system, so it makes some sense to arrange them all together. Admittedly, I would like to break that up a bit, but... eh... I'm helping, not creating.