What does "variable|variable" mean in C++?

I was looking into this ITE8712 watchdog timer demo code when I saw this:
void InitWD(char cSetWatchDogUnit, char cSetTriggerSignal)
{
    OpenIoConfig();        // open super IO of configuration for Super I/O
    SelectIoDevice(0x07);  // select device 7
    // set watch dog counter of unit
    WriteIoCR(0x72, cSetWatchDogUnit | cSetTriggerSignal);
    //CloseIoConfig();     // close super IO of configuration for Super I/O
}
and, I wonder what is meant by this line:
cSetWatchDogUnit|cSetTriggerSignal
because the WriteIoCR function looks like this:
void WriteIoCR(char cIndex, char cData)
{
    // select the Super I/O index register
    outportb(equIndexPort, cIndex);
    // write the data to the Super I/O data register
    outportb(equDataPort, cData);
}
So cIndex should be 0x72, but what about cData? I really don't get the "|" thing, as I've only ever used "||" for OR in conditional statements.

It's a bitwise or, as distinct from your normal logical or. It basically sets a bit in the result if the corresponding bit in either of the source operands was set.
For example, the expression 43 | 17 can be calculated as:
43 = 0x2b = binary 0010 1011
17 = 0x11 = binary 0001 0001
                   ---------
"or" them:         0011 1011 = 0x3b = 59
See this answer for a more thorough examination of the various bitwise operators.
It's typically used when you want to manipulate specific bits within a data type, such as control of a watchdog timer in an embedded system (your particular use case).
You can use or (|) to turn bits on, and and (&) to turn them off (by and-ing with the inversion of the bitmask that was used to turn them on).
So, to turn on the b3 bit, use:
val = val | 0x08; // 0000 1000
To turn it off, use:
val = val & 0xf7; // 1111 0111
To detect if b3 is currently set, use:
if ((val & 0x08) != 0) {
    // it is set.
}
You'll typically see the bitmasks defined something like:
#define B0 0x01
#define B1 0x02
#define B2 0x04
#define B3 0x08
#define B4 0x10
or:
enum BitMask {
    B0 = 0x01,
    B1 = 0x02,
    B2 = 0x04,
    B3 = 0x08,
    B4 = 0x10
};
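If it helps, here is a short, self-contained sketch (my own illustration, not from the original code) that uses those named masks with the set/clear/test idioms shown above, so you never touch magic numbers directly:
#include <cstdio>

enum BitMask {
    B0 = 0x01,
    B1 = 0x02,
    B2 = 0x04,
    B3 = 0x08,
    B4 = 0x10
};

int main()
{
    unsigned char val = 0;
    val |= B3;               // turn B3 on        -> 0000 1000
    val |= B0;               // turn B0 on        -> 0000 1001
    val &= ~B0;              // turn B0 back off  -> 0000 1000
    if ((val & B3) != 0)     // test whether B3 is set
        std::printf("B3 is set, val = 0x%02x\n", val);  // prints 0x08
    return 0;
}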
As to what this means:
WriteIoCR (0x72, cSetWatchDogUnit|cSetTriggerSignal);
More than likely, 0x72 selects a register of some sort that you're writing to (via the index/data port pair), and cSetWatchDogUnit and cSetTriggerSignal will be bitmasks that you combine to form the command (set the trigger signal and the unit for the watchdog counter). What that command means in practice can be inferred, but you're safer referring to the documentation for the watchdog circuitry itself.
And, on the off chance that you don't know what a watchdog circuit is for: it's a simple circuit that resets your system if you don't kick it often enough (with another command), usually by activating the reset pin of whatever processor you're using.
It's a way to detect badly behaving software automatically and return a device to a known initial state, subscribing to the theory that it's better to reboot than to continue executing badly.

That's a bitwise or.
It is used here to combine flags.

x | y is generally used with plain old data (POD) types in C/C++. It means bitwise OR.
e.g.
char x = 0x1, y = 0x2;
x | y ==> 0x3
[Note: operator | can be overloaded for class/struct according to your need.]

| is the bitwise or. It turns a bit on in the result (1 instead of 0) if the corresponding bit is on in one OR the other of the two integers.
|| is the logical or. It returns true if one OR the other operand is true.

OK, here's why you use a bitwise or, or see them used, in this sort of situation.
Often times, those variables are flags that are used to pack multiple pieces of data into one char.
If cSetWatchDogUnit and cSetTriggerSignal have non-overlapping bits (imagine cSetWatchDogUnit = 1 << 0 and cSetTriggerSignal = 1 << 1), you can check later to see whether they are set with a bitwise and, as in this contrived example:
if (cData & cSetWatchDogUnit)
    /* do something */;
if (cData & cSetTriggerSignal)
    /* do something else */;
The whole time, both of these flags can be packed into, and passed around in, a single char. That way you don't end up passing an array of bools, you can add new constants (cSetSomeOtherDamnfoolThing = 1 << 2), and you can refer to the flags as variables in your code.
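To make that contrived example compile, here is a tiny self-contained sketch under the same assumption (cSetWatchDogUnit = 1 << 0, cSetTriggerSignal = 1 << 1); the constant values are invented for illustration and are not taken from the ITE8712 datasheet:
#include <cstdio>

const char cSetWatchDogUnit  = 1 << 0;  // 0000 0001 (illustrative value)
const char cSetTriggerSignal = 1 << 1;  // 0000 0010 (illustrative value)

void HandleFlags(char cData)
{
    if (cData & cSetWatchDogUnit)
        std::puts("watchdog unit flag set");
    if (cData & cSetTriggerSignal)
        std::puts("trigger signal flag set");
}

int main()
{
    HandleFlags(cSetWatchDogUnit | cSetTriggerSignal);  // both lines print
    return 0;
}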

Related

fastest way to convert int8 to int7

I have a function that takes an int8_t val and converts it to "int7_t".
// Bit [7] reserved
// Bits [6:0] = signed -64 to +63 offset value
// user who calls this function will use it correctly (-64 to +63)
uint8_t func_int7_t(int8_t val)
{
    uint8_t val_6 = val & 0b01111111;
    if (val & 0x80)
        val_6 |= 0x40;
    //...
    //do stuff...
    return val_6;
}
What is the best and fastest way to convert the int8 to int7? Did I do it efficiently and quickly, or is there a better way?
The target is ARM Cortex M0+ if that matters
UPDATE:
After reading the different answers I can say the question was asked wrongly (or my code in the question is what gave others the wrong idea). My intention was to convert an int8 to an int7.
So it can be done by doing nothing, because:
8bit:
63 = 0011 1111
62 = 0011 1110
0 = 0000 0000
-1 = 1111 1111
-2 = 1111 1110
-63 = 1100 0001
-64 = 1100 0000
7bit:
63 = 011 1111
62 = 011 1110
0 = 000 0000
-1 = 111 1111
-2 = 111 1110
-63 = 100 0001
-64 = 100 0000
The fastest way is probably:
uint8_t val_7 = (val & 0x3f) | ((val >> 1) & 0x40);
val & 0x3f keeps the 6 lower bits (truncation), and ((val >> 1) & 0x40) moves the sign bit from position 8 down to position 7.
The advantage of not using an if is shorter code (you could even use a conditional expression) and branch-free code.
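A quick way to sanity-check that one-liner against the 7-bit table above (a throwaway test of mine, not part of the answer; note that val >> 1 on a negative int8_t relies on the compiler using an arithmetic right shift, which is implementation-defined but what common compilers do):
#include <cstdint>
#include <cstdio>

int main()
{
    for (int v = -64; v <= 63; ++v) {
        int8_t  val   = static_cast<int8_t>(v);
        uint8_t val_7 = (val & 0x3f) | ((val >> 1) & 0x40);  // the one-liner above
        std::printf("%4d -> 0x%02x\n", v, val_7);            // e.g. -64 -> 0x40, 63 -> 0x3f
    }
    return 0;
}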
To clear the reserved bit, just
return val & 0x7f;
To leave the reserved bit exactly as it was in the input, nothing needs to be done:
return val;
and the low 7 bits will contain the values in [-64, 63], because in two's complement a narrowing conversion is a simple truncation: the value stays the same. That's what happens for an assignment like (int8_t)some_int_value.
There's no such thing as 0bX1100001; there is no "undefined bit" in machine language. That state only exists in hardware, like the high-Z state or the undefined state in Verilog and other hardware description languages.
Use a bit-field to narrow the value and let the compiler choose whatever sequence of shifts and/or masks is most efficient for that on your platform.
inline uint8_t to7bit(int8_t x)
{
    struct { uint8_t x : 7; } s;
    return s.x = x;
}
If you are not concerned about what happens to out-of-range values, then
return val & 0x7f;
is enough. This correctly handles values in the range -64 <= val <= 63.
You haven't said how you want to handle out-of-range values, so I have nothing to say about that.
Updated to add: The question has been updated to stipulate that the function will never be called with out-of-range values, so this method qualifies unambiguously as "best and fastest".
The user who calls this function knows they should pass data from -64 to +63.
So not considering any other values, the really fastest thing you can do is not doing anything at all!
You have a 7-bit value stored in eight bits. Any value within the specified range will have bit 7 and bit 6 equal, and when you process the 7-bit value you just ignore the MSB (of the 8-bit value), no matter whether it is set or not, e.g.:
for (unsigned int bit = 0x40; bit; bit >>= 1)  // NOT: 0x80!
    std::cout << ((value & bit) ? 1 : 0);
The other way round is more critical: whenever you receive these seven bits via some communication channel, you need to do manual sign extension to eight (or more) bits to be able to use that value correctly.
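A minimal sketch of that manual sign extension (my own illustration of the point above, not code from the answer): take the raw 7-bit value, look at its bit 6, and replicate it into bit 7 before treating the byte as signed.
#include <cstdint>

int8_t from7bit(uint8_t raw)
{
    raw &= 0x7f;              // keep only the 7 data bits
    if (raw & 0x40)           // sign bit of the 7-bit value set?
        raw |= 0x80;          // replicate it into bit 7
    return static_cast<int8_t>(raw);  // relies on the usual two's-complement conversion
}
For example, from7bit(0x7f) gives -1, from7bit(0x40) gives -64, and from7bit(0x3f) gives 63.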

C++ Compressing size of integer down to 2 bits?

I am doing a little game physics networking project right now, and I am trying to optimize the packets I am sending using this guide:
https://gafferongames.com/post/snapshot_compression/
In the "Optimize Quaternions" section it says:
Don’t always drop the same component due to numerical precision issues. Instead, find the component with the largest absolute value and ENCODE its index using two bits [0,3] (0=x, 1=y, 2=z, 3=w), then send the index of the largest component and the smallest three components over the network
Now my question is, how do I encode an integer down to 2 bits... or have I misunderstood the task?
I know very little about compressing data, but reducing a 4 byte integer (32 bits) down to ONLY 2 bits seems a bit insane to me. Is that even possible, or have I completely misunderstood everything?
EDIT:
Here is some code of what I have so far:
void HavNetConnection::sendBodyPacket(HavNetBodyPacket bp)
{
    RakNet::BitStream bsOut;
    bsOut.Write((RakNet::MessageID)ID_BODY_PACKET);

    float maxAbs = std::abs(bp.rotation(0));
    int maxIndex = 0;
    for (int i = 1; i < 4; i++)
    {
        float rotAbs = std::abs(bp.rotation(i));
        if (rotAbs > maxAbs) {
            maxAbs = rotAbs;
            maxIndex = i;
        }
    }

    bsOut.Write(bp.position(0));
    bsOut.Write(bp.position(1));
    bsOut.Write(bp.position(2));
    bsOut.Write(bp.linearVelocity(0));
    bsOut.Write(bp.linearVelocity(1));
    bsOut.Write(bp.linearVelocity(2));
    bsOut.Write(bp.rotation(0));
    bsOut.Write(bp.rotation(1));
    bsOut.Write(bp.rotation(2));
    bsOut.Write(bp.rotation(3));
    bsOut.Write(bp.bodyId.toRawInt(bp.bodyId));
    bsOut.Write(bp.stepCount);

    // Send body packets over UDP (UNRELIABLE), priority could be low.
    m_peer->Send(&bsOut, MEDIUM_PRIORITY, UNRELIABLE,
                 0, RakNet::UNASSIGNED_SYSTEM_ADDRESS, true);
}
The simplest solution to your problem is to use bitfields:
// working type (use your existing Quaternion implementation instead)
struct Quaternion {
    float w, x, y, z;
    Quaternion(float w_ = 1.0f, float x_ = 0.0f, float y_ = 0.0f, float z_ = 0.0f)
        : w(w_), x(x_), y(y_), z(z_) {}
};

struct PacketQuaternion
{
    enum LargestElement {
        W = 0, X = 1, Y = 2, Z = 3,
    };
    LargestElement le : 2;              // 2 bits
    signed int i1 : 9, i2 : 9, i3 : 9;  // 9 bits each

    PacketQuaternion() : le(W), i1(0), i2(0), i3(0) {}

    operator Quaternion() const {  // convert packet quaternion to regular quaternion
        const float s = 1.0f / float(1 << 8);  // scale int to [-1, 1]; you could also scale to [-sqrt(.5), sqrt(.5)]
        const float f1 = s * i1, f2 = s * i2, f3 = s * i3;
        const float f0 = std::sqrt(1.0f - f1 * f1 - f2 * f2 - f3 * f3);
        switch (le) {
            case W: return Quaternion(f0, f1, f2, f3);
            case X: return Quaternion(f1, f0, f2, f3);
            case Y: return Quaternion(f1, f2, f0, f3);
            case Z: return Quaternion(f1, f2, f3, f0);
        }
        return Quaternion();  // default, can't happen
    }
};
If you have a look at the assembler code this generates, you will see a bit of shifting to extract le and i1 to i3 -- essentially the same code you could write manually as well.
Your PacketQuaternion structure will always occupy a whole number of bytes, so (on any non-exotic platform) you will still waste 3 bits (you could just use 10 bits per integer field here, unless you have another use for those bits).
I left out the code to convert from regular quaternion to PacketQuaternion, but that should be relatively simple as well.
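A rough sketch of that missing direction, reusing the Quaternion and PacketQuaternion types above (this is my own guess at it, not tested production code): find the largest component, flip the signs if it is negative so the sqrt in the decoder reconstructs it correctly, and quantize the other three with the same 1 << 8 scale.
#include <cmath>  // std::abs, std::sqrt

PacketQuaternion pack(const Quaternion& q)
{
    PacketQuaternion p;
    const float v[4] = { q.w, q.x, q.y, q.z };

    int largest = 0;
    for (int i = 1; i < 4; ++i)
        if (std::abs(v[i]) > std::abs(v[largest]))
            largest = i;

    // The decoder recovers the dropped component as sqrt(1 - ...) >= 0,
    // so negate everything if the largest component is negative
    // (q and -q represent the same rotation).
    const float sign = (v[largest] < 0.0f) ? -1.0f : 1.0f;
    const float s    = float(1 << 8);  // same scale as in operator Quaternion()

    float out[3];
    int   n = 0;
    for (int i = 0; i < 4; ++i)
        if (i != largest)
            out[n++] = sign * v[i] * s;

    p.le = static_cast<PacketQuaternion::LargestElement>(largest);
    p.i1 = static_cast<int>(out[0]);
    p.i2 = static_cast<int>(out[1]);
    p.i3 = static_cast<int>(out[2]);
    return p;
}
Round-tripping through operator Quaternion() should then give back the original rotation to within the 9-bit quantization error.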
Generally (as always when networking is involved), be extra careful that data is converted correctly in all directions, especially, if different architectures or different compilers are involved!
Also, as others have noted, make sure that network bandwidth indeed is a bottleneck before doing aggressive optimization here.
I'm guessing they want you to fit the 2 bits into some value you are already sending that doesn't need all of the available bits, or to pack several small bit fields into a single int for transmission.
You can do things like this:
// these are going to be used as 2 bit fields,
// so we can only go to 3.
enum addresses
{
    x = 0,  // 00
    y = 1,  // 01
    z = 2,  // 10
    w = 3   // 11
};
int val_to_send;
// set the value to send, and shift it 2 bits left.
val_to_send = 1234;
// bit pattern: 0000 0100 1101 0010
// bit shift left by 2 bits
val_to_send = val_to_send << 2;
// bit pattern: 0001 0011 0100 1000
// set the address to the last 2 bits.
// this value is address w (bit pattern 11) for example...
val_to_send |= w;
// bit pattern: 0001 0011 0100 1011
send_value(val_to_send);
On the receive end:
receive_value(&rx_value);
// pick off the address by masking with the low 2 bits
address = rx_value & 0x3;
// address now = 3 (w)
// bit shift right to restore the value
rx_value = rx_value >> 2;
// rx_value = 1234 again.
You can 'pack' bits this way, any number of bits at a time.
int address_list;
// set address to w (11)
address_list = w;
// 0000 0011
// bit shift left by 2 bits
address_list = address_list << 2;
// 0000 1100
// now add address x (00)
address_list |= x;
// 0000 1100
// bit shift left 2 more bits
address_list = address_list << 2;
// 0011 0000
// add the address y (01)
address_list |= y;
// 0011 0001
// bit shift left 2 more bits
address_list = address_list << 2;
// 1100 0100
// add the address z. (10)
address_list |= z;
// 1100 0110
// w x y z are now in the lower byte of 'address_list'
This packs 4 addresses into the lower byte of 'address_list'.
You just have to do the unpacking on the other end.
This has some implementation details to work out. You only have 30 bits now for the value, not 32. If the data is a signed int, you have more work to do to avoid shifting the sign bit out to the left, etc.
But, fundamentally, this is how you can stuff bit patterns into data that you are sending.
Obviously this assumes that sending is more expensive than the work of packing bits into bytes and ints, etc. This is often the case, especially where low baud rates are involved, as in serial ports.
There are a lot of possible understandings and misunderstandings in play here.
ttemple addressed your technical problem of sending less than a byte.
I want to reiterate the more theoretical points.
This is not done
You originally misunderstood the quoted passage.
We do not use two bits to say "not sending 2121387",
but to say "not sending the z-component".
That these four possibilities match exactly what two bits can encode should be easy to see.
This is impossible
If you want to send a 32 bit integer which might take any of the 2^32 possible values,
you need at least 32 bits.
Since n bits can represent at most 2^n distinct states,
any smaller number of bits simply will not suffice.
This is kinda possible
Beyond your actual question:
When we relax the requirement that we will always use 2 bits
and have sufficiently strong assumptions
on the probability distribution of the values,
we can get the expected value of the number of bits down.
Ideas like this are used all over the place in the linked article.
Example
Let c be some integer that is 0 almost all the time (97%, say)
and can take any value the rest of the time (3%).
Then we can take one bit to say whether “c is zero”
and need no further bits most of the time.
In the cases where c is not zero,
we spend another 32 bits to encode it regularly.
In total we need 0.97*1+0.03*(1+32) = 1.96 bits on average.
But we need 33 bits sometimes,
which makes this compatible with my earlier assertion of impossibility.
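As a concrete illustration of that example (a toy sketch of mine; the BitSink type and its methods are invented for the demonstration, not part of any real networking API):
#include <cstdint>
#include <vector>

struct BitSink {
    std::vector<bool> bits;
    void put(bool b) { bits.push_back(b); }
    void put_u32(uint32_t v) { for (int i = 31; i >= 0; --i) put((v >> i) & 1u); }
};

// One bit says "c is zero"; only when it is not do we spend 32 more bits.
// With c == 0 about 97% of the time this averages roughly 1.96 bits per value.
void encodeMostlyZero(BitSink& out, uint32_t c)
{
    out.put(c == 0);
    if (c != 0)
        out.put_u32(c);
}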
This is complicated
Depending on your background (in math, bit-fiddling, etc.) this might just seem like an enormous, unknowable piece of black magic.
(It isn't. You can learn this stuff.)
You do not seem completely lost, and you appear to be a quick learner, but I agree with Remy Lebeau that you seem to be out of your depth.
Do you really need to do this?
Or are you optimizing prematurely?
If it runs well enough, let it run.
Concentrate on the important stuff.

How are state flags represented and how bitwise OR is used to work with bit flags?

If we open a file for reading, we may define one or more state flags,
for example: ios::out as well as ios::out | ios::app
I read about the bitwise OR, and how it "merges" the two bit sets,
for example: 1010 | 0111 = 1111
now that being said, I do not understand how it works "behind the scenes" when we use a method like ifstream.open(filename, stateflagA | stateflagB | stateflagC) and so on.
Can someone elaborate more on the inner workings of these state flags and their memory representation?
EDIT:
To give more emphasis to what I am trying to understand (if it helps):
I would assume that the open method could receive one or more state flags as separate arguments in its signature, rather than delimited by a bitwise OR. So I want to understand how the bitwise OR works on these state flags to produce a different final state when several flags are combined, and how, as a result, it lets me use a single argument for either one state flag or a whole set of them.
ie:
ifstream.open(filename, stateflagA | stateflagB | stateflagC)
and NOT
ifstream.open(filename, stateflagA , stateflagB , stateflagC)
Bit flags are represented in the same exact way all integral values are represented. What makes them "flags" is your program's interpretation of their values.
Bit flags are used for compact representation of small sets of values. Each value is assigned a bit index. All integer numbers with the bit at that index set to 1 are interpreted as sets that include the corresponding member.
Consider a small example: let's say we need to represent a set of three colors - red, green, and blue. We assign red an index of 0, green an index of 1, and blue an index of 2. This corresponds to the following representation:
BINARY   DECIMAL   COLOR
------   -------   -----
  001       1      Red
  010       2      Green
  100       4      Blue
Note that each flag is a power of two. That's the property of binary numbers that have a single bit set to 1. Here is how it would look in C++:
enum Color {
    Red   = 1 << 0,
    Green = 1 << 1,
    Blue  = 1 << 2
};
1 << n is the standard way of constructing an integer with a single bit at position n set to 1.
With this representation in hand we can construct sets that have any combination of these colors:
BINARY   DECIMAL   COLOR
------   -------   -----
  001       1      Red
  010       2      Green
  011       3      Red+Green
  100       4      Blue
  101       5      Blue+Red
  110       6      Blue+Green
  111       7      Blue+Green+Red
Here is when bit operations come into play: we can use them to construct sets and check membership in a single operation.
For example, we can construct a set of Red and Blue with an | like this:
Color purple = Red | Blue;
Behind the scenes, all this does is assign 5 to purple, because 4 | 1 is 5. But since your program interprets 5 as a set of two colors, the meaning of that 5 is not the same as that of an integer 5 that represents, say, the number of things in a bag.
You can check if a set has a particular member by applying & to it:
if (purple & Red) {
    // condition is true: Red is in the set
}
if (purple & Green) {
    // condition is false: Green is not in the set
}
The flags used by the I/O library work in the same way. Some of the flags are combined to produce bit masks. They work in the same way as individual flags, but instead of letting you test membership they let you find a set intersection in a single bit operation:
Color yellow = Blue | Green;
Color purple = Red | Blue;
Color common = yellow & purple; // common == Blue
If we take the GNU libstdc++ implementation and look at how these are actually implemented, we find:
enum _Ios_Openmode
{
    _S_app              = 1L << 0,
    _S_ate              = 1L << 1,
    _S_bin              = 1L << 2,
    _S_in               = 1L << 3,
    _S_out              = 1L << 4,
    _S_trunc            = 1L << 5,
    _S_ios_openmode_end = 1L << 16
};
These values are then used as this:
typedef _Ios_Openmode openmode;

/// Seek to end before each write.
static const openmode app = _S_app;

/// Open and seek to end immediately after opening.
static const openmode ate = _S_ate;

/// Perform input and output in binary mode (as opposed to text mode).
/// This is probably not what you think it is; see
/// http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt11ch27s02.html
static const openmode binary = _S_bin;

/// Open for input. Default for @c ifstream and fstream.
static const openmode in = _S_in;

/// Open for output. Default for @c ofstream and fstream.
static const openmode out = _S_out;

/// Truncate an existing stream when opening. Default for @c ofstream.
static const openmode trunc = _S_trunc;
Since the values are chosen as 1 << n, they are exactly one "bit" each, which allows us to combine them using | (or), as well as other similar operations.
So app in binary is 0000 0001 and bin is 0000 0100, so if we do app | bin as the mode for opening the file, we get 0000 0101. The internals of the fstream implementation can then use
if (mode & bin) ... do stuff for binary file ...
and
if (mode & app) ... do stuff for appending to the file ...
Other C++ library implementations may choose a different set of bit values for each flag, but will use a similar system.
"Behind the scene", in the memory of the computer every information is ultimately coded as a group of bits. Your CPU is wired to perform basic binary algebra operations (AND, OR, XOR, NOT) on such elementary information.
C++ operators | & and ^ just give direct access to these CPU operations on any integral types. For flag management it's wise to use an unsigned integral type such as unsigned int or unsigned char.
An express overview:
The trick is that every flag corresponds to a fixed bit. This is usually done with power-of-2 constants (ex: 1, 2, 4, 8, which are binary coded as 0001, 0010, 0100 and 1000).
The constants are named because that's clearer than using literals (ex: const unsigned FlagA = 1, FlagB = 2, FlagC = 4;).
Binary AND x & y keeps a bit at 1 only if it is 1 in both x and y. So it is used to reset flags, by "anding" with a value in which the flag's bit is 0. For example, x & FlagB resets all flags except flag B.
Binary OR x | y sets a bit to 1 if it is 1 in either x or y. So it is used to set flags. Example: x | FlagB sets flag B.
A binary AND is also a quick way to check whether a flag is set: (x & FlagB) is true if and only if flag B was set.
EDIT: About your specific question on the ifstream::open() parameters: it's a design choice, for convenience. As you can see there are 6 flags that influence the way the file is handled (some of them used very rarely). So instead of providing each of the 6 flags every time, the standard decided that you'd provide them combined in a single openmode. A variable number of arguments would not have been a good alternative, as the called function would have to know how many arguments you'd provided.
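For completeness, here is a minimal sketch of the combined-flag call from your question in action (using an ofstream, since ios::app only makes sense for writing; the file name is just an example):
#include <fstream>

int main()
{
    std::ofstream log;
    // ios::out | ios::app is a single openmode value with both bits set.
    log.open("log.txt", std::ios::out | std::ios::app);
    if (log.is_open())
        log << "appended line\n";
    return 0;
}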

Swapping pair of bits in a Byte

I have an arbitrary 8-bit binary number e.g., 11101101
I have to swap all the pair of bits like:
Before swapping: 11-10-11-01
After swapping: 11-01-11-10
I was asked this in an interview !
In pseudo-code:
x = ((x & 0b10101010) >> 1) | ((x & 0b01010101) << 1)
It works by handling the low bits and high bits of each bit-pair separately and then combining the result:
The expression x & 0b10101010 extracts the high bit from each pair, and then >> 1 shifts it to the low bit position.
Similarly the expression (x & 0b01010101) << 1 extracts the low bit from each pair and shifts it to the high bit position.
The two parts are then combined using bitwise-OR.
Since not all languages allow you to write binary literals directly, you could write them in for example hexadecimal:
Binary Hexadecimal Decimal
0b10101010 0xaa 170
0b01010101 0x55 85
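For reference, a minimal C++ version of the pseudocode above using those hexadecimal masks (a small sketch; with the question's 11101101 it produces 11011110):
#include <cstdint>

uint8_t swap_bit_pairs(uint8_t x)
{
    return static_cast<uint8_t>(((x & 0xAA) >> 1) | ((x & 0x55) << 1));
}
// swap_bit_pairs(0xED) == 0xDE, i.e. 1110 1101 -> 1101 1110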
Make two bit masks, one containing all the even bits and one containing the uneven bits (10101010 and 01010101).
Use bitwise-and to filter the input into two numbers, one having all the even bits zeroed, the other having all the uneven bits zeroed.
Shift the number that contains only even bits one bit to the left, and the other one one bit to the right
Use bitwise-or to combine them back together.
Example for 16 bits:
short swap_bit_pair(short i) {
    return ((i & 0xAAAA) >> 1) | ((i & 0x5555) << 1);
}
b = ((a & 170) >> 1) | ((a & 85) << 1)
The most elegant and flexible solution is, as others have said, to apply a 'comb' mask to the even and the odd bits separately and then, having shifted them left and right respectively by one place, combine them using bitwise or.
One other solution you may want to think about takes advantage of the relatively small size of your datatype. You can create a look-up table of 256 values which is statically initialised to the value you want as output for each input:
const unsigned char lookup[] = { 0x00, 0x02, 0x01, 0x03, 0x08, 0x0A, 0x09, 0x0B, /* ... */ };
Each value is placed in the array to represent the transformation of the index. So if you then do this:
unsigned char out = lookup[ 0xAA ];
out will contain 0x55
This is more cumbersome and less flexible than the first approach (what if you want to move from 8 bits to 16?) but does have the advantage that it will be measurably faster if you perform a large number of these operations.
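If typing out all 256 entries is unappealing, you could also generate the table once at start-up (this generator is my addition, not part of the original answer; it simply applies the mask-and-shift formula from above):
#include <array>

std::array<unsigned char, 256> make_swap_lookup()
{
    std::array<unsigned char, 256> table{};
    for (int i = 0; i < 256; ++i)
        table[i] = static_cast<unsigned char>(((i & 0xAA) >> 1) | ((i & 0x55) << 1));
    return table;
}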
Suppose your number is num.
First, find the even-position bits:
num & 0xAAAAAAAA
Second, find the odd-position bits:
num & 0x55555555
Third, swap them: move each even-position bit to the odd position and each odd-position bit to the even position:
Even = (num & 0xAAAAAAAA) >> 1
Odd  = (num & 0x55555555) << 1
Last step:
result = Even | Odd
Print result.
I would first code it 'longhand', that is to say in several obvious, explicit stages, and use that to validate that the unit tests I had in place were functioning correctly, and then only move to more esoteric bit-manipulation solutions if I had a need for performance (and that extra performance was delivered by said improvements).
Code for people first, computers second.

How masking works

I am new to C, and I am debugging some source code. However, I am confused by this code snippet.
When the values are assigned to the structure member, I think it is some kind of masking. But I'm not sure, and if it is masking, how does masking work here?
Many thanks,
#define MSGINFO_ENABLE              0x01
#define MIME_ENABLE                 0x02
#define FASTSTART_CODERS_IN_OFFERED 0x04
#define TRANSADDR_ENABLE            0x08

typedef struct {
    unsigned int msginfo_mask;  /* added in version 0x0101 */
} VIRTBOARD;

VIRTBOARD VirtBoard;

/* Not sure I understand what is happening here. */
VirtBoard.msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE |
                         FASTSTART_CODERS_IN_OFFERED | TRANSADDR_ENABLE;
Ok, in plain English:
The hexadecimal numbers 0x01, 0x02, 0x04, 0x08 were each selected BECAUSE each one is encoded as a different single bit being set in binary. None of the bit masks overlap, so each one can be read and set without being affected by the other bits. Adding the following comments to your code makes it clearer what's happening:
#define MSGINFO_ENABLE              0x01  // => 0001
#define MIME_ENABLE                 0x02  // => 0010
#define FASTSTART_CODERS_IN_OFFERED 0x04  // => 0100
#define TRANSADDR_ENABLE            0x08  // => 1000
Now adding a comment before the other line shows the result:
// VirtBoard.msginfo_mask |= 0001
// VirtBoard.msginfo_mask |= 0010
// VirtBoard.msginfo_mask |= 0100
// VirtBoard.msginfo_mask |= 1000
// ----
// VirtBoard.msginfo_mask == 1111
VirtBoard.msginfo_mask = MSGINFO_ENABLE |
                         MIME_ENABLE |
                         FASTSTART_CODERS_IN_OFFERED |
                         TRANSADDR_ENABLE;
While the comments on the assignment make it clear what's going on, once you understand what's happening, the comments kinda defeat the purpose of symbolically defining constants.
It might help to think of it this way (values shown in binary):
MSGINFO_ENABLE = 0001
MIME_ENABLE = 0010
FASTSTART_CODERS_IN_OFFERED = 0100
TRANSADDR_ENABLE = 1000
So...
1001 is TRANSADDR_ENABLE and MSGINFO_ENABLE
or
1011 is everything but FASTSTART_CODERS_IN_OFFERED
Does that help at all? The | notation is C syntax to set the correct bit:
int something = 0;
something = MSGINFO_ENABLE | TRANSADDR_ENABLE;
is the syntax to set only those 2 bits.
Your variable, msginfo_mask, when represented as a binary number (1's and 0's) is used as a "mask" by setting certain bits to 1 (using bit-wise OR) or clearing certain bits to 0 (using bit-wise AND). Your code snippet sets certain bits to 1 while leaving others unchanged. Masking is comparable to how a painter masks off areas that they do not want to be painted.
If you look at the #defines at the top of your code, you will notice that each number represents a single bit when written out in binary:
#define MSGINFO_ENABLE 0x01 <-- 0001 in binary
#define MIME_ENABLE 0x02 <-- 0010 in binary
#define FASTSTART_CODERS_IN_OFFERED 0x04 <-- 0100 in binary
#define TRANSADDR_ENABLE 0x08 <-- 1000 in binary
Setting bits is done by using the OR function. If you OR a bit with 1, the result is always going to be a 1. If you OR a bit with 0, the original value will not be changed.
So, when you see:
msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE |
FASTSTART_CODERS_IN_OFFERED | TRANSADDR_ENABLE;
What you are saying is "take the value of msginfo_mask and OR it with (binary) 0001, 0010, 0100, and 1000." This is the same thing as saying "set bit 0, bit 1, bit 2, and bit 3."
The binary operator '|' is the bitwise-or operator; for each bit in the two input words, if either bit is a 1, then the corresponding bit in the result is a 1:
0001 | 0010 = 0011
The '|' operator is typically used to set individual bits in a word, such as in the code snippet you posted.
The binary operator '&' is the bitwise-and operator; for each bit in the two input words, if both bits are 1, then the corresponding bit in the result is a 1:
0101 & 0110 = 0100
The '&' operator can be used to test if a bit is set. For example, to test if the MSGINFO_ENABLE bit is set, you'd do something like
if ((VirtBoard.msginfo_mask & MSGINFO_ENABLE) != 0)
{
    /* MSGINFO_ENABLE bit is set, do something interesting */
}
The expression
VirtBoard.msginfo_mask & MSGINFO_ENABLE
will evaluate to 1 (0x0001) if the MSGINFO_ENABLE bit was set, and 0 otherwise.
The unary operator '~' is the bitwise-not operator; for each bit in the input word, the corresponding bit in the result is set to the opposite value:
~ 0001 = 1110
You can use the '~' operator together with the '&' operator to clear an individual bit. For example, if we wanted to clear the MSGINFO_ENABLE bit, we'd do something like
VirtBoard.msginfo_mask = VirtBoard.msginfo_mask & ~MSGINFO_ENABLE;
which can be shortened to
VirtBoard.msginfo_mask &= ~MSGINFO_ENABLE;
Negating MSGINFO_ENABLE gives us 1111111111111110 (assuming a 16-bit unsigned int); since the leading bits are all 1, and-ing this against VirtBoard.msginfo_mask preserves any bits that are already set, i.e., 0000000000001111 & 1111111111111110 = 0000000000001110.
If we wanted to clear both the MSGINFO_ENABLE and TRANSADDR_ENABLE bits, we'd combine all the operators like so:
VirtBoard.msginfo_mask &= ~(MSGINFO_ENABLE | TRANSADDR_ENABLE);
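A quick self-contained check of that combined clear (my own verification snippet, using the #define values from the question): starting from all four bits set, clearing MSGINFO_ENABLE and TRANSADDR_ENABLE leaves 0x06.
#include <stdio.h>

#define MSGINFO_ENABLE              0x01
#define MIME_ENABLE                 0x02
#define FASTSTART_CODERS_IN_OFFERED 0x04
#define TRANSADDR_ENABLE            0x08

int main(void)
{
    unsigned int mask = MSGINFO_ENABLE | MIME_ENABLE |
                        FASTSTART_CODERS_IN_OFFERED | TRANSADDR_ENABLE;  /* 0x0F */
    mask &= ~(MSGINFO_ENABLE | TRANSADDR_ENABLE);                        /* clear 0x01 and 0x08 */
    printf("0x%02X\n", mask);                                            /* prints 0x06 */
    return 0;
}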
The programmer is setting the mask to a certain bit value. In this case:
VirtBoard.msginfo_mask = 0x01 | 0x02 | 0x04 | 0x08 = 0x0F
Assuming the code handles messages, when a message comes in they may compare it to this mask to see what is enabled in the message:
if ((newMsg & VirtBoard.msginfo_mask) == 0x0F)
{
    // do something related to msginfo, mime, faststart and transaddr being enabled
}
Notice the "&" operator to do the mask comparisons.
The other part is that the masks "or"ed together are probably being used as switches to enable/disable certain functionality. In the example you have written, it looks like possibly output at different levels or for different parts of the code.
The defined masks can be used to check whether the functionality is enabled or disabled. For example:
VirtBoard.msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE;

if (VirtBoard.msginfo_mask & MSGINFO_ENABLE)
{
    printf("Messages enabled\n");
}
if (VirtBoard.msginfo_mask & TRANSADDR_ENABLE)
{
    printf("Transaddress enabled\n");
}
In the first if, since the MSGINFO_ENABLE mask was "or"ed in and assigned to the variable, applying an "and" of the variable with the MSGINFO_ENABLE mask returns a non-zero value, indicating true, so the printf statement is executed.
In the case of the second if, since TRANSADDR_ENABLE was not "or"ed into the variable, an "and" of the variable with the TRANSADDR_ENABLE mask returns zero, so no message is printed.