I cannot understand what this line does:
fBuffer[fByteIndex] += 1 << (fBitIndex - 1);
where:
unsigned char fBuffer[32];
int fBitIndex;
and:
for ( int i = 0; i < 32; i++ )
fBuffer[i] = 0;
fBitIndex = 8;
What does << do there?
<< is the left-shift operator, and assuming fBitIndex is 8 the code
fBuffer[fByteIndex] += 1 << (fBitIndex - 1);
is equivalent to
fBuffer[fByteIndex] += 128;
Why? Because a left-shift means you shift the bits of the value "left"; in your case, 00000001 (1) is shifted left 7 times, becoming 10000000 (128).
It's called bit shifting. Each byte is composed of 8 bits (each 0 or 1). Shifting the bits one place to the left multiplies the number by 2; shifting one place to the right divides it by 2 (discarding the remainder).
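For illustration, a quick example of that rule (the values here are just ones I picked):
#include <iostream>

int main() {
    unsigned int x = 5;                 // 00000101
    std::cout << (x << 1) << "\n";      // 10 (00001010): multiplied by 2
    std::cout << (x >> 1) << "\n";      // 2  (00000010): divided by 2, remainder dropped
}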
It's a bit shift. The decimal number 1 is represented in binary (showing just the lower 8 bits) as
00000001
If I have
int i=1;
int j=i<<1;
then I'll be taking that number and shifting it one place to the left. I'll then have the binary
00000010
which in decimal is the value 2. If instead I had
int j=i<<6;
then I'd get
01000000
which in decimal would be 64.
It sets a bit in fBuffer[fByteIndex].
1 << N is just bit addressing.
<< is the shift operator: 1<<0 is 0b1, 1<<1 is 0b10, and 1<<6 is 0b1000000.
So based on fByteIndex and fBitIndex, the proper bit is set to 1.
Because + is used here rather than |, an overflow into the next bit can occur if the addressed bit is already 1, but I think that in your code this is not the case and the addressed bit is 0 before the assignment.
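To illustrate why |= is the safer idiom than += for setting a bit, a small sketch (not the question's actual code):
unsigned char buffer = 0;
buffer += 1 << 7;   // 10000000: fine the first time
buffer += 1 << 7;   // the already-set bit is added again, carries out, and buffer wraps to 0
buffer = 0;
buffer |= 1 << 7;   // 10000000
buffer |= 1 << 7;   // still 10000000: OR-ing an already-set bit is harmless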
Related
I have an array of size 32. Each element in the array is a 0 or 1. I want to be able to store them into the bit positions of a 32-bit integer, and perform bit-wise operations on it. How can I do this?
Also, if I have two arrays of size 32, and I want to do bitwise operations on the elements with the same index all at once, could I do this?
op_and[31:0] = ip_1[31:0] & ip_2 [31:0];
I am using the gcc compiler.
You can use the or operator | and bitshifting ( << and >> ).
uint32_t myInt = 0;   // uint32_t needs <stdint.h> (or <cstdint> in C++)
for( int i = 0; i < 32; i++ )
{
    myInt |= ( (uint32_t)arrayOf32Ints[i] << i );
}
This example assumes that the values of arrayOf32Ints are either 0 or 1 as per your question.
If they may contain any nonzero "true" value rather than exactly 1, you should normalize that explicitly (the usual idiom is !!, which does yield 0 or 1; the ternary below just makes it explicit).
The line would then be
myInt |= ( (arrayOf32Ints[i] ? 1u : 0u) << i );
In the case you want to set individual bits on or off, you can do:
myInt |= (1<<3);  // Sets bit 3 by OR-ing: shifting 1 up 3 bits gives 8 (binary 1000)
myInt |= 8;       // Same thing: OR-ing 8 (binary 1000) with myInt sets bit 3
myInt ^= (1<<5);  // Toggles bit 5 by XOR-ing it with myInt (XOR flips the bits that are set in the mask)
myInt ^= 32;      // Same thing: toggles bit 5 (32 is 100000 in binary)
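Putting the loop together with the bitwise AND from your question, a self-contained sketch could look like this (the array names and sample values are just placeholders I picked):
#include <cstdint>
#include <iostream>

// Pack an array of 32 zeros/ones into the bits of a 32-bit integer.
uint32_t pack(const int bits[32]) {
    uint32_t result = 0;
    for (int i = 0; i < 32; i++)
        result |= (uint32_t)(bits[i] ? 1 : 0) << i;
    return result;
}

int main() {
    int ip_1[32] = {1, 0, 1, 1};               // remaining elements are zero
    int ip_2[32] = {1, 1, 0, 1};
    uint32_t op_and = pack(ip_1) & pack(ip_2); // all 32 AND operations at once
    std::cout << std::hex << op_and << "\n";   // prints 9 (bits 0 and 3 set)
}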
I'm having a little trouble grabbing n bits from a byte.
I have an unsigned integer. Let's say our number in hex is 0x2A, which is 42 in decimal. In binary it looks like this: 0010 1010. How would I grab the first 5 bits which are 00101 and the next 3 bits which are 010, and place them into separate integers?
If anyone could help me that would be great! I know how to extract from one byte which is to simply do
int x = (number >> (8*n)) & 0xff; // n being the byte number
which I saw on another post on stack overflow, but I wasn't sure on how to get separate bits out of the byte. If anyone could help me out, that'd be great! Thanks!
Integers are represented inside a machine as a sequence of bits; fortunately for us humans, programming languages provide a mechanism to show us these numbers in decimal (or hexadecimal), but that does not alter their internal representation.
You should review the bitwise operators &, |, ^ and ~ as well as the shift operators << and >>, which will help you understand how to solve problems like this.
The last 3 bits of the integer are:
x & 0x7
The five bits above them (bits 3 through 7) are:
x >> 3 // all but the last three bits
& 0x1F // the last five bits.
"grabbing" parts of an integer type in C works like this:
You shift the bits you want to the lowest position.
You use & to mask the bits you want - ones mean "copy this bit", zeros mean "ignore".
So, in your example, let's say we have a number int x = 42;
first 5 bits:
(x >> 3) & ((1 << 5)-1);
or
(x >> 3) & 31;
To fetch the lower three bits:
(x >> 0) & ((1 << 3)-1)
or:
x & 7;
Say you want hi bits from the top, and lo bits from the bottom. (5 and 3 in your example)
top = (n >> lo) & ((1 << hi) - 1)
bottom = n & ((1 << lo) - 1)
Explanation:
For the top, first get rid of the lower bits (shift right), then mask the remaining bits with an "all ones" mask (if you have a binary number like 0010000, subtracting one gives 0001111 - the same number of 1s as you had 0s in the original number).
For the bottom it's the same, you just don't need the initial shift.
top = (42 >> 3) & ((1 << 5) - 1) = 5 & (32 - 1) = 5 = 00101b
bottom = 42 & ((1 << 3) - 1) = 42 & (8 - 1) = 2 = 010b
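Wrapped into a small helper function (the name extractBits is just mine for illustration), the same expressions for 0x2A look like this:
#include <iostream>

// Extract `count` bits starting `shift` bits up from the least significant bit.
unsigned extractBits(unsigned value, unsigned shift, unsigned count) {
    return (value >> shift) & ((1u << count) - 1);
}

int main() {
    unsigned number = 0x2A;                          // 0010 1010
    std::cout << extractBits(number, 3, 5) << "\n";  // top five bits of the byte: 00101 = 5
    std::cout << extractBits(number, 0, 3) << "\n";  // bottom three bits: 010 = 2
}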
You could use bitfields for this. Bitfields are special structs where you can specify variables in bits.
typedef struct {
    unsigned char a:5;
    unsigned char b:3;
} my_bit_t;

unsigned char c = 0x2A;        // 42 decimal, the value from the question
my_bit_t *n = (my_bit_t *)&c;  // note the cast; which field maps to which bits is implementation-defined
int first = n->a;
int sec = n->b;
Bit fields are described in more detail at http://www.cs.cf.ac.uk/Dave/C/node13.html#SECTION001320000000000000000
The charm of bit fields is that you do not have to deal with shift operators etc. The notation is quite easy. As always when manipulating bits, there is a portability issue (the layout of bit fields is implementation-defined).
int x = (number >> 3) & 0x1f;
will give you an integer whose last 5 bits are bits 3 through 7 of number, with zeros in the other bits.
Similarly,
int y = number & 0x7;
will give you an integer whose last 3 bits are the last 3 bits of number, with zeros in the rest.
just get rid of the 8* in your code.
int input = 42;
int high5 = input >> 3;            // top five bits: 00101 = 5
int low3 = input & (8 - 1);        // 8 = 2^3, bottom three bits: 010 = 2
bool isBit3On = input & (1 << 3);  // tests bit 3 (counting from 0)
Assuming I have a byte b with the binary value of 11111111
How do I for example read a 3 bit integer value starting at the second bit or write a four bit integer value starting at the fifth bit?
Some 2+ years after I asked this question, I'd like to explain it the way I would have wanted it explained back when I was still a complete newb, which should be most beneficial to people who want to understand the process.
First of all, forget the "11111111" example value, which is not really all that suited for the visual explanation of the process. So let the initial value be 10111011 (187 decimal) which will be a little more illustrative of the process.
1 - how to read a 3 bit value starting from the second bit:
    ___     <- those 3 bits
10111011
The value is 101, or 5 in decimal. There are 2 possible ways to get it:
mask and shift
In this approach, the needed bits are first masked with the value 00001110 (14 decimal), after which the result is shifted into place:
    ___
10111011 AND
00001110 =
00001010 >> 1 =
     ___
00000101
The expression for this would be: (value & 14) >> 1
shift and mask
This approach is similar, but the order of operations is reversed, meaning the original value is shifted and then masked with 00000111 (7) to only leave the last 3 bits:
    ___
10111011 >> 1 =
     ___
01011101 AND
00000111 =
00000101
The expression for this would be: (value >> 1) & 7
Both approaches involve the same amount of complexity, and therefore will not differ in performance.
2 - how to write a 3 bit value starting from the second bit:
In this case the initial value is known, and when that is true in code you may be able to set the known value to another known value in fewer operations. In reality, though, this is rarely the case; most of the time the code will know neither the initial value nor the one to be written.
This means that in order for the new value to be successfully "spliced" into the byte, the target bits must be set to zero, after which the shifted value is "spliced" in place. Zeroing the target bits is the first step:
    ___
10111011 AND
11110001 (241) =
10110001 (masked original value)
The second step is to shift the value we want to write into those 3 bits; say we want to change it from 101 (5) to 110 (6):
     ___
00000110 << 1 =
    ___
00001100 (shifted "splice" value)
The third and final step is to splice the masked original value with the shifted "splice" value:
10110001 OR
00001100 =
    ___
10111101
The expression for the whole process would be: (value & 241) | (6 << 1)
Bonus - how to generate the read and write masks:
Naturally, using a binary to decimal converter is far from elegant, especially in the case of 32 and 64 bit containers - decimal values get crazy big. It is possible to easily generate the masks with expressions, which the compiler can efficiently resolve during compilation:
read mask for "mask and shift": ((1 << fieldLength) - 1) << (fieldIndex - 1), assuming that the index at the first bit is 1 (not zero)
read mask for "shift and mask": (1 << fieldLength) - 1 (index does not play a role here since it is always shifted to the first bit
write mask : just invert the "mask and shift" mask expression with the ~ operator
How does it work (with the 3-bit field beginning at the second bit from the examples above)?
00000001 << 3
00001000 - 1
00000111 << 1
00001110 ~ (read mask)
11110001 (write mask)
The same examples apply to wider integers and arbitrary bit width and position of the fields, with the shift and mask values varying accordingly.
Also note that the examples assume an unsigned integer, which is what you want when using integers as a portable bit-field alternative (regular bit fields are in no way guaranteed by the standard to be portable): with unsigned values both left and right shifts pad with 0, which is not the case when right-shifting a signed integer.
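If it helps, here is how those expressions might combine into reusable helpers; the function names are mine, and indices are zero-based in this sketch:
#include <cstdint>

// Read `length` bits starting at bit `index` (bit 0 = least significant).
uint32_t readField(uint32_t value, unsigned index, unsigned length) {
    uint32_t mask = ((uint32_t)1 << length) - 1;   // the "shift and mask" read mask
    return (value >> index) & mask;
}

// Write `field` into `length` bits starting at bit `index`, leaving the rest untouched.
uint32_t writeField(uint32_t value, unsigned index, unsigned length, uint32_t field) {
    uint32_t mask = (((uint32_t)1 << length) - 1) << index;   // inverted, this is the write mask
    return (value & ~mask) | ((field << index) & mask);
}

// With the values from the examples above:
//   readField(187, 1, 3)     == 5    (reads 101 out of 10111011)
//   writeField(187, 1, 3, 6) == 189  (10111101, with 110 spliced in)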
Even easier:
Using this set of macros (but only in C++ since it relies on the generation of member functions):
#define GETMASK(index, size) ((((size_t)1 << (size)) - 1) << (index))
#define READFROM(data, index, size) (((data) & GETMASK((index), (size))) >> (index))
#define WRITETO(data, index, size, value) ((data) = (((data) & (~GETMASK((index), (size)))) | (((value) << (index)) & (GETMASK((index), (size))))))
#define FIELD(data, name, index, size) \
    inline decltype(data) name() const { return READFROM(data, index, size); } \
    inline void set_##name(decltype(data) value) { WRITETO(data, index, size, value); }
You could go for something as simple as:
struct A {
    unsigned bitData;
    FIELD(bitData, one, 0, 1)
    FIELD(bitData, two, 1, 2)
};
And have the bit fields implemented as properties you can easily access:
A a;
a.set_two(3);
cout << a.two();
Replace decltype with gcc's typeof pre-C++11.
You need to shift and mask the value, so for example...
If you want to read the first two bits, you just need to mask them off like so:
int value = input & 0x3;
If you want to offset it you need to shift right N bits and then mask off the bits you want:
int value = (input >> 1) & 0x3;
To read three bits like you asked in your question.
int value = (input >> 1) & 0x7;
Just use these and feel free:
#define BitVal(data,y)    ( ((data)>>(y)) & 1 )     /** Return Data.Y value   **/
#define SetBit(data,y)    ( (data) |= (1 << (y)) )  /** Set Data.Y to 1       **/
#define ClearBit(data,y)  ( (data) &= ~(1 << (y)) ) /** Clear Data.Y to 0     **/
#define TogleBit(data,y)  ( (data) ^= (1 << (y)) )  /** Toggle Data.Y value   **/
#define Togle(data)       ( (data) = ~(data) )      /** Toggle Data value     **/
for example:
uint8_t number = 0x05; //0b00000101
uint8_t bit_2 = BitVal(number,2); // bit_2 = 1
uint8_t bit_1 = BitVal(number,1); // bit_1 = 0
SetBit(number,1); // number = 0x07 => 0b00000111
ClearBit(number,2); // number = 0x03 => 0b00000011
You have to do a shift and mask (AND) operation.
Let b be any byte and p be the index (>= 0) of the bit from which you want to take n bits (>= 1).
First you have to shift right b by p times:
x = b >> p;
Second you have to mask the result with n ones:
mask = (1 << n) - 1;
y = x & mask;
You can put everything in a macro:
#define TAKE_N_BITS_FROM(b, p, n) (((b) >> (p)) & ((1 << (n)) - 1))
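For example, with the 10111011 value used in the answer above (my own usage sketch):
unsigned char b = 0xBB;                 // 10111011
int field = TAKE_N_BITS_FROM(b, 1, 3);  // 3 bits starting at the second bit: 101 -> 5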
"How do I for example read a 3 bit integer value starting at the second bit?"
int number = 0; // or whatever value you have
uint8_t val; // uint8_t is the smallest data type capable of holding 3 bits
val = (number & (1 << 2 | 1 << 3 | 1 << 4)) >> 2;
(I assumed that "second bit" is bit #2, i. e. the third bit really.)
To read the bits, use std::bitset
const int bits_in_byte = 8;
char myChar = 's';
cout << bitset<sizeof(myChar) * bits_in_byte>(myChar);
To write you need to use bit-wise operators such as & ^ | ~ << >>. Make sure to learn what they do.
For example, to get 00100100 you set a bit to 1 and shift it into position with the << operator. If you want to continue writing you just keep setting bits and shifting; it's very much like an old typewriter: you write, and shift the paper.
For 00100100: OR together a 1 shifted left 5 places (bit 5) and a 1 shifted left 2 places (bit 2):
const int bits_in_byte = 8;
char myChar = 0;
myChar = myChar | (0x1 << 5 | 0x1 << 2);
cout << bitset<sizeof(myChar) * bits_in_byte>(myChar);
int x = 0xFF; //your number - 11111111
How do I for example read a 3 bit integer value starting at the second bit
int y = (x & ( 0x7 << 2 )) >> 2; // 0x7 is 111; shift the mask 2 to the left to select the bits,
                                 // then shift back down to get the 3-bit value on its own
If you keep grabbing bits from your data, you might want to use a bitfield. You'll just have to set up a struct and load it with only ones and zeroes:
struct bitfield{
    unsigned int bit : 1;
};
struct bitfield *bitstream;
then later on load it like this (replacing char with int or whatever data you are loading):
long int i;
int j, k;
unsigned char c, d;

bitstream = malloc(sizeof(struct bitfield) * charstreamlength * sizeof(char));
for (i = 0; i < charstreamlength; i++){
    c = charstream[i];
    for (j = 0; j < sizeof(char) * 8; j++){
        d = c;
        d = d >> (sizeof(char) * 8 - j - 1); /* move bit (7-j) of c down to bit 0 */
        d = d << (sizeof(char) * 8 - 1);     /* push it to the top bit; higher bits are truncated */
        k = d;
        if (k == 0){
            bitstream[sizeof(char) * 8 * i + j].bit = 0;
        } else {
            bitstream[sizeof(char) * 8 * i + j].bit = 1;
        }
    }
}
Then access elements:
bitstream[bitpointer].bit=...
or
...=bitstream[bitpointer].bit
All of this assumes you are working on x86/x86-64, not ARM, since ARM can be big- or little-endian.
This question already has answers here:
How do I set, clear, and toggle a single bit?
(27 answers)
Closed 8 years ago.
We have an integer number
int x = 50;
in binary, it's
00110010
How can I change the fourth (4th) bit programmatically?
You can set the fourth bit of a number by OR-ing it with a value that is zero everywhere except in the fourth bit. This could be done as
x |= (1u << 3);
Similarly, you can clear the fourth bit by AND-ing it with a value that is one everywhere except in the fourth bit. For example:
x &= ~(1u << 3);
Finally, you can toggle the fourth bit by XOR-ing it with a value that is zero everywhere except in the fourth bit:
x ^= (1u << 3);
To see why this works, we need to look at two things:
What is the behavior of the << operator in this context?
What is the behavior of the AND, OR, and XOR operators here?
In all three of the above code snippets, we used the << operator to generate a value. The << operator is the bitwise shift-left operator, which takes a value and then shifts all of its bits some number of steps to the left. In your case, I used
1u << 3
to take the value 1 (which has binary representation 1) and to then shift all its bits over three spots, filling in the missing values with 0. This creates the binary value 1000, which has a bit set in the fourth bit.
Now, why does
x |= (1u << 3);
set the fourth bit of the number? This has to do with how the OR operator works. The |= operator is like += or *= except for bitwise OR - it's equivalent to
x = x | (1u << 3);
So why does OR-ing x with the binary value 1000 set its fourth bit? This has to do with the way that OR is defined:
0 | 0 == 0
0 | 1 == 1
1 | 0 == 1
1 | 1 == 1
More importantly, though, we can rewrite this more compactly as
x | 0 == x
x | 1 == 1
This is an extremely important fact, because it means that OR-ing any bit with zero doesn't change the bit's value, while OR-ing any bit with 1 always sets that bit to one. This means that when we write
x |= (1u << 3);
since (1u << 3) is a value that is zero everywhere except in the fourth bit, the bitwise OR leaves all the bits of x unchanged except for the fourth bit, which is then set to one. More generally, OR-ing a number with a value that is a series of zeros and ones will preserve all the values where the bits are zero and set all of the values where the bits are one.
Now, let's look at
x &= ~(1u << 3);
This uses the bitwise complement operator ~, which takes a number and flips all of its bits. If we assume that integers are two bytes (just for simplicity), this means that the actual encoding of (1u << 3) is
0000000000001000
When we take the complement of this, we get the number
1111111111110111
Now, let's see what happens when we bitwise AND two values together. The AND operator has this interesting truth table:
0 & 0 == 0
0 & 1 == 0
1 & 0 == 0
1 & 1 == 1
Or, more compactly:
x & 0 == 0
x & 1 == x
Notice that this means that if we AND two numbers together, the resulting value will be such that all of the bits AND-ed with zero are set to zero, while all other bits are preserved. This means that if we AND with
~(1u << 3)
we are AND-ing with
1111111111110111
So by our above table, this means "keep all of the bits, except for the fourth bit, as-is, and then change the fourth bit to be zero."
More generally, if you want to clear a set of bits, create a number that is one everywhere you want to keep the bits unchanged and zero where you want to clear the bits.
Finally, let's see why
x ^= (1u << 3)
Flips the fourth bit of the number. This is because the binary XOR operator has this truth table:
0 ^ 0 == 0
0 ^ 1 == 1
1 ^ 0 == 1
1 ^ 1 == 0
Notice that
x ^ 0 == x
x ^ 1 == ~x
Where ~x is the opposite of x; it's 0 for 1 and 1 for 0. This means that if we XOR x with the value (1u << 3), we're XOR-ing it with
0000000000001000
So this means "keep all the bits but the fourth bit set as is, but flip the fourth bit." More generally, if you want to flip some number of bits, XOR the value with a number that has zero where you want to keep the bits intact and one where you want to flip this bits.
Hope this helps!
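As a quick sanity check, here is a worked example (my own) applying those three operations to the x = 50 from the question:
unsigned int x = 50;    // 00110010
x |= (1u << 3);         // set the fourth bit:   00111010 = 58
x &= ~(1u << 3);        // clear it again:       00110010 = 50
x ^= (1u << 3);         // toggle it back on:    00111010 = 58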
You can always use std::bitset which makes modifying bits easy.
Or you can use bit manipulations (assuming you mean the 4th bit counting from one; don't subtract the 1 if you count from zero). Note that I use 1U just to guarantee that the whole operation happens on unsigned numbers:
To set: x |= (1U << (4 - 1));
To clear: x &= ~(1U << (4 - 1));
To toggle: x ^= (1U << (4 - 1));
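And since std::bitset was mentioned, here is a short sketch of the same three operations with it (bit positions count from zero):
#include <bitset>
#include <iostream>

int main() {
    std::bitset<8> b(50);    // 00110010
    b.set(3);                // 00111010
    b.reset(3);              // 00110010
    b.flip(3);               // 00111010
    std::cout << b << "\n";  // prints 00111010
}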
To set the fourth bit, OR with 00001000 (binary).
To clear the fourth bit, AND with 11110111 (binary).
To toggle the fourth bit, XOR with 00001000 (binary).
Examples:
00110010 OR 00001000 = 00111010
00110010 AND 11110111 = 00110010
00110010 XOR 00001000 = 00111010
Simple. Since you have (or whatever value you happen to have)
int x = 50;
To set the 4th bit (from the right) programmatically,
int y = x | 0x00000008;
Because the 0x prefix means the number is written in hexadecimal form.
So 0x0 = 0000 in binary, and 0x8 = 1000 in binary.
That explains the answer.
Try one of these functions in C to change the nth bit:
char bitfield;
// start at 0th position
void chang_n_bit(int n, int value)
{
    bitfield = (bitfield | (1 << n)) & (~((1 << n) ^ (value << n)));
}

void chang_n_bit(int n, int value)
{
    bitfield = (bitfield | (1 << n)) & ((value << n) | ((~0) ^ (1 << n)));
}

void chang_n_bit(int n, int value)
{
    if (value)
        bitfield |= 1 << n;
    else
        bitfield &= ~0 ^ (1 << n);
}

char print_n_bit(int n)
{
    return (bitfield & (1 << n)) ? 1 : 0;
}
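A brief usage sketch, assuming the global bitfield and one of the chang_n_bit variants above are in scope (printf needs <stdio.h>):
bitfield = 0;
chang_n_bit(3, 1);               // set bit 3   -> bitfield is 00001000
printf("%d\n", print_n_bit(3));  // prints 1
chang_n_bit(3, 0);               // clear bit 3 -> bitfield is 00000000
printf("%d\n", print_n_bit(3));  // prints 0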
You can use binary OR and AND to set and clear the fourth bit.
To set the fourth bit on x, you would use x |= 1<<3;, 1<<3 being a left shift of 0b0001 by three bits producing 0b1000.
To clear the fourth bit on x, you would use x &= ~(1<<3);, a binary AND between 0b00110010 (x) and (effectively) 0b11110111, which keeps every bit of x except the one in position four, thus clearing it.
What does >> do in this situation?
int n = 500;
unsigned int max = n>>4;
cout << max;
It prints out 31.
What did it do to 500 to get it to 31?
Bit shifted!
Original binary of 500:
111110100
Shifted 4
000011111 which is 31!
Original:  111110100
1st shift: 011111010
2nd shift: 001111101
3rd shift: 000111110
4th shift: 000011111 which equals 31.
This is equivalent to doing integer division by 16.
500/16 = 31
500/2^4 = 31
Some facts pulled from here: http://www.cs.umd.edu/class/spring2003/cmsc311/Notes/BitOp/bitshift.html (because rambling from my own head is unproductive... these folks state it much more cleanly than I could)
Shifting left using << causes 0's to be shifted from the least significant end (the right side), and causes bits to fall off from the most significant end (the left side).
Shifting right using >> causes 0's to be shifted from the most significant end (the left side), and causes bits to fall off from the least significant end (the right side) if the number is unsigned.
Bitshifting doesn't change the value of the variable being shifted. Instead, a temporary value is created with the bitshifted result.
500 got bit shifted to the right 4 times.
For non-negative values, x >> y mathematically means x / 2^y.
Hence 500 / 2^4 which is equal to 500 / 16. In integer division the result is 31.
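A quick check you can compile yourself (this sketch assumes non-negative values, where shift and division agree):
#include <iostream>

int main() {
    int n = 500;
    std::cout << (n >> 4) << " " << (n / 16) << "\n"; // prints "31 31"
}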
It divided 500 by 16 using integer division.
>> is a right-shift operator, which shifted the bits of the binary representation of n to the right 4 times. This is equivalent to dividing n by 2 four times, i.e. dividing it by 2^4 = 16. This is integer division, so the decimal part got truncated.
It shifts the bits of 500 to the right by 4 bit positions, tossing out the rightmost bits as it does so.
500 = 111110100 (binary)
111110100 >> 4 = 11111 = 31
111110100 is 500 in binary. Move the bits to the right and you are left with 11111 which is 31 in binary.
500 in binary is [1 1111 0100]
(4 + 16 + 32 + 64 + 128 + 256)
Shift that to the right 4 times and you lose the lowest 4 bits, resulting in:
[1 1111]
which is 1 + 2 + 4 + 8 + 16 = 31
You can also examine it in Hex:
500(decimal) is 0x1F4(hex).
Then shift to the right 4 bits, or one nibble:
0x1F == 31(dec).
The >> and << operators are shifting operators.
http://www-numi.fnal.gov/offline_software/srt_public_context/WebDocs/Companion/cxx_crib/shift.html
Of course they may be overloaded just to confuse you a little more!
C++ has nice classes to animate what is going on at the bit level
#include <bitset>
#include <iostream>
int main() {
    std::bitset<16> s(500);
    for (int i = 0; i < 4; i++) {
        std::cout << s << std::endl;
        s >>= 1;
    }
    std::cout << s
              << " (dec " << s.to_ulong() << ")"
              << std::endl;
}