How would I go about accessing the individual bits inside a C++ type, for example a char or any other C++ type?
If you want to access bit N:
Get: (INPUT >> N) & 1;
Set: INPUT |= 1 << N;
Unset: INPUT &= ~(1 << N);
Toggle: INPUT ^= 1 << N;
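For example, a minimal sketch applying these expressions to an unsigned char (the value and bit position here are just for illustration):
#include <cstdio>

int main() {
    unsigned char input = 0x05;     // 0000 0101: bits 0 and 2 are set
    int n = 1;

    printf("%d\n", (input >> n) & 1);   // get bit 1    -> prints 0
    input |= 1u << n;                   // set bit 1    -> 0000 0111
    printf("%d\n", (input >> n) & 1);   // prints 1
    input &= ~(1u << n);                // unset bit 1  -> 0000 0101
    input ^= 1u << n;                   // toggle bit 1 -> 0000 0111
}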
You would use the binary operators | (OR), & (AND) and ^ (XOR) to manipulate them. To set the third bit of variable a, for instance, you would write:
a = a | 0x4;
// C++14 binary literal
a = a | 0b0100;
Note that 4's binary representation is 0100.
That is very easy.
Let's say you need to access the individual bits of an integer. Create a mask like this:
int mask = 1;
Now, ANDing your number with this mask gives the value of the zeroth bit. In order to access the bit at the i-th position (indexes start from zero), just AND with (mask << i).
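A minimal sketch of that idea, printing every bit of a number from the zeroth bit upwards (the variable names are just illustrative):
#include <iostream>

int main() {
    int number = 50;    // 110010 in binary
    int mask = 1;
    for (int i = 0; i < 8; ++i) {
        // (mask << i) isolates bit i; the AND is non-zero exactly when that bit is set.
        std::cout << ((number & (mask << i)) ? 1 : 0);
    }
    std::cout << '\n';  // prints 01001100 (bit 0 first)
}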
If you want to look at the nth bit in a number you can use: number & (1 << n).
Essentially, (1 << n), which is basically 2^n (because you shift the 1 bit in ...0001 to the left n times, and each left shift means multiply by 2), creates a number that is 0 everywhere except for a 1 at the nth position.
You then & that with number. This returns a number which is either 0 everywhere, or has a 1 at the nth position (essentially an integer which is either zero or non-zero).
Example:
2nd bit in 4, 4 & (1 << 2)
  0100
& 0100
______
  0100 = 4, which is non-zero
Therefore the 2nd bit in 4 is a 1 (remember the bit indexes start from zero).
It will also work with chars because they are also numbers in C and C++.
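For instance, a tiny sketch with a char (assuming ASCII, where 'A' is 0100 0001; the chosen bits are arbitrary):
#include <iostream>

int main() {
    char c = 'A';   // 0100 0001 in ASCII
    std::cout << ((c & (1 << 0)) ? 1 : 0)   // bit 0 -> 1
              << ((c & (1 << 5)) ? 1 : 0)   // bit 5 -> 0
              << ((c & (1 << 6)) ? 1 : 0)   // bit 6 -> 1
              << '\n';                      // prints 101
}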
I came across a piece of code that I cannot understand.
for (unsigned int i = (x & 0b1); i < x; i += 2)
{
    // body
}
Here, x is from 0 to 5.
What is meant by 0b1? And what would be the answers for, e.g., (0 & 0b1), (4 & 0b1), etc.?
0b... is a binary number, just like 0x... is hex and 0... is octal.
Thus 0b1 is the same as 1.
1b0 is illegal; the first character of these prefixes must always be 0.
As previous answers said, it is the binary representation of the integer number 1, but they don't seem to have fully answered your question. This has a lot of layers so I'll briefly explain each.
In this context, the ampersand is working as a bitwise AND operator. i & 0b1 is (sometimes) a faster way of checking an integer's parity than i % 2: it evaluates to 1 for odd numbers and to 0 for even ones.
Say you have int x = 5 and you'd like to check if it's even using bitwise AND.
In binary, 5 would be represented as 0101. That final 1 actually represents the number 1, and in binary integers it is only present in odd numbers. Let's apply the bitwise AND operator to 5 and 1:
0101
0001
&----
0001
The operator is checking each column, and if both rows are 1, that column of the result will be 1 – otherwise, it will be 0. So, the result (converted back to base10) is 1. Now let's try with an even number. 4 = 0100.
0100
0001
&----
0000
The result is now equal to 0. These rules apply to every single integer no matter its size.
The higher-level layer here is that in C (before C99 added _Bool) there is no dedicated boolean datatype, so booleans are represented as integers that are either 0 (false) or any other value (true). This allows for some tricky shorthand: the conditional if (x & 0b1) will only run if x is odd, because odd & 0b1 always equals 1 (true), but even & 0b1 always equals 0 (false).
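To connect this back to the loop in the question, here is a small sketch (my own illustration) showing the starting value of i and which numbers the loop visits for x from 0 to 5:
#include <iostream>

int main() {
    for (unsigned int x = 0; x <= 5; ++x) {
        std::cout << "x = " << x << " (x & 0b1 = " << (x & 0b1) << "): i =";
        // i starts at 0 for even x and at 1 for odd x, so the loop visits
        // every number below x that has the same parity as x.
        for (unsigned int i = (x & 0b1); i < x; i += 2)
            std::cout << ' ' << i;
        std::cout << '\n';
    }
}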
On input I am given multiple uint32_t numbers, which are in fact binary strings of length 32. I want to produce a binary string (a.k.a. another uint32_t number) where the n-th bit is set to 1 if the n-th bit is the same in every given input string.
Here is a simple example with 4-bit strings (just a simpler instance of the same problem):
input: 0011, 0101, 0110
output: 1000
because: the first bit is the same in every input string, therefore the first bit in the output will be set to 1, and the 2nd, 3rd and 4th bits will be set to 0 because they have different values.
What is the best way to produce the output from the given input? I know that I need to use bitwise operators but I don't know which of them and in which order.
uint32_t getResult( const vector< uint32_t > & data ){
    //todo
}
You want the bits where all the source bits are 1 and the bits where all the source bits are 0. Just AND the source values together, separately AND the complements of the source values, then OR the two results.
uint32_t getResult( const vector< uint32_t > & data ){
    uint32_t bitsSet = ~0;
    uint32_t bitsClear = ~0;
    for (uint32_t d : data) {
        bitsSet &= d;
        bitsClear &= ~d;
    }
    return bitsSet | bitsClear;
}
First of all you need to loop over the vector, of course.
Then we can use XOR of the current element and the next element. Save the result.
For the next iteration, do the same: XOR of current element with the next element. But then bitwise OR with the saved result of the previous iteration. Save this result. Then continue with this until you have iterated over all (minus one) elements.
The saved result is the complement of what you want.
Taking your example numbers (0011, 0101 and 0110) then the first iteration we have 0011 ^ 0101 which results in 0110. The next iteration we have 0101 ^ 0110 which results in 0011. Bitwise OR with the previous result (0110 | 0011) gives 0111. End of loop, and bitwise complement give the result 1000.
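A minimal sketch of that approach, filling in the question's skeleton (my own code; for an empty or single-element vector it returns ~0, since every bit is then trivially "the same"):
#include <cstdint>
#include <vector>
using namespace std;

uint32_t getResult( const vector< uint32_t > & data ){
    uint32_t diff = 0;                      // bits that differ somewhere
    for (size_t i = 0; i + 1 < data.size(); ++i)
        diff |= data[i] ^ data[i + 1];      // XOR adjacent elements, OR the results together
    return ~diff;                           // complement: bits that are the same in every element
}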
I have a variable mask of type std::bitset<8> as
std::string bit_string = "00101100";
std::bitset<8> mask(bit_string);
Is there an efficient way to quickly mask out the corresponding (three) bits of another given std::bitset<8> input and move all those masked-out bits to the rightmost positions? E.g., if input is 10100101, then I would like to quickly get 00000101, which equals 5 in decimal. Then I can use vect[5] to quickly index the 6th element of vect, which is a std::vector<int> of size 8.
Or rather, can I quickly get the decimal value of the masked out bits (with their relative positions retained)? Or I can't?
I guess in my case the advantage that can be taken is the bitset<8> mask I have. And I'm supposed to manipulate it somehow to do the work fast.
I see it like this (added by Spektre):
mask    00101100b
input   10100101b
-----------------
&       ??1?01??b
>>           101b
                5
First things first: you can't avoid O(n) complexity, with n being the number of mask bits, if your mask is only available as binary at run time. However, if your mask is constant for multiple inputs, you can preprocess it into a series of m mask&shift transformations, where m is less than or equal to the number of 1-bits in your mask. If you know the mask at compile time, you can even preconstruct the transformations and then you get your O(m).
To apply this idea, you need to create a sub-mask for each group of 1-bits in your mask and combine it with shift information. The shift is constructed by counting the number of mask zeroes to the right of the current group.
Example:
mask = 00101100b
// first group of ones
submask1 = 00001100b
// number of zeroes to the right of the group
subshift1 = 2
submask2 = 00100000b
subshift2 = 3
// Apply:
input = 10100101b
part1 = (input & submask1) >> subshift1   // = 00000001b
part2 = (input & submask2) >> subshift2   // = 00000100b
result = part1 + part2                    // = 00000101b
If you make the sub-transforms into an array, you can easily apply them in a loop.
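A sketch of what this could look like for the 8-bit case (the struct and function names here are my own, not from the original answer):
#include <cstdint>
#include <vector>

struct SubTransform {
    uint8_t mask;   // one contiguous group of 1-bits taken from the full mask
    int     shift;  // number of 0 mask bits to the right of that group
};

// Preprocess the mask once into (sub-mask, shift) pairs.
std::vector<SubTransform> buildTransforms(uint8_t mask) {
    std::vector<SubTransform> out;
    int zerosSeen = 0;
    int bit = 0;
    while (bit < 8) {
        if (!(mask & (1u << bit))) { ++zerosSeen; ++bit; continue; }
        uint8_t group = 0;
        while (bit < 8 && (mask & (1u << bit))) {
            group |= uint8_t(1u << bit);
            ++bit;
        }
        out.push_back({group, zerosSeen});
    }
    return out;
}

// Apply all transforms to one input value.
uint8_t extractBits(uint8_t input, const std::vector<SubTransform>& ts) {
    uint8_t result = 0;
    for (const auto& t : ts)
        result |= uint8_t((input & t.mask) >> t.shift);
    return result;
}
With mask = 00101100b this yields the two (sub-mask, shift) pairs from the example, and extractBits(10100101b, ...) returns 5.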
Your domain is small enough that you can brute-force this. Trivially, an unsigned char LUT[256][256] can store all possible outcomes in just 64 KB.
I understand that the mask has at most 3 bits, so you can restrict the lookup table size in that dimension to [225] (the highest such mask is 11100000, i.e. 224). And since f(input, mask) == f(input&mask, mask) you can in fact reduce the LUT to unsigned char[225][225].
A further size reduction is possible by realizing that the highest mask is 11100000, but you can just test the lowest bit of the mask. When mask is even, f(input, mask) == f((input&mask)/2, mask/2). The highest odd mask is only 11000001 or 193. This reduces your LUT further, to [194][194].
A more space-efficient algorithm splits input and mask into 2 nibbles (4 bits). You now have a very simple LUT[16][16] in which you look up the high and low parts:
int himask = mask >> 4, lomask = mask & 0xF;
int hiinp = input >> 4, loinp = input & 0xF;
unsigned char hiout = LUT[himask][hiinp];
unsigned char loout = LUT[lomask][loinp];
return hiout << bitsIn[lomask] | loout;
This shows that you need another table, char bitsIn[16].
Taking the example :
mask 0010 1100b
input 1010 0101b
himask = 0010
hiinp = 1010
hiout = 0001
lomask = 1100
loinp = 0101
loout = 0001
bitsIn[lomask 1100] = 2
return (0001 << 2) | (0001)
Note that this generalizes fairly easily to more than 8 bits:
int bitsSoFar = 0;
int retval = 0;
while (mask) {    // Until we've looked up all bits.
    int mask4 = mask & 0xF;
    int input4 = input & 0xF;
    retval |= LUT[mask4][input4] << bitsSoFar;
    bitsSoFar += bitsIn[mask4];
    mask >>= 4;
    input >>= 4;
}
Since this LUT only holds nibbles, you could reduce it to 16*16/2 bytes, but I suspect that's not worth the effort.
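For completeness, a sketch of how LUT and bitsIn could be filled in (this initialization code is my own illustration, not part of the original answer):
unsigned char LUT[16][16];   // LUT[mask4][input4] = masked bits of input4, compacted to the right
unsigned char bitsIn[16];    // bitsIn[mask4] = number of set bits in mask4

void buildTables() {
    for (int m = 0; m < 16; ++m) {
        bitsIn[m] = 0;
        for (int b = 0; b < 4; ++b)
            if (m & (1 << b)) ++bitsIn[m];
        for (int i = 0; i < 16; ++i) {
            int out = 0, outPos = 0;
            for (int b = 0; b < 4; ++b) {
                if (m & (1 << b)) {                     // bit b is selected by the mask
                    if (i & (1 << b)) out |= 1 << outPos;
                    ++outPos;                           // next free position in the result
                }
            }
            LUT[m][i] = (unsigned char)out;
        }
    }
}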
I see it like this:
mask    00101100b
input   10100101b
-----------------
&       ??1?01??b
>>           101b
                5
I would create a bit weight table for mask by scanning its bits from the LSB, assigning weights 1, 2, 4, 8, 16, ... to the set bits and leaving zero for the rest, so:
          MSB          LSB
--------------------------
mask      0 0 1 0 1 1 0 0  bin
--------------------------
weight    0 0 4 0 2 1 0 0  dec (A)
input     1 0 1 0 0 1 0 1  bin (B)
--------------------------
(A.B)     0*1 + 0*0 + 4*1 + 0*0 + 2*0 + 1*1 + 0*0 + 0*1   // this is a dot product ...
        = 4 + 1
--------------------------
        = 5 dec
--------------------------
Sorry I do not code in Python at all so no code ... I still think using integral types for this directly would be better but that is probably just my low level C++ thinking ...
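Since the answer above gives no code, here is a hedged C++ sketch of the weight-table idea for the 8-bit example (function and variable names are mine):
#include <array>
#include <cstdint>
#include <iostream>

// Assign weights 1, 2, 4, ... to the set bits of the mask, scanning from the LSB;
// cleared mask bits keep weight 0.
std::array<uint32_t, 8> buildWeights(uint8_t mask) {
    std::array<uint32_t, 8> w{};
    uint32_t next = 1;
    for (int b = 0; b < 8; ++b)
        if (mask & (1u << b)) { w[b] = next; next <<= 1; }
    return w;
}

// "Dot product" of the weight table with the input bits.
uint32_t extract(uint8_t input, const std::array<uint32_t, 8>& w) {
    uint32_t result = 0;
    for (int b = 0; b < 8; ++b)
        if (input & (1u << b)) result += w[b];
    return result;
}

int main() {
    auto w = buildWeights(0b00101100);
    std::cout << extract(0b10100101, w) << '\n';    // prints 5
}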
I have the following code
int n = 50;
while (n) {                                //1
    if (n & 1) cout << "1" << endl;        //2
    // right shift the number so n will become 0 eventually and the loop will terminate
    n >>= 1;                               //3
}
When we use bitwise AND with 1 (& 1) on a number, we get back the same number.
Now my question is: how does C++ evaluate the following expression: n & 1?
Since:
n = 50
In binary form 50 is: 110010
So I expected that ANDing it with 1 gives: 110010 AND 1 = 110010.
But in C++ the expression at (2) seems to evaluate like this:
instead of ANDing the whole sequence of bits (110010) with 1,
it seems to evaluate only the rightmost bit. In my example:
n = 50, 110010, n & 1 ==> 0 AND 1 instead of 110010 AND 1.
Is there a reason that C++ treats the bitwise AND like this? My guess would be that it has to do with the compiler?
When we use bitwise AND with 1 (& 1) on a number, we get back the same number.
No we don't. We get back the number consisting of the bits that are set in both the original number and in 1. Since only the lowest bit of 1 is set, the result is the lowest bit of the original number.
Now my question is: how does C++ evaluate the following expression: n & 1?
If n is 50, then in binary:
n: 110010
1: 000001
n&1: 000000 // no bits set in both
If n is 51, then in binary:
n: 110011
1: 000001
n&1: 000001 // one bit set in both
From Wikipedia:
The bitwise AND operator is a single ampersand: &. It is just a representation of AND which does its work on the bits of the operands rather than the truth value of the operands. Bitwise binary AND does the logical AND (as shown in the table above) of the bits in each position of a number in its binary form.
In your example 110010 & 1, the 1 is considered as 000001, and then each bit is ANDed and you get the result. In fact, I use this method, 1 & number, to check for even and odd numbers. This is how:
if (1 & num)
    printf("it is odd");
else
    printf("it is even");
This is how it works: suppose you have an 8-bit number. Now, the 8-bit notation of 1 will be 00000001.
If I now AND each bit, then for the seven higher bits I will get 0, because 0 & anything is 0. Now, the last bit of 1 is 1. So, if my number also has its last bit set to 1, then 1 & 1 = 1, and if its last bit is 0, then 1 & 0 = 0.
When will the last bit in my number be 1, and when 0? When converting to decimal form, the last bit is multiplied by 2^0, and 2^0 = 1. If that bit is 1 it contributes 1 and the number is odd, and if it is 0 it contributes nothing and the number is even.
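To tie this back to the question's loop, a small sketch (my own illustration) that collects each n & 1 bit while shifting, so it prints the full binary representation of 50 rather than only the 1s:
#include <iostream>
#include <string>
using namespace std;

int main() {
    int n = 50;
    string bits;
    while (n) {
        bits = char('0' + (n & 1)) + bits;  // n & 1 is the current lowest bit
        n >>= 1;                            // move the next bit into the lowest position
    }
    cout << bits << endl;                   // prints 110010
}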
I'm looking over some code that shifts a 32-bit unsigned integer, but I'm finding it difficult to understand the use of the shift operators in combination with the OR operator.
What exactly is this actually doing when it's executed?
Just as an example what would be the outcome if the variables were set to:
_arg1 = 1
_arg2 = 20
((_arg1 << _arg2) | (_arg1 >> (32 - _arg2)))
If there are easier values for the variables to explain this feel free to change them to better suit your needs.
Many thanks in advance.
<< and >> are shift operators that move the bits of the variable by arg2 positions...
So if you have:
11001010 << 2
you would move the entire bit string two positions to the left. This actually pushes the first two (from the left) bits out of the variable, and from the right side some 0's are pushed in. So:
11001010 << 2 = 00101000
In your question, you are making a rotate shift.
So let's assume arg1 = 11001010 (in binary) and arg2 = 2, and as we use an 8-bit integer, we replace the 32 with an 8.
((11001010 << 2) | (11001010 >> (8 - 2)))
= (00101000 | 00000011)
And now the | combines the two bit strings into one, so if a bit is set in either string, it will also be set in the result:
(00101000 | 00000011) == 00101011
So what is your piece of code actually doing? It is called a circular or rotate shift... The bits it pushes out on one side, it actually pushes in from the other side. So you can rotate the bitpattern around, instead of just shifting one side into nothing and adding zeros on the other side.
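A minimal sketch of that rotation as a function (rotl32 is a hypothetical helper name; note that a rotation count of 0 would make the right-shift count 32, which is undefined behaviour, so it is excluded here):
#include <cstdint>
#include <iostream>

// Rotate a 32-bit value left by n positions, assuming 0 < n < 32.
uint32_t rotl32(uint32_t value, unsigned n) {
    return (value << n) | (value >> (32 - n));
}

int main() {
    std::cout << rotl32(1, 20) << '\n';                        // prints 1048576, i.e. 1 << 20
    std::cout << std::hex << rotl32(0xC2034000u, 2) << '\n';   // prints 80d0003
}
In C++20 you can instead use std::rotl from <bit>, which also handles a rotation count of 0.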
This will apply a circular shift.
Let's take an example:
uint32_t _arg1 = 0xc2034000;
uint32_t _arg2 = 2;
so binary
_arg1 = 11000010000000110100000000000000
_arg2 = 00000000000000000000000000000010
(_arg1 << _arg2)
shifts _arg1 to the left by _arg2 bits; in effect this drops the leftmost _arg2 bits of _arg1 and fills in 0s on the right
_arg1 = 11000010000000110100000000000000
^^
result is 00001000000011010000000000000000
_arg1 >> (32 - _arg2)
shifts _arg1 to the right by (32 - _arg2) bits; in effect this drops the rightmost (32 - _arg2) bits of _arg1 and fills in 0s on the left, so what remains are exactly those _arg2 bits that were dropped previously
_arg1 = 11000010000000110100000000000000
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
result is 00000000000000000000000000000011
((_arg1 << _arg2) | (_arg1 >> (32 - _arg2)))
This combines both results by applying a bitwise OR operation on each pair of corresponding bits in the left-hand and right-hand arguments.
((_arg1 << _arg2) | (_arg1 >> (32 - _arg2)))
result is 00001000000011010000000000000011
std::bitset can be really helpful for playing with bit sets as you can easily visualize what is going on in your code.
http://ideone.com/jgRvWo
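For example, a small visualization sketch along those lines (this snippet is my own illustration, not the code behind the link):
#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    uint32_t arg1 = 0xC2034000u;
    uint32_t arg2 = 2;

    std::cout << "arg1              " << std::bitset<32>(arg1) << '\n';
    std::cout << "arg1 << arg2      " << std::bitset<32>(arg1 << arg2) << '\n';
    std::cout << "arg1 >> (32-arg2) " << std::bitset<32>(arg1 >> (32 - arg2)) << '\n';
    std::cout << "rotated           "
              << std::bitset<32>((arg1 << arg2) | (arg1 >> (32 - arg2))) << '\n';
}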
Break the statement into small parts. First take (_arg1 << _arg2), which means
1 << 20. Now calculate the value: if 1 is shifted towards the left by 20 positions, the binary value looks like 100000000000000000000 (each time the 1 is shifted towards the left, a 0 is filled in behind it), which results in 1048576.
Now take the other part, (_arg1 >> (32 - _arg2)), which means
1 >> 12. Since 1 is the smallest positive value, moving it towards the right by even one position makes it 0.
Our two parts are done; now we do a bitwise OR of the two results we have got, i.e.
100000000000000000000 | 0
which we will lay out like
100000000000000000000
                    0
Now we OR each pair of bits (vertically), which gives us 100000000000000000000, whose decimal value is 1048576.
You can refer to the following table for bitwise OR operations.
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1