about & and | operation [duplicate] - c++

Possible Duplicate:
Real world use cases of bitwise operators
I'm not quite sure about the bitwise operators & and |; can someone explain to me what exactly these operators do?
I read the tutorial at http://www.cprogramming.com/tutorial/bitwise_operators.html yesterday, but I still don't really know how to apply it in code. Can someone please give some examples?

the | operator (OR):
------------------------
a 0000 1110 1110 0101
------------------------
b 1001 0011 0100 1001
------------------------
a|b 1001 1111 1110 1101
the operator gives 1 in a bit position if at least one of the numbers has a 1 in that position.
the & operator (AND):
------------------------
a 0000 1110 1110 0101
------------------------
b 1001 0011 0100 1001
------------------------
a&b 0000 0010 0100 0001
the operator gives 1 in a bit position only if both numbers have a 1 in that position, and 0 otherwise.
usage: if I want just part of the number (let's say the second set of four bits) I could write:
a & 0x00f0
Using bitwise operators is generally not recommended for beginners.
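For instance, a small sketch in C (the flag names here are made up for illustration) showing the two most common uses, combining flags with | and testing them with &:

#include <stdio.h>

#define FLAG_READ  0x01   /* 0000 0001 */
#define FLAG_WRITE 0x02   /* 0000 0010 */

int main(void)
{
    unsigned int flags = 0;

    flags |= FLAG_READ;                     /* set the read bit  -> 0000 0001 */
    flags |= FLAG_WRITE;                    /* set the write bit -> 0000 0011 */

    if (flags & FLAG_WRITE)                 /* test a single bit */
        printf("write flag is set\n");

    unsigned int low_nibble = 0xA7 & 0x0F;  /* keep only the low 4 bits -> 0x07 */
    printf("low nibble: 0x%X\n", low_nibble);
    return 0;
}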

This is a very low-level programming question. The smallest unit of memory is the "bit". A byte is a chunk of 8 bits, a word is (typically) a chunk of 16 bits, and so on... Bitwise operators let you alter/check the bits of these chunks. Depending upon what you're writing code for, you may never need these operators.
Examples:
unsigned char x = 10; /*This declares a byte and sets it to 10. The binary representation
of this value is 00001010. The ones and zeros are the bits.*/
if (x & 2) {
//Number 2 is binary 00000010, so the statements within this condition will be executed
//if the bit #1 is set (bits are numbered from right to left starting from #0)
}
unsigned char y = x | 1; //This makes `y` equal to x with bit #0 set.
//I.e. y = 00001010 | 00000001 = 00001011 = 11 decimal
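Wrapped into a complete program (just the snippet above plus a main and printf, assuming a standard C compiler), it prints both lines:

#include <stdio.h>

int main(void)
{
    unsigned char x = 10;        /* binary 00001010 */

    if (x & 2)                   /* bit #1 of 10 is set, so this branch runs */
        printf("bit #1 of x is set\n");

    unsigned char y = x | 1;     /* 00001010 | 00000001 = 00001011 */
    printf("y = %d\n", y);       /* prints 11 */
    return 0;
}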

Related

The meaning of 1llu in C++

Suppose the i-th column of a binary matrix is denoted matrix[i], and let D be the number of columns of the matrix.
My question: what is the result of the following code? In fact, I can't understand the role of the 1llu expression.
matrix[i]^((1llu << D)-1)
This has to be looked at from the binary representation.
1llu means 1 represented as an unsigned long long.
...0000 0000 0000 0001
<< D shifts that 1 to the left by D places (bits)
If D==5 then :
...0000 0000 0010 0000
- 1 subtracts 1 from the shifted result (which gives 1's in positions 0 through D-1)
...0000 0000 0001 1111
The bitwise exclusive OR operator (^) compares each bit of its first operand to the corresponding bit of its second operand. If one bit is 0 and the other bit is 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
https://learn.microsoft.com/en-us/cpp/cpp/bitwise-exclusive-or-operator-hat?view=vs-2019
It is easy to explain with an example below:
Example 1: 1 << 34
Example 2: 1llu << 34
If the integer size is 32 bits, then Example 1 would produce 0x0000 0000, as the 1 falls off the top (formally, shifting by the full width or more is undefined behaviour). Example 2, by contrast, would produce 0x0000 0004 0000 0000.
So it should be seen in the context of the type/size of the matrix element.
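As a small sketch (the column value and D = 5 below are made-up assumptions): (1llu << D) - 1 builds a mask with the lowest D bits set, and the ^ then flips exactly those bits of the column.

#include <stdio.h>

int main(void)
{
    unsigned long long column = 0x15;              /* example column: 1 0101 */
    unsigned D = 5;                                /* number of bits in the column */

    unsigned long long mask    = (1llu << D) - 1;  /* 0 0001 1111 = 0x1F */
    unsigned long long flipped = column ^ mask;    /* 0 1010 = 0x0A      */

    printf("mask    = 0x%llx\n", mask);
    printf("flipped = 0x%llx\n", flipped);
    return 0;
}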

Casting two bytes to a 12 bit short value?

I have a buffer of unsigned char data in which two bytes together form a 12-bit value.
I found out that my system is little-endian. The first byte in the buffer gives me, on the console, numbers from 0 to 255. The second byte always gives low numbers between 1 and 8 (measured data, so higher values up to 4 bits would be possible too).
I tried to shift them together so that I get a ushort with a correct 12-bit number.
Sadly, at the moment I am totally confused about the endianness and what I have to shift how far in which direction.
I tried e.g. this:
ushort value = 0;
value = (ushort) firstByte << 8 | (ushort) secondByte << 4;
Sadly, the value of value is quite often bigger than 12 bits.
Where is the mistake?
It depends on how the bits are packed within the two bytes exactly, but the solution for the most likely packing would be:
value = firstByte | (secondByte << 8);
This assumes that the second byte contains the 4 most significant bits (bits 8..11), while the first byte contains the 8 least significant bits (bits 0..7).
Note: the above solution assumes that firstByte and secondByte are sensible unsigned types (e.g. uint8_t). If they are not (e.g. if you have used char or some other possibly signed type), then you'll need to add some masking:
value = (firstByte & 0xff) | ((secondByte & 0xf) << 8);
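A minimal sketch of that answer (the two byte values below are invented just to show the result fits in 12 bits):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t firstByte  = 0xAB;   /* bits 0..7 of the measurement */
    uint8_t secondByte = 0x05;   /* bits 8..11 (only the low 4 bits are used) */

    uint16_t value = (uint16_t)((firstByte & 0xff) | ((secondByte & 0x0f) << 8));

    printf("value = 0x%03X (%u)\n", (unsigned)value, (unsigned)value);  /* 0x5AB = 1451 */
    return 0;
}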
I think the main issue may not be with the values you're shifting alone. If those values are wider than the bits reserved for them, they'll produce a larger value unless the extra bits are "and'ed" out.
Picture the following:
0000 0000 1001 0010 << 8 | 0000 0000 0000 1101 << 4
1001 0010 0000 0000 | 0000 0000 1101 0000
You should notice the first problem here. The 4 'lowest' bits are not being used, and the result takes up 16 bits; you only wanted twelve. This should be modified like so:
(these are new numbers to demonstrate something else)
0000 1101 1001 0010 << 4 | 0000 0000 0000 1101
1101 1001 0010 0000 | (0000 0000 0000 1101 & 0000 0000 0000 1111)
This will create the following value:
1101 1001 0010 1101
Here, you should note that the value is still greater than 12 bits. If your numbers don't extend past the original 8-bit and 4-bit sizes, ignore this. Otherwise, you have to use the 'and' operation on the bits to eliminate the leftmost 4 bits.
0000 1111 1111 1111 & 1101 1001 0010 1101 = 0000 1001 0010 1101
These mask values can be created using 0b... binary literals, the (1 << bits) - 1 pattern, or various other forms.
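For example, these spellings of the 12-bit mask are equivalent (note that 0b... binary literals are a compiler extension in C before C23, so the last form depends on your toolchain):

unsigned mask_hex   = 0x0FFF;           /* 0000 1111 1111 1111 */
unsigned mask_shift = (1u << 12) - 1;   /* the "2^bits - 1" pattern */
/* unsigned mask_bin = 0b0000111111111111;   0b literals: GCC/Clang extension, standard in C23/C++14 */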

Convert an 8 bit data to 3 bit data

PROGRAMMING LANGUAGE: C
I have an 8-bit value in which only 3 bits are used, for example:
0110 0001
where 0 indicates unused bits that are always set to 0 and 1 indicates bits that change.
I want to convert this 8-bit value 0110 0001 into a 3-bit value containing just those 3 used bits.
For example
0110 0001 --> 111
0010 0001 --> 011
0000 0000 --> 000
0100 0001 --> 101
How can I do that with minimal operations?
You can achieve this with a couple of bitwise operations:
((a >> 4) & 6) | (a & 1)
Assuming you start from xYYx xxxY, where x is a bit you don't care about and Y a bit to keep:
the right shift by 4 of a will result in xYYx, then masking with 6 (binary 110) makes sure only the second and third bits are retained, resulting in YY0 and preventing stray x bits from messing things up.
a & 1 selects the LSB, resulting in Y.
the two parts, YY0 and Y are combined using a | bitwise or, resulting in YYY.
Now you have the 3 bits you asked for. But keep in mind that you can't address single bits, so the result will still be byte-aligned as 00000YYY.
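As a quick throwaway check (using the example inputs from the question):

#include <stdio.h>

int main(void)
{
    unsigned char inputs[] = { 0x61, 0x21, 0x00, 0x41 };  /* 0110 0001, 0010 0001, 0000 0000, 0100 0001 */

    for (int i = 0; i < 4; i++) {
        unsigned char a = inputs[i];
        unsigned char packed = ((a >> 4) & 6) | (a & 1);
        printf("0x%02X -> %d%d%d\n", (unsigned)a,
               (packed >> 2) & 1, (packed >> 1) & 1, packed & 1);  /* 111, 011, 000, 101 */
    }
    return 0;
}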
You can get the k'th bit of n (where n here is 0110 0001):
(n & ( 1 << k )) >> k
(More details about that at StackOverflow)
so you use that to get bits 0, 5 and 6 and just combine those:
r = bit0 + bit5*2 + bit6*4
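The same idea written out as helper functions (a sketch; bit positions 0, 5 and 6 follow the layout given in the question):

unsigned bit(unsigned n, unsigned k)   /* returns the k'th bit of n (0 or 1) */
{
    return (n & (1u << k)) >> k;
}

unsigned pack3(unsigned n)             /* 0YY0 000Y -> 0000 0YYY */
{
    return bit(n, 0) | (bit(n, 5) << 1) | (bit(n, 6) << 2);
}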

2s Complement of a negative zero

I have a problem: with two's complement you can get the negative of a positive number by inverting the bits and adding one, e.g.
8 Bit
121 = 0111 1001
1st= 1000 0110
+ 0000 0001
---------
1000 0111 --> -121
So now if we have a -0:
a zero in 8 bits looks like
0000 0000
so a minus 0 should look like
1111 1111 + 0000 0001
= 1 0000 0000
but that is 512,
so I think that I've misunderstood something
To expand my previous comment to the question
1111 1111 + 0000 0001 in 8 bits is 0000 0000; the ninth bit is lost because there is no place for it.
And yes, the complement of a negative is a positive:
-121 = 1000 0111
1st = 0111 1000
+ 0000 0001
---------
0111 1001 --> 121
Think of them as a circle: at one point there is 0; adding 1 at a time you go up to the opposite point (128 in 8 bits), where the sign switches and the absolute value begins to decrease, e.g. 128 + 1 = -127; as you continue to add 1 the value goes back towards 0 and the circle is complete.
So given a number of bits, you only have that many bits, no more, and if you want the value to be signed you really have only x-1 bits for the magnitude, as the most significant bit is used for the sign (0 -> +; 1 -> -).
1 0000 0000b is 256, not 512. Truncated to 8 bits, it's 0.
This is because with two's complement, zero is zero. There is no positive or negative zero.
Compare this to one's complement or sign bit, where positive zero and negative zero are different values.
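A short check in C (uint8_t is used just to force the 8-bit truncation, and 121 mirrors the number above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t zero = 0x00;
    uint8_t neg  = (uint8_t)(~zero + 1);   /* 1111 1111 + 1 = 1 0000 0000, truncated to 0000 0000 */
    printf("two's complement of 0 is %u\n", (unsigned)neg);      /* prints 0 */

    uint8_t x    = 121;                    /* 0111 1001 */
    uint8_t negx = (uint8_t)(~x + 1);      /* 1000 0111, i.e. -121 when read as a signed byte */
    printf("two's complement of 121 is 0x%02X (%d)\n", (unsigned)negx, (int8_t)negx);
    return 0;
}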

Bit manipulation (clear n bits)

Going through the book "Cracking the Coding Interview" by Gayle Laakmann McDowell, the bit manipulation chapter poses this question:
Find the value of (assuming numbers are represented by 4 bits):
1011 & (~0 << 2)
Now, ~0 = 1, and shifting it two places to the left yields 100 (= 0100 to fill out the 4 bits). ANDing 1011 with 0100 equals 0000.
However, the answer I have is 1000.
~0 is not 1 but 1111 (or 0xf). The ~ operator is a bitwise NOT operator, and not a logical one (which would be !).
So, when shifted by 2 places to the left, the last four bits are 1100. And 1100 & 1011 is exactly 1000.
~0 does not equal 1. The 0 will default to being an integer, and the NOT operation will reverse ALL the bits, not just the first.
~ is the Bitwise Complement Operator.
The value of ~0 is 1111 in 4 bits.
1011 & (~0 << 2)
= 1011 & ( 1111 << 2)
= 1011 & 1100
= 1000
1011 & (~0 << 2)
~0 is not 1 but rather 1111 in binary, or 0xF in hexadecimal.
Shifting 1111 to the left twice gives 1100 (the two leftmost bits have been dropped and filled in with 0s from the right).
ANDing 1011 & 1100 gives 1 in each bit position where both corresponding bits are 1, otherwise 0. It follows that the result is 1000.
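Verifying with actual code (the final & 0xF simply limits the value to the 4 bits the book assumes):

#include <stdio.h>

int main(void)
{
    unsigned result = (0xB & (~0u << 2)) & 0xF;   /* 1011 & 1100 = 1000 */
    printf("result = 0x%X\n", result);            /* prints 0x8 */
    return 0;
}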