How to interpret this bit shift

I have the following bit shift:
1011 & (~0 << 2)
How do I work out the answer for this? In particular I am confused about what ~0 << 2 means - I know that the << operator is a bit shift, and that ~ represents 'not'.
What I have read is that ~0 is a sequence of 1s - but why is that true, and how many 1s are there?

Usually, an int is a 32-bit/4-byte value. So ~0 = 1111 1111 1111 1111 1111 1111 1111 1111
In your case the exact width really doesn't matter, since only the low four bits take part.
You want to solve 1011 & (~0 << 2)
Let's go through your example in steps.
First thing that happens is the parenthesis:
(~0 << 2)
This is the bits 1111 shifted left by two bits. When a left shift occurs, the newly added bits are 0s. Therefore (~0 << 2) equals:
(1111 << 2) = 1100
Finally, you just need to do a bitwise AND between 1011 and 1100, which ends up as
1000 = 8
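If you want to verify this on a real machine, here is a minimal C sketch; it uses ~0u instead of ~0 so the left shift stays well defined (left-shifting a negative signed value is undefined behavior in C):

#include <stdio.h>

int main(void) {
    unsigned x = 0xB;             /* binary 1011 */
    unsigned mask = ~0u << 2;     /* ...11111100: all 1s shifted left by 2 */

    /* Mask down to 4 bits so only the nibble from the example remains. */
    printf("%u\n", (x & mask) & 0xFu);   /* prints 8, i.e. binary 1000 */
    return 0;
}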

Related

Understanding two's complement and one's complement

There are two quotes in my textbook (Patterson, 5th edition, Computer Organization) I don't get:
First:
Two’s complement gets its name from the rule that the unsigned sum of an n-bit number and its n-bit negative is 2^n; hence, the negation or complement of a number x is 2^n - x, or its “two’s complement.”
Say I have a 4 bit number 2 = 0010
and its negative = -2 = 1110
If I add them, I get 10000 (is this overflow? is this bad?), which is 16, right? And 1110 unsigned is 14? Is that what the quote is saying?
and
A third alternative representation to two's complement and sign and magnitude is called one's complement. The negative of a one's complement is found by inverting each bit, from 0 to 1 and from 1 to 0, or x̄. This relation helps explain its name since the complement of x is 2^n - x - 1.
What does that mean? Can you give an example to help me see why the name is this way?
Say we have the binary representation of 2:
2 = 0000 0000 0000 0000 0000 0000 0000 0010
In one's complement, the negation is:
-2 = 1111 1111 1111 1111 1111 1111 1111 1101
Is that the negative? If so, how is it 2^n - x - 1?
--------------------
I understand two's complement:
Simply invert every 0 to 1 and every 1 to 0, then add one to the result. This shortcut is based on the observation that the sum of a number and its inverted representation must be 111...111 (32 ones), which represents -1.
So 2 = 0000 0000 0000 0000 0000 0000 0000 0010
if we negate 2:
1111 1111 1111 1111 1111 1111 1111 1101
and add 1:
1111 1111 1111 1111 1111 1111 1111 1110
which is = -2
But I'm a bit confused about one's complement. Is it just two's complement without the addition of 1?
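One way to see both quoted identities concretely is to simulate a 4-bit machine in C by masking every result with 0xF; a minimal sketch:

#include <stdio.h>

int main(void) {
    unsigned x = 2;                    /* 0010 in 4 bits */

    /* One's complement: invert every bit. In 4 bits this is 2^4 - x - 1. */
    unsigned ones = ~x & 0xF;          /* 1101 = 13 = 16 - 2 - 1 */

    /* Two's complement: one's complement plus 1. In 4 bits this is 2^4 - x. */
    unsigned twos = (ones + 1) & 0xF;  /* 1110 = 14 = 16 - 2 */

    printf("ones = %u, twos = %u\n", ones, twos);
    return 0;
}

Here ones is the plain bit inversion and twos is ones + 1, matching 2^4 - x - 1 and 2^4 - x respectively.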

Casting two bytes to a 12 bit short value?

I have a buffer of unsigned char data in which two bytes form a 12-bit value.
I found out that my system is little endian. Printed on the console, the first byte in the buffer gives numbers from 0 to 255. The second byte always gives low numbers between 1 and 8 (it is measured data, so higher values using up to 4 bits would be possible too).
I tried to shift them together so that I get a ushort holding a correct 12-bit number.
Sadly, at the moment I am totally confused about the endianness and what I have to shift how far in which direction.
I tried e.g. this:
ushort value = 0;
value = (ushort)firstByte << 8 | (ushort)secondByte << 4;
Sadly, the value of value quite often exceeds 12 bits.
Where is the mistake?
It depends on how the bits are packed within the two bytes exactly, but the solution for the most likely packing would be:
value = firstByte | (secondByte << 8);
This assumes that the second byte contains the 4 most significant bits (bits 8..11), while the first byte contains the 8 least significant bits (bits 0..7).
Note: the above solution assumes that firstByte and secondByte are sensible unsigned types (e.g. uint8_t). If they are not (e.g. if you have used char or some other possibly signed type), then you'll need to add some masking:
value = (firstByte & 0xff) | ((secondByte & 0xf) << 8);
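As a self-contained sketch of that solution (the pack12 helper name is invented here; the packing assumption is the one described above):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: combine the low byte (bits 0..7) and the low
   nibble of the second byte (bits 8..11) into one 12-bit value. */
static uint16_t pack12(uint8_t firstByte, uint8_t secondByte) {
    return (uint16_t)(firstByte | ((secondByte & 0x0F) << 8));
}

int main(void) {
    /* Low byte 0xAB plus high nibble 0x3 gives 0x3AB (939 decimal). */
    printf("0x%03X\n", pack12(0xAB, 0x03));
    return 0;
}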
I think the main issue may not be the shifting alone. If the values are wider than the bits they are meant to occupy, they will produce a large result unless the excess bits are masked ("AND'ed") out.
Picture the following:
0000 0000 1001 0010 << 8 | 0000 0000 0000 1101 << 4
1001 0010 0000 0000 | 0000 0000 1101 0000
You should notice the first problem here: the 4 lowest bits are not being used, and the result takes up 16 bits when you only wanted twelve. This should be modified like so (new numbers, to demonstrate something else):
0000 1101 1001 0010 << 8 | 0000 0000 0000 1101
1101 1001 0010 0000 | (0000 0000 0000 1101 & 0000 0000 0000 1111)
This will create the following value:
1101 1001 0010 1101
Here, you should note that the value is still wider than 12 bits. If your numbers don't extend past the original 8-bit and 4-bit sizes, ignore this. Otherwise, you have to AND the bits to eliminate the leftmost 4 bits.
0000 1111 1111 1111 & 0000 1001 0010 1101
These mask values can be created using 0b... literals, the (1 << bits) - 1 pattern, or various other forms.
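For instance, a small C sketch of the (1 << bits) - 1 pattern applied to the 16-bit value from the walkthrough above:

#include <stdio.h>

int main(void) {
    unsigned n = 12;
    unsigned mask = (1u << n) - 1;      /* sets the n lowest bits: 0x0FFF */

    /* 0xD92D is 1101 1001 0010 1101 from the walkthrough above. */
    printf("0x%X\n", 0xD92Du & mask);   /* prints 0x92D: high 4 bits cleared */
    return 0;
}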

How does 1 left shift by 31 (1 << 31) work to get maximum int value? Here are my thoughts and some explanations I found online

I'm fairly new to bit manipulation and I'm trying to figure out how (1 << 31) - 1 works.
First I know that 1 << 31 is
10000000000000000000000000000000
and I know it's actually the two's complement representation of the minimum int value, but when I tried to figure out (1 << 31) - 1, I found an explanation stating that it's just
10000000000000000000000000000000 - 1 = 01111111111111111111111111111111
I was almost tempted to believe it since it's really straightforward. But is this what's really happening? If it's not, why does it happen to be right?
My original thought was that the real process should be: the two's complement representation of -1 is
11111111111111111111111111111111
then (1 << 31) - 1 =
(1)01111111111111111111111111111111
the leftmost 1 is discarded, and then we have the maximum value of int.
I'm really confused about which one is right.
It's both! 1 << 31 is:
1000 0000 0000 0000 0000 0000 0000 0000
Subtracting 1 gives:
0111 1111 1111 1111 1111 1111 1111 1111
One of the nice features of the two's complement layout of signed numbers is that addition and subtraction are exactly the same operations as they are for unsigned numbers. In two's complement, 10000...000 represents the largest negative number, which is -2,147,483,648 in this case, and subtracting 1 from it causes wrap-around to the largest positive number, 2,147,483,647. But two's complement numbers are arranged so that we can pretend the value is unsigned instead, which makes the subtraction uncomplicated: subtracting 1 from 10000...000 simply drops the leading 1 to a 0 and borrows a bunch of 1s, the same way that in decimal you get a bunch of 9s: 10000 - 1 = 9999.
It's also true that mathematically, (a - b) is the same as (a + (-b)), so we can do (1 << 31) + (-1) instead:
  1000 0000 0000 0000 0000 0000 0000 0000   (1 << 31)
+ 1111 1111 1111 1111 1111 1111 1111 1111   (-1)
-------------------------------------------
1 0111 1111 1111 1111 1111 1111 1111 1111
  0111 1111 1111 1111 1111 1111 1111 1111   (truncated)
A 1 is carried out of the high end, and lost once the result is truncated back into a 32-bit integer.
Either way, that pattern, with a single 0 at the high end, then filled by 1s, is the representation of the maximum positive value for a two's complement integer of any width.
There are other ways to generate that pattern if you prefer, such as ~(1 << 31), or (-1 >>> 1) (where >>> means a logical shift right), which is agnostic of the width of the integer.
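Here is a hedged C sketch checking all three constructions; C needs the shifts done on unsigned types to avoid signed-overflow undefined behavior, and its >> on unsigned values is already a logical shift:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = (UINT32_C(1) << 31) - 1;  /* subtract 1 from 1000...0 */
    uint32_t b = ~(UINT32_C(1) << 31);     /* flip every bit of 1000...0 */
    uint32_t c = UINT32_MAX >> 1;          /* all 1s, shifted right once */

    /* All three print 2147483647, the maximum 32-bit signed value. */
    printf("%" PRIu32 " %" PRIu32 " %" PRIu32 "\n", a, b, c);
    return 0;
}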

Shift instructions in Golang

The Go spec says:
<<    left shift    integer << unsigned integer
What if the left side is type of uint8:
var x uint8 = 128
fmt.Println(x << 8) // it got 0, why ?
fmt.Println(int(x)<<8) // it got 32768, sure
Questions:
When x is of type uint8, why is there no compile error?
Why does x << 8 give 0?
For C/C++,
unsigned int a = 128;
printf("%d",a << 8); // result is 32768.
Could anyone explain? Thank you.
The left shift operator shifts the binary digits in the number to the left by X places, which has the effect of appending X 0s on the right-hand side of the number. A uint8 only holds 8 bits, so when you assign 128 your variable has
x = 1000 0000 == 128
x << 8
x= 1000 0000 0000 0000 == 32768
Since uint8 only holds 8 bits, we take the rightmost 8 bits, which is
x = 0000 0000 == 0
The reason you get the right number with an int is that Go's int is either 32 or 64 bits wide, depending on the platform. That is enough to store the entire result.
Because uint8 is an unsigned 8-bit integer type. That's what "u" stands for.
Because uint8(128) << 8 shifts the value, turning 1000 0000 into 0000 0000.
int(x) makes it 0000 0000 0000 0000 0000 0000 1000 0000 (on 32-bit systems, since int is architecture-dependent) and then the shift comes, making it 0000 0000 0000 0000 1000 0000 0000 0000, or 32768.
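The C/C++ difference in the question comes down to integer promotion: C widens a uint8_t to int before shifting, while Go evaluates the shift in the operand's own type. A small C sketch of the contrast (the cast back to uint8_t imitates Go's truncation):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 128;

    /* C promotes x to int before the shift, so all bits survive: 32768. */
    printf("%d\n", x << 8);

    /* Casting the result back to uint8_t mimics Go, where x << 8 is
       evaluated entirely in uint8 and the high bits are discarded: 0. */
    printf("%d\n", (uint8_t)(x << 8));
    return 0;
}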

Bit manipulation (clear n bits)

Going through the book "Cracking the Coding Interview" by Gayle Laakmann McDowell, in the bit manipulation chapter, it poses a question:
Find the value of (assuming numbers are represented by 4 bits):
1011 & (~0 << 2)
Now, ~0 = 1, and shifting it two times to the left yields 100 (= 0100 to complete the 4 bits). ANDing 1011 with 0100 equals 0000.
However, the answer I have is 1000.
~0 is not 1 but 1111 (or 0xf). The ~ operator is a bitwise NOT operator, and not a logical one (which would be !).
So, when shifted by 2 places to the left, the last four bits are 1100. And 1100 & 1011 is exactly 1000.
~0 does not equal 1. The 0 will default to being an integer, and the NOT operation will reverse ALL the bits, not just the first.
~ is the Bitwise Complement Operator.
In 4 bits, the value of ~0 is 1111.
1011 & (~0 << 2)
= 1011 & ( 1111 << 2)
= 1011 & 1100
= 1000
1011 & (~0 << 2)
~0 is not 1 but rather 1111₂ or 0xF₁₆.
Shifting 1111 to the left twice gives 1100 (the two leftmost bits have been dropped and filled in with 0s from the right).
ANDing 1011 & 1100 gives 1 in each bit position where both operands have a 1, otherwise 0. It follows that the result is 1000.
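Wrapping the idiom into a general helper, here is a minimal C sketch (clear_low_bits is a name invented here) that clears the n lowest bits of a value:

#include <stdio.h>

/* Hypothetical helper: clears the n lowest bits of x using the
   ~0 << n idiom (n must be smaller than the width of unsigned). */
static unsigned clear_low_bits(unsigned x, unsigned n) {
    return x & (~0u << n);
}

int main(void) {
    printf("0x%X\n", clear_low_bits(0xB, 2));   /* 1011 -> 1000, prints 0x8 */
    return 0;
}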