unsigned/signed short/int conversion - c++

I'm studying signed/unsigned integer conversions and I came to the following conclusions. Can someone tell me whether they are correct, please?
unsigned short var = -65537u;
Steps:
65537u (implicitly converted to unsigned int)
Binary representation:
0000 0000 0000 0001 0000 0000 0000 0001
-65537u
Binary representation: 1111 1111 1111 1110 1111 1111 1111 1111
Truncated to short
Binary representation: 1111 1111 1111 1111
read as an unsigned short: 65535
The same should apply for the following cases:
unsigned short var = -65541u;
65541u (unsigned int)
0000 0000 0000 0001 0000 0000 0000 0101
-65541u
1111 1111 1111 1110 1111 1111 1111 1011
Truncated to short
1111 1111 1111 1011
read as an unsigned short: 65531
unsigned short var = -5u;
5u (unsigned int)
0000 0000 0000 0000 0000 0000 0000 0101
-5u
1111 1111 1111 1111 1111 1111 1111 1011
Truncated to short
1111 1111 1111 1011
read as an unsigned short: 65531

Your analysis is correct for the usual platforms where short is 16 bits and int is 32 bits.
For some platforms, the constant 65537 may not fit in an unsigned int; if that is the case, 65537u will be typed as a larger unsigned type. The list of types that are tried can be found in section 6.4.4.1:5 of the C99 standard. At the very least it will fit in an unsigned long, which the standard guarantees can hold values that large.
The reasoning remains much the same if that happens, up until the conversion back to unsigned short for the assignment.
Conversely, the C99 standard allows unsigned short to hold more than 16 bits. In that case var receives USHRT_MAX-65536 for your first example, and similarly for the other ones.

The size of short is implementation-dependent, not necessarily 16 bits; 16 bits is the minimum size.
Similarly, an int may be only 16 bits as well.
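If you want to verify these results yourself, here is a minimal sketch; the printed values assume the usual platform with a 16-bit unsigned short and a 32-bit unsigned int:
#include <iostream>

int main() {
    // Each initializer is an unsigned int value; assigning it to an
    // unsigned short reduces it modulo USHRT_MAX + 1.
    unsigned short a = -65537u; // 0xFFFEFFFF -> 0xFFFF
    unsigned short b = -65541u; // 0xFFFEFFFB -> 0xFFFB
    unsigned short c = -5u;     // 0xFFFFFFFB -> 0xFFFB
    std::cout << a << ' ' << b << ' ' << c << '\n'; // 65535 65531 65531
}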

Related

What is the output of this C++ program? Are chars stored as ASCII values?

char char_ = '3';
unsigned int * custom_mem_address = (unsigned int *) &char_;
cout << char_ << endl;
cout << *custom_mem_address << endl;
Since custom_mem_address points at the one-byte char '3', I expect *custom_mem_address to contain the ASCII value of '3', which is 51.
But the output is the following.
3
1644042035
Depending on the byte alignment, at least one byte of 1644042035 should be 51, right? But it's not. Can someone please explain where I am wrong?
1644042035 in binary is 0110 0001 1111 1110 0001 0111 0011 0011 and 51 is 0011 0011.
0110 0001 1111 1110 0001 0111 0011 0011
0000 0000 0000 0000 0000 0000 0011 0011
Isn't that what you are looking for?
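Incidentally, reading a char object through an unsigned int * is undefined behaviour (it breaks strict aliasing and may be misaligned). A well-defined way to inspect the bytes of an object is through unsigned char *. A minimal sketch; the byte order shown assumes a little-endian machine with 4-byte int:
#include <iostream>

int main() {
    unsigned int value = 1644042035; // 0x61FE1733
    // unsigned char* may alias any object, so this is well-defined.
    unsigned char *bytes = reinterpret_cast<unsigned char *>(&value);
    for (unsigned i = 0; i < sizeof value; ++i)
        std::cout << static_cast<int>(bytes[i]) << ' ';
    std::cout << '\n'; // little-endian output: 51 23 254 97
}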

How does bitwise not operation give negative value [duplicate]

This question already has answers here:
Bitwise NOT operator returning unexpected and negative value? [duplicate]
I want to see how bitwise NOT works through a simple example:
int x = 4;
int y;
int z;
y = ~(x << 1);
z = ~(0x01 << 1);
cout << "y = " << y << endl;
cout << "z = " << z << endl;
This results in y = -9 and z = -3. I don't see how this happens. Can anyone educate me a bit?
(x<<1) will shift the bits left by one, so
00000000 00000000 00000000 00000100
will become:
00000000 00000000 00000000 00001000
Which is the representation of 8. Then ~ will invert all the bits such that it becomes:
11111111 11111111 11111111 11110111
Which is the representation of -9.
0x01 is
00000000 00000000 00000000 00000001
in binary, so when shifted once becomes:
00000000 00000000 00000000 00000010
And then when ~ is applied we get:
11111111 11111111 11111111 11111101
Which is the representation of -3.
Well, there is a long story behind this.
To make it easier to understand, let's use binary numbers.
x = 4, i.e. x = 0b 0000 0000 0000 0000 0000 0000 0000 0100, because sizeof(int) == 4.
After x << 1 the value is 0b 0000 0000 0000 0000 0000 0000 0000 1000, and after
~(x << 1) it is 0b 1111 1111 1111 1111 1111 1111 1111 0111.
And here the complication begins. Since int is a signed type, the first bit is the sign bit and the representation is two's complement.
So 0b 1111 1111 1111 1111 1111 1111 1111 0111 is -9, and for example
0b 1111 1111 1111 1111 1111 1111 1111 1111 is -1,
while 0b 0000 0000 0000 0000 0000 0000 0000 0010 is 2.
Learn more about two's complement.
Whether an integer is positive or negative (the sign of the integer) is stored in a dedicated bit, the sign bit. The bitwise NOT affects this bit, too, so any positive number becomes a negative number and vice versa.
Note that "dedicated bit" is a bit of an oversimplification, as most contemporary computers do not use "sign and magnitude" representation (where the sign bit would just switch the sign), but "two's complement" representation, where the sign bit also affects the magnitude.
For example, the 8-bit signed integer 00000000 would be 0, but 10000000 (sign bit flipped) would be -128.
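A compact way to check the rule from the answers above is the two's-complement identity ~v == -v - 1. A minimal sketch:
#include <iostream>

int main() {
    int x = 4;
    int y = ~(x << 1);    // ~8 == -9
    int z = ~(0x01 << 1); // ~2 == -3
    // In two's complement, ~v == -v - 1 for any int v:
    std::cout << y << " == " << -(x << 1) - 1 << '\n';    // -9 == -9
    std::cout << z << " == " << -(0x01 << 1) - 1 << '\n'; // -3 == -3
}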

Extracting middle 16 bits of a 32 bit long

I am reading TCPPPL by Stroustrup. It gives an example of a function that extracts the middle 16 bits of a 32 bit long like this:
unsigned short middle(long a) { return (a >> 8) & 0xffff; }
My question is: isn't it extracting the last 16 bits? Tell me how I am wrong.
It does indeed extract the middle 16 bits:
// a := 0b xxxx xxxx 1111 1111 1111 1111 xxxx xxxx
a>>8; // 0b 0000 0000 xxxx xxxx 1111 1111 1111 1111
&0xffff // 0b 0000 0000 0000 0000 1111 1111 1111 1111
a >> 8 will right-shift the value in a by 8 bits. The low 8 bits are forgotten, and bits previously numbered 31–8 now get moved (renumbered) to 23–0. Finally, masking out the higher 16 bits leaves you with bits 15–0, which were originally (before the shift) at positions 23–8. Voila.
a is right-shifted by 8 bits (a >> 8) before the bitwise AND operation.
Have you noticed the >>8 part? It shifts the argument right by eight bits, first.
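A quick check with a value whose middle bits are easy to recognize (a hypothetical example, not from the book):
#include <iostream>

unsigned short middle(long a) { return (a >> 8) & 0xffff; }

int main() {
    long v = 0x12345678;                        // bits 23-8 are 0x3456
    std::cout << std::hex << middle(v) << '\n'; // prints 3456
}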

bit parity code needs explanation as to how it works?

Here is the code that reports the bit parity of a given integer:
01: bool parity(unsigned int x)
02: {
03: x ^= x >> 16;
04: x ^= x >> 8;
05: x ^= x >> 4;
06: x &= 0x0F;
07: return ((0x6996 >> x) & 1) != 0;
08: }
I found this here. While there seems to be an explanation at the link, I do not understand it.
The first explanation, which starts with "The code first 'merges' bits 0-15 with bits 16-31 using a right shift and XOR (line 3)", makes it hard for me to understand what is going on. I tried to play around with the values, but that did not help. Some clarity on how this works would be useful for beginners like me.
Thanks
EDIT: from post below:
value : 1101 1110 1010 1101 1011 1110 1110 1111
value >> 16: 0000 0000 0000 0000 1101 1110 1010 1101
----------------------------------------------------
xor : 1101 1110 1010 1101 0110 0000 0100 0010
now right shift this again by 8 bits:
value : 1101 1110 1010 1101 0110 0000 0100 0010
value >>8 : 0000 0000 1101 1110 1010 1101 0110 0000
----------------------------------------------------
xor : 1101 1110 0111 0011 1100 1101 0010 0010
so where is the merging of parity happening here?
Let's start first with a 2-bit example so you can see what's going on. The four possibilities are:
ab a^b
-- ---
00 0
01 1
10 1
11 0
You can see that a^b (xor) gives 0 for an even number of one-bits and 1 for an odd number. This works for 3-bit values as well:
abc a^b^c
--- -----
000 0
001 1
010 1
011 0
100 1
101 0
110 0
111 1
The same trick is being used in lines 3 through 6 to merge all 32 bits into a single 4-bit value. Line 3 merges b31-b16 with b15-b0 to give a 16-bit value, then line 4 merges the resultant b15-b8 with b7-b0, then line 5 merges the resultant b7-b4 with b3-b0. Since b31-b4 (the upper bits of each xor result) aren't cleared by those operations, line 6 takes care of that by clearing them out (ANDing with binary 0000...1111 to clear all but the lower 4 bits).
The merging here is achieved in a chunking mode. By "chunking", I mean that it treats the value in reducing chunks rather than as individual bits, which allows it to efficiently reduce the value to a 4-bit size (it can do this because the xor operation is both associative and commutative). The alternative would be to perform seven xor operations on the nybbles rather than three. Or, in complexity analysis terms, O(log n) instead of O(n).
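For comparison, the linear alternative alluded to above could look like this hypothetical sketch, xoring the eight nybbles together one by one (seven effective xor operations) instead of three shift-and-xor steps:
// O(n) nybble folding: xor all eight 4-bit chunks of x together.
// The result has the same bit parity as x, just like lines 3-6.
unsigned int fold_nybbles(unsigned int x) {
    unsigned int r = 0;
    for (int i = 0; i < 32; i += 4)
        r ^= (x >> i) & 0x0F; // xor in each 4-bit chunk
    return r;
}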
Say you have the value 0xdeadbeef, which is binary 1101 1110 1010 1101 1011 1110 1110 1111. The merging happens thus:
value : 1101 1110 1010 1101 1011 1110 1110 1111
>> 16: 0000 0000 0000 0000 1101 1110 1010 1101
----------------------------------------------------
xor : .... .... .... .... 0110 0000 0100 0010
(with the irrelevant bits, those which will not be used in future, left as . characters).
For the complete operation:
value : 1101 1110 1010 1101 1011 1110 1110 1111
>> 16: 0000 0000 0000 0000 1101 1110 1010 1101
----------------------------------------------------
xor : .... .... .... .... 0110 0000 0100 0010
>> 8: .... .... .... .... 0000 0000 0110 0000
----------------------------------------------------
xor : .... .... .... .... .... .... 0010 0010
>> 4: .... .... .... .... .... .... 0000 0010
----------------------------------------------------
xor : .... .... .... .... .... .... .... 0000
And, looking up 0000 in the table below, we see that it gives even parity (there are 24 1-bits in the original value). Changing just one bit in that original value (any bit; I've chosen the rightmost bit) will result in the opposite case:
value : 1101 1110 1010 1101 1011 1110 1110 1110
>> 16: 0000 0000 0000 0000 1101 1110 1010 1101
----------------------------------------------------
xor : .... .... .... .... 0110 0000 0100 0011
>> 8: .... .... .... .... 0000 0000 0110 0000
----------------------------------------------------
xor : .... .... .... .... .... .... 0010 0011
>> 4: .... .... .... .... .... .... 0000 0010
----------------------------------------------------
xor : .... .... .... .... .... .... .... 0001
And 0001 in the below table is odd parity.
The only "magic" there is the 0x6996 value which is shifted by the four-bit value to ensure the lower bit is set appropriately, then that bit is used to decide the parity. The reason 0x6996 (binary 0110 1001 1001 0110) is used is because of the nature of parity for binary values as shown in the lined page:
Val  Bnry  #1bits  parity (1=odd)
---  ----  ------  --------------
                   +------> 0x6996
                   |
 0   0000    0     even (0)
 1   0001    1     odd (1)
 2   0010    1     odd (1)
 3   0011    2     even (0)
 4   0100    1     odd (1)
 5   0101    2     even (0)
 6   0110    2     even (0)
 7   0111    3     odd (1)
 8   1000    1     odd (1)
 9   1001    2     even (0)
10   1010    2     even (0)
11   1011    3     odd (1)
12   1100    2     even (0)
13   1101    3     odd (1)
14   1110    3     odd (1)
15   1111    4     even (0)
Note that it's not necessary to do the final shift-of-a-constant. You could just as easily continue the merging operations until you get down to a single bit, then use that bit:
bool parity (unsigned int x) {
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;
    x ^= x >> 2;
    x ^= x >> 1;
    return x & 1;
}
However, once you have the value 0...15, a shift of a constant by that value is likely to be faster than two extra shift-and-xor operations.
From the original page,
Bit parity tells whether a given input contains an odd number of 1's.
So you want to add up the number of 1's. The code uses the xor operator to add pairs of bits,
0^1 = 1  (one bit on)
1^0 = 1  (one bit on)
0^0 = 0  (no bits on)
1^1 = 0  (no bits on; well, two, but we cast off 2's)
So the first three xor statements (lines 3-5) count up the number of 1's, tossing out pairs of 1's as they go.
That should help...
And notice from the original page, the description of why 0x6996,
If we encode even by 0 and odd by 1, beginning with parity(15), then we
get 0110 1001 1001 0110 = 0x6996, which is the magic number found in
line 7. The shift moves the relevant bit to bit 0. Then everything
except for bit 0 is masked out. In the end, we get 0 for even and 1
for odd, exactly as desired.
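As a sanity check, here is a small test harness (my own sketch, not from the linked page) comparing the table-lookup version against a naive bit count:
#include <cassert>

bool parity(unsigned int x) {
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;
    x &= 0x0F;
    return ((0x6996 >> x) & 1) != 0;
}

// Naive reference: flip a flag once per set bit.
bool parity_naive(unsigned int x) {
    bool p = false;
    for (; x != 0; x >>= 1)
        if (x & 1) p = !p;
    return p;
}

int main() {
    assert(parity(0xdeadbeef) == false); // 24 one-bits: even
    assert(parity(0xdeadbeee) == true);  // 23 one-bits: odd
    for (unsigned int i = 0; i < 100000; ++i)
        assert(parity(i) == parity_naive(i));
}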

Reconstructing integers using bit mask

I am quite new to bit masking and bit operations. Could you please help me understand this? I have three integers a, b, and c, and I have created a new number d with the operations below:
int a = 1;
int b = 2;
int c = 92;
int d = (a << 14) + (b << 11) + c;
How do we reconstruct a, b and c using d?
I have no idea of the range of your a, b and c. However, assuming 3 bits for a and b, and 11 bits for c, we can do:
a = ( d >> 14 ) & 7;
b = ( d >> 11 ) & 7;
c = ( d >> 0 ) & 2047;
Update:
The value of the AND-mask is computed as (2^NumberOfBits) - 1.
a is 0000 0000 0000 0000 0000 0000 0000 0001
b is 0000 0000 0000 0000 0000 0000 0000 0010
c is 0000 0000 0000 0000 0000 0000 0101 1100
a<<14 is 0000 0000 0000 0000 0100 0000 0000 0000
b<<11 is 0000 0000 0000 0000 0001 0000 0000 0000
c is 0000 0000 0000 0000 0000 0000 0101 1100
d is 0000 0000 0000 0000 0101 0000 0101 1100
                          ^ ^  {           }
                          a b        c
So a = d >> 14 (no mask needed, since nothing is stored above a's bits)
b = (d >> 11) & 7
c = (d >> 0) & 2047
By the way, you should make sure that b <= 7 and c <= 2047, otherwise the fields will overlap.
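Putting it together, a minimal round-trip sketch of the pack/unpack:
#include <cassert>

int main() {
    int a = 1, b = 2, c = 92;
    int d = (a << 14) + (b << 11) + c; // pack

    int a2 = (d >> 14) & 7;   // unpack 3-bit field
    int b2 = (d >> 11) & 7;   // unpack 3-bit field
    int c2 = d & 2047;        // unpack 11-bit field

    assert(a2 == a && b2 == b && c2 == c);
}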