NOT operation on integer value - c++

I know that the ~ operator performs a NOT operation, but I could not make sense of the output of the following program (which is -65536). What exactly is happening?
#include <stdio.h>

int main(void) {
    int b = 0xFFFF;
    printf("%d", ~b);
    return 0;
}

Assuming 32-bit integers:
int b = 0xFFFF;  =>  b = 0x0000FFFF
~b = 0xFFFF0000
The top bit is now set. Assuming two's complement, this means we have a negative number. Inverting the bits and adding one gives 0x00010000, or 65536, so the value is -65536.
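A minimal sketch that checks this, assuming a 32-bit two's complement int (the variable names are just for illustration):

#include <stdio.h>

int main(void) {
    int b = 0xFFFF;                                   /* b = 0x0000FFFF */
    int n = ~b;                                       /* n = 0xFFFF0000, i.e. -65536 */
    unsigned int magnitude = ~(unsigned int)n + 1u;   /* invert the bits and add one: 65536 */
    printf("%d %u\n", n, magnitude);                  /* prints: -65536 65536 */
    return 0;
}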

When you assign the 16-bit value 0xffff to the 32-bit integer b, the variable b actually holds 0x0000ffff. When you take the bitwise complement, it becomes 0xffff0000, which is decimal -65536.

The ~ operator in C++ is the bitwise NOT operator, also called the bitwise complement. It flips every bit of your signed integer.
For instance, if you had
int b = 8;
// low bits of b:  ... 0000 1000
// low bits of ~b: ... 1111 0111
This flips every bit of the value, including the high-order zero bits, so on a 32-bit int ~b is -9 rather than 7.
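To see that in a full program (a quick sketch, assuming a 32-bit two's complement int):

#include <iostream>

int main() {
    int b = 8;                  // ...0000 1000
    std::cout << ~b << '\n';    // ...1111 0111, prints -9
    return 0;
}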

It is doing a bitwise complement; this output may help you understand better what is going on:
std::cout << std::hex << " b: " << std::setfill('0') << std::setw(8) << b
<< " ~b: " << (~b) << " -65536: " << -65536 << std::endl ;
the result that I receive is as follows:
b: 0000ffff ~b: ffff0000 -65536: ffff0000
So we are setting the lower 16 bits to 1, which gives us 0000ffff; the complement then sets the lower 16 bits to 0 and the upper 16 bits to 1, giving ffff0000, which is -65536 in decimal.
In this case since we are working with bitwise operations, examining the data in hex gives us some insight into what is going on.
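For reference, a self-contained version of that snippet might look like this (the includes and main are my assumption of the original setup; std::setw is applied to each value so all three print as eight hex digits):

#include <iostream>
#include <iomanip>

int main() {
    int b = 0xFFFF;
    std::cout << std::hex << std::setfill('0')
              << " b: "      << std::setw(8) << b
              << " ~b: "     << std::setw(8) << (~b)
              << " -65536: " << std::setw(8) << -65536 << std::endl;
    return 0;
}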

The result depends on how signed integers are represented on your platform. The most common representation is a 32-bit value using "2s complement" arithmetic to represent negative values. That is, a negative value -x is represented by the same bit pattern as the unsigned value 2^32 - x.
In this case, the original bit pattern has the lower 16 bits set:
0x0000ffff
The bitwise negation clears those bits and sets the upper 16 bits:
0xffff0000
Interpreting this as a negative number gives the value -65536.
Usually, you'll want to use unsigned types when you're messing around with bitwise arithmetic, to avoid this kind of confusion.
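For instance, doing the same thing with unsigned types keeps the sign out of the picture entirely (a small sketch, assuming 32-bit unsigned int):

#include <cstdio>

int main() {
    unsigned int b = 0xFFFFu;
    unsigned int n = ~b;                  // 0xFFFF0000
    std::printf("%u 0x%08X\n", n, n);     // prints: 4294901760 0xFFFF0000
    return 0;
}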

Your comment:
If it is NOT of 'b' .. then output should be 0 but why -65536
Suggests that you are expecting the result of:
uint32_t x = 0xFFFF;
uint32_t y = ~x;
to be 0.
That would be true for a logical not operation, such as:
uint32_t x = 0xFFFF;
uint32_t y = !x;
...but operator~ is not a logical NOT, but a bitwise not. There is a big difference.
A logical NOT returns 0 for non-0 values (or false for true values), and 1 for 0 values.
But a bitwise NOT reverses each bit in a given value. So a bitwise NOT of 0x0F (shown here as 8 bits):
 0x0F: 0000 1111
~0x0F: 1111 0000
is not zero, but 0xF0.
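A quick sketch contrasting the two operators (assuming 32-bit integers; uint32_t comes from <cstdint>):

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t x = 0xFFFF;
    std::uint32_t logical = !x;   // logical NOT: x is non-zero, so this is 0
    std::uint32_t bitwise = ~x;   // bitwise NOT: every bit flips, giving 0xFFFF0000
    std::cout << logical << ' ' << bitwise << '\n';   // prints: 0 4294901760
    return 0;
}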

For every bit in the integer, a bitwise NOT operation turns 1s into 0s and 0s into 1s.
So hexadecimal 0xFFFF is binary 1111 1111 1111 1111 (Each hexadecimal character is 4 bits, and F, being 15, is full 1s in all four bits)
You set a 32 bit integer to that, which means it's now:
0000 0000 0000 0000 1111 1111 1111 1111
You then NOT it, which means it's:
1111 1111 1111 1111 0000 0000 0000 0000
The topmost bit is the sign bit (it determines whether the value is positive or negative), so the result is a negative number.

C++ Primer Exercise 4.25 converting binary number

I have a question regarding the exercise 4.25 in C++ Primer:
Exercise 4.25: What is the value of ~'q' << 6 on a machine with 32-bit
ints and 8 bit chars, that uses Latin-1 character set in which 'q' has the
bit pattern 01110001?
I have the solution in binary, but I don't understand how this converts to int:
#include <bitset>
#include <iostream>
using namespace std;

int main()
{
    cout << (std::bitset<8 * sizeof(~'q' << 6)>(~'q' << 6)) << endl;
    cout << (~'q' << 6) << endl;
    return 0;
}
After executing, the following 2 lines are printed:
11111111111111111110001110000000
-7296
The first line is what I expected, but I don't understand how it is converted to -7296.
I would expect a much larger number. Also, online converters give a different result.
Thanks in advance for the help.
In order to answer the question, we need to analyze the types of the subexpressions and the precedence of the operators in play.
For this we could refer to character constant and operator precedence.
'q' represents an int as described in the first link:
single-byte integer character constant, e.g. 'a' or '\n' or '\13'.
Such constant has type int ...
'q' is thus equivalent to the int value of its Latin-1 code (binary 01110001), but widened to fit a 32-bit integer: 00000000 00000000 00000000 01110001.
The operator ~ has higher precedence than the operator <<, so the bitwise negation is performed first. The result is 11111111 11111111 11111111 10001110.
Then a bitwise shift left is performed (dropping the leftmost 6 bits of the value and padding with 0s on the right): 11111111 11111111 11100011 10000000.
Now, regarding the second half of your question: cout << (~'q' << 6) << endl; interprets this value as a (signed) int. As the reference documentation notes:
However, all C++ compilers use two's complement representation, and as of C++20, it is the only representation allowed by the standard, with the guaranteed range from −2^(N−1) to +2^(N−1)−1
(e.g. -128 to 127 for a signed 8-bit type).
Interpreting 11111111 11111111 11100011 10000000 as a two's complement value on a 32-bit machine gives the decimal -7296.
The number is not as large as you would expect because, when you start from decimal -1 (binary 11111111 11111111 11111111 11111111) and count down, the binary representations all have many leading 1s. The leftmost bit is 1 for a negative number and 0 for a positive one. When you widen a negative value to more bits (e.g. from 32 to 64), more 1s are added on the left until you reach 64 bits.
I don't understand how it is converted to -7296.
It (the second value) is the decimal value of the signed two's complement bit pattern:
~'q' << 6
= (~'q') << 6
= (~113) << 6
= (~(0 0111 0001)) << 6      (shown here with 9 bits)
= (1 1000 1110) << 6         (this is -114 in two's complement)
= -114 * 64
= -7296
You may have forgotten to add some 0s in front of 113; after the complement they all become 1s, which is why a converter fed only the short pattern gives a different answer.
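The same arithmetic can be checked directly in code (a sketch; the intermediate variable names are mine, and it assumes 32-bit int with 'q' == 113 as in the exercise):

#include <bitset>
#include <iostream>

int main() {
    int q = 'q';               // 113, bit pattern 0111 0001 in the low byte
    int notq = ~q;             // -114: every one of the 32 bits is flipped
    int result = notq << 6;    // -114 shifted left by 6 bits, i.e. -114 * 64 = -7296
    std::cout << std::bitset<32>(result) << ' ' << result << '\n';
    // prints: 11111111111111111110001110000000 -7296
    return 0;
}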

Wrong output in C++: i=65530, but printing i gives -6 instead of 65530. Why?

I was creating a simple variable printing program and there is unexpected output.
The program gives output -6, but I would have expected 65530.
Why?
#include <iostream>

int main() {
    short int i = 65530;
    std::cout << i;
}
This has to do with the binary representation of the type you have used.
As a 16-bit binary number: 65530 = 1111 1111 1111 1010
But you have used short int, which is a signed type: its binary representation uses 1 bit for the sign and 15 bits for the number:
(1)111 1111 1111 1010
So why are there so many 1s in the representation?
Why don't the 15 bits look like a 6 in binary ((1)000 0000 0000 0110)?
It's because of the way that negative numbers are represented in binary.
To represent signed numbers in binary, a format is used which is called Two's complement.
So here is an example of this transformation of the number -6
Take the magnitude and write it in binary (6 in binary = 0000 0000 0000 0110).
Exchange 1s for 0s and 0s for 1s (1111 1111 1111 1001).
To get the result, add 1 to the previous step: (1111 1111 1111 1010).
As you can see, (unsigned) 65530 and (signed) -6 have exactly the same binary representation.
It's all in the interpretation of the bits.
That's why you have to be careful near the limits of what a type can represent.
In this case, to store this value you could:
Change the type to unsigned short int
Change to a larger type (see the sketch below)
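A minimal sketch of those two fixes (the exact types are a judgment call; anything that can hold 65530 works):

#include <iostream>

int main() {
    unsigned short u = 65530;   // unsigned 16-bit: holds 0..65535, so 65530 fits
    int i = 65530;              // wider signed type: int is 32 bits on typical platforms
    std::cout << u << ' ' << i << '\n';   // prints: 65530 65530
}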
You declared i as a short int, which is a 16-bit signed type. This means the largest number it can represent is actually 32767 (2^15 - 1) and the smallest is -32768 (-2^15); 65530 is outside that range. So if you had printed 32768 it would instead wrap around to -32768, 32769 would wrap to -32767, and so on. See other topics on binary representations of signed numbers to understand this process better.

1's complement using ~ in C/C++

I am using Visual Studio 2013.
Recently I tried the ~ operator for 1's complement:
int a = 10;
cout << ~a << endl;
Output is -11
But for
unsigned int a = 10;
cout << ~a << endl;
the output is 4294967296
I don't get why the output is -11 in the case of signed int.
Please help me with this confusion.
When you put number 10 into 32-bit signed or unsigned integer, you get
0000 0000 0000 0000 0000 0000 0000 1010
When you negate it, you get
1111 1111 1111 1111 1111 1111 1111 0101
These 32 bits mean 4294967285 as an unsigned integer, or -11 as a signed integer (your computer represents negative integers as Two's complement). They can also mean a 32-bit floating point number or four 8-bit characters.
Bits don't have any "absolute" meaning. They can represent anything, depending on how you "look" at them (which type they have).
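A small sketch making the "same bits, different interpretation" point concrete (assuming 32-bit int):

#include <iostream>

int main() {
    int a = 10;
    unsigned int u = ~a;   // the same 32 bits, 0xFFFFFFF5, viewed as unsigned
    std::cout << ~a << ' ' << u << ' ' << std::hex << u << '\n';
    // prints: -11 4294967285 fffffff5
    return 0;
}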
The ~ operator performs a one's complement on its argument, and it does not matter whether the argument is a signed or unsigned integer. It merely flips all the bits, so
0000 0000 0000 1010 (bin) / 10 (dec)
becomes
1111 1111 1111 0101 (bin)
(where, presumably, these numbers are 32 bits wide -- I omitted 16 more 0's and 1's.)
How will cout display the result? It looks at the original type. For a signed integer, the most significant bit is its sign. Thus, the result is going to be negative (the most significant bit of 10 is 0, so it becomes 1 after flipping). To display a negative number as a positive magnitude, you need the two's complement: invert all bits, then add 1. For example, -1, binary 111..111, inverts to 000..000; adding 1 gives 000..001, so it displays as -1.
Applying this to the one's complement of 10, you get 111..110101 -> inverting gives 000..001010, adding 1 gives 000..001011 (11). Result: -11.
For an unsigned number, cout doesn't do this (naturally), and so you get a large number: the largest possible integer minus the original number.
In memory, 4294967285 is stored in both cases (4294967296 is presumably a typo; it would need 33 bits). The meaning of this number depends on which signedness you use:
if it's signed, the number is -11.
if it's unsigned, then it's 4294967285.
These are different interpretations of the same bit pattern.
You can reinterpret it as unsigned by casting it, same result:
int a = 10;
cout << (unsigned int) ~a << endl;
Try this
unsigned int getOnesComplement(unsigned int number){
    unsigned int onesComplement = 1;
    if (number < 1)
        return onesComplement;
    // Highest shift amount, and a mask for the most significant bit.
    size_t size = sizeof(unsigned int) * 8 - 1;
    unsigned int oneShiftedToMSB = 1u << size;   // 1u avoids shifting a signed 1 into the sign bit
    unsigned int shiftedNumber = number;
    for (size_t bitsToBeShifted = 0; bitsToBeShifted < size; bitsToBeShifted++){
        shiftedNumber = number << bitsToBeShifted;
        // Once the number's highest set bit reaches the MSB, complement and shift back,
        // so only the bits up to the highest set bit are flipped.
        if (shiftedNumber & oneShiftedToMSB){
            onesComplement = ~shiftedNumber;
            onesComplement = onesComplement >> bitsToBeShifted;
            break;
        }
    }
    return onesComplement;
}
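As a usage note (my reading of the code above, not part of the original post): this helper flips only the bits up to the number's highest set bit, unlike ~, which flips all 32.

#include <iostream>
// getOnesComplement as defined above

int main() {
    std::cout << getOnesComplement(10) << '\n';   // prints 5: 1010 -> 0101
    std::cout << ~10u << '\n';                    // prints 4294967285: all 32 bits flip
    return 0;
}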

the idea behind unsigned integer [duplicate]

Possible Duplicate:
What happens if I assign a negative value to an unsigned variable?
I'm new to C++ and I want to know how to use unsigned types. For the unsigned int type, I know that it can take values from 0 to 4294967295. But when I initialize an unsigned int as follows:
unsigned int x = -10;
cout << x;
The output is 4294967286.
This output equals 2^32 - 10. So I want to learn what is happening in memory: what kind of operations are performed during this conversion? Thanks for your answers.
You're encountering wrap around behavior.
Unsigned types are cyclic (signed types, on the other hand, may or may not wrap; overflowing them is undefined behavior that you shouldn't rely on). That is to say, one less than the minimum possible value is the maximum possible value. You can demonstrate this yourself with the following snippet:
#include <iostream>
using namespace std;

int main()
{
    unsigned int x = 5;
    for (int i = 0; i < 10; ++i) cout << x-- << endl;
    return 0;
}
You'll notice that after reaching zero, the value of x jumps to 2^32-1, the maximum representable value. Subtracting further acts as expected.
When you subtract 1 from unsigned 0, the bit pattern changes in the following way:
0000 0000 0000 0000 0000 0000 0000 0000 // before (0)
1111 1111 1111 1111 1111 1111 1111 1111 // after (2^32 - 1)
When a negative value is converted to unsigned, it behaves like a positive value subtracted from zero in unsigned arithmetic. So (unsigned int) -10 will equal ((unsigned int) 0) - ((unsigned int) 10).
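A quick check of that identity (a sketch):

#include <iostream>

int main() {
    unsigned int a = -10;       // the conversion wraps modulo 2^32
    unsigned int b = 0u - 10u;  // unsigned subtraction wraps the same way
    std::cout << a << ' ' << b << '\n';   // prints: 4294967286 4294967286
    return 0;
}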
I like to think about it as an unsigned int being the lowest 32 bits of a higher-precision arbitrary value. Like this:
v imaginary high order bit
1 0000 0000 0000 0000 0000 0000 0000 0000 // before (2^32)
0 1111 1111 1111 1111 1111 1111 1111 1111 // after (2^32 - 1)
The behavior of the unsigned int in these overflow cases is exactly the same as the behavior of the low 8 bits of an unsigned int when you subtract 1 from 256. It makes more sense to look at an unsigned char (1 byte) like this, because the values 0 and 256 are equal if cast to unsigned char, since the limited precision discards the extra bits.
0 0000 0000 0000 0000 0000 0001 0000 0000 // before (256)
0 0000 0000 0000 0000 0000 0000 1111 1111 // after (255)
As others have pointed out, this is called modulo arithmetic. Using higher-precision values to help visualize the transitions when wrapping around works because you mask off the high-order bits. It doesn't matter what those bits were, so they can be anything; they just get discarded. Unsigned ints are values modulo 2^32, so any multiple of 2^32 equals zero in the space of an unsigned int. That's why I can get away with pretending there's an extra bit on the end.
Modulus operations have their own dedicated operator in case you need to compute them for numbers other than 2^32 in your programs, as used in this statement:
int forty_mod_twelve = 40 % 12;
// value is 4: 4 + n * 12 == 40 for some whole number n
Modulus operations on powers of two (like 2^32) simplify directly to masking off high order bits, and if you take a 64 bit integer and compute it modulo 2^32, the value will be exactly the same as if you had converted it to an unsigned int.
01011010 01011100 10000001 00001101 11111111 11111111 11111111 11111111 // before
00000000 00000000 00000000 00000000 11111111 11111111 11111111 11111111 // after
Programmers like to use this property to speed up programs, because it's easy to chop off some number of bits, but performing a modulus operation is much harder (it's about as hard as doing a division).
Does that make sense?
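Here is a sketch of that last equivalence, reusing the 64-bit pattern shown above: masking, taking the value modulo 2^32, and converting to a 32-bit unsigned type all give the same result.

#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t big = 0x5A5C810DFFFFFFFFull;                 // the 64-bit pattern from above
    std::uint32_t converted = static_cast<std::uint32_t>(big); // keep only the low 32 bits
    std::uint64_t masked = big & 0xFFFFFFFFull;                // same thing via masking
    std::uint64_t mod = big % 0x100000000ull;                  // same thing via modulo 2^32
    std::cout << converted << ' ' << masked << ' ' << mod << '\n';
    // all three print 4294967295
    return 0;
}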
This involves the standard integral conversions. Here's the applicable rule. We start with the type of the literal 10:
2.14.2 Integer literals [lex.icon]
An integer literal is a sequence of digits that has no period or exponent part. An integer literal may have
a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence
of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and
consists of a sequence of decimal digits. An octal integer literal (base eight) begins with the digit 0 and
consists of a sequence of octal digits. A hexadecimal integer literal (base sixteen) begins with 0x or 0X and
consists of a sequence of hexadecimal digits, which include the decimal digits and the letters a through f and A through F with decimal values ten through fifteen. [ Example: the number twelve can be written 12, 014, or 0XC. — end example ]
The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be
represented.
A table follows; the first type in the list is int, and the value fits, so the literal's type is int.
The unary minus operator is applied, which doesn't change the type. Then the following rule is applied:
4.7 Integral conversions [conv.integral]
A prvalue of an integer type can be converted to a prvalue of another integer type. A prvalue of an unscoped enumeration type can be converted to a prvalue of an integer type.
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
Instead of printing the value the way you are, print it in hexadecimal format (sorry, I forget how to do that with cout but I know it's possible). You'll see that the representation is the same for both values.
From your context, an integer is 32 bits (this is not always the case). When using a signed integer, the most significant bit is the sign, not part of the value. When using an unsigned integer, the most significant bit is part of the value.
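The hexadecimal output mentioned above can be produced with std::hex; a minimal sketch:

#include <iostream>

int main() {
    unsigned int x = -10;   // 4294967286
    int y = -10;
    std::cout << std::hex << x << ' ' << y << '\n';   // both print fffffff6: same bit pattern
    return 0;
}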

Weird output for bitwise NOT

I am trying to take one's complement of 0 to get 1 but I get 4294967295. Here is what I have done:
unsigned int x = 0;
unsigned int y = ~x;
cout << y;
My output is 4294967295, but I expected 1. Why is this so? By the way, I am doing this in C++.
Why do you expect 1? Bit-wise complement flips all the bits.
00000000000000000000000000000000 = 0
|
bitwise NOT
|
v
11111111111111111111111111111111 = 4294967295
Perhaps you are thinking of a logical NOT. In C++ this is written as !x.
You have to look at this in binary to understand exactly what is happening.
unsigned int x = 0, is 00000000 00000000 00000000 00000000 in memory.
The ~x statement flips all bits, meaning the above turns into:
11111111 11111111 11111111 11111111
which converts to 4294967295 in decimal form.
XOR will allow you to flip only certain bits. If you only want to flip the least significant bit, use x ^ 1 instead.
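A sketch of that XOR suggestion:

#include <iostream>

int main() {
    unsigned int x = 0;
    std::cout << (x ^ 1) << '\n';   // prints 1: only the least significant bit is flipped
    std::cout << ~x << '\n';        // prints 4294967295: every bit of the 32-bit value is flipped
    return 0;
}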
Where did you get the expectation of 1 from?
Your understanding of bitwise operations is clearly lacking; it would be prudent to work through them before posting here...
You're not confusing it with !, which is a logical NOT, are you?
A ~ bitwise complement, or bitwise NOT, operation flips all the bits: each 1 becomes 0 and vice versa. So, for example, a 1 is
00000000 00000000 00000000 00000001
doing a ~ bitwise NOT on that flips it to
11111111 11111111 11111111 11111110
which gives you one less than the maximum value of the unsigned integer type on a 32-bit system.
An integer is more than just 1 bit (typically it's 4 bytes, or 32 bits). By NOTing it, you're flipping every bit, so in this case 00000... becomes 11111...
~ flips all of the bits in the input. Your input is an unsigned int, which has 32 bits, all of which are 0. Flipping each of those 0-bits gives you 32 1-bits instead, which is binary for that large number.
If you only want to flip the least significant bit, you can use y = x ^ 1 - that is, use XOR instead.
You can use
unsigned int y = !x;
to get y = 1.