Bitwise NOT with boolean variable [duplicate] - c++

This question already has answers here:
tilde operator returning -1, -2 instead of 0, 1 respectively
(2 answers)
Closed 6 years ago.
I'm a beginner at using bitwise operators and the bool type. I might be wrong, but I thought the bool type is represented by 1 bit and can take values from {0, 1}. So I tried the NOT (~) operator with such a variable and the results look weird to me.
eg. for
bool x = 0;
cout << (~x);
I get -1 (I expected 1). Can you please tell me where I'm wrong, and why only the ! operator reverses the value (from 0 to 1 and from 1 to 0)?

Most processors do not have a 1-bit-wide general-purpose register, and in C++ a bool used in an arithmetic expression is first promoted to int (typically 32 bits, even on 64-bit Intel and ARM machines). When you negate something that is all zeros, you get all ones. In two's complement that evaluates to -1 in signed decimal. Long story short: in the expression ~x your bool is really an int, and ~0 is -1.

The ! operator is a logical operator - hence 0 (false) is negated to 1 (true).
The ~ operator is a bitwise operator - hence every bit is negated. A bool, while notionally a single bit, is promoted to int in such expressions. Hence you are really negating 0...000, which gives 1...111, which is -1 (see two's complement).

The bool value x is implicitly converted to an int when used in the expression ~x. And the vast majority of computers use two's complement representation of signed integers, in which ~0 is equal to -1.
The ! operator is defined so that !x has type bool rather than int, so this issue doesn't occur.


Assigning out-of-range (negative) values to unsigned integral types [duplicate]

This question already has answers here:
What happens if I assign a negative value to an unsigned variable?
(7 answers)
Closed 6 years ago.
Okay, this maybe a dumb question. But, here it goes.
If I assign a negative value to an unsigned integral type in C++, like unsigned short a = -1;
the value of a in the above example is set to 65535 (2^16 - 1). I know that if I assign an out-of-range value to an unsigned integer, the stored value is the number reduced modulo one more than the maximum storable value (65536 in this case). Can you please explain the math being worked out behind the scenes?
How is (-1) modulo 65536 = 65535? Shouldn't it be -1 itself?
The difference is in how the most significant bit is interpreted: in a signed type it carries the sign, while in an unsigned type it is part of the value. The modular arithmetic works out because the result of the reduction is defined to be non-negative: -1 is brought into range by adding 65536 once, giving 65535. At the bit level nothing changes - the two's complement pattern for -1 is all sixteen bits set, and read as unsigned that same pattern is 65535.

Is it possible to send decimal value(8 bit) in bool ? If Yes then How? [duplicate]

This question already has answers here:
Can a bool variable store more than 0x01?
(6 answers)
Closed 6 years ago.
bool is 8 bits long
As the above post describes, bool is 8 bits long.
So is it possible to store the value 2 in a bool variable?
i.e.
0000 0010 -> 2
(decimal representation)
eg: bool x;
How can I store this '2' in the bool variable 'x' above?
Thanks
Not in C++, no. A bool can hold true or false. There is no way to store 2 in a bool without first invoking undefined behaviour. Once you have invoked undefined behaviour, anything can happen. (Including what you expected except when demo'ing to important clients).
Also, a bool is not necessarily 8 bits long. It must be at least as large as a char (because sizeof(bool) must be at least 1), and the limits on the range of values which an unsigned char can hold mean that it must be at least 8 bits. OTOH, there is nothing to stop an implementation using a bool which is larger than char, and there actually are implementations where char is 32 or 64 bits (DSP chips in the main).
bool is 8 bits long
Not necessarily true. All the standards say is that it has to be capable of holding true and false: its sizeof is implementation-defined. You can deduce that it must be at least 1, since sizeof cannot be zero or pointer arithmetic on an array of bools would break.
So don't attempt to send the value 2 - you're bound to render the behaviour of your program undefined.

Right shift operator for signed numbers [duplicate]

This question already has answers here:
Right shifting negative numbers in C
(6 answers)
How are negative numbers represented in 32-bit signed integer?
(7 answers)
Closed 8 years ago.
I am reading about shift operators in C.
Right shifting n bits divides by 2 raised to n. Shifting signed values may fail because for negative values the result never gets past -1: -5 >> 3 is -1 and not 0 like -5/8.
My question is why shifting signed values may fail?
Why value of -5 >> 3 is -1 and not zero?
Kindly explain.
It is simply implementation-defined:
From 5.8 Shift operators
The operands shall be of integral or unscoped enumeration type and
integral promotions are performed. The type of the result is that of
the promoted left operand. The behavior is undefined if the right
operand is negative, or greater than or equal to the length in bits of
the promoted left operand
[...]
If E1 has a signed type and a negative value, the resulting value is implementation-defined.
Shifting with signed integers is implementation-defined, but if the architecture you're using has an arithmetic shift, you can pretty reliably guess that the compiler will use it.
It's because of how negative numbers are stored in the computer. It's called two's complement. To switch the sign of a two's complement number, you NOT its bits and add 1. For example, with an 8-bit integer 00011010 (26): first you NOT it to get 11100101, then you add 1 and get 11100110 (-26). The problem comes from that most significant bit being set. If the shift inserted 0s at the left, the number would become positive, but if it inserts 1s, then the lowest possible result is 11111111, which is -1. This is how arithmetic shifts work: when you shift, the computer inserts copies of the leftmost bit.
So to be explicit, this is what is happening (using 8 bit integers because it's easier and the size is arbitrary in this case): 11111011 gets shifted 3 to the right (so 011 goes away) and since the most significant bit is set 3 1s are inserted at the top, so you get 11111111 which is -1.

What does ~ mean in C++?

Specifically, could you tell me what this line of code does:
int var1 = (var2 + 7) & ~7;
Thanks
It's bitwise negation. This means that it performs the binary NOT operator on every bit of a number. For example:
int x = 15; // Binary: 00000000 00000000 00000000 00001111
int y = ~x; // Binary: 11111111 11111111 11111111 11110000
When coupled with the & operator it is used for clearing bits. So, in your example it means that the last 3 bits of the result of var2+7 are set to zeroes.
As noted in the comments, it's also used to denote destructors, but that's not the case in your example.
This code rounds var2 up to the closest multiple of 8 and stores the result in var1. & ~7 sets the last 3 bits to 0, rounding down to 8*n; adding 7 first turns that into rounding up.
It's a bitwise NOT. Not to be confused with logical not (which is !), which flips the logical value (true to false and vice versa). This operator flips every bit in a variable.
7 in binary is 00000111, so ~7 is 11111000 (assuming an eight-bit byte). The code author is using it for bit masking.
The effect of the code, as noted, is to round a value to the next higher multiple of eight. My preferred formulation would be "var1 = (var2 | 7)+1;" but to understand the expression as written, it's most helpful to understand it from the outside in.
Although "&" and "~" are separate operators, with different prioritization rules, the concept of "a = b & ~c;" is a useful one which in a very real sense deserves its own operator (it would allow more sensible integer promotion rules, among other things). Basically, "a = b & ~c;" serves to cancel out any bits in 'b' that are also in 'c' (if 'b' is long and 'c' isn't, because of integer promotion rules, it may cancel higher bits as well). If 'c' is 2^N-1, the expression will cancel out the bottom N bits, which is equivalent to rounding down to the next multiple of 2^N.
The expression as written adds 7 to var2 before rounding the result down to the next multiple of 8. If var2 was a multiple of 8, adding 7 won't quite reach the next higher multiple of 8, but otherwise it will. Thus, the expression as a whole will round up to the next multiple of 8.
Incidentally, my preferred formulation rounds the number up to the next higher value that's just short of a multiple of 8, and then bumps it up to the next multiple. It avoids the repetition of the magic number "7", and in some instruction sets the approach will save code.

What does ~0 mean in this code?

What's the meaning of ~0 in this code?
Can somebody analyze this code for me?
unsigned int Order(unsigned int maxPeriod = ~0) const
{
    Point r = *this;
    unsigned int n = 0;
    while ( r.x_ != 0 && r.y_ != 0 )
    {
        ++n;
        r += *this;
        if ( n > maxPeriod ) break;
    }
    return n;
}
~0 is the bitwise complement of 0, which is a number with all bits filled. For an unsigned 32-bit int, that's 0xffffffff. The exact number of fs will depend on the size of the value that you assign ~0 to.
It's the one's complement, which inverts all bits.
~ 0101 => 1010
~ 0000 => 1111
~ 1111 => 0000
As others have mentioned, the ~ operator performs bitwise complement. However, the value that results from applying it to a signed quantity depends on the signed-integer representation and is not fully specified by the standard.
In particular, the value of ~0 need not be -1, which is probably the value intended. Setting the default argument to
unsigned int maxPeriod = -1
would make maxPeriod contain the highest possible value (signed-to-unsigned conversion is defined as reduction modulo 2^n, where n is the number of bits in the representation of the given unsigned type).
Also note that default arguments are not valid in C.
It's a binary complement function.
Basically it means flip each bit.
It is the bitwise complement of 0 which would be, in this example, an int with all the bits set to 1. If sizeof(int) is 4, then the number is 0xffffffff.
Basically, it's saying that maxPeriod has a default value of UINT_MAX. Rather than writing it as UINT_MAX, the author used his knowledge of complements to calculate the value.
If you want to make the code a bit more readable in the future, include
#include <climits>
and change the declaration to read
unsigned int Order(unsigned int maxPeriod = UINT_MAX) const
Now to explain why ~0 is UINT_MAX. We are dealing with an int, in which 0 is represented with all zero bits (00000000). Adding one gives (00000001), adding one more gives (00000010), and one more gives (00000011). One more addition gives (00000100) because the 1 carries.
For unsigned ints, if you repeat the process ad infinitum, eventually you have all one bits (11111111), and adding one more wraps all the bits back to zero. This means that all one bits in an unsigned number is the maximum that the data type (unsigned int in your case) can hold.
The "~" operation flips all bits from 0 to 1 or 1 to 0, so flipping a zero integer (which has all zero bits) effectively gives you UINT_MAX. So the previous coder basically opted to compute UINT_MAX instead of using the system-defined constant in <climits>.
In the example it is probably an attempt to generate the UINT_MAX value. The technique is possibly flawed for reasons already stated.
The expression does, however, have a legitimate use: generating a bit mask with all bits set from a literal constant that is type-width independent; but that is not how it is being used in your example.
As others have said, ~ is the bitwise complement operator (sometimes also referred to as bitwise not). It's a unary operator which means that it takes a single input.
Bitwise operators treat the input as a bit pattern and perform their respective operations on each individual bit then return the resulting pattern. Applying the ~ operator to the bit pattern will negate each bit (each zero becomes a one, each one becomes a zero).
In the example you gave, the bit representation of the integer 0 is all zeros. Thus, ~0 will produce a bit pattern of all ones. Even though 0 is an int, it is the bit pattern ~0 that is assigned to maxPeriod (not the int value that would be represented by said bit pattern). Since maxPeriod is an unsigned int, it is assigned the unsigned int value represented by ~0 (a pattern of all ones), which is in fact the highest value that an unsigned int can store before wrapping around back to 0.