I want to be able to access the sign bit of a number in C++. My current code looks something like this:
int sign_bit = number >> 31;
That appears to work, giving me 0 for positive numbers and -1 for negative numbers. However, I don't see how I get -1 for negative numbers: if 12 is
0000 0000 0000 0000 0000 0000 0000 1100
then -12 is
1111 1111 1111 1111 1111 1111 1111 0100
and shifting it 31 bits would make
0000 0000 0000 0000 0000 0000 0000 0001
which is 1, not -1, so why do I get -1 when I shift it?
What about this?
int sign = number < 0;
The result of right-shifting a negative number in C++ is implementation-defined (before C++20, which defines it as arithmetic). So no one can say what right-shifting your -12 yields on an arbitrary platform. You think it should produce the pattern above (1), but it can just as easily produce an all-ones pattern, which is -1. The latter is called a sign-extending (arithmetic) shift: the sign bit is copied into the vacated high bits and never shifted out of its place.
If all you are interested in is the value of the sign bit, then stop wasting time on bitwise operations like shifts. Just compare your number to 0 and see whether it is negative or not.
Because you are shifting a signed integer. Cast the integer to unsigned first:
int sign_bit = ((unsigned int)number) >> 31;
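If you would rather not hard-code the width, here is a width-independent sketch of the same idea (it assumes int has no padding bits, so the width is sizeof(int) * CHAR_BIT):

#include <climits>

int sign_bit = (unsigned int)number >> (sizeof(int) * CHAR_BIT - 1);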
You can use the cmath library:
#include <cmath>
and use it like
std::cout << std::signbit(num);
This function takes a floating-point value as input and returns a bool:
true for negative
false for positive
for instance
std::cout << std::signbit(1);
will give you a 0 as output (false)
but while using this function you have to be careful about zero
std::cout << std::signbit(-0.0); // 1 (true)
std::cout << std::signbit(+0.0); // 0 (false)
The output of these two lines is not the same.
To avoid this problem you can use:
float x = +0.01;
std::cout << (x >= 0 ? (x == 0 ? 0 : 1) : -1);
which gives:
0 for zero
1 for positive
-1 for negative
(and treats -0.0 the same as +0.0, both yielding 0).
For integers, test number < 0.
For floating-point numbers, you may also want to take into account the fact that zero has a sign: there exists a -0.0 which is distinct from +0.0. To distinguish the two, you want to use std::signbit.
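For instance, an ordinary comparison cannot see the sign of a negative zero, while std::signbit can; a minimal sketch (assuming IEEE-754 floating point):

#include <cmath>
#include <iostream>

int main()
{
    double z = -0.0;
    std::cout << (z < 0) << '\n';         // 0: -0.0 is not less than 0
    std::cout << std::signbit(z) << '\n'; // 1: but its sign bit is set
}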
The >> operator is performing an arithmetic shift here, which retains the sign of the number. (Strictly, right-shifting a negative signed value was implementation-defined before C++20; an arithmetic shift is what virtually all implementations do.)
My solution that supports +/-0:

#include <cmath>

bool signbit(double x)
{
    // negative x (including -0.0) makes 1.0/x differ from 1.0/fabs(x);
    // note it misreports -infinity, since 1.0/-inf == -0.0 compares equal to +0.0
    return 1.0 / x != 1.0 / std::fabs(x);
}
You could do the following:
int t1 = -12;
unsigned int t2 = t1;
t2 = t2 >> 31;
cout << t2;

This will work on any implementation where unsigned int is 32 bits: the conversion to unsigned is well-defined, and shifting an unsigned value is a logical shift.
#include <cstdint>
#include <cstring>

bool signbit(double x)
{
    // memcpy copies the representation; casting double to an integer
    // type would convert the numeric value instead of the bit pattern
    std::uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    return (bits & 0x8000000000000000ULL) != 0;
}

bool signbit(float x)
{
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    return (bits & 0x80000000U) != 0;
}
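Since C++20 you could also use std::bit_cast from <bit> to extract the representation in one line.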
Following situation:
I have macros for storing the complement of a variable together with its original value within a structure. With another macro I want to check whether the original value is equal to the complement of the stored complement value. But strangely I do not get the results I expect.
Simplifying the operation, this results in following situation:
#include <stdbool.h>
#include <stdint.h>

#define CHECKVAR(var__, compl__) ((var__ ^ compl__) == -1)

int main()
{
    uint8_t var;
    uint8_t complementvar;
    var = 3;
    complementvar = ~var;
    bool checkOK = CHECKVAR(var, complementvar); /* I expect all bits to be set, making it equal to -1, but instead I get false */
    return 0;
}
My expectation would be (e.g. on a 32-bit system):

1. After the complement operation, complementvar has the value 0xfffffffc (after the integer promotion to int caused by the ~ operator, and assigning the lvalue to complementvar by implicitly casting it down to uint8_t).

Now to the checking macro:

2a) The bitwise operation leads to an integer promotion on both sides?:
var:           0000 0000 0000 0000 0000 0000 0000 0011
complementvar: 1111 1111 1111 1111 1111 1111 1111 1100

2b) XOR-ing var and complementvar should then result in
1111 1111 1111 1111 1111 1111 1111 1111

2c) Checking the result of the XOR operation against -1 (represented as an int):
1111 1111 1111 1111 1111 1111 1111 1111 == 1111 1111 1111 1111 1111 1111 1111 1111
should result in true, but instead I always receive false, because the result of the XOR operation (strangely for me) is 0x000000ff.
The compiler I am using is MSVC++11, but the solution should be as compiler-independent as possible. And since I have to be portable too: would the results be different with a C compiler?
after the complement operation complementvar has the value 0xfffffffc
No. Look at this code:
uint8_t complementvar;
complementvar=~var;
~var will evaluate to the value you expect, but complementvar has type uint8_t, so the conversion leads to just 0xfc.
In your checking step, this is promoted again to int, but as uint8_t is an unsigned type, no sign extension happens here; you just get zero bits prepended. This explains the result you get from XOR-ing (^).
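One way to make the check behave as intended is to force the comparison back into the 8-bit domain, so the promoted high bits drop out; a minimal sketch, keeping the question's names:

#include <stdbool.h>
#include <stdint.h>

#define CHECKVAR(var__, compl__) ((uint8_t)((var__) ^ (compl__)) == 0xffu)

int main()
{
    uint8_t var = 3;
    uint8_t complementvar = ~var;                /* truncates to 0xfc */
    bool checkOK = CHECKVAR(var, complementvar); /* true: 0x03 ^ 0xfc == 0xff */
    return 0;
}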
Consider the following Example:
First Case:
short x=255;
x = (x<<8)>>8;
cout<<x<<endl;
Second Case:
short x=255;
x = x<<8;
x = x>>8;
cout<<x<<endl;
The output in the first case is 255, whereas in the second case it is -1. The -1 output does make sense, as C++ does an arithmetic right shift here. Here are the intermediate values of x that produce -1:
x:    0000 0000 1111 1111
x<<8: 1111 1111 0000 0000
x>>8: 1111 1111 1111 1111
Why doesn't the same mechanism happen in the first case?
The difference is a result of two factors.
The C++ standard does not specify the exact sizes of the integral types, only their minimum ranges. On your platform, a short is a 16-bit value, and an int is at least a 32-bit value.
The second factor is two's complement arithmetic.
In your first example, the short value is promoted to an int, which is at least 32 bits, so both the left and the right shift operate on an int before the result is converted back to a short.
In your second example, after the first left shift the resulting value is converted back to a short, and due to two's complement truncation it ends up negative. The right shift then sign-extends the negative value, giving the final result of -1.
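Making the hidden conversions explicit, a minimal sketch (assuming a 16-bit short and a 32-bit int):

short x = 255;
int promoted = x << 8;             // 0x0000ff00: still positive as an int
short truncated = (short)promoted; // 0xff00: -256 as a short
int result = truncated >> 8;       // arithmetic shift sign-extends: -1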
What you just observed is sign extension:
Sign extension is the operation, in computer arithmetic, of increasing the number of bits of a binary number while preserving the number's sign (positive/negative) and value. This is done by appending digits to the most significant side of the number, following a procedure dependent on the particular signed number representation used.
For example, if six bits are used to represent the number "00 1010" (decimal positive 10) and the sign extend operation increases the word length to 16 bits, then the new representation is simply "0000 0000 0000 1010". Thus, both the value and the fact that the value was positive are maintained.
If ten bits are used to represent the value "11 1111 0001" (decimal negative 15) using two's complement, and this is sign extended to 16 bits, the new representation is "1111 1111 1111 0001". Thus, by padding the left side with ones, the negative sign and the value of the original number are maintained.
You left-shift all the way to the point where your short becomes negative, and when you then shift back, you get the sign extension.
This doesn't happen in the first case, as the shift isn't applied to a short. It's applied to the promoted value of x, which is the default integral type (an int). It only gets converted back to short after it has already been shifted back:
on the stack: 0000 0000 0000 0000 0000 0000 1111 1111
<<8
on the stack: 0000 0000 0000 0000 1111 1111 0000 0000
>>8
on the stack: 0000 0000 0000 0000 0000 0000 1111 1111
convert to short: 0000 0000 1111 1111
Why is the result 4294967292 instead of -4 even when both integer types are signed?
#include <iostream>

void f(int & i)
{
    i = -4;
}

int main()
{
    long long i = 0;
    f(reinterpret_cast<int &>(i));
    std::cout << i << std::endl;
    return 0;
}
long long seems to be a 64-bit and int a 32-bit number on your architecture. You're using a 32-bit integer reference, so you only modified the less significant half (you're evidently compiling on an ordinary x86 machine, which uses little-endian byte order and two's complement for signed integers).
The sign bit, as well as the whole more significant half, are all 0s (as initialized). They would have to be 1s to print the same negative value:

0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1100
^                                       ^
sign bit of long long                   sign bit of int
reinterpret_cast only guarantees that you get your int back with another reinterpret_cast, so either print it as an int, or give f a long long & parameter.
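A sketch of the second option, using the question's names:

#include <iostream>

void f(long long & i)
{
    i = -4;
}

int main()
{
    long long i = 0;
    f(i);                        // the whole 64-bit object is written
    std::cout << i << std::endl; // prints -4
    return 0;
}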
I am using Visual Studio 2013.
Recently I tried the ~ operator for 1's complement:
int a = 10;
cout << ~a << endl;
Output is -11
But for
unsigned int a = 10;
cout << ~a << endl;
the output is 4294967296
I don't get why the output is -11 in the case of signed int.
Please help me with this confusion.
When you put number 10 into 32-bit signed or unsigned integer, you get
0000 0000 0000 0000 0000 0000 0000 1010
When you negate it, you get
1111 1111 1111 1111 1111 1111 1111 0101
These 32 bits mean 4294967285 as an unsigned integer, or -11 as a signed integer (your computer represents negative integers in two's complement). They can also mean a 32-bit floating-point number or four 8-bit characters.
Bits don't have any "absolute" meaning. They can represent anything, depending on how you "look" at them (which type they have).
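A minimal sketch of the two readings of the same bits (assuming a 32-bit int):

#include <cstdint>
#include <iostream>

int main()
{
    std::int32_t s = ~10;                            // -11
    std::uint32_t u = static_cast<std::uint32_t>(s); // same bit pattern
    std::cout << s << ' ' << u << '\n';              // -11 4294967285
}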
The ~ operator performs a one's complement of its argument, and it does not matter whether the argument is a signed or unsigned integer. It merely flips all the bits, so
0000 0000 0000 1010 (bin) / 10 (dec)
becomes
1111 1111 1111 0101 (bin)
(where, presumably, these numbers are 32 bits wide -- I omitted 16 more 0's and 1's.)
How will cout display the result? It looks at the original type. For a signed integer, the most significant bit is the sign. Here the result is negative, because the most significant bit of 10 is 0 and ~ flipped it to 1. To display a negative number, cout shows its magnitude after a minus sign; the magnitude is obtained by taking the two's complement: invert all bits, then add 1. For example, -1 is binary 111..111; inverting gives 000..000, and adding 1 gives 000..001, so it displays as -1.
Applying this to the one's complement of 10, you get 111..110101: inverting gives 000..001010, and adding 1 gives 000..001011, which is 11. Result: -11.
For an unsigned number, cout doesn't do this (naturally), and so you get a large number: the largest possible unsigned value minus the original number.
In memory, 4294967285 is stored in both cases (your 4294967296 is presumably a typo; that value would need 33 bits). The meaning of this bit pattern depends on which signedness you use:
if it's signed, the number is -11;
if it's unsigned, it's 4294967285;
two different interpretations of the same bits.
You can reinterpret it as unsigned by casting it, same result:
int a = 10;
cout << (unsigned int) ~a << endl;
Try this
#include <cstddef>

unsigned int getOnesComplement(unsigned int number)
{
    unsigned int onesComplement = 1;
    if (number < 1)
        return onesComplement;
    std::size_t size = sizeof(unsigned int) * 8 - 1;
    unsigned int oneShiftedToMSB = 1u << size; // 1u avoids signed overflow in 1 << 31
    unsigned int shiftedNumber = number;
    for (std::size_t bitsToBeShifted = 0; bitsToBeShifted < size; bitsToBeShifted++) {
        shiftedNumber = number << bitsToBeShifted;
        if (shiftedNumber & oneShiftedToMSB) { // the highest set bit has reached the MSB
            onesComplement = ~shiftedNumber;   // flip, then shift back down
            onesComplement = onesComplement >> bitsToBeShifted;
            break;
        }
    }
    return onesComplement;
}
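Note that, unlike ~, this flips only the bits up to the number's highest set bit: getOnesComplement(10) (binary 1010) returns 5 (binary 0101) rather than 4294967285.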
I know that the ~ operator performs a NOT operation. But I could not make out the output of the following program (which is -65536). What exactly is happening?
#include <stdio.h>

int main(void) {
    int b = 0xFFFF;
    printf("%d", ~b);
    return 0;
}
Assuming 32-bit integers:

int b = 0xFFFF; // => b = 0x0000FFFF
// ~b = 0xFFFF0000

The top bit is now set. Assuming two's complement, this means we have a negative number. Inverting the bits and adding one gives 0x00010000, or 65536, so the value is -65536.
When you assign the 16-bit value 0xffff to the 32-bit integer b, the variable b actually becomes 0x0000ffff. This means when you do the bitwise complement it becomes 0xffff0000 which is the same as decimal -65536.
The ~ operator in C++ is the bitwise NOT operator, also called the bitwise complement. It flips every bit of your integer.
For instance, if you had

int b = 8;
// b in binary (low bits):  0000 1000
// ~b in binary (low bits): 1111 0111 -- every bit flips, including all the higher ones, giving -9

This flips all the bits that represent the initial integer value provided, not just the low ones.
It is doing a bitwise complement, this output may help you understand what is going on better:
#include <iomanip>
#include <iostream>

std::cout << std::hex << " b: " << std::setfill('0') << std::setw(8) << b
          << " ~b: " << (~b) << " -65536: " << -65536 << std::endl;
the result that I receive is as follows:
b: 0000ffff ~b: ffff0000 -65536: ffff0000
So we are setting the lower 16 bits to 1 which gives us 0000ffff and then we do a complement which will set the lower 16 bits to 0 and the upper 16 bits to 1 which gives us ffff0000 which is equal to -65536 in decimal.
In this case since we are working with bitwise operations, examining the data in hex gives us some insight into what is going on.
The result depends on how signed integers are represented on your platform. The most common representation is a 32-bit value using two's complement arithmetic for negative values. That is, a negative value -x is represented by the same bit pattern as the unsigned value 2^32 - x.
In this case, the original bit pattern has the lower 16 bits set:
0x0000ffff
The bitwise negation clears those bits and sets the upper 16 bits:
0xffff0000
Interpreting this as a negative number gives the value -65536.
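Numerically: 0xffff0000 is 4294901760 as an unsigned value, and 4294901760 = 2^32 - 65536, so the represented value is -65536.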
Usually, you'll want to use unsigned types when you're messing around with bitwise arithmetic, to avoid this kind of confusion.
Your comment:
If it is NOT of 'b' .. then output should be 0 but why -65536
Suggests that you are expecting the result of:
uint32_t x = 0xFFFF;
uint32_t y = ~x;
to be 0.
That would be true for a logical not operation, such as:
uint32_t x = 0xFFFF;
uint32_t y = !x;
...but operator~ is not a logical NOT, but a bitwise not. There is a big difference.
A logical NOT returns 0 for non-0 values (false for true values), and 1 for 0 values.
But a bitwise NOT reverses each bit in a given value. So a bitwise NOT of 0x0F:

0x0F:  0000 1111
~0x0F: 1111 0000

is not zero, but 0xF0 (and all the higher bits flip too).
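A minimal sketch contrasting the two operators (assuming a 32-bit unsigned int):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t x = 0xFFFF;
    std::cout << std::hex << ~x << '\n'; // ffff0000: bitwise NOT flips every bit
    std::cout << !x << '\n';             // 0: logical NOT, since x is non-zero
}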
A bitwise NOT operation turns every 1 in the integer into a 0, and every 0 into a 1.
So hexadecimal 0xFFFF is binary 1111 1111 1111 1111 (each hexadecimal digit is 4 bits, and F, being 15, is all 1s in its four bits).
You set a 32-bit integer to that, which means it's now:
0000 0000 0000 0000 1111 1111 1111 1111
You then NOT it, which means it's:
1111 1111 1111 1111 0000 0000 0000 0000
The topmost bit is the sign bit (it says whether the number is positive or negative), so this gives a negative number.