The situation is as follows:
I have macros for storing the complement of a variable together with its original value within a structure. With another macro I want to check whether the original value is equal to the complement of the stored complement value. But strangely, I do not get the result I expect.
Simplified, the operation boils down to this:
#include <stdbool.h>
#include <stdint.h>

#define CHECKVAR(var__, compl__) ((var__^compl__)==-1);

int main()
{
    uint8_t var;
    uint8_t complementvar;
    var = 3;
    complementvar = ~var;
    bool checkOK = CHECKVAR(var, complementvar); /* -> I expect all bits to be set, making the result equal to -1, but instead I get "false" */
    return 0;
}
My expectation would be (e.g. on a 32-bit system):

1. After the complement operation, complementvar has the value 0xfffffffc (the ~ operator causes integer promotion to int, and the result is implicitly converted back down to uint8_t on assignment).

2. Now to the checking macro: the bitwise operation leads to an integer promotion on both sides?

2a) var:           0000 0000 0000 0000 0000 0000 0000 0011
    complementvar: 1111 1111 1111 1111 1111 1111 1111 1100

2b) XOR-ing var and complementvar should then result in
    1111 1111 1111 1111 1111 1111 1111 1111

3. Comparing the result of the XOR operation with -1 (represented as an int), 1111 1111 1111 1111 1111 1111 1111 1111 == 1111 1111 1111 1111 1111 1111 1111 1111 should result in true, but instead I always receive false, because the result of the XOR operation (strangely for me) is 0x000000ff?
Which compiler am I using? MSVC++ 11, but the solution should be as compiler-independent as possible.
Since the code also has to be portable, would the results be different with a C compiler?
"after the complement operation complementvar has the value 0xfffffffc"

No. Look at that code:
uint8_t complementvar;
complementvar=~var;
~var will evaluate to the value you expect, but complementvar has type uint8_t, so the conversion keeps only the low byte, 0xfc.
In your checking step, this is promoted again to int, but since uint8_t is an unsigned type, no sign extension happens here; you just get zero bits added. This explains the result you get from the XOR (^): 0x03 ^ 0xfc is 0xff, which is not -1.
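A minimal sketch of the conversions involved (the widths assume a 32-bit int; the final check, which truncates the XOR result back to 8 bits before comparing, is one possible fix, not part of the original macro):

#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t var = 3;
    uint8_t complementvar = ~var;   // ~3 is the int 0xfffffffc; narrowing to uint8_t keeps 0xfc

    int x = var ^ complementvar;    // both operands are promoted to int: 0x03 ^ 0xfc == 0xff
    std::printf("%#x\n", x);        // prints 0xff, which is not equal to -1

    // Truncating back to the original width makes the check behave as intended:
    bool ok = (uint8_t)(var ^ complementvar) == 0xff;
    std::printf("%d\n", ok);        // prints 1
    return 0;
}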
I was writing a simple variable-printing program and got unexpected output.
The program prints -6, but I would have expected 65530.
Why?
#include <iostream>

int main() {
    short int i = 65530;
    std::cout << i;
}
This has to do with the binary representation of the type you have used.
As a 16-bit binary number: 65530 = 1111 1111 1111 1010
But you have used short int, which is a signed type: in its binary representation, 1 bit acts as the sign and the remaining 15 bits hold the value:
(1)111 1111 1111 1010
So why are there so many 1s in the representation?
Why don't the 15 bits look like a 6 in binary ((1)000 0000 0000 0110)?
It's because of the way negative numbers are represented in binary: a format called two's complement.
Here is an example of this transformation for the number -6:

1. Take the number and convert it to binary: 6 = 0000 0000 0000 0110
2. Exchange 1s for 0s and 0s for 1s: 1111 1111 1111 1001
3. Add 1 to the previous result: 1111 1111 1111 1010

As you can see, the binary representation of (unsigned) 65530 is exactly the same as that of (signed) -6.
It's all in the interpretation of the bits.
That's why you have to be careful when approaching the maximum values of a type.
In this case, to store this value you could:
Change the type to unsigned short int
Change to a larger type.
Both options are shown in the sketch below.
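A short illustration of the reinterpretation and of both fixes (assuming a 16-bit short and a 32-bit int; the out-of-range conversion to short wraps on two's-complement implementations, and is guaranteed to do so since C++20):

#include <iostream>

int main() {
    short a = (short)65530;     // bit pattern 1111 1111 1111 1010, reads as -6 when signed
    unsigned short b = 65530;   // same bit pattern, reads as 65530 when unsigned
    int c = 65530;              // a larger type simply holds the value
    std::cout << a << ' ' << b << ' ' << c << '\n';   // prints: -6 65530 65530
}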
You declared i as a short int, which is a 16-bit signed type on your platform. This means that the highest number it can represent is actually 32767 (2^15-1) and the smallest is -32768 (-2^15). 65530 exceeds that range, so the value wraps around. Likewise, if you had tried to print 32768, it would come out as -32768, 32769 as -32767, and so on. See other material on binary representations of signed numbers to understand this process better.
Consider the following example:
First Case:
short x = 255;
x = (x << 8) >> 8;
cout << x << endl;
Second Case:
short x = 255;
x = x << 8;
x = x >> 8;
cout << x << endl;
The output in the first case is 255, whereas in the second case it is -1. The -1 does make sense, as C++ does an arithmetic right shift here. These are the intermediate values of x that lead to the -1:
x:    0000 0000 1111 1111
x<<8: 1111 1111 0000 0000
x>>8: 1111 1111 1111 1111
Why doesn't the same mechanism happen in the first case?
The difference is a result of two factors.
The C++ standard does not fix the exact sizes of the integral types; it only specifies a minimum size for each of them. On your platform, a short is a 16-bit value and an int is at least a 32-bit value.
The second factor is two's complement arithmetic.
In your first example, the short value is promoted to an int, which is at least 32 bits wide, so both the left and the right shift operate on an int before the result is converted back to a short.
In your second example, after the first left shift the resulting value is converted back to a short, and due to two's complement arithmetic it ends up being a negative value. The right shift then sign-extends that negative value, giving the final result of -1.
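A sketch of the two cases with the implicit conversions written out explicitly (it assumes a 16-bit short, a 32-bit int, and an arithmetic right shift for signed types, which is implementation-defined):

#include <iostream>

int main() {
    short x = 255;

    // First case: the whole expression is evaluated in int.
    int t = ((int)x << 8) >> 8;            // 0x0000ff00 >> 8 == 0x000000ff
    short first = (short)t;                // 255

    // Second case: the intermediate result is narrowed back to short.
    short y = (short)((int)x << 8);        // 0xff00, i.e. -256 as a 16-bit short
    short second = (short)((int)y >> 8);   // sign extension keeps the high bits set: -1

    std::cout << first << '\n' << second << '\n';
}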
What you just observed is sign extension:
Sign extension is the operation, in computer arithmetic, of increasing the number of bits of a binary number while preserving the number's sign (positive/negative) and value. This is done by appending digits to the most significant side of the number, following a procedure dependent on the particular signed number representation used.
For example, if six bits are used to represent the number "00 1010" (decimal positive 10) and the sign extend operation increases the word length to 16 bits, then the new representation is simply "0000 0000 0000 1010". Thus, both the value and the fact that the value was positive are maintained.
If ten bits are used to represent the value "11 1111 0001" (decimal negative 15) using two's complement, and this is sign extended to 16 bits, the new representation is "1111 1111 1111 0001". Thus, by padding the left side with ones, the negative sign and the value of the original number are maintained.
You left-shift all the way to the point where your short becomes negative, and when you then shift back to the right, you get the sign extension.
This doesn't happen in the first case, because the shift isn't actually applied to a short: the short is promoted to the default integral type int before the shift, and the value is only converted back to short after it has already been shifted back:
on the stack: 0000 0000 0000 0000 0000 0000 1111 1111
<<8
on the stack: 0000 0000 0000 0000 1111 1111 0000 0000
>>8
on the stack: 0000 0000 0000 0000 0000 0000 1111 1111
convert to short: 0000 0000 1111 1111
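A direct way to watch the sign extension happen is to widen a negative short by hand (assuming a 16-bit short and a 32-bit int):

#include <iostream>

int main() {
    short s = (short)0xff00;   // -256 as a 16-bit two's-complement value
    int widened = s;           // sign-extended on conversion: 0xffffff00
    std::cout << std::hex << widened << '\n';   // prints ffffff00
}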
Why is the result 4294967292 instead of -4 even when both integer types are signed?
#include <iostream>

void f(int & i)
{
    i = -4;
}

int main()
{
    long long i = 0;
    f(reinterpret_cast<int &>(i));
    std::cout << i << std::endl;
    return 0;
}
long long seems to be a 64-bit and int a 32-bit type on your architecture. You're modifying the object through a 32-bit integer reference, so you only modified the less significant half (you're evidently compiling on an ordinary x86 machine, which uses little endian and two's complement for signed integer representation).
The sign bit, as well as the more significant half, are all 0s (as initialized). They would have to be all 1s for the same negative value to be printed:
0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1100
^                                       ^
sign bit of long long                   sign bit of int
reinterpret_cast only guarantees that you get your int back with another reinterpret_cast, so you should either read the result through an int reference again, or give f a long long & parameter in the first place. (Strictly speaking, accessing a long long object through an int reference also violates the aliasing rules, so the program has undefined behaviour.)
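A sketch of the second alternative, which avoids the cast entirely (a hypothetical rewrite, not the asker's original code):

#include <iostream>

void f(long long & i)   // parameter type now matches the argument
{
    i = -4;
}

int main()
{
    long long i = 0;
    f(i);
    std::cout << i << std::endl;   // prints -4
    return 0;
}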
Code:

#include <iostream>
using namespace std;

int main() {
    short a = 1;         // #1
    char *p = (char*)&a;
    *(p) = 1;            // #2
    cout << a << endl;   // Output: 1
    *(p+1) = 2;          // #3
    cout << a << endl;   // Output: 513
}
From my understanding, the output should have been 257 and then 258.
Is there any reason I got a different result when I ran the program above?
Update:
I know this is undefined behavior, but still: does this mean that the conversion between decimal and binary is not done as usual, right to left, but instead left to right? For example:
binary(a) = 1000 0000 | 0000 0000
so *(p)=1; makes binary(a) = 1000 0000 | 0000 0000, which is 1 in decimal,
and *(p+1)=2; makes binary(a) = 1000 0000 | 0100 0000, which is 513.
That is exactly the output of the program.
What happens here is due to the fact that we have a 2-byte short in a little endian CPU architecture. The standard does not require that the architecture be LE, so in any case this program can generate a number of different results when run on different systems.
A short here is laid out in memory with the least significant byte (LSB) first:
Memory addresses ------>
LSB MSB
0000 0000 0000 0000
p points at the LSB and sets it to 1:
0000 0001 0000 0000
The result when interpreted as a short is LSB + 256 * MSB, i.e. 1 + 256 * 0 = 1.
p+1 then points at the MSB (which is at the next memory address) and sets it to 2:
0000 0001 0000 0010
Result when interpreted as a short: 1 + 256 * 2 = 513.
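A minimal sketch that dumps the bytes of a short directly, to make the layout visible (unsigned char avoids sign surprises; the output shown assumes little endian):

#include <cstddef>
#include <iostream>

int main() {
    short a = 513;   // 0x0201
    unsigned char *p = (unsigned char*)&a;
    for (std::size_t i = 0; i < sizeof a; ++i)
        std::cout << (int)p[i] << ' ';   // little endian prints: 1 2
    std::cout << '\n';
}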
"Is there any reason I got a different result when I ran the program above?"
Yes. The language-agnostic answer: because this program invokes undefined behavior. The answer considering what might actually have happened: your system has a different endianness than you think it has.
I want to be able to access the sign bit of a number in C++. My current code looks something like this:
int sign_bit = number >> 31;
That appears to work, giving me 0 for positive numbers and -1 for negative numbers. However, I don't see how I get -1 for negative numbers: if 12 is
0000 0000 0000 0000 0000 0000 0000 1100
then -12 is
1111 1111 1111 1111 1111 1111 1111 0100
and shifting it right by 31 bits would make
0000 0000 0000 0000 0000 0000 0000 0001
which is 1, not -1, so why do I get -1 when I shift it?
What about this?
int sign = number < 0;
The result of right-shifting a negative number in C++ is implementation-defined. So nobody can say what right-shifting your -12 yields on your specific platform. You think it should produce the above (1), but it can just as easily produce an all-ones pattern, which is -1. The latter is called sign-extended shifting: the sign bit is copied into the vacated positions and never shifted out of its place.
If all you are interested in is the value of the sign bit, then stop wasting time on bitwise operations like shifts. Just compare your number to 0 and see whether it is negative or not.
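For comparison, the two approaches side by side (the shift result is implementation-defined for negative numbers; the comparison is fully portable):

#include <iostream>

int main() {
    int number = -12;
    int shifted = number >> 31;   // implementation-defined: commonly -1 (arithmetic shift)
    int sign = number < 0;        // portable: 1 for negative, 0 otherwise
    std::cout << shifted << ' ' << sign << '\n';
}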
Because you are shifting a signed integer. Cast the integer to unsigned first:
int sign_bit = ((unsigned int)number) >> 31;
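The 31 hardcodes a 32-bit int. A width-independent variant of the same idea (a sketch; CHAR_BIT comes from <climits>):

#include <climits>

int sign_bit(int number) {
    // shift the unsigned representation so only the former sign bit remains
    return (int)((unsigned int)number >> (sizeof(int) * CHAR_BIT - 1));
}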
You can use the <cmath> library:
#include <cmath>
and use it like
std::cout << std::signbit(num);
This function takes a floating-point value as input and returns a bool:
true for negative
false for positive
For instance,
std::cout << std::signbit(1);
will give you 0 as output (false).
But while using this function you have to be careful about zero:
std::cout << std::signbit(-0.0); // 1 (true)
std::cout << std::signbit(+0.0); // 0 (false)
The outputs of these two lines are not the same.
To remove this asymmetry you can use:
float x = +0.01;
std::cout << (x >= 0 ? (x == 0 ? 0 : 1) : -1);
which gives:
0 for zero (of either sign)
1 for positive
-1 for negative
For integers, test number < 0.
For floating point numbers, you may also want to take into account the fact that zero has a sign: there exists a -0.0 which is distinct from +0.0. To distinguish the two, use std::signbit, as sketched below.
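A short check of the ±0 case (std::signbit lives in <cmath>):

#include <cmath>
#include <iostream>

int main() {
    std::cout << (-0.0 == 0.0) << '\n';   // 1: an ordinary comparison cannot tell them apart
    std::cout << std::signbit(-0.0) << ' '
              << std::signbit(0.0) << '\n';   // 1 0: signbit can
}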
The >> operator here performs an arithmetic shift, which preserves the sign of the number by copying the sign bit into the vacated positions (for negative operands this behaviour is implementation-defined, but it is what most platforms do).
My solution that supports +/-0 (note that it misreports -infinity and NaN, and needs <cmath> for fabs):

#include <cmath>

bool signbit(double x)
{
    // 1.0 / -0.0 is -infinity, so this distinguishes -0.0 from +0.0
    return 1.0 / x != 1.0 / std::fabs(x);
}
You could do the following:

int t1 = -12;
unsigned int t2 = t1;   // reinterpreted as unsigned, so the shift cannot sign-extend
t2 = t2 >> 31;
cout << t2;

This avoids the implementation-defined signed shift, though it still assumes a 32-bit unsigned int.
A bit-level version has to reinterpret the object representation rather than cast the value: (__int64)x converts the numeric value of x, and the original masks also parsed wrongly because != binds tighter than &. A safe way is to memcpy the bits into a fixed-width integer:

#include <cstdint>
#include <cstring>

bool signbit(double x)
{
    std::uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // copy the IEEE 754 bit pattern
    return (bits & 0x8000000000000000ULL) != 0;
}

bool signbit(float x)
{
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    return (bits & 0x80000000U) != 0;
}
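In C++20 the memcpy can be replaced by std::bit_cast<std::uint64_t>(x) from <bit>, which expresses the same reinterpretation directly.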