How is 256 stored in a char variable and an unsigned char? - C++

Up to 255, I can understand how integers are stored in char and unsigned char:
#include <stdio.h>
int main()
{
    unsigned char a = 256;
    printf("%d\n", a);
    return 0;
}
In the code above I get an output of 0 for unsigned char as well as char.
For 256, I think this is how the integer is stored (this is just a guess):
First, 256 is converted to its binary representation, which is 100000000 (9 bits in total).
Then the leftmost bit (the bit which is set) is dropped, because the char data type only has 8 bits of memory.
So it is stored in memory as 00000000, which is why it prints 0 as output.
Is this guess correct, or is there another explanation?

Your guess is correct. Conversion to an unsigned type uses modular arithmetic: if the value is out of range (either too large, or negative) then it is reduced modulo 2^N, where N is the number of bits in the target type. So, if (as is often the case) char has 8 bits, the value is reduced modulo 256, so that 256 becomes zero.
Note that there is no such rule for conversion to a signed type - out-of-range values give implementation-defined results. Also note that char is not specified to have exactly 8 bits, and can be larger on less mainstream platforms.
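A minimal sketch of that modular reduction, assuming an 8-bit unsigned char (CHAR_BIT == 8, as on mainstream platforms); the variable names are illustrative only:
#include <cstdio>
#include <climits>

int main()
{
    // 256 is out of range for an 8-bit unsigned char, so it is reduced
    // modulo 2^8 = 256: 256 % 256 == 0.
    unsigned char a = 256;   // becomes 0
    // Negative values wrap the same way: -1 becomes 2^8 - 1 = 255.
    unsigned char b = -1;    // becomes UCHAR_MAX (255 with 8-bit chars)
    printf("a = %d, b = %d, UCHAR_MAX = %d\n", a, b, UCHAR_MAX);
    return 0;
}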

On your platform (as well as on any other "normal" platform) unsigned char is 8 bits wide, so it can hold numbers from 0 to 255.
Trying to assign 256 (which is an int literal) to it results in an unsigned integer overflow, which the standard defines to result in "wraparound". The result of u = n, where u is an unsigned integral type and n is an unsigned integer outside its range, is u = n % (max_value_of_u + 1).
This is just a convoluted way to say what you already said: the standard guarantees that in these cases the assignment is performed keeping only the bits that fit in the target variable. This rule is there because most platforms already implement this at the assembly-language level (unsigned integer overflow typically results in this behavior plus some kind of overflow flag being set to 1).
Notice that none of this holds for signed integers (which plain char often is): signed integer overflow is undefined behavior.
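As a quick illustration of that formula, here is a sketch assuming an 8-bit unsigned char; the helper name wrap_to_uchar is made up for this example:
#include <cstdio>
#include <climits>

// Reproduces the rule u = n % (max_value_of_u + 1) for the case where the
// target type is unsigned char.
unsigned wrap_to_uchar(unsigned long long n)
{
    return static_cast<unsigned>(n % (UCHAR_MAX + 1ULL));
}

int main()
{
    unsigned char u = 256;
    printf("direct assignment:  %d\n", u);                  // 0
    printf("formula for 256:    %u\n", wrap_to_uchar(256)); // 0
    printf("formula for 300:    %u\n", wrap_to_uchar(300)); // 44
    return 0;
}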

Yes, that's correct. 8 bits can hold 0 to 255 unsigned, or -128 to 127 signed. Go above that and you have hit an overflow situation, and bits will be lost.
Does the compiler give you a warning on the above code? You might be able to increase the warning level and see something. It won't warn you when you assign a value that can't be determined statically (before execution), but in this case it's pretty clear you're assigning something too large for the size of the variable.
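For reference, a sketch of how a compiler can flag this at build time (the flags below are the usual GCC/Clang warning switches; the exact diagnostic text varies by compiler, so treat the comments as assumptions rather than guarantees):
#include <cstdio>

int main()
{
    // With warnings enabled (e.g. g++ -Wall -Wextra or clang++ -Wall), this
    // constant conversion is typically flagged, because 256 is known at
    // compile time to be out of range for an 8-bit unsigned char.
    unsigned char a = 256;
    // Brace initialization turns it into a hard error in C++ rather than a
    // warning, since narrowing a constant that doesn't fit is ill-formed:
    // unsigned char b{256};   // error: narrowing conversion
    printf("%d\n", a);
    return 0;
}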

Related

Why subtract from 256 when assigning signed char to unsigned char in C++?

In Bjarne's "The C++ Programming Language" book, the following piece of code on chars is given:
signed char sc = -140;
unsigned char uc = sc;
cout << uc;  // prints 't'
1Q) chars are 1 byte (8 bits) on my hardware. What is the binary representation of -140? Is it possible to represent -140 using 8 bits? I would think the range is guaranteed to be at least [-127...127] for signed chars. How is it even possible to represent -140 in 8 bits?
2Q) Assume it's possible. Why do we subtract 140 from 256 when sc is assigned to uc? What is the logic behind this?
EDIT: I wrote cout << sizeof(signed char) and it produced 1 (1 byte). I include this to be exact about the byte-wise size of signed char.
EDIT 2: cout << int{sc} gives the output 116. I don't understand what happens here.
First of all: Unless you're writing something very low-level that requires bit-representation manipulation - avoid writing this kind of code like the plague. It's hard to read, easy to get wrong, confusing, and often exhibits implementation-defined/undefined behavior.
To answer your question though:
The code assumes you're on a platform in which the types signed char and unsigned char have 8 bits (although theoretically they could have more), and that the hardware has "two's complement" behavior: the bit representation of the result of an arithmetic operation on an integer type with N bits is always modulo 2^N. That also specifies how the same bit pattern is interpreted as signed or unsigned. Now, -140 modulo 2^8 is 116 (01110100), so that's the bit pattern sc will hold. Interpreted as a signed char (-128 through 127), this is still 116.
An unsigned char can also represent 116, so the second assignment results in 116 as well.
116 is the ASCII code of the character t; and std::cout interprets unsigned char values (under 128) as ASCII codes. So, that's what gets printed.
The result of assigning -140 to a signed char is implementation-defined, just like its range is (i.e. see the manual). A very common choice is to use wrap-around math: if it doesn't fit, add or subtract 256 (or the relevant max range) until it fits.
Since sc will have the value 116, and uc can also hold that value, that conversion is trivial. The unusual thing already happened when we assigned -140 to sc.
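Here is a small sketch that reproduces this on a typical 8-bit, two's-complement platform (the exact value stored in sc is implementation-defined before C++20, so the 116 below is what common platforms produce, not a guarantee):
#include <iostream>

int main()
{
    signed char sc = -140;         // out of range: commonly wraps to -140 + 256 = 116
    unsigned char uc = sc;         // 116 is representable, so uc is also 116
    std::cout << int{sc} << '\n';  // 116 on common platforms
    std::cout << int{uc} << '\n';  // 116
    std::cout << uc << '\n';       // prints 't' (character with code 116)
    return 0;
}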

Printf function formatter

Given the following simple C++ code:
#include <stdio.h>
int main() {
    char c1 = 130;
    unsigned char c2 = 130;
    printf("1: %+u\n", c1);
    printf("2: %+u\n", c2);
    printf("3: %+d\n", c1);
    printf("4: %+d\n", c2);
    ...
    return 0;
}
the output is like this:
1: 4294967170
2: 130
3: -126
4: +130
Can someone please explain the results on output lines 1 and 3?
I'm using Linux gcc compiler with all default settings.
(This answer assumes that, on your machine, char ranges from -128 to 127, that unsigned char ranges from 0 to 255, and that unsigned int ranges from 0 to 4294967295, which happens to be the case.)
char c1 = 130;
Here, 130 is outside the range of numbers representable by char. The value of c1 is implementation-defined. In your case, the number happens to "wrap around," initializing c1 to static_cast<char>(-126).
In
printf("1: %+u\n", c1);
c1 is promoted to int, resulting in -126. Then, it is interpreted by the %u specifier as unsigned int. This is undefined behavior. This time the resulting number happens to be the unique number representable by unsigned int that is congruent to -126 modulo 4294967296, which is 4294967170.
In
printf("3: %+d\n", c1);
the int value -126 is interpreted by the %d specifier as int directly, and -126 is output as expected (?).
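For completeness, here is a sketch of ways to print these values without a specifier mismatch (the hh length modifier is standard C99/C++11 printf syntax; the -126 in the comments assumes the usual signed, two's-complement char):
#include <stdio.h>

int main()
{
    char c1 = 130;             // implementation-defined; commonly wraps to -126
    unsigned char c2 = 130;

    // Cast explicitly so the argument type matches the specifier:
    printf("c1 as int:           %d\n", (int)c1);                      // -126
    printf("c1 as unsigned char: %u\n", (unsigned)(unsigned char)c1);  // 130
    // Or use the hh length modifier, which tells printf that the argument
    // was a (signed/unsigned) char before the default promotion to int:
    printf("c1 via %%hhd:        %hhd\n", c1);                         // -126
    printf("c2 via %%hhu:        %hhu\n", c2);                         // 130
    return 0;
}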
In cases 1 and 2 the format specifier doesn't match the type of the argument, so the behaviour of the program is undefined (on most systems). On most systems char and unsigned char are smaller than int, so they promote to int when passed as variadic arguments. int doesn't match the format specifier %u, which requires unsigned int.
On exotic systems (which your target is not) where unsigned char is as large as int, it will be promoted to unsigned int instead, in which case 4 would have UB since %d requires an int.
The explanation for 3 depends a lot on implementation-defined details. The result depends on whether char is signed or not, and it depends on the representable range.
If 130 were a representable value of char, such as when char is an unsigned type, then 130 would be the correct output. That appears not to be the case, so we can assume that char is a signed type on the target system.
Initialising a signed integer with an unrepresentable value (such as char with 130 in this case) results in an implementation-defined value.
On systems with two's-complement representation for signed numbers - which is the ubiquitous representation these days - the implementation-defined value is typically the representable value that is congruent with the unrepresentable value modulo the number of representable values. -126 is congruent with 130 modulo 256 and is a representable value of char.
A char is 8 bits. This means it can represent 2^8 = 256 unique values. An unsigned char represents 0 to 255, and a signed char represents -128 to 127 (it could in principle represent almost anything, but this is the typical platform implementation). Thus, assigning 130 to a char overshoots the signed range, and the value wraps around to -126 when it is interpreted as a signed char. The compiler sees 130 as an integer and makes an implicit conversion from int to char. On most platforms an int is 32 bits and the sign bit is the MSB; the value 130 easily fits into the low 8 bits, but then the compiler wants to chop off 24 bits to squeeze it into a char. When this happens, and you've told the compiler you want a signed char, the MSB of the remaining 8 bits actually represents -128. Uh oh! You now have 1000 0010 in memory, which when interpreted as a signed char is -128 + 2 = -126. My linter on my platform screams about this.
I make that important point about interpretation because in memory both values are identical. You can confirm this by casting the value in the printf statements, e.g. printf("3: %+d\n", (unsigned char)c1);, and you'll see 130 again.
The reason you see the large value in your first printf statement is that you are treating a signed char as an unsigned int, and the char has already overflowed. The machine interprets the char as -126 first, and that negative value cannot be represented as an unsigned int, so what comes out is 2^32 minus 126.
2^32 - 126 = 4294967170. Bingo.
In printf statement 2, all the machine has to do is add 24 zero bits to reach 32 bits and then read the value. In statement 1, you've told it that you have a signed value, so it first turns that into a 32-bit -126, and then that negative integer is read as an unsigned integer. Again, this flips how the most significant bit is interpreted. There are 2 steps:
The signed char is promoted to a signed int, because variadic arguments work with ints. The char (is probably copied and) has 24 bits added; because we're looking at a signed value, the promotion sign-extends, filling the new upper bits with copies of the sign bit, so the memory here looks quite different.
The new signed int memory is then read as unsigned, so the machine looks at the MSB and weights it as +2^31 instead of the -2^31 it carried after the promotion.
An interesting bit of trivia: you can suppress the clang-tidy linter warning if you write char c1 = 130u;, but you still get the same garbage based on the above logic (i.e. the implicit conversion throws away the upper 24 bits, and the sign bit was zero anyhow). I have submitted an LLVM clang-tidy missing-functionality report based on exploring this question (issue 42137 if you really want to follow it) 😉.
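A sketch that makes the sign extension visible (assumes an 8-bit char and a 32-bit int, matching the platform in the question):
#include <stdio.h>

int main()
{
    char c1 = 130;           // commonly wraps to -126 (bit pattern 1000 0010)
    unsigned char c2 = 130;  // stays 130 (same bit pattern 1000 0010)

    // Promoting the signed char sign-extends: the upper 24 bits are filled with
    // copies of the sign bit, giving 0xFFFFFF82, i.e. 4294967170 read as unsigned.
    printf("promoted c1: 0x%08X\n", (unsigned)(int)c1);
    // Promoting the unsigned char zero-extends: the upper 24 bits are zero.
    printf("promoted c2: 0x%08X\n", (unsigned)(int)c2);
    return 0;
}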

Why does (unsigned int = -1) show the largest value that it can store? [duplicate]

In C or C++ it is said that the maximum number a size_t (an unsigned integer type) can hold is the same as casting -1 to that data type. For example, see Invalid Value for size_t
Why?
I mean, (talking about 32-bit ints) AFAIK the most significant bit holds the sign in a signed data type (that is, bit 0x80000000 forms a negative number). Then, 1 is 0x00000001, and 0x7FFFFFFF is the greatest positive number an int data type can hold.
Then, AFAIK the binary representation of -1 as an int should be 0x80000001 (perhaps I'm wrong). Why/how is this binary value converted to something completely different (0xFFFFFFFF) when casting an int to unsigned? Or, how is it possible to form a binary -1 out of 0xFFFFFFFF?
I have no doubt that in C ((unsigned int)-1) == 0xFFFFFFFF or ((int)0xFFFFFFFF) == -1 is just as true as 1 + 1 == 2; I'm just wondering why.
C and C++ can run on many different architectures, and machine types. Consequently, they can have different representations of numbers: Two's complement, and Ones' complement being the most common. In general you should not rely on a particular representation in your program.
For unsigned integer types (size_t being one of those), the C standard (and the C++ standard too, I think) specifies precise overflow rules. In short, if SIZE_MAX is the maximum value of the type size_t, then the expression
(size_t) (SIZE_MAX + 1)
is guaranteed to be 0, and therefore, you can be sure that (size_t) -1 is equal to SIZE_MAX. The same holds true for other unsigned types.
Note that the above holds true:
for all unsigned types,
even if the underlying machine doesn't represent numbers in Two's complement. In this case, the compiler has to make sure the identity holds true.
Also, the above means that you can't rely on specific representations for signed types.
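A compile-time sketch of that guarantee (standard headers only; nothing here depends on the machine's representation of signed numbers):
#include <cstddef>
#include <cstdint>
#include <climits>

// The conversion rules for unsigned types guarantee these identities on every
// conforming implementation, whatever the hardware does internally.
static_assert(static_cast<std::size_t>(-1) == SIZE_MAX,
              "(size_t)-1 is the maximum size_t value");
static_assert(static_cast<unsigned char>(-1) == UCHAR_MAX,
              "(unsigned char)-1 is the maximum unsigned char value");
static_assert(static_cast<unsigned int>(-1) == UINT_MAX,
              "(unsigned int)-1 is the maximum unsigned int value");

int main() { return 0; }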
Edit: In order to answer some of the comments:
Let's say we have a code snippet like:
int i = -1;
long j = i;
There is a type conversion in the assignment to j. Assuming that int and long have different sizes (most [all?] 64-bit systems), the bit-patterns at memory locations for i and j are going to be different, because they have different sizes. The compiler makes sure that the values of i and j are -1.
Similarly, when we do:
size_t s = (size_t) -1;
There is a type conversion going on. The -1 is of type int. It has a bit-pattern, but that is irrelevant for this example because when the conversion to size_t takes place due to the cast, the compiler will translate the value according to the rules for the type (size_t in this case). Thus, even if int and size_t have different sizes, the standard guarantees that the value stored in s above will be the maximum value that size_t can take.
If we do:
long j = LONG_MAX;
int i = j;
If LONG_MAX is greater than INT_MAX, then the value in i is implementation-defined (C89, section 3.2.1.2).
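A sketch that exercises both conversions from this edit (the long-to-int result is implementation-defined when LONG_MAX is greater than INT_MAX; the values in the comments are what typical two's-complement platforms with 64-bit long and 32-bit int produce):
#include <cstdio>
#include <cstddef>
#include <cstdint>
#include <climits>

int main()
{
    int i = -1;
    long j = i;               // value-preserving: j is -1 even if long is wider than int
    printf("i = %d, j = %ld\n", i, j);

    size_t s = (size_t)-1;    // guaranteed: the maximum value of size_t
    printf("s == SIZE_MAX? %d\n", (int)(s == SIZE_MAX));

    long big = LONG_MAX;
    int narrowed = (int)big;  // implementation-defined if LONG_MAX > INT_MAX;
                              // commonly -1 (the low 32 bits are all ones)
    printf("narrowed = %d\n", narrowed);
    return 0;
}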
It's called two's complement. To make a negative number, invert all the bits then add 1. So to convert 1 to -1, invert it to 0xFFFFFFFE, then add 1 to make 0xFFFFFFFF.
As to why it's done this way, Wikipedia says:
The two's-complement system has the advantage of not requiring that the addition and subtraction circuitry examine the signs of the operands to determine whether to add or subtract. This property makes the system both simpler to implement and capable of easily handling higher precision arithmetic.
Your first question, about why (unsigned)-1 gives the largest possible unsigned value, is only accidentally related to two's complement. The reason -1 cast to an unsigned type gives the largest value possible for that type is because the standard says the unsigned types "follow the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer."
Now, for 2's complement, the representation of the largest possible unsigned value and -1 happen to be the same -- but even if the hardware uses another representation (e.g. 1's complement or sign/magnitude), converting -1 to an unsigned type must still produce the largest possible value for that type.
Two's complement is very nice for doing subtraction just like addition :)
11111110 (254 or -2)
+00000001 ( 1)
---------
11111111 (255 or -1)
11111111 (255 or -1)
+00000001 ( 1)
---------
100000000 ( 0 + 256)
That is two's complement encoding.
The main bonus is that you get the same encoding whether you are using an unsigned or signed int. If you subtract 1 from 0 the integer simply wraps around. Therefore 1 less than 0 is 0xFFFFFFFF.
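A quick sketch of that wraparound using unsigned arithmetic, which is well-defined by the standard (the hex value assumes a 32-bit unsigned int):
#include <cstdio>

int main()
{
    unsigned int u = 0;
    u = u - 1;   // unsigned arithmetic wraps modulo 2^32 here
    printf("0u - 1u = %u (0x%08X)\n", u, u);   // 4294967295 (0xFFFFFFFF)
    return 0;
}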
Because the bit pattern of an int with the value -1 is FFFFFFFF in hexadecimal, or
11111111111111111111111111111111 in binary.
In an int, the first (most significant) bit signifies whether the value is negative.
In an unsigned int, that first bit is just another value bit, because an unsigned int cannot be negative. That extra bit is what lets an unsigned int store bigger numbers.
So for an unsigned int, 11111111111111111111111111111111 (binary) or FFFFFFFF (hexadecimal) is the biggest number a uint can store.
Unsigned ints are not always recommended, because if a calculation takes them below zero the value wraps around to the biggest numbers.

Converting unsigned char * to char *

Here is my code:
std::vector<unsigned char> data;
... // put some data into the data vector
char* bytes = reinterpret_cast<char*>(data.data());
My problem is that the vector 'data' contains bytes with the value 255. After the conversion, through the bytes pointer I see values of -1 instead of 255. How should I convert this data properly?
EDIT
OK, it turns out that I don't really need a conversion, only the raw bits. Thanks for trying to help.
char can be either signed or unsigned depending on the platform. If it is signed, as on your platform, the standard guarantees a range of at least -127 to 127; on common two's-complement platforms it is an 8-bit type holding -128 to 127, and those are the only values it can hold. This means that you can't represent 255 as a char.
Now to explain what you are seeing: the typical representation of signed numbers in modern processors is two's complement, in which -1 has the maximum representable bit pattern (all ones), which is the same pattern as 255 for unsigned char. So the cast does exactly what you ask it to: it reinterprets the unsigned chars as (signed) chars.
However, I can't tell you how to convert the data properly, since that depends on what you want to do with it. The way you are doing it might be fine for your purposes; if it isn't, your only choice is to change the data type.
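A small sketch of recovering the original byte values after the reinterpret_cast (the vector name data follows the question; the only assumption is the usual 8-bit, signed char platform):
#include <cstdio>
#include <cstddef>
#include <vector>

int main()
{
    std::vector<unsigned char> data = {255, 1, 128};

    // Reinterpret the buffer as char*, e.g. to pass it to an API expecting char*.
    char* bytes = reinterpret_cast<char*>(data.data());

    for (std::size_t i = 0; i < data.size(); ++i) {
        // Read as a signed char, 255 shows up as -1 on this kind of platform...
        printf("signed view:   %d\n", bytes[i]);
        // ...but converting back through unsigned char recovers the original value.
        printf("unsigned view: %d\n", static_cast<unsigned char>(bytes[i]));
    }
    return 0;
}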
This works as it should. Your char type has a size of 1 byte, which equals 8 bits. If it's unsigned, all of the bits are used to hold the value, which makes the maximum value that a char can hold 255 (2^8 = 256 different values, starting with 0).
In the case of signed char, one bit is used to hold the sign instead of the value, which leaves you only 7 bits for the magnitude, allowing it to store numbers from -128 to 127.
So, when you hold 255 in an unsigned char, all the bits are interpreted as the value, thus you have 255. If you reinterpret it as a signed char, the first bit is treated as the sign bit, and the same bit pattern is interpreted as -1.
