Question on C++ undefined behavior. Casting between uint8 and int8 [duplicate] - c++

Suppose I assign an eleven-digit number to an int. What will happen? I played around with it a little, and I know it gives me some other number within the int range. How is this new number created?

It is implementation-defined behaviour. This means that your compiler must provide documentation saying what happens in this scenario.
So, consult that documentation to get your answer.
A common way that implementations define it is to truncate the input integer to the number of bits of int (after reinterpreting unsigned as signed if necessary).
C++14 Standard references: [expr.ass]/3, [conv.integral]/3
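As a minimal sketch of what a typical truncating implementation does with an eleven-digit number (assuming 32-bit int; the exact result is up to your implementation):

#include <iostream>

int main() {
    long long big = 21474836470LL;          // eleven digits, 0x4FFFFFFF6
    int truncated = static_cast<int>(big);  // implementation-defined result before C++20
    std::cout << truncated << '\n';         // prints -10 if the low 32 bits are kept
}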

In C++20, this behavior will still be implementation-defined¹, but the requirements are much stronger than before. This is a side effect of the C++20 requirement that signed integer types use two's complement representation.
This kind of conversion is specified by [conv.integral]:
A prvalue of an integer type can be converted to a prvalue of another integer type. [...]
[...]
Otherwise, the result is the unique value of the destination type that is congruent to the source integer modulo 2^N, where N is the width of the destination type.
[...]
This behaviour is the same as truncating the representation of the number to the width of the integer type you are assigning to, e.g.:
int32_t u = 0x6881736df7939752;
...will keep the 32 right-most bits, i.e., 0xf7939752, and "copy" these bits to u. In two's complement, this corresponds to -141322414.
¹ This will still be implementation-defined because the size of int is implementation-defined. If you assign to, e.g., int32_t, then the behaviour is fully defined.
Prior to C++20, there were two different rules for unsigned and signed types:
A prvalue of an integer type can be converted to a prvalue of another integer type. [...]
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [...]
If the destination type is signed, the value is unchanged if it can be represented in the destination type; otherwise, the value is implementation-defined.
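A hedged sketch of the asymmetry in the old rules (assuming 32-bit int; the values are illustrative):

#include <cstdint>

uint32_t a = -1;            // to unsigned: always well-defined, yields 0xFFFFFFFF
int32_t b = 0x1FFFFFFFFLL;  // to signed, out of range: implementation-defined
                            // before C++20; since C++20, wraps modulo 2^32 to -1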

INT_MAX on a 32-bit system is 2,147,483,647 (2^31 − 1); UINT_MAX is 4,294,967,295 (2^32 − 1).
int thing = 21474836470;
What happens is implementation-defined; it's up to the compiler. Mine appears to truncate the higher bits: 21474836470 is 0x4fffffff6, and keeping the low 32 bits gives 0xfffffff6, which is -10 as a signed int:
warning: implicit conversion from 'long' to 'int' changes value from 21474836470 to -10 [-Wconstant-conversion]
int thing = 21474836470;

Order of int32_t to uint64_t casting

Does the C++ standard guarantee whether integer conversion that both widens and casts away the sign will sign-extend or zero-extend?
The quick test:
int32_t s = -1;
uint64_t u = s;
produces 0xFFFFFFFFFFFFFFFF under Xcode, but is that defined behavior in the first place?
When you do
uint64_t u = s;
[dcl.init]/17.9 applies, which states:
the initial value of the object being initialized is the (possibly converted) value of the initializer expression. A standard conversion sequence ([conv]) will be used, if necessary, to convert the initializer expression to the cv-unqualified version of the destination type; no user-defined conversions are considered.
and if we look in [conv], under integral conversions, we have
Otherwise, the result is the unique value of the destination type that is congruent to the source integer modulo 2^N, where N is the width of the destination type.
So what you are guaranteed to have happen is that -1 becomes the largest value that can be represented, -2 is one less than that, -3 is one less than -2, and so on; basically, it "wraps around".
In fact,
unsigned_type some_name = -1;
is the canonical way to create a variable with the maximum value for that unsigned integer type.
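A minimal compile-time check of the idiom (assuming C++11 or later for static_assert):

#include <cstdint>
#include <limits>

static_assert(static_cast<uint64_t>(-1) == std::numeric_limits<uint64_t>::max(),
              "-1 converts to the maximum value of the unsigned type");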
You can find the standard verbiage in other answers.
But to form an intuitive mental model of widening conversions, it helps to think of them as a two-step process:
Sign- or zero-extension of the value. If the value is of signed type, sign extension is used; here, int32_t is sign-extended to int64_t. On x86, the signedness of the source type determines whether a MOVSX or a MOVZX instruction is used.
Conversion of the extended value to the destination type (a change of signedness). Here, int64_t is converted to uint64_t. This takes zero assembly instructions, as registers are untyped: the compiler simply treats the register holding the sign-extended int32_t as a uint64_t.
Note that the standard doesn't specify these steps, it just specifies the required result.
From the section on Integral conversions:
[conv.integral]/3: Otherwise, the result is the unique value of the destination type that is congruent to the source integer modulo 2^N, where N is the width of the destination type.
In other words, the wrap-around "happens last".
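Here is the two-step model spelled out as code; the single conversion uint64_t u = s; is guaranteed to produce the same result:

#include <cstdint>

int32_t s = -1;
int64_t extended = s;                          // step 1: sign extension; still -1
uint64_t u = static_cast<uint64_t>(extended);  // step 2: wrap modulo 2^64
// u == 0xFFFFFFFFFFFFFFFF, exactly as with uint64_t u = s; directly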

Questions about C++20 two's-complement proposal R4

I am reading revision 4 of the two's-complement proposal (adopted by C++20), and I have some questions.
In the introduction, it says:
Status-quo Signed integer arithmetic remains non-commutative in general (though some implementations may guarantee that it is).
Does it really mean "non-commutative", as in a + b versus b + a? Or should that read "non-associative"?
It also says:
Change Conversion from signed to unsigned is always well-defined: the result is the unique value of the destination type that is congruent to the source integer modulo 2^N.
Hasn't signed-to-unsigned conversion been well-defined in precisely this way since the beginning of time? Should that read "conversion from unsigned to signed"?
Is there anything else in the list of changes that is missing or mis-stated?
Note that it wasn't P0907 that was adopted; it was P1236.
Or should that read "non-associative"?
Yes.
Should that read "conversion from unsigned to signed"?
Yes. If you look at P1236R1, you can see that the rule changed from:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
If the destination type is signed, the value is unchanged if it can be represented in the destination type; otherwise, the value is implementation-defined.
to:
Otherwise, the result is the unique value of the destination type that is congruent to the source integer modulo 2^N, where N is the range exponent of the destination type.
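An illustrative example of a conversion that the new wording makes fully defined (the value is chosen arbitrarily):

#include <cstdint>

int8_t x = static_cast<int8_t>(200u);  // unsigned -> signed, out of range:
                                       // implementation-defined before C++20;
                                       // since C++20, guaranteed 200 - 256 = -56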

Is conversion int -> unsigned long long defined by the standard

I can't find the exact specification of how an int value is converted to unsigned long long in the standard. Various similar conversions, such as int -> unsigned, unsigned -> int (UB if negative), unsigned long long -> int, etc., are specified.
For example, with GCC, -1 is converted to 0xffffffffffffffff, not to 0x00000000ffffffff. Can I rely on this behavior?
Yes, this is well defined; it is basically adding the maximum unsigned long long value plus one to -1, which always yields the maximum unsigned long long value. This is covered in the draft C++ standard, section 4.7 Integral conversions, which says:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). —end note ]
It does the same thing as C99, but the draft C99 standard is easier to understand; from section 6.3.1.3 Signed and unsigned integers:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.49)
where footnote 49 says:
The rules describe arithmetic on the mathematical value, not the value of a given type of expression.
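Applying the C99 rule by hand to this question's example (assuming a 64-bit unsigned long long):

// mathematical value: -1, which is below the range [0, ULLONG_MAX]
// add ULLONG_MAX + 1 (= 2^64) once: -1 + 2^64 = 2^64 - 1 = ULLONG_MAX
unsigned long long u = -1;  // u == 0xffffffffffffffff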
Yes, it's defined:
C++11 § 4.7 [conv.integral]/2 says this:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
The least unsigned integer congruent to -1 (modulo 2^N, where N is the number of bits in unsigned long long) is the largest value of unsigned long long possible.
Unsigned integers have guaranteed modulo arithmetic. Thus any int value v is converted to the unsigned long long value u such that u = K·2^n + v, where K is either 0 or 1, and where n is the number of value-representation bits of unsigned long long. In other words, if v is negative, just add 2^n.
The power of 2 follows from the C++ standard's requirement that integers be represented in a pure binary system. With n value-representation bits, the number of possible values is 2^n. There is no such requirement for floating-point types (you can use std::numeric_limits to check the radix of the representation of floating-point values).
Also note that, in order to cater to some now-archaic platforms (as well as one popular compiler that does things its own way), the standard leaves the opposite conversion implementation-defined when the unsigned value is not directly representable as a signed value. In practice, on modern systems, all compilers can be told to make that reverse conversion the exact inverse of the conversion to unsigned type, and e.g. Visual C++ does that by default. However, there is no formal guarantee, so fully portable code incurs a slight (on modern hardware, needless) inefficiency.
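A hedged sketch of that reverse conversion (assuming 32-bit int):

unsigned int big = 3000000000u;  // exceeds INT_MAX
int s = big;                     // implementation-defined before C++20; since C++20,
                                 // wraps modulo 2^32 to 3000000000 - 4294967296 = -1294967296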

converting -1 to unsigned types

Consider the following code to set all bits of x
unsigned int x = -1;
Is this portable? It seems to work on at least Visual Studio 2005–2010.
The citation-heavy answer:
I know there are plenty of correct answers in here, but I'd like to add a few citations to the mix. I'll cite two standards: the C99 n1256 draft (freely available) and the C++ n1905 draft (also freely available). There's nothing special about these particular standards; they're just both freely available and were the easiest to find at the moment.
The C++ version:
§5.3.2 ¶9: According to this paragraph, the value ~(type)0 is guaranteed to have all bits set, if (type) is an unsigned type.
The operand of ~ shall have integral or enumeration type; the result is the one’s complement of its operand. Integral promotions are performed. The type of the result is the type of the promoted operand.
§3.9.1 ¶4: This explains how overflow works with unsigned numbers.
Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
§3.9.1 ¶7, plus footnote 49: This explains that numbers must be binary. From this, we can infer that ~(type)0 must be the largest number representable in type (since it has all bits turned on, and each bit is additive).
The representations of integral types shall define values by use of a pure binary numeration system.49
49) A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral power of 2, except perhaps for the bit with the highest position. (Adapted from the American National Dictionary for Information Processing Systems.)
Since arithmetic is done modulo 2^n, it is guaranteed that (type)-1 is the largest value representable in that type. It is also guaranteed that ~(type)0 is the largest value representable in that type. They must therefore be equal.
The C99 version:
The C99 version spells it out in a much more compact, explicit way.
§6.5.3 ¶3:
The result of the ~ operator is the bitwise complement of its (promoted) operand (that is, each bit in the result is set if and only if the corresponding bit in the converted operand is not set). The integer promotions are performed on the operand, and the result has the promoted type. If the promoted type is an unsigned type, the expression ~E is equivalent to the maximum value representable in that type minus E.
As in C++, unsigned arithmetic is guaranteed to be modular (I think I've done enough digging through standards for now), so the C99 standard definitely guarantees that ~(type)0 == (type)-1, and we know from §6.5.3 ¶3 that ~(type)0 must have all bits set.
The summary:
Yes, it is portable. unsigned type x = -1; is guaranteed to have all bits set according to the standard.
Footnote: Yes, we are talking about value bits and not padding bits. I doubt that you need to set padding bits to one, however. You can see from a recent Stack Overflow question (link) that GCC was ported to the PDP-10 where the long long type has a single padding bit. On such a system, unsigned long long x = -1; may not set that padding bit to 1. However, you would only be able to discover this if you used pointer casts, which isn't usually portable anyway.
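A compile-time check of the equivalences derived above (assuming C++11 or later; padding bits, if any, are not exercised):

#include <climits>

static_assert(~0u == static_cast<unsigned>(-1), "~0u equals (unsigned)-1");
static_assert(static_cast<unsigned>(-1) == UINT_MAX, "(unsigned)-1 is UINT_MAX");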
Apparently it is:
(4.7) If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). —end note]
It is guaranteed to be the largest amount possible for that type due to the properties of modulo.
C99 also allows it:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.49)
Which would also be the largest value possible.
Largest amount possible may not be all bits set. Use ~static_cast<unsigned int>(0) for that.
I was sloppy in reading the question, and made several comments that might be misleading because of that. I'll try to clear up the confusion in this answer.
The declaration
unsigned int x = -1;
is guaranteed to set x to UINT_MAX, the maximum value of type unsigned int. The expression -1 is of type int, and it's implicitly converted to unsigned int. The conversion (which is defined in terms of values, not representations) results in the maximum value of the target unsigned type.
(It happens that the semantics of the conversion are optimized for two's-complement systems; for other schemes, the conversion might involve something more than just copying the bits.)
But the question referred to setting all bits of x. So, is UINT_MAX represented as all-bits-one?
There are several possible representations for signed integers (two's-complement is most common, but ones'-complement and sign-and-magnitude are also possible). But we're dealing with an unsigned integer type, so the way that signed integers are represented is irrelevant.
Unsigned integers are required to be represented in a pure binary format. Assuming that all the bits of the representation contribute to the value of an unsigned int object, then yes, UINT_MAX must be represented as all-bits-one.
On the other hand, integer types are allowed to have padding bits: bits that don't contribute to the value. For example, it's legal for unsigned int to be 32 bits, but for only 24 of those bits to be value bits, so UINT_MAX would be 2^24 − 1 rather than 2^32 − 1. So in the most general case, all you can say is that
unsigned int x = -1;
sets all the value bits of x to 1.
In practice, very few systems have padding bits in integer types. So on the vast majority of systems, unsigned int has a size of N bits and a maximum value of 2^N − 1, and the above declaration will set all the bits of x to 1.
This:
unsigned int x = ~0U;
will also set x to UINT_MAX, since bitwise complement for unsigned types is defined in terms of subtraction.
Beware!
You will sometimes see the claim that this is implementation-defined because the representation of negative integers (two's complement or otherwise) is not specified by the C++ Standard, and that the code is therefore not portable. That reasoning doesn't hold here: as the other answers show, the conversion to an unsigned type is defined in terms of values, not bit patterns, so unsigned int x = -1; yields UINT_MAX regardless of how the implementation represents -1.

What happens if I assign a number with a decimal point to an integer rather than to a float?

A friend of mine asked me this question earlier, but I found myself clutching at straws trying to give him an adequate explanation.
A float or a double will be truncated. So 2.99 will become 2 and -2.99 will become -2.
The pertinent section from the standard (section 4.9):
A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
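For example, note that truncation is toward zero, not toward negative infinity:

double a = 2.99, b = -2.99;
int i = static_cast<int>(a);  // 2
int j = static_cast<int>(b);  // -2 (not -3: the fractional part is simply discarded)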
If you assign a value of any numeric type (integer, floating-point) to an object of another numeric type, the value is implicitly converted to the target type. The same thing happens in an initialization or when passing an argument to a function.
The rules for how the conversion is done vary with what kind of types you're using.
If the target type can represent the value exactly:
short s = 42;
int i = s;       // short -> int: the value 42 is preserved exactly
double x = 42.0;
int j = x;       // double -> int: the value 42 is preserved exactly
there may be a change in representation, but the mathematical value is unchanged.
If a floating-point type is converted to an integer type, the value is truncated, as sashang's answer says; but if the truncated value can't be represented in the destination type, the behavior is undefined.
Conversion of an integer (either signed or unsigned) to an unsigned type causes the value to be reduced modulo MAX+1, where MAX is the maximum value of the unsigned type. For example:
unsigned short s = 70000; // sets s to 4464 (70000 - 65536)
// if unsigned short is 16 bits
Conversion of an integer to a signed type, if the value won't fit, is implementation-defined. It typically wraps around in a manner similar to what happens with unsigned types, but the language doesn't guarantee that.
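For instance (hypothetical values, assuming 32-bit int):

long long big = 2147483648LL;  // INT_MAX + 1
int wrapped = big;             // implementation-defined before C++20; since C++20,
                               // wraps modulo 2^32 to INT_MIN (-2147483648)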