Widening and narrowing rules in C/C++

I was trying to read through the C/C++ standard for this but I can't find the answer.
Say you have the following snippet:
int8_t m;
int64_t n;
And say that at some point you perform m + n. The addition is a binary operator, and I think the most likely thing that happens in such a case is:
Widen m to the same size as n; call the widened result m_prime
Perform m_prime + n
Return a result of type int64_t
I was trying to understand, however, whether the result would change if instead of m + n I had performed n + m (because maybe a narrowing operation would be applied instead of a widening one).
I cannot find the part of the standard that clarifies this point (which, I understand, may sound trivial).
Can anyone point me to where this is covered in the standard, or explain what happens in general in situations like the one I described?
I've been looking at the "Additive operators" section, but it doesn't seem to explain what happens: pointer arithmetic is covered a bit, but there is no reference to any implicitly applied conversion rule.
You can assume I'm talking about C++11, but I guess any other standard applies the same rules.

See Clause 5, Expressions [expr]. Paragraph 10 starts:
Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:
The sub-points that follow say things like "If either operand is...", "...the other shall...", "If both operands ..." etc.
For your specific example, see 10.5.2:
Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank shall be converted to the type of the operand with greater rank.
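For the concrete example, both orderings can be checked directly. A minimal C++11 sketch (my own, just using static_assert to inspect the result type):

#include <cstdint>
#include <type_traits>

int8_t m = 0;
int64_t n = 0;

// m is first promoted to int (integer promotion), then converted to the
// greater-rank type int64_t by the usual arithmetic conversions; the order
// of the operands doesn't matter.
static_assert(std::is_same<decltype(m + n), int64_t>::value, "m + n has type int64_t");
static_assert(std::is_same<decltype(n + m), int64_t>::value, "n + m has type int64_t");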

Related

C++: Can a + or - tell the compiler a value should be an int?

I'm a beginner with C++ so this may seem like a silly question. Please humour me!
I was using static_cast<int>(numeric_limits<unsigned char>::min()) and ::max() (according to this post), but also found here that a + or - sign can be used before the numeric_limits to achieve the same result. Am I correct in assuming that the +/- is basically just telling the compiler that the next thing is a number? Why would I use that as opposed to a static_cast?
Am I correct in assuming that the +/- is basically just telling the compiler that the next thing is a number?
No.
+ and - are simply arithmetic operators. For numeric operands, the unary minus operator changes the sign of its operand and the unary plus operator doesn't change the sign.
The reason why the result is int even though the operand was unsigned char is that operands of arithmetic operators whose type has a lower rank than int are always converted to int (or to unsigned int if int cannot represent all values of the operand's type). This special conversion is called integer promotion.
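For example, here is a small C++11 check (my own sketch, not from the original post) that makes the promotion visible:

#include <limits>
#include <type_traits>

// unary + triggers integer promotion: the unsigned char prvalue becomes int
static_assert(std::is_same<decltype(+std::numeric_limits<unsigned char>::min()), int>::value,
              "unary plus promotes unsigned char to int");
// the static_cast spelling trivially yields int as well
static_assert(std::is_same<decltype(static_cast<int>(std::numeric_limits<unsigned char>::min())), int>::value,
              "static_cast<int> yields int");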
Why would I use that as opposed to a static_cast?
It's fewer characters to type. If that's appealing to you, then maybe you would use it for that reason. If it isn't, then perhaps you should use static_cast instead.
The + sign you're seeing on that page is referred to as the "unary plus" operator. You should probably read up on arithmetic operators here to get a better understanding of what's happening.
That page has the following things to say about unary plus:
The built-in unary plus operator returns the value of its operand. The only situation where it is not a no-op is when the operand has integral type or unscoped enumeration type, which is changed by integral promotion, e.g., it converts char to int or if the operand is subject to lvalue-to-rvalue, array-to-pointer, or function-to-pointer conversion.
Unlike binary plus (i.e. where you would have two operands), unary plus is a slightly less common operator to see used in practice, but to be clear, there is nothing particularly special about it. Like other arithmetic operators, it can be overloaded, which means it's hard to say "universal" things about its behavior. You need to know the context in which it's being used to be able to predict what it's going to do. Start with a firm understanding of the various operators available and work your way down to this specific example.
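As an illustration of the overloading point, a hypothetical type (the name and behavior are made up for this sketch) can give unary plus a meaning of its own:

#include <iostream>

struct Celsius {
    double value;
    // hypothetical overload: unary + hands back the raw double instead of a Celsius
    double operator+() const { return value; }
};

int main() {
    Celsius c{21.5};
    std::cout << +c << '\n';  // calls the overload and prints 21.5; no promotion involved
}

That is why you need to know the operand's type before you can say what + will do.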

How to use bitwise operator with unsigned char data type? [duplicate]

In the following C snippet that checks if the first two bits of a 16-bit sequence are set:
bool is_pointer(unsigned short int sequence) {
    return (sequence >> 14) == 3;
}
CLion's Clang-Tidy is giving me a "Use of a signed integer operand with a binary bitwise operator" warning, and I can't understand why. Is unsigned short not unsigned enough?
The code for this warning checks if either operand to the bitwise operator is signed. It is not sequence causing the warning, but 14, and you can alleviate the problem by making 14 unsigned by appending a u to the end.
(sequence >> 14u)
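Applied to the original function, the suggested change would look like this (a sketch of the fix; the behavior is identical, only the literal's type changes):

bool is_pointer(unsigned short int sequence) {
    return (sequence >> 14u) == 3;
}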
This warning is bad. As Roland's answer describes, CLion is fixing this.
There is a check in clang-tidy that is called hicpp-signed-bitwise. This check follows the wording of the HIC++ standard. That standard is freely available and says:
5.6.1. Do not use bitwise operators with signed operands
Use of signed operands with bitwise operators is in some cases subject to undefined or implementation defined behavior. Therefore, bitwise operators should only be used with operands of unsigned integral types.
The authors of the HIC++ coding standard misinterpreted the intention of the C and C++ standards and either accidentally or intentionally focused on the type of the operands instead of the value of the operands.
The check in clang-tidy implements exactly this wording, in order to conform to that standard. That check is not intended to be generally useful, its only purpose is to help the poor souls whose programs have to conform to that one stupid rule from the HIC++ standard.
The crucial point is that by definition integer literals without any suffix are of type int, and that type is defined as being a signed type. HIC++ now wrongly concludes that positive integer literals might be negative and thus could invoke undefined behavior.
For comparison, the C11 standard says:
6.5.7 Bitwise shift operators
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
This wording is carefully chosen and emphasises that the value of the right operand is important, not its type. It also covers the case of a too large value, while the HIC++ standard simply forgot that case. Therefore, saying 1u << 1000u is ok in HIC++, while 1 << 3 isn't.
The best strategy is to explicitly disable this single check. There are several bug reports for CLion mentioning this, and it is getting fixed there.
Update 2019-12-16: I asked Perforce what the motivation behind this exact wording was and whether the wording was intentional. Here is their response:
Our C++ team who were involved in creating the HIC++ standard have taken a look at the Stack Overflow question you mentioned.
In short, referring to the object type in the HIC++ rule instead of the value is an intentional choice to allow easier automated checking of the code. The type of an object is always known, while the value is not.
HIC++ rules in general aim to be "decidable". Enforcing against the type ensures that a decidable check is always possible, i.e. directly where the operator is used or where a signed type is converted to unsigned.
The rationale explicitly refers to "possible" undefined behavior, therefore a sensible implementation can exclude:
constants unless there is definitely an issue and,
unsigned types that are promoted to signed types.
The best operation is therefore for CLion to limit the checking to non-constant types before promotion.
I think the integer promotion causes the warning here. Operands smaller than an int are widened to int for the arithmetic expression, and int is signed. So your code is effectively return ((int)sequence >> 14) == 3;, which leads to the warning. Try return ((unsigned)sequence >> 14) == 3; or return (sequence & 0xC000) == 0xC000; instead.
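For completeness, here are both suggested rewrites as full functions (a sketch; the function names are mine, and I've added u suffixes to the literals so that every operand of the bitwise operators is unsigned):

// variant 1: make the left operand unsigned before shifting
bool is_pointer_shift(unsigned short int sequence) {
    return ((unsigned)sequence >> 14u) == 3;
}

// variant 2: test the top two bits with a mask instead of a shift
bool is_pointer_mask(unsigned short int sequence) {
    return (sequence & 0xC000u) == 0xC000u;
}

Both assume a 16-bit unsigned short, as in the question.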

Why can't floating-point promotion work for arithmetic as well?

I have read a bit about floating-point promotion. I know that it doesn't apply to binary arithmetic operations, only to e.g. overload resolution. But why?
The C++ standard guarantees that double must be at least as precise as float [basic.fundamental.8], and floating-point promotion is required to keep the value unchanged [conv.fpprom]. Yet this question makes it very clear that it does not happen. Stroustrup, 4th edition, even has the subject errata-ed (here, Chapter 10, p. 267).
However, I cannot see any reason why the promotion could not be done as part of the usual arithmetic conversions [expr.10], even if all prerequisites are met. Is there any?
The latest C++14 working draft can be found here; the final version is purchase-only.
Converting a float to a double costs something, and it's likely more expensive than a short to int conversion (it needs several shifts and bit combining operations). And unlike e.g. short, the float type is considered something on which the processor can operate directly (just like it can on int).
Given the facts above, why should floating-point promotion happen when it's not necessary? That is, if you're adding two floats, why convert them to double, add them, and then convert the result back to float?(1)
Note that a floating-point promotion will indeed happen when you're adding mixed operands (e.g. a float + a double), by the very rule in C++14 [expr] that you're referring to.
(10.3) Otherwise, if either operand is double, the other shall be converted to double.
As per [conv.fpprom], this conversion from float to double is carried out by floating point promotion.
(1) Of course, it is perfectly possible this will happen internally if the processor cannot operate on floats directly, and [expr].12 explicitly allows that. But that very paragraph says
the types are not changed thereby.
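To see the rule above in code, here is a minimal C++11 sketch (mine, not from the standard) that checks the result types with static_assert:

#include <type_traits>

float a = 1.0f, b = 2.0f;
double d = 3.0;

// two floats: no promotion to double, the sum is still a float
static_assert(std::is_same<decltype(a + b), float>::value, "float + float stays float");
// mixed operands: the float is converted to double per [expr] (10.3)
static_assert(std::is_same<decltype(a + d), double>::value, "float + double becomes double");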
It does!
I don't know what you mean by "work", but floating-point promotion and the usual arithmetic conversions have different scopes of application.
usual arithmetic conversions: apply to the operands of binary operators that expect operands of arithmetic or enumeration type.
floating-point promotion: applies to prvalues of type float.
Some expressions, like a + b, qualify for both, while 1.0f qualifies only as a prvalue.
The standard you linked says (about usual arithmetic conversions):
(10.3) if either operand is double, the other shall be converted to double
...
(10.5) — Otherwise, the integral promotions shall be performed on both operands
It doesn't restrict how the other operand is converted to double, so I would assume that double + float follows the floating-point promotion rule.

What does the C++ standard say about results of casting value of a type that lies outside the range of the target type?

Recently I had to perform some data type conversions from float to 16 bit integer. Essentially my code reduces to the following
float f_val = 99999.0;
short int si_val = static_cast<short int>(f_val);
// si_val is now -32768
This input value was a problem, and in my code I had neglected to check the limits of the float value, so I can see my fault, but it made me wonder about the exact rules of the language when one has to do this kind of ungainly cast. I was slightly surprised to find that the value of the cast was -32768. Furthermore, this is the value I get whenever the value of the float exceeds the limits of a 16 bit integer. I have googled this but found a surprising lack of detailed info about it. The best I could find was the following from cplusplus.com:
Converting to int from some smaller integer type, or to double from float is known as promotion, and is guaranteed to produce the exact same value in the destination type. Other conversions between arithmetic types may not always be able to represent the same value exactly:
If the conversion is from a floating-point type to an integer type, the value is truncated (the decimal part is removed).
The conversions from/to bool consider false equivalent to zero (for numeric types) and to null pointer (for pointer types); and true equivalent to all other values.
Otherwise, when the destination type cannot represent the value, the conversion is valid between numerical types, but the value is implementation-specific (and may not be portable).
This suggestion that the results are implementation defined does not surprise me, but I have heard that cplusplus.com is not always reliable.
Finally, when performing the same cast from a 32 bit integer to a 16 bit integer (again with a value outside the 16 bit range), I saw results clearly indicating integer overflow. Although I was not surprised by this, it added to my confusion because of the inconsistency with the cast from the float type.
I have no access to the C++ standard, but a lot of C++ people here do, so I was wondering what the standard says on this issue. Just for completeness, I am using g++ version 4.6.3.
You're right to question what you've read. The conversion has no defined behaviour, which contradicts what you quoted in your question.
4.9 Floating-integral conversions [conv.fpint]
1 A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type. [ Note: If the destination type is bool, see 4.12. -- end note ]
One potentially useful permitted result that you might get is a crash.
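If you need the conversion to be well defined for any input, the usual remedy is to check or clamp the value before casting. Here is a sketch of one possible helper (the name and the NaN/clamping policy are my own choices, not anything mandated by the standard):

#include <cmath>
#include <cstdint>
#include <limits>

// Clamp a float into int16_t range before converting, so the
// floating-integral conversion never sees an unrepresentable value.
int16_t to_int16_clamped(float f) {
    if (std::isnan(f)) return 0;                                     // arbitrary policy for NaN
    if (f >= 32767.0f) return std::numeric_limits<int16_t>::max();   // at or above the top of the range
    if (f <= -32768.0f) return std::numeric_limits<int16_t>::min();  // at or below the bottom of the range
    return static_cast<int16_t>(f);  // in range: truncation toward zero, well defined
}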