Is there a tutorial somewhere that explains on which datatypes bitwise operations can be used? I don't know why Lady Ada thinks that I cannot bitwise OR two Standard.Integer...
$ gnatmake test.adb
gcc -c test.adb
test.adb:50:77: there is no applicable operator "Or" for type "Standard.Integer"
gnatmake: "test.adb" compilation error
Really? I excused the compiler for not being able to AND/OR enumerated data types. I excused the compiler for not being able to perform bitwise operations on Character type. I excused the compiler for not being able to convert from Unsigned_8 to Character in what I thought was the obvious way. But this is inexcusable.
Ada doesn't provide logical (bit-wise) operations on integer types, it provides them on modular types. Here's the section in the reference manual.
The "and", "or", and "xor" operators are defined for Boolean, for modular types, and for one-dimensional arrays of Boolean.
The language could have defined them for signed integer types, but that would create confusion given the variety of ways that signed integers can be represented. (Most implementations use two's-complement, but there are other possibilities.)
If you insist, you could define your own overloaded "or" operator, such as:
function "or"(Left, Right: Integer) return Integer is
type Unsigned_Integer is mod 2**Integer'Size;
begin
return Integer(Unsigned_Integer(Left) or Unsigned_Integer(Right));
end "or";
(I've verified that this compiles, but I haven't tested it, and I'd expect it to fail for negative values.)
But if you need to perform bitwise operations, you're better off using modular types or arrays of Boolean rather than signed integers.
I'm a beginner with C++ so this may seem like a silly question. Please humour me!
I was using static_cast<int>(numeric_limits<unsigned char>::min()) and ::max() (according to this post), but also found here that a + or - sign can be used before the numeric_limits to achieve the same result. Am I correct in assuming that the +/- is basically just telling the compiler that the next thing is a number? Why would I use that as opposed to a static_cast?
Am I correct in assuming that the +/- is basically just telling the compiler that the next thing is a number?
No.
+ and - are simply arithmetic operators. For numeric operands, the unary minus operator changes the sign of its operand and the unary plus operator leaves the sign unchanged.
The reason the result is int even though the operand was unsigned char is that every operand of an arithmetic operator whose type has lower rank than int is converted to int (or to unsigned int if int cannot represent all values of the operand's type). This special conversion is called integer promotion.
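A minimal sketch of that promotion (my own example, not from the linked post; it assumes only the standard <type_traits> header):

#include <type_traits>

int main() {
    unsigned char c = 255;
    // decltype inspects the type of +c without evaluating it:
    // unary plus triggers integer promotion, so +c has type int.
    static_assert(std::is_same<decltype(+c), int>::value,
                  "unsigned char promotes to int under unary plus");
    (void)c;  // silence unused-variable warnings
    return 0;
}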
Why would I use that as opposed to a static_cast?
It's fewer characters to type. If that's appealing to you, then maybe you would use it for that purpose. If it isn't, then perhaps you should use static_cast instead.
The + sign you're seeing on that page is referred to as the "unary plus" operator. You should probably read up on arithmetic operators here to get a better understanding of what's happening.
That page has the following things to say about unary plus:
The built-in unary plus operator returns the value of its operand. The only situation where it is not a no-op is when the operand has integral type or unscoped enumeration type, which is changed by integral promotion, e.g. it converts char to int, or if the operand is subject to lvalue-to-rvalue, array-to-pointer, or function-to-pointer conversion.
Unlike binary plus (i.e. where you would have two operands), unary plus is a slightly less common operator to see used in practice, but to be clear, there is nothing particularly special about it. Like other arithmetic operators, it can be overloaded, which means it's hard to say "universal" things about its behavior. You need to know the context in which it's being used to be able to predict what it's going to do. Start with a firm understanding of the various operators available and work your way down to this specific example.
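To illustrate that last point, a hedged sketch of a user-defined unary plus (the Meters type and its overload are invented for this example):

#include <iostream>

struct Meters {
    double value;
    // An overloaded unary plus can do whatever its author chooses;
    // here it simply exposes the underlying number.
    double operator+() const { return value; }
};

int main() {
    Meters m{3.5};
    std::cout << +m << '\n';  // prints 3.5 through the overload
    return 0;
}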
In the following C snippet that checks if the first two bits of a 16-bit sequence are set:
bool is_pointer(unsigned short int sequence) {
return (sequence >> 14) == 3;
}
CLion's Clang-Tidy is giving me a "Use of a signed integer operand with a binary bitwise operator" warning, and I can't understand why. Is unsigned short not unsigned enough?
The code for this warning checks whether either operand of the bitwise operator is signed. It is not sequence that causes the warning, but 14, and you can alleviate the problem by making 14 unsigned, appending a u suffix:
(sequence >> 14u)
This warning is bad. As Roland's answer describes, CLion is fixing this.
There is a check in clang-tidy that is called hicpp-signed-bitwise. This check follows the wording of the HIC++ standard. That standard is freely available and says:
5.6.1. Do not use bitwise operators with signed operands
Use of signed operands with bitwise operators is in some cases subject to undefined or implementation defined behavior. Therefore, bitwise operators should only be used with operands of unsigned integral types.
The authors of the HIC++ coding standard misinterpreted the intention of the C and C++ standards and either accidentally or intentionally focused on the type of the operands instead of the value of the operands.
The check in clang-tidy implements exactly this wording, in order to conform to that standard. That check is not intended to be generally useful, its only purpose is to help the poor souls whose programs have to conform to that one stupid rule from the HIC++ standard.
The crucial point is that by definition integer literals without any suffix are of type int, and that type is defined as being a signed type. HIC++ now wrongly concludes that positive integer literals might be negative and thus could invoke undefined behavior.
For comparison, the C11 standard says:
6.5.7 Bitwise shift operators
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
This wording is carefully chosen and emphasises that the value of the right operand is important, not its type. It also covers the case of a too large value, while the HIC++ standard simply forgot that case. Therefore, saying 1u << 1000u is ok in HIC++, while 1 << 3 isn't.
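To make the contrast concrete, a sketch (the comments restate the reasoning above; nothing here is quoted from either standard):

int main() {
    // Well-defined per C11 6.5.7: 3 is non-negative and less than the
    // width of int, yet hicpp-signed-bitwise flags it because the
    // literals 1 and 3 have the signed type int.
    int a = 1 << 3;

    // All operands unsigned, so HIC++ accepts it, but the behavior is
    // undefined whenever 1000 >= the width of unsigned int:
    // unsigned b = 1u << 1000u;  // deliberately left commented out

    (void)a;
    return 0;
}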
The best strategy is to explicitly disable this single check. There are several bug reports for CLion mentioning this, and it is getting fixed there.
Update 2019-12-16: I asked Perforce what the motivation behind this exact wording was and whether the wording was intentional. Here is their response:
Our C++ team who were involved in creating the HIC++ standard have taken a look at the Stack Overflow question you mentioned.
In short, referring to the object type in the HIC++ rule instead of the value is an intentional choice to allow easier automated checking of the code. The type of an object is always known, while the value is not.
HIC++ rules in general aim to be "decidable". Enforcing against the type ensures that a decidable check is always possible, i.e. directly where the operator is used or where a signed type is converted to unsigned.
The rationale explicitly refers to "possible" undefined behavior, therefore a sensible implementation can exclude:
constants unless there is definitely an issue and,
unsigned types that are promoted to signed types.
The best operation is therefore for CLion to limit the checking to non-constant types before promotion.
I think the integer promotion is what causes the warning here. Operands narrower than int are widened to int for the arithmetic expression, and int is signed. So your code is effectively return ((int)sequence >> 14) == 3;, which leads to the warning. Try return ((unsigned)sequence >> 14) == 3; or return (sequence & 0xC000) == 0xC000;.
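Both suggestions in compilable form (a sketch; the _shift/_mask function names are mine, not from the question):

// Shift variant: convert the operand to unsigned before the shift so
// no signed type is involved after promotion.
bool is_pointer_shift(unsigned short int sequence) {
    return (static_cast<unsigned>(sequence) >> 14u) == 3u;
}

// Mask variant: test the top two bits directly and skip the shift.
bool is_pointer_mask(unsigned short int sequence) {
    return (sequence & 0xC000u) == 0xC000u;
}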
I was trying to read through the C/C++ standard for this but I can't find the answer.
Say you have the following snippet:
int8_t m;
int64_t n;
And say that at some point you perform m + n. The addition itself is a binary operator, and I think the most likely thing to happen in such a case is:
Widen m to the same size as n; call the widened result m_prime
Perform m_prime + n
Return a result of type int64_t
I was trying to understand, however, whether the result would change if I had performed n + m instead of m + n (because maybe there could be a narrowing operation instead of a widening one).
I cannot find the part of the standard that clarifies this point (which I understand may sound trivial).
Can anyone point me to where I can find this in the standard? Or, more generally, what happens in situations like the one I described?
Personally I've been looking at the section "Additive operators", but it doesn't seem to explain what happens; pointer arithmetic is covered a bit, but there's no reference to any conversion rule being implicitly applied.
You can assume I'm talking about C++11, but any other standard I guess would apply the same rules.
See Clause 5 Expressions [expr]. Point 10 starts
Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:
The sub-points that follow say things like "If either operand is...", "...the other shall...", "If both operands ..." etc.
For your specific example, see 10.5.2
Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank shall be converted to the type of the operand with greater rank.
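A quick way to convince yourself of the symmetry (my own sketch; the static_asserts are not from the standard):

#include <cstdint>
#include <type_traits>

int8_t m = 0;
int64_t n = 0;

// The usual arithmetic conversions pick the operand of lesser rank
// (int8_t) and convert it to the type of greater rank (int64_t),
// regardless of which side of + it appears on. There is no narrowing.
static_assert(std::is_same<decltype(m + n), int64_t>::value, "m + n has type int64_t");
static_assert(std::is_same<decltype(n + m), int64_t>::value, "n + m has type int64_t");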
This is awkward, but the bitwise AND operator is defined in the C++ standard as follows (emphasis mine).
The usual arithmetic conversions are performed; the result is the bitwise AND function of its operands. The operator applies only to integral or unscoped enumeration operands.
This looks kind of meaningless to me. The "bitwise AND function" is not defined anywhere in the standard, as far as I can see.
I get that the AND function is well-understood and thus may not require explanation. The meaning of the word "bitwise" should also be rather clear: the function is applied to corresponding bits of its operands. However, what constitute the bits of the operands is not clear.
What gives?
This is underspecified. The issue of what the standard means when it refers to bit-wise operations is the subject of a few defect reports.
For example defect report 1857: Additional questions about bits:
The specification of the bitwise operations in 5.11 [expr.bit.and], 5.12 [expr.xor], and 5.13 [expr.or] uses the undefined term “bitwise” in describing the operations, without specifying whether it is the value or object representation that is in view.
Part of the resolution of this might be to define “bit” (which is otherwise currently undefined in C++) as a value of a given power of 2.
and the response was:
CWG decided to reformulate the description of the operations themselves to avoid references to bits, splitting off the larger questions of defining “bit” and the like to issue 1943 for further consideration.
and defect report 1943 says:
CWG decided at the 2014-06 (Rapperswil) meeting to address only a limited subset of the questions raised by issues 1857 and 1861. This issue is a placeholder for the remaining questions, such as defining a “bit” in terms of a value of 2^n, specifying whether a bit-field has a sign bit, etc.
We can see from this defect report 1796: Is all-bits-zero for null characters a meaningful requirement?, that this issue of what the standard means when it refers to bits affected/affects other sections as well:
According to 2.3 [lex.charset] paragraph 3,
The basic execution character set and the basic execution wide-character set shall each contain all the members of the basic source character set, plus control characters representing alert, backspace, and carriage return, plus a null character (respectively, null wide character), whose representation has all zero bits.
It is not clear that a portable program can examine the bits of the representation; instead, it would appear to be limited to examining the bits of the numbers corresponding to the value representation (3.9.1 [basic.fundamental] paragraph 1). It might be more appropriate to require that the null character value compare equal to 0 or '\0' rather than specifying the bit pattern of the representation.
There is a similar issue for the definition of shift, bitwise and, and bitwise or operators: are those specifications constraints on the bit pattern of the representation or on the values resulting from the interpretation of those patterns as numbers?
In this case the resolution was to change:
representation has all zero bits
to:
value is 0.
Note that, as mentioned in ecatmur's answer, the draft C++ standard defers to C standard section 5.2.4.2.1 in section 3.9.1 [basic.fundamental] paragraph 3, but it does not refer to section 6.5/4 of the C standard, which would at least tell us that the results are implementation-defined. I explain in my comment below that the C++ standard can only incorporate text from its normative references explicitly.
[basic.fundamental]/3 defers to C 5.2.4.2.1. It seems reasonable that the bitwise operators in C++ being underspecified should similarly defer to C, in this case 6.5.10/4:
The result of the binary & operator is the bitwise AND of the operands (that is, each bit in the result is set if and only if each of the corresponding bits in the converted operands is set).
Note that C 6.5/4 has:
Some operators (the unary operator ~, and the binary operators <<, >>, &, ^, and |, collectively described as bitwise operators) are required to have operands that have integer type. These operators yield values that depend on the internal representations of integers, and have implementation-defined and undefined aspects for signed types.
The internal representations of the integers are of course described in 6.2.6.2/1, /2.
The C++ Standard defines storage as a certain number of bits. The implementation may decide what meaning to attribute to a particular bit; that being said, binary AND is supposed to work on the conceptual 0s and 1s forming a particular type's representation.
3.9.1.7. (...) The representations of integral types shall define values by use of a pure binary numeration system.49 (...)
3.9.1, footnote 49) A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral power of 2, except perhaps for the bit with the highest position.
That means that, for whatever physical representation is used, binary AND acts according to the truth table for the AND function: for each bit position i, take bits A_i and B_i from the respective operands and produce 1 for the result bit R_i only if both are 1, otherwise produce 0. The resulting value is left for the implementation to interpret, but whatever is chosen, it has to be in line with other expectations regarding other binary operations like OR and XOR.
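A small sketch of that per-bit description, reimplementing AND one bit at a time purely for illustration (it leans on shifts and & for single bits, so it is circular as a definition, but it shows the truth-table behavior):

#include <cassert>
#include <climits>

// Naive per-bit AND over an unsigned value, mirroring the truth-table
// description above: R_i is 1 iff A_i and B_i are both 1.
unsigned bitwise_and(unsigned a, unsigned b) {
    unsigned result = 0;
    for (unsigned i = 0; i < sizeof(unsigned) * CHAR_BIT; ++i) {
        unsigned bit_a = (a >> i) & 1u;
        unsigned bit_b = (b >> i) & 1u;
        if (bit_a == 1u && bit_b == 1u)
            result |= 1u << i;
    }
    return result;
}

int main() {
    assert(bitwise_and(0xCu, 0xAu) == 0x8u);    // 1100 & 1010 == 1000
    assert(bitwise_and(0xFFu, 0x0Fu) == 0x0Fu);
    return 0;
}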
Legally, we could consider all bitwise operations to have undefined behaviour as they are not actually defined.
More reasonably, we are expected to apply common sense and refer to the common meanings of these operations, applying them to the bits of the operands (hence the term "bitwise").
But nothing actually states that. Shame my answer can't be considered normative wording.