How does std::strong_ordering work only with zero?

I am researching the three-way comparison operator <=>. I see that it returns std::strong_ordering. However, I fail to understand how the compiler restricts comparisons to the literal 0 (so < 0 compiles, but so < 1 does not):
#include <compare>

int main()
{
    std::strong_ordering so = 55 <=> 10;
    so < 0; // Fine
    so < 1; // Fails
}
Similarly, so > 20 won't work either. The following also fails:
constexpr int Zero = 0;
so == Zero; // Error
so == 0;    // Fine
EDIT: An interesting observation (on the MSVC compiler). The following is valid:
so < nullptr

Using anything but a literal 0 to compare against std::strong_ordering is explicitly undefined behavior; see [cmp.categories.pre]/3 of the C++20 draft.
It is up to the compiler/standard library how or whether this is enforced/diagnosed.
One way of achieving a diagnostic for the UB without any compiler magic is to use std::nullptr_t as the argument type of the overloaded comparison operators of std::strong_ordering (whose parameter type is unspecified according to the standard). Any integral zero literal can be implicitly converted to std::nullptr_t, but literals with other values, or constant expressions that are not literals, cannot. See [conv.ptr]/1.
This is also mentioned in a draft note as possibility.
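For illustration, here is a minimal sketch of that technique with a toy type (not any real standard library implementation):

#include <cstddef>

// Toy ordering type: the comparison operators take std::nullptr_t,
// so only a literal 0 (or nullptr) is accepted on the right-hand side.
struct toy_ordering {
    int value; // negative: less, zero: equal, positive: greater

    friend constexpr bool operator<(toy_ordering o, std::nullptr_t) {
        return o.value < 0;
    }
    friend constexpr bool operator==(toy_ordering o, std::nullptr_t) {
        return o.value == 0;
    }
};

int main() {
    constexpr toy_ordering so{1}; // like 55 <=> 10
    static_assert(!(so < 0));     // OK: the literal 0 converts to std::nullptr_t
    // so < 1;                    // error: 1 does not convert to std::nullptr_t
    // constexpr int Zero = 0;
    // so == Zero;                // error: Zero is not a zero literal
}

This also explains the observation in the question: so < nullptr compiles, because nullptr already has type std::nullptr_t.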
Libc++ seems to instead use a member pointer to some hidden class, see here.
Libstdc++ seems to do something similar, using a hidden class type that needs to be constructed from a pointer to itself, see here.
However, neither of these implementations diagnoses all arguments that result in UB according to the standard. In particular, all of them also accept nullptr as an argument without a diagnostic: https://godbolt.org/z/esnvqR
I suppose full diagnosis of all the cases would require some compiler magic.
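A simplified sketch of that hidden-class idea (again, illustrative only, not the actual libc++/libstdc++ source):

namespace detail {
    struct unspec {
        // Only a null pointer constant (i.e. a zero literal) converts
        // implicitly to unspec*, so 0 is accepted but 1 is rejected.
        constexpr unspec(unspec*) noexcept {}
    };
}

struct ordering {
    int value;
    friend constexpr bool operator<(ordering o, detail::unspec) {
        return o.value < 0;
    }
};

// ordering{-1} < 0 compiles; ordering{-1} < 1 does not.
// ordering{-1} < nullptr also compiles, which matches the missing
// diagnostic mentioned above.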

int numeral -> pointer conversion rules

Consider the following code.
void f(double p) {}
void f(double* p) {}

int main()
{ f(1 - 1); return 0; }
MSVC 2017 doesn't compile that. It figures there is an ambiguous overloaded call, as 1-1 is the same as 0 and can therefore be converted to double*. Other tricks, like 0x0, 0L, or static_cast<int>(0), do not work either. Even declaring const int Zero = 0 and calling f(Zero) produces the same error. It only works properly if Zero is not const.
It looks like the same issue applies to GCC 5 and below, but not GCC 6. I am curious if this is a part of C++ standard, a known MSVC bug, or a setting in the compiler. A cursory Google did not yield results.
MSVC considers 1-1 to be a null pointer constant. This was correct by the standard for C++03, where all integral constant expressions with value 0 were null pointer constants, but it was changed so that only zero integer literals are null pointer constants for C++11 with CWG issue 903. This is a breaking change, as you can see in your example and as is also documented in the standard, see [diff.cpp03.conv] of the C++14 standard (draft N4140).
MSVC applies this change only in conformance mode. So your code will compile with the /permissive- flag, but I think the change was implemented only in MSVC 2019, see here.
In the case of GCC, GCC 5 defaults to C++98 mode, while GCC 6 and later default to C++14 mode, which is why the change in behavior seems to depend on the GCC version.
If you call f with a null pointer constant as the argument, then the call is ambiguous, because the null pointer constant can be converted to the null pointer value of any pointer type, and this conversion has the same rank as the conversion of int (or any integral type) to double.
The compiler works correctly, in accordance with [over.match] and [conv], more specifically [conv.fpint] and [conv.ptr].
A standard conversion sequence is [blah blah] Zero or one [...] floating-integral conversions, pointer conversions, [...].
and
A prvalue of an integer type or of an unscoped enumeration type can be converted to a prvalue of a floating-point type. The result is exact if possible [blah blah]
and
A null pointer constant is an integer literal with value zero or [...]. A null pointer constant can be converted to a pointer type; the result is the null pointer value of that type [blah blah]
Now, overload resolution chooses the best match among all candidate functions (which, as a fun feature, need not even be accessible at the call location!). The best match is the one with exact parameters or, alternatively, the fewest possible conversions. Zero or one standard conversions may happen (for every parameter), and zero is "better" than one.
(1-1) is a constant expression with value 0, which MSVC (following the C++03 rules) treats as a null pointer constant.
You can convert that zero to either double or double* (or nullptr_t) with exactly one conversion. So, assuming more than one of these functions is declared (as in the example), there is more than one candidate, all candidates are equally good, and there is no best match. The call is ambiguous, and the compiler is right to complain.
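For illustration, here is a sketch of how each overload can be selected unambiguously under the C++11 rules:

void f(double p) {}
void f(double* p) {}

int main() {
    f(0.0);                     // exact match: calls f(double)
    f(nullptr);                 // only viable for f(double*)
    f(static_cast<double*>(0)); // explicit conversion: calls f(double*)

    int zero = 0;
    f(zero);                    // zero is not a null pointer constant,
                                // so only int -> double applies: f(double)
    return 0;
}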

Why is an explicit construction considered an (implicit) narrowing conversion?

Consider the following code:
#include <cstdint>

uint32_t foo(uint64_t x) {
    auto y = uint32_t{ x };
    return y;
}
It is considered a narrowing conversion the compiler feels compelled to warn me about (GCC 9) or even declare an error (clang 9): GodBolt.
My questions:
Why is uint32_t { x } less explicit than static_cast<uint32_t>(x)?
Why is this more severe with clang than with GCC, meriting an error?
Why is uint32_t { x } less explicit than static_cast<uint32_t>(x)?
It's not less explicit, it's just not allowed. Narrowing conversions are not allowed when doing direct or copy list initialization. When you do auto y = uint32_t { x }; you are direct-list-initializing y with a narrowing conversion. (Guaranteed Copy elision means there is no temporary here anymore)
Why is this more severe with clang than with GCC, meriting an error?
It's up to the implementers. Apparently clang wants to be more strict and issue a hard error, but both are fine. The standard only requires that a diagnostic message be given, and either a warning or an error covers that.
Adding to @NathanOliver's answer - the warnings and errors go away if we construct the 32-bit integer like so:
uint32_t foo(uint64_t x) {
    auto y = uint32_t(x);
    return y;
}
So, (x) and {x} here are not semantically equivalent (even if the same constructor would end up getting called, had it been a class). The no-narrowing guarantee in the standard apparently only applies to list-initialization, IIANM.
So, take this as motivation for using curly-brace initialization if you want to be extra careful (or parentheses if you don't want to be bothered).
From https://en.cppreference.com/w/cpp/language/list_initialization:
Narrowing conversions
list-initialization limits the allowed implicit conversions by prohibiting the following:
...
- conversion from integer or unscoped enumeration type to integer type that cannot represent all values of the original, except where the source is a constant expression whose value can be stored exactly in the target type
This sounds like clang is more conformant than gcc here (though beware that I'm not a language lawyer)*: the standard mandates that, if you use initializer lists, you aren't in any danger of a narrowing conversion. This is a conscious design choice to remedy the rather promiscuous implicit conversions built into the language - and the admittedly clear way that you spell out the conversion in your example is a collateral annoyance.
Edit: * and it didn't take long - it seems "not allowed" at cppreference translates to "implementer dependent" in the standard, as per NathanOliver's answer. That's what I get for not checking the source.
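To summarize the rule quoted above, here is a small sketch contrasting the forms (the function name bar is made up for this example):

#include <cstdint>

uint32_t bar(uint64_t x) {
    // auto a = uint32_t{ x };          // ill-formed: narrowing in list-initialization
    auto b = uint32_t(x);               // OK: functional cast, behaves like static_cast
    auto c = static_cast<uint32_t>(x);  // OK: explicit conversion
    auto d = uint32_t{ 42 };            // OK: constant expression that fits exactly
    return b + c + d;
}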

Endianness in constexpr

I want to create a constexpr function that returns the endianness of the system, like so:
constexpr bool IsBigEndian()
{
    constexpr int32_t one = 1;
    return (reinterpret_cast<const int8_t&>(one) == 0);
}
Now, since the function will get executed at compile time rather than on the actual target machine, what guarantee does the C++ spec give to make sure that the correct result is returned?
None. In fact, the program is ill-formed. From [expr.const]:
A conditional-expression e is a core constant expression unless the evaluation of e, following the rules of the abstract machine (1.9), would evaluate one of the following expressions:
- [...]
- a reinterpret_cast.
- [...]
And, from [dcl.constexpr]:
For a constexpr function or constexpr constructor that is neither defaulted nor a template, if no argument values exist such that an invocation of the function or constructor could be an evaluated subexpression of a core constant expression (5.20), or, for a constructor, a constant initializer for some object (3.6.2), the program is ill-formed; no diagnostic required.
The way to do this is just to hope that your compiler is nice enough to provide macros for the endianness of your machine. For instance, on gcc, I could use __BYTE_ORDER__:
constexpr bool IsBigEndian() {
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    return false;
#else
    return true;
#endif
}
As stated by Barry, your code is not legal C++. However, even if you took away the constexpr part, it would still not be legal C++. Your code violates strict aliasing rules and therefore represents undefined behavior.
Indeed, there is no way in C++ to detect the endianness of an object without invoking undefined behavior. Casting it to a char* doesn't work, because the standard doesn't require big or little endian order. So while you could read the data through a byte, you would not be able to legally infer anything from that value.
And type punning through a union fails because you're not allowed to type pun through a union in C++ at all. And even if you did... again, C++ does not restrict implementations to big or little endian order.
So as far as C++ as a standard is concerned, there is no way to detect this, whether at compile-time or runtime.
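For what it's worth, this changed after the answer above was written: C++20 added std::endian in <bit>, which makes the check possible at compile time without undefined behavior:

#include <bit>

constexpr bool IsBigEndian() {
    // std::endian::native equals std::endian::big, std::endian::little,
    // or neither of the two on mixed-endian platforms.
    return std::endian::native == std::endian::big;
}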

Proper way to compare BitmaskType with zero?

According to the BitmaskType concept, the implementation has to ensure that the following statement is well-formed (listed in §17.5.2.1.3.4):
The value Y is set in the object X if the expression X & Y is nonzero.
where X and Y are of concept-type BitmaskType.
When trying the following simple code snippet with gcc 4.7 I get template deduction errors:
#include <future>

int main() {
    (std::launch::async & std::launch::async) != 0;
}
Error:
error: no match for 'operator!=' in '(std::launch)1 != 0'
... followed by tons of deduction errors
Is this a bug in gcc or am I just getting something wrong here? If yes, what is the proper way to perform this kind of check?
I already checked gcc buglist but couldn't find anything covering this topic.
The members of enum classes are not meant to convert to int implicitly or vice versa. You can either make sure that your bitmask type is converted to int or use the zero value of the enum class. I'd think the latter is preferable:
(std::launch::async & std::launch::async) != std::launch()
(I have also added parentheses around the bitwise-and operation, since the comparison has higher precedence than the bitwise and, and it doesn't really make much sense to bitwise-and a Boolean value with a bitmask type.)
The easiest way to see this is 7.2 [dcl.enum] paragraph 9:
... Note that this implicit enum to int conversion is not provided for a scoped enumeration: ...
This is, however, within a non-normative example. Tracking down the rules for scoped enumerations in the normative text would require ruling out all the cases where conversions are allowed, and I currently don't quite fancy this exercise.
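A minimal complete sketch of both options mentioned above (compare against the enum's zero value, or convert explicitly first):

#include <future>

int main() {
    auto mask = std::launch::async & std::launch::async;

    // Option 1: compare against the value-initialized (zero) enum value.
    bool set1 = mask != std::launch();

    // Option 2: convert explicitly before comparing with 0.
    bool set2 = static_cast<int>(mask) != 0;

    return (set1 && set2) ? 0 : 1;
}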

Are user-defined-literals resolved at compile-time or runtime?

I wonder, because predefined literal suffixes like ULL, f, etc. are obviously resolved at compile time. The standard (2.14.8 [lex.ext]) doesn't seem to specify this, but it seems to tend towards runtime:
[2.14.8 / 2]
A user-defined-literal is treated as a call to a literal operator or literal operator template (13.5.8). To
determine the form of this call for a given user-defined-literal L with ud-suffix X, the literal-operator-id
whose literal suffix identifier is X is looked up in the context of L using the rules for unqualified name
lookup (3.4.1). Let S be the set of declarations found by this lookup. S shall not be empty.
(emphasis mine.)
However, to me this seems to introduce unnecessary runtime overhead, as literal suffixes can only be appended to values that are available at compile time anyway, like 13.37f or "hello"_x (where _x is a user-defined-literal suffix).
Then, we got the templated user-defined-literal, that never really gets defined in the standard AFAICS (i.e., no example is given, please prove me wrong). Is that function somehow magically invoked at compile time or is it still runtime?
Yes, you get a function call. But function calls can be compile time constant expressions because of constexpr literal operator functions.
For an example, see this one. As another example, to show the advanced form of constexpr computations allowed by the FDIS, you can have compile-time base-26 literals:
#include <cstddef>

typedef unsigned long long ull;

constexpr ull base26(char const *s, ull ps) {
    return (*s && !(*s >= 'a' && *s <= 'z')) ? throw "bad char!" :
           (!*s ? ps : base26(s + 1, (ps * 26ULL) + (*s - 'a')));
}

constexpr ull operator "" _26(char const *s, std::size_t len) {
    return base26(s, 0);
}
Saying "bcd-"_26 will evaluate a throw-expression, and thereby cause the return value to become non-constant. In turn, it causes any use of "bcd-"_26 as a constant expression to become ill-formed, and any non-constant use to throw at runtime. The allowed form "bcd"_26 evaluates to a constant expression of the respective computed value.
Note that reading from string literals is not explicitly allowed by the FDIS; however, it presents no problem and GCC supports it (the character lvalue reference is a constant expression and the character's value is known at compile time). IMO, if one squints, one can read the FDIS as allowing this.
Then, we got the templated user-defined-literal, that never really gets defined in the standard AFAICS (i.e., no example is given, please prove me wrong)
The treatment of literals as invoking literal operator templates is defined in 2.14.8. You find more examples at 13.5.8 that detail on the literal operator function/function templates itself.
Is that function somehow magically invoked at compile time or is it still runtime?
The keyword is function invocation substitution. See 7.1.5.
@Johannes S is correct, of course, but I'd like to add clearly (since I faced this) that even for constexpr user-defined literals, the parameters are not considered constexpr or compile-time constants, for example in the sense that they cannot be used as integer constants for templates.
In addition, only uses like the following will actually force compile-time evaluation:
#include <exception>

constexpr long long operator "" _xx(unsigned long long v) {
    return (v > 100) ? throw std::exception() : v;
}

constexpr auto a = 150_xx;
So, that will not compile. But this will:
std::cout << 150_xx << std::endl;
And the following is not allowed:
constexpr long long operator "" _xx(unsigned long long v) {
    return some_trait<v>::value; // error: v is not a constant expression here
}
That's annoying, but natural considering that (other) constexpr functions can also be called at runtime.
Only for integer user-defined literals is it possible to force compile-time processing, by using the template form. Examples in my question and self answer: https://stackoverflow.com/a/13869688/1149664
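To illustrate that last point, here is a hedged sketch of the literal operator template form (the suffix _cx and the helper struct are made up for this example; it requires C++14 for the loop):

// The digits of the literal arrive as a template parameter pack,
// so the computation is forced to happen at compile time.
template <char... Cs>
constexpr unsigned long long operator "" _cx() {
    const char digits[] = {Cs...};  // e.g. '1', '5', '0' for 150_cx
    unsigned long long value = 0;
    for (char c : digits)
        value = value * 10 + static_cast<unsigned long long>(c - '0');
    return value;
}

template <unsigned long long N>
struct as_template_arg { static constexpr unsigned long long value = N; };

// Unlike the runtime parameter v above, the result is usable as a
// template argument:
static_assert(as_template_arg<150_cx>::value == 150, "compile time");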