I am trying to mimic the /*can-reference*/ exposition-only concept that can be found in the new C++20 iterator concepts.
Extracted from cppreference:
template <class I>
concept input_or_output_iterator =
    requires(I i) {
        { *i } -> /*can-reference*/;
    } &&
    std::weakly_incrementable<I>;
So I tried to implement the concepts as follows:
/**
 * @brief Alias used to express that a template parameter T is referenceable as T&
 */
template<typename T>
using template_arg_with_ref = T&;

/**
 * @brief Satisfied if and only if the type is referenceable (in particular, not void)
 */
template<typename T>
concept can_reference = requires() { typename template_arg_with_ref<T>; };

/**
 * @brief Satisfied if and only if the type is dereferenceable (in particular, not void)
 */
template<typename T>
concept dereferenceable = requires(T& t) {
    { *t } -> can_reference;
};
which compiles perfectly fine with Clang 15.0.0.
However, if I try to reduce the verbosity and define directly that a type T is referenceable, without the alias:
/**
 * @brief Satisfied if and only if the type is referenceable (in particular, not void)
 */
template<typename T>
concept can_reference = requires() { typename T&; };

/**
 * @brief Satisfied if and only if the type is dereferenceable (in particular, not void)
 */
template<typename T>
concept dereferenceable = requires(T& t) {
    { *t } -> can_reference;
};
the compiler complains and emits the following error:
error: expected ';' at end of requirement
concept can_reference = requires() { typename T&; };
^
;
Is it not possible to use T& directly as shown above?
Or am I just making a syntax error and don't know the correct way to write the concept without the alias?
The grammar for a type requirement is given in [expr.prim.req.type] and allows only for a (nested) type-name, not a more general type-id (which is used e.g. in template arguments).
A type-name (possibly prefixed with a nested-name-specifier) can only be the name of a type (including required template argument lists), but no other type specifiers can be included. Adding a reference qualifier, const/volatile qualifier or forming other compound types such as arrays, function types, etc. in the requirement is not allowed. This is in line with how typename can be used in other contexts. It always can only have a nested type-name follow it.
So you always need to go through an alias, or you can use the template argument of e.g. std::type_identity_t:
typename std::type_identity_t<T&>;
TL;DR: My question is: can requires {...} be used as a constexpr bool expression according to the standard?
I haven't found anything about that in the standard, but it simplifies a lot and results in much cleaner code. For example, in SFINAE, instead of enable_if, some ugly typename = decltype(declval<>()...), or something else, it is a simple, clean requires-expression.
This is my example:
#include <type_traits>

struct foo { typedef int type; };
struct bar { ~bar() = delete; };

/**
 * get_type trait, if T::type is valid, get_type<T>::type
 * equal to T::type, else void
 */

// T::type is valid
template<typename T, bool = requires { typename T::type; }>
struct get_type : std::type_identity<typename T::type> {};

// T::type is invalid
template<typename T>
struct get_type<T, false> : std::type_identity<void> {};

/// Template alias, this is buggy on GCC 11.1 -> internal compiler error
template<typename T>
using get_type_t = typename get_type<T>::type;

// Tests
static_assert(std::is_same_v<get_type_t<foo>, int>);
static_assert(std::is_same_v<get_type_t<bar>, void>);
/**
 * Destructible trait
 *
 * In libstdc++-v3 this is the implementation for the testing:
 *
 * struct __do_is_destructible_impl
 * {
 *     template <typename _Tp, typename = decltype(declval<_Tp&>().~_Tp())>
 *     static true_type __test(int);
 *
 *     template <typename>
 *     static false_type __test(...);
 * };
 */

// This is the same:
template<typename T>
struct my_destructible_impl : std::bool_constant< requires(T t) { t.~T(); } >
{};

// Tests
static_assert(my_destructible_impl<foo>::value);
static_assert(!my_destructible_impl<bar>::value);
I found the following wording, which (if I understand correctly) says that it will evaluate to true or false:
The substitution of template arguments into a requires-expression used in a declaration of a templated entity may result in the formation of invalid types or expressions in its requirements, or the violation of semantic constraints of those requirements. In such cases, the requires-expression evaluates to false and does not cause the program to be ill-formed. The substitution and semantic constraint checking proceeds in lexical order and stops when a condition that determines the result of the requires-expression is encountered. If substitution (if any) and semantic constraint checking succeed, the requires-expression evaluates to true.
So I would like to ask whether requires {...} can be safely used as a constexpr bool expression as in my example, or not. Based on cppreference.com I'm not 100% sure, but I feel like it can, and it compiles with Clang and GCC. However, I haven't found anything about it in the standard (or maybe I just can't use Ctrl+F properly...), and I haven't found anywhere that someone uses a requires-expression like this...
requires {...} is a requires-expression, and according to [expr.prim.req]/2 it is a prvalue:
A requires-expression is a prvalue of type bool whose value is
described below. Expressions appearing within a requirement-body are
unevaluated operands.
So yes, you can use it in a constexpr bool context.
It appears that you can put a lambda in a concept and then write code in it. Let us take this as an example. In real code I'd prefer the standard concepts for such constraints; bear in mind that this is only for the purposes of this example - godbolt
template<class T>
concept labdified_concept =
    requires {
        [](){
            T t, tt;           // default constructible
            T ttt{t};          // copy constructible
            tt = t;            // copy assignable
            tt = std::move(t); // move assignable
        };
    };
Instead of:
template<class T>
concept normal_concept =
    std::default_initializable<T> && std::movable<T> && std::copy_constructible<T>;
Is lambdification an improvement or a bad practice, also from a readability point of view?
This shouldn't be valid. The point of allowing lambdas in unevaluated contexts wasn't to suddenly allow SFINAE on statements.
We do have some wording in [temp.deduct]/9 that makes this clear:
A lambda-expression appearing in a function type or a template parameter is not considered part of the immediate context for the purposes of template argument deduction. [Note: The intent is to avoid requiring implementations to deal with substitution failure involving arbitrary statements. [Example:
template <class T>
auto f(T) -> decltype([]() { T::invalid; } ());
void f(...);
f(0); // error: invalid expression not part of the immediate context
template <class T, std::size_t = sizeof([]() { T::invalid; })>
void g(T);
void g(...);
g(0); // error: invalid expression not part of the immediate context
template <class T>
auto h(T) -> decltype([x = T::invalid]() { });
void h(...);
h(0); // error: invalid expression not part of the immediate context
template <class T>
auto i(T) -> decltype([]() -> typename T::invalid { });
void i(...);
i(0); // error: invalid expression not part of the immediate context
template <class T>
auto j(T t) -> decltype([](auto x) -> decltype(x.invalid) { } (t)); // #1
void j(...); // #2
j(0); // deduction fails on #1, calls #2
— end example] — end note]
We just don't have something equivalent for requirements. gcc's behavior is really what you'd expect:
template <typename T> concept C = requires { []{ T t; }; };
struct X { X(int); };
static_assert(!C<X>); // ill-formed
Because the body of the lambda is outside of the immediate context, so it's not a substitution failure, it's a hard error.
Ignoring the obvious readability flaws in this mechanism, it doesn't actually work. Consider the following:
template<labdified_concept T>
void foo(T t) {}
template<typename T>
void foo(T t) {}
The rules of concepts tell us that if a given T doesn't satisfy labdified_concept, then the other foo should be instantiated instead. But that's not what happens if we provide SS to such a template. Instead, we get a hard error because labdified_concept<SS> cannot be instantiated.
The stuff within a requires expression has special handling that allows certain types of errors to be regarded as failures to meet the requirement. But that handling doesn't apply to the body of a lambda. There, ill-formed code is ill-formed and thus you get a compile error when trying to instantiate it.
And even if it did work, it still wouldn't work well. Concepts have complex rules for subsumption, which allow one concept to be considered more specialized than another. This enables overloading on different concepts, letting the more constrained overload get called. For example, a concept that only requires default_initializable is more generic than one which requires default_initializable and movable. Thus, if a type fulfills both, the latter will be taken because it is more constrained.
But this only works because of the special rules for concepts. Hiding requirements in lambdas wouldn't allow this to work.
I was learning about the usage of enable_if and I stumbled upon the following code.
template <class T,
          typename std::enable_if<std::is_integral<T>::value,
                                  T>::type* = nullptr>
void do_stuff(T& t) {
    std::cout << "do_stuff integral\n";
    // an implementation for integral types (int, char, unsigned, etc.)
}
The thing that bothers me is that in the template parameter list, nullptr is used as a default argument for std::enable_if<std::is_integral<T>::value, T>::type*, which is a type.
I am not sure how we can assign a literal to a type. Shouldn't it be nullptr_t instead?
This template accepts a second, non-type parameter whose type is a pointer, typename std::enable_if<std::is_integral<T>::value, T>::type*, so nullptr is used as the default value for this pointer. Note that typename in this second parameter makes the compiler treat ::type as a type; it is not the beginning of a usual type template parameter like typename T.
nullptr is not a type; it's a value (of type std::nullptr_t, which can be converted to any pointer type IIRC). Otherwise, any standard usage of nullptr like:
int* a = nullptr;
would not work.
This is an unnamed default template parameter used to allow SFINAE in the template declaration instead of in the return type. It's basically like:
template<int=0>
void foo();
With the SFINAE trick/enable_if.
Consider the following code, which tries to determine existence of a nested typedef.
#include <type_traits>
#include <utility> // for std::declval

struct foo; // incomplete type

template<class T>
struct seq
{
    using value_type = T;
};

struct no_type {};

template<class T>
struct check_type : std::true_type {};

template<>
struct check_type<no_type> : std::false_type {};

template<class T>
struct has_value_type
{
    template<class U>
    static auto check(U const&) -> typename U::value_type;
    static auto check(...) -> no_type;

    static bool const value = check_type<decltype(check(std::declval<T>()))>::value;
    using type = has_value_type;
};

int main()
{
    char c[has_value_type<seq<foo>>::value ? 1 : -1];
    (void)c;
}
Now invoking has_value_type<seq<foo>>::value causes a compilation error: invalid use of incomplete type for seq<foo>::value_type.
Does decltype need a complete type in the expression? If not, how can I remove the error? I am using GCC 4.7 for compilation.
Your code is valid C++11, which specifies that a top-level function call appearing as a decltype operand does not introduce a temporary, even when the call is a prvalue.
This rule specifically was added to make code as yours valid and to prevent instantiations of the return type (if it is a class template specialization) otherwise needed to determine the access restrictions of a destructor.
decltype requires a valid expression, and you certainly can have a valid expression that involves incomplete types. The problem in your case however is
template<class U>
auto check(U const&) -> typename U::value_type;
which has return type foo when U is seq<foo>. You can't return an incomplete type by value, so you end up with an ill-formed expression. You can use a return type of e.g. void_<typename U::value_type> (with template<typename T> struct void_ {};) and your test appears to work.