Disable non-templated methods with concepts - c++

Is there a syntax to constrain a non-templated method? All the syntaxes I've tried on godbolt with the clang concepts branch and gcc fail to compile:
// these examples do not compile
template <bool B>
struct X
{
    requires B
    void foo() {}
};

template <class T>
struct Y
{
    requires (std::is_trivially_copyable_v<T>)
    auto foo() {}
};
The trick to make it compile is the same one you needed with SFINAE: make the methods templates, even though they really are not. Funnily enough, the constraint doesn't even need the method's template parameter; it works fine referring to the class template parameters alone. So I really hope there is a way to apply constraints with concepts without having to resort to the old hacks:
// old hacks
template <bool B>
struct X
{
    template <bool = B>
        requires B
    auto foo() {}
};

template <class T>
struct Y
{
    template <class = T>
        requires std::is_trivially_copyable_v<T>
    auto foo() {}
};
Real life example:
template <class T, bool Copyable_buf = false>
struct Buffer
{
    /* ... */
    requires Copyable_buf
    Buffer(const Buffer& other) {}
    /* ... */
};
template <class T>
using Copyable_buffer = Buffer<T, true>;

To support the other answer, here is the normative wording about this from the latest standard draft:
[dcl.decl]
1 A declarator declares a single variable, function, or type, within a declaration. The init-declarator-list appearing in a declaration is a comma-separated sequence of declarators, each of which can have an initializer.
init-declarator-list:
    init-declarator
    init-declarator-list , init-declarator
init-declarator:
    declarator initializer_opt
    declarator requires-clause
4 The optional requires-clause ([temp]) in an init-declarator or member-declarator shall not be present when the declarator does not declare a function ([dcl.fct]). When present after a declarator, the requires-clause is called the trailing requires-clause. The trailing requires-clause introduces the constraint-expression that results from interpreting its constraint-logical-or-expression as a constraint-expression. [ Example:
void f1(int a) requires true; // OK
auto f2(int a) -> bool requires true; // OK
auto f3(int a) requires true -> bool; // error: requires-clause precedes trailing-return-type
void (*pf)() requires true; // error: constraint on a variable
void g(int (*)() requires true); // error: constraint on a parameter-declaration
auto* p = new void(*)(char) requires true; // error: not a function declaration
— end example ]
As those two paragraphs specify, a trailing requires-clause can appear at the end of a function declarator. Its meaning is to constrain the function by the constant expression it takes as its argument (which can, of course, involve concepts).

Yes, there is!! The requires-clause can appear as the last element of a function declarator, in which case it lets you constrain non-templated methods (or free functions, for that matter):
// This works as expected! Yey!!
template <class T, bool Copyable_buf = false>
struct Buffer
{
    Buffer(const Buffer& other) requires Copyable_buf
    {
        // ...
    }
};

template <bool B>
struct X
{
    auto foo() requires B
    {
        // ...
    }
};

template <class T>
struct Y
{
    auto foo() requires std::is_trivially_copyable_v<T>
    {
        // ...
    }
};
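To see the effect, here is a minimal usage sketch (it assumes the Buffer and X definitions directly above): members whose trailing requires-clause is not satisfied simply drop out of the overload set.
#include <type_traits>

int main()
{
    X<true> xt;
    xt.foo();        // OK: B is true, so the constraint is satisfied

    X<false> xf;     // instantiating the class itself is still fine
    // xf.foo();     // error: associated constraints are not satisfied

    static_assert(std::is_copy_constructible_v<Buffer<int, true>>);
    static_assert(!std::is_copy_constructible_v<Buffer<int, false>>);
}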
This answer is empirical, based on testing on current implementations of concepts. Godbolt test. StoryTeller's answer gives the standard quotes confirming this behavior.

There was a change regarding this recently. See https://github.com/cplusplus/nbballot/issues/374
It states:
How constraints work with non-templated functions is still under heavy construction during this late stage in the process. While we have provided various comments that build in a direction where supporting such constructs (including ordering between multiple constrained functions based on their constraints) would become possible, we acknowledge that WG 21 might not find a solution with consensus in time for the DIS. We ask WG 21 to evaluate the risk of shipping the feature in such a state and consider removing the ability to declare such functions.
Does EWG want to consider this for C++20?
| F | A |
|----|----|
| 16 | 0 |
Motion passes. Hubert to coordinate with CWG.
Emphasis mine.
So it appears that constrained non-templated functions were removed from C++20 as of right now.
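For context, here is a brief sketch of the distinction under the final C++20 wording (illustrative names W, f, g, not from the original posts): a trailing requires-clause is allowed only on a templated function, so a non-template member of a class template keeps working, while an ordinary non-templated function cannot carry one.
// Sketch: in C++20 a trailing requires-clause may appear only on a templated function.
template <bool B>
struct W
{
    void f() requires B {}     // OK: a member of a class template is a templated function
};

// void g() requires true {}  // error in C++20: g is not a templated function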

Should `typename T::value_type` fail to compile when it is meant to be rejected by a constraint that comes later in the definition?

The following code:
#include <concepts>
template <typename T>
struct Foo
{
    template <std::convertible_to<typename T::value_type> U>
        requires requires { typename T::value_type; }
    void bar()
    {
    }

    template <typename U>
    void bar()
    {
    }
};

int main()
{
    auto foo = Foo<float> {};
    foo.bar<int>();
}
is rejected by GCC 11:
error: ‘float’ is not a class, struct, or union type
8 | void bar()
| ^~~
What I expected was that the first definition of bar would be rejected due to an unsatisfied constraint and the second one selected. However, apparently when GCC tries to substitute float for T it fails to compile typename T::value_type before it even looks at the constraint. I can get the behaviour I expected if I replace the definition with:
template <typename U> requires (requires { typename T::value_type; } && std::convertible_to<U, typename T::value_type>)
void bar()
{
}
which I find much less elegant.
Does the standard say that the first approach is illegal or is it a deficiency in the GCC implementation? If it's the former, is there a nicer way of writing this constraint (short of defining a new named concept like convertible_to_value_type_of)?
Edit: Just to clarify in the light of comments and the (now deleted) answer: I understand why this code would be rejected based on pre-C++20 rules. What I was getting at is that the addition of concepts to C++20 could have been an opportunity to relax the rules so that the compiler defers the verification of validity of something like typename T::value_type until it checks the constraints that might come in the rest of the definition. My question is really: were the rules relaxed in this manner?
The standard is quite clear that constraints are only substituted into at the point of use or when needed for declaration matching:
The type-constraints and requires-clause of a template specialization or member function are not instantiated along with the specialization or function itself, even for a member function of a local class; substitution into the atomic constraints formed from them is instead performed as specified in [temp.constr.decl] and [temp.constr.atomic] when determining whether the constraints are satisfied or as specified in [temp.constr.decl] when comparing declarations.
This is a GCC bug. GCC does appear to handle this correctly in a requires-clause, so that can serve as a workaround:
template <class U>
requires std::convertible_to<U, typename T::value_type>
void bar()
{
}
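For completeness, the named-concept route that the question mentions (convertible_to_value_type_of) could look like the following sketch; the name and exact shape are the question's hypothetical, not an existing library facility. Putting the requirement for T::value_type first means the conversion check is never reached when the nested type doesn't exist:
#include <concepts>

// Hypothetical helper concept (name borrowed from the question).
template <class U, class T>
concept convertible_to_value_type_of =
    requires { typename T::value_type; } &&
    std::convertible_to<U, typename T::value_type>;

template <typename T>
struct Foo
{
    template <convertible_to_value_type_of<T> U>
    void bar() {}

    template <typename U>
    void bar() {}
};

int main()
{
    Foo<float> foo;
    foo.bar<int>();  // float has no value_type, so the unconstrained overload is chosen
}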

Is lambdification of a concept an improvement or bad practice?

It appears that you can put a lambda in a concept and then write code inside it. Let us take this as an example. I would prefer the standard concepts for such constraints, and bear in mind that this is only for the purposes of this example - godbolt
template<class T>
concept labdified_concept =
    requires {
        [](){
            T t, tt;           // default constructible
            T ttt{t};          // copy constructible
            tt = t;            // copy assignable
            tt = std::move(t); // move assignable
        };
    };
Instead of:
template<class T>
concept normal_concept =
    std::default_initializable<T> && std::movable<T> && std::copy_constructible<T>;
Is lambdification an improvement or bad practice, also from a readability point of view?
This shouldn't be valid. The point of allowing lambdas in unevaluated contexts wasn't to suddenly allow SFINAE on statements.
We do have some wording in [temp.deduct]/9 that makes this clear:
A lambda-expression appearing in a function type or a template parameter is not considered part of the immediate context for the purposes of template argument deduction. [Note: The intent is to avoid requiring implementations to deal with substitution failure involving arbitrary statements. [Example:
template <class T>
auto f(T) -> decltype([]() { T::invalid; } ());
void f(...);
f(0); // error: invalid expression not part of the immediate context
template <class T, std::size_t = sizeof([]() { T::invalid; })>
void g(T);
void g(...);
g(0); // error: invalid expression not part of the immediate context
template <class T>
auto h(T) -> decltype([x = T::invalid]() { });
void h(...);
h(0); // error: invalid expression not part of the immediate context
template <class T>
auto i(T) -> decltype([]() -> typename T::invalid { });
void i(...);
i(0); // error: invalid expression not part of the immediate context
template <class T>
auto j(T t) -> decltype([](auto x) -> decltype(x.invalid) { } (t)); // #1
void j(...); // #2
j(0); // deduction fails on #1, calls #2
— end example] — end note]
We just don't have something equivalent for requirements. gcc's behavior is really what you'd expect:
template <typename T> concept C = requires { []{ T t; }; };
struct X { X(int); };
static_assert(!C<X>); // ill-formed
Because the body of the lambda is outside of the immediate context, it's not a substitution failure; it's a hard error.
Ignoring the obvious readability flaws in this mechanism, it doesn't actually work. Consider the following:
template<labdified_concept T>
void foo(T t) {}
template<typename T>
void foo(T t) {}
The rules of concepts tell us that if a given T doesn't satisfy labdified_concept, then the other foo should be selected instead. But that's not what happens if we provide a type SS that doesn't meet those requirements. Instead, we get a hard error because labdified_concept<SS> cannot be instantiated.
The stuff within a requires expression has special handling that allows certain types of errors to be regarded as failures to meet the requirement. But that handling doesn't apply to the body of a lambda. There, ill-formed code is ill-formed and thus you get a compile error when trying to instantiate it.
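A self-contained sketch of that failure mode (SS here is an assumed stand-in, much like the X type shown earlier; the concept and overloads restate the ones above):
#include <utility>

template <class T>
concept labdified_concept =
    requires {
        [](){
            T t, tt;           // default constructible
            T ttt{t};          // copy constructible
            tt = t;            // copy assignable
            tt = std::move(t); // move assignable
        };
    };

template <labdified_concept T>
void foo(T t) {}

template <typename T>
void foo(T t) {}

struct SS { SS(int) {} };      // not default constructible

int main()
{
    foo(SS{0});  // intended: fall back to the unconstrained foo;
                 // actual (per this answer): hard error inside the lambda body
}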
And even if it did work, it still wouldn't work. Concepts have complex rules for subsumption, which allow one concept to be considered more specialized than another. This allows overloading on different concepts, so that the more constrained overload gets called. For example, a concept that only requires default_initializable is more generic than one that requires default_initializable and movable. Thus, if a type fulfills both, the latter will be taken because it is more constrained.
But this only works because of the special rules for concepts. Hiding requirements in lambdas wouldn't allow this to work.
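A short sketch of that subsumption-based overloading (the concept and type names here are illustrative, not from the original post):
#include <concepts>
#include <iostream>

template <class T>
concept basic = std::default_initializable<T>;

template <class T>
concept strong = std::default_initializable<T> && std::movable<T>;  // subsumes basic

template <basic T>
void use(const T&) { std::cout << "basic\n"; }

template <strong T>
void use(const T&) { std::cout << "strong\n"; }

struct Pinned
{
    Pinned() = default;
    Pinned(Pinned&&) = delete;             // not movable
    Pinned& operator=(Pinned&&) = delete;
};

int main()
{
    use(42);        // int satisfies both concepts -> the more constrained "strong" wins
    use(Pinned{});  // only default-initializable -> falls back to "basic"
}
Hiding the individual requirements inside a lambda body would turn each concept into an opaque constraint, so neither would subsume the other and this overload resolution would be ambiguous.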

How do I define an out-of-line class template member function with a non-trailing decltype return type

template <class T>
struct foo {
    int x;
    decltype(x) f1();
};
It seems to be impossible to define f1 out-of-line. I have tried the following definitions, and none of them work:
template <class T> decltype(x) foo<T>::f1() {}
template <class T> auto foo<T>::f1() -> decltype(x) {}
template <class T> auto foo<T>::f1() { return x; }
template <class T> decltype(std::declval<foo<T>>().x) foo<T>::f1() {}
// This return type is copied from the gcc error message
template <class T> decltype (((foo<T>*)(void)0)->foo<T>::x) foo<T>::f1() {}
This isn't a problem in real code, because changing the in-class declaration of f1 to auto f1() -> decltype(x); allows the second definition, but I'm puzzled as to why that changes anything. Is it even possible to define the original f1 out-of-line?
As dumb as this might seem, I believe the following is correct:
template <class T>
struct foo {
    int x;
    decltype(x) f1();
};

template <class T>
int foo<T>::f1() { return 0; }
Clang accepts it, but GCC doesn't, so I am going to say that I think GCC has a bug. [Coliru link]
The issue is whether these two declarations of f1 declare the same function (more technically, the same member function of the same class template). This is governed by [basic.link]/9, according to which:
Two names that are the same (Clause 6) and that are declared in different scopes shall denote the same variable, function, type, template or namespace if
- both names have external linkage or else both names have internal linkage and are declared in the same translation unit; and
- both names refer to members of the same namespace or to members, not by inheritance, of the same class; and
- when both names denote functions, the parameter-type-lists of the functions (11.3.5) are identical; and
- when both names denote function templates, the signatures (17.5.6.1) are the same.
The requirements appear to be satisfied, provided that the return types are in fact the same (since the return type is part of the signature for a class member function template, according to [defns.signature.member.templ]). Since foo<T>::x is int, they are the same.
This would not be the case if the type of x were dependent. For example, GCC and Clang both reject the definition when the declaration of x is changed to typename identity<T>::type x;. [Coliru link] In that case, [temp.type]/2 would apply:
If an expression e is type-dependent (17.6.2.2), decltype(e) denotes a unique dependent type. Two such decltype-specifiers refer to the same type only if their expressions are equivalent (17.5.6.1). [ Note: However, such a type may be aliased, e.g., by a typedef-name. — end note ]
Perhaps GCC is in error for considering x to be type-dependent (it shouldn't be). However, this note suggests a workaround:
template <class T>
struct foo {
    int x;
    decltype(x) f1();
    using x_type = decltype(x);
};

template <class T>
typename foo<T>::x_type foo<T>::f1() { return 0; }
This works on both GCC and Clang. [Coliru link]
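And as the question already notes, the trailing-return-type form sidesteps the issue entirely, because in the out-of-line definition everything after foo<T>:: is looked up in the scope of the class, so x is found. A minimal sketch:
template <class T>
struct foo {
    int x;
    auto f1() -> decltype(x);  // trailing form from the question
};

// decltype(x) here is looked up in the scope of foo<T>, so it finds the member.
template <class T>
auto foo<T>::f1() -> decltype(x) { return x; }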
(I cheated... sort of)
Using MSVC I clicked on "quick action -> create function declaration" for that member function and got this:
template<class T>
decltype(x) foo<T>::f1()
{
    return x;
}

Russell's paradox in C++ templates [duplicate]

This question already has an answer here:
Fallback variadic constructor - why does this work?
Consider this program:
#include <iostream>
#include <type_traits>

using namespace std;

struct russell {
    template <typename barber,
              typename = typename enable_if<!is_convertible<barber, russell>::value>::type>
    russell(barber) {}
};

russell verify1() { return 42L; }
russell verify2() { return 42; }

int main ()
{
    verify1();
    verify2();
    cout << is_convertible<long, russell>::value;
    cout << is_convertible<int, russell>::value;
    return 0;
}
If some type barber is not convertible to russell, we attempt to create a paradox by making it convertible (by enabling a converting constructor).
The output is 00 with three popular compilers, even though the constructors are evidently working.
I suspect the behaviour should be undefined, but cannot find anything in the standard.
What should the output of this program be, and why?
During overload resolution, template argument deduction must instantiate the default argument to obtain a complete set of template arguments to instantiate the function template with (if possible). Hence the instantiation of is_convertible<int, russell> is necessitated, which internally invokes overload resolution. The constructor template in russell is in scope in the instantiation context of the default template argument.
The crux is that is_convertible<int, russell>::value evaluates the default template argument of russell, which itself names is_convertible<int, russell>::value.
is_convertible<int, russell>::value
        |
        v
russell::russell(barber)
        |
        v
is_convertible<int, russell>::value (not in scope)
Core issue 287's (unadopted) resolution seems to be the de facto rule abided by major compilers. Because the point of instantiation comes right before an entity, value's declaration is not in scope while we're evaluating its initializer; hence our constructor has a substitution failure and is_convertible in main yields false.
Issue 287 clarifies which declarations are in scope during such a recursive instantiation and which are not; here, value is not.
Clang and GCC do slightly differ on how they treat this situation. Take this example with a custom, transparent implementation of the trait:
#include <type_traits>

template <typename T, typename U>
struct is_convertible
{
    static void g(U);

    template <typename From>
    static decltype(g(std::declval<From>()), std::true_type{}) f(int);

    template <typename>
    static std::false_type f(...);

    static const bool value = decltype(f<T>()){};
};

struct russell
{
    template <typename barber,
              typename = std::enable_if_t<!is_convertible<barber, russell>::value>>
    russell(barber) {}
};

russell foo() { return 42; }

int main() {}
Clang compiles this silently. GCC complains about an infinite recursion chain: it seems to argue that value is indeed in scope in the recursive instantiation of the default argument, and so proceeds to instantiate the initializer of value again and again. However, Clang is arguably in the right, since both the current and the drafted wording in [temp.point]/4 mandate that the point of instantiation comes right before the nearest enclosing declaration, i.e. that very declaration is not yet considered part of the partial instantiation. Which kind of makes sense if you consider the above scenario. A workaround for GCC is to employ a declaration form in which the name is not declared until after the initializer is instantiated:
enum {value = decltype(f<T>()){}};
This compiles with GCC as well.