It appears that you can put a lambda in a concept and then write code in it. Let us take this as an example (I would prefer the standard concepts for such requirements; bear in mind that this is only for the purposes of this example - godbolt):
template<class T>
concept lambdified_concept =
requires {
[](){
T t, tt; // default constructible
T ttt{t}; // copy constructible
tt = t; // copy assignable
tt = std::move(t); // move assignable
};
};
Instead of:
template<class T>
concept normal_concept =
std::default_initializable<T> && std::movable<T> && std::copy_constructible<T>;
Is lambdification an improvement or bad practice, including from a readability point of view?
This shouldn't be valid. The point of allowing lambdas in unevaluated contexts wasn't to suddenly allow SFINAE on statements.
We do have some wording in [temp.deduct]/9 that makes this clear:
A lambda-expression appearing in a function type or a template parameter is not considered part of the immediate context for the purposes of template argument deduction. [Note: The intent is to avoid requiring implementations to deal with substitution failure involving arbitrary statements. [Example:
template <class T>
auto f(T) -> decltype([]() { T::invalid; } ());
void f(...);
f(0); // error: invalid expression not part of the immediate context
template <class T, std::size_t = sizeof([]() { T::invalid; })>
void g(T);
void g(...);
g(0); // error: invalid expression not part of the immediate context
template <class T>
auto h(T) -> decltype([x = T::invalid]() { });
void h(...);
h(0); // error: invalid expression not part of the immediate context
template <class T>
auto i(T) -> decltype([]() -> typename T::invalid { });
void i(...);
i(0); // error: invalid expression not part of the immediate context
template <class T>
auto j(T t) -> decltype([](auto x) -> decltype(x.invalid) { } (t)); // #1
void j(...); // #2
j(0); // deduction fails on #1, calls #2
— end example] — end note]
We just don't have something equivalent for requirements. gcc's behavior is really what you'd expect:
template <typename T> concept C = requires { []{ T t; }; };
struct X { X(int); };
static_assert(!C<X>); // ill-formed
The body of the lambda is outside of the immediate context, so it's not a substitution failure; it's a hard error.
Ignoring the obvious readability flaws in this mechanism, it doesn't actually work. Consider the following:
template<lambdified_concept T>
void foo(T t) {}
template<typename T>
void foo(T t) {}
The rules of concepts tell us that if a given T doesn't satisfy lambdified_concept, then the other foo should be instantiated instead. But that's not what happens if we provide a type SS that fails the requirements. Instead, we get a hard error because lambdified_concept<SS> cannot be instantiated.
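For instance, with a hypothetical SS that is not copy constructible (this definition is only illustrative):
struct SS {
    SS() = default;
    SS(const SS&) = delete; // fails the "copy constructible" line in the lambda body
};

// foo(SS{}); // hard error: instantiating the lambda body in lambdified_concept<SS>
//            // is ill-formed, instead of quietly selecting the unconstrained foo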
The stuff within a requires expression has special handling that allows certain types of errors to be regarded as failures to meet the requirement. But that handling doesn't apply to the body of a lambda. There, ill-formed code is ill-formed and thus you get a compile error when trying to instantiate it.
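For contrast, here is a sketch of the same checks written directly as requirements; these stay in the immediate context, so a failing check makes the concept false instead of the program ill-formed (the NoCopy type is only illustrative):
#include <utility>

template <class T>
concept requirements_concept = requires(T t, T tt) {
    T{};               // default constructible
    T(t);              // copy constructible
    tt = t;            // copy assignable
    tt = std::move(t); // move assignable
};

struct NoCopy {
    NoCopy() = default;
    NoCopy(const NoCopy&) = delete;
};
static_assert(!requirements_concept<NoCopy>); // unsatisfied requirement, not a hard error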
And even if it did work, it still doesn't work. Concepts have complex rules for subsumption, which allow one concept to be considered more specialized than another. This allows overloading on different concepts, letting the more constrained overload get called. For example, a concept that only requires default_initializable is more generic than one which requires default_initializable and movable. Thus, if a type fulfills both, the overload using the latter will be taken because it is more constrained.
But this only works because of the special rules for concepts. Hiding requirements in lambdas wouldn't allow this to work.
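A minimal sketch of that subsumption-based overloading with ordinary concepts (the pick overloads are only illustrative):
#include <concepts>

template <std::default_initializable T>
int pick(T) { return 1; } // less constrained

template <class T>
    requires std::default_initializable<T> && std::movable<T>
int pick(T) { return 2; } // subsumes the first overload's constraint

int main() {
    return pick(0); // int satisfies both; the more constrained overload returns 2
}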
I have been experimenting with a system for composable pipelines, which involves a set of 'stages', which may be templated. Each stage handles its own setup, execution and cleanup, and template deduction is used to build a minimal list of 'state' used by the pipeline. This requires quite a lot of boilerplate template code, which has shown up some apparently incongruous behaviour. Despite successful experiments, actually rolling it into our code-base resulted in errors due to invalid instantiations.
It took some time to track down the difference between the toy (working) solution and the richer version, but eventually it was narrowed down to an explicit namespace specification.
template<typename KeyType = bool>
struct bind_stage
{
static_assert(!std::is_same<KeyType, bool>::value, "Nope, someone default instantiated me");
};
template<typename BoundStage, typename DefaultStage>
struct test_binding {};
template<template<typename...>class StageTemplate, typename S, typename T>
struct test_binding <StageTemplate<S>, StageTemplate<T>> {};
template<typename T>
auto empty_function(T b) {}
Then our main:
int main()
{
auto binder = test_binding<bind_stage<int>, bind_stage<>>();
//empty_function(binder); // Fails to compile
::empty_function(binder); // Compiles happily
return 0;
}
Now, I'm not sure if I expect the failure or not. On the one hand, we create a test_binding<bind_stage<int>, bind_stage<bool>>, which obviously includes the invalid instantiation bind_stage<bool> as part of its type definition. That should fail to compile.
On the other, it's included purely as a name, not a definition. In this situation it could simply be a forward declared template and we'd expect it to work as long as nothing in the outer template actually refers to it specifically.
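As a sketch of that distinction (the incomplete template below is hypothetical): naming a specialization does not, by itself, require instantiating it:
template <typename T> struct incomplete; // declared, never defined

using handle = incomplete<int>*; // OK: the specialization is only named
// incomplete<int> x;            // error: would require the (missing) definition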
What I didn't expect was two different behaviours depending on whether I added a (theoretically superfluous) global namespace specifier.
I have tried this code in Visual Studio, Clang and GCC. All have the same behaviour, which makes me lean away from this being a compiler bug. Is this behaviour explained by something in the C++ standard?
EDIT:
Another example from Daniel Langr which makes less sense to me:
template <typename T>
struct X {
static_assert(sizeof(T) == 1, "Why doesn't this happen in both cases?");
};
template <typename T>
struct Y { };
template <typename T>
void f(T) { }
int main() {
auto y = Y<X<int>>{};
// f(y); // triggers static assertion
::f(y); // does not
}
Either X<int> is instantiated while defining Y<X<int>> or it is not. What does qualifying the function call or not have to do with anything?
Templates are instantiated when needed. So why does the compiler instantiate X<int> when one performs the unqualified call f(Y<X<int>>{}), but not when the call to f is qualified, as in ::f(Y<X<int>>{})?
The reason is Argument-Dependent Lookup (ADL) (see [basic.lookup.argdep]), which only takes place for unqualified calls.
In the case of the call f(Y<X<int>>{}), the compiler must look in the definition of X<int> for declarations of friend functions, which forces X<int> to be instantiated:
template <typename T>
struct X {
//such function will participate to the overload resolution
//to determine which function f is called in "f(Y<X<int>>{})"
friend void f(X&){}
};
ADL involving the type of a template argument of the specialization that is the type of the function argument (ouch...) is so unloved (because it almost only causes bad surprises) that there is a proposal to remove it: P0934
The following code:
#include <concepts>
template <typename T>
struct Foo
{
template <std::convertible_to<typename T::value_type> U> requires requires { typename T::value_type; }
void bar()
{
}
template <typename U>
void bar()
{
}
};
int main()
{
auto foo = Foo<float> {};
foo.bar<int>();
}
is rejected by GCC 11:
error: ‘float’ is not a class, struct, or union type
8 | void bar()
| ^~~
What I expected to happen was for the first definition of bar to be rejected due to an unsatisfied constraint, and for the second one to be selected. However, apparently, when GCC tries to substitute float for T, it fails to compile typename T::value_type before looking at the constraint. I can get the behaviour that I expected if I replace the definition with:
template <typename U> requires (requires { typename T::value_type; } && std::convertible_to<U, typename T::value_type>)
void bar()
{
}
which I find much less elegant.
Does the standard say that the first approach is illegal or is it a deficiency in the GCC implementation? If it's the former, is there a nicer way of writing this constraint (short of defining a new named concept like convertible_to_value_type_of)?
Edit: Just to clarify in the light of comments and the (now deleted) answer: I understand why this code would be rejected based on pre-C++20 rules. What I was getting at is that the addition of concepts to C++20 could have been an opportunity to relax the rules so that the compiler defers the verification of validity of something like typename T::value_type until it checks the constraints that might come in the rest of the definition. My question is really: were the rules relaxed in this manner?
The standard is quite clear that constraints are only substituted into at the point of use or when needed for declaration matching:
The type-constraints and requires-clause of a template specialization or member function are not instantiated along with the specialization or function itself, even for a member function of a local class; substitution into the atomic constraints formed from them is instead performed as specified in [temp.constr.decl] and [temp.constr.atomic] when determining whether the constraints are satisfied or as specified in [temp.constr.decl] when comparing declarations.
This is a GCC bug. It appears that GCC does handle this correctly in a requires-clause so that can be a workaround:
template <class U>
requires std::convertible_to<U, typename T::value_type>
void bar()
{
}
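For completeness, here is a sketch of the full example with that workaround applied. Since the constraint is only substituted into when satisfaction is checked, Foo<float> falls through to the unconstrained overload:
#include <concepts>

template <typename T>
struct Foo
{
    // The constraint lives in the requires-clause, so substitution into it
    // happens only when satisfaction is checked, not when Foo<T> is instantiated.
    template <class U>
        requires std::convertible_to<U, typename T::value_type>
    void bar() {}

    template <typename U>
    void bar() {}
};

int main()
{
    auto foo = Foo<float>{};
    foo.bar<int>(); // float has no value_type: the constraint is unsatisfied
                    // and the unconstrained overload is chosen
}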
Is there a syntax to constrain a non-templated method? All the syntaxes I've tried on godbolt with the clang concepts branch and gcc fail to compile:
// these examples do not compile
template <bool B>
struct X
{
requires B
void foo() {}
};
template <class T>
struct Y
{
requires (std::is_trivially_copyable_v<T>)
auto foo() {}
};
The trick to make it compile is the same trick you needed with SFINAE: make the methods templates, even though they really are not. And funnily enough, the constraint doesn't seem to depend on the method's template parameters at all; it works fine referring to the class template parameters alone, so I really hope there is a way to apply such constraints with concepts without having to resort to the old hacks:
// old hacks
template <bool B>
struct X
{
template <bool = B>
requires B
auto foo() {}
};
template <class T>
struct Y
{
template <class = T>
requires std::is_trivially_copyable_v<T>
auto foo() {}
};
Real life example:
template <class T, bool Copyable_buf = false>
struct Buffer
{
/* ... */
requires Copyable_buf
Buffer(const Buffer& other) {}
/* ... */
};
template <class T>
using Copyable_buffer = Buffer<T, true>;
To support the other answer, here is the normative wording about this, from the latest standard draft:
[dcl.decl]
1 A declarator declares a single variable, function, or type, within a declaration. The init-declarator-list appearing in a declaration is a comma-separated sequence of declarators, each of which can have an initializer.
init-declarator-list:
init-declarator
init-declarator-list , init-declarator
init-declarator:
declarator initializer_opt
declarator requires-clause
4 The optional requires-clause ([temp]) in an init-declarator or member-declarator shall not be present when the declarator does not declare a function ([dcl.fct]). When present after a declarator, the requires-clause is called the trailing requires-clause. The trailing requires-clause introduces the constraint-expression that results from interpreting its constraint-logical-or-expression as a constraint-expression. [ Example:
void f1(int a) requires true; // OK
auto f2(int a) -> bool requires true; // OK
auto f3(int a) requires true -> bool; // error: requires-clause precedes trailing-return-type
void (*pf)() requires true; // error: constraint on a variable
void g(int (*)() requires true); // error: constraint on a parameter-declaration
auto* p = new void(*)(char) requires true; // error: not a function declaration
— end example ]
As those two paragraphs specify, a trailing requires-clause can appear at the end of a function declarator. Its meaning is to constrain the function by the constraint-expression it accepts as an argument (which may use concepts).
Yes, there is!! The requires-clause can appear as the last element of a function declarator, in which case it allows constraining non-templated methods (or free functions, for that matter):
// This works as expected! Yey!!
template <class T, bool Copyable_buf = false>
struct Buffer
{
Buffer(const Buffer& other) requires Copyable_buf
{
// ...
}
};
template <bool B>
struct X
{
auto foo() requires B
{
// ...
}
};
template <class T>
struct Y
{
auto foo() requires std::is_trivially_copyable_v<T>
{
// ...
}
};
This answer is empirical, based on testing on current implementations of concepts. Godbolt test. StoryTeller's answer gives standard quotes confirming this behavior.
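A quick usage sketch of the pattern (note that a default constructor is added here, since declaring any copy constructor suppresses the implicit one):
template <class T, bool Copyable_buf = false>
struct Buffer
{
    Buffer() = default; // declaring a copy constructor suppresses the implicit
                        // default constructor, so bring it back explicitly
    Buffer(const Buffer& other) requires Copyable_buf {}
};

template <class T>
using Copyable_buffer = Buffer<T, true>;

int main()
{
    Copyable_buffer<int> a;
    auto b = a;    // OK: the constraint is satisfied
    Buffer<int> c;
    // auto d = c; // error: the copy constructor is constrained away
}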
There was a change regarding this recently. See https://github.com/cplusplus/nbballot/issues/374
It states:
How constraints work with non-templated functions is still under heavy construction during this late stage in the process. While we have provided various comments that build in a direction where supporting such constructs (including ordering between multiple constrained functions based on their constraints) would become possible, we acknowledge that WG 21 might not find a solution with consensus in time for the DIS. We ask WG 21 to evaluate the risk of shipping the feature in such a state and consider removing the ability to declare such functions.
Does EWG want to consider this for C++20?
| F | A |
|----|----|
| 16 | 0 |
Motion passes. Hubert to coordinate with CWG.
Emphasis mine.
So it appears that constrained non-templated functions were removed from C++20 as of right now.
Sorry for the lack of a better title.
While implementing my own version of std::move, I saw how easy it was, but I'm still confused by how C++ treats partial template specializations. I know how they work, but there's a rule that I find weird, and I would like to know the reasoning behind it.
template <typename T>
struct BaseType {
using Type = T;
};
template <typename T>
struct BaseType<T *> {
using Type = T;
};
template <typename T>
struct BaseType<T &> {
using Type = T;
};
using int_ptr = int *;
using int_ref = int &;
// A and B are now both of type int
BaseType<int_ptr>::Type A = 5;
BaseType<int_ref>::Type B = 5;
If there were no partial specializations of BaseType, T would always be T: if I gave it an int &, it would still be an int & throughout the whole template.
However, the partially specialized templates seem to strip the reference or pointer away: if I give them an int & or an int *, and those types match one of the specializations, T is just int.
This feature is extremely awesome and useful; however, I'm curious and would like to know the official reasoning/rules behind this not-so-obvious quirk.
If your template pattern matches T& to int&, then T& is int&, which implies T is int.
The type T in the specialization is related to the T in the primary template only by the fact that it was used to pattern-match the first argument.
It may confuse you less to replace T with X or U in the specializations. Reusing variable names can be confusing.
template <typename T>
struct RemoveReference {
using Type = T;
};
template <typename X>
struct RemoveReference<X &> {
using Type = X;
};
and X& matches T. If X& is T, and T is int&, then X is int.
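A quick check of that deduction, assuming the RemoveReference template just defined:
#include <type_traits>

// Matching X& against int& deduces X = int:
static_assert(std::is_same_v<RemoveReference<int&>::Type, int>);
// The primary template leaves non-references alone:
static_assert(std::is_same_v<RemoveReference<int>::Type, int>);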
Why does the standard say this?
Suppose we look at a different template specialization:
template<class T>
struct Bob;
template<class E, class A>
struct Bob<std::vector<E,A>>{
// what should E and A be here?
};
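E and A are deduced from the pieces of the matched type. Here is a self-contained sketch (the member aliases are only illustrative):
#include <type_traits>
#include <vector>

template<class T>
struct Bob;

template<class E, class A>
struct Bob<std::vector<E, A>> {
    using element   = E; // deduced as the vector's value type
    using allocator = A; // deduced as the vector's allocator type
};

static_assert(std::is_same_v<Bob<std::vector<int>>::element, int>);
static_assert(std::is_same_v<Bob<std::vector<int>>::allocator, std::allocator<int>>);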
Partial specializations act a lot like function templates: so much so, in fact, that overloading function templates is often mistaken for partial specialization of them (which is not allowed). Given
template<class T>
void value_assign(T *t) { *t=T(); }
then obviously T must be the version of the argument type without the (outermost) pointer status, because we need that type to compute the value to assign through the pointer. We of course don't typically write value_assign<int>(&i); to call a function of this type, because the arguments can be deduced.
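A sketch of that deduction at the call site:
template<class T>
void value_assign(T *t) { *t = T(); }

int main() {
    int i = 42;
    value_assign(&i); // T deduced as int from the argument type int*; i becomes 0
}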
In this case:
template<class T,class U>
void accept_pair(std::pair<T,U>);
note that the number of template parameters is greater than the number of types "supplied" as input (that is, than the number of parameter types used for deduction): complicated types can provide "more than one type's worth" of information.
All of this looks very different from class templates, where the types must be given explicitly (only sometimes true as of C++17) and they are used verbatim in the template (as you said).
But consider the partial specializations again:
template<class>
struct A; // undefined
template<class T>
struct A<T*> { /* ... */ }; // #1
template<class T,class U>
struct A<std::pair<T,U>> { /* ... */ }; // #2
These are completely isomorphic to the (unrelated) function templates value_assign and accept_pair respectively. We do have to write, for example, A<int*> to use #1; but this is simply analogous to calling value_assign(&i): in particular, the template arguments are still deduced, only this time from the explicitly-specified type int* rather than from the type of the expression &i. (Because even supplying explicit template arguments requires deduction, a partial specialization must support deducing its template arguments.)
#2 again illustrates the idea that the number of types is not conserved in this process: this should help break the false impression that "the template parameter" should continue to refer to "the type supplied". As such, partial specializations do not merely claim a (generally unbounded) set of template arguments: they interpret them.
Yet another similarity: the choice among multiple partial specializations of the same class template is exactly the same as that for discarding less-specific function templates when they are overloaded. (However, since overload resolution does not occur in the partial specialization case, this process must get rid of all but one candidate there.)
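A sketch of that selection among partial specializations (the template B and its which members are only illustrative):
template<class>
struct B; // primary: declared but never defined

template<class T>
struct B<T*>  { static constexpr int which = 1; }; // matches any pointer

template<class T>
struct B<T**> { static constexpr int which = 2; }; // more specialized

static_assert(B<int*>::which == 1);  // only the first specialization matches
static_assert(B<int**>::which == 2); // both match; the more specialized wins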