Questions about class template argument deduction in C++17

I'm trying to make sense of P0091r3 (the "template argument deduction for class templates" paper that has been adopted into the current C++ draft standard, N4606).
I believe I understand how it works in the simplest possible case, where the template-name identifies a single template:
#include <vector>

template<class T>
struct S {
S(T);
S(const std::vector<T>&);
};
int main()
{
std::vector<int> v;
auto s = S(v);
}
S identifies the primary template, so we create a fictitious overload set consisting of
template<class T> void Sctor(T);
template<class T> void Sctor(const std::vector<T>&);
and perform overload resolution on the fictitious call
Sctor(v)
to determine that in this case we want to call the fictitious Sctor(const std::vector<T>&) [with T=int]. Which means we end up calling S<int>::S(const std::vector<int>&) and everything works great.
What I don't understand is how this is supposed to work in the presence of partial specializations.
#include <list>
#include <vector>

template<class T>
struct S {
S(T);
};
template<class T>
struct S<std::list<T>> {
S(const std::vector<T>&);
};
int main()
{
std::vector<int> v;
auto s = S(v);
}
What we intuitively want here is a call to S<std::list<int>>::S(const std::vector<int>&). Is that what we actually get, though? And where is this specified?
Basically I don't intuitively understand what P0091r3 means by "the class template designated by the template-name": does that mean the primary template, or does it include all partial specializations and explicit full specializations as well?
(I also don't understand how P0091r3's changes to §7.1.6.2p2 don't break code using injected-class-names such as
template<class T>
struct iterator {
iterator operator++(int) {
iterator result = *this; // injected-class-name or placeholder?
//...
}
};
but that's a different question altogether.)
Are class template deduction and explicit deduction guides supported in any extant version of Clang or GCC (possibly under an -f flag, like -fconcepts is)? If so, I could play around with some of these examples in real life and probably clear up half of my confusion.

This is somewhat skated over by the proposal, but I think the intent is that only constructors of the primary class template are considered. Evidence for this is that the new [class.template.deduction] has:
For each constructor of the class template designated by the template-name, a function template with the following properties is a candidate: [...]
If we're talking about "the" class template, then this is the primary class template, particularly as class template partial specializations are not found by name lookup ([temp.class.spec]/6). This is also how the prototype implementation (see below) appears to behave.
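If that reading is right, then in the second example from the question the only implicit guide comes from the primary template (roughly template<class T> S(T) -> S<T>;), so deduction lands on the primary rather than the partial specialization. A minimal sketch of that consequence, written by me with constructor bodies added so it compiles:
#include <list>
#include <type_traits>
#include <vector>

template<class T>
struct S { S(T) {} };

template<class T>
struct S<std::list<T>> { S(const std::vector<T>&) {} };

int main()
{
    std::vector<int> v;
    auto s = S(v); // deduces S<std::vector<int>>, not S<std::list<int>>
    static_assert(std::is_same<decltype(s), S<std::vector<int>>>::value, "");
}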
Within the paper, class template partial specializations are contemplated in the section "Pros and cons of implicit deduction guides", but only out of concern that constructors within the primary class template could trigger a hard (non-SFINAE) error:
template<class T> struct X {
using ty = typename T::type;
static auto foo() { return typename T::type{}; }
X(ty); // #1
X(decltype(foo())); // #2
X(T);
};
template<class T>
struct X<T*> {
X(...);
};
X x{(int *)0};
Your plea for class template partial specialization constructors to be considered is on the face of it reasonable, but note that it could result in ambiguity:
template<class T> struct Y { Y(T*); };
template<class T> struct Y<T*> { Y(T*); };
Y y{(int*) 0};
It would probably be desirable for the implicitly generated deduction guides to be ranked (as a tie-breaker) by specialization of the class template.
If you want to try out a prototype implementation, the authors have published their branch of clang on github: https://github.com/faisalv/clang/tree/clang-ctor-deduction.
Discussion in the paper ("A note on injected class names") indicates that injected-class-names take priority over template names; wording is added to ensure this:
The template-name shall name a class template that is not an injected-class-name.

I would say that the wording of P0091 as it currently stands is under-specified in this regard. It needs to make clear whether only the primary class template's constructors are considered or whether the constructors of all specializations are included as well.
That being said, I believe that the intent of P0091 is that partial specializations do not participate in argument deduction. The feature is to allow the compiler to decide what a class's template arguments are. However, what selects a partial specialization is what those template arguments actually are. The way to get the S<std::list<T>> specialization is to use a std::list in the template argument list of S.
If you want to cause a specific parameter to use a specific specialization, you should use a deduction guide. That is what they're for, after all.
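For instance (my own sketch, not taken from the paper), an explicit guide can send a std::vector argument to the std::list specialization of the question's example:
#include <list>
#include <vector>

template<class T>
struct S { S(T) {} };

template<class T>
struct S<std::list<T>> { S(const std::vector<T>&) {} };

// Explicit deduction guide: a vector<T> argument deduces S<std::list<T>>.
template<class T>
S(const std::vector<T>&) -> S<std::list<T>>;

int main()
{
    std::vector<int> v;
    auto s = S(v); // deduces S<std::list<int>> and uses its vector constructor
}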

How to avoid implicit template instantiation of an unused template argument type of a class template when its instance is passed to a function?

I have been experimenting with a system for composable pipelines, which involves a set of 'stages', which may be templated. Each stage handles its own setup, execution and cleanup, and template deduction is used to build a minimal list of 'state' used by the pipeline. This requires quite a lot of boilerplate template code, which has shown up some apparently incongruous behaviour. Despite successful experiments, actually rolling it into our code-base resulted in errors due to invalid instantiations.
It took some time to track down the difference between the toy (working) solution, and the more rich version, but eventually it was narrowed down to an explicit namespace specification.
#include <type_traits>

template<typename KeyType = bool>
struct bind_stage
{
static_assert(!std::is_same<KeyType, bool>::value, "Nope, someone default instantiated me");
};
template<typename BoundStage, typename DefaultStage>
struct test_binding {};
template<template<typename...>class StageTemplate, typename S, typename T>
struct test_binding <StageTemplate<S>, StageTemplate<T>> {};
template<typename T>
auto empty_function(T b) {}
Then our main:
int main()
{
auto binder = test_binding<bind_stage<int>, bind_stage<>>();
//empty_function(binder); // Fails to compile
::empty_function(binder); // Compiles happily
return 0;
}
Now, I'm not sure whether I expect the failure or not. On the one hand, we create a test_binding<bind_stage<int>, bind_stage<bool>>, which obviously includes the invalid instantiation bind_stage<bool> as part of its type definition; that should fail to compile.
On the other hand, it's included purely as a name, not a definition. In this situation it could simply be a forward-declared template, and we'd expect it to work as long as nothing in the outer template actually refers to it specifically.
What I didn't expect was two different behaviours depending on whether I added a (theoretically superfluous) global namespace specifier.
I have tried this code in Visual Studio, Clang and GCC. All have the same behaviour, which makes me lean away from this being a compiler bug. Is this behaviour explained by something in the C++ standard?
EDIT:
Another example from Daniel Langr which makes less sense to me:
template <typename T>
struct X {
static_assert(sizeof(T) == 1, "Why doesn't this happen in both cases?");
};
template <typename T>
struct Y { };
template <typename T>
void f(T) { }
int main() {
auto y = Y<X<int>>{};
// f(y); // triggers static assertion
::f(y); // does not
}
Either X<int> is instantiated while defining Y<X<int>> or it is not. What does using a function in a non-specified scope have to do with anything?
Templates are instantiated when needed. So why, when one performs an unqualified call such as f(Y<X<int>>{});, does the compiler instantiate X<int>, while it does not when the call to f is qualified as in ::f(Y<X<int>>{})?
The reason is argument-dependent lookup (ADL, see [basic.lookup.argdep]), which only takes place for unqualified calls.
In the case of the call f(Y<X<int>>{}), ADL makes X<int> an associated class, so the compiler must instantiate it and look in its definition for friend function declarations such as:
template <typename T>
struct X {
// such a friend participates in the overload resolution
// that determines which f is called in "f(Y<X<int>>{})"
friend void f(X&){}
};
ADL involving the type of a template argument of the specialization that is the type of the function argument (ouch...) is so unloved (because it almost only causes bad surprises) that there is a proposal to remove it: P0934.

How can I give two compatible names to a C++ class template with deduction guides?

If I have a widely-used class template called Foo that I want to rename to Bar without having to update all of its users atomically, then up until C++17 I could simply use a type alias:
template <typename T>
class Bar {
public:
// Create a Bar from a T value.
explicit Bar(T value);
};
// An older name for this class, for compatibility with callers that haven't
// yet been updated.
template <typename T>
using Foo = Bar<T>;
This is very useful when working in a large, distributed codebase. However as of C++17 this seems to be broken by class template argument deduction guides. For example, if this line exists:
template <typename T>
explicit Foo(T) -> Foo<T>;
then the obvious thing to do when renaming the class is to change the Foos in the deduction guide to Bars:
template <typename T>
explicit Bar(T) -> Bar<T>;
But now the expression Foo(17) in a random caller, which used to be legal, is an error:
test.cc:42:21: error: alias template 'Foo' requires template arguments; argument deduction only allowed for class templates
static_cast<void>(Foo(17));
^
test.cc:34:1: note: template is declared here
using Foo = Bar<T>;
^
Is there any easy and general way to give a class with deduction guides two simultaneous names in a fully compatible way? The best I can think of is defining the class's public API twice under two names, with conversion operators, but this is far from easy and general.
Your problem is exactly what P1814R0 (Wording for Class Template Argument Deduction for Alias Templates) aims to solve. That is to say, in C++20 you only need to add deduction guides for Bar to make the following program well-formed:
template <typename T>
class Bar {
public:
// Create a Bar from a T value.
explicit Bar(T value);
};
// An older name for this class, for compatibility with callers that haven't
// yet been updated.
template <typename T>
using Foo = Bar<T>;
template <typename T>
explicit Bar(T) -> Bar<T>;
int main() {
Bar bar(42);
Foo foo(42); // well-formed
}
But since it is a C++20 feature, there is currently no solution in C++17.
Have you tried to define a macro?
#define Foo Bar
(Personally I'd find it confusing to have multiple names for the same implementation, but I'm not you.)
Sorry I can't test at the moment, but I hope it works!
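A small sketch of the macro idea (my own illustration; the usual caveat applies that the preprocessor will rewrite every occurrence of the token Foo):
template <typename T>
class Bar {
public:
    explicit Bar(T value) {}
};

template <typename T>
explicit Bar(T) -> Bar<T>;

#define Foo Bar // textual replacement, so Foo(17) becomes Bar(17)

int main() {
    Bar bar(17); // deduces Bar<int>
    Foo foo(17); // preprocessed to Bar foo(17); also deduces Bar<int>
}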

Class template argument deduction failed with derived class

#include <utility>
template<class T1, class T2>
struct mypair : std::pair<T1, T2>
{ using std::pair<T1, T2>::pair; };
int main()
{
(void)std::pair(2, 3); // It works
(void)mypair(2, 3); // It doesn't work
}
Is the above well formed?
Is it possible to deduce the class template arguments in the second case if the constructors are being inherited? Are the constructors of std::pair participating in the creation of implicit deduction guides for mypair?
My compiler is g++ 7.2.0.
The short story: there is no rule in the standard that says how this would work, nor any rule that says that it doesn't work. So GCC and Clang conservatively reject rather than inventing a (non-standard) rule.
The long story: mypair's pair base class is a dependent type, so lookup of its constructors cannot succeed. For each specialization of mypair<T1, T2>, the corresponding constructors of pair<T1, T2> are constructors of mypair, but this is not a rule that can be meaningfully applied to a template prior to instantiation in general.
In principle, there could be a rule that says that you look at the constructors of the primary pair template in this situation (much as we do when looking up constructors of mypair itself for class template argument deduction), but no such rule actually exists in the standard currently. Such a rule quickly falls down, though, when the base class becomes more complex:
template<typename T> struct my_pair2 : std::pair<T, T> {
using pair::pair;
};
What constructors should be notionally injected here? And in cases like this, I think it's reasonably clear that this lookup cannot possibly work:
template<typename T> struct my_pair3 : arbitrary_metafunction<T>::type {
using arbitrary_metafunction<T>::type::type;
};
It's possible we'll get a rule change to allow deduction through your my_pair and the my_pair2 above if/when we get class template argument deduction rules for alias templates:
template<typename T> using my_pair4 = std::pair<T, T>;
my_pair4 mp4 = {1, 2};
The complexities involved here are largely the same as in the inherited constructor case. Faisal Vali (one of the other designers of class template argument deduction) has a concrete plan for how to make such cases work, but the C++ committee hasn't discussed this extension yet.
See Richard Smith's answer.
A previous version of this answer had stated that the following should work
template <class T> struct B { B(T ) { } };
template <class T> struct D : B<T> { using B<T>::B; };
B b = 4; // okay, obviously
D d = 4; // expected: okay
But this isn't really viable, and it wouldn't even have been a good idea for it to work the way I thought it would (we would inherit the constructors but not the deduction guides?).
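As a practical C++17 workaround (my own addition, not part of either answer): you can hand-write the guide that the language refuses to generate, assuming you simply want to forward the deduced types to the base:
#include <utility>

template<class T1, class T2>
struct mypair : std::pair<T1, T2>
{ using std::pair<T1, T2>::pair; };

// Hand-written deduction guide mirroring std::pair's implicit one.
template<class T1, class T2>
mypair(T1, T2) -> mypair<T1, T2>;

int main()
{
    (void)mypair(2, 3); // now deduces mypair<int, int>
}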

Partial template specialization type collapsing rules

Sorry for the lack of a better title.
While trying to implement my own version of std::move, and despite realizing how easy it was, I'm still confused by how C++ treats partial template specializations. I know how they work, but there's a rule I found weird and I would like to know the reasoning behind it.
template <typename T>
struct BaseType {
using Type = T;
};
template <typename T>
struct BaseType<T *> {
using Type = T;
};
template <typename T>
struct BaseType<T &> {
using Type = T;
};
using int_ptr = int *;
using int_ref = int &;
// A and B are now both of type int
BaseType<int_ptr>::Type A = 5;
BaseType<int_ref>::Type B = 5;
If there were no partial specializations of BaseType, T would always be T: if I gave it an int & it would still be an int & throughout the whole template.
However, the partial specializations seem to collapse references and pointers: if I give it an int & or an int * and those types match one of the specializations, T is just int.
This feature is extremely awesome and useful, however I'm curious and I would like to know the official reasoning / rules behind this not so obvious quirk.
If your template pattern matches T& to int&, then T& is int&, which implies T is int.
The type T in the specialization is only related to the T in the primary template by the fact that it was used to pattern-match the first argument.
It may confuse you less to replace T with X or U in the specializations. Reusing variable names can be confusing.
template <typename T>
struct RemoveReference {
using Type = T;
};
template <typename X>
struct RemoveReference<X &> {
using Type = X;
};
and X& matches T. If X& is T, and T is int&, then X is int.
Why does the standard say this?
Suppose we look at a different template specialization:
template<class T>
struct Bob;
template<class E, class A>
struct Bob<std::vector<E,A>>{
// what should E and A be here?
};
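To make that concrete (my own illustration): instantiating Bob with a real vector type answers the question in the comment, with both parameters deduced from the single argument.
#include <memory>
#include <type_traits>
#include <vector>

template<class T>
struct Bob;

template<class E, class A>
struct Bob<std::vector<E, A>> {
    using element = E;
    using allocator = A;
};

// Bob<std::vector<int>> selects the partial specialization with
// E = int and A = std::allocator<int> (vector's default allocator).
static_assert(std::is_same<Bob<std::vector<int>>::element, int>::value, "");
static_assert(std::is_same<Bob<std::vector<int>>::allocator, std::allocator<int>>::value, "");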
Partial specializations act a lot like function templates: so much so, in fact, that overloading function templates is often mistaken for partial specialization of them (which is not allowed). Given
template<class T>
void value_assign(T *t) { *t=T(); }
then obviously T must be the version of the argument type without the (outermost) pointer status, because we need that type to compute the value to assign through the pointer. We of course don't typically write value_assign<int>(&i); to call a function of this type, because the arguments can be deduced.
In this case:
template<class T,class U>
void accept_pair(std::pair<T,U>);
note that the number of template parameters is greater than the number of types "supplied" as input (that is, than the number of parameter types used for deduction): complicated types can provide "more than one type's worth" of information.
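A quick illustration of both deductions (my own example, with an empty body added to accept_pair so it links):
#include <utility>

template<class T>
void value_assign(T *t) { *t = T(); }

template<class T, class U>
void accept_pair(std::pair<T, U>) {}

int main()
{
    int i = 42;
    value_assign(&i);                    // T deduced as int from the argument type int*
    accept_pair(std::make_pair(1, 2.5)); // one argument supplies two types: T = int, U = double
}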
All of this looks very different from class templates, where the types must be given explicitly (only sometimes true as of C++17) and they are used verbatim in the template (as you said).
But consider the partial specializations again:
template<class>
struct A; // undefined
template<class T>
struct A<T*> { /* ... */ }; // #1
template<class T,class U>
struct A<std::pair<T,U>> { /* ... */ }; // #2
These are completely isomorphic to the (unrelated) function templates value_assign and accept_pair respectively. We do have to write, for example, A<int*> to use #1; but this is simply analogous to calling value_assign(&i): in particular, the template arguments are still deduced, only this time from the explicitly-specified type int* rather than from the type of the expression &i. (Because even supplying explicit template arguments requires deduction, a partial specialization must support deducing its template arguments.)
#2 again illustrates the idea that the number of types is not conserved in this process: this should help break the false impression that "the template parameter" should continue to refer to "the type supplied". As such, partial specializations do not merely claim a (generally unbounded) set of template arguments: they interpret them.
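Concretely (my own illustration of the two specializations above):
#include <utility>

template<class>
struct A; // undefined primary

template<class T>
struct A<T*> { };              // #1
template<class T, class U>
struct A<std::pair<T, U>> { }; // #2

A<int*> a1;                 // uses #1: T is deduced as int from the explicit argument int*
A<std::pair<int, char>> a2; // uses #2: the single argument yields T = int and U = char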
Yet another similarity: the choice among multiple partial specializations of the same class template is exactly the same as that for discarding less-specific function templates when they are overloaded. (However, since overload resolution does not occur in the partial specialization case, this process must get rid of all but one candidate there.)

Infinite recursive template instantiation expected?

I am trying to understand why a piece of template metaprogramming is not generating an infinite recursion. I tried to reduce the test case as much as possible, but there's still a bit of setup involved, so bear with me :)
The setup is the following. I have a generic function foo(T) which delegates the implementation to a generic functor called foo_impl via its call operator, like this:
template <typename T, typename = void>
struct foo_impl {};
template <typename T>
inline auto foo(T x) -> decltype(foo_impl<T>{}(x))
{
return foo_impl<T>{}(x);
}
foo() uses a decltype trailing return type for SFINAE purposes. The default implementation of foo_impl does not define any call operator. Next, I have a type trait that detects whether foo() can be called with an argument of type T:
template <typename T>
struct has_foo
{
struct yes {};
struct no {};
template <typename T1>
static auto test(T1 x) -> decltype(foo(x),void(),yes{});
static no test(...);
static const bool value = std::is_same<yes,decltype(test(std::declval<T>()))>::value;
};
This is just the classic implementation of a type trait via expression SFINAE:
has_foo<T>::value will be true if a valid foo_impl specialisation exists for T, false otherwise. Finally, I have two specialisations of the implementation functor, one for integral types and one for floating-point types:
template <typename T>
struct foo_impl<T,typename std::enable_if<std::is_integral<T>::value>::type>
{
void operator()(T) {}
};
template <typename T>
struct foo_impl<T,typename std::enable_if<has_foo<unsigned>::value && std::is_floating_point<T>::value>::type>
{
void operator()(T) {}
};
In the last foo_impl specialisation, the one for floating-point types, I have added the extra condition that foo() must be available for the type unsigned (has_foo<unsigned>::value).
What I don't understand is why the compilers (GCC & clang both) accept the following code:
int main()
{
foo(1.23);
}
In my understanding, when foo(1.23) is called the following should happen:
the specialisation of foo_impl for integral types is discarded because 1.23 is not integral, so only the second specialisation of foo_impl is considered;
the enabling condition for the second specialisation of foo_impl contains has_foo<unsigned>::value, that is, the compiler needs to check if foo() can be called on type unsigned;
in order to check if foo() can be called on type unsigned, the compiler needs again to select a specialisation of foo_impl among the two available;
at this point, in the enabling condition for the second specialisation of foo_impl the compiler encounters again the condition has_foo<unsigned>::value.
GOTO 3.
However, it seems like the code is happily accepted both by GCC 5.4 and Clang 3.8. See here: http://ideone.com/XClvYT
I would like to understand what is going on here. Am I misunderstanding something and the recursion is blocked by some other effect? Or maybe am I triggering some sort of undefined/implementation defined behaviour?
has_foo<unsigned>::value is a non-dependent expression, so it immediately triggers instantiation of has_foo<unsigned> (even if the corresponding specialization is never used).
The relevant rules are [temp.point]/1:
For a function template specialization, a member function template specialization, or a specialization for a member function or static data member of a class template, if the specialization is implicitly instantiated because it is referenced from within another template specialization and the context from which it is referenced depends on a template parameter, the point of instantiation of the specialization is the point of instantiation of the enclosing specialization. Otherwise, the point of instantiation for such a specialization immediately follows the namespace scope declaration or definition that refers to the specialization.
(note that we're in the non-dependent case here), and [temp.res]/8:
The program is ill-formed, no diagnostic required, if:
- [...]
- a hypothetical instantiation of a template immediately following its definition would be ill-formed due to a construct that does not depend on a template parameter, or
- the interpretation of such a construct in the hypothetical instantiation is different from the interpretation of the corresponding construct in any actual instantiation of the template.
These rules are intended to give the implementation freedom to instantiate has_foo<unsigned> at the point where it appears in the above example, and to give it the same semantics as if it had been instantiated there. (Note that the rules here are actually subtly wrong: the point of instantiation for an entity referenced by the declaration of another entity actually must immediately precede that entity rather than immediately following it. This has been reported as a core issue, but it's not on the issues list yet as the list hasn't been updated for a while.)
As a consequence, the point of instantiation of has_foo within the floating-point partial specialization occurs before the point of declaration of that specialization, which is after the > of the partial specialization per [basic.scope.pdecl]/3:
The point of declaration for a class or class template first declared by a class-specifier is immediately after the identifier or simple-template-id (if any) in its class-head (Clause 9).
Therefore, when the call to foo from has_foo<unsigned> looks up the partial specializations of foo_impl, it does not find the floating-point specialization at all.
A couple of other notes about your example:
1) Use of cast-to-void in comma operator:
static auto test(T1 x) -> decltype(foo(x),void(),yes{});
This is a bad pattern. operator, lookup is still performed for a comma operator where one of its operands is of class or enumeration type (even though it can never succeed). This can result in ADL being performed [implementations are permitted but not required to skip this], which triggers the instantiation of all associated classes of the return type of foo (in particular, if foo returns unique_ptr<X<T>>, this can trigger the instantiation of X<T> and may render the program ill-formed if that instantiation doesn't work from this translation unit). You should prefer to cast all operands of a comma operator of user-defined type to void:
static auto test(T1 x) -> decltype(void(foo(x)),yes{});
2) SFINAE idiom:
template <typename T1>
static auto test(T1 x) -> decltype(void(foo(x)),yes{});
static no test(...);
static const bool value = std::is_same<yes,decltype(test(std::declval<T>()))>::value;
This is not a correct SFINAE pattern in the general case. There are a few problems here:
if T is a type that cannot be passed as an argument, such as void, you trigger a hard error instead of value evaluating to false as intended
if T is a type to which a reference cannot be formed, you again trigger a hard error
you check whether foo can be applied to an lvalue of type remove_reference<T> even if T is an rvalue reference
A better solution is to put the entire check into the yes version of test instead of splitting the declval portion into value:
template <typename T1>
static auto test(int) -> decltype(void(foo(std::declval<T1>())),yes{});
template <typename>
static no test(...);
static const bool value = std::is_same<yes,decltype(test<T>(0))>::value;
This approach also more naturally extends to a ranked set of options:
// elsewhere
template<int N> struct rank : rank<N-1> {};
template<> struct rank<0> {};
template <typename T1>
static no test(rank<2>, std::enable_if_t<std::is_same<T1, double>::value>* = nullptr);
template <typename T1>
static yes test(rank<1>, decltype(foo(std::declval<T1>()))* = nullptr);
template <typename T1>
static no test(rank<0>);
static const bool value = std::is_same<yes,decltype(test<T>(rank<2>()))>::value;
Finally, your type trait will evaluate faster and use less memory at compile time if you move the above declarations of test outside the definition of has_foo (perhaps into some helper class or namespace); that way, they do not need to be redundantly instantiated once for each use of has_foo.
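A sketch of that last suggestion (names and layout are mine), moving the test overloads into a detail namespace so they are declared once rather than per has_foo instantiation; this assumes foo is declared before this point, as in the question:
#include <type_traits>
#include <utility>

namespace has_foo_detail {
    struct yes {};
    struct no {};

    // foo is found by ordinary unqualified lookup and ADL at instantiation time.
    template <typename T1>
    auto test(int) -> decltype(void(foo(std::declval<T1>())), yes{});
    template <typename>
    no test(...);
}

template <typename T>
struct has_foo
{
    static const bool value =
        std::is_same<has_foo_detail::yes,
                     decltype(has_foo_detail::test<T>(0))>::value;
};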
It's not actually UB. But it really shows you how TMP is complex...
The reason this doesn't recurse infinitely comes down to completeness.
template <typename T>
struct foo_impl<T,typename std::enable_if<std::is_integral<T>::value>::type>
{
void operator()(T) {}
};
// has_foo here
template <typename T>
struct foo_impl<T,typename std::enable_if<has_foo<unsigned>::value && std::is_floating_point<T>::value>::type>
{
void operator()(T) {}
};
When you call foo(3.14);, the compiler needs foo_impl<double>, so SFINAE kicks in on its partial specializations.
The first one is enabled only if the type is integral. Obviously, this fails for double.
The second foo_impl<double> is now considered. Trying to instantiate it, the compiler sees has_foo<unsigned>::value.
Instantiating has_foo<unsigned> leads back to foo_impl - this time foo_impl<unsigned>!
The first foo_impl<unsigned> specialization is a match.
The second one is considered. Its enable_if contains has_foo<unsigned> - the very class the compiler is already trying to instantiate.
Since it's currently being instantiated, it's incomplete, and this specialization is not considered.
Recursion stops, has_foo<unsigned>::value is true, and your code snippet works!
So, you want to know how it comes down to it in the standard? Okay.
[14.7.1/1] If a class template has been declared, but not defined, at the point of instantiation ([temp.point]), the instantiation yields an incomplete class type.
(Note the key phrase: the instantiation yields an incomplete class type.)