Make C++ call the right template method in an un-ugly way - c++

I'm cooking up a vector library and have hit a snag. I want to allow recursive vectors (i.e. vec<H,vec<W,T> >) so I'd like my "min" and other functions to be recursive as well. Here's what I have:
template<typename T>
inline T min(const T& k1, const T& k2) {
return k1 < k2 ? k1 : k2;
}
template<int N, typename T, typename VT1, typename VT2>
inline vec<N,T> min(const container<N,T,VT1>& v1, const container<N,T,VT2>& v2) {
vec<N,T> new_vec;
for (int i = 0; i < N; i++) new_vec[i] = min(v1[i], v2[i]);
return new_vec;
}
...
template<int N, typename T>
class vec : public container<N,T,vec_array<N,T> > {
...
// This calls the first (wrong) method and says you can't call ? on a vec
vec<2,float> v1,v2;
min(v1,v2);
// This says the call is ambiguous
container<2,float,vec_array<2,float> > c1,c2;
min(c1,c2);
// This one actually works
vec<2,float> v3; container<N,T,some_other_type> v4;
min(v3,v4);
// This works too
min<2,float,vec_array<2,float>,vec_array<2,float> >(v1, v2);
That last call is ugly! How can I call the right method with just min(v1,v2)? The best I can come up with is to get rid of the "vec" class (so v1 and v2 have to be defined as container<2,float,vec_array<2,float> >) and add one more template<N,T,VT> min method that calls min<N,T,VT,VT>(v1,v2).
Thanks!

Overload resolution prefers the first min in the first case: it accepts both arguments by an exact match, while the second min needs a derived-to-base conversion to accept the arguments.
As you have subsequently figured out (by experimentation?), if you use container<...> as the argument types instead of the derived class, no derived-to-base conversion is needed anymore, and overload resolution will then prefer the second template: both templates accept the arguments equally well, but the second one (in your own solution) is more specialized.
Yet in your own solution you need to put a typename before the return type to make it standard C++. I think the reason you need to define a second template at all is this: for the second template to be more specialized, the first min would need to accept all the arguments that the second template accepts, which the compiler figures out by trying to match the second template's parameter types against the first's:
container<N, T, VT1> -> T // func param 1
container<N, T, VT2> -> T // func param 2
So two different template parameter types try to deduce to the same template parameter T, which causes a conflict and prevents the first template from deducing all of the second template's arguments. For your own solution, this is not the case:
container<N, T, VT> -> T // func param 1
container<N, T, VT> -> T // func param 2
This will make the first template deduce all the parameter types from the second template, but not the other way around: container<N, T, VT> won't match an arbitrary T. So your own solution's template is more specialized and is called, and then explicitly forwards to the other template.
Finally, note that your own solution only accepts containers whose third template argument is the same, while your other min template accepts containers where that argument differs between the two function arguments. I'm not sure whether that's on purpose - but given the other min function in place, which conflicts unless you make the third argument types the same (as shown above), I'm not sure how else to fix it.
The questioner subsequently edited his own answer, so most of my references above to "your own solution" don't apply anymore.

template<typename T1, typename T2>
inline T1 min(const T1& k1, const T2& k2) {
return k1 < k2 ? k1 : k2;
}
...
template<int N, typename T>
struct vec {
typedef container<N,T,vec_array<N,T> > t;
};
...
vec<2,float>::t v1,v2;
min(v1,v2);
That's what I finally did to get it to work.
The ambiguity was because both arguments have the same type - container<2,float,vec_array<2,float> >. That's one point for the min(const T&,const T&) method. min(const container<N,T,VT1>& v1, const container<N,T,VT2>& v2) is an equally good match, but (as explained above) it doesn't count as more specialized while the generic min takes a single type, so the compiler couldn't make up its mind over which one to use. Switching the generic min to use two type arguments - min(const T1&, const T2&) - beats it into submission.
I also switched to using a "template typedef" instead of inheritance to define vec<N,T>'s without having to deal with the messy container<N,T,VT> stuff. This makes vec<N,T>::t be an exact match to the correct function.
Now that I'm using a typedef rather than inheritance and two types in the generic min function instead of just one, the correct method is getting called.
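For reference, here is a minimal, self-contained sketch of that final arrangement. The container and vec_array below are hypothetical stand-ins with just enough members to compile (the real library obviously has more); the point is only to show partial ordering picking the container overload over the two-type generic min:
template<int N, typename T>
struct vec_array { T data[N]; };

template<int N, typename T, typename VT>
struct container {
    VT storage;
    T& operator[](int i) { return storage.data[i]; }
    const T& operator[](int i) const { return storage.data[i]; }
};

// Generic min with two independent type parameters.
template<typename T1, typename T2>
inline T1 min(const T1& k1, const T2& k2) { return k1 < k2 ? k1 : k2; }

// Element-wise min; more specialized than the generic min under partial ordering.
template<int N, typename T, typename VT1, typename VT2>
inline container<N,T,vec_array<N,T> >
min(const container<N,T,VT1>& v1, const container<N,T,VT2>& v2) {
    container<N,T,vec_array<N,T> > out;
    for (int i = 0; i < N; i++) out[i] = min(v1[i], v2[i]); // recurses into the generic min for scalars
    return out;
}

// "Template typedef" instead of inheritance, so the argument type is an exact match.
template<int N, typename T>
struct vec { typedef container<N,T,vec_array<N,T> > t; };

int main() {
    vec<2,float>::t v1 = {}, v2 = {};
    vec<2,float>::t m = min(v1, v2); // calls the container overload
    (void)m;
}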

Related

C++ Metaprogramming sum of two types

I have a class like this
template <typename T>
class Matrix {
typedef T value_type;
...
};
I would like to write a function that needs to deal with different Matrix<T> types but also with arithmetic types (that is why I cannot use Matrix<T1> and Matrix<T2> as template arguments). Inside the function I need a Matrix of the correct type: if I pass Matrix<T3> and Matrix<T4> for a and b, then the element type of the Matrix C should be whatever T3 + T4 returns (T3 and T4 are arithmetic types).
template <typename T1, typename T2>
auto add(T1&& a, T2&& b) {
    Matrix<decltype(a(0,0) + b(0,0))> C{}; // This works, but I am sure there is a nicer solution
    ...
}
I found one way to do it but I am sure that it is possible to work directly with the types.
I tried something like
Matrix<decltype(remove_cvref_t<T1>::value_type + remove_cvref_t<T2>::value_type) C;
The idea is to strip off any possible references and access the type of the Matrix via ::value_type. I also tried to sprinkle in some typenames but without success.
No, it's not possible to work directly with the types, but you can use std::declval - a function that is declared (but never defined) to return whatever type you ask for - to "convert" a type into a value:
template <typename T1, typename T2>
auto add(T1&& a, T2&& b) {
Matrix<decltype(
std::declval<typename std::remove_cvref_t<T1>::value_type>()
+ std::declval<typename std::remove_cvref_t<T2>::value_type>()
)> C;
...
}
It's still ugly. If all your matrices have the (0,0) operator then you might find that less ugly, and there's nothing wrong with using it; if you absolutely do need value_type as opposed to whatever (0,0) returns, then you can use std::declval.
std::declval can't be called for real - its only purpose is to be used in expressions that don't actually get evaluated, like inside decltype or noexcept.
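For illustration, here is a compilable sketch of that declval approach, assuming C++20 (for std::remove_cvref_t) and a toy Matrix standing in for the real one; the filling of C is elided, only the type computation matters here:
#include <type_traits>
#include <utility>

template <typename T>
class Matrix {
public:
    typedef T value_type;
    T operator()(int, int) const { return T{}; } // dummy element access
};

template <typename T1, typename T2>
auto add(T1&& a, T2&& b) {
    using R = decltype(std::declval<typename std::remove_cvref_t<T1>::value_type>()
                     + std::declval<typename std::remove_cvref_t<T2>::value_type>());
    Matrix<R> C{};
    // ... fill C from a and b ...
    return C;
}

int main() {
    Matrix<int> a;
    Matrix<double> b;
    auto c = add(a, b); // int + double yields double, so c is Matrix<double>
    static_assert(std::is_same_v<decltype(c), Matrix<double>>);
}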

Why can the type constraint `std::convertible_to` be used with only one template argument?

I've scrolled and searched through the standard and cppreference for hours to no avail; I would really appreciate it if someone could explain this occurrence for me.
I am looking at the standard concept std::convertible_to. Here's a simple example that I do understand
class A {};
class B : public A {};
std::convertible_to<A, B>; // false
std::convertible_to<B, A>; // true
Works as expected.
Now there is also another possible way to use it, that I don't quite understand
void foo(std::convertible_to<A> auto x) { /* ... */ }
, and this function can easily accept any type convertible to A. This is weird though, because the first template parameter ("From") is essentially dropped, and deduced on function call. This following function would also work, and I'm fairly certain it's actually equivalent to the previous one
template<typename T, std::convertible_to<T> S>
void foo(S x) { /* ... */ }
again the type of x is deduced when we call foo.
This works, despite the template requiring two parameters. I tried also with std::derived_from and it seems to work. This form of specifying a concept with only one template parameter even appears in the standard itself, so there must be some piece of syntax that explains it.
Notice that the only version of std::convertible_to that exists is in fact one that takes two template parameters.
Could anyone clarify why this works?
void foo( constraint<P0, P1, P2> auto x );
this translates roughly to
template<constraint<P0, P1, P2> X>
void foo( X x );
which translates roughly to
template<class X> requires constraint<X, P0, P1, P2>
void foo( X x );
notice how the type X is prepended to the template arguments of the constraint.
So in your case,
template<typename T, std::convertible_to<T> S>
void foo(S x) { /* ... */ }
is roughly
template<typename T, class S>
requires std::convertible_to<S, T>
void foo(S x) { /* ... */ }
(I say roughly, because I believe they are not exactly equivalent in subtle ways. For example, the second form introduces the name X while the first does not, and there are probably other differences of similar scale; what I mean is that understanding the translation will give you an understanding of what is being translated. This is unlike the for(:) loop / for(;;) loop correspondence, where the standard actually specifies for(:) loops in terms of for(;;) loops - which isn't what I'm claiming above.)
There are several locations where a concept name can be used where the first argument to the template concept is not supplied in the template argument list. Constraining an auto deduced variable is one of them.
The first argument in these cases is provided by some expression, typically using template argument deduction rules. In the case of a constrained function parameter, the first argument is determined by the template function itself. That is, if you call foo(10), template argument deduction will deduce the auto template parameter as an int. Therefore, the full concept will be convertible_to<int, A>.
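To make the rewriting concrete, here is a small compilable sketch (C++20); A and B are just illustrative stand-ins:
#include <concepts>

struct A {};
struct B { operator A() const { return A{}; } };

// Terse form: the deduced type of x is prepended as the first concept argument.
void foo(std::convertible_to<A> auto x) { (void)x; }

// Expanded form with the constraint spelled out.
template <class X>
requires std::convertible_to<X, A>
void foo2(X x) { (void)x; }

int main() {
    foo(B{});    // OK: convertible_to<B, A> is satisfied
    foo2(B{});   // OK: same constraint, written out
    // foo(42);  // error: convertible_to<int, A> is not satisfied
}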

Template overload resolution with multiple viable types

Here are 2 template functions for calculating max
template<typename T>
auto max(T a, T b) {
cout << "Calling 1\n";
return b < a ? a : b;
}
template<typename T1, typename T2>
auto max (T1 a, T2 b) {
cout << "Calling 2\n";
return b < a ? a : b;
}
If I call the max function as follows
max(1, 2);
The first function (Calling 1) is selected. Why is that the case? Both 1 and 2 can match equally well in this case.
This is because the first max is more specialized than the second max.
What happens during template overload resolution is that the compiler deduces arguments for both templates (both are viable here) and then asks "Which one is more specialized?"
In a nutshell it asks, given overload A and overload B, "Can I instantiate B with the deduced type(s) from A, but not vice versa?" If so, then A is more specialized than B (we can go from A to B, but not back). It does the same thing the other way. If both can be instantiated from each other, it is ambiguous and a compiler error.
In reality, we don't use the actual type for T (int in this case), but some made-up type ("synthesized type").
In your case, the first template requires both types to be the same:
template<typename T>
auto max(T a, T b)
So we have max<int> (or max<synthesized1>)
Can we instantiate the second one given synthesized1 for T? Sure thing, T1 = synthesized1 and T2 = synthesized1.
Can we go the other way though?
The second template has two type parameters, so it allows a and b to be different types, which makes it more general. It gets instantiated with two synthesized types:
template<typename T1, typename T2>
auto max (T1 a, T2 b)
so, max<synthesized2, synthesized3>.
Can we instantiate the first max<T> with types synthesized2 and synthesized3? Nope, it requires that a and b have the same type. Therefore the first template is more specialized, and the compiler chooses it.
Refer to [temp.deduct.partial] for the standardese.
As a general rule of thumb for overload resolution in C++, more specialized versions are preferred over more generic versions. Since the second version can handle more general cases, the first version is preferred.
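Here is a self-contained version of the example that shows both outcomes; the only addition is a second call with mixed argument types:
#include <iostream>

template<typename T>
auto max(T a, T b) {
    std::cout << "Calling 1\n";
    return b < a ? a : b;
}

template<typename T1, typename T2>
auto max(T1 a, T2 b) {
    std::cout << "Calling 2\n";
    return b < a ? a : b;
}

int main() {
    max(1, 2);    // both templates are viable; the first is more specialized -> "Calling 1"
    max(1, 2.5);  // T cannot be both int and double, so only the second is viable -> "Calling 2"
}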

Need help to understand template function with complex typename parameters

I'm examining Stroustrup's book "The C++ Programming Language", 4th edition, and I'm trying to follow his example on matrix design.
His matrix class heavily depends on templates and I'm trying my best to figure them out.
Here is one of the helper classes for this matrix
A Matrix_slice is the part of the Matrix implementation that maps a
set of subscripts to the location of an element. It uses the idea
of generalized slices (§40.5.6):
template<size_t N>
struct Matrix_slice {
Matrix_slice() = default; // an empty matrix: no elements
Matrix_slice(size_t s, initializer_list<size_t> exts); // extents
Matrix_slice(size_t s, initializer_list<size_t> exts, initializer_list<size_t> strs); // extents and strides
template<typename... Dims> // N extents
Matrix_slice(Dims... dims);
template<typename... Dims,
typename = Enable_if<All(Convertible<Dims,size_t>()...)>>
size_t operator()(Dims... dims) const; // calculate index from a set of subscripts
size_t size; // total number of elements
size_t start; // starting offset
array<size_t,N> extents; // number of elements in each dimension
array<size_t,N> strides; // offsets between elements in each dimension
};
Here are the lines that build up the subject of my question:
template<typename... Dims,
typename = Enable_if<All(Convertible<Dims,size_t>()...)>>
size_t operator()(Dims... dims) const; // calculate index from a set of subscripts
earlier in the book he describes how Enable_if and All() are implemented:
template<bool B,typename T>
using Enable_if = typename std::enable_if<B, T>::type;
constexpr bool All(){
return true;
}
template<typename...Args>
constexpr bool All(bool b, Args... args)
{
return b && All(args...);
}
I have enough information to understand how they work already, and by looking at his Enable_if implementation I can deduce the Convertible function as well:
template<typename From,typename To>
bool Convertible(){
//I think that it looks like that, but I haven't found
//this one in the book, so I might be wrong
return std::is_convertible<From, To>::value;
}
So I can understand the building blocks of this template function declaration, but I'm confused when trying to understand how they work altogether. I hope you can help:
template<typename... Dims,
//so here we accept the fact that we can have multiple arguments like (1,2,3,4)
typename = Enable_if<All(Convertible<Dims,size_t>()...)>>
//Evaluating and expanding from inside out my guess will be
//for example if Dims = 1,2,3,4,5
//Convertible<Dims,size_t>()... = Convertible<1,2,3,4,5,size_t>() =
//= Convertible<typeof(1),size_t>(),Convertible<typeof(2),size_t>(),Convertible<typeof(3),size_t>(),...
//= true,true,true,true,true
//All() is thus expanded to All(true,true,true,true,true)
//=true;
//Enable_if<true>
//here is the point of confusion. Enable_if takes two template arguments,
//Enable_if<bool B,typename T>
//but here it only takes bool
//typename = Enable_if(...) this one is also confusing
size_t operator()(Dims... dims) const; // calculate index from a set of subscripts
So what do we get in the end?
This construct
template<typename ...Dims,typename = Enable_if<true>>
size_t operator()(Dims... dims) const;
The questions are:
Don't we need the second template argument for Enable_if
Why do we have assignment ('=') for a typename
What do we get in the end?
Update:
You can check the code in the same book that I'm referencing here
The C++ Programming Language 4th edition at page 841 (Matrix Design)
This is basic SFINAE. You can read up on it here, for example.
For the answers, I'm using std::enable_if_t here instead of the Enable_if given in the book, but the two are essentially identical:
As mentioned in the answer by @GuyGreer, the second template parameter of std::enable_if is defaulted to void.
The code can be read as a "normal" function template definition
template<typename ...Dims, typename some_unused_type = enable_if_t<true> >
size_t operator()(Dims... dims) const;
With the =, the parameter some_unused_type is defaulted to the type on the right-hand side. And as one does not use the type some_unused_type explicitly, one does not need to give it a name either and can simply leave it unnamed.
This is the usual approach in C++, also found for function parameters: check for example operator++(int) -- one does not write operator++(int i) or something like that.
What's happening altogether is SFINAE, which is an abbreviation for Substitution Failure Is Not An Error. There are two cases here:
First, if the boolean argument of std::enable_if_t is false, one gets
template<typename ...Dims, typename = /* not a type */>
size_t operator()(Dims ... dims) const;
As there is no valid type on the right-hand side of typename =, substitution fails. Due to SFINAE, however, this does not lead to a compile-time error but rather to the removal of the function from the overload set.
The result in practice is as if the function had never been declared.
Second, if the boolean argument of std::enable_if_t is true, one gets
template<typename ...Dims, typename = void>
size_t operator()(Dims... dims) const;
Now typename = void is a valid defaulted template parameter, so there is no need to remove the function. It can thus be used normally.
Applied to your example,
template<typename... Dims,
typename = Enable_if<All(Convertible<Dims,size_t>()...)>>
size_t operator()(Dims... dims) const;
the above means that this function only exists if All(Convertible<Dims,size_t>()...) is true. This basically means the function parameters should all be integer indices (personally, I would write that in terms of std::is_integral<T>, however).
The missing constexprs notwithstanding: std::enable_if is a template that takes two parameters, but the second one is defaulted to void. It makes sense to keep that convention when writing up a quick alias for it.
Hence the alias should be defined as:
template <bool b, class T = void>
using Enable_if = typename std::enable_if<b, T>::type;
I have no insight into whether this default parameter is present in the book or not, just that this will fix that issue.
The assignment of a type is called a type alias and does what it says on the tin: when you refer to the alias, you're actually referring to what it aliases. In this case it means that when you write Enable_if<b> the compiler handily expands that to typename std::enable_if<b, void>::type for you, saving you all that extra typing.
What you get in the end is a function that is only callable if every parameter you passed to it is convertible to a std::size_t. This allows overloads of functions to be ignored if specific conditions are not met which is more a powerful technique than just matching types up for selecting what function to call. The link for std::enable_if has more information on why you would want to do that, but I warn beginners that this subject gets kinda heady.
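Putting the pieces together, here is a compact, self-contained sketch of the same mechanism. It follows the book's names but adds the constexpr that Convertible needs, defaults Enable_if's second parameter to void, and uses a placeholder body for operator(); it is an illustrative reconstruction, not the book's exact code:
#include <cstddef>
#include <type_traits>

template<bool B, typename T = void>
using Enable_if = typename std::enable_if<B, T>::type;

constexpr bool All() { return true; }

template<typename... Args>
constexpr bool All(bool b, Args... args) { return b && All(args...); }

template<typename From, typename To>
constexpr bool Convertible() { return std::is_convertible<From, To>::value; }

template<std::size_t N>
struct Matrix_slice {
    template<typename... Dims,
             typename = Enable_if<All(Convertible<Dims, std::size_t>()...)>>
    std::size_t operator()(Dims... dims) const {
        return sizeof...(dims); // placeholder; the real version maps the subscripts to an element index
    }
};

int main() {
    Matrix_slice<2> ms;
    ms(1u, 2u);       // OK: every argument type is convertible to size_t
    // ms(1, "two");  // error: Enable_if<false> has no type, so operator() drops out of the overload set
}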

Template metaprogramming help: transforming a vector

As my first template metaprogram I am trying to write a function that transforms an input vector to an output vector.
For instance, I want
vector<int> v={1,2,3};
auto w=v_transform(v,[](int x){return (float)(x*2);});
to set w to the vector of three floats, {2.0, 4.0, 6.0} .
I started with this stackoverflow question, The std::transform-like function that returns transformed container, which addresses a harder question of transforming arbitrary containers.
I now have two solutions:
A solution, v_transform_doesntwork, that doesn’t work, but I don’t know why (I wrote this one myself).
A solution, v_transform, that works, but I don’t know why (based on Michael Urman's answer to the above question).
I am looking for simple explanations or pointers to literature that explains what is happening.
Here are the two solutions, v_transform_doesntwork and v_transform:
#include <type_traits>
#include <vector>
using namespace std;
template<typename T, typename Functor,
typename U=typename std::result_of<Functor(T)>::type>
vector<U> v_transform(const std::vector<T> &v, Functor&& f){
vector<U>ret;
for(const auto & e:v)
ret.push_back(f(e));
return ret;
}
template<typename T, typename U>
vector<U> v_transform_doesntwork(const std::vector<T> &v, U(*f)(const T &)){
vector<U>ret;
for(const auto & e:v)
ret.push_back(f(e));
return ret;
}
float foo(const int & i){
return (float)(i+1);
}
int main(){
vector<int>v{1,2,3,4,5};
auto w=v_transform(v,foo);
auto z=v_transform(v,[](const int &x){return (float)(x*2);});
auto zz=v_transform(v,[](int x){return (float)(x*3);});
auto zzz=v_transform_doesntwork(v,[](const int &x){return (float)(x*2);});
}
Question 1: why doesn’t the call to v_transform_doesntwork compile? (It gives a fail-to-match template error, c++11. I tried about 4 permutations of “const” and “&” and “*” in the argument list, but nothing seemed to help.)
I prefer the implementation of v_transform_doesntwork to that of v_transform, because it’s simpler, but it has the slight problem of not working.
Question 2: why does the call to v_transform work? I get the gist obviously of what is happening, but I don’t understand why all the typenames are needed in defining U, I don’t understand how this weird syntax of defining a template parameter that is relied on later in the same definition is even allowed, or where this is all specified. I tried looking up "dependent type names" in cppreference but saw nothing about this kind of syntax.
Further note: I am assuming that v_transform works, since it compiles. If it would fail or behave unexpectedly under some situations, please let me know.
Your doesntwork version expects a function pointer and pattern-matches on it.
A lambda is not a function pointer. A stateless lambda can be converted to a function pointer, but template pattern matching does not use conversions (other than a very limited subset -- Derived& to Base&, Derived* to Base*, reference-to-value and vice versa, etc -- never a constructor or conversion operator).
Pass foo to doesntwork and it should work, barring typos in your code.
template<typename T,
typename Functor,
typename U=typename std::result_of<Functor(T)>::type
>
vector<U> v_transform(const std::vector<T> &v, Functor&& f){
vector<U>ret;
for(const auto & e:v)
ret.push_back(f(e));
return ret;
}
so you call v_transform. It tries to deduce the template types.
It pattern matches the first argument. You pass a std::vector<int, blah> where blah is some allocator.
It sees that the first argument is std::vector<T>. It matches T to int. As you did not give a second parameter, the default allocator for std::vector<T> is used, which happens to match blah.
We then continue to the second parameter. You passed in a closure object, so it deduces the (unnamable) lambda type as Functor.
It is now out of arguments to pattern match. The remaining types use their defaulted types -- U is set to typename std::result_of<Functor(T)>::type. This does not result in a substitution failure, so SFINAE does not occur.
All types are determined, and the function is now slotted into the set of overloads to examine to determine which to call. As there are no other functions of the same name, and it is a valid overload, it is called.
Note that your code has a few minor errors:
template<typename T,
typename A,
typename Functor,
typename U=typename std::decay<typename std::result_of<Functor&(T const&)>::type>::type
>
std::vector<U> v_transform(const std::vector<T, A> &v, Functor&& f){
std::vector<U> ret;
ret.reserve(v.size());
for(const auto & e:v)
ret.push_back(f(e));
return ret;
}
which cover some corner cases.
Question 1
Why doesn't the call to v_transform_doesntwork compile?
This is because you've passed it a C++11 lambda. The second function argument of v_transform_doesntwork is a function pointer. C++11 lambdas are, in fact, objects of a unique, unnamed class type. So the declaration
template<typename T, typename U>
vector<U> v_transform_doesntwork(const std::vector<T> &v, U(*f)(const T &))
binds T to the input type of the function pointer f and U to the output type of the function pointer. But the second argument cannot accept a lambda for this reason! You can specify the types explicitly to make it work with a non-capturing lambda, but the compiler will not perform the lambda-to-function-pointer conversion during template argument deduction.
Question 2
Why does the call to v_transform work?
Let's look at the code you wrote:
template<typename T,
typename Functor,
typename U=typename std::result_of<Functor(T)>::type>
vector<U> v_transform(const std::vector<T> &v, Functor&& f){
Again, T is a template parameter that represents the input type. But now Functor is a parameter for whichever callable object you decide to pass in to v_transform (nothing special about the name). We set U to be the result of that Functor being called on T. The std::result_of trait jumps through some hoops to figure out what the return value will be. You also might want to change the definition of U to
typename U=typename std::result_of<Functor&(T const &)>::type>
so that it can accept functions taking constants or references as parameters.
For the doesntwork function, you need to explicitly specify the template parameters:
auto zzz=v_transform_doesntwork<int,float>(v,[](const int &x){return (float)(x*2);});
Then it does work. The compiler cannot deduce these template parameters and perform the lambda-to-function-pointer conversion at the same time; once you supply them explicitly, the lambda simply converts.
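For completeness, a minimal sketch of the two fixes described above: pass an actual function so deduction can pattern-match, or spell out the template arguments so the stateless lambda is simply converted after deduction.
#include <vector>

template<typename T, typename U>
std::vector<U> v_transform_doesntwork(const std::vector<T>& v, U (*f)(const T&)) {
    std::vector<U> ret;
    for (const auto& e : v) ret.push_back(f(e));
    return ret;
}

float foo(const int& i) { return (float)(i + 1); }

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};
    auto a = v_transform_doesntwork(v, foo);              // OK: T and U deduced from the function pointer
    auto b = v_transform_doesntwork<int, float>(          // OK: T and U given explicitly, so the
        v, [](const int& x) { return (float)(x * 2); });  // lambda just converts to float(*)(const int&)
    (void)a; (void)b;
}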