C++ Metaprogramming sum of two types - c++

I have a class like this:
template <typename T>
class Matrix {
    typedef T value_type;
    ...
};
I would like to write a function that needs to deal with different Matrix<T> types but also with arithmetic types (that is why I cannot use Matrix<T1> and Matrix<T2> as the template arguments). Inside the function I need a Matrix of the correct type: if I pass Matrix<T3> and Matrix<T4> for a and b, then the type of the Matrix C should be whatever T3 + T4 returns (T3 and T4 are arithmetic types).
template <typename T1, typename T2>
auto add(T1&& a, T2&& b) {
    Matrix<decltype(a(0,0) + b(0,0))> C{}; // This works, but I am sure there is a nicer solution
    ...
}
I found one way to do it but I am sure that it is possible to work directly with the types.
I tried something like
Matrix<decltype(remove_cvref_t<T1>::value_type + remove_cvref_t<T2>::value_type)> C;
The idea is to strip off any possible references and access the type of the Matrix via ::value_type. I also tried to sprinkle in some typenames but without success.

No, it's not possible to work directly with the types, but you can use std::declval - a function which returns whatever type you want - to "convert" a type to a value:
template <typename T1, typename T2>
auto add(T1&& a, T2&& b) {
    Matrix<decltype(
        std::declval<typename std::remove_cvref_t<T1>::value_type>()
      + std::declval<typename std::remove_cvref_t<T2>::value_type>()
    )> C;
    ...
}
It's still ugly. If all your matrices have the (0,0) operator, then you might find your original decltype(a(0,0) + b(0,0)) less ugly, and there's nothing wrong with using it; if you absolutely do need value_type, as opposed to whatever (0,0) returns, then you can use std::declval.
std::declval can't be called for real - its only purpose is to be used in expressions that don't actually get evaluated, like inside decltype or noexcept.
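If you'd rather not repeat that expression, one way to tidy it up is to hide the declval dance behind an alias template. A minimal sketch, assuming C++20 for std::remove_cvref_t (add_result_t is a made-up name, not part of the question's code):
#include <type_traits>
#include <utility>

// The type you get from adding the two matrices' value_types:
template <typename M1, typename M2>
using add_result_t = decltype(
    std::declval<typename std::remove_cvref_t<M1>::value_type>()
  + std::declval<typename std::remove_cvref_t<M2>::value_type>());

template <typename T1, typename T2>
auto add(T1&& a, T2&& b) {
    Matrix<add_result_t<T1, T2>> C{};
    // ... fill C from a and b ...
    return C;
}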

Why does the compiler give an error when std::max() is called with two different numeric types?

Is there any reason the C++ compiler gives an error when using two different numeric variable types in the std::max() function (e.g. int and long)?
I mean something like: "sometimes we have a problem when using the std::max() function with two different numeric types, so the compiler gives an error to prevent this problem".
The compiler produces an error because it cannot perform type deduction for the template argument of std::max. This is how the std::max template is declared: the same type (template parameter) is used for both arguments. If the arguments have different types, the deduction becomes ambiguous.
If you work around the deduction ambiguity by supplying the template argument explicitly, you will be able to use different types as std::max arguments:
std::max(1, 2.0); // Error
std::max<double>(1, 2.0); // OK
The reason why std::max insists on using a common type for its arguments (instead of using two independent types) is described in #bolov's answer: the function actually wants to return a reference to the maximum value.
std::max returns a reference to the argument that has the maximum value. The main reason it is this way is that it is a generic function, and as such it can be used with types that are expensive to copy. You also might actually need a reference to the object, rather than a copy of it.
And because it returns a reference to an argument, all the arguments must be of the same type.
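A small sketch of that point (with made-up variable names): binding the result to a reference avoids any copy of an expensive object.
#include <algorithm>
#include <string>

std::string a(1000000, 'a');
std::string b(1000000, 'b');
// std::max returns const std::string&, so no million-character copy is made:
const std::string& biggest = std::max(a, b);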
The direct answer to the question is that it's because std::min and std::max take only one template parameter that defines the types of both arguments. If/when you try to pass arguments of different types, the compiler can't decide which of those two types to use for the template argument, so the code is ambiguous. As originally defined in C++98, std::min and std::max had signatures like these (C++03, §[lib.alg.min.max]):
template<class T> const T& min(const T& a, const T& b);
template<class T, class Compare>
const T& min(const T& a, const T& b, Compare comp);
template<class T> const T& max(const T& a, const T& b);
template<class T, class Compare>
const T& max(const T& a, const T& b, Compare comp);
So the basic idea here is that the function receives two objects by reference, and returns a reference to one of those objects. If it received objects of two different types, it wouldn't be able to return a reference to an input object because one of the objects would necessarily be of a different type than it was returning (so #bolov is correct about that part, but I don't think it's really the whole story).
With a modern compiler/standard library, if you don't mind dealing with values instead of references, you could pretty easily write code on this general order:
template <class T, class U>
std::common_type_t<T, U> min(T const &a, U const &b) {
    return b < a ? b : a;
}

template <class T, class U>
std::common_type_t<T, U> max(T const &a, U const &b) {
    return a < b ? b : a;
}
That makes it pretty easy to deal with your case of passing an int and a long (or other pairs of types, as long as std::common_type can deduce some common type for them, and a < b is defined for objects of the two types).
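For example, with the value-returning definitions above (a sketch, assuming C++14 for std::common_type_t and <type_traits> included):
int i = 1;
long l = 2;
auto m = max(i, l);  // T = int, U = long; returns std::common_type_t<int, long>, i.e. long
static_assert(std::is_same<decltype(m), long>::value, "m is a long");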
But in 1998, even if std::common_type had been available to make this easy, that solution probably wouldn't have been accepted (and, as we'll see, it's still open to some question whether it's a great idea). At the time, many people still thought in terms of lots of inheritance, so it was (more or less) taken for granted that you'd frequently use it in situations where both arguments were really of some derived type, something on this general order:
class Base {
    // ...
    virtual bool operator<(Base const &other) const;
};

class Derived1 : public Base {
    // ...
};

class Derived2 : public Base {
    // ...
};

Derived1 d1;
Derived2 d2;
const Base &b = std::max<Base>(d1, d2); // explicit <Base>: deduction would fail with two different types
In this case, the version above that returns a value instead of a reference would cause a serious problem. The common type here would have to be Base (std::common_type won't actually deduce a common base class on its own, so imagine it specialized to say Base), and we'd end up slicing the argument to create an object of type Base, and returning that. This would rarely provide desirable behavior (and in some cases, such as Base being an abstract base class, it wouldn't even compile).
There's one other point that's probably worth noting: even when applied in a seemingly simple situation, std::common_type can produce results you might not expect. For example, let's consider calling the template defined above like:
auto x = min(-1, 1u);
That leaves us with an obvious question: what will x be?
Even though we've passed it an int and an unsigned, the result is an unsigned (that's what std::common_type yields for int and unsigned), so the -1 first gets converted to a huge unsigned value, and the "minimum" comes out as 1u!
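Concretely (a sketch, using the value-returning min from above):
#include <type_traits>

auto x = min(-1, 1u);
static_assert(std::is_same<decltype(x), unsigned>::value, "x is unsigned");
// -1 was converted to 4294967295u (with 32-bit unsigned), so x == 1u:
// the "minimum" of -1 and 1 turns out to be 1.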

Something like `declval` for concepts

When you are working with templates and with decltype, you often need an instance of a certain type even though you do not have one at the moment. In this case, std::declval<T>() is incredibly useful: it gives you an imaginary instance of the type T.
Is there something similar for concepts? I.e., a function which would create an imaginary type for a concept.
Let me give you an example (a bit contrived, but it should serve the purpose):
Let's define a concept Incrementable:
template <typename T>
concept Incrementable = requires(T t){
    { ++t } -> std::convertible_to<T>;
};
Now I would like to have a concept which tests whether an object has an operator() that can accept an Incrementable. In my imaginary syntax I would write something like this:
template <typename F, typename T = declval<Incrementable>>
concept OperatesOnIncrementable = requires(F f, T t){
    { f(t) } -> std::convertible_to<T>;
};
There, the declval in typename T = declval<Incrementable> would create an imaginary type T: not really a concrete type, but one that for all intents and purposes behaves like a type which satisfies Incrementable.
Is there a mechanism in the upcoming standard to allow for this? I would find this incredibly useful.
Edit: Some time ago I asked a similar question about whether this can be done with boost::hana.
Edit: Why is this useful? For example if you want to write a function which composes two functions
template <typename F, typename G>
auto compose(F f, G g) {
    return [f, g](Incrementable auto x) { return f(g(x)); };
}
I want to get an error when I try to compose two functions which cannot be composed. Without constraining the types F and G, I get an error only when I try to call the composed function.
There is no such mechanism.
Nor does this appear to be implementable/useful, since there is an unbounded number of Incrementable types, and F could reject a subset selected using an arbitrarily complex metaprogram. Thus, even if you could magically synthesize some unique type, you still have no guarantee that F operates on all Incrementable types.
Is there something similar for concepts? I.e., a function which would create an imaginary type for a concept.
The term for this is an archetype. Coming up with archetypes would be a very valuable feature, and is critical for doing things like definition checking. From T.C.'s answer:
Thus, even if you could magically synthesize some unique type, you still have no guarantee that F operates on all Incrementable types.
The way to do that would be to synthesize an archetype that meets the criteria of the concept as minimally as possible. As he says, there is no archetype generation in C++20, and it seems impossible given the current incarnation of concepts.
Coming up with the correct archetype is incredibly difficult. For example, for
template <typename T>
concept Incrementable = requires(T t){
    { ++t } -> std::convertible_to<T>;
};
It is tempting to write:
struct Incrementable_archetype {
    Incrementable_archetype operator++();
};
But that is not "as minimal as possible": this type is default constructible and copyable (not requirements that Incrementable imposes), and its operator++ returns exactly T, which is more than the requirement (only convertibility to T is required). So a really hardcore archetype would look like:
struct X {
    X() = delete;
    X(X const&) = delete;
    X& operator=(X const&) = delete;
    template <typename T> void operator,(T&&) = delete;

    struct Y {
        operator X() &&;
    };
    Y operator++();
};
If your function works for X, then it probably works for all Incrementable types. If your function doesn't work for X, then you probably need to either change the implementation so that it does or change the constraints to allow more functionality.
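To see what the archetype buys you (a sketch; bump and bump_twice are made-up names, and this assumes C++20 <concepts> for the definitions above): instantiate your constrained templates with X and see whether they stray outside the concept.
static_assert(Incrementable<X>, "X models Incrementable");

template <Incrementable T>
void bump(T& t) { ++t; }                            // uses only what the concept guarantees

template <Incrementable T>
void bump_twice(T& t) { T copy = t; ++copy; ++t; }  // also copies - not guaranteed!

template void bump<X>(X&);           // OK: bump stays within the concept
// template void bump_twice<X>(X&);  // error: X's copy constructor is deleted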
For more, check out the Boost Concept Check Library, which is quite old but whose documentation is still very much worth reading.

Understanding C++ template metaprogramming

In my quest to better understand templates and meta-programming in C++ I'm reading this article, but my understanding of the code snippets quickly diminishes, e.g.:
template<class A, template<class...> class B> struct mp_rename_impl;

template<template<class...> class A, class... T, template<class...> class B>
struct mp_rename_impl<A<T...>, B>
{
    using type = B<T...>;
};

template<class A, template<class...> class B>
using mp_rename = typename mp_rename_impl<A, B>::type;
The code is used like:
mp_rename<std::pair<int, float>, std::tuple> // -> std::tuple<int, float>
mp_rename<mp_list<int, float>, std::pair> // -> std::pair<int, float>
mp_rename<std::shared_ptr<int>, std::unique_ptr> // -> std::unique_ptr<int>
Can someone please explain the code like I'm five?
I do have a general and basic understanding of non-templated C++.
What I don't get is:
Why is mp_rename_impl forward declared with two type parameters (class A, template<class...> class B), then it's defined and specialized at the same time[*] with three (template<class...> class A, class... T, template<class...> class B) and respectively two(A<T...>, B) type parameters?
I understand that it aliases (using type = B<T...>;) the type to be B<T...> instead of A<T...>, but I don't really get how it is done.
Also why is A a template template parameter only in the specialization?
[*] most certainly I got something wrong here
Why is mp_rename_impl forward declared with two type parameters (class A, template<class...> class B), then it's defined and specialized at the same time[*] with three (template<class...> class A, class... T, template<class...> class B) and respectively two(A<T...>, B) type parameters?
The forward declaration establishes the number of parameters needed to instantiate mp_rename_impl, and that the former should be an actual type, the latter a template.
Then when there's an actual instantiation, it tries to match the specialisation struct mp_rename_impl<A<T...>, B>, and in doing so it can consider any combination of values for the specialisation's A, T... and B matching the specialisation's expectations: namely template<class...> class A, class... T, template<class...> class B. Note that the A parameter in the specialisation shares a name with the A in the declaration, but is not the same - the former being a template and the latter a type. Effectively, to match the specialisation a template instantiation must have been passed as the declaration's A parameter, and that template's parameters are captured at T.... It imposes no new restrictions on what can be passed as B (though the using statement does - B<T...> needs to be valid or you'll get a compilation error - too late for SFINAE to kick in).
Also why is A a template template parameter only in the specialization?
The specialisation calls that parameter A, but it's not conceptually the same as A in the declaration. Rather, the former's A<T...> corresponds to the latter A. Perhaps the specialisation should have called it "TA" or something else to indicate it's the template from which an actual A can be formed in combination with the T... parameters.
The specialisation is then of A<T...>, B, so the compiler works backwards from whatever instantiation is actually attempted to find valid substitutions for A, T... and B, guided by the restrictions on their form specified in template<template<class...> class A, class... T, template<class...> class B>.
What this achieves is to make sure the specialisation is only matched when the two parameters are a template already given some set of parameter types, and a template able to take a list of parameter types. The matching process effectively isolates the T... type list so it can be reused with B.
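To make the matching concrete (a sketch; assumes the definitions above plus <tuple>, <utility>, and <type_traits>):
// mp_rename<std::pair<int, float>, std::tuple> instantiates the specialisation with
//   A = std::pair, T... = int, float  (deduced by matching A<T...> against std::pair<int, float>)
//   B = std::tuple, so type = B<T...> = std::tuple<int, float>
static_assert(std::is_same<
    mp_rename<std::pair<int, float>, std::tuple>,
    std::tuple<int, float>
>::value, "pair renamed to tuple");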
My first attempt wasn’t what you were looking for, so let me briefly try to go back and explain like you’re six.
It’s not forward-declared in the sense that a function has a prototype and a definition. There’s an implementation for any A, and that compiles to an empty struct (which is a unique type to the compiler, but doesn’t require any actual storage or run-time code). Then, there’s a second implementation, only for template classes A.
There are really two templates in the second definition. What's going on is that the second definition takes the two parameters A and ... T and turns them into a type A<T...>, which becomes the first parameter of mp_rename_impl<A<T...>, B>. So it applies to any A that's a template class; but that's a more specific kind of A, so it's a specialization, and it declares a struct with a type definition in its scope. Finally, the third variant is not a specialization of the template at all. It's declaring the templated mp_rename as an alias for the more complicated type nested within the scope of every struct from the second declaration, namely the identifier type in the scope of mp_rename_impl<A, B>. Believe it or not, this makes his template code a lot more readable.
Why does the more generic declaration at the top have no body? When A is not a template instantiation, there is no T... to extract and nothing sensible for mp_rename to mean, so such uses should simply fail to compile; the primary template only needs to exist as a name the specialization can attach to. (It would have been more cool to write my example below to generate classes with the static constants as members, rather than functions. In fact, I just did.)
Updated as threatened to make my templates a bit more like his:
Okay, template metaprogramming is a kind of programming where, instead of having the program compute something when it runs, the compiler computes it ahead of time and stores the answer in the program. It does this by compiling templates. This can be a lot faster to run, sometimes! But there are limits on what you can do. Mainly, you can’t modify any of your parameters, and you’ve got to be sure the computation stops.
If you’re thinking, “You mean, just like functional programming?” you are one very smart five-year-old. What you normally end up doing is writing recursive templates with base cases that expand either to unrolled, streamlined code, or to constants. Here’s an example that might seem familiar from your Intro to Computer Science class when you were three or maybe four:
#include <iostream>
using std::cout;
using std::endl;

/* The recursive template to compute the ith fibonacci number.
 */
template < class T, unsigned i >
struct tmp_fibo {
    static const T fibo = tmp_fibo<T,i-1>::fibo + tmp_fibo<T,i-2>::fibo;
};

/* The base cases for i = 1 and i = 0. Partial struct specialization
 * is allowed.
 */
template < class T >
struct tmp_fibo<T,1U> {
    static const T fibo = (T)1;
};
template < class T >
struct tmp_fibo<T,0U> {
    static const T fibo = (T)0;
};

int main(void) {
    cout << "fibo(50) = " << tmp_fibo<unsigned long long, 50>::fibo
         << ". fibo(10) = " << tmp_fibo<int, 10>::fibo << "."
         << endl;
    return 0;
}
Compile to assembly language, and we can see what code the compiler generated for the expression tmp_fibo<unsigned long long, 50>::fibo. Here it is, in full:
movabsq $12586269025, %rsi
The template generates an integral constant within each structure at compile time. What the mp_rename examples are doing - since you can declare type names within structs as well - is the same thing, but for types.
I will try to make it simple. Template metaprogramming is about computing types at compile-time (you can also compute values, but let us focus on types here).
So if you have this function:
int f(int a, int b);
You have a function that returns an int value given two int values.
You use it like this:
int val = f(5, 8);
Metafunctions operate on types, instead of on values. A metafunction looks like this:
//The template parameters of the metafunction are the
//equivalent of the parameters of the function
template <class T, class U>
struct meta_f {
typedef /*something here*/ type;
};
Namely, a metafunction has a nested type inside; by convention, the nested type is called type.
So you invoke a metafunction like this in non-generic contexts:
using my_new_type = meta_f<int, float>::type;
In generic contexts you must use typename:
using my_new_type = typename meta_f<T, U>::type;
This returns a type, not a run-time value, since we said metafunctions operate on types.
Examples of metafunctions in the standard library can be found in the header <type_traits>, among others. You have add_pointer<T> or decay<T>. These metafunctions return new types given a type.
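As a toy example of writing one yourself (meta_swap is a made-up name; assumes <utility> and <type_traits>):
// A metafunction that swaps the two types held by a std::pair:
template <class P> struct meta_swap;          // primary template, left undefined
template <class T, class U>
struct meta_swap<std::pair<T, U>> {           // the partial specialization does the work
    typedef std::pair<U, T> type;
};

static_assert(std::is_same<meta_swap<std::pair<int, float>>::type,
                           std::pair<float, int>>::value, "");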
In C++14, in order to avoid verbose pieces of code like this:
using my_computed_type = typename std::add_pointer<T>::type;
some template aliases with a _t suffix, again by convention, were created that invoke the metafunction directly for you:
template <class T>
using add_pointer_t = typename std::add_pointer<T>::type;
And now you can write:
using my_computed_type = std::add_pointer_t<T>;
All in all: in a function, you have runtime values as parameters; in a metafunction, the parameters are types. A function you invoke with the usual syntax and obtain a runtime value; with a metafunction you access the nested ::type and obtain a new computed type.
//Function invocation, a, b, c are values of type A, B, C
auto ret = f(a, b, c);
//Meta function invocation. A, B, C are types
using ret_t = typename meta_f<A, B, C>::type;
//Typical shortcut, equivalent to metafunction invocation.
using ret_t = meta_f_t<A,B,C>;
So from the function invocation you get a value; from the metafunction invocations you get a type, not a value.

Template metaprogramming help: transforming a vector

As my first template metaprogram I am trying to write a function that transforms an input vector to an output vector.
For instance, I want
vector<int> v={1,2,3};
auto w=v_transform(v,[](int x){return (float)(x*2);});
to set w to the vector of three floats, {2.0, 4.0, 6.0}.
I started with this stackoverflow question, "The std::transform-like function that returns transformed container", which addresses a harder question of transforming arbitrary containers.
I now have two solutions:
A solution, v_transform_doesntwork, that doesn't work, but I don't know why (I wrote this one myself).
A solution, v_transform, that works, but I don't know why (based on Michael Urman's answer to the above question).
I am looking for simple explanations or pointers to literature that explains what is happening.
Here are the two solutions, v_transform_doesntwork and v_transform:
#include <type_traits>
#include <vector>
using namespace std;

template<typename T, typename Functor,
         typename U=typename std::result_of<Functor(T)>::type>
vector<U> v_transform(const std::vector<T> &v, Functor&& f){
    vector<U> ret;
    for(const auto & e:v)
        ret.push_back(f(e));
    return ret;
}

template<typename T, typename U>
vector<U> v_transform_doesntwork(const std::vector<T> &v, U(*f)(const T &)){
    vector<U> ret;
    for(const auto & e:v)
        ret.push_back(f(e));
    return ret;
}

float foo(const int & i){
    return (float)(i+1);
}

int main(){
    vector<int> v{1,2,3,4,5};
    auto w=v_transform(v,foo);
    auto z=v_transform(v,[](const int &x){return (float)(x*2);});
    auto zz=v_transform(v,[](int x){return (float)(x*3);});
    auto zzz=v_transform_doesntwork(v,[](const int &x){return (float)(x*2);});
}
Question 1: why doesn’t the call to v_transform_doesntwork compile? (It gives a fail-to-match template error, c++11. I tried about 4 permutations of “const” and “&” and “*” in the argument list, but nothing seemed to help.)
I prefer the implementation of v_transform_doesntwork to that of v_transform, because it’s simpler, but it has the slight problem of not working.
Question 2: why does the call to v_transform work? I get the gist obviously of what is happening, but I don’t understand why all the typenames are needed in defining U, I don’t understand how this weird syntax of defining a template parameter that is relied on later in the same definition is even allowed, or where this is all specified. I tried looking up "dependent type names" in cppreference but saw nothing about this kind of syntax.
Further note: I am assuming that v_transform works, since it compiles. If it would fail or behave unexpectedly under some situations, please let me know.
Your v_transform_doesntwork expects a function pointer and pattern matches on it.
A lambda is not a function pointer. A stateless lambda can be converted to a function pointer, but template pattern matching does not use conversions (other than a very limited subset -- Derived& to Base&, Derived* to Base*, value-to-reference and vice versa, etc. -- never a constructor or conversion operator).
Pass foo to v_transform_doesntwork instead and it should work, barring typos in your code.
template<typename T,
         typename Functor,
         typename U=typename std::result_of<Functor(T)>::type
>
vector<U> v_transform(const std::vector<T> &v, Functor&& f){
    vector<U> ret;
    for(const auto & e:v)
        ret.push_back(f(e));
    return ret;
}
so you call v_transform. It tries to deduce the template types.
It pattern matches the first argument. You pass a std::vector<int, blah> where blah is some allocator.
It sees that the first argument is std::vector<T>. It matches T to int. As you did not give a second parameter, the default allocator for std::vector<T> is used, which happens to match blah.
We then continue to the second parameter. You passed in a closure object, so it deduces the (unnamable) lambda type as Functor.
It is now out of arguments to pattern match. The remaining types use their defaulted types -- U is set to typename std::result_of<Functor(T)>::type. This does not result in a substitution failure, so SFINAE does not occur.
All types are determined, and the function is now slotted into the set of overloads to examine to determine which to call. As there are no other functions of the same name, and it is a valid overload, it is called.
Note that your code has a few minor errors:
template<typename T,
         typename A,
         typename Functor,
         typename U=typename std::decay<typename std::result_of<Functor&(T const&)>::type>::type
>
std::vector<U> v_transform(const std::vector<T, A> &v, Functor&& f){
    std::vector<U> ret;
    ret.reserve(v.size());
    for(const auto & e:v)
        ret.push_back(f(e));
    return ret;
}
which covers some corner cases.
Question 1
Why doesn't the call to v_transform_doesntwork compile?
This is because you've passed it a C++11 lambda. The second parameter of v_transform_doesntwork is a function pointer, and C++11 lambdas are, in fact, objects of an unknown (unnameable) type. So the declaration
template<typename T, typename U>
vector<U> v_transform_doesntwork(const std::vector<T> &v, U(*f)(const T &))
binds T to the argument type of the function pointer f and U to its return type. But the second parameter cannot accept a lambda, for exactly this reason! You can specify the types explicitly to make it work with a non-capturing lambda, but the compiler will not attempt type deduction through the conversion.
Question 2
Why does the call to v_transform work?
Let's look at the code you wrote:
template<typename T,
         typename Functor,
         typename U=typename std::result_of<Functor(T)>::type>
vector<U> v_transform(const std::vector<T> &v, Functor&& f){
Again, T is a template parameter that represents the input type. But now Functor is a parameter for whichever callable object you decide to pass in to v_transform (there is nothing special about the name). We set U to be the result of that Functor being called on T: the std::result_of trait jumps through some hoops to figure out what the return type will be. You also might want to change the definition of U to
typename U=typename std::result_of<Functor&(T const &)>::type>
so that it can accept functions taking constants or references as parameters.
For the doesntwork function, you need to explicitly specify the template parameters:
auto zzz=v_transform_doesntwork<int,float>(v,[](const int &x){return (float)(x*2);});
Then it does work. The compiler is not able to implicitly determine these parameters whilst it converts the lambda to a function pointer.
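As an aside (a sketch): since the lambda is capture-less, you can also force the conversion to a function pointer yourself with unary +, and then deduction succeeds without spelling out the template arguments:
// Unary + converts the capture-less lambda to float(*)(const int&),
// so T and U can be deduced from the function pointer type:
auto zzzz = v_transform_doesntwork(v, +[](const int &x){ return (float)(x*2); });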

Make C++ call the right template method in an un-ugly way

I'm cooking up a vector library and have hit a snag. I want to allow recursive vectors (i.e. vec<H,vec<W,T> >) so I'd like my "min" and other functions to be recursive as well. Here's what I have:
template<typename T>
inline T min(const T& k1, const T& k2) {
    return k1 < k2 ? k1 : k2;
}

template<int N, typename T, typename VT1, typename VT2>
inline vec<N,T> min(const container<N,T,VT1>& v1, const container<N,T,VT2>& v2) {
    vec<N,T> new_vec;
    for (int i = 0; i < N; i++) new_vec[i] = min(v1[i], v2[i]);
    return new_vec;
}
...
template<int N, typename T>
class vec : public container<N,T,vec_array<N,T> > {
...

// This calls the first (wrong) method and says you can't call ?: on a vec
vec<2,float> v1,v2;
min(v1,v2);

// This says the call is ambiguous
container<2,float,vec_array<2,float> > c1,c2;
min(c1,c2);

// This one actually works
vec<2,float> v3; container<N,T,some_other_type> v4;
min(v3,v4);

// This works too
min<2,float,vec_array<2,float>,vec_array<2,float> >(v1, v2);
That last call is ugly! How can I get the right method called with just min(v1,v2)? The best I can come up with is to get rid of the vec class (so v1 and v2 would have to be defined as container<2,float,vec_array<2,float> >) and to add one more template<N,T,VT> min overload that calls min<N,T,VT,VT>(v1,v2).
Thanks!
For the first case, overload resolution prefers the first min: it accepts both arguments by an exact match, while the second min needs a derived-to-base conversion to accept the arguments.
As you have subsequently figured out (by experimentation?), if you use container<...> as the argument types instead of the derived classes, no derived-to-base conversion is needed anymore, and overload resolution will then prefer the second template: both now accept the arguments equally well, but the second template (in your own solution) is more specialized.
Yet in your own solution, you need to put a typename before the return type to make it Standard C++. I think the reason you need to define a second template is that, for a template to be more specialized, the first min needs to accept all the arguments that the second template accepts; this is figured out by trying to match the second template's arguments against the first's:
container<N, T, VT1> -> T // func param 1
container<N, T, VT2> -> T // func param 2
So, the different template parameter types try to deduce to the same template parameter, which will cause a conflict and make the first template not successfully deduce all argument of the second template. For your own solution, this won't be the case:
container<N, T, VT> -> T // func param 1
container<N, T, VT> -> T // func param 2
This will make the first template deduce all the parameter types from the second template, but not the other way around: container<N, T, VT> won't match an arbitrary T. So your own solution's template is more specialized and is called, and then explicitly forwards to the other template.
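A stripped-down illustration of that partial-ordering rule (a sketch; pick is a made-up name, and container is the question's template):
template <class T>
const char* pick(const T&, const T&) { return "generic"; }

template <int N, class T, class VT>
const char* pick(const container<N,T,VT>&, const container<N,T,VT>&) { return "specialized"; }

// Called with two arguments of the same container<2,float,vec_array<2,float> > type,
// both overloads match exactly, but the second is more specialized, so it wins;
// give the container version two different VT parameters instead, and neither is
// more specialized - exactly the ambiguity described above.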
Finally, note that your own solution only accepts containers where the third template argument is the same for both function arguments, while your other min template accepts containers where that argument differs. I'm not sure whether that's on purpose, but given the other min function in place, which conflicts unless you make the third argument types the same as shown above, I'm not sure how else to fix it.
The questioner subsequently edited his own answer, so most of my references above to "your own answer" don't apply anymore.
template<typename T1, typename T2>            // <- second type parameter added
inline T1 min(const T1& k1, const T2& k2) {   // <- k2 is now a const T2&
    return k1 < k2 ? k1 : k2;
}
...
template<int N, typename T>
struct vec {
    typedef container<N,T,vec_array<N,T> > t;
};
...
vec<2,float>::t v1,v2;
min(v1,v2);
That's what I finally did to get it to work.
The ambiguity arose because both arguments had the same type - container<2,float,vec_array<2,float> >. That's one point for the min(const T&, const T&) method. Since min(const container<N,T,VT1>& v1, const container<N,T,VT2>& v2) was also a match and more specialized, it got an extra point too, and the compiler couldn't make up its mind over which one to use. Switching the generic min to use two type arguments - min(const T1&, const T2&) - beats it into submission.
I also switched to using a "template typedef" instead of inheritance to define vec<N,T>s without having to deal with the messy container<N,T,VT> stuff. This makes vec<N,T>::t an exact match for the correct function.
Now that I'm using a typedef rather than inheritance, and two types in the generic min function instead of just one, the correct method is getting called.