Compile-time (constexpr) float modulo? - c++

Consider the following function that computes the integral or floating-point modulo depending on the argument type, at compile-time:
template<typename T>
constexpr T modulo(const T x, const T y)
{
return (std::is_floating_point<T>::value) ? (x < T() ? T(-1) : T(1))*((x < T() ? -x : x)-static_cast<long long int>((x/y < T() ? -x/y : x/y))*(y < T() ? -y : y))
: (static_cast<typename std::conditional<std::is_floating_point<T>::value, int, T>::type>(x)
%static_cast<typename std::conditional<std::is_floating_point<T>::value, int, T>::type>(y));
}
Can the body of this function be improved? (I need to have a single function for both integer and floating-point types).

Here's one way to clean this up:
#include <type_traits>
#include <cmath>
template <typename T> // integral? floating point?
constexpr T remainder_impl(T a, T b, std::true_type, std::false_type)
{
    return a % b; // or whatever
}
template <typename T> // integral? floating point?
constexpr T remainder_impl(T a, T b, std::false_type, std::true_type)
{
    return std::fmod(a, b); // or substitute your own expression
}
template <typename T>
constexpr T remainder(T a, T b)
{
    return remainder_impl<T>(a, b,
        std::is_integral<T>(), std::is_floating_point<T>());
}
If you try and call this function on a type that's not arithmetic, you'll get a compiler error.
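For reference, a minimal usage sketch (note that std::fmod was not constexpr before C++23, so the floating-point path generally cannot appear in a constant expression):
static_assert(remainder(7, 3) == 1, "integral path, uses %");
double d = remainder(7.5, 2.0);   // floating-point path, uses std::fmod; 1.5, but only at run time
// remainder(std::string("a"), std::string("b")); // error: no matching remainder_impl overload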

I would rather define it this way (template aliases + template overloading):
#include <type_traits>
using namespace std;
// For floating point types
template<typename T, typename enable_if<is_floating_point<T>::value>::type* p = nullptr>
constexpr T modulo(const T x, const T y)
{
return (x < T() ? T(-1) : T(1)) * (
(x < T() ? -x : x) -
static_cast<long long int>((x/y < T() ? -x/y : x/y)) * (y < T() ? -y : y)
);
}
// For non-floating point types
template<typename T>
using TypeToCast = typename conditional<is_floating_point<T>::value, int, T>::type;
template<typename T, typename enable_if<!is_floating_point<T>::value>::type* p = nullptr>
constexpr T modulo(const T x, const T y)
{
return (static_cast<TypeToCast<T>>(x) % static_cast<TypeToCast<T>>(y));
}
int main()
{
constexpr int x = modulo(7.0, 3.0);
static_assert((x == 1.0), "Error!");
return 0;
}
It is lengthier but cleaner, IMO. I am assuming that by "single function" you mean "something that can be invoked uniformly". If you mean "a single function template", then I would just keep the template alias improvement and leave out the overloading. But then, as mentioned in another answer, it would not be clear why you need to have a single function template.

You ask,
“Can the body of this function be improved?”
Certainly. Right now it is a spaghetti mess:
template<typename T>
constexpr T modulo(const T x, const T y)
{
return (std::is_floating_point<T>::value) ? (x < T() ? T(-1) : T(1))*((x < T() ? -x : x)-static_cast<long long int>((x/y < T() ? -x/y : x/y))*(y < T() ? -y : y))
: (static_cast<typename std::conditional<std::is_floating_point<T>::value, int, T>::type>(x)
%static_cast<typename std::conditional<std::is_floating_point<T>::value, int, T>::type>(y));
}
You clarify that …
“(I need to have a single function for both integer and floating-point types)”
Well the template is not a single function. It’s a template. Functions are generated from it.
This means your question builds on a false assumption.
With that assumption removed, one way to simplify the function body, which you should do as a matter of course, is to specialize the template for floating point types versus other numeric types. To do that put the function template implementation in a class (because C++ does not support partial specialization of functions, only of classes).
Then you can employ various formatting tricks, including the "0?0 : blah" trick to make the function more readable, with lines and indentation and stuff! :-)
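A minimal sketch of that structure (hypothetical names, assuming <type_traits> is available; the floating-point branch just reuses the truncation idea from the question):
template< class T, bool isFloatingPoint = std::is_floating_point<T>::value >
struct ModuloImpl                           // general case: integral types, use %
{
    static constexpr T eval( T x, T y ) { return x % y; }
};

template< class T >
struct ModuloImpl< T, true >                // partial specialization for floating point
{
    static constexpr T eval( T x, T y )
    { return x - static_cast<long long>( x / y ) * y; }
};

template< class T >
constexpr T modulo( T x, T y ) { return ModuloImpl<T>::eval( x, y ); }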
Addendum: delving into your code I see that you cast haphazardly to long int and int with disregard of the invoker's types. That's ungood. It is probably a good idea to write up a bunch of automated test cases, invoking the function with various argument types and big/small values.

template <class T>
constexpr
T
modulo(T x, T y)
{
typedef typename std::conditional<std::is_floating_point<T>::value,
int,
T
>::type Int;
return std::is_floating_point<T>() ?
x - static_cast<long long>(x / y) * y :
static_cast<Int>(x) % static_cast<Int>(y);
}

I believe there is a simpler way:
// Special available `%`
template <typename T, typename U>
constexpr auto modulo(T const& x, U const& y) -> decltype(x % y) {
return x % y;
}
Note: this is based on detection of %, so it also works for custom types as long as they implement the operator. I also made it mixed-type while I was at it.
// Special floating point
inline constexpr float modulo(float x, float y) { return /*something*/; }
inline constexpr double modulo(double x, double y) { return /*something*/; }
inline constexpr long double modulo(long double x, long double y) { return /*something*/; }
Note: it would be cleaner to use fmod, but unfortunately I do not believe it is constexpr; therefore I chose non-template modulos for the floating-point types, which allows you to perform magic to compute the exact modulo, possibly based on the binary representation of the type.
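For instance, one possible body for the double overload, if an fmod-style truncated result is acceptable (a sketch; it ignores NaN, infinities and quotients that do not fit in a long long, so it is not the bit-exact approach hinted at above):
inline constexpr double modulo(double x, double y)
{
    return x - static_cast<long long>(x / y) * y;   // result has the sign of x, like fmod
}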

You can do it much more simply if you want:
template<typename A, typename B>
constexpr auto Modulo(const A& a, const B& b) -> decltype(a - (b * int(a/b)))
{
return a - (b * int(a/b));
}
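Note that int(a/b) truncates toward zero (so the result has the sign of the dividend, like fmod) and is undefined when a/b does not fit in an int; within that limit it works at compile time:
static_assert(Modulo(7.0, 3.0) == 1.0, "floating point");
static_assert(Modulo(7, 3) == 1, "integral");
static_assert(Modulo(-7.0, 3.0) == -1.0, "sign follows the dividend");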

Related

Is there a way to make this C++14 recursive template shorter in C++17?

This poly_eval function will compute the result of evaluating a polynomial with a particular set of coefficients at a particular value of x. For example, poly_eval(5, 1, -2, -1) computes x^2 - 2x - 1 with x = 5. It's all constexpr so if you give it constants it will compute the answer at compile time.
It currently uses recursive templates to build the polynomial evaluation expression at compile time and relies on C++14 to be constexpr. I was wondering if anybody could think of a good way to remove the recursive template, perhaps using C++17. The code that exercises the template uses the __uint128_t type from clang and gcc.
#include <type_traits>
#include <tuple>
template <typename X_t, typename Coeff_1_T>
constexpr auto poly_eval_accum(const X_t &x, const Coeff_1_T &c1)
{
return ::std::pair<X_t, Coeff_1_T>(x, c1);
}
template <typename X_t, typename Coeff_1_T, typename... Coeff_TList>
constexpr auto poly_eval_accum(const X_t &x, const Coeff_1_T &c1, const Coeff_TList &... coeffs)
{
const auto &tmp_result = poly_eval_accum(x, coeffs...);
auto saved = tmp_result.second + tmp_result.first * c1;
return ::std::pair<X_t, decltype(saved)>(tmp_result.first * x, saved);
}
template <typename X_t, typename... Coeff_TList>
constexpr auto poly_eval(const X_t &x, const Coeff_TList &... coeffs)
{
static_assert(sizeof...(coeffs) > 0,
"Must have at least one coefficient.");
return poly_eval_accum(x, coeffs...).second;
}
// This is just a test function to exercise the template.
__uint128_t multiply_lots(__uint128_t num, __uint128_t n2)
{
const __uint128_t cf = 5;
return poly_eval(cf, num, n2, 10);
}
// This is just a test function to exercise the template to make sure
// it computes the result at compile time.
__uint128_t eval_const()
{
return poly_eval(5, 1, -2, 1);
}
Also, am I doing anything wrong here?
-------- Comments on Answers --------
There are two excellent answers below. One is clear and terse, but may not handle certain situations involving complex types (expression trees, matrices, etc.) well, though it does a fair job. It also relies on the somewhat obscure comma operator.
The other is less terse, but still much clearer than my original recursive template, and it handles types just as well. It expands out to c_n + x*(c_{n-1} + x*(c_{n-2} + ...)), whereas my recursive version expands out to c_n + x*c_{n-1} + x*x*c_{n-2} + .... For most reasonable types they should be equivalent, and the answer can easily be modified to expand out to what my recursive one expands to.
I picked the first answer because it was first and its terseness is more within the spirit of my original question. But if I were to choose a version for production, I'd choose the second.
Using the power of the comma operator (and C++17 folding, obviously), I suppose you can write poly_eval() as follows
template <typename X_t, typename C_t, typename ... Cs_t>
constexpr auto poly_eval (X_t const & x, C_t a, Cs_t const & ... cs)
{
( (a *= x, a += cs), ..., (void)0 );
return a;
}
throwing away poly_eval_accum().
Observe that the first coefficient is explicit (so you can also delete the static_assert()) and is passed by copy, becoming the accumulator.
-- EDIT --
Added an alternative version to solve the problem of the return type, using a decltype() of an expression as the OP suggested; in this version the first coefficient is a constant reference again.
template <typename X_t, typename C_t, typename ... Cs_t>
constexpr auto poly_eval (X_t const & x, C_t const & c1, Cs_t const & ... cs)
{
decltype(((x * c1) + ... + (x * cs))) ret { c1 };
( (ret *= x, ret += cs), ..., (void)0 );
return ret;
}
-- EDIT 2 --
Bonus answer: it's possible to avoid the recursion in C++14 as well, using the power of the comma operator (again) and initializing an unused C-style array of integers
template <typename X_t, typename C_t, typename ... Cs_t>
constexpr auto poly_eval (X_t const & x, C_t const & a, Cs_t const & ... cs)
{
using unused = int[];
std::common_type_t<decltype(x * a), decltype(x * cs)...> ret { a };
(void)unused { 0, (ret *= x, ret += cs)... };
return ret;
}
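Either variant can be checked at compile time, for instance (the expected values follow from the polynomial definition, e.g. x^2 - 2x - 1 at x = 5 is 14):
static_assert(poly_eval(5, 1, -2, -1) == 14, "5*5 - 2*5 - 1");
static_assert(poly_eval(2, 3, 0, 1) == 13, "3*2*2 + 0*2 + 1");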
A great answer is supplied above, but it requires a common return type and will therefore not work if you are, say, building a compile-time expression tree.
What we need is some way to have a fold expression that both multiplies by the value at the evaluation point x and adds a coefficient at each iteration, in order to eventually end up with an expression like: (((c0) * x + c1) * x + c2) * x + c3. This is (I think) not possible with a fold expression directly, but we can define a special type that overloads a binary operator and does the necessary calculations.
template<class M, class T>
struct MultiplyAdder
{
M mul;
T acc;
constexpr MultiplyAdder(M m, T a) : mul(m), acc(a) { }
};
template<class M, class T, class U>
constexpr auto operator<<(const MultiplyAdder<M,T>& ma, const U& u)
{
return MultiplyAdder(ma.mul, ma.acc * ma.mul + u);
}
template <typename X_t, typename C_t, typename... Coeff_TList>
constexpr auto poly_eval(const X_t &x, const C_t &a, const Coeff_TList &... coeffs)
{
return (MultiplyAdder(x, a) << ... << coeffs).acc;
}
As a bonus, this solution also ticks C++17's 'automatic class template argument deduction' box ;)
Edit: Oops, argument deduction wasn't working inside MultiplyAdder<>::operator<<(), because MultiplyAdder refers to its own template-id rather than its template-name. I've added a namespace specifier, but that unfortunately makes it dependent on its own namespace. There must be a way to refer to its actual template-name, but I can't think of any without resorting to template aliases.
Edit2: Fixed it by making operator<<() a non-member.
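The same compile-time check as before applies, since the Horner-style expansion yields the same value here:
static_assert(poly_eval(5, 1, -2, -1) == 14, "(((1)*5 - 2)*5 - 1");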

how to divide boost::optional<double>?

I have such code:
boost::optional<double> result = _ind1.Value() / _ind2.Value();
Each arg is boost::optional<double> too:
boost::optional<double> Value() {
return value;
}
Errors are:
Error 1 error C2676: binary '/' : 'boost::optional<T>' does not define this operator or a conversion to a type acceptable to the predefined operator
2 IntelliSense: no operator "/" matches these operands
operand types are: boost::optional<double> / boost::optional<double>
I understand that it seems that division is just not defined. I expect the result to be boost::none if either of the two arguments is none; otherwise I want it to be normal double division. Should I just write this myself?
Of course such a simple operation as division of doubles is supported.
But you aren't trying to divide doubles. You're trying to divide boost::optional<double>s which is a whole different story.
If you want, you can define a division operator for this. It might look like (untested):
template<typename T>
boost::optional<T> operator/(const boost::optional<T>& a, const boost::optional<T>& b)
{
if(a && b) return *a / *b;
else return boost::optional<T>();
}
In C++11 (code courtesy of Yakk):
template<class T,class U> struct divide_result {
typedef typename std::decay<decltype(std::declval<T>()/std::declval<U>())>::type type;
};
template<class T, class U> using divide_result_t=typename divide_result<T,U>::type;
template<typename T,typename U>
boost::optional<divide_result_t<T,U>> operator/(const boost::optional<T>& a, const boost::optional<U>& b)
{
if(a && b) return *a / *b;
else return boost::none;
}
I used a template because now it's also good for int, float, etc.
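A quick usage sketch (the mixed-type line needs the templated C++11 variant):
boost::optional<double> num = 10.0, den = 4.0, empty;
boost::optional<double> q1 = num / den;                       // engaged, contains 2.5
boost::optional<double> q2 = num / empty;                     // disengaged (boost::none)
boost::optional<double> q3 = boost::optional<int>(8) / den;   // mixed types, contains 2.0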

Passing an integer or a type as a template parameter?

Here is an example case of what I'm trying to do (it is a "test" case just to illustrate the problem) :
#include <iostream>
#include <type_traits>
#include <ratio>
template<int Int, typename Type>
constexpr Type f(const Type x)
{
return Int*x;
}
template<class Ratio, typename Type,
class = typename std::enable_if<Ratio::den != 0>::type>
constexpr Type f(const Type x)
{
return (x*Ratio::num)/Ratio::den;
}
template</*An int OR a type*/ Something, typename Type>
constexpr Type g(const Type x)
{
return f<Something, Type>(x);
}
int main()
{
std::cout<<f<1>(42.)<<std::endl;
std::cout<<f<std::kilo>(42.)<<std::endl;
}
As you can see, there are two versions of the f() function : the first one takes an int as a template parameter, and the second one takes a std::ratio. The problem is the following :
I would like to "wrap" this function through g() which can take an int OR a std::ratio as first template parameter and call the good version of f().
How to do that without writing two g() functions? In other words, what do I have to write instead of /*An int OR a type*/?
Here's how I would do it, but I've changed your interface slightly:
#include <iostream>
#include <type_traits>
#include <ratio>
template <typename Type>
constexpr
Type
f(int Int, Type x)
{
return Int*x;
}
template <std::intmax_t N, std::intmax_t D, typename Type>
constexpr
Type
f(std::ratio<N, D> r, Type x)
{
// Note use of r.num and r.den instead of N and D leads to
// less probability of overflow. For example if N == 8
// and D == 12, then r.num == 2 and r.den == 3 because
// ratio reduces the fraction to lowest terms.
return x*r.num/r.den;
}
template <class T, class U>
constexpr
typename std::remove_reference<U>::type
g(T&& t, U&& u)
{
return f(static_cast<T&&>(t), static_cast<U&&>(u));
}
int main()
{
constexpr auto h = g(1, 42.);
constexpr auto i = g(std::kilo(), 42.);
std::cout<< h << std::endl;
std::cout<< i << std::endl;
}
Output:
42
42000
Notes:
I've taken advantage of constexpr to not pass compile-time constants via template parameters (that's what constexpr is for).
g is now just a perfect forwarder. However I was unable to use std::forward because it isn't marked up with constexpr (arguably a defect in C++11). So I dropped down to use static_cast<T&&> instead. Perfect forwarding is a little bit overkill here. But it is a good idiom to be thoroughly familiar with.
How to do that without writing two g() functions?
You don't. There is no way in C++ to take either a type or a value of some type, except through overloading.
It is not possible to have a template parameter taking both type and non-type values.
Solution 1:
Overloaded functions.
Solution 2:
You can store values in types. Ex:
template<int n>
struct store_int
{
static const int num = n;
static const int den = 1;
};
template<class Ratio, typename Type,
class = typename std::enable_if<Ratio::den != 0>::type>
constexpr Type f(const Type x)
{
return (x*Ratio::num)/Ratio::den;
}
template<typename Something, typename Type>
constexpr Type g(const Type x)
{
return f<Something, Type>(x);
}
But with this solution you will have to specify g<store_int<42> >(...) instead of g<42>(...)
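For illustration, the call site then looks like this (a sketch; both calls end up in the ratio-style overload of f(), since store_int<n>::den is 1):
std::cout << g<store_int<3>>(42.) << std::endl;   // (42 * 3) / 1 = 126
std::cout << g<std::kilo>(42.) << std::endl;      // (42 * 1000) / 1 = 42000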
If the function is small, I advise you to use overloading.

Determine number of bits in integral type at compile time

NOTE: I added a similar but greatly simplified version of the problem at Ambiguous overload of functions like `msg(long)` with candidates `msg(int32_t)` and `msg(int64_t)`. That version has the advantage of a complete compilable example in a single file.
Problem
I have a C library with functions like
obj_from_int32(int32_t& i);
obj_from_int64(int64_t& i);
obj_from_uint32(uint32_t& i);
obj_from_uint64(uint64_t& i);
In this case the types int32_t etc are not the std ones - they are implementation defined, in this case an array of chars (in the following example I've omitted the conversion - it doesn't change the question, which is about mapping integral types to a particular function based on the number of bits in the integral type).
I have a second C++ interface class, that has constructors like
MyClass(int z);
MyClass(long z);
MyClass(long long z);
MyClass(unsigned int z);
MyClass(unsigned long z);
MyClass(unsigned long long z);
Note, I can't replace this interface with std::int32_t style types - if I could I wouldn't need to ask this question ;)
The problem is how to call the correct obj_from_ function based on the number of bits in the integral type.
Proposed Solutions
I'm putting two proposed solutions, since no killer solution has floated to the top of the list, and there are a few that are broken.
Solution 1
Provided by Cheers and hth. - Alf. Comments from this point on are my own - feel free to comment and/or edit.
Advantages
- Fairly simple (at least compared to boost::enable_if)
- Doesn't rely on 3rd party library (as long as compiler supports tr1)
Disadvantages
- If more functions (like anotherObj_from_int32 etc) are needed, a lot more code is required
This solution can be found below - take a look, it's nifty!
Solution 2
Advantages
Once the ConvertFromIntegral functions are done, adding new functions that need the conversion is trivial - simply write a set overloaded on int32_t, int64_t and unsigned equivalents.
Keeps use of templates to one place only, they don't spread as the technique is reused.
Disadvantages
Might be overly complicated, using boost::enable_if. Somewhat mitigated by the fact this appears in one place only.
Since this is my own I can't accept it, but you can upvote it if you think it's neat (and clearly some folks do not think it is neat at all; that's what they downvoted it for, I think!)
Thanks to everyone who contributed ideas!
The solution involves conversion functions from int, long and long long to int32_t and int64_t (and similar for the unsigned versions). These are combined with another set of functions overloaded on int32_t, int64_t and the unsigned equivalents. The two sets could be combined, but the conversion functions make a handy utility set that can be reused, and then the second set of functions is trivially simple.
// Utility conversion functions (reuse wherever needed)
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(int32_t) && boost::is_signed<InputT>::value,
int32_t>::type ConvertFromIntegral(InputT z) { return static_cast<int32_t>(z); }
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(int64_t) && boost::is_signed<InputT>::value,
int64_t>::type ConvertFromIntegral(InputT z) { return static_cast<int64_t>(z); }
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(uint32_t) && boost::is_unsigned<InputT>::value,
uint32_t>::type ConvertFromIntegral(InputT z) { return static_cast<uint32_t>(z); }
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(uint64_t) && boost::is_unsigned<InputT>::value,
uint64_t>::type ConvertFromIntegral(InputT z) { return static_cast<uint64_t>(z); }
// Overload set (mock implementation, depends on required return type etc)
void* objFromInt(int32_t i)  { obj_from_int32(i);  return 0; /* mock */ }
void* objFromInt(int64_t i)  { obj_from_int64(i);  return 0; /* mock */ }
void* objFromInt(uint32_t i) { obj_from_uint32(i); return 0; /* mock */ }
void* objFromInt(uint64_t i) { obj_from_uint64(i); return 0; /* mock */ }
// Interface Implementation
MyClass(int z) : _val(objFromInt(ConvertFromIntegral(z))) {}
MyClass(long z): _val(objFromInt(ConvertFromIntegral(z))) {}
MyClass(long long z): _val(objFromInt(ConvertFromIntegral(z))) {}
MyClass(unsigned int z): _val(objFromInt(ConvertFromIntegral(z))) {}
MyClass(unsigned long z): _val(objFromInt(ConvertFromIntegral(z))) {}
MyClass(unsigned long long z): _val(objFromInt(ConvertFromIntegral(z))) {}
A simplified (single compilable .cpp!) version of the solution is given at Ambiguous overload of functions like `msg(long)` with candidates `msg(int32_t)` and `msg(int64_t)`
Given 3rd party functions …
void obj_from_int32( int32_bytes_t& i );
void obj_from_int64( int64_bytes_t& i );
void obj_from_uint32( uint32_bytes_t& i );
void obj_from_uint64( uint64_bytes_t& i );
you can call the "correct" such function for a built-in type as follows:
template< int nBytes, bool isSigned >
struct ThirdParty;
template<>
struct ThirdParty< 4, true >
{
template< class IntegralT >
static void func( IntegralT& v )
{ obj_from_int32( v ); } // Add whatever conversion is required.
};
// Etc., specializations of ThirdParty for unsigned and for 8 bytes.
template< class IntegralT >
void myFunc( IntegralT& v )
{ ThirdParty< sizeof( v ), std::is_signed< IntegralT >::value >::func( v ); }
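For completeness, the remaining specializations hinted at in the comment follow the same pattern, e.g. (a sketch):
template<>
struct ThirdParty< 8, true >
{
    template< class IntegralT >
    static void func( IntegralT& v )
    { obj_from_int64( v ); }    // Add whatever conversion is required.
};

template<>
struct ThirdParty< 4, false >
{
    template< class IntegralT >
    static void func( IntegralT& v )
    { obj_from_uint32( v ); }   // Add whatever conversion is required.
};

// ...and likewise ThirdParty< 8, false > forwarding to obj_from_uint64.
// Usage: long long x = 42; myFunc( x ); // dispatches on sizeof and signedness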
Instead of overloading, what about pattern matching? Use boost::enable_if and a helper template to select the type of operation you're looking for?
Something like this:
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_integral.hpp>
#include <iostream>
template <typename T, typename Dummy=void> struct helper;
// Handle signed integers of size 1 (8 bits)
template <typename T> struct helper<T,
typename boost::enable_if_c<
boost::is_integral<T>::value &&
(sizeof(T)==1) &&
(static_cast<T>(-1) < static_cast<T>(0)) >::type>
{
static void do_stuff(T const& ) {std::cout<<"signed, size 1"<<std::endl;}
};
// Handle unsigned integers of size 1 (8 bits)
template <typename T> struct helper<T,
typename boost::enable_if_c<
boost::is_integral<T>::value &&
(sizeof(T)==1) &&
(static_cast<T>(-1) > static_cast<T>(0)) >::type>
{
static void do_stuff(T const& ) {std::cout<<"unsigned, size 1"<<std::endl;}
};
// Handle signed integers of size 2 (16 bits)
template <typename T> struct helper<T,
typename boost::enable_if_c<
boost::is_integral<T>::value &&
(sizeof(T)==2) &&
(static_cast<T>(-1) < static_cast<T>(0)) >::type>
{
static void do_stuff(T const& ) {std::cout<<"signed, size 2"<<std::endl;}
};
// And so on and so forth....
// Use a function for type erasure:
template <typename T> void do_stuff(T const& value)
{
helper<T>::do_stuff(value);
}
int main()
{
do_stuff(static_cast<unsigned char>(0)); // "unsigned, size 1"
do_stuff(static_cast<signed short>(0)); // "signed, size 2"
}
More complete listing (and proof it works with GCC at least) at http://ideone.com/pIhdq.
Edit: Or more simply, but with perhaps less coverage: (using the standard integral types)
template <typename T> struct helper2;
template <> struct helper2<uint8_t> {static void do_stuff2(uint8_t ) {...}};
template <> struct helper2<int8_t> {static void do_stuff2(int8_t ) {...}};
template <> struct helper2<uint16_t> {static void do_stuff2(uint16_t ) {...}};
template <> struct helper2<int16_t> {static void do_stuff2(int16_t ) {...}};
// etc.
template <typename T> void do_stuff2(T value) {helper2<T>::do_stuff2(value);}
As we discovered in the linked problem, long is the cause of the ambiguity here.
The line
MyClass(long z): _val(objFromInt(z)) {}
should be changed to something like:
MyClass(long z): _val(objFromInt(sizeof(long) == 4 ? static_cast<int32_t>(z) : static_cast<int64_t>(z))) {}
Please note that you will probably face a similar problem with long long on 64-bit gcc.
As pointed out in other answers, this can be trivially solved at runtime using if(sizeof(int)==sizeof(int32_t)) style branches. To do this at compile-time, boost::enable_if can be used.
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(int32_t) && boost::is_signed<InputT>::value,
int32_t>::type ConvertIntegral(InputT z) { return static_cast<int32_t>(z); }
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(int64_t) && boost::is_signed<InputT>::value,
int64_t>::type ConvertIntegral(InputT z) { return static_cast<int64_t>(z); }
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(uint32_t) && boost::is_unsigned<InputT>::value,
uint32_t>::type ConvertIntegral(InputT z) { return static_cast<uint32_t>(z); }
template <class InputT>
typename boost::enable_if_c<sizeof(InputT)==sizeof(uint64_t) && boost::is_unsigned<InputT>::value,
uint64_t>::type ConvertIntegral(InputT z) { return static_cast<uint64_t>(z); }
Anywhere you need to convert an integral type to an int32_t, int64_t, uint32_t or uint64_t, simply call it like this:
ConvertIntegral(long(5)); // Will return a type compatible with int32_t or int64_t
The ConvertIntegral function can be combined with the int32_t and int64_t overload set for a complete solution. Alternatively, the technique illustrated could be built-in to the overload set.
Also, the above could be further enhanced by disabling for non-integral types. For a complete example of using the functions, see Ambiguous overload of functions like `msg(long)` with candidates `msg(int32_t)` and `msg(int64_t)`
An ambiguity can easily stem from overloading on both signed and unsigned types. For instance, given
void foo(unsigned int);
void foo(long);
then foo(0) is ambiguous as conversions (or maybe promotions) from int to both unsigned int and long are ranked the same.
Either pick one signedness, or write two overload sets, one for each signedness, if you're using constructor overloads that take unsigned types (and you care about that).
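A concrete illustration of that ranking (a minimal sketch):
void foo(unsigned int) { }
void foo(long) { }

int main()
{
    // foo(0);   // error: ambiguous - int -> unsigned int and int -> long rank equally
    foo(0L);     // OK: exact match for foo(long)
    foo(0u);     // OK: exact match for foo(unsigned int)
}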

Arguments to a template function aren't doing any implicit conversion

For some strange reason, I can't get the template arguments in this one piece of code to implicitly cast to a compatible type.
#include <type_traits>
template <typename T, unsigned D>
struct vec;
template <>
struct vec<float, 2> {
typedef float scalar;
static constexpr unsigned dimension = 2;
float x, y;
float& operator[] (unsigned i) { return (&x)[i]; }
float const& operator[] (unsigned i) const { return (&x)[i]; }
};
template <typename L, typename R>
struct add;
template <typename L, typename R, unsigned D>
struct add<vec<L, D>, vec<R, D>> {
typedef vec<L, D> left_type;
typedef vec<R, D> right_type;
typedef vec<typename std::common_type<L, R>::type, D> return_type;
add(left_type l, right_type r)
: left(l),
right(r)
{}
operator return_type() const
{
return_type result;
for (unsigned i = 0; i < D; ++i)
result[i] = left[i] + right[i];
return result;
}
left_type left;
right_type right;
};
template <typename L, typename R, unsigned D>
add<vec<L, D>, vec<R, D>>
operator+(vec<L, D> const& lhs, vec<R, D> const& rhs)
{
return {lhs, rhs};
}
int main()
{
vec<float, 2> a, b, c;
vec<float, 2> result = a + b + c;
}
Fails with:
prog.cpp: In function 'int main()':
prog.cpp:55:36: error: no match for 'operator+' in 'operator+ [with L = float, R = float, unsigned int D = 2u](((const vec<float, 2u>&)((const vec<float, 2u>*)(& a))), ((const vec<float, 2u>&)((const vec<float, 2u>*)(& b)))) + c'
So if I'm correct, the compiler should see the code in the main function as this:
((a + b) + c)
- compute a + b
- cast the result of a + b from add<...> to vec<float, 2> using the conversion operator in add<...>
- compute (a + b) + c
But it never does the implicit cast. If I explicitly cast the result of (a + b) to a vec, the code works fine.
I'm going to side-step your actual problem and instead make a recommendation: Rather than writing all of this complicated boilerplate from scratch, have a look at Boost.Proto, which has taken care of all the tricky details for you:
Proto is a framework for building Domain Specific Embedded Languages in C++. It provides tools for constructing, type-checking, transforming and executing expression templates. More specifically, Proto provides:
An expression tree data structure.
A mechanism for giving expressions additional behaviors and members.
Operator overloads for building the tree from an expression.
Utilities for defining the grammar to which an expression must conform.
An extensible mechanism for immediately executing an expression template.
An extensible set of tree transformations to apply to expression trees.
See also the library author's Expressive C++ series of articles, which more-or-less serve as an (excellent) in-depth Boost.Proto tutorial.
Most conversions are not used during template argument deduction.
You rely on template argument deduction when you call your operator+ overload: it is only callable where both arguments are of type vec<...>, but when you try to call it the left-hand argument is of type add<...>. The compiler is not able to figure out that you really mean for that overload to be called (and it isn't allowed to guess), hence the error.
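Given that explanation, one possible workaround (a sketch, not taken from the answers above) is to add an overload of operator+ whose left-hand side is the expression type itself, so the intermediate add<...> is evaluated to a vec before the next addition is formed:
template <typename L1, typename R1, typename R, unsigned D>
add<typename add<vec<L1, D>, vec<R1, D>>::return_type, vec<R, D>>
operator+(add<vec<L1, D>, vec<R1, D>> const& lhs, vec<R, D> const& rhs)
{
    // Force the conversion of the intermediate expression before building
    // the next node, so ordinary deduction applies to the right-hand vec.
    typedef typename add<vec<L1, D>, vec<R1, D>>::return_type left_vec;
    return {static_cast<left_vec>(lhs), rhs};
}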