I'm learning how to use std::chrono and I want to make a template class Timer that is easy to use (defined in timer.h). The testing program was successful and everything worked fine, until I tried to use my new Timer in a program that defines some template operators, which conflict with the operators used inside Timer.
Inside Timer I have to use operator- between two variables (start_time and end_time) of type std::chrono::time_point, in order to obtain the duration variable containing the elapsed time.
In another header (algebra.h) I implemented an overload of the binary operator- to take the difference between two std::vectors, two std::arrays, or a user-defined container that provides operator[] and a size() member function.
template<typename pointType>
pointType operator-(pointType a, const pointType & b){
    for(std::size_t i = 0; i < a.size(); ++i){
        a[i] = a[i] - b[i];
    }
    return a;
}
When I try to include both timer.h and algebra.h, the compiler throws an error saying "ambiguous overload for operator-" suggesting, as possible candidates, both the operator in algebra.h and the one implemented in <chrono>.
I don't understand why it is ambiguous, since pointType can't be deduced as std::chrono::time_point, which doesn't have operator[] or a size() member function.
P.S. I tried something else to work it out, but I only got more confused testing a program which uses std::valarray. When I include both <valarray> and "algebra.h" and try to take the difference between two valarrays, I expected the compiler to complain about an ambiguous definition of operator-, since std::valarray already has implementations for the binary operators. But this doesn't happen: it compiles using the <valarray> implementation. Why doesn't this throw an error?
It is ambiguous because the compiler only looks at the function signature to test for ambiguity, not the body of the function. In your example, this is the function signature:
template<typename pointType>
pointType operator-(pointType a, const pointType & b)
Here, the template parameter pointType could be deduced as std::chrono::time_point. However, there is already a binary minus operator declared in the chrono header for std::chrono::time_point (https://en.cppreference.com/w/cpp/chrono/time_point/operator_arith2). This is what is causing the ambiguity error.
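A minimal sketch that reproduces the error (the clock choice and variable names are just for illustration):
#include <chrono>
#include "algebra.h" // the generic operator- template above

int main() {
    auto start_time = std::chrono::steady_clock::now();
    auto end_time   = std::chrono::steady_clock::now();
    // error: ambiguous overload for 'operator-': both the algebra.h
    // template (with pointType = time_point) and std::chrono's
    // operator-(time_point, time_point) match, and overload resolution
    // cannot choose between them
    auto elapsed = end_time - start_time;
}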
To solve this problem, you should first consider whether you need such a generic binary minus operator. The problem you are currently experiencing is not unique to std::chrono::time_point; it will also occur with any other header that contains a class with a member or non-member binary minus operator where both arguments are of the same type (or could implicitly convert into the same type). Perhaps a simple set of function overloads for the types in question would be enough:
template<typename T>
std::vector<T> operator-(const std::vector<T>& a, const std::vector<T>& b);
template<typename T, size_t N>
std::array<T,N> operator-(const std::array<T,N>& a, const std::array<T,N>& b);
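For instance, the std::vector overload could be defined like this (a sketch; it simply asserts that the sizes match instead of doing real error handling):
#include <cassert>
#include <cstddef>
#include <vector>

template<typename T>
std::vector<T> operator-(const std::vector<T>& a, const std::vector<T>& b) {
    assert(a.size() == b.size()); // sketch: no real error handling
    std::vector<T> result(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        result[i] = a[i] - b[i];
    return result;
}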
This would be the safest option. You could also forgo operator overloading altogether and stick to a conventional function:
template<typename T>
T pointwise_subtract(const T& a, const T& b);
If you have a C++20 compiler, you could use concepts (a sketch follows the SFINAE example below). If you insist on using non-member operator templates, you may have to use SFINAE-based template metaprogramming, a more advanced and less readable technique:
#include <cstddef>     // std::size_t
#include <type_traits> // std::void_t
#include <utility>     // std::declval

// enable this template only if the type T has a member function "size" and
// a subscript operator accepting variables of type "size_t"
template<typename T, typename = std::void_t<
    decltype(std::declval<T>().size()),
    decltype(std::declval<T>()[std::declval<std::size_t>()])
>>
T operator-(const T& a, const T& b);
This will remove your ambiguity error.
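For reference, here is what the C++20 concepts version mentioned above could look like (a sketch; the concept name is my own):
#include <cstddef>

// satisfied by std::vector, std::array, or any user-defined container
// with size() and operator[](size_t)
template<typename T>
concept Subscriptable = requires(T t, std::size_t i) {
    t.size();
    t[i];
};

template<Subscriptable T>
T operator-(T a, const T& b) {
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] = a[i] - b[i];
    return a;
}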
Related
Is there any reason that the C++ compiler gives an error when using two different numeric variable types in the std::max() function? (e.g. int and long)
I mean something like: "Sometimes we have this problem when using the std::max() function for two different numeric variable types, so the compiler gives an error to prevent this problem".
The compiler produces an error because it cannot perform type deduction for the template argument of std::max. This is how std::max template is declared: the same type (template parameter) is used for both arguments. If the arguments have different types, the deduction becomes ambiguous.
If you work around the deduction ambiguity by supplying the template argument explicitly, you will be able to use different types as std::max arguments:
std::max(1, 2.0); // Error
std::max<double>(1, 2.0); // OK
The reason why std::max insists on using a common type for its arguments (instead of using two independent types) is described in #bolov's answer: the function actually wants to return a reference to the maximum value.
std::max returns a reference to the argument that has the maximum value. The main reason it is this way is because it is a generic function and as such it can be used with types expensive to copy. Also you might actually need just a reference to the object, instead a copy of it.
And because it returns a reference to an argument, all arguments must be of the same type.
The direct answer to the question is that it's because std::min and std::max take only one template parameter that defines the types of both arguments. If/when you try to pass arguments of different types, the compiler can't decide which of those two types to use for the template argument, so the code is ambiguous. As originally defined in C++98, std::min and std::max had signatures like these (C++03, §[lib.alg.min.max]):
template<class T> const T& min(const T& a, const T& b);
template<class T, class Compare>
const T& min(const T& a, const T& b, Compare comp);
template<class T> const T& max(const T& a, const T& b);
template<class T, class Compare>
const T& max(const T& a, const T& b, Compare comp);
So the basic idea here is that the function receives two objects by reference, and returns a reference to one of those objects. If it received objects of two different types, it wouldn't be able to return a reference to an input object because one of the objects would necessarily be of a different type than it was returning (so #bolov is correct about that part, but I don't think it's really the whole story).
With a modern compiler/standard library, if you don't mind dealing with values instead of references, you could pretty easily write code on this general order:
#include <type_traits> // std::common_type_t

template <class T, class U>
std::common_type_t<T, U> min(T const &a, U const &b) {
    return b < a ? b : a;
}

template <class T, class U>
std::common_type_t<T, U> max(T const &a, U const &b) {
    return a < b ? b : a;
}
That makes it pretty easy to deal with your case of passing an int and a long (or other pairs of types, as long as std::common_type can deduce some common type for them, and a<b is defined for objects of the two types).
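For example, with the sketch above:
int i = 3;
long l = 5;
auto m = min(i, l); // T = int, U = long; std::common_type_t<int, long> is long
                    // m is a long with value 3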
But, in 1998, even if std::common_type had been available so it was easy to do, that solution probably wouldn't have been accepted (and as we'll see, it's still open to some question whether it's a great idea)--at the time, many people still thought in terms of lots of inheritance, so it was (more or less) taken for granted that you'd frequently use it in situations where both arguments were really of some derived type, something on this general order:
class Base {
public:
    // ...
    virtual bool operator<(Base const &other) const;
};

class Derived1 : public Base {
    // ...
};

class Derived2 : public Base {
    // ...
};

Derived1 d1;
Derived2 d2;
Base const &b = std::max<Base>(d1, d2); // explicit argument: both operands
                                        // bind to Base const&
In this case, the version above that returns a value instead of returning a reference would cause a serious problem. For this pattern to work, common_type<Derived1, Derived2> would have to be Base (strictly speaking, the trait can't deduce a common base class on its own, so you'd have to specialize it), and then we'd end up slicing the argument to create an object of type Base, and returning that. This would rarely provide desirable behavior (and in some cases, such as if Base were an abstract base class, it wouldn't even compile).
There's one other point that's probably worth noting: even when applied in a seemingly simple situation, std::common_type can produce results you might not expect. For example, let's consider calling the template defined above like:
auto x = min(-1, 1u);
That leaves us with an obvious question: what type will x be?
Even though we've passed it an int and an unsigned, std::common_type deduces unsigned (following the usual arithmetic conversions), so -1 is converted to a huge unsigned value for the comparison, and x ends up being 1u rather than the -1 you might expect!
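A quick demonstration of the surprise (assuming the value-returning min sketched above is in scope):
#include <iostream>

int main() {
    auto x = min(-1, 1u);   // std::common_type_t<int, unsigned> is unsigned
    std::cout << x << '\n'; // prints 1: -1 converts to a huge unsigned
                            // value in the comparison, so 1u "wins"
}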
This question already has answers here: What are the basic rules and idioms for operator overloading?
I have a situation where I have math operations that make sense to overload for convenience. But some of them operate on other types as well, like
vec3<type> * type
and
type * vec3<type>
One way to handle a scalar as the right-hand argument is:
template<class T, class SubT>
class Vec3 {
public:
    T x, y, z;
    // this is fine, ie: works
    SubT operator *(const T &val) const {
        return SubT(x*val, y*val, z*val);
    }
};
I've read that it is best to implement operators like *, +, -, / "out of class", or to let the compiler deduce things from the += version in the class.
Is this optimal vs. having + implemented in the class?
How does one do this for the reverse, where the left-hand argument is the other type?
I.e. in my particular case, the templated operator has two template type arguments: one is the type of the element, and the other is the subclass the template is implementing its methods for.
template<class T, class SubT>
SubT operator *(const T &a, const Vec3Base<T, SubT> &b) {
    return b * a;
}
Anyway, hopefully you get my intent; how to do it properly is the question :)
Do I need to make it take only one type, i.e. the vector type, and then get the element type from it as a typedef??
template<class VT>
typename VT::SubT operator*(const typename VT::ElemT &a, const VT &v) {
    return v * a;
}
And should I also implement the other direction "out of class" instead of in the class, with:
template<class VT>
typename VT::SubT operator*(const VT &a, const typename VT::ElemT &b) {
    return typename VT::SubT(a.x*b, a.y*b, a.z*b);
}
Well, I did read most of the answers in the idioms-for-operator-overloading question. It does answer a lot of things, BUT it doesn't cover the ramifications for templates, and for templates declaring operators used in the subclasses of those templates.
For all operators where you have to choose to either implement them as a member function or a non-member function, use the following rules of thumb to decide:
If it is a unary operator, implement it as a member function.
If a binary operator treats both operands equally (it leaves them unchanged), implement this operator as a non-member function.
If a binary operator does not treat both of its operands equally (usually it will change its left operand), it might be useful to make it a member function of its left operand's type, if it has to access the operand's private parts.
This was somewhat helpful in my desire to know the best way to implement them, whether out of class or in the class.
I was wondering if there is an issue regarding prioritization of one over the other for templates. I find that if an operator is declared in a template that is the superclass of a subclass inheriting its operators, at least the MS compiler will prioritize looking at the global one over the one in the superclass. Nasty!!! Similar issues happen with clang and gcc.
I did find I really have to declare all the possibly conflicting operators at the same level so that overload resolution works as expected, i.e. all in the same superclass of the subclass. If there are operators declared in a superclass of the superclass, they will sometimes be ignored, it seems, if there is some wayward conversion that supplies an argument to one of the overloads at the higher priority (arrrgh).
It seems at this point I have resolved all the compile issues; now to get it to link, hahaha!!
Assuming your type is cheap to move:
template<class T, class SubT>
class Vec3 {
public:
    T x, y, z;
    // the compound assignment does the work and returns the derived type
    SubT& operator*=(const T &val) & {
        x *= val; y *= val; z *= val;
        return static_cast<SubT&>(*this);
    }
    // non-template friends: take the derived type by value and reuse *=
    friend SubT operator*(SubT v, T const& t){
        v *= t;
        return v;
    }
    friend SubT operator*(T const& t, SubT v){
        v *= t;
        return v;
    }
};
A 3 tuple of numbers is almost always cheap to move, as the numbers are either tiny (like 64 bits) and trivially copyable, or they are going to be a bignum type which uses a cheap to move storage type internally.
This technique creates what I call Koenig operators: non-template operators only discoverable via ADL from a template class. This has a myriad of advantages over member-function operators and over non-member template operators. Less so in this simple case, but as a blind recipe it avoids a bunch of pitfalls (like how having an operator std::string doesn't let your type be streamed with <<).
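A usage sketch (the derived type MyVec is hypothetical):
struct MyVec : Vec3<double, MyVec> {
    MyVec(double x, double y, double z) : Vec3{x, y, z} {}
};

int main() {
    MyVec v{1, 2, 3};
    auto a = v * 2.0; // non-template friend found by ADL inside Vec3
    auto b = 2.0 * v; // the reversed operand order works the same way
}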
Consider this question, which is about the following code not compiling:
std::vector<int> a, b;
std::cout << (std::ref(a) < std::ref(b));
It doesn't compile because the comparison operators for vector are non-member function templates, and implicit conversions aren't allowed to be considered during template argument deduction. However, if the operators were instead written as non-member, non-template friend functions:
template <class T, class Allocator = std::allocator<T>>
class vector {
// ...
friend bool operator<(const vector& lhs, const vector& rhs) {
// impl details
}
};
Then this version of operator< would have been found by ADL and been chosen as the best viable overload, and the original example would have compiled. Given that, is there a reason to prefer the non-member function template that we currently have, or should this be considered a defect in the standard?
Given that, is there a reason to prefer the non-member function template that we currently have, or should this be considered a defect in the standard?
The reason is whether ADL could find the proper function at all. When such a search would require extracting the substituted template parameters from the type of a given object (here, std::reference_wrapper<std::vector<int>>) and then substituting them into a templated parameter of the function template, ADL can't do this, because in the general case there is no reason to prefer one way of binding the template parameters over another. A non-template function defined in the namespace scope of that class template (via friend) excludes such indeterminacy.
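To illustrate with a stand-in for vector (Box here is hypothetical):
#include <functional> // std::ref

template<class T>
struct Box {
    T value;
    // non-template friend: found by ADL because Box<int> is an associated
    // class of reference_wrapper<Box<int>>, and the implicit conversion
    // from reference_wrapper<Box<int>> to Box<int>& is allowed here
    friend bool operator<(const Box& a, const Box& b) {
        return a.value < b.value;
    }
};

int main() {
    Box<int> a{1}, b{2};
    bool ok = std::ref(a) < std::ref(b); // compiles with the friend version;
                                         // a namespace-scope template would not
    return ok ? 0 : 1;
}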
I have a matrix class like below:
template <size_t M, size_t N, typename T>
class Matrix
{
public:
Matrix<M, N, T> operator +(const Matrix<M, N, T>& B) const;
template <size_t P> Matrix<M,P,T> operator*(const Matrix<N, P, T>& B) const;
template <typename T2> operator T2() const;
private:
T data[M][N];
};
// ... the body is in header file too ...//
The body is written fine, and everything works well.
When I define two Matrices as below:
Matrix<10, 10, int> m1;
Matrix<10, 10, float> m2;
m1 + m2; // OK
m1 * m2; // error: no match for 'operator*' in 'm1 * m2'
The first '+' operator works well, because an implicit cast is performed for it, but for the second '*' operator with different value types, an error occurs:
error: no match for 'operator*' in 'm1 * m2'
Any idea?!
UPDATE:
All the code is in the header file. I have no problem except with the '*' operator. What can you say about the '+' operator? I know everything about templates/operators/casting... but this problem looks like a bug in my gcc compiler!? I wrote a cast operator, and this operator is called before the '+' operator, but I don't know why it is not invoked for the '*' operator!
The problem is more or less classic. The overload resolution starts by building a list of possible functions; in this case, functions named operator*. To do this, it adds all operator* functions which are in scope to the list, and it tries to instantiate all function templates by applying type deduction; if type deduction succeeds, it adds the instantiation of the template to the list. (A function template is not a function. An instantiation of the function template is a function.)
The rules for template type deduction are different from those used in overload resolution. In particular, only a very small set of conversions is considered; user-defined conversion operators are not among them. The result is that in m1 * m2, type deduction for operator* fails (since it would require a conversion which isn't considered). So no instantiation of the function template is added to the list, and there is no other operator*.
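A stripped-down illustration of this rule (all names hypothetical):
template<class T> struct Wrap {};

struct Source {
    operator Wrap<int>() const { return {}; } // user-defined conversion
};

template<class T> void f(Wrap<T>) {} // function template
void g(Wrap<int>) {}                 // ordinary function

void test() {
    Source s;
    g(s);    // OK: the user-defined conversion to Wrap<int> is considered
    // f(s); // error: T cannot be deduced from Source, because user-defined
             // conversions are ignored during template type deduction
}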
More generally: your operator T2() wouldn't allow type deduction even if it were allowed; there is an infinite number of conversions which would match operator*. I suspect, in fact, that you've made it too general; that you want an operator Matrix<M, N, T2>(). (Not that this will help here, but there are contexts where it might eliminate an ambiguity.)
You might be able to make it work by defining a:
template<size_t P, typename OtherT>
Matrix<M, P, T> operator*( Matrix<N, P, OtherT> const& rhs ) const;
, then doing the conversion inside the operator*. (I haven't tried it, and am not sure, but I think your existing operator* should be considered “more specialized”, and thus be chosen when type deduction succeeds for both.)
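A sketch of what "doing the conversion inside" could look like, assuming the extra overload above is declared in the class (untested):
template <size_t M, size_t N, typename T>
template <size_t P, typename OtherT>
Matrix<M, P, T> Matrix<M, N, T>::operator*(Matrix<N, P, OtherT> const& rhs) const {
    Matrix<N, P, T> converted = rhs; // the operator T2() conversion runs here
    return *this * converted;        // dispatches to the same-type operator*
}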
Having said this, I think the way you're doing it is the wrong approach. Do you really want the return types of m1 * m2 and m2 * m1 to be different? For starters, I'd require the client code to make the conversion explicit (which is the case in your current code); if you do want to support the implicit conversions, I think you need to make the operator* a global, use some sort of simple meta-programming to determine the correct return type (i.e. given Matrices of long and unsigned, you might want a return type of unsigned long, since this is what mixed-type arithmetic with these types gives otherwise), convert both sides to the target type, and do the arithmetic on it. A lot of work for what is probably not a very important or useful feature. (Just my opinion, of course. If your clients really want the mixed-type arithmetic, and are willing to pay for it...)
The implicit cast is the culprit in your example (m1 * m1 works). While I am not language-firm enough to tell you exactly why, I suspect that the combination of a templated operator* method (which doesn't specify the type exactly) and a necessary type conversion has too much ambiguity. The compiler is told that it can convert your matrix into any type, and that a templated family of types could be valid arguments for operator*. It would have problems determining which operator* to call from these methods. Inserting a static_cast as m1 * static_cast< Matrix<10,10,int> >(m2) confirms this suspicion.
The Eigen library is a fairly mature and very good matrix library, and they also don't make implicit scalar conversions. Rather, they have used a cast method:
template <typename Scalar> Matrix<M,N,Scalar> cast() const;
In your example, you'd write:
m1.cast<float>() * m2;