C++ operator overloading and implicit conversion

I have a class that encapsulates some arithmetic, let's say fixed point calculations. I like the idea of overloading arithmetic operators, so I write the following:
class CFixed
{
public:
CFixed( int );
CFixed( float );
};
CFixed operator* ( const CFixed& a, const CFixed& b )
{ ... }
It all works. I can write 3 * CFixed(0) and CFixed(3) * 10.0f. But now I realize that I can implement operator* with an integer operand much more efficiently. So I overload it:
CFixed operator* ( const CFixed& a, int b )
{ ... }
CFixed operator* ( int a, const CFixed& b )
{ ... }
It still works, but now CFixed(0) * 10.0f calls the overloaded version, converting float to int (whereas I expected it to convert float to CFixed). Of course, I could overload float versions as well, but that looks like a combinatorial explosion of code to me. Is there any workaround (or am I designing my class wrong)? How can I tell the compiler to call the overloaded version of operator* ONLY with ints?

You should overload with the float type as well. Conversion from float to a user-defined type (CFixed) has lower priority than the built-in floating-integral conversion from float to int. So the compiler will always choose the function taking int, unless you add a function taking float as well.
For more details, read section 13.3 of the C++03 standard. Feel the pain.
It seems that I've lost track of it too. :-( UncleBens reports that adding only float doesn't solve the problem; a version taking double should be added as well. But in any case, adding several operators for the built-in types is tedious, yet it doesn't result in a combinatorial explosion.
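For illustration, here is a sketch of the overload set this adds up to (the operator bodies are placeholders, and forwarding through CFixed is an assumption of mine). It handles the int/float/double cases from the question; other arithmetic types, e.g. unsigned, would still need their own overloads or the template approach shown in a later answer.
class CFixed
{
public:
    CFixed( int )   {}
    CFixed( float ) {}
};

CFixed operator* ( const CFixed& a, const CFixed& ) { return a; }   // general case

// fast integer path (the whole point of the exercise)
CFixed operator* ( const CFixed& a, int )           { return a; }
CFixed operator* ( int, const CFixed& b )           { return b; }

// float and double overloads, so that a floating-point operand is neither
// funnelled into the int overload nor made ambiguous between int and float
CFixed operator* ( const CFixed& a, float b )  { return a * CFixed( b ); }
CFixed operator* ( float a, const CFixed& b )  { return CFixed( a ) * b; }
CFixed operator* ( const CFixed& a, double b ) { return a * static_cast<float>( b ); }
CFixed operator* ( double a, const CFixed& b ) { return static_cast<float>( a ) * b; }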

If you have constructors that can be invoked with just one argument, you have effectively created an implicit conversion operator. In your example, wherever a CFixed is needed, both an int and a float can be passed. This is of course dangerous, because the compiler might silently generate code calling the wrong function instead of barking at you when you forgot to include some function's declaration.
Therefore a good rule of thumb says that, whenever you write a constructor that can be called with just one argument (note that a constructor like foo(int i, bool b = false) can be called with one argument, too, even though it takes two), you should make that constructor explicit, unless you really want implicit conversion to kick in. explicit constructors are not used by the compiler for implicit conversions.
You would have to change your class to this:
class CFixed
{
explicit CFixed( int );
explicit CFixed( float );
};
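Be aware of one consequence, though: with explicit constructors (and only the general CFixed * CFixed operator), the convenient mixed expressions from the question stop compiling, because the compiler may no longer create a CFixed implicitly:
CFixed a( 3 );                   // fine: explicit construction
CFixed b = a * CFixed( 10 );     // fine: both operands are already CFixed
// CFixed c = 3 * CFixed( 10 );  // error: no implicit conversion from int to CFixed
// CFixed d = a * 10.0f;         // error: no implicit conversion from float to CFixed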
I have found that there are very few exceptions to this rule. (std::string::string(const char*) is a rather famous one.)
Edit: I'm sorry, I missed the point about not allowing implicit conversions from int to float.
The only way I see to prevent this is to provide the operators for float as well.

Assuming you'd like the specialized version to be picked for any integral type (and not just int in particular), one thing you could do is provide it as a function template and use Boost.EnableIf to remove those overloads from the overload set when the operand is not an integral type.
#include <cstdio>
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_integral.hpp>
class CFixed
{
public:
CFixed( int ) {}
CFixed( float ) {}
};
CFixed operator* ( const CFixed& a, const CFixed& )
{ puts("General CFixed * CFixed"); return a; }
template <class T>
typename boost::enable_if<boost::is_integral<T>, CFixed>::type operator* ( const CFixed& a, T )
{ puts("CFixed * [integer type]"); return a; }
template <class T>
typename boost::enable_if<boost::is_integral<T>, CFixed>::type operator* ( T , const CFixed& b )
{ puts("[integer type] * CFixed"); return b; }
int main()
{
CFixed(0) * 10.0f;
5 * CFixed(20.4f);
3.2f * CFixed(10);
CFixed(1) * 100u;
}
Naturally, you could also use a different condition to make those overloads available only if T=int: typename boost::enable_if<boost::is_same<T, int>, CFixed>::type ...
As to designing the class, perhaps you could rely on templates more. E.g., the constructor could be a template and, should you need to distinguish between integral and real types, the same technique can be employed (see the sketch below).
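A sketch of what such a templated constructor might look like, using the same Boost machinery as above (the 16.16 fixed-point representation is an assumption made up for the example):
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_integral.hpp>
#include <boost/type_traits/is_floating_point.hpp>

class CFixed
{
public:
    // fast path: integral values are shifted into place directly
    template <class T>
    CFixed( T value,
            typename boost::enable_if<boost::is_integral<T> >::type* = 0 )
        : raw_( static_cast<int>( value ) << 16 ) {}

    // floating-point values go through a multiply-and-truncate
    template <class T>
    CFixed( T value,
            typename boost::enable_if<boost::is_floating_point<T> >::type* = 0 )
        : raw_( static_cast<int>( value * 65536.0 ) ) {}

private:
    int raw_;   // assumed 16.16 fixed-point representation
};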

How about making the conversion explicit?

Agree with sbi, you should definitely make your single-parameter constructors explicit.
You can avoid an explosion of operator functions by using templates, however:
template <class T>
CFixed operator* ( const CFixed& a, T b )
{ ... }
template <class T>
CFixed operator* ( T a, const CFixed& b )
{ ... }
Depending on what code is in the functions, this will only compile for types that you support converting from (see the sketch below for one way to do it).
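For instance, a minimal sketch of the bodies (assuming the general CFixed * CFixed operator from the question is still present) just funnels the operand through the constructor, so it compiles only for types CFixed is constructible from:
template <class T>
CFixed operator* ( const CFixed& a, T b )
{
    return a * CFixed( b );   // fails to compile unless CFixed( T ) exists
}

template <class T>
CFixed operator* ( T a, const CFixed& b )
{
    return CFixed( a ) * b;
}
The non-template CFixed * CFixed overload is preferred over the template for the inner call, so this does not recurse.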

Related

Making one user defined conversion function preferred over another

I am trying to write a simple class that will act as a complex<double> number when used in a context that warrants a complex number; otherwise it will act as a double (with the requirement that the imaginary part must be 0 when used this way).
This is what I have so far:
class Sample
{
public:
Sample() {}
Sample(double real) : m_data(real) {}
Sample(double real, double imaginary) : m_data(real, imaginary) {}
operator std::complex<double>() const
{
return m_data;
}
operator double() const
{
assert(m_data.imag() == 0.0);
return m_data.real();
}
private:
std::complex<double> m_data;
};
This works great for most situations. Anytime this class is passed to a function that expects a complex number it will act as though it is a complex number. Anytime it is passed to a function that expects a double it will act as though it is a double.
The problem arises when I pass it to a function that will accept BOTH complex numbers and doubles (for example, std::arg). When I try to pass a Sample object to std::arg, it doesn't know which conversion to use, since both are technically valid.
In situations like this I want the complex conversion to be "preferred" and have it just pass it as a complex. Is there any way to make one user-defined conversion function preferred over another when both conversions would be technically acceptable?
I think in general it is impossible to write one conversion operator that will always be preferred to another conversion operator in the same class.
But if you just need to call std::arg with an argument of your class, then it already behaves as you expect, preferring std::complex<double> over double:
#include <complex>
#include <iostream>
struct S {
operator std::complex<double>() const {
std::cout << "std::complex<double>";
return 2;
}
operator double() const {
std::cout << "double";
return 2;
}
};
int main() {
S s;
//(void)std::arg(s); //error: cannot deduce function template parameter
(void)std::arg<double>(s);
}
The program compiles and prints std::complex<double>. Demo: https://gcc.godbolt.org/z/dvxv17vvz
Please note that the call std::arg(s) is impossible because the template parameter T of std::complex<T> cannot be deduced from s. And std::arg<double>(s) prefers the std::complex overload in all tested implementations of the standard library.

more than one operator "[]" matches these operands

I have a class, used for a settings store, that has both an implicit conversion operator to intrinsic types and access by a string index via operator[]. It compiles and works very well in unit tests on gcc 6.3 and MSVC; however, the class causes ambiguity warnings in IntelliSense and clang, which is not acceptable.
Super slimmed down version:
https://onlinegdb.com/rJ-q7svG8
#include <memory>
#include <unordered_map>
#include <string>
struct Setting
{
int data; // this in reality is a Variant of intrinsic types + std::string
std::unordered_map<std::string, std::shared_ptr<Setting>> children;
template<typename T>
operator T()
{
return data;
}
template<typename T>
Setting & operator=(T val)
{
data = val;
return *this;
}
Setting & operator[](const std::string key)
{
if(children.count(key))
return *(children[key]);
else
{
children[key] = std::shared_ptr<Setting>(new Setting());
return *(children[key]);
}
}
};
Usage:
Setting data;
data["TestNode"] = 4;
data["TestNode"]["SubValue"] = 55;
int x = data["TestNode"];
int y = data["TestNode"]["SubValue"];
std::cout << x <<std::endl;
std::cout << y;
output:
4
55
The error message is as follows:
more than one operator "[]" matches these operands:
built-in operator "integer[pointer-to-object]" function
"Setting::operator[](std::string key)"
operand types are: Setting [ const char [15] ]
I understand why the error/warning exists: it stems from the ability to swap the array and the index in a subscript expression (which by itself is extremely bizarre syntax, but makes logical sense with pointer arithmetic).
char* a = "asdf";
char b = a[5];
char c = 5[a];
b == c
I am not sure how to avoid the error message while keeping what I want to accomplish (implicit assignment and indexing by string).
Is that possible?
Note: I cannot use C++ features above 11.
The issue is the user-defined implicit conversion function template.
template<typename T>
operator T()
{
return data;
}
When the compiler considers the expression data["TestNode"], some implicit conversions need to take place. The compiler has two options:
Convert the const char [9] to a const std::string and call Setting &Setting::operator[](const std::string)
Convert the Setting to an int and use the built-in subscript operator const char &operator[](std::ptrdiff_t, const char *)
Both options involve an implicit conversion so the compiler can't decide which one is better. The compiler says that the call is ambiguous.
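Spelled out (illustrative only; the casts just make the two conversions visible), the candidates are:
// Candidate 1: const char[9] -> std::string, then Setting::operator[]
Setting &a = data[ std::string( "TestNode" ) ];

// Candidate 2: Setting -> int via the conversion template, then the
// built-in subscript on the string literal (equivalent to "TestNode"[n])
const char b = static_cast<int>( data )[ "TestNode" ];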
There are a few ways to get around this.
Option 1
Eliminate the implicit conversion from const char [9] to std::string. You can do this by making Setting::operator[] a template that accepts a reference to an array of characters (a reference to a string literal).
template <size_t Size>
Setting &operator[](const char (&key)[Size]);
Option 2
Eliminate the implicit conversion from Setting to int. You can do this by marking the user-defined conversion as explicit.
template <typename T>
explicit operator T() const;
This will require you to update the calling code to use direct initialization instead of copy initialization.
int x{data["TestNode"]};
Option 3
Eliminate the implicit conversion from Setting to int. Another way to do this is by removing the user-defined conversion entirely and using a function.
template <typename T>
T get() const;
Obviously, this will also require you to update the calling code.
int x = data["TestNode"].get<int>();
Some other notes
One thing I noticed about the code is that you didn't mark the user-defined conversion as const. If a member function does not modify the object, you should mark it as const so that it can be used on a constant object. So put const after the parameter list:
template<typename T>
operator T() const {
return data;
}
Another thing I noticed was this:
std::shared_ptr<Setting>(new Setting())
Here you're mentioning Setting twice and doing two memory allocations when you could be doing one. It is preferable for code cleanliness and performance to do this instead:
std::make_shared<Setting>()
One more thing, I don't know enough about your design to make this decision myself but do you really need to use std::shared_ptr? I don't remember the last time I used std::shared_ptr as std::unique_ptr is much more efficient and seems to be enough in most situations. And really, do you need a pointer at all? Is there any reason for using std::shared_ptr<Setting> or std::unique_ptr<Setting> over Setting? Just something to think about.

Why argument conversion is not considered when calling a templated function?

I have a template class and a friend operator* function
StereoSmp<TE> operator* (const StereoSmp<TE>& a, TE b)
I use it with TE=float, but I need to multiply a StereoSmp<float> by a double.
I think that should be possible, because the double should be converted to float automatically, but instead I get the error:
no match for ‘operator*’ (operand types are ‘StereoSmp<float>’ and
‘__gnu_cxx::__alloc_traits<std::allocator<double> >::value_type {aka double}’)
deduced conflicting types for parameter ‘TE’ (‘float’ and ‘double’)
Why doesn't it convert double to float automatically? And what can I do to allow automatic conversion between the types?
Don't make your friend a template.
template<class TE>
struct StereoSmp {
friend StereoSmp operator* (const StereoSmp& a, TE b) {
return multiply( a, b ); // implement here
}
};
This is a non-template friend of each instantiation of the class template StereoSmp. It will consider conversions.
Function templates don't consider conversions during argument deduction; they simply do exact pattern matching. Overload resolution is already insane enough in C++.
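A self-contained sketch of how the hidden friend behaves (the left/right members and the body are invented for the example):
#include <iostream>

template<class TE>
struct StereoSmp {
    TE left, right;
    // non-template hidden friend: normal conversions apply to b
    friend StereoSmp operator* (const StereoSmp& a, TE b) {
        return StereoSmp{ a.left * b, a.right * b };
    }
};

int main() {
    StereoSmp<float> s = { 1.0f, 2.0f };
    double gain = 0.5;                    // note: a double, not a float
    StereoSmp<float> scaled = s * gain;   // OK: gain converts to float
    std::cout << scaled.left << ' ' << scaled.right << '\n';
}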
Keep it simple, silly:
template<class U>
StereoSmp<TE> operator* (const StereoSmp<TE>& a, U b);
or if it applies:
StereoSmp<TE> a {/* ... */};
double b = /* ... */;
auto c = a * static_cast<float>(b);
Why doesn't it convert double to float automatically?
Because template argument deduction happens before possible conversions are taken into consideration. If you call a*b with a a StereoSmp<float> and b a double, deduction will fail before a double-to-float conversion can be considered, and overload resolution will continue until it runs out of candidates.
This process is called Template argument deduction.
Although Yakk's answer is probably the best in this particular scenario, I want to point out that you can prevent this deduction conflict and get your expected result (pass StereoSmp<float>, deduce TE as float) by making the other argument ineligible for use in deduction:
StereoSmp<TE> operator* (const StereoSmp<TE>& a, typename std::remove_reference<TE>::type b)
Related reading: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3766.html
Why doesn't it convert double to float automatically? And what can I do to allow automatic conversion between types?
This isn't a conversion problem; it's a template argument deduction issue. Since the declaration is of the form:
StereoSmp<TE> operator* (const StereoSmp<TE>& a, TE b)
... and the operands are of type StereoSmp<float> and double, the C++ deduction rules do not work, because they do not take into account that a double is convertible to a float. These rules are specified by the language; the reason they don't take potential conversions into account is probably that "it would make deduction very complicated, otherwise". The rules are already complex enough!
You can of course cast your double parameter to a float and it will work fine. Also, you could make the operator* a member function of StereoSmp, or you could parameterise the scalar type independently:
template <class TE, class U> StereoSmp<TE> operator* (const StereoSmp<TE>& a, U b);
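A possible definition for that declaration (a sketch; it assumes the original operator* taking TE is still available, and since that more specialized overload wins for an exact TE argument, the forwarding call does not recurse):
template <class TE, class U>
StereoSmp<TE> operator* ( const StereoSmp<TE>& a, U b )
{
    return a * static_cast<TE>( b );   // convert the scalar once, reuse the TE overload
}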

Numeric Array Class: Multiplication using friend functions

I have been trying to solve this bug for days. I made a generic array class, from which I derived a numeric array class. Everything works perfectly; however, I am having trouble with multiplication/scaling of the numeric array. I overloaded operator* in the following standard way:
//Scaling operator
template <typename Type>
NumericArray<Type>& NumericArray<Type> :: operator * (const double factor) // scaling the elements
{
for (int i = 0; i < (*this).size(); i++)
{
(*this)[i] *= factor;
}
return *this;
}
Of course, due to the operand order demanded by this implementation, I can only write multiplication as array * 2. But I also want to be able to write 2 * array, so I decided to implement a friend function as follows:
template <typename Type>
NumericArray<Type> operator* (Type factor, const NumericArray<Type>& num_array)
{
return(num_array * factor);
}
The declaration of this friend function is as follows:
template <typename Type> friend NumericArray<Type> operator * (double factor, const NumericArray<Type>& num_array);
But when I try to write 2 * array in main(), I get the error message:
Severity Code Description Project File Line
Error C2678 binary '*': no operator found which takes a left-hand operand of type 'const NumericArray' (or there is no acceptable conversion)
The error makes me think that main() doesn't even recognize that the friend function exists. I have used friend functions a few times before, but I am pretty new to templates. I've been told that templates, and generic programming in general, have some quirky requirements, which may cause unforeseen problems.
For anyone who wants to see the full program, see the dropbox link:
https://www.dropbox.com/sh/86c5o702vkjyrwx/AAB-Pnpl_jPR_GT4qiPYb8LTa?dl=0
Thanks again for your help:)
The NumericArray<Type> parameter in the friend function is const, so it cannot call a non-const member function: NumericArray<Type>& NumericArray<Type>::operator* (const double factor)
NumericArray<Type>& operator * (const double factor)
This is a non-const member function, which means it may mutate the object it's called on.
The parameter in your free operator function (which doesn't need to be a friend if it just calls the public operator above) is
const NumericArray<Type>&
and therefore const qualified.
To solve it, you need to change the member function to be callable on const-qualified instances (given that it really does not mutate its instance, which it shouldn't)(*):
NumericArray<Type>& operator * (const double factor) const
(*) Yours mutates its instance, which is counterintuitive:
c = a * b;
You don't want a to be changed after that, in general.
As Nicky C suggests in his answer, the best approach here is probably to have an operator*= as a (non-const, public) member function, doing what your operator* currently seems to be doing.
Then you add two free operator* functions which use the operator*= to implement their behaviour.
So no need for friends in this case.
I am not sure whether I want to explain everything since it can be complicated if we go deep. But in short:
You've mixed up the meanings and implementations of *= and *.
The errors involve a lot of things: template instantiation, const-correctness, overload resolution...
So, let's just look at the idiomatic way to do so.
Step 1: Write a member operator *=.
template <typename Type>
class NumericArray
{
//...
NumericArray& operator *= (const Type factor)
{
[[ Code that scales up every element, i.e. your for-loop ]]
return *this;
}
//...
}
Step 2: Write a pair of non-member, non-friend operator * functions.
template <typename Type>
NumericArray<Type> operator * (const NumericArray<Type>& num_array, const Type factor)
{
NumericArray<Type> new_num_array = num_array; // Copy construct
new_num_array *= factor; // Scale up
return new_num_array;
}
// And similarly for factor*num_array
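The mirrored overload can simply delegate to the one above. Note that, as written, the scalar must match Type exactly for deduction to succeed; a call like 2 * NumericArray<double> needs a double literal (2.0) or a cast, for the reasons discussed in the previous question.
template <typename Type>
NumericArray<Type> operator * (const Type factor, const NumericArray<Type>& num_array)
{
    return num_array * factor;   // reuse the array-on-the-left overload
}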

What is better to use for automatic return types: decltype or std::common_type<>::type (if possible)?

In an answer to my last question it was suggested to use, when possible, std::common_type<X,Y>::type in the declaration of automatic return types instead of my original decltype(). However, doing so I ran into problems (using gcc 4.7.0). Consider the following simple code:
#include <iostream>
#include <type_traits>
template<typename> class A;
template<typename X> class A {
X a[3];
template <typename> friend class A;
public:
A(X a0, X a1, X a2) { a[0]=a0; a[1]=a1; a[2]=a2; }
X operator[](int i) const { return a[i]; }
X operator*(A const&y) const // multiplication 0: dot product with self
{ return a[0]*y[0] + a[1]*y[1] + a[2]*y[2]; }
template<typename Y>
auto operator*(A<Y> const&y) const -> // multiplication 1: dot product with A<Y>
#ifdef USE_DECLTYPE
decltype((*this)[0]*y[0])
#else
typename std::common_type<X,Y>::type
#endif
{ return a[0]*y[0] + a[1]*y[1] + a[2]*y[2]; }
template<typename Y>
auto operator*(Y s) const -> // multiplication 2: with scalar
#ifdef USE_DECLTYPE
A<decltype((*this)[0]*s)>
#else
A<typename std::common_type<X,Y>::type>
#endif
{ return A<decltype((*this)[0]*s)>(s*a[0],s*a[1],s*a[2]); }
};
int main()
{
A<double> x(1.2,2.0,-0.4), y(0.2,4.4,5.0);
A<double> z = x*4;
auto dot = x*y; // <--
std::cout<<" x*4="<<z[0]<<' '<<z[1]<<' '<<z[2]<<'\n'
<<" x*y="<<dot<<'\n';
}
When USE_DECLTYPE is #defined, the code compiles and runs fine with gcc 4.7.0. But otherwise, the line indicated in main() calls multiplication 2, which seems weird if not wrong. Could this possibly be a consequence/side effect of using std::common_type, or is it a bug in gcc?
I always thought that the return type has no bearing on which of a multitude of fitting template functions is chosen...
The suggestion to use common_type is bogus.
The problem using decltype you had in your other question was simply a GCC bug.
The problem you have in this question when using common_type arises because std::common_type<X, Y>::type tells you the type that you would get from the expression:
condition ? std::declval<X>() : std::declval<Y>()
i.e. what type an X and a Y can both be converted to.
In general that has absolutely nothing to do with the result of x * y, if X and Y have an overloaded operator* that returns a completely different type.
In your specific case, you have the expression x*y where both variables are type A<double>. Overload resolution tries to check each overloaded operator* to see if it's valid. As part of overload resolution it instantiates this member function template:
template<typename Y>
auto operator*(Y s) const ->
A<typename std::common_type<X,Y>::type>;
With A<double> substituted for the template parameter Y. That tries to instantiate common_type<double, A<double>> which is not valid, because the expression
condition ? std::declval<double>() : std::declval< A<double> >()
is not valid, because you cannot convert A<double> to double or vice versa, or to any other common type.
The error doesn't happen because that overloaded operator* is called; it happens because the template must be instantiated in order to decide which operator should be called, and the act of instantiating it causes the error. The compiler never gets as far as deciding which operator to call; the error stops it before that.
So, as I said, the suggestion to use common_type is bogus: it prevents SFINAE from disabling the member function templates that don't match the argument types (formally, SFINAE doesn't work here because the substitution error happens outside the "immediate context" of the template, i.e. it happens inside the definition of common_type, not in the function signature where SFINAE applies).
It's permitted to specialize std::common_type so it knows about types without implicit conversions, so you could specialize it so that common_type<double, A<double>>::type is valid and produces the type double, like so:
namespace std {
template<typename T>
struct common_type<T, A<T>> { typedef T type; };
}
Doing so would be a very bad idea! What common_type is supposed to give is the answer to "what type can both these types safely be converted to?". The specialization above subverts it to give the answer to "what is the result of multiplying these types?" which is a completely different question! It would be as foolish as specializing is_integral<std::string> to be true.
If you want the answer to "what is the type of a general expression such as expr?" then use decltype(expr); that's what it's for!