Overload resolution and user-defined conversion - C++

Consider this example:
#include <iostream>
using namespace std;

struct Foo
{
    Foo(int)    { cout << "Foo(int)\n"; }
    Foo(double) { cout << "Foo(double)\n"; }
    operator int() const    { cout << "operator int()\n"; return 0; }
    operator double() const { cout << "operator double()\n"; return 0.; }
};

void bar(Foo)   { cout << "bar(Foo)\n"; }
void bar(float) { cout << "bar(float)\n"; }

int main()
{
    int i = 5;
    bar(i); // why bar(float) and not bar(Foo)?
}
I know I shouldn't overload the converting constructor on related types (here, arithmetic types); this is just for better understanding of function matching and user-defined conversions.
Why is the call to bar resolved to bar(float) and not bar(Foo), given that Foo has a constructor taking exactly this argument type (int)?
Does it mean that standard conversion is preferred over user-defined conversion?

Does it mean that standard conversion is preferred over user-defined conversion?
Yes. Standard conversions are always preferred over user-defined ones:
In deciding on the best match, the compiler works on a rating system for the way the types passed in the call and the competing parameter lists match up. In decreasing order of goodness of match:
An exact match, e.g. argument is a double and parameter is a double
A promotion
A standard type conversion
A constructor or user-defined type conversion
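As an illustration of these tiers, consider the following sketch (the names h and Gadget are invented for this example): with a float argument, the promotion to double beats the standard conversion to long, and both beat the user-defined conversion to Gadget.

#include <iostream>
using namespace std;

struct Gadget { Gadget(float) { } };      // reachable only via user-defined conversion

void h(double) { cout << "h(double)\n"; } // float -> double is a promotion
void h(long)   { cout << "h(long)\n"; }   // float -> long is a standard conversion
void h(Gadget) { cout << "h(Gadget)\n"; } // float -> Gadget is user-defined

int main()
{
    float f = 1.f;
    h(f); // prints h(double)
}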

Related

Prevent Array Decay for overloaded functions taking const char* or const char(&)[]

Let's say I'm writing a function to print the length of a string:
#include <cstddef>
#include <cstring>
#include <iostream>

template <std::size_t N>
void foo(const char (&s)[N]) {
    std::cout << "array, size=" << N-1 << std::endl;
}

foo("hello"); // prints array, size=5
Now I want to extend foo to support non-arrays:
void foo(const char* s) {
    std::cout << "raw, size=" << std::strlen(s) << std::endl;
}
But it turns out that this breaks my original intended usage:
foo("hello") // now prints raw, size=5
Why? Wouldn't that require an array-to-pointer conversion, whereas the template would be an exact match? Is there a way to ensure that my array function gets called?
The fundamental reason for this (standard-conforming) ambiguity appears to lie in the cost of conversion: Overload resolution tries to minimize the operations performed to convert an argument to the corresponding parameter. An array is effectively a pointer to its first element, though, decorated with some compile-time type information. An array-to-pointer conversion doesn't cost more than e.g. saving the address of the array itself, or initializing a reference to it. From that perspective, the ambiguity seems justified, although conceptually it is unintuitive (and may be subpar). In fact, this argumentation applies to all Lvalue Transformations, as suggested by the quote below. Another example:
void g() {}
void f(void(*)()) {}
void f(void(&)()) {}

int main() {
    f(g); // Ambiguous
}
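As an aside, if you need to pick one of these two explicitly, taking the function's address yourself removes the ambiguity:

f(&g); // OK: &g has type void(*)(), so only f(void(*)()) is viable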
The following is obligatory standardese. Functions that are not specializations of some function template are preferred over ones that are if both are otherwise an equally good match (see [over.match.best]/(1.3), (1.6)). In our case, the conversion performed is an array-to-pointer conversion, which is an Lvalue Transformation with Exact Match rank (according to table 12 in [over.ics.user]). [over.ics.rank]/3:
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if
S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding
any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion
sequence) or, if not that,
the rank of S1 is better than the rank of S2, or S1 and S2 have the same rank and are distinguishable by the rules in the paragraph below, or, if not that,
[..]
The first bullet point excludes our conversion (as it is an Lvalue Transformation). The second one requires a difference in ranks, which isn't present, as both conversions have Exact match rank; The "rules in the paragraph below", i.e. in [over.ics.rank]/4, don't cover array-to-pointer conversions either.
So, believe it or not, neither of the two conversion sequences is better than the other, and thus the char const*-overload is picked.
Possible workaround: Define the second overload as a function template as well, then partial ordering kicks in and selects the first one.
#include <cstring>
#include <iostream>
#include <type_traits>

template <typename T>
auto foo(T s)
    -> std::enable_if_t<std::is_convertible<T, char const*>{}>
{
    std::cout << "raw, size=" << std::strlen(s) << std::endl;
}
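With both overloads now templates, the calls resolve as originally intended:

foo("hello");          // array, size=5 (partial ordering prefers the array overload)
const char* p = "hello";
foo(p);                // raw, size=5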

Overload resolution, templates and inheritance

#include <iostream>

struct A {};
struct B : public A {};

template<typename T>
void foo(const T &x) { std::cout << "Called template" << std::endl; }

void foo(const A &a) { std::cout << "Called A" << std::endl; }

int main()
{
    foo(A());
    foo(B());
    return 0;
}
This prints:
Called A
Called template
I was under the impression that a suitable non-template function would always be chosen over a template function. Can someone explain to me the resolution steps that lead to this somewhat surprising result?
I was under the impression that a suitable non-template function would always be chosen over a template function.
This only holds if the template and the non-template are equally good candidates. That's why the non-template is chosen for foo(A()).
However, in the case of foo(B()), using the non-template requires a derived-to-base conversion. So the function template is strictly better, and hence it's chosen.
The foo template instantiates into void foo(const B&). Consider what it would look like without templates:
void foo(const B &x) { std::cout << "Called template" << std::endl; }
void foo(const A &a) { std::cout << "Called A" << std::endl; }
I believe you'll agree calling foo(B()) should unambiguously pick the first one. That's exactly why the template is chosen.
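(If you ever do want the non-template for a B argument, you can perform the derived-to-base adjustment yourself; a small illustration:)

foo(static_cast<const A&>(B())); // prints "Called A": both candidates now take
                                 // const A&, and the non-template wins the tie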
n3376 13.3.3.1/6
When the parameter has a class type and the argument expression has a
derived class type, the implicit conversion sequence is a
derived-to-base Conversion from the derived class to the base class.
n3376 13.3.3.1/8
If no conversions are required to match an argument to a parameter
type, the implicit conversion sequence is the standard conversion
sequence consisting of the identity conversion (13.3.3.1.1).
The identity conversion has Exact Match rank according to table 12 in 13.3.3.1.1, while a derived-to-base conversion ranks worse than identity.
So in the first case the compiler has these candidates:
// template after resolving
void foo(const A&)
// non-template
void foo(const A&)
Both have identity rank, but since the first one is a function template, the second (the non-template) is chosen.
And in the second case:
// template after resolving
void foo(const B&)
// non-template
void foo(const A&)
Only the first has identity rank, so it is chosen.
Can someone explain to me the resolution steps that lead to this somewhat surprising result?
You may look at Overload Resolution at cppreference.com:
http://en.cppreference.com/w/cpp/language/overload_resolution
In particular, see the section Ranking of implicit conversion sequences.
Extension of the Answer:
I tried to provide more clarification with an excerpt of the information from the aforementioned link:
A function template by itself is not a type, or a function, or any other entity. No code is generated from a source file that contains only template definitions. In order for any code to appear, a template must be instantiated: the template arguments must be determined so that the compiler can generate an actual function (or class, from a class template).
For that, the compiler goes through:
function template name lookup
template argument deduction
Up to this point, the compiler has a set of candidate function definitions which can handle the specific function call. These candidates are instances of the function template as well as the relevant non-template function definitions in the program.
But the answer to your question lies in fact here:
Template argument deduction takes place after the function template name lookup (which may involve argument-dependent lookup) and before overload resolution.
The fact that template argument deduction is performed before overload resolution is the reason for the output of your code.
Now your specific case goes through overload resolution as the following:
Overload Resolution:
If the [previous] steps produce more than one candidate function, then overload resolution is performed to select the function that will actually be called. In general, the candidate function whose parameters match the arguments most closely is the one that is called.
...
F1 is determined to be a better function than F2 if implicit conversions for all arguments of F1 are not worse than the implicit conversions for all arguments of F2, and
1) there is at least one argument of F1 whose implicit conversion is better than the corresponding implicit conversion for that argument of F2
...
Ranking of implicit conversion sequences:
Each type of standard conversion sequence is assigned one of three ranks:
1) Exact match: no conversion required, lvalue-to-rvalue conversion, qualification conversion, user-defined conversion of class type to the same class
2) Promotion: integral promotion, floating-point promotion
3) Conversion: integral conversion, floating-point conversion, floating-integral conversion, pointer conversion, pointer-to-member conversion, boolean conversion, user-defined conversion of a derived class to its base
The rank of the standard conversion sequence is the worst of the ranks of the standard conversions it holds (there may be up to three conversions)
Binding of a reference parameter directly to the argument expression is either Identity or a derived-to-base Conversion:
struct Base {};
struct Derived : Base {} d;

int f(Base&);    // overload #1
int f(Derived&); // overload #2

int i = f(d);    // d -> Derived& has rank Exact Match
                 // d -> Base& has rank Conversion
                 // calls f(Derived&)

How is the implicit type conversion priority determined?

Here is the code:
#include <iostream>
using namespace std;

class A {
public:
    int val;
    char cval;
    A() : val(10), cval('a') { }
    operator char() const { return cval; }
    operator int() const { return val; }
};

int main()
{
    A a;
    cout << a;
}
I am running the code in VS 2013; the output value is 10. If I comment out operator int() const { return val; }, the output then becomes a.
My question is: how does the compiler determine which implicit conversion to choose, given that both int and char are possible options for the << operator?
Yes, this is ambiguous, but the cause of the ambiguity is actually rather surprising. It is not that the compiler cannot distinguish between ostream::operator<<(int) and operator<<(ostream &, char); the latter is actually a template while the former is not, so if the matches are equally good the first one will be selected, and there is no ambiguity between those two. Rather, the ambiguity comes from ostream's other member operator<< overloads.
A minimized repro is
struct A {
    operator char() const { return 'a'; }
    operator int() const { return 10; }
};

struct B {
    void operator<< (int) { }
    void operator<< (long) { }
};

int main()
{
    A a;
    B b;
    b << a; // error: ambiguous
}
The problem is that the conversion of a to long can be via either a.operator char() or a.operator int(), both followed by a standard conversion sequence consisting of an integral conversion. The standard says that (§13.3.3.1 [over.best.ics]/p10, footnote omitted):
If several different sequences of conversions exist that each convert
the argument to the parameter type, the implicit conversion sequence
associated with the parameter is defined to be the unique conversion
sequence designated the ambiguous conversion sequence. For the
purpose of ranking implicit conversion sequences as described in
13.3.3.2, the ambiguous conversion sequence is treated as a user-defined sequence that is indistinguishable from any other
user-defined conversion sequence. *
Since the conversion of a to int also involves a user-defined conversion sequence, it is indistinguishable from the ambiguous conversion sequence from a to long, and in this context no other rule in §13.3.3 [over.match.best] applies to distinguish the two overloads either. Hence, the call is ambiguous, and the program is ill-formed.
* The next sentence in the standard says "If a function that uses the ambiguous conversion sequence is selected as the best viable function, the call will be ill-formed because the conversion of one of the arguments in the call is ambiguous.", which doesn't seem necessarily correct, but detailed discussion of this issue is probably better in a separate question.
It shouldn't compile, since the conversion is ambiguous, and indeed it doesn't with my compiler. I've no idea why your compiler accepts it, or how it chooses which conversion to use, but it's wrong.
You can resolve the ambiguity with an explicit cast:
cout << static_cast<char>(a); // uses operator char()
cout << static_cast<int>(a); // uses operator int()
Personally, I'd probably use named conversion functions, rather than operators, if I wanted it to be convertible to more than one type.
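For example (a sketch of that suggestion, rewriting A with named functions instead of conversion operators; the names as_char and as_int are invented here):

class A {
public:
    int val = 10;
    char cval = 'a';
    char as_char() const { return cval; } // the call site must choose explicitly
    int  as_int()  const { return val; }
};
// cout << A{}.as_int();  // no ambiguity: the intent is spelled out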
A debugging session gave the result: one is the globally defined operator<< and the other is a class member function. You can guess which call resolves to which.
Test.exe!std::operator<<<std::char_traits<char> >(std::basic_ostream<char,std::char_traits<char> > & _Ostr, char _Ch)
msvcp120d.dll!std::basic_ostream<char,std::char_traits<char> >::operator<<(int _Val) Line 292 C++
I am not a language lawyer, but I believe the compiler is giving precedence to the member function here.

Make an implicit conversion operator preferred over another in C++

I would like to prefer a certain implicit conversion sequence over another. I have the following (greatly simplified) class and functions:
#include <iostream>
using namespace std;

class Whatever { /*...*/ };

template <class T>
class ref
{
public:
    operator T* ()
    {
        return object;
    }
    operator T& ()
    {
        return *object;
    }
    T* object;
    // ...
};

void f (Whatever*)
{
    cout << "f (Whatever*)" << endl;
}

void f (Whatever&)
{
    cout << "f (Whatever&)" << endl;
}

int main (int argc, char* argv[])
{
    ref<Whatever> whatever = ...;
    f(whatever);
}
When I have a ref object and I am making an ambiguous call to f, I would like the compiler to choose the one involving T&. But in other unambiguous cases I wish the implicit conversion to remain the same.
So far I have tried introducing an intermediate class which ref is implicitly convertible to, and which has an implicit conversion operator to T*, so the conversion sequence would be longer. Unfortunately it did not recognize in unambiguous cases that it is indeed convertible to T*. Same thing happened when the intermediate class had a(n implicit) constructor. It's no wonder, this version was completely unrelated to ref.
I also tried making one of the implicit conversion operators a template, with the same result.
There's no "ranking" between the two conversions; both are equally good, and hence the overload is ambiguous. That's a core part of the language that you cannot change.
However, you can just specify which overload you want by making the conversion explicit:
f((Whatever&) whatever);
Simple: define void f(const ref<Whatever>&); it will trump the others, which require a conversion.
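A minimal sketch of that suggestion, assuming the ref<Whatever> setup from the question:

void f (const ref<Whatever>&)
{
    cout << "f (const ref<Whatever>&)" << endl;
}

// f(whatever) now selects this overload: binding the reference is an identity
// match, while the other two overloads each require a user-defined conversion.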
Only one user-defined conversion function is applied when performing implicit conversions. If there is no defined conversion function, the compiler does not look for intermediate types into which an object can be converted.

Why does overloading implicit conversion member functions work by return type, while it is not allowed for normal functions?

C++ does not allow overloading of methods based on their return type. However, when overloading implicit conversion member functions this seems possible.
Does anyone know why? I thought operators are handled like methods internally.
Edit: Here's an example:
#include <iostream>
#include <string>
using namespace std;

struct func {
    operator string() { return "1"; }
    operator int() { return 2; }
};

int main()
{
    int x = func();         // calls int version
    string y = func();      // calls string version
    double d = func();      // calls int version
    cout << func() << endl; // calls int version
}
Conversion operators are not really considered different overloads and they are not called based on their return type. The compiler will only use them when it has to (when the type is incompatible and should be converted) or when explicitly asked to use one of them with a cast operator.
Semantically, what your code is doing is to declare several different type conversion operators and not overloads of a single operator.
That's not return type. That's type conversion.
Consider: func() creates an object of type func. There is no ambiguity as to what method (constructor) will be invoked.
The only question which remains is if it is possible to cast it to the desired types. You provided the compiler with appropriate conversion, so it is happy.
There isn't really a technical reason to prevent overloading of functions on their result type. This is done in some languages, like Ada for instance, but in the context of C++, which also has implicit conversions (and two kinds of them), the utility is reduced, and the interaction of the two features would quickly lead to ambiguities.
Note that you can use the fact that implicit conversions are user definable to simulate overloading on result type:
#include <iostream>
#include <string>

class CallFProxy;
CallFProxy f(int);

class CallFProxy {
    int myParameter;
    CallFProxy(int i) : myParameter(i) {}
    friend CallFProxy f(int);
public:
    operator double() { std::cout << "Calling f(int)->double\n"; return myParameter; }
    operator std::string() { std::cout << "Calling f(int)->string\n"; return "dummy"; }
};

CallFProxy f(int i) { return CallFProxy(i); }
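Usage then dispatches on the requested result type:

double d = f(42);      // prints "Calling f(int)->double"
std::string s = f(42); // prints "Calling f(int)->string"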
Overload resolution chooses between multiple candidate functions. In this process, the return type of candidates is indeed not considered. However, in the case of conversion operators the "return type" is critically important in determining whether that operator is a candidate at all.
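A short illustration of that last point (g and S are invented names for this sketch):

#include <iostream>
#include <string>

struct S {
    operator int() const { return 1; } // no conversion to std::string
};

void g(int)         { std::cout << "g(int)\n"; }
void g(std::string) { std::cout << "g(string)\n"; }

int main()
{
    g(S{}); // prints g(int): operator int() is the only conversion available,
            // so g(std::string) never becomes a viable candidate
}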