(All ISO Standard references below refer to N4659: March 2017 post-Kona working draft/C++17 DIS, and all example program results are consistent over GCC and Clang for C++11, C++14 and C++17)
Consider the following example:
#include <initializer_list>
// Denote as A.
void f(float) {}
// Denote as B.
void f(std::initializer_list<int>) {}
int main() {
// Denote call as C1.
f(1.5F); // Overload resolution picks A (trivial).
// Denote call as C2.
f({1.5F}); // Overload resolution picks B (and subsequently fails; narrowing).
return 0;
}
which results in the call C2 picking overload B as the best viable function, followed by a failure due to narrowing in list-initialization (governed by [dcl.init.list]/7):
error: type 'float' cannot be narrowed to 'int' in initializer list [-Wc++11-narrowing]
Question
Why does the f({1.5F}); function call find void f(std::initializer_list<int>) to be the unique best viable function, ranked as a better match than void f(float)? Afaics this conflicts with [over.ics.list]/4 and [over.ics.list]/9 (see details below).
I'm looking for references to the relevant standard passages.
Note that I am aware of the special rules regarding overload resolution for list initialization (constructor overloading) and std::initializer_list<> (and the various SO questions on this topic), where std::initializer_list<> is strongly preferred, as governed by [over.match.list]/1. However, afaics, this does not apply here (or, if I'm wrong, at the very least arguably conflicts with the standard examples shown in [over.ics.list]/4).
Details and my own investigation
For both C1 and C2 calls above, overload resolution is applied as specified in [over.call.func] (as per [over.match]/2.1 and [over.match.call]/1), particularly [over.call.func]/3, and the set of candidate functions for both calls is:
candidate_functions(C1) = { void f(float), void f(std::initializer_list<int>) }
candidate_functions(C2) = { void f(float), void f(std::initializer_list<int>) }
and the argument list in each call is the same as the expression-list in the call.
As per [over.match.viable]/2 and [over.match.viable]/3, the set of viable functions is the same as the candidate functions for both calls:
viable_functions(C1) = { void f(float), void f(std::initializer_list<int>) }
viable_functions(C2) = { void f(float), void f(std::initializer_list<int>) }
Best viable functions
As per [over.match.best]/1, in short: for a single-argument call such as these (with no special cases among the viable functions of either call), the best viable function is the viable function that has the best implicit conversion sequence from the single argument to its single parameter.
C1: Best viable function
The call f(1.5F) requires only an identity standard conversion (no conversion), which is an exact match for A (as per [over.ics.scs]/3), so A is trivially and unambiguously chosen as the best viable function.
best_viable_function(C1) = void f(float)
C2: Best viable function
Afaics, ranking the implicit conversion sequences of the call f({1.5F}) towards viable candidates is governed by [over.ics.list], as per [over.ics.list]/1:
When an argument is an initializer list ([dcl.init.list]), it is not an expression and special rules apply for converting it to a parameter type.
Matching C2 towards A
For matching towards A where the single parameter is of the fundamental float type, [over.ics.list]/9 and particularly [over.ics.list]/9.1 applies [emphasis mine]:
Otherwise, if the parameter type is not a class:
(9.1) if the initializer list has one element that is not itself an initializer list, the implicit conversion sequence is the
one required to convert the element to the parameter type;
[ Example:
void f(int);
f( {'a'} ); // OK: same conversion as char to int
f( {1.0} ); // error: narrowing
— end example ]
[...]
This arguably means that the implicit conversion sequence for matching the call f({1.5F}) to f(float) is the same as the conversion from float to float, i.e., the identity conversion and consequently an exact match. However, as the example above shows, there must be some flaw in my logic here, as the call C2 doesn't even result in an ambiguous best viable function.
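As a sanity check of this reading (a small sketch of my own, not part of the original example): if only overload A is declared, the call compiles cleanly, consistent with the float-to-float identity conversion described by [over.ics.list]/9.1:

// Only the float overload (A) declared.
void f(float) {}

int main() {
    f({1.5F}); // OK: the float parameter is list-initialized from 1.5F; identity, no narrowing
}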
Matching C2 towards B
For matching towards B, where the single parameter is of type std::initializer_list<int>, [over.ics.list]/4 applies [emphasis mine]:
Otherwise, if the parameter type is std::initializer_list<X> and
all the elements of the initializer list can be implicitly converted
to X, the implicit conversion sequence is the worst conversion
necessary to convert an element of the list to X, or if the
initializer list has no elements, the identity conversion. This
conversion can be a user-defined conversion even in the context of a
call to an initializer-list constructor. [ Example:
void f(std::initializer_list<int>);
f( {} ); // OK: f(initializer_list<int>) identity conversion
f( {1,2,3} ); // OK: f(initializer_list<int>) identity conversion
f( {'a','b'} ); // OK: f(initializer_list<int>) integral promotion
f( {1.0} ); // error: narrowing
[...]
— end example ]
In this example, the implicit conversion sequence is thus the worst conversion necessary to convert the single float element of the list to int, which is a standard conversion sequence of Conversion rank ([over.ics.scs]/3), and in particular a narrowing conversion as per [conv.fpint]/1.
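Similarly (again a sketch of my own): with only overload B declared, the conversion sequence described by [over.ics.list]/4 is formed, and list-initialization then fails only for the narrowing case:

#include <initializer_list>

// Only the initializer_list overload (B) declared.
void f(std::initializer_list<int>) {}

int main() {
    f({'a'});  // OK: char -> int is an integral promotion
    f({1.5F}); // error: float -> int would be narrowing ([dcl.init.list]/7)
}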
Best viable function
According to my own interpretation of the standard passages as per above, the best viable function should be the same as for the C1 call,
best_viable_function(C2) = void f(float) ?
but I'm obviously missing something.
List-initialization sequences: special case ranking when one sequence converts to std::initializer_list
[over.ics.rank]/3.1 applies for this case, and takes precedence over the other rules of [over.ics.rank]/3 [emphasis mine]:
List-initialization sequence L1 is a better conversion sequence than
list-initialization sequence L2 if
(3.1.1) L1 converts to std::initializer_list<X> for some X and L2 does not, or, if not that
(3.1.2) [...]
even if one of the other rules in this paragraph would otherwise apply. [ Example:
void f1(int); // #1
void f1(std::initializer_list<long>); // #2
void g1() { f1({42}); } // chooses #2
void f2(std::pair<const char*, const char*>); // #3
void f2(std::initializer_list<std::string>); // #4
void g2() { f2({"foo","bar"}); } // chooses #4
— end example ]
meaning that [over.ics.rank]/3.2 and [over.ics.rank]/3.3, which cover distinguishing implicit conversion sequences by means of standard conversion sequences and user-defined conversion sequences, respectively, do not apply here. In turn, this means that [over.ics.list]/4 and [over.ics.list]/9 are not used to rank the implicit conversion sequences when comparing "C2 calls A" against "C2 calls B".
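To see the effect of [over.ics.rank]/3.1 without the narrowing error getting in the way, consider this variation of the original example (my own sketch): the list's element type is widened so that list-initialization succeeds, and the initializer_list overload is still selected even though f(float) offers an identity conversion for the element:

#include <initializer_list>
#include <iostream>

void f(float) { std::cout << "A: f(float)\n"; }
void f(std::initializer_list<double>) { std::cout << "B: f(initializer_list<double>)\n"; }

int main() {
    f(1.5F);   // picks A: the argument is an expression, identity match for float
    f({1.5F}); // picks B per [over.ics.rank]/3.1; float -> double is not narrowing, so it compiles
}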
This very topic was a defect in C++11 and C++14
It's not surprising that these conversions can seem counter-intuitive; the rules governing them are complicated. In the original C++11 and C++14 ISO standard releases, the call f({1.5F}); actually had ambiguous ranking rules w.r.t. the best viable function, which was covered in CWG Defect Report 1589 [emphasis mine]:
1589. Ambiguous ranking of list-initialization sequences
Section: 16.3.3.2 [over.ics.rank]
Status: CD4
Submitter: Johannes Schaub
Date: 2012-11-21
[Moved to DR at the November, 2014 meeting.]
The interpretation of the following example is unclear in the current
wording:
void f(long);
void f(initializer_list<int>);
int main() { f({1L}); }
The problem is that a list-initialization sequence can also be a
standard conversion sequence, depending on the types of the elements
and the type of the parameter, so more than one bullet in the list
in 16.3.3.2 [over.ics.rank] paragraph 3 applies:
[...]
These bullets give opposite results for the example above, and there
is implementation variance in which is selected.
[...]
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1467.
CWG Defect Report 1467 finally also resolved DR 1589, in particular adding the relevant part of [over.ics.rank]/3 quoted above [emphasis mine]:
1467. List-initialization of aggregate from same-type object
[...]
Proposed resolution (June, 2014):
[...]
Move the final bullet of 16.3.3.2 [over.ics.rank] paragraph 3 to the beginning of the list and change it as follows:
[...]
even if one of the other rules in this paragraph would otherwise apply. [Example: ... — end example]
This resolution also resolves issues 1490, 1589, 1631, 1756, and 1758.
Compilers such as GCC and Clang have since back-ported DR 1467 to earlier standards (C++11 and C++14).
There is a global function (just an example):
#include <iostream>

void func( int i )
{
    std::cout << i + 100 << std::endl;
}
I assume that calling this function with a char argument does not make any sense, so I delete that overload:
void func(char) = delete;
So I expect that the following calls should be possible:
func(1);
func(3.1);
func(true);
And a call with a char argument should be forbidden:
func('a');
But that turns out not to be entirely true. When calling func('a') I get, as expected:
error: use of deleted function ‘void func(char)’
But when calling func(2.3) I get:
error: call of overloaded ‘func(double)’ is ambiguous
Why do I get this error? Without deleting the char overload, the double was converted to int and func(int) was called; why is it forbidden now?
When you call
func(2.3)
you pass a double to the function. The list of candidates contains both func(int) and func(char), as overload resolution happens before delete kicks in:
If the function is overloaded, overload resolution takes place first, and the program is only ill-formed if the deleted function was selected. (Ref: cppreference.com; see Avishai's answer for precise standard quotes.)
Now double can be converted both to char and int, hence the ambiguity.
Without deleting the char overload, the double was converted to int and func(int) was called; why is it forbidden now?
You get an ambiguity error even without deleting the char version, see live here. Of course, if you only define func(int), then there is no ambiguity, so the double will happily be converted to an int.
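For instance, a minimal sketch of what that live example demonstrates (the deleted definition is not needed to reproduce the ambiguity):

void func(int) {}
void func(char) {} // not deleted, merely declared alongside

int main() {
    func(2.3); // error: call of overloaded 'func(double)' is ambiguous
}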
Any time a non-exact match is passed as a function argument, a conversion must be done. When you have two functions, regardless of whether one of them is deleted, both still participate in overload resolution. The compiler picks the best match, and then either generates a call to it or fails if that function is inaccessible or deleted.
For overload resolution to work, the compiler builds a set of candidate functions and then sees which one has the best conversion path to the parameters. As far as the ranking rules go, conversions between built-in numeric types (other than the promotions) all have the same Conversion rank. Therefore, if you call with 2.5, which is a double, it must convert, and it can convert to a char or to an int with the same rank, so the call is ambiguous.
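A small sketch of how those ranks play out for the overload set from the question (the bool case is my own addition):

void func(int) {}
void func(char) = delete;

int main() {
    func(1);    // OK: exact match for func(int)
    func(true); // OK: bool -> int is a promotion, which beats the bool -> char conversion
    func('a');  // error: exact match selects func(char), which is deleted
    func(2.5);  // error: double -> int and double -> char are both Conversion rank, so ambiguous
}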
It is still ambiguous, as the compiler tries to resolve the overload before checking for deleted functions. See:
§8.4.3 Deleted definitions
A program that refers to a deleted function implicitly or explicitly, other than to declare it, is ill-formed.
[ Note: This includes calling the function implicitly or explicitly and forming a pointer or pointer-to-member to the function. It applies even for references in expressions that are not potentially-evaluated. If a function is overloaded, it is referenced only if the function is selected by overload resolution. — end note ]
§13.3.3.1 Implicit conversion sequences:
Implicit conversion sequences are concerned only with the type, cv-qualification, and value category of
the argument and how these are converted to match the corresponding properties of the parameter. Other
properties, such as the lifetime, storage class, alignment, accessibility of the argument, whether the argument
is a bit-field, and whether a function is deleted (8.4.3), are ignored. So, although an implicit conversion
sequence can be defined for a given argument-parameter pair, the conversion from the argument to the
parameter might still be ill-formed in the final analysis.
I am just looking for clarification on how C++ works; this isn't really about solving a particular problem in my code.
In C++ you can say that type A should implicitly convert to type B in two different ways.
If you are the author of A you can add something like this to A:
operator B() {
// code
}
If you are the author of B you can add something like this to B:
B(const A &a) {
// code
}
Either one of these, if I understand correctly, will allow A to implicitly convert to B. So if both are defined, which one is used? Is that even allowed?
NOTE: I understand that you should probably never be in a situation where you do this. You would either make the constructor explicit or even more likely only have one of the two. I am just wondering what the C++ specification says and I don't know how to look that up.
Unfortunately, the answer to this question is probably more complex than what you were looking for. It is true that the compiler will reject ambiguous conversions as Lightness Races in Orbit points out, but are the conversions ambiguous? Let's examine a few cases. All references are to the C++11 standard.
Explicit conversion
This doesn't address your question directly because you asked about implicit conversion, but since Lightness Races in Orbit gave an example of explicit conversion, I'll go over it anyway.
Explicit conversion is performed from A to B when:
you use the syntax (B)a, where a is of type A, which in this case will be equivalent to static_cast<B>(a) (C++11 standard, §5.4/4).
you use a static cast, which in this case will create a temporary which is initialized in the same way that the declaration B t(a); initializes t; (§5.2.9/4)
you use the syntax B(a), which is equivalent to (B)a and hence also does the same thing as the initialization in the declaration B t(a); (§5.2.3/1)
In each case, therefore, direct initialization is performed of a prvalue of type B using a value of type A as the argument. §8.5/16 specifies that only constructors are considered, so B::B(const A&) will be called. (For slightly more detail, see my answer here: https://stackoverflow.com/a/22444974/481267)
Copy-initialization
In the copy-initialization
B b = a;
the value a of type A is first converted to a temporary of type B using a user-defined conversion sequence, which is an implicit conversion sequence. Then this temporary is used to direct-initialize b.
Because this is copy-initialization of a class type by an object of a different class type, both the converting constructor B::B(const A&) and the conversion function A::operator B() are candidates for the conversion (§13.3.1.4). The latter is called because it wins overload resolution. Note that if B::B had argument A& rather than const A&, the overload would be ambiguous and the program wouldn't compile. For details and references to the Standard see this answer: https://stackoverflow.com/a/1384044/481267
Copy-list-initialization
The copy-list-initialization
B b = {a};
only considers constructors of B (§8.5.4/3), and not conversion functions of A, so B::B(const A&) will be called, just like in explicit conversion.
Implicit conversion of function arguments
If we have
void f(B b);
A a;
f(a);
then the compiler has to select the best implicit conversion sequence to convert a to type B in order to pass it to f. For this purpose, user-defined conversion sequences are considered which consist of a standard conversion followed by a user-defined conversion followed by another standard conversion (§13.3.3.1.2/1). A user-defined conversion can occur through either the converting constructor B::B(const A&) or through the conversion function A::operator B().
Here's where it gets tricky. There is some confusing wording in the standard:
Since an implicit conversion sequence is an initialization, the special rules for initialization
by user-defined conversion apply when selecting the best user-defined conversion for a user-defined conversion
sequence (see 13.3.3 and 13.3.3.1).
(§13.3.3.1.2/2)
To make a long story short, this means that the user-defined conversion in the user-defined conversion sequence from A to B is itself subject to overload resolution; A::operator B() wins over B::B(const A&) because the former has less cv-qualification (as in the copy-initialization case) and ambiguity would result if we had had B::B(A&) rather than B::B(const A&). Note that this cannot result in the infinite recursion of overload resolution, since user-defined conversions are not allowed for converting the argument to the parameter type of the user-defined conversion.
Return statement
In
B foo() {
return A();
}
the expression A() is implicitly converted to type B (§6.6.3/2) so the same rules apply as in implicit conversion of function arguments; A::operator B() will be called, and the overload would be ambiguous if we had B::B(A&). However, if it were instead
return {A()};
then this would be a copy-list-initialization instead (§6.6.3/2 again); B::B(const A&) will be called.
Note: user-defined conversions are not tried when handling exceptions; a catch(B) block won't handle a throw A();.
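To tie the sections above together, here is a small harness (my own sketch, using the non-const operator B() from the question); compiling and running it should show which conversion is chosen in each of the contexts discussed above:

#include <iostream>

struct B;

struct A {
    operator B(); // conversion function; defined below, once B is complete
};

struct B {
    B() {}
    B(const A&) { std::cout << "B::B(const A&)\n"; }
};

A::operator B() { std::cout << "A::operator B()\n"; return B(); }

void f(B) {}

int main() {
    A a;
    B b1(a);    // direct-initialization: only B's constructors are candidates -> B::B(const A&)
    B b2 = a;   // copy-initialization: A::operator B() wins (its implicit object parameter is A&)
    B b3 = {a}; // copy-list-initialization: only B's constructors are considered -> B::B(const A&)
    f(a);       // implicit conversion of a function argument: A::operator B() wins again
}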
[C++11: 12.3/2]: User-defined conversions are applied only where they are unambiguous. [..]
12.3 goes on to list the two kinds you identified.
This example seems to compile with VC10 and gcc (though my version of gcc is very old).
EDIT: R. Martinho Fernandez tried this on gcc 4.7 and the behaviour is still the same.
struct Base
{
operator double() const { return 0.0; }
};
struct foo
{
foo(const char* c) {}
};
struct Something : public Base
{
void operator[](const foo& f) {}
};
int main()
{
Something d;
d["32"];
return 0;
}
But clang complains:
test4.cpp:19:6: error: use of overloaded operator '[]' is ambiguous (with operand types 'Something' and 'const char [3]')
d["32"]
~^~~~~
test4.cpp:13:10: note: candidate function
void operator[](const foo& f) {}
^
test4.cpp:19:6: note: built-in candidate operator[](long, const char *)
d["32"]
^
test4.cpp:19:6: note: built-in candidate operator[](long, const restrict char *)
test4.cpp:19:6: note: built-in candidate operator[](long, const volatile char *)
test4.cpp:19:6: note: built-in candidate operator[](long, const volatile restrict char *)
Overload resolution is considering two possible functions when looking at this expression:
calling Something::operator[] (after a user-defined conversion)
calling the built-in operator[] for const char* (think "32"[d]) (after a user-defined conversion and a standard conversion from double to long).
If I had written d["32"] as d.operator[]("32"), then overload resolution wouldn't even look at option 2, and clang would also compile it fine.
EDIT: (clarification of questions)
This seems to be a complicated area of overload resolution, so I'd very much appreciate answers that explain the overload resolution in this case in detail and cite the standard (if some obscure/advanced rule, likely to be unknown, is involved).
If clang is correct, I'm also interested in knowing why the two candidates are ambiguous / why one is not preferred over the other. The answer would likely have to explain how overload resolution considers the implicit conversions (both user-defined and standard conversions) involved in the two candidates and why one is not better than the other.
Note: if operator double() is changed to operator bool(), all three (clang, vc, gcc) will refuse to compile with a similar ambiguity error.
It should be easier to picture why the overload resolution is ambiguous by going through it step-by-step.
§13.5.5 [over.sub]
Thus, a subscripting expression x[y] is interpreted as x.operator[](y) for a class object x of type T if T::operator[](T1) exists and if the operator is selected as the best match function by the overload resolution mechanism (13.3.3).
Now, we first need an overload set. That's constructed according to §13.3.1 and contains member as well as non-member functions. See this answer of mine for a more detailed explanation.
§13.3.1 [over.match.funcs]
p2 The set of candidate functions can contain both member and non-member functions to be resolved against the same argument list. So that argument and parameter lists are comparable within this heterogeneous set, a member function is considered to have an extra parameter, called the implicit object parameter, which represents the object for which the member function has been called. [...]
p3 Similarly, when appropriate, the context can construct an argument list that contains an implied object argument to denote the object to be operated on.
// abstract overload set (return types omitted since irrelevant)
f1(Something&, foo const&); // linked to Something::operator[](foo const&)
f2(std::ptrdiff_t, char const*); // linked to operator[](std::ptrdiff_t, char const*)
f3(char const*, std::ptrdiff_t); // linked to operator[](char const*, std::ptrdiff_t)
Then, an argument list is constructed:
// abstract argument list
(Something&, char const[3]) // 'Something&' is the implied object argument
And then the argument list is tested against every member of the overload set:
f1 -> identity match on argument 1, conversion required for argument 2
f2 -> conversion required for argument 1, conversion required for argument 2 (decay)
f3 -> argument 1 incompatible, argument 2 incompatible, discarded
Then, since we found out that there are implicit conversions required, we take a look at §13.3.3 [over.match.best] p1:
Define ICSi(F) as follows:
if F is a static member function, [...]; otherwise,
let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F. 13.3.3.1 defines the implicit conversion sequences and 13.3.3.2 defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
Now let's construct those implicit conversion sequences for f1 and f2 in the overload set (§13.3.3.1):
ICS1(f1): 'Something&' -> 'Something&', standard conversion sequence
ICS2(f1): 'char const[3]' -> 'foo const&', user-defined conversion sequence
ICS1(f2): 'Something&' -> 'std::ptrdiff_t', user-defined conversion sequence
ICS2(f2): 'char const[3]' -> 'char const*', standard conversion sequence
§13.3.3.2 [over.ics.rank] p2
a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion sequence or an ellipsis conversion sequence.
So ICS1(f1) is better than ICS1(f2) and ICS2(f1) is worse than ICS2(f2).
Conversely, ICS1(f2) is worse than ICS1(f1) and ICS2(f2) is better than ICS2(f1).
§13.3.3 [over.match.best]
p1 (cont.) Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then [...]
p2 If there is exactly one viable function that is a better function than all other viable functions, then it is the one selected by overload resolution; otherwise the call is ill-formed.
Well, f*ck. :) As such, Clang is correct in rejecting that code.
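As the question itself already hints with the d.operator[]("32") remark, one way around the ambiguity is to write the call so that no user-defined conversion competes on the problematic argument (a sketch reusing the classes from the question):

int main() {
    Something d;
    d.operator[]("32"); // explicit member syntax: built-in candidates are never considered
    d[foo("32")];       // the subscript is already a foo, so the member is the only viable candidate
    return 0;
}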
It seems there is no question that both Something::operator[](const foo& f) and the built-in operator[](long, const char *) are viable candidate functions (13.3.2) for overload resolution. The types of the actual arguments are Something and const char*, and the implicit conversion sequences (ICSs), I think, are:
for Something::operator[](const foo& f): (1-1) identity conversion, and (1-2) foo("32") through foo::foo(const char*);
for operator[](long, const char *): (2-1) long(double(d)) through Something::operator double() const (inherited from Base), and (2-2) identity conversion.
Now if we rank these ICSs according to (13.3.3.2), we can see that (1-1) is a better conversion than (2-1), and (1-2) is a worse conversion than (2-2). According to the definition in (13.3.3),
a viable function F1 is defined to be a better function than another viable function
F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), ...
Therefore, neither of the considered two candidate functions is better than the other one, and thus the call is ill-formed. I.e. Clang appears to be correct, and the code should not compile.
It would seem from 13.6 in the C++11 spec that clang is correct here:
13.6 Built-in operators [over.built]
The candidate operator functions that represent the built-in operators defined in Clause 5 are specified in
this subclause. These candidate functions participate in the operator overload resolution process as described
in 13.3.1.2 and are used for no other purpose. [ Note: Because built-in operators take only operands with
non-class type, and operator overload resolution occurs only when an operand expression originally has class
or enumeration type, operator overload resolution can resolve to a built-in operator only when an operand
has a class type that has a user-defined conversion to a non-class type appropriate for the operator, or when
an operand has an enumeration type that can be converted to a type appropriate for the operator. Also note
that some of the candidate operator functions given in this subclause are more permissive than the built-in
operators themselves. As described in 13.3.1.2, after a built-in operator is selected by overload resolution
the expression is subject to the requirements for the built-in operator given in Clause 5, and therefore to
any additional semantic constraints given there. If there is a user-written candidate with the same name
and parameter types as a built-in candidate operator function, the built-in operator function is hidden and
is not included in the set of candidate functions. — end note ]
[...]
For every cv-qualified or cv-unqualified object type T there exist candidate operator functions of the form
T& operator[](T *, std::ptrdiff_t);
T& operator[](std::ptrdiff_t, T *);
EDIT:
Once you get past which operator functions exist, this just becomes standard overload resolution as described by section 13.3 of the standard -- about 10 pages of details, but the gist of it is that for a function call to not be ambiguous, there needs to be a single function that is at least as good a match as all the possible, viable functions on every argument, and a better match than the others on at least one argument. There's a lot of spec detail on exactly what 'better' means, but it boils down to (in this case) a match not requiring any user-defined conversion operator or object constructor is better than one which does.
So in this case, there are two viable matches:
void Something::operator[](const foo& f)
operator[](long, const char *)
The first is a better match for the first argument, while the second is a better match for the second. So unless there's some other function that is better than both of these, it's ambiguous.
That latter point suggests a possible workaround -- add:
void operator[](const char *a) { return (*this)[foo(a)]; }
to class Something
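Put together, a compilable version of that workaround might look like this (a sketch based on the code from the question); the added overload is an exact match for the const char* argument, so it beats both the user-defined conversion to foo and the built-in candidates:

struct Base {
    operator double() const { return 0.0; }
};

struct foo {
    foo(const char* c) {}
};

struct Something : public Base {
    void operator[](const foo& f) {}
    // Workaround overload: exact match for string literals.
    void operator[](const char* a) { (*this)[foo(a)]; }
};

int main() {
    Something d;
    d["32"]; // now unambiguously calls Something::operator[](const char*)
    return 0;
}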
Several comments on a recent answer of mine, What other useful casts can be used in C++, suggest that my understanding of C++ conversions is faulty. Just to clarify the issue, consider the following code:
#include <string>
struct A {
A( const std::string & s ) {}
};
void func( const A & a ) {
}
int main() {
func( "one" ); // error
func( A("two") ); // ok
func( std::string("three") ); // ok
}
My assertion was that the first function call is an error, because there is no conversion from a const char * to an A. There is a conversion from a string to an A, but using this would involve more than one conversion. My understanding is that this is not allowed, and this seems to be confirmed by the g++ 4.4.0 and Comeau compilers. With Comeau, I get the following error:
"ComeauTest.c", line 11: error: no suitable constructor exists
to convert from "const char [4]" to "A"
func( "one" ); // error
If you can point out where I am wrong, either here or in the original answer, preferably with reference to the C++ Standard, please do so.
And the answer from the C++ standard seems to be:
At most one user-defined conversion (constructor or conversion function) is implicitly applied to a single value.
Thanks to Abhay for providing the quote.
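To tie the rule back to the code in the question (my own sketch, not part of the original answer): the first call compiles as soon as only one user-defined conversion is needed, for example if A gains a constructor taking const char* directly:

#include <string>

struct A {
    A( const std::string & s ) {}
    A( const char * s ) {} // added: a direct user-defined conversion from const char*
};

void func( const A & a ) {}

int main() {
    func( "one" ); // OK now: const char[4] -> const char* (standard), then one user-defined conversion to A
}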
I think the answer from sharptooth is precise. The C++ Standard (SC22-N-4411.pdf), section 12.3 titled 'Conversions', paragraph 4, makes it clear that only one implicit user-defined conversion is allowed.
1 Type conversions of class objects can be specified by constructors and by conversion functions. These conversions are called user-defined conversions and are used for implicit type conversions (Clause 4), for initialization (8.5), and for explicit type conversions (5.4, 5.2.9).
2 User-defined conversions are applied only where they are unambiguous (10.2, 12.3.2). Conversions obey the access control rules (Clause 11). Access control is applied after ambiguity resolution (3.4).
3 [ Note: See 13.3 for a discussion of the use of conversions in function calls as well as examples below. — end note ]
4 At most one user-defined conversion (constructor or conversion function) is implicitly applied to a single value.
That's true; only one implicit user-defined conversion is allowed.
Two conversions in a row may be performed with a combination of a conversion operator and a parameterized constructor, but this causes a C4927 warning - "illegal conversion; more than one user-defined conversion has been implicitly applied" - in VC++, for a reason.
As the consensus already seems to be: yes, you're right.
But since this question and its answers will probably become the point of reference for C++ implicit conversions on Stack Overflow, I'd like to add that the rules are different for template arguments.
No implicit conversions are allowed for arguments that are used for template argument deduction. This might seem pretty obvious but nevertheless can lead to subtle weirdness.
Case in point: std::string's addition operators
std::string s;
s += 67; // (1)
s = s + 67; // (2)
(1) compiles and works fine: operator+= is a member function, and the template character parameter was already deduced (to char) when std::string was instantiated for s. So the implicit conversion (int -> char) is allowed, and s ends up containing the char equivalent of 67, which in ASCII would be 'C'.
(2) gives a compiler error, because operator+ is declared as a free function template and here the template character argument takes part in deduction, where no implicit conversions are tried.
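The same deduction rule can be seen without std::string; a generic sketch (names are mine):

template <typename T>
void add(T a, T b) {}

int main() {
    add(1, 2.0);         // error: T is deduced as int from 'a' and as double from 'b';
                         // no implicit conversion is tried to reconcile the two
    add<double>(1, 2.0); // OK: T is given explicitly, so 1 converts to double as usual
}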
The C++ Programming Language (4th ed.), section 18.4.3, says that
only one level of user-defined implicit conversion is legal
That "user-defined" part makes it sound like multiple implicit conversions may be allowed if some are between native types.
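Indeed, as noted earlier in this thread when quoting §13.3.3.1.2/1, a user-defined conversion sequence may have a standard conversion before and after the single user-defined conversion. A small sketch of my own:

struct A {
    A(long) {}
};

void f(const A& a) {}

int main() {
    f(3.14); // OK: double -> long (standard conversion), then long -> A (the one user-defined conversion)
}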