Understanding the whole process of function template overload resolution - C++

I have found that overload resolution is the most complicated process I have ever studied in C++, so bear in mind that this topic is relatively difficult for me; please be patient with me.
I have two examples here, and I have tried to parse each one separately to understand what the compiler actually does, from the call to the overloaded template up to determining which overload is the best viable function.
Example #1
template <class T> void f(const T&); // F1: reference to const T
template <class T> void f(const T*); // F2: pointer to const T
f((int*)0);
// matching F1:
// deduction: P = const T&, A = int* --> const T& = int* --> T = int*
// instantiates: void f(int *const&); A = int*, P = int *const&
// S1 conversion: int* --> int* const& (identity conversion?)
// matching F2:
// P = const T*, A = int* --> const T* = int*; T = int;
// instantiates: void f(const int*); A = int*, P = const int*;
// S2 conversion: int* --> const int* (qualification conversion)
Per [over.ics.rank]/3/3.2
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if
(3.2.1) S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 12.2.4.2.2, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence)
[..]
S1 is a proper subsequence of S2: that is, the identity conversion is a proper subsequence of the qualification conversion, because the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence.
and per [over.match.best.general]/2:
Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
(2.1) — for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2)
[..]
Here, the implicit conversions for all arguments of F1 (which is identity conversion) are "not worse" than the implicit conversions for all arguments of F2 (which is qualification conversion), and there is at least one argument of F1 whose implicit conversion is better than the corresponding implicit conversion for that argument of F2. Then the viable function F1 is a better function than the viable function F2. Hence, a specialization of F1 gets chosen by the overload resolution for the given call.
My questions:
Have I parsed the whole process correctly?
Does the compiler need to perform partial ordering at this point? If yes, why, even though we already know from the ICS comparison that F1 is the best viable candidate?
Example #2
template <class T> void f(const T&); // F1: reference to const T
template <class T> void f(T*); // F2: pointer to T
f((int*)0);
// matching F1:
// deduction: P = const T&, A = int* --> const T& = int* --> T = int*
// instantiates: void f(int *const&); A = int*, P = int *const&
// S1 conversion: int* --> int* const& (identity conversion)
// matching F2:
// deduction: P = T*, A = int* --> T* = int*; T = int
// instantiates: void f(int*); A = int*, P = int*
// S2 conversion: int* --> int* (identity conversion)
Here neither rule in [over.ics.rank]/3/3.2 applies, since both S1 and S2 are identity conversions.
Then the compiler goes to the next step and checks the rules defined in [over.match.best.general]/2. Indeed, ICSi(F1) is not a worse conversion sequence than ICSi(F2), but bullet 2.1 is not satisfied because ICSi(F1) is not a better conversion sequence than ICSi(F2): the implicit conversion sequence for the argument of F1 is the same as (i.e., not worse than) that of F2.
The compiler keeps checking the rules in [over.match.best.general]/2 until it hits bullet 2.5 which says:
[..]
(2.5) — F1 and F2 are function template specializations and the function template for F1 is more specialized than the template for F2 according to the partial ordering rules [..]
I am not going through the whole process of partial ordering; instead, I will summarize it as briefly as possible.
Adjusted function signatures (references and top-level cv-qualifiers removed):
void f(T); // Tem1
void f(T*); // Tem2
Transformed function signatures (template parameters replaced by unique synthesized types):
void f(U1); // Tra1
void f(U2*); // Tra2
Matching Tem1 against Tra2:
void f(T); // Tem1
void f(U2*); // Tra2
// T = U2* - OK: T can be deduced from U2*
Matching Tem2 against Tra1:
void f(T*); // Tem2
void f(U1); // Tra1
// T* = U1 - error: T cannot be deduced
So the template referred to by Tra2 is more specialized than the template referred to by Tra1: F2 is more specialized than F1. Therefore, a specialization of F2 will be selected by overload resolution for the given call.
My questions:
Have I parsed the whole process correctly?
What I think (it might be incorrect) is that after template argument deduction, the compiler generates specializations for both overloads to match the P/A pairs for the ICS process. Now, does the compiler generate a specialization again for the more specialized template?
Believe me, before posting this question I searched a lot for one that covers this whole process. What I need to know is whether I am thinking about this correctly, and whether all the quotes I provided apply to the examples.
Sorry for the long post, and thanks in advance.

Related

Function Matching for parameters of type const T& and T

I have a question regarding C++ function matching for parameters of types T and const T&.
Let's say I have the following two functions:
void f(int i) {}
void f(const int &ri) {}
If I call f with an argument of type const int then this call is of course ambiguous. But why is a call of f with an argument of type int also ambiguous? Wouldn't the first version of f be an exact match and the second one a worse match, because the int argument must be converted to a const int?
const int ci = 0;
int i = 0;
f(ci); // of course ambiguous
f(i); // why also ambiguous?
I know that such kind of overloading doesn't make much sense, because calls of f are almost always ambiguous unless the parameter type T doesn't have an accessible copy constructor. But I'm just studying the rules of function matching.
EDIT: To make my question clearer. If I have the two functions:
void f(int *pi) {}
void f(const int *pi) {}
Then the following call is not ambiguous:
int i = 0;
f(&i); // not ambiguous, first version f(int*) chosen
Although both versions of f could be called with &i, the first version is chosen, because the second version of f would include a conversion to const. That is, the first version is a "better match". But in the case of the two functions:
void f(int i) {}
and
void f(const int &ri) {}
This additional conversion to const seems to be ignored for some reason. Again both versions of f could be called with an int. But again, the second version of f would require a conversion to const which would make it a worse match than the first version f(int).
int i = 1;
// f(int) requires no conversion
// f(const int &) does require a const conversion
// so why are both versions treated as "equally good" matches?
// isn't this analogous to the f(int*) and f(const int*) example?
f(i); // why ambiguous this time?
One call involves an "lvalue-to-rvalue conversion", the other requires an identity conversion (for references) or a "qualification adjustment" (for pointers), and according to the Standard these are treated equally when it comes to overload resolution.
So, neither is better on the basis of differing conversions.
There is, however, a special rule in the Standard, section 13.3.3.2, that applies only if both candidates being compared take the parameter by reference.
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if ... S1 and S2 are reference bindings (8.5.3), and the types to which the references refer are the same type except for top-level cv-qualifiers, and the type to which the reference initialized by S2 refers is more cv-qualified than the type to which the reference initialized by S1 refers.
There's an identical rule for pointers.
Therefore the compiler will prefer
f(int*);
f(int&);
over
f(const int*);
f(const int&);
respectively, but there's no preference for f(int) vs f(const int) vs f(const int&), because lvalue-to-rvalue transformation and qualification adjustment are both considered "Exact Match".
Also relevant, from section 13.3.3.1.4:
When a parameter of reference type binds directly to an argument expression, the implicit conversion sequence is the identity conversion, unless the argument expression has a type that is a derived class of the parameter type, in which case the implicit conversion sequence is a derived-to-base Conversion.
The second call f(i) is also ambiguous because void f(const int &ri) indicates that ri is a reference to i and is constant, meaning the function promises not to modify the original i that is passed to it. The choice of whether to modify the passed argument is in the hands of the implementer of the function, not the client programmer who merely uses that function.
The reason the second call f(i) is ambiguous is that, to the compiler, both functions would be acceptable: both versions can be used for the same call, so const-ness alone can't be used to distinguish the overloads here. So in your example:
int i = 0;
f(i);
How would the compiler know which function you intended to invoke? The const qualifier is only relevant to the function definition.
See const function overloading for a more detailed explanation.

Why does this code call different template function in vs2005?

The code is:
#include <cstdlib>
#include <iostream>
using namespace std;

// compares two objects
template <typename T> void compare(const T&, const T&) {
    cout << "T" << endl;
}

// compares elements in two sequences
template <class U, class V> void compare(U, U, V) {
    cout << "UV" << endl;
}

// plain function to handle C-style character strings
void compare(const char*, const char*) {
    cout << "ordinary" << endl;
}

int main() {
    cout << "-------------------------char* --------------------------" << endl;
    char* c = "a";
    char* d = "b";
    compare(c, d);
    cout << "------------------------- char [2]---------------------------" << endl;
    char e[] = "a";
    char f[] = "b";
    compare(e, f);
    system("pause");
}
The result is:
-------------------------char* --------------------------
T
------------------------- char [2]-----------------------
ordinary
And my question is:
Why does compare(c,d) call compare(const T&, const T&) and compare(e,f) call the ordinary function, even though the arguments in both calls are char*s?
It appears that VS2005 may be erroneously treating the e and f variables as const char * types.
Consider the following code:
#include <iostream>
using namespace std;

template <typename T> void compare (const T&, const T&) {
    cout << "T: ";
}

template <class U, class V> void compare (U, U, V) {
    cout << "UV: ";
}

void compare (const char*, const char*) {
    cout << "ordinary: ";
}

int main (void) {
    char* c = "a";
    char* d = "b";
    compare (c,d);
    cout << "<- char *\n";
    char e[] = "a";
    char f[] = "b";
    compare (e,f);
    cout << "<- char []\n";
    const char g[] = "a";
    const char h[] = "b";
    compare (g,h);
    cout << "<- const char []\n";
    return 0;
}
which outputs:
T: <- char *
T: <- char []
ordinary: <- const char []
Section 13.3 Overload resolution of C++03 (section numbers appear to be unchanged in C++11, so the same comments apply there) specifies how to select which function is used, and I'll try to explain it in (relatively) simple terms, given that the standard is rather a dry read.
Basically, a list of candidate functions is built based on how the function is actually being called (as a member function of a class/object, a regular (unadorned) function call, a call via a pointer, and so on).
Then, out of those, a list of viable functions is extracted based on argument counts.
Then, from the viable functions, the best fit function is selected based on the idea of a minimal implicit conversion sequence (see 13.3.3 Best viable function of C++03).
In essence, there is a "cost" for selecting a function from the viable list that is set based on the implicit conversions required for each argument. The cost of selecting the function is the sum of the costs for each individual argument to that function, and the compiler will chose the function with the minimal cost.
If two functions are found with the same cost, the standard states that the compiler should treat it as an error.
So, if you have a function where an implicit conversion happens to one argument, it will be preferred over one where two arguments have to be converted in that same way.
The "cost" can be see in the table below in the Rank column. An exact match has less cost than promotion, which has less cost than conversion.
Rank          Conversion
----          ----------
Exact match   No conversions required
              Lvalue-to-rvalue conversion
              Array-to-pointer conversion
              Function-to-pointer conversion
              Qualification conversion

Promotion     Integral promotions
              Floating point promotions

Conversion    Integral conversions
              Floating point conversions
              Floating-integral conversions
              Pointer conversions
              Pointer-to-member conversions
              Boolean conversions
In places where the conversion cost is identical for functions F1 and F2 (such as in your case), F1 is considered better if:
F1 is a non-template function and F2 is a function template specialization.
However, that's not the whole story, since the template code and non-template code are all exact matches; hence you would expect to see the non-template function called in all cases rather than just the third.
That's covered further on in the standard: The answer lies in section 13.3.3.2 Ranking implicit conversion sequences. That section states that an identical rank would result in ambiguity except under certain conditions, one of which is:
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if (1) S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) ...
The conversion sequence for the template version is actually the identity conversion (the deduced reference parameter binds directly to the argument), which is a proper subsequence of the non-template version's sequence (a qualification conversion, plus an array-to-pointer conversion in the second call), and proper subsequences are deemed to have a lower cost.
Hence it prefers the template version in the first two cases. In the third case, the only conversion is the array-to-pointer conversion for the non-template version, and that is an Lvalue Transformation, which the comparison excludes; so neither sequence is a proper subsequence of the other, and it prefers the non-template version based on the rule I mentioned above, under the ranking table.

Global function template overloading and const parameters

If I compile (gcc 4.6.0) and run this code:
#include <iostream>

template <typename T> void F(/* const */ T& value) {
    std::cout << "T & " << value << std::endl;
}

template <typename T> void F(/* const */ T* value) {
    std::cout << "T * " << value << std::endl;
    F(*value);
}

int main(int argc, char* argv[]) {
    float f = 123.456;
    float* pf = &f;
    F(pf);
    return 0;
}
I get the following output:
T * 0x7fff7b2652c4
T & 123.456
If I uncomment the const keywords I get the following output:
T & 0x7fff3162c68c
I can change float* pf = &f; to const float* pf = &f; to get the original output again, that's not the issue.
What I'd like to understand is why, when compiling with the const modifiers, overload resolution considers const T& value a better match than const T* value for a non-const float*.
During overload resolution, overloads requiring no conversions beat overloads requiring some conversions, even if those conversions are trivial. Quoting the C++03 standard, [over.match.best] (§13.3.3/1):
Define ICSi(F) as follows:
— if F is a static member function, ICS1(F) is defined such that ICS1(F) is neither better nor worse than ICS1(G) for any function G, and, symmetrically, ICS1(G) is neither better nor worse than ICS1(F); otherwise,
— let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F. 13.3.3.1 defines the implicit conversion sequences and 13.3.3.2 defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
— for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
— F1 is a non-template function and F2 is a function template specialization, or, if not that,
— F1 and F2 are function template specializations, and the function template for F1 is more specialized than the template for F2 according to the partial ordering rules described in 14.5.5.2, or, if not that,
— the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the entity being initialized) is a better conversion sequence than the standard conversion sequence from the return type of F2 to the destination type.
When const is present, in order to call the overload taking a reference, no conversion is necessary: T is deduced to be float* and the parameter is float* const&. However, in order to call the overload taking a pointer, T is deduced as float, and the argument float* would need a qualification conversion to float const* for that overload to be viable. Consequently, the overload taking a reference wins.
Note, of course, that if pf were changed to be a float const*, the behavior would go back to the way you expected because the overload taking a pointer would no longer require a conversion.

Why is this ambiguity here?

Consider I have the following minimal code:
#include <boost/type_traits.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;

    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;
    t[1][1] = 5;
    return 0;
}
GNU C++ gives me the error:
test.cpp: In function 'int main(int, char**)':
test.cpp:16: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second:
test.cpp:9: note: candidate 1: typename boost::remove_extent<ptr_t>::type& TData<ptr_t>::operator[](size_t) [with ptr_t = float [100][100]]
test.cpp:16: note: candidate 2: operator[](float (*)[100], int) <built-in>
My questions are:
Why does GNU C++ give the error, while the Intel C++ compiler does not?
Why does changing operator[] to the following lead to compilation without errors?
value_type & operator [] ( int id ) { return data[id]; }
Links to the C++ Standard are appreciated.
As far as I can see, there are two conversion paths:
(1) int to size_t, then (2) operator[](size_t).
(1) operator ptr_t&(), then (2) int to size_t, then (3) the built-in operator[](size_t).
It's actually quite straightforward. For t[1], overload resolution has these candidates:
Candidate 1 (builtin: 13.6/13) (T being some arbitrary object type):
Parameter list: (T*, ptrdiff_t)
Candidate 2 (your operator)
Parameter list: (TData<float[100][100]>&, something unsigned)
The argument list is given by 13.3.1.2/6:
The set of candidate functions for overload resolution is the union of the member candidates, the non-member candidates, and the built-in candidates. The argument list contains all of the operands of the operator.
Argument list: (TData<float[100][100]>, int)
You see that the first argument matches the first parameter of Candidate 2 exactly. But it needs a user defined conversion for the first parameter of Candidate 1. So for the first parameter, the second candidate wins.
You also see that the outcome for the second position depends on what ptrdiff_t is. Let's make some assumptions and see what we get:
ptrdiff_t is int: The first candidate wins, because it has an exact match, while the second candidate requires an integral conversion.
ptrdiff_t is long: Neither candidate wins, because both require an integral conversion.
Now, 13.3.3/1 says
Let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F.
A viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then ... for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that ...
For our first assumption, we don't get an overall winner, because Candidate 2 wins for the first parameter, and Candidate 1 wins for the second parameter. I call it the criss-cross. For our second assumption, the Candidate 2 wins overall, because neither parameter had a worse conversion, but the first parameter had a better conversion.
For the first assumption, it does not matter that the integral conversion (int to unsigned) in the second parameter is less of an evil than the user defined conversion of the other candidate in the first parameter. In the criss-cross, rules are crude.
That last point might still confuse you, because of all the fuss around it, so let's make an example:
void f(int, int) { }
void f(long, char) { }
int main() { f(0, 'a'); }
This gives you the same confusing GCC warning (which, I remember, was actually confusing the hell out of me when I first received it some years ago), because 0 converts to long worse than 'a' to int - yet you get an ambiguity, because you are in a criss-cross situation.
With the expression:
t[1][1] = 5;
The compiler must focus on the left-hand side to determine what goes there, so the = 5; is ignored until the lhs is resolved. That leaves us with the expression t[1][1], which represents two operations, the second operating on the result of the first, so the compiler need only consider the first part of the expression: t[1]. The actual call is of the form (TData&)[(int)].
The call does not exactly match any function, as operator[] for TData is defined as taking a size_t argument, so to be able to use it the compiler would have to convert 1 from int to size_t with an implicit conversion. That is the first choice. The other possible path is applying the user-defined conversion to convert TData<float[100][100]> into float[100][100].
The int to size_t conversion is an integral conversion and is ranked as Conversion in Table 9 of the standard, as is the user-defined conversion from TData<float[100][100]> to float[100][100] according to §13.3.3.1.2/4. The conversion from float [100][100]& to float (*)[100] is ranked as Exact Match in Table 9. The compiler is not allowed to choose between those two conversion sequences.
Q1: Not all compilers adhere to the standard in the same way. It is quite common to find out that in some specific cases a compiler will perform differently than the others. In this case, the g++ implementors decided to whine about the standard not allowing the compiler to choose, while the Intel implementors probably just silently applied their preferred conversion.
Q2: When you change the signature of the user defined operator[], the argument matches exactly the passed in type. t[1] is a perfect match for t.operator[](1) with no conversions whatsoever, so the compiler must follow that path.
I don't know the exact answer, but...
Because of this operator:
operator ptr_t & () { return data; }
there already exists a built-in [] operator (array subscripting) which accepts size_t as an index. So we have two [] operators, the built-in one and the one you defined. Both accept size_t, so this is probably considered an illegal overload.
EDIT: this should work as you intended:
template<typename ptr_t>
struct TData
{
    ptr_t data;
    operator ptr_t & () { return data; }
};
It seems to me that with
t[1][1] = 5;
the compiler has to choose between:
value_type & operator [] ( size_t id ) { return data[id]; }
which would match if the int literal were to be converted to size_t, or
operator ptr_t & () { return data; }
followed by normal array indexing, in which case the type of the index matches exactly.
As to the error, it seems GCC as a compiler extension would like to choose the first overload for you, and you are compiling with the -pedantic and/or -Werror flag which forces it to stick to the word of the standard.
(I'm not in a -pedantic mood, so no quotes from the standard, especially on this topic.)
I have tried to show the two candidates for the expression t[1][1]. These are both of equal RANK (CONVERSION); hence the ambiguity.
I think the catch here is that the built-in [] operator as per 13.6/13 is defined as
T& operator[](T*, ptrdiff_t);
On my system ptrdiff_t is defined as 'int' (does that explain x64 behavior?)
#include <boost/type_traits.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;

    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

typedef float (&ATYPE) [100][100];

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;

    t[size_t(1)][size_t(1)] = 5; // note the cast. This works now. No ambiguity as operator[] is preferred over the built-in operator
    t[1][1] = 5; // error, as per the logic given below for Candidate 1 and Candidate 2

    // Candidate 1 (CONVERSION rank)
    // User defined conversion from 'TData' to float array
    (t.operator[](1))[1] = 5;

    // Candidate 2 (CONVERSION rank)
    // User defined conversion from 'TData' to ATYPE
    (t.operator ATYPE())[1][1] = 6;

    return 0;
}
EDIT:
Here is what I think:
For candidate 1 (operator []) the conversion sequence S1 is
User defined conversion - Standard Conversion (int to size_t)
For candidate 2, the conversion sequence S2 is
User defined conversion -> int to ptrdiff_t (for first argument) -> int to ptrdiff_t (for second argument)
The conversion sequence S1 is a subset of S2 and is supposed to be better. But here is the catch...
The quotes below from the Standard should help.
§13.3.3.2/3 states: "Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if — S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that..."
§13.3.3.2 states: "User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor and if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2."
Here the first part of the condition, "if they contain the same user-defined conversion function or constructor", does not hold. So even if the second part, "if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2", were to hold, neither S1 nor S2 is preferred over the other.
That's why gcc produces the phantom error message "ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second".
This explains the ambiguity quite well, IMHO.
Overload resolution is a headache. But since you stumbled on a fix (eliminate conversion of the index operand to operator[]) which is too specific to the example (literals are type int but most variables you'll be using aren't), maybe you can generalize it:
template< typename IT>
typename boost::enable_if< typename boost::is_integral< IT >::type, value_type & >::type
operator [] ( IT id ) { return data[id]; }
Unfortunately I can't test this, because GCC 4.2.1 and 4.5 accept your example without complaint under -pedantic, which really raises the question of whether it's a compiler bug or not.
Also, once I eliminated the Boost dependency, it passed Comeau.

What are the rules for choosing from overloaded template functions?

Given the code below, why is the foo(T*) function selected ?
If I remove it (the foo(T*)) the code still compiles and works correctly, but G++ v4.4.0 (and probably other compilers as well) will generate two foo() functions: one for char[4] and one for char[7].
#include <iostream>
using namespace std;

template< typename T >
void foo( const T& )
{
    cout << "foo(const T&)" << endl;
}

template< typename T >
void foo( T* )
{
    cout << "foo(T*)" << endl;
}

int main()
{
    foo( "bar" );
    foo( "foobar" );
    return 0;
}
Formally, when comparing conversion sequences, lvalue transformations are ignored. Conversions are grouped into several categories, like qualification adjustment (T* -> T const*), lvalue transformation (int[N] -> int*, void() -> void(*)()), and others.
The only difference between your two candidates is an lvalue transformation. String literals are arrays that convert to pointers. The first candidate accepts the array by reference, and thus won't need an lvalue transformation. The second candidate requires an lvalue transformation.
So, if there are two candidates that are both function template specializations and are equally viable looking only at the conversions, then the rule is that the more specialized one is chosen by doing partial ordering of the two.
Let's compare the two by looking at their signature of their function parameter list
void(T const&);
void(T*);
If we choose some unique type Q for the first parameter list and try to match it against the second parameter list, we are matching Q against T*. This will fail, since Q is not a pointer. Thus, the first is not at least as specialized as the second.
If we do the other way around, we match Q* against T const&. The reference is dropped and toplevel qualifiers are ignored, and the remaining T becomes Q*. This is an exact match for the purpose of partial ordering, and thus deduction of the transformed parameter list of the second against the first candidate succeeds. Since the other direction (against the second) didn't succeed, the second candidate is more specialized than the first - and in consequence, overload resolution will prefer the second, if there would otherwise be an ambiguity.
At 13.3.3.2/3:
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if [...] S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that [...]
Then 13.3.3/1
let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F. 13.3.3.1 defines the implicit conversion sequences and 13.3.3.2 defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then [...]
F1 and F2 are function template specializations, and the function template for F1 is more specialized than the template for F2 according to the partial ordering rules described in 14.5.5.2, or, if not that, [...]
Finally, here is the table of implicit conversions that may participate in a standard conversion sequence, at 13.3.3.1.1/3:
(Image: table of conversion sequences, originally hosted at http://img259.imageshack.us/img259/851/convs.png)
The full answer is quite technical.
First, string literals have char const[N] type.
Then there is an implicit conversion from char const[N] to char const*.
So both your template functions match, one using reference binding, one using the implicit conversion. When each is alone, both of your template functions are able to handle the calls, but when both are present, we have to explain why the second foo (instantiated with T = char const) is a better match than the first (instantiated with T = char[N]). If you look at the overloading rules (as given by litb), the choice between
void foo(char const (&x)[4]);
and
void foo(char const* x);
is ambiguous (the rules are quite complicated, but you can check by writing non-template functions with such signatures and seeing that the compiler complains). In that case, the choice is made for the second one because it is more specialized (again, the rules for this partial ordering are complicated, but in this case it is because you can pass a char const[N] to a char const* but not a char const* to a char const[N], in the same way that void bar(char const*) is more specialized than void bar(char*) because you can pass a char* to a char const* but not vice versa).
Based on overload resolution rules (Appendix B of C++ Templates: The Complete Guide has a good overview), string literals (const char []) are closer to T* than T&, because the compiler makes no distinction between char[] and char*, so T* is the closest match (const T* would be an exact match).
In fact, if you could add:
template<typename T>
void foo(const T a[])
(which you can't), your compiler would tell you that this function is a redefinition of:
template<typename T>
void foo(const T* a)
Cause " " is a char*, which fits perfectly to foo(T*) function. When you remove this, the compiler will try to make it work with foo(T&), which requires you to pass reference to char array that contains the string.
Compiler can't generate one function that would receive reference to char, as you are passing whole array, so it has to dereference it.