On the following code clang and EDG diagnose an ambiguous function call, while gcc and Visual Studio accept the code.
struct s
{
    typedef void(*F)();
    operator F();       //#1
    operator F() const; //#2
};

void test(s& p)
{
    p(); //ambiguous function call with clang/EDG; gcc/VS call #1
}
According to the C++ standard draft (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3797.pdf), section 13.3.1.1.2 paragraph 2 says:
a surrogate call function with the unique name call-function and having the form R call-function ( conversion-type-id F, P1 a1, ... ,Pn an) { return F (a1,... ,an); } is also considered as a candidate function.
In the code above, that seems to mean that two surrogate call function definitions (one for each conversion function) are considered, but both call functions have identical signatures (hence the ambiguity), since the cv-qualifiers of the conversion function do not seem to be taken into account in the call function's signature.
I would have expected #1 to be called as with gcc and Visual Studio. So if clang/EDG are instead right in rejecting the above code, could someone please shed some light on the reason as to why the standard stipulates that there should be an ambiguity in this case and which code benefits from that property of surrogate call functions? Who is right: clang(3.5)/EDG(310) or gcc (4.8.2)/VS(2013)?
Clang and EDG are right.
Here's how this works. The standard says (same source as your quote):
In addition, for each non-explicit conversion function declared in T of the form
operator conversion-type-id () attribute-specifier-seq[opt] cv-qualifier ;
where [various conditions fulfilled in your example], a surrogate call function with the unique name call-function and having the form
R call-function ( conversion-type-id F, P1 a1, ... ,Pn an) { return F (a1, ... ,an); }
is also considered as a candidate function. [do the same for inherited conversions]^128
And the footnote points out that this may yield multiple surrogates with undistinguishable signatures, and if those aren't displaced by clearly better candidates, the call is ambiguous.
Following this scheme, your type has two conversion operators, yielding two surrogates:
// for operator F();
void call-function-1(void (*F)()) { return F(); }
// for operator F() const;
void call-function-2(void (*F)()) { return F(); }
Your example does not contain any other candidates.
The compiler then does overload resolution. Because the signatures of the two call functions are identical, it will use the same conversion sequence for both - in particular, it will use the non-const overload of the conversion function in both cases! So the two functions cannot be distinguished, and the call is ambiguous.
The key to understanding this is that the conversion that is actually used in passing the object to the surrogate doesn't have to use the conversion function the surrogate was generated for!
I can see two ways GCC and MSVC may arrive at the wrong answer here.
Option 1 is that they see the two surrogates with identical signatures, and somehow meld them into one.
Option 2, more likely, is that they thought, "hey, we don't need to do the expensive search for a conversion for the object here, we already know that it will use the conversion function that the surrogate was generated for". It seems like a sound optimization, except in this edge case, where that assumption is wrong. Anyway, by tying the conversion to the source conversion function, one of the surrogates uses identity-user-identity as the conversion sequence for the object, while the other uses const-user-identity, making it worse.
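If clang and EDG are right and the call really is ambiguous, a caller can sidestep the surrogate machinery by selecting the conversion explicitly. A minimal sketch (the helper call_explicitly is made up; s and F are the names from the question):
void call_explicitly(s& p)
{
    p.operator s::F()();          // explicitly invokes #1, then calls the returned function pointer
    static_cast<void(*)()>(p)();  // also selects #1: for a non-const object the non-const
                                  // conversion function is the better match
}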
Related
Recently, I decided to write a class storing a variant of reference_wrapper<const vector> and vector, so I can choose between owning the value and holding only a reference to it. That is, std::variant<vector<string>, reference_wrapper<const vector<string>>>.
The interesting part is what the variant stores depending on initialization.
I did a small investigation, and it turned out that in all cases the vector<string> alternative wins, except when passing via std::cref. The same applies to functions (somewhat expected, because constructors behave like functions in this respect):
void f(vector<string>); // #1
void f(reference_wrapper<const vector<string>>); // #2
vector<string> data;
const vector<string>& get_data();
f(data); // #1
f(std::cref(data)); // #2
f(get_data()); // #1
f(std::cref(get_data())); // #2
The question is why vector<string> has priority here. I looked at the Best viable function section here, but it didn't make much sense. It seems that
4) or, if not that, F1 is a non-template function while F2 is a template specialization
part chooses vector<string> over reference_wrapper<vector<string>> (because the reference_wrapper constructor is templated), but I'm not sure, because I can't fully understand whether they are considered equal under the rule
1) There is at least one argument of F1 whose implicit conversion is better than the corresponding implicit conversion for that argument of F2
Can someone please describe all the implicit conversions applied in each case and show the true reason why one overload is preferred over another? To me, they are as follows:
f(data) = f(vector<string>&) -> (*exact match* implicit conversion) -> f(vector<string>)
f(data) = f(vector<string>&) -> (*conversion* implicit conversion) -> f(reference_wrapper<vector<string>>)
Did I miss something?
Another question connected to this topic: the Ranking of implicit conversion sequences section (again, here) leaves a question: is T(const T&) considered an Exact match (user-defined conversion of a class type to the same class) or a Conversion?
First, although std::reference_wrapper is part of the standard library, it is treated as a user-defined type.
For example, an implicit conversion from std::vector& to const std::vector& is always preferred over an implicit conversion from std::vector& to std::reference_wrapper<vector>. That is because (as per the standard) the former is a standard conversion, but the latter is a user-defined conversion. The first one is called a standard conversion because it only adds const to your type, but the second is treated as converting one type to a totally different type.
Check this code and see cppreference.com.
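A minimal sketch of that preference (the overloads h are hypothetical, only for illustration):
#include <functional>
#include <vector>

void h(const std::vector<int>&) {}                         // viable via a standard conversion sequence (direct reference binding)
void h(std::reference_wrapper<const std::vector<int>>) {}  // viable only via a user-defined conversion

int main()
{
    std::vector<int> v;
    h(v); // calls the const std::vector<int>& overload: a standard conversion
          // sequence always beats a user-defined conversion sequence
}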
Second, I'll try to suggest a good alternative. I see you want either to store a reference to the vector, or to move it (or copy it as cheaply as possible, or, you could say, directly initialize it) into your class if the data is not already stored safely in some variable. Maybe you could consider using move semantics. You can play with the code here.
using TVar = std::variant<reference_wrapper<const vector<string>>, vector<string>>;

class Config {
private:
    TVar data;
public:
    const vector<string>& get_data() const {
        if (data.index() == 1)
            return get<1>(data);
        else
            return get<0>(data);
    }
    Config(const vector<string>& it) : data(cref(it)) {}
    Config(vector<string>&& it) : data(move(it)) {}
};
Here we have two constructors.
One takes a reference to a "stored value" (more precisely, an lvalue) and wraps it in cref, so that the reference_wrapper alternative of the variant is the best overload.
The other does the magic: it binds to values that are either written directly (aka prvalues) or passed through the magic std::move function (aka xvalues). If you have never seen this, please see this respectable Q&A: What is move semantics?
catch(...) :), that is it. Also notice that you don't need std::monostate, as that is only needed to make the variant default constructible (with no arguments). You can make your class default constructible like this:
Config(vector<string>&& it = {}):data(move(it)){}
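Hypothetical usage of the sketch above (assuming the same unqualified names as in the snippet, i.e. a using namespace std):
vector<string> owned = {"a", "b"};
Config c1(owned);                            // lvalue: the variant stores a reference_wrapper (no copy)
Config c2(vector<string>{"x", "y"});         // prvalue: moved into the variant
Config c3(move(owned));                      // xvalue: moved into the variant
const vector<string>& view = c2.get_data();  // uniform access regardless of which alternative is stored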
There is absolutely no reason to store reference_wrapper whatsoever. Just use a pointer like any sane programmer. reference_wrapper is used to properly trigger std::invoke and associated classes/functions like thread and bind.
The following code is supposedly illegal in C++14 but legal in C++17:
#include <functional>

int main()
{
    int x = 1729;
    std::function<void (int&)> f(
        [](int& r) { return ++r; });
    f(x);
}
Don't bother testing it; you'll get inconsistent results, making it difficult to suss out whether it's a bug or intentional behavior. However, comparing two drafts (N4140 vs N4527, both can be found on github.com/cplusplus/draft), there's one significant difference in [func.wrap.func.inv]. Paragraph 2:
Returns: Nothing if R is void, otherwise the return value of INVOKE(f, std::forward<ArgTypes>(args)..., R).
The above was removed between drafts. The implication is that the return value of the lambda is now silently discarded. This seems like a misfeature. Can anyone explain the reasoning?
There was a ridiculous defect in the standard about std::function<void(Args...)>. By the wording of the standard, no (non-trivial)1 use of std::function<void(Args...)> was legal, because nothing can be "implicitly converted to" void (not even void).
void foo() {} std::function<void()> f = foo; was not legal in C++14. Oops.
Some compilers took the bad wording that made std::function<void(Args...)> completely useless, and applied the logic only to passed-in callables where the return value was not void. Then they concluded it was illegal to pass a function returning int to std::function<void(Args...)> (or any other non-void type). They did not take it to the logical end and ban functions returning void as well (the std::function requirements make no special case for signatures that match exactly: the same logic applies.)
Other compilers just ignored the bad wording in the void return type case.
The defect was basically that the return type of the invocation expression has to be implicitly convertible-to the return type of the std::function's signature (see link above for more details). And under the standard, void cannot be implicitly converted to void2.
So the defect was resolved. std::function<void(Args...)> now accepts anything that can be invoked with Args..., and discards the return value, as many existing compilers implemented. I presume this is because (A) the restriction was not ever intended by the language designers, or (B) having a way for a std::function that discards return values was desired.
std::function has never required exact match of arguments or return values, just compatibility. If the incoming arguments can implicitly convert-from the signature arguments, and the return type can implicitly convert-to the return type, it was happy.
And a function of type int(int&) is, under many intuitive definitions, compatible with the signature void(int&), in that you can run it in a "void context".
1 Basically, anything that makes operator() legal to call was not allowed. You can create it, you can destroy it, you can test it (and know it is empty). You cannot give it a function, even one that matches its signature exactly, or a function object or lambda. Ridiculous.
2 For void to be implicitly converted to void under the standard, it requires that the statement void x = blah;, where blah is an expression of type void, be valid; that statement is not valid, as you cannot create a variable of type void.
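To make the resolved behaviour concrete, a minimal sketch (the function bump is made up; assumes a C++17 standard library):
#include <cassert>
#include <functional>

int bump(int& r) { return ++r; }         // returns int

int main()
{
    std::function<void(int&)> f = bump;  // OK in C++17: the int return value is simply discarded
    int x = 1729;
    f(x);                                // behaves like a call in a "void context": (void)bump(x)
    assert(x == 1730);
}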
I have a question regarding the c++ function matching for parameters of types T and const T&.
Let's say I have the following two functions:
void f(int i) {}
void f(const int &ri) {}
If I call f with an argument of type const int then this call is of course ambiguous. But why is a call of f with an argument of type int also ambiguous? Wouldn't be the first version of f be an exact match and the second one a worse match, because the int argument must be converted to a const int?
const int ci = 0;
int i = 0;
f(ci); // of course ambiguous
f(i); // why also ambiguous?
I know that such kind of overloading doesn't make much sense, because calls of f are almost always ambiguous unless the parameter type T doesn't have an accessible copy constructor. But I'm just studying the rules of function matching.
EDIT: To make my question more clear. If I have the two functions:
void f(int *pi) {}
void f(const int *pi) {}
Then the following call is not ambiguous:
int i = 0;
f(&i); // not ambiguous, first version f(int*) chosen
Although both versions of f could be called with &i, the first version is chosen, because the second version of f would include a qualification conversion to const. That is, the first version is a "better match". But with the two functions:
void f(int i) {} and
void f(const int &ri) {}
This additional conversion to const seems to be ignored for some reason. Again both versions of f could be called with an int. But again, the second version of f would require a conversion to const which would make it a worse match than the first version f(int).
int i = 1;
// f(int) requires no conversion
// f(const int &) does require a const conversion
// so why are both versions treated as "equally good" matches?
// isnt this analogous to the f(int*) and f(const int*) example?
f(i); // why ambiguous this time?
One call involves an "lvalue-to-rvalue conversion", the other requires an identity conversion (for references) or a "qualification adjustment" (for pointers), and according to the Standard these are treated equally when it comes to overload resolution.
So, neither is better on the basis of differing conversions.
There is, however, a special rule in the Standard, section 13.3.3.2, that applies only if both candidates being compared take the parameter by reference.
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if ... S1 and S2 are reference bindings (8.5.3), and the types to which the references refer are the same type except for top-level cv-qualifiers, and the type to which the reference initialized by S2 refers is more cv-qualified than the type to which the reference initialized by S1 refers.
There's an identical rule for pointers.
Therefore the compiler will prefer
f(int*);
f(int&);
over
f(const int*);
f(const int&);
respectively, but there's no preference for f(int) vs f(const int) vs f(const int&), because lvalue-to-rvalue transformation and qualification adjustment are both considered "Exact Match".
Also relevant, from section 13.3.3.1.4:
When a parameter of reference type binds directly to an argument expression, the implicit conversion sequence is the identity conversion, unless the argument expression has a type that is a derived class of the parameter type, in which case the implicit conversion sequence is a derived-to-base Conversion.
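Putting the two situations side by side in one sketch (the overloads g and f below are hypothetical, just to illustrate the rules quoted above):
void g(int&) {}        // #A
void g(const int&) {}  // #B: both parameters bind by reference, so the 13.3.3.2 tie-breaker applies

void f(int) {}         // #1
void f(const int&) {}  // #2: only one parameter binds by reference, so the tie-breaker does not apply

int main()
{
    int i = 0;
    g(i);     // OK: #A is preferred because it binds to the less cv-qualified type
    // f(i);  // error if uncommented: ambiguous, both conversion sequences rank as Exact Match
}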
The second call f(i) is also ambiguous because void f(const int &ri) indicates that ri is a reference to i and is constant, meaning the function promises not to modify the original i that is passed to it.
The choice of whether to modify the passed argument is in the hands of the implementer of the function, not the client programmer who merely uses that function.
The reason the second call f(i) is ambiguous is that, to the compiler, both functions would be acceptable. const-ness of this kind can't be used to overload functions, because different const versions of the function could be used for a single call. So in your example:
int i = 0;
f(i);
How would the compiler know which function you intended to invoke? The const qualifier is only relevant to the function definition.
See const function overloading for a more detailed explanation.
Consider the following simple example
#include <iostream>

struct C
{
    template <typename T> operator T () {return 0.5;}
    operator int () {return 1;}
    operator bool () {return false;}
};

int main ()
{
    C c;
    double x = c;
    std::cout << x << std::endl;
}
When compiled with Clang, it gives the following error
test.cpp:11:12: error: conversion from 'C' to 'double' is ambiguous
double x = c;
^ ~
test.cpp:4:5: note: candidate function
operator int () {return 1;}
^
test.cpp:5:5: note: candidate function
operator bool () {return false;}
^
test.cpp:3:27: note: candidate function [with T = double]
template <typename T> operator T () {return 0.5;}
^
1 error generated.
Other compilers generate similar errors, e.g., GCC and Intel iclc
If I remove operator int and operator bool, it compiles fine and works as expected. If I remove only one of them, that is, keep the template operator and say operator int, then the non-template version is always chosen.
My understanding is that the non-template version is preferred only when the template and non-template overloads are equally good, in the sense that both are a perfect match or both require the same conversion sequence. However, in this case the compiler does not appear to see the operator template as a perfect match. And when both the bool and int overloads are present, it naturally considers them ambiguous.
In summary, my question is: why is the operator template not considered a perfect match in this case?
Let's break this down into two different problems:
1. Why does this generate a compiler error?
struct C
{
    operator bool () {return false;}
    operator int () {return 1;}
};
As both int and bool can be implicitly converted to double, the compiler can not know which function it should use. There are two functions which it could use and neither one takes precedence over the other one.
2. Why isn't the templated version a perfect match?
struct C
{
    template <typename T> operator T () {return 0.5;}
    operator int () {return 1;}
};
Why is operator int() called when requesting a double?
The non-template function is called because a non-template function takes precedence in overload resolution. (Overloading function templates)
EDIT:
I was wrong! As Yan Zhou mentioned in his comment, and as it is stated in the link I provided, a perfect match in the templated function takes precedence over the non-templated function.
I tested your code (compiled with g++ 4.7.2), and it worked as expected: it returned 0.5, in other words, the templated function was used!
EDIT2:
I now tried with clang and I can reproduce the behaviour you described. As it works correctly in gcc, this seems to be a bug in clang.
This is interesting. There are two ways to read a critical part of section 13.3.3. The original example should definitely call the function template, but the version where one of the non-templates is removed might be argued to be ambiguous.
13.3.3:
A viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICS_i(F1) is not a worse conversion sequence than ICS_i(F2), and then
for some argument j, ICS_j(F1) is a better conversion sequence than ICS_j(F2), or, if not that,
the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the entity being initialized) is a better conversion sequence than the standard conversion sequence from the return type of F2 to the destination type, or, if not that,
F1 is a non-template function and F2 is a function template specialization, or, if not that,
F1 and F2 are function template specializations, and the function template for F1 is more specialized than the template for F2 according to the partial ordering rules described in 14.5.6.2.
If there is exactly one viable function that is a better function than all other viable functions, then it is the one selected by overload resolution; otherwise the call is ill-formed.
In the example, clang correctly identifies the set of three viable candidate functions:
C::operator int()
C::operator bool()
C::operator double<double>()
The third is a function template specialization. (I don't think the syntax above is legal, but you get the idea: at this point of overload resolution it's not treated as a template, but as a specialization with a definite function type.)
The only Implicit Conversion Sequence on arguments here (ICS1) is the exact match "lvalue C" to "C&" on the implicit parameter, so that won't make a difference.
This example is exactly the situation described in the second bullet, so the function returning double is clearly better than the other two.
Here's where it gets weird: By a very literal reading, operator int is also better than the template specialization, because of the third bullet. "Wait a minute, shouldn't 'better than' be antisymmetric? How can you say F1 is better than F2 AND F2 is better than F1?" Unfortunately, the Standard doesn't explicitly say anything of the sort. "Doesn't the second bullet take priority over the third bullet because of the 'if not that' phrase?" Yes, for constant F1 and F2. But the Standard doesn't say that satisfying the second bullet for (F1,F2) makes the third bullet for (F2,F1) not applicable.
Of course, since operator int is not better than operator bool and vice versa, there is still "exactly one viable function that is a better function than all other viable functions".
I'm not exactly endorsing this weird reading, except maybe to report it as a Standard defect. Going with that would have bizarre consequences (like removing an overload which was not the best from this example changes a program from well-formed to ambiguous!). I think the intent is for the second bullet to be considered both ways before the third bullet is considered at all.
Which would mean the function template should be selected by overload resolution, and this is a clang bug.
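Whichever reading is correct, a caller can sidestep the question entirely by naming the conversion explicitly; a minimal sketch (same struct C as in the question):
#include <iostream>

struct C
{
    template <typename T> operator T () { return 0.5; }
    operator int () { return 1; }
    operator bool () { return false; }
};

int main()
{
    C c;
    double x = c.operator double();  // explicitly selects the template specialization: only
                                     // "operator double" is found by name lookup, so there is
                                     // no overload resolution among the three conversion functions
    std::cout << x << std::endl;     // prints 0.5
}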
Can you tell me why the following code gives me the error call of overloaded "C(int)" is ambiguous?
I would think that since C(char x) is private, only the C(float) ctor is visible from outside and that should be called by converting int to float.
But that's not the case.
class C
{
    C(char x)
    {
    }
public:
    C(float t)
    {
    }
};

int main()
{
    C p(0);
}
This is discussed in "Effective C++" by Scott Meyers. The reason this is ambiguous is that they wanted to ensure that merely changing the visibility of a member wouldn't change the meaning of already-existing code elsewhere.
Otherwise, suppose your C class was in a header somewhere. If you had a private C(int) member, the code you present would call C(float). If, for some reason, the C(int) member was made public, the old code would suddenly call that member, even though neither the old code, nor the function it called had changed.
EDIT: More reasons:
Even worse, suppose you had the following 2 functions:
C A::foo()
{
    return C(1.0);
}

C B::bar()
{
    return C(1.0);
}
These two functions could call different functions depending on whether either foo or bar was declared as a friend of C, or whether A or B inherits from it. Having identical code call different functions is scary.
(That's probably not as well put as Scott Meyers's discussion, but that's the idea.)
0 is of type int. Because it can be implicitly converted to either float or char equally well, the call is ambiguous. Visibility is irrelevant for these purposes.
Either put 0.0, 0., or 0.0f, or get rid of the C(char) constructor entirely.
Edit: Relevant portion of the standard, section 13.3:
3) [...] But, once the candidate functions and argument lists have been identified, the selection of the best function is the same in all cases:
First, a subset of the candidate functions—those that have the proper number of arguments and meet certain other conditions—is selected to form a set of viable functions (13.3.2).
Then the best viable function is selected based on the implicit conversion sequences (13.3.3.1) needed to match each argument to the corresponding parameter of each viable function.
4) If a best viable function exists and is unique, overload resolution succeeds and produces it as the result. Otherwise overload resolution fails and the invocation is ill-formed. When overload resolution succeeds, and the best viable function is not accessible (clause 11) in the context in which it is used, the program is ill-formed.
Note that visibility is not part of the selection process.
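A small sketch of that ordering (the class D is hypothetical): overload resolution first picks the best viable function, and only then is access checked, so a private best match makes the program ill-formed rather than falling back to a public constructor:
class D
{
    D(int) {}       // private: would be the best (exact) match for D q(0)
public:
    D(float) {}
};

int main()
{
    // D q(0);      // ill-formed if uncommented: resolution selects the private D(int),
                    // and the access check then fails; it does not fall back to D(float)
    D q(0.0f);      // OK: unambiguously calls D(float)
}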
I don't think that:
C p(0);
is being converted to:
C(float t)
You probably need to do:
C p(0.0f);