I'm attempting to call a function in the ROOT plotting package that accepts three variables. The relevant section in my code is:
int xwidth=0,ywidth=0;
Bool_t useangle=0;
vIt->GetBoundingBox(xwidth,ywidth,useangle);
where vIt is an iterator to an object with GetBoundingBox as a class member function. (Bool_t is just a typedef that ROOT uses).
Now, when I compile, I get the following error from g++:
error: no matching function for call to ‘TText::GetBoundingBox(int&, int&, Bool_t&)’
/home/bwhelan/Programs/MODIFIED//External/root/5.30.00/include/root/TText.h:57: note: candidates are: virtual void TText::GetBoundingBox(UInt_t&, UInt_t&, Bool_t)
My question is why is useangle being passed by reference here, instead of by value? I simply cannot figure it out.
In C++, during overload resolution, a set of viable overloads is selected, and the candidate that requires the fewest and weakest
conversions (does it require changing the const-ness? is a promotion from integer to floating point needed?)
is chosen. If multiple candidates match equally well, the call is ambiguous and you get an error
(e.g.: int foo(int &x, int y); int foo(int x, int &y); ... int a,b; foo(a,b); is ambiguous).
However, in your case no valid conversion sequence can be found at all, because there exists no valid conversion from
int& to unsigned int& (side note: there is a conversion from int to unsigned int and vice versa):
references to unrelated types are simply not compatible.
About the error message: the compiler uses the weakest allowed binding to filter out the set of viable functions; for an int lvalue, that is int&.
But because no viable function is found, an error message is emitted. The message is based not
on your source code, but on the data the compiler had for the search, which is int&. Nevertheless, it
correctly proposes a viable alternative that really exists.
So this is more a compiler-diagnostic-quality issue than a C++ correctness issue.
From the standard, here is the table of conversions. The fewer conversions required to make a function call
valid, the better the match:
Conversion                         | Category
-----------------------------------+----------------------------
No conversions required | Identity
-----------------------------------+----------------------------
Lvalue-to-rvalue conversion | Lvalue transformation
Array-to-pointer conversion |
Function-to-pointer conversion |
-----------------------------------+----------------------------
Qualification conversions | Qualification adjustment
-----------------------------------+----------------------------
Integral promotions | Promotion
Floating point promotion |
-----------------------------------+----------------------------
Integral conversions | Conversion
Floating point conversions |
Floating-integral conversions |
Pointer conversions |
Pointer to member conversions |
Boolean conversions |
-----------------------------------+----------------------------
The problem isn't that the bool is passed by reference, but that your line
int xwidth=0,ywidth=0;
should declare the variables as UInt_t:
UInt_t xwidth=0u,ywidth=0u;
The compiler doesn't know how you want to pass the variables to a function with an unknown overload, so it just assumes you meant by reference.
The signature of the function has the following arguments
UInt_t&, UInt_t&, Bool_t
and you are passing
int&, int&, Bool_t&
Either convert your int to UInt_t before you call the method or directly declare them as UInt_t.
It is not.
Compiler messages are more or less useful. As it is, since it did not find a method that it can call with the arguments you supplied, the compiler tries to synthesize, from the arguments you gave, a method signature that could have worked.
This is, unfortunately, ultimately doomed to fail, as there are just so many possible variations, but then gcc's messages have never been too great.
Clang took another approach, which I happen to prefer. Instead of trying to imagine what the function you wanted to call looks like and then listing the candidates and leaving you to spot the differences, it tells you why each candidate was discarded.
void func(unsigned&, unsigned&);

int something() {
    int a = 0, b = 0;
    func(a, b);
    return a + b;
}
Yields the following error message:
/tmp/webcompile/_3246_0.cc:5:3: error: no matching function for call to 'func'
func(a, b);
^~~~
/tmp/webcompile/_3246_0.cc:1:6: note: candidate function not viable:
no known conversion from 'int' to 'unsigned int &' for 1st argument;
void func(unsigned&, unsigned&);
Which I find much more useful. Patching this by turning a into an unsigned and leaving b as is, we get:
/tmp/webcompile/_3710_0.cc:1:6: note: candidate function not viable:
no known conversion from 'int' to 'unsigned int &' for 2nd argument;
void func(unsigned&, unsigned&);
And this way we advance one argument at a time until we "fixed" the call to our liking.
Related
Just found the reason for an insidious crash to be an unchecked wild cast by the compiler, disregarding the types. Is this intended behaviour or a compiler bug?
Problem: When a type definition is involved, it is possible to make an implicit reinterpret cast, undermining the type system.
#include <iostream>

template<class A, class B>
inline bool
isSameObject (A const& a, B const& b)
{
  return static_cast<const void*> (&a)
      == static_cast<const void*> (&b);
}

class Wau
{
  int i = -1;
};

class Miau
{
public:
  unsigned int u = 1;
};

int
main (int, char**)
{
  Wau wau;
  using ID = Miau &;
  ID wuff = ID(wau); // <<---disaster
  std::cout << "Miau=" << wuff.u
            << " ref to same object: " << std::boolalpha << isSameObject (wau, wuff)
            << std::endl;
  return 0;
}
I was shocked to find out that gcc-4.9, gcc-6.3 and clang-3.8 accept this code without any error and produce the following output:
Miau=4294967295 ref to same object: true
Please note I use the type constructor syntax ID(wau). I would expect such behaviour from a C-style cast, i.e. (ID)wau. Only when using the new-style curly-braces syntax ID{wau} do we get the expected error...
~$ g++ -std=c++11 -o aua woot.cpp
woot.cpp: In function ‘int main(int, char**)’:
woot.cpp:31:21: error: no matching function for call to ‘Miau::Miau(<brace-enclosed initializer list>)’
ID wuff = ID{wau};
^
woot.cpp:10:7: note: candidate: constexpr Miau::Miau()
class Miau
^~~~
woot.cpp:10:7: note: candidate expects 0 arguments, 1 provided
woot.cpp:10:7: note: candidate: constexpr Miau::Miau(const Miau&)
woot.cpp:10:7: note: no known conversion for argument 1 from ‘Wau’ to ‘const Miau&’
woot.cpp:10:7: note: candidate: constexpr Miau::Miau(Miau&&)
woot.cpp:10:7: note: no known conversion for argument 1 from ‘Wau’ to ‘Miau&&’
Unfortunately, the curly-braces syntax is frequently a no-go in template-heavy code, due to the std::initializer_list fiasco. So for me this is a serious concern, since the protection by the type system effectively breaks down here.
Can someone explain the reasoning behind this behaviour?
Is it some kind of backwards compatibility (again, sigh)?
To go full language-lawyer, T(expression) is a conversion of the result of expression to T¹. This conversion has the effect of calling the class's constructor². This is why we tend to call a non-explicit constructor taking exactly one argument a conversion constructor.
using ID = Miau &;
ID wuff = ID(wau);
This is then equivalent to a cast expression to ID. Since ID is not a class type, a C-style cast occurs.
Can someone explain the reasoning behind this behaviour?
I really can't tell why it was ever part of C++. It is not needed, and it is harmful.
Is it some kind of backwards compatibility (again, sigh)?
Not necessarily; from C++11 to C++20 we've seen breaking changes. This could be removed some day, but I doubt it will be.
1)
[expr.type.conv]
A simple-type-specifier or typename-specifier followed by a parenthesized optional expression-list or by a braced-init-list (the initializer) constructs a value of the specified type given the initializer. [...]
If the initializer is a parenthesized single expression, the type conversion expression is equivalent to the corresponding cast expression. [...]
2) (when T is of class type and such a constructor exists)
[class.ctor]/2
A constructor is used to initialize objects of its class type. Because constructors do not have names, they are never found during name lookup; however an explicit type conversion using the functional notation ([expr.type.conv]) will cause a constructor to be called to initialize an object.
it is possible to make an implicit reinterpret cast, undermining the type system.
ID wuff = ID(wau);
That's not an "implicit" reinterpret cast. That is an explicit type conversion. Although, the fact that the conversion does reinterpretation is indeed not easy to see. Specifically, the syntax of the cast is called "functional style".
If you're unsure what type of cast an explicit type conversion (whether using the functional syntax, or the C style syntax) performs, then you should refrain from using it. Many would argue that explicit type conversions should never be used.
If you had used static_cast instead, you would have stayed within the protection of the type system:
ID wuff = static_cast<ID>(wau);
error: non-const lvalue reference to type 'Miau' cannot bind to a value of unrelated type 'Wau'
It's often also safe to simply rely on implicit conversions:
ID wuff = wau;
error: non-const lvalue reference to type 'Miau' cannot bind to a value of unrelated type 'Wau'
Is this intended behaviour
Yes.
or a compiler bug?
No.
Visual C++ 2012. Code. I think it should compile; the compiler respectfully disagrees. I've narrowed my repro down to:
struct B { };
void foo(B* b, signed int si) { } // Overload 1
void foo(B const* b, unsigned int ui) { } // Overload 2
int main()
{
    B b;
    unsigned int ui;
    foo(&b, ui);
}
So we've got two candidates for overload resolution. For the first overload, the first argument exactly matches, and the second argument requires an integral conversion (unsigned to signed). For the second overload, the second argument exactly matches, and the first argument requires a cv-adjustment (because &b is a pointer to non-const).
Now, it seems that this should be entirely unambiguous. For Overload 1, the first argument is an "Exact Match" as defined by the standard's section on overload resolution, but the second is a "Conversion". For Overload 2, both arguments are "Exact Matches" (qualification conversion being at the same rank as identity). Therefore (my apparently imperfect reasoning goes), Overload 2 should be chosen, with no ambiguity. And yet:
a.cpp(12): error C2666: 'foo' : 2 overloads have similar conversions
a.cpp(6): could be 'void foo(const B *,unsigned int)'
a.cpp(5): or 'void foo(B *,int)'
while trying to match the argument list '(B *, unsigned int)'
note: qualification adjustment (const/volatile) may be causing the ambiguity
GCC seems fine with the code, both in default dialect and in C++11 (thanks, IDEOne!). So I'm kiiiiind of inclined to chalk this up to a bug in MSVC, but (a) you know what they say about people who think their bugs are compiler bugs, and (b) this seems like it would be a pretty obvious bug, the sort that would have sent up red flags during their conformance testing.
Is this a noncompliant MSVC, or a noncompliant GCC? (Or both?) Is my reasoning regarding the overload resolution sound?
MSVC is correct.
gcc 4.9.0 says:
warning: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second: [enabled by default]
clang 3.4.1 agrees that the two functions are ambiguous.
Although B* => B* and B* => B const* both have Exact Match rank, the former is still a better conversion sequence per over.ics.rank/3; this is (as its example shows) to ensure that:
int f(const int *);
int f(int *);
int i;
int j = f(&i); // calls f(int*)
From over.ics.rank/3:
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if [...]
— S1 and S2 differ only in their qualification conversion and yield similar types T1 and T2 (4.4), respectively, and the cv-qualification signature of type T1 is a proper subset of the cv-qualification signature of type T2. [...]
And, of course, unsigned int => unsigned int is better than unsigned int => signed int. So of the two overloads, one has a better implicit conversion sequence on the first argument, and the other has a better implicit conversion sequence on the second argument. So they cannot be distinguished per over.match.best/1.
You can find the text below in Appendix B of the book "C++ Templates The Complete Guide" by David Vandevoorde and Nicolai Josuttis.
B.2 Simplified Overload Resolution
Given this first principle, we are left with specifying how well a
given argument matches the corresponding parameter of a viable
candidate. As a first approximation we can rank the possible matches
as follows (from best to worst):
Perfect match. The parameter has the type of the expression, or it has a type that is a reference to the type of the expression (possibly
with added const and/or volatile qualifiers).
Match with minor adjustments. This includes, for example, the decay of an array variable to a pointer to its first element, or the
addition of const to match an argument of type int** to a parameter of
type int const* const*.
Match with promotion. Promotion is a kind of implicit conversion that includes the conversion of small integral types (such as bool,
char, short, and sometimes enumerations) to int, unsigned int, long or
unsigned long, and the conversion of float to double.
Match with standard conversions only. This includes any sort of standard conversion (such as int to float) but excludes the implicit
call to a conversion operator or a converting constructor.
Match with user-defined conversions. This allows any kind of implicit conversion.
Match with ellipsis. An ellipsis parameter can match almost any type (but non-POD class types result in undefined behavior).
A few pages later the book shows the following example and text (emphasis mine):
class BadString {
  public:
    BadString(char const*);
    ...
    // character access through subscripting:
    char& operator[] (size_t);              // (1)
    char const& operator[] (size_t) const;

    // implicit conversion to null-terminated byte string:
    operator char* ();                      // (2)
    operator char const* ();
    ...
};

int main()
{
    BadString str("correkt");
    str[5] = 'c'; // possibly an overload resolution ambiguity!
}
At first, nothing seems ambiguous about the expression str[5]. The
subscript operator at (1) seems like a perfect match. However, it is
not quite perfect because the argument 5 has type int, and the
operator expects an unsigned integer type (size_t and std::size_t
usually have type unsigned int or unsigned long, but never type int).
Still, a simple standard integer conversion makes (1) easily viable.
However, there is another viable candidate: the built-in subscript
operator. Indeed, if we apply the implicit conversion operator to str
(which is the implicit member function argument), we obtain a pointer
type, and now the built-in subscript operator applies. This built-in
operator takes an argument of type ptrdiff_t, which on many platforms
is equivalent to int and therefore is a perfect match for the argument
5. So even though the built-in subscript operator is a poor match (by user-defined conversion) for the implied argument, it is a better
match than the operator defined at (1) for the actual subscript! Hence
the potential ambiguity.
Note that the first list is Simplified Overload Resolution.
To remove the "possibly" and "potential" about whether or not int is the same as ptrdiff_t, let's change one line:
str[(ptrdiff_t)5] = 'c'; // definitely an overload resolution ambiguity!
Now the message I get from g++ is:
warning: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second: [enabled by default]
and --pedantic-errors promotes that warning to an error.
So without diving into the standard, this tells you that the simplified list is only part of the story. That list tells you which of several possible routes to prefer when going from A to B, but it does not tell you whether to prefer a trip from A to B over a trip from C to D.
You can see the same phenomenon (and the same g++ message) more obviously here:
struct S {
    explicit S(int) {}
    operator int() { return 0; }
};

void foo(const S&, long) { }
void foo(int, int) { }

int main() {
    S s(0);
    foo(s, 1);
}
Again the call to foo is ambiguous, because when we have a choice of which argument to implicitly convert in order to choose an overload, the rules don't say, "pick the more lightweight conversion of the two and convert the argument requiring that conversion".
Now you're going to ask me for the standard citation, but I shall use the fact I need to hit the hay as an excuse for not looking it up ;-)
In summary it is not true here that "a user defined conversion is a better match than a standard integer conversion". However in this case the two possible overloads are defined by the standard to be equally good and hence the call is ambiguous.
str.operator[](size_t(5));
Write code in a definite way and you won't need to bother about all this stuff :)
There is much more to think about.
I have a question regarding C++ function matching for parameters of types T and const T&.
Let's say I have the following two functions:
void f(int i) {}
void f(const int &ri) {}
If I call f with an argument of type const int then this call is of course ambiguous. But why is a call of f with an argument of type int also ambiguous? Wouldn't the first version of f be an exact match and the second one a worse match, because the int argument must be converted to a const int?
const int ci = 0;
int i = 0;
f(ci); // of course ambiguous
f(i); // why also ambiguous?
I know that such kind of overloading doesn't make much sense, because calls of f are almost always ambiguous unless the parameter type T doesn't have an accessible copy constructor. But I'm just studying the rules of function matching.
EDIT: To make my question more clear. If I have the two functions:
void f(int *pi) {}
void f(const int *pi) {}
Then the following call is not ambiguous:
int i = 0;
f(&i); // not ambiguous, first version f(int*) chosen
Although both versions of f could be called with &i the first version is chosen, because the second version of f would include a conversion to const. That is, the first version is a "better match". But in the two functions:
void f(int i) {} and
void f(const int &ri) {}
This additional conversion to const seems to be ignored for some reason. Again both versions of f could be called with an int. But again, the second version of f would require a conversion to const which would make it a worse match than the first version f(int).
int i = 1;
// f(int) requires no conversion
// f(const int &) does require a const conversion
// so why are both versions treated as "equally good" matches?
// isn't this analogous to the f(int*) and f(const int*) example?
f(i); // why ambiguous this time?
One call involves an "lvalue-to-rvalue conversion", the other requires an identity conversion (for references) or a "qualification adjustment" (for pointers), and according to the Standard these are treated equally when it comes to overload resolution.
So, neither is better on the basis of differing conversions.
There is, however, a special rule in the Standard, section 13.3.3.2, that applies only if both candidates being compared take the parameter by reference.
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if ... S1 and S2 are reference bindings (8.5.3), and the types to which the references refer are the same type except for top-level cv-qualifiers, and the type to which the reference initialized by S2 refers is more cv-qualified than the type to which the reference initialized by S1 refers.
There's an identical rule for pointers.
Therefore the compiler will prefer
f(int*);
f(int&);
over
f(const int*);
f(const int&);
respectively, but there's no preference for f(int) vs f(const int) vs f(const int&), because lvalue-to-rvalue transformation and qualification adjustment are both considered "Exact Match".
Also relevant, from section 13.3.3.1.4:
When a parameter of reference type binds directly to an argument expression, the implicit conversion sequence is the identity conversion, unless the argument expression has a type that is a derived class of the parameter type, in which case the implicit conversion sequence is a derived-to-base Conversion.
The second call f(i) is also ambiguous because void f(const int &ri) declares ri as a reference to i that is const, meaning the function promises not to modify the original i passed to it.
The choice of whether to modify the passed argument lies with the implementer of the function, not with the client programmer who merely uses it.
The reason the second call f(i) is ambiguous is that, to the compiler, both functions are acceptable: this kind of const-ness can't be used to distinguish overloads, because both versions can accept the same argument. So in your example:
int i = 0;
f(i);
how would the compiler know which function you intended to invoke? The const qualifier is only relevant inside the function definition.
See const function overloading for a more detailed explanation.
I don't understand what happens here
class A {};
class B : A {};

void func(A&, bool) {}
void func(B&, double) {}

int main(void)
{
    B b;
    A a;
    bool bo;
    double d;
    func(b, bo);
}
When compiling, Visual 2010 gives me this error on line func(b, bo);
2 overloads have similar conversions
could be 'void func(B &,double)'
or 'void func(A &,bool)'
while trying to match the argument list '(B, bool)'
I don't understand why the bool parameter isn't enough to resolve the overload.
I've seen this question, and as pointed in the accepted answer, bool should prefer the bool overload. In my case, I see that first parameter isn't enough to choose the good function, but why the second parameter doesn't solve the ambiguity?
The overloading rules are a bit more complicated than you might guess. You look at each argument separately and pick the best match for that argument. Then, if there is exactly one overload that provides the best match for every argument, that's the one that's called.
In the example, the best match for the first argument is the second version of func, because it requires only a conversion of B to B&; the other version of func requires converting B to B& and then B& to A&. For the second argument, the first version of func is the best match, because it requires no conversions.
The first version has the best match for the second argument, but it does not have the best match for the first argument, so it is not considered. Similarly, the second version has the best match for the first argument, but it does not have the best match for the second argument, so it is not considered. Now there are no versions of func left, and the call fails.
Overload resolution rules are even more complicated than Pete Becker wrote.
For each overload of f, the compiler counts not only the number of parameters for which conversion is required, but the rank of conversion.
Rank 1
No conversions required
Lvalue-to-rvalue conversion
Array-to-pointer conversion
Function-to-pointer conversion
Qualification conversion
Rank 2
Integral promotions
Floating point promotions
Rank 3
Integral conversions
Floating point conversions
Floating-integral conversions
Pointer conversions
Pointer to member conversions
Boolean conversions
Assuming that all candidates are non-template functions, a function wins if and only if it has a parameter whose rank is better than the rank of the corresponding parameter in the other candidates, and the ranks of its other parameters are no worse.
Now let's have a look at the OP case.
func(A&, bool): conversion B&->A& (rank 3) for the 1st parameter,
exact match (rank 1) for the 2nd parameter.
func(B&, double): exact match (rank 1) for the 1st parameter, conversion bool->double (rank 3) for the 2nd parameter.
Conclusion: no one wins, and the call is ambiguous.