The following code gives this compilation error:

main.cpp: In function ‘int main()’:
main.cpp:18:19: error: call of overloaded ‘print(int, int)’ is ambiguous
print(0, 0);
#include <iostream>
using namespace std;
void print(int, double);
void print(double, int);
int main()
{
print(0, 0);
return 0;
}
The following code, however, does not give a function-call ambiguity error. Why is that?
#include <iostream>
using namespace std;
void print(double){}
void print(int){}
int main()
{
print(0);
return 0;
}
Loosely speaking, overload resolution picks the overload that is the better match, considering the types of the arguments.
print(0) calls void print(int){} because 0 is an integer literal of type int.
When you call print(0, 0) then either the first 0 could be converted to double to call void print(double, int) or the second one to call void print(int, double). Neither of them is a better match, hence the compiler reports this ambiguity.
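A minimal sketch (the printed messages are mine, added just to show which overload runs): supplying one double literal gives an exact match on both parameters, so the call compiles:

#include <iostream>
using namespace std;

void print(int, double) { cout << "print(int, double)\n"; }
void print(double, int) { cout << "print(double, int)\n"; }

int main()
{
    // print(0, 0);  // error: ambiguous, as discussed above
    print(0, 0.0);   // exact match on both parameters: print(int, double)
    print(0.0, 0);   // exact match on both parameters: print(double, int)
    return 0;
}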
For more details I refer you to https://en.cppreference.com/w/cpp/language/overload_resolution.
Note that this is a matter of choice. The universe would not stop expanding if your example chose to call void print(double, int) because some rule declared it a better match than void print(int, double); that's just not what the rules say.
In the first example, when you call print(0, 0), you are passing two int arguments. The compiler could either convert the first argument to double and call print(double, int), or convert the second argument to double and call print(int, double). Both are equally good candidates, so it reports an ambiguity.
In the second example, however, there is a clear best match, print(int): no type conversion at all is needed to call it.
There are really two parts to the rules for overload resolution. Most people just talk about a "better match" or the "best match", or something similar, but (at least in my opinion) this isn't very helpful (by itself).
The rule to find that "best match" really comes in two parts. For one overload to qualify as a better match than the other, it must:
have a "better" conversion sequence for at least one parameter, and
have at least as "good" of a conversion sequence for every parameter.
I won't try to get into all the details about what constitutes a "better" conversion sequence, but in this case it suffices to say that the "best" conversion is no conversion at all, so leaving an int as an int is better than converting an int to a double.
In your first case, you're supplying two int arguments. Each of the available overloads has one parameter that would require a conversion from int to double, and another that would leave the int as an int.
As such, each available overload has a worse sequence than the other for at least one parameter, so neither one can be a better match than the other.
In your second case, the situation's much simpler: the overloaded functions have one parameter apiece, and a conversion would be required for one but not for the other, so the one that requires no conversion is clearly better.
The difference between the actual rules and what people tend to think based only on hearing about a "better match" becomes especially apparent if we have (for one example) an odd number of parameters. For example, consider a situation like this:
void f(int, double, double);
void f(double, int, int);
// ...
f(1, 2, 3);
In this case, if you think solely in terms of a "better match" the second overload might seem like an obvious choice--the first requires conversions on two arguments, and the second requires a conversion on only one argument, so that seems better.
But once we understand the real rules, we recognize that this is still ambiguous. Each of the available overloads is worse for at least one argument, therefore neither qualifies as a better match.
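The same remedy as before applies: make one overload an exact match on every parameter. A quick sketch (bodies added by me so it compiles):

void f(int, double, double) {}
void f(double, int, int) {}

int main()
{
    f(1, 2.0, 3.0);  // OK: exact match for f(int, double, double)
    f(1.0, 2, 3);    // OK: exact match for f(double, int, int)
    // f(1, 2, 3);   // still ambiguous: each candidate loses on some argument
    return 0;
}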
Related
I would like to define a function void f(int i) that would only compile and do something when called as f(1) and would cause a compile error in all other cases. Is it possible to do this with a template specialization?
Since you mentioned in the comments that you would prefer not to use a template, and given the requirements you described there, an option might be using an enum as the argument, which limits the input to a given set of values:
enum argument{option1,option2,option3};
void f(argument x){
// do stuff
}
(the example is using three allowed values, but you can use any number of possible values)
While not being exactly what you asked for (the signature of f changed), it basically serves the same purpose.
You can now call f with either option1, option2, or option3 as input (or any other values you specify in argument). As an enum is implicitly convertible to int, you can use the argument as an int inside the function body, and you can also specify the underlying values directly in argument:
enum argument{option1=20,option2=23,option3=12};
Note that even though argument values can be implicitly converted to int, it does not work the other way around, so you will not be able to call f directly on int inputs; this is why it is not quite exactly what you asked for.
Note that while you can use static_cast to convert int to argument, it is dangerous, because this will again compile for any input, and might even invoke UB depending on the value you cast.
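A sketch of the resulting call sites (the out-of-range value 99 is made up, purely to illustrate the danger):

enum argument{option1=20,option2=23,option3=12};
void f(argument x){ /* do stuff */ }

int main()
{
    f(option1);                       // OK: exact match
    // f(20);                         // error: no implicit conversion from int to argument
    f(static_cast<argument>(20));     // compiles, but bypasses the compile-time check
    // f(static_cast<argument>(99));  // would also compile, yet 99 lies outside the
                                      // enum's range of values, so it risks UB
    return 0;
}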
I'm having a weird problem with an overloaded function one of my work colleagues made, but we're both C++/Qt newbies and can't figure out the reason for this situation.
The situation is as follows:
We have an overloaded function:
void foo(bool arg1 = false);
void foo(const QVariant &arg1, const QString &arg2 = NULL, bool arg3 = false);
On the first one we have only one optional bool argument passed by value;
on the second we have one QVariant reference, an optional QString reference, and a bool.
What happens at runtime is that when I call, for example:
foo("some_c++_string");
instead of using the second one and converting the string to QVariant, it uses the first one and presumably ignores the argument! The weird thing is that it doesn't even complain at compile time about there being no overload of foo for char*!
But if we do:
foo(QString("some_qt_string"));
it goes to the second overload as expected.
So, the question is: Why in the world does it decide to go for an overload which accepts one optional argument not even of the same type instead of using the second one and trying to use the string argument as a QVariant?
It probably has something to do with passing the arguments by reference and giving them default values, but I've tried several combinations and it always went wrong.
Thanks for your time
That's because of the ordering of implicit conversions. Converting a string literal to a bool requires only a standard conversion sequence: an array-to-pointer conversion (obtaining const char*) and a boolean conversion (obtaining bool).
On the other hand, converting a string literal to a QVariant requires a user-defined conversion sequence, since it involves a class (QVariant).
And per C++11 13.3.3.2/2,
When comparing the basic forms of implicit conversion sequences (as defined in 13.3.3.1)
a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion
sequence or an ellipsis conversion sequence
Which means that the first overload is strictly better and is thus chosen.
I'd wager you're building with Visual Studio, by the way. Otherwise, the construct
foo(QString("some_qt_string"));
wouldn't even compile. That's because QString("some_qt_string") creates a temporary object, and in standard C++, temporaries cannot bind to non-const lvalue references (which your second overload takes). However, Visual Studio allows that as an extension.
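The ranking can be reproduced without Qt at all. In this sketch, Variant is a made-up stand-in for QVariant with a similar converting constructor, and the second overload takes a const reference so the temporary can bind in standard C++:

#include <iostream>

struct Variant {              // stand-in for QVariant
    Variant(const char *) {}  // user-defined conversion, like QVariant(const char*)
};

void foo(bool)            { std::cout << "foo(bool)\n"; }
void foo(const Variant &) { std::cout << "foo(const Variant&)\n"; }

int main()
{
    foo("some_c++_string");    // standard conversion (array-to-pointer + boolean)
                               // beats the user-defined one: prints foo(bool)
    foo(Variant("explicit"));  // exact match: prints foo(const Variant&)
    return 0;
}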
Why in the world does it decide to go for an overload which accepts one optional argument not even of the same type instead of using the second one and trying to use the string argument as a QVariant?
Looking at the function signatures:
void foo(bool arg1 = false);
void foo(QVariant &arg1, QString &arg2 = NULL, bool arg3 = false);
The second overload takes a non-const lvalue reference. If you pass a const char*, it would require creating a temporary QVariant, an rvalue, which cannot bind to that reference.
On the other hand, there is a function overload that takes a bool value, which is a better match, and is called.
You also said that if you do this :
foo(QString("some_qt_string"));
the second one is called. But it shouldn't even compile: you are passing an rvalue to a function which takes a non-const lvalue reference. I am not sure which compiler you use, but that is certainly wrong. Turn the warning level up to the maximum and see what happens. I do hope you are not ignoring compiler warnings ;)
The simplest solution (and maybe the best) is NOT to overload the function. Or at least not to give default arguments.
The second overload takes a QVariant by reference as its first argument. You did not pass a QVariant, so this overload can never be taken. In fact, even when you pass a QString as the first argument, the second overload is still not viable and you should get a compiler error. My guess is that you are compiling with Microsoft VC and what you are observing is non-standard behavior that allows binding an lvalue reference to a temporary.
I was looking for a way to uppercase a standard string. The answer that I found included the following code:
#include <algorithm>
#include <cctype>
#include <string>

int main()
{
    std::string myString = "hello";
    // explicit cast needed to resolve ambiguity
    std::transform(myString.begin(), myString.end(), myString.begin(),
                   (int(*)(int)) std::toupper);
    return 0;
}
Can someone explain the casting expression “(int(*) (int))”? All of the other casting examples and descriptions that I’ve found only use simple type casting expressions.
It's actually a simple typecast - but to a function-pointer type.
std::toupper comes in two flavours. One takes int and returns int; the other takes int and const locale& and returns int. In this case, it's the first one that's wanted, but the compiler wouldn't normally have any way of knowing that.
(int(*)(int)) is a cast to a function pointer that takes int (right-hand portion) and returns int (left-hand portion). Only the first version of toupper can be cast like that, so it disambiguates for the compiler.
(int(*)(int)) is the name of a function-pointer type: the function returns int, the (*) makes it a pointer to a function, and it takes an (int) argument.
As others already mentioned, int (*)(int) is the type pointer to a function which takes and returns int. However what is missing here is what this cast expression does: Unlike other cast expressions it does not really cast (i.e. it does not convert a value into a different type), but it selects from the overloaded set of functions named std::toupper the one which has the signature int(int).
Note, however, that this method is somewhat fragile: If for some reason there's no matching function (for example because the corresponding header was not included) but only one non-matching function (so no ambiguity arises), then this cast expression will indeed turn into a cast, more exactly a reinterpret_cast, with undesired effects. To make sure that no unintended cast happens, the C++ style cast syntax should be used instead of the C style cast syntax: static_cast<int(*)(int)>(std::toupper) (actually, in the case of std::toupper this case cannot occur because the only alternative function is templated and therefore ambiguous, however it could happen with other overloaded functions).
Coincidentally, the new-style cast syntax is more readable in that case, too.
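A hypothetical illustration (bar is made up; imagine it is the only function the compiler can see under that name):

double bar(double x) { return x; }  // not the signature we wanted

int main()
{
    int (*p1)(int) = (int(*)(int)) bar;  // compiles anyway: a reinterpret_cast in
                                         // disguise; calling p1 would be undefined behavior
    // int (*p2)(int) = static_cast<int(*)(int)>(bar);  // error: does not compile
    (void)p1;
    return 0;
}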
Another possibility, which works without any cast expression, is the following:
int (*ptoupper)(int) = &std::toupper; // here the context provides the required type information
std::transform(myString.begin(), myString.end(), myString.begin(), ptoupper);
Note that the reason why the context cannot provide the necessary information is that std::transform is templated on the last argument, therefore the compiler cannot determine the correct function to choose.
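If C++11 is available, a lambda sidesteps the whole problem, because a lambda has exactly one call signature and leaves no overload set for the compiler to puzzle over (a sketch; the unsigned char parameter guards against negative char values):

#include <algorithm>
#include <cctype>
#include <string>

int main()
{
    std::string myString = "hello";
    std::transform(myString.begin(), myString.end(), myString.begin(),
                   [](unsigned char c) { return std::toupper(c); });
    return 0;
}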
int function(int);
A function taking int and returning int.
int (*function_pointer)(int);
A pointer to a function taking int and returning int.
int (*)(int)
The type of a pointer to a function taking int and returning int.
std::toupper from <cctype> already has type int (*)(int), but the one in <locale> is templatized on charT, which I assume is the reason for the cast. But ptr_fun would be clearer.
I have a class called FileProc that runs file IO operations. In one instance I have declared two functions (which are helpers for operator= functions), both distinctly different:
const bool WriteAmount(const std::string &Arr, const long S)
{
/* Do some string conversion */
return true;
}
const bool WriteAmount(const char Arr[], unsigned int S)
{
/* Do some string conversion */
return true;
}
If I make a call to WriteAmount with a char string, it reports an ambiguity error, saying it is confused between the WriteAmount for a char string and the one for std::string. I know what is occurring under the hood: it's attempting to use the std::string constructor to implicitly convert the char string into a std::string. But I don't want this to occur for WriteAmount (i.e., I don't want any implicit conversion occurring on these calls, given each overload is optimised for its role).
My question is: for consistency, without changing the function format (i.e., not changing the number of arguments or the order they appear in) and without altering the standard library, is there any way to prevent the implicit conversion on the functions in question?
I forgot to add: preferably without typecasting, as this would be tedious at the call sites and not user-friendly.
You get the ambiguity because your second parameters differ. Calling it with long x = ...; WriteAmount("foo", x) raises an ambiguity because the second argument matches the first overload better, while the first argument matches the second overload better.
Make the second parameter the same type in both overloads and you will get rid of the ambiguity: the second argument then matches both overloads equally well, and the first argument matches the second overload better.
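A minimal sketch of that change, assuming long is the type chosen for both second parameters:

#include <string>

const bool WriteAmount(const std::string &Arr, const long S)
{
    /* std::string version */
    return true;
}

const bool WriteAmount(const char Arr[], const long S)  // was unsigned int
{
    /* char array version */
    return true;
}

int main()
{
    long x = 5;
    WriteAmount("foo", x);  // unambiguous now: the char-array overload wins on the
                            // first argument and ties on the second
    return 0;
}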
Can't you change the second argument and cast it to unsigned int? Then it should not be able to use the first overload. I have not coded in C++ for ages...
This C++ question seems pretty basic and general, but I still want someone to answer it.
1) What is the difference between a function with a variable-length argument list and an overloaded function?
2) Will we have problems if we have a function with a variable-length argument list and another function with the same name and similar arguments?
2) Do you mean the following?
int mul(int a, int b);
int mul(int n, ...);
Let's assume the first multiplies 2 integers, and the second multiplies n integers passed via var-args. The call mul(1, 2) will not be ambiguous, because an argument passed through "the ellipsis" is associated with the highest possible conversion cost, while passing an argument to a parameter of the same type is associated with the lowest possible cost. So this call will surely be resolved to the first function :)
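A runnable sketch of that resolution (the bodies and messages are mine, just to show which candidate is picked):

#include <iostream>

int mul(int a, int b) { std::cout << "two ints\n"; return a * b; }
int mul(int n, ...)   { std::cout << "ellipsis\n"; return n; }

int main()
{
    mul(1, 2);  // prints "two ints": the exact match beats the ellipsis conversion
    return 0;
}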
Notice that overload resolution only compares argument-to-parameter conversions for the same position. Resolution fails if each candidate has the winning conversion for some parameter position. For example
int mul(int a, int b);
int mul(double a, ...);
Imagine the first multiplies two integers, and the second multiplies a list of doubles that is terminated by a 0.0. This overload set is flawed and will be ambiguous when called by
mul(3.14, 0.0);
This is because the second function wins for the first argument, but the first function wins for the second argument. It doesn't matter that the conversion cost for the second argument of the second function is higher than the cost for the first argument of the first function: once such a "crossed winners" situation arises between two candidates, the call is ambiguous.
1) Well, an overloaded function will require a HELL of a lot of different prototypes and implementations, but it will also be type-safe.
2) Yes, this will cause you problems, as the compiler may not know which function it needs to call. It may or may not warn about this; if it doesn't, you may well end up with hard-to-find bugs.
An overloaded function can have completely different parameter types, including none, with the correct one being picked depending on the argument types.
A variable-length argument list requires at least one named parameter to be present. You also need some mechanism to "predict" the type of the next argument (as you have to state it in va_arg()), and it has to be a basic type (i.e., integer, floating point, or pointer). Common techniques here are "format strings" (as in printf() and scanf()) or "tag lists" (every odd element in the argument list being an enum telling the type of the following even element, with a zero enum marking the end of the list).
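A minimal sketch of a count-prefix variant of that technique:

#include <cstdarg>
#include <iostream>

// The first parameter announces how many ints follow.
int sum(int n, ...)
{
    va_list args;
    va_start(args, n);
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += va_arg(args, int);  // the type must be "predicted": here, int
    va_end(args);
    return total;
}

int main()
{
    std::cout << sum(3, 10, 20, 12) << '\n';  // prints 42
    return 0;
}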
Generally speaking, overloading is the C++ way to go. If you end up really needing something akin to variable-length argument lists in C++, for example for conveniently chaining arguments of various number and type, consider how C++ streams work (those concatenated "<<" and ">>"s):
class MyClass {
public:
MyClass & operator<<( int i )
{
// do something with integer
return *this;
}
MyClass & operator<<( double d )
{
// do something with double
return *this;
}
};
int main()
{
MyClass foo;
foo << 42 << 3.14 << 0.1234 << 23;
return 0;
}
It is pretty general, and Goz has already covered some of the points. A few more:
1) A variable argument list gives undefined behavior if you pass anything but POD objects. Overloaded functions can receive any kind of objects.
2) You can have ambiguity if one member of an overload set takes a variable argument list. Then again, you can have ambiguity without that as well. The variable argument list might create ambiguity in a larger number of situations though.
The first point is the really serious one: for most practical purposes, it renders variable argument lists a purely "legacy" item in C++, not something to even consider using in new code. The most common alternative is chaining overloaded operators instead (e.g., iostream inserters/extractors versus printf/scanf).
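A small illustration of that first point (the commented-out line is the undefined-behavior trap):

#include <cstdio>
#include <string>

int main()
{
    std::string s = "hello";
    // std::printf("%s\n", s);       // undefined behavior: a non-POD object passed through "..."
    std::printf("%s\n", s.c_str());  // fine: const char* is a plain pointer
    return 0;
}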