I have a class called FileProc that performs file I/O operations. In one instance I have declared two functions (which are helper functions for operator= functions), deliberately different from each other:
const bool WriteAmount(const std::string &Arr, const long S)
{
    /* Do some string conversion */
    return true;
}

const bool WriteAmount(const char Arr[], unsigned int S)
{
    /* Do some string conversion */
    return true;
}
If I make a call to WriteAmount with a 'char string', it reports an ambiguity error, saying it is confused between WriteAmount for char string and WriteAmount for std::string. I know what is occurring under the hood: it's attempting to use the std::string constructor to implicitly convert the char string into a std::string. But I don't want this to occur in the case of WriteAmount (i.e. I don't want any implicit conversion occurring on these calls, given each overload is optimised for its role).
My question is: for consistency, without changing the function format (i.e. not changing the number of arguments or the order they appear in) and without altering the standard library, is there any way to prevent implicit conversion in the functions in question?
I forgot to add: preferably without typecasting, as this would be tedious at every call site and not user-friendly.
You get the ambiguity because your second parameter differs between the overloads. Calling it with long x = ...; WriteAmount("foo", x) raises an ambiguity because the second argument matches the first overload better, while the first argument matches the second overload better.
Make the second parameter the same type in both overloads and you will get rid of the ambiguity: the second argument is then matched equally well (or badly) by both overloads, and the first argument is matched better by the second overload.
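A minimal sketch of that fix, keeping the argument count and order (the choice of unsigned int for both is illustrative):

const bool WriteAmount(const std::string &Arr, unsigned int S);
const bool WriteAmount(const char Arr[], unsigned int S);

Now a call such as WriteAmount("foo", 5) is decided purely by the first argument: the const char[] overload matches it exactly, while the std::string overload would need a user-defined conversion, so the char overload wins without any ambiguity.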
Can't you change the second argument and cast it to unsigned int? Then it should not be able to use the first function. I have not coded in C++ for ages...
Related
The following code gives the compilation error:
main.cpp: In function ‘int main()’:
main.cpp:18:19: error: call of overloaded ‘print(int, int)’ is ambiguous
    print(0, 0);
#include <iostream>
using namespace std;

void print(int, double);
void print(double, int);

int main()
{
    print(0, 0);
    return 0;
}
While the following code does not give the compilation error of function call ambiguity, why so?
#include <iostream>
using namespace std;

void print(double){}
void print(int){}

int main()
{
    print(0);
    return 0;
}
Loosely speaking, overload resolution picks the overload that is a better match considering the types of the arguments.
print(0) calls void print(int){} because 0 is an integer literal of type int.
When you call print(0, 0) then either the first 0 could be converted to double to call void print(double, int) or the second one to call void print(int, double). Neither of them is a better match, hence the compiler reports this ambiguity.
For more details I refer you to https://en.cppreference.com/w/cpp/language/overload_resolution.
Note that this is a matter of choice. The universe would not stop expanding if your example instead called void print(double, int) because some rule declared that a better match than void print(int, double); that's just not what the rules say.
In the first example, when you call print(0, 0), you pass two int arguments. The compiler could either convert the first argument to double and call print(double, int), or convert the second argument to double and call print(int, double). Both are equally good candidates for the called function, hence it reports an ambiguity.
In the second example, however, there is no doubt about the called function, i.e. print(int): no type conversion is required to select it.
There are really two parts to the rules for overload resolution. Most people just talk about a "better match" or the "best match", or something similar, but (at least in my opinion) this isn't very helpful (by itself).
The rule to find that "best match" really comes in two parts. For one overload to qualify as a better match than the other, it must:
have a "better" conversion sequence for at least one parameter, and
have at least as "good" of a conversion sequence for every parameter.
I won't try to get into all the details about what constitutes a "better" conversion sequence, but in this case it suffices to say that the "best" conversion is no conversion at all, so leaving an int as an int is better than converting an int to a double.
In your first case, you're supplying two int arguments. Each of the available overloads has one parameter that would require a conversion from int to double, and another that would leave the int as an int.
As such, each available overload has a worse sequence than the other for at least one parameter, so neither one can be a better match than the other.
In your second case, the situation's much simpler: the overloaded functions have one parameter apiece, and a conversion would be required for one but not for the other, so the one that requires no conversion is clearly better.
The difference between the actual rules and what people tend to think based only on hearing about a "better match" becomes especially apparent if we have (for one example) an odd number of parameters. For example, consider a situation like this:
void f(int, double, double);
void f(double, int, int);
// ...
f(1, 2, 3);
In this case, if you think solely in terms of a "better match" the second overload might seem like an obvious choice--the first requires conversions on two arguments, and the second requires a conversion on only one argument, so that seems better.
But once we understand the real rules, we recognize that this is still ambiguous. Each of the available overloads is worse for at least one argument, therefore neither qualifies as a better match.
I have a class SpecialString. It has an operator overload / conversion function that is used any time it's passed off as a char*. It then returns a normal C string.
class SpecialString
{
    ...
    operator char* () const { return mCStr; }
    ...
};
This used to work a long time ago (literally 19 years ago) when I passed these directly into printf(). The compiler was smart enough to know the argument was meant to be a char* and it used the conversion function, but now g++ complains.
SpecialString str1("Hello"), str2("World");
printf("%s %s\n", str1, str2);
error: cannot pass object of non-POD type 'SPECIALSTRING' (aka 'SpecialString') through variadic method; call will abort at runtime [-Wnon-pod-varargs]
Is there any way to get this to work again without changing the code? I can add a dereference operator overload that returns the C string and pass the SpecialString objects around like this:
class SpecialString
{
    ...
    operator char* () const { return mCStr; }
    char* operator * () const { return mCStr; }
    ...
};
SpecialString str1("Hello"), str2("World");
printf("%s %s\n", *str1, *str2);
But I'd prefer not to because this requires manually changing thousands of lines of code.
You could disable the warning if you don't want to be informed about it... but that's a bad idea.
The behaviour of the program is undefined; you should fix it, and that requires changing the code. You can use the existing conversion operator with static_cast, or you can use your unary * operator idea, which is going to be terser.
Even less change would be required if you used unary + instead, which doesn't require introducing an overload, since it invokes the implicit conversion. It may add some confusion for the reader of the code, though.
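For illustration, the three call-site options look like this (a sketch based on the class as posted):

SpecialString str1("Hello"), str2("World");
printf("%s %s\n", static_cast<char*>(str1), static_cast<char*>(str2)); // existing conversion operator
printf("%s %s\n", *str1, *str2);                                      // the proposed unary * overload
printf("%s %s\n", +str1, +str2);                                      // unary + triggers the implicit conversion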
Since you don't want to modify the existing code, you can write a "hack" instead. More specifically, a bunch of overloads to printf() that patch the existing code.
For example:
int printf(const char* f, const SpecialString& a, const SpecialString& b)
{
    return printf(f, (const char*)a, (const char*)b);
}
With this function declared in your header, every call to printf() with those specific parameters will use this function instead of the "real" printf() you're familiar with, and perform the needed conversions.
I presume you have quite a few combinations of printf() calls in your code involving SpecialString, so you may have to write a bunch of different overloads, and this is ugly af to say the least, but it does fit your requirement.
As mentioned in another comment, it has always been undefined behavior that happened to work in your case.
With the Microsoft CString class, it seems the undefined behavior was so widely relied upon (as it happened to work) that the layout is now defined in a way that keeps it working. See How can CString be passed to format string %s?.
In our code base, when I modify a file I try to fix the code to do the conversion explicitly by calling GetString().
There are a few things you could do:
Fix existing code everywhere you get the warning.
In that case, a named function like c_str or GetString is preferable to a conversion operator, to avoid explicit casting (for ex. static_cast or, even worse, a C-style cast (const char *)). The dereference operator might be an acceptable compromise.
Use some formatting library
<iostream>
fmt: https://github.com/fmtlib/fmt
many other choices (search C++ formatting library or something similar)
Use a variadic template function so that the conversion can be done (see the sketch after this list).
If you only use a few types (int, double, string) and rarely more than 2 or 3 parameters, defining overloads might also be a possibility.
Not recommended: hack your class so that it works again.
Have you made any change to your class definition that caused it to break, or did you only upgrade the compiler version or change compiler options?
Such a hack relies on undefined behavior, so you must figure out how your compiler works, and the code won't be portable.
For it to work, the class must have the size of a pointer and the data itself must be compatible with a pointer. Essentially, the data must consist of a single pointer (no v-table or other extras).
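A sketch of the variadic template approach mentioned above (my_printf and printf_arg are names invented for this example, and the conversion assumes the operator char* shown earlier):

#include <cstdio>

// Convert SpecialString to the char* that printf expects...
inline char* printf_arg(const SpecialString& s) { return static_cast<char*>(s); }

// ...and pass every other argument through untouched.
template <typename T>
inline const T& printf_arg(const T& arg) { return arg; }

template <typename... Args>
int my_printf(const char* fmt, const Args&... args)
{
    return std::printf(fmt, printf_arg(args)...);
}

A call like my_printf("%s %s\n", str1, str2) then needs no casts at the call site, although the calls do have to be redirected to my_printf.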
Side note: I think one should avoid defining one's own string class. In most cases, the standard C++ string (or string view) should be used. If you need additional functions, I would recommend writing stand-alone functions in a namespace, named StringUtilities for example. That way, you avoid converting back and forth between your own string and the standard string (or some library string like MFC, Qt or something else).
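For example, a minimal sketch of that approach (the namespace and helper are illustrative):

#include <string>

namespace StringUtilities
{
    // Stand-alone helper working on std::string instead of a custom class.
    inline bool startsWith(const std::string& s, const std::string& prefix)
    {
        return s.compare(0, prefix.size(), prefix) == 0;
    }
}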
I'm having a weird problem with an overloaded function one of my work colleagues made, but we're both C++/Qt newbies and can't figure out the reason for this situation.
The situation is as follows:
We have an overloaded function:
foo(bool arg1 = false);
foo(const QVariant &arg1, const QString &arg2 = NULL, bool arg3 = false);
For the first one, we have only one optional bool argument passed by value;
for the second, we have a QVariant reference, an optional QString reference and a bool.
What happens at runtime is that when I call, for example:
foo("some_c++_string");
instead of using the 2nd one and converting the string to QVariant, it uses the first one and probably ignores the argument! The weird thing is that it doesn't even complain at compile time about there being no overload for foo(char *), for example!
But if we do:
foo(QString("some_qt_string"));
it goes to the second overload as expected.
So, the question is: Why in the world does it decide to go for an overload which accepts one optional argument not even of the same type instead of using the second one and trying to use the string argument as a QVariant?
It probably has something to do with passing the arguments by reference and giving them default values, but I've tried several combinations and it always went wrong.
Thanks for your time
That's because of the ordering of implicit conversions. Converting a string literal to a bool requires only a standard conversion sequence: an array-to-pointer conversion (obtaining const char*) and a boolean conversion (obtaining bool).
On the other hand, converting a string literal to a QVariant requires a user-defined conversion sequence, since it involves a class (QVariant).
And per C++11 13.3.3.2/2,
When comparing the basic forms of implicit conversion sequences (as defined in 13.3.3.1), a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion sequence or an ellipsis conversion sequence.
This means that the first overload is strictly better and is thus chosen.
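The ranking is easy to reproduce without Qt. In this minimal sketch, Wrapper stands in for QVariant:

#include <iostream>

struct Wrapper
{
    Wrapper(const char*) {} // enables a user-defined conversion from a C string
};

void foo(bool)           { std::cout << "bool overload\n"; }
void foo(const Wrapper&) { std::cout << "Wrapper overload\n"; }

int main()
{
    foo("some_c++_string");          // standard conversion wins: prints "bool overload"
    foo(Wrapper("some_c++_string")); // explicit construction forces the second overload
    return 0;
}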
I'd wager you're building with Visual Studio, by the way. Otherwise, the construct
foo(QString("some_qt_string"));
wouldn't even compile. That's because QString("some_qt_string") creates a temporary object, and in standard C++, temporaries cannot bind to non-const lvalue references (which your second overload takes). However, Visual Studio allows that as an extension.
Why in the world does it decide to go for an overload which accepts one optional argument not even of the same type instead of using the second one and trying to use the string argument as a QVariant?
Looking at the function signatures:
foo(bool arg1 = false);
foo(QVariant &arg1, QString &arg2 = NULL, bool arg3 = false);
The 2nd overload takes an lvalue reference. If you pass a const char*, it would require creating a QVariant rvalue, which cannot bind to that reference.
On the other hand, there is a function overload that takes a bool value, which is a better match, and is called.
You also said that if you do this :
foo(QString("some_qt_string"));
the second one is called. But it shouldn't even compile. You are passing an rvalue to a function which takes an lvalue reference. I am not sure which compiler you use, but that is certainly wrong. Turn on the maximum possible warning level and see what happens. I do hope you are not ignoring compiler warnings ;)
The simplest solution (and maybe the best) is NOT to overload the function, or at least not to give it default arguments.
The second overload takes a QVariant by reference as its first argument. You did not pass a QVariant, so this overload can never be taken. In fact, when you pass a QString as the first argument, the second overload is still not viable and you should get a compiler error. My guess is that you are compiling with Microsoft VC and what you are observing is non-standard behavior that allows binding an lvalue reference to a temporary.
Hi, I have a confusion, or rather, something I need to understand. I have a procedure and an overloaded version of the same procedure.
string conct (string a, string b) {
    string str = conct(a, b, "string");
    return str;
}

string conct (string a, string b, const char* c) {
    // do the processing;
    return concatenated_string;
}
Is it possible that instead of having two overloaded functions, I make c in the overloaded function a default argument, so that even if someone passes only two arguments, I can have just one function to handle that case?
My main concern is the third argument, which is currently const char* c. If I change it to something like const char* c = "string", would that be a correct way to replace the overloading with a single function taking a default argument?
I saw the post here, but that seems to be focused on compilation and not on the confusion I have.
Yes, you can replace your overloaded functions with one function and a default argument:
string conct (string a, string b, const char* c = "string") {
    // do the processing;
    return concatenated_string;
}
When you overload functions the compiler generates code for each function, probably resulting in larger code size.
If an overload just acts as a thin wrapper as in your case then the optimizer may eliminate the extra work.
Default arguments are substituted at the caller's location rather than inside the function, so default arguments must be publicly visible, and changing them requires recompiling all callers. With an overload like yours, the pseudo-default argument becomes a hidden detail.
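In other words, the caller's side is compiled as if the value were written out (illustrative):

string s = conct(a, b); // compiled as conct(a, b, "string");
                        // changing the default later means recompiling every such caller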
Default values can be used in function prototypes, but if we want to default a middle argument then we'll have to default all arguments to its right, as the sketch below shows...
On the other hand, overloading a function can be done for all possible argument combinations; also, a default value need not be placed on the function call stack, and thus there is less work for the compiler...
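A quick sketch of the defaults-fill-from-the-right restriction, using a hypothetical variation of conct:

// Ill-formed: once a parameter has a default, every parameter to its right needs one too.
// string conct (string a, string b = "b", const char* c);

// OK: defaults fill from the right.
string conct (string a, string b = "b", const char* c = "string");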
This C++ question seems to be pretty basic and general, but I still want someone to answer it.
1) What is the difference between a function with a variable-length argument list and an overloaded function?
2) Will we have problems if we have a function with a variable-length argument list and another function of the same name with similar arguments?
2) Do you mean the following?
int mul(int a, int b);
int mul(int n, ...);
Let's assume the first multiplies 2 integers. The second multiplies n integers passed via varargs. A call mul(1, 2) will not be ambiguous, because an argument passed through "the ellipsis" is associated with the highest possible cost, while passing an argument to a parameter of the same type is associated with the lowest possible cost. So this very call will surely be resolved to the first function :)
Notice that overload resolution only compares argument-to-parameter conversions for the same position. It fails if each candidate wins for a different parameter position. For example
int mul(int a, int b);
int mul(double a, ...);
Imagine the first multiplies two integers, and the second multiplies a list of doubles that is terminated by a 0.0. This overload set is flawed and will be ambiguous when called by
mul(3.14, 0.0);
This is because the second function wins for the first argument, but the first function wins for the second argument. It doesn't matter that the conversion cost for the second argument is higher for the second function than the cost of the first argument for the first function. Once such a "cross winner" situation arises, the call with those two candidates is ambiguous.
1) Well, an overloaded function will require a HELL of a lot of different prototypes and implementations. It will also be type safe.
2) Yes, this will cause you problems, as the compiler will not know which function it needs to call. It may or may not warn about this; if it doesn't, you may well end up with hard-to-find bugs.
An overloaded function can have completely different parameter types, including none, with the correct one being picked depending on the parameter types.
A variable-length argument list requires at least one named parameter to be present. You also need some mechanism to "predict" the type of the next argument (as you have to state it in va_arg()), and it has to be a basic type (i.e., integer, floating point, or pointer). Common techniques here are "format strings" (as in printf(), scanf()) or "tag lists" (every odd element in the parameter list being an enum telling the type of the following even element, with a zero enum to mark the end of the list).
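As a concrete illustration, here is a minimal C-style variable-length function; the leading count parameter plays the "predicting" role (names are made up for this sketch):

#include <cstdarg>
#include <cstdio>

// Sums 'count' ints passed through the ellipsis. va_arg cannot discover
// the number or the types of the arguments on its own, so the named
// parameter has to communicate them.
int sum_ints(int count, ...)
{
    va_list args;
    va_start(args, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int); // undefined behavior if the caller lied about the types
    va_end(args);
    return total;
}

int main()
{
    std::printf("%d\n", sum_ints(3, 1, 2, 3)); // prints 6
    return 0;
}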
Generally speaking, overloading is the C++ way to go. If you end up really needing something akin to variable-length argument lists in C++, for example for conveniently chaining arguments of various number and type, consider how C++ streams work (those concatenated "<<" and ">>"s):
class MyClass {
public:
    MyClass & operator<<( int i )
    {
        // do something with the integer
        return *this;
    }
    MyClass & operator<<( double d )
    {
        // do something with the double
        return *this;
    }
};

int main()
{
    MyClass foo;
    foo << 42 << 3.14 << 0.1234 << 23;
    return 0;
}
It is pretty general, and Goz has already covered some of the points. A few more:
1) A variable argument list gives undefined behavior if you pass anything but POD objects. Overloaded functions can receive any kind of object.
2) You can have ambiguity if one member of an overload set takes a variable argument list. Then again, you can have ambiguity without that as well. The variable argument list might create ambiguity in a larger number of situations though.
The first point is the really serious one -- for most practical purposes, it renders variable argument lists purely a "legacy" item in C++, not something to even consider using in any new code. The most common alternative is chaining overloaded operators instead (e.g. iostream inserters/extractors versus printf/scanf).