This C++ question seems pretty basic and general, but I'd still like someone to answer it.
1) What is the difference between a function with variable-length argument and an overloaded function?
2) Will we have problems if we have a function with a variable-length argument list and another function with the same name and similar arguments?
2) Do you mean the following?
int mul(int a, int b);
int mul(int n, ...);
Let's assume the first multiplies 2 integers, and the second multiplies n integers passed via var-args. A call like mul(1, 2) will not be ambiguous, because an argument passed through "the ellipsis" is associated with the highest possible conversion cost, while passing an argument to a parameter of the same type is associated with the lowest possible cost. So this particular call will surely be resolved to the first function :)
Notice that overload resolution only compares argument-to-parameter conversions at the same position. It will fail if each function wins for some parameter position. For example
int mul(int a, int b);
int mul(double a, ...);
Imagine the first multiplies two integers, and the second multiplies a list of doubles that is terminated by a 0.0. This overload set is flawed and will be ambiguous when called by
mul(3.14, 0.0);
This is because the second function wins for the first argument, but the first function wins for the second argument. It doesn't matter that the conversion cost of the second argument is higher for the second function than the cost of the first argument is for the first function. Once such a "cross" winner situation is detected, the call is ambiguous between the two candidates.
1) Well, an overloaded function will require a HELL of a lot of different prototypes and implementations. It will also be type-safe.
2) Yes, this will cause you problems, as the compiler will not know which function it needs to call. It may or may not warn about this; if it doesn't, you may well end up with hard-to-find bugs.
An overloaded function can have completely different parameter types, including none, with the correct one being picked depending on the parameter types.
A variable-length argument requires at least one parameter to be present. You also need some mechanism to "predict" the type of the next parameter (as you have to state it in va_arg()), and it has to be a basic type (i.e., integer, floating point, or pointer). Common techniques here are "format strings" (as in printf(), scanf()), or "tag lists" (every odd element in the parameter list being an enum telling the type of the following even element, with a zero enum to mark the end of the parameter list).
Generally speaking, overloading is the C++ way to go. If you end up really needing something akin to variable-length argument lists in C++, for example for conveniently chaining arguments of various number and type, consider how C++ streams work (those concatenated "<<" and ">>"s):
class MyClass {
public:
    MyClass & operator<<( int i )
    {
        // do something with the integer
        return *this;
    }
    MyClass & operator<<( double d )
    {
        // do something with the double
        return *this;
    }
};

int main()
{
    MyClass foo;
    foo << 42 << 3.14 << 0.1234 << 23;
    return 0;
}
It is pretty general, and Goz has already covered some of the points. A few more:
1) A variable argument list gives undefined behavior if you pass anything but POD objects. Overloaded functions can receive any kind of object.
2) You can have ambiguity if one member of an overload set takes a variable argument list. Then again, you can have ambiguity without that as well. The variable argument list might create ambiguity in a larger number of situations though.
The first point is the really serious one: for most practical purposes, it renders variable argument lists a purely "legacy" item in C++, not something to even consider using in new code. The most common alternative is chaining overloaded operators instead (e.g. iostream inserters/extractors versus printf/scanf).
The following code gives the compilation error:
main.cpp: In function ‘int main()’:
main.cpp:18:19: error: call of overloaded ‘print(int, int)’ is ambiguous
print(0, 0);
#include <iostream>
using namespace std;

void print(int, double);
void print(double, int);

int main()
{
    print(0, 0);
    return 0;
}
While the following code does not give the compilation error of function call ambiguity, why so?
#include <iostream>
using namespace std;

void print(double){}
void print(int){}

int main()
{
    print(0);
    return 0;
}
Loosely speaking, overload resolution picks the overload that is a better match for the types of the arguments.
print(0) calls void print(int){} because 0 is an integer literal of type int.
When you call print(0, 0) then either the first 0 could be converted to double to call void print(double, int) or the second one to call void print(int, double). Neither of them is a better match, hence the compiler reports this ambiguity.
For more details I refer you to https://en.cppreference.com/w/cpp/language/overload_resolution.
Note that this is a matter of choice. The universe would not stop expanding if your example chose to call void print(double, int) because some rule declared that a better match than void print(int, double); that's just not what the rules say.
In the first example, when you call print(0, 0), you are passing two int arguments. The compiler could either convert the first argument to double and call print(double, int), or convert the second argument to double and call print(int, double). Both are equally good candidates, so it reports an ambiguity.
In the second example, however, there is no doubt about which function is called: print(int). No type conversion is required to identify it.
There are really two parts to the rules for overload resolution. Most people just talk about a "better match" or the "best match", or something similar, but (at least in my opinion) this isn't very helpful (by itself).
The rule to find that "best match" really comes in two parts. For one overload to qualify as a better match than the other, it must:
have a "better" conversion sequence for at least one parameter, and
have at least as "good" of a conversion sequence for every parameter.
I won't try to get into all the details about what constitutes a "better" conversion sequence, but in this case it suffices to say that the "best" conversion is no conversion at all, so leaving an int as an int is better than converting an int to a double.
In your first case, you're supplying two int arguments. Each of the available overloads has one parameter that would require a conversion from int to double, and another that would leave the int as an int.
As such, each available overload has a worse sequence than the other for at least one parameter, so neither one can be a better match than the other.
In your second case, the situation's much simpler: the overloaded functions have one parameter apiece, and a conversion would be required for one but not for the other, so the one that requires no conversion is clearly better.
The difference between the actual rules and what people tend to think based on only hearing about a "better match" becomes especially apparent if we have (for one example) an odd number of parameters. For example, consider a situation like this:
void f(int, double, double);
void f(double, int, int);
// ...
f(1, 2, 3);
In this case, if you think solely in terms of a "better match", the second overload might seem like an obvious choice: the first requires conversions on two arguments, and the second requires a conversion on only one argument, so that seems better.
But once we understand the real rules, we recognize that this is still ambiguous. Each of the available overloads is worse for at least one argument, therefore neither qualifies as a better match.
I would like to define a function void f(int i) that would only compile and do something when called as f(1) and would cause a compile error in all other cases. Is it possible to do with a template specialization?
As you mentioned in the comment that you would prefer not to use a template, and from the requirements you mentioned there, an option might be using an enum as argument, which means you limit the input to a given set of arguments:
enum argument { option1, option2, option3 };

void f(argument x) {
    // do stuff
}
(the example is using three allowed values, but you can use any number of possible values)
While not being exactly what you asked for (the signature of f changed), it basically serves the same purpose.
You can now call f with either option1, option2 or option3 as the input (or any other options you specify in argument). As an enum is implicitly convertible to int, you can use it as such in the function body; you can also specify the numeric values directly in argument:
enum argument { option1 = 20, option2 = 23, option3 = 12 };
Note that even though argument values can be implicitly converted to int, it does not work the other way around, so you will not be able to call f directly on int inputs, which is why this is not quite exactly what you asked for.
Note that while you can use static_cast to convert an int to argument, this is dangerous, because it will again compile for any input, and it might even invoke undefined behavior depending on the value you cast.
I have code:
int SomeClass::sum(int x)
{
    return x += x;
}

int SomeClass::sum(int & x)
{
    return x += x;
}
....
int num = 0;
int result = sum(num);
that does not work. How can I use both functions and indicate which of them I want to use when I call them?
You can provide two different overloads taking int& and const int&, which might somehow meet your needs...
But the whole code is a bit strange... in the function that takes the argument by value, you are modifying it (+=) when it probably makes sense to only read it: return x + x;. In the overload that takes the reference, you are both modifying the argument and returning the new value, which is odd.
Other than that, sum is a horrible name for a function that multiplies by 2.
You cannot have such functions in C++. They will have to be named differently, for instance sumByCopy and sumByRef. How would you expect the compiler to decide which one you are referring to at each call site?
You can declare these overloads and the code will compile, as long as you never call either of them with an lvalue. However, sum(num) is ambiguous: both overloads match an lvalue equally well (a call with an rvalue, such as sum(5), would still resolve to sum(int), since an rvalue cannot bind to a non-const int&). So there is no point in providing these overloads; this kind of overloading is useless and violates good practice.
I have a class called FileProc that runs File IO operations. In one instance I have declared two functions (which are sub-functions to operator= functions), both decisively different:
const bool WriteAmount(const std::string &Arr, const long S)
{
    /* Do some string conversion */
    return true;
}

const bool WriteAmount(const char Arr[], unsigned int S)
{
    /* Do some string conversion */
    return true;
}
If I make a call with a 'char string' to WriteAmount, it reports an ambiguity error, saying it is confused between WriteAmount for char string and WriteAmount for std::string. I know what is occurring under the hood: it's attempting to use the std::string constructor to implicitly convert the char string into a std::string. But I don't want this to occur for WriteAmount (i.e. I don't want any implicit conversion occurring in calls to these functions, given each one is optimised for its role).
My question is: for consistency, without changing the function format (i.e. not changing the number of arguments or the order they appear in) and without altering the standard library, is there any way to prevent implicit conversion for the functions in question?
I forgot to add, preferably without typecasting, as this will be tedious on function calls and not user friendly.
You get the ambiguity because your second parameters differ. Trying to call it with long x = ...; WriteAmount("foo", x) raises an ambiguity because the second argument matches the first overload better, but the first argument matches the second overload better.
Make the second parameter the same type in both overloads and you will get rid of the ambiguity: the second argument then matches both overloads equally well, and the first argument matches the second overload better.
Can't you change the second argument and cast it to unsigned int? Then it should not be able to use the first overload. I have not coded in C++ for ages...
I've come across some C++ code that looks like this (simplified for this post):
(Here's the function prototype located in someCode.hpp)
void someFunction(const double & a, double & b, const double c = 0, const double * d = 0);
(Here's the first line of the function body located in someCode.cpp that #include's someCode.hpp)
void someFunction(const double & a, double & b, const double c, const double * d);
Can I legally call someFunction using:
someFunction(*ptr1, *ptr2);
and/or
someFunction(*ptr1, *ptr2, val1, &val2);
where the variables ptr1, ptr2, val1, and val2 have been defined appropriately and val1 and val2 do not equal zero? Why or why not?
And if it is legal, is this syntax preferred vs overloading a function to account for the optional parameters?
Yes, this is legal; these are called default arguments. I would say it's preferable to overloading, since it involves less code.
Regarding your comment about const, that doesn't apply to the default value itself, it applies to the argument. If you have an argument of type const char* fruit = "apple", that doesn't mean it has to be called with a character pointer whose value is the same as the address of the "apple" string literal (which is good, since that would be hard to guarantee). It just means that it has to be called with a pointer to constant characters, and tells you that the function being called doesn't need to write to that memory, it is only read from.
Yes, the parameters are optional and when you don't pass them, the given default values will be used.
Using default parameter values instead of overloading has advantages and disadvantages. The advantage is less typing in both the interface and the implementation. The disadvantage is that the default value is part of the interface, with all its consequences: when you change the default value, you may, for example, need to recompile a lot of code instead of a single file as with overloading.
I personally prefer default parameters.
I'd like to expand a bit on whether Default Parameters are preferred over overloading.
Usually they are for all the reasons given in the other answers, most notably less boilerplate code.
There are also valid reasons that make overloading a better alternative in some situations:
Default values are part of the interface, changes might break clients (as #Juraj already noted)
Additionally, overloads make it easier to add extra (combinations of) parameters without breaking the (binary) interface.
Overloads are resolved at compile time, which can give the compiler better optimization (especially inlining) possibilities. E.g. if you have something like this:
void foo(Something* param = 0) {
    if (param == 0) {
        simpleAlgorithm();
    } else {
        complexAlgorithm(param);
    }
}
It might be better to use overloads.
Can I legally call someFunction using:
someFunction(*ptr1, *ptr2);
Absolutely! Yes, the other two parameters would take the default values you set in the header file, which is zero for both arguments.
But if you do supply the 3rd and the 4th argument to the function, then those values are considered instead of the default values.