Overload a method or use default values? - c++

I'm still relatively new to C++ and I can't seem to figure out the difference between the following two ways of coding a function that may take one parameter, or maybe two or three or more. Anyway, here's my point:
function overload:
int aClass::doSomething(int required)
{
//DO SOMETHING
}
int aClass::doSomething(int required, int optional)
{
//DO SOMETHING
}
How is this different from using a default value:
int aClass::doSomething(int required, int optional = 0)
{
//DO SOMETHING
}
I know in different circumstances one may be more suitable than another but what kinds of things should I be aware of when choosing between each of these options?

There are several technical reasons to prefer overloading to default arguments; they are well laid out in Google's C++ Style Guide in the Default Arguments section:
Function pointers are confusing in the presence of default arguments,
since the function signature often doesn't match the call signature.
Adding a default argument to an existing function changes its type,
which can cause problems with code taking its address. Adding function
overloads avoids these problems.
and:
default parameters may result in bulkier code since they are
replicated at every call-site -- as opposed to overloaded functions,
where "the default" appears only in the function definition.
On the positive side it says:
Often you have a function that uses default values, but occasionally
you want to override the defaults. Default parameters allow an easy
way to do this without having to define many functions for the rare
exceptions.
So your choice will depend on how relevant the negative issues are for your application.
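For illustration, here is a minimal sketch of the function-pointer problem the guide mentions; the default argument is not part of the function's type, so it is invisible through a pointer (doSomething here is just a stand-in):

#include <iostream>

int doSomething(int required, int optional = 0) { return required + optional; }

int main()
{
    std::cout << doSomething(1) << '\n';  // OK: the compiler expands this to doSomething(1, 0) at the call site

    // The default is not part of the function's type, so a pointer to it
    // must always be supplied with both arguments:
    int (*fp)(int, int) = doSomething;
    // fp(1);                             // error: too few arguments
    std::cout << fp(1, 0) << '\n';
}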

First off, you're talking about overloading, not overriding. Overriding is done for virtual functions in a derived class. Overloading refers to the same function name with a different signature.
The difference is logical: in the first case (two versions), the two functions can behave completely differently, whereas in the second case both calls share more or less the same logic. It's really up to you.

The compiler doesn't care which of these you use. Imagine that you wrote it as two constructors, and they ended up about 20 lines long. Further imagine that 19 of the lines were identical, and the different line read
foo = 0;
in one version and
foo = optional;
in the other. In this situation, using an optional parameter makes your code far more readable and understandable. In a language that didn't have optional parameters, you'd implement this by having the one-parameter version call the two-parameter version and pass zero to it as the second parameter.
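A minimal sketch of that forwarding pattern, using the names from the question and assuming foo is an int member of aClass:

int aClass::doSomething(int required)
{
    return doSomething(required, 0); // forward, so the "default" lives in exactly one place
}

int aClass::doSomething(int required, int optional)
{
    // ...the ~19 identical lines...
    foo = optional;                  // the only line that differs
    return foo;
}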
Now imagine a different pair of constructors or functions that again are about 20 lines long but are entirely different. For example, the second parameter is an ID: if it's provided, you look stuff up in the database, and if it's not, you set values to nullptr, 0, and so on. You could have a default value (-1 is popular for this), but then the body of the function would be full of
if (ID == -1)
{
foo = 0;
}
else
{
foo = DbLookup(ID);
}
which could be hard to read and would make the single function a lot longer than the two separate functions. I've seen functions with one giant if that essentially split the whole thing into two separate blocks with no common code, and I've seen the same condition tested 4 or 5 times as a calculation progresses. Both are hard to read.
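For contrast, a sketch of the two-overload version of that second scenario (DbLookup and foo are hypothetical, as above):

aClass::aClass()         // no ID supplied: plain defaults
{
    foo = 0;
}

aClass::aClass(int ID)   // ID supplied: populate from the database
{
    foo = DbLookup(ID);
}

Each body stays short, and the -1 sentinel disappears entirely.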
That's the thing about C++. There are lots of ways to accomplish most things. But those different ways serve different purposes, and once you "get" the subtle differences, you will write better code. In this case "better" means shorter, faster (all those ifs cost execution time), and more expressive - people reading it can understand your intentions quickly.

You are making use of the overloading feature if you provide several constructors. The advantage in this case is that each constructor can react differently to the arguments passed to it. If that is important, use overloading.
If you can provide decent default values for your parameters and these wouldn't affect the proper running of your code, use default parameters.
See here for a thread on SO.

Related

Is there a way to pass an unknown number of arguments to a function?

Right now, I am trying to call a function in C++ through a Json object. The Json object would provide me with the name of the callee function and all the parameters. I will be able to extract the parameters using a for loop, but I am not sure how I can pass them in. A for loop only allows me to pass arguments one by one, and I did not find a way to call a function other than passing in all the arguments at once.
I've made a temporary solution of:
if (parameter_count == 1)
func(param_1);
if (parameter_count == 2)
func(param_1, param_2);
...
This solution would not work for all cases, since it only works for functions with a limited number of arguments (depending on how many ifs I write). Is there a better way to do this? Thanks!
EDIT: Sorry if I was being unclear. I do not know anything about func. I will be reading func from DLL based on its string name. Since I can't really change the function itself, I wouldn't be able to pass in a vector or struct directly.
Or perhaps did I have the wrong understanding? Are we allowed to pass in a single vector in place of a lot of parameters?
Sorry for making a mess through so many edits on this question. Brandon's solution with libffi works. Thanks!
So the problem as I understand it is that you have a void * pointer (which would come from your platform's DLL loading code) which "secretly" is a pointer to a function with a signature which is only known at runtime. You'd like to call this function at runtime with specified arguments.
Unfortunately, this is not possible to do cleanly with standard C++ alone. C++ cannot work with types that are not present in the program at compile-time, and since there is an infinite number of potential function signatures involved here there is no way to compile them all in.
What you'll want to do instead is manually set up the stack frame on your call stack and then jump to it, either via inline assembly or via some library or compiler extension that accomplishes this for your platform.
Here is a simple example of doing this via inline assembly. (To do this in general you will need to learn your platform's calling convention in detail, and needless to say this will constrain your program to the platform(s) you've implemented this for.)
I haven't actually tried it, but gcc has a compiler extension __builtin_apply that is apparently just meant to forward the arguments from one method wholesale to another but which could perhaps be used to accomplish something like this if you learned the (apparently opaque) description of the method.
[Update: Apparently I missed this in the comments, but Brandon mentioned libffi, a library which implements a bunch of platforms' calling conventions. This sounds like it might be the best option if you want to take this sort of approach.]
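For illustration only, a minimal, untested sketch of what a libffi call might look like for one runtime-described signature (a function taking a single int and returning void); in the real use case the ffi_type array would be built from the types extracted from the JSON:

#include <ffi.h>

// Hypothetical: fn came from the DLL and is known, at runtime, to be void(int).
void call_void_int(void (*fn)(int), int value)
{
    ffi_cif   cif;
    ffi_type *arg_types[1]  = { &ffi_type_sint };
    void     *arg_values[1] = { &value };

    if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_void, arg_types) == FFI_OK)
        ffi_call(&cif, FFI_FN(fn), nullptr, arg_values);
}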
A final option would be to constrain the allowed signatures of your functions to a specified list, e.g. something like
switch (mySignature)
{
case VOID_VOID:
reinterpret_cast<void (*)()>(myPtr)();
break;
case VOID_INT:
reinterpret_cast<void (*)(int)>(myPtr)(my_int_arg_1);
break;
// ...
}
(Syntax of the above may not be 100% correct; I haven't tested it yet.) Whether this approach is sensible for your purposes depends on what you're doing.

Default arguments vs overloads, when to use which

In Kotlin there are two ways to express an optional parameter, either by specifying default argument value:
fun foo(parameter: Any, option: Boolean = false) { ... }
or by introducing an overload:
fun foo(parameter: Any) = foo(parameter, false)
fun foo(parameter: Any, option: Boolean) { ... }
Which way is preferred in which situations?
What is the difference for consumers of such function?
In Kotlin code calling other Kotlin code, optional parameters tend to be the norm over using overloads. Using optional parameters should be your default behavior.
Special cases FOR using defaulted values:
As a general practice, or if unsure, use default arguments over overloads.
If you want the default value to be seen by the caller, use default values. They will show up in IDE tooltips (e.g. IntelliJ IDEA) and let the caller know they are being applied as part of the contract: calling foo() will default some values if arguments are omitted for x and y.
Whereas doing the same thing with function overloads hides this useful information and just presents a much messier set of signatures.
using default values causes bytecode generation of two functions, one with all parameters specified and another that is a bridge function that can check and apply missing parameters with their defaulted values. No matter how many defaulted parameters you have, it is always only two functions. So in a total-function-count constrained environment (i.e. Android), it can be better to have just these two functions instead of a larger number of overloads that it would take to accomplish the same job.
Cases where you might not want to use default argument values:
When you want another JVM language to be able to use the defaulted values, you either need to use explicit overloads or use the @JvmOverloads annotation, which:
For every parameter with a default value, this will generate one additional overload, which has this parameter and all parameters to the right of it in the parameter list removed.
You have a previous version of your library and for binary API compatibility adding a default parameter might break compatibility for existing compiled code whereas adding an overload would not.
You have a previous existing function:
fun foo() = ...
and you need to retain that function signature, but you also want to add another with the same signature but additional optional parameter:
fun foo() = ...
fun foo(x: Int = 5) = ... // can never be called using the default value
You will not be able to use the default value in the 2nd version (other than via reflection callBy). Instead all foo() calls without parameters still call the first version of the function. So you need to instead use distinct overloads without the default or you will confuse users of the function:
fun foo() = ...
fun foo(x: Int) = ...
You have arguments that may not make sense together, and therefore overloads allow you to group parameters into meaningful coordinated sets.
Calling methods with default values has to do another step to check which values are missing and apply the defaults and then forward the call to the real method. So in a performance constrained environment (i.e. Android, embedded, real-time, billion loop iterations on a method call) this extra check may not be desired. Although if you do not see an issue in profiling, this might be an imaginary issue, might be inlined by the JVM, and may not have any impact at all. Measure first before worrying.
Cases that don't really support either case:
In case you are reading general arguments about this from other languages...
in a C# answer for this similar question, the esteemed Jon Skeet mentions that you should be careful using defaults if they could change between builds, and that would be a problem. In C# the defaulting is at the call site, whereas in Kotlin, for non-inlined functions, it is inside the (bridge) function being called. Therefore in Kotlin changing a default value has the same impact whether the defaulting is hidden or explicit, and this argument should not affect the decision.
the C# answer also says that if team members have opposing views about the use of defaulted arguments, then maybe you shouldn't use them. This should not be applied to Kotlin: they are a core language feature, used in the standard library since before 1.0, and there is no support for restricting their use. The opposing team members should default to using defaulted arguments unless they have a definitive case that makes them unusable. In C#, by contrast, the feature was introduced much later in the life cycle of the language and therefore had a sense of more "optional adoption".
Let's examine how functions with default argument values are compiled in Kotlin to see if there's a difference in method count. It may differ depending on the target platform, so we'll look into Kotlin for JVM first.
For the function fun foo(parameter: Any, option: Boolean = false) the following two methods are generated:
First is foo(Ljava/lang/Object;Z)V which is being called when all arguments are specified at a call site.
Second is synthetic bridge foo$default(Ljava/lang/Object;ZILjava/lang/Object;)V. It has 2 additional parameters: Int mask that specifies which parameters were actually passed and an Object parameter which currently is not used, but reserved for allowing super-calls with default arguments in the future.
That bridge is called when some arguments are omitted at a call-site. The bridge analyzes the mask, provides default values for omitted arguments and then calls the first method now specifying all arguments.
When you place the @JvmOverloads annotation on a function, additional overloads are generated, one per argument with a default value. All these overloads delegate to the foo$default bridge. For the foo function the following additional overload will be generated: foo(Ljava/lang/Object;)V.
Thus, from the method-count point of view, when a function has only one parameter with a default value it doesn't matter whether you use overloads or default values: you'll get two methods either way. But if there's more than one optional parameter, using default values instead of overloads will result in fewer generated methods.
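As a hedged illustration (the retries parameter is invented here only to show a second default), the difference in method count only appears once there is more than one defaulted parameter:

@JvmOverloads
fun foo(parameter: Any, option: Boolean = false, retries: Int = 3) { /* ... */ }

// Without @JvmOverloads, the JVM sees two methods regardless of how many defaults exist:
//   foo(Ljava/lang/Object;ZI)V                              - all arguments supplied
//   foo$default(Ljava/lang/Object;ZIILjava/lang/Object;)V   - synthetic bridge
// @JvmOverloads additionally generates one overload per defaulted parameter,
// each delegating to the bridge:
//   foo(Ljava/lang/Object;Z)V
//   foo(Ljava/lang/Object;)V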
Overloads could be preferred when the implementation of a function gets simpler when a parameter is omitted.
Consider the following example:
fun <T> compare(v1: T, v2: T, ignoreCase: Boolean = false) =
if (ignoreCase)
internalCompareWithIgnoreCase(v1, v2)
else
internalCompare(v1, v2)
When it is called like compare(a, b) and ignoreCase is omitted, you actually pay twice for not using ignoreCase: first when the arguments are checked and default values are substituted for the omitted ones, and second when ignoreCase is checked in the body of compare and the call branches to internalCompare based on its value.
Adding an overload will get rid of these two checks. Also a method with such simple body is more likely to be inlined by JIT compiler.
fun <T> compare(v1: T, v2: T) = internalCompare(v1, v2)

How does c++11 resolve constexpr into assembly?

The basic question:
Edit: v-The question-v
class foo {
public:
constexpr foo() { }
constexpr int operator()(const int& i) { return int(i); }
};
Performance is a non-trivial issue. How does the compiler actually compile the above? I know how I want it to be resolved, but how does the specification actually specify it will be resolved?
1) Seeing that the type int has a constexpr constructor, does it create an int object and compile the string of bytes that make up the type directly into the code?
2) Replace any calls to the overload with a call to int's constructor (inlining the call), even though for some unknown reason int doesn't have constexpr constructors?
3) Create a function, call the function, and have that function call int's constructor?
Why I want to know, and how I plan to use the knowledge
edit:v-Background only-v
The real library I'm working with uses template arguments to decide how a given type should be passed between functions. That is, by reference or by value because the exact size of the type is unknown. It will be a user's responsibility to work within the limits I give them, but I want these limits to be as light and user friendly as I can sanely make them.
I expect a simple single-byte character to be passed around, in which case it should be passed by value. But I do not bar a 300-megabyte behemoth that does several minutes of recalculation every time a copy constructor is invoked, in which case passing by reference makes more sense. I have only a list of requirements that a type must comply with, not a set cap on what a type can or cannot do.
Why I want to know the answer to my question is so I can in good faith make a function object that accepts this unknown template argument and then makes a decision about how, when, or even how much of an object should be copied, via a virtual member function and a pointer allocated with new if so required. If the compiler resolves constexpr badly, I need to know so I can abandon this line of thought and/or find a new one. Again, it will be a user's responsibility to work within the limits I give them, but I want these limits to be as light and user friendly as I can sanely make them.
Edit: Thank you for your answers. The only real question was the second sentence, and it has now been answered. Everything else is background. If more background is required, allow me to restate the above:
I have a template with four arguments. The goal of the template is a routing protocol, be that TCP/IP (unlikely) or node-to-node within a game (possible). The first two are for data storage; they have no requirement beyond a list of operators each must support. The last two define how the data is passed within the template. By default this is by reference; for performance and freedom of use, these can be changed to pass information by value at a user's request.
Each is expected to be a single byte long. But in the case of a metric for an EIGRP- or OSPF-like protocol, the second template argument could be a compound of a dozen or more different variables, each taking a non-trivial time to copy or recompute.
For ease of use I am investigating the use of a function object that accepts the third and fourth template arguments to handle special cases and polymorphic classes that would otherwise fail to function or copy correctly. The goal is to not force a user to rebuild their objects from scratch. This would require planning for virtual functions to perform deep copies, or any number of other unknown oddities. The usefulness of the function object depends on how sanely a compiler can be depended on not to generate a cascade of function calls.
More helpful I hope?
The C++11 standard doesn't say anything about how constexpr will be compiled down to machine instructions. The standard just says that expressions that are constexpr may be used in contexts where a compile time constant value is required. How any particular compiler chooses to translate that to executable code is an implementation issue.
Now in general, with optimizations turned on you can expect a reasonable compiler to not execute any code at runtime for many uses of constexpr but there aren't really any guarantees. I'm not really clear on what exactly you're asking about in your example so it's hard to give any specifics about your use case.
constexpr expressions are not special. For all intents and purposes, they're basically const unless the context they're used in is constexpr and all variables/functions are also constexpr. It is implementation defined how the compiler chooses to handle this. The Standard never deals with implementation details because it speaks in abstract terms.
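As a rough sketch of what is and is not promised: when the call appears in a context that requires a constant expression it must be evaluated at compile time, but in ordinary runtime code the compiler is merely permitted, not required, to fold it (foo below is the class from the question, with the missing semicolon added):

class foo {
public:
    constexpr foo() { }
    constexpr int operator()(const int& i) { return int(i); }
};

int main()
{
    constexpr foo f;
    static_assert(f(42) == 42, "must be evaluated at compile time");

    int runtime_value = 42;
    return f(runtime_value); // here the compiler may emit a call, inline it,
                             // or fold it entirely; the standard does not say
}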

Is there a standard way of determining the number of va_args?

I'm experimenting with variable arguments in C++, using va_args. The idea is useful, and is indeed something I've used a lot in C# via the params functionality. One thing that frustrates me is the following excerpt from the documentation for va_arg:
Notice also that va_arg does not determine either whether the retrieved argument is the last argument passed to the function (or even if it is an element past the end of that list).
I find it hard to believe that there is no way to programmatically determine the number of variable arguments passed to the function from within that function itself. I would like to perform something like the following:
void fcn(int arg1, ...)
{
va_list argList;
va_start(argList, arg1);
int numRemainingParams = 0; // wanted: a function that returns the number of remaining parameters
for (int i = 0; i < numRemainingParams; ++i)
{
//do stuff with params
}
va_end(argList);
}
To reiterate, the documentation above suggests that va_arg doesn't determine whether the retrieved arg is the last in the list. But I feel this information must be accessible in some manner.
Is there a standard way of achieving this?
I find it hard to believe that there is no way to programmatically determine the number of variable arguments passed to the function from within that function itself.
Nonetheless, it is true. C/C++ do not put markers on the end of the argument list, so the called function really does not know how many arguments it is receiving. If you need to mark the end of the arguments, you must do so yourself by putting some kind of marker at the end of the list.
The called function also has no idea of the types or sizes of the arguments provided. That's why printf and friends force you to specify the precise datatype of the value to interpolate into the format string, and also why you can crash a program by calling printf with a bad format string.
Note that parameter passing is specified by the ABI for a particular platform, not by the C++/C standards. However, the ABI must allow the C++/C standards to be implementable. For example, an ABI might want to pass parameters in registers for efficiency, but it might not be possible to implement va_args easily in that case. So it's possible that arguments are also shadowed on the stack. In almost no case is the stack marked to show the end of the argument list, though, since the C++/C standards don't require this information to be made available, and it would therefore be unnecessary overhead.
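A minimal sketch of the marker approach described above; the -1 terminator is just a convention chosen for this example:

#include <cstdarg>

// Sums int arguments until it sees the sentinel value -1.
int sum_until_sentinel(int first, ...)
{
    int total = 0;
    va_list args;
    va_start(args, first);
    for (int v = first; v != -1; v = va_arg(args, int))
        total += v;
    va_end(args);
    return total;
}

// usage: sum_until_sentinel(1, 2, 3, -1) == 6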
The way variable arguments work in C and C++ is relatively simple: the arguments are just pushed on the stack, and it is the callee's responsibility to somehow figure out what arguments there are. There is nothing in the standard which provides a way to determine the number of arguments. As a result, the number of arguments is determined from some context information, e.g., the number of elements referenced in a format string.
Individual compilers may know how many elements there are but there is no standard interface to obtain this value.
What you could do instead, however, is to use variadic templates: you can determine very detailed information on the arguments being passed to the function. The interface looks different and it may be necessary to channel the arguments into some sort of data structure but on the upside it would also work with types you cannot pass using variable arguments.
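For instance, a variadic-template sketch in which the callee knows both the number and the types of its arguments at compile time:

#include <cstddef>
#include <iostream>

template <typename... Args>
void fcn(Args&&... args)
{
    constexpr std::size_t count = sizeof...(Args); // number of arguments, known at compile time
    std::cout << "called with " << count << " arguments\n";
}

// usage: fcn(1, 2.5, "three"); prints "called with 3 arguments"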
No, there isn't. That's why variable arguments are not safe. They're a part of C, which lacks the expressiveness to achieve type safety for "convenient" variadic functions. You have to live with the fact that C contains constructions whose very correctness depends on values and not just on types. That's why it is an "unsafe language".
Don't use variable arguments in C++. It is a much stronger language that allows you to write equally convenient code that is safe.
No, there's no such way. If you have such a need, it's probably best to pack those function parameters in a std::vector or a similar collection which can be iterated.
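Something along these lines, as a sketch:

#include <vector>

void fcn(const std::vector<int>& params)
{
    for (int p : params)
    {
        // do stuff with p
    }
}

// usage: fcn({1, 2, 3});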
The variable argument list is a very old concept inherited from the C history of C++. It dates back to the time where C programmers usually had the generated assembler code in mind.
At that time the compiler did not check at all whether the data you passed to a function when calling it matched the data types the function expected to receive. It was the programmer's responsibility to get that right. If, for example, the caller called the function with a char and the function expected an int, the program crashed, although the compiler didn't complain.
Today's type checking prevents these errors, but with a variable argument list you go back to those old concepts including all risks. So, don't use it if you can avoid it somehow.
The fact that this concept is several decades old is probably the reason that it feels wrong compared to modern concepts of safe code.

Why aren't named parameters used more often?

I have designed a parameter class which allows me to write code like this:
//define parameter
typedef basic_config_param<std::string> name;
void test(config_param param) {
if(param.has<name>()) { //by name
cout << "Your name is: " << param.get<name>() << endl;
}
unsigned long & n = param<ref<unsigned long> >(); //by type
if(param.get<value<bool> >(true)) { //return true if not found
++n;
}
}
unsigned long num = 0;
test(( name("Special :-)"), ref<unsigned long>(num) )); //easy to add a number parameter
cout << "Number is: " << num; //prints 1
The performance of the class is pretty fast: everything is just a reference on the stack. And to save all the information I use an internal buffer of up to 5 arguments before it goes to heap allocation to decrease the size of every single object, but this can be easily changed.
Why isn't this syntax used more often, overloading operator,() to implement named parameters? Is it because of the potential performance penalty?
One other way is to use the named idiom:
object.name("my name").ref(num); //every object method returns a reference to itself, allow object chaining.
But, for me, overloading operator,() looks much more like "modern" C++, as long as you don't forget to use double parentheses. The performance does not suffer much either; even if it is slower than a normal function call, the difference is negligible in most cases.
I am probably not the first one to come up with a solution like this, but why isn't it more common? I had never seen anything like the syntax above (my example) before I wrote a class which accepts it, but to me it looks perfect.
My question is why this syntax is not used more, overloading operator,() to implement named parameters.
Because it is counter-intuitive, non-human-readable, and arguably a bad programming practice. Unless you want to sabotage the codebase, avoid doing that.
test(( name("Special :-)"), ref<unsigned long>(num) ));
Let's say I see this code fragment for the first time. My thought process goes like this:
At first glance it looks like an example of "the most vexing parse" because you use double parentheses. So I assume that test is a variable, and have to wonder if you forgot to write the variable's type. Then it occurs to me that this thing actually compiles. After that I have to wonder if this is an instance of an immediately destroyed class of type test and you use lowercase names for all class types.
Then I discover it is actually a function call. Great.
The code fragment now looks like a function call with two arguments.
Now it becomes obvious to me that this can't be a function call with two arguments, because you used double parentheses.
So, NOW I have to figure what the heck is going on within ().
I remember that there is a comma operator (which I haven't ever seen in real C++ code during the last 5 years) which discards the previous argument. SO NOW I have to wonder what is that useful side effect of name(), and what the name() is - a function call or a type (because you don't use uppercase/lowercase letters to distinguish between class/function (i.e. Test is a class, but test is a function), and you don't have C prefixes).
After looking up name in the source code, I discover that it is class. And that it overloads the , operator, so it actually doesn't discard the first argument anymore.
See how much time is wasted here? Frankly, writing something like that can get you into trouble, because you use language features to make your code look like something that is different from what your code actually does (you make a function call with one argument look like it has two arguments, or like it is a variadic function). Which is a bad programming practice that is roughly equivalent to overloading operator+ to perform subtractions instead of additions.
Now, let's consider a QString example.
QString status = QString("Processing file %1 of %2: %3").arg(i).arg(total).arg(fileName);
Let's say I see it for the first time in my life. That's how my thought process goes:
There is a variable status of type QString.
It is initialized from a temporary object of type QString.
... after QString::arg method is called. (I know it is a method).
I look up .arg in the documentation to see what it does, and discover that it replaces %1-style entries and returns QString&. So the chain of .arg() calls instantly makes sense. Please note that something like QString::arg can be templated, and you'll be able to call it for different argument types without manually specifying the type of argument in <>.
That code fragment now makes sense, so I move on to another fragment.
looks much more like "modern" C++
"New and shiny" sometimes means "buggy and broken" (slackware linux was built on a somewhat similar idea). It is irrelevant if your code looks modern. It should be human-readable, it should do what it is intended to do, and you should waste the minimum possible amount of time in writing it. I.e. you should (personal recommendation) aim to "implement a maximum amount of functionality in a minimum amount of time at a minimum cost (includes maintenance)", but receive the maximum reward for doing it. Also it makes sense to follow KISS principle.
Your "modern" syntax does not reduce development cost, does not reduce development time, and increases maintenance cost (counter-intuitive). As a result, this syntax should be avoided.
There is no necessity. Your dynamic dispatch (behaving differently depending on the logical type of the argument) can be implemented a) much more easily and b) much faster using template specialisation.
And if you actually require a distinction based on information that is only available on runtime, I'd try to move your test function to be a virtual method of the param type and simply use dynamic binding (that's what it's for, and that's what you're kind of reinventing).
The only cases where this approach would be more useful may be multiple-dispatch scenarios, where you want to reduce code and can find some similarity patterns.
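As a sketch of that compile-time alternative (the behaviour chosen per type is illustrative, not taken from the question's library):

#include <iostream>
#include <string>

template <typename T>
void test(const T& value)                          // generic case
{
    std::cout << "generic: " << value << '\n';
}

template <>
void test<std::string>(const std::string& value)   // specialised for names
{
    std::cout << "Your name is: " << value << '\n';
}

// test(std::string("Special :-)")); and test(42UL); pick their behaviour at compile time.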