In Kotlin there are two ways to express an optional parameter: either by specifying a default argument value:
fun foo(parameter: Any, option: Boolean = false) { ... }
or by introducing an overload:
fun foo(parameter: Any) = foo(parameter, false)
fun foo(parameter: Any, option: Boolean) { ... }
Which way is preferred in which situations?
What is the difference for consumers of such a function?
In Kotlin code calling other Kotlin code, default arguments tend to be the norm rather than overloads. Using default arguments should be your default behavior.
Special cases FOR using defaulted values:
As a general practice, or if unsure, use default arguments over overloads.
If you want the default value to be visible to the caller, use default values. They show up in IDE tooltips (e.g. IntelliJ IDEA) and let the caller know they are applied as part of the contract; for example, when calling foo() the IDE will show that default values are filled in for x and y if they are omitted.
Doing the same thing with function overloads hides this useful information and instead presents a longer, messier list of signatures (see the sketch below).
Using default values causes bytecode generation of two functions: one with all parameters specified, and a synthetic bridge function that checks for missing parameters and applies their default values. No matter how many defaulted parameters you have, it is always only two functions. So in a total-method-count-constrained environment (e.g. Android), it can be better to have just these two functions instead of the larger number of overloads it would take to accomplish the same job.
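As a minimal sketch of both points above (the parameter names x and y come from the example; the types and default values here are assumptions for illustration only):
fun foo(x: Int = 1, y: Double = 2.0) { /* ... */ }
// The caller sees the defaults as part of the signature (and in IDE tooltips),
// and the compiler emits only two JVM methods: foo itself plus the synthetic
// foo$default bridge, no matter how many parameters are defaulted.

// The overload alternative hides the default values from the caller and needs
// one hand-written method per supported combination:
// fun foo() = foo(1, 2.0)
// fun foo(x: Int) = foo(x, 2.0)
// fun foo(x: Int, y: Double) { /* ... */ }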
Cases where you might not want to use default argument values:
When you want another JVM language to be able to use the default values, you either need to write explicit overloads or use the @JvmOverloads annotation, which:
For every parameter with a default value, this will generate one additional overload, which has this parameter and all parameters to the right of it in the parameter list removed.
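For example (a sketch with hypothetical function and parameter names):
@JvmOverloads
fun createClient(host: String, port: Int = 80, secure: Boolean = false) { /* ... */ }
// In addition to the full createClient(String, Int, Boolean) and the synthetic
// createClient$default bridge, @JvmOverloads generates one extra overload per
// defaulted parameter, callable directly from Java:
//   createClient(host)        // port and secure use their defaults
//   createClient(host, port)  // secure uses its default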
You have a previous version of your library and, for binary API compatibility, adding a default parameter might break compatibility for already-compiled code, whereas adding an overload would not (see the sketch below).
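A sketch of why (reusing the foo example from this question): replacing a function with a defaulted version changes the JVM signature of the one generated method, so already-compiled callers no longer link, while an added overload leaves the old method in place.
// Version 1 of the library:
fun foo(parameter: Any) { /* ... */ }                // compiles to foo(Ljava/lang/Object;)V

// Version 2a: adding a default parameter to the existing function:
// fun foo(parameter: Any, option: Boolean = false)  // compiles to foo(Ljava/lang/Object;Z)V + foo$default
// Already-compiled callers of foo(Ljava/lang/Object;)V now fail with NoSuchMethodError.

// Version 2b: adding an overload instead keeps the old method and stays binary compatible:
fun foo(parameter: Any, option: Boolean) { /* ... */ }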
You have a previous existing function:
fun foo() = ...
and you need to retain that function signature, but you also want to add another with the same signature but additional optional parameter:
fun foo() = ...
fun foo(x: Int = 5) = ... // never can be called using default value
You will not be able to use the default value in the 2nd version (other than via reflection's callBy); all foo() calls without parameters still resolve to the first version of the function. So you need to use distinct overloads without the default, or you will confuse users of the function:
fun foo() = ...
fun foo(x: Int) = ...
You have arguments that may not make sense together, in which case overloads allow you to group the parameters into meaningful, coordinated sets (see the sketch below).
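A hypothetical sketch: separate overloads keep mutually exclusive options apart, whereas one signature with defaults would let a caller combine options that make no sense together.
class RetryPolicy(val maxAttempts: Int)

// Each overload is a coherent set of parameters:
fun connect(host: String, timeoutMillis: Long) { /* ... */ }
fun connect(host: String, retryPolicy: RetryPolicy) { /* ... */ }

// A single defaulted signature would allow conflicting combinations:
// fun connect(host: String, timeoutMillis: Long = 30_000, retryPolicy: RetryPolicy? = null)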
Calling a method with default values takes an extra step: check which values are missing, apply the defaults, and then forward the call to the real method. So in a performance-constrained environment (e.g. Android, embedded, real-time, or a method called in a billion-iteration loop) this extra check may not be desired. That said, if you do not see an issue when profiling, this might be an imaginary problem: the check might be inlined by the JVM and may have no impact at all. Measure first before worrying.
Cases that don't really support either side:
In case you are reading general arguments about this from other languages...
In a C# answer to a similar question, the esteemed Jon Skeet mentions that you should be careful using defaults if they could change between builds, since that would be a problem. In C# the defaulting happens at the call site, whereas in Kotlin (for non-inlined functions) it happens inside the bridge function being called. Therefore in Kotlin the impact of a change is the same whether the defaulting is hidden behind an overload or explicit, so this argument should not affect the decision.
The same C# answer also says that if team members have opposing views about the use of defaulted arguments, then maybe you shouldn't use them. That should not be applied to Kotlin: default arguments are a core language feature, used in the standard library since before 1.0, and there is no support for restricting their use. The opposing team members should default to using default arguments unless they have a definitive case that makes them unusable. In C#, by contrast, defaults were introduced much later in that language's life cycle and therefore carried more of a sense of "optional adoption".
Let's examine how functions with default argument values are compiled in Kotlin to see if there's a difference in method count. It may differ depending on the target platform, so we'll look into Kotlin for JVM first.
For the function fun foo(parameter: Any, option: Boolean = false) the following two methods are generated:
First is foo(Ljava/lang/Object;Z)V, which is called when all arguments are specified at the call site.
Second is synthetic bridge foo$default(Ljava/lang/Object;ZILjava/lang/Object;)V. It has 2 additional parameters: Int mask that specifies which parameters were actually passed and an Object parameter which currently is not used, but reserved for allowing super-calls with default arguments in the future.
That bridge is called when some arguments are omitted at a call-site. The bridge analyzes the mask, provides default values for omitted arguments and then calls the first method now specifying all arguments.
When you place the @JvmOverloads annotation on a function, additional overloads are generated, one per argument with a default value. All these overloads delegate to the foo$default bridge. For the foo function the following additional overload will be generated: foo(Ljava/lang/Object;)V.
Thus, from the method-count point of view, when a function has only one parameter with a default value it doesn't matter whether you use overloads or default values: you get two methods either way. But if there is more than one optional parameter, using default values instead of overloads results in fewer generated methods.
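To make the counts concrete, here is a sketch with a hypothetical function that has two parameters with default values:
fun bar(a: Any, b: Int = 0, c: Boolean = false) { /* ... */ }
// Default values: always 2 JVM methods, however many defaults there are:
//   bar(Ljava/lang/Object;IZ)V
//   bar$default(Ljava/lang/Object;IZILjava/lang/Object;)V

// Hand-written overloads covering the same call sites: 3 methods.
// fun bar(a: Any) = bar(a, 0, false)
// fun bar(a: Any, b: Int) = bar(a, b, false)
// fun bar(a: Any, b: Int, c: Boolean) { /* ... */ }

// Adding @JvmOverloads to the defaulted version gives 4 methods: the two above
// plus bar(Ljava/lang/Object;)V and bar(Ljava/lang/Object;I)V.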
Overloads can be preferred when the implementation of a function gets simpler when a parameter is omitted.
Consider the following example:
fun <T> compare(v1: T, v2: T, ignoreCase: Boolean = false) =
    if (ignoreCase)
        internalCompareWithIgnoreCase(v1, v2)
    else
        internalCompare(v1, v2)
When it is called like compare(a, b) with ignoreCase omitted, you actually pay twice for not using ignoreCase: first when the bridge checks which arguments were omitted and substitutes the defaults, and second when the body of compare checks ignoreCase and branches to internalCompare based on its value.
Adding an overload gets rid of both checks. Also, a method with such a simple body is more likely to be inlined by the JIT compiler.
fun <T> compare(v1: T, v2: T) = internalCompare(v1, v2)
Related
The Google C++ Style Guide says:
When defining a function, parameter order is: inputs, then outputs.
Basically, Google suggests a function parameter ordering like:
void foo(const Foo& input1, const Foo& input2, Foo* output);
However, my colleague suggested that the output should be put in the first position, because that way foo could accept default values, and most of the time an output would not use a default value. For example:
void foo(Foo* output, const Foo& input1, const Foo& input2 = default);
I think what he said makes sense. Or is there something we are missing here in terms of readability, performance, etc.? Why does the style guide suggest that outputs should be last?
The reason why this isn't a problem for the Google style guide is because default arguments are disallowed:
https://google-styleguide.googlecode.com/svn/trunk/cppguide.html#Default_Arguments
We do not allow default function parameters, except in limited situations as explained below. Simulate them with function overloading instead, if appropriate.
Pros
Often you have a function that uses default values, but occasionally you want to override the defaults. Default parameters allow an easy way to do this without having to define many functions for the rare exceptions. Compared to overloading the function, default arguments have a cleaner syntax, with less boilerplate and a clearer distinction between 'required' and 'optional' arguments.
Cons
Function pointers are confusing in the presence of default arguments, since the function signature often doesn't match the call signature. Adding a default argument to an existing function changes its type, which can cause problems with code taking its address. Adding function overloads avoids these problems. In addition, default parameters may result in bulkier code since they are replicated at every call-site -- as opposed to overloaded functions, where "the default" appears only in the function definition.
Decision
While the cons above are not that onerous, they still outweigh the (small) benefits of default arguments over function overloading. So except as described below, we require all arguments to be explicitly specified.
One specific exception is when the function is a static function (or in an unnamed namespace) in a .cc file. In this case, the cons don't apply since the function's use is so localized.
In addition, default function parameters are allowed in constructors. Most of the cons listed above don't apply to constructors because it's impossible to take their address.
Another specific exception is when default arguments are used to simulate variable-length argument lists.
// Support up to 4 params by using a default empty AlphaNum.
string StrCat(const AlphaNum &a,
const AlphaNum &b = gEmptyAlphaNum,
const AlphaNum &c = gEmptyAlphaNum,
const AlphaNum &d = gEmptyAlphaNum);
The basic question:
class foo {
public:
    constexpr foo() { }
    constexpr int operator()(const int& i) { return int(i); }
};
Performance is a non-trivial issue. How does the compiler actually compile the above? I know how I want it to be resolved, but how does the specification actually specify it will be resolved?
1) Seeing that the type int has a constexpr constructor, create an int object and compile the string of bytes that make up the object directly into the code?
2) Replace any calls to the overload with a call to int's constructor, as though for some unknown reason int didn't have constexpr constructors? (Inlining the call.)
3) Create a function, call the function, and have that function call int's constructor?
Why I want to know, and how I plan to use the knowledge (background only):
The real library I'm working with uses template arguments to decide how a given type should be passed between functions. That is, by reference or by value because the exact size of the type is unknown. It will be a user's responsibility to work within the limits I give them, but I want these limits to be as light and user friendly as I can sanely make them.
I expect a simple single-byte character to be passed around, in which case it should be passed by value. But I do not bar a 300-megabyte behemoth that does several minutes of recalculation every time a copy constructor is invoked, in which case passing by reference makes more sense. I have only a list of requirements that a type must comply with, not a set cap on what a type can or cannot do.
Why I want to know the answer to my question is so I can in good faith make a function object that accepts this unknown template type and then decides how, when, or even how much of an object should be copied, via a virtual member function and a pointer allocated with new if so required. If the compiler resolves constexpr badly, I need to know so I can abandon this line of thought and/or find a new one. Again, it will be a user's responsibility to work within the limits I give them, but I want these limits to be as light and user-friendly as I can sanely make them.
Edit: Thank you for your answers. The only real question was the second sentence, and it has now been answered. Everything else is background; if more is required, allow me to restate the above:
I have a template with four arguments. The goal of the template is a routing protocol, be that TCP/IP (unlikely) or node-to-node within a game (possible). The first two are for data storage; they have no requirement beyond a list of operators each must support. The last two define how the data is passed within the template. By default this is by reference; for performance and freedom of use, these can be changed to pass information by value at a user's request.
Each is expected to be a single byte long. But in the case of a metric for an EIGRP- or OSPF-like protocol, the second template argument could be a compound of a dozen or more different variables, each taking non-trivial time to copy or recompute.
For ease of use I am investigating a function object that accepts the third and fourth template arguments, to handle special cases and polymorphic classes that would otherwise fail to work or copy correctly. The goal is to not force users to rebuild their objects from scratch. This would require planning for virtual functions to perform deep copies, or any number of other unknown oddities. The usefulness of the function object depends on how sanely a compiler can be relied on not to generate a cascade of function calls.
More helpful I hope?
The C++11 standard doesn't say anything about how constexpr will be compiled down to machine instructions. The standard just says that expressions that are constexpr may be used in contexts where a compile time constant value is required. How any particular compiler chooses to translate that to executable code is an implementation issue.
Now in general, with optimizations turned on you can expect a reasonable compiler to not execute any code at runtime for many uses of constexpr but there aren't really any guarantees. I'm not really clear on what exactly you're asking about in your example so it's hard to give any specifics about your use case.
constexpr expressions are not special. For all intents and purposes, they're basically const unless the context they're used in is constexpr and all variables/functions are also constexpr. It is implementation defined how the compiler chooses to handle this. The Standard never deals with implementation details because it speaks in abstract terms.
I'm still relatively new to C++ and I can't seem to figure out the difference between the following two ways of coding a function that may take one parameter, or maybe two or three or more. Anyway, here's my point:
function overload:
int aClass::doSomething(int required)
{
//DO SOMETHING
}
int aClass::doSomething(int required, int optional)
{
//DO SOMETHING
}
how is this different to, default value:
int aClass::doSomething(int required, int optional = 0)
{
//DO SOMETHING
}
I know in different circumstances one may be more suitable than another but what kinds of things should I be aware of when choosing between each of these options?
There are several technical reasons to prefer overloading to default arguments; they are well laid out in Google's C++ Style Guide in the Default Arguments section:
Function pointers are confusing in the presence of default arguments,
since the function signature often doesn't match the call signature.
Adding a default argument to an existing function changes its type,
which can cause problems with code taking its address. Adding function
overloads avoids these problems.
and:
default parameters may result in bulkier code since they are
replicated at every call-site -- as opposed to overloaded functions,
where "the default" appears only in the function definition.
On the positive side it says:
Often you have a function that uses default values, but occasionally
you want to override the defaults. Default parameters allow an easy
way to do this without having to define many functions for the rare
exceptions.
So your choice will depend on how relevant the negative issues are for your application.
First off, you're talking about overloading, not overriding. Overriding is done for virtual functions in a derived class. Overloading refers to the same function name with a different signature.
The difference is logical - in the first case (2 versions), the two functions can behave completely different, whereas the second case will have more or less the same logic. It's really up to you.
The compiler doesn't care which of these you use. Imagine that you wrote it as two constructors, and they ended up about 20 lines long. Further imagine that 19 of the lines were identical, and the different line read
foo = 0;
in one version and
foo = optional;
in the other. In this situation, using an optional parameter makes your code far more readable and understandable. In another language that didn't have optional parameters, you'd implement this by having the one-parameter version call the two-parameter version and pass zero to it as the second parameter.
Now imagine a different pair of constructors or functions that again are about 20 lines long but are entirely different. For example the second parameter is an ID, and if it's provided, you look stuff up in the database and if it's not you set values to nullptr, 0, and so on. You could have a default value (-1 is popular for this) but then the body of the function would be full of
if (ID == -1)
{
foo = 0;
}
else
{
foo = DbLookup(ID);
}
which could be hard to read and would make the single function a lot longer than the two separate functions. I've seen functions with one giant if that essentially split the whole thing into two separate blocks with no common code, and I've seen the same condition tested 4 or 5 times as a calculation progresses. Both are hard to read.
That's the thing about C++. There are lots of ways to accomplish most things. But those different ways serve different purposes, and once you "get" the subtle differences, you will write better code. In this case "better" means shorter, faster (all those ifs cost execution time), and more expressive - people reading it can understand your intentions quickly.
You are making use of the overloading feature if you provide several constructors. The advantage in this case is that you can react differently to the passed arguments in each constructor. If that is important, use overloading.
If you can provide decent default values for your parameters and these wouldn't affect the proper running of your code, use default parameters.
See here for a thread on SO.
Suppose I have a DLL named "dll1" with 2 functions:
void f1(int a, int b, int c);
void f2(int a);
My program would take the function name, the DLL name and a "list" of parameters as input.
How would I call the appropriate function with its appropriate parameters? That is,
if input is
dll1
f1
list(5,8,9)
this would require me to call f1 with 3 parameters
if input was
dll1
f2
list(8)
it would require me to call f2 with one parameter
How would I call the function without knowing the number of parameters in advance?
Further clarification: how do I write code that will call any function with all its arguments by building the argument list dynamically, using some other source of information?
Since the generated code differs based on the number of parameters, you have two choices: you can write some code in assembly language to do the job (basically walk through the parameter list and push each on the stack before calling the function), or you can create something like an array of pointers to functions, one for each number of parameters you care about (e.g., 0 through 10). Most people find the latter a lot simpler to deal with (if only because it avoids using assembly language at all).
To solve the problem in general you need to know:
The calling conventions (stdcall, cdecl, fastcall, thiscall, etc.; incidentally, the latter two can be combined in MSVC++) that govern how the functions receive their parameters (e.g. in special registers, on the stack, or both), how they return values (same), and what they are allowed to trash (e.g. some registers).
Exact function prototypes.
You can find all this only in the symbol/debug information produced by the compiler and (likely to a lesser extent) the header file containing the prototypes for the functions in the DLL. There's one problem with the header file. If it doesn't specify the calling convention and the functions have been compiled with non-default calling conventions (via a compiler option), you have ambiguity to deal with. In either case you'll need to parse something.
If you don't have this information, the only option left is reverse engineering of the DLL and/or its user(s).
In order to correctly invoke an arbitrary function only knowing its prototype and calling convention at run time you need to construct code analogous to that produced by the compiler when calling this function when it's known at compile time. If you're solving the general problem, you'll need some assembly code here, not necessarily hand-written, run-time generated machine code is a good option.
Last but not least, you need some code to generate parameter values. This is most trivial with numeric types (ints, floats and the like) and arrays of them and most difficult with structures, unions and classes. Creating the latter on the fly may be at least as difficult as properly invoking functions. Don't forget that they may refer to other objects using pointers and references.
The general problem is solvable, but not cheaply. It's far easier to solve a few simple specific cases and maybe avoid the entire problem altogether by rewriting the functions to have less-variable parameters and only one calling convention OR by writing wrapper functions to do that.
You might want to check out the Named Parameter Idiom.
It uses method chaining to basically accomplish what you want.
It solves the problem where you know what a default set of arguments looks like, but you only need to customize a few of them, and not necessarily in the order in which they are declared.
If your clients know the signature at compile time, they can wrap it this way:
#include <utility>

template<class... Args>
void CallFunctionPointer(void* pf, Args&&... args)
{
    // Cast the raw pointer to a function pointer of the assumed signature.
    typedef void (*FunctionType)(Args...);
    FunctionType pf2 = reinterpret_cast<FunctionType>(pf);
    pf2(std::forward<Args>(args)...);
}
Note: if you pass the wrong number of parameters or the wrong type(s) of parameters, the behaviour is undefined.
Background:
In C/C++ you can cast a function pointer to any signature you want, however if you get it wrong behavior is undefined.
In your case there are two signatures you have mentioned:
void (*)(int)
and
void (*)(int, int, int)
When you load the function from the DLL it is your responsibility to make sure you cast it to the correct signature, with the correct number and types of parameters before you call it.
If you have control over the design of these functions, I would modify them to take a variable number of arguments. If the base type is always int, then just change the signature of all the functions to:
void (*)(int* begin, size_t n);
// begin points to an array of int of n elements
so that you can safely bind any of the functions to any number of arguments.
Is there a way to determine how many parameters a Lua function takes just before calling it from C/C++ code?
I looked at lua_Debug and lua_getinfo but they don't appear to provide what I need.
It may seem a bit like I am going against the spirit of Lua, but I really want to bulletproof the interface that I have between Lua and C++. When a C++ function is called from Lua code, the interface verifies that Lua has supplied the correct number of arguments and that the type of each argument is correct. If a problem is found with the arguments, a lua_error is issued.
I'd like to have similar error checking the other way around. When C++ calls a Lua function it should at least check that the Lua function doesn't declare more parameters than are necessary.
What you're asking for isn't possible in Lua.
You can define a Lua function with a set of arguments like this:
function f(a, b, c)
body
end
However, Lua imposes no restrictions on the number of arguments you pass to this function.
This is valid:
f(1,2,3,4,5)
The extra parameters are ignored.
This is also valid:
f(1)
The remaining parameters are assigned nil.
Finally, you can define a function that takes a variable number of arguments:
function f(a, ...)
At which point you can pass any number of arguments to the function.
See section 2.5.9 of the Lua reference manual.
The best you can do here is to add checks to your Lua functions to verify you receive the arguments you expect.
In Lua 5.2 you can determine the number of parameters, the number of upvalues, and whether the function accepts a variable number of arguments, by using the 'u' option with lua_getinfo() to fill the nups, nparams and isvararg fields of lua_Debug. This feature is not available in Lua 5.1.
I wouldn't do this on the Lua side unless you're in full control of Lua code you're validating. It is rather common for Lua functions to ignore extra arguments simply by omitting them.
One example is when we do not want to implement some methods, and use a stub function:
function do_nothing() end
full_api = {}
function full_api:callback(a1, a2) print(a1, a2) end
lazy_impl = {}
lazy_impl.callback = do_nothing
This allows you to save typing (and a bit of performance) by reusing available functions.
If you still want to do function argument validation, you have to statically analyze the code. One tool to do this is Metalua.
No, not within standard Lua. And, as Aaron Saarela is saying, it is somewhat outside the spirit of Lua as I understand it. The Lua way would be to make sure that the function itself treats nil as a sensible default (or converts it to a sensible default with something like name = name or "Bruce" before its first use). If there is no sensible default, the function should either throw an error or return a failure: if not name then error"Name required" end is a common idiom for the former, and if not name then return nil, "name required" end is a common idiom for the latter. By making the Lua side responsible for its own argument checks, you get that benefit regardless of whether the function is called from Lua or C.
That said, it is possible that your modules could maintain an attribute table indexed by function that contains the info you need to know. It would require maintenance, of course. It is also possible that MetaLua could be used to add some syntax sugar to create the table directly from function declarations at compile time. Before calling the Lua function, you would use it directly to look up any available attributes and use them to validate the call.
If you are concerned about bullet-proofing, you might want to control the function environment to use some care with what (if any) globals are available to the Lua side, and use lua_pcall() rather than lua_call() so that you catch any thrown errors.
The information you ask for is not available in all cases. For example, a Lua function might actually be implemented in C as a lua_CFunction. From Lua code there is no way to distinguish a pure Lua function from a lua_CFunction. And in the case of a lua_CFunction, the number of parameters is not exposed at all, since it's entirely dependent on the way the function is implemented.
On the other hand, what you can do is provide a system for functions writers (be it in pure Lua or in C) to advertise how many parameters their functions expect. After creating the function (function f(a, b, c) end) they would simply pass it to a global function (register(f, 3)). You would then be able to retrieve that information from your C++ code, and if the function didn't advertise its parameters then fallback to what you have now. With such a system you could even advertise the type expected by the parameters.