The Google C++ Style Guide says:
When defining a function, parameter order is: inputs, then outputs.
So basically Google suggests a function parameter ordering like:
void foo(const Foo& input1, const Foo& input2, Foo* output);
However, my colleague suggested that the output should come first, because that way foo could accept default values for its inputs, and an output would rarely want a default. For example:
void foo(Foo* output, const Foo& input1, const Foo& input2 = default);
I think his point makes sense. Or is there something we are missing here in terms of readability, performance, ...? Why does the style guide say the output should be last?
The reason this isn't a problem for the Google style guide is that default arguments are disallowed:
https://google-styleguide.googlecode.com/svn/trunk/cppguide.html#Default_Arguments
We do not allow default function parameters, except in limited situations as explained below. Simulate them with function overloading instead, if appropriate.
Pros
Often you have a function that uses default values, but occasionally you want to override the defaults. Default parameters allow an easy way to do this without having to define many functions for the rare exceptions. Compared to overloading the function, default arguments have a cleaner syntax, with less boilerplate and a clearer distinction between 'required' and 'optional' arguments.
Cons
Function pointers are confusing in the presence of default arguments, since the function signature often doesn't match the call signature. Adding a default argument to an existing function changes its type, which can cause problems with code taking its address. Adding function overloads avoids these problems. In addition, default parameters may result in bulkier code since they are replicated at every call-site -- as opposed to overloaded functions, where "the default" appears only in the function definition.
Decision
While the cons above are not that onerous, they still outweigh the (small) benefits of default arguments over function overloading. So except as described below, we require all arguments to be explicitly specified.
One specific exception is when the function is a static function (or in an unnamed namespace) in a .cc file. In this case, the cons don't apply since the function's use is so localized.
In addition, default function parameters are allowed in constructors. Most of the cons listed above don't apply to constructors because it's impossible to take their address.
Another specific exception is when default arguments are used to simulate variable-length argument lists.
// Support up to 4 params by using a default empty AlphaNum.
string StrCat(const AlphaNum &a,
              const AlphaNum &b = gEmptyAlphaNum,
              const AlphaNum &c = gEmptyAlphaNum,
              const AlphaNum &d = gEmptyAlphaNum);
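Applied to the original question: under this rule, the colleague's signature would instead be written with overloads, keeping the output last (a sketch, assuming some default value kDefaultFoo):

void foo(const Foo& input1, const Foo& input2, Foo* output);

// Overload standing in for "input2 = default"; the output stays last:
inline void foo(const Foo& input1, Foo* output) {
    foo(input1, kDefaultFoo, output);  // kDefaultFoo is a hypothetical default
}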
Consider this simple check for whether a (global) function is defined:
template <typename T>
concept has_f = requires ( const T& t ) { Function( t ); };
// later use in MyClass<T>:
if constexpr ( has_f<T> ) Function( value );
Unfortunately this allows implicit conversions, which is obviously a big risk for mess-ups.
Question: How to check if Function( const T& t ) 'explicitly' exists?
Something like
if constexpr ( std::is_same_v<decltype( Function( t ) ), void> )
should be free of implicit conversions, but I can't get it working.
Note: The point of the concept approach was to get rid of old 'detection patterns' and simplify.
Before explaining how to do this, I will explain why you shouldn't want to do any of this.
You mentioned "old 'detection patterns'" without adding any specifics as to what you are referring to. There are a fair number of idioms C++ users sometimes employ that can do something like detecting if a function takes a particular parameter. Which of these count as "detection patterns" by your reckoning is not clear.
However, the vast majority of these idioms exist to serve a specific, singular purpose: to see if a particular function call with a given set of arguments is valid, legal C++ code. They don't really care if a function exactly takes T; testing for T specifically is just how a few of those idioms work to produce the important information: namely, whether you can pass a T to said function.
Looking for a specific function signature was almost always a means to an end, not the final goal.
Concepts, particularly requires expressions, are the end itself. They allow you to ask the question directly. Because really, you don't care if Function has a parameter that takes a T; you care whether Function(t) is legitimate code or not. Exactly how that happens is an implementation detail.
The only reason I can think of that someone might want to constrain a template on an exact signature (rather than an argument match) is to defeat implicit conversion. But you really shouldn't try to break basic language features like that. If someone writes a type that is implicitly convertible to another, they have the right to the benefits of that conversion, as defined by the language. Namely, the ability to use it in many ways as if it were that other type.
That is, if Function(t) is what your constrained template code is actually going to do, then the user of that template has every right to provide code that makes that compile within the limits of the C++ language. Not within the limits of your personal ideas of what features are good or bad in that language.
Concepts are not like base classes, where you decide the exact signature for each method and the user must strictly abide by that. Concepts are patterns that constrain template definitions. Expressions in concept constraints are expressions that you expect to use in your template. You only put an expression in a concept if you plan on using it in your templates constrained by that concept.
You don't use a function signature; you call functions. So you constrain a concept on what functions can be called with which arguments. You're saying "you must let me do this", not "provide this signature".
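To make that concrete with the question's own has_f: the expression inside the requires-expression is exactly what the constrained template goes on to do (a minimal sketch):

template <has_f T>
void use(const T& value) {
    Function(value);  // the very call the concept checked for
}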
That having been said... what you want is not generally possible ;)
There are several mechanisms that you might employ to achieve it, but none of them do exactly what you want in all cases.
The name of a function resolves to an overload set consisting of all of the functions that could be called. This name can be converted into a pointer to a specific function signature if and only if that signature is one of the functions in the overload set. So in theory, you might do this:
template <typename T>
concept has_f = requires () { static_cast<void (*)(T const&)>(&Function); };
However, because the name Function is not dependent on T (as far as C++ is concerned), it must be resolved during the first pass of two-phase name lookup for templates. That means any and all Function overloads you intend to care about have to be declared before has_f is defined, not merely instantiated with an appropriate T.
I think this is sufficient to declare that this is non-functional as a solution. Even if it worked though, it would only "work" given 3 circumstances:
Function is known/required to be an actual function, rather than a global object with an operator() overload. So if a provider of T wants to provide a global functor instead of a regular function (for any number of reasons) this method will not work, even though Function(t) is 100% perfectly valid, legitimate, and does none of those terrible implicit conversions that for some reason must be stopped.
The expression Function(t) is not expected to use ADL to find the actual Function to call.
Function is not a template function.
And not one of these possibilities has anything to do with implicit conversions. If you're going to call Function(t), then it's 100% OK for ADL to find it, template argument deduction to instantiate it, or for the user to fulfill this with some global lambda.
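For instance, a provider might legitimately satisfy Function(t) with a global functor, which the address-based check rejects (a sketch with a hypothetical Widget type):

struct Widget {};

struct FunctionFn {
    void operator()(const Widget&) const {}
};
inline constexpr FunctionFn Function{};  // a global functor, not a function

// Function(Widget{}) is perfectly valid, but &Function is an object pointer,
// so static_cast<void (*)(const Widget&)>(&Function) does not compile.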
Your second-best bet is to rely on how overload resolution works. C++ only permits a single user-defined conversion in operator overloading. As such, you can create a type which will consume that one user-defined conversion in the function call expression in lieu of T. And that conversion should be a conversion to T itself.
You would use it like this:
template<typename T>
class udc_killer
{
public:
    // Will never be called; it only participates in overload resolution.
    operator T const&();
};

template <typename T>
concept has_f = requires () { Function(udc_killer<T>{}); };
This of course still leaves the standard conversions, so you can't differentiate between a function taking a float if T is int, or derived classes from bases. You also can't detect if Function has any default parameters after the first one.
Overall, you're still not detecting the signature, merely call-ability. Because that's all you should care about to begin with.
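A short sketch of what this trick does and doesn't catch (hypothetical Widget and Gadget types, building on the has_f above):

struct Widget {};
struct Gadget { operator Widget() const; };  // implicitly convertible to Widget

void Function(const Widget&);

static_assert(has_f<Widget>);   // udc_killer<Widget> -> Widget const& is the one
                                // allowed user-defined conversion
static_assert(!has_f<Gadget>);  // would need two user-defined conversions, so it is
                                // rejected even though Function(Gadget{}) compiles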
In Kotlin there are two ways to express an optional parameter, either by specifying default argument value:
fun foo(parameter: Any, option: Boolean = false) { ... }
or by introducing an overload:
fun foo(parameter: Any) = foo(parameter, false)
fun foo(parameter: Any, option: Boolean) { ... }
Which way is preferred in which situations?
What is the difference for consumers of such function?
In Kotlin code calling other Kotlin code, optional parameters tend to be the norm rather than overloads. Using optional parameters should be your default behavior.
Special cases FOR using defaulted values:
As a general practice, or if unsure -- use default arguments over overloads.
if you want the default value to be visible to the caller, use default values. They show up in IDE tooltips (e.g. IntelliJ IDEA) and let the caller know they are applied as part of the contract: a call to foo() that omits x and y visibly picks up their defaults.
Doing the same thing with function overloads hides this useful information and just presents a messier list of signatures.
using default values causes bytecode generation of two functions: one with all parameters specified, and a bridge function that checks for missing parameters and applies their defaulted values. No matter how many defaulted parameters you have, it is always only two functions. So in a total-function-count constrained environment (e.g. Android), it can be better to have just these two functions instead of the larger number of overloads it would take to accomplish the same job.
Cases where you might not want to use default argument values:
When you want another JVM language to be able to use the defaulted values, you either need to write explicit overloads or use the @JvmOverloads annotation, which:
For every parameter with a default value, this will generate one additional overload, which has this parameter and all parameters to the right of it in the parameter list removed.
You have a previous version of your library and, for binary API compatibility, adding a default parameter might break compatibility for existing compiled code, whereas adding an overload would not.
You have a previous existing function:
fun foo() = ...
and you need to retain that function signature, but you also want to add another with the same signature but additional optional parameter:
fun foo() = ...
fun foo(x: Int = 5) = ... // can never be called using the default value
You will not be able to use the default value in the 2nd version (other than via reflection callBy). Instead, all foo() calls without parameters still call the first version of the function. So you need to use distinct overloads without the default, or you will confuse users of the function:
fun foo() = ...
fun foo(x: Int) = ...
You have arguments that may not make sense together, and therefore overloads allow you to group parameters into meaningful coordinated sets.
Calling a method with default values performs an extra step: it checks which values are missing, applies the defaults, and then forwards the call to the real method. So in a performance-constrained environment (e.g. Android, embedded, real-time, a method called a billion times in a loop) this extra check may not be desired. Although if you do not see an issue in profiling, this might be an imaginary issue: it might be inlined by the JVM and have no impact at all. Measure first before worrying.
Cases that don't really support either case:
In case you are reading general arguments about this from other languages...
in a C# answer to a similar question, the esteemed Jon Skeet mentions that you should be careful using defaults if their values could change between builds, as that would be a problem. In C# the defaulting happens at the call site, whereas in Kotlin (for non-inlined functions) it happens inside the generated bridge function being called. Therefore for Kotlin, changing a hidden (defaulted) value has the same impact as changing an explicitly coded value, and this argument should not impact the decision.
the same C# answer says that if team members have opposing views about the use of defaulted arguments, maybe you shouldn't use them. This should not be applied to Kotlin: they are a core language feature, used in the standard library since before 1.0, and there is no support for restricting their use. The opposing team members should default to using defaulted arguments unless they have a definitive case that makes them unusable. In C#, by contrast, defaults were introduced much later in the life cycle of the language and therefore carried more of a sense of "optional adoption".
Let's examine how functions with default argument values are compiled in Kotlin to see if there's a difference in method count. It may differ depending on the target platform, so we'll look into Kotlin for JVM first.
For the function fun foo(parameter: Any, option: Boolean = false) the following two methods are generated:
First is foo(Ljava/lang/Object;Z)V, which is called when all arguments are specified at a call site.
Second is the synthetic bridge foo$default(Ljava/lang/Object;ZILjava/lang/Object;)V. It has 2 additional parameters: an Int mask that specifies which parameters were actually passed, and an Object parameter which is currently unused, reserved to allow super-calls with default arguments in the future.
That bridge is called when some arguments are omitted at a call-site. The bridge analyzes the mask, provides default values for omitted arguments and then calls the first method now specifying all arguments.
When you place the @JvmOverloads annotation on a function, additional overloads are generated, one per argument with a default value. All these overloads delegate to the foo$default bridge. For the foo function the following additional overload will be generated: foo(Ljava/lang/Object;)V.
Thus, from the method-count point of view, when a function has only one parameter with a default value, it doesn't matter whether you use overloads or default values: you'll get two methods either way. But if there's more than one optional parameter, using default values instead of overloads will result in fewer methods being generated.
Overloads could be preferred when the implementation of a function gets simpler when a parameter is omitted.
Consider the following example:
fun <T> compare(v1: T, v2: T, ignoreCase: Boolean = false) =
    if (ignoreCase)
        internalCompareWithIgnoreCase(v1, v2)
    else
        internalCompare(v1, v2)
When it is called like compare(a, b) with ignoreCase omitted, you actually pay twice for not using ignoreCase: first when the arguments are checked and default values are substituted for the omitted ones, and second when ignoreCase is checked in the body of compare to branch to the right internalCompare.
Adding an overload will get rid of these two checks. Also a method with such simple body is more likely to be inlined by JIT compiler.
fun <T> compare(v1: T, v2: T) = internalCompare(v1, v2)
I'm interested in the technical logistics. Is there any advantage, such as memory saved, to implementing certain functions dealing with a class outside of it?
In particular, implementing operator overloads as free functions (provided you don't need access to any private members, and even then you can make them friend non-members)?
Is a distinct memory address provided for each function of the class each time an object is created?
This answer may help you: Operator overloading : member function vs. non-member function?. In general, free functions are mandatory if you need to implement operators on classes whose source code you cannot modify (think about streams), or if the left operand is not of class type (an int, for example). If you control the code of the class then you can freely use member functions.
For your last question: no. A member function exists once per class, not once per object; only virtual functions involve a table (the per-class vtable, which each object references through a single pointer). A member function can be viewed as a free function with a hidden parameter that is a pointer to the object, i.e. o.f(a) is more or less the same as f(&o, a), with a prototype roughly like f(C* this, A a);.
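A rough illustration of that equivalence (just the shape of the call, not literal C++):

struct C {
    int x;
    int f(int a) { return x + a; }  // member: receives a hidden 'this'
};

int f_free(C* self, int a) { return self->x + a; }  // the explicit equivalent

// Given C o{3}; the calls o.f(5) and f_free(&o, 5) compile to essentially the same code.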
There are various articles about circumstances when implementing functionality using non-member functions is preferred over function members.
Examples include
Scott Meyers (author of books like "Effective C++", "Effective STL", and others) on how non-members improve encapsulation: http://www.drdobbs.com/cpp/how-non-member-functions-improve-encapsu/184401197
Herb Sutter in his Guru of the Week series #84, "Monoliths Unstrung". Essentially he advocates that when functionality can be implemented either as a member or as a non-member non-friend, prefer the non-member option. http://www.gotw.ca/gotw/084.htm
Non-static member functions have an implicit this parameter. If your function doesn't use any non-static members, it should be either a free function or a static member function, depending on what namespace you want it to be in. This will avoid confusion for human readers (who will be scratching their heads looking for the reason it's not static), and will be a small improvement in code size, with a probably non-measurable gain in performance.
To be clear: in the asm, there's zero difference between a static member function and a non-member function. The choice between static-member and global or static (file scope) is purely a namespace / design quality issue, not a performance issue. (In Unix shared libraries (position-independent code), calling global functions has a level of indirection through the PLT, so prefer static file-scope functions. This is a different meaning of the static keyword vs. global static-member functions, which are globally visible and thus subject to symbol interposition.)
One possible exception to this rule is that wrapper functions which pass most of their args unchanged to another function benefit from having their args in the same order as the function they call, so they don't have to move them between registers. E.g. if a member function does something simple to a class member and then calls a static member function with the same arg list, the callee takes no implicit this pointer, so all the args have to move over by one register.
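A sketch of that wrapper situation (hypothetical names):

struct Logger {
    static void log_impl(int level, const char* msg);  // level, msg arrive in the first two arg registers

    void log(int level, const char* msg) {
        // 'this' occupies the first argument register in log(), so level and
        // msg must each shift over by one register before this call:
        log_impl(level, msg);
    }
};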
Most ABIs use an args-in-registers calling convention. 32-bit x86 (other than some Windows calling conventions) is the major exception I know of, where all args are always passed on the stack. 64-bit x86 passes the first 6 integer args in registers and the first 8 FP args in xmm registers (SysV), or the first 4 args of either type in registers (Windows).
Passing an object pointer will typically take an extra instruction or two at every call site. If the implicit first arg bumps any other args out of the limited set of arg-passing regs, then it will have to be passed on the stack. This adds a few cycles of latency to the critical path involving that arg for the store-load round trip, and extra instructions in the callee as well as the caller. (See the x86 wiki for links to more details about this sort of thing, for that platform).
Inlining of course eliminates this. static functions can also be optimized by modern compilers, because the compiler knows all the calls come from code it can see, so it can use a custom calling convention for them. IDK if any compiler will drop unused args during inter-procedural optimization. Link-time and/or whole-program optimization may also be able to reduce or eliminate overhead from unused args.
Code-size always matters at least a little, since smaller binaries load from disk faster, and miss less in I-cache. I don't expect any measurable speed difference unless you specifically design an experiment that's sensitive to it.
The most important thing to consider when designing a class is: "what is the invariant?" Classes are designed to protect invariants, so a class should be as small as possible to ensure the invariant is properly protected; the more member/friend functions there are, the more code there is to review.
From this point of view, if a class has members which don't need to be protected (for example, a boolean whose corresponding get/set functions let the user change it freely), it is better to make those attributes public and remove the get/set functions (this is, more or less, Bjarne Stroustrup's advice).
So, which functions must be declared inside the class and which ones outside? Inside belongs the minimum set of functions required to protect the invariant; outside goes any function that can be implemented in terms of the others.
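For example, the split might look like this (a hypothetical Rational class whose invariant is a non-zero, normalized denominator):

class Rational {
    int num_, den_;  // invariant: den_ != 0, fraction kept normalized
public:
    Rational(int n, int d);                   // must establish the invariant
    int num() const { return num_; }
    int den() const { return den_; }
    Rational& operator+=(const Rational& o);  // needs the representation
};

// Expressible entirely via the public interface, so it lives outside:
inline Rational operator+(Rational a, const Rational& b) { return a += b; }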
The thing with operator overloading is another story, because the criterion for putting some operators inside the class and others outside is syntactic, related to implicit conversions and so on:
class A
{
private:
    int i_i;
public:
    A(int i) noexcept : i_i(i) {}
    int val() const noexcept { return i_i; }
    A operator+(A const& other) const noexcept
    { return A(i_i + other.i_i); }
};

A a(5);
cout << (4 + a).val() << endl;  // error: no match for operator+
In this case, since the operator is defined inside the class, the compiler doesn't find it, because the first argument is an integer (when an operator is called, the compiler searches among free functions and the member functions of the class of the first argument).
When declared outside:
class A
{
private:
    int i_i;
public:
    A(int i) noexcept : i_i(i) {}
    int val() const noexcept { return i_i; }
};

inline A operator+(A const& first, A const& other) noexcept
{ return A(first.val() + other.val()); }

A a(5);
cout << (4 + a).val() << endl;  // 4 is implicitly converted to A(4)
In this case, the compiler finds the operator and tries to perform an implicit conversion of the first parameter from int to A, using the appropriate A constructor.
In this case, the operator can also be implemented using other public functions, so it doesn't need to be a friend, and you can be sure the invariant is not compromised by that additional function. So, in this concrete example, moving the operator outside is good for two reasons.
One strictly technical difference, which also is valid for static vs non-static member functions, might affect performance in extreme scenarios:
For a member function, the this pointer will be passed as an "invisible" parameter to the function. Usually, depending on the parameter types, a fixed number of parameter values can be passed via registers instead of via the stack (registers are faster to read and write).
If the function already takes that number of parameters explicitly, then making it a non-static member function might cause parameters to be passed via the stack instead of via registers, and if that happens, barring optimizations that may or may not happen, the function call will be slower.
However, even if it is slower, in the vast majority of use cases you can dream up that slowdown is insignificant (but real).
Depending on the subject, member functions may not be the right solution. Member functions rest on the assumption that there is an asymmetry among the arguments of the corresponding free function: one argument is clearly identified as the main subject (namely, the implicitly passed this, which practically corresponds to passing the object by reference). Often no such asymmetry exists, and in those cases free functions are the best solution.
Regarding execution speed there isn't any difference, because a method of a class is just a function whose first argument is the this pointer. So it is totally equivalent to the corresponding free function whose first parameter is the pointer to the object.
I'm still relatively new to C++ and I can't seem to figure out the difference between the following two ways of coding a function that may take one parameter, or maybe two or three or more. Anyway, here's my point:
function overload:
int aClass::doSomething(int required)
{
    // DO SOMETHING
}

int aClass::doSomething(int required, int optional)
{
    // DO SOMETHING
}

How is this different to using a default value:

int aClass::doSomething(int required, int optional = 0)
{
    // DO SOMETHING
}
I know in different circumstances one may be more suitable than another but what kinds of things should I be aware of when choosing between each of these options?
There are several technical reasons to prefer overloading to default arguments, they are well laid out in Google's C++ Style Guide in the Default Arguments section:
Function pointers are confusing in the presence of default arguments,
since the function signature often doesn't match the call signature.
Adding a default argument to an existing function changes its type,
which can cause problems with code taking its address. Adding function
overloads avoids these problems.
and:
default parameters may result in bulkier code since they are
replicated at every call-site -- as opposed to overloaded functions,
where "the default" appears only in the function definition.
On the positive side it says:
Often you have a function that uses default values, but occasionally
you want to override the defaults. Default parameters allow an easy
way to do this without having to define many functions for the rare
exceptions.
So your choice will depend on how relevant the negative issues are for your application.
First off, you're talking about overloading, not overriding. Overriding is done for virtual functions in a derived class. Overloading refers to the same function name with a different signature.
The difference is logical: in the first case (two versions), the two functions can behave completely differently, whereas in the second case the logic will be more or less the same. It's really up to you.
The compiler doesn't care which of these you use. Imagine that you wrote it as two constructors, and they ended up about 20 lines long. Further imagine that 19 of the lines were identical, and the different line read
foo = 0;
in one version and
foo = optional;
in the other. In this situation, using an optional parameter makes your code far more readable and understandable. In a language without optional parameters, you'd implement this by having the one-parameter version call the two-parameter version and pass zero as the second parameter.
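That forwarding pattern is just as valid in C++, and is how the question's overloads would share code (a sketch reusing doSomething):

int aClass::doSomething(int required)
{
    return doSomething(required, 0);  // forward to the two-parameter version
}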
Now imagine a different pair of constructors or functions that again are about 20 lines long but are entirely different. For example the second parameter is an ID, and if it's provided, you look stuff up in the database and if it's not you set values to nullptr, 0, and so on. You could have a default value (-1 is popular for this) but then the body of the function would be full of
if (ID == -1)
{
    foo = 0;
}
else
{
    foo = DbLookup(ID);
}
which could be hard to read and would make the single function a lot longer than the two separate functions. I've seen functions with one giant if that essentially split the whole thing into two separate blocks with no common code, and I've seen the same condition tested 4 or 5 times as a calculation progresses. Both are hard to read.
That's the thing about C++. There are lots of ways to accomplish most things. But those different ways serve different purposes, and once you "get" the subtle differences, you will write better code. In this case "better" means shorter, faster (all those ifs cost execution time), and more expressive - people reading it can understand your intentions quickly.
You are making use of the overloading feature if you provide several constructors. The advantage in this case is that you can react differently in each constructor to the passed arguments. If that is important, use overloading.
If you can provide decent default values for your parameters and these wouldn't affect the proper running of your code, use default parameters.
See here for a thread on SO.
If I have this code:
void Foo(aBasicType aIn) // where aBasicType is int, char etc.
{
    //...
}
Is there any point in making it const aBasicType since it is going to be copied anyway? One of the reasons I am asking is because I have seen it in 3rd party code and was wondering if there is something I am not aware of.
It cannot hurt to declare it const if you know that your function need not modify its value during execution.
Note that functions that change their arguments, when arguments are passed by value, should be rare.
Declaring your variable const can prevent you from writing if (aIn = someValue).
I sometimes (infrequently) do it, when there is temptation to modify aIn in-place instead of making another copy, yet the method relies on aIn remaining unchanged throughout. It tends to be a close call though.
The reason is informative: you want the compiler to warn/error when a value-passed argument is seen on the left of an assignment.
It's a bit cumbersome, and mostly seen in libraries whose audience may be less than "well informed" about C or C++ (the rule is the same in both languages).
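A minimal illustration of the mistake this guards against:

void Foo(const int aIn)
{
    if (aIn = 42) { /*...*/ }   // error: assignment to a read-only parameter
    if (aIn == 42) { /*...*/ }  // what was meant
}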
That would make the value const for that function, which might be useful in the same way declaring a constant at the top of your function might be useful.
No, adding const to a scalar call-by-value parameter is meaningless and will only be confusing.
I prefer to add the const qualifier to input parameters regardless of the parameter-passing method (by value, by pointer, or by reference). So a const parameter simply means "input parameter" and a non-const parameter means "output parameter" (or, rarely, an in-out parameter). I suppose such a convention makes code more understandable, but it is a matter of taste, of course.
I think I can formulate this much more simply. When footype is not a template parameter:
const footype & in the signature is a guarantee for the caller of
the function and a constraint for the implementer of the function.
const footype on the other hand is only a constraint for the
implementer and irrelevant to the caller.
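A sketch of the distinction (hypothetical footype):

struct footype {};

void f(const footype& in);  // guarantee to the caller: the referenced object is not modified
void g(const footype in);   // constrains only g's local copy; the caller cannot tell

// Top-level const is not even part of the function type:
void g(footype in);         // re-declares the same function g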
When footype is a template parameter, then the rules can only be tested against the individual template instantiations.
BTW, if you see const constraints, the connected code is much easier to read because the possibilities of what the code can do are much more restricted. This is one of the many reasons why C++ is easier to read than C# or Java.