Function resolution - C++

I have a lot of functions with this signature:
DoSomething(int x, int y, int z, int t, int u, int p);
They all have the same number of parameters and the same type of parameters.
I want to be able to use them like this:
DoSomething(1, 2, 3, 4, 5, 6);
I know the compiler cannot distinguish between functions of the same signature (they are plain illegal).
To that effect, I would like to wrap the parameters of the functions in logical "Constructs". By this I do not mean classes or structures. For example:
DoSomething(Construct1(x, y, z), Construct2(t, u, p));
or
DoSomething(Construct1(x, y), Construct2(t, u, p, o));
In this case I can distinguish between the two functions even though they take the same number of parameters overall. But if I wrap the parameters in objects with different constructors, whether classes or structures, the Construct objects are still created, even when passed by const reference. For example:
int DoSomething(const Construct1& constr1, const Construct2& constr2)
{
    return constr1.x + constr2.t;
}
DoSomething(Construct1(1, 2, 3), Construct2(4, 5, 6));
In this case Construct1 and Construct2 are both created.
What I want is:
DoSomething(Construct1(x, y, z), Construct2(t, u, p));
or
DoSomething(Construct1(x, y), Construct2(t, u, p, o));
at compile time to expand to:
DoSomething(int x, int y, int z, int t, int u, int p);
thus eliminating the need for the object creation. I am not looking for an object solution. Anything that can expand this is welcome. Even if it is a macro. I am not looking for a complete solution, but if you can point me to what I should read in order to make this myself then that is more than welcome.
Thanks in advance.

Overloading is based on the parameter types rather than the parameter names. You cannot have overloaded functions which have parameter lists with identical types.

I think you have a couple of misconceptions. The most obvious is that the names of the parameters matter: they don't. As far as the compiler is concerned, those two function declarations declare a single function that takes 6 integers. (Consider: if they were different, which one would DoSomething( 1, 2, 3, 4, 5, 6 ) call?)
The second misconception is that an object creation necessarily means an allocation. In the code you presented, DoSomething( Object1(x,y,z), Object2(t,y,u) ), there are two objects but not a single dynamic allocation (unless you allocate inside the Object1 or Object2 constructors).
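To make that concrete, here is a minimal sketch (the aggregate types and values are hypothetical, modeled on the question):
struct Construct1 { int x, y, z; };   // plain aggregates: no heap involved
struct Construct2 { int t, u, p; };

int DoSomething( const Construct1& c1, const Construct2& c2 )
{
    return c1.x + c2.t;
}

int main()
{
    // Both temporaries are created on the stack (or in registers); an
    // optimizing compiler typically inlines this down to plain int arithmetic.
    return DoSomething( Construct1{ 1, 2, 3 }, Construct2{ 4, 5, 6 } );
}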
Overall, you should write code that is readable; only if it proves to be slow should you profile and optimize the bottlenecks.

I'm afraid you'll have to find some other route. The compiler ignores any names you give to parameters in a function declaration, so as far as it cares, what you have is:
DoSomething(int, int, int, int, int, int);
DoSomething(int, int, int, int, int, int);
Since there's no difference between these, you aren't declaring two overloaded functions at all -- you're just declaring the same function twice. Attempting to define two functions with that identical signature then violates the one-definition rule.
Edit: Oh, I suppose I should add that without a return type, those aren't allowable function declarations either (not that it's related to the question at hand, but just in case somebody decides to get pedantic about it -- though I can hardly imagine a C++ programmer doing anything like that).

You can avoid unneeded copies by having Object1 and Object2 expose the underlying int storage:
struct Object1 {
    ...
    int x;
    int y;
    ...
};
// <--- passed by reference, no copy happens --->
void DoSomething( const Object1& o1, const Object2& o2 )
{
    int somethingUseful = o1.x * o2.w - o1.y * o2.z;
}
Please elaborate if you still feel there are copies happening with this approach that are not really needed.

You cannot even define two identical functions in C++, so asking how the compiler can differentiate between the two if hypothetically they did exist is pointless.
Then you seem to be asking how to pass objects in without requiring additional allocations. That is done with const references, but even worrying about this seems premature given that you haven't even got your program's structure solidified yet. Write your program first, then and only then optimise if it is needed. Premature optimisation is the root of all evil.


Is the use of const necessary here?

I am starting to come to grips with const in terms of reference parameters. The way I see it, a constant reference parameter binds the parameter directly to the caller's original object in memory instead of making a copy. And since it is const, the value itself cannot be changed through it.
I have found a solution with regard to code that performs the matrix multiplication A = BC:
vector<vector<double> > mult(const vector<vector<double> >& B, const vector<vector<double> >& C)
{
    ...
    return A;
}
int main()
{
vector<vector<double> > B, C;
cout << mult(B,C) << endl;
return 0;
}
I agree with the structure of the code but I am confused about the necessity of "const" and "&". Surely the code would behave exactly the same if I excluded both from the above? For "&" one could perhaps argue that we use less memory by not creating extra space for the parameters of "mult". But the use of const seems unnecessary to me.
The '&' prevents the copy constructor from being called, i.e., prevents a duplicate of the object being made. It is more efficient this way because you avoid a constructor call on invocation and a destructor call on exit.
The 'const' keyword communicates to the caller that the object to which the reference refers will not be changed in the function. It also allows the function to be called with constant vectors as input. In other words, if B and C are constant, you couldn't call mult() without the const keyword in the signature.
It's been a while in C++ for me, but I think that's the gist. I'm certainly open to corrections on my answer.
There are only a few times when a const reference is, strictly-speaking, necessary. The most common is when you need to pass a const object by reference. The type system will prevent this unless the function promises not to modify the object. It can also make a difference when a function is overloaded to do something different when the object is const, and you specifically want the const version. (The latter is probably bad design!)
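A short sketch of the first case, using a hypothetical Widget type (the last line is the one the type system rejects):
struct Widget { int value = 0; };

void observe( const Widget& w ) { /* read-only access */ }
void mutate( Widget& w )        { w.value = 1; }

int main()
{
    const Widget w{};
    observe( w );   // fine: a const object binds to a const reference
    mutate( w );    // error: cannot bind a const object to a non-const reference
}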
It would alternatively be possible to remove the const qualifier from the function argument, and to give any overloaded functions different names. In fact, references in C++ are close to syntactic sugar for C-style pointers, and it would be possible to replace void foo(T& x) with void foo(T* x) and every occurrence of x inside foo with (*x). Adding const (const T& or const T*) simply means that the program will not be able to modify the object through that reference or pointer.
C had no const keyword until 1989, and you could do all the same things without it, but it’s present in order to help developers avoid bugs related to modifying the wrong variable.
It is not really necessary. As long as you pass in non-const parameters to your function, the program will not behave differently.
I can state a few examples in your case:
If one of the parameters you have to pass is const, it will not work.
Furthermore, you won't be able to do something like mult({{1, 2}, {3, 4}}, b); because a temporary object can only bind to a const reference.
If you put the definition and declaration in separate translation units (i.e. .cpp files) then the compiler might miss some optimization potential, because it wouldn't be able to assume that mult() doesn't modify its parameters.
Another argument is simply that const shows your intents more clearly.
See a few more reasons on isocpp.
The reference & prevents an unnecessary copy. Your parameter type is a std::vector, which means that copying will involve memory allocations, and for performance reasons you do not want that.
On a side note, if your code is meant to manipulate matrices, then a std::vector of std::vector is very inappropriate for performance reasons: it is extremely cache-inefficient and causes unnecessary dynamic allocations. You would rather use a 1D std::array and wrap it to handle 2D indexing nicely. A std::array has its size known at compile time, which means that every function you pass a specific std::array to knows its size at compile time, which is good for performance, especially as std::array makes it possible to avoid dynamic allocation.
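A rough sketch of that idea, with hypothetical names and a compile-time size:
#include <array>
#include <cstddef>

template <std::size_t Rows, std::size_t Cols>
struct Matrix {
    std::array<double, Rows * Cols> data{};   // one contiguous, cache-friendly block

    double&       operator()(std::size_t r, std::size_t c)       { return data[r * Cols + c]; }
    const double& operator()(std::size_t r, std::size_t c) const { return data[r * Cols + c]; }
};

// The dimensions are part of the type, so callees know them at compile time
// and no heap allocation ever happens:
double trace3(const Matrix<3, 3>& m)
{
    return m(0, 0) + m(1, 1) + m(2, 2);   // m(i, j) instead of m[i][j]
}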
int sum(const int a, int b)
{
    b = 10;
    // a = 5; // error
    return (a + b);
}
In the above example a is const and b is not.
So a is a read-only variable: if you try to change the value of a you get an error. That means you have to use the value of a as it was passed in when the function was called.

Is it better to remove "const" in front of "primitive" types used as function parameters in the header?

In the code review process, one of my coworkers mentioned to me that "const"s in front of "primitive types" used as function parameters in a header are meaningless, and he recommended removing these "const"s. He suggested using "const" only in the source file in such cases. Primitive types mean types such as "int", "char", "float", etc.
The following is example.
example.h
int ProcessScore(const int score);
example.cc
int ProcessScore(const int score) {
    // Do some calculation using score
    return some_value;
}
His suggestion is doing as follows:
example.h
int ProcessScore(int score); // const is removed here.
example.cc
int ProcessScore(const int score) {
    // Do some calculation using score
    return some_value;
}
But I'm somewhat confused. Usually, the user will look at only the header, so if there is inconsistency between the header and the source file, it might cause confusion.
Could anyone give some advice on this?
For all types (not just primitives), the top level const qualifiers in the function declaration are ignored. So the following four all declare the same function:
void foo(int const i, int const j);
void foo(int i, int const j);
void foo(int const i, int j);
void foo(int i, int j);
The const qualifier isn't ignored inside the function body, however. There it can have impact on const correctness. But that is an implementation detail of the function. So the general consensus is this:
Leave the const out of the declaration. It's just clutter, and doesn't affect how clients will call the function.
Leave the const in the definition if you wish for the compiler to catch any accidental modification of the parameter.
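For example (reusing the names from the question), the const kept only in the definition still lets the compiler catch an accidental write:
// example.h
int ProcessScore(int score);   // no const: clients don't care

// example.cc
int ProcessScore(const int score) {
    // score += 10;            // would not compile: score is const here
    return score * 2;          // illustrative calculation
}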
A function parameter declared const and the same parameter declared without const are identical as far as overload resolution is concerned. So for example the functions
void f(int);
void f(const int);
are the same and cannot be defined together. As a result, it is better not to use const for parameters in declarations at all, to avoid possible duplications. (I'm not talking about a const reference or a const pointer, since there the const modifier is not top-level.)
Here is the exact quote from the standard:
"After producing the list of parameter types, any top-level cv-qualifiers modifying a parameter type are deleted when forming the function type. The resulting list of transformed parameter types and the presence or absence of the ellipsis or a function parameter pack is the function’s parameter-type-list. [ Note: This transformation does not affect the types of the parameters. For example, int(*)(const int p, decltype(p)*) and int(*)(int, const int*) are identical types. — end note ]"
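You can verify this equivalence directly; a small sketch using <type_traits>:
#include <type_traits>

// Top-level const is stripped from the function type, so these are one type:
static_assert(std::is_same<void(int), void(const int)>::value,
              "top-level const is ignored");

// const behind a pointer is not top-level, so these are distinct types:
static_assert(!std::is_same<void(int*), void(const int*)>::value,
              "pointee const is part of the type");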
The usefulness of const in the function definition is debatable; the reasoning behind it is the same as for declaring a local variable const: it demonstrates to other programmers reading the code that this value is not going to be modified inside the function.
Follow the recommendations given to you in code review.
Using const for value arguments has no semantic value; it is only (potentially) meaningful for the implementation of your function, and even in that case I would argue that it is unnecessary.
edit:
Just to be clear: your function’s prototype is the public interface to your function. What const offers is a guarantee that you will not modify what a reference refers to.
int a = 7;
do_something( a );
void do_something( int& x ); // 'a' may be modified
void do_something( const int& x ); // I will not modify 'a'
void do_something( int x ); // no one cares what happens to x
Using const here is something akin to TMI: whether or not 'x' gets modified is unimportant anywhere except inside the function.
edit2: I also very much like the information in StoryTeller’s answer
As many other people have answered, from an API perspective, the following are all equivalent, and are equal for overload-resolution:
void foo( int );
void foo( const int );
But a better question is whether or not this provides any semantic meaning to a consumer of this API, or whether it provides any enforcement of good behaviours from a developer of the implementation.
Without any well-defined developer coding guidelines that expressly define this, const scalar arguments have no readily obvious semantic meaning.
From a consumer:
const int does not change your input. The argument can still be a literal, or come from another variable (whether const or non-const).
From a developer:
const int imposes a restriction on a local copy of a variable (in this case, a function argument). It just means that to modify the argument, you must take another copy of the variable and modify that instead.
When calling a function that accepts an argument by value, a copy of that argument is made on the stack for the called function. This gives the function a local copy of the argument for its entire scope, which can then be modified and used for calculations without affecting the original input passed into the call. Effectively, the argument is a local variable initialized from the caller's input.
By marking the argument as const, it simply means that this copy cannot be modified; but it does not prohibit the developer from copying it and making modifications to this copy. Since this was a copy from the start, it does not enforce all that much from inside the implementation -- and ultimately doesn't make much difference from the consumer's perspective.
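A tiny sketch of that by-value behaviour:
void bump( int n ) { n = 42; }   // modifies only the callee's copy

// With const the copy itself can't be written, but nothing stops the
// implementer from taking another, writable copy inside the function:
void bump_const( const int n )
{
    int copy = n;
    copy = 42;
    (void)copy;
}

int main()
{
    int x = 7;
    bump( x );         // x is still 7: the function changed its own copy
    bump_const( x );   // same from the caller's point of view
}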
This is in contrast to passing by reference, wherein a reference to int& is semantically different from const int&. The former is capable of mutating its input; the latter is only capable of observing the input (provided the implementation doesn't const_cast the const-ness away, but let's ignore this possibility); thus, const-ness on references has an implied semantic meaning.
It does not provide much benefit in the public API, and (imo) it introduces unnecessary restrictions into the implementation. As an arbitrary, contrived example, a simple function like:
void do_n_times( int n )
{
    while( n-- > 0 ) {
        // do something n times
    }
}
would now have to be written using an unnecessary copy:
void do_n_times( const int n )
{
    auto n_copy = n;
    while( n_copy-- > 0 ) {
        // do something n times
    }
}
Regardless of whether const scalars are used in the public API, one key thing is to be consistent with the design. If the API randomly switches between using const scalar arguments to using non-const scalars, then it can cause confusion as to whether there is meant to be any implied meaning to the consumer.
TL;DR: const scalar types in a public API don't convey semantic meaning unless explicitly defined by your own guidelines for your domain.
I thought that const is a hint for the compiler that some expressions don't change and to optimize accordingly. For example I was testing if a number is prime by looking for divisors up to the square root of the number and I thought that declaring the argument const would take the sqrt(n) outside of the for loop, but it didn't.
It may not be necessary in the header, but then again, you could say that all you need is to not modify the argument and it is never necessary. I'd rather see const where it is const, not just in the source, but in the header too. Inconsistencies between declaration and definition make me circumspect. Just my opinion.

C++ parameter passing array vs individual items

Considering two functions:
int foo(int, int);
and
int bar(int*);
(Assume bar is passed an array of size n, where n = the number of foo's parameters, the respective values are equivalent, and the two functions are functionally the same.)
Is it ever 'better' to take the former approach over the latter?
It depends on what you mean by "better": if passing multiple parameters is better aligned with the logical structure of your program, then it is definitely much better for readability to pass multiple parameters. A good test of whether or not this may be the case is asking if individual elements of the array would benefit from being named individually. If the answer is "yes", then individual parameters are better. Saving a few bytes here and there when passing parameters does not compensate for even a slight loss of readability.
Yes. With the former approach, int foo(int, int), the compiler can check the number of arguments at compile time; int bar(int*) has to make assumptions about the passed-in array.
One disadvantage I can think of with int bar(int*) is that it could make overloading more difficult. E.g.:
int areaOfRectangle(int width, int height);
int areaOfRectangle(int x1, int y1, int x2, int y2);
is fine, but how are you going to do that with arrays?
int areaOfRectangle(int* widthHeight);
int areaOfRectangle(int* coordinates); // error
The above won't compile because int areaOfRectangle(int*) can't be declared/defined twice.
Also, if you're simply using an array as a way to arbitrarily group parameters, you make your code harder to read and use. Compare:
int calculateIncome(int normalHours,
                    int overtimeHours,
                    int normalRate,
                    int overtimeRate);    // Self documenting
int calculateIncome(int* parameters);     // How do I use this function?
int calculateMean(int* numbers); // Parameters are logically grouped,
// so array makes sense
A third issue (which Sancho describes in his answer) is that int bar(int*) is dangerous. What happens when the user calls your function with a smaller array than the one you were expecting? A call to int foo(int, int) with the wrong number of arguments, on the other hand, will not compile.
It depends on what "better" means:
For better performance:
1. If the array has only a few elements (say fewer than 6, depending on the target CPU and the registers available for passing parameters), the foo() approach is better, since all the values can be passed in registers and foo can read them directly from those registers. For bar, a dereference of the pointer is needed.
2. If the array has more than about 10 items, it is better to pass a pointer, since that involves fewer stack operations to push the parameters onto the stack.
For fewer bugs:
The foo() approach is better, since with bar() the items can be modified by statements inside bar, and those modifications remain visible after returning from bar(). This may cause errors if you do not manage the writes to the items inside bar() carefully.
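A small sketch of that visibility difference:
void bar( int* items )
{
    items[0] = 99;   // writes through the pointer: the caller's array changes
}

void foo( int a, int b )
{
    a = 99;          // changes only local copies; the caller sees nothing
    b = 99;
}

int main()
{
    int data[2] = { 1, 2 };
    bar( data );                // data[0] is now 99
    foo( data[0], data[1] );    // data is untouched
}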

Initializing variable in C++ function header

I've come across some C++ code that looks like this (simplified for this post):
(Here's the function prototype located in someCode.hpp)
void someFunction(const double & a, double & b, const double c = 0, const double * d = 0);
(Here's the first line of the function body located in someCode.cpp that #include's someCode.hpp)
void someFunction(const double & a, double & b, const double c, const double * d)
Can I legally call someFunction using:
someFunction(*ptr1, *ptr2);
and/or
someFunction(*ptr1, *ptr2, val1, &val2);
where the variables ptr1, ptr2, val1, and val2 have been defined appropriately and val1 and val2 do not equal zero? Why or why not?
And if it is legal, is this syntax preferred vs overloading a function to account for the optional parameters?
Yes, this is legal; these are called default arguments. And I would say it's preferable to overloading, since it involves less code, yes.
Regarding your comment about const: that doesn't apply to the default value itself, it applies to the argument. If you have an argument of type const char* fruit = "apple", that doesn't mean it has to be called with a character pointer whose value is the same as the address of the "apple" string literal (which is good, since that would be hard to guarantee). It just means that it has to be called with a pointer to constant characters, and it tells you that the function being called doesn't need to write to that memory; it only reads from it.
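A small sketch of that point (the function name is hypothetical):
#include <cstdio>

void describe( const char* fruit = "apple" )
{
    std::printf( "%s\n", fruit );   // only reads through the pointer
}

int main()
{
    describe();             // prints "apple": the default is used
    describe( "banana" );   // any pointer to constant characters works
}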
Yes, the parameters are optional and when you don't pass them, the given default values will be used.
Using default parameter values instead of overloading has some advantages and disadvantages. The advantage is less typing in both the interface and the implementation. But the disadvantage is that the default value becomes part of the interface, with all its consequences: when you change the default value, for example, you need to recompile a lot of code, instead of a single file as with overloading.
I personally prefer default parameters.
I'd like to expand a bit on whether Default Parameters are preferred over overloading.
Usually they are for all the reasons given in the other answers, most notably less boilerplate code.
There are also valid reasons that make overloading a better alternative in some situations:
Default values are part of the interface, changes might break clients (as #Juraj already noted)
Additionally, overloads make it easier to add extra (combinations of) parameters without breaking the (binary) interface.
Overloads are resolved at compile time, which can give the compiler better optimization (especially inlining) possibilities. E.g. if you have something like this:
void foo(Something* param = 0) {
    if (param == 0) {
        simpleAlgorithm();
    } else {
        complexAlgorithm(param);
    }
}
It might be better to use overloads.
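The overloaded version of the sketch above could look like this (same hypothetical helper names):
struct Something;                // assumed defined elsewhere
void simpleAlgorithm();          // hypothetical helpers from the sketch above
void complexAlgorithm( Something* );

void foo()                       // chosen at compile time when no argument is given
{
    simpleAlgorithm();
}

void foo( Something* param )     // chosen when a Something* is passed
{
    complexAlgorithm( param );
}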
Can I legally call someFunction using:
someFunction(*ptr1, *ptr2);
Absolutely! The other two parameters take the default values you set in the header file, which is zero for both of them.
But if you do supply the 3rd and 4th arguments to the function, those values are used instead of the defaults.

C++: using type safety to distinguish the types of two int arguments

I have various functions with two int arguments (I write both the functions and the calling code myself). I am afraid of confusing the order of the arguments in some calls.
How can I use type safety to have the compiler warn me or give an error if I call a function with the wrong sequence of arguments (all arguments are int)?
I tried typedefs, but they do not trigger any compiler warnings or errors:
typedef int X; typedef int Y;
void foo(X, Y);
X x; Y y;
foo(y, x); // compiles without warning
You will have to create wrapper classes. Let's say you have two different units (say, seconds and minutes), both of which are represented as ints. You would need something like the following to be completely typesafe:
class Minute
{
public:
    explicit Minute(int m) : myMinute(m) {}
    operator int () const { return myMinute; }
private:
    int myMinute;
};
and a similar class for seconds. The explicit constructor prevents you from accidentally using an int as a Minute, while the conversion operator allows you to use a Minute anywhere you need an int.
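Usage might then look like this, assuming a matching hypothetical Second class written the same way (the last two lines are the ones the compiler rejects):
void setTimer( Minute m, Second s );

void caller()
{
    setTimer( Minute(5), Second(30) );   // OK
    setTimer( Second(30), Minute(5) );   // error: the wrappers don't interconvert
    setTimer( 5, 30 );                   // error: the constructors are explicit
}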
typedef creates type aliases. As you've discovered, there's no type safety there.
One possibility, depending on what you're trying to achieve, is to use enum. That's not fully typesafe either, but it's closer. For example, you can't pass an int to an enum parameter without casting it.
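A sketch of the enum approach (hypothetical coordinate types; the fixed underlying type requires C++11, and the two error lines show what the compiler rejects):
enum XCoord : int {};   // no enumerators: just a distinct integer-like type
enum YCoord : int {};

void foo( XCoord x, YCoord y );

void caller()
{
    foo( XCoord(3), YCoord(4) );   // OK: explicit conversions from int
    foo( 3, 4 );                   // error: int does not implicitly become an enum
    foo( YCoord(4), XCoord(3) );   // error: the two enums don't interconvert
}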
Get a post-it note. Write on it, in big letters, "X FIRST! THEN Y!" Stick it to your computer screen. I honestly don't know what else to advise. Using wrapper classes is surely overkill, when the problem can be solved with a post-it and a magic marker.