C++ parameter passing: array vs individual items

Considering two functions:
int foo(int, int);
and
int bar(int*);
(Assume bar is passed an array of size n, where n = the number of foo's parameters, the respective values are equivalent, and the two functions behave identically.)
Is it ever 'better' to take the former approach over the latter?

It depends on what you mean by "better": if passing multiple parameters is better aligned with the logical structure of your program, then it is definitely much better for readability to pass multiple parameters. A good test of whether or not this may be the case is asking if individual elements of the array would benefit from being named individually. If the answer is "yes", then individual parameters are better. Saving a few bytes here and there when passing parameters does not compensate for even a slight loss of readability.

Yes: with the former approach, int foo(int, int), the compiler can check the number of arguments at compile time. int bar(int*) has to make assumptions about the passed-in array.

One disadvantage I can think of with int bar(int*) is that it could make overloading more difficult. E.g.:
int areaOfRectangle(int width, int height);
int areaOfRectangle(int x1, int y1, int x2, int y2);
is fine, but how are you going to do that with arrays?
int areaOfRectangle(int* widthHeight);
int areaOfRectangle(int* coordinates); // error
The above won't compile because int areaOfRectangle(int*) can't be declared/defined twice.
Also, if you're simply using an array as a way to arbitrarily group parameters, you make your code harder to read and use. Compare:
int calculateIncome(int normalHours,
                    int overtimeHours,
                    int normalRate,
                    int overtimeRate); // Self-documenting
int calculateIncome(int* parameters); // How do I use this function?
int calculateMean(int* numbers); // Parameters are logically grouped,
                                 // so an array makes sense
A third issue (which Sancho describes in his answer) is that int bar(int*) is dangerous. What happens when the user calls your function with a smaller array than the one you were expecting? A call to int foo(int, int) with the wrong number of arguments, on the other hand, will not compile.

It depends on what "better" means:
For better performance:
1. If the array has only a few elements (say, fewer than 6, depending on the target CPU and how many registers are available for passing parameters), the foo() approach is better, since every value can be passed in a register and foo can read it directly out of the register, while bar needs a dereference of the pointer.
2. If the array has more than about 10 items, it is better to pass a pointer, since that requires fewer stack operations to push the parameters onto the stack.
For fewer bugs:
The foo() approach is better, since with bar() the items can be modified by statements inside bar, and those modifications remain visible after returning from bar(); this can cause errors if you do not manage the writes to the items carefully (see the sketch below).
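To illustrate that hazard, a minimal sketch (the body of bar is made up for demonstration) of how writes inside bar() remain visible to the caller:

#include <iostream>

// Hypothetical bar(): writes through the pointer alias the caller's array.
int bar(int* items)
{
    items[0] += items[1]; // modifies the caller's array in place
    return items[0];
}

int main()
{
    int values[2] = {3, 4};
    bar(values);
    std::cout << values[0] << "\n"; // prints 7: the array was changed
}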

Related

benefits of passing const reference vs values in function in c++ for primitive types

I want to know the possible advantages of passing by value over passing by const reference for primitive types like int, char, float, double, etc. to a function. Is there any performance benefit to passing by value?
Example:
int sum(const int x,const int y);
or
int sum(const int& x,const int& y);
I have hardly ever seen anyone use the second form. I know there is a benefit to passing by reference for big objects.
In every ABI I know of, references are passed via something equivalent to pointers. So when the compiler cannot inline the function or otherwise must follow the ABI, it will pass pointers there.
Pointers are often larger than values; but more importantly, pointers do not point at registers, and while the top of the stack is almost always going to be in cache, what it points at may not. In addition, many ABIs have primitives passed via register, which can be faster than via memory.
The next problem is within the function. Whenever the code flow could possibly modify an int, data from a const int& parameter must be reloaded! While the reference is to const, the data it refers to can be changed via other paths.
The most common ways this can happen are when control leaves the code the compiler can see while analyzing the function body, when memory is modified through a global variable, or when a pointer is followed to touch an int elsewhere.
In comparison, an int argument whose address is not taken cannot be legally modified through other means than directly. This permits the compiler to understand it isn't being mutated.
This isn't just a problem for the compiler trying to optimize and getting confused. Take something like:
#include <iostream>
#include <optional>

int getFontSizePref(); // assumed to be defined elsewhere

struct ui {
    enum { defFontSize = 9 };
    std::optional<int> fontSize;

    void reloadFontSize() {
        fontSize = getFontSizePref();
        fontSizeChanged(*fontSize); // passes a reference into the optional
    }

    void fontSizeChanged(int const& sz) {
        if (sz == defFontSize)
            fontSize = std::nullopt; // destroys the int that sz refers to!
        else
            fontSize = sz;
        drawText(sz); // sz may now be a dangling reference
    }

    void drawText(int sz) {
        std::cout << "At size " << sz << "\n";
    }
};
and the optional, to which we are passing a reference, gets destroyed and then used after destruction.
A bug like this can be far less obvious in real code than it is here. If we defaulted to passing by value, it could not happen.
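For contrast, a minimal sketch (a drop-in replacement for the member function in the struct above) where the parameter is taken by value; sz is now a copy, so resetting fontSize cannot invalidate it:

void fontSizeChanged(int sz) // by value: sz is an independent copy
{
    if (sz == defFontSize)
        fontSize = std::nullopt; // harmless now: sz does not alias *fontSize
    else
        fontSize = sz;
    drawText(sz); // always reads a valid value
}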
Typically, primitive types are not passed by reference, but sometimes there is a point to it. E.g., on an x64 machine a long double is 16 bytes and a pointer is 8 bytes, so it can be slightly better to use a reference in that case.
In your example there is no point: a usual int is 4 bytes, so you can pass two integers for the price of one pointer.
You can use sizeof() to measure the size of a type.
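For example (the printed sizes are platform-dependent; the comments show common x86-64 values):

#include <iostream>

int main()
{
    // Sizes vary by platform; the comments show common x86-64 values.
    std::cout << sizeof(int)         << "\n"   // 4
              << sizeof(long double) << "\n"   // 16
              << sizeof(void*)       << "\n";  // 8
}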

Usage of const and references in parameters in c++ [duplicate]

There are multiple ways of making a method. I'm not quite sure when to use const and reference in method parameters.
Imagine a method called 'getSum' that returns the sum of two integers. The parameters in such a method can have multiple forms.
int getSum1(int, int);
int getSum2(int&, int&);
int getSum3(const int, const int);
int getSum4(const int&, const int&);
Correct me if I'm wrong, but here's how I see these methods:
getSum1 - Copies integers and calculates
getSum2 - Doesn't copy integers, but uses the values directly from memory and calculates
getSum3 - Promises that the values won't change
getSum4 - Promises that the values won't change & doesn't copy the integers, but uses the values directly from memory
So here are some questions:
So is getSum2 faster than getSum1 since it doesn't copy the integers, but uses them directly?
Since the values aren't changed, I don't think 'const' makes any difference in this situation, but should it still be there for const correctness?
Would it be the same with doubles?
Should a reference only be used with very large parameters? e.g. if I were to give it a whole class, then it would make no sense to copy the whole thing
Taking the four signatures in order, for ints:
1. (getSum1) For integers, the copy is irrelevant in practice. Processors work with registers (an int fits in a register on all but the most exotic hardware), copying a register is basically the cheapest operation there is (after a no-op), and the copy may not even be necessary if the compiler allocates registers in a smart way.
2. (getSum2) Use this if you want to change the passed ints. Non-const reference parameters generally indicate that you intend to modify the argument (for example, to store multiple return values).
3. (getSum3) This does exactly the same as 1, for basically the same reason. You cannot change the passed ints, but nobody would be any the wiser if you could (i.e. if you used 1 instead).
4. (getSum4) Again, this will effectively do the same thing as 1 for ints (or doubles, if your CPU handles them natively), because the compiler understands that passing a const reference (a pointer under the hood) to an int (or double) is semantically the same as providing a copy, and the latter avoids unnecessary trips to memory. Unless you take a pointer to the argument (in which case the compiler has to guarantee the reference points to the int at the call site), the reference is thus pointless.
Note that the above is not in terms of the C++ abstract machine but in terms of what happens with modern hardware/compilers. If you are working on hardware without dedicated floating-point capabilities or where ints don't fit in registers, you have to be more careful. I don't have an overview of current embedded hardware trends, but unless you literally write code for toasters, you should be good.
If you are not dealing with ints but with (large) classes, the semantic differences are much stronger:
1. The function receives a copy. Note that if you pass in a temporary, that copy may be move-constructed (or, even better, elided).
2. Same as in the int section: use this over 4 only if you want to change the passed value.
3. You receive a copy that cannot be changed. This is generally not very useful outside of specific circumstances (or for marginal code-clarity increases).
4. This should be the default for passing a large class (well, pretty much anything bigger than a pointer) if you only intend to read from it (or call const methods on it).
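To make the class case concrete, a sketch of the four signatures for a larger, hypothetical type (Employee and the function names are made up for illustration):

#include <string>
#include <vector>

struct Employee {                     // hypothetical large-ish type
    std::string name;
    std::vector<int> hours;
};

void byValue(Employee e);             // 1. copies (moved or elided for temporaries)
void byRef(Employee& e);              // 2. the caller's object may be modified
void byConstValue(const Employee e);  // 3. a copy the function cannot change
void byConstRef(const Employee& e);   // 4. read-only view, no copy: the usual default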
1) You are correct: the values of a and b would not be copied, but the addresses of a and b would be. In this case you would not gain any speed, since an int and a pointer to int are the same (or about the same) size. You would gain speed if the arguments to the function are large, like a struct or class as you mention in Q4.
2) Const means that you cannot change the value of the parameter. If it is not declared const, you can change it inside the function, but the original variable you used when calling the function will not be changed:
int getSum1(int a, int b)
{
    a = a + 5;
    return a + b;
}
int a, b, foo;
a = 10;
b = 5;
foo = getSum1(a, b);
In this case foo has the value 20, a still equals 10, and b equals 5, since the modification of a is local to getSum1().
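For contrast, a minimal sketch of a getSum2-style signature, where the modification does propagate back to the caller:

#include <iostream>

int getSum2(int& a, int& b)
{
    a = a + 5;    // writes through the reference
    return a + b;
}

int main()
{
    int a = 10, b = 5;
    int foo = getSum2(a, b);
    std::cout << foo << " " << a << "\n"; // prints "20 15": a was modified
}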

C++ - Reference, Pointers in Arguments

There are many questions about "when do I use references and when pointers?", and they confused me a little bit. I thought a reference wouldn't take any memory because it's just the address.
Now I made a simple Date class and showed it to the Code Review community. They told me not to use the reference in the following example. But why?
Someone told me that it would allocate the same memory a pointer would allocate. That's the opposite of what I learned.
class A {
    int a;
public:
    void setA(const int& b) { a = b; } /* Bad! - But why? */
};

class B {
    int b;
public:
    void setB(int c) { b = c; } /* They told me to do this */
};
So when do I use references or pointers in arguments and when just a simple copy? Without the reference in my example, is the constant unnecessary?
It is not guaranteed to be bad. But it is unnecessary in this specific case.
In many (or most) contexts, references are implemented as pointers in disguise. Your example happens to be one of those cases. Assuming that the function does not get inlined, parameter b will be implemented "under the hood" as a pointer. So, what you really pass into setA in the first version is a pointer to int, i.e. something that provides indirect access to your argument value. In the second version you pass an immediate int, i.e. something that provides direct access to your argument value.
Which is better and which is worse? Well, a pointer in many cases has a greater size than an int, meaning that the first variant may pass a larger amount of data. This might be considered "bad", but since both data types will typically fit into the hardware word size, it will probably make no appreciable difference, especially if parameters are passed in CPU registers.
Also, in order to read b inside the function you have to dereference that disguised pointer. This is also "bad" from the performance point of view.
These are the formal reasons one would prefer to pass by value any parameter of small size (smaller than or equal to pointer size). For parameters of bigger size, passing by const reference becomes a better idea (assuming you don't explicitly require a copy).
However, in most cases a function that simple will probably be inlined, which will completely eliminate the difference between the two variants, regardless of which parameter type you use.
The matter of const being unnecessary in the second variant is a different story. In the first variant that const serves two important purposes:
1) It prevents you from modifying the parameter value, and thus protects the actual argument from modification. If the reference weren't const, you would be able to modify the reference parameter and thus modify the argument.
2) It allows you to use rvalues as arguments, e.g. call some_obj.setA(5). Without that const such calls would be impossible.
In the second version neither of these is an issue. There's no need to protect the actual argument from modification, since the parameter is a local copy of that argument. Regardless of what you do to the parameter, the actual argument will remain unchanged. And you can already use rvalues as arguments to setA regardless of whether the parameter is declared const or not.
For this reason people don't normally use top-level const qualifiers on parameters passed by value. But if you do declare it const, it will simply prevent you from modifying the local b inside the function. Some people actually like that, since it enforces the moderately popular "don't modify original parameter values" convention, for which reason you might sometimes see top-level const qualifiers being used in parameter declarations.
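A minimal sketch of that convention (scaled is a hypothetical helper):

int scaled(const int b) // top-level const: affects only the local copy
{
    // b += 1;          // would not compile: b is const inside the function
    return b * 2;
}

Callers are unaffected: scaled(5) and scaled(x) work exactly as they would with a plain int parameter, because top-level const on a by-value parameter is not part of the function's signature.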
If you have a lightweight type like an int or a long, you should pass by value, because there are no additional costs from working through a reference. But when you are passing heavy types, you should use references.
I agree with the reviewer. And here's why:
A (const or non-const) reference to a small simple type such as int will be more complex (in terms of the number of instructions). This is because the calling code has to pass the address of the argument into setA, and then inside setA the value has to be loaded through the address stored in b. When b is a plain int, the value itself is copied, so passing by value saves at least one memory access. This may not make much of a difference over the long runtime of a large program, but if you keep adding one extra cycle everywhere you do this, it soon adds up to noticeably slower code.
I had a look at a piece of code that went something like this:
#include <vector>

class X
{
    std::vector<int> v;
public:
    ...
    bool find(int& index, int b);
    ...
};

bool X::find(int& index, int b)
{
    while (v[index] != b)
    {
        if (index == v.size() - 1)
        {
            return false;
        }
        index++;
    }
    return true;
}
Rewriting this code to:
bool X::find(int& index, int b)
{
    int i = index;
    while (v[i] != b)
    {
        if (i == v.size() - 1)
        {
            index = i;
            return false;
        }
        i++;
    }
    index = i;
    return true;
}
meant that this function went from about 30% of the total execution time of some code that called find quite a bit to about 5% of the execution time of the same test, because the compiler could keep i in a register and only had to update the referenced value when it finished searching.
References are implemented as pointers (that's not a requirement, but it's universally true, I believe).
So in your first one, since you're just passing an "int", passing the pointer to that int will take about the same amount of space to pass (same or more registers, or same or more stack space, depending on your architecture), so there's no savings there. Plus now you have to dereference that pointer, which is an extra operation (and will almost surely cause you to go to memory, which you might not have to do with the second one, again, depending on your architecture).
Now, if what you're passing is much larger than an int, then the first one could be better because you're only passing a pointer. [NB that there are cases where it still might make sense to pass by value even for a very large object. Those cases are usually when you plan to create your own copy anyway. In that case, it's better to let the compiler do the copy, because the overall approach may improve its ability to optimize. Those cases are very complex, and my opinion is that if you're asking this question, you should study C++ more before you try to tackle them. Although they do make for interesting reading.]
Passing primitives as const reference does not save you anything. A pointer and an int use the same amount of memory. If you pass a const reference, the machine has to allocate memory for a pointer and copy the pointer's address, which has the same cost as allocating and copying an integer. If your Date class uses a single 64-bit integer (or double) to store the date, then you don't need to use const reference. However, if your Date class becomes more complex and stores additional fields, then passing the Date object by const reference should have a lower cost than passing it by value.
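For illustration, a hypothetical richer Date type (not the asker's actual class) where const reference starts to pay off:

#include <string>

struct Date {                    // hypothetical: bigger than one machine word
    int year, month, day;
    std::string timeZone;
};

// Copying a Date would also copy the string; a const reference avoids that.
bool isLeapYear(const Date& d)
{
    return (d.year % 4 == 0 && d.year % 100 != 0) || d.year % 400 == 0;
}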

when do we need to pass the size of array as a parameter

I am a little bit confused about passing an array in C/C++. I saw some cases in which the signature is like this
void f(int arr[])
some is like this
void f(int arr[], int size)
Could anybody elaborate on the difference, and on when and how to use each?
First, passing an array to a function actually passes a pointer to the first element of the array; e.g., if you have
int a[] = { 1, 2, 3 };
f(a);
Then, f() gets &a[0] passed to it. So, when writing your function prototypes, the following are equivalent:
void f(int arr[]);
void f(int *arr);
This means that the size of the array is lost, and f(), in general, can't determine the size. (This is the reason I prefer void f(int *arr) form over void f(int arr[]).)
There are two cases where f() doesn't need the information, and in those two cases, it is OK to not have an extra parameter to it.
First, there is some special, agreed value in arr that both the caller and f() take to mean "the end". For example, one can agree that a value 0 means "Done".
Then one could write:
int a[] = { 1, 2, 3, 0 }; /* make sure there is a 0 at the end */
int result = f(a);
and define f() something like:
#include <stddef.h> /* for size_t */

int f(int *a)
{
    size_t i;
    int result = 0;
    for (i = 0; a[i]; ++i) /* loop until we see a 0 */
        result += a[i];
    return result;
}
Obviously, the above scheme works only if both the caller and the callee agree to a convention, and follow it. An example is strlen() function in the C library. It calculates the length of a string by finding a 0. If you pass it something that doesn't have a 0 at the end, all bets are off, and you are in the undefined behavior territory.
The second case is when you don't really have an array. In this case, f() takes a pointer to an object (int in your example). So:
int change_me = 10;
f(&change_me);
printf("%d\n", change_me);
with
void f(int *a)
{
    *a = 42;
}
is fine: f() is not operating on an array anyway.
When an array is passed in C or C++, only its address is passed. That is why the second form is quite common, where the second parameter is the number of elements in the array. The function has no way of telling, just by looking at the address of the array, how many elements it is supposed to contain.
You can write
void f(int *arr, int size)
as well; having the latter (size) lets you avoid stepping outside the array's boundaries while reading from or writing to it.
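For example, a minimal sketch (the sum function is hypothetical) where size keeps every access inside the array:

#include <iostream>

// Sums exactly `size` elements and never reads past the end of the array.
int sum(const int* arr, int size)
{
    int total = 0;
    for (int i = 0; i < size; ++i)
        total += arr[i];
    return total;
}

int main()
{
    int a[] = {1, 2, 3};
    std::cout << sum(a, 3) << "\n"; // prints 6
}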
C and C++ are not the same thing. They have some common subset, though. What you observed here is that the "first" array dimension, when passed to a function, always results in just a pointer being passed. The "signature" (C doesn't use this term) of a function declared as
void toto(double A[23]);
is always just
void toto(double *A);
That is, the 23 above is redundant and not used by the compiler. Modern C (aka C99) has an extension here that lets you declare that A always has at least 23 elements:
void toto(double A[static 23]);
or that the pointer is const qualified
void toto(double A[const 23]);
If you add another dimension, the picture changes: then the array size is used:
void toto(double A[23][7]);
in both C and C++ is
void toto(double (*A)[7]);
that is, a pointer to an array of 7 elements. In C++ these array bounds must be an integer constant; in C they can be dynamic:
void toto(size_t n, size_t m, double A[n][m]);
The only thing you have to watch is that n and m come before A in the parameter list, so it is best to always declare functions with the parameters in that order.
The first signature just passes the array with no way to tell how big it is, which can lead to out-of-bounds errors and/or security flaws.
The second signature is the more secure version, because it allows the function to check against the size of the array and so avoid the first version's shortcomings.
Unless this is homework, raw arrays are a bit outdated. Use std::vector instead; it allows passing the vector around without having to manually pass the size, since it keeps track of the size for you.
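A minimal sketch of the std::vector alternative (the sum function is hypothetical):

#include <iostream>
#include <vector>

// The vector carries its own size, so no separate parameter is needed.
int sum(const std::vector<int>& v)
{
    int total = 0;
    for (int x : v)
        total += x;
    return total;
}

int main()
{
    std::vector<int> v{1, 2, 3};
    std::cout << sum(v) << "\n"; // prints 6
}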
The size of an array is not passed with the array itself. Therefore, if the other function needs the size, it will have it as a parameter.
The thing is, some functions implicitly understand the array to be of a certain size. So they won't need to have it specified explicitly. For example, if your function operates on an array of 3 floats, you don't need the user to tell you that it is an array of 3 floats. Just take an array.
And then there are those functions (let's call them "terrible" because they are) that will fill an array in with arbitrary data up to a point defined by that data. sprintf is probably the "best" example. It will keep putting characters in that array until it is finished writing them. That's very bad, because there's no explicit or implicit agreement between the user and the function as to how big this array could be. sprintf will write some number of characters, but there's no way for the user to know exactly how many get written (in the general case).
Which is why you should never use sprintf; use snprintf or _snprintf, depending on your compiler.
Anytime you need to know the size of the array, it needs to be provided. There is nothing special about the two forms of passing the array itself; the first parameter is the same either way. The second method simply provides the information needed to know the size of the array while the first does not.
Sometimes the array itself holds the information about its size, though. In your first example, for instance, perhaps arr[0] is set to the size of the array and the actual data begins at arr[1]. Or consider the case of C strings... you provide just a char[], and the array is assumed to end at the first element equal to '\0'. In your example, a negative value might act as a similar sentinel. Or perhaps the function simply doesn't care about the array's size and assumes it is large enough.
Such methods are inherently unsafe, though... it is easy to forget to set arr[0] or to accidentally overwrite the null terminator. Then f suddenly has no way of knowing how much space is available to it. Always prefer to provide the size explicitly, either via a size parameter like you show, or with a second pointer to the end of the array; the latter is the method generally taken by the standard library functions in C++ (sketched below). You still have the issue of providing an incorrect size, though, which is why in C++ it is recommended that you not use such an array in the first place: use an actual container that keeps track of that information for you.
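A minimal sketch of that begin/end style (the contains function is hypothetical):

#include <iostream>

// first and last bound the half-open range [first, last), standard-library style.
bool contains(const int* first, const int* last, int value)
{
    for (; first != last; ++first)
        if (*first == value)
            return true;
    return false;
}

int main()
{
    int a[] = {1, 2, 3};
    std::cout << contains(a, a + 3, 2) << "\n"; // prints 1 (found)
}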
The difference is that the second one includes a parameter that indicates the array size. The logical conclusion is that if you don't use such a parameter, the function doesn't know what the array size is. And this indeed turns out to be the case. In fact, it doesn't know you have an array. In fact, you don't have to have an array to call the function.
The array syntax here, without a specified size inside the square brackets, is a fake-out. The parameter is actually a pointer. For more information, please see http://c-faq.com/aryptr/index.html , especially section 4.

Passing integers as constant references versus copying

This might be a stupid question, but I notice that in a good number of APIs, a lot of method signatures that take integer parameters that aren't intended to be modified look like:
void method(int x);
rather than:
void method(const int &x);
To me, it looks like both of these would function exactly the same. (EDIT: apparently not in some cases, see answer by R Samuel Klatchko) In the former, the value is copied and thus can't change the original. In the latter, a constant reference is passed, so the original can't be changed.
What I want to know is why one over the other - is it because the performance is basically the same or even better with the former? e.g. passing a 16-bit value or 32-bit value rather than a 32-bit or 64-bit address? This was the only logical reason I could think of, I just want to know if this is correct, and if not, why and when one should prefer int x over const int &x and vice versa.
It's not just the cost of passing a pointer (that's essentially what a reference is), but also the dereferencing in the called method's body to retrieve the underlying value.
That's why passing an int by value is virtually guaranteed to be faster (also, the compiler can optimize and simply pass the int via processor registers, eliminating the need to push it onto the stack).
To me, it looks like both of these would function exactly the same.
It depends on exactly what the reference is to. Here is an admittedly made up example that would change based on whether you pass a reference or a value:
static int global_value = 0;

int doit(int x)
{
    ++global_value;
    return x + 1;
}

int main()
{
    return doit(global_value);
}
This code will behave differently depending on whether you have int doit(int) or int doit(const int&): with a copy, x stays 0 and doit returns 1; with a reference, the increment changes the value x refers to, so doit returns 2.
Integers are usually the size of the processor's native word and pass easily in registers. From this perspective, there is no difference between passing by value and passing by constant reference.
When in doubt, print the assembly language listing for your functions to find out how the compiler is passing the argument. Print out for both pass by value and pass by constant reference.
Also, when passing by value, the function can modify the copy. When passing by constant reference, the function cannot modify the variable (it's marked as const).
There will probably be a very, very small de-optimization for passing by reference, since at the very least one dereference will need to occur to get the actual value. Unless the call is inlined, the compiler cannot simply pass the value, because the call site and the function might be compiled separately, and it is valid and well-defined to cast away the const on a passed parameter that isn't actually const itself (see What are the benefits to passing integral types by const ref). Note, however, that the 'de-optimization' is likely to be so small as to be difficult to measure.
Most people seem to dislike pass-by-const-ref for built-ins because of this (some very much). However, I think it might be preferable in some cases if you want the compiler to assist you in ensuring that the value isn't accidentally changed within the function. It's not a big thing, but sometimes it helps.
Depending on the underlying instruction set, an integer parameter can be passed in a register or on the stack. A register is definitely faster than a memory access, which would always be required for a const reference (at least on early cache-less architectures).
You cannot pass an int literal as a non-const int&, whereas a const int& binds happily to a temporary.
Explicit type casts allow you to cast away the const on a const int& (e.g. via const_cast), opening the possibility of changing the value behind the passed reference.
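A minimal sketch of that hazard (the sneaky function is hypothetical):

#include <iostream>

void sneaky(const int& x)
{
    // Legal only because the referenced object is not itself const;
    // if it were, this write would be undefined behavior.
    const_cast<int&>(x) = 42;
}

int main()
{
    int value = 0;
    sneaky(value);
    std::cout << value << "\n"; // prints 42: the "const" argument changed
}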