I wonder why we cannot pass objects by value to the functions on which we create threads.
Is there a logical reason behind it?
Would it be harmful if the language allowed passing by value?
pthread is a C-style interface. To allow more flexibility than "pass an integer", it has to be a pointer. A void * is the most flexible way to pass arbitrary things in C. In C, you can of course pass a struct by value, but the struct's type needs to be known by both the calling code and the thread function at compile time (and it must be the same every time, so we can't use struct X in one of our threads and struct Y in another thread).
In C++ we can of course use classes and templates to allow almost anything to be passed to almost any type of function.
The C++11 std::thread allows you to use various C++ style things to overcome the "C-ness" of pthreads (and, subject to an available implementation for the target system, use threads without pthreads).
[This is not unique to pthreads. Both OS/2 and Windows thread implementations take a void * as the argument to the thread function]
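To make the C idiom concrete, here is a minimal sketch (the WorkItem type and its field names are hypothetical): the creating thread allocates the object, passes it as a void *, and the thread function casts it back and takes ownership.

#include <pthread.h>
#include <cstdio>
#include <cstdlib>

struct WorkItem { int id; double payload; };  // hypothetical payload type

void *worker(void *arg) {
    // Cast the void * back to the concrete type both sides agreed on.
    WorkItem *item = static_cast<WorkItem *>(arg);
    std::printf("thread %d got %f\n", item->id, item->payload);
    delete item;  // the thread takes ownership of the allocation
    return nullptr;
}

int main() {
    pthread_t tid;
    // Allocate dynamically so the object outlives the creating scope.
    WorkItem *item = new WorkItem{1, 3.14};
    if (pthread_create(&tid, nullptr, worker, item) != 0) {
        delete item;  // creation failed, so we still own the object
        return EXIT_FAILURE;
    }
    pthread_join(tid, nullptr);
    return 0;
}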
POSIX threads is a C API. C does not provide language facilities like copy constructors, and so it is not possible to copy an arbitrary object by value without additional information (i.e. passing in functions that are aware of the type and can do the job of allocating memory and copying the data). However, such an API would be over-complicated for no good reason.
That being said, in practice you can pass a small object "by value" through the pointer itself, as long as its size is not greater than sizeof(void *), although such conversions are implementation-defined rather than strictly portable.
Since you have tagged your question as C++: C++ does allow you to pass a function with as many arguments as you want, through variadic templates. See std::thread for more details.
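For comparison, a minimal std::thread sketch (type and names hypothetical): the constructor copies its arguments into storage owned by the new thread, so passing a class object by value just works.

#include <iostream>
#include <string>
#include <thread>

struct Config { std::string name; int level; };  // hypothetical type

void worker(Config cfg) {            // receives its own copy
    std::cout << cfg.name << ": " << cfg.level << '\n';
}

int main() {
    Config cfg{"demo", 3};
    std::thread t(worker, cfg);      // cfg is copied for the new thread
    t.join();
    return 0;                        // the original cfg was never shared
}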
The argument to pthread_create is typed as a pointer, to be as flexible as possible, but that doesn't mean you can't pass an int.
Just cast it back to an int in the start_routine.
As long as the passed-by-value argument is no larger than a pointer, you should be OK.
It is common for C-style APIs that take a function pointer as a callback to also take a pointer-sized argument as a "context", that is passed into the callback so that information can be passed from the call-site to the invocation of the callback. For example, pthread_create:
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
void *(*start_routine) (void *), void *arg);
Here, arg is the "context".
Recently, I came across a situation where I wanted to pass an integer into such a function. I obviously didn't want to actually pass a pointer to an integer because I would have to dynamically allocate it to guarantee the lifetime.
So my solution was to reinterpret_cast the int to void*, and then back to int in the callback. However, I later learnt that this is not portable: Does reinterpret_casting an integral to a pointer type and back yield the same value?
If that's the case, what is the solution?
To avoid this issue, should such APIs take a uintptr_t instead of void*?
> I obviously didn't want to actually pass a pointer to an integer because I would have to dynamically allocate it to guarantee the lifetime.
I understand that your question isn't just about pthread_create so I'll attempt to answer in a broader sense. However, you've also focused specifically upon pthread_create, giving an example, so I feel the need to answer that question too.
In the context of pthread_create, your C++ code should use a C++ idiom such as std::thread. If you are going to use the C idiom, then to get a nice balance of portability, cleanliness and maintainability with pthread_create, you should dynamically allocate this object! Avoiding dynamic allocation seems like a premature optimisation; dynamic allocation is the simplest solution (aside from using the C++ idiom). There are alternatives, but we could drivel on all day avoiding the simplest solutions until we reach the brink of insanity, and that wouldn't be too useful, would it?
> ... what is the solution?
In the context of other APIs, the sky's the limit. C++ has a marvelous set of features that make life easier, yet don't introduce noticeable overhead or complexity. We should try to keep maintainability in mind when we're designing APIs...
In the context of pthread_create, there are two almost insane alternatives:
You could pass a pointer to a structure that has automatic storage duration, containing a mutex and the integer, and synchronise after the pthread_create call using a pthread_rwlock_t or pthread_mutex_t to ensure the object remains alive for long enough. However, this seems like more work than using an object with dynamic storage duration. Do you notice how we're slowly reaching towards insanity?
Alternatively, you could pass a pointer to an object that has static storage duration. This would be appropriate only for creating one new thread. It reaches towards insanity in a different direction, by complicating future maintenance and optimisations.
What reason do you have to avoid the saner option, std::thread?
> ... should such APIs take a uintptr_t instead of void*?
I suppose that depends upon the APIs. It's a decision during the design phase. In the context of pthread_create, that API is a part of POSIX. I want to make it clear that as the POSIX world currently stands, the only functions that can be called by pthread_create must take a void * as an argument and return a void *. However, the POSIX standard doesn't seem to require that these pointers point at anything.
In both the C++ and POSIX worlds, the uintptr_t type is optional whereas the void * type is mandatory. Any APIs that want to utilise uintptr_t should do so with this optionality in mind; <pthread.h> doesn't seem optional in the POSIX.1-2008 world, and such a change could break portability as we'll soon explore, so I wouldn't expect a change to the POSIX pthread API.
If uintptr_t does exist, there are guarantees that a conversion from uintptr_t to void * and back to uintptr_t will yield the same value, so pthread_create(..., fubar, (void *) 42) (or similar using reinterpret_cast) can be well-defined providing uintptr_t exists and fubar performs the inverse conversion (i.e. (uintptr_t) context will equal 42).
Similarly, a conversion from void *(*)(uintptr_t) to void *(*)(void *) (i.e. in your call to pthread_create), and back to void *(*)(uintptr_t) produces a function pointer which can be invoked. However, invoking a function as the wrong type produces undefined behaviour! The C standard (which the POSIX standard adopts) is actually better at explaining this than I, so here's an extract from C11/6.3.2.3p8:
A pointer to a function of one type may be converted to a pointer to a function of another type and back again; the result shall compare equal to the original pointer. If a converted pointer is used to call a function whose type is not compatible with the referenced type, the behavior is undefined.
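Here is a sketch of the well-defined variant described above (fubar is the hypothetical name from the answer). Note that the argument is converted, not the function pointer: fubar keeps the void *(void *) signature, so it is called through the correct type.

#include <pthread.h>
#include <cstdint>
#include <cstdio>

void *fubar(void *context) {
    // Inverse conversion: void * back to uintptr_t, yielding 42 again.
    std::uintptr_t value = reinterpret_cast<std::uintptr_t>(context);
    std::printf("got %ju\n", static_cast<std::uintmax_t>(value));
    return nullptr;
}

int main() {
    pthread_t tid;
    std::uintptr_t value = 42;
    // uintptr_t -> void * -> uintptr_t preserves the value,
    // provided uintptr_t exists on this implementation.
    pthread_create(&tid, nullptr, fubar, reinterpret_cast<void *>(value));
    pthread_join(tid, nullptr);
    return 0;
}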
Many functions accept a function pointer as an argument. atexit and call_once are excellent examples. If these higher level functions accepted a void* argument, such as atexit(&myFunction, &argumentForMyFunction), then I could easily wrap any functor I pleased by passing a function pointer and a block of data to provide statefulness.
As is, there are many cases where I wish I could register a callback with arguments, but the registration function does not allow me to pass any arguments through. atexit only accepts one argument: a function taking 0 arguments. I cannot register a function to clean up after my object, I must register a function which cleans up after all objects of a class, and force my class to maintain a list of all objects needing cleanup.
I always viewed this as an oversight; there seemed to be no valid reason why you wouldn't allow a measly 4- or 8-byte pointer to be passed along, unless you were on an extremely limited microcontroller. I always assumed they simply didn't realize how important that extra argument could be until it was too late to redefine the spec. In the case of call_once, the POSIX version (pthread_once) accepts no arguments, but the C++11 version accepts a functor (which is virtually equivalent to passing a function and an argument, only the compiler does some of the work for you).
Is there any reason why one would choose not to allow that extra argument? Is there an advantage to accepting only "void functions with 0 arguments"?
I think atexit is just a special case, because whatever function you pass to it is supposed to be called only once. Therefore whatever state it needs to do its job can just be kept in global variables. If atexit were being designed today, it would probably take a void* in order to enable you to avoid using global variables, but that wouldn't actually give it any new functionality; it would just make the code slightly cleaner in some cases.
For many APIs, though, callbacks are allowed to take additional arguments, and not allowing them to do so would be a severe design flaw. For example, pthread_create does let you pass a void*, which makes sense because otherwise you'd need a separate function for each thread, and it would be totally impossible to write a program that spawns a variable number of threads.
Quite a number of the interfaces that take a function pointer but lack a pass-through argument simply come from a different time. However, their signatures can't be changed without breaking existing code. It is sort of a misdesign, but that's easy to say in hindsight. The overall programming style has moved on to limited uses of functional programming within generally non-functional programming languages. Also, at the time many of these interfaces were created, storing any extra data even on "normal" computers implied an observable extra cost: aside from the extra storage used, the extra argument also needs to be passed even when it isn't used. Sure, atexit() is hardly bound to be a performance bottleneck, seeing that it is called just once, but if you'd pass an extra pointer everywhere, you'd surely also have one for qsort()'s comparison function.
Specifically for something like atexit(), it is reasonably straightforward to use a custom global object with which function objects to be invoked upon exit are registered: just register one function with atexit() that calls all of the functions registered with said global object. Also note that atexit() is only guaranteed to support 32 registered functions, although implementations may support more. It seems ill-advised to use it as a registry for individual object clean-up functions rather than for a single function which calls the object clean-up functions, as other libraries may have a need to register functions, too.
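A minimal sketch of that registry idea (all names hypothetical): register a single dispatcher with atexit(), and have it drain a global list of std::function callbacks.

#include <cstdlib>
#include <functional>
#include <vector>

namespace {
    std::vector<std::function<void()>> g_cleanups;  // the global registry

    void run_cleanups() {
        // Run in reverse registration order, mirroring atexit() semantics.
        for (auto it = g_cleanups.rbegin(); it != g_cleanups.rend(); ++it)
            (*it)();
    }
}

void register_cleanup(std::function<void()> fn) {
    if (g_cleanups.empty())
        std::atexit(run_cleanups);  // consumes only one of atexit's 32 slots
    g_cleanups.push_back(std::move(fn));
}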
That said, I can't imagine why atexit() is particularly useful in C++, where objects are automatically destroyed upon program termination anyway. Of course, this approach assumes that all objects are somehow held, but that's normally necessary anyway in some form or the other, and is typically done using appropriate RAII objects.
When implementing a callback function in C++, should I still use the C-style function pointer:
void (*callbackFunc)(int);
Or should I make use of std::function:
std::function< void(int) > callbackFunc;
In short, use std::function unless you have a reason not to.
Function pointers have the disadvantage of not being able to capture some context. You won't be able, for example, to pass a lambda function as a callback if it captures some context variables (but it will work if it doesn't capture any). Calling a non-static member function of an object is thus also not possible, since the object (this-pointer) needs to be captured. (1)
std::function (since C++11) is primarily for storing a function (passing it around doesn't require storing it). Hence, if you want to store the callback, for example in a member variable, it's probably your best choice. But even if you don't store it, it's a good "first choice", although it has the disadvantage of introducing some (very small) overhead when called (so in a very performance-critical situation it might be a problem, but in most it should not be). It is very "universal": if you care a lot about consistent and readable code, and don't want to think about every choice you make (i.e. want to keep it simple), use std::function for every function you pass around.
Think about a third option: If you're about to implement a small function which then reports something via the provided callback function, consider a template parameter, which can then be any callable object, i.e. a function pointer, a functor, a lambda, a std::function, ... The drawback here is that your (outer) function becomes a template and hence needs to be implemented in the header. On the other hand, you get the advantage that the call to the callback can be inlined, since the client code of your (outer) function "sees" the call to the callback with the exact type information available.
Example for the version with the template parameter (write & instead of && for pre-C++11):
template <typename CallbackFunction>
void myFunction(..., CallbackFunction && callback) {
...
callback(...);
...
}
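A complete, compilable variant of that sketch, with the elided parts filled in by hypothetical specifics:

#include <iostream>

template <typename CallbackFunction>
void myFunction(int data, CallbackFunction && callback) {
    // ... do some work, then report the result through the callback ...
    callback(data * 2);
}

int main() {
    int sum = 0;
    // A capturing lambda works here, and the call can be inlined.
    myFunction(21, [&sum](int result) { sum += result; });
    std::cout << sum << '\n';  // prints 42
    return 0;
}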
As you can see in the following table, all of them have their advantages and disadvantages:
|                                      | function ptr | std::function | template param |
|--------------------------------------|--------------|---------------|----------------|
| can capture context variables        | no (1)       | yes           | yes            |
| no call overhead (see comments)      | yes          | no            | yes            |
| can be inlined (see comments)        | no           | no            | yes            |
| can be stored in a class member      | yes          | yes           | no (2)         |
| can be implemented outside of header | yes          | yes           | no             |
| supported without C++11 standard     | yes          | no (3)        | yes            |
| nicely readable (my opinion)         | no           | yes           | (yes)          |
(1) Workarounds exist to overcome this limitation, for example passing the additional data as further parameters to your (outer) function: myFunction(..., callback, data) will call callback(data). That's the C-style "callback with arguments", which is possible in C++ (and by the way heavily used in the WIN32 API) but should be avoided because we have better options in C++.
(2) Unless we're talking about a class template, i.e. the class in which you store the function is a template. But that would mean that on the client side the type of the function decides the type of the object which stores the callback, which is almost never an option for actual use cases.
(3) For pre-C++11, use boost::function
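A small sketch illustrating the first row of the table (identifiers hypothetical): a capturing lambda is rejected by the function-pointer version but accepted by the other two.

#include <functional>
#include <iostream>

void take_ptr(void (*cb)(int)) { cb(1); }
void take_fn(std::function<void(int)> cb) { cb(2); }
template <typename F> void take_tmpl(F && cb) { cb(3); }

int main() {
    int total = 0;
    auto capturing = [&total](int x) { total += x; };

    // take_ptr(capturing);  // error: a capturing lambda does not convert
                             // to a plain function pointer
    take_fn(capturing);      // fine: std::function stores the capture
    take_tmpl(capturing);    // fine: the template deduces the closure type
    take_ptr([](int x) { std::cout << x << '\n'; });  // capture-less: OK

    std::cout << total << '\n';  // prints 5
    return 0;
}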
void (*callbackFunc)(int); may be a C style callback function, but it is a horribly unusable one of poor design.
A well designed C style callback looks like void (*callbackFunc)(void*, int); -- it has a void* to allow the code that does the callback to maintain state beyond the function. Not doing this forces the caller to store state globally, which is impolite.
std::function< int(int) > ends up being slightly more expensive than int(*)(void*, int) invocation in most implementations. It is however harder for some compilers to inline. There are std::function clone implementations that rival function pointer invocation overheads (see 'fastest possible delegates' etc) that may make their way into libraries.
Now, clients of a callback system often need to set up resources and dispose of them when the callback is created and removed, and to be aware of the lifetime of the callback. void(*callback)(void*, int) does not provide this.
Sometimes this is available via code structure (the callback has limited lifetime) or through other mechanisms (unregister callbacks and the like).
std::function provides a means for limited lifetime management (the last copy of the object goes away when it is forgotten).
In general, I'd use a std::function unless performance concerns manifest. If they did, I'd first look for structural changes (instead of a per-pixel callback, how about generating a scanline processor based off the lambda you pass me?), which should be enough to reduce function-call overhead to trivial levels. Then, if the problem persists, I'd write a delegate based off "fastest possible delegates", and see if the performance problem goes away.
I would mostly only use function pointers for legacy APIs, or for creating C interfaces to communicate between code generated by different compilers. I have also used them as internal implementation details when implementing jump tables, type erasure, etc.: when I am both producing and consuming them, am not exposing them externally for any client code to use, and function pointers do all I need.
Note that you can write wrappers that turn a std::function<int(int)> into an int(void*,int)-style callback, assuming there is proper callback lifetime management infrastructure. So as a smoke test for any C-style callback lifetime management system, I'd make sure that wrapping a std::function works reasonably well.
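A rough sketch of such a wrapper (the registration API is hypothetical), with the lifetime contract reduced to "the std::function must outlive the registration":

#include <functional>
#include <iostream>

using c_callback = int (*)(void *, int);

// Stand-in for a C API that stores a callback plus its context.
static c_callback g_cb = nullptr;
static void *g_ctx = nullptr;
void register_callback(c_callback cb, void *ctx) { g_cb = cb; g_ctx = ctx; }
int fire_callback(int arg) { return g_cb(g_ctx, arg); }

// Trampoline: cast the context back to the std::function and call it.
int invoke_std_function(void *ctx, int arg) {
    auto &fn = *static_cast<std::function<int(int)> *>(ctx);
    return fn(arg);
}

int main() {
    int base = 10;
    std::function<int(int)> fn = [&base](int x) { return base + x; };
    // fn must outlive the registration; that is the lifetime contract
    // a real system would have to manage explicitly.
    register_callback(&invoke_std_function, &fn);
    std::cout << fire_callback(32) << '\n';  // prints 42
    return 0;
}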
Use std::function to store arbitrary callable objects. It allows the user to provide whatever context is needed for the callback; a plain function pointer does not.
If you do need to use plain function pointers for some reason (perhaps because you want a C-compatible API), then you should add a void * user_context argument so it's at least possible (albeit inconvenient) for it to access state that's not directly passed to the function.
The only reason to avoid std::function is support for legacy compilers that lack support for this template, which was introduced in C++11.
If supporting pre-C++11 language is not a requirement, using std::function gives your callers more choice in implementing the callback, making it a better option than "plain" function pointers. It offers the users of your API more choice, while abstracting out the specifics of their implementation from the code that performs the callback.
Note that std::function may introduce virtual-dispatch-style indirection (a vtable-like mechanism) in some cases, which has some impact on performance.
The other answers answer based on technical merits. I'll give you an answer based on experience.
As a very heavy X-Windows developer who always worked with function pointer callbacks with void* pvUserData arguments, I started using std::function with some trepidation.
But I found that, combined with the power of lambdas and the like, it has freed up my work considerably: I can, at a whim, throw multiple arguments in, re-order them, ignore parameters the caller wants to supply but I don't need, etc. It really makes development feel looser and more responsive, saves me time, and adds clarity.
On this basis I'd recommend that anyone try using std::function any time they'd normally have a callback. Try it everywhere, for like six months, and you may find you hate the idea of going back.
Yes there's some slight performance penalty, but I write high-performance code and I'm willing to pay the price. As an exercise, time it yourself and try to figure out whether the performance difference would ever matter, with your computers, compilers and application space.
In the C++ Standard Template Library, there's a 'functional' part, in which many classes have overloaded their () operator.
Does it bring any convenience to use functions as objects in C++?
Why can't we just use function pointer instead? Any examples?
Of course, one can always use function pointers instead of function objects. However, there are certain advantages that function objects provide over function pointers, namely:
Better Performance:
One of the most distinct and important advantages is that they are more likely to yield better performance. In the case of function objects, more details are available at compile time, so the compiler can accurately determine, and hence inline, the function to be called; with function pointers, the dereferencing of the pointer makes it difficult for the compiler to determine the actual function that will be called.
Function objects are Smart functions:
Function objects may have other member functions and attributes. This means that function objects have a state. In fact, the same function, represented by a function object, may have different states at the same time. This is not possible for ordinary functions. Another advantage of function objects is that you can initialize them at runtime before you use/call them.
Power of Generic programming:
Ordinary functions can have different types only when their signatures differ. However, function objects can have different types even when their signatures are the same. In fact, each functional behavior defined by a function object has its own type. This is a significant improvement for generic programming using templates because one can pass functional behavior as a template parameter.
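A small sketch of a stateful function object (names hypothetical): two instances of the same type carry different states at the same time, and each works as an argument to a templated algorithm.

#include <algorithm>
#include <iostream>
#include <vector>

// A function object whose state (the threshold) is set at runtime.
struct AtLeast {
    int threshold;
    bool operator()(int x) const { return x >= threshold; }
};

int main() {
    std::vector<int> v{3, 8, 1, 9, 4};
    AtLeast atLeast4{4}, atLeast8{8};  // same type, different states
    std::cout << std::count_if(v.begin(), v.end(), atLeast4) << '\n';  // 3
    std::cout << std::count_if(v.begin(), v.end(), atLeast8) << '\n';  // 2
    return 0;
}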
> Why can't we just use function pointer instead? Any examples?
A C-style function pointer cannot leverage the advantage of inlining; calling through a function pointer typically requires an additional indirection for the lookup.
However, if operator() is overloaded, it's very easy for the compiler to inline the code and save the extra call, giving an increase in performance.
The other advantage of an overloaded operator() is that one can design a function which implicitly takes the function object as an argument; there is no need to pass it as a separate function. The less hand-written code, the fewer bugs and the better the readability.
This question from Bjarne Stroustrup's (the creator of C++) webpage explains that aspect nicely.
The C++ Standard (Template) Library uses this functional style with overloaded operator() where it's needed.
> Does it bring any convenience to use functions as objects in C++?
Yes: The C++ template mechanism allows all other C/C++ programming styles (C style and OOP style, see below).
> Why can't we just use function pointer instead? Any examples?
But we can: A simple C function pointer is an object with a well defined operator(), too.
If we design a library, we do not want to force anyone to use that C pointer style if not desired. It is usually as undesired as forcing everything/everyone to be in/use OOP style; see below.
From C-programmers and functional programmers views, OOP not only tends to be slower but more verbose and in most cases to be the wrong direction of abstraction ("information" is not and should not be an "object"). Because of that, people tend to be confused whenever the word "object" is used in other contexts.
In C++, anything with the desired properties can be seen as an object. In this case, a simple C function pointer is an object, too. This does not imply that OOP paradigms are used when not desired; it is just a proper way to use the template mechanism.
To understand the performance differences, compare the programming(-language) styles/paradigms and their possible optimisations:
C style:
Function pointer with its closure ("this" in OOP, pointer to some structure) as first parameter.
To call the function, the address of the function needs to be accessed first.
That is 1 indirection; no inlining possible.
C++ (and Java) OOP style:
Reference to an object derived from a class with virtual functions.
Reference is 1st pointer.
Pointer to virtual-table is 2nd pointer.
Function pointer in virtual-table is 3rd pointer.
That is 3 indirections; no inlining possible.
C++ template style:
Copy of an object with () function.
No virtual-table since the type of that object is known at compile time.
The address of the function is known at compile time.
That is 0 indirections; inlining possible.
The C++ templates are versatile enough to allow the other two styles above, and in the case of inlining they can even outperform…
compiled functional languages: (excluding JVM and Javascript as target platforms because of missing "proper tail calls")
Function pointer and reference to its closure in machine registers.
It is usually not a function "call" but a GOTO-like jump.
Functions do not need the stack: no return address to jump back to, no parameters and no local variables on the stack.
Functions have their garbage collectable closure(s) containing parameters and a pointer to the next function to be called.
For the CPU to predict the jump, the address of the function needs to be loaded to a register as early as possible.
That is 1 indirection with possible jump prediction; everything is nearly as fast as inlined.
The main difference is that function objects are more powerful than plain function pointers, as they can hold state. Most algorithms are templated on the callable type rather than taking plain function pointers, which enables the use of powerful constructs such as binders, which call functions with different signatures by filling extra arguments with values stored on the functor, or the newer lambdas in C++11. Once the algorithms are designed to take functors, it just makes sense to provide a set of predefined generic function objects in the library.
Aside from that, there are potential performance advantages: in most cases those functors are simple classes for which the compiler has the full definition and can inline the function calls. This is the reason why std::sort can be much faster than qsort from the C library.
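A side-by-side sketch of that difference: qsort calls its comparator through a function pointer on every comparison, while std::sort bakes the comparator's type into the template instantiation, so the compiler sees its full definition and can inline it.

#include <algorithm>
#include <cstdlib>
#include <vector>

// qsort: every comparison goes through this function pointer.
int cmp_int(const void *a, const void *b) {
    int lhs = *static_cast<const int *>(a);
    int rhs = *static_cast<const int *>(b);
    return (lhs > rhs) - (lhs < rhs);
}

int main() {
    std::vector<int> a{5, 2, 9, 1}, b = a;

    std::qsort(a.data(), a.size(), sizeof(int), cmp_int);

    // std::sort: the lambda's type is a template argument, so the
    // call can be inlined rather than dispatched through a pointer.
    std::sort(b.begin(), b.end(), [](int x, int y) { return x < y; });
    return 0;
}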
With C++, how do I decide if I should pass an argument by value or by reference/pointer? (Tell me the answer for both 32 and 64 bits.) Let's take A: is passing two 32-bit values more, less, or equal work compared to passing a pointer to a 32-bit value?
B to me seems like I should always pass by value. C I think I should pass by value too, but someone told me (though I haven't seen proof) that processors don't handle values below their native word size well, so it is more work. So if I were passing these around, would it be more work to pass by value, making by-ref faster? Finally, I threw in an enum. I think enums should always be passed by value.
Note: when I say by ref I mean a const reference or pointer (can't forget the const...).
struct A { int a, b; };
struct B { int a; };
struct C { char a, b; };
enum D { a, b, c };
void fn(T a);
Now tell me the answer if I were pushing the parameters many times and the code doesn't use a tail call (let's say the value isn't used until 4 or so calls deep).
Forget the stack size. You should pass by reference if you want to change it, otherwise you should pass by value.
Preventing the sort of bugs introduced by allowing functions to change your data unexpectedly is far more important than a few bytes of wasted stack space.
If stack space becomes a problem, stop using so many levels (such as replacing a recursive solution with an iterative one) or expand your stack. Four levels of recursion isn't usually that onerous, unless your structures are massive or you're operating in the embedded world.
If performance becomes a problem, find a faster algorithm :-) If that's not possible, then you can look at passing by reference, but you need to understand that it's breaking the contract between caller and callee. If you can live with that, that's okay. I generally can't :-)
The intent of the value/reference dichotomy is to control what happens to the thing you pass as a parameter at the language level, not to fiddle with the way an implementation of the language works.
I pass all parameters by reference for consistency, including builtins (of course, const is used where possible).
I did test this in performance critical domains -- worst case loss compared to builtins was marginal. Reference can be quite a bit faster, for non-builtins, and when the calls are deep (as a generalization). This was important for me as I was doing quite a bit of deep TMP, where function bodies were tiny.
You might consider breaking that convention if you're counting instructions, the hardware is register-starved (e.g. embedded), or if the function is not a good candidate for inlining.
Unfortunately, the question you ask is more complex than it appears -- the answer may vary greatly by your platform, ABI, calling conventions, register counts, etc.
A lot depends on your requirements, but best practice is to pass large objects by reference, as it reduces the memory footprint.
If you pass a large object by value, a copy of it is made in memory, and the copy constructor is called to create that copy.
So it will take more machine cycles; also, if you pass by value, changes are not reflected in the original object.
So try passing them by reference.
First, references and pointers aren't the same.
Pass by pointer
Pass parameters by pointers if any/some of these apply:
The passed element could be null.
The resource is allocated inside the called function and the caller should be responsible for freeing such a resource. Remember in this case to provide a free() function for that resource.
The value is of a variable type, for example void *, whose actual type is determined at runtime or depends on the usage pattern (or hides the implementation, e.g. a Win32 HANDLE), such as a thread procedure argument. (Here, favor C++ templates and std::function, and use pointers for this purpose only if your environment does not permit otherwise.)
Pass by reference
Pass parameters by reference if any/some of these apply:
Most of the time (prefer passing by const reference).
If you want modifications to the passed argument to be visible to the caller (unless a const reference is used).
If the passed argument is never null.
If you know the passed argument's type and you have control over the function's signature.
Pass by copy
Pass a copy if any/some of these apply:
Generally, try to avoid this.
If you want to operate on a copy of the passed argument, i.e. you know that the called function would create a copy anyway.
With primitive types smaller than the system's pointer size, as it makes no performance/memory difference compared to a const ref.
This one is tricky: when you know that the type implements a move constructor (such as std::string in C++11), passing by value can then be cheap even though it looks as if you're passing by copy.
Any of these three lists could go on longer, but these are - I would say - the basic rules of thumb.
Your complete question is a bit unclear to me, but I can answer when you would use passing by value or by reference.
When passing by value, you have a complete copy of the parameter into the call stack. It's like you're making a local variable in the function call initialized with whatever you passed into it.
When passing by reference, you... well, pass by reference. The main difference is that you can modify the external object.
There is the benefit of reduced copying when passing large objects by reference. For basic data types (32-bit or 64-bit integers, for example), the performance difference is negligible.
Generally, if you're going to work in C/C++ you should learn to use pointers. Objects passed as parameters will almost always be passed via a pointer (vs. a reference). The few instances where you absolutely must use references are in the copy constructor. You'll want to use them in the operators as well, but it's not required.
Copying objects by value is usually a bad idea: more CPU spent running the copy constructor, and more memory for the extra object. Use const to prevent the function from modifying the object. The function signature should tell the caller what might happen to the referenced object.
Things like int, char, and pointers are usually passed by value.
As to the structures you outlined, passing by value will not really matter. You need to do profiling to find out, but in the grand scheme of a program you'd be better off looking elsewhere for ways to increase performance in terms of CPU and/or memory.
I would consider whether you want value or reference semantics before you go worrying about optimizations. Generally you would pass by reference if you want the method you are calling to be able to modify the parameter. You can pass a pointer in this case, like you would in C, but idiomatic C++ tends to use references.
There is no rule that says that small types or enums should always be passed by value. There is plenty of code that passes int& parameters, because they rely on the semantics of passing by reference. Also, you should keep in mind that for any relatively small data type, you won't notice a difference in speed between passing by reference and by value.
That said, if you have a very large structure, you probably don't want to make lots of copies of it. This is where const references are handy. Do keep in mind though that const in C++ is not strictly enforced (even if it's considered bad practice, you can always const_cast it away). There is no reason to pass a const int& over an int, although there is a reason to pass a const ClassWithManyMembers& over a ClassWithManyMembers.
All of the structs that you listed I would say are fine to pass by value if you intend them to be treated as values. Consider that if you call a function that takes one parameter of type struct Rectangle { int x, y, w, h; }, this is the same as passing those 4 parameters independently, which is really not a big deal. Generally you should be more worried about the work the copy constructor has to do: for example, passing a vector by value is probably not such a good idea, because it will have to dynamically allocate memory and copy a list whose size you don't know, invoking many more copy constructors.
While you should keep all this in mind, a good general rule is: if you want reference semantics, pass by reference. Otherwise, pass intrinsics by value and other things by const reference.
Also, C++11 introduced r-value references which complicate things even further. But that's a different topic.
These are the rules that I use:
for native types:
by value when they are input arguments
by non-const reference when they are mandatory output arguments
for structs or classes:
by const reference when they are input arguments
by non-const reference when they are output arguments
for arrays:
by const pointer when they are input arguments (const applies to the data, not the pointer here, i.e. const TYPE *)
by pointer when they are output arguments
I've found that there are very few times that require making an exception to the above rules. The one exception that comes to mind is for a struct or class argument that is optional, in which case a reference would not work. In that case I use a const pointer (input) or a non-const pointer (output), so that you can also pass 0.
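A sketch of signatures following those rules (all names hypothetical):

#include <string>

struct Settings { std::string path; int level; };

// native type, input argument: by value
double scale(double factor) { return factor * 2.0; }

// struct/class, input argument: by const reference
int levelOf(const Settings &settings) { return settings.level; }

// struct/class, output argument: by non-const reference
void load(Settings &out) { out = Settings{"/tmp/demo", 1}; }

// array, input argument: const pointer to the data (const TYPE *)
double sum(const double *values, int count) {
    double total = 0.0;
    for (int i = 0; i < count; ++i) total += values[i];
    return total;
}

// array, output argument: pointer to the data
void fill(double *values, int count) {
    for (int i = 0; i < count; ++i) values[i] = 0.0;
}

// the exception: an optional struct argument as a const pointer,
// so the caller can also pass 0
void configure(const Settings *optionalSettings) {
    if (optionalSettings) { /* use *optionalSettings */ }
}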
If you want a copy, then pass by value. If you want to change it and you want those changes to be seen outside the function, then pass by reference. If you want speed and don't want to change it, pass by const reference.