Is the ampersand (&) important in this case? - C++

What's the difference between
Complex operator+(Complex& A, Complex& B) {
    double re = A.getReal() + B.getReal();
    double im = A.getImg() + B.getImg();
    Complex C(re, im);
    return C;
}
and this(without &):
Complex operator+(Complex A, Complex B) {
    double re = A.getReal() + B.getReal();
    double im = A.getImg() + B.getImg();
    Complex C(re, im);
    return C;
}

Primarily, it is important not to use a reference to non-const for a function that doesn't modify the object through the reference. Using a reference to non-const will prevent the operator from being used with rvalue arguments.
Using a reference in this case may be important or it might not be. It is only relevant for optimisation purposes. If the function is not used in a hot part of the program, then its speed may not be important.
Assuming its speed is important, the importance of the argument type depends on many factors. For example, if the function is expanded inline, then the choice probably doesn't matter at all. If it isn't inlined, then it can depend on the capabilities of the target system. On one system the reference may be faster, on another the value may be faster, while on others there may be no significant difference.
You can find out both which is faster, and whether it is significant to your program by measuring the different choices.
Note that if you do use a reference, then you should use a reference to const here.
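To make the rvalue point concrete, here is a minimal, self-contained sketch (the Complex class below is a stand-in for the one in the question, with const getters assumed): the version taking references to non-const rejects a temporary argument, while the const-reference version accepts it.
class Complex {
public:
    Complex(double re = 0, double im = 0) : re_(re), im_(im) {}
    double getReal() const { return re_; }
    double getImg() const { return im_; }
private:
    double re_, im_;
};

// References to non-const: cannot be called with rvalue (temporary) arguments.
Complex addRef(Complex& A, Complex& B) {
    return Complex(A.getReal() + B.getReal(), A.getImg() + B.getImg());
}

// References to const: accepts lvalues and rvalues alike.
Complex addConstRef(const Complex& A, const Complex& B) {
    return Complex(A.getReal() + B.getReal(), A.getImg() + B.getImg());
}

int main() {
    Complex a(1, 2);
    Complex c1 = addConstRef(a, Complex(5, 6)); // OK: the temporary binds to const&
    // Complex c2 = addRef(a, Complex(5, 6));   // error: a temporary cannot bind
                                                //        to a non-const reference
    (void)c1;
}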

In the first case the overload of the + operator receives as parameters a reference to A and a reference to B. This means that no copy constructor is called. Also, if you modify A (for example by setting its real part to 0), you will see this modification in A's real part after returning from the function.
In the second case, the overload of the + operator receives a copy of A and a copy of B. In this case the copy constructor is called. Any modifications to A or B inside the function are not visible after the function ends.
Why is it sometimes better to avoid calling the copy constructor? It depends on the members of your class. Imagine that your class has a member that stores a vector with 1,000,000 elements. The copy constructor has to allocate a one-million-element vector and then copy its data. This operation takes time. So in this case it is better to avoid the call to the copy constructor. But if the members of your class are simple double values, as in your example, you can use your second definition without problems.
Also, in the first case, if you don't want to allow any modifications to A or B, you can use a const reference, like below:
Complex operator+(const Complex& A, const Complex& B);
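For completeness, a full definition of that const-reference form might look like the following sketch (reusing the sketch class above; the getters must be declared const for this to compile):
Complex operator+(const Complex& A, const Complex& B) {
    double re = A.getReal() + B.getReal(); // A and B cannot be modified here
    double im = A.getImg() + B.getImg();
    return Complex(re, im);
}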

Related

C++ overloading the equality operator. Should I write my function to accept argument passed by reference or value?

I want to overload the == operator for a simple struct
struct MyStruct {
public:
    int a;
    float b;
    bool operator==( ) { }
};
All the examples I'm seeing seem to pass the value by reference using a &.
But I really want to pass these structs by value.
Is there anything wrong with me writing this as
bool operator== (MyStruct another) { return ( (a==another.a) && (b==another.b) ); }
It should really not matter, except that you pay the penalty of a copy when you pass by value. That only matters if the struct is really heavy. In the simple example you quote, there may not be a big difference.
That being said, passing by const reference makes more sense since it expresses the intent of the overloaded == function clearly. const makes sure that the overloaded function doesn't accidentally modify the object, and passing by reference saves you from making a copy. For the == operator, there is no need to pass a copy just for comparison purposes.
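For reference, a minimal sketch of that const-reference form as a const member function might look like this:
bool operator==(const MyStruct& other) const {
    return a == other.a && b == other.b;
}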
If you are concerned about consistency, it's better to switch the other pass by value instances to pass by const ref.
While being consistent is a laudable goal, one shouldn't overdo it. A program containing only 'A' characters would be very consistent, but hardly useful. The argument-passing mechanism is not something you choose out of consistency; it is a technical decision based on technical considerations.
For example, in your case, passing by value could potentially lead to better performance, since the struct is small enough that, under the AMD64 ABI (the one used on any 64-bit Intel/AMD chip), it will be passed in registers, saving the indirection normally associated with a reference.
On the other hand, in your case it is reasonable to assume that the function will be inlined, and the passing scheme will not matter at all (since nothing is actually passed). This is shown by the codegen here (no call to operator== exists in the generated assembly): https://gcc.godbolt.org/z/G7oEgE

Is the pass-by-value-and-then-move construct a bad idiom?

Since we have move semantics in C++, nowadays it is usual to do
void set_a(A a) { _a = std::move(a); }
The reasoning is that if a is an rvalue, the copy will be elided and there will be just one move.
But what happens if a is an lvalue? It seems there will be a copy construction and then a move assignment (assuming A has a proper move assignment operator). Move assignments can be costly if the object has too many member variables.
On the other hand, if we do
void set_a(const A& a) { _a = a; }
There will be just one copy assignment. Can we say this way is preferred over the pass-by-value idiom if we will pass lvalues?
Expensive-to-move types are rare in modern C++ usage. If you are concerned about the cost of the move, write both overloads:
void set_a(const A& a) { _a = a; }
void set_a(A&& a) { _a = std::move(a); }
or a perfect-forwarding setter:
template <typename T>
void set_a(T&& a) { _a = std::forward<T>(a); }
that will accept lvalues, rvalues, and anything else implicitly convertible to decltype(_a) without requiring extra copies or moves.
Despite requiring an extra move when setting from an lvalue, the idiom is not bad since (a) the vast majority of types provide constant-time moves and (b) copy-and-swap provides exception safety and near-optimal performance in a single line of code.
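As a quick, self-contained illustration of which overload gets picked (the Widget class here is hypothetical, just holding an A member with the two setters above):
#include <utility>

struct A {};

struct Widget {
    A _a;
    void set_a(const A& a) { _a = a; }       // lvalue arguments land here
    void set_a(A&& a) { _a = std::move(a); } // rvalue arguments land here
};

int main() {
    Widget w;
    A value;
    w.set_a(value);            // calls set_a(const A&): one copy assignment
    w.set_a(std::move(value)); // calls set_a(A&&): one move assignment
    w.set_a(A{});              // calls set_a(A&&): one move assignment
}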
But what happens if a is an lvalue? It seems there will be a copy construction and then a move assignment (assuming A has a proper move assignment operator). Move assignments can be costly if the object has too many member variables.
Problem well spotted. I wouldn't go as far as to say that the pass-by-value-and-then-move construct is a bad idiom but it definitely has its potential pitfalls.
If your type is expensive to move and/or moving it is essentially just a copy, then the pass-by-value approach is suboptimal. Examples of such types include types with a fixed-size array as a member: it may be relatively expensive to move, and a move is just a copy. See also
Small String Optimization and Move Operations and
"Want speed? Measure." (by Howard Hinnant)
in this context.
The pass-by-value approach has the advantage that you only need to maintain one function but you pay for this with performance. It depends on your application whether this maintenance advantage outweighs the loss in performance.
The pass by lvalue and rvalue reference approach can lead to maintenance headaches quickly if you have multiple arguments. Consider this:
#include <vector>
using namespace std;

struct A { vector<int> v; };
struct B { vector<int> v; };

struct C {
    A a;
    B b;
    C(const A& a, const B& b) : a(a), b(b) { }
    C(const A& a, B&& b)      : a(a), b(move(b)) { }
    C(A&& a, const B& b)      : a(move(a)), b(b) { }
    C(A&& a, B&& b)           : a(move(a)), b(move(b)) { }
};
If you have multiple arguments, you will have a permutation problem. In this very simple example, it is probably still not that bad to maintain these 4 constructors. However, already in this simple case, I would seriously consider using the pass-by-value approach with a single function
C(A a, B b) : a(move(a)), b(move(b)) { }
instead of the above 4 constructors.
So long story short, neither approach is without drawbacks. Make your decisions based on actual profiling information, instead of optimizing prematurely.
The current answers are quite incomplete. Instead, I will try to draw conclusions from the lists of pros and cons below.
Short answer
In short, it may be OK, but sometimes bad.
This idiom, namely the unifying interface, has better clarity (both in conceptual design and implementation) compared to forwarding templates or different overloads. It is sometimes used with copy-and-swap (actually, as well as move-and-swap in this case).
Detailed analysis
The pros are:
It needs only one function for each parameter list.
It indeed needs only one, not multiple ordinary overloads (or even 2^n overloads when you have n parameters, each of which can be unqualified or const-qualified).
As with a forwarding template, parameters passed by value are compatible not only with const but also with volatile arguments, which reduces the number of ordinary overloads even further.
Combined with the bullet above, you don't need 4^n overloads to cover the {unqualified, const, volatile, const volatile} combinations for n parameters.
Compared to a forwarding template, it can be a non-templated function as long as the parameters do not need to be generic (parameterized through template type parameters). This allows out-of-line definitions instead of template definitions that have to be instantiated in each translation unit, which can significantly improve translation-time performance (typically, during both compiling and linking).
It also makes other overloads (if any) easier to implement.
If you have a forwarding template for a parameter object type T, it may still clash with overloads having a parameter const T& in the same position, because the argument can be an lvalue of type T and the template instantiated with T& (rather than const T&) can be preferred by overload resolution when there is no other way to differentiate the best candidate. This inconsistency may be quite surprising.
In particular, consider a forwarding template constructor with one parameter of type P&& in a class C. How many times will you forget to exclude, via SFINAE, the case where P&& matches a possibly cv-qualified C (e.g. by adding typename = enable_if_t<!is_same<C, decay_t<P>>::value> to the template-parameter-list), to ensure it does not clash with the copy/move constructors (even when the latter are explicitly user-provided)?
Since the parameter is passed by value of a non-reference type, it can force the argument to be passed as a prvalue. This can make a difference when the argument is of a class literal type. Consider a class with a static constexpr data member declared without an out-of-class definition: when that member is used as an argument to a parameter of lvalue reference type, the program may eventually fail to link, because the member is odr-used and there is no definition of it.
Note that since ISO C++17 the rules for static constexpr data members have changed to introduce a definition implicitly, so the difference is not significant in this case.
The cons are:
A unifying interface cannot replace the copy and move constructors when the parameter type is the class itself. Otherwise, copy-initialization of the parameter would recurse infinitely, because it would call the unifying constructor, which would then call itself.
As mentioned by other answers, if the cost of a copy is not ignorable (cheap and predictable enough), this means you will almost always suffer degraded performance in calls where the copy is not needed, because copy-initialization of a unifying by-value parameter unconditionally introduces a copy (either copied-to or moved-to) of the argument unless it is elided.
Even with mandatory elision since C++17, copy-initialization of a parameter object is still hardly free to remove, unless the implementation tries very hard to prove that the behavior is unchanged under the as-if rule (rather than the dedicated copy elision rules, which do not apply here), which may sometimes be impossible without whole-program analysis.
Likewise, the cost of destruction may not be ignorable either, particularly when non-trivial subobjects are involved (e.g. containers). The difference is that it applies not only to copy-initialization introduced by the copy constructor, but also by the move constructor; making moves cheaper than copies in constructors cannot improve the situation. The higher the cost of copy-initialization, the higher the cost of destruction you have to pay.
A minor shortcoming is that there is no way to tune the interface per overload, for example by specifying different noexcept-specifiers for const& and && qualified parameters.
OTOH, in this example, the unifying interface will usually give you a noexcept(false) copy plus a noexcept move if you specify noexcept, or always noexcept(false) when you specify nothing (or an explicit noexcept(false)). (Note that in the former case, noexcept does not prevent throwing during the copy, because that occurs during evaluation of the arguments, outside the function body.) There is no further chance to tune them separately.
This is considered minor because it is not frequently needed in reality.
Even if such overloads are used, they are probably confusing by nature: different specifiers may hide subtle but important behavioral differences which are difficult to reason about. Why not different names instead of overloads?
Note the example of noexcept may be particularly problematic since C++17, because the noexcept-specification now affects the function type. (Some unexpected compatibility issues can be diagnosed by a Clang++ warning.)
Sometimes the unconditional copy is actually useful. Because a composition of operations that each provide the strong exception guarantee does not itself provide that guarantee, a copy can be used as a transactional state holder when the strong exception guarantee is required and the operation cannot be broken down into a sequence of operations with no less strict (no-throw or strong) exception guarantees. (This includes the copy-and-swap idiom, although assignments are generally not recommended to be unified for other reasons; see below.) However, this does not mean the copy is otherwise unacceptable. If the intention of the interface is always to create some object of type T, and the cost of moving T is ignorable, the copy can be moved to the target without unwanted overhead.
Conclusions
So for a given set of operations, here are suggestions about whether to replace them with a unifying interface:
If not all of the parameter types match the unifying interface, or if the operations being unified differ in behavior beyond the cost of the new copies, there cannot be a unifying interface.
If the following conditions do not hold for all parameters, there cannot be a unifying interface. (But it can still be broken down into differently named functions, with one delegating to another.)
For any parameter of type T, if a copy of each argument is needed for all operations, use unifying.
If both copy and move construction of T have ignorable cost, use unifying.
If the intention of the interface is always to create some object of type T, and the cost of the move construction of T is ignorable, use unifying.
Otherwise, avoid unifying.
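For example, here is a minimal sketch of the "interface always creates a T" case from the third bullet (the Person class and its member are made up for illustration):
#include <string>
#include <utility>

// The constructor always creates a stored std::string, and moving a std::string
// is cheap, so a single by-value (unifying) parameter is appropriate.
class Person {
public:
    explicit Person(std::string name) : name_(std::move(name)) {}
private:
    std::string name_;
};

int main() {
    std::string n = "Ada";
    Person p1(n);             // copy into the parameter, then move into name_
    Person p2(std::move(n));  // move into the parameter, then move into name_
    Person p3("Grace");       // construct the parameter directly, then move
}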
Here are some examples where unifying should be avoided:
Assignment operations (including assignment to subobjects, typically via the copy-and-swap idiom) for a T whose copy and move constructions do not have ignorable cost do not meet the criteria for unifying, because the intention of assignment is not to create the object but to replace its contents. The copied object will eventually be destroyed, which incurs unnecessary overhead. This is even more obvious for self-assignment.
Insertion of values into a container does not meet the criteria unless both the copy-initialization and the destruction have ignorable cost. If the operation fails after copy-initialization (due to allocation failure, duplicate values, and so on), the parameters have to be destroyed, which incurs unnecessary overhead.
Conditional creation of an object based on the parameters incurs the overhead whenever the object is not actually created (e.g. std::map::insert_or_assign-like container insertion, even leaving aside the failure case above).
Note that the exact threshold of "ignorable" cost is somewhat subjective, because it ultimately depends on how much cost the developers and/or users can tolerate, and it may vary case by case.
In practice, I (conservatively) assume that any trivially copyable and trivially destructible type whose size is not more than one machine word (like a pointer) meets the criteria of ignorable cost in general; if the resulting code actually costs too much in such a case, it suggests either that the build tool is misconfigured or that the toolchain is not ready for production.
Do profile if there is any further doubt on performance.
Additional case study
There are some other well-known types that, by convention, are preferred to be passed by value or not:
Types that need to preserve reference identity should, by convention, not be passed by value.
A canonical example is the argument-forwarding call wrapper defined in ISO C++, which is required to forward references. Note that in the caller position it may also preserve the reference according to the ref-qualifier.
An instance of this example is std::bind. See also the resolution of LWG 817.
Some generic code may directly copy some parameters, possibly even without std::move, because the cost of the copy is assumed to be ignorable and a move does not necessarily make it better.
Such parameters include iterators and function objects (except the case of argument forwarding caller wrappers discussed above).
Note the constructor template of std::function (but not the assignment operator template) also takes the functor parameter by value.
Types whose cost is presumably comparable to that of by-value parameter types with ignorable cost are also preferred to be passed by value. (Sometimes they are used as dedicated alternatives.) For example, instances of std::initializer_list and std::basic_string_view are more or less two pointers or a pointer plus a size. This fact makes them cheap enough to be passed directly without using references.
Some types are better not passed by value unless you actually need a copy. There are different reasons.
Avoid copies by default, because the copy may be quite expensive, or at least it is not easy to guarantee the copy is cheap without inspecting the runtime properties of the value being copied. Containers are typical examples of this sort.
Without statically knowing how many elements a container holds, it is generally not safe (in the sense of a DoS attack, for example) to copy it.
A nested container (of other containers) will easily make the performance problem of copying worse.
Even empty containers are not guaranteed to be cheap to copy. (Strictly speaking, this depends on the concrete implementation of the container, e.g. the existence of a "sentinel" element in some node-based containers... but keep it simple: just avoid copying by default.)
Avoid copies by default even when performance is of no interest, because there can be unexpected side effects.
In particular, allocator-aware containers and some other types treated similarly with respect to allocators ("container semantics", in David Krauss' words) should not be passed by value; allocator propagation is another big can of semantic worms.
For a few other types the convention varies. For example, see GotW #91 for shared_ptr instances. (However, not all smart pointers are like that; observer_ptr behaves more like a raw pointer.)
For the general case where the value will be stored, pass-by-value alone is a good compromise.
For the case where you know that only lvalues will be passed (some tightly coupled code), it's unreasonable and unwise.
For the case where one suspects a speed improvement by providing both, first THINK TWICE, and if that didn't help, MEASURE.
Where the value will not be stored, I prefer passing by reference, because that prevents umpteen needless copy operations.
Finally, if programming could be reduced to unthinking application of rules, we could leave it to robots. So IMHO it's not a good idea to focus so much on rules. Better to focus on what the advantages and costs are, for different situations. Costs include not only speed, but also e.g. code size and clarity. Rules can't generally handle such conflicts of interest.
Pass by value, then move is actually a good idiom for objects that you know are movable.
As you mentioned, if an rvalue is passed, the copy is either elided or replaced by a move, and then within the constructor the parameter is moved into the member.
You could overload on const references and rvalue references explicitly; however, it gets more complicated if you have more than one parameter.
Consider the example,
class Obj {
public:
    Obj(std::vector<int> x, std::vector<int> y)
        : X(std::move(x)), Y(std::move(y)) {}

private:
    /* Our internal data. */
    std::vector<int> X, Y;
}; // Obj
If you wanted to provide explicit versions, you would end up with 4 constructors, like so:
class Obj {
public:
    Obj(std::vector<int> &&x, std::vector<int> &&y)
        : X(std::move(x)), Y(std::move(y)) {}
    Obj(std::vector<int> &&x, const std::vector<int> &y)
        : X(std::move(x)), Y(y) {}
    Obj(const std::vector<int> &x, std::vector<int> &&y)
        : X(x), Y(std::move(y)) {}
    Obj(const std::vector<int> &x, const std::vector<int> &y)
        : X(x), Y(y) {}

private:
    /* Our internal data. */
    std::vector<int> X, Y;
}; // Obj
As you can see, as you increase the number of parameters, the number of necessary constructors grows combinatorially.
If you don't have a concrete type but have a templatized constructor, you can use perfect-forwarding like so:
class Obj {
public:
    template <typename T, typename U>
    Obj(T &&x, U &&y)
        : X(std::forward<T>(x)), Y(std::forward<U>(y)) {}

private:
    std::vector<int> X, Y;
}; // Obj
References:
Want Speed? Pass by Value
C++ Seasoning
I am answering myself because I will try to summarize some of the answers. How many moves/copies do we have in each case?
(A) Pass by value and move assignment construct, passing an X parameter. If X is a...
Temporary: 1 move (the copy is elided)
Lvalue: 1 copy 1 move
std::move(lvalue): 2 moves
(B) Pass by reference and copy assignment usual (pre C++11) construct. If X is a...
Temporary: 1 copy
Lvalue: 1 copy
std::move(lvalue): 1 copy
We can assume the three kinds of parameters are equally probable. So every 3 calls we have (A) 4 moves and 1 copy, or (B) 3 copies. I.e., on average, (A) 1.33 moves and 0.33 copies per call or (B) 1 copy per call.
If we come to a situation where our classes consist mostly of PODs, moves are as expensive as copies. So we would have 1.66 copies (or moves) per call to the setter in case (A) and 1 copy in case (B).
We can say that in some circumstances (PODs based types), the pass-by-value-and-then-move construct is a very bad idea. It is 66% slower and it depends on a C++11 feature.
On the other hand, if our classes include containers (which make use of dynamic memory), (A) should be much faster (except if we mostly pass lvalues).
Please, correct me if I'm wrong.
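A quick way to check these counts on your own compiler is with an instrumented type; the following is only a sketch with made-up names (Tracer, Holder), not code from the question:
#include <iostream>
#include <utility>

struct Tracer {
    Tracer() = default;
    Tracer(const Tracer&) { std::cout << "copy\n"; }
    Tracer(Tracer&&) noexcept { std::cout << "move\n"; }
    Tracer& operator=(const Tracer&) { std::cout << "copy assign\n"; return *this; }
    Tracer& operator=(Tracer&&) noexcept { std::cout << "move assign\n"; return *this; }
};

struct Holder {
    Tracer t_;
    void set_by_value(Tracer t) { t_ = std::move(t); } // idiom (A)
    void set_by_ref(const Tracer& t) { t_ = t; }       // idiom (B)
};

int main() {
    Holder h;
    Tracer lv;
    h.set_by_value(lv);            // (A) lvalue: prints "copy", then "move assign"
    h.set_by_value(Tracer{});      // (A) temporary: copy elided, prints "move assign"
    h.set_by_value(std::move(lv)); // (A) std::move(lvalue): prints "move", then "move assign"
    h.set_by_ref(lv);              // (B) always: prints "copy assign"
}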
Readability in the declaration:
void foo1( A a ); // easy to read, but unless you see the implementation
// you don't know for sure if a std::move() is used.
void foo2( const A & a ); // longer declaration, but the interface shows
// that no copy is required on calling foo().
Performance:
A a;
foo1( a ); // copy + move
foo2( a ); // pass by reference + copy
Responsibilities:
A a;
foo1( a ); // caller copies, foo1 moves
foo2( a ); // foo2 copies
For typical inline code there is usually no difference when optimized.
But foo2() might do the copy only under certain conditions (e.g. insert into a map only if the key does not exist), whereas for foo1() the copy will always be done.
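A minimal sketch of that conditional-copy case (the Registry class and its map member are hypothetical):
#include <map>
#include <string>

struct Registry {
    std::map<std::string, int> entries_;

    // Pass by const reference: the key string is copied only if it is not
    // already present in the map.
    void add(const std::string& name) {
        entries_.try_emplace(name, 0);
    }
};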

Why do I often see references in operator overloading definitions?

For example, in the OGRE3D engine, I often see things like
class_name class_name :: operator + (class_name & object)
Instead of
class_name class_name :: operator + (class_name object)
Well, it's not that I prefer the second form, but is there a particular reason to use a reference for the input? Are there special cases where it is necessary to use a reference instead of pass-by-value? Or is it a performance matter?
It's a performance issue. Passing by reference will generally be cheaper than passing by value (it's basically equivalent to passing by pointer).
On an unrelated note, you probably want the argument to operator+ to be const class_name &object.
It is recommended to implement operator+ in terms of operator+=. First make a copy of the left argument, then modify it, and finally return the copy. Since you are going to copy the left argument anyway, you might just as well do it by using call by value for the left argument:
class_name operator+(class_name x, const class_name& y)
{
    return x += y;
}
In C++0x, you should enable move semantics for the result:
#include <utility>

class_name operator+(class_name x, const class_name& y)
{
    return std::move(x += y);
}
Besides the "usual" calling conventions for regular methods, I would note that the operators are somewhat peculiar.
The main reason to use const& instead of pass-by-value is correctness (performance comes second, at least in my mind). If your value may be polymorphic, then a copy means object slicing, which silently discards the derived part and is rarely what you want.
Therefore, if you use pass-by-value, you clearly state to your caller that the object will be copied and it should not be polymorphic.
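A small, self-contained sketch of the slicing concern (the class names are made up for illustration):
#include <iostream>

struct Shape {
    virtual ~Shape() = default;
    virtual const char* name() const { return "Shape"; }
};

struct Circle : Shape {
    const char* name() const override { return "Circle"; }
};

// Pass by value: the Circle part is sliced away; only the Shape subobject is copied.
void printByValue(Shape s) { std::cout << s.name() << '\n'; }

// Pass by const reference: dynamic dispatch still sees the Circle.
void printByRef(const Shape& s) { std::cout << s.name() << '\n'; }

int main() {
    Circle c;
    printByValue(c); // prints "Shape"
    printByRef(c);   // prints "Circle"
}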
Another reason can be performance. If the class is small and its copy-constructor trivial, it might be faster to copy it than to use indirection (think of an int-like class). There are other cases where pass-by-value can be faster, but in non-inline cases they are rarer.
I do think, however, that none of these is the real reason and the developers just picked this out of the blue...
... because the real WTF (as they say) is that the operator# functions should be declared as free functions to allow promotion of the left-hand side argument ...
So if they didn't follow this rule, why would they bother with usual argument passing style ?
So that object doesn't get copied from the original argument. In fact, the preferred way is to pass object as a const reference, and to make operator+ a const member operator:
class_name class_name :: operator + (const class_name& object) const;
Passing a reference is usually much faster than copying the whole instance of your class, of course.
Also, if there is no copy constructor defined for the class, it could be dangerous to copy it (the default copy constructor will just copy pointers, and the destructors will then delete them twice, for example).
When passing class objects to a function or operator, it is generally best to pass a reference. You can declare it const to be sure it is not modified and to avoid side effects.

return by value inline functions

I'm implementing some math types and I want to optimize the operators to minimize the number of objects created, destroyed, and copied. To demonstrate, I'll show you part of my Quaternion implementation.
class Quaternion
{
public:
    double w, x, y, z;
    ...
    Quaternion operator+(const Quaternion &other) const;
};
I want to know how the two following implementations differ from each other. I do have a += implementation that operates in place, so no new object is created, but for some higher-level operations using quaternions it's useful to have + and not just +=.
__forceinline Quaternion Quaternion::operator+( const Quaternion &other ) const
{
    return Quaternion(w+other.w, x+other.x, y+other.y, z+other.z);
}
and
__forceinline Quaternion Quaternion::operator+( const Quaternion &other ) const
{
    Quaternion q(w+other.w, x+other.x, y+other.y, z+other.z);
    return q;
}
My C++ is completely self-taught, so when it comes to some optimizations I'm unsure what to do, because I do not know exactly how the compiler handles these things. Also, how do these mechanics translate to non-inline implementations?
Any other criticisms of my code are welcomed.
Your first example allows the compiler to potentially use something called "Return Value Optimization" (RVO).
The second example allows the compiler to potentially use something called "Named Return Value Optimization" (NRVO). These 2 optimizations are clearly closely related.
Some details of Microsoft's implementation of NRVO can be found here:
http://msdn.microsoft.com/en-us/library/ms364057.aspx
Note that the article indicates that NRVO support started with VS 2005 (MSVC 8.0). It doesn't specifically say whether the same applies to RVO or not, but I believe that MSVC used RVO optimizations before version 8.0.
This article about Move Constructors by Andrei Alexandrescu has good information about how RVO works (and when and why compilers might not use it).
Including this bit:
you'll be disappointed to hear that each compiler, and often each compiler version, has its own rules for detecting and applying RVO. Some apply RVO only to functions returning unnamed temporaries (the simplest form of RVO). The more sophisticated ones also apply RVO when there's a named result that the function returns (the so-called Named RVO, or NRVO).
In essence, when writing code, you can count on RVO being portably applied to your code depending on how you exactly write the code (under a very fluid definition of "exactly"), the phase of the moon, and the size of your shoes.
The article was written in 2003 and compilers should be much improved by now; hopefully, the phase of the moon is less important to when the compiler might use RVO/NRVO (maybe it's down to day-of-the-week). As noted above it appears that MS didn't implement NRVO until 2005. Maybe that's when someone working on the compiler at Microsoft got a new pair of more comfortable shoes a half-size larger than before.
Your examples are simple enough that I'd expect both to generate equivalent code with more recent compiler versions.
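If you want to see this on your own toolchain, a small sketch like the following (a cut-down quaternion-like type with a logging copy constructor added only for the experiment) makes the elision visible:
#include <iostream>

struct Quat {
    double w, x, y, z;

    Quat(double w, double x, double y, double z) : w(w), x(x), y(y), z(z) {}
    Quat(const Quat& o) : w(o.w), x(o.x), y(o.y), z(o.z) {
        std::cout << "copy\n"; // printed only if RVO/NRVO does not kick in
    }

    // Returns an unnamed temporary (RVO form).
    Quat addRvo(const Quat& o) const {
        return Quat(w + o.w, x + o.x, y + o.y, z + o.z);
    }

    // Returns a named local (NRVO form).
    Quat addNrvo(const Quat& o) const {
        Quat q(w + o.w, x + o.x, y + o.y, z + o.z);
        return q;
    }
};

int main() {
    Quat a(1, 0, 0, 0), b(0, 1, 0, 0);
    Quat c = a.addRvo(b);  // with RVO: nothing printed
    Quat d = a.addNrvo(b); // with NRVO: nothing printed
    (void)c; (void)d;
}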
Between the two implementations you presented, there really is no difference. Any compiler doing any sort of optimizations whatsoever will optimize your local variable out.
As for the += operator, a slightly more involved discussion about whether or not you want your Quaternions to be immutable objects is probably required... I would always lean towards creating objects like this as immutable objects. (But then again, I'm more of a managed coder as well.)
If these two implementations do not generate exactly the same assembly code when optimization is turned on, you should consider using a different compiler. :) And I don't think it matters whether or not the function is inlined.
By the way, be aware that __forceinline is very non-portable. I would just use plain old standard inline and let the compiler decide.
The current consensus is that you should first implement all your ?= operators, which do not create new objects. Depending on whether exception safety is a concern (in your case it probably is not) or a goal, the definition of the ?= operator can differ. After that, you implement operator? as a free function in terms of the ?= operator, using pass-by-value semantics.
// thread safety is not a problem
class Q
{
    double w, x, y, z;
public:
    // constructors, other operators, other methods... omitted
    Q& operator+=( Q const & rhs ) {
        w += rhs.w;
        x += rhs.x;
        y += rhs.y;
        z += rhs.z;
        return *this;
    }
};

Q operator+( Q lhs, Q const & rhs ) {
    lhs += rhs;
    return lhs;
}
This has the following advantages:
Only one implementation of the logic. If the class changes you only need to reimplement operator?= and operator? will adapt automatically.
The free function operator is symmetric with respect to implicit compiler conversions
It is the most efficient implementation of operator? you can find with respect to copies
Efficiency of operator?
When you call operator? on two elements, a third object must be created and returned. Using the approach above, the copy is performed in the method call. As it is, the compiler is able to elide the copy when you are passing a temporary object. Note that this should be read as 'the compiler knows that it can elide the copy', not as 'the compiler will elide the copy'. Mileage will vary with different compilers, and even the same compiler can yield different results in different compilation runs (due to different parameters or resources available to the optimizer).
In the following code, a temporary will be created with the sum of a and b, and that temporary must be passed again to operator+ together with c to create a second temporary with the final result:
Q a, b, c;
// initialize values
Q d = a + b + c;
If operator+ has pass by value semantics, the compiler can elide the pass-by-value copy (the compiler knows that the temporary will get destructed right after the second operator+ call, and does not need to create a different copy to pass in)
Even though operator? could be implemented as a one-line function (Q operator+( Q lhs, Q const & rhs ) { return lhs += rhs; }), it should not be. The reason is that the compiler cannot know whether the reference returned by operator?= is in fact a reference to the same object. By making the return statement explicitly return the lhs object, the compiler knows that the returned copy can be elided.
Symmetry with respect to types
If there is an implicit conversion from type T to type Q, and you have two instances t and q respectively of each type, then you expect (t+q) and (q+t) both to be callable. If you implement operator+ as a member function inside Q, then the compiler will not be able to convert the t object into a temporary Q object and then call (Q(t)+q), as it cannot perform type conversions on the left-hand side when calling a member function. Thus, with a member function implementation, t+q will not compile.
Note that this is also true for operators that are not symmetric in arithmetic terms; we are talking about types. If you can subtract a T from a Q by promoting the T to a Q, then there is no reason not to be able to subtract a Q from a T with another automatic promotion.
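A minimal sketch of that symmetry point, using a hypothetical scalar-convertible type:
struct Q2 {
    double v;
    Q2(double d) : v(d) {}  // implicit conversion from double
    Q2& operator+=(const Q2& rhs) { v += rhs.v; return *this; }
};

// Free function: either operand may undergo the implicit conversion.
Q2 operator+(Q2 lhs, const Q2& rhs) { lhs += rhs; return lhs; }

int main() {
    Q2 q(1.0);
    Q2 a = q + 2.0; // OK
    Q2 b = 2.0 + q; // also OK: 2.0 is converted to Q2 for the left operand
                    // (this would not compile if operator+ were a member of Q2)
    (void)a; (void)b;
}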