References in C++

Once I read a statement that
The language feature that "sealed the deal" to include references is operator overloading.
Why are references needed to effectively support operator overloading? Is there a good explanation?

Here's what Stroustrup said in "The Design and Evolution of C++" (3.7 "references"):
References were introduced primarily to support operator overloading. ...
C passes every function argument by value, and where passing an object by value would be inefficient or inappropriate the user can pass a pointer. This strategy doesn't work where operator overloading is used. In that case, notational convenience is essential because users cannot be expected to insert address-of operators if the objects are large. For example:
a = b - c;
is acceptable (that is, conventional) notation, but
a = &b - &c;
is not. Anyway, &b - &c already has a meaning in C, and I didn't want to change that.

An obvious example would be the typical overload of ">>" as a stream extraction operator. To work as designed, this has to be able to modify both its left- and right-hand arguments. The right has to be modified, because the primary purpose is to read a new value into that variable. The left has to be modified to do things like indicating the current status of the stream.
In theory, you could pass a pointer as the right-hand argument, but to do the same for the left argument would be problematic (e.g. when you chain operators together).
Edit: it becomes more problematic for the left side partly because the basic syntax of overloading is that x#y (where "#" stands for any overloaded operator) means x.operator#(y). Now, if you change the rules so you somehow turn that x into a pointer, you quickly run into another problem: for a pointer, a lot of those operators already have a valid meaning separate from the overload. E.g., if I translate x+2 as somehow magically working with a pointer to x, then I've produced an expression that already has a meaning completely separate from the overload. To work around that, you could (for example) decide that for this purpose you'd produce a special kind of pointer that didn't support pointer arithmetic. Then you'd have to deal with x=y -- so the special pointer becomes one that you can't modify directly either, and any attempt at assigning to it ends up assigning to whatever it points at instead.
We've only restricted it enough to support two operator overloads, but our "special pointer" is already about 90% of the way to being a reference with a different name...
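For concreteness, a minimal sketch of such an extraction operator (Point is a made-up payload type; the signature follows the conventional pattern for std::istream):

#include <iostream>

struct Point { int x, y; };

// Both operands are references: the stream (left) records status such as
// failbit, and the Point (right) receives the parsed values. Returning the
// stream by reference is what makes chaining possible.
std::istream& operator>>(std::istream& in, Point& p) {
    return in >> p.x >> p.y;
}

int main() {
    Point a{}, b{};
    std::cin >> a >> b;  // parses as operator>>(operator>>(std::cin, a), b)
}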

References constitute a standard means of specifying that the compiler should handle the addresses of objects as though they were objects themselves. This is well-suited to operator overloading because operations usually need to be chained in expressions; to do this with a uniform interface, i.e., entirely by reference, you often would need to take the address of a temporary variable, which is illegal in C++ because pointers make no guarantee about the lifetimes of their referents. References, on the other hand, do. Using references tells the compiler to work a certain kind of very specific magic (with const references at least) that preserves the lifetime of the referent.
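A small illustration of that guarantee:

#include <string>

std::string make_name() { return "temporary"; }

int main() {
    // std::string* p = &make_name();    // ill-formed: cannot take the
    //                                   // address of a temporary
    const std::string& r = make_name();  // fine: the const reference extends
                                         // the temporary's lifetime to its own
    return static_cast<int>(r.size());
}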

Typically when you're implementing an operator you want to operate directly on the operand -- not a copy of it -- but passing a pointer risks that you could delete the memory inside the operator. (Yes, it would be stupid, but it would be a significant danger nonetheless.) References allow for a convenient way of allowing pointer-like access without the "assignment of responsibility" that passing pointers incurs.

Related

Why is it a good idea for (C++) types to be regular?

(This question stems from this more specific question about stream iterators.)
A type is said [Stepanov, McJones] to be Regular if:
it is equality-comparable
it is assignable (from other values of the type)
it is destructible
it is default-constructible (i.e. constructible with no arguments)
it has a (default) total ordering of its values
(and there's some wording about "underlying type" which I didn't quite get.)
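A minimal sketch of a type meeting these requirements, using C++20 facilities (Point here is hypothetical):

#include <compare>
#include <concepts>

// Default-constructible, copyable, assignable, destructible,
// equality-comparable, and totally ordered via the defaulted
// three-way comparison.
struct Point {
    int x = 0, y = 0;
    friend auto operator<=>(const Point&, const Point&) = default;
};

static_assert(std::regular<Point>);  // C++20 concept covering all but the ordering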
Some/many people claim that, when designing types - e.g. for the standard library of C++ - it is worthwhile or even important to make an effort to make these types regular, possibly ignoring the total-order requirement. Some mention the maxim:
Do as the ints do.
and indeed, the int type satisfies all these requirements. However, default-constructible types, when default-constructed, hold some kind of null, invalid or junk value - as ints do. A different approach is requiring initialization on construction (and de-initialization on destruction), so that the object's lifetime corresponds to its time of viability.
To contrast these two approaches, one can perhaps think about a T-pointer type, and a T-reference or T-reference-wrapper type. The pointer is basically regular (the ordering assumes a linear address space), and has nullptr which one needs to look out for; the reference is, under the hood, a pointer - but you can't just construct a reference-to-nothing or a junk-reference. Now, pointers may have their place, but we mostly like to work with references (or possibly reference wrappers which are assignable).
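In code, the contrast looks roughly like this:

#include <functional>

int main() {
    int a = 1, b = 2;

    int* p = nullptr;                   // the null value one needs to look out for
    p = &a;                             // rebindable at will
    p = nullptr;                        // and back to pointing at nothing

    // std::reference_wrapper<int> r;   // won't compile: no reference-to-nothing
    std::reference_wrapper<int> r = a;  // must start out bound to a live object
    r = b;                              // assignable: rebinds the wrapper to b
    return r.get();
}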
So, why should we prefer designing (as library authors) and using regular types? Or at least, why should we prefer them in so many contexts?
I doubt this is the most significant answer, but you could make the argument that "no uninitialized state" / "no junk value" is a principle that doesn't sit well with move-assignment: after you move from your value and take its resources away - what state is it left in? It is not very far from a default-constructed state (unless the move-assignment is based on some sort of swapping; but then you could make the same argument about move construction).
The rebuttal would be: "Fine, so let's not have such types move-assignable, only move-destructible"; unfortunately, C++ decided sometime in the 2000s to go with non-destructive moves. See also this question & answer by @HowardHinnant:
Why does C++ move semantics leave the source constructed?
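A small illustration of what non-destructive move means in practice, using std::string:

#include <string>
#include <utility>

int main() {
    std::string a = "resources";
    std::string b = std::move(a);  // non-destructive: a still exists afterwards

    // a is left in a valid but unspecified state; it must still be
    // assignable and destructible - not far from a default-constructed value.
    a = "reused";
    return static_cast<int>(b.size());
}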

What is the purpose of a scalar in the C++ type traits sense?

I have been able to find reference material at cppreference.com, cplusplus.com, and this site (What is a scalar Object in C++?) that enables me to determine whether a particular C++ data type is a scalar. Namely, I can apply a mental algorithm that runs like this: "Is it a reference type, a function type, or void? If not, is it an array, class, or union? If not, it's a scalar type." In code, of course, I can apply std::is_scalar<T>. And finally, I can apply the working definition "A scalar type is a type that has built-in functionality for the addition operator without overloads (arithmetic, pointer, member pointer, enum and std::nullptr_t)."
What I have not been able to find is a description of the purpose of the scalar classification. Why would anyone care if something is a scalar? It seems like a kind of "leftover" classification, like "reptile" in zoological taxonomy ("Well, a reptile is, um, an amniote that's not a bird or a mammal"). I'm guessing that it must have some use to justify its messiness. I can understand why someone would want to know whether a type is a reference -- you can't take a reference of a reference, for instance. But why would people care whether something is a scalar? What is scalarness all about?
Given is_scalar<T>, you can be sure that operator=(), operator==() and operator!=() do what you think (that is, assignment, comparison, and the inverse of that, respectively) for any T.
a class T might or might not have any of these, with arbitrary meaning;
a union T is problematic;
a function doesn't have =;
a reference might hold any of these;
an array - well, for two arrays of different sizes, == and != will make them decay to pointers and compare addresses, while = will fail at compile time.
Thus if you have is_scalar<T>, you can be sure that these work consistently. Otherwise, you need to look further.
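For instance (a sketch; the function name definitely_equal is made up for illustration):

#include <type_traits>

template <class T>
bool definitely_equal(const T& a, const T& b) {
    if constexpr (std::is_scalar_v<T>) {
        return a == b;    // guaranteed to be ordinary value comparison
    } else {
        return &a == &b;  // illustrative fallback: identity only; class
                          // types would need further inspection
    }
}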
One purpose is to write more efficient template specializations. On many architectures, it would be more efficient to pass around pointers to objects than to copy them, but scalars can fit into registers and be copied with a single machine instruction. Or a generic type might need locking, while the machine guarantees that it will read or update a properly-aligned scalar with a single atomic instruction.
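A sketch of that idea (the trait name fast_param is hypothetical):

#include <type_traits>

// Pass scalars by value - they fit in registers - and everything
// else by const reference to avoid copying.
template <class T>
using fast_param = std::conditional_t<std::is_scalar_v<T>, T, const T&>;

template <class T>
void process(fast_param<T> value);  // declaration only; note T is not
                                    // deducible here and must be spelled out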
Clue here in the notes on cppreference.com?
Each individual memory location in the C++ memory model, including the hidden memory locations used by language features (e.g. virtual table pointer), has scalar type (or is a sequence of adjacent bit-fields of non-zero length). Sequencing of side-effects in expression evaluation, interthread synchronization, and dependency ordering are all defined in terms of individual scalar objects.

built-in type variable is returned from a function

When a function returns a local object, at least three steps are involved:
The copy constructor is called to create a copy.
The local object is destroyed.
The copy is returned.
For example:
x = y + z
If x is an integer-like class object, a copy of y + z should be returned: a new temporary object is created, and the assignment operator of x will take this temporary as its parameter.
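To make those steps visible, one can use a hypothetical logging type (the names are illustrative):

#include <iostream>

struct Num {
    int v = 0;
    Num(int n) : v(n) {}
    Num(const Num& o) : v(o.v) { std::cout << "copy ctor\n"; }
    Num& operator=(const Num& o) { v = o.v; std::cout << "assign\n"; return *this; }
    friend Num operator+(const Num& a, const Num& b) { return Num(a.v + b.v); }
};

int main() {
    Num x(0), y(1), z(2);
    x = y + z;  // a temporary holds y + z, then x's assignment operator runs;
                // as the answers below note, compilers elide the intermediate copies
}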
So my questions are:
Is the same process used for built-in type such as int, double...?
If they're not the same, how's it done?
The language specification does not say "how it is done" for built-in types and built-in operators. The language simply says that the result of binary + for built-in types is an rvalue - the sum of the operand values. That's it. There's no step-by-step description of what happens when a built-in operator is used (with some exceptions like && and the comma operator).
The reason you can come up with a step-by-step description of how an overloaded operator works (which is what you have in your question) is that the process of evaluating an overloaded operator passes through several sequence points. A sequence point in a C++ program implements the concept of discrete time: it is the only thing that separates something that happens before from things that happen after. Without a separating sequence point, there's no "before" and no "after".
In the case of an overloaded operator, there are quite a few sequence points involved in the process of its evaluation, which is why you can describe this process as a sequence of steps. The evaluation of the built-in operator + has no sequence points in it, so there is absolutely no way to describe what happens there in step-by-step fashion. From the language's point of view, the built-in + is evaluated through a blurry, indivisible mix of unspecified actions that produce the correct result.
It is done this way to give the compiler better optimization opportunities when evaluating built-in operators.
This depends on several factors, especially the compiler's level of or capacity for optimization. This can also, to some extent, depend on the calling convention.
All built-in types can fit in a register (except for "unusually large" built-in types like long long int). Basically, for all calling conventions, if the return type can fit in the EAX register, that is where it's put by the callee and retrieved by the caller. So, that would be the answer to your question.
For larger objects, the procedure that you described is only true in principle, but this whole copy-destroy-temporary-copy-destroy business is very inefficient, and eliminating it is among the highest priorities of any compiler's optimization algorithm. Since such objects are too large to fit in a register, they are typically put on the stack and left there to be retrieved by the caller. Very often they are just stored right back into another local variable, so compilers will try to merge those stack slots together; often the local variable in the called function will end up at the same slot, so in the end you get no copy, no destruction, no temporary, no overhead... that's the ideal "optimized" situation, but the compiler is not always able to realize it, and it also requires the object to be of a POD class.
If you're just talking about the example you have right now (x = y + z), then there is no function call - the addition takes place right in the registers.
If you're actually calling a function (x = sum(y, z), say), then you can get a couple different behaviours based on calling conventions and data types (types that can't fit in a single register get special treatment), but with ints, it's pretty safe to assume they'll end up passed back in the EAX register.
No constructors / destructors are involved!
For more about the individual data types, check out the Return Values section on this page - I think these are for the cdecl calling convention, but they should be pretty ubiquitous.
For more on calling conventions in general, Wikipedia (x86 calling conventions) does a pretty thorough job. You'll be interested in cdecl (standard C functions) and thiscall (C++ class-member functions) in particular.
Hope this helps!
There are two questions here. First, your list about what needs to be done for returning a user-defined type from a function is basically correct; except that all actual compilers use return value optimization to avoid the temporary copy.
The other question is "what about built-in types?" Conceptually the same thing happens, it's just that (1) built-in types have "trivial constructors" and "trivial destructors" (i.e., the compiler knows there's no need to actually call any functions to construct/destruct these types), (2) the compiler knows more about operations on the built-in types than operations on user-defined types and won't actually need to call functions to, say, add two ints (instead the compiler will just use the relevant assembly instructions), and (3) the compiler knows more about built-in types than user-defined types and can use return value optimization even more often.
For the record, rvalue references and related changes in C++0x are largely about giving the programmer more ability to control things like return value optimization.

What exactly was the rationale behind introducing references in C++?

From the discussion that arose in my recent question (Why is a C++ reference considered safer than a pointer?), another question comes to mind: What exactly was the rationale behind introducing references in C++?
Section 3.7 of Stroustrup's Design and Evolution of C++ describes the introduction of references into the language. If you're interested in the rationale behind any feature of C++, I highly recommend this book.
References were introduced primarily to support operator overloading. Doug McIlroy recalls that once I was explaining some problems with a precursor to the current operator overloading scheme to him. He used the word reference with the startling effect that I muttered "Thank you," and left his office to reappear the next day with the current scheme essentially complete. Doug had reminded me of Algol68.
C passes every function argument by value, and where passing an object by value would be inefficient or inappropriate the user can pass a pointer. This strategy doesn't work where operator overloading is used. In that case, notational convenience is essential because users cannot be expected to insert address-of operators if the objects are large. For example:
a = b - c;
is acceptable (that is, conventional) notation, but
a = &b - &c;
is not. Anyway, &b - &c already has a meaning in C, and I didn't want to change that.
It is not possible to change what a reference refers to after initialization. That is, once a C++ reference is initialized, it cannot be re-bound. I had in the past been bitten by Algol68 references where r1 = r2 can either assign through r1 to the object referred to or assign a new reference value to r1 (re-binding r1) depending on the type of r2. I wanted to avoid such problems in C++.
You need them for operator overloading (of course we can now go down the rabbit hole of "what was the rationale for introducing operator overloading?")
How would you type std::auto_ptr::operator*() without references? Or std::vector::operator[]?
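Simplified sketches of those signatures (hypothetical class names, not the standard library code):

#include <cstddef>

template <class T>
struct smart_ptr {
    T* p;
    T& operator*() const { return *p; }  // lets you write *sp = value;
};

template <class T>
struct vec {
    T* data;
    T& operator[](std::size_t i) { return data[i]; }  // lets you write v[i] = value;
};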
References bind to objects implicitly. This has large advantages when you consider things like binding to temporaries or operator overloading - without references, C++ programs would be full of & and *. When you think about it, the basic use case of a pointer is actually to behave as a reference. In addition, it's much harder to screw up references - you don't perform any pointer arithmetic yourself, there is no automatic conversion from arrays (a terrible thing), etc.
References are cleaner, easier, and safer than pointers.
It's interesting because most other languages don't have references like C++ has them (aliases), they just have pointer-style references.
If code takes the address of a variable and passes it to a routine, the compiler has no way of knowing whether that address might get stored someplace and used long after the called routine has exited, and possibly after the variable has ceased to exist. By contrast, if code gives a routine a reference to a variable, it has somewhat more assurance that the reference will only be used while that routine is running. Once that routine returns, the reference will no longer be used.
Things end up getting a little 'broken' by the fact that C++ allows code to take the address of a reference. This ability was provided to allow compatibility with older routines which expected pointers rather than references. If a reference is passed to a routine which takes its address and stores it someplace, all bets are off. On the other hand, if as a matter of policy one forbids using the address of a reference in any way that might be persisted, one can pretty well gain the assurances that references provide.
To allow for operator overloading. They wanted operators to be overloadable both for objects and pointers, so they needed a way to refer to an object by something other than a pointer. Hence the reference was introduced. It is in "The Design and Evolution of C++".

Operator overloading with memory allocation?

The sentence below is from The Positive Legacy of C++ and Java by Bruce Eckel, about operator overloading in C++:
C++ has both stack allocation and heap allocation and you must overload your operators to handle all situations and not cause memory leaks. Difficult indeed.
I do not understand how operator overloading has anything to do with memory allocation. Can anyone please explain how they are correlated?
I can imagine a couple possible interpretations:
First, in C++ new and delete are both actually operators; if you choose to provide custom allocation behavior for an object by overloading these operators, you must be very careful in doing so to ensure you don't introduce leaks.
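A minimal sketch of that first interpretation - class-level operator new and operator delete as a simple pass-through (real custom allocators must pair every allocation path with a matching deallocation path):

#include <cstddef>
#include <cstdlib>
#include <new>

struct Pooled {
    static void* operator new(std::size_t size) {
        if (void* p = std::malloc(size)) return p;  // custom allocation hook
        throw std::bad_alloc{};
    }
    static void operator delete(void* p) noexcept {
        std::free(p);                               // must mirror operator new
    }
};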
Second, some types of objects require that you overload operator= to avoid memory management bugs. For example, if you have a reference counting smart pointer object (like the Boost shared_ptr), you must implement operator=, and you must be sure to do so correctly. Consider this broken example:
template <class T>
class RefCountedPtr {
public:
    RefCountedPtr(T *data) : mData(data) { mData->incrRefCount(); }
    ~RefCountedPtr() { mData->decrRefCount(); }
    RefCountedPtr<T>& operator=(const RefCountedPtr<T>& other) {
        mData = other.mData;
        return *this;
    }
    ...
protected:
    T *mData;
};
The operator= implementation here is broken because it doesn't manage the reference counts on mData and other.mData: it does not decrement the reference count on mData, leading to a leak; and it does not increment the reference count on other.mData, leading to a possible memory fault down the road because the object being pointed to could be deleted before all the actual references are gone.
Note that if you do not explicitly declare your own operator= for your classes, the compiler will provide a default implementation which has behavior identical to the implementation shown here -- that is, completely broken for this particular case.
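For contrast, a corrected operator= might look like this (a sketch; incrementing before decrementing also makes self-assignment safe, though an explicit check is kept here for clarity):

RefCountedPtr<T>& operator=(const RefCountedPtr<T>& other) {
    if (this != &other) {            // guard against self-assignment
        other.mData->incrRefCount(); // claim the new object first
        mData->decrRefCount();       // then release the old one (may delete it)
        mData = other.mData;
    }
    return *this;
}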
So as the article says -- in some cases you must overload operators, and you must be careful to handle all situations correctly.
EDIT: Sorry, I didn't realize that the reference was an online article, rather than a book. Even after reading the full article it's not clear what was intended, but I think Eckel was probably referring to situations like the second one I described above.
new and delete are actually operators in C++ which you can override to provide your own customized memory management. Take a look at the example here.
Operators are functions. Just because they add syntactic sugar does not mean you need not be careful with memory. You must manage memory as you would with any other member/global/friend function.
This is particularly important when you implement a wrapper pointer class.
There is then string concatenation, by overloading operator+ or operator+=. Take a look at the basic_string template for more information.
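For example, a toy wrapper (not the real basic_string, just an illustration of the pattern):

#include <string>

struct Text {
    std::string data;
    // The left operand is modified in place through *this; the right
    // is read through a const reference, so nothing is copied needlessly.
    Text& operator+=(const Text& rhs) {
        data += rhs.data;
        return *this;
    }
};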
If you are comparing operator overloading between Java and C++, you wouldn't be talking about new and delete - Java doesn't expose enough memory management detail for new, and doesn't need delete.
You can't overload the other operators for pointer types - at least one argument must be a class or enumerated type, so he can't be talking about providing different operators for pointers.
So operators in C++ operate on values or const references to values.
It would be very unusual for operators which operate on values or const references to values to return anything other than a value.
Apart from obvious errors common to all C++ functions - returning a reference to a stack-allocated object (which is the opposite of a memory leak), or returning a reference to an object created with new rather than a value (a mistake usually made no more than once in a career before being learnt) - it would be hard to come up with a scenario where the common operators have memory issues.
So there isn't any need to create multiple versions depending on whether the operands are stack or heap allocated based on normal patterns of use.
The arguments to an operator are objects passed as either values or references. There is no portable mechanism in C++ to test whether an object was allocated on the heap or the stack. If the object was passed by value, it will always be on the stack. So if there were a requirement to change the behaviour of the operators for the two cases, it would not be possible to do so portably in C++. (You could, on many OSes, test whether the pointer to the object lies in the space normally used for the stack or the space normally used for the heap, but that is neither portable nor entirely reliable. And even if you could have operators which took two pointers as arguments, there's no reason to believe the objects are heap-allocated just because they are pointers; that information simply doesn't exist in C++.)
The only duplication you get is for cases such as operator[] where the same operator is used as both an accessor and a mutator. Then it is normal to have a const and a non-const version, so you can set the value if the receiver is not const. That is a good thing - not being able to mutate (the publicly accessible state of) objects which have been marked constant.
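Conventionally that pair looks like this (a simplified sketch):

#include <cstddef>

template <class T>
class Buffer {
    T* mData;
public:
    T& operator[](std::size_t i)             { return mData[i]; }  // mutator: b[i] = v;
    const T& operator[](std::size_t i) const { return mData[i]; }  // accessor for const receivers
};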