Ways to accidentally create temporary objects in C++?

Years ago I believed that C was absolutely pure compared to C++ because the compiler couldn't generate any code that you couldn't predict. I now believe counterexamples include the volatile keyword and memory barriers (in multiprocessor programming or device drivers for memory-mapped hardware devices, where plain assembly language would be even purer than the optimizations of a C compiler).
At the moment I'm trying to enumerate the unpredictable things a C++ compiler can do. The main complaint that sticks in my mind about C++ is that the compiler will implicitly instantiate temporary objects, but I believe these cases can all be expected. The cases I'm thinking of are:
when a class defines a constructor taking a type other than itself (a converting constructor) without the explicit keyword
when a class defines an overloaded conversion operator: operator T()
when a function accepts an object by value instead of by reference
when a function returns an object by value instead of by reference
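For concreteness, here is a minimal sketch of those four cases (all names are made up for illustration):

#include <string>

struct Wrapper {
    Wrapper(int) {}                               // converting constructor, not explicit
    operator std::string() const { return "w"; }  // conversion operator
};

void by_value(Wrapper) {}              // parameter taken by value
Wrapper make() { return Wrapper(0); }  // result returned by value

void demo() {
    Wrapper w = 1;       // temporary Wrapper created from the int (pre-C++17 model)
    std::string s = w;   // temporary std::string from the conversion operator
    by_value(w);         // the argument object is a fresh copy
    Wrapper m = make();  // temporary return value (often elided)
    (void)s; (void)m;
}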
Are there any others?

I suppose "unpredictable" means "something in accordance with the standard but different from what the programmer expects when writing code", right?
I guess you can see from the code where objects are being instantiated or copied, even if it's not always obvious. It might be hard to understand, though.
Some stuff is just implemented in certain ways by (all?) compiler vendors, but it could be done differently. E.g., late binding (i.e., calling an overridden virtual method) is usually implemented using function pointers behind the scenes. This is probably the fastest way of doing it, but I suppose it could be done differently, and that would be unexpected. I don't know of any compiler that does it differently, though.
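As a rough illustration of that function-pointer mechanism, here is a hand-rolled sketch (this is not any vendor's actual layout, which is implementation-defined):

#include <cstdio>

struct Shape;
struct VTable { void (*draw)(const Shape*); };  // table of function pointers

struct Shape { const VTable* vtable; };         // each object carries a table pointer

void draw_circle(const Shape*) { std::puts("circle"); }
const VTable circle_vtable = { &draw_circle };

void draw(const Shape* s) {
    s->vtable->draw(s);  // "late binding": an indirect call through a pointer
}

int main() {
    Shape c{ &circle_vtable };
    draw(&c);  // prints "circle"
}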
Lots of stuff is unexpected in the sense that C++ is overly complex - hardly anybody understands the full language. So unexpected also depends on your knowledge.

12.2 Temporary objects
1 Temporaries of class type are created in various contexts: binding an rvalue to a reference (8.5.3), returning an rvalue (6.6.3), a conversion that creates an rvalue (4.1, 5.2.9, 5.2.11, 5.4), throwing an exception (15.1), entering a handler (15.3), and in some initializations (8.5).
4 There are two contexts in which temporaries are destroyed at a different point than the end of the full-expression.
In fact, I suggest taking a look at the entire 12.2.
At the moment I'm trying to enumerate the unpredictable things a C++ compiler can do. The main complaint that sticks in my mind about C++ is that the compiler will implicitly instantiate temporary objects, but I believe these cases can all be expected.
The compiler does not create temporaries implicitly; it obeys the standard. Unless, of course, you invoke undefined behavior. Note that there is something called copy elision and return value optimization, which may actually reduce the number of temporaries that would otherwise be created.
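A minimal sketch of copy elision in action (since C++17 the elision here is even mandatory; pre-C++17 compilers typically performed it as an optimization):

#include <cstdio>

struct Noisy {
    Noisy() { std::puts("ctor"); }
    Noisy(const Noisy&) { std::puts("copy"); }
    ~Noisy() { std::puts("dtor"); }
};

Noisy make() { return Noisy(); }  // eligible for return value optimization

int main() {
    Noisy n = make();  // prints "ctor" then "dtor": no "copy" at all
}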

An interesting link about common pitfalls related to this subject:
http://www.gotw.ca/gotw/002.htm

Related

Is it legal to convert a pointer/reference to a fixed array size to a smaller size

Is it legal as per the C++ standard to convert a pointer or reference to a fixed array (e.g. T(*)[N] or T(&)[N]) to a pointer or reference to a smaller fixed array of the same type and CV qualification (e.g. T(*)[M] or T(&)[M])?
Basically, would this always be well-formed for all instantiations of T (regardless of layout-type):
void consume(T (&array)[2]);

void receive(T (&array)[6])
{
    consume(reinterpret_cast<T (&)[2]>(array));
}
I don't see any references to this being a valid conversion in:
expr.reinterpret.cast,
expr.static.cast,
conv.array, or even
basic.types
However, it appears that all major compilers accept this and generate proper code, even when optimizing, when using T = std::string (compiler explorer) (not that this proves much, if it is undefined behavior).
It's my understanding that this should be illegal as per the type-system, since an object of T[2] was never truly created, which means a reference of T(&)[2] would be invalid.
I'm tagging this question c++11 because this is the version I am most interested in the answer for, but I would be curious to know whether the answer is different in newer versions as well.
There’s not much to say here except no, in any language version: the types are simply unrelated. C++20 does allow conversion from T (*)[N] to T (*)[] (and similarly for references), but that doesn’t mean you can treat two different Ns equivalently. The closest you’re going to get to a “reference” for this rule is [conv.array]/1 (“The result is a pointer to the first element of the array.”, and no T[2] object exists in your example) and a note in [defns.undefined] (“Undefined behavior may be expected when this document omits any explicit definition of behavior”).
Part of the reason that compilers don’t “catch” you is that such a reinterpret_cast is valid for returning to the real type of an object after another reinterpret_cast was used to “sneak” it through an interface that expects a pointer or reference to a different type (but doesn’t use it as that type!). That means the code as given is legitimate, but the obvious sort of definition for consume and caller for receive would together cause undefined behavior. (The other part is that optimizers often leave alone code that’s always undefined unless doing so lets them eliminate a branch.)
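A sketch of that legitimate round trip, under the assumption that the callee knows the true type (identifiers invented for illustration):

using T = int;

void consume(T (&fake)[2])
{
    // Immediately convert back to the object's real type; `fake` itself is
    // never used as a T[2], it only couriered the reference through.
    T (&real)[6] = reinterpret_cast<T (&)[6]>(fake);
    real[5] = 42;  // fine: `real` names the actual T[6] object
}

void receive(T (&array)[6])
{
    consume(reinterpret_cast<T (&)[2]>(array));  // "sneak" through the interface
}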
A late additional answer that really has more the quality of a comment, but would far exceed the allowed comment length:
First off: great question! It's remarkable that such a seemingly obvious issue is hard to verify and generates a lot of confusion even among experts. It's worth mentioning that I've already seen code of that category quite often...
Some words about undefined behavior first
I think at least the question about the pointer usage is a great example where one has to admit that theoretical undefined behavior from one aspect of the language can sometimes be "beaten" by two other strong aspects:
1. Are there other standard clauses that reduce the degree of UB for the aspect of interest in several cases? Are there perhaps even clauses whose priorities within the standard are mutually ambiguous? (There are several prominent examples still existing in C++20; see the conversion-type-id handling for operator auto(), for instance...)
2. Are there (Turing-)provable arguments that any theoretical and practical compiler realization has to behave as you expect, since other constraints from the language determine it that way? That is, even if UB can quirkily mean the compiler may apply "I can do what I want here, even the biggest mess" to your case, it might be provable that ensuring other specified(!) language aspects makes that effectively impossible.
So with respect to point 2, there's an often underrated aspect: what are the constraints (if definable) imposed by the model of the abstract machine that determine the outcome of any theoretical (compiler) implementation for the given code?
So, many words so far, but does anything from point 1 apply to your concrete case (the pointer way)?
As multiple users mentioned in the comments, a chance for that lies here: basic.types#basic.compound-4:
Two objects a and b are pointer-interconvertible if:
...
(4.4) there exists an object c such that a and c are
pointer-interconvertible, and c and b are pointer-interconvertible.
That's the simple rule of transitivity. Can we actually find such a c (for arrays)?
Within the same section, the standard says further on:
If two objects are pointer-interconvertible, then they have the same address, and it is possible to obtain a pointer to one from a pointer to the other via a reinterpret_cast. [ Note: An array object and its first element are not pointer-interconvertible, even though they have the same address. — end note ]
demolishing our dreams of the approach via the pointer to the first element. There is no such c for arrays.
Do we have another chance? You mentioned expr.reinterpret.cast#7 :
An object pointer can be explicitly converted to an object pointer of a different type. When a prvalue v of type “pointer to T1” is converted to the type “pointer to cv T2”, the result is static_cast<cv T2*>(static_cast<cv void*>(v)) if both T1 and T2 are standard-layout types ([basic.types]) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type “pointer to T1” to the type “pointer to T2” (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified.
This looks promising at first glance, but the devil is in the details. That solely ensures that you can apply the pointer conversion, since the alignment requirements for both arrays are equal; it says nothing about interconvertibility (i.e., about using the object itself) a priori.
As Davis already said: with the pointer to the first element, one could still use reinterpret_cast as a kind of fake facade, fully standard compliant, as long as the wrong-type pointer to T[2] is only really used as a forwarder, all actual use cases refer to the element pointer via a corresponding reinterpret_cast, and all use cases "are aware" of the fact that the actual type is T[6]. It is trivial to see that this is still hacky as hell for many scenarios. At the least, a type alias emphasizing the forwarding quality would be recommended here.
So a strict interpretation of the standard here is: it's undefined behavior, with the note that we all know it should work well with all common modern compilers on many common platforms (I know, the latter was not your question).
Do we have any chances according to my point 2 about effectively "weak UB" from above?
I don't think so, as long as only the abstract machine is in focus here. For instance, IMO there's no restriction in the standard saying a compiler/environment could not handle (abstract) allocation schemes differently between arrays of different sizes (changed intrinsics above threshold sizes, for instance) while still ensuring the alignment requirements. To be very quirky here, one could say a very exotic compiler would be allowed to fall back on underlying dynamic-storage-duration mechanisms even for scoped objects that appear to live on what we know as the stack. Another related possible issue could be the question of proper deallocation of arrays of dynamic storage duration here (see the similar debate about UB in the context of inheritance from classes that do not provide virtual destructors). I highly doubt that it's trivial to verify that the standard guarantees a valid cleanup a priori, i.e., effectively calling the destructors of a T[6] in your example in all cases.

Why does default constructor of std::atomic not default initialize the underlying stored value?

Since it's Thanksgiving today in the USA, I'll be the designated turkey to ask this question:
Take something as innocuous as this. An atomic with a simple plain old data type such as an int:
std::atomic<int> x;
std::cout << x;
The above will print out garbage (undefined) data. Which makes sense given what I read about the atomic constructor:
(1) default constructor
Leaves the atomic object in an uninitialized state.
An uninitialized atomic object may later be initialized by calling atomic_init.
Feels like an odd committee decision. But I'm sure they had their reasons. But I can't think of another std:: class where the default constructor will leave the object in an undefined state.
I can see how it would make sense for more complex types being used with std::atomic that don't have a default constructor and need to go the atomic_init path. But the more general case is to use an atomic with a simple type for scenarios such as reference counting, sequential identifier values, and simple poll-based locking. As such it feels weird for these types to not have their own stored value "zero-initialized" (default-initialized). Or at the very least, why have a default constructor if it isn't going to be predictable?
What's the rationale for this? Where would an uninitialized std::atomic instance be useful?
As mentioned in P0883, the main reason for this behavior is compatibility with C. Obviously C has no notion of value initialization; atomic_int i; performs no initialization. To be compatible with C, the C++ equivalent must also perform no initialization. And since atomic_int in C++ is supposed to be an alias for std::atomic<int>, then for full C/C++ compatibility, that type too must perform no initialization.
Fortunately, C++20 looks to be undoing this behavior.
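A small sketch of the before/after (the P0883 change makes the default constructor value-initialize in C++20):

#include <atomic>

std::atomic<int> a;     // pre-C++20: holds an indeterminate value; C++20: value-initialized to 0
std::atomic<int> b{0};  // explicit initialization: well-defined in every version

// The C-compatible way to initialize after the fact (deprecated in C++20):
// std::atomic_init(&a, 0);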
What's the rationale for this? Where would an uninitialized std::atomic instance be useful?
For the same reason basic "building block" user defined types should not do more than strictly needed, especially in unavoidable operations like construction.
But I can't think of another std:: class where the default constructor will leave the object in an undefined state.
That's the case for all classes that don't need an internal invariant.
There is no expectation in generic code that T x; will create a zero initialized object; but it's expected that it will create an object in a usable state. For a scalar type, any existing object is usable during its lifetime.
On the other hand, it's expected that
T x = T();
will create an object in a default state for generic code, for a normal value type. (It will normally be a "zero value" if the values being represented have such thing.)
Atomics are very different, they exist in a different "world"
Atomics aren't really about a range of values. They are about providing special guarantees for reads, writes, and compound operations; atomics are unlike other data types in a lot of ways, as no compound assignment operation is ever defined in terms of a normal assignment on that object. So the usual equivalences don't hold for atomics. You can't reason about atomics as you do about normal objects.
You simply can't write generic code over atomics and normal objects; it would make no sense whatsoever.
(See footnote.)
Summary
You can have generic code, but not atomic/non-atomic generic algorithms, as their semantics don't belong to the same style of semantic definition (and it isn't even clear how C++ can have both atomic and non-atomic actions).
"You don't pay for what you don't use."
No generic code will assume that an uninitialized variable has a value; only that it's in a valid state for assignment and other operations that don't depend on the previous value (no compound assignment obviously).
Many STL types are not initialized to a "zero" or default value by their default constructor.
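For example, a small sketch with std::array, which deliberately behaves like a built-in aggregate here:

#include <array>

int main() {
    std::array<int, 4> a;    // default-initialized: the ints are indeterminate
    std::array<int, 4> b{};  // value-initialized: all four elements are 0
    (void)a; (void)b;
}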
[Footnote:
The following is "a rant" that is a technical important text, but not important to understand why the constructor of an atomic object is as it is.
They simply follow different semantic rules, in the most extremely deep way: in a way the standard doesn't even describe, as the standard never explains the most basic fact of multithreading: that some parts of the language are evaluated as a sequence of operations making progress, and that other areas (atomics, try_lock...) aren't. In fact the authors of the standard clearly do not even see that distinction and do not understand that duality. (Note that discussing these issues will often get your questions and answers both downvoted and deleted.)
This distinction is essential as without it (and again, it appears nowhere in the standard), exactly zero programs can even have multithread-defined behavior: only old style pre thread behavior can be explained without this duality.
The symptom of the C++ committee not getting what C++ is about is the fact that they believe the "no out-of-thin-air values" guarantee is a bonus feature and not an essential part of the semantics (not getting the "no thin air" guarantee for atomics makes the promise of sequential semantics for sequential programs even more indefensible).
--end note]

Show where temporaries are created in C++

What is the fastest way to uncover where temporaries are created in my C++ code?
The answer is not always easily deducible from the standard and compiler optimizations can further eliminate temporaries.
I have experimented with godbolt.org and it's fantastic. Unfortunately, it often hides the trees behind the wood of assembly when it comes to temporaries. Additionally, aggressive compiler-optimization options make the assembly totally unreadable.
Any other means to accomplish this?
"compiler optimizations can further eliminate temporaries."
It seems you have a slight misunderstanding of the C++ semantics. The C++ Standard talks about temporaries to define the formal semantics of a program. This is a compact way to describe a large set of possible executions.
An actual compiler doesn't need to behave like this at all. And often, they won't. Real compilers know about registers; real compilers don't pretend that PODs have (trivial) constructors and destructors. This happens already before optimizations. I don't know of any compiler that will generate code for trivial ctors, even in debug mode.
Now some semantics described by the Standard can only be achieved by a fairly close approximation. When destructors have visible side effects (think std::cout), temporaries of those types cannot be entirely eliminated. But real compilers might implement the visible side effect while not allocating any storage. The notion of a temporary existing or not existing is a binary view; in reality there are intermediate forms.
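A minimal sketch of that situation: the print below must happen, even if the compiler never sets aside storage for the temporary:

#include <iostream>

struct Tracer {
    ~Tracer() { std::cout << "~Tracer\n"; }  // visible side effect
};

int main() {
    Tracer();  // temporary destroyed at the end of the full-expression;
               // its storage may be optimized away, but the output may not
}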
Due to the "as-if" rule it is probably unreliable to try to view the compilation process to see where temporaries are created.
But reading the code (and coding) while keeping in mind the following paragraph of the standard may help in finding where temporaries are created or not, [class.temporary]/2
The materialization of a temporary object is generally delayed as long as possible in order to avoid creating unnecessary temporary objects. [ Note: Temporary objects are materialized:
when binding a reference to a prvalue ([dcl.init.ref], [expr.type.conv], [expr.dynamic.cast], [expr.static.cast], [expr.const.cast], [expr.cast]),
when performing member access on a class prvalue ([expr.ref], [expr.mptr.oper]),
when performing an array-to-pointer conversion or subscripting on an array prvalue,
when initializing an object of type std::initializer_list from a braced-init-list ([dcl.init.list]),
for certain unevaluated operands ([expr.typeid], [expr.sizeof]), and
when a prvalue appears as a discarded-value expression.
In this paragraph coming from the C++17 standard, the term prvalue has a new definition [basic.lval]/1:
A prvalue is an expression whose evaluation initializes an object or a bit-field, or computes the value of the operand of an operator, as specified by the context in which it appears.
And in the latest draft (pre-C++20), the paragraph [basic.lval] has been moved to Expressions [expr], so what we knew as value categories is evolving into expression categories.
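To make the materialization contexts quoted above concrete, a short sketch (names invented for illustration):

struct Pair { int first, second; };

Pair make() { return {1, 2}; }  // make() is a prvalue expression

int main() {
    const Pair& r = make();  // binding a reference to a prvalue: materialized
    int x = make().second;   // member access on a class prvalue: materialized
    make();                  // discarded-value expression: materialized, then destroyed
    (void)r; (void)x;
}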

Using an object after std::move doesn't result in a compilation error

After std::move is called on an object, why doesn't the language cause a compilation error if the object is used after?
Is it because it is not possible for the compiler to detect this condition?
The general principle in C++ language design is "trust the programmer". Here are some issues I can think of with rejecting any use of the object after it has been an argument to std::move:
To determine whether a given use is after a call to std::move in the general case would be equivalent to solving the halting problem. (In other words, it can't be done.) You would have to come up with some rules that describe what you mean by "after" in a way that can be statically determined.
In general, it is perfectly safe to assign to an object that has been an argument to std::move. (A particular class might cause an assertion, but that would be a very odd design.)
It's hard for a compiler to tell whether a given function is going to just assign the elements of the class new values.
Remember, std::move is nothing more than a cast to an rvalue reference. In itself it doesn't actually move anything. Additionally, the language only states that a moved-from object is in a valid/destructible state, but apart from that it doesn't say anything about its content: it may still be intact, it may not be, or (like std::unique_ptr) it may be defined to have specific content (nullptr). It all depends on what the move constructor/move assignment operator implements.
So, whether or not it is safe/valid to access a moved-from object depends entirely on the specific type. Reading a std::unique_ptr after a move to see if it is nullptr is perfectly fine, for example; for other types, not so much.
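A small sketch of the unique_ptr case, whose post-move state is actually specified:

#include <cassert>
#include <memory>

int main() {
    auto p = std::make_unique<int>(42);
    auto q = std::move(p);  // the standard guarantees p is null afterwards
    assert(p == nullptr);   // well-defined use of a moved-from unique_ptr
    assert(*q == 42);
}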
This is what's called a "quality of implementation" issue: it makes much more sense for a good compiler or toolchain to issue a warning for a potentially dangerous use-after-move situation than for the language standard to formally forbid it in a precisely defined set of situations. After all, some uses after moves are legitimate (such as the case of a std::unique_ptr, which is guaranteed to be empty after being moved from), whereas other undefined cases cannot be easily detected (detecting, in general, whether an object is used after it is moved from is equivalent to the halting problem).
You can use clang-tidy to detect use after move: https://clang.llvm.org/extra/clang-tidy/checks/misc-use-after-move.html
This is because we often want to use the moved-from object. Consider the default implementation of std::swap:
#include <utility>  // std::move

template <typename T>
void swap(T& t1, T& t2)
{
    T temp = std::move(t1);
    t1 = std::move(t2);    // using moved-from t1
    t2 = std::move(temp);  // using moved-from t2
}
You do not want the compiler to warn you each time you use std::swap.
The nature of an object’s post-move state is defined by the implementor; in some cases, moving an object may leave it in a perfectly usable state, while in others the object may be rendered useless (or worse, somehow dangerous to use).
While I somewhat agree (it would be nice to at least have the option of generating a warning if an object is used in a potentially dangerous way after a move), I don't know of options to either GCC or Clang to draw attention to this type of code smell.
(In fact, to the intrepid, this might make a fine basis for e.g. a Clang plugin, indeed!)

Does type aliasing issue exist only when pointers are passed to functions as arguments?

As far as I know, when two pointers (or references) do not type-alias each other, it is legal for the compiler to make the assumption that they address different locations and to make certain optimizations based on that, e.g., reordering instructions. Therefore, having pointers to different types have the same value may be problematic. However, I think this issue only applies when the two pointers are passed to functions. Within the function body where the two pointers are created, the compiler should be able to work out the relationship between them, i.e., whether they address the same location. Am I right?
As far as I know, when two pointers (or references) do not type-alias each other, it is legal for the compiler to make the assumption that they address different locations and to make certain optimizations based on that, e.g., reordering instructions.
Correct. GCC, for example, does perform optimizations of this form, which can be disabled by passing the flag -fno-strict-aliasing.
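A classic sketch of what such an optimization looks like (function and parameter names invented):

// Under the type-based aliasing rules the compiler may assume *ip and *fp
// never refer to the same storage, whatever the caller actually passed.
int write_then_read(int* ip, float* fp)
{
    *ip = 1;
    *fp = 2.0f;  // assumed not to modify *ip
    return *ip;  // GCC/Clang typically fold this straight to the constant 1
}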
However, I think this issue only applies when the two pointers are passed to functions. Within the function body where the two pointers are created, the compiler should be able to work out the relationship between them, i.e., whether they address the same location. Am I right?
The standard doesn't distinguish based on where those pointers came from. If your operation has undefined behavior, the program has undefined behavior, period. The compiler is in no way obliged to analyze the operands at compile time, but it may give you a warning.
Implementations which are designed and intended to be suitable for low-level programming should have no particular difficulty recognizing common patterns where storage of one type is reused or reinterpreted as another in situations not involving aliasing, provided that:
Within any particular function or loop, all pointers or lvalues used to access a particular piece of storage are derived from lvalues of a common type which identify the same object or elements of the same array, and
Between the creation of a derived-type pointer and the last use of it or any pointer derived from it, all operations involving the storage are performed only using the derived pointer or other pointers derived from it (see the sketch below).
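A heavily hedged sketch of a pattern fitting both criteria (names invented; note that under a strict reading of the Standard this is still formally undefined, which is exactly the point being argued here):

#include <cstdint>

void fill_as_floats(std::uint32_t* storage, int n)
{
    // Derived-type pointer created from the storage's own lvalue...
    float* f = reinterpret_cast<float*>(storage);
    for (int i = 0; i < n; ++i)
        f[i] = 0.0f;  // ...and used exclusively until its last use here.
    // Only afterwards is the storage accessed as uint32_t again:
    storage[0] |= 0x80000000u;
}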
Most low-level programming scenarios requiring reuse or reinterpretation of storage fit these criteria, and handling code that fits these criteria will typically be rather straightforward in an implementation designed for low-level programming. If an implementation caches lvalues in registers and performs loop hoisting, for example, it could support the above semantics reasonably efficiently by flushing all cached values of type T whenever T or T* is used to form a pointer or lvalue of another type. Such an approach may not be optimal, but it would degrade performance much less than having to block all type-based optimizations entirely.
Note that it is probably in many cases not worthwhile for even an implementation intended for low-level programming to try to handle all possible scenarios involving aliasing. Doing that would be much more expensive than handling the far more common scenarios that don't involve aliasing.
Implementations which are specialized for other purposes are, of course, not required to make any attempt whatsoever to support any exceptions to 6.5p7, not even those that are often treated as part of the Standard. Whether such an implementation should be able to support such constructs would depend upon the particular purposes for which it is designed.