C/C++ unions and undefined behaviour

Is the following undefined behaviour?
union {
int foo;
float bar;
} baz;
baz.foo = 3.14 * baz.bar;
I remember that writing and reading from the same underlying memory between two sequence points is UB, but I am not certain.

I remember that writing and reading from the same underlying memory between two sequence points is UB, but I am not certain.
Reading and writing the same memory location in the same expression does not invoke undefined behavior unless that location is modified more than once between two sequence points, or the side effect is unsequenced relative to a value computation using the value at the same location.
C11: 6.5 Expressions:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. [...]
The expression
baz.foo = 3.14 * baz.bar;
has well-defined behaviour provided bar has been initialized beforehand. The reason is that the side effect on baz.foo is sequenced after the value computations of the objects baz.foo and baz.bar.
C11: 6.5.16/3 Assignment operators:
[...] The side effect of updating the stored value of the left operand is sequenced after the value computations of the left and right operands. The evaluations of the operands are unsequenced.

Disclaimer: This answer addresses C++.
You're accessing an object whose lifetime hasn't begun yet - baz.bar - which induces UB by [basic.life]/(6.1).
Assuming bar has been brought to life (e.g. by initializing it), your code is fine; before the assignment, foo need not be alive as no operation is performed that depends on its value, and during it, the active member is changed by reusing the memory and effectively initializing it. The current rules aren't clear about the latter; see CWG #1116. However, the status quo is that such assignments are indeed setting the target member as active (=alive).
Note that the assignment is sequenced (i.e. guaranteed to happen) after the value computation of the operands - see [expr.ass]/1.
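To make the two answers above concrete, here is a minimal C++ sketch of the "fine once bar is alive" case (foo, bar and baz come from the question; the union tag and wrapper function are just for illustration; the C-specific caveat about compatible types is discussed in the next answer):
union U {
    int foo;
    float bar;
};

void demo()
{
    U baz;
    baz.bar = 1.5f;            // bar is initialized: its lifetime has begun and it is the active member
    baz.foo = 3.14 * baz.bar;  // no sequencing problem: the store to baz.foo is sequenced
                               // after the value computations of baz.foo and baz.bar
}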

Answering for C, not C++
I thought this was Defined Behavior, but then I read the following paragraph from ISO C2x (which I guess is also present in older C standards, but didn't check):
6.5.16.1/3 (Assignment operators::Simple Assignment::Semantics):
If the value being stored in an object is read from another object
that overlaps in any way the storage of the first object, then the
overlap shall be exact and the two objects shall have qualified or
unqualified versions of a compatible type; otherwise, the behavior is
undefined.
So, let's consider the following:
union {
int a;
const int b;
} db;
union {
int a;
float b;
} ub1;
union {
uint32_t a;
int32_t b;
} ub2;
Then, it is Defined Behavior to do:
db.a = db.b + 1;
But it is Undefined Behavior to do:
ub1.a = ub1.b + 1;
or
ub2.a = ub2.b + 1;
The definition of compatible types is in 6.2.7/1 (Compatible type and composite type). See also: __builtin_types_compatible_p().

The Standard uses the phrase "Undefined Behavior", among other things, as a catch-all for situations where many implementations would process a construct in at least somewhat predictable fashion (e.g. yielding a not-necessarily-predictable value without side effects), but where the authors of the Standard thought it impractical to try to anticipate everything that implementations might do. It wasn't intended as an invitation for implementations to behave gratuitously nonsensically, nor as an indication that code was erroneous (the phrase "non-portable or erroneous" was very much intended to include constructs that might fail on some machines, but would be correct on code which was not intended to be suitable for use with those machines).
On some platforms like the 8051, if a compiler were given a construct like someInt16 += *someUnsignedCharPtr << 4; the most efficient way to process it if it didn't have to accommodate the possibility that the pointer might point to the lower byte of someInt16 would be to fetch *someUnsignedCharPtr, shift it left four bits, add it to the LSB of someInt16 (capturing the carry), reload *someUnsignedCharPtr, shift it right four bits, and add it along with the earlier carry to the MSB of someInt16. Loading the value from *someUnsignedCharPtr twice would be faster than loading it, storing its value to a temporary location before doing the shift, and then having to load its value from that temporary location. If, however, someUnsignedCharPtr were to point to the lower byte of someInt16, then the modification of that lower byte before the second load of *someUnsignedCharPtr would corrupt the upper bits of that byte which would, after shifting, be added to the upper byte of someInt16.
The Standard would allow a compiler to generate such code, even though character pointers are exempt from aliasing rules, because it does not require that compilers handle all situations where unsequenced reads and writes affect regions of storage that partially overlap. If such accesses were performed using a union instead of a character pointer, a compiler might recognize that the character-type access would always overlap the least significant byte of the 16-bit value, but I don't think the authors of the Standard wanted to require that compilers invest the time and effort that might be necessary to handle such obscure cases meaningfully.
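For concreteness, here is a sketch of the partial-overlap scenario being described (the names someInt16 and someUnsignedCharPtr come from the text above; the wrapper functions are hypothetical, and C++ syntax is used for a discussion phrased in C terms):
#include <cstdint>

std::int16_t someInt16 = 0x1234;

void accumulate(unsigned char* someUnsignedCharPtr)
{
    // Harmless when the pointer refers to unrelated storage. The hazard described
    // above arises only when it points at one of the bytes of someInt16: then the
    // object being read overlaps the object being stored to inexactly, which is the
    // situation the exact-overlap rule quoted earlier (C11 6.5.16.1/3) leaves undefined.
    someInt16 += *someUnsignedCharPtr << 4;
}

void demo()
{
    accumulate(reinterpret_cast<unsigned char*>(&someInt16));  // the overlapping case
}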

Related

Why is *ptr = (*ptr)++ Undefined Behavior

I am trying to learn how to explain the cause (if any) of undefined behavior in the following cases (given below).
int i = 0, *ptr = &i;
i = ++i; //is this UB? If yes then why according to C++11
*ptr = (*ptr)++; //i think this is UB but i am unable to explain exactly why is this so
*ptr = ++(*ptr); //i think this is not UB but can't explain why
I have looked at many SO posts describing UB for different pointer cases similar to the cases above, but I am still unable to explain exactly why they result in UB (i.e., using which point(s) from the standard we can prove that they will result in UB).
I am looking for explanations according to C++11(or C++14) but not C++17 and not Pre-C++11.
Undefined behavior stems from this:
C++11 [intro.execution]/15 Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced... If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined.
C++17 [intro.execution]/17 Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced... If a side effect on a memory location (4.4) is unsequenced relative to either another side effect on the same memory location or a value computation using the value of any object in the same memory location, and they are not potentially concurrent (4.7), the behavior is undefined.
This text is similar. The main difference lies in "except where noted" part; in C++17, the order of evaluation of operands is specified for more operators than in C++11. Thus:
C++17 [expr.ass]/1 In all cases, the assignment is sequenced after the value
computation of the right and left operands, and before the value computation of the assignment expression. The right operand is sequenced before the left operand.
C++11 lacks the bolded part. This part is what makes i = i++ well-defined in C++17, but undefined in C++11. That's because for postfix increment, the side effect is not part of a value computation of the expression:
C++11 and C++17 [expr.post.incr]/1 The value computation of the ++ expression is sequenced before the modification of the operand object.
So "the assignment is sequenced after the value computation of the right and left operands" is not by itself sufficient: the assignment is sequenced after the value computation of i++, and the side effect is also sequenced after that same value computation, but nothing says how they are sequenced relative to each other. Therefore, they are unsequenced, and they are both modifying the same object (here, i). This exhibits undefined behavior.
The addition of "the right operand is sequenced before the left operand" in C++17 means that the side effect of i++ is sequenced before the value computation of i, and both are sequenced before the assignment.
On the other hand, for pre-increment the side effect is necessarily part of the evaluation of the expression:
C++11 and C++17 [expr.pre.incr]/1 ... The result is the updated operand; it is an lvalue ...
So the value computation of ++i involves incrementing i first, and then applying an lvalue-to-rvalue conversion to obtain the updated value. This value computation is sequenced before the assignment in both C++11 and C++17, and so the two side effects on i are sequenced relative to each other; no undefined behavior.
Nothing changes in this analysis if i is replaced with (*ptr). That's just another way to refer to the same object or memory location.
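As a compact summary of that analysis, here is the original snippet annotated with the conclusions above (a sketch of the reasoning, not normative wording):
void summary()
{
    int i = 0, *ptr = &i;

    i = ++i;          // OK in C++11/14: the side effect of ++i is part of its value
                      // computation, which is sequenced before the assignment's side effect
    i = i++;          // UB in C++11/14: the side effect of i++ and the assignment to i are
                      // unsequenced; well-defined in C++17, where the right operand is
                      // sequenced before the left
    *ptr = ++(*ptr);  // same analysis as i = ++i: well-defined
    *ptr = (*ptr)++;  // same analysis as i = i++: UB before C++17
}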
The C++ Standard is based upon the C Standard, whose authors didn't need any particular "reason" to say that implementations may process a construct in whatever fashion would be most useful to their customers [which is what they intended the phrase "Undefined Behavior" to mean]. Many platforms can cheaply guarantee, for small primitive types, that race conditions involving a read and conflicting write to the same object will always yield either old or new data, and that race conditions involving conflicting writes will result in every subsequent read seeing one of the written values. Rather than trying to identify all of the cases where implementations should or should not be expected to uphold such guarantees, the Standard allows implementations to, at their leisure, process code "in a documented manner characteristic of the environment". Because it's not practical for all implementations to offer such guarantees in all possible scenarios, and because the range of scenarios where such guarantees would be practical would be different on different platforms, the authors of the Standard allowed implementations to weigh the pros and cons of offering various behavioral guarantees on their particular target platforms, rather than trying to write precise rules that would be appropriate for all possible implementations.
Note also that if one were to do something like:
*p = (*q)++;
return q[0] + q[i]; // where 'i' is some object of type `int`.
when p and q are equal and i is zero, a compiler might quite plausibly generate code where the assignment would undo the effect of the increment, but which would return the sum of the old value of *q, plus 1, plus the actual stored value of *q (which would be the old value, rather than the incremented value). Although this would be a logical consequence of the specified race-condition semantics, trying to specify it precisely would have been sufficiently awkward that the Standard simply allows implementations to specify the behavior as tightly or loosely as they see fit.

C++ volatile: guaranteed 32-bit accesses?

In my Linux C++ project, I have a hardware memory region mapped somewhere in the physical address space which I access using uint32_t pointers, after doing a mmap.
The release build of the application crashes with a SIGBUS (bus error).
This is happening because the compiler optimizes accesses to the aforementioned hardware memory using 64-bit accesses instead of sticking to 32-bit ones; the hardware memory can only be accessed using 32-bit reads/writes, hence the bus error.
I marked the uint32_t pointer as volatile.
It works, for this one specific code portion at least, because the compiler is told not to do reordering, and most of the time it would have to reorder to optimize.
I know that volatile controls when the compiler accesses the memory. The question is: does volatile also tell the compiler how to access the memory, i.e. to access it exactly as the programmer instructs? Am I guaranteed that the compiler will always stick to doing 32-bit accesses to volatile uint32_t buffers?
E.g. does volatile also guarantee that the 2 consecutive writes to the 2 consecutive 32-bit values in the following code snippet are performed as 32-bit reads/writes as well?
void aFunction(volatile uint32_t* hwmem_array)
{
[...]
// Are we guaranteed by volatile that the following 2 consecutive writes, in consecutive memory regions
// are not merged into a single 64-bit write by the compiler?
hwmem_array[0] = 0x11223344u;
hwmem_array[1] = 0xaabbccddu;
[...]
}
I think I answered my own question, please correct me if I'm wrong.
C99 standard draft: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
Quotes:
“
6 An object that has volatile-qualified type may be modified in ways
unknown to the implementation or have other unknown side effects.
Therefore any expression referring to such an object shall be
evaluated strictly according to the rules of the abstract machine, as
described in 5.1.2.3. Furthermore, at every sequence point the value
last stored in the object shall agree with that prescribed by the
abstract machine, except as modified by the unknown factors mentioned
previously.
”
Section 5.1.2.3:
“
2 Accessing a volatile object, modifying an object, modifying a file,
or calling a function that does any of those operations are all side
effects, which are changes in the state of the execution environment.
Evaluation of an expression may produce side effects. At certain
specified points in the execution sequence called sequence points, all
side effects of previous evaluations shall be complete and no side
effects of subsequent evaluations shall have taken place. (A summary
of the sequence points is given in annex C.)
5 The least requirements on a conforming implementation are: — At
sequence points, volatile objects are stable in the sense that
previous accesses are complete and subsequent accesses have not yet
occurred.
”
Annex C (informative) Sequence points:
“
1 The following are the sequence points described in 5.1.2.3:
[…]
— The end of a full expression: an initializer (6.7.8); the expression
in an expression statement (6.8.3); the controlling expression of a
selection statement (if or switch) (6.8.4); the controlling expression
of a while or do statement (6.8.5); each of the expressions of a for
statement (6.8.5.3); the expression in a return statement (6.8.6.4).
”
So theoretically we are guaranteed that at the end of any expression involving a volatile object, the volatile object is written/read, as the compiler was instructed to do.
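For illustration, here is a minimal sketch of the setup described in the question (the physical address, size, and /dev/mem mapping are hypothetical placeholders; error handling is reduced to early returns):
#include <cstddef>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    const off_t  hw_phys_addr = 0x40000000;  // hypothetical register block
    const size_t hw_size      = 0x1000;

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) return 1;

    void* p = mmap(nullptr, hw_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, hw_phys_addr);
    if (p == MAP_FAILED) { close(fd); return 1; }

    // Per the wording quoted above, each statement below is a separate volatile access
    // that must be performed as the abstract machine prescribes: it may not be elided,
    // duplicated, or merged with the neighbouring access.
    volatile std::uint32_t* regs = static_cast<volatile std::uint32_t*>(p);
    regs[0] = 0x11223344u;  // intended as one 32-bit store
    regs[1] = 0xaabbccddu;  // intended as a second, separate 32-bit store

    munmap(p, hw_size);
    close(fd);
}
Note that the standard also says that what constitutes an access to a volatile-qualified object is implementation-defined, so in practice you are additionally relying on the compiler using naturally sized 32-bit loads and stores for an aligned volatile uint32_t, which mainstream compilers generally do for register-sized scalars.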

Why is f(i = -1, i = -1) undefined behavior?

I was reading about order of evaluation violations, and they give an example that puzzles me.
1) If a side effect on a scalar object is un-sequenced relative to another side effect on the same scalar object, the behavior is undefined.
// snip
f(i = -1, i = -1); // undefined behavior
In this context, i is a scalar object, which apparently means
Arithmetic types (3.9.1), enumeration types, pointer types, pointer to member types (3.9.2), std::nullptr_t, and cv-qualified versions of these types (3.9.3) are collectively called scalar types.
I don’t see how the statement is ambiguous in that case. It seems to me that regardless of whether the first or second argument is evaluated first, i ends up as -1, and both arguments are also -1.
Can someone please clarify?
UPDATE
I really appreciate all the discussion. So far, I like @harmic's answer a lot since it exposes the pitfalls and intricacies of defining this statement in spite of how straightforward it looks at first glance. @acheong87 points out some issues that come up when using references, but I think that's orthogonal to the unsequenced side effects aspect of this question.
SUMMARY
Since this question got a ton of attention, I will summarize the main points/answers. First, allow me a small digression to point out that "why" can have closely related yet subtly different meanings, namely "for what cause", "for what reason", and "for what purpose". I will group the answers by which of those meanings of "why" they addressed.
for what cause
The main answer here comes from Paul Draper, with Martin J contributing a similar but not as extensive answer. Paul Draper's answer boils down to
It is undefined behavior because it is not defined what the behavior is.
The answer is overall very good in terms of explaining what the C++ standard says. It also addresses some related cases of UB such as f(++i, ++i); and f(i=1, i=-1);. In the first of the related cases, it's not clear if the first argument should be i+1 and the second i+2 or vice versa; in the second, it's not clear if i should be 1 or -1 after the function call. Both of these cases are UB because they fall under the following rule:
If a side effect on a scalar object is unsequenced relative to another side effect on the same scalar object, the behavior is undefined.
Therefore, f(i=-1, i=-1) is also UB since it falls under the same rule, even though the intention of the programmer is (IMHO) obvious and unambiguous.
Paul Draper also makes it explicit in his conclusion that
Could it have been defined behavior? Yes. Was it defined? No.
which brings us to the question of "for what reason/purpose was f(i=-1, i=-1) left as undefined behavior?"
for what reason / purpose
Although there are some (perhaps careless) oversights in the C++ standard, many omissions are well reasoned and serve a specific purpose. Although I am aware that the purpose is often either "make the compiler-writer's job easier" or "faster code", I was mainly interested to know whether there is a good reason to leave f(i=-1, i=-1) as UB.
harmic and supercat provide the main answers that give a reason for the UB. harmic points out that an optimizing compiler might break up the ostensibly atomic assignment operations into multiple machine instructions, and that it might further interleave those instructions for optimal speed. This could lead to some very surprising results: i ends up as -2 in his scenario! Thus, harmic demonstrates how assigning the same value to a variable more than once can have ill effects if the operations are unsequenced.
supercat provides a related exposition of the pitfalls of trying to get f(i=-1, i=-1) to do what it looks like it ought to do. He points out that on some architectures, there are hard restrictions against multiple simultaneous writes to the same memory address. A compiler could have a hard time catching this if we were dealing with something less trivial than f(i=-1, i=-1).
davidf also provides an example of interleaving instructions very similar to harmic's.
Although harmic's, supercat's, and davidf's examples are each somewhat contrived, taken together they still serve to provide a tangible reason why f(i=-1, i=-1) should be undefined behavior.
I accepted harmic's answer because it did the best job of addressing all meanings of why, even though Paul Draper's answer addressed the "for what cause" portion better.
other answers
JohnB points out that if we consider overloaded assignment operators (instead of just plain scalars), then we can run into trouble as well.
Since the operations are unsequenced, there is nothing to say that the instructions performing the assignment cannot be interleaved. It might be optimal to do so, depending on CPU architecture. The referenced page states this:
If A is not sequenced before B and B is not sequenced before A, then
two possibilities exist:
evaluations of A and B are unsequenced: they may be performed in any order and may overlap (within a single thread of execution, the
compiler may interleave the CPU instructions that comprise A and B)
evaluations of A and B are indeterminately-sequenced: they may be performed in any order but may not overlap: either A will be complete
before B, or B will be complete before A. The order may be the
opposite the next time the same expression is evaluated.
That by itself doesn't seem like it would cause a problem - assuming that the operation being performed is storing the value -1 into a memory location. But there is also nothing to say that the compiler cannot optimize that into a separate set of instructions that has the same effect, but which could fail if the operation was interleaved with another operation on the same memory location.
For example, imagine that it was more efficient to zero the memory, then decrement it, compared with loading the value -1 in. Then this:
f(i=-1, i=-1)
might become:
clear i
clear i
decr i
decr i
Now i is -2.
It is probably a bogus example, but it is possible.
First, "scalar object" means a type like an int, a float, or a pointer (see What is a scalar Object in C++?).
Second, it may seem more obvious that
f(++i, ++i);
would have undefined behavior. But
f(i = -1, i = -1);
is less obvious.
A slightly different example:
int i;
f(i = 1, i = -1);
std::cout << i << "\n";
What assignment happened "last", i = 1, or i = -1? It's not defined in the standard. Really, that means i could be 5 (see harmic's answer for a completely plausible explanation of how this could be the case). Or your program could segfault. Or reformat your hard drive.
But now you ask: "What about my example? I used the same value (-1) for both assignments. What could possibly be unclear about that?"
You are correct...except in the way the C++ standards committee described this.
If a side effect on a scalar object is unsequenced relative to another side effect on the same scalar object, the behavior is undefined.
They could have made a special exception for your special case, but they didn't. (And why should they? What use would that ever possibly have?) So, i could still be 5. Or your hard drive could be empty. Thus the answer to your question is:
It is undefined behavior because it is not defined what the behavior is.
(This deserves emphasis because many programmers think "undefined" means "random", or "unpredictable". It doesn't; it means not defined by the standard. The behavior could be 100% consistent, and still be undefined.)
Could it have been defined behavior? Yes. Was it defined? No. Hence, it is "undefined".
That said, "undefined" doesn't mean that a compiler will format your hard drive...it means that it could and it would still be a standards-compliant compiler. Realistically, I'm sure g++, Clang, and MSVC will all do what you expected. They just wouldn't "have to".
A different question might be Why did the C++ standards committee choose to make this side-effect unsequenced?. That answer will involve history and opinions of the committee. Or What is good about having this side-effect unsequenced in C++?, which permits any justification, whether or not it was the actual reasoning of the standards committee. You could ask those questions here, or at programmers.stackexchange.com.
A practical reason to not make an exception from the rules just because the two values are the same:
// config.h
#define VALUEA 1
// defaults.h
#define VALUEB 1
// prog.cpp
f(i = VALUEA, i = VALUEB);
Consider the case where this was allowed.
Now, some months later, the need arises to change
#define VALUEB 2
Seemingly harmless, isn't it? And yet suddenly prog.cpp wouldn't compile anymore.
Yet, we feel that compilation should not depend on the value of a literal.
Bottom line: there is no exception to the rule because it would make successful compilation depend on the value (rather the type) of a constant.
EDIT
@HeartWare pointed out that constant expressions of the form A DIV B are not allowed in some languages when B is 0, and cause compilation to fail. Hence changing a constant could cause compilation errors in some other place. Which is, IMHO, unfortunate. But it is certainly good to restrict such things to the unavoidable.
The confusion is that storing a constant value into a local variable is not one atomic instruction on every architecture that C is designed to run on. The processor the code runs on matters more than the compiler in this case. For example, on ARM, where a single instruction cannot carry a complete 32-bit constant, storing an int in a variable needs more than one instruction. Example with this pseudo-code, where you can only store 8 bits at a time and must work in a 32-bit register; i is an int32:
reg = 0xFF; // first instruction
reg |= 0xFF00; // second
reg |= 0xFF0000; // third
reg |= 0xFF000000; // fourth
i = reg; // last
You can imagine that if the compiler wants to optimize, it may interleave the same sequence twice, and you don't know what value will get written to i; and let's say that it is not very smart:
reg = 0xFF;
reg |= 0xFF00;
reg |= 0xFF0000;
reg = 0xFF;
reg |= 0xFF000000;
i = reg; // writes 0xFF0000FF == -16776961
reg |= 0xFF00;
reg |= 0xFF0000;
reg |= 0xFF000000;
i = reg; // writes 0xFFFFFFFF == -1
However, in my tests gcc is kind enough to recognize that the same value is used twice, generates it once, and does nothing weird. I get -1, -1
But my example is still valid as it is important to consider that even a constant may not be as obvious as it seems to be.
Behavior is commonly specified as undefined if there is some conceivable reason why a compiler which was trying to be "helpful" might do something which would cause totally unexpected behavior.
In the case where a variable is written multiple times with nothing to ensure that the writes happen at distinct times, some kinds of hardware might allow multiple "store" operations to be performed simultaneously to different addresses using a dual-port memory. However, some dual-port memories expressly forbid the scenario where two stores hit the same address simultaneously, regardless of whether or not the values written match. If a compiler for such a machine notices two unsequenced attempts to write the same variable, it might either refuse to compile or ensure that the two writes cannot get scheduled simultaneously. But if one or both of the accesses is via a pointer or reference, the compiler might not always be able to tell whether both writes might hit the same storage location. In that case, it might schedule the writes simultaneously, causing a hardware trap on the access attempt.
Of course, the fact that someone might implement a C compiler on such a platform does not suggest that such behavior shouldn't be defined on hardware platforms when using stores of types small enough to be processed atomically. Trying to store two different values in unsequenced fashion could cause weirdness if a compiler isn't aware of it; for example, given:
uint8_t v; // Global
void hey(uint8_t *p)
{
moo(v=5, (*p)=6);
zoo(v);
zoo(v);
}
if the compiler in-lines the call to "moo" and can tell it doesn't modify "v", it might store a 5 to v, then store a 6 to *p, then pass 5 to "zoo", and then pass the contents of v to "zoo". If "zoo" doesn't modify "v", there should be no way the two calls could be passed different values, but that could easily happen anyway. On the other hand, in cases where both stores would write the same value, such weirdness could not occur, and there would on most platforms be no sensible reason for an implementation to do anything weird. Unfortunately, some compiler writers don't need any excuse for silly behaviors beyond "because the Standard allows it", so even those cases aren't safe.
C++17 defines stricter evaluation rules. In particular, it sequences function arguments (although in unspecified order).
N5659 §4.6:15
Evaluations A and B are indeterminately sequenced when either A is sequenced before B or B is sequenced before A,
but it is unspecified which. [ Note: Indeterminately sequenced evaluations cannot overlap, but either could
be executed first. —end note ]
N5659 § 8.2.2:5
The
initialization of a parameter, including every associated value computation and side effect, is indeterminately
sequenced with respect to that of any other parameter.
It allows some cases which would be UB before:
f(i = -1, i = -1); // value of i is -1
f(i = -1, i = -2); // value of i is either -1 or -2, but not specified which one
The fact that the result would be the same in most implementations in this case is incidental; the behavior is still undefined. Consider f(i = -1, i = -2): here, order matters. The only reason it doesn't matter in your example is the accident that both values are -1.
Given that the expression is specified as one with an undefined behaviour, a maliciously compliant compiler might display an inappropriate image when you evaluate f(i = -1, i = -1) and abort the execution - and still be considered completely correct. Luckily, no compilers I am aware of do so.
It looks to me like the only rule pertaining to the sequencing of function argument expressions is here:
3) When calling a function (whether or not the function is inline, and whether or not explicit function call syntax is used), every value computation and side effect associated with any argument expression, or with the postfix expression designating the called function, is sequenced before execution of every expression or statement in the body of the called function.
This does not define sequencing between argument expressions, so we end up in this case:
1) If a side effect on a scalar object is unsequenced relative to another side effect on the same scalar object, the behavior is undefined.
In practice, on most compilers, the example you quoted will run fine (as opposed to "erasing your hard disk" and other theoretical undefined behavior consequences).
It is, however, a liability, as it depends on specific compiler behaviour, even if the two assigned values are the same. Also, obviously, if you tried to assign different values, the results would be "truly" undefined:
bool f(int l, int r) {
return l < -1;
}
auto b = f(i = -1, i = -2);
if (b) {
formatDisk();
}
The assignment operator could be overloaded, in which case the order could matter:
struct A {
bool first;
A () : first (false) {
}
const A & operator = (int i) {
first = !first;
return * this;
}
};
void f (A a1, A a2) {
// ...
}
// ...
A i;
f (i = -1, i = -1); // the argument evaluated first has ax.first == true
Actually, there's a reason not to depend on the compiler checking that i is assigned the same value twice so that it can replace the two assignments with a single one. What if we have some expressions?
void g(int a, int b, int c, int n) {
int i;
// hey, compiler has to prove Fermat's theorem now!
f(i = 1, i = (ipow(a, n) + ipow(b, n) == ipow(c, n)));
}
This is just answering the "I'm not sure what "scalar object" could mean besides something like an int or a float".
I would interpret "scalar object" as an abbreviation of "scalar type object", or just "scalar type variable". Then pointers and enums (enumeration constants) are also of scalar type.
There is an MSDN article on Scalar Types.
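For what it's worth, the standard type trait std::is_scalar matches this list, so the claim can be checked at compile time (Color and S below are hypothetical example types):
#include <cstddef>
#include <type_traits>

enum class Color { Red, Green };
struct S { int m; };

static_assert(std::is_scalar<int>::value, "arithmetic types are scalar");
static_assert(std::is_scalar<double>::value, "arithmetic types are scalar");
static_assert(std::is_scalar<Color>::value, "enumeration types are scalar");
static_assert(std::is_scalar<int*>::value, "pointer types are scalar");
static_assert(std::is_scalar<int S::*>::value, "pointer-to-member types are scalar");
static_assert(std::is_scalar<std::nullptr_t>::value, "std::nullptr_t is scalar");
static_assert(!std::is_scalar<S>::value, "class types are not scalar");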

What does 'prior value shall be accessed only to determine the value to be stored' mean?

From Prasoon's answer to the question regarding "Undefined Behavior and Sequence Points", I do not understand what the following means:
.. the prior value shall be accessed only to determine the value to be stored.
As examples, the following are cited to possess Undefined Behaviour in C++:
a[i] = i++;
int x = i + i++;
Despite the explanations given there, I do not understand this part (I think I correctly understand the rest of the answer).
I do not understand what is wrong with the above code samples. I think these have well defined steps for the compiler as below.
a[i] = i++;
a[i] = i;
i = i + 1;
int x = i + i++ ;
x = i + i;
i = i + 1;
What am I missing? What does 'prior value shall be accessed only to determine the value to be stored' mean?
See also this question and my answer to it. I'm not going to vote to close this as a duplicate because you're asking about C++ rather than C, but I believe the issue is the same in both languages.
the prior value shall be accessed only to determine the value to be stored.
This does seem like an odd requirement; why should the standard care why a value is accessed? It makes sense when you realize that if the prior value is read to determine the value to be stored in the same object, that implicitly imposes an ordering on the two operations, so the read has to happen before the write. Because of that ordering, the two accesses to the same object (one read and one write) are safe. The compiler cannot rearrange (optimize) the code in a way that causes them to interfere with each other.
On the other hand, in an expression like
a[i] = i++
there are three accesses to i: a read on the left hand side to determine which element of a is to be modified, a read on the right hand side to determine the value to be incremented, and a write that stores the incremented value back in i. The read and write on the RHS are ok (i++ by itself is safe), but there's no defined ordering between the read on the LHS and the write on the RHS. So the compiler is free to rearrange the code in ways that change the relationship between those read and write operations, and the standard figuratively throws up its hands and leaves the behavior undefined, saying nothing about the possible consequences.
Both C11 and C++11 change the wording in this area, making some ordering requirements explicit. The "prior value" wording is no longer there. Quoting from a draft of the C++11 standard, 1.9p15:
Except where noted, evaluations of operands of individual operators
and of subexpressions of individual expressions are unsequenced. [...]
The value computations of the operands of an operator are sequenced
before the value computation of the result of the operator. If a side
effect on a scalar object is unsequenced relative to either
another side effect on the same scalar object or a value computation
using the value of the same scalar object, the behavior is undefined.
a[i] = i++;
i is modified. i is also read to determine which index of a to use, which does not affect the store to i. That's not allowed.
int x = i + i++;
i is modified. i is also used to calculate the value to store into x, which does not affect the store to i. That's not allowed.
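If the intent was "use the current value of i, then increment it", it can be expressed without any unsequenced access by splitting the increment into its own statement (assuming that is indeed the intent):
a[i] = i;       // i is only read here
++i;            // the modification happens in a separate full expression

int x = i + i;  // i is only read here
++i;            // and modified afterwards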
Since the standard says that "the prior value shall be accessed only to determine the value to be stored", compilers are not required to follow the "well defined" steps you outlined.
And they often don't.
What the wording of the standard means for your particular examples is that the compiler is permitted to order the steps like so:
a[i] = i++;
i = i + 1;
a[i] = i;
int x = i + i++ ;
i = i + 1;
x = i + i;
Which give an entirely different outcome than your imagined well defined order. The compiler is also permitted to do whatever else it might like, even if it makes less sense to you than what I just typed above. That's what undefined behavior means.
While a statement like x=y+z; is semantically equivalent to temp=y; temp+=z; x=temp; there's generally no requirement (unless x is volatile) for a compiler to implement it that way. It may on some platforms be much more efficiently performed as x=y; x+=z;. Unless a variable is volatile, the code a compiler generates for an assignment may write any sequence of values to it provided that:
Any code which is entitled to read the "old" value of the variable acts upon the value it had before the assignment.
Any code which is entitled to read the "new" value of the variable acts upon the final value it was given.
Given i=511; foo[i] = i++; a compiler would be entitled to write the value 511 to foo[511] or to foo[512], but would be no less entitled to store it to foo[256] or foo[767], or foo[24601], or anything else. Since the compiler would be entitled to store the value at any possible displacement from foo, and since the compiler would be entitled to do anything it likes with code that adds an overly large displacement to a pointer, those permissions together effectively mean that the compiler could do anything it likes with foo[i]=i++;.
Note that in theory, if i were a 16-bit unsigned int but foo was a 65536-element-or-larger array (entirely possible on the classic Macintosh), the above entitlements would allow a compiler given foo[i]=i++; to write to an arbitrary element of foo, but not to do anything else. In practice, the Standard refrains from such fine distinctions. It's much easier to say that the Standard imposes no requirements on what compilers do when given expressions like foo[i]=i++; than to say that the compiler's behavior is constrained in some narrow corner cases but not in others.

Why is `i = ++i + 1` unspecified behavior?

Consider the following C++ Standard ISO/IEC 14882:2003(E) citation (section 5, paragraph 4):
Except where noted, the order of
evaluation of operands of individual
operators and subexpressions of individual
expressions, and the order in
which side effects take place, is
unspecified. 53) Between the previous
and next sequence point a scalar
object shall have its stored value
modified at most once by the
evaluation of an expression.
Furthermore, the prior value shall be
accessed only to determine the value
to be stored. The requirements of this
paragraph shall be met for each
allowable ordering of the
subexpressions of a full expression;
otherwise the behavior is undefined.
[Example:
i = v[i++]; // the behavior is unspecified
i = 7, i++, i++; // i becomes 9
i = ++i + 1; // the behavior is unspecified
i = i + 1; // the value of i is incremented
—end example]
I was surprised that i = ++i + 1 gives an undefined value of i.
Does anybody know of a compiler implementation which does not give 2 for the following case?
int i = 0;
i = ++i + 1;
std::cout << i << std::endl;
The thing is that operator= has two arguments. The first one is always a reference to i.
The order of evaluation does not matter in this case.
I do not see any problem except C++ Standard taboo.
Please, do not consider such cases where the order of arguments is important to evaluation. For example, ++i + i is obviously undefined. Please, consider only my case
i = ++i + 1.
Why does the C++ Standard prohibit such expressions?
You make the mistake of thinking of operator= as a two-argument function, where the side effects of the arguments must be completely evaluated before the function begins. If that were the case, then the expression i = ++i + 1 would have multiple sequence points, and ++i would be fully evaluated before the assignment began. That's not the case, though. What's being evaluated is the intrinsic assignment operator, not a user-defined operator. There's only one sequence point in that expression.
The result of ++i is evaluated before the assignment (and before the addition operator), but the side effect is not necessarily applied right away. The result of ++i + 1 is always the same as i + 2, so that's the value that gets assigned to i as part of the assignment operator. The result of ++i is always i + 1, so that's what gets assigned to i as part of the increment operator. There is no sequence point to control which value should get assigned first.
Since the code is violating the rule that "between the previous and next sequence point a scalar object shall have its stored value modified at most once by the evaluation of an expression," the behavior is undefined. Practically, though, it's likely that either i + 1 or i + 2 will be assigned first, then the other value will be assigned, and finally the program will continue running as usual — no nasal demons or exploding toilets, and no i + 3, either.
It's undefined behaviour, not (just) unspecified behaviour because there are two writes to i without an intervening sequence point. It is this way by definition as far as the standard specifies.
The standard allows compilers to generate code that delays writes back to storage - or from another view point, to resequence the instructions implementing side effects - any way it chooses so long as it complies with the requirements of sequence points.
The issue with this statement expression is that it implies two writes to i without an intervening sequence point:
i = ++i + 1;
One write is for the original value of i "plus one" and the other is for that value "plus one" again. These writes could happen in any order or blow up completely as far as the standard allows. Theoretically this even gives implementations the freedom to perform writebacks in parallel without bothering to check for simultaneous access errors.
C/C++ defines a concept called sequence points, which refer to points in execution where it's guaranteed that all effects of previous evaluations will have already been performed. Saying i = ++i + 1 is undefined because it both increments i and assigns to i, and there is no sequence point between those two modifications. Therefore, it is not specified which will happen first.
Update for C++11 (09/30/2011)
Stop, this is well defined in C++11. It was undefined only in C++03, but C++11 is more flexible.
int i = 0;
i = ++i + 1;
After that line, i will be 2. The reason for this change was ... because it already works in practice and it would have been more work to make it be undefined than to just leave it defined in the rules of C++11 (actually, that this works now is more of an accident than a deliberate change, so please don't do it in your code!).
Straight from the horse's mouth
http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#637
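As a minimal check of the claim above, under a C++11 (or later) compiler:
#include <iostream>

int main()
{
    int i = 0;
    i = ++i + 1;             // well-defined since C++11: the increment's side effect is
                             // sequenced before the assignment's side effect
    std::cout << i << "\n";  // prints 2
}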
Given two choices: defined or undefined, which choice would you have made?
The authors of the standard had two choices: define the behavior or specify it as undefined.
Given the clearly unwise nature of writing such code in the first place, it doesn't make any sense to specify a result for it. One would want to discourage code like that and not encourage it. It's not useful or necessary for anything.
Furthermore, standards committees do not have any way to force compiler writers to do anything. Had they required a specific behavior it is likely that the requirement would have been ignored.
There are practical reasons as well, but I suspect they were subordinate to the above general consideration. But for the record, any sort of required behavior for this kind of expression and related kinds will restrict the compiler's ability to generate code, to factor out common subexpressions, to move objects between registers and memory, etc. C was already handicapped by weak visibility restrictions. Languages like Fortran long ago realized that aliased parameters and globals were an optimization-killer and I believe they simply prohibited them.
I know you were interested in a specific expression, but the exact nature of any given construct doesn't matter very much. It's not going to be easy to predict what a complex code generator will do and the language attempts to not require those predictions in silly cases.
The important part of the standard is:
its stored value modified at most once by the evaluation of an expression
You modify the value twice: once with the ++ operator, and once with the assignment.
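Spelled out on the expression itself:
i = ++i + 1;   // modification #1: the side effect of ++i
               // modification #2: the side effect of the assignment to i
               // (under C++03 there is no sequence point between the two)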
Please note that your copy of the standard is outdated and contains a known (and fixed) error precisely in the 1st and 3rd code lines of your example; see:
C++ Standard Core Language Issue Table of Contents, Revision 67, #351
and
Andrew Koenig: Sequence point error: unspecified or undefined?
The topic is not easy to get just reading the standard (which is pretty obscure :( in this case).
For example, whether a given case is well-defined, undefined, or unspecified in the general case actually depends not only on the statement structure, but also on the memory contents (to be specific, variable values) at the moment of execution; another example:
++i, ++i; //ok
(++i, ++j) + (++i, ++j); //ub, see the first reference below (12.1 - 12.3)
Please have a look at (it has it all clear and precise):
JTC1/SC22/WG14 N926 "Sequence Point Analysis"
Also, Angelika Langer has an article on the topic (though not as clear as the previous one):
"Sequence Points and Expression Evaluation in C++"
There was also a discussion in Russian (though with some apparently erroneous statements in the comments and in the post itself):
"Точки следования (sequence points)"
The following code demonstrates how you could get the wrong (unexpected) result:
#include <iostream>
using namespace std;

int main()
{
int i = 0;
__asm { // here standard conformant implementation of i = ++i + 1
mov eax, i;
inc eax;
mov ecx, 1;
add ecx, eax;
mov i, ecx;
mov i, eax; // delayed write
};
cout << i << endl;
}
It will print 1 as a result.
Assuming you are asking "Why is the language designed this way?".
You say that i = ++i + i is "obviously undefined" but i = ++i + 1 should leave i with a defined value? Frankly, that would not be very consistent. I prefer to have either everything perfectly defined, or everything consistently unspecified. In C++ I have the latter. It's not a terribly bad choice per se - for one thing, it prevents you from writing evil code which makes five or six modifications in the same "statement".
Argument by analogy: If you think of operators as types of functions, then it kind of makes sense. If you had a class with an overloaded operator=, your assignment statement would be equivalent to something like this:
operator=(i, ++i+1)
(The first parameter is actually passed in implicitly via the this pointer, but this is just for illustration.)
For a plain function call, this is obviously undefined. The value of the first argument depends on when the second argument is evaluated. However with primitive types you get away with it because the original value of i is simply overwritten; its value doesn't matter. But if you were doing some other magic in your own operator=, then the difference could surface.
Simply put: all operators act like functions, and should therefore behave according to the same notions. If i + ++i is undefined, then i = ++i should be undefined as well.
How about we all just agree to never, never write code like this? If the compiler doesn't know what you want to do, how do you expect the poor sap following on behind you to understand what you wanted to do? Putting i++; on its own line will not kill you.
The underlying reason is the way the compiler handles reading and writing of values. The compiler is allowed to store an intermediate value in memory and only actually commit the value at the end of the expression. We read the expression ++i as "increase i by one and return it", but a compiler might see it as "load the value of i, add one, return it, and then commit it back to memory before someone uses it again". The compiler is encouraged to avoid reading/writing to the actual memory location as much as possible, because that would slow the program down.
In the specific case of i = ++i + 1, it suffers largely due to the need for consistent behavioral rules. Many compilers will do the 'right thing' in such a situation, but what if one of the i's were actually a pointer dereference, with the pointer pointing to i? Without this rule, the compiler would have to be very careful to make sure it performed the loads and stores in the right order. This rule serves to allow for more optimization opportunities.
A similar case is that of the so-called strict-aliasing rule. With only a few exceptions, you can't access a value of one type (say, an int) through an lvalue of an unrelated type (say, a float). This keeps the compiler from having to worry that some float * being used will change the value of an int, and greatly improves optimization potential.
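A brief sketch of the strict-aliasing situation mentioned here (the names are hypothetical; the int/float pairing mirrors the example in the paragraph):
#include <cstdint>

float g = 1.0f;

void store(std::int32_t* p)
{
    // The compiler may assume *p does not alias g, because int32_t and float are
    // unrelated types, so it is free to keep g cached in a register across this store.
    *p = 42;
}

void demo()
{
    store(reinterpret_cast<std::int32_t*>(&g));  // violates the strict-aliasing rule: UB
}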
The problem here is that the standard allows a compiler to completely reorder the operations within a statement while it is executing. It is not, however, allowed to reorder statements if any such reordering would change program behavior. Therefore, the expression i = ++i + 1; may be evaluated in ways such as:
++i; // i = 2
i = i + 1;
or
i = i + 1; // i = 2
++i;
or
i = i + 1; ++i; //(Running in parallel using, say, an SSE instruction) i = 1
This gets even worse when you have user defined types thrown in the mix, where the ++ operator can have whatever effect on the type the author of the type wants, in which case the order used in evaluation matters significantly.
i = v[i++]; // the behavior is unspecified
i = ++i + 1; // the behavior is unspecified
All the above expressions invoke Undefined Behavior.
i = 7, i++, i++; // i becomes 9
This is fine.
Read Steve Summit's C-FAQs.
From ++i, i must be assigned "1", but with i = ++i + 1, it must be assigned the value "2". Since there is no intervening sequence point, the compiler can assume that the same variable is not being written twice, so these two operations can be done in any order. So yes, the compiler would be correct if the final value is 1.