Pointer and post-increment funny business - C++

What, if anything, is theoretically wrong with this C/C++ statement:
*memory++ = BIT_MASK & *memory;
Where BIT_MASK is an arbitrary bitwise AND mask, and memory is a pointer.
The intent was to read a memory location, AND the value with the mask, store the result at the original location, then finally increment the pointer to point to the next memory location.

You are invoking undefined behaviour because you both read and modify memory in a single statement without an intervening sequence point, and the language standards do not specify when the increment will occur. (You can read the same object multiple times without trouble; the problems start when you mix a write in with reads that do not determine the value being stored - as in your example.)
You can use:
*memory++ &= BIT_MASK;
to achieve what you want to achieve without incurring undefined behaviour.
In the C standard (ISO/IEC 9899:1999 aka C99), §6.5 'Expressions', ¶2 says
Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be read only to determine the value to be stored. (footnote 70)
That's the primary source in the C standard. The footnote says:
This paragraph renders undefined statement expressions such as
i = ++i + 1;
a[i++] = i;
while allowing
i = i + 1;
a[i] = i;
In addition, 'Annex C (informative) Sequence Points' has an extensive discussion of all this.
You would find similar wording in the C++ standard, though I'm not sure it has an analogue to 'Annex C'.

It's undefined behavior since you both modify memory (via memory++) and read it (via *memory on the right-hand side) in the same statement.
This is because C/C++ does not specify exactly when the ++ will take effect: it can be before or after *memory on the right is evaluated.
Here are two ways to fix it:
*memory = BIT_MASK & *memory;
memory++;
or just simply:
*memory++ &= BIT_MASK;
Take your pick.
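For concreteness, here is a minimal sketch (the buffer contents and the mask value are made up for illustration) showing both well-defined forms applied across a small buffer:
#include <stdio.h>

#define BIT_MASK 0x0Fu

int main(void) {
    unsigned int buf[4] = {0x12, 0x34, 0x56, 0x78};
    unsigned int *memory = buf;
    unsigned int *end = buf + 4;

    /* Compound assignment: the only modification of the pointer is the
       increment, and the only read of *memory determines the stored value. */
    while (memory != end) {
        *memory++ &= BIT_MASK;
    }

    /* Equivalent two-statement form, with a sequence point between the
       masking and the pointer increment. */
    memory = buf;
    while (memory != end) {
        *memory = BIT_MASK & *memory;
        memory++;
    }

    for (int i = 0; i < 4; ++i)
        printf("0x%02X\n", buf[i]);
    return 0;
}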

Related

Does the pointer arithmetic in this usage cause undefined behavior

This is a follow-up to the following question. I was under the assumption that the pointer arithmetic I originally used would cause undefined behavior. However, I was told by a colleague that the usage is actually well defined. The following is a simplified example:
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct StructA {
    int a;
} StructA;

typedef struct StructB {
    StructA a;
    StructA* b;
} StructB;

int main() {
    StructB* original = (StructB*)malloc(sizeof(StructB));
    original->a.a = 5;
    original->b = &original->a;

    StructB* copy = (StructB*)malloc(sizeof(StructB));
    memcpy(copy, original, sizeof(StructB));
    free(original);

    ptrdiff_t offset = (char*)copy - (char*)original;
    StructA* a = (StructA*)((char*)(copy->b) + offset);
    printf("%i\n", a->a);
    free(copy);
}
According to §5.7 ¶5 of the C++11 spec:
If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.
I assumed that the following part of the code:
ptrdiff_t offset = (char*)copy - (char*)original;
StructA* a = (StructA*)((char*)(copy->b) + offset);
causes undefined behavior, since it:
subtracts two pointers which point to different arrays, and
produces a pointer from the offset calculation that no longer points into the same array.
Does this cause undefined behavior, or do I misinterpret the C++ specification? Does the same apply in C as well?
Edit:
Following the comments, I assume the following modification would still be undefined behavior because of the use of the object after its lifetime has ended:
ptrdiff_t offset = (char*)(copy->b) - (char*)original;
StructA* a = (StructA*)((char*)copy + offset);
Would it be defined when working with indexes instead:
typedef struct StructB {
    StructA a;
    ptrdiff_t b_offset;
} StructB;

int main() {
    StructB* original = (StructB*)malloc(sizeof(StructB));
    original->a.a = 5;
    original->b_offset = (char*)&(original->a) - (char*)original;

    StructB* copy = (StructB*)malloc(sizeof(StructB));
    memcpy(copy, original, sizeof(StructB));
    free(original);

    StructA* a = (StructA*)((char*)copy + copy->b_offset);
    printf("%i\n", a->a);
    free(copy);
}
It is undefined behavior because there are severe restrictions on what can be done with pointer arithmetic. The edits that you have made and that were suggested do nothing to fix this.
Undefined Behavior in Addition
StructA* a = (StructA*)((char*)copy + offset);
First of all, this is undefined behavior due to the addition onto copy:
When an expression J that has integral type is added to or subtracted from an expression P of pointer type, the result has the type of P.
(4.1) If P evaluates to a null pointer value and J evaluates to 0, the result is a null pointer value.
(4.2) Otherwise, if P points to an array element i of an array object x with n elements ([dcl.array]), the expressions P + J and J + P (where J has the value j) point to the (possibly-hypothetical) array element i+j of x if 0 ≤ i + j ≤ n and the expression P - J points to the (possibly-hypothetical) array element i − j of x if 0 ≤ i − j ≤ n.
(4.3) Otherwise, the behavior is undefined.
See https://eel.is/c++draft/expr.add#4
In short, performing pointer arithmetic on anything that is neither an array element nor a null pointer (with a zero offset) is undefined behavior. Even if copy or its members were arrays, adding onto a pointer so that it becomes:
two or more past the end of the array
at least one before the first element
is also undefined behavior.
Undefined Behavior in Subtraction
ptrdiff_t offset = (char*)original - (char*)(copy->b);
The subtraction of your two pointers is also undefined behavior:
When two pointer expressions P and Q are subtracted, the type of the result is an implementation-defined signed integral type; [...]
(5.1) If P and Q both evaluate to null pointer values, the result is 0.
(5.2) Otherwise, if P and Q point to, respectively, array elements i and j of the same array object x, the expression P - Q has the value i − j.
(5.3) Otherwise, the behavior is undefined.
See https://eel.is/c++draft/expr.add#5
So subtracting pointers from one another, when they are not both null or pointers to elements of the same array is undefined behavior.
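As a small illustration (the array names are made up), subtraction is only defined when both pointers refer into the same array object:
#include <stddef.h>

int main(void) {
    int arr[4]   = {0, 1, 2, 3};
    int other[4] = {4, 5, 6, 7};

    ptrdiff_t ok  = &arr[3] - &arr[0];      /* well defined: same array, result is 3 */
    ptrdiff_t ok2 = (arr + 4) - arr;        /* also fine: one-past-the-end may be used */
    /* ptrdiff_t bad = &arr[0] - &other[0];    undefined: pointers into different arrays */
    (void)ok; (void)ok2; (void)other;
    return 0;
}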
Undefined Behavior in C
The C standard has similar restrictions:
(8) [...] If the pointer operand points to an element of an array object, and the array is large enough, the result points to an element offset from the original element such that the difference of the subscripts of the resulting and original array elements equals the integer expression.
(The standard does not mention what happens for non-array pointer addition)
(9) When two pointers are subtracted, both shall point to elements of the same array object, or one past the last element of the array object; [...]
See §6.5.6 Additive Operators in the C11 standard (n1570).
Using Data Member Pointers Instead
A clean and type-safe solution in C++ would be to use data member pointers.
typedef struct StructB {
    StructA a;
    StructA StructB::*b_offset;
} StructB;

int main() {
    StructB* original = (StructB*) malloc(sizeof(StructB));
    original->a.a = 5;
    original->b_offset = &StructB::a;

    StructB* copy = (StructB*) malloc(sizeof(StructB));
    memcpy(copy, original, sizeof(StructB));
    free(original);

    printf("%i\n", (copy->*(copy->b_offset)).a);
    free(copy);
}
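As a small follow-up sketch (the variable name other is made up, and the StructA/StructB definitions above are reused): the same member-pointer value can be applied to any StructB object, which is why it survives the byte-wise copy - it encodes "which member", not an absolute address.
StructB other;
other.a.a = 42;
other.b_offset = &StructB::a;                   // same value as stored in the malloc'd objects
printf("%i\n", (other.*(other.b_offset)).a);    // prints 42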
Notes
The standard citations are from a C++ draft. The C++11 standard which you have cited does not appear to have any looser restrictions on pointer arithmetic; it is just formatted differently. See the C++11 standard (n3337).
The Standard explicitly provides that in situations it characterizes as Undefined Behavior, implementations may behave "in a documented fashion characteristic of the environment". According to the Rationale, the intention of such characterization was, among other things, to identify avenues of "conforming language extension"; the question of when implementations support such "popular extensions" was a Quality of Implementation issue best left to the marketplace.
Many implementations intended and/or configured for low-level programming on commonplace platforms extend the language by specifying that the following equivalences hold, for any pointers p and q of type T* and integer expression i:
The bit patterns of p, (uintptr_t)p, and (intptr_t)p are identical.
p+i is equivalent to (T*)((uintptr_t)p + (uintptr_t)i * sizeof (T))
p-i is equivalent to (T*)((uintptr_t)p - (uintptr_t)i * sizeof (T))
p-q is equivalent to ((uintptr_t)p - (uintptr_t)q) / sizeof (T) in all cases where the division would have no remainder.
p>q is equivalent to (uintptr_t)p > (uintptr_t)q and likewise for all other relational and comparison operators.
The Standard does not recognize any category of implementations that always uphold those equivalences, as distinct from those that do not, in part because the authors did not wish to portray as "inferior" implementations for unusual platforms where upholding such equivalences would be impractical. Instead, the expectation was that the equivalences would be upheld on implementations where that made sense, and that programmers would know when they were targeting such implementations. Someone writing memory-management code for the 68000, or for small-model 8086 (where such equivalences would naturally hold), could write memory-management code that would run interchangeably on other systems where those equivalences hold, but someone writing memory-management code for large-model 8086 would need to design it explicitly for that platform, because those equivalences do not hold there (pointers are 32 bits, but individual objects are limited to 65520 bytes and most pointer operations only act upon the bottom 16 bits of a pointer).
Unfortunately, even on platforms where such equivalences would normally hold, some kinds of optimizations may yield corner-case behaviors that differ from those otherwise implied by those equivalences. Commercial compilers generally uphold the Spirit of C principle "don't prevent the programmer from doing what needs to be done", and can be configured to uphold the equivalences even when most optimizations are enabled. The gcc and clang C compilers, however, don't allow such control over semantics. When all optimizations are disabled, they will uphold those equivalences on commonplace platforms, but there is no other optimization setting that will prevent them from making inferences that would be inconsistent with them.
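As an illustration of the equivalences listed above - and only of those: this relies on the implementation-specific behaviour just described, not on anything the standard guarantees - a round trip through uintptr_t on a commonplace flat-memory platform typically lands on the same address that ordinary pointer arithmetic produces:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int arr[4] = {10, 20, 30, 40};
    int *p = arr;
    int i = 2;

    int *via_arith   = p + i;
    int *via_uintptr = (int *)((uintptr_t)p + (uintptr_t)i * sizeof(int));

    /* On implementations that uphold the equivalences above, these compare
       equal and both point at arr[2]; the standard itself makes no such promise. */
    printf("%d\n", via_arith == via_uintptr);
    return 0;
}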

What are the differences between a+i and &a[i] for pointer arithmetic in C++?

Supposing we have:
char* a;
int i;
Many introductions to C++ (like this one) suggest that the rvalues a+i and &a[i] are interchangeable. I naively believed this for several decades, until I recently stumbled upon the following text (here) quoted from [dcl.ref]:
in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the "object" obtained by dereferencing a null pointer, which causes undefined behavior.
In other words, "binding" a reference object to a null-dereference causes undefined behavior. Based on the context of the above text, one infers that merely evaluating &a[i] (within the offsetof macro) is considered "binding" a reference. Furthermore, there seems to be a consensus that &a[i] causes undefined behavior in the case where a=null and i=0. This behavior is different from a+i (at least in C++, in the a=null, i=0 case).
This leads to at least 2 questions about the differences between a+i and &a[i]:
First, what is the underlying semantic difference between a+i and &a[i] that causes this difference in behavior? Can it be explained in terms of any kind of general principles, not just "binding a reference to a null dereference object causes undefined behavior just because this is a very specific case that everybody knows"? Is it that &a[i] might generate a memory access to a[i]? Or that the spec author wasn't happy with null dereferences that day? Or something else?
Second, besides the case where a=null and i=0, are there any other cases where a+i and &a[i] behave differently? (could be covered by the first question, depending on the answer to it.)
TL;DR: a+i and &a[i] are both well-formed and produce a null pointer when a is a null pointer and i is 0, according to (the intent of) the standard, and all compilers agree.
a+i is obviously well-formed per [expr.add]/4 of the latest draft standard:
When an expression J that has integral type is added to or subtracted from an expression P of pointer type, the result has the type of P.
If P evaluates to a null pointer value and J evaluates to 0, the result is a null pointer value.
[...]
&a[i] is tricky. Per [expr.sub]/1, a[i] is equivalent to *(a+i), thus &a[i] is equivalent to &*(a+i). Now the standard is not quite clear about whether &*(a+i) is well-formed when a+i is a null pointer. But as n.m. points out in a comment, the intent as recorded in CWG 232 is to permit this case.
Since core language UB is required to be caught in a constant expression ([expr.const]/(4.6)), we can test whether compilers think these two expressions are UB.
Here's the demo: if the compilers think the constant expression in a static_assert is UB, or if they think the result is not true, then they must produce a diagnostic (error or warning) per the standard.
(Note that this uses single-parameter static_assert and constexpr lambdas, which are C++17 features, and default lambda arguments, which are also pretty new.)
static_assert(nullptr == [](char* a=nullptr, int i=0) {
    return a+i;
}());
static_assert(nullptr == [](char* a=nullptr, int i=0) {
    return &a[i];
}());
From https://godbolt.org/z/hhsV4I, it seems all compilers behave uniformly in this case, producing no diagnostics at all (which surprises me a bit).
However, this is different from the offsetof case. The implementation posted in that question explicitly creates a reference (which is necessary to sidestep a user-defined operator&), and thus is subject to the requirements on references.
In the C++ standard, section [expr.sub]/1 you can read:
The expression E1[E2] is identical (by definition) to *((E1)+(E2)).
This means that &a[i] is exactly the same as &*(a+i). So you would dereference (*) the pointer first and take the address (&) second. If the pointer is invalid (i.e. nullptr, but also out of range), this is UB.
a+i is based on pointer arithmetic. At first it looks less dangerous, since there is no dereferencing that would be UB for sure. However, it may also be UB (see [expr.add]/4):
When an expression that has integral type is added to or subtracted from a pointer, the result has the type of the pointer operand. If the expression P points to element x[i] of an array object x with n elements, the expressions P + J and J + P (where J has the value j) point to the (possibly-hypothetical) element x[i + j] if 0 ≤ i + j ≤ n; otherwise, the behavior is undefined. Likewise, the expression P - J points to the (possibly-hypothetical) element x[i − j] if 0 ≤ i − j ≤ n; otherwise, the behavior is undefined.
So, while the semantics behind these two expressions are slightly different, I would say that the result is the same in the end.
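For the ordinary, clearly valid case (a points into an array and the index stays in range - the names and values here are made up), the two forms are interchangeable:
#include <cstdio>

int main() {
    char buf[8] = "abcdefg";
    char *a = buf;
    int i = 3;

    char *p = a + i;      // pointer arithmetic
    char *q = &a[i];      // &*(a + i); same address as p here
    std::printf("%d %c\n", p == q, *q);   // expected: 1 d

    // The edge cases discussed above only arise when a is a null pointer
    // or the index leaves the array.
}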

C++ is it legal to take the address of 2 or more past the end of the array?

From Take the address of a one-past-the-end array element via subscript: legal by the C++ Standard or not?
It seems that there is language specific to taking the address of one element past the end of an array.
Why would 2 or 2,000,000 past the end be an issue if it's not dereferenced?
Looking at some simple loop:
int array[];
...
for (int i = 0; i < array_max; ++i)
{
    int * x = &array[i * 2]; // Is this legal?
    int y = 0;
    if (i * 2 < array_max) // We check here before dereference
    {
        y = *x; // Legal dereference
    }
    ...
}
Why, or at what point, does this become undefined? In practice it just sets a pointer to some value, so why would it be undefined if it's never dereferenced?
More specifically: what example is there of anything other than the expected thing happening?
The key issue with taking addresses beyond the end of an array is segmented architectures: you may overflow the representable range of the pointer. The existing rule already creates some level of pain, as it means that the last object can't sit right on the boundary of a segment. However, the ability to form the one-past-the-end address was well established.
Since array[i *2] is equivalent to *((array) + (i*2)), we should look at the rules for pointer addition. C++11 §5.7 says:
If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.
So you have undefined behaviour even if you don't perform indirection on the pointer (not to mention that you do perform indirection, due to the expression equivalence I gave at the beginning).
in practice it just sets a ptr to some value
In theory, just having a pointer that points somewhere invalid is not allowed.
Pointers are not integers: they are things that point to other things, or to nullity.
You can't just set them to whatever number you like.
in this example, there is nothing but assignment and then qualified dereferrencing - the value is only used if validated, the question is why is the setting of the value an issue?
Yeah, you'd have to be pretty unlucky to run into practical consequences of doing that. "Undefined behaviour" does not mean "always crash". Why should the standard actually mandate semantics for such an operation? What do you think such semantics should be?
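A hedged sketch of how the loop from the question could be restructured so that no pointer beyond one-past-the-end is ever formed (the array size and contents are made up):
#include <stdio.h>

int main(void) {
    enum { array_max = 8 };
    int array[array_max] = {1, 2, 3, 4, 5, 6, 7, 8};

    for (int i = 0; i < array_max; ++i) {
        int y = 0;
        if (i * 2 < array_max) {          /* check the index first */
            int *x = &array[i * 2];       /* formed only when it is in bounds */
            y = *x;
        }
        printf("%d\n", y);
    }
    return 0;
}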

Why is i = v[i++] undefined?

From the C++ (C++11) standard, §1.9.15 which discusses ordering of evaluation, is the following code example:
void g(int i, int* v) {
    i = v[i++]; // the behavior is undefined
}
As noted in the code sample, the behavior is undefined.
(Note: The answer to another question with the slightly different construct i + i++, Why is a = i + i++ undefined and not unspecified behaviour, might apply here: The answer is essentially that the behavior is undefined for historical reasons, and not out of necessity. However, the standard seems to imply some justification for this being undefined - see quote immediately below. Also, that linked question indicates agreement that the behavior should be unspecified, whereas in this question I am asking why the behavior is not well-specified.)
The reasoning given by the standard for the undefined behavior is as follows:
If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined.
In this example I would think that the subexpression i++ would be completely evaluated before the subexpression v[...] is evaluated, and that the result of evaluation of the subexpression is i (before the increment), but that the value of i is the incremented value after that subexpression has been completely evaluated. I would think that at that point (after the subexpression i++ has been completely evaluated), the evaluation v[...] takes place, followed by the assignment i = ....
Therefore, although the incrementing of i is pointless, I would nonetheless think that this should be defined.
Why is this undefined behavior?
I would think that the subexpression i++ would be completely evaluated before the subexpression v[...] is evaluated
But why would you think that?
One historical reason for this code being UB is to allow compiler optimizations to move side-effects around anywhere between sequence points. The fewer sequence points, the more potential opportunities to optimize but the more confused programmers. If the code says:
a = v[i++];
The intention of the standard is that the code emitted can be:
a = v[i];
++i;
which might be two instructions where:
tmp = i;
++i;
a = v[tmp];
would be more than two.
The "optimized code" breaks when a is i, but the standard permits the optimization anyway, by saying that behavior of the original code is undefined when a is i.
The standard easily could say that i++ must be evaluated before the assignment as you suggest. Then the behavior would be fully defined and the optimization would be forbidden. But that's not how C and C++ do business.
Also beware that many examples raised in these discussions make it easier to tell that there's UB around than it is in general. This leads to people saying that it's "obvious" the behavior should be defined and the optimization forbidden. But consider:
void g(int *i, int* v, int *dst) {
*dst = v[(*i)++];
}
The behavior of this function is defined when i != dst, and in that case you'd want all the optimization you can get (which is why C99 introduces restrict, to allow more optimizations than C89 or C++ do). In order to give you the optimization, behavior is undefined when i == dst. The C and C++ standards tread a fine line when it comes to aliasing, between undefined behavior that's not expected by the programmer, and forbidding desirable optimizations that fail in certain cases. The number of questions about it on SO suggests that the questioners would prefer a bit less optimization and a bit more defined behavior, but it's still not simple to draw the line.
Aside from whether the behavior is fully defined is the issue of whether it should be UB, or merely unspecified order of execution of certain well-defined operations corresponding to the sub-expressions. The reason C goes for UB is all to do with the idea of sequence points, and the fact that the compiler need not actually have a notion of the value of a modified object, until the next sequence point. So rather than constrain the optimizer by saying that "the" value changes at some unspecified point, the standard just says (to paraphrase): (1) any code that relies on the value of a modified object prior to the next sequence point, has UB; (2) any code that modifies a modified object has UB. Where a "modified object" is any object that would have been modified since the last sequence point in one or more of the legal orders of evaluation of the subexpressions.
Other languages (e.g. Java) go the whole way and completely define the order of expression side-effects, so there's definitely a case against C's approach. C++ just doesn't accept that case.
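For comparison, a minimal sketch of an unambiguous rewrite, assuming the intent was "index with the old value of i, then increment" (the values here are made up):
#include <cstdio>

int main() {
    int v[4] = {10, 20, 30, 40};
    int i = 1;

    int old = i;      // name the old value explicitly
    ++i;              // the increment is done here, before the next statement
    i = v[old];       // no unsequenced modification of i; i becomes 20

    std::printf("%d\n", i);
}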
I'm going to design a pathological computer¹. It is a multi-core, high-latency, single-thread system with in-thread joins that operates with byte-level instructions. So you make a request for something to happen, then the computer runs (in its own "thread" or "task") a byte-level set of instructions, and a certain number of cycles later the operation is complete.
Meanwhile, the main thread of execution continues:
void foo(int v[], int i){
    i = v[i++];
}
becomes in pseudo-code:
input variable i // = 0x00000000
input variable v // = &[0xBAADF00D, 0xABABABABAB, 0x10101010]
task get_i_value: GET_VAR_VALUE<int>(i)
reg indx = WAIT(get_i_value)
task write_i++_back: WRITE(i, INC(indx))
task get_v_value: GET_VAR_VALUE<int*>(v)
reg arr = WAIT(get_v_value)
task get_v[i]_value = CALC(arr + sizeof(int)*indx)
reg pval = WAIT(get_v[i]_value)
task read_v[i]_value = LOAD_VALUE<int>(pval)
reg got_value = WAIT(read_v[i]_value)
task write_i_value_again = WRITE(i, got_value)
(discard, discard) = WAIT(write_i++_back, write_i_value_again)
So you'll notice that I didn't wait on write_i++_back until the very end, the same time as I was waiting on write_i_value_again (which value I loaded from v[]). And, in fact, those writes are the only writes back to memory.
Imagine if write to memory are the really slow part of this computer design, and they get batched up into a queue of things that get processed by a parallel memory modifying unit that does things on a per-byte basis.
So the write(i, 0x00000001) and write(i, 0xBAADF00D) execute unordered and in parallel. Each gets turned into byte-level writes, and they are randomly ordered.
We end up writing 0x00 then 0xBA to the high byte, then 0xAD and 0x00 to the next byte, then 0xF0 0x00 to the next byte, and finally 0x0D 0x01 to the low byte. The resulting value in i is 0xBA000001, which few would expect, yet would be a valid result to your undefined operation.
Now, all I did there was result in an unspecified value. We haven't crashed the system. But the compiler would be free to make it completely undefined -- maybe sending two such requests to the memory controller for the same address in the same batch of instructions actually crashes the system. That would still be a "valid" way to compile C++, and a "valid" execution environment.
Remember, this is a language where restricting the size of pointers to 8 bits is still a valid execution environment. C++ allows for compiling to rather wonky targets.
1: As noted in #SteveJessop's comment below, the joke is that this pathological computer behaves a lot like a modern desktop computer, until you get down to the byte-level operations. Non-atomic int writing by a CPU isn't all that rare on some hardware (such as when the int isn't aligned the way the CPU wants it to be aligned).
The reason is not just historical. Example:
int f(int& i0, int& i1) {
    return i0 + i1++;
}
Now, what happens with this call:
int i = 3;
int j = f(i, i);
It's certainly possible to put requirements on the code in f so that the result of this call is well defined (Java does this), but C and C++ don't impose constraints; this gives more freedom to optimizers.
You specifically refer to the C++11 standard so I'm going to answer with the C++11 answer. It is, however, very similar to the C++03 answer, but the definition of sequencing is different.
C++11 defines a sequenced before relation between evaluations on a single thread. It is asymmetric, transitive and pair-wise. If some evaluation A is not sequenced before some evaluation B and B is also not sequenced before A, then the two evaluations are unsequenced.
Evaluating an expression includes both value computations (working out the value of some expression) and side effects. One instance of a side effect is the modification of an object, which is the most important one for answering this question. Other things also count as side effects. If a side effect is unsequenced relative to another side effect or value computation on the same object, then your program has undefined behaviour.
So that's the set up. The first important rule is:
Every value computation and side effect associated with a full-expression is sequenced before every value computation and side effect associated with the next full-expression to be evaluated.
So any full expression is fully evaluated before the next full expression. In your question, we're only dealing with one full expression, namely i = v[i++], so we don't need to worry about this. The next important rule is:
Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced.
That means that in a + b, for example, the evaluation of a and b are unsequenced (they may be evaluated in any order). Now for our final important rule:
The value computations of the operands of an operator are sequenced before the value computation of the result of the operator.
So for a + b, the sequenced before relationships can be represented by a tree where a directed arrow represents the sequenced before relationship:
a + b (value computation)
^   ^
|   |
a   b (value computation)
If two evaluations occur in separate branches of the tree, they are unsequenced, so this tree shows that the evaluations of a and b are unsequenced relative to each other.
Now, let's do the same thing to your i = v[i++] example. We make use of the fact that v[i++] is defined to be equivalent to *(v + (i++)). We also use some extra knowledge about the sequencing of postfix increment:
The value computation of the ++ expression is sequenced before the modification of the operand object.
So here we go (a node of the tree is a value computation unless specified as a side effect):
i = v[i++]
^   ^
|   |
i★  v[i++] = *(v + (i++))
    ^
    |
    v + (i++)
    ^   ^
    |   |
    v   ++ (side effect on i)★
        ^
        |
        i
Here you can see that the side effect on i, i++, is in a separate branch to the usage of i in front of the assignment operator (I marked each of these evaluations with a ★). So we definitely have undefined behaviour! I highly recommend drawing these diagrams if you ever wonder if your sequencing of evaluations is going to cause you trouble.
So now we get the question about the fact that the value of i before the assignment operator doesn't matter, because we write over it anyway. But actually, in the general case, that's not true. We can override the assignment operator and make use of the value of the object before the assignment. The standard doesn't care that we don't use that value - the rules are defined such that having any value computation unsequenced with a side effect will be undefined behaviour. No buts. This undefined behaviour is there to allow the compiler to emit more optimized code. If we add sequencing for the assignment operator, this optimization cannot be employed.
In this example I would think that the subexpression i++ would be completely evaluated before the subexpression v[...] is evaluated, and that the result of evaluation of the subexpression is i (before the increment), but that the value of i is the incremented value after that subexpression has been completely evaluated.
The increment in i++ must be evaluated before indexing v and thus before assigning to i, but storing the value of that increment back to memory need not happen before. In the statement i = v[i++] there are two suboperations that modify i (i.e. will end up causing a store from a register into the variable i). The expression i++ is equivalent to x=i+1, i=x, and there is no requirement that both operations need to take place sequentially:
x = i+1;
y = v[i];
i = y;
i = x;
With that expansion, the result of i is unrelated to the value in v[i]. On a different expansion, the i = x assignment could take place before the i = y assignment, and the result would be i = v[i].
There are two rules.
The first rule is about multiple writes which give rise to a "write-write hazard": the same object cannot be modified more than once between two sequence points.
The second rule is about "read-write hazards". It is this: if an object is modified in an expression, and also accessed, then all accesses to its value must be for the purpose of computing the new value.
Expressions like i++ + i++ and your expression i = v[i++] violate the first rule. They modify an object twice.
An expression like i + i++ violates the second rule. The subexpression i on the left observes the value of a modified object, without being involved in the calculation of its new value.
So, i = v[i++] violates a different rule (bad write-write) from i + i++ (bad read-write).
The rules are too simplistic, which gives rise to classes of puzzling expressions. Consider this:
p = p->next = q
This appears to have a sane data flow dependency that is free of hazards: the assignment to p cannot take place until the new value is known. The new value is the result of p->next = q. The value q should not "race ahead" and get inside p, such that p->next is affected.
Yet, this expression breaks the second rule: p is modified, and also used for a purpose not related to computing its new value, namely determining the storage location where the value of q is placed!
So, perversely, compilers are allowed to partially evaluate p->next = q to determine that the result is q, and store that into p, and then go back and complete the p->next = assignment. Or so it would seem.
A key issue here is, what is the value of an assignment expression? The C standard says that the value of an assignment expression is that of the lvalue, after the assignment. But that is ambiguous: it could be interpreted as meaning "the value which the lvalue will have, once the assignment takes place" or as "the value which can be observed in the lvalue after the assignment has taken place". In C++ this is made clear by the wording "[i]n all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression.", so p = p->next = q appears to be valid C++, but dubious C.
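A small sketch of the p = p->next = q case in C++ (the node type and variable names are made up), following the C++ sequencing wording quoted above; as the answer notes, the C reading is murkier:
#include <cstdio>

struct Node { Node *next; };

int main() {
    Node a{nullptr}, b{nullptr};
    Node *p = &a;
    Node *q = &b;

    // Per the C++ wording quoted above, the inner assignment (to a.next,
    // found through the old value of p) is sequenced before the store to p.
    p = p->next = q;

    std::printf("%d %d\n", a.next == &b, p == &b);   // expected: 1 1
}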
I would share your arguments if the example were v[++i], but since i++ modifies i as a side-effect, it is undefined as to when the value is modified. The standard could probably mandate a result one way or the other, but there's no true way of knowing what the value of i should be: (i + 1) or (v[i + 1]).
Think about the sequences of machine operations necessary for each of the following assignment statements, assuming the given declarations are in effect:
extern int *foo(void);
extern int *p;
*p = *foo();
*foo() = *p;
If the evaluations of the lvalue on the left side and the value on the right side are unsequenced, the most efficient ways to process the two function calls would likely be something like:
[For *p = *foo()]
call foo (which yields result in r0 and trashes r1)
load r0 from address held in r0
load r1 from address held in p
store r0 to address held in r1
[For *foo() = *p]
call foo (which yields result in r0 and trashes r1)
load r1 from address held in p
load r1 from address held in r1
store r1 to address held in r0
In either case, if p or *p were read into a register before the call to foo, then unless "foo" promises not to disturb that register, the compiler would need to add an extra step to save its value before calling "foo", and another extra step to restore the value afterward. That extra step might be avoided by using a register that "foo" won't disturb, but that would only help if there were such a register that didn't hold a value needed by the surrounding code.
Letting the compiler read the value of "p" before or after the function call, at its leisure, will allow both patterns above to be handled efficiently. Requiring that the address of the left-hand operand of "=" always be evaluated before the right hand side would likely make the first assignment above less efficient than it otherwise could be, and requiring that the address of the left-hand operand be evaluated after the right-hand side would make the second assignment less efficient.

Is the behaviour of i = i++ really undefined?

Possible Duplicate:
Could anyone explain these undefined behaviors (i = i++ + ++i , i = i++, etc…)
According to the C++ standard,
i = 3;
i = i++;
will result in undefined behavior.
We use the term "undefined behavior" if it can lead to more than one result. But here, the final value of i will be 4 no matter what the order of evaluation, so shouldn't this really be called "unspecified behavior"?
The phrase, "…the final value of i will be 4 no matter what the order of evaluation…" is incorrect. The compiler could emit the equivalent of this:
i = 3;
int tmp = i;
++i;
i = tmp;
or this:
i = 3;
++i;
i = i - 1;
or this:
i = 3;
i = i;
++i;
As to the definitions of terms, if the answer was guaranteed to be 4, that wouldn't be unspecified or undefined behavior, it would be defined behavior.
As it stands, it is undefined behaviour according to the standard (Wikipedia), so it's even free to do this:
i = 3;
system("sudo rm -rf /"); // DO NOT TRY THIS AT HOME … OR AT WORK … OR ANYWHERE.
No, we don't use the term "undefined behavior" when it can simply lead to more than one arithmetical result. When the behavior is limited to different arithmetical results (or, more generally, to some set of predictable results), it is typically referred to as unspecified behavior.
Undefined behavior means completely unpredictable and unlimited consequences, like formatting the hard drive on your computer or simply making your program crash. And i = i++ is undefined behavior.
Where you got the idea that i should be 4 in this case is not clear. There's absolutely nothing in C++ language that would let you come to that conclusion.
In C, and also in C++, the order of any operation between two sequence points is completely up to the compiler and cannot be depended on. The standard defines a list of things that make up sequence points; from memory these are:
the semicolon after a statement
the comma operator
evaluation of all function arguments before the call to the function
the && and || operators
Looking up the page on Wikipedia, the list is more complete and describes things in more detail. Sequence points are an extremely important concept, and if you do not already know what they mean, you will benefit greatly by learning about them right away.
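A tiny sketch (the values are made up) of how a sequence point turns the problematic pattern into defined code:
#include <cstdio>

int main() {
    int i = 3;
    int j = i++;      // the ';' ends the full expression: sequence point here
    j = j + i;        // reading i now is fine; i is 4, so j becomes 3 + 4 = 7
    std::printf("%d %d\n", i, j);   // 4 7
}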
1.
No, the result will be different depending on the order of evaluation. There is no evaluation boundary between the increment and the assignment, so the increment can be performed before or after the assignment. Consider this behaviour:
load i into CX
copy CX to DX
increase DX
store DX in i
store CX in i
The result is that i contains 3, not 4.
As a comparison, in C# there is an evaluation boundary between the evaluation of the expression and the assignment, so the result will always be 3.
2.
Even if the exact behaviour isn't specified, the specification is very clear on what it covers and what it doesn't cover. The behaviour is specified as undefined; it's not unspecified.
The assignment i = ... and the increment i++ are both side effects that modify i.
i++ does not imply that i is only incremented after the entire statement is evaluated, merely that the current value of i has been read.
As such, the assignment, and the increment, could happen in any order.
This question is old, but still appears to be referenced frequently, so it deserves a new answer in light of changes to the standard, from C++17.
[expr.ass] subclause 1 explains
... the assignment is sequenced after the value computation of the right and left operands ...
and
The right operand is sequenced before the left operand.
The implication here is that the side-effects of the right operand are sequenced before the assignment, which means that the expression is not addressed by the provision in [basic.exec] Subclause 10:
If a side effect on a memory location ([intro.memory]) is unsequenced relative to either another side effect on the same memory location or a value computation using the value of any object in the same memory location, and they are not potentially concurrent ([intro.multithread]), the behavior is undefined
The behavior is defined, as explained in the example which immediately follows.
See also: What made i = i++ + 1; legal in C++17?
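Under the C++17 wording quoted above, a sketch of the now-defined outcome for the expression from the question (my reading: the increment is sequenced before the store performed by the assignment, which then overwrites it with the old value):
#include <cstdio>

int main() {
    int i = 3;
    i = i++;                  // C++17: increment to 4 happens first, then 3 is stored
    std::printf("%d\n", i);   // prints 3
}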
To answer your questions:
I think "undefined behavior" means that the compiler/language implementator is free to do whatever it thinks best, and no that it could lead to more than one result.
Because it's not unspecified. It's clearly specified that its behavior is undefined.
It's not worth it to type i=i++ when you could simply type i++.
I saw such a question on an OCAJP practice test.
IntelliJ IDEA's decompiler turns this
public static int iplus(){
    int i=0;
    return i=i++;
}
into this
public static int iplus() {
    int i = 0;
    byte var10000 = i;
    int var1 = i + 1;
    return var10000;
}
Create JAR from module, then import as library & inspect.