Why are the statements below valid or invalid as marked, and what do they mean? When would a situation arise in which one would write such code?
++x = 5; // legal
--x = 5; // legal
x++ = 5; // illegal
x-- = 5; // illegal
The postfix (x++/x--) operators do not return an lvalue (a value you can assign into).
They return a temporary value which is a copy of the variable's value before the change.
The value is an rvalue, so you could write:
y = x++ and get the old value of x
Given that both operator=() and operator++() can be overloaded, it is impossible to say what the code does without knowing more about the type of thing the operators are being applied to.
Those all modify the value of x more than once between sequence points, and are therefore undefined behavior, which you should carefully avoid. I don't see where the distinction between "legal" and "illegal" comes in - since the behavior is undefined, any behavior (including sending assorted email to the Secretary of State) is perfectly in accordance with the Standard.
Assuming that the question is about built-in ++ and -- operators, none of these statements are strictly legal.
The first two are well-formed, i.e. they are merely compilable, because the result of prefix increment is an lvalue. The last two are ill-formed, since the result of postfix increment is an rvalue, not an lvalue, which is why you can't assign to it.
However, even the first two are not legal in a sense that they produce undefined behavior. It is illegal to modify the same object more than once without an intervening sequence point. (Note also, that compilers are allowed to refuse to compile well-formed code that produces undefined behavior, meaning that even the first pair might prove to be non-compilable).
++x and --x both give you back x (after it's been incremented/decremented). At that point you can do what you want with it, including assign it a new value.
x++ and x-- both give you back what x was (just before it was incremented/decremented). Altering this value makes no more sense than changing any ordinary function's return value:
obj->getValue() += 3; // pointless
Frankly, you should never write that. Postincrement and pre-increment (and decrements) should only ever be used on their own. They're just recipes for confusion.
The only place I can think of where such a situation would occur is with operator overloads of operator++ and operator=. Even then, the intent isn't clear. What your code is saying, basically, is: add one to x, then assign 5 to it. The question would arise: why would you need to increment x before assigning 5 to it? The only possible explanation is some sort of class where the ++ operator increments an internal counter before the assignment. No idea why such a thing would be needed, though.
Related
When I execute this program:
#include<iostream>
using namespace std;
int main(){
int x=5,y=9;
if(++x=y++){
cout<<"Works "<<x;
}
else{
cout<<"No";
}
return 0;
}
it works fine and the output is: Works 9
but if I execute:
#include<iostream>
using namespace std;
int main(){
int x=5,y=9;
if(x++=y++){
cout<<"Works "<<x;
}
else{
cout<<"No";
}
return 0;
}
it states:
In function 'int main()':
6:11: error: lvalue required as left operand of assignment
if(x++=y++){
Because x++ isn't an lvalue.
More specifically, x++ increments x, then returns a temporary with the original value of x. A temporary object can't (casts of dubious legality aside) be used on the left hand side of an assignment, so it is not an lvalue.
++x increments x and returns a reference to x (with its new value). You can then assign directly to it if you choose, so it is an lvalue.
However, it is possible you actually meant to compare the two expressions for equality, rather than do an assignment. In which case, you need to use == rather than =.
You have to remember that the suffix increment operator returns the old value, before the increment.
This value is a very temporary value, and as all other temporary values it is not an "lvalue", i.e. it is not something that can be assigned to.
The prefix increment operator does its increment operation and then returns a reference to the new value. For ++x it returns a reference to x. It is an "lvalue".
The same of course goes for the decrement (--) operator.
There are many sources all over the Internet that will help you understand the difference between "lvalues" and "rvalues" (temporaries).
x++ yields the old value of x, and then x is incremented. On the other hand, ++x increments x, and then yields it.
The second case makes sense; x itself is yielded, and you can do whatever you like with it, including assign to it. The first case makes no sense at all; x++ isn't an lvalue. In fact, once you get the value back, x no longer holds that value.
As everybody else has explained, the value of x++ is a temporary containing the old value of x, and temporaries cannot be assigned to. The compiler therefore rejects the code as ill-formed.
The problem you have is that the first example is also wrong. Although ++x is grammatically an lvalue, if you put it on the left hand side of an assignment operator, you are both incrementing x and assigning to x, and it's not clear what you meant to happen. C++98 and C have the concept of sequence points, and if you modify the same variable twice without an intervening sequence point, the behaviour of the program is not defined (anything can happen - including replacing the whole function with a return, or a segfault).
C++11 introduced a different terminology which I am not familiar with, but the effect in this case is the same - your first example is undefined behaviour.
Add some brackets according to operator precedence to see what is going on:
(++x) = (y++)
This increments x, increments y and assigns the previous value of y (before the increment) to x, since y++ evaluates to y's old value here.
(x++)=(y++)
This isn't a valid statement, since x++ isn't an lvalue.
See also: Why is ++i considered an l-value, but i++ is not?
I've just gotten stuck on the following question: should this cause undefined behaviour or not, and why?
std::map<int, int> m;
m[10] += 1;
It compiles and runs perfectly but it doesn't prove anything.
It resembles the common UB example i = ++i + i++;, since operator[] does have side effects; but on the other hand, assuming either order of evaluation (left to right or right to left) brings me to the same final state of the map.
P.S. possibly related: http://en.cppreference.com/w/cpp/language/eval_order
edit
Sorry guys I should have written
m[10] = m[10] + 1;
There is nothing undefined about this. The operator[] returns an lvalue reference to the map entry (which it creates if necessary). You are then merely incrementing this lvalue expression, i.e. the underlying entry.
The rules for evaluation order state that for a modifying assign operation, the side effect is sequenced strictly after the evaluation of both the left (i.e. lvalue reference to the map entry) and right (i.e. the constant 1) operands. There is no ambiguity at all in this example.
UPDATE: In your updated example nothing changes. Again the side effect of modifying m[10] is sequenced strictly after the other operations (i.e. evaluating m[10] as an lvalue on the left, evaluating it on the right, and performing the addition).
The relevant sequencing rule, from cppreference:
8) The side effect (modification of the left argument) of the built-in
assignment operator and of all built-in compound assignment operators
is sequenced after the value computation (but not the side effects) of
both left and right arguments, and is sequenced before the value
computation of the assignment expression (that is, before returning
the reference to the modified object)
I am not quite sure what your worry is (and maybe you should clarify your question if that answer isn't sufficient), but m[10] += 1; doesn't get translated to m[10] = m[10] + 1;, because m is of a user-defined class type and overloaded operators don't get translated by the compiler, ever. For a and b objects of a user-defined class type:
a+=b doesn't mean a = a + b (unless you make it so)
a!=b doesn't mean !(a==b) (unless you make it so)
Also, function calls are never duplicated.
So m[10] += 1; means: call the overloaded operator[] once; the return type is a reference, so the expression is an lvalue; then apply the built-in operator += to that lvalue.
There is no order of evaluation issue. There aren't even multiple possible orders of evaluation!
Also, you need to remember that the std::map<>::operator[] doesn't behave like std::vector<>::operator[] (or std::deque's), because the map is a completely different abstraction: vector and deque are implementations of the Sequence concept (where position matters), but map is an associative container (where "key" matters, not position):
std::vector<>::operator[] takes a numerical index, and doesn't make sense if such index doesn't refer to an element of the vector.
std::map<>::operator[] takes a key (which can be any type satisfying basic constraints) and will create a (key,value) pair if none exists.
Note that for this reason, std::map<>::operator[] is inherently a modifying operation and thus non const, while std::vector<>::operator[] isn't inherently modifying but can allow modification via the returned reference, and is thus "transitively" const: v[i] will be a modifiable lvalue if v is a non-const vector and a const lvalue if v is a const vector.
So no worry, the code has perfectly well defined behavior.
Consider the following code snippet
int a,i;
a = 5;
(i++) = a;
(++i) = a;
cout<<i<<endl;
The line (++i) = a compiles properly and gives 5 as output.
But (i++) = a gives a compilation error: error: lvalue required as left operand of assignment.
I am not able to find the reason for this difference in behavior. I would be grateful if someone could explain it.
The expression i++ evaluates to the value of i prior to the increment operation. That value is a temporary (which is an rvalue) and you cannot assign to it.
++i works because that expression evaluates to i after it has been incremented, and i can be assigned to (it's an lvalue).
More on lvalues and rvalues on Wikipedia.
According to the C++ standard, prefix ++ is an lvalue (which
is different than C), post-fix no. More generally, C++ takes
the point of view that anything which changes an lvalue
parameter, and has as its value the value of that parameter,
results in an lvalue. So ++ i is an lvalue (since the
resulting value is the new value of i), but i ++ is not
(since the resulting value is not the new value, but the old).
All of this, of course, for the built-in ++ operators. If you
overload, it depends on the signatures of your overloads (but
a correctly designed overloaded ++ will behave like the
built-in ones).
Of course, neither (++ i) = a; nor (i ++) = a; in your
example is legal; both use the value of an uninitialized
variable (i), which is undefined behavior, and both modify i
twice without an intervening sequence point.
I think everyone here knows that --i is a left value expression while i-- is a right value expression. But I read the Assembly code of the two expression and find out that they are compiled to the same Assembly code:
mov eax,dword ptr [i]
sub eax,1
mov dword ptr [i],eax
In the C99 language standard, an lvalue is defined as an expression with an object type or an incomplete type other than void.
So I concluded that --i returns a value whose type is something other than void, while i-- returns a value which is void, or maybe a temp variable.
However, when I write an assignment such as i-- = 5, the compiler gives me an error indicating that i-- is not an lvalue. I do not know why it is not, or why the return value is a temp variable. How does the compiler make such a judgement? Can anybody give me some explanation at the assembly language level? Thanks!
Left value? Right value?
If you are talking about lvalues and rvalues, then the property of being an lvalue or rvalue applies to the result of an expression, meaning that you have to consider the results of --i and i--. And in the C language, both --i and i-- are rvalues. So your question is based on an incorrect premise in the realm of C: --i is not an lvalue in C. I don't know what point you are trying to make by referring to the C99 standard, since it clearly states that neither is an lvalue. Also, it is not clear what you mean by i-- returning void. No, the built-in postfix -- never returns void.
The lvalue vs. rvalue distinction in case of --i and i-- exists in C++ only.
Anyway, if you are looking at mere --i; and i--; expression statements, you are not using the results of these expressions. You are discarding them. The only point to use standalone --i and i-- is their side-effects (decrement of i). But since their side-effects are identical, it is completely expected that the generated code is the same.
If you want to see the difference between --i and i-- expressions, you have to use their results. For example
int a = --i;
int b = i--;
will generate different code for each initialization.
This example has nothing to do with lvalueness or rvalueness of their results though. If you want to observe the difference from that side (which only exists in C++, as I said above), you can try this
int *a = &--i;
int *b = &i--;
The first initialization will compile in C++ (since the result is an lvalue) while the second won't compile (since the result is an rvalue and you cannot apply the built-in unary & to an rvalue).
The rationale behind this specification is rather obvious. Since --i evaluates to the new value of i, it is perfectly possible to make this operator return a reference to i itself as its result (and the C++ language, as opposed to C, prefers to return lvalues whenever possible). Meanwhile, i-- is required to return the old value of i. Since by the time we get to analyze the result of i-- the i itself is likely to hold the new value, we cannot return a reference to i. We have to save (or recreate) the old value of i in some auxiliary temporary location and return it as the result of i--. That temporary value is just a value, not an object. It does not need to reside in memory, which is why it cannot be an lvalue.
[Note: I'm answering this from a C++ perspective.]
Assuming i is a built-in type, if you just write --i; or i--; rather than, say, j = --i; or j = i--;, then it's unsurprising that they get compiled to the same assembly code - they're doing the same thing, which is decrementing i. The difference only becomes apparent at the assembly level when you do something with the result; otherwise they effectively have the same semantics.
(Note that if we were thinking about overloaded pre- and post-decrement operators for a user-defined type, the code generated would not be the same.)
When you write something like i-- = 5;, the compiler quite rightly complains, because the semantics of post-decrement are essentially to decrement the thing in question but return the old value of it for further use. The thing returned will be a temporary, hence why i-- yields an r-value.
The terms “lvalue” and “rvalue” originate from the assignment expression E1 = E2, in which the left operand E1 is used to identify the object to be modified, and the right operand E2 identifies the value to be used. (See C 1999 6.3.2.1, note 53.)
Thus, an expression which still has some object associated with it can be used to locate that object and to write to it. This is an lvalue. If an expression is not an lvalue, it might be called an rvalue.
For example, if you have i, the name of some object, it is an lvalue, because we can find where i is, and we can assign to it, as in i = 3.
On the other hand, if we have the expression i+1, then we have taken the value of i and added 1, and we now have a value, but it is not associated with a particular object. This new value is not in i. It is just a temporary value and does not have a particular location. (To be sure, the compiler must put it somewhere, unless optimization removes the expression completely. But it might be in registers and never in memory. Even if it is in memory for some reason, the C language does not provide you for a way to find out where.) So i+1 is not an lvalue, because you cannot use it on the left side of an assignment.
--i and i++ are both expressions that result from taking the value of i and performing some arithmetic. (These expressions also change i, but that is a side effect of the operator, not part of the result it returns.) The “left” and “right” of lvalues and rvalues have nothing to do with whether the -- or ++ operator is on the left side or the right side of a name; they have to do with the left and right sides of an assignment. As other answers explain, in C++ the prefix forms yield an lvalue. However, this is coincidental; this definition of the operators in C++ came many years after the creation of the term “lvalue”.
From the C++ (C++11) standard, §1.9.15 which discusses ordering of evaluation, is the following code example:
void g(int i, int* v) {
i = v[i++]; // the behavior is undefined
}
As noted in the code sample, the behavior is undefined.
(Note: The answer to another question with the slightly different construct i + i++, Why is a = i + i++ undefined and not unspecified behaviour, might apply here: The answer is essentially that the behavior is undefined for historical reasons, and not out of necessity. However, the standard seems to imply some justification for this being undefined - see quote immediately below. Also, that linked question indicates agreement that the behavior should be unspecified, whereas in this question I am asking why the behavior is not well-specified.)
The reasoning given by the standard for the undefined behavior is as follows:
If a side effect on a scalar object is unsequenced relative to either
another side effect on the same scalar object or a value computation
using the value of the same scalar object, the behavior is undefined.
In this example I would think that the subexpression i++ would be completely evaluated before the subexpression v[...] is evaluated, and that the result of evaluation of the subexpression is i (before the increment), but that the value of i is the incremented value after that subexpression has been completely evaluated. I would think that at that point (after the subexpression i++ has been completely evaluated), the evaluation v[...] takes place, followed by the assignment i = ....
Therefore, although the incrementing of i is pointless, I would nonetheless think that this should be defined.
Why is this undefined behavior?
I would think that the subexpression i++ would be completely evaluated before the subexpression v[...] is evaluated
But why would you think that?
One historical reason for this code being UB is to allow compiler optimizations to move side-effects around anywhere between sequence points. The fewer sequence points, the more potential opportunities to optimize but the more confused programmers. If the code says:
a = v[i++];
The intention of the standard is that the code emitted can be:
a = v[i];
++i;
which might be two instructions where:
tmp = i;
++i;
a = v[tmp];
would be more than two.
The "optimized code" breaks when a is i, but the standard permits the optimization anyway, by saying that behavior of the original code is undefined when a is i.
The standard easily could say that i++ must be evaluated before the assignment as you suggest. Then the behavior would be fully defined and the optimization would be forbidden. But that's not how C and C++ do business.
Also beware that many examples raised in these discussions make it easier to tell that there's UB around than it is in general. This leads to people saying that it's "obvious" the behavior should be defined and the optimization forbidden. But consider:
void g(int *i, int* v, int *dst) {
*dst = v[(*i)++];
}
The behavior of this function is defined when i != dst, and in that case you'd want all the optimization you can get (which is why C99 introduces restrict, to allow more optimizations than C89 or C++ do). In order to give you the optimization, behavior is undefined when i == dst. The C and C++ standards tread a fine line when it comes to aliasing, between undefined behavior that's not expected by the programmer, and forbidding desirable optimizations that fail in certain cases. The number of questions about it on SO suggests that the questioners would prefer a bit less optimization and a bit more defined behavior, but it's still not simple to draw the line.
Aside from whether the behavior is fully defined is the issue of whether it should be UB, or merely unspecified order of execution of certain well-defined operations corresponding to the sub-expressions. The reason C goes for UB is all to do with the idea of sequence points, and the fact that the compiler need not actually have a notion of the value of a modified object, until the next sequence point. So rather than constrain the optimizer by saying that "the" value changes at some unspecified point, the standard just says (to paraphrase): (1) any code that relies on the value of a modified object prior to the next sequence point, has UB; (2) any code that modifies a modified object has UB. Where a "modified object" is any object that would have been modified since the last sequence point in one or more of the legal orders of evaluation of the subexpressions.
Other languages (e.g. Java) go the whole way and completely define the order of expression side-effects, so there's definitely a case against C's approach. C++ just doesn't accept that case.
I'm going to design a pathological computer1. It is a multi-core, high-latency, single-thread system with in-thread joins that operates with byte-level instructions. So you make a request for something to happen, then the computer runs (in its own "thread" or "task") a byte-level set of instructions, and a certain number of cycles later the operation is complete.
Meanwhile, the main thread of execution continues:
void foo(int v[], int i){
i = v[i++];
}
becomes in pseudo-code:
input variable i // = 0x00000000
input variable v // = &[0xBAADF00D, 0xABABABABAB, 0x10101010]
task get_i_value: GET_VAR_VALUE<int>(i)
reg indx = WAIT(get_i_value)
task write_i++_back: WRITE(i, INC(indx))
task get_v_value: GET_VAR_VALUE<int*>(v)
reg arr = WAIT(get_v_value)
task get_v[i]_value = CALC(arr + sizeof(int)*indx)
reg pval = WAIT(get_v[i]_value)
task read_v[i]_value = LOAD_VALUE<int>(pval)
reg got_value = WAIT(read_v[i]_value)
task write_i_value_again = WRITE(i, got_value)
(discard, discard) = WAIT(write_i++_back, write_i_value_again)
So you'll notice that I didn't wait on write_i++_back until the very end, the same time as I was waiting on write_i_value_again (which value I loaded from v[]). And, in fact, those writes are the only writes back to memory.
Imagine if write to memory are the really slow part of this computer design, and they get batched up into a queue of things that get processed by a parallel memory modifying unit that does things on a per-byte basis.
So the write(i, 0x00000001) and write(i, 0xBAADF00D) execute unordered and in parallel. Each gets turned into byte-level writes, and they are randomly ordered.
We end up writing 0x00 then 0xBA to the high byte, then 0xAD and 0x00 to the next byte, then 0xF0 0x00 to the next byte, and finally 0x0D 0x01 to the low byte. The resulting value in i is 0xBA000001, which few would expect, yet would be a valid result to your undefined operation.
Now, all I did there was result in an unspecified value. We haven't crashed the system. But the compiler would be free to make it completely undefined -- maybe sending two such requests to the memory controller for the same address in the same batch of instructions actually crashes the system. That would still be a "valid" way to compile C++, and a "valid" execution environment.
Remember, this is a language where restricting the size of pointers to 8 bits is still a valid execution environment. C++ allows for compiling to rather wonky targets.
1: As noted in #SteveJessop's comment below, the joke is that this pathological computer behaves a lot like a modern desktop computer, until you get down to the byte-level operations. Non-atomic int writing by a CPU isn't all that rare on some hardware (such as when the int isn't aligned the way the CPU wants it to be aligned).
The reason is not just historical. Example:
int f(int& i0, int& i1) {
return i0 + i1++;
}
Now, what happens with this call:
int i = 3;
int j = f(i, i);
It's certainly possible to put requirements on the code in f so that the result of this call is well defined (Java does this), but C and C++ don't impose constraints; this gives more freedom to optimizers.
You specifically refer to the C++11 standard so I'm going to answer with the C++11 answer. It is, however, very similar to the C++03 answer, but the definition of sequencing is different.
C++11 defines a sequenced before relation between evaluations on a single thread. It is asymmetric, transitive and pair-wise. If some evaluation A is not sequenced before some evaluation B and B is also not sequenced before A, then the two evaluations are unsequenced.
Evaluating an expression includes both value computations (working out the value of some expression) and side effects. One instance of a side effect is the modification of an object, which is the most important one for answering question. Other things also count as side effects. If a side effect is unsequenced relative to another side effect or value computation on the same object, then your program has undefined behaviour.
So that's the set up. The first important rule is:
Every value computation and side effect associated with a full-expression is sequenced before every value computation and side effect associated with the next full-expression to be evaluated.
So any full expression is fully evaluated before the next full expression. In your question, we're only dealing with one full expression, namely i = v[i++], so we don't need to worry about this. The next important rule is:
Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced.
That means that in a + b, for example, the evaluation of a and b are unsequenced (they may be evaluated in any order). Now for our final important rule:
The value computations of the operands of an operator are sequenced before the value computation of the result of the operator.
So for a + b, the sequenced before relationships can be represented by a tree where a directed arrow represents the sequenced before relationship:
a + b (value computation)
^ ^
| |
a b (value computation)
If two evaluations occur in separate branches of the tree, they are unsequenced, so this tree shows that the evaluations of a and b are unsequenced relative to each other.
Now, let's do the same thing to your i = v[i++] example. We make use of the fact that v[i++] is defined to be equivalent to *(v + (i++)). We also use some extra knowledge about the sequencing of postfix increment:
The value computation of the ++ expression is sequenced before the modification of the operand object.
So here we go (a node of the tree is a value computation unless specified as a side effect):
i = v[i++]
^ ^
| |
i★ v[i++] = *(v + (i++))
^
|
v + (i++)
^ ^
| |
v ++ (side effect on i)★
^
|
i
Here you can see that the side effect on i, i++, is in a separate branch to the usage of i in front of the assignment operator (I marked each of these evaluations with a ★). So we definitely have undefined behaviour! I highly recommend drawing these diagrams if you ever wonder if your sequencing of evaluations is going to cause you trouble.
So now we get the question about the fact that the value of i before the assignment operator doesn't matter, because we write over it anyway. But actually, in the general case, that's not true. We can overload the assignment operator and make use of the value of the object before the assignment. The standard doesn't care that we don't use that value - the rules are defined such that having any value computation unsequenced with a side effect will be undefined behaviour. No buts. This undefined behaviour is there to allow the compiler to emit more optimized code. If we add sequencing for the assignment operator, this optimization cannot be employed.
In this example I would think that the subexpression i++ would be completely evaluated before the subexpression v[...] is evaluated, and that the result of evaluation of the subexpression is i (before the increment), but that the value of i is the incremented value after that subexpression has been completely evaluated.
The increment in i++ must be evaluated before indexing v and thus before assigning to i, but storing the value of that increment back to memory need not happen before. In the statement i = v[i++] there are two suboperations that modify i (i.e. will end up causing a store from a register into the variable i). The expression i++ is equivalent to x=i+1, i=x, and there is no requirement that both operations need to take place sequentially:
x = i+1;
y = v[i];
i = y;
i = x;
With that expansion, the result of i is unrelated to the value in v[i]. Under a different expansion, the i = x assignment could take place before the i = y assignment, and the result would be i = v[i].
There are two rules.
The first rule is about multiple writes which give rise to a "write-write hazard": the same object cannot be modified more than once between two sequence points.
The second rule is about "read-write hazards". It is this: if an object is modified in an expression, and also accessed, then all accesses to its value must be for the purpose of computing the new value.
Expressions like i++ + i++ and your expression i = v[i++] violate the first rule. They modify an object twice.
An expression like i + i++ violates the second rule. The subexpression i on the left observes the value of a modified object, without being involved in the calculation of its new value.
So, i = v[i++] violates a different rule (bad write-write) from i + i++ (bad read-write).
The rules are too simplistic, which gives rise to classes of puzzling expressions. Consider this:
p = p->next = q
This appears to have a sane data flow dependency that is free of hazards: the assignment p = cannot take place until the new value is known. The new value is the result of p->next = q. The value q should not "race ahead" and get inside p, such that p->next is affected.
Yet, this expression breaks the second rule: p is modified, and also used for a purpose not related to computing its new value, namely determining the storage location where the value of q is placed!
So, perversely, compilers are allowed to partially evaluate p->next = q to determine that the result is q, and store that into p, and then go back and complete the p->next = assignment. Or so it would seem.
A key issue here is, what is the value of an assignment expression? The C standard says that the value of an assignment expression is that of the lvalue, after the assignment. But that is ambiguous: it could be interpreted as meaning "the value which the lvalue will have, once the assignment takes place" or as "the value which can be observed in the lvalue after the assignment has taken place". In C++ this is made clear by the wording "[i]n all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression.", so p = p->next = q appears to be valid C++, but dubious C.
I would share your arguments if the example were v[++i], but since i++ modifies i as a side effect, it is undefined as to when the value is modified. The standard could probably mandate a result one way or the other, but there's no true way of knowing what the value of i should be: (i + 1) or v[i].
Think about the sequences of machine operations necessary for each of the following assignment statements, assuming the given declarations are in effect:
extern int *foo(void);
extern int *p;
*p = *foo();
*foo() = *p;
If the evaluation of the subscript on the left side and the value on the right side are unsequenced, the most efficient ways to process the two function calls would likely be something like:
[For *p = *foo()]
call foo (which yields result in r0 and trashes r1)
load r0 from address held in r0
load r1 from address held in p
store r0 to address held in r1
[For *foo() = *p]
call foo (which yields result in r0 and trashes r1)
load r1 from address held in p
load r1 from address held in r1
store r1 to address held in r0
In either case, if p or *p were read into a register before the call to foo, then unless "foo" promises not to disturb that register, the compiler would need to add an extra step to save its value before calling "foo", and another extra step to restore the value afterward. That extra step might be avoided by using a register that "foo" won't disturb, but that would only help if there were such a register which didn't hold a value needed by the surrounding code.
Letting the compiler read the value of "p" before or after the function call, at its leisure, will allow both patterns above to be handled efficiently. Requiring that the address of the left-hand operand of "=" always be evaluated before the right hand side would likely make the first assignment above less efficient than it otherwise could be, and requiring that the address of the left-hand operand be evaluated after the right-hand side would make the second assignment less efficient.