Why does C/C++ not define expression evaluation order?

As you may know, C/C++ does not specify expression evaluation order. What are the reasons for leaving it undefined?

It allows compiler optimizations.
One example would be reordering arithmetic instructions to make maximum use of the ALUs, or hiding memory latency behind other calculations.
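For instance (a minimal sketch of my own; the function and its parameters are invented for illustration), the two operands of + below are independent, so the compiler is free to evaluate them in either order, or to start both memory loads early and overlap them with the multiplications:

int combine(const int* x, const int* y, int i, int j)
{
    // x[i] * 3 and y[j] * 5 may be evaluated in either order; the compiler
    // can schedule the loads and multiplies however best fills the pipeline.
    return x[i] * 3 + y[j] * 5;
}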

One of the C/C++ design goals is efficient implementation by compilers, so compilers are given relatively free rein in choosing the evaluation order of the various subexpressions within a complicated expression; this order is not constrained by operator precedence and associativity, as one might think. In such a situation, when we modify the same variable in multiple subexpressions, the behaviour becomes undefined. An increment or decrement operation is not guaranteed to be performed immediately after giving up the previous value and before any other part of the expression is evaluated. The only guarantee is that the update will be performed before the expression is considered finished.
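A minimal example of the kind of expression this warns about (my own sketch, not from the FAQ):

void demo()
{
    int i = 5;
    int a = i * i++;       // undefined: the read of i on the left is unsequenced
                           // relative to the write performed by i++
    int b = (i + 1) * 2;   // fine: i is only read, never modified here
    (void)a; (void)b;      // silence unused-variable warnings
}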
Undefined behaviour means undefined: anything can happen.
Source: the C Programming FAQ by Steve Summit

Related

Why is *ptr = (*ptr)++ Undefined Behavior

I am trying to learn how to explain the cause (if any) of undefined behavior in the following cases (given below).
int i = 0, *ptr = &i;
i = ++i;          // is this UB? If yes, why, according to C++11?
*ptr = (*ptr)++;  // I think this is UB but I am unable to explain exactly why
*ptr = ++(*ptr);  // I think this is not UB but can't explain why
I have looked at many SO posts describing UB for different pointer cases similar to the cases above, but I am still unable to explain exactly why they result in UB (i.e., which point(s) from the standard prove that they result in UB).
I am looking for explanations according to C++11(or C++14) but not C++17 and not Pre-C++11.
Undefined behavior stems from this:
C++11 [intro.execution]/15 Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced... If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined.
C++17 [intro.execution]/17 Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced... If a side effect on a memory location (4.4) is unsequenced relative to either another side effect on the same memory location or a value computation using the value of any object in the same memory location, and they are not potentially concurrent (4.7), the behavior is undefined.
This text is similar. The main difference lies in "except where noted" part; in C++17, the order of evaluation of operands is specified for more operators than in C++11. Thus:
C++17 [expr.ass]/1 In all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression. The right operand is sequenced before the left operand.
C++11 lacks the last sentence ("The right operand is sequenced before the left operand"). That sentence is what makes i = i++ well-defined in C++17, but undefined in C++11. That's because for postfix increment, the side effect is not part of the value computation of the expression:
C++11 and C++17 [expr.post.incr]/1 The value computation of the ++ expression is sequenced before the modification of the operand object.
So "the assignment is sequenced after the value computation of the right and left operands" is not by itself sufficient: the assignment is sequenced after the value computation of i++, and the side effect is also sequenced after that same value computation, but nothing says how they are sequenced relative to each other. Therefore, they are unsequenced, and they are both modifying the same object (here, i). This exhibits undefined behavior.
The addition of "the right operand is sequenced before the left operand" in C++17 means that the side effect of i++ is sequenced before the value computation of i, and both are sequenced before the assignment.
On the other hand, for pre-increment the side effect is necessarily part of the evaluation of the expression:
C++11 and C++17 [expr.pre.incr]/1 ... The result is the updated operand; it is an lvalue ...
So the value computation of ++i involves incrementing i first, and then applying an lvalue-to-rvalue conversion to obtain the updated value. This value computation is sequenced before the assignment in both C++11 and C++17, and so the two side effects on i are sequenced relative to each other; no undefined behavior.
Nothing changes in this analysis if i is replaced with (*ptr). That's just another way to refer to the same object or memory location.
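For reference, here is a minimal program (my own sketch) exercising the two forms that are well-defined under this analysis, with the undefined one left commented out:

#include <iostream>

int main()
{
    int i = 0;
    int *ptr = &i;

    // i = i++;        // undefined behaviour in C++11/14 (two unsequenced writes to i)
    i = ++i;           // OK: the side effect of ++i is part of its value computation
    *ptr = ++(*ptr);   // OK: same analysis through a pointer; i is now 2

    std::cout << i << '\n';   // prints 2
}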
The C++ Standard is based upon the C Standard, whose authors didn't need any particular "reason" to say that implementations may process a construct in whatever fashion would be most useful to their customers [which is what they intended the phrase "Undefined Behavior" to mean]. Many platforms can cheaply guarantee, for small primitive types, that race conditions involving a read and a conflicting write to the same object will always yield either the old or the new data, and that race conditions involving conflicting writes will result in every individual subsequent read seeing one of the written values. Rather than trying to identify all of the cases where implementations should or should not be expected to uphold such guarantees, the Standard allows implementations to, at their leisure, process code "in a documented manner characteristic of the environment". Because it's not practical for all implementations to offer such guarantees in all possible scenarios, and because the range of scenarios where such guarantees would be practical differs between platforms, the authors of the Standard allowed implementations to weigh the pros and cons of offering various behavioral guarantees on their particular target platforms, rather than trying to write precise rules that would be appropriate for all possible implementations.
Note also that if one were to do something like:
*p = (*q)++;
return q[0] + q[i]; // where 'i' is some object of type `int`.
when p and q are equal and i is zero, a compiler might quite plausibly generate code where the assignment would undo the effect of the increment, but which would return the sum of the old value of *q, plus 1, plus the actual stored value of *q (which would be the old value, rather than the incremented value). Although this would be a logical consequence of the specified race-condition semantics, trying to specify it precisely would have been sufficiently awkward that the Standard simply allows implementations to specify the behavior as tightly or loosely as they see fit.

Unexpected reversal of values when using a function to provide int arguments for constructor [duplicate]

Okay, I'm aware that the standard dictates that a C++ implementation may choose in which order arguments of a function are evaluated, but are there any implementations that actually 'take advantage' of this in a scenario where it would actually affect the program?
Classic Example:
int i = 0;
foo(i++, i++);
Note: I'm not looking for someone to tell me that the order of evaluation can't be relied on, I'm well aware of that. I'm only interested in whether any compilers actually do evaluate out of a left-to-right order because my guess would be that if they did lots of poorly written code would break (rightly so, but they would still probably complain).
It depends on the argument type, the called function's calling convention, the architecture and the compiler. On x86, the Pascal calling convention evaluates arguments left to right, whereas in the C calling convention (__cdecl) it is right to left. Most programs which run on multiple platforms do take the calling conventions into account to avoid surprises.
There is a nice article on Raymond Chen's blog if you are interested. You may also want to take a look at the Stack and Calling section of the GCC manual.
Edit: So long as we are splitting hairs: my answer treats this not as a language question but as a platform one. The language standard does not guarantee or prefer one order over the other and leaves it unspecified. Note the wording: it does not say this is undefined. Unspecified in this sense means something you cannot count on, non-portable behavior. I don't have the C spec/draft handy but it should be similar to this from my n2798 draft (C++):
Certain other aspects and operations of the abstract machine are described in this International Standard as unspecified (for example, order of evaluation of arguments to a function). Where possible, this International Standard defines a set of allowable behaviors. These define the nondeterministic aspects of the abstract machine. An instance of the abstract machine can thus have more than one possible execution sequence for a given program and a given input.
I found the answer in the C++ standard.
Paragraph 5.2.2.8:
The order of evaluation of arguments is unspecified. All side effects of argument expression evaluations take effect before the function is entered. The order of evaluation of the postfix expression and the argument expression list is unspecified.
In other words, it depends only on the compiler.
Read this
It's not an exact copy of your question, but my answer (and a few others) cover your question as well.
There are very good optimization reasons why the compiler might not just choose right-to-left but also interleave them.
The standard doesn't even guarantee a sequential ordering. It only guarantees that when the function gets called, all arguments have been fully evaluated.
And yes, I have seen a few versions of GCC do exactly this. For your example, foo(0,0) would be called, and i would be 2 afterwards. (I can't give you the exact version of the compiler; it was a while ago, but I wouldn't be surprised to see this behavior pop up again. It's an efficient way to schedule instructions.)
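If you want to see what your own compiler does without invoking undefined behaviour, give each argument its own side effect through a function call (the bodies of the two calls are indeterminately sequenced, so this is merely unspecified, not undefined). A hypothetical test program:

#include <iostream>

static int trace(int value)
{
    std::cout << "evaluating argument " << value << '\n';
    return value;
}

static void foo(int, int) {}

int main()
{
    foo(trace(1), trace(2));   // which line prints first shows the order this
                               // particular compiler evaluates the arguments in
}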
All arguments are evaluated. The order is not defined (as per the standard). But all implementations of C/C++ (that I know of) evaluate function arguments from right to left. EDIT: Clang is an exception (see comment below).
I believe that the right-to-left evaluation order is very, very old (since the first C compilers). It certainly predates C++, and most implementations of C++ kept the same evaluation order because early C++ implementations simply translated into C.
There are some technical reasons for evaluating function arguments right to left. In stack architectures, arguments are typically pushed onto the stack. In C/C++, you can call a function with more arguments than actually specified -- the extra arguments are simply ignored. If arguments are evaluated left to right, and pushed left to right, then the stack slot right under the stack pointer will hold the last argument, and there is no way for the function to know the offset of any particular argument (because the actual number of arguments pushed depends on the caller).
In a right-to-left push order, the stack slot right under the stack pointer will always hold the first argument, and the next slot holds the second argument, etc. Argument offsets will always be deterministic for the function (which may be written and compiled elsewhere into a library, separately from where it is called).
Now, right-to-left push order does not mandate right-to-left evaluation order, but in early compilers memory was scarce. In right-to-left evaluation order, the same stack can be used in place (essentially, after evaluating an argument -- which may be an expression or a function call! -- the return value is already at the right position on the stack). In left-to-right evaluation, the argument values must be stored separately and then pushed back onto the stack in reverse order.
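The classic place where the fixed offset of the first argument matters is a printf-style variadic function: the callee can only find its named parameter (and, from it, walk the rest) because that parameter sits at a known position. A small sketch:

#include <cstdarg>

// Sums 'count' additional int arguments, e.g. sum_ints(3, 10, 20, 30) == 60.
// The callee relies on the named parameter 'count' being at a known location;
// the variadic arguments that follow are walked with va_arg.
int sum_ints(int count, ...)
{
    va_list args;
    va_start(args, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int);
    va_end(args);
    return total;
}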
The last time I saw differences was between VS2005 and GCC 3.x on x86 hardware, in 2007.
So it's (was?) a very likely situation, and I never rely on evaluation order anymore. Maybe it's better now.
I expect that most modern compilers would attempt to interleave the instructions computing the arguments, given that they are required by the C++ standard to be independent and thus lack any interdependencies. Doing this should help to keep a deeply-pipelined CPU's execution units full and thereby increase throughput. (At least I would expect that a compiler that claims to be an optimising compiler would do so when optimisation flags are given.)

c++11 Order of evaluation (undefined behavior)

vec[ival++] <= vec[ival]
This expression has undefined behavior, because the order of evaluation of the operands of operator <= is undefined.
How can we rewrite that expression to avoid the undefined behavior?
I've found an answer that appears to work:
vec[ival] <= vec[ival + 1]
If that is the right way to avoid the undefined behavior, why does rewriting it that way avoid the undefined behavior?
Adding any reference about how to fix that expression would be great.
Yes, your first example has undefined behavior because we have an unsequenced modification of and access to the same memory location. This is covered in the draft C++ standard, [intro.execution]p10:
Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced. [ Note: In an expression that is evaluated more than once during the execution of a program, unsequenced and indeterminately sequenced evaluations of its subexpressions need not be performed consistently in different evaluations. — end note ] The value computations of the operands of an operator are sequenced before the value computation of the result of the operator. If a side effect on a memory location ([intro.memory]) is unsequenced relative to either another side effect on the same memory location or a value computation using the value of any object in the same memory location, and they are not potentially concurrent ([intro.multithread]), the behavior is undefined. [ Note: The next subclause imposes similar, but more complex restrictions on potentially concurrent computations. — end note ] [ Example:
void g(int i) {
i = 7, i++, i++; // i becomes 9
i = i++ + 1; // the value of i is incremented
i = i++ + i; // the behavior is undefined
i = i + 1; // the value of i is incremented
}
— end example  ]
If we check out the section on relational operators, which covers <= ([expr.rel]), it does not specify an order of evaluation, so we are covered by [intro.execution] and thus we have undefined behavior.
Having an unspecified order of evaluation is not by itself sufficient for undefined behavior, as the example in Order of evaluation of assignment statement in C++ demonstrates.
Your second example avoids the undefined behavior since you are not introducing a side effect on ival; you are just reading the memory location twice.
I believe that is a reasonable way to solve the problem: it is readable and not surprising. An alternative would be to introduce a second variable, e.g. index and prev_index (see the sketch below). It is hard to come up with a fast rule given such a small code snippet.
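For example, the two-variable version might look like this (a sketch reusing vec and ival from the question):

auto prev_index = ival;        // whatever index type ival already has
auto index = prev_index + 1;
bool ordered = vec[prev_index] <= vec[index];   // only reads happen in this expression
ival = index;                                   // the increment is now its own,
                                                // fully sequenced statement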
It avoids undefined behavior because you are not changing the value of ival. The issue you're seeing in the first sample is that we can't determine what the values of ival are at the times that they're used. In the second sample, there's no confusion.
Let's start with the worst problem first, and that is the Undefined Behavior. C++ uses sequencing rules. Statements are executed in sequence. Usually that's top to bottom, but if statements, for statements, function calls and the like can change that.
Now within a statement there might still be a further sequence of execution, but I'm very intentionally writing might. By default, the various parts of a single statement are not sequenced relative to each other. That's why you can get varying orders of execution. But worse, if you change and use a single object without sequencing, you have Undefined Behavior. That is bad: anything might happen.
The proposed solution (ival + 1) doesn't change ival anymore but generates a new value. That is entirely safe. It does, however, leave ival unchanged.
You may want to check out std::adjacent_find(). Chances are that the loop you're trying to write is already in the Standard Library.
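For instance, if the loop is checking whether the sequence is in non-decreasing order (one plausible reading of the original expression), it can be written without any manual index bookkeeping:

#include <algorithm>
#include <functional>
#include <vector>

bool is_non_decreasing(const std::vector<int>& vec)
{
    // adjacent_find returns the first position where the predicate holds for a
    // neighbouring pair; no pair with left > right means the data is sorted.
    return std::adjacent_find(vec.begin(), vec.end(), std::greater<int>()) == vec.end();
}
// Equivalently: std::is_sorted(vec.begin(), vec.end());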
The first problem is that, since the initial code exhibits undefined behavior, under the C++ standard there is no "fix". The behavior of that line of code is not specified by the C++ standard; to know what it is supposed to do, you have to have another source of information.
Formally, that expression can be rewritten as system("format c:"), as the C++ standard does not mandate any behavior from a program that exhibits undefined behavior.
But in practice, when you run into something like that, you have to read the original programmer's mind.
Well, you can solve anything with lambdas.
[&]{ bool ret = vec[ival] <= vec[ival+1]; ++ival; return ret; }()
Second,
vec[ival] <= vec[ival+1]
is not the same, because it lacks the side effect of ival being 1 greater after the expression is evaluated.

Sequence points, conditionals and optimizations

I had an argument today with one of my colleagues regarding the fact that a compiler could change the semantics of a program when aggressive optimizations are enabled.
My colleague states that when optimizations are enabled, a compiler might change the order of some instructions. So that:
void foo(int a, int b)
{
    if (a > 5)
    {
        if (b < 6)
        {
            // Do something
        }
    }
}
Might be changed to:
void foo(int a, int b)
{
    if (b < 6)
    {
        if (a > 5)
        {
            // Do something
        }
    }
}
Of course, in this case, it doesn't change the program's general behavior and isn't really important.
From my understanding, I believe that the two if (condition) statements belong to two different sequence points and that the compiler can't change their order, even if changing it would keep the same general behavior.
So, dear SO users, what is the truth regarding this?
If the compiler can verify that there is no observable difference between those two, then it is free to make such optimizations.
Sequence points are a conceptual thing: the compiler has to generate code such that it behaves as if all the semantic rules like sequence points were followed. The generated code doesn't actually have to follow those rules if not following them produces no observable difference in the behavior of the program.
Even if you had:
if (a > 5 && b < 6)
the compiler could freely rearrange this to be
if (b < 6 && a > 5)
because there is no observable difference between the two (in this specific case where a and b are both int values). [This assumes that it is safe to read both a and b; if reading one of them could cause some error (e.g., one has a trap value), then the compiler would be more restricted in what optimizations it could make.]
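A hypothetical case where the swap would not be allowed: if the second condition is only safe to evaluate when the first one holds, the short-circuit guarantee of && is observable and the compiler must preserve the order:

bool small_enough(const int* p)
{
    // *p may only be read when p is non-null. Rewriting this as
    // (*p < 6 && p != nullptr) could dereference a null pointer,
    // so the as-if rule does not permit that reordering.
    return p != nullptr && *p < 6;
}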
As there is no observable difference between the two program snippets - provided the implementation is one that doesn't use trap values or anything else that might cause the inner comparison to do something other than just evaluate to true or false - the compiler could optimize one to the other under the "as if" rule. If there were some observable difference, or some way that a conforming program might behave differently, then the compiler would be non-conforming if it changed one form to the other.
For C++, see 1.9 [intro.execution] / 5.
A conforming implementation executing a well-formed program shall produce the same observable behavior as one of the possible execution sequences of the corresponding instance of the abstract machine with the same program and the same input. However, if any such execution sequence contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation).
[This provision is sometimes called the "as-if" rule, because an implementation is free to disregard any requirement of this International Standard as long as the result is as if the requirement had been obeyed, as far as can be determined from the observable behavior of the program. For instance, an actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no side effects affecting the observable behavior of the program are produced.]
Yes, the if statement is a sequence point.
However, a smart and aggressive compiler can still reorder the different expressions and statements, and alter the sequence points, provided that no side effects appear.
Sequence points only apply to the abstract machine.
If the target specific optimizer can prove that reversing the order of two instructions has no side effects, it can change them at will.
The end of a full expression (including those that control logical constructs like if, while, et cetera) is a sequence point. However, the sequence point really only provides a guarantee that side-effects of previously-evaluated statements have completed.
If a statement has no observable side-effects the compiler can do what it feels is best.
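Conversely, once the conditions carry observable side effects, swapping the tests is no longer "as if". A made-up illustration:

#include <iostream>

bool check_a(int a) { std::cout << "checking a\n"; return a > 5; }
bool check_b(int b) { std::cout << "checking b\n"; return b < 6; }

void foo(int a, int b)
{
    if (check_a(a)) {       // must be tested first: its output is observable
        if (check_b(b)) {   // even when check_b is never reached
            // Do something
        }
    }
}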
The truth is that if a>5 is false more often than b<6 is false, or vice versa, then the ordering will make a very minor difference, as one order has to compute both conditionals on more occasions than the other.
In reality, though, it is so trivial that it is not worth bothering about in this particular case.
There are cases where it actually does make a difference, i.e. when you are filtering a large collection of data on several criteria and have to decide which filter to apply first, particularly if only one of them is O(log N) or constant and the subsequent checks are linear through what is left (see the sketch below).
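As a sketch of that situation (all names invented): put the cheap, selective test first so the expensive one runs as rarely as possible, relying on the short-circuit behaviour of &&:

#include <algorithm>
#include <cstddef>
#include <vector>

struct Record { int id; double value; };   // hypothetical data type

bool cheap_check(const Record& r)     { return r.id % 2 == 0; }  // O(1) stand-in
bool expensive_check(const Record& r) { return r.value > 0.0; }  // stand-in for a costly test

std::size_t count_matches(const std::vector<Record>& data)
{
    return static_cast<std::size_t>(std::count_if(data.begin(), data.end(),
        [](const Record& r) {
            // && short-circuits, so expensive_check only runs on records
            // that already passed the cheap filter.
            return cheap_check(r) && expensive_check(r);
        }));
}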
Lots of PC programmer replies =)
The compiler may, and likely would, optimize the sequence points for speed if "b" is passed to the function in a quickly-accessed register while "a" is passed on the stack. That's a quite common case for many compilers for 8-bit and 16-bit MCUs.
Through the optimization it doesn't need to first stack "b", then load "a" into a register, then evaluate "a", then load "b" back into a register, then evaluate "b". Quite a mess, which I'd rather hope the compiler handles by rearranging the sequence points.
Though of course, as already mentioned, to be standard compliant the compiler needs to ensure that it doesn't change the program's behavior through the optimization.
