Simple question, but surprisingly hard to search for.
For the expression A && B I know there is a sequence point between the evaluation of A and B, and I know that the order of evaluation is left-to-right. But what is a compiler allowed to do when it can prove that B is always false (perhaps even explicitly so)?
Namely, for function_with_side_effects() && false is the compiler allowed to optimize away the function call?
A compiler is allowed to optimise out anything, as long as it doesn't break the as-if rule. The as-if rule states that, with respect to observable behaviour, a program must behave as if it were executed by the exact rules of the C++ abstract machine (basically the normal, unoptimised semantics of the code).
Observable behaviour is:
Access to volatile objects
Writing to files
Input & output on interactive devices
As long as the program does the three things above in the correct order, it is allowed to deviate from other source code functionality as much as it wants.
Of course, in practice, the number of operations which must be left intact by the compiler is much larger than the above, simply because the compiler has to assume that any function whose code it cannot see can, potentially, have an observable effect.
So, in your case, unless the compiler can prove that no action inside function_with_side_effects can ever affect observable behaviour (directly or indirectly by e.g. setting a flag tested later), it has to execute a call of function_with_side_effects, because it could violate the as-if rule if it didn't.
As @T.C. correctly pointed out in the comments, there are a few exceptions to the as-if rule where a compiler is allowed to perform optimisations that change observable behaviour; the most commonly encountered of these is copy elision. However, none of the exceptions come into play in the code in question.
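As a rough illustration of the as-if rule, here is a minimal sketch; the names are purely illustrative and not taken from the question:

#include <cstdio>

static int square(int x) { return x * x; }   // no observable behaviour of its own

int main() {
    int unused = square(21);   // dead computation: a compiler may drop it entirely
    (void)unused;
    std::puts("hello");        // output is observable behaviour and must be kept, in order
}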
No.
In general, the C++ Standard specifies the result of a computation in terms of observable effects. As long as your code is written in a Standard-compliant way (avoiding Undefined Behavior, Unspecified Behavior and Implementation-Defined Behavior), a compliant compiler has to produce the observable effects in the order in which they are specified.
The most notable caveat in the Standard is copy elision: when returning a value, the compiler may omit a call to the copy constructor or to the move constructor regardless of their (potential) observable effects.
The compiler is otherwise only allowed to optimize non-observable behavior, such as using fewer CPU registers or not writing a value to a memory location you never read afterward.
Note: in C++, the address of an object can be observed, and is thus considered observable; it's low-level like that.
In your particular case, let's refer to the Standard:
[expr.log.and] Logical AND operator
The && operator groups left-to-right. The operands are both contextually converted to bool (Clause 4). The result is true if both operands are true and false otherwise. Unlike &, && guarantees left-to-right evaluation: the second operand is not evaluated if the first operand is false.
The result is a bool. If the second expression is evaluated, every value computation and side effect associated with the first expression is sequenced before every value computation and side effect associated with the second expression.
The key here is (2): "sequenced before" is standardese for ordering value computations and side effects.
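To make this concrete, here is a hedged sketch, assuming a function whose only side effect is setting a flag:

#include <cassert>

bool called = false;
bool function_with_side_effects() { called = true; return true; }

int main() {
    bool r = function_with_side_effects() && false;
    assert(called);   // the left operand is always evaluated
    assert(!r);       // but the overall result is still false
}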
According to the standard:
5.14 Logical AND operator:
1 The && operator groups left-to-right. The operands are both contextually converted to bool. The result is true if both operands are true and false otherwise. Unlike &, && guarantees left-to-right evaluation: the second operand is not evaluated if the first operand is false.
2 The result is a bool. If the second expression is evaluated, every value computation and side effect associated with the first expression is sequenced before every value computation and side effect associated with the second expression.
So, according to these rules, the compiler must generate code in which function_with_side_effects() is evaluated.
The expression function_with_side_effects() && false is the same as function_with_side_effects() except that the result value is unconditionally false. The function call cannot be eliminated. The left operand of && is always evaluated; it is only the right operand whose evaluation is conditional.
Perhaps you're really asking about false && function_with_side_effects()?
In this expression, the function call must not happen. This is obvious statically, from the false. Since it must not happen, the translation of the code can be such that the function isn't referenced in the generated code.
However, suppose that we have false && nonexistent_function(), which is the only reference to nonexistent_function, and that function isn't defined anywhere. If this is completely optimized away, then the program will still link, and the implementation will thereby fail to diagnose a violation of the One Definition Rule.
So I suspect that, for conformance, the symbol still has to be referenced, even if it isn't used in the code.
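A small sketch of the contrast, assuming function_with_side_effects is defined in some other translation unit:

bool function_with_side_effects();   // assumed to be defined elsewhere

void demo() {
    bool a = function_with_side_effects() && false;   // the call must happen; a is false
    bool b = false && function_with_side_effects();   // the call must not happen; b is false
    (void)a;
    (void)b;
}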
Related
When a class member cannot have a sensible meaning at the moment of construction, I don't initialize it. Obviously that only applies to POD types; you cannot NOT initialize an object with constructors.
The advantage of that, apart from saving CPU cycles initializing something to a value that has no meaning, is that I can detect erroneous usage of these variables with valgrind, which is not possible when I'd just give those variables some random value.
For example,
struct MathProblem {
    bool finished;
    double answer;
    MathProblem() : finished(false) { }
};
Until the math problem is solved (finished) there is no answer. It makes no sense to initialize answer in advance (to, say, zero) because that might not be the answer. answer only has a meaning after finished has been set to true.
Using answer before it is initialized is therefore an error, and it is perfectly OK for that to be UB.
However, a trivial copy of answer before it is initialized is currently ALSO UB (if I understand the standard correctly), and that doesn't make sense: the default copy and move constructors should simply be able to make a trivial copy (i.e., as if by memcpy), initialized or not. I might want to move this object into a container:
v.push_back(MathProblem());
and then work with the copy inside the container.
Is moving an object with an uninitialized, trivially copyable member indeed defined as UB by the standard? And if so, why? It doesn't seem to make sense.
Is moving an object with an uninitialized, trivially copyable member indeed defined as UB by the standard?
It depends on the type of the member. The Standard says:
[basic.indet]
When storage for an object with automatic or dynamic storage duration is obtained, the object has an indeterminate value, and if no initialization is performed for the object, that object retains an indeterminate value until that value is replaced ([expr.ass]).
If an indeterminate value is produced by an evaluation, the behavior is undefined except in the following cases:
If an indeterminate value of unsigned ordinary character type ([basic.fundamental]) or std::byte type ([cstddef.syn]) is produced by the evaluation of:
the second or third operand of a conditional expression,
the right operand of a comma expression,
the operand of a cast or conversion ([conv.integral], [expr.type.conv], [expr.static.cast], [expr.cast]) to an unsigned ordinary character type or std::byte type ([cstddef.syn]), or
a discarded-value expression,
then the result of the operation is an indeterminate value.
If an indeterminate value of unsigned ordinary character type or std::byte type is produced by the evaluation of the right operand of a simple assignment operator ([expr.ass]) whose first operand is an lvalue of unsigned ordinary character type or std::byte type, an indeterminate value replaces the value of the object referred to by the left operand.
If an indeterminate value of unsigned ordinary character type is produced by the evaluation of the initialization expression when initializing an object of unsigned ordinary character type, that object is initialized to an indeterminate value.
If an indeterminate value of unsigned ordinary character type or std::byte type is produced by the evaluation of the initialization expression when initializing an object of std::byte type, that object is initialized to an indeterminate value.
None of the exceptional cases apply to your example object, so UB applies.
with memcpy is UB?
It is not. std::memcpy interprets the object as an array of bytes, in which exceptional case there is no UB. You still have UB if you attempt to read the indeterminate copy (unless the exceptions above apply).
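A hedged sketch of that distinction, reusing the MathProblem type from the question:

#include <cstring>

struct MathProblem {
    bool finished;
    double answer;
    MathProblem() : finished(false) { }   // 'answer' is left indeterminate
};

int main() {
    MathProblem a;
    MathProblem b;
    std::memcpy(&b, &a, sizeof a);   // OK: the copy is made as an array of bytes
    // double d = b.answer;          // still UB: reading an indeterminate double
}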
why?
The C++ standard doesn't include a rationale for most rules. This particular rule has existed since the first standard. It is slightly stricter than the related C rule, which is about trap representations. To my understanding, there is no established convention for trap handling, and the authors didn't wish to restrict implementations by specifying it, so they opted to specify it as UB instead. This also has the effect of allowing the optimiser to deduce that indeterminate values will never be read.
I might want to move this object into a container:
Moving an uninitialised object into a container is typically a logic error. It is unclear why you might want to do such a thing.
The design of the C++ Standard was heavily influenced by the C Standard, whose authors (according to the published Rationale) intended and expected that implementations would, on a quality-of-implementation basis, extend the semantics of the language by meaningfully processing programs in cases where it was clear that doing so would be useful, even if the Standard didn't "officially" define the behavior of those programs. Consequently, both standards place more priority upon ensuring that they don't mandate behaviors in cases where doing so might make some implementations less useful, than upon ensuring that they mandate everything that should be supported by quality general-purpose implementations.
There are many cases where it may be useful for an implementation to extend the semantics of the language by guaranteeing that using memcpy on any valid region of storage will, at worst, behave in a fashion consistent with populating the destination with some possibly-meaningless bit pattern with no outside side effects, and few if any where it would be either easier or more useful to have it do something else. The only situations where anyone should care about whether the behavior of memcpy is defined in a particular situation involving valid regions of storage would be those in which some alternative behavior would be genuinely more useful than the commonplace one. If such situations exist, compiler writers and their customers would be better placed than the Committee to judge which behavior would be most useful.
As an example of a situation where an alternative behavior might be more useful, consider code which uses memcpy to copy a partially-written structure, and then uses it to make two copies of that structure. In some cases, having the compiler only write the parts of the two destination structures which had been written in the original may improve efficiency, but that behavior would be observably different from having the first memcpy behave as though it stores some bit pattern to its destination. Note that while such a change would not adversely affect a program's overall behavior if no copies of the uninitialized parts of the structure are ever used in a way that would affect behavior, the Standard has no nice way of distinguishing scenarios that could or could not occur under such a model, and thus leaves all such scenarios undefined.
Is the following valid?
template <typename T>
std::pair<T, T> foo(T one, T two) { ... }
std::tie(one, two) = foo(std::move(one), std::move(two));
(Assuming that the classes involved handle assigning to a moved-from object in a valid manner).
From reading the updated evaluation order proposal, my assumption was that this was fixed, but I can't find an exact reference in the standard that verifies this. Could someone help provide that?
The relevant section of the standard is [expr.ass]/1, which says:
In all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression. The right operand is sequenced before the left operand. With respect to an indeterminately-sequenced function call, the operation of a compound assignment is a single evaluation.
So, according to this, foo(std::move(one), std::move(two)) will be evaluated first, leaving one and two as moved-from objects by the time std::tie(one, two) is evaluated. std::tie only creates references, so the moved-from objects are not accessed there. Then the assignment actually happens, meaning that one and two are assigned to via std::tuple::operator= and get whatever values foo returns. This is legal and well defined.
The code is valid, regardless of C++17 changes to evaluation order of =.
Since the LHS doesn't read from one and two, but merely takes their addresses (binds references to them), the fact that it is unsequenced (pre-C++17) relative to the RHS doesn't cause UB.
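For completeness, a compilable sketch of the pattern from the question; the body of foo here is only illustrative:

#include <string>
#include <tuple>
#include <utility>

template <typename T>
std::pair<T, T> foo(T one, T two) {
    return { std::move(two), std::move(one) };   // e.g. return the values swapped
}

int main() {
    std::string one = "a", two = "b";
    // The call (including both moves) completes before the assignment takes place,
    // and std::tie only binds references, so nothing reads the moved-from strings.
    std::tie(one, two) = foo(std::move(one), std::move(two));
}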
I am confused about the output of the code below. It depends on which compiler I use to run the code. Why is that?
#include <iostream>
using namespace std;

int f(int &n)
{
    n--;
    return n;
}

int main()
{
    int n = 10;
    n = n - f(n);
    cout << n;
    return 0;
}
Running it with g++ on Ubuntu gives 1 as the output, whereas running it with Turbo C++ (the compiler we used in school) gives 0.
In C++03, modifying a variable and also using its value in the same expression, without an intervening C++03 sequence point, was Undefined Behavior.
C++03 §5/4:
” Between the previous and next sequence point a scalar object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be accessed only to determine the value to be stored. The requirements of this paragraph shall be met for each allowable ordering of the subexpressions of a full expression; otherwise the behavior is undefined.
Undefined Behavior, UB, provides the compiler with an opportunity to optimize, because it can assume that UB does not occur in a valid program.
However, with all the myriad UB rules of C++ it's difficult to reason about source code.
In C++11 sequence points were replaced with sequenced before, indeterminately sequenced and unsequenced relations:
C++11 §1.9/3
” Given any two evaluations A and B, if A is sequenced before B, then the execution of A shall precede the execution of B. If A is not sequenced before B and B is not sequenced before A, then A and B are unsequenced. [Note: The execution of unsequenced evaluations can overlap. —end note ] Evaluations A and B are indeterminately sequenced when either A is sequenced before B or B is sequenced before A, but it is unspecified which.
And with the new C++11 sequence relationship rules the modification in the function in the code in question is indeterminately sequenced with respect to the use of the variable, and so the code has unspecified behavior rather than Undefined Behavior, as noted by Eric M Schmidt in a comment to (the first version of) this answer. Essentially that means that there is no danger of nasal daemons or other possible UB effects, and that the behavior is a reasonable one. The two possible behaviors here are that the modification via the function call is done before the use of the value, or that it's done after the use of the value.
Why it's unspecified behavior:
C++11 §1.9/15:
” Every evaluation in the calling function (including other function calls) that is not otherwise specifically sequenced before or after the execution of the body of the called function is indeterminately sequenced with respect to the execution of the called function.
What “unspecified behavior” means:
C++11 §1.3.25:
” unspecified behavior
Behavior, for a well-formed program construct and correct data, that depends on the implementation [Note: The implementation is not required to document which behavior occurs. The range of possible behaviors is usually delineated by this International Standard. —end note ]
Why the modification effected by the assignment is not problematic:
C++11 §5.17/1
” In all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression.
This is also quite different from C++03.
As the rather drastic edit of this answer shows, following Eric's comment, this kind of issue is not simple! The main advice I can give is to, as much as possible, just Say No™ to effects governed by subtle or very complex rules, the corners of the language. Simple code has a better chance of being correct, while so-called clever code does not have a good chance of being significantly faster.
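Following that advice, here is a hedged rewrite of the code in question that sequences the call explicitly, so every conforming compiler gives the same result:

#include <iostream>

int f(int &n)
{
    n--;
    return n;
}

int main()
{
    int n = 10;
    int delta = f(n);        // the call is sequenced first: n becomes 9, delta is 9
    n = n - delta;           // well-defined: n is now 0
    std::cout << n << '\n';  // always prints 0
    // (saving the old value of n first and subtracting f(n) from it would instead always give 1)
}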
I'm wondering whether empty expressions evaluate to a NOP, or whether it's compiler-dependent.
// Trivial example
int main()
{
    ;;
}
It's compiler-dependent, but the observable behaviour must be that nothing happens. In practice, I'm sure most compilers will emit no code at all for an empty expression.
A conforming implementation executing a well-formed program shall produce the same observable behavior as one of the possible executions of the corresponding instance of the abstract machine with the same program and the same input.
And the observable behaviour is defined by:
The least requirements on a conforming implementation are:
Access to volatile objects are evaluated strictly according to the rules of the abstract machine.
At program termination, all data written into files shall be identical to one of the possible results that execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place in such a fashion that prompting output is actually delivered before a program waits for input. What constitutes an interactive device is implementation-defined.
These collectively are referred to as the observable behavior of the program.
This is really the only requirement for an implementation. It is often known as the "as-if" rule - the compiler can do whatever it likes as long as the observable behaviour is as expected.
For what it's worth, these empty expressions are known as null statements:
An expression statement with the expression missing is called a null statement.
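For what it's worth, a common legitimate use of a null statement is a deliberately empty loop body, as in this small sketch:

#include <cstdio>

int main() {
    int c;
    while ((c = std::getchar()) != '\n' && c != EOF)
        ;   // null statement: discard characters up to the end of the line
}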
If you really want a NOP, you can try:
asm("nop");
This is, however, conditionally supported and its behaviour is implementation-defined.
or if it's compiler dependent.
It is compiler-dependent ("as-if rule"), but most reasonable optimizing compilers will just ignore empty statements for the sake of efficiency, and they generally won't emit NOP instructions.
Is it a standard term which is well defined, or just a term coined by developers to explain a concept (and what is the concept)? As I understand it, this has something to do with the all-confusing sequence points, but I am not sure.
I found one definition here, but doesn't this make each and every statement of code a side effect?
A side effect is a result of an operator, expression, statement, or function that persists even after the operator, expression, statement, or function has finished being evaluated.
Can someone please explain what the term 'side effect' formally means in C++, and what is its significance?
For reference, some questions talking about side effects:
Is comma operator free from side effect?
Force compiler to not optimize side-effect-less statements
Side effects when passing objects to function in C++
A "side effect" is defined by the C++ standard in [intro.execution], by:
Reading an object designated by a volatile glvalue (3.10), modifying an object, calling a library I/O function, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment.
The term "side-effect" arises from the distinction between imperative languages and pure functional languages. A C++ expression can do three things:
compute a result (or compute "no result" in the case of a void expression),
raise an exception instead of evaluating to a result,
in addition to 1 or 2, otherwise alter the state of the abstract machine on which the program is nominally running.
(3) are side-effects, the "main effect" being to evaluate the result of the expression. Exceptions are a slightly awkward special case, in that altering the flow of control does change the state of the abstract machine (by changing the current point of execution), but isn't a side-effect. The code to construct, handle and destroy the exception may have its own side-effects, of course.
The same principles apply to functions, with the return value in place of the result of the expression.
So, int foo(int a, int b) { return a + b; } just computes a return value, it doesn't alter anything else. Therefore it has no side-effects, which sometimes is an interesting property of a function when it comes to reasoning about your program (e.g. to prove that it is correct, or by the compiler when it optimizes). int bar(int &a, int &b) { return ++a + b; } does have a side-effect, since modifying the caller's object a is an additional effect of the function beyond simply computing a return value. It would not be permitted in a pure functional language.
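For reference, the two functions above as a compilable sketch (the main function is only illustrative):

int foo(int a, int b) { return a + b; }       // no side effects: only computes a value
int bar(int &a, int &b) { return ++a + b; }   // side effect: modifies the caller's object a

int main() {
    int x = 1, y = 2;
    int r1 = foo(x, y);    // x and y are unchanged; r1 is 3
    int r2 = bar(x, y);    // x is now 2; r2 is 4
    (void)r1;
    (void)r2;
}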
The stuff in your quote about "has finished being evaluated" refers to the fact that the result of an expression (or return value of a function) can be a "temporary object", which is destroyed at the end of the full expression in which it occurs. So creating a temporary isn't a "side-effect" by that definition: other changes are.
What exactly is a 'side-effect' in C++? Is it a standard term which is well defined...
c++11 draft - 1.9.12: Accessing an object designated by a volatile glvalue (3.10), modifying an object, calling a library I/O function, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment. Evaluation of an expression (or a sub-expression) in general includes both value computations (including determining the identity of an object for glvalue evaluation and fetching a value previously assigned to an object for prvalue evaluation) and initiation of side effects. When a call to a library I/O function returns or an access to a volatile object is evaluated the side effect is considered complete, even though some external actions implied by the call (such as the I/O itself) or by the volatile access may not have completed yet.
I found one definition here, but doesn't this make each and every statement of code a side effect?
A side effect is a result of an operator, expression, statement, or function that persists even after the operator, expression, statement, or function has finished being evaluated.
Can someone please explain what the term 'side effect' formally means in C++, and what is its significance?
The significance is that, as expressions are being evaluated they can modify the program state and/or perform I/O. Expressions are allowed in myriad places in C++: variable assignments, if/else/while conditions, for loop setup/test/modify steps, function parameters etc.... A couple examples: ++x and strcat(buffer, "append this").
In a C++ program, the Standard grants the optimiser the right to generate code representing the program operations, but requires that all the operations associated with steps before a sequence point appear before any operations related to steps after the sequence point.
The reason C++ programmers tend to have to care about sequence points and side effects is that there aren't as many sequence points as you might expect. For example: given x = 1; f(++x, ++x);, you may expect a call to f(2, 3), but it's actually undefined behaviour. This behaviour is left undefined so the compiler's optimiser has more freedom to arrange operations with side effects to run in the most efficient order possible - perhaps even in parallel. It also avoids burdening compiler writers with detecting such conditions.
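A hedged sketch of that pitfall and the usual fix, under the pre-C++17 semantics described above (f is just an assumed declaration):

void f(int, int);

void demo() {
    int x = 1;
    // f(++x, ++x);   // undefined behaviour: two unsequenced modifications of x
    int a = ++x;      // sequence the increments explicitly instead
    int b = ++x;
    f(a, b);          // well-defined: calls f(2, 3)
}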
1. Is comma operator free from side effect?
Yes - a comma operator introduces a sequence point: the steps on the left must be complete before those on the right execute. There is a list of sequence points at http://en.wikipedia.org/wiki/Sequence_point - you should read it! (If you have to ask about side effects, then be careful in interpreting this answer - the "comma operator" is NOT invoked between function arguments, array initialisation elements etc. The comma operator is relatively rarely used and somewhat obscure. Do some reading if you're not sure what the comma operator really is.)
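A tiny, purely illustrative sketch of that sequencing:

int demo() {
    int i = 0;
    int j = (i = 2, i + 3);   // the assignment to i completes before i + 3 is evaluated
    return j;                 // returns 5
}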
2. Force compiler to not optimize side-effect-less statements
I assume you mean "side-effect-ful" statements. Compilers are not obliged to support any such option. What behaviour would they exhibit if they tried? - the Standard doesn't define what they should do in such situations. Sometimes a majority of programmers might share an intuitive expectation, but other times it's really arbitrary.
3. Side effects when passing objects to function in C++
When calling a function, all the arguments must have been completely evaluated - and their side effects triggered - before the function call takes place. BUT there are no restrictions on the compiler about evaluating one argument expression before any other; they can be overlapping, in parallel, etc. So, in f(expr1, expr2), some of the steps in evaluating expr2 might run before anything from expr1, but expr1 might still complete first - the order is unspecified.
1.9.6
The observable behavior of the abstract machine is its sequence of reads and writes to volatile data and calls to library I/O functions.
A side-effect is anything that affects observable behavior.
Note that there are exceptions specified by the standard, where observable behavior doesn't have to conform to that of the abstract machine - see return value optimization, temporary copy elision.