This is not currently a problem, but I am concerned about what happens if the code gets ported or we change compilers.
I have code with a block like this:
{
    MyClass myObj;
    // copy some other variables but never touch myObj
    .
    .
} // expect destructor to be called on myObj
where myObj is never used in the block code, but the constructor has a side effect and I rely on the destructor code of MyClass being executed at the close of the block. This works as expected on my current ARM compiler with some optimization turned on.
My question is: is there anything I need to do, like declaring something volatile or setting some common attribute, to prevent an optimizer from treating myObj as an unused variable or some such?
This is not a C++11 compiler. As I said, this is not currently a problem, but I did not want to leave an odd future bug for someone else.
Apart from explicitly defined cases like RVO (return value optimization), optimization is not allowed to change the observable behaviour of the program. Optimizations must follow the so-called "as-if" rule.
Insofar as the compiler you're using is even marginally compliant with the standard (I'm looking at you, Turbo C++), this is a non-issue, because the standard makes strong guarantees about construction and destruction. Those guarantees are the foundation of RAII, which is the basis of the "modern" C++ style.
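To make that concrete, here is a minimal sketch (the class name, the member, and the volatile flag are invented for illustration) of the pattern the question describes. Because the constructor and destructor access a volatile object, their effects are observable behaviour, so a conforming optimizer may not drop myObj even though the block never mentions it again:
volatile unsigned g_irq_enabled = 1;   // stands in for, e.g., a memory-mapped register

class IrqLock                          // stand-in for the question's MyClass
{
public:
    IrqLock()  { saved_ = g_irq_enabled; g_irq_enabled = 0; }  // side effect on construction
    ~IrqLock() { g_irq_enabled = saved_; }                     // side effect on destruction
private:
    unsigned saved_;
};

void copyStuff()
{
    IrqLock myObj;                     // never touched again inside the block
    // ... copy some other variables ...
}                                      // ~IrqLock() is still guaranteed to run here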
Related
Consider the following simple code that makes use of new (I am aware there is no delete[], but it does not pertain to this question):
int main()
{
    int* mem = new int[100];
    return 0;
}
Is the compiler allowed to optimize out the new call?
In my research, g++ (5.2.0) and Visual Studio 2015 do not optimize out the new call, while clang (3.0+) does. All tests have been made with full optimizations enabled (-O3 for g++ and clang, Release mode for Visual Studio).
Isn't new making a system call under the hood, making it impossible (and illegal) for a compiler to optimize that out?
EDIT: I have now excluded undefined behaviour from the program:
#include <new>

int main()
{
    int* mem = new (std::nothrow) int[100];
    return 0;
}
clang 3.0 does not optimize that out anymore, but later versions do.
EDIT2:
#include <new>

int main()
{
    int* mem = new (std::nothrow) int[1000];
    if (mem != 0)
        return 1;
    return 0;
}
clang always returns 1.
The history seems to be that clang is following the rules laid out in N3664: Clarifying Memory Allocation, which allows the compiler to optimize around memory allocations, but as Nick Lewycky points out:
Shafik pointed out that seems to violate causality but N3664 started life as N3433, and I'm pretty sure we wrote the optimization first and wrote the paper afterwards anyway.
So clang implemented the optimization which later on became a proposal that was implemented as part of C++14.
The base question is whether this is a valid optimization prior to N3664; that is a tough question. We would have to go to the as-if rule covered in the draft C++ standard, section 1.9 Program execution, which says (emphasis mine):
The semantic descriptions in this International Standard define a
parameterized nondeterministic abstract machine. This International
Standard places no requirement on the structure of conforming
implementations. In particular, they need not copy or emulate the
structure of the abstract machine. Rather, conforming implementations
are required to emulate (only) the observable behavior of the abstract
machine as explained below.5
where note 5 says:
This provision is sometimes called the “as-if” rule, because an
implementation is free to disregard any requirement of this
International Standard as long as the result is as if the requirement
had been obeyed, as far as can be determined from the observable
behavior of the program. For instance, an actual implementation need
not evaluate part of an expression if it can deduce that its value is
not used and that no side effects affecting the observable behavior of
the program are produced.
Since new could throw an exception, which would be observable behavior because it would alter the return value of the program, that would seem to argue against this being allowed by the as-if rule.
Although, it could be argued that when to throw an exception is an implementation detail, and therefore clang could decide that even in this scenario it would not cause an exception, so eliding the new call would not violate the as-if rule.
It also seems valid under the as-if rule to optimize away the call to the non-throwing version.
But we could have a replacement global operator new in a different translation unit, which could cause this to affect observable behavior, so the compiler would need some way of proving this was not the case; otherwise it would not be able to perform this optimization without violating the as-if rule. Previous versions of clang did indeed optimize in this case, as this godbolt example (provided via Casey here) shows, taking this code:
#include <cstddef>

extern void* operator new(std::size_t n);

template<typename T>
T* create() { return new T(); }

int main() {
    auto result = 0;
    for (auto i = 0; i < 1000000; ++i) {
        result += (create<int>() != nullptr);
    }
    return result;
}
and optimizing it to this:
main: # #main
movl $1000000, %eax # imm = 0xF4240
ret
This indeed seems way too aggressive but later versions do not seem to do this.
This is allowed by N3664.
An implementation is allowed to omit a call to a replaceable global allocation function (18.6.1.1, 18.6.1.2). When it does so, the storage is instead provided by the implementation or provided by extending the allocation of another new-expression.
This proposal is part of the C++14 standard, so in C++14 the compiler is allowed to optimize out a new expression (even if it might throw).
If you take a look at the Clang implementation status it clearly states that they do implement N3664.
If you observe this behavior while compiling in C++11 or C++03 mode, you should file a bug.
Notice that before C++14 dynamic memory allocations were part of the observable behaviour of the program (although I cannot find a reference for that at the moment), so a conforming implementation was not allowed to apply the as-if rule in this case.
Bear in mind that the C++ standard says what a correct program should do, not how it should do it. It cannot say the latter at all, since new architectures can and do arise after the standard is written, and the standard has to be of use to them.
new does not have to be a system call under the hood. There are computers usable without operating systems and without a concept of system call.
Hence, as long as the end behaviour does not change, the compiler can optimize anything and everything away, including that new.
There is one caveat.
A replacement global operator new could have been defined in a different translation unit
In that case the side effects of new could be such that they can't be optimized away. But if the compiler can guarantee that operator new has no side effects, as would be the case if the posted code is the whole program, then the optimization is valid.
That new can throw std::bad_alloc is not a requirement. In that case, when the new is optimized out, the compiler can guarantee that no exception will be thrown and no side effect will happen.
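To illustrate that caveat with a hedged sketch (the file names and the printf message are invented for the example): a replacement global allocation function defined in another translation unit can carry an observable side effect, and then eliding the new-expression in main would change observable behaviour, so the compiler could not do it unless it could prove no such replacement exists:
// other_tu.cpp -- hypothetical replacement, visible to main.cpp only at link time
#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new[](std::size_t n)        // replaces the global array form
{
    std::printf("allocating %lu bytes\n", static_cast<unsigned long>(n)); // observable
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc();
}

// main.cpp (unchanged from the question):
//   int main() { int* mem = new int[100]; return 0; }
// With this replacement linked in, removing the allocation would also remove
// the printf call, which the as-if rule does not allow.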
It is perfectly allowable (but not required) for a compiler to optimize out the allocations in your original example, and even more so in the EDIT1 example per §1.9 of the standard, which is usually referred to as the as-if rule:
Conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below:
[3 pages of conditions]
A more human-readable representation is available at cppreference.com.
The relevant points are:
You have no volatiles, so 1) and 2) do not apply.
You do not output/write any data or prompt the user, so 3) and 4) do not apply. But even if you did, they would clearly be satisfied in EDIT1 (arguably also in the original example, although from a purely theoretical point of view it would be illegal, since the program flow and output -- theoretically -- differ; but see two paragraphs below).
An exception, even an uncaught one, is well-defined (not undefined!) behavior. However, strictly speaking, in case that new throws (not going to happen, see also next paragraph), the observable behavior would be different, both by the program's exit code and by any output that might follow later in the program.
Now, in the particular case of a single small allocation, you can give the compiler the "benefit of the doubt" that it can guarantee that the allocation will not fail.
Even on a system under very heavy memory pressure, it is not possible to even start a process when you have less than the minimum allocation granularity available, and the heap will have been set up prior to calling main, too. So, if this allocation were to fail, the program would never start or would already have met an ungraceful end before main is even called.
Insofar as the compiler knows this, even though the allocation could in theory throw, it is legal to optimize even the original example, since the compiler can practically guarantee that this will not happen.
<slightly undecided>
On the other hand, it is not allowable (and as you can observe, a compiler bug) to optimize out the allocation in your EDIT2 example. The value is consumed to produce an externally observable effect (the return code).
Note that if you replace new (std::nothrow) int[1000] with new (std::nothrow) int[1024*1024*1024*1024ll] (that's a 4TiB allocation!), which is -- on present-day computers -- guaranteed to fail, it still optimizes out the call. In other words, it returns 1 although you wrote code that must return 0.
@Yakk brought up a good argument against this: as long as the memory is never touched, a pointer can be returned, and no actual RAM is needed. In that light, it would even be legitimate to optimize out the allocation in EDIT2. I am unsure who is right and who is wrong here.
Doing a 4TiB allocation is pretty much guaranteed to fail on a machine that doesn't have at least something like a two-digit gigabyte amount of RAM simply because the OS needs to create page tables. Now of course, the C++ standard does not care about page tables or about what the OS is doing to provide memory, that is true.
But on the other hand, the assumption "this will work if memory is not touched" relies on exactly such a detail and on something that the OS provides. The assumption that RAM which is not touched is not actually needed is only true because the OS provides virtual memory. And that implies that the OS needs to create page tables (I can pretend that I don't know about it, but that doesn't change the fact that I rely on it anyway).
Therefore, I think it is not 100% correct to first assume one and then say "but we don't care about the other".
So, yes, the compiler can assume that a 4TiB allocation is in general perfectly possible as long as memory is not touched, and it can assume that it is generally possible to succeed. It might even assume that it's likely to succeed (even when it's not). But I think that in any case, you are never allowed to assume that something must work when there is a possibility of a failure. And not only is there a possibility of failure, in that example, failure is even the more likely possibility.
</slightly undecided>
The worst that can happen in your snippet is that new throws std::bad_alloc, which is unhandled. What happens then is implementation-defined.
With the best case being a no-op and the worst case left to the implementation, the compiler is allowed to factor both into non-existence. Now, if you actually try to catch the possible exception:
int main() try {
    int* mem = new int[100];
    return 0;
} catch(...) {
    return 1;
}
... then the call to operator new is kept.
If there is C or C++ code like this:
if (func())
    ;
can the compiler optimise out the call to func() if it cannot be sure whether the function has any side effects?
Origin of my question: I sometimes call assert macros in a way like this:
if (func())
    assert(0);
when I want to make sure that func() is always called and that the assertion fails in debug mode if func() returns the wrong value. But recently I was warned that my code doesn't guarantee that the function is always called.
If the compiler cannot prove that optimizing away the call to func does not change the observable behavior of your program, it is not allowed to make the optimization.
So unless the compiler can prove that not calling the function has no observable effect, the call will take place. Note that compilers can be smart sometimes, so if you want to be sure, make sure the function actually does have a side effect. (On the other hand, if it doesn't, you need not care.)
This is known as the as-if rule.
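If the intent is simply to guarantee the call while keeping the debug-time check, one common pattern (a sketch, not the only way to do it) is to capture the result and assert on it instead of calling the function inside the if:
#include <cassert>

bool func();                 // defined in another translation unit; may have side effects

void check()
{
    bool failed = func();    // the call is unconditional and outside the assert
    assert(!failed);         // verified only in debug builds
    (void)failed;            // silences "unused variable" warnings in release builds
}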
(This is a C++ answer. Please post a question for one programming language only, not two.)
No, a function that may have side effects cannot be optimised out, because then you may be "optimising out" side effects. And since by "side effects" we really mean "the things that your program does", a compiler permitted to do such a thing would not be particularly useful. That's why the standard's "as-if" rule prevents the sort of optimisation you're talking about.
I want to initialize some static data on the main thread.
int32_t GetFoo(ptime t)
{
    static HugeBarData data;
    return data.Baz(t);
}
int main()
{
    GetFoo(ptime()); // Avoid data race on static field.
                     // But will it be optimized away as unnecessary?

    // Spawn threads. Call 'GetFoo' on the threads.
}
If the compiler may decide to remove it, how can I force it to stay there?
The only side-effecting functions that a C++ compiler can optimize away are unnecessary constructor calls, particularly copy constructors.
Cf Under what conditions does C++ optimize out constructor calls?
Compilers must optimize according to the "as-if" rule. That is, after any optimization, the program must still behave (in the logical sense) as if the code were not optimized.
If there are side-effects to a function, any optimization must preserve the side effects. However, if the compiler can determine that the result of the side-effects don't affect the rest of the program, it can optimize away even the side-effects. Compilers are very conservative about this area. If your compiler optimizes away side-effects of the HugeBarData constructor or Baz call, which are required elsewhere in the program, this is a bug in the compiler.
There are some exceptions where the compiler can make optimizations which alter the behaviour of the program from the non-optimized case, usually involving copies. I don't think any of those exceptions apply here.
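If you want to be defensive regardless, here is a hedged sketch (the volatile sink and the default-constructed ptime are just one way to do it, assuming ptime is default-constructible): give the warm-up call a consumer with observable behaviour, so the statement cannot be treated as dead code no matter how much of HugeBarData the compiler can see:
volatile int32_t g_warmup_sink;          // writes to a volatile are observable behaviour

int main()
{
    g_warmup_sink = GetFoo(ptime());     // forces the call (and the static construction) to stay

    // Spawn threads. Call 'GetFoo' on the threads.
}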
Suppose I have the following:
int main() {
    SomeClass();
    return 0;
}
Without optimization, the SomeClass() constructor will be called, and then its destructor will be called, and the object will be no more.
However, according to an IRC channel that constructor/destructor call may be optimized away if the compiler thinks there's no side effect to the SomeClass constructors/destructors.
I suppose the obvious way to go about this is not to rely on a constructor/destructor in the first place (e.g. use a free function or a static method instead), but is there a way to ensure that the constructor and destructor are called?
However, according to an IRC channel that constructor/destructor call may be optimized away if the compiler thinks there's no side effect to the SomeClass constructors/destructors.
The bolded part is wrong. That should be: knows there is no observable behaviour
E.g. from § 1.9 of the latest standard (there are more relevant quotes):
A conforming implementation executing a well-formed program shall produce the same observable behavior
as one of the possible executions of the corresponding instance of the abstract machine with the same program
and the same input. However, if any such execution contains an undefined operation, this International
Standard places no requirement on the implementation executing that program with that input (not even
with regard to operations preceding the first undefined operation).
As a matter of fact, this whole mechanism underpins the single most ubiquitous C++ language idiom: Resource Acquisition Is Initialization.
Backgrounder
Having the compiler optimize away trivial constructor calls is extremely helpful. It is what allows iterators to compile down to exactly the same code, performance-wise, as using raw pointers/indexers.
It is also what allows a function object to compile down to the exact same code as inlining the function body.
It is what makes C++11 lambdas perfectly optimal for simple use cases:
factorial = std::accumulate(begin, end, 1, [](int a, int b) { return a * b; });
The lambda compiles down to a functor object similar to
struct lambda_1
{
    int operator()(int a, int b) const
    { return a * b; }
};
The compiler sees that the constructor/destructor can be elided and the function body gets inlined. The end result is optimal.1
More (un)observable behaviour
The standard contains a very entertaining example to the contrary, to spark your imagination.
§ 20.7.2.2.3
[ Note: The use count updates caused by the temporary object construction and destruction are not
observable side effects, so the implementation may meet the effects (and the implied guarantees) via
different means, without creating a temporary. In particular, in the example:
shared_ptr<int> p(new int);
shared_ptr<void> q(p);
p = p;
q = p;
both assignments may be no-ops. —end note ]
IOW: Don't underestimate the power of optimizing compilers. This in no way means that language guarantees are to be thrown out of the window!
1 Though there could be faster algorithms to get a factorial, depending on the problem domain :)
I'm sure that if 'SomeClass::SomeClass()' is not implemented as 'inline' (i.e. its definition is not visible in the translation unit), the compiler has no way of knowing that the constructor/destructor has no side effects, and it will always call the constructor/destructor.
If the compiler is optimizing away a visible effect of the constructor/destructor call, it is buggy. If it has no visible effect, then you shouldn't notice it anyway.
However, let's assume that your constructor or destructor does somehow have a visible effect (so constructing and then immediately destroying the object isn't effectively a no-op), yet in such a way that the compiler could legitimately think it doesn't (not that I can think of such a situation, but then, it might just be a lack of imagination on my side). Then any of the following strategies should work:
Make sure that the compiler cannot see the definition of the constructor and/or destructor. If the compiler doesn't know what the constructor/destructor does, it cannot assume it does not have an effect. Note, however, that this also disables inlining. If your compiler does not do cross-module optimization, just putting the constructor/destructor into a different file should suffice.
Make sure that your constructor/destructor actually does have observable behaviour, e.g. through use of volatile variables (every read or write of a volatile variable is considered observable behaviour in C++).
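A minimal sketch of that second strategy (the volatile global is the only addition to the code from the question):
volatile int g_side_effect;              // file-scope volatile: every access is observable

struct SomeClass
{
    SomeClass()  { g_side_effect = 1; }  // observable behaviour in the constructor
    ~SomeClass() { g_side_effect = 0; }  // observable behaviour in the destructor
};

int main()
{
    SomeClass();                         // ctor and dtor of the temporary must both run
    return 0;
}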
However, let me stress again that it's very unlikely that you have to do anything, unless your compiler is horribly buggy (in which case I'd strongly advise you to change the compiler :-)).