Constant folding/propagation optimization with memory barriers - C++

I have been reading for a while in order to better understand what's going on in multithreaded programming on a modern (multicore) CPU. However, while I was reading this, I noticed the code below in the "Explicit Compiler Barriers" section, which does not use volatile for the IsPublished global.
#define COMPILER_BARRIER() asm volatile("" ::: "memory")

int Value;
int IsPublished = 0;

void sendValue(int x)
{
    Value = x;
    COMPILER_BARRIER(); // prevent reordering of stores
    IsPublished = 1;
}

int tryRecvValue()
{
    if (IsPublished)
    {
        COMPILER_BARRIER(); // prevent reordering of loads
        return Value;
    }
    return -1; // or some other value to mean "not yet received"
}
The question is, is it safe to omit volatile for IsPublished here? Many people mention that the "volatile" keyword has nothing much to do with multithreaded programming, and I agree with them. However, during compiler optimizations "constant folding/propagation" can be applied, and as the wiki page shows, it is possible for if (IsPublished) to be turned into if (false) if the compiler does not know much about who can change the value of IsPublished. Did I miss or misunderstand something here?
Memory barriers can prevent compiler reordering and out-of-order execution on the CPU, but, as I said in the previous paragraph, do I still need volatile in order to avoid "constant folding/propagation", which is a dangerous optimization especially when using globals as flags in lock-free code?
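To make my worry concrete, here is a small polling loop (my own example, not from the article) that I believe the optimizer is allowed to break when IsPublished is a plain int:

extern int IsPublished; // plain global: no volatile, no atomic

void waitForPublication()
{
    // The compiler sees no write to IsPublished inside this loop, so it
    // may hoist the load out of it, effectively compiling it as:
    //     if (!IsPublished) for (;;) { /* spin forever */ }
    while (!IsPublished) { }
}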

If tryRecvValue() is called once, it is safe to omit volatile for IsPublished. The same is true when, between calls to tryRecvValue(), there is a function call for which the compiler cannot prove that it leaves the false value of IsPublished unchanged.
// Example 1 (safe)
int v = tryRecvValue();
if (v == -1) exit(1);

// Example 2 (unsafe): tryRecvValue() may be inlined, and
// 'IsPublished' may not be re-read between iterations.
int v;
while (true)
{
    v = tryRecvValue();
    if (v != -1) break;
}

// Example 3 (safe)
int v;
while (true)
{
    v = tryRecvValue();
    if (v != -1) break;
    some_extern_call(); // possibly changes 'IsPublished'
}
Constant propagation can be applied only when the compiler can prove the value of the variable. Because IsPublished is declared non-constant, its value can be proven only if:
1. The variable is assigned a given value, or a read of the variable is followed by a branch executed only when the variable has that value.
2. The variable is read (again) in the same thread of the program.
3. Between 1 and 2, the variable is not changed within that thread.
Unless you call tryRecvValue() in some sort of .init function, the compiler will never see the initialization of IsPublished in the same thread as its reading. So proving the false value of this variable from its initialization is not possible.
Proving the false value of IsPublished from the false (empty) branch in tryRecvValue() is possible, though; see Example 2 in the code above.
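For completeness (not part of the original article): since C++11, the usual way to sidestep the whole question is std::atomic, which both forbids the hoisting/propagation hazard above and provides the inter-thread ordering the compiler barrier was approximating. A minimal sketch:

#include <atomic>

int Value;
std::atomic<int> IsPublished(0);

void sendValue(int x)
{
    Value = x;
    // release store: the write to Value cannot be reordered after this
    IsPublished.store(1, std::memory_order_release);
}

int tryRecvValue()
{
    // acquire load: re-read on every call; the read of Value below
    // cannot be reordered before it
    if (IsPublished.load(std::memory_order_acquire))
        return Value;
    return -1; // not yet received
}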

Related

Difference between Interlocked, InterlockedAcquire, and InterlockedRelease if single thread reordering is impossible

In all likelihood, a lockless implementation is already overkill for the purposes of my application, but I wanted to look into memory barriers and lockless-ness anyways in case I ever actually need to use these concepts in the future.
From what I can tell:
an "InterlockedAcquire" function performs an atomic operation while preventing the compiler from moving code statements after the InterlockedAcquire to before the InterlockedAcquire.
an "InterlockedRelease" function performs an atomic operation while preventing the compiler from moving code statements before the InterlockedRelease to after the InterlockedRelease.
a vanilla "Interlocked" function performs an atomic operation while preventing the compiler from moving code statements in either direction across the Interlocked call.
My question is, if a function is structured such that the compiler can't reorder any of the code anyways because doing so would affect single-threaded behavior, is there a difference between any of the variants of an Interlocked function, or are they all effectively the same? Is the only difference between them how they interact with code reordering?
For a more concrete example, here's my current application - the produce() function as part of what will eventually be a multiple producer, single consumer queue built using a circular buffer:
template <typename T>
class Queue {
private:
    LONG64 headIndex;
    LONG64 tailIndex;
    T* array[MAXQUEUESIZE];
public:
    Queue() {
        headIndex = 0;
        tailIndex = 0;
        memset(array, 0, MAXQUEUESIZE * sizeof(void*));
    }
    ~Queue() {
    }
    bool produce(T value) {
        // 1) prevents concurrent calls to produce() from causing corruption:
        LONG64 indexRetVal;
        LONG64 reservedIndex;
        do {
            reservedIndex = tailIndex;
            indexRetVal = InterlockedCompareExchange64(&tailIndex, (reservedIndex + 1) % MAXQUEUESIZE, reservedIndex);
        } while (indexRetVal != reservedIndex);
        // 2) allocates the node:
        T* newValPtr = (T*) malloc(sizeof(T));
        if (newValPtr == NULL) {
            OutputDebugString("Queue: malloc returned null");
            return false;
        }
        *newValPtr = value;
        // 3) prevents a concurrent call to consume from causing corruption by atomically replacing the old pointer:
        T* valPtrRetVal = (T*) InterlockedCompareExchangePointer((PVOID volatile*)(array + reservedIndex), newValPtr, NULL);
        // if the previous value wasn't null, then our circular buffer overflowed:
        if (valPtrRetVal != NULL) {
            OutputDebugString("Queue: circular buffer overflowed");
            free(newValPtr); // as pointed out by RbMm
            return false;
        }
        // otherwise, everything worked fine
        return true;
    }
};
As I understand it, 3) will occur after 1) and 2) regardless of what I do anyways, but I should change 1) to an InterlockedRelease because I don't care whether it occurs before or after 2) and I should let the compiler decide.
My question is, if a function is structured such that the compiler can't reorder any of the code anyways because doing so would affect single-threaded behavior, is there a difference between any of the variants of an Interlocked function, or are they all effectively the same? Is the only difference between them how they interact with code reordering?
You may be confusing C++ statements with instructions. Your question isn't CPU specific, so you have to pretend you have no idea what the CPU instructions look like.
Consider this code:
if (a == 2)
{
    b = 5;
}
Now, here's an example of a re-ordering of this code that doesn't affect a single thread:
int c = b;
b = 5;
if (a != 2)
    b = c;
This performs the same operations but in a different order. It has no effect on single-threaded code. But, of course, if another thread was accessing b, it could see a value of 5 from this code even if a was never 2.
Thus it could also see a value of 5 from the original code even if a is never 2!
Why? Because the two bits of code perform the same from the point of view of a single thread. And unless you use operations with guaranteed threading semantics, that's all the compiler, CPU, caches, and other platform components need to preserve.
So your belief that reordering any of the code would affect single-threaded behavior is most likely incorrect. There are lots of ways to reorder and optimize code that don't affect single-threaded behavior.
There is a document on MSDN that explains the difference: Acquire and Release Semantics.
For the sample:
a++;
b++;
c++;
If we use acquire semantics to increment a, other processors would always see the increment of a before the increments of b and c;
If we use release semantics to increment c, other processors would always see the increments of a and b before the increment of c;
The plain InterlockedXxx routines have both acquire and release semantics by default.
More specifically, for 4 values:
a++;
b++;
c++;
d++;
If we use acquire semantics to increment b, other processors would always see the increment of b before the increments of c and d;
The order may be a->b->c,d or b->a,c,d.
If we use release semantics to increment c, other processors would always see the increments of a and b before the increment of c;
The order may be a,b->c->d or a,b,d->c.
To quote from this answer by @antiduh:
Acquire says "only worry about stuff after me". Release says "only
worry about stuff before me". Combining those both is a full memory
barrier.
All three versions prevent the compiler from moving code across the function call, but the compiler is not the only place that reordering takes place.
Modern CPUs have "out-of-order execution" and even "speculative execution". Acquire and release semantics cause the compiler to emit instructions with flags or prefixes that control reordering within the CPU.
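As a rough illustration using standard C++11 atomics rather than the Windows Interlocked intrinsics (a sketch, not equivalent in every detail), the a/b/c example could be written like this:

#include <atomic>

std::atomic<int> a(0), b(0), c(0);

void increments()
{
    // acquire on 'a': the increments of b and c below
    // cannot be reordered before it
    a.fetch_add(1, std::memory_order_acquire);
    b.fetch_add(1, std::memory_order_relaxed);
    // release on 'c': the increments of a and b above
    // cannot be reordered after it
    c.fetch_add(1, std::memory_order_release);
}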

Can (a==1 && a==2 && a==3) ever evaluate to true in C or C++?

We know it can in Java and JavaScript.
But the question is, can the condition below ever evaluate to true in C or C++?
if (a == 1 && a == 2 && a == 3)
    printf("SUCCESS");
EDIT
If a was an integer.
Depends on your definition of "a is an integer":
int a__() { static int r; return ++r; }
#define a a__() // a is now an expression of type `int`

int main()
{
    return a == 1 && a == 2 && a == 3; // returns 1
}
Of course:
int f(int b) { return b==1&&b==2&&b==3; }
will always return 0; and optimizers will generally replace the check with exactly that.
If we put macro magic aside, I can see one way that could positively answer this question, but it will require a bit more than just standard C. Let's assume we have an extension allowing the use of the __attribute__((at(ADDRESS))) attribute, which places a variable at some specific memory location (available in some ARM compilers, for example ARM GCC). Let's assume we have a hardware counter register at the address ADDRESS which increments on each read. Then we could do something like this:
volatile int a __attribute__((at(ADDRESS)));
The volatile forces the compiler to generate a register read each time the comparison is performed, so the counter will increment 3 times. If the initial value of the counter is 1, the statement will evaluate to true.
P.S. If you don't like the at attribute, the same effect can be achieved with a linker script, by placing a into a specific memory section.
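For instance, the linker-script variant could look like this (a sketch assuming a GNU toolchain; the section name and address are made up):

// put 'a' in its own section (GCC/Clang section attribute)...
volatile int a __attribute__((section(".hwcounter")));
// ...and in the linker script, map that section at the counter's address:
//   .hwcounter 0x40001000 : { KEEP(*(.hwcounter)) }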
If a is of a primitive type (i.e. all == and && operators are built in), you are in defined behavior, there is no way for another thread to modify a in the middle of execution (this is technically a case of undefined behavior - see comments - but I left it here anyway because it's the example given in the Java question), and there is no preprocessor magic involved (see the chosen answer), then I don't believe there is any way for this to evaluate to true. However, as you can see from that list of conditions, there are many scenarios in which the expression could evaluate to true, depending on the types used and the context of the code.
In C, yes it can. If a is uninitialised then (even if there is no UB, as discussed here), its value is indeterminate, reading it gives indeterminate results, and comparing it to other numbers therefore also gives indeterminate results.
As a direct consequence, a could compare true with 1 in one moment, then compare true with 2 instead the next moment. It can't hold both those values simultaneously, but it doesn't matter, because its value is indeterminate.
In practice I'd be surprised to see the behaviour you describe, though, because there's no real reason for the actual storage to change in memory in the time between the two comparisons.
In C++, sort of. The above is still true there, but reading an indeterminate value is always an undefined operation in C++ so really all bets are off.
Optimisations are allowed to aggressively bastardise your code, and when you do undefined things this can quite easily result in all manner of chaos.
So I'd be less surprised to see this result in C++ than in C but, if I did, it would be an observation without purpose or meaning because a program with undefined behaviour should be entirely ignored anyway.
And, naturally, in both languages there are "tricks" you can do, like #define a (x++), though these do not seem to be in the spirit of your question.
The following program randomly prints seen: yes or seen: no, depending on whether at some point in the execution of the main thread (a == 0 && a == 1 && a == 2) evaluated to true.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

_Atomic int a = 0;
_Atomic int relse = 0;

void *writer(void *arg)
{
    ++relse;
    while (relse != 2);

    for (int i = 100; i > 0; --i)
    {
        a = 0;
        a = 1;
        a = 2;
    }

    return NULL;
}

int main(void)
{
    int seen = 0;
    pthread_t pt;

    if (pthread_create(&pt, NULL, writer, NULL))
        exit(EXIT_FAILURE);

    ++relse;
    while (relse != 2);

    for (int i = 100; i > 0; --i)
        seen |= (a == 0 && a == 1 && a == 2);

    printf("seen: %s\n", seen ? "yes" : "no");
    pthread_join(pt, NULL);
    return 0;
}
As far as I am aware, this does not contain undefined behavior at any point, and a is of an integer type, as required by the question.
Obviously this is a race condition, and so whether seen: yes or seen: no is printed depends on the platform the program is run on. On Linux, x86_64, gcc 8.2.1 both answers appear regularly. If it doesn't work, try increasing the loop counters.

C/C++ default value/value assigned to variable is never used

I like to always initialise local variables, e.g.
int32_t result = 0;
I thought this was good programming style because "result" can never be uninitialised, regardless of whether the following if-constructs set it or not.
But now I am trying out a static code checker tool (C_STAT for IAR Embedded Workbench), and it complains that in the function below, MISRA-C++ rule 2008-01-06 ("shall not contain instances of non-volatile variables being given values that are never subsequently used"), MISRA C:2012 rule 2.2c ("no dead code"), and CWE 563 ("unused variable") are violated.
// gets signal1 - signal2 (checks range of value)
int16_t getSignalDifferenceFromFloat(float signal1, int16_t signal2)
{
    int32_t result = 0; // <-- this assignment makes the violation
                        // ... but I feel better with it
    if (signal1 > 65535.0)
    {
        // because result cannot be smaller than the max value of TSignal
        result = 32767;
    }
    else if (signal1 < -65535.0) // <-- here an else was missing
    {
        // because result cannot be larger than the min value of TSignal
        result = -32768;
    }
    else
    {
        result = (int32_t)signal1 - (int32_t)signal2;
        if (result < -32768)
        {
            result = -32768;
        }
        else if (result > 32767)
        {
            result = 32767;
        }
    }
    return (int16_t) result;
}
Original question: What do you think about it?
New Questions:
Are there good coding standards which require immediate initialisation of all declared local variables?
Is the code checker too pedantic (some compilers do not complain at this place, but would complain if the variable stayed completely unused)? I have been doing this for years, but I cannot remember where I had seen it recommended.
Update: In the meantime (Sept 2021) I use pclint with default settings. It also checks MISRA C++ rule 0-1-6 (variable not subsequently used) and others, but does not complain about this example. If I change the code and do not return this variable, the warning comes as expected. I think it is correct that pclint does not give a warning here.
I agree with MISRA.
The unnecessary initialisation of a variable can allow for sloppy code as it defeats tools that check for uninitialised variables.
In your particular case you could localise result to the final else case and return early in the other cases. But that's not to everyone's taste either.
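For illustration, that refactoring could look something like this (a sketch, keeping the question's clamping logic):

#include <cstdint>

int16_t getSignalDifferenceFromFloat(float signal1, int16_t signal2)
{
    if (signal1 > 65535.0)
        return 32767;   // clamp to the max of int16_t
    if (signal1 < -65535.0)
        return -32768;  // clamp to the min of int16_t

    // 'result' now lives only in the branch that actually computes it
    int32_t result = (int32_t)signal1 - (int32_t)signal2;
    if (result < -32768)
        result = -32768;
    else if (result > 32767)
        result = 32767;
    return (int16_t)result;
}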
Static analysers apply the rules you define - some of these rules will conflict, because you are expected to set them to match your local coding standard. I imagine there is also a rule, which may or may not be enabled, that requires initialisation of all variables; you cannot satisfy both.
Pick a non-conflicting rule set that matches your preferred standard. Which you choose is a matter of opinion so not really a valid SO question.
Most good static analysers will provide you with data flow analysis - which warn you of possible paths leaving uninitialised variables.
Taking your example, say you (accidentally) missed one of the later assignments; this would not be detected by static analysis:
// gets signal1 - signal2 (checks range of value)
int16_t getSignalDifferenceFromFloat(float signal1, int16_t signal2)
{
    int32_t result = 0; // <-- this assignment makes the violation
                        // ... but I feel better with it
    if (signal1 > 65535.0)
    {
        // commented out this line
        // No data-flow anomaly detected
        // result = 32767;
    }
    // Snip rest of code
    return (int16_t) result;
}
The MISRA Guidelines are not perfect, by any stretch, but we try and prevent the obvious... and uninitialised variables are quite easy to detect - as long as you don't try and fudge the issue.
It's a bit like sticking a cast in, just to shut up the Static Analysis tool......
[See profile for disclaimers]

Optimization barrier for microbenchmarks in MSVC: tell the optimizer you clobber memory?

Chandler Carruth introduced two functions in his CppCon2015 talk that can be used to do some fine-grained inhibition of the optimizer. They are useful to write micro-benchmarks that the optimizer won't simply nuke into meaninglessness.
void clobber() {
    asm volatile("" : : : "memory");
}

void escape(void* p) {
    asm volatile("" : : "g"(p) : "memory");
}
These use inline assembly statements to change the assumptions of the optimizer.
The assembly statement in clobber states that the assembly code in it can read and write anywhere in memory. The actual assembly code is empty, but the optimizer won't look into it because it's asm volatile. It believes it when we tell it the code might read and write everywhere in memory. This effectively prevents the optimizer from reordering or discarding memory writes prior to the call to clobber, and forces memory reads after the call to clobber†.
The one in escape additionally makes the pointer p visible to the assembly block. Again, because the optimizer won't look into the actual inline assembly, that code can be empty, and the optimizer will still assume that the block uses the address pointed to by p. This effectively forces whatever p points to to be in memory and not in a register, because the assembly block might perform a read from that address.
(This is important because the clobber function won't force reads nor writes for anything that the compilers decides to put in a register, since the assembly statement in clobber doesn't state that anything in particular must be visible to the assembly.)
All of this happens without any additional code being generated directly by these "barriers". They are purely compile-time artifacts.
These use language extensions supported in GCC and in Clang, though. Is there a way to have similar behaviour when using MSVC?
† To understand why the optimizer has to think this way, imagine if the assembly block were a loop adding 1 to every byte in memory.
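For context, a typical use of these two functions (assuming the definitions above) looks roughly like the benchmark from the talk; the vector workload here is just an illustrative example:

#include <vector>

void benchmark_vector_push_back()
{
    for (int i = 0; i < 1000000; ++i) {
        std::vector<int> v;
        v.reserve(1);
        escape(v.data());   // the storage must "exist" in memory
        v.push_back(42);
        clobber();          // the write must be treated as observable
    }
}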
Given your approximation of escape(), you should also be fine with the following approximation of clobber() (note that this is a draft idea, deferring some of the solution to the implementation of the function nextLocationToClobber()):
// always returns false, but in an undeducible way
bool isClobberingEnabled();

// The challenge is to implement this function in a way
// that will make even the smartest optimizer believe that
// it can deliver a valid pointer pointing anywhere in the heap,
// the stack or static memory.
volatile char* nextLocationToClobber();

const bool clobberingIsEnabled = isClobberingEnabled();
volatile char* clobberingPtr;

inline void clobber() {
    if ( clobberingIsEnabled ) {
        // This will never be executed, but the compiler
        // cannot know about it.
        clobberingPtr = nextLocationToClobber();
        *clobberingPtr = *clobberingPtr;
    }
}
UPDATE
Question: How would you ensure that isClobberingEnabled returns false "in an undeducible way"? Certainly it would be trivial to place the definition in another translation unit, but the minute you enable LTCG, that strategy is defeated. What did you have in mind?
Answer: We can take advantage of a hard-to-prove property from number theory, for example, Fermat's Last Theorem:
#include <cstdint>  // std::uint32_t, std::uintptr_t
#include <cstdlib>  // std::rand
#include <ctime>    // std::clock

bool undeducible_false() {
    // It took mathematicians more than 3 centuries to prove Fermat's
    // Last Theorem in its most general form. That knowledge has hardly
    // been put into compilers (nor will the compiler try hard enough
    // to check all one million possible combinations below).
    // Caveat: avoid integer overflow (Fermat's theorem
    // doesn't hold for modular arithmetic).
    std::uint32_t a = std::clock() % 100 + 1;
    std::uint32_t b = std::rand() % 100 + 1;
    std::uint32_t c = reinterpret_cast<std::uintptr_t>(&a) % 100 + 1;
    return a*a*a + b*b*b == c*c*c;
}
I have used the following in place of escape.
#ifdef _MSC_VER
#pragma optimize("", off)
template <typename T>
inline void escape(T* p) {
    *reinterpret_cast<char volatile*>(p) =
        *reinterpret_cast<char const volatile*>(p); // thanks, @milleniumbug
}
#pragma optimize("", on)
#endif
It's not perfect but it's close enough, I think.
Sadly, I don't have a way to emulate clobber.
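One weaker approximation that is sometimes suggested is MSVC's compiler-only _ReadWriteBarrier intrinsic. It stops the compiler from moving or eliding memory accesses across the call and emits no instructions, though its exact guarantees differ from the GCC/Clang "memory" clobber, and it is deprecated in recent MSVC in favor of <atomic>. A sketch:

#ifdef _MSC_VER
#include <intrin.h>

inline void clobber() {
    // Compiler-only barrier: the optimizer may not reorder or
    // remove memory accesses across this point. No machine code
    // is generated; this is NOT a hardware memory barrier.
    _ReadWriteBarrier();
}
#endif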

Optimization and multithreading in B.Stroustrup's new book

Please refer to section 41.2.2 Instruction Reordering of "TCPL" 4th edition by B.Stroustrup, which I transcribe below:
To gain performance, compilers, optimizers, and hardware reorder
instructions. Consider:
// thread 1:
int x;
bool x_init;

void init()
{
    x = initialize(); // no use of x_init in initialize()
    x_init = true;
    // ...
}
For this piece of code there is no stated reason to assign to x before
assigning to x_init. The optimizer (or the hardware instruction
scheduler) may decide to speed up the program by executing x_init =
true first. We probably meant for x_init to indicate whether x had
been initialized by initializer() or not. However, we did not say
that, so the hardware, the compiler, and the optimizer do not know
that.
Add another thread to the program:
// thread 2:
extern int x;
extern bool x_init;

void f2()
{
    int y;
    while (!x_init) // if necessary, wait for initialization to complete
        this_thread::sleep_for(milliseconds{10});
    y = x;
    // ...
}
Now we have a problem: thread 2 may never wait and thus will assign an
uninitialized x to y. Even if thread 1 did not set x_init and x in
‘‘the wrong order,’’ we still may have a problem. In thread 2, there
are no assignments to x_init, so an optimizer may decide to lift the
evaluation of !x_init out of the loop, so that thread 2 either never
sleeps or sleeps forever.
Does the Standard allow the reordering in thread 1? (A quote from the Standard would be welcome.) Why would that speed up the program?
Both answers in this discussion on SO seem to indicate that no such optimization occurs when there are global variables in the code, such as x_init above.
What does the author mean by "to lift the evaluation of !x_init out of the loop"? Is it something like this?
if (!x_init)
    while (true)
        this_thread::sleep_for(milliseconds{10});
y = x;
This is not so much an issue of the C++ compiler/standard as of modern CPUs. Have a look here. The compiler isn't going to emit memory barrier instructions between the assignments of x and x_init unless you tell it to.
For what it is worth, prior to C++11 the standard had no notion of multithreading in its abstract machine model. Things are a bit nicer these days.
The C++11 standard does not "allow" or "prevent" reordering. It specifies ways to force specific "barriers" that, in turn, prevent the compiler from moving instructions before/after them. A compiler might reorder the assignments in this example because doing so might be more efficient on a CPU with multiple computing units (ALUs/hyperthreading/etc.), even with a single core. Typically, if your CPU has 2 ALUs that can work in parallel, there is no reason the compiler would not try to feed them with as much work as it can.
I'm not speaking of the out-of-order execution of CPU instructions that's done internally in Intel CPUs (for example), but of compile-time ordering to ensure all the computing resources are busy doing some work.
I think it depends on the compilation flags. Typically, unless you tell it otherwise, the compiler must assume that another compilation unit (say B.cpp, which is not visible at compile time) can have an "extern bool x_init" and can change it at any time. Then the reordering optimization would break the expected behavior (B could define the initialize() function). This example is trivial and unlikely to break. The linked SO answers are not related to this "optimization"; they simply say that, in their case, the compiler cannot assume that the global array is not modified externally, and as such cannot make the optimization. This is not like your example.
Yes. It's a very common optimization trick. Instead of:
// test is a bool
for (int i = 0; i < 345; i++) {
    if (test) do_something();
}
The compiler might do:
if (test) for (int i = 0; i < 345; i++) { do_something(); }
And save 344 useless tests.
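For completeness: since C++11 the intended ordering in Stroustrup's example can be stated directly, which makes both the store reordering in thread 1 and the hoisting of the flag test in thread 2 illegal. A sketch (initialize() assumed as in the book):

#include <atomic>
#include <chrono>
#include <thread>

int initialize(); // as in the book's example

int x;
std::atomic<bool> x_init{false};

// thread 1:
void init()
{
    x = initialize();
    x_init.store(true, std::memory_order_release); // publishes x
}

// thread 2:
void f2()
{
    while (!x_init.load(std::memory_order_acquire)) // re-read every iteration
        std::this_thread::sleep_for(std::chrono::milliseconds{10});
    int y = x; // guaranteed to see the value written by init()
    // ...
}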