Do all CPUs that formally respect dependency ordering allow independent dependencies? - c++

The context of the question is normal cached memory such as the memory of C and C++ objects.
Many CPUs perform memory loads out of order (with the benefit of not completely stalling the executing thread on a cache miss), and do not check afterwards that another core did not ask permission to change the loaded values (*). So a read that appears later in program order may effectively have been retrieved earlier, and another thread might have made progress in that interval, which means that no agreement on the ordering of memory operations that is consistent with program order is possible between the two threads.
(*) By asking the local cache to invalidate the cache line containing it, which is a prerequisite to altering a value on any architecture that can support normal multithreaded programming.
Many CPU types that allow that behavior also guarantee that when a value depends on the result of a previous load, all subsequent operations that have a dependency on that value really occur after the load, in particular loads at an address derived from that value.
But a value derived from another can be known to be mathematically independent of its input; indeed, a common idiom to set a register to zero is to XOR its value with itself, which syntactically derives the new value from the previous one but semantically does not.
Do all CPUs that track value dependencies (in order to guarantee that memory operations based on values dependent on a previous load happen after it) allow such non-semantic dependencies to create an ordering dependency for the purpose of memory operations? Does that apply only to bitwise operations, or to all arithmetic operations, including but not limited to uses of absorbing elements?
Most obvious examples are:
x ^ x
x & 0
x | -1
x - x
x * 0
x / x (raises an exception if x is zero)
The purpose of the question, obviously, is to understand how a compiler can support memory_order_consume in C and C++ on these architectures in an optimal way, following the intent of that feature (and not in the degenerate way of defining mo_consume as mo_acquire).
PROBLEM STATEMENT
The problem is the compilation of code such as
r1 = x.load(memory_order_consume);
r2 = y | (r1&0);
r3 = z + (r1-r1);
r4 = (&E)[r1*0];
where the simple strategy of renaming r1 and making it "register volatile" (volatile-like and non-optimizable, but kept in a register) in the intermediate code produces correct "consume" (value/address-dependent load) behavior for each operation.
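A minimal source-level sketch of what such a dependency-preserving lowering has to achieve, assuming AArch64 and the GNU inline-asm extension (the helper name dependent_zero is hypothetical): the asm hides the computation from the optimizer, so r1 ^ r1 cannot be folded to a plain constant, and the final load stays address-dependent on r1.

#include <atomic>

// Hypothetical helper: computes r1 ^ r1 (always zero) in a way the compiler
// cannot constant-fold, so the result remains syntactically dependent on r1
// all the way down to the emitted instructions (AArch64, GNU asm).
inline long dependent_zero(long r1) {
    long zero;
    asm("eor %0, %1, %1" : "=r"(zero) : "r"(r1));
    return zero;
}

std::atomic<long> x;
long E[1];

long consumer() {
    long r1 = x.load(std::memory_order_consume);
    // This load is address-dependent on r1 even though the index is always
    // zero; hardware that respects such syntactic dependencies orders it
    // after the consume load.
    return E[dependent_zero(r1)];
}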
BONUS QUESTION
Do any CPUs directly support dependent constants? That is, a "set immediate value" instruction that introduces an explicit dependency of the constant on a register value:
load.i.dep r1,8,r4 ; sets r1 to 8 but dependency on value of r4 is respected
lea r0,r0,r1 ; r0 += r1, r0 depends on r4
load.w r2,r1 ; loads one word at r1 to r2 only after r4 is known

Related

using bitshift in c++ to set two bits to 0 resp 1

Assuming x is an 8-bit unsigned integer, what is the most efficient operation to set the last two bits to 01?
So regardless of the initial value, it should be x = ******01 in the final state.
In order to set
the last bit to 1, one can use OR, as in x |= 0b00000001, and
the second-to-last bit to 0, one can use AND, as in x &= 0b11111101, which is ~(1<<1).
Is there an arithmetic / logical operation that can be use to apply both operations at the same time?
Can this be answered independently of any program-specific implementation, purely in terms of logical operations?
Of course you can put both operations into one statement:
x = (x & 0b11111100) | 1;
That at least saves one assignment and one memory read/write. However, the compiler will most likely optimize this anyway, even if it is written as two statements.
Depending on the target CPU, compilers may even optimize the code into bit-manipulation instructions that can directly set or reset single bits. And if the variable is heavily used locally, then it is most likely kept in a register.
So in the end the generated code may look as simple as (pseudo asm):
load x
setBit 0
clearBit 1
store x
However, it may also be compiled into something like
load x to R1
load immediate 0b11111100 to R2
and R1, R2
load immediate 1 to R2
or R1, R2
store R1 to x
For playing with things like this you may take a look at Compiler Explorer: https://godbolt.org/z/sMhG3YTx9
Try removing the -O2 compiler option and see the difference between optimized and non-optimized code. You can also switch to different CPU architectures and compilers.
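To convince yourself the combined expression is correct before reading the assembly, a brute-force check over all 256 inputs is enough; a minimal self-contained sketch:

#include <cassert>
#include <cstdint>

int main() {
    for (unsigned v = 0; v < 256; ++v) {
        std::uint8_t x = static_cast<std::uint8_t>(v);
        x = (x & 0b11111100) | 1;          // clear bit 1, set bit 0 in one expression
        assert((x & 0b00000011) == 0b01);  // the last two bits are now 01
        assert((x >> 2) == (v >> 2));      // the upper six bits are untouched
    }
}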
This is not possible with a dyadic bitwise operator, because the second operand would have to distinguish between "reset to 0", "set to 1" and "leave unchanged", and that cannot be encoded in a single binary mask.
Instead, you can use the bitwise ternary operator and form the expression
x = 0b11111100 ??? x : 0b00000001.
Anyway, a word of caution: this operator does not exist.
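For what it's worth, the effect of such an operator can be emulated with the classic bit-select idiom (mask & a) | (~mask & b), which picks each bit from a where the mask is 1 and from b where it is 0. A sketch (bit_select is a made-up name):

#include <cstdint>

// Per-bit "mask ? a : b": keep a's bits where mask is 1, b's bits elsewhere.
inline std::uint8_t bit_select(std::uint8_t mask, std::uint8_t a, std::uint8_t b) {
    return (mask & a) | (static_cast<std::uint8_t>(~mask) & b);
}

// x = 0b11111100 ??? x : 0b00000001 would then be written as:
// x = bit_select(0b11111100, x, 0b00000001);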

What formally guarantees that non-atomic variables can't see out-of-thin-air values and create a data race like atomic relaxed theoretically can?

This is a question about the formal guarantees of the C++ standard.
The standard points out that the rules for std::memory_order_relaxed atomic variables allow "out of thin air" / "out of the blue" values to appear.
But for non-atomic variables, can this example have UB? Is r1 == r2 == 42 possible in the C++ abstract machine? Neither variable is 42 initially, so you'd expect that neither if body should execute, meaning no writes to the shared variables.
// Global state
int x = 0, y = 0;
// Thread 1:
r1 = x;
if (r1 == 42) y = r1;
// Thread 2:
r2 = y;
if (r2 == 42) x = 42;
The above example is adapted from the standard, which explicitly says such behavior is allowed by the specification for atomic objects:
[Note: The requirements do allow r1 == r2 == 42 in the following
example, with x and y initially zero:
// Thread 1:
r1 = x.load(memory_order_relaxed);
if (r1 == 42) y.store(r1, memory_order_relaxed);
// Thread 2:
r2 = y.load(memory_order_relaxed);
if (r2 == 42) x.store(42, memory_order_relaxed);
However, implementations should not allow such behavior. – end note]
What part of the so-called "memory model" protects non-atomic objects from these interactions caused by reads seeing out-of-thin-air values?
When a race condition would exist only for certain values of x and y, what guarantees that a read of a shared variable (normal, non-atomic) cannot see such values?
Can non-executed if bodies create self-fulfilling conditions that lead to a data race?
The text of your question seems to be missing the point of the example and out-of-thin-air values. Your example does not contain data-race UB. (It might if x or y were set to 42 before those threads ran, in which case all bets are off and the other answers citing data-race UB apply.)
There is no protection against real data races, only against out-of-thin-air values.
I think you're really asking how to reconcile that mo_relaxed example with sane and well-defined behaviour for non-atomic variables. That's what this answer covers.
The note is pointing out a hole in the atomic mo_relaxed formalism, not warning you of a real possible effect on some implementations.
This gap does not (I think) apply to non-atomic objects, only to mo_relaxed.
The note itself says: "However, implementations should not allow such behavior." Apparently the standards committee couldn't find a way to formalize that requirement, so for now it's just a note, but it is not intended to be optional.
It's clear that even though this isn't strictly normative, the C++ standard intends to disallow out-of-thin-air values for relaxed atomics (and in general, I assume). Later standards discussion, e.g. 2018's p0668r5: Revising the C++ memory model (which doesn't "fix" this; it's an unrelated change), includes juicy side-notes like:
We still do not have an acceptable way to make our informal (since C++14) prohibition of out-of-thin-air results precise. The primary practical effect of that is that formal verification of C++ programs using relaxed atomics remains unfeasible. The above paper suggests a solution similar to http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3710.html . We continue to ignore the problem here ...
So yes, the normative parts of the standard are apparently weaker for relaxed atomics than they are for non-atomic objects. This seems to be an unfortunate side effect of how they define the rules.
AFAIK no implementations can produce out-of-thin-air values in real life.
Later versions of the standard phrase the informal recommendation more clearly, e.g. in the current draft: https://timsong-cpp.github.io/cppwp/atomics.order#8
Implementations should ensure that no “out-of-thin-air” values are computed that circularly depend on their own computation.
...
[ Note: The recommendation [of 8.] similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:
// Thread 1:
r1 = x.load(memory_order::relaxed);
if (r1 == 42) y.store(42, memory_order::relaxed);
// Thread 2:
r2 = y.load(memory_order::relaxed);
if (r2 == 42) x.store(42, memory_order::relaxed);
— end note ]
(The rest of this answer was written before I was sure that the standard intended to disallow this for mo_relaxed, too.)
I'm pretty sure the C++ abstract machine does not allow r1 == r2 == 42.
Every possible ordering of operations in the C++ abstract machine leads to r1 == r2 == 0 without UB, even without synchronization. Therefore the program has no UB, and any non-zero result would violate the "as-if" rule.
Formally, ISO C++ allows an implementation to implement functions / programs in any way that gives the same result as the C++ abstract machine would. For multi-threaded code, an implementation can pick one possible abstract-machine ordering and decide that's the ordering that always happens. (e.g. when reordering relaxed atomic stores when compiling to asm for a strongly-ordered ISA. The standard as written even allows coalescing atomic stores but compilers choose not to). But the result of the program always has to be something the abstract machine could have produced. (Only the Atomics chapter introduces the possibility of one thread observing the actions of another thread without mutexes. Otherwise that's not possible without data-race UB).
I think the other answers didn't look carefully enough at this. (And neither did I when it was first posted.) Code that doesn't execute doesn't cause UB (including data-race UB), and compilers aren't allowed to invent writes to objects. (Except in code paths that already unconditionally write them, like y = (x==42) ? 42 : y;, which would obviously create data-race UB.)
For any non-atomic object, if we don't actually write it, then other threads might also be reading it, regardless of code inside non-executed if blocks. The standard allows this and doesn't allow a variable to suddenly read as a different value when the abstract machine hasn't written it. (And for objects we don't even read, like neighbouring array elements, another thread might even be writing them.)
Therefore we can't do anything that would let another thread temporarily see a different value for the object, or step on its write. Inventing writes to non-atomic objects is basically always a compiler bug; this is well known and universally agreed upon because it can break code that doesn't contain UB (and has done so in practice in a few cases of compiler bugs, e.g. IA-64 GCC I think had such a bug at one point that broke the Linux kernel). IIRC, Herb Sutter mentioned such bugs in part 1 or 2 of his talk "atomic<> Weapons: The C++ Memory Model and Modern Hardware", saying that it was already usually considered a compiler bug before C++11, but C++11 codified that and made it easier to be sure.
Or another recent example with ICC for x86:
Crash with icc: can the compiler invent writes where none existed in the abstract machine?
In the C++ abstract machine, there's no way for execution to reach either y = r1; or x = r2;, regardless of sequencing or simultaneity of the loads for the branch conditions. x and y both read as 0 and neither thread ever writes them.
No synchronization is required to avoid UB because no order of abstract-machine operations leads to a data-race. The ISO C++ standard doesn't have anything to say about speculative execution or what happens when mis-speculation reaches code. That's because speculation is a feature of real implementations, not of the abstract machine. It's up to implementations (HW vendors and compiler writers) to ensure the "as-if" rule is respected.
It's legal in C++ to write code like if (global_id == mine) shared_var = 123; and have all threads execute it, as long as at most one thread actually runs the shared_var = 123; statement. (And as long as synchronization exists to avoid a data race on non-atomic int global_id). If things like this broke down, it would be chaos. For example, you could apparently draw wrong conclusions like reordering atomic operations in C++
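A self-contained version of that pattern, as a sketch (the names global_id and shared_var are illustrative): every thread executes the if, but at most one thread's condition is true, so the non-atomic store happens at most once and there is no data race.

#include <thread>
#include <vector>

int global_id = 2;    // written before the threads start; thread creation synchronizes
int shared_var = 0;   // non-atomic; written by at most one thread

void worker(int mine) {
    if (global_id == mine)   // every thread reads, at most one writes
        shared_var = 123;
}

int main() {
    std::vector<std::thread> ts;
    for (int i = 0; i < 4; ++i) ts.emplace_back(worker, i);
    for (auto& t : ts) t.join();
}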
Observing that a non-write didn't happen isn't data-race UB.
It's also not UB to run if(i<SIZE) return arr[i]; because the array access only happens if i is in bounds.
I think the "out of the blue" value-invention note only applies to relaxed-atomics, apparently as a special caveat for them in the Atomics chapter. (And even then, AFAIK it can't actually happen on any real C++ implementations, certainly not mainstream ones. At this point implementations don't have to take any special measures to make sure it can't happen for non-atomic variables.)
I'm not aware of any similar language outside the atomics chapter of the standard that allows an implementation to allow values to appear out of the blue like this.
I don't see any sane way to argue that the C++ abstract machine causes UB at any point when executing this, yet seeing r1 == r2 == 42 would imply that an unsynchronized read+write had happened, which is data-race UB. If that can happen, can an implementation invent UB because of speculative execution (or some other reason)? The answer has to be "no" for the C++ standard to be usable at all.
For relaxed atomics, inventing the 42 out of nowhere wouldn't imply that UB had happened; perhaps that's why the standard says it's allowed by the rules? As far as I know, nothing outside the Atomics chapter of the standard allows it.
A hypothetical asm / hardware mechanism that could cause this
(Nobody wants this, hopefully everyone agrees that it would be a bad idea to build hardware like this. It seems unlikely that coupling speculation across logical cores would ever be worth the downside of having to roll back all cores when one detects a mispredict or other mis-speculation.)
For 42 to be possible, thread 1 has to see thread 2's speculative store, and the store from thread 1 has to be seen by thread 2's load (confirming the branch speculation as good, allowing this path of execution to become the real path that was actually taken).
i.e. speculation across threads: Possible on current HW if they ran on the same core with only a lightweight context switch, e.g. coroutines or green threads.
But on current HW, memory reordering between threads is impossible in that case. Out-of-order execution of code on the same core gives the illusion of everything happening in program order. To get memory reordering between threads, they need to be running on different cores.
So we'd need a design that coupled together speculation between two logical cores. Nobody does that because it means more state needs to be rolled back if a mispredict is detected. But it is hypothetically possible, for example an OoO SMT core that allows store-forwarding between its logical cores even before stores have retired from the out-of-order back-end (i.e. become non-speculative).
PowerPC allows store-forwarding between logical cores for retired stores, meaning that threads can disagree about the global order of stores. But waiting until they "graduate" (i.e. retire) and become non-speculative means it doesn't tie together speculation on separate logical cores. So when one is recovering from a branch miss, the others can keep the back-end busy. If they all had to rollback on a mispredict on any logical core, that would defeat a significant part of the benefit of SMT.
I thought for a while I'd found an ordering that led to this on a single core of a real weakly-ordered CPU (with user-space context switching between the threads), but the final-step store can't forward to the first-step load because that is program order and OoO exec preserves it:
T2: r2 = y; stalls (e.g. cache miss)
T2: branch prediction predicts that r2 == 42 will be true, so x = 42 should run.
T2: x = 42 runs. (Still speculative; r2 = y hasn't obtained a value yet, so the r2 == 42 compare/branch is still waiting to confirm that speculation.)
A context switch to Thread 1 happens without rolling back the CPU to retirement state or otherwise waiting for speculation to be confirmed as good or detected as mis-speculation.
This part won't happen on real C++ implementations unless they use an M:N thread model, not the more common 1:1 C++ thread to OS thread. Real CPUs don't rename the privilege level: they don't take interrupts or otherwise enter the kernel with speculative instructions in flight that might need to rollback and redo entering kernel mode from a different architectural state.
T1: r1 = x; takes its value from the speculative x = 42 store
T1: r1 == 42 is found to be true. (Branch speculation happens here, too, not actually waiting for store-forwarding to complete. But along this path of execution, where the x = 42 did happen, this branch condition will execute and confirm the prediction).
T1: y = 42 runs.
This was all on the same CPU core, so this y=42 store is after the r2=y load in program order; it can't give that load a 42 to let the r2==42 speculation be confirmed. So this possible ordering doesn't demonstrate this in action after all. This is why threads have to be running on separate cores, with inter-thread speculation, for effects like this to be possible.
Note that x = 42 doesn't have a data dependency on r2 so value-prediction isn't required to make this happen. And the y=r1 is inside an if(r1 == 42) anyway so the compiler can optimize to y=42 if it wants, breaking the data dependency in the other thread and making things symmetric.
Note that the arguments about green threads or other context switches on a single core aren't actually relevant: we need separate cores for the memory reordering.
I commented earlier that I thought this might involve value-prediction. The ISO C++ standard's memory model is certainly weak enough to allow the kinds of crazy "reordering" that value-prediction can create, but it's not necessary for this reordering. y=r1 can be optimized to y=42, and the original code includes x=42 anyway, so there's no data dependency of that store on the r2=y load. Speculative stores of 42 are easily possible without value prediction. (The problem is getting the other thread to see them!)
Speculating because of branch prediction instead of value prediction has the same effect here. And in both cases the loads need to eventually see 42 to confirm the speculation as correct.
Value-prediction doesn't even help make this reordering more plausible. We still need inter-thread speculation and memory reordering for the two speculative stores to confirm each other and bootstrap themselves into existence.
ISO C++ chooses to allow this for relaxed atomics, but AFAICT it disallows it for non-atomic variables. I'm not sure I see exactly what in the standard allows the relaxed-atomic case beyond the note saying it's not explicitly disallowed. If there were any other code that did anything with x or y then maybe, but I think my argument applies to the relaxed-atomic case as well. No path through the source in the C++ abstract machine can produce it.
As I said, it's not possible in practice AFAIK on any real hardware (in asm), or in C++ on any real C++ implementation. It's more of an interesting thought-experiment into crazy consequences of very weak ordering rules, like C++'s relaxed-atomic. (Those ordering rules don't disallow it, but I think the as-if rule and the rest of the standard does, unless there's some provision that allows relaxed atomics to read a value that was never actually written by any thread.)
If there is such a rule, it would only be for relaxed atomics, not for non-atomic variables. Data-race UB is pretty much all the standard needs to say about non-atomic vars and memory ordering, and we don't have a data race here.
When a race condition potentially exists, what guarantees that a read of a shared variable (normal, non-atomic) cannot see a write?
There is no such guarantee.
When a race condition exists, the behaviour of the program is undefined:
[intro.races]
Two actions are potentially concurrent if
they are performed by different threads, or
they are unsequenced, at least one is performed by a signal handler, and they are not both performed by the same signal handler invocation.
The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below. Any such data race results in undefined behavior. ...
The special case is not very relevant to the question, but I'll include it for completeness:
Two accesses to the same object of type volatile std::sig_atomic_t do not result in a data race if both occur in the same thread, even if one or more occurs in a signal handler. ...
What part of the so-called "memory model" protects non-atomic objects from these interactions caused by reads that see such values?
None. In fact, you get the opposite: the standard explicitly calls this out as undefined behavior. In [intro.races]/21 we have
The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below. Any such data race results in undefined behavior.
which covers your second example.
The rule is that if you have shared data in multiple threads, and at least one of those threads writes to that shared data, then you need synchronization. Without that you have a data race and undefined behavior. Do note that volatile is not a valid synchronization mechanism. You need atomics/mutexes/condition variables to protect shared access.
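For completeness, here is one minimal way to make the questioner's example race-free; this sketch uses a mutex, which is just one option (atomics would work too):

#include <mutex>

int x = 0, y = 0;
std::mutex m;

void thread1() {
    std::lock_guard<std::mutex> lk(m);  // serializes with thread2
    int r1 = x;
    if (r1 == 42) y = r1;
}

void thread2() {
    std::lock_guard<std::mutex> lk(m);
    int r2 = y;
    if (r2 == 42) x = 42;
}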
Note: The specific examples I give here are apparently not accurate. I've assumed the optimizer can be somewhat more aggressive than it's apparently allowed to be. There is some excellent discussion about this in the comments. I'm going to have to investigate this further, but wanted to leave this note here as a warning.
Other people have given you answers quoting the appropriate parts of the standard that flat out state that the guarantee you think exists, doesn't. It appears that you're interpreting a part of the standard that says a certain weird behavior is permitted for atomic objects if you use memory_order_relaxed as meaning that this behavior is not permitted for non-atomic objects. This is a leap of inference that is explicitly addressed by other parts of the standard that declare the behavior undefined for non-atomic objects.
In practical terms, here is an order of events that might happen in thread 1 that would be perfectly reasonable, but result in the behavior you think is barred even if the hardware guaranteed that all memory access was completely serialized between CPUs. Keep in mind that the standard has to not only take into account the behavior of the hardware, but the behavior of optimizers, which often aggressively re-order and re-write code.
Thread 1 could be re-written by an optimizer to look this way:
old_y = y; // old_y is a hidden variable (perhaps a register) created by the optimizer
y = 42;
if (x != 42) y = old_y;
There might be perfectly reasonable reasons for an optimizer to do this. For example, it may decide that it's far more likely than not for 42 to be written into y, and for dependency reasons, the pipeline might work a lot better if the store into y occurs sooner rather than later.
The rule is that the apparent result must look as if the code you wrote is what was executed. But there is no requirement that the code you write bears any resemblance at all to what the CPU is actually told to do.
The atomic variables impose constraints on the ability of the compiler to re-write code, as well as instructing the compiler to issue special CPU instructions that impose constraints on the ability of the CPU to re-order memory accesses. The constraints imposed even by memory_order_relaxed are much stronger than what applies to ordinary, non-atomic variables. The compiler would generally be allowed to completely get rid of any reference to x and y at all if they weren't atomic.
Additionally, if they are atomic, the compiler must ensure that other CPUs see the entire variable with either the new value or the old value. For example, if the variable is a 32-bit entity that crosses a cache-line boundary and a modification involves changing bits on both sides of that boundary, one CPU may see a value of the variable that was never written, because it only sees the update to the bits on one side of the cache-line boundary. But such tearing is not allowed for atomic variables, even those modified with memory_order_relaxed.
That is why data races are labeled as undefined behavior by the standard. The space of the possible things that could happen is probably a lot wilder than your imagination could account for, and certainly wider than any standard could reasonably encompass.
(Stackoverflow complains about too many comments I put above, so I gathered them into an answer with some modifications.)
The excerpt you cite from C++ standard working draft N3337 was wrong.
[Note: The requirements do allow r1 == r2 == 42 in the following
example, with x and y initially zero:
// Thread 1:
r1 = x.load(memory_order_relaxed);
if (r1 == 42)
y.store(r1, memory_order_relaxed);
// Thread 2:
r2 = y.load(memory_order_relaxed);
if (r2 == 42)
x.store(42, memory_order_relaxed);
A programming language should never allow this "r1 == r2 == 42" to happen.
This has nothing to do with the memory model. It is required by causality, which is the basic logical methodology and a foundation of any programming language design. It is the fundamental contract between human and computer. Any memory model should abide by it; otherwise it is a bug.
The causality here is reflected by the intra-thread dependences between operations within a thread, such as data dependence (e.g., a read after a write to the same location) and control dependence (e.g., an operation in a branch). They cannot be violated by any language specification. Any compiler/processor design should respect these dependences in its committed results (i.e., externally visible, program-visible results).
The memory model is mainly about memory operation ordering among multiple processors; it should never violate intra-thread dependences, although a weak model may allow the causality observed in one processor to be violated (or unseen) in another processor.
In your code snippet, both threads have an (intra-thread) data dependence (load -> check) and a control dependence (check -> store) that ensure their respective executions (within a thread) are ordered. That means we can check the later op's output to determine whether the earlier op has executed.
Then we can use simple logic to deduce that if both r1 and r2 are 42, there must be a dependence cycle, which is impossible unless you remove one of the condition checks and thereby break the cycle. This has nothing to do with the memory model, only with intra-thread data dependence.
Causality (or, more accurately, intra-thread dependence here) is defined in the C++ standard, though not so explicitly in early drafts, because dependence is more of a micro-architecture and compiler term. In a language spec it is usually expressed as operational semantics. For example, the control dependence formed by an if statement is defined in the same version of the draft you cited as "If the condition yields true the first substatement is executed." That defines the sequential execution order.
That said, the compiler and processor can schedule one or more operations of the if-branch to be executed before the if-condition is resolved. But no matter how they schedule the operations, the result of the if-branch cannot be committed (i.e., become visible to the program) before the if-condition is resolved. One should distinguish between semantic requirements and implementation details: one is the language spec, the other is how the compiler and processor implement it.
Actually the current C++ standard draft has corrected this bug in https://timsong-cpp.github.io/cppwp/atomics.order#9 with a slight change.
[ Note: The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:
// Thread 1:
r1 = x.load(memory_order_relaxed);
if (r1 == 42)
y.store(42, memory_order_relaxed);
// Thread 2:
r2 = y.load(memory_order_relaxed);
if (r2 == 42)
x.store(42, memory_order_relaxed);

Can an expression containing post increment execute in parallel with other parts of that expression in C++?

I came up with this question from the following answer:
Efficiency of postincrement v.s. preincrement in C++
There I've found this expression:
a = b++ * 2;
They said that the b++ above can run in parallel with the multiplication.
How can b++ run in parallel with the multiplication?
What I've understood about the procedure is:
First we copy b's value to a temporary variable, then increment b, and finally multiply 2 with that temporary variable.
We're not multiplying with b but with that temporary variable, so how can it run in parallel?
I got the above idea about the temporary variable from another answer: Is there a performance difference between i++ and ++i in C++?
What you are talking about is instruction-level parallelism. The processor in this case can execute the increment of b while also multiplying the old (copied) value of b.
This is very fine-grained parallelism, at the processor level, and in general you can expect it to give you some advantages, depending on the architecture.
In the case of pre-increment, instead, the result of the increment operation must be waited for in the processor's pipeline before the multiplication can be executed, hence giving you a penalty.
However, this is not semantically equivalent as the value of a will be different.
#SimpleGuy's answer is a pretty reasonable first approximation. The trouble is, it assumes a halfway point between the simple theoretical model (no parallelization) and the real world. This answer tries to look at a more realistic model, still without assuming one particular CPU.
The chief thing to realize is that real CPUs have registers and cache. These exist because memory operations are far more expensive than simple math. Parallelization of an integer increment and an integer bit-shift (*2 is optimized to <<1 on real CPUs) is a minor concern; the optimizer will chiefly look at avoiding load stalls.
So let's assume that a and b aren't in CPU registers. The relevant memory operations are LOAD b, STORE a and STORE b. Everything starts with LOAD b, so the optimizer may move that up as far as possible, even before a preceding instruction when possible (aliasing is the chief concern here). The STORE b can start as soon as b++ has finished. The STORE a can happen after the STORE b, so it's not a big problem that it's delayed by one CPU instruction (the <<1), and there's little to be gained by parallelizing the two operations.
The reason that b++ can run in parallel is as follows.
b++ is a post-increment, which means the value of the variable is incremented after use. The line below can be broken into two parts:
a = b++ * 2
Part-1: Multiply b with 2
Part-2: Increment the value of b by 1
Since the above two parts are not dependent on each other, they can be run in parallel.
Had it been the case of pre-increment, which means incrementing before use,
a = ++b * 2
The parts would have been
Part-1: Increment the value of b by 1
Part-2: Multiply (new) b with 2
As can be seen above, part two can run only after part one has executed, so there is a dependency and hence no parallelism.
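Writing the two decompositions out with an explicit temporary makes the (in)dependence visible. This is only a sketch of what the compiler conceptually does, not literal output:

int main() {
    int b = 5, a;

    // a = b++ * 2; roughly becomes:
    int tmp = b;   // read the old value
    b = tmp + 1;   // Part-2: increment b ...
    a = tmp * 2;   // Part-1: ... while multiplying the old value; these two
                   // statements are independent, so the CPU may overlap them

    // a = ++b * 2; roughly becomes:
    b = b + 1;     // Part-1: increment first
    a = b * 2;     // Part-2: the multiply must wait for the new value of b
    return a;
}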

Will two relaxed writes to the same location in different threads always be seen in the same order by other threads?

On the x86 architecture, stores to the same memory location have a total order, e.g., see this video. What are the guarantees in the C++11 memory model?
More precisely, in
-- Initially --
std::atomic<int> x{0};
-- Thread 1 --
x.store(1, std::memory_order_release);
-- Thread 2 --
x.store(2, std::memory_order_release);
-- Thread 3 --
int r1 = x.load(std::memory_order_acquire);
int r2 = x.load(std::memory_order_acquire);
-- Thread 4 --
int r3 = x.load(std::memory_order_acquire);
int r4 = x.load(std::memory_order_acquire);
would the outcome r1==1, r2==2, r3==2, r4==1 be allowed (on some architecture other than x86)? What if I were to replace all memory_order's by std::memory_order_relaxed?
No, such an outcome is not allowed. §1.10 [intro.multithread]/p8, 18 (quoting N3936/C++14; the same text is found in paragraphs 6 and 16 for N3337/C++11):
8 All modifications to a particular atomic object M occur in some
particular total order, called the modification order of M.
18 If a value computation A of an atomic object M happens before a
value computation B of M, and A takes its value from a side effect X
on M, then the value computed by B shall either be the value stored by
X or the value stored by a side effect Y on M, where Y follows X in
the modification order of M. [ Note: This requirement is known as
read-read coherence. —end note ]
In your code there are two side effects, and by p8 they occur in some particular total order. In Thread 3, the value computation to calculate the value to be stored in r1 happens before that of r2, so given r1 == 1 and r2 == 2 we know that the store performed by Thread 1 precedes the store performed by Thread 2 in the modification order of x. That being the case, Thread 4 cannot observe r3 == 2, r4 == 1 without running afoul of p18. This is regardless of the memory_order used.
There is a note in p21 (p19 in N3337) that is relevant:
[ Note: The four preceding coherence requirements effectively
disallow compiler reordering of atomic operations to a single object,
even if both operations are relaxed loads. This effectively makes the
cache coherence guarantee provided by most hardware available to C++
atomic operations. —end note ]
Per C++11 [intro.multithread]/6: "All modifications to a particular atomic object M occur in some particular total order, called the modification order of M." Consequently, reads of an atomic object by a particular thread will never see "older" values than those the thread has already observed. Note that there is no mention of memory orderings here, so this property holds true for all of them - seq_cst through relaxed.
In the example given in the OP, the modification order of x can be either (0,1,2) or (0,2,1). A thread that has observed a given value in that modification order cannot later observe an earlier value. The outcome r1==1, r2==2 implies that the modification order of x is (0,1,2), but r3==2, r4==1 implies it is (0,2,1), a contradiction. So that outcome is not possible on an implementation that conforms to C++11.
Given that the C++11 rules definitely disallow this, here's a more qualitative / intuitive way to understand it:
If there are no further stores to x, eventually all readers will agree on its value. (i.e. one of the two stores came 2nd).
If it were possible for different threads to disagree about the order, then either they'd permanently / long-term disagree about the value, or one thread could see the value change a 3rd extra time (a phantom store).
Fortunately C++11 doesn't allow either of those possibilities.

What does `std::kill_dependency` do, and why would I want to use it?

I've been reading about the new C++11 memory model and I've come upon the std::kill_dependency function (§29.3/14-15). I'm struggling to understand why I would ever want to use it.
I found an example in the N2664 proposal but it didn't help much.
It starts by showing code without std::kill_dependency. Here, the first line carries a dependency into the second, which carries a dependency into the indexing operation, which in turn carries a dependency into the do_something_with call.
r1 = x.load(memory_order_consume);
r2 = r1->index;
do_something_with(a[r2]);
There is a further example that uses std::kill_dependency to break the dependency between the second line and the indexing.
r1 = x.load(memory_order_consume);
r2 = r1->index;
do_something_with(a[std::kill_dependency(r2)]);
As far as I can tell, this means that the indexing and the call to do_something_with are not dependency ordered before the second line. According to N2664:
This allows the compiler to reorder the call to do_something_with, for example, by performing speculative optimizations that predict the value of a[r2].
In order to make the call to do_something_with, the value a[r2] is needed. If, hypothetically, the compiler "knows" that the array is filled with zeros, it can optimize that call to do_something_with(0); and reorder this call relative to the other two instructions as it pleases. It could produce any of:
// 1
r1 = x.load(memory_order_consume);
r2 = r1->index;
do_something_with(0);
// 2
r1 = x.load(memory_order_consume);
do_something_with(0);
r2 = r1->index;
// 3
do_something_with(0);
r1 = x.load(memory_order_consume);
r2 = r1->index;
Is my understanding correct?
If do_something_with synchronizes with another thread by some other means, what does this mean with respect to the ordering of the x.load call and this other thread?
Assuming my understanding is correct, there's still one thing that bugs me: when I'm writing code, what reasons would lead me to choose to kill a dependency?
The purpose of memory_order_consume is to ensure the compiler does not do certain unfortunate optimizations that may break lockless algorithms. For example, consider this code:
int t;
volatile int a, b;
t = *x;
a = t;
b = t;
A conforming compiler may transform this into:
a = *x;
b = *x;
Thus, a may not equal b. It may also do:
t2 = *x;
// use t2 somewhere
// later
t = *x;
a = t2;
b = t;
By using load(memory_order_consume), we require that uses of the loaded value actually use the value loaded at this point, rather than one loaded earlier or re-loaded later. In other words,
t = x.load(memory_order_consume);
a = t;
b = t;
assert(a == b); // always true
The standard document considers a case where you may only be interested in ordering certain fields of a structure. The example is:
r1 = x.load(memory_order_consume);
r2 = r1->index;
do_something_with(a[std::kill_dependency(r2)]);
This instructs the compiler that it is allowed to, effectively, do this:
predicted_r2 = x->index; // unordered load
r1 = x; // ordered load
r2 = r1->index;
do_something_with(a[predicted_r2]); // may be faster than waiting for r2's value to be available
Or even this:
predicted_r2 = x->index; // unordered load
predicted_a = a[predicted_r2]; // get the CPU loading it early on
r1 = x; // ordered load
r2 = r1->index; // ordered load
do_something_with(predicted_a);
If the compiler knows that do_something_with won't change the result of the loads for r1 or r2, then it can even hoist it all the way up:
do_something_with(a[x->index]); // completely unordered
r1 = x; // ordered
r2 = r1->index; // ordered
This allows the compiler a little more freedom in its optimization.
In addition to the other answer, I will point out that Scott Meyers, one of the definitive leaders in the C++ community, bashed memory_order_consume pretty strongly. He basically said that he believed it had no place in the standard. He said there are two cases where memory_order_consume has any effect:
Exotic architectures designed to support 1024+ core shared memory machines.
The DEC Alpha
Yes, once again, the DEC Alpha finds its way into infamy by using an optimization not seen in any other chip until many years later on absurdly specialized machines.
The particular optimization is that those processors allow one to dereference a field before actually getting the address of that field (i.e. it can look up x->y BEFORE it even looks up x, using a predicted value of x). It then goes back and determines whether x was the value it expected it to be. On success, it saved time. On failure, it has to go back and get x->y again.
memory_order_consume tells the compiler/architecture that these operations have to happen in order. However, in the most useful case, one ends up wanting to do x->y.z, where z doesn't change. memory_order_consume would force the compiler to keep x, y and z in order. kill_dependency(x->y).z tells the compiler/architecture that it may resume doing such nefarious reorderings.
99.999% of developers will probably never work on a platform where this feature is required (or has any effect at all).
The usual use case of kill_dependency arises from the following. Suppose you want to do atomic updates to a nontrivial shared data structure. A typical way to do this is to nonatomically create some new data and to atomically swing a pointer from the data structure to the new data. Once you do this, you are not going to change the new data until you have swung the pointer away from it to something else (and waited for all readers to vacate). This paradigm is widely used, e.g. read-copy-update in the Linux kernel.
Now, suppose the reader reads the pointer, reads the new data, and comes back later and reads the pointer again, finding that the pointer hasn't changed. The hardware can't tell that the data hasn't been updated in the meantime, so under consume semantics it can't use a cached copy of the data but has to read it again from memory. (Or, to think of it another way, the hardware and compiler can't speculatively move the read of the data up before the read of the pointer.)
This is where kill_dependency comes to the rescue. By wrapping the pointer in a kill_dependency, you create a value that will no longer propagate dependency, allowing accesses through the pointer to use the cached copy of the new data.
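Putting that together, a hedged sketch of such a reader (Data, g_ptr and payload are illustrative names; the publisher side, which swings g_ptr atomically, is omitted):

#include <atomic>

struct Data { int payload; };
std::atomic<Data*> g_ptr;   // assumed to already point at published Data

int reader() {
    Data* p = g_ptr.load(std::memory_order_consume);
    int v = p->payload;     // dependency-ordered after the load

    // Come back later and re-read the pointer. If it still points at the
    // same Data, the update protocol guarantees the contents are unchanged,
    // so we strip the dependency and let the implementation reuse data it
    // has already read instead of re-fetching it from memory.
    Data* q = g_ptr.load(std::memory_order_consume);
    if (q == p)
        return v + std::kill_dependency(q)->payload;  // may use a cached copy
    return v;
}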