Strange optimization? in `libuv`. Please explain [duplicate] - c++

This question already has answers here:
C optimization: conditional store to avoid dirtying a cache line
libuv contains the following code in core.c:uv_run():
/* The if statement lets the compiler compile it to a conditional store.
 * Avoids dirtying a cache line.
 */
if (loop->stop_flag != 0)
    loop->stop_flag = 0;
What does this mean? Is it some kind of optimization? Why didn't they simply assign 0?

I would argue this optimization is bad. For example, gcc with -O3 gives the following code:
foo():
    movl    stop_flag(%rip), %eax
    testl   %eax, %eax
    je      .L3
    movl    $0, stop_flag(%rip)
.L3:
    ret
stop_flag:
    .zero   4
As you can see, there is no conditional move, only a branch, and I am sure a branch misprediction is far worse than dirtying a cache line.

Yes, it is just what the comment says. If the flag is already 0, there is no need to write any data to memory, which avoids dirtying the cache line that holds the flag. This will provide added value only in extremely time-critical applications.
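As a standalone illustration (my own sketch, not libuv's code), the two forms look like this:

// Conditional version: the store, and the cache-line dirtying and
// coherency traffic that come with it, happen only when the flag
// actually needs to be reset.
void clear_flag_conditional(unsigned* flag) {
    if (*flag != 0)
        *flag = 0;
}

// Unconditional version: always stores, so the line is dirtied (and must
// be written back / invalidated in other cores' caches) even when it
// already contained 0.
void clear_flag_unconditional(unsigned* flag) {
    *flag = 0;
}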

Related

Clobbered memory in two inline assembly calls vs in one inline assembly call?

This question follows this one, considering a GCC-compatible compiler and an x86-64 architecture.
I am wondering if there is any difference between option 1, option 2, and option 3 below. Would the result be the same in all contexts, or would it differ? And if so, what would be the difference?
// Option 1
asm volatile("":::"memory");
asm volatile("CPUID":"=a"(eax),"=b"(ebx),"=c"(ecx),"=d"(edx):"0"(level):);
and
// Option 2
asm volatile("CPUID":"=a"(eax),"=b"(ebx),"=c"(ecx),"=d"(edx):"0"(level):);
asm volatile("":::"memory");
and
// Option 3
asm volatile("CPUID":"=a"(eax),"=b"(ebx),"=c"(ecx),"=d"(edx):"0"(level):"memory");
Options 1 & 2 would let the CPUID itself reorder with unrelated non-volatile loads/stores (in one direction or the other). This is very likely not what you want.
You could put a memory barrier on both sides of CPUID, but it's certainly better to just make CPUID a memory barrier itself.
As Jester points out, option 1 would force a reload of level from memory if its address had ever been passed outside the function, or if it is a global or static.
(Or whatever the exact criterion is that decides whether a C variable could be read or written by asm that uses a "memory" clobber. I think it's essentially the same criterion the optimizer uses to decide whether a variable can be kept in a register across a call to an opaque non-inline function, so purely local variables that haven't had their address passed anywhere, and that aren't inputs to the asm statement, can still live in registers.)
For example (Godbolt compiler explorer):
void foo(int level){
    int eax, ebx, ecx, edx;
    asm volatile("":::"memory");
    asm volatile("CPUID"
        : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
        : "0"(level)
        :
    );
}
# x86-64 gcc7.3 -O3 -fverbose-asm
    pushq   %rbx        # rbx is call-preserved, but we clobber it.
    movl    %edi, %eax  # level, eax
    CPUID
    popq    %rbx
    ret
Notice the lack of a spill/reload of the function arg.
Normally I'd use Intel syntax, but with inline asm it's a good idea to always use AT&T unless you completely hate AT&T syntax or don't know it.
Even if level started in memory (as with the i386 System V calling convention and its stack args), the compiler still decides that nothing else (including the asm statement with a memory clobber) could reference it. But how do we tell whether the load was merely delayed until after the barrier? Modify the function arg before the barrier, then use it after:
void modify_level(int level){
    level += 1;  // modify level before the barrier
    int eax, ebx, ecx, edx;
    asm volatile("#mem barrier here":::"memory");
    asm volatile("CPUID"  // then read it after
        : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
        : "0"(level):);
}
The asm output from gcc -m32 -O3 -fverbose-asm is:
modify_level(int):
    pushl   %ebx
#mem barrier here
    movl    8(%esp), %eax   # level, tmp97
    addl    $1, %eax        #, level
    CPUID
    popl    %ebx
    ret
Notice that the compiler let level++ reorder across the memory barrier, because it's a local variable.
Godbolt filters hand-written asm comments along with compiler-generated comment-only lines, so I disabled the comment filter and found the mem barrier. You might want to remove -fverbose-asm to get less noise. Or use a non-comment string for the mem barrier: it doesn't have to assemble if you're just looking at the compiler's asm output (unless you're using clang, which has the assembler built in).
BTW, the original version of your question didn't compile: you left out the empty string as asm template. asm(:::"memory"). The output, input, and clobber sections can be empty, but the asm instruction string is not optional.
Fun fact, you can put asm comments in the string:
asm volatile("# memory barrier here":::"memory");
gcc fills in any %whatever things in the string template as it writes asm output, so you can even do stuff like "CPUID # %%0 was in %0" and see what gcc chose for your "dummy" args that are otherwise unmentioned in the asm template. (This is more interesting for dummy memory input/output operands to tell the compiler which memory you read/write instead of using a "memory" clobber, when you give the asm statement a pointer.)
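As an illustration of that last point, here's a hedged sketch of my own (function and names are mine) of the dummy-operand pattern: the "+m" operand is never referenced in the template, but it tells the compiler exactly which object the asm may read or write, instead of a blanket "memory" clobber:

void store_zero(int *p)
{
    asm("movl $0, (%1)"
        : "+m"(*p)   // dummy operand: declares *p read/written by the asm
        : "r"(p));   // the pointer itself, in a register
}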

Visual C++ optimization options - how to improve the code output?

Are there any options (other than /O2) to improve the Visual C++ code output? The MSDN documentation is quite bad in this regard.
Note that I'm not asking about project-wide settings (link-time optimization, etc...). I'm only interested in this particular example.
The fairly simple C++11 code looks like this:
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3, 4};
    int sum = 0;
    for(int i = 0; i < v.size(); i++) {
        sum += v[i];
    }
    return sum;
}
Clang's output with libc++ is quite compact:
main:                      # @main
    mov     eax, 10
    ret
Visual C++ output, on the other hand, is a multi-page mess.
Am I missing something here or is VS really this bad?
Compiler explorer link:
https://godbolt.org/g/GJYHjE
Unfortunately, it's difficult to greatly improve the Visual C++ output in this case, even with more aggressive optimization flags. Several factors contribute to the inefficiency, including the lack of certain compiler optimizations and the structure of Microsoft's implementation of <vector>.
Inspecting the generated assembly, Clang does an outstanding job optimizing this code. Specifically, compared to VS, Clang performs very effective constant propagation, function inlining (and, consequently, dead code elimination), and new/delete optimization.
Constant Propagation
In the example, the vector is statically initialized:
std::vector<int> v = {1, 2, 3, 4};
Normally, the compiler will store the constants 1, 2, 3, 4 in data memory, and in the for loop will load one value at a time, starting from the low address where 1 is stored, adding each value to the sum.
Here's the abbreviated VS code for doing this:
movdqa  xmm0, XMMWORD PTR __xmm@00000004000000030000000200000001
...
movdqu  XMMWORD PTR $T1[rsp], xmm0   ; Store integers 1, 2, 3, 4 in memory
...
$LL4@main:
    add     ebx, DWORD PTR [rdx]     ; loop and sum the values
    lea     rdx, QWORD PTR [rdx+4]
    inc     r8d
    movsxd  rax, r8d
    cmp     rax, r9
    jb      SHORT $LL4@main
Clang, however, is clever enough to realize that the sum can be calculated in advance. My best guess is that it replaces the loads of the constants from memory with mov-immediate operations into registers (propagates the constants), and then combines them into the result of 10. This has the useful side effect of breaking dependencies, and since the addresses are no longer loaded from, the compiler is free to remove everything else as dead code.
Clang seems to be unique in doing this - neither VS nor GCC was able to precompute the vector accumulation result in advance.
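As a standalone illustration of the same folding (my own snippet, with the heap allocation taken out of the picture):

// With the values visible as compile-time constants, the loads can be
// propagated and the whole accumulation folded; optimizing compilers
// typically reduce this function to `mov eax, 10` / `ret`.
int sum4() {
    const int a[4] = {1, 2, 3, 4};
    int s = 0;
    for (int i = 0; i < 4; ++i)
        s += a[i];
    return s;
}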
New/Delete Optimization
Compilers conforming to C++14 are allowed to omit calls to new and delete under certain conditions, specifically when the number of allocation calls is not part of the observable behavior of the program (standard paper N3664).
This has already generated much discussion on SO:
clang vs gcc - optimization including operator new
Is the compiler allowed to optimize out heap memory allocations?
Optimization of raw new[]/delete[] vs std::vector
Clang invoked with -std=c++14 -stdlib=libc++ indeed performs this optimization and eliminates the calls to new and delete, which do carry side effects, but supposedly do not affect the observable behaviour of the program. With -stdlib=libstdc++, Clang is stricter and keeps the calls to new and delete - although, by looking at the assembly, it's clear they are not really needed.
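As a minimal standalone illustration of the allowance (my own snippet, not from the question): under N3664, a compiler is permitted to fold this entire function down to returning 42, and clang with libc++ performs exactly this kind of elision:

int alloc_roundtrip() {
    int* p = new int(42);  // the allocation count is not observable here...
    int v = *p;
    delete p;              // ...so the new/delete pair may be elided entirely
    return v;
}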
Now, when inspecting the main code generated by VS, we can find there two function calls (with the rest of vector construction and iteration code inlined into main):
call std::vector<int,std::allocator<int> >::_Range_construct_or_tidy<int const * __ptr64>
and
call void __cdecl operator delete(void * __ptr64)
The first is used for allocating the vector, and the second for deallocating it, and practically all the other functions in the VS output are pulled in by these function calls. This hints that Visual C++ does not optimize away calls to allocation functions (for C++14 conformance we should add the /std:c++14 flag, but the results are the same).
This blog post (May 10, 2017) from the Visual C++ team confirms that this optimization is indeed not implemented. Searching the page for N3664 shows that "Avoiding/fusing allocations" is at status N/A, and the linked comment says:
[E] Avoiding/fusing allocations is permitted but not required. For the time being, we’ve chosen not to implement this.
Combining new/delete optimization and constant propagation, it's easy to see the impact of these two optimizations in this Compiler Explorer 3-way comparison of Clang with -stdlib=libc++, Clang with -stdlib=libstdc++, and GCC.
STL Implementation
VS has its own STL implementation, structured very differently from libc++ and libstdc++, and that seems to contribute heavily to the inferior code generation. While the VS STL has some very useful features, such as checked iterators and iterator debugging hooks (_ITERATOR_DEBUG_LEVEL), it gives the general impression of being heavier and less efficient than libstdc++.
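On a related note, a hypothetical experiment (my suggestion, not something the answer tried) is to rule out that debugging machinery when comparing debug-configuration codegen: _ITERATOR_DEBUG_LEVEL can be forced to 0, provided the macro is defined before any standard header and consistently across the whole binary:

#define _ITERATOR_DEBUG_LEVEL 0  // assumption: MSVC build; 0 disables checked/debug iterators
#include <vector>                // must come after the define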
To isolate the impact of the <vector> implementation, an interesting experiment is to compile with Clang combined with the VS header files. Indeed, using Clang 5.0.0 with the Visual Studio 2015 headers results in the following code generation - clearly, the STL implementation has a huge impact!
main: # @main
.Lfunc_begin0:
.Lcfi0:
.seh_proc main
.seh_handler __CxxFrameHandler3, @unwind, @except
# BB#0: # %.lr.ph
pushq %rbp
.Lcfi1:
.seh_pushreg 5
pushq %rsi
.Lcfi2:
.seh_pushreg 6
pushq %rdi
.Lcfi3:
.seh_pushreg 7
pushq %rbx
.Lcfi4:
.seh_pushreg 3
subq $72, %rsp
.Lcfi5:
.seh_stackalloc 72
leaq 64(%rsp), %rbp
.Lcfi6:
.seh_setframe 5, 64
.Lcfi7:
.seh_endprologue
movq $-2, (%rbp)
movl $16, %ecx
callq "??2#YAPEAX_K#Z"
movq %rax, -24(%rbp)
leaq 16(%rax), %rcx
movq %rcx, -8(%rbp)
movups .L.ref.tmp(%rip), %xmm0
movups %xmm0, (%rax)
movq %rcx, -16(%rbp)
movl 4(%rax), %ebx
movl 8(%rax), %esi
movl 12(%rax), %edi
.Ltmp0:
leaq -24(%rbp), %rcx
callq "?_Tidy#?$vector#HV?$allocator#H#std###std##IEAAXXZ"
.Ltmp1:
# BB#1: # %"\01??1?$vector@HV?$allocator@H@std@@@std@@QEAA@XZ.exit"
addl %ebx, %esi
leal 1(%rdi,%rsi), %eax
addq $72, %rsp
popq %rbx
popq %rdi
popq %rsi
popq %rbp
retq
.seh_handlerdata
.long ($cppxdata$main)@IMGREL
.text
Update - Visual Studio 2017
In Visual Studio 2017, <vector> has seen a major overhaul, as announced on this blog post from the Visual C++ team. Specifically, it mentions the following optimizations:
Eliminated unnecessary EH logic. For example, vector’s copy assignment operator had an unnecessary try-catch block. It just has to provide the basic guarantee, which we can achieve through proper action sequencing.
Improved performance by avoiding unnecessary rotate() calls. For example, emplace(where, val) was calling emplace_back() followed by rotate(). Now, vector calls rotate() in only one scenario (range insertion with input-only iterators, as previously described).
Improved performance with stateful allocators. For example, move construction with non-equal allocators now attempts to activate our memmove() optimization. (Previously, we used make_move_iterator(), which had the side effect of inhibiting the memmove() optimization.) Note that a further improvement is coming in VS 2017 Update 1, where move assignment will attempt to reuse the buffer in the non-POCMA non-equal case.
Curious, I went back to test this. Building the example in Visual Studio 2017 still results in a multi-page assembly listing with many function calls, so even if code generation improved, the improvement is difficult to notice.
However, when building with clang 5.0.0 and Visual Studio 2017 headers, we get the following assembly:
main: # @main
.Lcfi0:
.seh_proc main
# BB#0:
subq $40, %rsp
.Lcfi1:
.seh_stackalloc 40
.Lcfi2:
.seh_endprologue
movl $16, %ecx
callq "??2#YAPEAX_K#Z" ; void * __ptr64 __cdecl operator new(unsigned __int64)
movq %rax, %rcx
callq "??3#YAXPEAX#Z" ; void __cdecl operator delete(void * __ptr64)
movl $10, %eax
addq $40, %rsp
retq
.seh_handlerdata
.text
Note the movl $10, %eax instruction - with VS 2017's <vector>, clang was able to collapse everything, precompute the result of 10, and keep only the calls to new and delete.
I'd say that is pretty amazing!
Function Inlining
Function inlining is probably the single most important optimization in this example. By collapsing the code of called functions into their call sites, the compiler can perform further optimizations on the merged code; in addition, removing function calls reduces call overhead and removes optimization barriers.
Inspecting the generated assembly for VS and comparing the code before and after inlining (Compiler Explorer), we can see that most vector functions were indeed inlined, except for the allocation and deallocation functions. In particular, there are calls to memmove, which result from inlining some higher-level functions, such as _Uninitialized_copy_al_unchecked.
memmove is a library function, and therefore cannot be inlined. However, clang has a clever way around this - it replaces the call to memmove with a call to __builtin_memmove. __builtin_memmove is a builtin/intrinsic function, which has the same functionality as memmove, but as opposed to the plain function call, the compiler generates code for it and embeds it into the calling function. Consequently, the code could be further optimized inside the calling function and eventually removed as dead code.
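To see the builtin in isolation, here's a small sketch of my own: for a small compile-time-constant size, gcc and clang typically expand the builtin inline rather than emitting a library call:

void copy4(int* dst, const int* src) {
    // for a constant 16-byte size, this typically expands inline
    // (e.g. to a load/store pair) instead of generating `call memmove`
    __builtin_memmove(dst, src, 4 * sizeof(int));
}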
Summary
To conclude, Clang is clearly superior to VS in this example, thanks both to high-quality optimizations and to a more efficient vector implementation. When using the same header files for Visual C++ and clang (the Visual Studio 2017 headers), Clang beats Visual C++ hands down.
While writing this answer, I couldn't help thinking: what would we do without Compiler Explorer? Thanks, Matt Godbolt, for this amazing tool!

Using base pointer register in C++ inline asm

I want to be able to use the base pointer register (%rbp) within inline asm. A toy example of this is like so:
void Foo(int &x)
{
    asm volatile ("pushq %%rbp;"          // 'prologue'
                  "movq %%rsp, %%rbp;"    // 'prologue'
                  "subq $12, %%rsp;"      // make room
                  "movl $5, -12(%%rbp);"  // some asm instruction
                  "movq %%rbp, %%rsp;"    // 'epilogue'
                  "popq %%rbp;"           // 'epilogue'
                  : : : );
    x = 5;
}

int main()
{
    int x;
    Foo(x);
    return 0;
}
I hoped that, since I am using the usual prologue/epilogue function-calling idiom of pushing and popping the old %rbp, this would be OK. However, it segfaults when I try to access x after the inline asm.
The GCC-generated assembly code (slightly stripped-down) is:
_Foo:
    pushq   %rbp
    movq    %rsp, %rbp
    movq    %rdi, -8(%rbp)
    # INLINEASM
    pushq   %rbp;           // prologue
    movq    %rsp, %rbp;     // prologue
    subq    $12, %rsp;      // make room
    movl    $5, -12(%rbp);  // some asm instruction
    movq    %rbp, %rsp;     // epilogue
    popq    %rbp;           // epilogue
    # /INLINEASM
    movq    -8(%rbp), %rax
    movl    $5, (%rax)      // x = 5;
    popq    %rbp
    ret
main:
    pushq   %rbp
    movq    %rsp, %rbp
    subq    $16, %rsp
    leaq    -4(%rbp), %rax
    movq    %rax, %rdi
    call    _Foo
    movl    $0, %eax
    leave
    ret
Can anyone tell me why this seg faults? It seems that I somehow corrupt %rbp but I don't see how. Thanks in advance.
I'm running GCC 4.8.4 on 64-bit Ubuntu 14.04.
See the bottom of this answer for a collection of links to other inline-asm Q&As.
Your code is broken because you step on the red-zone below RSP (with push) where GCC was keeping a value.
What are you hoping to accomplish with inline asm? If you want to learn inline asm, learn to use it to make efficient code, rather than horrible stuff like this. If you want to write function prologues and push/pop to save/restore registers, you should write whole functions in asm. (Then you can easily use nasm or yasm, rather than the less-preferred-by-most AT&T syntax with GNU assembler directives; see footnote 1.)
GNU inline asm is hard to use, but it allows you to mix custom asm fragments into C and C++ while letting the compiler handle register allocation and any saving/restoring that's necessary. Sometimes the compiler can avoid the save and restore entirely by giving you a register that's allowed to be clobbered. Without volatile, it can even hoist asm statements out of loops when the input would be the same (i.e. unless you use volatile, the outputs are assumed to be a "pure" function of the inputs).
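A small sketch (my own example) of that "pure function" assumption in action:

// Without volatile, two asm statements with identical templates and
// identical inputs may be CSEd into one:
static inline int add_one(int x) {
    int r;
    asm("add $1, %0" : "=r"(r) : "0"(x));  // no volatile: assumed pure
    return r;
}

int twice_plus_two(int x) {
    return add_one(x) + add_one(x);  // gcc may emit the add only once
}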
If you're just trying to learn asm in the first place, GNU inline asm is a terrible choice. You have to fully understand almost everything that's going on with the asm, and understand what the compiler needs to know, to write correct input/output constraints and get everything right. Mistakes will lead to clobbering things and hard-to-debug breakage. The function-call ABI is a much simpler and easier to keep track of boundary between your code and the compiler's code.
Why this breaks
You compiled with -O0, so gcc's code spills the function parameter from %rdi to a location on the stack. (This could happen in a non-trivial function even with -O3).
Since the target ABI is the x86-64 SysV ABI, it uses the "Red Zone" (128 bytes below %rsp that even asynchronous signal handlers aren't allowed to clobber), instead of wasting an instruction decrementing the stack pointer to reserve space.
It stores the 8B pointer function arg at -8(rsp_at_function_entry). Then your inline asm pushes %rbp, which decrements %rsp by 8 and then writes there, clobbering the low 32b of &x (the pointer).
When your inline asm is done,
gcc reloads -8(%rbp) (which has been overwritten with %rbp) and uses it as the address for a 4B store.
Foo returns to main with %rbp = (upper32)|5 (orig value with the low 32 set to 5).
main runs leave: %rsp = (upper32)|5
main runs ret with %rsp = (upper32)|5, reading the return address from virtual address (void*)(upper32|5), which from your comment is 0x7fff0000000d.
I didn't check with a debugger; one of those steps might be slightly off, but the problem is definitely that you clobber the red zone, leading to gcc's code trashing the stack.
Even adding a "memory" clobber doesn't get gcc to avoid using the red zone, so it looks like allocating your own stack memory from inline asm is just a bad idea. (A memory clobber means you might have written some memory you're allowed to write to, e.g. a global variable or something pointed-to by a global, not that you might have overwritten something you're not supposed to.)
If you want to use scratch space from inline asm, you should probably declare an array as a local variable and use it as an output-only operand (which you never read from).
AFAIK, there's no syntax for declaring that you modify the red-zone, so your only options are:
use an "=m" output operand (possibly an array) for scratch space; the compiler will probably fill in that operand with an addressing mode relative to RBP or RSP. You can index into it with constants like 4 + %[tmp] or whatever. You might get an assembler warning from 4 + (%rsp) but not an error.
skip over the red zone with add $-128, %rsp / sub $-128, %rsp around your code (see the sketch after this list). This is necessary if you want to use an unknown amount of extra stack space, e.g. push in a loop, or make a function call. Yet another reason to deref a function pointer in pure C, not inline asm.
compile with -mno-red-zone (I don't think you can enable that on a per-function basis, only per-file)
Don't use scratch space in the first place. Tell the compiler what registers you clobber and let it save them.
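Here's a minimal sketch of the second option (my own code, not from the question); -128 is used because it fits in a sign-extended imm8, and %rsp is restored before the asm statement ends:

void use_stack_scratch(void)
{
    asm volatile("add  $-128, %%rsp \n\t"  // skip over the red zone
                 "pushq $123        \n\t"  // now pushing is safe
                 "popq  %%rax       \n\t"
                 "sub  $-128, %%rsp \n\t"  // restore %rsp before exiting
                 ::: "rax", "memory");
}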
Here's what you should have done:
void Bar(int &x)
{
    int tmp;
    long tmplong;
    asm ("lea -16 + %[mem1], %%rbp\n\t"
         "imul $10, %%rbp, %q[reg1]\n\t"  // q modifier: 64bit name.
         "add %k[reg1], %k[reg1]\n\t"     // k modifier: 32bit name
         "movl $5, %[mem1]\n\t"           // some asm instruction writing to mem
         : [mem1] "=m" (tmp), [reg1] "=r" (tmplong)  // tmp vars -> tmp regs / mem for use inside asm
         :
         : "%rbp"  // tell compiler it needs to save/restore %rbp.
        // gcc refuses to let you clobber %rbp with -fno-omit-frame-pointer (the default at -O0)
        // clang lets you, but memory operands still use an offset from %rbp, which will crash!
        // gcc memory operands still reference %rsp, so don't modify it. Declaring a clobber on %rsp does nothing.
    );
    x = 5;
}
Note the push/pop of %rbp in the code outside the #APP / #NO_APP section, emitted by gcc. Also note that the scratch memory it gives you is in the red zone. If you compile with -O0, you'll see that it's at a different position from where it spills &x.
To get more scratch regs, it's better to just declare more output operands that are never used by the surrounding non-asm code. That leaves register allocation to the compiler, so it can be different when inlined into different places. Choosing a register ahead of time and declaring a clobber only makes sense if you need a specific register (e.g. the shift count in %cl). Of course, an input constraint like "c" (count) gets gcc to put the count in rcx/ecx/cx/cl for you, so you don't need to emit a potentially redundant mov %[count], %%ecx.
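For example (my own sketch), a variable shift count can be pinned to %cl with the "c" constraint, so no extra mov is needed:

unsigned shr_var(unsigned x, int count)
{
    asm("shr %%cl, %0"
        : "+r"(x)
        : "c"(count));  // "c" puts count in rcx/ecx/cl directly
    return x;
}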
If this looks too complicated, don't use inline asm. Either lead the compiler to the asm you want with C that's like the optimal asm, or write a whole function in asm.
When using inline asm, keep it as small as possible: ideally just the one or two instructions that gcc isn't emitting on its own, with input/output constraints to tell it how to get data into / out of the asm statement. This is what it's designed for.
Rule of thumb: if your GNU C inline asm starts or ends with a mov, you're usually doing it wrong and should have used a constraint instead.
Footnotes:
1. You can use GAS's Intel syntax in inline asm by building with -masm=intel (in which case your code will only work with that option), or use dialect alternatives so the code works whether the compiler emits Intel or AT&T syntax. But that doesn't change the directives, and GAS's Intel syntax is not well documented. (It's like MASM, not NASM, though.) I don't really recommend it unless you really hate AT&T syntax.
Inline asm links:
x86 wiki. (The tag wiki also links to this question, for this collection of links)
The inline-assembly tag wiki
The manual. Read this. Note that inline asm was designed to wrap single instructions that the compiler doesn't normally emit. That's why it's worded to say things like "the instruction", not "the block of code".
A tutorial
Looping over arrays with inline assembly Using r constraints for pointers/indices and using your choice of addressing mode, vs. using m constraints to let gcc choose between incrementing pointers vs. indexing arrays.
How can I indicate that the memory *pointed* to by an inline ASM argument may be used? (pointer inputs in registers do not imply that the pointed-to memory is read and/or written, so it might not be in sync if you don't tell the compiler).
In GNU C inline asm, what're the modifiers for xmm/ymm/zmm for a single operand?. Using %q0 to get %rax vs. %w0 to get %ax. Using %g[scalar] to get %zmm0 instead of %xmm0.
Efficient 128-bit addition using carry flag Stephen Canon's answer explains a case where an early-clobber declaration is needed on a read+write operand. Also note that x86/x86-64 inline asm doesn't need to declare a "cc" clobber (the condition codes, aka flags); it's implicit. (gcc6 introduces syntax for using flag conditions as input/output operands. Before that you have to setcc a register that gcc will emit code to test, which is obviously worse.)
Questions about the performance of different implementations of strlen: my answer on a question with some badly-used inline asm, with an answer similar to this one.
llvm reports: unsupported inline asm: input with type 'void *' matching output with type 'int': Using offsetable memory operands (in x86, all effective addresses are offsettable: you can always add a displacement).
When not to use inline asm, with an example of 32b/32b => 32b division and remainder that the compiler can already do with a single div. (The code in the question is an example of how not to use inline asm: many instructions for setup and save/restore that should be left to the compiler by writing proper in/out constraints.)
MSVC inline asm vs. GNU C inline asm for wrapping a single instruction, with a correct example of inline asm for 64b/32b=>32bit division. MSVC's design and syntax require a round trip through memory for inputs and outputs, making it terrible for short functions. It's also "never very reliable" according to Ross Ridge's comment on that answer.
Using x87 floating point, and commutative operands. Not a great example, because I didn't find a way to get gcc to emit ideal code.
Some of those re-iterate some of the same stuff I explained here. I didn't re-read them to try to avoid redundancy, sorry.
In x86-64, the stack pointer needs to be aligned to 8 bytes.
This:
subq $12, %rsp; // make room
should be:
subq $16, %rsp; // make room

Is std::atomic_compare_exchange_weak thread-unsafe by design?

It was brought up on the cppreference atomic_compare_exchange Talk page that existing implementations of std::atomic_compare_exchange_weak compute the boolean result of the CAS with a separate non-atomic compare instruction, e.g.
lock
cmpxchgq %rcx, (%rsp)
cmpq %rdx, %rax
which (Edit: apologies for the red herring)
breaks CAS loops such as Concurrency in Action's listing 7.2:
while (!head.compare_exchange_weak(new_node->next, new_node));
The specification (29.6.5[atomics.types.operations.req]/21-22) seems to imply that the result of the comparison must be a part of the atomic operation:
Effects: atomically compares ...
Returns: the result of the comparison
but is it actually implementable? Should we file bug reports to the vendors or to the LWG?
TL;DR: atomic_compare_exchange_weak is safe by design, but actual implementations are buggy.
Here's the code that Clang actually generates for this little snippet:
#include <atomic>

struct node {
    int data;
    node* next;
};

std::atomic<node*> head;

void push(int data) {
    node* new_node = new node{data};
    new_node->next = head.load(std::memory_order_relaxed);
    while (!head.compare_exchange_weak(new_node->next, new_node,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {}
}
Result:
movl %edi, %ebx
# Allocate memory
movl $16, %edi
callq _Znwm
movq %rax, %rcx
# Initialize with data and 0
movl %ebx, (%rcx)
movq $0, 8(%rcx) ; dead store, should have been optimized away
# Overwrite next with head.load
movq head(%rip), %rdx
movq %rdx, 8(%rcx)
.align 16, 0x90
.LBB0_1: # %while.cond
# =>This Inner Loop Header: Depth=1
# put value of head into comparand/result position
movq %rdx, %rax
# atomic operation here, compares second argument to %rax, stores first argument
# in second if same, and second in %rax otherwise
lock
cmpxchgq %rcx, head(%rip)
# unconditionally write old value back to next - wait, what?
movq %rax, 8(%rcx)
# check if cmpxchg modified the result position
cmpq %rdx, %rax
movq %rax, %rdx
jne .LBB0_1
The comparison is perfectly safe: it's just comparing registers. However, the whole operation is not safe.
The critical point is this: the description of compare_exchange_(weak|strong) says:
Atomically [...] if true, replace the contents of the memory pointed to by this with that in desired, and if false, updates the contents of the memory in expected with the contents of the memory pointed to by this
Or in pseudo-code:
if (*this == expected)
    *this = desired;
else
    expected = *this;
Note that expected is only written to if the comparison is false, and *this is only written to if comparison is true. The abstract model of C++ does not allow an execution where both are written to. This is important for the correctness of push above, because if the write to head happens, suddenly new_node points to a location that is visible to other threads, which means other threads can start reading next (by accessing head->next), and if the write to expected (which aliases new_node->next) also happens, that's a race.
And Clang writes to new_node->next unconditionally. In the case where the comparison is true, that's an invented write.
This is a bug in Clang. I don't know whether GCC does the same thing.
In addition, the wording of the standard is suboptimal. It claims that the entire operation must happen atomically, but this is impossible, because expected is not an atomic object; writes to there cannot happen atomically. What the standard should say is that the comparison and the write to *this happen atomically, but the write to expected does not. But this isn't that bad, because no one really expects that write to be atomic anyway.
So there should be a bug report for Clang (and possibly GCC), and a defect report for the standard.
I was the one who originally found this bug. For the last few days I have been e-mailing Anthony Williams regarding this issue and vendor implementations. I didn't realize Cubbi had raised a Stack Overflow question. It's not just Clang or GCC; every vendor that matters is broken. Anthony Williams, also the author of Just::Thread (a C++11 thread and atomics library), confirmed that his library is implemented correctly (the only known correct implementation).
Anthony has raised a GCC bug report http://gcc.gnu.org/bugzilla/show_bug.cgi?id=60272
Simple example:
#include <atomic>

struct Node { Node* next; };

void Push(std::atomic<Node*>& head, Node* node)  // by reference: std::atomic is not copyable
{
    node->next = head.load();
    while (!head.compare_exchange_weak(node->next, node))
        ;
}
g++ 4.8 [assembler]
mov rdx, rdi
mov rax, QWORD PTR [rdi]
mov QWORD PTR [rsi], rax
.L3:
mov rax, QWORD PTR [rsi]
lock cmpxchg QWORD PTR [rdx], rsi
mov QWORD PTR [rsi], rax !!!!!!!!!!!!!!!!!!!!!!!
jne .L3
rep; ret
clang 3.3 [assembler]
movq (%rdi), %rcx
movq %rcx, (%rsi)
.LBB0_1:
movq %rcx, %rax
lock
cmpxchgq %rsi, (%rdi)
movq %rax, (%rsi) !!!!!!!!!!!!!!!!!!!!!!!
cmpq %rcx, %rax !!!!!!!!!!!!!!!!!!!!!!!
movq %rax, %rcx
jne .LBB0_1
ret
icc 13.0.1 [assembler]
movl %edx, %ecx
movl (%rsi), %r8d
movl %r8d, %eax
lock
cmpxchg %ecx, (%rdi)
movl %eax, (%rsi) !!!!!!!!!!!!!!!!!!!!!!!
cmpl %eax, %r8d !!!!!!!!!!!!!!!!!!!!!!!
je ..B1.7
..B1.4:
movl %edx, %ecx
movl %eax, %r8d
lock
cmpxchg %ecx, (%rdi)
movl %eax, (%rsi) !!!!!!!!!!!!!!!!!!!!!!!
cmpl %eax, %r8d !!!!!!!!!!!!!!!!!!!!!!!
jne ..B1.4
..B1.7:
ret
Visual Studio 2012 [No need to check assembler, MS uses _InterlockedCompareExchange !!!]
inline int _Compare_exchange_seq_cst_4(volatile _Uint4_t *_Tgt, _Uint4_t *_Exp, _Uint4_t _Value)
{   /* compare and exchange values atomically with
       sequentially consistent memory order */
    int _Res;

    _Uint4_t _Prev = _InterlockedCompareExchange((volatile long *)_Tgt, _Value, *_Exp);

    if (_Prev == *_Exp)    !!!!!!!!!!!!!!!!!!!!!!!
        _Res = 1;
    else
    {   /* copy old value */
        _Res = 0;
        *_Exp = _Prev;
    }
    return (_Res);
}
[...]
breaks CAS loops such as Concurrency in Action's listing 7.2:
while (!head.compare_exchange_weak(new_node->next, new_node));
The specification (29.6.5[atomics.types.operations.req]/21-22) seems to
imply that the result of the comparison must be a part of the atomic
operation:
[...]
The issue with this code and the specification is not whether the atomicity of compare_exchange needs to extend beyond just the comparison and exchange itself to returning the result of the comparison or assigning to the expected parameter. That is, the code may still be correct without the store to expected being atomic.
What causes the above code to be potentially racy is that implementations may write to the expected parameter even after a successful exchange has been observed by other threads. The code is written with the expectation that, when the exchange succeeds, there is no write to expected that could produce a race.
The spec, as written, does appear to guarantee this expected behavior. (And indeed can be read as making the much stronger guarantee you describe, that the entire operation is atomic.) According to the spec, compare_exchange_weak:
Atomically, compares the contents of the memory pointed to by object
or by this for equality with that in expected, and if true, replaces
the contents of the memory pointed to by object or by this with that
in desired, and if false, updates the contents of the memory in
expected with the contents of the memory pointed to by object or by
this. [n4140 § 29.6.5 / 21] (N.B. The wording is unchanged between C++11 and C++14)
The problem is that it seems as though the actual language of the standard is stronger than the original intent of the proposal. Herb Sutter is saying that Concurrency in Action's usage was never really intended to be supported, and that updating expected was only intended to be done on local variables.
I don't see any current defect report on this. [See second update below] If this language is in fact stronger than intended, then presumably one will get filed. Either C++11's wording will be updated to guarantee the above code's expected behavior, making current implementations non-conformant, or the new wording will not guarantee this behavior, making the above code potentially result in undefined behavior. In that case I guess Anthony's book will need updating. What the committee will do about this, and whether actual implementations conform to the original intent (rather than the actual wording of the spec), is still an open question. [See update below]
For the purposes of writing code in the meantime, you'll have to take into account the actual behavior of implementations, whether conformant or not. Existing implementations may be 'buggy' in the sense that they don't implement the exact wording of the ISO spec, but they do operate as their implementers intended, and they can be used to write thread-safe code. [See update below]
So to answer your questions directly:
but is it actually implementable?
I believe that the actual wording of the spec is not reasonably implementable, and that it makes guarantees even stronger than Anthony's just::thread library provides. For example, the actual wording appears to require atomic operations on a non-atomic object. Anthony's slightly weaker interpretation, that the assignment to expected need not be atomic but must be conditioned on the failure of the exchange, is obviously implementable. Herb's even weaker interpretation is also obviously implementable, as that's what most libraries actually implement. [See update below]
Is std::atomic_compare_exchange_weak thread-unsafe by design?
The operation is not thread-unsafe, no matter whether it makes guarantees as strong as the actual wording of the spec or as weak as Herb Sutter indicates. It's simply that correct, thread-safe usage of the operation depends on what is guaranteed. The example code from Concurrency in Action is an unsafe usage of a compare_exchange that offers only Herb's weak guarantee, but it could be rewritten to work correctly with such an implementation, like so:
node* expected_head = head.load();
while (!head.compare_exchange_weak(expected_head, new_node)) {
    new_node->next = expected_head;
}
With this change the 'spurious' writes to expected are simply made to a local variable, and no longer produce any races. The write to new_node->next is now conditional upon the exchange having failed, and thus new_node->next is not visible to any other thread and may be safely updated. This code sample is safe both under current implementations and under stronger guarantees, so it should be future proof to any updates to C++11's atomics that resolve this issue.
Update:
Actual implementations (MSVC, gcc, and clang at least) have been updated to offer the guarantees under Anthony Williams' interpretation; that is, they have stopped inventing writes to expected in the case that the exchange succeeds.
https://llvm.org/bugs/show_bug.cgi?id=18899
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60272
https://connect.microsoft.com/VisualStudio/feedback/details/819819/std-atomic-compare-exchange-weak-has-spurious-write-which-can-cause-race-conditions
Update 2:
This defect report on this issue has been filed with the C++ committee. From the currently proposed resolution the committee does want to make stronger guarantees than provided by the implementations you checked (but not as strong as current wording which appears to guarantee atomic operations on non-atomic objects.) The draft for the next C++ standard (C++1z or 'C++17') has not yet adopted the improved wording.
Update 3: C++17 adopted the proposed resolution.
Those people don't seem to understand either the standard or the instructions.
First of all, std::atomic_compare_exchange_weak is not thread-unsafe by design. That is complete nonsense.
The design very clearly defines what the function does and which guarantees (including atomicity and memory ordering) it must provide.
Whether your program that uses this function is thread-safe as a whole is a different matter, but the function's semantics per se are certainly correct in the sense of an atomic compare-exchange (you can still write thread-unsafe code using any available thread-safe primitive, but that is a totally different story).
This particular function implements the "weak" version of a thread-safe compare-exchange operation which differs from the "non weak" version in that the implementation is allowed to generate code which may spuriously fail, if that gives a performance benefit (irrelevant on x86). Weak does not mean it's worse, it only means that it is allowable to fail more often on some platforms, if that gives an overall performance benefit.
The implementation is of course still required to work correctly. That is, if the compare-exchange fails -- whether by concurrency or spuriously -- it must be correctly reported back as having failed.
Second, the code generated by existing implementations has no bearing on the correctness or thread-safety of std::atomic_compare_exchange_weak. At best, if the generated instructions do not work correctly, this is an implementation issue, but it has nothing to do with the language construct. The language standard defines what behavior an implementation must provide; it is not responsible for implementations actually doing it correctly.
Third, there is no problem in the generated code. The x86 CMPXCHG instruction has a well-defined mode of operation. It compares the actual value with the expected value, and if the comparison is successful, it performs the swap. You know whether or not the operation was successful either by looking at EAX (or RAX in x64) or by the state of ZF.
What matters is that the atomic compare-exchange is atomic, and that's the case. Whatever you do with the result afterwards needs not be atomic (in your case, the CMP), since the state does not change any more. Either the swap was successful at that point, or it has failed. In either case, it's already "history".
std::atomic_compare_exchange_weak has different semantics than the underlying instruction, it returns a bool value. Therefore, you cannot always expect a 1:1 mapping to instructions. The compiler may have to generate additional instructions (and different ones depending on how you consume the result) to implement these semantics, but it really makes no difference for correctness.
The only thing one could arguably complain about is the fact that instead of directly using the already present state of ZF (with a Jcc or CMOVcc), it performs another comparison. But this is a performance issue (1 cycle wasted), not a correctness issue.
Quoting Duncan Forster from the linked page:
The important thing to remember is that the hardware implementation of CAS only returns 1 value (the old value) not two (old plus boolean)
So there's one instruction - the (atomic) CAS - which actually operates on memory, and then another instruction to convert the (atomically-assigned) result into the expected boolean.
Since the value in %rax was set atomically and can't then be affected by another thread, there is no race here.
The quote is wrong anyway, since ZF is also set depending on the CAS result (i.e., the instruction does return both the old value and the boolean). The fact that the flag isn't used might be a missed optimisation, or the cmpq might be faster, but it doesn't affect correctness.
For reference, consider decomposing compare_exchange_weak like this pseudocode:
T compare_exchange_weak_value(atomic<T> *obj, T *expected, T desired) {
    // setup ...
    lock cmpxchgq %rcx, (%rsp)   // actual CAS
    return %rax;                 // actual destination value
}

bool compare_exchange_weak_bool(atomic<T> *obj, T *expected, T desired) {
    // CAS is atomic
    T actual = compare_exchange_weak_value(obj, expected, desired);
    // now we figure out if it worked
    return actual == *expected;
}
Do you agree the CAS is properly atomic?
If the unconditional store to expected is really what you wanted to ask about (instead of the perfectly safe comparison), I agree with Sebastian that it's a bug.
For reference, you can work around it by forcing the unconditional store into a local, and making the potentially-visible store conditional again:
#include <atomic>

struct node {
    int data;
    node* next;
};

std::atomic<node*> head;

void push(int data) {
    node* new_node = new node{data};
    node* cur_head = head.load(std::memory_order_relaxed);
    do {
        new_node->next = cur_head;
    } while (!head.compare_exchange_weak(cur_head, new_node,
                                         std::memory_order_release,
                                         std::memory_order_relaxed));
}

Help understanding part of this generated assembly code

Can anyone explain the assembly GCC generates for the following C++ code? Especially the meaning of setg and test. Thanks!
.cpp code:
1 /*for loop*/
2 int main()
3 {
4     int floop_id;
5     for (floop_id = 100; floop_id >= 1; floop_id--)
6     {}
7     return 0;
8 }
assembly code:
3    push   %ebp
3    mov    %esp, %ebp
3    sub    $0x10, %esp
5    movl   $0x64, -0x4(%ebp)
5    jmp    8048457 <main+0x13>
5    subl   $0x1, -0x4(%ebp)
5    cmpl   $0x0, -0x4(%ebp)
5    setg   %al
5    test   %al, %al
7    mov    $0x0, %eax
8    leave
8    ret
cmpl $0x0,-0x4(%ebp); setg %al means: compare -0x4(%ebp) (floop_id in your code) against 0, and set %al to 1 if it is greater, or to 0 otherwise.
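For context, setg is simply how a compiler materializes the boolean result of a signed greater-than comparison; a minimal example of my own:

bool is_positive(int x) {
    return x > 0;  // typically compiles to: test/cmp, then setg %al
}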
test %al, %al here isn't doing anything, and I don't know why it's in the assembly. (Normally, testing a value against itself is used to set the flags according to the value - zero, positive, or negative - but the result isn't used here. Chances are it was going to feed a conditional branch to implement the loop, but since your loop is empty, the branch got removed.)
Your generated assembly doesn't contain the loop itself (apparently the compiler decided it wasn't needed), but it seems to contain some loose remains of that loop. There are bits that load 100 into a variable, subtract 1 from it, and compare it with 0. But there's no actual iteration in the code.
Trying to find any logic in this is a pointless exercise. The compiler obviously decided to remove the entire loop, but why it left some "debris" behind is not clear. I'd say that what is left in the code is harmless, but at the same time it has about as much meaning as the value of an uninitialized variable.
BTW, where does the unconditional jmp lead? It is not clear from your disassembly. Doesn't it just jump to line 7 right away?