c++ atomic: would a function call act as a memory barrier?

I'm reading the article Memory Ordering at Compile Time, which says:
In fact, the majority of function calls act as compiler barriers, whether they contain their own compiler barrier or not. This excludes inline functions, functions declared with the pure attribute, and cases where link-time code generation is used. Other than those cases, a call to an external function is even stronger than a compiler barrier, since the compiler has no idea what the function's side effects will be.
Is this a true statement? Think about this sample -
std::atomic<bool> flag{false};
int value = 0;

void th1() { // running in thread 1
    value = 1;
    // use atomic & release to prevent the statement above being reordered below
    flag.store(true, std::memory_order_release);
}

void th2() { // running in thread 2
    // use atomic & acquire to prevent assert(...) being reordered above
    while (!flag.load(std::memory_order_acquire)) {}
    assert(value == 1); // should never fail!
}
Then we can remove the atomic and replace it with a function call -
bool flag = false;
int value = 0;

void writeflag() {
    flag = true;
}

void readflag() {
    while (!flag) {}
}

void th1() {
    value = 1;
    writeflag(); // would the function call prevent reordering?
}

void th2() {
    readflag(); // would the function call prevent reordering?
    assert(value == 1); // would this fail???
}
Any idea?

A compiler barrier is not the same thing as a memory barrier. A compiler barrier prevents the compiler from moving code across the barrier. A memory barrier (loosely speaking) prevents the hardware from moving reads and writes across the barrier. For atomics you need both, and you also need to ensure that values don't get torn when read or written.
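For illustration, a minimal sketch of the two kinds of barrier, using the GCC-style asm statement as the compiler barrier (the function names are made up for this example):

#include <atomic>

int data = 0;

void compiler_barrier_only() {
    data = 1;
    asm volatile("" ::: "memory"); // compiler barrier: blocks compile-time
                                   // reordering, emits no CPU instruction
    data = 2;
}

void memory_barrier_too() {
    data = 1;
    std::atomic_thread_fence(std::memory_order_seq_cst); // memory barrier:
                                   // also constrains hardware reordering
    data = 2;
}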

Formally, no, if only because Link-Time Code Generation is a valid implementation choice and need not be optional.
There's also a second oversight, and that's escape analysis. The claim is that "the compiler has no idea what the function's side effects will be", but if no pointers to my local variables escape from my function, then the compiler does know for sure that no other function changes them.
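A small sketch of that escape-analysis point (external_function is hypothetical, defined in some other translation unit):

void external_function(); // opaque to this translation unit

int no_escape() {
    int local = 1;       // address never taken, so it cannot escape
    external_function(); // even this opaque call cannot touch 'local',
                         // so the compiler may move or elide stores to it
                         // across the call
    local = 2;
    return local;        // may legitimately compile to 'return 2;'
}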

In the second example, even if we assume no reordering of any kind, the behavior is undefined.
The writes and reads of the variable flag are not atomic, and there is a data race1. The absence of reordering doesn't guarantee that both threads never access the variable flag at the same time; this happens whenever one thread sits in the while loop in readflag reading flag while the other thread writes to flag in writeflag.
1 (Quoted from ISO/IEC 14882:2011(E), 1.10 Multi-threaded executions and data races, paragraph 21)
The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior.

You are confusing a memory barrier, which is used for inter-thread memory visibility, with a compiler barrier, which isn't a threading device at all, just a device (or trick) to prevent reordering of side effects by the compiler.
You need a memory barrier for your threading example.
You can use a compiler barrier to ensure that memory side effects are performed in a given order (on the local CPU) for other purposes, like benchmarking, getting around a type-aliasing violation, integrating assembly code, or signal handling (for a signal only handled in that same thread).
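For the signal-handling case specifically, the standard's compiler-only fence is std::atomic_signal_fence; a minimal sketch with illustrative variable names:

#include <atomic>
#include <csignal>

volatile std::sig_atomic_t step1 = 0;
volatile std::sig_atomic_t step2 = 0;

void prepare_for_signal() {
    step1 = 1;
    // Compiler-only barrier: keeps these stores ordered as observed by a
    // signal handler running in this same thread; emits no hardware fence.
    std::atomic_signal_fence(std::memory_order_release);
    step2 = 1;
}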

Related

Could the following code written for a Reentrant Lock be susceptible to an instruction reordering error?

I recently came across the following code while learning about Reentrant Locks in Lock-Free Concurrency:
class ReentrantLock32
{
    std::atomic<std::size_t> m_atomic;
    std::int32_t m_refCount;

public:
    ReentrantLock32() : m_atomic(0), m_refCount(0) {}

    void Acquire()
    {
        std::hash<std::thread::id> hasher;
        std::size_t tid = hasher(std::this_thread::get_id());
        if (m_atomic.load(std::memory_order_relaxed) != tid)
        {
            std::size_t unlockValue = 0;
            while (!m_atomic.compare_exchange_weak(
                unlockValue,
                tid,
                std::memory_order_relaxed,
                std::memory_order_relaxed))
            {
                unlockValue = 0;
                PAUSE();
            }
        }
        ++m_refCount;
        std::atomic_thread_fence(std::memory_order_acquire);
    }

    void Release()
    {
        std::atomic_thread_fence(std::memory_order_release);
        std::hash<std::thread::id> hasher;
        std::size_t tid = hasher(std::this_thread::get_id());
        std::size_t actual = m_atomic.load(std::memory_order_relaxed);
        assert(actual == tid);
        --m_refCount;
        if (m_refCount == 0)
        {
            m_atomic.store(0, std::memory_order_relaxed);
        }
    }
    //...
};
However, I'm unsure whether the release fence actually prevents later memory operations in the thread from being reordered before it, and whether the acquire fence prevents earlier memory operations from being reordered after it. If they don't, wouldn't it technically be possible for an optimisation to cause the line
if (m_refCount == 0)
to be succeeded by a complete and successful call to Acquire() on the same thread before the call to
m_atomic.store(0,std::memory_order_relaxed);
in which case the valid increment in the reordered Acquire() call would be overwritten by the delayed store() call?
When analyzing this code, it also occurred to me that there might be stale-data issues leading to duplicate locks, which is asked about here.
There is also another related question clarifying the potential order of memory operations around release fences here.
That can't happen.
The situation you mention takes place within a thread. A variable is always sequentially consistent with itself within a thread. (Otherwise it would be impossible to program.)
If, for example, m_atomic.store(0,std::memory_order_relaxed); is stuck in the store buffer, the CPU knows to look there for the load in this line: if (m_atomic.load(std::memory_order_relaxed) != tid)
Regardless of how relaxed a variable is, within a thread, optimizations aren't allowed to change the semantics of the source code. Atomics only exist to provide visibility ordering guarantees to other threads.
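A tiny sketch of that same-thread coherence guarantee (illustrative names):

#include <atomic>
#include <cassert>

std::atomic<int> m{0};

void same_thread() {
    m.store(42, std::memory_order_relaxed);
    // Within one thread, a later load of the same atomic variable always
    // observes this thread's own most recent store, even with relaxed
    // ordering (in hardware terms, the CPU forwards from its store buffer).
    assert(m.load(std::memory_order_relaxed) == 42);
}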
BTW, it's not accurate to say this:
the release fence doesn't preclude the possibility of a later operation in the thread preceding it
According to Jeff Preshing:
A release fence prevents the memory reordering of any read or write which precedes it in program order with any write which follows it in program order.
https://preshing.com/20130922/acquire-and-release-fences/
This means that, in theory, C++ acquire/release atomic thread fences are a bit stricter than the common notion of acquire/release memory barriers, which allow reordering in one direction.
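A minimal sketch of the fence placement Preshing describes, with illustrative names:

#include <atomic>
#include <cassert>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;
    // Release fence: no read or write before it may be reordered with any
    // write after it, so payload is published before the flag.
    std::atomic_thread_fence(std::memory_order_release);
    ready.store(true, std::memory_order_relaxed);
}

void consumer() {
    while (!ready.load(std::memory_order_relaxed)) {}
    // Acquire fence: no read before it may be reordered with any read or
    // write after it, so the payload read happens after seeing the flag.
    std::atomic_thread_fence(std::memory_order_acquire);
    assert(payload == 42);
}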

C++: __sync_synchronize() still needed with std::atomic?

I've been running into an infrequent but recurring race condition.
The program has two threads and uses std::atomic. I'll simplify the critical parts of the code to look like:
std::atomic<uint64_t> b; // flag, initialized to 0
uint64_t data[100];      // shared data, initialized to 0

thread 1 (publishing):

// set various shared variables here, for example:
data[5] = 10;
uint64_t a = b.exchange(1); // signal to thread 2 that data is ready

thread 2 (receiving):

if (b.load() != 0) { // signal that data is ready
    // read various shared variables here, for example:
    uint64_t x = data[5];
    // race condition sometimes (x sometimes not consistent)
}
The odd thing is that when I add __sync_synchronize() to each thread, then the race condition goes away. I've seen this happen on two different servers.
i.e. when I change the code to look like the following, then the problem goes away:
thread 1 (publishing):

// set various shared variables here, for example:
data[5] = 10;
__sync_synchronize();
uint64_t a = b.exchange(1); // signal to thread 2 that data is ready

thread 2 (receiving):

if (b.load() != 0) { // signal that data is ready
    __sync_synchronize();
    // read various shared variables here, for example:
    uint64_t x = data[5];
}
Why is __sync_synchronize() necessary? It seems redundant as I thought both exchange and load ensured the correct sequential ordering of logic.
Architecture: x86_64 processors, Linux, g++ 4.6.2.
Whilst it is impossible to say from your simplified code what actually goes on in your actual application, the fact that __sync_synchronize helps, and the fact that this function is a memory barrier, tell me that you are writing things in one thread that the other thread reads, in a way that isn't atomic.
An example:
thread_1:

object *p = new object;
p->x = 1;
b.exchange(p); /* give pointer p to other thread */

thread_2:

object *p = b.load();
if (p->x == 1) do_stuff();
else error("Huh?");
This may very well trigger the error-path in thread2, because the write to p->x has not actually been completed when thread 2 reads the new pointer value p.
Adding a memory barrier in the thread_1 code, in this case, should fix it. Note that for THIS case a memory barrier in thread_2 will not do anything; it may alter the timing and appear to fix the problem, but it won't be the right thing. You may still need memory barriers on both sides if you are reading/writing memory that is shared between two threads.
I understand that this may not be precisely what your code is doing, but the concept is the same: __sync_synchronize ensures that memory reads and memory writes have completed for ALL of the instructions before that function call [which isn't a real function call; it inlines a single instruction that waits for any pending memory operations to complete].
Noteworthy is that operations on std::atomic only make the data stored in the atomic object itself atomic; they do not make reads/writes of other data atomic.
Sometimes you also need a "compiler barrier" to avoid the compiler moving stuff from one side of an operation to another:

std::atomic<bool> flag(false);

value = 42;
flag.store(true);

another thread:

while (!flag.load());
print(value);

Now, there is a chance that the compiler generates the first form as:

flag.store(true);
value = 42;
Now, that wouldn't be good, would it? std::atomic is guaranteed to be a "compiler barrier", but in other cases, the compiler may well shuffle stuff around in a similar way.
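For the question's pattern, this is how the publish/consume pairing is usually written with std::atomic alone; a sketch assuming the simplified code is representative (the default seq_cst operations already imply this ordering on a conforming implementation, so the explicit release/acquire pair mostly documents intent):

#include <atomic>
#include <cstdint>

std::atomic<std::uint64_t> b{0};
std::uint64_t data[100] = {};

void publisher() {
    data[5] = 10;
    b.exchange(1, std::memory_order_release); // release: the write to
                                              // data[5] is visible once
                                              // the flag reads as 1
}

void receiver() {
    if (b.load(std::memory_order_acquire) != 0) { // acquire: pairs with
        std::uint64_t x = data[5];                // the release exchange
        (void)x; // x is now guaranteed to be 10
    }
}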

Confusion about thread-safety

I am new to the world of concurrency, but from what I have read I understand the program below to be undefined in its execution. If I understand correctly, this is not thread-safe, as I am concurrently reading/writing both the shared_ptr and the counter variable in non-atomic ways.
#include <string>
#include <memory>
#include <thread>
#include <chrono>
#include <cstdint>
#include <iostream>

struct Inner {
    Inner() {
        t_ = std::thread([this]() {
            counter_ = 0;
            running_ = true;
            while (running_) {
                counter_++;
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
        });
    }

    ~Inner() {
        running_ = false;
        if (t_.joinable()) {
            t_.join();
        }
    }

    std::uint64_t counter_;
    std::thread t_;
    bool running_;
};

struct Middle {
    Middle() {
        data_.reset(new Inner);
        t_ = std::thread([this]() {
            running_ = true;
            while (running_) {
                data_.reset(new Inner());
                std::this_thread::sleep_for(std::chrono::milliseconds(1000));
            }
        });
    }

    ~Middle() {
        running_ = false;
        if (t_.joinable()) {
            t_.join();
        }
    }

    std::uint64_t inner_data() {
        return data_->counter_;
    }

    std::shared_ptr<Inner> data_;
    std::thread t_;
    bool running_;
};

struct Outer {
    std::uint64_t data() {
        return middle_.inner_data();
    }

    Middle middle_;
};

int main() {
    Outer o;
    while (true) {
        std::cout << "Data: " << o.data() << std::endl;
    }
    return 0;
}
My confusion comes from this:
1. Is the access to data_->counter_ safe in Middle::inner_data?
2. If thread A has a member shared_ptr<T> sp and decides to update it while thread B does shared_ptr<T> sp = A::sp, will the copy and destruction be thread-safe? Or do I risk the copy failing because the object is in the process of being destroyed?
3. Under what circumstances (can I check this with some tool?) is undefined behavior likely to mean std::terminate? I suspect something like the above happens in some of my production code, but I cannot be certain as I am confused about 1 and 2; still, this small program has been running for days since I wrote it and nothing has happened.
The code can be checked at https://godbolt.org/g/saHz94
Is the access to data_->counter_ safe in Middle::inner_data?
No; it's a race condition. According to the standard, it's undefined behavior anytime you allow unsynchronized access to the same variable from more than one thread, and at least one thread might possibly modify the variable.
As a practical matter, here are a couple of unwanted behaviors you might see:
The thread reading counter_ keeps seeing an "old" value (one that rarely or never updates), because the compiler may cache the variable in a register, or the cores may otherwise fail to pick up the other thread's writes. Using std::atomic would avoid this problem, because then the compiler knows you intend this variable to be accessed concurrently, and it takes the precautions needed to prevent it.
Thread A might read the address that the data_ shared_ptr points to and be just about to dereference it and read from the Inner struct it points to, when it gets kicked off the CPU by thread B. Thread B executes, and during its execution the old Inner struct gets deleted and the data_ shared_ptr is set to point to a new Inner struct. Then thread A gets back onto the CPU, but since it already has the old pointer value, it dereferences the old value rather than the new one and ends up reading from freed/invalid memory. Again, this is undefined behavior, so in principle anything could happen; in practice you're likely to see either no obvious misbehavior, or occasionally a wrong/garbage value, or possibly a crash; it depends.
If thread A has a member shared_ptr sp and decides to update it while thread B does shared_ptr sp = A::sp will the copy and destruction be threadsafe? Or do I risk the copy failing because the object is in the process of being destroyed.
Copying and destroying distinct shared_ptr instances that share ownership of the same object is thread-safe, because the reference count lives in a control block that is updated atomically. But retargeting a shared_ptr in one thread while another thread copies that same shared_ptr instance is a data race, unless you go through the atomic shared_ptr free functions (std::atomic_load / std::atomic_store). And if you are modifying the state of the T objects themselves (i.e. the Inner object in your example), that is not thread-safe, since you could have one thread reading from the object while another thread is writing to it (deleting the object can be seen as a special case of writing to it, in that it definitely changes the object's state).
Under what circumstances (can I check this with some tool?) is undefined likely to mean std::terminate?
When you hit undefined behavior, it's very much dependent on the details of your program, the compiler, the OS, and the hardware architecture what will happen. In principle, undefined behavior means anything (including the program running just as you intended!) can happen, but you can't rely on any particular behavior -- which is what makes undefined behavior so evil.
In particular, it's common for a multithreaded program with a race condition to run fine for hours/days/weeks and then one day the timing is just right and it crashes or computes an incorrect result. Race conditions can be really difficult to reproduce for that reason.
As for when terminate() might be called, terminate() would be called if the fault causes an error state that is detected by the runtime environment (i.e. it corrupts a data structure that the runtime environment does integrity checks on, such as, in some implementations, the heap's metadata). Whether or not that actually happens depends on how the heap was implemented (which varies from one OS and compiler to the next) and what sort of corruption the fault introduced.
Thread safety is a relationship between operations in different threads, not an absolute property in general.
You cannot read or write a variable while another thread writes a variable without synchronization between the other thread's write and your read or write. Doing so is undefined behavior.
Undefined can mean anything. Program crashes. Program reads impossible value. Program formats hard drive. Program emails your browser history to all of your contacts.
A common case for unsynchronized integer access is that the compiler optimizes multiple reads to a value into one and doesn't reload it, because it can prove there is no defined way that someone could have modified the value. Or, the CPU memory cache does the same thing, because you did not synchronize.
For the pointers, similar or worse problems can occur, including following dangling pointers, corrupting memory, crashes, etc.
There are now atomic operations you can perform on shared pointers (std::atomic_load, std::atomic_store, and friends), as well as std::atomic<std::shared_ptr<T>> in C++20.
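A brief sketch of those C++11 atomic free functions for shared_ptr (they are deprecated in C++20 in favour of std::atomic<std::shared_ptr<T>>):

#include <memory>

std::shared_ptr<int> g_sp; // one shared_ptr object touched by many threads

void writer() {
    std::atomic_store(&g_sp, std::make_shared<int>(42)); // atomic retarget
}

std::shared_ptr<int> reader() {
    return std::atomic_load(&g_sp); // atomic copy of the current target
}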

Is std::call_once lock free?

I want to find out whether std::call_once is lock-free. There are call_once implementations that use a mutex, but why should we use a mutex? I tried to write a simple implementation using atomic_bool and a CAS operation. Is the code thread-safe?
#include <iostream>
#include <thread>
#include <atomic>
#include <functional>
#include <unistd.h>

using namespace std;
using my_once_flag = atomic<bool>;

void my_call_once(my_once_flag& flag, std::function<void()> foo) {
    bool expected = false;
    bool res = flag.compare_exchange_strong(expected, true,
            std::memory_order_release, std::memory_order_relaxed);
    if (res)
        foo();
}

my_once_flag flag;

void printOnce() {
    usleep(100);
    my_call_once(flag, [](){
        cout << "test" << endl;
    });
}

int main() {
    for (int i = 0; i < 500; ++i) {
        thread([](){
            printOnce();
        }).detach();
    }
    return 0;
}
Your proposed implementation is not thread safe. It does guarantee that foo() will only be called once through this code, but it does not guarantee that all threads will see the side effects of the call to foo(). Suppose that thread 1 executes the compare-exchange and gets true, and the scheduler then switches to thread 2 before thread 1 calls foo(). Thread 2 will get false, skip the call to foo(), and move on. Since the call to foo() has not yet been executed, thread 2 can continue executing before any side effects from foo() have occurred.
The already-called-once fast-path can be wait-free.
gcc's implementation doesn't look all that efficient. I don't know why it isn't implemented the same way as initialization of static local variables with a non-constant arg, which uses a check that's very cheap (but not free!) for the case where it's already initialized.
http://en.cppreference.com/w/cpp/thread/call_once comments that:
Initialization of function-local statics is guaranteed to occur only once even when called from multiple threads, and may be more efficient than the equivalent code using std::call_once.
To make an efficient implementation, the std::once_flag could have three states (a sketch follows the list):
execution finished: if you find this state, you're already done.
execution in progress: if you find this, wait until it changes to finished (or changes to failed-with-exception, in which case try to claim it).
execution not started: if you find this, attempt to CAS it to in-progress and call the function. If the CAS failed, some other thread succeeded, so go to the wait-for-finished state.
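A hedged sketch of that three-state scheme (names are illustrative; blocking instead of spinning and the failed-with-exception path are left out):

#include <atomic>

enum : int { NOT_STARTED, IN_PROGRESS, FINISHED };

struct once_flag3 {
    std::atomic<int> state{NOT_STARTED};
};

template <typename F>
void call_once3(once_flag3& flag, F func) {
    // Fast path: a single acquire load; if already finished, func()'s
    // side effects are guaranteed visible to this thread.
    if (flag.state.load(std::memory_order_acquire) == FINISHED)
        return;
    int expected = NOT_STARTED;
    if (flag.state.compare_exchange_strong(expected, IN_PROGRESS,
                                           std::memory_order_acquire)) {
        func();
        // Release store: publishes func()'s side effects together with
        // the FINISHED state.
        flag.state.store(FINISHED, std::memory_order_release);
    } else {
        // Another thread claimed it; wait until it finishes. A real
        // implementation would block on a futex/condvar here.
        while (flag.state.load(std::memory_order_acquire) != FINISHED) {}
    }
}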
Checking the flag with an acquire-load is extremely cheap on most architectures (especially x86 where all loads are acquire-loads). Once it's set to "finished", it's not modified for the rest of the program, so it can stay cached in L1 on all cores (unless you put it in the same cache line as something that is frequently modified, creating false sharing).
Even if your implementation worked, it attempts an atomic CAS every time, which is ridiculously more expensive than a load-acquire.
I haven't fully decoded what gcc is doing for call_once, but it unconditionally does a bunch of loads, and two stores to thread-local storage, before checking if a pointer is NULL. (test rax,rax / je). But if it is, it calls std::__throw_system_error(int), so that's not a guard variable that it's using to detect the already-initialized case.
So it looks like it unconditionally calls __gthrw_pthread_once(int*, void (*)()), and checks the return value. So that pretty much sucks for use-cases where you want to cheaply make sure some initialization got done, while avoiding the static-initialization fiasco. (i.e. that your build procedure controls the ordering of constructors for static objects, not anything you put in the code itself.)
So I'd recommend using static int dummy = init_function(); where dummy is either something you actually want to construct, or just a way to call init_function for its side-effects.
Then on the fast-path, the asm from:
int called_once();

void static_local() {
    static char dummy = called_once();
    (void)dummy;
}

looks like this:

static_local():
    movzx   eax, BYTE PTR guard variable for static_local()::dummy[rip]
    test    al, al
    je      .L18
    ret
.L18:
    ...     # code that implements basically what I described above: call or wait
See it on the Godbolt compiler explorer, along with gcc's actual code for std::once_flag.
You could of course implement a guard variable yourself with an atomic uint8_t, which starts out initialized to non-zero, and is set to zero only when the call is complete. Testing for zero may be slightly cheaper on some ISAs, including x86 if the compiler is weird like gcc and decides to actually load it into a register instead of using cmp byte [guard], 0.
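A sketch of that hand-rolled guard variable, with the slow path left as a hypothetical helper:

#include <atomic>
#include <cstdint>

std::atomic<std::uint8_t> guard{1}; // non-zero means "not yet initialized"

void slow_init_and_clear_guard();   // hypothetical: performs the init once,
                                    // then stores 0 with release ordering

void ensure_initialized() {
    // Fast path: one acquire load plus a test against zero.
    if (guard.load(std::memory_order_acquire) == 0)
        return;
    slow_init_and_clear_guard();
}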

Proper compiler intrinsics for double-checked locking?

When implementing double-checked locking for initialization, what is the proper way to do the memory and/or compiler barriers?
Something like std::call_once isn't what I want; it's way too slow. It's typically just implemented on top of pthread_mutex_lock or EnterCriticalSection, depending on the OS.
In my programs, I often run into initialization cases where the initialization is safe to repeat, as long as exactly one thread gets to set the final pointer. If another thread beats it to setting the final pointer to the singleton object, it deletes what it created and makes use of the other thread's. I also often use this in cases where it doesn't matter which thread "wins" because they all come up with the same result.
Here's an unsafe, overly-contrived example, using Visual C++ intrinsics:
MyClass *GetGlobalMyClass()
{
    static MyClass *const UNSET_POINTER = reinterpret_cast<MyClass *>(
        static_cast<intptr_t>(-1));
    static MyClass *volatile s_object = UNSET_POINTER;

    if (s_object == UNSET_POINTER)
    {
        MyClass *newObject = MyClass::Create();
        if (_InterlockedCompareExchangePointer(&s_object, newObject,
                                               UNSET_POINTER) != UNSET_POINTER)
        {
            // Another thread beat us. If Create didn't return null, destroy.
            if (newObject)
            {
                newObject->Destroy(); // calls "delete this;", presumably
            }
        }
    }
    return s_object;
}
On a weakly-ordered memory architecture, my understanding is that it's possible that the new value of s_object is visible to other threads before other variables written inside MyClass::Create or MyClass::MyClass are visible. Also, the compiler itself could arrange the code this way in the absence of a compiler barrier (in Visual C++, _WriteBarrier, but _InterlockedCompareExchange acts as a barrier).
Do I need a store-fence intrinsic in there, or something similar, to ensure that MyClass's variables are visible to all threads before s_object becomes something besides -1?
Fortunately, the rules in C++ are very simple:
If there is a data race, the behaviour is undefined.
In your code, the data race is caused by the following read, which conflicts with the write operation in _InterlockedCompareExchangePointer:
if (s_object == UNSET_POINTER)
A thread-safe solution without blocking might look as follows. Note that on x86 a load operation with sequential consistency has basically no overhead compared to a regular load operation. If you care about other architectures, you can also use acquire/release instead of sequential consistency.

static std::atomic<MyClass*> s_object{nullptr};

MyClass* o = s_object.load(std::memory_order_seq_cst);
if (o == nullptr) {
    o = new MyClass{...};
    MyClass* expected = nullptr;
    if (!s_object.compare_exchange_strong(expected, o, std::memory_order_seq_cst)) {
        delete o;
        o = expected;
    }
}
return o;
In a conforming C++11 implementation, any function-local static variable will be constructed in a thread-safe fashion by the first thread that passes through its declaration.
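For example, assuming MyClass is default-constructible (the question used a Create() factory instead):

MyClass& GetGlobalMyClass()
{
    // Since C++11 this initialization is guaranteed to run exactly once,
    // even if several threads enter concurrently; the compiler emits a
    // guard-variable fast path like the one shown in the previous answer.
    static MyClass s_object;
    return s_object;
}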