Assuming that aligned pointer loads and stores are naturally atomic on the target platform, what is the difference between this:
// Case 1: Dumb pointer, manual fence
int* ptr;
// ...
std::atomic_thread_fence(std::memory_order_release);
ptr = new int(-4);
this:
// Case 2: atomic var, automatic fence
std::atomic<int*> ptr;
// ...
ptr.store(new int(-4), std::memory_order_release);
and this:
// Case 3: atomic var, manual fence
std::atomic<int*> ptr;
// ...
std::atomic_thread_fence(std::memory_order_release);
ptr.store(new int(-4), std::memory_order_relaxed);
I was under the impression that they were all equivalent; however, Relacy detects a data race in
the first case (only):
struct test_relacy_behaviour : public rl::test_suite<test_relacy_behaviour, 2>
{
rl::var<std::string*> ptr;
rl::var<int> data;
void before()
{
ptr($) = nullptr;
rl::atomic_thread_fence(rl::memory_order_seq_cst);
}
void thread(unsigned int id)
{
if (id == 0) {
std::string* p = new std::string("Hello");
data($) = 42;
rl::atomic_thread_fence(rl::memory_order_release);
ptr($) = p;
}
else {
std::string* p2 = ptr($); // <-- Test fails here after the first thread completely finishes executing (no contention)
rl::atomic_thread_fence(rl::memory_order_acquire);
RL_ASSERT(!p2 || *p2 == "Hello" && data($) == 42);
}
}
void after()
{
delete ptr($);
}
};
I contacted the author of Relacy to find out if this was expected behaviour; he says that there is indeed a data race in my test case.
However, I'm having trouble spotting it; can someone point out to me what the race is?
Most importantly, what are the differences between these three cases?
Update: It's occurred to me that Relacy may simply be complaining about the atomicity (or lack thereof, rather) of the variable being accessed across threads... after all, it doesn't know that I intend only to use this code on platforms where aligned integer/pointer access is naturally atomic.
Another update: Jeff Preshing has written an excellent blog post explaining the difference between explicit fences and the built-in ones ("fences" vs "operations"). Cases 2 and 3 are apparently not equivalent! (In certain subtle circumstances, anyway.)
Although various answers cover bits and pieces of what the potential problem is and/or provide useful information, no answer correctly describes the potential issues for all three cases.
In order to synchronize memory operations between threads, release and acquire barriers are used to specify ordering.
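Schematically:

Thread 1                         Thread 2
memory operations A              atomic load of ptr
---- release barrier ----        ---- acquire barrier ----
atomic store to ptr              memory operations B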
In the diagram, memory operations A in thread 1 cannot move down across the (one-way) release barrier (regardless of whether that is a release operation on an atomic store,
or a standalone release fence followed by a relaxed atomic store). Hence memory operations A are guaranteed to happen before the atomic store.
Same goes for memory operations B in thread 2 which cannot move up across the acquire barrier; hence the atomic load happens before memory operations B.
The atomic ptr itself provides inter-thread ordering based on the guarantee that it has a single modification order. As soon as thread 2 sees the value stored by thread 1,
it is guaranteed that the store (and thus memory operations A) happened before the load. Because the load is guaranteed to happen before memory operations B,
the rules for transitivity say that memory operations A happen before B and synchronization is complete.
With that, let's look at your 3 cases.
Case 1 is broken because ptr, a non-atomic variable, is written in one thread and read in another without synchronization. That is a classic example of a data race, and it causes undefined behavior.
Case 2 is correct. The integer allocation with new is the argument to the store and is therefore sequenced before the release operation. This is equivalent to:
// Case 2: atomic var, automatic fence
std::atomic<int*> ptr;
// ...
int *tmp = new int(-4);
ptr.store(tmp, std::memory_order_release);
Case 3 is broken, albeit in a subtle way. The problem is that even though the ptr assignment is correctly sequenced after the standalone fence,
the integer allocation (new) is also sequenced after the fence, causing a data race on the integer memory location.
The code is equivalent to:
// Case 3: atomic var, manual fence
std::atomic<int*> ptr;
// ...
std::atomic_thread_fence(std::memory_order_release);
int *tmp = new int(-4);
ptr.store(tmp, std::memory_order_relaxed);
If you map that to the diagram above, the new operator is supposed to be part of memory operations A. Being sequenced below the release fence,
the ordering guarantees no longer hold, and the integer allocation may actually be reordered with memory operations B in thread 2.
Therefore, dereferencing the pointer loaded in thread 2 may yield garbage or cause other undefined behavior.
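For completeness, a minimal sketch of how case 3 can be repaired: hoisting the allocation above the fence makes it part of memory operations A again.

// Case 3 fixed: allocation sequenced before the release fence
std::atomic<int*> ptr;
// ...
int* tmp = new int(-4);
std::atomic_thread_fence(std::memory_order_release);
ptr.store(tmp, std::memory_order_relaxed);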
I believe the code has a race. Case 1 and case 2 are not equivalent.
29.8 [atomics.fences]
-2- A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
In case 1 your release fence does not synchronize with your acquire fence because ptr is not an atomic object and the store and load on ptr are not atomic operations.
Case 2 and case 3 are equivalent (actually, not quite, see LWimsey's comments and answer), because ptr is an atomic object and the store is an atomic operation. (Paragraphs 3 and 4 of [atomic.fences] describe how a fence synchronizes with an atomic operation and vice versa.)
The semantics of fences are defined only with respect to atomic objects and atomic operations. Whether your target platform and your implementation offer stronger guarantees (such as treating any pointer type as an atomic object) is implementation-defined at best.
N.B. for both case 2 and case 3, the acquire operation on ptr could happen before the store, and so would read garbage from the uninitialized atomic<int*>. Simply using acquire and release operations (or fences) doesn't ensure that the store happens before the load; it only ensures that if the load reads the stored value then the code is correctly synchronized.
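As a minimal sketch of guarding against that, initialize the atomic and check the loaded value before dereferencing:

std::atomic<int*> ptr{nullptr}; // initialized: an early load reads nullptr, not garbage
// reader:
int* p = ptr.load(std::memory_order_acquire);
if (p) {
    int value = *p; // safe: the acquire load observed the release store
}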
Several pertinent references:
the C++11 draft standard (PDF, see clauses 1, 29 and 30);
Hans-J. Boehm's overview of concurrency in C++;
McKenney, Boehm and Crowl on concurrency in C++;
GCC's developmental notes on concurrency in C++;
the Linux kernel's notes on concurrency;
a related question with answers here on Stack Overflow;
another related question with answers;
Cppmem, a sandbox in which to experiment with concurrency;
Cppmem's help page;
Spin, a tool for analyzing the logical consistency of concurrent systems;
an overview of memory barriers from a hardware perspective (PDF).
Some of the above may interest you and other readers.
The memory backing an atomic variable can only ever be used for the contents of the atomic. However, a plain variable, like ptr in case 1, is a different story. Once a compiler has the right to write to it, it can write anything to it, even a temporary value when it runs out of registers.
Remember, your example is pathologically clean. Given a slightly more complex example:
std::string* p = new std::string("Hello");
data($) = 42;
rl::atomic_thread_fence(rl::memory_order_release);
std::string* p2 = new std::string("Bye");
ptr($) = p;
it is totally legal for the compiler to reuse your pointer:
std::string* p = new std::string("Hello");
data($) = 42;
rl::atomic_thread_fence(rl::memory_order_release);
ptr($) = new std::string("Bye");
std::string* p2 = ptr($);
ptr($) = p;
Why would it do so? I don't know; perhaps some exotic trick to keep a cache line or something. The point is that, since ptr is not atomic in case 1, there is a race between the write 'ptr($) = p' and the read 'std::string* p2 = ptr($)', yielding undefined behavior. In this simple test case the compiler may not choose to exercise this right, and it may be safe, but in more complicated cases the compiler has the right to abuse ptr however it pleases, and Relacy catches this.
My favorite article on the topic: http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong
The race in the first example is between the publication of the pointer and the stuff it points to. The reason is that the creation and initialization of the pointee happen after the fence (that is, on the same side as the publication of the pointer):
int* ptr; //noop
std::atomic_thread_fence(std::memory_order_release); //fence between noop and interesting stuff
ptr = new int(-4); //object creation, initialization, and publication
If we assume that CPU accesses to properly aligned pointers are atomic, the code can be corrected by writing this:
int* ptr; //noop
int* newPtr = new int(-4); //object creation & initialization
std::atomic_thread_fence(std::memory_order_release); //fence between initialization and publication
ptr = newPtr; //publication
Note that even though this may work fine on many machines, there is absolutely no guarantee within the C++ standard on the atomicity of the last line. So better use atomic<> variables in the first place.
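For instance, a minimal sketch with std::atomic, which carries the publication guarantee portably:

std::atomic<int*> ptr{nullptr};               // the store below is guaranteed atomic
int* newPtr = new int(-4);                    // object creation & initialization
ptr.store(newPtr, std::memory_order_release); // publication, ordered after initialization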
Related
I'm referring to this example.
The authors use memory_order_release to decrement the counter. And they even state in the discussion section that using memory_order_acq_rel instead would be excessive. But couldn't the following scenario, in theory, lead to x never being deleted?
we have two threads on different CPUs
each of them owns an instance of a shared pointer; both pointers share ownership of the same control block, and no other pointers referring to that block exist
each thread has the counter in its cache and the counter is 2 for both of them
the first thread destroys its pointer, the counter in this thread now is 1
the second thread destroys its pointer; however, the cache invalidation signal from the first thread may still be queued, so it decrements the value from its own cache and also gets 1
neither thread deleted x, yet no shared pointers sharing our control block remain => memory leak
The code sample from the link:
#include <boost/intrusive_ptr.hpp>
#include <boost/atomic.hpp>
class X {
public:
typedef boost::intrusive_ptr<X> pointer;
X() : refcount_(0) {}
private:
mutable boost::atomic<int> refcount_;
friend void intrusive_ptr_add_ref(const X * x)
{
x->refcount_.fetch_add(1, boost::memory_order_relaxed);
}
friend void intrusive_ptr_release(const X * x)
{
if (x->refcount_.fetch_sub(1, boost::memory_order_release) == 1) {
boost::atomic_thread_fence(boost::memory_order_acquire);
delete x;
}
}
};
The quote from discussion section:
It would be possible to use memory_order_acq_rel for the fetch_sub
operation, but this results in unneeded "acquire" operations when the
reference counter does not yet reach zero and may impose a performance
penalty.
All modifications of a single atomic variable happen in a global modification order. It is not possible for two threads to disagree about this order.
The fetch_sub operation is an atomic read-modify-write operation and is required to always read the value of the atomic variable immediately before the modification from the same operation in the modification order.
So it is not possible for the second thread to read 2 when the first thread's fetch_sub came first in the modification order. The implementation must ensure that such cache incoherence cannot happen, if necessary with the help of locks if the hardware doesn't support this atomic access natively. (That is what the is_lock_free and is_always_lock_free members of std::atomic are there to check for.)
This is all independent of the memory orders of the operations. Those matter only for accesses to memory locations other than the atomic variable itself.
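To illustrate, a small self-contained sketch (a stand-in counter rather than the boost code above): two threads drop the last two references, and the modification order guarantees that exactly one of them observes the counter at 1.

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> refcount{2}; // both threads hold one reference
std::atomic<int> deleters{0}; // counts how many threads "deleted"

void release_ref() {
    // fetch_sub reads the value immediately preceding its own write in the
    // modification order (2 -> 1 -> 0), so exactly one thread sees 1 here.
    if (refcount.fetch_sub(1, std::memory_order_release) == 1) {
        std::atomic_thread_fence(std::memory_order_acquire);
        deleters.fetch_add(1, std::memory_order_relaxed); // stands in for delete x
    }
}

int main() {
    std::thread t1(release_ref), t2(release_ref);
    t1.join();
    t2.join();
    assert(deleters.load() == 1); // never 0 (leak), never 2 (double delete)
}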
I have a global reference-counted object obj that I want to protect from data races by using atomic operations:
T* obj; // initially nullptr
std::atomic<int> count; // initially zero
My understanding is that I need to use std::memory_order_release after I write to obj, so that the other threads will be aware of it being created:
void increment()
{
if (count.load(std::memory_order_relaxed) == 0)
obj = std::make_unique<T>();
count.fetch_add(1, std::memory_order_release);
}
Likewise, I need to use std::memory_order_acquire when reading the counter, to ensure the thread has visibility of obj being changed:
void decrement()
{
count.fetch_sub(1, std::memory_order_relaxed);
if (count.load(std::memory_order_acquire) == 0)
obj.reset();
}
I am not convinced that the code above is correct, but I'm not entirely sure why. I feel like after obj.reset() is called, there should be a std::memory_order_release operation to inform other threads about it. Is that correct?
Are there other things that can go wrong, or is my understanding of atomic operations in this case completely wrong?
It is wrong regardless of memory ordering.
As #MaartenBamelis pointed out, with concurrent calls to increment the object is constructed twice. And the same is true for concurrent decrement: the object is reset twice (which may result in a double destructor call).
Note that there's a mismatch between the T* obj; declaration and its use as a unique_ptr, but neither a raw pointer nor a unique_ptr is safe for concurrent modification. In practice, reset or delete will check the pointer for null, then delete it and set it to null, and these steps are not atomic.
fetch_add and fetch_sub are fetch-and-op rather than just op for a reason: if you don't use the value observed during the operation, you likely have a race.
This code is inherently racy. If two threads call increment at the same time when count is initially 0, both will see count as 0, and both will create obj (and race to see which copy is kept; given unique_ptr has no special threading protections, terrible things can happen if two of them set it at once).
If two threads decrement at the same time (holding the last two references), and finish the fetch_sub before either calls load, both will reset obj (also bad).
And if a decrement finishes the fetch_sub (to 0), then another thread increments before the decrement load occurs, the increment will see the count as 0 and reinitialize. Whether the object is cleared after being replaced, or replaced after being cleared, or some horrible mixture of the two, will depend on whether increment's fetch_add runs before or after decrement's load.
In short: If you find yourself using two separate atomic operations on the same variable, and testing the result of one of them (without looping, as in a compare and swap loop), you're wrong.
More correct code would look like:
void increment() // Still not safe
{
// acquire is good for the != 0 case, for a later read of obj
// or would be if the other writer did a release *after* constructing an obj
if (count.fetch_add(1, std::memory_order_acquire) == 0)
obj = std::make_unique<T>();
}
void decrement()
{
if (count.fetch_sub(1, std::memory_order_acquire) == 1)
obj.reset();
}
but even then it's not reliable; there's no guarantee that, when count is 0, two threads couldn't call increment, both of them fetch_add at once, and while exactly one of them is guaranteed to see the count as 0, said 0-seeing thread might end up delayed while the one that saw it as 1 assumes the object exists and uses it before it's initialized.
I'm not going to swear there's no mutex-free solution here, but dealing with the issues involved with atomics is almost certainly not worth the headache.
It might be possible to confine the mutex to inside the if() branches, but taking a mutex is also an atomic RMW operation (and not much more than that for a good lightweight implementation) so this doesn't necessarily help a huge amount. If you need really good read-side scaling, you'd want to look into something like RCU instead of a ref-count, to allow readers to truly be read-only, not contending with other readers.
I don't really see a simple way of implementing a reference-counted resource with atomics. Maybe there's some clever way that I haven't thought of yet, but in my experience, clever does not equal readable.
My advice would be to implement it first using a mutex. Then you simply lock the mutex, check the reference count, do whatever needs to be done, and unlock again. It's guaranteed correct:
std::mutex mutex;
int count = 0;
std::unique_ptr<T> obj;
void increment()
{
auto lock = std::scoped_lock{mutex};
if (++count == 1) // Am I the first reference?
obj = std::make_unique<T>();
}
void decrement()
{
auto lock = std::scoped_lock{mutex};
if (--count == 0) // Was I the last reference?
obj.reset();
}
Although at this point, I would just use a std::shared_ptr instead of managing the reference count myself:
std::mutex mutex;
std::weak_ptr<T> obj;
std::shared_ptr<T> acquire()
{
auto lock = std::scoped_lock{mutex};
auto sp = obj.lock();
if (!sp)
obj = sp = std::make_shared<T>();
return sp;
}
I believe this also makes it safe when exceptions may be thrown when constructing the object.
Mutexes are surprisingly performant, so I expect that locking code is plenty quick unless you have a highly specialized use case where you need code to be lock-free.
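Usage would look something like this (a sketch; the returned shared_ptr is what keeps the object alive):

void worker() {
    auto handle = acquire(); // creates the object on first use
    // ... use *handle ...
}   // handle's destruction releases the reference; the last one destroys the object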
Following up to this question - std::memory_order_relaxed and initialization. Suppose I have code like this
class Something {
public:
int value;
};
auto&& pointer = std::atomic<Something*>{nullptr};
// thread 1
auto value = Something{1};
pointer.store(&value, std::memory_order_relaxed);
// thread 2
Something* something = nullptr;
while (!(something = pointer.load(std::memory_order_relaxed))) {}
cout << something->value << endl;
Is this guaranteed to print 1? Or is an implementation allowed to publish the address before the value it points to is initialized?
(Assuming that there are no lifetime issues with thread 2 reading the pointer set by thread 1)
No, it isn't guaranteed to print 1. The write to the field value may be reordered with respect to the write to pointer. If it is reordered to after the write to pointer, 'thread 2' will observe uninitialized memory.
This can and does happen in practice on ARM.
Because x86 CPUs maintain "total store order" (i.e. all stores are observable by other threads in the order they were issued by the issuing thread), the CPU cannot cause this to happen. But it can still happen on x86 because, while the CPU will not reorder writes, the compiler is allowed to reorder writes. I don't know whether compilers do that in practice.
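For reference, a sketch of the conventional fix (keeping the lifetime assumption from the question): publish with release and read with acquire, so the write to value cannot be reordered past the pointer store.

// thread 1
auto value = Something{1};
pointer.store(&value, std::memory_order_release);  // value.value = 1 is ordered before this

// thread 2
Something* something = nullptr;
while (!(something = pointer.load(std::memory_order_acquire))) {}
std::cout << something->value << std::endl;        // now guaranteed to print 1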
Reading this article about Double Checked Locking Pattern in C++, I reached the place (page 10) where the authors demonstrate one of the attempts to implement DCLP "correctly" using volatile variables:
class Singleton {
public:
static volatile Singleton* volatile instance();
private:
static volatile Singleton* volatile pInstance;
};
// from the implementation file
volatile Singleton* volatile Singleton::pInstance = 0;
volatile Singleton* volatile Singleton::instance() {
if (pInstance == 0) {
Lock lock;
if (pInstance == 0) {
volatile Singleton* volatile temp = new Singleton;
pInstance = temp;
}
}
return pInstance;
}
After such example there is a text snippet that I don't understand:
First, the Standard’s constraints on observable behavior are only for
an abstract machine defined by the Standard, and that abstract machine
has no notion of multiple threads of execution. As a result, though
the Standard prevents compilers from reordering reads and writes to
volatile data within a thread, it imposes no constraints at all on
such reorderings across threads. At least that’s how most compiler
implementers interpret things. As a result, in practice, many
compilers may generate thread-unsafe code from the source above.
and later:
... C++’s abstract machine is single-threaded, and C++ compilers may
choose to generate thread-unsafe code from source like the above,
anyway.
These remarks relate to execution on a uniprocessor, so this is definitely not about cache-coherence issues.
If the compiler can't reorder reads and writes to volatile data within a thread, how can it reorder reads and writes across threads for this particular example thus generating thread-unsafe code?
The pointer to the Singleton may be volatile, but the data within the singleton is not.
Imagine Singleton has int x, y, z; as members, set to 15, 16, 17 in the constructor for some reason.
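Something like this, hypothetically:

class Singleton {
public:
    Singleton() : x(15), y(16), z(17) {} // plain, non-volatile member writes
    int x, y, z;
    // ...
};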
volatile Singleton* volatile temp = new Singleton;
pInstance = temp;
OK, temp is written before pInstance. When are x,y,z written relative to those? Before? After? You don't know. They aren't volatile, so they don't need to be ordered relative to the volatile ordering.
Now a thread comes in and sees:
if (pInstance == 0) { // first check
And let's say pInstance has been set, is not null.
What are the values of x,y,z? Even though new Singleton has been called, and the constructor has "run", you don't know whether the operations that set x,y,z have run or not.
So now your code goes and reads x,y,z and crashes, because it was really expecting 15,16,17, not random data.
Oh wait, pInstance is a volatile pointer to volatile data! So x,y,z is volatile right? Right? And thus ordered with pInstance and temp. Aha!
Almost. Any reads from *pInstance will be volatile, but the construction via new Singleton was not volatile. So the initial writes to x,y,z were not ordered. :-(
So you could, maybe, make the members volatile int x, y, z; OK. However...
C++ now has a memory model, even if it didn't when the article was written. Under the current rules, volatile does not prevent data races. volatile has nothing to do with threads. The program is UB. Cats and Dogs living together.
Also, although this is pushing the limits of the standard (i.e. it gets vague as to what volatile really means), an all-knowing, all-seeing, full-program-optimizing compiler could look at your uses of volatile and say "no, those volatiles don't actually connect to any IO memory addresses etc, they really aren't observable behaviour, I'm just going to make them non-volatile"...
I think they're referring to the cache coherency problem discussed in section 6 ("DCLP on Multiprocessor Machines"). With a multiprocessor system, the processor/cache hardware may write out the value for pInstance before the values are written out for the allocated Singleton. This can cause a 2nd CPU to see the non-NULL pInstance before it can see the data it points to.
This requires a hardware fence instruction to ensure all the memory is updated before other CPUs in the system can see any of it.
If I'm understanding correctly they are saying that in the context of a single-thread abstract machine the compiler may simply transform:
volatile Singleton* volatile temp = new Singleton;
pInstance = temp;
Into:
pInstance = new Singleton;
Because the observable behavior is unchanged. Then this brings us back to the original problem with double checked locking.
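For reference, under the C++11 memory model the pattern can be written correctly with atomics instead of volatile; a minimal sketch:

#include <atomic>
#include <mutex>

class Singleton {
public:
    static Singleton* instance() {
        Singleton* p = pInstance.load(std::memory_order_acquire);
        if (p == nullptr) {                       // first check (no lock)
            std::lock_guard<std::mutex> lock(mtx);
            p = pInstance.load(std::memory_order_relaxed);
            if (p == nullptr) {                   // second check (locked)
                p = new Singleton;
                pInstance.store(p, std::memory_order_release);
            }
        }
        return p;
    }
private:
    Singleton() = default;
    static std::atomic<Singleton*> pInstance;
    static std::mutex mtx;
};

std::atomic<Singleton*> Singleton::pInstance{nullptr};
std::mutex Singleton::mtx;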
My question involves std::atomic<T*> and the data that this pointer points to. If in thread 1 I have
Object A;
std::atomic<Object*> ptr;
int bar = 2;
A.foo = 4; //foo is an int;
ptr.store(&A);
and if in thread 2 I observe that ptr points to A, can I be guaranteed that ptr->foo is 4 and bar is 2?
Does the default memory order for the atomic pointer (sequentially consistent) guarantee that assignments to non-atomic variables (in this case A.foo and bar) that happen before the atomic store will be seen by another thread no later than it sees the result of that atomic store?
If it helps or matters, I am using x64 (and I only care about this platform), gcc (with a version that supports atomics).
The answer is yes, and perhaps no.
The memory model principles:
C++11 atomics use the std::memory_order_seq_cst memory ordering by default, which means that operations are sequentially consistent.
The semantics of this is that the ordering of all such operations is as if they were all performed sequentially:
C++ standard section 29.3/3 explains how this works for atomics: "There shall be a single total order S on all memory_order_seq_cst operations, consistent with the “happens before” order and modification orders for all affected locations, such that each memory_order_seq_cst
operation that loads a value observes either the last preceding modification according to this order S, or the result of an operation that is not memory_order_seq_cst."
Section 1.10/5 explains how this also impacts non-atomics: "The library defines a number of atomic operations (...) that are specially identified as synchronization operations. These operations play a special role in making assignments in one thread visible to another."
The answer to your question is yes!
Risk with non-atomic data
Be aware, however, that in reality the consistency guarantee is more limited for non-atomic values.
Suppose a first execution scenario:
(thread 1) A.foo = 10;
(thread 1) A.foo = 4; //stores an int
(thread 1) ptr.store(&A); //ptr is set AND synchronisation
(thread 2) int i = *ptr; //ptr value is safely accessed (still &A) AND synchronisation
Here, i is 4. Because ptr is atomic, thread (2) safely gets the value &A when it reads the pointer. The memory ordering ensures that all assignments made before the store to ptr are seen by the other threads (the "happens before" constraint).
But suppose a second execution scenario:
(thread 1) A.foo = 4; //stores an int
(thread 1) ptr.store(&A); //ptr is set AND synchronisation
(thread 1) A.foo = 8; // stores int but NO SYNCHRONISATION !!
(thread 2) int i = *ptr; //ptr value is safely accessed (still &A) AND synchronisation
Here the result is undefined. It could be 4, because the memory ordering guarantees that what happens before the assignment to ptr is seen by the other threads. But nothing prevents assignments made afterwards from being seen as well. So it could be 8.
(Note that writing through the pointer, e.g. *ptr = 8; instead of A.foo = 8;, would not restore the guarantee: a non-atomic write to the pointee after the store still races with the read in thread 2.)
You can verify this experimentally, for example with:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
using namespace std;

atomic<int*> ptr{nullptr}; // the atomic pointer (assignment is a seq_cst store)
int secret;                // the non-atomic pointee

void f1() { // to be launched in a thread
    secret = 50;
    ptr = &secret;
    secret = 777;          // written after the store: not synchronised
    this_thread::yield();
}
void f2() { // to be launched in a second thread
    this_thread::sleep_for(chrono::seconds(2));
    int i = *ptr.load();   // may observe 50 or 777
    cout << "Value is " << i << endl;
}
Conclusions
To conclude, the answer to your question is yes, but only if no other change to the non-atomic data happens after the synchronisation. The main risk is that only ptr itself is atomic; this does not extend to the values it points to.
Note that pointers in particular bring further synchronisation risks when you assign the atomic pointer to a non-atomic pointer.
Example:
// Thread (1):
std::atomic<Object*> ptr;
A.foo = 4; //foo is an int;
ptr.store(&A);
// Thread (2):
Object *x;
x = ptr; // ptr is atomic, but x is not!
terrible_function(ptr); // ptr is atomic, but the pointer argument of the function is not!
By default, C++11 atomic operations use memory_order_seq_cst, which includes acquire/release semantics.
So a thread that sees your store will also see all operations performed before it.
You can find some more details here.