Confusion about _Lock_policy in libstdc++ - c++

I'm reading the implementation of std::shared_ptr in libstdc++, and I noticed that libstdc++ has three locking policies: _S_single, _S_mutex, and _S_atomic (see here), and that the locking policy affects the specialization of the class _Sp_counted_base (_M_add_ref and _M_release).
Below are the code snippets:
void
_M_release_last_use() noexcept
{
  _GLIBCXX_SYNCHRONIZATION_HAPPENS_AFTER(&_M_use_count);
  _M_dispose();
  // There must be a memory barrier between dispose() and destroy()
  // to ensure that the effects of dispose() are observed in the
  // thread that runs destroy().
  // See http://gcc.gnu.org/ml/libstdc++/2005-11/msg00136.html
  if (_Mutex_base<_Lp>::_S_need_barriers)
    {
      __atomic_thread_fence (__ATOMIC_ACQ_REL);
    }

  // Be race-detector-friendly. For more info see bits/c++config.
  _GLIBCXX_SYNCHRONIZATION_HAPPENS_BEFORE(&_M_weak_count);
  if (__gnu_cxx::__exchange_and_add_dispatch(&_M_weak_count, -1) == 1)
    {
      _GLIBCXX_SYNCHRONIZATION_HAPPENS_AFTER(&_M_weak_count);
      _M_destroy();
    }
}
template<>
  inline bool
  _Sp_counted_base<_S_mutex>::
  _M_add_ref_lock_nothrow() noexcept
  {
    __gnu_cxx::__scoped_lock sentry(*this);
    if (__gnu_cxx::__exchange_and_add_dispatch(&_M_use_count, 1) == 0)
      {
        _M_use_count = 0;
        return false;
      }
    return true;
  }
My questions are:
1. When using the _S_mutex locking policy, the __exchange_and_add_dispatch function may only guarantee atomicity, but may not guarantee that it is fully fenced, am I right?
2. And because of 1, is the purpose of __atomic_thread_fence(__ATOMIC_ACQ_REL) to ensure that while thread A is invoking _M_dispose, no thread will invoke _M_destroy (so that thread A can never access a destroyed member, e.g. _M_ptr, inside _M_dispose)?
3. What puzzles me most: if 1 and 2 are both correct, why is there no need to add a thread fence before invoking _M_dispose? (The objects managed by _Sp_counted_base, and _Sp_counted_base itself, face the same problem when their reference counts drop to zero.)

Some of this is answered in the documentation and at the URL shown in the code you quoted.
When using the _S_mutex locking policy, the __exchange_and_add_dispatch function may only guarantee atomicity, but may not guarantee that it is fully fenced, am I right?
Yes.
And because of 1, is the purpose of __atomic_thread_fence(__ATOMIC_ACQ_REL) to ensure that while thread A is invoking _M_dispose, no thread will invoke _M_destroy (so that thread A can never access a destroyed member, e.g. _M_ptr, inside _M_dispose)?
No, that can't happen. When _M_release_last_use() starts executing, _M_weak_count >= 1, and _M_destroy() won't be called until after _M_weak_count is decremented, so _M_ptr remains valid during the _M_dispose() call. If _M_weak_count == 2 and another thread is decrementing it at the same time as _M_release_last_use() runs, then there is a race to see which thread decrements first, and so you don't know which thread will run _M_destroy(). But whichever thread it is, _M_dispose() has already finished before _M_release_last_use() does its decrement.
The reason for the memory barrier is that _M_dispose() invokes the deleter, which runs user-defined code. That code could have arbitrary side effects, touching other objects in the program. The _M_destroy() function uses operator delete or an allocator to release the memory, which could also have arbitrary side effects (if the program has replaced operator delete, or the shared_ptr was constructed with a custom allocator). The memory barrier ensures that the effects of the deleter are synchronized with the effects of the deallocation. The library doesn't know what those effects are, but it doesn't matter; the code still ensures they synchronize.
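To make that concrete, here is a minimal sketch of my own (not the library's actual code): it models the release path with std::atomic read-modify-writes carrying acq_rel ordering, which is what the _S_atomic policy gets for free and what the explicit fence supplies under _S_mutex, where the RMW is only guaranteed to be atomic.
#include <atomic>

// Simplified stand-in for _Sp_counted_base; dispose()/destroy() model
// _M_dispose()/_M_destroy().
struct control_block {
    std::atomic<int> use_count{1};
    std::atomic<int> weak_count{1};

    void dispose() noexcept { /* the user-supplied deleter would run here */ }
    void destroy() noexcept { delete this; }

    void release() noexcept {
        if (use_count.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            dispose(); // last use: run the deleter
            // The release half of the next RMW publishes dispose()'s side
            // effects; the acquire half, in whichever thread drops
            // weak_count to zero, makes them visible before destroy().
            if (weak_count.fetch_sub(1, std::memory_order_acq_rel) == 1)
                destroy();
        }
    }
};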
What puzzles me most: if 1 and 2 are both correct,
They are not.

Related

Synchronizing global reference-counted resource with atomics -- is relaxed appropriate here?

I have a global reference-counted object obj that I want to protect from data races by using atomic operations:
T* obj; // initially nullptr
std::atomic<int> count; // initially zero
My understanding is that I need to use std::memory_order_release after I write to obj, so that the other threads will be aware of it being created:
void increment()
{
    if (count.load(std::memory_order_relaxed) == 0)
        obj = std::make_unique<T>();

    count.fetch_add(1, std::memory_order_release);
}
Likewise, I need to use std::memory_order_acquire when reading the counter, to ensure the thread has visibility of obj being changed:
void decrement()
{
    count.fetch_sub(1, std::memory_order_relaxed);
    if (count.load(std::memory_order_acquire) == 0)
        obj.reset();
}
I am not convinced that the code above is correct, but I'm not entirely sure why. I feel like after obj.reset() is called, there should be a std::memory_order_release operation to inform other threads about it. Is that correct?
Are there other things that can go wrong, or is my understanding of atomic operations in this case completely wrong?
It is wrong regardless of memory ordering.
As #MaartenBamelis pointed out, for concurrent calls of increment the object is constructed twice. And the same is true for concurrent decrement: the object is reset twice (which may result in a double destructor call).
Note that there's a disagreement between the T* obj; declaration and its use as a unique_ptr, but neither a raw pointer nor a unique pointer is safe for concurrent modification. In practice, reset or delete will check the pointer for null, then delete it and set it to null, and these steps are not atomic.
fetch_add and fetch_sub are fetch-and-op instead of just op for a reason: if you don't use the value observed during the operation, it is likely to be a race.
This code is inherently racy. If two threads call increment at the same time when count is initially 0, both will see count as 0, and both will create obj (and race to see which copy is kept; given unique_ptr has no special threading protections, terrible things can happen if two of them set it at once).
If two threads decrement at the same time (holding the last two references), and finish the fetch_sub before either calls load, both will reset obj (also bad).
And if a decrement finishes the fetch_sub (to 0), then another thread increments before the decrement load occurs, the increment will see the count as 0 and reinitialize. Whether the object is cleared after being replaced, or replaced after being cleared, or some horrible mixture of the two, will depend on whether increment's fetch_add runs before or after decrement's load.
In short: If you find yourself using two separate atomic operations on the same variable, and testing the result of one of them (without looping, as in a compare and swap loop), you're wrong.
More correct code would look like:
void increment() // Still not safe
{
    // acquire is good for the != 0 case, for a later read of obj,
    // or would be if the other writer did a release *after* constructing an obj
    if (count.fetch_add(1, std::memory_order_acquire) == 0)
        obj = std::make_unique<T>();
}
void decrement()
{
    if (count.fetch_sub(1, std::memory_order_acquire) == 1)
        obj.reset();
}
but even then it's not reliable; there's no guarantee that, when count is 0, two threads couldn't call increment, both of them fetch_add at once, and while exactly one of them is guaranteed to see the count as 0, said 0-seeing thread might end up delayed while the one that saw it as 1 assumes the object exists and uses it before it's initialized.
I'm not going to swear there's no mutex-free solution here, but dealing with the issues involved with atomics is almost certainly not worth the headache.
It might be possible to confine the mutex to inside the if() branches, but taking a mutex is also an atomic RMW operation (and not much more than that for a good lightweight implementation) so this doesn't necessarily help a huge amount. If you need really good read-side scaling, you'd want to look into something like RCU instead of a ref-count, to allow readers to truly be read-only, not contending with other readers.
I don't really see a simple way of implementing a reference-counted resource with atomics. Maybe there's some clever way that I haven't thought of yet, but in my experience, clever does not equal readable.
My advice would be to implement it first using a mutex. Then you simply lock the mutex, check the reference count, do whatever needs to be done, and unlock again. It's guaranteed correct:
std::mutex mutex;
int count;
std::unique_ptr<T> obj;

void increment()
{
    auto lock = std::scoped_lock{mutex};
    if (++count == 1) // Am I the first reference?
        obj = std::make_unique<T>();
}

void decrement()
{
    auto lock = std::scoped_lock{mutex};
    if (--count == 0) // Was I the last reference?
        obj.reset();
}
Although at this point, I would just use a std::shared_ptr instead of managing the reference count myself:
std::mutex mutex;
std::weak_ptr<T> obj;

std::shared_ptr<T> acquire()
{
    auto lock = std::scoped_lock{mutex};
    auto sp = obj.lock();
    if (!sp)
        obj = sp = std::make_shared<T>();
    return sp;
}
I believe this also makes it safe when exceptions may be thrown when constructing the object.
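A hypothetical usage sketch of my own (worker() and use() are invented names, not from the answer above): each caller keeps its own shared_ptr alive while working with the resource.
void worker()
{
    std::shared_ptr<T> local = acquire(); // thread-safe: serialized by the mutex
    use(*local);                          // hypothetical function consuming the resource
}   // if `local` was the last reference, the T is destroyed here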
Mutexes are surprisingly performant, so I expect that locking code is plenty quick unless you have a highly specialized use case where you need code to be lock-free.

std::atomic to what extent?

In the C++ Seasoning video by Sean Parent https://youtu.be/W2tWOdzgXHA at 33:41, when starting to talk about “no raw synchronization primitives”, he brings up an example to show that with raw synchronization primitives we will get it wrong. The example is a bad copy-on-write class:
template <typename T>
class bad_cow {
    struct object_t {
        explicit object_t(const T& x) : data_m(x) { ++count_m; }
        atomic<int> count_m;
        T data_m;
    };
    object_t* object_m;

public:
    explicit bad_cow(const T& x) : object_m(new object_t(x)) { }
    ~bad_cow() { if (0 == --object_m->count_m) delete object_m; }
    bad_cow(const bad_cow& x) : object_m(x.object_m) { ++object_m->count_m; }

    bad_cow& operator=(const T& x) {
        if (object_m->count_m == 1) {
            // label #2
            object_m->data_m = x;
        } else {
            object_t* tmp = new object_t(x);
            --object_m->count_m; // bug #1
            // this solves bug #1:
            // if (0 == --object_m->count_m) delete object_m;
            object_m = tmp;
        }
        return *this;
    }
};
He then asks the audience to find the bug, which is bug #1, as he confirms.
But a more obvious bug, I guess, is when some thread is about to execute the line of code I have denoted with label #2 while, all of a sudden, some other thread destroys the object and the destructor is called, which deletes object_m. So the first thread will encounter a deleted memory location.
Am I right? I don’t seem so!
some other thread just destroys the object and the destructor is called, which deletes object_m. So, the first thread will encounter a deleted memory location. Am I right? I don't seem so!
Assuming the rest of the program isn't buggy, that shouldn't happen, because each thread should have its own reference-count object referencing the data_m object. Therefore, if thread B has a bad_cow object that references the data-object, then thread A cannot (or at least should not) ever delete that object, because the count_m field can never drop to zero as long as there remains at least one reference-count object pointing to it.
Of course, a buggy program might encounter the race condition you suggest -- for example, a thread might be holding only a raw pointer to the data-object, rather than a bad_cow that increments its reference count; or a buggy thread might call delete on the object explicitly rather than relying on the bad_cow class to handle deletion properly.
Your objection doesn't hold because *this at that moment is pointing to the object and the count is 1. The counter cannot get to 0 unless someone is not playing this game correctly (but in that case anything can happen anyway).
Another similar objection could be that while you're assigning to *this, and the code being executed is inside the #2 branch, another thread makes a copy of *this; even if this second thread is just reading, it may see the pointed-to object mutate suddenly because of the assignment. The problem in this case is that count was 1 when the mutating thread entered the if, but increased immediately after.
This too, however, is a bad objection, because this code handles concurrency for the pointed-to object (as, for example, std::shared_ptr does), but you are not allowed to mutate and read a single instance of the bad_cow class from different threads. In other words, a single instance of bad_cow cannot be used from multiple threads if some of them are writers, without adding synchronization. Distinct instances of bad_cow pointing to the same storage are instead safe to use from different threads (after fix #1, of course).
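To illustrate that contract, here is a small sketch of my own (assuming fix #1 is applied and the counter is properly initialized): distinct copies may be used from different threads, while a single shared instance may not be mutated concurrently.
#include <thread>

void example()
{
    bad_cow<int> shared{42};

    // OK: each thread owns its own bad_cow copy; the copies share storage
    // and the atomic counter coordinates which one deletes it.
    std::thread t1{[copy = shared]() mutable { copy = 1; }};
    std::thread t2{[copy = shared]() mutable { copy = 2; }};
    t1.join();
    t2.join();

    // NOT OK: mutating `shared` itself from several threads at once
    // would require external synchronization.
}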

Is this a correct C++11 double-checked locking version with shared_ptr?

This article by Jeff Preshing states that the double-checked locking pattern (DCLP) is fixed in C++11. The classical example used for this pattern is the singleton pattern but I happen to have a different use case and I am still lacking experience in handling "atomic<> weapons" - maybe someone over here can help me out.
Is the following piece of code a correct DCLP implementation as described by Jeff under "Using C++11 Sequentially Consistent Atomics"?
class Foo {
    std::shared_ptr<B> data;
    std::mutex mutex;

    void detach()
    {
        if (data.use_count() > 1)
        {
            std::lock_guard<std::mutex> lock{mutex};
            if (data.use_count() > 1)
            {
                data = std::make_shared<B>(*data);
            }
        }
    }

public:
    // public interface
};
No, this is not a correct implementation of DCLP.
The thing is that your outer check data.use_count() > 1 accesses the object (of type B with its reference count), which can be deleted (unreferenced) in the mutex-protected part. No sort of memory fence can help there.
Why data.use_count() accesses the object:
Assume these operations have been executed:
shared_ptr<B> data1 = make_shared<B>(...);
shared_ptr<B> data = data1;
Then you have the following layout (weak_ptr support is not shown here):
 data1                                                      data

[pointer type] ref; --> --------------------------- <-- [pointer type] ref;
                        | atomic<int> m_use_count; |
                        | B obj;                   |
                        ---------------------------
                        [allocated with B::new()]
Each shared_ptr object is just a pointer to an allocated memory region. This memory region embeds the object of type B plus an atomic counter reflecting the number of shared_ptrs pointing to the given object. When this counter becomes zero, the memory region is freed (and the B object is destroyed). Exactly this counter is returned by shared_ptr::use_count().
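A tiny self-contained illustration of my own (struct B here is just a placeholder) of what use_count() reads:
#include <cassert>
#include <memory>

struct B { };

int main()
{
    std::shared_ptr<B> data1 = std::make_shared<B>();
    std::shared_ptr<B> data = data1;  // both point at the same control block
    assert(data.use_count() == 2);    // reads the embedded atomic counter
    data1.reset();                    // counter drops to 1
    assert(data.use_count() == 1);
}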
UPDATE: here is an execution which can lead to accessing memory that has already been freed (initially, two shared_ptrs point to the same object, so .use_count() is 2):
/* Thread 1 */                /* Thread 2 */                /* Thread 3 */
Enter detach()                Enter detach()
Found data.use_count() > 1
                              Enter critical section
                              Found data.use_count() > 1
                              Dereference data,
                              found old object.
                              Unreference old data,
                              use_count becomes 1
                                                            Delete other shared_ptr,
                                                            old object is deleted
                              Assign new object to data
Access old object
(to check use_count)
!! But object is freed !!
The outer check should only read the pointer to the object to decide whether the lock needs to be acquired.
By the way, even if your implementation were correct, it would make little sense:
If data (and detach) can be accessed from several threads at the same time, the object's uniqueness gives no advantage, since it can be accessed from several threads anyway. If you want to change the object, all accesses to data should be protected by an outer mutex, and in that case detach() cannot be executed concurrently.
If data (and detach) can only be accessed by a single thread at a time, the detach implementation doesn't need any locking at all.
This constitutes a data race if two threads invoke detach on the same instance of Foo concurrently, because std::shared_ptr<B>::use_count() (a read-only operation) would run concurrently with the std::shared_ptr<B> move assignment (a modifying operation); that is a data race and hence a cause of undefined behavior. If Foo instances are never accessed concurrently, on the other hand, there is no data race, but then the std::mutex would be useless in your example. The question is: how does data's pointee become shared in the first place? Without this crucial bit of information, it is hard to tell whether the code is safe even if a Foo is never used concurrently.
According to your source, I think you still need to add thread fences before the first test and after the second test.
std::shared_ptr<B> data;
std::mutex mutex;

void detach()
{
    std::atomic_thread_fence(std::memory_order_acquire);
    if (data.use_count() > 1)
    {
        auto lock = std::lock_guard<std::mutex>{mutex};
        if (data.use_count() > 1)
        {
            std::atomic_thread_fence(std::memory_order_release);
            data = std::make_shared<B>(*data);
        }
    }
}

Can code reordering affect my test

I am writing a unit test for a class to test for insertion when no memory is available. It relies on the fact that nbElementInserted is incremented AFTER insert_edge has returned.
void test()
{
    adjacency_list a(true);
    MemoryVacuum no_memory_after_this_line;
    bool signalReceived = false;
    size_t nbElementInserted = 0;

    do
    {
        try
        {
            a.insert_edge( 0, 1, true ); // this should throw
            nbElementInserted++;
        }
        catch (std::bad_alloc &)
        {
            signalReceived = true;
        }
    }
    while (!signalReceived); // this loop is necessary because the
                             // memory vacuum only prevents new memory
                             // pages from being mapped, so the first
                             // allocations may succeed.

    CHECK_EQUAL( nbElementInserted, a.nb_edges() );
}
Now I am wondering which of the two following statements is true:
Reordering can happen, in which case nbElementInserted can be incremented before insert_edge throws an exception, and that invalidates my case. Reordering can happen because the visible result for the user is the same if the two lines are permuted.
Reordering cannot happen because insert_edge is a function and all the side effects of the function should be completed before going to the next line. Throwing is a side effect.
Bonus point: if the correct answer is “yes reordering can happen”, is a memory barrier between the 2 lines sufficient to fix it?
No. Reordering only comes into play in multithreaded or multiprocessing scenarios. In a single thread the compiler cannot reorder instructions in a way that would change the behavior of the program. Exceptions are not an exception to this rule.
Reordering becomes visible when two threads read and write to shared state. If thread A makes modifications to shared variables thread B can see those modifications out-of-order, or even not at all if it has the shared state cached. This can be due to optimizations in either thread A or thread B or both.
Thread A will always see its own modifications in-order, though. Each sequence point must happen in order, at least as far as the local thread is aware.
Let's say thread A executed this code:
a = foo() + bar();
b = baz;
Each ; introduces a sequence point. The compiler is allowed to call either foo() or bar() first, whichever it likes, since + does not introduce a sequence point. If you put printouts you might see foo() called first, or you might see bar() called first. Either one would be correct. It must call them before it assigns baz to b, though. If either foo() or bar() throws an exception b must retain its existing value.
However, if the compiler knew that foo() and bar() never throw, and their execution in no way depends on the value of b, it could reorder the two statements. It'd be a valid optimization. There would be no way for the thread A to know that statements had been reordered.
Thread B, on the other hand, would know. The problem in multithreaded programming is that sequence points don't apply to other threads. That's where memory barriers come in. Memory barriers are cross-thread sequence points, in a sense.
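To see the single-thread guarantee in action, here is a small self-contained sketch of my own (foo, bar, and baz are the hypothetical names from the discussion above): the order of foo() and bar() is unspecified, but both calls complete before b is assigned.
#include <cstdio>

int foo() { std::puts("foo"); return 1; }
int bar() { std::puts("bar"); return 2; }
int baz = 40;

int main()
{
    int a = foo() + bar(); // "foo" and "bar" may print in either order
    int b = baz;           // but both calls complete before this assignment
    std::printf("a=%d b=%d\n", a, b);
}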

Are constructors thread safe in C++ and/or C++11?

Derived from this question and related to this question:
If I construct an object in one thread and then convey a reference/pointer to it to another thread, is it thread un-safe for that other thread to access the object without explicit locking/memory-barriers?
// thread 1
Obj obj;
anyLeagalTransferDevice.Send(&obj);
while(1); // never let obj go out of scope
// thread 2
anyLeagalTransferDevice.Get()->SomeFn();
Alternatively: is there any legal way to convey data between threads that doesn't enforce memory ordering with regards to everything else the thread has touched? From a hardware standpoint I don't see any reason it shouldn't be possible.
To clarify: the question is about cache coherency, memory ordering and whatnot. Can Thread 2 get and use the pointer before Thread 2's view of memory includes the writes involved in constructing obj? To misquote Alexandrescu(?): "Could a malicious CPU designer and compiler writer collude to build a standard-conforming system that makes that break?"
Reasoning about thread-safety can be difficult, and I am no expert on the C++11 memory model. Fortunately, however, your example is very simple. I have rewritten the example, because the constructor is irrelevant.
Simplified Example
Question: Is the following code correct? Or can the execution result in undefined behavior?
// Legal transfer of pointer to int without data race.
// The receive function blocks until send is called.
void send(int*);
int* receive();

// --- thread A ---
/* A1 */ int* pointer = receive();
/* A2 */ int answer = *pointer;

// --- thread B ---
int answer;
/* B1 */ answer = 42;
/* B2 */ send(&answer);
// wait forever
Answer: There may be a data race on the memory location of answer, and thus the execution results in undefined behavior. See below for details.
Implementation of Data Transfer
Of course, the answer depends on the possible and legal implementations of the functions send and receive. I use the following data-race-free implementation. Note that only a single atomic variable is used, and all memory operations use std::memory_order_relaxed. Basically this means that these functions do not restrict memory reorderings.
std::atomic<int*> transfer{nullptr};

void send(int* pointer) {
    transfer.store(pointer, std::memory_order_relaxed);
}

int* receive() {
    while (transfer.load(std::memory_order_relaxed) == nullptr) { }
    return transfer.load(std::memory_order_relaxed);
}
Order of Memory Operations
On multicore systems, a thread can see memory changes in a different order than other threads see them. In addition, both compilers and CPUs may reorder memory operations within a single thread for efficiency - and they do this all the time. Atomic operations with std::memory_order_relaxed do not participate in any synchronization and do not impose any ordering.
In the above example, the compiler is allowed to reorder the operations of thread B, and execute B2 before B1, because the reordering has no effect on the thread itself.
// --- valid execution of operations in thread B ---
int answer;
/* B2 */ send(&answer);
/* B1 */ answer = 42;
// wait forever
Data Race
C++11 defines a data race as follows (N3290 C++11 Draft): "The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior." And the term happens before is defined earlier in the same document.
In the above example, B1 and A2 are conflicting and non-atomic operations, and neither happens before the other. This is obvious, because I have shown in the previous section, that both can happen at the same time.
That's the only thing that matters in C++11. In contrast, the Java Memory Model also tries to define the behavior if there are data races, and it took them almost a decade to come up with a reasonable specification. C++11 didn't make the same mistake.
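For contrast, here is a sketch of my own (not part of the original answer): strengthening the store to release and the loads to acquire makes B1 happen before A2, which removes the data race.
#include <atomic>

std::atomic<int*> transfer{nullptr};

void send(int* pointer) {
    // release: everything written before this store (e.g. B1) becomes
    // visible to any thread that acquire-loads the stored value
    transfer.store(pointer, std::memory_order_release);
}

int* receive() {
    int* p;
    // acquire: once a non-null pointer is seen, the sender's prior
    // writes are visible as well
    while ((p = transfer.load(std::memory_order_acquire)) == nullptr) { }
    return p;
}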
Further Information
I'm a bit surprised that these basics are not well known. The definitive source of information is the section Multi-threaded executions and data races in the C++11 standard. However, the specification is difficult to understand.
A good starting point are Hans Boehm's talks - e.g. available as online videos:
Threads and Shared Variables in C++11
Getting C++ Threads Right
There are also a lot of other good resources that I have mentioned elsewhere, e.g.:
std::memory_order - cppreference.com
There is no parallel access to the same data, so there is no problem:
1. Thread 1 starts execution of Obj::Obj().
2. Thread 1 finishes execution of Obj::Obj().
3. Thread 1 passes a reference to the memory occupied by obj to thread 2.
4. Thread 1 never does anything else with that memory (soon after, it falls into an infinite loop).
5. Thread 2 picks up the reference to the memory occupied by obj.
6. Thread 2 presumably does something with it, undisturbed by thread 1, which is still infinitely looping.
The only potential problem is if Send doesn't act as a memory barrier, but then it wouldn't really be a "legal transfer device".
As others have alluded to, the only way in which a constructor is not thread-safe is if something somehow gets a pointer or reference to it before the constructor is finished, and the only way that would occur is if the constructor itself has code that registers the this pointer to some type of container which is shared across threads.
Now in your specific example, Branko Dimitrijevic gave a good complete explanation how your case is fine. But in the general case, I'd say to not use something until the constructor is finished, though I don't think there's anything "special" that doesn't happen until the constructor is finished. By the time it enters the (last) constructor in an inheritance chain, the object is pretty much fully "good to go" with all of its member variables being initialized, etc. So no worse than any other critical section work, but another thread would need to know about it first, and the only way that happens is if you're sharing this in the constructor itself somehow. So only do that as the "last thing" if you are.
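A short sketch of my own (Registry and Widget are invented names) of the one pattern warned about here: publishing this before construction has finished.
#include <vector>

struct Widget;

struct Registry {
    // assume other threads traverse this container concurrently
    std::vector<Widget*> widgets;
    void add(Widget* w) { widgets.push_back(w); }
};

Registry g_registry;

struct Widget {
    int value;
    Widget()
    {
        g_registry.add(this); // BAD: another thread may now reach *this...
        value = 42;           // ...before this initialization has happened
    }
};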
It is only safe (sort of) if you wrote both threads and know that the first thread is not accessing the object while the second thread is. For example, if the thread constructing it never accesses it after passing the reference/pointer, you would be OK. Otherwise it is thread-unsafe. You could change that by making all methods that access data members (read or write) take a lock.
I read this question only now... but I will still post my comments:
Static Local Variable
There is a reliable way to construct objects in a multi-threaded environment, which is to use a static local variable (static local variable - CppCoreGuidelines):
From the above reference: "This is one of the most effective solutions to problems related to initialization order. In a multi-threaded environment the initialization of the static object does not introduce a race condition (unless you carelessly access a shared object from within its constructor)."
Also note from the reference: if the destruction of X involves an operation that needs to be synchronized, you can create the object on the heap and synchronize when to call the destructor.
Below is an example I wrote to show the Construct On First Use Idiom, which is basically what the reference talks about.
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

class ThreadConstruct
{
public:
    ThreadConstruct(int a, float b) : _a{a}, _b{b}
    {
        std::cout << "ThreadConstruct construct start" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(2));
        std::cout << "ThreadConstruct construct end" << std::endl;
    }

    void get()
    {
        std::cout << _a << " " << _b << std::endl;
    }

private:
    int _a;
    float _b;
};

struct Factory
{
    template<class T, typename ...ARGS>
    static T& get(ARGS&&... args) // forwarding references, so std::forward is meaningful
    {
        // thread safe object instantiation
        static T instance(std::forward<ARGS>(args)...);
        return instance;
    }
};

// thread pool
class Threads
{
public:
    Threads()
    {
        for (size_t num_threads = 0; num_threads < 5; ++num_threads) {
            thread_pool.emplace_back(&Threads::run, this);
        }
    }

    void run()
    {
        // thread safe constructor call
        ThreadConstruct& thread_construct = Factory::get<ThreadConstruct>(5, 10.1);
        thread_construct.get();
    }

    ~Threads()
    {
        for (auto& x : thread_pool) {
            if (x.joinable()) {
                x.join();
            }
        }
    }

private:
    std::vector<std::thread> thread_pool;
};

int main()
{
    Threads thread;
    return 0;
}
Output:
ThreadConstruct construct start
ThreadConstruct construct end
5 10.1
5 10.1
5 10.1
5 10.1
5 10.1