Background
Since C++11, atomic operations on std::shared_ptr have been done via the std::atomic_... free functions (std::atomic_load, std::atomic_store, and friends), because the partial specialization shown below is not possible:
std::atomic<std::shared_ptr<T>>
This is due to the fact that std::atomic only accepts TriviallyCopyable types, and std::shared_ptr (or std::weak_ptr) is not trivially copyable.
However, as of C++20, these free functions are deprecated and have been replaced by a partial template specialization of std::atomic for std::shared_ptr.
Question
I am not sure about:
Why the std::atomic_... free functions were replaced.
What techniques enable the partial template specialization of std::atomic for smart pointers.
Several proposals for atomic<shared_ptr> or something of that nature explain a variety of reasons. Of particular note is P0718, which tells us:
The C++ standard provides an API to access and manipulate specific shared_ptr objects atomically, i.e., without introducing data races when the same object is manipulated from multiple threads without further synchronization. This API is fragile and error-prone, as shared_ptr objects manipulated through this API are indistinguishable from other shared_ptr objects, yet subject to the restriction that they may be manipulated/accessed only through this API. In particular, you cannot dereference such a shared_ptr without first loading it into another shared_ptr object, and then dereferencing through the second object.
N4058 explains a performance issue with how you would have to implement such a thing. Since shared_ptr is typically bigger than a single pointer, atomic access to it usually has to be implemented with a spinlock. So either every shared_ptr instance carries a spinlock even if it is never used atomically, or the implementation of those atomic free functions has to keep a lookaside table of spinlocks for individual objects, or use a single global spinlock.
None of these are problems if you have a type dedicated to being atomic.
atomic<shared_ptr> implementations can use the usual techniques for atomic<T> when T is too large to fit into a CPU atomic operation. They get around the TriviallyCopyable restriction by fiat: the standard requires that the specialization exist and be atomic, so the implementation makes it so. C++ implementations don't have to play by the same rules as regular C++ programs.
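For illustration, here is a minimal sketch contrasting the fragile free-function API with the C++20 specialization, where the atomicity is part of the type. The variable and function names are just for the example; compiling both halves together needs C++20, where the free functions are deprecated.
#include <atomic>
#include <memory>

std::shared_ptr<int> legacy_ptr;                // must only be touched via std::atomic_... free functions
std::atomic<std::shared_ptr<int>> modern_ptr;   // C++20 partial specialization

void demo() {
    // C++11/14/17 style, deprecated in C++20:
    std::atomic_store(&legacy_ptr, std::make_shared<int>(1));
    std::shared_ptr<int> a = std::atomic_load(&legacy_ptr);

    // C++20 style: the atomicity is part of the type and cannot be bypassed.
    modern_ptr.store(std::make_shared<int>(2));
    std::shared_ptr<int> b = modern_ptr.load();
}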
Related
This is a C++ ecosystem question, though it is easiest to ask by referring to Rust.
Are there stable implementations of a thread-safe, reference-counted smart pointer that supports "unwrapping" the value in a thread-safe manner, under the condition that the reference count is exactly 1, as in https://doc.rust-lang.org/std/sync/struct.Arc.html#method.try_unwrap?
Coarsely speaking, std::shared_ptr is similar to Rust's Arc, but this use case does not seem to be supported, nor does it appear straightforward to implement (e.g. see https://en.cppreference.com/w/cpp/memory/shared_ptr/use_count#Notes).
The exhaustive API of std::shared_ptr is available online (see cppreference) and as you can see there is no built-in support.
Furthermore, due to race-conditions with the promotion of std::weak_ptr, it is not possible to safely use use_count or unique to implement such functionality -- and unique was deprecated in C++17 and removed in C++20.
As a result, the functionality is simply not available with std::shared_ptr.
There may be other implementations of std::shared_ptr which offer this functionality -- though Boost's doesn't appear to.
As mentioned in the Notes section of use_count, the primary difficulty in implementing this function is the potential race condition with weak_ptr promotion. That is, consider a naive implementation:
// Susceptible to race conditions, do not use!
// (Illustrative signature: tries to move the value out if this is the sole owner.)
template <typename T>
std::optional<T> naive_try_unwrap(std::shared_ptr<T>& pointer) {
    if (pointer.use_count() == 1) {
        return std::move(*pointer);
    }
    return std::nullopt;
}
This would not work because between the check and the actual move, a new shared owner may appear in another thread, allowing concurrent access to the value.
The only ways to have this functionality safely are:
The shared_ptr implementation does not support weak pointers in the first place.
The shared_ptr implementation provides it, and ensures the absence of race condition with weak_ptr promotion.
I note that the latter typically requires locking the same lock used for weak_ptr promotion, which is why it cannot be provided externally.
A weaker variant could be implemented if unique also guaranteed the absence of weak_ptr instances. Although it would not be strictly equivalent, since the presence of any weak_ptr would cause it to fail, it could still be useful in the many scenarios where no weak_ptr is ever created; a sketch of that idea follows.
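A minimal sketch of such a weaker variant, assuming a program-level invariant that the function itself cannot check: no std::weak_ptr is ever created for this object, and no other thread accesses this particular shared_ptr instance concurrently. The function name is hypothetical.
#include <memory>
#include <optional>

// Only valid under the invariant above: with no weak_ptrs in existence and
// use_count() == 1, no other thread holds anything it could copy a new owner
// from, so the count cannot increase between the check and the move.
template <typename T>
std::optional<T> try_unwrap_no_weak(std::shared_ptr<T>& pointer) {
    if (pointer.use_count() == 1) {
        std::optional<T> result(std::move(*pointer));
        pointer.reset();
        return result;
    }
    return std::nullopt;
}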
I wrote some multithreaded but lock-free code that compiled and apparently executed fine on an earlier C++11-supporting GCC (7 or older). The atomic fields were ints and so on. To the best of my recollection, I used normal C/C++ operations to operate on them (a=1;, etc.) in places where atomicity or event ordering wasn't a concern.
Later I had to do some double-width CAS operations, and made a little struct with a pointer and a counter, as is common. I tried doing the same normal C/C++ operations on it and got errors saying the variable had no such members. (Which is what you'd expect from most normal templates, but I half-expected atomic to work differently, in part because, to the best of my recollection, normal assignments to and from were supported for ints.)
So two part question:
1) Should we use the atomic methods in all cases, even (say) initialization done by one thread with no race conditions? 1a) Once declared atomic, is there no way to access the value non-atomically? 1b) Do we also have to use the more verbose atomic<> member functions to do so?
2) Or can we, for integer types at least, use normal C/C++ operations? And in that case, will those operations be the same as load()/store(), or are they merely normal assignments?
And a semi-meta question: is there any insight as to why normal C/C++ operations aren't supported on atomic<> variables? I'm not sure if the C++11 language as spec'd has the power to write code that does that, but the spec can certainly require the compiler to do things the language as spec'd isn't powerful enough to do.
You're maybe looking for C++20 std::atomic_ref<T> to give you the ability to do atomic ops on objects that can also be accessed non-atomically. Make sure your non-atomic T object is declared with sufficient alignment for atomic<T>. e.g.
alignas(std::atomic_ref<long long>::required_alignment)
long long sometimes_shared_var;
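You can then use atomic_ref in the phases where the variable is shared, and plain accesses elsewhere. A minimal sketch, assuming C++20 and repeating the declaration above for completeness; the phase function names are just for illustration:
#include <atomic>

alignas(std::atomic_ref<long long>::required_alignment)
long long sometimes_shared_var = 0;

void shared_phase()        // runs concurrently with other threads
{
    std::atomic_ref<long long> ref(sometimes_shared_var);
    ref.fetch_add(1, std::memory_order_relaxed);   // atomic RMW on the plain object
}

void private_phase()       // no other thread touches the variable here
{
    sometimes_shared_var *= 2;                     // plain non-atomic access is fine
}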
But that requires C++20, and nothing equivalent is available in C++17 or earlier. Once an atomic object is constructed, I don't think there's any guaranteed portable safe way to modify it other than its atomic member functions.
Its internal object representation isn't specified by the standard, so using memcpy to pull the struct sixteenbyte object out of an atomic<sixteenbyte> efficiently isn't guaranteed to be safe, even if no other thread has a reference to it. You'd have to know how a specific implementation stores it. Checking sizeof(atomic<T>) == sizeof(T) is a good sign, though, and mainstream implementations do in practice just use a T as the object representation of atomic<T>.
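For example, a compile-time sanity check along those lines (a sketch; sixteenbyte is the hypothetical struct from the question):
#include <atomic>
#include <cstdint>

struct sixteenbyte { std::uintptr_t ptr; std::uintptr_t counter; };

// A good sign (not a standard guarantee) that the object representation
// is just a plain T, with no lock or other state stored alongside it:
static_assert(sizeof(std::atomic<sixteenbyte>) == sizeof(sixteenbyte),
              "atomic<T> adds extra state on this implementation");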
Related: How can I implement ABA counter with c++11 CAS? for a nasty union hack ("safe" in GNU C++) to give efficient access to a single member, because compilers don't optimize foo.load().ptr down to an atomic load of just that member. Instead, GCC and clang will use lock cmpxchg16b to load the whole pointer+counter pair and then extract just the first member. C++20 atomic_ref<> should solve that.
Accessing members of atomic<struct foo>: one reason for not allowing shared.x = tmp; is that it's the wrong mental model. If two different threads are storing to different members of the same struct, how does the language define any ordering for what other threads see? Plus it was probably considered too easy for programmers to design their lockless algorithms incorrectly if stuff like that were allowed.
Also, how would you even implement that? Return an lvalue-reference? It can't be to the underlying non-atomic object. And what if the code captures that reference and keeps using it long after calling some function that's not load or store?
Remember that ISO C++'s ordering model works in terms of synchronizes-with, not in terms of local reordering and a single cache-coherent domain like the way real ISAs define their memory models. The ISO C++ model is always strictly in terms of reading, writing, or RMWing the entire atomic object. So a load of the object can always sync-with any store of the whole object.
In hardware that would actually still work for a store to one member and a load from a different member if the whole object is in one cache line, on real-world ISAs. At least I think so, although possibly not on some SMT systems. (Being in one cache line is necessary for lock-free atomic access to the whole object to be possible on most ISAs.)
Do we also have to use the more verbose atomic<> member functions to do so?
The member functions of atomic<T> include overloads of the relevant operators, including operator= (a store) and implicit conversion back to T (a load). For atomic<int> a;, a = 1; is equivalent to a.store(1, std::memory_order_seq_cst), and with that default seq_cst ordering it is the slowest way to set a new value.
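A small sketch of those equivalences:
#include <atomic>

std::atomic<int> a{0};

void demo()
{
    a = 1;                                    // same as a.store(1, std::memory_order_seq_cst)
    a.store(1, std::memory_order_relaxed);    // explicitly request a weaker ordering
    int x = a;                                // same as a.load(std::memory_order_seq_cst)
    int y = a.load(std::memory_order_relaxed);
    (void)x; (void)y;
}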
Should we use the atomic methods in all cases, even (say) initialization done by one thread with no race conditions?
You don't have any choice, other than passing args to the constructors of std::atomic<T> objects.
You can use mo_relaxed loads / stores while your object is still thread-private, though. Avoid any RMW operators like +=. e.g. a.store(a.load(relaxed) + 1, relaxed); will compile about the same as for non-atomic objects of register-width or smaller.
(Except that it can't optimize away and keep the value in a register, so use local temporaries instead of actually updating the atomic object).
But for atomic objects too large to be lock-free, there's not really anything you can do efficiently except construct them with the right values in the first place.
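A sketch of that thread-private setup phase; the names are illustrative, and the assumption is that no other thread can see the object yet:
#include <atomic>

std::atomic<int> counter{0};     // best: construct with the right value in the first place

void init_phase()                // assumed to run before 'counter' is shared with any thread
{
    int tmp = counter.load(std::memory_order_relaxed);
    tmp += 42;                                       // do the work in a local temporary
    counter.store(tmp, std::memory_order_relaxed);   // one cheap relaxed store at the end
}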
The atomic fields were ints and so on. ...
and apparently executed fine
If you mean plain int, not atomic<int> then it wasn't portably safe.
Data-race UB doesn't guarantee visible breakage; the nasty thing about undefined behaviour is that happening to work in your test case is one of the things that's allowed to happen.
And in many cases a pure load or pure store won't break, especially on strongly ordered x86, unless the load or store can hoist or sink out of a loop (see Why is integer assignment on a naturally aligned variable atomic on x86?). It'll eventually bite you when a compiler manages to do cross-file inlining and reorder some operations at compile time, though.
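A sketch of the classic way this bites: a spin-wait on a plain variable whose load the compiler may hoist out of the loop, versus the atomic version. Variable names are illustrative.
#include <atomic>

bool plain_ready = false;                 // data-race UB if another thread writes it
std::atomic<bool> atomic_ready{false};

void spin_wait_broken()
{
    while (!plain_ready) { }              // the load may be hoisted out of the loop,
}                                         // turning this into an infinite loop

void spin_wait_ok()
{
    while (!atomic_ready.load(std::memory_order_acquire)) { }   // reloads every iteration
}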
why normal C/C++ operations aren't supported on atomic<> variables?
... but the spec can certainly require the compiler to do things the language as spec'd isn't powerful enough to do.
This was in fact a limitation of C++11 through C++17, not of compilers; most compilers have no problem doing atomic operations on plain objects. For example, gcc/clang's implementation of the <atomic> header uses the __atomic_ builtins, which take a plain T* pointer.
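For instance, those GNU builtins can operate directly on an ordinary object (a sketch using GNU extensions, not ISO C++):
long long plain_var = 0;      // an ordinary, non-atomic object

void gnu_builtin_demo()
{
    __atomic_store_n(&plain_var, 42, __ATOMIC_RELEASE);          // atomic store through a plain pointer
    long long v = __atomic_load_n(&plain_var, __ATOMIC_ACQUIRE); // atomic load through a plain pointer
    (void)v;
}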
The C++20 proposal for atomic_ref is p0019, which cites as motivation:
An object could be heavily used non-atomically in well-defined phases of an application. Forcing such objects to be exclusively atomic would incur an unnecessary performance penalty.
3.2. Atomic Operations on Members of a Very Large Array
High-performance computing (HPC) applications use very large arrays. Computations with these arrays typically have distinct phases that allocate and initialize members of the array, update members of the array, and read members of the array. Parallel algorithms for initialization (e.g., zero fill) have non-conflicting access when assigning member values. Parallel algorithms for updates have conflicting access to members which must be guarded by atomic operations. Parallel algorithms with read-only access require best-performing streaming read access, random read access, vectorization, or other guaranteed non-conflicting HPC pattern.
All of these things are a problem with std::atomic<>, confirming your suspicion that this is a problem for C++11.
Instead of introducing a way to do non-atomic access to std::atomic<T>, they introduced a way to do atomic access to a T object. One problem with this is that atomic<T> might need more alignment than a T would get by default, so be careful.
Unlike with giving atomic access to members of T, you could plausibly have a .non_atomic() member function that returned an lvalue reference to the underlying object.
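As a concrete illustration of the large-array use case quoted from p0019, here is a sketch assuming C++20; the names are hypothetical, and it assumes each element satisfies std::atomic_ref<double>::required_alignment (true on typical 64-bit targets):
#include <atomic>
#include <cstddef>
#include <vector>

std::vector<double> histogram(1u << 20, 0.0);   // hypothetical large array

// Update phase: conflicting concurrent accesses go through atomic_ref.
void accumulate(std::size_t bin, double weight)
{
    std::atomic_ref<double> slot(histogram[bin]);
    double old = slot.load(std::memory_order_relaxed);
    while (!slot.compare_exchange_weak(old, old + weight,
                                       std::memory_order_relaxed)) {
        // 'old' is refreshed with the current value on failure; retry.
    }
}

// Read phase: no concurrent writers, so plain non-atomic access is fine.
double total()
{
    double sum = 0.0;
    for (double x : histogram) sum += x;
    return sum;
}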
I use std::atomic for atomicity. Still, in some places in the code, atomicity is not needed by the program logic. In those cases, I'm wondering whether it is OK, both pedantically and practically, to use the constructor in place of store() as an optimization. For example,
// p is a std::atomic<node*>. Instead of:
// p.store(nullptr, std::memory_order_relaxed);
new (&p) std::atomic<node*>(nullptr);
In accord with the standard, whether this works depends entirely on the implementation of std::atomic<T>. If it is lock-free for that T, then the implementation probably just stores a T. If it isn't lock-free, things get more complex, since it may store a mutex or some other thing.
The thing is, you don't know what std::atomic<T> stores. This matters because if it stores a const-qualified object or a reference type, then reusing the storage here will cause problems. The pointer returned by placement-new can certainly be used, but if a const or reference type is used, the original object name p cannot.
Why would std::atomic<T> store a const or reference type? Who knows; my point is that, because its implementation is not under your control, you pedantically cannot know how any particular implementation behaves.
As for "practically", it's unlikely that this will cause a problem. Especially if the atomic<T> is always lock-free.
That being said, "practically" should also include some notion of how other users will interpret this code. While people experienced with things like reusing storage will be able to understand what the code is doing, they will likely be puzzled by why you're doing it. That means you'll need to either stick a comment on that line or write a (template) function such as the non_atomic_reset sketched below.
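A sketch of such a helper, under the same caveats about the implementation discussed above. The name non_atomic_reset comes from the suggestion; the rest is illustrative.
#include <atomic>
#include <memory>
#include <new>

// Documents the intent and keeps the storage-reuse trick in one place.
// Relies on the assumption that this implementation's atomic<T> can be
// destroyed and reconstructed in place (see the caveats above).
template <typename T>
void non_atomic_reset(std::atomic<T>& a, T value)
{
    std::destroy_at(std::addressof(a));                  // end the lifetime of the old atomic
    ::new (static_cast<void*>(std::addressof(a)))        // reuse the same storage
        std::atomic<T>(std::move(value));
}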
Also, it should be noted that std::shared_ptr uses atomic increments/decrements for its reference counter. I bring that up because there is no std::single_threaded_shared_ptr that doesn't use atomics, or a special constructor that doesn't use atomics. So even in cases where you're using shared_ptr in pure single-threaded code, those atomics are still firing. This was considered a reasonable tradeoff by the C++ standards committee.
Atomics aren't cheap, but they're not so expensive (most of the time) that using unusual mechanisms like this to bypass an atomic store is a good idea. As always, profile to see if the code obfuscation is worth it.
Why are there atomic overloads for shared_ptr (the std::atomic_... free functions) rather than a specialization of std::atomic that deals with shared_ptrs? It seems inconsistent with the object-oriented patterns employed by the rest of the C++ standard library.
And just to make sure I am getting this right: when using shared_ptrs to implement the read-copy-update idiom, we need to do all accesses (reads and writes) to the shared pointers through these functions, right?
Because:
std::atomic may be instantiated with any TriviallyCopyable type T.
Source: http://en.cppreference.com/w/cpp/atomic/atomic
And
std::is_trivially_copyable<std::shared_ptr<int>>::value == false;
Thus, you cannot instantiate std::atomic<> with std::shared_ptr<>. However, automatic memory management is useful in multithreading, so those overloads were provided. Those overloads are most likely not lock-free, however (lock-freedom being one of the big draws of std::atomic<> in the first place); they probably take a lock to provide the synchronization.
As for your second question: yes.
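To make that second part concrete, here is a minimal read-copy-update style sketch using those free functions; the Config type and the function names are hypothetical, and it assumes a single writer:
#include <memory>

struct Config { int value = 0; };        // hypothetical shared, read-mostly state

std::shared_ptr<Config> g_config = std::make_shared<Config>();

// Readers: every access to the shared pointer goes through the atomic free functions.
std::shared_ptr<Config> read_config()
{
    return std::atomic_load(&g_config);
}

// Single writer: copy the current state, modify the copy, then publish it atomically.
void update_config(int new_value)
{
    auto updated = std::make_shared<Config>(*std::atomic_load(&g_config));
    updated->value = new_value;
    std::atomic_store(&g_config, updated);
}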
Reading the docs on boost::atomic and on std::atomic leaves me confused as to whether the atomic interface is supposed to support non-trivial types.
That is, given a (value) type that can only be written/read by enclosing the read/write in a full mutex, because it has a non-trivial copy constructor/assignment operator, is this supposed to be supported by std::atomic? (Boost clearly states that it is UB.)
Am I supposed to provide the specialization the docs talk about for non-trivial types myself?
Note: I was hitting on this because I have a cross-thread callback object boost::function<bool (void)> simpleFn; that needs to be set/reset atomically. Having a separate mutex / critical section, or even wrapping both in an atomic-like helper type with simple set and get, seems easy enough, but is there anything out of the box?
Arne's answer already points out that the Standard requires trivially copyable types for std::atomic.
Here's some rationale for why atomics might not be the right tool for your problem in the first place: atomics are the fundamental primitives for building thread-safe data structures in C++. They are supposed to be the lowest-level building blocks for constructing more powerful data structures like thread-safe containers.
In particular, atomics are usually used for building lock-free data structures. For locking data structures, primitives like std::mutex and std::condition_variable are a far better match, if only because it is very hard to write blocking code with atomics without introducing lots of busy waiting.
So when you think of std::atomic the first association should be lock-free (despite the fact that most of the atomic types are technically allowed to have blocking implementations). What you describe is a simple lock-based concurrent data structure, so wrapping it in an atomic should already feel wrong from a conceptual point of view.
Unfortunately, it is currently not clear how to express in the language that a data structure is thread-safe (which I guess was your primary intent for using atomic in the first place). Herb Sutter had some interesting ideas on this issue, but I guess for now we simply have to accept the fact that we have to rely on documentation to communicate how certain data structures behave with regards to thread-safety.
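For completeness, a sketch of the simple lock-based helper mentioned in the question; nothing quite like it is provided out of the box by the standard library, and the class name is hypothetical:
#include <functional>
#include <mutex>

template <typename T>
class locked_value {
public:
    void set(T value)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        value_ = std::move(value);
    }
    T get() const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;   // returns a copy taken under the lock
    }
private:
    mutable std::mutex mutex_;
    T value_;
};

// e.g. locked_value<std::function<bool()>> simpleFn;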
The standard specifies (§29.5/1) that
The type of the template argument T shall be trivially copyable
Meaning no, you cannot use types with non-trivial copy-ctor or assignment-op.
However, as with many templates in namespace std, you are free to specialize the template for a type of your own that it has not already been specialized for. So if you really want std::atomic<MyNonTriviallyCopyableType>, you have to provide the specialization yourself. How that specialization behaves is up to you, meaning you are free to blow off your own leg or the leg of anyone using that specialization, because it is simply outside the scope of the standard.
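To illustrate what that might look like, here is a heavily hedged sketch of a lock-based specialization for a hypothetical program-defined type; as said above, this is entirely outside the standard's guarantees and you own and maintain every part of it yourself:
#include <atomic>
#include <mutex>
#include <string>
#include <utility>

struct MyNonTriviallyCopyableType {   // hypothetical example type
    std::string name;                 // non-trivial copy => not TriviallyCopyable
};

namespace std {
template <>
class atomic<MyNonTriviallyCopyableType> {
public:
    atomic() = default;
    explicit atomic(MyNonTriviallyCopyableType v) : value_(std::move(v)) {}

    bool is_lock_free() const noexcept { return false; }   // it plainly is not

    void store(MyNonTriviallyCopyableType v)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        value_ = std::move(v);
    }
    MyNonTriviallyCopyableType load() const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;
    }

private:
    mutable std::mutex mutex_;
    MyNonTriviallyCopyableType value_;
};
}  // namespace std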