Why atomic overloads for shared_ptr exist - c++

Why are there atomic overloads for shared_ptr, as described here, rather than a specialization of std::atomic that deals with shared_ptrs? This seems inconsistent with the object-oriented patterns employed by the rest of the C++ standard library.
And just to make sure I am getting this right: when using shared_ptrs to implement the read-copy-update idiom, we need to do all accesses (reads and writes) to the shared pointer through these functions, right?

Because:
std::atomic may be instantiated with any TriviallyCopyable type T.
Source: http://en.cppreference.com/w/cpp/atomic/atomic
And
std::is_trivially_copyable<std::shared_ptr<int>>::value == false;
Thus, you cannot instantiate std::atomic<> with std::shared_ptr<>. However, automatic memory management is useful in multi-threading, so those overloads were provided. Those overloads are most likely not lock-free, however (lock-freedom being one of the big draws of std::atomic<> in the first place); they probably use a lock to provide synchronization.
As for your second question: yes.
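For illustration, here is a minimal sketch of that read-copy-update pattern using the C++11 atomic free functions (the Config type, the g_config global, and the function names are invented for the example; note that these free functions were deprecated in C++20 in favour of std::atomic<std::shared_ptr<T>>):

    #include <atomic>
    #include <memory>
    #include <string>

    // Hypothetical shared configuration object, purely for illustration.
    struct Config {
        std::string endpoint;
    };

    std::shared_ptr<Config> g_config = std::make_shared<Config>();

    // Reader: atomically load the current snapshot, then use it freely.
    std::shared_ptr<Config> read_config() {
        return std::atomic_load(&g_config);
    }

    // Writer (read-copy-update): copy the current object, modify the copy,
    // then atomically publish it; readers keep their old snapshot alive.
    void update_endpoint(const std::string& endpoint) {
        std::shared_ptr<Config> expected = std::atomic_load(&g_config);
        std::shared_ptr<Config> desired;
        do {
            desired = std::make_shared<Config>(*expected); // copy
            desired->endpoint = endpoint;                  // update
        } while (!std::atomic_compare_exchange_weak(&g_config, &expected, desired));
    }

Readers get a consistent snapshot that stays alive for as long as they hold it, and the writer never mutates an object that readers might be looking at.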

Related

Any equivalent of Rust's Arc::try_unwrap in c++?

This is a C++ ecosystem question, though it is easiest to ask by referring to Rust.
Are there stable implementations of a thread-safe, reference-counted smart pointer which supports "unwrapping" the value in a thread-safe manner, under the condition that the reference count is exactly 1, as in https://doc.rust-lang.org/std/sync/struct.Arc.html#method.try_unwrap?
Coarsely speaking, std::shared_ptr is similar to Arc, but this use case does not seem to be supported, nor does it appear straightforward to implement (e.g. see https://en.cppreference.com/w/cpp/memory/shared_ptr/use_count#Notes).
The exhaustive API of std::shared_ptr is available online (see cppreference) and as you can see there is no built-in support.
Furthermore, due to race-conditions with the promotion of std::weak_ptr, it is not possible to safely use use_count or unique to implement such functionality -- and unique was deprecated in C++17 and removed in C++20.
As a result, the functionality is simply not available with std::shared_ptr.
There may be other implementations of std::shared_ptr which offer this functionality -- though Boost's doesn't appear to.
As noted in the Notes section of use_count, the primary difficulty in implementing this function is the potential race condition with weak_ptr promotion. That is, a naive implementation such as:
    // Susceptible to race conditions, do not use!
    template <typename T>
    std::optional<T> try_unwrap(std::shared_ptr<T>& pointer) {
        if (pointer.use_count() == 1) {
            return std::move(*pointer);
        }
        return std::nullopt;
    }
would not work, because between the check and the actual move a new shared owner may appear in another thread, allowing concurrent access to the value.
The only ways to have this functionality safely are:
The shared_ptr implementation does not support weak pointers in the first place.
The shared_ptr implementation provides it, and ensures the absence of race condition with weak_ptr promotion.
I note that the latter typically requires taking the same lock that is used for weak_ptr promotion, which is why it cannot be provided externally.
A weaker variant could be implemented if unique also guaranteed the absence of any weak_ptr. Although it would not be strictly equivalent, since the presence of any weak_ptr would cause it to fail, it could still be useful in many scenarios where no weak_ptr is created.

Partial template specialization of std::atomic for smart pointers

Background
Since C++11, atomic operations on std::shared_ptr can be done via the std::atomic_... free functions found here, because the partial specialization shown below is not possible:
std::atomic<std::shared_ptr<T>>
This is due to the fact that std::atomic only accepts TriviallyCopyable types, and std::shared_ptr (or std::weak_ptr) is not trivially copyable.
However, as of C++20 these functions have been deprecated and replaced by the partial template specialization of std::atomic for std::shared_ptr, as described here.
Question
I am not sure of:
Why std::atomic_... got replaced.
Techniques used to enable the partial template specialization of std::atomic for smart pointers.
Several proposals for atomic<shared_ptr> or something of that nature explain a variety of reasons. Of particular note is P0718, which tells us:
The C++ standard provides an API to access and manipulate specific shared_ptr objects atomically, i.e., without introducing data races when the same object is manipulated from multiple threads without further synchronization. This API is fragile and error-prone, as shared_ptr objects manipulated through this API are indistinguishable from other shared_ptr objects, yet subject to the restriction that they may be manipulated/accessed only through this API. In particular, you cannot dereference such a shared_ptr without first loading it into another shared_ptr object, and then dereferencing through the second object.
N4058 explains a performance issue with regard to how you have to go about implementing such a thing. Since shared_ptr is typically bigger than a single pointer in size, atomic access typically has to be implemented with a spinlock. So either every shared_ptr instance has a spinlock even if it never gets used atomically, or the implementation of those atomic functions has to have a lookaside table of spinlocks for individual objects. Or use a global spinlock.
None of these are problems if you have a type dedicated to being atomic.
atomic<shared_ptr> implementations can use the usual techniques for atomic<T> when T is too large to fit into a CPU atomic operation. They get around the TriviallyCopyable restriction by fiat: the standard requires that the specialization exist and be atomic, so the implementation makes it so. C++ implementations don't have to play by the same rules as regular C++ programs.
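As a rough sketch of what the C++20 replacement looks like in use (Widget and the global are invented for the example, and this requires a standard library that actually ships the specialization):

    #include <atomic>
    #include <memory>

    struct Widget { int value = 0; };

    // C++20: the dedicated partial specialization replaces the free functions.
    std::atomic<std::shared_ptr<Widget>> g_widget{std::make_shared<Widget>()};

    void reader() {
        std::shared_ptr<Widget> snapshot = g_widget.load(); // atomic read
        int v = snapshot->value;                            // safe to use the snapshot
        (void)v;
    }

    void writer() {
        auto replacement = std::make_shared<Widget>();
        replacement->value = 42;
        g_widget.store(replacement);                        // atomic publish
        // Typically not lock-free; g_widget.is_lock_free() reports what the
        // implementation actually does.
    }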

C++ atomic with non-trivial type?

Reading the docs on boost::atomic and on std::atomic leaves me confused as to whether the atomic interface is supposed to support non-trivial types.
That is, given a (value) type that can only be written/read by enclosing the read/write in a full mutex, because it has a non-trivial copy constructor/assignment operator, is this supposed to be supported by std::atomic? (Boost clearly states that it is UB.)
Am I supposed to provide the specialization the docs talk about myself for non-trivial types?
Note: I was hitting this because I have a cross-thread callback object boost::function<bool (void)> simpleFn; that needs to be set/reset atomically. Having a separate mutex / critical section, or even wrapping both in an atomic-like helper type with simple set and get, seems easy enough, but is there anything out of the box?
Arne's answer already points out that the Standard requires trivially copyable types for std::atomic.
Here's some rationale why atomics might not be the right tool for your problem in the first place: atomics are the fundamental primitives for building thread-safe data structures in C++. They are supposed to be the lowest-level building blocks for constructing more powerful data structures like thread-safe containers.
In particular, atomics are usually used for building lock-free data structures. For locking data structures, primitives like std::mutex and std::condition_variable are a much better match, if only because it is very hard to write blocking code with atomics without introducing lots of busy waiting.
So when you think of std::atomic the first association should be lock-free (despite the fact that most of the atomic types are technically allowed to have blocking implementations). What you describe is a simple lock-based concurrent data structure, so wrapping it in an atomic should already feel wrong from a conceptual point of view.
Unfortunately, it is currently not clear how to express in the language that a data structure is thread-safe (which I guess was your primary intent for using atomic in the first place). Herb Sutter had some interesting ideas on this issue, but I guess for now we simply have to accept the fact that we have to rely on documentation to communicate how certain data structures behave with regards to thread-safety.
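For the concrete callback case from the question, a small lock-based wrapper is probably all that is needed. A sketch, using std::function instead of boost::function and with invented names:

    #include <functional>
    #include <mutex>

    class CallbackSlot {
    public:
        void set(std::function<bool()> fn) {
            std::lock_guard<std::mutex> lock(mutex_);
            fn_ = std::move(fn);
        }

        void reset() {
            std::lock_guard<std::mutex> lock(mutex_);
            fn_ = nullptr;
        }

        // Copy the callback out under the lock, then invoke it without holding
        // the lock, so a long-running callback does not block set()/reset().
        bool invoke_if_set() {
            std::function<bool()> local;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                local = fn_;
            }
            return local ? local() : false;
        }

    private:
        std::mutex mutex_;
        std::function<bool()> fn_;
    };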
The standard specifies (§29.5/1) that
The type of the template argument T shall be trivially copyable
Meaning no, you cannot use types with a non-trivial copy constructor or assignment operator.
However, as with any template in namespace std, you are free to specialize the template for any type that has not already been specialized by the implementation. So if you really want to use std::atomic<MyNonTriviallyCopyableType>, you have to provide the specialization yourself. How that specialization behaves is up to you, meaning you are free to blow off your own leg or the leg of anyone using that specialization, because it is entirely outside the scope of the standard.
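If you do go down that road, a minimal sketch of what such a specialization might look like is given below. MyNonTriviallyCopyableType is hypothetical, only load/store/exchange are shown, nothing is lock-free, and, as said above, the standard gives you no guarantees here:

    #include <atomic>
    #include <mutex>
    #include <string>
    #include <utility>

    // Hypothetical non-trivially-copyable type, for illustration only.
    struct MyNonTriviallyCopyableType {
        std::string payload;
    };

    // Whether this is a good idea at all is exactly what the answer above
    // questions; the standard offers no guarantees for such a specialization.
    namespace std {
    template <>
    class atomic<MyNonTriviallyCopyableType> {
    public:
        atomic() = default;
        explicit atomic(MyNonTriviallyCopyableType v) : value_(std::move(v)) {}
        atomic(const atomic&) = delete;
        atomic& operator=(const atomic&) = delete;

        MyNonTriviallyCopyableType load() const {
            lock_guard<mutex> lock(mutex_);
            return value_;
        }

        void store(MyNonTriviallyCopyableType v) {
            lock_guard<mutex> lock(mutex_);
            value_ = std::move(v);
        }

        MyNonTriviallyCopyableType exchange(MyNonTriviallyCopyableType v) {
            lock_guard<mutex> lock(mutex_);
            swap(value_, v);
            return v;
        }

        bool is_lock_free() const { return false; } // it never is

    private:
        mutable mutex mutex_;
        MyNonTriviallyCopyableType value_;
    };
    } // namespace std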

Why are std::atomic objects not copyable?

It seems that std::atomic types are not copy constructible or copy assignable.
Why?
Is there a technical reason why copying atomic types is not possible?
Or is the interface limited on purpose to avoid some sort of bad code?
On platforms without atomic instructions (or without atomic instructions for all integer sizes) the types might need to contain a mutex to provide atomicity. Mutexes are not generally copyable or movable.
In order to keep a consistent interface for all specializations of std::atomic<T> across all platforms, the types are never copyable.
Technical reason: Most atomic types are not guaranteed to be lock-free. The representation of the atomic type might need to contain an embedded mutex and mutexes are not copyable.
Logical reason: What would it mean to copy an atomic type? Would the entire copy operation be expected to be atomic? Would the copy and the original represent the same atomic object?
There is no well-defined meaning for an operation spanning two separately atomic objects that would make this worthwhile. The one thing you can do is transfer the value loaded from one atomic object into another. But the load directly synchronizes only with other operations on the former object, while the store synchronizes with operations on the destination object. And each part can come with completely independent memory ordering constraints.
Spelling out such an operation as a load followed by a store makes that explicit, whereas an assignment would leave one wondering how it relates to the memory access properties of the participating objects. If you insist, you can achieve a similar effect by combining the existing conversions of std::atomic<..> (requires an explicit cast or other intermediate of the value type).
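A short sketch of what that looks like in practice (names invented):

    #include <atomic>

    std::atomic<int> a{1};
    std::atomic<int> b{0};

    void transfer() {
        // std::atomic<int> c = a;                     // ill-formed: copy constructor is deleted
        int value = a.load(std::memory_order_acquire); // synchronizes with writers of a
        b.store(value, std::memory_order_release);     // synchronizes with readers of b
        // Each operation is individually atomic, but the pair is not: another
        // thread may observe b before or after the store, and a may already
        // have changed again by then.
    }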

standard containers as local variables in multi-threaded application

I'm aware of the fact that the containers from the standard library are not thread-safe. By that, I used to think that a container, say of type std::list, cannot be accessed by more than one thread concurrently (where some of the threads may modify the container). But now it seems that there is more to it than meets the eye; something more subtle, something not so obvious, at least to me.
For example, consider this function which accepts the first argument by value:
    void log(std::string msg, severity s, /*...*/)
    {
        return; // no code!
    }
Is this thread-safe?
At first, it seems that it is thread-safe, as the function body accesses no shared modifiable resources. On second thought, it occurs to me that when invoking such a function, an object of type std::string will be created (the first argument), and the construction of this object doesn't seem thread-safe, as it internally uses std::allocator, which I believe isn't thread-safe. Hence invoking such a function isn't thread-safe either. But if that is correct, then what about this:
    void f()
    {
        std::string msg = "message"; // is it thread-safe? it doesn't seem so!
    }
Am I reasoning correctly? Can we use std::string (or any container which uses std::allocator internally) in a multi-threaded program?
I'm specifically talking about containers as local variables, as opposed to shared objects.
I searched Google and found many similar doubts, but no concrete answer. I face a similar problem to this one:
c++ allocators thread-safe?
Please consider both C++03 and C++11.
In C++11, std::allocator is thread safe. From its definition:
20.6.9.1/6: Remark: the storage is obtained by calling ::operator new(std::size_t)
and from the definition of ::operator new:
18.6.1.4: The library versions of operator new and operator delete, user replacement versions of global operator new and operator delete, and the C standard library functions calloc, malloc, realloc, and free shall not introduce data races (1.10) as a result of concurrent calls from different threads.
C++03 had no concept of threads, so any thread safety was implementation-specific; you'd have to refer to your implementation's documentation to see what guarantees it offered, if any. Since you're using Microsoft's implementation, this page says that it is safe to write to multiple container objects of the same class from many threads, which implies that std::allocator is thread-safe.
In C++11 this would be addressed for the default allocator in:
20.6.9.1 allocator members [allocator.members]
Except for the destructor, member functions of the default allocator shall not introduce data races (1.10) as a result of concurrent calls to those member functions from different threads. Calls to these functions that allocate or deallocate a particular unit of storage shall occur in a single total order, and each such deallocation call shall happen before the next allocation (if any) in this order.
Any user-provided allocator would have to hold to the same constraints if it were going to be used across different threads.
Of course, earlier versions of the standard say nothing about this, since they didn't address multithreading at all. If an implementation supports multithreading (as many or most do), it is responsible for taking care of those issues, similar to the way implementations provide a thread-safe malloc() (and other library functions) for C and C++ even though, until very recently, the standards said nothing about it.
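To make the original question concrete, the following is fine under C++11 precisely because of those guarantees: each thread only touches its own local strings, and the default allocator's shared bookkeeping is required to be race-free.

    #include <string>
    #include <thread>
    #include <vector>

    void worker() {
        for (int i = 0; i < 1000; ++i) {
            // Local string: construction and destruction allocate and deallocate
            // through std::allocator, which must not introduce data races.
            std::string msg = "message " + std::to_string(i);
            (void)msg;
        }
    }

    int main() {
        std::vector<std::thread> threads;
        for (int i = 0; i < 4; ++i) {
            threads.emplace_back(worker);
        }
        for (auto& t : threads) {
            t.join();
        }
    }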
As you may have already figured, there is not going to be an easy yes or no answer. However, I think this may help:
http://www.cs.huji.ac.il/~etsman/Docs/gcc-3.4-base/libstdc++/html/faq/index.html#5_6
I quote verbatim:
5.6 Is libstdc++-v3 thread-safe?
libstdc++-v3 strives to be thread-safe when all of the following
conditions are met:
The system's libc is itself thread-safe,
gcc -v reports a thread model other than 'single',
[pre-3.3 only] a non-generic implementation of atomicity.h exists for the architecture in question.
When the std::string is copied during the call to log, the allocator may be thread-safe (mandatory in C++11), but the copy itself isn't. So if another thread mutates the source string while the copy is taking place, this is not thread-safe.
You may end up with half of the string as it was before the mutation and the other half as it was after, or you may even end up accessing deallocated memory if the mutating thread reallocated the buffer (e.g. by appending new characters) or destroyed the string while the copy was still taking place.
OTOH, the...
std::string msg = "message";
...is thread safe provided your allocator is thread safe.
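If the source string really is shared and mutated by other threads, both the copy and the mutation need to be guarded, for example with a mutex. A sketch with invented names:

    #include <mutex>
    #include <string>

    std::mutex g_mutex;
    std::string g_shared_msg = "initial";

    // Copying while another thread mutates g_shared_msg would be a data race,
    // so take the lock for the copy and pass the local copy on by value.
    std::string snapshot_message() {
        std::lock_guard<std::mutex> lock(g_mutex);
        return g_shared_msg; // copy made under the lock
    }

    void append_to_message(const std::string& suffix) {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_shared_msg += suffix;
    }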