std::atomic member functions (c++, c++11)

I am trying to use the std::atomic library.
What's the difference between specialized and non-specialized atomic
member functions?
What's the difference (if there is any) between the following functions?
operator=, which stores a value into an atomic object (public member function), vs. store (C++11), which atomically replaces the value of the atomic object with a non-atomic argument (public member function)
operator T(), which loads a value from an atomic object (public member function), vs. load (C++11), which atomically obtains the value of the atomic object (public member function)
operator+= vs. fetch_add
operator-= vs. fetch_sub
operator&= vs. fetch_and
operator|= vs. fetch_or
operator^= vs. fetch_xor
What's the downside of declaring a variable as atomic vs. a non-atomic variable? For example, what's the downside of std::atomic<int> x vs. int x? In other words, how much overhead does an atomic variable have?
Which one has more overhead: an atomic variable, or a normal variable protected by a mutex?
Here is the reference for my questions: http://en.cppreference.com/w/cpp/atomic/atomic

Not an expert, but I'll try:
The specializations (for built-in types such as int) contain additional operations such as fetch_add. Non-specialized forms (for user-defined types) do not contain these.
operator= returns the assigned value, store does not. Also, the non-operator forms let you specify a memory order. The standard defines operator= in terms of store. (A short sketch follows this answer.)
Same as above, although it returns the value of load.
Same as above
Same as above
Same as above
Same as above
Same as above
Same as above
They do different things. It's undefined behavior to use an int in the way you would use std::atomic_int.
As a rough rule, the overhead ranks int <= std::atomic<int> <= int protected by std::mutex, where <= means 'no more overhead than'. So an atomic is likely cheaper than locking with a mutex (especially for built-in types), but more expensive than a plain int.
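Not part of the original answer, but a minimal sketch of how the operator/function pairs above line up in code, assuming C++11. One detail the standard adds that is worth knowing: fetch_add returns the value held before the addition, while operator+= yields the new value.

#include <atomic>
#include <cassert>

int main() {
    std::atomic<int> x(0);

    x = 5;                    // operator=: equivalent to x.store(5), seq_cst ordering
    x.store(7);               // store: same effect; default order is memory_order_seq_cst

    int a = x;                // operator T(): equivalent to x.load()
    int b = x.load();         // load
    assert(a == 7 && b == 7);

    int old_val = x.fetch_add(3);   // returns the value *before* the addition: 7
    int new_val = (x += 3);         // operator+= yields the value *after* the addition: 13
    assert(old_val == 7 && new_val == 13);

    x.fetch_and(0xF);         // pairs with operator&=; likewise fetch_or / |= and fetch_xor / ^=
    return 0;
}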

What's the difference between specialized and non-specialized atomic member functions?
As can be seen in the synopsis of these classes in the standard (§29.5), there are three different sets of member functions (a short sketch follows this list):
the most generic one provides only store, load, exchange, and compare-exchange operations;
the specializations for integral types provide atomic arithmetic and bitwise operations, in addition to the generic ones;
the specialization for pointers provides pointer arithmetic operations in addition to the generic ones.
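Here is a small sketch of the three interface levels; it is my illustration rather than a quote from the standard, and the struct and variable names are just for the example (the user-defined type must be trivially copyable):

#include <atomic>

struct Point { int x, y; };                // a trivially copyable user-defined type

int main() {
    // Primary template: only the generic operations are available.
    std::atomic<Point> p{};
    p.store(Point{1, 2});
    Point expected = p.load();
    p.compare_exchange_strong(expected, Point{3, 4});
    // p.fetch_add(...) would not compile here: no arithmetic in the generic case.

    // Integral specialization: arithmetic and bitwise operations are added.
    std::atomic<int> i(0);
    i.fetch_add(1);
    i.fetch_or(0x10);

    // Pointer specialization: pointer arithmetic is added.
    int arr[4] = {0, 1, 2, 3};
    std::atomic<int*> ptr(arr);
    ptr.fetch_add(2);                      // now points at arr[2]
    return 0;
}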
What's the difference (if there is any) between the following functions?
operator=, which stores a value into an atomic object (public member function), vs. store (C++11), which atomically replaces the value of the atomic object with a non-atomic argument (public member function)
(...)
The main functional difference is that the non-operator versions (§29.6.5, paragraphs 9-17 and more) have an extra parameter for specifying the desired memory ordering (§29.3/1). The operator versions use the sequential consistency memory ordering:
void A::store(C desired, memory_order order = memory_order_seq_cst) volatile noexcept;
void A::store(C desired, memory_order order = memory_order_seq_cst) noexcept;
Requires: The order argument shall not be memory_order_consume, memory_order_acquire, nor memory_order_acq_rel.
Effects: Atomically replaces the value pointed to by object or by this with the value of desired. Memory is affected according to the value of order.
C A::operator=(C desired) volatile noexcept;
C A::operator=(C desired) noexcept;
Effects: store(desired)
Returns: desired
The non-operator forms are advantageous because sequential consistency is not always necessary, and it is potentially more expensive than the other memory orderings. With careful analysis one can determine the minimal guarantees needed for correct operation and select one of the less restrictive memory orderings, giving the optimizer more leeway.
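For example (a sketch of mine, not from the answer): a plain event counter that no other data depends on only needs the increments to be indivisible, so memory_order_relaxed suffices, whereas the operator forms would silently request sequential consistency.

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<long> hits(0);   // a simple statistics counter: only atomicity is required

void worker() {
    for (int i = 0; i < 100000; ++i)
        hits.fetch_add(1, std::memory_order_relaxed);   // no ordering guarantees requested
    // '++hits' or 'hits += 1' would do the same increment with memory_order_seq_cst
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    assert(hits.load(std::memory_order_relaxed) == 200000);
    return 0;
}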
What's the downside of declaring a variable as atomic vs. a non-atomic variable? For example, what's the downside of std::atomic<int> x vs. int x? In other words, how much overhead does an atomic variable have?
Using an atomic variable when a regular variable would suffice limits the number of possible optimizations, because atomic variables impose additional constraints of indivisibility and (possibly) memory ordering.
Using a regular variable when an atomic variable is needed may introduce data races, and that makes the behaviour undefined (§1.10/21):
The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior.
The overhead of an atomic variable is a matter of quality of implementation. Ideally, an atomic variable has zero overhead when you need atomic operations. When you don't need atomic operations, whatever overhead it may have is irrelevant: you just use a regular variable.
Which one has more overhead: an atomic variable, or a normal variable protected by a mutex?
There's no reason for an atomic variable to have more overhead than a normal variable protected by a mutex: worst case scenario, the atomic variable is implemented just like that. But there is a possibility that the atomic variable is lock-free, which would involve less overhead. This property can be ascertained with the functions described in the standard in §29.6.5/7:
bool atomic_is_lock_free(const volatile A *object) noexcept;
bool atomic_is_lock_free(const A *object) noexcept;
bool A::is_lock_free() const volatile noexcept;
bool A::is_lock_free() const noexcept;
Returns: True if the object’s operations are lock-free, false otherwise.
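A short usage sketch of mine follows; the results are implementation-dependent, and on some toolchains the non-lock-free case needs the compiler's atomic runtime support library to link.

#include <atomic>
#include <iostream>

struct Big { char data[64]; };   // unlikely to have direct hardware atomic support

int main() {
    std::atomic<int> small_obj(0);
    std::atomic<Big> large_obj{};

    std::cout << "atomic<int> lock-free: " << small_obj.is_lock_free() << '\n';
    std::cout << "atomic<Big> lock-free: " << large_obj.is_lock_free() << '\n';
    // std::atomic_is_lock_free(&small_obj) is the equivalent free-function form.
    return 0;
}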

I'm not an expert on this stuff, but if I understand correctly, the non-specialized operations in your reference each do one thing atomically: load, store, replace, etc.
The specialized functions do two things atomically: they modify an atomic object and then return a value, in such a way that both operations happen before any other thread can interfere.
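A sketch (mine, not from the answer) of what "modify and then return in one atomic step" buys you in practice, using a simple ticket counter:

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> next_ticket(0);

void take_tickets() {
    for (int i = 0; i < 3; ++i) {
        // fetch_add reads, increments and returns the old value in one atomic step,
        // so no two threads can ever obtain the same ticket number.
        int mine = next_ticket.fetch_add(1);
        std::printf("got ticket %d\n", mine);
    }
}

int main() {
    std::thread a(take_tickets), b(take_tickets);
    a.join();
    b.join();
    // With a plain int, 'int mine = n; n = n + 1;' in two threads is a data race
    // and therefore undefined behaviour; both threads could also read the same value.
    return 0;
}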

Related

Does the “M&M rule” apply to std::atomic data members?

"Mutable is used to specify that the member does not affect the externally visible state of the class (as often used for mutexes, memo caches, lazy evaluation, and access instrumentation)." [Reference: cv (const and volatile) type qualifiers, mutable specifier]
This sentence made me wonder:
"Guideline: Remember the “M&M rule”: For a member-variable, mutable and mutex (or atomic) go together." [Reference: GotW #6a Solution: Const-Correctness, Part 1 (updated for C ++11/14)]
I understand why the “M&M rule” applies to a std::mutex data member: it allows const functions to be thread-safe even though they lock/unlock the mutex data member. But does the “M&M rule” also apply to a std::atomic data member?
You got it partly backwards. The article does not suggest making all atomic members mutable. Instead it says:
(1) For a member variable, mutable implies mutex (or equivalent): A mutable member variable is presumed to be a mutable shared variable and so must be synchronized internally—protected with a mutex, made atomic, or similar.
(2) For a member variable, mutex (or similar synchronization type) implies mutable: A member variable that is itself of a synchronization type, such as a mutex or a condition variable, naturally wants to be mutable, because you will want to use it in a non-const way (e.g., take a std::lock_guard) inside concurrent const member functions.
(2) says that you want a mutex member to be mutable, because typically you also want to lock the mutex in const methods. (2) does not mention atomic members.
(1) on the other hand says that if a member is mutable, then you need to take care of synchronization internally, be it via a mutex or by making the member an atomic. That is because of the bullets the article mentions before:
If you are implementing a type, unless you know objects of the type can never be shared (which is generally impossible), this means that each of your const member functions must be either:
truly physically/bitwise const with respect to this object, meaning that they perform no writes to the object’s data; or else
internally synchronized so that if it does perform any actual writes to the object’s data, that data is correctly protected with a mutex or equivalent (or if appropriate are atomic<>) so that any possible concurrent const accesses by multiple callers can’t tell the difference.
A member that is mutable is not "truly const", hence you need to take care of synchronization internally (either via a mutex or by making the member atomic).
TL;DR: The article does not suggest making all atomic members mutable. It rather suggests making mutex members mutable and using internal synchronization for all mutable members.
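A minimal sketch (mine, with hypothetical class and member names) of both directions of the rule: the mutex member is mutable so that const functions can lock it, and the mutable counter is made atomic so that writes to it from const functions are internally synchronized.

#include <atomic>
#include <mutex>
#include <string>

class Widget {
public:
    std::string name() const {
        std::lock_guard<std::mutex> lock(mtx_);   // locking mutates mtx_, hence 'mutable'
        return name_;
    }
    int record_access() const {
        return access_count_.fetch_add(1) + 1;    // mutable shared state, so it is atomic
    }
private:
    mutable std::mutex mtx_;                      // (2) synchronization member -> mutable
    std::string name_ = "widget";
    mutable std::atomic<int> access_count_{0};    // (1) mutable member -> internally synchronized
};

int main() {
    Widget w;
    w.name();
    w.record_access();
    return 0;
}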

Why only std::atomic_flag is guaranteed to be lock-free?

From C++ Concurrency in Action:
difference between std::atomic and std::atomic_flag is that std::atomic may not be lock-free; the implementation may have to acquire a mutex internally in order to ensure the atomicity of the operations
I wonder why. If atomic_flag is guaranteed to be lock-free, why isn't it guaranteed for atomic<bool> as well?
Is this because of the member function compare_exchange_weak? I know that some machines lack a single compare-and-exchange instruction, is that the reason?
First of all, you are perfectly allowed to have something like std::atomic<very_nontrivial_large_structure> (std::atomic<T> works for any trivially copyable T, however large), so std::atomic as such cannot generally be guaranteed to be lock-free (although most specializations for trivial types like bool or int probably could be, on most systems). But that is somewhat unrelated.
The exact reasoning why atomic_flag and nothing else must be lock-free is given in the Note in N2427/29.3:
Hence the operations must be address-free. No other type requires lock-free operations, and hence the atomic_flag type is the minimum hardware-implemented type needed to conform to this standard. The remaining types can be emulated with atomic_flag, though with less than ideal properties.
In other words, it's the minimum thing that must be guaranteed on every platform, so it's possible to implement the standard correctly.
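As a rough illustration (my own sketch, not from the answer or the standard; the class name is hypothetical), this is what "emulated with atomic_flag, though with less than ideal properties" could look like for load and store: a tiny spinlock built from atomic_flag guards a plain value, which is atomic but not lock-free.

#include <atomic>

template <typename T>
class emulated_atomic {
public:
    explicit emulated_atomic(T v) : value_(v) { guard_.clear(); }
    T load() {
        lock();
        T v = value_;
        unlock();
        return v;
    }
    void store(T v) {
        lock();
        value_ = v;
        unlock();
    }
private:
    void lock()   { while (guard_.test_and_set(std::memory_order_acquire)) { /* spin */ } }
    void unlock() { guard_.clear(std::memory_order_release); }
    std::atomic_flag guard_;   // cleared in the constructor
    T value_;
};

int main() {
    emulated_atomic<double> d(1.0);
    d.store(2.5);
    return d.load() == 2.5 ? 0 : 1;
}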
The standard does not guarantee that atomic objects are lock-free. On a platform that doesn't provide lock-free atomic operations for a type T, std::atomic<T> objects may be implemented using a mutex, which wouldn't be lock-free. In that case, any containers using these objects in their implementation would not be lock-free either.
The standard provides a way to check whether a std::atomic<T> variable is lock-free: you can use var.is_lock_free() or atomic_is_lock_free(&var). For basic types such as int, there are also macros (e.g. ATOMIC_INT_LOCK_FREE) which specify whether lock-free atomic access to that type is available.
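For instance (a sketch of mine), the compile-time macros and the run-time query can be combined; the *_LOCK_FREE macros expand to 0 (never lock-free), 1 (sometimes) or 2 (always).

#include <atomic>
#include <cstdio>

int main() {
    std::printf("ATOMIC_BOOL_LOCK_FREE = %d\n", ATOMIC_BOOL_LOCK_FREE);
    std::printf("ATOMIC_INT_LOCK_FREE  = %d\n", ATOMIC_INT_LOCK_FREE);

    std::atomic<int> x(0);
    std::printf("x.is_lock_free() = %d\n", static_cast<int>(x.is_lock_free()));
    return 0;
}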
std::atomic_flag is an atomic boolean type. For a boolean flag, a mutex or other synchronization mechanism is almost never needed.

Why are std::atomic objects not copyable?

It seems that std::atomic types are not copy constructible or copy assignable.
Why?
Is there a technical reason why copying atomic types is not possible?
Or is the interface limited on purpose to avoid some sort of bad code?
On platforms without atomic instructions (or without atomic instructions for all integer sizes) the types might need to contain a mutex to provide atomicity. Mutexes are not generally copyable or movable.
In order to keep a consistent interface for all specializations of std::atomic<T> across all platforms, the types are never copyable.
Technical reason: Most atomic types are not guaranteed to be lock-free. The representation of the atomic type might need to contain an embedded mutex and mutexes are not copyable.
Logical reason: What would it mean to copy an atomic type? Would the entire copy operation be expected to be atomic? Would the copy and the original represent the same atomic object?
There is no well-defined meaning for an operation spanning two separately atomic objects that would make this worthwhile. The one thing you can do is transfer the value loaded from one atomic object into another. But the load directly synchronizes only with other operations on the former object, while the store synchronizes with operations on the destination object. And each part can come with completely independent memory ordering constraints.
Spelling out such an operation as a load followed by a store makes that explicit, whereas an assignment would leave one wondering how it relates to the memory access properties of the participating objects. If you insist, you can achieve a similar effect by combining the existing conversions of std::atomic<..> (requires an explicit cast or other intermediate of the value type).
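A small sketch (mine) of the explicit load-then-store formulation described above:

#include <atomic>

int main() {
    std::atomic<int> a(42);
    std::atomic<int> b(0);

    // std::atomic<int> c = a;   // does not compile: the copy constructor is deleted
    // b = a;                    // does not compile: copy assignment from another atomic is deleted

    b.store(a.load());           // two separate atomic operations, each with its own
                                 // (here default, seq_cst) memory ordering
    b = a.load();                // equivalent: operator= takes a value of the value type

    return b.load() == 42 ? 0 : 1;
}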