Is a shared pointer (std::shared_ptr) safe to use in a multi-threaded program?
I am not considering read/write accesses to the data owned by the shared pointer but rather the shared pointer itself.
I am aware that certain implementations (such as MSVC's, as documented on MSDN) do provide this extra guarantee; but I want to understand whether it is guaranteed by the standard and is therefore portable.
#include <thread>
#include <memory>
#include <iostream>

void function_to_run_thread(std::shared_ptr<int> x)
{
    std::cout << x << "\n";
}
// Shared pointer goes out of scope.
// Is its destruction here guaranteed to happen only once?
// Or is this a "data race" situation that is UB?

int main()
{
    std::thread threads[2];
    {
        // A new scope, so that the shared_ptr in this scope has the
        // potential to go out of scope before the threads have executed,
        // leaving only the copies held by the threads.
        std::shared_ptr<int> data = std::make_shared<int>(5);

        // Create workers.
        threads[0] = std::thread(function_to_run_thread, data);
        threads[1] = std::thread(function_to_run_thread, data);
    }
    threads[0].join();
    threads[1].join();
}
Any links to sections in the standard most welcome.
I would be happy with references to the major implementations, so that we could consider it portable for most ordinary developers.
MSDN: Check. Thread Safe.
G++: ?
clang: ?
I would consider those the major implementations but happy to consider others.
I don't have links to the standard. I did check this a long time ago: std::shared_ptr is thread-safe under certain conditions, which boil down to this: every thread should have its own copy.
As documented on cppreference:
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object. If multiple threads of execution access the same shared_ptr without synchronization and any of those accesses uses a non-const member function of shared_ptr then a data race will occur.
So, just like any other class in the standard library, reading from the same instance from multiple threads is allowed. Writing to that instance from one thread while any other thread accesses it is not.
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::thread> threads;
    {
        // A new scope, so that the shared_ptr in this scope has the
        // potential to go out of scope before the threads have executed,
        // leaving only the copies held by the threads.
        std::shared_ptr<int> data = std::make_shared<int>(5);

        // Perfectly legal: read access on the shared_ptr
        threads.emplace_back([&data]{ std::cout << data.get() << '\n'; });
        threads.emplace_back([&data]{ std::cout << data.get() << '\n'; });

        // This line results in a data race, as you now have reads and a
        // write on the same shared_ptr instance
        threads.emplace_back([&data]{ data = std::make_shared<int>(42); });

        for (auto &thread : threads)
            thread.join();
    }
}
Once we are dealing with multiple copies of the shared_ptr, everything is fine:
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::thread> threads;
    {
        // A new scope, so that the shared_ptr in this scope has the
        // potential to go out of scope before the threads have executed,
        // leaving only the copies held by the threads.
        std::shared_ptr<int> data = std::make_shared<int>(5);

        // Perfectly legal: read access on the thread's own shared_ptr copy
        threads.emplace_back([data]{ std::cout << data.get() << '\n'; });
        threads.emplace_back([data]{ std::cout << data.get() << '\n'; });

        // This line no longer results in a race condition: the other
        // threads are using a copy
        threads.emplace_back([&data]{ data = std::make_shared<int>(42); });

        for (auto &thread : threads)
            thread.join();
    }
}
Also, destruction of the shared_ptr will be fine, as every thread calls the destructor of its own local shared_ptr and the last one cleans up the data. There are atomic operations on the reference count to ensure this happens correctly.
#include <chrono>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::thread> threads;
    {
        // A new scope, so that the shared_ptr in this scope has the
        // potential to go out of scope before the threads have executed,
        // leaving only the copies held by the threads.
        std::shared_ptr<int> data = std::make_shared<int>(5);

        // Perfectly legal: read access on the thread's own shared_ptr copy
        threads.emplace_back([data]{ std::cout << data.get() << '\n'; });
        threads.emplace_back([data]{ std::cout << data.get() << '\n'; });

        // Sleep to ensure we have some delay
        threads.emplace_back([data]{ std::this_thread::sleep_for(std::chrono::seconds{2}); });
    }
    for (auto &thread : threads)
        thread.join();
}
As you already indicated, access to the data owned by the shared_ptr isn't protected. So, similar to the first case, if you have one thread reading and one thread writing, you still have a problem. This can be solved with atomics, with mutexes, or by guaranteeing that the objects are read-only.
Quoting the latest draft:
For purposes of determining the presence of a data race, member functions shall access and modify only the shared_ptr and weak_ptr objects themselves and not objects they refer to. Changes in use_count() do not reflect modifications that can introduce data races.
So, this is a lot to take in. The first sentence talks about member functions not accessing the pointee, i.e. that accessing the pointee is not thread-safe.
However, then there is the second sentence. Effectively, it forces any operation that would change use_count() (e.g. copy construction, assignment, destruction, calling reset) to be thread-safe, but only as far as it affects use_count().
Which makes sense: Different threads copying the same std::shared_ptr (or destroying the same std::shared_ptr) must not cause a data race regarding ownership of the pointee. The internal value of use_count() must be synchronized.
I checked, and this exact wording was also present in N3337, Section 20.7.2.2 Paragraph 4, so it should be safe to say that this requirement has been there since the introduction of std::shared_ptr in C++11 (and was not something introduced later on).
shared_ptr (and also weak_ptr) uses an atomic integer for the use count, so sharing between threads is safe; but of course access to the pointed-to data still requires a mutex or some other synchronization.
Related
As is well known, shared_ptr only guarantees that access to the underlying control block is thread-safe; no guarantee is made for accesses to the owned object.
Then why is there a race condition in the code snippet below:
std::shared_ptr<int> g_s = std::make_shared<int>(1);

void f1()
{
    std::shared_ptr<int> l_s1 = g_s; // read g_s
}

void f2()
{
    std::shared_ptr<int> l_s2 = std::make_shared<int>(3);
    std::thread th(f1);
    th.detach();
    g_s = l_s2; // write g_s
}
In the code snippet above, the object owned by the shared pointer named g_s is never accessed.
I am really confused now. Could somebody shed some light on this matter?
std::shared_ptr<T> guarantees that access to its control block is thread-safe, but not access to the std::shared_ptr<T> instance itself, which is generally an object with two data members: the raw pointer (the one returned by get()) and the pointer to the control block.
In your code, the same std::shared_ptr<int> instance may be concurrently accessed by the two threads; f1 reads, and f2 writes.
If the two threads were accessing two different shared_ptr instances that shared ownership of the same object, there would be no data race. The two instances would have the same control block, but accesses to the control block would be appropriately synchronized by the library implementation.
If you need concurrent, race-free access to a single std::shared_ptr<T> instance from multiple threads, you can use std::atomic<std::shared_ptr<T>>. (There is also an older interface that can be used prior to C++20, which is deprecated in C++20.)
If I create a thread in a constructor, and that thread accesses the object, do I need to introduce a release barrier before the thread accesses the object? Specifically, in the code below (wandbox link), do I need to lock the mutex in the constructor (the commented-out line)? I need to make sure that worker_thread_ sees the write to run_worker_thread_ so that it doesn't immediately exit. I realize using an atomic boolean would be better here, but I'm interested in understanding the memory-ordering implications. Based on my understanding, I think I do need to lock the mutex in the constructor, to ensure that the release operation provided by unlocking the mutex in the constructor synchronizes with the acquire operation provided by locking the mutex in threadLoop() via the call to shouldRun().
class ThreadLooper {
public:
    ThreadLooper(std::string thread_name)
        : thread_name_{std::move(thread_name)}, loop_counter_{0} {
        //std::lock_guard<std::mutex> lock(mutex_);
        run_worker_thread_ = true;
        worker_thread_ = std::thread([this]() { threadLoop(); });
        // mutex unlock provides release semantics
    }

    ~ThreadLooper() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            run_worker_thread_ = false;
        }
        if (worker_thread_.joinable()) {
            worker_thread_.join();
        }
        std::cout << thread_name_ << ": destroyed and counter is " << loop_counter_
                  << std::endl;
    }

private:
    bool shouldRun() {
        std::lock_guard<std::mutex> lock(mutex_);
        return run_worker_thread_;
    }

    void threadLoop() {
        std::cout << thread_name_ << ": threadLoop() started running" << std::endl;
        while (shouldRun()) {
            using namespace std::literals::chrono_literals;
            std::this_thread::sleep_for(2s);
            ++loop_counter_;
            std::cout << thread_name_ << ": counter is " << loop_counter_ << std::endl;
        }
        std::cout << thread_name_ << ": exiting threadLoop() because flag is false"
                  << std::endl;
    }

    const std::string thread_name_;
    std::atomic_uint64_t loop_counter_;
    bool run_worker_thread_;
    std::mutex mutex_;
    std::thread worker_thread_;
};
This also got me thinking more generally: if I were to initialize a bunch of regular int (non-atomic) member variables in the constructor, which are then read from other threads via some public methods, would I similarly need to lock the mutex in the constructor in addition to the methods that read those variables? This seems slightly different from the case above, since I know the object would be fully constructed before any other thread could access it; but that doesn't seem to ensure that the initialization of the object would be visible to other threads without a release operation in the constructor.
You do not need any barriers because it is guaranteed that the thread constructor synchronizes with the invocation of the function passed to it.
In Standardese:
The completion of the invocation of the constructor synchronizes with the beginning of the invocation of the copy of f.
Somewhat formal proof:
run_worker_thread_ = true; (A) is sequenced before the thread object's creation (B), according to the full-expression evaluation order. The thread object's construction synchronizes with the closure object's execution (C), according to the rule cited above. Hence, A inter-thread happens before C.
A is sequenced before B, B synchronizes with C, hence A happens before C. This is a formal proof in Standard terms.
And when analyzing programs in the C++11 era and later, you should stick to the C++ model of memory and execution, and forget about the barriers and reordering that the compiler might or might not perform. Those are just implementation details. The only thing that matters is the formal proof in C++ terms. The compiler must obey, and do (and not do) whatever it takes to adhere to the rules.
But for the sake of completeness, let's look at the code through the compiler's eyes and try to understand why it can't reorder anything in this case. We all know the "as-if" rule, under which the compiler may reorder some instructions if you can't tell that they have been reordered. So if we have some bool flag assignments:
flag1 = true; // A
flag2 = false;// B
It is allowed to execute these lines as follows:
flag2 = false;// B
flag1 = true;// A
Despite the fact that A is sequenced before B. It can do this because we can't tell the difference; we can't catch it reordering our instructions just by observing the program's behavior, because apart from "sequenced before" there are no relations between these lines. But let's get back to our case:
run_worker_thread_ = true; // A
worker_thread_ = std::thread(...); // B
It might look like this case is the same as the bool variables above. And it would be, if we didn't know that the thread object (besides being sequenced after the A expression) synchronizes with something (for simplicity, let's ignore what that something is). But as we found out, if something is sequenced before another thing, which in turn synchronizes with yet another thing, then it happens before that thing. So the Standard requires the A expression to happen before the something that our B expression synchronizes with.
And this fact forbids the compiler from reordering our A and B expressions, because suddenly we could tell the difference if it did. If it reordered them, the C expression (the something) might not see the visible side effects produced by A, so just by observing the program's execution we might catch the cheating compiler! Hence, it has to use some barriers. It doesn't matter whether it is just a compiler barrier or a hardware one; it has to be there to guarantee that these instructions are not reordered. You might think of it as a release fence upon completion of the construction and an acquire fence upon execution of the closure object. That roughly describes what happens under the hood.
It also looks like you treat the mutex as some kind of magic thing that always works and requires no proof. So for some reason you believe in the mutex and not in the thread. But the thing is, there is no magic: the only guarantee a mutex has is that a lock synchronizes with the prior unlock, and vice versa. So it provides the same kind of guarantee that the thread constructor provides.
How do I correctly use move semantics with a running thread inside an object?
Sample:
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

struct A {
    std::string v_;
    std::thread t_;

    void start() {
        t_ = std::thread(&A::threadProc, this);
    }

    void threadProc() {
        for (;;) {
            std::cout << "foo-" << v_ << '\n';
            std::this_thread::sleep_for(std::chrono::seconds(5));
        }
    }
};

int main() {
    A m;
    {
        A a;
        a.v_ = "bar";
        a.start();
        m = std::move(a);
    }
    std::cout << "v_ = " << m.v_ << '\n'; /* stdout is 'v_ = bar' as expected */
    /* but v_ in threadProc was destroyed */
    /* stdout in threadProc is 'foo-' */
    m.t_.join();
    return 0;
}
I want to use the class members after the move, but when execution leaves the inner scope, the class members are destroyed. The std::thread is moved into the new object as expected, but it keeps using the destroyed members.
It seems to me this happens because of the use of the this pointer when the thread is created.
What is best practice in this case?
As written, it's not going to work. After moving, the thread m.t_ refers to a thread which is still running a.threadProc(). That will be attempting to print a.v_.
There are even two problems with the snippet: not only is a.v_ moved from (so its value is unspecified), but it's also about to be destroyed in another thread, and that destruction is not sequenced-after its use.
Since the object needs to stay alive long enough, with a non-trivial lifetime due to the thread, you'll need to get it off the stack. Instead, use std::shared_ptr to manage the lifetime. You will probably need to pass that shared_ptr to the thread, to avoid a race condition where the object might expire before the thread starts running. You can't rely on std::enable_shared_from_this here.
What is best practice in this case?
The best practice is to delete the move constructor and move assignment operator, to prevent this from happening. Your object requires that this never changes, and you're getting undefined behavior because the object was yanked out from beneath your thread and subsequently destroyed.
If, for whatever reason, preventing moves goes against your design requirements, then there are some common approaches that would make the most sense to anybody fortunate enough to be reading and maintaining your code.
Use the pimpl idiom to create an internal object dynamically which can move with the outer object. The outer object is movable, but the inner object is not. The thread is bound to that object, and anything the thread needs access to is also within that object. In your case, you would basically take your structure as it is and wrap it. The basic idea is something like:
class MovableA
{
public:
    MovableA() : a_(std::make_unique<A>()) {}

    void start() { a_->start(); }

    A & a() const { return *a_; }

private:
    std::unique_ptr<A> a_;
};
The benefit of this approach is that you can move MovableA without needing to synchronize with the running thread.
Abandon the notion of using stack allocation, and just allocate A dynamically. This has the same benefit as option 1, and is simpler because you're not having to wrap your class in anything or provide accessors.
std::unique_ptr<A> m;
{
    auto a = std::make_unique<A>();
    a->v_ = "bar";
    a->start();
    m = std::move(a);
}
std::cout << "v_ = " << m->v_ << '\n';
m->t_.join();
I started writing an option 3 that avoids dynamic allocation and instead binds a "floating" version of this to a std::reference_wrapper, but I felt I'd get it wrong without thinking about it a lot, and it seemed hacky and horrible anyway.
The bottom line is if you want to keep the object outside your thread and use it in the thread, the best practice is to use dynamic allocation.
(Alternative answer, using C++17)
Using a lambda, you can capture a copy of A. Since the thread owns the lambda and the lambda owns the copy, you don't have lifetime issues (note that this requires A to be copyable, which the A in the question is not, since std::thread is not copyable; you would capture copies of just the data the thread needs):
t_ = std::thread([*this](){ threadProc(); });
Is unique_ptr thread-safe? Is it impossible for the code below to print the same number twice?
#include <memory>
#include <string>
#include <thread>
#include <cstdio>

using namespace std;

int main()
{
    unique_ptr<int> work;

    thread t1([&] {
        while (true) {
            const unique_ptr<int> localWork = move(work);
            if (localWork)
                printf("thread1: %d\n", *localWork);
            this_thread::yield();
        }
    });

    thread t2([&] {
        while (true) {
            const unique_ptr<int> localWork = move(work);
            if (localWork)
                printf("thread2: %d\n", *localWork);
            this_thread::yield();
        }
    });

    for (int i = 0; ; i++) {
        work.reset(new int(i));
        while (work)
            this_thread::yield();
    }
    return 0;
}
unique_ptr is thread safe when used correctly. You broke the unwritten rule: Thou shalt never pass unique_ptr between threads by reference.
The philosophy behind unique_ptr is that it has a single (unique) owner at all times. Because of that, you can always pass it safely between threads without synchronization -- but you have to pass it by value, not by reference. Once you create aliases to a unique_ptr, you lose the uniqueness property and all bets are off. Unfortunately C++ can't guarantee uniqueness, so you are left with a convention that you have to follow religiously. Don't create aliases to a unique_ptr!
No, it isn't thread-safe.
Both threads can potentially move the work pointer with no explicit synchronization, so it's possible for both threads to get the same value, or both to get some invalid pointer ... it's undefined behaviour.
If you want to do something like this correctly, you probably need to use something like std::atomic_exchange so both threads can read/modify the shared work pointer with the right semantics.
According to MSDN:
The following thread-safety rules apply to all classes in the Standard C++ Library (except shared_ptr and iostream classes, as described below).
A single object is thread safe for reading from multiple threads. For example, given an object A, it is safe to read A from thread 1 and from thread 2 simultaneously.
If a single object is being written to by one thread, then all reads and writes to that object on the same or other threads must be protected. For example, given an object A, if thread 1 is writing to A, then thread 2 must be prevented from reading from or writing to A.
It is safe to read and write to one instance of a type even if another thread is reading or writing to a different instance of the same type. For example, given objects A and B of the same type, it is safe if A is being written in thread 1 and B is being read in thread 2.
Derived from this question and related to this question:
If I construct an object in one thread and then convey a reference/pointer to it to another thread, is it thread-unsafe for that other thread to access the object without explicit locking/memory barriers?
// thread 1
Obj obj;
anyLegalTransferDevice.Send(&obj);
while (1); // never let obj go out of scope

// thread 2
anyLegalTransferDevice.Get()->SomeFn();
Alternatively: is there any legal way to convey data between threads that doesn't enforce memory ordering with regards to everything else the thread has touched? From a hardware standpoint I don't see any reason it shouldn't be possible.
To clarify: the question is about cache coherency, memory ordering and the like. Can thread 2 get and use the pointer before thread 2's view of memory includes the writes involved in constructing obj? To misquote Alexandrescu(?): "Could a malicious CPU designer and compiler writer collude to build a standard-conforming system that makes that break?"
Reasoning about thread safety can be difficult, and I am no expert on the C++11 memory model. Fortunately, however, your example is very simple. I have rewritten it, because the constructor is irrelevant.
Simplified Example
Question: Is the following code correct? Or can the execution result in undefined behavior?
// Legal transfer of pointer to int without data race.
// The receive function blocks until send is called.
void send(int*);
int* receive();
// --- thread A ---
/* A1 */ int* pointer = receive();
/* A2 */ int answer = *pointer;
// --- thread B ---
int answer;
/* B1 */ answer = 42;
/* B2 */ send(&answer);
// wait forever
Answer: There may be a data race on the memory location of answer, and thus the execution results in undefined behavior. See below for details.
Implementation of Data Transfer
Of course, the answer depends on the possible and legal implementations of the functions send and receive. I use the following data-race-free implementation. Note that only a single atomic variable is used, and all memory operations use std::memory_order_relaxed. Basically, this means that these functions do not restrict memory reorderings.
std::atomic<int*> transfer{nullptr};
void send(int* pointer) {
transfer.store(pointer, std::memory_order_relaxed);
}
int* receive() {
while (transfer.load(std::memory_order_relaxed) == nullptr) { }
return transfer.load(std::memory_order_relaxed);
}
Order of Memory Operations
On multicore systems, a thread can see memory changes in a different order than other threads see them. In addition, both compilers and CPUs may reorder memory operations within a single thread for efficiency; they do this all the time. Atomic operations with std::memory_order_relaxed do not participate in any synchronization and do not impose any ordering.
In the above example, the compiler is allowed to reorder the operations of thread B, and execute B2 before B1, because the reordering has no effect on the thread itself.
// --- valid execution of operations in thread B ---
int answer;
/* B2 */ send(&answer);
/* B1 */ answer = 42;
// wait forever
Data Race
C++11 defines a data race as follows (N3290 C++11 Draft): "The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior." And the term happens before is defined earlier in the same document.
In the above example, B1 and A2 are conflicting and non-atomic operations, and neither happens before the other. This is obvious, because I have shown in the previous section, that both can happen at the same time.
That's the only thing that matters in C++11. In contrast, the Java Memory Model also tries to define the behavior if there are data races, and it took them almost a decade to come up with a reasonable specification. C++11 didn't make the same mistake.
Further Information
I'm a bit surprised that these basics are not well known. The definitive source of information is the section Multi-threaded executions and data races in the C++11 standard. However, the specification is difficult to understand.
A good starting point are Hans Boehm's talks - e.g. available as online videos:
Threads and Shared Variables in C++11
Getting C++ Threads Right
There are also a lot of other good resources, I have mentioned elsewhere, e.g.:
std::memory_order - cppreference.com
There is no parallel access to the same data, so there is no problem:
1. Thread 1 starts execution of Obj::Obj().
2. Thread 1 finishes execution of Obj::Obj().
3. Thread 1 passes a reference to the memory occupied by obj to thread 2.
4. Thread 1 never does anything else with that memory (soon after, it falls into an infinite loop).
5. Thread 2 picks up the reference to the memory occupied by obj.
6. Thread 2 presumably does something with it, undisturbed by thread 1, which is still looping infinitely.
The only potential problem is if Send didn't act as a memory barrier; but then it wouldn't really be a "legal transfer device".
As others have alluded to, the only way in which a constructor is not thread-safe is if something somehow gets a pointer or reference to the object before the constructor has finished, and the only way that can occur is if the constructor itself registers the this pointer with some container that is shared across threads.
Now, in your specific example, Branko Dimitrijevic gave a good, complete explanation of why your case is fine. But in the general case, I'd say not to use an object until the constructor is finished, though I don't think there's anything "special" that happens only once the constructor is finished. By the time execution enters the (last) constructor in an inheritance chain, the object is pretty much fully "good to go", with all of its member variables initialized. So it is no worse than any other critical-section work; but another thread would need to know about the object first, and the only way that happens is if you share this in the constructor itself somehow. So only do that as the "last thing", if you do it at all.
It is only safe (sort of) if you wrote both threads and know that the first thread is not accessing the object while the second thread is. For example, if the thread constructing it never accesses it after passing on the reference/pointer, you would be OK. Otherwise it is thread-unsafe. You could change that by making all methods that access data members (for reading or writing) lock a mutex.
I have only just read this question... I will still post my comments:
Static Local Variable
There is a reliable way to construct objects in a multi-threaded environment: use a static local variable (see "static local variables" in the C++ Core Guidelines).
From that reference: "This is one of the most effective solutions to problems related to initialization order. In a multi-threaded environment the initialization of the static object does not introduce a race condition (unless you carelessly access a shared object from within its constructor)."
Also note from the reference: if the destruction of X involves an operation that needs to be synchronized, you can create the object on the heap and synchronize when to call the destructor.
Below is an example I wrote to show the Construct On First Use Idiom, which is basically what the reference talks about.
#include <chrono>
#include <iostream>
#include <thread>
#include <utility>
#include <vector>

class ThreadConstruct
{
public:
    ThreadConstruct(int a, float b) : _a{a}, _b{b}
    {
        std::cout << "ThreadConstruct construct start" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(2));
        std::cout << "ThreadConstruct construct end" << std::endl;
    }

    void get()
    {
        std::cout << _a << " " << _b << std::endl;
    }

private:
    int _a;
    float _b;
};

struct Factory
{
    template<class T, typename ...ARGS>
    static T& get(ARGS&&... args)
    {
        // thread-safe object instantiation
        static T instance(std::forward<ARGS>(args)...);
        return instance;
    }
};

// thread pool
class Threads
{
public:
    Threads()
    {
        for (size_t num_threads = 0; num_threads < 5; ++num_threads) {
            thread_pool.emplace_back(&Threads::run, this);
        }
    }

    void run()
    {
        // thread-safe constructor call
        ThreadConstruct& thread_construct = Factory::get<ThreadConstruct>(5, 10.1);
        thread_construct.get();
    }

    ~Threads()
    {
        for (auto& x : thread_pool) {
            if (x.joinable()) {
                x.join();
            }
        }
    }

private:
    std::vector<std::thread> thread_pool;
};

int main()
{
    Threads thread;
    return 0;
}
Output:
ThreadConstruct construct start
ThreadConstruct construct end
5 10.1
5 10.1
5 10.1
5 10.1
5 10.1