I have a question about boost::shared_ptr<T> when many threads are involved.
using namespace boost;

class CResource
{
    // xxxxxx
};

class CResourceBase
{
public:
    void SetResource(shared_ptr<CResource> res)
    {
        m_Res = res;
    }

    shared_ptr<CResource> GetResource()
    {
        return m_Res;
    }

private:
    shared_ptr<CResource> m_Res;
};
CResourceBase base;
//----------------------------------------------
// Thread_A:
while (true)
{
    //...
    shared_ptr<CResource> nowResource = base.GetResource();
    nowResource->doSomeThing();
    //...
}
// Thread_B:
shared_ptr<CResource> nowResource;
base.SetResource(nowResource);
//...
Q1
If Thread_A does not care whether nowResource is the newest value, will this part of the code have a problem?
I mean: if Thread_B has not completed SetResource(), can Thread_A get a corrupted smart pointer from GetResource()?
Q2
What does thread-safe mean?
If I do not care whether the resource is the newest, can the shared_ptr<CResource> nowResource crash the program when the underlying resource is released, or can a concurrent release corrupt the shared_ptr<CResource> itself?
boost::shared_ptr<> offers a certain level of thread safety. The reference count is manipulated in a thread safe manner (unless you configure boost to disable threading support).
So you can copy a shared_ptr around and the reference count is maintained correctly. What you cannot do safely is modify the same shared_ptr object instance from multiple threads (for example, calling reset() on it concurrently). So your usage is not safe: one thread modifies the shared_ptr instance while another reads it, and you will need your own protection.
In my code, shared_ptr's are generally locals or parameters passed by value, so there's no issue. Getting them from one thread to another I generally use a thread-safe queue.
Of course none of this addresses the thread safety of accessing the object pointed to by the shared_ptr - that's also up to you.
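For illustration, a minimal sketch of such a thread-safe queue (written with standard components; the class and its names are illustrative, not a specific library):
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

// Blocking queue for passing shared_ptr<T> between threads.
// The mutex protects the queue itself; the shared_ptr reference count
// needs no extra protection because each element is copied/moved,
// never shared as the same shared_ptr instance across threads.
template <typename T>
class SharedPtrQueue
{
public:
    void push(std::shared_ptr<T> item)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(item));
        }
        m_cv.notify_one();
    }

    std::shared_ptr<T> pop()  // blocks until an element is available
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        std::shared_ptr<T> item = std::move(m_queue.front());
        m_queue.pop();
        return item;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<std::shared_ptr<T>> m_queue;
};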
From the boost documentation:
shared_ptr objects offer the same level of thread safety as built-in types. A shared_ptr instance can be "read" (accessed using only const operations) simultaneously by multiple threads. Different shared_ptr instances can be "written to" (accessed using mutable operations such as operator= or reset) simultaneously by multiple threads (even when these instances are copies, and share the same reference count underneath.)
Any other simultaneous accesses result in undefined behavior.
So your usage is not safe, since it involves a simultaneous read and write of m_Res. Example 3 in the boost documentation also illustrates this.
You should use a separate mutex that guards the access to m_Res in SetResource/GetResource.
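For example, a minimal sketch of CResourceBase guarded by a boost::mutex (reusing the CResource class from the question):
#include <boost/shared_ptr.hpp>
#include <boost/thread/mutex.hpp>

class CResourceBase
{
public:
    void SetResource(boost::shared_ptr<CResource> res)
    {
        boost::mutex::scoped_lock lock(m_Mutex);  // serialize writers and readers of m_Res
        m_Res = res;
    }

    boost::shared_ptr<CResource> GetResource()
    {
        boost::mutex::scoped_lock lock(m_Mutex);  // take a copy of m_Res under the lock
        return m_Res;
    }

private:
    boost::mutex m_Mutex;
    boost::shared_ptr<CResource> m_Res;
};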
Well, the documentation for tr1::shared_ptr (which is based on boost) tells a slightly different story, implying that the resource management is thread safe, whereas access to the resource itself is not.
"...
Thread Safety
C++0x-only features are: rvalue-ref/move support, allocator support, aliasing constructor, make_shared & allocate_shared. Additionally, the constructors taking auto_ptr parameters are deprecated in C++0x mode.
The Thread Safety section of the Boost shared_ptr documentation says "shared_ptr objects offer the same level of thread safety as built-in types." The implementation must ensure that concurrent updates to separate shared_ptr instances are correct even when those instances share a reference count e.g.
shared_ptr<A> a(new A);
shared_ptr<A> b(a);
// Thread 1 // Thread 2
a.reset(); b.reset();
The dynamically-allocated object must be destroyed by exactly one of the threads. Weak references make things even more interesting. The shared state used to implement shared_ptr must be transparent to the user and invariants must be preserved at all times. The key pieces of shared state are the strong and weak reference counts. Updates to these need to be atomic and visible to all threads to ensure correct cleanup of the managed resource (which is, after all, shared_ptr's job!) On multi-processor systems memory synchronisation may be needed so that reference-count updates and the destruction of the managed resource are race-free.
..."
see
http://gcc.gnu.org/onlinedocs/libstdc++/manual/memory.html#std.util.memory.shared_ptr
m_Res is not thread-safe, because it is read and written simultaneously;
you need the boost::atomic_store/atomic_load functions to protect it.
//--- Example 3 ---
// thread A
p = p3; // reads p3, writes p
// thread B
p3.reset(); // writes p3; undefined, simultaneous read/write
Additionally, watch out for cyclic references: if CResource ever holds a shared_ptr back to its CResourceBase, keeping shared_ptr<CResource> m_Res as a member creates a cycle that is never freed; use weak_ptr for one of the two directions instead.
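For illustration, a sketch of SetResource/GetResource rewritten with those free functions (this assumes a Boost version that provides the shared_ptr atomic access API; no extra mutex is needed for the pointer itself):
#include <boost/shared_ptr.hpp>  // also declares boost::atomic_load / boost::atomic_store

class CResourceBase
{
public:
    void SetResource(boost::shared_ptr<CResource> res)
    {
        boost::atomic_store(&m_Res, res);   // race-free write of the pointer
    }

    boost::shared_ptr<CResource> GetResource()
    {
        return boost::atomic_load(&m_Res);  // race-free snapshot of the pointer
    }

private:
    boost::shared_ptr<CResource> m_Res;
};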
Related
I have the following class that is supposed to be thread safe and has a std::shared_ptr member that refers to some shared resource.
class ResourceHandle
{
    using Resource = /* unspecified */;
    std::shared_ptr<Resource> m_resource;
};
Multiple threads may acquire a copy of a resource handle from some central location and the central resource handle may be updated at any time. So reads and writes to the same ResourceHandle may take place concurrently.
// Centrally:
ResourceHandle rh;
// Thread 1: reads the central handle into a local copy for further processing
auto localRh = rh;
// Thread 2: creates a new resource and updates the central handle
rh = ResourceHandle{/* ... */};
Because these threads perform non-const operations on the same std::shared_ptr, according to CppReference, I should use the std::atomic_...<std::shared_ptr> specializations to manipulate the shared pointer.
If multiple threads of execution access the same shared_ptr without synchronization and any of those accesses uses a non-const member function of shared_ptr then a data race will occur; the shared_ptr overloads of atomic functions can be used to prevent the data race.
So I want to implement the copy and move operations of the ResourceHandle class using these atomic operations such that manipulating a single resource handle from multiple threads avoids all data races.
The notes section of the CppReference page on std::atomic_...<std::shared_ptr> specializations states the following:
To avoid data races, once a shared pointer is passed to any of these functions, it cannot be accessed non-atomically. In particular, you cannot dereference such a shared_ptr without first atomically loading it into another shared_ptr object, and then dereferencing through the second object.
So I probably want to use some combination of std::atomic_load and std::atomic_store, but I am unsure where and when they should be applied.
How should the copy and move operations of my ResourceHandle class be implemented to not introduce any data races?
std::shared_ptr synchronises its access to the reference count, so you don't have to worry about operations on one std::shared_ptr instance affecting another. Access to the pointee is a separate matter: if concurrent accesses include at least one modification of the pointee, you have a data race there. Code that shares ownership of a previous Resource is unaffected by m_resource being reset to point to a new Resource.
You do have to synchronise access to a single std::shared_ptr if it is reachable from multiple threads. The quoted warning (and the reason these free functions are deprecated in C++20) boils down to this: once a value is accessed atomically anywhere, every access to that value must be atomic.
You could achieve that by hiding the global std::shared_ptr behind accessor functions that hand out local copies. Having ResourceHandle as a separate class makes that more difficult.
using ResourceHandle = std::shared_ptr<Resource>;
static ResourceHandle global;

ResourceHandle getResource()
{
    return std::atomic_load(&global);
}

void setResource(ResourceHandle handle)
{
    std::atomic_store(&global, handle);
}
Assume I have shared_ptr<T> a and two threads running concurrently where one does:
a.reset();
and another does:
auto b = a;
If the operations are atomic, then I either end up with two empty shared_ptrs, or with a being empty and b pointing to what was pointed to by a. I am fine with either outcome; however, due to the interleaving of the instructions, these operations might not be atomic. Is there any way I can ensure that?
To be more precise I only need a.reset() to be atomic.
UPD: as pointed out in the comments my question is silly if I don't get more specific. It is possible to achieve atomicity with a mutex. However, I wonder if, on the implementation level of shared_ptr, things are already taken care of. From cppreference.com, copy assignment and copy constructors are thread-safe. So auto b = a is alright to run without a lock. However, from this it's unclear if a.reset() is also thread-safe.
UPD1: it would be great if there is some document that specifies which methods of shared_ptr are thread-safe. From cppreference:
If multiple threads of execution access the same shared_ptr without synchronization and any of those accesses uses a non-const member function of shared_ptr then a data race will occur
It is unclear to me which of the methods are non-const.
Let the other thread use a weak_ptr. The lock() operation on a weak pointer is documented to be atomic.
Create:
std::shared_ptr<A> a = std::make_shared<A>();
std::weak_ptr<A> a_weak = std::weak_ptr<A>(a);
Thread 1:
a.reset();
Thread 2:
std::shared_ptr<A> b = a_weak.lock();  // atomically obtains a shared_ptr, or an empty one if already released
if (b != nullptr)
{
    ...
}
std::shared_ptr<T> is what some call a "thread-compatible" class, meaning that as long as each instance of a std::shared_ptr<T> can only have one thread calling its member functions at a given point in time, such member function invocations do not cause race conditions, even if multiple threads are accessing shared_ptrs that share ownership with each other.
std::shared_ptr<T> is not a thread-safe class; it is not safe for one thread to call a non-const method of an std::shared_ptr<T> instance while another thread is also accessing the same instance. If you need potentially concurrent reads and writes to not race, then synchronize them using a mutex.
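If C++20 is available, std::atomic<std::shared_ptr<T>> offers a mutex-free way to get the same guarantee for the pointer itself; a minimal sketch with illustrative names:
#include <atomic>
#include <memory>

struct Resource { int value = 0; };   // illustrative payload

// The pointer itself can be loaded/stored race-free; mutating *the pointee*
// from several threads still needs its own synchronization.
std::atomic<std::shared_ptr<Resource>> g_resource;

void writer()
{
    g_resource.store(std::make_shared<Resource>());       // publish a new object
}

void reader()
{
    std::shared_ptr<Resource> local = g_resource.load();  // race-free snapshot
    if (local) {
        int v = local->value;                              // read-only use of the snapshot
        (void)v;
    }
}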
I would like to know if this is safe with shared_ptr. Pardon my pseudo code:
Thread 1:
do lock
shared_ptr<ReadOnlyObj> obj = make_shared<ReadOnlyObj>();
some_shared_ptr.swap(obj);
do unlock
Thread 2-N:
//no lock
some_shared_ptr->getterOnObj();
cppreference says
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object. If multiple threads of execution access the same shared_ptr without synchronization and any of those accesses uses a non-const member function of shared_ptr then a data race will occur; the shared_ptr overloads of atomic functions can be used to prevent the data race.
but, according to the GNU docs:
The Boost shared_ptr (as used in GCC) features a clever lock-free algorithm to avoid the race condition, but this relies on the processor supporting an atomic Compare-And-Swap instruction. For other platforms there are fall-backs using mutex locks. Boost (as of version 1.35) includes several different implementations and the preprocessor selects one based on the compiler, standard library, platform etc. For the version of shared_ptr in libstdc++ the compiler and library are fixed, which makes things much simpler: we have an atomic CAS or we don't, see Lock Policy below for details.
As far as I know, Intel x86_64 supports CAS.
So, to my question:
shared_ptr::swap is non-const; get() and operator->() are const. Do I have to lock on get/operator->, too, given my usage scenario listed above?
I think I found the answer myself in the boost docs.
//--- Example 3 ---
// thread A
p = p3; // reads p3, writes p
// thread B
p3.reset(); // writes p3; undefined, simultaneous read/write
What I'm trying to do is a simultaneous read and write, which is undefined/not safe.
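One way to make that usage safe without taking the writer's lock in the readers is to route both sides through the shared_ptr atomic access functions; a sketch, assuming some_shared_ptr is a std::shared_ptr<ReadOnlyObj> visible to all threads:
// Thread 1 (writer): publish the new object atomically instead of swapping under a lock
std::shared_ptr<ReadOnlyObj> obj = std::make_shared<ReadOnlyObj>();
std::atomic_store(&some_shared_ptr, obj);

// Threads 2-N (readers): take an atomic snapshot, then call through the snapshot
std::shared_ptr<ReadOnlyObj> local = std::atomic_load(&some_shared_ptr);
local->getterOnObj();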
I wonder, is it safe to implement it like this?
typedef shared_ptr<Foo> FooPtr;
FooPtr *gPtrToFooPtr;  // global variable

// init (before any thread has been created)
void init()
{
    gPtrToFooPtr = new FooPtr(new Foo);
}

// thread A, B, C, ..., K
// Once thread Z executes read_and_drop(),
// no more calls to read() are made from any thread.
// But it is possible that, even after read_and_drop() has returned,
// some thread is still inside the read() function.
void read()
{
    FooPtr a = *gPtrToFooPtr;
    // do useful things with a (read only)
}

// thread Z (executed once)
void read_and_drop()
{
    FooPtr b = *gPtrToFooPtr;
    // do useful things with b (read only)
    b.reset();
}
We do not know which thread will perform the actual release.
Does boost's shared_ptr do the release safely under circumstances like this?
According to boost's documentation, the thread safety of shared_ptr is:
A shared_ptr instance can be "read" (accessed using only const operations) simultaneously by multiple threads. Different shared_ptr instances can be "written to" (accessed using mutable operations such as operator= or reset) simultaneously by multiple threads.
As far as I can tell, the code above does not violate any of the thread-safety criteria mentioned above, and I believe it should run fine. Can anyone tell me whether I am right or wrong?
Thanks in advance.
Edited 2012-06-20 01:00 UTC+9
The pseudo code above works fine. The shared_ptr implementation guarantees correct behaviour when multiple threads are accessing instances of it (each thread MUST access its own instance of shared_ptr, created via the copy constructor).
Note that in the pseudo code above, you must delete gPtrToFooPtr to have the shared_ptr implementation finally release (drop the reference count by one on) the object it owns ("owns" is not quite the proper word since it is not an auto_ptr, but who cares ;) ). And in this case, you must be aware of the fact that it may cause SIGSEGV in a multithreaded application.
How do you define 'safe' here? If you define it as 'I want to make sure that the object is destroyed exactly once', then YES, the release is safe. However, the problem is that the two threads share one smart pointer in your example. This is not safe at all. The reset() performed by one thread might not be visible to the other thread.
As stated by the documentation, smart pointers offer the same guarantees as built-in types (i.e., pointers). Therefore, it is problematic to perform an unguarded write while another thread might still be reading. It is undefined when that other reading thread will see the writes of the first one. Therefore, while one thread calls reset(), the pointer might NOT appear reset in the other thread, since the shared_ptr instance itself is shared.
If you want some sort of thread safety, you have to use two shared pointer instances. Then, of course, resetting one of them WILL NOT release the object, since the other thread still has a reference to it. Usually this behaviour is intended.
However, I think the bigger problem is that you are misusing shared_ptrs. It is quite uncommon to use pointers to shared_ptrs and to allocate the shared_ptr on the heap (using new). If you do that, you reintroduce the very problem smart pointers were meant to avoid (you now have to manage the lifetime of the shared_ptr itself). Maybe check out some example code about smart pointers and their usage first.
For your own good, I will be honest.
Your code is doing many things and almost all are simply useless and absurd.
typedef shared_ptr<Foo> FooPtr;
FooPtr *gPtrToFooPtr;  // global variable
A raw pointer to a smart pointer cancels the advantage of automatic resource management and does not solve any problem.
void read()
{
FooPtr a = *gPtrToFooPtr;
// do useful things (read only)
}
a is not used in any meaningful way.
{
FooPtr b = ...
b.reset();
}
b.reset() is useless here; b is about to be destroyed anyway. b has no purpose in this function.
I am afraid you have no idea what you are doing, what smart pointers are for, how to use shared_ptr, or how to do MT programming; so you end up with this absurd pile of useless features that does not solve the problem.
What about doing simple things simply:
Foo f;

// called before the other functions
void init()
{
    // prepare f
}

// called in many threads {R1, R2, ... Rn} in parallel
void read()
{
    // use f (read-only)
}

// called after all threads {R1, R2, ... Rn} have terminated
void read_and_drop()
{
    // reset f
}
read_and_drop() must not be called before it can be guaranteed that other threads are not reading f.
To your edit:
Why not call reset() first on the global shared_ptr?
If you were the last one to reference the object, fine, it is deleted; then you delete the shared_ptr on the heap.
If some other thread still uses it, you reduce the ref count by one, and "disconnect" the global ptr from the (still existing) object that is pointed-to. You can then safely delete the shared_ptr on the heap without affecting any thread that might still use it.
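In code, that teardown order might look like this (a sketch; drop_global is a hypothetical helper name, and it assumes all reader threads have finished with read() by the time it runs):
// thread Z, once no other thread can still be inside read()
void drop_global()
{
    gPtrToFooPtr->reset();   // drop this reference; the Foo is deleted if this was the last owner
    delete gPtrToFooPtr;     // destroy the heap-allocated shared_ptr itself
    gPtrToFooPtr = 0;
}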
I am working on a set that is frequently read but rarely written.
class A {
    boost::shared_ptr<std::set<int> > _mySet;
public:
    void add(int v) {
        boost::shared_ptr<std::set<int> > tmpSet(new std::set<int>(*_mySet));
        tmpSet->insert(v);  // insert to tmpSet
        _mySet = tmpSet;    // swap _mySet
    }
    void check(int v) {
        boost::shared_ptr<std::set<int> > theSet = _mySet;
        if (theSet->find(v) != theSet->end()) {
            // do something irrelevant
        }
    }
};
In the class, add() is only called by one thread and check() is called by many threads. check() does not care whether _mySet is the latest or not. Is the class thread-safe? Is it possible that the thread executing check() would observe "swap _mySet" happening before "insert to tmpSet"?
This is an interesting use of shared_ptr to implement thread safety. Whether it is OK depends on the thread-safety guarantees of boost::shared_ptr. In particular, does it establish some sort of fence or membar, so that you are guaranteed that all of the writes in the constructor and insert functions of the set occur before any modification of the pointer value becomes visible?
I can find no thread-safety guarantees whatsoever in the Boost documentation of smart pointers. This surprises me, as I was sure that there were some. But a quick look at the sources for 1.47.0 shows none, and suggests that any use of boost::shared_ptr in a threaded environment will fail. (Could someone please point me to what I'm missing? I can't believe that boost::shared_ptr has ignored threading.)
Anyway, there are three possibilities: you can't use the shared pointer in a threaded environment (which seems to be the case), the shared pointer ensures its own internal consistency in a threaded environment but doesn't establish ordering with regard to other objects, or the shared pointer establishes full ordering. Only in the last case will your code be safe as is. In the first case, you'll need some form of lock around everything, and in the second, you'll need some sort of fences or membars to ensure that the necessary writes are actually done before publishing the new version, and that they will be seen before trying to read it.
You do need synchronization; it is not thread safe. The simplicity of the operation doesn't matter: even something as simple as shared += value; is not thread safe.
Look here, for example, with regard to the thread safety of shared_ptr: Is boost shared_ptr <XXX> thread safe?
I would also question your allocation/swapping in add() and your use of shared_ptr in check().
update:
I went back and re-read the docs for shared_ptr ... It is most likely thread-safe in your particular case, since the reference counting for shared_ptr is thread-safe. However, you are (IMHO) adding unnecessary complexity by not using a read/write lock.
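For reference, the read/write-lock variant might look roughly like this (a sketch using boost::shared_mutex; it drops the copy-on-write trick entirely):
#include <set>
#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>

class A {
    std::set<int> _mySet;
    mutable boost::shared_mutex _mutex;
public:
    void add(int v) {
        boost::unique_lock<boost::shared_mutex> lock(_mutex);  // exclusive lock for the single writer
        _mySet.insert(v);
    }
    void check(int v) const {
        boost::shared_lock<boost::shared_mutex> lock(_mutex);  // shared lock: many readers in parallel
        if (_mySet.find(v) != _mySet.end()) {
            // do something irrelevant
        }
    }
};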
With the shared_ptr atomic access functions, this code should be thread safe:
atomic_store(&_mySet, tmpSet);
and
theSet = atomic_load(&_mySet);
(instead of the simple assignments)
But I don't know the current status of atomicity support for shared_ptr.
Note that adding atomicity to shared_ptr in a lock-free manner is a really difficult thing; so even if atomicity is implemented, it may rely on mutexes or user-mode spinlocks and, therefore, may sometimes suffer from performance issues.
Edit: Perhaps a volatile qualifier for the _mySet member variable should also be added, but I'm not sure that it is strictly required by the semantics of the atomic operations.
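Putting those two calls back into the class from the question gives roughly the following (a sketch, assuming a Boost version that ships the shared_ptr atomic access functions; the default-constructed set in the constructor is added so that check() never dereferences a null pointer):
#include <set>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

class A {
    boost::shared_ptr<std::set<int> > _mySet;
public:
    A() : _mySet(boost::make_shared<std::set<int> >()) { }

    // single writer: copy-on-write, then publish the new set atomically
    void add(int v) {
        boost::shared_ptr<std::set<int> > tmpSet =
            boost::make_shared<std::set<int> >(*boost::atomic_load(&_mySet));
        tmpSet->insert(v);
        boost::atomic_store(&_mySet, tmpSet);
    }

    // many readers: take an atomic snapshot and search that snapshot
    void check(int v) {
        boost::shared_ptr<std::set<int> > theSet = boost::atomic_load(&_mySet);
        if (theSet->find(v) != theSet->end()) {
            // do something irrelevant
        }
    }
};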