I'm programming a simple 3D rendering engine just to get more familiar with C++. Today I had my first steps with multithreading and already have a problem I cannot wrap my head around. When the application starts, it generates a small, Minecraft-like terrain consisting of cubes. They're generated within the main thread.
Now, when I want to generate more chunks, I do it in a separate thread:
void VoxelWorld::generateChunk(glm::vec2 chunkPosition) {
    Chunk* generatedChunk = m_worldGenerator->generateChunk(chunkPosition);
    generatedChunk->shader = m_chunkShader;
    generatedChunk->generateRenderObject();

    m_chunks[chunkPosition.x][chunkPosition.y] = generatedChunk;
    m_loadedChunks.push_back(glm::vec2(chunkPosition.x, chunkPosition.y));
}

void VoxelWorld::generateChunkThreaded(glm::vec2 chunkPosition) {
    std::thread chunkThread(&VoxelWorld::generateChunk, this, chunkPosition);
    chunkThread.detach();
}
void VoxelWorld::draw() {
    for (glm::vec2& vec : m_loadedChunks) {
        Transformation* transformation = new Transformation();
        transformation->getPosition().setPosition(glm::vec3(CHUNK_WIDTH*vec.x, 0, CHUNK_WIDTH*vec.y));
        m_chunks[vec.x][vec.y]->getRenderObject()->draw(transformation);
        delete(transformation); //TODO: Find a better way
    }
}
I have my member function generateChunk() (everything is non-static), which generates a Chunk and stores it in the VoxelWorld class. I have a 2D std::map<..> m_chunks which stores every chunk, and a std::vector<glm::vec2> m_loadedChunks which stores the positions of the generated chunks.
Calling generateChunk() directly works as expected. But when I try generateChunkThreaded(), the application crashes! I tried commenting out the last line of generateChunk(); then it does not crash. That's what confuses me so much! m_loadedChunks is just a regular std::vector. I tried making it public, with no effect. Is there anything obvious I'm missing?
You are accessing m_loadedChunks from several threads without synchronizing it.
You need to lock access to the shared data. So, a few tips here.
Declare a mutex as a member of the class:
std::mutex mtx; // mutex for critical section
Use it to lock, via a critical section, each time you want to access the shared elements:
std::lock_guard<std::mutex> lock(mtx);
m_chunks[chunkPosition.x][chunkPosition.y] = generatedChunk;
m_loadedChunks.push_back(glm::vec2(chunkPosition.x, chunkPosition.y));
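Note that the reading side needs the same lock: the draw() loop from the question iterates m_loadedChunks, and an unsynchronized push_back from the worker thread can reallocate the vector mid-iteration. A minimal sketch of draw() with the guard added (assuming the mtx member above; holding the lock for the whole loop is crude but safe):
void VoxelWorld::draw() {
    std::lock_guard<std::mutex> lock(mtx); // same mutex as the writer side
    for (glm::vec2& vec : m_loadedChunks) {
        Transformation transformation; // a stack object also sidesteps the new/delete TODO
        transformation.getPosition().setPosition(glm::vec3(CHUNK_WIDTH*vec.x, 0, CHUNK_WIDTH*vec.y));
        m_chunks[vec.x][vec.y]->getRenderObject()->draw(&transformation);
    }
}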
Hope that helps
When you have many threads accessing shared resources, those resources must either be read-only, atomic, or guarded with a mutex lock.
So, for your m_loadedChunks member variable, you would want to have it wrapped in a lock. For example:
class VoxelWorld
{
    // your class members and more ...

private:
    std::mutex m_loadedChunksMutex;
};
void VoxelWorld::generateChunk(glm::vec2 chunkPosition)
{
    Chunk* generatedChunk = m_worldGenerator->generateChunk(chunkPosition);
    generatedChunk->shader = m_chunkShader;
    generatedChunk->generateRenderObject();

    m_chunks[chunkPosition.x][chunkPosition.y] = generatedChunk; // note: m_chunks is shared too and wants the same treatment

    {
        std::lock_guard<std::mutex> scopedLock(m_loadedChunksMutex);
        m_loadedChunks.push_back(glm::vec2(chunkPosition.x, chunkPosition.y));
    }
}
The scopedLock will automatically wait to acquire the lock, and when it goes out of scope the lock is released.
Now note that I have a mutex for m_loadedChunks specifically, and not a generic mutex covering all variables that may be accessed by threads. This is actually a good practice introduced by Herb Sutter in his "Effective Concurrency" courses and in his talks at CppCon.
So, for whatever shared variables you have, use the above example as one means to solve race issues.
In the example here, I just want to protect iData to ensure that only one thread visits it at a time.
struct myData { /* ... */ };
myData iData;
Method 1, mutex inside the called function (multiple mutexes could be created):
void _proceedTest(myData &data)
{
    std::mutex mtx;
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}

int const nMaxThreads = std::thread::hardware_concurrency();
std::vector<std::thread> threads;
for (int iThread = 0; iThread < nMaxThreads; ++iThread)
{
    threads.push_back(std::thread(_proceedTest, std::ref(iData)));
}
for (auto& th : threads) th.join();
Method 2, use only one mutex:
void _proceedTest(myData &data, std::mutex &mtx)
{
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}

std::mutex mtx;
int const nMaxThreads = std::thread::hardware_concurrency();
std::vector<std::thread> threads;
for (int iThread = 0; iThread < nMaxThreads; ++iThread)
{
    threads.push_back(std::thread(_proceedTest, std::ref(iData), std::ref(mtx)));
}
for (auto& th : threads) th.join();
I want to make sure that Method 1 (multiple mutexes) ensures that only one thread can visit iData at the same time.
If Method 1 is correct, I'm also not sure whether Method 1 is better than Method 2?
Thanks!
I want to make sure that Method 1 (multiple mutexes) ensures that only one thread can visit iData at the same time.
Your 1st example creates a local mutex variable on the stack, it won't be shared with the other threads. Thus it's completely useless.
It won't guarantee exclusive access to iData.
If Method 1 is correct, I'm also not sure whether Method 1 is better than Method 2?
It isn't correct.
The other answers are correct on the technical level, but there is an important language-independent point missing: you should always prefer to minimize the number of different mutexes/locks!
Because: as soon as a thread needs to acquire more than one lock in order to do something (and then release all acquired locks), order becomes crucial.
When you have two locks and two different pieces of code, like:
getLockA() {
    getLockB() {
        do something
        release B
    }
    release A
}
And
getLockB() {
    getLockA() {
        ...
    }
}
you can quickly run into deadlocks: two threads/processes can acquire one lock each, and then both are stuck waiting for the other one to release its lock. Of course, looking at the above example, "you would never make such a mistake, and always go A first, then B". But what if those locks exist in completely different parts of your application, and they aren't acquired in the same method or class, but over the course of, say, 3 to 5 nested method invocations?
Thus: when you can solve your problem with one lock, use one lock only! The more locks you need to get something done, the higher the risk of ending up in a deadlock.
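That said, when you genuinely do need two locks at once, C++ can acquire them in one deadlock-free step, so no ordering discipline is required. A minimal sketch (C++17 std::scoped_lock; the names are illustrative):
#include <mutex>

std::mutex lockA;
std::mutex lockB;

void doSomething() {
    // Acquires both mutexes using a deadlock-avoidance algorithm, so the
    // A-then-B vs. B-then-A problem above cannot occur.
    std::scoped_lock guard(lockA, lockB);
    // ... work on the data guarded by A and B ...
}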
Method 1 only works if you make the mutex variable static.
void _proceedTest(myData &data)
{
    static std::mutex mtx;
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}
This will make mtx be shared by all threads that enter _proceedTest.
Since a static function scope variable is only visible to users of the function, it is not really a sufficient lock for the passed in data. This is because it is conceivable that multiple threads could be calling different functions that each want to manipulate data.
Thus, even though Method 1 is salvageable, Method 2 is still better, even though the cohesion between the lock and the data is weak.
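One way to tighten that cohesion is to bundle the data with its mutex, so callers cannot reach one without the other. A sketch (GuardedData is an invented name; myData and modifyData are from the question):
#include <mutex>

struct GuardedData {
    std::mutex mtx;
    myData data;
};

void _proceedTest(GuardedData &guarded)
{
    // The lock and the data travel together, so it is obvious
    // which mutex protects which state.
    std::unique_lock<std::mutex> lk(guarded.mtx);
    modifyData(guarded.data);
}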
The mutex in version 1 goes out of scope when you leave _proceedTest, and every call creates its own brand-new mutex; locking a mutex like that makes no sense, because it is never visible to the other threads.
In the second version, multiple threads can share the mutex (as long as it doesn't go out of scope, for example as a class member); this way one thread can lock it, and the other threads can see that it is locked (and won't be able to lock it as well, hence the term mutual exclusion).
Consider the following:
#include <cassert>
#include <utility> // for std::swap

// There are guys:
class Guy {
    // Each guy can have a buddy:
    Guy* buddy; // When a guy has a buddy, he is his buddy's buddy, i.e:
                // assert(!buddy || buddy->buddy == this);
public:
    // When guys are birthed into the world they have no buddy:
    Guy()
        : buddy{}
    {}

    // But guys can befriend each other:
    friend void befriend(Guy& a, Guy& b) {
        // Except themselves:
        assert(&a != &b);

        // Their old buddies (if any) lose their buddies:
        if (a.buddy) { a.buddy->buddy = {}; }
        if (b.buddy) { b.buddy->buddy = {}; }

        a.buddy = &b;
        b.buddy = &a;
    }

    // When a guy moves around, he keeps track of his buddy
    // and lets his buddy keep track of him:
    friend void swap(Guy& a, Guy& b) {
        std::swap(a.buddy, b.buddy);
        if (a.buddy) { a.buddy->buddy = &a; }
        if (b.buddy) { b.buddy->buddy = &b; }
    }

    Guy(Guy&& guy)
        : Guy()
    {
        swap(*this, guy);
    }

    Guy& operator=(Guy guy) {
        swap(*this, guy);
        return *this;
    }

    // When a Guy dies, his buddy loses his buddy.
    ~Guy() {
        if (buddy) { buddy->buddy = {}; }
    }
};
All is well so far, but now I want this to work when buddies are used in different threads. No problem, let's just stick std::mutex in Guy:
class Guy {
    std::mutex mutex;
    // same as above...
};
Now I just have to lock mutexes of both guys before linking or unlinking the pair of them.
This is where I am stumped. Here are failed attempts (using the destructor as an example):
Deadlock:
~Guy() {
    std::unique_lock<std::mutex> lock{mutex};
    if (buddy) {
        std::unique_lock<std::mutex> buddyLock{buddy->mutex};
        buddy->buddy = {};
    }
}
When two buddies are destroyed at around the same time, it is possible that each of them locks their own mutex, before trying to lock their buddies' mutexes, thus resulting in a deadlock.
Race condition:
Okay so we just have to lock mutexes in consistent order, either manually or with std::lock:
~Guy() {
    std::unique_lock<std::mutex> lock{mutex, std::defer_lock};
    if (buddy) {
        std::unique_lock<std::mutex> buddyLock{buddy->mutex, std::defer_lock};
        std::lock(lock, buddyLock);
        buddy->buddy = {};
    }
}
Unfortunately, to get to buddy's mutex we have to access buddy which at this point is not protected by any lock and may be in the process of being modified from another thread, which is a race condition.
Not scalable:
Correctness can be attained with a global mutex:
static std::mutex mutex;

~Guy() {
    std::unique_lock<std::mutex> lock{mutex};
    if (buddy) { buddy->buddy = {}; }
}
But this is undesirable for performance and scalability reasons.
So is this possible to do without global lock? How?
Using std::lock isn't a race condition (per se), nor does it risk deadlock.
std::lock will use a deadlock-free algorithm to obtain the two locks, using some kind of (unspecified) try-and-retreat method.
An alternative is to determine an arbitrary lock order, for example using the physical addresses of the objects.
You've correctly excluded the possibility that an object's buddy is itself, so there's no risk of trying to lock() the same mutex twice.
I say it isn't a race condition per se because what that code will do is ensure the integrity that if a has buddy b then b has buddy a, for all a and b.
The fact that one moment after befriending two objects they might be unfriended by another thread is presumably what you intend or addressed by other synchronization.
Note also that when you're befriending, and may unfriend the friends of the new friends, you need to lock ALL the objects at once.
That is, the two 'to be' friends and their current friends (if any).
Consequently you need to lock 2, 3 or 4 mutexes.
std::lock unfortunately doesn't take a runtime-sized set of locks; there is a version that does in Boost, or you need to address it by hand, as sketched below.
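For illustration, a sketch of the by-hand approach using address order (the helper names are mine, not standard library):
#include <algorithm>
#include <mutex>
#include <vector>

// Lock a runtime-sized set of mutexes. Sorting by address gives every
// thread the same global order, which prevents deadlock.
void lockAllByAddress(std::vector<std::mutex*>& ms) {
    std::sort(ms.begin(), ms.end());
    ms.erase(std::unique(ms.begin(), ms.end()), ms.end()); // guard against duplicates
    for (std::mutex* m : ms) m->lock();
}

void unlockAll(const std::vector<std::mutex*>& ms) {
    // Unlock order does not affect correctness.
    for (std::mutex* m : ms) m->unlock();
}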
To clarify, I'm reading the examples of possible destructors as models. Synchronization on the same locks would be required in all the relevant members (e.g. befriend(), swap() and unfriend() if that is required). Indeed the issue of locking 2,3 or 4 applies to the befriend() member.
Furthermore the destructor is probably the worst example because as mentioned in a comment it's illogical that an object is destructible but may be in lock contention with another thread. Some synchronization surely needs to exist in the wider program to make that impossible at which point the lock in the destructor is redundant.
Indeed, a design which ensures Guy objects have no buddy before destruction would seem like a good idea, together with a debug precondition that checks assert(buddy == nullptr) in the destructor. Sadly that can't be left as a run-time exception, because throwing from a destructor can cause program termination (std::terminate()).
In fact the real challenge (which may depend on the surrounding program) is how to unfriend any existing buddies when befriending. That would appear to require a try-retreat loop (sketched in code after the steps below):
1. Lock a and b.
2. Find out if they have buddies.
3. If they're already buddies - you're done.
4. If they have other buddies, unlock a & b, and lock a and b and their buddies (if any).
5. Check that the buddies haven't changed; if they have, go again.
6. Adjust the relevant members.
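A rough sketch of that loop for befriend(), building on the Guy class above (illustrative only; it assumes the surrounding program keeps the old buddies alive across the retreat, plus the includes <algorithm>, <vector> and <cassert>):
friend void befriend(Guy& a, Guy& b) {
    assert(&a != &b);
    for (;;) {
        // 1. Lock a and b.
        std::unique_lock<std::mutex> la{a.mutex, std::defer_lock};
        std::unique_lock<std::mutex> lb{b.mutex, std::defer_lock};
        std::lock(la, lb);

        // 2./3. Find out if they have buddies; if already buddies, done.
        if (a.buddy == &b) return;
        Guy* oa = a.buddy;
        Guy* ob = b.buddy;
        if (!oa && !ob) { // nobody to unfriend: link and done
            a.buddy = &b;
            b.buddy = &a;
            return;
        }

        // 4. Unlock a & b, then lock a, b and the old buddies together
        //    (2, 3 or 4 mutexes), here in address order.
        la.unlock();
        lb.unlock();
        std::vector<std::mutex*> ms{&a.mutex, &b.mutex};
        if (oa) ms.push_back(&oa->mutex);
        if (ob) ms.push_back(&ob->mutex);
        std::sort(ms.begin(), ms.end());
        for (std::mutex* m : ms) m->lock();

        // 5. Check that the buddies haven't changed...
        if (a.buddy == oa && b.buddy == ob) {
            // 6. ...and adjust the relevant members.
            if (oa) oa->buddy = nullptr;
            if (ob) ob->buddy = nullptr;
            a.buddy = &b;
            b.buddy = &a;
            for (std::mutex* m : ms) m->unlock();
            return;
        }
        for (std::mutex* m : ms) m->unlock(); // they changed: go again
    }
}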
It's a question for the surrounding program whether that risks live-lock but any try-retreat method suffers the same risk.
What won't work is to std::lock() a & b and then std::lock() the buddies, because that does risk deadlock.
So to answer the question: yes, it is possible without a global lock, but it depends on the surrounding program. It may be that in a population of many Guy objects contention is rare and liveness is high.
But it may also be that a small number of objects are hotly contended (possibly within a large population), and that results in a problem. This can't be assessed without understanding the wider application.
One way to resolve that would be lock escalation, which in effect falls back piecemeal to a global lock. In essence, if there were too many trips around the retry loop, a global semaphore would be set, ordering all threads into global-lock mode for a while. A while might be a number of actions, a period of time, or until contention on the global lock subsides!
So the final answer is "yes it's absolutely possible unless it doesn't work in which case 'no'".
I am thinking about how to implement a class that will contain private data that will eventually be modified by multiple threads through method calls. For synchronization (using the Windows API), I am planning on using a CRITICAL_SECTION object, since all the threads will spawn from the same process.
Given the following design, I have a few questions.
template <typename T>
class Shareable
{
private:
    const LPCRITICAL_SECTION sync; // Can be read and used by multiple threads
    T *data;

public:
    Shareable(LPCRITICAL_SECTION cs, unsigned elems) : sync{cs}, data{new T[elems]} { }
    ~Shareable() { delete[] data; }

    void sharedModify(unsigned index, T &datum) // <-- Can this be validly called
    // by multiple threads with synchronization being implicit?
    {
        EnterCriticalSection(sync);
        /*
            The critical section of code involving reads & writes to 'data'
        */
        LeaveCriticalSection(sync);
    }
};
// Somewhere else ...
DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
    Shareable<ActualType> *ptr = static_cast<Shareable<ActualType>*>(lpParameter);
    ActualType copyable = /* initialization */;
    ptr->sharedModify(validIndex, copyable); // <-- OK, synchronized?
    return 0;
}
The way I see it, the API calls will be conducted in the context of the current thread. That is, I assume this is the same as if I had acquired the critical section object from the pointer and called the API from within ThreadProc(). However, I am worried that if the object is created and placed in the main/initial thread, there will be something funky about the API calls.
1. When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
2. Should I instead get a pointer to the critical section object and use that instead?
3. Is there some other synchronization mechanism that is better suited to this scenario?
When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
It's not implicit, it's explicit. There's only one CRITICAL_SECTION and only one thread can hold it at a time.
Should I instead get a pointer to the critical section object and use that instead?
No. There's no reason to use a pointer here.
Is there some other synchronization mechanism that is better suited to this scenario?
It's hard to say without seeing more code, but this is definitely the "default" solution. It's like a singly-linked list -- you learn it first, it always works, but it's not always the best choice.
When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
Implicit from the caller's perspective, yes.
Should I instead get a pointer to the critical section object and use that instead?
No. In fact, I would suggest giving the Shareable object ownership of its own critical section instead of accepting one from the outside (and embracing RAII concepts to write safer code), eg:
template <typename T>
class Shareable
{
private:
    CRITICAL_SECTION sync;
    std::vector<T> data;

    struct SyncLocker
    {
        CRITICAL_SECTION &sync;
        SyncLocker(CRITICAL_SECTION &cs) : sync(cs) { EnterCriticalSection(&sync); }
        ~SyncLocker() { LeaveCriticalSection(&sync); }
    };

public:
    Shareable(unsigned elems) : data(elems)
    {
        InitializeCriticalSection(&sync);
    }

    Shareable(const Shareable&) = delete;
    Shareable(Shareable&&) = delete;

    ~Shareable()
    {
        {
            SyncLocker lock(sync);
            data.clear();
        }
        DeleteCriticalSection(&sync);
    }

    void sharedModify(unsigned index, const T &datum)
    {
        SyncLocker lock(sync);
        data[index] = datum;
    }

    Shareable& operator=(const Shareable&) = delete;
    Shareable& operator=(Shareable&&) = delete;
};
Is there some other synchronization mechanism that is better suited to this scenario?
That depends. Will multiple threads be accessing the same index at the same time? If not, then there is not really a need for the critical section at all. One thread can safely access one index while another thread accesses a different index.
If multiple threads need to access the same index at the same time, a critical section might still not be the best choice. Locking the entire array can be a big bottleneck if you only need to lock portions of it at a time. Things like the Interlocked API, or Slim Reader/Writer (SRW) locks, might make more sense. It really depends on your thread designs and what you are actually trying to protect.
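For instance, here is a minimal sketch of the same idea with an SRW lock, so concurrent readers no longer exclude each other (the wrapper class is mine; the Win32 calls are the real SRW API):
#include <windows.h>

template <typename T>
class SharedReadable
{
private:
    SRWLOCK sync = SRWLOCK_INIT; // no Initialize/Delete calls required
    T value{};

public:
    T read()
    {
        AcquireSRWLockShared(&sync); // many readers may hold this at once
        T copy = value;
        ReleaseSRWLockShared(&sync);
        return copy;
    }

    void write(const T &datum)
    {
        AcquireSRWLockExclusive(&sync); // writers get exclusive access
        value = datum;
        ReleaseSRWLockExclusive(&sync);
    }
};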
I'm implementing a Signal/Slot framework, and got to the point that I want it to be thread-safe. I already had a lot of support from the Boost mailing-list, but since this is not really boost-related, I'll ask my pending question here.
When is a signal/slot implementation (or any framework that calls functions outside itself, specified in some way by the user) considered thread-safe? Should it be safe w.r.t. its own data, i.e. the data associated with its implementation details? Or should it also take into account the user's data, which might or might not be modified by whatever functions are passed to the framework?
This is an example given on the mailing-list (Edit: this is an example use-case, i.e. user code; my code is behind the calls to the Emitter object):
int * somePtr = nullptr;
Emitter<Event> em; // just an object that can emit the 'Event' signal

void mainThread()
{
    em.connect<Event>(someFunction);
    // now, somehow, 2 threads are created which, at some point
    // execute the thread1() and thread2() functions below
}

void someFunction()
{
    // can somePtr change after the check but before the set?
    if (somePtr)
        *somePtr = 17;
}

void cleanupPtr()
{
    // this looks safe, but compilers and CPUs can reorder this code:
    int *tmp = somePtr;
    somePtr = nullptr;
    delete tmp;
}

void thread1()
{
    em.emit<Event>();
}

void thread2()
{
    em.disconnect<Event>(someFunction);
    // now safe to cleanup (?)
    cleanupPtr();
}
In the above code, it might happen that Event is emitted, causing someFunction to be executed. If somePtr is non-null, but becomes null just after the if and before the assignment, we're in trouble. From the point of view of thread2, this is not obvious, because it is disconnecting someFunction before calling cleanupPtr.
I can see why this could potentially lead to trouble, but whose responsibility is this? Should my library protect the user from using it in every imaginable but irresponsible way?
I suspect there is no clearly good answer, but clarity will come from documenting the guarantees you wish to make about concurrent access to an Emitter object.
One level of guarantee, which to me is what is implied by a promise of thread safety, is that:
Concurrent operations on the object are guaranteed to leave the object in a consistent state (at least, from the point of view of the accessing threads.)
Non-commutative operations will be performed as if they were scheduled serially in some (unknown) order.
Then the question is, what does the emit method promise semantically: passing control to the connected routine, or evaluation of the function? If the former, then your work sounds like it is already done; if the latter, then the 'as-if ordered' requirement would mean that you need to enforce some level of synchronisation.
Users of the library can work with either, provided it is clear what is being promised.
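For example, here is a minimal sketch of the first semantic ('passing control'), where the emitter guards only its own slot list and runs user code outside the lock (the internals are invented for illustration; the real Emitter is templated on the event type):
#include <functional>
#include <mutex>
#include <vector>

class EmitterSketch {
    std::mutex m;
    std::vector<std::function<void()>> slots;

public:
    void connect(std::function<void()> f) {
        std::lock_guard<std::mutex> lock(m);
        slots.push_back(std::move(f));
    }

    void emit() {
        std::vector<std::function<void()>> copy;
        {
            std::lock_guard<std::mutex> lock(m); // protect only the emitter's own state
            copy = slots;
        }
        for (auto& f : copy) f(); // user code runs unlocked
    }
};
Note that under this scheme disconnect() only prevents future calls; a call already in flight may still be running, which is exactly the hazard in the question's example.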
Firstly, the simplest possibility: if you don't claim your library to be thread-safe, you don't have to bother with this.
(But even) if you do:
In your example the user would have to take care of thread-safety, since both functions could be dangerous even without using your event system (IMHO, this is a pretty good way to determine who should take care of these kinds of problems). A possible way to do this in C++11 could be:
#include <mutex>

// A mutex is used to control thread access to a shared resource
std::mutex _somePtr_mutex;
int* somePtr = nullptr;

void someFunction()
{
    /*
        Create a 'lock_guard' to manage your mutex.
        Is the mutex '_somePtr_mutex' already locked?
            Yes: Wait until it's unlocked.
            No:  Lock it and continue execution.
    */
    std::lock_guard<std::mutex> lock(_somePtr_mutex);

    if (somePtr)
        *somePtr = 17;

    // End of scope: 'lock' gets destroyed and hence unlocks '_somePtr_mutex'
}

void cleanupPtr()
{
    /*
        Create a 'lock_guard' to manage your mutex.
        Is the mutex '_somePtr_mutex' already locked?
            Yes: Wait until it's unlocked.
            No:  Lock it and continue execution.
    */
    std::lock_guard<std::mutex> lock(_somePtr_mutex);

    int *tmp = somePtr;
    somePtr = nullptr;
    delete tmp;

    // End of scope: 'lock' gets destroyed and hence unlocks '_somePtr_mutex'
}
The last question is easy. If you say your library is thread-safe, it should be thread-safe. It makes no sense to say it is partly thread-safe, or that it is only thread-safe if you do not abuse it. In that case you have to explain what exactly is not thread-safe.
Now to your first question, regarding someFunction:
The operation is not atomic, which means the CPU can be interrupted between the if and the assignment. And that will happen, I know that :-) The other thread can erase the pointer at any time, even between two short and fast-looking statements.
Now to cleanupPtr:
Be careful with the volatile keyword here. It only tells the compiler not to cache the value in a CPU register; in C++ it provides neither atomicity nor any ordering guarantees between threads. So even in a simple situation with one reader thread and one writer thread, volatile is not enough to synchronize them; you need a mutex or std::atomic for that.
For such situations you can use a mutex or atomics. I will give you an example using a mutex. I use C++11 for that, but it works similarly with previous versions of C++ using Boost.
Using mutex:
int * somePtr = nullptr;
Emitter<Event> em; // just an object that can emit the 'Event' signal
std::recursive_mutex g_mutex;

void mainThread()
{
    em.connect<Event>(someFunction);
    // now, somehow, 2 threads are created which, at some point
    // execute the thread1() and thread2() functions below
}

void someFunction()
{
    std::lock_guard<std::recursive_mutex> lock(g_mutex);
    // can somePtr change after the check but before the set?
    if (somePtr)
        *somePtr = 17;
}

void cleanupPtr()
{
    std::lock_guard<std::recursive_mutex> lock(g_mutex);
    // this looks safe, but compilers and CPUs can reorder this code:
    int *tmp = somePtr;
    somePtr = nullptr;
    delete tmp;
}

void thread1()
{
    em.emit<Event>();
}

void thread2()
{
    em.disconnect<Event>(someFunction);
    // now safe to cleanup (?)
    cleanupPtr();
}
I only added a recursive mutex here, without changing any other code of the sample, even though some of it is now cargo code.
There are two kinds of mutex in the std: std::mutex, which is not re-entrant (a thread that already holds it must not lock it again, which can happen if a method that needs mutex protection calls a public method that uses the same mutex), and std::recursive_mutex, which is re-entrant for the same thread.
Atomics (or interlocked operations in Win32) are another way, but only for exchanging values between threads or accessing them concurrently. Your example is missing such values, but in your case I would look a little deeper into them (std::atomic).
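To illustrate what an atomic would buy you here (a sketch, not a replacement for the mutex above):
#include <atomic>

std::atomic<int*> atomicPtr{nullptr};

void cleanupPtrAtomic()
{
    // The exchange is one indivisible step: no other thread can observe
    // the pointer between the read and the null-out.
    int* tmp = atomicPtr.exchange(nullptr);
    delete tmp;
}

// Note: this alone does not fix someFunction(), whose check-then-write
// spans two steps and therefore still needs the mutex.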
UPDATE
If you are the user of a library which is not explicitly declared as thread-safe by its developer, treat it as not thread-safe and shield every call to it with a mutex lock.
To stick with the example: if you cannot change someFunction, then you have to wrap the function like:
void threadsafeSomeFunction()
{
    std::lock_guard<std::recursive_mutex> lock(g_mutex);
    someFunction();
}
Here is an exemplary container class, in pseudocode:
class Container
{
public:
    Container() {}
    ~Container() {}

    void add(data newData)
    {
        // addition of data
    }

    data get(size_t which)
    {
        // returning some data
    }

    void remove(size_t which)
    {
        // delete specified object
    }

private:
    data d;
};
How can this container be made thread-safe? I heard about mutexes - where should these mutexes be placed? Should the mutex be static for the class, or maybe in global scope? What is a good library for this task in C++?
First of all, mutexes should not be static for a class as long as you are going to use more than one instance. There are many cases where you should or shouldn't use them, so without seeing your code it's hard to say. Just remember: they are used to synchronize access to shared data, so it's wise to place them inside methods that modify or rely on the object's state. In your case I would use one mutex to protect the whole object and lock it in all three methods. Like:
class Container
{
public:
    Container() {}
    ~Container() {}

    void add(data newData)
    {
        lock_guard<Mutex> lock(mutex);
        // addition of data
    }

    data get(size_t which)
    {
        lock_guard<Mutex> lock(mutex);
        // getting copy of value
        // return that value
    }

    void remove(size_t which)
    {
        lock_guard<Mutex> lock(mutex);
        // delete specified object
    }

private:
    data d;
    Mutex mutex;
};
Intel Thread Building Blocks (TBB) provides a bunch of thread-safe container implementations for C++. It has been open sourced, you can download it from: http://threadingbuildingblocks.org/ver.php?fid=174 .
First: sharing mutable state between threads is hard. You should be using a library that has been audited and debugged.
That said, there are two different functional issues:
you want a container to provide safe atomic operations
you want a container to provide safe multiple operations
The idea of multiple operations is that multiple accesses to the same container must be executed successively, under the control of a single entity. They require the caller to "hold" the mutex for the duration of the transaction so that only it changes the state.
1. Atomic operations
This one appears simple:
add a mutex to the object
at the start of each method grab a mutex with a RAII lock
Unfortunately it's also plain wrong.
The issue is re-entrancy. It is likely that some methods will call other methods on the same object. If those once again attempt to grab the mutex, you get a deadlock.
It is possible to use re-entrant mutexes. They are a bit slower, but allow the same thread to lock a given mutex as many times as it wants. The number of unlocks must match the number of locks, so once again, RAII.
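A minimal sketch of the re-entrant variant (illustrative names):
#include <mutex>

class R {
    std::recursive_mutex _mutex;

public:
    void foo() {
        std::lock_guard<std::recursive_mutex> lock(_mutex);
        bar(); // re-locking the same mutex from the same thread is fine
    }

    void bar() {
        std::lock_guard<std::recursive_mutex> lock(_mutex);
        // ... do something ...
    }
};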
Another approach is to use dispatching methods:
class C {
public:
    void foo() { Lock lock(_mutex); foo_impl(); }

private:
    void foo_impl() { /* do something */ }

    Mutex _mutex;
};
The public methods are simple forwarders to the private work methods, and they simply take the lock. Then one just has to ensure that the private methods never take the mutex...
Of course there are risks of accidentally calling a locking method from a work-method, in which case you deadlock. Read on to avoid this ;)
2. Multiple operations
The only way to achieve this is to have the caller hold the mutex.
The general method is simple:
add a mutex to the container
provide a handle on this method
cross your fingers that the caller will never forget to hold the mutex while accessing the class
I personally prefer a much saner approach.
First, I create a "bundle of data", which simply represents the class data (+ a mutex), and then I provide a Proxy in charge of grabbing the mutex. The data is locked away so that only the proxy may access the state.
class ContainerData {
protected:
    friend class ContainerProxy;

    Mutex _mutex;

    void foo();
    void bar();

private:
    // some data
};

class ContainerProxy {
public:
    ContainerProxy(ContainerData& data): _data(data), _lock(data._mutex) {}

    void foo() { _data.foo(); }
    void bar() { foo(); _data.bar(); }

private:
    ContainerData& _data;
    Lock _lock;
};
Note that it is perfectly safe for the Proxy to call its own methods. The mutex will be released automatically by the destructor.
The mutex can still be re-entrant if multiple Proxies are desired. But really, when multiple proxies are involved, it generally turns into a mess. In debug mode, it's also possible to add a check that the mutex is not already held by this thread (and assert if it is), as sketched below.
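A sketch of such a debug check (illustrative; a thin wrapper around std::mutex that remembers its owner):
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>

class CheckedMutex {
    std::mutex _mutex;
    std::atomic<std::thread::id> _owner{};

public:
    void lock() {
        // Fires if a thread tries to re-lock a mutex it already holds.
        assert(_owner.load() != std::this_thread::get_id());
        _mutex.lock();
        _owner.store(std::this_thread::get_id());
    }

    void unlock() {
        _owner.store(std::thread::id());
        _mutex.unlock();
    }
};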
3. Reminder
Using locks is error-prone. Deadlocks are a common cause of error and occur as soon as you have two mutexes (or one mutex and re-entrancy). When possible, prefer higher-level alternatives.
Add a mutex as an instance variable of the class. Initialize it in the constructor, lock it at the very beginning of every method (including the destructor), and unlock it at the end of the method. Adding a global mutex for all instances of the class (as a static member or just in global scope) can be a performance penalty.
There is also a very nice collection of lock-free containers (including maps) by Max Khiszinsky:
LibCDS1 Concurrent Data Structures
Here is the documentation page:
http://libcds.sourceforge.net/doc/index.html
It can be kind of intimidating to get started, because it is fully generic and requires you to register a chosen garbage-collection strategy and initialize that. Of course, the threading library is configurable and you need to initialize that as well :)
See the following links for some getting started info:
initialization of CDS and the threading manager
http://sourceforge.net/projects/libcds/forums/forum/1034512/topic/4600301/
the unit tests ((cd build && ./build.sh --debug-test) for a debug build)
Here is a base template for 'main':
#include <cds/threading/model.h> // threading manager
#include <cds/gc/hzp/hzp.h>      // Hazard Pointer GC

int main()
{
    // Initialize the CDS library
    cds::Initialize();

    // Initialize the garbage collector(s) that you use
    cds::gc::hzp::GarbageCollector::Construct();

    // Attach the main thread
    // Note: this is needed if the main thread accesses libcds containers
    cds::threading::Manager::attachThread();

    // Do some useful work
    ...

    // Finish the main thread - detaches internal control structures
    cds::threading::Manager::detachThread();

    // Terminate GCs
    cds::gc::hzp::GarbageCollector::Destruct();

    // Terminate the CDS library
    cds::Terminate();
}
Don't forget to attach any additional threads you are using:
#include <cds/threading/model.h>

int myThreadFunc(void *)
{
    // initialize libcds thread control structures
    cds::threading::Manager::attachThread();

    // Now you can work with GCs and libcds containers
    ....

    // Finish the working thread
    cds::threading::Manager::detachThread();
}
1 (Not to be confused with Google's compact data structures library.)