Multiple mutex locking strategies and why libraries don't use address comparison - c++

There is a widely known way of locking multiple locks, which relies on choosing a fixed linear ordering and acquiring locks according to that ordering.
That was proposed, for example, in the answer to "Acquire a lock on two mutexes and avoid deadlock". In particular, the solution based on address comparison seems quite elegant and obvious.
When I tried to check how it is actually implemented, I found, to my surprise, that this solution is not widely used.
To quote the Kernel Docs - Unreliable Guide To Locking:
Textbooks will tell you that if you always lock in the same order, you
will never get this kind of deadlock. Practice will tell you that this
approach doesn't scale: when I create a new lock, I don't understand
enough of the kernel to figure out where in the 5000 lock hierarchy it
will fit.
PThreads doesn't seem to have such a mechanism built in at all.
Boost.Thread came up with a completely different solution: lock() for multiple (2 to 5) mutexes is based on trying to lock as many mutexes as possible at any given moment.
This is the fragment of the Boost.Thread source code (Boost 1.48.0, boost/thread/locks.hpp:1291):
template<typename MutexType1,typename MutexType2,typename MutexType3>
void lock(MutexType1& m1,MutexType2& m2,MutexType3& m3)
{
    unsigned const lock_count=3;
    unsigned lock_first=0;
    for(;;)
    {
        switch(lock_first)
        {
        case 0:
            lock_first=detail::lock_helper(m1,m2,m3);
            if(!lock_first)
                return;
            break;
        case 1:
            lock_first=detail::lock_helper(m2,m3,m1);
            if(!lock_first)
                return;
            lock_first=(lock_first+1)%lock_count;
            break;
        case 2:
            lock_first=detail::lock_helper(m3,m1,m2);
            if(!lock_first)
                return;
            lock_first=(lock_first+2)%lock_count;
            break;
        }
    }
}
where lock_helper locks the first mutex, try-locks the others, and returns 0 on success or the index of the mutex that could not be locked.
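For illustration, here is a minimal sketch (mine, not Boost's) of what such a lock-and-try-lock helper could look like for two mutexes: it blocks on the first mutex, try-locks the second, and reports which one failed so the caller can rotate the starting point on the next attempt.
#include <mutex>

// Sketch only: blocks on m1, then attempts m2 without blocking.
// Returns 0 on success, or 1 if m2 could not be acquired
// (in which case m1 is released again by the guard's destructor).
template<typename M1, typename M2>
unsigned lock_helper_sketch(M1& m1, M2& m2)
{
    std::unique_lock<M1> guard(m1);   // blocking lock on the first mutex
    if (!m2.try_lock())               // non-blocking attempt on the second
        return 1;                     // guard releases m1 on return
    guard.release();                  // success: keep m1 locked, caller owns both
    return 0;
}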
Why is this solution better than comparing addresses or any other kind of IDs? I don't see any problem with pointer comparison that this kind of "blind" locking avoids.
Are there any other ideas on how to solve this problem on a library level?

From the bounty text:
I'm not even sure if I can prove correctness of the presented Boost solution, which seems more tricky than the one with linear order.
The Boost solution cannot deadlock because it never waits while already holding a lock: all locks but the first are acquired with try_lock, and if any try_lock call fails, all previously acquired locks are released. Also, in the Boost implementation a new attempt starts from the lock it failed to acquire the previous time, first waiting until that lock becomes available; that is a smart design decision.
As a general rule, it is always better to avoid blocking calls while holding a lock, so the try-lock-based solution is, in my opinion, preferable where possible. Lock ordering, by contrast, can make the system as a whole get stuck. Imagine that the very last lock (e.g. the one with the biggest address) was acquired by a thread which was then blocked. Now imagine some other thread needs the last lock and another lock; due to the ordering, it will first take the other one and then wait on the last lock. The same can happen with all the other locks, and the whole system makes no progress until the last lock is released. Of course this is an extreme and rather unlikely case, but it illustrates the inherent problem with lock ordering: the higher a lock's position in the order, the more indirect impact the lock has when acquired.
The shortcoming of the try-lock-based solution is that it can cause livelock, and in extreme cases the whole system might also get stuck, at least for some time. It is therefore important to have a back-off scheme that makes the pauses between locking attempts longer over time, and preferably randomized.
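A minimal sketch of such a randomized back-off loop for two mutexes might look like this (the delay bounds are arbitrary illustration values):
#include <algorithm>
#include <chrono>
#include <mutex>
#include <random>
#include <thread>

// Sketch only: repeatedly try to take both mutexes; on failure release
// everything and sleep for a randomized, growing (but capped) interval.
void lock_both_with_backoff(std::mutex& a, std::mutex& b)
{
    std::mt19937 rng{std::random_device{}()};
    for (int attempt = 0; ; ++attempt)
    {
        std::unique_lock<std::mutex> la(a);
        if (b.try_lock())
        {
            la.release();              // caller now owns both locks
            return;
        }
        la.unlock();                   // drop everything before pausing

        // Randomized back-off, capped so the wait does not grow unboundedly.
        int cap = std::min(1 << std::min(attempt, 10), 1000);
        std::uniform_int_distribution<int> dist(0, cap);
        std::this_thread::sleep_for(std::chrono::microseconds(dist(rng)));
    }
}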

Sometimes, lock A needs to be acquired before lock B does. Lock B might have either a lower or a higher address, so you can't use address comparison in this case.
Example: When you have a tree data structure and threads try to read and update nodes, you can protect the tree using a reader-writer lock per node. This only works if your threads always acquire the locks top-down, root-to-leaf (see the sketch below). The addresses of the locks do not matter in this case.
You can only use address comparison if it does not matter at all which lock gets acquired first. If that is the case, address comparison is a good solution. But if it is not, you can't use it.
I guess the Linux kernel requires certain subsystems to be locked before others are. This cannot be done using address comparison.
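To make the root-to-leaf rule concrete, here is a small hand-over-hand sketch (the Node layout is hypothetical): the parent's lock is always taken before the child's, regardless of which object happens to have the lower address.
#include <shared_mutex>

struct Node {                        // hypothetical tree node
    std::shared_mutex mtx;
    Node* left = nullptr;
    int value = 0;
};

// Sketch only: descend while holding at most two locks, always
// locking the child after the parent (structural order, not addresses).
int read_leftmost(Node* root)
{
    std::shared_lock<std::shared_mutex> cur(root->mtx);
    Node* n = root;
    while (n->left) {
        std::shared_lock<std::shared_mutex> next(n->left->mtx); // child after parent
        n = n->left;
        cur = std::move(next);       // releases the parent, keeps the child
    }
    return n->value;
}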

The "address comparison" and similar approaches, although used quite often, are special cases. They works fine if you have
a lock-free mechanism to get
two (or more) "items" of the same kind or hierarchy level
any stable ordering schema between those items
For example: You have a mechanism to get two "accounts" from a list. Assume that access to the list is lock-free. Now you have pointers to both items and want to lock them. Since they are "siblings", you have to choose which one to lock first. Here an approach using addresses (or any other stable ordering scheme, such as an "account id") is fine.
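A minimal sketch of that sibling case (the Account type is made up for illustration):
#include <functional>
#include <mutex>
#include <utility>

struct Account {                     // hypothetical "sibling" item
    std::mutex mtx;
    long balance = 0;
};

// Sketch only: order the two locks by a stable key (here, the address),
// so every thread that locks the same pair agrees on who goes first.
void transfer(Account& from, Account& to, long amount)
{
    if (&from == &to) return;        // never lock the same mutex twice

    Account* first  = &from;
    Account* second = &to;
    if (std::less<Account*>{}(second, first))  // std::less gives a total order
        std::swap(first, second);

    std::lock_guard<std::mutex> l1(first->mtx);
    std::lock_guard<std::mutex> l2(second->mtx);
    from.balance -= amount;
    to.balance   += amount;
}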
But the linked Linux text talks about "lock hierarchies". This means locking not between "siblings" (of the same kind) but between "parent" and "children", which might be of different types. This can happen in actual tree structures as well as in other scenarios.
Contrived example: To load a program you must
lock the file inode,
lock the process table, and
lock the destination memory.
These three locks are not "siblings" and are not in a clear hierarchy. The locks are also not taken directly one after the other - each subsystem takes its locks as needed. If you consider all the use cases where those three (and more) subsystems interact, you see that there is no clear, stable ordering to be found.
The Boost library is in the same situation: It strives to provide generic solutions. So they cannot assume the points from above and must fall back to a more complicated strategy.

One scenario where address comparison will fail is if you use the proxy pattern:
you can delegate the locks to the same underlying object, and the proxies' addresses will be different.
Consider the following example
#include <iostream>

template<typename MutexType>
class MutexHelper
{
public:
    MutexHelper(MutexType& m) : _m(m) {}
    void lock()
    {
        std::cout << "locking ";
        _m.lock();
    }
    void unlock()
    {
        std::cout << "unlocking ";
        _m.unlock();
    }
private:
    MutexType& _m;
};
If the function
template<typename MutexType1,typename MutexType2>
void lock(MutexType1& m1,MutexType2& m2);
actually used address comparison, the following code could produce a deadlock:
Mutex m1;
Mutex m2;
thread1:
MutexHelper<Mutex> hm1(m1);
MutexHelper<Mutex> hm2(m2);
lock(hm1, hm2);
thread2:
MutexHelper<Mutex> hm2(m2);
MutexHelper<Mutex> hm1(m1);
lock(hm1, hm2);
EDIT:
Here is an interesting thread that sheds some light on the boost::lock implementation:
thread-best-practice-to-lock-multiple-mutexes
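Note that a try-and-back-off algorithm such as std::lock (C++11) is immune to the proxy problem, because it never inspects the lockables' identities; it only calls their lock()/try_lock()/unlock() members. A minimal usage sketch, assuming MutexHelper above is extended with a forwarding try_lock() (which std::lock requires):
#include <mutex>

std::mutex m1, m2;

void thread_body()
{
    MutexHelper<std::mutex> hm1(m1);
    MutexHelper<std::mutex> hm2(m2);
    std::lock(hm1, hm2);             // deadlock-free regardless of addresses
    // ... work with both resources ...
    hm1.unlock();
    hm2.unlock();
}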
Address comparison does not work for inter-process shared mutexes (named synchronization objects).

Related

How to solve deadlock in multiple mutexes

I have code that needs to lock multiple mutexes.
void AttackAoeRequest(Player* attacker, int range)
{
    std::lock_guard<std::mutex> lk_attacker(attacker->mtx);
    if (attacker->isInVehicle)
    {
        return;
    }
    // There is a lot of code that needs to run before the loop,
    // and this code needs to access attacker properties.
    // s_map is the global map class that contains all players in the map.
    for (Player* defender : s_map.GetAllPlayers())
    {
        if (attacker == defender) continue;
        std::lock_guard<std::mutex> lk_defender(defender->mtx);
        if (GetDistance(attacker->position, defender->position) <= 5)
        {
            printf("%d attack %d damage : %d\n", attacker->id, defender->id
                , attacker->attackUpgrade - defender->defenseUpgrade);
        }
    }
}
A deadlock occurs when the attacker is also a defender at the same time.
e.g.
//playerA and playerB are in the global map class.
std::thread threadA = std::thread(AttackAoeRequest, &playerA, 5);
std::thread threadB = std::thread(AttackAoeRequest, &playerB, 5);
UPDATE
Actually, threadA and threadB above illustrate the situation that causes the deadlock.
AttackAoeRequest is called from multithreaded networking code.
The networking code handles messages from clients and calls AttackAoeRequest. There might be a situation where clientA (playerA) and clientB (playerB) attack each other.
As the code shows, a player might be the attacker and a defender at the same time, and this causes the deadlock.
I have read about std::lock for locking multiple mutexes at once, but in this case the mutexes aren't locked at the same time.
Presumably who is "attacker" and who is "defender" is very fluid, and so you are getting opposite locking order issues.
One defense against deadlocks is to write the code so that it avoids holding multiple locks at the same time. Or, going the other way, make the locking more coarse-grained so that a single lock covers all the objects.
If you have to lock an attacker and a defender, you could have the code always do it in the same order, for instance by address: the object with the lower address in memory is locked first, then the higher one. Acquire both locks this way, and then execute all the code that has to work with both of them.
You could have a scoped lock for this which takes two objects. Make a template class supporting lock_double_guard<std::mutex> dbl_lk(attacker->mtx, defender->mtx); which puts the two mutexes in sorted order and locks them in that order; see the sketch below.
In C and C++, pointers to distinct objects may not be compared other than for exact equality, but being able to do ptrObj1 < ptrObj2 is a common extension (and std::less is guaranteed to give a total order). If that makes you nervous, you could just assign an unsigned integer serial number to each object, incremented whenever a new object is made. The object with the lower serial number is locked first.
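A minimal sketch of such a guard, assuming address ordering via std::less (the name lock_double_guard comes from the suggestion above; it also tolerates being handed the same mutex twice):
#include <functional>
#include <mutex>
#include <utility>

// Sketch only: locks two mutexes in a stable (address-based) order and
// unlocks them in the destructor.
template<typename Mutex>
class lock_double_guard
{
public:
    lock_double_guard(Mutex& a, Mutex& b) : first_(&a), second_(&b)
    {
        if (first_ == second_)
            second_ = nullptr;                  // same mutex: lock it once
        else if (std::less<Mutex*>{}(second_, first_))
            std::swap(first_, second_);         // lower address goes first
        first_->lock();
        if (second_) second_->lock();
    }
    ~lock_double_guard()
    {
        if (second_) second_->unlock();
        first_->unlock();
    }
    lock_double_guard(const lock_double_guard&) = delete;
    lock_double_guard& operator=(const lock_double_guard&) = delete;
private:
    Mutex* first_;
    Mutex* second_;
};

// Usage, with the question's hypothetical Player type:
// lock_double_guard<std::mutex> dbl_lk(attacker->mtx, defender->mtx);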
There is no universal answer to your question. You will have to evaluate what makes most sense in your design and possibly redesign your code. Here are a few avenues to explore:
Avoid locking in the first place. Use atomics and lock-free techniques to work with player structures. This is not always easy or even possible to do, but may provide good performance.
Make locking more coarse grained. For example, don't lock individual players, instead lock all players with a single lock. This, obviously, limits parallelism, but this may not be an issue in your code at a large scale.
Avoid locking multiple players at the same time. For example, complete all you need to do with the attacker in AttackAoeRequest, release lk_attacker, and then proceed to iterate over the defenders. Copy/cache the necessary data from the attacker if you have to, so you don't need to access the attacker during iteration. Your design should tolerate some of the cached data becoming stale if another thread modifies the attacker while you're iterating.
Introduce asynchronicity or retries. For example, try locking the defender opportunistically, using try_lock; see the sketch after this list. If it fails, postpone processing that player and go on with the rest. After you've completed the iteration, release all locks and retry the whole operation on the leftover defenders a bit later. Hopefully, by that time other threads will have completed their work with those defenders and released their locks. You may need to redo some work on the attacker on the retry, or reuse the previously cached data.
Partition player processing across threads. Or, more generally, make sure that a given player is never accessed by multiple threads concurrently. Use message passing between threads to implement interaction between players. The message passing mechanism does not need to lock any players, and in fact locking the players should not be necessary at all. This also introduces some asynchronicity, in the sense that the effects of AttackAoeRequest may be applied to defenders with a delay - when the corresponding thread processes the damage notifications from the attacker.
I'm sure there are other ideas as well.
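As a sketch of the opportunistic try_lock idea above (Player and s_map are the question's types; the retry pass is left out):
#include <mutex>
#include <vector>

// Sketch only: lock each defender opportunistically; collect the ones
// whose mutex was busy and process them in a later pass.
void AttackAoeRequestOnce(Player* attacker, std::vector<Player*>& leftovers)
{
    std::lock_guard<std::mutex> lk_attacker(attacker->mtx);
    for (Player* defender : s_map.GetAllPlayers())
    {
        if (attacker == defender) continue;
        std::unique_lock<std::mutex> lk(defender->mtx, std::try_to_lock);
        if (!lk.owns_lock())
        {
            leftovers.push_back(defender);   // busy: postpone, don't block
            continue;
        }
        // ... apply damage as in the original loop ...
    }
    // Caller retries the leftovers a bit later, after releasing all locks.
}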

relationship semaphores, mutexes, monitors, test-and-set and message passing

I'm learning for an exam but the slides given by the professor aren't very helpful, and any searches I do give only muddy results. From what I could gather:
Semaphore: a counter with down() and up() operations; down() busy-waits while the counter is 0 and proceeds once it is > 0 again
Mutex: a binary semaphore that can only be released by its owning process/thread
Test-and-set: a CPU instruction for retrieving a binary value and setting it to 1 atomically; used to implement mutexes
Monitor: an object that forces synchronized access, i.e. only one process/thread can access it at a time; can be implemented using mutexes
Message passing: processes send messages over some shared memory location to tell each other when the other can continue their work; this is effectively a semaphore that acts not only as a counter, but can also be used to exchange other data, e.g. the produced items in the producer-consumer problem
I have the following questions:
Are these definitions correct?
Is there a way to implement monitors without mutexes?
Is there a way to implement mutexes without test-and-set, specifically without busy-waiting?
Are there uses for test-and-set other than to implement mutexes?
No. You may or may not be able to implement them with busy waits, but that is by no means part of the definition. The definition is: Procure will return when the semaphore is available; Liberate will make the semaphore available. How they do that is up to them. There is no other definition.
Yes. Roughly speaking, you can use any of { message passing, mutexes, semaphores } to implement { message passing, mutexes, semaphores }. By transitivity, anything you can implement with any of these, you can implement with any other of these. But remember, you can kick a ball down a beach. A ball is an object like a whale; thus you can kick a whale down a beach. It might be a bit harder.
Yes. Ask your favourite search-and-creep service about Dekker and his fabulous algorithm.
Yes. You might implement a multi-cpu state machine using test+set and test+clr, for example. Turing machines are fantastically flexible; simple abstractions like mutexes are meant to constrain that flexibility into something comprehensible.
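To make the test-and-set point concrete, here is a minimal spinlock sketch using C++'s std::atomic_flag, whose test_and_set() is essentially the instruction described above (a real mutex would block instead of spinning forever):
#include <atomic>

// Sketch only: a busy-waiting lock built directly on test-and-set.
class TasSpinlock
{
public:
    void lock()
    {
        // test_and_set atomically reads the old value and sets the flag;
        // keep spinning until the old value was clear (we took the lock).
        while (flag_.test_and_set(std::memory_order_acquire))
            ;                                  // busy-wait
    }
    void unlock()
    {
        flag_.clear(std::memory_order_release);
    }
private:
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};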

Thread-safe linked list with fine grained locks

In a program I have a class M:
class M {
    /*
    very big immutable fields
    */
    int status;
};
And I need a linked-list of objects of type M.
Three types of threads are accessing the list:
Producers: Produce and append objects to the end of the list. All of the newly produced objects have the status=NEW. (Operation time = O(1))
Consumers: Consume objects at the beginning of the list. An object can be consumed by a consumer if it has status=CONSUMER_ID. Each of the consumers keeps the first item in the linked-list that it can consume, so consumption is (amortized?) O(1) (see note below).
Destructor: Deletes consumed objects when there is a notification that says the object has been consumed correctly (Operation time = O(1)).
Modifier: Changes the status of the objects based on a state diagram. The final status of any object is the id of a consumer (Operation time = O(1) per object).
The number of consumers is less than 10. The number of Producers may be as big as a couple of hundreds. There is one modifier.
note: The modifier may modify already consumed objects, and thus the stored items of the consumers may move back and forth. I did not find any better solution for this problem (although the comparison between objects is O(1), the operation is no longer amortized O(1)).
The performance is very important. Therefore, I want to use atomic operations or fine-grained locks (one per object) to avoid unnecessary blocking.
My questions are:
Atomic operations are preferred because they are lighter. I guess I must use locks for updating the pointers in the destructor thread only, and I can use atomic operations for handling contention between the other threads. Please let me know if I am missing something or there is a reason that I cannot use atomic operations on the status field.
I think I cannot use an STL list because it does not support fine-grained locks. But would you recommend using Boost::Intrusive lists (instead of writing my own)? Here it is mentioned that intrusive data structures are harder to make thread-safe. Is this true for fine-grained locks?
The producers, consumers and destructor would be called asynchronously based on some events (I am planning to use Boost::asio). But I don't know how to run the modifier to minimize its contention with other threads. The options are:
Asynchronously from producers.
Asynchronously from consumers.
Using its own timer.
Any such call would operate on the list only if some conditions hold. My own intuition is that there is no difference between how I call the modifier. Am I missing something?
My system is Linux/GCC and I am using boost 1.47 in case it matters.
Similar question: Thread-safe deletion of a linked list node, using the fine-grained approach
The performance is very important. Therefore, I want to use atomic operations or fine-grained locks (one per object) to avoid unnecessary blocking.
This will make performance worse by increasing the probability that threads that contend (access the same data) will run at the same time on different cores. If the locks are too fine, threads may contend (ping-pong data between their caches) and run in slow lock step without ever blocking on a lock, causing terrible performance.
You want to use coarse enough locks that threads that contend over the same data block each other as soon as possible. That will force the scheduler to schedule non-contending threads, eliminating the cache ping-ponging that destroys performance.
You have a common misconception that blocking is bad. In fact, contention is bad, because it slows cores down to bus speeds. Blocking ends contention. Blocking is good because it de-schedules contending threads, allowing non-contending threads (that can run concurrently at full speed) to be scheduled.
If you're already planning to use Boost Asio, then good news! You can stop writing your custom asynchronous producer-consumer queue right now.
The Boost Asio io_service class is an asynchronous queue, so you can easily use it to pass objects from producers to consumers. Use the io_service::post() method to enqueue a bound function object for asynchronous callback by another thread.
boost::asio::io_service io_service_;

void consume(M* m)
{
    delete m;
}

void produce()
{
    M* m = new M;
    io_service_.post(boost::bind(&consume, m));
}
Have your producer threads call produce(), then have your consumer threads call io_service_.run(), and consume() will then be called back on your consumer threads. Instant producer-consumer!
Plus, you can enqueue all kinds of other heterogeneous events into the io_service_ to be handled by your consumer threads if you like, such as network reads and waiting for signals. Boost Asio is more than just a network library-- it's also an easy way to express a proactor, reactor, producer-consumer, thread-pool, or any other kind of threading architecture.
EDIT
Oh, and one more tip. Don't make separate pools of dedicated producer threads and dedicated consumer threads. Just make one thread for each core available on your machine (4 core machine => 4 threads). Then have all those threads call io_service_.run(). Use the io_service_ to asynchronously read stuff to produce, from files or the network or whatever, then use the io_service_ again to asynchronously consume whatever was produced.
That's the most performant threading architecture. One thread per core.
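A sketch of that one-thread-per-core setup, using the io_service API discussed above (the work guard keeps run() from returning while the queue is momentarily empty):
#include <boost/asio.hpp>
#include <boost/thread.hpp>

boost::asio::io_service io_service_;

int main()
{
    boost::asio::io_service::work work(io_service_);   // keep run() alive

    unsigned n = boost::thread::hardware_concurrency();
    boost::thread_group pool;
    for (unsigned i = 0; i < n; ++i)
        pool.create_thread([] { io_service_.run(); }); // one thread per core

    // ... post produce/consume handlers to io_service_ from anywhere ...

    pool.join_all();
}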
As @David Schwartz rightly noted, blocking is not always slow, and spinning (in user-space multithreaded applications) can be quite dangerous.
Moreover, the Linux pthread library has a "smart" implementation of pthread_mutex. It is designed to be "lightweight": when a thread tries to lock an already acquired mutex, it spins for some time, making several attempts to get the lock before it blocks. The number of attempts is small enough not to harm your system or break real-time requirements (if any). An additional Linux-specific feature is the so-called fast userspace mutex (futex), which reduces the number of syscalls: the main idea is that the mutex_lock syscall is made only when a thread really needs to block on a mutex (locking an uncontended mutex does not make a syscall).
Actually, in most cases you don't need to reinvent the wheel or introduce very specific locking techniques. If you have to, then either something is wrong with the design or you're dealing with a highly concurrent environment (at first sight, 10 consumers don't seem that concurrent, and all this looks like over-engineering).
If I were you, I'd prefer to use a condition variable plus a mutex protecting the list.
Another thing I'd do is go over the design again. Why use one global list when a consumer needs to search to find out whether the list contains an item with its ID (and if so, remove/dequeue it)? Maybe it's better to keep a separate list for each consumer? In that case you can probably get rid of the status field.
Is read access more frequent than write access? If so, it would be better to use R/W locks or RCU.
If I weren't satisfied with the pthread primitives and the futex machinery (and I would first have proved by tests that the locking primitives, not the number of consumers or the chosen algorithm, are the bottleneck), then I'd think about a more complicated algorithm with reference counting, a separate GC thread, and the restriction that all updates be atomic.
I would advise a slightly different approach to the problem:
Producers: Enqueue objects at the end of a shared queue (SQ). Wake up the Modifier via a semaphore.
producer()
{
    while (true)
    {
        o = get_object_from_somewhere ()
        atomic_enqueue (SQ.queue, o)
        signal (SQ.sem)
    }
}
Consumers: Dequeue objects from the front of a per-consumer queue (CQ[i]).
consumer()
{
    while (true)
    {
        wait (CQ[self].sem)
        o = atomic_dequeue (CQ[self].queue)
        process (o)
        destroy (o)
    }
}
Destructor: There is no destructor; after a consumer is done with an object, the consumer destroys it.
Modifier: The modifier dequeues objects from the shared queue, processes them, and enqueues them to the private queue of the appropriate consumer.
modifier()
{
    while (true)
    {
        wait (SQ.sem)
        o = atomic_dequeue (SQ.queue)
        FSM (o)
        atomic_enqueue (CQ [o.status].queue, o)
        signal (CQ [o.status].sem)
    }
}
A note on the various atomic_xxx functions in the pseudocode: this does not necessarily mean using atomic instructions like CAS, CAS2, LL/SC, etc. It can be done using atomics, spinlocks or plain mutexes. I would advise implementing it in the most straightforward way (e.g. mutexes) and optimizing it later if it proves to be a performance issue.

How to detect circular calls?

I've been looking for causes for deadlocks and strategies/tools to avoid and detect them.
Another potential cause for deadlocks is to have blocking functions calling other blocking functions in a circular way, so that eventually a call never returns.
Sometimes this is hard to discover, specially in very large projects.
So, are there any tools/libraries/techniques that can automate the detection of circular calls in a program?
EDIT:
I code mostly in C and C++ so, if possible, give any information about the topic that is applicable to those languages.
Nevertheless, it seems this topic is scarcely covered on SO, so answers for other languages are OK too, although maybe those deserve a topic of their own if someone finds that relevant.
Thanks.
Circular (or recursive) calls that try to acquire the same non-reentrant lock are among the easiest blocking scenarios to debug: the locking is deterministic and can be easily checked. When the application locks up, fire up the debugger and look at the stack traces to understand which locks are held and why.
As to general solutions for the problem of locking... you can look into libraries that provide mutex ordering and detect when you are trying to lock a mutex out of order. This type of solution may be complex to implement correctly, but once in place it ensures that you cannot enter a deadlock condition, as it forces all processes to obtain the locks in the same order (i.e. if process A holds lock La and tries to acquire lock Lb, for which the ordering is correct, then it can either succeed or block; but whichever process is holding Lb cannot try to lock La, as the ordering constraint would not be met).
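A minimal sketch of such an ordering check (levels and names are made up; real tools such as the Linux kernel's lockdep are far more elaborate): each mutex is given a level, and a thread-local high-water mark catches out-of-order acquisition at the moment it happens instead of deadlocking later. This simple version assumes locks are released in LIFO order.
#include <cassert>
#include <mutex>

// Sketch only: asserts if a thread locks a mutex whose level is not
// strictly greater than the level of the last mutex it locked.
class OrderedMutex
{
public:
    explicit OrderedMutex(int level) : level_(level) {}
    void lock()
    {
        assert(level_ > current_level && "lock ordering violation");
        mtx_.lock();
        prev_level_ = current_level;   // safe: we own the mutex now
        current_level = level_;
    }
    void unlock()
    {
        current_level = prev_level_;   // assumes LIFO lock/unlock order
        mtx_.unlock();
    }
private:
    static thread_local int current_level;
    std::mutex mtx_;
    int level_;
    int prev_level_ = 0;
};
thread_local int OrderedMutex::current_level = 0;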
If you are on Linux, there are two Valgrind tools for detecting deadlocks and race conditions: Helgrind and DRD. They complement each other, and it is worth checking for threading errors with both of them.
On Linux you can use Valgrind to detect deadlocks: use --tool=helgrind.
The best way to detect deadlocks (IMO) is to make a test program that calls all the functions in random order from about 30 different threads, tens of thousands of times.
If you get a deadlock you can use VS2010 "Parallel Stacks" window. Debug->Windows->Parallel Stacks
This window will show you all the stacks, so you can find the methods that are deadlocking.
A simple strategy I use to write thread-safe objects:
A thread-safe object should be safe when its public methods are called, so that you don't get deadlocks when it is used.
So, the idea is to simply lock all the public methods that access the object's data.
Besides that, you need to ensure that the class's own code never calls a public method. If you need to use one of the public methods internally, make that method private and wrap the private method with a public method that locks and then calls it.
If you want better lock granularity, you could create an object for each part that has its own lock, and lock it as I suggested. Then use encapsulation to combine those classes into the one class.
Example:
class Blah {
    MyData data;
    Lock lock;
public:
    DataItem GetData(int index)
    {
        ReadLock read(lock);
        return LocalGetData(index);
    }
    DataItem FindData(string key)
    {
        ReadLock read(lock);
        DataItem item;
        // find the item; can use LocalGetData() to get it without deadlocking
        return item;
    }
    void PutData(DataItem item)
    {
        WriteLock write(lock);   // writers need the exclusive lock
        // put item in database
    }
private:
    DataItem LocalGetData(int index)
    {
        return data[index];
    }
};
You could find a tool that builds a call graph, and check the graph for cycles.
Otherwise, there are a number of strategies for detecting deadlocks or other circularities, but they all depend on having some sort of supporting infrastructure in place.
There are deadlock avoidance strategies, having to do with assigning lock priorities and ordering the locks according to priority. These require code changes and enforcement of the conventions, though.

How to synchronize access to many objects

I have a thread pool with some threads (e.g. as many as number of cores) that work on many objects, say thousands of objects. Normally I would give each object a mutex to protect access to its internals, lock it when I'm doing work, then release it. When two threads would try to access the same object, one of the threads has to wait.
Now I want to save some resources and be scalable, as there may be thousands of objects but still only a handful of threads. I'm thinking about a class design where each thread has some sort of mutex or lock object and assigns the lock to an object when that object is to be accessed. This would save resources, as I would only have as many lock objects as there are threads.
Now comes the programming part, where I want to transfer this design into code, but don't know quite where to start. I'm programming in C++ and want to use Boost classes where possible, but self written classes that handle these special requirements are ok. How would I implement this?
My first idea was to have a boost::mutex object per thread, and for each object to have a boost::shared_ptr that is initially unset (or NULL). Now, when I want to access the object, I lock it by creating a scoped_lock object and assigning it to the shared_ptr. When the shared_ptr is already set, I wait on the lock it refers to. This idea sounds like a heap full of race conditions, so I have more or less abandoned it. Is there another way to accomplish this design? A completely different way?
Edit:
The above description is a bit abstract, so let me add a specific example. Imagine a virtual world with many objects (think > 100,000). Users could move through the world and modify objects (e.g. shoot arrows at monsters). When using only one thread, I'm fine with a work queue where modifications to objects are queued. I want a more scalable design, though. If 128-core processors are available, I want to use all 128 cores, so use that number of threads, each with a work queue. One solution would be to use spatial separation, e.g. one lock per area. This would reduce the number of locks used, but I'm more interested in whether there is a design which uses as few locks as possible.
You could use a mutex pool instead of allocating one mutex per resource or one mutex per thread. As mutexes are requested, first check the object in question. If it already has a mutex tagged to it, block on that mutex. If not, assign a mutex to that object and signal it, taking the mutex out of the pool. Once the mutex is unsignaled, clear the slot and return the mutex to the pool.
Without knowing it, what you were looking for is Software Transactional Memory (STM).
STM systems manage the needed locks internally to ensure the ACI properties (Atomic, Consistent, Isolated). This is an area of active research. You can find a lot of STM libraries; in particular I'm working on Boost.STM (the library is not yet ready for beta testing, and the documentation is not really up to date, but you can play with it). There are also some compilers that are introducing TM (such as the Intel, IBM, and Sun compilers). You can get the draft specification from here
The idea is to identify the critical regions as follows
transaction {
    // transactional block
}
and let the STM system manage the needed locks while ensuring the ACI properties.
The Boost.STM approach let you write things like
int inc_and_ret(stm::object<int>& i) {
    BOOST_STM_TRANSACTION {
        return ++i;
    } BOOST_STM_END_TRANSACTION
}
You can see the pair BOOST_STM_TRANSACTION/BOOST_STM_END_TRANSACTION as a way to define a scoped implicit lock.
The cost of this pseudo-transparency is 4 bytes of metadata for each stm::object.
Even if this is far from your initial design, I really think it is what was behind your goal and initial design.
I doubt there's any clean way to accomplish your design. The problem is that assigning the mutex to the object looks like it will modify the contents of the object -- so you need a mutex to protect the object from several threads trying to assign mutexes to it at once, which means that to keep your first mutex assignment safe you'd need another mutex to protect the first one.
Personally, I think what you're trying to cure probably isn't a problem in the first place. Before I spent much time on trying to fix it, I'd do a bit of testing to see what (if anything) you lose by simply including a Mutex in each object and being done with it. I doubt you'll need to go any further than that.
If you need to do more than that I'd think of having a thread-safe pool of objects, and anytime a thread wants to operate on an object, it has to obtain ownership from that pool. The call to obtain ownership would release any object currently owned by the requesting thread (to avoid deadlocks), and then give it ownership of the requested object (blocking if the object is currently owned by another thread). The object pool manager would probably operate in a thread by itself, automatically serializing all access to the pool management, so the pool management code could avoid having to lock access to the variables telling it who currently owns what object and such.
Personally, here's what I would do. You have a number of objects, all probably have a key of some sort, say names. So take the following list of people's names:
Bill Clinton
Bill Cosby
John Doe
Abraham Lincoln
Jon Stewart
So now you would create a number of lists: one per letter of the alphabet, say. Bill and Bill would go in one list; John, Jon, and Abraham all by themselves.
Each list would be assigned to a specific thread - access would have to go through that thread (you would have to marshall operations to an object onto that thread - a great use of functors). Then you only have two places to lock:
thread() {
    loop {
        scoped_lock lock(list.mutex);
        list.objectAccess();
    }
}

list_add() {
    scoped_lock lock(list.mutex);
    list.add(..);
}
Keep the locks to a minimum, and if you're still doing a lot of locking, you can optimise the number of iterations you perform on the objects in your lists from 1 to 5, to minimize the amount of time spent acquiring locks. If your data set grows or is keyed by number, you can segregate the data as much as needed to keep the locking to a minimum.
It sounds to me like you need a work queue. If the lock on the work queue became a bottleneck, you could switch it around so that each thread had its own work queue, and some sort of scheduler would give incoming objects to the thread with the least amount of work to do. The next level up from that is work stealing, where threads that have run out of work look at the work queues of other threads. (See Intel's Threading Building Blocks library.)
If I follow you correctly ....
struct table_entry {
    void * pObject;   // substitute with your object
    sem_t sem;        // init to empty
    int nPenders;     // init to zero
};

struct table_entry * table;

object_lock (void * pObject) {
    goto label;   // yes, it is an evil goto
    do {
        pEntry->nPenders++;
        unlock (mutex);
        sem_wait (pEntry->sem);
label:
        lock (mutex);
        found = search (table, pObject, &pEntry);
    } while (found);
    add_object_to_table (table, pObject);
    unlock (mutex);
}

object_unlock (void * pObject) {
    lock (mutex);
    pEntry = remove (table, pObject);   // assuming it is in the table
    if (pEntry->nPenders != 0) {
        pEntry->nPenders--;
        sem_post (pEntry->sem);
    }
    unlock (mutex);
}
The above should work, but it does have some potential drawbacks such as ...
A possible bottleneck in the search.
Thread starvation. There is no guarantee that any given thread will get out of the do-while loop in object_lock().
However, depending on your setup, these potential drawbacks might not matter.
Hope this helps.
We have an interest in a similar model here. A solution we have considered is to have a global (or shared) lock, used in the following manner:
A flag that can be atomically set on the object. If you set the flag, you then own the object.
You perform your action, then reset the flag and signal (broadcast) a condition variable.
If the acquire failed, you wait on the condition variable. When it is broadcast, you check the flag again to see if the object is available.
It does appear, though, that we need to lock the mutex each time we change the value of this flag. So there is a lot of locking and unlocking, but you do not need to hold the lock for any long period.
With a "shared" lock you have one lock applying to multiple items. You would use some kind of "hash" function to determine which mutex/condition variable applies to a particular entry; see the sketch below.
Answering the following question, asked under @JohnDibling's post by @LeonardoBernardini:
"Did you implement this solution? I have a similar problem and I would like to know how you solved releasing the mutex back to the pool. I mean, how do you know, when you release the mutex, that it can safely be put back in the queue if you do not know whether another thread is holding it?"
I'm currently trying to solve the same kind of problem. My approach is to create your own mutex struct (call it counterMutex) with a counter field and the real resource mutex field. So every time you try to lock the counterMutex, first you increment the counter, then lock the underlying mutex. When you're done with it, you decrement the counter and unlock the mutex; after that, check the counter to see if it's zero, which means no other thread is trying to acquire the lock. If so, put the counterMutex back in the pool. Is there a race condition when manipulating the counter, you may ask? The answer is NO. Remember you have a global mutex to ensure that only one thread can access the counterMutex at a time.