If Thread1 tries to lock a resource that is already locked by Thread2, does it go to sleep for a finite time?
And when Thread2 unlocks the mutex, how does Thread1 come to know that the resource is available? Does the operating system wake it up, or does it check the resource periodically?
Your second assumption is correct. When a mutex is already locked by one thread, all the other threads that try to lock it are put on hold and go to sleep. Once the mutex is unlocked the OS wakes them all up, and whichever thread manages to lock it first gets the mutex. This is not on a FIFO basis; in fact there is no rule about which thread gets preference to lock the mutex once it wakes up. You can consider my example below, where I use condition variables to control the threads:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cond2 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cond3 = PTHREAD_COND_INITIALIZER;
pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock3 = PTHREAD_MUTEX_INITIALIZER;
int TRUE = 1;
void print(char *p)
{
printf("%s",p);
}
void * threadMethod1(void *arg)
{
printf("In thread1\n");
do{
pthread_mutex_lock(&lock1);
pthread_cond_wait(&cond1, &lock1);
print("I am thread 1st\n");
pthread_cond_signal(&cond3);/* Now allow 3rd thread to process */
pthread_mutex_unlock(&lock1);
}while(TRUE);
pthread_exit(NULL);
}
void * threadMethod2(void *arg)
{
printf("In thread2\n");
do
{
pthread_mutex_lock(&lock2);
pthread_cond_wait(&cond2, &lock2);
print("I am thread 2nd\n");
pthread_cond_signal(&cond1);
pthread_mutex_unlock(&lock2);
}while(TRUE);
pthread_exit(NULL);
}
void * threadMethod3(void *arg)
{
printf("In thread3\n");
do
{
pthread_mutex_lock(&lock3);
pthread_cond_wait(&cond3, &lock3);
print("I am thread 3rd\n");
pthread_cond_signal(&cond2);
pthread_mutex_unlock(&lock3);
}while(TRUE);
pthread_exit(NULL);
}
int main(void)
{
pthread_t tid1, tid2, tid3;
int i = 0;
printf("Before creating the threads\n");
if( pthread_create(&tid1, NULL, threadMethod1, NULL) != 0 )
printf("Failed to create thread1\n");
if( pthread_create(&tid2, NULL, threadMethod2, NULL) != 0 )
printf("Failed to create thread2\n");
if( pthread_create(&tid3, NULL, threadMethod3, NULL) != 0 )
printf("Failed to create thread3\n");
sleep(1);/* give the threads time to reach pthread_cond_wait, otherwise this first signal can be lost */
pthread_cond_signal(&cond1);/* Now allow the first thread to proceed */
sleep(1);
TRUE = 0;/* Stop all the threads */
sleep(3);
/* this is how we join thread before exit from a system */
/*
pthread_join(tid1,NULL);
pthread_join(tid2,NULL);
pthread_join(tid3,NULL);*/
exit(0);
}
Here I am using 3 mutexes and 3 condition variables. With this approach you can schedule/control or prioritize any number of threads in C. The first thread locks mutex lock1 and waits on cond1; likewise the second thread locks mutex lock2 and waits on cond2, and the third thread locks mutex lock3 and waits on cond3. That is the situation right after the threads are created: each one is waiting on its condition variable for a signal to continue.
In the main thread (i.e. the main function; every program has one main thread, which in C/C++ is created automatically once the kernel passes control to main) we call pthread_cond_signal(&cond1). Once that call is made, thread1, which was waiting on cond1, is released and starts executing. When it finishes its task it calls pthread_cond_signal(&cond3), so the thread waiting on cond3, i.e. thread3, is released and starts executing; it in turn calls pthread_cond_signal(&cond2), which releases the thread waiting on cond2, in this case thread2.
Fundamental information about the mutex (MUTual EXclusion lock)
A mutex is a special lock that only one thread may lock at a time. If a thread locks a mutex and then a second thread also tries to lock the same mutex, the second thread is blocked, or put on hold. Only when the first thread unlocks the mutex is the second thread unblocked—allowed to resume execution.
Linux guarantees that race conditions do not occur among threads attempting to lock a mutex; only one thread will ever get the lock, and all other threads will be blocked.
A thread may attempt to lock a mutex by calling pthread_mutex_lock on it. If the mutex was unlocked, it becomes locked and the function returns immediately.
What happens when trying to lock a mutex that is already locked by another thread?
If the mutex is locked by another thread, pthread_mutex_lock blocks execution and returns only when the mutex is eventually unlocked by the other thread.
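As a small illustration of that behaviour (my own sketch, not part of the original answer): the worker below goes to sleep inside pthread_mutex_lock and is woken by the kernel when main unlocks; there is no periodic polling.
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
void *worker(void *arg)
{
    (void)arg;
    printf("worker: calling pthread_mutex_lock\n");
    pthread_mutex_lock(&m);      /* blocks (sleeps) here until main unlocks */
    printf("worker: woke up and got the lock\n");
    pthread_mutex_unlock(&m);
    return NULL;
}
int main(void)
{
    pthread_t t;
    pthread_mutex_lock(&m);       /* main takes the mutex first */
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);                     /* the worker is now asleep inside pthread_mutex_lock */
    pthread_mutex_unlock(&m);     /* the kernel wakes the worker; no polling involved */
    pthread_join(t, NULL);
    return 0;
}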
I'm trying to understand condition_variables.
I guess my code should work like:
1. main locks mx
2. main calls wait() for a notify <= lock released here
3. a thread locks mx
4. the thread sends a notify
5. the thread unlocks mx
6. main's wait() finishes and relocks mx
So why can the threads lock mx faster than wait() can relock it after a notify?
Example
#include <iostream>
#include <future>
#include <chrono>
#include <mutex>
#include <condition_variable>
#include <vector>
using namespace std::chrono_literals;
std::shared_future<void> ready;
std::mutex finish_mx;
std::condition_variable finish_cv;
int execute(int val, const std::shared_future<void> &ready){
ready.wait();
std::lock_guard<std::mutex> lock(finish_mx);
std::cout<<"Locked: "<<val<<std::endl;
finish_cv.notify_one();
return val;
}
int main()
{
std::promise<void> promise;
auto shared = promise.get_future().share();
std::vector<std::future<int>> pool;
for (int i=0; i<10; ++i){
auto fut = std::async(std::launch::async, execute, i, std::cref(shared));
pool.push_back(std::move(fut));
}
std::this_thread::sleep_for(100ms);
std::unique_lock<std::mutex> finish_lock(finish_mx);
promise.set_value();
for (int i=0; pool.size() > 0; ++i)
{
finish_cv.wait(finish_lock);
std::cout<<"Notifies: "<<i<<std::endl;
for (auto it = pool.begin(); it != pool.end(); ++it) {
auto state = it->wait_for(0ms);
if (state == std::future_status::ready) {
pool.erase(it);
break;
}
}
}
}
example output:
Locked: 6
Locked: 7
Locked: 8
Locked: 9
Locked: 5
Locked: 4
Locked: 3
Locked: 2
Locked: 1
Notifies: 0
Locked: 0
Notifies: 1
Edit
for (int i=0; pool.size() > 0; ++i)
{
finish_cv.wait(finish_lock);
std::cout<<"Notifies: "<<i<<std::endl;
auto it = pool.begin();
while (it != pool.end()) {
auto state = it->wait_for(0ms);
if (state == std::future_status::ready) {
/* process result */
it = pool.erase(it);
} else {
++it;
}
}
}
This depends on how your OS schedules threads that are waiting to acquire a mutex lock. All the execute threads are already waiting to acquire the mutex lock before the first notify_one, so if there's a simple FIFO queue of threads waiting to lock the mutex then they are all ahead of the main thread in the queue. As each thread unlocks the mutex, the next one in the queue locks it.
This has nothing to do with mutexes being "faster" than condition variables; the condition variable still has to lock the same mutex in order to return from the wait.
As soon as the future becomes ready all the execute threads return from the wait and all try to lock the mutex, joining the queue of waiters. When the condition variable starts to wait, the mutex is unlocked, and one of the other threads (the one at the front of the queue) gets the lock. It calls notify_one, which makes the thread waiting on the condition variable try to relock the mutex, joining the back of the queue. The notifying thread unlocks the mutex, the next thread in the queue gets the lock and calls notify_one (which does nothing, because the condition variable has already been notified and is waiting to lock the mutex). Then the next thread in the queue gets the mutex, and so on.
It seems that one of the execute threads didn't run quickly enough to get in the queue before the first notify_one call, so it ended up in the queue behind the condition variable.
I have three threads in my application; the first thread needs to wait for data to be ready from the two other threads. The two other threads are preparing the data concurrently.
In order to do that I am using a condition variable in C++ as follows:
boost::mutex mut;
boost::condition_variable cond;
Thread1:
bool check_data_received()
{
return (data1_received && data2_received);
}
// Wait until socket data has arrived
boost::unique_lock<boost::mutex> lock(mut);
if (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
boost::bind(&check_data_received)))
{
}
Thread2:
{
boost::lock_guard<boost::mutex> lock(mut);
data1_received = true;
}
cond.notify_one();
Thread3:
{
boost::lock_guard<boost::mutex> lock(mut);
data2_received = true;
}
cond.notify_one();
So my question: is it correct to do that, or is there a more efficient way? I am looking for the most optimized way to do the waiting.
It looks like you want a semaphore here, so you can wait for two "resources" to be "taken".
For now, just replace the mutual exclusion with an atomic. You can still use a cv to signal the waiter:
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <boost/bind.hpp>
#include <boost/chrono.hpp>
#include <cstdlib>
#include <iostream>
boost::mutex mut;
boost::condition_variable cond;
boost::atomic_bool data1_received(false);
boost::atomic_bool data2_received(false);
bool check_data_received()
{
return (data1_received && data2_received);
}
void thread1()
{
// Wait until socket data has arrived
boost::unique_lock<boost::mutex> lock(mut);
while (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
boost::bind(&check_data_received)))
{
std::cout << "." << std::flush;
}
}
void thread2()
{
boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
data1_received = true;
cond.notify_one();
}
void thread3()
{
boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
data2_received = true;
cond.notify_one();
}
int main()
{
boost::thread_group g;
g.create_thread(thread1);
g.create_thread(thread2);
g.create_thread(thread3);
g.join_all();
}
Note:
warning - it's essential that you know only the waiter is waiting on the cv, otherwise you need notify_all() instead of notify_one().
It is not important that the waiter is already waiting before the workers signal their completion, because the predicated timed_wait checks the predicate before blocking.
Because this sample uses atomics and predicated wait, it's not actually critical to signal the cv under the mutex. However, thread checkers will (rightly) complain about this (I think) because it's impossible for them to check proper synchronization unless you add the locking.
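For example, this is how thread2 could take the lock around the flag update purely to keep such checkers happy (a variation on the sample above; it is not strictly required here because of the atomics and the predicated wait):
void thread2()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    {
        boost::lock_guard<boost::mutex> lk(mut); // taken only so race/lock checkers can see the synchronization
        data1_received = true;
    }
    cond.notify_one();
}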
We are working on a project to build a general thread class that lets us process a set of interconnected data.
The basic idea is to evaluate in different threads only the datasets that are not connected to each other and can therefore be processed simultaneously.
We developed a ThreadClass based on boost::thread and an OF_bmutex class based on boost::mutex in order to log locking operations.
The scheme of the code is in the linked pdf (http://cdm.unimore.it/dep/test.pd) while the skeletons of the main classes are below...
// encapsulate boost::mutex to log...
class OF_bmutex{
public:
std::string mutex_type;
int m_id;
boost::mutex m;
void lock(){
std::cout << "Mutex " << mutex_type << m_id << " locking from " << boost::this_thread::get_id() << std::endl;
m.lock();
std::cout << "Mutex " << mutex_type << m_id << " locked from " << boost::this_thread::get_id() << std::endl;
}
void unlock(){
std::cout << "Mutex " << mutex_type << m_id << " unlocking from " << boost::this_thread::get_id() << std::endl;
m.unlock();
std::cout << "Mutex " << mutex_type << m_id << " unlocked from " << boost::this_thread::get_id() << std::endl;
}
bool try_lock(){
std::cout << "Mutex " << mutex_type << m_id << " try locking from " << boost::this_thread::get_id() << std::endl;
bool ret = m.try_lock();
if( ret ){
std::cout << "Mutex " << mutex_type << m_id << " try locked from " << boost::this_thread::get_id() << std::endl;
}
return(ret);
}
};
// My thread class
class OF_ThreadClass {
private:
//! running variable
bool running;
//! The thread executing this process...
boost::thread *m_thread;
//! The data to process...
LinkedDataSet *my_data;
//! The id of this thread
int thread_id;
//! Process the data...
virtual int processData();
public:
//! The boost thread id
boost::thread::id boost_id;
//! Thread function
void operator () ();
//! Default constructor
OF_ThreadClass();
//! Destructor
~OF_ThreadClass();
//! Connect this thread with the process data to evaluate
void setProcessData( DataToProcess *pd );
//! return the thread id
int getId() const { return this->thread_id; }
//! post process the thread...
void post_process();
};
// The core function with the execution point of the Thread class...
void OF_ThreadClass::operator () (){
while( this->running ){
OF_AVAILABLE_THREADS_MUTEX[ this->thread_id ]->unlock();
OF_RUNNING_THREADS_MUTEX[ this->thread_id ]->lock();
// PUT HERE OUR CODE...
if( running == true ){
if( my_data != NULL ){
this->processData();
}
this->my_data->done = true;
}
std::cout << ">>>>>> Thread " << thread_id << " notified that evaluation terminated\n";
OF_RUNNING_THREADS_MUTEX[ this->thread_id ]->unlock();
OF_AVAILABLE_THREADS_MUTEX[ this->thread_id ]->lock();
}
OF_AVAILABLE_THREADS_MUTEX[ this->thread_id ]->unlock();
}
// A class to perform multithread calculation...
class OF_SmartThreads{
private:
//! The number of threads to activate
int _n_threads;
//! The polling time
int _polling_time;
//! The thread pool...
std::vector< OF_ThreadClass *> threadPool;
//! The stack of the available threads
std::set< OF_ThreadClass *> *OF_AVAILABLE_THREADS;
//! The set of the running threads
std::set< OF_ThreadClass *> OF_RUNNING_THREADS;
//! The set of the locked datasets
std::set< LinkedDataSet* > locked_data;
//! The set of the available datasets
std::set< LinkedDataSet* > unlocked_data;
//! The set of the datasets under processing
std::set< LinkedDataSet* > processing_data;
//! The size of the progress bar
int progBarDim;
public:
//! Constructor
OF_SmartThreads();
//! Destructor
~OF_SmartThreads();
//! Initialize the SmartThreads
int init_function( std::list< LinkedDataSet * > *dList, int n_max_threads);
//! Initialize the SmartThreads
int init_function( std::set< LinkedDataSet * > *dSet, int n_max_threads);
//! Process all the concurrent threads..
int process_data();
//! Process all the concurrent threads, polling version..
int process_data_polling( int polling_time );
//! stop the process..
int post_process();
};
// Initialization function...
int OF_SmartThreads::init_function( ... ){
// in the main thread...
// Fill the pool of thread mutex...
for(int i = 0; i< _n_threads; i++ ){
_tm = new OF_BMUTEX;
_tm->mutex_type.assign( "A" );
_tm->m_id = i;
OF_AVAILABLE_THREADS_MUTEX.push_back( _tm );
_tm = new OF_BMUTEX;
_tm->mutex_type.assign( "R" );
_tm->m_id = i;
OF_RUNNING_THREADS_MUTEX.push_back( _tm );
}
// Create the threads...
threadPool.resize( _n_threads );
for(int i = 0; i< _n_threads; i++ ){
// ...pre-emptively lock the resources...
OF_RUNNING_THREADS_MUTEX[i]->lock();
OF_AVAILABLE_THREADS_MUTEX[i]->unlock();
// ..create the new thread...
pc = new OF_ThreadClass;
// insert the new thread in the list...
threadPool.at( pc->getId() ) = pc;
// set it as available...
OF_AVAILABLE_THREADS->insert( pc );
}
}
// Execution function...
void process_data_polling( int _polling_time ){
while ( running ){
if ( something_changed ){
//Print the status on the screen...
...
}
something_changed = false;
// Poll the status of the processing data periodically
boost::this_thread::sleep(boost::posix_time::millisec( _polling_time ));
// Are there some data ready to process?
if( OF_UNLOCKED_DATASETS->size() > 0 ){
// Take the first
pd = *OF_UNLOCKED_DATASETS->begin();
// are there some threads available?
if( OF_AVAILABLE_THREADS->size() != 0 ){
//...lock and move the datasets linked to pd...
ret = lock_data( pd, LOCK );
std::cout << "\tNumber of available threads: " << OF_AVAILABLE_THREADS->size() << std::endl;
// Take the available thread...
pc = *OF_AVAILABLE_THREADS->begin();
// ...link it the dataset to process...
pc->setProcess( pd );
OF_AVAILABLE_THREADS_MUTEX[ pc->getId() ]->lock();
OF_RUNNING_THREADS_MUTEX[ pc->getId() ]->unlock();
something_changed = true;
} // available threads
} // unlock datasets
// Find, unlock and remove finished datasets...
pIter2 = OF_RUNNING_THREADS->begin();
pEnd2 = OF_RUNNING_THREADS->end();
while( pIter2 != pEnd2 ){
pc = *pIter2++;
pd = pc->getDataSet();
if( pd->isDone() ){
//...unlock and move the datasets linked to the current dataset...
ret_move = lock_data( pd, RELEASE_LOCK );
//...remove the data from the active set
ret_remove = OF_ACTIVE_DATASETS->erase( pd );
// make the threads available
moveThreads( pc, _RUNNING_, _AVAILABLE_ );
something_changed = true;
}
}
pIter2 = OF_AVAILABLE_THREADS->begin();
pEnd2 = OF_AVAILABLE_THREADS->end();
while( pIter2 != pEnd2 ){
pc = *pIter2++;
bool obtained = OF_RUNNING_THREADS_MUTEX[ pc->getId() ]->try_lock();
if( obtained ){
std::cout << "\t\t\tOF_SMART_THREADS: Thread " << pc->getId() << " obtained running mutex..." << std::endl;
}
else{
std::cout << "\t\t\tOF_SMART_THREADS: Thread " << pc->getId() << " failed to obtain running mutex..." << std::endl;
}
OF_AVAILABLE_THREADS_MUTEX[ pc->getId() ]->unlock();
std::cout << "\t\t\tOF_SMART_THREADS: Thread " << pc->getId() << " released available mutex..." << std::endl;
}
if( ( OF_LOCKED_DATASETS->size() + OF_UNLOCKED_DATASETS->size() + OF_ACTIVE_DATASETS->size() ) > 0 ){
running = true;
}
else{
running = false;
}
} // end running...
}
// The main function...
int main( int argc, char* argv[]) {
init_function( &data, INT_MAX );
process_data_polling( 100 );
lc.post_process();
return 0;
}
The whole system functions perfectly when compiled on Linux and OSX with boost 1.53. The number of threads used is 2. An extract of the log is presented below.
Note that the mutex logs are emitted from the proper threads...
---> LOG FROM OSX ...
---------------------------------
Number of data: 2
Data: 0, links:
Data: 1, links:
---> OF_SmartThreads::init_function --
------------------------------------
--> 8 processors/cores detected.
--> n_max_threads = 2
------------------------------------
Mutex R0 locking from thread master
Mutex R0 locked from thread master
Mutex R0 try locking from thread master
OF_SMART_THREADS: Thread 0 failed to obtain running mutex...
Mutex A0 unlocking from thread master
Mutex A0 unlocked from thread master
New thread 0 created
Mutex R1 locking from thread master
Mutex R1 locked from thread master
Mutex R1 try locking from thread master
OF_SMART_THREADS: Thread 1 failed to obtain running mutex...
Mutex A1 unlocking from thread master
Mutex A1 unlocked from thread master
New thread 1 created
---------------------------------
Available threads: 2
Unlocked datasets: 2
---> OF_SmartThreads::process_data_function
Mutex A1 unlocking from thread1
Mutex A1 unlocked from thread1
>>>>>> Thread 1 released available mutex...
Mutex R1 locking from thread1
Mutex A0 unlocking from thread0
Mutex A0 unlocked from thread0
>>>>>> Thread 0 released available mutex...
Mutex R0 locking from thread0
UNLOCKED DATASETS : 0 1
LOCKED DATASETS :
ACTIVE DATASETS :
RUNNING THREADS :
OF_SMART_THREADS: THREADS AVAILABLE
Number of available threads: 2
OF_SMART_THREADS: take the thread 0
OF_SMART_THREADS: Thread master try to lock available mutex... 0
Mutex A0 locking from thread master
Mutex A0 locked from thread master
OF_SMART_THREADS: Thread obtained available mutex... 0
OF_SMART_THREADS: Thread try to unlock running mutex... 0
Mutex R0 unlocking from thread master
Mutex R0 unlocked from thread master
OF_SMART_THREADS: Thread released running mutex... 0
OF_SMART_THREADS: PREPARE AVAILABLE THREADS
Mutex R1 try locking from thread master
OF_SMART_THREADS: Thread 1 failed to obtain running mutex...
Mutex A1 unlocking from thread master
Mutex A1 unlocked from thread master
OF_SMART_THREADS: Thread 1 released available mutex...
UNLOCKED DATASETS : 1
LOCKED DATASETS :
ACTIVE DATASETS : 0
RUNNING THREADS : 0->0
Mutex R0 locked from thread0
>>>>>> Thread 0 obtained running mutex...
>>>>>> Thread 0 is going to process the dataset 0
>>>>>> Thread 0 terminated to process the dataset 0
>>>>>> Thread 0 notified that evaluation terminated
Mutex R0 unlocking from thread0
Mutex R0 unlocked from thread0
>>>>>> Thread 0 released running mutex...
Mutex A0 locking from thread0
OF_SMART_THREADS: THREADS AVAILABLE
Number of available threads: 1
OF_SMART_THREADS: take the thread 1
OF_SMART_THREADS: Thread master try to lock available mutex... 1
Mutex A1 locking from thread master
Mutex A1 locked from thread master
OF_SMART_THREADS: Thread obtained available mutex... 1
OF_SMART_THREADS: Thread try to unlock running mutex... 1
Mutex R1 unlocking from thread master
Mutex R1 unlocked from thread master
OF_SMART_THREADS: Thread released running mutex... 1
OF_SMART_THREADS: CHECK THREADS DONE
------------> DATASETS 0 done...
------------> DATASETS 0 removed from the active set.
OF_SMART_THREADS: PREPARE AVAILABLE THREADS
Mutex R0 try locking from thread master
Mutex R0 try locked from thread master
OF_SMART_THREADS: Thread 0 obtained running mutex...
Mutex R1 locked from thread1
Mutex A0 unlocking from thread master
>>>>>> Thread 1 obtained running mutex...
Mutex A0 unlocked from thread master
>>>>>> Thread 1 is going to process the dataset 1
Mutex A0 locked from thread0
OF_SMART_THREADS: Thread 0 released available mutex...
>>>>>> Thread 0 obtained available mutex...
UNLOCKED DATASETS :
LOCKED DATASETS :
ACTIVE DATASETS : 1
RUNNING THREADS : 1->1
>>>>>> Thread 1 terminated to process the dataset 1
Mutex A0 unlocking from thread0
>>>>>> Thread 1 notified that evaluation terminated
Mutex A0 unlocked from thread0
Mutex R1 unlocking from thread1
>>>>>> Thread 0 released available mutex...
Mutex R1 unlocked from thread1
Mutex R0 locking from thread0
>>>>>> Thread 1 released running mutex...
Mutex A1 locking from thread1
OF_SMART_THREADS: CHECK THREADS DONE
------------> DATASETS 1 done...
------------> DATASETS 1 removed from the active set.
OF_SMART_THREADS: PREPARE AVAILABLE THREADS
Mutex R0 try locking from thread master
OF_SMART_THREADS: Thread 0 failed to obtain running mutex...
Mutex A0 unlocking from thread master
Mutex A0 unlocked from thread master
OF_SMART_THREADS: Thread 0 released available mutex...
Mutex R1 try locking from thread master
Mutex R1 try locked from thread master
OF_SMART_THREADS: Thread 1 obtained running mutex...
Mutex A1 unlocking from thread master
Mutex A1 unlocked from thread master
OF_SMART_THREADS: Thread 1 released available mutex...
OF_SMART_THREADS: ALL THE DATASETS HAS BEEN SUCCESFULLY PROCESSED...
Mutex A1 locked from thread1
Mutex R0 unlocking from thread master
>>>>>> Thread 1 obtained available mutex...
Mutex R0 unlocked from thread master
Mutex R0 locked from thread0
Mutex A1 unlocking from thread1
>>>>>> Thread 0 obtained running mutex...
Mutex A1 unlocked from thread1
>>>>>> Thread 0 notified that evaluation terminated
>>>>>> Thread 1 released available mutex...
Mutex R0 unlocking from thread0
Mutex R1 locking from thread1
Mutex R0 unlocked from thread0
>>>>>> Thread 0 released running mutex...
Mutex A0 locking from thread0
Mutex A0 locked from thread0
>>>>>> Thread 0 obtained available mutex...
Mutex A0 unlocking from thread0
Mutex A0 unlocked from thread0
>>>>>> Thread 0 is terminating...
Mutex R1 unlocking from thread master
Mutex R1 unlocked from thread master
Mutex R1 locked from thread1
>>>>>> Thread 1 obtained running mutex...
>>>>>> Thread 1 notified that evaluation terminated
Mutex R1 unlocking from thread1
Mutex R1 unlocked from thread1
>>>>>> Thread 1 released running mutex...
Mutex A1 locking from thread1
Mutex A1 locked from thread1
>>>>>> Thread 1 obtained available mutex...
Mutex A1 unlocking from thread1
Mutex A1 unlocked from thread1
>>>>>> Thread 1 is terminating...
The problem arises when compiling the system on Windows 7, both with Visual Studio 64-bit and with MinGW 32-bit. As can be seen from the log below, there is a deadlock at the beginning. This looks very strange to us and cannot be explained by the mutex logs coming from the different threads.
Any suggestions on how to debug this problem?
---> LOG FROM WINDOWS 7...
---------------------------------
Number of data: 2
Data: 0, links:
Data: 1, links:
---> OF_SmartThreads::init_function --
------------------------------------
--> 4 processors/cores detected.
--> n_max_threads = 2
------------------------------------
Mutex R0 locking from thread master
Mutex R0 locked from thread master
Mutex R0 try locking from thread master
OF_SMART_THREADS: Thread 0 failed to obtain running mutex...
Mutex A0 unlocking from thread master
Mutex A0 unlocked from thread master
New thread 0 created
Mutex A0 unlocking from thread0
Mutex A0 unlocked from thread0
Mutex R1 locking from thread master
Mutex R1 locked from thread master
>>>>>> Thread 0 released available mutex...
Mutex R0 locking from thread0
Mutex R1 try locking from thread master
OF_SMART_THREADS: Thread 1 failed to obtain running mutex...
Mutex A1 unlocking from thread master
Mutex A1 unlocked from thread master
New thread 1 created
Mutex A1 unlocking from thread1
Mutex A1 unlocked from thread1
---------------------------------
Available threads: 2
>>>>>> Thread 1 released available mutex...
Mutex R1 locking from thread1
Unlocked datasets: 2
---> OF_SmartThreads::process_data_function
UNLOCKED DATASETS : 0 1
LOCKED DATASETS :
ACTIVE DATASETS :
RUNNING THREADS :
OF_SMART_THREADS: THREADS AVAILABLE
Number of available threads: 2
OF_SMART_THREADS: take the thread 0
OF_SMART_THREADS: Thread master try to lock available mutex... 0
Mutex A0 locking from thread master
Mutex A0 locked from thread master
OF_SMART_THREADS: Thread obtained available mutex... 0
OF_SMART_THREADS: Thread try to unlock running mutex... 0
Mutex R0 unlocking from thread master
Mutex R0 unlocked from thread master
Mutex R0 locked from thread0
OF_SMART_THREADS: Thread released running mutex... 0
>>>>>> Thread 0 obtained running mutex...
>>>>>> Thread 0 is going to process the dataset 0
Process Data: delay 41
OF_SMART_THREADS: PREPARE AVAILABLE THREADS
>>>>>> Thread 0 terminated to process the dataset 0
>>>>>> Thread 0 notified that evaluation terminated
Mutex R1 try locking from thread master
OF_SMART_THREADS: Thread 1 failed to obtain running mutex...
Mutex A1 unlocking from thread master
Mutex A1 unlocked from thread master
Mutex R0 unlocking from thread0
Mutex R0 unlocked from thread0
OF_SMART_THREADS: Thread 1 released available mutex...
UNLOCKED DATASETS : 1
LOCKED DATASETS :
ACTIVE DATASETS : 0*
RUNNING THREADS : 0->0
>>>>>> Thread 0 released running mutex...
Mutex A0 locking from thread0
OF_SMART_THREADS: THREADS AVAILABLE
Number of available threads: 1
OF_SMART_THREADS: take the thread 1
OF_SMART_THREADS: Thread master try to lock available mutex... 1
Mutex A1 locking from thread master
There is a deadlock: the master thread cannot lock mutex A1, even though, as you can see from the log, no other thread had locked that mutex before. Any suggestions on how to debug this problem?
Regards
Add lock monitoring to your OF_bmutex, e.g. a bool locked flag. You should not unlock a mutex that is not locked, nor lock a mutex the same thread already holds, so place asserts there. It seems like your init_function does OF_AVAILABLE_THREADS_MUTEX[i]->unlock(); without a prior lock.
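A minimal sketch of what that monitoring could look like, assuming the OF_bmutex shape shown in the question (the locked and owner fields and the asserts are my additions, the logging lines are omitted for brevity, and the check is best-effort, which is enough for debugging):
#include <boost/thread.hpp>
#include <cassert>
#include <string>
class OF_bmutex{
public:
    std::string mutex_type;
    int m_id;
    boost::mutex m;
    bool locked;
    boost::thread::id owner;
    OF_bmutex() : locked(false) {}
    void lock(){
        m.lock();
        locked = true;                          // safe: we own m now
        owner = boost::this_thread::get_id();
    }
    void unlock(){
        // catches the init_function pattern of unlocking without a prior lock
        assert(locked && owner == boost::this_thread::get_id());
        locked = false;
        owner = boost::thread::id();
        m.unlock();
    }
    bool try_lock(){
        if( m.try_lock() ){
            locked = true;
            owner = boost::this_thread::get_id();
            return true;
        }
        return false;
    }
};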
Boost BasicLockable Concept:
m.unlock();
Requires: The current thread owns m
So it looks like you are violating the unlock() precondition. That can be seen in your log:
Mutex A0 unlocking from thread master
Mutex A0 unlocked from thread master
First of all: I am a complete newbie in mutex/multithread programming, so sorry in advance for any errors...
I have a program that runs multiple threads. The threads (usually one per cpu core) do a lot of calculation and "thinking" and then sometimes decide to call a particular (shared) method that updates some statistics.
The concurrency on statistics updates is managed through the use of a mutex:
stats_mutex.lock();
common_area->update_thread_stats( ... );
stats_mutex.unlock();
Now to the problem.
Of all those threads there is one particular thread that needs almost realtime priority, because it's the only thread that actually operates.
With "almost realtime priority" I mean the following. Let's suppose thread t0 is the "privileged" one and t1...t15 are the normal ones. What happens now is:
Thread t1 acquires the lock.
Threads t2, t3 and t0 call the lock() method and wait for it to succeed.
Thread t1 calls unlock().
One of the threads t2, t3, t0 (at random, as far as I know) succeeds in acquiring the lock, and the other ones continue to wait.
What I need is:
Thread t1 acquires the lock.
Threads t2, t3 and t0 call the lock() method and wait for it to succeed.
Thread t1 calls unlock().
Thread t0 acquires the lock, since it is privileged.
So, what's the best (and possibly simplest) method to do this?
What I was thinking of is a bool variable called "privileged_needs_lock". But I think I would need another mutex to manage access to this variable... I don't know if this is the right way...
Additional info:
my threads use C++11 (as of gcc 4.6.3)
the code needs to run on both Linux and Windows (but it has only been tested on Linux at the moment)
performance of the locking mechanism is not an issue (my performance problems are in the internal thread calculations, and the number of threads will always be low, at most one or two per cpu core)
Any idea is appreciated.
Thanks
The solution below works (the three-mutex way):
#include <thread>
#include <iostream>
#include <mutex>
#include "unistd.h"
std::mutex M;
std::mutex N;
std::mutex L;
void lowpriolock(){
L.lock();
N.lock();
M.lock();
N.unlock();
}
void lowpriounlock(){
M.unlock();
L.unlock();
}
void highpriolock(){
N.lock();
M.lock();
N.unlock();
}
void highpriounlock(){
M.unlock();
}
void hpt(const char* s){
using namespace std;
//cout << "hpt trying to get lock here" << endl;
highpriolock();
cout << s << endl;
sleep(2);
highpriounlock();
}
void lpt(const char* s){
using namespace std;
//cout << "lpt trying to get lock here" << endl;
lowpriolock();
cout << s << endl;
sleep(2);
lowpriounlock();
}
int main(){
std::thread t0(lpt,"low prio t0 working here");
std::thread t1(lpt,"low prio t1 working here");
std::thread t2(hpt,"high prio t2 working here");
std::thread t3(lpt,"low prio t3 working here");
std::thread t4(lpt,"low prio t4 working here");
std::thread t5(lpt,"low prio t5 working here");
std::thread t6(lpt,"low prio t6 working here");
std::thread t7(lpt,"low prio t7 working here");
//std::cout << "All threads created" << std::endl;
t0.join();
t1.join();
t2.join();
t3.join();
t4.join();
t5.join();
t6.join();
t7.join();
return 0;
}
I tried the solution below as suggested, but it does not work (compiled with "g++ -std=c++0x -o test test.cpp -lpthread"):
#include <thread>
#include <mutex>
#include <cstdio>
#include "time.h"
#include "unistd.h"
#include "pthread.h"
std::mutex l;
void waiter(){
l.lock();
printf("Here i am, waiter starts\n");
sleep(2);
printf("Here i am, waiter ends\n");
l.unlock();
}
void privileged(int id){
usleep(200000);
l.lock();
usleep(200000);
printf("Here i am, privileged (%d)\n",id);
l.unlock();
}
void normal(int id){
usleep(200000);
l.lock();
usleep(200000);
printf("Here i am, normal (%d)\n",id);
l.unlock();
}
int main(){
std::thread tw(waiter);
std::thread t1(normal,1);
std::thread t0(privileged,0);
std::thread t2(normal,2);
sched_param sch;
int policy;
pthread_getschedparam(t0.native_handle(), &policy, &sch);
sch.sched_priority = -19;
pthread_setschedparam(t0.native_handle(), SCHED_FIFO, &sch);
pthread_getschedparam(t1.native_handle(), &policy, &sch);
sch.sched_priority = 18;
pthread_setschedparam(t1.native_handle(), SCHED_FIFO, &sch);
pthread_getschedparam(t2.native_handle(), &policy, &sch);
sch.sched_priority = 18;
pthread_setschedparam(t2.native_handle(), SCHED_FIFO, &sch);
tw.join();
t1.join();
t0.join();
t2.join();
return 0;
}
I can think of three methods using only threading primitives:
Triple mutex
Three mutexes would work here:
data mutex ('M')
next-to-access mutex ('N'), and
low-priority access mutex ('L')
Access patterns are:
Low-priority threads: lock L, lock N, lock M, unlock N, { do stuff }, unlock M, unlock L
High-priority thread: lock N, lock M, unlock N, { do stuff }, unlock M
That way the access to the data is protected, and the high-priority thread can get ahead of the low-priority threads in access to it.
Mutex, condition variable, atomic flag
The primitive way to do this is with a condition variable and an atomic:
Mutex M;
Condvar C;
atomic bool hpt_waiting;
Data access patterns:
Low-priority thread: lock M, while (hpt_waiting) wait C on M, { do stuff }, broadcast C, unlock M
High-priority thread: hpt_waiting := true, lock M, hpt_waiting := false, { do stuff }, broadcast C, unlock M
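A minimal C++11 sketch of this second pattern (my own rendering of the pseudocode above; the two access functions are just placeholders for the low- and high-priority critical sections):
#include <atomic>
#include <condition_variable>
#include <mutex>
std::mutex M;
std::condition_variable C;
std::atomic<bool> hpt_waiting(false);
void low_priority_access()
{
    std::unique_lock<std::mutex> lk(M);
    C.wait(lk, []{ return !hpt_waiting.load(); }); // wait while the high-priority thread wants in
    // ... do stuff ...
    C.notify_all();
}                                                  // M unlocked here
void high_priority_access()
{
    hpt_waiting = true;              // announce intent before queueing on the mutex
    std::lock_guard<std::mutex> lk(M);
    hpt_waiting = false;
    // ... do stuff ...
    C.notify_all();
}                                    // M unlocked here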
Mutex, condition variable, two non-atomic flags
Alternatively you can use two non-atomic bools with a condvar; in this technique the mutex/condvar protects the flags, and the data is protected not by a mutex but by a flag:
Mutex M;
Condvar C;
bool data_held, hpt_waiting;
Low-priority thread: lock M, while (hpt_waiting or data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, broadcast C, unlock M
High-priority thread: lock M, hpt_waiting := true, while (data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, hpt_waiting := false, broadcast C, unlock M
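And a corresponding C++11 sketch of this third pattern (again my own rendering, standalone from the previous sketch; note that here the data is touched outside the mutex and is guarded only by the data_held flag):
#include <condition_variable>
#include <mutex>
std::mutex M;
std::condition_variable C;
bool data_held = false, hpt_waiting = false;   // both protected by M
void low_priority_access()
{
    {
        std::unique_lock<std::mutex> lk(M);
        C.wait(lk, []{ return !hpt_waiting && !data_held; });
        data_held = true;
    }
    // ... do stuff, M not held ...
    {
        std::lock_guard<std::mutex> lk(M);
        data_held = false;
        C.notify_all();
    }
}
void high_priority_access()
{
    {
        std::unique_lock<std::mutex> lk(M);
        hpt_waiting = true;
        C.wait(lk, []{ return !data_held; });
        data_held = true;
    }
    // ... do stuff, M not held ...
    {
        std::lock_guard<std::mutex> lk(M);
        data_held = false;
        hpt_waiting = false;
        C.notify_all();
    }
}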
Put requesting threads on a 'priority queue'. The privileged thread can get first go at the data when it's free.
One way to do this would be with an array of ConcurrentQueues[privilegeLevel], a lock and some events.
Any thread that wants the data enters the lock. If the data is free (a boolean), it gets the data object and exits the lock. If the data is in use by another thread, the requesting thread pushes an event onto one of the concurrent queues, depending on its privilege level, exits the lock and waits on the event.
When a thread wants to release its ownership of the data object, it gets the lock and iterates the array of ConcurrentQueues from the highest-privilege end down, looking for an event (i.e. queue count > 0). If it finds one, it signals it and exits the lock; if not, it sets the 'dataFree' boolean and exits the lock.
When a thread waiting on an event for access to the data is made ready, it may access the data object.
I think that should work. Please, other developers, check this design and see if you can think of any races etc? I'm still suffering somewhat from 'hospitality overload' after a trip to CZ..
Edit - probably don't even need concurrent queues because of the explicit lock across them all. Any old queue would do.
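A rough C++11 sketch of that scheme (all names here are mine and hypothetical; per-waiter condition variables stand in for the 'events', and a plain deque per privilege level replaces the concurrent queues, which is fine because everything is done under the one lock, as the edit notes):
#include <array>
#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>
class prioritized_data_gate {
    struct waiter { std::mutex m; std::condition_variable cv; bool signalled = false; };
    std::mutex gate_;                                            // the explicit lock across everything
    bool data_free_ = true;
    std::array<std::deque<std::shared_ptr<waiter>>, 2> queues_;  // [1] = privileged, [0] = normal
public:
    void acquire(int privilege_level) {
        std::shared_ptr<waiter> w;
        {
            std::lock_guard<std::mutex> lk(gate_);
            if (data_free_) { data_free_ = false; return; }      // data was free: take it and go
            w = std::make_shared<waiter>();
            queues_[privilege_level].push_back(w);               // park on a per-thread 'event'
        }
        std::unique_lock<std::mutex> lk(w->m);
        w->cv.wait(lk, [&]{ return w->signalled; });
    }
    void release() {
        std::shared_ptr<waiter> next;
        {
            std::lock_guard<std::mutex> lk(gate_);
            for (int p = 1; p >= 0; --p) {                       // highest privilege level first
                if (!queues_[p].empty()) {
                    next = queues_[p].front();
                    queues_[p].pop_front();
                    break;
                }
            }
            if (!next) { data_free_ = true; return; }            // nobody waiting: mark the data free
        }
        std::lock_guard<std::mutex> lk(next->m);
        next->signalled = true;
        next->cv.notify_one();                                   // hand ownership straight to the waiter
    }
};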
#include <thread>
#include <mutex>
#include <condition_variable>
#include <cassert>
class priority_mutex {
std::condition_variable cv_;
std::mutex gate_;
bool locked_;
std::thread::id pr_tid_; // priority thread
public:
priority_mutex() : locked_(false) {}
~priority_mutex() { assert(!locked_); }
priority_mutex(priority_mutex&) = delete;
priority_mutex operator=(priority_mutex&) = delete;
void lock(bool privileged = false) {
const std::thread::id tid = std::this_thread::get_id();
std::unique_lock<decltype(gate_)> lk(gate_);
if (privileged)
pr_tid_ = tid;
cv_.wait(lk, [&]{
return !locked_ && (pr_tid_ == std::thread::id() || pr_tid_ == tid);
});
locked_ = true;
}
void unlock() {
std::lock_guard<decltype(gate_)> lk(gate_);
if (pr_tid_ == std::this_thread::get_id())
pr_tid_ = std::thread::id();
locked_ = false;
cv_.notify_all();
}
};
NOTICE: This priority_mutex provides unfair thread scheduling. If the privileged thread acquires the lock frequently, the other non-privileged threads may hardly be scheduled at all.
Usage example:
#include <mutex>
priority_mutex mtx;
void privileged_thread()
{
//...
{
mtx.lock(true); // acquire 'priority lock'
std::unique_lock<decltype(mtx)> lk(mtx, std::adopt_lock);
// update shared state, etc.
}
//...
}
void normal_thread()
{
//...
{
std::unique_lock<decltype(mtx)> lk(mtx); // acquire 'normal lock'
// do something
}
//...
}
On Linux you can check the man pages for pthread_setschedparam and also sched_setscheduler:
pthread_setschedparam(pthread_t thread, int policy,
const struct sched_param *param);
Check this also for c++2011:
http://msdn.microsoft.com/en-us/library/system.threading.thread.priority.aspx#Y78
pthreads has thread priorities:
pthread_setschedprio( mThreadId, wpri );
If multiple threads are sleeping while waiting on a lock, the scheduler will wake the highest-priority thread first.
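For example, a sketch of raising one thread's priority via its native handle (my own illustration, assuming a Linux/pthreads build; real-time policies such as SCHED_FIFO usually require root or CAP_SYS_NICE, and the priority must lie in the range reported by sched_get_priority_min/max, which is one reason the -19 used in the question's second attempt is rejected):
#include <pthread.h>
#include <sched.h>
#include <thread>
void privileged_work() { /* ... the privileged thread's job ... */ }
int main()
{
    std::thread t(privileged_work);
    sched_param sp;
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;   // a valid priority for this policy
    int rc = pthread_setschedparam(t.native_handle(), SCHED_FIFO, &sp);
    (void)rc;  // non-zero (e.g. EPERM) if the process lacks the needed privileges
    t.join();
    return 0;
}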
Try something like the following. You could make the class a thread-safe singleton and you could even make it a functor.
#include <pthread.h>
#include <semaphore.h>
#include <map>
class ThreadPrioFun
{
typedef std::multimap<int, sem_t*> priomap_t;
public:
ThreadPrioFun()
{
pthread_mutex_init(&mtx, NULL);
}
~ThreadPrioFun()
{
pthread_mutex_destroy(&mtx);
}
void fun(int prio, sem_t* pSem)
{
pthread_mutex_lock(&mtx);
bool bWait = !(pm.empty());
priomap_t::iterator it = pm.insert(std::pair<int, sem_t*>(prio, pSem) );
pthread_mutex_unlock(&mtx);
if( bWait ) sem_wait(pSem);
// do the actual job
// ....
//
pthread_mutex_lock(&mtx);
// done, remove yourself
pm.erase(it);
if( ! pm.empty() )
{
// let next guy run:
sem_post((pm.begin()->second));
}
pthread_mutex_unlock(&mtx);
}
private:
pthread_mutex_t mtx;
priomap_t pm;
};
Since thread priorities aren't working for you:
Create 2 mutexes, a regular lock and a priority lock.
Regular threads must first lock the normal lock, and then the priority lock. The priority thread only has to lock the priority lock:
Mutex mLock;
Mutex mPriLock;
doNormal()
{
mLock.lock();
pthread_yield();
doPriority();
mLock.unlock();
}
doPriority()
{
mPriLock.lock();
doStuff();
mPriLock.unlock();
}
I modified ecatmur's answer slightly, adding a fourth mutex to handle multiple high-priority threads at the same time (note that this was not required in my original question):
#include <thread>
#include <mutex>
#include <iostream>
#include "unistd.h"
std::mutex M; //data access mutex
std::mutex N; // 'next to access' mutex
std::mutex L; //low priority access mutex
std::mutex H; //hptwaiting int access mutex
int hptwaiting=0;
void lowpriolock(){
L.lock();
while(hptwaiting>0){
N.lock();
N.unlock();
}
N.lock();
M.lock();
N.unlock();
}
void lowpriounlock(){
M.unlock();
L.unlock();
}
void highpriolock(){
H.lock();
hptwaiting++;
H.unlock();
N.lock();
M.lock();
N.unlock();
}
void highpriounlock(){
M.unlock();
H.lock();
hptwaiting--;
H.unlock();
}
void hpt(const char* s){
using namespace std;
//cout << "hpt trying to get lock here" << endl;
highpriolock();
cout << s << endl;
usleep(30000);
highpriounlock();
}
void lpt(const char* s){
using namespace std;
//cout << "lpt trying to get lock here" << endl;
lowpriolock();
cout << s << endl;
usleep(30000);
lowpriounlock();
}
int main(){
std::thread t0(lpt,"low prio t0 working here");
std::thread t1(lpt,"low prio t1 working here");
std::thread t2(hpt,"high prio t2 working here");
std::thread t3(lpt,"low prio t3 working here");
std::thread t4(lpt,"low prio t4 working here");
std::thread t5(lpt,"low prio t5 working here");
std::thread t6(hpt,"high prio t6 working here");
std::thread t7(lpt,"low prio t7 working here");
std::thread t8(hpt,"high prio t8 working here");
std::thread t9(lpt,"low prio t9 working here");
std::thread t10(lpt,"low prio t10 working here");
std::thread t11(lpt,"low prio t11 working here");
std::thread t12(hpt,"high prio t12 working here");
std::thread t13(lpt,"low prio t13 working here");
//std::cout << "All threads created" << std::endl;
t0.join();
t1.join();
t2.join();
t3.join();
t4.join();
t5.join();
t6.join();
t7.join();
t8.join();
t9.join();
t10.join();
t11.join();
t12.join();
t13.join();
return 0;
}
What do you think? Is it OK? It's true that a semaphore could handle this kind of thing better, but mutexes are much easier for me to manage.