Let's say I have an array of 5 threads:
//main thread
pthread_t t[5];
pthread_mutex_t mutex[5];
queue<int> q[5];
for(int i = 0; i < 5; i++){
pthread_create(&t[i], NULL, worker, NULL);
}
for(int i = 0; i < 5; i++){
pthread_mutex_lock(&mutex[i]);
q[i].push(i);
pthread_mutex_unlock(&mutex[i]);
}
void* worker(void* arg){
pthread_mutex_lock(&mutex[?]);
}
I am confused about the mutex_lock here. My questions are:
How could I let the worker know which mutex to lock?
When I access the mutex through mutex[i], do I need another lock since the child thread might be accessing the mutex array as well?
Thanks.
You need to be clear which threads are sharing which queues. The code you've written suggests each worker thread works on a specific queue, but the main thread (that spawns the workers) will be pushing back new values onto those queues. If that's what you want, then what you've done is basically correct, and you can let the worker threads know the array index of the mutex they're to lock/unlock by casting it to void* and passing it as the argument to pthread_create, which will in turn be passed as a void* to the worker function. You do not need any additional layer of locking around the mutex array - it is entirely safe to access specific elements independently, though if it were say a vector that was being resized at run-time, then you would need that extra level of locking.
Associate the mutex with the queue by creating a new struct:
typedef struct {
pthread_mutex_t mutex;
queue<int> q;
} safe_queue;
safe_queue queue_pool [5];
void* worker(void* arg){
safe_queue* sq = static_cast<safe_queue*>(arg);
pthread_mutex_lock(&sq->mutex);
}
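Assuming the queue_pool above, the per-queue mutexes would need to be initialised and each element handed to its thread through the last argument of pthread_create; a rough sketch (error handling omitted):

pthread_t t[5];
for (int i = 0; i < 5; i++) {
    pthread_mutex_init(&queue_pool[i].mutex, NULL);  // mutexes must be initialised first
    // &queue_pool[i] arrives in worker() as the void* argument
    pthread_create(&t[i], NULL, worker, &queue_pool[i]);
}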
That last argument to the pthread_create is handed over to the thread when it's called, so you can just pass a value to the specific thread.
Since you want both a specific mutex and a specific queue, you're better off passing in the value of i directly.
for(int i = 0; i < 5; i++){
pthread_create(&t[i], NULL, worker, (void*)(intptr_t)i);
}
void *worker (void *pvI) {
int idx = (int)(intptr_t)pvI; // Cast back through intptr_t to avoid pointer/int size problems.
// Use mutex[idx] and q[idx].
}
However, if you want to do it this way, I'd go for a single queue and mutex.
That's because the act of putting something on the queue is almost certainly going to be much faster than processing an item on the queue (otherwise you wouldn't need threads at all).
If you have multiple queues, the main thread has to figure out somehow which are the underutilised threads so it can select the best queue. If you have one queue and one mutex to protect it, the threads will self-organise for efficiency. Those threads that do long jobs won't try to get something from the queue. Those doing short jobs will come back sooner.
I should mention that mutexes on their own are not a good solution for this producer/consumer model. You can't have a thread lock the mutex then wait indefinitely on the queue since that will prevent the main thread putting anything on the queue.
So that means your worker threads will be constantly polling the queues looking for work.
If you use a mutex combined with a condition variable, it will be a lot more efficient. That's because the threads are signalled by the main thread when work is available rather than constantly grabbing the mutex, checking for work, then releasing the mutex.
The basic outline will be, for the main thread:
initialise
while not finished:
    await work
    lock mutex
    put work on queue
    signal condvar
    unlock mutex
terminate
and, for the worker threads:
initialise
while not finished:
    lock mutex
    while queue is empty:
        wait on condvar
    get work from queue
    unlock mutex
    do work
terminate
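A minimal sketch of that outline with pthreads, assuming a single shared queue (names are illustrative; error handling and termination are omitted):

#include <pthread.h>
#include <queue>

std::queue<int> work_q;                 // single shared queue
pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  q_cond  = PTHREAD_COND_INITIALIZER;

// Main thread: push an item and signal one waiting worker.
void put_work(int item) {
    pthread_mutex_lock(&q_mutex);
    work_q.push(item);
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_mutex);
}

// Worker threads: sleep on the condvar until there is something to take.
void* worker(void*) {
    for (;;) {
        pthread_mutex_lock(&q_mutex);
        while (work_q.empty())          // loop guards against spurious wakeups
            pthread_cond_wait(&q_cond, &q_mutex);
        int item = work_q.front();
        work_q.pop();
        pthread_mutex_unlock(&q_mutex);
        // do the (long) work outside the lock
        (void)item;
    }
    return NULL;
}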
Don't pass a NULL pointer as arg to the thread. Instead use a pointer to an object that defines what the thread has to do.
How could I let the worker know which mutex to lock?
Pass the number as the last parameter to pthread_create()
for(int i = 0; i < 5; i++)
{
pthread_create(&t[i], NULL, worker, reinterpret_cast<void*>(static_cast<intptr_t>(i)));
}
Then you can get the value like this:
void* worker(void* arg)
{
int index = static_cast<int>(reinterpret_cast<intptr_t>(arg));
pthread_mutex_lock(&mutex[index]);
}
When I access the mutex through mutex[i], do I need another lock since the child thread might be accessing the mutex array as well?
No. The array mutex itself is never modified; each member of the array behaves in an atomic fashion via the pthread_mutex_X() functions.
A slightly better design would be:
//main thread
struct ThreadData
{
pthread_mutex_t mutex;
queue<int> queue;
};
pthread_t t[5];
ThreadData d[5];
for(int i = 0; i < 5; i++)
{
pthread_create(&t[i], NULL, worker, &d[i]); // Pass a pointer to ThreadData
}
void* worker(void* arg)
{
// Retrieve the ThreadData object.
ThreadData* d = reinterpret_cast<ThreadData*>(arg);
pthread_mutex_lock(&(d->mutex));
<STUFF>
}
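One thing to watch: the pthread_mutex_t members of d are not usable until they are initialised, so that would have to happen before the threads are created; a sketch:

for(int i = 0; i < 5; i++)
{
    pthread_mutex_init(&d[i].mutex, NULL); // or PTHREAD_MUTEX_INITIALIZER where the object allows it
}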
I am using a queue shared between 2 threads in my program. One thread keeps pushing data to the queue, and second thread keeps popping data from the queue and writing to a vector.
My question is: do I need a condition variable along with the mutex lock for this scenario when doing the enqueue or dequeue operations? How do I handle the race condition?
My code is as follows:
void push_data_to_queue(){
mtx.lock();
std::lock_guard<std::mutex> lockGuard(mtx);
for( int i=0; i < 10; i++ ) {
queue.push(i);
}
}
void get_data_from_queue(){
std::vector<int> v;
mtx.lock();
std::lock_guard<std::mutex> lockGuard(mtx);
for(int i=0;i<5;i++) {
v.push_back(queue.front());
queue.pop();
}
}
int main(){
std::mutex mtx;
std::thread(push_data_to_queue,std::ref(mtx));
std::thread(get_data_from_queue,std::ref(mtx));
return 0;
}
First point: you don't need to call mtx.lock() when you are using std::lock_guard; the lock_guard acquires the mutex itself.
Second point: in this scenario there is only one writer thread and one reader thread, so strictly speaking you don't need a locking mechanism; you could implement it with a wait-free approach or something like a busy loop. You can also just use a mutex, but beware that there is no guarantee that when the reader or writer thread releases the mutex the other side will acquire it next; in other words, there is no guaranteed ordering of acquisition. Because of that you need something like a condition variable (which has the added benefit of putting a thread into a waiting state and preventing it from wasting resources).
As I said, you need some method of synchronization.
Imagine this scenario:
The writer thread writes A to the queue.
The reader thread calls back(), gets the item and starts to process it.
Then the writer thread writes B to the queue.
After that the reader thread calls pop_back(), which removes item B from the queue without it ever being processed.
So because of that you need a mutex (with no guaranteed ordering of acquisition), or a condition variable (which uses a mutex inside itself), or a busy loop (which may consume a lot of resources), or a spinlock, ...
For better performance you can use atomic variables.
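For the mutex plus condition variable route, a minimal sketch of the two functions from the question might look like this (names and item counts are kept from the question; error handling and termination logic are omitted, and this is only one way to do it):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> queue;   // shared between the two threads
std::mutex mtx;
std::condition_variable cv;

void push_data_to_queue(){
    for( int i=0; i < 10; i++ ) {
        {
            std::lock_guard<std::mutex> lockGuard(mtx);
            queue.push(i);
        }
        cv.notify_one();   // wake the consumer after each push
    }
}

void get_data_from_queue(){
    std::vector<int> v;
    for(int i=0;i<5;i++) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, []{ return !queue.empty(); });   // sleep until data is available
        v.push_back(queue.front());
        queue.pop();
    }
}

int main(){
    std::thread producer(push_data_to_queue);
    std::thread consumer(get_data_from_queue);
    producer.join();
    consumer.join();
    return 0;
}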
Poring through the legacy code of an old and large project, I found an odd method of creating a thread-safe queue, something like this:
template < typename _Msg>
class WaitQue: public QWaitCondition
{
public:
typedef _Msg DataType;
void wakeOne(const DataType& msg)
{
QMutexLocker lock_(&mx);
que.push(msg);
QWaitCondition::wakeOne();
}
void wait(DataType& msg)
{
/// wait if empty.
{
QMutex wx; // WHAT?
QMutexLocker cvlock_(&wx);
if (que.empty())
QWaitCondition::wait(&wx);
}
{
QMutexLocker _wlock(&mx);
msg = que.front();
que.pop();
}
}
unsigned long size() {
QMutexLocker lock_(&mx);
return que.size();
}
private:
std::queue<DataType> que;
QMutex mx;
};
wakeOne is used from threads as a kind of "posting" function, and wait is called from other threads and waits indefinitely until a message appears in the queue. In some cases the roles between threads reverse at different stages, using separate queues.
Is it even legal to use a QMutex this way, by creating a local one? I kind of understand why someone might do that to dodge a deadlock while reading the size of que, but how does it even work? Is there a simpler and more idiomatic way to achieve this behavior?
It's legal to create a local mutex like that, but it normally makes no sense.
As you've worked out, in this case it is wrong. You should be using the member mutex:
void wait(DataType& msg)
{
QMutexLocker cvlock_(&mx);
while (que.empty())
QWaitCondition::wait(&mx);
msg = que.front();
que.pop();
}
Notice also that you must have while instead of if around the call to QWaitCondition::wait. This is partly because of (possible) spurious wake-ups; the Qt docs aren't clear here. But more importantly, the fact that the wake and the subsequent reacquisition of the mutex are not one atomic operation means you must recheck the queue for emptiness. It could be this last case where you were previously getting deadlocks/UB.
Consider the scenario of an empty queue and a caller (thread 1) to wait into QWaitCondition::wait. This thread blocks. Then thread 2 comes along and adds an item to the queue and calls wakeOne. Thread 1 gets woken up and tries to reacquire the mutex. However, thread 3 comes along in your implementation of wait, takes the mutex before thread 1, sees the queue isn't empty, processes the single item and moves on, releasing the mutex. Then thread 1 which has been woken up finally acquires the mutex, returns from QWaitCondition::wait and tries to process... an empty queue. Yikes.
Suppose we have two workers. Each worker has an id of 0 and 1. Also suppose that we have jobs arriving all the time, each job has also an identifier 0 or 1 which specifies which worker will have to do this job.
I would like to create 2 threads that are initially locked, and then when two jobs arrive, unlock them, each of them does their job and then lock them again until other jobs arrive.
I have the following code:
#include <iostream>
#include <thread>
#include <mutex>
using namespace std;
struct job{
thread jobThread;
mutex jobMutex;
};
job jobs[2];
void executeJob(int worker){
while(true){
jobs[worker].jobMutex.lock();
//do some job
}
}
void initialize(){
int i;
for(i=0;i<2;i++){
jobs[i].jobThread = thread(executeJob, i);
}
}
int main(void){
//initialization
initialize();
int buffer[2];
int bufferSize = 0;
while(true){
//jobs arrive here constantly,
//once the buffer becomes full,
//we unlock the threads(workers) and they start working
bufferSize = 2;
if(bufferSize == 2){
for(int i = 0; i<2; i++){
jobs[i].jobMutex.unlock();
}
}
break;
}
}
I started using std::thread a few days ago and I'm not sure why but Visual Studio gives me an error saying abort() has been called. I believe there's something missing however due to my ignorance I can't figure out what.
I would expect this piece of code to actually
Initialize the two threads and then lock them
Inside the main function unlock the two threads, the two threads will do their job(in this case nothing) and then they will become locked again.
But it gives me an error instead. What am I doing wrong?
Thank you in advance!
For this purpose you can use boost's threadpool class.
It's an efficient, well-tested, open-source library, rather than something you have to write and stabilise from scratch yourself.
http://threadpool.sourceforge.net/
// (include the threadpool library's header first)
void first_task();
void second_task();
int main()
{
pool tp(2); // number of worker threads - currently it's 2.
// Add some tasks to the pool.
tp.schedule(&first_task);
tp.schedule(&second_task);
}
void first_task()
{
...
}
void second_task()
{
...
}
Note:
Suggestion for your example:
You don't need to have an individual mutex object for each thread. A single mutex object will do the synchronization between all the threads. In your executeJob function you lock the mutex of one thread without ever unlocking it, while another thread calls lock on a different mutex object, which leads to deadlock or undefined behaviour.
Also, since you are calling mutex.lock() inside the while loop without unlocking, the same thread is trying to lock the same mutex object again and again, leading to undefined behaviour.
If you do not need to execute the threads in parallel, you can have one global mutex object that is used inside executeJob to lock and unlock:
mutex m;
void executeJob(int worker)
{
m.lock();
//do some job
m.unlock();
}
If you want to execute job parallel use boost threadpool as I suggested earlier.
In general you can write an algorithm similar to the following. It works with pthreads; I'm sure it would work with C++ threads as well.
Create threads and make them wait on a condition variable, e.g. work_exists.
When work arrives, you notify all threads that are waiting on that condition variable. Then in the main thread you start waiting on another condition variable, work_done.
Upon receiving the work_exists notification, worker threads wake up, grab their assigned work from jobs[worker], execute it, send a notification on the work_done variable, and then go back to waiting on the work_exists condition variable.
When the main thread receives the work_done notification it checks whether all threads are done. If not, it keeps waiting until the notification from the last-finishing thread arrives. A sketch of this pattern follows.
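A minimal sketch of that outline with std::thread and std::condition_variable (the names work_exists, work_done and jobs come from the description above; everything else, including the shutdown-free infinite worker loop, is illustrative):

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable work_exists, work_done;
int jobs[2];                        // one job slot per worker
bool has_work[2] = {false, false};
int pending = 0;                    // workers still busy

void worker(int id) {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        work_exists.wait(lock, [&] { return has_work[id]; });
        int job = jobs[id];
        has_work[id] = false;
        lock.unlock();
        // ... execute job outside the lock ...
        (void)job;
        lock.lock();
        if (--pending == 0)
            work_done.notify_one(); // last finisher wakes the main thread
    }
}

int main() {
    std::thread w0(worker, 0), w1(worker, 1);
    {
        std::lock_guard<std::mutex> lock(m);
        jobs[0] = 10; jobs[1] = 20; // hand one job to each worker
        has_work[0] = has_work[1] = true;
        pending = 2;
    }
    work_exists.notify_all();
    {
        std::unique_lock<std::mutex> lock(m);
        work_done.wait(lock, [] { return pending == 0; }); // wait for both workers
    }
    w0.detach(); w1.detach();       // demo only; real code would signal the workers to exit and join them
}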
From cppreference's page on std::mutex::unlock:
The mutex must be unlocked by all threads that have successfully locked it before being destroyed. Otherwise, the behavior is undefined.
Your approach of having one thread unlock a mutex on behalf of another thread is incorrect.
The behavior you're attempting would normally be done using std::condition_variable. There are examples if you look at the links to the member functions.
I was reading a bit about mutexes and semaphores.
I have this piece of code:
int func()
{
i++;
return i;
}
i is declared somewhere outside as a global variable.
If I create a counting semaphore with a count of 3, won't it have a race condition? Does that mean I should be using a binary semaphore or a mutex in this case?
Can somebody give me some practical scenarios where a mutex, a critical section and semaphores should be used?
I have probably read too much, and at this point I am a bit confused. Can somebody clear this up?
P.S.: I have understood that the primary difference between a mutex and a binary semaphore is ownership, and that a counting semaphore should be used as a signalling mechanism.
Differences between mutex and semaphore (I never worked with CriticalSection):
When using condition variables, the lock must be a mutex.
When using more than 1 available resources, you must use a semaphore initialized with the number of available resources, so when you're out of resources, the next thread blocks.
When using 1 resource or some code that may only be executed by 1 thread, you have the choice of using a mutex or a semaphore initialized with 1 (this is the case for OP's question).
When letting a thread wait until signaled by another thread, you need a semaphore initialized with 0 (the waiting thread does sem.p(), the signalling thread does sem.v()), as sketched below.
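A minimal sketch of that last case with POSIX semaphores (sem_init with an initial count of 0, and sem_wait/sem_post standing in for p()/v(); illustrative only, error handling omitted):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t ready;                  // starts at 0: nothing has been signalled yet

void* waiter(void* arg) {
    (void)arg;
    sem_wait(&ready);         // p(): blocks until the other thread posts
    printf("got the signal\n");
    return NULL;
}

int main(void) {
    sem_init(&ready, 0, 0);   // initial count 0
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    // ... do some work, then signal ...
    sem_post(&ready);         // v(): wakes the waiting thread
    pthread_join(t, NULL);
    sem_destroy(&ready);
    return 0;
}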
A critical section object is the easiest way here. It is a lightweight synchronisation object.
Here is some code as example:
#include <windows.h>
#include <stdio.h>

#define NUMBER_OF_THREADS 100
// global
CRITICAL_SECTION csMyCriticalSectionObject;
int i = 0;
HANDLE hThread[NUMBER_OF_THREADS];
DWORD WINAPI func(LPVOID lpvParam); // forward declaration; defined below
int main(int argc, char *argv[])
{
// initialize the critical section object
InitializeCriticalSection(&csMyCriticalSectionObject);
// create 100 threads:
for (int n = 0; n < NUMBER_OF_THREADS; n++)
{
if (!(hThread[n] = CreateThread(NULL,0,func,NULL,0,NULL)))
{
fprintf(stderr,"Failed to create thread\n");
}
}
// wait for all 100 threads:
WaitForMultipleObjects(NUMBER_OF_THREADS,hThread,TRUE,INFINITE);
// this can be made more detailed/complex to find each thread ending with its
// exit code. See documentation for that
DeleteCriticalSection(&csMyCriticalSectionObject);
return 0;
}
Links: CreateThread function and WaitForMultipleObjects function
With the thread:
// i is global, no need for i to returned by the thread
DWORD WINAPI func( LPVOID lpvParam )
{
EnterCriticalSection(&csMyCriticalSectionObject);
i++;
LeaveCriticalSection(&csMyCriticalSectionObject);
return GetLastError();
}
A mutex and/or a semaphore would be going too far for this purpose.
Edit: A semaphore is basically a mutex which can be released multiple times. It stores the number of release operations and can therefore release the same number of waits on it.
I am working on creating a threadpool from scratch as part of an assignment, and I am able to create the thread pool and then pass each created thread a function that constantly loops. My question is: how can I accept input and pass it to an already executing pthread? After figuring this out I will add mutexes to lock the function to a specific thread, but I am unable to get to that part.
class ThreadPool{
public:
ThreadPool(size_t threadCount);
int dispatch_thread(void *(dispatch_function(void *)), void *arg);
bool thread_avail();
int numThreads;
pthread_t * thread;
pthread_mutex_t * mutexes;
};
int ThreadPool::dispatch_thread(void *(dispatch_function(void *)), void *arg){
flag = 1;
//This is where I would like to pass the function the running pthread
}
void *BusyWork(void *t)
{
while(true){
//This is where I would like to run the passed function from each thread
//I can run the passed function by itself, but need to pass it to the threadpool
}
}
ThreadPool::ThreadPool(size_t threadCount){
pthread_t thread[threadCount];
for(t=0; t<threadCount; t++) {
//printf("Main: creating thread %ld\n", t);
rc = pthread_create(&thread[t], NULL, BusyWork, (void *)t);
}
}
void *test_fn(void *par)
{
cout << "in test_fn " << *(int *)par << endl;
return NULL;
}
int main (){
ThreadPool th(3);
int max = 100;
for (int i = 0; i < 20; i++) {
max = 100 * i;
th.dispatch_thread(test_fn, (void *)&max);
sleep(1);
}
}
The best pattern that I can think of is to use some sort of queue to pass messages to the thread-pool. These messages may contain functions to be run as well as some control messages for shutting down the thread-pool. As you already have guessed, the queue will have to be thread safe.
A simple approach for the queue is to use a fixed size array which you turn into a circular buffer. The array will have a Mutex to lock it when accessing the array and a Condition Variable to awaken the thread-pool thread.
When putting an item on the queue, we lock the mutex, add to the queue and then signal the thread-pool with the Condition Variable.
Each running thread in the thread pool will start life by locking the mutex and waiting on the condition variable (which automatically unlocks the mutex). When awoken it will remove an item from the queue and then unlock the mutex. It is now free to do its stuff. When finished it goes back to sleep until re-signaled.
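A minimal sketch of such a queue, using a fixed-size circular buffer protected by a pthread mutex and condition variable (the Task and TaskQueue names and the buffer size are illustrative; shutdown handling and a full-queue strategy are omitted):

#include <pthread.h>

#define QUEUE_SIZE 16

struct Task {
    void *(*fn)(void *);   // function the pool should run
    void *arg;             // its argument
};

struct TaskQueue {
    Task buf[QUEUE_SIZE];  // circular buffer
    int head, tail, count;
    pthread_mutex_t mutex;
    pthread_cond_t not_empty;
    TaskQueue() : head(0), tail(0), count(0) {
        pthread_mutex_init(&mutex, NULL);
        pthread_cond_init(&not_empty, NULL);
    }
};

// Producer side: e.g. called from dispatch_thread().
void queue_push(TaskQueue &q, Task t) {
    pthread_mutex_lock(&q.mutex);
    // NOTE: a full queue is not handled here; a real pool would wait or reject.
    q.buf[q.tail] = t;
    q.tail = (q.tail + 1) % QUEUE_SIZE;
    q.count++;
    pthread_cond_signal(&q.not_empty);   // wake one sleeping worker
    pthread_mutex_unlock(&q.mutex);
}

// Consumer side: e.g. called from each worker's loop (BusyWork).
Task queue_pop(TaskQueue &q) {
    pthread_mutex_lock(&q.mutex);
    while (q.count == 0)                 // guard against spurious wakeups
        pthread_cond_wait(&q.not_empty, &q.mutex);
    Task t = q.buf[q.head];
    q.head = (q.head + 1) % QUEUE_SIZE;
    q.count--;
    pthread_mutex_unlock(&q.mutex);
    return t;                            // caller runs t.fn(t.arg) outside the lock
}

With something like this, dispatch_thread would just build a Task and call queue_push, and BusyWork would loop calling queue_pop and invoking the returned function.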
As general advice, avoid sharing memory between threads because this either leads to race conditions (if access is not protected) or leads to interlocking (if access is locked). Also avoid locking a mutex when performing any long running operation such as calling new (malloc), delete (free) or any system calls.