I just started learning pthread condition variables, but the code below is not working as expected.
#include <iostream>
#include <pthread.h>
using namespace std;

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t count_threshold_cv = PTHREAD_COND_INITIALIZER;
int count = 0;

void *inc_func(void *arg)
{
    pthread_mutex_lock(&mutex);
    int c;
    while (1)
    {
        cin >> c;
        if (c == 8) {
            pthread_cond_signal(&count_threshold_cv);
            break;
        }
    }
    cout << "inc count reached 8" << endl;
    pthread_mutex_unlock(&mutex);
}

void *watch(void *arg)
{
    pthread_mutex_lock(&mutex);
    while (1) {
        pthread_cond_wait(&count_threshold_cv, &mutex);
        break;
    }
    cout << "This executed" << endl;
    pthread_mutex_unlock(&mutex);
}

int main()
{
    pthread_t id[26];
    pthread_create(&id[0], NULL, inc_func, NULL);
    pthread_create(&id[1], NULL, watch, NULL);
    int i;
    for (i = 0; i < 2; i++)
    {
        pthread_join(id[i], NULL);
    }
}
When the input is 8, this code hangs right after printing "inc count reached 8". I am not able to understand this.
Where is my understanding wrong?
The correct solution is to make the watch thread wait only if the condition it is waiting for has not occurred yet.
The condition appears to be c == 8 (since that is what is signalled), so you will need to make the c variable global so that it is shared between the threads, then change the watch thread to do:
void *watch(void *arg)
{
    pthread_mutex_lock(&mutex);
    while (c != 8) {
        pthread_cond_wait(&count_threshold_cv, &mutex);
    }
    cout << "This executed" << endl;
    pthread_mutex_unlock(&mutex);
    return 0;
}
Now it doesn't matter which thread runs first: your code is correct either way. This is the right way to use condition variables: in general, the waiter should do:
pthread_mutex_lock(&mutex);
while (!condition)
pthread_cond_wait(&cond, &mutex);
/* ... */
and the signaller should do:
pthread_mutex_lock(&mutex);
/* ... something that potentially makes condition true ... */
pthread_cond_broadcast(&cond);
pthread_mutex_unlock(&mutex);
The important thing here is that pthread_cond_signal unblocks at least one of the threads that are blocked on that condition variable, meaning threads that are currently blocked in a call to pthread_cond_wait on that same condition variable. If no thread is waiting on the condition variable at the moment pthread_cond_signal is called, basically nothing happens; the signal is not remembered.
Keeping this in mind, the flow of your program is something like this:
create and start first thread;
first thread calls inc_func(), which locks the mutex before anything else;
inc_func() keeps waiting for the number 8 to be entered, keeping the mutex locked all this time;
sometime during this, but most times probably after the inc_func managed to lock the mutex, the second thread is created;
the second thread also tries to lock the mutex right at the start of the function, and is blocked because the first thread already has it locked;
at some point, you enter 8 and the condition gets signaled from thread 1; thread 2 is not waiting on this condition yet, so it remains blocked trying to lock the mutex;
the first thread finally releases the mutex, so thread 2 locks it and then blocks on pthread_cond_wait.
At this point, thread 1 has already finished, thread 2 is blocked waiting for the condition to be signaled, and the main thread is waiting for it to finish. There is nobody else to signal that condition, so you have a hang.
For a quick fix that will probably work most of the time, you could try changing the order in which you start the threads (start the watch thread first). But keep in mind, and understand, why I emphasized probably and most of the time.
The correct way to fix this would be to rethink your locking strategy: lock the mutexes in the smallest scope possible and keep them locked for the shortest time possible.
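As an illustration of that, here is a minimal sketch (mine, not from the answer above) of inc_func with the mutex held only around the update of the shared state and the signal. It builds on the includes and globals from the question, assumes c has been made global as suggested earlier, and introduces value purely as a local name:

void *inc_func(void *arg)
{
    int value;
    while (1) {
        cin >> value;                // no lock held while blocked on input
        if (value == 8) {
            pthread_mutex_lock(&mutex);
            c = value;                               // make the condition true
            pthread_cond_signal(&count_threshold_cv);
            pthread_mutex_unlock(&mutex);
            break;
        }
    }
    cout << "inc count reached 8" << endl;
    return NULL;
}

With this, the watch thread shown above can lock the mutex and start waiting long before the number 8 is ever typed, and its while (c != 8) check still does the right thing if it only starts after the signal has been sent.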
Swap the threads' creation order:
pthread_create(&id[1],NULL,watch,NULL);
pthread_create(&id[0],NULL,inc_func,NULL);
If thread0 runs first, thread1 never gets past the mutex lock, so it does not start waiting. Then thread0 ends, and only then does thread1 execute pthread_cond_wait(), but there is no thread left to signal it.
If you start thread1 first, it gets to the waiting part.
Related
I have a while loop which is constantly locking and unlocking a mutex:
while(true)
{
    mtx.lock();
    mess_with_global_data();
    thread_opener();
    // ^^^
    // this opens a thread at random times. If a thread is already open then it checks to see
    // if it is done, if so close it.
    mtx.unlock();
    cv.notify_all();
}
Keep in mind that the variables are global.
The thread_opener function puts this function on a thread:
void foo()
{
    std::unique_lock<std::mutex> unique_mtx(mtx);
    cv.wait(unique_mtx); // this is where it gets stuck, sometimes it is able to get through
    if(some_global_var == 5)
    {
        some_global_var--;
    }
    unique_mtx.unlock();
}
The issue/problem:
The condition variable, even when notified from another thread, does not lock the mutex every time. Instead it takes a few seconds or even a few minutes before the condition variable can finally lock the mutex.
I think the problem is that the main thread is locking the mutex before the child thread can lock it itself. But shouldn't the wait method be the very first one to lock it? And if the problem is the main thread locking the mutex too quickly, how do I stop it and instead let the other thread, which was waiting, take the lock?
EDIT, code for thread_opener():
void thread_opener()
{
    // foo_thread is ptr to a std::thread object allocated on the heap
    if(!foo_thread && (rand() % 5) == 3)
    {
        foo_thread = new std::thread(&foo);
    }
    else if(foo_thread->joinable())
    {
        foo_thread->join();
        delete foo_thread;
        foo_thread = nullptr;
    }
}
This came from @DanielLangr:
As is, this is a perfect example of a deadlock. Consider the following possible ordering of operations: 1) the main thread creates a new thread which runs foo, 2) the main thread unlocks the mutex, 3) the main thread calls notify_all, 4) the other thread locks the mutex, 5) the other thread calls wait, which unlocks the mutex internally, 6) the main thread calls join inside thread_opener. Consequently, the main thread is blocked at join and the other thread is blocked at wait, and there is nothing that would unblock either of them.
This helped me develop a solution: add a boolean flag that is set at the end of foo() to mark that the thread is done, and check that flag instead of checking with std::thread::joinable().
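A rough sketch of that idea follows, repeating the question's globals so it is self-contained; foo_done is a name introduced here, and cv.wait is deliberately left without a predicate, as in the original, so it is still subject to the missed-notification issue discussed in the comment above:

#include <atomic>
#include <condition_variable>
#include <cstdlib>
#include <mutex>
#include <thread>

// Globals as in the question, repeated here so the sketch is self-contained.
std::mutex mtx;
std::condition_variable cv;
int some_global_var = 5;
std::thread* foo_thread = nullptr;
std::atomic<bool> foo_done{false};   // the new "thread is done" flag

void foo()
{
    std::unique_lock<std::mutex> unique_mtx(mtx);
    cv.wait(unique_mtx);
    if (some_global_var == 5)
        some_global_var--;
    unique_mtx.unlock();
    foo_done = true;                 // mark completion instead of relying on joinable()
}

void thread_opener()
{
    if (!foo_thread && (rand() % 5) == 3)
    {
        foo_done = false;
        foo_thread = new std::thread(&foo);
    }
    else if (foo_thread && foo_done) // only join once foo has actually finished
    {
        foo_thread->join();
        delete foo_thread;
        foo_thread = nullptr;
    }
}

Because the main loop only joins once foo has signalled completion through the flag, it can no longer block on join while foo is still stuck inside wait.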
Please see the following code:
std::mutex mutex;
std::condition_variable cv;
std::atomic<bool> terminate;

// Worker thread routine
void work() {
    while( !terminate ) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            cv.wait(lg);
            // Do something
        }
        // Do something
    }
}

// This function is called from the main thread
void terminate_worker() {
    terminate = true;
    cv.notify_all();
    worker_thread.join();
}
Can the following scenario happen?
The worker thread is waiting for signals.
The main thread calls terminate_worker().
The main thread sets the atomic variable terminate to true, and then signals the worker thread.
The worker thread now wakes up, does its job, and loads from terminate. At this step, the change to terminate made by the main thread is not yet visible, so the worker thread decides to wait for another signal.
Now a deadlock occurs...
I wonder whether this is ever possible. As I understand it, std::atomic only guarantees no race condition, but memory order is a different thing. Questions:
Is this possible?
If this is not possible, would it be possible if terminate were not an atomic variable but simply a bool? Or does atomicity have nothing to do with this?
If this is possible, what should I do?
Thank you.
I don't believe what you describe is possible, as cv.notify_all() afaik (please correct me if I'm wrong) synchronizes with wait(), so when the worker thread wakes up, it will see the change to terminate.
However:
A deadlock can happen the following way:
Worker thread (WT) determines that the terminate flag is still false.
The main thread (MT) sets the terminate flag and calls cv.notify_all().
As no one is currently waiting on the condition variable, that notification gets "lost/ignored".
MT calls join and blocks.
WT goes to sleep (cv.wait()) and blocks too.
Solution:
While you don't have to hold a lock while you call cv.notify, you
- have to hold a lock while you are modifying terminate (even if it is an atomic), and
- have to make sure that the check of the condition and the actual call to wait happen while you are holding the same lock.
This is why there is a form of wait that performs this check just before it sends the thread to sleep.
Corrected code (with minimal changes) could look like this:
// Worker thread routine
void work() {
    while( !terminate ) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            if (!terminate) {
                cv.wait(lg);
            }
            // Do something
        }
        // Do something
    }
}

// This function is called from the main thread
void terminate_worker() {
    {
        std::lock_guard<std::mutex> lg(mutex);
        terminate = true;
    }
    cv.notify_all();
    worker_thread.join();
}
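The form of wait mentioned above takes the check as a predicate and re-evaluates it with the lock held, both before going to sleep and after every wakeup, which also covers spurious wakeups. A minimal sketch of work() written that way, under the assumption that terminate is the only thing this condition variable signals (if the same cv is also used to hand new work to the worker, the predicate would have to cover that as well):

// Worker thread routine using the predicate overload of wait.
void work() {
    while (!terminate) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            // The predicate is checked under the lock, so a notification sent
            // from terminate_worker() while it holds the same mutex cannot be
            // missed, and spurious wakeups simply go back to sleep.
            cv.wait(lg, [] { return terminate.load(); });
            // Do something
        }
        // Do something
    }
}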
I have the following scenario:
condition_variable cv;
mutex mut;

// Thread 1:
void run() {
    while (true) {
        mut.lock();
        // create_some_data();
        mut.unlock();
        cv.notify_all();
    }
}

// Thread 2
void thread2() {
    mutex lockMutex;
    unique_lock<mutex> lock(lockMutex);
    while (running) {
        cv.wait(lock);
        mut.lock();
        // copy data
        mut.unlock();
        // process data
    }
}

// Thread 3, 4... - same as Thread 2
I run thread 1 all the time to get new data. The other threads wait on the condition_variable until new data is available, then copy it and do some work on it. The work performed by the threads differs in the time needed to finish; the idea is that a thread gets new data only when it has finished with the old data. Data arriving in the meantime is allowed to be "missed". I don't use a shared mutex (only to access the data) because I don't want the threads to depend on each other.
The above code works fine on Windows, but now that I run it on Ubuntu I notice that only one thread is notified when notify_all() is called and the others just hang on wait().
Why is that? Does Linux require a different approach to using condition_variable?
Your code exhibits UB immediately as it relocks the unique lock that the cv has relocked when it exits wait.
There are other problems, like not detecting spurious wakeups.
Finally, cv.notify_all() only notifies currently waiting threads. If a thread shows up later, no dice.
It's working by luck.
The mutex and the condition variable are two parts of the same construct. You can't mix and match mutexes and cvs.
try this:
void thread2() {
    unique_lock<mutex> lock(mut); // use the global mutex
    while (running) {
        cv.wait(lock);
        // mutex is already locked here
        // test condition. wakeups can be spurious
        // copy data
        lock.unlock();
        // process data
        lock.lock();
    }
}
Per this documentation:
Any thread that intends to wait on std::condition_variable has to:
- acquire a std::unique_lock on the same mutex as used to protect the shared variable;
- execute wait, wait_for, or wait_until. The wait operations atomically release the mutex and suspend the execution of the thread.
When the condition variable is notified, a timeout expires, or a spurious wakeup occurs, the thread is awakened, and the mutex is atomically reacquired. The thread should then check the condition and resume waiting if the wake up was spurious.
This code:
void thread2() {
    mutex lockMutex;
    unique_lock<mutex> lock(lockMutex);
    while (running) {
doesn't do that.
I'm new to C++ (on Windows) and threading and I'm currently trying to find a solution to my problem using mutexes, semaphores and events.
I'm trying to create a Barrier class with a constructor and a method called Enter. The Barrier class, with its only method Enter, is supposed to hold off any thread that enters it until a given number of threads have reached that method. The number of threads to wait for is received in the constructor.
My problem is how to use the locks to create that effect. What I need is something like a reversed semaphore: one that holds threads back until a count has been reached, rather than a regular semaphore, which lets threads in until a count is reached.
Any ideas as to how to go about this would be great.
Thanks,
Netanel.
Maybe:
In the ctor, store the limit count and create an empty semaphore.
When a thread calls Enter, lock a mutex first so you can twiddle the internals safely. Increment a thread count toward the limit count. If the limit has not yet been reached, release the mutex and wait on the semaphore. If the limit is reached, signal the semaphore (limit - 1) times in a loop, zero the thread count (ready for next time), release the mutex and return from Enter(). Any threads that were waiting on the semaphore, and are now ready/running, should just return from their Enter call.
The mutex prevents any released thread that loops around from 'getting in again' until all the threads that called 'Enter' and waited have been set running and the barrier is reset.
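A sketch of that scheme, using C++20's std::counting_semaphore as a stand-in for a Win32 semaphore (the class and member names are illustrative, not from the question):

#include <mutex>
#include <semaphore>   // C++20

class Barrier {
public:
    explicit Barrier(int limit) : limit_(limit), count_(0), sem_(0) {}

    void Enter() {
        mutex_.lock();
        if (++count_ < limit_) {
            mutex_.unlock();              // not the last thread: let others in, then wait
            sem_.acquire();               // blocks until the last thread releases us
        } else {
            count_ = 0;                   // reset, ready for the next use
            sem_.release(limit_ - 1);     // wake the (limit - 1) waiting threads
            mutex_.unlock();
        }
    }

private:
    const int limit_;
    int count_;
    std::counting_semaphore<> sem_;
    std::mutex mutex_;
};

A Win32 semaphore used through CreateSemaphore, WaitForSingleObject, and ReleaseSemaphore with a release count of limit - 1 should map onto the same structure.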
You can implement it with condition variable.
Here is an example:
I create 25 threads and launch them running the WorkerThread function.
The condition I am checking to block/unblock the threads is whether the number of threads in the section is less than 2.
(I have added some asserts to show what my code does.)
My code simply sleeps in the critical section and afterwards decreases the number of threads in the critical section.
I also added a mutex for the cout to have clean messages.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <vector>
#include <atomic>
#include <cassert> /* assert */
using namespace std;
std::mutex m;
atomic<int> NumThreadsInCritialSection=0;
int MaxNumberThreadsInSection=2;
std::condition_variable cv;
mutex coutMutex;
int WorkerThread()
{
    // Wait until main() sends data
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{return NumThreadsInCritialSection<MaxNumberThreadsInSection;});
    }
    assert (NumThreadsInCritialSection<MaxNumberThreadsInSection);
    assert (NumThreadsInCritialSection>=0);
    NumThreadsInCritialSection++;
    {
        std::unique_lock<std::mutex> lk(coutMutex);
        cout<<"NumThreadsInCritialSection= "<<NumThreadsInCritialSection<<endl;
    }
    std::this_thread::sleep_for(std::chrono::seconds(5));
    NumThreadsInCritialSection--;
    {
        std::unique_lock<std::mutex> lk(coutMutex);
        cout<<"NumThreadsInCritialSection= "<<NumThreadsInCritialSection<<endl;
    }
    cv.notify_one();
    return 0;
}

int main()
{
    vector<thread> vWorkers;
    for (int i=0;i<25;++i)
    {
        vWorkers.push_back(thread(WorkerThread));
    }
    for (auto j=vWorkers.begin(); j!=vWorkers.end(); ++j)
    {
        j->join();
    }
    return 0;
}
Hope that helps, tell me if you have any questions, I can comment or change my code.
Pseudocode outline might look like this:
void Enter()
{
    Increment counter (atomically or with mutex)
    if (counter >= desired_count)
    {
        condition_met = true; (protected if bool writes aren't atomic on your architecture)
        cond_broadcast(blocking_cond_var);
    }
    else
    {
        Do a normal cond_wait loop-plus-predicate-check (waiting for the broadcast and
        checking condition_met each iteration to protect against spurious wakeups).
    }
}
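For reference, one way that outline might look with std::condition_variable (names are illustrative; notify_all plays the role of cond_broadcast, and this version is one-shot, matching the outline, since there is no reset):

#include <condition_variable>
#include <mutex>

class Barrier {
public:
    explicit Barrier(int desired_count) : desired_count_(desired_count) {}

    void Enter() {
        std::unique_lock<std::mutex> lock(mutex_);
        if (++counter_ >= desired_count_) {
            condition_met_ = true;          // protected by mutex_
            cond_.notify_all();             // wake every waiter
        } else {
            // Normal wait loop with a predicate check, which also protects
            // against spurious wakeups.
            cond_.wait(lock, [this] { return condition_met_; });
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    int counter_ = 0;
    bool condition_met_ = false;
    const int desired_count_;
};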
I have a function that is invoked from the main thread:
void create_thread() {
    pthread_t bg_thread;
    pthread_create(&bg_thread, NULL, run_in_background, NULL);

    //wait here
    pthread_mutex_lock(&MAIN_MUTEX);
    pthread_cond_wait(&wakeUpMainThread, &MAIN_MUTEX);
    pthread_mutex_unlock(&MAIN_MUTEX);

    pthread_cond_signal(wakeUpBgThread);
}
Here is the short version of the function that runs in background thread:
void* run_in_background(void* v) {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&cond, NULL);

    //NOTE: wakeUpBgThread == cond
    save_condition_and_mutex(&cond, &mutex);

    pthread_mutex_lock(&mutex);
    {
        pthread_cond_signal(&wakeUpMainThread);
        while( run_condition ) {
            pthread_cond_wait(&cond, &mutex);
            do_smth();
        }
    }
    pthread_mutex_unlock(&mutex);

    pthread_mutex_destroy(&mutex);
    pthread_cond_destroy(&cond);
    pthread_exit(NULL);
}
So the goal is:
1. Create a thread in the main one.
2. Make the main thread sleep until the signal from that thread.
3. Make the background thread sleep until the signal from the main thread.
4. Invoke the background thread from the main one.
The problem is: sometimes after the
pthread_cond_signal(&wakeUpMainThread);
the scheduler switches to the main thread immediately and the main thread fires the wake-up signal for the background thread. After this, the scheduler switches back to the background thread, which then starts waiting for a signal that has already been fired, so it sleeps forever.
Question: is there any way to force the background thread to execute the code up to
pthread_cond_wait(&cond, &mutex);
before the main thread signals it?
Your call to pthread_mutex_lock in create_thread needs to take place before pthread_create, not after it. Otherwise you have a race condition.
Use a semaphore? Semaphore signals are not lost - they just increment the count, so the background thread will run again after the semaphore is signaled, even if it has not actually got around to waiting on it yet.
Rgds,
Martin
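A sketch of that semaphore idea with POSIX semaphores (sem_init, sem_post, sem_wait); bg_thread_started is a name introduced here, and everything not shown is assumed unchanged from the question:

#include <pthread.h>
#include <semaphore.h>

sem_t bg_thread_started;             // shared with the background thread

void *run_in_background(void *v) {
    /* ... set up the condition and mutex as before ... */
    sem_post(&bg_thread_started);    // replaces pthread_cond_signal(&wakeUpMainThread);
                                     // the post is remembered even if nobody is waiting yet
    /* ... wait for work as before ... */
    return NULL;
}

void create_thread() {
    pthread_t bg_thread;
    sem_init(&bg_thread_started, 0, 0);
    pthread_create(&bg_thread, NULL, run_in_background, NULL);
    sem_wait(&bg_thread_started);    // cannot miss the post, whichever thread runs first
    /* ... signal the background thread as before ... */
}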
It sounds like your best bet is to use a condition. Have a mutex and a condition. Main initializes both, grabs the mutex, creates the thread, then goes to sleep on the condition. The child grabs the lock (after main waits on the condition), does the work (or alternatively does the work and then grabs the lock), and then signals the condition (you can decide whether to release the lock before or after the signal; the important bit is that you grabbed it). Main then wakes up and continues processing.
pthread_cond_wait() and friends are what you want to look at.
You don't lock the mutex before your signal on the main thread. If you want predictable behavior, you should lock the same mutex both before the wait call and before the signal call.
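Putting those two points together, here is a sketch (mine, not from the answers above) with a flag protected by the same mutex on both sides; bg_ready is a name introduced here, and the per-thread setup is assumed unchanged from the question:

int bg_ready = 0;   // protected by MAIN_MUTEX

void *run_in_background(void *v) {
    /* ... per-thread setup as before ... */
    pthread_mutex_lock(&MAIN_MUTEX);
    bg_ready = 1;                                 // make the condition true
    pthread_cond_signal(&wakeUpMainThread);
    pthread_mutex_unlock(&MAIN_MUTEX);
    /* ... wait for work as before ... */
    return NULL;
}

void create_thread() {
    pthread_t bg_thread;
    pthread_mutex_lock(&MAIN_MUTEX);              // lock before creating the thread
    pthread_create(&bg_thread, NULL, run_in_background, NULL);
    while (!bg_ready)                             // predicate guards against lost and spurious wakeups
        pthread_cond_wait(&wakeUpMainThread, &MAIN_MUTEX);
    pthread_mutex_unlock(&MAIN_MUTEX);
    /* ... */
}

Because the background thread can only set bg_ready and signal while holding MAIN_MUTEX, and the main thread checks bg_ready under the same mutex before deciding to wait, the wake-up cannot be lost no matter which thread runs first.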