I have a thread that is doing "work"; it is supposed to report progress when a condition variable notifies it. This thread waits on condition variables.
Another thread waits for x milliseconds and then notifies the condition variable so the worker can proceed.
I have 5 condition variables (this is an exercise for school), and once each one gets notified, work progress is supposed to be reported.
The problem I'm having is that thread 2, the one that is supposed to notify thread 1, goes through all 5 checkPoints while effectively only one notify gets acted on. So I end up in a situation where progress is at 20% at the end and thread 1 is waiting for another notify, but thread 2 has already finished all of its notifies.
Where is the flaw in my implementation of this logic?
Code below:
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

using namespace std;

class Program {
public:
    Program() {
        m_progress = 0;
        m_check = false;
    }

    bool isWorkReady() { return m_check; }

    void loopWork() {
        cout << "Working ... : " << endl;
        work(m_cv1);
        work(m_cv2);
        work(m_cv3);
        work(m_cv4);
        work(m_cv5);
        cout << "\nFinished!" << endl;
    }

    void work(condition_variable &cv) {
        unique_lock<mutex> mlock(m_mutex);
        cv.wait(mlock, bind(&Program::isWorkReady, this));
        m_progress++;
        cout << " ... " << m_progress * 20 << "%" << endl;
        m_check = false;
    }

    void checkPoint(condition_variable &cv) {
        lock_guard<mutex> guard(m_mutex);
        cout << " < Checking >" << m_progress << endl;
        this_thread::sleep_for(chrono::milliseconds(300));
        m_check = true;
        cv.notify_one();
    }

    void loopCheckPoints() {
        checkPoint(m_cv1);
        checkPoint(m_cv2);
        checkPoint(m_cv3);
        checkPoint(m_cv4);
        checkPoint(m_cv5);
    }

private:
    mutex m_mutex;
    condition_variable m_cv1, m_cv2, m_cv3, m_cv4, m_cv5;
    int m_progress;
    bool m_check;
};

int main() {
    Program program;

    thread t1(&Program::loopWork, &program);
    thread t2(&Program::loopCheckPoints, &program);

    t1.join();
    t2.join();

    return 0;
}
The loopCheckPoints() thread holds a lock for some time, sets m_check then releases the lock and immediately goes on to grab the lock again. The loopWork() thread may not have woken up in between to react to the m_check change.
Never hold locks for long times. Be as quick as possible. If you can't get the program to work without adding sleeps, you have a problem.
One way to fix this would be to check that the worker has actually set m_check back to false:
void work(condition_variable& cv) {
    { // lock scope
        unique_lock<mutex> mlock(m_mutex);
        cv.wait(mlock, [this] { return m_check; });
        m_progress++;
        cout << " ... " << m_progress * 20 << "%" << endl;
        m_check = false;
    }
    // there's no need to hold the lock when notifying
    cv.notify_one(); // notify that we set it back to false
}

void checkPoint(condition_variable& cv) {
    // if you are going to sleep, do it without holding the lock
    // this_thread::sleep_for(chrono::milliseconds(300));
    { // lock scope
        lock_guard<mutex> guard(m_mutex);
        cout << "<Checking> " << m_progress << endl;
        m_check = true;
    }
    cv.notify_one(); // no need to hold the lock here

    {
        // Check that m_check is set back to false
        unique_lock<mutex> mlock(m_mutex);
        cv.wait(mlock, [this] { return not m_check; });
    }
}
Where is the flaw in my implementation of this logic?
cv.notify_one does not guarantee that the code after cv.wait(mlock, bind(&Program::isWorkReady, this)); continues immediately, so it is perfectly valid for multiple checkPoint calls to be executed before the code after cv.wait continues.
But after the cv.wait you set m_check = false;, so if there is no further checkPoint execution remaining that would set m_check = true; again, your work function becomes stuck.
Instead of m_check being a bool, you could think about making it a counter that is incremented in checkPoint and decremented in work.
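A minimal sketch of that counter idea, assuming an int member m_checkCount (initialized to 0) replaces m_check; only the two affected functions are shown, and the 300 ms delay is moved outside the lock as the other answer suggests:

    void work(condition_variable &cv) {
        unique_lock<mutex> mlock(m_mutex);
        // Even if the notification fired before we got here, the counter remembers it.
        cv.wait(mlock, [this] { return m_checkCount > 0; });
        --m_checkCount;
        m_progress++;
        cout << " ... " << m_progress * 20 << "%" << endl;
    }

    void checkPoint(condition_variable &cv) {
        this_thread::sleep_for(chrono::milliseconds(300)); // sleep without holding the lock
        {
            lock_guard<mutex> guard(m_mutex);
            cout << " < Checking >" << m_progress << endl;
            ++m_checkCount; // record this checkpoint even if nobody is waiting yet
        }
        cv.notify_one();
    }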
Related
I am trying to create a sort of threadpool that runs functions on separate threads and only starts a new iteration when all functions have finished.
map<size_t, bool> status_map;
vector<thread> threads;
condition_variable cond;

bool are_all_ready() {
    mutex m;
    unique_lock<mutex> lock(m);
    for (const auto& [_, status] : status_map) {
        if (!status) {
            return false;
        }
    }
    return true;
}

void do_little_work(size_t id) {
    this_thread::sleep_for(chrono::seconds(1));
    cout << id << " did little work..." << endl;
}

void do_some_work(size_t id) {
    this_thread::sleep_for(chrono::seconds(2));
    cout << id << " did some work..." << endl;
}

void do_much_work(size_t id) {
    this_thread::sleep_for(chrono::seconds(4));
    cout << id << " did much work..." << endl;
}

void run(const function<void(size_t)>& function, size_t id) {
    while (true) {
        mutex m;
        unique_lock<mutex> lock(m);
        cond.wait(lock, are_all_ready);
        status_map[id] = false;
        cond.notify_all();
        function(id);
        status_map[id] = true;
        cond.notify_all();
    }
}

int main() {
    threads.push_back(thread(run, do_little_work, 0));
    threads.push_back(thread(run, do_some_work, 1));
    threads.push_back(thread(run, do_much_work, 2));

    for (auto& thread : threads) {
        thread.join();
    }

    return EXIT_SUCCESS;
}
I expect to get the output:
0 did little work...
1 did some work...
2 did much work...
0 did little work...
1 did some work...
2 did much work...
.
.
.
after the respective timeouts but when I run the program I only get
0 did little work...
0 did little work...
.
.
.
I also have to say that I'm rather new to multithreading, but in my understanding the condition_variable should do the job of blocking every thread until the predicate returns true. And in my case, are_all_ready should return true after all functions have returned.
There are several ways to do this.
Easiest in my opinion would be a C++20 std::barrier, which says, "wait until all of N threads have arrived and are waiting here."
#include <barrier>

std::barrier synch_workers(3);
....

void run(const std::function<void(size_t)>& func, size_t id) {
    while (true) {
        synch_workers.arrive_and_wait(); // wait for all three to be ready
        func(id);
    }
}
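For reference, here is a self-contained sketch of that approach; the single do_work helper with per-thread sleep lengths is my stand-in for the question's three do_*_work functions:

#include <barrier>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

std::barrier synch_workers(3); // one arrival per worker, per batch

void do_work(std::size_t id, std::chrono::seconds d) {
    std::this_thread::sleep_for(d);
    std::cout << id << " did work...\n";
}

void run(std::size_t id, std::chrono::seconds d) {
    while (true) {
        synch_workers.arrive_and_wait(); // block until all three workers are ready
        do_work(id, d);                  // then run this worker's share of the batch
    }
}

int main() {
    std::vector<std::thread> threads;
    threads.emplace_back(run, 0, std::chrono::seconds(1));
    threads.emplace_back(run, 1, std::chrono::seconds(2));
    threads.emplace_back(run, 2, std::chrono::seconds(4));
    for (auto& t : threads) t.join(); // never returns; the workers loop forever, as in the question
}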
Cruder and less efficient, but equally effective, would be to construct and join() new sets of three worker threads for each "batch" of work:
int main(...) {
    std::vector<thread> threads;
    ...
    while (flag_running) {
        threads.push_back(...);
        threads.push_back(...);
        ...
        for (auto& thread : threads) {
            thread.join();
        }
        threads.clear();
    }
Aside
I'd suggest you revisit some core synchronization concepts, however. You are using new mutexes when you want to re-use a shared one. The scope of your unique_lock isn't quite right.
Now, your idea to track worker thread "busy/idle" state in a map is straightforward, but cannot correctly coordinate "batches" or "rounds" of work that must be begun at the same time.
If a worker sees in the map that two of three threads, including itself, are "idle", what does that mean? Is a "batch" of work concluding — i.e., two workers are waiting for a tardy third? Or has a batch just begun — i.e., the two idle threads are tardy and had better get to work like their more eager peer?
The threads cannot know the answer without keeping track of the current batch of work, which is what a barrier (or its more complex cousin the phaser) does under the hood.
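To illustrate the "under the hood" remark, here is a hedged sketch of a minimal generation-counting barrier built from a mutex and a condition variable (simple_barrier is an illustrative name, not a standard type):

#include <condition_variable>
#include <cstddef>
#include <mutex>

class simple_barrier {
    std::mutex m;
    std::condition_variable cv;
    std::size_t threshold;      // number of threads per batch
    std::size_t count;          // arrivals still missing in the current batch
    std::size_t generation = 0; // which batch we are in
public:
    explicit simple_barrier(std::size_t n) : threshold(n), count(n) {}

    void arrive_and_wait() {
        std::unique_lock<std::mutex> lock(m);
        std::size_t gen = generation;
        if (--count == 0) {          // last thread of the batch arrives
            ++generation;            // open the next batch
            count = threshold;
            cv.notify_all();         // release everyone waiting on this batch
        } else {
            cv.wait(lock, [&] { return gen != generation; });
        }
    }
};

The generation counter is exactly the "which batch are we in" bookkeeping that the map-based approach is missing.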
As-is, your program has a crash (UB) due to concurrent access to status_map.
When you do:
void run(const function<void(size_t)>& function, size_t id)
{
    ...
    mutex m;
    unique_lock<mutex> lock(m);
    ...
    status_map[id] = false;
the mutex and the lock created there are local variables, one set per thread, and as such independent. So they don't prevent multiple threads from writing to status_map at once, and that is what crashes. That's what I get on my machine.
Now, if you make the mutex static, only one thread can access the map at once. But that also makes it so that only one thread runs at a time. With this I see 0, 1 and 2 running, but only one at a time, and with a strong tendency for the thread that has just run to run again.
My suggestion: go back to the drawing board and make it simpler. All threads run at once, a single mutex protects the map, the mutex is locked only to access the map, and ... well, in fact, I don't even see the need for a condition variable.
e.g. what is wrong with:
#include <chrono>
#include <cstdlib>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

using namespace std;

vector<thread> threads;

void do_little_work(size_t id) {
    this_thread::sleep_for(chrono::seconds(1));
    cout << id << " did little work..." << endl;
}

void do_some_work(size_t id) {
    this_thread::sleep_for(chrono::seconds(2));
    cout << id << " did some work..." << endl;
}

void do_much_work(size_t id) {
    this_thread::sleep_for(chrono::seconds(4));
    cout << id << " did much work..." << endl;
}

void run(const function<void(size_t)>& function, size_t id) {
    while (true) {
        function(id);
    }
}

int main() {
    threads.push_back(thread(run, do_little_work, 0));
    threads.push_back(thread(run, do_some_work, 1));
    threads.push_back(thread(run, do_much_work, 2));

    for (auto& thread : threads) {
        thread.join();
    }

    return EXIT_SUCCESS;
}
I am currently trying to learn how to use a condition_variable for thread synchronization. For testing, I have made the demo application shown below. When I start it, it runs into a deadlock. I know the location where this happens, but I'm unable to understand why the deadlock occurs.
I know that a condition_variable's wait function will automatically unlock the mutex when the condition is not true, so the main thread should not be blocked in the second pass. But that is exactly what happens.
Could anybody explain why?
#include <thread>
#include <condition_variable>
#include <iostream>

bool flag = false;
std::mutex g_mutex;
std::condition_variable cv;

void threadProc()
{
    std::unique_lock<std::mutex> lck(g_mutex);
    while (true)
    {
        static int count = 0;
        std::cout << "wait for flag" << ++count << std::endl;
        cv.wait(lck, []() {return flag; }); // !!!It will block at the second round
        std::cout << "flag is true " << count << std::endl;
        flag = false;
        lck.unlock();
    }
}

int main(int argc, char *argv[])
{
    std::thread t(threadProc);
    while (true)
    {
        static int count = 0;
        {
            std::lock_guard<std::mutex> guard(g_mutex); // !!!It will block at the second round
            flag = true;
            std::cout << "set flag " << ++count << std::endl;
        }
        cv.notify_one();
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    t.join();
    return 0;
}
I know that a condition_variable's wait function will automatically unlock the mutex when the condition is not true.
Um... yes... Just to be absolutely clear, cv.wait(lck, f) does this:
while(! f()) {
    cv.wait(lck);
}
And each call to cv.wait(lck) will:
unlock lck,
wait until some other thread calls cv.notify_one() or cv.notify_all(),
re-lock lck, and then
return.
You can fix the problem by moving the unique_lock(...) statement inside the while loop. As it is now, on round 2 you're attempting to unlock lck while it is not in a locked state, since after round 1 you never locked it again.
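In code, that fix might look like this (a sketch; only threadProc changes, and the manual lck.unlock() is no longer needed because the lock is released when lck goes out of scope at the end of each round):

void threadProc()
{
    while (true)
    {
        static int count = 0;
        std::unique_lock<std::mutex> lck(g_mutex); // freshly locked on every round
        std::cout << "wait for flag" << ++count << std::endl;
        cv.wait(lck, []() { return flag; });       // wait() unlocks and re-locks an owned lock
        std::cout << "flag is true " << count << std::endl;
        flag = false;
    } // lck is destroyed here, releasing g_mutex
}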
I'm trying to write a program with C++11 in which multiple threads are run, and during each cycle the main thread waits for each thread to finish. The program below is a test of this concept.
Apparently I'm missing something trivial in my implementation, as it looks like I'm experiencing a deadlock (not always, just during some random runs).
#include <iostream>
#include <stdio.h>
#include <thread>
#include <chrono>
#include <condition_variable>
#include <mutex>

using namespace std;

class Producer
{
public:
    Producer(int a_id):
        m_id(a_id),
        m_ready(false),
        m_terminate(false)
    {
        m_id = a_id;
        m_thread = thread(&Producer::run, this);
        // ensure thread is available before it is started
        this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    ~Producer() {
        terminate();
        m_thread.join();
    }

    void start() {
        //cout << "start " << m_id << endl;
        unique_lock<mutex> runLock(m_muRun);
        m_ready = true;
        runLock.unlock();
        m_cond.notify_all();
    }

    void wait() {
        cout << "wait " << m_id << endl;
        unique_lock<decltype(m_muRun)> runLock(m_muRun);
        m_cond.wait(runLock, [this]{return !m_ready;});
    }

    void terminate() {
        m_terminate = true;
        start();
    }

    void run() {
        do {
            unique_lock<decltype(m_muRun)> runLock(m_muRun);
            m_cond.wait(runLock, [this]{return m_ready;});
            if (!m_terminate) {
                cout << "running thread: " << m_id << endl;
            } else {
                cout << "exit thread: " << m_id << endl;
            }
            runLock.unlock();
            m_ready = false;
            m_cond.notify_all();
        } while (!m_terminate);
    }

private:
    int m_id;
    bool m_ready;
    bool m_terminate;
    thread m_thread;
    mutex m_muRun;
    condition_variable m_cond;
};

int main()
{
    Producer producer1(1);
    Producer producer2(2);
    Producer producer3(3);

    for (int i=0; i<10000; ++i) {
        cout << i << endl;
        producer1.start();
        producer2.start();
        producer3.start();

        producer1.wait();
        producer2.wait();
        producer3.wait();
    }

    cout << "exit" << endl;
    return 0;
}
The program's output when the deadlock is occurring:
....
.......
running thread: 2
running thread: 1
wait 1
wait 2
wait 3
running thread: 3
Looking at the program's output when the deadlock occurs, I suspect the problem is that sometimes the Producer::wait function is called before the corresponding thread has actually started, i.e. Producer::start should have triggered the start (a.k.a. the unlocking of the mutex), but it has not yet been picked up by the thread's run method (Producer::run). (NB: I'm not 100% sure of this!) I'm a bit lost here; hopefully somebody can provide some help.
You have a race condition in this code:
runLock.unlock();
m_ready = false;
The m_ready variable must always be protected by the mutex for proper synchronization. And it is completely unnecessary to wait for the thread to start with this_thread::sleep_for() - proper synchronization would take care of that as well, so you can simply remove that line. Note that this is a pretty inefficient way of doing multithreading - there should be a thread pool instead of individual objects, each with its own mutex and condition variable.
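A minimal sketch of run() with that fix applied (everything else in the class stays the same; m_ready is now only modified while m_muRun is held):

    void run() {
        do {
            unique_lock<decltype(m_muRun)> runLock(m_muRun);
            m_cond.wait(runLock, [this]{ return m_ready; });
            if (!m_terminate) {
                cout << "running thread: " << m_id << endl;
            } else {
                cout << "exit thread: " << m_id << endl;
            }
            m_ready = false;  // still under the lock
            runLock.unlock(); // release before notifying the waiting main thread
            m_cond.notify_all();
        } while (!m_terminate);
    }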
I'm trying to understand C++ multithreading and synchronization between many threads.
Thus I created 2 threads: the first one increments a value and the second one decrements it. What I can't understand is why the resulting value after execution is different from the initial one, since I added to and subtracted from the same value.
static unsigned int counter = 100;
static bool alive = true;
static Lock lock;
std::mutex mutex;

void add() {
    while (alive)
    {
        mutex.lock();
        counter += 10;
        std::cout << "Counter Add = " << counter << std::endl;
        mutex.unlock();
    }
}

void sub() {
    while (alive)
    {
        mutex.lock();
        counter -= 10;
        std::cout << "Counter Sub = " << counter << std::endl;
        mutex.unlock();
    }
}

int main()
{
    std::cout << "critical section value at the start " << counter << std::endl;

    std::thread tAdd(add);
    std::thread tSub(sub);

    Sleep(1000);
    alive = false;

    tAdd.join();
    tSub.join();

    std::cout << "critical section value at the end " << counter << std::endl;

    return 0;
}
Output
critical section value at the start 100
critical section value at the end 220
So what I need is a way to keep my value as it is, I mean keep counter equal to 100, using those two threads.
The problem is that both threads will get into an "infinite" loop for 1 second and they will get greedy with the mutex. Do a print in both functions and see which thread gets the lock more often.
Mutexes are used to synchronize access to resources so that threads will not read/write incomplete or corrupted data, not create a neat sequence.
If you want to keep that value at 100 at the end of execution you need to use a semaphore so that there will be an ordered sequence of access to the variable.
I think what you want is to signal to the subtracting thread that you have just successfully added in the add thread, and vice versa. You'll additionally have to communicate which thread's turn is next. A naive solution:
bool shouldAdd = true;

void add() {
    while( alive ) {
        if( shouldAdd ) {
            // prefer lock guards over lock() and unlock() for exception safety
            std::lock_guard<std::mutex> lock{mutex};
            counter += 10;
            std::cout << "Counter Add = " << counter << std::endl;
            shouldAdd = false;
        }
    }
}

void sub() {
    while( alive ) {
        if( !shouldAdd ) {
            std::lock_guard<std::mutex> lock{mutex};
            counter -= 10;
            std::cout << "Counter Sub = " << counter << std::endl;
            shouldAdd = true;
        }
    }
}
Now add() will busy wait for sub() to do its job before it will try and acquire the lock again.
To prevent busy waiting, you might choose a condition variable instead of trying to use only a single mutex. You can wait() on the condition variable before you add or subtract, and notify() the waiting thread afterwards.
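A hedged sketch of that condition-variable version (the cv global and the locking around alive in main are my additions; as with the busy-waiting version, the final value can end up at 100 or 110 depending on where the loop is stopped):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

static unsigned int counter = 100;
static bool alive = true;
static bool shouldAdd = true;
std::mutex mutex;
std::condition_variable cv; // added for the handshake

void add() {
    while (true) {
        std::unique_lock<std::mutex> lock{mutex};
        cv.wait(lock, [] { return shouldAdd || !alive; }); // sleep instead of spinning
        if (!alive) break;
        counter += 10;
        std::cout << "Counter Add = " << counter << std::endl;
        shouldAdd = false;
        lock.unlock();
        cv.notify_one(); // hand the turn to sub()
    }
}

void sub() {
    while (true) {
        std::unique_lock<std::mutex> lock{mutex};
        cv.wait(lock, [] { return !shouldAdd || !alive; });
        if (!alive) break;
        counter -= 10;
        std::cout << "Counter Sub = " << counter << std::endl;
        shouldAdd = true;
        lock.unlock();
        cv.notify_one(); // hand the turn back to add()
    }
}

int main() {
    std::thread tAdd(add);
    std::thread tSub(sub);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    {
        std::lock_guard<std::mutex> lock{mutex};
        alive = false; // written under the lock, like the other shared state
    }
    cv.notify_all();   // wake both threads so they can exit
    tAdd.join();
    tSub.join();
    std::cout << "counter at the end " << counter << std::endl;
    return 0;
}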
I was trying to write code for the Producer-Consumer problem. The code below works fine most of the time but sometimes gets stuck because of a "lost wake-up" (I guess). I tried a thread sleep() but it didn't work. What modification is needed to handle this case in my code? Can a semaphore be helpful here? If yes, how would I implement it here?
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <iostream>

using namespace std;

int product = 0;
boost::mutex mutex;
boost::condition_variable cv;
boost::condition_variable pv;
bool done = false;

void consumer(){
    while(done==false){
        //cout << "start c" << endl
        boost::mutex::scoped_lock lock(mutex);
        cv.wait(lock);
        //cout << "wakeup c" << endl;
        if (done==false)
        {
            cout << product << endl;
            //cout << "notify c" << endl;
            pv.notify_one();
        }
        //cout << "end c" << endl;
    }
}

void producer(){
    for(int i=0;i<10;i++){
        //cout << "start p" << endl;
        boost::mutex::scoped_lock lock(mutex);
        boost::this_thread::sleep(boost::posix_time::microseconds(50000));
        ++product;
        //cout << "notify p" << endl;
        cv.notify_one();
        pv.wait(lock);
        //cout << "wakeup p" << endl;
    }
    //cout << "end p" << endl;
    cv.notify_one();
    done = true;
}

int main()
{
    int t = 1000;
    while(t--){
        /*
          This is not perfect, and is prone to a subtle issue called the lost wakeup
          (for example, producer calls notify() on the condition, but client hasn't really
          called wait() yet, then both will wait() indefinitely.)
        */
        boost::thread consumerThread(&consumer);
        boost::thread producerThread(&producer);
        producerThread.join();
        consumerThread.join();
        done = false;
        //cout << "process end" << endl;
    }
    cout << "done" << endl;
    getchar();
    return 0;
}
Yes, you want a way to know (in the consumer) that you "missed" a signal. A semaphore can help. There's more than one way to skin a cat, so here's my simple take on it (using just C++11 standard library features):
class semaphore
{
private:
    std::mutex mtx;
    std::condition_variable cv;
    int count;

public:
    semaphore(int count_ = 0) : count(count_) { }

    void notify()
    {
        std::unique_lock<std::mutex> lck(mtx);
        ++count;
        cv.notify_one();
    }

    void wait() { return wait([]{}); } // no-op action

    template <typename F>
    auto wait(F&& func = []{}) -> decltype(std::declval<F>()())
    {
        std::unique_lock<std::mutex> lck(mtx);
        while(count == 0){
            cv.wait(lck);
        }
        count--;
        return func();
    }
};
For convenience, I added a wait() overload that takes a function to be executed under the lock. This makes it possible for the consumer to operate the 'semaphore' without ever manually operating the lock (and still get the value of product without data races):
semaphore sem;

void consumer() {
    do {
        bool stop = false;
        int received_product = sem.wait([&stop] { stop = done; return product; });
        if (stop)
            break;
        std::cout << received_product << std::endl;

        std::unique_lock<std::mutex> lock(processed_mutex);
        processed_signal.notify_one();
    } while(true);
}
A fully working demo (Live on Coliru):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <cassert>

class semaphore
{
private:
    std::mutex mtx;
    std::condition_variable cv;
    int count;

public:
    semaphore(int count_ = 0) : count(count_) { }

    void notify()
    {
        std::unique_lock<std::mutex> lck(mtx);
        ++count;
        cv.notify_one();
    }

    void wait() { return wait([]{}); } // no-op action

    template <typename F>
    auto wait(F&& func = []{}) -> decltype(std::declval<F>()())
    {
        std::unique_lock<std::mutex> lck(mtx);
        while(count == 0){
            cv.wait(lck);
        }
        count--;
        return func();
    }
};

semaphore sem;
int product = 0;
std::mutex processed_mutex;
std::condition_variable processed_signal;
bool done = false;

void consumer(int check) {
    do {
        bool stop = false;
        int received_product = sem.wait([&stop] { stop = done; return product; });
        if (stop)
            break;
        std::cout << received_product << std::endl;
        assert(++check == received_product);

        std::unique_lock<std::mutex> lock(processed_mutex);
        processed_signal.notify_one();
    } while(true);
}

void producer() {
    std::unique_lock<std::mutex> lock(processed_mutex);
    for(int i = 0; i < 10; ++i) {
        ++product;
        sem.notify();
        processed_signal.wait(lock);
    }
    done = true;
    sem.notify();
}

int main() {
    int t = 1000;
    while(t--) {
        std::thread consumerThread(&consumer, product);
        std::thread producerThread(&producer);
        producerThread.join();
        consumerThread.join();
        done = false;
        std::cout << "process end" << std::endl;
    }
    std::cout << "done" << std::endl;
}
You seem to ignore that the variable done is also shared state, to the same extent as product. This can lead to several race conditions. In your case, I see at least one scenario where consumerThread makes no progress:
The loop executes as intended
consumer executes, and is waiting at cv.wait(lock);
producer has finished the for loop, notifies the consumer, and is preempted
consumer wakes up, reads done == false, outputs product, reads done == false again, and waits on the condition
producer sets done to true and exits
consumer is stuck forever
To avoid these kinds of issues you should be holding a lock when reading or writing done. By the way, your implementation is quite sequential, i.e. the producer and the consumer can only process a single piece of data at a time...
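A hedged sketch of that idea applied to the question's code: every read and write of done (and product) now happens under mutex, and an added item_ready flag together with the predicate overload of wait() also closes the lost-wakeup window. The item_ready flag is my own addition, not part of the original.

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <iostream>

int product = 0;
bool item_ready = false;      // added flag: "there is an unconsumed product"
bool done = false;
boost::mutex mutex;
boost::condition_variable cv; // producer -> consumer
boost::condition_variable pv; // consumer -> producer

void producer() {
    for (int i = 0; i < 10; i++) {
        boost::mutex::scoped_lock lock(mutex);
        ++product;
        item_ready = true;
        cv.notify_one();
        pv.wait(lock, [] { return !item_ready; }); // wait until it has been consumed
    }
    boost::mutex::scoped_lock lock(mutex);
    done = true;                                   // written under the lock
    cv.notify_one();
}

void consumer() {
    while (true) {
        boost::mutex::scoped_lock lock(mutex);
        cv.wait(lock, [] { return item_ready || done; }); // a notify can no longer be "lost"
        if (item_ready) {
            std::cout << product << std::endl;
            item_ready = false;
            pv.notify_one();
        } else {
            break; // done was read under the lock, and nothing is left to consume
        }
    }
}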