Multithreading: synchronize threads to perform several steps, race condition - C++

I want to create 15 threads and have them perform 4 successive steps (which I call Init, Process, Terminate and WriteOutputs).
For each step I want all threads to finish it before moving on to the following step.
I am trying to implement this (see code below) using a std::condition_variable and calling the wait() and notify_all() methods, but somehow I do not manage to do it, and even worse I have a race condition:
when counting the number of operations done (which should be 15*4 = 60), some prints are indeed missing and the m_counter in my class ends up less than 60, which should not be the case.
I use two std::mutex objects: one for printing messages and another one for the step synchronization.
Could someone explain the problem to me?
What would be a solution?
Many thanks in advance
#include<iostream>
#include<thread>
#include<mutex>
#include<condition_variable>
#include<vector>
#include<functional>
class MTHandler
{
public:
MTHandler(){
// 15 threads
std::function<void(int)> funcThread = std::bind(&MTHandler::ThreadFunction, this, std::placeholders::_1);
for (int i=0; i<15; i++){
m_vectThreads.push_back(std::thread(funcThread,i));
}
for (std::thread & th : m_vectThreads) {
th.join();
}
std::cout << "m_counter = " << m_counter << std::endl;
}
private:
enum class ManagerStep{
Init,
Process,
Terminate,
WriteOutputs,
};
std::vector<ManagerStep> m_vectSteps = {
ManagerStep::Init,
ManagerStep::Process,
ManagerStep::Terminate,
ManagerStep::WriteOutputs
};
unsigned int m_iCurrentStep = 0 ;
unsigned int m_counter = 0;
std::mutex m_mutex;
std::mutex m_mutexStep;
std::condition_variable m_condVar;
bool m_finishedAllSteps = false;
unsigned int m_nThreadsFinishedStep = 0;
std::vector<std::thread> m_vectThreads = {};
void ThreadFunction (int id) {
while(!m_finishedAllSteps){
m_mutex.lock();
m_counter+=1;
m_mutex.unlock();
switch (m_vectSteps[m_iCurrentStep])
{
case ManagerStep::Init:{
m_mutex.lock();
std::cout << "thread " << id << " --> Init step" << "\n";
m_mutex.unlock();
break;
}
case ManagerStep::Process:{
m_mutex.lock();
std::cout << "thread " << id << " --> Process step" << "\n";
m_mutex.unlock();
break;
}
case ManagerStep::Terminate:{
m_mutex.lock();
std::cout << "thread " << id << " --> Terminate step" << "\n";
m_mutex.unlock();
break;
}
case ManagerStep::WriteOutputs:{
m_mutex.lock();
std::cout << "thread " << id << " --> WriteOutputs step" << "\n";
m_mutex.unlock();
break;
}
default:
{
break;
}
}
unsigned int iCurrentStep = m_iCurrentStep;
bool isCurrentStepFinished = getIsFinishedStatus();
if (!isCurrentStepFinished){
// wait for other threads to finish current step
std::unique_lock<std::mutex> lck(m_mutexStep);
m_condVar.wait(lck, [iCurrentStep,this]{return iCurrentStep != m_iCurrentStep;});
}
}
}
bool getIsFinishedStatus(){
m_mutexStep.lock();
bool isCurrentStepFinished = false;
m_nThreadsFinishedStep +=1;
if (m_nThreadsFinishedStep == m_vectThreads.size()){
// all threads have completed the current step
// pass to the next step
m_iCurrentStep += 1;
m_nThreadsFinishedStep = 0;
m_finishedAllSteps = (m_iCurrentStep == m_vectSteps.size());
isCurrentStepFinished = true;
}
if (isCurrentStepFinished){m_condVar.notify_all();}
m_mutexStep.unlock();
return isCurrentStepFinished;
}
};
int main ()
{
MTHandler mt;
return 0;
}
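For reference, here is a minimal sketch (not part of the original post) of the step-by-step synchronization the question describes, written with C++20 std::barrier; the thread count and step names simply mirror the question:
#include <barrier>
#include <thread>
#include <vector>
int main() {
    constexpr int nThreads = 15;
    const char* steps[] = {"Init", "Process", "Terminate", "WriteOutputs"};
    std::barrier sync(nThreads);           // no thread passes until all threads have arrived
    std::vector<std::thread> threads;
    for (int i = 0; i < nThreads; ++i) {
        threads.emplace_back([&] {
            for (const char* step : steps) {
                (void)step;                // ... do this thread's work for the current step ...
                sync.arrive_and_wait();    // block until all 15 threads have finished this step
            }
        });
    }
    for (std::thread& th : threads) th.join();
}
Each arrive_and_wait() call acts as the per-step rendezvous that the question tries to build by hand with m_condVar and m_nThreadsFinishedStep.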

Related

Standard way of implementing C++ multi-threading for collecting data streams and processing

I'm new to C++ development. I'm trying to run infinite functions that are independent of each other.
The problem statement is similar to this:
The way I'm trying to implement this is:
#include <iostream>
#include <cstdlib>
#include <pthread.h>
#include <unistd.h>
#include <mutex>
int g_i = 0;
std::mutex g_i_mutex; // protects g_i
// increment g_i by 1
void increment_itr()
{
const std::lock_guard<std::mutex> lock(g_i_mutex);
g_i += 1;
}
void *fun(void *s)
{
std::string str;
str = (char *)s;
std::cout << str << " start\n";
while (1)
{
std::cout << str << " " << g_i << "\n";
if(g_i > 1000) break;
increment_itr();
}
pthread_exit(NULL);
std::cout << str << " end\n";
}
void *checker(void *s) {
while (1) {
if(g_i > 1000) {
std::cout<<"**********************\n";
std::cout << "checker: g_i == 100\n";
std::cout<<"**********************\n";
pthread_exit(NULL);
}
}
}
int main()
{
int itr = 0;
pthread_t threads[3];
pthread_attr_t attr;
void *status;
// Initialize and set thread joinable
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
int rc1 = pthread_create(&threads[0], &attr, fun, (void *)&"foo");
int rc2 = pthread_create(&threads[1], &attr, fun, (void *)&"bar");
int rc3 = pthread_create(&threads[2], &attr, checker, (void *)&"checker");
if (rc1 || rc2 || rc3)
{
std::cout << "Error:unable to create thread," << rc1 << rc2 << rc3 << std::endl;
exit(-1);
}
pthread_attr_destroy(&attr);
std::cout << "main func continues\n";
for (int i = 0; i < 3; i++)
{
rc1 = pthread_join(threads[i], &status);
if (rc1)
{
std::cout << "Error:unable to join," << rc1 << std::endl;
exit(-1);
}
std::cout << "Main: completed thread id :" << i;
std::cout << " exiting with status :" << status << std::endl;
}
std::cout << "main end\n";
return 0;
}
This works, but I want to know whether this implementation is a standard approach or whether it can be done in a better way.
You correctly take a lock inside increment_itr, but your fun function is accessing g_i without acquiring the lock.
Change this:
void increment_itr()
{
const std::lock_guard<std::mutex> lock(g_i_mutex);
g_i += 1;
}
To this:
int increment_itr()
{
std::lock_guard<std::mutex> lock(g_i_mutex); // the const wasn't actually needed
g_i = g_i + 1;
return g_i; // return the updated value of g_i
}
This is not thread safe:
if(g_i > 1000) break; // access g_i without acquiring the lock
increment_itr();
This is better:
if (increment_itr() > 1000) {
break;
}
A similar fix is needed in checker:
void *checker(void *s) {
    while (1) {
        int i;
        {
            std::lock_guard<std::mutex> lock(g_i_mutex);
            i = g_i; // read the shared counter only while holding the lock
        }
        if (i > 1000) {
            std::cout << "**********************\n";
            std::cout << "checker: g_i > 1000\n";
            std::cout << "**********************\n";
            break;
        }
    }
    return NULL;
}
As to your design question, here's the fundamental issue.
You're proposing a dedicated thread that continuously takes a lock and does some sort of checking on a data structure. If a certain condition is met, it would do some additional processing such as writing to a database. A thread spinning in an infinite loop is wasteful if nothing in the data structure (the two maps) has changed. Instead, you only want your integrity check to run when something changes. You can use a condition variable to have the checker thread pause until something actually changes.
Here's a better design.
uint64_t g_data_version = 0;
std::condition_variable g_cv;
void *fun(void *s)
{
    while (true) {
        // << wait for data from the source >>
        {
            std::lock_guard<std::mutex> lock(g_i_mutex);
            // update the data in the map while under the lock
            // e.g. g_n++;
            //
            // increment the data version to signal that a new revision has been made
            g_data_version += 1;
        }
        // notify the checker thread that something has changed
        g_cv.notify_all();
    }
}
Then your checker function only wakes up when fun signals that something has changed.
void *checker(void *s) {
    while (1) {
        // lock the mutex
        std::unique_lock<std::mutex> lock(g_i_mutex);
        // do the data comparison check here
        // now wait for the data version to change
        uint64_t version = g_data_version;
        while (version == g_data_version) { // loop guards against spurious wake-ups
            g_cv.wait(lock); // atomically unlocks the mutex and waits for a notify() call from another thread
        }
    }
}

How to correctly pause and resume a std::thread?

I'm new to multithreading in C++. I just want to define a class TaskManager that allows me to handle the execution of a general task. The core logic of the task should be implemented in the task() method. Then I want to implement the start(), pause(), and resume() methods to handle the execution of task(). Is there any problem with this implementation? Is it the right way to handle this kind of problem? Is there a way to abstract the core logic from the task() method?
#include <iostream>
#include <thread>
#include <chrono>
class TaskManager{
private:
std::condition_variable cv;
std::mutex mtx;
std::thread task_thread;
bool paused = true;
bool finished = false;
int counter = 0;
int MAX_COUNT = INT_MAX;
public:
~TaskManager(){
if (this->task_thread.joinable()){
this->task_thread.join();
}
}
void task(){
// Finishing condition. ==> counter < this->MAX_COUNT
while(counter < this->MAX_COUNT){
std::unique_lock<std::mutex> ul(this->mtx);
this->cv.wait(ul, [this] {return (!this->paused);});
// CORE LOGIC...
counter++;
}
std::cout << "Finished!" << std::endl;
this->finished = true;
}
void start(){
std::unique_lock<std::mutex> ul(this->mtx);
this->paused = false;
task_thread = std::thread([this]{this->task();});
cv.notify_one();
}
void pause(){
std::unique_lock<std::mutex> ul(this->mtx);
if (!this->finished) {
this->paused = true;
this->cv.notify_one();
}
}
void resume(){
std::unique_lock<std::mutex> ul(this->mtx);
if (!this->finished) {
this->paused = false;
this->cv.notify_one();
}
}
int getCounter() {
return this->counter;
}
};
int main() {
TaskManager tm;
std::cout << "counter before start(): " << tm.getCounter() << std::endl;
tm.start();
std::this_thread::sleep_for(std::chrono::milliseconds(10));
std::cout << "counter after 10 ms: " << tm.getCounter() << std::endl;
tm.pause();
std::cout << "counter after pause(): " << tm.getCounter() << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(10));
std::cout << "counter after 10 ms: " << tm.getCounter() << std::endl;
tm.resume();
std::cout << "counter after resume(): " << tm.getCounter() << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(10));
std::cout << "counter after 10 ms: " << tm.getCounter() << std::endl;
return 0;
}
Output:
counter before start(): 0
counter after 10 ms: 266967
counter after pause(): 267526
counter after 10 ms: 267526
counter after resume(): 267526
counter after 10 ms: 487041
Finished!
Is there any problem with this implementation?
There's a data race on counter (a minimal fix is sketched below).
You probably don't want to hold the lock while executing // CORE LOGIC.... If you mean to protect counter, prefer a separate mutex for it.
finished should be read and written under the lock too. Alternatively, it could be atomic. Note that unnecessary notifications don't hurt, so you could skip finished altogether.
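A minimal sketch of how counter and finished could be made race-free, assuming the rest of TaskManager stays as in the question (a second mutex dedicated to counter would work just as well):
#include <atomic>
std::atomic<int> counter{0};        // replaces the plain int member; getCounter() no longer races with task()
std::atomic<bool> finished{false};  // written by task(), read by pause()/resume(), so make it atomic too
int getCounter() {
    return counter.load();          // safe to call from main while task() keeps incrementing
}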
Is it the right way to handle this kind of problem?
It depends on why you want to pause in the first place. For some reasons to pause there may be a better approach, such as a C++20 latch, semaphore, or barrier (a small example follows).
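For example, if the only reason to pause is to delay the start of the work, a one-shot C++20 std::latch is enough; this is just a sketch, not a general pause/resume mechanism:
#include <latch>
#include <thread>
std::latch start_gate{1};        // one-shot "go" signal
void task() {
    start_gate.wait();           // blocks here until count_down() is called, no mutex/cv needed
    // ... core logic ...
}
int main() {
    std::thread t(task);
    start_gate.count_down();     // this is what start() would do
    t.join();
}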
Is there a way to abstract the core logic from the task() method?
To some extent. You can change it to:
void CoreLogic(std::function<void()> pause_callback) {
pause_callback();
// Core logic
}
You cannot pause a thread at an arbitrary point with C++ facilities. Maybe you can with platform facilities (for example, Windows has SuspendThread), but it may not be a good idea (imagine a thread that holds malloc's internal lock when it gets paused).

create a monitor class in C++ that terminates gracefully [duplicate]

This question already has answers here:
How do I terminate a thread in C++11?
(7 answers)
How to stop the thread execution in C++
(3 answers)
Proper way to terminate a thread in c++
(1 answer)
Closed 3 years ago.
My main function loads a monitoring class. This class calls external services to periodically get some data and report health status.
These are task_1 and task_2 in the class below, which can have sub-tasks. The tasks accumulate some values that are stored in a shared "Data" class.
So each task_N is coupled with a thread that executes, sleeps for a while, and does this forever until the program stops.
My basic problem is that I cannot stop the threads in the Monitors class, since they might be waiting for the timer to expire (sleeping).
#include <iostream>
#include <thread>
#include <utility>
#include "Settings.hpp"
#include "Data.hpp"
class Monitors {
public:
Monitors(uint32_t timeout1, uint32_t timeout2, Settings settings, std::shared_ptr<Data> data)
: timeout_1(timeout1), timeout_2(timeout2), settings_(std::move(settings)), data_(std::move(data)) {}
void start() {
thread_1 = std::thread(&Monitors::task_1, this);
thread_2 = std::thread(&Monitors::task_2, this);
started_ = true;
}
void stop() {
started_ = false;
thread_1.join();
thread_2.join();
std::cout << "stopping threads" << std::endl;
}
virtual ~Monitors() {
std::cout << "Monitor stops" << std::endl;
}
private:
void subtask_1_1() {
//std::cout << "subtask_1_1 reads " << settings_.getWeb1() << std::endl;
}
void subtask_1_2() {
//std::cout << "subtask_1_2" << std::endl;
data_->setValue1(21);
}
void task_1() {
while(started_) {
subtask_1_1();
subtask_1_2();
std::this_thread::sleep_for(std::chrono::milliseconds(timeout_1));
std::cout << "task1 done" << std::endl;
}
}
void subtask_2_1() {
//std::cout << "subtask_2_1" << std::endl;
}
void subtask_2_2() {
//std::cout << "subtask_2_2" << std::endl;
}
void task_2() {
while(started_) {
subtask_2_1();
subtask_2_2();
std::this_thread::sleep_for(std::chrono::milliseconds(timeout_2));
std::cout << "task2 done" << std::endl;
}
}
private:
bool started_ {false};
std::thread thread_1;
std::thread thread_2;
uint32_t timeout_1;
uint32_t timeout_2;
Settings settings_;
std::shared_ptr<Data> data_;
};
The main function is here:
auto data = std::make_shared<Data>(10,20);
Settings set("hello", "world");
Monitors mon(1000, 24000,set,data);
mon.start();
int count = 1;
while(true) {
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
std::cout << data->getValue2() << " and count is " << count << std::endl;
count++;
if ( count == 10)
break;
}
std::cout << "now I am here" << std::endl;
mon.stop();
return 0;
Now when I call mon.stop() the main thread stops only when the timer expires.
How can I call mon.stop() gracefully and interrupt the sleeping task_N threads?
UPDATE: Since I don't want to call std::terminate, what is the proper way to implement a monitor class in C++?
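A brief sketch of the usual approach (not from the original thread): replace the sleep_for with a condition_variable wait that stop() can interrupt. The mtx_ and cv_ members are additions for illustration, and only one task is shown:
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
class Monitors {
public:
    void start() {
        thread_1 = std::thread(&Monitors::task_1, this);
    }
    void stop() {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            started_ = false;              // written under the lock so the wait predicate sees it
        }
        cv_.notify_all();                  // wakes the task out of its timed wait immediately
        thread_1.join();
    }
private:
    void task_1() {
        std::unique_lock<std::mutex> lock(mtx_);
        while (started_) {
            lock.unlock();
            // ... subtask_1_1(); subtask_1_2(); ...
            lock.lock();
            // Sleeps for up to timeout_1, but returns as soon as stop() flips started_.
            cv_.wait_for(lock, timeout_1, [this] { return !started_; });
        }
    }
    std::mutex mtx_;
    std::condition_variable cv_;
    bool started_{true};
    std::thread thread_1;
    std::chrono::milliseconds timeout_1{1000};
};
With this shape, mon.stop() returns almost immediately instead of blocking for the remainder of the timeout.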

C++ Producer Consumer, same consumer thread grabs all tasks

I am implementing a producer-consumer project in C++, and when I run the program, the same consumer grabs almost all of the work, without letting any of the other consumer threads grab any. Sometimes another thread does get some work, but then that thread takes control for a while. For example, TID 10 could grab almost all of the work, but then all of a sudden TID 12 would grab it, with no other consumer threads getting work in between.
Any idea why other threads wouldn't have a chance to grab work?
#include <thread>
#include <iostream>
#include <mutex>
#include <condition_variable>
#include <deque>
#include <csignal>
#include <unistd.h>
using namespace std;
int max_queue_size = 100;
int num_producers = 5;
int num_consumers = 7;
int num_operations = 40;
int operations_created = 0;
thread_local int operations_created_by_this_thread = 0;
int operations_consumed = 0;
thread_local int operations_consumed_by_this_thread = 0;
struct thread_stuff {
int a;
int b;
int operand_num;
char operand;
};
char operands[] = {'+', '-', '/', '*'};
deque<thread_stuff> q;
bool finished = false;
condition_variable cv;
mutex queue_mutex;
void producer(int n) {
while (operations_created_by_this_thread < num_operations) {
int oper_num = rand() % 4;
thread_stuff equation;
equation.a = rand();
equation.b = rand();
equation.operand_num = oper_num;
equation.operand = operands[oper_num];
while ((operations_created - operations_consumed) >= max_queue_size) {
// don't do anything until it has space available
}
{
lock_guard<mutex> lk(queue_mutex);
q.push_back(equation);
operations_created++;
}
cv.notify_all();
operations_created_by_this_thread++;
this_thread::__sleep_for(chrono::seconds(rand() % 2), chrono::nanoseconds(0));
}
{
lock_guard<mutex> lk(queue_mutex);
if(operations_created == num_operations * num_producers){
finished = true;
}
}
cv.notify_all();
}
void consumer() {
while (true) {
unique_lock<mutex> lk(queue_mutex);
cv.wait(lk, [] { return finished || !q.empty(); });
if(!q.empty()) {
thread_stuff data = q.front();
q.pop_front();
operations_consumed++;
operations_consumed_by_this_thread++;
int ans = 0;
switch (data.operand_num) {
case 0:
ans = data.a + data.b;
break;
case 1:
ans = data.a - data.b;
break;
case 2:
ans = data.a / data.b;
break;
case 3:
ans = data.a * data.b;
break;
}
cout << "Operation " << operations_consumed << " processed by PID " << getpid()
<< " TID " << this_thread::get_id() << ": "
<< data.a << " " << data.operand << " " << data.b << " = " << ans << " queue size: "
<< (operations_created - operations_consumed) << endl;
}
this_thread::yield();
if (finished) break;
}
}
void usr1_handler(int signal) {
cout << "Status: Produced " << operations_created << " operations and "
<< (operations_created - operations_consumed) << " operations are in the queue" << endl;
}
void usr2_handler(int signal) {
cout << "Status: Consumed " << operations_consumed << " operations and "
<< (operations_created - operations_consumed) << " operations are in the queue" << endl;
}
int main(int argc, char *argv[]) {
if (argc < 5) {
cout << "Invalid number of parameters passed in" << endl;
exit(1);
}
max_queue_size = atoi(argv[1]);
num_operations = atoi(argv[2]);
num_producers = atoi(argv[3]);
num_consumers = atoi(argv[4]);
// signal(SIGUSR1, usr1_handler);
// signal(SIGUSR2, usr2_handler);
thread producers[num_producers];
thread consumers[num_consumers];
for (int i = 0; i < num_producers; i++) {
producers[i] = thread(producer, num_operations);
}
for (int i = 0; i < num_consumers; i++) {
consumers[i] = thread(consumer);
}
for (int i = 0; i < num_producers; i++) {
producers[i].join();
}
for (int i = 0; i < num_consumers; i++) {
consumers[i].join();
}
cout << "finished!" << endl;
}
You're holding the mutex the whole time, including while calling yield().
Scope the unique_lock like you do in your producer's code, so that popping from the queue and incrementing the counter happen atomically and the lock is released before yielding.
I see that you have a max queue size. You need a second condition variable for the producer to wait on when the queue is full, and the consumer should signal it as it consumes items (a sketch follows below).
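A sketch of what that could look like, reusing the question's globals (q, queue_mutex, max_queue_size) and the producer's local equation; the cv_not_full name is an addition:
condition_variable cv_not_full;            // producers wait on this while the queue is full
// producer: block instead of spinning when the queue is full
{
    unique_lock<mutex> lk(queue_mutex);
    cv_not_full.wait(lk, [] { return (int)q.size() < max_queue_size; });
    q.push_back(equation);
    operations_created++;
}
cv.notify_all();                           // wake the consumers, as before
// consumer: after popping an item (still under queue_mutex)
cv_not_full.notify_one();                  // let a blocked producer make progress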
Any idea why other threads wouldn't have a chance to grab work?
This poll is troubling:
while ((operations_created - operations_consumed) >= max_queue_size)
{
// don't do anything until it has space available
}
You might try a minimal delay in the loop ... this is a 'bad neighbor', and can 'consume' a core.
There are a few issues with your code:
Using Normal Variables for Inter-Thread Communication
Here is an example:
int operations_created = 0;
int operations_consumed = 0;
void producer(int n) {
[...]
while ((operations_created - operations_consumed) >= max_queue_size) { }
and later
void consumer() {
[...]
operations_consumed++;
This will only work on x86 architectures and without optimizations, i.e. -O0. Once we enable optimizations, the compiler will optimize the while loop into:
void producer(int n) {
[...]
if ((operations_created - operations_consumed) >= max_queue_size) {
while (true) { }
}
So, your program simply hangs here. You can check this on Compiler Explorer.
mov eax, DWORD PTR operations_created[rip]
sub eax, DWORD PTR operations_consumed[rip]
cmp eax, DWORD PTR max_queue_size[rip]
jl .L19 // here is the if before the loop
.L20:
jmp .L20 // here is the empty loop
.L19:
Why is this happening? From the single-threaded point of view, while (condition) { statements } is exactly equivalent to if (condition) while (true) { statements } when the statements do not change the condition.
To fix the issue, we should use std::atomic<int> instead of plain int. Atomics are designed for inter-thread communication, so the compiler will avoid such optimizations and generate the correct assembly (a sketch of the change follows).
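A sketch of that change, keeping the rest of the program as in the question:
#include <atomic>
std::atomic<int> operations_created{0};
std::atomic<int> operations_consumed{0};
// The loads can no longer be hoisted out of the loop, so the busy-wait
// re-reads both counters on every iteration:
while ((operations_created.load() - operations_consumed.load()) >= max_queue_size) {
    std::this_thread::yield();             // optional: also keeps the spin from burning a whole core
}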
Consumer Locks the Mutex While Calling yield()
Have a look at this snippet:
void consumer() {
while (true) {
unique_lock<mutex> lk(queue_mutex);
[...]
this_thread::yield();
[...]
}
Basically this means that the consumer does the yield() while holding the lock. Since only one consumer can hold the lock at a time (mutex stands for mutual exclusion), that explains why the other consumers cannot consume the work.
To fix this issue, we should unlock the queue_mutex before the yield(), i.e.:
void consumer() {
while (true) {
{
unique_lock<mutex> lk(queue_mutex);
[...]
}
this_thread::yield();
[...]
}
This still does not prevent a single thread from doing most of the tasks. When we call notify_all() in the producer, all threads get woken up, but only one will lock the mutex. Since the work we schedule is tiny, by the time the producer calls notify_all() our thread will already have finished the work, done the yield(), and be ready for the next item.
So why does this thread lock the mutex rather than another one? I guess that is happening due to the CPU cache and busy waiting. The thread that just finished the work is "hot": it is in the CPU cache and ready to lock the mutex. Before going to sleep it might also busy-wait on the mutex for a few cycles, which increases its chances of winning even more.
To fix this, we can either remove the sleep in the producer (so it wakes the other threads up more often and they become "hot" as well), or do a sleep() in the consumer instead of yield() (so this thread becomes "cold" during the sleep).
Anyway, there is no opportunity to do the work in parallel due to the mutex, so the fact that the same thread does most of the work is completely natural IMO.

Producer-Consumer: Lost Wake-up issue

I was trying to write code for the Producer-Consumer problem. The code below works fine most of the time but sometimes gets stuck, because of a "lost wake-up" I guess. I tried a thread sleep() but it didn't work. What modification is needed to handle this case in my code? Can a semaphore be helpful here? If yes, how would I implement it?
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <iostream>
using namespace std;
int product = 0;
boost::mutex mutex;
boost::condition_variable cv;
boost::condition_variable pv;
bool done = false;
void consumer(){
while(done==false){
//cout << "start c" << endl
boost::mutex::scoped_lock lock(mutex);
cv.wait(lock);
//cout << "wakeup c" << endl;
if (done==false)
{
cout << product << endl;
//cout << "notify c" << endl;
pv.notify_one();
}
//cout << "end c" << endl;
}
}
void producer(){
for(int i=0;i<10;i++){
//cout << "start p" << endl;
boost::mutex::scoped_lock lock(mutex);
boost::this_thread::sleep(boost::posix_time::microseconds(50000));
++product;
//cout << "notify p" << endl;
cv.notify_one();
pv.wait(lock);
//cout << "wakeup p" << endl;
}
//cout << "end p" << endl;
cv.notify_one();
done = true;
}
int main()
{
int t = 1000;
while(t--){
/*
This is not perfect, and is prone to a subtle issue called the lost wakeup (for example, producer calls notify()
on the condition, but client hasn't really called wait() yet, then both will wait() indefinitely.)
*/
boost::thread consumerThread(&consumer);
boost::thread producerThread(&producer);
producerThread.join();
consumerThread.join();
done =false;
//cout << "process end" << endl;
}
cout << "done" << endl;
getchar();
return 0;
}
Yes, you want a way to know (in the consumer) that you "missed" a signal. A semaphore can help. There's more than one way to skin a cat, so here's my simple take on it (using just c++11 standard library features):
class semaphore
{
private:
std::mutex mtx;
std::condition_variable cv;
int count;
public:
semaphore(int count_ = 0) : count(count_) { }
void notify()
{
std::unique_lock<std::mutex> lck(mtx);
++count;
cv.notify_one();
}
void wait() { return wait([]{}); } // no-op action
template <typename F>
auto wait(F&& func = []{}) -> decltype(std::declval<F>()())
{
std::unique_lock<std::mutex> lck(mtx);
while(count == 0){
cv.wait(lck);
}
count--;
return func();
}
};
For convenience, I added a convenience wait() overload that takes a function to be executed under the lock. This makes it possible for the consumer to operate the 'semaphore' without ever manually operating the lock (and still get the value of product without data-races):
semaphore sem;
void consumer() {
do {
bool stop = false;
int received_product = sem.wait([&stop] { stop = done; return product; });
if (stop)
break;
std::cout << received_product << std::endl;
std::unique_lock<std::mutex> lock(processed_mutex);
processed_signal.notify_one();
} while(true);
}
A fully working demo: Live on Coliru:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <cassert>
class semaphore
{
private:
std::mutex mtx;
std::condition_variable cv;
int count;
public:
semaphore(int count_ = 0) : count(count_) { }
void notify()
{
std::unique_lock<std::mutex> lck(mtx);
++count;
cv.notify_one();
}
void wait() { return wait([]{}); } // no-op action
template <typename F>
auto wait(F&& func = []{}) -> decltype(std::declval<F>()())
{
std::unique_lock<std::mutex> lck(mtx);
while(count == 0){
cv.wait(lck);
}
count--;
return func();
}
};
semaphore sem;
int product = 0;
std::mutex processed_mutex;
std::condition_variable processed_signal;
bool done = false;
void consumer(int check) {
do {
bool stop = false;
int received_product = sem.wait([&stop] { stop = done; return product; });
if (stop)
break;
std::cout << received_product << std::endl;
assert(++check == received_product);
std::unique_lock<std::mutex> lock(processed_mutex);
processed_signal.notify_one();
} while(true);
}
void producer() {
std::unique_lock<std::mutex> lock(processed_mutex);
for(int i = 0; i < 10; ++i) {
++product;
sem.notify();
processed_signal.wait(lock);
}
done = true;
sem.notify();
}
int main() {
int t = 1000;
while(t--) {
std::thread consumerThread(&consumer, product);
std::thread producerThread(&producer);
producerThread.join();
consumerThread.join();
done = false;
std::cout << "process end" << std::endl;
}
std::cout << "done" << std::endl;
}
You seem to ignore that the variable done is also shared state, to the same extent as product, which can lead to several race conditions. In your case, I see at least one scenario where consumerThread makes no progress:
The loop executes as intended
consumer executes, and is waiting at cv.wait(lock);
producer has finished the for loop, notifies the consumer and is preempted
consumer wakes up, reads done == false, outputs product, reads done == false again, waits on the condition
producer sets done to true and exits
consumer is stuck forever
To avoid this kind of issue you should be holding a lock when reading or writing done (a sketch follows below). By the way, your implementation is quite sequential, i.e. the producer and the consumer can only process a single piece of data at a time...
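A sketch of the guarded access only; note that it does not by itself cure the lost wake-up that the semaphore answer above addresses:
// Producer side: write done while holding the same mutex the consumer uses.
{
    boost::mutex::scoped_lock lock(mutex);
    done = true;
}
cv.notify_one();
// Consumer side: read done only while holding the mutex, e.g. by copying it
// into a local under the lock and testing the local afterwards.
bool finished_copy;
{
    boost::mutex::scoped_lock lock(mutex);
    finished_copy = done;
}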