This might be a basic question in terms of multithreaded programming, but I really want to achieve the following without any concurrent data structure.
Consider the code:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <stack>
#include <thread>

class A
{
std::stack<int> s;
public:
A()
{
s.push(7); s.push(6); s.push(5); s.push(4); s.push(3); s.push(2); s.push(1);
}
void process(int tid)
{
while (!s.empty())
{
std::unique_lock<std::mutex> lck(m);
std::cout << tid << " --> " << s.top() << '\n';
cv.wait(lck);
s.pop();
cv.notify_all();
lck.unlock();
}
}
std::mutex m;
std::condition_variable cv;
};
int main()
{
A a;
std::thread t1(&A::process, &a, 1);
std::thread t2(&A::process, &a, 2);
t1.join();
t2.join();
}
I want each thread to print the top of the stack and pop it, so that the output looks like this:
1 --> 1
2 --> 2
1 --> 3
2 --> 4
...
So only one thread at a time should enter the while body and execute a single iteration.
But instead it always outputs:
1 --> 1
2 --> 1
and then it waits forever.
How can I do this?
What's wrong with the current solution?
Never, ever wait on a condition variable without testing a predicate that guards against spurious wakeups. The easiest way is to use the lambda version of wait().
condition_variables are not semaphores; they are lower level than that.
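For reference, the predicate overload of wait() is specified as a loop around the plain one-argument wait(), so the condition is re-checked under the lock after every (possibly spurious) wakeup. Roughly (a sketch of the standard behaviour, not code from the question):
// cv.wait(lck, pred); behaves like:
while (!pred()) {
    cv.wait(lck);
}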
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <stack>
#include <thread>

class A
{
public:
A()
{
s.push(7); s.push(6); s.push(5); s.push(4); s.push(3); s.push(2); s.push(1);
}
void process(int tid)
{
while (true)
{
std::unique_lock<std::mutex> lck(m);
cv.wait(lck, [&]{ return std::this_thread::get_id() != last || s.empty(); });
// must only read within lock:
if (s.empty()) {
last = std::thread::id{}; // thread ids can be reused
break;
}
last = std::this_thread::get_id();
std::cout << tid << " --> " << s.top() << '\n';
s.pop();
cv.notify_one();
}
}
std::mutex m;
std::condition_variable cv;
std::thread::id last{};
std::stack<int> s;
};
I am facing an issue while performing thread synchronisation.
I have a class very similar to the ThreadQueue implementation proposed in this answer, which I'll briefly report here for completeness:
#include <mutex>
#include <queue>
#include <condition_variable>
template <typename T>
class ThreadQueue {
std::queue<T> q_;
std::mutex mtx;
std::condition_variable cv;
public:
void enqueue (const T& t) {
{
std::lock_guard<std::mutex> lck(mtx);
q_.push(t);
}
cv.notify_one();
}
T dequeue () {
std::unique_lock<std::mutex> lck(mtx);
cv.wait(lck, [this] { return !q_.empty(); });
T t = q_.front();
q_.pop();
return t;
}
};
I have a consumer that continuously extracts the first available item of a shared instance of that class, say ThreadQueue<int> my_queue;, until it receives a signal to quit, for instance:
std::atomic_bool quit(false);
void worker(){
std::cout << "[worker] starting..." << std::endl;
while(!quit.load()) {
std::cout << "[worker] extract element from the queue" << std::endl;
auto el = my_queue.dequeue();
std::cout << "[worker] consume extracted element" << std::endl;
std::cout << el << std::endl;
}
std::cout << "[worker] exiting" << std::endl;
}
Suppose the program has to terminate (for any reason) before any producer can insert elements in the queue; in this case the worker would be stuck on the line auto el = my_queue.dequeue(); and cannot terminate.
An example of this case is the following:
int main() {
std::thread t(worker);
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << "[main] terminating..." << std::endl;
quit.store(true);
t.join();
std::cout << "[main] terminated!" << std::endl;
return 0;
}
Clearly, the worker can be "unlocked" by pushing a dummy element into the queue, but that does not seem like an elegant solution.
I am thus wondering whether the thread synchronisation on the empty queue should be taken out of the ThreadQueue class and done inside the worker instead, i.e. moving the "ownership" of the condition variable outside the ThreadQueue container.
In general, is a class such as ThreadQueue always a bad design?
In case it's not, is there any solution that allows keeping the condition variable encapsulated in ThreadQueue, hence removing the responsibility for thread synchronisation from the users of that class (bearing in mind I am limited to C++11)?
Full MWE here
The object that contains the mutex should also own the condition variable, so the ThreadQueue code looks good. But it is unclear what dequeue() should return when an asynchronous stop is requested.
A common way to solve this is to introduce either a quit flag or a sentinel value in the queue itself, a stop() method, and a way for dequeue() to signal a closed queue, for example by using std::optional<T> as the return value (std::optional is C++17; a C++11 alternative is sketched further below).
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

template <typename T>
class ThreadQueue {
std::queue<T> q_;
std::mutex mtx;
std::condition_variable cv;
bool quit = false;
public:
void enqueue (const T& t) {
{
std::lock_guard<std::mutex> lck(mtx);
q_.push(t);
}
cv.notify_one();
}
std::optional<T> dequeue () {
std::unique_lock<std::mutex> lck(mtx);
cv.wait(lck, [this] { return quit || !q_.empty(); });
if (quit) {
return {};
}
T t = q_.front();
q_.pop();
return t;
}
void stop() {
std::unique_lock<std::mutex> lck(mtx);
quit = true;
cv.notify_all();
}
};
Then when dequeue() returns an empty optional, the worker can exit gracefully.
void worker() {
std::cout << "[worker] starting..." << std::endl;
while (true) {
std::cout << "[worker] extract element from the queue" << std::endl;
auto el = my_queue.dequeue();
if (!el) {
break; // stop requested: the "[worker] exiting" message below is printed once
}
std::cout << "[worker] consume extracted element" << std::endl;
std::cout << *el << std::endl;
}
std::cout << "[worker] exiting" << std::endl;
}
int main() {
std::thread t(worker);
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << "[main] terminating..." << std::endl;
my_queue.stop();
t.join();
std::cout << "[main] terminated!" << std::endl;
return 0;
}
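Since the question is restricted to C++11 and std::optional is C++17, the same idea can also be expressed by returning success through a bool and writing the value into an out-parameter. A minimal sketch of that variant (my illustration; the name ThreadQueue11 is made up):
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class ThreadQueue11 {
    std::queue<T> q_;
    std::mutex mtx;
    std::condition_variable cv;
    bool quit = false;
public:
    void enqueue(const T& t) {
        {
            std::lock_guard<std::mutex> lck(mtx);
            q_.push(t);
        }
        cv.notify_one();
    }
    // Returns false once stop() has been called, true when 'out' holds a value.
    bool dequeue(T& out) {
        std::unique_lock<std::mutex> lck(mtx);
        cv.wait(lck, [this] { return quit || !q_.empty(); });
        if (quit) {
            return false;
        }
        out = q_.front();
        q_.pop();
        return true;
    }
    void stop() {
        {
            std::lock_guard<std::mutex> lck(mtx);
            quit = true;
        }
        cv.notify_all();
    }
};
The worker then loops on while (my_queue.dequeue(el)) and exits gracefully when dequeue() returns false.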
This is a quick, hacky mod to your class to add a stop() function:
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class ThreadQueue {
std::queue<T> q_;
std::mutex mtx;
std::condition_variable cv;
std::atomic<bool> running{true}; // brace-init: copy-initialising a std::atomic is not valid before C++17
public:
void enqueue (const T& t) {
{
std::lock_guard<std::mutex> lck(mtx);
q_.push(t);
}
cv.notify_one();
}
T dequeue () {
std::unique_lock<std::mutex> lck(mtx);
cv.wait(lck, [this] { return !q_.empty() || !running; });
if (!running){return {};} // tidy-up part 1
T t = q_.front();
q_.pop();
return t;
}
bool is_running()
{
return running;
}
void stop()
{
{
std::lock_guard<std::mutex> lck(mtx); // set the flag under the lock so a waiter cannot miss the notification
running = false;
}
cv.notify_all(); // tidy-up part 2
}
};
See the live example: https://godbolt.org/z/bje6Gj7o4
Obviously this needs tidying up as you require; a rough usage sketch follows.
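For illustration, with this modified queue the worker from the question could leave its loop like this (a rough usage sketch matching the hacky semantics above, where dequeue() returns a default-constructed T after stop()):
ThreadQueue<int> my_queue; // shared instance, as in the question

void worker() {
    std::cout << "[worker] starting..." << std::endl;
    while (my_queue.is_running()) {
        auto el = my_queue.dequeue();      // unblocks with a dummy T{} once stop() is called
        if (!my_queue.is_running()) break; // discard the dummy value
        std::cout << el << std::endl;
    }
    std::cout << "[worker] exiting" << std::endl;
}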
I want to create 15 threads and have them perform 4 successive steps (that I call Init, Process, Terminate and WriteOutputs).
For each step I want all threads to finish it before passing to the following step.
I am trying to implement it (cf. the code below) using a std::condition_variable and calling the wait() and notify_all() methods, but somehow I do not manage to do it.
Even worse, I have a race condition:
when counting the number of operations done (which should be 15*4 = 60), some prints are sometimes missing, and at the end m_counter in my class is less than 60, which should not be the case.
I use two std::mutex objects: one for printing messages and another one for the step synchronization
Could someone explain to me the problem?
What would be a solution?
Many thanks in advance
#include<iostream>
#include<thread>
#include<mutex>
#include<condition_variable>
#include<vector>
#include<functional>
class MTHandler
{
public:
MTHandler(){
// 15 threads
std::function<void(int)> funcThread = std::bind(&MTHandler::ThreadFunction, this, std::placeholders::_1);
for (int i=0; i<15; i++){
m_vectThreads.push_back(std::thread(funcThread,i));
}
for (std::thread & th : m_vectThreads) {
th.join();
}
std::cout << "m_counter = " << m_counter << std::endl;
}
private:
enum class ManagerStep{
Init,
Process,
Terminate,
WriteOutputs,
};
std::vector<ManagerStep> m_vectSteps = {
ManagerStep::Init,
ManagerStep::Process,
ManagerStep::Terminate,
ManagerStep::WriteOutputs
};
unsigned int m_iCurrentStep = 0 ;
unsigned int m_counter = 0;
std::mutex m_mutex;
std::mutex m_mutexStep;
std::condition_variable m_condVar;
bool m_finishedAllSteps = false;
unsigned int m_nThreadsFinishedStep = 0;
std::vector<std::thread> m_vectThreads = {};
void ThreadFunction (int id) {
while(!m_finishedAllSteps){
m_mutex.lock();
m_counter+=1;
m_mutex.unlock();
switch (m_vectSteps[m_iCurrentStep])
{
case ManagerStep::Init:{
m_mutex.lock();
std::cout << "thread " << id << " --> Init step" << "\n";
m_mutex.unlock();
break;
}
case ManagerStep::Process:{
m_mutex.lock();
std::cout << "thread " << id << " --> Process step" << "\n";
m_mutex.unlock();
break;
}
case ManagerStep::Terminate:{
m_mutex.lock();
std::cout << "thread " << id << " --> Terminate step" << "\n";
m_mutex.unlock();
break;
}
case ManagerStep::WriteOutputs:{
m_mutex.lock();
std::cout << "thread " << id << " --> WriteOutputs step" << "\n";
m_mutex.unlock();
break;
}
default:
{
break;
}
}
unsigned int iCurrentStep = m_iCurrentStep;
bool isCurrentStepFinished = getIsFinishedStatus();
if (!isCurrentStepFinished){
// wait for other threads to finish current step
std::unique_lock<std::mutex> lck(m_mutexStep);
m_condVar.wait(lck, [iCurrentStep,this]{return iCurrentStep != m_iCurrentStep;});
}
}
}
bool getIsFinishedStatus(){
m_mutexStep.lock();
bool isCurrentStepFinished = false;
m_nThreadsFinishedStep +=1;
if (m_nThreadsFinishedStep == m_vectThreads.size()){
// all threads have completed the current step
// pass to the next step
m_iCurrentStep += 1;
m_nThreadsFinishedStep = 0;
m_finishedAllSteps = (m_iCurrentStep == m_vectSteps.size());
isCurrentStepFinished = true;
}
if (isCurrentStepFinished){m_condVar.notify_all();}
m_mutexStep.unlock();
return isCurrentStepFinished;
}
};
int main ()
{
MTHandler mt;
return 0;
}
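For reference, the step synchronisation described in this question is the classic cyclic-barrier pattern: every thread arrives at the barrier, and the last one to arrive advances a generation counter and wakes the others. A minimal C++11 sketch of such a barrier (my own illustration, not code from the question; the waiting threads use a predicate on the generation, so spurious wakeups are harmless):
#include <mutex>
#include <condition_variable>

class CyclicBarrier {
    std::mutex m;
    std::condition_variable cv;
    const unsigned count;    // number of participating threads
    unsigned waiting = 0;    // threads currently blocked at the barrier
    unsigned generation = 0; // bumped each time the barrier opens
public:
    explicit CyclicBarrier(unsigned n) : count(n) {}
    void arrive_and_wait() {
        std::unique_lock<std::mutex> lck(m);
        unsigned gen = generation;
        if (++waiting == count) {
            // last thread of this step: open the barrier for everyone
            waiting = 0;
            ++generation;
            cv.notify_all();
        } else {
            // wait until the generation changes; re-checked after every wakeup
            cv.wait(lck, [&] { return gen != generation; });
        }
    }
};
Each thread would then call arrive_and_wait() once after every step instead of the manual m_nThreadsFinishedStep bookkeeping.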
I am new to multithreading. Here is what I want
thread_function(){
// do job1;
//wait main thread to notify;
// do job2;
}
main(){
//create two threads
//wait both threads to finish job1
//finish job3, then let both threads start job2
//wait both threads to join
}
What is the best way to do this? Thanks.
Here is my code
void job1(){
}
void job2(){
}
void job3(){
}
int main(){
thread t11(job1);
thread t12(job1);
t11.join();
t12.join();
job3();
thread t21(job2);
thread t22(job2);
t21.join();
t22.join();
}
My question is whether I can combine job1 and job2 into one function and use a condition variable to control the order.
I will give you a sample (something similar to the producer-consumer problem).
This is not the exact solution you are looking for, but the code below will guide you.
Below, q is protected by a mutex; the condition variable waits on it until it is notified and !q.empty() holds (the predicate is needed because of spurious wakeups), or until it times out.
#include <chrono>
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>

std::condition_variable cond;
std::deque<int> q;
std::mutex mu;
void function_1() {
int count = 50;
while (count > 0)
{
// Condition variables require the lock to be a std::unique_lock
// lock the resource
std::unique_lock<std::mutex> locker(mu);
// defer the lock until later if needed:
//std::unique_lock<std::mutex> locker(mu, std::defer_lock);
q.push_front(count);
locker.unlock();
//cond.notify_one();
cond.notify_all();
//std::this_thread::sleep_for(std::chrono::seconds(1));
count--;
}
}
void function_2(int x, int y) {
int data = 0;
while (data != 1)
{
// mu is the common mutex; it protects q here as well.
std::unique_lock<std::mutex> locker(mu);
// this only proceeds when !q.empty() (or on time-out),
// which makes it safe with multiple consumer threads
auto now = std::chrono::system_clock::now();
if (cond.wait_until(locker, now + y * std::chrono::milliseconds(100), []() { return !q.empty(); }))
{
auto nowx = std::chrono::system_clock::now();
std::cout << "Thread " << x << " waited for " << std::chrono::duration_cast<std::chrono::milliseconds>(nowx - now).count() << " ms" << std::endl;
}
else
{
std::cout << "Timed out" << std::endl;
break;
}
data = q.back();
q.pop_back();
locker.unlock();
std::cout << x << " got value from t1 " << data << std::endl;
}
}
int main()
{
std::thread t1(function_1);
std::thread t2(function_2,1,50);
std::thread t3(function_2,2,60);
std::thread t4(function_2,3,100);
t1.join();
t2.join();
t3.join();
t4.join();
return 0;
}
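Closer to the original question: yes, job1 and job2 can be combined into a single thread function, with the main thread releasing the workers through a flag and a condition variable once job3 is done. A minimal sketch (my illustration; the empty job1/job2/job3 bodies stand in for the real work):
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
int job1_done = 0;      // how many threads have finished job1
bool job3_done = false; // set by main after job3

void job1() {}
void job2() {}
void job3() {}

void thread_function() {
    job1();
    {
        std::lock_guard<std::mutex> lck(m);
        ++job1_done;
    }
    cv.notify_all();                        // tell main that one job1 has finished
    std::unique_lock<std::mutex> lck(m);
    cv.wait(lck, [] { return job3_done; }); // wait for main to finish job3
    lck.unlock();
    job2();
}

int main() {
    std::thread t1(thread_function), t2(thread_function);
    {
        std::unique_lock<std::mutex> lck(m);
        cv.wait(lck, [] { return job1_done == 2; }); // wait for both job1s
    }
    job3();
    {
        std::lock_guard<std::mutex> lck(m);
        job3_done = true;
    }
    cv.notify_all();                        // release both threads for job2
    t1.join();
    t2.join();
}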
I'm trying to learn condition variables, and I'm stuck on the following example. I thought that notify_one on consume should unblock only one waiting consumer. But after running it repeatedly, it seems to me that this isn't the case. I've changed notify_one into notify_all and haven't noticed a change in behavior. After the producer calls notify_one on consume, I can see Get… being written to the screen by more than one consumer.
Why is this happening?
#include <iostream> // std::cout
#include <thread> // std::thread
#include <mutex> // std::mutex, std::unique_lock
#include <condition_variable> // std::condition_variable
#include <chrono>
std::mutex mtx;
std::condition_variable produce,consume;
int cargo = 0; // shared value by producers and consumers
void consumer () {
std::unique_lock<std::mutex> lck(mtx);
while (cargo==0) consume.wait(lck);
std::cout << "Get" << cargo << " "<< std::this_thread::get_id() << '\n';
cargo--;
produce.notify_one();
}
void producer (int id) {
std::unique_lock<std::mutex> lck(mtx);
while (cargo!=0) produce.wait(lck);
std::cout << "Push" << id << " "<< std::this_thread::get_id() << '\n';
cargo += id;
consume.notify_one();
}
void c () {
while(1) {
consumer();
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}
}
void p(int n) {
while(1) {
producer(n);
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}
}
int main ()
{
std::thread consumers[5],producers[5];
for (int i=0; i<5; ++i) {
consumers[i] = std::thread(c);
producers[i] = std::thread(p,i+1);
}
for (int i=0; i<5; ++i) {
producers[i].join();
consumers[i].join();
}
return 0;
}
I was trying to write code for the producer-consumer problem. The code below works fine most of the time but sometimes gets stuck, because of a "lost wake-up" I guess. I tried a thread sleep() but it didn't work. What modification is needed to handle this case in my code? Can a semaphore be helpful here? If yes, how would I implement one here?
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <iostream>
using namespace std;
int product = 0;
boost::mutex mutex;
boost::condition_variable cv;
boost::condition_variable pv;
bool done = false;
void consumer(){
while(done==false){
//cout << "start c" << endl
boost::mutex::scoped_lock lock(mutex);
cv.wait(lock);
//cout << "wakeup c" << endl;
if (done==false)
{
cout << product << endl;
//cout << "notify c" << endl;
pv.notify_one();
}
//cout << "end c" << endl;
}
}
void producer(){
for(int i=0;i<10;i++){
//cout << "start p" << endl;
boost::mutex::scoped_lock lock(mutex);
boost::this_thread::sleep(boost::posix_time::microseconds(50000));
++product;
//cout << "notify p" << endl;
cv.notify_one();
pv.wait(lock);
//cout << "wakeup p" << endl;
}
//cout << "end p" << endl;
cv.notify_one();
done = true;
}
int main()
{
int t = 1000;
while(t--){
/*
This is not perfect, and is prone to a subtle issue called the lost wakeup (for example, producer calls notify()
on the condition, but client hasn't really called wait() yet, then both will wait() indefinitely.)
*/
boost::thread consumerThread(&consumer);
boost::thread producerThread(&producer);
producerThread.join();
consumerThread.join();
done =false;
//cout << "process end" << endl;
}
cout << "done" << endl;
getchar();
return 0;
}
Yes, you want a way to know (in the consumer) that you "missed" a signal. A semaphore can help. There's more than one way to skin a cat, so here's my simple take on it (using just C++11 standard library features):
class semaphore
{
private:
std::mutex mtx;
std::condition_variable cv;
int count;
public:
semaphore(int count_ = 0) : count(count_) { }
void notify()
{
std::unique_lock<std::mutex> lck(mtx);
++count;
cv.notify_one();
}
void wait() { return wait([]{}); } // no-op action
template <typename F>
auto wait(F&& func = []{}) -> decltype(std::declval<F>()())
{
std::unique_lock<std::mutex> lck(mtx);
while(count == 0){
cv.wait(lck);
}
count--;
return func();
}
};
For convenience, I added a wait() overload that takes a function to be executed under the lock. This makes it possible for the consumer to operate the 'semaphore' without ever manually operating the lock (and still get the value of product without data races):
semaphore sem;
void consumer() {
do {
bool stop = false;
int received_product = sem.wait([&stop] { stop = done; return product; });
if (stop)
break;
std::cout << received_product << std::endl;
std::unique_lock<std::mutex> lock(processed_mutex);
processed_signal.notify_one();
} while(true);
}
A fully working demo: Live on Coliru:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <cassert>
class semaphore
{
private:
std::mutex mtx;
std::condition_variable cv;
int count;
public:
semaphore(int count_ = 0) : count(count_) { }
void notify()
{
std::unique_lock<std::mutex> lck(mtx);
++count;
cv.notify_one();
}
void wait() { return wait([]{}); } // no-op action
template <typename F>
auto wait(F&& func = []{}) -> decltype(std::declval<F>()())
{
std::unique_lock<std::mutex> lck(mtx);
while(count == 0){
cv.wait(lck);
}
count--;
return func();
}
};
semaphore sem;
int product = 0;
std::mutex processed_mutex;
std::condition_variable processed_signal;
bool done = false;
void consumer(int check) {
do {
bool stop = false;
int received_product = sem.wait([&stop] { stop = done; return product; });
if (stop)
break;
std::cout << received_product << std::endl;
assert(++check == received_product);
std::unique_lock<std::mutex> lock(processed_mutex);
processed_signal.notify_one();
} while(true);
}
void producer() {
std::unique_lock<std::mutex> lock(processed_mutex);
for(int i = 0; i < 10; ++i) {
++product;
sem.notify();
processed_signal.wait(lock);
}
done = true;
sem.notify();
}
int main() {
int t = 1000;
while(t--) {
std::thread consumerThread(&consumer, product);
std::thread producerThread(&producer);
producerThread.join();
consumerThread.join();
done = false;
std::cout << "process end" << std::endl;
}
std::cout << "done" << std::endl;
}
You seem to be ignoring that the variable done is also shared state, to the same extent as product. This can lead to several race conditions. In your case, I see at least one scenario where consumerThread makes no progress:
The loop executes as intended.
consumer executes and is waiting at cv.wait(lock);
producer has finished the for loop, notifies consumer and is preempted
consumer wakes up, reads done == false, outputs product, reads done == false again, waits on the condition
producer sets done to true and exits
consumer is stuck forever
To avoid this kind of issue you should hold a lock when reading or writing done. By the way, your implementation is quite sequential, i.e. the producer and the consumer can only process a single piece of data at a time...
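A minimal sketch of that fix, written with the standard-library equivalents of the Boost types in the question (my own illustration, not the answerer's code): done and product are only touched while holding the mutex, and both waits use predicates, so no wakeup can be lost:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int product = 0;
std::mutex m;
std::condition_variable cv, pv;
bool done = false;
bool produced = false; // handshake flag, guards against lost wakeups in either direction

void consumer() {
    std::unique_lock<std::mutex> lock(m);
    while (true) {
        cv.wait(lock, [] { return produced || done; });
        if (produced) {
            std::cout << product << std::endl;
            produced = false;
            pv.notify_one();
        }
        if (done && !produced) break;
    }
}

void producer() {
    std::unique_lock<std::mutex> lock(m);
    for (int i = 0; i < 10; ++i) {
        ++product;
        produced = true;
        cv.notify_one();
        pv.wait(lock, [] { return !produced; }); // wait until the item was consumed
    }
    done = true; // done is written under the same mutex it is read under
    cv.notify_one();
}

int main() {
    std::thread c(consumer), p(producer);
    p.join();
    c.join();
}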