I'm using a mutex and condition_variable pair to implement multi-threaded processing. I have read examples and solid explanations like this and that. However, I do not understand why separate variables seem to trigger each other. For example,
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

mutex alert0, alert1;
condition_variable var0, var1;
void toy0() {
std::unique_lock<std::mutex> lock(alert0);
var0.wait(lock, [=] { return true; });
cout << "Toy0 triggered" << endl;
}
void toy1() {
std::unique_lock<std::mutex> lock(alert1);
var1.wait(lock, [=] { return true; });
cout << "Toy1 triggered" << endl;
}
int main() {
std::thread t0 = std::thread([=] {
toy0();
});
std::thread t1 = std::thread([=] {
toy1();
});
{
std::unique_lock<std::mutex> lock(alert0);
var0.notify_all();
}
t0.join();
t1.join();
return 0;
}
yields
Toy0 triggered
Toy1 triggered
If this is the intended outcome, how can I have cross-talk-free signals in different parts of the program, so that each wait() is triggered only by its own condition_variable's notify_all() and not by any other?
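(A note on what is happening here: wait(lock, pred) returns immediately whenever the predicate is already true, and [=] { return true; } is always true, so neither toy0 nor toy1 ever actually blocks; both print regardless of which condition_variable is notified. A minimal sketch of the conventional fix, with an illustrative ready0 flag guarded by alert0, so toy0 only wakes when its own signal is set; toy1 would use its own ready1/var1 pair and would stay blocked until those are signalled:)

mutex alert0;
condition_variable var0;
bool ready0 = false;                          // guarded by alert0; one flag per signal

void toy0() {
    std::unique_lock<std::mutex> lock(alert0);
    var0.wait(lock, [] { return ready0; });   // blocks until ready0 becomes true
    cout << "Toy0 triggered" << endl;
}

// signalling side (e.g. in main):
// {
//     std::lock_guard<std::mutex> lock(alert0);
//     ready0 = true;                         // set the condition before notifying
// }
// var0.notify_all();                         // only satisfies waits whose predicate is now true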
I'm trying to solve the dining philosophers problem using the Chandy-Misra algorithm. More explanation here: https://en.wikipedia.org/wiki/Dining_philosophers_problem
I'm using one mutex to lock the modified variables and another one, with a condition variable, to notify when the fork is free to use.
I can't see why all my philosophers are eating at the same time - they are not waiting for each other's forks at all. It seems like I'm using mutexes wrong.
Philosopher thread:
void philosopher::dine() {
while(!is_initialized); // here the thread waits until all other philosophers are initialized
while(!is_stopped) {
eat();
think(); // here just sleeps for a few seconds
}
}
Eat method:
void philosopher::eat() {
left_fork.request(index);
right_fork.request(index);
std::lock(right_fork.get_mutex(), left_fork.get_mutex());
std::lock_guard<std::mutex> l1( right_fork.get_mutex(), std::adopt_lock );
std::lock_guard<std::mutex> l2( left_fork.get_mutex(), std::adopt_lock );
int num = distribution(mt);
std::cout << "Philsopher " << index << " eats for " << num
<< "seconds." << std::endl;
sleep(num);
right_fork.free();
left_fork.free();
}
This is how the fork class looks:
enum fork_state {
CLEAN, DIRTY
};
class fork_t {
int index;
int owner_id;
fork_state state; // CLEAN or DIRTY; used by request() and free() below
mutable std::mutex condition_m;
std::mutex owner_m;
std::condition_variable condition;
public:
fork_t(int _index,int _owner_id);
fork_t(const fork_t &f);
void request(int phil_req);
void free();
std::mutex &get_mutex() { return owner_m; }
fork_t& operator=(fork_t const &f);
};
void fork_t::request(int phil_req) {
while (owner_id != phil_req ) {
std::unique_lock<std::mutex> l(condition_m);
if(state == DIRTY) {
std::lock_guard<std::mutex> lock(owner_m);
state = CLEAN;
owner_id = phil_req;
} else {
while(state == CLEAN) {
std::cout<<"Philosopher " << phil_req << " is waiting for"<< index <<std::endl;
condition.wait(l);
}
}
}
}
void fork_t::free() {
state = DIRTY;
condition.notify_one();
}
At the start all forks are given to philosophers with lower id.
I would be grateful for any tips.
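(One concrete issue visible in the code above: free() writes state without holding any lock, while the waiter in request() reads state under condition_m, so the two threads race. A minimal sketch of free() done under that same mutex - only a fragment, not a full Chandy-Misra fix; the rest of the shared state would need the same treatment:)

void fork_t::free() {
    {
        std::lock_guard<std::mutex> lock(condition_m); // same mutex the waiter in request() holds
        state = DIRTY;
    }
    condition.notify_one();                            // wake a philosopher blocked in condition.wait()
}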
I am testing how to push objects waiting on condition_variables into a queue. I want to execute the threads as per my wish because they will be in critical sections later. Nothing is printed from the threads; what could be wrong?
#include <chrono>
#include <condition_variable>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
using namespace std;

mutex print_mu;
void print(function<void()> func)
{
lock_guard<mutex> lock(print_mu);
func();
}
unsigned int generate_id()
{
static unsigned int id = 1;
return id++;
}
class foo
{
unsigned int id_;
mutex mu_;
condition_variable cv_;
bool signal_;
bool& kill_;
public:
foo(bool kill)
:kill_(kill)
, signal_(false)
, id_(generate_id())
{
run();
}
void set()
{
signal_ = true;
}
void run()
{
async(launch::async, [=]()
{
unique_lock<mutex> lock(mu_);
cv_.wait(lock, [&]() { return signal_ || kill_ ; });
if (kill_)
{
print([=](){ cout << " Thread " << id_ << " killed!" << endl; });
return;
}
print([=](){ cout << " Hello from thread " << id_ << endl; });
});
}
};
int main()
{
queue<shared_ptr<foo>> foos;
bool kill = false;
for (int i = 1; i <= 10; i++)
{
shared_ptr<foo> p = make_shared<foo>(kill);
foos.push(p);
}
this_thread::sleep_for(chrono::seconds(2));
auto p1 = foos.front();
p1->set();
foos.pop();
auto p2 = foos.front();
p2->set();
foos.pop();
this_thread::sleep_for(chrono::seconds(2));
kill = true; // terminate all waiting threads unconditionally
this_thread::sleep_for(chrono::seconds(2));
print([=](){ cout << " Main thread exits" << endl; });
return 0;
}
When a thread calls std::condition_variable::wait, it will block until another thread calls notify_one or notify_all on the same condition_variable. Since you never call notify_* on any of your condition_variables, the waiting threads block forever.
Your foo::run method will also block forever, since std::future's destructor will block waiting for the result of a std::async call if it's the last std::future referencing that result. Thus your code deadlocks: your main thread is blocked waiting for your async future to finish, and your async future is blocked waiting for your main thread to signal cv_.
(Also foo::kill_ is a dangling reference. Well, it would become one if run ever returned anyway.)
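For illustration, a minimal sketch of one way to untangle it: keep the returned future alive in a member so run() does not block, bind kill by reference to the caller's flag, and notify from set() under the mutex. It reuses print and generate_id from the question; the kill path would still need its own notification (and proper synchronization) to actually wake the threads.

#include <condition_variable>
#include <future>
#include <iostream>
#include <mutex>

class foo
{
    unsigned int id_;
    std::mutex mu_;
    std::condition_variable cv_;
    bool signal_ = false;
    bool& kill_;
    std::future<void> task_;   // keeps the async result alive so run() does not block
public:
    explicit foo(bool& kill)
        : id_(generate_id())
        , kill_(kill)          // bind to the caller's flag, not to a by-value parameter
    {
        run();
    }
    void set()
    {
        {
            std::lock_guard<std::mutex> lock(mu_);
            signal_ = true;
        }
        cv_.notify_one();      // without this, the waiting thread never wakes
    }
    void run()
    {
        task_ = std::async(std::launch::async, [this]
        {
            std::unique_lock<std::mutex> lock(mu_);
            cv_.wait(lock, [this] { return signal_ || kill_; });
            if (kill_)
            {
                print([this] { std::cout << " Thread " << id_ << " killed!" << std::endl; });
                return;
            }
            print([this] { std::cout << " Hello from thread " << id_ << std::endl; });
        });
    }
};

Note that foo's destructor still blocks in task_'s destructor until the thread has been woken, so every instance must eventually be set or killed (with a notify).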
I have the following piece of code. I am using C++11 threads to write a simple multi-threaded producer-consumer program.
class W
{
public:
explicit W();
void p();
void c();
private:
std::deque<std::uint64_t> q;
std::shared_ptr<std::mutex> m;
std::shared_ptr<std::condition_variable> cvQEmpty;
std::shared_ptr<std::condition_variable> cvQFull;
const std::size_t queue_size;
};
W::W()
: m(std::make_shared<std::mutex>()),
cvQEmpty(std::make_shared<std::condition_variable>()),
cvQFull(std::make_shared<std::condition_variable>()),
queue_size(3)
{
}
void
W::p()
{
while(1)
{
std::unique_lock<std::mutex> lk(*m.get());
if (q.size() >= queue_size)
{
cvQFull->wait(lk, [this] { return q.size() < queue_size; });
}
q.push_back(q.size());
std::cout << "Pushed " << q[q.size() - 1] << std::endl;
lk.unlock();
cvQEmpty->notify_one();
}
}
void
W::c()
{
while (1)
{
std::unique_lock<std::mutex> lk(*m.get());
if (q.empty())
{
cvQEmpty->wait(lk, [this] { return !q.empty(); });
}
while(!q.empty())
{
const std::uint64_t val = q[0];
std::cout << "Output : " << val << std::endl;
q.pop_front(); // pop from the front, matching the element just read
}
lk.unlock();
cvQFull->notify_one();
}
}
void
foo()
{
W w;
std::thread p(&W::p, w);
std::thread c(&W::c, w);
c.join();
p.join();
}
Both the threads are deadlocked on condition wait.
Could you please tell me where I am going wrong? The program compiles fine without any warnings.
The compiler used is g++-5.8.
Quite simple. You are copying your w argument to both threads, invoking the copy constructor. Those threads end up using two independent queues!
Solutions:
Make your queue a shared_ptr, like the mutex.
(better) Wrap your argument in std::ref, as sketched below.
(On a side note, explicit W() gives you nothing and is just syntax noise.)
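For the second option, a minimal sketch of the fixed foo() (only the thread construction changes; std::ref lives in <functional>):

void foo()
{
    W w;
    // std::ref passes a reference wrapper, so both threads share the same W instance
    std::thread p(&W::p, std::ref(w));
    std::thread c(&W::c, std::ref(w));
    c.join();
    p.join();
}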
I have the following program (made up example!):
#include<thread>
#include<mutex>
#include<iostream>
class MultiClass {
public:
void Run() {
std::thread t1(&MultiClass::Calc, this);
std::thread t2(&MultiClass::Calc, this);
std::thread t3(&MultiClass::Calc, this);
t1.join();
t2.join();
t3.join();
}
private:
void Calc() {
for (int i = 0; i < 10; ++i) {
std::cout << i << std::endl;
}
}
};
int main() {
MultiClass m;
m.Run();
return 0;
}
What I need is to sync the loop iterations in the following way, and I can't come up with a solution (I've been fiddling for about an hour now using mutexes but can't find THE combination):
t1 and t2 shall do one loop iteration, then t3 shall do one iteration, then again t1 and t2 shall do one, then t3 shall do one.
So you see, I need t1 and t2 to do things simultaneously and after one iteration, t3 shall do one iteration on its own.
Can you point me to how I would be able to achieve that? Like I said, I've been trying this with mutexes and can't come up with a solution.
If you really want to do this by hand with the given thread structure, you could use something like this*:
class SyncObj {
mutex mux;
condition_variable cv;
bool completed[2]{ false,false };
public:
void signalCompetionT1T2(int id) {
lock_guard<mutex> ul(mux);
completed[id] = true;
cv.notify_all();
}
void signalCompetionT3() {
lock_guard<mutex> ul(mux);
completed[0] = false;
completed[1] = false;
cv.notify_all();
}
void waitForCompetionT1T2() {
unique_lock<mutex> ul(mux);
cv.wait(ul, [&]() {return completed[0] && completed[1]; });
}
void waitForCompetionT3(int id) {
unique_lock<mutex> ul(mux);
cv.wait(ul, [&]() {return !completed[id]; });
}
};
class MultiClass {
public:
void Run() {
std::thread t1(&MultiClass::Calc1, this);
std::thread t2(&MultiClass::Calc2, this);
std::thread t3(&MultiClass::Calc3, this);
t1.join();
t2.join();
t3.join();
}
private:
SyncObj obj;
void Calc1() {
for (int i = 0; i < 10; ++i) {
obj.waitForCompetionT3(0);
std::cout << "T1:" << i << std::endl;
obj.signalCompetionT1T2(0);
}
}
void Calc2() {
for (int i = 0; i < 10; ++i) {
obj.waitForCompetionT3(1);
std::cout << "T2:" << i << std::endl;
obj.signalCompetionT1T2(1);
}
}
void Calc3() {
for (int i = 0; i < 10; ++i) {
obj.waitForCompetionT1T2();
std::cout << "T3:" << i << std::endl;
obj.signalCompetionT3();
}
}
};
However, this is only a reasonable approach if each iteration is computationally expensive enough that you can ignore the synchronization overhead. If that is not the case, you should probably have a look at a proper parallel programming library like Intel's TBB or Microsoft's PPL.
*) NOTE: This code is untested and unoptimized. I just wrote it to show what the general structure could look like.
Use two condition variables; here is a sketch.
thread 1 & 2 wait on condition variable segment_1:
std::condition_variable segment_1;
thread 3 waits on condition variable segment_2;
std::condition_variable segment_2;
threads 1 & 2 should wait() on segment_1, and thread 3 should wait() on segment_2. To kick off threads 1 & 2, call notify_all() on segment_1, and once they complete, call notify_one() on segment_2 to kick off thread 3. You may want to use some controlling thread to control the sequence unless you can chain (i.e. once 1 & 2 complete, the last one to complete calls notify for thread 3 and so on..)
This is not perfect (see lost wakeups)
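To avoid lost wakeups, each wait() should use a predicate over shared state that the notifier sets while holding the mutex. A minimal, hand-rolled sketch of one hand-off round in that style (the flag and counter names are illustrative, only a single round is shown rather than the full ten-iteration loop, and the "iteration" is just a print done under the lock for brevity):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable segment_1;   // threads 1 & 2 wait on this
std::condition_variable segment_2;   // thread 3 waits on this
bool workers_may_run = false;        // guarded by m
int workers_done = 0;                // guarded by m

void worker(int id) {
    std::unique_lock<std::mutex> lk(m);
    segment_1.wait(lk, [] { return workers_may_run; });  // predicate survives a missed notify
    std::cout << "worker " << id << " did its iteration\n";
    if (++workers_done == 2)
        segment_2.notify_one();      // the last worker to finish kicks off thread 3
}

void third() {
    std::unique_lock<std::mutex> lk(m);
    segment_2.wait(lk, [] { return workers_done == 2; });
    std::cout << "thread 3 did its iteration\n";
}

int main() {
    std::thread t1(worker, 1), t2(worker, 2), t3(third);
    {
        std::lock_guard<std::mutex> lk(m);
        workers_may_run = true;      // set the condition first...
    }
    segment_1.notify_all();          // ...then notify, so no wakeup is lost
    t1.join();
    t2.join();
    t3.join();
}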
I simply get packets from the network and Enqueue them in one thread, and then consume these packets (Dequeue) in another thread.
So I decided to use the Boost library to make a shared queue based on
https://www.quantnet.com/cplusplus-multithreading-boost/
template <typename T>
class SynchronisedQueue
{
private:
std::queue<T> m_queue; // Use STL queue to store data
boost::mutex m_mutex; // The mutex to synchronise on
boost::condition_variable m_cond;// The condition to wait for
public:
// Add data to the queue and notify others
void Enqueue(const T& data)
{
// Acquire lock on the queue
boost::unique_lock<boost::mutex> lock(m_mutex);
// Add the data to the queue
m_queue.push(data);
// Notify others that data is ready
m_cond.notify_one();
} // Lock is automatically released here
// Get data from the queue. Wait for data if not available
T Dequeue()
{
// Acquire lock on the queue
boost::unique_lock<boost::mutex> lock(m_mutex);
// When there is no data, wait till someone fills it.
// Lock is automatically released in the wait and obtained
// again after the wait
while (m_queue.size()==0) m_cond.wait(lock);
// Retrieve the data from the queue
T result=m_queue.front(); m_queue.pop();
return result;
} // Lock is automatically released here
};
The problem is, while no data is arriving, the Dequeue() method blocks my consumer thread, and when I want to terminate the consumer thread I am sometimes unable to end it or stop it.
What is the suggested way to end the blocking in Dequeue(), so that I can safely terminate the thread that consumes packets?
Any ideas or suggestions?
PS: The site https://www.quantnet.com/cplusplus-multithreading-boost/ uses "boost::this_thread::interruption_point();" for stopping the consumer thread ... Because of my legacy code structure this is not possible for me...
Based on the answer, I updated the shared queue like this:
#include <queue>
#include <boost/thread.hpp>
template <typename T>
class SynchronisedQueue
{
public:
SynchronisedQueue()
{
RequestToEnd = false;
EnqueueData = true;
}
void Enqueue(const T& data)
{
boost::unique_lock<boost::mutex> lock(m_mutex);
if(EnqueueData)
{
m_queue.push(data);
m_cond.notify_one();
}
}
bool TryDequeue(T& result)
{
boost::unique_lock<boost::mutex> lock(m_mutex);
while (m_queue.empty() && (! RequestToEnd))
{
m_cond.wait(lock);
}
if( RequestToEnd )
{
DoEndActions();
return false;
}
result= m_queue.front(); m_queue.pop();
return true;
}
void StopQueue()
{
RequestToEnd = true;
Enqueue(NULL);
}
int Size()
{
boost::unique_lock<boost::mutex> lock(m_mutex);
return m_queue.size();
}
private:
void DoEndActions()
{
EnqueueData = false;
while (!m_queue.empty())
{
m_queue.pop();
}
}
std::queue<T> m_queue; // Use STL queue to store data
boost::mutex m_mutex; // The mutex to synchronise on
boost::condition_variable m_cond; // The condition to wait for
bool RequestToEnd;
bool EnqueueData;
};
And here is my test drive:
#include <iostream>
#include <string>
#include "SynchronisedQueue.h"
using namespace std;
SynchronisedQueue<int> MyQueue;
void InsertToQueue()
{
int i= 0;
while(true)
{
MyQueue.Enqueue(++i);
}
}
void ConsumeFromQueue()
{
while(true)
{
int number;
cout << "Now try to dequeue" << endl;
bool success = MyQueue.TryDequeue(number);
if(success)
{
cout << "value is " << number << endl;
}
else
{
cout << " queue is stopped" << endl;
break;
}
}
cout << "Que size is : " << MyQueue.Size() << endl;
}
int main()
{
cout << "Test Started" << endl;
boost::thread startInsertIntoQueue = boost::thread(InsertToQueue);
boost::thread consumeFromQueue = boost::thread(ConsumeFromQueue);
boost::this_thread::sleep(boost::posix_time::seconds(5)); //After 5 seconds
MyQueue.StopQueue();
int endMain;
cin >> endMain;
return 0;
}
For now it seems to work... Based on new suggestions, I changed the stop method to:
void StopQueue()
{
boost::unique_lock<boost::mutex> lock(m_mutex);
RequestToEnd = true;
m_cond.notify_one();
}
Two easy solutions to let the thread end:
send an end message on the queue.
add another condition to the condition variable wait so you can command it to end:
while (m_queue.empty() && (!RequestToEnd)) m_cond.wait(lock);
if (RequestToEnd) { doEndActions(); }
else { T result = m_queue.front(); m_queue.pop(); return result; }
First, do you really need to terminate the thread? If not, don't.
If you do have to, then just queue it a suicide pill. I usually send a NULL cast to T. The thread checks T and, if NULL, cleans up, returns and so dies.
Also, you may need to purge the queue first by removing and delete()ing all the items.
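If T is a raw pointer type that owns its pointee, a hypothetical Purge() member along these lines (not part of the original class) could do that removing and deleting under the lock:

void Purge()
{
    boost::unique_lock<boost::mutex> lock(m_mutex);
    while (!m_queue.empty())
    {
        delete m_queue.front();   // only valid because T is assumed to be an owning raw pointer
        m_queue.pop();
    }
}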
Another option that should be considered is not to block indefinitely in threads. In other words, add a timeout to your blocking calls, like so:
bool TryDequeue(T& result, boost::chrono::milliseconds timeout)
{
boost::unique_lock<boost::mutex> lock(m_mutex);
boost::chrono::system_clock::time_point timeLimit =
boost::chrono::system_clock::now() + timeout;
while (m_queue.empty())
{
if (m_cond.wait_until(lock, timeLimit) ==
boost::cv_status::timeout)
{
return false;
}
}
result = m_queue.front(); m_queue.pop();
return true;
}
Then in your thread, just have a variable to indicate whether the thread is still running (I took the liberty of making your consumer into a class):
class Consumer
{
public:
boost::shared_ptr<Consumer> createConsumer()
{
boost::shared_ptr<Consumer> ret(new Consumer());
ret->_consumeFromQueue = boost::thread(&Consumer::ConsumeFromQueue, ret.get());
return ret;
}
~Consumer()   // public so boost::shared_ptr's default deleter can destroy the object
{
_threadRunning = false;
_consumeFromQueue.join();
}
protected:
Consumer()
: _threadRunning(true)
{
}
void ConsumeFromQueue()
{
while(_threadRunning == true)
{
int number;
cout << "Now try to dequeue" << endl;
// use the timeout overload so the loop re-checks _threadRunning periodically
bool success = MyQueue.TryDequeue(number, boost::chrono::milliseconds(500));
if(success)
{
cout << "value is " << number << endl;
}
// on a timeout, simply loop around and check _threadRunning again
}
cout << "Que size is : " << MyQueue.Size() << endl;
}
bool _threadRunning;
boost::thread _consumeFromQueue;
};
There is no need to hack your queue class just so it can be used in a thread; give it normal interfaces with timeouts, and then use it in the proper way based on the use case.
I give more details on why this is a good pattern to follow for threads here:
http://blog.chrisd.info/how-to-run-threads/
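A possible usage sketch of the factory above (MyQueue and the timeout-taking TryDequeue are as defined earlier; this snippet is illustrative, not from the original answer):

int main()
{
    MyQueue.Enqueue(42);   // give the consumer something to print
    {
        boost::shared_ptr<Consumer> consumer = Consumer::createConsumer();
        boost::this_thread::sleep(boost::posix_time::seconds(2));
    }   // ~Consumer() clears _threadRunning and joins; the wait timeout in
        // TryDequeue lets the loop observe the flag instead of blocking forever
    return 0;
}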