Is there a problem with this usage of boost condition code? - c++

Will this code ever wait on the mutex inside the producer's void push(data)?
If so, how do I get around that?
boost::mutex access;
boost::condition cond;

// consumer
data read()
{
    boost::mutex::scoped_lock lock(access);
    // this blocks until the data is ready
    cond.wait(lock);
    // queue is ready
    return data_from_queue();
}

// producer
void push(data)
{
    //<--- will a block ever happen here?
    boost::mutex::scoped_lock lock(access);
    // add data to queue
    cond.notify_one();
}
Let's say I have a thread pool whose threads call read() in a for(;;) loop and then process the data, and push() is called from some external thread. My question is: can that external thread ever block on its call to push(data)?

wait can return without notify ever being called. This is called a spurious wakeup. To handle this, code using a condition should always have a loop around the wait that checks that the expected condition really is in effect. For example:
std::queue<data> data_queue;
boost::mutex access;
boost::condition cond;

// consumer
data read()
{
    boost::mutex::scoped_lock lock(access);
    // the loop re-checks the condition, guarding against spurious wakeups
    while (data_queue.empty()) {
        // this blocks until the data is ready
        cond.wait(lock);
    }
    // queue is ready
    return data_from_queue();
}

// producer
void push(data d)
{
    boost::mutex::scoped_lock lock(access);
    // add data to queue
    data_queue.push(d);
    cond.notify_one();
}
Conceptually, "condition" is kind of misleading. Instead you can think of it as a signal. You are signalling another thread or threads to wake up, but you are not promising anything. Just, "Hey, maybe there's some data ready, why don't you go check eh?"

When .wait() is called it blocks the calling thread (the one from your thread pool) and releases the mutex. It returns when someone calls notify_one() or notify_all(), and before returning it re-acquires the mutex.
So the call to push(data) by your external thread can only block briefly: at worst until the consumer thread calls .wait() and thereby releases the mutex.
See the boost documentation on the condition's wait function.

Related

Using std::condition_variable to wait in the message sending thread, a deadlock occurred

I am writing a network module. The sending of data is carried out in a separate thread, using a concurrent queue to pass data from the main thread.
private:
    std::mutex mutex_;
    std::condition_variable blockNotification_;
    moodycamel::ConcurrentQueue<Envelope> sendQueue_;
    std::promise<bool> senderThreadStopped_;

void AsyncTransport::RunSender()
{
    while (!drain_)
    {
        SendAllQueuedEnvelope();
        std::unique_lock<std::mutex> lock(mutex_);
        blockNotification_.wait(lock);
    }
    // Make sure all envelopes have been sent.
    SendAllQueuedEnvelope();
    senderThreadStopped_.set_value(true);
    assert(sendQueue_.size_approx() == 0);
}

void AsyncTransport::SendAllQueuedEnvelope()
{
    auto envelope = Envelope::Wrap(nullptr);
    while (sendQueue_.try_dequeue(envelope))
    {
        envelope = syncTransport_->Send(envelope);
    }
}

Envelope AsyncTransport::Send(const Envelope& envelope) const
{
    if (drain_)
    {
        return envelope.With<SentFaildStamp>("The current transport has drained.");
    }
    if (!sendQueue_.try_enqueue(envelope.CloneContent()))
    {
        return envelope.With<SentFaildStamp>("Send queue is full.");
    }
    blockNotification_.notify_all();
    return envelope;
}
RunSender runs in a separate thread and keeps pulling data from the concurrent queue. When all the data in the queue has been sent, the thread waits, to avoid extra CPU overhead, until there is new data in the queue.
The Send method is called on the main thread.
But I found that I get a deadlock; what did I do wrong?
I expect the sending thread to enter wait after sending the queued data, and to wake up again once there is new data in the queue.
The Send() method isn't thread-safe. I would use a std::lock_guard in a new scope to lock the mutex and ensure it is unlocked before the notify_all call, like this:
Envelope AsyncTransport::Send(const Envelope& envelope) const
{
    {
        const std::lock_guard<std::mutex> lock(mutex_);
        if (drain_)
        {
            return envelope.With<SentFaildStamp>("The current transport has drained.");
        }
        if (!sendQueue_.try_enqueue(envelope.CloneContent()))
        {
            return envelope.With<SentFaildStamp>("Send queue is full.");
        }
    }
    blockNotification_.notify_all();
    return envelope;
}
Since the lock_guard locks the mutex, you would have to either make the mutex mutable to be used in a const function or remove the const specifier on the function.
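For example, the first option is a one-line change to the member declaration shown in the question:
mutable std::mutex mutex_;  // mutable, so the const Send() method can still lock it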
You should always pair a wait with a condition (predicate) to protect against spurious wake-ups. See CP.42. So I would change the wait to include a condition like this:
std::unique_lock<std::mutex> lock(mutex_);
blockNotification_.wait(lock, [&]() { return drain_ || sendQueue_.size_approx() > 0; });
Now the wait will only wake up once drain is true or the sendQueue has something in it.
You need a variable that is shared between threads and is the condition predicate.
You need to take a lock before reading or writing the predicate. The condition variable wait will unlock before sleeping so the other thread can lock, update the predicate and unlock. You can send the notify before or after unlocking. I prefer after, but meh.
A condition variable on its own is useless. It must always go along with a lock protected variable, or set of variables, which must be checked before continuing after waiting.
And of course it then only makes sense to update whatever that variable is before sending a notification.
Regarding the deadlock: it may happen that blockNotification_.notify_all() runs before blockNotification_.wait(lock), and then the latter will wait forever. Use the predicate form of wait (or wait_for) that checks whether the queue is non-empty, so that the wait returns when new messages are ready to be sent. Be aware of spurious wakeups.
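Putting the answers above together, a sketch of what RunSender could look like with the predicate form of wait (this assumes drain_ is also set while holding mutex_, which the original code does not show):
void AsyncTransport::RunSender()
{
    while (!drain_)
    {
        SendAllQueuedEnvelope();
        std::unique_lock<std::mutex> lock(mutex_);
        // Sleep only while there is nothing to send and we are not draining;
        // the predicate also covers notifications that arrived before the wait
        // started, as well as spurious wakeups.
        blockNotification_.wait(lock, [&]() {
            return drain_ || sendQueue_.size_approx() > 0;
        });
    }
    // Make sure all envelopes have been sent.
    SendAllQueuedEnvelope();
    senderThreadStopped_.set_value(true);
}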

mutex lock synchronization between different threads

Since I have recently started coding multi-threaded programs this might be a stupid question. I found out about the awesome mutex and condition variable usage. As far as I can understand, their use is:
Protect sections of code/shared resources from getting corrupted by concurrent access from multiple threads, by locking that portion so one can control which thread is accessing it.
If a thread is waiting for a resource/condition from another thread, it can use cond.wait() instead of polling every msec.
Now consider the following class example:
class Queue {
private:
    std::queue<std::string> m_queue;
    boost::mutex m_mutex;
    boost::condition_variable m_cond;
    bool m_exit;

public:
    Queue()
        : m_queue()
        , m_mutex()
        , m_cond()
        , m_exit(false)
    {}

    void Enqueue(const std::string& Req)
    {
        boost::mutex::scoped_lock lock(m_mutex);
        m_queue.push(Req);
        m_cond.notify_all();
    }

    std::string Dequeue()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        while (m_queue.empty() && !m_exit)
        {
            m_cond.wait(lock);
        }
        if (m_queue.empty() && m_exit) return "";
        std::string val = m_queue.front();
        m_queue.pop();
        return val;
    }

    void Exit()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        m_exit = true;
        m_cond.notify_all();
    }
};
In the above example, Exit() can be called and it will notify the threads waiting on Dequeue that it's time to exit without waiting for more data in the queue.
My question is: since Dequeue has acquired the lock on m_mutex, how can Exit acquire the same lock? Isn't it the case that Exit can only acquire it once Dequeue releases the lock?
I have seen this pattern in destructor implementations too: using the same class-member mutex, the destructor notifies all the threads (class methods) that it's time to terminate their respective loops/functions, etc.
As Jarod mentions in the comments, the call
m_cond.wait(lock)
is guaranteed to atomically unlock the mutex, releasing it for other threads, and to start listening for notifications on the condition variable (see e.g. here).
This atomicity also ensures that any notifying code, which must lock the same mutex, can only run after the listening is set up, so no notify calls will be missed. This assumes of course that both sides use the same mutex: the waiter locks it before checking its condition and waiting, and the notifier locks it before changing the shared state; otherwise all bets are off.
Another important bit to understand is that condition variables may suffer from "spurious wakeups", so it is important to have a second boolean condition (e.g. here, you could check the emptiness of your queue) so that you don't end up awoken with an empty queue. Something like this:
m_cond.wait(lock, [this]() { return !m_queue.empty() || m_exit; });
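For reference, here is how Dequeue from the class above looks with that predicate overload; the behaviour is the same as the explicit while loop:
std::string Dequeue()
{
    boost::mutex::scoped_lock lock(m_mutex);
    // The predicate is checked with the mutex held, before sleeping and
    // after every wakeup, so spurious wakeups and Exit() are both handled.
    m_cond.wait(lock, [this]() { return !m_queue.empty() || m_exit; });
    if (m_queue.empty() && m_exit) return "";
    std::string val = m_queue.front();
    m_queue.pop();
    return val;
}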

Is std::condition_variable thread-safe?

I have some code like this:
std::queue<myData*> _qDatas;
std::mutex _qDatasMtx;
Generator function
void generator_thread()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    {
        std::lock_guard<std::mutex> lck(_qDatasMtx);
        _qDatas.push(new myData);
    }
    //std::lock_guard<std::mutex> lck(cvMtx); //need lock here?
    cv.notify_one();
}
Consumer function
void consumer_thread()
{
    for (;;)
    {
        std::unique_lock lck(_qDatasMtx);
        if (_qDatas.size() > 0)
        {
            delete _qDatas.front();
            _qDatas.pop();
        }
        else
            cv.wait(lck);
    }
}
So if I have dozens of generator threads and one consumer thread, do I need a mutex lock when calling cv.notify_one() in each thread?
And is std::condition_variable thread-safe?
do I need a mutex lock when calling cv.notify_one() in each thread?
No
is std::condition_variable thread-safe?
Yes
When calling wait, you pass a locked mutex, which wait immediately unlocks so other threads can use it. When calling notify you don't need to hold that mutex, since what will happen is (detailed in the links below):
Notify
Wake up a thread that is waiting
That thread re-locks the mutex before returning from wait
From std::condition_variable:
execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)
and from std::condition_variable::notify_all:
The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s);
With regards to your code snippet:
//std::lock_guard<std::mutex> lck(cvMtx); //need lock here?
cv.notify_one();
No you do not need a lock there.
notify_one can be called from multiple threads without a lock.
However, in order for normal operation to work with no chance of notifications "slipping through", you must hold the mutex while you modify the state that the waiter's check (the spurious-wakeup guard) reads, and the waiter must perform that check under the same mutex before (re)waiting; notifying can then safely happen after the mutex is released.
Your code appears to pass that test. However, this version is clearer:
std::unique_lock lck(_qDatasMtx);
for (;;) {
    cv.wait(lck, [&] { return _qDatas.size() > 0; });
    delete _qDatas.front();
    _qDatas.pop();
}
and it reduces spurious unlocking/relocking.
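For reference, a self-contained sketch of the whole pattern. It substitutes int for myData and adds a done flag so the example terminates; those details are assumptions, not part of the original code:
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> qDatas;
std::mutex qDatasMtx;
std::condition_variable cv;
bool done = false;  // guarded by qDatasMtx

void generator_thread(int id)
{
    for (int i = 0; i < 100; ++i) {
        {
            std::lock_guard<std::mutex> lck(qDatasMtx);
            qDatas.push(id * 1000 + i);
        }
        cv.notify_one();  // no lock needed here
    }
}

void consumer_thread()
{
    std::unique_lock<std::mutex> lck(qDatasMtx);
    for (;;) {
        // Wait until there is data or all generators are finished.
        cv.wait(lck, [] { return !qDatas.empty() || done; });
        if (qDatas.empty())
            return;    // done, and nothing left to consume
        qDatas.pop();  // "process" one item
    }
}

int main()
{
    std::thread consumer(consumer_thread);
    std::vector<std::thread> generators;
    for (int id = 0; id < 4; ++id)
        generators.emplace_back(generator_thread, id);
    for (auto& g : generators)
        g.join();
    {
        std::lock_guard<std::mutex> lck(qDatasMtx);
        done = true;  // modify the guard under the lock...
    }
    cv.notify_all();  // ...then notify without it
    consumer.join();
}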

Is deadlock possible in this simple scenario?

Please see the following code:
std::mutex mutex;
std::condition_variable cv;
std::atomic<bool> terminate;

// Worker thread routine
void work() {
    while (!terminate) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            cv.wait(lg);
            // Do something
        }
        // Do something
    }
}

// This function is called from the main thread
void terminate_worker() {
    terminate = true;
    cv.notify_all();
    worker_thread.join();
}
Can the following scenario happen?
The worker thread is waiting for signals.
The main thread calls terminate_worker();
The main thread sets the atomic variable terminate to true, and then signals the worker thread.
The worker thread now wakes up, does its job, and loads terminate. At this point the change to terminate made by the main thread is not yet seen, so the worker thread decides to wait for another signal.
Now deadlock occurs...
I wonder whether this is ever possible. As I understand it, std::atomic only guarantees freedom from data races, but memory ordering is a different thing. Questions:
Is this possible?
If this is not possible, is this possible if terminate is not an atomic variable but is simply bool? Or atomicity has nothing to do with this?
If this is possible, what should I do?
Thank you.
I don't believe what you describe is possible: afaik (please correct me if I'm wrong) cv.notify_all() synchronizes with wait(), so when the worker thread awakes, it will see the change to terminate.
However:
A deadlock can happen the following way:
Worker thread (WT) determines that the terminate flag is still false.
The main thread (MT) sets the terminate flag and calls cv.notify_all().
As no one is currently waiting on the condition variable, that notification gets "lost/ignored".
MT calls join and blocks.
WT goes to sleep (cv.wait()) and blocks too.
Solution:
While you don't have to hold a lock while you call cv.notify, you
have to hold the lock while you are modifying terminate (even if it is an atomic), and
have to make sure that the check of the condition and the actual call to wait happen while you are holding the same lock.
This is why there is a form of wait that performs this check just before it sends the thread to sleep.
A corrected code (with minimal changes) could look like this:
// Worker thread routine
void work() {
    while (!terminate) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            if (!terminate) {
                cv.wait(lg);
            }
            // Do something
        }
        // Do something
    }
}

// This function is called from the main thread
void terminate_worker() {
    {
        std::lock_guard<std::mutex> lg(mutex);
        terminate = true;
    }
    cv.notify_all();
    worker_thread.join();
}
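If, in the real code, the worker is woken not only to terminate but also to do work, the predicate form of wait mentioned above expresses both conditions in one place. The sketch below assumes a hypothetical bool work_available flag, set under the same mutex by whoever produces work and followed by a notify; it is not part of the original code:
bool work_available = false;  // hypothetical flag, modified only while holding 'mutex'

// Worker thread routine (sketch)
void work() {
    while (!terminate) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            // Sleep until there is work or termination is requested;
            // spurious wakeups are filtered out by the predicate.
            cv.wait(lg, [] { return terminate.load() || work_available; });
            if (terminate)
                break;
            work_available = false;
            // Do something
        }
        // Do something
    }
}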

Not all threads notified of condition_variable.notify_all()

I have the following scenario:
condition_variable cv;
mutex mut;

// Thread 1:
void run() {
    while (true) {
        mut.lock();
        // create_some_data();
        mut.unlock();
        cv.notify_all();
    }
}

// Thread 2
void thread2() {
    mutex lockMutex;
    unique_lock<mutex> lock(lockMutex);
    while (running) {
        cv.wait(lock);
        mut.lock();
        // copy data
        mut.unlock();
        // process data
    }
}

// Thread 3, 4... - same as Thread 2
I run thread 1 all the time to produce new data. The other threads wait on the condition_variable until new data is available, then copy it and do some work on it. The work performed by the threads differs in how long it takes to finish; the idea is that a thread picks up new data only once it has finished with the old one. Data produced in the meantime is allowed to be "missed". I don't use a shared mutex (other than to access the data) because I don't want the threads to depend on each other.
The above code works fine on Windows, but now that I run it on Ubuntu I noticed that only one thread gets notified when notify_all() is called and the other ones just hang on wait().
Why is that? Does Linux require a different approach to using condition_variable?
Your code exhibits undefined behaviour right away: each waiting thread passes the condition variable a lock on its own local mutex, and every thread waiting on the same condition_variable must use the same mutex.
There are other problems, like not detecting spurious wakeups.
Finally, cv.notify_all() only notifies threads that are currently waiting. If a thread shows up later, no dice.
It's working by luck.
The mutex and the condition variable are two parts of the same construct. You can't mix and match mutexes and cvs.
try this:
void thread2() {
    unique_lock<mutex> lock(mut); // use the global mutex
    while (running) {
        cv.wait(lock);
        // mutex is already locked here
        // test condition; wakeups can be spurious
        // copy data
        lock.unlock();
        // process data
        lock.lock();
    }
}
Per this documentation:
Any thread that intends to wait on std::condition_variable has to
acquire a std::unique_lock, on the same mutex as used to protect the shared variable
execute wait, wait_for, or wait_until. The wait operations atomically release the mutex and suspend the execution of the thread.
When the condition variable is notified, a timeout expires, or a spurious wakeup occurs, the thread is awakened, and the mutex is atomically reacquired. The thread should then check the condition and resume waiting if the wake up was spurious.
This code
void thread2() {
    mutex lockMutex;
    unique_lock<mutex> lock(lockMutex);
    while (running) {
doesn't do that.
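One concrete way to fill in the "test condition" comment for this latest-data scenario is a generation counter bumped by thread 1 while it holds mut. The counter (dataVersion) and the handling of running are assumptions, not part of the original code:
std::condition_variable cv;
std::mutex mut;
unsigned long dataVersion = 0;  // assumed: incremented by thread 1 while holding mut
bool running = true;            // assumed: cleared while holding mut, then cv.notify_all()

void thread2() {
    unsigned long seenVersion = 0;
    std::unique_lock<std::mutex> lock(mut);  // the same mutex thread 1 uses
    while (true) {
        // Wake only when data newer than what we last copied exists, or when
        // we are told to stop; the predicate also filters out spurious wakeups.
        cv.wait(lock, [&] { return dataVersion != seenVersion || !running; });
        if (!running)
            break;
        seenVersion = dataVersion;
        // copy data (still under mut)
        lock.unlock();
        // process data (mut is released, so the other threads are not held up)
        lock.lock();
    }
}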