boost mutex, condition, scoped_lock, am I using them wrong here? - c++

class MyClass
{
public:
    void PushMessage(MyMessage m) // Thread 1 calls this
    {
        boost::mutex::scoped_lock lock(mMutex);
        mQueue.push_back(m);
        mCondition.notify_one();
    }
    MyMessage PopMessage()
    {
        boost::mutex::scoped_lock lock(mMutex);
        while(mQueue.empty())
            mCondition.wait(lock);
        MyMessage message = mQueue.front();
        mQueue.pop_front();
        return message;
    }
    void foo() // thread 2 is running this loop, and supposed to get messages
    {
        for(;;)
        {
            MyMessage message = PopMessage();
            do_something(message);
        }
    }
private:
    std::deque<MyMessage> mQueue;
    boost::mutex mMutex;
    boost::condition mCondition;
};
When I run the code, PushMessage is called, and foo() is waiting on PopMessage(), but PopMessage never returns.
What do_something does is not relevant here, I think.
What am I doing wrong here?
Strangely, the above code worked fine under Mac, but I'm having trouble on Linux.
The Boost version is 1.44.0.
Thank you

Rather than relying on the lock object to unlock the mutex when it goes out of scope, you could try manually unlocking the mutex in PushMessage() before you unblock the waiting thread, i.e.,
void PushMessage(MyMessage m) // Thread 1 calls this
{
    boost::mutex::scoped_lock lock(mMutex);
    mQueue.push_back(m);
    lock.unlock(); // <== manually unlock
    mCondition.notify_one();
}
That way, when thread 2 unblocks, there will be no "cross-over" time where thread 1 still holds the lock while thread 2 is trying to acquire it. I don't see why that would create problems, but again, at least you won't have thread 2 trying to call lock.lock() while thread 1 still holds the lock.

I think you need two mutex objects: one for synchronizing method calls across different threads, and one for the condition wait. You have mixed them.

Related

mutex lock synchronization between different threads

Since I have recently started coding multi-threaded programs, this might be a stupid question. I found out about the awesome mutex and condition variable usage. As far as I can understand, their use is:
Protect sections of code/shared resources from getting corrupted by access from multiple threads. Lock that portion so one can control which thread is accessing it at any time.
If a thread is waiting for a resource/condition from another thread, one can use cond.wait() instead of polling every millisecond.
Now consider the following class example:
class Queue {
private:
    std::queue<std::string> m_queue;
    boost::mutex m_mutex;
    boost::condition_variable m_cond;
    bool m_exit;
public:
    Queue()
        : m_queue()
        , m_mutex()
        , m_cond()
        , m_exit(false)
    {}
    void Enqueue(const std::string& Req)
    {
        boost::mutex::scoped_lock lock(m_mutex);
        m_queue.push(Req);
        m_cond.notify_all();
    }
    std::string Dequeue()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        while(m_queue.empty() && !m_exit)
        {
            m_cond.wait(lock);
        }
        if (m_queue.empty() && m_exit) return "";
        std::string val = m_queue.front();
        m_queue.pop();
        return val;
    }
    void Exit()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        m_exit = true;
        m_cond.notify_all();
    }
};
In the above example, Exit() can be called and it will notify the threads waiting on Dequeue that it's time to exit without waiting for more data in the queue.
My question is: since Dequeue has acquired the lock (m_mutex), how can Exit acquire the same lock? Isn't it the case that Exit can acquire it only after Dequeue releases the lock?
I have seen this pattern in destructor implementations too: using the same class-member mutex, the destructor notifies all the threads (class methods) that it's time to terminate their respective loops/functions, etc.
As Jarod mentions in the comments, the call
m_cond.wait(lock)
is guaranteed to atomically unlock the mutex, releasing it for other threads, and start listening for notifications on the condition variable (see e.g. here).
This atomicity also ensures that any code in the notifying thread is executed after the listening is set up (so no notify calls will be missed). This assumes, of course, that the notifying thread first locks the mutex; otherwise all bets are off.
Another important bit to understand is that condition variables may suffer from "spurious wakeups", so it is important to have a second boolean condition (e.g. here, you could check the emptiness of your queue) so that you don't end up awoken with an empty queue. Something like this:
m_cond.wait(lock, [this]() { return !m_queue.empty() || m_exit; });
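For context, here is a sketch of how Dequeue from the question could be written with that predicate overload (same members as above; this is just one possible form, not the only correct one):
std::string Dequeue()
{
    boost::mutex::scoped_lock lock(m_mutex);
    // The predicate is re-checked on every wake-up, so a spurious wake-up
    // with an empty queue simply goes back to sleep.
    m_cond.wait(lock, [this]() { return !m_queue.empty() || m_exit; });
    if (m_queue.empty() && m_exit) return "";
    std::string val = m_queue.front();
    m_queue.pop();
    return val;
}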

Writing a thread that stays alive

I would like to write a class that wraps around std::thread and behaves like a std::thread, but without actually allocating a thread every time I need to process something async. The reason is that I need to use multi-threading in a context where I'm not allowed to dynamically allocate, and I also don't want the overhead of creating a std::thread.
Instead, I want a thread to run in a loop and wait until it can start processing. The client calls invoke, which wakes up the thread. The thread locks a mutex, does its processing, and falls asleep again. A function join behaves like std::thread::join by blocking until the thread frees the lock (i.e. falls asleep again).
I think I got the class to run, but because of a general lack of experience in multi-threading, I would like to ask if anybody can spot race conditions or whether the approach I used is considered "good style". For example, I'm not sure if temporarily locking the mutex is a decent way to "join" the thread.
EDIT
I found another race condition: when calling join directly after invoke, there is no guarantee that the thread has already locked the mutex, so join may return before the work is done instead of blocking until the thread goes back to sleep. To prevent this, I had to add a check on the invoke counter.
Header
#pragma once
#include <thread>
#include <atomic>
#include <mutex>
#include <condition_variable>
#include <functional>
#include <memory>
class PersistentThread
{
public:
    PersistentThread();
    ~PersistentThread();
    // set function to invoke
    // locks if thread is currently processing _func
    void set(const std::function<void()> &f);
    // wakes the thread up to process _func and fall asleep again
    // locks if thread is currently processing _func
    void invoke();
    // mimics std::thread::join
    // locks until the thread is finished with its loop
    void join();
private:
    // internal thread loop
    void loop(bool *initialized);
private:
    bool _shutdownRequested{ false };
    int _invokeCounter{ 0 }; // pending invoke() requests, protected by _mutex
    std::mutex _mutex;
    std::unique_ptr<std::thread> _thread;
    std::condition_variable _cond;
    std::function<void()> _func{ nullptr };
};
Source File
#include "PersistentThread.h"
PersistentThread::PersistentThread()
{
auto lock = std::unique_lock<std::mutex>(_mutex);
bool initialized = false;
_thread = std::make_unique<std::thread>(&PersistentThread::loop, this, &initialized);
// wait until _thread notifies, check bool initialized to prevent spurious wakeups
_cond.wait(lock, [&] {return initialized; });
}
PersistentThread::~PersistentThread()
{
{
std::lock_guard<std::mutex> lock(_mutex);
_func = nullptr;
_shutdownRequested = true;
// wake up and let join
_cond.notify_one();
}
// join thread,
if (_thread->joinable())
{
_thread->join();
}
}
void PersistentThread::set(const std::function<void()>& f)
{
std::lock_guard<std::mutex> lock(_mutex);
this->_func = f;
}
void PersistentThread::invoke()
{
std::lock_guard<std::mutex> lock(_mutex);
_cond.notify_one();
}
void PersistentThread::join()
{
bool joined = false;
while (!joined)
{
std::lock_guard<std::mutex> lock(_mutex);
joined = (_invokeCounter == 0);
}
}
void PersistentThread::loop(bool *initialized)
{
std::unique_lock<std::mutex> lock(_mutex);
*initialized = true;
_cond.notify_one();
while (true)
{
// wait until we get the mutex again
_cond.wait(lock, [this] {return _shutdownRequested || (this->_invokeCounter > 0); });
// shut down if requested
if (_shutdownRequested) return;
// process
if (_func) _func();
_invokeCounter--;
}
}
You are asking about potential race conditions, and I see at least one race condition in the shown code.
After constructing a PersistentThread, there is no guarantee that the new thread will acquire its initial lock in its loop() before the main execution thread returns from the constructor and enters invoke(). It is possible that the main execution thread enters invoke() immediately after the constructor is complete and ends up notifying nobody, since the internal execution thread hasn't locked the mutex yet. As such, this invoke() will not result in any processing taking place.
You need to synchronize the completion of the constructor with the execution thread's initial lock acquisition.
EDIT: your revision looks right; but I also spotted another race condition.
As documented in the description of wait(), wait() may wake up "spuriously". Just because wait() returned doesn't mean that some other thread has entered invoke().
You need a counter, in addition to everything else, with invoke() incrementing the counter, and the execution thread executing its assigned duties only when the counter is greater than zero, decrementing it. This will guard against spurious wake-ups.
I would also have the execution thread check the counter before entering wait(), and enter wait() only if it is 0. Otherwise, it decrements the counter, executes its function, and loops back.
This should plug up all the potential race conditions in this area.
P.S. The spurious wake-up also applies to the initial notification, in your correction, that the execution thread has entered the loop. You'll need to do something similar for that situation, too.
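To illustrate (this is only a sketch reusing the member names from the question, not the poster's final code), the loop described above could take roughly this shape, with an explicit check before each wait:
void PersistentThread::loop(bool *initialized)
{
    std::unique_lock<std::mutex> lock(_mutex);
    *initialized = true;
    _cond.notify_one();
    for (;;)
    {
        // Enter wait() only when there is nothing to do; the while loop
        // re-checks after every wake-up, so spurious wake-ups are harmless
        // and an invoke() that happened before we got here is not lost.
        while (_invokeCounter == 0 && !_shutdownRequested)
            _cond.wait(lock);
        if (_shutdownRequested)
            return;
        if (_func)
            _func();
        // decrement after running, so a join() that checks for zero only
        // succeeds once the processing has actually finished
        --_invokeCounter;
    }
}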
I don't understand exactly what you're trying to ask. The style you used is nice.
It would be much safer to have the routines return bool and to check each of them, because a void return tells you nothing, so a bug could leave you silently stuck. Check everything you can, since the thread runs under the hood: make sure the calls are running correctly and that the processing really succeeded. You could also read up on "thread pooling".

Is deadlock possible in this simple scenario?

Please see the following code:
std::mutex mutex;
std::condition_variable cv;
std::atomic<bool> terminate;

// Worker thread routine
void work() {
    while( !terminate ) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            cv.wait(lg);
            // Do something
        }
        // Do something
    }
}

// This function is called from the main thread
void terminate_worker() {
    terminate = true;
    cv.notify_all();
    worker_thread.join();
}
Can the following scenario happen?
The worker thread is waiting for signals.
The main thread calls terminate_worker();
The main thread sets the atomic variable terminate to true, and then signals the worker thread.
The worker thread now wakes up, does its job, and loads terminate. At this point, the change to terminate made by the main thread is not yet seen, so the worker thread decides to wait for another signal.
Now deadlock occurs...
I wonder if this is ever possible. As I understand it, std::atomic only guarantees freedom from data races, but memory ordering is a different thing. Questions:
Is this possible?
If this is not possible, is it possible if terminate is not an atomic variable but a plain bool? Or does atomicity have nothing to do with this?
If this is possible, what should I do?
Thank you.
I don't believe what you describe is possible: as far as I know (please correct me if I'm wrong), cv.notify_all() synchronizes with wait(), so when the worker thread awakes, it will see the change to terminate.
However:
A deadlock can happen the following way:
Worker thread (WT) determines that the terminate flag is still false.
The main thread (MT) sets the terminate flag and calls cv.notify_all().
As no one is currently waiting on the condition variable, that notification gets "lost/ignored".
MT calls join and blocks.
WT goes to sleep ( cv.wait()) and blocks too.
Solution:
While you don't have to hold a lock while you call cv.notify, you do have to hold a lock while you are modifying terminate (even if it is an atomic), and you have to make sure that the check of the condition and the actual call to wait happen while you are holding that same lock.
This is why there is a form of wait that performs this check just before it sends the thread to sleep.
Corrected code (with minimal changes) could look like this:
// Worker thread routine
void work() {
    while( !terminate ) {
        {
            std::unique_lock<std::mutex> lg{ mutex };
            if (!terminate) {
                cv.wait(lg);
            }
            // Do something
        }
        // Do something
    }
}

// This function is called from the main thread
void terminate_worker() {
    {
        std::lock_guard<std::mutex> lg(mutex);
        terminate = true;
    }
    cv.notify_all();
    worker_thread.join();
}

Pausing std::thread until a function finishes

class Class {
public:
    Class ();
private:
    std::thread* updationThread;
};
Constructor:
Class::Class() {
    updationThread = new std::thread(&someFunc);
}
At some point in my application, I have to pause that thread, call a function, and after that function has executed, resume the thread. Let's say it happens here:
void Class::aFunction() {
    functionToBeCalled(); //Before this, the thread should be paused
    //Now, the thread should be resumed.
}
I have tried running functionToBeCalled() in another thread and using thread::join, but was unable to make that work for some reason.
How can I pause a thread or how can I use thread::join to pause a thread until the other finishes?
I don't think you can easily (in a standard way) "pause" some thread and then resume it. I imagine you can send SIGSTOP and SIGCONT if you are using some Unix-flavored OS, but otherwise you should properly protect the critical parts inside someFunc() with mutexes and locks, and wrap functionToBeCalled() with a lock on the corresponding mutex:
std::mutex m; // Global mutex, you should find a better place to put it
// (possibly in your object)
and inside the function:
void someFunc() {
    // I am just making up stuff here
    while(...) {
        func1();
        {
            std::lock_guard<std::mutex> lock(m); // lock the mutex
            ...; // Stuff that must not run with functionToBeCalled()
        } // Mutex unlocked here, by end of scope
    }
}
and when calling functionToBeCalled():
void Class::aFunction() {
    std::lock_guard<std::mutex> lock(m); // lock the mutex
    functionToBeCalled();
} // Mutex unlocked here, by end of scope
You can use a condition variable. An example similar to your situation is given here:
http://en.cppreference.com/w/cpp/thread/condition_variable
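To sketch that approach (pauseWorker and resumeWorker are invented names for illustration, not part of any API): the worker checks a paused flag at a safe point in its loop and sleeps on the condition variable until the flag is cleared.
#include <condition_variable>
#include <mutex>

std::mutex pauseMutex;
std::condition_variable pauseCv;
bool paused = false; // protected by pauseMutex

void someFunc() {
    for (;;) {
        {
            std::unique_lock<std::mutex> lock(pauseMutex);
            pauseCv.wait(lock, [] { return !paused; }); // sleep while paused
        }
        // ... one unit of work ...
    }
}

void pauseWorker() {
    std::lock_guard<std::mutex> lock(pauseMutex);
    paused = true; // the worker will stop at its next check of the flag
}

void resumeWorker() {
    {
        std::lock_guard<std::mutex> lock(pauseMutex);
        paused = false;
    }
    pauseCv.notify_one();
}
Note that pauseWorker() only asks the worker to stop at its next checkpoint; if functionToBeCalled() must never overlap with the worker's critical section, you still need the mutex approach shown above.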

C++ pthread blocking queue deadlock (I think)

I am having a problem with pthreads where I think I am getting a deadlock. I have created a blocking queue which I thought was working, but after doing some more testing I have found that if I try to cancel multiple threads that are blocking on the blocking_queue, I seem to get a deadlock.
The blocking queue is very simple and looks like this:
template <class T> class Blocking_Queue
{
public:
    Blocking_Queue()
    {
        pthread_mutex_init(&_lock, NULL);
        pthread_cond_init(&_cond, NULL);
    }
    ~Blocking_Queue()
    {
        pthread_mutex_destroy(&_lock);
        pthread_cond_destroy(&_cond);
    }
    void put(T t)
    {
        pthread_mutex_lock(&_lock);
        _queue.push(t);
        pthread_cond_signal(&_cond);
        pthread_mutex_unlock(&_lock);
    }
    T pull()
    {
        pthread_mutex_lock(&_lock);
        while(_queue.empty())
        {
            pthread_cond_wait(&_cond, &_lock);
        }
        T t = _queue.front();
        _queue.pop();
        pthread_mutex_unlock(&_lock);
        return t;
    }
private:
    std::queue<T> _queue;
    pthread_cond_t _cond;
    pthread_mutex_t _lock;
};
For testing, I have created 4 threads that pull on this blocking queue. I added some print statements to the blocking queue, and each thread is getting to the pthread_cond_wait() call. However, when I try to call pthread_cancel() and pthread_join() on each thread, the program just hangs.
I have also tested this with just one thread and it works perfectly.
According to the documentation, pthread_cond_wait() is a cancellation point, so calling cancel on those threads should cause them to stop execution (and this does work with just 1 thread). However, pthread_mutex_lock() is not a cancellation point. Could something like this be happening: when pthread_cancel() is called, the canceled thread acquires the mutex before terminating and doesn't unlock it, and then when the next thread gets canceled it cannot acquire the mutex and deadlocks? Or is there something else that I am doing wrong?
Any advice would be lovely. Thanks :)
pthread_cancel() is best avoided.
You can unblock all your threads blocked on Blocking_Queue::pull() by throwing an exception from there.
One weak spot in the queue is that T t = _queue.front(); invokes the copy constructor of T, which may throw an exception, leaving your queue's mutex locked forever. Better to use C++ scoped locks.
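For instance, a minimal hand-rolled scope guard for a pthread mutex could look like this (just a sketch; with boost::mutex or std::mutex you would use the library's scoped_lock/lock_guard instead):
// RAII guard: the destructor unlocks even if an exception propagates
// out of the guarded code.
class PthreadLockGuard
{
public:
    explicit PthreadLockGuard(pthread_mutex_t& m) : _m(m) { pthread_mutex_lock(&_m); }
    ~PthreadLockGuard() { pthread_mutex_unlock(&_m); }
private:
    PthreadLockGuard(const PthreadLockGuard&);            // non-copyable
    PthreadLockGuard& operator=(const PthreadLockGuard&);
    pthread_mutex_t& _m;
};

// pull() could then release the mutex on every exit path:
// T pull()
// {
//     PthreadLockGuard guard(_lock);
//     while (_queue.empty())
//         pthread_cond_wait(&_cond, &_lock);
//     T t = _queue.front(); // if this throws, ~PthreadLockGuard unlocks
//     _queue.pop();
//     return t;
// }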
Here is an example of graceful thread termination:
$ cat test.cc
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/condition_variable.hpp>
#include <exception>
#include <list>
#include <stdio.h>

struct BlockingQueueTerminate
    : std::exception
{};

template<class T>
class BlockingQueue
{
private:
    boost::mutex mtx_;
    boost::condition_variable cnd_;
    std::list<T> q_;
    unsigned blocked_;
    bool stop_;

public:
    BlockingQueue()
        : blocked_()
        , stop_()
    {}

    ~BlockingQueue()
    {
        this->stop(true);
    }

    void stop(bool wait)
    {
        // tell threads blocked on BlockingQueue::pull() to leave
        boost::mutex::scoped_lock lock(mtx_);
        stop_ = true;
        cnd_.notify_all();
        if(wait) // wait till all threads blocked on the queue leave BlockingQueue::pull()
            while(blocked_)
                cnd_.wait(lock);
    }

    void put(T t)
    {
        boost::mutex::scoped_lock lock(mtx_);
        q_.push_back(t);
        cnd_.notify_one();
    }

    T pull()
    {
        boost::mutex::scoped_lock lock(mtx_);
        ++blocked_;
        while(!stop_ && q_.empty())
            cnd_.wait(lock);
        --blocked_;
        if(stop_) {
            cnd_.notify_all(); // tell stop() this thread has left
            throw BlockingQueueTerminate();
        }
        T front = q_.front();
        q_.pop_front();
        return front;
    }
};

void sleep_ms(unsigned ms)
{
    // i am using old boost
    boost::thread::sleep(boost::get_system_time() + boost::posix_time::milliseconds(ms));
    // with latest one you can do this
    //boost::thread::sleep(boost::posix_time::milliseconds(10));
}

void thread(int n, BlockingQueue<int>* q)
try
{
    for(;;) {
        int m = q->pull();
        printf("thread %u: pulled %d\n", n, m);
        sleep_ms(10);
    }
}
catch(BlockingQueueTerminate&)
{
    printf("thread %u: finished\n", n);
}

int main()
{
    BlockingQueue<int> q;
    // create two threads
    boost::thread_group tg;
    tg.create_thread(boost::bind(thread, 1, &q));
    tg.create_thread(boost::bind(thread, 2, &q));
    for(int i = 1; i < 10; ++i)
        q.put(i);
    sleep_ms(100); // let the threads do something
    q.stop(false); // tell the threads to stop
    tg.join_all(); // wait till they stop
}
$ g++ -pthread -Wall -Wextra -o test -lboost_thread-mt test.cc
$ ./test
thread 2: pulled 1
thread 1: pulled 2
thread 1: pulled 3
thread 2: pulled 4
thread 1: pulled 5
thread 2: pulled 6
thread 1: pulled 7
thread 2: pulled 8
thread 1: pulled 9
thread 2: finished
thread 1: finished
I'm not exactly familiar with pthread_cancel() - I prefer cooperative termination.
Wouldn't pthread_cancel() leave your mutex locked? I suppose you need to clean up with a cancellation handler.
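If you do stay with pthread_cancel(), a cleanup handler could look roughly like this (a sketch only, with cancellable_pull as an invented stand-in for the queue's pull()):
#include <pthread.h>
#include <queue>

// Cleanup handler: releases the mutex that pthread_cond_wait() re-acquires
// before the cancellation takes effect.
static void unlock_mutex(void* m)
{
    pthread_mutex_unlock(static_cast<pthread_mutex_t*>(m));
}

template <class T>
T cancellable_pull(std::queue<T>& queue, pthread_mutex_t& lock, pthread_cond_t& cond)
{
    pthread_mutex_lock(&lock);
    // If this thread is canceled inside pthread_cond_wait(), the handler
    // unlocks the mutex so the remaining threads are not deadlocked.
    pthread_cleanup_push(unlock_mutex, &lock);
    while (queue.empty())
        pthread_cond_wait(&cond, &lock);
    T t = queue.front();
    queue.pop();
    pthread_cleanup_pop(0); // 0 = do not run the handler on this normal path
    pthread_mutex_unlock(&lock);
    return t;
}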
I've had a similar experience with pthread_cond_wait() / pthread_cancel(). I had issues with a lock still being held after the thread returned for some reason, and it was impossible to unlock it, since you have to unlock in the same thread that locked. I noticed these errors when calling pthread_mutex_destroy(); since I had a single-producer, single-consumer situation, the deadlock didn't occur.
pthread_cond_wait() is supposed to re-lock the mutex when returning, and this could have happened, but the final unlock didn't go through since we forcefully canceled the thread. For safety, I generally try to avoid using pthread_cancel() altogether, since some platforms don't even support it. You could use a volatile bool or an atomic and check whether the thread should be shut down. That way, the mutexes will be handled cleanly as well.
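A rough sketch of that cooperative shutdown applied to a queue like the one in the question (names invented here, not the poster's code): the flag is set while holding the lock so the wake-up cannot be lost, and pull() reports the shutdown instead of blocking forever.
#include <pthread.h>
#include <atomic>
#include <queue>

template <class T>
class StoppableQueue
{
public:
    StoppableQueue() { pthread_mutex_init(&_lock, NULL); pthread_cond_init(&_cond, NULL); }
    ~StoppableQueue() { pthread_mutex_destroy(&_lock); pthread_cond_destroy(&_cond); }

    void stop()
    {
        pthread_mutex_lock(&_lock);
        _stopped = true; // set under the lock so waiters cannot miss it
        pthread_cond_broadcast(&_cond);
        pthread_mutex_unlock(&_lock);
    }

    // Returns false instead of blocking forever once stop() has been called.
    bool pull(T& out)
    {
        pthread_mutex_lock(&_lock);
        while (_queue.empty() && !_stopped)
            pthread_cond_wait(&_cond, &_lock);
        if (_queue.empty()) { // stopped and drained
            pthread_mutex_unlock(&_lock);
            return false;
        }
        out = _queue.front();
        _queue.pop();
        pthread_mutex_unlock(&_lock);
        return true;
    }

private:
    std::queue<T> _queue;
    pthread_cond_t _cond;
    pthread_mutex_t _lock;
    std::atomic<bool> _stopped{ false };
};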