Consider the following example class, which allows one thread to wait for a signal from another thread.
class Sync
{
std::mutex mtx, access;
std::condition_variable cv;
bool waiting;
public:
Sync()
: waiting(false)
{
}
Sync(const Sync&);
~Sync()
{
sendSignal();
}
void waitForSignal()
{
access.lock();
if (!waiting)
{
std::unique_lock<std::mutex> lck (mtx);
waiting = true;
access.unlock();
// in the time between these two statements, 'sendSignal()' acquires
// the lock and calls 'cv.notify_all()', thus the signal was missed.
cv.wait(lck);
}
else
access.unlock();
}
void sendSignal()
{
access.lock();
if (waiting)
{
std::unique_lock<std::mutex> lck (mtx);
cv.notify_all();
waiting = false;
}
access.unlock();
}
};
The problem I'm having is that a signal will occasionally be missed due to interleaving during the time between unlocking the 'access' mutex and calling 'wait()' on the condition_variable. How can I prevent this?
You should probably have only one mutex. I don't see why you need access. Use mtx both to protect the waiting flag and for the condition variable.
class Sync
{
std::mutex mtx;
std::condition_variable cv;
bool waiting;
public:
Sync()
: waiting(false)
{
}
Sync(const Sync&);
~Sync()
{
sendSignal();
}
void waitForSignal()
{
std::unique_lock lck (mtx);
if (!waiting)
{
waiting = true;
cv.wait(lck);
}
}
void sendSignal()
{
std::unique_lock lck (mtx);
if (waiting)
{
cv.notify_all();
waiting = false;
}
}
};
The waiting variable and the condition variable state are tied together so they should be treated as a single critical section.
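One more hardening worth mentioning: a bare cv.wait(lck) can also return spuriously, and a signal sent before the waiter reaches wait() is simply dropped. A predicate-based wait avoids both. The sketch below adds a signalled flag that is not in the original class (so the semantics change slightly: a signal sent early is remembered until someone waits):
#include <condition_variable>
#include <mutex>

class Sync
{
    std::mutex mtx;
    std::condition_variable cv;
    bool signalled = false;
public:
    void waitForSignal()
    {
        std::unique_lock<std::mutex> lck(mtx);
        // wait() with a predicate loops internally, so spurious wakeups are
        // handled and an early signal is not lost.
        cv.wait(lck, [this] { return signalled; });
        signalled = false;   // reset so the object can be reused
    }
    void sendSignal()
    {
        {
            std::lock_guard<std::mutex> lck(mtx);
            signalled = true;
        }
        cv.notify_all();
    }
};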
Related
I see a common pattern when using condition_variable to let one thread wait for another thread to finish some work:
Define a condition, a mutex, and a condition_variable:
bool workDone = false;
std::mutex mutex;
std::condition_variable cv;
In one thread, do some work. When the work is done, update the condition under the lock and notify the condition_variable:
std::unique_lock<std::mutex> lock(mutex);
workDone = true;
lock.unlock();
cv.notify_all();
In another thread that needs to wait for the work to be done, create a lock and wait on the condition_variable:
std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock, []() { return workDone; });
Each of the three parts needs multiple lines of code. We can create a class like the one below to wrap them up:
template<class T>
class CVWaiter
{
private:
T m_val;
std::mutex m_mutex;
std::condition_variable m_cv;
public:
CVWaiter(_In_ const T& val) : m_val(val)
{
}
void Notify(_In_ std::function<void(T& val)> update)
{
std::unique_lock<std::mutex> lock(m_mutex);
update(m_val);
lock.unlock();
m_cv.notify_all();
}
void Wait(_In_ std::function<bool(const T& val)> condition)
{
std::unique_lock<std::mutex> lock(m_mutex);
m_cv.wait(lock, [this, condition]() { return condition(m_val); });
}
};
With this class, the 3 parts on the top can be respectively written as:
CVWaiter workDoneWaiter(false);
workDoneWaiter.Notify([](bool& val) { val = true; });
workDoneWaiter.Wait([](const bool& val) { return val; });
Is it a correct way to implement the wrapper class? Is it a good practice to use the wrapper class for scenarios like this? Is there already anything that can achieve the same in a simpler way in STL?
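For the strictly one-shot case, one simpler standard alternative is std::promise/std::future, which hides the mutex/condition_variable boilerplate. It is not a drop-in replacement for the reusable wrapper above, just a minimal sketch of the simpler option:
#include <future>
#include <thread>

int main()
{
    std::promise<void> done;
    std::future<void> doneFuture = done.get_future();

    std::thread worker([&done] {
        // ... do the work ...
        done.set_value();      // signals completion exactly once
    });

    doneFuture.wait();         // blocks until set_value() has been called
    worker.join();
}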
I'd like to implement a processing loop in a worker thread so that it processes data from a queue when there is something in it and blocks (the thread sleeps) otherwise. Is this even possible? It should also work without any noticeable delays.
Something simple like this:
std::deque<Foo> queue;
void worker()
{
while (active) {
blockAndWaitForData();
while (!queue.empty()) {
doSomething(queue.front());
queue.pop_front();
}
}
}
Of course the queue would need to be locked plus some other details.
The Linux API could also be used directly if needed.
There is something in the C++11 standard that will suit your needs: std::condition_variable. It allows a thread to wait until it is notified by another thread, so your worker can wait until the producer notifies it, like this. Note that this is a very dumbed-down example and in most situations insufficient, but it gives you the gist of how to do it:
std::deque<int> q;
std::mutex m;
std::condition_variable cv;
void worker() {
while (active) {
std::deque<int> vals;
{
std::unique_lock<std::mutex> l(m);
cv.wait(l, []{ return !q.empty(); }); // wait until there is data to process
vals = std::move(q);
q.clear();
}
for (const auto& val : vals)
doSomething(val);
}
}
void producer() {
while (active) {
{
std::unique_lock<std::mutex> l(m);
q.push_back(produce());
}
cv.notify_one();
}
}
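One of the insufficiencies just mentioned is shutdown: if active is cleared while the worker is blocked in wait, it sleeps forever. A self-contained sketch of the same pattern with shutdown folded into the predicate (active as a std::atomic<bool>, plus stand-ins for produce/doSomething) could look like this:
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>

std::deque<int> q;
std::mutex m;
std::condition_variable cv;
std::atomic<bool> active{true};

void doSomething(int v) { std::cout << v << '\n'; }   // stand-in for real work

void worker() {
    while (active) {
        std::deque<int> vals;
        {
            std::unique_lock<std::mutex> l(m);
            // Wake on new data or on shutdown; spurious wakeups are absorbed.
            cv.wait(l, [] { return !q.empty() || !active; });
            std::swap(vals, q);
        }
        for (int v : vals)
            doSomething(v);
    }
}

int main() {
    std::thread t(worker);
    {
        std::lock_guard<std::mutex> l(m);
        q.push_back(42);
    }
    cv.notify_one();

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    active = false;
    cv.notify_all();   // make sure the worker leaves its wait and sees the flag
    t.join();
}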
How can I terminate my spun-off thread in the destructor of Bar (without having to wait until the thread wakes up from its sleep)?
class Bar {
public:
Bar() : thread(&Bar::foo, this) {
}
~Bar() { /* terminate thread here */ }
...
void foo() {
while (true) {
std::this_thread::sleep_for(
std::chrono::seconds(LONG_PERIOD));
//do stuff//
}
}
private:
std::thread thread;
};
You could use a std::condition_variable:
class Bar {
public:
Bar() : t_(&Bar::foo, this) { }
~Bar() {
{
// Lock mutex to avoid race condition (see Mark B comment).
std::unique_lock<std::mutex> lk(m_);
// Update keep_ and notify the thread.
keep_ = false;
} // Unlock the mutex (see std::unique_lock)
cv_.notify_one();
t_.join(); // Wait for the thread to finish
}
void foo() {
std::unique_lock<std::mutex> lk(m_);
while (keep_) {
if (cv_.wait_for(lk, std::chrono::seconds(LONG_PERIOD)) == std::cv_status::no_timeout) {
continue; // On notify, just continue (keep_ is updated).
}
// Do whatever the thread needs to do...
}
}
private:
bool keep_{true};
std::thread t_;
std::mutex m_;
std::condition_variable cv_;
};
This should give you a general idea of what you may do:
You use a bool to control the loop (with read and write access protected by a std::mutex);
You use a std::condition_variable to wake up the thread so it does not have to wait out the full LONG_PERIOD.
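If you prefer to fold the timeout and the shutdown check into one call, the predicate overload of wait_for can be used instead. A sketch of such a variant of foo(), with the same members as above and assuming LONG_PERIOD is an integer number of seconds as in the question:
void foo() {
    std::unique_lock<std::mutex> lk(m_);
    while (keep_) {
        // Returns true as soon as keep_ becomes false, false on timeout.
        if (cv_.wait_for(lk, std::chrono::seconds(LONG_PERIOD),
                         [this] { return !keep_; })) {
            break;   // shutdown was requested, leave without doing the work
        }
        // Timeout expired and keep_ is still true: do the periodic work.
        // ... do stuff ...
    }
}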
I have a function for occasionally getting a frame from a GigE camera, and I want it to return quickly. The standard procedure is like this:
// ...
camera.StartCapture();
Image img=camera.GetNextFrame();
camera.StopCapture(); // <-- takes a few secs
return img;
The return data is ready after GetNextFrame(), and StopCapture() is quite slow; therefore, I'd like to return img as soon as possible and spawn a background thread to do StopCapture(). However, in the (unlikely) case that acquisition is started again, I would like to protect the access with a mutex. There are places where exceptions can be thrown, so I decided to use a RAII-style lock, which is released at scope exit. At the same time, I need to transfer the lock to the background thread. Something like this (pseudocode):
class CamIface{
std::mutex mutex;
CameraHw camera;
public:
Image acquire(){
std::unique_lock<std::mutex> lock(mutex); // waits for cleanup after the previous call to finish
camera.StartCapture();
Image img=camera.GetNextFrame();
std::thread bg([&]{
camera.StopCapture(); // takes a long time
lock.release(); // release the lock here, somehow
});
bg.detach();
return img;
// do not destroy&release lock here, do it in the bg thread
};
};
How can I transfer the lock from the caller to the background thread spawned? Or is there some better way to handle this?
EDIT: Sufficient lifetime of the CamIface instance is assured; please assume it exists forever.
Updated Answer:
@Revolver_Ocelot is right that my answer encourages undefined behavior, which I'd like to avoid.
So let me use the simple Semaphore implementation from this SO answer:
#include <mutex>
#include <thread>
#include <condition_variable>
class Semaphore {
public:
Semaphore (int count_ = 0)
: count(count_) {}
inline void notify()
{
std::unique_lock<std::mutex> lock(mtx);
count++;
cv.notify_one();
}
inline void wait()
{
std::unique_lock<std::mutex> lock(mtx);
while(count == 0){
cv.wait(lock);
}
count--;
}
private:
std::mutex mtx;
std::condition_variable cv;
int count;
};
class SemGuard
{
Semaphore* sem;
public:
SemGuard(Semaphore& semaphore) : sem(&semaphore)
{
sem->wait();
}
~SemGuard()
{
if (sem) sem->notify();
}
SemGuard(const SemGuard& other) = delete;
SemGuard& operator=(const SemGuard& other) = delete;
SemGuard(SemGuard&& other) : sem(other.sem)
{
other.sem = nullptr;
}
SemGuard& operator=(SemGuard&& other)
{
if (sem) sem->notify();
sem = other.sem;
other.sem = nullptr;
return *this;
}
};
class CamIface{
Semaphore sem;
CameraHw camera;
public:
CamIface() : sem(1){}
Image acquire(){
SemGuard guard(sem);
camera.StartCapture();
Image img=camera.GetNextFrame();
std::thread bg([&](SemGuard guard){
camera.StopCapture(); // takes a long time
}, std::move(guard));
bg.detach();
return img;
};
};
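As an aside, if C++20 is available, the hand-rolled Semaphore above can be replaced by the standard std::binary_semaphore, which provides the same wait/notify pairing; a minimal sketch:
#include <semaphore>   // C++20

std::binary_semaphore sem{1};   // starts available, like Semaphore(1)

void example()
{
    sem.acquire();   // corresponds to Semaphore::wait()
    // ... exclusive work ...
    sem.release();   // corresponds to Semaphore::notify()
}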
Old Answer:
Just like PanicSheep said, move the std::unique_lock into the thread. For example, like this:
std::mutex mut;
void func()
{
std::unique_lock<std::mutex> lock(mut);
std::thread bg([&](std::unique_lock<std::mutex> lock)
{
camera.StopCapture(); // takes a long time
},std::move(lock));
bg.detach();
}
Also, just to remark, don't do this:
std::thread bg([&]()
{
std::unique_lock<std::mutex> local_lock = std::move(lock);
camera.StopCapture(); // takes a long time
local_lock.release(); // release the lock here, somehow
});
Because you're racing the thread startup and the function scope ending.
Move the std::unique_lock to the background thread.
You can use both a mutex and a condition_variable to do the synchronization. Also, it is dangerous to detach the background thread, since the thread might still be running after the CamIface object has been destroyed.
class CamIface {
public:
CamIface() {
background_thread = std::thread(&CamIface::stop, this);
}
~CamIface() {
if (background_thread.joinable()) {
{
// Set 'exit' while holding the mutex so the notification cannot be lost
// between stop()'s predicate check and its wait.
std::lock_guard<std::mutex> lock(mtx);
exit = true;
}
cv.notify_all();
background_thread.join();
}
}
Image acquire() {
std::unique_lock<std::mutex> lock(mtx);
cv.wait(lock, [this]() { return !this->stopping; });
// acquire your image here...
stopping = true;
cv.notify_all();
return img;
}
private:
void stop() {
while (true) {
std::unique_lock<std::mutex> lock(mtx);
cv.wait(lock, [this]() { return this->stopping || this->exit; });
if (exit) return; // exit if needed.
camera.StopCapture();
stopping = false;
cv.notify_one();
}
}
std::mutex mtx;
std::condition_variable cv;
std::atomic<bool> stopping{false};
std::atomic<bool> exit{false};
CameraHw camera;
std::thread background_thread;
};
The fact that this is hard to do correctly should indicate that your design is oddly asymmetric. Instead, put all of the camera interaction in the background thread, with all the mutex operations from that thread. Think of the camera thread as owning the camera resource and the corresponding mutex.
Then deliver the captured frame(s) across the thread boundary with a std::future or other synchronization such as a concurrent queue. From here, you could consider making the background thread persistent. Note that this doesn't mean the capture has to run all the time; it might just make the thread management easier: if the camera object owns the thread, the destructor can signal it to exit and then join() it.
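A sketch of that symmetric design, with hypothetical stand-ins for the CameraHw and Image types from the question: the camera thread owns the device, and each frame is handed back through a std::promise/std::future pair, so the caller stops waiting as soon as the frame is ready while StopCapture() keeps running on the camera thread.
#include <condition_variable>
#include <future>
#include <mutex>
#include <optional>
#include <thread>

// Hypothetical stand-ins for the types in the question.
struct Image {};
struct CameraHw {
    void StartCapture() {}
    Image GetNextFrame() { return {}; }
    void StopCapture() {}   // the slow part
};

class CamIface {
    CameraHw camera;                             // touched only by the worker thread
    std::mutex mtx;
    std::condition_variable cv;
    std::optional<std::promise<Image>> request;  // at most one pending acquire
    bool quit = false;

    void run() {
        std::unique_lock<std::mutex> lock(mtx);
        for (;;) {
            cv.wait(lock, [this] { return request.has_value() || quit; });
            if (quit) return;
            std::promise<Image> p = std::move(*request);
            request.reset();
            lock.unlock();                        // camera work happens unlocked
            camera.StartCapture();
            p.set_value(camera.GetNextFrame());   // caller's future becomes ready here
            camera.StopCapture();                 // slow, but nobody is waiting on it
            lock.lock();
        }
    }

    // Declared last so every other member is initialized before the thread starts.
    std::thread worker{&CamIface::run, this};

public:
    ~CamIface() {
        {
            std::lock_guard<std::mutex> lock(mtx);
            quit = true;
        }
        cv.notify_all();
        worker.join();                            // no detached thread can outlive the object
    }

    // Assumes acquire() is not called again before the worker has picked up the
    // previous request (matching the "occasional" use in the question); a
    // production version would queue requests instead.
    std::future<Image> acquire() {
        std::lock_guard<std::mutex> lock(mtx);
        request.emplace();
        std::future<Image> frame = request->get_future();
        cv.notify_one();
        return frame;
    }
};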
I'm working on a server-side project, which is supposed to accept more than 100 client connections.
It's a multithreaded program using boost::thread. In some places I'm using boost::lock_guard<boost::mutex> to protect shared member data. There is also a BlockingQueue<ConnectionPtr> which contains the incoming connections. The implementation of the BlockingQueue:
template <typename DataType>
class BlockingQueue : private boost::noncopyable
{
public:
BlockingQueue()
: nblocked(0), stopped(false)
{
}
~BlockingQueue()
{
Stop(true);
}
void Push(const DataType& item)
{
boost::mutex::scoped_lock lock(mutex);
queue.push(item);
lock.unlock();
cond.notify_one(); // cond.notify_all();
}
bool Empty() const
{
boost::mutex::scoped_lock lock(mutex);
return queue.empty();
}
std::size_t Count() const
{
boost::mutex::scoped_lock lock(mutex);
return queue.size();
}
bool TryPop(DataType& poppedItem)
{
boost::mutex::scoped_lock lock(mutex);
if (queue.empty())
return false;
poppedItem = queue.front();
queue.pop();
return true;
}
DataType WaitPop()
{
boost::mutex::scoped_lock lock(mutex);
++nblocked;
while (!stopped && queue.empty()) // Or: if (queue.empty())
cond.wait(lock);
--nblocked;
if (stopped)
{
cond.notify_all(); // Tell Stop() that this thread has left
BOOST_THROW_EXCEPTION(BlockingQueueTerminatedException());
}
DataType tmp(queue.front());
queue.pop();
return tmp;
}
void Stop(bool wait)
{
boost::mutex::scoped_lock lock(mutex);
stopped = true;
cond.notify_all();
if (wait) // Wait for all threads blocked in BlockingQueue::WaitPop() to leave
{
while (nblocked)
cond.wait(lock);
}
}
private:
std::queue<DataType> queue;
mutable boost::mutex mutex;
boost::condition_variable_any cond;
unsigned int nblocked;
bool stopped;
};
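For reference, the intended consumer-side usage of WaitPop()/Stop() presumably looks something like the sketch below (with int payloads instead of ConnectionPtr, and assuming BlockingQueueTerminatedException is defined in the project as the code above implies):
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    BlockingQueue<int> queue;

    boost::thread consumer([&queue] {
        try {
            for (;;)
                std::cout << "got " << queue.WaitPop() << '\n';
        } catch (const BlockingQueueTerminatedException&) {
            // Stop() was called; leave the loop cleanly.
        }
    });

    queue.Push(1);
    queue.Push(2);
    queue.Stop(true);   // wake the consumer and wait until it has left WaitPop()
    consumer.join();
}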
For each Connection, there is a ConcurrentQueue<StreamPtr>, which contains the input Streams. The implementation of the ConcurrentQueue:
template <typename DataType>
class ConcurrentQueue : private boost::noncopyable
{
public:
void Push(const DataType& item)
{
boost::mutex::scoped_lock lock(mutex);
queue.push(item);
}
bool Empty() const
{
boost::mutex::scoped_lock lock(mutex);
return queue.empty();
}
bool TryPop(DataType& poppedItem)
{
boost::mutex::scoped_lock lock(mutex);
if (queue.empty())
return false;
poppedItem = queue.front();
queue.pop();
return true;
}
private:
std::queue<DataType> queue;
mutable boost::mutex mutex;
};
When debugging, the program is okay. But in load testing with 50, 100, or more client connections, it sometimes aborts with:
pthread_mutex_lock.c:321: __pthread_mutex_lock_full: Assertion `robust || (oldval & 0x40000000) == 0' failed.
I have no idea what happened, and it cannot be reproduced every time.
I googled a lot, but no luck. Please advise.
0x40000000 is FUTEX_OWNER_DIED - which has the following docs in the futex.h header:
/*
* The kernel signals via this bit that a thread holding a futex
* has exited without unlocking the futex. The kernel also does
* a FUTEX_WAKE on such futexes, after setting the bit, to wake
* up any possible waiters:
*/
#define FUTEX_OWNER_DIED 0x40000000
So the assertion seems to be an indication that a thread that's holding the lock is exiting for some reason. Is there a way that a thread object might be destroyed while it's holding a lock?
Another thing to check is if you have some sort of memory corruption somewhere. Valgrind might be a tool that can help you with that.
I had a similar issue and found this post. It may be useful to some of you: in my case I was just missing the init:
pthread_mutex_init(&_mutexChangeMapEvent, NULL);
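For reference, a minimal sketch of the two usual ways to initialize a pthread mutex correctly (the member name _mutexChangeMapEvent is taken from the line above; the wrapper struct is hypothetical):
#include <pthread.h>

// Option 1: static initialization for a mutex with static storage duration.
pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;

// Option 2: explicit init/destroy pairing, e.g. for a class member.
struct MapEventHolder {
    pthread_mutex_t _mutexChangeMapEvent;
    MapEventHolder()  { pthread_mutex_init(&_mutexChangeMapEvent, NULL); }
    ~MapEventHolder() { pthread_mutex_destroy(&_mutexChangeMapEvent); }
};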