How to implement a blocking processing loop? - c++

I'd like to implement a processing loop in a worker thread so that it processes data from a queue when there is something in it and blocks (the thread sleeps) otherwise... Is this even possible? It should also work without any noticeable delay.
Something simple like this:
std::deque<Foo> queue;

void worker()
{
    while (active) {
        blockAndWaitForData();
        while (!queue.empty()) {
            doSomething(queue.front());
            queue.pop_front();
        }
    }
}
Of course the queue would need to be locked plus some other details.
The Linux API could also be used directly if needed.

There is something in the C++11 standard that suits your needs: std::condition_variable. It allows a thread to wait until it is notified by another thread, so your worker can block until the producer notifies it, like this. Note that this is a very dumbed-down example and in most situations insufficient, but it gives you the gist of how to do it:
std::deque<int> q;
std::mutex m;
std::condition_variable cv;

void worker() {
    while (active) {
        std::deque<int> vals;
        {
            std::unique_lock<std::mutex> l(m);
            cv.wait(l, []{ return !q.empty(); }); // block until there is data
            vals = std::move(q);
            q.clear();
        }
        for (const auto& val : vals)
            doSomething(val);
    }
}

void producer() {
    while (active) {
        {
            std::unique_lock<std::mutex> l(m);
            q.push_back(produce());
        }
        cv.notify_one();
    }
}
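One detail the example above glosses over is shutdown: if active is cleared while the worker is blocked in wait, it will never wake up. A minimal sketch of one way to handle it (the stop_worker helper and the atomic active flag are assumptions of mine, not part of the original answer): make the predicate also check the flag, and flip the flag under the same mutex before notifying.
std::atomic<bool> active{true}; // assumed shutdown flag shared with worker() and producer()

void stop_worker()
{
    {
        std::lock_guard<std::mutex> l(m);
        active = false;          // change the flag under the mutex the worker's predicate uses
    }
    cv.notify_all();             // wake the worker even though the queue may be empty
}

// and in worker(), wait on both conditions:
//     cv.wait(l, []{ return !q.empty() || !active; });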

Related

Would it be good to create a wrapper class when using condition_variable?

I see a common pattern when using condition_variable to let one thread wait for another thread to finish some work:
Define a condition, a mutex, and a condition_variable:
bool workDone = false;
std::mutex mutex;
std::condition_variable cv;
In one thread, do some work. And when the work is done, update the condition under lock, and notify the condition_variable:
std::unique_lock<std::mutex> lock(mutex);
workDone = true;
lock.unlock();
cv.notify_all();
In another thread that needs to wait for the work to be done, create a lock and wait on the condition_variable:
std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock, []() { return workDone; });
In each of the 3 parts, we need multiple lines of code. We can create a class like the one below to wrap up the code above:
template<class T>
class CVWaiter
{
private:
    T m_val;
    std::mutex m_mutex;
    std::condition_variable m_cv;
public:
    CVWaiter(_In_ const T& val) : m_val(val)
    {
    }
    void Notify(_In_ std::function<void(T& val)> update)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        update(m_val);
        lock.unlock();
        m_cv.notify_all();
    }
    void Wait(_In_ std::function<bool(const T& val)> condition)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this, condition]() { return condition(m_val); });
    }
};
With this class, the 3 parts above can respectively be written as:
CVWaiter<bool> workDoneWaiter(false);
workDoneWaiter.Notify([](bool& val) { val = true; });
workDoneWaiter.Wait([](const bool& val) { return val; });
Is it a correct way to implement the wrapper class? Is it a good practice to use the wrapper class for scenarios like this? Is there already anything that can achieve the same in a simpler way in STL?

How to wait for completion of all tasks in this ThreadPool?

I am trying to write a ThreadPool class
class ThreadPool {
public:
ThreadPool(size_t numberOfThreads):isAlive(true) {
for(int i =0; i < numberOfThreads; i++) {
workerThreads.push_back(std::thread(&ThreadPool::doJob, this));
}
#ifdef DEBUG
std::cout<<"Construction Complete"<<std::endl;
#endif
}
~ThreadPool() {
#ifdef DEBUG
std::cout<<"Destruction Start"<<std::endl;
#endif
isAlive = false;
conditionVariable.notify_all();
waitForExecution();
#ifdef DEBUG
std::cout<<"Destruction Complete"<<std::endl;
#endif
}
void waitForExecution() {
for(std::thread& worker: workerThreads) {
worker.join();
}
}
void addWork(std::function<void()> job) {
#ifdef DEBUG
std::cout<<"Adding work"<<std::endl;
#endif
std::unique_lock<std::mutex> lock(lockListMutex);
jobQueue.push_back(job);
conditionVariable.notify_one();
}
private:
// performs actual work
void doJob() {
// try {
while(isAlive) {
#ifdef DEBUG
std::cout<<"Do Job"<<std::endl;
#endif
std::unique_lock<std::mutex> lock(lockListMutex);
if(!jobQueue.empty()) {
#ifdef DEBUG
std::cout<<"Next Job Found"<<std::endl;
#endif
std::function<void()> job = jobQueue.front();
jobQueue.pop_front();
job();
}
conditionVariable.wait(lock);
}
}
// a vector containing worker threads
std::vector<std::thread> workerThreads;
// a queue for jobs
std::list<std::function<void()>> jobQueue;
// a mutex for synchronized insertion and deletion from list
std::mutex lockListMutex;
std::atomic<bool> isAlive;
// condition variable to track whether or not there is a job in queue
std::condition_variable conditionVariable;
};
I am adding work to this thread pool from my main thread. My problem is that calling waitForExecution() results in the main thread waiting forever. I need to be able to terminate the threads when all work is done and continue main-thread execution from there. How should I proceed here?
The first step when writing a robust thread pool is to split the queue from the management of threads. A thread-safe queue is hard enough to write on its own, and so is managing threads.
A thread-safe queue looks like this:
template<class T>
struct threadsafe_queue {
    boost::optional<T> pop() {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&]{ return aborted || !data.empty(); });
        if (aborted) return {};
        T r = std::move(data.front());
        data.pop();
        return r;
    }
    void push( T t )
    {
        std::unique_lock<std::mutex> l(m);
        if (aborted) return;
        data.push( std::move(t) );
        cv.notify_one();
    }
    void abort()
    {
        std::unique_lock<std::mutex> l(m);
        aborted = true;
        data = {};
        cv.notify_all();
    }
    ~threadsafe_queue() { abort(); }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue< T > data;
    bool aborted = false;
};
where pop returns an empty optional when the queue is aborted.
Now our thread pool is easy:
struct threadpool {
    explicit threadpool(std::size_t n) { add_threads(n); }
    threadpool() = default;
    ~threadpool(){ abort(); }
    void add_thread() { add_threads(1); }
    void add_threads(std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            threads.push_back( std::thread( [this]{ do_thread_work(); } ) );
    }
    template<class F>
    auto add_task( F && f )
    {
        using R = std::result_of_t< F&() >;
        auto pptr = std::make_shared<std::packaged_task<R()>>( std::forward<F>(f) );
        auto future = pptr->get_future();
        tasks.push( [pptr]{ (*pptr)(); } );
        return future;
    }
    void abort()
    {
        tasks.abort();
        while (!threads.empty()) {
            threads.back().join();
            threads.pop_back();
        }
    }
private:
    threadsafe_queue< std::function<void()> > tasks;
    std::vector< std::thread > threads;
    void do_thread_work() {
        while (auto f = tasks.pop()) {
            (*f)();
        }
    }
};
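A quick usage sketch, not part of the original answer (main, the squaring tasks, and the printed message are invented for illustration): tasks handed to add_task return futures, and waiting on those futures is how the calling thread knows all of its work has completed.
int main() {
    threadpool pool(4);

    std::vector<std::future<int>> results;
    for (int i = 0; i < 8; ++i)
        results.push_back(pool.add_task([i]{ return i * i; })); // queue some work

    int sum = 0;
    for (auto& f : results)
        sum += f.get();      // blocks until that particular task has run

    std::cout << "sum of squares: " << sum << "\n";
    // the pool's destructor calls abort(), which drains the queue and joins the workers
}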
Note that if you abort, outstanding futures are filled with a broken-promise exception.
Worker threads stop running when the queue they are feeding from is aborted. The main thread on abort() will wait for the worker threads to finish (as is wise).
This does mean that worker thread tasks must also terminate, or the main thread will hang. There is no way to avoid this; often, your worker threads' tasks need to cooperate to get a message saying they should abort early.
Boost has a thread pool that integrates with its threading primitives and permits a less cooperative abort; in it, all mutex type operations implicitly check for an abort flag, and if they see it the operation throws.
How should I proceed here?
Well, you should learn to use your debugger, which should show you exactly where each of the threads you want to join is stopped.
I'm going to tell you what looks wrong, but strongly encourage you to do that first. It's invaluable.
OK, now: your condition variable loop is wrong.
The correct pattern is the one that behaves like the second form, with the predicate argument, here:
while (!pred()) {
    wait(lock);
}
Specifically, if your predicate is already true, you must not call wait at all: you may never be woken again, because the notification you would be waiting for has already happened.
Try
// wait until we have something to do
while(jobQueue.empty() && isAlive) {
    conditionVariable.wait(lock);
}
// unless we're exiting, we must have a job
if (isAlive) {
#ifdef DEBUG
    std::cout<<"Next Job Found"<<std::endl;
#endif
    std::function<void()> job = jobQueue.front();
    jobQueue.pop_front();
    job();
}
Imagine your thread is running a job when you call notify_all - it will call wait after the notification has already happened, and it isn't coming again. Since it doesn't check isAlive between finishing the job and calling wait, it's going to wait forever.
Even without the shutdown problem it would be wrong, because it should keep consuming jobs while there is work to do, instead of blocking every time it finishes one. Which reminds me of the last issue - you should probably unlock the mutex while executing the job (and re-lock it afterwards) - otherwise your pool is single-threaded.
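Putting those two points together, a sketch of what doJob could look like (it reuses the member names from the question and is only one possible shape, not a drop-in fix): keep draining the queue while there is work, release the mutex while each job runs, and only wait when the queue is empty.
void doJob() {
    std::unique_lock<std::mutex> lock(lockListMutex);
    while (isAlive) {
        if (!jobQueue.empty()) {
            std::function<void()> job = jobQueue.front();
            jobQueue.pop_front();
            lock.unlock();                // don't hold the mutex while running the job,
            job();                        // or the pool is effectively single-threaded
            lock.lock();
        } else {
            conditionVariable.wait(lock); // only block when there is nothing to do
        }
    }
}
For the shutdown path to be reliable, the destructor should set isAlive to false and call notify_all while holding lockListMutex; otherwise the flag change can slip in between a worker's empty-queue check and its call to wait, and that notification is lost.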

std::mutex with RAII but finish & release in background thread

I have a function for occasionally getting a frame from GigE camera, and want it to return quickly. The standard procedure is like this:
// ...
camera.StartCapture();
Image img=camera.GetNextFrame();
camera.StopCapture(); // <-- takes a few secs
return img;
Return data is ready after GetNextFrame(), and StopCapture() is quite slow; therefore, I'd like to return img as soon as possible and spawn a background thread to do StopCapture(). However, in the (unlikely) case that the acquisition is started again, I would like to protect the access with a mutex. There are places where exceptions can be thrown, so I decided to use a RAII-style lock, which will be released at scope exit. At the same time, I need to transfer the lock to the background thread. Something like this (pseudocode):
class CamIface{
std::mutex mutex;
CameraHw camera;
public:
Image acquire(){
std::unique_lock<std::mutex> lock(mutex); // waits for cleanup after the previous call to finish
camera.StartCapture();
Image img=camera.GetNextFrame();
std::thread bg([&]{
camera.StopCapture(); // takes a long time
lock.release(); // release the lock here, somehow
});
bg.detach();
return img;
// do not destroy&release lock here, do it in the bg thread
};
};
How can I transfer the lock from the caller to the background thread spawned? Or is there some better way to handle this?
EDIT: Sufficient lifetime of CamIface instance is assured, please suppose it exists forever.
Updated Answer:
@Revolver_Ocelot is right that my answer encourages undefined behavior, which I'd like to avoid.
So let me use the simple Semaphore implementation from this SO answer:
#include <mutex>
#include <thread>
#include <condition_variable>
class Semaphore {
public:
Semaphore (int count_ = 0)
: count(count_) {}
inline void notify()
{
std::unique_lock<std::mutex> lock(mtx);
count++;
cv.notify_one();
}
inline void wait()
{
std::unique_lock<std::mutex> lock(mtx);
while(count == 0){
cv.wait(lock);
}
count--;
}
private:
std::mutex mtx;
std::condition_variable cv;
int count;
};
class SemGuard
{
Semaphore* sem;
public:
SemGuard(Semaphore& semaphore) : sem(&semaphore)
{
sem->wait();
}
~SemGuard()
{
if (sem)sem->notify();
}
SemGuard(const SemGuard& other) = delete;
SemGuard& operator=(const SemGuard& other) = delete;
SemGuard(SemGuard&& other) : sem(other.sem)
{
other.sem = nullptr;
}
SemGuard& operator=(SemGuard&& other)
{
if (sem)sem->notify();
sem = other.sem;
other.sem = nullptr;
return *this;
}
};
class CamIface{
Semaphore sem;
CameraHw camera;
public:
CamIface() : sem(1){}
Image acquire(){
SemGuard guard(sem);
camera.StartCapture();
Image img=camera.GetNextFrame();
std::thread bg([&](SemGuard guard){
camera.StopCapture(); // takes a long time
}, std::move(guard));
bg.detach();
return img;
};
};
Old Answer:
Just like PanicSheep said, move the std::unique_lock into the thread. For example like this:
std::mutex mut;
void func()
{
std::unique_lock<std::mutex> lock(mut);
std::thread bg([&](std::unique_lock<std::mutex> lock)
{
camera.StopCapture(); // takes a long time
},std::move(lock));
bg.detach();
}
Also, just to remark, don't do this:
std::thread bg([&]()
{
std::unique_lock<std::mutex> local_lock = std::move(lock);
camera.StopCapture(); // takes a long time
local_lock.release(); // release the lock here, somehow
});
Because you're racing the thread startup and the function scope ending.
Move the std::unique_lock to the background thread.
You can use both a mutex and a condition_variable to do the synchronization. Also, it's dangerous to detach the background thread, since the thread might still be running when the CamIface object has been destroyed.
class CamIface {
public:
    CamIface() {
        background_thread = std::thread(&CamIface::stop, this);
    }
    ~CamIface() {
        if (background_thread.joinable()) {
            {
                std::lock_guard<std::mutex> lock(mtx);
                exit = true;            // set under the lock so the waiting thread cannot miss it
            }
            cv.notify_all();
            background_thread.join();
        }
    }
    Image acquire() {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [this]() { return !this->stopping; });
        camera.StartCapture();
        Image img = camera.GetNextFrame();  // acquire your image here...
        stopping = true;
        cv.notify_all();
        return img;
    }
private:
    void stop() {
        while (true) {
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [this]() { return this->stopping || this->exit; });
            if (exit) return; // exit if needed.
            camera.StopCapture();
            stopping = false;
            cv.notify_one();
        }
    }
    std::mutex mtx;
    std::condition_variable cv;
    std::atomic<bool> stopping{false};
    std::atomic<bool> exit{false};
    CameraHw camera;
    std::thread background_thread;
};
The fact that this is hard to do correctly should indicate that your design is oddly asymmetric. Instead, put all of the camera interaction in the background thread, with all the mutex operations from that thread. Think of the camera thread as owning the camera resource and the corresponding mutex.
Then deliver the captured frame(s) across the thread boundary with a std::future or other synchronization such as a concurrent queue. From there, you could consider making the background thread persistent. Note that this doesn't mean the capture has to run all the time; it might just make the thread management easier: if the camera object owns the thread, the destructor can signal it to exit and then join() it.
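A rough sketch of that shape, assuming the CameraHw and Image types from the question (the request queue, the member names, and the quit flag are my own invention, not taken from the answer): acquire() posts a promise and blocks on its future, while StartCapture/GetNextFrame/StopCapture all run on the camera-owning thread.
class CamIface {
public:
    CamIface() : worker([this]{ run(); }) {}
    ~CamIface() {
        {
            std::lock_guard<std::mutex> lock(mtx);
            quit = true;
        }
        cv.notify_all();
        worker.join();                    // no detached thread can outlive the object
    }
    Image acquire() {
        std::promise<Image> request;
        auto result = request.get_future();
        {
            std::lock_guard<std::mutex> lock(mtx);
            requests.push_back(std::move(request));
        }
        cv.notify_one();
        return result.get();              // returns as soon as the frame is delivered
    }
private:
    void run() {                          // every camera call happens on this thread
        for (;;) {
            std::promise<Image> request;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [this]{ return quit || !requests.empty(); });
                if (quit) return;
                request = std::move(requests.front());
                requests.pop_front();
            }
            camera.StartCapture();
            request.set_value(camera.GetNextFrame()); // caller's future unblocks here
            camera.StopCapture();         // the slow cleanup no longer delays the caller
        }
    }
    CameraHw camera;
    std::mutex mtx;
    std::condition_variable cv;
    std::deque<std::promise<Image>> requests;
    bool quit = false;
    std::thread worker;                   // declared last so it starts after everything else
};
If the object is destroyed while requests are still queued, their promises are destroyed unfulfilled and the corresponding get() calls throw a broken-promise error, which is at least a defined and catchable failure mode.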

detached thread crashing on exiting

I am using a simple thread pool as below-
template<typename T>
class thread_safe_queue // thread safe worker queue.
{
private:
std::atomic<bool> finish;
mutable std::mutex mut;
std::queue<T> data_queue;
std::condition_variable data_cond;
public:
thread_safe_queue() : finish{ false }
{}
~thread_safe_queue()
{}
void setDone()
{
finish.store(true);
data_cond.notify_one();
}
void push(T new_value)
{
std::lock_guard<std::mutex> lk(mut);
data_queue.push(std::move(new_value));
data_cond.notify_one();
}
void wait_and_pop(T& value)
{
std::unique_lock<std::mutex> lk(mut);
data_cond.wait(lk, [this]
{
return false == data_queue.empty();
});
if (finish.load() == true)
return;
value = std::move(data_queue.front());
data_queue.pop();
}
bool empty() const
{
std::lock_guard<std::mutex> lk(mut);
return data_queue.empty();
}
};
//Thread Pool
class ThreadPool
{
private:
std::atomic<bool> done;
unsigned thread_count;
std::vector<std::thread> threads;
// queue of pending jobs shared between submit() and the workers
thread_safe_queue<std::function<void()>> work_queue;
public:
explicit ThreadPool(unsigned count = 1);
ThreadPool(const ThreadPool & other) = delete;
ThreadPool& operator = (const ThreadPool & other) = delete;
~ThreadPool()
{
done.store(true);
work_queue.setDone();
// IF thread is NOT marked detached and this is uncommented the worker threads waits infinitely.
//for (auto &th : threads)
//{
// if (th.joinable())
// th.join();
// }
}
void init()
{
try
{
thread_count = std::min(thread_count, std::thread::hardware_concurrency());
for (unsigned i = 0; i < thread_count; ++i)
{
threads.emplace_back(std::move(std::thread(&ThreadPool::workerThread, this)));
threads.back().detach();
// here the problem is if i dont mark it detatched thread infinitely waits for condition.
// if i comment out the detach line and uncomment out comment lines in ~ThreadPool main threads waits infinitely.
}
}
catch (...)
{
done.store(true);
throw;
}
}
void workerThread()
{
while (true)
{
std::function<void()> task;
work_queue.wait_and_pop(task);
if (done == true)
break;
task();
}
}
void submit(std::function<void(void)> fn)
{
work_queue.push(fn);
}
};
The usage is like :
struct start
{
public:
ThreadPool::ThreadPool m_NotifPool;
ThreadPool::ThreadPool m_SnapPool;
start()
{
m_NotifPool.init();
m_SnapPool.init();
}
};
int main()
{
start s;
return 0;
}
I am running this code on Visual Studio 2013. The problem is that when the main thread exits, the program crashes and throws an exception.
Please help me figure out what I am doing wrong. How do I stop the worker threads properly? I have spent quite some time on this but am still trying to figure out what the issue is.
Thanks for your help in advance.
I am not familiar with threads in C++ but have worked with threading in C. In C, when you create child threads from the main thread, you have to make the main thread wait until the children finish. If main exits first, the threads become zombies. I don't think C throws an exception in the case of zombies, and maybe you are getting the exception because of these zombies. Try making the main thread wait until the children finish and see if it works.
When main exits, detached threads are allowed to continue running; however, object s is destroyed. So, as your threads attempt to access members of object s, you are running into UB.
See the accepted answer of this question for more details about your issue: What happens to a detached thread when main() exits?
A rule of thumb would be not to detach threads from main, but to signal the thread pool that the app is ending and join all threads. Or do as is answered in What happens to a detached thread when main() exits?
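Applied to the code in the question, that advice amounts to changes along these lines (a sketch reusing the asker's names, not a drop-in patch): make the queue's wait also wake up on finish, have setDone wake every worker, and join in the destructor instead of detaching in init().
// in thread_safe_queue:
void setDone()
{
    finish.store(true);
    data_cond.notify_all();           // wake every blocked worker, not just one
}
void wait_and_pop(T& value)
{
    std::unique_lock<std::mutex> lk(mut);
    data_cond.wait(lk, [this]
    {
        return finish.load() || !data_queue.empty();   // also wake on shutdown
    });
    if (finish.load())
        return;
    value = std::move(data_queue.front());
    data_queue.pop();
}

// in ThreadPool: don't call detach() in init(); join in the destructor instead
~ThreadPool()
{
    done.store(true);
    work_queue.setDone();
    for (auto& th : threads)
        if (th.joinable())
            th.join();                // workers see finish/done and leave their loops
}
As written, shutdown drops whatever is still queued, mirroring the finish check in the original wait_and_pop; if pending jobs should be drained first, the workers would need to keep popping until the queue is empty before honouring the flag.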

Preventing interleaving in C++

Consider the following example class, which allows one thread to wait for a signal from another thread.
class Sync
{
std::mutex mtx, access;
std::condition_variable cv;
bool waiting;
public:
Sync()
: waiting(false)
{
}
Sync(const Sync&);
~Sync()
{
sendSignal();
}
void waitForSignal()
{
access.lock();
if (!waiting)
{
std::unique_lock<std::mutex> lck (mtx);
waiting = true;
access.unlock();
// in the time between these two statements, 'sendSignal()' acquires
// the lock and calls 'cv.notify_all()', thus the signal was missed.
cv.wait(lck);
}
else
access.unlock();
}
void sendSignal()
{
access.lock();
if (waiting)
{
std::unique_lock<std::mutex> lck (mtx);
cv.notify_all();
waiting = false;
}
access.unlock();
}
};
The problem I'm having is that a signal will occasionally be missed due to interleaving during the time between unlocking the 'access' mutex and calling 'wait()' on the condition_variable. How can I prevent this?
You should probably only have one mutex; I don't see why you need the access mutex. Use mtx to protect the waiting variable and for the condition variable.
class Sync
{
std::mutex mtx;
std::condition_variable cv;
bool waiting;
public:
Sync()
: waiting(false)
{
}
Sync(const Sync&);
~Sync()
{
sendSignal();
}
void waitForSignal()
{
std::unique_lock lck (mtx);
if (!waiting)
{
waiting = true;
cv.wait(lck);
}
}
void sendSignal()
{
std::unique_lock lck (mtx);
if (waiting)
{
cv.notify_all();
waiting = false;
}
}
};
The waiting variable and the condition variable state are tied together so they should be treated as a single critical section.
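One caveat the simplified class shares with the original: cv.wait(lck) with no predicate can return on a spurious wakeup, and a signal sent before anyone is waiting is simply dropped. A sketch of a variant with slightly different semantics (my own, not from the answer above): the signal is latched in a flag, so it survives until a waiter consumes it, and the wait loops on a predicate.
class Sync
{
    std::mutex mtx;
    std::condition_variable cv;
    bool signalled = false;
public:
    ~Sync()
    {
        sendSignal();
    }
    void waitForSignal()
    {
        std::unique_lock<std::mutex> lck(mtx);
        cv.wait(lck, [this]{ return signalled; }); // loops internally on spurious wakeups
        signalled = false;                         // consume the signal (one waiter per signal)
    }
    void sendSignal()
    {
        {
            std::lock_guard<std::mutex> lck(mtx);
            signalled = true;                      // remembered even if nobody is waiting yet
        }
        cv.notify_all();
    }
};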