How to test that my blocking queue actually blocks - C++

I have a blocking queue (it would be really hard for me to change its implementation), and I want to test that it actually blocks. In particular, the pop methods must block if the queue is empty and unblock as soon as a push is performed. See the following pseudo C++11 code for the test:
BlockingQueue queue; // empty queue
std::thread pushThread([&]
{
    sleep(large_delay);
    queue.push();
});
queue.pop();
Obviously this is not perfect: it may happen that the whole pushThread executes and terminates before pop is even called, no matter how large the delay is, and the larger the delay, the longer I have to wait for the test to finish.
How can I properly ensure that pop is executed before push is called, and that it blocks until push returns?

I do not believe this is possible without adding some extra state and interfaces to your BlockingQueue.
Proof goes something like this. You want to wait until the reading thread is blocked on pop. But there is no way to distinguish between that and the thread being about to execute the pop. This remains true no matter what you put just before or after the call to pop itself.
If you really want to fix this with 100% reliability, you need to add some state inside the queue, guarded by the queue's mutex, that means "someone is waiting". The pop call then has to update that state just before it atomically releases the mutex and goes to sleep on the internal condition variable. The push thread can obtain the mutex and wait until "someone is waiting". To avoid a busy loop here, you will want to use the condition variable again.
All of this machinery is nearly as complicated as the queue itself, so maybe you will want to test it, too... This sort of multi-threaded code is where concepts like "code coverage" -- and arguably even unit testing itself -- break down a bit. There are just too many possible interleavings of operations.
In practice, I would probably go with your original approach of sleeping.
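That is, something along these lines (a sketch; large_delay is an assumed value, and the timing assertion only catches a non-blocking pop probabilistically):
#include <cassert>
#include <chrono>
#include <thread>

void test_pop_blocks()
{
    BlockingQueue queue; // empty queue, as in the question
    auto const large_delay = std::chrono::seconds(2); // assumed value
    auto const start = std::chrono::steady_clock::now();
    std::thread pushThread([&]
    {
        std::this_thread::sleep_for(large_delay);
        queue.push();
    });
    queue.pop();
    // pop() can only have returned after push(), so if it really blocked,
    // at least large_delay must have elapsed; if it returned early, this fires.
    assert(std::chrono::steady_clock::now() - start >= large_delay);
    pushThread.join();
}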

#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <initializer_list>
#include <mutex>

template<class T>
struct async_queue {
    T pop() {
        auto l = lock();
        ++wait_count;
        cv.wait( l, [&]{ return !data.empty(); } );
        --wait_count;
        auto r = std::move(data.front());
        data.pop_front();
        return r;
    }
    void push(T in) {
        {
            auto l = lock();
            data.push_back( std::move(in) );
        }
        cv.notify_one();
    }
    void push_many(std::initializer_list<T> in) {
        {
            auto l = lock();
            for (auto&& x : in)
                data.push_back( x );
        }
        cv.notify_all();
    }
    std::size_t readers_waiting() const {
        return wait_count;
    }
    std::size_t data_waiting() const {
        auto l = lock();
        return data.size();
    }
private:
    std::deque<T> data; // was std::queue, which lacks push_back()/pop_front()
    std::condition_variable cv;
    mutable std::mutex m;
    std::atomic<std::size_t> wait_count{0};
    auto lock() const { return std::unique_lock<std::mutex>(m); }
};
or somesuch.
In the push thread, busy-wait on readers_waiting until it reaches 1.
At that point the reader has incremented wait_count under the lock; since push() must itself acquire the lock, by the time your push proceeds the reader is inside cv.wait and has released the lock. Do a push.
In theory an infinitely slow reader thread could have gotten into cv.wait and still be evaluating the predicate lambda by the time you call push, but an infinitely slow reader thread is no different from a blocked one...
This does, however, deal with slow thread startup and the like.
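For example, a test using the queue above might look like this (a sketch):
#include <thread>

void test_pop_blocks()
{
    async_queue<int> q;
    std::thread reader([&]{ (void)q.pop(); });
    // Busy-wait until the reader has entered pop() and bumped wait_count.
    while (q.readers_waiting() < 1)
        std::this_thread::yield();
    q.push(42); // must unblock the reader
    reader.join();
}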
Using readers_waiting and data_waiting for anything other than debugging is usually a code smell.

You can use a std::condition_variable to accomplish this. The cppreference.com page actually shows a very nice consumer-producer example, which should be exactly what you are looking for: http://en.cppreference.com/w/cpp/thread/condition_variable
EDIT: Actually the German version of cppreference.com has an even better example :-) http://de.cppreference.com/w/cpp/thread/condition_variable
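In case the links rot, the pattern shown there boils down to roughly this (a condensed sketch, not the exact cppreference code):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::string data;
bool ready = false;

void worker()
{
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, []{ return ready; }); // consumer blocks until signalled
    std::cout << "got: " << data << '\n';
}

int main()
{
    std::thread t(worker);
    {
        std::lock_guard<std::mutex> lk(m);
        data = "payload"; // producer publishes data under the mutex
        ready = true;
    }
    cv.notify_one();
    t.join();
}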

Related

std::queue pop a moved std::string in multithreading

I am currently implementing a string processor. I was using a single thread, but it is kind of slow, so I would like to use multiple threads to speed it up. Now it has some problems I cannot solve on my own.
I use a thread-safe queue to implement the producer and consumers. The push and pop methods of the thread-safe queue are below; if the whole file is needed, take a look here:
template <typename Tp>
void ThreadSafeQueue<Tp>::enqueue(Tp &&data) {
    std::lock_guard<std::mutex> lk(mtx);
    q.emplace(std::forward<Tp>(data));
    cv.notify_one();
}

template <typename Tp>
bool ThreadSafeQueue<Tp>::dequeue(Tp &data) {
    std::unique_lock<std::mutex> lk(mtx);
    while (!broken && q.empty()) {
        cv.wait(lk);
    }
    if (!broken && !q.empty()) {
        data = std::move(q.front());
        q.pop();
    }
    return !broken;
}
When I use this struct to store strings (i.e. Tp = std::string), a problem occurs. I am using it this way:
producer:
__prepare_data__(raw_data)
std::vector<std::thread> vec_threads;
for (int i = 0; i < thread_num; ++i)
{
    vec_threads.emplace_back(consumer, std::ref(raw_data), std::ref(processed_data));
}
for (int i = 0; i < thread_num; ++i)
{
    if (vec_threads[i].joinable())
    {
        vec_threads[i].join();
    }
    __collect_data__(processed_data)
}
and consumer:
std::string buf;
while (dequeue(buf))
{
    __process__(buf)
}
In the above code, all values passed to the consumer threads are passed by reference (i.e. using the std::ref wrapper), so the __collect_data__ procedure is valid.
I do not run into any problems in these cases:
The number of string pieces is small. (This does not mean the string length is short.)
Only one consumer is working.
I run into the problem in these cases:
The number of strings is large, millions or so.
Two or more consumers are working.
The exception the system throws varies between these two cases:
A "corrupted double-linked list" error, followed by a memory map dump. GDB told me the line causing the problem is the q.pop() in the dequeue method.
A plain segmentation fault. GDB told me the problem occurred while the consumer threads were joining.
The first case happens the most frequently, so I would like to ask, as the title indicates: would popping an already-moved-from std::string cause any undefined behavior? Or if you have any other insights, please let me know!
While there are issues with your code, there are none that explain your crash. I suggest you investigate your data processing code, not your queue.
For reference, your logic around queue shutdown is slightly wrong. For example, shutdown waits on the condition variable until the queue is empty but the dequeue operation does not notify on that variable. So you might deadlock.
It is easier to just ignore the "broken" flag in the dequeue operation until the queue is empty. That way the worker threads will drain the queue before quitting. Also, don't let shutdown block until the queue is empty; if you want to wait until all threads are done with the queue, just join the threads.
Something like this:
template <typename Tp>
bool ThreadSafeQueue<Tp>::dequeue(Tp &data) {
    std::unique_lock<std::mutex> lk(mtx);
    while (!broken && q.empty()) {
        cv.wait(lk);
    }
    if (q.empty())
        return false; // broken and drained
    data = std::move(q.front());
    q.pop();
    return true;
}

template <typename Tp>
void ThreadSafeQueue<Tp>::shutdown() {
    std::unique_lock<std::mutex> lk(mtx);
    broken = true;
    cv.notify_all();
}
There are other minor issues, for example it is in practice more efficient (and safe) to unlock mutexes before notifying the condition variables so that the woken threads do not race with the waking thread on acquiring/releasing the mutex. But that is not a correctness issue.
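For instance, the enqueue from the question could be restructured like this (a sketch of the same operation with the notify moved outside the critical section):
template <typename Tp>
void ThreadSafeQueue<Tp>::enqueue(Tp &&data) {
    {
        std::lock_guard<std::mutex> lk(mtx);
        q.emplace(std::forward<Tp>(data));
    }                // release the mutex first...
    cv.notify_one(); // ...so the woken thread can grab it without contention
}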
I also suggest you delete the move constructor on the queue. You rightfully noted that it shouldn't be called. Better make sure that it really isn't.
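That is, inside the class definition:
ThreadSafeQueue(ThreadSafeQueue &&) = delete;
ThreadSafeQueue &operator=(ThreadSafeQueue &&) = delete;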

Lambda expression returning a bool flag not stopping condition variable's wait() function

I have a WorkDispatcher class which holds Worker class objects as members and launches their functions in new threads.
Here is an example:
WorkDispatcher.h:
class WorkDispatcher
{
private:
    std::thread m_reconstructionThread;
    std::shared_ptr<Reconstruction::RGBDImageModel> m_currentRGBD;
public:
    WorkDispatcher();
    std::mutex m_rgbdMutex, m_commandMutex;
    std::deque<Network::NetworkCommandModel> m_CommandQueue;
    std::condition_variable m_RgbConditional, m_CommandConditional;
    Reconstruction::SceneReconstructor m_Reconstructor;
    void Initialize();
    void Work();
    void Stop();
};
WorkDispatcher.cpp:
void WorkDispatcher::Work()
{
    m_reconstructionThread = std::thread(
        &Reconstruction::SceneReconstructor::Reconstruct,
        std::ref(m_Reconstructor),
        std::ref(m_currentRGBD),
        std::ref(m_CommandQueue),
        std::ref(m_rgbdMutex),
        std::ref(m_RgbConditional),
        std::ref(m_commandMutex),
        std::ref(m_CommandConditional)
    );
}
These functions run infinite loops, and I use the condition variables to wait until work is available. For example, my Reconstruct function:
void SceneReconstructor::Reconstruct(
    std::shared_ptr<RGBDImageModel> &currentImage,
    std::deque<Network::NetworkCommandModel> commandQueue,
    std::mutex &rgbdMutex,
    std::condition_variable &rgbdCond,
    std::mutex &commandMutex,
    std::condition_variable &commandConditional)
{
    while (true)
    {
        std::unique_lock<std::mutex> rgbdLocker(rgbdMutex);
        rgbdCond.wait(rgbdLocker, [this] { return m_QuitReconstructionFlag; });
        // Quit flag to break out of loop
        if (m_QuitReconstructionFlag)
            break;
        // do stuff here
    }
}
So far so good. However, if I want to quit the application, I need to quit all of my worker threads. As seen above, these classes have a flag for this purpose, which I use as follows:
void WorkDispatcher::Stop()
{
    // for all worker threads do this
    m_Reconstructor.m_QuitReconstructionFlag = true;
    if (m_reconstructionThread.joinable())
        m_reconstructionThread.join();
}
In theory this should stop the wait() inside the worker thread's loop and then break out of the loop via m_QuitReconstructionFlag; however, this doesn't work.
What does work is the following:
remove the lambda from the wait functions
call notify_all() on the condition variables after setting the quit flags to true
This works fine for me, however the question is, why doesn't the lambda work?
why doesn't the lambda work?
It works just fine, by itself.
However, C++ requires complex rules to be followed to properly synchronize multiple execution threads. Just because one execution thread sets a particular variable to a value does not guarantee you, in any way, that other execution threads will see the variable's new value. The synchronization rules govern that behavior.
So, this lambda works just fine. In its own execution thread. But if you want this lambda to observe changes to the value, made by other execution threads, this must be correctly synchronized.
Additionally, if you review the documentation of wait(), you will find a detailed explanation stating that if the condition function evaluates to false, it will not be called again until the condition variable is notified.
What does work is ... call notify_all()
Well, of course. Since wait() requires the condition variable to be notified before it checks the waited-on condition again, that's what you must do!
Finally, notifying the condition variable will work correctly in most cases, but, as I mentioned, synchronization rules (of which mutexes and condition variables play an important part of) have some edge cases where this, by itself, will not work. You must follow the following sequence of events strictly in order to have proper synchronization in all edge cases:
Lock the same mutex that another execution thread has locked before waiting on its condition variable.
Notify the condition variable.
Unlock the mutex
You must protect m_QuitReconstructionFlag with the same mutex used by the condition variable wait.
Or it won't work.
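Applied to the code in the question, that means something like this (a sketch; it assumes Stop() can reach the same m_rgbdMutex the worker waits on):
void WorkDispatcher::Stop()
{
    {
        // Write the flag under the same mutex the worker's wait() uses, so the
        // update cannot slip between its predicate check and its going to sleep.
        std::lock_guard<std::mutex> lk(m_rgbdMutex);
        m_Reconstructor.m_QuitReconstructionFlag = true;
    }
    m_RgbConditional.notify_all();
    if (m_reconstructionThread.joinable())
        m_reconstructionThread.join();
}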
When using a condition variable, if you do not want to learn about the C++ memory model in extreme detail, you should follow "best practices" that defend you against problems.
The best practice for a condition variable is to bundle 3 things together:
The condition variable.
A mutex (often mutable).
A state.
Then bundle all 3 of them up behind a single abstraction of some kind.
To change the state:
Lock the mutex
Change the state
Notify the condition variable appropriately
Unlock the mutex
Do not think that the state being atomic means you don't have to lock the mutex.
When you want to wait on the condition variable:
Lock the mutex
Wait, passing a lambda that checks the state.
When exiting wait, you are free to update the state.
Unlock the mutex
In general, use a unique_lock to lock the mutex in all of the above cases, and rely on RAII to unlock it.
What, exactly, the state is, and when you notify, is up to you.
Do not interact with that mutex directly outside of this bundle and api, don't interact with the state directly outside of this bundle and api, and don't interact with the condition variable outside of this bundle and api.
Copy or move data out of the api if needed, don't hold pointers or references or iterators into it.
Your state can have more than just one variable in it. Like, you can have a queue and a bool for shutdown.
For example, suppose you have a queue.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>

template<class T>
struct cv_queue {
    std::optional<T> pop() {
        auto l = lock();
        cv.wait( l, [&]{ return aborted || !queue.empty(); } );
        if (aborted) return {};
        auto retval = std::move(queue.front());
        queue.pop_front();
        return retval;
    }
    void push( T in ) {
        auto l = lock();
        queue.push_back( std::move(in) );
        cv.notify_one();
    }
    void abort_everything() {
        auto l = lock();
        aborted = true; // was "abort = true", which does not compile
        cv.notify_all();
    }
    bool empty() const {
        auto l = lock();
        return queue.empty();
    }
private:
    std::condition_variable cv;
    mutable std::mutex m;
    std::deque<T> queue;
    bool aborted = false;
    auto lock() const { return std::unique_lock<std::mutex>( m ); }
};
Adding pop_wait_for or try_pop isn't hard.
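For instance, try_pop might look like this (a sketch following the same pattern):
std::optional<T> try_pop() {
    auto l = lock();
    if (aborted || queue.empty()) return {};
    auto retval = std::move(queue.front());
    queue.pop_front();
    return retval;
}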
A simple 3-part wrapper around data or whatever isn't hard to write. Making it more generic, in my experience, doesn't add much to it being understandable.
Here the lambda returning true is not by itself what stops the waiting; rather, the lambda is there to account for spurious wakeups. The notify_one or notify_all function of the condition_variable is what makes the wait quit.
Rather than removing the lambda, you must simply change the Stop() function to
void WorkDispatcher::Stop()
{
    // for all worker threads do this
    m_Reconstructor.m_QuitReconstructionFlag = true;
    m_RgbConditional.notify_all();
    if (m_reconstructionThread.joinable())
        m_reconstructionThread.join();
}
From here you can see that the wait with a predicate passed to it (wait(lock, predicate)) is equivalent to
while (!predicate())
    wait(lock);
Hence, when you call Stop(), it sets the predicate's flag to true, so when the thread is woken up wait() returns, the predicate is checked, and if it returns true wait(lock, predicate) returns.
In the earlier case, the predicate was returning true but the thread was never woken up.

How could I quit a C++ blocking queue?

After reading some other articles, I learned that I could implement a C++ blocking queue like this:
#include <condition_variable>
#include <mutex>
#include <queue>

template<typename T>
class BlockingQueue {
public:
    std::mutex mtx;
    std::condition_variable not_full;
    std::condition_variable not_empty;
    std::queue<T> queue;
    size_t capacity{5};

    BlockingQueue() = default;
    BlockingQueue(int cap) : capacity(cap) {}
    BlockingQueue(const BlockingQueue&) = delete;
    BlockingQueue& operator=(const BlockingQueue&) = delete;

    void push(const T& data) {
        std::unique_lock<std::mutex> lock(mtx);
        while (queue.size() >= capacity) {
            not_full.wait(lock, [&]{ return queue.size() < capacity; });
        }
        queue.push(data);
        not_empty.notify_all();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mtx);
        while (queue.empty()) {
            not_empty.wait(lock, [&]{ return !queue.empty(); });
        }
        T res = queue.front();
        queue.pop();
        not_full.notify_all();
        return res;
    }

    bool empty() {
        std::unique_lock<std::mutex> lock(mtx);
        return queue.empty();
    }

    size_t size() {
        std::unique_lock<std::mutex> lock(mtx);
        return queue.size();
    }

    void set_capacity(const size_t capacity) {
        this->capacity = (capacity > 0 ? capacity : 10);
    }
};
This works for me, but I do not know how I could shut it down if I start it in a background thread:
#include <iostream>
#include <thread>

int main() {
    BlockingQueue<float> q;
    bool stop{false};
    auto fun = [&] {
        std::cout << "before entering loop\n";
        while (!stop) {
            q.push(1);
        }
        std::cout << "after entering loop\n";
    };
    std::thread t_bg(fun);
    t_bg.detach();
    // Some other tasks here
    stop = true;
    // How could I shut it down before quitting here, or could I simply let the operating system do that when the whole program is over?
}
The problem is that when I want to shut down the background thread, it might be sleeping because the queue is full and the push operation is blocked. How can I stop it when I want the background thread to stop?
One easy way would be to add a flag that you set from outside when you want to abort a pop() operation that's already blocked. And then you'd have to decide what an aborted pop() is going to return. One way is for it to throw an exception, another would be to return an std::optional<T>. Here's the first method (I'll only write the changed parts.)
Add this type wherever you think is appropriate:
struct AbortedPopException {};
Add this to your class fields:
mutable std::atomic<bool> abort_flag = false;
Also add this method:
void abort() const {
    abort_flag = true;
}
Change the while loop in the pop() method like this: (you don't need the while at all, since I believe the condition variable wait() method that accepts a lambda does not wake up/return spuriously; i.e. the loop is inside the wait already.)
not_empty.wait(lock, [this]{ return !queue.empty() || abort_flag; });
if (abort_flag)
    throw AbortedPopException{};
That's it (I believe.)
In your main(), when you want to shut the "consumer" down you can call abort() on your queue. But you'll have to handle the thrown exception there as well. It's your "exit" signal, basically.
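A consumer using it might look like this (a sketch based on the code in the question):
#include <iostream>
#include <thread>

int main() {
    BlockingQueue<float> q; // the queue from the question, with abort() added
    std::thread consumer([&] {
        try {
            while (true) {
                float v = q.pop(); // blocks until data arrives or abort() is called
                std::cout << v << '\n';
            }
        } catch (const AbortedPopException&) {
            // clean exit: the queue was shut down
        }
    });
    q.push(1.0f);
    q.abort(); // wake the blocked pop() and make it throw
    consumer.join();
}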
Some side notes:
Don't detach from threads! Especially here, where AFAICT there is no reason for it (and some actual danger, too). Just signal them to exit (in any manner appropriate) and join() them.
Your stop flag should be atomic. You read from it in your background thread and write to it from your main thread, and those can (and in fact do) overlap in time, so... data race!
I don't understand why you have a "full" state and "capacity" in your queue. Think about whether they are necessary.
UPDATE 1: In response to OP's comment about detaching... Here's what happens in your main thread:
You spawn the "producer" thread (i.e. the one that pushed stuff onto the queue)
Then you do all the work you want to do (e.g. consuming the stuff on the queue)
Sometime, perhaps at the end of main(), you signal the thread to stop (e.g. by setting the stop flag to true)
then, and only then, you join() with the thread.
It is true that your main thread will block while it is waiting for the thread to pick up the "stop" signal, exit its loop, and return from its thread function, but that's a very very short wait. And you have nothing else to do. More importantly, you'll know that your thread exited cleanly and predictably, and from that point on, you know definitely that that thread won't be running (not important for you here, but could be critical for some other threaded task.)
That is the pattern that you usually want to follow in spawning worker thread that loop over a short task.
Update 2: About "full" and "capacity" of the queue. That's fine. It's certainly your decision. No problem with that.
Update 3: About "throwing" vs. returning an "empty" object to signal an aborted "blocking pop()". I don't think there is anything wrong with throwing like that; specially since it is very very rare (just happens once at the end of the operation of the producer/consumer.) However, if all T types that you want to store in your Queue have an "invalid" or "empty" state, then you certainly can use that. But throwing is more general, if more "icky" to some people.
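If you go with the second method instead, pop() would look roughly like this (a sketch; it requires #include <optional> and changes pop's return type rather than throwing):
std::optional<T> pop() {
    std::unique_lock<std::mutex> lock(mtx);
    not_empty.wait(lock, [this]{ return !queue.empty() || abort_flag; });
    if (abort_flag)
        return std::nullopt; // aborted: no value
    T res = std::move(queue.front());
    queue.pop();
    not_full.notify_all();
    return res;
}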

Thread pool stuck on wait condition

I'm encountering a hang in my C++ program using this thread pool class:
#include <atomic>
#include <condition_variable>
#include <functional>
#include <list>
#include <mutex>
#include <thread>
#include <vector>

class ThreadPool {
    unsigned threadCount;
    std::vector<std::thread> threads;
    std::list<std::function<void(void)>> queue;
    std::atomic_int jobs_left;
    std::atomic_bool bailout;
    std::atomic_bool finished;
    std::condition_variable job_available_var;
    std::condition_variable wait_var;
    std::mutex wait_mutex;
    std::mutex queue_mutex;
    std::mutex mtx;

    void Task() {
        while (!bailout) {
            next_job()();
            --jobs_left;
            wait_var.notify_one();
        }
    }

    std::function<void(void)> next_job() {
        std::function<void(void)> res;
        std::unique_lock<std::mutex> job_lock(queue_mutex);
        // Wait for a job if we don't have any.
        job_available_var.wait(job_lock, [this]() -> bool { return queue.size() || bailout; });
        // Get job from the queue
        mtx.lock();
        if (!bailout) {
            res = queue.front();
            queue.pop_front();
        } else {
            // If we're bailing out, 'inject' a job into the queue to keep jobs_left accurate.
            res = [] {};
            ++jobs_left;
        }
        mtx.unlock();
        return res;
    }

public:
    ThreadPool(int c)
        : threadCount(c)
        , threads(threadCount)
        , jobs_left(0)
        , bailout(false)
        , finished(false)
    {
        for (unsigned i = 0; i < threadCount; ++i)
            threads[i] = std::move(std::thread([this, i] { this->Task(); }));
    }

    ~ThreadPool() {
        JoinAll();
    }

    void AddJob(std::function<void(void)> job) {
        std::lock_guard<std::mutex> lock(queue_mutex);
        queue.emplace_back(job);
        ++jobs_left;
        job_available_var.notify_one();
    }

    void JoinAll(bool WaitForAll = true) {
        if (!finished) {
            if (WaitForAll) {
                WaitAll();
            }
            // note that we're done, and wake up any thread that's
            // waiting for a new job
            bailout = true;
            job_available_var.notify_all();
            for (auto &x : threads)
                if (x.joinable())
                    x.join();
            finished = true;
        }
    }

    void WaitAll() {
        std::unique_lock<std::mutex> lk(wait_mutex);
        if (jobs_left > 0) {
            wait_var.wait(lk, [this] { return this->jobs_left == 0; });
        }
        lk.unlock();
    }
};
GDB says (when stopping the blocked execution) that the hang is in (std::unique_lock&, ThreadPool::WaitAll()::{lambda()#1})+58>
I'm using g++ v5.3.0 with support for C++14 (-std=c++1y).
How can I avoid this problem?
Update
I've edited (rewritten) the class: https://github.com/edoz90/threadpool/blob/master/ThreadPool.h
The issue here is a race condition on your job count. You're using one mutex to protect the queue and another to protect the count, which is semantically equivalent to the queue size. Clearly the second mutex is redundant (and improperly used), as is the jobs_left variable itself.
Every method that deals with the queue has to gain exclusive access to it (even JoinAll to read its size), so you should use the same queue_mutex in the three bits of code that tamper with it (JoinAll, AddJob and next_job).
Btw, splitting the code at next_job() is pretty awkward IMO. You would avoid calling a dummy function if you handled the worker thread body in a single function.
EDIT:
As other comments have already stated, you would probably be better off getting your eyes off the code and reconsidering the problem globally for a while.
The only thing you need to protect here is the job queue, so you need only one mutex.
Then there is the problem of waking up the various actors, which requires a condition variable since C++ basically does not give you any other useable synchronization object.
Here again you don't need more than one variable. Terminating the thread pool is equivalent to dequeueing the jobs without executing them, which can be done any which way, be it in the worker threads themselves (skipping execution if the termination flag is set) or in the JoinAll function (clearing the queue after gaining exclusive access).
Last but not least, you might want to invalidate AddJob once someone decided to close the pool, or else you could get stuck in the destructor while someone keeps feeding in new jobs.
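Putting that advice together, the worker body could become a single function that uses only queue_mutex (a sketch, not necessarily the exact fix the author had in mind):
void Task() {
    for (;;) {
        std::function<void(void)> job;
        {
            std::unique_lock<std::mutex> lk(queue_mutex);
            job_available_var.wait(lk, [this] { return !queue.empty() || bailout; });
            if (bailout)
                return; // JoinAll discards whatever is left in the queue
            job = std::move(queue.front());
            queue.pop_front();
        } // release the lock before running the job
        job();
        if (--jobs_left == 0)
            wait_var.notify_all(); // wake WaitAll() once everything is done
    }
}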
I think you need to keep it simple.
You seem to be using one mutex too many. There's queue_mutex, and you use that when you add and process jobs.
Now, what's the need for a separate mutex when you are waiting on reading the queue?
Why can't you just use a condition variable with the same queue_mutex to read the queue in your WaitAll() method?
Update
I would also recommend using a lock_guard instead of the unique_lock in your WaitAll. There really isn't a need to keep the queue_mutex locked beyond WaitAll under exceptional conditions; if you exit WaitAll exceptionally, it should be released regardless.
Update2
Ignore my Update above. Since you are using a condition variable, you can't use a lock_guard in WaitAll. But if you are using a unique_lock, always go with the try_to_lock version, especially if you have more than a couple of control paths.

Why is there no wait function for condition_variable which does not relock the mutex

Consider the following example.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

using namespace std::chrono_literals;

std::mutex mtx;
std::condition_variable cv;

void f()
{
    {
        std::unique_lock<std::mutex> lock( mtx );
        cv.wait( lock ); // 1
    }
    std::cout << "f()\n";
}

void g()
{
    std::this_thread::sleep_for( 1s );
    cv.notify_one();
}

int main()
{
    std::thread t1{ f };
    std::thread t2{ g };
    t2.join();
    t1.join();
}
g() "knows" that f() is waiting in the scenario I would like to discuss.
According to cppreference.com there is no need for g() to lock the mutex before calling notify_one. Now, in the line marked "1", cv will release the mutex and relock it once the notification is sent. The destructor of lock releases it again immediately after that. This seems superfluous, especially since locking is expensive. (I know that in certain scenarios the mutex needs to be locked. But this is not the case here.)
Why does condition_variable have no function "wait_nolock" which does not relock the mutex once the notification arrives? If the answer is that pthreads do not provide such functionality: why can't pthreads be extended to provide it? Is there an alternative for realizing the desired behavior?
You misunderstand what your code does.
Your code on line // 1 is free to not block at all. condition_variables can (and will!) have spurious wakeups -- they can wake up for no good reason at all.
You are responsible for checking if the wakeup is spurious.
Using a condition_variable properly requires 3 things:
A condition_variable
A mutex
Some data guarded by the mutex
The data guarded by the mutex is modified (under the mutex). Then (with the mutex possibly disengaged), the condition_variable is notified.
On the other end, you lock the mutex, then wait on the condition variable. When you wake up, your mutex is relocked, and you test if the wakeup is spurious by looking at the data guarded by the mutex. If it is a valid wakeup, you process and proceed.
If it wasn't a valid wakeup, you go back to waiting.
In your case, you don't have any data guarded, you cannot distinguish spurious wakeups from real ones, and your design is incomplete.
Not surprisingly with the incomplete design you don't see the reason why the mutex is relocked: it is relocked so you can safely check the data to see if the wakeup was spurious or not.
If you want to know why condition variables are designed that way, it is probably because this design is more efficient than the "reliable" one (for whatever reason), and rather than exposing higher-level primitives, C++ exposes the lower-level, more efficient primitives.
Building a higher level abstraction on top of this isn't hard, but there are design decisions. Here is one built on top of std::experimental::optional:
#include <condition_variable>
#include <experimental/optional>
#include <mutex>

template<class T>
struct data_passer {
    std::experimental::optional<T> data;
    bool abort_flag = false;
    std::mutex guard;
    std::condition_variable signal;

    void send( T t ) {
        {
            std::unique_lock<std::mutex> _(guard);
            data = std::move(t);
        }
        signal.notify_one();
    }
    void abort() {
        {
            std::unique_lock<std::mutex> _(guard);
            abort_flag = true;
        }
        signal.notify_all();
    }
    std::experimental::optional<T> get() {
        std::unique_lock<std::mutex> _(guard);
        signal.wait( _, [this]() -> bool {
            return data || abort_flag;
        });
        if (abort_flag) return {};
        T retval = std::move(*data);
        data = {};
        return retval;
    }
};
Now, each send can cause a get to succeed at the other end. If more than one send occurs, only the latest one is consumed by a get. If and when abort_flag is set, get() instead immediately returns {}.
The above supports multiple consumers and producers.
An example of how the above might be used is a source of preview state (say, a UI thread), and one or more preview renderers (which are not fast enough to be run in the UI thread).
The preview state dumps a preview state into the data_passer<preview_state> willy-nilly. The renderers compete and one of them grabs it. Then they render it, and pass it back (through whatever mechanism).
If the preview states come faster than the renderers consume them, only the most recent one is of interest, so the earlier ones are discarded. But existing previews aren't aborted just because a new state shows up.
Questions were asked below about race conditions.
If the data being communicated is atomic, can't we do without the mutex on the "send" side?
So something like this:
template<class T>
struct data_passer {
    std::atomic<std::experimental::optional<T>> data;
    std::atomic<bool> abort_flag = false;
    std::mutex guard;
    std::condition_variable signal;

    void send( T t ) {
        data = std::move(t);  // 1a
        signal.notify_one();  // 1b
    }
    void abort() {
        abort_flag = true;    // 1a
        signal.notify_all();  // 1b
    }
    std::experimental::optional<T> get() {
        std::unique_lock<std::mutex> _(guard);        // 2a
        signal.wait( _, [this]() -> bool {            // 2b
            return data.load() || abort_flag.load();  // 2c
        });
        if (abort_flag.load()) return {};
        T retval = std::move(*data.load());
        // data = std::experimental::nullopt; // doesn't make sense
        return retval;
    }
};
The above fails to work.
We start with the listening thread. It does step 2a, then waits (2b). It evaluates the condition at step 2c, but doesn't return from the lambda yet.
The broadcasting thread then does step 1a (setting the data), then signals the condition variable. At this moment, nobody is waiting on the condition variable (the code in the lambda doesn't count!).
The listening thread then finishes the lambda, and returns "spurious wakeup". It then blocks on the condition variable, and never notices that data was sent.
The std::mutex used while waiting on the condition variable must guard the write to the data "passed" by the condition variable (whatever test you do to determine if the wakeup was spurious), and the read (in the lambda), or the possibility of "lost signals" exists. (At least in a simple implementation: more complex implementations can create lock-free paths for "common cases" and only use the mutex in a double-check. This is beyond the scope of this question.)
Using atomic variables does not get around this problem, because the two operations of "determine if the message was spurious" and "rewait in the condition variable" must be atomic with regards to the "spuriousness" of the message.