I am trying to make a thread-safe queue in C++ with the help of std::mutex and std::condition_variable. The code:
#include <iostream>
#include <thread>
#include <queue>
#include <atomic>
#include <mutex>
#include <condition_variable>
using namespace std;
template<class T>
class SafeQueue{
public:
queue<T>qu;
mutex mut;
condition_variable cv;
SafeQueue(){}
SafeQueue(queue<T>q):qu(q){}
void push(int val){
unique_lock<mutex>uq(mut);
cv.wait(uq,[&](){return qu.empty();});
qu.push(val);
uq.unlock();
}
bool isEmpty(){
// unique_lock<mutex>uq(mut);
// uq.unlock();
cv.notify_all();
return qu.empty();
}
};
void inc(SafeQueue<int>& sq){
for(int i=0;i<10;i++)
continue;
if(sq.isEmpty())
sq.push(1);
}
void inc1(SafeQueue<int>& sq){
for(int i=0;i<10;i++)
continue;
if(sq.isEmpty())
sq.push(2);
}
int main(){
queue<int>qu;
SafeQueue<int> sq(qu);
thread t1(inc,ref(sq));
thread t2(inc1,ref(sq));
t1.join();
t2.join();
cout<<sq.qu.front();
}
A thread-safe queue is supposed to output 1 at the end, but the output is random, either 1 or 2, which means it is not thread safe. Why is this particular program not working?
That doesn't mean the program isn't thread safe, and it doesn't mean it's ill-defined or can crash.
It just means your program's logic is not written to add the items to the queue in any particular order.
If you want those two items to be added in a specific order, push both from one thread.
Thread safety doesn't mean your application runs as if it only had one thread.
Your program is working fine.
There are several aspects where your code is flawed:
Whenever you access a shared structure, that access must be guarded by a mutex. You have a mutex, but you don't use it in isEmpty(). Document that connection; it is important not to lose track of it. Also do the same for the CV: document when it gets signaled.
Concerning isEmpty(), that function is useless anyway. Even if the queue was not empty at one point in time, there is nothing that prevents it from becoming empty the next second.
Re-read the documentation for unique_lock. Your way of using it is more complicated than necessary.
The use of the CV is also odd: Normally, you use it to notify waiters of a change. You signal it unconditionally in a function that only seems to query some state.
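Putting those points together, here is a minimal sketch of what the locking and signaling could look like; this is only an illustration (the pop() with a predicate is not part of your class, it is added to show where the CV naturally fits), not a drop-in fix:
template<class T>
class SafeQueueSketch {
    std::queue<T> qu;
    std::mutex mut;
    std::condition_variable cv;
public:
    void push(T val) {
        {
            std::lock_guard<std::mutex> lk(mut);  // every access to qu is guarded
            qu.push(std::move(val));
        }
        cv.notify_one();                          // signaled here: an element became available
    }
    T pop() {                                     // blocks until an element exists
        std::unique_lock<std::mutex> lk(mut);
        cv.wait(lk, [&]{ return !qu.empty(); });  // predicate handles spurious wake-ups
        T val = std::move(qu.front());
        qu.pop();
        return val;
    }
};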
Related
I have been trying to figure out std::condition_variables and I am particularly confused by wait() and whether to use notify_all or notify_one.
First, I've written some code and attached it below. Here's a short explanation: Collection is a class that holds onto a bunch of Counter objects. These Counter objects have a Counter::increment() method, which needs to be called on all the objects, over and over again. To speed everything up, Collection also maintains a thread pool to distribute the work over, and sends out all the work with its Collection::increment_all() method.
These threads don't need to communicate with each other, and there are usually many more Counter objects than there are threads. It's fine if one thread processes more Counters than the others, as long as all the work gets done. Adding work to the queue is easy and only needs to be done in the "main" thread. As far as I can see, the only bad thing that can happen is if other methods (e.g. Collection::printCounts) are allowed to be called on the counters in the middle of the work being done.
#include <iostream>
#include <thread>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <queue>
class Counter{
private:
int m_count;
public:
Counter() : m_count(0) {}
void increment() {
m_count ++;
}
int getCount() const { return m_count; }
};
class Collection{
public:
Collection(unsigned num_threads, unsigned num_counters)
: m_shutdown(false)
{
// start workers
for(size_t i = 0; i < num_threads; ++i){
m_threads.push_back(std::thread(&Collection::work, this));
}
// instantiate counters
for(size_t j = 0; j < num_counters; ++j){
m_counters.emplace_back();
}
}
~Collection()
{
m_shutdown = true;
for(auto& t : m_threads){
if(t.joinable()){
t.join();
}
}
}
void printCounts() {
// wait for work to be done
std::unique_lock<std::mutex> lk(m_mtx);
m_work_complete.wait(lk); // q2: do I need a while loop?
// print all current counters
for(const auto& cntr : m_counters){
std::cout << cntr.getCount() << ", ";
}
std::cout << "\n";
}
void increment_all()
{
std::unique_lock<std::mutex> lock(m_mtx);
m_work_complete.wait(lock);
for(size_t i = 0; i < m_counters.size(); ++i){
m_which_counters_have_work.push(i);
}
}
private:
void work()
{
while(!m_shutdown){
bool action = false;
unsigned which_counter;
{
std::unique_lock<std::mutex> lock(m_mtx);
if(m_which_counters_have_work.size()){
which_counter = m_which_counters_have_work.front();
m_which_counters_have_work.pop();
action = true;
}else{
m_work_complete.notify_one(); // q1: notify_all
}
}
if(action){
m_counters[which_counter].increment();
}
}
}
std::vector<Counter> m_counters;
std::vector<std::thread> m_threads;
std::condition_variable m_work_complete;
std::mutex m_mtx;
std::queue<unsigned> m_which_counters_have_work;
bool m_shutdown;
};
int main() {
int num_threads = std::thread::hardware_concurrency()-1;
int num_counters = 10;
Collection myCollection(num_threads, num_counters);
myCollection.printCounts();
myCollection.increment_all();
myCollection.printCounts();
myCollection.increment_all();
myCollection.printCounts();
return 0;
}
I compile this on Ubuntu 18.04 with g++ -std=c++17 -pthread thread_pool.cpp -o tp && ./tp. I think the code accomplishes all of those objectives, but a few questions remain:
I am using m_work_complete.wait(lk) to make sure the work is finished before I start printing all the new counts. Why do I sometimes see this written inside a while loop, or with a second argument as a lambda predicate function? These docs mention spurious wake ups. If a spurious wake up occurs, does that mean printCounts could prematurely print? If so, I don't want that. I just want to ensure the work queue is empty before I start using the numbers that should be there.
I am using m_work_complete.notify_all instead of m_work_complete.notify_one. I've read this thread, and I don't think it matters--only the main thread is going to be blocked by this. Is it faster to use notify_one just so the other threads don't have to worry about it?
std::condition_variable is not really a condition variable; it's more of a synchronization tool for reaching a certain condition. What that condition is, is up to the programmer, and it should still be checked after each condition_variable wake-up, since the wake-up can be spurious, or come "too early", when the desired condition isn't yet reached.
On POSIX systems, condition_variable::wait() delegates to pthread_cond_wait, which is susceptible to spurious wake-up (see "Condition Wait Semantics" in the Rationale section). On Linux, pthread_cond_wait is in turn implemented via a futex, which is again susceptible to spurious wake-up.
So yes, you still need a flag (protected by the same mutex) or some other way to check that the work is actually complete. A convenient way to do this is by wrapping the check in a predicate and passing it to the wait() function, which will loop for you until the predicate is satisfied.
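For example, assuming a flag such as bool m_all_work_done (not present in the question's code) that the workers set under m_mtx once all queued work has been processed, printCounts() could wait like this:
// Sketch only: m_all_work_done is an assumed flag, guarded by m_mtx.
std::unique_lock<std::mutex> lk(m_mtx);
m_work_complete.wait(lk, [&]{ return m_all_work_done; }); // re-checked after every wake-up, spurious or not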
notify_all unblocks all threads waiting on the condition variable; notify_one unblocks just one (or at least one, to be precise). If there is more than one waiting thread, and they are equivalent, i.e. any one of them can handle the condition fully, and if the condition is sufficient to let just one thread continue (as in submitting a work unit to a thread pool), then notify_one is more efficient, since it won't unblock other threads unnecessarily only for them to notice there is no work to be done and go back to waiting. If you only ever have one waiter, there is no difference between notify_one and notify_all.
It's pretty simple: use notify_one() when:
There is no reason why more than one thread needs to know about the event. (E.g., use notify_one() to announce the availability of an item that a worker thread will "consume," and thereby make the item unavailable to other workers) AND
There is no wrong thread that could be awakened. (E.g., you're probably safe if all of the threads are wait()ing in the same line of the same exact function.)
Use notify_all() in all other cases (a small illustration follows).
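As an illustration of that rule (the queue, the flag, and all names below are assumptions made up for this sketch, not taken from any code above):
#include <condition_variable>
#include <mutex>
#include <queue>
std::mutex m;
std::condition_variable cv;
std::queue<int> work;        // illustrative shared state, guarded by m
bool shutting_down = false;  // also guarded by m
void submit(int item)
{
    {
        std::lock_guard<std::mutex> lk(m);
        work.push(item);
    }
    cv.notify_one();   // one item, any single worker can take it
}
void request_shutdown()
{
    {
        std::lock_guard<std::mutex> lk(m);
        shutting_down = true;
    }
    cv.notify_all();   // every waiter must observe this state change
}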
This question already has answers here: condition variable - why calling pthread_cond_signal() before calling pthread_cond_wait() is a logical error? (5 answers)
I'm learning condition_variable and running some examples. I'm curious why the following code deadlocks when I leave the block commented out. It's a simple consumer and producer example using condition_variable. I think it's a deadlock problem, isn't it?
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
using namespace std;
mutex mtx;
condition_variable cv;
int cargo = 0;
bool shipment_available()
{
return cargo != 0;
}
void consume(int cnt)
{
for (int i = 0; i < cnt; i++)
{
unique_lock<mutex> lck(mtx);
cv.wait(lck, shipment_available);
printf("%d\n", cargo);
cargo = 0;
}
}
int main()
{
thread consumer_thread(consume, 10);
for (int i = 0; i < 10; i++)
{
//while (shipment_available()) // Dead lock without this block
//{
// std::this_thread::yield();
//}
unique_lock<mutex> lck(mtx);
cargo = i + 1;
cv.notify_one();
}
consumer_thread.join();
}
It runs well if I uncomment the block.
So walk through this highly likely possibility:
main() starts up the consumer thread.
Before the thread can acquire the mutex, main() locks it, increments cargo, and fires the notification, then releases the mutex as the lock goes out of scope, ten times over.
main() now runs down to join the consumer thread.
Consumer finally acquires the mutex, and prepares to wait on the condition variable for the given predicate (non-zero cargo).
The predicate is already true, so no wait is performed.
The consumer zeros the cargo, then releases the mutex as the lock goes out of scope.
The consumer loops for the second time, locks the mutex, then checks the predicate, only now the predicate is false because cargo is indeed zero, so it waits on the condition variable (releasing the mutex in the process) for a signal that will never come. The only thing that ever sends that signal is main(), and it is just camped out waiting for the consumer thread to join up, which can now never happen.
In short, you seem to be of the belief that condition variable signals stack up. That simply isn't the case. If no one is actively waiting on a notification at the time it is posted, it is simply lost to the ether.
Finally, and this is important, your "fix" is completely wrong in itself. The shipment_available function examines predicate data (i.e. data that must be protected by the mutex enshrouding it, not only for modification, but likewise for examination). Checking that without the mutex locked in main is a recipe for a race condition.
My suggestion is to reduce cargo by one rather than zeroing it out. Alternatively you could increment i by the value of cargo before zeroing it, thereby hastening i's ascent to the cnt limit, but you appear to be using an ascending cargo amount, so you'll have to make other adjustments accordingly.
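For completeness, here is a hedged sketch of one other possible way to repair the example while keeping its shape; it is only an illustration, not the only fix, and it changes both sides: each side checks its predicate under the mutex and notifies the other.
// Consumer: unchanged except that it notifies once the cargo has been cleared.
void consume(int cnt)
{
    for (int i = 0; i < cnt; i++)
    {
        unique_lock<mutex> lck(mtx);
        cv.wait(lck, shipment_available);       // wait until cargo != 0
        printf("%d\n", cargo);
        cargo = 0;
        cv.notify_one();                        // tell the producer the slot is free
    }
}
// Producer loop in main(): waits, under the mutex, until the cargo was consumed.
for (int i = 0; i < 10; i++)
{
    unique_lock<mutex> lck(mtx);
    cv.wait(lck, []{ return cargo == 0; });     // predicate checked while holding the mutex
    cargo = i + 1;
    cv.notify_one();                            // wake the consumer
}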
I have a program starting an std::thread doing the following: sleep X, execute a function, terminate.
create std::thread(Xms, &func)
wait Xms
then do func()
end
I was wondering if I could, for example, send a signal to my std::thread in order to instantly break the sleep, do func, and then quit.
Do I need to send the signal to std::thread::id in order to perform this?
My thread is launched this way, with a lambda function:
template<typename T, typename U>
void execAfter(T func, U params, const int ms)
{
std::thread thread([=](){
std::this_thread::sleep_for(std::chrono::milliseconds(ms));
func(params);
});
thread.detach();
}
Using wait_for of std::condition_variable would be the way to go, if the thread model can't be changed. In the code snippet below, the use of the condition_variable is wrapped into a class of which objects have to be shared across the threads.
#include <iostream>
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <chrono>
class BlockCondition
{
private:
mutable std::mutex m;
std::atomic<bool> done;
mutable std::condition_variable cv;
public:
BlockCondition()
:
m(),
done(false),
cv()
{
}
void wait_for(int duration_ms)
{
std::unique_lock<std::mutex> l(m);
int ms_waited(0);
while ( !done.load() && ms_waited < duration_ms )
{
auto t_0(std::chrono::high_resolution_clock::now());
cv.wait_for(l, std::chrono::milliseconds(duration_ms - ms_waited));
auto t_1(std::chrono::high_resolution_clock::now());
ms_waited += std::chrono::duration_cast<std::chrono::milliseconds>(t_1 - t_0).count();
}
}
void release()
{
std::lock_guard<std::mutex> l(m);
done.store(true);
cv.notify_one();
}
};
void delayed_func(BlockCondition* block)
{
block->wait_for(1000);
std::cout << "Hello actual work\n";
}
void abortSleepyFunction(BlockCondition* block)
{
block->release();
}
void test_aborted()
{
BlockCondition b; // not "BlockCondition b();", which would declare a function (most vexing parse)
std::thread delayed_thread(delayed_func, &b);
abortSleepyFunction(&b);
delayed_thread.join();
}
void test_unaborted()
{
BlockCondition b;
std::thread delayed_thread(delayed_func, &b);
delayed_thread.join();
}
int main()
{
test_aborted();
test_unaborted();
}
Note that there might be spurious wakeups that abort the wait call prematurely. To account for that, we count the milliseconds actually waited and continue waiting until the done flag is set.
As was pointed out in the comments, this wasn't the smartest approach for solving your problem in the first place. As implementing a proper interruption mechanism is quite complex and extremely easy to get wrong, here are suggestions for a workaround:
Instead of sleeping for the whole timeout, simply loop over a sleep of fixed small size (e.g. 10 milliseconds) until the desired duration has elapsed. After each sleep, check an atomic flag to see whether interruption was requested. This is a dirty solution, but it is the quickest to pull off (sketched below).
Alternatively, supply each thread with a condition_variable and do a wait on it instead of doing the this_thread::sleep. Notify the condition variable to indicate the request for interruption. You will probably still want an additional flag to protect against spurious wakeups so you don't accidentally return too early.
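A minimal sketch of that first workaround; the flag name and the 10 ms step are assumptions made for illustration:
#include <atomic>
#include <chrono>
#include <thread>
std::atomic<bool> interrupt_requested{false};
void sleep_interruptibly(int total_ms)
{
    using namespace std::chrono;
    auto deadline = steady_clock::now() + milliseconds(total_ms);
    while (!interrupt_requested.load() && steady_clock::now() < deadline)
    {
        std::this_thread::sleep_for(milliseconds(10)); // small fixed step
    }
    // after this returns, the caller can run func() right away,
    // whether the full duration elapsed or the sleep was interrupted
}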
OK, to figure this out I found a new implementation; it's inspired by all your answers, so thanks a lot.
First I am going to make a BombHandler item in the main Game item. It will have an attribute containing all the Bomb items.
This BombHandler will be a singleton, containing a timerLoop() function which will execute in a thread (this way I only use ONE thread for xxx bombs, which is far more efficient).
timerLoop() will usleep(50), then pass through all the std::list elements and call Bomb::incrTimer(), which will increment their internal _timer attribute by 10 ms indefinitely and check for bombs that have to explode.
When they reach 2000 ms, for instance, BombHandler::explode() will be called, exploding the bomb and deleting it.
If another bomb is in range, Bomb::touchByFire() will be called, setting the internal attribute of Bomb, _timer, to TIME_TO_EXPLODE (1950 ms).
It will then be exploded 50 ms later by BombHandler::explode().
Isn't this a nice solution?
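For reference, a rough sketch of what this loop might look like (the Bomb interface here, with incrTimer() and isReadyToExplode(), and the 50 ms tick are simplifications for illustration, not the actual game code):
#include <chrono>
#include <list>
#include <thread>
class Bomb {
    int _timer = 0;                                    // elapsed time in ms
public:
    void incrTimer(int ms) { _timer += ms; }
    bool isReadyToExplode() const { return _timer >= 2000; }
};
class BombHandler {
    std::list<Bomb> _bombs;
public:
    void timerLoop() {                                 // runs in its own single thread
        while (true) {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            for (auto it = _bombs.begin(); it != _bombs.end(); ) {
                it->incrTimer(50);
                if (it->isReadyToExplode()) {
                    // explode(*it) would go here: chain reactions, touchByFire(), etc.
                    it = _bombs.erase(it);
                } else {
                    ++it;
                }
            }
        }
    }
};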
Again, thanks for your answers! Hope this can help.
Suppose we have two workers. Each worker has an id of 0 and 1. Also suppose that we have jobs arriving all the time, each job has also an identifier 0 or 1 which specifies which worker will have to do this job.
I would like to create 2 threads that are initially locked; then, when two jobs arrive, unlock them, have each of them do its job, and then lock them again until other jobs arrive.
I have the following code:
#include <iostream>
#include <thread>
#include <mutex>
using namespace std;
struct job{
thread jobThread;
mutex jobMutex;
};
job jobs[2];
void executeJob(int worker){
while(true){
jobs[worker].jobMutex.lock();
//do some job
}
}
void initialize(){
int i;
for(i=0;i<2;i++){
jobs[i].jobThread = thread(executeJob, i);
}
}
int main(void){
//initialization
initialize();
int buffer[2];
int bufferSize = 0;
while(true){
//jobs arrive here constantly,
//once the buffer becomes full,
//we unlock the threads(workers) and they start working
bufferSize = 2;
if(bufferSize == 2){
for(int i = 0; i<2; i++){
jobs[i].jobMutex.unlock();
}
}
break;
}
}
I started using std::thread a few days ago and I'm not sure why, but Visual Studio gives me an error saying abort() has been called. I believe there's something missing; however, due to my ignorance, I can't figure out what.
I would expect this piece of code to actually
Initialize the two threads and then lock them
Inside the main function unlock the two threads, the two threads will do their job(in this case nothing) and then they will become locked again.
But it gives me an error instead. What am I doing wrong?
Thank you in advance!
For this purpose you can use Boost's threadpool class.
It's an efficient and well-tested open-source library, rather than something you have to write from scratch and stabilize yourself.
http://threadpool.sourceforge.net/
#include "threadpool.hpp" // main header of the threadpool library linked above (path may vary with your setup)
using namespace boost::threadpool;
void first_task()
{
...
}
void second_task()
{
...
}
int main()
{
pool tp(2); // number of worker threads - currently it's 2.
// Add some tasks to the pool.
tp.schedule(&first_task);
tp.schedule(&second_task);
}
Note:
Suggestion for your example:
You don't need an individual mutex object for each thread. A single mutex object will do the synchronization between all the threads. You are locking one thread's mutex in the executeJob function and, without it ever being unlocked, another thread calls lock on a different mutex object, leading to deadlock or undefined behaviour.
Also, since you are calling mutex.lock() inside the while loop without unlocking, the same thread tries to lock the same mutex object it already holds, over and over, which is undefined behaviour.
If you do not need the threads to execute in parallel, you can have one global mutex object that is locked and unlocked inside the executeJob function:
mutex m;
void executeJob(int worker)
{
m.lock();
//do some job
m.unlock();
}
If you want to execute the jobs in parallel, use the Boost threadpool as I suggested earlier.
In general you can write an algorithm similar to the following. It works with pthreads; I'm sure it would work with C++ threads as well (a C++ sketch follows the steps below).
create threads and make them wait on a condition variable, e.g. work_exists.
When work arrives, you notify all threads that are waiting on that condition variable. Then in the main thread you start waiting on another condition variable, work_done.
Upon receiving the work_exists notification, worker threads wake up, grab their assigned work from jobs[worker], execute it, send a notification on the work_done variable, and then go back to waiting on the work_exists condition variable.
When the main thread receives the work_done notification, it checks whether all threads are done. If not, it keeps waiting until the notification from the last-finishing thread arrives.
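A hedged sketch of that scheme in standard C++ (the names work_exists, work_done, has_work and pending are illustrative, not taken from the question's code):
#include <condition_variable>
#include <mutex>
#include <thread>
std::mutex mtx;
std::condition_variable work_exists;  // workers wait on this
std::condition_variable work_done;    // the main thread waits on this
bool has_work[2] = {false, false};    // per-worker "your job is ready" flag
int  pending = 0;                     // jobs not yet finished
bool stop = false;
void worker(int id)
{
    std::unique_lock<std::mutex> lk(mtx);
    while (true) {
        work_exists.wait(lk, [&]{ return has_work[id] || stop; });
        if (stop) return;
        has_work[id] = false;
        lk.unlock();
        // ... do the job assigned to this worker, outside the lock ...
        lk.lock();
        if (--pending == 0)
            work_done.notify_one();   // last finisher wakes the main thread
    }
}
int main()
{
    std::thread t0(worker, 0), t1(worker, 1);
    for (int round = 0; round < 3; ++round) {              // jobs "arrive" here
        {
            std::lock_guard<std::mutex> lk(mtx);
            has_work[0] = has_work[1] = true;
            pending = 2;
        }
        work_exists.notify_all();                          // wake both workers
        std::unique_lock<std::mutex> lk(mtx);
        work_done.wait(lk, [&]{ return pending == 0; });   // wait for both to finish
    }
    {
        std::lock_guard<std::mutex> lk(mtx);
        stop = true;
    }
    work_exists.notify_all();
    t0.join();
    t1.join();
}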
From cppreference's page on std::mutex::unlock:
The mutex must be unlocked by all threads that have successfully locked it before being destroyed. Otherwise, the behavior is undefined.
Your approach of having one thread unlock a mutex on behalf of another thread is incorrect.
The behavior you're attempting would normally be done using std::condition_variable. There are examples if you look at the links to the member functions.
I'm playing with the Boost library and C++. I want to create a multithreaded program that contains a producer, a consumer, and a stack. The producer fills the stack, the consumer removes items (int) from the stack. Everything works (pop, push, mutex), but when I call pop/push within a thread, I don't get any effect.
I made this simple code:
#include "stdafx.h"
#include <stack>
#include <iostream>
#include <algorithm>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <boost/date_time.hpp>
#include <boost/signals2/mutex.hpp>
#include <ctime>
using namespace std;
/*
* This class represents a stack which is protected by a mutex.
* Pop and push are executed by one thread at a time.
*/
class ProtectedStack{
private :
stack<int> m_Stack;
boost::signals2::mutex m;
public :
ProtectedStack(){
}
ProtectedStack(const ProtectedStack & p){
}
void push(int x){
m.lock();
m_Stack.push(x);
m.unlock();
}
void pop(){
m.lock();
//return m_Stack.top();
if(!m_Stack.empty())
m_Stack.pop();
m.unlock();
}
int size(){
return m_Stack.size();
}
bool isEmpty(){
return m_Stack.empty();
}
int top(){
return m_Stack.top();
}
};
/*
* The producer is the class that fills the stack. It encapsulates the thread object.
*/
class Producer{
public:
Producer(int number ){
//create thread here but don't start here
m_Number=number;
}
void fillStack (ProtectedStack& s ) {
int object = 3; //random value
s.push(object);
//cout<<"push object\n";
}
void produce (ProtectedStack & s){
//call fill within a thread
m_Thread = boost::thread(&Producer::fillStack,this, s);
}
private :
int m_Number;
boost::thread m_Thread;
};
/* The consumer will consume the products produced by the producer */
class Consumer {
private :
int m_Number;
boost::thread m_Thread;
public:
Consumer(int n){
m_Number = n;
}
void remove(ProtectedStack &s ) {
if(s.isEmpty()){ // if the stack is empty sleep and wait for the producer to fill the stack
//cout<<"stack is empty\n";
boost::posix_time::seconds workTime(1);
boost::this_thread::sleep(workTime);
}
else{
s.pop(); //pop it
//cout<<"pop object\n";
}
}
void consume (ProtectedStack & s){
//call remove within a thread
m_Thread = boost::thread(&Consumer::remove, this, s);
}
};
int main(int argc, char* argv[])
{
ProtectedStack s;
Producer p(0);
p.produce(s);
Producer p2(1);
p2.produce(s);
cout<<"size after production "<<s.size()<<endl;
Consumer c(0);
c.consume(s);
Consumer c2(1);
c2.consume(s);
cout<<"size after consumption "<<s.size()<<endl;
getchar();
return 0;
}
After I run that in VC++ 2010 / Win7, I got:
0
0
Could you please help me understand why, when I call the fillStack function from main, I get an effect, but when I call it from a thread nothing happens?
Thank you
Your example code suffers from a couple of synchronization issues, as noted by others:
Missing locks on calls to some of the members of ProtectedStack.
Main thread could exit without allowing worker threads to join.
The producer and consumer do not loop as you would expect. Producers should always (when they can) be producing, and consumers should keep consuming as new elements are pushed onto the stack.
cout's on the main thread may very well be performed before the producers or consumers have had a chance to work yet.
I would recommend looking at using a condition variable for synchronization between your producers and consumers. Take a look at the producer/consumer example here: http://en.cppreference.com/w/cpp/thread/condition_variable
It is a rather new feature in the standard library as of C++11 and supported as of VS2012. Before VS2012, you would either need boost or to use Win32 calls.
Using a condition variable to tackle a producer/consumer problem is nice because it almost enforces the use of a mutex to lock shared data, and it provides a signaling mechanism to let consumers know something is ready to be consumed so they don't have to spin (which is always a trade-off between the responsiveness of the consumer and CPU usage polling the queue). It also does so atomically, which prevents the possibility of threads missing a signal that there is something to consume, as explained here: https://en.wikipedia.org/wiki/Sleeping_barber_problem
To give a brief run-down of how a condition variable takes care of this (sketched in code below)...
A producer does all time-consuming activities on its thread without owning the mutex.
The producer locks the mutex, adds the item it produced to a global data structure (probably a queue of some sort), lets go of the mutex and signals a single consumer to go, in that order.
A consumer that is waiting on the condition variable re-acquires the mutex automatically, removes the item from the queue and does some processing on it. During this time, the producer is already working on producing a new item, but it has to wait until the consumer is done before it can queue the item up.
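A compressed sketch of that flow (the names and the int payload here are placeholders, not a definitive implementation):
#include <condition_variable>
#include <mutex>
#include <queue>
std::mutex m;
std::condition_variable cv;
std::queue<int> items; // illustrative shared queue, guarded by m
void producer_side(int produced)
{
    // time-consuming production happens before taking the mutex
    {
        std::lock_guard<std::mutex> lk(m);
        items.push(produced);                  // add under the lock
    }                                          // release the lock...
    cv.notify_one();                           // ...then signal a single consumer
}
int consumer_side()
{
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, []{ return !items.empty(); }); // re-acquires the mutex when woken
    int item = items.front();
    items.pop();
    return item;                               // heavier processing is best done after the lock is released
}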
This would have the following impact on your code:
No more need for ProtectedStack, a normal stack/queue data structure will do.
No need for boost if you are using a new enough compiler - removing build dependencies is always a nice thing.
I get the feeling that threading is rather new to you so I can only offer the advice to look at how others have solved synchronization issues as it is very difficult to wrap your mind around. Confusion about what is going on in an environment with multiple threads and shared data typically leads to issues like deadlocks down the road.
The major problem with your code is that your threads are not synchronized.
Remember that by default thread execution isn't ordered and isn't sequenced, so the consumer threads can actually be (and in your particular case are) finished before any producer thread produces any data.
To make sure the consumers run after the producers have finished their work, you need to use the thread::join() function on the producer threads; it will block main thread execution until the producers exit:
// Start producers
...
p.m_Thread.join(); // Wait p to complete
p2.m_Thread.join(); // Wait p2 to complete
// Start consumers
...
This will do the trick, but it is probably not good for the typical producer-consumer use case.
To achieve a more useful behaviour you need to fix the consumer function.
Your consumer function doesn't actually wait for produced data; it will just exit if the stack is empty, and it will never consume any data if no data has been produced yet.
It shall be like this:
void remove(ProtectedStack &s)
{
// Place your actual exit condition here,
// e.g. count of consumed elements or some event
// raised by producers meaning no more data available etc.
// For testing/educational purpose it can be just while(true)
while(!_some_exit_condition_)
{
if(s.isEmpty())
{
// Sleeping for a whole second is too long, use milliseconds instead
boost::posix_time::milliseconds workTime(1);
boost::this_thread::sleep(workTime);
}
else
{
s.pop();
}
}
}
Another problem is wrong thread constructor usage:
m_Thread = boost::thread(&Producer::fillStack, this, s);
Quote from Boost.Thread documentation:
Thread Constructor with arguments
template <class F,class A1,class A2,...>
thread(F f,A1 a1,A2 a2,...);
Preconditions:
F and each An must be copyable or movable.
Effects:
As if thread(boost::bind(f,a1,a2,...)). Consequently, f and each an are copied into
internal storage for access by the new thread.
This means that each of your threads receives its own copy of s, and all modifications are applied not to s but to the thread-local copies. It's the same as when you pass an object to a function argument by value. You need to pass the s object by reference instead, using boost::ref:
void produce(ProtectedStack& s)
{
m_Thread = boost::thread(&Producer::fillStack, this, boost::ref(s));
}
void consume(ProtectedStack& s)
{
m_Thread = boost::thread(&Consumer::remove, this, boost::ref(s));
}
Another issue is your mutex usage; it's not the best possible.
Why do you use the mutex from the Signals2 library? Just use boost::mutex from Boost.Thread and remove the unneeded dependency on the Signals2 library.
Use the RAII wrapper boost::lock_guard instead of direct lock/unlock calls.
As other people mentioned, you should protect all members of ProtectedStack with the lock.
Sample:
boost::mutex m;
void push(int x)
{
boost::lock_guard<boost::mutex> lock(m);
m_Stack.push(x);
}
void pop()
{
boost::lock_guard<boost::mutex> lock(m);
if(!m_Stack.empty()) m_Stack.pop();
}
int size()
{
boost::lock_guard<boost::mutex> lock(m);
return m_Stack.size();
}
bool isEmpty()
{
boost::lock_guard<boost::mutex> lock(m);
return m_Stack.empty();
}
int top()
{
boost::lock_guard<boost::mutex> lock(m);
return m_Stack.top();
}
You're not checking that the producing thread has executed before you try to consume. You're also not locking around size/empty/top... that's not safe if the container's being updated.