Good usage of a mutex in C++

I'm having some trouble with mutexes. Consider this example:
boost::mutex m;

void thread1_unstack(std::stack<std::string>& msg) {
    while (true) {
        if (msg.empty()) continue;
        m.lock();
        std::string msg_string = msg.top();
        msg.pop();
        std::cout << msg_string << std::endl;
        m.unlock();
    }
}

void thread2_stack(std::stack<std::string>& msg) {
    while (1) {
        msg.push("very long message");
    }
}

void wait_for_finish(std::stack<std::string>& msg) {
    while (!msg.empty()) sleep(1);
}

int main() {
    std::stack<std::string> msg;
    boost::thread t1 = boost::thread(boost::bind(&thread1_unstack, boost::ref(msg)));
    boost::thread t2 = boost::thread(boost::bind(&thread2_stack, boost::ref(msg)));
    wait_for_finish(msg);
    t1.stop();
    t2.stop();
}
The problem is with the wait_for_finish function. It detects that the stack is empty as soon as msg.pop() is called, so the threads are stopped right after that, and sometimes the message (std::cout) is not completely printed to the screen.
So I would like to "lock" the msg variable for these three lines:
std::string msg_string = msg.top();
msg.pop();
std::cout << msg_string << std::endl;
That way, wait_for_finish wouldn't see the stack as empty while std::cout is still printing.
I tried locking a boost::mutex and unlocking it at the end, but nothing changed.
So I don't know how to solve this.

You have to guard all access to the stack with the mutex:
boost::mutex m;

void thread1_unstack(std::stack<std::string>& msg) {
    while (true) {
        m.lock();
        bool msgEmpty = msg.empty();
        m.unlock();
        if (msgEmpty) continue;
        m.lock();
        std::string msg_string = msg.top();
        msg.pop();
        std::cout << msg_string << std::endl;
        m.unlock();
    }
}

void thread2_stack(std::stack<std::string>& msg) {
    while (1) {
        m.lock();
        msg.push("very long message");
        m.unlock();
    }
}

void wait_for_finish(std::stack<std::string>& msg) {
    while (true) {
        m.lock();
        bool msgEmpty = msg.empty();
        m.unlock();
        if (msgEmpty) break;
        sleep(1);
    }
}

int main() {
    std::stack<std::string> msg;
    boost::thread t1 = boost::thread(boost::bind(&thread1_unstack, boost::ref(msg)));
    boost::thread t2 = boost::thread(boost::bind(&thread2_stack, boost::ref(msg)));
    wait_for_finish(msg);
    t1.stop();
    t2.stop();
}
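As a side note, instead of calling lock()/unlock() by hand you can use an RAII guard, which also keeps the empty() check and the pop() in one critical section. A minimal sketch of thread1_unstack with boost::lock_guard (assuming the same m as above):

void thread1_unstack(std::stack<std::string>& msg) {
    while (true) {
        boost::lock_guard<boost::mutex> lock(m);   // unlocks automatically at the end of each iteration
        if (msg.empty()) continue;                 // checked under the same lock as the pop below
        std::string msg_string = msg.top();
        msg.pop();
        std::cout << msg_string << std::endl;
    }
}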

Comments in our discussion reveal that you're using some kind of thread-safe waitable container. You need to stop the thread with its cooperation to make sure it has time to finish any work it's supposed to do on the last object. I would suggest this approach:
Have a stop flag that's either atomic or protected by a mutex. It should start out cleared.
When the thread gets an object from the container, before it prints it, it checks the stop flag. If the flag is set, the thread terminates.
When you want to stop the thread, set the stop flag, add a dummy object to the container, and then join the thread.
The dummy object unblocks the thread, and it won't get printed because the thread won't print with the stop flag set. By joining the thread, you ensure it's finished all the work it has to do before you terminate.
You can also use the "object of death" approach. For example, say it's not possible for your code to ever queue an empty string -- you could use an empty string as an "object of death". It works like this:
When you get an object from the container, check if it's the object of death. If it is, terminate.
To cause the thread to terminate, queue the object of death and then join the thread.
This has the same behavior. Queuing the object of death unblocks the thread (because the container is no longer empty and it only blocks if it is) and guarantees the thread will terminate when it's done (since the thread checks as soon as it unqueues it). Joining the thread ensures it has completely finished processing all objects prior to the object of death.
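For illustration, here is a minimal, self-contained sketch of the "object of death" idea. The ts_queue below is just a stand-in for whatever waitable container you are actually using (any blocking pop() will do):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Minimal blocking queue, standing in for the real thread-safe container.
struct ts_queue {
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;
    void push(std::string s) {
        { std::lock_guard<std::mutex> l(m); q.push(std::move(s)); }
        cv.notify_one();
    }
    std::string pop() {                            // blocks until an element is available
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [this]{ return !q.empty(); });
        std::string s = std::move(q.front());
        q.pop();
        return s;
    }
};

ts_queue queue;

void consumer() {
    for (;;) {
        std::string msg = queue.pop();
        if (msg.empty()) break;                    // the empty string is the "object of death"
        std::cout << msg << std::endl;             // a real message: print it completely
    }
}

int main() {
    std::thread t(consumer);
    queue.push("very long message");
    queue.push("");                                // queue the object of death...
    t.join();                                      // ...then join; all earlier messages get printed
}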

Related

Using infinite loops in std::thread to increment and display a value

Consider the following simple code:
using ms = std::chrono::milliseconds;
int val = 0;
for(;;)
{
    std::cout << val++ << ' ';
    std::this_thread::sleep_for(ms(200));
}
This prints consecutive numbers indefinitely, one every 0.2 seconds.
Now, I would like to implement the same logic using a helper class and multithreading. My aim is to be able to run something similar to this:
int main()
{
    Foo f;
    std::thread t1(&Foo::inc, f);
    std::thread t2(&Foo::dis, f);
    t1.join();
    t2.join();
}
where Foo::inc() will increment a member variable val of an object f by 1 and Foo::dis() will display the same variable.
Since the original idea consisted of incrementing and printing the value infinitely, I would assume that both of those functions must contain an infinite loop. The problem that could occur is a data race - reading and incrementing the very same variable. To prevent that I decided to use std::mutex.
My idea of implementing Foo is as follows:
class Foo {
    int val;
public:
    Foo() : val{0} {}
    void inc()
    {
        for(;;){
            mtx.lock();
            ++val;
            mtx.unlock();
        }
    }
    void dis()
    {
        using ms = std::chrono::milliseconds;
        for(;;){
            mtx.lock();
            std::cout << val << ' ';
            std::this_thread::sleep_for(ms(200));
            mtx.unlock();
        }
    }
};
Obviously it's missing the mtx object, so the line
std::mutex mtx;
is written just under the #includes, declaring mtx as a global variable.
To my understanding, combining this class definition with the above main() function should start two separate infinite loops, each of which first locks the mutex, either increments or displays val, and then unlocks the mutex so the other one can do its part.
What actually happens is that instead of displaying the sequence 0 1 2 3 4... it simply displays 0 0 0 0 0.... My guess is that I am either using std::mutex::lock and std::mutex::unlock incorrectly, or my fundamental understanding of multithreading is lacking some basic knowledge.
The question is - where is my logic wrong?
How would I approach this problem using a helper class and two std::threads with member functions of the same object?
Is there a guarantee that the incrementation of val and the printing of it will each occur one after the other using this kind of logic? i.e. will there never be a situation where val is incremented twice before being displayed, or vice versa?
You are sleeping with the mutex locked, preventing the other thread from running for most of the time.
void dis()
{
    using ms = std::chrono::milliseconds;
    for(;;){
        mtx.lock();
        std::cout << val << ' ';
        std::this_thread::sleep_for(ms(200)); // this is still blocking the other thread
        mtx.unlock();
    }
}
Try this:
void dis()
{
    using ms = std::chrono::milliseconds;
    for(;;){
        mtx.lock();
        std::cout << val << ' ';
        mtx.unlock(); // unlock to allow the other thread to progress
        std::this_thread::sleep_for(ms(200));
    }
}
Also, rather than using a global std::mutex you could add it as a member of your class.
If you want to synchronize the threads to produce an even output of numbers incrementing by exactly one each time, then you need something like a std::condition_variable so that each thread can signal the other when it has done its part of the job (thread 1 - incrementing, thread 2 - printing).
Here is an example:
class Foo {
    int val;
    std::mutex mtx;
    std::condition_variable cv;
    bool new_value; // flag when a new value is ready
public:
    Foo() : val{0}, new_value{false} {}
    void inc()
    {
        for(;;){
            std::unique_lock<std::mutex> lock(mtx);
            // release the lock and wait until new_value has been consumed
            cv.wait(lock, [this]{ return !new_value; }); // wait for change in new_value
            ++val;
            new_value = true; // signal the other thread that there is a new value
            cv.notify_one(); // wake up the other thread
        }
    }
    void dis()
    {
        using ms = std::chrono::milliseconds;
        for(;;){
            // a nice delay
            std::this_thread::sleep_for(ms(200));
            std::unique_lock<std::mutex> lock(mtx);
            // release the lock and wait until new_value has been produced
            cv.wait(lock, [this]{ return new_value; }); // wait for a new value
            std::cout << val << ' ' << std::flush; // don't forget to flush
            new_value = false; // signal the other thread that the new value was used
            cv.notify_one(); // wake up the other thread
        }
    }
};

int main(int argc, char** argv)
{
    Foo f;
    std::thread t1(&Foo::inc, &f);
    std::thread t2(&Foo::dis, &f);
    t1.join();
    t2.join();
}
A mutex is not a signal. It is not fair. You can unlock then relock a mutex, and someone waiting for it can never notice.
All it guarantees is that exactly one thread has it locked.
Your task, splitting it into two threads, seems utterly pointless. Using sleep_for is also a bad idea, as printing takes an unknown amount of time, making the period between displays drift by an unpredictable amount.
You probably (A) do not want to do this, and failing that (B) should use a condition variable. One thread increments the value every X time (based off a fixed start time, not based off delays of X), and then signals the condition variable. It holds no mutex while waiting.
The other thread waits on the condition variable and the counter value changing. When it wakes, it copies the counter, unlocks, prints once, updates the last value seen, then waits on the condition variable (and value changing) again.
A mild benefit to this is that if the io is ridiculously slow or blocking, the counter keeps incrementing, so other consumers can use it.
struct Counting {
    int val = -1; // optionally atomic
    std::mutex mtx;
    std::condition_variable cv;
    void counting() {
        while(true){
            {
                auto l = std::unique_lock<std::mutex>(mtx);
                ++val; // even if atomic, val must be modified while or before the mtx is held and before the notify.
            }
            // or notify all:
            cv.notify_one(); // no need to hold lock here
            using namespace std::literals;
            std::this_thread::sleep_for(200ms); // ideally wait to an absolute time instead of delay here
        }
    }
    void printing() {
        int old_val = -1;
        while(true){
            int new_val = [&]{
                auto lock = std::unique_lock<std::mutex>(mtx);
                cv.wait(lock, [&]{ return val != old_val; }); // only print if we have a new value
                return val;
            }(); // release lock, no need to hold it while printing
            std::cout << new_val << std::endl; // endl flushes. Note there are threading issues streaming to cout like this.
            old_val = new_val; // update last printed value
        }
    }
};
If one thread is printing and the other counting, you'll get basically what you want.
When launching a thread with a member function, you need to pass the address of the object, not the object itself:
std::thread t2(&Foo::dis, &f);
Please note that this still won't print 1 2 3 4 ... You'll need to have the increment operation and the print alternate exactly for that.
#include <thread>
#include <iostream>
#include <mutex>
#include <chrono>

std::mutex mtx1, mtx2;

class Foo {
    int val;
public:
    Foo() : val{0} { mtx2.lock(); }
    void inc()
    {
        for(;;){
            mtx1.lock();
            ++val;
            mtx2.unlock();
        }
    }
    void dis()
    {
        using ms = std::chrono::milliseconds;
        for(;;){
            mtx2.lock();
            std::cout << val << std::endl;
            std::this_thread::sleep_for(ms(200));
            mtx1.unlock();
        }
    }
};

int main()
{
    Foo f;
    std::thread t1(&Foo::inc, &f);
    std::thread t2(&Foo::dis, &f);
    t1.join();
    t2.join();
}
Also take a look at http://en.cppreference.com/w/cpp/thread/condition_variable

std::thread: How to wait (join) for any of the given threads to complete?

For example, I have two threads, t1 and t2. I want to wait for t1 or t2 to finish. Is this possible?
If I have a series of threads, say, a std::vector<std::thread>, how can I do it?
There's always wait & notify using std::condition_variable, e.g.:
std::mutex m;
std::condition_variable cond;
std::atomic<std::thread::id> val;

using namespace std::chrono_literals; // for the 1s literal below

auto task = [&] {
    std::this_thread::sleep_for(1s); // Some work
    val = std::this_thread::get_id();
    cond.notify_all();
};

std::thread{task}.detach();
std::thread{task}.detach();
std::thread{task}.detach();

std::unique_lock<std::mutex> lock{m};
cond.wait(lock, [&] { return val != std::thread::id{}; });
std::cout << "Thread " << val << " finished first" << std::endl;
Note: val doesn't necessarily represent the thread that finished first as all threads finish at about the same time and an overwrite might occur, but it is only for the purposes of this example.
No, there is no wait for multiple objects equivalent in C++11's threading library.
If you want to wait on the first of a set of operations, consider having them feed a thread-safe producer-consumer queue.
Here is a post I made containing a threaded_queue<T>. Have the work product of your threads be delivered to such a queue. Have the consumer read off of the other end.
Now someone can wait on (the work product) of multiple threads at once. Or one thread. Or a GPU shader. Or work product being delivered over a RESTful web interface. You don't care.
The threads themselves should be managed by something like a thread pool or other higher level abstraction on top of std::thread, as std::thread makes a poor client-facing threading abstraction.
template<class T>
struct threaded_queue {
    using lock = std::unique_lock<std::mutex>;
    void push_back( T t ) {
        {
            lock l(m);
            data.push_back(std::move(t));
        }
        cv.notify_one();
    }
    boost::optional<T> pop_front() {
        lock l(m);
        cv.wait(l, [this]{ return abort || !data.empty(); } );
        if (abort) return {};
        auto r = std::move(data.front()); // take from the front: push_back + pop_front = FIFO
        data.pop_front();
        return r;
    }
    void terminate() {
        {
            lock l(m);
            abort = true;
            data.clear();
        }
        cv.notify_all();
    }
    ~threaded_queue()
    {
        terminate();
    }
private:
    std::mutex m;
    std::deque<T> data;
    std::condition_variable cv;
    bool abort = false;
};
I'd use std::optional instead of boost::optional in C++17. It can also be replaced with a unique_ptr, or a number of other constructs.
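For example, a sketch of pop_front from the class above using std::optional (C++17) - only the header, the return type, and the empty return change:

#include <optional>

std::optional<T> pop_front() {
    lock l(m);
    cv.wait(l, [this]{ return abort || !data.empty(); });
    if (abort) return std::nullopt;   // queue was terminated: no value
    auto r = std::move(data.front());
    data.pop_front();
    return r;
}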
It's easy to do with a polling wait:
#include <iostream>
#include <thread>
#include <random>
#include <chrono>
#include <atomic>
#include <cstdint>
#include <string>
#include <vector>

void thread_task(std::atomic<bool>& boolean) {
    std::default_random_engine engine{std::random_device{}()};
    std::uniform_int_distribution<int64_t> dist{1000, 3000};
    int64_t wait_time = dist(engine);
    std::this_thread::sleep_for(std::chrono::milliseconds{wait_time});
    std::string line = "Thread slept for " + std::to_string(wait_time) + "ms.\n";
    std::cout << line;
    boolean.store(true);
}

int main() {
    std::vector<std::thread> threads;
    std::atomic<bool> boolean{false};
    for(int i = 0; i < 4; i++) {
        threads.emplace_back([&]{ thread_task(boolean); });
    }
    std::string line = "We reacted after a single thread finished!\n";
    while(!boolean) std::this_thread::yield();
    std::cout << line;
    for(std::thread& thread : threads) {
        thread.join();
    }
    return 0;
}
Example output I got on Ideone.com:
Thread slept for 1194ms.
We reacted after a single thread finished!
Thread slept for 1967ms.
Thread slept for 2390ms.
Thread slept for 2984ms.
This probably isn't the best code possible, because polling loops are not necessarily best practice, but it should work as a start.
There is no standard way of waiting on multiple threads.
You need to resort to operating system specific functions like WaitForMultipleObjects on Windows.
A Windows only example:
HANDLE handles[] = { t1.native_handle(), t2.native_handle(), };
auto res = WaitForMultipleObjects(2 , handles, FALSE, INFINITE);
Funnily enough, once std::when_any is standardized, one will be able to write a standard but wasteful solution:
std::vector<std::thread> waitingThreads;
std::vector<std::future<void>> futures;
for (auto& thread : threads) {
    std::promise<void> promise;
    futures.emplace_back(promise.get_future());
    waitingThreads.emplace_back([&thread, promise = std::move(promise)]() mutable {
        thread.join();
        promise.set_value();
    });
}
auto oneFinished = std::when_any(futures.begin(), futures.end());
Very wasteful and still not available, but standard.

How to tell a std::thread to stop?

I have two questions.
1) I want to launch some function with an infinite loop that works like a server, checking for messages in a separate thread. However, I want to be able to close it from the parent thread when I want. I'm confused about how to use std::future or std::condition_variable in this case. Or is it better to create some global variable and change it to true/false from the parent thread?
2) I'd like to have something like this. Why does this example crash at run time?
#include <iostream>
#include <chrono>
#include <thread>
#include <future>

std::mutex mu;
bool stopServer = false;

bool serverFunction()
{
    while (true)
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));
        mu.lock();
        if (stopServer)
            break;
        mu.unlock();
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    system("pause");
    mu.lock();
    stopServer = true;
    mu.unlock();
    serverThread.join();
}
Why does this example crash at run time?
When you leave the inner loop of your thread, you leave the mutex locked, so the parent thread may be blocked forever if you use that mutex again.
You should use std::unique_lock or something similar to avoid problems like that.
You leave your mutex locked. Don't lock mutexes manually in 999/1000 cases.
In this case, you can use std::unique_lock<std::mutex> to create a RAII lock-holder that will avoid this problem. Simply create it in a scope, and have the lock area end at the end of the scope.
{
    std::unique_lock<std::mutex> lock(mu);
    stopServer = true;
}
in main and
{
    std::unique_lock<std::mutex> lock(mu);
    if (stopServer)
        break;
}
in serverFunction.
Now in this case your mutex is pointless. Remove it. Replace bool stopServer with std::atomic<bool> stopServer, and remove all references to mutex and mu from your code.
An atomic variable can safely be read/written to from different threads.
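Here is a minimal sketch of the code above with the mutex removed and an atomic flag instead (this still busy-waits, which the next paragraph addresses):

#include <atomic>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

std::atomic<bool> stopServer{false};

bool serverFunction()
{
    while (!stopServer)
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    system("pause");       // as in the original question (Windows-specific)
    stopServer = true;     // atomic write, safe to do from the parent thread
    serverThread.join();
}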
However, your code is still busy-waiting. The right way to handle a server processing messages is a condition variable guarding the message queue. You then stop it by front-queuing a stop server message (or a flag) in the message queue.
This results in a server thread that doesn't wake up and pointlessly spin nearly as often. Instead, it blocks on the condition variable (with some spurious wakeups, but rare) and only really wakes up when there are new messages or it is told to shut down.
template<class T>
struct cross_thread_queue {
    void push( T t ) {
        {
            auto l = lock();
            data.push_back(std::move(t));
        }
        cv.notify_one();
    }
    boost::optional<T> pop() {
        auto l = lock();
        cv.wait( l, [&]{ return halt || !data.empty(); } );
        if (halt) return {};
        T r = data.front();
        data.pop_front();
        return std::move(r); // returning to optional<T>, so we'll explicitly `move` here.
    }
    void terminate() {
        {
            auto l = lock();
            data.clear();
            halt = true;
        }
        cv.notify_all();
    }
private:
    std::mutex m;
    std::unique_lock<std::mutex> lock() {
        return std::unique_lock<std::mutex>(m);
    }
    bool halt = false;
    std::deque<T> data;
    std::condition_variable cv;
};
We use boost::optional for the return type of pop -- if the queue is halted, pop returns an empty optional. Otherwise, it blocks until there is data.
You can replace this with anything optional-like, even a std::pair<bool, T> where the first element says if there is anything to return, or a std::unique_ptr<T>, or a std::experimental::optional, or a myriad of other choices.
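For instance, a rough sketch of pop() returning std::pair<bool, T> instead (this assumes T is default-constructible):

std::pair<bool, T> pop() {
    auto l = lock();
    cv.wait( l, [&]{ return halt || !data.empty(); } );
    if (halt) return { false, T{} };   // no value: the queue was terminated
    T r = std::move(data.front());
    data.pop_front();
    return { true, std::move(r) };     // first element says "there is a value"
}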
cross_thread_queue<int> queue;

bool serverFunction()
{
    while (auto message = queue.pop()) {
        // processing *message
        std::cout << "Processing " << *message << std::endl;
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    queue.push(42);
    system("pause");
    queue.terminate();
    serverThread.join();
}

std::condition_variable::notify_all does not wake up all the threads

I have a simple example here:
The project can be called academic, since I'm trying to learn C++11 threads.
Here is a description of what's going on.
Imagine a really big std::string with lots of assembly source code inside, like
mov ebx,ecx;\r\nmov eax,ecx;\r\n....
The Parse() function takes this string and finds all the line positions, marking the begin and the end of each line and saving those as string::const_iterators in a job queue.
After that, 2 worker threads pop this info from the queue and parse each substring into an Instruction class object. They push_back the resulting Instruction instance into the std::vector<Instruction> result.
Here is a struct declaration to hold the line number and the iterators for a substring to parse
struct JobItem {
    int lineNumber;
    string::const_iterator itStart;
    string::const_iterator itEnd;
};
That's a small logger...
void ThreadLog(const char* log) {
    writeMutex.lock();
    cout << "Thr:" << this_thread::get_id() << " " << log << endl;
    writeMutex.unlock();
}
That's the shared data:
queue<JobItem> que;
vector<Instruction> result;
Here are all the primitives for sync
condition_variable condVar;
mutex condMutex;
bool signaled = false;
mutex writeMutex;
bool done=false;
mutex resultMutex;
mutex queMutex;
Per-thread function
void Func() {
    unique_lock<mutex> condLock(condMutex);
    ThreadLog("Waiting...");
    while (!signaled) {
        condVar.wait(condLock);
    }
    ThreadLog("Started");
    while (!done) {
        JobItem item;
        queMutex.lock();
        if (!que.empty()) {
            item = que.front(); que.pop();
            queMutex.unlock();
        }
        else {
            queMutex.unlock();
            break;
        }
        // if I comment out the line below, both threads wake up
        auto instr = ParseInstruction(item.itStart, item.itEnd);
        resultMutex.lock();
        result.push_back(Instruction());
        resultMutex.unlock();
    }
}
The manager function that manages the threads...
vector<Instruction> Parser::Parse(const string& instructionStream) {
    thread thread1(Func);
    thread thread2(Func);
    auto it0 = instructionStream.cbegin();
    auto it1 = it0;
    int currentIndex = instructionStream.find("\r\n");
    int oldIndex = 0;
    this_thread::sleep_for(chrono::milliseconds(1000)); // experimental
    int x = 0;
    while (currentIndex != string::npos) {
        auto it0 = instructionStream.cbegin() + oldIndex;
        auto it1 = instructionStream.cbegin() + currentIndex;
        queMutex.lock();
        que.push({ x, it0, it1 });
        queMutex.unlock();
        if (x == 20) { // fill the buffer a little bit before signal
            signaled = true;
            condVar.notify_all();
        }
        oldIndex = currentIndex + 2;
        currentIndex = instructionStream.find("\r\n", oldIndex);
        ++x;
    }
    thread1.join();
    thread2.join();
    done = true;
    return result;
}
The problem arises in the Func() function. As you can see, I'm using some logging inside of it. And the logs say:
Output:
Thr:9928 Waiting...
Thr:8532 Waiting...
Thr:8532 Started
Meaning that after the main thread had sent notify_all() to the waiting threads, only one of them actually woke up.
If I comment out the call to ParseInstruction() inside of Func() then both threads would wake up, otherwise only one is doing so.
It would be great to get some advice.
Suppose Func reads signaled and sees it false.
Then Parse sets signaled true and does the notify_all; at this point Func is not waiting, so does not see the notify.
Func then waits on the condition variable and blocks.
You can avoid this by putting a lock of condMutex around the assignment to signaled.
This is the normal pattern for using condition variables correctly - you need to both test and modify the condition you want to wait on within the same mutex.
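A minimal sketch of that fix in Parse (Func can stay as it is, since it already tests signaled while holding condMutex):

if (x == 20) { // fill the buffer a little bit before signalling
    {
        lock_guard<mutex> lock(condMutex);
        signaled = true;      // modify the condition under the same mutex the waiters hold
    }
    condVar.notify_all();     // a waiter now either sees signaled == true or receives the notify
}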

Shutdown boost threads correctly

I have x boost threads that work at the same time. One producer thread fills a synchronised queue with calculation tasks. The consumer threads pop tasks off and calculate them.
Image Source: https://www.quantnet.com/threads/c-multithreading-in-boost.10028/
The user may end the program during this process, so I need to shut down my threads properly. My current approach doesn't seem to work, since exceptions are thrown. The intent is that on system shutdown all processes should be killed and stop their current task, no matter what they are doing. Could you please show me how you would kill those threads?
Thread Initialisation:
for (int i = 0; i < numberOfThreads; i++)
{
    std::thread* thread = new std::thread(&MyManager::worker, this);
    mThreads.push_back(thread);
}
Thread Destruction:
void MyManager::shutdown()
{
    for (int i = 0; i < numberOfThreads; i++)
    {
        mThreads.at(i)->join();
        delete mThreads.at(i);
    }
    mThreads.clear();
}
Worker:
void MyManager::worker()
{
    while (true)
    {
        int current = waitingList.pop();
        Object * p = objects.at(current);
        p->calculateMesh(); // this task is internally locked by a mutex
        try
        {
            boost::this_thread::interruption_point();
        }
        catch (const boost::thread_interrupted&)
        {
            // Thread interruption request received, break the loop
            std::cout << "- Thread interrupted. Exiting thread." << std::endl;
            break;
        }
    }
}
Synchronised Queue:
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>

template <typename T>
class ThreadSafeQueue
{
public:
    T pop()
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        while (queue_.empty())
        {
            cond_.wait(mlock);
        }
        auto item = queue_.front();
        queue_.pop();
        return item;
    }
    void push(const T& item)
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        queue_.push(item);
        mlock.unlock();
        cond_.notify_one();
    }
    int sizeIndicator()
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        return queue_.size();
    }
private:
    bool isEmpty() {
        std::unique_lock<std::mutex> mlock(mutex_);
        return queue_.empty();
    }
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};
The thrown error call stack:
... std::_Mtx_lockX(_Mtx_internal_imp_t * * _Mtx) Line 68 C++
... std::_Mutex_base::lock() Line 42 C++
... std::unique_lock<std::mutex>::unique_lock<std::mutex>(std::mutex & _Mtx) Line 220 C++
... ThreadSafeQueue<int>::pop() Line 13 C++
... MyManager::worker() Line 178 C++
From my experience on working with threads in both Boost and Java, trying to shut down threads externally is always messy. I've never been able to really get that to work cleanly.
The best I've gotten is to have a boolean value available to all the consumer threads that is set to true. When you set it to false, the threads will simply return on their own. In your case, that could easily be put into the while loop you have.
On top of that, you're going to need some synchronization so that you can wait for the threads to return before you delete them, otherwise you can get some hard to define behavior.
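Applied to your worker, that could look roughly like the sketch below. The running flag and the dummy item are names I'm introducing for illustration; the flag is atomic so all threads see the update, and because pop() blocks on an empty queue, shutdown() pushes one dummy item per thread to wake them up (this assumes -1 is never a valid index in waitingList):

std::atomic<bool> running{true};   // could also be a member of MyManager

void MyManager::worker()
{
    while (running)
    {
        int current = waitingList.pop();   // may block; woken by the dummy items below
        if (!running) break;               // re-check the flag after waking up
        Object * p = objects.at(current);
        p->calculateMesh();
    }
}

void MyManager::shutdown()
{
    running = false;
    for (size_t i = 0; i < mThreads.size(); i++)
        waitingList.push(-1);              // dummy items so blocked pop() calls return
    for (size_t i = 0; i < mThreads.size(); i++)
    {
        mThreads.at(i)->join();
        delete mThreads.at(i);
    }
    mThreads.clear();
}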
An example from a past project of mine:
Thread creation
barrier = new boost::barrier(numOfThreads + 1);
threads = new detail::updater_thread*[numOfThreads];
for (unsigned int t = 0; t < numOfThreads; t++) {
    // This object is just a wrapper class for the boost thread.
    threads[t] = new detail::updater_thread(barrier, this);
}
Thread destruction
for (unsigned int i = 0; i < numOfThreads; i++) {
    threads[i]->requestStop(); // Notify all threads to stop.
}
barrier->wait(); // The update request will allow the threads to get the message to shutdown.
for (unsigned int i = 0; i < numOfThreads; i++) {
    threads[i]->waitForStop(); // Wait for all threads to stop.
    delete threads[i];         // Now we are safe to clean up.
}
Some methods that may be of interest from the thread wrapper.
// Constructor
updater_thread::updater_thread(boost::barrier * barrier)
{
    this->barrier = barrier;
    running = true;
    thread = boost::thread(&updater_thread::run, this);
}

void updater_thread::run() {
    while (running) {
        barrier->wait();
        if (!running) break;
        // Do stuff
        barrier->wait();
    }
}

void updater_thread::requestStop() {
    running = false;
}

void updater_thread::waitForStop() {
    thread.join();
}
Try moving the 'try' up (like in the sample below). If your thread is waiting for data (inside waitingList.pop()), it may be waiting inside the condition variable's wait(). That is an 'interruption point' and so may throw when the thread gets interrupted.
void MyManager::worker()
{
    while (true)
    {
        try
        {
            int current = waitingList.pop();
            Object * p = objects.at(current);
            p->calculateMesh(); // this task is internally locked by a mutex
            boost::this_thread::interruption_point();
        }
        catch (const boost::thread_interrupted&)
        {
            // Thread interruption request received, break the loop
            std::cout << "- Thread interrupted. Exiting thread." << std::endl;
            break;
        }
    }
}
Maybe you are catching the wrong exception class?
Which would mean it does not get caught.
Not too familiar with threads but is it the mix of std::threads and boost::threads that is causing this?
Try catching the lowest parent exception.
I think this is the classic problem of reader/writer threads working on a common buffer. One of the most secure ways of solving this problem is to use mutexes and signals. (I am not able to post the code here; please send me an email and I will post the code to you.)