I'm writing an Audio class that holds an std::thread for refilling some buffers asynchronously. Say we call the main thread A and the background (class member) thread B. I'm using an std::mutex to block thread B whenever the sound is not playing, so that it doesn't run in the background and burn CPU when it isn't needed. The mutex is locked by thread A by default, so thread B is blocked; when it's time to play the sound, thread A unlocks the mutex and thread B runs in a loop (by locking and then immediately unlocking the mutex on each iteration).
The issue comes up when thread B sees that it's reached the end of the file. It can stop playback and clean up buffers and such, but it can't stop its own loop, because the mutex would have to end up locked by thread A, not by thread B.
Here's the relevant code outline:
class Audio {
private:
// ...
std::thread Thread;
std::mutex PauseMutex; // mutex that blocks Thread, locked in constructor
void ThreadFunc(); // assigned to Thread in constructor
public:
// ...
void Play();
void Stop();
};
void Audio::ThreadFunc() {
// ... (include initial check of mutex here)
while (!this->EndThread) { // Thread-safe flag, only set when Audio is destructed
// ... Check and refill buffers as necessary, etc ...
if (EOF)
Stop();
// Attempt a lock, blocks thread if sound/music is not playing
this->PauseMutex.lock();
this->PauseMutex.unlock();
}
}
void Audio::Play() {
// ...
PauseMutex.unlock(); // unlock mutex so loop in ThreadFunc can start
}
void Audio::Stop() {
// ...
PauseMutex.lock(); // locks mutex to stop loop in ThreadFunc
// ^^ This is the issue here
}
In the above setup, when the background thread sees that it's reached EOF, it would call the class's Stop() function, which supposedly locks the mutex to stop the background thread. This doesn't work because the mutex would have to be locked by the main thread, not the background thread (in this example, it crashes in ThreadFunc because the background thread attempts a lock in its main loop after already locking in Stop()).
At this point the only thing I could think of would be to somehow have the background thread lock the mutex as if it was the main thread, giving the main thread ownership of the mutex... if that's even possible? Is there a way for a thread to transfer ownership of a mutex to another thread? Or is this a design flaw in the setup I've created? (If the latter, are there any rational workarounds?) Everything else in the class so far works just as designed.
I'm not going to even pretend to understand how your code is trying to do what it is doing. There is one thing, however, that is evident. You're trying to use a mutex for conveying some predicate state change, which is the wrong vehicle to drive on that freeway.
Predicate state change is handled by coupling three things:
Some predicate datum
A mutex to protect the predicate
A condition variable to convey possible change in predicate state.
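In its smallest form, the three pieces work together like this (a bare-bones sketch with illustrative names, not the Player code that follows): the waiter sleeps until the predicate says so, and the signaller changes the predicate under the mutex and then notifies.
#include <condition_variable>
#include <mutex>
bool ready = false;            // the predicate datum
std::mutex m;                  // protects 'ready'
std::condition_variable cv;    // conveys "ready may have changed"
void waiter()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready; }); // unlocks while sleeping, relocks before returning
    // 'ready' is true here, and the lock is held again
}
void signaller()
{
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;          // change the predicate under the mutex
    }
    cv.notify_one();           // wake the waiter
}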
The Goal
The goal in the below example is to demonstrate how a mutex, a condition variable, and predicate data are used in concert when controlling program flow across multiple threads. It shows examples of using both wait and wait_for condition variable functionality, as well as one way to run a member function as a thread proc.
Following is a simple Player class that toggles between four possible states:
Stopped : The player is not playing, nor paused, nor quitting.
Playing : The player is playing.
Paused : The player is paused, and will continue from whence it left off once it resumes Playing.
Quit : The player should stop what it is doing and terminate.
The predicate data is fairly obvious: the state member. It must be protected, which means it cannot be changed, nor even checked, except under the protection of the mutex. I've added to this a counter that simply increments while the Playing state is maintained over some period of time. More specifically:
While Playing, each 200ms the counter increments, then dumps some data to the console.
While Paused, counter is not changed, but retains its last value while Playing. This means when resumed it will continue from where it left off.
When Stopped, the counter is reset to zero and a newline is injected into the console output. This means switching back to Playing will start the counter sequence all over again.
Setting the Quit state has no effect on counter, it will be going away along with everything else.
The Code
#include <iostream>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <unistd.h>
using namespace std::chrono_literals;
struct Player
{
private:
std::mutex mtx;
std::condition_variable cv;
std::thread thr;
enum State
{
Stopped,
Paused,
Playing,
Quit
};
State state;
int counter;
void signal_state(State st)
{
std::unique_lock<std::mutex> lock(mtx);
if (st != state)
{
state = st;
cv.notify_one();
}
}
// main player monitor
void monitor()
{
std::unique_lock<std::mutex> lock(mtx);
bool bQuit = false;
while (!bQuit)
{
switch (state)
{
case Playing:
std::cout << ++counter << '.';
cv.wait_for(lock, 200ms, [this](){ return state != Playing; });
break;
case Stopped:
cv.wait(lock, [this]() { return state != Stopped; });
std::cout << '\n';
counter = 0;
break;
case Paused:
cv.wait(lock, [this]() { return state != Paused; });
break;
case Quit:
bQuit = true;
break;
}
}
}
public:
Player()
: state(Stopped)
, counter(0)
{
thr = std::thread(std::bind(&Player::monitor, this));
}
~Player()
{
quit();
thr.join();
}
void stop() { signal_state(Stopped); }
void play() { signal_state(Playing); }
void pause() { signal_state(Paused); }
void quit() { signal_state(Quit); }
};
int main()
{
Player player;
player.play();
sleep(3);
player.pause();
sleep(3);
player.play();
sleep(3);
player.stop();
sleep(3);
player.play();
sleep(3);
}
Output
I can't really demonstrate this. You'll have to run it and see how it works, and I invite you to toy with the states in main() as I have above. Do note, however, that once quit is invoked none of the other states will be monitored. Setting the Quit state will shut down the monitor thread. For what it's worth, a run of the above should look something like this:
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.16.17.18.19.20.21.22.23.24.25.26.27.28.29.30.
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.
with the first set of numbers dumped in two groups (1..15, then 16..30), as a result of playing, then pausing, then playing again. Then a stop is issued, followed by another play for a period of ~3 seconds. After that, the object self-destructs, and in doing so, sets the Quit state, and waits for the monitor to terminate.
Summary
Hopefully you get something out of this. If you find yourself trying to manage predicate state by manually latching and releasing mutexes, chances are you need a condition-variable design pattern to facilitate detecting those changes.
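///////////////////////////////////////////////////////////////////////////////
// class CtLockCS - using critical section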
class CtLockCS
{
public:
//--------------------------------------------------------------------------
CtLockCS() { ::InitializeCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
~CtLockCS() { ::DeleteCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
bool TryLock() { return ::TryEnterCriticalSection(&m_cs) == TRUE; }
//--------------------------------------------------------------------------
void Lock() { ::EnterCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
void Unlock() { ::LeaveCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
protected:
CRITICAL_SECTION m_cs;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockMX - using mutex
class CtLockMX
{
public:
//--------------------------------------------------------------------------
CtLockMX(const TCHAR* nameMutex = 0)
{ m_mx = ::CreateMutex(0, FALSE, nameMutex); }
//--------------------------------------------------------------------------
~CtLockMX()
{ if (m_mx) { ::CloseHandle(m_mx); m_mx = NULL; } }
//--------------------------------------------------------------------------
bool TryLock()
{ return m_mx ? (::WaitForSingleObject(m_mx, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock()
{ if (m_mx) { ::WaitForSingleObject(m_mx, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{ if (m_mx) { ::ReleaseMutex(m_mx); } }
//--------------------------------------------------------------------------
protected:
HANDLE m_mx;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockSM - using semaphore
class CtLockSM
{
public:
//--------------------------------------------------------------------------
CtLockSM(int maxcnt) { m_sm = ::CreateSemaphore(0, maxcnt, maxcnt, 0); }
//--------------------------------------------------------------------------
~CtLockSM() { ::CloseHandle(m_sm); }
//--------------------------------------------------------------------------
bool TryLock() { return m_sm ? (::WaitForSingleObject(m_sm, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock() { if (m_sm) { ::WaitForSingleObject(m_sm, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{
if (m_sm){
LONG prevcnt = 0;
::ReleaseSemaphore(m_sm, 1, &prevcnt);
}
}
//--------------------------------------------------------------------------
protected:
HANDLE m_sm;
};
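All three wrappers expose the same Lock/TryLock/Unlock interface, so a small scope guard can be layered on top of any of them. The following is only a sketch, not part of the original classes:
///////////////////////////////////////////////////////////////////////////////
// class CtScopedLock - hypothetical RAII helper for the CtLock* classes above
template <class LockT>
class CtScopedLock
{
public:
    explicit CtScopedLock(LockT& lock) : m_lock(lock) { m_lock.Lock(); }
    ~CtScopedLock() { m_lock.Unlock(); }
private:
    CtScopedLock(const CtScopedLock&);            // non-copyable
    CtScopedLock& operator=(const CtScopedLock&); // non-assignable
    LockT& m_lock;
};
// Usage sketch:
//   CtLockMX g_lock;
//   void Foo()
//   {
//       CtScopedLock<CtLockMX> guard(g_lock); // released automatically on scope exit
//       // ... protected work ...
//   }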
Related
I've got a Timer class that can run with both an initial time and an interval. There's an internal function internalQuit that performs thread.join() before a thread is started again in resetCallback. The thing is that each public function takes its own std::lock_guard on the mutex to protect the data being written. I'm now running into an issue when the callback is used to, for example, stop the timer: the mutex cannot be locked by stop(). I'm hoping to get some help on how to tackle this issue.
class Timer
{
public:
Timer(string_view identifier, Function &&timeoutHandler, Duration initTime, Duration intervalTime);
void start();
void stop() // for example
{
std::lock_guard lock{mutex};
running = false;
sleepCv.notify_all();
}
void setInitTime();
void setIntervalTime();
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
startTimerThread(std::forward<Function>(timeoutHandler));
}
private:
void internalQuit() // performs thread join
{
{
std::lock_guard lock {mutex};
quit = true;
running = false;
sleepCv.notify_all();
}
thread.join();
}
void mainLoop(Function &&timeoutHandler)
{
while(!quit)
{
std::unique_lock lock{mutex};
// wait for running with sleepCv.wait()
// handle initTimer with sleepCv.wait_until()
timeoutHandler(); // callback
// handle intervalTimer with sleepCv.wait_until()
timeoutHandler(); // callback
}
}
void startTimerThread(Function &&timeoutHandler)
{
thread = std::thread([&, timeoutHandler = std::forward<Function>(timeoutHandler)](){
mainLoop(timeoutHandler);
});
}
std::thread thread{};
std::mutex mutex{};
std::condition_variable sleepCv{};
// initTime, intervalTime and some booleans for updating with sleepCv.notify_all();
};
For testing this, I have the following testcase in Gtest. I'm expecting the timer to stop in the callback. Unfortunately, the timer will hang on acquiring the mutex lock in the stop() function.
std::atomic<int> callbackCounter;
void timerCallback()
{
callbackCounter.fetch_add(1, std::memory_order_acq_rel);
}
TEST(timerTest, timerShouldStopWhenStoppedInNewCallback)
{
std::atomic<int> testCounter{0};
Timer<std::chrono::steady_clock > t{"timerstop", &timerCallback, std::chrono::milliseconds(0), std::chrono::milliseconds(100)};
t.resetCallback([&]{
testCounter += 1;
t.stop();
});
t.start();
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // trigger due to original interval timeout
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // no trigger, because stopped in new callback
}
Removing all the mutexes from the public functions fixes the issue, but that could lead to race conditions when data is written to the member variables. Hence each function takes the lock before writing to, for example, the booleans.
I've tried looking into std::move to move the thread into a different variable during resetCallback and then call join on that one. I'm also investigating std::recursive_mutex, but have no experience with using it.
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
auto prevThread = std::thread(std::move(this->thread));
// didn't know how to continue from here, requiring more selfstudy.
startTimerThread(std::forward<Function>(timeoutHandler));
}
This is a new subject for me; I've worked with mutexes and timers before, but only on relatively simple things.
Thank you in advance.
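For what it's worth, a common way out of this kind of self-deadlock is to make sure the user callback is never invoked while the timer's own mutex is held. A rough sketch against the outline above (same names as the Timer sketch; this is not the original code):
// Sketch only: release the mutex around the user callback, so the callback can
// safely call stop() or resetCallback() (which lock the same mutex) without deadlocking.
void mainLoop(Function &&timeoutHandler)
{
    std::unique_lock lock{mutex};
    while (!quit)
    {
        // ... wait for running / the next deadline with sleepCv.wait / wait_until ...
        lock.unlock();      // drop the lock before calling back into user code
        timeoutHandler();   // the callback may now lock the mutex itself
        lock.lock();        // re-acquire before touching shared state again
    }
}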
I have two methods "log" and "measure" that should never execute at the same time.
So I tried to use a "std::mutex" to do this as follows:
void log(std::string message)
{
mtx.lock();
someLogFunctionality();
mtx.unlock();
}
void measure()
{
mtx.lock();
someMeasureFunctionality();
mtx.unlock();
}
Now it turns out that it shall also be possible to call "log" multiple times in parallel without blocking, and the same applies to "measure". (Reason: someLogFunctionality() and someMeasureFunctionality() interfere with each other, but the same method may be called multiple times in parallel.)
I had a look at "std::shared_mutex" then, but there are two problems for me:
1.) With shared_mutex I could use lock_shared for only one of the methods (log or measure) but then the other one would have to use the exclusive lock (and could again not be executed multiple times in parallel)
void log(std::string message)
{
mtx.lock_shared();
someLogFunctionality();
mtx.unlock_shared();
}
void measure()
{
mtx.lock(); // This should also be shared but among another "group"
someMeasureFunctionality();
mtx.unlock();
}
2.) I can't use C++17 (constraint in the environment that I'm working with)
Do you have any suggestions for me how I could realize this?
Based on the reply from alexb I have written the following mutex class, which currently works for me (only tried out in a simple multithreaded example application so far).
Please note that it is not protected against "starvation". In simple words: it is not ensured that lockMeasure will ever get the lock if lockLogging is called at high frequency (and the other way round).
class MyMutex
{
private:
std::atomic<int> log_executors;
std::atomic<int> measure_executors;
std::mutex mtx;
std::condition_variable condition;
public:
MyMutex() : log_executors(0), measure_executors(0) {}
~MyMutex() {}
void lockMeasure()
{
std::unique_lock<std::mutex> lock(mtx);
while(log_executors) {
condition.wait(lock);
}
measure_executors++;
}
void unlockMeasure()
{
std::unique_lock<std::mutex> lock(mtx);
measure_executors--;
if (!measure_executors)
{
condition.notify_all();
}
}
void lockLogging()
{
std::unique_lock<std::mutex> lock(mtx);
while(measure_executors) {
condition.wait(lock);
}
log_executors++;
}
void unlockLogging()
{
std::unique_lock<std::mutex> lock(mtx);
log_executors--;
if (!log_executors)
{
condition.notify_all();
}
}
static MyMutex& getInstance()
{
static MyMutex _instance;
return _instance;
}
};
Usage:
void measure()
{
MyMutex::getInstance().lockMeasure();
someMeasureFunctionality();
MyMutex::getInstance().unlockMeasure();
}
void log()
{
MyMutex::getInstance().lockLogging();
someLogFunctionality();
MyMutex::getInstance().unlockLogging();
}
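Since every lockLogging()/lockMeasure() must be balanced by the matching unlock even on early returns or exceptions, a small RAII guard can make the call sites safer. These guards are not part of the class above, just a sketch:
// Hypothetical scope guards for MyMutex; the destructor guarantees the unlock.
struct LoggingGuard
{
    LoggingGuard()  { MyMutex::getInstance().lockLogging(); }
    ~LoggingGuard() { MyMutex::getInstance().unlockLogging(); }
};
struct MeasureGuard
{
    MeasureGuard()  { MyMutex::getInstance().lockMeasure(); }
    ~MeasureGuard() { MyMutex::getInstance().unlockMeasure(); }
};
void log()
{
    LoggingGuard guard;    // registered as a log executor for this scope
    someLogFunctionality();
}                          // unregistered here, even if an exception is thrown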
You need some barrier logic which is more complicated than shared_mutex (by the way, shared_mutex is not the best choice for multiplatform compilation). For example, you can use a mutex, a condition variable, and two counters for the barrier synchronization. It does not burn CPU, and you don't need sleeps to poll for changes.
#include <mutex>
#include <condition_variable>
#include <atomic>
std::atomic<int> log_executors{0};
std::atomic<int> measure_executors{0};
std::mutex mutex;
std::condition_variable condition;
void log(std::string message) {
{
std::unique_lock<std::mutex> lock(mutex);
log_executors++; // register this executor and prevent new measure executors from entering
// Wait until all measure executors have gone away
while(measure_executors) {
condition.wait(lock); // wait for the condition variable signal; the mutex is unlocked during the wait
}
}
// here lock is freed
someLogFunctionality(); // execute logic
{
std::unique_lock<std::mutex> lock(mutex);
log_executors--; // unregister current execution
condition.notify_all(); // send signal and unlock all waiters
}
}
void measure()
{
{
std::unique_lock<std::mutex> lock(mutex);
measure_executors++; // register this executor and prevent new log executors from entering
while(log_executors) {
condition.wait(lock); // wait until all log executors have gone
}
}
someMeasureFunctionality();
{
std::unique_lock<std::mutex> lock(mutex);
measure_executors--; // unregister current execution
condition.notify_all(); // send signal and unlock all waiters
}
}
You can have a master lock granting access to a semaphore variable:
void log(std::string message)
{
acquire(LOG);
someLogFunctionality();
release(LOG);
}
void measure()
{
acquire(MEASURE);
someMeasureFunctionality();
release(MEASURE);
}
void acquire(int what) {
for (;;) {
mtx.lock();
if (owner == NONE) {
owner = what;
}
if (owner == what) {
// Either we just became the owner, or another caller of the same kind is already running
users[owner]++;
mtx.unlock();
return;
}
mtx.unlock();
// Some sleep would be good
usleep(5000);
}
}
void release(int what) {
mtx.lock();
if (owner != what) {
// This is an error. How could this happen?
}
if (users[what] <= 0) {
// This is an error. How could this happen?
}
users[what]--;
if (0 == users[what]) {
owner = NONE;
}
mtx.unlock();
}
In this case, for example:
owner is NONE
LOG1 acquires LOG. It can do so because owner is NONE
MEASURE1 calls acquire(MEASURE). It starts spinning in place because owner != MEASURE
MEASURE2 calls acquire(MEASURE). It starts spinning in place because owner != MEASURE
LOG2 acquires LOG. It can do so because owner is LOG, users[LOG]=2
LOG2 releases LOG. users[LOG]=1
LOG1 releases LOG. users[LOG]=0, so owner becomes NONE
MEASURE2 by pure chance acquires mtx before MEASURE1, finds owner=NONE and proceeds
MEASURE1 finds owner=MEASURE and sets users[MEASURE]=2
In the above, note that the second call to measure() actually executed a bit earlier. This should be OK. But if you want to keep the calls "serialized" even if they happen in parallel, you'll need a stack for each owner and more complex code.
I have a pretty basic producer / consumer implementation. The producer is the "main" thread, and the consumer is executed on a separate thread. However the consumer needs to be explicitly started, using a Start() function. This sets the "processing" flag to true (used in the infinite while loop).
Once in the while loop, the consumer then uses a condition variable to see if there is data in the queue to process. If yes, it does its work, goes back to the top of the infinite loop, then the condition variable, and so on.
The problem I am having is that the consumer is waiting for data in the queue when I want to stop processing. How can I wake up the consumer? I have provided some example code below, removing some major components and just showing the high-level design (not everything is actually public).
// Consumer object
class Consumer {
public:
std::mutex mtx_;
bool processing_ = false;
std::thread processing_thread_;
std::queue<int> data_;
std::condition_variable cv_;
~Consumer() {
// Make sure the processing thread is stopped
{
std::lock_guard<std::mutex> lock(mtx_);
processing_ = false;
}
if (processing_thread_.joinable()) {
processing_thread_.join();
}
}
void Start() {
std::lock_guard<std::mutex> lock(mtx_);
processing_ = true;
processing_thread_ = std::thread(
&Consumer::Run,
this);
}
void Stop() {
std::lock_guard<std::mutex> lock(mtx_);
processing_ = false;
}
void AddData(int d) {
std::lock_guard<std::mutex> lock(mtx_);
data_.push(d);
cv_.notify_one();
}
bool IsDataAvailable() const {
return (!data_.empty());
}
void Run() {
// The infinite loop
while (processing_) {
// This is where I get stuck waiting even tho processing has been
// changed to false by the main thread
std::unique_lock<std::mutex> lock(mtx_);
cv_.wait(lock, std::bind(
&Consumer::IsDataAvailable, this));
// do some processing
}
}
}; // end of consumer
// Somewhere in main, trying to stop the processing thread because I am
// done processing, OR my_consumer goes out of scope and tries to join
// ...
my_consumer.Stop();
}
// my_consumer goes out of scope here calling destructor.
A couple of changes are required for the consumer to also wait for a change in processing_:
~Consumer() {
if (processing_thread_.joinable()) {
Stop();
processing_thread_.join();
}
}
// ...
void Stop() {
std::lock_guard<std::mutex> lock(mtx_);
processing_ = false;
cv_.notify_one();
}
// ...
void Run() {
for(;;) {
std::unique_lock<std::mutex> lock(mtx_);
// Wait till something is put into the queue or stop requested.
cv_.wait(lock, [this]() { return !processing_ || !data_.empty(); });
if (!data_.empty()) {
// Process queue elements.
} else if (!processing_) {
return; // Only exit when the queue is empty.
}
}
}
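With those changes, Stop() (or the destructor) wakes the consumer even when the queue is empty. A rough usage sketch, assuming the Consumer class above:
int main()
{
    Consumer my_consumer;
    my_consumer.Start();
    for (int i = 0; i < 10; ++i)
        my_consumer.AddData(i);  // each push notifies the condition variable
    my_consumer.Stop();          // clears processing_ and notifies as well
}   // the destructor joins; Run() exits once the queue is drained and processing_ is false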
I have a Program class and a Browser class.
Inside my Program::Run(), I launch the Browser to start in a separate thread.
However, before I continue with my Run() method, I want to wait for a certain part of the Browser to initialize, thus I need to check if a variable has been set in the browser object.
Used as the thread for the browser
int Program::_Thread_UI_Run() {
...
return Browser->Run();
}
I am using async to run the browser thread and retrieve its return value when it is finished.
int Program::Start() {
std::unique_lock<std::mutex> lck(mtx);
auto t1 = std::async(&Program::_Thread_UI_Run, this);
cv.wait(lck);
... when wait is released, do stuff
// Thread finishes and returns an exit code for the program
auto res1 = t1.get();
// return res1 as exit code.
}
Browser.cpp class
int Browser::Run()
{
// Initialize many things
...
m_Running = true;
cv.notify_all(); // Notify the waiter back in Program
// This will run for as long as the program is running
while (m_Running)
{
... browser window message loop
}
return exit_code;
}
I have problems setting this up. The program is crashing :/
Do I pass the mutex variable to everything using it? Or just recreate it in every function body?
What about the condition_variable?
With the current setup the program crashes:
The exception Breakpoint
A breakpoint has been reached.
(0x80000003) occurred in the application at location 0x107d07d6.
Hints and help are appreciated.
Edit: Updated code to match new suggestions
In browser's .h file: std::atomic_bool m_Running;
int Browser::Run(std::condition_variable& cv)
{
int exit_code = 0;
// Set up, and attain the desired state:
...
m_Running = true;
cv.notify_all();
while (m_Running)
{
// Process things etc
}
return exit_code;
}
int Program::Start()
{
std::mutex m;
std::condition_variable cv;
auto t1 = std::async(&Program::_Thread_UI_Run, this, std::ref(cv));
std::unique_lock<std::mutex> lock(m);
cv.wait(lock);
.... stuff
return t1.get();
}
I have a logger that helps me keep track of how the program is running.
By placing logger calls in crucial places in the code I was able to confirm that the program waits appropriately before continuing. However I still get prompted with
The exception Breakpoint A breakpoint has been reached. (0x80000003)
occurred in the application at location 0x107d07d6.
By commenting out //cv.wait(lock); the program works again.. :/
Why would waiting make it crash like that?
You definitely want to use std::condition_variable. It allows you to signal other threads once an operation has completed, so in your case, once the bool has been set:
Browser::Run()
{
// Set some things up, make sure everything is okay:
...
m_Running = true; // Now the thread is, by our standards, running*
// Let other threads know:
cv.notify_all();
// Continue to process thread:
while (m_Running)
{
}
}
And then in your main / other thread:
auto t1 = std::async(&Program::_Thread_UI_Run, this);
// Wait until we know the thread is actually running. This will block this thread until the condition_variable signals.
// (wait() needs a std::unique_lock on an associated mutex -- see the full example below.)
std::unique_lock<std::mutex> lock(m);
cv.wait(lock);
You should pass the std::condition_variable into any function using it, so your code would look more like:
int Browser::Run(std::condition_variable& cv)
{
int exit_code = 0;
// Set up, and attain the desired state:
...
m_Running = true;
cv.notify_all();
while (m_Running)
{
// Process things etc
}
return exit_code;
}
int Program::Start()
{
std::mutex m;
std::condition_variable cv;
auto t1 = std::async(&Program::_Thread_UI_Run, this, std::ref(cv));
std::unique_lock<std::mutex> lock(m);
// Wait until the browser is in the desired state
cv.wait(lock);
// The thread has signalled. At this point, Browser::m_Running = true
// Wait for the browser to exit, and then propagate its exit code
return t1.get();
}
@Richard Hodges raises an excellent point in the comments, which I overlooked: m_Running needs to be std::atomic (or have locking around its use), otherwise both threads may try to use it at once. std::condition_variable itself is thread-safe; the notify calls don't require the mutex to be held.
*Of course the thread is running at that point, I just mean it's in the state you desire
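One remaining gotcha, not covered above (this is only a sketch, not part of the original answer): if Browser::Run() sets m_Running and notifies before Program::Start() reaches cv.wait(), the notification is lost and the wait blocks forever. Waiting on a predicate closes that window, provided the flag is published under the same mutex the waiter uses:
// Waiter side (in Program::Start), sketch only -- isBrowserRunning() is a
// hypothetical accessor for Browser's m_Running flag:
std::unique_lock<std::mutex> lock(m);
cv.wait(lock, [this]() { return isBrowserRunning(); });
// Signaller side (in Browser::Run), sketch only -- assumes the same mutex is
// reachable here, e.g. passed in alongside the condition variable:
{
    std::lock_guard<std::mutex> lock(m);
    m_Running = true;
}
cv.notify_all();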
I have threaded iterative generation of some geometries. I use VTK for rendering. After each iteration I would like to display (render) the current progress. My approach works as expected until the last 2 threads are left hanging, waiting on the QWaitCondition. They are blocked, even though their status in the QWaitCondition's queue is wokenUp (inspected through the debugger). I suspect that the number 2 is somehow connected to my processor's 4 cores.
Simplified code is below. What am I doing wrong and how to fix it?
class Logic
{
QMutex threadLock, renderLock;
//SOLUTION: renderLock should be per thread, not global like this!
QWaitCondition wc;
bool render;
...
};
Logic::Logic()
{
...
renderLock.lock(); //needs to be locked for QWaitCondition
}
void Logic::timerProc()
{
static int count=0;
if (render||count>10) //render wanted or not rendered in a while
{
threadLock.lock();
vtkRenderWindow->Render();
render=false;
count=0;
wc.wakeAll();
threadLock.unlock();
}
else
count++;
}
double Logic::makeMesh(int meshIndex)
{
while (notFinished)
{
...(calculate g)
threadLock.lock(); //lock scene
mesh[meshIndex]->setGeometry(g);
render=true;
threadLock.unlock();
wc.wait(&renderLock); //wait until rendered
}
return g.size;
}
void Logic::makeAllMeshes()
{
vector<QFuture<double>> r;
for (int i=0; i<meshes.size(); i++)
{
QFuture<double> future = QtConcurrent::run<double>(this, &Logic::makeMesh, i);
r.push_back(future);
}
while(any r is not finished)
QApplication::processEvents(); //give timer a chance
}
There is at least one defect in your code. count and render belong to the critical section, which means they need to be protected from concurrent access.
Assume there are several threads waiting on wc.wait(&renderLock);. Someone somewhere executes wc.wakeAll();. ALL the threads are woken up. Assume at least one thread sees notFinished as true (if any of your code makes sense, this must be possible) and goes back to execute:
threadLock.lock(); //lock scene
mesh[meshIndex]->setGeometry(g);
render=true;
threadLock.unlock();
wc.wait(&renderLock) <----OOPS...
The second time the thread comes back, it doesn't hold the lock renderLock. So Kamil Klimek is right: you call wait on a mutex you don't hold.
You should remove the lock in the constructor, and lock renderLock before calling wait on the condition. Wherever you lock renderLock, the thread should not hold threadLock.
The catch was that I needed one QMutex per thread, and not just one global QMutex. The corrected code is below. Thanks for the help, UmNyobe!
class Logic
{
QMutex threadLock;
QWaitCondition wc;
bool render;
...
};
//nothing in constructor related to threading
void Logic::timerProc()
{
//count was a debugging workaround and is not needed
if (render)
{
threadLock.lock();
vtkRenderWindow->Render();
render=false;
wc.wakeAll();
threadLock.unlock();
}
}
double Logic::makeMesh(int meshIndex)
{
QMutex renderLock; //fix
renderLock.lock(); //fix
while (notFinished)
{
...(calculate g)
threadLock.lock(); //lock scene
mesh[meshIndex]->setGeometry(g);
render=true;
threadLock.unlock();
wc.wait(&renderLock); //wait until rendered
}
return g.size;
}
void Logic::makeAllMeshes()
{
vector<QFuture<double>> r;
for (int i=0; i<meshes.size(); i++)
{
QFuture<double> future = QtConcurrent::run<double>(this, &Logic::makeMesh, i);
r.push_back(future);
}
while(any r is not finished)
QApplication::processEvents(); //give timer a chance
}