I have threaded the iterative generation of some geometries. I use VTK for rendering. After each iteration I would like to display (render) the current progress. My approach works as expected until the last 2 threads are left hanging, waiting on a QWaitCondition. They are blocked even though their status in the QWaitCondition's queue is wokenUp (inspected through the debugger). I suspect that the number 2 is somehow connected to my processor's 4 cores.
Simplified code is below. What am I doing wrong, and how can I fix it?
class Logic
{
QMutex threadLock, renderLock;
//SOLUTION: renderLock should be per thread, not global like this!
QWaitCondition wc;
bool render;
...
};
Logic::Logic()
{
...
renderLock.lock(); //needs to be locked for QWaitCondition
}
void Logic::timerProc()
{
static int count=0;
if (render||count>10) //render wanted or not rendered in a while
{
threadLock.lock();
vtkRenderWindow->Render();
render=false;
count=0;
wc.wakeAll();
threadLock.unlock();
}
else
count++;
}
double Logic::makeMesh(int meshIndex)
{
while (notFinished)
{
...(calculate g)
threadLock.lock(); //lock scene
mesh[meshIndex]->setGeometry(g);
render=true;
threadLock.unlock();
wc.wait(&renderLock); //wait until rendered
}
return g.size;
}
void Logic::makeAllMeshes()
{
vector<QFuture<double>> r;
for (int i=0; i<meshes.size(); i++)
{
QFuture<double> future = QtConcurrent::run<double>(this, &Logic::makeMesh, i);
r.push_back(future);
}
while(any r is not finished)
QApplication::processEvents(); //give timer a chance
}
There is at least one defect in your code: count and render are shared state, which means they need to be protected from concurrent access.
Assume there are several threads waiting on wc.wait(&renderLock);. Someone somewhere executes wc.wakeAll(); and ALL the threads are woken up. Assume at least one thread sees notFinished as true (if any of your code makes sense, this must be possible) and goes back to execute:
threadLock.lock(); //lock scene
mesh[meshIndex]->setGeometry(g);
render=true;
threadLock.unlock();
wc.wait(&renderLock) <----OOPS...
The second time the thread comes back, it doesn't hold the lock renderLock. So Kamil Klimek is right: you call wait on a mutex you don't hold.
You should remove the lock in the constructor, and lock the mutex before calling wait on the condition. Wherever you lock renderLock, the thread should not hold threadLock.
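For reference, a minimal sketch of the pattern described above (the names here are illustrative, not taken from the question): the waiting thread locks the mutex itself, checks the predicate in a loop, and passes that same locked mutex to wait(), while the signalling thread changes the predicate under the same mutex before waking the waiters.
#include <QMutex>
#include <QMutexLocker>
#include <QWaitCondition>
QMutex mutex;                 // guards 'rendered' (illustrative name)
QWaitCondition condition;
bool rendered = false;
// waiting side (worker thread)
void waitUntilRendered()
{
    QMutexLocker locker(&mutex);  // lock before waiting
    while (!rendered)             // loop guards against spurious wakeups
        condition.wait(&mutex);   // atomically unlocks, sleeps, relocks
    rendered = false;             // consume the notification
}
// signalling side (timer / GUI thread)
void markRendered()
{
    QMutexLocker locker(&mutex);
    rendered = true;
    condition.wakeAll();
}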
The catch was that I needed one QMutex per thread, not just one global QMutex. The corrected code is below. Thanks for the help, UmNyobe!
class Logic
{
QMutex threadLock;
QWaitCondition wc;
bool render;
...
};
//nothing in constructor related to threading
void Logic::timerProc()
{
//count was a debugging workaround and is not needed
if (render)
{
threadLock.lock();
vtkRenderWindow->Render();
render=false;
wc.wakeAll();
threadLock.unlock();
}
}
double Logic::makeMesh(int meshIndex)
{
QMutex renderLock; //fix
renderLock.lock(); //fix
while (notFinished)
{
...(calculate g)
threadLock.lock(); //lock scene
mesh[meshIndex]->setGeometry(g);
render=true;
threadLock.unlock();
wc.wait(&renderLock); //wait until rendered
}
return g.size;
}
void Logic::makeAllMeshes()
{
vector<QFuture<double>> r;
for (int i=0; i<meshes.size(); i++)
{
QFuture<double> future = QtConcurrent::run<double>(this, &Logic::makeMesh, i);
r.push_back(future);
}
while(any r is not finished)
QApplication::processEvents(); //give timer a chance
}
I've got a Timer class that can run with both an initial time and an interval. There's an internal function internalQuit that performs thread.join() before the thread is started again in resetCallback. The thing is that each public function takes its own std::lock_guard on the mutex to prevent the data from being written concurrently. I'm now running into an issue where, when the callback is used to (for example) stop the timer, the mutex cannot be locked by stop(). I'm hoping to get some help on how to tackle this issue.
class Timer
{
public:
Timer(string_view identifier, Function &&timeoutHandler, Duration initTime, Duration intervalTime);
void start();
void stop() // for example
{
std::lock_guard lock{mutex};
running = false;
sleepCv.notify_all();
}
void setInitTime();
void setIntervalTime();
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
startTimerThread(std::forward<Function>(timeoutHandler));
}
private:
void internalQuit() // performs thread join
{
{
std::lock_guard lock {mutex};
quit = true;
running = false;
sleepCv.notify_all();
}
thread.join();
}
void mainLoop(Function &&timeoutHandler)
{
while(!quit)
{
std::unique_lock lock{mutex};
// wait for running with sleepCv.wait()
// handle initTimer with sleepCv.wait_until()
timeoutHandler(); // callback
// handle intervalTimer with sleepCv.wait_until()
timeoutHandler(); // callback
}
}
void startTimerThread(Function &&timeoutHandler)
{
thread = std::thread([&, timeoutHandler = std::forward<Function>(timeoutHandler)](){
mainLoop(timeoutHandler);
});
}
std::thread thread{};
std::mutex mutex{};
std::condition_variable sleepCv{};
// initTime, intervalTime and some booleans for updating with sleepCv.notify_all();
};
For testing this, I have the following test case in GTest. I'm expecting the timer to stop from within the callback. Unfortunately, the timer hangs on acquiring the mutex lock in the stop() function.
std::atomic<int> callbackCounter;
void timerCallback()
{
callbackCounter.fetch_add(1, std::memory_order_acq_rel);
}
TEST(timerTest, timerShouldStopWhenStoppedInNewCallback)
{
std::atomic<int> testCounter{0};
Timer<std::chrono::steady_clock > t{"timerstop", &timerCallback, std::chrono::milliseconds(0), std::chrono::milliseconds(100)};
t.resetCallback([&]{
testCounter += 1;
t.stop();
});
t.start();
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // trigger due to original interval timeout
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // no trigger, because stopped in new callback
}
Removing all the mutexes from the public functions fixes the issue, but that could lead to possible race conditions when data is written to the member variables. Hence each function takes a lock before writing to, for example, the booleans.
I've tried looking into std::move to move the thread into a different variable during resetCallback and then call join on that one. I'm also investigating std::recursive_mutex, but I have no experience with using it.
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
auto prevThread = std::thread(std::move(this->thread));
// didn't know how to continue from here; requires more self-study.
startTimerThread(std::forward<Function>(timeoutHandler));
}
It's a new subject for me; I have worked with mutexes and timers before, but only for relatively simple things.
Thank you in advance.
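One hedged sketch of a possible direction (not a drop-in fix, and the names mirror the members shown in the question): the hang comes from mainLoop holding the mutex while it invokes timeoutHandler, so a stop() called from inside the callback blocks on that same mutex. Releasing the lock around the callback avoids this, roughly like so:
void mainLoop(Function &&timeoutHandler)
{
    std::unique_lock lock{mutex};
    while (!quit)
    {
        // wait for running, and handle initTime/intervalTime with
        // sleepCv.wait_until() as before, all while holding the lock
        sleepCv.wait(lock, [this] { return running || quit; });
        if (quit)
            break;
        lock.unlock();        // don't hold the mutex while user code runs
        timeoutHandler();     // the callback may now safely call stop()
        lock.lock();
    }
}
Note that calling resetCallback() from inside the callback would still be a problem, because internalQuit() would then join the thread from within itself.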
I have a program that starts N threads (std::async/std::future). I want the main thread to set up some data, then have all the worker threads run while the main thread waits for them to finish, and then loop this whole process.
What I have at the moment is something like this:
int main()
{
//Start N new threads (std::future/std::async)
while(condition)
{
//Set Up Data Here
//Send Data to threads
{
std::lock_guard<std::mutex> lock(mrun);
bRun = true;
}
run.notify_all();
//Wait for threads
{
std::unique_lock<std::mutex> lock(mrun);
run.wait(lock, [] {return bDone; });
}
//Reset bools
bRun = false;
bDone = false;
}
//Get results from futures once complete
}
int thread()
{
while(otherCondition)
{
std::unique_lock<std::mutex> lock(mrun);
run.wait(lock, [] {return bRun; });
bDone = true;
//Do thread stuff here
lock.unlock();
run.notify_all();
}
}
But I can't see any sign of either the main thread or the worker threads waiting for each other! Any idea what I am doing wrong, or how I can do this?
There are a couple of problems. First, you're setting bDone as soon as the first worker wakes up. Thus the main thread wakes immediately and begins readying the next data set. You want the main thread to wait until all workers have finished processing their data. Second, when a worker finishes processing, it loops around and immediately checks bRun. But it can't tell whether bRun == true means that the next data set is ready or whether the flag is still set from the last data set. You want it to wait for the next data set.
Something like this should work:
std::mutex mrun;
std::condition_variable dataReady;
std::condition_variable workComplete;
int nCurrentIteration = 0;
int nWorkerCount = 0;
int main()
{
//Start N new threads (std::future/std::async)
while(condition)
{
//Set Up Data Here
//Send Data to threads
{
std::lock_guard<std::mutex> lock(mrun);
nWorkerCount = N;
++nCurrentIteration;
}
dataReady.notify_all();
//Wait for threads
{
std::unique_lock<std::mutex> lock(mrun);
workComplete.wait(lock, [] { return nWorkerCount == 0; });
}
}
//Get results from futures once complete
}
int thread()
{
int nNextIteration = 1;
while(otherCondition)
{
std::unique_lock<std::mutex> lock(mrun);
dataReady.wait(lock, [&nNextIteration] { return nCurrentIteration==nNextIteration; });
lock.unlock();
++nNextIteration;
//Do thread stuff here
lock.lock();
if (--nWorkerCount == 0)
{
lock.unlock();
workComplete.notify_one();
}
}
}
Be aware that this solution isn't quite complete. If a worker encounters an exception, then the main thread will hang (because the dead worker will never reduce nWorkerCount). You'll likely need a strategy to deal with that scenario.
Incidentally, this pattern is called a barrier.
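For reference, C++20 ships this pattern directly as std::barrier, which handles the counting and the per-phase reset for you. A minimal, self-contained sketch (independent of the code above, with its own illustrative names):
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>
int main()
{
    constexpr int N = 4;
    // The completion function runs exactly once per phase,
    // after all N workers have arrived at the barrier.
    std::barrier sync(N, []() noexcept { std::puts("iteration complete"); });
    std::vector<std::jthread> workers;
    for (int i = 0; i < N; ++i)
        workers.emplace_back([&sync] {
            for (int iteration = 0; iteration < 3; ++iteration)
            {
                // ... process this worker's share of the data ...
                sync.arrive_and_wait(); // block until every worker finishes this phase
            }
        });
    // std::jthread joins automatically on destruction
}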
I'm writing an Audio class that holds a std::thread for refilling some buffers asynchronously. Say we call the main thread A and the background (class member) thread B. I'm using a std::mutex to block thread B whenever the sound is not playing, so that it doesn't run in the background unnecessarily and doesn't use excess CPU. The mutex is locked by thread A by default, so thread B is blocked; when it's time to play the sound, thread A unlocks the mutex and thread B runs in a loop (by locking and then immediately unlocking it).
The issue comes up when thread B sees that it has reached the end of the file. It can stop playback and clean up buffers and such, but it can't stop its own loop, because thread B can't lock the mutex on thread A's behalf.
Here's the relevant code outline:
class Audio {
private:
// ...
std::thread Thread;
std::mutex PauseMutex; // mutex that blocks Thread, locked in constructor
void ThreadFunc(); // assigned to Thread in constructor
public:
// ...
void Play();
void Stop();
};
void Audio::ThreadFunc() {
// ... (include initial check of mutex here)
while (!this->EndThread) { // Thread-safe flag, only set when Audio is destructed
// ... Check and refill buffers as necessary, etc ...
if (EOF)
Stop();
// Attempt a lock, blocks thread if sound/music is not playing
this->PauseMutex.lock();
this->PauseMutex.unlock();
}
}
void Audio::Play() {
// ...
PauseMutex.unlock(); // unlock mutex so loop in ThreadFunc can start
}
void Audio::Stop() {
// ...
PauseMutex.lock(); // locks mutex to stop loop in ThreadFunc
// ^^ This is the issue here
}
In the above setup, when the background thread sees that it's reached EOF, it would call the class's Stop() function, which supposedly locks the mutex to stop the background thread. This doesn't work because the mutex would have to be locked by the main thread, not the background thread (in this example, it crashes in ThreadFunc because the background thread attempts a lock in its main loop after already locking in Stop()).
At this point the only thing I can think of would be to somehow have the background thread lock the mutex as if it were the main thread, giving the main thread ownership of the mutex... if that's even possible. Is there a way for a thread to transfer ownership of a mutex to another thread? Or is this a design flaw in the setup I've created? (If the latter, are there any reasonable workarounds?) Everything else in the class so far works just as designed.
I'm not even going to pretend to understand how your code is trying to do what it is doing. There is one thing, however, that is evident: you're trying to use a mutex to convey a predicate state change, which is the wrong vehicle to drive on that freeway.
Predicate state change is handled by coupling three things:
Some predicate datum
A mutex to protect the predicate
A condition variable to convey possible change in predicate state.
The Goal
The goal in the below example is to demonstrate how a mutex, a condition variable, and predicate data are used in concert when controlling program flow across multiple threads. It shows examples of using both wait and wait_for condition variable functionality, as well as one way to run a member function as a thread proc.
Following is a simple Player class that toggles between four possible states:
Stopped : The player is not playing, nor paused, nor quitting.
Playing : The player is playing
Paused : The player is paused, and will continue from whence it left off once it resumes Playing.
Quit : The player should stop what it is doing and terminate.
The predicate data is fairly obvious: the state member. It must be protected, which means it cannot be changed or checked except under the protection of the mutex. I've added to this a counter that simply increments while the Playing state is maintained over some period of time. More specifically:
While Playing, each 200ms the counter increments, then dumps some data to the console.
While Paused, counter is not changed, but retains its last value while Playing. This means when resumed it will continue from where it left off.
When Stopped, the counter is reset to zero and a newline is injected into the console output. This means switching back to Playing will start the counter sequence all over again.
Setting the Quit state has no effect on counter, it will be going away along with everything else.
The Code
#include <iostream>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <unistd.h>
using namespace std::chrono_literals;
struct Player
{
private:
std::mutex mtx;
std::condition_variable cv;
std::thread thr;
enum State
{
Stopped,
Paused,
Playing,
Quit
};
State state;
int counter;
void signal_state(State st)
{
std::unique_lock<std::mutex> lock(mtx);
if (st != state)
{
state = st;
cv.notify_one();
}
}
// main player monitor
void monitor()
{
std::unique_lock<std::mutex> lock(mtx);
bool bQuit = false;
while (!bQuit)
{
switch (state)
{
case Playing:
std::cout << ++counter << '.';
cv.wait_for(lock, 200ms, [this](){ return state != Playing; });
break;
case Stopped:
cv.wait(lock, [this]() { return state != Stopped; });
std::cout << '\n';
counter = 0;
break;
case Paused:
cv.wait(lock, [this]() { return state != Paused; });
break;
case Quit:
bQuit = true;
break;
}
}
}
public:
Player()
: state(Stopped)
, counter(0)
{
thr = std::thread(std::bind(&Player::monitor, this));
}
~Player()
{
quit();
thr.join();
}
void stop() { signal_state(Stopped); }
void play() { signal_state(Playing); }
void pause() { signal_state(Paused); }
void quit() { signal_state(Quit); }
};
int main()
{
Player player;
player.play();
sleep(3);
player.pause();
sleep(3);
player.play();
sleep(3);
player.stop();
sleep(3);
player.play();
sleep(3);
}
Output
I can't really demonstrate this. You'll have to run it and see how it works, and I invite you to toy with the states in main() as I have above. Do note, however, that once quit is invoked, none of the other states will be monitored; setting the Quit state shuts down the monitor thread. For what it's worth, a run of the above should look something like this:
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.16.17.18.19.20.21.22.23.24.25.26.27.28.29.30.
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.
with the first set of numbers dumped in two groups (1..15, then 16..30), as a result of playing, then pausing, then playing again. Then a stop is issued, followed by another play for a period of ~3 seconds. After that, the object self-destructs, and in doing so, sets the Quit state, and waits for the monitor to terminate.
Summary
Hopefully you get something out of this. If you find yourself trying to manage predicate state by manually latching and releasing mutexes, chances are you need the condition-variable design pattern to facilitate detecting those changes.
///////////////////////////////////////////////////////////////////////////////
// class CtLockCS - using critical section
class CtLockCS
{
public:
//--------------------------------------------------------------------------
CtLockCS() { ::InitializeCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
~CtLockCS() { ::DeleteCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
bool TryLock() { return ::TryEnterCriticalSection(&m_cs) == TRUE; }
//--------------------------------------------------------------------------
void Lock() { ::EnterCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
void Unlock() { ::LeaveCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
protected:
CRITICAL_SECTION m_cs;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockMX - using mutex
class CtLockMX
{
public:
//--------------------------------------------------------------------------
CtLockMX(const TCHAR* nameMutex = 0)
{ m_mx = ::CreateMutex(0, FALSE, nameMutex); }
//--------------------------------------------------------------------------
~CtLockMX()
{ if (m_mx) { ::CloseHandle(m_mx); m_mx = NULL; } }
//--------------------------------------------------------------------------
bool TryLock()
{ return m_mx ? (::WaitForSingleObject(m_mx, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock()
{ if (m_mx) { ::WaitForSingleObject(m_mx, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{ if (m_mx) { ::ReleaseMutex(m_mx); } }
//--------------------------------------------------------------------------
protected:
HANDLE m_mx;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockSM - using semaphore
class CtLockSM
{
public:
//--------------------------------------------------------------------------
CtLockSM(int maxcnt) { m_sm = ::CreateSemaphore(0, maxcnt, maxcnt, 0); }
//--------------------------------------------------------------------------
~CtLockSM() { ::CloseHandle(m_sm); }
//--------------------------------------------------------------------------
bool TryLock() { return m_sm ? (::WaitForSingleObject(m_sm, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock() { if (m_sm) { ::WaitForSingleObject(m_sm, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{
if (m_sm){
LONG prevcnt = 0;
::ReleaseSemaphore(m_sm, 1, &prevcnt);
}
}
//--------------------------------------------------------------------------
protected:
HANDLE m_sm;
};
I have 4 threads that should enter the same function A.
I want to allow only two of them to run it at a time.
I want to wait for all four and then perform function A.
How should I do it (in C++)?
A condition variable in C++ should suffice here.
This should work for allowing only 2 threads to proceed at once:
// globals
std::condition_variable cv;
std::mutex m;
int active_runners = 0;
int FunctionA()
{
// do work
}
void ThreadFunction()
{
// enter lock and wait until we can grab one of the two runner slots
{
std::unique_lock<std::mutex> lock(m); // enter lock
while (active_runners >= 2) // evaluate the condition under a lock
{
cv.wait(lock); // release the lock and wait for a signal
}
active_runners++; // become one of the runners
} // release lock
FunctionA();
// on return from FunctionA, notify everyone that there's one less runner
{
std::unique_lock<std::mutex> lock(m); // enter lock
active_runners--;
cv.notify_one(); // wake up one thread blocked in "wait"
} // release lock
}
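As an aside, if the requirement really is just "at most two threads inside FunctionA at once", C++20's std::counting_semaphore expresses that limit directly. A minimal sketch (assuming C++20 is available; names are illustrative):
#include <semaphore>
#include <thread>
#include <vector>
std::counting_semaphore<2> slots(2); // at most two concurrent runners
void FunctionA()
{
    // do work
}
void ThreadFunction()
{
    slots.acquire();   // blocks while two other threads already hold a slot
    FunctionA();
    slots.release();   // free the slot for the next waiting thread
}
int main()
{
    std::vector<std::jthread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(ThreadFunction);
}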
My project consists of two threads: the main thread and another thread which handles another window's content. So, when the main thread wants to ask the other window to update itself, it calls the draw function, which is as follows:
void SubApplicationManager::draw() {
// Zero number of applications which has finished the draw counter
{
boost::lock_guard<boost::mutex> lock(SubApplication::draw_mutex);
SubApplication::num_draws = 0;
}
// Draw the sub applications.
for (size_t i = 0; i < m_subApplications.size(); i++)
m_subApplications[i].signal_draw();
// Wait until all the sub applications finish drawing.
while (true){
boost::lock_guard<boost::mutex> lock(SubApplication::draw_mutex);
std::cout << SubApplication::num_draws << std::endl;
if (SubApplication::num_draws >= m_subApplications.size()) break;
}
}
The draw function just signals the other thread that a new task has been received:
void SubApplication::signal_draw() {
task = TASK::TASK_DRAW;
{
boost::lock_guard<boost::mutex> lock(task_received_mutex);
task_received = true;
}
task_start_condition.notify_all();
}
The body of the other thread is as follows. It waits for a task to arrive and then starts processing:
void SubApplication::thread() {
clock_t start_time, last_update;
start_time = last_update = clock();
//! Creates the Sub Application
init();
while (!done) // Loop That Runs While done=FALSE
{
// Draw The Scene. Watch For ESC Key And Quit Messages From DrawGLScene()
if (active) // Program Active?
{
// Wait here, until a update/draw command is received.
boost::unique_lock<boost::mutex> start_lock(task_start_mutex);
while (!task_received){
task_start_condition.wait(start_lock);
}
// Task received is set to false, for next loop.
{
boost::lock_guard<boost::mutex> lock(task_received_mutex);
task_received = false;
}
clock_t frame_start_time = clock();
switch (task){
case TASK_UPDATE:
update();
break;
case TASK_DRAW:
draw();
swapBuffers();
break;
case TASK_CREATE:
create();
break;
default:
break;
}
clock_t frame_end_time = clock();
double task_time = static_cast<float>(frame_end_time - frame_start_time) / CLOCKS_PER_SEC;
}
}
}
The problem is that if I run the code as it is, it never runs the other thread with task = TASK::TASK_DRAW; but if I add a std::cout << "Draw\n"; to the beginning of SubApplication::draw(), it works as it should. I am looking for the reason this happens and for the usual way to fix it.
boost::lock_guard<boost::mutex> lock(task_received_mutex);
task_received = true;
Okay, the task_received_mutex protects task_received.
boost::unique_lock<boost::mutex> start_lock(task_start_mutex);
while (!task_received){
task_start_condition.wait(start_lock);
}
Oops, we're reading task_received without holding the mutex that protects it. What prevents a race where one thread reads task_received while another thread is modifying it? This can immediately lead to a lost wakeup, leaving the thread blocked indefinitely.
Also, you have code that claims to "Wait until all the sub applications finish drawing" but there's no call to any wait function. So it actually spins rather than waiting, which is awful.
As a starter, signal the task_start_condition under the task_start_mutex lock.
Consider locking that mutex during thread creation to avoid obvious races.
Third: it seems you have several mutexes named after "logical tasks" (draw, start). In reality, however, mutexes guard resources, not "logical tasks", so it's good practice to name them after the shared resource they should guard. (In this case I get the impression that a single mutex could be enough, or even better, but we can't tell for sure from the code shown.)
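To illustrate that last point, here is a hedged, self-contained sketch (not the original classes; it uses std::thread and std::condition_variable, which behave the same way as their Boost counterparts for this purpose): a single mutex guards the predicate, the predicate is set and read only under that mutex, and the waiter uses the predicate-taking wait so it neither spins nor misses a wakeup.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
struct TaskSlot
{
    std::mutex task_mutex;                    // guards task_received
    std::condition_variable task_condition;
    bool task_received = false;
    void signal()                             // called by the main thread
    {
        {
            std::lock_guard<std::mutex> lock(task_mutex);
            task_received = true;             // set the predicate under the lock
        }
        task_condition.notify_all();
    }
    void wait_for_task()                      // called by the worker thread
    {
        std::unique_lock<std::mutex> lock(task_mutex);
        task_condition.wait(lock, [this] { return task_received; });
        task_received = false;                // consume the signal while still locked
    }
};
int main()
{
    TaskSlot slot;
    std::thread worker([&slot] { slot.wait_for_task(); std::cout << "drawing\n"; });
    slot.signal();
    worker.join();
}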