How to stop a running thread safely on user request? - c++

I'm in a scenario where I have to terminate a running thread in response to a user action in the GUI. I'm using Qt 4.5.2 on Windows. One way to do that is the following:
class MyThread : public QThread
{
    QMutex mutex;
    bool stop;
public:
    MyThread() : stop(false) {}
    void requestStop()
    {
        QMutexLocker locker(&mutex); // named locker, so the lock is actually held for the scope
        stop = true;
    }
    void run()
    {
        while (counter1--)   // counter1/counter2 and the loop body are elided for brevity
        {
            QMutexLocker locker(&mutex);
            if (stop) return;
            while (counter2--)
            {
            }
        }
    }
};
Please note that the above code is minimal. The run function can take up to 20 seconds to finish, so I want to avoid locking and unlocking the mutex on every loop iteration. Is there a faster way than this method?
Thanks in advance.

It doesn't directly answer your need, but can't you scope your mutex much more tightly?
while (counter1--) {
    {
        QMutexLocker locker(&mutex);
        if (stop) return;
    } // end of locking scope: we won't read it anymore until next time
    while (counter2--)
        ...

Firstly, it doesn't look like you need a mutex around your entire inner loop, just around the if (stop) check, as the others say; but I may be missing some of your app's context to say that definitively. Maybe you need requestStop() to block until the thread exits.
If the reduced mutex scope is adequate for you, then you don't need a mutex at all if you declare your stop variable as volatile. Under VC++, the volatile keyword gives reads and writes of stop acquire/release semantics (a Microsoft extension, not something standard C++ guarantees), which means your requestStop() call is guaranteed to be communicated to your thread and not cached away. With that caveat, the following code should work just fine on multicore processors.
class MyThread : public QThread
{
    volatile bool stop;
public:
    MyThread() : stop(false) {}
    void requestStop()
    {
        stop = true;
    }
    void run()
    {
        while (counter1--)
        {
            if (stop) return;
            while (counter2--)
            {
            }
        }
    }
};

The main problem in your code is that you are holding the lock for much longer than you actually need. You should unlock it after you check the stop variable. That should make it much faster (depending on what is done in the inner loop). A lock-free alternative is to use QAtomicInt.
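For reference, here is a minimal sketch of that lock-free variant with QAtomicInt (the counters and the empty inner loop are the question's placeholders; it assumes Qt 4's QAtomicInt interface, where fetchAndStoreOrdered() writes with full ordering and the object converts implicitly to int):
class MyThread : public QThread
{
    QAtomicInt stop;
public:
    MyThread() : stop(0) {}
    void requestStop()
    {
        stop.fetchAndStoreOrdered(1); // atomic write with full memory ordering
    }
    void run()
    {
        while (counter1--)
        {
            if (stop != 0)            // atomic read via the implicit int conversion
                return;
            while (counter2--)
            {
            }
        }
    }
};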

You could use a critical section instead of a mutex. They have a bit less overhead.
Otherwise you have to use this approach: if you want the worker thread to terminate within some interval of t seconds, it needs to check for a termination event at least once every t seconds.
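For reference, a minimal sketch of the Win32 critical-section calls this refers to (Windows-only; stop, requestStop, and shouldStop are illustrative names, not anything from the question):
#include <windows.h>

CRITICAL_SECTION cs;   // call InitializeCriticalSection(&cs) once at startup
bool stop = false;

void requestStop()
{
    EnterCriticalSection(&cs);
    stop = true;
    LeaveCriticalSection(&cs);
}

bool shouldStop()
{
    EnterCriticalSection(&cs);
    bool s = stop;
    LeaveCriticalSection(&cs);
    return s;          // the worker loop checks this periodically
}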

Why not use an event that can be checked periodically and let the underlying platform worry about whether a mutex is needed or not to handle the event (I assume that Qt has event objects - I'm not all that familiar with it). If you use an event object, the platform will scope any critical section need to handle that event to as short a time period as necessary.
Also, since there's likely not going to be much contention for that mutex (the only time would be when something wants to kill the thread), grabbing and releasing the mutex will likely have little performance impact. In a loop that's taking 20 seconds to run, I'd be surprised if the impact were anything that could even be measured. But maybe I'm wrong - try measuring it by timing the thread with and without the mutex being taken. See if it's something you really need to concern yourself with.
Qt doesn't seem to have the kind of event object I'm talking about (one along the lines of Win32's event objects), but a QSemaphore can be used just as easily:
class MyThread : public QThread
{
    QSemaphore stopFlag;
public:
    MyThread() : stopFlag(1) {}
    void requestStop()
    {
        stopFlag.tryAcquire(); // decrement the flag (if it hasn't been already)
    }
    void run()
    {
        while (counter1--)
        {
            if (!stopFlag.available()) return;
            while (counter2--)
            {
            }
        }
    }
};


Is there any way to wake up multiple threads at the same time in C/C++?

Well, actually, I'm not asking that the threads must "line up" to work; I just want to notify multiple threads, so I'm not looking for a barrier.
It's kind of like condition_variable::notify_all(), but I don't want the threads to wake up one by one, which may cause starvation (also a potential problem with multiple semaphore post operations). It's something like this:
std::atomic_flag flag = ATOMIC_FLAG_INIT;

void example() {
    if (!flag.test_and_set()) {
        // this is the thread to do the job, and notify the others
        do_something();
        notify_others(); // this is what I'm looking for
        flag.clear();
    } else {
        // this is a waiting thread
        wait_till_notification();
        do_some_other_thing();
    }
}

void runner() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 10; ++i) {
        threads.emplace_back([]() {
            while (true) {
                example();
            }
        });
    }
    // ...
}
So how can I do this in C/C++, or maybe with the POSIX API?
Sorry, I didn't make this question clear enough, so I'll add some more explanation.
It's not the thundering herd problem I'm talking about; and yes, it's the lock re-acquisition that bothers me. I tried shared_mutex, but there's still a problem.
Let me split the threads into two groups: one leader thread, which does the writing job, and the rest as worker threads, which do the reading job.
But actually they're all equal in the program; the leader thread is simply the thread that got access to the job first (you can think of it as the shared buffer being underflowed for this thread). Once the job is done, the other workers just need to be notified that they have access.
If a mutex is used here, any thread would block the others.
To give an example: the main thread's job do_something() here is a read, and it blocks the main thread, so the whole system is blocked.
Unfortunately, shared_mutex won't solve this problem:
std::shared_mutex lk; // the shared_mutex I tried

void example() {
    if (!flag.test_and_set()) {
        // leader thread:
        lk.lock();
        do_something();
        lk.unlock();
        flag.clear();
    } else {
        // worker thread
        lk.lock_shared();
        do_some_other_thing();
        lk.unlock_shared();
    }
}

// outer loop
void looper() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 10; ++i) {
        threads.emplace_back([]() {
            while (true) {
                example();
            }
        });
    }
}
In this code, once the leader's job is done, if there is not much to do between this unlock and the next lock (remember they're in a loop), the same thread may grab the lock again, leaving the workers with nothing to do; that's why I called it starvation earlier.
And to explain the blocking in do_something(): I don't want this part of the job to take all my CPU time, even when the leader's job is not ready (no data has arrived to read).
And std::call_once may still not be the answer to this, because, as you can see, the workers must wait until the leader's job has finished.
To summarize, this is actually a one-producer, multi-consumer problem.
But I want the consumers to do their job only when the product is ready for them, and any thread can be the producer or a consumer: whichever thread is first to find that the product has run out becomes the producer, and the others are automatically consumers.
Unfortunately, I'm not sure whether this idea would work or not.
it's kind of like condition_variable::notify_all(), but I don't want the threads to wake up one by one, which may cause starvation
In principle it's not waking up that is serialized, but re-acquiring the lock.
You can avoid that by using std::condition_variable_any with a std::shared_lock - so long as nobody ever gets an exclusive lock on the std::shared_mutex. Alternatively, you can provide your own Lockable type.
Note however that this won't magically allow you to concurrently run more threads than you have cores, or force the scheduler to start them all running in parallel. They'll just be marked as runnable and scheduled as normal - this only fixes the avoidable serialization in your own code.
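For illustration, a minimal sketch of that idea (C++17 for std::shared_mutex; the names ready, leader, and worker are illustrative). Note this variant does briefly take the exclusive lock to publish the flag safely; the point is that the waiters themselves re-acquire only shared locks on wakeup, so they don't serialize against each other:
#include <condition_variable>
#include <mutex>
#include <shared_mutex>

std::shared_mutex m;
std::condition_variable_any cv;
bool ready = false;

void leader() {
    {
        std::unique_lock<std::shared_mutex> lk(m); // brief exclusive lock to publish the flag
        ready = true;
    }
    cv.notify_all(); // every worker becomes runnable at once
}

void worker() {
    std::shared_lock<std::shared_mutex> lk(m); // shared lock: workers re-acquire it concurrently
    cv.wait(lk, [] { return ready; });
    // do_some_other_thing() can now run in all workers in parallel
}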
It sounds like you are looking for call_once
#include <mutex>

void example()
{
    static std::once_flag flag;
    bool i_did_once = false;
    std::call_once(flag, [&i_did_once]() mutable {
        i_did_once = true;
        do_something();
    });
    if (!i_did_once)
        do_some_other_thing();
}
I don't see how your problem relates to starvation. Are you perhaps thinking of the thundering herd problem? That may arise if do_some_other_thing takes a mutex, but in that case you would have to describe your problem in more detail.

Deadlock with boost::condition_variable

I am a bit stuck with a problem, so this is my cry for help.
I have a manager that pushes some events to a queue, which is processed in another thread.
I don't want this thread to be 'busy waiting' for events in the queue, because it may be empty all the time (as well as it may always be full).
Also I need m_bShutdownFlag to stop the thread when needed.
So I wanted to try a condition_variable for this case: if something was pushed to a queue, then the thread starts its work.
Simplified code:
class SomeManager {
public:
    SomeManager()
        : m_bShutdownFlag(false) {}

    void Initialize() {
        boost::recursive_mutex::scoped_lock lock(m_mtxThread);
        boost::thread thread(&SomeManager::ThreadProc, this);
        m_thread.swap(thread);
    }

    void Shutdown() {
        boost::recursive_mutex::scoped_lock lock(m_mtxThread);
        if (m_thread.get_id() != boost::thread::id()) {
            boost::lock_guard<boost::mutex> lockEvents(m_mtxEvents);
            m_bShutdownFlag = true;
            m_condEvents.notify_one();
            m_queue.clear();
        }
    }

    void QueueEvent(const SomeEvent& event) {
        boost::lock_guard<boost::mutex> lockEvents(m_mtxEvents);
        m_queue.push_back(event);
        m_condEvents.notify_one();
    }

private:
    static void ThreadProc(SomeManager* pMgr) {
        while (true) {
            boost::unique_lock<boost::mutex> lockEvents(pMgr->m_mtxEvents);
            while (!(pMgr->m_bShutdownFlag || pMgr->m_queue.empty()))
                pMgr->m_condEvents.wait(lockEvents);
            if (pMgr->m_bShutdownFlag)
                break;
            else
                { /* Thread-safe processing of all the events in m_queue */ }
        }
    }

    boost::thread m_thread;
    boost::recursive_mutex m_mtxThread;
    bool m_bShutdownFlag;
    boost::mutex m_mtxEvents;
    boost::condition_variable m_condEvents;
    SomeThreadSafeQueue m_queue;
};
But when I test it with two (or more) almost simultaneous calls to QueueEvent, it gets stuck forever at the line boost::lock_guard<boost::mutex> lockEvents(m_mtxEvents);.
It seems like the first call never releases lockEvents, so all the rest just keep waiting for it to be released.
Please help me figure out what I am doing wrong and how to fix it.
There are a few things to point out about your code:
You may wish to join your thread after calling shutdown, to ensure that your main thread doesn't finish before your other thread.
m_queue.clear(); on shutdown is done outside of your m_mtxEvents mutex lock, meaning it's not as thread safe as you think it is.
your 'thread safe processing' of the queue should be just taking an item off and then releasing the lock while you go off to process the event. You've not shown that explicitly, but failure to do so will result in the lock preventing items from being added.
The good news about a thread blocking like this, is that you can trivially break and inspect what the other threads are doing, and locate the one that is holding the lock. It might be that as per my comment #3 you're just taking a long time to process an event. On the other hand it may be that you've got a dead lock. In any case, what you need is to use your debugger to establish exactly what you've done wrong, since your sample doesn't have enough in it to demonstrate your problem.
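A sketch of the consumer loop along the lines of points 2 and 3 above (the front()/pop_front() calls and the Process() helper are assumptions about SomeThreadSafeQueue and your processing code; note that the wait predicate loops while the queue is empty and shutdown has not been requested):
void SomeManager::ThreadProc(SomeManager* pMgr) {
    for (;;) {
        SomeEvent event;
        {
            boost::unique_lock<boost::mutex> lockEvents(pMgr->m_mtxEvents);
            // wait while there is nothing to do and no shutdown was requested
            while (!pMgr->m_bShutdownFlag && pMgr->m_queue.empty())
                pMgr->m_condEvents.wait(lockEvents);
            if (pMgr->m_bShutdownFlag)
                return;
            event = pMgr->m_queue.front();   // assumed queue interface
            pMgr->m_queue.pop_front();
        } // lock released here, so QueueEvent() never blocks while an event is processed
        Process(event); // hypothetical processing helper
    }
}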
Inside ThreadProc's while (true) loop, lockEvents is not explicitly unlocked in any case; try putting the lock and the wait inside a scope.

How to block until a condition is met

I wanted to know what the best way is to block a method until a condition becomes true.
Example:
class DoWork
{
    int projects_completed;
public:
    // ...
    void WaitForProjectsCompleted()
    {
        // ----> How do I block until projects_completed == 12?
    }
};
I want it to be used as such
class foo
{
    // ...
    void someMethod()
    {
        DoWork work;
        work.WaitForProjectsCompleted(); // this should block
    }
};
Assuming that there's another thread that's actually going to do something here, an easy thing to use is a std::condition_variable:
std::condition_variable cv;
std::mutex mtx;

void WaitForProjectsCompleted() {
    std::unique_lock<std::mutex> lk(mtx);
    cv.wait(lk, [this] {
        return projects_completed >= 12;
    });
}
Where somewhere else, some other member function might do:
void CompleteProject() {
    {
        std::lock_guard<std::mutex> lk(mtx);
        ++projects_completed;
    }
    cv.notify_one(); // let the waiter know
}
If projects_completed is atomic, you could instead just spin:
void WaitForProjectsCompleted() {
while (projects_completed < 12) ;
}
That would work fine too.
Condition variables are an excellent synchronization primitive, and in my personal experience they are the tool I reach for in 95% of synchronization/threading situations.
If you don't have C++11 available, you can use boost::condition_variable.
In that case you can't conveniently use the wait overload that takes a predicate (because there are no lambdas in C++03), so you absolutely need to remember to loop over your condition check yourself. As explained in the docs:
boost::unique_lock<boost::mutex> lock(mut);
while (projects_completed < 12)
{
    cond.wait(lock); // cond is the boost::condition_variable
}
c.f.:
http://www.boost.org/doc/libs/1_58_0/doc/html/thread/synchronization.html#thread.synchronization.condvar_ref
That's because you get no guarantee that the condition is fulfilled after a notification, particularly because the lock can be acquired by another thread in the interstice between unlock and notify. Also a spurious wake up could happen.
I also wrote an article about it:
http://www.gamedev.net/page/resources/_/technical/general-programming/multithreading-r3048
Also, if you use timed_wait (and I recommend it, as it often mitigates priority inversion), another trap not to fall into concerns the timeout: because of the loop, you cannot use a relative timeout (like "2 seconds"); you need an absolute system time determined before entering the loop.
boost makes it very clean with this technique:
system_time const timeout = get_system_time() + posix_time::seconds(2);
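A minimal sketch of the resulting loop, reusing the earlier example's names (cond is the boost::condition_variable, mut its mutex):
boost::unique_lock<boost::mutex> lock(mut);
boost::system_time const timeout = boost::get_system_time() + boost::posix_time::seconds(2);
while (projects_completed < 12)
{
    if (!cond.timed_wait(lock, timeout))   // false means the absolute deadline passed
    {
        // handle the timeout (give up, log, retry...) instead of looping forever
        break;
    }
}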
About the spin-lock pattern proposed by Barry: I would not recommend it unless you are in a real-time environment, like the PlayStation 3/4 or equivalent, or unless you are sure it won't last for more than a few seconds.
By spinning you waste power, and you don't give the CPU a chance to enter sleep states (cf. Intel SpeedStep).
This also has consequences on fairness and scheduling, as explained on wikipedia:
https://en.wikipedia.org/wiki/Spinlock
Finally, if you don't have Boost: since Windows Vista there are native Win32 functions for this, such as
SleepConditionVariableCS
https://msdn.microsoft.com/en-us/library/windows/desktop/ms686301(v=vs.85).aspx
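For illustration, a minimal sketch with those Vista+ primitives (Windows-only; the variable names mirror the C++11 example above):
#include <windows.h>

CRITICAL_SECTION cs;        // InitializeCriticalSection(&cs) at startup
CONDITION_VARIABLE cv;      // InitializeConditionVariable(&cv) at startup
int projects_completed = 0;

void WaitForProjectsCompleted()
{
    EnterCriticalSection(&cs);
    while (projects_completed < 12)
        SleepConditionVariableCS(&cv, &cs, INFINITE); // atomically releases cs while sleeping
    LeaveCriticalSection(&cs);
}

void CompleteProject()
{
    EnterCriticalSection(&cs);
    ++projects_completed;
    LeaveCriticalSection(&cs);
    WakeConditionVariable(&cv); // or WakeAllConditionVariable for multiple waiters
}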

Simple threaded timer, sanity check please

I've made a very simple threaded timer class, and given the pitfalls around multithreaded code, I would like a sanity check, please. The idea here is to start a thread and then loop continuously, waiting on a condition variable. If the wait times out, the interval was exceeded and we call the callback. If the variable was signalled, the thread should quit and we don't call the callback.
One of the things I'm not sure about is what happens in the destructor with my code, given the thread may be joinable there (just). Can I join a thread in a destructor to make sure it's finished?
Here's the class:
class TimerThreaded
{
public:
    TimerThreaded() {}

    ~TimerThreaded()
    {
        if (MyThread.joinable())
            Stop();
    }

    void Start(std::chrono::milliseconds const & interval, std::function<void(void)> const & callback)
    {
        if (MyThread.joinable())
            Stop();
        MyThread = std::thread([=]()
        {
            for (;;)
            {
                auto locked = std::unique_lock<std::mutex>(MyMutex);
                auto result = MyTerminate.wait_for(locked, interval);
                if (result == std::cv_status::timeout)
                    callback();
                else
                    return;
            }
        });
    }

    void Stop()
    {
        MyTerminate.notify_all();
    }

private:
    std::thread MyThread;
    std::mutex MyMutex;
    std::condition_variable MyTerminate;
};
I suppose a better question might be to ask someone to point me towards a very simple threaded timer, if there's one already available somewhere.
Can I join a thread in a destructor to make sure it's finished?
Not only can you, but it's quite typical to do so. If the thread instance is joinable (i.e. still running) when it's destroyed, std::terminate will be called.
For some reason result is always timeout. It never seems to get signalled and so it never stops. Is that correct? Shouldn't notify_all unblock the wait_for?
It can only be unblocked if the thread happens to be waiting on the condition variable at the time. What you're probably doing is calling Start and then immediately calling Stop before the thread has started running and begun waiting (or possibly while callback is running). In that case, the thread is never notified.
There is another problem with your code. Blocked threads may be spuriously woken up on some implementations even when you don't explicitly call notify_X. That would cause your timer to stop randomly for no apparent reason.
I propose that you add a flag variable that indicates whether Stop has been called. This will fix both of the above problems. This is the typical way to use condition variables. I've even written the code for you:
class TimerThreaded
{
    ...
    MyThread = std::thread([=]()
    {
        for (;;)
        {
            auto locked = std::unique_lock<std::mutex>(MyMutex);
            auto result = MyTerminate.wait_for(locked, interval);
            if (stop_please)
                return;
            if (result == std::cv_status::timeout)
                callback();
        }
    });
    ...

    void Stop()
    {
        {
            std::lock_guard<std::mutex> lock(MyMutex);
            stop_please = true;
        }
        MyTerminate.notify_all();
        MyThread.join();
    }
    ...
private:
    bool stop_please = false;
    ...
With these changes your timer should work, but do realize that "[std::condition_variable::wait_for] may block for longer than timeout_duration due to scheduling or resource contention delays", in the words of cppreference.com.
point me towards a very simple threaded timer, if there's one already available somewhere.
I don't know of a standard c++ solution, but modern operating systems typically provide this kind of functionality or at least pieces that can be used to build it. See timerfd_create on linux for an example.
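For example, a minimal sketch of a periodic timer built on timerfd_create (Linux-only; the 3-second period and the fixed loop count are arbitrary):
#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (fd == -1) { perror("timerfd_create"); return 1; }

    itimerspec spec = {};
    spec.it_value.tv_sec = 3;     // first expiration after 3 s
    spec.it_interval.tv_sec = 3;  // then every 3 s
    timerfd_settime(fd, 0, &spec, nullptr);

    for (int i = 0; i < 3; ++i)
    {
        uint64_t expirations = 0;
        if (read(fd, &expirations, sizeof expirations) == sizeof expirations) // blocks until the timer fires
            printf("timer fired (%llu expirations)\n", (unsigned long long)expirations);
    }
    close(fd);
    return 0;
}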

How can I protect a QThread function so it will not be called again until finished its previous work?

I'm using a QThread and inside its run method I have a timer invoking a function that performs some heavy actions that take some time. Usually more than the interval that triggers the timer (but not always).
What I need is to protect this method so it can be invoked only if it has completed its previous job.
Here is the code:
NotificationThread::NotificationThread(QObject *parent)
    : QThread(parent),
      bWorking(false),
      m_timerInterval(0)
{
}

NotificationThread::~NotificationThread()
{
}

void NotificationThread::fire()
{
    if (!bWorking)
    {
        m_mutex.lock(); // <-- This is not protecting the GetUpdateTime method from being invoked over and over.
        bWorking = true;
        int size = groupsMarkedForUpdate.size();
        if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
        {
            bWorking = false;
            emit UpdateNotifications();
        }
        m_mutex.unlock();
    }
}

void NotificationThread::run()
{
    m_NotificationTimer = new QTimer();
    connect(m_NotificationTimer,
            SIGNAL(timeout()),
            this,
            SLOT(fire()),
            Qt::DirectConnection);
    int interval = val.toInt();
    m_NotificationTimer->setInterval(3000);
    m_NotificationTimer->start();
    QThread::exec();
}

// This method is invoked from the main class
void NotificationThread::Execute(const QStringList batchReqList)
{
    m_batchReqList = batchReqList;
    start();
}
You could always have the thread that needs to run the method connect to an onDone signal that alerts all subscribers when it is complete. Then you should not run into the problems associated with double-checked locking and memory reordering. Maintain the run state in each thread.
I'm assuming you want to protect your thread from calls from another thread. Am I right? If yes, then..
This is what QMutex is for. QMutex gives you an interface to "lock" the thread until it is "unlocked", thus serializing access to the thread. You can choose to unlock the thread until it is done doing its work. But use it at your own risk. QMutex presents its own problems when used incorrectly. Refer to the documentation for more information on this.
But there are many more ways to solve your problem. For example, @Beached suggests a simpler approach: your instance of QThread would emit a signal when it's done. Or better yet, keep a bool isDone inside your thread, which is true when it's done and false when it's not; whenever it's true, it's safe to call the method. But make sure you do not manipulate isDone outside the thread that owns it. I suggest you only manipulate isDone inside your QThread.
Here's the class documentation: link
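For illustration, a minimal sketch of the "skip this tick if the previous one is still busy" idea using QMutex::tryLock (the member names follow the question's code; this is just one assumed way to wire it up, not the only one):
void NotificationThread::fire()
{
    if (!m_mutex.tryLock())   // a previous invocation is still running: skip this tick
        return;
    if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
        emit UpdateNotifications();
    m_mutex.unlock();         // allow the next timer tick to do work again
}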
LOL, I seriously misinterpreted your question. Sorry. It seems you've already done my second suggestion with bWorking.