I have a vector of Timer Objects. Each Timer Object launches an std::thread that simulates a growing period. I am using a Command pattern.
What is happening is that each Timer gets executed one after another, but what I really want is for one to be executed, then once it finishes, the next one, and so on, all without interfering with the main execution of the program.
class Timer
{
public:
    bool _bTimerStarted;
    bool _bTimerCompleted;
    int _timerDuration;

    virtual ~Timer() { }
    virtual void execute() = 0;
    virtual void runTimer() = 0;
    inline void setDuration(int _s) { _timerDuration = _s; };
    inline int getDuration() { return _timerDuration; };
    inline bool isTimerComplete() { return _bTimerCompleted; };
};
class GrowingTimer : public Timer
{
public:
    void execute()
    {
        //std::cout << "Timer execute..." << std::endl;
        _bTimerStarted = false;
        _bTimerCompleted = false;
        //std::thread t1(&GrowingTimer::runTimer, this); //Launch a thread
        //t1.detach();
        runTimer();
    }
    void runTimer()
    {
        //std::cout << "Timer runTimer..." << std::endl;
        _bTimerStarted = true;
        auto start = std::chrono::high_resolution_clock::now();
        std::this_thread::sleep_until(start + std::chrono::seconds(20));
        _bTimerCompleted = true;
        std::cout << "Growing Timer Finished..." << std::endl;
    }
};
class Timers
{
    std::vector<Timer*> _timers;

    struct ExecuteTimer
    {
        void operator()(Timer* _timer) { _timer->execute(); }
    };

public:
    void add_timer(Timer& _timer) { _timers.push_back(&_timer); }

    void execute()
    {
        //std::for_each(_timers.begin(), _timers.end(), ExecuteTimer());
        for (int i = 0; i < _timers.size(); i++)
        {
            Timer* _t = _timers.at(i);
            _t->execute();
            //while ( ! _t->isTimerComplete())
            //{
            //}
        }
    }
};
Executing the above like:
Timers _timer;
GrowingTimer _g, _g1;
_g.setDuration(BROCCOLI::growTimeSeconds);
_g1.setDuration(BROCCOLI::growTimeSeconds);
_timer.add_timer(_g);
_timer.add_timer(_g1);
start_timers();
}
void start_timers()
{
_timer.execute();
}
In Timers::execute I am trying a few different ways to execute the first and not execute the
next until I somehow signal it is done.
UPDATE:
I am now doing this to execute everything:
Timers _timer;
GrowingTimer _g, _g1;
_g.setDuration(BROCCOLI::growTimeSeconds);
_g1.setDuration(BROCCOLI::growTimeSeconds);
_timer.add_timer(_g);
_timer.add_timer(_g1);
//start_timers();
std::thread t1(&Broccoli::start_timers, this); //Launch a thread
t1.detach();
}
void start_timers()
{
_timer.execute();
}
The first timer completes (I see the "completed" cout), but it crashes at _t->execute(); inside the for loop with an EXC_BAD_ACCESS. I added a cout to check the size of the vector: it is 2, so both timers are inside. I do see this in the console:
this Timers * 0xbfffd998
_timers std::__1::vector<Timer *, std::__1::allocator<Timer *> >
If I change the detach() to join(), everything completes without the crash, but it blocks execution of my app until those timers finish.
Why are you using threads here? Timers::execute() calls execute on a timer, then waits for it to finish, then calls execute on the next, and so forth. Why don't you just call the timer function directly in Timers::execute() rather than spawning a thread and then waiting for it?
Threads allow you to write code that executes concurrently. What you want is serial execution, so threads are the wrong tool.
Update: In the updated code you run start_timers on a background thread, which is good. However, by detaching that thread you leave it running past the end of the scope. This means that the timer objects _g and _g1, and even the Timers object _timer, are potentially destroyed before the thread has completed. Given the time-consuming nature of the timers thread, and the fact that you used detach rather than join in order to avoid blocking, this is almost certainly the cause of your problem.
If you run code on a thread then you need to ensure that all objects accessed by that thread have a long-enough lifetime that they are still valid when the thread accesses them. For detached threads this is especially hard to achieve, so detached threads are not recommended.
One option is to create an object containing _timer, _g and _g1 alongside the thread t1, and have its destructor join with the thread. All you then need to do is ensure that the object lives until the point where it is safe to wait for the timers to complete.
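A minimal sketch of that idea (TimerRunner is a hypothetical name, and the surrounding details are assumptions for illustration): the owning object keeps the Timers instance, the GrowingTimer objects and the thread together, and its destructor joins, so everything the background thread touches outlives it.
struct TimerRunner
{
    Timers _timer;
    GrowingTimer _g, _g1;
    std::thread _worker;

    void start()
    {
        _g.setDuration(BROCCOLI::growTimeSeconds);
        _g1.setDuration(BROCCOLI::growTimeSeconds);
        _timer.add_timer(_g);
        _timer.add_timer(_g1);
        _worker = std::thread(&Timers::execute, &_timer); // timers run sequentially, off the main thread
    }
    ~TimerRunner()
    {
        if (_worker.joinable())
            _worker.join(); // blocks only at destruction, when it is safe to wait
    }
};
Keep the TimerRunner alive (for example as a member of Broccoli) until a point where waiting for the timers is acceptable.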
If you don't want to interfere with the execution of the program, you could do something like @Joel said, but also add a thread in the Timers class which would execute the timers in the vector.
You could include a unique_ptr to the thread in GrowingTimer instead of creating it as a local object in execute and calling detach. You can still create the thread in execute, but you would do it with a unique_ptr::reset call.
Then use join instead of isTimerComplete (add a join function to the Timer base class). The isTimerComplete polling mechanism will be extremely inefficient because it will basically use up that thread's entire time slice continually polling, whereas join will block until the other thread is complete.
An example of join:
#include <iostream>
#include <chrono>
#include <thread>

using namespace std;

void threadMain()
{
    this_thread::sleep_for(chrono::seconds(5));
    cout << "Done sleeping\n";
}

int main()
{
    thread t(threadMain);
    for (int i = 0; i < 10; ++i)
    {
        cout << i << "\n";
    }
    t.join();
    cout << "Press Enter to exit\n";
    cin.get();
    return 0;
}
Note how the main thread keeps running while the other thread does its thing. Note that Anthony's answer is right in that it doesn't really seem like you need more than one background thread that just executes tasks sequentially rather than starting a thread and waiting for it to finish before starting a new one.
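Applied to the question's classes, the suggestion above might look roughly like this (only the relevant parts are shown; as suggested, join() is added to the Timer base class as a virtual function):
class GrowingTimer : public Timer
{
    std::unique_ptr<std::thread> _thread;
public:
    void execute() override
    {
        _bTimerStarted = false;
        _bTimerCompleted = false;
        _thread.reset(new std::thread(&GrowingTimer::runTimer, this)); // launch, but keep ownership
    }
    void join() override
    {
        if (_thread && _thread->joinable())
            _thread->join(); // blocks until runTimer finishes, no polling needed
    }
    // runTimer() as in the question
};
Timers::execute() would then call execute() followed by join() on each timer, giving sequential execution without a busy wait (and, per the other answer, Timers::execute() itself should run on a thread that is eventually joined, not detached).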
Related
I have been working on an idea for a system where I can have many workers that are triggered on a regular basis by a central timer class. The part I'm concerned about here is a TriggeredWorker which, in a loop, uses the mutex and condition variable approach to wait to be told to do work. It has a trigger method that is called (by a different thread) to trigger work to be done. It is an abstract class that has to be subclassed for the actual work method to be implemented.
I have a test that shows that this mechanism works. However, as I increase the load by reducing the trigger interval, the test starts to fail. When I delay 20 microseconds between triggers, the test is 100% reliable. As I reduce this to 1 microsecond, I start to get failures: the count of work performed drops from 1000 (expected) to values like 986, 933, 999, etc.
My questions are: (1) what is it that is going wrong, and how can I capture what is going wrong so I can report it or do something about it? And (2) is there some better approach that I could use? I have to admit that my experience with C++ is limited to the last 3 months, although I have worked with other languages for several years.
Many thanks for reading...
Here are the key bits of code:
Triggered worker header file:
#ifndef TIMER_TRIGGERED_WORKER_H
#define TIMER_TRIGGERED_WORKER_H

#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <plog/Log.h>

class TriggeredWorker {
private:
    std::mutex mutex_;
    std::condition_variable condVar_;
    std::atomic<bool> running_{false};
    std::atomic<bool> ready_{false};

    void workLoop();

protected:
    virtual void work() {};

public:
    void start();
    void stop();
    void trigger();
};

#endif //TIMER_TRIGGERED_WORKER_H
Triggered worker implementation:
#include "TriggeredWorker.h"
void TriggeredWorker::workLoop() {
    PLOGD << "workLoop started...";
    while (true) {
        std::unique_lock<std::mutex> lock(mutex_);
        condVar_.wait(lock, [this] {
            bool ready = this->ready_;
            bool running = this->running_;
            return ready || !running; });
        this->ready_ = false;
        if (!this->running_) {
            break;
        }
        PLOGD << "Calling work()...";
        work();
        lock.unlock();
        condVar_.notify_one();
    }
    PLOGD << "Worker thread completed.";
}

void TriggeredWorker::start() {
    PLOGD << "Worker start...";
    this->running_ = true;
    auto thread = std::thread(&TriggeredWorker::workLoop, this);
    thread.detach();
}

void TriggeredWorker::stop() {
    PLOGD << "Worker stop.";
    this->running_ = false;
}

void TriggeredWorker::trigger() {
    PLOGD << "Trigger.";
    std::unique_lock<std::mutex> lock(mutex_);
    ready_ = true;
    lock.unlock();
    condVar_.notify_one();
}
and the test:
#include "catch.hpp"
#include "TriggeredWorker.h"
#include <thread>
TEST_CASE("Simple worker performs work when triggered") {
static std::atomic<int> twt_count{0};
class SimpleTriggeredWorker : public TriggeredWorker {
protected:
void work() override {
PLOGD << "Incrementing counter.";
twt_count.fetch_add(1);
}
};
SimpleTriggeredWorker worker;
worker.start();
for (int i = 0; i < 1000; i++) {
worker.trigger();
std::this_thread::sleep_for(std::chrono::microseconds(20));
}
std::this_thread::sleep_for(std::chrono::seconds(1));
CHECK(twt_count == 1000);
std::this_thread::sleep_for(std::chrono::seconds(1));
worker.stop();
}
What happens when worker.trigger() is called twice before workLoop acquires the lock? You lose one of those "triggers". A smaller time gap means a higher probability of test failure, because of the higher probability of multiple consecutive worker.trigger() calls before workLoop wakes up. Note that nothing guarantees that workLoop will acquire the lock after one worker.trigger() but before another worker.trigger() happens, even when those calls happen one after another (i.e. not in parallel). This is governed by the OS scheduler and we have no control over it.
Anyway, the core problem is that setting ready_ = true twice loses information, unlike incrementing an integer twice. So the simplest solution is to replace the bool with an int, increment it on trigger, and decrement it in the worker with an == 0 check in the wait predicate. This construct is also known as a semaphore. A more advanced (potentially better, especially when you need to pass some data to the worker) approach is to use a (bounded?) thread-safe queue. That depends on what exactly you are trying to achieve.
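A minimal sketch of the counter idea applied to the question's class (pending_ is an assumed name, and only the changed parts are shown):
class TriggeredWorker {
    // ...
    int pending_ = 0;        // protected by mutex_, replaces ready_
    bool running_ = false;   // non-atomic, also protected by mutex_ (see BTW 1 below)

    void workLoop() {
        while (true) {
            std::unique_lock<std::mutex> lock(mutex_);
            condVar_.wait(lock, [this] { return pending_ > 0 || !running_; });
            if (!running_) break;
            --pending_;      // consume exactly one trigger; bursts are not lost
            work();
        }
    }

    void trigger() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            ++pending_;      // every trigger is counted, even if the worker is busy
        }
        condVar_.notify_one();
    }
};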
BTW 1: all your reads and updates, except for the stop() function (and start(), but that isn't really relevant), happen under the lock. I suggest you fix stop() to take the lock as well (it is rarely called anyway) and turn the atomics into non-atomics. There is unnecessary overhead from the atomics at the moment.
BTW 2: I suggest not using thread.detach(). You should store the std::thread object on TriggeredWorker and add a destructor that stops and joins. The worker thread and the object are not independent beings, so without detach() your code is safer (one should never die without the other).
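A sketch of that ownership change (the destructor body is an assumption about how the stop-and-join could be wired up):
class TriggeredWorker {
    std::thread worker_;
    // ... mutex_, condVar_, running_ and the rest as before ...
public:
    void start() {
        running_ = true;
        worker_ = std::thread(&TriggeredWorker::workLoop, this); // owned, not detached
    }
    ~TriggeredWorker() {
        stop();                       // stop() should also notify condVar_ so the loop wakes up
        if (worker_.joinable())
            worker_.join();           // the object never dies while its thread is still running
    }
};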
In a project we're creating multiple state machines in a wrapper class. Each wrapper runs in its own thread. When the job is done, the wrapper class destructor is called, and in there we would like to stop the thread.
However, if we use thread.join(), we get a deadlock (since it tries to join itself). We could somehow signal another thread, but that seems a bit messy.
Is there any way to properly terminate the thread in which a class is running, upon object destruction?
thread.join() does not stop a thread. It waits for the thread to finish and then returns. In order to stop a thread you have to have some way of telling the thread to stop, and the thread has to check to see whether it's time to stop. One way to do that is with an atomic bool:
class my_thread {
public:
    my_thread() : done(false) { }
    ~my_thread() {
        done = true;                 // tell the loop to stop
        if (thr.joinable())
            thr.join();              // then wait for it to finish
    }
    void run() { thr = std::thread(&my_thread::do_it, this); }
private:
    void do_it() { while (!done) { /* ... */ } }
    std::thread thr;
    std::atomic<bool> done;
};
That's off the top of my head; not compiled, not tested.
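Usage would then be roughly (a hypothetical snippet, not part of the original answer):
int main() {
    my_thread worker;
    worker.run();      // do_it() loops on another thread
    // ... main thread does its own work ...
}                      // ~my_thread sets done and joins, so the loop stops cleanly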
So I have this class:
class foo {
public:
    foo() { };
    void me1() const {
        while (1) {
            std::lock_guard<std::mutex> ldock(m);
            std::cout << 0;
        }
    }
    void me2() const {
        while (1) {
            std::lock_guard<std::mutex> ldock(m);
            std::cout << 1;
        }
    }
private:
    mutable std::mutex m;   // mutable so the const members can lock it
};
Now I want to run this two methods in some two different threads, I do it like this:
int main() {
    foo myfoo;
    std::thread firstThread(&foo::me1, &myfoo);
    std::thread secondThread(&foo::me2, &myfoo);
    firstThread.detach();
    secondThread.detach();
    //while(1) { }
    return 0;
}
I don't want to wait for either of these two methods to finish; they will run simultaneously until the main thread is killed.
Is it OK to have some kind of infinite loop at the end of the main thread (like the commented-out while(1) {})?
Or should I call some kind of sleep function?
You need to define an exit condition in your foo::me1() and foo::me2(). If you don't know how to do that, then
sleep(/*number of seconds you want your program to run*/);
will do just fine.
If you define a termination clause, then the brute-force approach would be to expose something like an atomic:
class foo {
public:
    std::atomic<bool> me1done{false};
    std::atomic<bool> me2done{false};

    foo() { };
    void me1() {
        while (/* need exit condition here */) {
            std::lock_guard<std::mutex> ldock(m);
            std::cout << 0;
        }
        me1done = true;
    }
    void me2() {
        while (/* need exit condition here */) {
            std::lock_guard<std::mutex> ldock(m);
            std::cout << 1;
        }
        me2done = true;
    }
private:
    std::mutex m;
};
and then you can check in main by polling every x seconds:
int main(void)
{
    // start your threads and detach
    foo myfoo;
    std::thread firstThread(&foo::me1, &myfoo);
    std::thread secondThread(&foo::me2, &myfoo);
    firstThread.detach();
    secondThread.detach();

    while (not (myfoo.me1done and myfoo.me2done))
    {
        sleep( /* some time */ );
    }
    return 0;
}
If you want to be more elaborate you will have to work with condition variables.
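For reference, a rough sketch of the condition-variable version (a hypothetical rework of the polling example above, not code from the question): the workers signal completion and main sleeps until woken, instead of polling.
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex doneMtx;
std::condition_variable doneCv;
int doneCount = 0;                 // guarded by doneMtx

void worker()
{
    // ... loop until the exit condition holds ...
    {
        std::lock_guard<std::mutex> lock(doneMtx);
        ++doneCount;
    }
    doneCv.notify_one();           // wake main if it is waiting
}

int main()
{
    std::thread t1(worker), t2(worker);
    {
        std::unique_lock<std::mutex> lock(doneMtx);
        doneCv.wait(lock, [] { return doneCount == 2; });  // no busy loop, no fixed sleep
    }
    t1.join();
    t2.join();
}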
If you want to determine whether the two threads have finished, your best bet is actually not to detach() the threads but rather to join() them before exiting the main thread. That is, you'd kick off both threads, they'll run concurrently, and once kicked off you simply join() each. Of course, that assumes the threads would terminate.
Having a detach()ed thread effectively means you can never be sure whether it has finished. That is rarely useful, and I consider it a mistake that detach() was added to std::thread. However, even with detach()ed threads you can recognize when an objective is achieved without a busy wait. To that end you'd set up suitable variables indicating completion or progress and have them protected by a std::mutex. The main thread would then wait() on a std::condition_variable which gets notify_one()d by the respective thread upon each completion/progress update, done at reasonable intervals. Once all threads have indicated that they are done, or have achieved a suitable objective, the main() thread can finish.
Using a timer alone is generally not a good approach. Signalling between threads is typically preferable and tends to create a more responsive system. You can still use a timed version of wait() (i.e., wait_until() or wait_for()), e.g., to alert upon suspecting a somehow hung or timed-out thread.
Empty infinite loops such as while(1) { } are undefined behavior.
Adding a sleep inside the loop is OK, though.
To run foo::me1/foo::me2 forever, you have several other choices:
int main()
{
    foo myfoo;
    std::thread firstThread(&foo::me1, &myfoo);
    std::thread secondThread(&foo::me2, &myfoo);

    firstThread.join();  // wait infinitely as it never ends.
    secondThread.join(); // and so never reached
}
or simply use the main thread to do one of the jobs:
int main()
{
    foo myfoo;
    std::thread firstThread(&foo::me1, &myfoo);

    myfoo.me2();         // works infinitely as it never ends.
    firstThread.join();  // and so never reached
}
I have a program which spawns multiple threads, each of which executes a long-running task. The main thread then waits for all worker threads to join, collects results, and exits.
If an error occurs in one of the workers, I want the remaining workers to stop gracefully, so that the main thread can exit shortly afterwards.
My question is how best to do this, when the implementation of the long-running task is provided by a library whose code I cannot modify.
Here is a simple sketch of the system, with no error handling:
void threadFunc()
{
    // Do long-running stuff
}

void mainFunc()
{
    std::vector<std::thread> threads;

    for (int i = 0; i < 3; ++i) {
        threads.push_back(std::thread(&threadFunc));
    }

    for (auto &t : threads) {
        t.join();
    }
}
If the long-running function executes a loop and I have access to the code, then
execution can be aborted simply by checking a shared "keep on running" flag at the top of each iteration.
std::mutex mutex;
bool error;

void threadFunc()
{
    try {
        for (...) {
            {
                std::unique_lock<std::mutex> lock(mutex);
                if (error) {
                    break;
                }
            }
        }
    } catch (std::exception &) {
        std::unique_lock<std::mutex> lock(mutex);
        error = true;
    }
}
Now consider the case when the long-running operation is provided by a library:
std::mutex mutex;
bool error;

class Task
{
public:
    // Blocks until completion, error, or stop() is called
    void run();
    void stop();
};
void threadFunc(Task &task)
{
    try {
        task.run();
    } catch (std::exception &) {
        std::unique_lock<std::mutex> lock(mutex);
        error = true;
    }
}
In this case, the main thread has to handle the error, and call stop() on
the still-running tasks. As such, it cannot simply wait for each worker to
join() as in the original implementation.
The approach I have used so far is to share the following structure between
the main thread and each worker:
struct SharedData
{
    std::mutex mutex;
    std::condition_variable condVar;
    bool error;
    int running;
};
When a worker completes successfully, it decrements the running count. If
an exception is caught, the worker sets the error flag. In both cases, it
then calls condVar.notify_one().
The main thread then waits on the condition variable, waking up if either
error is set or running reaches zero. On waking up, the main thread
calls stop() on all tasks if error has been set.
This approach works, but I feel there should be a cleaner solution using some
of the higher-level primitives in the standard concurrency library. Can
anyone suggest an improved implementation?
Here is the complete code for my current solution:
// main.cpp
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>
#include "utils.h"
// Class which encapsulates long-running task, and provides a mechanism for aborting it
class Task
{
public:
Task(int tidx, bool fail)
: tidx(tidx)
, fail(fail)
, m_run(true)
{
}
void run()
{
static const int NUM_ITERATIONS = 10;
for (int iter = 0; iter < NUM_ITERATIONS; ++iter) {
{
std::unique_lock<std::mutex> lock(m_mutex);
if (!m_run) {
out() << "thread " << tidx << " aborting";
break;
}
}
out() << "thread " << tidx << " iter " << iter;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
if (fail) {
throw std::exception();
}
}
}
void stop()
{
std::unique_lock<std::mutex> lock(m_mutex);
m_run = false;
}
const int tidx;
const bool fail;
private:
std::mutex m_mutex;
bool m_run;
};
// Data shared between all threads
struct SharedData
{
std::mutex mutex;
std::condition_variable condVar;
bool error;
int running;
SharedData(int count)
: error(false)
, running(count)
{
}
};
void threadFunc(Task &task, SharedData &shared)
{
try {
out() << "thread " << task.tidx << " starting";
task.run(); // Blocks until task completes or is aborted by main thread
out() << "thread " << task.tidx << " ended";
} catch (std::exception &) {
out() << "thread " << task.tidx << " failed";
std::unique_lock<std::mutex> lock(shared.mutex);
shared.error = true;
}
{
std::unique_lock<std::mutex> lock(shared.mutex);
--shared.running;
}
shared.condVar.notify_one();
}
int main(int argc, char **argv)
{
static const int NUM_THREADS = 3;
std::vector<std::unique_ptr<Task>> tasks(NUM_THREADS);
std::vector<std::thread> threads(NUM_THREADS);
SharedData shared(NUM_THREADS);
for (int tidx = 0; tidx < NUM_THREADS; ++tidx) {
const bool fail = (tidx == 1);
tasks[tidx] = std::make_unique<Task>(tidx, fail);
threads[tidx] = std::thread(&threadFunc, std::ref(*tasks[tidx]), std::ref(shared));
}
{
std::unique_lock<std::mutex> lock(shared.mutex);
// Wake up when either all tasks have completed, or any one has failed
shared.condVar.wait(lock, [&shared](){
return shared.error || !shared.running;
});
if (shared.error) {
out() << "error occurred - terminating remaining tasks";
for (auto &t : tasks) {
t->stop();
}
}
}
for (int tidx = 0; tidx < NUM_THREADS; ++tidx) {
out() << "waiting for thread " << tidx << " to join";
threads[tidx].join();
out() << "thread " << tidx << " joined";
}
out() << "program complete";
return 0;
}
Some utility functions are defined here:
// utils.h
#include <iostream>
#include <mutex>
#include <thread>
#ifndef UTILS_H
#define UTILS_H
#if __cplusplus <= 201103L
// Backport std::make_unique from C++14
#include <memory>
namespace std {
template<typename T, typename ...Args>
std::unique_ptr<T> make_unique(
Args&& ...args)
{
return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
} // namespace std
#endif // __cplusplus <= 201103L
// Thread-safe wrapper around std::cout
class ThreadSafeStdOut
{
public:
ThreadSafeStdOut()
: m_lock(m_mutex)
{
}
~ThreadSafeStdOut()
{
std::cout << std::endl;
}
template <typename T>
ThreadSafeStdOut &operator<<(const T &obj)
{
std::cout << obj;
return *this;
}
private:
static std::mutex m_mutex;
std::unique_lock<std::mutex> m_lock;
};
std::mutex ThreadSafeStdOut::m_mutex;
// Convenience function for performing thread-safe output
ThreadSafeStdOut out()
{
return ThreadSafeStdOut();
}
#endif // UTILS_H
I've been thinking about your situation for some time, and this may be of some help to you. You could try a couple of different methods to achieve your goal. There are 2-3 options that may be of use, or a combination of all three. I will at minimum show the first option, for I'm still learning and trying to master the concepts of template specializations as well as lambdas.
Using a Manager Class
Using Template Specialization Encapsulation
Using Lambdas.
Pseudo code of a Manager Class would look something like this:
class ThreadManager {
private:
    std::unique_ptr<MainThread> mainThread_;
    std::list<std::shared_ptr<WorkerThread>> lWorkers_;              // List to hold finished workers
    std::queue<std::shared_ptr<WorkerThread>> qWorkers_;             // Queue to hold inactive and waiting threads.
    std::map<unsigned, std::shared_ptr<WorkerThread>> mThreadIds_;   // Map to associate a WorkerThread with an ID value.
    std::map<unsigned, bool> mFinishedThreads_;                      // A map to keep track of finished and unfinished threads.
    bool threadError_;                                               // Not needed if using exception handling
public:
    explicit ThreadManager( const MainThread& main_thread );

    void shutdownThread( const unsigned& threadId );
    void shutdownAllThreads();

    void addWorker( const WorkerThread& worker_thread );
    bool isThreadDone( const unsigned& threadId );

    void spawnMainThread() const;   // Method to start main thread's work.
    void spawnWorkerThread( unsigned threadId, bool& error );

    bool getThreadError( unsigned& threadId );  // Returns true if a thread encountered an error, and passes back the ID of that thread.
};
Only for demonstration purposes did I use a bool value to determine whether a thread failed, for simplicity of the structure; of course this can be substituted to your liking if you prefer to use exceptions, invalid unsigned values, etc.
Using a class of this sort would look something like this. Also note that a class of this type would be better as a Singleton, since you wouldn't want more than one manager class while you are working with shared pointers.
SomeClass::SomeClass( ... ) {
    // This class could contain a private static smart pointer of this Manager Class
    // Initialize the smart pointer giving it new memory for the Manager Class and by passing it a pointer of the Main Thread object
    threadManager_ = new ThreadManager( main_thread ); // Wouldn't actually use raw pointers here unless you had a need to; just shown for simplicity
}

SomeClass::addThreads( ... ) {
    for ( unsigned u = 1; u <= threadCount; u++ ) {
        threadManager_->addWorker( some_worker_thread );
    }
}

SomeClass::someFunctionThatSpawnsThreads( ... ) {
    threadManager_->spawnMainThread();
    bool error = false;
    for ( unsigned u = 1; u <= threadCount; u++ ) {
        threadManager_->spawnWorkerThread( u, error );
        if ( error ) { // This thread failed to start, shut down all threads
            threadManager_->shutdownAllThreads();
        }
    }

    // If all threads spawn successfully we can do a while loop here to listen if one fails.
    unsigned threadId;
    while ( threadManager_->getThreadError( threadId ) ) {
        // If the function passed to this while loop returns true and we end up here, it will pass the id value of the failed thread.
        // We can now go through a for loop and stop all active threads.
        for ( unsigned u = threadId + 1; u <= threadCount; u++ ) {
            threadManager_->shutdownThread( u );
        }
        // We have successfully shut down all threads
        break;
    }
}
I like the design of a manager class since I have used them in other projects, and they come in handy quite often, especially when working with a code base that contains many resources, such as a working game engine that has assets like sprites, textures, audio files, maps, game items, etc. Using a manager class helps to keep track of and maintain all of those assets. The same concept can be applied to "managing" active, inactive and waiting threads, handling and shutting down all threads properly. I would recommend using an ExceptionHandler if your code base and libraries support exceptions and thread-safe exception handling, instead of passing around bools for errors. Also, having a Logger class is good: it can write to a log file and/or a console window to give an explicit message of what function the exception was thrown in and what caused it, where a log message might look like this:
Exception Thrown: someFunctionNamedThis in ThisFile on Line# (x)
threadID 021342 failed to execute.
This way you can look at the log file and find out very quickly which thread is causing the exception, instead of relying on bool variables passed around.
The implementation of the long-running task is provided by a library whose code I cannot modify.
That means you have no way to synchronize the job done by the worker threads.
If an error occurs in one of the workers,
Let's suppose that you can really detect worker errors; some of them can be easily detected if reported by the library used, others cannot, e.g.:
the library code loops.
the library code prematurely exits with an uncaught exception.
I want the remaining workers to stop **gracefully**
That's just not possible.
The best you can do is write a thread manager that checks worker thread status; if an error condition is detected, it just (ungracefully) "kills" all the worker threads and exits.
You should also consider detecting a looping worker thread (by timeout) and offering the user the option to kill it or continue waiting for the process to finish.
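For illustration, a small sketch of that timeout check (hypothetical names; it assumes the worker can at least flag its own completion): the manager waits on a condition variable with a deadline and treats expiry as a hung worker.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool finished = false;             // set by the worker when (and if) it completes

void worker()
{
    // stand-in for the library call that might loop forever
    std::this_thread::sleep_for(std::chrono::seconds(2));
    {
        std::lock_guard<std::mutex> lock(mtx);
        finished = true;
    }
    cv.notify_one();
}

int main()
{
    std::thread t(worker);
    std::unique_lock<std::mutex> lock(mtx);
    if (!cv.wait_for(lock, std::chrono::seconds(10), [] { return finished; })) {
        std::cout << "worker appears hung; ask the user whether to kill it or keep waiting\n";
    }
    lock.unlock();
    t.join();                      // in the hung case you would terminate instead of joining
}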
Your problem is that the long running function is not your code, and you say you cannot modify it. Consequently you cannot make it pay any attention whatsoever to any kind of external synchronisation primitive (condition variables, semaphores, mutexes, pipes, etc), unless the library developer has done that for you.
Therefore your only option is to do something that wrestles control away from any code no matter what it's doing. This is what signals do. For that, you're going to have to use pthread_kill(), or whatever the equivalent is these days.
The pattern would be that
The thread that detects an error needs to communicate that error back to the main thread in some manner.
The main thread then needs to call pthread_kill() for all the other remaining threads. Don't be confused by the name - pthread_kill() is simply a way of delivering an arbitrary signal to a thread. Note that signals like STOP, CONTINUE and TERMINATE are process-wide even if raised with pthread_kill(), not thread specific so don't use those.
In each of those threads you'll need a signal handler. On delivery of the signal to a thread the execution path in that thread will jump to the handler no matter what the long running function was doing.
You are now back in (limited) control, and can (probably, well, maybe) do some limited cleanup and terminate the thread.
In the meantime the main thread will have been calling pthread_join() on all the threads it's signaled, and those will now return.
My thoughts:
This is a really ugly way of doing it (and signals / pthreads are notoriously difficult to get right and I'm no expert), but I don't really see what other choice you have.
It'll be a long way from looking 'graceful' in source code, though the end user experience will be OK.
You will be aborting execution part way through running that library function, so if there's any clean up it would normally do (e.g. freeing up memory it has allocated) that won't get done and you'll have a memory leak. Running under something like valgrind is a way of working out if this is happening.
The only way of getting the library function to clean up (if it needs it) will be for your signal handler to return control to the function and letting it run to completion, just what you don't want to do.
And of course, this won't work on Windows (no pthreads, at least none worth speaking of, though there may be an equivalent mechanism).
Really the best way is going to be to re-implement (if at all possible) that library function.
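For what it's worth, a bare-bones POSIX illustration of the pthread_kill() pattern (hypothetical code, pthreads only, and it cheats by using an interruptible sleep() as a stand-in for the library call):
#include <pthread.h>
#include <signal.h>
#include <unistd.h>
#include <cstdio>

static volatile sig_atomic_t g_stop = 0;

extern "C" void onStop(int) { g_stop = 1; }  // async-signal-safe: only sets a flag

static void* longRunning(void*)
{
    while (!g_stop)
        sleep(10);                 // a blocking call; the signal makes it return early
    std::puts("worker: interrupted, exiting");
    return nullptr;
}

int main()
{
    struct sigaction sa {};
    sa.sa_handler = onStop;        // no SA_RESTART, so blocking calls are interrupted
    sigaction(SIGUSR1, &sa, nullptr);

    pthread_t worker;
    pthread_create(&worker, nullptr, longRunning, nullptr);

    sleep(1);                      // pretend an error was reported elsewhere
    pthread_kill(worker, SIGUSR1); // deliver the signal to that specific thread
    pthread_join(worker, nullptr);
}
In real code the library function will not be polling a flag like this, which is exactly why the approach is so hard to make graceful.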
I am having an issue with terminating worker threads from the main thread. So far, each method I have tried leads either to a race condition or to deadlock.
The worker threads are stored in an inner class inside a class called ThreadPool; ThreadPool maintains a vector of these WorkerThreads using unique_ptr.
Here is the header for my ThreadPool:
class ThreadPool
{
public:
typedef void (*pFunc)(const wpath&, const Args&, Global::mFile_t&, std::mutex&, std::mutex&); // function to point to
private:
class WorkerThread
{
private:
ThreadPool* const _thisPool; // reference enclosing class
// pointers to arguments
wpath _pPath; // member argument that will be modifyable to running thread
Args * _pArgs;
Global::mFile_t * _pMap;
// flags for thread management
bool _terminate; // terminate thread
bool _busy; // is thread busy?
bool _isRunning;
// thread management members
std::mutex _threadMtx;
std::condition_variable _threadCond;
std::thread _thisThread;
// exception ptr
std::exception_ptr _ex;
// private copy constructor
WorkerThread(const WorkerThread&): _thisPool(nullptr) {}
public:
WorkerThread(ThreadPool&, Args&, Global::mFile_t&);
~WorkerThread();
void setPath(const wpath); // sets a new task
void terminate(); // calls terminate on thread
bool busy() const; // returns whether thread is busy doing task
bool isRunning() const; // returns whether thread is still running
void join(); // thread join wrapper
std::exception_ptr exception() const;
// actual worker thread running tasks
void thisWorkerThread();
};
// thread specific information
DWORD _numProcs; // number of processors on system
unsigned _numThreads; // number of viable threads
std::vector<std::unique_ptr<WorkerThread>> _vThreads; // stores thread pointers - workaround for no move constructor in WorkerThread
pFunc _task; // the task threads will call
// synchronization members
unsigned _barrierLimit; // limit before barrier goes down
std::mutex _barrierMtx; // mutex for barrier
std::condition_variable _barrierCond; // condition for barrier
std::mutex _coutMtx;
public:
// argument mutex
std::mutex matchesMap_mtx;
std::mutex coutMatch_mtx;
ThreadPool(pFunc f);
// wake a thread and pass it a new parameter to work on
void callThread(const wpath&);
// barrier synchronization
void synchronizeStartingThreads();
// starts and synchronizes all threads in a sleep state
void startThreads(Args&, Global::mFile_t&);
// terminate threads
void terminateThreads();
private:
};
So far the real issue I am having is that calling terminateThreads() from the main thread causes deadlock or a race condition.
When I set my _terminate flag to true, there is a chance that main will already have exited scope and destroyed all mutexes before the thread has had a chance to wake up and terminate. In fact, I have gotten this crash quite a few times (the console window displays: mutex destroyed while busy).
If I add a thread.join() after I notify_all() the thread, there is a chance the thread will terminate before the join occurs, causing an infinite deadlock, as joining a terminated thread suspends the program indefinitely.
If I detach, I get the same issue as above, but it causes a program crash.
If I instead use a while (WorkerThread.isRunning()) Sleep(0); loop,
the program may crash because the main thread may exit before the WorkerThread reaches that last closing brace.
I am not sure what else to do to halt the main thread until all worker threads have terminated safely. Also, even with try-catch in the thread and in main, no exceptions are being caught (everything I have tried leads to a program crash).
What can I do to halt the main thread until worker threads have finished?
Here are the implementations of the primary functions:
Terminate an individual worker thread
void ThreadPool::WorkerThread::terminate()
{
    _terminate = true;
    _threadCond.notify_all();
    _thisThread.join();
}
The actual ThreadLoop
void ThreadPool::WorkerThread::thisWorkerThread()
{
_thisPool->synchronizeStartingThreads();
try
{
while (!_terminate)
{
{
_thisPool->_coutMtx.lock();
std::cout << std::this_thread::get_id() << " Sleeping..." << std::endl;
_thisPool->_coutMtx.unlock();
_busy = false;
std::unique_lock<std::mutex> lock(_threadMtx);
_threadCond.wait(lock);
}
_thisPool->_coutMtx.lock();
std::cout << std::this_thread::get_id() << " Awake..." << std::endl;
_thisPool->_coutMtx.unlock();
if(_terminate)
break;
_thisPool->_task(_pPath, *_pArgs, *_pMap, _thisPool->coutMatch_mtx, _thisPool->matchesMap_mtx);
_thisPool->_coutMtx.lock();
std::cout << std::this_thread::get_id() << " Finished Task..." << std::endl;
_thisPool->_coutMtx.unlock();
}
_thisPool->_coutMtx.lock();
std::cout << std::this_thread::get_id() << " Terminating" << std::endl;
_thisPool->_coutMtx.unlock();
}
catch (const std::exception&)
{
_ex = std::current_exception();
}
_isRunning = false;
}
Terminate All Worker Threads
void ThreadPool::terminateThreads()
{
    for (std::vector<std::unique_ptr<WorkerThread>>::iterator it = _vThreads.begin(); it != _vThreads.end(); ++it)
    {
        it->get()->terminate();
        //it->get()->_thisThread.detach();

        // if thread threw an exception, rethrow it in main
        if (it->get()->exception() != nullptr)
            std::rethrow_exception(it->get()->exception());
    }
}
and lastly, the function that is calling the thread pool (the scan function is running on main)
// scans a path recursively for all files of selected extension type, calls thread to parse file
unsigned int Functions::Scan(wpath path, const Args& args, ThreadPool& pool)
{
wrecursive_directory_iterator d(path), e;
unsigned int filesFound = 0;
while ( d != e )
{
if (args.verbose())
std::wcout << L"Grepping: " << d->path().string() << std::endl;
for (Args::ext_T::const_iterator it = args.extension().cbegin(); it != args.extension().cend(); ++it)
{
if (extension(d->path()) == *it)
{
++filesFound;
pool.callThread(d->path());
}
}
++d;
}
std::cout << "Scan Function: Calling TerminateThreads() " << std::endl;
pool.terminateThreads();
std::cout << "Scan Function: Called TerminateThreads() " << std::endl;
return filesFound;
}
I'll repeat the question: what can I do to halt the main thread until the worker threads have finished?
I don't get the issue with thread termination and join.
Joining a thread is all about waiting until the given thread has terminated, so it's exactly what you want to do. If the thread has already finished execution, join will just return immediately.
So you'll just want to join each thread during the terminate call, as you already do in your code.
Note: currently you immediately rethrow any exception if a thread you just terminated has an active exception_ptr. That may leave the remaining threads unjoined. You'll have to keep that in mind when handling those exceptions.
Update: after looking at your code, I see a potential bug: std::condition_variable::wait() can return when a spurious wakeup occurs. If that happens, you will work again on the path that was worked on last time, leading to wrong results. You should have a flag that is set whenever new work has been added, and the _threadCond.wait(lock) line should be in a loop that checks that flag and _terminate. Not sure if that will fix your problem, though.
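In sketch form (only the waiting part of thisWorkerThread is shown; _hasWork is an assumed flag name), the predicate overload of wait() expresses exactly that loop:
// inside ThreadPool::WorkerThread::thisWorkerThread()
{
    std::unique_lock<std::mutex> lock(_threadMtx);
    _busy = false;
    // Re-checks the condition on every wakeup, so spurious wakeups and
    // notifications that arrive before the wait starts are both handled.
    _threadCond.wait(lock, [this] { return _hasWork || _terminate; });
    if (_terminate)
        break;
    _hasWork = false;   // consume the pending work item
}
// ... run _task(...) as before ...
The thread that assigns new work would set _hasWork to true while holding _threadMtx before notifying, and terminate() would do the same with _terminate.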
The problem was twofold:
synchronizeStartingThreads() would sometimes leave 1 or 2 threads blocked, waiting for the okay to go ahead (a problem in the while (some_condition) barrierCond.wait(lock) loop; the condition would sometimes never evaluate to true). Removing the while loop fixed this blocking issue.
The second issue was the potential for a worker thread to enter _threadMtx just as notify_all was called, before the thread reached _threadCond.wait(); since the notify had already happened, the thread would then wait forever.
i.e.
{
    // terminate() is called
    std::unique_lock<std::mutex> lock(_threadMtx);
    // _threadCond.notify_all() is called here
    _busy = false;
    _threadCond.wait(lock);
    // thread is blocked forever
}
Surprisingly, locking this mutex in terminate() did not stop this from happening.
This was solved by adding a timeout of 30 ms to the _threadCond.wait().
Also, a check was added before starting the task to make sure the same task wasn't being processed again.
The new code now looks like this:
thisWorkerThread
_threadCond.wait_for(lock, std::chrono::milliseconds(30)); // hold the lock a max of 30ms

// after the lock, and the termination check
if (_busy)
{
    Global::mFile_t rMap = _thisPool->_task(_pPath, *_pArgs, _thisPool->coutMatch_mtx);
    _workerMap.element.insert(rMap.element.begin(), rMap.element.end());
}