Is it safe to interrupt own thread? - c++

I have a class which creates a thread and stores it. While doing some work this thread can notice an error, notify the class and interrupt itself.
class foo {
public:
    enum mode {
        run = 0,
        booboo,
        stop
    };

    foo() {
        th_ = boost::thread(boost::bind(&foo::bar, this));
        mode_ = run;
    }

    ~foo() {
        stop();
    }

    void stop() {
        mode_ = stop;
        th_.join();
    }

    void bar() {
        for (;;) {
            if (mode_ == stop)
                return;
            if (error) {
                mode_ = booboo;
                th_.interrupt();
                // the sleep is the interruption point
                boost::this_thread::sleep_for(boost::chrono::nanoseconds(0));
            }
            // do some stuff
        }
    }

private:
    boost::thread th_;
    mode mode_;
};
According to the documentation for boost::thread::interrupt there is no indication that this won't work. If I understand it correctly, interrupt() marks the thread for interruption at the next interruption point, which in my case is the call to sleep_for.
On my machine and OS it works, so my question is: is it safe to do this?
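For reference, here is a minimal sketch (my illustration, not from the question) of what self-interruption relies on: interrupt() only sets a flag, and the next interruption point throws boost::thread_interrupted, which the thread body can catch itself instead of letting the exception end the thread.

// Hedged variant of bar() above, assuming Boost.Thread with interruption enabled.
void bar() {
    try {
        for (;;) {
            if (mode_ == stop)
                return;
            if (error) {
                mode_ = booboo;
                th_.interrupt();                       // only sets the interruption flag
            }
            boost::this_thread::interruption_point();  // throws boost::thread_interrupted if flagged
            // do some stuff
        }
    } catch (boost::thread_interrupted const&) {
        // the thread asked itself to stop; clean up and return normally
    }
}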

Related

How to schedule an operation to run at a future time

I have a class TaskManager that holds a queue of tasks. Each time, the next task is popped from the queue and executed.
class TaskManager
{
    TaskQueue m_queue;

    void svc_tasks()
    {
        while (!m_queue.empty())
        {
            Task* task = m_queue.pop();
            task->execute();
        }
    }
};
Inside the Task there are certain points I would like to pause for at least SLEEP_TIME_MS milliseconds. During this pause I would like to start executing the next task. When the pause ends I would like to put the task in the queue again.
class Task
{
    int m_phase = -1;

    void execute()
    {
        m_phase++;
        switch (m_phase)
        {
        case 0:
            ...
            do_pause(SLEEP_TIME_MS);
            return;
        case 1:
            ...
            break;
        }
    }
};
Is there a scheduler in the standard library (C++17) or Boost that I could use to call a handler function when SLEEP_TIME_MS has passed?
Thank you for any advice.
You can use boost::asio::high_resolution_timer with its async_wait method.
Every time you want to schedule the operation of pushing a task back into the queue, you have to:
create a high_resolution_timer
call expires_after, which specifies the expiry time (SLEEP_TIME_MS), i.e. when the handler is called. In your case the handler pushes the task back into the queue.
call async_wait with your handler
If we assume that the execute method returns a bool which indicates whether the task has completed (all phases were executed), it can be rewritten into something like this:
while (!m_queue.empty()) // this condition should be changed
{
    Task* task = m_queue.pop();
    bool finished = task->execute();
    if (!finished)
    {
        // scheduler works here - start async_wait with the handler
    }
}
If I understand correctly, you want to push the task back into the queue when SLEEP_TIME_MS has expired, so you cannot break the loop when the queue is empty, because you have to wait until the pending tasks have completed. You can introduce a stop flag and break the loop on demand.
Below is a snippet of code which works in the way you described (I hope):
#include <boost/asio.hpp>
#include <atomic>
#include <chrono>
#include <iostream>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

struct Scheduler {
    Scheduler(boost::asio::io_context& io)
        : io(io) {}

    boost::asio::io_context& io;

    template<class F>
    void schedule(F&& handler) {
        // The timer stays alive because the shared_ptr is captured by the completion handler.
        auto timer = std::make_shared<boost::asio::high_resolution_timer>(io);
        timer->expires_after(std::chrono::milliseconds(5000)); // SLEEP_TIME_MS
        timer->async_wait(
            [timer, handler](const boost::system::error_code& ec) {
                handler();
            });
    }
};

struct Task {
    int phase = -1;

    bool execute() {
        ++phase;
        std::cout << "phase: " << phase << std::endl;
        if (phase == 0) {
            return false;   // not finished yet, needs to be rescheduled
        }
        return true;
    }
};

struct TaskManager {
    Scheduler s;
    std::queue<std::shared_ptr<Task>> tasks;
    std::mutex tasksMtx;
    std::atomic<bool> stop{false};

    TaskManager(boost::asio::io_context& io) : s(io) {
        for (int i = 0; i < 5; ++i)
            tasks.push(std::make_shared<Task>());
    }

    void run() {
        while (true) {
            if (stop)
                break;
            std::shared_ptr<Task> currTask;
            {
                // front() and pop() must happen under the same lock as the emptiness check,
                // because the timer handlers push from the io_context thread.
                std::lock_guard<std::mutex> lock{tasksMtx};
                if (tasks.empty())
                    continue;
                currTask = tasks.front();
                tasks.pop();
            }
            bool finished = currTask->execute();
            if (!finished)
                s.schedule([this, currTask]() { insertTaskToVector(currTask); });
        }
    }

    template<class T>
    void insertTaskToVector(T&& t) {
        std::lock_guard<std::mutex> lock{tasksMtx};
        tasks.push(std::forward<T>(t));
    }
};

int main() {
    boost::asio::io_context io;
    boost::asio::io_context::work work{io}; // keeps io.run() from returning while idle
    std::thread th([&io]() { io.run(); });
    TaskManager tm(io);
    tm.run();    // loops until tm.stop is set
    io.stop();
    th.join();
}

Shutting down a multithreaded application by installing a signal handler

In the following code, I create a toy class that has one thread writing to a queue while another thread reads from that queue and prints to stdout. Now, in order to cleanly shut down the system, I set up a handler for SIGINT. I expect the signal handler to set the std::atomic<bool> variable stopFlag, which will lead threadB to push a poison pill (sentinel) onto the queue, upon encountering which threadA will halt.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <csignal>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class TestClass
{
public:
TestClass();
~TestClass();
void shutDown();
TestClass(const TestClass&) = delete;
TestClass& operator=(const TestClass&) = delete;
private:
void init();
void postResults();
std::string getResult();
void processResults();
std::atomic<bool> stopFlag;
std::mutex outQueueMutex;
std::condition_variable outQueueConditionVariable;
std::queue<std::string> outQueue;
std::unique_ptr<std::thread> threadA;
std::unique_ptr<std::thread> threadB;
};
void TestClass::init()
{
threadA = std::make_unique<std::thread>(&TestClass::processResults, std::ref(*this));
threadB = std::make_unique<std::thread>(&TestClass::postResults, std::ref(*this));
}
TestClass::TestClass():
stopFlag(false)
{
init();
}
TestClass::~TestClass()
{
threadB->join();
}
void TestClass::postResults()
{
while(true)
{
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
std::string name = "ABCDEF";
{
std::unique_lock<std::mutex> lock(outQueueMutex);
outQueue.push(name);
outQueueConditionVariable.notify_one();
}
if(stopFlag)
{
/*For shutting down output thread*/
auto poisonPill = std::string();
{
std::unique_lock<std::mutex> lock(outQueueMutex);
outQueue.push(poisonPill);
outQueueConditionVariable.notify_one();
}
threadA->join();
break;
}
}
}
void TestClass::shutDown()
{
stopFlag = true;
}
std::string TestClass::getResult()
{
std::string result;
{
std::unique_lock<std::mutex> lock(outQueueMutex);
while(outQueue.empty())
{
outQueueConditionVariable.wait(lock);
}
result= outQueue.front();
outQueue.pop();
}
return result;
}
void TestClass::processResults()
{
while(true)
{
const auto result = getResult();
if(result.empty())
{
break;
}
std::cout << result << std::endl;
}
}
static void sigIntHandler(std::shared_ptr<TestClass> t, int)
{
t->shutDown();
}
static std::function<void(int)> handler;
int main()
{
auto testClass = std::make_shared<TestClass>();
handler = std::bind(sigIntHandler, testClass, std::placeholders::_1);
std::signal(SIGINT, [](int n){ handler(n);});
return 0;
}
I compiled this with gcc 5.2 using the -std=c++14 flag. On hitting Ctrl-C on my CentOS 7 machine, I get the following error:
terminate called after throwing an instance of 'std::system_error'
what(): Invalid argument
Aborted (core dumped)
Please help me understand what is going on.
What happens is that your main function exits immediately, destroying the global handler object and then testClass. The main thread then gets blocked in TestClass::~TestClass. The signal handler ends up accessing already destroyed objects, which leads to undefined behaviour.
The root cause is unclear object ownership due to shared pointers - you do not know what ends up destroying your objects, or when.
A more general approach is to use another thread to handle all signals and block signals in all other threads. That signal-handling thread can then call any function upon receiving a signal.
You also do not need the smart pointers and function wrappers here at all.
Example:
#include <signal.h>   // sigset_t, sigfillset, pthread_sigmask, sigwaitinfo (POSIX)
#include <chrono>
#include <condition_variable>
#include <cstdlib>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class TestClass
{
public:
TestClass();
~TestClass();
void shutDown();
TestClass(const TestClass&) = delete;
TestClass& operator=(const TestClass&) = delete;
private:
void postResults();
std::string getResult();
void processResults();
std::mutex outQueueMutex;
std::condition_variable outQueueConditionVariable;
std::queue<std::string> outQueue;
bool stop = false;
std::thread threadA;
std::thread threadB;
};
TestClass::TestClass()
: threadA(std::thread(&TestClass::processResults, this))
, threadB(std::thread(&TestClass::postResults, this))
{}
TestClass::~TestClass() {
threadA.join();
threadB.join();
}
void TestClass::postResults() {
while(true) {
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
std::string name = "ABCDEF";
{
std::unique_lock<std::mutex> lock(outQueueMutex);
if(stop)
return;
outQueue.push(name);
outQueueConditionVariable.notify_one();
}
}
}
void TestClass::shutDown() {
std::unique_lock<std::mutex> lock(outQueueMutex);
stop = true;
outQueueConditionVariable.notify_one();
}
std::string TestClass::getResult() {
std::string result;
{
std::unique_lock<std::mutex> lock(outQueueMutex);
while(!stop && outQueue.empty())
outQueueConditionVariable.wait(lock);
if(stop)
return result;
result= outQueue.front();
outQueue.pop();
}
return result;
}
void TestClass::processResults()
{
while(true) {
const auto result = getResult();
if(result.empty())
break;
std::cout << result << std::endl;
}
}
int main() {
// Block signals in all threads.
sigset_t sigset;
sigfillset(&sigset);
::pthread_sigmask(SIG_BLOCK, &sigset, nullptr);
TestClass testClass;
std::thread signal_thread([&testClass]() {
// Wait here for any signal; signals stay blocked and sigwaitinfo consumes them synchronously.
sigset_t sigset;
sigfillset(&sigset);
int signo = ::sigwaitinfo(&sigset, nullptr);
if(-1 == signo)
std::abort();
std::cout << "Received signal " << signo << '\n';
testClass.shutDown();
});
signal_thread.join();
}
On your platform this signal handler is invoked when a real SIGINT signal arrives. The list of functions that can safely be invoked inside such a signal handler is rather limited, and calling anything else leads to undefined behavior.
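For completeness, a minimal sketch (my illustration, not from the answer) of the little that a plain std::signal handler is allowed to do: set a lock-free atomic flag (or a volatile std::sig_atomic_t) and nothing else; the rest of the program polls the flag.

#include <atomic>
#include <csignal>

std::atomic<bool> g_stop{false};   // must be lock-free to be safe to touch from a handler

void on_sigint(int)
{
    g_stop.store(true);            // no locks, no allocation, no iostream in here
}

int main()
{
    std::signal(SIGINT, on_sigint);
    while (!g_stop.load())
    {
        // do work and check the flag periodically
    }
}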

Threaded timer, interrupting a sleep (stopping it)

I want a reasonably reliable threaded timer, so I've written a timer object that fires a std::function on a thread. I would like to give this timer the ability to stop before it gets to the next tick; something you can't do with ::sleep (at least I don't think you can).
So what I've done is put a condition variable on a mutex. If the condition times out, I fire the event. If the condition is signalled, the thread exits. So the Stop method needs to be able to get the thread to stop and/or interrupt its wait, which I think is what it's doing right now.
There are problems with this, however. Sometimes the thread isn't joinable() and sometimes the condition is signalled after its timeout but before it enters its wait state.
How can I improve this and make it robust?
The following is a full repro. The wait is 10 seconds here but the program should terminate immediately, as the Foo is created and then immediately destroyed. It does sometimes, but mostly it does not.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <mutex>
#include <sstream>
#include <thread>
class Timer
{
public:
Timer() {}
~Timer()
{
Stop();
}
void Start(std::chrono::milliseconds const & interval, std::function<void(void)> const & callback)
{
Stop();
thread = std::thread([=]()
{
for(;;)
{
auto locked = std::unique_lock<std::mutex>(mutex);
auto result = terminate.wait_for(locked, interval);
if (result == std::cv_status::timeout)
{
callback();
}
else
{
return;
}
}
});
}
void Stop()
{
terminate.notify_one();
if(thread.joinable())
{
thread.join();
}
}
private:
std::thread thread;
std::mutex mutex;
std::condition_variable terminate;
};
class Foo
{
public:
Foo()
{
timer = std::make_unique<Timer>();
timer->Start(std::chrono::milliseconds(10000), std::bind(&Foo::Callback, this));
}
~Foo()
{
}
void Callback()
{
static int count = 0;
std::ostringstream o;
std::cout << count++ << std::endl;
}
std::unique_ptr<Timer> timer;
};
int main(void)
{
{
Foo foo;
}
return 0;
}
See my comment. You forgot to implement the state of the thing the thread is waiting for, leaving the mutex with nothing to protect and the thread with nothing to wait for. Condition variables are stateless -- your code must track the state of the thing whose change you're notifying the thread about.
Here's the code fixed. Notice that the mutex protects stop, and stop is the thing the thread is waiting for.
class Timer
{
public:
Timer() {}
~Timer()
{
Stop();
}
void Start(std::chrono::milliseconds const & interval,
std::function<void(void)> const & callback)
{
Stop();
{
auto locked = std::unique_lock<std::mutex>(mutex);
stop = false;
}
thread = std::thread([=]()
{
auto locked = std::unique_lock<std::mutex>(mutex);
while (! stop) // We hold the mutex that protects stop
{
auto result = terminate.wait_for(locked, interval);
if (result == std::cv_status::timeout)
{
callback();
}
}
});
}
void Stop()
{
{
// Set the predicate
auto locked = std::unique_lock<std::mutex>(mutex);
stop = true;
}
// Tell the thread the predicate has changed
terminate.notify_one();
if(thread.joinable())
{
thread.join();
}
}
private:
bool stop; // This is the thing the thread is waiting for
std::thread thread;
std::mutex mutex;
std::condition_variable terminate;
};
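As a follow-up (my addition, not part of the answer): the predicate overload of wait_for folds the stop check and spurious wake-up handling into one call, so the thread body could equally be written as:

// Hedged variant of the thread body above, assuming the same members (mutex, terminate, stop).
thread = std::thread([=]()
{
    auto locked = std::unique_lock<std::mutex>(mutex);
    // wait_for returns false when the interval elapses with stop still false
    while (!terminate.wait_for(locked, interval, [this] { return stop; }))
    {
        callback();
    }
});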

Thread pool implementation using pthreads

I am trying to understand the below implementation of a thread pool using pthreads. When I comment out the for loop in main, the program gets stuck; adding some logging suggests it is getting stuck in the join call in the ThreadPool destructor.
I am unable to understand why this is happening. Is there some deadlock scenario here?
This may be naive, but can someone help me understand why this is happening and how to correct it?
Thanks a lot!
#include <stdio.h>
#include <queue>
#include <unistd.h>
#include <pthread.h>
#include <malloc.h>
#include <stdlib.h>
// Base class for Tasks
// run() should be overloaded and expensive calculations done there
// showTask() is for debugging and can be deleted if not used
class Task {
public:
Task() {}
virtual ~Task() {}
virtual void run()=0;
virtual void showTask()=0;
};
// Wrapper around std::queue with some mutex protection
class WorkQueue {
public:
WorkQueue() {
// Initialize the mutex protecting the queue
pthread_mutex_init(&qmtx,0);
// wcond is a condition variable that's signaled
// when new work arrives
pthread_cond_init(&wcond, 0);
}
~WorkQueue() {
// Cleanup pthreads
pthread_mutex_destroy(&qmtx);
pthread_cond_destroy(&wcond);
}
// Retrieves the next task from the queue
Task *nextTask() {
// The return value
Task *nt = 0;
// Lock the queue mutex
pthread_mutex_lock(&qmtx);
// Check if there's work
if (finished && tasks.size() == 0) {
// If not return null (0)
nt = 0;
} else {
// Not finished, but there are no tasks, so wait for
// wcond to be signalled
if (tasks.size()==0) {
pthread_cond_wait(&wcond, &qmtx);
}
// get the next task
nt = tasks.front();
if(nt){
tasks.pop();
}
// For debugging
if (nt) nt->showTask();
}
// Unlock the mutex and return
pthread_mutex_unlock(&qmtx);
return nt;
}
// Add a task
void addTask(Task *nt) {
// Only add the task if the queue isn't marked finished
if (!finished) {
// Lock the queue
pthread_mutex_lock(&qmtx);
// Add the task
tasks.push(nt);
// signal there's new work
pthread_cond_signal(&wcond);
// Unlock the mutex
pthread_mutex_unlock(&qmtx);
}
}
// Mark the queue finished
void finish() {
pthread_mutex_lock(&qmtx);
finished = true;
// Signal the condition variable in case any threads are waiting
pthread_cond_signal(&wcond);
pthread_mutex_unlock(&qmtx);
}
// Check if there's work
bool hasWork() {
//printf("task queue size is %d\n",tasks.size());
return (tasks.size()>0);
}
private:
std::queue<Task*> tasks;
bool finished;
pthread_mutex_t qmtx;
pthread_cond_t wcond;
};
// Function that retrieves a task from a queue, runs it and deletes it
void *getWork(void* param) {
Task *mw = 0;
WorkQueue *wq = (WorkQueue*)param;
while (mw = wq->nextTask()) {
mw->run();
delete mw;
}
pthread_exit(NULL);
}
class ThreadPool {
public:
// Allocate a thread pool and set them to work trying to get tasks
ThreadPool(int n) : _numThreads(n) {
int rc;
printf("Creating a thread pool with %d threads\n", n);
threads = new pthread_t[n];
for (int i=0; i< n; ++i) {
rc = pthread_create(&(threads[i]), 0, getWork, &workQueue);
if (rc){
printf("ERROR; return code from pthread_create() is %d\n", rc);
exit(-1);
}
}
}
// Wait for the threads to finish, then delete them
~ThreadPool() {
workQueue.finish();
//waitForCompletion();
for (int i=0; i<_numThreads; ++i) {
pthread_join(threads[i], 0);
}
delete [] threads;
}
// Add a task
void addTask(Task *nt) {
workQueue.addTask(nt);
}
// Tell the tasks to finish and return
void finish() {
workQueue.finish();
}
// Checks if there is work to do
bool hasWork() {
return workQueue.hasWork();
}
private:
pthread_t * threads;
int _numThreads;
WorkQueue workQueue;
};
// stdout is a shared resource, so protect it with a mutex
static pthread_mutex_t console_mutex = PTHREAD_MUTEX_INITIALIZER;
// Debugging function
void showTask(int n) {
pthread_mutex_lock(&console_mutex);
pthread_mutex_unlock(&console_mutex);
}
// Task to compute fibonacci numbers
// It's more efficient to use an iterative algorithm, but
// the recursive algorithm takes longer and is more interesting
// than sleeping for X seconds to show parallelism
class FibTask : public Task {
public:
FibTask(int n) : Task(), _n(n) {}
~FibTask() {
// Debug prints
pthread_mutex_lock(&console_mutex);
printf("tid(%d) - fibd(%d) being deleted\n", pthread_self(), _n);
pthread_mutex_unlock(&console_mutex);
}
virtual void run() {
// Note: it's important that this isn't contained in the console mutex lock
long long val = innerFib(_n);
// Show results
pthread_mutex_lock(&console_mutex);
printf("Fibd %d = %lld\n",_n, val);
pthread_mutex_unlock(&console_mutex);
// The following won't work in parallel:
// pthread_mutex_lock(&console_mutex);
// printf("Fibd %d = %lld\n",_n, innerFib(_n));
// pthread_mutex_unlock(&console_mutex);
}
virtual void showTask() {
// More debug printing
pthread_mutex_lock(&console_mutex);
printf("thread %d computing fibonacci %d\n", pthread_self(), _n);
pthread_mutex_unlock(&console_mutex);
}
private:
// Slow computation of fibonacci sequence
// To make things interesting, and perhaps improve load balancing, these
// inner computations could be added to the task queue
// Ideally set a lower limit on when that's done
// (i.e. don't create a task for fib(2)) because thread overhead makes it
// not worth it
long long innerFib(long long n) {
if (n<=1) { return 1; }
return innerFib(n-1) + innerFib(n-2);
}
long long _n;
};
int main(int argc, char *argv[])
{
// Create a thread pool
ThreadPool *tp = new ThreadPool(10);
// Create work for it
/*for (int i=0;i<100; ++i) {
int rv = rand() % 40 + 1;
showTask(rv);
tp->addTask(new FibTask(rv));
}*/
delete tp;
printf("\n\n\n\n\nDone with all work!\n");
}
The design is more or less OK-ish, but implementation-wise it contains several things that are a bit overcomplicated and may introduce instabilities. I guess your program deadlocks when you comment out the for loop because you should use pthread_cond_broadcast instead of pthread_cond_signal in your WorkQueue::finish() method.
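A sketch of that one-line change applied to the original finish() (my paraphrase of the suggestion, not code taken from the answer):

// Mark the queue finished and wake *every* waiting worker, not just one,
// so no thread stays blocked in pthread_cond_wait() inside nextTask().
void finish() {
    pthread_mutex_lock(&qmtx);
    finished = true;
    pthread_cond_broadcast(&wcond);
    pthread_mutex_unlock(&qmtx);
}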
Note: I usually implement threadpool termination by placing NUM_THREADS NULL items into the work queue, and I set a finished flag only so that addTask() can check it: after finish() I usually don't allow adding new tasks, and I return false from addTask() or sometimes assert.
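For illustration, a sketch of the addTask() guard described in that note (my wording, assuming the same WorkQueue members):

// After finish() has been called, reject new tasks instead of queueing them.
bool addTask(Task* nt) {
    pthread_mutex_lock(&qmtx);
    bool accepted = !finished;
    if (accepted) {
        tasks.push(nt);
        pthread_cond_signal(&wcond);
    }
    pthread_mutex_unlock(&qmtx);
    return accepted;
}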
Another note: it's best to encapsulate threads into classes; that has several benefits and makes porting to multiple platforms easier.
There may be other bugs too, as I haven't executed your program, just read through your code.
EDIT: Here is a reworked version. I made some modifications to your code but I don't guarantee that it works. Fingers crossed... :-)
#include <stdio.h>
#include <queue>
#include <unistd.h>
#include <pthread.h>
#include <malloc.h>
#include <stdlib.h>
#include <assert.h>
// Reusable thread class
class Thread
{
public:
Thread()
{
state = EState_None;
handle = 0;
}
virtual ~Thread()
{
assert(state != EState_Started);
}
void start()
{
assert(state == EState_None);
// in case of thread create error I usually FatalExit...
if (pthread_create(&handle, NULL, threadProc, this))
abort();
state = EState_Started;
}
void join()
{
// A started thread must be joined exactly once!
// This requirement could be eliminated with an alternative implementation but it isn't needed.
assert(state == EState_Started);
pthread_join(handle, NULL);
state = EState_Joined;
}
protected:
virtual void run() = 0;
private:
static void* threadProc(void* param)
{
Thread* thread = reinterpret_cast<Thread*>(param);
thread->run();
return NULL;
}
private:
enum EState
{
EState_None,
EState_Started,
EState_Joined
};
EState state;
pthread_t handle;
};
// Base task for Tasks
// run() should be overloaded and expensive calculations done there
// showTask() is for debugging and can be deleted if not used
class Task {
public:
Task() {}
virtual ~Task() {}
virtual void run()=0;
virtual void showTask()=0;
};
// Wrapper around std::queue with some mutex protection
class WorkQueue
{
public:
WorkQueue() {
pthread_mutex_init(&qmtx,0);
// wcond is a condition variable that's signaled
// when new work arrives
pthread_cond_init(&wcond, 0);
}
~WorkQueue() {
// Cleanup pthreads
pthread_mutex_destroy(&qmtx);
pthread_cond_destroy(&wcond);
}
// Retrieves the next task from the queue
Task *nextTask() {
// The return value
Task *nt = 0;
// Lock the queue mutex
pthread_mutex_lock(&qmtx);
while (tasks.empty())
pthread_cond_wait(&wcond, &qmtx);
nt = tasks.front();
tasks.pop();
// Unlock the mutex and return
pthread_mutex_unlock(&qmtx);
return nt;
}
// Add a task
void addTask(Task *nt) {
// Lock the queue
pthread_mutex_lock(&qmtx);
// Add the task
tasks.push(nt);
// signal there's new work
pthread_cond_signal(&wcond);
// Unlock the mutex
pthread_mutex_unlock(&qmtx);
}
private:
std::queue<Task*> tasks;
pthread_mutex_t qmtx;
pthread_cond_t wcond;
};
// Thanks to the reusable thread class implementing threads is
// simple and free of pthread api usage.
class PoolWorkerThread : public Thread
{
public:
PoolWorkerThread(WorkQueue& _work_queue) : work_queue(_work_queue) {}
protected:
virtual void run()
{
while (Task* task = work_queue.nextTask())
task->run();
}
private:
WorkQueue& work_queue;
};
class ThreadPool {
public:
// Allocate a thread pool and set them to work trying to get tasks
ThreadPool(int n) {
printf("Creating a thread pool with %d threads\n", n);
for (int i=0; i<n; ++i)
{
threads.push_back(new PoolWorkerThread(workQueue));
threads.back()->start();
}
}
// Wait for the threads to finish, then delete them
~ThreadPool() {
finish();
}
// Add a task
void addTask(Task *nt) {
workQueue.addTask(nt);
}
// Asking the threads to finish, waiting for the task
// queue to be consumed and then returning.
void finish() {
for (size_t i=0,e=threads.size(); i<e; ++i)
workQueue.addTask(NULL);
for (size_t i=0,e=threads.size(); i<e; ++i)
{
threads[i]->join();
delete threads[i];
}
threads.clear();
}
private:
std::vector<PoolWorkerThread*> threads;
WorkQueue workQueue;
};
// stdout is a shared resource, so protect it with a mutex
static pthread_mutex_t console_mutex = PTHREAD_MUTEX_INITIALIZER;
// Debugging function
void showTask(int n) {
pthread_mutex_lock(&console_mutex);
pthread_mutex_unlock(&console_mutex);
}
// Task to compute fibonacci numbers
// It's more efficient to use an iterative algorithm, but
// the recursive algorithm takes longer and is more interesting
// than sleeping for X seconds to show parallelism
class FibTask : public Task {
public:
FibTask(int n) : Task(), _n(n) {}
~FibTask() {
// Debug prints
pthread_mutex_lock(&console_mutex);
printf("tid(%d) - fibd(%d) being deleted\n", (int)pthread_self(), (int)_n);
pthread_mutex_unlock(&console_mutex);
}
virtual void run() {
// Note: it's important that this isn't contained in the console mutex lock
long long val = innerFib(_n);
// Show results
pthread_mutex_lock(&console_mutex);
printf("Fibd %d = %lld\n",(int)_n, val);
pthread_mutex_unlock(&console_mutex);
// The following won't work in parallel:
// pthread_mutex_lock(&console_mutex);
// printf("Fibd %d = %lld\n",_n, innerFib(_n));
// pthread_mutex_unlock(&console_mutex);
// this thread pool implementation doesn't delete
// the tasks so we perform the cleanup here
delete this;
}
virtual void showTask() {
// More debug printing
pthread_mutex_lock(&console_mutex);
printf("thread %d computing fibonacci %d\n", (int)pthread_self(), (int)_n);
pthread_mutex_unlock(&console_mutex);
}
private:
// Slow computation of fibonacci sequence
// To make things interesting, and perhaps improve load balancing, these
// inner computations could be added to the task queue
// Ideally set a lower limit on when that's done
// (i.e. don't create a task for fib(2)) because thread overhead makes it
// not worth it
long long innerFib(long long n) {
if (n<=1) { return 1; }
return innerFib(n-1) + innerFib(n-2);
}
long long _n;
};
int main(int argc, char *argv[])
{
// Create a thread pool
ThreadPool *tp = new ThreadPool(10);
// Create work for it
for (int i=0;i<100; ++i) {
int rv = rand() % 40 + 1;
showTask(rv);
tp->addTask(new FibTask(rv));
}
delete tp;
printf("\n\n\n\n\nDone with all work!\n");
}
I think you have a race condition there...
When you remove the for loop, the pool is destroyed as soon as it is created, so there is no time for the threads to start waiting on the queue. Try putting a sleep there and you'll see.
I implemented a threadpool library which is used widely among all our services, so here is some advice:
You are using C++, so there's no need to use pthreads; just use Boost, or std::thread if available
Don't signal, push empty tasks instead (pushing a task requires a signal anyway, of course)
Use boost::function or std::function instead of a base class
Cope with spurious wake-ups (your code doesn't seem to handle them; see the sketch after this list)
pthread_cond_signal wakes up only one thread; you must use pthread_cond_broadcast if you want to notify them all. That said, I'd recommend, again, sticking to Boost's condition variables (@pasztorpisti got it right here, he's got my upvote)
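A minimal sketch of the spurious wake-up point (my illustration, using std::condition_variable as the list suggests): the wait sits behind a predicate, so a wake-up with nothing in the queue simply goes back to sleep.

#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<int> q;

int pop_blocking() {
    std::unique_lock<std::mutex> lock(m);
    // The predicate is re-checked after every wake-up, spurious or not.
    cv.wait(lock, [] { return !q.empty(); });
    int v = q.front();
    q.pop();
    return v;
}

void push(int v) {
    {
        std::lock_guard<std::mutex> lock(m);
        q.push(v);
    }
    cv.notify_one();
}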

Inner class and initialisation

I have a class defined like this (it is not complete and probably won't compile):
class Server
{
public:
Server();
~Server();
class Worker
{
public:
Worker(Server& server) : _server(server) { }
~Worker() { }
void Run() { }
void Stop() { }
private:
Server& _server;
};
void Run()
{
while(true) {
// do work
}
}
void Stop()
{
// How do I stop the thread?
}
private:
std::vector<Worker> _workers;
};
My question is: how do I initialize the workers vector, passing in the outer class Server?
What I want is a vector of worker threads. Each worker thread has its own state but can access some other shared data (not shown). Also, how do I create the threads? Should they be created when the class object is first constructed, or externally from a thread_group?
Also, how do I go about shutting down the threads cleanly and safely?
EDIT:
It seems that I can initialize Worker like this:
Server::Server(int thread_count)
: _workers(thread_count, Worker(*this)), _thread_count(thread_count) { }
And I'm currently doing this in Server::Run to create the threads.
boost::thread_group _threads; // a Server member variable
void Server::Run() {
for (int i = 0; i < _thread_count; i++)
_threads.create_thread(boost::bind(&Server::Worker::Run, _workers[i]));
// main thread.
while(1) {
// Do stuff
}
_threads.join_all();
}
Does anyone see any problems with this?
And how about safe shutdown?
EDIT:
One problem I have found with it is that the Worker objects don't seem to get constructed!
Oops, yes they do; I need a copy constructor on the Worker class.
But oddly, creating the threads results in the copy constructor for Worker being called multiple times.
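For what it's worth (my note, not from the thread): boost::bind copies its arguments and boost::thread then copies the bound functor again, which would explain the multiple copy-constructor calls. Binding a pointer to the element avoids copying the Worker itself:

// Hedged tweak to the create_thread line above: only the pointer is copied.
_threads.create_thread(boost::bind(&Server::Worker::Run, &_workers[i]));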
I have done it with pure WINAPI, look:
#include <stdio.h>
#include <conio.h>
#include <windows.h>
#include <vector>
using namespace std;
class Server
{
public:
class Worker
{
int m_id;
DWORD m_threadId;
HANDLE m_threadHandle;
bool m_active;
friend Server;
public:
Worker (int id)
{
m_id = id;
m_threadId = 0;
m_threadHandle = 0;
m_active = true;
}
static DWORD WINAPI Run (LPVOID lpParam)
{
Worker* p = (Worker*) lpParam; // it's needed because of the static modifier
while (p->m_active)
{
printf ("I'm a thread #%i\n", p->m_id);
Sleep (1000);
}
return 0;
}
void Stop ()
{
m_active = false;
}
};
Server ()
{
m_workers = new vector <Worker*> ();
m_count = 0;
}
~Server ()
{
delete m_workers;
}
void Run ()
{
puts ("Server is run");
}
void Stop ()
{
while (m_count > 0)
RemoveWorker ();
puts ("Server has been stopped");
}
void AddWorker ()
{
HANDLE h;
DWORD threadId;
Worker* n = new Worker (m_count ++);
m_workers->push_back (n);
h = CreateThread (NULL, 0, Worker::Run, (VOID*) n, CREATE_SUSPENDED, &threadId);
n->m_threadHandle = h;
n->m_threadId = threadId;
ResumeThread (h);
}
void RemoveWorker ()
{
HANDLE h;
DWORD threadId;
if (m_count <= 0)
return;
Worker* n = m_workers->at (m_count - 1);
m_workers->pop_back ();
n->Stop ();
TerminateThread (n->m_threadHandle, 0);
m_count --;
delete n;
}
private:
int m_count;
vector <Worker*>* m_workers;
};
int main (void)
{
Server a;
int com = 1;
a.Run ();
while (com)
{
if (kbhit ())
{
switch (getch ())
{
case 27: // escape key code
com = 0;
break;
case 'a': // add worker
a.AddWorker ();
break;
case 'r': // remove worker
a.RemoveWorker ();
break;
}
}
}
a.Stop ();
return 0;
}
There is no synchronization code here because I haven't had enough time to do it... but I hope it will help you =)
Have you looked at Boost.Asio at all? It looks like it could be a good fit for what you are trying to do. Additionally, you can call io_service::run (similar to your Run method) from many threads, i.e. you can process your IO in many threads. Also of interest could be http://think-async.com/Asio/Recipes for an asio-based thread pool.
Have a look at the asio examples. Perhaps they offer an alternative way of handling what you are trying to do; especially have a look at how a clean shutdown is accomplished.
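A minimal sketch of the io_service-as-thread-pool idea mentioned above (my illustration; the thread count and task bodies are placeholders):

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;

    // Queue some work first so each call to run() has something to do.
    for (int i = 0; i < 10; ++i)
        io.post([i] { std::cout << "task " << i << "\n"; });   // output may interleave

    // Several threads all call run(); posted handlers are dispatched across them.
    boost::thread_group workers;
    for (int i = 0; i < 4; ++i)
        workers.create_thread([&io] { io.run(); });

    workers.join_all();   // run() returns in each thread once the queue is empty
}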