How to get data back from a thread? - C++

I learned how to send additional parameters to a thread in a related post, but now I would like to know if I can get the data (processed by the thread) back into the calling function.
I am writing a program in which I need a thread that continuously puts user input into a string variable. The problem is that I don't know how to get the string data back to main(), where it is displayed (graphically). I would prefer that getting the user input and displaying the string be done independently, since they need to loop at different rates (say, 30 fps for user input and 16 fps for display).
I hope I am clear.
Here is an idealized example of the problem (but not one that I need a literal solution to):
typedef struct
{
    int a, b;
} ThreadData;

int avg(void* data)
{
    ThreadData* tdata = (ThreadData*)data;
    int processed_average = (tdata->a + tdata->b) / 2.0;
    //this is what I want to send back to main()
    return 0;
}

int main()
{
    int a = 10, b = 20;
    ThreadData myThreadData = { a, b };
    SDL_Thread* mythread = SDL_CreateThread(avg, &myThreadData);
    cout << "The average of a and b is "; //I don't know what to put here!
}
Forgive me for any syntax errors in my demo.
To put it as a single question:
How do I get the current contents of a string that is continuously updated by a thread (in a loop) back into main(), which contains another loop that continuously updates the screen (graphically) with the latest contents of that string?

A decent pattern for inter-thread communication is a message queue: you can implement one with a mutex, a list and a condition variable, or use an off-the-shelf variant. Here are some implementations you can look at:
http://pocoproject.org/docs/Poco.NotificationQueue.html
http://gnodebian.blogspot.com.es/2013/07/a-thread-safe-asynchronous-queue-in-c11.html
http://docs.wxwidgets.org/trunk/classwx_message_queue_3_01_t_01_4.html
http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/reference/containers_overview/concurrent_queue_cls.htm
You would then have the thread push data onto the queue, and have main() pop data from the queue.
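For illustration, here is a minimal sketch of such a queue built from std::mutex, std::condition_variable and std::deque (the names and the choice of std::deque are my own for the example, not taken from any of the libraries above):
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>

// Minimal blocking message queue sketch: a producer thread push()es,
// the consumer (e.g. main) pop()s and blocks until something is available.
class MessageQueue
{
public:
    void push( std::string msg )
    {
        {
            std::lock_guard< std::mutex > lock( mutex_ );
            queue_.push_back( std::move( msg ) );
        }
        cv_.notify_one(); // wake the consumer after releasing the lock
    }

    std::string pop()
    {
        std::unique_lock< std::mutex > lock( mutex_ );
        cv_.wait( lock, [this]{ return !queue_.empty(); } ); // wait on the predicate
        std::string msg = std::move( queue_.front() );
        queue_.pop_front();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque< std::string > queue_;
};
The worker thread would call push() with each new piece of data, and main() would call pop() (or a non-blocking try_pop variant) whenever it is ready to render.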
Edit 1: in response to the OP's edit.
If you have a single string that has to be edited by the thread and then rendered by main(), it is best to just use std::string, protect all access to it with a mutex, and then use a condition variable to signal the main thread when the string changes. I will try to write some sample code for you in a minute.
Edit 2: Sample code as promised:
#include <SDL/SDL.h>
#include <SDL/SDL_thread.h>
#include <iostream>
#include <sstream>
#include <stdexcept>
class SdlMutex
{
public:
SdlMutex()
{
mutex = SDL_CreateMutex();
if ( !mutex ) throw std::runtime_error( "SDL_CreateMutex == NULL" );
}
~SdlMutex()
{
SDL_DestroyMutex( mutex );
}
void lock()
{
if( SDL_mutexP( mutex ) == -1 ) throw std::runtime_error( "SDL_mutexP == -1" );
// Note:
// -1 does not mean it was already locked - it means there was an error in locking -
// if it was locked it will just block - see SDL_mutexP(3)
}
void unlock()
{
if ( SDL_mutexV( mutex ) == -1 ) throw std::runtime_error( "SDL_mutexV == -1" );
}
SDL_mutex* underlying()
{
return mutex;
}
private:
SDL_mutex* mutex;
};
class SdlScopedLock
{
public:
SdlScopedLock( SdlMutex& mutex )
:
mutex( mutex )
{
mutex.lock();
}
~SdlScopedLock()
{
try
{
this->unlock();
}
catch( const std::exception& e )
{
// Destructors should never throw ...
std::cerr << "SdlScopedLock::~SdlScopedLock - caught : " << e.what() << std::endl;
}
}
void unlock()
{
mutex.unlock();
}
private:
SdlMutex& mutex;
};
class ThreadData
{
public:
ThreadData()
:
dataReady( false ),
done( false )
{
condition = SDL_CreateCond();
}
~ThreadData()
{
SDL_DestroyCond( condition );
}
// Using stringstream so I can just shift on integers...
std::stringstream data;
bool dataReady;
bool done;
SdlMutex mutex;
SDL_cond* condition;
};
int threadFunction( void* data )
{
try
{
ThreadData* threadData = static_cast< ThreadData* >( data );
for ( size_t i = 0; i < 100; i++ )
{
{
SdlScopedLock lock( threadData->mutex );
// Everything in this scope is now synchronized with the mutex
if ( i != 0 ) threadData->data << ", ";
threadData->data << i;
threadData->dataReady = true;
} // threadData->mutex is automatically unlocked here
// It's important to note that the condition should be signaled after the mutex is unlocked
if ( SDL_CondSignal( threadData->condition ) == -1 ) throw std::runtime_error( "Failed to signal" );
}
{
SdlScopedLock lock( threadData->mutex );
threadData->done = true;
}
if ( SDL_CondSignal( threadData->condition ) == -1 ) throw std::runtime_error( "Failed to signal" );
return 0;
}
catch( const std::exception& e )
{
std::cerr << "Caught : " << e.what() << std::endl;
return 1;
}
}
int main()
{
ThreadData threadData;
SDL_Thread* thread = SDL_CreateThread( threadFunction, &threadData );
while ( true )
{
SdlScopedLock lock( threadData.mutex );
while ( threadData.dataReady == false && threadData.done == false )
{
// NOTE: must call condition wait with mutex already locked
if ( SDL_CondWait( threadData.condition, threadData.mutex.underlying() ) == -1 ) throw std::runtime_error( "Failed to wait" );
}
// once dataReady == true or threadData.done == true we get here
std::cout << "Got data = " << threadData.data.str() << std::endl;
threadData.data.str( "" );
threadData.dataReady = false;
if ( threadData.done )
{
std::cout << "child done - ending" << std::endl;
break;
}
}
int status = 99;
SDL_WaitThread( thread, &status );
std::cerr << "Thread completed with : " << status << std::endl;
}
Edit 3: And then the cage comes down...
You should probably not use SDL thread support in C++, or at least wrap it in some RAII classes - for example, in the above code, if an exception is thrown you should ensure the mutex is unlocked. I will update the sample with RAII, but there are many better options than the SDL thread helpers. (NOTE: Edit 4 adds RAII - so now the mutex is unlocked when an exception is thrown)
Edit 4: Code is now safer - still make sure you do error checks - and basically: don't use SDL threads in C++ - use boost::thread or std::thread.
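For reference, here is a rough sketch of the same string-sharing pattern with the standard library instead of SDL's helpers (illustrative only, not a drop-in replacement for the sample above):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

int main()
{
    std::mutex mutex;
    std::condition_variable cv;
    std::string data;
    bool dataReady = false;
    bool done = false;

    std::thread worker( [&]
    {
        for ( int i = 0; i < 100; i++ )
        {
            {
                std::lock_guard< std::mutex > lock( mutex );
                if ( i != 0 ) data += ", ";
                data += std::to_string( i );
                dataReady = true;
            }
            cv.notify_one(); // signal after the lock is released
        }
        {
            std::lock_guard< std::mutex > lock( mutex );
            done = true;
        }
        cv.notify_one();
    } );

    while ( true )
    {
        std::unique_lock< std::mutex > lock( mutex );
        cv.wait( lock, [&]{ return dataReady || done; } ); // the predicate handles spurious wakeups
        std::cout << "Got data = " << data << std::endl;
        data.clear();
        dataReady = false;
        if ( done ) break;
    }
    worker.join();
}
The mutex/condition-variable logic is identical to the SDL version, but locking, unlocking and error handling are handled by the standard RAII types.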

I think you want SDL_WaitThread.
void SDL_WaitThread(SDL_Thread *thread, int *status);
The return code for the thread function is placed in the area pointed
to by status, if status is not NULL.
Have your avg function return the average.
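For example (a sketch based on the OP's demo, assuming the two-argument SDL_CreateThread signature used in the question):
#include <SDL/SDL_thread.h>
#include <iostream>

struct ThreadData { int a, b; };

int avg( void* data )
{
    ThreadData* tdata = (ThreadData*)data;
    return ( tdata->a + tdata->b ) / 2; // the return value becomes the thread's status
}

int main()
{
    ThreadData myThreadData = { 10, 20 };
    SDL_Thread* mythread = SDL_CreateThread( avg, &myThreadData );
    int average = 0;
    SDL_WaitThread( mythread, &average ); // blocks until avg() returns
    std::cout << "The average of a and b is " << average << std::endl;
}
Note that SDL_WaitThread blocks until the thread finishes, so this suits one-shot results; for the continuously updated string described in the edited question, the mutex/condition-variable approach from the other answer is more appropriate.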

Related

How to properly wait for condition variable in C++?

I'm trying to create an asynchronous I/O file reader in C++ under Linux. The example I have uses two buffers. The first read blocks. Then, on each pass around the main loop, I asynchronously launch the I/O and call process(), which runs the simulated processing of the current block. When processing is done, we wait on the condition variable. The idea is that the asynchronous handler should notify the condition variable.
Unfortunately the notify seems to happen before the wait, and it seems like this is not the way the condition variable wait() function works. How should I rewrite the code so that the loop waits until the asynchronous I/O has completed?
#include <aio.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <condition_variable>
#include <cstring>
#include <iostream>
#include <thread>
using namespace std;
using namespace std::chrono_literals;
constexpr uint32_t blockSize = 512;
mutex readMutex;
condition_variable cv;
int fh;
int bytesRead;
void process(char* buf, uint32_t bytesRead) {
cout << "processing..." << endl;
usleep(100000);
}
void aio_completion_handler(sigval_t sigval) {
struct aiocb* req = (struct aiocb*)sigval.sival_ptr;
// check whether asynch operation is complete
if (aio_error(req) == 0) {
int ret = aio_return(req);
bytesRead = req->aio_nbytes;
cout << "ret == " << ret << endl;
cout << (char*)req->aio_buf << endl;
}
{
unique_lock<mutex> readLock(readMutex);
cv.notify_one();
}
}
void thready() {
char* buf1 = new char[blockSize];
char* buf2 = new char[blockSize];
aiocb cb;
char* processbuf = buf1;
char* readbuf = buf2;
fh = open("smallfile.dat", O_RDONLY);
if (fh < 0) {
throw std::runtime_error("cannot open file!");
}
memset(&cb, 0, sizeof(aiocb));
cb.aio_fildes = fh;
cb.aio_nbytes = blockSize;
cb.aio_offset = 0;
// Fill in callback information
/*
Using SIGEV_THREAD to request a thread callback function as a notification
method
*/
cb.aio_sigevent.sigev_notify_attributes = nullptr;
cb.aio_sigevent.sigev_notify = SIGEV_THREAD;
cb.aio_sigevent.sigev_notify_function = aio_completion_handler;
/*
The context to be transmitted is loaded into the handler (in this case, a
reference to the aiocb request itself). In this handler, we simply refer to
the arrived sigval pointer and use the AIO function to verify that the request
has been completed.
*/
cb.aio_sigevent.sigev_value.sival_ptr = &cb;
int currentBytesRead = read(fh, buf1, blockSize); // read the 1st block
while (true) {
cb.aio_buf = readbuf;
aio_read(&cb); // each next block is read asynchronously
process(processbuf, currentBytesRead); // process while waiting
{
unique_lock<mutex> readLock(readMutex);
cv.wait(readLock);
}
currentBytesRead = bytesRead; // make local copy of global modified by the asynch code
if (currentBytesRead < blockSize) {
break; // last time, get out
}
cout << "back from wait" << endl;
swap(processbuf, readbuf); // switch to other buffer for next time
currentBytesRead = bytesRead; // create local copy
}
delete[] buf1;
delete[] buf2;
}
int main() {
try {
thready();
} catch (std::exception& e) {
cerr << e.what() << '\n';
}
return 0;
}
A condition variable should generally be used for:
waiting until it is possible that the predicate (for example a shared variable) has changed, and
notifying waiting threads that the predicate may have changed, so that waiting threads should check the predicate again.
However, you seem to be attempting to use the state of the condition variable itself as the predicate. This is not how condition variables are supposed to be used and may lead to race conditions such as those described in your question. Another reason to always check the predicate is that spurious wakeups are possible with condition variables.
In your case, it would probably be appropriate to create a shared variable
bool operation_completed = false;
and use that variable as the predicate for the condition variable. Access to that variable should always be controlled by the mutex.
You can then change the lines
{
unique_lock<mutex> readLock(readMutex);
cv.notify_one();
}
to
{
unique_lock<mutex> readLock(readMutex);
operation_completed = true;
cv.notify_one();
}
and change the lines
{
unique_lock<mutex> readLock(readMutex);
cv.wait(readLock);
}
to:
{
unique_lock<mutex> readLock(readMutex);
while ( !operation_completed )
cv.wait(readLock);
}
Instead of
while ( !operation_completed )
cv.wait(readLock);
you can also write
cv.wait( readLock, []{ return operation_completed; } );
which is equivalent. See the documentation of std::condition_variable::wait for further information.
Of course, operation_completed should also be set back to false when appropriate, while the mutex is locked.
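For example, the waiting side could reset the flag right after waking up, while the mutex is still held (a sketch using the variables introduced above):
{
    unique_lock<mutex> readLock(readMutex);
    cv.wait(readLock, []{ return operation_completed; });
    operation_completed = false; // reset under the mutex, ready for the next operation
}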

C++ callback timer implementation

I have found the following implementation for a callback timer to use in my C++ application. However, this implementation requires me to "join" the thread from the caller of start, which effectively blocks the caller of the start function.
What I would really like to do is the following:
someone can call foo(data) multiple times and store the data in a db.
whenever foo(data) is called, it initiates a timer for a few seconds.
while the timer is counting down, foo(data) can be called several more times and multiple items can be stored, but erase is not called until the timer finishes.
whenever the timer is up, the "remove" function is called once to remove all the records from the db.
Basically I want to be able to do a task, wait a few seconds, and then do a single batch task B.
class CallBackTimer {
public:
/**
* Constructor of the CallBackTimer
*/
CallBackTimer() :_execute(false) { }
/**
* Destructor
*/
~CallBackTimer() {
if (_execute.load(std::memory_order_acquire)) {
stop();
};
}
/**
* Stops the timer
*/
void stop() {
_execute.store(false, std::memory_order_release);
if (_thd.joinable()) {
_thd.join();
}
}
/**
* Start the timer function
* @param interval Repeating duration in milliseconds, 0 indicates the func will run only once
* @param delay Time in milliseconds to wait before the first callback
* @param func Callback function
*/
void start(int interval, int delay, std::function<void(void)> func) {
if(_execute.load(std::memory_order_acquire)) {
stop();
};
_execute.store(true, std::memory_order_release);
_thd = std::thread([this, interval, delay, func]() {
std::this_thread::sleep_for(std::chrono::milliseconds(delay));
if (interval == 0) {
func();
stop();
} else {
while (_execute.load(std::memory_order_acquire)) {
func();
std::this_thread::sleep_for(std::chrono::milliseconds(interval));
}
}
});
}
/**
* Check if the timer is currently running
* @return bool, true if timer is running, false otherwise.
*/
bool is_running() const noexcept {
return ( _execute.load(std::memory_order_acquire) && _thd.joinable() );
}
private:
std::atomic<bool> _execute;
std::thread _thd;
};
I have tried modifying the above code using thread.detach(). However, I am running into issues with the detached thread not being able to write to (erase from) the database.
Any help and suggestions are appreciated!
Rather than managing threads yourself you could use std::async. The following class will process the queued strings in order 4 seconds after the last string is added. Only one async task is launched at a time, and std::async takes care of all the threading for you.
If there are unprocessed items in the queue when the class is destructed then the async task stops without waiting and these items aren't processed (but this would be easy to change if it's not your desired behaviour).
#include <iostream>
#include <string>
#include <future>
#include <mutex>
#include <chrono>
#include <queue>
class Batcher
{
public:
Batcher()
: taskDelay( 4 ),
startTime( std::chrono::steady_clock::now() ) // only used for debugging
{
}
void queue( const std::string& value )
{
std::unique_lock< std::mutex > lock( mutex );
std::cout << "queuing '" << value << " at " << std::chrono::duration_cast< std::chrono::milliseconds >( std::chrono::steady_clock::now() - startTime ).count() << "ms\n";
work.push( value );
// increase the time to process the queue to "now + 4 seconds"
timeout = std::chrono::steady_clock::now() + taskDelay;
if ( !running )
{
// launch a new asynchronous task which will process the queue
task = std::async( std::launch::async, [this]{ processWork(); } );
running = true;
}
}
~Batcher()
{
std::unique_lock< std::mutex > lock( mutex );
// stop processing the queue
closing = true;
bool wasRunning = running;
condition.notify_all();
lock.unlock();
if ( wasRunning )
{
// wait for the async task to complete
task.wait();
}
}
private:
std::mutex mutex;
std::condition_variable condition;
std::chrono::seconds taskDelay;
std::chrono::steady_clock::time_point timeout;
std::queue< std::string > work;
std::future< void > task;
bool closing = false;
bool running = false;
std::chrono::steady_clock::time_point startTime;
void processWork()
{
std::unique_lock< std::mutex > lock( mutex );
// loop until std::chrono::steady_clock::now() > timeout
auto wait = timeout - std::chrono::steady_clock::now();
while ( !closing && wait > std::chrono::seconds( 0 ) )
{
condition.wait_for( lock, wait );
wait = timeout - std::chrono::steady_clock::now();
}
if ( !closing )
{
std::cout << "processing queue at " << std::chrono::duration_cast< std::chrono::milliseconds >( std::chrono::steady_clock::now() - startTime ).count() << "ms\n";
while ( !work.empty() )
{
std::cout << work.front() << "\n";
work.pop();
}
std::cout << std::flush;
}
else
{
std::cout << "aborting queue processing at " << std::chrono::duration_cast< std::chrono::milliseconds >( std::chrono::steady_clock::now() - startTime ).count() << "ms with " << work.size() << " remaining items\n";
}
running = false;
}
};
int main()
{
Batcher batcher;
batcher.queue( "test 1" );
std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
batcher.queue( "test 2" );
std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
batcher.queue( "test 3" );
std::this_thread::sleep_for( std::chrono::seconds( 2 ) );
batcher.queue( "test 4" );
std::this_thread::sleep_for( std::chrono::seconds( 5 ) );
batcher.queue( "test 5" );
}

In C++11, is it wise (or even safe) to use std::unique_lock<std::mutex> as a class member? If so, are there any guidelines?

Is it wise (or even safe) to use std::unique_lock as a class member? If so, are there any guidelines?
My thinking in using std::unique_lock was to ensure that the mutex is unlocked in the case of an exception being thrown.
The following code gives an example of how I'm currently using the unique_lock. I would like to know if I'm going in the wrong direction or not before the project grows too much.
#include <iostream>
#include <string>
#include <thread>
#include <mutex>
#include <unistd.h>
class WorkerClass {
private:
std::thread workerThread;
bool workerThreadRunning;
int workerThreadInterval;
int sharedResource;
std::mutex mutex;
std::unique_lock<std::mutex> workerMutex;
public:
WorkerClass() {
workerThreadRunning = false;
workerThreadInterval = 2;
sharedResource = 0;
workerMutex = std::unique_lock<std::mutex>(mutex);
unlockMutex();
}
~WorkerClass() {
stopWork();
}
void startWork() {
workerThreadRunning = true;
workerThread = std::thread(&WorkerClass::workerThreadMethod,
this);
}
void stopWork() {
lockMutex();
if (workerThreadRunning) {
workerThreadRunning = false;
unlockMutex();
workerThread.join();
}else {
unlockMutex();
}
}
void lockMutex() {
try {
workerMutex.lock();
}catch (std::system_error &error) {
std::cout << "Already locked" << std::endl;
}
}
void unlockMutex() {
try {
workerMutex.unlock();
}catch (std::system_error &error) {
std::cout << "Already unlocked" << std::endl;
}
}
int getSharedResource() {
int result;
lockMutex();
result = sharedResource;
unlockMutex();
return result;
}
void workerThreadMethod() {
bool isRunning = true;
while (isRunning) {
lockMutex();
sharedResource++;
std::cout << "WorkerThread: sharedResource = "
<< sharedResource << std::endl;
isRunning = workerThreadRunning;
unlockMutex();
sleep(workerThreadInterval);
}
}
};
int main(int argc, char *argv[]) {
int sharedResource;
WorkerClass *worker = new WorkerClass();
std::cout << "ThisThread: Starting work..." << std::endl;
worker->startWork();
for (int i = 0; i < 10; i++) {
sleep(1);
sharedResource = worker->getSharedResource();
std::cout << "ThisThread: sharedResource = "
<< sharedResource << std::endl;
}
worker->stopWork();
std::cout << "Done..." << std::endl;
return 0;
}
This is actually quite bad: storing a std::unique_lock or std::lock_guard as a member variable misses the point of scoped locking, and of locking in general.
The idea is to have a mutex shared between threads, with each thread locking it only temporarily while it touches the shared resource the lock protects. The scoped wrapper object makes that return-from-function-safe and exception-safe.
You first should think about your shared resource. In the context of "Worker" I'd imagine some task queue; that task queue is then associated with a lock. Each worker locks that lock with a scoped wrapper to queue or dequeue a task, as sketched below. There is no real reason to keep the lock held for as long as some worker thread instance is alive; it should be locked only when needed.
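As an illustrative sketch of that idea (the names are made up, this is not the OP's code): the mutex lives next to the queue it protects, and every access takes a short-lived scoped lock:
#include <mutex>
#include <queue>
#include <string>

class TaskQueue {
public:
    void push(std::string task) {
        std::lock_guard<std::mutex> lock(mutex); // locked only for the duration of this call
        tasks.push(std::move(task));
    }

    bool try_pop(std::string& task) {
        std::lock_guard<std::mutex> lock(mutex); // same mutex, another short critical section
        if (tasks.empty()) return false;
        task = std::move(tasks.front());
        tasks.pop();
        return true;
    }

private:
    std::mutex mutex;               // shared between the worker threads
    std::queue<std::string> tasks;  // the resource the mutex protects
};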
It is not a good idea to do that, for a number of reasons. The first one you're already "handling" with the try-catch blocks: calling lock() on a std::unique_lock that already owns the mutex throws an exception. If you want non-blocking lock attempts you should use try_lock instead.
The second reason is that when std::unique_lock is stack-allocated in the scope that needs the lock, it will unlock the mutex for you when it is destructed. This makes it exception safe: in your current code, if workerThread.join() throws then the lock remains acquired. A scope-based version of one of your accessors is sketched below.
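Applied to the OP's code, getSharedResource would then look roughly like this (keep the plain std::mutex member and drop the std::unique_lock member; this is a sketch, not a full rewrite):
int getSharedResource() {
    std::lock_guard<std::mutex> lock(mutex); // unlocked automatically, even if an exception is thrown
    return sharedResource;
}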

pthread_cond_wait wake many threads example

pthread_cond_wait wake many threads example
Code to wake up thread 1 & 3 on some broadcast from thread 0.
Setup: Win7 with mingw32, g++ 4.8.1 with mingw32-pthreads-w32
pthread condition variable
Solution:
http://pastebin.com/X8aQ5Fz8
#include <iostream>
#include <string>
#include <list>
#include <map>
#include <pthread.h>
#include <fstream>
#include <sstream> // for ostringstream
#define N_THREAD 7
using namespace std;
// Prototypes
int main();
int scheduler();
void *worker_thread(void *ptr);
string atomic_output(int my_int, int thread_id);
// Global variables
//pthread_t thread0, thread1, thread2, thread3, thread4, thread5, thread6, thread7;
pthread_t m_thread[N_THREAD];
int count = 1;
pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t condition_var = PTHREAD_COND_INITIALIZER;
// Main
int main() {
cout << "Launching main. \n";
//Start to monitor for exceptions
register_exception_handler();
//Start scheduler
scheduler();
return 0;
}
// Scheduler
int scheduler() {
// Starting scheduler log file
ofstream scheduler_log;
scheduler_log.open ("scheduler_log.txt");
//scheduler_log << "[Scheduler] Starting." << endl;
cout << "[Scheduler] Starting. \n";
// Scheduler::Main Section
int thread_id[N_THREAD];
for(int i=0;i<N_THREAD;i++) {
thread_id[i] = i;
pthread_create( &m_thread[i], NULL, worker_thread, (void *) &thread_id[i]);
}
for(int i=0;i<N_THREAD;i++)
pthread_join(m_thread[i], NULL);
cout << "[Scheduler] Ending. \n";
// Closing scheduler log file
scheduler_log.close();
return 0;
}
string atomic_output(int my_int, int thread_id) {
ostringstream stm;
stm << "Thread ";
stm << thread_id;
stm << ": ";
//count fn
stm << my_int;
stm << "\n";
//stm << "Finished. \n";
return stm.str();
}
void *worker_thread(void *ptr) {
string line;
//int boo = 0;
int thread_id = *(int *) ptr;
//if(thread_id == 0)
// pthread_mutex_lock( &count_mutex );
for(int i=0;i<10;i++) {
//boo++;
if (thread_id == 1) {
pthread_mutex_lock(&count_mutex);
while (count == 1) {
cout << "[Thread 1] Before pthread_cond_wait...\n";
pthread_cond_wait( &condition_var, &count_mutex );
cout << "[Thread 1] After pthread_cond_wait...\n";
}
pthread_mutex_unlock(&count_mutex);
}
if (thread_id == 3) {
pthread_mutex_lock(&count_mutex);
while (count == 1) {
cout << "[Thread 3] Before pthread_cond_wait...\n";
pthread_cond_wait( &condition_var, &count_mutex );
cout << "[Thread 3] After pthread_cond_wait...\n";
}
pthread_mutex_unlock(&count_mutex);
}
//count fn
line = atomic_output(i, *(int *)ptr);
cout << line;
if (i == 5) {
if(thread_id == 0) {
pthread_mutex_lock( &count_mutex );
count = 0;
pthread_mutex_unlock( &count_mutex );
pthread_cond_broadcast(&condition_var);
}
}
}
//line = atomic_output(0, *(int *)ptr);
//cout << line;
}
(old) -= What I've tried =-
*Edit: early problem in the code with while(0) instead of while(predicate). Keeping it there for easy reference with the comments.
Code 1: http://pastebin.com/rCbYjPKi
I tried wrapping pthread_cond_wait( &condition_var, &count_mutex ); in while(0),
with pthread_cond_broadcast(&condition_var); ... The thread does not respect the condition.
Output showing the condition not being respected: http://pastebin.com/GW1cg4fY
Thread 0: 0
Thread 0: 1
Thread 0: 2
Thread 0: 3
Thread 2: 0
Thread 6: 0
Thread 1: 0 <-- Here, Thread 1 is not supposed to tick before Thread 0 hits 5. Thread 0 is only at 3.
Code 2: http://pastebin.com/g3E0Mw9W
I tried pthread_cond_wait( &condition_var, &count_mutex ); in threads 1 and 3 and the program does not return.
Either thread 1 or thread 3 waits forever, even using broadcast, which is supposed to wake up all waiting threads. Obviously something is not working - code or lib?
More:
I've tried unlocking the mutex first, then broadcasting. I've tried broadcasting, then unlocking. Neither works.
I've tried using signal instead of broadcast - same problem.
References that I can't make work (top Google search results):
http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
http://docs.oracle.com/cd/E19455-01/806-5257/6je9h032r/index.html
http://www-01.ibm.com/support/knowledgecenter/ssw_i5_54/apis/users_76.htm
Code 3: http://pastebin.com/tKP7F8a8
Trying to use a predicate variable count to fix the race condition. Still a problem: it doesn't prevent thread 1 and thread 3 from running while thread 0 is between 0 and 5.
What would be the code to wake up threads 1 & 3 on some function call from thread 0?
if(thread_id == 0)
pthread_mutex_lock( &count_mutex );
for(int i=0;i<10;i++) {
//boo++;
if (thread_id == 1) {
while(0)
pthread_cond_wait( &condition_var, &count_mutex );
}
None of this makes any sense. The correct way to wait for a condition variable is:
pthread_mutex_lock(&mutex_associated_with_condition_variable);
while (!predicate)
    pthread_cond_wait(&condition_variable, &mutex_associated_with_condition_variable);
Notice:
The mutex must be locked.
The predicate (thing you are waiting for) must be checked before waiting.
The wait must be in a loop.
Breaking any of these three rules will cause the kind of problems you are seeing. Your main problem is that you break the second rule, waiting even when the thing you want to wait for has already happened.
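A minimal, self-contained illustration of those three rules (this is not the OP's scheduler; the names are made up):
#include <pthread.h>
#include <cstdio>

pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
bool ready = false; // the predicate

void *waiter(void *)
{
    pthread_mutex_lock(&mtx);           // rule 1: the mutex must be locked
    while (!ready)                      // rules 2 and 3: check the predicate, in a loop
        pthread_cond_wait(&cond, &mtx);
    std::printf("predicate is true, proceeding\n");
    pthread_mutex_unlock(&mtx);
    return NULL;
}

void *signaler(void *)
{
    pthread_mutex_lock(&mtx);
    ready = true;                       // change the predicate under the mutex
    pthread_mutex_unlock(&mtx);
    pthread_cond_broadcast(&cond);      // then wake every thread waiting on it
    return NULL;
}

int main()
{
    pthread_t w, s;
    pthread_create(&w, NULL, waiter, NULL);
    pthread_create(&s, NULL, signaler, NULL);
    pthread_join(w, NULL);
    pthread_join(s, NULL);
    return 0;
}
The waiter cannot miss the wakeup: either it sees ready == true before waiting, or it is already waiting (with the mutex released) when the broadcast arrives.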

terminating a running boost thread

I currently have a boost thread as such
class foo
{
private:
boost::shared_ptr<boost::thread> t;
public:
foo()
{
t = boost::make_shared<boost::thread>(&foo::SomeMethod,this);
}
void SomeMethod()
{
while(true)
{
.... //Does some work
boost::this_thread::sleep(boost::posix_time::milliseconds(5000)); //sleep for 5 seconds
}
}
void stopThread()
{
//Elegant and instant way of stopping thread t
}
};
I have read from this post that you have to define interruption points; however, I am not sure I understand how that would fit into my scenario. I am looking for a safe, elegant way to ensure that thread t is terminated.
You can't ever safely terminate a thread, you just need to tell it from the outside that it should stop. If you interrupt a thread, you don't know where you interrupted it and you could leave the system in an unknown state.
Instead of looping forever, you can check a variable (make sure it's thread safe though!) inside the thread's loop to see if the thread should exit. What I do in work threads is I have them wait on a condition variable, and then when there's work they wake up and do work, but when they're awake they also check the "shutdown" flag to see if they should exit.
A snippet of my code:
//-----------------------------------------------------------------------------
void Manager::ThreadMain() {
unique_lock<mutex> lock( m_work_mutex, std::defer_lock );
while( true ) {
lock.lock();
while( m_work_queue.empty() && !m_shutdown ) {
m_work_signal.wait( lock );
}
if( !m_work_queue.empty() ) {
// (process work...)
continue;
}
// quit if no work left and shutdown flag set.
if( m_shutdown ) return;
}
}
You could maybe get away with something like:
std::atomic<bool> stop_thread{ false };
void SomeMethod()
{
while( !stop_thread )
{
.... //Does some work
boost::this_thread::sleep(boost::posix_time::milliseconds(5000)); //sleep for 5 seconds
}
}
void stopThread()
{
stop_thread = true;
// join thread (wait for it to stop.)
t->join();
}
And let me tell you, sometimes it isn't easy to make something exit safely. A few weeks ago I had a big struggle with threaded console input. I ended up having to handle raw Windows console events and translate them into keystrokes myself, just so I could simultaneously intercept my custom shutdown event.
Use boost::thread's interrupt(). It works here because boost::this_thread::sleep is an interruption point, so the sleeping thread wakes up with a boost::thread_interrupted exception and terminates.
#include <iostream>
#include <boost/thread.hpp>
#include <boost/chrono.hpp>
class Foo
{
private:
boost::shared_ptr<boost::thread> t;
public:
Foo()
{
t = boost::make_shared<boost::thread>(&Foo::SomeMethod, this);
}
void SomeMethod()
{
std::cout << "thread starts" << std::endl;
while(true) {
std::cout << "." << std::endl;
boost::this_thread::sleep(boost::posix_time::seconds(1));
}
}
void stopThread()
{
t->interrupt();
t->join();
std::cout << "thread stopped" << std::endl;
}
};
int main()
{
Foo foo;
boost::this_thread::sleep(boost::posix_time::seconds(5));
foo.stopThread();
return 0;
}
Execute it
# g++ f.cpp -lboost_thread && ./a.out
thread starts
.
.
.
.
.
thread stopped