My case looks like this: The program uses two threads, let's call them "Sender" and "Recipient" because it is a mechanism of interprocess communication.
The "Sender" thread after sending the message stops at the condition provided by std::condition_variable and the .wait (Lock) function. The "Recipient" thread informs the waiting thread about the response to his message using .notify_one().
I'm happy with the way it works, but I want to add the ability to handle the timeout.
I prepared the following class (I would like it to be universal, so the notification function is supplied by an external class), but I'm sure it can be implemented better. I wanted to avoid high CPU usage, which is why I used std::this_thread::sleep_for, but I suppose it could somehow be replaced with std::this_thread::yield(). I would like to use e.g. std::future_status, but I do not know how to do it. How can this be improved? I can use standard C++11 or Boost 1.55.
class Timer
{
private:
int MsLimit;
std::atomic<bool> Stop;
std::atomic<bool> LimitReached;
std::thread T;
std::mutex M;
std::function<void()> NotifyWaitingThreadFunction;
void Timeout()
{
std::unique_lock<std::mutex> Lock(M);
std::chrono::system_clock::time_point TimerStart = std::chrono::system_clock::now();
std::chrono::duration<long long, std::milli> ElapsedTime;
unsigned int T = 0;
do
{
std::this_thread::sleep_for(std::chrono::milliseconds(5));
std::chrono::system_clock::time_point TimerEnd = std::chrono::system_clock::now();
ElapsedTime = std::chrono::duration_cast<std::chrono::milliseconds>(TimerEnd - TimerStart);
T += ElapsedTime.count();
if((T > MsLimit) && (!Stop))
{
LimitReached = true;
Stop = true;
}
} while(!Stop);
if(LimitReached)
{
NotifyWaitingThreadFunction();
}
}
public:
Timer(int Milliseconds) : MsLimit(Milliseconds)
{
}
void StartTimer()
{
Stop = false;
LimitReached = false;
T = std::thread(&Timer::Timeout,this);
}
void StopTimer()
{
std::unique_lock<std::mutex> Lock(M);
Stop = true;
LimitReached = false;
}
template<class T>
void AssignFunction(T* ObjectInstance, void (T::*MemberFunction)())
{
NotifyWaitingThreadFunction = std::bind(MemberFunction,ObjectInstance);
}
};
Your solution has one fault: the do-while loop always runs until MsLimit has elapsed. Once Timeout has started, the mutex M is held, so a call to StopTimer cannot break the loop; Stop is only set to true in StopTimer after M is released in Timeout, which happens only once (T > MsLimit) becomes true and the function ends. By the way, the mutex is redundant anyway, because Stop is atomic.
You can use one of the timers from the Boost library instead of writing your own.
The code below uses boost::asio::high_resolution_timer (Boost 1.55 has it):
class Timer
{
public:
Timer (int ms)
: timer(io), ms(ms) {}
~Timer() {if(t.joinable()) t.join();}
void Start() {
t = std::thread( [this]()
{
timer.expires_from_now(std::chrono::milliseconds(ms));
timer.async_wait([this](const boost::system::error_code& ec) // start async wait
{ // lambda is called when timeout expired or error occures
if (!ec) // if there is no error, call function
NotifyWaitingThreadFunction();
});
io.run(); // process async operations
});
}
void Stop() {
timer.cancel();
}
template<class T>
void AssignFunction(T* ObjectInstance, void (T::*MemberFunction)())
{
NotifyWaitingThreadFunction = std::bind(MemberFunction,ObjectInstance);
}
private:
boost::asio::io_service io; // needed for timer
boost::asio::high_resolution_timer timer;
std::thread t;
int ms;
std::function<void()> NotifyWaitingThreadFunction;
};
In the Start method a thread is created; in it we set the timeout value in ms and start the timer with async_wait. The lambda passed to async_wait is called when the timeout expires or an error occurs, so if there is no error you can call NotifyWaitingThreadFunction. To stop the timer, use the Stop method. Stop cancels the started asynchronous operation and the lambda is then called with ec == boost::asio::error::operation_aborted; in that case the lambda ends without calling NotifyWaitingThreadFunction.
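For illustration, here is a hypothetical usage sketch of the Timer above; the Waiter class and its OnTimeout() member are made-up names, not part of the original code:
struct Waiter {
void OnTimeout() { /* e.g. notify the condition_variable the Sender is waiting on */ }
};
int main() {
Waiter waiter;
Timer timeout(500); // 500 ms limit
timeout.AssignFunction(&waiter, &Waiter::OnTimeout);
timeout.Start(); // arms the asio timer on its own thread
// ... if the reply arrives before the limit:
timeout.Stop(); // cancel(): the lambda sees operation_aborted and does not notify
}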
I am trying to write a ThreadPool class
class ThreadPool {
public:
ThreadPool(size_t numberOfThreads):isAlive(true) {
for(int i =0; i < numberOfThreads; i++) {
workerThreads.push_back(std::thread(&ThreadPool::doJob, this));
}
#ifdef DEBUG
std::cout<<"Construction Complete"<<std::endl;
#endif
}
~ThreadPool() {
#ifdef DEBUG
std::cout<<"Destruction Start"<<std::endl;
#endif
isAlive = false;
conditionVariable.notify_all();
waitForExecution();
#ifdef DEBUG
std::cout<<"Destruction Complete"<<std::endl;
#endif
}
void waitForExecution() {
for(std::thread& worker: workerThreads) {
worker.join();
}
}
void addWork(std::function<void()> job) {
#ifdef DEBUG
std::cout<<"Adding work"<<std::endl;
#endif
std::unique_lock<std::mutex> lock(lockListMutex);
jobQueue.push_back(job);
conditionVariable.notify_one();
}
private:
// performs actual work
void doJob() {
// try {
while(isAlive) {
#ifdef DEBUG
std::cout<<"Do Job"<<std::endl;
#endif
std::unique_lock<std::mutex> lock(lockListMutex);
if(!jobQueue.empty()) {
#ifdef DEBUG
std::cout<<"Next Job Found"<<std::endl;
#endif
std::function<void()> job = jobQueue.front();
jobQueue.pop_front();
job();
}
conditionVariable.wait(lock);
}
}
// a vector containing worker threads
std::vector<std::thread> workerThreads;
// a queue for jobs
std::list<std::function<void()>> jobQueue;
// a mutex for synchronized insertion and deletion from list
std::mutex lockListMutex;
std::atomic<bool> isAlive;
// condition variable to track whether or not there is a job in queue
std::condition_variable conditionVariable;
};
I am adding work to this thread pool from my main thread. My problem is that calling waitForExecution() results in the main thread waiting forever. I need to be able to terminate the threads when all work is done and continue main-thread execution from there. How should I proceed here?
The first step when writing a robust thread pool is to split the queue from the management of threads. A thread-safe queue is hard enough to write on its own, and so is managing threads.
A thread-safe queue looks like:
template<class T>
struct threadsafe_queue {
boost::optional<T> pop() {
std::unique_lock<std::mutex> l(m);
cv.wait(l, [&]{ return aborted || !data.empty(); } );
if (aborted) return {};
T retval = std::move(data.front());
data.pop();
return retval;
}
void push( T t )
{
std::unique_lock<std::mutex> l(m);
if (aborted) return;
data.push( std::move(t) );
cv.notify_one();
}
void abort()
{
std::unique_lock<std::mutex> l(m);
aborted = true;
data = {};
cv.notify_all();
}
~threadsafe_queue() { abort(); }
private:
std::mutex m;
std::condition_variable cv;
std::queue< T > data;
bool aborted = false;
};
where pop returns an empty optional when the queue is aborted.
Now our thread pool is easy:
struct threadpool {
explicit threadpool(std::size_t n) { add_threads(n); }
threadpool() = default;
~threadpool(){ abort(); }
void add_thread() { add_threads(1); }
void add_threads(std::size_t n)
{
for (std::size_t i = 0; i < n; ++i)
threads.push_back( std::thread( [this]{ do_thread_work(); } ) );
}
template<class F>
auto add_task( F && f )
{
using R = std::result_of_t< F&() >;
// a packaged_task invokes f and fulfils the promise for us (works for void results too)
auto pptr = std::make_shared<std::packaged_task<R()>>( std::forward<F>(f) );
auto future = pptr->get_future();
tasks.push([pptr]{ (*pptr)(); });
return future;
}
void abort()
{
tasks.abort();
while (!threads.empty()) {
threads.back().join();
threads.pop_back();
}
}
private:
threadsafe_queue< std::function<void()> > tasks;
std::vector< std::thread > threads;
void do_thread_work() {
while (auto f = tasks.pop()) {
(*f)();
}
}
};
Note that if you abort, outstanding futures are completed with a broken-promise exception.
Worker threads stop running when the queue they are feeding from is aborted. The main thread on abort() will wait for the worker threads to finish (as is wise).
This does mean that worker-thread tasks must also terminate, or the main thread will hang. There is no way to avoid this; often, your worker threads' tasks need a cooperative way to receive a message saying they should abort early.
Boost has a thread pool that integrates with its threading primitives and permits a less cooperative abort; in it, all mutex-type operations implicitly check an abort flag, and if they see it the operation throws.
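For completeness, a small usage sketch of the pool above (the main() and the task are added purely for illustration):
int main() {
threadpool pool(4); // four worker threads
auto f = pool.add_task([]{ return 6 * 7; });
std::cout << f.get() << "\n"; // blocks until a worker has run the task
} // ~threadpool() aborts the queue and joins the workers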
How should I proceed here?
Well, you should learn to use your debugger, which should show you exactly where each of the threads you want to join is stopped.
I'm going to tell you what looks wrong, but strongly encourage you to do that first. It's invaluable.
OK, now: your condition variable loop is wrong.
The correct pattern is the one that behaves like the second form, with the predicate argument, here:
while (!pred()) {
wait(lock);
}
Specifically, if your predicate is already true, you must not call wait: the notification that made it true has already been delivered, and another one may never come.
Try
// wait until we have something to do
while(jobQueue.empty() && isAlive) {
conditionVariable.wait(lock);
}
// unless we're exiting, we must have a job
if (isAlive) {
#ifdef DEBUG
std::cout<<"Next Job Found"<<std::endl;
#endif
std::function<void()> job = jobQueue.front();
jobQueue.pop_front();
job();
}
Imagine your thread is running a job when you call notify_all - it will call wait after the notification has already happened, and it isn't coming again. Since it doesn't check isAlive between finishing the job and calling wait, it's going to wait forever.
Even without the shutdown problem it would be wrong, because it should keep consuming jobs while there is work to do, instead of blocking every time it finishes one. Which reminds me of the last issue - you should probably unlock the mutex while executing the job (and re-lock it afterwards) - otherwise your pool is single-threaded.
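For that last point, a sketch of running the job outside the lock, using the names from the ThreadPool above:
std::function<void()> job = jobQueue.front();
jobQueue.pop_front();
lock.unlock(); // other workers can take jobs while this one runs
job();
lock.lock(); // re-acquire before touching jobQueue or waiting again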
I'm trying to make a stopwatch class which has a separate thread that counts down.
The issue is that when I use the cancel function, or try setting up the stopwatch again after a timeout, my program crashes. I am fairly sure it's due to some threading issue, probably because of misuse. Can anyone tell me why this doesn't work and help me get it working?
Stopwatch.h
class Stopwatch
{
public:
void Run(uint64_t ticks);
void Set(uint64_t ms);
void Cancel();
private:
std::thread mythread;
};
Stopwatch.cpp
void Stopwatch::Run(uint64_t ticks)
{
uint64_t clockCycles = ticks;
while(clockCycles > 0){
std::this_thread::sleep_for(std::chrono::milliseconds(1));
clockCycles--;
}
//do anything timeout related, probab a cout in the future
}
void Stopwatch::Set(uint64_t ms)
{
mythread = std::thread(&Stopwatch::Run, this, ms);
}
void Stopwatch::Cancel()
{
mythread.join();
}
What I want is to set a time on the stopwatch and get some timeout reaction. With the cancel function it can be stopped at any time. After that, the set function can restart it.
As noted in the comments, you must always eventually call join or detach on a std::thread which is joinable (i.e., has or had an associated running thread). The code fails to do this in three places. In Set, std::thread's move-assignment is used to assign to the thread without regard to its previous contents; calling Set multiple times without calling Cancel is therefore guaranteed to call std::terminate. There is also no such call in the move-assignment operator or destructor of Stopwatch.
Second, because Cancel just calls join, it will wait for the timeout to happen and the timeout code to execute before returning, instead of canceling the timer. To cancel the timer, one must notify the thread, and on the other end the Run loop in the thread must have a way to be notified. The traditional way to do this is with a condition_variable.
For example:
class Stopwatch
{
public:
Stopwatch() = default;
Stopwatch(const Stopwatch&) = delete;
Stopwatch(Stopwatch&&) = delete;
Stopwatch& operator=(const Stopwatch&) = delete;
Stopwatch& operator=(Stopwatch&& other) = delete;
~Stopwatch()
{
Cancel();
}
void Run(uint64_t ticks)
{
std::unique_lock<std::mutex> lk{mutex_};
cv_.wait_for(lk, std::chrono::milliseconds(ticks), [&] { return canceled_; });
if (!canceled_)
/* timeout code here */;
}
void Set(uint64_t ms)
{
Cancel();
canceled_ = false; // reset so a re-armed stopwatch can time out again
thread_ = std::thread(&Stopwatch::Run, this, ms);
}
void Cancel()
{
if (thread_.joinable())
{
{
std::lock_guard<std::mutex> lk{mutex_};
canceled_ = true;
}
cv_.notify_one();
thread_.join();
}
}
private:
bool canceled_ = false;
std::condition_variable cv_;
std::mutex mutex_;
std::thread thread_;
};
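Hypothetical usage of the Stopwatch above (the millisecond values are arbitrary):
Stopwatch sw;
sw.Set(500); // arm a 500 ms timeout on its own thread
// ...
sw.Cancel(); // wakes the worker via the condition variable and joins it
sw.Set(1000); // safe to re-arm afterwards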
I'm wondering what the best (cleanest, hardest to mess up) method for cleanup is in this situation.
void MyClass::do_stuff(boost::asio::yield_context context) {
while (running_) {
uint32_t data = async_buffer->Read(context);
// do other stuff
}
}
Read is a call which asynchronously waits until there is data to be read, then returns that data. If I want to delete this instance of MyClass, how can I make sure I do so properly? Let's say that the asynchronous wait here is performed via a deadline_timer's async_wait. If I cancel the event, I still have to wait for the thread to finish executing the "other stuff" before I know things are in a good state (I can't join the thread, as it's a thread that belongs to the io service that may also be handling other jobs). I could do something like this:
MyClass::~MyClass() {
running_ = false;
read_event->CancelEvent(); // some way to cancel the deadline_timer the Read is waiting on
boost::mutex::scoped_lock lock(finished_mutex_);
if (!finished_) {
cond_.wait(lock);
}
// any other cleanup
}
void MyClass::do_stuff(boost::asio::yield_context context) {
while (running_) {
uint32_t data = async_buffer->Read(context);
// do other stuff
}
boost::mutex::scoped_lock lock(finished_mutex_);
finished_ = true;
cond.notify();
}
But I'm hoping to make these stackful coroutines as easy to use as possible, and it's not straightforward for people to recognize that this condition exists and what would need to be done to make sure things are cleaned up properly. Is there a better way? Is what I'm trying to do here wrong at a more fundamental level?
Also, for the event (what I have is basically the same as Tanner's answer here), cancelling it would require keeping some extra state (a true cancel vs. the normal cancel used to fire the event), which wouldn't be appropriate if there were multiple pieces of logic waiting on that same event. I would love to hear if there's a better way to model an asynchronous event to be used with coroutine suspend/resume.
Thanks.
EDIT: Thanks #Sehe, took a shot at a working example, I think this illustrates what I'm getting at:
class AsyncBuffer {
public:
AsyncBuffer(boost::asio::io_service& io_service) :
write_event_(io_service) {
write_event_.expires_at(boost::posix_time::pos_infin);
}
void Write(uint32_t data) {
buffer_.push_back(data);
write_event_.cancel();
}
uint32_t Read(boost::asio::yield_context context) {
if (buffer_.empty()) {
write_event_.async_wait(context);
}
uint32_t data = buffer_.front();
buffer_.pop_front();
return data;
}
protected:
boost::asio::deadline_timer write_event_;
std::list<uint32_t> buffer_;
};
class MyClass {
public:
MyClass(boost::asio::io_service& io_service) :
running_(false), io_service_(io_service), buffer_(io_service) {
}
void Run(boost::asio::yield_context context) {
while (running_) {
boost::system::error_code ec;
uint32_t data = buffer_.Read(context[ec]);
// do something with data
}
}
void Write(uint32_t data) {
buffer_.Write(data);
}
void Start() {
running_ = true;
boost::asio::spawn(io_service_, boost::bind(&MyClass::Run, this, _1));
}
protected:
boost::atomic_bool running_;
boost::asio::io_service& io_service_;
AsyncBuffer buffer_;
};
So here, let's say that the buffer is empty and MyClass::Run is currently suspended while making a call to Read, so there's a deadline_timer.async_wait that's waiting for the event to fire to resume that context. It's time to destroy this instance of MyClass, so how do we make sure that it gets done cleanly.
A more typical approach would be to use boost::enable_shared_from_this with MyClass, and run the methods as bound to the shared pointer.
Boost Bind supports binding to boost::shared_ptr<MyClass> transparently.
This way, you can automatically have the destructor run only when the last user disappears.
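A rough sketch of the idea, assuming MyClass derives from boost::enable_shared_from_this<MyClass> (the full example further down does the same thing via a strand):
void MyClass::Start() {
running_ = true;
// the coroutine holds a shared_ptr, so the object outlives the running coroutine:
boost::asio::spawn(io_service_, boost::bind(&MyClass::Run, shared_from_this(), _1));
}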
If you create a SSCCE, I'm happy to change it around, to show what I mean.
UPDATE
To the SSCCE, some remarks:
I imagined a pool of threads running the IO service
The way in which MyClass calls into AsyncBuffer member functions directly is not thread-safe. There is actually no thread-safe way to cancel the event outside the producer thread[1], since the producer already accesses the buffer for Writing. This could be mitigated using a strand (in the current setup I don't see how MyClass would otherwise be thread-safe). Alternatively, look at the active object pattern (for which Tanner has an excellent answer[2] on SO).
I chose the strand approach here, for simplicity, so we do:
void MyClass::Write(uint32_t data) {
strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
}
You ask
Also, for the event (what I have is basically the same as Tanner's answer here) I need to cancel it in a way that I'd have to keep some extra state (a true cancel vs. the normal cancel used to fire the event)
The most natural place for this state is the usual one for the deadline_timer: its deadline. Stopping the buffer is done by resetting the timer:
void AsyncBuffer::Stop() { // not threadsafe!
write_event_.expires_from_now(boost::posix_time::seconds(-1));
}
This at once cancels the timer, but is detectable because the deadline is in the past.
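On the waiting side the wakeup can then be classified along these lines (a sketch assuming the ec and context variables from Read; the full code below performs a similar check in AsyncBuffer::Read):
write_event_.async_wait(context[ec]);
if (ec == boost::asio::error::operation_aborted && write_event_.expires_from_now().is_negative()) {
// Stop() moved the deadline into the past: shut down
} else if (ec == boost::asio::error::operation_aborted) {
// Write() called cancel(): data is available
}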
Here's a simple demo with a group of IO service threads, one "producer coroutine" that produces random numbers, and a "sniper thread" that snipes the MyClass::Run coroutine after 2 seconds. The sniping is done from a small detached thread while the main thread just waits long enough to observe the shutdown.
See it Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/async_result.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <list>
#include <iostream>
// for refcounting:
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
namespace asio = boost::asio;
class AsyncBuffer {
friend class MyClass;
protected:
AsyncBuffer(boost::asio::io_service &io_service) : write_event_(io_service) {
write_event_.expires_at(boost::posix_time::pos_infin);
}
void Write(uint32_t data) {
buffer_.push_back(data);
write_event_.cancel();
}
uint32_t Read(boost::asio::yield_context context) {
if (buffer_.empty()) {
boost::system::error_code ec;
write_event_.async_wait(context[ec]);
if (ec != boost::asio::error::operation_aborted || write_event_.expires_from_now().is_negative())
{
if (context.ec_)
*context.ec_ = boost::asio::error::operation_aborted;
return 0;
}
}
uint32_t data = buffer_.front();
buffer_.pop_front();
return data;
}
void Stop() {
write_event_.expires_from_now(boost::posix_time::seconds(-1));
}
private:
boost::asio::deadline_timer write_event_;
std::list<uint32_t> buffer_;
};
class MyClass : public boost::enable_shared_from_this<MyClass> {
boost::atomic_bool stopped_;
public:
MyClass(boost::asio::io_service &io_service) : stopped_(false), buffer_(io_service), strand_(io_service) {}
void Run(boost::asio::yield_context context) {
while (!stopped_) {
boost::system::error_code ec;
uint32_t data = buffer_.Read(context[ec]);
if (ec == boost::asio::error::operation_aborted)
break;
// do something with data
std::cout << data << " " << std::flush;
}
std::cout << "EOF\n";
}
bool Write(uint32_t data) {
if (!stopped_) {
strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
}
return !stopped_;
}
void Start() {
if (!stopped_) {
stopped_ = false;
boost::asio::spawn(strand_, boost::bind(&MyClass::Run, shared_from_this(), _1));
}
}
void Stop() {
stopped_ = true;
strand_.post(boost::bind(&AsyncBuffer::Stop, &buffer_));
}
~MyClass() {
std::cout << "MyClass destructed because no coroutines hold a reference to it anymore\n";
}
protected:
AsyncBuffer buffer_;
boost::asio::strand strand_;
};
int main()
{
boost::thread_group tg;
asio::io_service svc;
{
// Start the consumer:
auto instance = boost::make_shared<MyClass>(svc);
instance->Start();
// Sniper in 2 seconds :)
boost::thread([instance]{
boost::this_thread::sleep_for(boost::chrono::seconds(2));
instance->Stop();
}).detach();
// Start the producer:
auto producer_coro = [instance, &svc](asio::yield_context c) { // a bound function/function object in C++03
asio::deadline_timer tim(svc);
while (instance->Write(rand())) {
tim.expires_from_now(boost::posix_time::milliseconds(200));
tim.async_wait(c);
}
};
asio::spawn(svc, producer_coro);
// Start the service threads:
for(size_t i=0; i < boost::thread::hardware_concurrency(); ++i)
tg.create_thread(boost::bind(&asio::io_service::run, &svc));
}
// now `instance` is out of scope, it will selfdestruct after the snipe
// completed
boost::this_thread::sleep_for(boost::chrono::seconds(3)); // wait longer than the snipe
std::cout << "This is the main thread _after_ MyClass self-destructed correctly\n";
// cleanup service threads
tg.join_all();
}
[1] logical thread, this could be a coroutine that gets resumed on different threads
[2] boost::asio and Active Object
I want to create an event loop class that will run on its own thread, support adding tasks as std::functions, and execute them.
For this, I am using the SafeQueue from here: https://stackoverflow.com/a/16075550/1069662
class EventLoop
{
public:
typedef std::function<void()> Task;
EventLoop() { stop_ = false; }
void add_task(Task t) { queue.enqueue(t); }
void start();
void stop() { stop_ = true; }
private:
SafeQueue<Task> queue;
std::atomic<bool> stop_;
};
void EventLoop::start()
{
while (!stop_) {
Task t = queue.dequeue(); // Blocking call
if (!stop_) {
t();
}
}
std::cout << "Exit Loop";
}
Then, you would use it like this:
EventLoop loop;
std::thread t(&EventLoop::start, &loop);
loop.add_task(myTask);
// do smth else
loop.stop();
t.join();
My question is: how do I stop the thread gracefully?
Here stop() cannot exit the loop because of the blocking queue call.
Queue up a 'poison pill' stop task. That unblocks the queue wait and either directly requests the thread to clean up and exit, or allows the consumer thread to check a 'stop' boolean.
That's assuming you need to stop the threads/tasks at all before the app terminates. I usually try not to do that, if I can get away with it.
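A minimal sketch of the poison-pill idea, reusing the EventLoop above; stop() would become:
void EventLoop::stop()
{
stop_ = true; // checked by start() after every dequeue
add_task([]{}); // no-op "poison pill" whose only purpose is to unblock the dequeue
}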
An alternative approach: just queue up a task that throws an exception. A few changes to your code:
class EventLoop {
// ...
class stopexception {};
// ...
void stop()
{
// queue a task whose only job is to throw a stopexception
add_task([]{ throw stopexception(); });
}
};
void EventLoop::start()
{
try {
while (1)
{
Task t = queue.dequeue(); // Blocking call
t();
}
} catch (const stopexception &e)
{
cout << "Exit Loop";
}
}
An alternative that doesn't use exceptions, for those who are allergic to them, would be to redefine Task as a function that takes an EventLoop reference as its sole parameter, and stop() queues up a task that sets the flag that breaks out of the main loop.
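A sketch of that exception-free variant (the Task signature changes, so the loop now calls t(*this)):
class EventLoop
{
public:
typedef std::function<void(EventLoop&)> Task;
void add_task(Task t) { queue.enqueue(t); }
void stop() { add_task([](EventLoop &l) { l.stop_ = true; }); }
void start()
{
while (!stop_) {
Task t = queue.dequeue(); // Blocking call
t(*this);
}
}
private:
SafeQueue<Task> queue;
std::atomic<bool> stop_{false};
};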
I am currently running function Foo from somebody else's library in a single-threaded application. Most of the time a call to Foo returns really quickly; sometimes it takes forever. I am not a patient man: if Foo is going to take forever, I want to stop executing Foo and not call it with those arguments.
What is the best way to call Foo in a controlled manner (my current environment is POSIX/C++) such that I can stop execution after a certain number of seconds. I feel like the right thing to do here is to create a second thread to call Foo, while in my main thread I create a timer function that will eventually signal the second thread if it runs out of time.
Is there another, more apt model (and solution)? If not, would Boost's Signals2 library and Threads do the trick?
You can call Foo on a second thread with a timeout. For example:
#include <boost/date_time.hpp>
#include <boost/thread/thread.hpp>
boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(500);
boost::thread thrd(&Foo);
if (thrd.timed_join(timeout))
{
//finished
}
else
{
//Not finished;
}
You can use the following class:
class timer
{
typedef boost::signals2::signal<void ()> timeout_slot;
public:
typedef timeout_slot::slot_type timeout_slot_t;
public:
timer() : _interval(0), _is_active(false) {};
timer(int interval) : _interval(interval), _is_active(false) {};
virtual ~timer() { stop(); };
inline boost::signals2::connection connect(const timeout_slot_t& subscriber) { return _signalTimeout.connect(subscriber); };
void start()
{
boost::lock_guard<boost::mutex> lock(_guard);
if (is_active())
return; // Already executed.
if (_interval <= 0)
return;
_timer_thread.interrupt();
_timer_thread.join();
timer_worker job;
_timer_thread = boost::thread(job, this);
_is_active = true;
};
void stop()
{
boost::lock_guard<boost::mutex> lock(_guard);
if (!is_active())
return; // Already executed.
_timer_thread.interrupt();
_timer_thread.join();
_is_active = false;
};
inline bool is_active() const { return _is_active; };
inline int get_interval() const { return _interval; };
void set_interval(const int msec)
{
if (msec <= 0 || _interval == msec)
return;
boost::lock_guard<boost::mutex> lock(_guard);
// Keep timer activity status.
bool was_active = is_active();
if (was_active)
stop();
// Initialize timer with new interval.
_interval = msec;
if (was_active)
start();
};
protected:
friend struct timer_worker;
// The timer worker thread.
struct timer_worker
{
void operator()(timer* t)
{
boost::posix_time::milliseconds duration(t->get_interval());
try
{
while (1)
{
boost::this_thread::sleep<boost::posix_time::milliseconds>(duration);
{
boost::this_thread::disable_interruption di;
{
t->_signalTimeout();
}
}
}
}
catch (boost::thread_interrupted const& )
{
// Handle the thread interruption exception.
// This exception raises on boots::this_thread::interrupt.
}
};
};
protected:
int _interval;
bool _is_active;
boost::mutex _guard;
boost::thread _timer_thread;
// Signal slots
timeout_slot _signalTimeout;
};
An example of usage:
int _test_timer_count = 0;
void _test_timer_handler()
{
_test_timer_count++;
std::cout << "_test_timer_handler\n";
}
BOOST_AUTO_TEST_CASE( test_timer )
{
emtorrus::timer timer;
BOOST_CHECK(!timer.is_active());
BOOST_CHECK(timer.get_interval() == 0);
timer.set_interval(1000);
timer.connect(_test_timer_handler);
timer.start();
BOOST_CHECK(timer.is_active());
std::cout << "timer test started\n";
boost::this_thread::sleep<boost::posix_time::milliseconds>(boost::posix_time::milliseconds(5500));
timer.stop();
BOOST_CHECK(!timer.is_active());
BOOST_CHECK(_test_timer_count == 5);
}
You can also set an alarm right before calling that function, and catch SIGALRM.
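A very rough sketch of that alarm()/SIGALRM approach (POSIX; call_with_timeout and on_alarm are made-up names). Note that jumping out of a signal handler skips the destructors of anything Foo had on its stack, so this is only reasonable if Foo is plain C-like code:
#include <setjmp.h>
#include <signal.h>
#include <unistd.h>
static sigjmp_buf jump_buffer;
static void on_alarm(int) { siglongjmp(jump_buffer, 1); }
bool call_with_timeout(void (*foo)(), unsigned seconds)
{
signal(SIGALRM, on_alarm);
if (sigsetjmp(jump_buffer, 1) == 0) {
alarm(seconds); // deliver SIGALRM after `seconds` seconds
foo();
alarm(0); // finished in time: cancel the pending alarm
return true;
}
return false; // SIGALRM fired: Foo was abandoned mid-call
}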
Vlad, excellent post! Your code compiled and works beautifully. I implemented a software watchdog timer with it. I made a few modifications:
To prevent a dangling pointer, store the signal in a boost::shared_ptr and pass it to the thread worker instead of a raw pointer to the timer class. This eliminates the need for the thread worker to be a friend struct and guarantees the signal stays alive.
Add parameter _is_periodic to allow the caller to select whether or not the worker thread is periodic or if it terminates after expiration.
Store _is_active, _interval and _is_periodic in std::atomic to allow thread-safe access.
Narrow the scope of mutex locking.
Add reset() method to "kick" the timer, preventing it from issuing the expiration signal.
With these changes applied:
#include <atomic>
#include <boost/shared_ptr.hpp>
#include <boost/signals2.hpp>
#include <boost/thread.hpp>
class IntervalThread
{
using interval_signal = boost::signals2::signal<void(void)>;
public:
using interval_slot_t = interval_signal::slot_type;
IntervalThread(const int interval_ms = 60)
: _interval_ms(interval_ms),
_is_active(false),
_is_periodic(false),
_signal_expired(new interval_signal()) {};
inline ~IntervalThread(void) { stop(); };
boost::signals2::connection connect(const interval_slot_t &subscriber)
{
// thread-safe: signals2 obtains a mutex on connect()
return _signal_expired->connect(subscriber);
};
void start(void)
{
if (is_active())
return; // Already executed.
if (get_interval_ms() <= 0)
return;
boost::lock_guard<boost::mutex> lock(_timer_thread_guard);
_timer_thread.interrupt();
_timer_thread.join();
_timer_thread = boost::thread(timer_worker(),
static_cast<int>(get_interval_ms()),
static_cast<bool>(is_periodic()),
_signal_expired);
_is_active = true;
};
void reset(void)
{
if (is_active())
stop();
start();
}
void stop(void)
{
if (!is_active())
return; // Already executed.
boost::lock_guard<boost::mutex> lock(_timer_thread_guard);
_timer_thread.interrupt();
_timer_thread.join();
_is_active = false;
};
inline bool is_active(void) const { return _is_active; };
inline int get_interval_ms(void) const { return _interval_ms; };
void set_interval_ms(const int interval_ms)
{
if (interval_ms <= 0 || get_interval_ms() == interval_ms)
return;
// Cache timer activity state.
const bool was_active = is_active();
// Initialize timer with new interval.
if (was_active)
stop();
_interval_ms = interval_ms;
if (was_active)
start();
};
inline bool is_periodic(void) const { return _is_periodic; }
inline void set_periodic(const bool is_periodic = true) { _is_periodic = is_periodic; }
private:
// The timer worker for the interval thread.
struct timer_worker {
void operator()(const int interval_ms, const bool is_periodic, boost::shared_ptr<interval_signal> signal_expired)
{
boost::posix_time::milliseconds duration(interval_ms);
try {
do {
boost::this_thread::sleep<boost::posix_time::milliseconds>(duration);
{
boost::this_thread::disable_interruption di;
signal_expired->operator()();
}
} while (is_periodic);
} catch (const boost::thread_interrupted &) {
// IntervalThread start(), stop() and reset() interrupt this thread, which raises
// boost::thread_interrupted; that is expected, so no action is necessary.
}
};
};
std::atomic<int> _interval_ms; // Interval, in ms
std::atomic<bool> _is_active; // Is the timed interval active?
std::atomic<bool> _is_periodic; // Is the timer periodic?
boost::mutex _timer_thread_guard;
boost::thread _timer_thread;
// The signal to call on interval expiration.
boost::shared_ptr<interval_signal> _signal_expired;
};
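A usage sketch of IntervalThread as the software watchdog described above (the handler lambda and the 1000 ms value are just for illustration):
IntervalThread watchdog(1000);
watchdog.set_periodic(false); // one-shot: fires only if not kicked within the interval
watchdog.connect([] { std::cout << "watchdog expired\n"; });
watchdog.start();
// ... then, on every unit of progress:
watchdog.reset(); // stop + start: the interval begins again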