Conversion of Qt Signals to Boost Signals2 - c++

I have the following code that implements signals/slots plus concurrency in Qt, and I was wondering whether I can convert it to Boost.Thread and Boost.Signals2:
void MyClass::Func1()
{
emit ImplementingFunc1();
//Do the stuff here
Func1Implementation();
QFuture<void> future = QtConcurrent::run(this, &MyClass::WaitForFunc1Finish);
}
void MyClass::WaitForFunc1Finish()
{
int result = GetResponse();
emit Func1HasFinished();
}
How can I implement the emitted signals (ImplementingFunc1 and Func1HasFinished, declared in MyClass above) and this pipeline using Boost?
Thanks in advance for the help

You could implement your requirements using Boost. However, the signals behave differently, because Boost does not give you an event loop that dispatches signals to slots.
This means that a slot connected to a Boost signal is executed synchronously in whichever thread invokes the signal.
Roughly:
MyClass.h
#include <boost/signals2.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

typedef boost::signals2::signal<void ()> FinishedSig;
typedef boost::shared_ptr<FinishedSig> FinishedSigPtr;
typedef boost::lock_guard<boost::mutex> LockGuard;
class MyClass
{
public:
MyClass();
// Signal accessor
FinishedSig& finished() { return *m_sig; }
void Func1();
void WaitForFunc1Finish();
void Func1HasFinishedSlot();
private:
FinishedSigPtr m_sig;
boost::mutex m_mutex;
boost::thread m_thread;
};
MyClass.cpp
#include "MyClass.h"
#include <boost/bind.hpp>

// Allocate the signal and make the connection, e.g. in the constructor
MyClass::MyClass() : m_sig(new FinishedSig)
{
finished().connect(boost::bind(&MyClass::Func1HasFinishedSlot, this));
}
void MyClass::Func1()
{
//Do the stuff here
Func1Implementation();
m_thread = boost::thread(&MyClass::WaitForFunc1Finish, this);
}
void MyClass::WaitForFunc1Finish()
{
int result;
{ // scope the lock so it is released before emitting
LockGuard l(m_mutex);
// Variables are guarded against concurrent access here
result = GetResponse();
}
(*m_sig)(); // emit finished sig; the slot runs in this thread and takes m_mutex itself
}
void MyClass::Func1HasFinishedSlot()
{
// This will be executed in the calling thread
LockGuard l(m_mutex);
// Variables are guarded against concurrent access here
// do stuff
}
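To see the thread-affinity point in isolation, here is a minimal standalone sketch (my own, not part of the answer above; it assumes a C++11 compiler for the lambdas). The slot prints the id of whichever thread invokes the signal, which is the worker thread, not the thread that made the connection.

#include <boost/signals2.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::signals2::signal<void ()> sig;
    sig.connect([] {
        // Runs in whichever thread invokes sig(), not in the connecting thread.
        std::cout << "slot runs in thread " << boost::this_thread::get_id() << std::endl;
    });

    std::cout << "main thread is " << boost::this_thread::get_id() << std::endl;

    boost::thread worker([&sig] { sig(); }); // the slot executes inside 'worker'
    worker.join();
}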

Related

How to temporarily switch to the GUI thread

I have a program that performs a long-running action, which I run on a separate thread. Periodically I need to update information for the user, so I emit a signal to the GUI thread. But sometimes I need the user to make a choice: I have to display a QDialog on the GUI thread and pause the slow thread while the user selects an option, and when the selection is complete, return the value to the slow thread and let it continue.
It should look something like the code below, but I don't know how to pause and resume the thread, or whether it should be done this way at all.
Header:
class Example:public QObject
{
//...
Q_OBJECT
void mainLoop();
Example();
signals:
void updateGUI(const QString &message);
void sendQuestion(const QString &message);
void continueMainLoop(const QString &answer);
private slots:
void updatuGUIslot(const QString &message);
void showQuestionDialog(const QString &message);
};
Source:
Example::Example()
{
connect(this,&Example::updateGUI,this,&Example::updatuGUIslot);
connect(this,&Example::sendQuestion,this,&Example::showQuestionDialog);
std::thread t(&Example::mainLoop,this);
t.detach();
// in the project it is not in the constructor
}
void Example::mainLoop()
{
while(some condition1)
{
// slow action
if(some condition2)
emit updateGUI("message");
if(some condition3)
{
QString result;
ThreadPtr th = this_thread(); // pseudocode
connect(this,&Example::continueMainLoop,this,[&](const QString &answer)
{
result = answer;
th.continue(); // pseudocode
});
emit sendQuestion("question");
th.wait(); // pseudocode
}
// slow action
}
}
void Example::showQuestionDialog(const QString &message)
{
// show dialog with question
emit continueMainLoop("answer");
}
void Example::updatuGUIslot(const QString &message)
{
// update GUI
}
You need to invoke the dialog method with Qt::BlockingQueuedConnection at the point of condition3; the worker thread then blocks until the GUI thread has run the method and returned the option selected by the user:
bool updateGui = false;
QMetaObject::invokeMethod(this, "showDialog", Qt::BlockingQueuedConnection,
Q_RETURN_ARG(bool, updateGui));
if(updateGui)
{
//update GUI
}
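For completeness, a sketch of what the invoked method could look like; the showDialog name comes from the invokeMethod call above, but the body and the use of QMessageBox are my assumptions. It must be declared as a slot or Q_INVOKABLE in the header so invokeMethod can find it, and it runs in the GUI thread while the worker blocks:

// In the header:  Q_INVOKABLE bool showDialog();
// Needs:          #include <QMessageBox>
bool Example::showDialog()
{
    QMessageBox box;
    box.setText("question");
    box.setStandardButtons(QMessageBox::Yes | QMessageBox::No);
    // exec() runs a local event loop in the GUI thread while the worker waits.
    return box.exec() == QMessageBox::Yes;
}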

Create boost::thread without callable object

I would like to create boost::thread without passing callable object to constructor ( not-any-thread state ).
boost::shared_ptr<boost::thread> ThreadHandle_;
ThreadHandle_.reset(new boost::thread()); // not-any-thread state
but how can I later pass a callable object?
void foo() { }
boost::thread th; // not-any-thread
th = boost::thread(foo); // bound
You can do exactly the same thing through the shared_ptr indirection.
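A rough sketch of that, reusing ThreadHandle_ from the question (bind_later is just a placeholder for "some later point in the program"):

#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

void foo() {}

boost::shared_ptr<boost::thread> ThreadHandle_(new boost::thread()); // not-any-thread

void bind_later()
{
    // Move-assign a real thread into the existing (not-a-thread) object ...
    *ThreadHandle_ = boost::thread(foo);
    // ... or simply allocate the thread once the callable is known:
    // ThreadHandle_.reset(new boost::thread(foo));
}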
You can create your own wrapper on top of boost::thread, something similar to:
class QThread
{
public:
QThread();
virtual ~QThread();
void operator()(const boost::function0<void>& action);
void Join();
private:
void Process();
std::list<boost::function0<void> > m_Queue;
bool m_Destroy;
boost::condition_variable_any m_Available;
boost::mutex m_QueueLock;
boost::barrier m_Barrier;
boost::thread m_Thread;
};
and implement Process so that it waits until you put something to do (a task, i.e. a callable) into the queue:
void QThread::Process()
{
m_Barrier.wait();
boost::mutex::scoped_lock lock(m_QueueLock);
while(!m_Destroy)
{
while(!m_Destroy && !m_Queue.empty())
{
boost::function0<void> action = m_Queue.front();
m_Queue.pop_front();
lock.unlock();
action();
lock.lock();
}
if (!m_Destroy)
{
m_Available.wait(lock);
}
}
}
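The enqueueing side, which is not shown above, could then be as simple as locking the queue and waking the worker; a sketch against the declaration above:

void QThread::operator()(const boost::function0<void>& action)
{
    boost::mutex::scoped_lock lock(m_QueueLock);
    m_Queue.push_back(action);
    m_Available.notify_one(); // wake Process() if it is blocked in wait()
}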

Proper cleanup with a suspended coroutine

I'm wondering what the best (cleanest, hardest to mess up) method for cleanup is in this situation.
void MyClass::do_stuff(boost::asio::yield_context context) {
while (running_) {
uint32_t data = async_buffer->Read(context);
// do other stuff
}
}
Read is a call which asynchronously waits until there is data to be read, then returns that data. If I want to delete this instance of MyClass, how can I make sure I do so properly? Let's say that the asynchronous wait here is performed via a deadline_timer's async_wait. If I cancel the event, I still have to wait for the thread to finish executing the "other stuff" before I know things are in a good state (I can't join the thread, as it's a thread that belongs to the io service that may also be handling other jobs). I could do something like this:
MyClass::~MyClass() {
running_ = false;
read_event->CancelEvent(); // some way to cancel the deadline_timer the Read is waiting on
boost::mutex::scoped_lock lock(finished_mutex_);
if (!finished_) {
cond_.wait(lock);
}
// any other cleanup
}
void MyClass::do_stuff(boost::asio::yield_context context) {
while (running_) {
uint32_t data = async_buffer->Read(context);
// do other stuff
}
boost::mutex::scoped_lock lock(finished_mutex_);
finished_ = true;
cond_.notify_one();
}
But I'm hoping to make these stackful coroutines as easy to use as possible, and it's not straightforward for people to recognize that this condition exists and what would need to be done to make sure things are cleaned up properly. Is there a better way? Is what I'm trying to do here wrong at a more fundamental level?
Also, for the event (what I have is basically the same as Tanner's answer here) I need to cancel it in a way that I'd have to keep some extra state (a true cancel vs. the normal cancel used to fire the event) -- which wouldn't be appropriate if there were multiple pieces of logic waiting on that same event. Would love to hear if there's a better way to model the asynchronous event to be used with a coroutine suspend/resume.
Thanks.
EDIT: Thanks @Sehe, I took a shot at a working example; I think this illustrates what I'm getting at:
class AsyncBuffer {
public:
AsyncBuffer(boost::asio::io_service& io_service) :
write_event_(io_service) {
write_event_.expires_at(boost::posix_time::pos_infin);
}
void Write(uint32_t data) {
buffer_.push_back(data);
write_event_.cancel();
}
uint32_t Read(boost::asio::yield_context context) {
if (buffer_.empty()) {
write_event_.async_wait(context);
}
uint32_t data = buffer_.front();
buffer_.pop_front();
return data;
}
protected:
boost::asio::deadline_timer write_event_;
std::list<uint32_t> buffer_;
};
class MyClass {
public:
MyClass(boost::asio::io_service& io_service) :
running_(false), io_service_(io_service), buffer_(io_service) {
}
void Run(boost::asio::yield_context context) {
while (running_) {
boost::system::error_code ec;
uint32_t data = buffer_.Read(context[ec]);
// do something with data
}
}
void Write(uint32_t data) {
buffer_.Write(data);
}
void Start() {
running_ = true;
boost::asio::spawn(io_service_, boost::bind(&MyClass::Run, this, _1));
}
protected:
boost::atomic_bool running_;
boost::asio::io_service& io_service_;
AsyncBuffer buffer_;
};
So here, let's say that the buffer is empty and MyClass::Run is currently suspended in a call to Read, so there's a deadline_timer::async_wait waiting for the event to fire and resume that context. It's time to destroy this instance of MyClass, so how do we make sure that this happens cleanly?
A more typical approach would be to use boost::enable_shared_from_this with MyClass, and run the methods as bound to the shared pointer.
Boost Bind supports binding to boost::shared_ptr<MyClass> transparently.
This way, you can automatically have the destructor run only when the last user disappears.
If you create a SSCCE, I'm happy to change it around, to show what I mean.
UPDATE
Regarding the SSCCE, some remarks:
I imagined a pool of threads running the IO service
The way in which MyClass calls into AsyncBuffer member functions directly is not threadsafe. There is actually no thread-safe way to cancel the event from outside the producer thread[1], since the producer already accesses the buffer when Write-ing. This could be mitigated using a strand (in the current setup I don't see how MyClass would otherwise be threadsafe). Alternatively, look at the active object pattern (for which Tanner has an excellent answer[2] on SO).
I chose the strand approach here, for simplicity, so we do:
void MyClass::Write(uint32_t data) {
strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
}
You ask
Also, for the event (what I have is basically the same as Tanner's answer here) I need to cancel it in a way that I'd have to keep some extra state (a true cancel vs. the normal cancel used to fire the event)
The most natural place for this state is the usual one for the deadline_timer: its deadline. Stopping the buffer is done by resetting the timer:
void AsyncBuffer::Stop() { // not threadsafe!
write_event_.expires_from_now(boost::posix_time::seconds(-1));
}
This at once cancels the timer, but is detectable because the deadline is in the past.
Here's a simple demo with a group of IO service threads, one "producer coroutine" that produces random numbers, and a "sniper thread" that snipes the MyClass::Run coroutine after 2 seconds. The main thread is the sniper thread.
See it Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/async_result.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <list>
#include <iostream>
// for refcounting:
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
namespace asio = boost::asio;
class AsyncBuffer {
friend class MyClass;
protected:
AsyncBuffer(boost::asio::io_service &io_service) : write_event_(io_service) {
write_event_.expires_at(boost::posix_time::pos_infin);
}
void Write(uint32_t data) {
buffer_.push_back(data);
write_event_.cancel();
}
uint32_t Read(boost::asio::yield_context context) {
if (buffer_.empty()) {
boost::system::error_code ec;
write_event_.async_wait(context[ec]);
if (ec != boost::asio::error::operation_aborted || write_event_.expires_from_now().is_negative())
{
if (context.ec_)
*context.ec_ = boost::asio::error::operation_aborted;
return 0;
}
}
uint32_t data = buffer_.front();
buffer_.pop_front();
return data;
}
void Stop() {
write_event_.expires_from_now(boost::posix_time::seconds(-1));
}
private:
boost::asio::deadline_timer write_event_;
std::list<uint32_t> buffer_;
};
class MyClass : public boost::enable_shared_from_this<MyClass> {
boost::atomic_bool stopped_;
public:
MyClass(boost::asio::io_service &io_service) : stopped_(false), buffer_(io_service), strand_(io_service) {}
void Run(boost::asio::yield_context context) {
while (!stopped_) {
boost::system::error_code ec;
uint32_t data = buffer_.Read(context[ec]);
if (ec == boost::asio::error::operation_aborted)
break;
// do something with data
std::cout << data << " " << std::flush;
}
std::cout << "EOF\n";
}
bool Write(uint32_t data) {
if (!stopped_) {
strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
}
return !stopped_;
}
void Start() {
if (!stopped_) {
stopped_ = false;
boost::asio::spawn(strand_, boost::bind(&MyClass::Run, shared_from_this(), _1));
}
}
void Stop() {
stopped_ = true;
strand_.post(boost::bind(&AsyncBuffer::Stop, &buffer_));
}
~MyClass() {
std::cout << "MyClass destructed because no coroutines hold a reference to it anymore\n";
}
protected:
AsyncBuffer buffer_;
boost::asio::strand strand_;
};
int main()
{
boost::thread_group tg;
asio::io_service svc;
{
// Start the consumer:
auto instance = boost::make_shared<MyClass>(svc);
instance->Start();
// Sniper in 2 seconds :)
boost::thread([instance]{
boost::this_thread::sleep_for(boost::chrono::seconds(2));
instance->Stop();
}).detach();
// Start the producer:
auto producer_coro = [instance, &svc](asio::yield_context c) { // a bound function/function object in C++03
asio::deadline_timer tim(svc);
while (instance->Write(rand())) {
tim.expires_from_now(boost::posix_time::milliseconds(200));
tim.async_wait(c);
}
};
asio::spawn(svc, producer_coro);
// Start the service threads:
for(size_t i=0; i < boost::thread::hardware_concurrency(); ++i)
tg.create_thread(boost::bind(&asio::io_service::run, &svc));
}
// now `instance` is out of scope, it will selfdestruct after the snipe
// completed
boost::this_thread::sleep_for(boost::chrono::seconds(3)); // wait longer than the snipe
std::cout << "This is the main thread _after_ MyClass self-destructed correctly\n";
// cleanup service threads
tg.join_all();
}
[1] logical thread, this could be a coroutine that gets resumed on different threads
[2] boost::asio and Active Object

How to handle thread-safe callback registration and execution in C++?

For example, I have an EventGenerator class that calls IEventHandler::onEvent for all registered event handlers:
class IEventHandler {
public: virtual void onEvent(...) = 0;
};
class EventGenerator {
private:
std::vector<IEventHandler*> _handlers;
std::mutex _mutex; // [1]
public:
void AddHandler(IEventHandler* handler) {
std::lock_guard<std::mutex> lck(_mutex); // [2]
_handlers.push_back(handler);
}
void RemoveHanler(IEventHandler* handler) {
std::lock_guard<std::mutex> lck(_mutex); // [3]
// remove from "_handlers"
}
private:
void threadMainTask() {
while(true) {
// Do some work ...
// Post event to all registered handlers
{
std::lock_guard<std::mutex> lck(_mutex); // [4]
for(auto& h : _handlers) { h->onEvent(...); }
}
// Do some work ...
}
}
};
The code should be thread safe in the following manner:
one thread is executing the EventGenerator::threadMainTask
many threads might access EventGenerator::AddHandler and EventGenerator::RemoveHandler APIs.
To support this, I have the following synchronization (see the comments in the code):
[1] is the mutex that protects the vector _handlers from multiple thread access.
[2] and [3] protect against adding or removing handlers simultaneously.
[4] is preventing from changing the vector while the main thread is posting events.
This code works until, during the execution of IEventHandler::onEvent(...), the code tries to call EventGenerator::RemoveHandler or EventGenerator::AddHandler. The result is a runtime failure (deadlock or exception), because the non-recursive mutex is locked again from the same thread and the vector may be modified while it is being iterated.
What is the best approach to handle registration of the event handlers and executing the event handler callback in the thread safe manner?
>> UPDATE <<
So based on the inputs, I've updated to the following design:
class IEventHandler {
public: virtual void onEvent(...) = 0;
};
class EventDelegate {
private:
IEventHandler* _handler;
std::atomic<bool> _cancelled;
public:
EventDelegate(IEventHandler* h) : _handler(h), _cancelled(false) {};
void Cancel() { _cancelled = true; }
void Invoke(...) { if (!_cancelled) _handler->onEvent(...); }
};
class EventGenerator {
private:
std::vector<std::shared_ptr<EventDelegate>> _handlers;
std::mutex _mutex;
public:
void AddHandler(std::shared_ptr<EventDelegate> handler) {
std::lock_guard<std::mutex> lck(_mutex);
_handlers.push_back(handler);
}
void RemoveHanler(std::shared_ptr<EventDelegate> handler) {
std::lock_guard<std::mutex> lck(_mutex);
// remove from "_handlers"
}
private:
void threadMainTask() {
while(true) {
// Do some work ...
std::vector<std::shared_ptr<EventDelegate>> handlers_copy;
{
std::lock_guard<std::mutex> lck(_mutex);
handlers_copy = _handlers;
}
for(auto& h : handlers_copy) { h->Invoke(...); }
// Do some work ...
}
}
};
As you can see, there is additional class EventDelegate that have two purposes:
hold the event callback
enable to cancel the callback
In the threadMainTask, I'm using a local copy of the std::vector<std::shared_ptr<EventDelegate>> and I'm releasing the lock before invoking the callbacks. This approach solves an issue when during the IEventHandler::onEvent(...) the EventGenerator::{AddHandler,RemoveHanler} is called.
Any thoughts about the new design?
A copy-on-write vector implemented with atomic swaps of shared_ptrs (on the assumption that callback registration occurs far less frequently than the events the callbacks are notified about):
using callback_t = std::shared_ptr<std::function<void(event_t const&)> >;
using callbacks_t = std::shared_ptr<std::vector<callback_t> >;
callbacks_t callbacks_;
mutex_t mutex_; // a mutex of your choice
void register_callback(callback_t cb) // "register" itself is a reserved keyword
{
// the mutex is to serialize concurrent callbacks registrations
// this is not always necessary, as depending on the application
// architecture, single writer may be enforced by design
scoped_lock lock(mutex_);
auto callbacks = atomic_load(&callbacks_);
auto new_callbacks = std::make_shared< std::vector<callback_t> >();
new_callbacks->reserve(callbacks->size() + 1);
*new_callbacks = *callbacks; // copy the current callback list
new_callbacks->push_back(std::move(cb));
atomic_store(&callbacks_, new_callbacks);
}
void invoke(event_t const& evt)
{
auto callbacks = atomic_load(&callbacks_);
// many people wrap each callback invocation into a try-catch
// and de-register on exception
for(auto& cb: *callbacks) (*cb)(evt);
}
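De-registration can follow the same copy-and-swap pattern; a sketch under the same assumptions (the unregister_callback name is mine, callbacks are matched by shared_ptr identity, and <algorithm> is needed for std::find):

bool unregister_callback(callback_t const& cb)
{
    scoped_lock lock(mutex_); // serialize with other writers
    auto callbacks = atomic_load(&callbacks_);
    auto new_callbacks = std::make_shared< std::vector<callback_t> >(*callbacks);
    auto it = std::find(new_callbacks->begin(), new_callbacks->end(), cb);
    if (it == new_callbacks->end())
        return false;                        // not registered
    new_callbacks->erase(it);
    atomic_store(&callbacks_, new_callbacks);
    return true;                             // readers still holding the old vector finish safely
}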
Specifically, on the subject of the asynchronous behaviour where a callback is executed while it is being de-registered, the best approach is to remember the Separation of Concerns principle.
The callback should not be able to die until it has finished executing. This is achieved via another classic trick, an "extra level of indirection". Namely, instead of registering the user-provided callback directly, one wraps it in something like the class below. De-registration, apart from updating the vector, then calls the discharge() method on the wrapper, and can even tell the caller whether the callback execution had already finished.
template <class CB> struct cb_wrapper
{
mutable std::atomic<bool> done_;
CB cb_;
cb_wrapper(CB&& cb): cb_(std::move(cb)) {}
bool discharge()
{
bool not_done = false;
return done_.compare_exchange_strong(not_done, true);
}
void operator()(event_t const& evt)
{
if (discharge())
{
cb_(evt);
}
}
};
I can't see anything quite right here. From your update I can see a problem: you are not synchronizing the Invoke method with callback removal. There is an atomic, but it's not enough. Example: just after this line of code:
if (!_cancelled)
another thread calls the remove method. What can happen is that onEvent() is called anyway: even though the remove method has already removed the callback from the list and returned, there is nothing keeping this execution flow synchronized. The same problem applies to @bobah's answer.

Can I create a software watchdog timer thread in C++ using Boost Signals2 and Threads?

I am running function Foo from somebody else's library in a single-threaded application currently. Most of the time, I make a call to Foo and it's really quick, some times, I make a call to Foo and it takes forever. I am not a patient man, if Foo is going to take forever, I want to stop execution of Foo and not call it with those arguments.
What is the best way to call Foo in a controlled manner (my current environment is POSIX/C++) such that I can stop its execution after a certain number of seconds? I feel like the right thing to do here is to create a second thread to call Foo, while in my main thread I create a timer function that will eventually signal the second thread if it runs out of time.
Is there another, more apt model (and solution)? If not, would Boost's Signals2 library and Threads do the trick?
You can call Foo on a second thread with a timeout. For example:
#include <boost/date_time.hpp>
#include <boost/thread/thread.hpp>
boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(500);
boost::thread thrd(&Foo);
if (thrd.timed_join(timeout))
{
//finished
}
else
{
//Not finished;
}
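Note that timed_join only tells you whether Foo finished within the timeout; the thread keeps running Foo to completion either way. A self-contained sketch with a stand-in Foo (my own, for illustration):

#include <boost/date_time.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

void Foo() // stand-in for the possibly slow library call
{
    boost::this_thread::sleep(boost::posix_time::seconds(2));
}

int main()
{
    boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(500);
    boost::thread thrd(&Foo);
    if (thrd.timed_join(timeout)) {
        std::cout << "Foo finished in time\n";
    } else {
        std::cout << "Foo timed out\n";
        thrd.detach(); // Foo still runs to completion; we just stop waiting for it
    }
}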
You can use the following class:
class timer
{
typedef boost::signals2::signal<void ()> timeout_slot;
public:
typedef timeout_slot::slot_type timeout_slot_t;
public:
timer() : _interval(0), _is_active(false) {};
timer(int interval) : _interval(interval), _is_active(false) {};
virtual ~timer() { stop(); };
inline boost::signals2::connection connect(const timeout_slot_t& subscriber) { return _signalTimeout.connect(subscriber); };
void start()
{
boost::lock_guard<boost::mutex> lock(_guard);
if (is_active())
return; // Already executed.
if (_interval <= 0)
return;
_timer_thread.interrupt();
_timer_thread.join();
timer_worker job;
_timer_thread = boost::thread(job, this);
_is_active = true;
};
void stop()
{
boost::lock_guard<boost::mutex> lock(_guard);
if (!is_active())
return; // Already executed.
_timer_thread.interrupt();
_timer_thread.join();
_is_active = false;
};
inline bool is_active() const { return _is_active; };
inline int get_interval() const { return _interval; };
void set_interval(const int msec)
{
if (msec <= 0 || _interval == msec)
return;
boost::lock_guard<boost::mutex> lock(_guard);
// Keep timer activity status.
bool was_active = is_active();
if (was_active)
stop();
// Initialize timer with new interval.
_interval = msec;
if (was_active)
start();
};
protected:
friend struct timer_worker;
// The timer worker thread.
struct timer_worker
{
void operator()(timer* t)
{
boost::posix_time::milliseconds duration(t->get_interval());
try
{
while (1)
{
boost::this_thread::sleep<boost::posix_time::milliseconds>(duration);
{
boost::this_thread::disable_interruption di;
{
t->_signalTimeout();
}
}
}
}
catch (boost::thread_interrupted const& )
{
// Handle the thread interruption exception.
// This exception is raised when boost::thread::interrupt() is called.
}
};
};
protected:
int _interval;
bool _is_active;
boost::mutex _guard;
boost::thread _timer_thread;
// Signal slots
timeout_slot _signalTimeout;
};
An example of usage:
int _test_timer_count = 0;
void _test_timer_handler()
{
_test_timer_count++;
std::cout << "_test_timer_handler\n";
}
BOOST_AUTO_TEST_CASE( test_timer )
{
emtorrus::timer timer;
BOOST_CHECK(!timer.is_active());
BOOST_CHECK(timer.get_interval() == 0);
timer.set_interval(1000);
timer.connect(_test_timer_handler);
timer.start();
BOOST_CHECK(timer.is_active());
std::cout << "timer test started\n";
boost::this_thread::sleep<boost::posix_time::milliseconds>(boost::posix_time::milliseconds(5500));
timer.stop();
BOOST_CHECK(!timer.is_active());
BOOST_CHECK(_test_timer_count == 5);
}
You can also set an alarm right before calling that function, and catch SIGALRM.
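A rough POSIX-only sketch of that approach (the call_with_timeout wrapper is hypothetical; note that jumping out of a signal handler with siglongjmp abandons Foo mid-flight, so it is only reasonable if Foo holds no locks or resources at that point):

#include <setjmp.h>
#include <signal.h>
#include <unistd.h>

static sigjmp_buf g_timeout_env;

static void on_alarm(int) { siglongjmp(g_timeout_env, 1); }

// Returns true if fn() completed before `seconds` elapsed.
bool call_with_timeout(void (*fn)(), unsigned seconds)
{
    signal(SIGALRM, on_alarm);
    if (sigsetjmp(g_timeout_env, 1) == 0) {
        alarm(seconds);   // schedule SIGALRM
        fn();             // abandoned mid-flight if the alarm fires
        alarm(0);         // finished in time: cancel the pending alarm
        return true;
    }
    return false;         // the alarm fired first
}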
Vlad, excellent post! Your code compiled and works beautifully. I implemented a software watchdog timer with it. I made a few modifications:
To prevent pointer decay, store the signal in boost::shared_ptr and pass this to the thread worker instead of a weak pointer to the timer class. This eliminates the need for the thread worker to be a friend struct and guarantees the signal is in memory.
Add parameter _is_periodic to allow the caller to select whether or not the worker thread is periodic or if it terminates after expiration.
Store _is_active, _interval and _is_periodic in boost::atomic to allow thread-safe access.
Narrow the scope of mutex locking.
Add reset() method to "kick" the timer, preventing it from issuing the expiration signal.
With these changes applied:
#include <atomic>
#include <boost/signals2.hpp>
#include <boost/thread.hpp>
class IntervalThread
{
using interval_signal = boost::signals2::signal<void(void)>;
public:
using interval_slot_t = interval_signal::slot_type;
IntervalThread(const int interval_ms = 60)
: _interval_ms(interval_ms),
_is_active(false),
_is_periodic(false),
_signal_expired(new interval_signal()) {};
inline ~IntervalThread(void) { stop(); };
boost::signals2::connection connect(const interval_slot_t &subscriber)
{
// thread-safe: signals2 obtains a mutex on connect()
return _signal_expired->connect(subscriber);
};
void start(void)
{
if (is_active())
return; // Already executed.
if (get_interval_ms() <= 0)
return;
boost::lock_guard<boost::mutex> lock(_timer_thread_guard);
_timer_thread.interrupt();
_timer_thread.join();
_timer_thread = boost::thread(timer_worker(),
static_cast<int>(get_interval_ms()),
static_cast<bool>(is_periodic()),
_signal_expired);
_is_active = true;
};
void reset(void)
{
if (is_active())
stop();
start();
}
void stop(void)
{
if (!is_active())
return; // Already executed.
boost::lock_guard<boost::mutex> lock(_timer_thread_guard);
_timer_thread.interrupt();
_timer_thread.join();
_is_active = false;
};
inline bool is_active(void) const { return _is_active; };
inline int get_interval_ms(void) const { return _interval_ms; };
void set_interval_ms(const int interval_ms)
{
if (interval_ms <= 0 || get_interval_ms() == interval_ms)
return;
// Cache timer activity state.
const bool was_active = is_active();
// Initialize timer with new interval.
if (was_active)
stop();
_interval_ms = interval_ms;
if (was_active)
start();
};
inline bool is_periodic(void) const { return _is_periodic; }
inline void set_periodic(const bool is_periodic = true) { _is_periodic = is_periodic; }
private:
// The timer worker for the interval thread.
struct timer_worker {
void operator()(const int interval_ms, const bool is_periodic, boost::shared_ptr<interval_signal> signal_expired)
{
boost::posix_time::milliseconds duration(interval_ms);
try {
do {
boost::this_thread::sleep<boost::posix_time::milliseconds>(duration);
{
boost::this_thread::disable_interruption di;
signal_expired->operator()();
}
} while (is_periodic);
} catch (const boost::thread_interrupted &) {
// IntervalThread start(), stop() and reset() call boost::thread::interrupt(),
// so this exception is expected when the thread is interrupted. No action necessary.
}
};
};
std::atomic<int> _interval_ms; // Interval, in ms
std::atomic<bool> _is_active; // Is the timed interval active?
std::atomic<bool> _is_periodic; // Is the timer periodic?
boost::mutex _timer_thread_guard;
boost::thread _timer_thread;
// The signal to call on interval expiration.
boost::shared_ptr<interval_signal> _signal_expired;
};
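A minimal usage sketch of IntervalThread as a one-shot software watchdog around the slow call (Foo and the handler are stand-ins of mine):

#include <iostream>

void Foo() // stand-in for the possibly slow library call
{
    boost::this_thread::sleep(boost::posix_time::seconds(1));
}

void on_watchdog_expired()
{
    std::cerr << "watchdog expired: Foo() did not finish within 2 s\n";
}

int main()
{
    IntervalThread watchdog(2000);          // 2000 ms, one-shot since is_periodic defaults to false
    watchdog.connect(&on_watchdog_expired);
    watchdog.start();
    Foo();                                  // the call being supervised
    watchdog.stop();                        // returned in time: the signal never fires
}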