I have a boost::asio::io_service running in a thread that performs some operation:
struct clicker
{
clicker(boost::asio::io_service& io) : timer_(io) { wait_for_timer(); }
void stop() { timer_.cancel(); }
void wait_for_timer()
{
timer_.expires_from_now(std::chrono::milliseconds(500));
timer_.async_wait(std::bind(&clicker::wait_completed, this, _1));
}
void wait_completed(const boost::system::error_code& err)
{
if (!err) {
std::cout << "Click" << std::endl;
wait_for_timer();
}
}
boost::asio::steady_timer timer_;
};
int main()
{
boost::asio::io_service io;
clicker cl(io);
std::thread io_thread(&boost::asio::io_service::run, &io);
while (true) { // the run loop
// gather input
if (user_clicked_stop_button()) { cl.stop(); break; }
}
io_thread.join();
}
Now calling stop() should cancel waiting for the timer and fire wait_completed() with an error. However, we have a race condition here: at times, stop() will be called while wait_for_timer() is running and before the async_wait has been scheduled. Then, the code will run indefinitely.
What's the recommended way to deal with this situation? A boolean flag inside clicker that is tested in wait_completed? A mutex?
Update:
This is just a simplified example, in the real code I have several operations running in the io_service, so calling io_service::stop() is not an option here.
Post the action to the io_service thread:
void stop()
{
timer_.get_io_service().post([=] { timer_.cancel(); });
}
(If your compiler is not C++11-compatible, use bind to create an equivalent function object instead of the lambda.)
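If lambdas are unavailable, a rough Boost.Bind equivalent might look like this (do_cancel is a hypothetical helper added for illustration; it needs <boost/bind.hpp>):
void stop()
{
    timer_.get_io_service().post(boost::bind(&clicker::do_cancel, this));
}
// runs on the io_service thread, so it cannot race with wait_for_timer()
void do_cancel() { timer_.cancel(); }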
I think you want to call io.stop() to stop the io_service object.
I'm trying to make a stopwatch class, which has a separate thread that counts down.
The issue here is that when I use the cancel function, or try setting up the stopwatch again after a timeout, my program crashes. I am fairly sure it's due to some threading issue, probably because of misuse. Can anyone tell me why this doesn't work and help me get it working?
Stopwatch.h
class Stopwatch
{
public:
void Run(uint64_t ticks);
void Set(uint64_t ms);
void Cancel();
private:
std::thread mythread;
};
Stopwatch.cpp
void Stopwatch::Run(uint64_t ticks)
{
uint64_t clockCycles = ticks;
while(clockCycles > 0){
std::this_thread::sleep_for(std::chrono::milliseconds(1));
clockCycles--;
}
//do anything timeout related, probably a cout in the future
}
void Stopwatch::Set(uint64_t ms)
{
mythread = std::thread(&Stopwatch::Run, this, ms);
}
void Stopwatch::Cancel()
{
mythread.join();
}
What I want is to call the stopwatch to set a time and get some timeout reaction. With the cancel function it can be stopped at any time. After that, with the set function, you can restart it.
As noted in the comments, you must always eventually call join or detach on a std::thread that is joinable (i.e., has or had an associated running thread). The code fails to do this in three places. In Set, the thread member is move-assigned without regard to its previous contents; move-assigning to a joinable std::thread calls std::terminate, so calling Set multiple times without calling Cancel in between is guaranteed to terminate the program. The move-assignment operator and the destructor of Stopwatch also lack such a call.
Second, because Cancel just calls join, it will wait for the timeout to elapse and the timeout code to run before returning, instead of canceling the timer. To cancel the timer, one must notify the thread, and on the other end the Run loop must have a way to be notified. The traditional way to do this is with a condition_variable.
For example:
class Stopwatch
{
public:
Stopwatch() = default;
Stopwatch(const Stopwatch&) = delete;
Stopwatch(Stopwatch&&) = delete;
Stopwatch& operator=(const Stopwatch&) = delete;
Stopwatch& operator=(Stopwatch&& other) = delete;
~Stopwatch()
{
Cancel();
}
void Run(uint64_t ticks)
{
std::unique_lock<std::mutex> lk{mutex_};
cv_.wait_for(lk, std::chrono::milliseconds(ticks), [&] { return canceled_; });
if (!canceled_)
/* timeout code here */;
}
void Set(uint64_t ms)
{
Cancel();
canceled_ = false; // reset the flag so a restarted stopwatch is not immediately canceled
thread_ = std::thread(&Stopwatch::Run, this, ms);
}
void Cancel()
{
if (thread_.joinable())
{
{
std::lock_guard<std::mutex> lk{mutex_};
canceled_ = true;
}
cv_.notify_one();
thread_.join();
}
}
private:
bool canceled_ = false;
std::condition_variable cv_;
std::mutex mutex_;
std::thread thread_;
};
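A minimal usage sketch, assuming the usual <thread> and <chrono> headers and that the timeout code just prints something:
int main()
{
    Stopwatch sw;
    sw.Set(500);   // arm for 500 ms
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    sw.Cancel();   // stop before the timeout fires
    sw.Set(200);   // restart; fires after 200 ms
    std::this_thread::sleep_for(std::chrono::milliseconds(300));
}   // the destructor calls Cancel(), which joins the worker thread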
My case looks like this: The program uses two threads, let's call them "Sender" and "Recipient" because it is a mechanism of interprocess communication.
The "Sender" thread after sending the message stops at the condition provided by std::condition_variable and the .wait (Lock) function. The "Recipient" thread informs the waiting thread about the response to his message using .notify_one().
I'm happy with the way it works, but I want to add the ability to handle the timeout.
I prepared the following class (I would like it to be universal, so the notification function is supplied by an external class), but I'm sure it can be implemented better. I wanted to avoid high CPU usage, which is why I used std::this_thread::sleep_for, though I suppose it could somehow be replaced with std::this_thread::yield(). I would also like to use something like std::future_status, but I do not know how. How can this be improved? I can use standard C++11 or Boost 1.55.
class Timer
{
private:
int MsLimit;
std::atomic<bool> Stop;
std::atomic<bool> LimitReached;
std::thread T;
std::mutex M;
std::function<void()> NotifyWaitingThreadFunction;
void Timeout()
{
std::unique_lock<std::mutex> Lock(M);
std::chrono::system_clock::time_point TimerStart = std::chrono::system_clock::now();
std::chrono::duration<long long, std::milli> ElapsedTime;
unsigned int T = 0;
do
{ std::this_thread::sleep_for(std::chrono::milliseconds(5));
std::chrono::system_clock::time_point TimerEnd = std::chrono::system_clock::now();
ElapsedTime = std::chrono::duration_cast<std::chrono::milliseconds>(TimerEnd - TimerStart);
T+=ElapsedTime.count();
if((T > MsLimit) && (!Stop))
{ LimitReached = true;
Stop = true;
}
}while(!Stop);
if(LimitReached)
{
NotifyWaitingThreadFunction();
}
}
public:
Timer(int Milliseconds) : MsLimit(Milliseconds)
{
}
void StartTimer()
{
Stop = false;
LimitReached = false;
T = std::thread(&Timer::Timeout,this);
}
void StopTimer()
{
std::unique_lock<std::mutex> Lock(M);
Stop = true;
LimitReached = false;
}
template<class T>
void AssignFunction(T* ObjectInstance, void (T::*MemberFunction)())
{
NotifyWaitingThreadFunction = std::bind(MemberFunction,ObjectInstance);
}
};
Your solution has one fault: the do-while loop is executed until MsLimit has elapsed. Once Timeout has started, the mutex M is held, so a call to StopTimer cannot break the loop; Stop is only set to true in StopTimer once M is released in Timeout, which happens only when (T > MsLimit) becomes true and the function ends. Incidentally, the use of the mutex is redundant, because Stop is atomic.
You can use one of the timers from the Boost library instead of creating your own.
The code below uses boost::asio::high_resolution_timer (Boost 1.55 has it):
class Timer
{
public:
Timer (int ms)
: timer(io), ms(ms) {}
~Timer() {if(t.joinable()) t.join();}
void Start() {
t = std::thread( [this]()
{
timer.expires_from_now(std::chrono::milliseconds(ms));
timer.async_wait([this](const boost::system::error_code& ec) // start async wait
{ // lambda is called when timeout expired or error occures
if (!ec) // if there is no error, call function
NotifyWaitingThreadFunction();
});
io.run(); // process async operations
});
}
void Stop() {
timer.cancel();
}
template<class T>
void AssignFunction(T* ObjectInstance, void (T::*MemberFunction)())
{
NotifyWaitingThreadFunction = std::bind(MemberFunction,ObjectInstance);
}
private:
boost::asio::io_service io; // needed for timer
boost::asio::high_resolution_timer timer;
std::thread t;
int ms;
std::function<void()> NotifyWaitingThreadFunction;
};
In the Start method a thread is created; there we set the timeout value in ms and start the timer with async_wait. The lambda passed to async_wait is called when the timeout expires or an error occurs, so if there is no error you can call NotifyWaitingThreadFunction. To stop the timer, use the Stop method. Stop cancels the started asynchronous operation, and the lambda is then called with ec == boost::asio::error::operation_aborted; in that case the lambda returns without calling NotifyWaitingThreadFunction.
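A usage sketch of this Timer class, assuming a hypothetical Receiver class supplies the notification callback:
struct Receiver {
    void Notify() { std::cout << "timeout reached\n"; } // would normally notify the waiting thread
};

int main()
{
    Receiver r;
    Timer timer(500);                       // 500 ms limit
    timer.AssignFunction(&r, &Receiver::Notify);
    timer.Start();                          // arms async_wait on the timer's own thread
    // ... the sender would block on its condition_variable here ...
    // timer.Stop();                        // cancel before the timeout to suppress the callback
}                                           // ~Timer() joins the worker thread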
My application is based on the asio chat example and consists of a client and a server:
- Client: connects to the server, receives requests and responds to them
- Server: has a Qt GUI (main thread) and a network service (separate thread) listening for connections, sending requests to particular clients and interpreting the responses from/in the GUI
I want to achieve this in an asynchronous way to avoid a separate thread for each client connection.
In my Qt window, I have one io_service instance and one instance of my network service:
io_service_ = new asio::io_service();
asio::ip::tcp::endpoint endpoint(asio::ip::tcp::v4(), 1234);
service_ = new Service(*io_service_, endpoint, this);
asio::io_service* ioServicePointer = io_service_;
t = std::thread{ [ioServicePointer](){ ioServicePointer->run(); } };
I want to be able to send data to one client, like this:
service_->send_message(selectedClient.id, msg);
And I am receiving and handling the responses via the observer pattern (the window implements the IStreamListener interface)
Service.cpp:
#include "Service.h"
#include "Stream.h"
void Service::runAcceptor()
{
acceptor_.async_accept(socket_,
[this](asio::error_code ec)
{
if (!ec)
{
std::make_shared<Stream>(std::move(socket_), &streams_)->start();
}
runAcceptor();
});
}
void Service::send_message(std::string streamID, chat_message& msg)
{
io_service_.post(
[this, msg, streamID]()
{
auto stream = streams_.getStreamByID(streamID);
stream->deliver(msg);
});
}
Stream.cpp:
#include "Stream.h"
#include <iostream>
#include "../chat_message.h"
Stream::Stream(asio::ip::tcp::socket socket, StreamCollection* streams)
: socket_(std::move(socket))
{
streams_ = streams; // keep a reference to the streamCollection
// retrieve endpoint ip
asio::ip::tcp::endpoint remote_ep = socket_.remote_endpoint();
asio::ip::address remote_ad = remote_ep.address();
this->ip_ = remote_ad.to_string();
}
void Stream::start()
{
streams_->join(shared_from_this());
readHeader();
}
void Stream::deliver(const chat_message& msg)
{
bool write_in_progress = !write_msgs_.empty();
write_msgs_.push_back(msg);
if (!write_in_progress)
{
write();
}
}
std::string Stream::getName()
{
return name_;
}
std::string Stream::getIP()
{
return ip_;
}
void Stream::RegisterListener(IStreamListener *l)
{
m_listeners.insert(l);
}
void Stream::UnregisterListener(IStreamListener *l)
{
std::set<IStreamListener *>::const_iterator iter = m_listeners.find(l);
if (iter != m_listeners.end())
{
m_listeners.erase(iter);
}
else {
std::cerr << "Could not unregister the specified listener object as it is not registered." << std::endl;
}
}
void Stream::readHeader()
{
auto self(shared_from_this());
asio::async_read(socket_,
asio::buffer(read_msg_.data(), chat_message::header_length),
[this, self](asio::error_code ec, std::size_t /*length*/)
{
if (!ec && read_msg_.decode_header())
{
readBody();
}
else if (ec == asio::error::eof || ec == asio::error::connection_reset)
{
std::for_each(m_listeners.begin(), m_listeners.end(), [&](IStreamListener *l) {l->onStreamDisconnecting(this->id()); });
streams_->die(shared_from_this());
}
else
{
std::cerr << "Exception: " << ec.message();
}
});
}
void Stream::readBody()
{
auto self(shared_from_this());
asio::async_read(socket_,
asio::buffer(read_msg_.body(), read_msg_.body_length()),
[this, self](asio::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
// notify the listener (GUI) that a response has arrived and pass a reference to it
auto msg = std::make_shared<chat_message>(std::move(read_msg_));
std::for_each(m_listeners.begin(), m_listeners.end(), [&](IStreamListener *l) {l->onMessageReceived(msg); });
readHeader();
}
else
{
streams_->die(shared_from_this());
}
});
}
void Stream::write()
{
auto self(shared_from_this());
asio::async_write(socket_,
asio::buffer(write_msgs_.front().data(),
write_msgs_.front().length()),
[this, self](asio::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
write_msgs_.pop_front();
if (!write_msgs_.empty())
{
write();
}
}
else
{
streams_->die(shared_from_this());
}
});
}
Interfaces
class IStream
{
public:
/// Unique stream identifier
typedef void* TId;
virtual TId id() const
{
return (TId)(this);
}
virtual ~IStream() {}
virtual void deliver(const chat_message& msg) = 0;
virtual std::string getName() = 0;
virtual std::string getIP() = 0;
/// observer pattern
virtual void RegisterListener(IStreamListener *l) = 0;
virtual void UnregisterListener(IStreamListener *l) = 0;
};
class IStreamListener
{
public:
virtual void onStreamDisconnecting(IStream::TId streamId) = 0;
virtual void onMessageReceived(std::shared_ptr<chat_message> msg) = 0;
};
/*
streamCollection / service delegates
*/
class IStreamCollectionListener
{
public:
virtual void onStreamDied(IStream::TId streamId) = 0;
virtual void onStreamCreated(std::shared_ptr<IStream> stream) = 0;
};
StreamCollection is basically a set of IStreams:
class StreamCollection
{
public:
void join(stream_ptr stream)
{
streams_.insert(stream);
std::for_each(m_listeners.begin(), m_listeners.end(), [&](IStreamCollectionListener *l) {l->onStreamCreated(stream); });
}
// more events and observer pattern implementation
First of all: The code works as intended so far.
My question:
Is this the way Asio is supposed to be used for asynchronous programming? I'm especially unsure about the Service::send_message method and the use of io_service.post. What is its purpose in my case? It did work too when I just called async_write directly, without wrapping it in the io_service.post call.
Am I running into problems with this approach?
Asio is designed to be a toolkit rather than a framework. As such, there are various ways to use it successfully. Separating the GUI and network threads, and using asynchronous I/O for scalability, can be a good idea.
Delegating work to the io_service within a public API, such as Service::send_message(), has the following consequences:
it decouples the caller's thread from the thread(s) servicing the io_service. For example, if Stream::write() performs a time-consuming cryptographic function, the caller's thread (GUI) would not be impacted.
it provides thread safety. The io_service is thread-safe; however, socket is not thread-safe. Additionally, other objects may not be thread-safe, such as write_msgs_. Asio guarantees that handlers will only be invoked from within threads running the io_service. Consequently, if only one thread is running the io_service, then there is no possibility of concurrency, and both socket_ and write_msgs_ will be accessed in a thread-safe manner. Asio refers to this as an implicit strand. If more than one thread is processing the io_service, then one may need to use an explicit strand to provide thread safety; a minimal sketch follows. See this answer for more details on strands.
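For instance, if several threads ran the io_service, an explicit-strand variant of Service::send_message() might look like the sketch below (strand_ is an assumed asio::io_service::strand member of Service, constructed from io_service_):
void Service::send_message(std::string streamID, chat_message& msg)
{
    // Handlers posted through the same strand never run concurrently,
    // so streams_ and the Stream's write queue are accessed serially.
    strand_.post(
        [this, msg, streamID]()
        {
            auto stream = streams_.getStreamByID(streamID);
            stream->deliver(msg);
        });
}
For this to be sufficient, the Stream's own completion handlers would also have to go through the same strand (e.g. by wrapping them with strand_.wrap(...)).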
Additional Asio considerations:
Observers are invoked within handlers, and handlers are run within the network thread. If any observer takes a long time to complete, such as having to synchronize with various shared objects touched by the GUI thread, then it could create poor responsiveness across other operations. Consider using a queue to broker events between the observer and subject components. For instance, one could use another io_service as a queue that is run by its own thread, and post into it:
auto msg = std::make_shared<chat_message>(std::move(read_msg_));
for (auto l: m_listeners)
dispatch_io_service.post([=](){ l->onMessageReceived(msg); });
Verify that the container type for write_msgs_ does not invalidate iterators, pointers, and references to existing elements on push_back(), or to other elements on pop_front(). For instance, using std::list or std::deque would be safe, but a std::vector may invalidate references to existing elements on push_back().
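For example, a deque-based queue (which, as I recall, is what the chat example uses) satisfies this:
// push_back()/pop_front() on a deque leave references to the other elements valid
typedef std::deque<chat_message> chat_message_queue;
chat_message_queue write_msgs_;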
StreamCollection::die() may be called multiple times for a single Stream. This function should either be idempotent or handle the side effects appropriately.
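A sketch of an idempotent die(), assuming StreamCollection keeps the streams in a std::set (as join() suggests) and notifies its IStreamCollectionListener observers:
void StreamCollection::die(stream_ptr stream)
{
    // erase() returns the number of elements removed, so a second call for
    // the same stream is a harmless no-op and listeners fire only once.
    if (streams_.erase(stream) > 0)
    {
        for (auto l : m_listeners)
            l->onStreamDied(stream->id());
    }
}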
On failure for a given Stream, its listeners are informed of a disconnect in only one path: failing to read a header with an error of asio::error::eof or asio::error::connection_reset. Other paths do not invoke IStreamListener::onStreamDisconnecting():
the header is read, but decoding fails. In this particular case, the entire read chain stops without informing the other components. The only indication that a problem has occurred is a print statement to std::cerr.
when there is a failure reading the body. A sketch of closing that gap follows.
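For instance, the error branch in Stream::readBody() could mirror the header path before removing the stream:
else
{
    // tell listeners about the disconnect, as the header path already does
    for (auto l : m_listeners)
        l->onStreamDisconnecting(this->id());
    streams_->die(shared_from_this());
}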
I'm wondering what the best (cleanest, hardest to mess up) method for cleanup is in this situation.
void MyClass::do_stuff(boost::asio::yield_context context) {
while (running_) {
uint32_t data = async_buffer->Read(context);
// do other stuff
}
}
Read is a call which asynchronously waits until there is data to be read, then returns that data. If I want to delete this instance of MyClass, how can I make sure I do so properly? Let's say that the asynchronous wait here is performed via a deadline_timer's async_wait. If I cancel the event, I still have to wait for the thread to finish executing the "other stuff" before I know things are in a good state (I can't join the thread, as it's a thread that belongs to the io service that may also be handling other jobs). I could do something like this:
MyClass::~MyClass() {
running_ = false;
read_event->CancelEvent(); // some way to cancel the deadline_timer the Read is waiting on
boost::mutex::scoped_lock lock(finished_mutex_);
if (!finished_) {
cond_.wait(lock);
}
// any other cleanup
}
void MyClass::do_stuff(boost::asio::yield_context context) {
while (running_) {
uint32_t data = async_buffer->Read(context);
// do other stuff
}
boost::mutex::scoped_lock lock(finished_mutex_);
finished_ = true;
cond_.notify_one();
}
But I'm hoping to make these stackful coroutines as easy to use as possible, and it's not straightforward for people to recognize that this condition exists and what would need to be done to make sure things are cleaned up properly. Is there a better way? Is what I'm trying to do here wrong at a more fundamental level?
Also, for the event (what I have is basically the same as Tanner's answer here) I need to cancel it in a way that I'd have to keep some extra state (a true cancel vs. the normal cancel used to fire the event) -- which wouldn't be appropriate if there were multiple pieces of logic waiting on that same event. Would love to hear if there's a better way to model the asynchronous event to be used with a coroutine suspend/resume.
Thanks.
EDIT: Thanks @sehe, I took a shot at a working example; I think this illustrates what I'm getting at:
class AsyncBuffer {
public:
AsyncBuffer(boost::asio::io_service& io_service) :
write_event_(io_service) {
write_event_.expires_at(boost::posix_time::pos_infin);
}
void Write(uint32_t data) {
buffer_.push_back(data);
write_event_.cancel();
}
uint32_t Read(boost::asio::yield_context context) {
if (buffer_.empty()) {
write_event_.async_wait(context);
}
uint32_t data = buffer_.front();
buffer_.pop_front();
return data;
}
protected:
boost::asio::deadline_timer write_event_;
std::list<uint32_t> buffer_;
};
class MyClass {
public:
MyClass(boost::asio::io_service& io_service) :
running_(false), io_service_(io_service), buffer_(io_service) {
}
void Run(boost::asio::yield_context context) {
while (running_) {
boost::system::error_code ec;
uint32_t data = buffer_.Read(context[ec]);
// do something with data
}
}
void Write(uint32_t data) {
buffer_.Write(data);
}
void Start() {
running_ = true;
boost::asio::spawn(io_service_, boost::bind(&MyClass::Run, this, _1));
}
protected:
boost::atomic_bool running_;
boost::asio::io_service& io_service_;
AsyncBuffer buffer_;
};
So here, let's say that the buffer is empty and MyClass::Run is currently suspended while making a call to Read, so there's a deadline_timer.async_wait that's waiting for the event to fire to resume that context. It's time to destroy this instance of MyClass, so how do we make sure that it gets done cleanly?
A more typical approach would be to use boost::enable_shared_from_this with MyClass, and run the methods as bound to the shared pointer.
Boost Bind supports binding to boost::shared_ptr<MyClass> transparently.
This way, you can automatically have the destructor run only when the last user disappears.
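A minimal sketch of that idea (assuming MyClass derives from boost::enable_shared_from_this<MyClass> and includes <boost/bind.hpp>):
auto instance = boost::make_shared<MyClass>(io_service);
// The bound shared_ptr keeps the instance alive until the coroutine finishes,
// so the destructor runs only after the last handler lets go of it.
boost::asio::spawn(io_service, boost::bind(&MyClass::Run, instance, _1));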
If you create a SSCCE, I'm happy to change it around, to show what I mean.
UPDATE
On the SSCCE, some remarks:
I imagined a pool of threads running the IO service
The way in which MyClass calls into AsyncBuffer member functions directly is not thread-safe. There is actually no thread-safe way to cancel the event from outside the producer thread[1], since the producer already accesses the buffer for Write-ing. This could be mitigated using a strand (in the current setup I don't see how MyClass would otherwise be thread-safe). Alternatively, look at the active object pattern (for which Tanner has an excellent answer[2] on SO).
I chose the strand approach here, for simplicity, so we do:
void MyClass::Write(uint32_t data) {
strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
}
You ask
Also, for the event (what I have is basically the same as Tanner's answer here) I need to cancel it in a way that I'd have to keep some extra state (a true cancel vs. the normal cancel used to fire the event)
The most natural place for this state is the usual one for a deadline_timer: its deadline. Stopping the buffer is done by resetting the timer:
void AsyncBuffer::Stop() { // not threadsafe!
write_event_.expires_from_now(boost::posix_time::seconds(-1));
}
This at once cancels the timer, but is detectable because the deadline is in the past.
Here's a simple demo with a group of IO service threads, one "producer coroutine" that produces random numbers, and a "sniper thread" that snipes the MyClass::Run coroutine after 2 seconds. The main thread is the sniper thread.
See it Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/async_result.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <list>
#include <iostream>
// for refcounting:
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
namespace asio = boost::asio;
class AsyncBuffer {
friend class MyClass;
protected:
AsyncBuffer(boost::asio::io_service &io_service) : write_event_(io_service) {
write_event_.expires_at(boost::posix_time::pos_infin);
}
void Write(uint32_t data) {
buffer_.push_back(data);
write_event_.cancel();
}
uint32_t Read(boost::asio::yield_context context) {
if (buffer_.empty()) {
boost::system::error_code ec;
write_event_.async_wait(context[ec]);
if (ec != boost::asio::error::operation_aborted || write_event_.expires_from_now().is_negative())
{
if (context.ec_)
*context.ec_ = boost::asio::error::operation_aborted;
return 0;
}
}
uint32_t data = buffer_.front();
buffer_.pop_front();
return data;
}
void Stop() {
write_event_.expires_from_now(boost::posix_time::seconds(-1));
}
private:
boost::asio::deadline_timer write_event_;
std::list<uint32_t> buffer_;
};
class MyClass : public boost::enable_shared_from_this<MyClass> {
boost::atomic_bool stopped_;
public:
MyClass(boost::asio::io_service &io_service) : stopped_(false), buffer_(io_service), strand_(io_service) {}
void Run(boost::asio::yield_context context) {
while (!stopped_) {
boost::system::error_code ec;
uint32_t data = buffer_.Read(context[ec]);
if (ec == boost::asio::error::operation_aborted)
break;
// do something with data
std::cout << data << " " << std::flush;
}
std::cout << "EOF\n";
}
bool Write(uint32_t data) {
if (!stopped_) {
strand_.post(boost::bind(&AsyncBuffer::Write, &buffer_, data));
}
return !stopped_;
}
void Start() {
if (!stopped_) {
stopped_ = false;
boost::asio::spawn(strand_, boost::bind(&MyClass::Run, shared_from_this(), _1));
}
}
void Stop() {
stopped_ = true;
strand_.post(boost::bind(&AsyncBuffer::Stop, &buffer_));
}
~MyClass() {
std::cout << "MyClass destructed because no coroutines hold a reference to it anymore\n";
}
protected:
AsyncBuffer buffer_;
boost::asio::strand strand_;
};
int main()
{
boost::thread_group tg;
asio::io_service svc;
{
// Start the consumer:
auto instance = boost::make_shared<MyClass>(svc);
instance->Start();
// Sniper in 2 seconds :)
boost::thread([instance]{
boost::this_thread::sleep_for(boost::chrono::seconds(2));
instance->Stop();
}).detach();
// Start the producer:
auto producer_coro = [instance, &svc](asio::yield_context c) { // a bound function/function object in C++03
asio::deadline_timer tim(svc);
while (instance->Write(rand())) {
tim.expires_from_now(boost::posix_time::milliseconds(200));
tim.async_wait(c);
}
};
asio::spawn(svc, producer_coro);
// Start the service threads:
for(size_t i=0; i < boost::thread::hardware_concurrency(); ++i)
tg.create_thread(boost::bind(&asio::io_service::run, &svc));
}
// now `instance` is out of scope, it will selfdestruct after the snipe
// completed
boost::this_thread::sleep_for(boost::chrono::seconds(3)); // wait longer than the snipe
std::cout << "This is the main thread _after_ MyClass self-destructed correctly\n";
// cleanup service threads
tg.join_all();
}
[1] logical thread, this could be a coroutine that gets resumed on different threads
[2] boost::asio and Active Object
I want to create an event loop class that will run on its own thread, support adding tasks as std::functions, and execute them.
For this, I am using the SafeQueue from here: https://stackoverflow.com/a/16075550/1069662
class EventLoop
{
public:
typedef std::function<void()> Task;
EventLoop() { stop_ = false; }
void add_task(Task t) { queue.enqueue(t); }
void start();
void stop() { stop_ = true; }
private:
SafeQueue<Task> queue;
std::atomic<bool> stop_; // written by stop() from another thread, read by the loop
};
void EventLoop::start()
{
while (!stop_) {
Task t = queue.dequeue(); // Blocking call
if (!stop_) {
t();
}
}
cout << "Exit Loop";
}
Then, you would use it like this:
EventLoop loop;
std::thread t(&EventLoop::start, &loop);
loop.add_task(myTask);
// do smth else
loop.stop();
t.join();
My question is: how do I stop the thread gracefully?
Here stop() cannot exit the loop because of the blocking call on the queue.
Queue up a 'poison pill' stop task. That unblocks the queue wait and either directly requests the thread to clean up and exit or allows the consumer thread to check a 'stop' boolean.
That's assuming you need to stop the threads/task at all before the app terminates. I usually try to not do that, if I can get away with it.
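A minimal sketch of the poison-pill variant, replacing the question's stop() (the flag is checked on the loop thread right after dequeue() returns, as in the original start()):
void EventLoop::stop()
{
    stop_ = true;                               // seen by the loop once it wakes up
    add_task([] { /* poison pill: exists only to unblock dequeue() */ });
}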
An alternative approach: just queue up a task that throws an exception. A few changes to your code:
class EventLoop {
// ...
class stopexception {};
// ...
void stop()
{
add_task(
// Boring function that throws a stopexception
);
}
};
void EventLoop::start()
{
try {
while (1)
{
Task t = queue.dequeue(); // Blocking call
t();
}
} catch (const stopexception &e)
{
cout << "Exit Loop";
}
}
An alternative that doesn't use exceptions, for those who are allergic to them, would be to redefine Task as a function that takes an EventLoop reference as its sole parameter, and have stop() queue up a task that sets the flag that breaks out of the main loop; a sketch follows.
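A sketch of that exception-free variant (signatures adjusted as described; the flag needs no synchronization because it is only touched from the loop thread):
class EventLoop
{
public:
    typedef std::function<void(EventLoop&)> Task;  // tasks now receive the loop
    void add_task(Task t) { queue.enqueue(t); }
    void stop() { add_task([](EventLoop& l) { l.done_ = true; }); }
    void start()
    {
        while (!done_) {
            Task t = queue.dequeue();              // blocking call
            t(*this);
        }
        std::cout << "Exit Loop";
    }
private:
    SafeQueue<Task> queue;
    bool done_ = false;                            // only written by the stop task, on the loop thread
};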