How to stop/destroy a thread with a blocking call gracefully upon call of C++ destructor?

In the following class a worker thread is started within constructor.
The worker has a blocking call to a queue.
It works as expected, but when the AsyncQueue object goes out of scope (for whatever reason), its destructor is called, as is the destructor of the simple_queue object (which I verified by debugging).
But what happens to the worker? Because it is still waiting on the blocking call to the queue!
I observed that without calling impl_thread_.detach();, execution crashes in the destructor.
However, I don't know whether this is a solution at all. What I additionally don't understand: although the queue object is destroyed, the blocking call does not raise an exception; in fact, I set a breakpoint in the catch handler and it is never hit. So what is going on here, and what is the right way to implement this scenario? I have a deep feeling that what I'm doing here is not quite as it should be ;-)
template<typename T>
class AsyncQueue
{
public:
    AsyncQueue() : impl_thread_(&AsyncQueue::worker, this)
    {
        tq_ = std::shared_ptr<simple_queue<T>>(new simple_queue<T>);
        impl_thread_.detach();
    }

    //~AsyncQueue() = default;
    ~AsyncQueue() {
        std::cout << "[" << boost::this_thread::get_id() << "] destructor AsyncQueue" << std::endl;
        return;
    }

private:
    std::thread impl_thread_;
    std::shared_ptr<simple_queue<T>> tq_;

    void worker()
    {
        try {
            while (true)
            {
                boost::optional<T> item = tq_->deq(); // blocks
                ...
                ...
                ...
            }
        }
        catch (std::exception const& e) {
            return;
        }
    }

public:
    ...
    ...
};

The simplest way, if you can, is to have your destructor push a stop token into your queue and check for that token in the worker so it can exit. Remove the detach first.
~AsyncQueue() {
    tq_->enq(stopToken); // whatever you can use here; otherwise use an atomic bool
    std::cout << "[" << boost::this_thread::get_id() << "] destructor AsyncQueue" << std::endl;
    impl_thread_.join();
}
(untested, incomplete)

Related

Executing a function on a specific thread in C++

I have a class which has some of its functions thread-safe.
class A
{
public:
    // Thread-safe class B
    B foo;
    // Thread-specific class C
    C bar;
    void somefunc()
    {
        // uses foo and bar
    }
};
class C
{
public:
    C()
    {
        m_id = std::this_thread::get_id();
    }
    // id of the thread which created the object
    std::thread::id m_id;
};
Class A can be set on different threads. As class C is thread-specific, I want to run somefunc from the thread m_id.
So I was thinking of executing somefunc by submitting it to the thread identified by m_id.
The main question is: can I run a particular function on a live thread, given that I know that thread's id?
I was thinking of executing somefunc by submitting it to the thread identified by m_id.
That is not how threads work in general. You can't ask just any thread to stop what it is doing and call a certain function. The only way it makes sense to submit anything to a thread is if the thread is already running code that is designed to accept the submission and knows what to do with it.
You could write a thread that loops forever, and on each iteration it waits to consume a std::function<...> object from a blocking queue, and then it calls the object. Then, some other thread could "submit" std::function<...> objects to the thread by putting them in the queue.
You can use boost::asio::io_service.
A function (or work) posted to an io_service will be executed on a different thread: whichever thread is calling the run() member function of that io_service.
A Rough Example:
#include <boost/asio/io_service.hpp>

boost::asio::io_service ios_;

void func()
{
    std::cout << "Executing work: " << std::this_thread::get_id() << std::endl;
}

// Thread 1
ios_.run();

// Thread 2
std::cout << "Posting work: " << std::this_thread::get_id() << std::endl;
ios_.post(func);
ios_.post([] () {
    std::cout << "Lambda" << std::endl;
});

pending std::future get of std::async with shared_from_this argument blocks destruction of this

I wanted to create a class that would represent a task that can be started running asynchronously and will run continuously (effectively in a detached thread) until a stop signal is received. The usage for the sake of this question would look like this:
auto task = std::make_shared<Task>();
task->start(); // starts the task running asynchronously
... after some time passes ...
task->stop(); // signals to stop the task
task->future.get(); // waits for task to stop running and return its result
However, a key feature of this Task class is that I cannot guarantee that the future will be waited/got... i.e. the last line may not get called before the shared pointer is destroyed.
A stripped-down toy version of the class I wrote is as follows (please ignore that everything is in public, this is just for this example's simplicity):
class MyClass : public std::enable_shared_from_this<MyClass> {
public:
    ~MyClass() { std::cout << "Destructor called" << std::endl; }

    void start() {
        future = std::async(std::launch::async, &MyClass::method, this->shared_from_this());
    }

    void stop() { m_stop = true; }

    void method() {
        std::cout << "running" << std::endl;
        do {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (m_stop == false);
        std::cout << "stopped" << std::endl;
    }

    std::future<void> future;
    std::atomic<bool> m_stop = false;
};
However, I discovered an undesirable feature of this code: if, instead of calling get on the future, I just wait (e.g. if I don't care about the result of method, which in this case is void anyway), then when task is deleted, the instance doesn't get destroyed.
I.e. doing task->future.get() gives:
running
stopped
Destructor called
But task->future.wait() gives:
running
stopped
From reading the answer to What is the lifetime of the arguments of std::async? I believe the problem is that the this->shared_from_this() argument to std::async won't be destroyed until the future from the async has been made invalid (through get, destruction, or otherwise). So this shared_ptr keeps the class instance alive.
Solution Attempt 1:
Replace the line in start with:
future = std::async(std::launch::async, [this]() {
    return this->shared_from_this()->method();
});
This ensures the shared_ptr it creates is destroyed when the method completes, but I have been worried that there's nothing to stop this being destroyed between the time it is captured by the lambda (which happens at this line, correct?) and the time the lambda is executed in the new thread. Is this a real possibility?
Solution Attempt 2:
To protect this (the task) from being destroyed before the lambda function runs, I add another member variable std::shared_ptr<MyClass> myself; then my start method can look like this:
myself = this->shared_from_this();
future = std::async(std::launch::async, [this]() {
    auto my_ptr = std::move(this->myself);
    return my_ptr->method();
});
Here the idea is that myself will ensure that if I delete the task shared_ptr, I don't destroy the class. Then inside the lambda, the shared_ptr is transferred to the local my_ptr variable, which is destroyed on exit.
Are there issues with this solution, or have I overlooked a much cleaner way of achieving the sort of functionality I'm after?
Thanks!
I found that solution attempt 2 would, in some scenarios, generate a deadlock exception. This appears to come from the async thread simultaneously trying to destroy the future (by destroying the class instance) while also trying to set the future's value.
Solution attempt 3 - this seems to pass all my tests so far:
myself = this->shared_from_this();
std::promise<void> p;
future = p.get_future();
std::thread([this](std::promise<void>&& p) {
    myself->method();
    myself.reset();
    p.set_value_at_thread_exit();
}, std::move(p)).detach();
The logic here is that it is safe to destroy myself (by resetting the shared pointer) once the method call has finished; it's safe to delete the future of a promise before the promise has set its value. No deadlock occurs because the future is destroyed before the promise tries to transfer a value.
Any comments on this solution or potentially neater alternatives would be welcome. In particular it would be good to know if there are issues I've overlooked.
I would suggest one of the following solutions:
Solution 1, Use std::async with this instead of shared_from_this:
class MyClass /* : public std::enable_shared_from_this<MyClass> not needed here */ {
public:
    ~MyClass() { std::cout << "Destructor called" << std::endl; }

    void start() {
        future = std::async(std::launch::async, &MyClass::method, this);
    }

    void stop() { m_stop = true; }

    void method() {
        std::cout << "running" << std::endl;
        do {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (m_stop == false);
        std::cout << "stopped" << std::endl;
    }

    std::atomic<bool> m_stop = false;
    std::future<void> future; // IMPORTANT: future constructed last, destroyed first
};
This solution works even without calling wait or get on the future, because the destructor of a future returned by std::async blocks until the task terminates. It is important that the future be constructed last, so that it is destroyed (and thus blocks) before all the other members. If this is too risky, use solution 3 instead.
Solution 2, Use a detached thread like you did:
void start() {
    std::promise<void> p;
    future = p.get_future();
    std::thread(
        [m = this->shared_from_this()](std::promise<void>&& p) {
            m->method();
            p.set_value();
        },
        std::move(p))
        .detach();
}
One drawback of this solution: if you have many instances of MyClass, you will create a lot of threads, possibly resulting in contention. A better option would be a thread pool instead of one thread per object.
Solution 3, Separate the executable from the task class e.g:
class ExeClass {
public:
    ~ExeClass() { std::cout << "Destructor of ExeClass" << std::endl; }

    void method() {
        std::cout << "running" << std::endl;
        do {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (m_stop == false);
        std::cout << "stopped" << std::endl;
    }

    std::atomic<bool> m_stop = false;
};

class MyClass {
public:
    ~MyClass() { std::cout << "Destructor of MyClass" << std::endl; }

    void start() {
        future = std::async(std::launch::async, &ExeClass::method, exe);
    }

    void stop() { exe->m_stop = true; }

    std::shared_ptr<ExeClass> exe = std::make_shared<ExeClass>();
    std::future<void> future;
};
Like solution 1, this blocks when the future is destroyed, but you don't need to take care of the order of construction and destruction. IMO this is the cleanest option.

C++ boost::asio::io_service: how can I safely destroy io_service resources when the program finishes?

I run an async job thread for asynchronous io_service work.
I want to destroy the resources used for this async job:
boost::asio::io_service
boost::asio::io_service::work
boost::asio::steady_timer
boost::thread
I manage the singleton object by shared pointer; see AsyncTraceProcessor below. As you know, shared_ptr automatically calls the destructor when the use count reaches 0. I want to destroy all resources in a safe way at that time.
I wrote the code below, but there is a SIGSEGV error on the JVM (this program is a Java native library).
How can I solve it? In my opinion, work that is already queued but not yet executed causes this error. In that case, how can I handle the remaining work safely?
AsyncTraceProcessor::~AsyncTraceProcessor() {
    cout << "AsyncTraceProcessor Desructor In, " << instance.use_count() << endl;
    _once_flag;
    cout << "++++++++++flag reset success" << endl;
    traceMap.clear();
    cout << "++++++++++traceMap reset success" << endl;
    timer.cancel();
    cout << "++++++++++timer reset success" << endl;
    async_work.~work();
    cout << "++++++++++work reset success" << endl;
    async_service.~io_service();
    cout << "++++++++++io_service reset success" << endl;
    async_thread.~thread();
    cout << "++++++++++thread reset success" << endl;
    instance.reset();
    cout << "++++++++++instance reset success" << endl;
    cout << "AsyncTraceProcessor Desructor Out " << endl;
}
Error Log
AsyncTraceProcessor Desructor In, 0
Isn't Null
++++++++++flag reset success
++++++++++traceMap reset success
++++++++++timer reset success
++++++++++work reset success
A fatal error has been detected by the Java Runtime Environment:
++++++++++io_service reset success
++++++++++thread reset success
SIGSEGV
++++++++++instance reset success
AsyncTraceProcessor Desructor Out
C++ is unlike Java or C#, or basically any garbage-collected language runtime. It has deterministic destruction: lifetimes of objects are very tangible and reliable.
async_service.~io_service();
This explicitly invokes a destructor without deleting the object, and before the lifetime of the automatic-storage variable ends.
The consequence is that the language will still invoke the destructor when the lifetime does end, so the destructor runs a second time, which is undefined behavior.
This is not what you want.
If you need to clear the work, make it a unique_ptr<io_service::work> so you can work_p.reset() instead (which does call its destructor, once).
After that, just wait for the threads to complete io_service::run(), meaning you should thread::join() them before the thread object gets destructed.
Member objects of classes have automatic storage duration and will be destructed when leaving the destructor body. They will be destructed in the reverse order in which they are declared.
Sample
struct MyDemo {
    boost::asio::io_service _ios;
    std::unique_ptr<boost::asio::io_service::work> _work_p { new boost::asio::io_service::work(_ios) };
    std::thread _thread { [this] { _ios.run(); } };

    ~MyDemo() {
        _work_p.reset();
        if (_thread.joinable())
            _thread.join();
    } // members are destructed by the language
};

How can I handle interrupt signal and call destructor in c++? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Is destructor called if SIGINT or SIGSTP issued?
My code is like this:
#include <iostream>
#include <signal.h>
#include <cstdlib>

void handler(int) {
    std::cout << "will exit..." << std::endl;
    exit(0);
}

class A {
public:
    A() { std::cout << "constructor" << std::endl; }
    ~A() { std::cout << "destructor" << std::endl; }
};

int main(void) {
    signal(SIGINT, &handler);
    A a;
    for (;;);
    return 0;
}
When I pressed Ctrl-C, it printed:
constructor
^Cwill exit...
There is no "destructor" printed.
So, how can I exit cleanly?
With difficulty. The code you've written already has undefined behavior; you're not allowed to output to a stream in a signal handler, and for that matter, you're not allowed to call exit either. (I'm basing my assertions here on the POSIX standard. In pure C++, all you're allowed to do is assign to a variable of volatile sig_atomic_t type.)
In a simple case like your code, you could do something like:
volatile sig_atomic_t stopFlag = 0;

void handler(int)
{
    stopFlag = 1;
}

int main()
{
    signal(SIGINT, &handler);
    A a;
    while (stopFlag == 0) {
    }
    std::cout << "will exit..." << std::endl;
    return 0;
}
Depending on the application, you may be able to do something like this, checking stopFlag at appropriate places. But generally, if you try this, there will be race conditions: you check stopFlag before starting an interruptible system call, then do the call; the signal arrives between the check and the call, you do the call, and it isn't interrupted. (I've used this technique, but in an application where the only interruptible system call was a socket read with a very short timeout.)
Typically, at least under POSIX, you'll end up having to create a signal-handling thread; this can then be used to cleanly shut down all of the other threads. Basically, you start by setting the signal mask to block all signals, then in the signal-handling thread, once started, set it to accept the signals you're interested in and call sigwait(). This implies, however, that you do all of the usual actions necessary for a clean shutdown of the threads: the signal-handling thread has to know about all other threads, call pthread_cancel on them, etc., and your compiler has to generate the correct code to handle pthread_cancel, or you need to develop some other means of ensuring that all threads are correctly notified. (One would hope, today, that all compilers handle pthread_cancel correctly. But one never knows; doing so has significant runtime cost, and is not usually needed.)
You need to return from the main function's scope for the destructor to run:
#include <iostream>
#include <signal.h>
#include <cstdlib>

volatile bool stop = false;

void handler(int) {
    std::cout << "will exit..." << std::endl;
    stop = true;
}

class A {
public:
    A() { std::cout << "constructor" << std::endl; }
    ~A() { std::cout << "destructor" << std::endl; }
};

int main(void) {
    A a;
    signal(SIGINT, &handler);
    for (; !stop; );
    return 0;
}
It's because exit() does not destroy objects with automatic storage duration, though it does destroy objects with static storage duration. If you put the variable a at global scope (i.e. outside of any function), you will see that the destructor is called properly.
If you want to handle cleaning up yourself (instead of letting the run-time and OS handle it), you can have a conditional loop, something like this:
volatile bool keep_running = true;

void handler(int) {
    std::cout << "will exit..." << std::endl;
    keep_running = false;
}

int main(void) {
    signal(SIGINT, &handler);
    A a;
    while (keep_running);
    return 0;
}
Memory should be freed anyway, but if you've got cleanup code to run, I guess you'd have to track all your objects and then destroy them as needed (e.g. having the constructor add them to a std::set, while the destructor removes them again). However, this wouldn't ensure proper order of destruction (which might require a more complex solution).
You could as well use your signal handler to set some flag that leaves the infinite loop (or whatever you're doing in your main loop) instead of simply terminating with exit().
exit terminates the process almost immediately; in particular, objects with automatic storage duration are not destroyed. Streams are also flushed and closed, but you're not allowed to touch streams from inside a signal handler. So...
Simply don't call exit from a signal handler; set some atomic flag to instruct the loop to end instead.
#include <iostream>
#include <signal.h>
#include <cstdlib>

volatile sig_atomic_t exitRequested = 0;

void handler(int) {
    exitRequested = 1;
}

struct A {
    A() { std::cout << "constructor" << std::endl; }
    ~A() { std::cout << "destructor" << std::endl; }
};

int main() {
    signal(SIGINT, &handler);
    A a;
    for (; !exitRequested; );
}

Why might this thread management pattern result in a deadlock?

I'm using a common base class has_threads to manage any type that should be allowed to instantiate a boost::thread.
Instances of has_threads each own a set of threads (to support waitAll and interruptAll functions, which I do not include below), and should automatically invoke releaseThread when a thread terminates to maintain this set's integrity.
In my program, I have just one of these. Threads are created on an interval every 10s, and each performs a database lookup. When the lookup is complete, the thread runs to completion and releaseThread should be invoked; with a mutex held, the thread object is removed from internal tracking. I can see this working properly with the output ABC.
Once in a while, though, the mechanisms collide: releaseThread executes perhaps twice concurrently. What I can't figure out is why this results in a deadlock. From that point on, thread invocations never output anything other than A. [It's worth noting that I'm using a thread-safe stdlib, and that the issue remains when IOStreams are not used.] Stack traces indicate that the mutex is blocking these threads, but why would the lock not eventually be released by the first thread for the second, then the second for the third, and so on?
Am I missing something fundamental about how scoped_lock works? Is there anything obvious here that I've missed that could lead to a deadlock, despite (or even due to?) the use of a mutex lock?
Sorry for the poor question, but as I'm sure you're aware it's nigh-on impossible to present real testcases for bugs like this.
class has_threads {
protected:
    template <typename Callable>
    void createThread(Callable f, bool allowSignals)
    {
        boost::mutex::scoped_lock l(threads_lock);
        // Create and run thread
        boost::shared_ptr<boost::thread> t(new boost::thread());
        // Track thread
        threads.insert(t);
        // Run thread (do this after inserting the thread for tracking so that we're ready for the on-exit handler)
        *t = boost::thread(&has_threads::runThread<Callable>, this, f, allowSignals);
    }

private:
    /**
     * Entrypoint function for a thread.
     * Sets up the on-end handler then invokes the user-provided worker function.
     */
    template <typename Callable>
    void runThread(Callable f, bool allowSignals)
    {
        boost::this_thread::at_thread_exit(
            boost::bind(
                &has_threads::releaseThread,
                this,
                boost::this_thread::get_id()
            )
        );
        if (!allowSignals)
            blockSignalsInThisThread();
        try {
            f();
        }
        catch (boost::thread_interrupted& e) {
            // Yes, we should catch this exception!
            // Letting it bubble over is _potentially_ dangerous:
            // http://stackoverflow.com/questions/6375121
            std::cout << "Thread " << boost::this_thread::get_id() << " interrupted (and ended)." << std::endl;
        }
        catch (std::exception& e) {
            std::cout << "Exception caught from thread " << boost::this_thread::get_id() << ": " << e.what() << std::endl;
        }
        catch (...) {
            std::cout << "Unknown exception caught from thread " << boost::this_thread::get_id() << std::endl;
        }
    }

    void releaseThread(boost::thread::id thread_id)
    {
        std::cout << "A";
        boost::mutex::scoped_lock l(threads_lock);
        std::cout << "B";
        for (threads_t::iterator it = threads.begin(), end = threads.end(); it != end; ++it) {
            if ((*it)->get_id() != thread_id)
                continue;
            threads.erase(it);
            break;
        }
        std::cout << "C";
    }

    void blockSignalsInThisThread()
    {
        sigset_t signal_set;
        sigemptyset(&signal_set);
        sigaddset(&signal_set, SIGINT);
        sigaddset(&signal_set, SIGTERM);
        sigaddset(&signal_set, SIGHUP);
        sigaddset(&signal_set, SIGPIPE); // http://www.unixguide.net/network/socketfaq/2.19.shtml
        pthread_sigmask(SIG_BLOCK, &signal_set, NULL);
    }

    typedef std::set<boost::shared_ptr<boost::thread> > threads_t;
    threads_t threads;
    boost::mutex threads_lock;
};

struct some_component : has_threads {
    some_component() {
        // set a scheduler to invoke createThread(bind(&some_work, this)) every 10s
    }
    void some_work() {
        // usually pretty quick, but I guess sometimes it could take >= 10s
    }
};
Well, a deadlock might occur if the same thread locks a mutex it has already locked (unless you use a recursive mutex).
If the release part is called a second time by the same thread, as seems to happen with your code, you have a deadlock.
I have not studied your code in detail, but you probably have to redesign (simplify?) your code to be sure that a lock cannot be acquired twice by the same thread. You could add a safeguard checking for ownership of the lock...
EDIT:
As said in my comment and in IronMensan's answer, one possible case is that the thread stops during creation, the at_exit handler being called before the release of the mutex locked in the creation part of your code.
EDIT2:
Well, with a mutex and a scoped lock, I can only imagine a recursive lock, or a lock that is never released. That can happen if a loop becomes infinite due to memory corruption, for instance.
I suggest adding more logs with a thread id to check whether there is a recursive lock or something strange. Then I would check that the loop is correct. I would also check that at_exit is only called once per thread...
One more thing: check the effect of erasing (thus calling the destructor of) a thread while inside the at_exit function...
my 2 cents
You may need to do something like this:
void createThread(Callable f, bool allowSignals)
{
    // Create and run thread
    boost::shared_ptr<boost::thread> t(new boost::thread());
    {
        boost::mutex::scoped_lock l(threads_lock);
        // Track thread
        threads.insert(t);
    }
    // Do not hold threads_lock while starting the new thread in case
    // it completes immediately
    // Run thread (do this after inserting the thread for tracking so that we're ready for the on-exit handler)
    *t = boost::thread(&has_threads::runThread<Callable>, this, f, allowSignals);
}
In other words, use threads_lock exclusively to protect threads.
Update:
To expand on something in the comments with speculation about how boost::thread works, the lock patterns could look something like this:
createThread:

1. (createThread) obtain threads_lock
2. (boost::thread::operator=) obtain a boost::thread internal lock
3. (boost::thread::operator=) release the boost::thread internal lock
4. (createThread) release threads_lock

thread end handler:

1. (at_thread_exit) obtain a boost::thread internal lock
2. (releaseThread) obtain threads_lock
3. (releaseThread) release threads_lock
4. (at_thread_exit) release the boost::thread internal lock

If those two boost::thread locks are the same lock, the potential for deadlock is clear. But this is speculation, because much of the boost code scares me and I try not to look at it.
createThread could/should be reworked to move step 4 up between steps 1 and 2, eliminating the potential deadlock.
It is possible that the created thread finishes before or during the assignment operator in createThread. Using an event queue or some other structure might be necessary, though a simpler, somewhat hack-ish solution may work as well. Don't change createThread, since you have to use threads_lock to protect threads itself as well as the thread objects it points to. Instead, change runThread to this:
template <typename Callable>
void runThread(Callable f, bool allowSignals)
{
    // SNIP setup
    try {
        f();
    }
    // SNIP catch blocks

    // ensure that createThread is complete before this thread terminates
    boost::mutex::scoped_lock l(threads_lock);
}