Managing thread life-cycle in derived class - C++

I have a Base class which acts as an interface to multiple strategies for synchronous event processing. I now want the strategies to process the events asynchronously. To minimize refactoring, each strategy will have its own internal thread for asynchronous event processing. My main concern is how to manage the lifecycle of this thread. The derived strategy classes are constructed and destructed all around the codebase, so it would be hard to manage the thread lifecycle (start/stop) outside of the strategy classes.
I ended up with the following code:
#include <iostream>
#include <cassert>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

struct Base
{
    virtual ~Base()
    {
        std::cout << "In ~Base()" << std::endl;
        // For testing purpose: spend some time in Base dtor
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }

    virtual void processEvents() = 0;

    void startThread()
    {
        if(_thread)
        {
            stopThread();
        }
        _thread.reset(new boost::thread(&Base::processEvents, this));
        assert(_thread);
    }

    void stopThread()
    {
        if(_thread)
        {
            std::cout << "Interrupting and joining thread" << std::endl;
            _thread->interrupt();
            _thread->join();
            _thread.reset();
        }
    }

    boost::shared_ptr<boost::thread> _thread;
};

struct Derived : public Base
{
    Derived()
    {
        startThread();
    }

    virtual ~Derived()
    {
        std::cout << "In ~Derived()" << std::endl;
        // For testing purpose: make sure the virtual method is called while in dtor
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
        stopThread();
    }

    virtual void processEvents()
    {
        try
        {
            // Process events in Derived specific way
            while(true)
            {
                // Emulated interruption point for testing purpose
                boost::this_thread::sleep(boost::posix_time::milliseconds(100));
                std::cout << "Processing events..." << std::endl;
            }
        }
        catch (boost::thread_interrupted& e)
        {
            std::cout << "Thread interrupted" << std::endl;
        }
    }
};

int main(int argc, char** argv)
{
    Base* b = new Derived;
    delete b;
    return 0;
}
As you can see, the thread is interrupted and joined in the Derived class destructor. Many comments on Stack Overflow argue that it's a bad idea to join a thread in a destructor. However, I can't find a better idea given the constraint that the thread lifecycle must be managed through the construction/destruction of the Derived class. Does someone have a better proposition?

It is a good idea to release resources a class creates when the class is destroyed, even if one of the resources is a thread. However, when performing any non-trivial task in a destructor, it is often worth taking the time to examine the implications in full.
Destructors
A general rule is to not throw exceptions in destructors. If a Derived object is on a stack that is unwinding from another exception, and Derived::~Derived() throws an exception, then std::terminate() will be invoked, killing the application. While Derived::~Derived() is not explicitly throwing an exception, it is important to consider that some of the functions it is invoking may throw, such as _thread->join().
If std::terminate() is the desired behavior, then no change is required. However, if std::terminate() is not desired, then catch boost::thread_interrupted and suppress it.
try
{
    _thread->join();
}
catch (const boost::thread_interrupted&)
{
    /* suppressed */
}
Inheritance
It looks as though inheritance was used for code reuse, minimizing refactoring by isolating the asynchronous behavior inside the Base hierarchy. However, some of the boilerplate logic is also in Derived. As classes derived from Base already need to be changed, I would suggest considering aggregation or the CRTP to minimize the amount of boilerplate logic and code within these classes.
For example, a helper type can be introduced to encapsulate the threading logic:
class AsyncJob
{
public:
    typedef boost::function<void()> fn_type;

    // Start running a job asynchronously.
    template <typename Fn>
    AsyncJob(const Fn& fn)
      : thread_(&AsyncJob::run, fn_type(fn))
    {}

    // Stop the job.
    ~AsyncJob()
    {
        thread_.interrupt();
        // Join may throw, so catch and suppress.
        try { thread_.join(); }
        catch (const boost::thread_interrupted&) {}
    }

private:
    // The function object's type is erased before being passed
    // into the run function so that the loop logic does not
    // need to be duplicated.
    static void run(fn_type fn)
    {
        // Continuously call the provided function until an interrupt occurs.
        try
        {
            while (true)
            {
                fn();
                // Force an interruption point into the loop, as the user provided
                // function may never call a Boost.Thread interruption point.
                boost::this_thread::interruption_point();
            }
        }
        catch (const boost::thread_interrupted&) {}
    }

    boost::thread thread_;
};
This helper class could be aggregated and initialized in Derived's constructor. It removes the need for much of the boilerplate code, and can be reused elsewhere:
struct Derived : public Base
{
    Derived()
      : job_(boost::bind(&Base::processEvents, this))
    {}

    virtual void processEvents()
    {
        // Process events in Derived specific way
    }

private:
    AsyncJob job_;
};
Another key point is that AsyncJob forces a Boost.Thread interruption point into the loop logic. The job shutdown logic is implemented in terms of interruption points, so it is critical that an interruption point be reached during each iteration; otherwise, shutdown could deadlock if the user-provided code never reaches an interruption point of its own.
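If depending on interruption points feels too fragile, the same shutdown contract can be expressed with a cooperative stop flag instead. Below is a minimal standard-library sketch of that idea (the class name `StopFlagJob` is invented here); the loop checks an atomic flag on every iteration, so it can never miss a shutdown request the way a loop without interruption points can:

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <thread>

// Hypothetical variant of AsyncJob that uses a cooperative stop flag
// instead of Boost.Thread interruption: the loop re-checks the flag on
// every iteration, so shutdown cannot deadlock waiting for an
// interruption point that never arrives.
class StopFlagJob
{
public:
    explicit StopFlagJob(std::function<void()> fn)
        : stop_(false)
        , thread_([this, fn] { while (!stop_.load()) fn(); })
    {}

    ~StopFlagJob()
    {
        stop_.store(true); // request shutdown...
        thread_.join();    // ...and wait for the loop to observe it
    }

private:
    std::atomic<bool> stop_; // declared before thread_ so it is ready first
    std::thread thread_;
};
```

The trade-off is that the supplied function must return reasonably quickly; a blocking call inside `fn` would delay shutdown, which is exactly the situation interruption points were designed to break out of.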
Lifespan
Examine whether it is the thread's lifetime that must be associated with the object's lifetime, or whether it is the asynchronous event processing that needs to be associated with the object's lifetime. If it is the latter, then it may be worth considering a thread pool. A thread pool can provide finer-grained control over thread resources, such as imposing a maximum limit, and can minimize the number of wasted threads, such as threads doing nothing or time spent creating/destroying short-lived threads.
For example, consider the case where a user creates an array of 500 Derived objects. Are 500 threads needed to handle 500 strategies, or could 25 threads handle them? Keep in mind that on some systems thread creation/destruction can be expensive, and there may even be a maximum thread limit imposed by the OS.
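To make the "25 threads handle 500 strategies" point concrete, here is a minimal fixed-size pool sketch (names like `FixedPool` are invented for illustration): a handful of workers drain one shared queue, so the number of threads is decoupled from the number of submitted jobs.

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal illustrative pool: a fixed number of workers service an
// unbounded number of tasks, so 500 strategies do not need 500 threads.
class FixedPool
{
public:
    explicit FixedPool(std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }

    ~FixedPool()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join(); // drain remaining tasks, then stop
    }

    void submit(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void workerLoop()
    {
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty()) return; // done_ set and queue drained
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};
```

A production pool would add error handling and perhaps per-task futures, but even this sketch shows the lifetime benefit: thread creation happens once, not per strategy.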
In conclusion, examine the tradeoffs and determine which behaviors are acceptable. It can be difficult to minimize refactoring, particularly when changing a threading model that has implications for various areas of the codebase. The perfect solution is very rarely obtainable, so identify the solution that covers the majority of cases. Once the supported behavior has been clearly defined, work on modifying the existing code so that it stays within the supported behavior.

Related

What are some C++ alternatives to static objects that could make destruction safer (or more deterministic)?

I'm working on a large code base that, for performance reasons, limits access to one or more resources. A thread pool is a good analogy to my problem - we don't want everyone in the process spinning up their own threads, so a common pool with a producer/consumer job queue exists in an attempt to limit the number of threads running at any given time.
There isn't an elegant way to make ownership of the thread pool clear so, for all intents and purposes, it is a singleton. I speak better in code than in English, so here is an example:
class ThreadPool {
public:
    static void SubmitTask(Task&& t) { instance_.SubmitTask(std::move(t)); }
private:
    ~ThreadPool() {
        std::for_each(pool_.begin(), pool_.end(), [](auto &t) {
            if (t.joinable()) t.join();
        });
    }
private:
    std::array<std::thread, 5> pool_;
    static ThreadPool instance_; // here or anonymous namespace
};
The issue with this pattern is instance_ doesn't go out of scope until after main has returned which typically results in races or crashes. Also, keep in mind this is analogous to my problem so better ways to do something asynchronously isn't really what I'm after; just better ways to manage the lifecycle of static objects.
Alternatives I've thought of:
Provide an explicit Terminate function that must be called manually before leaving main.
Not using statics at all and leaving it up to the app to ensure only a single instance exists.
Not using statics at all and crashing the app if more than 1 instance is instantiated.
I also realize that a small, sharp team could probably make the above code work just fine. However, this code lives within a large organization that has many developers of various skill levels contributing to it.
You could explicitly bind the lifetime to your main function. Either add a static shutdown() method to your ThreadPool that does any cleanup you need and call it at the end of main().
Or fully bind the lifetime via RAII:
class ThreadPool {
public:
    static ThreadPool* get() { return instance_.get(); }
    void SubmitTask(Task&& t) { ... }
    ~ThreadPool() { ... }
private:
    ThreadPool() {}
    static inline std::unique_ptr<ThreadPool> instance_;
    friend class ThreadPoolScope;
};

class ThreadPoolScope {
public:
    ThreadPoolScope() {
        assert(!ThreadPool::instance_);
        ThreadPool::instance_.reset(new ThreadPool());
    }
    ~ThreadPoolScope() {
        ThreadPool::instance_.reset();
    }
};

int main() {
    ThreadPoolScope thread_pool_scope{};
    ...
}

void some_func() {
    ThreadPool::get()->SubmitTask(...);
}
This makes destruction completely deterministic and if you do this with multiple objects, they are automatically destroyed in the correct order.
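A compilable miniature may help show the determinism (task handling elided; `Pool` and `PoolScope` are stand-ins for the classes above): the singleton exists exactly for the lifetime of the scope object declared in `main()`, so nothing can race with it after `main()` returns.

```cpp
#include <cassert>
#include <memory>

// Miniature of the ThreadPoolScope pattern. The static member is
// defined out of line here for portability; `static inline` (C++17)
// works equally well.
class Pool
{
public:
    static Pool* get() { return instance_.get(); }
private:
    Pool() = default;
    static std::unique_ptr<Pool> instance_;
    friend class PoolScope;
};
std::unique_ptr<Pool> Pool::instance_;

class PoolScope
{
public:
    PoolScope()
    {
        assert(!Pool::instance_);          // only one instance allowed
        Pool::instance_.reset(new Pool()); // created at scope entry
    }
    ~PoolScope() { Pool::instance_.reset(); } // destroyed at scope exit
};
```

Any code that runs before the `PoolScope` is constructed, or after it is destroyed, sees `Pool::get() == nullptr` and can fail loudly rather than racing with a half-destroyed singleton.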

How to detect stack unwinding in C++20 coroutines?

The typical advice in C++ is to detect stack unwinding in the destructor using std::uncaught_exceptions(), see the example from https://en.cppreference.com/w/cpp/error/uncaught_exception :
struct Foo {
    int count = std::uncaught_exceptions();
    ~Foo() {
        std::cout << (count == std::uncaught_exceptions()
            ? "~Foo() called normally\n"
            : "~Foo() called during stack unwinding\n");
    }
};
But this advice looks no longer applicable to C++20 coroutines, which can be suspended and resumed including during stack unwinding. Consider the following example:
#include <coroutine>
#include <iostream>

struct ReturnObject {
    struct promise_type {
        ReturnObject get_return_object() { return { std::coroutine_handle<promise_type>::from_promise(*this) }; }
        std::suspend_always initial_suspend() { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void unhandled_exception() {}
        void return_void() {}
    };
    std::coroutine_handle<promise_type> h_;
};

struct Foo {
    int count = std::uncaught_exceptions();
    Foo() { std::cout << "Foo()\n"; }
    ~Foo() {
        std::cout << (count == std::uncaught_exceptions()
            ? "~Foo() called normally\n"
            : "~Foo() called during stack unwinding\n");
    }
};

struct S
{
    std::coroutine_handle<ReturnObject::promise_type> h_;
    ~S() { h_(); }
};

int main()
{
    auto coroutine = []() -> ReturnObject { Foo f; co_await std::suspend_always{}; };
    auto h = coroutine().h_;
    try
    {
        S s{ .h_ = h };
        std::cout << "Exception being thrown\n";
        throw 0; // calls s.~S() during stack unwinding
    }
    catch( int ) {}
    std::cout << "Exception caught\n";
    h();
    h.destroy();
}
It uses the same class Foo inside the coroutine, which is destructed normally (not due to stack unwinding during exception), but still prints:
Exception being thrown
Foo()
Exception caught
~Foo() called during stack unwinding
Demo: https://gcc.godbolt.org/z/Yx1b18zT9
How can one re-design class Foo to properly detect stack unwinding in coroutines as well?
The archetypal reason for wanting to know if a function is being executed due to stack unwinding is for something like rolling back a database transaction. So the situation looks rather like this:
Your function does some database work. It creates a database transaction governed by a RAII object. That object is on the function's stack (either directly or indirectly as a subobject of some other stack object). You do some stuff, and when that RAII object leaves the stack, the database transaction should commit or rollback, depending on whether it left the stack normally or because an exception passed through the function itself respectively.
This is all pretty neat and tidy. There is no explicit cleanup code needed in the function itself.
What does this mean for a coroutine? That becomes exceedingly complicated, because a coroutine can be terminated for reasons outside of its own execution.
For a normal function, it either completes or throws an exception. If such a function fails, it happens internally to the function. Coroutines don't work like that. Between suspend points, the code that schedules the resumption of the coroutine might itself fail.
Consider asynchronous file loading. You pass a continuation function to the file reader, and the continuation will be given the file data as it gets read to process it. Partially through this process, a file read error happens. But that happens in the external code that's accessing the file, not the continuation function that is consuming it.
So the external code needs to tell the consuming function that an error happened and it should abort its process. This cannot happen via an exception (at least not by default); the interface between these two pieces of code must have a mechanism to transmit that the process failed. There are ways to have this interface actually throw an exception within the continuation function itself (ie: the continuation gets some object that it calls to access the currently read data, and it throws if a read error happened), but that is still a cooperative mechanism.
It doesn't happen by itself.
So even if you could solve this problem in a coroutine, you would still need to account for cases when a coroutine needs to terminate for reasons outside of an exception thrown from within. Since you're going to need explicit code to do cleanup/rollbacks/etc anyway, there's little point in relying on purely RAII mechanisms to do this.
To more directly answer the question, if you still want to do this, you need to treat the code between suspend points as if they were their own functions. Each suspend point is effectively a separate function call, with its own exception count and so forth.
So either a RAII object lives entirely between suspend points, or you need to update the exception count every time a suspend point starts.
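The "refresh at every resume" idea can be sketched without any coroutine machinery. Assume a guard that re-captures `std::uncaught_exceptions()` whenever the surrounding code is resumed; in a real coroutine, `on_resume()` would be invoked from the `await_resume()` of every `co_await` in scope (the name and placement are this sketch's invention, not a standard mechanism):

```cpp
#include <exception>

// The destructor compares against the count captured at the most
// recent "resume", not at construction, so a suspension that straddles
// an unrelated exception no longer produces a false positive.
struct UnwindAwareGuard
{
    int count = std::uncaught_exceptions();
    bool* result = nullptr; // where to record the verdict (for testing)

    // In a coroutine, call this from await_resume() of each co_await.
    void on_resume() { count = std::uncaught_exceptions(); }

    ~UnwindAwareGuard()
    {
        if (result)
            *result = std::uncaught_exceptions() > count; // true if unwinding
    }
};
```

This only restores correctness for exceptions thrown between the last resume and the destructor, which is exactly the "treat each inter-suspend segment as its own function" discipline described above.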

std::function in combination with thread c++11 fails debug assertion in vector

I want to build a helper class that can accept an std::function (created via std::bind) so that I can call this class repeatedly from another thread:
A short example:
void loopme() {
    std::cout << "yay";
}

int main() {
    LoopThread loop = { std::bind(&loopme) };
    loop.start();
    // wait 1 second
    loop.stop();
    // be happy about output
}
However, when calling stop(), my current implementation triggers a failed debug assertion (see image: i.stack.imgur.com/aR9hP.png).
Does anyone know why the error is thrown? I don't even use vectors in this example.
When I don't call loopme from within the thread but write directly to std::cout, no error is thrown.
Here the full implementation of my class:
class LoopThread {
public:
    LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, thread_{ nullptr }, is_running_{ false }, counter_{ 0 } {};
    ~LoopThread();
    void start();
    void stop();
    bool isRunning() { return is_running_; };
private:
    std::function<void(LoopThread*, uint32_t)> function_;
    std::thread* thread_;
    bool is_running_;
    uint32_t counter_;
    void executeLoop();
};

LoopThread::~LoopThread() {
    if (isRunning()) {
        stop();
    }
}

void LoopThread::start() {
    if (is_running_) {
        throw std::runtime_error("Thread is already running");
    }
    if (thread_ != nullptr) {
        throw std::runtime_error("Thread is not stopped yet");
    }
    is_running_ = true;
    thread_ = new std::thread{ &LoopThread::executeLoop, this };
}

void LoopThread::stop() {
    if (!is_running_) {
        throw std::runtime_error("Thread is already stopped");
    }
    is_running_ = false;
    thread_->detach();
}

void LoopThread::executeLoop() {
    while (is_running_) {
        function_(this, counter_);
        ++counter_;
    }
    if (!is_running_) {
        std::cout << "end";
    }
    //delete thread_;
    //thread_ = nullptr;
}
I used the following Googletest code for testing (however a simple main method containing the code should work):
void testfunction(pft::LoopThread*, uint32_t i) {
    std::cout << i << ' ';
}

TEST(pfFiles, TestLoop)
{
    pft::LoopThread loop{ std::bind(&testfunction, std::placeholders::_1, std::placeholders::_2) };
    loop.start();
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    loop.stop();
    std::this_thread::sleep_for(std::chrono::milliseconds(2500));
    std::cout << "Why does this fail";
}
Your use of is_running_ is undefined behavior, because you write to it in one thread and read it in another without a synchronization barrier.
Partly due to this, your stop() doesn't stop anything. Even without this UB (i.e., if you "fix" it by using an atomic), stop() merely says "oy, stop at some point"; by the time it returns, it has not even attempted to guarantee the stop has happened.
Your code calls new needlessly. There is no reason to use a std::thread* here.
Your code violates the rule of 5. You wrote a destructor, then neglected copy/move operations. It is ridiculously fragile.
As stop() does nothing of consequence to stop a thread, the thread holding a pointer to this outlives your LoopThread object. LoopThread goes out of scope, destroying the object that the pointer stored in your std::thread refers to. The still-running executeLoop invokes a std::function that has been destroyed, then increments a counter in invalid memory (possibly on the stack where another variable has been created).
Roughly, there is 1 fundamental error in using std threading in every 3-5 lines of your code (not counting interface declarations).
Beyond the technical errors, the design is wrong as well: using detach is almost always a horrible idea. Unless you have a promise you make ready at thread exit and then wait on the completion of that promise somewhere, getting anything like a clean and dependable shutdown of your program is next to impossible.
As a guess, the vector error is because you are stomping all over stack memory and following nearly invalid pointers to find functions to execute. The test system either puts an array index in the spot you are trashing and then the debug vector catches that it is out of bounds, or a function pointer that half-makes sense for your std function execution to run, or somesuch.
Only communicate through synchronized data between threads. That means atomic data, or mutex guarded, unless you are getting ridiculously fancy. You don't understand threading enough to get fancy. You don't understand threading enough to copy someone who got fancy and properly use it. Don't get fancy.
Don't use new. Almost never, ever use new. Use make_shared or make_unique if you absolutely have to. But use those rarely.
Don't detach a thread. Period. Yes this means you might have to wait for it to finish a loop or somesuch. Deal with it, or write a thread manager that does the waiting at shutdown or somesuch.
Be extremely clear about what data is owned by what thread. Be extremely clear about when a thread is finished with data. Avoid using data shared between threads; communicate by passing values (or pointers to immutable shared data), and get information from std::futures back.
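A minimal sketch of those rules together (the function names here are invented for illustration): the only shared state is an atomic stop flag, the result comes back through a `std::future` value rather than shared data, and the thread is implicitly joined (never detached) before its owner goes away.

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <future>

// Worker: reads only the atomic flag, owns everything else locally,
// and communicates its result by return value.
long long countUntilStopped(std::atomic<bool>& stop)
{
    long long iterations = 0;
    while (!stop.load())
        ++iterations;
    return iterations;
}

long long runWorkerBriefly()
{
    std::atomic<bool> stop{false};
    std::future<long long> result =
        std::async(std::launch::async, countUntilStopped, std::ref(stop));
    stop.store(true);    // request shutdown through the one shared atomic
    return result.get(); // waits for the thread to finish; never detaches
}
```

Note how there is nothing to get wrong at shutdown: `get()` cannot return until the worker has fully exited, so no code ever touches a dead object.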
There are a number of hurdles in learning how to program. If you have gotten this far, you have passed a few. But you probably know people who learned alongside you that fell over at one of the earlier hurdles.
Sequence, that things happen one after another.
Flow control.
Subprocedures and functions.
Looping.
Recursion.
Pointers/references and dynamic vs automatic allocation.
Dynamic lifetime management.
Objects and Dynamic dispatch.
Complexity
Coordinate spaces
Message passing
Threading and Concurrency
Non-uniform address spaces, Serialization and Networking
Functional programming, meta functions, currying, partial application, Monads
This list is not complete.
The point is, each of these hurdles can cause you to crash and fail as a programmer, and getting each of these hurdles right is hard.
Threading is hard. Do it the easy way. Dynamic lifetime management is hard. Do it the easy way. In both cases, extremely smart people have mastered the "manual" way to do it, and the result is programs that exhibit random unpredictable/undefined behavior and crash a lot. Muddling through manual resource allocation and deallocation and multithreaded code can be made to work, but the result is usually someone whose small programs work accidentally (they work insofar as you fixed the bugs you noticed). And when you master it, initial mastery comes in the form of holding an entire program's "state" in your head and understanding how it works; this fails to scale to large many-developer code bases, so you usually graduate to having large programs that work accidentally.
Both the make_unique style and only-immutable-shared-data based threading are composable strategies. This means that if small pieces are correct and you put them together, the resulting program is correct (with regards to resource lifetime and concurrency). That permits local mastery of small-scale threading or resource management to apply to large-scale programs in the domains where these strategies work.
After following the guide from @Yakk, I decided to restructure my program:
bool is_running_ changes to std::atomic<bool> is_running_
stop() will not only trigger the stopping, but will actively wait for the thread to stop via thread_->join()
all calls of new are replaced with std::make_unique<std::thread>(&LoopThread::executeLoop, this)
I have no experience with copy or move constructors, so I decided to forbid them. This should prevent me from accidentally using them. If at some point in the future I need them, I'll have to take a deeper look at them.
thread_->detach() was replaced by thread_->join() (see 2.)
class LoopThread {
public:
    LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, is_running_{ false }, counter_{ 0 } {};
    LoopThread(LoopThread &&) = delete;
    LoopThread(const LoopThread &) = delete;
    LoopThread& operator=(const LoopThread&) = delete;
    LoopThread& operator=(LoopThread&&) = delete;
    ~LoopThread();
    void start();
    void stop();
    bool isRunning() const { return is_running_; };
private:
    std::function<void(LoopThread*, uint32_t)> function_;
    std::unique_ptr<std::thread> thread_;
    std::atomic<bool> is_running_;
    uint32_t counter_;
    void executeLoop();
};

LoopThread::~LoopThread() {
    if (isRunning()) {
        stop();
    }
}

void LoopThread::start() {
    if (is_running_) {
        throw std::runtime_error("Thread is already running");
    }
    if (thread_ != nullptr) {
        throw std::runtime_error("Thread is not stopped yet");
    }
    is_running_ = true;
    thread_ = std::make_unique<std::thread>( &LoopThread::executeLoop, this );
}

void LoopThread::stop() {
    if (!is_running_) {
        throw std::runtime_error("Thread is already stopped");
    }
    is_running_ = false;
    thread_->join();
    thread_ = nullptr;
}

void LoopThread::executeLoop() {
    while (is_running_) {
        function_(this, counter_);
        ++counter_;
    }
}

TEST(pfThread, TestLoop)
{
    pft::LoopThread loop{ std::bind(&testFunction, std::placeholders::_1, std::placeholders::_2) };
    loop.start();
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    loop.stop();
}

C++11 thread doesn't work with virtual member function

I'm trying to get a class to run a thread, which will call a virtual member function named Tick() in a loop. Then I tried to derive a class and override base::Tick().
But when executed, the program just calls the base class's Tick instead of the overridden one. Any solutions?
#include <iostream>
#include <atomic>
#include <thread>
#include <chrono>

using namespace std;

class Runnable {
public:
    Runnable() : running_(ATOMIC_VAR_INIT(false)) {
    }
    ~Runnable() {
        if (running_)
            thread_.join();
    }
    void Stop() {
        if (std::atomic_exchange(&running_, false))
            thread_.join();
    }
    void Start() {
        if (!std::atomic_exchange(&running_, true)) {
            thread_ = std::thread(&Runnable::Thread, this);
        }
    }
    virtual void Tick() {
        cout << "parent" << endl;
    };
    std::atomic<bool> running_;
private:
    std::thread thread_;
    static void Thread(Runnable *self) {
        while(self->running_) {
            self->Tick();
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }
};

class Fn : public Runnable {
public:
    void Tick() {
        cout << "children" << endl;
    }
};

int main (int argc, char const* argv[])
{
    Fn fn;
    fn.Start();
    return 0;
}
outputs:
parent
You can't let an object run out of scope until you're finished using it! The return 0; at the end of main causes fn to go out of scope. So by the time you get around to calling tick, there's no guarantee the object even exists any more.
(The logic in ~Runnable is totally broken. Inside the destructor is way too late -- the object is already at least partially destroyed.)
The approach of using inheritance with the parent serving as control for the thread and the children implementing the functions is a bad idea in general. The common problems with this approach come from construction and destruction:
if the thread is started from the constructor in the parent (control) then it might start running before the constructor completes and the thread might call the virtual function before the complete object has been fully constructed
if the thread is stopped in the destructor of the parent, then by the time the control joins the thread, the thread is executing a method on an object that no longer exists.
In your particular case you are hitting the second case. The program starts executing, and in main the second thread is started. At that point there is a race between the main thread and the newly launched one: if the new thread is faster (unlikely, as starting a thread is an expensive operation), it will call the member method Tick, which will be dispatched to the final overrider Fn::Tick.
But if the main thread is faster, it will exit the scope of main and start destruction of the object: it will complete destruction of the Fn subobject, and during destruction of the Runnable it will join the thread. If the main thread is fast enough, it will make it to the join before the second thread and wait there for the second thread to call Tick on the now final overrider, which is Runnable::Tick. Note that this is Undefined Behavior, and not guaranteed, since the second thread is accessing an object that is being destroyed.
Also, there are other possible orderings, like for example, the second thread could dispatch to Fn::Tick before the main thread starts destruction, but might not complete the function before the main thread destroys the Fn sub object, in which case your second thread would be calling a member function on a dead object.
You should rather follow the approach in the C++ standard: separate the control from the logic, fully construct the object that will be run and pass it to the thread during construction. Note that this is the case of Java's Runnable, which is recommended over extending the Thread class. Note that from a design point of view this separation makes sense: the thread object manages the execution, and the runnable is the code to execute.
A thread is not a ticker, but rather what controls the execution of the ticker. And in your code Runnable is not something that can be run, but rather something that runs other objects that happen to derive from it.
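A small standard-library sketch of that separation (class and function names invented here): the object to be run is fully constructed before the thread is created, and the thread is joined before the object is destroyed, so no virtual call can ever land on a half-built or half-destroyed object.

```cpp
#include <cassert>
#include <functional>
#include <iostream>
#include <thread>

// The "runnable": plain logic, knowing nothing about threads.
struct Ticker
{
    int ticks = 0;
    virtual void tick() { ++ticks; }
    virtual ~Ticker() = default;
};

struct LoudTicker : Ticker
{
    void tick() override { ++ticks; std::cout << "tick\n"; }
};

// The "control": runs a fully constructed Ticker a fixed number of times.
void runTicks(Ticker& t, int n)
{
    for (int i = 0; i < n; ++i) t.tick();
}

int runExample()
{
    LoudTicker t;                               // fully constructed first
    std::thread worker(runTicks, std::ref(t), 3);
    worker.join();                              // joined before t is destroyed
    return t.ticks;
}
```

Because construction strictly precedes thread start and join strictly precedes destruction, virtual dispatch always sees the complete `LoudTicker`, unlike the inheritance-based design above.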

Clean-up code in the C++ exception's destructor

Can we use the destructor of an exception as a place to put some clean-up code?
In this manner we may allow the client to control the finalization step as opposed to RAII.
Is this a good or a bad design?
Is this a correct solution in the context of OOP and C++?
I'm currently working on an asynchronous procedure which itself starts asynchronously multiple tasks.
The pattern looks as follows:
struct IAsyncResult
{
    ...
    virtual void EndCall() const;
};

typedef std::shared_ptr<IAsyncResult> IAsyncResultPtr;

struct IAsyncTask
{
    virtual IAsyncResultPtr BeginTask() = 0;
    virtual void EndTask(IAsyncResultPtr async) const = 0;
};

class CompositeTask : public IAsyncTask
{
    …
};
Unfortunately I’m unable to guarantee that each subtask’s BeginTask method will not fail. So it is possible that N-1 subtasks start successfully and the Nth fails.
In general it is vital to be sure that no background tasks are running before the client’s code finishes. But sometimes the client doesn’t care if some tasks fail.
So my current solution involves a custom exception which is thrown from CompositeTask’s BeginAsync method in case one task fails to start. This allows a client to control the clean-up stage:
class composite_async_exception : public std::exception
{
    std::vector<IAsyncResultPtr> successfully_started_tasks;
    mutable bool manage_cleanup;
public:
    composite_async_exception(std::vector<IAsyncResultPtr> const& _successfully_started_tasks)
        : successfully_started_tasks(_successfully_started_tasks)
        , manage_cleanup(true)
    {
    }

    virtual ~composite_async_exception() throw()
    {
        if(!manage_cleanup)
            return;
        for( auto task = successfully_started_tasks.begin(); task != successfully_started_tasks.end(); ++task)
        {
            (*task)->CancelTask();
        }
    }

    void Giveup() const throw()
    {
        manage_cleanup = false;
    }
};
And the client uses the code as shown:
try
{
    compositeTask.BeginAsync();
}
catch(composite_async_exception const& ex)
{
    // prevent the exception from cancelling the tasks
    ex.Giveup();
    // some handling
}
Are there some best practices to handle such a situation?
The exception may be copied, in which case the destructor will be called multiple times. In your case that seems not to be a problem.
The exception handling mechanism might also stop your tasks by destroying a temporary exception object, aborting your tasks at the throw point rather than at the handling one.
To verify this one should read the standard, which I'm too lazy to do.
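A more conventional alternative is to keep the rollback out of the exception object entirely and put it in a scope guard inside the throwing function; then the caller never has to remember to call Giveup(), and exception copies are harmless. The names below (`CancelGuard`, `beginTasks`) are invented for this sketch:

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <vector>

// Scope guard: runs its cancel action unless dismissed before scope exit.
class CancelGuard
{
public:
    explicit CancelGuard(std::function<void()> cancel)
        : cancel_(std::move(cancel)) {}

    void dismiss() { armed_ = false; } // call once everything has started

    ~CancelGuard() { if (armed_) cancel_(); }

private:
    std::function<void()> cancel_;
    bool armed_ = true;
};

// Begin n tasks; if one "fails" (simulated by failAt), the guard cancels
// the ones already started without relying on the exception's lifetime.
// Returns the number of started tasks; records cancellations in `cancelled`.
int beginTasks(int n, int failAt, int& cancelled)
{
    std::vector<int> started; // stands in for the started-task handles
    CancelGuard guard([&] { cancelled = static_cast<int>(started.size()); });
    for (int i = 0; i < n; ++i)
    {
        if (i == failAt)
            throw std::runtime_error("task failed to start");
        started.push_back(i);
    }
    guard.dismiss(); // all tasks started; keep them running
    return static_cast<int>(started.size());
}
```

This keeps the "cancel on partial failure" policy inside the function that started the tasks, where the order of destruction is well defined, instead of tying it to however many times the in-flight exception happens to be copied.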