Prevent destruction of self after main? - c++

I'm writing some asynchronous I/O code in C++, and I need to prevent an object from being destructed until its handler for the asynchronous I/O has been called. I'm trying to use shared_ptr and create my object through a static create() function so I can be sure it is reference counted. I keep a weak_ptr to it, and when I start the asynchronous I/O I store it into another shared_ptr so it can't become invalid during that time. Finally, I reset that shared_ptr when the callback completes. Here's an example:
#pragma once
#include <memory>
#include <functional>
using namespace std;

class SomeIO {
    std::weak_ptr<SomeIO> self;
    std::shared_ptr<SomeIO> savingSelf;

    void myCallback() {
        // do my callback stuff here
        savingSelf.reset();
    }

public:
    SomeIO() = default; // make_shared needs an accessible constructor, so it cannot be deleted
    ~SomeIO() {}

    static shared_ptr<SomeIO> create() {
        auto self = make_shared<SomeIO>();
        self->self = self;
        return self;
    }

    void start() {
        savingSelf = self.lock();
        //startSomeAsyncIO(bind(&SomeIO::myCallback, self));
    }
};

int main() {
    auto myIO = SomeIO::create();
    myIO->start();
    return 0;
}
int main() {
auto myIO = SomeIO::create();
myIO->start();
return 0;
}
My question is, what is going to happen after main returns? Will it stay alive until the final reference is released, or is this going to cause a memory leak? If this does cause a memory leak, how do I handle this situation so the asynchronous I/O can be canceled and the program can end without a memory leak? I would think that shared_ptr protects me from memory leaks, but I'm not so sure about this situation.
Thanks!

In C++ (as opposed to Java), the program ends when main returns: all other threads are terminated, and memory leaks are not your problem, since the program ends anyway and all its memory is reclaimed.
You can use std::thread with std::thread::join to prevent your program from exiting too early:
int main() {
    std::thread myAsyncIOThread([] {
        auto myIO = SomeIO::create();
        myIO->start();
    });
    // other things your program needs to do
    myAsyncIOThread.join();
    return 0;
}
You might also be interested in using a thread pool in your program.
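For illustration, a minimal thread pool could look like the sketch below (the class name and interface are made up for this example, not a library API): a fixed set of workers pulls tasks from a queue, and the destructor joins every worker, so a pool object living in main keeps the process alive until all submitted work has finished.

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal illustrative pool, not production code.
class SimpleThreadPool {
public:
    explicit SimpleThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers.emplace_back([this] { workerLoop(); });
    }
    ~SimpleThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mu);
            stopping = true;
        }
        cv.notify_all();
        for (auto& w : workers) w.join();   // block until all queued work is done
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mu);
            tasks.push(std::move(task));
        }
        cv.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mu);
                cv.wait(lock, [this] { return stopping || !tasks.empty(); });
                if (stopping && tasks.empty()) return;   // drained, shut this worker down
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex mu;
    std::condition_variable cv;
    bool stopping = false;
};

For the original question, the relevant property is the joining destructor: a pool object created in main cannot be destroyed, and main cannot return, until every submitted task (such as the asynchronous I/O completion work) has run.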

Related

How to handle infinite loop in threads and avoid memory leak

I have a project which runs several infinite loops in threads; I have simplified it to the following code:
#include <iostream>
#include <vector>
#include <thread>
#include <boost/fiber/algo/round_robin.hpp>
#include <boost/thread.hpp>
#include <chrono>
#include <string>

void foo() {
    std::cout << "thread a" << std::endl;
    while (true) {
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
    return;
}

void foo2() {
    std::cout << "thread b" << std::endl;
    while (true) {
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
    return;
}

int main() {
    std::thread a(foo);
    std::thread b(foo2);
    while (true) {
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
    return 0;
}
It works as expected.
I use valgrind to detect memory leaks and it reports a leak (I guess the infinite loops never release memory because they never stop). I considered using join(), but it doesn't make sense here. I tried to add
a.detach();
b.detach();
before the while loop in the main function, but it doesn't solve the memory leak issue.
Would somebody please give me some advice on how to avoid the memory leak here?
It's a long answer, so I'll start with a summary: the leak in your example code is not an issue. Nevertheless you should fix it, and the way to fix it is to turn the infinite loops into non-infinite loops and to join the threads.
A memory leak is, for example, this:
void bar() {
    int *x = new int;   // never deleted; the pointer is lost when bar returns
}
An object is dynamically allocated and when the function returns all pointers to the object are lost. The memory is still allocated to the process but you cannot free it. Calling bar many times will pile up memory until the process runs out of memory and gets killed. This is to be avoided.
Then there is a less severe type of memory leak:
int main() {
    bar();
}
Here some memory is allocated, but next the process terminates. When the process terminates all memory is reclaimed by the OS. The missing delete is not such a big issue here.
There are other ways of leaking memory and I am not trying to enumerate them all, but rather use the examples to get a point across.
Then there are good reasons to worry also about this second type of leak, the one I called "less severe", because it is typically not just memory that is leaked. Consider (don't write code like this! it is only for illustrating a point):
int main() {
    A* a = new A();
}
A is some class. In main some memory is allocated and an A is constructed. The memory is the lesser problem here. The real problem is any other resource that A claimed in its constructor. It might have opened a file. It might have opened a connection to a database. Such resources must be cleaned up in a destructor. If the A object is not properly destroyed, critical data might get lost.
Conclusion: leaking memory when returning from main isn't a big issue. Leaking other resources is a big issue, and a memory leak is a good indication that other resources are not being cleaned up properly either.
In your toy example there is no problem but only a small change makes your approach problematic:
void foo() {
    A a;
    while (true) {
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
}
A is again a class that acquires some resource in its constructor, and that resource must be properly released in the destructor. Also, when the program terminates you want to have the data in the database, the last log message in the log file, etc.
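For concreteness, A could be something as simple as this hypothetical logger (not part of the original answer), whose destructor writes a final line and closes the file; if the thread is never allowed to finish, that last line is lost:

#include <fstream>
#include <string>

// Hypothetical resource-owning class standing in for A in the snippets above.
class A {
public:
    A() : log("app.log") { log << "started\n"; }
    ~A() { log << "shutting down cleanly\n"; }   // never written if the thread is killed mid-loop
    void append(const std::string& line) { log << line << '\n'; }
private:
    std::ofstream log;
};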
Rather than while(true) and detach, you should use an atomic or a condition variable to signal the threads that they should stop. Something along the lines of
#include <atomic>   // needed for std::atomic

std::atomic<bool> foo_runs;

void foo() {
    A a;
    while (foo_runs.load()) {
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
}

int main() {
    foo_runs.store(true);
    std::thread a(foo);
    // do something else
    foo_runs.store(false);
    a.join();
}
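For completeness, the same pattern can be written with C++20's std::jthread and std::stop_token, which request the stop and join automatically when the thread object goes out of scope. This is only a sketch, assuming a C++20 compiler:

#include <chrono>
#include <stop_token>
#include <thread>

void foo(std::stop_token st) {
    // A a;   // resource-owning object, destroyed cleanly when the loop ends
    while (!st.stop_requested()) {
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
}

int main() {
    std::jthread a(foo);   // a stop_token is passed as the first argument
    // do something else
}   // ~jthread() calls request_stop() and then join()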
Whatever you do, you have to join()/detach() on a and b. If you call join() before the main loop, you'll never get to the main loop. If you get to the end of main() without join()/detach(), std::abort() will be called.
I don't see a leak, but there is a race on the cout stream. A potential leak can happen if detached thread a or b escapes main() and continues running a never-ending function; in that case the thread itself is leaked, since it is detached and there is no owner left to destroy it. If that's the story, try to call join() on both a and b after the main loop.

Destructor, when object's dynamic variable is locked by mutex will not free it?

I'm trying to solve a complicated (for me, at least) asynchronous scenario all at once, but I think it will be better to understand a simpler case first.
Consider an object that holds some dynamically allocated memory in a member variable:
#include <thread>
#include <mutex>
using namespace std;

mutex mu;

class Object
{
public:
    char *var;

    Object()
    {
        var = new char[1]; var[0] = 1;
    }

    ~Object()
    {
        mu.lock();
        delete[] var; // destructor should free all dynamic memory on its own, as I remember
        mu.unlock();
    }
} *object = nullptr;

int main()
{
    object = new Object();
    return 0;
}
What happens if, while its var member is being used in a detached (i.e. asynchronous) thread, this object is deleted in another thread?
void do_something()
{
    for (;;)
    {
        mu.lock();
        if (object)
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
        mu.unlock();
    }
}

int main()
{
    object = new Object();
    thread th(do_something);
    th.detach();
    Sleep(1000); // Windows API; with standard C++, std::this_thread::sleep_for would do
    delete object;
    object = nullptr;
    return 0;
}
Is it possible that var will not be deleted in the destructor?
Do I use a mutex with detached threads correctly in the code above?
2.1 Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
I also want to point out, once again and separately, that I need the new thread to be asynchronous. I do not want the main thread to hang while the new one is running; I need both threads running at once.
P.S. From the comments and answers, the most important thing I finally understood is the mutex. My biggest misconception was thinking that an already locked mutex makes other threads skip the code between lock and unlock.
Forget about shared variables; the mutex itself has nothing to do with them. A mutex is just a mechanism for safely pausing threads:
mutex mu;

void a()
{
    mu.lock();
    Sleep(1000);
    mu.unlock();
}

int main()
{
    thread th(a);
    th.detach();
    mu.lock(); // hangs here until mu.unlock() from a() is called
    mu.unlock();
    return 0;
}
The concept is extremely simple - the mutex object (imagine) has a flag isLocked. When (any) thread calls the lock method and isLocked is false, it just sets isLocked to true. But if isLocked is already true, the mutex somehow, at a low level, hangs the thread that called lock until isLocked becomes false again. You can find part of the source code of the lock method by scrolling down this page. Instead of a mutex, probably just a bool variable could be used, but that would cause undefined behaviour.
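To make that picture concrete, here is a toy "isLocked flag" lock (a standalone sketch, not from the original post) built on std::atomic<bool>, so that touching the flag from several threads is well defined; a plain bool here would be a data race, and a real std::mutex puts the waiting thread to sleep instead of spinning:

#include <atomic>
#include <thread>

class SpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        // exchange() returns the previous value: true means another thread holds the lock
        while (locked.exchange(true, std::memory_order_acquire))
            std::this_thread::yield();   // spin (politely) until the holder releases it
    }
    void unlock() {
        locked.store(false, std::memory_order_release);
    }
};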
Why is it always explained in terms of shared data? Because using the same variable (memory) simultaneously from multiple threads causes undefined behaviour, so a thread reaching a variable that is currently being used by another thread should wait until the other one has finished working with it; that's why a mutex is used here.
Why does accessing the mutex itself from different threads not cause undefined behaviour? I don't know; going to google it.
Do I use a mutex with detached threads correctly in the code above?
Those are orthogonal concepts. I don't think the mutex is used correctly, since you only have one thread mutating and accessing the global variable and you use the mutex to synchronize waits and exits. You should join the thread instead.
Also, detached threads are usually a code smell. There should be a way to wait for all threads to finish before exiting the main function.
Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
No, since the destructor will call mu.lock(), so you're fine here.
Is it possible that var will not be deleted in the destructor?
No, although it will make your main thread wait. There are solutions that do this without using a mutex, though.
There are usually two ways to attack this problem. You can block the main thread until all other threads are done, or use shared ownership so that both main and the thread own the object, and it is only freed when all owners are gone.
To block all threads until everyone is done and then do the cleanup, you can use std::barrier from C++20:
#include <barrier>     // std::barrier (C++20)
#include <functional>  // std::function

void do_something(std::barrier<std::function<void()>>& sync_point)
{
    for (;;)
    {
        if (object)
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
    } // break at some point so the thread exits
    sync_point.arrive_and_wait();
}

int main()
{
    object = new Object();
    auto const on_completion = [] { delete object; };

    // 2 is the number of threads. I'm counting the main thread since
    // you're using detached threads
    std::barrier<std::function<void()>> sync_point(2, on_completion);

    thread th(do_something, std::ref(sync_point));
    th.detach();
    Sleep(1000);
    sync_point.arrive_and_wait();
    return 0;
}
This will make all the threads (2 of them) wait until every thread gets to the sync point. Once that sync point is reached by all of them, it runs the on_completion function, which deletes the object exactly once, when no one needs it anymore.
The other solution would be to use a std::shared_ptr so anyone can own the pointer and free it only when no one is using it anymore. Note that you will need to remove the object global variable and replace it with a local variable to track the shared ownership:
void do_something(std::shared_ptr<Object> object)
{
    for (;;)
    {
        if (object)
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
    }
}

int main()
{
    std::shared_ptr<Object> object = std::make_shared<Object>();

    // You need to pass it as parameter otherwise it won't be safe
    thread th(do_something, object);
    th.detach();
    Sleep(1000);

    // If the thread is done, this line will call delete
    // If the thread is not done, the thread will call delete
    // when its local `object` variable goes out of scope.
    object = nullptr;
    return 0;
}
Is it possible that var will not be deleted in the destructor?
With
~Object()
{
    mu.lock();
    delete[] var; // destructor should free all dynamic memory on its own, as I remember
    mu.unlock();
}
You might have to wait for that lock to be released, but var would be deleted.
Except that your program exhibits undefined behaviour with unprotected concurrent access to object (delete object isn't protected, and you read object in your other thread), so anything can happen.
Do I use a mutex with detached threads correctly in the code above?
Detached or not is irrelevant.
And your usage of the mutex is wrong/incomplete.
Which variable should your mutex be protecting?
It seems to be a mix of object and var.
If it is var, you might reduce the scope in do_something (lock only inside the if-block).
And object currently has no protection at all.
2.1 Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
Yes, object needs protection.
But you cannot use that same mutex for it: std::mutex does not allow locking twice in the same thread (a protected delete[] var; inside a protected delete object;), whereas std::recursive_mutex allows that.
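A tiny standalone illustration of that difference (not the poster's code): with std::recursive_mutex the same thread may lock again while it already holds the lock, which a plain std::mutex does not permit:

#include <mutex>

std::recursive_mutex rmu;

void inner() {
    std::lock_guard<std::recursive_mutex> lock(rmu); // second lock by the same thread: allowed
}

void outer() {
    std::lock_guard<std::recursive_mutex> lock(rmu); // first lock
    inner();   // with a plain std::mutex this nested lock would be undefined behaviour
}

int main() {
    outer();
    return 0;
}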
So we come back to the question: which variable does the mutex protect?
If it is only object (which is enough in your sample), it would be something like:
#include <thread>
#include <mutex>
using namespace std;

mutex mu;

class Object
{
public:
    char *var;

    Object()
    {
        var = new char[1]; var[0] = 1;
    }

    ~Object()
    {
        delete[] var; // destructor should free all dynamic memory on its own, as I remember
    }
} *object = nullptr;

void do_something()
{
    for (;;)
    {
        mu.lock();
        if (object)
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
        mu.unlock();
    }
}

int main()
{
    object = new Object();
    thread th(do_something);
    th.detach();
    Sleep(1000);
    mu.lock(); // or const std::lock_guard<std::mutex> lock(mu); and get rid of the unlock
    delete object;
    object = nullptr;
    mu.unlock();
    return 0;
}
Alternatively, as you don't have to share data between threads, you might do:
int main()
{
    Object object;
    thread th(do_something);
    Sleep(1000);
    th.join();
    return 0;
}
and get rid of the mutex entirely.
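A sketch of what that alternative could look like if do_something were adapted (hypothetically) to take the object as a parameter and run a finite loop, so there is no global pointer and no mutex at all; it reuses the Object class from the snippet above:

#include <functional>
#include <thread>

// Hypothetical adaptation: the worker gets the object (and an iteration count)
// passed in, so nothing global is shared and no locking is needed.
void do_something(Object& obj, int iterations) {
    for (int i = 0; i < iterations; ++i) {
        if (obj.var[0] < 255)
            obj.var[0]++;
        else
            obj.var[0] = 0;
    }
}

int main() {
    Object object;                                         // stack object, outlives the thread
    std::thread th(do_something, std::ref(object), 100000);
    th.join();                                             // thread is done before object is destroyed
    return 0;
}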
Have a look at this; it shows the use of scoped_lock, std::async and management of lifetimes through scopes (demo here: https://onlinegdb.com/FDw9fG9rS)
#include <future>
#include <mutex>
#include <chrono>
#include <thread>   // std::this_thread::sleep_for
#include <iostream>
// using namespace std; <== don't do this
// mutex mu; avoid global variables.

class Object
{
public:
    Object() :
        m_var{ 1 }
    {
    }

    ~Object()
    {
    }

    void do_something()
    {
        using namespace std::chrono_literals;
        for (std::size_t n = 0; n < 30; ++n)
        {
            // extra scope to reduce the time of the lock
            {
                std::scoped_lock<std::mutex> lock{ m_mtx };
                m_var++;
                std::cout << ".";
            }
            std::this_thread::sleep_for(150ms);
        }
    }

private:
    std::mutex m_mtx;
    char m_var;
};

int main()
{
    Object object;

    // extra scope to manage the lifecycle of the future
    {
        // use a lambda function to start the member function of object
        auto future = std::async(std::launch::async, [&] { object.do_something(); });
        std::cout << "do something started\n";
        // destructor of future will synchronize with the end of the thread
    }

    std::cout << "\n work done\n";
    // safe to go out of scope now and destroy the object
    return 0;
}
All you assumed and asked in your question is right. The variable will always be freed.
But your code has one big problem. Let's look at your example:
int main()
{
    object = new Object();
    thread th(do_something);
    th.detach();
    Sleep(1000);
    delete object;
    object = nullptr;
    return 0;
}
You create a thread that will call do_something(). But let's just assume that right after the thread creation the kernel interrupts the thread and does something else, like updating the stackoverflow tab in your web browser with this answer. So do_something() isn't called yet and won't be for a while, since we all know how slow browsers are.
Meanwhile the main function sleeps 1 second and then calls delete object;. That calls Object::~Object(), which acquires the mutex, deletes var, releases the mutex and finally frees the object.
Now assume that right at this point the kernel interrupts the main thread and schedules the other thread. object still has the address of the object that was deleted. So your other thread will acquire the mutex, object is not nullptr so it accesses it and BOOM.
PS: object isn't atomic so calling object = nullptr in main() will also race with if (object).

Can the thread object be deleted after std::thread::detach?

I have a question about std::thread::detach(). On cplusplus.com it says 'After a call to this function, the thread object becomes non-joinable and can be destroyed safely', by which it seems to mean that the destructor ~thread() may be called safely.
My question is, does this mean that it is ok & safe to delete a thread object immediately after calling detach(), as in the following sample code? Will the function my_function continue safely, and safely use its thread_local variables and variables that are global to the program?
#include <thread>
#include <unistd.h>

void my_function(int t)
{
    sleep(t);
}

int main()
{
    std::thread *X = new std::thread(my_function, 10);
    X->detach();
    delete X;
    sleep(30);
    return 0;
}
The code 'runs' ok; I just want to know if this is safe from the point of view of memory ownership. My motivation here is to have a program that runs 'forever' and spawns a few child threads from time to time (e.g. every 30 seconds). Each child thread then does something and dies: I do not want to have to somehow keep track of the children in the parent thread, call join() and then delete.

pthread_key_create destructor not getting called

As per the pthread_key_create man page, we can associate a destructor to be called at thread shutdown. My problem is that the destructor function I have registered is not being called. The gist of my code is as follows.
static pthread_key_t key;
static pthread_once_t tls_init_flag = PTHREAD_ONCE_INIT;

void destructor(void *t) {
    // thread local data structure clean up code here, which is not getting called
}

void create_key() {
    pthread_key_create(&key, destructor);
}

// This will be called from every thread
void set_thread_specific() {
    ts_stack *ts = new ts_stack; // Thread local data structure
    pthread_once(&tls_init_flag, create_key);
    pthread_setspecific(key, ts);
}
Any idea what might prevent this destructor from being called? I am also using atexit() at the moment to do some cleanup in the main thread. Is there any chance that is interfering with the destructor function being called? I tried removing that as well; it still didn't work though. Also, I am not clear on whether I should handle the main thread as a separate case with atexit. (It's a must to use atexit, by the way, since I need to do some application-specific cleanup at application exit.)
This is by design.
The main thread exits (by returning or calling exit()), and that doesn't use pthread_exit(). POSIX documents pthread_exit calling the thread-specific destructors.
You could add pthread_exit() at the end of main. Alternatively, you can use atexit to do your destruction. In that case, it would be clean to set the thread-specific value to NULL so that if pthread_exit were invoked, the destruction wouldn't happen twice for that key.
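A minimal sketch of that first option, reusing the gist from the question (hypothetical glue, not from the original answer):

int main() {
    set_thread_specific();   // the main thread gets a ts_stack too

    // ... application work, worker threads, etc. ...

    // Returning from main() behaves like exit(): key destructors do NOT run.
    // Ending the main thread with pthread_exit() does trigger them.
    pthread_exit(nullptr);
}

As pointed out further down, pthread_exit() in main keeps the process alive until every other thread has finished and clobbers the exit status, so the atexit route may be the better fit here.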
UPDATE Actually, I've solved my immediate worries by simply adding this to my global unit test setup function:
::atexit([] { ::pthread_exit(0); });
So, in the context of my global fixture class MyConfig:
struct MyConfig {
    MyConfig() {
        GOOGLE_PROTOBUF_VERIFY_VERSION;
        ::atexit([] { ::pthread_exit(0); });
    }
    ~MyConfig() { google::protobuf::ShutdownProtobufLibrary(); }
};
Some of the references used:
http://www.resolvinghere.com/sof/6357154.shtml
https://sourceware.org/ml/pthreads-win32/2008/msg00007.html
http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_key_create.html
http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_exit.html
PS. Of course C++11 introduced <thread>, so you have better and more portable primitives to work with.
It's already in sehe's answer, just to present the key points in a compact way:
pthread_key_create() destructor calls are triggered by a call to pthread_exit().
If the start routine of a thread returns, the behaviour is as if pthread_exit() was called (i. e., destructor calls are triggered).
However, if main() returns, the behaviour is as if exit() was called — no destructor calls are triggered.
This is explained in http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_create.html. See also C++17 6.6.1p5 or C11 5.1.2.2.3p1.
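A small self-contained demonstration of those three points (a sketch, not from the original answers): the destructor runs for the worker thread, whose start routine returns, but not for the main thread, which returns from main():

#include <pthread.h>
#include <stdio.h>

static pthread_key_t key;

static void dtor(void *p) {
    printf("destructor for %s\n", (const char *)p);
}

static void *worker(void *arg) {
    pthread_setspecific(key, "worker");
    return NULL;             // like pthread_exit(): the destructor runs
}

int main(void) {
    pthread_key_create(&key, dtor);
    pthread_setspecific(key, "main");

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);   // prints "destructor for worker"

    return 0;                // like exit(): no destructor call for "main"
}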
I wrote a quick test and the only thing I changed was moving your create_key call outside of set_thread_specific.
That is, I called it within the main thread.
I then saw my destructor get called when the thread routine exited.
I call destructor() manually at the end of main():
void *ThreadData = NULL;
if ((ThreadData = pthread_getspecific(key)) != NULL)
    destructor(ThreadData);
Of course key should be properly initialized earlier in main() code.
PS. Calling pthread_exit() at the end of main() seems to hang the entire application...
Your initial thought of handling the main thread as a separate case with atexit worked best for me.
Beware that pthread_exit(0) overwrites the exit value of the process. For example, the following program will exit with a status of zero even though main() returns the number three:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

class ts_stack {
public:
    ts_stack () {
        printf ("init\n");
    }
    ~ts_stack () {
        printf ("done\n");
    }
};

static void cleanup (void);

static pthread_key_t key;
static pthread_once_t tls_init_flag = PTHREAD_ONCE_INIT;

void destructor(void *t) {
    // thread local data structure clean up code here, which is not getting called
    delete (ts_stack*) t;
}

void create_key() {
    pthread_key_create(&key, destructor);
    atexit(cleanup);
}

// This will be called from every thread
void set_thread_specific() {
    ts_stack *ts = new ts_stack (); // Thread local data structure
    pthread_once(&tls_init_flag, create_key);
    pthread_setspecific(key, ts);
}

static void cleanup (void) {
    pthread_exit(0); // <-- Calls destructor but sets exit status to zero as a side effect!
}

int main (int argc, char *argv[]) {
    set_thread_specific();
    return 3; // Attempt to exit with status of 3
}
I had a similar issue to yours: pthread_setspecific sets a key, but the destructor never gets called. To fix it we simply switched to thread_local in C++. You could also do something like the following if that change is too complicated.
For example, assume you have some class ThreadData that you want some action to be done on when the thread finishes execution. You define the destructor along these lines:
void destroy_my_data(ThreadData* t) {
    delete t;
}
When your thread starts, you allocate a ThreadData instance and assign a destructor to it like this:
ThreadData* my_data = new ThreadData;
thread_local ThreadLocalDestructor<ThreadData> tld;
tld.SetDestructorData(my_data, destroy_my_data);
pthread_setspecific(key, my_data);
Notice that ThreadLocalDestructor is defined as thread_local. We rely on the C++11 mechanism that, when the thread exits, the destructor of the thread_local ThreadLocalDestructor object is automatically called, and ~ThreadLocalDestructor is implemented to call destroy_my_data.
Here is the implementation of ThreadLocalDestructor:
template <typename T>
class ThreadLocalDestructor
{
public:
    ThreadLocalDestructor() : m_destr_func(nullptr), m_destr_data(nullptr)
    {
    }

    ~ThreadLocalDestructor()
    {
        if (m_destr_func) {
            m_destr_func(m_destr_data);
        }
    }

    void SetDestructorData(void (*destr_func)(T*), T* destr_data)
    {
        m_destr_data = destr_data;
        m_destr_func = destr_func;
    }

private:
    void (*m_destr_func)(T*);
    T* m_destr_data;
};

Best way to handle multi-thread cleanup

I have a server-type application, and I have an issue with making sure threads aren't deleted before they complete. The code below pretty much represents my server; the cleanup is required to prevent a build-up of dead threads in the list.
#include <atomic>
#include <functional>
#include <list>
#include <memory>
#include <mutex>
#include <thread>

using namespace std;

class A {
public:
    void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
        somethingThread = thread([cleanupFunction, getStopFlag, this]() {
            doSomething(getStopFlag);
            cleanupFunction();
        });
    }

private:
    void doSomething(function<bool()> getStopFlag);
    thread somethingThread;
    ...
};

class B {
public:
    void runServer();

    void stop() {
        stopFlag = true;
        waitForListToBeEmpty();
    }

private:
    void waitForListToBeEmpty() { ... };

    void handleAccept(...) {
        shared_ptr<A> newClient(new A());
        {
            unique_lock<mutex> lock(listMutex);
            clientData.push_back(newClient);
        }
        newClient->doSomethingThreaded(bind(&B::cleanup, this, newClient), [this]() {
            return stopFlag;
        });
    }

    void cleanup(shared_ptr<A> data) {
        unique_lock<mutex> lock(listMutex);
        clientData.remove(data);
    }

    list<shared_ptr<A>> clientData;
    mutex listMutex;
    atomic<bool> stopFlag;
};
The issue seems to be that the destructors run in the wrong order - i.e. the shared_ptr is destructed when the thread's function completes, meaning the A object is deleted before the thread completes, causing havoc when the thread's destructor is called.
i.e.
Call cleanup function
All references to this (i.e. an A object) removed, so call destructor (including this thread's destructor)
Call this thread's destructor again -- OH NOES!
I've looked at alternatives, such as maintaining a 'to be removed' list which is periodically used to clean the primary list by another thread, or using a time-delayed deleter function for the shared pointers, but both of these seem a bit clunky and could have race conditions.
Anyone know of a good way to do this? I can't see an easy way of refactoring it to work ok.
Are the threads joinable or detached? I don't see any detach, which means that destructing the thread object without having joined it is a fatal error. You might try simply detaching it, although this can make a clean shutdown somewhat complex. (Of course, for a lot of servers, there should never be a shutdown anyway.) Otherwise: what I've done in the past is to create a reaper thread; a thread which does nothing but join any outstanding threads, to clean up after them.
I might add that this is a good example of a case where shared_ptr is not appropriate. You want full control over when the delete occurs; if you detach, you can do it in the clean up function (but quite frankly, just using delete this; at the end of the lambda in A::doSomethingThreaded seems more readable); otherwise, you do it after you've joined, in the reaper thread.
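A sketch of that delete this; variant (illustrative only, not the answer's own code): it assumes B keeps plain A* pointers rather than shared_ptr, and that nothing else touches the A object once its thread has started:

class A {
public:
    void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
        // No thread member in this variant: detach immediately and let the
        // object delete itself when the work is done.
        thread([cleanupFunction, getStopFlag, this]() {
            doSomething(getStopFlag);
            cleanupFunction();   // e.g. remove the raw A* from B's list
            delete this;         // the object owns itself; no shared_ptr anywhere
        }).detach();
    }

private:
    void doSomething(function<bool()> getStopFlag);
};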
EDIT:
For the reaper thread, something like the following should work:
#include <condition_variable>
#include <deque>
#include <mutex>

class ReaperQueue
{
    std::deque<A*> myQueue;
    std::mutex myMutex;
    std::condition_variable myCond;

    A* getOne()
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myCond.wait( lock, [&]{ return !myQueue.empty(); } );
        A* results = myQueue.front();
        myQueue.pop_front();
        return results;
    }

public:
    void readyToReap( A* finished_thread )
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myQueue.push_back( finished_thread );
        myCond.notify_all();
    }

    void reaperThread()
    {
        for ( ; ; )
        {
            A* mine = getOne();
            mine->somethingThread.join();   // assumes A exposes (or befriends) somethingThread
            delete mine;
        }
    }
};
(Warning: I've not tested this, and I've tried to use the C++11 functionality. I've only actually implemented it, in the past, using pthreads, so there could be some errors. The basic principles should hold, however.)
To use, create an instance, then start a thread calling reaperThread on it. In the cleanup of each thread, call readyToReap.
To support a clean shutdown, you may want to use two queues: you insert each thread into the first, as it is created, and then move it from the first to the second (which would correspond to myQueue, above) in readyToReap. To shut down, you then wait until both queues are empty (not starting any new threads in this interval, of course).
The issue is that, since you manage A via shared pointers, the this pointer captured by the thread lambda really needs to be a shared pointer rather than a raw pointer to prevent it from becoming dangling. The problem is that there's no easy way to create a shared_ptr from a raw pointer when you don't have an actual shared_ptr as well.
One way to get around this is to use shared_from_this:
class A : public enable_shared_from_this<A> {
public:
    void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
        somethingThread = thread([cleanupFunction, getStopFlag, this]() {
            shared_ptr<A> temp = shared_from_this();
            doSomething(getStopFlag);
            cleanupFunction();
        });
    }
    // ...
};
This creates an extra shared_ptr to the A object that keeps it alive until the thread finishes.
Note that you still have the problem with join/detach that James Kanze identified -- Every thread must have either join or detach called on it exactly once before it is destroyed. You can fulfill that requirement by adding a detach call to the thread lambda if you never care about the thread exit value.
You also have potential for problems if doSomethingThreaded is called multiple times on a single A object...
For those who are interested, I took a bit of both answers given (i.e. James' detach suggestion, and Chris' suggestion about shared_ptrs).
My resultant code looks like this and seems neater and doesn't cause a crash on shutdown or client disconnect:
using namespace std;

class A {
public:
    void doSomething(function<bool()> getStopFlag) {
        ...
    }

private:
    ...
};

class B {
public:
    void runServer();

    void stop() {
        stopFlag = true;
        waitForListToBeEmpty();
    }

private:
    void waitForListToBeEmpty() { ... };

    void handleAccept(...) {
        shared_ptr<A> newClient(new A());
        {
            unique_lock<mutex> lock(listMutex);
            clientData.push_back(newClient);
        }

        thread clientThread([this, newClient]() {
            // Capture the shared_ptr until the thread is over and done with.
            newClient->doSomething([this]() {
                return stopFlag;
            });
            cleanup(newClient);
        });

        // Detach to remove the need to store these threads until their completion.
        clientThread.detach();
    }

    void cleanup(shared_ptr<A> data) {
        unique_lock<mutex> lock(listMutex);
        clientData.remove(data);
    }

    list<shared_ptr<A>> clientData; // Can remove this if you don't
                                    // need to connect with your clients.
                                    // However, you'd need to make sure this
                                    // didn't get deallocated before all clients
                                    // finished, as they reference the boolean stopFlag,
                                    // OR make it a shared_ptr to an atomic boolean
    mutex listMutex;
    atomic<bool> stopFlag;
};