Self-destructing objects - C++

Just wondering whether an object can self-destruct.
Consider this situation: an object that extends a thread object.
class Session : public Thread
{
Session() {}
~Session() {}
void ThreadMain()
{
while(!done){
/* do stuff ... */
...
// something sets done = true;
}
~Session(); // pseudo-code: can the object destroy itself here?
}
};
void start_session()
{
Session* c = new Session();
c->Start();
// when I exit here, I've lost my reference to c. But if the object
// self destructs when done, I don't need it right?
}
Somewhere along the way, we have a function called start_session which starts a session.
Eventually the session ends.
In the conventional approach I would have to keep some sort of list of Session objects, placing each one in that list after calling new.
To clean up the objects I'd have to figure out which ones are finished and call a cleanup
function later.
I thought it might make more sense if they could just clean up themselves. Can that be done?
Why? Why not? Are there better approaches?

You can do "delete this" when the session loop exits, but see https://isocpp.org/wiki/faq/freestore-mgmt for the caveats of doing so.
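To illustrate, here is a minimal sketch of that approach (not the original poster's code): the Thread base class below is a stand-in built on std::thread and detach, since the question's actual Thread type isn't shown. The essential point is that delete this is the very last statement of the thread body and nothing touches the object afterwards.
    #include <thread>

    class Thread {                        // stand-in for the question's Thread base class
    public:
        virtual ~Thread() = default;
        void Start() { std::thread([this] { ThreadMain(); }).detach(); }
    protected:
        virtual void ThreadMain() = 0;
    };

    class Session : public Thread {
        bool done = false;
    protected:
        void ThreadMain() override {
            while (!done) {
                // do stuff ...; for the sketch, finish immediately
                done = true;
            }
            delete this;                  // last statement; no member access after this
        }
    };

    void start_session() {
        Session* c = new Session();       // ownership passes to the session itself
        c->Start();                       // no need to keep c around
    }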

Resetting a shared pointer captured in a lambda function

(I'm very unsure about the phrasing of the question title. I'm hoping it's not misleading because I really don't know how to summarize this. But I'll try to explain my problem as well as I can.)
In a project, there is something like this (written from memory and simplified):
class A {
private:
static boost::weak_ptr<SomeClassB> b; // static, so the static method below can assign to it
public:
static boost::shared_ptr<SomeClassB> StopSomeProcesses () {
boost::shared_ptr<SomeClassB> temp (new SomeClassB());
b = temp;
return temp;
}
};
boost::weak_ptr<SomeClassB> A::b; // definition of the static member
Now in another project, I need to do something similar to the following:
boost::shared_ptr<SomeClassB> obj;
void someFunction () {
obj = A::StopSomeProcesses();
auto callback = [](){
//some other stuff here
obj.reset();
};
NamespaceFromYetAnotherProject::DoSomething(callback);
}
What this basically does: while b holds a valid object obtained from A::StopSomeProcesses, some processes are stopped (as the name implies). In this case, the processes are stopped while DoSomething executes. At the end, DoSomething calls callback, where obj is reset and the stopped processes can finally continue.
I've done this and it works. However, as much as possible, I'd like to avoid using global variables. I tried doing the following:
void someFunction () {
boost::shared_ptr<SomeClassB> obj;
obj = A::StopSomeProcesses();
auto callback = [&obj](){
//some other stuff here
obj.reset();
};
NamespaceFromYetAnotherProject::DoSomething(callback);
}
The above code works. But I'm not sure if I was already in "undefined behavior" territory and just got lucky. Doesn't obj's scope end already? Or does the fact that the lambda was passed as an argument help extend its "life"? If this is safe to do, is that safety lost if callback is run on another thread?
I also tried doing this:
void someFunction () {
boost::shared_ptr<SomeClassB> obj;
obj = A::StopSomeProcesses();
auto callback = [obj](){
//some other stuff here
boost::shared_ptr<SomeClassB> tempObj (new SomeClassB(*obj));
tempObj.reset();
};
NamespaceFromYetAnotherProject::DoSomething(callback);
}
But this was something I tried randomly. I wrote it while completely focused on just deleting the object held by the shared pointer. It worked, but I'm not even sure if it's just roundabout or even valid.
Are these attempts going anywhere? Or am I completely going the wrong way? Or should I just stick to using a global variable? Would appreciate any help on how to go about this problem. Thanks!
You are using a shared_ptr, and StopSomeProcesses internally allocates the memory it points to. The pointer is passed by value, so the lifetime of obj is irrelevant: every function call makes a new copy of it, as does the binding in the lambda. What matters is what the pointer points to, and that was allocated with new and lives on.
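For reference, here is a minimal sketch of the capture-by-value variant this answer alludes to, reusing the question's names (SomeClassB, A, NamespaceFromYetAnotherProject): the lambda stores its own copy of the shared_ptr, so nothing can dangle even if the callback runs after someFunction returns or on another thread, and SomeClassB is destroyed once the last copy (the lambda's or the local one) is released.
    void someFunction() {
        boost::shared_ptr<SomeClassB> obj = A::StopSomeProcesses();
        auto callback = [obj]() mutable {   // mutable: we modify the captured copy
            // some other stuff here
            obj.reset();                    // drop the lambda's own reference
        };
        NamespaceFromYetAnotherProject::DoSomething(callback);
    }                                       // the local obj is released here as well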

Any way to detect if a QObject belongs to a "dead" QThread?

The story :
I make use of the QtConcurrent API for every "long" operation in my application.
It works pretty well, but I face some problems with QObject creation.
Consider this piece of code, which uses a thread to create a "Foo" object:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
Data* data = /*long operation to acquire the data*/
Foo* result = new Foo(data);
return result;
});
It works well, but if the "Foo" class is derived from QObject, the "result" instance belongs to the QThread that created it.
So to use signals/slots properly with the "result" instance, one should do something like this:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
Data* data = /*long operation to acquire the data*/
Foo* result = new Foo(data);
// Move "result" to the main application thread
result->moveToThread(qApp->thread());
return result;
});
Now all works as expected, and I think this is the normal behaviour and the nominal solution.
The problem :
I have a lot of code like this, which sometimes creates objects, which can in turn create other objects. Most of them are created properly with a "moveToThread" call.
But sometimes, I miss one "moveToThread" call.
And then, a lot of things look like they don't work (because that object's slots are "broken"), without any Qt warning.
Now, I sometimes spend a lot of time figuring out why something doesn't work, before understanding that it's only because the slots are no longer called on a particular object instance.
The question :
Is there any way to help me prevent/detect/debug this kind of situation?
For example :
having a warning logged every time a QThread is deleted while objects that belong to it are still alive?
having a warning logged every time a signal is emitted to an object whose QThread has been deleted?
having a warning logged every time a signal is emitted to an object (in another thread) and not processed before a timeout?
Thanks
It is possible to track an object's movement among threads. Just before an object is moved to the new thread, it is sent a ThreadChange event. You can filter that event and have your code run to take note of when an object leaves a thread. But it's too early at that point to know whether the object goes anywhere. To detect that, you need to post a metacall (see this question) to the object's queue, to be executed as soon as the object's event processing resumes in the new thread. You'd also attach to QThread::finished to get a chance to look through your object list and check if any of them live on the thread that's about to die.
But all this is fairly involved: each thread will need its own tracker/filter object, as event filters must live in the object's thread. We're probably talking of more than 200 lines of code to do it right, handling all corner cases.
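As a rough illustration of the first building block only (not the full tracker), an event filter can at least log when a watched object is about to leave its thread; the class name below is made up for the sketch.
    #include <QDebug>
    #include <QEvent>
    #include <QObject>
    #include <QThread>

    // Logs when a watched QObject is about to be moved to another thread.
    // ThreadChange is the last event the object receives in its old thread,
    // and the filter must live in the same thread as the objects it watches.
    class ThreadChangeLogger : public QObject {
    public:
        bool eventFilter(QObject *watched, QEvent *event) override {
            if (event->type() == QEvent::ThreadChange)
                qDebug() << watched << "is about to leave thread" << watched->thread();
            return QObject::eventFilter(watched, event);  // don't consume the event
        }
    };

    // usage: someObject->installEventFilter(&logger);  // logger lives in someObject's thread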
Instead, you can leverage RAII and hold your objects using handles that manage thread affinity as a resource (because it is one!):
// https://github.com/KubaO/stackoverflown/tree/master/questions/thread-track-38611886
#include <QtConcurrent>
template <typename T>
class MainResult {
Q_DISABLE_COPY(MainResult)
T * m_obj;
public:
template<typename... Args>
MainResult(Args&&... args) : m_obj{ new T(std::forward<Args>(args)...) } {}
MainResult(T * obj) : m_obj{obj} {}
T* operator->() const { return m_obj; }
operator T*() const { return m_obj; }
T* operator()() const { return m_obj; }
~MainResult() { m_obj->moveToThread(qApp->thread()); }
};
struct Foo : QObject { Foo(int) {} };
You can return a MainResult by value, but the return type of the functor must be explicitly given:
QFuture<Foo*> test1() {
return QtConcurrent::run([=]()->Foo*{ // explicit return type
MainResult<Foo> obj{1};
obj->setObjectName("Hello");
return obj; // return by value
});
}
Alternatively, you can return the result of calling MainResult; it's a functor itself to save a bit of typing but this might be considered a hack and perhaps you should convert operator()() to a method with a short name.
QFuture<Foo*> test2() {
return QtConcurrent::run([=](){ // deduced return type
MainResult<Foo> obj{1};
obj->setObjectName("Hello");
return obj(); // return by call
});
}
While it's preferable to construct the object along with the handle, it's also possible to pass an instance pointer to the handle's constructor:
MainResult<Foo> obj{ new Foo{1} };

pthread_key_create destructor not getting called

As per the pthread_key_create man page, we can associate a destructor to be called at thread shutdown. My problem is that the destructor function I have registered is not being called. The gist of my code is as follows.
static pthread_key_t key;
static pthread_once_t tls_init_flag = PTHREAD_ONCE_INIT;
void destructor(void *t) {
// thread local data structure clean up code here, which is not getting called
}
void create_key() {
pthread_key_create(&key, destructor);
}
// This will be called from every thread
void set_thread_specific() {
ts_stack *ts = new ts_stack; // Thread-local data structure
pthread_once(&tls_init_flag, create_key);
pthread_setspecific(key, ts);
}
Any idea what might prevent this destructor from being called? I am also using atexit() at the moment to do some cleanup in the main thread. Is there any chance that is interfering with the destructor being called? I tried removing that as well; it still didn't work. Also, I am not clear whether I should handle the main thread as a separate case with atexit. (It's a must to use atexit, by the way, since I need to do some application-specific cleanup at application exit.)
This is by design.
The main thread exits (by returning or calling exit()), and that doesn't use pthread_exit(). POSIX documents pthread_exit calling the thread-specific destructors.
You could add pthread_exit() at the end of main. Alternatively, you can use atexit to do your destruction. In that case, it would be clean to set the thread-specific value to NULL so that, in case pthread_exit was invoked, the destruction doesn't happen twice for that key.
UPDATE Actually, I've solved my immediate worries by simply adding this to my global unit test setup function:
::atexit([] { ::pthread_exit(0); });
So, in context of my global fixture class MyConfig:
struct MyConfig {
MyConfig() {
GOOGLE_PROTOBUF_VERIFY_VERSION;
::atexit([] { ::pthread_exit(0); });
}
~MyConfig() { google::protobuf::ShutdownProtobufLibrary(); }
};
Some of the references used:
http://www.resolvinghere.com/sof/6357154.shtml
https://sourceware.org/ml/pthreads-win32/2008/msg00007.html
http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_key_create.html
http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_exit.html
PS. Of course, C++11 introduced <thread>, so you have better and more portable primitives to work with.
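For example, here is a minimal sketch of that C++11 route, assuming you can use std::thread and thread_local in place of the pthread key: the thread-local object's destructor runs automatically when each thread finishes, with no registration step at all.
    #include <cstdio>
    #include <thread>

    struct ts_stack {                      // stands in for the thread-local data structure
        void touch() {}
        ~ts_stack() { std::puts("thread-local cleanup"); }
    };

    thread_local ts_stack ts;              // one instance per thread, destroyed at thread exit

    int main() {
        std::thread t([] { ts.touch(); }); // first use constructs this thread's instance
        t.join();                          // its destructor has run by the time join returns
    }                                      // the main thread's instance (if created) is destroyed at program exit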
It's already in sehe's answer, just to present the key points in a compact way:
pthread_key_create() destructor calls are triggered by a call to pthread_exit().
If the start routine of a thread returns, the behaviour is as if pthread_exit() was called (i.e., destructor calls are triggered).
However, if main() returns, the behaviour is as if exit() was called — no destructor calls are triggered.
This is explained in http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_create.html. See also C++17 6.6.1p5 or C11 5.1.2.2.3p1.
I wrote a quick test and the only thing I changed was moving the create_key call of yours outside of the set_thread_specific.
That is, I called it within the main thread.
I then saw my destructor get called when the thread routine exited.
I call destructor() manually at the end of main():
void * ThreadData = NULL;
if ((ThreadData = pthread_getspecific(key)) != NULL)
destructor(ThreadData);
Of course key should be properly initialized earlier in main() code.
PS. Calling pthread_exit() at the end of main() seems to hang the entire application...
Your initial thought of handling the main thread as a separate case with atexit worked best for me.
Beware that pthread_exit(0) overwrites the exit status of the process. For example, the following program will exit with a status of zero even though main() returns 3:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
class ts_stack {
public:
ts_stack () {
printf ("init\n");
}
~ts_stack () {
printf ("done\n");
}
};
static void cleanup (void);
static pthread_key_t key;
static pthread_once_t tls_init_flag = PTHREAD_ONCE_INIT;
void destructor(void *t) {
// thread local data structure clean up code here, which is not getting called
delete (ts_stack*) t;
}
void create_key() {
pthread_key_create(&key, destructor);
atexit(cleanup);
}
// This will be called from every thread
void set_thread_specific() {
ts_stack *ts = new ts_stack (); // Thread local data structure
pthread_once(&tls_init_flag, create_key);
pthread_setspecific(key, ts);
}
static void cleanup (void) {
pthread_exit(0); // <-- Calls destructor but sets exit status to zero as a side effect!
}
int main (int argc, char *argv[]) {
set_thread_specific();
return 3; // Attempt to exit with status of 3
}
I had a similar issue to yours: pthread_setspecific sets a key, but the destructor never gets called. To fix it we simply switched to thread_local in C++. If that change is too complicated, you could also do something like the following:
For example, assume you have some class ThreadData on which you want some action performed when the thread finishes execution. You define the destructor along these lines:
void destroy_my_data(ThreadData* t) {
delete t;
}
When your thread starts, you allocate a ThreadData instance and register a destructor for it like this:
ThreadData* my_data = new ThreadData;
thread_local ThreadLocalDestructor<ThreadData> tld;
tld.SetDestructorData(destroy_my_data, my_data);
pthread_setspecific(key, my_data);
Notice that tld is declared thread_local. We rely on the C++11 guarantee that when the thread exits, the destructor of ThreadLocalDestructor is automatically called, and ~ThreadLocalDestructor is implemented to call destroy_my_data.
Here is the implementation of ThreadLocalDestructor:
template <typename T>
class ThreadLocalDestructor
{
public:
ThreadLocalDestructor() : m_destr_func(nullptr), m_destr_data(nullptr)
{
}
~ThreadLocalDestructor()
{
if (m_destr_func) {
m_destr_func(m_destr_data);
}
}
void SetDestructorData(void (*destr_func)(T*), T* destr_data)
{
m_destr_data = destr_data;
m_destr_func = destr_func;
}
private:
void (*m_destr_func)(T*);
T* m_destr_data;
};

Best way to handle multi-thread cleanup

I have a server-type application, and I have an issue with making sure threads aren't deleted before they complete. The code below pretty much represents my server; the cleanup is required to prevent a build-up of dead threads in the list.
using namespace std;
class A {
public:
void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
somethingThread = thread([cleanupFunction, getStopFlag, this]() {
doSomething(getStopFlag);
cleanupFunction();
});
}
private:
void doSomething(function<bool()> getStopFlag);
thread somethingThread;
...
};
class B {
public:
void runServer();
void stop() {
stopFlag = true;
waitForListToBeEmpty();
}
private:
void waitForListToBeEmpty() { ... };
void handleAccept(...) {
shared_ptr<A> newClient(new A());
{
unique_lock<mutex> lock(listMutex);
clientData.push_back(newClient);
}
newClient->doSomethingThreaded(bind(&B::cleanup, this, newClient), [this]() {
return stopFlag;
});
}
void cleanup(shared_ptr<A> data) {
unique_lock<mutex> lock(listMutex);
clientData.remove(data);
}
list<shared_ptr<A>> clientData;
mutex listMutex;
atomic<bool> stopFlag;
};
The issue seems to be that the destructors run in the wrong order - i.e. the shared_ptr is destructed when the thread's function completes, meaning the 'A' object is deleted before thread completion, causing havoc when the thread's destructor is called.
i.e.
Call cleanup function
All references to this (i.e. an A object) removed, so call destructor (including this thread's destructor)
Call this thread's destructor again -- OH NOES!
I've looked at alternatives, such as maintaining a 'to be removed' list which is periodically used to clean the primary list by another thread, or using a time-delayed deleter function for the shared pointers, but both of these seem a bit clunky and could have race conditions.
Anyone know of a good way to do this? I can't see an easy way of refactoring it to work ok.
Are the threads joinable or detached? I don't see any detach, which means that destructing the thread object without having joined it is a fatal error. You might try simply detaching it, although this can make a clean shutdown somewhat complex. (Of course, for a lot of servers, there should never be a shutdown anyway.) Otherwise: what I've done in the past is to create a reaper thread; a thread which does nothing but join any outstanding threads, to clean up after them.
I might add that this is a good example of a case where shared_ptr is not appropriate. You want full control over when the delete occurs; if you detach, you can do it in the clean up function (but quite frankly, just using delete this; at the end of the lambda in A::doSomethingThreaded seems more readable); otherwise, you do it after you've joined, in the reaper thread.
EDIT:
For the reaper thread, something like the following should work:
class ReaperQueue
{
std::deque<A*> myQueue;
std::mutex myMutex;
std::condition_variable myCond;
A* getOne()
{
std::unique_lock<std::mutex> lock( myMutex );
myCond.wait( lock, [&]{ return !myQueue.empty(); } );
A* results = myQueue.front();
myQueue.pop_front();
return results;
}
public:
void readyToReap( A* finished_thread )
{
std::unique_lock<std::mutex> lock( myMutex );
myQueue.push_back( finished_thread );
myCond.notify_all();
}
void reaperThread()
{
for ( ; ; )
{
A* mine = getOne();
mine->somethingThread.join();
delete mine;
}
}
};
(Warning: I've not tested this, and I've tried to use the C++11 functionality. I've only actually implemented it, in the past, using pthreads, so there could be some errors. The basic principles should hold, however.)
To use, create an instance, then start a thread calling reaperThread on it. In the cleanup of each thread, call readyToReap.
To support a clean shutdown, you may want to use two queues: you insert each thread into the first, as it is created, and then move it from the first to the second (which would correspond to myQueue, above) in readyToReap. To shut down, you then wait until both queues are empty (not starting any new threads in this interval, of course).
The issue is that, since you manage A via shared pointers, the this pointer captured by the thread lambda really needs to be a shared pointer rather than a raw pointer to prevent it from becoming dangling. The problem is that there's no easy way to create a shared_ptr from a raw pointer when you don't have an actual shared_ptr as well.
One way to get around this is to use shared_from_this:
class A : public enable_shared_from_this<A> {
public:
void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
somethingThread = thread([cleanupFunction, getStopFlag, this]() {
shared_ptr<A> temp = shared_from_this();
doSomething(getStopFlag);
cleanupFunction();
});
}
// ...
};
This creates an extra shared_ptr to the A object that keeps it alive until the thread finishes.
Note that you still have the join/detach problem that James Kanze identified: every thread object must have either join or detach called on it exactly once before it is destroyed. You can fulfill that requirement by adding a detach call to the thread lambda if you never care about the thread exit value.
You also have potential for problems if doSomethingThreaded is called multiple times on a single A object...
For those who are interested, I took a bit of both answers given (i.e. James' detach suggestion, and Chris' suggestion about shared_ptrs).
My resulting code looks like this; it seems neater and doesn't cause a crash on shutdown or client disconnect:
using namespace std;
class A {
public:
void doSomething(function<bool()> getStopFlag) {
...
}
private:
...
};
class B {
public:
void runServer();
void stop() {
stopFlag = true;
waitForListToBeEmpty();
}
private:
void waitForListToBeEmpty() { ... };
void handleAccept(...) {
shared_ptr<A> newClient(new A());
{
unique_lock<mutex> lock(listMutex);
clientData.push_back(newClient);
}
thread clientThread([this, newClient]() {
// Capture the shared_ptr until thread over and done with.
newClient->doSomething([this]() {
return stopFlag;
});
cleanup(newClient);
});
// Detach to remove the need to store these threads until their completion.
clientThread.detach();
}
void cleanup(shared_ptr<A> data) {
unique_lock<mutex> lock(listMutex);
clientData.remove(data);
}
list<shared_ptr<A>> clientData; // Can remove this if you don't
// need to connect with your clients.
// However, you'd need to make sure this
// didn't get deallocated before all clients
// finished as they reference the boolean stopFlag
// OR make it a shared_ptr to an atomic boolean
mutex listMutex;
atomic<bool> stopFlag;
};

Thread-Safe implementation of an object that deletes itself

I have an object that is called from two different threads, and after it has been called by both, it destroys itself via "delete this".
How do I implement this in a thread-safe way? Thread-safe here means that the object destroys itself exactly once (it must destroy itself after the second callback).
I created some example code:
class IThreadCallBack
{
public:
virtual void CallBack(int) = 0;
};
class M: public IThreadCallBack
{
private:
bool t1_finished, t2_finished;
public:
M(): t1_finished(false), t2_finished(false)
{
startMyThread(this, 1);
startMyThread(this, 2);
}
void CallBack(int id)
{
if (id == 1)
{
t1_finished = true;
}
else
{
t2_finished = true;
}
if (t1_finished && t2_finished)
{
delete this;
}
}
};
int main(int argc, char **argv) {
M* MObj = new M();
while(true);
}
Obviously I can't use a Mutex as a member of the object and lock the delete, because this would also delete the Mutex. On the other hand, if I set a "toBeDeleted" flag inside a mutex-protected area, where the finished flag is set, I feel unsure whether there are possible situations where the object isn't deleted at all.
Note that the thread implementation makes sure that the callback method is called exactly once per thread in any case.
Edit / Update:
What if I change Callback(..) to:
void CallBack(int id)
{
mMutex.Obtain();
if (id == 1)
{
t1_finished = true;
}
else
{
t2_finished = true;
}
bool both_finished = (t1_finished && t2_finished);
mMutex.Release();
if (both_finished)
{
delete this;
}
}
Can this be considered safe (with mMutex being a member of the M class)?
I think it is, as long as I don't access any member after releasing the mutex?!
Use Boost's Smart Pointer. It handles this automatically; your object won't have to delete itself, and it is thread safe.
Edit:
From the code you've posted above, I can't really say; I'd need more info. But you could do it like this: each thread has a shared_ptr object, and when the callback is called, you call shared_ptr::reset(). The last reset will delete M. Each shared_ptr could be stored with thread-local storage in each thread. So in essence, each thread is responsible for its own shared_ptr.
Instead of using two separate flags, you could consider setting a counter to the number of threads that you're waiting on and then using interlocked decrement.
Then you can be 100% sure that when the thread counter reaches 0, you're done and should clean up.
There is more info on interlocked decrement for Windows, for Linux, and for Mac.
I once implemented something like this that avoided the ickiness and confusion of delete this entirely, by operating in the following way:
Start a thread that is responsible for deleting these sorts of shared objects, which waits on a condition
When the shared object is no longer being used, instead of deleting itself, have it insert itself into a thread-safe queue and signal the condition that the deleter thread is waiting on
When the deleter thread wakes up, it deletes everything in the queue
If your program has an event loop, you can avoid the creation of a separate thread for this by creating an event type that means "delete unused shared objects" and have some persistent object respond to this event in the same way that the deleter thread would in the above example.
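Here is a minimal standalone sketch of that deleter-thread idea (the names are illustrative, not from the original answer): objects schedule themselves for deletion instead of calling delete this, and a dedicated thread deletes whatever has accumulated in the queue.
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>

    class DeleterQueue {
        std::deque<std::function<void()>> pending;  // each entry deletes one object
        std::mutex m;
        std::condition_variable cv;
    public:
        // Called from any thread instead of "delete this".
        template <typename T>
        void scheduleDelete(T* obj) {
            {
                std::lock_guard<std::mutex> lock(m);
                pending.push_back([obj] { delete obj; });
            }
            cv.notify_one();
        }
        // Runs on the dedicated deleter thread; loops forever in this sketch.
        void run() {
            for (;;) {
                std::deque<std::function<void()>> batch;
                {
                    std::unique_lock<std::mutex> lock(m);
                    cv.wait(lock, [&] { return !pending.empty(); });
                    batch.swap(pending);            // take everything queued so far
                }
                for (auto& job : batch) job();      // delete outside the lock
            }
        }
    };
One thread runs run() for the lifetime of the program; any object that would have called delete this calls queue.scheduleDelete(this) as its last action instead.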
I can't imagine that this is possible, especially within the class itself. The problem is twofold:
1) There's no way to notify the outside world not to call the object, so the outside world has to be responsible for setting the pointer to 0 after calling "CallBack" iff the pointer was deleted.
2) Once two threads enter this function you are, and forgive my french, absolutely fucked. Calling a function on a deleted object is UB, just imagine what deleting an object while someone is in it results in.
I've never seen "delete this" as anything but an abomination. Doesn't mean it isn't sometimes, on VERY rare conditions, necessary. Problem is that people do it way too much and don't think about the consequences of such a design.
I don't think "to be deleted" is going to work well. It might work for two threads, but what about three? You can't protect the part of code that calls delete because you're deleting the protection (as you state) and because of the UB you'll inevitably cause. So the first goes through, sets the flag and aborts....which of the rest is going to call delete on the way out?
The more robust implementation would be reference counting: for each thread you start, increase a counter; for each callback, decrease the counter, and if the counter has reached zero, delete the object. You can lock the counter access, or you could use the Interlocked class to protect the counter access, though in that case you need to be careful with a potential race between the first thread finishing and the second starting.
Update: And of course, I completely ignored the fact that this is C++. :-) You should use InterlockedExchange to update the counter instead of the C# Interlocked class.
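For completeness, here is a minimal sketch of that reference-counting idea in portable C++11, using std::atomic in place of the Win32 Interlocked functions and std::thread in place of the question's startMyThread; the names are illustrative rather than taken from the question's framework.
    #include <atomic>
    #include <chrono>
    #include <thread>

    class M {
        std::atomic<int> pending{2};        // number of callbacks still expected
    public:
        void Start() {                      // called after construction, mirroring the two callbacks
            std::thread([this] { CallBack(1); }).detach();
            std::thread([this] { CallBack(2); }).detach();
        }
        void CallBack(int /*id*/) {
            // fetch_sub returns the previous value, so exactly one caller sees 1.
            if (pending.fetch_sub(1, std::memory_order_acq_rel) == 1)
                delete this;                // runs once, after the final callback
        }
    };

    int main() {
        (new M())->Start();                 // the object deletes itself after both callbacks
        std::this_thread::sleep_for(std::chrono::seconds(1));  // crude wait for the demo
    }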