I have an object that sends a signal when a group of tasks completes. These tasks are executed independently in a thread pool. I want to send a notification to an observer when all of the tasks in a group are complete.
Essentially this boils down to a reference counting scheme. When the count reaches zero, I send the notification. One implementation of this would be to leverage Boost smart pointers (or any reference-counted object).
class TaskCompletionNotifier {
public:
    ~TaskCompletionNotifier() {
        _listener->notify();
    }
    void setListener(Listener* listener) { _listener = listener; }
private:
    Listener* _listener;
};
class Task {
    boost::shared_ptr<TaskCompletionNotifier> _notifier;
};
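To make the intended flow concrete, here is a rough usage sketch; it assumes Task gets a constructor that stores the shared_ptr, and ThreadPool::submit and Listener stand in for whatever your pool and observer APIs actually are:

#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>

void runGroup(ThreadPool& pool, Listener& listener, int taskCount) {
    boost::shared_ptr<TaskCompletionNotifier> notifier =
        boost::make_shared<TaskCompletionNotifier>();
    notifier->setListener(&listener);

    for (int i = 0; i < taskCount; ++i)
        pool.submit(Task(notifier));   // each Task copies the shared_ptr

    // notifier goes out of scope here; whichever Task is destroyed last
    // drops the count to zero, ~TaskCompletionNotifier runs and the
    // listener is notified exactly once.
}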
My question is, is it bad practice to perform this call-out in the destructor of an object?
Is this inherently bad? No.
Does it open up potential pitfalls? Yes.
Make sure you don't allow any exceptions to escape the destructor, and you're best off making sure that _listener->notify() doesn't end up modifying any member data of this object: it's safe and okay if it does, but may be confusing and/or mess up your destructor's close-down logic.
Other than that, go for it.
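To illustrate the exception point, a minimal sketch of how the notifying destructor might guard itself (a variation on the snippet above, not the author's exact code):

~TaskCompletionNotifier() {
    try {
        if (_listener)
            _listener->notify();
    } catch (...) {
        // Swallow (or log) everything here: an exception escaping a
        // destructor terminates the program in C++11, and is dangerous
        // during stack unwinding in C++03 as well.
    }
}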
Is it bad practice to call an observer in a destructor?
No. It is not.
But it opens up the potential for many pitfalls, so make sure you do it without violating the C++ rules (the C++ standard). In particular:
make sure exceptions are handled so that no exception propagates out of the destructor
Related
I would like to create a thread pool. I have a class called ServerThread.cpp, whose constructor should do something like this:
ServerThread::ServerThread()
{
    for( int i = 0; i < init_thr_num; i++ )
    {
        // create a pool of threads
        // suspend them; they will wake up when requests arrive for them to process
    }
}
I was wondering if creating pthreads inside a constructor can cause any undefined behavior that one should avoid running into.
Thanks
You can certainly do that in a constructor, but you should be aware of a problem that is clearly explained by Scott Meyers in his Effective/More Effective C++ books.
In short, his point is that if any kind of exception is raised within a constructor, then your half-baked object will not be destroyed. This leads to memory leaks. So Meyers' suggestion is to have "light" constructors and then do the "heavy" work in an init method called after the object has been fully created.
This argument is not strictly related to creating a pool of pthreads within a constructor (where you might argue that no exception will be raised if you simply create them and then immediately suspend them), but it is a general consideration about what to do in a constructor (read: good practice).
Another consideration is that a constructor has no return value. While it is true that (if no exceptions are thrown) you can leave the object in a consistent state even if the thread creation fails, it would arguably be better to report failure through the return value of a kind of init or start method.
You could also read this thread on S.O. about the topic, and this one.
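For illustration, a rough sketch of the "light constructor plus heavy init" shape using raw pthreads; init_thr_num comes from the question, while the worker function and the error handling are only placeholders:

#include <pthread.h>
#include <vector>

class ServerThread {
public:
    ServerThread() {}                       // light constructor: no threads yet

    // The heavy work happens here, and failure can be reported to the caller.
    bool init(int init_thr_num) {
        for (int i = 0; i < init_thr_num; ++i) {
            pthread_t t;
            if (pthread_create(&t, NULL, &ServerThread::worker, this) != 0)
                return false;               // a fuller version would also stop
                                            // the threads already created
            _threads.push_back(t);
        }
        return true;
    }

private:
    static void* worker(void* arg) {
        // wait for requests and process them...
        return NULL;
    }

    std::vector<pthread_t> _threads;
};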
From a strictly formal point of view, a constructor is really just a function like any other, and there shouldn't be any problem. Practically, there could be an issue: the threads may actually start running before the constructor has finished. If the threads need a fully constructed ServerThread to operate, then you're in trouble: this is often the case when ServerThread is a base class and the threads need to interact with the derived class. (This is a very difficult problem to spot, because with the most frequently used thread scheduling algorithms, the new thread will usually not start executing immediately.)
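One common way to dodge that trap, sketched under the assumption that the worker ultimately calls into the derived class: keep the constructor free of thread creation and expose a separate start() that the caller invokes only once the most derived object exists (MyServerThread below is a hypothetical derived class):

#include <pthread.h>

class ServerThread {
public:
    virtual ~ServerThread() {}

    // Called by the client after construction is complete, so the thread
    // can never observe a partially constructed object.
    void start() {
        pthread_create(&_thread, NULL, &ServerThread::entry, this);  // error handling omitted
    }

protected:
    virtual void processRequest() = 0;      // implemented by the derived class

private:
    static void* entry(void* self) {
        static_cast<ServerThread*>(self)->processRequest();
        return NULL;
    }

    pthread_t _thread;
};

// Usage: MyServerThread t; t.start();     // safe: t is fully constructed here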
A few days ago my friend told me about a situation they had in their project.
Someone decided that it would be good to destroy an object of NotVerySafeClass in a parallel thread (i.e. asynchronously). It was implemented some time ago.
Now they get crashes, because some method is called in the main thread while the object is being destroyed.
Some workaround was created to handle the situation.
Of course, this is just an example of a not very good solution, but still, the question:
Is there some way to prevent this situation internally in NotVerySafeClass (deny running the methods if the destructor has already been called, and force the destructor to wait until any running method is over; let's assume there is only one method)?
No, no and no. This is a fundamental design issue, and it shows a common misconception in thinking about multithreaded situations and race conditions in general.
There is one thing that can happen just as easily, and it really shows that you need an ownership concept: the calling thread could invoke the function right after the object has been destroyed. There is no object anymore, so trying to call a member function on it is undefined behaviour, and since the object no longer exists, it also has no chance to prevent any interaction between the destructor and a member function.
What you need is a sound ownership policy. Why is the code destroying the object when it is still needed?
Without more details about the code, a std::shared_ptr would probably solve this issue. Depending on your specific situation, you may be able to solve it with a more lightweight policy.
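A minimal sketch of what that shared-ownership policy could look like; NotVerySafeClass and someMethod() here are just stand-ins for the class from the question:

#include <memory>
#include <thread>

struct NotVerySafeClass {
    void someMethod() { /* ... */ }
};

void worker(std::shared_ptr<NotVerySafeClass> obj) {
    obj->someMethod();                  // obj is guaranteed alive for the whole call
}

int main() {
    std::shared_ptr<NotVerySafeClass> obj = std::make_shared<NotVerySafeClass>();
    std::thread t(worker, obj);         // the worker thread holds its own reference
    obj->someMethod();                  // the main thread uses its reference
    t.join();
    return 0;                           // whichever reference dies last deletes the object
}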
Sounds like a horrible design. Can't you use a smart pointer to make sure the object is destroyed only when no one holds any references to it?
If not, I'd use some external synchronization mechanism. Synchronizing the destructor with a method is really awkward.
There are no methods that can be used to prevent this scenario.
In multithreaded programming, you need to make sure that an object will not be deleted while other threads are still accessing it.
If you are dealing with such code, it needs a fundamental fix.
(Not to promote bad design) but to answer your two questions:
... deny running the methods, if destructor was called already
You can do this with the solution proposed by #snemarch and #Simon (a lock). To handle the situation where one thread is inside the destructor while another is waiting for the lock at the beginning of your method, you need to keep track of the object's state in a thread-safe way, in memory shared between threads, e.g. a static atomic int that is set to 0 by the destructor before releasing the lock. The method checks the int once it acquires the lock and bails out if it's 0.
... force the destructor to wait, until any running method is over
The solution proposed by #snemarch and #Simon (a lock) will handle this.
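A rough sketch of that band-aid, with a bool under the lock instead of an atomic int; note the caveats in the comments, and that this only works for a single instance:

#include <mutex>

class NotVerySafeClass {
public:
    void method() {
        std::lock_guard<std::mutex> guard(s_lock);
        if (!s_alive)                   // destructor has already run: bail out
            return;
        // ... real work ...
    }

    ~NotVerySafeClass() {
        std::lock_guard<std::mutex> guard(s_lock);  // waits for a running method
        s_alive = false;
        // ... clean up ...
    }

private:
    // Shared between threads and outliving the object; even so, calling
    // method() on an already destroyed object is still undefined behaviour,
    // so this only papers over the race described above.
    static std::mutex s_lock;
    static bool s_alive;
};

std::mutex NotVerySafeClass::s_lock;
bool NotVerySafeClass::s_alive = true;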
No. You just need to design the program properly so that it is thread safe.
Why not make use of a mutex/semaphore? At the beginning of any method the mutex is locked, and the destructor waits until the mutex is unlocked. It's a fix, not a solution. Maybe you should change the design of a part of your application.
Simple answer: no.
Somewhat longer answer: you could guard each and every member function and the destructor in your class with a mutex... welcome to deadlock opportunities and performance nightmares.
Gather a mob and beat some design sense into the 'someone' who thought parallel destruction was a good idea :)
My current project has a mechanism that tracks/proxies C++ objects to safely expose them to a script environment. Part of its function is to be informed when a C++ object is destroyed so it can safely clean up references to that object in the script environment.
To achieve this, I have defined the following class:
#include <sigc++/sigc++.h>

class DeleteEmitter {
public:
    virtual ~DeleteEmitter() {
        onDelete.emit();   // fires after any derived part has already been destroyed
    }
    sigc::signal<void> onDelete;
};
I then have any class that may need to be exposed to the script environment inherit from this class. When the proxy layer is invoked it connects a callback to the onDelete signal and is thus informed when the object is destroyed.
Light testing shows that this works, but in live tests I'm seeing peculiar memory corruptions (read: crashes in malloc/free) in unrelated parts of the code. Running under valgrind suggests there may be a double-free or continued use of an object after it's been freed, so it's possible that there is an old bug in a class that was only exposed after DeleteEmitter was added to its inheritance hierarchy.
During the course of my investigation it has occurred to me that it might not be safe to emit a sigc++ signal during a destructor. Obviously it would be a bad thing to do if the callback tried to use the object being deleted, but I can confirm that is not what's happening here. Assuming that, does anyone know if this is a safe thing to do? And is there a more common pattern for achieving the same result?
The C++ spec guarantees that the data members in your object will not be destroyed until your destructor returns, so the onDelete object is untouched at that point. If you're confident that the signal won't indirectly result in any reads, writes or method calls on the object(s) being destroyed (multiple objects if the DeleteEmitter is part of another object) or generate C++ exceptions, then it's "safe." Assuming, of course, that you're not in a multi-threaded environment, in which case you also have to ensure other threads aren't interfering.
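For what it's worth, a small sketch of the pattern building on the DeleteEmitter class above; Widget and proxy_cleanup are made-up names standing in for the scriptable object and the proxy layer's callback:

#include <iostream>

class Widget : public DeleteEmitter {
    // any C++ object exposed to the script environment
};

void proxy_cleanup() {
    std::cout << "proxy: dropping script references\n";
}

int main() {
    Widget* w = new Widget();

    // The proxy layer registers its cleanup against the object's onDelete.
    w->onDelete.connect(sigc::ptr_fun(&proxy_cleanup));

    delete w;   // ~Widget runs first, then ~DeleteEmitter emits onDelete;
                // the Widget part is already gone by then, so the slot must
                // not touch the derived object.
    return 0;
}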
Assume an OO design where objects call each other, and after a while the called upon objects callback the initiating objects (calls and callbacks). During normal program termination, while destructors are called, is there some kind of promise that no system timers will be called, and no object will initiate a callback?
There is a pretty awesome library to handle these kinds of calls, and of course it's a Boost one.
Behold Boost.Signals2, with guarantees of correctness even in multi-threaded application :)
Note in particular the slot-tracking support (and the boost::signals2::trackable class), so that an object's destruction automatically invalidates the calls before they happen.
Note: Boost.Signals (its ancestor) has pretty much the same functionality, but isn't thread-safe
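A small hedged example of the automatic invalidation, using Signals2 slot tracking of a boost::shared_ptr, which is how Signals2 usually expresses what the old trackable base class did:

#include <boost/signals2.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>
#include <iostream>

struct Listener {
    void onEvent() { std::cout << "callback\n"; }
};

int main() {
    boost::signals2::signal<void ()> sig;
    boost::shared_ptr<Listener> listener = boost::make_shared<Listener>();

    // Track the listener: the slot is disconnected automatically once the
    // tracked object has been destroyed.
    sig.connect(boost::signals2::signal<void ()>::slot_type(
                    &Listener::onEvent, listener.get()).track(listener));

    sig();              // prints "callback"
    listener.reset();   // destroy the listener
    sig();              // the slot has been invalidated: nothing is called
    return 0;
}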
No, there is no such guarantee.
You need to write your code such that objects are not destroyed until you are finished using them.
boost's weak_ptr can help avoid accessing destroyed objects from callbacks in general, so probably also during termination. When using this, all objects that need to be called back need a shared_ptr.
timers that call functions/callbacks usually come from higher-level libraries, so it depends on whether they support a stopAllTimers() functionality. If you have control over the library, it's probably not too difficult to implement, but you still need to know when to trigger it.
No, there is no promise.
There are two ways to handle this:
Have a deregistration function that an object can call on destruction, which guarantees no callbacks will be invoked after it terminates - CancelAllCallbacks() or similar.
Have a weak reference mechanism such as the weak_ptr that was already mentioned, to invoke the callback only if a strong reference has been successfully obtained.
Usually the first option is enough unless the callbacks can be scheduled or invoked asynchronously - then you don't have a synchronous way to prevent a callback that is already scheduled to be called, or is in fact being called now (or a few instructions from now) in a different thread.
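A rough sketch of the weak-reference option; Publisher, Subscriber and the method names are invented for the example:

#include <iostream>
#include <memory>
#include <vector>

struct Subscriber {
    void onEvent() { std::cout << "handled\n"; }
};

class Publisher {
public:
    void subscribe(const std::weak_ptr<Subscriber>& s) { _subs.push_back(s); }

    void notifyAll() {
        for (auto& weak : _subs) {
            if (std::shared_ptr<Subscriber> strong = weak.lock())
                strong->onEvent();   // still alive: safe to call
            // otherwise the subscriber is already gone and is skipped
        }
    }

private:
    std::vector<std::weak_ptr<Subscriber> > _subs;
};

// Usage:
//   std::shared_ptr<Subscriber> sub = std::make_shared<Subscriber>();
//   publisher.subscribe(sub);
//   publisher.notifyAll();   // calls onEvent
//   sub.reset();
//   publisher.notifyAll();   // lock() fails, nothing is called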
We have some issue with my friend. Assume, we have a class which implements database connection, like this:
class DB
{
public:
    void Connect();
    void Disconnect();
    // ...
    ~DB();
};
In my opinion, the destructor should be minimalistic, which means it should not call the Disconnect method even when a connection was established. I think this should be done by a separate method (Disconnect() in this example). Am I correct, or is my friend?
Your destructor should be enough to clean up all the resources that were acquired during the object's lifetime. This might or might not include ending connections. Otherwise, who will do the cleanup if an exception is thrown?
The RAII idiom says: acquire in the constructor and release in the destructor. You must guarantee that your destructor will NOT throw anything; otherwise you will get a core dump (or undefined behaviour) if your object's destructor throws an exception during stack unwinding.
Also, in your specific case I would probably implement a reference counting mechanism, and call Disconnect only when no objects are using the connection any more.
According to the syntax it looks like C++. Am I correct? Because if so, you can (and it is highly recommended to) use the RAII idiom. That means that you acquire the DB connection on construction and free it (disconnect) on destruction.
Good reference: What's RAII All About?
From the class user's point of view, I believe it is better to disconnect the database connection explicitly, when it should happen, instead of assuming the destructor will do the job.
However, bad things happen, most notably exceptions, and you must guarantee that cleanup will occur regardless of what happened. This is why your destructor should nevertheless disconnect if necessary (i.e. if Disconnect hasn't been explicitly called by the user).
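One hedged way to reconcile the two views, sketched below: offer an explicit Disconnect() for callers who want to see and handle errors, and let the destructor act as a safety net that disconnects only if that has not already happened (swallowing errors, since destructors must not throw):

class DB
{
public:
    DB() : _connected(false) {}

    void Connect()
    {
        // open the connection...
        _connected = true;
    }

    void Disconnect()
    {
        if (_connected) {
            // close the connection; errors can be reported to the caller here
            _connected = false;
        }
    }

    ~DB()
    {
        try {
            Disconnect();   // safety net if the user never called it
        } catch (...) {
            // never let an exception escape the destructor
        }
    }

private:
    bool _connected;
};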