Assume an OO design where objects call each other, and after a while the called-upon objects call back the initiating objects (calls and callbacks). During normal program termination, while destructors are being run, is there any kind of promise that no system timer will fire and no object will initiate a callback?
There is a pretty awesome library to handle these kinds of calls, and of course it's a Boost one.
Behold Boost.Signals2, with guarantees of correctness even in multi-threaded applications :)
Note in particular the use of the boost::signals2::trackable class (and slot tracking via track()), so that object destruction automatically invalidates the calls before they happen.
Note: Boost.Signals (its ancestor) has pretty much the same functionality, but isn't thread-safe.
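A minimal sketch of how that tracking looks in practice, assuming the illustrative Listener type below (not from the original question):

#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/signals2.hpp>
#include <iostream>

struct Listener {                       // hypothetical callback target, for illustration only
    void onEvent() { std::cout << "callback delivered\n"; }
};

int main() {
    boost::signals2::signal<void()> sig;
    {
        boost::shared_ptr<Listener> listener = boost::make_shared<Listener>();
        // track() ties the slot's lifetime to the listener: once the
        // shared_ptr is destroyed, the slot is disconnected automatically
        // instead of calling back into a dead object.
        sig.connect(boost::signals2::signal<void()>::slot_type(
                        &Listener::onEvent, listener.get()).track(listener));
        sig();                          // calls Listener::onEvent
    }
    sig();                              // listener gone; slot disconnected, nothing is called
}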
No, there is no such guarantee.
You need to write your code such that objects are not destroyed until you are finished using them.
Boost's weak_ptr can help avoid accessing destroyed objects from callbacks in general, so probably also during termination. For this to work, every object that needs to be called back must be managed through a shared_ptr.
Timers that invoke functions/callbacks usually come from higher-level libraries, so it depends on whether they support something like a stopAllTimers() facility. If you have control over the library, it's probably not too difficult to implement, but you still need to know when to trigger it.
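If you do own the timer layer, a stop-all hook can be as small as the sketch below; TimerManager, stopAllTimers() and the callback signature are made-up names, not part of any particular library:

#include <functional>
#include <map>
#include <mutex>

// Hypothetical timer registry: one choke point that can cancel every
// pending callback before destructors start running.
class TimerManager {
public:
    int addTimer(std::function<void()> callback) {
        std::lock_guard<std::mutex> lock(mutex_);
        timers_[nextId_] = std::move(callback);
        return nextId_++;
    }
    void stopAllTimers() {
        std::lock_guard<std::mutex> lock(mutex_);
        timers_.clear();                // nothing left to fire during shutdown
    }
    void tick() {                       // driven by the event loop
        std::lock_guard<std::mutex> lock(mutex_);
        for (auto& entry : timers_)
            entry.second();
    }
private:
    std::mutex mutex_;
    std::map<int, std::function<void()>> timers_;
    int nextId_ = 0;
};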
No, there is no promise.
There are two ways to handle this:
Have a deregistration function that an object can call on destruction, which guarantees no callbacks will be invoked after it returns - CancelAllCallbacks() or similar.
Have a weak-reference mechanism such as the weak_ptr that was already mentioned, so that the callback is invoked only if a strong reference can still be obtained (sketched below).
Usually the first option is enough, unless the callbacks can be scheduled or invoked asynchronously - then you have no synchronous way to prevent a callback that is already scheduled, or that is in fact being called right now (or a few instructions from now) in a different thread.
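A minimal sketch of the weak-reference option, using std::weak_ptr; the Subject/Observer names and onNotify() are made up for illustration:

#include <iostream>
#include <memory>
#include <vector>

struct Observer {                        // illustrative observer type
    void onNotify() { std::cout << "still alive, callback delivered\n"; }
};

struct Subject {
    // Hold only weak references; a destroyed observer simply drops out.
    std::vector<std::weak_ptr<Observer>> observers;

    void notifyAll() {
        for (auto& weak : observers) {
            if (auto strong = weak.lock())   // callback only if a strong ref can be obtained
                strong->onNotify();
        }
    }
};

int main() {
    Subject subject;
    auto obs = std::make_shared<Observer>();
    subject.observers.push_back(obs);

    subject.notifyAll();  // delivered
    obs.reset();          // observer destroyed
    subject.notifyAll();  // lock() fails, nothing is called
}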
I'm starting to get into the practice of using C++11 std::threads. Typically, with Win32, I need to call CloseHandle whenever I have a handle to a thread. Do I still need to call CloseHandle when I use a C++11 native_handle? Also, if I don't use C++11 native handles, do thread handles get cleaned up properly?
Of course not.
std::thread has a destructor which releases any operating-system-specific resources the object may have acquired.
Actually, every (good) C++ object has a destructor which cleans up whatever needs to be cleaned up, and this destructor (when the code is written correctly) is called automatically by the program.
This idiom is known as RAII - every object has a constructor which acquires the resources the object needs, and a matching destructor which releases them when the object goes out of scope.
When done correctly, this technique is far more powerful than C-style manual resource management or a "high-level" garbage collector.
As a word of advice, if the standard provides you with some utility, ignore the corresponding Win32 API completely. The standard does not depend on an operating-system-specific API to work correctly.
No, they are RAII, like shared pointers.
The main things you need to worry about are:
synchronization (no pain no gain!)
that you cannot copy threads (deleted copy constructor), so you pass them by reference or move them.
Details here.
http://www.cplusplus.com/reference/thread/thread/thread/
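For completeness, a minimal sketch (no Win32 calls needed) showing that handle cleanup is the destructor's job once the thread has been joined:

#include <iostream>
#include <thread>
#include <utility>

int main() {
    std::thread worker([] { std::cout << "working...\n"; });

    std::thread owner = std::move(worker);   // movable, not copyable

    owner.join();                            // wait for the thread to finish
    // No CloseHandle() here: ~thread() releases the underlying OS
    // resources once the thread has been joined (or detached).
}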
I have an object that sends a signal when a group of tasks completes. These tasks are executed independently in a thread pool. I want to send a notification to an observer when all of the tasks in a group are complete.
Essentially this boils down to a reference-counting scheme. When ref == 0, I send the notification. One implementation of this would be to leverage boost smart pointers (or any ref-counted smart pointer).
class TaskCompletionNotifier {
public:
    ~TaskCompletionNotifier() {
        _listener->notify();   // fires when the last Task drops its reference
    }
    void setListener(Listener* listener) { _listener = listener; }
private:
    Listener* _listener;
};

class Task {
    boost::shared_ptr<TaskCompletionNotifier> _notifier;
};
My question is, is it bad practice to perform this call-out in the destructor of an object?
Is this inherently bad? No.
Does it open up potential pitfalls? Yes.
Make sure you don't allow any exceptions to escape the destructor, and you're best off making sure that _listener->notify() doesn't end up modifying any member data of this object: it's safe and okay if it does, but may be confusing and/or mess up your destructor's close-down logic.
Other than that, go for it.
Is it bad practice to call an observer in a destructor?
No. It is not.
But it opens up potential for many pitfalls, so make sure you do it without violating the C++ rules (the C++ standard). In particular:
make sure exceptions are handled so that no exception propagates out of the destructor
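A small sketch of what that guard might look like, reusing the Listener/notify() names from the question's pseudocode:

~TaskCompletionNotifier() {
    try {
        if (_listener)
            _listener->notify();   // observer call-out on destruction
    } catch (...) {
        // Swallow (or log) everything: letting an exception escape a
        // destructor risks std::terminate during stack unwinding.
    }
}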
A few days ago a friend told me about a situation they had in their project.
Someone decided that it would be good to destroy a NotVerySafeClass object in a parallel thread (i.e. asynchronously). It was implemented some time ago.
Now they get crashes, because a method is called in the main thread while the object is being destroyed.
Some workaround was created to handle the situation.
Of course, this is just an example of a not very good solution, but still the question:
Is there some way to prevent the situation internally in NotVerySafeClass (deny running the methods if the destructor has already been called, and force the destructor to wait until any running method is over (let's assume there is only one method))?
No, no and no. This is a fundamental design issue, and it shows a common misconception in thinking about multithreaded situations and race conditions in general.
There is one thing that is just as likely to happen, and it really shows that you need an ownership concept: the calling thread could invoke the function right after the object has been destroyed. At that point there is no object anymore, so trying to call a member function on it is UB, and since the object no longer exists, it also has no chance to prevent any interaction between the dtor and a member function.
What you need is a sound ownership policy. Why is the code destroying the object when it is still needed?
Without more details about the code, a std::shared_ptr would probably solve this issue. Depending on your specific situation, you may be able to solve it with a more lightweight policy.
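A minimal sketch of the shared-ownership idea, assuming std::shared_ptr and an illustrative stand-in for the class in question:

#include <iostream>
#include <memory>
#include <thread>

struct NotVerySafeClass {               // stand-in for the real class
    void doWork() { std::cout << "working\n"; }
};

int main() {
    auto obj = std::make_shared<NotVerySafeClass>();

    // The worker copies the shared_ptr, so it co-owns the object:
    // the destructor runs only after the last owner lets go.
    std::thread worker([obj] { obj->doWork(); });

    obj.reset();      // the main thread "destroys" its reference asynchronously
    worker.join();    // the object is deleted by whichever owner releases last
}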
Sounds like a horrible design. Can't you use smart pointer to make sure the object is destroyed only when no-one holds any references to it?
If not, I'd use some external synchronization mechanism. Synchronizing the destructor with a method is really awkward.
There is no method that can be used to prevent this scenario.
In multithreaded programming, you need to make sure that an object will not be deleted while other threads are still accessing it.
If you are dealing with such code, it needs a fundamental fix.
(Not to promote bad design) but to answer your two questions:
... deny running the methods, if destructor was called already
You can do this with the solution proposed by #snemarch and #Simon (a lock). To handle the situation where one thread is inside the destructor while another one is waiting for the lock at the beginning of your method, you need to keep track of the state of the object in a thread-safe way, in memory shared between threads - e.g. a static atomic int that is set to 0 by the destructor before releasing the lock. The method checks the int once it acquires the lock and bails out if it is 0.
... force the destructor to wait, until any running method is over
The solution proposed by #snemarch and #Simon (a lock) will handle this.
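A rough sketch of that scheme, with the lock and the alive-flag living in memory that outlives the object (names are illustrative; note that calling a method on an already-destroyed object is still formally UB, which is why this is a patch rather than a fix):

#include <atomic>
#include <mutex>

// Shared state that outlives any single instance, as described above.
static std::mutex g_lock;
static std::atomic<int> g_alive{1};

class NotVerySafeClass {
public:
    ~NotVerySafeClass() {
        std::lock_guard<std::mutex> guard(g_lock);  // wait for a running method to finish
        g_alive.store(0);                           // deny any later calls
    }

    void method() {
        std::lock_guard<std::mutex> guard(g_lock);
        if (g_alive.load() == 0)
            return;                                 // destructor already ran: bail out
        // ... real work ...
    }
};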
No. You just need to design the program properly so that it is thread-safe.
Why not make use of a mutex/semaphore? At the beginning of any method the mutex is locked, and the destructor waits until the mutex is unlocked. It's a fix, not a solution. Maybe you should change the design of a part of your application.
Simple answer: no.
Somewhat longer answer: you could guard each and every member function and the destructor in your class with a mutex... welcome to deadlock opportunities and performance nightmares.
Gather a mob and beat some design sense into the 'someone' who thought parallel destruction was a good idea :)
If I have a COM object, is it required for the AddRef() and Release() methods to be thread-safe- i.e., that I have to use atomic operations for my ref count?
Yes - if you are using the free-threaded apartment model, use InterlockedIncrement() and InterlockedDecrement() to handle the ref count.
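Roughly what that looks like; the class name, the m_cRef member and the delete-on-zero pattern are illustrative, not taken from any particular codebase:

#include <windows.h>

class MyComObject /* : public IUnknown - QueryInterface omitted for brevity */ {
    LONG m_cRef = 1;                    // the creator starts with one reference
public:
    ULONG AddRef()
    {
        return InterlockedIncrement(&m_cRef);        // atomic increment
    }
    ULONG Release()
    {
        ULONG count = InterlockedDecrement(&m_cRef); // atomic decrement
        if (count == 0)
            delete this;                             // last reference gone
        return count;
    }
};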
I think the answer is no. It is not required. If you want your COM object to be thread safe then those should be thread safe. Otherwise they do not have to be.
E.g. if you look here: The Rules of the Component Object Model, it is not mentioned as a requirement.
Also, in The COM Programmer's Cookbook (Building a COM Component) you can see a sample object without thread-safe reference counting.
Microsoft code snippet:
ULONG COutside::AddRef (void)
{
    return ++ m_cRef;
}
In practice most implementations use thread-safe reference counting, because otherwise the COM objects would not be thread-safe. If you know the object will only be used in one thread, I believe it is an allowed optimization. Not all COM objects are thread-safe; I've worked with a few that weren't.
To deal with the fact that COM objects may or may not be thread-safe, COM offers different "apartments" in which COM objects are created. In a single-threaded apartment only a single thread can access objects within that apartment, whereas in a multi-threaded apartment objects can be shared between multiple threads. Quoting from Understanding and Using COM Threading Models:
"Although multi-threaded apartments,
sometimes called free-threaded
apartments, are a much simpler model,
they are more difficult to develop for
because the developer must implement
the thread synchronization for the
objects, a decidedly nontrivial task."
Yup, that is required. COM is a simple binary standard, and if you use free-threaded apartments you'll get truly free-threaded accesses.
This will depend on the threading model you use and the kind of object. Please see the description of _ATL_*_THREADED macros. Those macros affect thread-safety of AddRef()/Release() of "usual" classes and of factories.
If you use a "too loose" macro you violate thread-safety requirements and your program might malfunction. If you choose a "too tight" macro you might lose some performance, but as usual you don't know whether you care until you profile.
Here's how you choose the right macro (and this explains whether AddRef()/Release() have to be thread-safe).
If none of the classes of a single server has a threading model specified (Main STA), then there's no chance of concurrent access to either objects or factories, and they can all have non-threadsafe AddRef()/Release(); you get this by specifying the _ATL_SINGLE_THREADED macro.
Otherwise, if at least one class has the "Apartment" model specified, you need thread-safe AddRef()/Release() for the factory of that object, but can still have non-threadsafe AddRef()/Release() in the object itself; you get this by specifying the _ATL_APARTMENT_THREADED macro. This macro makes all factories use thread-safe AddRef()/Release() and all objects non-threadsafe AddRef()/Release().
Finally, if at least one class has the "Both" or "Free" threading model specified, you need AddRef()/Release() to be thread-safe in both that class and its factory, and you either specify _ATL_FREE_THREADED or just don't specify any of the above - this "tightest" macro's effect is on by default. So the default configuration for COM objects created with ATL is thread-safe AddRef()/Release() for all objects - both served objects and factories.
That said, you don't always need AddRef()/Release() to be thread-safe, but you usually should, unless you know for sure that you can get by without it and that doing so gains you performance.
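For reference, this is roughly where the macro choice lives in an ATL project - in the precompiled header, before any ATL header is included (stdafx.h and the chosen macro here are just an example):

// stdafx.h - define exactly one of these *before* including any ATL header.
// If none is defined, the free-threaded (most conservative) behaviour is the default.

//#define _ATL_SINGLE_THREADED      // all classes Main STA: no thread-safe AddRef()/Release() needed
#define _ATL_APARTMENT_THREADED     // thread-safe factories, non-threadsafe objects
//#define _ATL_FREE_THREADED        // thread-safe AddRef()/Release() everywhere (the default)

#include <atlbase.h>
#include <atlcom.h>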
I have an object (Client * client) which starts multiple threads to handle various tasks (such as processing incoming data). The threads are started like this:
// Start the thread that will process incoming messages and stuff them into the appropriate queues.
mReceiveMessageThread = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)receiveRtpMessageFunction, this, 0, 0);
These threads all have references back to the initial object, like so:
// Thread initialization function for receiving RTP messages from a newly connected client.
static int WINAPI receiveRtpMessageFunction(LPVOID lpClient)
{
    LOG_METHOD("receiveRtpMessageFunction");
    Client * client = (Client *)lpClient;
    while (client->isConnected())
    {
        if (client->receiveMessage() == ERROR)
        {
            Log::log("receiveRtpMessageFunction Failed to receive message");
        }
    }
    return SUCCESS;
}
Periodically, the Client object gets deleted (for various good and sufficient reasons). But when that happens, the processing threads that still have references to the (now deleted) object throw exceptions of one sort or another when trying to access member functions on that object.
So I'm sure that there's a standard way to handle this situation, but I haven't been able to figure out a clean approach. I don't want to just terminate the thread, as that doesn't allow for cleaning up resources. I can't set a property on the object, as it's precisely properties on the object that become inaccessible.
Thoughts on the best way to handle this?
I would solve this problem by introducing a reference count on your object. The worker thread would hold a reference and so would the creator of the object. Instead of using delete, you decrement the reference count, and whoever drops the last reference is the one that actually calls delete.
You can use existing reference counting mechanisms (shared_ptr etc.), or you can roll your own with the Win32 APIs InterlockedIncrement() and InterlockedDecrement() or similar (maybe the reference count is a volatile DWORD starting out at 1...).
The only other thing that's missing is that when the main thread releases its reference, it should signal to the worker thread to drop its own reference. One way you can do this is by an event; you can rewrite the worker thread's loop as calls to WaitForMultipleObjects(), and when a certain event is signalled, you take that to mean that the worker thread should clean up and drop the reference.
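A sketch of how those two pieces - a shared reference count plus a stop event - might fit together; the simplified Client and the loop shape are illustrative, not taken from the original code:

#include <windows.h>

struct Client {
    volatile LONG refCount = 1;                                  // the creator holds one reference
    HANDLE stopEvent = CreateEvent(NULL, TRUE, FALSE, NULL);     // manual-reset, initially unsignalled

    void addRef()  { InterlockedIncrement(&refCount); }
    void release() {
        if (InterlockedDecrement(&refCount) == 0)
            delete this;                                         // last reference deletes the object
    }
    bool receiveMessage() { /* ... */ return true; }
};

static DWORD WINAPI receiveRtpMessageFunction(LPVOID lpClient)
{
    Client* client = static_cast<Client*>(lpClient);
    // Keep working until the owner signals shutdown.
    while (WaitForSingleObject(client->stopEvent, 0) == WAIT_TIMEOUT) {
        client->receiveMessage();
    }
    client->release();                                           // drop the worker's reference
    return 0;
}

// Owner side (sketch): client->addRef() before CreateThread(..., receiveRtpMessageFunction, client, ...),
// then SetEvent(client->stopEvent) followed by client->release() instead of delete.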
You don't have much leeway because of the running threads.
No combination of shared_ptr + weak_ptr can save you here... you may call a method on the object while it's valid and then have its destruction ordered (using only shared_ptr would save you).
The only thing I can imagine is to first terminate the various processes and then destroy the object. This way you ensure that each process terminates gracefully, cleaning up its own mess if necessary (and it might need the object to do that).
This means that you cannot delete the object out of hand, since you must first resynchronize with those who use it, and that you need some event handling for the synchronization part (since you basically want to tell the threads to stop, and not wait indefinitely for them).
I leave the synchronization part to you, there are many alternatives (events, flags, etc...) and we don't have enough data.
You can deal with the actual cleanup from either the destructor itself or by overloading the various delete operations, whichever suits you.
You'll need to have some other state object the threads can check to verify that the "client" is still valid.
One option is to encapsulate your client reference inside some other object that remains persistent, and provide a reference to that object from your threads.
You could use the observer pattern with proxy objects for the client in the threads. The proxies act like smart pointers, forwarding access to the real client. When you create them, they register themselves with the client, so that it can invalidate them from its destructor. Once they're invalidated, they stop forwarding and just return errors.
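A rough sketch of that proxy idea (names, locking granularity and the error return are illustrative; proxy ownership/cleanup is omitted for brevity):

#include <mutex>
#include <vector>

class Client;

class ClientProxy {
public:
    explicit ClientProxy(Client* c) : client_(c) {}
    void invalidate() { std::lock_guard<std::mutex> lock(m_); client_ = nullptr; }
    bool receiveMessage();              // forwards only while the Client is alive
private:
    std::mutex m_;
    Client* client_;
};

class Client {
public:
    ClientProxy* makeProxy() {          // proxies register themselves with the client
        proxies_.push_back(new ClientProxy(this));
        return proxies_.back();
    }
    ~Client() {
        for (ClientProxy* p : proxies_)
            p->invalidate();            // from now on the proxies just return errors
    }
    bool receiveMessage() { return true; }
private:
    std::vector<ClientProxy*> proxies_;
};

bool ClientProxy::receiveMessage() {
    std::lock_guard<std::mutex> lock(m_);
    if (!client_)
        return false;                   // invalidated: report an error instead of forwarding
    return client_->receiveMessage();
}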
This could be handled by passing a (boost) weak pointer to the threads.
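For example (using std::weak_ptr for brevity; boost::weak_ptr works the same way, and the names are illustrative):

#include <chrono>
#include <memory>
#include <thread>

struct Client {                          // stand-in for the real Client class
    bool isConnected() const { return true; }
    bool receiveMessage() { return true; }
};

void receiveLoop(std::weak_ptr<Client> weakClient)
{
    for (;;) {
        std::shared_ptr<Client> client = weakClient.lock();
        if (!client)
            break;                       // Client was deleted elsewhere: exit cleanly
        if (client->isConnected())
            client->receiveMessage();
        client.reset();                  // don't keep the Client alive between iterations
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    auto client = std::make_shared<Client>();
    std::thread worker(receiveLoop, std::weak_ptr<Client>(client));
    client.reset();                      // the Client goes away; the loop notices and stops
    worker.join();
}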