Should private function members be exception safe? - c++

When writing exception-safe code, should all private member functions guarantee at least basic exception safety? What would be best/good practice in this situation? Alternatives?
For example, say I have a class Foo with a public member function DoSomething, which calls a private member function DoSomeOfIt. DoSomeOfIt may fail due to some functions that I cannot influence, and when it does it may leave the Foo object in a partially modified state which violates Foo's invariants. So I could wrap DoSomeOfIt in a try-catch block and call another private member function UndoThat in the catch block to undo what DoSomeOfIt did. Individually, DoSomeOfIt and UndoThat may not be exception safe, but DoSomething is. Would that be exception-safe code? (In this case, with the strong guarantee.)
class Foo {
public:
    // I provide the strong exception safety guarantee.
    void DoSomething() {
        try {
            DoSomeOfIt();
        } catch (const std::exception& e) {
            UndoThat();
            throw;
        }
    }
private:
    // I provide no exception safety guarantee.
    void DoSomeOfIt() {
        // May throw and violate class invariants.
    }
    // I provide no exception safety guarantee.
    void UndoThat() {
        // Undoes everything the most recent call of
        // DoSomeOfIt did.
    }
};
Of course I could simply include the code of DoSomeOfIt and UndoThat directly in DoSomething, but that could lead to code bloat and a long function body, whereas breaking the tasks up into separate functions modularizes them and may make the code more readable(?)
DISCLAIMER:
I understand this may be opinion-based. I am not sure if that makes this a bad post, but I'd appreciate any opinions, or experiences where this has led to issues, or where it is common practice, etc.

This is my opinion.
If there is a possibility that DoSomeOfIt will be used by more than one function, it'll be better to have the exception handling code reside in that function itself -- you don't want to have the exception handling code duplicated across multiple functions. If that is not a possibility, your posted code is fine.
Having said that, if you move the exception handling code into DoSomeOfIt, you don't lose anything. In fact, the function becomes better: if it is used in the future by another function, the exception handling is already taken care of. Looked at from that point of view, it is better to move the exception handling code into DoSomeOfIt.
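For illustration, a rough sketch of that suggestion using the names from the question (the second caller and MayThrowAndModifyState are made up):
class Foo {
public:
    // Both callers now get the strong guarantee without repeating the try/catch.
    void DoSomething() { DoSomeOfIt(); }
    void DoSomethingElse() { DoSomeOfIt(); }  // hypothetical second caller

private:
    void DoSomeOfIt() {
        try {
            MayThrowAndModifyState();  // hypothetical operation that can fail mid-way
        } catch (...) {
            UndoThat();  // restore Foo's invariants before letting the exception escape
            throw;
        }
    }
    void UndoThat() { /* undo whatever the most recent DoSomeOfIt attempt did */ }
    void MayThrowAndModifyState() { /* ... */ }
};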

Related

Exceptions from static variable constructor/destructor

Hi, I found this line in a tutorial on the web:
What happens when you declare a static object and the destructor throws an exception?
As with static constructor exceptions, the application will crash.
I can't seem to understand what the difference is if the object is static or not...
Thanks
I'm not sure if you're asking about constructors or destructors that throw exceptions - the problem statement refers to destructors, but the example code and some of the comments refer to constructors.
With regards to constructors that throw, it depends on whether the static object is local or global. Local static objects are constructed the first time that control passes through the scope in which they are defined, and exception handlers should behave normally for them. Global static objects are constructed before the program enters main(); since you can't have a try-catch block at global scope, if the constructor of a global static object throws, it basically means that your application crashes before it makes it out of the starting gate.
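For illustration, a small sketch of the local-static case (the Config type and its failure mode are made up): the constructor runs the first time control reaches the declaration, a throw can be caught by the caller, and construction is retried on the next call.
#include <cstdio>
#include <cstdlib>
#include <stdexcept>

struct Config {
    Config() {
        if (std::rand() % 2 == 0)  // pretend that loading can fail
            throw std::runtime_error("config load failed");
    }
};

Config& GetConfig() {
    static Config instance;  // constructed on the first call that gets past the throw
    return instance;
}

int main() {
    for (int attempt = 0; attempt < 3; ++attempt) {
        try {
            GetConfig();
            std::puts("constructed");
            break;
        } catch (const std::exception& e) {
            std::printf("construction failed, will retry: %s\n", e.what());
        }
    }
}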
As for destructors, generally speaking, destructors that can throw exceptions create serious problems. Herb Sutter details the reasons why in his great book "Exceptional C++", which is available on Google Books. Basically, if a destructor can throw exceptions, it makes it nearly impossible to write exception-safe code. Consider the following example.
class T
{
public:
    T() {}
    ~T() { throw 5; }
};

void foo()
{
    T t;
    throw 10;
}
When foo() reaches the throw 10; statement, control will exit the context of foo(), destroying the local object t in the process. This calls the destructor for t, which tries to throw another exception. In C++, it isn't possible to have two exceptions propagating simultaneously; if a second exception is thrown like this, the program calls std::terminate(), which does what it sounds like and terminates the program (you can set your own function to be called instead using std::set_terminate, but this is mostly for doing custom clean-up - you can't change the fact that the program ends after the function is done).
The way to avoid this is to make sure that destructors never throw exceptions. It may not matter with static objects, because as celtschk noted the destructor won't be called until the program is terminating anyway, but as a general rule, if you find yourself writing a class with a destructor that can throw an exception, you should think carefully about whether it is really the best approach.
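Applied to the example above, one common way to follow that rule is to stop any failure at the destructor boundary (Cleanup() here is a hypothetical stand-in for the work that might fail):
class T {
public:
    ~T() {
        try {
            Cleanup();  // the work that might throw
        } catch (...) {
            // Swallow (or log) the error; never let it escape the destructor.
        }
    }
private:
    void Cleanup() { /* ... */ }
};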

Is this a safe way of throwing an exception from a destructor?

I know that throwing from a destructor is in general a bad idea, but I was wondering if I could use std::uncaught_exception() to safely throw from a destructor.
Consider the following RAII type:
struct RAIIType {
    ...
    ~RAIIType() {
        // do stuff..
        if (SomethingBadHappened()) {
            // Assume that if an exception is already active,
            // we don't really need to detect this error.
            if (!std::uncaught_exception()) {
                throw std::runtime_error("Data corrupted");
            }
        }
    }
};
Is this UB in C++11? Is it a bad design?
You have an if; did you think about the "other" condition? It can throw an exception or... do what? There are two things that can be in the other branch.
Nothing (If nothing needs to happen when the error occurs, why throw an exception?)
It "handles" the exception (If it can be "handled", why throw an exception?)
Now that we've established that there's no purpose to throwing an exception conditionally like that, the rest of the question is sort of moot. But here's a tidbit: NEVER THROW EXCEPTIONS FROM DESTRUCTORS. If an object throws an exception, the calling code normally checks that object in some way to "handle" the exception. If that object no longer exists, there's usually no way to "handle" the exception, meaning the exception should not be thrown. Either it's ignored, or the program makes a dump file and aborts. So throwing exceptions from destructors is pointless anyway, because catching it is pointless. With this in mind, classes assume that destructors won't throw, and virtually every class leaks resources if a destructor throws. So NEVER THROW EXCEPTIONS FROM DESTRUCTORS.
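A sketch of how a class can still report such errors without throwing from its destructor (hypothetical FileWriter, not from the answer above): expose an explicit close() that may throw while the object still exists, and keep the destructor to silent, best-effort cleanup.
#include <cstdio>
#include <stdexcept>

class FileWriter {
public:
    explicit FileWriter(const char* path) : file_(std::fopen(path, "w")) {
        if (!file_) throw std::runtime_error("cannot open file");
    }

    // Callers that care about errors call close() themselves and can catch the
    // exception while the object still exists.
    void close() {
        if (!file_) return;
        std::FILE* f = file_;
        file_ = nullptr;
        if (std::fclose(f) != 0)
            throw std::runtime_error("close failed");
    }

    ~FileWriter() {
        if (file_) std::fclose(file_);  // ignore any error here: destructors must not throw
    }

    FileWriter(const FileWriter&) = delete;
    FileWriter& operator=(const FileWriter&) = delete;

private:
    std::FILE* file_;
};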
Note that your code doesn't do what you think it does. In case SomethingBadHappened and there is no stack unwinding in progress, you attempt to throw from a destructor and std::terminate is nonetheless called, because destructors are implicitly noexcept in C++11. You will need to annotate your destructor with a noexcept(false) specification.
Suppose you do this; it is still not clear what you mean by "safely". Your destructor never triggers std::terminate directly. But calling std::terminate is not UB: it is very well defined and can be useful.
For sure, you cannot put your class RAIIType into STL containers. The C++ Standard explicitly calls that UB (when a destructor throws in an STL container).
Also, the design looks suspicious: the if-statement really means "sometimes report a failure and sometimes not". Are you fine with this?
See also this post for a similar discussion.
I know that throwing from a destructor is in general a bad idea, but I was wondering if I could use std::uncaught_exception() to safely throw from a destructor.
You may like to have a look at uncaught_exceptions proposal from Herb Sutter:
Motivation
std::uncaught_exception is known to be “nearly useful” in many situations, such as when implementing an Alexandrescu-style ScopeGuard. [1]
In particular, when called in a destructor, what C++ programmers often expect and what is basically true is: “uncaught_exception returns true iff this destructor is being called during stack unwinding.”
However, as documented at least since 1998 in Guru of the Week #47, it means code that is transitively called from a destructor that could itself be invoked during stack unwinding cannot correctly detect whether it itself is actually being called as part of unwinding. Once you’re in unwinding of any exception, to uncaught_exception everything looks like unwinding, even if there is more than one active exception.
...
This paper proposes a new function int std::uncaught_exceptions() that returns the number of exceptions currently active, meaning thrown or rethrown but not yet handled.
A type that wants to know whether its destructor is being run to unwind this object can query uncaught_exceptions in its constructor and store the result, then query uncaught_exceptions again in its destructor; if the result is different, then this destructor is being invoked as part of stack unwinding due to a new exception that was thrown later than the object’s construction.
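A minimal sketch of the idiom that paragraph describes, assuming a C++17 compiler (where std::uncaught_exceptions() is available) and a hypothetical Transaction type:
#include <exception>

class Transaction {
public:
    Transaction() : exceptions_on_entry_(std::uncaught_exceptions()) {}

    ~Transaction() {
        // More in-flight exceptions now than at construction time means this
        // destructor is running as part of stack unwinding: roll back.
        if (std::uncaught_exceptions() > exceptions_on_entry_)
            Rollback();
        else
            Commit();
    }

private:
    void Commit()   { /* ... */ }
    void Rollback() { /* ... */ }
    int exceptions_on_entry_;
};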
It depends on what you mean by "safely".
That will prevent one of the issues with throwing from a destructor - the program won't be terminated if the error happens during stack unwinding when handling another exception.
However, there are still issues, among them:
If you have an array of these, then they may not all be destroyed if one throws on destruction.
Some exception-safety idioms rely on non-throwing destruction.
Many people (such as myself) don't know all the rules governing what will or won't be correctly destroyed if a destructor throws, and won't be confident that they can use your class safely.

Is it abusive to implement the "execute-around" idiom with scoped objects?

Should scoped objects (with complementary logic implemented in constructor and destructor) only be used for resource cleanup (RAII)?
Or can I use it to implement certain aspects of the application's logic?
A while ago I asked about Function hooking in C++. It turns out that Bjarne addressed this problem and the solution he proposes is to create a proxy object that implements operator-> and allocates a scoped object there. The "before" and "after" are implemented in the scoped object's constructor and destructor respectively.
The problem is that destructors should not throw. So you have to wrap the destructor in a try { /* ... */ } catch(...) { /*empty*/ } block. This severely limits the ability to handle errors in the "after" code.
Should scoped objects only be used to cleanup resources or can I use it for more than that? Where do I draw the line?
If you pedantically consider the definition of RAII, anything you do using scoping rules and destructor invocation that doesn't involve resource deallocation simply isn't RAII.
But, who cares? Maybe what you're really trying to ask is,
I want X to happen every time I leave function Y. Is it
abusive to use the same scoping rules and destructor invocation that
RAII uses in C++ if X isn't resource deallocation?
I say, no. Scratch that, I say heck no. In fact, from a code clarity point of view, it might be better to use destructor calls to execute a block of code if you have multiple return points or possibly exceptions. I would document the fact that your object is doing something non-obvious on destruction, but this can be a simple comment at the point of instantiation.
Where do you draw the line? I think the KISS principle can guide you here. You could probably write your entire program in the body of a destructor, but that would be abusive. Your Spidey Sense will tell you that is a Bad Idea, anyway. Keep your code as simple as possible, but not simpler. If the most natural way to express certain functionality is in the body of a destructor, then express it in the body of a destructor.
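For illustration, a minimal scope-exit helper of the kind this answer endorses (a sketch, assuming C++17 for class template argument deduction; the names are made up):
#include <utility>

// Runs the callable whenever the enclosing scope is left, whether by return or by exception.
template <typename F>
class ScopeExit {
public:
    explicit ScopeExit(F f) : f_(std::move(f)) {}
    ~ScopeExit() { f_(); }  // the "after" code; keep it non-throwing
    ScopeExit(const ScopeExit&) = delete;
    ScopeExit& operator=(const ScopeExit&) = delete;
private:
    F f_;
};

void Example() {
    ScopeExit log_exit([] { /* "after" code, e.g. logging or flushing */ });
    // ... "before" code and the real work; every exit path runs log_exit's callable
}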
You want a scenario where, guaranteed, the suffix is always done. That sounds exactly like the job of RAII to me. However, I would not necessarily write it that way myself; I'd rather use a method chain of templated member functions.
I think with C++11 you can semi-safely allow the suffix() call to throw. The strict rule isn't "never throw from a destructor", although that's good advice; instead the rule is:
never throw an exception from a destructor while processing another exception
In the destructor you can now use std::current_exception, which I think verifies the "while processing another exception" element of the destructor+exception rule. With this you could do:
~Call_proxy() noexcept(false) {  // noexcept(false) so the throw from the else branch can propagate in C++11
    if (std::current_exception()) {
        try {
            suffix();
        }
        catch (...) {
            // Not good, but not fatal perhaps?
            // Just don't rethrow and you're ok.
        }
    }
    else {
        suffix();
    }
}
I'm not sure if that's actually a good idea in practice though; you still have a hard problem to deal with if suffix() throws while another exception is already being thrown.
As for the "is it abuse" question, I don't think it's abusive any more than metaprogramming or writing a?b:c instead of a full-blown if statement is, if it's the right tool for the job! It's not subverting any language rules, simply exploiting them, within the letter of the law. The real issue is the predictability of the behaviour to readers unfamiliar with the code and the long-term maintainability, but that's an issue for all designs.

Is my use of C++ catch clause, families of exception classes, and destruction sane?

Once in a while, I notice some coding pattern that I've had for years and it makes me nervous. I don't have a specific problem, but I also don't remember enough about why I adopted that pattern, and some aspect of it seems to match some anti-pattern. This has recently happened to me WRT how some of my code uses exceptions.
The worrying thing involves cases where I catch an exception "by reference", treating it in a similar way to how I'd treat a parameter to a function. One reason to do this is so I can have an inheritance hierarchy of exception classes, and specify a more general or more precise catch type depending on the application. For example, I might define...
class widget_error {};
class widget_error_all_wibbly : public widget_error {};
class widget_error_all_wobbly : public widget_error {};

void wibbly_widget ()
{
    throw widget_error_all_wibbly ();
}

void wobbly_widget ()
{
    throw widget_error_all_wobbly ();
}

void call_unknown_widget (void (*p_widget) ())
{
    try
    {
        p_widget ();
    }
    catch (const widget_error &p_exception)
    {
        // Catches either widget_error_all_wibbly or
        // widget_error_all_wobbly, or a plain widget_error if that
        // is ever thrown by anything.
    }
}
This is now worrying me because I've noticed that a class instance is constructed (as part of the throw) within a function, but is referenced (via the p_exception catch-clause "parameter") after that function has exited. This is normally an anti-pattern - a reference or pointer to a local variable or temporary created within a function, but passed out when the function exits, is normally a dangling reference/pointer, since the local variable/temporary is destructed and the memory freed when the function exits.
Some quick tests suggest that the throw above is probably OK - the instance constructed in the throw clause isn't destructed when the function exits, but is destructed when the catch-clause that handles it completes - unless the catch block rethrows the exception, in which case the next catch block does this job.
My remaining nervousness is because a test run in one or two compilers is no proof of what the standard says, and my experience is that what I think is common sense is often different from what the language guarantees.
So - is this pattern of handling exceptions (catching them using a reference type) safe? Or should I be doing something else, such as...
Catching (and explicitly deleting) pointers to heap-allocated instances instead of references to something that looks (when thrown) very like a temporary?
Using a smart pointer class?
Using "pass-by-value" catch clauses, and accepting that I cannot catch any exception class from a hierarchy with one catch clause?
Something I haven't thought of?
This is ok. It's actually good to catch exceptions by constant reference (and bad to catch pointers). Catching by value creates an unnecessary copy. The compiler is smart enough to handle the exception (and its destruction) properly -- just don't try to use the exception reference outside of your catch block ;-)
In fact, what I often do is to inherit my hierarchy from std::runtime_error (which inherits from std::exception). Then I can use .what(), and use even fewer catch blocks while handling more exceptions.
This pattern is definitely safe.
There are special rules that extend the lifetime of a thrown object. Effectively, it exists as long as it is being handled and it is guaranteed to exist until the end of the last catch block that handles it.
One very common idiom, for example, is to derive custom exceptions from std::exception, override its what() member function, and catch by reference so that you can print error messages from a wide variety of exceptions with one catch clause.
No, you're definitely doing it right. See http://www.parashift.com/c++-faq-lite/exceptions.html#faq-17.13 , and the rest of the FAQ chapter for that matter.
Yes. So far so good.
Personally I use std::runtime_error as the base of all exception classes. It handles error messages etc.
Also, don't declare more exceptions than you need to. Define an exception only for things that can actually be caught and fixed. Use a more generic exception for things that cannot be caught or fixed.
For example: if I develop a library A, then I will have an AException derived from std::runtime_error. This exception will be used for all generic errors from the library. For any specific situation where the user of the library can actually catch the exception and do something about it (fix or mitigate), I will create a specific exception derived from AException (but only if there is something that can be done with it).
Indeed, Sutter and Alexandrescu recommend this pattern in their 'C++ Coding Standards'.
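For illustration, a minimal sketch of the layout described above, with hypothetical names (AException, AConnectionLost):
#include <cstdio>
#include <stdexcept>

// Generic library-wide exception.
class AException : public std::runtime_error {
public:
    using std::runtime_error::runtime_error;
};

// Specific exception the caller can realistically act on (e.g. retry).
class AConnectionLost : public AException {
public:
    using AException::AException;
};

void UseLibrary() { throw AConnectionLost("socket dropped"); }  // pretend failure

int main() {
    try {
        UseLibrary();
    } catch (const AConnectionLost& e) {
        std::printf("reconnecting after: %s\n", e.what());   // the fixable case
    } catch (const AException& e) {
        std::printf("library error: %s\n", e.what());        // generic fallback
    }
}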

Should I add throw() to the declarations for my C++ destructors?

I have seen some C++ classes with a destructor defined as follows:
class someClass
{
public:
    someClass();
    ~someClass() throw();
};
Is this a good idea?
I am well aware that destructors should never throw exceptions, but will this actually prevent me from throwing exceptions in my destructors? I'm not 100% sure what it guarantees.
Reference: this recent question
It does not prevent you from throwing exceptions from your destructor. The compiler will still let you do it. The difference is that if you do allow an exception to escape from that destructor, your program will immediately call std::unexpected. That function calls whatever the unexpected handler points to, which by default is std::terminate. So unless you do something to handle an unexpected exception, your program terminates, which isn't altogether a bad idea. After all, if the exception really is unexpected, then there's not really anything your program would be able to do to handle it anyway.
This isn't something special about destructors; the same rules apply to exception specifications for all methods.
It's not an awful idea. If you throw in the dtor while no exception is being propagated, you will abort immediately, which lets you know you've forgotten to make an actual non-throwing dtor.
If on the other hand you leave the throw spec out, you'll only know about your bad dtor implementation when an exception is, in fact, thrown.
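For illustration, a small self-contained sketch (hypothetical Guard type) of what the answers describe: an exception escaping a destructor declared throw() goes straight to std::unexpected / std::terminate instead of reaching the catch block. Dynamic exception specifications are deprecated in C++11 and removed in C++17, so this needs an older language mode.
#include <cstdio>
#include <cstdlib>
#include <exception>

void OnTerminate() {
    std::puts("terminate called - the bad destructor is caught immediately");
    std::abort();
}

struct Guard {
    // Exception specification as in the question; compile with -std=c++11 or older.
    ~Guard() throw() {
        throw 42;  // violates the spec: std::unexpected / std::terminate runs
    }
};

int main() {
    std::set_terminate(OnTerminate);
    try {
        Guard g;
    } catch (...) {
        std::puts("never reached: the program terminates before this handler");
    }
}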