Should scoped objects (with complementary logic implemented in constructor and destructor) only be used for resource cleanup (RAII)?
Or can I use it to implement certain aspects of the application's logic?
A while ago I asked about Function hooking in C++. It turns out that Bjarne addressed this problem and the solution he proposes is to create a proxy object that implements operator-> and allocates a scoped object there. The "before" and "after" are implemented in the scoped object's constructor and destructor respectively.
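For reference, here is a rough sketch of that kind of proxy; the class names and the prefix/suffix output are my own illustration, not Bjarne's exact code:

#include <iostream>

class Widget {
public:
    void work() { std::cout << "work\n"; }
};

// Scoped object: "before" logic in the constructor, "after" logic in the destructor.
class Call_proxy {
public:
    explicit Call_proxy(Widget* w) : w_(w) { std::cout << "prefix\n"; }
    ~Call_proxy() { std::cout << "suffix\n"; }   // must not throw
    Widget* operator->() const { return w_; }
private:
    Widget* w_;
};

// The wrapper's operator-> returns the proxy by value; the temporary proxy
// lives until the end of the full expression, so prefix/suffix bracket the call.
class Wrap {
public:
    explicit Wrap(Widget& w) : w_(&w) {}
    Call_proxy operator->() const { return Call_proxy(w_); }
private:
    Widget* w_;
};

int main() {
    Widget w;
    Wrap ww(w);
    ww->work();   // prints "prefix", "work", "suffix" in that order
}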
The problem is that destructors should not throw. So you have to wrap the destructor in a try { /* ... */ } catch(...) { /*empty*/ } block. This severely limits the ability to handle errors in the "after" code.
Should scoped objects only be used to cleanup resources or can I use it for more than that? Where do I draw the line?
If you pedantically consider the definition of RAII, anything you do using scoping rules and destructor invocation that doesn't involve resource deallocation simply isn't RAII.
But, who cares? Maybe what you're really trying to ask is,
I want X to happen every time I leave function Y. Is it abusive to use the same scoping rules and destructor invocation that RAII uses in C++ if X isn't resource deallocation?
I say, no. Scratch that, I say heck no. In fact, from a code clarity point of view, it might be better to use destructor calls to execute a block of code if you have multiple return points or possibly exceptions. I would document the fact that your object is doing something non-obvious on destruction, but this can be a simple comment at the point of instantiation.
Where do you draw the line? I think the KISS principle can guide you here. You could probably write your entire program in the body of a destructor, but that would be abusive. Your Spidey Sense will tell you that is a Bad Idea, anyway. Keep your code as simple as possible, but not simpler. If the most natural way to express certain functionality is in the body of a destructor, then express it in the body of a destructor.
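As an aside, the "X every time I leave Y" pattern from the quoted question can be this small; the OnExit name and the logging are just illustrative:

#include <iostream>
#include <stdexcept>

// Runs a fixed action whenever the enclosing scope is left, whether by a
// return, by falling off the end, or by an exception.
class OnExit {
public:
    explicit OnExit(const char* msg) : msg_(msg) {}
    ~OnExit() { std::cout << msg_ << '\n'; }   // keep this non-throwing
private:
    const char* msg_;
};

int process(int value) {
    OnExit trace("leaving process()");
    if (value < 0)
        return -1;                               // trace still fires
    if (value == 0)
        throw std::runtime_error("zero");        // trace fires during unwinding
    return value * 2;                            // and here too
}

int main() {
    std::cout << process(21) << '\n';
}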
You want a scenario where, guaranteed, the suffix is always done. That sounds exactly like the job of RAII to me. However, I would not necessarily write it that way; I'd rather use a method chain of templated member functions.
I think with C++11 you can semi-safely allow the suffix() call to throw. The strict rule isn't "never throw from a destructor", although that's good advice; instead the rule is:
never throw an exception from a destructor while processing another exception
In the destructor you can check std::uncaught_exception() (std::current_exception() only reports an exception that is already being handled inside a catch block), which I think covers the "while processing another exception" part of the destructor+exception rule. With this you could do:
// noexcept(false) is needed in C++11, where destructors are implicitly
// noexcept; without it an escaping exception would call std::terminate anyway.
~Call_proxy() noexcept(false) {
    if (std::uncaught_exception()) {
        // Stack unwinding is already in progress: swallow anything suffix() throws.
        try {
            suffix();
        }
        catch (...) {
            // Not good, but not fatal perhaps?
            // Just don't rethrow and you're ok.
        }
    }
    else {
        suffix();   // no exception in flight, so letting this one propagate is legal
    }
}
I'm not sure that's actually a good idea in practice though; if suffix() throws while another exception is already in flight, you still have a hard problem to deal with.
As for the "is it abuse" I don't think it's abusive anymore than metaprogramming or writing a?b:c instead of a full blown if statement if it's the right tool for the job! It's not subverting any language rules, simply exploiting them, within the letter of the law. The real issue is the predictability of the behaviour to readers unfamiliar with the code and the long term maintainability, but that's an issue for all designs.
Related
Suppose you have a C++ function that only makes calls to C functions, like
#include <windows.h>
#include <shellapi.h>   // SHEmptyRecycleBinW and the SHERB_* flags

int ClearTheBin()
{
    int result = SHEmptyRecycleBinW(
        nullptr,
        nullptr,
        SHERB_NOCONFIRMATION |
        SHERB_NOPROGRESSUI |
        SHERB_NOSOUND);

    if (SUCCEEDED(result) ||
        result == E_UNEXPECTED) // Already empty
    {
        return 0;
    }

    return result;
}
There are obviously a zillion different things that could go wrong with the call to the C function, but since C lacks exceptions, such errors are reported through the return code. My question is: should such functions be declared noexcept? Even if the function can't throw, it may give the reader the false impression that "nothing can go wrong with this function and I can assume it's 100% reliable," because that's usually what noexcept functions imply.
What are your thoughts on the matter?
"Nothing can go wrong with this function and I can assume it's 100%
reliable"
I wouldn't really assume that so quickly. Typically if I see a function marked as noexcept, I tend to look for alternative ways it could give me errors: a boolean state, some kind of global error-retrieval function, a success/fail return value, something like that. Only lacking that, and perhaps in the face of a function that causes no side effects, might I make this assumption.
In your case, you've got the whole error-code thing going for the function, so one glance at the interface documentation should tell me that things can go wrong; yet the noexcept suggests to me that these exceptional cases won't (or at least shouldn't) be reported to me, the caller, in the form of an exception.
Applying noexcept
It seems generally on the safe side to be reluctant to use noexcept. A single introduction of, say, std::string into that function and now it can throw, except that the noexcept will turn the throw into a termination rather than something we can catch. Guaranteeing at the interface/design level that a function will never throw is hard to ensure in anything that actually mixes in C++ code, especially in a team setting where any random colleague might introduce (maybe on a crunch day) into this function some random C++ call or object that can throw.
Impossible to Throw
In some cases it's actually easy to make this guarantee. One example is a noexcept function which has its own try/catch block, where everything the function does is contained within. If an exception is caught, translate it into an error code of some sort and return that. In those cases, the try/catch guarantees that the function will swallow the exception and not leak it to the outside world. It becomes easy to apply noexcept here because the function can't throw.
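For instance, something along these lines (a made-up function, not your ClearTheBin) is easy to mark noexcept because every exception is translated inside the function:

#include <new>
#include <string>
#include <vector>

// Everything that can throw is wrapped, so the noexcept promise is easy to keep.
int load_items(const std::string& path, std::vector<std::string>& out) noexcept
{
    try {
        out.clear();
        out.push_back(path);      // may throw std::bad_alloc
        return 0;                 // success
    }
    catch (const std::bad_alloc&) {
        return -2;                // out of memory
    }
    catch (...) {
        return -1;                // any other failure
    }
}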
Can't Possibly Throw Correctly
Another example of where noexcept doesn't require too much thought is in the case of a function that should be considered broken if it threw. An example is an API function which is called across module boundaries for an SDK designed to allow plugins to be built with any compiler. Since we shouldn't throw across module boundaries, and especially in this scenario, throwing would already be UB and shouldn't be done. In that case, noexcept can be easy to apply because the correct behavior of the function, by design, should never throw.
If in doubt, leave it out.
In your case, if in doubt, I'd suggest to leave it out. You can go a lot more wrong by using noexcept and inadvertently throwing out of the function and terminating the program. An exception is if you can make a hard guarantee that this ClearTheBin function is always going to be using only C, no operator new (or at least only nothrow versions), no dynamic_casts, no standard lib, etc. Omission seems like erring on the safe side, unless you can make this kind of hard interface-level guarantee for both present and future.
noexcept was never intended to mean that "nothing can go wrong". noexcept simply means that the function does not throw C++ exceptions. It does not in any way imply that the function is somehow immune to other kinds of problems, like undefined behavior.
So, yes, if all your function does is calls C functions, then from the formal point of view declaring it noexcept is a perfectly reasonable thing to do.
If you personally want to develop your own convention, within which noexcept means something more, like a "100% reliable function" (assuming such a thing even exists), then you are free to do so. But that becomes a matter of your own convention, not a matter of language specification.
As you might know, C++11 has a noexcept keyword. Now the ugly part about it is this:
Note that a noexcept specification on a function is not a compile-time check; it is merely a method for a programmer to inform the compiler whether or not a function should throw exceptions.
http://en.cppreference.com/w/cpp/language/noexcept_spec
So is this a design failure on the committee's part, or did they just leave it as an exercise for the compiler writers :) in the sense that decent compilers will enforce it, while bad ones can still be compliant?
BTW, if you ask why there isn't a third option (aka "can't be done"): the reason is that I can easily think of a (slow) way to check whether a function can throw or not. The problem is of course that if you limit the input to 5 and 7 (aka "I promise the file won't contain anything besides 5 and 7") and the function only throws when you give it 33, such a check would still flag it as throwing; but that is not a realistic problem IMHO.
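To make the complaint concrete, here is a tiny made-up example; it compiles cleanly, and the mismatch only shows up at run time as a call to std::terminate:

#include <stdexcept>

void may_throw(int x) {
    if (x == 33)
        throw std::runtime_error("33 is not allowed");
}

// The compiler accepts this without complaint: the noexcept specification
// is not checked against the body at compile time.
void wrapper(int x) noexcept {
    may_throw(x);   // if this throws, std::terminate() is called
}

int main() {
    wrapper(5);     // fine
    wrapper(33);    // terminates the program at run time
}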
The committee pretty clearly considered the possibility that code that (attempted to) throw an exception not allowed by an exception specification would be considered ill-formed, and rejected that idea. According to §15.4/11:
An implementation shall not reject an expression merely because when executed it throws or might throw an exception that the containing function does not allow. [ Example:
extern void f() throw(X, Y);
void g() throw(X) {
f(); // OK
}
the call to f is well-formed even though when called, f might throw exception Y that g does not allow. —end example ]
Regardless of what prompted the decision, or what else it may have been, it seems pretty clear that this was not a result of accident or oversight.
As for why this decision was made, at least some of it goes back to interaction with other new features of C++11, such as move semantics.
Move semantics can make exception safety (especially the strong guarantee) much harder to enforce/provide. When you do copying, if something goes wrong, it's pretty easy to "roll back" the transaction -- destroy any copies you've made, release the memory, and the original remains intact. Only if/when the copy succeeds, you destroy the original.
With move semantics, this is harder -- if you get an exception in the middle of moving things, anything you've already moved needs to be moved back to where it was to restore the original to order -- but if the move constructor or move assignment operator can throw, you could get another exception in the process of trying to move things back to try to restore the original object.
Combine this with the fact that C++11 can/does generate move constructors and move assignment operators automatically for some types (though there is a long list of restrictions). These don't necessarily guarantee against throwing an exception. If you're explicitly writing a move constructor, you almost always want to ensure against it throwing, and that's usually even pretty easy to do (since you're normally "stealing" content, you're typically just copying a few pointers -- easy to do without exceptions). It can get a lot harder in a hurry for templates though, even for simple ones like std::pair. A pair of something that can be moved with something that needs to be copied becomes difficult to handle well.
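This is also why, for example, std::vector's reallocation only moves elements whose move constructor is declared noexcept and copies them otherwise. A small illustration of that mechanism using std::move_if_noexcept (my own example, not from the standard text):

#include <iostream>
#include <utility>

struct SafeMove {
    SafeMove() = default;
    SafeMove(SafeMove&&) noexcept { std::cout << "moved\n"; }
    SafeMove(const SafeMove&)     { std::cout << "copied\n"; }
};

struct RiskyMove {
    RiskyMove() = default;
    RiskyMove(RiskyMove&&)        { std::cout << "moved\n"; }   // not noexcept
    RiskyMove(const RiskyMove&)   { std::cout << "copied\n"; }
};

int main() {
    SafeMove  s;
    RiskyMove r;
    SafeMove  s2(std::move_if_noexcept(s));   // prints "moved"
    RiskyMove r2(std::move_if_noexcept(r));   // prints "copied" -- preserves the strong guarantee
}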
That meant, if they'd decided to make noexcept (and/or throw()) enforced at compile time, some unknown (but probably pretty large) amount of code would have been completely broken -- code that had been working fine for years suddenly wouldn't even compile with the new compiler.
Along with this was the fact that, although they're now deprecated, dynamic exception specifications remain in the language, so they were going to end up enforcing at least some exception specifications at run time anyway.
So, their choices were:
Break a lot of existing code
Restrict move semantics so they'd apply to far less code
Continue (as in C++03) to enforce exception specifications at run time.
I doubt anybody liked any of these choices, but the third apparently seemed the least bad.
One reason is simply that compile-time enforcement of exception specifications (of any flavor) is a pain in the ass. It means that if you add debugging code you may have to rewrite an entire hierarchy of exception specifications, even if the code you added won't throw exceptions. And when you're finished debugging you have to rewrite them again. If you like this kind of busywork you should be programming in Java.
The problem with compile-time checking: it's not really possible in any useful way.
See the next example:
void foo(std::vector<int>& v) noexcept
{
    if (!v.empty())
        ++v.at(0);
}
Can this code throw?
Clearly not. Can we check automatically? Not really.
Java's way of doing things like this is to put the body in a try-catch block, but I don't think it is better than what we have now...
As I understand things (admittedly somewhat fuzzy), the entire idea of throw specifications was found to be a nightmare when it actually came time to try to use it in a useful way.
Functions that don't specify what they do or don't throw must be assumed to potentially throw anything at all. So if the compiler were to require that you neither throw nor call anything that might throw outside the specification you've provided, and actually enforced such a thing, your code could call almost nothing whatsoever; no library in existence would be of any use to you or anyone else trying to make use of throw specifications.
And since it is impossible for a compiler to tell the difference between "this function may throw an X" and "this function may throw an X, but the caller only ever calls it in such a way that it will never throw anything at all", one would forever be hamstrung by this language "feature."
So... I believe that the only possibly useful thing to come of it was the idea of saying nothrow -- which indicates that it is safe to call from dtors and move and swap and so on -- but it's a notation that, like const, is more about giving your users an API contract than about making the compiler responsible for telling whether you violate your contract or not (as with most things in C/C++, the intelligence is assumed to be on the part of the programmer, not the nanny-compiler).
Today I learned that swap is not allowed to throw an exception in C++.
I also know that the following cannot throw exceptions either:
Destructors
Reading/writing primitive types
Are there any others?
Or perhaps, is there some sort of list that mentions everything that may not throw?
(Something more succinct than the standard itself, obviously.)
There is a great difference between cannot and should not. Operations on primitive types cannot throw, and neither can many functions and member functions, including many operations in the standard library and/or many other libraries.
Now on the should not, you can include destructors and swap. Depending on how you implement them, they can actually throw, but you should avoid having destructors that throw, and in the case of swap, providing a swap operation with the no-throw guarantee is the simplest way of achieving the strong exception guarantee in your class, as you can copy aside, perform the operation on the copy, and then swap with the original.
But note that the language allows both destructors and swap to throw. swap can throw: in the simplest case, if you do not overload it, std::swap performs a copy (or, since C++11, move) construction, an assignment and a destruction, three operations that can each throw an exception (depending on your types).
The rules for destructors have changed in C++11: a destructor without an exception specification now has an implicit noexcept specification, which in turn means that if it throws an exception the runtime will call std::terminate; but you can change the exception specification to noexcept(false) and then the destructor is allowed to throw.
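A small illustration of that C++11 rule (my own example):

#include <iostream>
#include <stdexcept>

struct Implicit {
    ~Implicit() {               // implicitly noexcept in C++11
        // throw std::runtime_error("boom");  // would call std::terminate
    }
};

struct OptOut {
    ~OptOut() noexcept(false) { // explicitly allowed to throw
        throw std::runtime_error("boom");
    }
};

int main() {
    try {
        OptOut o;
    }                           // the exception escapes the destructor here
    catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}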
At the end of the day, you cannot provide exception guarantees without understanding your code base, because pretty much every function in C++ is allowed to throw.
So this doesn't perfectly answer your question -- I searched for a bit out of my own curiosity -- but I believe that nothrow-guaranteed functions/operators mostly originate from C-style functions available in C++, plus a few functions which are simple enough to give such a guarantee. In general it's not expected for C++ programs to provide this guarantee ( When should std::nothrow be used? ) and it's not even clear whether such a guarantee buys you anything useful in code that makes regular use of exceptions. I could not find a comprehensive list of ALL C++ functions that are nothrow functions (please correct me if I missed a standard dictating this) other than listings of swap, destructors, and primitive manipulations. Also, it seems fairly rare for a function that isn't fully defined in a library to require the user to implement a nothrow function.
So perhaps to get to the root of your question, you should mostly assume that anything can throw in C++ and take it as a simplification when you find something that absolutely cannot throw an exception. Writing exception safe code is much like writing bug free code -- it's harder than it sounds and honestly is oftentimes not worth the effort. Additionally there are many levels between exception unsafe code and strong nothrow functions. See this awesome answer about writing exception safe code as verification for these points: Do you (really) write exception safe code?. There's more information about exception safety at the boost site http://www.boost.org/community/exception_safety.html.
For code development, I've heard mixed opinions from professors and coding experts on what should and shouldn't throw an exception and what guarantees such code should provide. But a fairly consistent assertion is that code which can easily throw an exception should be very clearly documented as such, or indicate the throwing capability in the function definition (not always applicable to C++ alone). Functions that can possibly throw an exception are much more common than functions that never throw, and knowing which exceptions can occur is very important. But guaranteeing that a function which divides one input by another will never produce a divide-by-zero error can be quite unnecessary/unwanted. Thus nothrow can be reassuring, but not necessary or always useful for safe code execution.
In response to comments on the original question:
People will sometimes state that exception-throwing constructors are evil, whether used in containers or in general, and that two-step initialization and is_valid checks should always be used. However, if a constructor fails, the object is often unfixable or in a uniquely bad state; otherwise the constructor would have resolved the problem in the first place. Checking whether the object is valid is as difficult as putting a try/catch block around initialization code for objects you know have a decent chance of throwing an exception. So which is correct? Usually whichever was used in the rest of the code base, or your personal preference. I prefer exception-based code, as it gives me more flexibility without a ton of baggage code checking every object for validity (others might disagree).
Where does this leave your original question and the extensions listed in the comments? Well, from the sources provided and my own experience, worrying about nothrow functions from an "exception safety" perspective in C++ is often the wrong approach to code development. Instead, keep in mind the functions you know might reasonably throw an exception and handle those cases appropriately. This usually involves I/O operations where you don't have full control over what could trigger the exception. If you get an exception you never expected or didn't think possible, then you have a bug in your logic (or in your assumptions about the function's use) and you'll need to fix the source code to adapt. Trying to make guarantees about code that is non-trivial (and sometimes even then) is like saying a server will never crash -- it might be very stable, but you'll probably never be 100% sure.
If you want the in-exhaustive-detail answer to this question go to http://exceptionsafecode.com/ and either watch the 85 min video that covers just C++03 or the three hour (in two parts) video that covers both C++03 and C++11.
When writing Exception-Safe code, we assume all functions throw, unless we know different.
In short,
*) Fundamental types (including arrays of and pointers to) can be assigned to and from and used with operations that don't involve user defined operators (math using only fundamental integers and floating point values for example). Note that division by zero (or any expression whose result is not mathematically defined) is undefined behavior and may or may not throw depending on the implementation.
*) Destructors: There is nothing conceptually wrong with destructors that emit exceptions, nor does the standard prohibit them. However, good coding guidelines usually prohibit them because the language doesn't support this scenario very well. (For example, if destructors of objects in STL containers throw, the behavior is undefined.)
*) Using swap() is an important technique for providing the strong exception guarantee, but only if swap() is non-throwing. In general, we can't assume that swap() is non-throwing, but the video covers how to create a non-throwing swap for your User-Defined Types in both C++03 and C++11.
*) C++11 introduces move semantics and move operations. In C++11, swap() is implemented using move semantics and the situation with move operations is similar to the situation with swap(). We cannot assume that move operations do not throw, but we can generally create non-throwing move operations for the User-Defined Types that we create (and they are provided for standard library types). If we provide non-throwing move operations in C++11, we get a non-throwing swap() for free, but we may choose to implement our own swap() anyway for performance purposes. Again, this is covered in detail in the video; a minimal sketch of the idea also follows this list.
*) C++11 introduces the noexcept operator and function decorator. (The "throw ()" specification from Classic C++ is now deprecated.) It also provides for function introspection so that code can be written to handle situations differently depending on whether or not non-throwing operations exist.
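A minimal sketch of that idea (my own code, not the video's): a non-throwing member swap plus noexcept move operations for a user-defined type.

#include <string>
#include <vector>

class Widget {
public:
    Widget() = default;

    // Non-throwing swap: only exchanges the members' internal handles,
    // never allocates.
    void swap(Widget& other) noexcept {
        name_.swap(other.name_);   // std::string::swap does not throw
        data_.swap(other.data_);   // std::vector::swap does not throw (default allocator)
    }

    // Non-throwing move operations built on top of swap.
    Widget(Widget&& other) noexcept { swap(other); }
    Widget& operator=(Widget&& other) noexcept {
        swap(other);
        return *this;
    }

    Widget(const Widget&) = default;
    Widget& operator=(const Widget&) = default;

private:
    std::string name_;
    std::vector<int> data_;
};

// Namespace-scope swap so that unqualified swap(a, b) finds it via ADL.
inline void swap(Widget& a, Widget& b) noexcept { a.swap(b); }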
In addition to the videos, the exceptionsafecode.com website has a bibliography of books and articles about exceptions which needs to be updated for C++11.
The strong exception safety guarantee says that an operation won't change any program state if an exception occurs. An elegant way of implementing exception-safe copy-assignment is the copy-and-swap idiom.
My questions are:
Would it be overkill to use copy-and-swap for every mutating operation of a class that mutates non-primitive types?
Is performance really a fair trade for strong exception-safety?
For example:
class A
{
public:
    void increment()
    {
        // Copy
        A tmp(*this);

        // Perform throwing operations on the copy
        ++(tmp.x);
        tmp.x.crazyStuff();

        // Now that the operation is done sans exceptions,
        // change program state
        swap(tmp);
    }

    int setSomeProperty(int q)
    {
        A tmp(*this);
        tmp.y.setProperty("q", q);
        int rc = tmp.x.otherCrazyStuff();
        swap(tmp);
        return rc;
    }

    //
    // And many others similarly
    //

    void swap(A &a)   // must take a non-const reference; ideally non-throwing
    {
        // Non-throwing swap
    }

private:
    SomeClass x;
    OtherClass y;
};
You should always aim for the basic exception guarantee: make sure that in the event of an exception, all resources are released correctly and the object is in a valid state (which can be undefined, but valid).
The strong exception guarantee (ie. "transactions") is something you should implement when you think it makes sense: you don't always need transactional behavior.
If it is easy to achieve transactional operations (eg. via copy & swap), then do it. But sometimes it is not, or it incurs a big performance impact, even for fundamental things like assignment operators. I remember implementing something like boost::variant where I could not always provide the strong guarantee in the copy assignment.
One tremendous difficulty you'll encounter is with move semantics. You do want transactions when moving, because otherwise you lose the moved object. However, you cannot always provide the strong guarantee: think about std::pair<movable_nothrow, copyable> (and see the comments). This is where you have to become a noexcept virtuoso, and use an uncomfortable amount of metaprogramming. C++ is difficult to master precisely because of exception safety.
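As a small taste of that noexcept metaprogramming, a generic swap can compute its own exception specification from the types involved (my own sketch):

#include <type_traits>
#include <utility>

// Propagates noexcept-ness: this swap is noexcept exactly when T's move
// construction and move assignment are both non-throwing.
template <typename T>
void my_swap(T& a, T& b)
    noexcept(std::is_nothrow_move_constructible<T>::value &&
             std::is_nothrow_move_assignable<T>::value)
{
    T tmp(std::move(a));
    a = std::move(b);
    b = std::move(tmp);
}

static_assert(noexcept(my_swap(std::declval<int&>(), std::declval<int&>())),
              "swapping ints never throws");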
As all matters of engineering, it is about balance.
Certainly, const-ness/immutability and strong guarantees increase confidence in one's code (especially accompanied with tests). They also help trim down the space of possible explanations for a bug.
However, they might have an impact on performance.
Like all performance issues, I would say: profile and get rid of the hot spots. Copy And Swap is certainly not the only way to achieve transactional semantics (it is just the easiest), so profiling will tell you where you should absolutely not use it, and you will have to find alternatives.
It depends on what environment your application is going to run in. If you just run it on your own machine (one end of the spectrum), it might not be worth to be too strict on exception safety. If you are writing a program e.g. for medical devices (the other end), you do not want unintentional side-effects left behind when an exception occurs. Anything in-between is dependent on the level of tolerance for errors and available resources for development (time, money, etc.)
Yes, the problem you are facing is that this idiom is very hard to scale. None of the other answers mention it, but another very interesting idiom, invented by Alexandrescu, is the ScopeGuard. It helps enhance the economy of the code and makes huge improvements in the readability of functions that need to conform to strong exception-safety guarantees.
The idea of a scope guard is a stack instance that lets you attach a rollback function object to each resource acquisition. When the scope guard is destroyed (for instance, by an exception), the rollback is invoked. You need to explicitly call commit() in the normal flow to avoid the rollback being invoked at scope exit.
Check this recent question from me, which is related to designing a safe scope guard using C++11 features.
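For completeness, a minimal C++11-flavoured scope guard along those lines (a sketch, not Alexandrescu's original code):

#include <functional>
#include <utility>
#include <vector>

class ScopeGuard {
public:
    explicit ScopeGuard(std::function<void()> rollback)
        : rollback_(std::move(rollback)), active_(true) {}

    ~ScopeGuard() {
        if (active_)
            rollback_();                // undo on scope exit (e.g. an exception)
    }

    void commit() { active_ = false; }  // normal flow: cancel the rollback

    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;

private:
    std::function<void()> rollback_;
    bool active_;
};

void transfer(std::vector<int>& from, std::vector<int>& to, int value) {
    to.push_back(value);                       // first acquisition
    ScopeGuard undo([&] { to.pop_back(); });   // attach its rollback

    from.push_back(-value);                    // may throw; rollback would run

    undo.commit();                             // success: keep both changes
}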
As mentioned in this answer, simply calling the destructor a second time is already undefined behavior per 12.4/14 (which refers to 3.8).
For example:
class Class {
public:
    ~Class() {}
};

// somewhere in code:
{
    Class* object = new Class();
    object->~Class();
    delete object; // UB because at this point the destructor call is attempted again
}
In this example the class is designed in such a way that the destructor could be called multiple times - nothing like a double deletion can happen. The memory is still allocated at the point where delete is called - the first destructor call doesn't call ::operator delete() to release the memory.
For example, in Visual C++ 9 the above code appears to work. Even the C++ definition of UB doesn't directly prohibit things qualified as UB from working. So for the code above to break, some implementation and/or platform specifics are required.
Why exactly would the above code break and under what conditions?
I think your question aims at the rationale behind the standard. Think about it the other way around:
Defining the behavior of calling a destructor twice creates work, possibly a lot of work.
Your example only shows that in some trivial cases it wouldn't be a problem to call the destructor twice. That's true but not very interesting.
You did not give a convincing use-case (and I doubt you can) where calling the destructor twice is in any way a good idea / makes code easier / makes the language more powerful / cleans up semantics / or anything else.
So why again should this not cause undefined behavior?
The reason for the formulation in the standard is most probably that everything else would be vastly more complicated: it’d have to define when exactly double-deleting is possible (or the other way round) – i.e. either with a trivial destructor or with a destructor whose side-effect can be discarded.
On the other hand, there’s no benefit for this behaviour. In practice, you cannot profit from it because you can’t know in general whether a class destructor fits the above criteria or not. No general-purpose code could rely on this. It would be very easy to introduce bugs that way. And finally, how does it help? It just makes it possible to write sloppy code that doesn’t track life-time of its objects – under-specified code, in other words. Why should the standard support this?
Will existing compilers/runtimes break your particular code? Probably not – unless they have special run-time checks to prevent illegal access (to prevent what looks like malicious code, or simply leak protection).
The object no longer exists after you call the destructor.
So if you call it again, you're calling a method on an object that doesn't exist.
Why would this ever be defined behavior? The compiler may choose to zero out the memory of an object which has been destructed, for debugging/security/some reason, or recycle its memory with another object as an optimisation, or whatever. The implementation can do as it pleases. Calling the destructor again is essentially calling a method on arbitrary raw memory - a Bad Idea (tm).
When you use the facilities of C++ to create and destroy your objects, you agree to use its object model, however it's implemented.
Some implementations may be more sensitive than others. For example, an interactive interpreted environment or a debugger might try harder to be introspective. That might even include specifically alerting you to double destruction.
Some objects are more complicated than others. For example, virtual destructors with virtual base classes can be a bit hairy. The dynamic type of an object changes over the execution of a sequence of virtual destructors, if I recall correctly. That could easily lead to invalid state at the end.
It's easy enough to declare properly named functions to use instead of abusing the constructor and destructor. Object-oriented straight C is still possible in C++, and may be the right tool for some job… in any case, the destructor isn't the right construct for every destruction-related task.
Destructors are not regular functions. Calling one doesn't call one function, it calls many functions. It's the magic of destructors. While you have provided a trivial destructor with the sole intent of making it hard to show how it might break, you have failed to demonstrate what the other functions that get called do. And neither does the standard. It's in those functions that things can potentially fall apart.
As a trivial example, let's say the compiler inserts code to track object lifetimes for debugging purposes. The constructor [which is also a magic function that does all sorts of things you didn't ask it to] stores some data somewhere that says "Here I am." Before the destructor is called, it changes that data to say "There I go". After the destructor is called, it gets rid of the information it used to find that data. So the next time you call the destructor, you end up with an access violation.
You could probably also come up with examples that involve virtual tables, but your sample code didn't include any virtual functions so that would be cheating.
The following class will crash on Windows on my machine if you call the destructor twice:
class Class {
public:
    Class()
    {
        x = new int;
    }

    ~Class()
    {
        delete x;
        x = (int*)0xbaadf00d;
    }

    int* x;
};
I can imagine an implementation where it would crash even with a trivial destructor. For instance, such an implementation could remove destructed objects from physical memory, so that any access to them leads to some hardware fault. It looks like Visual C++ is not that sort of implementation, but who knows.
Standard 12.4/14
Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8).
I think this section refers to invoking the destructor via delete. In other words: The gist of this paragraph is that "deleting an object twice is undefined behavior". So that's why your code example works fine.
Nevertheless, this question is rather academic. Destructors are meant to be invoked via delete (apart from the exception of objects allocated via placement-new as sharptooth correctly observed). If you want to share code between a destructor and second function, simply extract the code to a separate function and call that from your destructor.
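In other words, something along these lines (illustrative only):

#include <cstdio>

class Connection {
public:
    explicit Connection(const char* path) : handle_(std::fopen(path, "r")) {}
    ~Connection() { close(); }        // the destructor just delegates

    // Named cleanup function; safe to call explicitly, any number of times.
    void close() {
        if (handle_) {
            std::fclose(handle_);
            handle_ = nullptr;
        }
    }

private:
    std::FILE* handle_;
};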
Since what you're really asking for is a plausible implementation in which your code would fail, suppose that your implementation provides a helpful debugging mode, in which it tracks all memory allocations and all calls to constructors and destructors. So after the explicit destructor call, it sets a flag to say that the object has been destructed. delete checks this flag and halts the program when it detects the evidence of a bug in your code.
To make your code "work" as you intended, this debugging implementation would have to special-case your do-nothing destructor, and skip setting that flag. That is, it would have to assume that you're deliberately destroying twice because (you think) the destructor does nothing, as opposed to assuming that you're accidentally destroying twice, but failed to spot the bug because the destructor happens to do nothing. Either you're careless or you're a rebel, and there's more mileage in debug implementations helping out people who are careless than there is in pandering to rebels ;-)
One important example of an implementation which could break:
A conforming C++ implementation can support Garbage Collection. This has been a longstanding design goal. A GC may assume that an object can be GC'ed immediately when its dtor is run. Thus each dtor call will update its internal GC bookkeeping. The second time the dtor is called for the same pointer, the GC data structures might very well become corrupted.
By definition, the destructor 'destroys' the object, and destroying an object twice makes no sense.
Your example works, but it's hard for that to hold in general.
I guess it's been classified as undefined because most double deletes are dangerous and the standards committee didn't want to add an exception to the standard for the relatively few cases where they don't have to be.
As for where your code could break: you might find it breaks in debug builds on some compilers; many compilers treat UB as "do whatever doesn't cost performance for well-defined behaviour" in release mode and "insert checks to detect bad behaviour" in debug builds.
Basically, as already pointed out, calling the destructor a second time will fail for any class destructor that performs work.
It's undefined behavior because the standard made it clear what a destructor is used for, and didn't decide what should happen if you use it incorrectly. Undefined behavior doesn't necessarily mean "crashy smashy," it just means the standard didn't define it so it's left up to the implementation.
While I'm not too fluent in C++, my gut tells me that the implementation is welcome to either treat the destructor as just another member function, or to actually destroy the object when the destructor is called. So it might break in some implementations but maybe it won't in others. Who knows, it's undefined (look out for demons flying out your nose if you try).
It is undefined because, if it weren't, every implementation would have to track via some metadata whether an object is still alive or not. You would have to pay that cost for every single object, which goes against basic C++ design rules.
The reason is because your class might be for example a reference counted smart pointer. So the destructor decrements the reference counter. Once that counter hits 0 the actual object should be cleaned up.
But if you call the destructor twice then the count will be messed up.
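Concretely, something like this simplified reference-counted handle (a sketch, not std::shared_ptr) breaks as soon as its destructor runs twice:

struct Counted {
    int* refs;      // shared reference count
    int* payload;   // shared resource

    explicit Counted(int value)
        : refs(new int(1)), payload(new int(value)) {}

    Counted(const Counted& other)
        : refs(other.refs), payload(other.payload) { ++*refs; }

    // Assignment omitted for brevity.

    ~Counted() {
        // A second destructor call decrements a count that may already have
        // been freed, and can delete the payload twice.
        if (--*refs == 0) {
            delete refs;
            delete payload;
        }
    }
};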
Same idea for other situations too. Maybe the destructor writes 0s to a piece of memory and then deallocates it (so you don't accidentally leave a user's password in memory). If you try to write to that memory again - after it has been deallocated - you will get an access violation.
It just makes sense for objects to be constructed once and destructed once.
The reason is that, in the absence of that rule, your programs would become less strict. Being more strict -- even when it's not enforced at compile time -- is good, because, in return, you gain more predictability of how the program will behave. This is especially important when the source code of classes is not under your control.
A lot of concepts: RAII, smart pointers, and just generic allocation/freeing of memory rely on this rule. The amount of times the destructor will be called (one) is essential for them. So the documentation for such things usually promises: "Use our classes according to C++ language rules, and they will work correctly!"
If there weren't such a rule, it would read "Use our classes according to C++ language rules, and also don't call their destructors twice; then they will work correctly." A lot of specifications would sound that way.
The concept is just too important for the language in order to skip it in the standard document.
This is the reason. Not anything related to binary internals (which are described in Potatoswatter's answer).