Explanation required regarding double destruction of exception objects - C++

In his insightful paper, Error and Exception Handling, Dave Abrahams says:
Make your exception class immune to double-destruction if possible. Unfortunately, several popular compilers occasionally cause exception objects to be destroyed twice. If you can arrange for that to be harmless (e.g. by zeroing deleted pointers) your code will be more robust.
I am not able to understand this particular guideline. Can someone:
provide a code example of this double destruction scenario, and
explain the best way to implement a custom exception class to avoid this?

Like Tony said, this guideline was meant as a protection against compiler bugs. It dates back to 2001 or so, when exception support was probably still a bit unstable. Since then, I think/hope most compilers have fixed this bug, so the guideline might not be very relevant anymore.
FWIW, this guideline has been eliminated from the CERT coding practices. In the discussion on that page, an interesting point is raised: destroying an object twice is UB anyway, so whatever you do to handle that in your classes will never make your program fully predictable.
However, if you really want your code to be portable across compilers (including old versions), you should probably take all these little glitches into account. For instance, Boost goes to a lot of trouble to work around compiler bugs; they could simply write standard-compliant code and defer the responsibility for failures to implementations, but that would hinder the adoption of their libraries.
Whether you need to put the same care when writing your code depends on your requirements, and basically boils down to this question: is supporting dozens of compilers really worth the amount of work that implies?
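To make the guideline concrete, here is a minimal sketch of what "immune to double-destruction" might look like; this is my own illustration, not code from the paper. The destructor nulls the pointer it deletes, so a second, buggy destruction becomes a harmless no-op:

#include <exception>
#include <string>

class ParseError : public std::exception {
public:
    explicit ParseError(const char* msg) : detail_(new std::string(msg)) {}
    ParseError(const ParseError& other)
        : detail_(other.detail_ ? new std::string(*other.detail_) : nullptr) {}
    ParseError& operator=(const ParseError&) = delete;
    ~ParseError() noexcept override {
        delete detail_;     // first destruction releases the buffer
        detail_ = nullptr;  // a second, buggy destruction then deletes nullptr: a no-op
    }
    const char* what() const noexcept override {
        return detail_ ? detail_->c_str() : "ParseError";
    }
private:
    std::string* detail_;
};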

To quote from the article linked by chrisaycock:
"why destroy twice"? Because of compiler bugs, that's why! This is an
error, compilers should not do this. But they do. I worked on a
project where I got bitten by this using Sun's Studio8 compiler. I
created a ostringstream object in a catch clause and found it got
destructed twice. To fix it I moved it to before the try, then it
worked. This sort of bug does not happen very often. Most of the time
creating objects in the catch clause was ok but it is something to be
aware of.
Regards,
Andrew Marlow

There is no scenario in the Standard where one object may be destructed twice. Any instance where this occurs is either a bug on behalf of the user or, where the object is destructed by the compiler (as an exception object is), a compiler bug. I have never heard of such a bug prior to now in any major compiler, and I see no reason to believe that it will be problematic for anyone writing C++ code in general.

Related

If you're in the "we don't use exceptions" camp, then how do you use the standard library?

Note: I'm not playing the devil's advocate or anything like that here - I'm just genuinely curious since I'm not in this camp myself.
Most types in the standard library have either mutating functions that can throw exceptions (for instance if memory allocation fails) or non-mutating functions that can throw exceptions (for instance out-of-bounds indexed accessors). In addition to that, many other operations can throw exceptions (for instance operator new and dynamic_cast<T&>).
How do you practically deal with this in the context of "we don't use exceptions"?
Are you trying to never call a function that can throw? (I can't see how that'd scale, so I'm very interested to hear how you accomplish this if this is the case)
Are you ok with the standard library throwing and you treat "we don't use exceptions" as "we never throw exceptions from our code and we never catch exceptions from other's code"?
Are you disabling exception handling altogether via compiler switches? If so, how do the exception-throwing parts of the standard library work?
EDIT Your constructors, can they fail, or do you by convention use a 2-step construction with a dedicated init function that can return an error code upon failure (which the constructor can't), or do you do something else?
EDIT Minor clarification 1 week after the inception of the question... Much of the content in comments and questions below focus on the why aspects of exceptions vs "something else". My interest is not in that, but when you choose to do "something else", how do you deal with the standard library parts that do throw exceptions?
I will answer for myself and my corner of the world. I write C++14 (it will be C++17 once compilers have better support) latency-critical financial apps that process gargantuan amounts of money and can't ever go down. The ruleset is:
no exceptions
no rtti
no runtime dispatch
(almost) no inheritance
Memory is pooled and pre-allocated, so there are no malloc calls after initialization. Data structures are either immortal or trivially copyable, so destructors are nearly absent (there are some exceptions, such as scope guards). Basically, we are doing C + type safety + templates + lambdas. Of course, exceptions are disabled via the compiler switch. As for the STL, the good parts of it (i.e. algorithm, numeric, type_traits, iterator, atomic, ...) are all usable. The exception-throwing parts coincide nicely with the runtime-memory-allocating parts and the semi-OO parts, so we get rid of all the cruft in one go: streams, std::string, and all containers except std::array.
Why do this?
Because, like OO, exceptions offer illusory cleanliness by hiding or moving the problem elsewhere, and make the rest of the program harder to diagnose. When you compile without -fno-exceptions, all your clean and nicely behaved functions have to endure the suspicion of being failable. It is much easier to have extensive sanity checking around the perimeter of your codebase than to make every operation failable.
Because exceptions are basically long range GOTOs that have an unspecified destination. You won't use longjmp(), but exceptions are arguably much worse.
Because error codes are superior. You can use [[nodiscard]] to force calling code to check (see the sketch below).
Because exception hierarchies are unnecessary. Most of the time it makes little sense to distinguish what errored, and when it does, it's likely because different errors require different clean-up and it would have been much better to signal explicitly.
Because we have complex invariants to maintain. This means that there is code, however deep down in the bowels, that needs to have transactional guarantees. There are two ways of doing this: either you make your imperative procedures as pure as possible (i.e. make sure you never fail), or you have immutable data structures (i.e. make failure recovery possible). If you have immutable data structures, then of course you can have exceptions, but you won't be using them, because you will be using sum types instead. Functional data structures are slow, though, so the other alternative is to have pure functions and do it in an exception-free language such as C, no-except C++, or Rust. No matter how pretty D looks, as long as it isn't cleansed of GC and exceptions, it's a non-option.
Do you ever test your exceptions like you would an explicit code path? What about exceptions that "can never happen"? Of course you don't, and when you actually hit those exceptions you are screwed.
I have seen some "beautiful" exception-neutral code in C++. That is, it performs optimally, with no edge cases, regardless of whether the code it calls uses exceptions or not. It is really hard to write and, I suspect, tricky to modify if you want to maintain all your exception guarantees. However, I have not seen any "beautiful" code that either throws or catches exceptions; all the code I have seen that interacts with exceptions directly has been universally ugly. The amount of effort that goes into writing exception-neutral code completely dwarfs the amount of effort saved from the crappy code that throws or catches exceptions. "Beautiful" is in quotes because it is not actual beauty: it is usually fossilized, because editing it requires the extra burden of maintaining exception-neutrality. If you don't have unit tests that deliberately and comprehensively misuse exceptions to trigger those edge cases, even "beautiful" exception-neutral code decays into manure.
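To make the error-code point concrete, here is the sketch referred to above; it is a C++17 illustration with made-up names, not code from our codebase:

#include <cstdio>

enum class [[nodiscard]] ErrorCode { ok, timeout, bad_input };

ErrorCode send_order(int quantity) {  // hypothetical operation
    if (quantity <= 0) return ErrorCode::bad_input;
    // ... submit the order ...
    return ErrorCode::ok;
}

int main() {
    send_order(-1);                        // warning: discarded [[nodiscard]] value
    if (send_order(42) != ErrorCode::ok)   // the checked, compiler-approved way
        std::puts("order rejected");
}

With -Werror, the unchecked call simply does not compile.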
In our case, we disable exceptions via the compiler (e.g. -fno-exceptions for GCC).
In the case of GCC, libstdc++ uses a macro called _GLIBCXX_THROW_OR_ABORT, which is defined as
#ifndef _GLIBCXX_THROW_OR_ABORT
# if __cpp_exceptions
# define _GLIBCXX_THROW_OR_ABORT(_EXC) (throw (_EXC))
# else
# define _GLIBCXX_THROW_OR_ABORT(_EXC) (__builtin_abort())
# endif
#endif
(you can find it in libstdc++-v3/include/bits/c++config on latest gcc versions).
Then you just have to deal with the fact that thrown exceptions abort the program. You can still catch the signal and print the stack (there is a good answer on SO that explains this), but you had better avoid this kind of thing happening (at least in release builds).
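As a rough sketch of the catch-the-signal idea (this leans on glibc's backtrace facilities; the handler name is my own):

#include <csignal>
#include <execinfo.h>   // glibc backtrace support
#include <unistd.h>

extern "C" void on_abort(int) {
    void* frames[64];
    int count = backtrace(frames, 64);
    // backtrace_symbols_fd writes straight to a file descriptor,
    // avoiding malloc inside the signal handler
    backtrace_symbols_fd(frames, count, STDERR_FILENO);
    _exit(1);
}

int main() {
    std::signal(SIGABRT, on_abort);
    // ... any _GLIBCXX_THROW_OR_ABORT abort now dumps a stack trace
}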
If you want an example, instead of having something like
try {
    Foo foo = mymap.at("foo");
    // ...
} catch (std::exception& e) {}
you can do
auto it = mymap.find("foo");
if (it != mymap.end()) {
    Foo foo = it->second;
    // ...
}
I also want to point out that, when asking about not using exceptions, there's a more general question about the standard library: are you using the standard library at all when you're in one of the "we don't use exceptions" camps?
The standard library is heavy. In some "we don't use exceptions" camps, like many game development companies, better-suited alternatives to the STL are used, mostly based on EASTL or TTL. These libraries don't use exceptions anyway, and that's because eighth-generation consoles didn't handle them too well (or even at all). For cutting-edge AAA production code, exceptions are too heavy anyway, so it's a win-win scenario in such cases.
In other words, for many programmers, turning exceptions off goes hand in hand with not using the STL at all.
Note I use exceptions... but I have been forced not to.
Are you trying to never call a function that can throw? (I can't see how that'd scale, so I'm very interested to hear how you accomplish this if this is the case)
This would probably be infeasible, at least on a large scale. Many functions can end up throwing; avoiding them entirely would cripple your code base.
Are you ok with the standard library throwing and you treat "we don't use exceptions" as "we never throw exceptions from our code and we never catch exceptions from other's code"?
You pretty much have to be ok with that... If the library code is going to throw an exception and your code is not going to handle it, termination is the default behaviour.
Are you disabling exception handling altogether via compiler switches? If so, how do the exception-throwing parts of the standard library work?
This is possible (back in the day it was sometimes popular for certain project types); compilers do/may support this, but you will need to consult their documentation for what the results would and could be (and which language features are supported under those conditions).
In general, when an exception would be thrown, the program would need to abort or otherwise exit. Some coding standards still require this; the JSF coding standard comes to mind (IIRC).
General strategy for those who "don't use exceptions"
Most functions have a set of preconditions that can be checked before the call is made. Check for those. If they are not met, then don't make the call; fall back to whatever the error handling is in that code. For those functions whose preconditions you can't fully check... there's not much you can do; the program will likely abort.
You could look to avoid libraries that throw exceptions - you asked this in the context of the standard library, so this doesn't quite fit the bill, but it remains an option.
Another possible strategy, and I know this sounds trite: pick a language that doesn't use them. C could do nicely...
...crux of my question (your interaction with the standard library, if any), I'm quite interested in hearing about your constructors. Can they fail, or do you by convention use a 2-step construction with a dedicated init function that can return an error code upon failure (which the constructor can't)? Or what's your strategy there?
If constructors are used, there are generally two approaches used to indicate failure:
Set an internal error code or enum to indicate the failure and what the failure is. This can be interrogated after the object's construction and appropriate action taken.
Don't use a constructor (or at least only construct what cannot fail in the constructor - if anything) and then use an init() method of some sort to do (or complete) the construction. The member method can then return an error if there is some failure.
The use of the init() technique is generally favored as it can be chained and scales better than the internal "error" code.
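A minimal sketch of the init() technique, with illustrative names:

enum class ErrorCode { ok, connect_failed };

int open_socket(const char* address);   // hypothetical helper; returns -1 on failure

class Connection {
public:
    Connection() = default;             // does nothing that can fail

    // Completes construction; reports failure via a code instead of throwing.
    ErrorCode init(const char* address) {
        socket_ = open_socket(address);
        return socket_ >= 0 ? ErrorCode::ok : ErrorCode::connect_failed;
    }

private:
    int socket_ = -1;
};

A caller then writes Connection c; if (c.init("10.0.0.1:9000") != ErrorCode::ok) { /* handle */ }, and several such init() calls chain naturally.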
Again, these are techniques that come from environments where exceptions do not exist (such as C). Using a language such as C++ without exceptions limits its usability and the usefulness of the breadth of the standard library.
Without trying to fully answer the questions you have asked, I will just give Google as an example of a code base which does not use exceptions as a mechanism to deal with errors.
In the Google C++ code base, every function which may fail returns a status object which has methods like ok() to indicate the result of the call.
They have configured GCC to fail the compilation if the developer ignores the returned status object.
Also, from the little open-source code they provide (such as the LevelDB library), it seems they are not using the STL that much anyway, so exception handling becomes rare. As Titus Winters says in his lectures at CppCon, they "respect the standard, but don't idolize it".
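A sketch in that spirit (a hypothetical Status class and WriteRecord function, not Google's actual API):

#include <string>
#include <utility>

class Status {
public:
    static Status Ok() { return Status(true, ""); }
    static Status Error(std::string msg) { return Status(false, std::move(msg)); }
    bool ok() const { return ok_; }
    const std::string& message() const { return msg_; }
private:
    Status(bool ok, std::string msg) : ok_(ok), msg_(std::move(msg)) {}
    bool ok_;
    std::string msg_;
};

[[nodiscard]] Status WriteRecord(const std::string& key);  // hypothetical API

void example() {
    Status s = WriteRecord("user:42");
    if (!s.ok()) {
        // handle s.message(); with warnings-as-errors, silently
        // dropping the returned Status won't compile
    }
}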
I think this is an attitude question. You need to be in the camp of "I don't care if something fails".
This usually results in code for which one needs a debugger (at the customer site) to find out why something suddenly stopped working.
Also, people who do software "engineering" this way potentially do not use very complex code. E.g. one would be unable to write code which relies on the fact that it is only executed if all n resources it relies on have been successfully allocated (while using RAII for these resources).
Thus, such coding would result in either:
an unmanageable amount of code for error handling
an unmanageable amount of code to avoid executing code that relies on successful allocation of some resources
no error handling, and thus a considerably higher amount of support and developer time
Note that I'm talking about modern code that loads customer-provided DLLs on demand and uses child processes. There are many interfaces at which something can fail. I'm not talking about some replacement for grep/more/ls/find.

How do you keep track of exception safety guarantees offered by each function

When writing exception-safe code, it is necessary to consider the exception safety guarantee (none, basic, strong, or no-throw) of all the functions called. Since the compiler offers no help, I was thinking that a function naming convention might be helpful here. Is there any kind of established notational standard indicating the level of exception safety guarantee offered by functions? I was thinking along the lines of something Hungarian-like:
void setFooB(Foo const& s);   // B,  offers basic guarantee
int computeSomethingS();      // S,  offers strong guarantee
int getDataNT() throw();      // NT, offers no-throw guarantee
void allBetsAreOffN();        // N,  offers no guarantee
Edit: I agree with comments that this kind of naming convention is ugly, so allow me to elaborate on my reasons for suggesting it.
Say I refactor some code, and in that process, change the level of exception safety offered by a function. If the guarantee has changed from, say, strong to basic (justified perhaps by improvement in speed), then every function that calls the refactored function must be reconsidered for their exception safety. If the change in guarantee triggered a change in the function name as well, it would allow the compiler to help me out a little bit in at least flagging all uses of the changed function. This was my rationale for suggesting the naming convention above, problematic as it is. This is very similar to const, where a change in the const-ness of a function has cascading effects on other calling functions, but in that situation the compiler gives very effective assistance.
So I guess my question is: what kind of work habits have people developed in order to ensure that code actually fulfills its intended exception guarantees, especially during code maintenance and refactoring?
I usually find myself documenting that in comments (Doxygen), except for the no-throw guarantee, which I often tag with the throw() exception specification, if and only if I am sure that the function is guaranteed not to throw and exception safety is important.
That is, I usually worry more about exceptions in parts of the code where an unhandled exception would cause problems, and I deal with that locally: ensure that the code is exception safe by other means, such as RAII, or by performing the work on the side and then merging the results with a no-throw operation (e.g. a no-throw swap, which is about the only function that I actively mark as throw()).
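As an illustration of that pattern (my own sketch, not code from the thread): do the throwing work on the side, then commit with a no-throw swap.

#include <string>
#include <vector>

class Document {
public:
    // Strong guarantee: the copy may throw, but *this is only modified
    // by the swap, which cannot throw.
    void replace_lines(const std::vector<std::string>& new_lines) {
        std::vector<std::string> staged(new_lines);  // may throw std::bad_alloc
        lines_.swap(staged);                         // no-throw commit
    }
private:
    std::vector<std::string> lines_;
};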
Other people might have other experiences, but I find that to be sufficient for my daily work.
I don't think you need to do anything special.
The only ones I really document are no-throw and that is because the syntax of the language allows it.
There should be no code in your project that provides no guarantee. So that only leaves strong/basic to document. For these, I don't think you need to explicitly call it out, as it is not really about the methods themselves but about the class as a whole (for these two guarantees). The guarantees they provide are really dependent on usage.
I would like to think I provide the strong guarantee on everything (but I don't). Sometimes it is too expensive; sometimes it's just not worth the effort (if things are going to throw, they will be destroyed anyway).
I understand your willingness to do well, but I am unsure about such a naming convention.
I am, in general, wary of naming conventions that are not enforced by the language: they are prone to become the greatest liars.
If you truly need such things, my suggestion is to get your hands on a compiler (Clang for example) and add a new set of attributes. Do note that you'll need to edit your Standard Library provided headers, and all 3rd party headers you rely on, to annotate them so that you can get those guarantees from the ground up.
Then you can have the compiler check the annotations (won't be trivial either...), and then the annotations become useful, because they cannot lie.
I am thinking about adding
\par Exception Safety
Strong guarantee
to my javadocs where appropriate.
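For instance, on a hypothetical member function it would read:

/// Replaces the widget's contents.
///
/// \par Exception Safety
/// Strong guarantee: if this throws, the widget is unchanged.
void setContents(const Contents& c);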

Implementing atomic<T>::store

I'm attempting to implement the atomic library from the C++0x draft. Specifically, I'm implementing §29.6/8, the store method:
template <typename T>
void atomic<T>::store(T pDesired, memory_order pOrder = memory_order_seq_cst);
The requirement states:
The order argument shall not be memory_order_consume, memory_order_acquire, nor memory_order_acq_rel.
I'm not sure what to do if it is one of these. Should I do nothing, throw an exception, get undefined behavior, or do something else?
P.S.: "C++0X" looks kinda like a dead fish :3
Do what you want. It doesn't matter.
When the ISO Standard states that you "shall not" do something, doing it is undefined behaviour. If a user does that, they have violated the contract with the implementation, and the implementation is within its rights to do as it pleases.
What you decide to do is entirely up to you. I would opt for whatever makes your implementation "better" (in your eyes, be that faster, more readable, subject to the principle of least astonishment, and so forth).
Myself, I'd go for readability (since I would have to maintain the thing) with speed taking a close second.
I prefer a compile-time error. If not that, then an assert() failure.
Assert is good because it compiles out of the release version and will not impact performance.
Compile-time errors are even better because they provide more immediate feedback, without waiting for the software to trip over the bug. Compile-time error checks are a thing I love about C++ code over Python, Ruby, and Perl code.
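If you go the assert() route, the check might look like this; a sketch that reuses the store() signature from the question, not actual library code:

#include <cassert>

template <typename T>
void atomic<T>::store(T pDesired, memory_order pOrder) {
    // §29.6/8: these orderings are not permitted for a store.
    assert(pOrder != memory_order_consume &&
           pOrder != memory_order_acquire &&
           pOrder != memory_order_acq_rel);
    // ... perform the store with the requested ordering ...
}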
I'd rather get vaguely sane behaviour than something crazy.
Well, as a potential consumer of your library, here's what I'd like: if there's no performance cost to the documented usage, then see if one of the memory_order values provides a functional superset of the others, particularly something corresponding to what a caller might naively expect the unsupported modes to do (if any sensible expectation can be formed). The caller may get the slowest, safest mode, but that's better than something functionally wrong. You minimise the client code's dependency on getting everything perfect for your code. The problem with this - compared to an assert/exception - is that it can go unnoticed in a test environment, so consider also writing an explanation to std::cerr, using a static variable to limit the messages to one per process run. That's a very useful diagnostic.
An exception, fatal assertion etc. might bring down a client application at a very inconvenient moment.... Seems a bit draconian, and not something I'd appreciate particularly. Another option is to have an environment variable control this behaviour.
(There's presumably a similar issue for values that aren't even in your current enumeration.)

When and how to catch exceptions

I am working on a code base with a bunch of developers who aren't primarily from Computer Science or Software Engineering backgrounds (mostly Computer Engineering).
I am looking for a good article about when exceptions should be caught and when one should try to recover from them. I found an article a while ago that I thought explained things well, but Google isn't helping me find it again.
We are developing in C++. Links to articles are an acceptable form of answer, as are summaries with pointers. I'm trying to teach here, so tutorial format would be good. As would something that was written to be accessible to non-software engineers. Thanks.
Herb Sutter has an excellent article that may be useful to you. It does not answer your specific question (when/how to catch) but does give a general overview and guidelines for handling exceptional conditions.
I've copied his summary here verbatim:
Distinguish between errors and nonerrors. A failure is an error if and only if it violates a function's ability to meet its callees' preconditions, to establish its own postconditions, or to reestablish an invariant it shares responsibility for maintaining. Everything else is not an error.
Ensure that errors always leave your program in a valid state; this is the basic guarantee. Beware of invariant-destroying errors (including, but not limited to, leaks), which are just plain bugs.
Prefer to additionally guarantee that the final state is either the original state (if there was an error, the operation was rolled back) or the intended target state (if there was no error, the operation was committed); this is the strong guarantee.
Prefer to additionally guarantee that the operation can never fail. Although this is not possible for most functions, it is required for functions such as destructors and deallocation functions.
Finally, prefer to use exceptions instead of error codes to report errors. Use error codes only when exceptions cannot be used (when you don't control all possible calling code and can't guarantee it will be written in C++ and compiled using the same compiler and compatible compile options), and for conditions that are not errors.
Read the chapter "Exception Handling" from the book Thinking in C++, Volume 2 by Bruce Eckel.
Maybe this MSDN section will help you...
The most simplistic advice:
If you don't know whether or not to catch an exception, don't catch it and let it flow up; someone will catch it at some point.
The point about exceptions is that they are exceptional (think std::bad_alloc). Apart from some weird uses for "quick exit" of deeply nested code blocks (which I don't like much), exceptions should be used only when you run into something that you have no idea how to deal with.
Let's pick examples:
file = open('littlefile.txt', open.mode.Read)
It does seem obvious, to me, that this may fail, and in a number of conditions. While reporting the cause of failure is important (for accurate diagnostics), I find that throwing an exception here is NOT good practice.
In C++ I would write such a function as:
boost::variant<FileHandle,Error> open(std::string const& name, mode_t mode);
The function may return either a file handle (great) or an error (oops). But since failure is expected, it is better to deal with it now. It also has the great advantage of being explicit: looking at the signature means that you know what to expect (not talking about exception specifications; that's a broken feature).
In general, I tend to think of these functions as find functions. When you search for something, it is expected that the search may fail; there is nothing exceptional here.
Think about the general case of an associative container:
template <typename Key, typename Value>
boost::optional<Value const&> Associative::GetItem(Key const& key) const;
Once again, thanks to Boost, I make it clear that my method may (or may not) return the expected value. There is no need for an ElementNotFound exception to be thrown.
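A call site then handles the missing element inline (a sketch reusing the hypothetical Associative::GetItem from above):

if (boost::optional<Value const&> item = assoc.GetItem(key)) {
    process(*item);   // found: use the value
} else {
    // not found: an expected outcome, no ElementNotFound exception needed
}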
For yet another example: user input validation is expected to fail. In general, inputs are expected to be hostile / ill formed / wrong. No need for exceptions here.
On the other hand, suppose my software deals with a database and cannot possibly run without it. If the database abstraction layer loses the connection to the database and cannot establish a new one, then it makes sense to raise an exception.
I reserve exceptions for technical issues (lost connection, out of memory, etc...).

How to explain undefined behavior to know-it-all newbies?

There's a handful of situations that the C++ standard classifies as undefined behavior. For example, if I allocate with new[] and then try to free with delete (not delete[]), that's undefined behavior: anything can happen. It might work, it might crash nastily, or it might corrupt something silently and plant a problem that only surfaces later.
It's so problematic to explain this "anything can happen" part to newbies. They start "proving" that "it works" (because it really works on the C++ implementation they use) and ask "what could possibly be wrong with this?". What concise explanation could I give that would motivate them to just not write such code?
Undefined means explicitly unreliable. Software should be reliable. You shouldn't have to say much else.
A frozen pond is a good example of an undefined walking surface. Just because you make it across once doesn't mean you should add the shortcut to your paper route, especially if you're planning for the four seasons.
Two possibilities come to my mind:
You could ask them: "just because you can drive down the motorway in the opposite direction at midnight and survive, would you do it regularly?"
The more involved solution might be to set up a different compiler / run environment to show them how it fails spectacularly under different circumstances.
"Congratulations, you've defined the behavior that compiler has for that operation. I'll expect the report on the behavior that the other 200 compilers that exist in the world exhibit to be on my desk by 10 AM tomorrow. Don't disappoint me now, your future looks promising!"
Simply quote from the standard. If they can't accept that, they aren't C++ programmers. Would Christians deny the bible? ;-)
1.9 Program execution
The semantic descriptions in this International Standard define a parameterized nondeterministic abstract machine. [...]
Certain aspects and operations of the abstract machine are described in this International Standard as implementation-defined (for example, sizeof(int)). These constitute the parameters of the abstract machine. Each implementation shall include documentation describing its characteristics and behavior in these respects. [...]
Certain other aspects and operations of the abstract machine are described in this International Standard as unspecified (for example, order of evaluation of arguments to a function). Where possible, this International Standard defines a set of allowable behaviors. These define the nondeterministic aspects of the abstract machine. [...]
Certain other operations are described in this International Standard as undefined (for example, the effect of dereferencing the null pointer). [ Note: this International Standard imposes no requirements on the behavior of programs that contain undefined behavior. —end note ]
You can't get any clearer than that.
I'd explain that if they didn't write the code correctly, their next performance review would not be a happy one. That's sufficient "motivation" for most people.
Let them try their way until their code crashes during testing. Then the words won't be needed.
The thing is that newbies (we've all been there) have some amount of ego and self-confidence. That's okay; in fact, you couldn't be a programmer if you didn't. It's important to educate them, but no less important to support them and not to cut short the start of their journey by undermining their trust in themselves. Just be polite, but prove your position with facts, not with words. Only facts and evidence will work.
John Woods:
In short, you can't use sizeof() on a structure whose elements haven't been defined, and if you do, demons may fly out of your nose.
"Demons may fly out of your nose" simply must be part of the vocabulary of every programmer.
More to the point, talk about portability. Explain how programs frequently have to be ported to different OSes, let alone different compilers. In the real world, the ports are usually done by people other than the original programmers. Some of these ports are even to embedded devices, where there can be enormous costs of discovering that the compiler decided differently from your assumption.
Turn the person into a pointer. Tell them that they are a pointer to a class human and you are invoking the function 'RemoveCoat'. When they are pointing at a person and saying 'RemoveCoat' all is fine. If the person does not have a coat, no worries - we check for that, all RemoveCoat really does is remove the top layer of clothing (with decency checks).
Now what happens if they are pointing somewhere random and they say RemoveCoat - if they are pointing at a wall then the paint might peel off, if they are pointing at a tree the bark might come off, dogs might shave themselves, the USS Enterprise might lower its shields at a critical moment etc!
There is no way of working out what might happen: the behaviour has not been defined for that situation. This is called undefined behaviour, and it must be avoided.
Quietly override new, new[], delete and delete[] and see how long it takes him to notice ;)
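A sketch of that trick (illustrative only; error handling omitted): log every allocation and deallocation so a mismatched delete stands out immediately.

#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new(std::size_t n) {
    void* p = std::malloc(n);
    std::printf("new      %p (%zu bytes)\n", p, n);
    return p;
}

void* operator new[](std::size_t n) {
    void* p = std::malloc(n);
    std::printf("new[]    %p (%zu bytes)\n", p, n);
    return p;
}

void operator delete(void* p) noexcept {
    std::printf("delete   %p  <-- was this allocated with new[]?\n", p);
    std::free(p);
}

void operator delete[](void* p) noexcept {
    std::printf("delete[] %p\n", p);
    std::free(p);
}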
Failing that ... just tell him he is wrong and point him towards the C++ spec. Oh yeah .. and next time be more careful when employing people to make sure you avoid a-holes!
One would be...
"This" usage is not part of the language. If we said that in this case the compiler must generate code that crashes, then it would be a feature, some kind of requirement for the compiler's manufacturer. The writers of the standard did not want to spend unnecessary work on "features" that are not supported, so they decided not to make any behavioral requirements in such cases.
I like this quote:
Undefined behavior: it may corrupt your files, format your disk or send hate mail to your boss.
I don't know who to attribute this to (maybe it's from Effective C++)?
C++ is not really a language for dilettantes, and simply listing out some rules and making them obey without question will make for some terrible programmers; most of the stupidest things I see people say are probably related to this kind of blind rule-following/lawyering.
On the other hand, if they know the destructors won't get called, and about the other possible problems, then they will take care to avoid it. More importantly, they'll have some chance to debug it if they ever do it by accident, and also some chance to realize how dangerous many of the features of C++ can be.
Since there are many things to worry about, no single course or book is ever going to make someone a master of C++, or probably even that good with it.
Just show them Valgrind.
Compile and run this program:
#include <iostream>

class A {
public:
    A() { std::cout << "hi" << std::endl; }
    ~A() { std::cout << "bye" << std::endl; }
};

int main() {
    A* a1 = new A[10];
    delete a1;      // wrong: a1 was allocated with new[]
    A* a2 = new A[10];
    delete[] a2;    // correct
}
At least when using GCC, it shows that the destructor only gets called for one of the elements when doing single delete.
About single delete on POD arrays: point them to a C++ FAQ or have them run their code through cppcheck.
One point not yet mentioned about undefined behavior is that if performing some operation would result in undefined behavior, a standards-conforming implementation could legitimately, perhaps in an effort to be 'helpful' or improve efficiency, generate code which would fail if such an operation were attempted.
For example, one can imagine a multi-processor architecture in which any memory location may be locked, and attempting to access a locked location (except to unlock it) will stall until the location in question is unlocked. If locking and unlocking were very cheap (plausible if they're implemented in hardware), such an architecture could be handy in some multi-threading scenarios, since implementing x++ as (atomically read and lock x; add one to the read value; atomically unlock and write x) would ensure that if two threads both performed x++ simultaneously, the result would be to add two to x. Provided programs are written to avoid undefined behavior, such an architecture might ease the design of reliable multi-threaded code without requiring big clunky memory barriers.
Unfortunately, a statement like *x++ = *y++; could cause deadlock if x and y were both references to the same storage location and the compiler attempted to pipeline the code as t1 = read-and-lock x; t2 = read-and-lock y; read t3 = *t1; write *t2 = t3; t1++; t2++; unlock-and-write x = t1; unlock-and-write y = t2;. While the compiler could avoid deadlock by refraining from interleaving the various operations, doing so might impede efficiency.
Turn on malloc_debug and delete an array of objects with destructors. Freeing a pointer inside the block should fail. Call them all together and demonstrate this.
You'll need to think of other examples to build your credibility until they understand that they are newbies and there's a lot to know about C++.
Tell them about standards and how tools are developed to comply with the standards. Anything outside the standard might or might not work, which is UB.
The fact that their program appears to work is a guarantee of nothing; the compiler could generate code that happens to work (how do you even define "work" when the correct behavior is undefined?) on weekdays but formats your disk on weekends. Did they read the source code of their compiler? Examine the disassembled output?
Or remind them just because it happens to "work" today is no guarantee of it working when you upgrade your compiler version. Tell them to have fun finding whatever subtle bugs creep up from that.
And really, why not? They should be providing a justifiable argument to use undefined behavior, not the other way around. What reason is there to use delete instead of delete[] other than laziness? (Okay, there's std::auto_ptr. But if you're using std::auto_ptr with a new[]-allocated array, you probably ought to be using a std::vector anyway.)
Both the C and C++ Standards use the term "undefined behavior" to refer to situations in which it may be useful for different implementations to process constructs in differing, incompatible fashions, some of which will behave predictably but some of which may not. While I don't know of any published Rationale for the C++ Standard, the Rationale for the C Standard says:
Undefined behavior gives the implementor license not to catch certain program errors that are difficult to diagnose. It also identifies areas of possible conforming language extension: the implementor may augment the language by providing a definition of the officially undefined behavior.
Note that many actions which were classified as Undefined Behavior by the C Standard were considered fully defined on many if not all implementations, but the authors of the Standard wanted to give implementors targeting unusual platforms or application fields the ability to deviate from the normal behaviors if doing so would benefit their customers. Such freedom was not intended to invite arbitrary and capricious deviations from precedent that make it harder for programmers to quickly and easily do what needed to be done.
Unfortunately, many programmers who use gcc and clang don't understand their own needs as well as the maintainers of those compilers do. Those maintainers recognize that the Standard avoids mandating anything that would impair the efficiency of applications that will never receive maliciously crafted inputs, or that will only run in contexts where even malicious programs would be unable to damage anything, and they take that to imply that there is no need for any implementation to let programmers easily and efficiently write programs that are suitable for use in other contexts.