Explicit call to destructor - C++

I stumbled upon the following code snippet:
#include <iostream>
#include <string>
using namespace std;
class First
{
    string *s;
public:
    First() { s = new string("Text"); }
    ~First() { delete s; }
    void Print() { cout << *s; }
};

int main()
{
    First FirstObject;
    FirstObject.Print();
    FirstObject.~First();
}
The text said that this snippet should cause a runtime error. Now, I wasn't really sure about that, so I tried to compile and run it. It worked. The weird thing is that, despite the simplicity of the data involved, the program stuttered after printing "Text" and only completed after about a second.
I added a string to be printed in the destructor, as I was unsure whether it was legal to explicitly call a destructor like that. The program printed the string twice. So my guess was that the destructor is called twice, as normal program termination is unaware of the explicit call and tries to destroy the object again.
A simple search confirmed that explicitly calling a destructor on an automatic object is dangerous, as the second call (when the object goes out of scope) has undefined behaviour. So I was lucky with my compiler (VS 2017) or with this specific program.
Is the text simply wrong about the runtime error? Or is it really common to get a runtime error? Or maybe my compiler implemented some kind of guarding mechanism against this kind of thing?

A simple search confirmed that explicitly calling a destructor on an automatic object is dangerous, as the second call (when the object goes out of scope) has undefined behaviour.
That is true. Undefined behavior is invoked if you explicitly destroy an object with automatic storage duration. Learn more about it.
So I was lucky with my compiler (VS 2017) or with this specific program.
I'd say you were unlucky. The best thing (for you, the coder) that can happen with UB is a crash on the first run. If it appears to work fine, the crash could happen on January 19, 2038 in production.
Is the text simply wrong about the runtime error? Or is it really common to get a runtime error? Or maybe my compiler implemented some kind of guarding mechanism against this kind of thing?
Yes, the text's kinda wrong. Undefined behavior is undefined. A run-time error is only one of many possibilities (including nasal demons).
A good read about undefined behavior: What is undefined behavior?

No, this is simply undefined behavior, per the draft C++ standard [class.dtor]p16:
Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended ([basic.life]).
[ Example: If the destructor for an automatic object is explicitly invoked, and the block is subsequently left in a manner that would ordinarily invoke implicit destruction of the object, the behavior is undefined.
— end example
 
and we can see from the definition of undefined behavior:
behavior for which this document imposes no requirements
You can have no expectations as to the results. It may have behaved that way for the author on their specific compiler, with specific options, on a specific machine, but we can't expect it to be a portable or reliable result. Although there are cases where the implementation does try to obtain a specific result, that is just another form of acceptable undefined behavior.
Additionally, [class.dtor]p15 gives more context on the normative section quoted above:
[ Note: Explicit calls of destructors are rarely needed.
One use of such calls is for objects placed at specific addresses using a placement new-expression.
Such use of explicit placement and destruction of objects can be necessary to cope with dedicated hardware resources and for writing memory management facilities.
For example,
void* operator new(std::size_t, void* p) { return p; }

struct X {
    X(int);
    ~X();
};

void f(X* p);

void g() {   // rare, specialized use:
    char* buf = new char[sizeof(X)];
    X* p = new(buf) X(222);   // use buf[] and initialize
    f(p);
    p->X::~X();               // cleanup
}
— end note  ]

Is the text simply wrong about the runtime error?
It is wrong.
Or is it really common to get a runtime error? Or maybe my compiler implemented some kind of guarding mechanism against this kind of thing?
You cannot know, and this is what happens when your code invokes Undefined Behavior; you don't know what will happen when you execute it.
In your case, you were (un)lucky* and it worked, while for me, it caused an error (double free).
*Because if you received an error you would start debugging; otherwise, in a large project for example, you might miss it...
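To make the failure mode concrete, here is a minimal sketch (reusing the question's names; the rewrite is only one possible fix) of why the original snippet ends up freeing the same pointer twice, and how to avoid the problem:
#include <iostream>
#include <string>

class First
{
    std::string s;                  // value member: no manual new/delete needed
public:
    First() : s("Text") {}
    void Print() const { std::cout << s; }
    // The implicitly generated destructor is correct. With the original
    // raw-pointer member, calling ~First() explicitly and then letting the
    // destructor run again at the end of the scope executes "delete s" twice
    // on the same pointer - a double free, which is where the UB bites.
};

int main()
{
    First FirstObject;
    FirstObject.Print();
}   // the destructor runs exactly once, here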

Related

Does a memory leak cause undefined behaviour? [duplicate]

Turns out many innocent-looking things are undefined behavior in C++. For example, once a non-null pointer has been delete'd, even printing out that pointer value is undefined behavior.
Now memory leaks are definitely bad. But what class of behavior are they - defined, undefined, or some other class entirely?
Memory leaks.
There is no undefined behavior. It is perfectly legal to leak memory.
Undefined behavior: actions the standard specifically does not want to define and leaves up to the implementation, so that it is flexible enough to perform certain types of optimizations without breaking the standard.
Memory management is well defined.
If you dynamically allocate memory and don't release it, then the memory remains the property of the application to manage as it sees fit. The fact that you have lost all references to that portion of memory is neither here nor there.
Of course if you continue to leak then you will eventually run out of available memory and the application will start to throw bad_alloc exceptions. But that is another issue.
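As a hedged sketch of that failure mode (the allocation size is arbitrary, and on some systems the OS may kill the process before bad_alloc is ever thrown):
#include <iostream>
#include <new>

int main()
{
    try {
        for (;;)
            new char[1024u * 1024u];   // leak 1 MiB per iteration, never delete
    } catch (const std::bad_alloc&) {
        // exhausting memory surfaces as a well-defined exception,
        // not as undefined behavior
        std::cout << "allocation failed with bad_alloc\n";
    }
}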
Memory leaks are definitely defined in C/C++.
If I do:
int *a = new int[10];
followed by
a = new int[10];
I'm definitely leaking memory, as there is no way to access the first allocated array, and this memory is not automatically freed since GC is not supported.
But the consequences of this leak are unpredictable and will vary from application to application, and from machine to machine for the same application. An application that crashes due to leaking on one machine might work just fine on another machine with more RAM. Also, for a given application on a given machine, the crash due to the leak can appear at different times during the run.
If you leak memory, execution proceeds as if nothing happens. This is defined behavior.
Down the track, you may find that a call to malloc fails due to there not being enough available memory. But this is a defined behavior of malloc, and the consequences are also well-defined: the malloc call returns NULL.
Now this may cause a program that doesn't check the result of malloc to fail with a segmentation violation. But that undefined behavior is (from the POV of the language specs) due to the program dereferencing an invalid pointer, not the earlier memory leak or the failed malloc call.
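A small hedged sketch of that distinction (the allocation size is arbitrary):
#include <cstdio>
#include <cstdlib>

int main()
{
    void* p = std::malloc(1024u * 1024u * 1024u);   // may well fail
    if (p == NULL) {
        // malloc failing is well-defined: it simply returns NULL
        std::puts("malloc returned NULL");
        return 1;
    }
    // dereferencing p when it is NULL, without this check, would be the
    // undefined behavior - not the failed allocation itself
    std::free(p);
    return 0;
}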
My interpretation of this statement:
For an object of a class type with a non-trivial destructor, the
program is not required to call the destructor explicitly before the
storage which the object occupies is reused or released; however, if
there is no explicit call to the destructor or if a delete-expression
(5.3.5) is not used to release the storage, the destructor shall not
be implicitly called and any program that depends on the side effects
produced by the destructor has undefined behavior.
is as follows:
If you somehow manage to free the storage which the object occupies without calling the destructor on the object that occupied the memory, UB is the consequence, if the destructor is non-trivial and has side-effects.
If new allocates with malloc, the raw storage could be released with free(), the destructor would not run, and UB would result. Or if a pointer is cast to an unrelated type and deleted, the memory is freed, but the wrong destructor runs, UB.
This is not the same as an omitted delete, where the underlying memory is not freed. Omitting delete is not UB.
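A hedged illustration of that contrast; case (2) is deliberately bad code, and the point is only where the UB would come from under the quoted wording:
#include <new>
#include <string>

int main()
{
    // (1) Omitted delete: the string is leaked, its destructor never runs,
    //     and that by itself is not undefined behavior.
    std::string* leaked = new std::string(64, '#');
    (void)leaked;

    // (2) Storage released without running the destructor: permitted by the
    //     quoted paragraph, but any program that depends on the side effects
    //     of ~string() now has undefined behavior.
    std::string* released = new std::string(64, '#');
    ::operator delete(released);
}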
(Comment below "Heads-up: this answer has been moved here from Does a memory leak cause undefined behaviour?" - you'll probably have to read that question to get proper background for this answer O_o).
It seems to me that this part of the Standard explicitly permits:
having a custom memory pool that you placement-new objects into, then release/reuse the whole thing without spending time calling their destructors, as long as you don't depend on side-effects of the object destructors (a sketch of this case follows below).
libraries that allocate a bit of memory and never release it, probably because their functions/objects could be used by destructors of static objects and registered on-exit handlers, and it's not worth buying into the whole orchestrated-order-of-destruction or transient "phoenix"-like rebirth each time those accesses happen.
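A minimal sketch of the first case above, assuming nothing depends on the side effects of the skipped destructors (the Event type and pool size are purely illustrative):
#include <cstddef>
#include <new>

struct Event { int id; double payload; };   // trivially destructible

int main()
{
    // a crude fixed-size pool; a real pool allocator would handle capacity,
    // alignment of mixed types, and reuse more carefully
    alignas(Event) unsigned char pool[sizeof(Event) * 16];

    for (std::size_t i = 0; i != 16; ++i)
        new (pool + i * sizeof(Event)) Event{ int(i), i * 0.5 };

    // ... use the events ...

    // the whole pool simply goes out of scope: no destructors are called,
    // which is fine here because nothing depends on their side effects
}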
I can't understand why the Standard chooses to leave the behaviour undefined when there are dependencies on side effects - rather than simply say those side effects won't have happened and let the program have defined or undefined behaviour as you'd normally expect given that premise.
We can still consider what the Standard says is undefined behaviour. The crucial part is:
"depends on the side effects produced by the destructor has undefined behavior."
The Standard §1.9/12 explicitly defines side effects as follows (the italics below are the Standards, indicating the introduction of a formal definition):
Accessing an object designated by a volatile glvalue (3.10), modifying an object, calling a library I/O function, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment.
In your program, there's no dependency so no undefined behaviour.
One example of dependency arguably matching the scenario in §3.8 p4, where the need for or cause of undefined behaviour isn't apparent, is:
struct X
{
    ~X() { std::cout << "bye!\n"; }
};

int main()
{
    new X();
}
An issue people are debating is whether the X object above would be considered released for the purposes of 3.8 p4, given it's probably only released to the O.S. after program termination - it's not clear from reading the Standard whether that stage of a process's "lifetime" is in scope for the Standard's behavioural requirements (my quick search of the Standard didn't clarify this). I'd personally hazard that 3.8p4 applies here, partly because as long as it's ambiguous enough to be argued a compiler writer may feel entitled to allow undefined behaviour in this scenario, but even if the above code doesn't constitute release the scenario's easily amended ala...
int main()
{
    X* p = new X();
    *(char*)p = 'x';   // token memory reuse...
}
Anyway, however main's implemented the destructor above has a side effect - per "calling a library I/O function"; further, the program's observable behaviour arguably "depends on" it in the sense that buffers that would be affected by the destructor were it to have run are flushed during termination. But is "depends on the side effects" only meant to allude to situations where the program would clearly have undefined behaviour if the destructor didn't run? I'd err on the side of the former, particularly as the latter case wouldn't need a dedicated paragraph in the Standard to document that the behaviour is undefined. Here's an example with obviously-undefined behaviour:
int* p_;

struct X
{
    ~X() { if (b_) p_ = 0; else delete p_; }
    bool b_;
};

X x{true};

int main()
{
    p_ = new int();
    delete p_;            // p_ now holds a freed pointer
    new (&x) X{false};    // reuse x without calling its destructor
}
When x's destructor is called during termination, b_ will be false and ~X() will therefore delete p_ for an already-freed pointer, creating undefined behaviour. If x.~X(); had been called before reuse, p_ would have been set to 0 and deletion would have been safe. In that sense, the program's correct behaviour could be said to depend on the destructor, and the behaviour is clearly undefined, but have we just crafted a program that matches 3.8p4's described behaviour in its own right, rather than having the behaviour be a consequence of 3.8p4...?
More sophisticated scenarios with issues - too long to provide code for - might include e.g. a weird C++ library with reference counters inside file stream objects that had to hit 0 to trigger some processing such as flushing I/O or joining of background threads etc. - where failure to do those things risked not only failing to perform output explicitly requested by the destructor, but also failing to output other buffered output from the stream, or on some OS with a transactional filesystem might result in a rollback of earlier I/O - such issues could change observable program behaviour or even leave the program hung.
Note: it's not necessary to prove that there's any actual code that behaves strangely on any existing compiler/system; the Standard clearly reserves the right for compilers to have undefined behaviour... that's all that matters. This is not something you can reason about and choose to ignore the Standard - it may be that C++14 or some other revision changes this stipulation, but as long as it's there then if there's even arguably some "dependency" on side effects then there's the potential for undefined behaviour (which of course is itself allowed to be defined by a particular compiler/implementation, so it doesn't automatically mean that every compiler is obliged to do something bizarre).
The language specification says nothing about "memory leaks". From the language point of view, when you create an object in dynamic memory, you are doing just that: you are creating an anonymous object with unlimited lifetime/storage duration. "Unlimited" in this case means that the object can only end its lifetime/storage duration when you explicitly deallocate it, but otherwise it continues to live forever (as long as the program runs).
Now, we usually consider a dynamically allocated object to become a "memory leak" at the point in program execution when all references (generic "references", like pointers) to that object are lost to the point of being unrecoverable. Note that even to a human the notion of "all references being lost" is not very precisely defined. What if we have a reference to some part of the object, which can theoretically be "recalculated" into a reference to the entire object? Is it a memory leak or not? What if we have no references to the object whatsoever, but somehow we can calculate such a reference using some other information available to the program (like the precise sequence of allocations)?
The language specification doesn't concern itself with issues like that. Whatever you consider an appearance of "memory leak" in your program, from the language point of view it is a non-event at all. From the language point of view a "leaked" dynamically allocated object just continues to live happily until the program ends. This is the only remaining point of concern: what happens when program ends and some dynamic memory is still allocated?
If I remember correctly, the language does not specify what happens to dynamic memory which is still allocated at the moment of program termination. No attempt will be made to automatically destruct/deallocate the objects you created in dynamic memory. But there's no formal undefined behavior in cases like that.
The burden of evidence is on those who would think a memory leak could be C++ UB.
Naturally no evidence has been presented.
In short for anyone harboring any doubt this question can never be clearly resolved, except by very credibly threatening the committee with e.g. loud Justin Bieber music, so that they add a C++14 statement that clarifies that it's not UB.
At issue is C++11 §3.8/4:
” For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior.
This passage had the exact same wording in C++98 and C++03. What does it mean?
the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released
– means that one can grab the memory of a variable and reuse that memory, without first destroying the existing object.
if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called
– means if one does not destroy the existing object before the memory reuse, then if the object is such that its destructor is automatically called (e.g. a local automatic variable) then the program has Undefined Behavior, because that destructor would then operate on a no longer existing object.
and any program that depends on the side effects produced by the destructor has undefined behavior
– can't mean literally what it says, because a program always depends on any side effects, by the definition of side effect. Or in other words, there is no way for the program not to depend on the side effects, because then they would not be side effects.
Most likely what was intended was not what finally made its way into C++98, so that what we have at hand is a defect.
From the context one can guess that if a program relies on the automatic destruction of an object of statically known type T, where the memory has been reused to create an object or objects that is not a T object, then that's Undefined Behavior.
Those who have followed the commentary may notice that the above explanation of the word “shall” is not the meaning that I assumed earlier. As I see it now, the “shall” is not a requirement on the implementation, what it's allowed to do. It's a requirement on the program, what the code is allowed to do.
Thus, this is formally UB:
auto main() -> int
{
    string s( 666, '#' );
    new( &s ) string( 42, '-' );    // <- Storage reuse.
    cout << s << endl;
    // <- Formal UB, because original destructor implicitly invoked.
}
But this is OK with a literal interpretation:
auto main() -> int
{
    string s( 666, '#' );
    s.~string();
    new( &s ) string( 42, '-' );    // <- Storage reuse.
    cout << s << endl;
    // OK, because of the explicit destruction of the original object.
}
A main problem is that with a literal interpretation of the standard's paragraph above it would still be formally OK if the placement new created an object of a different type there, just because of the explicit destruction of the original. But it would not be very in-practice OK in that case. Maybe this is covered by some other paragraph in the standard, so that it is also formally UB.
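A hedged sketch of that "different type" case (Blob is an illustrative stand-in; this is exactly the kind of code the paragraph above argues is not in-practice OK):
#include <iostream>
#include <new>
#include <string>
using namespace std;

struct Blob { unsigned char bytes[sizeof( string )]; };

auto main() -> int
{
    string s( 666, '#' );
    s.~string();
    new( &s ) Blob{};    // storage reuse, but with a different type
    // at '}', ~string() is implicitly invoked on storage that now
    // holds a Blob, not a string
}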
And this is also OK, using the placement new from <new>:
auto main() -> int
{
    char* storage = new char[sizeof( string )];
    new( storage ) string( 666, '#' );
    string const& s = *(
        new( storage ) string( 42, '-' )    // <- Storage reuse.
        );
    cout << s << endl;
    // OK, because no implicit call of original object's destructor.
}
As I see it – now.
It's definitely defined behaviour.
Consider a case where a server is running and keeps allocating heap memory, and no memory is released even though it is no longer needed.
The end result is that eventually the server will run out of memory and will crash.
Adding to all the other answers, here is an entirely different approach. Looking at memory allocation in §5.3.4-18 we can see:
If any part of the object initialization described above terminates
by throwing an exception and a suitable deallocation function can be
found, the deallocation function is called to free the memory in which
the object was being constructed, after which the exception continues
to propagate in the context of the new-expression. If no unambiguous
matching deallocation function can be found, propagating the exception
does not cause the object’s memory to be freed. [ Note: This is
appropriate when the called allocation function does not allocate
memory; otherwise, it is likely to result in a memory leak. —end note
]
If it caused UB here, it would be mentioned, so it is "just a memory leak".
In places like §20.6.4-10, a possible garbage collector and leak detector are mentioned. A lot of thought has been put into the concept of safely derived pointers et al. to be able to use C++ with a garbage collector (C.2.10 "Minimal support for garbage-collected regions").
Thus if it would be UB to just lose the last pointer to some object, all the effort would make no sense.
Regarding "when the destructor has side effects, never running it is UB": I would say this is wrong, otherwise facilities such as std::quick_exit() would be inherently UB too.
If the space shuttle must take off in two minutes, and I have a choice between putting it up with code that leaks memory and code that has undefined behavior, I'm putting in the code that leaks memory.
But most of us aren't usually in such a situation, and if we are, it's probably by a failure further up the line. Perhaps I'm wrong, but I'm reading this question as, "Which sin will get me into hell faster?"
Probably the undefined behavior, but in reality both.
Defined, since a memory leak is just you forgetting to clean up after yourself.
Of course, a memory leak can probably cause undefined behaviour later.
Straightforward answer: the standard doesn't define what happens when you leak memory, thus it is "undefined". It's implicitly undefined, though, which is less interesting than the explicitly undefined things in the standard.
This obviously cannot be undefined behaviour. Simply because UB has to happen at some point in time, and forgetting to release memory or call a destructor does not happen at any point in time. What happens is just that the program terminates without ever having released memory or called the destructor; this does not make the behaviour of the program, or of its termination, undefined in any way.
This being said, in my opinion the standard is contradicting itself in this passage. On one hand it ensures that the destructor will not be called in this scenario, and on the other hand it says that if the program depends on the side effects produced by the destructor then it has undefined behaviour. Suppose the destructor calls exit, then no program that does anything can pretend to be independent of that, because the side effect of calling the destructor would prevent it from doing what it would otherwise do; but the text also assures that the destructor will not be called so that the program can go on with doing its stuff undisturbed. I think the only reasonable way to read the end of this passage is that if the proper behaviour of the program would require the destructor to be called, then behaviour is in fact not defined; this then is a superfluous remark, given that it has just been stipulated that the destructor will not be called.
Undefined behavior means that what will happen has not been defined or is unknown. The behavior of memory leaks is definitely known in C/C++: they eat away at available memory. The resulting problems, however, cannot always be defined and vary as described by gameover.

Ctor Initializer: self initialization causes crash?

I had a hard time debugging a crash on production. Just wanted to confirm with folks here about the semantics. We have a class like ...
class Test {
public:
    Test()
    {
        // members initialized ...
        m_str = m_str;
    }
    ~Test() {}
private:
    // other members ...
    std::string m_str;
};
Someone changed the initialization to use ctor initialization-lists which is reasonably correct within our code semantics. The order of initialization and their initial value is correct among other things. So the class looks like ...
class Test {
public:
    Test()
        : /* other inits ... */ m_str(m_str)
    {
    }
    ~Test() {}
private:
    // other members ...
    std::string m_str;
};
But the code suddenly started crashing! I isolated the long list of inits to this piece of code m_str(m_str). I confirmed this via link text.
Does it have to crash? What does the standard say about this? (Is it undefined behavior?)
The first constructor is equivalent to
Test()
    : m_str()
{
    // members initialized ...
    m_str = m_str;
}
that is, by the time you get to the assignment within the constructor, m_str has already been implicitly initialized to an empty string. So the assignment to self, although completely meaningless and superfluous, causes no problems (since std::string::operator=(), as any well written assignment operator should, checks for self assignment and does nothing in this case).
However, in the second constructor, you are trying to initialize m_str with itself in the initializer list - at which point it is not yet initialized. So the result is undefined behaviour.
Update: For primitive types, this is still undefined behaviour (resulting in a field with garbage value), but it does not crash (usually - see the comments below for exceptions) because primitive types by definition have no constructors, destructors and contain no pointers to other objects.
Same is true for any type that does not contain pointer members with ownership semantics. std::string is hereby demonstrated to be not one of these :-)
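A hedged side-by-side illustration of that difference (both initializers are bugs; the int case merely tends to be less dramatic):
#include <string>

struct A {
    int n;
    A() : n(n) {}      // reads the uninitialized n: a garbage value,
                       // undefined in principle, but rarely a crash
};

struct B {
    std::string s;
    B() : s(s) {}      // copy-constructs from a string that has not been
                       // constructed yet: UB that very often does crash
};

int main()
{
    A a;    // "works", n is garbage
    B b;    // typically crashes or corrupts the heap
    (void)a; (void)b;
}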
m_str is constructed in the initialization list. Therefore, at the time you are assigning it to itself, it is not fully constructed. Hence, undefined behavior.
(What is that self-assignment supposed to do anyway?)
The original "initialization" by assignment is completely superfluous.
It didn't do any harm, other than wasting processor cycles, because at the time of the assignment the m_str member had already been initialized, by default.
In the second code snippet the default initialization is overridden to use the as-yet-uninitialized member to initialize itself. That's Undefined Behavior. And it's completely unnecessary: just remove it (and don't re-introduce the original time-waster; just remove it).
By turning up the warning level of your compiler you may be able to get warnings about this and similar trivially ungood code.
Unfortunately the problem you're having is not this technical one, it's much more fundamental. It's like a worker in a car factory poses a question about the square wheels they're putting on the new car brand. Then the problem isn't that the square wheels don't work, it's that a whole lot of engineers and managers have been involved in the decision to use the fancy looking square wheels and none of them objected -- some of them undoubtedly didn't understand that square wheels don't work, but most of them, I suspect, were simply afraid to say what that they were 100% sure of. So it's most probably a management problem. I'm sorry, but I don't know a fix for that...
Undefined behavior doesn't have to lead to a crash -- it can do just about anything, from continuing to work as if there was no problem at all, to crashing immediately, to doing something really strange that causes seemingly unrelated problems later. The canonical claim is that it makes "demons fly out of your nose" (aka "nasal demons"). At one time the inventor of the phrase had a (pretty cool) web site telling about the nuclear war that started from somebody causing undefined behavior in the "DeathStation 9000".
Edit: The exact wording from the standard is (§1.3.12):
1.3.12 undefined behavior [defns.undefined]
behavior, such as might arise upon use of an erroneous program construct or erroneous data, for which this
International Standard imposes no requirements. Undefined behavior may also be expected when this
International Standard omits the description of any explicit definition of behavior. [Note: permissible undefined
behavior ranges from ignoring the situation completely with unpredictable results, to behaving during
translation or program execution in a documented manner characteristic of the environment (with or without
the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a
diagnostic message).
This is the same difference as between
std::string str;
str = str;
and
std::string str(str);
The former works (although it's nonsense), the latter doesn't, since it tries to copy-construct an object from a not-yet-constructed object.
Of course, the way to go would be
Test() : m_str() {}

Does explicitly calling destructor result in Undefined Behavior here?

In my opinion, the following code (from some C++ question) should lead to UB, but it seems it does not. Here is the code:
#include <iostream>
using namespace std;
class some{ public: ~some() { cout<<"some's destructor"<<endl; } };
int main() { some s; s.~some(); }
and the answer is:
some's destructor
some's destructor
I learned from the C++ FAQ Lite that we should not explicitly call the destructor. I think that after the explicit call to the destructor, the object s should be deleted. The program automatically calls the destructor again when it finishes, so it should be UB. However, I tried it on g++, and got the same result as the above answer.
Is it because the class is too simple (no new/delete involved)? Or it's not UB at all in this case?
The behavior is undefined because the destructor is invoked twice for the same object:
Once when you invoke it explicitly
Once when the scope ends and the automatic variable is destroyed
Invoking the destructor on an object whose lifetime has ended results in undefined behavior per C++03 §12.4/6:
the behavior is undefined if the destructor is invoked for an object whose lifetime has ended
An object's lifetime ends when its destructor is called per §3.8/1:
The lifetime of an object of type T ends when:
— if T is a class type with a non-trivial destructor (12.4), the destructor call starts, or
— the storage which the object occupies is reused or released.
Note that this means if your class has a trivial destructor, the behavior is well-defined because the lifetime of an object of such a type does not end until its storage is released, which for automatic variables does not happen until the end of the function. Of course, I don't know why you would explicitly invoke the destructor if it is trivial.
What is a trivial destructor? §12.4/3 says:
A destructor is trivial if it is an implicitly-declared destructor and if:
— all of the direct base classes of its class have trivial destructors and
— for all of the non-static data members of its class that are of class type (or array thereof), each such class has a trivial destructor.
As others have mentioned, one possible result of undefined behavior is your program appearing to continue running correctly; another possible result is your program crashing. Anything can happen and there are no guarantees whatsoever.
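A hedged sketch of the trivial/non-trivial distinction drawn above, under the C++03/C++11 lifetime wording quoted in this answer (later revisions tightened that wording, so don't lean on the "trivial" case):
struct Trivial { };                        // implicitly-declared, trivial destructor
struct NonTrivial { ~NonTrivial() {} };    // user-provided, non-trivial destructor

int main()
{
    Trivial t;
    t.~Trivial();       // trivial destructor: the lifetime of t does not end
                        // here, so the implicit destruction at '}' is fine

    NonTrivial n;
    n.~NonTrivial();    // lifetime of n ends here; the implicit second call
                        // at '}' is undefined behavior
}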
It's undefined behavior -- but as with any UB, one possibility is that it (more or less) appears to work, at least for some definition of work.
Essentially the only time you need (or want) to explicitly invoke a destructor is in conjunction with placement new (i.e., you use placement new to create an object at a specified location, and an explicit dtor invocation to destroy that object).
From http://www.devx.com/tips/Tip/12684
Undefined behavior indicates that an implementation may behave unpredictably when a program reaches a certain state, which almost without exception is a result of a bug. Undefined behavior can be manifested as a run time crash, unstable and unreliable program state, or--in rare cases--it may even pass unnoticed.
In your case it doesn't crash because the destructor doesn't manipulate any field; actually, your class doesn't have any data members at all. If it did, and in the destructor's body you manipulated it in any way, you would likely get a run-time error when calling the destructor for the second time.
The problem here is that deletion / deallocation and destructors are separate and independent constructs. Much like new / allocation and constructors. It is possible to do only one of the above without the other.
In the general case this scenario lacks usefulness and just leads to confusion with stack-allocated values. Off the top of my head I can't think of a good scenario where you would want to do this (although I'm sure there is potentially one). However, it is possible to think of contrived scenarios where this would be legal.
template <typename T>
class StackPointer {
    T* m_pData;
public:
    StackPointer(T* pData) : m_pData(pData) {}
    ~StackPointer() {
        delete m_pData;
        m_pData = NULL;
    }
    StackPointer& operator=(T* pOther) {
        this->~StackPointer();
        m_pData = pOther;
        return *this;
    }
};
Note: Please don't ever code a class this way. Have an explicit Release method instead.
It most likely works fine because the destructor does not reference any class member variables. If you tried to delete a variable within the destructor you would probably run into trouble when it is automatically called the second time.
Then again, with undefined behavior, who knows? :)
What the main function does is reserving space on the stack, calling some's constructor, and at the end calling some's destructor. This always happens with a local variable, whatever code you put inside the function.
Your compiler won't detect that you manually called the destructor.
Anyway you should never manually call an object's destructor, except for objects created with placement-new.
I believe that if you want your code to be OK you simply need to call placement new and fill it back in before exiting. The call to the destructor isn't the issue, it's the second call to the destructor made when you leave scope.
Can you define the undefined behaviour you expect? Undefined doesn't mean random (or catastrophic): the behaviour of a given program may be repeatable between invocations, it just means you can't RELY on any particular behaviour because it is undefined and there is no guarantee of what will happen.
It is undefined behaviour. The undefined behaviour is the double destructor call, not the explicit destructor call itself. If you modify your example to:
#include <iostream>
using namespace std;
class some{ public: ~some() { [INSERT ANY CODE HERE] } };
int main() { some s; s.~some(); }
where [INSERT ANY CODE HERE] can be replaced with any arbitrary code. The results have unpredictable side effects, which is why it is considered undefined.

Why exactly is calling the destructor for the second time undefined behavior in C++?

As mentioned in this answer, simply calling the destructor a second time is already undefined behavior per 12.4/14 (3.8).
For example:
class Class {
public:
    ~Class() {}
};

// somewhere in code:
{
    Class* object = new Class();
    object->~Class();
    delete object;   // UB because at this point the destructor call is attempted again
}
In this example the class is designed in such a way that the destructor could be called multiple times - no things like double-deletion can happen. The memory is still allocated at the point where delete is called - the first destructor call doesn't call the ::operator delete() to release memory.
For example, in Visual C++ 9 the above code looks like it works. Even the C++ definition of UB doesn't directly prohibit things qualified as UB from working. So for the code above to break, some implementation and/or platform specifics are required.
Why exactly would the above code break and under what conditions?
I think your question aims at the rationale behind the standard. Think about it the other way around:
Defining the behavior of calling a destructor twice creates work, possibly a lot of work.
Your example only shows that in some trivial cases it wouldn't be a problem to call the destructor twice. That's true but not very interesting.
You did not give a convincing use-case (and I doubt you can) when calling the destructor twice is in any way a good idea / makes code easier / makes the language more powerful / cleans up semantics / or anything else.
So why again should this not cause undefined behavior?
The reason for the formulation in the standard is most probably that everything else would be vastly more complicated: it’d have to define when exactly double-deleting is possible (or the other way round) – i.e. either with a trivial destructor or with a destructor whose side-effect can be discarded.
On the other hand, there’s no benefit for this behaviour. In practice, you cannot profit from it because you can’t know in general whether a class destructor fits the above criteria or not. No general-purpose code could rely on this. It would be very easy to introduce bugs that way. And finally, how does it help? It just makes it possible to write sloppy code that doesn’t track life-time of its objects – under-specified code, in other words. Why should the standard support this?
Will existing compilers/runtimes break your particular code? Probably not – unless they have special run-time checks to prevent illegal access (to prevent what looks like malicious code, or simply leak protection).
The object no longer exists after you call the destructor.
So if you call it again, you're calling a method on an object that doesn't exist.
Why would this ever be defined behavior? The compiler may choose to zero out the memory of an object which has been destructed, for debugging/security/some reason, or recycle its memory with another object as an optimisation, or whatever. The implementation can do as it pleases. Calling the destructor again is essentially calling a method on arbitrary raw memory - a Bad Idea (tm).
When you use the facilities of C++ to create and destroy your objects, you agree to use its object model, however it's implemented.
Some implementations may be more sensitive than others. For example, an interactive interpreted environment or a debugger might try harder to be introspective. That might even include specifically alerting you to double destruction.
Some objects are more complicated than others. For example, virtual destructors with virtual base classes can be a bit hairy. The dynamic type of an object changes over the execution of a sequence of virtual destructors, if I recall correctly. That could easily lead to invalid state at the end.
It's easy enough to declare properly named functions to use instead of abusing the constructor and destructor. Object-oriented straight C is still possible in C++, and may be the right tool for some job… in any case, the destructor isn't the right construct for every destruction-related task.
Destructors are not regular functions. Calling one doesn't call one function, it calls many functions. It's the magic of destructors. While you have provided a trivial destructor with the sole intent of making it hard to show how it might break, you have failed to demonstrate what the other functions that get called do. And neither does the standard. It's in those functions that things can potentially fall apart.
As a trivial example, let's say the compiler inserts code to track object lifetimes for debugging purposes. The constructor [which is also a magic function that does all sorts of things you didn't ask it to] stores some data somewhere that says "Here I am." Before the destructor is called, it changes that data to say "There I go". After the destructor is called, it gets rid of the information it used to find that data. So the next time you call the destructor, you end up with an access violation.
You could probably also come up with examples that involve virtual tables, but your sample code didn't include any virtual functions so that would be cheating.
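As a hedged, hand-written stand-in for the kind of compiler-inserted bookkeeping described above (the tracker is purely illustrative):
#include <cassert>
#include <unordered_set>

std::unordered_set<const void*> g_alive;   // addresses of currently live objects

struct Tracked {
    Tracked()  { g_alive.insert(this); }
    ~Tracked() { assert(g_alive.erase(this) == 1 && "double destruction"); }
};

int main()
{
    Tracked* p = new Tracked();
    p->~Tracked();
    delete p;   // the second destructor call trips the assert
}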
The following class will crash on Windows on my machine if you call the destructor twice:
class Class {
public:
    Class()
    {
        x = new int;
    }
    ~Class()
    {
        delete x;
        x = (int*)0xbaadf00d;
    }
    int* x;
};
I can imagine an implementation where it will crash even with a trivial destructor. For instance, such an implementation could remove destructed objects from physical memory, and any access to them would lead to some hardware fault. It looks like Visual C++ is not that sort of implementation, but who knows.
Standard 12.4/14:
Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8).
I think this section refers to invoking the destructor via delete. In other words: The gist of this paragraph is that "deleting an object twice is undefined behavior". So that's why your code example works fine.
Nevertheless, this question is rather academic. Destructors are meant to be invoked via delete (apart from the exception of objects allocated via placement-new as sharptooth correctly observed). If you want to share code between a destructor and second function, simply extract the code to a separate function and call that from your destructor.
Since what you're really asking for is a plausible implementation in which your code would fail, suppose that your implementation provides a helpful debugging mode, in which it tracks all memory allocations and all calls to constructors and destructors. So after the explicit destructor call, it sets a flag to say that the object has been destructed. delete checks this flag and halts the program when it detects the evidence of a bug in your code.
To make your code "work" as you intended, this debugging implementation would have to special-case your do-nothing destructor, and skip setting that flag. That is, it would have to assume that you're deliberately destroying twice because (you think) the destructor does nothing, as opposed to assuming that you're accidentally destroying twice, but failed to spot the bug because the destructor happens to do nothing. Either you're careless or you're a rebel, and there's more mileage in debug implementations helping out people who are careless than there is in pandering to rebels ;-)
One important example of an implementation which could break:
A conforming C++ implementation can support Garbage Collection. This has been a longstanding design goal. A GC may assume that an object can be GC'ed immediately when its dtor is run. Thus each dtor call will update its internal GC bookkeeping. The second time the dtor is called for the same pointer, the GC data structures might very well become corrupted.
By definition, the destructor 'destroys' the object, and destroying an object twice makes no sense.
Your example works, but it is unlikely to work in general.
I guess it's been classified as undefined because most double deletes are dangerous and the standards committee didn't want to add an exception to the standard for the relatively few cases where they don't have to be.
As for where your code could break; you might find your code breaks in debug builds on some compilers; many compilers treat UB as 'do the thing that wouldn't impact on performance for well defined behaviour' in release mode and 'insert checks to detect bad behaviour' in debug builds.
Basically, as already pointed out, calling the destructor a second time will fail for any class destructor that performs work.
It's undefined behavior because the standard made it clear what a destructor is used for, and didn't decide what should happen if you use it incorrectly. Undefined behavior doesn't necessarily mean "crashy smashy," it just means the standard didn't define it so it's left up to the implementation.
While I'm not too fluent in C++, my gut tells me that the implementation is welcome to either treat the destructor as just another member function, or to actually destroy the object when the destructor is called. So it might break in some implementations but maybe it won't in others. Who knows, it's undefined (look out for demons flying out your nose if you try).
It is undefined because, if it weren't, every implementation would have to track, via some metadata, whether an object is still alive or not. You would have to pay that cost for every single object, which goes against basic C++ design rules.
The reason is because your class might be for example a reference counted smart pointer. So the destructor decrements the reference counter. Once that counter hits 0 the actual object should be cleaned up.
But if you call the destructor twice then the count will be messed up.
Same idea for other situations too. Maybe the destructor writes 0s to a piece of memory and then deallocates it (so you don't accidentally leave a user's password in memory). If you try to write to that memory again - after it has been deallocated - you will get an access violation.
It just makes sense for objects to be constructed once and destructed once.
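A minimal hedged sketch of the reference-counting scenario (Handle and its counter are illustrative, not any particular smart pointer):
struct Handle {
    int* count;                               // shared reference counter
    explicit Handle(int* c) : count(c) { ++*count; }
    ~Handle() { --*count; }                   // at 0 the owner would free the payload
};

int main()
{
    int refs = 0;
    {
        Handle h(&refs);    // refs == 1
        h.~Handle();        // refs == 0: the payload would be released now
    }                       // the implicit second call (itself UB) leaves refs == -1,
                            // breaking the bookkeeping every user of Handle relies on
    return refs == 0 ? 0 : 1;
}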
The reason is that, in the absence of that rule, your programs would become less strict. Being more strict--even when it's not enforced at compile time--is good, because, in return, you gain more predictability of how the program will behave. This is especially important when the source code of the classes is not under your control.
A lot of concepts: RAII, smart pointers, and just generic allocation/freeing of memory rely on this rule. The number of times the destructor will be called (once) is essential for them. So the documentation for such things usually promises: "Use our classes according to C++ language rules, and they will work correctly!"
If there weren't such a rule, it would read "Use our classes according to C++ language rules, and also don't call the destructor twice; then they will work correctly." A lot of specifications would sound that way.
The concept is just too important for the language in order to skip it in the standard document.
This is the reason. Not anything related to binary internals (which are described in Potatoswatter's answer).

Are memory leaks "undefined behavior" class problem in C++?

Turns out many innocently looking things are undefined behavior in C++. For example, once a non-null pointer has been delete'd even printing out that pointer value is undefined behavior.
Now memory leaks are definitely bad. But what class situation are they - defined, undefined or what other class of behavior?
Memory leaks.
There is no undefined behavior. It is perfectly legal to leak memory.
Undefined behavior: is actions the standard specifically does not want to define and leaves upto the implementation so that it is flexible to perform certain types of optimizations without breaking the standard.
Memory management is well defined.
If you dynamically allocate memory and don't release it. Then the memory remains the property of the application to manage as it sees fit. The fact that you have lost all references to that portion of memory is neither here nor there.
Of course if you continue to leak then you will eventually run out of available memory and the application will start to throw bad_alloc exceptions. But that is another issue.
Memory leaks are definitely defined in C/C++.
If I do:
int *a = new int[10];
followed by
a = new int[10];
I'm definitely leaking memory as there is no way to access the 1st allocated array and this memory is not automatically freed as GC is not supported.
But the consequences of this leak are unpredictable and will vary from application to application and from machine to machine for a same given application. Say an application that crashes out due to leaking on one machine might work just fine on another machine with more RAM. Also for a given application on a given machine the crash due to leak can appear at different times during the run.
If you leak memory, execution proceeds as if nothing happens. This is defined behavior.
Down the track, you may find that a call to malloc fails due to there not being enough available memory. But this is a defined behavior of malloc, and the consequences are also well-defined: the malloc call returns NULL.
Now this may cause a program that doesn't check the result of malloc to fail with a segmentation violation. But that undefined behavior is (from the POV of the language specs) due to the program dereferencing an invalid pointer, not the earlier memory leak or the failed malloc call.
My interpretation of this statement:
For an object of a class type with a non-trivial destructor, the
program is not required to call the destructor explicitly before the
storage which the object occupies is reused or released; however, if
there is no explicit call to the destructor or if a delete-expression
(5.3.5) is not used to release the storage, the destructor shall not
be implicitly called and any program that depends on the side effects
produced by the destructor has undefined behavior.
is as follows:
If you somehow manage to free the storage which the object occupies without calling the destructor on the object that occupied the memory, UB is the consequence, if the destructor is non-trivial and has side-effects.
If new allocates with malloc, the raw storage could be released with free(), the destructor would not run, and UB would result. Or if a pointer is cast to an unrelated type and deleted, the memory is freed, but the wrong destructor runs, UB.
This is not the same as an omitted delete, where the underlying memory is not freed. Omitting delete is not UB.
(Comment below "Heads-up: this answer has been moved here from Does a memory leak cause undefined behaviour?" - you'll probably have to read that question to get proper background for this answer O_o).
It seems to me that this part of the Standard explicitly permits:
having a custom memory pool that you placement-new objects into, then release/reuse the whole thing without spending time calling their destructors, as long as you don't depend on side-effects of the object destructors.
libraries that allocate a bit of memory and never release it, probably because their functions/objects could be used by destructors of static objects and registered on-exit handlers, and it's not worth buying into the whole orchestrated-order-of-destruction or transient "phoenix"-like rebirth each time those accesses happen.
I can't understand why the Standard chooses to leave the behaviour undefined when there are dependencies on side effects - rather than simply say those side effects won't have happened and let the program have defined or undefined behaviour as you'd normally expect given that premise.
We can still consider what the Standard says is undefined behaviour. The crucial part is:
"depends on the side effects produced by the destructor has undefined behavior."
The Standard §1.9/12 explicitly defines side effects as follows (the italics below are the Standards, indicating the introduction of a formal definition):
Accessing an object designated by a volatile glvalue (3.10), modifying an object, calling a library I/O function, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment.
In your program, there's no dependency so no undefined behaviour.
One example of dependency arguably matching the scenario in §3.8 p4, where the need for or cause of undefined behaviour isn't apparent, is:
struct X
{
~X() { std::cout << "bye!\n"; }
};
int main()
{
new X();
}
An issue people are debating is whether the X object above would be considered released for the purposes of 3.8 p4, given it's probably only released to the O.S. after program termination - it's not clear from reading the Standard whether that stage of a process's "lifetime" is in scope for the Standard's behavioural requirements (my quick search of the Standard didn't clarify this). I'd personally hazard that 3.8p4 applies here, partly because as long as it's ambiguous enough to be argued a compiler writer may feel entitled to allow undefined behaviour in this scenario, but even if the above code doesn't constitute release the scenario's easily amended ala...
int main()
{
X* p = new X();
*(char*)p = 'x'; // token memory reuse...
}
Anyway, however main's implemented the destructor above has a side effect - per "calling a library I/O function"; further, the program's observable behaviour arguably "depends on" it in the sense that buffers that would be affected by the destructor were it to have run are flushed during termination. But is "depends on the side effects" only meant to allude to situations where the program would clearly have undefined behaviour if the destructor didn't run? I'd err on the side of the former, particularly as the latter case wouldn't need a dedicated paragraph in the Standard to document that the behaviour is undefined. Here's an example with obviously-undefined behaviour:
int* p_;
struct X
{
~X() { if (b_) p_ = 0; else delete p_; }
bool b_;
};
X x{true};
int main()
{
p_ = new int();
delete p_; // p_ now holds freed pointer
new (&x){false}; // reuse x without calling destructor
}
When x's destructor is called during termination, b_ will be false and ~X() will therefore delete p_ for an already-freed pointer, creating undefined behaviour. If x.~X(); had been called before reuse, p_ would have been set to 0 and deletion would have been safe. In that sense, the program's correct behaviour could be said to depend on the destructor, and the behaviour is clearly undefined, but have we just crafted a program that matches 3.8p4's described behaviour in its own right, rather than having the behaviour be a consequence of 3.8p4...?
More sophisticated scenarios with issues - too long to provide code for - might include e.g. a weird C++ library with reference counters inside file stream objects that had to hit 0 to trigger some processing such as flushing I/O or joining of background threads etc. - where failure to do those things risked not only failing to perform output explicitly requested by the destructor, but also failing to output other buffered output from the stream, or on some OS with a transactional filesystem might result in a rollback of earlier I/O - such issues could change observable program behaviour or even leave the program hung.
Note: it's not necessary to prove that there's any actual code that behaves strangely on any existing compiler/system; the Standard clearly reserves the right for compilers to have undefined behaviour... that's all that matters. This is not something you can reason about and choose to ignore the Standard - it may be that C++14 or some other revision changes this stipulation, but as long as it's there then if there's even arguably some "dependency" on side effects then there's the potential for undefined behaviour (which of course is itself allowed to be defined by a particular compiler/implementation, so it doesn't automatically mean that every compiler is obliged do something bizarre).
The language specification says nothing about "memory leaks". From the language point of view, when you create an object in dynamic memory, you are doing just that: you are creating an anonymous object with unlimited lifetime/storage duration. "Unlimited" in this case means that the object can only end its lifetime/storage duration when you explicitly deallocate it, but otherwise it continues to live forever (as long as the program runs).
Now, we usually consider a dynamically allocated object become a "memory leak" at the point in program execution when all references (generic "references", like pointers) to that object are lost to the point of being unrecoverable. Note, that even to a human the notion of "all references being lost" is not very precisely defined. What if we have a reference to some part of the object, which can be theoretically "recalculated" to a reference to the entire object? Is it a memory leak or not? What if we have no references to the object whatsoever, but somehow we can calculate such a reference using some other information available to the program (like precise sequence of allocations)?
The language specification doesn't concern itself with issues like that. Whatever you consider an appearance of "memory leak" in your program, from the language point of view it is a non-event at all. From the language point of view a "leaked" dynamically allocated object just continues to live happily until the program ends. This is the only remaining point of concern: what happens when program ends and some dynamic memory is still allocated?
If I remember correctly, the language does not specify what happens to dynamic memory which is still allocated the moment of program termination. No attempts will be made to automatically destruct/deallocate the objects you created in dynamic memory. But there's no formal undefined behavior in cases like that.
The burden of evidence is on those who would think a memory leak could be C++ UB.
Naturally no evidence has been presented.
In short for anyone harboring any doubt this question can never be clearly resolved, except by very credibly threatening the committee with e.g. loud Justin Bieber music, so that they add a C++14 statement that clarifies that it's not UB.
At issue is C++11 §3.8/4:
” For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior.
This passage had the exact same wording in C++98 and C++03. What does it mean?
the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released
– means that one can grab the memory of a variable and reuse that memory, without first destroying the existing object.
if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called
– means that if one does not destroy the existing object before the memory reuse, and the object is one whose destructor is called automatically (e.g. a local automatic variable), then the program has Undefined Behavior, because that destructor would then operate on a no longer existing object.
and any program that depends on the side effects produced by the destructor has undefined behavior
– can't mean literally what it says, because a program always depends on any side effects, by the definition of side effect. Or in other words, there is no way for the program not to depend on the side effects, because then they would not be side effects.
Most likely what was intended was not what finally made its way into C++98, so that what we have at hand is a defect.
From the context one can guess that if a program relies on the automatic destruction of an object of statically known type T, where the memory has been reused to create an object (or objects) that is not a T object, then that's Undefined Behavior.
Those who have followed the commentary may notice that the above explanation of the word “shall” is not the meaning that I assumed earlier. As I see it now, the “shall” is not a requirement on the implementation (what it's allowed to do); it's a requirement on the program (what the code is allowed to do).
Thus, this is formally UB:
#include <iostream>
#include <new>
#include <string>
using namespace std;

auto main() -> int
{
    string s( 666, '#' );
    new( &s ) string( 42, '-' );    // <- Storage reuse.
    cout << s << endl;
    // <- Formal UB, because original destructor implicitly invoked.
}
But this is OK with a literal interpretation:
auto main() -> int
{
    string s( 666, '#' );
    s.~string();
    new( &s ) string( 42, '-' );    // <- Storage reuse.
    cout << s << endl;
    // OK, because of the explicit destruction of the original object.
}
A main problem is that, with a literal interpretation of the standard's paragraph above, it would still be formally OK if the placement new created an object of a different type there, just because of the explicit destruction of the original. But it would not be very OK in practice. Maybe this is covered by some other paragraph of the standard, so that it is also formally UB.
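For concreteness, here's a sketch of that different-type case (my own illustration, not from the standard). The explicit destruction makes it "OK" under the literal reading quoted above, yet the implicit ~string() at the closing brace now runs over storage holding a double, so in practice it will likely corrupt memory or crash:

#include <new>
#include <string>
using namespace std;

auto main() -> int
{
    string s( 666, '#' );
    s.~string();                                         // explicit destruction of the original
    static_assert( sizeof( double ) <= sizeof( string ), "fits" );
    new( &s ) double( 3.14 );                            // <- storage reused for a *different* type
    // At the closing brace ~string() is implicitly invoked on a double:
    // "OK" by the literal reading, not very in-practice OK.
}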
And this is also OK, using the placement new from <new>:
auto main() -> int
{
    char* storage = new char[sizeof( string )];
    new( storage ) string( 666, '#' );
    string const& s = *(
        new( storage ) string( 42, '-' )    // <- Storage reuse.
        );
    cout << s << endl;
    // OK, because no implicit call of original object's destructor.
}
As I see it – now.
It's definitely defined behaviour.
Consider a case where a server keeps running and keeps allocating heap memory, and no memory is released even when there is no longer any use for it.
The end result is that eventually the server will run out of memory and a crash will definitely occur.
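A minimal sketch of that scenario (the sizes and the loop are made up; a real server would allocate per request):

#include <cstddef>

int main() {
    // Keep allocating and never release: eventually operator new throws
    // std::bad_alloc, or the operating system kills the process.
    for (;;) {
        char* request_buffer = new char[1024 * 1024];   // 1 MiB per "request", never deleted
        request_buffer[0] = 'x';                        // pretend to use it
    }
}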
Adding to all the other answers, here is an entirely different approach. Looking at memory allocation in §5.3.4-18 we can see:
If any part of the object initialization described above terminates by throwing an exception and a suitable deallocation function can be found, the deallocation function is called to free the memory in which the object was being constructed, after which the exception continues to propagate in the context of the new-expression. If no unambiguous matching deallocation function can be found, propagating the exception does not cause the object’s memory to be freed. [ Note: This is appropriate when the called allocation function does not allocate memory; otherwise, it is likely to result in a memory leak. —end note ]
If it caused UB here, it would be mentioned; so it is "just a memory leak".
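A small sketch of what that passage describes (the class and its logging operators are made up just to make the calls visible): when the constructor throws inside a new-expression, the matching deallocation function is called to free the storage, so nothing is leaked at that point:

#include <cstddef>
#include <cstdio>
#include <new>
#include <stdexcept>

struct Throws {
    Throws() { throw std::runtime_error("constructor failed"); }

    // Class-specific allocation/deallocation pair, purely for logging.
    static void* operator new(std::size_t n) {
        std::puts("operator new");
        return ::operator new(n);
    }
    static void operator delete(void* p) noexcept {
        std::puts("operator delete");   // called by the new-expression when the constructor throws
        ::operator delete(p);
    }
};

int main() {
    try {
        new Throws;   // allocation succeeds, construction throws, storage is freed
    } catch (const std::exception& e) {
        std::puts(e.what());
    }
}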
In places like §20.6.4-10, a possible garbage collector and leak detector are mentioned. A lot of thought has been put into the concept of safely-derived pointers et al. so that C++ can be used with a garbage collector (C.2.10 "Minimal support for garbage-collected regions").
Thus if it were UB to just lose the last pointer to some object, all that effort would make no sense.
Regarding the claim that "when the destructor has side effects, never running it is UB": I would say this is wrong, otherwise facilities such as std::quick_exit() would be inherently UB too.
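For illustration, a tiny sketch (assuming a C++11 implementation that provides std::quick_exit): the destructor below has an observable side effect, and std::quick_exit terminates the program without ever running it, which is perfectly well-defined:

#include <cstdio>
#include <cstdlib>

struct Noisy {
    ~Noisy() { std::puts("destructor side effect"); }   // never printed below
};

int main() {
    Noisy n;              // automatic object with a side-effecting destructor
    std::quick_exit(0);   // defined behaviour: terminates without running ~Noisy()
}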
If the space shuttle must take off in two minutes, and I have a choice between putting it up with code that leaks memory and code that has undefined behavior, I'm putting in the code that leaks memory.
But most of us aren't usually in such a situation, and if we are, it's probably by a failure further up the line. Perhaps I'm wrong, but I'm reading this question as, "Which sin will get me into hell faster?"
Probably the undefined behavior, but in reality both.
Defined, since a memory leak is just you forgetting to clean up after yourself.
Of course, a memory leak can probably cause undefined behaviour later.
Straightforward answer: the standard doesn't define what happens when you leak memory, thus it is "undefined". It's implicitly undefined though, which is less interesting than the things the standard explicitly declares undefined.
This obviously cannot be undefined behaviour, simply because UB has to happen at some point in time, and forgetting to release memory or to call a destructor does not happen at any particular point in time. What happens is just that the program terminates without ever having released the memory or called the destructor; this does not make the behaviour of the program, or of its termination, undefined in any way.
This being said, in my opinion the standard is contradicting itself in this passage. On one hand it ensures that the destructor will not be called in this scenario, and on the other hand it says that if the program depends on the side effects produced by the destructor then it has undefined behaviour.
Suppose the destructor calls exit: then no program that does anything can pretend to be independent of that, because the side effect of calling the destructor would prevent it from doing what it would otherwise do; but the text also assures that the destructor will not be called, so that the program can go on doing its stuff undisturbed.
I think the only reasonable way to read the end of this passage is that if the proper behaviour of the program would require the destructor to be called, then behaviour is in fact not defined; but this is then a superfluous remark, given that it has just been stipulated that the destructor will not be called.
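To make that concrete, a tiny made-up example: the destructor below would terminate the program with a distinctive exit code, yet because the object is leaked the destructor never runs and the program ends normally, exactly as the "shall not be implicitly called" part promises:

#include <cstdio>
#include <cstdlib>

struct Exiter {
    ~Exiter() { std::exit(42); }   // drastic side effect that never happens
};

int main() {
    Exiter* e = new Exiter;        // leaked: the destructor is never invoked
    (void)e;
    std::puts("reached the end");  // prints; the program returns 0, not 42
}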
Undefined behavior means that what will happen has not been defined or is unknown. The behavior of memory leaks is definitely known in C/C++: they eat away at available memory. The resulting problems, however, cannot always be defined and vary, as described by gameover.