How to tell the destructor is not called? [closed] - c++

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I just had an interview question, the interviewer asked
How can you tell that the destructor is not called when it should have been called?
And what will you do if the destructor is not called?
To be honest, I don't know the answer. My guess was to put the destructor call inside a try/catch block, but I have never seen people do that. Is there a better solution?

There are a number of ways that the destructor of an object can fail to be called:
calling abort or _exit (even exit will leave stack variables undestructed).
having the constructor throw an exception. (Technically, if the constructor threw, the object never started to exist, so there wasn't an object to have its destructor called.)
invoking undefined behaviour (at which point the C++ standard allows anything to happen). Calling delete on an array allocated with new[] is one way of invoking undefined behaviour, and one common behaviour is to call the destructor of the first object only (leaving the second and subsequent objects undestructed) - but it's still undefined behaviour.
Another way to invoke undefined behaviour, and one quite likely to leave a destructor uncalled, is to have a pointer-to-base which actually points to a derived object and call delete on the pointer-to-base. If the base class doesn't have a virtual destructor, you have undefined behaviour.
not yet having called delete on a pointer allocated with new (this is particularly problematic if you have a memory leak). (This is actually a particularly common case of "the destructor is not supposed to have run yet".)
If you are trying to debug a program and want to find out whether the destructor is being invoked:
set a breakpoint and run under the debugger, or
add printf calls (or use whatever logging framework you have).

Here is another classic no-destruction:
#include <iostream>
#include <memory>
class Base
{
public:
    Base()
    {
        std::cout << "All your base, sucker!" << std::endl;
    }
    ~Base() // note the lack of "virtual"
    {
        std::cout << "Base destroyed!" << std::endl;
    }
};
class Sub : public Base
{
public:
    Sub()
    {
        std::cout << "We all live in a Yellow Submarine..." << std::endl;
    }
    ~Sub()
    {
        std::cout << "Sub destroyed" << std::endl;
    }
};
int main()
{
    std::unique_ptr<Base> b(new Sub());
}
Output:
All your base, sucker!
We all live in a Yellow Submarine...
Base destroyed!
Because Base's destructor is not virtual, ~Base is called instead of ~Sub on destruction, and ~Base has no clue that Sub even exists and can't call ~Sub to finish clean-up.

You can, for example, put a static bool in the class you want to test: set it to true in the constructor and to false in the destructor. If the destructor is not called, the bool will remain true. Or it can be a static int, incremented in the constructor and decremented in the destructor (check the counts before and after the scope). This is one of the simplest methods of checking for resource leaks. I have used this technique in unit tests to easily check that the correct destructor was called when a custom smart pointer went out of scope.
The destructor might not be called in many situations, usually as a result of a programming error. For example:
deleting a derived class through a base class pointer when the base has no virtual destructor (then only the base destructor is called)
deleting a pointer to a forward-declared class (this case is tricky, as only some compilers issue a warning)
forgetting to call delete at all (memory leak)
initializing an object by placement new and not calling the destructor manually (which placement new requires)
mismatched array/non-array operators (allocating with new[] and deleting with regular delete; if it does not crash, it only calls the destructor of the first item)

I do not know what the interviewer wanted to ask, as the context is not clear, but the following points may be helpful:
For an object on the stack: the destructor is called as the object goes out of scope.
For an object created on the heap: for each object created by new, a delete will call the destructor. If the program terminates before the delete, the destructor may not be called; such cases need proper handling (I would recommend using smart pointers to avoid them).

Here is an example where the destructor is not called:
#include <iostream>
class A {
public:
    ~A() { std::cout << "Destructor called" << std::endl; }
};
int main()
{
    A *a = new A;
    return 0;
}
There are plenty of other examples, involving casting, static storage duration, and so on.

It's not easy to detect a "negative event": that something didn't happen.
Instead, what we test for is some event which happens unconditionally, and always after the interesting event we are trying to detect (when that event does happen). When that other event happens, we know we are past the point in time when the interesting event should have happened (if it happened at all). At that point, we are justified in looking for positive evidence which determines whether the interesting event happened or not.
For instance, we can have the destructor set some kind of flag, or invoke some callback function, or whatever. We also know that a C++ program executes statements in sequence. So suppose we don't know whether a given destructor was called during the execution of statement S1 in S1 ; S2. We simply arrange for the gathering of evidence prior to executing S1, and then, in or after S2, we look for that evidence (is the flag set, was the callback invoked, ...).
If this is just during debugging, then use your debugger or code coverage tools!
If you're wondering "is this line of code executed when I run such and such", then put a debugger breakpoint on it.
Or run a code coverage tool and then analyze the results: it will tell you how many times the lines of your program were reached. Lines that weren't executed will be flagged as never reached (no coverage). Code coverage can accumulate the coverage info from multiple runs of the program; they can help you find code that is not being hit by your test cases.

Related

Why does CLANG 3.5 on Linux cleans a "std::string" up twice when calling DTOR when throwing in CTOR?

There's a project that targets C++98 without additional dependencies, but it needs to manage dynamically allocated memory. Smart pointers are not available, so code to clean things up manually has been added. The approach is to explicitly set member pointers to NULL in the CTOR, read some data (during which memory might be allocated dynamically), catch any exception that occurs, and clean memory up as necessary by manually calling the DTOR. The DTOR needs to free that memory anyway in the success case; it has simply been enhanced with safeguards that check whether memory was allocated at all.
The following is the most relevant available code for this question:
default_endian_expr_exception_t::doc_t::doc_t(kaitai::kstream* p__io, default_endian_expr_exception_t* p__parent, default_endian_expr_exception_t* p__root) : kaitai::kstruct(p__io) {
    m__parent = p__parent;
    m__root = p__root;
    m_main = 0;
    try {
        _read();
    } catch(...) {
        this->~doc_t();
        throw;
    }
}
void default_endian_expr_exception_t::doc_t::_read() {
    m_indicator = m__io->read_bytes(2);
    m_main = new main_obj_t(m__io, this, m__root);
}
default_endian_expr_exception_t::doc_t::~doc_t() {
    if (m_main) {
        delete m_main; m_main = 0;
    }
}
The most relevant part of the header is the following:
class doc_t : public kaitai::kstruct {
public:
    doc_t(kaitai::kstream* p__io, default_endian_expr_exception_t* p__parent = 0, default_endian_expr_exception_t* p__root = 0);
private:
    void _read();
public:
    ~doc_t();
private:
    std::string m_indicator;
    main_obj_t* m_main;
    default_endian_expr_exception_t* m__root;
    default_endian_expr_exception_t* m__parent;
};
The code is tested in three different environments, clang3.5_linux, clang7.3_osx and msvc141_windows_x64, by explicitly throwing exceptions when reading data and checking whether memory leaks under those conditions. The problem is that this triggers SIGABRT on CLANG 3.5 for Linux only. The most interesting stack frames are the following:
<frame>
<ip>0x577636E</ip>
<obj>/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19</obj>
<fn>std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()</fn>
</frame>
<frame>
<ip>0x5ECFB4</ip>
<obj>/home/travis/build/kaitai-io/ci_targets/compiled/cpp_stl_98/bin/ks_tests</obj>
<fn>default_endian_expr_exception_t::doc_t::doc_t(kaitai::kstream*, default_endian_expr_exception_t*, default_endian_expr_exception_t*)</fn>
<dir>/home/travis/build/kaitai-io/ci_targets/tests/compiled/cpp_stl_98</dir>
<file>default_endian_expr_exception.cpp</file>
<line>51</line>
</frame>
[...]
<frame>
<ip>0x577636E</ip>
<obj>/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19</obj>
<fn>std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()</fn>
</frame>
<frame>
<ip>0x5ED17E</ip>
<obj>/home/travis/build/kaitai-io/ci_targets/compiled/cpp_stl_98/bin/ks_tests</obj>
<fn>default_endian_expr_exception_t::doc_t::~doc_t()</fn>
<dir>/home/travis/build/kaitai-io/ci_targets/tests/compiled/cpp_stl_98</dir>
<file>default_endian_expr_exception.cpp</file>
<line>62</line>
</frame>
Lines 51 and 62 are the last lines of the CTOR and DTOR as provided above, so really the closing braces. It looks like code added by the compiler is simply trying to free the maintained std::string two times, once in the DTOR and an additional time in the CTOR, most likely only when an exception is thrown.
Is this analysis correct at all?
And if so, is this expected behaviour of C++ in general, or of this concrete compiler only? I wonder because the other compilers don't SIGABRT, even though the code is the same for all. Does this mean that different compilers clean up non-pointer members like std::string differently? How does one know how each compiler behaves?
Looking at what the C++ standard says, I would have expected the std::string to be freed only once, by the CTOR, because of the exception:
C++11 15.2 Constructors and destructors (2)
An object of any storage duration whose initialization or destruction is terminated by an exception will have destructors executed for all of its fully constructed subobjects (excluding the variant members of a union-like class), that is, for subobjects for which the principal constructor (12.6.2) has completed execution and the destructor has not yet begun execution.
The destruction is NOT terminated by an exception in this case, only the construction. But because the DTOR is a DTOR, is it designed to clean these things up automatically as well? And if so, with all compilers in general, or only this one?
Is calling a DTOR manually reliable at all?
According to my research, calling a DTOR manually shouldn't be too bad. Is that a wrong impression, and is it a big no-go because of the things I see right now? I had the impression that if a DTOR is called manually, it simply needs to be compatible with being called that way, which the above should be, from my understanding. It only fails because of auto-generated code by the compiler that I wasn't aware of.
How to fix this?
Instead of calling the DTOR manually and triggering the automatically generated code, should one simply use a custom cleanUp function that frees memory and sets pointers to NULL? It should be safe to call that in the CTOR in case of an exception, and always in the DTOR, correct? Or is there some way to keep calling the DTOR in a way that is compatible across all compilers?
Thanks!
Here's a simplified example that resembles your case, and makes the behavior obvious:
#include <iostream>
struct S {
    S()  { std::cout << "S constructed\n"; }
    ~S() { std::cout << "S destroyed\n"; }
};
class Throws {
    S s;
public:
    Throws() {
        try {
            throw 42;
        } catch (int) {
            this->~Throws();
            throw;
        }
    }
};
int main() {
    try {
        Throws t;
    } catch (int) {}
}
Output:
S constructed
S destroyed
S destroyed
Demo with clang, demo with gcc.
The example exhibits undefined behavior, by destroying the same S instance twice. Since the destructor doesn't do much, and in particular doesn't access this, the undefined behavior manifests itself by actually running the destructor twice successfully, so it can be easily observed in action.
Apparently, the OP has doubts that a destructor is supposed to actually destroy the object, together with all its members and base classes. To assuage those doubts, here's the relevant quote from the standard:
[class.dtor]/14 After executing the body of the destructor and destroying any objects with automatic storage duration allocated within the body, a destructor for class X calls the destructors for X’s direct non-variant non-static data members, the destructors for X’s non-virtual direct base classes and, if X is the most derived class (11.10.2), its destructor calls the destructors for X’s virtual base classes...
Once the destructor is called, the object ceases to be (leaving you with uninitialized memory). This means that destructors may omit "finalizing" memory writes, such as setting a pointer to zero (the object ceases to be, so its value cannot ever be read). It also means that basically any further operation on that object is UB.
People assume some leeway in destroying *this if the this pointer is not used in any way afterwards. That is not the case in your example, as the destructor is called twice.
I am aware of exactly one case in which calling the destructor manually is correct, and one where it is mostly correct: the correct case is when the object was created with placement new (in which case there will be no operation that automatically calls the destructor). The mostly-correct case is when destroying the object is immediately followed by re-initializing it via placement new at the very same location.
As to your second question: Why do you want to explicitly call the destructor anyway? As far as I can see, your code should work just fine without all the contortions:
default_endian_expr_exception_t::doc_t::doc_t(kaitai::kstream* p__io, default_endian_expr_exception_t* p__parent, default_endian_expr_exception_t* p__root)
    : kaitai::kstruct(p__io), m__parent(p__parent), m__root(p__root), m_main() {
    _read();
}
The object is initialized to a valid state before the user-provided constructor is run. If _read throws an exception that should still be the case (otherwise fix _read!) and therefore the implicit destructor call should clean up everything nicely.

What is wrong with this c++ code using destruction?

#include <iostream.h>
class a {
public:
    ~a() { cout << 1; }
};
int main()
{
    a ob;
    ob.~a();
    return 0;
}
If it is wrong, what is wrong with it? I tried running this code on Turbo C++, and I get the error
member identifier expected at the "ob.~a();" line
If it is not wrong, what is the output?
You don't call destructor functions explicitly usually. They will be called implicitly when the instance goes out of scope.
Calling a destructor function for the same instance twice leads to undefined behavior.
There's no compiler error to be observed with a modern compiler, though (see here). Maybe issuing an error message here was one of the rare good decisions of the Turbo C++ designers.
There are rare cases to call the destructor function explicitly, e.g. if you're maintaining a pool of instances created with placement new.
In Turbo C++, the call might work written as ob.a::~a().
That being said, you don't need and should not call the destructor explicitly, it is called automatically once the ob object goes out of scope.

Detecting when a "new" item has been deleted [duplicate]

This question already has answers here:
How can I determine if a C++ object has been deallocated?
(6 answers)
Closed 4 years ago.
Consider this program:
int main()
{
    struct test
    {
        test() { cout << "Hello\n"; }
        ~test() { cout << "Goodbye\n"; }
        void Speak() { cout << "I say!\n"; }
    };
    test* MyTest = new test;
    delete MyTest;
    MyTest->Speak();
    system("pause");
}
I was expecting a crash, but instead this happened:
Hello
Goodbye
I say!
I'm guessing this is because when memory is marked as deallocated it isn't physically wiped, and since the code references it straight away the object is still to be found there, wholly intact. The more allocations made before calling Speak() the more likely a crash.
Whatever the reason, this is a problem for my actual, threaded code. Given the above, how can I reliably tell if another thread has deleted an object that the current one wants to access?
There is no platform-independent way of detecting this, without having the other thread(s) set the pointer to NULL after they've deleted the object, preferably inside a critical section, or equivalent.
The simple solution is: design your code so that this can't occur. Don't delete objects that might be needed by other threads. Clear up shared resources only once it's safe.
I was expecting a crash, but instead
this happened:
That is because Speak() is not accessing any members of the class. The compiler does not validate pointers for you, so it calls Speak() like any other function call, passing the (deleted) pointer as the hidden 'this' parameter. Since Speak() does not access that parameter for anything, there is no reason for it to crash.
I was expecting a crash, but instead this happened:
Undefined Behaviour means anything can happen.
Given the above, how can I reliably tell if another thread has deleted an object that the current one wants to access?
How about you set the MyTest pointer to zero (or NULL)? That will make it clear to other threads that it's no longer valid. (Of course, if your other threads have their own pointers to the same memory, you've designed things wrong. Don't go deleting memory that other threads may use.)
Also, you absolutely can't count on it working the way it has. That was lucky. Some systems will corrupt memory immediately upon deletion.
Although it's best to improve the design to avoid accessing a deleted object, you can add a debug feature to find the locations where deleted objects are accessed:
Make all methods and the destructor virtual.
Check that your compiler creates an object layout where the pointer to the vtable is at the front of the object.
Make the pointer to the vtable invalid in the destructor.
This dirty trick causes every function call to read the address the vtable pointer points to, raising a null-pointer access on most systems. Catch the exception in the debugger.
If you hesitate to make all methods virtual, you can instead create an abstract base class and inherit from it. This lets you remove the virtual functions with little effort; only the destructor inside the class needs to be virtual.
Example:
struct Itest
{
    virtual void Speak() = 0;
    virtual void Listen() = 0;
};
struct test : public Itest
{
    test() { cout << "Hello\n"; }
    virtual ~test() {
        cout << "Goodbye\n";
        // as the last statement!
        *(DWORD*)this = 0; // invalidate vtbl pointer
    }
    void Speak() { cout << "I say!\n"; }
    void Listen() { cout << "I heard\n"; }
};
You might use reference counting in this situation. Any code that dereferences the pointer to the allocated object will increment the counter. When it's done, it decrements. At that time, iff the count hits zero, deletion occurs. As long as all users of the object follow the rules, nobody access the deallocated object.
For multithreading purposes, I agree with the other answer that it's best to follow design principles that don't leave code 'hoping' a condition is true. From your original example, were you going to catch an exception as a way to tell whether the object was deallocated? That relies on a side effect, and not even a reliable one, which I would only use as a last resort.
This is not a reliable way to "test" if something has been deleted elsewhere because you are invoking undefined behavior - that is, it may not throw an exception for you to catch.
Instead, use std::shared_ptr or boost::shared_ptr and count references. You can force a shared_ptr to delete its contents using shared_ptr::reset(). Then you can check whether it was deleted later using shared_ptr::use_count() == 0.
You could use some static and runtime analyzer like valgrind to help you see these things, but it has more to do with the structure of your code and how you use the language.
// Lock on MyTest here.
test* tmp = MyTest;
MyTest = NULL;
delete tmp;
// Unlock MyTest here.

if (MyTest != NULL)
    MyTest->Speak();
One solution, not the most elegant...
Place mutexes around your list of objects; when you delete an object, mark it as null. When you use an object, check for null. Since access is serialized, you'll have a consistent operation.

Is the C++ compiler optimizer allowed to break my destructor ability to be called multiple times?

We once had an interview with a very experienced C++ developer who couldn't answer the following question: is it necessary to call the base class destructor from the derived class destructor in C++?
Obviously the answer is no: C++ will call the base class destructor automagically anyway. But what if we attempt the call regardless? As I see it, the result will depend on whether the base class destructor can be called twice without invoking erroneous behavior.
For example in this case:
class BaseSafe {
public:
    ~BaseSafe()
    {
    }
private:
    int data;
};
class DerivedSafe : public BaseSafe {
public:
    ~DerivedSafe()
    {
        BaseSafe::~BaseSafe();
    }
};
everything will be fine - the BaseSafe destructor can be called twice safely and the program will run all right.
But in this case:
class BaseUnsafe {
public:
    BaseUnsafe()
    {
        buffer = new char[100];
    }
    ~BaseUnsafe()
    {
        delete[] buffer;
    }
private:
    char* buffer;
};
class DerivedUnsafe : public BaseUnsafe {
public:
    ~DerivedUnsafe()
    {
        BaseUnsafe::~BaseUnsafe();
    }
};
the explicit call will run fine, but then the implicit (automagic) call to the destructor will trigger a double delete and undefined behavior.
It looks easy to avoid the UB in the second case: just set buffer to a null pointer after the delete[].
But will this help? The destructor is expected to run only once, on a fully constructed object, so the optimizer could decide that setting buffer to a null pointer makes no sense and eliminate that code, exposing the program to the double delete.
Is the compiler allowed to do that?
Standard 12.4/14:
Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8).
So I guess the compiler should be free to optimize away the setting of buffer to null, since the object no longer exists after calling the destructor.
But even if the setting of the buffer to null wasn't removed by the compiler, it seems like calling the destructor twice would result in UB.
Calling the destructor converts an object into raw memory. You cannot destruct raw memory; that is undefined behaviour. The C++ compiler is entitled to do anything it wants. While it is unlikely to turn your computer into cottage cheese, it might deliberately trigger a slap-on-the-wrist SEGFAULT (at least in debug mode).

method running on an object BEFORE the object has been initialised?

#include <iostream>
using namespace std;
class Foo
{
public:
    Foo() : initialised(0)
    {
        cout << "Foo() gets called AFTER test() ?!" << endl;
    }
    Foo test()
    {
        cout << "initialised= " << initialised << " ?! - ";
        cout << "but I expect it to be 0 from the 'initialised(0)' initialiser on Foo()" << endl;
        cout << "this method test() is clearly working on an uninitialised object ?!" << endl;
        return Foo();
    }
    ~Foo()
    {}
private:
    int initialised;
};
int main()
{
    // SURE this is bad coding, but it compiles and runs.
    // I want my class to DETECT and THROW an error to prevent this type of coding,
    // in other words to catch it at run time and throw "not initialised" or something.
    Foo foo = foo.test();
}
Yes, it is calling the function on a not-yet-constructed object, which is undefined behavior. You can't detect it reliably. I would also argue you should not try to detect it. It's not something likely to happen by accident, compared to, for example, calling a function on an already deleted object. Trying to catch every possible mistake is just about impossible. The declared name is already visible in its own initializer, for other, useful purposes. Consider this:
Type *t = (Type*)malloc(sizeof(*t));
Which is a common idiom in C programming, and which still works in C++.
Personally, I like this story by Herb Sutter about null references (which are likewise invalid). The gist is: don't try to protect against cases that the language clearly forbids, in particular those which are in the general case impossible to diagnose reliably. You will acquire a false sense of security over time, which is quite dangerous. Instead, train your understanding of the language and design interfaces in ways (avoid raw pointers, ...) that reduce the chance of mistakes.
In C++ and likewise in C, many cases are not explicitly forbidden, but rather are left undefined. Partially because some things are rather difficult to diagnose efficiently and partially because undefined behavior lets the implementation design alternative behavior for it instead of completely ignoring it - which is used often by existing compilers.
In the above case for example, any implementation is free to throw an exception. There are other situations that are likewise undefined behavior which are much harder to diagnose efficiently for the implementation: Having an object in a different translation unit accessed before it was constructed is such an example - which is known as the static initialization order fiasco.
The constructor is the method you want (it runs not before initialization but on initialization, which should be OK). The reason it doesn't work in your case is that you have undefined behavior here.
In particular, you use the not-yet-existent foo object to initialize itself (the foo in foo.test() doesn't exist yet). You can solve it by creating an object explicitly:
Foo foo = Foo().test();
You cannot check for it in the program, but maybe valgrind could find this type of bug (like any other uninitialized memory access).
You can't prevent people from coding poorly, really. It works just like it "should":
Allocate memory for Foo (this becomes the value of the "this" pointer).
Call Foo::test as Foo::test(this), which:
reads this->initialised, which is random junk, then
calls Foo's default constructor (because of return Foo();), then
calls Foo's copy constructor to copy the right-hand Foo().
Just like it should. You can't prevent people from not knowing the right way to use C++.
The best you could do is have a magic number:
class A
{
public:
    A(void) :
        _magicFlag(1337)
    {
    }
    void some_method(void)
    {
        assert(_magicFlag == 1337); /* make sure the constructor has been called */
    }
private:
    unsigned _magicFlag;
};
This "works" because the chance that _magicFlag gets allocated where the value is already 1337 is low.
But really, don't do this.
You're getting quite a few responses that basically say, "you shouldn't expect the compiler to help you with this". However, I'd agree with you that the compiler should help with this problem with some sort of diagnostic. Unfortunately (as the other answers point out), the language spec doesn't help here - once you get to the initializer part of the declaration, the newly declared identifier is in scope.
A while back, DDJ had an article about a simple debugging class called "DogTag" that could be used as a debugging aid to help with:
using an object after deletion
overwriting an object's memory with garbage
using an object before initializing it
I haven't used it much, but it did come in handy on an embedded project that was running into some memory overwrite bugs.
It's basically an elaboration of the "MagicFlag" technique that GMan described.