What is wrong with this C++ code using explicit destruction?

#include <iostream.h>

class a {
public:
    ~a() { cout << 1; }
};

int main()
{
    a ob;
    ob.~a();
    return 0;
}
If it is wrong, then what exactly is wrong with it? I've tried running this code on Turbo C++, and I'm still getting the error
"member identifier expected" at the line ob.~a();
Otherwise, guess the output?

You don't call destructor functions explicitly usually. They will be called implicitly when the instance goes out of scope.
Calling a destructor function for the same instance twice leads to undefined behavior.
There's no compiler error to be observed with a modern compiler, though. Maybe flagging such code with an error message was one of the rare good decisions of the Turbo C++ designers.
There are rare cases where you do call the destructor explicitly, e.g. if you're maintaining a pool of instances created with placement new.
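As a minimal sketch of such a pool-style case (the names widget and buffer are just for illustration), the object is constructed into raw storage with placement new and must then be destroyed with an explicit destructor call:

#include <iostream>
#include <new>  // declares placement operator new

struct widget {
    widget()  { std::cout << "ctor\n"; }
    ~widget() { std::cout << "dtor\n"; }
};

int main()
{
    // Raw storage; no widget object exists in it yet.
    alignas(widget) unsigned char buffer[sizeof(widget)];

    widget* w = new (buffer) widget;  // placement new: construct into the buffer
    w->~widget();                     // explicit destructor call is required here,
                                      // since no delete will ever run on this object
    return 0;
}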

The call might work with ob.a::~a().
That being said, you don't need to and should not call the destructor explicitly; it is called automatically once the ob object goes out of scope.

Related

What is the effect of calling a virtual method by a base class pointer bound to a derived object that has been deleted

The following questions are:
p->test() should not work after b is destroyed. However, the code runs without any issue; the dynamic binding still works.
When the destructor of A is defined, the dynamic binding does not work anymore. What is the logic behind it?
#include <iostream>
using namespace std;

struct A {
    //~A() {}
    virtual void test() { cout << 0 << endl; }
};

class B : public A {
    void test() { cout << 1 << endl; }
};

int main() {
    A* p;
    {
        B b;
        p = &b;
    }
    p->test(); // the method called will be different if the destructor of A is removed
}
p->test() should not work after b is destroyed. However, the code runs without any issue; the dynamic binding still works.
It does not "work". p->test() invokes undefined behavior.
When the destructor of A is defined, the dynamic binding does not work anymore. What is the logic behind it?
There is no logic behind it, other than implementation details of the compiler you are using, because the C++ standard does not mandate what a compiler should do with code that has undefined behavior.
For more details on undefined behavior I refer you to https://en.cppreference.com/w/cpp/language/ub
Compilers cannot detect all undefined behavior, but some of it. With gcc you can try -fsanitize=address to see the issue more clearly: https://godbolt.org/z/qxTs4sxcW.
Welcome to the world of Undefined Behaviour! Any access to a destroyed object, including calling a method on it, invokes undefined behaviour.
That means the language requires no specific behaviour. Depending on the internals of the compiler, and possibly on apparently unrelated things on the computer, anything can happen, from the expected behaviour (which you experienced) to an immediate crash or unexpected results occurring immediately or later.
Never rely on the observed result of a UB operation: it can change from one compilation to the next even with the same configuration, or even from one run to the other if it depends on uninitialized memory.
Worse: if the compiler can detect UB in a portion of code, it can optimize out that portion because the language allows it to assume that a program should never invoke UB.
TL/DR: you are invoking UB here. Don't. Just don't.
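For reference, here is a minimal sketch (not from the answers above) of a well-defined version of the snippet: b simply has to outlive every use of p, and then it no longer matters whether A declares a destructor.

#include <iostream>

struct A {
    virtual ~A() = default;
    virtual void test() { std::cout << 0 << std::endl; }
};

struct B : A {
    void test() override { std::cout << 1 << std::endl; }
};

int main() {
    B b;        // b now outlives every use of p
    A* p = &b;
    p->test();  // well-defined: dynamic dispatch prints 1
}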

How to tell the destructor is not called? [closed]

I just had an interview question, the interviewer asked
How do you tell that the destructor is not called when it should have been called?
And what would you do if the destructor is not called?
To be honest, I don't know the answer. My guess is to put the destructor call inside a try/catch block, but I have never seen people doing that. Is there a better solution?
There are a number of ways that the destructor of an object can fail to be called:
call abort or _exit (even exit will leave stack variables undestructed; see the sketch after this list).
have the constructor throw an exception. (Technically, if the constructor threw, the object never started to exist, so there wasn't an object to have its destructor called.)
invoke undefined behaviour (at which point the C++ standard allows anything to happen). Calling delete on an array allocated with new[] is one way of invoking undefined behaviour, and one common behaviour is to call the destructor of the first object only (leaving the second and subsequent ones undestructed), but it's still undefined behaviour.
Another way to invoke undefined behaviour, one which is quite likely to leave a destructor uncalled, is to have a pointer-to-base which actually points to a derived object and call delete on the pointer-to-base. If the base class doesn't have a virtual destructor, you have undefined behaviour.
you have not yet called delete on a pointer allocated with new (this is particularly problematic if you have a memory leak). (This is actually a particularly common case of "the destructor is not supposed to have run yet".)
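Here is a minimal sketch of the exit case from the first bullet (the class name Tracer is only illustrative): the automatic object's destructor never runs, so nothing is printed.

#include <cstdlib>
#include <iostream>

struct Tracer {
    ~Tracer() { std::cout << "destroyed\n"; }  // never printed in this program
};

int main()
{
    Tracer t;      // automatic (stack) object
    std::exit(0);  // exit does not destroy automatic objects, so ~Tracer is never called
}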
If you are trying to debug a program and want to find out if the destructor is being invoked, then
set a breakpoint and run under the debugger
use printf or whatever logging framework you are using.
Here is another classic no-destruction:
#include <iostream>
#include <memory>

class Base
{
public:
    Base()
    {
        std::cout << "All your base, sucker!" << std::endl;
    }
    ~Base() // <-- note lack of virtual
    {
        std::cout << "Base destroyed!" << std::endl;
    }
};

class Sub : public Base
{
public:
    Sub()
    {
        std::cout << "We all live in a Yellow Submarine..." << std::endl;
    }
    ~Sub()
    {
        std::cout << "Sub destroyed" << std::endl;
    }
};

int main()
{
    std::unique_ptr<Base> b(new Sub());
}
Output:
All your base, sucker!
We all live in a Yellow Submarine...
Base destroyed!
Because Base's destructor is not virtual, ~Base is called instead of ~Sub on destruction, and ~Base has no clue that Sub even exists and can't call ~Sub to finish clean-up.
You can, for example, put a static bool in the class you want to test, set it to true in the constructor and to false in the destructor; if the destructor is not called, the bool remains true. Or it can be a static int, incremented in the constructor and decremented in the destructor (and check the counts before and after the scope). This is one of the simplest methods to check for resource leaks. I have already used this technique in unit tests to easily check whether the correct constructor had been called when a custom smart pointer went out of scope.
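A small sketch of that counting technique, assuming nothing beyond the standard library (the class name Tracked is only illustrative):

#include <cassert>

class Tracked {
public:
    static int live_count;              // incremented by the constructor, decremented by the destructor
    Tracked()  { ++live_count; }
    ~Tracked() { --live_count; }
};
int Tracked::live_count = 0;

int main()
{
    {
        Tracked a;                      // destroyed at the end of this block
        Tracked* leaked = new Tracked;  // deliberately never deleted
        (void)leaked;
    }
    // If every destructor had run, the count would be back to 0.
    assert(Tracked::live_count == 1);   // one destructor was not called
}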
The destructor might not be called in many situations, usually as a result of a programming error. For example:
deleting a derived-class object through a base class pointer without a virtual destructor (then only the base destructor is called)
deleting a pointer to a forward-declared class (this case is tricky, as only some compilers issue a warning)
forgetting to call delete at all (a memory leak)
initializing an object with placement new and not calling the destructor manually (which is required with placement new)
mismatched array/non-array operators (allocating with new[] and deleting with plain delete; if it does not crash, it typically calls only the destructor of the first item)
I do not know exactly what the interviewer wanted to ask, as the context is not clear, but the points below may be helpful.
For an object on the stack, the destructor is called as the object goes out of scope.
For an object created on the heap, each object created by new needs a matching delete, and that delete calls the destructor. If the program terminates before the delete, the destructor may not be called; such cases need proper handling (I would recommend using smart pointers to avoid them).
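A minimal sketch of the smart-pointer suggestion (the class name Resource is only illustrative): the unique_ptr guarantees the destructor runs even though the object lives on the heap.

#include <iostream>
#include <memory>

struct Resource {
    ~Resource() { std::cout << "Destructor called" << std::endl; }
};

int main()
{
    auto r = std::make_unique<Resource>();  // heap object owned by a smart pointer
    return 0;                               // unique_ptr deletes it here, so the destructor runs
}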
Here is an example where the destructor is not called:
#include <iostream>

class A {
public:
    ~A() { std::cout << "Destructor called" << std::endl; }
};

int main()
{
    A *a = new A;  // never deleted, so the destructor is never called
    return 0;
}
There are plenty of other examples. Like casting, static, ...
It's not easy to detect a "negative event": that something didn't happen.
Instead, what we test for is some event which happens unconditionally, and always after the interesting event that we are trying to detect (when that event does happen). When that other event happens, we know that we are past the point in time when the interesting event should have happened (if it happened at all). At that point, we are justified in looking for positive evidence which determines whether the interesting event happened or not.
For instance, we can have the destructor set some kind of flag, or invoke some callback function, or whatever. We also know that a C++ program executes statements in sequence. So suppose we don't know whether a given destructor is called during the execution of statement S1 in S1; S2. We simply arrange for the gathering of evidence prior to executing S1, and then, in or after S2, we look for that evidence (is the flag set, was the callback invoked, ...).
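As a small sketch of that idea (the names Probe, destructor_ran and s1 are only illustrative), the destructor records the evidence and we inspect it after the statement of interest:

#include <iostream>

bool destructor_ran = false;  // the evidence we gather

struct Probe {
    ~Probe() { destructor_ran = true; }
};

void s1()
{
    Probe p;  // if s1 returns normally, p's destructor must run
}

int main()
{
    s1();                                                   // the statement S1 under test
    std::cout << std::boolalpha << destructor_ran << "\n";  // S2: inspect the evidence (prints true)
}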
If this is just during debugging, then use your debugger or code coverage tools!
If you're wondering "is this line of code executed while I run such and such", then put a debugger breakpoint on it.
Or run a code coverage tool and then analyze the results: it will tell you how many times the lines of your program were reached. Lines that weren't executed will be flagged as never reached (no coverage). Code coverage can accumulate the coverage info from multiple runs of the program; they can help you find code that is not being hit by your test cases.

Using exit and a global object

I have the following program where I am calling exit() in the destructor. When I create an object of type sample inside main(), the destructor is called once and the program exits normally. But when I create a global object of type sample, "Destructing.." gets printed infinitely. Can anyone please explain why?
#include "iostream"
#include "conio.h"
using namespace std;
class sample
{
public:
~sample() {
cout <<"Destructing.."<<endl;
exit(0);
}
};
sample obj;
int main()
{
getch();
}
What's happening is that the exit() function gets the program to call the destructors of all the global objects. And since, at the point where your class's destructor calls exit(0), the object is not yet considered to be destructed, the destructor gets called again, resulting in an infinite loop.
You could get away with this:
class sample {
    bool exiting;
public:
    sample() { exiting = false; }
    ~sample() {
        cout << "Destructing.." << endl;
        if (exiting) return;
        exiting = true;
        exit(0);
    }
};
But having a destructor call exit() is a Bad Idea. Consider one of these alternatives:
create a separate normal (non-destructor) method for exiting
create a function that runs until the "program" is finished and call it from main()
use abort() instead of exit() (Thanks goldilocks for mentioning this one). abort() bypasses all of the cleaning up that is normally done when exit() is called and when main() returns. However this is not necessarily a good idea either, as certain cleanup operations in your program could be quite critical. abort() is meant only for errors that are already so bad that bypassing cleanup is warranted.
I suggested exceptions before, but then remembered the problems with throwing exceptions from inside destructors and changed my mind.
Note also that the behavior is not consistent: some compilers/environments result in an infinite loop, some don't. It comes down to the point in the destructor at which the object is considered to be destroyed. I would guess the standard either doesn't cover this or says the behavior in this case is undefined.
I agree with Micheal Slade that doing that in a destructor is a sign of bad design. But if you think you have a good reason to do so (eg, development issue), use abort() instead of exit(0). This will prevent any more destructors from being called and get you out of the recursive loop.
Your destructor prints "Destructing..", then calls exit, which in turn runs the destructor again, which calls exit, which in turn calls ... (yes, it goes on for a very long time).

What happens in C++ when I pass an object by reference and it goes out of scope?

I think this question is best asked with a small code snippet I just wrote:
#include <iostream>
using namespace std;

class BasicClass
{
public:
    BasicClass()
    {
    }
    void print()
    {
        cout << "I'm printing" << endl;
    }
};

class FriendlyClass
{
public:
    FriendlyClass(BasicClass& myFriend) :
        _myFriend(myFriend)
    {
    }
    void printFriend()
    {
        cout << "Printing my friend: ";
        _myFriend.print();
    }
private:
    BasicClass& _myFriend;
};

int main(int argc, char** argv)
{
    FriendlyClass* fc;
    {
        BasicClass bc;
        fc = new FriendlyClass(bc);
        fc->printFriend();
    }
    fc->printFriend();
    delete fc;
    return 0;
}
The code compiles and runs fine using g++:
$ g++ test.cc -o test
$ ./test
Printing my friend: I'm printing
Printing my friend: I'm printing
However, this is not the behavior I was expecting. I was expecting some sort of failure on the second call to fc->printFriend(). Is my understanding of how the passing/storing by reference works incorrect or is this something that just happens to work on a small scale and would likely blow up in a more sophisticated application?
It works exactly as for pointers: using something (pointer/reference) that refers to an object that no longer exists is undefined behavior. It may appear to work but it can break at any time.
Warning: what follows is a quick explanation of why such method calls can seem to work on some occasions, purely for informative purposes; when writing actual code you should rely only on what the standard says.
As for the behavior you are observing: on most (all?) compilers method calls are implemented as function calls with a hidden this parameter that refers to the instance of the class on which the method is going to operate. But in your case, the this pointer isn't being used at all (the code in the function is not referring to any field, and there's no virtual dispatch), so the (now invalid) this pointer is not used and the call succeeds.
In other instances it may appear to work even if it's referring to an out-of-scope object because its memory hasn't been reused yet (although the destructor has already run, so the method will probably find the object in an inconsistent state).
Again, you shouldn't rely on this information, it's just to let you know why that call still works.
When you store a reference to an object that has ended its lifetime, accessing it is undefined behavior. So anything can happen: it can work, it can fail, it can crash and, as I like to say, it can order a pizza.
Undefined behavior. By definition you cannot make assumptions about what will happen when that code runs. The compiler may not be clearing out the memory where bc resides yet, but you can't count on it.
I actually fixed the same bug in a program at work once. When using Intel's compiler the variable which had gone out of scope had not been "cleaned up" yet, so the memory was still "valid" (but the behavior was undefined). Microsoft's compiler however cleaned it up more aggressively and the bug was obvious.
You have a dangling reference, which results in undefined behavior.
look here: Can a local variable's memory be accessed outside its scope?
It will almost always "work", since you have no virtual functions and you don't access any fields of BasicClass: all the methods you call have static binding and this is never actually used, so you never really touch the deallocated memory.
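For completeness, a hedged sketch of one way to make the example well-defined: with BasicClass and FriendlyClass exactly as in the question, keep the referenced object alive for as long as the reference is used, and skip new/delete entirely.

int main()
{
    BasicClass bc;         // lives until the end of main
    FriendlyClass fc(bc);  // stores a reference to a still-living object
    fc.printFriend();      // well-defined
    fc.printFriend();      // still well-defined
    return 0;
}                          // fc is destroyed first, then bc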

calling constructor of the class in the destructor of the same class

Experts!! I know this question is a lousy one, but I still dared to open my mind, hoping I would learn from you all.
I was trying some examples as part of my routine and did this horrible thing: I called the constructor of the class from the destructor of the same class.
I don't really know if this is ever required in real programming; I can't think of any real-world scenarios where we really need to call functions or the constructor in our destructor. Usually, a destructor is meant for cleaning up.
If my understanding is correct, why doesn't the compiler complain? Is this because it is valid for some good reasons? If so, what are they?
I tried the Sun Forte, g++ and VC++ compilers and none of them complain about it.
Edit: I thank everyone for their answers. I think I didn't make my point clearly: I knew the result, that it ends up recursing and the program can crash, but the question is actually about the destructor being allowed to create an object.
#include <iostream>
using namespace std;

class test {
public:
    test() {
        cout << "CTOR" << endl;
    }
    ~test() {
        cout << "DTOR" << endl;
        test();
    }
};
When the following runs
test();
you construct a temporary (new) object that is immediately destroyed when control "passes the semicolon". The destructor for that temporary object is then invoked, which constructs another temporary object, and so on, so you get a death spiral of endless recursive calls that leads to a stack overflow and crashes your program.
Prohibiting the destructor from creating temporary objects would be ridiculous; it would severely limit what code you could write. It also makes no sense: the destructor is destroying the current object, and those temporary objects are completely irrelevant to it, so enforcing such constraints on them is meaningless.
As far as I understand, you're simply instantiating a new test object in the destructor and leaving it at that.
This code, which gives the test instances a large size, actually produces a stack overflow very quickly, because of the infinite recursion:
#include <iostream>
using namespace std;

class test {
public:
    int a[10000];
    test() {
    }
    ~test() {
        test();
    }
};

int main() {
    test t;
}
C++ does not require that a warning be issued for infinite recursion, and in general it is very difficult to detect.
Static analysis tools are the things that should complain here. To me, your case is not very different from the following:
void foo();

void bar()
{
    foo();
}

void foo()
{
    bar();
}
I don't know whether there are any compilers that will complain about the above code, but this example is much simpler than yours, and there can be many more like it.
EDIT:
In your case the problem is much simpler. It's an ordinary infinite recursion, because the idea behind your destructor is somewhat like this:
~test()
{
    cout << "DTOR" << endl;
    test tmp;      // construct a temporary object ...
    tmp.~test();   // ... and immediately destroy it: infinite recursion
}
I see no reason why it should be illegal, but I admit I'm struggling to come up with a decent example of why I'd do this. To make it "work" you'd need to conditionally call the c'tor rather than unconditionally.
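A hedged sketch of that conditional approach, purely for illustration (the guard flag in_dtor is my own addition, not something from the question): only the outermost destructor constructs a temporary, so the recursion terminates.

#include <iostream>
using namespace std;

class test {
    static bool in_dtor;  // guard so the recursion stops after one level
public:
    test()  { cout << "CTOR" << endl; }
    ~test() {
        cout << "DTOR" << endl;
        if (!in_dtor) {   // only the outermost destructor builds a temporary
            in_dtor = true;
            test();       // constructed and destroyed exactly once more
            in_dtor = false;
        }
    }
};
bool test::in_dtor = false;

int main()
{
    test t;  // prints CTOR, then DTOR CTOR DTOR
}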