I'm trying to understand the behavior of exceptions in C++.
I wrote the following code:
#include <iostream>
using namespace std;

class A {
public:
    A() {}
    ~A() { cout << "hello"; }
};

int exceptionTest() {
    throw "blablabla";
}

int main() {
    A sd;
    int test = exceptionTest();
    return 0;
}
I've noticed that in this case the destructor gets called even though no one caught the exception.
If I change the "main" code to:
int main() {
    A* sd = new A();
    int test = exceptionTest();
    return 0;
}
The destructor will not be called.
Can anyone please tell me the reason for the different behavior?
Thanks,
Li
The fact that you are throwing an exception is irrelevant here. In your first example, sd is an object that exists on the stack. When execution exits its scope, for whatever reason, it gets destroyed. In the second example, sd is a pointer to an object that was explicitly allocated using new. This object will not be destroyed until that pointer is passed to delete; since you never do so, your program is currently leaking it.
The standard has the following to say on the matter:
-9- If no matching handler is found in a program, the function terminate() is called; whether or not the stack is unwound before this call to terminate() is implementation-defined.
So your compiler performs stack unwinding (invoking destructors of locals), others may not. For example, with G++ or codepad.org, this program will not output "hello".
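Putting the two points together, here is a minimal sketch (my own, not from the question) of how the second version can be made to print "hello": give ownership of the heap object to a smart pointer and catch the exception somewhere, so unwinding is guaranteed and the smart pointer's destructor deletes the A:

#include <iostream>
#include <memory>

class A {
public:
    ~A() { std::cout << "hello"; }
};

int exceptionTest() {
    throw "blablabla";
}

int main() {
    try {
        // The unique_ptr owns the heap-allocated A; its destructor deletes
        // the object during stack unwinding once the exception is caught.
        std::unique_ptr<A> sd(new A());
        int test = exceptionTest();
        (void)test;
    } catch (const char* msg) {
        std::cout << "caught: " << msg << '\n';
    }
    return 0;
}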
Dynamically allocated objects are not destroyed until you explicitly destroy them (with delete or such). In particular, if an exception occurs in the meantime, code may never reach the deallocation statement.
Local variable destructors are called automatically, as soon as the variable is out of scope.
Destructors are never called automatically for objects you only hold through a raw pointer, so you must delete the object yourself.
I've noticed that in this case the destructor gets called even though no one caught the exception.
That's exactly what you should expect.
This is a consequence of RAII: it makes you "sure" that resources will be freed even if an exception is thrown. For example:
class File
{
public:
    File( const std::string filename ) : file_handler(file_open( filename )) { } // whatever the implementation
    ~File() { file_close(file_handler); }
private:
    FileHandler file_handler;
};

void test() { throw "This is a test"; }

int main()
{
    File file("test.txt");
    test();
    return false;
}
You're assured that the file will be closed even with the throw, so use RAII to manage your resources.
That's because, as the thrown exception propagates back up the call stack looking for a handler, local objects are destroyed just as if they had gone out of scope normally (keeping in mind that, as quoted above, whether this unwinding happens at all when no handler exists is implementation-defined).
This is not really an answer, but it might clarify the behavior of the RAII mechanism as I understood it from the other answers and Mike's comments.
#include <iostream>

class Bar
{
public:
    Bar()  { std::cout << "Bar constructor" << std::endl; }
    ~Bar() { std::cout << "Bar destructor" << std::endl; }
};

void foo()
{
    throw("Exception");
}

int main()
{
    // Variation: add { here to create a new scope
    Bar bar;
    foo();
    // Variation: }
    return 0;
}
Using g++, this code, where the exception is not caught, will output the following:
Bar constructor
terminate called after throwing an instance of 'char const*'
Aborted
Meaning that g++ does not unwind the stack (or let the variable go out of scope, if I understand the "variation" correctly), so the destructor is not called.
However, if you catch the exception:
#include <iostream>

class Bar
{
public:
    Bar()  { std::cout << "Bar constructor" << std::endl; }
    ~Bar() { std::cout << "Bar destructor" << std::endl; }
};

void foo()
{
    throw("Exception");
}

int main()
{
    try
    {
        Bar bar;
        foo();
    }
    catch (...)
    {
        // Nothing here
    }
    return 0;
}
then the output will be
Bar constructor
Bar destructor
and you recover the correct behavior.
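One more variation I find instructive (my own sketch, not from the original answer): even if the handler only rethrows, the destructor still runs, because a matching handler was found and the stack is unwound up to it before the rethrown exception finally terminates the program:

#include <iostream>

class Bar
{
public:
    Bar()  { std::cout << "Bar constructor" << std::endl; }
    ~Bar() { std::cout << "Bar destructor" << std::endl; }
};

void foo()
{
    throw("Exception");
}

int main()
{
    try
    {
        Bar bar;
        foo();
    }
    catch (...)
    {
        // bar was already destroyed while unwinding to this handler;
        // rethrowing now leaves no handler, so terminate() is called,
        // but "Bar destructor" has already been printed.
        throw;
    }
    return 0;
}

With g++ this should print "Bar constructor" and "Bar destructor" before the usual terminate message, since unwinding up to a matching handler is always guaranteed.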
Error handling is a challenge in C++ constructors. There are several common approaches, but all of them have obvious disadvantages. Throwing exceptions, for example, may leak resources that were allocated earlier in the constructor, making it an error-prone approach. Using a static init() method is another common solution, but it goes against the RAII principle.
Studying the subject, I found this answer and blog suggesting the use of the C++17 feature std::optional<>, and I found it promising. However, it seems that this kind of solution comes with an underlying problem: it triggers the destructor as soon as the user retrieves the object.
Here is a simple code example describing the problem; my code is based on the above sources:
#include <iostream>
#include <optional>

class A
{
public:
    A(int myNum);
    ~A();
    static std::optional<A> make(int myNum);
    bool isBuf() { return _buf; };
private:
    char* _buf;
};

std::optional<A> A::make(int myNum)
{
    std::cout << "A::make()\n";
    if (myNum < 8)
        return {};
    return A(myNum);
}

A::A(int myNum)
{
    std::cout << "A()\n";
    _buf = new char[myNum];
}

A::~A()
{
    std::cout << "~A()\n";
    delete[] _buf;
}

int main()
{
    if (std::optional<A> a = A::make(42))
    {
        if (a->isBuf())
            std::cout << "OK\n";
        else
            std::cout << "NOT OK\n";
        std::cout << "if() finished\n";
    }
    std::cout << "main finished\n";
}
The output of this program will be:
A::make()
A()
~A()
OK
if() finished
~A()
followed by a runtime error (at least in a Visual C++ environment) for attempting to delete a->_buf twice.
I used cout for the reader's convenience, as I found this problem while debugging much more complex code, but the problem is clear: the return statement in A::make() constructs the object, but since that is the end of A::make()'s scope, the destructor is invoked. The user is sure his object is initialized (notice how we got an "OK" message) while in reality it was destroyed, and when we step out of the if() scope in main, a->~A() is invoked once again.
So, am I doing this wrong?
The use of std::optional for error handling in constructors is common, or so I've been told. Thanks in advance
Your class violates the rule of 3/5.
Instrument the copy constructor and simplify main to get this:
#include <optional>
#include <iostream>

class A
{
public:
    A(int myNum);
    ~A();
    A(const A& other) {
        std::cout << "COPY!\n";
    }
    static std::optional<A> make(int myNum);
    bool isBuf() { return _buf; };
private:
    char* _buf = nullptr;
};

std::optional<A> A::make(int myNum)
{
    std::cout << "A::make()\n";
    if (myNum < 8)
        return {};
    return A(myNum);
}

A::A(int myNum)
{
    std::cout << "A()\n";
    _buf = new char[myNum];
}

A::~A()
{
    std::cout << "~A()\n";
    delete[] _buf;
}

int main()
{
    std::optional<A> a = A::make(42);
    std::cout << "main finished\n";
}
Output is:
A::make()
A()
COPY!
~A()
main finished
~A()
When you call A::make(), the local A(myNum) is copied into the returned optional, and its destructor is called afterwards. You'd have the same issue without std::optional (e.g. by returning an A by value).
The copy constructor I added does not copy anything, but the compiler-generated one does make a shallow copy of the char* _buf; member. As you do not properly deep-copy the buffer, it gets deleted twice, which results in the runtime error.
Use a std::vector for the rule of 0, or properly implement the rule of 3/5. Your code invokes undefined behavior.
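For completeness, a sketch of my own (not part of the original answer) of what "properly implement the rule of 3/5" could look like if you insist on keeping the raw char* member; note I add a _size member, which the original class does not have, so the copy knows how much to duplicate:

#include <cstring>
#include <utility>

class A
{
public:
    explicit A(int myNum) : _size(myNum), _buf(new char[myNum]) {}
    ~A() { delete[] _buf; }

    // Deep copy: each A owns its own buffer.
    A(const A& other) : _size(other._size), _buf(new char[other._size])
    {
        if (_size)
            std::memcpy(_buf, other._buf, _size);
    }

    // Move: steal the buffer and leave the source empty.
    A(A&& other) noexcept : _size(other._size), _buf(other._buf)
    {
        other._size = 0;
        other._buf  = nullptr;
    }

    // Copy-and-swap handles both copy and move assignment.
    A& operator=(A other) noexcept
    {
        std::swap(_size, other._size);
        std::swap(_buf,  other._buf);
        return *this;
    }

private:
    int   _size = 0;
    char* _buf  = nullptr;
};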
PS Not directly related to the problem, but you should initialize members instead of assigning to them in the constructor's body. Change:
A::A(int myNum)
{
std::cout << "A()\n";
_buf = new char[myNum];
}
to
A::A(int myNum) : _buf(new char[myNum])
{
std::cout << "A()\n";
}
or better yet, use a std::vector as mentioned above.
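As a sketch of that suggestion (my own code, not the original poster's): with a std::vector member the class follows the rule of zero, so no user-written destructor, copy constructor, or assignment operator is needed and the double delete disappears:

#include <iostream>
#include <optional>
#include <vector>

class A
{
public:
    explicit A(int myNum) : _buf(myNum) {}   // vector allocates and frees the storage

    static std::optional<A> make(int myNum)
    {
        if (myNum < 8)
            return {};
        return A(myNum);
    }

    bool isBuf() const { return !_buf.empty(); }

private:
    std::vector<char> _buf;
};

int main()
{
    std::optional<A> a = A::make(42);
    std::cout << (a && a->isBuf() ? "OK\n" : "NOT OK\n");
}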
PPS:
Throwing exceptions for example, may cause leak of the allocated resources earlier in the constructor, making it an error prone approach.
No, throwing from a constructor is common and poses no problem as long as you don't manage memory via raw pointers. Either a std::vector or a smart pointer would help make your constructor exception safe.
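A minimal sketch of that last point (hypothetical class, not from the question): when the members manage their own resources, the constructor can throw at any point without leaking, because already-constructed members are destroyed automatically:

#include <memory>
#include <stdexcept>
#include <vector>

class Widget
{
public:
    explicit Widget(int n)
        : buffer(n),                         // constructed first
          helper(std::make_unique<int>(42))  // constructed second
    {
        if (n > 1000)
            throw std::runtime_error("n too large");
        // If we throw here (or if helper's initializer had thrown),
        // the already-constructed members are destroyed automatically,
        // so nothing leaks.
    }

private:
    std::vector<char> buffer;
    std::unique_ptr<int> helper;
};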
Please help:
I am aware of destructors and atexit() and also know the following:
atexit() registers a function to be called when the program terminates (e.g. when main() returns or when exit() is explicitly called somewhere).
When exit() is invoked, static objects are destroyed (their destructors are called), but not objects with local variable scope, and of course not dynamically allocated objects either (those are only destroyed if you explicitly call delete).
The code below gives this output:
atexit handler
Static dtor
Can you please help me understand why the destructors of local objects won't be called when we use atexit()?
Thanks in advance.
#include <cstdlib>
#include <iostream>

class Static {
public:
    ~Static()
    {
        std::cout << "Static dtor\n";
    }
};

class Sample {
public:
    ~Sample()
    {
        std::cout << "Sample dtor\n";
    }
};

class Local {
public:
    ~Local()
    {
        std::cout << "Local dtor\n";
    }
};

Static static_variable; // dtor of this object *will* be called

void atexit_handler()
{
    std::cout << "atexit handler\n";
}

int main()
{
    Local local_variable;
    const int result = std::atexit(atexit_handler);
    Sample static_variable; // dtor of this object *will not* be called
    std::exit(EXIT_SUCCESS); // successful exit
    return 0;
}
Calling destructors is not about atexit but about exit.
I don't generally regard std::exit as good C++ programming. In fact, both it and std::atexit
extern "C" int atexit( void (*func)() ); // in <cstdlib>
come from the C standard library. Looking at your example, I believe you have seen http://en.cppreference.com/w/cpp/utility/program/exit, where you have also read:
"Stack is not unwound: destructors of variables with automatic storage durations are not called."
Which is the answer to your question "why". There are some cases, particularly unrecoverable errors, where you might make use of exit, but generally you should stick to using exceptions, e.g.:
#include <cstdlib>
#include <stdexcept>

Static static_variable;

int mainactual()
{
    Local local_variable;
    Sample static_variable;
    throw std::runtime_error("EXIT_FAILURE");
    // ... more code
}

int main()
{
    try
    {
        mainactual();
    }
    catch ( std::exception & e )
    {
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
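And, as a further sketch (my own, reusing the Static, Sample, Local classes and atexit_handler from the question): if main simply returns instead of calling std::exit, the local objects are destroyed as usual before the atexit handler and the static destructor run:

#include <cstdlib>

// Static, Sample, Local and atexit_handler as defined in the question.
Static static_variable;

int main()
{
    Local local_variable;
    std::atexit(atexit_handler);
    Sample sample_variable;
    return EXIT_SUCCESS;   // expected output: "Sample dtor", "Local dtor",
                           // "atexit handler", "Static dtor"
}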
#include <exception>
#include <iostream>

class XX
{
public:
    static unsigned s_cnt;

    XX()
    {
        ++s_cnt;
        std::cout << "C XX " << s_cnt << "\n";
        if ( s_cnt > 2 )
            throw std::exception();
    }

    //private:
    ~XX()
    {
        std::cout << "~ XX\n";
    }
};

unsigned XX::s_cnt = 0;

int main()
{
    try
    {
        XX *xx = new XX[10];
    }
    catch ( ... )
    {
        std::cout << "Exc\n";
    }
}
Output:
C XX 1
C XX 2
C XX 3
~ XX
~ XX
Exc
But when I remove the try-catch, I see:
C XX 1
C XX 2
C XX 3
terminate called after throwing an instance of 'std::exception'
what(): std::exception
zsh: abort ./a.out
Why does C++ call the destructors in the first case but not in the second?
When you don't catch an exception (i.e. it becomes an uncaught exception and terminates your program), C++ doesn't give you any guarantees that destructors actually get called.
This gives the compilers leeway in how to implement exception handling. For example, GCC first searches for a handler. If it can't find one, it aborts immediately, preserving the full stack information for debugging. If it does find one, it actually unwinds the stack, destroying objects, until it arrives at the handler. That's why you don't see output: the program aborts before destroying any objects.
When you throw an exception from a constructor, the standard treats the object as never having been constructed. You don't destroy something that does not exist, do you?
In fact, even this simple example doesn't work as you suggest:
struct A {
A() {throw std::exception();}
~A() {std::cout << "A::~A()" << std::endl;}
};
int main() {
A a;
}
This terminates with an uncaught exception, but does not print "A::~A()".
If you think about it, it's the only possible semantics to give it.
For example, we can use our type A as a member object of a class B:
struct B {
B() : m_a(),m_p(new int) {}
~B() {delete m_p;}
A m_a;
int * m_p;
};
Obviously B::B() throws immediately (and doesn't even initialize m_p). If a hypothetical C++ standard mandated calling B::~B() in this case, the destructor would have no possible way to know whether m_p had been initialized or not.
In other words, throwing an exception from a constructor means that the object never existed and its lifetime never began. This GotW is pretty clear about it.
Bonus: what happens if, in the definition of B, we swap the order of m_a and m_p?
Then you have a memory leak. That's why a particular syntax exists to catch exceptions thrown during the initialization of member objects.
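That syntax is the constructor function-try-block. A sketch of my own (not from the answer) of what it looks like for a class like B; note that the handler must not refer to the member objects themselves (that is undefined behavior), so it can only log or translate the exception, and it rethrows automatically when it ends:

#include <iostream>
#include <stdexcept>

struct A { A() { throw std::runtime_error("A failed"); } };

struct B
{
    B() try : m_a(), m_p(new int) { }
    catch (...)
    {
        // We may log or translate here, but we must not touch m_a or m_p.
        std::cout << "an initializer threw\n";
        // Falling off the end rethrows the exception automatically.
    }

    ~B() { delete m_p; }

    A     m_a;
    int * m_p;
};

int main()
{
    try { B b; }
    catch (const std::exception&) { std::cout << "B was never constructed\n"; }
}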
As a C++ neophyte trying to understand smart pointers, I have written the code below to check.
It did compile and run, but I was expecting the destructor of my class to be invoked and to print the cout's from the destructor, and it didn't.
Do we need to overload any function in the user-defined class so that its destructor is called when the smart_ptr object of that class gets destroyed?
Why did it not invoke the object's destructor? What is it that I am missing?
#include <iostream>
#include <cstdlib>
#include <tr1/memory>
#include <string>
//using namespace std;

class myclass
{
public:
    myclass();
    myclass(int);
    ~myclass();
private:
    int *ptr;
    std::string *cptr;
};

myclass::myclass()
{
    std::cout << "Inside default constructor\n";
}

myclass::myclass(int a)
{
    std::cout << "Inside user defined constructor\n";
    ptr = new int[10];
    cptr = new std::string("AD");
}

myclass::~myclass()
{
    std::cout << "Inside destructor..\n";
    delete [] ptr;
    delete cptr;
    std::cout << "Freed memory..\n";
}

int main()
{
    int i;
    std::cin >> i;
    std::tr1::shared_ptr<std::string> smartstr(new std::string);
    std::tr1::shared_ptr<myclass> smart_a(new myclass(i));
    if(i == 0)
    {
        std::cout << "Exiting...\n";
        exit(-1);
    }
}
The reason the object is never destroyed is that you are exiting the program by calling exit. This causes the program to exit before the smart pointer objects have a chance to go out of scope, so the objects they manage are never destroyed. Since you are in main, use a return statement instead of calling exit.
And, as additional information to the other answers, note from the Standard, §3.6.1/4:
Terminating the program without leaving the current block (e.g., by
calling the function std::exit(int) (18.5)) does not destroy any
objects with automatic storage duration (12.4).
In the code below,
if(i == 0)
{
std::cout << "Exiting...\n";
exit(-1);
}
You are terminating the program by calling exit(), so the object is never destroyed. Remove the exit(-1); from the code.
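A sketch of that fix (the question's main with only the exit call replaced):

int main()
{
    int i;
    std::cin >> i;
    std::tr1::shared_ptr<std::string> smartstr(new std::string);
    std::tr1::shared_ptr<myclass> smart_a(new myclass(i));
    if (i == 0)
    {
        std::cout << "Exiting...\n";
        return -1;   // returning lets smart_a go out of scope,
                     // so ~myclass() runs and prints its messages
    }
}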
One possible solution is to ensure that your buffer is flushed in your destructor, for example by using std::endl there. For more information, please look here: Buffer Flushing, Stack Overflow
Possible Duplicate:
When is a function try block useful?
Difference between try-catch syntax for function
This code throws an int exception while constructing the Dog object inside class UseResources. The int exception is caught by a normal try-catch block, and the code outputs:
Cat()
Dog()
~Cat()
Inside handler
#include <iostream>
using namespace std;

class Cat
{
public:
    Cat() { cout << "Cat()" << endl; }
    ~Cat() { cout << "~Cat()" << endl; }
};

class Dog
{
public:
    Dog() { cout << "Dog()" << endl; throw 1; }
    ~Dog() { cout << "~Dog()" << endl; }
};

class UseResources
{
    class Cat cat;
    class Dog dog;
public:
    UseResources() : cat(), dog() { cout << "UseResources()" << endl; }
    ~UseResources() { cout << "~UseResources()" << endl; }
};

int main()
{
    try
    {
        UseResources ur;
    }
    catch( int )
    {
        cout << "Inside handler" << endl;
    }
}
Now, if we replace the definition of the UseResources() constructor with one that uses a function try block, as below,
UseResources() try : cat(), dog() { cout << "UseResources()" << endl; } catch(int) {}
the output is the same
Cat()
Dog()
~Cat()
Inside handler
i.e., with exactly the same final result.
What, then, is the purpose of a function try block?
Imagine if UseResources was defined like this:
class UseResources
{
class Cat *cat;
class Dog dog;
public:
UseResources() : cat(new Cat), dog() { cout << "UseResources()" << endl; }
~UseResources() { delete cat; cat = NULL; cout << "~UseResources()" << endl; }
};
If Dog::Dog() throws, then cat will leak memory. Because UseResources's constructor never completed, the object was never fully constructed, and therefore its destructor is not called.
To prevent this leak, you must use a function-level try/catch block:
UseResources() try : cat(new Cat), dog() { cout << "UseResources()" << endl; } catch(...)
{
delete cat;
throw;
}
To answer your question more fully, the purpose of a function-level try/catch block in constructors is specifically to do this kind of cleanup. Function-level try/catch blocks cannot swallow exceptions (regular ones can). If they catch something, they will throw it again when they reach the end of the catch block, unless you rethrow it explicitly with throw. You can transform one type of exception into another, but you can't just swallow it and keep going like it didn't happen.
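A small sketch of that point (my own, not from the answer): even with an empty-looking handler, a constructor function-try-block still lets the exception escape to the caller:

#include <iostream>

struct Throws { Throws() { throw 42; } };

struct Holder
{
    Holder() try : t() { }
    catch (...)
    {
        std::cout << "handler ran\n";
        // No explicit rethrow, yet the exception is rethrown
        // automatically when the handler finishes.
    }

    Throws t;
};

int main()
{
    try { Holder h; }
    catch (int) { std::cout << "the caller still sees the exception\n"; }
}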
This is another reason why values and smart pointers should be used instead of naked pointers, even as class members. Because, as in your case, if you just have member values instead of pointers, you don't have to do this. It's the use of a naked pointer (or other form of resource not managed in a RAII object) that forces this kind of thing.
Note that this is pretty much the only legitimate use of function try/catch blocks.
More reasons not to use function try blocks. The above code is subtly broken. Consider this:
class Cat
{
public:
Cat() {throw "oops";}
};
So, what happens in UseResources's constructor? Well, the expression new Cat will throw, obviously. But that means that cat never got initialized. Which means that delete cat will yield undefined behavior.
You might try to correct this by using a complex lambda instead of just new Cat:
UseResources() try
    : cat([]() -> Cat* { try { return new Cat; } catch(...) { return nullptr; } }())
    , dog()
{ cout << "UseResources()" << endl; }
catch(...)
{
    delete cat;
    throw;
}
That theoretically fixes the problem, but it breaks an assumed invariant of UseResources. Namely, that UseResources::cat will at all times be a valid pointer. If that is indeed an invariant of UseResources, then this code will fail because it permits the construction of UseResources in spite of the exception.
Basically, there is no way to make this code safe unless new Cat is noexcept (either explicitly or implicitly).
By contrast, this always works:
class UseResources
{
unique_ptr<Cat> cat;
Dog dog;
public:
UseResources() : cat(new Cat), dog() { cout << "UseResources()" << endl; }
~UseResources() { cout << "~UseResources()" << endl; }
};
In short, look on a function-level try-block as a serious code smell.
Ordinary function try blocks have relatively little purpose. They're almost identical to a try block inside the body:
int f1() try {
    // body
} catch (Exc const & e) {
    return -1;
}

int f2() {
    try {
        // body
    } catch (Exc const & e) {
        return -1;
    }
}
The only difference is that the function-try-block lives in the slightly larger function-scope, while the second construction lives in the function-body-scope -- the former scope only sees the function arguments, the latter also local variables (but this doesn't affect the two versions of try blocks).
The only interesting application comes in a constructor-try-block:
Foo() try : a(1,2), b(), c(true) { /* ... */ } catch(...) { }
This is the only way that exceptions from one of the initializers can be caught. You cannot handle the exception, since the entire object construction must still fail (hence you must exit the catch block with an exception, whether you want or not). However, it is the only way to handle exceptions from the initializer list specifically.
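A sketch of mine (not from the answer) of handling an initializer exception this way, translating it into a more descriptive one before construction fails anyway:

#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

class Foo
{
public:
    explicit Foo(std::size_t n)
    try : data(n)   // the initializer may throw, e.g. std::bad_alloc
    {
    }
    catch (const std::exception& e)
    {
        // Construction still fails, but with a more specific error.
        throw std::runtime_error(std::string("Foo construction failed: ") + e.what());
    }

private:
    std::vector<int> data;
};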
Is this useful? Probably not. There's essentially no difference between a constructor try block and the following, more typical "initialize-to-null-and-assign" pattern, which is itself terrible:
Foo() : p1(NULL), p2(NULL), p3(NULL) {
    p1 = new Bar;
    try {
        p2 = new Zip;
        try {
            p3 = new Gulp;
        } catch (...) {
            delete p2;
            throw;
        }
    } catch(...) {
        delete p1;
        throw;
    }
}
As you can see, you have an unmaintainable, unscalable mess. A constructor-try-block would be even worse, because you couldn't even tell how many of the pointers had already been assigned. So really it is only useful if you have precisely two leakable allocations. Update: Thanks to this question I was alerted to the fact that you actually cannot use the catch block to clean up resources at all, since referring to member objects there is undefined behaviour. So even that narrow use does not really work. [end update]
In short: It is useless.