This code throws an int exception while constructing the Dog object inside class UseResources. The int exception is caught by a normal try-catch block, and the code outputs:
Cat()
Dog()
~Cat()
Inside handler
#include <iostream>
using namespace std;

class Cat
{
public:
    Cat() { cout << "Cat()" << endl; }
    ~Cat() { cout << "~Cat()" << endl; }
};

class Dog
{
public:
    Dog() { cout << "Dog()" << endl; throw 1; }
    ~Dog() { cout << "~Dog()" << endl; }
};

class UseResources
{
    class Cat cat;
    class Dog dog;
public:
    UseResources() : cat(), dog() { cout << "UseResources()" << endl; }
    ~UseResources() { cout << "~UseResources()" << endl; }
};

int main()
{
    try
    {
        UseResources ur;
    }
    catch( int )
    {
        cout << "Inside handler" << endl;
    }
}
Now, if we replace the definition of the UseResources() constructor with one that uses a function try block, as below,
UseResources() try : cat(), dog() { cout << "UseResources()" << endl; } catch(int) {}
the output is the same
Cat()
Dog()
~Cat()
Inside handler
i.e., with exactly the same final result.
What, then, is the purpose of a function try block?
Imagine if UseResources was defined like this:
class UseResources
{
    class Cat *cat;
    class Dog dog;
public:
    UseResources() : cat(new Cat), dog() { cout << "UseResources()" << endl; }
    ~UseResources() { delete cat; cat = NULL; cout << "~UseResources()" << endl; }
};
If Dog::Dog() throws, then cat will leak memory. Because UseResources's constructor never completed, the object was never fully constructed, and therefore its destructor is not called.
To prevent this leak, you must use a function-level try/catch block:
UseResources() try : cat(new Cat), dog() { cout << "UseResources()" << endl; } catch(...)
{
    delete cat;
    throw;
}
To answer your question more fully: the purpose of a function-level try/catch block in a constructor is specifically to do this kind of cleanup. Such a block cannot swallow the exception (a regular try/catch can). If control reaches the end of its catch clause, the current exception is automatically rethrown; the only alternative is to explicitly throw something else. So you can transform one type of exception into another, but you can't just swallow it and keep going as if nothing had happened.
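As a hedged illustration (reusing the question's Dog class; the Wrapper name is made up), here is what that translation looks like: the handler cannot simply return, so it either lets the int rethrow automatically or throws something else.

#include <stdexcept>

// Sketch only: Dog is the class from the question, whose constructor throws an int.
class Wrapper
{
    Dog dog;
public:
    Wrapper() try : dog()
    {
    }
    catch (int)
    {
        // Falling off the end here would rethrow the int automatically;
        // instead we explicitly throw a different exception type.
        throw std::runtime_error("Dog construction failed");
    }
};

A main() constructing Wrapper would then have to catch std::runtime_error rather than int.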
This is another reason why values and smart pointers should be used instead of naked pointers, even as class members. As in your case: if the members are values rather than pointers, you don't have to do this at all. It's the use of a naked pointer (or another resource not managed by a RAII object) that forces this kind of thing.
Note that this is pretty much the only legitimate use of function try/catch blocks.
More reasons not to use function try blocks. The above code is subtly broken. Consider this:
class Cat
{
public:
    Cat() { throw "oops"; }
};
So, what happens in UseResources's constructor? Well, the expression new Cat will throw, obviously. But that means that cat never got initialized. Which means that delete cat will yield undefined behavior.
You might try to correct this by using a complex lambda instead of just new Cat:
UseResources() try
    : cat([]() -> Cat* { try { return new Cat; } catch(...) { return nullptr; } }())
    , dog()
{ cout << "UseResources()" << endl; }
catch(...)
{
    delete cat;
    throw;
}
That theoretically fixes the problem, but it breaks an assumed invariant of UseResources. Namely, that UseResources::cat will at all times be a valid pointer. If that is indeed an invariant of UseResources, then this code will fail because it permits the construction of UseResources in spite of the exception.
Basically, there is no way to make this code safe unless new Cat is noexcept (either explicitly or implicitly).
By contrast, this always works:
class UseResources
{
    unique_ptr<Cat> cat;
    Dog dog;
public:
    UseResources() : cat(new Cat), dog() { cout << "UseResources()" << endl; }
    ~UseResources() { cout << "~UseResources()" << endl; }
};
In short, look on a function-level try-block as a serious code smell.
Ordinary function try blocks have relatively little purpose. They're almost identical to a try block inside the body:
int f1() try {
    // body
} catch (Exc const & e) {
    return -1;
}

int f2() {
    try {
        // body
    } catch (Exc const & e) {
        return -1;
    }
}
The only difference is that the function-try-block lives in the slightly larger function scope, while the second construction lives in the function-body scope: the former scope sees only the function arguments, the latter also sees local variables (but this difference doesn't matter for either version of the try block).
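For example (a hypothetical function, not taken from the answer above), the handler of a function-try-block can still refer to the parameter s, because it sits in function scope rather than in the body's scope:

#include <iostream>
#include <stdexcept>
#include <string>

int parse(const std::string& s) try
{
    return std::stoi(s);   // may throw std::invalid_argument or std::out_of_range
}
catch (const std::exception&)
{
    std::cerr << "could not parse: " << s << '\n';   // the parameter is visible here
    return -1;
}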
The only interesting application comes in a constructor-try-block:
Foo() try : a(1,2), b(), c(true) { /* ... */ } catch(...) { }
This is the only way that exceptions thrown by one of the initializers can be caught. You cannot handle the exception, since the entire object construction must still fail (hence you must exit the catch block with an exception, whether you want to or not). However, it is the only way to handle exceptions from the initializer list specifically.
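Here is a minimal sketch of that point (names are made up): the handler can observe the failure coming from the initializer list, but the exception is rethrown automatically when the handler ends, so the Foo object is still never constructed.

#include <iostream>

struct Member { Member() { throw 42; } };

struct Foo
{
    Member m;

    Foo() try : m()
    {
    }
    catch (...)
    {
        std::cerr << "an initializer threw; Foo was not constructed\n";
        // No explicit throw needed: the current exception is rethrown here.
    }
};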
Is this useful? Probably not. There's essentially no difference between a constructor try block and the following, more typical "initialize-to-null-and-assign" pattern, which is itself terrible:
Foo() : p1(NULL), p2(NULL), p3(NULL) {
    p1 = new Bar;
    try {
        p2 = new Zip;
        try {
            p3 = new Gulp;
        } catch (...) {
            delete p2;
            throw;
        }
    } catch(...) {
        delete p1;
        throw;
    }
}
As you can see, this is an unmaintainable, unscalable mess. A constructor-try-block would be even worse, because you couldn't even tell how many of the pointers had already been assigned. So really it is only useful if you have precisely two leakable allocations. Update: I have since learned (from another question on this site) that you actually cannot use the catch block to clean up resources at all, since referring to member objects there is undefined behaviour. [end update]
In short: It is useless.
Error handling is a challenge in C++ constructors. There are several common approaches, but all of them have obvious disadvantages. Throwing exceptions, for example, may leak resources that were allocated earlier in the constructor, making it an error-prone approach. Using a static init() method is another common solution, but it goes against the RAII principle.
Studying the subject, I found this answer and a blog post suggesting the use of the C++17 feature std::optional<>, and I found it promising. However, it seems that this kind of solution comes with an underlying problem: the destructor is triggered as soon as the user retrieves the object.
Here is a simple code example describing the problem; my code is based on the above sources:
class A
{
public:
    A(int myNum);
    ~A();
    static std::optional<A> make(int myNum);
    bool isBuf() { return _buf; };
private:
    char* _buf;
};

std::optional<A> A::make(int myNum)
{
    std::cout << "A::make()\n";
    if (myNum < 8)
        return {};
    return A(myNum);
}

A::A(int myNum)
{
    std::cout << "A()\n";
    _buf = new char[myNum];
}

A::~A()
{
    std::cout << "~A()\n";
    delete[] _buf;
}

int main()
{
    if (std::optional<A> a = A::make(42))
    {
        if (a->isBuf())
            std::cout << "OK\n";
        else
            std::cout << "NOT OK\n";
        std::cout << "if() finished\n";
    }
    std::cout << "main finished\n";
}
The output of this program will be:
A::make()
A()
~A()
OK
if() finished
~A()
followed by a runtime error (at least in a Visual C++ environment) for attempting to delete a->_buf twice.
I used cout for the reader's convenience, as I found this problem while debugging much more complex code, but the problem is clear: the return statement in A::make() constructs the object, but since it is the end of the A::make() scope, the destructor is invoked. The user is sure the object is initialized (notice how we got an "OK" message) while in reality it was destroyed, and when we step out of the if() scope in main, a->~A() is invoked once again.
So, am I doing this wrong?
The use of std::optional for error handling in constructors is common, or so I've been told. Thanks in advance
Your class violates the rule of 3/5.
Instrument the copy constructor and simplify main to get this:
#include <optional>
#include <iostream>

class A
{
public:
    A(int myNum);
    ~A();
    A(const A& other) {
        std::cout << "COPY!\n";
    }
    static std::optional<A> make(int myNum);
    bool isBuf() { return _buf; };
private:
    char* _buf = nullptr;
};

std::optional<A> A::make(int myNum)
{
    std::cout << "A::make()\n";
    if (myNum < 8)
        return {};
    return A(myNum);
}

A::A(int myNum)
{
    std::cout << "A()\n";
    _buf = new char[myNum];
}

A::~A()
{
    std::cout << "~A()\n";
    delete[] _buf;
}

int main()
{
    std::optional<A> a = A::make(42);
    std::cout << "main finished\n";
}
Output is:
A::make()
A()
COPY!
~A()
main finished
~A()
When you call A::make(), the local A(myNum) is copied into the returned optional and its destructor is called afterwards. You'd have the same issue without std::optional (e.g. by returning an A by value).
The copy constructor I added does not copy anything, but the compiler-generated one makes a shallow copy of the char* _buf member. Since you do not properly deep-copy the buffer, it gets deleted twice, which results in the runtime error.
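To see the double delete without std::optional at all, here is a hedged sketch using the asker's original class A (the one without a user-defined copy constructor):

// Sketch only: any copy of the original A shares the same _buf pointer.
void copyProblem()
{
    A a1(42);
    A a2 = a1;   // compiler-generated copy constructor copies the raw pointer
}                // both destructors now run delete[] on the same buffer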
Use a std::vector for the rule of 0, or properly implement the rule of 3/5. Your code invokes undefined behavior.
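For illustration, a hedged sketch (not the asker's code) of the rule-of-0 version: std::vector owns the buffer, so the compiler-generated copy, move, and destructor are all correct and the double delete disappears.

#include <optional>
#include <vector>

class A
{
public:
    explicit A(int myNum) : _buf(myNum) {}

    static std::optional<A> make(int myNum)
    {
        if (myNum < 8)
            return {};
        return A(myNum);
    }

    bool isBuf() const { return !_buf.empty(); }

private:
    std::vector<char> _buf;   // owns the allocation; no user-written special members needed
};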
PS: Not directly related to the problem, but you should initialize members instead of assigning to them in the constructor's body. Change:
A::A(int myNum)
{
    std::cout << "A()\n";
    _buf = new char[myNum];
}
to
A::A(int myNum) : _buf(new char[myNum])
{
    std::cout << "A()\n";
}
or better yet, use a std::vector as mentioned above.
PPS:
Throwing exceptions for example, may cause leak of the allocated resources earlier in the constructor, making it an error prone approach.
No, throwing from a constructor is common and causes no problems as long as you don't manage memory via raw pointers. Either a std::vector or a smart pointer would help make your constructor exception-safe.
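As a rough sketch (a hypothetical class, not the asker's), this is what an exception-safe throwing constructor looks like when the members manage themselves: any member already constructed is destroyed during stack unwinding, so nothing leaks.

#include <memory>
#include <stdexcept>

class Buffer
{
public:
    explicit Buffer(int n)
        : _buf(std::make_unique<char[]>(n))   // fully-constructed member
    {
        if (n < 8)
            throw std::invalid_argument("n too small");   // _buf is released automatically
    }

private:
    std::unique_ptr<char[]> _buf;
};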
I was studying the RAII mechanism in C++, which replaces Java's finally.
I wrote the following code to test it:
void foo(int* arr) {
    cout << "A" << endl;
    throw 20;
    cout << "B" << endl;
}

class Finally {
private:
    int* p;
public:
    Finally(int* arr) {
        cout << "constructor" << endl;
        p = arr;
    }
    ~Finally() {
        cout << "destructor" << endl;
        delete(p);
    }
};

int main()
{
    int * arr = new int[10];
    new Finally(arr);
    try {
        foo(arr);
    } catch (int e) {
        cout << "Oh No!" << endl;
    }
    cout << "Done" << endl;
    return 0;
}
I want to free the memory I used for arr, so I wrote a class Finally which saves the pointer to the array; when it goes out of scope, its destructor should be called and free the memory. But the output is:
constructor
A
Oh No!
Done
The destructor is never called. It also does not work when I move the body of main to some other void method (like void foo()). What fix do I need to get the desired behavior?
That's because the object you create with new Finally(arr) never actually gets destroyed in your program.
Your allocation just throws the resulting pointer away immediately, leading to a memory leak; more importantly, the object is created outside the scope of the try and catch.
For it to work you have to do something like
try {
    Finally f(arr);
    foo(arr);
} catch (int e) {
    cout << "Oh No!" << endl;
}
That would create a new object f inside the try, which will then get destroyed when the exception is thrown (as well as when it goes out of the try's scope if no exception is thrown).
You're assuming C++ works like Java. It doesn't, so there are actually a few things wrong in your code.
The statement
new Finally(arr);
dynamically allocates a Finally, but it is NEVER released in your code. Hence its destructor is never called.
Instead do
Finally some_name(arr);
This will invoke the destructor of Finally - at the end of main() - which will give the output you expect.
However, the second thing wrong is that the destructor of Finally does delete (p) which gives undefined behaviour, since p is the result (in main()) of new int [10]. To give the code well-defined behaviour, change delete (p) to delete [] p.
Third, with the two fixes above, you are still not really using RAII. RAII means "Resource Acquisition Is Initialisation", which is not what your code does. A better form would be to initialise p with a new expression in Finally's constructor and release it with a matching delete expression in the destructor.
class Finally
{
private:
    int* p;
public:
    Finally() : p(new int [10])
    {
        cout << "constructor" << endl;
    };
    ~Finally()
    {
        cout << "destructor" << endl;
        delete [] p;
    };
    int *data() { return p; };
};
AND replace the first two lines of your main() with a single line
Finally some_name;
and the call of foo() with
foo(some_name.data());
More generally, stop assuming that C++ works like Java. The two languages work differently. If you insist on using C++ constructors as you would in Java, you will write terribly buggy C++ code.
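For what it's worth, here is a hedged rewrite (my sketch, not the asker's code) showing the idiomatic version: the standard containers already are the RAII wrapper, so no Finally class and no manual delete are needed.

#include <iostream>
#include <vector>
using namespace std;

void foo(int* arr) {
    cout << "A" << endl;
    throw 20;
    cout << "B" << endl;
}

int main()
{
    vector<int> arr(10);            // releases its memory automatically, even on exceptions
    try {
        foo(arr.data());
    } catch (int e) {
        cout << "Oh No!" << endl;
    }
    cout << "Done" << endl;
    return 0;
}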
I've got a weird problem. I use the delete operator inside a class method, and I want to know how to solve this problem.
This is the code:
#include <iostream>
using namespace std;

class A
{
public:
    int a;

    ~A() {
        cout << "call ~A()" << endl;
    }

    void action()
    {
        /* code here */
        delete this; // this line depends on some if statements
        /* code here */
    }

    void test2() {
        cout << "call test2()" << a << endl;
    }

    void test() {
        a = 10;
        cout << "call test()" << endl;
        action();
        // how can I check if data is deleted?
        test2();
    }
};

int main()
{
    A* a = new A();
    a->test();
}
How can I check if data is deleted by delete operator?
Is it even possible?
Using delete this; is nearly always "bad". There are exceptions, but those are really unusual. Think about what you are trying to do. Most of the time, this means that you should have a wrapper object and an inner object that is created/deleted by the wrapper object.
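A minimal sketch of that wrapper/inner idea (the names are made up): the wrapper owns the inner object, so the inner object never needs to delete this.

#include <memory>

class Inner
{
public:
    void action() { /* real work; never deletes itself */ }
};

class Wrapper
{
    std::unique_ptr<Inner> inner = std::make_unique<Inner>();
public:
    void action(bool done)
    {
        inner->action();
        if (done)
            inner.reset();   // the wrapper decides when the inner object dies
    }
};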
You can't check whether something has been deleted (in any reliable or portable way). In your particular test case, you are exercising undefined behaviour by using an object after it has been destroyed, which means you can't really tell what is going to happen. You have to trust the C++ runtime that delete does what it says on the label.
In C++, functions can return values other than void, so your action() can return something that indicates whether this has been deleted or not.
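A hedged sketch of that idea, restructuring the question's class (renamed A2 here): action() reports whether it deleted the object, so test() knows not to touch it afterwards.

#include <iostream>

class A2
{
public:
    int a = 0;

    bool action(bool shouldDelete)
    {
        if (shouldDelete) {
            delete this;     // only valid if *this was allocated with new
            return true;     // tell the caller the object is gone
        }
        return false;
    }

    void test()
    {
        a = 10;
        if (action(true))
            return;          // do NOT call test2() on a deleted object
        test2();
    }

    void test2() { std::cout << "call test2() " << a << std::endl; }
};

Calling (new A2())->test(); then runs the destructor exactly once and never touches the freed memory.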
Note that you should not access any non-static data members or call any non-static member functions after deleting this. Some people find this hard to guarantee, so they ban delete this altogether.
Note to opponents: the C++ FAQ claims that delete this is a legal construct, and I haven't found anything forbidding it either.
I wonder whether the code below runs safely:
#include <iostream>
using namespace std;

class A
{
public:
    A() {
        cout << "Constructor" << endl;
    }
    ~A() {
        cout << "Destructor" << endl;
    }
    void deleteMe() {
        delete this;
        cout << "I was deleted" << endl;
    }
};

int main()
{
    A *a = new A();
    a->deleteMe();
    cout << "Exit ...";
    return 0;
}
Output is:
Constructor
Destructor
I was deleted
Exit ...
and the program exits normally, but are there any memory access violations here?
It's OK to delete this as long as no one uses the object after that call, and as long as the object was allocated on the heap, of course.
For example, the cocos2d-x game engine does this. It uses the same memory management scheme as Objective-C, and here is a method of its base object:
void CCObject::release(void)
{
    CCAssert(m_uReference > 0, "reference count should greater than 0");
    --m_uReference;

    if (m_uReference == 0)
    {
        delete this;
    }
}
I don't think it's the C++ way of managing memory, but it's possible.
It's OK because you are only running a simple method. After delete this, the member variables and the virtual table are gone. Just analyze this example:
#include <iostream>

class foo
{
public:
    int m_var;

    foo() : m_var(1)
    {
    }

    void deleteMe()
    {
        std::cout << m_var << "\n";
        delete this;
        std::cout << m_var << "\n"; // this may crash the program, but on my machine it prints a "trash" value, e.g. 64362346
    }

    virtual void bar()
    {
        std::cout << "virtual bar()\n";
    }

    void anotherSimpleMethod()
    {
        std::cout << "anotherSimpleMethod\n";
    }

    void deleteMe2()
    {
        bar();
        delete this;
        anotherSimpleMethod();
        // bar(); // if you uncomment this, the program crashes, because the virtual table was deleted
    }
};

int main()
{
    foo * p = new foo();
    p->deleteMe();

    p = new foo();
    p->deleteMe2();

    return 0;
}
I can't explain in more detail, because that requires some knowledge of how classes and methods are stored in RAM once a program is loaded.
Absolutely - you are just running the destructor. Methods do not belong to the object itself, so the call runs fine. In the calling context the object (*a) will have been destroyed.
When in doubt about whether strange things are going on in terms of memory usage (or similar issues), rely on the proper analysis tools to help you clarify the situation.
For example, use valgrind or a similar program to check for memory leaks or similar problems the compiler can hardly help you with.
While every tool has its limitations, valuable insights can often be obtained by using them.
There is no memory access violation in it; you just need to be careful. But deleting the this pointer is not recommended at all, even though the code above works fine, as it is the same as delete a. Try doing it the other way, the safe way.
For example, there is something illogical in your posted code itself:
void deleteMe()
{
    delete this;
    cout << "I was deleted" << endl; // this statement doesn't make sense: you no longer require the services of object a after the delete operation
}
EDIT: For Sjoerd
Doing it this way makes more sense:
void deleteMe()
{
    delete this;
}

int main()
{
    A *a = new A();
    a->deleteMe();
    cout << "a was deleted" << endl;
    cout << "Exit ...";
    return 0;
}
The second line in your deleteMe() function should never be reached, yet it gets invoked. Don't you think that's against the language's philosophy?
I'm trying to understand the behavior of exceptions in C++.
I wrote the following code:
#include <iostream>
using namespace std;

class A {
public:
    A() {
    };
    ~A() {
        cout << "hello";
    };
};

int exceptionTest() {
    throw "blablabla";
};

int main() {
    A sd;
    int test = exceptionTest();
    return 0;
}
I've noticed that in this case the destructor gets called even though no one caught the exception.
If I change the "main" code to:
int main() {
    A* sd = new A();
    int test = exceptionTest();
    return 0;
}
the destructor will not be called.
Can anyone please tell me what is the reason for the different behavior?
Thanks,
Li
The fact that you are throwing an exception is irrelevant here. In your first example, sd is an object that exists on the stack. When execution exits its scope, for whatever reason, it gets destroyed. In the second example, sd is a pointer to an object that was explicitly allocated using new. This object will not be destroyed until that pointer is passed to delete; since you never do so, your program is currently leaking it.
The standard has the following to say on the matter:
-9- If no matching handler is found in a program, the function terminate() is called; whether or not the stack is unwound before this call to terminate() is implementation-defined.
So your compiler performs stack unwinding (invoking destructors of locals), others may not. For example, with G++ or codepad.org, this program will not output "hello".
Dynamically allocated objects are not destroyed until you explicitly destroy them (with delete or such). In particular, if an exception occurs in the meantime, code may never reach the deallocation statement.
Local variable destructors are called automatically, as soon as the variable is out of scope.
Destructors are never called automatically through raw pointers; you must delete the object yourself.
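A hedged sketch of a modified main (my code, not the asker's): owning the object through std::unique_ptr and catching the exception guarantees that stack unwinding runs A's destructor.

#include <memory>

int main()
{
    try {
        auto sd = std::make_unique<A>();   // owned; destroyed during unwinding
        int test = exceptionTest();
    } catch (...) {
        // sd's destructor has already run ("hello" was printed) before we get here
    }
    return 0;
}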
I've noticed that in this case the destructor gets called even though no one caught the exception.
That's exactly what to expect.
This mechanism is a consequence of RAII that makes you "sure" resources will be freed even if there is an exception. For example:
class File
{
public:
    File( const std::string filename ) : file_handler(file_open( filename )) { } // whatever the implementation
    ~File() { file_close(file_handler); }
private:
    FileHandler file_handler;
};

void test(){ throw "This is a test"; }

int main()
{
    File file("test.txt");
    test();
    return false;
}
You're assured that the file will be closed even with the throw - so do use RAII to manage your resources.
That's because when the exception is thrown, it propagates back up the call stack until it is caught; as each scope is exited, the local objects are destroyed just as they would be when going out of scope normally.
This is not really an answer, but it might clarify the RAII behavior as I understood it from the other answers and Mike's comments.
#include <iostream>

class Bar
{
public:
    Bar() { std::cout << "Bar constructor" << std::endl; }
    ~Bar() { std::cout << "Bar destructor" << std::endl; }
};

void foo()
{
    throw("Exception");
}

int main()
{
    // Variation: add { to create a new scope
    Bar bar;
    foo();
    // Variation: }
    return 0;
}
Using g++, this code, where the exception is not caught, will output the following:
Bar constructor
terminate called after throwing an instance of 'char const*'
Aborted
Meaning that g++ does not unwind the stack (or let the variable go out of scope, if I understand the "variation" correctly), so the destructor is not called.
However, if you catch the exception:
#include <iostream>

class Bar
{
public:
    Bar() { std::cout << "Bar constructor" << std::endl; }
    ~Bar() { std::cout << "Bar destructor" << std::endl; }
};

void foo()
{
    throw("Exception");
}

int main()
{
    try
    {
        Bar bar;
        foo();
    }
    catch (...)
    {
        // Nothing here
    }
    return 0;
}
then the output will be
Bar constructor
Bar destructor
and you recover the correct behavior.