Will there be any leak in the C++ shared_ptr usage below?

Is the allocated memory managed by a smart pointer guaranteed to be freed in the event of an exception, such as in the code below?
#include <memory>
void test( std::shared_ptr<int> sptr )
{
    throw "exception";
}
int main()
{
    std::shared_ptr<int> ptr( new int(1) );
    test( ptr );
    return 0;
}
I tried executing the code with a breakpoint set at the shared_ptr destructor, but I did not see it being called. I think the memory should be cleaned up automatically. Am I right, or won't it be cleaned up?

The language standard states that:
If no matching handler is found, the function std::terminate() is
called; whether or not the stack is unwound before this call to
std::terminate() is implementation-defined
So your program isn't guaranteed to clean up after itself, but most (if not all) modern operating systems will reclaim the memory post-mortem.
Had you caught the exception, the shared_ptr instances would have been destroyed properly during stack unwinding, ensuring no leaks.

Here is a better example for understanding:
#include <iostream>
#include <memory>
#include <windows.h>
using namespace std;
class A
{
public:
    A()
    {
        cout << "Constructor" << endl;
    }
    ~A()
    {
        cout << "destructor" << endl;
    }
};
void test(std::shared_ptr<A> sptr)
{
    throw "exception";
}
void function()
{
    std::shared_ptr<A> ptr(new A);
    test(ptr);
}
int main()
{
    function();
    Sleep(5000);
}
Before the program crashes, only the constructor output appears, which shows that no destruction takes place.
But if we debug in Visual Studio and choose Continue after the exception, then even the destructor gets called.

Related

what if I ignore the return value of a function with shared_ptr return type

#include <iostream>
#include <memory>
#include <string>
using namespace std;
shared_ptr<string> func()
{
    shared_ptr<string> ptr = make_shared<string>("smart pointer");
    return ptr;
}
int main(int argc, char const *argv[])
{
    func();
    cout << "pause" << endl;
    return 0;
}
In the code above, will the memory of the string "smart pointer" be released?
Yes. The internal reference count will reach 0 and the memory will be released safely.
Yes. shared_ptrs aren't special here; if a returned value is not assigned to anything, the temporary's destructor is invoked promptly (when the full expression finishes evaluating); anything else would break RAII in a critical way. shared_ptr's destructor decrements the reference count, and since no other instance owns a reference, the destructor releases the associated memory.

Throwing an exception which causes a destructor to be called crashes the program

Consider this small snippet of code, which is actually part of a larger codebase:
#include <iostream>
class A
{
public:
    A()
    {
        std::cout << "A" << std::endl;
    }
    ~A()
    {
        std::cout << "~A" << std::endl;
    }
};
void f()
{
    A a;
    throw;
}
void g()
{
    try
    {
        f();
    }
    catch(...)
    {
        std::cout << "Caught" << std::endl;
    }
}
For my particular case, the output turns out to be
A
~A
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
It seems that rather than the exception being caught, the program is just terminated. However, if I remove A's constructor, the exception does get caught.
Without closely analyzing the code, is it possible to know what causes this sort of behaviour?
A throw-expression with no operand, as in your code:
Rethrows the currently handled exception (the same object, not a copy of it)
Or, if there is no currently handled exception, calls std::terminate.
I am assuming f() is not being called while an exception is being handled (I imagine you're calling it directly from main or something). Thus, std::terminate is called.
The object a is irrelevant.

Exceptions, stack unwinding, encapsulated heap memory, exit() [duplicate]

This question already has answers here:
Memory deallocation and exceptions
(4 answers)
Closed 9 years ago.
When an exception is thrown, the block where it is thrown is unwound from the stack:
int main ()
{
    try
    {
        Object x; // doesn't throw
        Object y; // throws
        cout << "wonderful";
    }
    catch (...)
    {
        cout << "fail";
    }
}
When Object allocates heap memory during construction and deallocates it properly on destruction, there should be no memory leak, because stack unwinding calls the destructor of x (not of y, but Object guarantees that when the constructor fails, nothing is leaked). So far so good, right?
Let's go into depth:
int main ()
{
    Object x; // doesn't throw
    double *d = new double[256*256*256]; // doesn't throw
    Object y; // throws
    cout << "wonderful";
    delete[] d;
}
Out of good manners, I want to clean up my own trash rather than leave it to the OS. I know that every modern OS reclaims the heap memory of a program that terminates unexpectedly (or expectedly, but without explicit deallocation). So in the case above, the OS would take care of deallocating d, but x would still properly deallocate its memory (because of stack unwinding and the destructor call) before the OS steps in, right?
What about that:
#include <cstdlib>
int main ()
{
    Object x; // doesn't throw
    try { Object y; } // throws
    catch (...) { cout << "fail"; exit(EXIT_FAILURE); }
    cout << "working and working...";
    cin.get();
}
Is the destructor of x called before exit gives control back to OS?
And more in depth:
void Object::be_stupid ()
{
    Object a; // doesn't throw
    try { Object b; } // throws
    catch (...) { exit(EXIT_FAILURE); }
}
int main ()
{
    Object x; // doesn't throw
    try { x.be_stupid(); } catch (...) { } // exits program
}
Is the destructor of x called before exit gives control back to the OS? If yes, then exit "unwinds" all enclosing stack frames including main()'s, right?
Ok, got it thanks to polkadotcadaver: never use exit(); propagate exceptions up to main() and simply return there. All stack Objects will then be deallocated by their own destructors before the OS takes control.

Shouldn't I use _endthreadex() in thread procedure for stack unwinding?

I have been examining stack unwinding in a thread procedure in a Win32 environment.
My test code is the following.
#include <iostream>
#include <new>
#include <process.h>
#include <tchar.h>
#include <windows.h>
using namespace std;
class Dummy
{
public:
    Dummy() { wcout << L"dummy ctor" << endl; }
    ~Dummy() { wcout << L"dummy dtor" << endl; }
};
void InnerFunc()
{
    Dummy dm;
    while(1)
    {
        char *buf = new char[100000000];
    }
}
unsigned WINAPI ThreadFunc(void *arg)
{
    Dummy dm;
    try
    {
        InnerFunc();
    }
    catch(const bad_alloc &e)
    {
        wcout << e.what() << endl;
    }
    _endthreadex(0);
    return 0;
}
void OuterFunc()
{
    Dummy dm;
    HANDLE hModule;
    hModule = (HANDLE)_beginthreadex(0, 0, ThreadFunc, 0, 0, 0);
    WaitForSingleObject(hModule, INFINITE);
    CloseHandle(hModule);
}
int _tmain(int argc, _TCHAR* argv[])
{
    OuterFunc();
    return 0;
}
Output result:
dummy ctor
dummy ctor
dummy ctor
dummy dtor
bad allocation
dummy dtor
As you can see, the constructor and destructor outputs are not paired. I think _endthreadex() signals the thread handle and skips the thread's stack unwinding.
When I tested again without _endthreadex(), I got the result I expected.
In that case, if I need stack unwinding in a thread, should I avoid calling _endthreadex() in the thread procedure?
I would guess the destructor is never called for the instance created in ThreadFunc. However, you should add a way to distinguish each constructor and destructor call to be sure.
Assuming that's what's happening, it seems pretty clear that _endthreadex terminates the thread immediately without unwinding the stack. The docs explicitly state that _endthreadex is called automatically when ThreadFunc returns, so why bother calling it explicitly here?
This is definitely a case where I'd use boost::thread instead. It will do the right thing in terms of thread creation and cleanup without making you worry about the win32-specific details.
Your problem is:
while(1)
{
    char *buf = new char[100000000];
}
You have created a memory leak: on each iteration you allocate a new block, losing any reference to the previous one.
Stack unwinding clears away all the local objects in that scope.
Dummy dm;
is an object with automatic storage inside InnerFunc(); stack unwinding rightly destroys this object, and the single destructor call you see in the trace is due to this.
Stack unwinding does not deallocate dynamic memory. Each pointer allocated with new[] has to be explicitly deallocated by calling delete[] on the same address.
I don't see how this is related to any of the Windows thread functions (I am not much into Windows), but as already stated, you have a problem there.
Solution:
The simple solution to handling cleanup during exceptions is RAII.
Wrap your raw pointer in a smart pointer; the smart pointer then ensures that your object's memory is deallocated appropriately once the scope ends.

C++ RAII not working?

I'm just getting started with RAII in C++ and set up a little test case. Either my code is deeply confused, or RAII is not working! (I guess it is the former).
If I run:
#include <exception>
#include <iostream>
class A {
public:
    A(int i) { i_ = i; std::cout << "A " << i_ << " constructed" << std::endl; }
    ~A() { std::cout << "A " << i_ << " destructed" << std::endl; }
private:
    int i_;
};
int main(void) {
    A a1(1);
    A a2(2);
    throw std::exception();
    return 0;
}
with the exception commented out I get:
A 1 constructed
A 2 constructed
A 2 destructed
A 1 destructed
as expected, but with the exception I get:
A 1 constructed
A 2 constructed
terminate called after throwing an instance of 'std::exception'
what(): std::exception
Aborted
so my objects aren't destructed even though they go out of scope. Is this not the whole basis of RAII?
Pointers and corrections much appreciated!
You don't have a handler for your exception. When this happens the standard says that std::terminate is called, which in turn calls abort. See section 14.7 in The C++ Programming Language, 3rd edition.
The problem is that main has a special status. When an exception escapes from there, the stack can't be meaningfully unwound; the application just calls std::terminate instead.
And then it makes a bit of sense why the variables don't go out of scope. We haven't actually left the scope in which they were declared. What happens could be considered to be equivalent to this:
int main(void) {
    A a1(1);
    A a2(2);
    std::terminate();
}
(I believe it is implementation-defined whether destructors are called in this case though, so on some platforms, it'll work as you expected)
You have an unhandled exception in main, which means a call to terminate. Try this:
int main(void)
{
    try
    {
        A a1(1);
        A a2(2);
        throw std::exception();
        return 0;
    }
    catch(const std::exception & e)
    {
        return 1;
    }
}
If an exception escapes main(), it is implementation-defined whether the stack is unwound.
Try:
int main()
{
    try
    {
        doWork(); // Do your experiment here.
    }
    catch(...)
    {
        // By catching here you force the stack to unwind correctly.
        throw; // re-throw so exceptions pass to the OS for debugging.
    }
}
As others have pointed out, you've got an uncaught exception, which calls terminate(). It is implementation-defined (see the Standard, 15.3 paragraph 9 and 15.5.1 paragraph 2) whether destructors are called in this case, and the definition in your implementation is apparently "No, they won't". (If terminate() is called for any other reason than throwing an exception that doesn't have a handler, destructors will not be called.)
Your A objects are not being destroyed because std::terminate is being called.
std::terminate is called when an unhandled exception leaks out of main. If you wrap your code in a try/catch (even if the catch just re-raises) you'll see the behaviour you were expecting.
You are not handling the exception properly, so your application is exiting before the objects go out of scope.
I am going to explain a little more. If an exception "bubbles" up to main, it is implementation-defined whether the stack is unwound. Even moving the code to a secondary function will not fix this issue. For example:
#include <exception>
#include <iostream>

void test();

class A {
public:
    A(int i) { i_ = i; std::cout << "A " << i_ << " constructed" << std::endl; }
    ~A() { std::cout << "A " << i_ << " destructed" << std::endl; }
private:
    int i_;
};

int main(void) {
    test();
    return 0;
}

void test(){
    A a1(1);
    A a2(2);
    throw std::exception();
}
The above code will not solve the issue. The only way to solve it is to wrap the thrown exception in a try-catch block. This keeps the exception from reaching main and stops the termination that would otherwise happen before the objects go out of scope.
Others have suggested putting a try/catch inside main() to handle this, which works fine. For some reason I find the rarely used 'function-try-block' to look better, which surprises me (I thought it would look too weird). But I don't think there's any real advantage:
int main(void)
try
{
    A a1(1);
    A a2(2);
    throw std::exception();
    return 0;
}
catch (...)
{
    throw;
}
A couple of disadvantages are that since it's rarely used a lot of developers get thrown for a loop when they see it, and VC6 chokes on it if that's a consideration.
Since the exception is not handled by the time it reaches main(), it results in a call to std::terminate(); you essentially have the equivalent of
int main(void) {
    A a1(1);
    A a2(2);
    exit(1);
}
Destructors are NOT guaranteed to be called in cases where the program terminates before they go out of scope. For another hole in RAII, consider:
int main(void) {
    A *a1 = new A(1);
}
The following code works.
#include <exception>
#include <iostream>
class A {
public:
    A(int i) { i_ = i; std::cout << "A " << i_ << " constructed" << std::endl; }
    ~A() { std::cout << "A " << i_ << " destructed" << std::endl; }
private:
    int i_;
};
void test() {
    A a1(1);
    A a2(2);
    throw std::exception();
}
int main(void) {
    try {
        test();
    } catch(...) {
    }
    return 0;
}