So we have a constructor that can throw an exception depending on the arguments passed to it, but we do not know how to delete the object if this occurs. Here is the important part of the code:
try
{
    GameBase *gameptr = GameBase::getGame(argc, argv);
    if (gameptr == 0)
    {
        std::cout << "Correct usage: " << argv[PROGRAM_NAME] << " " << "TicTacToe" << std::endl;
        return NO_GAME;
    }
    else
    {
        gameptr->play();
    }
    delete gameptr;
}
catch (error e)
{
    if (e == INVALID_DIMENSION)
    {
        std::cout << "Win condition is larger than the length of the board." << std::endl;
        return e;
    }
}
catch (...)
{
    std::cout << "An exception was caught (probably bad_alloc from new operator)" << std::endl;
    return GENERIC_ERROR;
}
In the third line, GameBase::getGame() calls the constructor for one of the games derived from GameBase and returns a pointer to that game, and these constructors can throw exceptions. The question is, how can we then delete the (partial?) object pointed to by gameptr if this occurs? If an exception is thrown, we will exit the scope of gameptr because we leave the try block and cannot call delete gameptr.
To assess the exception safety, you need to provide more detail of the construction of the object in GameBase::getGame.
The rule, though, is that if a constructor throws, the object was never created, and hence the destructor is not called. The associated memory allocation is also released (i.e. the memory for the object itself).
The issue then becomes, how was the memory allocated to begin with? If it was with a new GameBase(...), then there is no need to deallocate or delete the resultant pointer - the memory is deallocated by the runtime.
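For example, here is a minimal, hypothetical sketch of such a factory (the derived class, its constructor argument, and the exception type are invented here, not taken from the original code): if the derived constructor throws inside the new-expression, the runtime releases the memory itself, and the caller never receives a pointer it would have to delete.
#include <iostream>
#include <stdexcept>

struct GameBase {
    virtual ~GameBase() = default;
    virtual void play() = 0;
    static GameBase* getGame(int argc, char** argv); // hypothetical factory
};

struct TicTacToe : GameBase {
    explicit TicTacToe(int winLength) {
        if (winLength > 3)                       // pretend the board is 3x3
            throw std::runtime_error("Win condition is larger than the length of the board.");
    }
    void play() override { std::cout << "playing\n"; }
};

GameBase* GameBase::getGame(int argc, char**) {
    if (argc < 2) return nullptr;                // caller prints usage and bails out
    return new TicTacToe(5);                     // ctor throws; the runtime frees the memory
}

int main(int argc, char** argv) {
    try {
        GameBase* gameptr = GameBase::getGame(argc, argv);
        if (gameptr) { gameptr->play(); delete gameptr; }
    } catch (const std::exception& e) {
        std::cout << e.what() << '\n';           // nothing leaked, nothing to delete
    }
}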
To clarify what happens to member variables that have already been constructed: they are destroyed when the constructor of the "parent" object throws. Consider the sample code:
#include <exception>
#include <iostream>
using namespace std;

struct M {
    M() { cout << "M ctor" << endl; }
    ~M() { cout << "M dtor" << endl; }
};

struct C {
    M m_;
    C() { cout << "C ctor" << endl; throw exception(); }
    ~C() { cout << "C dtor" << endl; }
};

auto main() -> int {
    try {
        C c;
    }
    catch (exception& e) {
        cout << e.what() << endl;
    }
}
The output is;
M ctor
C ctor
M dtor
std::exception
If the M m_ member is to be dynamically allocated, favour a unique_ptr or a shared_ptr over a naked pointer and let the smart pointer manage the object for you, as follows:
#include <exception>
#include <iostream>
#include <memory>
using namespace std;

struct M {
    M() { cout << "M ctor" << endl; }
    ~M() { cout << "M dtor" << endl; }
};

struct C {
    unique_ptr<M> m_;
    C() : m_(new M()) { cout << "C ctor" << endl; throw exception(); }
    ~C() { cout << "C dtor" << endl; }
};
The output here mirrors the output above.
When you write Foo* result = new Foo(), the compiler translates this to the equivalent of this code:
void* temp = operator new(sizeof(Foo)); // allocate raw memory
try {
    Foo* temp2 = new (temp) Foo(); // call constructor
    result = temp2;
} catch (...) {
    operator delete(temp); // constructor threw, deallocate memory
    throw;
}
So you don't need to worry about the allocated memory if the constructor throws. Note, however, that this does not apply to extra memory allocated within the constructor. Destructors are only called for objects whose constructor finished, so you should get all your allocations into small wrapper objects (smart pointers) immediately.
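For instance, here is a minimal sketch (the class and member names are invented) of pushing each allocation made by a constructor into its own smart-pointer member, so that a later throw in the constructor cannot leak:
#include <iostream>
#include <memory>
#include <stdexcept>

struct Buffer { };  // stand-in for some dynamically allocated resource

struct Widget {
    // Each allocation lives in its own wrapper, so it is released
    // automatically if a later part of the constructor throws.
    std::unique_ptr<Buffer> a;
    std::unique_ptr<Buffer> b;

    Widget()
        : a(std::make_unique<Buffer>()),
          b(std::make_unique<Buffer>())
    {
        throw std::runtime_error("init failed");  // a and b are still cleaned up
    }
};

int main() {
    try {
        Widget w;
    } catch (const std::exception& e) {
        std::cout << e.what() << '\n';  // no leak: the member unique_ptrs were destroyed
    }
}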
If you throw in a constructor, the object is never constructed, so its destructor will not run, and you are responsible for releasing any resources the constructor had already acquired. This goes even further! Consider this code:
int a = function(new A, new A);
It is up to the compiler in which order the two A objects are allocated and constructed. If the second constructor throws after the first new A has completed, the first object is leaked - so you can end up with a memory leak if the A constructor can throw!
Edit:
Use this instead:
try {
    auto first = std::make_unique<A>();
    auto second = std::make_unique<A>();
    int a = function(*first, *second); // note: function now takes references instead of raw pointers
    ...
}
Related
I wrote a small program to check the difference between creating a shared_ptr via new and via the make_shared() function in the presence of exceptions. I read everywhere that make_shared() is exception-safe.
But the interesting thing is that in both cases the destructor is not called after stack unwinding. Am I missing something? Thanks in advance.
#include <iostream>
#include <memory>
#include <stdexcept>
using namespace std;

class Car
{
public:
    Car() { cout << "Car constructor!" << endl; throw std::runtime_error("Oops"); }
    ~Car() { cout << "Car destructor!" << endl; }
};

void doProcessing()
{
    // std::shared_ptr<Car> sp(new Car());
    std::shared_ptr<Car> sp2 = std::make_shared<Car>();
}
int main()
{
    try
    {
        doProcessing();
    }
    catch(...)
    {
    }
    return 0;
}
What object?
The only object in a smart pointer here did not actually complete construction, because its constructor threw. It doesn't exist.
You don't need smart pointers to demonstrate this. Just throw from any constructor and you'll see that the destructor body is not invoked.
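For example, a minimal sketch in the spirit of the Car class above, with no smart pointers and no dynamic allocation at all:
#include <iostream>
#include <stdexcept>

struct Car {
    Car()  { std::cout << "Car constructor!" << std::endl; throw std::runtime_error("Oops"); }
    ~Car() { std::cout << "Car destructor!" << std::endl; }  // never runs: no Car is ever fully constructed
};

int main() {
    try {
        Car c;   // automatic storage, no new, no smart pointer
    } catch (const std::exception&) {
        std::cout << "caught" << std::endl;
    }
}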
Just wanted to add an answer addressing "I read everywhere that via make_shared() it is an exception-safe" part of your question, the rest is already answered by Lightness Races in Orbit.
The difference between make_shared and shared_ptr(new Car) can be demonstrated by the program below.
#include <iostream>
#include <memory>
#include <stdexcept>
using namespace std;

class Car
{
public:
    Car() { cout << "Car constructor!" << endl; throw std::runtime_error("Car oops"); }
    ~Car() { cout << "Car destructor!" << endl; }
};

class Bycicle
{
public:
    Bycicle() { cout << "Bycicle constructor!, does not throw" << endl; }
    ~Bycicle() { cout << "Bycicle destructor!" << endl; }
};
void doProcessing(std::shared_ptr<Car> /*carPtr*/, std::shared_ptr<Bycicle> /*bPtr*/)
{
}

int main()
{
    try
    {
        doProcessing(std::shared_ptr<Car>(new Car), std::shared_ptr<Bycicle>(new Bycicle));
    }
    catch(std::exception& ex)
    {
        std::cout << "Ex is " << ex.what() << std::endl;
    }
    return 0;
}
Until C++17, the compiler is allowed to make the following function calls, in the order described:
-- Call new Bycicle along with the constructor of Bycicle, but NOT yet the constructor of the shared_ptr.
-- Call the constructor of Car, which throws.
In this case, as pointed out, Car was never fully constructed, so it won't leak. However, the constructor of Bycicle did run to completion, and that object does leak (since no shared_ptr owns it yet).
Calling doProcessing(std::make_shared<Car>(), std::make_shared<Bycicle>()); guarantees that ownership of each fully constructed object is passed to a shared_ptr immediately.
Final note: this is no longer an issue since C++17, because C++17 guarantees that each argument is evaluated completely before evaluation of another argument begins (the order in which the arguments are evaluated is still unspecified).
I am using RAII and try/catch to avoid leaking memory. Here is an example implementation in C++:
#include <iostream>
#include <memory>
using namespace std;

void g(){
    throw 5;
}

class Rand{
public:
    ~Rand(){
        cout << "Random Destructor called" << endl;
    }
    int a = 17;
};

void f(){
    auto p = std::make_unique<Rand>(); // Should call dtor
    Rand* r = new Rand();              // Shouldn't call dtor (and is leaked)
    cout << p->a << endl;              // Prints 17
    g();
    cout << "This never executes" << endl;
}

int main(){
    f();
}
Due to stack unwinding and the use of RAII with std::unique_ptr, shouldn't the destructors of stack-allocated objects be called, as a basic guarantee of throw/try, since an exception is being thrown?
From throw:
Stack unwinding
As the control flow moves up the call stack, destructors are invoked for all objects with automatic storage duration constructed, but not yet destroyed, since the corresponding try-block was entered, in reverse order of completion of their constructors.
There is no corresponding try-block in your code, so no destructors are called and the program is terminated.
If you change the program as:
try
{
    auto p = std::make_unique<Rand>(); // Should call dtor
    Rand* r = new Rand();              // Shouldn't call dtor
    cout << p->a << endl;              // Prints 17
    g();
    cout << "This never executes" << endl;
}
catch (int) {}
you will see that the destructor for the object wrapped in the unique_ptr is called.
What happens, in the following code, if construction / destruction of some array element throws?
X* x = new X[10]; // (1)
delete[] x; // (2)
I know that memory leaks are prevented, but additionally:
Regarding (1): are the previously constructed elements destroyed? If yes, what happens if a destructor throws in such a case?
Regarding (2): are the not-yet-destroyed elements destroyed? If yes, what happens if a destructor throws again?
Yes, if the constructor of x[5] throws, then the five array elements x[0]..x[4] already successfully constructed will be destroyed correctly.
Destructors should not throw. If a destructor does throw, this happens while the previous (constructor) exception is still being handled. As nested exceptions aren't supported, std::terminate is called immediately. This is why destructors shouldn't throw.
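To make that concrete, here is a minimal sketch (note the destructor must be declared noexcept(false) in C++11 and later just to be allowed to throw at all): the third element's constructor throws, cleanup of the already-built elements begins, and the throwing destructor then ends the program via std::terminate.
#include <iostream>
#include <stdexcept>

struct X {
    static int count;
    X()  { if (++count == 3) throw std::runtime_error("ctor failed"); }
    ~X() noexcept(false) { throw std::runtime_error("dtor failed"); }  // never do this
};
int X::count = 0;

int main() {
    try {
        X* x = new X[10];   // third constructor throws; x[1] and x[0] must be destroyed
        delete[] x;
    } catch (const std::exception& e) {
        std::cout << e.what() << '\n';   // never reached: std::terminate fires during cleanup
    }
}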
There are two mutually-exclusive options here:
If you reach label (2), the constructor didn't throw. That is, if x was successfully created, all ten elements were successfully constructed. In this case, yes, they all get deleted. No, your destructor still shouldn't throw.
If the constructor threw part-way through step (1), then the array x never really existed. The language tried to create it for you, failed, and threw an exception - so you don't reach (2) at all.
The key thing to understand is that x either exists - in a sane and predictable state - or it doesn't.
The language doesn't give you some un-usable half-initialized thing, if a constructor failed, because you couldn't do anything with it anyway. (You couldn't even safely delete it, because there would be no way to track which of the elements were constructed, and which were just random garbage).
It might help to consider the array as an object with ten data members. If you're constructing an instance of such a class, and one of the base-class or member constructors throws, all the previously-constructed bases and members are destroyed in exactly the same way and your object never starts existing.
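A small sketch of that analogy (class names invented here): a type with a base subobject and two members, where the second member's constructor throws, unwinds the first member and the base exactly as the array case unwinds the already-built elements.
#include <iostream>
#include <stdexcept>

struct Base {
    Base()  { std::cout << "Base ctor\n"; }
    ~Base() { std::cout << "Base dtor\n"; }
};

struct Part {
    Part()  { std::cout << "Part ctor\n"; }
    ~Part() { std::cout << "Part dtor\n"; }
};

struct Throws {
    Throws()  { std::cout << "Throws ctor\n"; throw std::runtime_error("boom"); }
    ~Throws() { std::cout << "Throws dtor\n"; }   // never runs: Throws is never fully constructed
};

struct Whole : Base {
    Part   first;
    Throws second;
    ~Whole() { std::cout << "Whole dtor\n"; }     // never runs: Whole never starts existing
};

int main() {
    try {
        Whole w;
    } catch (const std::exception& e) {
        std::cout << e.what() << '\n';
    }
}
// Prints: Base ctor, Part ctor, Throws ctor, Part dtor, Base dtor, boom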
We can test with the following code:
#include <iostream>
#include <stdexcept>

//`Basic` was borrowed from some general-purpose code I use for testing various issues
//relating to object construction/assignment
struct Basic {
    Basic() {
        std::cout << "Default-Constructor" << std::endl;
        static int val = 0;
        if(val++ == 5) throw std::runtime_error("Oops!");
    }
    Basic(Basic const&) { std::cout << "Copy-Constructor" << std::endl; }
    Basic(Basic &&) { std::cout << "Move-Constructor" << std::endl; }
    Basic & operator=(Basic const&) { std::cout << "Copy-Assignment" << std::endl; return *this; }
    Basic & operator=(Basic &&) { std::cout << "Move-Assignment" << std::endl; return *this; }
    ~Basic() noexcept { std::cout << "Destructor" << std::endl; }
};

int main() {
    Basic * ptrs = new Basic[10];
    delete[] ptrs;
    return 0;
}
This code yields the following output before crashing:
Default-Constructor
Default-Constructor
Default-Constructor
Default-Constructor
Default-Constructor
Default-Constructor
[std::runtime_error thrown and uncaught here]
Note that at no point were the Destructors called. This isn't necessarily a critical thing, since an uncaught exception will crash the program anyways. But if we catch the error, we see something reassuring:
int main() {
    try {
        Basic * ptrs = new Basic[10];
        delete[] ptrs;
    } catch (std::runtime_error const& e) {std::cerr << e.what() << std::endl;}
    return 0;
}
The output changes to this:
Default-Constructor
Default-Constructor
Default-Constructor
Default-Constructor
Default-Constructor
Default-Constructor
Destructor
Destructor
Destructor
Destructor
Destructor
Oops!
So destructors will be automatically called for fully constructed objects, even without an explicit delete[] call, because the new[] expression has handling mechanisms to deal with this.
But you do have to worry about that sixth object: in our case, because Basic doesn't do any resource management (and a well-designed program wouldn't have Basic do resource management if its constructor could throw like this), we don't have to worry. But we might have to worry if our code looks like this instead:
#include <iostream>
#include <stdexcept>

struct Basic {
    Basic() { std::cout << "Default-Constructor" << std::endl; }
    Basic(Basic const&) { std::cout << "Copy-Constructor" << std::endl; }
    Basic(Basic &&) { std::cout << "Move-Constructor" << std::endl; }
    Basic & operator=(Basic const&) { std::cout << "Copy-Assignment" << std::endl; return *this; }
    Basic & operator=(Basic &&) { std::cout << "Move-Assignment" << std::endl; return *this; }
    ~Basic() noexcept { std::cout << "Destructor" << std::endl; }
};

class Wrapper {
    Basic * ptr;
public:
    Wrapper() : ptr(new Basic) {
        std::cout << "WRDefault-Constructor" << std::endl;
        static int val = 0;
        if(val++ == 5) throw std::runtime_error("Oops!");
    }
    Wrapper(Wrapper const&) = delete; //Disabling Copy/Move for simplicity
    ~Wrapper() noexcept { delete ptr; std::cout << "WRDestructor" << std::endl; }
};

int main() {
    try {
        Wrapper * ptrs = new Wrapper[10];
        delete[] ptrs;
    } catch (std::runtime_error const& e) {std::cout << e.what() << std::endl;}
    return 0;
}
Here, we get this output:
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Destructor
WRDestructor
Destructor
WRDestructor
Destructor
WRDestructor
Destructor
WRDestructor
Destructor
WRDestructor
Oops!
The large block of Wrapper objects will not leak memory, but the sixth Wrapper object will leak a Basic object because it was not properly cleaned up!
Fortunately, as is usually the case with any resource-management scheme, all these problems go away if you use smart pointers:
#include <iostream>
#include <memory>
#include <stdexcept>

struct Basic {
    Basic() { std::cout << "Default-Constructor" << std::endl; }
    Basic(Basic const&) { std::cout << "Copy-Constructor" << std::endl; }
    Basic(Basic &&) { std::cout << "Move-Constructor" << std::endl; }
    Basic & operator=(Basic const&) { std::cout << "Copy-Assignment" << std::endl; return *this; }
    Basic & operator=(Basic &&) { std::cout << "Move-Assignment" << std::endl; return *this; }
    ~Basic() noexcept { std::cout << "Destructor" << std::endl; }
};

class Wrapper {
    std::unique_ptr<Basic> ptr;
public:
    Wrapper() : ptr(new Basic) {
        std::cout << "WRDefault-Constructor" << std::endl;
        static int val = 0;
        if(val++ == 5) throw std::runtime_error("Oops!");
    }
    //Wrapper(Wrapper const&) = delete; //Copy already disabled by the unique_ptr member
    ~Wrapper() noexcept { std::cout << "WRDestructor" << std::endl; }
};

int main() {
    try {
        std::unique_ptr<Wrapper[]> ptrs{new Wrapper[10]}; //Or std::make_unique
    } catch (std::runtime_error const& e) {std::cout << e.what() << std::endl;}
    return 0;
}
And the output:
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Default-Constructor
WRDefault-Constructor
Destructor
WRDestructor
Destructor
WRDestructor
Destructor
WRDestructor
Destructor
WRDestructor
Destructor
WRDestructor
Destructor
Oops!
Note that the number of calls to Destructor now match the number of calls to Default-Constructor, which tells us that the Basic objects are now getting properly cleaned up. And because the resource management that Wrapper was doing has been delegated to the unique_ptr object, the fact that the sixth Wrapper object doesn't have its deleter called is no longer a problem.
Now, a lot of this involves strawmanned code: no reasonable programmer would ever have a resource manager throw without proper handling code, even if it were made "safe" by use of smart-pointers. But some programmers just aren't reasonable, and even if they are, it's possible you might come across a weird, exotic scenario where you have to write code like this. The lesson, then, as far as I'm concerned, is to always use smart pointers and other STL objects to manage dynamic memory. Don't try to roll your own. It'll save you headaches exactly like this when trying to debug things.
Suppose we create an array this way:
T* arr = new T[num];
Now, for some reason, we have decided that we simply need to delete that array, but without calling any T destructors.
We all know that if we write the following:
delete arr;
the T destructor will be called.
If we write this:
delete[] arr;
all num destructors will be called.
Having played with pointers, you realize that new[] typically stores, just before the returned pointer, an unsigned long long value holding the number of allocated T instances. So we try to outwit C++ by changing that value to the number of bytes that arr occupies and deleting it as a (char*), in the hope that delete would then not call the destructors for the T instances and would simply free the occupied memory. So you write something like this:
typedef unsigned long long unsll;
unsll & num = *(unsll*)((char*)arr - sizeof(unsll));
num = num * sizeof(T);
delete ((char*)arr);
But that doesn't work: a run-time error (a triggered breakpoint) occurs when trying to delete this. A lot of other pointer games don't work either, as at least some compile-time or run-time error occurs. So the question is:
Is it possible to delete an array of classes in C++ without calling their destructors?
Perhaps you want ::operator delete[](arr).
(See http://en.cppreference.com/w/cpp/memory/new/operator_delete)
But this still has undefined behaviour, and is a terrible idea.
One simple way to deallocate without calling destructors is to separate allocation and initialization. When you take proper care of alignment you can use placement new (or the functionality of a standard allocator object) to create the object instances inside the allocated block. Then at the end you can just deallocate the block, using the appropriate deallocation function.
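Here is a minimal sketch of that separation, using only standard facilities (::operator new, placement new, ::operator delete); skipping the destructors like this is only defensible for types whose destructors genuinely do nothing:
#include <cstddef>
#include <iostream>
#include <new>

struct T {
    int value = 42;
    ~T() { std::cout << "~T\n"; }   // deliberately never invoked below
};

int main() {
    const std::size_t num = 10;

    // 1. Allocate raw storage; ::operator new returns memory suitably aligned
    //    for any ordinary type, and no constructors run yet.
    void* raw = ::operator new(num * sizeof(T));

    // 2. Construct the objects in place with placement new.
    T* arr = static_cast<T*>(raw);
    for (std::size_t i = 0; i < num; ++i)
        ::new (static_cast<void*>(arr + i)) T();

    std::cout << arr[3].value << '\n';   // prints 42

    // 3. Deallocate the block; no destructor is ever called.
    ::operator delete(raw);
}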
I can't think of any situation where this would be a smart thing to do: it smells strongly of premature optimization and an X/Y problem (dealing with problem X by imagining impractical Y as a solution, then asking only about Y).
A new-expression is designed to couple allocation with initialization, so that they're executed as an all-or-nothing operation. This coupling, and ditto coupling for cleanup and deallocation, is key to correctness, and it also simplifies things a lot (i.e., inside there's complexity that one doesn't have to deal with). Uncoupling needs to have a very good reason. Avoiding destructor calls, for e.g. purposes of optimization, is not a good reason.
I'm only going to address your specific question of
Is it possible to delete an array of classes in C++ without calling their destructors?
The short answer is yes.
The long answer is yes, but there's caveats and considering specifically what a destructor is for (i.e. resource clean up), it's generally a bad idea to avoid calling a class destructor.
Before I continue, it should be noted that this specifically answers your question: if you're using C++ (as opposed to straight C), the code below will work (it is conforming), but if you find yourself needing to write code this way, you might need to rethink parts of your design, since code like this can lead to bugs, errors, and general undefined behavior if not used properly.
TL;DR if you need to avoid destructors, you need to rethink your design (i.e. use copy/move semantics or an STL container instead).
You can use malloc and free to avoid constructor and destructor calls, example code:
#include <iostream>
#include <cstdlib>

class MyClass {
public:
    MyClass() : m_val(0)
    {
        this->init(42);
        std::cout << "ctor" << std::endl;
    }
    ~MyClass()
    {
        std::cout << "dtor" << std::endl;
    }
    friend std::ostream& operator<<(std::ostream& stream, const MyClass& val)
    {
        stream << val.m_val;
        return stream;
    }
    void init(int val)
    {
        /* just showing that the this pointer is valid and can
           reference private members regardless of new or malloc */
        this->_init(val);
    }
private:
    int m_val;
    void _init(int val)
    {
        this->m_val = val;
    }
};

template < typename Iterator >
void print(Iterator begin, Iterator end)
{
    while (begin != end) {
        std::cout << *begin << std::endl;
        ++begin;
    }
}

void set(MyClass* arr, std::size_t count)
{
    for (; count > 0; --count) {
        arr[count-1].init(count);
    }
}

int main(int argc, char* argv[])
{
    std::cout << "Calling new[10], 10 ctors called" << std::endl;
    MyClass* arr = new MyClass[10]; // 10 ctors called
    std::cout << "0: " << *arr << std::endl;
    set(arr, 10);
    print(arr, arr+10);
    std::cout << "0: " << *arr << std::endl;
    std::cout << "Calling delete[], 10 dtors called" << std::endl;
    delete[] arr; // 10 dtors called
    std::cout << "Calling malloc(sizeof*10), 0 ctors called" << std::endl;
    arr = static_cast<MyClass*>(std::malloc(sizeof(MyClass)*10)); // no ctors
    std::cout << "0: " << *arr << std::endl; // prints whatever happens to be in the memory
    set(arr, 10);
    print(arr, arr+10);
    std::cout << "0: " << *arr << std::endl;
    std::cout << "Calling free(), 0 dtors called" << std::endl;
    std::free(arr); // no dtors
    return 0;
}
It should be noted that mixing new with free and/or malloc with delete results in undefined behaviour, so calling MyClass* arr = new MyClass[10]; and then calling free(arr); might not work as "expected" (hence the UB).
Another issue that will arise from not calling a constructor/destructor in C++ is with inheritance. The above code will work with malloc and free for basic classes, but if you start to throw in more complex types, or inherit from other classes, the constructors/destructors of the inherited classes will not get called and things get ugly real quick, example:
#include <iostream>
#include <cstdlib>

class Base {
public:
    Base() : m_val(42)
    {
        std::cout << "Base()" << std::endl;
    }
    virtual ~Base()
    {
        std::cout << "~Base" << std::endl;
    }
    friend std::ostream& operator<<(std::ostream& stream, const Base& val)
    {
        stream << val.m_val;
        return stream;
    }
protected:
    Base(int val) : m_val(val)
    {
        std::cout << "Base(" << val << ")" << std::endl;
    }
    void _init(int val)
    {
        this->m_val = val;
    }
    int m_val;
};

class Child : public virtual Base {
public:
    Child() : Base(42)
    {
        std::cout << "Child()" << std::endl;
    }
    ~Child()
    {
        std::cout << "~Child" << std::endl;
    }
    void init(int val)
    {
        this->_init(val);
    }
};

template < typename Iterator >
void print(Iterator begin, Iterator end)
{
    while (begin != end) {
        std::cout << *begin << std::endl;
        ++begin;
    }
}

void set(Child* arr, std::size_t count)
{
    for (; count > 0; --count) {
        arr[count-1].init(count);
    }
}

int main(int argc, char* argv[])
{
    std::cout << "Calling new[10], 20 ctors called" << std::endl;
    Child* arr = new Child[10]; // 20 ctors called
    // will print the first element because of Base::operator<<
    std::cout << "0: " << *arr << std::endl;
    set(arr, 10);
    print(arr, arr+10);
    std::cout << "0: " << *arr << std::endl;
    std::cout << "Calling delete[], 20 dtors called" << std::endl;
    delete[] arr; // 20 dtors called
    std::cout << "Calling malloc(sizeof*10), 0 ctors called" << std::endl;
    arr = static_cast<Child*>(std::malloc(sizeof(Child)*10)); // no ctors
    std::cout << "The next line will seg-fault" << std::endl;
    // Segfault because the base pointers were never initialized
    std::cout << "0: " << *arr << std::endl; // segfault
    set(arr, 10);
    print(arr, arr+10);
    std::cout << "0: " << *arr << std::endl;
    std::cout << "Calling free(), 0 dtors called" << std::endl;
    std::free(arr); // no dtors
    return 0;
}
The above code is compliant and compiles without error on g++ and Visual Studio, but due to the inheritance, both crash when I try to print the first element after a malloc (because the base class was never initialized).
So you can indeed create and delete an array of objects without calling their constructors and destructors, but doing so creates a slew of extra scenarios you need to be aware of and account for in order to avoid undefined behavior or crashes. If your code genuinely requires that the destructors not be called, you might want to reconsider your overall design (possibly even use an STL container or smart pointer types).
Hope that can help.
I know Object *pObject = new Object involves two steps:
operator new to allocate memory;
a call to the object's constructor.
And delete pObject involves:
a call to the object's destructor;
operator delete to free the memory.
But during the new Object process, if step 2 throws an exception, will operator delete be called by the system to free the memory?
No, the destructor is not called. As the object wasn't constructed properly, it would be unsafe to call its destructor. However, if any member objects have been fully constructed, they are destroyed (since those subobjects are complete).
Some people recommend against throwing in constructors; I believe it is better than zombie states, which are akin to error codes and make for verbose code. So long as you follow RAII you should be fine (each resource is managed by its own object). Before you throw in the constructor, make sure you clean up anything you've half done; but again, if you're using RAII, that should be nothing.
The following outputs "B":
#include <iostream>

struct B {
    ~B() { std::cout << "B" << std::endl; }
};

struct A {
    A() : b() { throw(1); }
    ~A() { std::cout << "A" << std::endl; }
    B b;
};

int main() {
    try {
        A *a = new A;
        delete a;
    } catch(int a) {}
}
Edit:
The above isn't quite what you asked. Yes, the delete operator is called; http://www.cplusplus.com/reference/new/operator%20delete[] says:
"These deallocation functions are called by delete-expressions and by new-expressions to deallocate memory after destructing (or failing to construct) objects with dynamic storage duration."
This could be tested by overriding the operator delete.
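For example, a minimal sketch of that test: give the class its own operator new/operator delete pair that just logs, and watch the matching operator delete run even though the destructor never does.
#include <cstddef>
#include <iostream>
#include <new>

struct A {
    A()  { std::cout << "A()\n"; throw 1; }
    ~A() { std::cout << "~A()\n"; }            // never called

    static void* operator new(std::size_t size) {
        std::cout << "operator new\n";
        return ::operator new(size);
    }
    static void operator delete(void* p) noexcept {
        std::cout << "operator delete\n";      // called even though ~A() is not
        ::operator delete(p);
    }
};

int main() {
    try {
        A* a = new A;    // ctor throws; the matching operator delete releases the memory
        delete a;        // never reached
    } catch (int) {
        std::cout << "caught\n";
    }
}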
Yes, the operator delete will be called to release the memory allocated.
The program below can prove that:
#include <iostream>
using std::cout;
using std::endl;

class A {
public:
    A() { cout << "A() at " << this << endl; throw 1; }
    ~A() { cout << "~A() at " << this << endl; }
};

int main(int argc, char *argv[]) {
    int N = 3;
    for (int i = 0; i < N; ++i) {
        try {
            new A;
        } catch (int a) {
            // pass
        }
    }
    return 0;
}
Running this program on my system, I find that the results printed out are like this:
A() at 0x2170010
A() at 0x2170010
A() at 0x2170010
Obviously, the destructors are NOT called, because NO
~A() at 0x2170010
lines are printed out.
And operator delete is surely called, because the addresses of the three objects are exactly the same - the memory released after each failed construction is reused for the next allocation.