I have these kinds of classes:
Game:
class Game {
private:
    BoardField*** m_board_fields;
public:
    Game() {
        m_board_fields = new BoardField**[8];
        for (int i = 0; i < 8; i++) {
            m_board_fields[i] = new BoardField*[8];
        }
    }

    Game::~Game() {
        for (int i = 0; i < 8; i++) {
            for (int j = 0; i < 8; j++) {
                delete m_board_fields[i][j];
            }
            delete[] m_board_fields[i];
        }
        delete[] m_board_fields;
    }
}
BoardField:
class BoardField {
private:
    ChessPiece* m_piece;
    ....
public:
    BoardField::~BoardField() {
        delete m_piece;
    }
}
And when the program closes I get an error in ~BoardField:
Exception thrown: read access violation.
this was 0xFDFDFDFD.
Did I write my destructors incorrectly? What is the best way to free the memory of a multidimensional array?
There are two fundamental flaws in your design:
there is no clear ownership of the BoardFields: someone creates them, someone else deletes them. It can work if you're very cautious, but it's error-prone.
you do not follow the rule of 3 (or better, 5): if any piece of code creates a copy of either your Game or of any BoardField, the first object that gets destroyed will delete the m_piece pointer, and when the second object gets destroyed it'll try to delete the same pointer a second time, which is undefined behavior (UB).
There is a third important issue: you're over-using raw pointers:
if m_board_fields is a 2D array of fixed size, make it a fixed-size array (i.e. BoardField* m_board_fields[8][8]). If you want to keep its size dynamic, use vectors.
a cell of m_board_fields could be a pointer if some polymorphism is expected. But that doesn't seem to be the case here, as ChessPiece is obviously the polymorphic class. So better to use plain objects instead of pointers (i.e. BoardField m_board_fields[8][8]).
Finally, instead of using a raw pointer to ChessPiece, better to use a shared_ptr<ChessPiece>: you don't have to worry about shallow pointer copies and double deletes; the shared_ptr takes care of itself and destroys the object once it's no longer used.
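A minimal sketch of what that design could look like, assuming an 8x8 board and a polymorphic ChessPiece base class (the member and method names here are illustrative, not taken from your code):

#include <array>
#include <memory>
#include <utility>

class ChessPiece {                        // assumed polymorphic base class
public:
    virtual ~ChessPiece() = default;
};

class BoardField {
private:
    std::shared_ptr<ChessPiece> m_piece;  // empty field: nullptr
public:
    void setPiece(std::shared_ptr<ChessPiece> piece) { m_piece = std::move(piece); }
    // no user-defined destructor, copy or move operations needed:
    // shared_ptr releases the piece when the last owner goes away
};

class Game {
private:
    std::array<std::array<BoardField, 8>, 8> m_board_fields;  // plain objects, fixed 8x8 size
public:
    // nothing to delete by hand: the compiler-generated special members are correct
};

With this layout there is nothing left for ~Game or ~BoardField to do, so the double-delete cannot happen.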
Related
I am in a situation where I have to call a function in a loop with a pointer (to a class object) as a parameter. The issue is that I cannot modify the signature of that function, and with every iteration of the loop I have to initialize the pointer. This will lead to a memory leak, because I cannot delete the pointer (after passing it to the function) inside the loop. Is there any way I can prevent the memory leak in such a case?
I would like to explain with a simple example:
class testDelete
{
public:
    void setValue(int* val) {vec.push_back(val);};
    void getValue();
private:
    vector <int*> vec;
};

void testDelete::getValue()
{
    for (int i=0;i<vec.size();i++)
    {
        cout << "vec[" << i << "] = " << *vec[i]<<"\n";
    }
}

int main()
{
    testDelete tD;
    int* value = NULL;
    for (int i=0;i<10;i++)
    {
        value=new int(i+1);
        /*I am not allowed to change this function's signature, and hence I am forced to pass pointer to it*/
        tD.setValue(value);
        /*I cannot do delete here otherwise the getValue function will show garbage value*/
        //delete value;
    }
    tD.getValue();
    return 0;
}
If testDelete wants to use pointers to objects that may already be gone, it should hold std::weak_ptrs.
Holding on to a raw pointer and dereferencing it later is dangerous (unless you can make sure the object is still alive, i.e. use smart pointers rather than raw ones).
[...] I cannot modify the signature of that function, and with every iteration of the loop I have to initialize the pointer. Is there any way I can prevent the memory leak in such a case?
If you need dynamically allocated objects, use smart pointers (e.g. std::shared_ptr for shared ownership). If you do not need to allocate them dynamically, then don't.
For the sake of the example, let's assume you cannot modify testDelete; then for integers there is no reason to dynamically allocate anything:
int main()
{
    std::array<int,10> x;
    testDelete tD;
    for (int i=0;i<10;i++)
    {
        x[i] = i+1;
        tD.setValue(&x[i]);
    }
    tD.getValue();
    return 0;
}
Take this code with a grain of salt; it is actually testDelete that needs to be fixed to avoid trouble.
TL;DR
In your example you actually have two problems: testDelete may try to access objects that are already gone, and main leaks memory. Using smart pointers solves both.
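A sketch of what that could look like if testDelete may be changed (hypothetical rewrite; it drops the int* signature from the question in favor of std::shared_ptr):

#include <cstddef>
#include <iostream>
#include <memory>
#include <vector>

class testDelete
{
public:
    // shared ownership: the vector keeps each int alive as long as it needs it
    void setValue(std::shared_ptr<int> val) { vec.push_back(std::move(val)); }
    void getValue() const
    {
        for (std::size_t i = 0; i < vec.size(); i++)
            std::cout << "vec[" << i << "] = " << *vec[i] << "\n";
    }
private:
    std::vector<std::shared_ptr<int>> vec;
};

int main()
{
    testDelete tD;
    for (int i = 0; i < 10; i++)
    {
        tD.setValue(std::make_shared<int>(i + 1));  // no manual delete needed
    }
    tD.getValue();
    return 0;  // every int is freed automatically when tD is destroyed
}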
Store the integers in a container:
int main()
{
    std::vector<int> values(10);
    testDelete tD;
    for (int i=0;i<10;i++)
    {
        values[i] = i + 1;
        tD.setValue(&values[i]);
    }
    tD.getValue();
    return 0;
}
Here is an example that will hopefully demonstrate my confusion:
#define MAX_LENGTH 5

class Bar
{
private:
    Foo *_myFooArray[MAX_LENGTH];
public:
    Bar()
    {
        for (int i = 0; i < MAX_LENGTH; ++i)
        {
            _myFooArray[i] = new Foo(i);
        }
    }
};
Since I am not creating the array with new, I don't think I can use delete[], but what if I want to delete the objects that are allocated dynamically? Do I iterate through the array and delete them one at a time, like this?
~Bar()
{
    for (int i = 0; i < MAX_LENGTH; ++i)
    {
        delete _myFooArray[i];
    }
}
I will probably hear some of you screaming at me: Use vectors!! I appreciate that; I just want to learn. Just for completeness, if for some reason I have to use the array as mentioned above, is there anything I need to pay extra attention to besides deleting the elements correctly?
One of the rules in C++ when not using smart pointers is "every new needs a delete."
You must use the destructor from your question to avoid leaking memory. The array itself is not allocated using new so you do not need to delete it.
No, you do not delete [] the array. The array is part of your instance's data. The elements you allocate specifically are not, so you should delete them.
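If you do want to avoid the manual delete loop while keeping a fixed-size array, one possible variation (a sketch only; Foo is assumed to be a simple class like in your question, and make_unique requires C++14) is to let each array slot own its element through a smart pointer:

#include <array>
#include <memory>

class Foo {
public:
    explicit Foo(int i) : value(i) {}
    int value;
};

#define MAX_LENGTH 5

class Bar
{
private:
    std::array<std::unique_ptr<Foo>, MAX_LENGTH> _myFooArray;  // each slot owns its Foo
public:
    Bar()
    {
        for (int i = 0; i < MAX_LENGTH; ++i)
        {
            _myFooArray[i] = std::make_unique<Foo>(i);  // C++14
        }
    }
    // no destructor needed: each unique_ptr deletes its Foo automatically
};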
In the following program we are creating a Circle object in local scope, because we are not using the new keyword. We know that the memory of a variable or object is automatically released at the end of the program, so why do we use a destructor?
#include<iostream>
using namespace std;

class Circle //specify a class
{
private:
    double radius; //class data members
public:
    Circle() //default constructor
    {
        radius = 0;
    }
    void setRadius(double r) //function to set data
    {
        radius = r;
    }
    double getArea()
    {
        return 3.14 * radius * radius;
    }
    ~Circle() //destructor
    {}
};

int main()
{
    Circle c; //default constructor invoked
    cout << c.getArea()<<endl;
    return 0;
}
Treating memory as an infinite resource is VERY dangerous. Think about a real-time application which needs to run 24x7 and listens to a data feed at a high rate (let's say 1,000 messages per second). Each message is around 1 KB, and the application allocates a new memory block (on the heap, obviously) for each message. Altogether, we need around 82 GB per day. If you don't manage your memory, now you can see what will happen. I'm not talking about sophisticated memory-pool techniques or the like. With a simple arithmetic calculation, we can see we can't keep all messages in memory. This is another example of why you have to think about memory management (from both the allocation and the deallocation perspectives).
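For reference, the arithmetic behind that figure: 1,000 msg/s × 1 KB/msg × 86,400 s/day = 86,400,000 KB ≈ 82 GiB per day (taking 1 KB = 1,024 bytes).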
Well, first of all, you don't need to explicitly define a destructor; one will automatically be generated by the compiler. As a side note, if you do, then by the rule of 3 (or the rule of 5 in C++11), if you declare any of the following: copy constructor, copy assignment, move constructor (C++11), move assignment (C++11) or destructor, you should explicitly define all of them.
Moving on. Oversimplified, the RAII principle states that every resource allocated must be deallocated. Furthermore, every resource allocated must have one and only one owner, an object responsible for deallocating the resource. That's resource management. A resource here can mean anything that has to be initialized before use and released after use, e.g. dynamically allocated memory, system handles (file handles, thread handles), sockets, etc. The way this is achieved is through constructors and destructors. If your object is responsible for destroying a resource, then the resource should be destroyed when your object dies. Here the destructor comes into play.
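To illustrate with a non-memory resource, here is a sketch of an RAII wrapper around a C FILE* handle (not part of your program; just an example of "one owner releases the resource"):

#include <cstdio>
#include <stdexcept>

class File {                      // RAII wrapper: the File object owns the handle
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~File() { std::fclose(f_); }  // released exactly once, when the owner dies

    File(const File&) = delete;   // rule of 3/5: forbid copies to keep a single owner
    File& operator=(const File&) = delete;

    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};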
Your example is not that illustrative, since your variable lives in main, so it will live for the entirety of the program.
Consider a local variable inside a function:
int f()
{
    Circle c;
    // whatever
    return 0;
}
Every time you call the function f, a new Circle object is created and it’s destroyed when the function returns.
Now as an exercise consider what is wrong with the following program:
std::vector<int> foo() {
    int *v = new int[100];
    std::vector<int> result(100);
    for (int i = 0; i < 100; ++i) {
        v[i] = i * 100 + 5;
    }
    //
    // .. some code
    //
    for (int i = 0; i < 100; ++i) {
        result.at(i) = v[i];
    }
    bar(result);
    delete[] v;
    return result;
}
Now this is a pretty useless program. However, consider it from the perspective of correctness. You allocate an array of 100 ints at the beginning of the function and you deallocate it at the end of the function. So you might think that is OK and no memory leak occurs. You couldn't be more wrong. Remember RAII? Who is responsible for that resource? The function foo? If so, it does a very bad job at it. Look at it again:
std::vector<int> foo() {
    int *v = new int[100];
    std::vector<int> result(100);   // <-- might throw
    for (int i = 0; i < 100; ++i) {
        v[i] = i * 100 + 5;
    }
    //
    // .. some code                 <-- might throw in many places
    //
    for (int i = 0; i < 100; ++i) {
        result.at(i) = v[i];        // <-- might (theoretically at least) throw
    }
    bar(result);                    // <-- might throw
    delete[] v;
    return result;
}
If at any point the function throws, the delete[] v will not be reached and the resource will never be freed. So you must have a clear resource owner responsible for the destruction of that resource. What do you know, constructors and destructors will help us:
class Responsible {     // looks familiar? take a look at unique_ptr
private:
    int* p_ = nullptr;
public:
    Responsible(std::size_t size) {
        p_ = new int[size];
    }
    ~Responsible() {
        delete[] p_;
    }
    // access methods (getters and setters)
};
So the program becomes:
std::vector<int> foo() {
    Responsible v(100);
    std::vector<int> result(100);
    //
    // .. some code
    //
    return result;
}
Now even if the function throws, the resource will be properly managed, because when an exception occurs the stack is unwound, that is, all the local variables are destroyed, and... lucky us, the destructor of Responsible will be invoked.
Well, sometimes your object can have pointers or something else that needs to be deallocated. For example, if you have a pointer in your Circle class, you need to deallocate it to avoid a memory leak.
At least, that is how I understand it.
In my C++ code I have a class Object equipped with an id field of type int. Now I want to create a vector of pointers of type Object*. First I tried
vector<Object*> v;
for(int id=0; id<n; id++) {
    Object ob = Object(id);
    v.push_back(&ob);
}
but this failed because the same address just repeats itself n times. If I used the new operator I would get what I want, but I'd like to avoid dynamic memory allocation. Then I thought that what I need is to somehow declare n different pointers before the for loop. A straightforward way to do this is to declare an array, so I did this:
vector<Object*> v;
Object ar[n];
for(int i=0; i<n; i++) {
    ar[i] = Object(i);
}
for(int i=0; i<n; i++) {
    v.push_back(ar+i);
}
Is there still a possibility of a memory leak if I do it this way? Also, going through an array declaration is a bit clumsy in my opinion. Are there any other ways to create a vector of pointers while avoiding manual memory management?
EDIT: Why do I want pointers instead of just plain objects?
Well, I modified the actual situation a bit because I thought this way I could present the question in its simplest possible form. Anyway, I still think the question can be answered without knowing why I want a vector of pointers.
Actually I have
class A {
protected:
    vector<Superobject*> vec;
    ...
};

class B: public A {...};

class Superobject {
protected:
    int id;
    ...
};

class Object: public Superobject {...};
In the derived class B I want to fill the member field vec with objects of type Object. If the superclass didn't use pointers, I would have problems with object slicing. So in the class B constructor I want to initialize vec as a vector of pointers of type Object*.
EDIT2
Yes, it seems to me that dynamic allocation is the reasonable option and using an array is a bad idea. When the array goes out of scope, things will go wrong because the pointers in the vector point to memory locations that don't necessarily contain the objects anymore.
In the constructor for class B I had
B(int n) {
    vector<Object*> vec;
    Object ar[n];
    for(int id=0; id<n; id++) {
        ar[id] = Object(id);
    }
    for(int id=0; id<n; id++) {
        v.push_back(ar+id);
    }
}
This caused very strange behavior in objects of class B.
In this loop:
for(int id=0; id<n; id++) {
    Object ob = Object(id);
    v.push_back(&ob);
}
you are creating an Object instance on the stack n times. At every iteration an element is created and then destroyed, so the stored address no longer points to a live object. You can avoid this like so:
for(int id=0; id<n; id++) {
    Object* ob = new Object(id);
    v.push_back(ob);
}
so that every new element exists on the heap, not on the stack. Try adding something like this to the Object class constructor:
std::cout<<"Object ctor()\n";
and the same in the destructor:
std::cout<<"Object dtor()\n";
If you don't want to create these objects with new, try the approach described by @woolstar.
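A minimal sketch of what that instrumentation could look like (assuming an Object class roughly like the one in the question; the member name is illustrative):

#include <iostream>

class Object {
public:
    explicit Object(int id) : id_(id) {
        std::cout << "Object ctor()\n";  // printed once per construction
    }
    ~Object() {
        std::cout << "Object dtor()\n";  // printed once per destruction
    }
private:
    int id_;
};

Running the original stack-based loop with this class prints a matching ctor/dtor pair on every iteration, which makes the lifetime problem visible.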
Your question about memory leaks makes me think you are worried about the lifecycle and cleanup of these objects. I originally proposed shared_ptr wrappers, but C++11 gave us unique_ptr, and C++14 filled in the missing make_unique. So with all that we can do:
vector<unique_ptr<SuperObject>> v ;
Which you create in place with the wonderfulness of perfect forwarding and variadic templates,
v.push_back( make_unique<Object>( ... ) ) ;
Yes you are going to have to live with some heap allocations, but everything will be cleaned up when v goes away.
Someone proposed a boost library, ptr_container, but that requires not only adding boost to your project, but educating all future readers what this ptr_container is and does.
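Put together, that could look roughly like this (a sketch, assuming a Superobject base with a virtual destructor and an Object subclass taking an id, as in the question; make_unique requires C++14):

#include <memory>
#include <vector>

class Superobject {
public:
    explicit Superobject(int id) : id(id) {}
    virtual ~Superobject() = default;  // needed so deleting through the base pointer is safe
protected:
    int id;
};

class Object : public Superobject {
public:
    explicit Object(int id) : Superobject(id) {}
};

int main() {
    std::vector<std::unique_ptr<Superobject>> v;
    for (int id = 0; id < 10; ++id) {
        v.push_back(std::make_unique<Object>(id));  // heap-allocated, but owned by the vector
    }
    // no manual delete: every Object is destroyed when v goes out of scope
    return 0;
}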
No, there is no memory leak in your version. When the program leaves your scope, the vector as well as the array are destroyed. To your second question: why not simply store the objects directly in a vector?
vector<Object> v;
for(int i = 0; i < n; i++)
{
    Object obj = Object(i);
    v.push_back(obj);
}
I am just starting to really understand polymorphism, but it is still a new topic for me.
So here is my problem: I have two classes, Enemy and BankRobber, where BankRobber inherits from Enemy. I tried to make an array of 10 BankRobbers. A global function should then use all members of the array to do something; well, I guess this is a worthless description, so here is the code:
void UpdateEnemies(Enemy * p_Enemy, int counter) {
    for(unsigned int i = 0;i < counter;i++) {
        p_Enemy[i].Update();
    }
}

int main(void) {
    BankRobber EnemyArray[10];
    Enemy * p_Enemy = new BankRobber(13,1);
    UpdateEnemies(EnemyArray,10);
    system("PAUSE");
};
I apologize for any language mistakes; I am not a native speaker.
My actual problem: this code is just for practice, so the purpose is just to see Update printed on the console 10 times, once for each member of the array. So the function UpdateEnemies should call all the enemies' Update functions. The approach with the type casting is not exactly what I want, because it is no longer dynamic, as there will be more enemy types later on, not only BankRobbers.
Polymorphism only works on single objects, accessed by a reference or pointer to a base class. It does not work on an array of objects: to access array elements, the element size must be known, and that's not the case if you have a pointer to a base class.
You would need an extra level of indirection: an array of pointers to single objects, along the lines of
void UpdateEnemies(Enemy ** p_Enemy, int counter) {
    for(unsigned int i = 0;i < counter;i++) {
        p_Enemy[i]->Update();
    }
}

int main() {
    // An array of Enemy base-class pointers
    Enemy * EnemyArray[10];

    // Populate with pointers to concrete Enemy types
    for (unsigned i = 0; i < 9; ++i) {
        EnemyArray[i] = new BankRobber;
    }
    // Of course, the array can contain pointers to different Enemy types
    EnemyArray[9] = new Dragon;

    // The function can act polymorphically on these
    UpdateEnemies(EnemyArray,10);

    // Don't forget to delete them. Enemy must have a virtual destructor.
    for (unsigned i = 0; i < 10; ++i) {
        delete EnemyArray[i];
    }
}
You should also consider using RAII types, such as containers and smart pointers, to manage these dynamic resources; but that's beyond the scope of this question.
Declaring an array of BankRobber like this
BankRobber EnemyArray[10];
but then accessing them through a base class pointer like this
Enemy * p_Enemy;
p_Enemy[i].Update();
wouldn't work. That's because indexing the array p_Enemy[i] is done using an offset of sizeof(Enemy). But sizeof(BankRobber) is probably bigger than sizeof(Enemy), so p_Enemy[i] will end up in the wrong place.
You should use a vector of pointers instead, like
std::vector<Enemy*>
That way you can also use polymorphism if you add pointers to different object types to the vector, and you don't need to pass the ugly int counter around.
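A sketch of what that could look like (assuming the same Enemy/BankRobber classes with a virtual Update; the printed messages and a virtual destructor in Enemy are my additions for illustration):

#include <iostream>
#include <vector>

class Enemy {
public:
    virtual void Update() { std::cout << "Enemy update\n"; }
    virtual ~Enemy() = default;   // needed so delete through an Enemy* is safe
};

class BankRobber : public Enemy {
public:
    void Update() override { std::cout << "BankRobber update\n"; }
};

void UpdateEnemies(std::vector<Enemy*>& enemies) {
    for (Enemy* e : enemies) {    // no counter parameter needed
        e->Update();
    }
}

int main() {
    std::vector<Enemy*> enemies;
    for (int i = 0; i < 10; ++i) {
        enemies.push_back(new BankRobber);
    }
    UpdateEnemies(enemies);       // prints 10 polymorphic updates
    for (Enemy* e : enemies) {    // still our job to delete what we new'ed
        delete e;
    }
}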
Indeed, you didn't say exactly what the problem is.
Did you try casting inside your code? Something like:
void UpdateEnemies(Enemy * p_Enemy, int counter) {
    BankRobber *pRobber = (BankRobber*)p_Enemy;
    for(unsigned int i = 0;i < counter;i++) {
        pRobber[i].Update();
    }
}