I am working on an object that contains an array of queues with an array length that isn't decided until the constructor is called. Basically it looks something like the following
#include <queue>
class myClass{
public:
    //public functions
private:
    //private functions and variables
    std::queue<int>* myQueue;
};
it is initialized like so:
myClass::myClass(int numOfQueues){
    myQueue = new std::queue<int>[numOfQueues];
}
This all seems to work beautifully; it functions exactly as I was hoping it would. But now, every time I exit the program, I get a segmentation fault. The class has some other arrays in it that are initialized in the same way, but those are of types bool and int instead of queue. My destructor looks like:
myClass::~myClass(){
    delete boolArray;
    delete intArray;
    delete myQueue;
}
Now I assume this destructor is working for the boolArray and intArray pointers, because I didn't start to get a segfault until I added myQueue. Does anyone have any idea what the proper way is to write the destructor? Is it possible that this is all I have to do and the destructor just isn't being called at the proper time?
Because you allocated with new[], you should use delete[] myQueue; in the destructor. Otherwise you invoke undefined behavior. BTW, you can use std::vector<std::queue<int> > if you don't want to deal with this kind of memory management issue.
Why aren't you using std::vector instead of raw arrays?
You need to use delete[] on the arrays, not delete, because you allocated them with new[].
Using delete on memory allocated with new[] doesn't just cause a memory leak; it also invokes undefined behaviour.
The correct form of delete to pair with new[] is delete[].
However, in idiomatic C++ the usual recommendation is to use std::vector instead of C-style arrays. You don't need to manage memory explicitly when you use the standard containers.
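For illustration, here is a minimal sketch of the class rewritten around std::vector (the constructor signature is taken from the question; everything else is assumed):
#include <queue>
#include <vector>
class myClass{
public:
    // Construct numOfQueues empty queues; no manual new/delete needed.
    explicit myClass(int numOfQueues) : myQueue(numOfQueues) {}
    // No user-written destructor required: the vector cleans up after itself,
    // and copying/assigning the class now does the right thing automatically.
private:
    std::vector<std::queue<int> > myQueue;
};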
Naveen has already solved the problem. I'd like to add a good programming practice.
The following use case will also create deletion problems.
void foo()
{
    myClass a;
    myClass b(a);
}
When we declare a, a new array is allocated and a.myQueue points to it. However, when declaring b, the compiler-generated copy constructor is called instead of the myClass::myClass(int numOfQueues) constructor, so the pointer value is copied verbatim. Thus a.myQueue == b.myQueue.
When foo exits, one object's destructor deletes the array, and then the other object's destructor tries to delete the same, already-freed pointer, which leads to undefined behavior (typically a crash).
A good programming practice is to either implement the copy constructor and assignment operator accordingly (a deep-copy sketch follows below) or to declare them private to avoid such problems:
private:
    myClass(const myClass&);
    const myClass& operator=(const myClass&);
See also boost::noncopyable.
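If you do want copying to work, here is a rough sketch of a deep-copying copy constructor. It assumes the class also stores a numOfQueues member, which is not shown in the question, so that the array size is known when copying; the assignment operator would follow the same pattern:
myClass::myClass(const myClass& other)
    : numOfQueues(other.numOfQueues),
      myQueue(new std::queue<int>[other.numOfQueues])
{
    // Copy each queue element-wise so the two objects own separate arrays.
    for (int i = 0; i < numOfQueues; ++i)
        myQueue[i] = other.myQueue[i];
}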
A related question:
Let's assume I have three classes:
class A{
};
class B{
    int S;
    A* arr;
public:
    B(int s):S(s){
        arr = new A[S];
    }
    ~B(){
        delete [] arr;
    }
};
class C{
    B& b;
public:
    C(B& b): b(b){}
};
Should I define explicitly a destructor in class C? What would it look like?
Should I define explicitly a destructor in class C?
To answer that question, you must know what your class does. Is the purpose of the class to manage allocated memory? Then, yes you probably need a destructor. Is the purpose to refer to an object managed by something else? Then why would you need a destructor? If you don't know what a destructor should do, then you quite often do not need one.
References should never own the object they refer to, so there should be little reason for C to have a destructor (unless its design changes drastically).
You also should not use owning raw pointers, so class B should be changed as well. The easiest fix is to replace the array with a std::vector.
No, you don't need a destructor for C, but you do need a copy constructor (as well as a move constructor, copy-assignment operator, and move-assignment operator) for class B if you want to implement copy semantics correctly. As it is now, if you were to make a copy of an object of class B, you would have two objects pointing to the same array; when the objects are destructed you would get a double delete, causing undefined behavior.
You don't need to use a raw pointer. You can simply use a std::vector, which will handle the allocation of the elements as well as their destruction. That also means you won't need to implement any special constructors or a destructor for B, since std::vector handles copies and moves correctly.
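As a rough sketch of what B might look like with std::vector (assuming class A is defined as in the question):
#include <vector>
class B{
    std::vector<A> arr;
public:
    // The vector allocates s default-constructed A objects and frees them automatically.
    B(int s): arr(s) {}
    // No destructor, copy constructor, or assignment operator needed:
    // the compiler-generated ones do the right thing because std::vector manages its own memory.
};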
#include <queue>
using namespace std;
class Test{
    int *myArray;
public:
    Test(){
        myArray = new int[10];
    }
    ~Test(){
        delete[] myArray;
    }
};
int main(){
    queue<Test> q;
    Test t;
    q.push(t);
}
After I run this, I get a runtime error "double free or corruption". If I get rid of the destructor content (the delete) it works fine. What's wrong?
Let's talk about copying objects in C++.
Test t; calls the default constructor, which allocates a new array of integers. This is fine, and is the behavior you expect.
Trouble comes when you push t into your queue using q.push(t). If you're familiar with Java, C#, or almost any other object-oriented language, you might expect the object you created earlier to be added to the queue, but C++ doesn't work that way.
When we take a look at the std::queue::push method, we see that the element that gets added to the queue is "initialized to a copy of x." It's actually a brand-new object that uses the copy constructor to duplicate every member of your original Test object to make a new Test.
Your C++ compiler generates a copy constructor for you by default! That's pretty handy, but causes problems with pointer members. In your example, remember that int *myArray is just a memory address; when the value of myArray is copied from the old object to the new one, you'll now have two objects pointing to the same array in memory. This isn't intrinsically bad, but the destructor will then try to delete the same array twice, hence the "double free or corruption" runtime error.
How do I fix it?
The first step is to implement a copy constructor, which can safely copy the data from one object to another. For simplicity, it could look something like this:
Test(const Test& other){
    myArray = new int[10];
    // memcpy's third argument is a byte count, not an element count
    memcpy( myArray, other.myArray, 10 * sizeof(int) );
}
Now when you're copying Test objects, a new array will be allocated for the new object, and the values of the array will be copied as well.
We're not completely out of trouble yet, though. There's another method that the compiler generates for you that could lead to similar problems - assignment. The difference is that with assignment, we already have an existing object whose memory needs to be managed appropriately. Here's a basic assignment operator implementation:
Test& operator= (const Test& other){
    if (this != &other) {
        memcpy( myArray, other.myArray, 10 * sizeof(int) ); // byte count again
    }
    return *this;
}
The important part here is that we're copying the data from the other object's array into this object's array, keeping each object's memory separate. We also have a check for self-assignment; without it we'd be copying from ourselves to ourselves, and calling memcpy with identical (i.e. overlapping) source and destination is undefined behavior. If we were deleting and allocating new memory here, the self-assignment check would also prevent us from deleting the memory we still need to copy from.
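Putting the pieces together, a sketch of the whole class with the rule of three applied might look like this (the array size is pulled out into a constant for clarity; this is just one way to write it):
#include <cstring>   // for std::memcpy
class Test{
    static const int N = 10;
    int *myArray;
public:
    Test() : myArray(new int[N]) {}
    Test(const Test& other) : myArray(new int[N]){
        std::memcpy(myArray, other.myArray, N * sizeof(int));
    }
    Test& operator=(const Test& other){
        if (this != &other)
            std::memcpy(myArray, other.myArray, N * sizeof(int));
        return *this;
    }
    ~Test(){ delete[] myArray; }
};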
The problem is that your class contains a raw pointer that it manages but does not implement the rule of three (five in C++11). As a result you are, predictably, getting a double delete caused by copying.
If you are learning, you should learn how to implement the rule of three (five). But that is not the correct solution to this problem: you should be using standard container objects rather than trying to manage your own internal container. The exact container will depend on what you are trying to do, but std::vector is a good default (and you can change it afterwards if it is not optimal).
#include <queue>
#include <vector>
class Test{
    std::vector<int> myArray;
public:
    Test(): myArray(10){
    }
};
int main(){
    std::queue<Test> q;
    Test t;
    q.push(t);
}
The reason you should use a standard container is separation of concerns: your class should be concerned with either business logic or resource management, not both. If Test is some class you are using to maintain state in your program, then it is business logic and should not be doing resource management. If, on the other hand, Test is supposed to manage an array, then you probably need to learn more about what is already available in the standard library.
You are getting "double free or corruption" because the destructor runs first for one of the two objects, and at that point the memory allocated by new is freed. When the destructor then runs for the other object, that memory has already been freed, so when delete[] myArray; executes it frees the same block a second time, which triggers the double free or corruption error.
The reason is that both objects share the same memory, so define the copy constructor and assignment operator as mentioned in the answers above.
You need to define a copy constructor and an assignment operator.
class Test {
    Test(const Test &that);            //Copy constructor
    Test& operator= (const Test &rhs); //Assignment operator
};
Your copy that is pushed on the queue is pointing to the same memory your original is. When the first is destructed, it deletes the memory. The second destructs and tries to delete the same memory.
You can also check for null before deleting, like this:
if(myArray) { delete[] myArray; myArray = NULL; }
or you can define all delete operations in a "safe" manner like this:
#ifndef SAFE_DELETE
#define SAFE_DELETE(p) { if(p) { delete (p); (p) = NULL; } }
#endif
#ifndef SAFE_DELETE_ARRAY
#define SAFE_DELETE_ARRAY(p) { if(p) { delete[] (p); (p) = NULL; } }
#endif
and then use
SAFE_DELETE_ARRAY(myArray);
Um, shouldn't the destructor be calling delete, rather than delete[]?
Suppose I have the following class:
class foo{
    int array_allocation(int length){
        array = new int[length];
        return 0;
    }
private:
    int *array;
};
Do I need to explicitly implement ~foo(){ delete[] array; } here, or is it done implicitly?
Neither; you should use std::vector<int> instead. And not just as a member of your class, but instead of your class.
EDIT: no, the memory is not freed automatically. You need to provide a meaningful destructor, copy constructor, and assignment operator.
Pointers are not automatically deleted. Also, when the class is copied, the pointer is copied, not the memory it points to. You should follow the RAII idiom and the rule of three. A vector-based sketch is below.
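For what it's worth, a minimal sketch of foo rewritten with std::vector (member and method names are kept from the question; whether array_allocation should exist at all is a separate design question):
#include <vector>
class foo{
public:
    // Resize the internal storage; the vector frees any previous buffer itself.
    int array_allocation(int length){
        array.assign(length, 0);
        return 0;
    }
private:
    std::vector<int> array;
};
// No ~foo() needed: the vector's destructor runs automatically,
// and copying a foo now copies the data rather than a bare pointer.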
Below is the pattern of new/delete usage in my program. Valgrind says that memory is "definitely lost", but I can't quite see where the leak is. Is there something wrong with my use of the new/delete operators?
class Generic
{
    GenericInterface *gInterface; //GenericInterface is abstract class
public:
    Generic ()
    {
        gInterface = NULL;
    }
    ~Generic ()
    {
        delete gInterface;
    }
    void Create()
    {
        gInterface = new Specific();
    }
};
class Specific : public GenericInterface
{
    MyClass* _myClass;
public:
    Specific()
    {
        _myClass = new MyClass;
    }
    ~Specific()
    {
        delete _myClass;
    }
};
int main()
{
    Generic g;
    g.Create();
}
valgrind says that memory is lost.
==2639== 8 bytes in 1 blocks are definitely lost in loss record 2 of 45
==2639== at 0x4026351: operator new(unsigned int) (vg_replace_malloc.c:255)
==2639== by 0x804D77C: Specific::Specific() (Specific.cc:13)
==2639== by 0x804DAFC: Generic::Create() (Generic.cc:58)
You are not following the rule of three. If your class manages resources that need to be cleaned up, you must declare a destructor, copy constructor, and copy assignment operator. Neither of your classes has a copy constructor or a copy assignment operator.
Really, you should almost certainly just be using a smart pointer like unique_ptr from C++0x; shared_ptr from Boost, C++ TR1, and C++0x; or scoped_ptr from Boost.
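As a rough illustration, here is a sketch of Generic using std::unique_ptr (C++11). It assumes the Specific and GenericInterface classes from the question, and it still relies on GenericInterface having a virtual destructor, as the next answer explains:
#include <memory>
class Generic
{
    std::unique_ptr<GenericInterface> gInterface;  // owns the pointee and deletes it automatically
public:
    void Create()
    {
        gInterface.reset(new Specific());
    }
    // No user-written destructor is needed, and accidental copies are
    // prevented because unique_ptr is not copyable.
};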
The likely issue causing this specific problem is that you have forgotten to make the base-class GenericInterface destructor virtual, so the wrong destructor is being called and the MyClass object that you dynamically create in Specific is never destroyed.
deleting an object via a pointer to one of its base classes results in undefined behavior if the base class destructor is not declared virtual (that means bad things will happen, ranging from memory leaks to crashes).
Your GenericInterface destructor probably is not virtual, so only the GenericInterface destructor is getting called when gInterface is destroyed.
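A minimal sketch of the fix, assuming GenericInterface looks roughly like this (the rest of its interface is not shown in the question):
class GenericInterface
{
public:
    // Virtual destructor: deleting a Specific through a GenericInterface*
    // now runs ~Specific() first, which deletes _myClass.
    virtual ~GenericInterface() {}
    // ... pure virtual interface functions ...
};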
Does your GenericInterface class declare a virtual destructor? If not, the Specific class's destructor won't get called, and thus the "delete _myClass" line won't be executed.
Just another way C++ makes your life more interesting :)
In your Generic class destructor you should also check the gInterface pointer before deleting it. If Create() is not called before the object is destroyed, it could cause problems. If your C++ compiler doesn't throw on new failure, your other class may need the same treatment.
So I'm a beginner trying to get to grips with operator new. What's wrong with my destructor?
#include <cstdlib> // for system()
class arr{
public:
    arr(){
        pool = ::operator new(100*sizeof(double));
    }
    ~arr(){
        ::operator delete(pool);
    }
    void* pool;
};
int main()
{
    arr a;
    a.~arr(); //If I comment this out it's ok.
    void* pool2 = ::operator new(100*sizeof(double)); //Works
    ::operator delete(pool2); //Fine.
    system("pause");
    return 0;
}
Leaving a.~arr(); in gives me this error:
Debug assertion failed! File: dbgdel.cpp line: 52
Expression: _BLOCK_TYPE_IS_VALID(pHead->nBlockUse)
I can't see why pool2 works fine but using the class gives me problems. Also the error only pops up after the system "pauses", which is after a.~arr() is called???
Thanks!
Well, at a glance, you should not be explicitly calling the destructor. Instead, use scoping to force a out of scope, which calls the destructor:
int main()
{
    {
        arr a;
    } //This calls destructor of a
    //Rest of code
}
Otherwise the destructor of a gets called twice: once when you call it and again when a goes out of scope.
EDIT:
Here ya go.
http://www.parashift.com/c++-faq-lite/dtors.html
The problem is that you explicitly invoke the destructor on a (a.~arr()), while the destructor will already be invoked automatically when a goes out of scope at the end of main(). When the destructor is called the second time, it is called on an already-destroyed object. Technically, this is Undefined Behavior (which is to say that any result whatsoever is acceptable according to the C++ standard). In practice, this code will likely just execute the destructor again, passing whatever value is now stored in the memory location that used to be a.pool to ::operator delete() (which may or may not be what the constructor stored there), and that is what the Debug runtime catches.
What you should do instead if you want a to be deleted in the middle of main() is to introduce an additional scope:
int main()
{
    {
        arr a;
    } //a will be deleted here
    // rest of main
}
Another way would be not to use an automatic object, but a dynamic one, whose lifetime you control yourself; a sketch of that follows.
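For illustration (plain new/delete is shown here; a smart pointer would be the more robust choice):
int main()
{
    arr* a = new arr;   // constructor runs here
    // ... use *a ...
    delete a;           // destructor runs exactly once, here
    // rest of main
}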
But there's another problem with your code, which you didn't ask about, but which I nevertheless feel a need to point out:
Your class urgently needs a copy constructor and an assignment operator. (At the very least, declare them private in order to disallow copying.) As per the Rule of Three, the fact that you need a destructor should give you a hint that the other two are needed as well.
You can avoid all the hassle by not attempting to manually manage dynamically allocated memory. Instead use a type which does this for you:
#include <vector>
class arr{
public:
    arr() : pool(100*sizeof(double)) {
    }
    // ~arr() // not needed any longer
    std::vector<char> pool;
};
You don't need to call the destructor for objects created on the stack. The variable 'a' is on the stack and will be destroyed automatically when it goes out of scope.