C++ application crashes at delete

I have a fairly complex application written in C++. I have a class called OrderBook. I need to create an array of OrderBook objects dynamically, so what I've done is:
OrderBook* pOrderBooks; // In header file
At runtime, I create the array as
p_OrderBooks = new OrderBook[n]; // n is an integer initialized at run time
The program works fine, but when I try to delete the array (as I need to create a new array pointed to by pOrderBooks) the program crashes. This is how I delete it:
delete[] p_OrderBooks;
I've made sure that the crash happens exactly at that line. So what I'm currently doing is reinitializing the pointer without deleting the previously allocated memory:
//delete[] p_OrderBooks; // <- crash happens here
p_OrderBooks = new OrderBook[k]; // for some 'k'
But it's bad since there'll be a memory leak. I'd like to know how to properly free the memory before re-pointing to the new array.

You are declaring pOrderBooks but allocating and deleting p_OrderBooks.
If that's just a simple typo in your post, then it is likely that you are overrunning the bounds of the array, writing to elements past its beginning or end, thereby corrupting the heap so that it crashes when you try to delete it.

Is it possible that one or more of your OrderBook destructors is throwing an exception out of the destructor? That is typically considered bad practice, and if the exception is not handled it will crash your application.

I found the issue. I'm passing a pointer to an object created in the base class to OrderBook objects.
Server* p_Server = new Server(); // Some class
...
p_OrderBooks[i].SetServer(p_Server); // <- for i = 0..99
The passed p_Server is stored in each OrderBook object as p_ServerBase (say):
Server* p_ServerBase; // <- in OrderBook.h
...
void OrderBook::SetServer(Server* pServer)
{
p_ServerBase = pServer;
}
Then in OrderBook's destructor I'm trying to delete p_ServerBase, which is still being used in the base class:
...
~OrderBook()
{
delete p_ServerBase;
}
Haven't had that experience before. I won't do that again :)
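For completeness, here is a minimal sketch of how the ownership could be expressed instead (this assumes, as in my case, that the Server is created and owned somewhere outside OrderBook): either keep the raw pointer and simply never delete it in the destructor, or make shared ownership explicit with std::shared_ptr.
#include <memory>
#include <utility>

class Server { /* ... */ };

class OrderBook {
    // Non-owning observer: the Server is owned by whoever created it,
    // so OrderBook must not delete it.
    Server* p_ServerBase = nullptr;
public:
    void SetServer(Server* pServer) { p_ServerBase = pServer; }
    ~OrderBook() = default;   // no delete here
};

// Alternatively, if ownership really is meant to be shared:
class OrderBookShared {
    std::shared_ptr<Server> p_ServerBase;
public:
    void SetServer(std::shared_ptr<Server> pServer) { p_ServerBase = std::move(pServer); }
    // the Server is freed automatically when the last owner goes away
};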

If you are doing something like this:
OrderBook* p_OrderBooks;
int n;
p_OrderBooks = new OrderBook[n]; // here n contains garbage
cin >> n;
delete[] p_OrderBooks;
Here n can be any garbage value; we don't know what size we are asking for, and perhaps we start accessing memory that we don't own. That is problematic.
You should take the input first:
cin >> n;
p_OrderBooks = new OrderBook[n];


VS 2010 C++ Crash when deleting an array of structures

I have a class with a member variable mBoundingBox made up of the following struct:
typedef struct
{
unsigned int xMin;
unsigned int yMin;
unsigned int xMax;
unsigned int yMax;
} boundingBox;
class CImgProc
{
public:
CImgProc(void);
virtual ~CImgProc(void);
...
boundingBox *mBoundingBox;
...
};
In code I allocate the member:
mBoundingBox = new boundingBox [mBlobCnt];
piddle around with it (don't assign any pointers to it, just using array indexing), then, when I exit I:
if (mBoundingBox != NULL) delete [] mBoundingBox;
and this is causing an error.
Any input?
Updated info. The error does occur at termination in the destructor. The message generated by VS is:
Windows has triggered a breakpoint in ProcImage.exe.
This may be due to a corruption of the heap, ...
This may also be due to the user pressing F12 while ProcImage.exe has focus.
The output window may have more diagnostic information.
I am setting the pointer to NULL in the constructor and then allocating (with new) when I need to. The pointer is valid, but apparently not on the heap (break lands in dbgheap.c).
Once I allocate the memory, I don't happen to do any pointer magic with it. In this case I am looping through an image and gathering stats. Then I use the stats stored in this memory to draw back into my image, but, again, in a rather brute force manner, so nothing else makes use of this memory.
It is legal for me to use new to create an array of structs isn't it?
Doh!!! Sorry to waste y'all's time. I dug back in and discovered that my creation and destruction are fine, but somewhere in the middle I set the value of mBoundingBox[X].whatever, where it turns out X is the dimension of the array I created (i.e. one past the last valid index).
Typical user error, just a surprising place for the bug to show up.
Most probably you are deleting your array twice. To manage it better, use:
delete[] mBoundingBox;
mBoundingBox = 0;
instead of
if (mBoundingBox != NULL) delete [] mBoundingBox;
or even better use a smart pointer.
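For illustration, a rough sketch of what that could look like (the member and count names are taken from the question, the rest is assumed): with std::vector or std::unique_ptr the release happens exactly once, so it can be neither forgotten nor doubled.
#include <cstddef>
#include <memory>
#include <vector>

struct boundingBox {
    unsigned int xMin;
    unsigned int yMin;
    unsigned int xMax;
    unsigned int yMax;
};

class CImgProc {
public:
    // Option 1: std::vector owns the memory; no delete needed anywhere.
    std::vector<boundingBox> mBoundingBox;

    // Option 2: unique_ptr to an array; the delete happens exactly once,
    // inside the smart pointer's destructor.
    std::unique_ptr<boundingBox[]> mBoxes;

    void allocate(std::size_t mBlobCnt) {
        mBoundingBox.assign(mBlobCnt, boundingBox{});
        mBoxes = std::make_unique<boundingBox[]>(mBlobCnt);
    }
};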
First of all, the following check is wrong:
if (mBoundingBox != NULL) delete [] mBoundingBox;
new does not return NULL when it fails to allocate memory; rather, it throws an exception.
Use the nothrow version of new if you want to proceed the way you are doing. The nothrow version of new returns NULL instead of throwing an exception:
mBoundingBox = new (std::nothrow) boundingBox [mBlobCnt];

C++ two way association memory management strategy

I've implemented a heap using two classes called IndexedHeap and HeapEntry. I have some corrupt memory access causing segfaults and I believe I know where/why, but I'm not sure how to fix it. Here's how I've designed the classes so far:
class IndexedHeap
{
private:
std::vector<HeapEntry*> heap; // the entries held by this heap
public:
...
};
class HeapEntry
{
private:
int index;
size_t priority;
unsigned long long key;
IndexedHeap* myHeap; // reference to heap where this entry resides
public:
HeapEntry(unsigned long long key, size_t priority, IndexedHeap* myHeap)
: key(key), priority(priority), index(-1), myHeap(myHeap)
{}
};
Both the heap and its entries need to refer to each other. As you can see I've decided to use a raw pointer to an IndexedHeap in HeapEntry. This is where I think I went wrong, but I'm not sure.
Throughout program execution, new heap entries are created as part of one heap. Entries are also removed from this heap and destroyed. Perhaps when one heap entry is destroyed, the heap it points to gets corrupted. That would explain my memory issues, because the next time a heap entry tries to access its heap, it accesses memory that has been released.
Unfortunately I'm not convinced of that. I haven't implemented a destructor for HeapEntry. The default destructor just calls destructors for all instance variables of a class right? So wouldn't the pointer to myHeap get destroyed, while the heap object itself survives?
So, what is the correct way of designing this kind of relationship, and can my memory issues be explained from the code I've posted? Thanks, and please let me know if you'd like to see more code or more details.
Code that creates and destroys entries on the heap:
HeapEntry* IndexedHeap::insert(unsigned long long key)
{
HeapEntry* entry = new HeapEntry(key, 1, this);
heap.push_back(entry);
int index = heapifyUp(heap.size() - 1);
heap[index]->setIndex(index);
return entry;
}
void IndexedHeap::deleteAtIndex(int pos)
{
if (pos >= 0 && pos < heap.size())
{
// Copy heap.back() into the position of target, thus overwriting it
*heap[pos] = *heap.back();
// Fix the index field for the just copied element
heap[pos]->setIndex(pos);
// We've removed the target by overwriting it with heap.back()
// Now get rid the extra copy of heap.back()
// Release the mem, then pop back to get rid of the pointer
delete heap.back();
heap.pop_back();
// Heapify from the position we just messed with
// use heapifyDown because back() always has a lower priority than the element we are removing
heapifyDown(pos);
}
}
Well, firstly, why aren't you using the priority queue from the STL, or using a multimap as a priority queue? It's a better solution than writing your own.
Next, the code structure: std::vector<HeapEntry*> heap; is notorious for leaking memory, with people not deleting the memory pointed to, and for causing serious memory faults when people try to delete the pointed-to memory and get that deletion wrong.
The IndexedHeap* myHeap; is most likely not your problem. References to things you don't own can be an issue if someone deletes those objects, but chances are you have stopped using the entries by then. By the way, since it is conceptually a reference, you could consider making it an actual reference (which is then bound during construction and never changed) - but that wouldn't alter the safety of the code in any way. As you believe, the destructor for a pointer does nothing to the target.
Can you run valgrind? It solves things like this very quickly. Else:
Try not deleting any entries, and see if that stops your faults; if so, that's telling.
You could also try tracking the pointers you new and delete, either by prints, or by a global set/map object. This can be handy to find these things.
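If you do keep your own heap rather than switching to the STL containers, one possible ownership arrangement (just a sketch: the names follow the question, the heapify calls are omitted, and the swap-based removal is my own simplification) is to let the vector own the entries through std::unique_ptr and hand out non-owning raw pointers:
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

class IndexedHeap;

class HeapEntry {
    int index = -1;
    std::size_t priority;
    unsigned long long key;
    IndexedHeap* myHeap;   // non-owning back-pointer; the heap owns the entry
public:
    HeapEntry(unsigned long long key, std::size_t priority, IndexedHeap* myHeap)
        : priority(priority), key(key), myHeap(myHeap) {}
    void setIndex(int i) { index = i; }
};

class IndexedHeap {
    // Single owner of the entries: they are destroyed automatically when
    // removed from the vector or when the heap itself is destroyed.
    std::vector<std::unique_ptr<HeapEntry>> heap;
public:
    HeapEntry* insert(unsigned long long key) {
        heap.push_back(std::make_unique<HeapEntry>(key, 1, this));
        heap.back()->setIndex(static_cast<int>(heap.size()) - 1);
        return heap.back().get();          // callers receive a non-owning pointer
    }
    void deleteAtIndex(int pos) {
        if (pos < 0 || pos >= static_cast<int>(heap.size())) return;
        std::swap(heap[pos], heap.back()); // move the last entry into the hole
        heap.pop_back();                   // the removed entry is freed here
        if (pos < static_cast<int>(heap.size()))
            heap[pos]->setIndex(pos);
        // heapifyUp / heapifyDown would be called here in the real class
    }
};
Note that any raw HeapEntry* previously returned from insert() becomes dangling once its entry is deleted, so callers must not hold on to it past that point.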

Where to properly delete dynamically allocated object in loop in C++

Tutorials, searches, and the dim memory of my C++ formal education have left me clueless as to where I should use delete when I'm using a dynamically allocated object pointer in a loop, such as:
// necessary files are included, this code is within main
T * t;
t = foo.getNewT();
while (!t->isFinalT()) {
// print t stuff
delete t; // is this where I should delete t?
t = foo.getNewT();
}
delete t;
This lack of knowledge has become particularly troublesome on a recent class project. On my laptop (Linux Mint, g++ Ubuntu/Linaro 4.7.3-1ubuntu1) the code ran fine without the delete statement and crashed when I added the delete statement. On the school server (Solaris, g++ (GCC) 3.4.5), the code segfaulted after a few iterations without the delete statement, and runs fine when I add the delete statement.
How do I handle this kind of loop properly so that it will run in most environments?
Additional Info:
The error on my laptop occurs when the program reaches the delete request:
*** Error in 'program': free(): invalid next size (fast):...
Some of the other code:
// T.h
class T {
int id;
int num;
int strVarPos;
char * strVar;
public:
T();
~T();
// + misc. methods
};
// T.cpp
T::T() {
id = 0;
num = -1;
strVarPos = 0;
char * strVar = new char[11];
strVar[0] = '\0';
}
T::~T() {
delete [] strVar;
}
// Foo.cpp
T * Foo::getNewT() {
T * t = new T;
// populate T's fields
return t;
}
Resolution:
Because a simple test with just T * t and the loop worked ok, I ended up reconstructing the project starting from blank and adding one class at a time, to see when the problem would appear. Turns out that I had added additional content into a dynamically allocated array elsewhere in the program without updating the size constant I was using to initialize the array.
Evidently the school server could only handle the resulting memory discrepancy without crashing if I was making sure to delete the pointers properly (the program didn't run long enough to cause a significant memory leak in my tests), while my laptop wouldn't notice the memory discrepancy until I attempted to call delete (and then would crash).
Assuming that foo.getNewT() is handing ownership of the memory over to the caller:
T * t;
t = foo.getNewT();
//while (!t->isFinalT()) // if foo.getNewT ever returns NULL, this will be UB!!!
while (t != nullptr && !t->isFinalT())
{
// ...
delete t; // if you now own it and are no longer going to use it, yes, delete it here
t = foo.getNewT();
}
delete t; // you also need this one to delete the "final" t
However, you can avoid having to do it yourself by using std::unique_ptr:
std::unique_ptr<T> t;
t.reset(foo.getNewT());
while (t && !t->isFinalT())
{
// ...
t.reset(foo.getNewT());
}
Alternatively, you could rewrite the loop to flow a bit better:
std::unique_ptr<T> t;
do
{
t.reset(foo.getNewT());
if (t)
{
// do stuff with t
}
} while (t && !t->isFinalT());
the code ran fine without the delete statement and crashed when I
added the delete statement.
Are you sure getNewT is handing ownership of the T* to you? If you delete it, and then it tries to delete it later, you will end up with a heap corruption. If it is handing ownership over to the caller, and you do not delete it, you get a memory leak.
With the additional information in your edit:
char * strVar = new char[11];
That line is unnecessary if you declare strVar as either a std::string or a char[11]. If you attempt to copy any of those T objects, you'll be using the default copy constructor (as you have not defined one), which will do a shallow copy (that is, copy the value of the pointer for strVar). When you delete 2 Ts that are both pointing to the same memory location, you get a heap corruption. The most robust solution would be to declare strVar as a std::string.
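A rough sketch of that change (only the members shown in the question; everything else assumed unchanged): with std::string there is nothing left to delete, and the compiler-generated copy constructor and destructor are already correct.
#include <string>

class T {
    int id = 0;
    int num = -1;
    int strVarPos = 0;
    std::string strVar;   // owns its characters; copying a T deep-copies the string
public:
    T() = default;
    // no user-defined destructor needed: std::string cleans up after itself
};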
The problem is not the delete. You have put it in the right place. It's more likely something else you are doing that is causing undefined behaviour.
Note that you should have a delete t after the loop as well (to catch the last one). This is assuming that foo.getNewT() always returns a valid pointer (which it must, because you never check if it is NULL).
You should delete dynamically allocated memory when you no longer need it. If you want t to keep its value after the loop, then delete it after the loop; otherwise delete it inside.
However, the best thing to do is to use std::unique_ptr when you really have to use pointers. It will take care of deallocating the memory itself when the owning pointer is destroyed. You should try to avoid allocating memory as much as you can; use STL containers if they fit the job.
I think when you delete t you are deleting the real object inside your structure.
Maybe that's what is causing the problem.

How to create a memory leak in C++?

I was just wondering how you could create a system memory leak using C++. I have done some googling on this but not much came up. I am aware that it is not really feasible to do it in C#, as it is managed code, but I wondered if there was a simple way to do this with C++. I just thought it would be interesting to see how much the system suffers because of code not being written properly. Thanks.
A memory leak occurs when you call new without calling a corresponding delete later. As illustrated in this sample code:
int main() {
// OK
int * p = new int;
delete p;
// Memory leak
int * q = new int;
// no delete
}
Create pointer to object and allocate it on the heap
Don't delete it.
Repeat previous steps
????
PROFIT
int main() {
while(true) new int;
}
There are many kinds of memory leaks:
Allocated memory that is unreleasable because nothing points to it.
These kind of leaks are easy to create in C and C++. They are also pretty easy to prevent, easy to detect, and easy to cure. Because they are easy to detect there are lots of tools, free and commercial, to help find such leaks.
Still-accessible allocated memory that should have been released a long time ago.
These kinds of leaks are much harder to detect, prevent, or cure. Something still points to it, and it will be released eventually -- for example, right before exit(). Technically speaking, this isn't quite a leak, but for all practical purposes it is a leak. Lots of supposedly leak-free applications have such leaks. All you have to do is run a system profile to see some silly application consume ever more memory. These kinds of leaks are easy to create even in managed languages.
Allocated memory that should never have been allocated in the first place.
Example: A user can easily ask Matlab to create these kinds of leaks. Matlab is also rather aggressive at creating these kinds of leaks. When Matlab gets a failure from malloc it goes into a loop where it waits for a bit and then retries the malloc. Meanwhile, the OS frantically tries to deal with the loss of memory by shuffling chunks of programs from real memory into virtual memory. Eventually everything is in virtual memory -- and everything creeps to a standstill.
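A rough illustration of the second kind (the cache and all names here are made up for the example): everything stays reachable, so leak detectors stay quiet, yet the process's memory use climbs for its whole lifetime.
#include <map>
#include <string>
#include <vector>

// A cache that only ever grows: nothing is ever evicted.
std::map<std::string, std::vector<char>> g_cache;

void handleRequest(const std::string& key) {
    // roughly 1 MB per distinct key, kept forever
    g_cache[key] = std::vector<char>(1024 * 1024);
}

int main() {
    for (int i = 0; i < 1000; ++i)
        handleRequest("request-" + std::to_string(i));
    // the memory is only released here, at exit
}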
Just write an application which allocates "a lot of data" and then blocks until it is killed. Just run this program and leave it running.
#include <iostream>
using namespace std;

class ClassWithLeakedMemory{
private:
char* str;
public:
ClassWithLeakedMemory(){
str = new char[100];
}
~ClassWithLeakedMemory(){
cout<<"We are not freeing the dynamically allocated string memory"<<endl;
}
};
class ClassWithNoLeakedMemory{
private:
char* str;
public:
ClassWithNoLeakedMemory(){
str = new char[100];
}
~ClassWithNoLeakedMemory(){
cout<<"We are freeing the dynamically allocated string memory"<<endl;
delete[] str;
str = nullptr;
}
};
int main() {
//we are creating an automatic object of the ClassWithleakedMemory
//when we will come out of the main, this object will be
//out of scope. hence it will be deleted. so destructor will
//be called. but in the destructor, we have not specifically
//deleted the dynamically allocated string.
//so the stack based pointer object str will be deleted but the memory
//it was pointing to won't be deleted. so we will be left with an
//unreferenced memory. that is memory leak.
ClassWithLeakedMemory objectWithLeakedmemory;
ClassWithNoLeakedMemory objectWithNoLeakedmemory;
return 0;
}
The way the stack-based pointer object refers to the dynamically allocated memory in both classes can be shown pictorially. [Diagram omitted.]
In C#, just use P/Invoke to allocate a lot of memory and resource handles, and keep them around.
You can use unmanaged code just fine from a simple C# harness.
When an object that is created using new is no longer referenced, the delete operator has to be applied to it. If not, the memory it occupies will be lost until the program terminates. This is known as a memory leak. Here is an illustration:
#include <vector>
using namespace std;
void memory_leak(int nbr)
{
vector<int> *ptrVector = new vector<int>(nbr);
// some other stuff ...
return;
}
If we return without calling delete on the object (i.e. delete ptrVector), a memory leak occurs. To avoid this, don't allocate the local object on the heap; instead use a stack-allocated variable, because these get automatically cleaned up when the function exits. To allocate the vector on the run-time stack, avoid using new (which creates it on the heap) and the pointer.
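For comparison, a sketch of the same function without the leak, using an automatic (stack-allocated) vector; the function name is changed only to distinguish it from the leaking version.
#include <vector>
using namespace std;

void no_memory_leak(int nbr)
{
    vector<int> localVector(nbr);   // automatic storage: destroyed, and its
                                    // heap buffer freed, when the function returns
    // some other stuff ...
}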
It's as simple as:
new int;
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    for (int i = 0; i < 1000; i++) {
        double* ptr = (double*)malloc(1000000 * sizeof(double));
        //free(ptr);
        ptr = NULL;
    }
    return 0;
}
Note: the commented-out free() call is what causes the memory leak; the process allocates the memory and never returns it to the OS.

Can one deallocate memory in a destructor if they allocated memory within a private function in C++?

I am trying to define a class in the global scope which contains some dynamically-allocated arrays. When the class' constructor is called, the program does not have access to user-defined parameters read through a parameter file (i.e. the number of years in a simulation) thus it cannot allocate memory to the proper size. My idea was to allocate memory within a private function in the class, and then deallocate it using the destructor. Some example code:
class Simulation{
private:
int initial_call; //a flag used to initialize memory
double *TransferTracker;
public:
Simulation();
~Simulation();
void calc();
};
Simulation simulator; //global instance of Simulation
Simulation::Simulation()
{
initial_call = 1;
}
Simulation::~Simulation()
{
//when calling the destructor, though, the address is
//0xcccccccc and the following attempt to delete produces
//the compiler error.
delete [] TransferTracker; //see error
}
void Simulation::calc()
{
for (int i = 0; i < num_its; i++)
{
if (initial_call)
{
TransferTracker = new double [5];
//The address assigned is, for example, 0x004ce3e0
initial_call = 0;
}
}
//even if this calc function is called multiple times, I see
//that the address is still 0x004ce3e0.
}
The error I receive from the above code fragment is:
Unhandled exception at 0x5d4e57aa (msvcr100d.dll) in LRGV_SAMPLER.exe: 0xC0000005: Access
violation reading location 0xccccccc0.
This error makes sense because I checked the memory address of TransferTracker when entering the destructor. My question is, why do we lose the address when entering the destructor? It probably has something to do with the fact that simulator is global; this paradigm seems to work fine if the class was not global. I am new to object-oriented programming so any help is appreciated!
EDIT: This was basically a blunder on my part, and the answers helped me find it. Two problems occurred: (1) the pointers were never set to NULL, thus creating confusion when trying to delete unallocated pointers. (2) There were actually two instances of the class in my scope, which was a mistake on my part. In the final code, there will only ever be one instance. Thanks everyone!
Initialize the pointer to NULL (0)
Simulation::Simulation() : TransferTracker(NULL)
{
initial_call = 1;
}
Simulation::~Simulation()
{
//when calling the destructor, though, the address is
//0xcccccccc and the following attempt to delete produces
//the compiler error.
if(TransferTracker) delete [] TransferTracker; //see error
TransferTracker = NULL;
}
That way you can check whether or not it has been initialised when you want to delete it. It's best practice, so do it always, not only at construction.
EDIT:
void Simulation::calc()
{
for (int i = 0; i < num_its; i++)
{
if (initial_call)
{
if(TransferTracker) delete [] TransferTracker;
TransferTracker = new double [5];
initial_call = 0;
}
}
}
You have to initialize the value of the instance variable TransferTracker to 0 in the constructor. The problem you're having is the destruction of the Simulation class without actually having assigned dynamic memory to TransferTracker.
Calling delete[] in the destructor with a null pointer is safe. The problem is that if you don't give a value to TransferTracker, it may hold any indeterminate value, which will cause trouble when you try to deallocate it with delete[].
EDIT:
As per your edit, how do you ensure that there is only one instance of the Simulation class? This can depend on whether you include several .o files in your build, etc.
I suspect the cause is that your destructor is getting called when you haven't invoked the calc() function, therefore the memory hasn't been allocated yet.
You want to put in place a "guard" that will make sure that you've already allocated the memory before attempting to deallocate the memory for TransferTracker.
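As a side note, a sketch of the same class using std::vector sidesteps the problem entirely (this assumes, as in the question, that the required size only becomes known inside calc(); the size 5 is kept from the original code).
#include <vector>

class Simulation {
    std::vector<double> TransferTracker;   // empty until calc() sizes it
public:
    void calc() {
        if (TransferTracker.empty())
            TransferTracker.resize(5);     // allocate on first use
        // ... use TransferTracker ...
    }
    // no destructor needed: the vector releases its memory automatically
};

Simulation simulator;   // safe as a global; nothing uninitialized to delete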