I've implemented a heap using two classes called IndexedHeap and HeapEntry. I have some corrupt memory access causing segfaults and I believe I know where/why, but I'm not sure how to fix it. Here's how I've designed the classes so far:
class IndexedHeap
{
private:
    std::vector<HeapEntry*> heap; // the entries held by this heap
public:
    ...
};
class HeapEntry
{
private:
    int index;
    size_t priority;
    unsigned long long key;
    IndexedHeap* myHeap; // reference to heap where this entry resides
public:
    HeapEntry(unsigned long long key, size_t priority, IndexedHeap* myHeap)
        : key(key), priority(priority), index(-1), myHeap(myHeap)
    {}
};
Both the heap and its entries need to refer to each other. As you can see I've decided to use a raw pointer to an IndexedHeap in HeapEntry. This is where I think I went wrong, but I'm not sure.
Throughout program execution, new heap entries are created as part of one heap. Entries are also removed from this heap and destroyed. Perhaps when one heap entry is destroyed, the heap it points to gets corrupted. That would explain my memory issues, because the next time a heap entry tries to access its heap, it accesses memory that has been released.
Unfortunately I'm not convinced of that. I haven't implemented a destructor for HeapEntry. The default destructor just calls the destructors of all of a class's member variables, right? So wouldn't only the pointer myHeap itself be destroyed, while the heap object it points to survives?
So, what is the correct way of designing this kind of relationship, and can my memory issues be explained from the code I've posted? Thanks, and please let me know if you'd like to see more code or more details.
Code that creates and destroys entries on the heap:
HeapEntry* IndexedHeap::insert(unsigned long long key)
{
    HeapEntry* entry = new HeapEntry(key, 1, this);
    heap.push_back(entry);
    int index = heapifyUp(heap.size() - 1);
    heap[index]->setIndex(index);
    return entry;
}
void IndexedHeap::deleteAtIndex(int pos)
{
    if (pos >= 0 && pos < heap.size())
    {
        // Copy heap.back() into the position of the target, thus overwriting it
        *heap[pos] = *heap.back();
        // Fix the index field of the just-copied element
        heap[pos]->setIndex(pos);
        // We've removed the target by overwriting it with heap.back().
        // Now get rid of the extra copy of heap.back():
        // release the memory, then pop_back to get rid of the pointer
        delete heap.back();
        heap.pop_back();
        // Heapify from the position we just messed with;
        // use heapifyDown because back() always has a lower priority than the element we are removing
        heapifyDown(pos);
    }
}
Well, firstly, why aren't you using std::priority_queue from the STL, or a multimap as a priority queue? It's a better solution than writing your own.
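For instance, a minimal sketch of the multimap idea (the names and values here are illustrative, not taken from your code):

#include <cstddef>
#include <iostream>
#include <map>

int main()
{
    // priority -> key; the entry with the smallest priority is always at begin()
    std::multimap<std::size_t, unsigned long long> pq;
    auto it = pq.insert({1, 42ULL}); // insert returns an iterator you can keep per entry
    pq.insert({5, 99ULL});
    pq.erase(it);                    // O(log n) removal by iterator, no index bookkeeping
    std::cout << pq.begin()->second << '\n'; // prints 99
}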
Next, the code structure: std::vector<HeapEntry*> heap; is notorious for leaking memory when people forget to delete the pointed-to objects, and for causing serious memory faults when people do try to delete them and get the deletion wrong.
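One way to sidestep that whole class of bugs is to let the vector own its entries through smart pointers; a rough sketch (assuming C++14 for std::make_unique):

#include <memory>
#include <vector>

struct HeapEntry { /* fields as in your class */ };

class IndexedHeap
{
    std::vector<std::unique_ptr<HeapEntry>> heap; // the vector owns the entries
public:
    HeapEntry* insert()
    {
        heap.push_back(std::make_unique<HeapEntry>());
        return heap.back().get(); // hand out a non-owning pointer
    }
    // no destructor needed: erasing from or destroying the vector frees every entry
};

int main()
{
    IndexedHeap h;
    h.insert();
} // all entries are released here automatically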
The "IndexedHeap* myHeap;" is most likely not your problem. References to things you dont own can be an issue if someone deletes those objects, but chances are you have stopped using th entries by then. Btw, since its a reference, you should consider making it a reference (which is then bound during ctr and never changed) - but that wouold alter the safety of the code in anyway. As youbelieve, the dtr for a pointer does nothing to the target.
Can you run valgrind? It solves things like this very quickly. If not:
Try not deleting any entries and see if that stops your faults; if so, that's telling.
You could also try tracking the pointers you new and delete, either with prints or with a global set/map object. This can be handy for finding these things.
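For example, a rough sketch of the tracking idea (trackNew/trackDelete are made-up helper names, purely illustrative):

#include <cassert>
#include <set>

std::set<void*> g_live; // every pointer we currently own

template <typename T>
T* trackNew(T* p) { g_live.insert(p); return p; }

template <typename T>
void trackDelete(T* p)
{
    // fires on a double delete, or on deleting something we never allocated
    assert(g_live.count(p) != 0);
    g_live.erase(p);
    delete p;
}

int main()
{
    int* p = trackNew(new int(7));
    trackDelete(p);    // fine
    // trackDelete(p); // would assert: double delete
}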
Related
I've been having trouble understanding the delete and delete [] operators in C++. Here's what I know so far:
aClass *ptr = new aClass(); //Allocates memory on the heap for an aClass object
//and gives us a pointer to that object
...
delete ptr; //ptr is still a pointer, but the object that it
//was pointing to is now destroyed. ptr is
//pointing to memory garbage at this point
ptr = anotherObjectPtr; //ptr is now pointing to something else
In the case that this happens,
aClass *ptr = new aClass();
...
ptr = anotherObjectPtr;
the object that ptr was pointing to is now lost in memory, and this will cause a memory leak. The object should've been deleted first.
I hope the above is correct
But I wrote this small program, where I'm getting some unexpected behaviour
#include <iostream>
#include <string>
using namespace std;
class Database {
private:
    Database() {
        arrNames = NULL;
        capacity = 1;
        size = 0;
    }
    Database(const Database &db) {}
    Database &operator=(const Database &db) {}
    string *arrNames;
    int capacity, size;
public:
    static Database &getDB() {
        static Database database;
        return database;
    }
    void addName(string name) {
        if (arrNames == NULL) {
            arrNames = new string[capacity];
        }
        if (size == capacity - 1) {
            capacity *= 2;
            string *temp = new string[capacity];
            int i = 0;
            while (i <= size) {
                temp[i] = arrNames[i];
                i++;
            }
            delete [] arrNames;
            arrNames = temp;
        }
        arrNames[size] = name;
        size++;
    }
    void print() {
        int i = 0;
        while (i <= size) {
            cout << arrNames[i] << endl;
            i++;
        }
    }
};
int main() {
    Database &database = Database::getDB();
    Database &db = Database::getDB();
    Database &info = Database::getDB();
    database.addName("Neo");
    db.addName("Morpheus");
    info.addName("Agent Smith");
    database.print();
    db.print();
    info.print();
}
In the addName function, when I call delete [] arrNames, what I think is happening is that the memory associated with the current array arrNames is destroyed, so arrNames is now pointing at garbage, Then arrNames is directed to point to another location in memory that is pointed to by temp. So if I hadn't called delete [] arrNames, then that location in memory would've been invalid, causing a memory leak. However, when I comment out that line, the code still works without problems. Am I not understanding something here?
Sorry that this is so long.
Thanks for the help.
However, when I comment out that line, the code still works without problems. Am I not understanding something here?
An important thing to know about programming is that doing things correctly is not merely a matter of having things apparently work.
Oftentimes you can try something out and have things appear to work, but then some outside circumstance changes, something you're not explicitly controlling or accounting for, and things stop working. For example, you might write a program and it runs fine on your computer; then you try to demo it to someone, happen to run it on their computer, and the program crashes. This idea is the basis of the running joke among programmers: "It works for me."
So things might appear to work, but in order to know that things will work even when conditions change you have to meet a higher standard.
You've been told how to do things correctly with delete, but that doesn't necessarily mean that things will break in an obvious way if you fail to do so. You need to abandon the idea that you can definitively determine whether something is correct or not by trying it out.
From what I think I see in your code, it looks like addName() is meant to append the new name onto the dynamic array. Doing this yourself can be headache-inducing, and there is a convenient existing STL template for just this which I strongly recommend: vector, from the <vector> header.
If you add #include <vector> and change string *arrNames to vector<string> arrNames, then your entire addName() function can be reduced to:
void addName(string name){
    arrNames.push_back(name);
}
From the vector's size() method you can determine the current length of the vector as well, and your capacity and size members are no longer needed.
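print() then shrinks as well; a sketch using the same names (note the i < arrNames.size() condition: the original while (i <= size) reads one element past the last stored name):

void print() {
    for (size_t i = 0; i < arrNames.size(); i++) {
        cout << arrNames[i] << endl;
    }
}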
A memory leak doesn't involve anything being made invalid. Quite the reverse, it's a failure to make a memory location invalid, causing it to remain in use even when it shouldn't be.
First of all, when you delete something, you are not destroying it in memory, just making it available for further allocation. This is somewhat similar to a filesystem: when you delete a file, you just say the space it occupied is now available for new data. You could actually retrieve the unmodified data after calling delete on it, but this is undefined behavior and will be compiler/OS specific.
If you don't delete[] arrNames, you leave its data forgotten in your process's memory, creating a memory leak. But besides this fatal flaw, there is no more magic happening.
I have a fairly complex application written in C++. I have a class called OrderBook. I need to create an array of OrderBook objects dynamically, so what I've done is:
OrderBook* pOrderBooks; // In header file
At runtime, I create the array as
p_OrderBooks = new OrderBook[n]; // n is an integer initialized at run time
The program works fine. But when I try to delete the array (as I need to create a new array pointed to by pOrderBooks) the program crashes. This is how I delete it:
delete[] p_OrderBooks;
I've made sure that the crash happens exactly at that line. So what I'm currently doing is reinitializing the pointer without deleting the previously allocated memory:
//delete[] p_OrderBooks; // <- crash happens here
p_OrderBooks = new OrderBook[k]; // for some 'k'
But it's bad since there'll be a memory leak. I'd like to know how to properly free the memory before re-pointing to the new array.
You are allocating p_OrderBooks but deleting pOrderBooks
If that's just a simple typo in your post, then it is likely that you are overrunning the bounds of this array, writing to elements past the beginning or end, therefore corrupting the heap so it crashes when you try to delete it.
Is it possible that one or more of your OrderBook destructors is throwing an exception out of the destructor? Doing so is typically considered bad, and if the exception is not handled it will crash your application.
I found the issue. I'm passing a pointer to an object created in the base class to OrderBook objects.
Server* p_Server = new Server(); // Some class
...
pOrderbook[i].SetServer(p_Server); // <- for i=[0:99]
The passed p_Server is stored in each OrderBook object as p_ServerBase (say):
Server* p_ServerBase; // <- in OrderBook.h
...
void OrderBook::SetServer(Server* pServer)
{
    p_ServerBase = pServer;
}
Then in OrderBook's destructor I'm trying to delete p_ServerBase, which is still being used by the base class:
...
~OrderBook()
{
    delete p_ServerBase;
}
Haven't had that experience before. I won't do that again :)
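A sketch of how std::shared_ptr could make that shared ownership explicit (assuming C++11 is available; the names mirror the snippets above):

#include <memory>

class Server { /* ... */ };

class OrderBook
{
    std::shared_ptr<Server> p_ServerBase; // shared, not exclusively owned
public:
    void SetServer(std::shared_ptr<Server> pServer) { p_ServerBase = std::move(pServer); }
    // no delete in the destructor: the last owner releases the Server
};

int main()
{
    auto p_Server = std::make_shared<Server>();
    OrderBook books[2];
    for (auto& b : books)
        b.SetServer(p_Server); // every book shares the same Server safely
}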
If you are doing something like this:
OrderBook* p_OrderBooks;
int n;
p_OrderBooks = new OrderBook[n]; // here n contains garbage
cin >> n;
delete[] p_OrderBooks;
Here n can be any garbage value; we don't know its size, and perhaps we start accessing memory that we don't own. That could be problematic.
You should take the input first:
cin >> n;
p_OrderBooks = new OrderBook[n];
I have a class with a member variable mBoundingBox of the following struct type:
typedef struct
{
    unsigned int xMin;
    unsigned int yMin;
    unsigned int xMax;
    unsigned int yMax;
} boundingBox;
class CImgProc
{
public:
    CImgProc(void);
    virtual ~CImgProc(void);
    ...
    boundingBox *mBoundingBox;
    ...
};
In code I allocate the member:
mBoundingBox = new boundingBox [mBlobCnt];
piddle around with it (I don't assign any pointers to it, just use array indexing), then, when I exit, I call:
if (mBoundingBox != NULL) delete [] mBoundingBox;
and this is causing an error.
Any input?
Updated info. The error does occur at termination in the destructor. The message generated by VS is:
Windows has triggered a breakpoint in ProcImage.exe.
This may be due to a corruption of the heap, ...
This may also be due to the user pressing F12 while ProcImage.exe has focus.
The output window may have more diagnostic information.
I am setting the pointer to NULL in the constructor and then allocating (with new) when I need to. The pointer is valid, but apparently not on the heap (break lands in dbgheap.c).
Once I allocate the memory, I don't happen to do any pointer magic with it. In this case I am looping through an image and gather stats. Then I use the stats stored in this memory to draw back into my image, but, again, in a rather brute force manner, so nothing else makes use of this memory.
It is legal for me to use new to create an array of structs, isn't it?
Doh!!! Sorry to waste y'all's time. I dug back in and discovered that my creation and destruction are fine, but somewhere in the middle I set the value of mBoundingBox[X].whatever, where it turns out X is the dimension of the array created, i.e. an out-of-bounds write.
Typical user error, just a surprising place for the bug to show up.
Most probably you are deleting your array twice. To manage it better, use
delete[] mBoundingBox;
mBoundingBox = 0;
instead of
if (mBoundingBox != NULL) delete [] mBoundingBox;
or, even better, use a smart pointer.
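For instance, a sketch of the array form of std::unique_ptr (assuming C++11; a std::vector<boundingBox> would do just as well):

#include <memory>

struct boundingBox { unsigned int xMin, yMin, xMax, yMax; };

int main()
{
    const int mBlobCnt = 8; // illustrative count
    std::unique_ptr<boundingBox[]> mBoundingBox(new boundingBox[mBlobCnt]());
    mBoundingBox[3].xMax = 42; // array indexing works as before
    // no delete[] needed, and a double delete becomes impossible
}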
First of all, the following check is pointless:
if (mBoundingBox != NULL) delete [] mBoundingBox;
new does not return NULL when it fails to allocate memory; rather, it throws a std::bad_alloc exception (and delete on a null pointer is a harmless no-op anyway).
Use the nothrow version of new if you want to proceed the way you are doing. With the nothrow version, new will return NULL instead of throwing an exception:
mBoundingBox = new (std::nothrow) boundingBox [mBlobCnt];
Tutorials, searches, and the dim memory of my C++ formal education have left me clueless as to where I should use delete when I'm using a dynamically allocated object pointer in a loop, such as:
// necessary files are included, this code is within main
T * t;
t = foo.getNewT();
while (!t->isFinalT()) {
    // print t stuff
    delete t; // is this where I should delete t?
    t = foo.getNewT();
}
delete t;
This lack of knowledge has become particularly troublesome on a recent class project. On my laptop (Linux Mint, g++ Ubuntu/Linaro 4.7.3-1ubuntu1) the code ran fine without the delete statement and crashed when I added the delete statement. On the school server (Solaris, g++ (GCC) 3.4.5), the code segfaulted after a few iterations without the delete statement, and runs fine when I add the delete statement.
How do I handle this kind of loop properly so that it will run in most environments?
Additional Info:
The error on my laptop occurs when the program reaches the delete request:
*** Error in 'program': free(): invalid next size (fast):...
Some of the other code:
// T.h
class T {
    int id;
    int num;
    int strVarPos;
    char * strVar;
public:
    T();
    ~T();
    // + misc. methods
};
// T.cpp
T::T() {
    id = 0;
    num = -1;
    strVarPos = 0;
    char * strVar = new char[11];
    strVar[0] = '\0';
}
T::~T() {
    delete [] strVar;
}
// Foo.cpp
T * Foo::getNewT() {
    T * t = new T;
    // populate T's fields
    return t;
}
Resolution:
Because a simple test with just T * t and the loop worked ok, I ended up reconstructing the project starting from blank and adding one class at a time, to see when the problem would appear. Turns out that I had added additional content into a dynamically allocated array elsewhere in the program without updating the size constant I was using to initialize the array.
Evidently the school server could only handle the resulting memory discrepancy without crashing if I was making sure to delete the pointers properly (the program didn't run long enough to cause a significant memory leak in my tests), while my laptop wouldn't notice the memory discrepancy until I attempted to call delete (and then would crash).
Assuming that foo.getNewT() is handing ownership of the memory over to the caller:
T * t;
t = foo.getNewT();
//while (!t->isFinalT()) // if foo.getNewT ever returns NULL, this will be UB!!!
while (t != nullptr && !t->isFinalT())
{
    // ...
    delete t; // if you now own it and are no longer going to use it, yes, delete it here
    t = foo.getNewT();
}
delete t; // you also need this one to delete the "final" t
However, you can avoid having to do it yourself by using std::unique_ptr:
std::unique_ptr<T> t;
t.reset(foo.getNewT());
while (t && !t->isFinalT())
{
    // ...
    t.reset(foo.getNewT());
}
Alternatively, you could rewrite the loop to flow a bit better:
std::unique_ptr<T> t;
do
{
    t.reset(foo.getNewT());
    if (t)
    {
        // do stuff with t
    }
} while (t && !t->isFinalT());
the code ran fine without the delete statement and crashed when I
added the delete statement.
Are you sure getNewT is handing ownership of the T* to you? If you delete it, and then it tries to delete it later, you will end up with a heap corruption. If it is handing ownership over to the caller, and you do not delete it, you get a memory leak.
With the additional information in your edit:
char * strVar = new char[11];
That line is unnecessary if you declare strVar as either a std::string or a char[11]. If you attempt to copy any of those T objects, you'll be using the default copy constructor (as you have not defined one), which will do a shallow copy (that is, copy the value of the pointer for strVar). When you delete 2 Ts that are both pointing to the same memory location, you get a heap corruption. The most robust solution would be to declare strVar as a std::string.
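A sketch of T rewritten with std::string (illustrative, mirroring the fields shown above; assumes C++11 for the in-class initializers):

#include <string>

class T {
    int id = 0;
    int num = -1;
    int strVarPos = 0;
    std::string strVar; // copies deeply and frees itself
public:
    T() = default;
    // no ~T() needed, and the compiler-generated copy
    // constructor and assignment are now correct
};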
The problem is not the delete. You have put it in the right place. It's more likely something else you are doing that is causing undefined behaviour.
Note that you should have a delete t after the loop as well (to catch the last one). This is assuming that foo.getNewT() always returns a valid pointer (which it must, because you never check if it is NULL).
You should delete dynamically allocated memory when you no longer need it. If you want t to hold its value beyond the loop, then delete it outside the loop; otherwise delete it inside.
However, the best thing to do is to use std::unique_ptr when you really have to use pointers. It will take care of deallocating the memory when the owning pointer is destroyed. You should try to avoid allocating memory yourself as much as you can; use STL containers if they fit the job.
I think when you delete t you are deleting the real object inside your structure.
Maybe that's what is causing the problem.
int count;
class MyClass {
    std::shared_ptr<void> p;
public:
    MyClass(std::shared_ptr<void> f) : p(f) {
        ++count;
    }
    ~MyClass() {
        --count;
    }
};
void test(int n) {
    std::shared_ptr<void> p;
    for (int i = 0; i < n; ++i) {
        p = std::make_shared<MyClass>(p);
    }
    std::cout << count << std::endl;
}
int main(int argc, char* argv[])
{
    test(200000);
    std::cout << count << std::endl;
    return 0;
}
The above program causes a stack overflow under a "release" build in the Visual Studio 2010 IDE.
The question is: if you do need to create some data structure like the above, how do you avoid this problem?
UPDATE: Now I have seen one meaningful answer. However, this is not good enough. Please consider that I have updated MyClass to contain two (or more) shared_ptrs, each of which can point to an instance of MyClass or some other data.
UPDATE: Somebody updated the title for me to say "deep ref-counted data structure", which is not necessarily related to this question. Actually, shared_ptr is only a convenient example; you can easily switch to other data types with the same problem. I also removed the C++11 tag because it is not a C++11-only problem either.
Make the stack explicit (i.e. put it in a container on the heap).
Have non-opaque pointers (non-void) so that you can walk your structure.
Un-nest your deep recursive structure onto the heap container, making the structure non-recursive (by disconnecting it as you go along).
Deallocate everything by iterating over the pointers collected above.
Something like this, with the type of p changed so we can inspect it:
std::shared_ptr<MyClass> p;
~MyClass() {
    std::stack<std::shared_ptr<MyClass>> ptrs;
    std::shared_ptr<MyClass> current = p;
    while (current) {
        ptrs.push(current);
        current = current->p;
        ptrs.top()->p.reset(); // does not call the dtor, since we still hold a copy in current
    }
    --count;
    // ptrs' dtor releases every pointer here, and there's no recursion: each
    // object's p member is now null, and the stack destroys its elements iteratively
}
Some final tips:
If you want to untangle any structure, your types should provide an interface that returns and releases all internal shared_ptrs, i.e. something like std::vector<shared_ptr<MyClass>> yieldSharedPtrs(), perhaps within an ISharedContainer interface or something if you can't restrict yourself to MyClass; see the sketch after this list.
For recursive structures, you should check that you don't add the same object to your ptr-container twice.
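A sketch of the yieldSharedPtrs() idea for the single-link MyClass above (illustrative only):

#include <memory>
#include <vector>

class MyClass {
    std::shared_ptr<MyClass> p; // one internal link; real code may have several
public:
    // Hand every internal shared_ptr to the caller and clear it here, so an
    // external loop can untangle the structure without recursive destruction.
    std::vector<std::shared_ptr<MyClass>> yieldSharedPtrs() {
        std::vector<std::shared_ptr<MyClass>> out;
        if (p)
            out.push_back(std::move(p)); // p is left null
        return out;
    }
};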
Thanks to #Macke's tips, I have an improved solution like the following:
~MyClass() {
    DEFINE_THREAD_LOCAL(std::queue<std::shared_ptr<void>>, q)
    bool reentrant = !q.empty();
    q.emplace(std::move(p)); // IMPORTANT!
    if (reentrant) return;
    while (!q.empty()) {
        auto pv = std::move(q.front()); // take ownership, leaving an empty placeholder in q
        pv.reset(); // may re-enter ~MyClass; q still holds the placeholder, so the
                    // nested call sees a non-empty queue, enqueues its own p and returns
        q.pop();    // now discard the placeholder
    }
}
DEFINE_THREAD_LOCAL is a macro that defines a variable (param 2) of the specified type (param 1) with thread-local storage duration, which means there is exactly one instance per running thread. Because the thread_local keyword is still not available in mainstream compilers, I have to assume such a macro to keep the code portable.
For single-threaded programs, DEFINE_THREAD_LOCAL(type, var) is simply
static type var;
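With a compiler that does support C++11 thread_local, the macro could be simply (a sketch):
#define DEFINE_THREAD_LOCAL(type, var) static thread_local type var;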
The benefit of this solution is that it does not require changing the class definition.
Unlike #Macke's solution, I use std::queue rather than std::stack in order to preserve the destruction order.
In the given test case q never holds more than one live element (plus, transiently, a drained placeholder). However, that is just because this algorithm is breadth-first; if MyClass had more links to other instances of MyClass, q.size() would reach greater values.
NOTE: It is important to remember to use std::move to pass p to the queue. If you forget to do so you have not solved the problem: you are just creating and destroying a new copy of p, and after the visible code the destruction will still be recursive.
UPDATE: the originally posted code had a problem: q could be modified (by a nested destructor) during the pop() call. The solution is to move q.front() into a local and destroy it before calling pop(), so that q still holds a placeholder whenever a nested destructor runs and the reentrancy check kicks in.
If you really have to work with such odd code, you can increase the size of your stack. You should find this option in the project properties of Visual Studio.
As already suggested, I must tell you that this kind of code should be avoided when working with large amounts of data, and increasing the stack size is not a good solution if you plan to release your software. It may also slow down your own computer terribly if you abuse this feature, obviously.