I am trying to build a stack that resizes itself and realloc() crashes my program.
Constructor:
Stack::Stack()
{
    st = (int*)malloc(sizeof(int));
    sp = 0;
    length = 1;
}
This is my add() function:
void Stack::add(int item)
{
    if (sp == length)
        Stack::resizeStack(&st, &length, 1);
    st[sp++] = item;
}
Resize function (I use the variable a so that I can reuse it for pop):
void Stack::resizeStack(int **st, int *length, bool a)
{
    if (a == 1)
        *length *= 2;
    else
        *length /= 2;
    realloc(*st, sizeof(int) * (*length));
}
This is how I test my stack:
Stack* myStack = new Stack();
for (int i = 0; i < 10; i += 1) {
    myStack->add(i);
    cout << myStack->getStackSize() << '\n';
}
free(myStack);
I have noticed that the program crashes at the end of the for loop.
I would appreciate it if someone explained what I am doing wrong.
All those people who say malloc() and free() are a bad idea in C++ are 100% correct. Prefer new and delete over malloc() and free(), and prefer standard library containers over your own homebrew stack implementation.
Anyway, the real issue here is that realloc() might allocate a new memory block and free the old one. It returns a pointer to the new block.
The correct call is:
*st = (int*)realloc(*st, sizeof(int) * (*length));
Now *st will store the new pointer and everything is all right.
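As an aside, a slightly more defensive version of the resize function might look like the sketch below (the bool parameter is renamed grow here purely for readability). It keeps the old pointer until realloc is known to have succeeded, and includes the cast that C++ requires for the void* return value:

void Stack::resizeStack(int **st, int *length, bool grow)
{
    int newLength = grow ? (*length) * 2 : (*length) / 2;

    // realloc returns NULL on failure and leaves the old block untouched,
    // so keep the old pointer until we know the call succeeded.
    int *tmp = (int*)realloc(*st, sizeof(int) * newLength);
    if (tmp != NULL)
    {
        *st = tmp;
        *length = newLength;
    }
}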
Consider using the standard library instead of implementing your own fundamental data structures. It has a well-designed interface and is very thoroughly tested.
You are lucky that you get a crash. This is undefined behavior.
Let's see what Bjarne Stroustrup says about this from here:
No, in the sense that you cannot allocate an object with malloc() and
free it using delete. Nor can you allocate with new and delete with
free() or use realloc() on an array allocated by new.
The C++ operators new and delete guarantee proper construction and
destruction; where constructors or destructors need to be invoked,
they are. The C-style functions malloc(), calloc(), free(), and
realloc() don't ensure that. Furthermore, there is no guarantee that
the mechanism used by new and delete to acquire and release raw memory
is compatible with malloc() and free(). If mixing styles works on your
system, you were simply "lucky" - for now.
The C++ FAQ also has a special entry for this:
https://isocpp.org/wiki/faq/freestore-mgmt#realloc-and-renew
Crashing at the end is most likely because you're mixing new with free. This is undefined behaviour. It may work on some systems, but is never a good idea. You should pair new with delete. Calls to malloc are paired with free, but these are more for C code. C++ code usually uses new and delete.
Of course you can eliminate the new by making myStack a local variable:
Stack myStack;
You would also need to adjust member access to use . instead of ->. There would be no need to delete myStack, since local variables would be automatically cleaned up once the function exits, including in the case of exceptions.
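A sketch of the test rewritten that way, keeping the Stack interface from the question:

Stack myStack;                              // automatic storage: no new, no free
for (int i = 0; i < 10; i += 1) {
    myStack.add(i);
    cout << myStack.getStackSize() << '\n';
}
// myStack goes out of scope here; give Stack a destructor that releases st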
Also notice that the return value of realloc is ignored. If the current memory block can't be extended in place, realloc must allocate a new memory block and copy the old data over to it. A new pointer is returned in this case, so st must be updated.
*st = (int*)realloc(*st, sizeof(int) * (*length));
But again, using malloc, realloc, and free is a bit odd in C++ code.
You can use new[] and delete[] if you're forced to use manual memory management of arrays (such as for learning), or you can use the vector or stack classes for more serious code.
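If you are doing manual management for practice, a grow/shrink operation built on new[]/delete[] could look like the following sketch. It assumes the constructor is also switched to st = new int[1], since memory from new[] must never be passed to realloc or free; and because there is no realloc counterpart for new[], the elements are copied by hand:

void Stack::resizeStack(int **st, int *length, bool grow)
{
    int newLength = grow ? (*length) * 2 : (*length) / 2;
    int *newBlock = new int[newLength];

    // Copy the elements that still fit, then release the old block.
    int toCopy = (*length < newLength) ? *length : newLength;
    for (int i = 0; i < toCopy; ++i)
        newBlock[i] = (*st)[i];

    delete[] *st;
    *st = newBlock;
    *length = newLength;
}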
Use std::vector and it will automatically take care of all the memory management for you. In fact, you hardly need a Stack class at all in the face of std::stack, but that's another matter.
Furthermore, for the test, do not allocate the Stack dynamically (it is pointless here); just make a local Stack.
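For comparison, the whole test collapses to a few lines with std::stack (a sketch):

#include <iostream>
#include <stack>

int main()
{
    std::stack<int> myStack;
    for (int i = 0; i < 10; i += 1) {
        myStack.push(i);
        std::cout << myStack.size() << '\n';
    }
}   // all memory is released automatically when myStack is destroyed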
Suppose I have the following snippet.
int main()
{
    int num;
    int* cost;
    while (cin >> num)
    {
        int sum = 0;
        if (num == 0)
            break;
        // Dynamically allocate the array and set to all zeros
        cost = new int [num];
        memset(cost, 0, num);
        for (int i = 0; i < num; i++)
        {
            cin >> cost[i];
            sum += cost[i];
        }
        cout << sum/num;
    }
    delete[] cost;
    return 0;
}
Although I can move the delete statement inside the while loop
for my code, for understanding purposes, I want to know what happens with the code as it's written. Does C++ allocate different memory spaces each time I use operator new?
Does operator delete only delete the last allocated cost array?
Does C++ allocate different memory spaces each time I use operator new?
Yes.
Does operator delete only delete the last allocated cost array?
Yes.
You've lost the only pointers to the others, so they are irrevocably leaked. To avoid this problem, don't juggle pointers, but use RAII to manage dynamic resources automatically. std::vector would be perfect here (if you actually needed an array at all; your example could just keep reading and re-using a single int).
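For example, a sketch of the same loop with std::vector (same behaviour, no manual delete anywhere):

#include <iostream>
#include <vector>

int main()
{
    int num;
    while (std::cin >> num && num != 0)
    {
        std::vector<int> cost(num, 0);   // num ints, all zero-initialized
        int sum = 0;
        for (int i = 0; i < num; i++)
        {
            std::cin >> cost[i];
            sum += cost[i];
        }
        std::cout << sum / num << '\n';
    }   // cost is destroyed, and its memory freed, at the end of every iteration
    return 0;
}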
I strongly advise you not to use "C idioms" in a C++ program. Let the std library work for you: that's why it's there. If you want "an array (vector) of n integers," then that's what std::vector is all about, and it "comes with batteries included." You don't have to monkey-around with things such as "setting a maximum size" or "setting it to zero." You simply work with "this thing," whose inner workings you do not [have to ...] care about, knowing that it has already been thoroughly designed and tested.
Furthermore, when you do this, you're working within C++'s existing framework for memory management. In particular, you're not doing anything "out-of-band" within your own application "that the standard library doesn't know about, and which might well mess it up."
C++ gives you a very comprehensive library of fast, efficient, robust, well-tested functionality. Leverage it.
There is no cost array in your code. In your code cost is a pointer, not an array.
The actual arrays in your code are created by repetitive new int [num] calls. Each call to new creates a new, independent, nameless array object that lives somewhere in dynamic memory. The new array, once created by new[], is accessible through cost pointer. Since the array is nameless, that cost pointer is the only link you have that leads to that nameless array created by new[]. You have no other means to access that nameless array.
And every time you do that cost = new int [num] in your cycle, you are creating a completely new, different array, breaking the link from cost to the previous array and making cost point to the new one.
Since cost was your only link to the old array, that old array becomes inaccessible. Access to that old array is lost forever. It becomes a memory leak.
As you correctly stated it yourself, your delete[] expression only deallocates the last array - the one cost ends up pointing to in the end. Of course, this is only true if your code ever executes the cost = new int [num] line. Note that your cycle might terminate without doing a single allocation, in which case you will apply delete[] to an uninitialized (garbage) pointer.
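If you want to keep the raw new[]/delete[] version for learning purposes, a leak-free sketch (assuming the same headers as the original) initializes the pointer and releases each array inside the loop:

int num;
int* cost = nullptr;                 // so no garbage pointer can ever reach delete[]
while (cin >> num)
{
    int sum = 0;
    if (num == 0)
        break;
    cost = new int[num]();           // value-initialized to zero
    for (int i = 0; i < num; i++)
    {
        cin >> cost[i];
        sum += cost[i];
    }
    cout << sum / num;
    delete[] cost;                   // free this iteration's array before the next new[]
    cost = nullptr;
}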
Yes. So you get a memory leak for each iteration of the loop except the last one.
When you use new, you allocate a new chunk of memory. Assigning the result of the new to a pointer just changes what this pointer points at. It doesn't automatically release the memory this pointer was referencing before (if there was any).
First off this line is wrong:
memset(cost, 0, num);
It assumes an int is only one char long. More typically it's four. You should use something like this if you want to use memset to initialise the array:
memset(cost, 0, num*sizeof(*cost));
Or better yet dump the memset and use this when you allocate the memory:
cost = new int[num]();
As others have pointed out, the delete is incorrectly placed and will leak all memory allocated by its corresponding new except for the last. Move it into the loop.
Every time you allocate new memory for the array, the memory that was previously allocated is leaked. As a rule of thumb, you need to free memory as many times as you have allocated it.
Apologies if this is a silly question - I've been self-teaching C++ and am currently writing a memory manager as an exercise for myself, but I'm not clear on what happens under the hood when I'm calling malloc and free. I've provided some skeleton code below that hopefully illustrates my question a little better.
I have overridden the global new and delete operators to call into the Alloc(size_t) and Free(void*) methods of a MemoryManager class and have set up a few memory pools that are working very well. However, I allow one of my pools to grow when it needs to. This pool is initialized by allocating some heap memory to a pointer: char* mPoolAllocator.
My question is basically: When I grow my pool, is it safe to use the same pointer (mPoolAllocator) to allocate some new heap memory? What happens when I call free(mPoolAllocator) in ~MemoryManager() below? Does the default memory manager keep track of every bit of heap memory I've allocated using this pointer and allow me to free them all in one call to free, or is it simply freeing the block beginning at the address that the pointer was last set to?
The code below is only an illustration and is nowhere near to how my MemoryManager class works: I'm primarily looking for feedback on malloc() and free().
class MemoryManager
{
public:
    MemoryManager();
    ~MemoryManager();

    void* Alloc(size_t size);
    void Free(void* address);

private:
    size_t mFreeMemory;            // unallocated memory left
    char* mPoolAllocator,          // used to alloc memory from the heap
        * mUnallocated;            // points to front of free blocks linked list

    void ExtendPool();             // extends pool, increasing available memory
    void* GetBlock(size_t size);   // returns a heap address sufficient for an object of the given size
};
void* MemoryManager::Alloc(size_t size)
{
    /* If there is free memory */
    if (size <= mFreeMemory)
    {
        return GetBlock(size);
    }
    else // else create new free memory
    {
        ExtendPool();
        return GetBlock(size);
    }
}

void MemoryManager::ExtendPool()
{
    mPoolAllocator = (char*)malloc(POOL_EXTEND_SIZE);
    // some calls to functions that split the extended pool into blocks
    mUnallocated = mPoolAllocator; // point to the next unallocated memory block (beginning of extended pool)
}

MemoryManager::~MemoryManager()
{
    free(mPoolAllocator);
}
No, that leaks memory.
Each return value from malloc() must be used as an argument in a distinct call to free(). For this usage, look into realloc(), which will make things work more like you expect, since it allows you to grow an already-allocated piece of heap memory.
There is no trace in the mPoolAllocator variable of the previous pointers returned from malloc().
Also, in C++, shouldn't you use new[] to allocate arrays of bytes?
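To illustrate the "one free() per malloc()" point, one common approach (just a sketch with a hypothetical GrowablePool class, not your actual MemoryManager) is to thread an intrusive list through the blocks themselves. That way every block the pool ever obtained can be released in the destructor, without an extra container that would itself allocate through the overridden operator new. Alignment details are glossed over here:

#include <cstdlib>

class GrowablePool
{
public:
    GrowablePool() : mLastBlock(nullptr) {}

    ~GrowablePool()
    {
        // Walk the chain and call free() once for every malloc() that succeeded.
        char* block = mLastBlock;
        while (block != nullptr)
        {
            char* previous = *reinterpret_cast<char**>(block);
            std::free(block);
            block = previous;
        }
    }

    void ExtendPool(std::size_t bytes)
    {
        // Reserve room at the front of the new block for a link to the previous block.
        char* block = static_cast<char*>(std::malloc(bytes + sizeof(char*)));
        if (block == nullptr)
            return;
        *reinterpret_cast<char**>(block) = mLastBlock;
        mLastBlock = block;
        // the usable pool memory starts at block + sizeof(char*)
    }

private:
    char* mLastBlock;   // head of the intrusive list of every block obtained from malloc()
};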
I have the following code
void foo()
{
    char* pcBlock = new char[1000];
    ...
    delete[] pcBlock;
    ...
    pcBlock = new char[100000];
    ...
    delete[] pcBlock;
}
Would the code below result in a memory leak?
void foo()
{
    char* pcBlock = new char[1000];
    ...
    pcBlock = new char[100000];
    ...
    delete[] pcBlock;
}
Yes, there's a memory leak if you don't delete[] pcBlock somewhere in the first .... Reassigning a pointer does not automatically delete what it previously pointed to.
Operator "new" and "delete" should be used in pairs. Otherwise, using "new" without "deleting" causes memory leakage.
Yes, the previously allocated 1000 bytes are not freed and pcBlock is replaced with new set of memory. There is no way to release the previous 1000 bytes. So its a mem leak.
Yes, it will most likely leak memory (unless the compiler is smart enough to fix that for you, but most won't).
Maybe you should try to realloc in some way.
Yes, it will. Perhaps you think that the arrays will overlap and that delete will therefore free the first array, but in fact they are allocated in different parts of memory.
Yes, it would cause a memory leak.
In C++ there is one simple rule: every call to new should be matched by a call to delete, and every new[] by a delete[]. Otherwise you will get a memory leak.
C++ is a language in which the programmer is expected to manage dynamic memory themselves (or use third-party libraries that do it).
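For completeness, a leak-free sketch of the second foo() either delete[]s the first array before reusing the pointer, or lets a smart pointer do it:

#include <memory>

void foo()
{
    std::unique_ptr<char[]> pcBlock(new char[1000]);
    // ...
    pcBlock.reset(new char[100000]);   // the old 1000-char array is delete[]'d here
    // ...
}   // whatever pcBlock still owns is delete[]'d automatically on return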
I'm catching up on pointers. I wrote down a few lines of code to test the different ways I could dynamically allocate a 2d array (posted at the bottom)
My questions are the following:
Should I use malloc in C++, or new? If I use new, can I still use realloc?
When should I use realloc? What are the implications of using it in terms of performance and bugs?
Out of the examples below, which objectNew version should I use? If the answer depends on the application, what does it depend on?
Many thanks,
Matt
#include <stdio.h>
#include <stdlib.h>

struct myObject
{
    int data;
    myObject(int i)
    {
        data = i;
    }
    myObject()
    {
        data = 0;
    }
};

int main(){
    int r = 7;
    int c = 6;

    printf("Objects using NEW===============================\n");
    //notice the triple pointer being assigned a new double pointer array
    myObject*** objectNew = new myObject** [r];
    for(int i=0;i<r;i++)
    {
        //objectNew is a 1D array of double pointers, however if we assign another layer of pointers, it becomes a 2D array of pointers to myObject
        objectNew[i] = new myObject* [c];
        for(int j=0;j<c;j++){
            objectNew[i][j] = new myObject(10*i+j);
            //notice that we dereference data (->)
            printf("objectNew[%2d][%2d]=%02d\n",i,j,objectNew[i][j]->data);
        }
    }
    delete objectNew;

    printf("Objects using NEW version 2===============================\n");
    //notice the double pointer being assigned a new pointer array
    myObject** objectNew2 = new myObject* [r];
    for(int i=0;i<r;i++)
    {
        //objectNew2 is a 1D array of pointers; assigning an array of myObject to each element makes it a 2D array of myObject
        objectNew2[i] = new myObject [c];
        for(int j=0;j<c;j++){
            objectNew2[i][j] = myObject(10*i+j);
            //notice that we access data with the member operator (.)
            printf("objectNew2[%2d][%2d]=%02d\n",i,j,objectNew2[i][j].data);
        }
    }
    delete objectNew2;

    printf("Objects using MALLOC===============================\n");
    //notice the double pointer being allocated double pointers the size of pointers to myObject
    myObject** objectMalloc =(myObject**) malloc(sizeof(myObject*)*r);
    for(int i=0;i<r;i++)
    {
        //now we are assigning array of pointers the size of myObject to each double pointer
        objectMalloc[i] = (myObject*) malloc(sizeof(myObject)*c);
        for(int j=0;j<c;j++){
            objectMalloc[i][j] = myObject(10*i+j);
            //notice that we access data without dereferencing (.)
            printf("objectMalloc[%2d][%2d]=%02d\n",i,j,objectMalloc[i][j].data);
        }
    }
    free((void*) objectMalloc);

    //same as Malloc
    printf("Objects using CALLOC===============================\n");
    myObject** objectCalloc = (myObject**) calloc(r,sizeof(myObject*));
    for(int i=0;i<r;i++)
    {
        objectCalloc[i] = (myObject*) calloc(c,sizeof(myObject));
        for(int j=0;j<c;j++){
            objectCalloc[i][j] = myObject(10*i+j);
            printf("objectCalloc[%2d][%2d]=%02d\n",i,j,objectCalloc[i][j].data);
        }
    }
    free((void*) objectCalloc);

    printf("Int using NEW===============================\n");
    //int is not an object
    int** intNew = new int* [r];
    for(int i=0;i<r;i++)
    {
        intNew[i] = new int[c];
        for(int j=0;j<c;j++){
            intNew[i][j] = 10*i+j;
            printf("intNew[%2d][%2d]=%02d\n",i,j,intNew[i][j]);
        }
    }
    delete intNew;

    printf("Int using malloc===============================\n");
    int** intMalloc =(int**) malloc(sizeof(int*)*r);
    for(int i=0;i<r;i++)
    {
        intMalloc[i] =(int*) malloc(sizeof(int)*c);
        for(int j=0;j<c;j++){
            intMalloc[i][j] = 10*i+j;
            printf("intMalloc[%2d][%2d]=%02d\n",i,j,intMalloc[i][j]);
        }
    }
    free((void*) intMalloc);

    getchar();
    return 0;
}
new and new[] are constructor aware, the C memory allocation functions are not.
Similarly, delete and delete [] are destructor aware, whereas free isn't.
realloc does not work with memory allocated with new, there is no standards defined analog to the realloc function with C++ memory management functions.
Do not mix new and delete with malloc/calloc/realloc and free (i.e. don't free any memory allocated with new, don't delete any memory allocated with malloc)
If you are doing new [], you most likely should be using an std::vector, which encapsulates all that memory management for you.
If you are using memory allocated raw pointers, you might be better served using smart/semi-smart pointers like boost::shared_ptr or boost::scoped_ptr or std::auto_ptr, depending on your needs.
Thanks to David Thornley for bringing this up:
Never mix scalar new and delete with their array forms, i.e. don't delete[] memory allocated with new, and don't delete memory allocated with new[]. Undefined behavior all around.
In your example, you SHOULD use new and new[] if you must, because as I said, they are constructor aware, and your class is not a POD data type since it has a constructor.
Please see the C++ FAQ for more information:
C++ FAQ Freestore Management
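As a concrete illustration of the std::vector suggestion, the 2D object array from the question can be written without any explicit allocation at all (a sketch, reusing the myObject struct defined in the question):

#include <cstdio>
#include <vector>

int main()
{
    const int r = 7;
    const int c = 6;

    // r rows of c default-constructed myObject elements; no new/delete anywhere.
    std::vector< std::vector<myObject> > grid(r, std::vector<myObject>(c));

    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++)
        {
            grid[i][j] = myObject(10*i + j);
            std::printf("grid[%2d][%2d]=%02d\n", i, j, grid[i][j].data);
        }

    return 0;
}   // every row and the outer vector are destroyed and freed automatically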
1) Should I use malloc in c++, or new?
If I use new, can I still use realloc?
Use new. And never use realloc with new.
When should I use realloc? What are the implications of using it in terms of performance and bugs?
Instead of using realloc, I usually implement my own memory-pool, from which I reallocate memory when I need it. This also improves performance.
Should I use malloc in C++, or new? If I use new, can I still use realloc?
You can use malloc in C++; however, it is strongly discouraged in most circumstances. Additionally, if you call malloc, calloc, realloc, etc., you need to use free to deallocate the memory when done.
If you use new to allocate memory, you must use delete to deallocate it.
The C-allocation routines do not have type safety. new does.
If you want to write C++ (!) programs, don't use malloc, realloc, or free; they are simply not C++ functions.
And there is no need to use new in your case (I'm talking about the example).
You should use vector or some other container.
In general, you should try to avoid doing your own memory allocation in C++. Most (not all) arrays can be done with std::vector better than you're going to do your own, particularly since you are making a mistake here.
You allocate, say, intNew and intNew[i]. Then you delete [] intNew; (the brackets are necessary here; leaving them off is an error) without delete[]ing any of the intNew[i]. The one-dimensional arrays you allocated are now memory leaks, and cannot possibly be freed because you've lost all references to them. Neither C nor C++ normally comes with a garbage collector.
The C memory allocation functions deal with memory only. They aren't type-safe, and don't call constructors or destructors. They can be used with data types that require neither, but you do have to make sure they do neither. (It is possible to malloc memory and use placement new to create objects there, but that's more advanced.)
The C++ memory allocation functions deal with memory and objects. They are type-safe, mostly, and do call constructors and destructors. Not all objects can be moved safely, so there is no equivalent of realloc. The tricky part is that you have to match new with delete, and new[] with delete[]. You have this error in your code: delete intNew; should be delete [] intNew;.
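For reference, the cleanup for the nested new[] layout has to mirror the allocation, innermost arrays first (a sketch for the intNew case from the question):

// Matches: intNew = new int*[r]; intNew[i] = new int[c];
for (int i = 0; i < r; i++)
    delete [] intNew[i];   // release each row allocated with new int[c]
delete [] intNew;          // then release the array of row pointers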
Following a discussion in a software meeting, I've set out to find out whether deleting a dynamically allocated array of primitives with plain delete will cause a memory leak.
I have written this tiny program and compiled it with visual studio 2008 running on windows XP:
#include "stdafx.h"
#include "Windows.h"
const unsigned long BLOCK_SIZE = 1024*100000;
int _tmain()
{
    for (unsigned int i = 0; i < 1024*1000; i++)
    {
        int* p = new int[1024*100000];
        for (int j = 0; j < BLOCK_SIZE; j++) p[j] = j % 2;
        Sleep(1000);
        delete p;
    }
}
I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as expected.
I've modified my test program to allocate an array of a non-primitive type:
#include "stdafx.h"
#include "Windows.h"
struct aStruct
{
    aStruct() : i(1), j(0) {}
    int i;
    char j;
} NonePrimitive;

const unsigned long BLOCK_SIZE = 1024*100000;

int _tmain()
{
    for (unsigned int i = 0; i < 1024*100000; i++)
    {
        aStruct* p = new aStruct[1024*100000];
        Sleep(1000);
        delete p;
    }
}
After running for 10 minutes, there was no meaningful increase in memory.
I compiled the project with warning level 4 and got no warnings.
Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?
delete p, where p points to an array, is undefined behaviour.
Specifically, when you allocate an array of raw data types (ints), the compiler doesn't have a lot of work to do, so the allocation turns into little more than a malloc(), and delete p will probably appear to work.
delete p is going to fail, typically, when:
p was a complex data type - delete p; won't know to call individual destructors.
a "user" overloads operator new[] and delete[] to use a different heap to the regular heap.
the debug runtime overloads operator new[] and delete[] to add extra tracking information for the array.
the compiler decides it needs to store extra RTTI information along with the object, which delete p; won't understand, but delete []p; will.
No, it's undefined behavior. Don't do it - use delete[].
In VC++ 7 to 9 it happens to work when the type in question has a trivial destructor, but it might stop working in newer versions - the usual story with undefined behavior. Don't do it anyway.
It's called undefined behaviour; it might work, but you don't know why, so you shouldn't stick with it.
I don't think Visual Studio keeps track of how you allocated the objects, as arrays or plain objects, and magically adds [] to your delete. It probably compiles delete p; to the same code as if you allocated with p = new int, and, as I said, for some reason it works. But you don't know why.
One answer is that yes, it can cause memory leaks, because it doesn't call the destructor for every item in the array. That means that any additional memory owned by items in the array will leak.
The more standards-compliant answer is that it's undefined behaviour. The compiler, for example, has every right to use different memory pools for arrays than for non-array items. Doing the new one way but the delete the other could cause heap corruption.
Your compiler may make guarantees that the standard doesn't, but the first issue remains. For POD items that don't own additional memory (or resources like file handles) you might be OK.
Even if it's safe for your compiler and data items, don't do it anyway - it's also misleading to anyone trying to read your code.
No, you should use delete[] when dealing with arrays.
Just using delete won't call the destructors of the objects in the array. While it will possibly work as intended, it is undefined, since there are differences in exactly how the two forms work. So you shouldn't use it, even for built-in types.
The reason it seems not to leak memory is that delete is typically based on free, which already knows how much memory it needs to free. However, the C++ part is unlikely to be cleaned up correctly; I'd bet that only the destructor of the first object is called.
Using delete with [] tells the compiler to call the destructor on every item of the array.
Not using delete[] can cause memory leaks if used on an array of objects that use dynamic memory allocation internally, as in the following:
class AClass
{
public:
    AClass()
    {
        aString = new char[100];
    }
    ~AClass()
    {
        delete [] aString;
    }
private:
    const char *aString;
};

int main()
{
    AClass * p = new AClass[1000];
    delete p; // wrong
    return 0;
}