Memory leak on vector._Reallocate() in Visual C++ Memory Debugger - c++

I have two vectors of pointers:
A vector containing pointers to all objects
A temporary vector containing pointers to some of the objects in the above vector. Each frame, this vector is cleared and pointers are then added again.
No objects are created or deleted during these frames.
According to Visual Studio's memory profiler I get a memory leak in vector._Reallocate() every time I add a pointer to this vector.
Image: http://i.stack.imgur.com/f4Ky3.png
My question is:
Does vector.push_back() allocate something that is not deallocated later, or is Visual Studio's memory profiler just confused because I'm clearing a vector without destroying the elements? This only happens in Debug mode - not in Release.
I understand that the _Reallocate(..)-method allocates space on the heap, but since I clear the vector this should not cause a leak, I suppose?
Update: Using std::list instead of std::vector solves the "problem" - though I don't know if it really is a problem or not.
Update 2: It DOES happen in Release mode as well. I was a little bit confused by the output, but I get the same problem there.
Update 3: I have attached some code that can reproduce the problem. I could only reproduce it if I store the vectors in a multidimensional array, as in the original problem.
#include <cstdlib>
#include <vector>

class MyClass {};   // minimal stand-in; the original class is not shown in the question

class MyClassContainer
{
public:
    std::vector<MyClass*> vec;
};

int main(int argc, char **argv)
{
    std::vector<MyClass*> orig;
    MyClassContainer copy[101][101];

    for (int i = 0; i < 101; i++)
        orig.push_back(new MyClass());

    while (true)
    {
        int rnd = std::rand() * 100 / RAND_MAX;
        int rnd2 = std::rand() * 100 / RAND_MAX;

        for (int i = 0; i < 101; i++)
            for (int j = 0; j < 101; j++)
                copy[i][j].vec.clear(); // this should clear all??

        copy[rnd2][rnd].vec.push_back(orig[rnd]);
    }
    return 0;
}
Update 4: The memory debugger shows an increasing number of heap allocations as time goes on. However, I noticed now that if I wait for a long time the number of new heap allocations per second decreases towards 0. So apparently it's not really a memory leak.
It seems that when each vector in the array has been pushed to at least once nothing more gets allocated on the heap. I don't understand why though.
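For what it's worth, a small standalone check (not from the original code; the names are illustrative) is consistent with that observation: std::vector::clear() destroys the elements but keeps the already-allocated buffer, so only the first push_back into each of the 101x101 vectors has to hit the heap:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int*> v;
    v.push_back(nullptr);              // first push_back allocates a buffer
    std::cout << v.capacity() << '\n';

    v.clear();                         // destroys elements, keeps the buffer
    std::cout << v.capacity() << '\n'; // same capacity as before the clear

    v.push_back(nullptr);              // reuses the existing buffer: no new heap allocation
    return 0;
}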

Not to point out the obvious... but you are not freeing the MyClass objects created here. Since you have a vector of raw pointers, they must be freed (and freed only once).
orig.push_back(new MyClass());
Put this at the end of main and your leaks should go away. Perhaps you were misled by the location of the leak report? vector::push_back() should not leak.
// Free the objects
for (std::size_t i = 0; i < orig.size(); i++) {
    delete orig[i];
    orig[i] = nullptr; // not necessary, but good defensive practice
}
I also recommend using smart-pointers, for exactly these cases.
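For example, here is a minimal sketch of the same setup with std::unique_ptr (assuming C++11, and MyClass as defined in the question), so the objects are freed automatically when orig goes out of scope:
#include <memory>
#include <vector>

int main()
{
    std::vector<std::unique_ptr<MyClass>> orig;   // MyClass as in the question
    for (int i = 0; i < 101; i++)
        orig.push_back(std::unique_ptr<MyClass>(new MyClass()));

    // use orig[i].get() wherever a raw MyClass* is needed
    return 0; // no manual delete: each unique_ptr frees its object here
}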
PS: Not really my business, but what is this algorithm doing? That sure is a lot of vectors...

Related

C++ Memory leaks vectors(?)

In C++, it is important to deallocate memory when the program is exiting or the memory no longer serves a purpose. So if this is the allocation of a dynamic array
char** mapPtr = new char*[x];
for (int i = 0; i < x; i++) {
    mapPtr[i] = new char[y];
}
and this is deallocation of a dynamic array
for (int i = 0; i < x; i++) {
    delete[] mapPtr[i];
}
delete[] mapPtr;
However, when it comes to vectors, I noticed that my global vector with 0 elements inside seems to be causing some memory leaks.
I've read up on this link, where a user commented that
No. The std::vector will automatically de-allocate the memory it uses
Screenshot of my debugging.
I have also tried these steps to clear the vector, as well as making sure the vector inside the struct citySummInfo has shrunk to fit and been cleared, hoping not to get any memory leak, but to no avail. Is there anything I'm doing wrong?
As @JohnFilleau has mentioned,
_CrtDumpMemoryLeaks() should be called at the point in the program where you want to see what is remaining on the heap. Since your
vectors are statically allocated, they will not have been destroyed at
the time you call this function.
_CrtDumpMemoryLeaks() is meant to be placed right before the program terminates, and since my vectors are statically allocated, they have not been deallocated at the time _CrtDumpMemoryLeaks() is called, hence the "leaks".
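A minimal sketch of that idea (MSVC-only; _CrtDumpMemoryLeaks comes from <crtdbg.h>, and the vector here just stands in for the real containers): give the containers a scope that ends before the dump, so their buffers are already released when the heap is inspected:
#include <crtdbg.h>   // MSVC debug-CRT header that declares _CrtDumpMemoryLeaks
#include <vector>

int main()
{
    {
        std::vector<int> v;   // stand-in for the real (global) vectors
        v.push_back(42);
    }                         // v is destroyed here and its buffer is freed
    _CrtDumpMemoryLeaks();    // nothing from v should be reported at this point
    return 0;
}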

dynamic memory allocation c++

I have the following dynamically allocated array
int capacity = 0;
int *myarr = new int [capacity];
What happens if I do something like this:
for (int i = 0; i < 5; ++i)
myarr[++capacity] = 1;
Can this cause an error if the for loop is executed many more times than 5? It worked fine for small numbers, but I'm wondering if this is the wrong approach.
You are setting memory outside of the bounds of the array. This might overwrite memory being used in another part of the program; it is undefined behavior. To fix this you should declare your array like this:
int* myArray = new int[5];
which will make it so you don't allocate out of the bounds of the array.
However, it is better to use std::vector. It will prevent situations like this from occurring by managing memory itself. You can declare a vector of ints like this:
std::vector<int> Vector;
and you can add items to it like this:
Vector.push_back(myItem);
You can read more about std::vector here.
It will cause out of bounds access. This is just undefined behaviour. Your program may crash, your car may not start in the morning, you see the pattern here.
It happens to work fine in your case because the heap allocator usually hands out a bit more memory than you ask for (allocations are rounded up to some minimum block size, typically somewhere in the 8/16..256 byte range), so the out-of-bounds writes happen to land in memory the allocator still owns. But you should not rely on this kind of behaviour.
EDIT The car issue was (almost) a joke, but undefined behaviour really means undefined. The C++ standard deliberately does not require compilers to check for many such issues, because checking takes time. So it declares the behaviour Undefined Behaviour (UB). In this case, absolutely nothing is guaranteed. If you work e.g. on some small embedded system that controls the rudder of a plane, well, you can guess what may happen.
This approach is wrong in C++. Allocating memory with new int[capacity] gives you exactly capacity objects; only those can be accessed without causing a run-time error.
Here is a better way to manage the raw array in C++:
int capacity = 10;
int *myarr = new int[capacity];
for (int i = 0; i < 20; ++i) {
    if (i >= capacity) {
        // allocate more memory and copy the old data
        int *old_data = myarr;
        myarr = new int[capacity * 2];
        for (int j = 0; j < capacity; ++j) {
            myarr[j] = old_data[j];
        }
        capacity *= 2;
        delete[] old_data;
    }
    myarr[i] = 1;
}
And it's always better to call delete[] at the end. Another way is to use the ready-made std::array (added in C++11) or std::vector (available long before C++11, and the option I would prefer); std::vector automatically reallocates more memory as it grows.
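For comparison, a rough sketch of the same loop using std::vector, which does the reallocation and copying internally on push_back (a minimal illustration, not a drop-in replacement for the code above):
#include <vector>

int main()
{
    std::vector<int> myarr;   // starts empty, grows as needed
    for (int i = 0; i < 20; ++i)
        myarr.push_back(1);   // vector reallocates its own storage when full
    return 0;                 // memory is released automatically here
}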
More on std::vector, with examples, is linked here and here, and std::array here.

Is it possible to store 10 million numbers in array?

I want to know: how many numbers can you store in an array?
srand (time(NULL));
int array[10000000];
for(int i = 0; i < 10000000; i++){
array[i] = (rand() % 10000000) + 1;
}
Every time I want to store 10.000.000 numbers in the array my program crashes (Eclipse). I even tried Visual Studio and it crashed too.
So I want to know: how many numbers can I store in an array, or is something wrong with my code?
You can store as many numbers as you have memory for, but you cannot do it like that. The reason your program crashes is that you are using an "automatic" variable, which is allocated on the "stack." The stack is much more limited in size than the "heap" typically, so using automatic variables of such large size may result in a...wait for it...
STACK OVERFLOW!
Instead, try this:
int* array = new int[10000000];
Then after using it:
delete[] array;
Step two will be to learn about smart pointers; you can use something like boost::scoped_array for this case, but there are lots of options depending on which libraries you prefer (or if you have C++11).
If you have C++11 you can use "RAII" to avoid needing to remember when and where to call delete. Just do this to allocate the array:
std::unique_ptr<int[]> array(new int[10000000]);
Or just use a vector, which always allocates its contents dynamically ("on the heap", loosely speaking):
std::vector<int> array(10000000); // 10000000 elements, all zero
The language is more than capable of storing 10,000,000 values in an array. The problem here though is you've declared the 10,000,000 elements to exist on the stack. The size of the stack is implementation dependent, but most stacks simply don't have enough space for that many elements. The heap is a much better place for such an array:
int* array = new int[10000000];
for(int i = 0; i < 10000000; i++){
array[i] = (rand() % 10000000) + 1;
}
...
delete[] array;
There are a couple places you could put your array in memory. The most common difference we think about is the stack and the heap.
The stack is how the computer keeps track of which function you're in, how to return from the function, and the local variables. It is often of limited size. The exact limit depends on your platform, and perhaps how you compiled your program.
The heap is another area in memory, where the compiler usually stores things you've allocated with the new keyword. This is often much larger, and capable of storing a big array such as yours. The downside to keeping things on the heap is that you have to remember to delete them at the appropriate time.
In your example, you are declaring a 10,000,000 element array on the stack. If you wanted to declare that array on the heap, you would do it like this:
srand (time(NULL));
int* array = new int[10000000];
for(int i = 0; i < 10000000; i++){
array[i] = (rand() % 10000000) + 1;
}
//Sometime later...
delete[] array;
However, C++ gives us better tools for this. If you want a big array, use std::vector.
srand (time(NULL));
std::vector<int> array(10000000);
for(std::size_t i = 0; i < array.size(); i++){
array[i] = (rand() % 10000000) + 1;
}
Now, your std::vector is on the stack, but the memory it controls is on the heap. You don't have to remember to delete it later, and your program doesn't crash.

Would defining a large array in a for-loop compromise the performance?

Code looks like this:
for (int i = 0; i <= LARGE_NUMBER; ++i) {
    int x[LARGE_NUMBER] = {0};
    // do something with x
}
I think the array x will be created each time the for-loop runs through 0~LARGE_NUMBER, so will this compromise the performance? Will -O2 help?
Your array will be zeroed in each iteration, so it definitely will.
This code is linear time, every element of the array will be zero-initialized:
int x[LARGE_NUMBER] = {0};
And this is constant time, just increment of the stack pointer:
int x[LARGE_NUMBER];
Performance will depend on whether LARGE_NUMBER is really large or not. If LARGE_NUMBER is the size of one or two cache lines, then you won't notice a difference between the first and second version. But if LARGE_NUMBER is really large - you will. And if your array is so large that the performance difference is noticeable, then you definitely need to move it to the heap. Stack space is expensive, and allocating megabytes of data on it is wrong.
If your array is really large, you can allocate it on the heap once and call memset between iterations.
How large is LARGE_NUMBER expected to be?
Consider that an on-stack allocation of a large object can end up larger than the stack space the system gives to a thread, and you will probably face an "out of memory" problem even before performance becomes an issue. (The stack needs to be fast, and hence is no more than a few megabytes: ideally it should fit in the processor cache.)
If that's the case, a std::vector (which lives on the stack, but manages its allocation on the heap) plays better.
But defining it inside the loop makes it be created / destroyed on every iteration. Now: do those creations / destructions make sense (I mean: do they take some action that makes sense to be repeated every time), or is your problem just to initialize to zero on every iteration? If that's the case, I would probably do something like:
{ // just begin a scope block
    std::vector<int> v(LARGE_NUMBER); // allocates LARGE_NUMBER int-s on the heap
    for (int i = 0; i <= LARGE_NUMBER; ++i)
    {
        std::fill(v.begin(), v.end(), 0); // reset at every iteration
        // other stuff with v
    }
} // here the vector and associated memory is finally eliminated
Note that the performance of std::fill is linear, just like the initialization of an array, but this way you avoid allocating/deallocating at every cycle.
In any case, your problem has O(N²) complexity by definition: the loop runs LARGE_NUMBER times and each iteration touches LARGE_NUMBER elements.
Depends on your application... I assume that you have a fixed array size? This will use memset: http://www.cplusplus.com/reference/cstring/memset/
#include <stdio.h>
#include <string.h>

int* x = new int[LARGE_NUMBER];
for (int i = 0; i <= LARGE_NUMBER; ++i) {
    memset(x, 0, LARGE_NUMBER * sizeof(int)); // memset counts bytes, not ints
    // do something with x
}
// some more stuff that needs x
delete[] x;
BTW: I have no C/C++ at hand to test the code.

Deleting an Array of Pointers - Am I doing it right?

I feel a little stupid for asking a question about the deletion of pointers, but I need to make sure I'm deleting in the correct way as I'm currently going through the debugging process of my program.
Basically I have a few arrays of pointers which are defined in my header file as follows:
AsteroidView *_asteroidView[16];
In a for loop I then initialise them:
for(int i = 0; i < 16; i++)
{
_asteroidView[i] = new AsteroidView();
}
Ok, so far so good, everything works fine.
When I eventually need to delete these in the destructor I use this code:
for(int i = 0; i < 16; i++)
{
delete _asteroidView[i];
}
Is this all I need to do? I feel like it is, but I'm worried about getting memory leaks.
Out of interest...
Is there much of a difference between an Array of Pointers to Objects compared with an Array of Objects?
This is correct. However, you may want to consider using Boost.PointerContainer, and save yourself the hassle of manual resource management:
boost::ptr_vector<AsteroidView> _asteroidView;
for(int i = 0; i < 16; i++)
{
_asteroidView.push_back(new AsteroidView());
}
You do not have to manage the deletion, the container does that for you. This technique is called RAII, and you should learn about it if you want to have fun using C++ :)
About your edit: There are several differences, but I guess the most important are these (see the small sketch after the list):
An array of pointers can contain objects of different types, if these are subclasses of the array type.
An array of objects does not need any deletion, all objects are destroyed when the array is destroyed.
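A small sketch of those two points (Base and Derived are illustrative names, not from the question):
struct Base    { virtual ~Base() {} };
struct Derived : Base {};

void example()
{
    // Array of pointers: elements may point to different subclasses,
    // but each object has to be deleted explicitly.
    Base* pointers[2] = { new Base(), new Derived() };
    delete pointers[0];
    delete pointers[1];

    // Array of objects: only Base instances fit, and they are destroyed
    // automatically when the array goes out of scope.
    Base objects[2];
}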
It's absolutely fine.
The rule of thumb is: match each call to new with an appropriate call to delete (and each call to new[] with a call to delete[])
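A tiny illustration of that pairing (names are illustrative):
void example()
{
    int* single = new int(5);
    delete single;        // new   pairs with delete

    int* block = new int[16];
    delete[] block;       // new[] pairs with delete[]
}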
Is this all I need to do? I feel like it is, but I'm worried about getting memory leaks.
Yes. The program is deallocating resources correctly. No memory leaks :)
If you are comfortable with using std::vector (in fact, it is easy), it handles the deallocation when it goes out of scope. However, the element type should then be:
std::vector<AsteroidView>
Given a class:
// this class has hidden data and no methods other than the constructor/destructor
// obviously it's not ready for prime time
class Foo {
    int* bar_[16];
public:
    Foo()
    {
        for (unsigned int i = 0; i < 16; ++i)
            bar_[i] = new int;
    }
    ~Foo()
    {
        for (unsigned int i = 0; i < 16; ++i)
            delete bar_[i];
    }
};
You won't leak memory if the constructor completes correctly. However, if new fails in the constructor (new throws std::bad_alloc if you're out of memory), then the destructor is not run, and you will have a memory leak. If that bothers you, you will have to make the constructor exception-safe (say, add a try ... catch block around the constructor body, or use RAII). Personally, I would just use the Boost Pointer Container if the elements in the array must be pointers, and a std::vector if not.
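For instance, a hedged sketch of that last suggestion: if the elements do not have to be pointers, a std::vector member makes the class exception-safe with no user-written destructor at all:
#include <vector>

class Foo {
    std::vector<int> bar_;
public:
    Foo() : bar_(16) {}   // 16 ints, zero-initialized, owned by the vector
    // no destructor needed: the vector frees its memory automatically,
    // even if the rest of Foo's constructor were to throw
};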