In the C++ FAQ, item [16.16] gives the following example:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
    typedef Fred* FredPtr;

    FredPtr* matrix = new FredPtr[nrows];

    // Set each element to NULL in case there is an exception later.
    // (See comments at the top of the try block for rationale.)
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i] = NULL;

    try {
        for (unsigned i = 0; i < nrows; ++i)
            matrix[i] = new Fred[ ncols[i] ];

        for (unsigned i = 0; i < nrows; ++i) {
            for (unsigned j = 0; j < ncols[i]; ++j) {
                someFunction( matrix[i][j] );
            }
        }

        if (today == "Tuesday" && moon.isFull()) {
            for (unsigned i = nrows; i > 0; --i)
                delete[] matrix[i-1];
            delete[] matrix;
            return;
        }

        ...code that fiddles with the matrix...
    }
    catch (...) {
        for (unsigned i = nrows; i > 0; --i)
            delete[] matrix[i-1];
        delete[] matrix;
        throw; // Re-throw the current exception
    }

    for (unsigned i = nrows; i > 0; --i)
        delete[] matrix[i-1];
    delete[] matrix;
}
Why do we have to use delete this way? I mean, why first
delete[] matrix[i-1];
and only then
delete[] matrix;
Moreover, what is the point of still having to put
for (unsigned i = nrows; i > 0; --i)
    delete[] matrix[i-1];
delete[] matrix;
at the end of the function, after the whole try...catch block?
What you're missing is the horribly evil indentation.
delete[] matrix[i-1]; happens once per loop iteration and deletes the nested arrays (the rows).
delete[] matrix; happens just one time, after the loop completes, and deletes the outer array of pointers.
Never write code like this in C++; use vector<vector<T>> instead.
The reason the deletes also appear in the catch block is that if you catch an exception, you're still responsible for cleaning up the memory you allocated.
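For illustration, a minimal sketch of the vector-based version, assuming Fred is default-constructible; every row is released automatically, even if an allocation or constructor throws:

#include <vector>

void manipulateArray(unsigned nrows, unsigned ncols[])
{
    std::vector<std::vector<Fred>> matrix(nrows);
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i].resize(ncols[i]);   // may throw; rows built so far are freed
    for (unsigned i = 0; i < nrows; ++i)
        for (unsigned j = 0; j < ncols[i]; ++j)
            someFunction(matrix[i][j]);
}   // matrix and all of its rows are destroyed here, exception or not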
The try/catch block is necessary to ensure proper clean-up even if an exception is thrown anywhere in the code before the normal clean-up happens. This includes an exception in one of the new expressions. The delete[] is safe because all the relevant pointers were initially set to zero, so that the deletion is valid even if no allocation ever occurred.
(Note that if any exception does occur, it will still be propagated outside the function. The local try/catch block only ensures that the function itself doesn't leak any memory.)
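A small illustration of that last point:

Fred* row = NULL;
delete[] row;   // well-defined: delete[] on a null pointer does nothing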
There are two levels of arrays here. First, the outer array matrix, which is an array of pointers; it gets allocated first and deleted last. Second, each element matrix[i] is itself a pointer to an array of Fred elements; these arrays get allocated in the first for loop and therefore have to be deleted in a matching loop at the end.
When you delete each of the rows in a loop, you're freeing up the memory allocated to the corresponding row. Then you need to free up the memory allocated for the array of pointers to the rows.
Think of it this way:
FredPtr* matrix = new FredPtr[nrows];
allocates an array of pointers to rows - and it will need to be freed up at the end.
Then for each of the rows,
matrix[i] = new Fred[ ncols[i] ];
allocates memory for an array of Fred elements (the columns of row i) - and each of these arrays will need to be freed up separately.
Yes, that is not the highest-quality example code, but it works correctly. The copy-pasted code in the catch block and after the catch block is needed because the memory must be freed in both cases; in the exception case, the exception is then forwarded to the caller of the function. If you don't want to forward the exception, you can remove the throw; from the catch block (but at least a console output would be nice ;) )
The catch block in the try...catch is there to delete the matrix if an exception was thrown, and then re-throw the exception.
If no exception is thrown, the catch block never gets hit, and the matrix has to be deleted on the way out through the normal exit of the routine.
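One way to avoid writing that identical cleanup three times is to factor it into a small helper called from the Tuesday branch, the catch block, and the normal exit. A minimal sketch (destroyMatrix is a hypothetical name, not part of the FAQ code):

void destroyMatrix(Fred** matrix, unsigned nrows)
{
    // delete the row arrays in reverse order, then the array of row pointers
    for (unsigned i = nrows; i > 0; --i)
        delete[] matrix[i-1];
    delete[] matrix;
}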
Related
In my code, I have a 2D array of pointers to my data:
Data***
The reason I'm not using array notation is because the size is not determined at compile time.
So, somewhere in my code I allocate all the necessary space:
arr = new Data**[xVals];
for (int i = 0; i < xVals; i++)
{
    arr[i] = new Data*[yVals];
    for (int j = 0; j < yVals; j++)
    {
        arr[i][j] = nullptr;
    }
}
And then fill the array with my proper pointers some time later on.
Each of these pointers also gets stored in a std::vector:
for (...) {
    for (...) {
        // Conditional statement; not the whole array gets filled, some parts stay nullptr
        ...
        arr[xCoord][yCoord] = new Data(...);
        myVector.push_back(arr[xCoord][yCoord]);
    }
}
... // Do some other stuff that takes advantage of the spatial properties of the 2D array
Once I'm done using the 2D array, I want to delete it, but NOT the Data objects it points to, since those pointers are now stored in my vector.
I've been trying the following:
for (int i = 0; i < xVals; i++)
{
    // Delete all "column" arrays
    delete[] arr[i];
}
// Delete the outer array of row pointers
delete[] arr;
However, I get a corrupted heap error ("CRT detected that the application wrote to memory after end of heap buffer"), so I'm not sure what exactly I did wrong there. How do I delete a 2D array without deleting the data it held?
I am trying to recreate the vector class, and I believe there is a memory leak in my code, but I don't know how to solve it. The CRT library in Visual Studio tells me there is a supposed memory leak that doubles in size each time reserve is called.
I am not quite sure why that is, or if there even is a memory leak. The leak detection points at this line in the reserve function: int* temp = new int[n];
This is what I understand to be happening in the reserve function:
Once the contents of arr are copied into temp, it's fine to delete arr. Assigning arr = temp should work because all I'm doing is making arr point to the same place as temp. Because arr was previously deleted, there is only one array in the heap, and arr and temp both point to it, so there should be no memory leak. temp itself shouldn't matter because it goes out of scope at the end of the function. On subsequent calls to reserve, everything repeats, and there should still be only one array in the heap, which arr points to.
I do believe that my thinking is probably erroneous in some way.
#include "Vector.h"
namespace Vector {
vector::vector() {
sz = 0;
space = 0;
arr = nullptr;
}
vector::vector(int n) {
sz = n;
space = n;
arr = new int[n];
for(int i = 0; i < n; i++) {
arr[i] = 0;
}
}
void vector::push_back(int x) {
if(sz == 0) {
reserve(1);
} else if (sz == space) {
reserve(2*space);
}
arr[sz] = x;
sz++;
}
void vector::reserve(int n) {
if (n == 1) {
arr = new int[1]; //arr was a nullptr beforehand
}
int* temp = new int[n];
for(int i = 0; i < n; i++) {
temp[i] = arr[i];
}
delete[] arr;
arr = temp;
space = n;
}
Your code assumes in vector::reserve(int n) that arr is null whenever n == 1.
Instead, maybe split up how reserve behaves based on whether or not arr is null:
void vector::reserve(int n) {
    if (!arr) { // handle the case when arr is null
        space += n;
        arr = new int[space];
        // no need to copy anything!
    } else { // handle the case when arr is not null
        int* tmp(new int[space + n]);
        for (int i = 0; i < space; i++) {
            tmp[i] = arr[i];
        }
        delete[] arr;
        arr = tmp;
        space += n;
    }
}
Also, the above code reserves space + n instead of allowing reserve to shrink the array, since you would lose data if you reserved less than a previous reserve. It's usually better practice not to rely on assumptions about a pointer's state when working with pointers, because as your code gets more complex, the assumptions end up forgotten or obscured.
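For comparison, the standard std::vector::reserve never shrinks either; a request below the current capacity is simply ignored:

#include <cassert>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(10);               // capacity() is now at least 10
    v.reserve(5);                // no effect: reserve never reduces capacity
    assert(v.capacity() >= 10);
}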
I had the same issue. I created extra pointers that point to the same address on the heap and tried to deallocate the memory through them, with mixed results.
Example:
int *a = new int[5];
int *b = a;
int *c = a;
What I eventually found is that the array can only be deallocated once: any one of a, b, or c can be passed to delete[], since they all hold the address that new[] returned, but after the first delete[] the memory is gone, and deleting it again through one of the other pointers is invalid. I don't have a definitive answer; this is just what I pieced together through trial and error.
Deleting the same array twice violates the rules of the delete[] operator, but C++ still lets you write it; that is undefined behavior, and as a result you get unexpected output.
Not too much wrong in there.
Bugs
if (n == 1) {
    arr = new int[1]; // arr was a nullptr beforehand
}
The comment cannot be guaranteed. Nothing prevents multiple calls of reserve, including a call of reserve(1) at a time when arr already points at memory, and that will leak whatever memory was pointed at by arr. Instead consider
if (arr == nullptr) {
    arr = new int[n]; // arr was a nullptr beforehand
}
now the comment is guaranteed to be true.
The copy loop overshoots the end of arr every time the size of the array is increased.
for (int i = 0; i < n; i++) {
    temp[i] = arr[i];
}
arr is only good up to arr[sz-1]. If n is greater than space, and it almost always will be, arr[i] wanders into the great wilds of Undefined Behaviour. Not a good place to go.
for (int i = 0; i < n && i < sz; i++) {
    temp[i] = arr[i];
}
Checks both n and sz to prevent overrun on either end and copying of data that has not been set yet. If there is nothing to be copied, all done.
Targets of opportunity
The class needs a destructor to release any memory that it owns (What is ownership of resources or pointers?) when it is destroyed. Without it, you have a leak.
vector::~vector() {
    delete[] arr;
}
And if it has a destructor, the Rule of Three requires it to have special support functions to handle (at least) copying of the class or expressly forbid copying.
// exchanges one vector for the other. Generally useful, but also makes moves
// and assignments easy. Needs <utility> for std::swap.
void vector::swap(vector& a, vector& b)
{
    std::swap(a.sz, b.sz);
    std::swap(a.space, b.space);
    std::swap(a.arr, b.arr);
}

// Copy constructor
vector::vector(const vector& src):
    sz(src.sz),
    space(src.space),
    arr(new int[space])
{
    for (int i = 0; i < sz; i++) {
        arr[i] = src.arr[i];
    }
}

// move constructor
vector::vector(vector&& src): vector()
{
    swap(*this, src);
}

// assignment operator
vector& vector::operator=(vector src)
{
    swap(*this, src);
    return *this;
}
The Copy Constructor uses a Member Initializer List. That's the funny : bit.
The assignment operator makes use of the Copy and Swap Idiom. This isn't the fastest way to implement the assignment operator, but it is probably the easiest. Start with easy and only go to hard if easiest doesn't meet the requirements.
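A quick usage sketch of the pieces above, using the vector class from the question (in namespace Vector):

Vector::vector a(3);   // three zero-initialized elements
Vector::vector b(a);   // copy constructor: deep copy of a
Vector::vector c;
c = a;                 // copy-and-swap: src is copy-constructed from a,
                       // swapped into c, and c's old storage goes away with src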
I have a function using a 2D array, and I want to copy data from one array to another using a tmp array, but valgrind keeps saying I have a memory leak and I can't figure out why. The following is part of the function.
// valgrind reported an error at "operator new[] (unsigned long)" for the following line
T** temp_pointer = new T*[rows];
for (int i = 0; i < rows; i++) {
    temp_pointer[i] = new T[columns];
}
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < (columns-3); j++) {
        temp_pointer[i][j] = Arry[i][j];
    }
    temp_pointer[i][columns-3] = myvalue1;
    temp_pointer[i][columns-2] = myvalue2;
    temp_pointer[i][columns-1] = myvalue3;
}
for (int i = 0; i < rows; i++)
    delete[] Arry[i];
delete[] Arry;
Arry = temp_pointer;
I also have a destructor that recursively deletes the Arry pointers. Arry is a private member of a template class.
I just could not figure out why there is a memory leak. Am I supposed to recursively delete temp_pointer? (I tried, and it didn't work.)
I just don't know where it leaked.
It is not entirely clear why valgrind claims that the memory is being leaked, but you clearly have an out-of-bounds access in your loop:
temp_pointer[i][columns] = myvalue;
The index of the last element of an array is not its size; it is (size - 1). Writing to a location outside the array's bounds can clobber the memory allocator's housekeeping information and cause valgrind to complain.
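In other words, for an array of size columns the valid indices run from 0 to columns - 1 (a small illustration with hypothetical names):

T* row = new T[columns];
row[columns - 1] = value;   // last valid element
row[columns]     = value;   // one past the end: undefined behaviour
delete[] row;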
I have some pointers that I allocate in the constructor of a class and then attempt to delete in its destructor:
TileMap::TileMap(int x, int y) {
    mapSize.x = x;
    mapSize.y = y;
    p_p_map = new Tile*[x];
    for (int i = 0; i < x; i++) {
        p_p_map[i] = new Tile[y];
    }
    randomize();
}

TileMap::~TileMap() {
    for (int i = 0; i < mapSize.x; i++) {
        delete p_p_map[i];
    }
    delete p_p_map;
}

void TileMap::randomize() {
    for (int i = 0; i < mapSize.x; i++) {
        for (int j = 0; j < mapSize.y; j++) {
            p_p_map[i][j] = *new Tile(Tile::TileSize * i, Tile::TileSize * j, TileType::randomType());
        }
    }
}
At the end of the program the destructor is called to free the memory of the pointers I allocated, but when it reaches "delete p_p_map[i];" in the destructor, XCode informs me that the pointer was not allocated. I am new to C++, but I feel that I pretty explicitly allocated memory to the pointers in the randomize() function.
What error am I making?
You have to match delete with new and delete[] with new[]. Mixing one up with the other leads to issues. So if you do:
p_p_map = new Tile*[x];
you have to delete it like:
delete[] p_p_map;
and same with
delete[] p_p_map[i];
If you create something like:
pSomething = new Type;
then you delete it like:
delete pSomething;
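Applied to the destructor from the question, the matching cleanup would be:

TileMap::~TileMap() {
    for (int i = 0; i < mapSize.x; i++) {
        delete[] p_p_map[i];   // each row was allocated with new Tile[y]
    }
    delete[] p_p_map;          // the outer array was allocated with new Tile*[x]
}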
What error am I making?
A few:
First, as #uesp pointed out, you mismatch new and delete calls
Second, you are using the "memory leak operator":
p_p_map[i][j] = *new Tile(Tile::TileSize * i, Tile::TileSize * j, TileType::randomType());
The construct new Tile(...) allocates memory. Then, this memory (not stored anywhere) is dereferenced, and the result is assigned to p_p_map[i][j].
Because the pointer is not stored anywhere, it is leaked.
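Since new Tile[y] already default-constructed every element, the assignment can be written without any allocation at all (assuming Tile is copy-assignable):

p_p_map[i][j] = Tile(Tile::TileSize * i, Tile::TileSize * j, TileType::randomType());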
Third, you are not respecting RAII. While this is not technically an error in itself, the way you write the code is unsafe, and in low memory conditions, you will get UB.
For example, here's what happens if you construct a Tile instance with large values for x and y:
TileMap::TileMap(int x, int y) { // e.g. (x = 1024 * 1024, y = 1024 * 1024 * 1024)
    mapSize.x = x;
    mapSize.y = y;
    p_p_map = new Tile*[x]; // allocate a block of 1048576 pointers
    for (int i = 0; i < x; i++) {
        p_p_map[i] = new Tile[y]; // runs out of memory (for example) halfway through the loop
    }
    randomize();
}
Depending on where your allocations fail, your constructor will not finish executing, meaning your TileMap instance is "half-constructed" (i.e. in an invalid state) and the destructor will not be called.
In this case, everything the class allocated is leaked, and (especially if you allocated a large amount) your application is left in low-memory conditions.
To fix this, make sure each pointer is managed by a different instance of a class (part of RAII). This ensures that if an allocation fails, the already-allocated resources are released before exiting the scope, as part of stack unwinding (as #CaptainObvlious said, use std::vector for the array and std::unique_ptr for each element).
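A hedged sketch of that suggestion; the surrounding class details are assumptions, since the rest of TileMap isn't shown:

#include <memory>
#include <vector>

class TileMap {
    std::vector<std::unique_ptr<Tile[]>> rows;   // each row owns its Tile array
public:
    TileMap(int x, int y) {
        rows.reserve(x);
        for (int i = 0; i < x; ++i)
            rows.push_back(std::make_unique<Tile[]>(y));
        // if any allocation above throws, the rows already stored in the
        // vector are released automatically during stack unwinding
    }
    // no hand-written destructor, copy, or move members needed
};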
When I allocate multidimensional arrays using new, I am doing it this way:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
    typedef Fred* FredPtr;
    FredPtr* matrix = new FredPtr[nrows];
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i] = new Fred[ ncols[i] ];
}
where ncols[] contains the length of each element in matrix, and nrows the number of elements in matrix.
If I want to populate matrix, I then have
for (unsigned i = 0; i < nrows; ++i) {
    for (unsigned j = 0; j < ncols[i]; ++j) {
        someFunction( matrix[i][j] );
    }
}
But I am reading the C++ FAQ, which tells me to be very careful: I should initialize each row to NULL first, and then wrap the allocation of the rows in a try/catch. I really do not understand why all this is needed. I have always (though I am a beginner) initialized in the C style shown above.
The FAQ wants me to do this:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
    typedef Fred* FredPtr;
    FredPtr* matrix = new FredPtr[nrows];
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i] = NULL;
    try {
        for (unsigned i = 0; i < nrows; ++i)
            matrix[i] = new Fred[ ncols[i] ];
        for (unsigned i = 0; i < nrows; ++i) {
            for (unsigned j = 0; j < ncols[i]; ++j) {
                someFunction( matrix[i][j] );
            }
        }
    }
    catch (...) {
        for (unsigned i = nrows; i > 0; --i)
            delete[] matrix[i-1];
        delete[] matrix;
        throw; // Re-throw the current exception
    }
}
1/ Is it far-fetched, or is it proper, to always initialize this cautiously?
2/ Are they proceeding this way because they are dealing with non-built-in types? Would the code be the same (with the same level of caution) with double* matrix = new double[nrows];?
Thanks
EDIT
Part of the answer is in the next item of the FAQ.
The reason for being this careful is that you'll have memory leaks if any of those allocations fail, or if the Fred constructor throws. If you were to catch the exception higher up the call stack, you would have no handles to the memory you allocated, which is a leak.
1) It's correct, but generally if you're going to this much trouble to protect against memory leaks, you'd prefer to use std::vector and std::shared_ptr (and so on) to manage memory for you.
2) It's the same for built-in types, though at least then the only exception that will be thrown is std::bad_alloc if the allocation fails.
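For instance, with a built-in element type the failure mode is just an exception from the allocation itself, which can be observed directly (a sketch with a deliberately absurd size):

#include <iostream>
#include <new>

int main() {
    try {
        double* matrix = new double[1ull << 62];   // almost certain to fail
        delete[] matrix;                           // unreachable in practice
    } catch (const std::bad_alloc& e) {            // also catches bad_array_new_length
        std::cout << "allocation failed: " << e.what() << '\n';
    }
}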
I would think that it depends on the target platform and the requirements of your system. If safety is a high priority and/or you can run out of memory, then no, this is not far-fetched. However, if you are not concerned too much with safety and you know that the users of your system will have ample free memory, then I would not do this either.
It does not depend on whether built-in types are used or not. The FAQ solution nulls the pointers to the rows so that, in the event of an exception, only those rows which have already been created are deleted (and not some random memory location).
That said, I can only second R. Martinho Fernandes' comment that you should use STL containers for this. Managing your own memory is tedious and dangerous.