I am dynamically allocating memory for an array in a function. My question is: once the function finishes running is the memory freed?
code:
void f(){
cv::Mat* arr = new cv::Mat[1];
...
}
No, memory allocated using new is not automatically freed when the pointer goes out of scope.
However, you can (and should) use C++11's unique_ptr, which handles freeing the memory when it goes out of scope:
void f(){
std::unique_ptr<cv::Mat[]> arr(new cv::Mat[1]);
...
}
C++11 also provides shared_ptr for pointers you might want to copy. Modern C++ should strive to use these "smart pointers", as they provide safer memory management with almost no performance hit.
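For example, a minimal sketch assuming C++14 is available for std::make_unique (the OpenCV include path is a guess based on the question's cv::Mat):
#include <memory>
#include <opencv2/core.hpp>   // assumed OpenCV header providing cv::Mat; the path may differ by version

void f() {
    // C++14: make_unique avoids writing a bare new at all
    auto arr = std::make_unique<cv::Mat[]>(1);
    arr[0] = cv::Mat();       // use it like an ordinary array
    // ...
}   // the cv::Mat array is deleted automatically here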
No, it is not. You must free it by calling
delete[] arr;
But you should ask yourself whether it is necessary to allocate dynamically. This would require no explicit memory management:
void f(){
cv::Mat arr[1];
...
}
If you need a dynamically sized array, you could use an std::vector. The vector will internally allocate dynamically, but will take care of de-allocating its resources:
void f(){
std::vector<cv::Mat> arr(n); // contains n cv::Mat objects
...
}
No. Every call to new needs to be matched up with a call to delete somewhere.
In your case, arr itself is a variable with automatic storage duration. This means that arr itself will be destroyed when it goes out of scope. However, the thing that arr points to is not, because it is an object that does not have automatic storage duration.
The fact that arr itself has automatic storage duration can be used to your advantage, by wrapping the raw pointer in a class that destroys the stored pointer when the automatic object is destroyed. Such an object uses an idiom known as RAII to implement a so-called "smart pointer". Since this is a very common requirement in well-designed applications, the C++ Standard Library provides a number of smart pointer classes which you can use. In C++03, you can use
std::auto_ptr
In C++11, auto_ptr has been deprecated and replaced by several better smart pointers. Among them:
std::unique_ptr
std::shared_ptr
In general, it is a good idea to use smart pointers instead of raw ("dumb") pointers as you do here.
If what you ultimately need are dynamically-sized arrays, as supported by C99, you should know that C++ does not have direct support for them. Some compilers (notably GCC) do provide support for dynamically sized arrays, but these are compiler-specific language extensions. In order to have something that approximates a dynamically sized array while using only portable, Standards-compliant code, why not use a std::vector?
Edit: Assigning to an array:
In your comments you describe the way in which you assign to this array, which I have taken to mean something like this:
int* arr = new int[5];
arr[1] = 1;
arr[4] = 2;
arr[0] = 3;
If this is the case, you can accomplish the same using vector by calling vector::operator[]. Doing this uses very similar syntax to what you've used above. The one real "gotcha" is that since vectors are dynamically sized, you need to make sure the vector has at least N elements before trying to assign to the element at position N-1. This can be accomplished in a number of ways.
You can create the vector with N items from the get-go, in which case each will be value-initialized:
vector<int> arr(5); // creates a vector with 5 elements, all initialized to zero
arr[1] = 1;
arr[4] = 2;
arr[0] = 3;
You can resize the vector after the fact:
vector<int> arr; // creates an empty vector
arr.resize(5); // ensures the vector has exactly 5 elements
arr[1] = 1;
arr[4] = 2;
arr[0] = 3;
Or you can use a variety of algorithms to fill the vector with elements. One such example is fill_n:
vector<int> arr; // creates empty vector
fill_n(back_inserter(arr), 5, 0); // fills the vector with 5 elements, each one has a value of zero
arr[1] = 1;
arr[4] = 2;
arr[0] = 3;
No, of course not.
For every new you need precisely one delete.
For every new[] you need precisely one delete[].
Since you don't have the matching delete[], your program is broken.
(For that reason, the adult way of using C++ is not to use new or pointers at all. Then you don't have those problems.)
No. The array is allocated on the heap; you'll have to delete it before the pointer goes out of scope:
void f() {
cv::Mat * arr = new cv::Mat[1];
// ...
delete [] arr;
}
Related
This question is similar to Problem with delete[], how to partially delete the memory?
I understand that deleting an array after incrementing its pointer is not possible, as it loses track of how many bytes to clean up. But I am not able to understand why deleting/deallocating the elements of a dynamic array one by one doesn't work either.
int main()
{
int n = 5;
int *p = new int[n];
for(int i=0;i<n;++i){
delete &p[i];
}
}
I believe this should work, but with clang 12.0 it fails with an invalid pointer error. Can anyone explain why?
An array is a contiguous object in memory of a specific size. It is one object where you can place your data in and therefore you can only free/delete it as one object.
You are thinking that an array is a list of multiple objects, but that's not true. That would be true for something like a linked list, where you allocate individual objects and link them together.
You allocated one object of the type int[n] (one extent of memory for an array) using the operator new
int *p = new int[n];
The elements of the array were not allocated dynamically as separate objects.
So to delete it you just need to write
delete []p;
If for example you allocated an array of pointers like
int **p = new int *[n];
and then for each pointer of the array you allocated an object of the type int like
for ( int i = 0;i < n;++i )
{
p[i] = new int( i );
}
then to delete all the allocated objects you need to write
for ( int i = 0; i < n; ++i )
{
delete p[i];
}
delete []p;
That is, the number of calls to operator delete or delete[] corresponds one-to-one to the number of calls to operator new or new[].
One new always goes with one delete. Simple as that.
In detail, when we request an array using new, what we actually get is a pointer that controls a contiguous, fixed block of memory. Whatever we do with that array, we do it through that pointer, and this pointer is strictly associated with the array itself.
Furthermore, let's assume you were able to delete an element in the middle of that array. After the deletion, the array would fall apart: its elements would no longer be contiguous, and by then it would not really be an array anymore!
Because of that, we cannot 'chop off' an array into separate pieces. We must always treat an array as one thing, not as distinct elements scattered around the memory.
Greatly simplifying: in most systems, memory is allocated in logical blocks which are described by the starting pointer of the allocated block.
So if you allocate an array:
int* array = new int[100];
The OS stores the information about that allocation as (simplifying) a pair (block_begin, size) -> (value of the array pointer, 100).
Thus, when you deallocate the memory, you don't need to specify how much memory you allocated, i.e.:
// you use
delete[] array; // won't go into detail why you do delete[] instead of delete - mostly it is due to C++ way of handling destruction of objects
// instead of
delete[100] array;
In fact in bare C you would do this with:
int* array = malloc(100 * sizeof(int));
/* ... */
free(array);
So in most OSes it is not possible, due to the way allocation is implemented.
In theory, an allocator could satisfy a large allocation with many smaller blocks that could be released this way, but even then it would deallocate those smaller blocks at a time, not the array's elements one by one.
new, new[], and even C's malloc all do exactly the same thing with respect to memory: they request a fixed block of memory from the operating system.
You cannot split up this block of memory and return it partially to the operating system, that's simply not supported, thus you cannot delete a single element from the array either. Only all or none…
If you need to remove an element from an array, all you can do is copy the subsequent elements one position towards the front, overwriting the element to delete, and additionally remember how many elements actually are valid; the elements at the end of the array stay alive!
If these elements need to be destructed immediately, you might call the destructor explicitly, and then make sure it isn't called again on an already destructed element when delete[]-ing the array (otherwise undefined behaviour!). Taken to its conclusion, that means not using new[] and delete[] at all, but instead malloc, placement new for each element, std::launder for any pointer to an element created that way, and finally explicit destructor calls (and free) when you are done.
Sounds like quite a hassle, doesn't it? Well, there's std::vector, which does all this for you! You should use it instead…
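For illustration, a minimal sketch of what that looks like with std::vector (the values here are arbitrary):
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 5; ++i) v.push_back(i);   // v = {0, 1, 2, 3, 4}

    v.erase(v.begin() + 2);   // "removes" element 2: shifts the tail forward and
                              // destroys the now surplus last element
    v.pop_back();             // drops the last element; the capacity stays the same

    // v now holds {0, 1, 3}; all memory is released when v goes out of scope
    return 0;
}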
Side note: You could get similar behaviour if you use an array of pointers; you then can, and need to, maintain each object individually (i.e. control its lifetime). Further disadvantages are an additional level of pointer indirection whenever you access the array members, and the array members being scattered around memory (though this can turn into an advantage if you need to move objects around your array and copying/moving objects is expensive). Still, you would prefer a std::vector, of pointers this time; insertions, deletions and managing the pointer array itself, among other things, get much safer and much less complicated.
#include <iostream>
class A {
public:
A(int d) : y(d) {}
int y;
};
int main(void) {
A d[3] = {A(0), A(1), A(2)};
std::cout << d[1].y << std::endl;
};
I'm working on a project for university. I'm instructed not to use an object array and instantiate each element with a temporary object of my class - so something like the above code is a no-no. Instead, it's recommended we use an array of type A* and then use new on each element so that we create an object and make each element point to it.
So we'd have something like this:
A* d[3];
for (int i=0;i<3;i++)
d[i]=new A(i);
However, I don't quite understand the difference in practice when compared to the first version. As far as I understand, in the first version we create three temporary objects (which are rvalues) using the constructor and then assign them to the array. After the assignment ends, each temporary is destroyed and its values are copied into the array. In the second version, we have the advantage of being able to delete what d[] points to, but disregarding that, what is the drawback of using the first method?
EDIT:
class B{
public:
A a1[3];
};
Assuming my initial definition is followed by this, and given that A isn't default constructible, is something like this impossible? In that case I can't have an array of A objects, and I instead have to resort to an array of A* pointers and then use new, as my only option.
Array of pointers -> memory for an object is allocated only when you create that object and assign it to a pointer. An array of 1000 pointers with 1 element actually created = 1x the object memory.
Array of objects -> memory for every object is allocated when you create the array. An array of 1000 objects with only 1 element in use = 1000x the object memory.
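A small illustration of that trade-off (Widget is a made-up stand-in type for this sketch):
struct Widget { double data[4]; };        // hypothetical element type

int main() {
    Widget* ptrs[1000] = {};              // 1000 null pointers: no Widget storage yet
    ptrs[0] = new Widget();               // pay for exactly one Widget when you need it

    Widget objs[1000] = {};               // storage for all 1000 Widgets exists up front

    delete ptrs[0];
    (void)objs;
    return 0;
}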
1) A d[3] creates a built-in array (or C-style, plain, naked array...) that contains three elements of type A; these elements live on the stack (given your code).
2) A* d[3]; creates a built-in array that contains three pointers to elements of type A (or at least it should). The declaration alone doesn't mean that the pointed-to elements exist; it's the developer's task to create them, and you can do that in two ways, using the stack (as before):
A element;
...
d[0] = &element;
or using the heap (with new/new[] and delete/delete[]):
d[0] = new A;
...
delete d[0];
Side effects of the second option: you may point to an already destroyed element (i.e. the element has gone out of scope), hence a potential seg fault; or you may lose the pointer to the allocated memory (i.e. the array of pointers goes out of scope while the allocated memory has not been released), hence a memory leak.
However, an advantage is that you can use lazy initialisation (you don't pay for what you don't use). If you don't need an A object in d[2] (or pointed to by it), why should you create it and waste memory? And here smart pointers come in; they make your life easier by allowing you to allocate memory only when you need it, reducing the risk of seg faults (std::weak_ptr is worth mentioning here), memory leaks, etc.:
std::unique_ptr<A> d[3];
d[0] = std::make_unique<A>(); // Call make_unique when you need it
But this is still a bit clumsy: built-in arrays are fine, but containers can be much better under some circumstances. With a built-in array it is hard to add or remove elements, iterate, get the size, access the tail/head, etc. That's why containers are quite useful, e.g. std::array or std::vector (whose size doesn't have to be known at compile time):
std::vector<std::unique_ptr<A>> d(3);
std::array<std::unique_ptr<A>, 3> d;
There are no advantages whatsoever to the following code in C++:
A* d[3];
for (int i=0;i<3;i++)
d[i]=new A(i);
In fact, you shouldn't write it at all.
Your solution with a locally-declared temporary array is valid, but has the significant limitation of requiring that the number of elements in the array be a compile-time constant (i.e., known ahead of time).
If you need to have an array with a dynamic number of elements, known only at run time, then you should use a std::vector, which performs the new allocation automatically. That is the idiomatic way of working with dynamically-allocated arrays in C++.
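For instance, a sketch using the question's class A (std::vector does not require A to be default constructible as long as you build it with push_back/emplace_back; emplace_back needs C++11):
#include <vector>

class A {
public:
    A(int d) : y(d) {}
    int y;
};

int main() {
    std::vector<A> d;          // empty; no default construction of A required
    d.reserve(3);              // optional: avoid reallocations while growing
    for (int i = 0; i < 3; i++)
        d.emplace_back(i);     // constructs A(i) in place

    return d[1].y;             // 1
}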
I have
int * array=new int[2];
and I would like to free the memory of the last element, thus reducing the allocated memory to only 1 element. I tried to call
delete array+1;
but it gives error
*** glibc detected *** skuska:
free(): invalid pointer: 0x000000000065a020 *
Can this be done in C++03 without explicit reallocation?
Note: If I wanted to use a class instead a primitive datatype (like int), how can I free the memory so that the destructor of the class is called too?
Note2: I am trying to implement vector::pop_back
Don't use new[] expression for this. That's not how vector works. What you do is allocate a chunk of raw memory. You could use malloc for this, or you could use operator new, which is different from the new expression. This is essentially what the reserve() member function of std::vector does, assuming you've used the default allocator. It doesn't create any actual objects the way the new[] expression does.
When you want to construct an element, you use placement new, passing it a location somewhere in the raw memory you've allocated. When you want to destroy an element, you call its destructor directly. When you are done, instead of using the delete[] expression, you use operator delete if you used operator new, or you use free() if you used malloc.
Here's an example creating 10 objects, and destroying them in reverse order. I could destroy them in any order, but this is how you would do it in a vector implementation.
#include <cstdlib>   // malloc / free
#include <new>       // placement new

struct MyClass {     // minimal stand-in class so the example compiles
    MyClass()  {}
    ~MyClass() {}
};

int main()
{
void * storage = malloc(sizeof(MyClass) * 10);
for (int i=0; i<10; ++i)
{
// this is placement new
new ((MyClass*)storage + i) MyClass;
}
for (int i=9; i>=0; --i)
{
// calling the destructor directly
((MyClass*)storage + i)->~MyClass();
}
free(storage);
}
pop_back would be implemented by simply calling the destructor of the last element, and decrementing the size member variable by 1. It wouldn't, shouldn't (and couldn't, without making a bunch of unnecessary copies) free any memory.
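A rough sketch of that idea (the member names data_, size_ and capacity_ are invented for illustration; a real implementation has far more bookkeeping):
#include <cstddef>

template <typename T>
class my_vector {
    T*          data_     = nullptr;   // raw storage obtained separately (e.g. via operator new)
    std::size_t size_     = 0;
    std::size_t capacity_ = 0;

public:
    void pop_back() {
        data_[size_ - 1].~T();         // destroy the last element in place
        --size_;                       // the raw memory is kept for reuse
    }
    // ... rest of the interface omitted
};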
There is no such option. The only way to resize an array is to allocate a new array of size old_size - 1, copy the contents of the old array, and then delete the old array.
If you want to free individual objects' memory, why not create an array of pointers?
MyClass **arr = new MyClass*[size];
for(int i = 0; i < size; i++)
arr[i] = new MyClass;
// ...
delete arr[size-1];
std::vector::pop_back doesn't reallocate anything — it simply updates the internal variable determining data size, reducing it by one. The old last element is still there in memory; the vector simply doesn't let you access it through its public API. *
This, as well as growing re-allocation being non-linear, is the basis of why std::vector::capacity() is not equivalent to std::vector::size().
So, if you're really trying to re-invent std::vector for whatever reason, the answer to your question about re-allocation is don't.
* Actually for non-primitive data types it's a little more complex, since such elements are semantically destroyed even though their memory will not be freed.
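A quick way to observe that behaviour (exact capacity values are implementation-defined):
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 4; ++i) v.push_back(i);
    std::cout << v.size() << ' ' << v.capacity() << '\n';   // e.g. "4 4"

    v.pop_back();
    std::cout << v.size() << ' ' << v.capacity() << '\n';   // e.g. "3 4": size shrinks, capacity doesn't
    return 0;
}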
Since you are using C++03, you have access to the std::vector data type. Use that and it's one call:
#include <vector>
//...
std::vector<int> ary(3);
//...
ary.erase(ary.begin() + (ary.size() - 1));
or
#include <vector>
//...
std::vector<int> ary(3);
//...
ary.pop_back();
EDIT:
Why are you trying to re-invent the wheel? Just use vector::pop_back.
Anyway, the destructor is called on contained data types ONLY if the contained data type IS NOT a pointer. If it IS a pointer, you must manually call delete on the object you want to delete, set the pointer to nullptr or NULL (because attempting to call delete on a previously deleted object is bad, while calling delete on a null pointer is a no-op), and then call erase.
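A minimal sketch of that cleanup for a vector of pointers (MyClass is a made-up placeholder type):
#include <vector>

struct MyClass { int x; };

int main() {
    std::vector<MyClass*> ary;
    ary.push_back(new MyClass());

    // The vector only destroys the pointers it holds, not the objects they point to,
    // so delete the object first, then remove the (now dangling) pointer.
    delete ary.back();
    ary.pop_back();
    return 0;
}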
I'm familiar with Java and trying to teach myself C/C++. I'm stealing some curriculum from a class that is hosting their materials here. I unfortunately can't ask the teacher since I'm not in the class. My concern is with the section under "dynamically declared arrays":
If you want to be able to alter the size of your array at run time, then declare dynamic arrays. These are done with pointers and the new operator. For the basics on pointers, read the pointers section.
Allocate memory using new, and then you access the array in the same way you would a static array. For example,
int* arrayPtr = new int[10];
for (int i = 0; i < 10; i++) {
    arrayPtr[i] = i;
}
The memory picture is identical to the static array, but you can change the size if you need to. Don't forget you must deallocate the memory before allocating new memory (or you will have a memory leak).
delete [] arrayPtr; // the [] is needed when deleting array pointers
arrayPtr = new int[50];
. . .
When you're completely done with the array, you must delete its memory:
delete [] arrayPtr;
Dynamic multi-dimensional arrays are done in a similar manner to Java. You will have pointers to pointers. For an example, see a
My understanding is that an array in C is simply a reference to the memory address of the first element in the array.
So, what is the difference between int *pointerArray = new int[10]; and int array[10]; if any?
I've done some tests that seem to indicate that they do the exact same thing. Is the website wrong or did I read that wrong?
#include <cstdlib>
#include <iostream>
using namespace std;
int main(int argc, char** argv) {
// Initialize the pointer array
int *pointerArray = new int[10];
for (int i = 0; i < 10; i++){
pointerArray[i] = i;
}
// Initialize the regular array
int array[10];
for (int i = 0; i < 10; i++){
array[i]= i;
}
cout << *(pointerArray + 5) << endl;
cout << *(array + 5) << endl;
cout << pointerArray[5] << endl;
cout << array[5] << endl;
cout << pointerArray << endl;
cout << array << endl;
return 0;
}
Output:
5
5
5
5
0x8f94030
0xbfa6a37c
I've tried to "dynamically re-size" my pointer array as described on the site, but my new (bigger) pointer array ends up filled with 0's which is not very useful.
int array[10]; declares the array size statically, which means it is fixed; that is the only major difference. It also might be allocated inside the function's stack frame, i.e. on the program's stack. You do not need to (and must not) use delete[] on that kind of array; in fact, you might crash the program if you delete it.
When you use operator new, you allocate memory dynamically which could be slower and the memory usually comes from the heap rather than the program's stack (though not always). This is better in most cases, as you are more limited in the stack space than the heap space. However, you must watch out for memory leaks and delete[] your stuff when you don't need it anymore.
As to your array being filled with zeros, what your class material does not say is that you have to do this:
int *arr = new int[20]; // old array
//do magic here and decide that we need a bigger array
int *bigger = new int[50]; // allocate a bigger array
for (int i = 0; i < 20; i++) bigger[i] = arr[i]; // copy the elements from the old array into the new array
delete[] arr;
arr = bigger;
That code extends the array arr by 30 more elements. Note that you must copy the old data into the new array, or else it will not be there (in your case, everything becomes 0).
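For comparison, a sketch of the same grow operation done with std::vector, which copies the old elements over for you:
#include <vector>

int main() {
    std::vector<int> arr(20);   // "old array" of 20 ints
    // ... fill arr ...
    arr.resize(50);             // now 50 elements: the first 20 keep their values,
                                // the new ones are value-initialized to 0
    return 0;
}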
My understanding is that an array in C is simply a reference to the memory address of the first element in the array.
So, what is the difference between int *pointerArray = new int[10]; and int array[10]; if any?
What you mention is a source of much confusion for C/C++ beginners.
In C/C++, an array corresponds to a block of memory sufficiently large to hold all of its elements. This is associated with the [] syntax, as in your example:
int array[10];
One feature of C/C++ is that you can refer to an array by using a pointer to its element type. For this reason, you are allowed to write:
int* array_pointer = array;
which is the same as:
int* array_pointer = &array[0];
and this allows you to access array elements in the usual way: array_pointer[3],
but you cannot treat array itself as a pointer, e.g. by doing pointer arithmetic on it (array++ fails miserably).
That said, it is also true that you can manage arrays without using the [] syntax at all and just allocate arrays with malloc, then use them through raw pointers. This is part of the "beauty" of C/C++.
Summing up: a distinction must be made between the pointer and the memory it points to (the actual array):
the [] syntax in declarations (i.e., int array[10];) refers to both aspects at once (it gives you, as to say, a pointer and an array);
when declaring a pointer variable (i.e., int* p;), you just get the pointer;
when evaluating an expression (i.e., int i = p[4];, or array[4];), the [] just means dereferencing a pointer.
Apart from this, the only difference between int *pointerArray = new int[10]; and int array[10]; is that the former is allocated dynamically and the latter lives on the stack.
Dynamically allocated:
int * pointerArray = new int[10];
[BTW, this is a pointer to the first element of an array of 10 ints, NOT a "pointer array"]
Statically allocated (possibly on the stack):
int array[10];
Otherwise they are the same.
The problem with understanding C/C++ arrays when coming from Java is that C/C++ distinguishes between the array variable and the memory used to store the array contents. Both concepts are important and distinct. In Java, you really just have a reference to an object that is an array.
You also need to understand that C/C++ has two ways of allocating memory. Memory can be allocated on the heap or the stack. Java doesn't have this distinction.
In C and C++, an array variable refers to the first element of the array: a plain array decays to a pointer to its first element, and a dynamically allocated array is reached through an actual pointer variable. That variable can exist on the heap or the stack, and so can the memory that contains the array's contents, and the two can be different. Your examples are int arrays, so you can think of the array variable as behaving like an int*.
There are two differences between int *pointerArray = new int[10]; and int array[10];:
The first difference is that the memory that contains the contents of the first array is allocated on the heap. The second array is more tricky. If array is a local variable in a function then its contents are allocated on the stack, but if it is a member variable of a class then its contents are allocated wherever the containing object is allocated (heap or stack).
The second difference is that, as you've realised, the first array is dynamic: its size can be determined at run-time. The second array is fixed: the compiler must be able to determine its size at compile time.
First, I'd look for some other place to learn C++. The page you cite is very confusing, and has little to do with the way one actually programs in C++. In C++, most of the time, you'd use std::vector for an array, not the complex solutions proposed on the page you cite. In practice, you never use operator new[] (an array new).
In fact, std::vector is in some ways more like ArrayList than simple arrays in Java; unlike an array in Java, you can simply grow the vector by inserting elements into it, preferably at the end. And it supports iterators, although C++ iterators are considerably different from Java iterators. On the other hand, you can access it using the [] operator, like a normal array.
The arrays described on the page you cite are usually called C style arrays. In C++, their use is mostly limited to objects with static lifetime, although they do occasionally appear in classes. In any case, they are never allocated dynamically.
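To make the std::vector suggestion concrete, here is a short sketch of that ArrayList-like usage (the values are arbitrary):
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;                // starts empty, roughly like new ArrayList<Integer>()
    for (int i = 0; i < 10; ++i)
        v.push_back(i);                // grows as needed

    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        std::cout << *it << ' ';       // iterator access
    std::cout << '\n';

    std::cout << v[5] << '\n';         // [] access, like a normal array
    return 0;
}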
The main difference is that some operations that are allowed on pointers are not allowed on arrays.
On the one hand:
int ar[10];
uses memory allocated on the stack. You can think of it as locally available: while it is possible to pass a pointer/reference to other functions, the memory will be freed as soon as it goes out of scope (in your example at the end of the main function, but that's usually not the case).
On the other hand:
int* ar = new int[10];
allocates the memory for the array on the heap. It is available until your whole program exits or it is deleted using
delete[] ar;
Note that for delete you need the [] if and only if the corresponding new had them, too.
There is a difference, but not in the area you point to. pointerArray will point to the beginning of a block of memory big enough for 10 ints. So will array (once it decays to a pointer). The only difference is where that memory is stored. pointerArray is assigned memory dynamically (at run time) and hence the block goes on the heap, while array[10] has its size fixed at compile time and goes on the stack.
It is true that you can get most of array functionality by using a pointer to its first element. But compiler knows that a static array is composed of several elements and the most notable difference is the result of the sizeof operator.
sizeof(pointerArray) == sizeof(int*)
sizeof(array) == 10 * sizeof(int)
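A compilable check of that difference (the pointer size printed is platform-dependent):
#include <iostream>

int main() {
    int* pointerArray = new int[10];
    int  array[10];

    std::cout << sizeof(pointerArray) << '\n';   // size of a pointer, e.g. 8
    std::cout << sizeof(array) << '\n';          // 10 * sizeof(int), e.g. 40

    delete[] pointerArray;
    return 0;
}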
I come from a java background and there's something I could do in Java that I need to do in C++, but I'm not sure how to do it.
I need to declare an array, but at the moment I don't know the size. Once I know the size, then I set the size of the array. In Java I would just do something like:
int [] array;
then
array = new int[someSize];
How do I do this in C++?
you want to use std::vector in most cases.
std::vector<int> array;
array.resize(someSize);
But if you insist on using new, then you have do to a bit more work than you do in Java.
int *array;
array = new int[someSize];
// then, later when you're done with array
delete [] array;
C++ runtimes don't come with garbage collection by default, so the delete[] is required to avoid leaking memory. You can get the best of both worlds using a smart pointer type, but really, just use std::vector.
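If you really want the raw-array form but with automatic cleanup, one option is C++11's std::unique_ptr array specialization (someSize is just the question's placeholder):
#include <cstddef>
#include <memory>

void example(std::size_t someSize) {
    std::unique_ptr<int[]> array(new int[someSize]);
    array[0] = 42;   // indexable like a plain array (assumes someSize > 0)
    // ...
}                    // delete[] runs automatically when array goes out of scope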
In C++ you can do:
int *array; // declare a pointer of type int.
array = new int[someSize]; // dynamically allocate memory using new
and once you are done using the memory, de-allocate it using delete:
delete[] array;
Best way would be for you to use a std::vector. It does all you want and is easy to use and learn. Also, since this is C++, you should use a vector instead of an array. Here is an excellent reason as to why you should use a container class (a vector) instead of an array.
Vectors are dynamic in size and grow as you need them - just what you want.
The exact answer:
char * array = new char[64]; // 64-byte array
// New array
delete[] array;
array = new char[64];
std::vector is a much better choice in most cases, however. It does what you need without the manual delete and new commands.
As others have mentioned, std::vector is generally the way to go. The reason is that vector is very well understood, it's standardized across compilers and platforms, and above all it shields the programmer from the difficulties of manually managing memory. Moreover, vector elements are required to be allocated sequentially (i.e., vector elements A, B, C will appear in contiguous memory in the same order as they were pushed into the vector). This should make the vector as cache-friendly as a regular dynamically allocated array.
While the same end result could definitely be accomplished by declaring a pointer to int and manually managing the memory, that would mean extra work:
Every time you need more memory, you must manually allocate it
You must be very careful to delete any previously allocated memory before assigning a new value to the pointer, or you'll be stuck with huge memory leaks
Unlike std::vector, this approach is not RAII-friendly. Consider the following example:
void function()
{
int* array = new int[32];
char* somethingElse = new char[10];
// Do something useful.... No returns here, just one code path.
delete[] array;
delete[] somethingElse;
}
It looks safe and sound. But it isn't. What if, upon attempting to allocate 10 bytes for "somethingElse", the system runs out of memory? An exception of type std::bad_alloc will be thrown, which will start unwinding the stack looking for an exception handler, skipping the delete statements at the end of the function. You have a memory leak. That is but one of many reasons to avoid manually managing memory in C++. To remedy this (if you really, really want to), the Boost library provides a bunch of nice RAII wrappers, such as scoped_array and scoped_ptr.
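For contrast, a sketch of the same function written so that stack unwinding cleans up automatically (using std::vector; a smart pointer such as std::unique_ptr<T[]> or Boost's scoped_array would work similarly):
#include <vector>

void function()
{
    std::vector<int>  array(32);
    std::vector<char> somethingElse(10);

    // Do something useful... If an exception is thrown anywhere in here,
    // both vectors release their memory during stack unwinding.
}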
Use std::array when the size is known at compile time; otherwise use std::vector.
#include <array>
constexpr int someSize = 10;
std::array<int, someSize> array;
or
#include <vector>
std::vector<int> array; //size = 0
array.resize(someSize); //size = someSize
Declare a pointer:
int * array;