What are the advantages of creating an array with dynamic memory allocation compared with an ordinary, fixed-size array?
You don't have to know the size of the array in advance, or over-allocate memory to account for large arrays. This allows your program to be more efficient with its memory use.
Arrays created dynamically can typically be larger than those created automatically, and can have longer lifetimes. But both have numerous disadvantages compared to using a std::vector.
1) Dynamic memory allocation gives you much more freedom in managing the lifetime of an object.
2) It also gives you more freedom in controlling the size of the array.
int x[100]; has a fixed size and cannot be expanded. Its lifetime is tied to the scope where it was created, so it cannot safely be handed off to code that outlives that scope.
int *x = new int[n]; ... delete[] x; can be resized by allocating a new array and copying, and n does not have to be known at compile time (so you can ask the user how many numbers they need and create an array of that size). As pointed out by #Neil Butterworth, this creates the array on the heap, so it can be larger, while the fixed-size variant creates the array on the stack.
std::vector wraps all of this reallocation magic for you, and it is probably what you should use in your code.
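As an illustration of both approaches, here is a minimal sketch (reading the size from std::cin and the variable names are just assumptions for the example):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::size_t n = 0;
    std::cin >> n;                 // size is only known at run time

    // manual dynamic array: you own the allocation and must free it
    int* x = new int[n]();         // value-initialized to 0
    if (n > 0)
        x[0] = 42;
    delete[] x;                    // forgetting this leaks memory

    // std::vector does the same job and cleans up after itself
    std::vector<int> v(n, 0);
    if (!v.empty())
        v[0] = 42;
    return 0;
}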
The basic advantages of a dynamically allocated array are that you can determine its size at run time, and, because the array is on the heap rather than the stack, it can be larger than an array with automatic storage. It is important to remember, though, that std::vector does this work for you. The occasions on which you should roll your own array class are few and far between. Use std::vector.
When you allocate an array the ordinary way, its size has to be given at compile time, so most of the time you end up allocating either more memory than you need or less.
This is where dynamic memory allocation helps: you can allocate only the memory that is actually needed, and free it as soon as you are done with it.
In a nutshell, it lets the programmer create an array of whatever size the user asks for.
I'm having some trouble with a large array of vectors in C++.
Basically I want an array of 2 million elements, where each element is a vector<int> (it's for building an adjacency list).
When I do vector<int> myList[10] it works fine, but when I do vector<int> myList[2000000] it does not work and I don't know why.
I tried unsigned long int var = 2000000; vector<int> myList[var]; but I get the same error. (I don't know what the error is; my program just crashes.)
If you have any idea,
Thanks
There's a big difference between heap and stack memory. The heap is the nice big space where you can dynamically allocate gigabytes of memory; the stack is much more constrained in terms of allocation size (its size is fixed before your code runs).
If you define a local variable, it lives on the stack (like your array). With 2 million elements, that's at least 2 MB being allocated (or, assuming roughly 24 bytes of stack usage per vector, more like 48 MB), which is far too much for the stack and likely causes the crash. Dynamically allocating the array of vectors (or, preferably, just using a vector of vectors) ensures that the bulk of the memory is allocated from the heap, which should prevent this crash.
You can also increase the size of the stack using compiler flags, but that's generally not preferable to just dynamic allocation.
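For example, a heap-based version of the adjacency list (assuming 2,000,000 nodes, as in the question) might look like this:

#include <vector>

int main() {
    // All 2,000,000 inner vectors live in heap storage managed by the
    // outer vector, so only a small handle sits on the stack.
    std::vector<std::vector<int>> myList(2000000);

    myList[0].push_back(7);   // add an edge 0 -> 7
    return 0;
}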
This is helpful:
Dynamically allocate memory for my_List, or
declare your array of vectors of int (my_List) as a global variable and make its size a const. Those storage locations are by design big enough to hold such a large amount of memory.
For a local variable, the stack segment might be too small to hold 2e6 * 24 B.
I know that dynamic memory has advantages over setting up a fixed-size array and using only a portion of it. But with dynamic memory you have to specify the amount of data that you want to store in the array. When using strings you can type as many letters as you want (you can even use strings for numbers and then use a function to convert them). This makes me think that dynamic memory for character arrays is obsolete compared to strings.
So I want to know: what are the advantages and disadvantages of using strings? When is the space occupied by strings freed? Is perhaps the option to free your dynamically allocated memory with delete an advantage over strings? Please explain.
The short answer is "no, there are no drawbacks, only advantages" to std::string over character arrays.
Of course, strings do USE dynamic memory; they just hide that fact behind the scenes so you don't have to worry about it.
In answer to your question "When is the space occupied by strings freed?", this post may be helpful. Basically, a std::string is freed once it goes out of scope. Often the compiler can decide when to allocate and release the memory.
std::string usually contains an internal dynamically allocated buffer. When you assign data, or if you push back new data, and the current buffer size is not sufficient, a new buffer is allocated with an increased size and the old data is copied or moved to the new buffer. The old buffer is then deallocated.
The main buffer is deallocated when the string goes out of scope. If the string object is a local variable in a function (on the stack), it is deallocated at the end of the enclosing block; if it's a function parameter, when the function exits; if it's a class member, whenever the owning object is destroyed.
The advantage of strings is flexibility (increases in size automatically) and safety (harder to go over the bounds of an array). A fixed-size char array on the stack is faster as no dynamic allocation is required. But you should worry about that if you have a performance problem, and not before.
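To make the buffer behaviour concrete, here is a small sketch (the capacity values it prints are implementation-defined, so treat them as illustrative):

#include <iostream>
#include <string>

int main() {
    {
        std::string s = "hello";
        std::cout << s.capacity() << '\n';   // current buffer size (implementation-defined)

        s += " world, and quite a bit more text";
        std::cout << s.capacity() << '\n';   // likely larger: a new buffer was allocated
    }   // s goes out of scope here and its buffer is freed automatically
    return 0;
}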
Well, your question got me thinking, and then I realised that you are really asking about a syntax difference, because both ways end up dynamically allocating char arrays. The only difference is in the need:
if you need to create a string containing a sentence, then you can, and that's fine, without using malloc;
if you want an array to "play" with, meaning you change or set the cells according to some method, or change its size, then allocating it with malloc is the appropriate way;
the only reason I see for statically allocating a char a[17] (for example) is a single-purpose string, i.e. when you know the exact size you'll need and it won't change.
And one important point that I found:
with dynamic memory allocation, if memory is continually allocated but the memory held by objects that are no longer in use is never released, it leads to a memory leak and eventually to running out of memory, which is a big disadvantage.
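As a small sketch of the three options discussed above (the sizes and string contents are arbitrary examples):

#include <cstdlib>
#include <cstring>
#include <string>

int main() {
    // 1) fixed-size char array: fine when the exact size is known and never changes
    char a[17] = "exactly 16 chars";

    // 2) malloc'd buffer: size chosen at run time, but you must free it yourself
    char* b = static_cast<char*>(std::malloc(100));
    if (b) {
        std::strcpy(b, "resizable by hand");
        std::free(b);              // forgetting this is a memory leak
    }

    // 3) std::string: grows on its own and frees its memory automatically
    std::string s = "no manual management needed";
    s += ", even when it grows";
    return 0;
}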
Probably this is a very basic question, but here goes anyway. I have an array of size, say, 10. But while assigning integers to that array I give it only 8 elements. Can I free the memory of the 2 elements that are not used?
No, you can't. For dynamic allocation, you can only free or delete memory that was allocated with malloc or new. The exact same amount with the exact same pointer. For automatic variables, the memory will be freed automatically.
But since this is C++, use a std::vector instead. Please.
It depends on how you got your array in the first place.
If it is an array that is allocated in the automatic or static storage (i.e. a local or a global) there is nothing you can free, because you did not allocate anything (the compiler did it for you).
If this is a dynamically allocated array, you can achieve the same effect by creating a smaller array with only eight elements, copying the original values into it, and then freeing the original array. This does not guarantee that the amount of memory allocated to your program would necessarily go down, because the allocator of the eight-element array is allowed to allocate space for more elements. If the numbers are 10000 and 8000, on the other hand, you will almost certainly get some savings (although the standard does not guarantee it either).
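For a dynamically allocated array, the shrink-by-copying approach described above might look roughly like this (using the sizes 10 and 8 from the question):

#include <algorithm>

int main() {
    int* data = new int[10];
    for (int i = 0; i < 8; ++i)
        data[i] = i;                  // only 8 elements are actually used

    // allocate a smaller array, copy the used part, free the original
    int* smaller = new int[8];
    std::copy(data, data + 8, smaller);
    delete[] data;
    data = smaller;

    // ... use data[0..7] ...
    delete[] data;
    return 0;
}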
What is the exact difference between dynamic arrays and vectors? It was an interview question put to me.
I said both have sequential memory.
Vectors can be grown in size at any point in the code. He then said even dynamic arrays can be grown in size after creation.
I said vectors are error-free since they are in the standard library. He said he would provide a .so file of dynamic arrays which is error-free and has all the qualities on par with the STL.
I am confused and couldn't give the exact difference. When I searched on the Internet, I only found the statements above.
Can someone please explain the exact difference to me? And what was the interviewer expecting from me?
He said he would provide a .so file of dynamic arrays which is error-free and has all the qualities on par with the STL.
If his dynamic array class does the same as std::vector (that is: it implements RAII to clean up after itself, can grow and shrink and whatever else std::vector does), then there's only one major advantage std::vector has over his dynamic array class:
std::vector is standardized and everybody knows it. If I see a std::vector in some piece of code, I know exactly what it does and how it is supposed to be used. If, however, I see a my::dynamic_array, I do not know that at all. I would have to look at its documentation or even (gasp!) its implementation to find out whether my_dynamic_array::resize() does the same as std::vector::resize().
A great deal here depends on what he means by a "dynamic array". Most people mean something where the memory is allocated with array-new and freed with array-delete. If that's the intent here, then having qualities on a par with std::vector simply isn't possible.
The reason is fairly simple: std::vector routinely allocates a chunk of memory larger than necessary to hold the number of elements currently being stored. It then constructs objects in that memory as needed to expand. With array-new, however, you have no choice -- you're allocating an array of objects, so if you allocate space for (say) 100 objects, you end up with 100 objects being created in that space (immediately). It simply has no provision for having a buffer some part of which contains real objects, and another part of which is just plain memory, containing nothing.
I suppose if you want to stretch a point, it's possible to imitate std::vector and still allocate the space with array-new. To do it, you just have to allocate an array of char, and then use placement new to create objects in that raw memory space. This allows pretty much the same things as std::vector, because it is nearly the same thing as std::vector. We're still missing a (potential) level of indirection though -- std::vector actually allocates memory via an Allocator object so you can change exactly how it allocates its raw memory (by default it uses std::allocator<T>, which uses operator new, but if you wanted to, you could actually write an allocator that would use new char[size], though I can't quite imagine why you would).
You could, of course, write your dynamic array to use an allocator object as well. At that point, for all practical purposes you've just reinvented std::vector under a (presumably) new name. In that case, #sbi is still right: the mere fact that it's not standardized means it's still missing one of the chief qualities of std::vector -- the quality of being standardized and already known by everybody who knows C++. Even without that, though, we have to stretch the phrase "dynamic array" to (and I'd posit, beyond) the breaking point to get the same qualities as std::vector, even if we ignore standardization.
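A very rough sketch of that "char buffer plus placement new" idea (deliberately stripped down: a made-up tiny_int_buffer type, int elements only, no growth or element access, so it is an illustration rather than a usable container):

#include <cstddef>
#include <new>

// Raw bytes are allocated up front, but int objects are only constructed
// in them one at a time, which is roughly what std::vector does internally.
struct tiny_int_buffer {
    char*       raw;    // raw storage, no ints constructed yet
    std::size_t count;  // how many ints have been constructed so far
    std::size_t cap;    // how many ints the raw storage can hold

    explicit tiny_int_buffer(std::size_t capacity)
        : raw(new char[capacity * sizeof(int)]), count(0), cap(capacity) {}

    void push_back(int value) {
        if (count < cap) {
            new (raw + count * sizeof(int)) int(value);  // placement new
            ++count;
        }
    }

    ~tiny_int_buffer() {
        // int is trivially destructible, so releasing the raw bytes is enough
        delete[] raw;
    }
};

int main() {
    tiny_int_buffer buf(4);  // space for 4 ints, but none constructed yet
    buf.push_back(1);        // now exactly one int object exists in the buffer
    buf.push_back(2);
    return 0;
}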
I expect they wanted you to talk about the traps of forgetting to delete the dynamic array with operator delete[], and then got confused themselves when they tried to help you along; it doesn't make much sense to implement a dynamic array as a plain (non-template) class, since that bakes in the element type.
The array memory allocated for a vector is released when the vector goes out of scope; even if the vector itself is declared on the stack, its backing array lives on the heap.
#include <vector>

void foo() {
    std::vector<int> v;
    // ... method body
    // v's backing array (on the heap) is freed here when v goes out of scope
}
It says here: "Internally, vectors use a dynamically allocated array to store their elements."
The underlying concept of a vector is a dynamically allocated array.
http://www.cplusplus.com/reference/vector/vector/
Maybe the point is that with a dynamic array you go through the copy process to a new dynamic array whenever you want to resize, but you are able to control when that happens, based on your knowledge of the data going into the array.
A vector uses the same process, but it does not know whether it will grow later, so it typically allocates extra storage for possible growth; it could therefore consume more memory than intended in order to manage itself, compared to a dynamic array.
So I'd say the difference is: use a vector when managing its size is not a big deal, and use a dynamic array when you would rather do the resizing yourself.
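If the extra capacity a vector keeps around for growth is the concern, note that you can take control of it with reserve. A small sketch (the exact capacity printed is implementation-defined, though typically it will be exactly what you reserved):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(100);                  // allocate exactly what you expect to need
    for (int i = 0; i < 100; ++i)
        v.push_back(i);              // no reallocation happens during this loop

    std::cout << v.size() << ' ' << v.capacity() << '\n';   // typically "100 100"
    return 0;
}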
1. Arrays have to be deallocated explicitly if allocated dynamically, whereas vectors release their heap memory automatically.
2. The size of a dynamically allocated array cannot be queried afterwards, whereas the size of a vector can be obtained in O(1) time.
3. When an array is passed to a function, a separate parameter for its size has to be passed as well, whereas when a vector is passed there is no such need, because the vector keeps track of its own size at all times (see the sketch below).
4. When we allocate an array dynamically, its size cannot be changed once set, whereas with a vector we can change it.
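Points 2 and 3 in a small sketch (the function names are made up for the example):

#include <cstddef>
#include <iostream>
#include <vector>

void print_array(const int* a, std::size_t n) {    // size must travel separately
    for (std::size_t i = 0; i < n; ++i)
        std::cout << a[i] << ' ';
    std::cout << '\n';
}

void print_vector(const std::vector<int>& v) {      // the vector knows its own size
    for (int x : v)
        std::cout << x << ' ';
    std::cout << '\n';
}

int main() {
    int* arr = new int[3]{1, 2, 3};
    std::vector<int> vec{1, 2, 3};

    print_array(arr, 3);
    print_vector(vec);

    delete[] arr;   // the array must be freed explicitly; vec cleans up after itself
    return 0;
}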
For instance:
int* pArray;
pArray = new int[];            // does not compile: a size is required
instead of:
int* pArray;
pArray = new int[someNumber];
Since pointers are able to dynamically change the size of an array at run time, and the name of the pointer points to the first element of an array, shouldn't the default size be [1]? Does anyone know what's happening behind the scenes?
Since pointers are able to dynamically change the size of an array at run time
This is not true. They can't change the size unless you allocate a new array with the new size.
If you want to have an array-like object that dynamically changes the size you should use the std::vector.
#include <vector>
#include <iostream>

int main() {
    std::vector<int> array;
    array.push_back(1);
    array.push_back(2);
    array.push_back(3);
    array.push_back(4);
    std::cout << array.size() << std::endl; // should be 4
    return 0;
}
When you create an array with new, you are allocating a specific amount of memory for that array. You need to tell it how many items are to be stored so it can allocate enough memory.
When you "resize" the array, you are creating a new array (one with even more memory) and copying the items over before deleting the old array (or else you have a memory leak).
Quite simply, C++ arrays have no facility to change their size automatically. Therefore, when allocating an array you must specify its size.
Pointers cannot change an array. They can be made to point to different arrays at runtime, though.
However, I suggest you stay away from anything involving new until you have learned more about the language. For arrays changing their size dynamically use std::vector.
Pointers point to dynamically allocated memory. The memory is on the heap rather than the stack. It is dynamic because you can call new and delete on it, adding to it and removing from it at run time (in simple terms). The pointer has nothing to do with that - a pointer can point to anything and in this case, it just happens to point to the beginning of your dynamic memory. The resizing and management of that memory is completely your responsibility (or the responsibility of the container you may use, e.g. std::vector manages dynamic memory and acts as a dynamic array).
They cannot change the size dynamically. You can get the pointer to point to a new allocation of memory from the heap.
Behind the scenes, memory is allocated: a little chunk of silicon somewhere in your machine is now dedicated to the array you just newed.
When you want to "resize" your array, it is only possible to do so in place if the chunk has some free space around it. Most of the time, it is instead necessary to reserve another, bigger chunk and copy the data that was in the first... and obviously relinquish the first (otherwise you have a memory leak).
This is done automatically by STL containers (like std::vector or std::deque), but manually when you yourself call new. Therefore, the best solution to avoid leaks is to use the Standard Library instead of trying to emulate it yourself.
int *pArray = new int; can be considered an array of size 1 and it kinda does what you want "by default".
But what if I need an array of 10 elements?
Pointers do not have any magical abilites, they just point to memory, therefore:
pArray[5] = 10; will just yield a run-time error (if you are lucky).
Therefore the language gives you a way to allocate an array of the needed size by calling new type[size].
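For completeness, a minimal example of doing exactly that (the size 10 is arbitrary):

int main() {
    int* pArray = new int[10];   // room for 10 ints, not just 1
    pArray[5] = 10;              // now this is valid
    delete[] pArray;             // must be freed with delete[]
    return 0;
}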