Difference between dynamic array and normal array [duplicate] - C++

This question already has answers here:
What is the difference between Static and Dynamic arrays in C++?
(13 answers)
Closed 9 years ago.
I'm a beginner. I'm confused about the difference between them. I have read a few answers and realized one of the differences is that a dynamic array can be deleted while a normal array can't. But are there any other differences? Such as their functions, sizes, or whatever?
Well, I have read the following example, and I don't see any difference if I replace the dynamic array {p = new (nothrow) int[i];} with a normal array {int p[i];}.
#include <iostream>
#include <new>
using namespace std;

int main()
{
    int i, n;
    int* p;
    cout << "How many numbers would you like to type? ";
    cin >> i;
    p = new (nothrow) int[i];
    if (p == 0)
        cout << "Error: memory could not be allocated";
    else
    {
        for (n = 0; n < i; n++)
        {
            cout << "Enter number: ";
            cin >> p[n];
        }
        cout << "You have entered: ";
        for (n = 0; n < i; n++)
            cout << p[n] << ", ";
        delete[] p;
    }
    return 0;
}

But are there any other differences?
Compile and run this, see what happens:
int main(int argc, char** argv){
    char buffer[1024*1024*64];
    buffer[0] = 0;
    return 0;
}
Explanation:
Well, a normal array is placed either on the stack or in the data segment (if it is a global variable or a static local variable), at least on Windows/Linux PCs. The stack has a limited size (although you can change it using ulimit on Linux and linker settings on Windows). Since your array is too big for the stack, your program will crash as soon as it enters the function containing that array ("segmentation fault" on Linux, "stack overflow" or "access violation" on Windows). The default stack size is 1 megabyte on Windows (x86) and 8 megabytes on Linux.
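On POSIX systems you can also query the current stack limit programmatically. A minimal sketch using getrlimit (note the soft limit may be RLIM_INFINITY):

#include <sys/resource.h>
#include <iostream>

int main()
{
    rlimit rl{};
    // RLIMIT_STACK reports the stack size limit for this process.
    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        std::cout << "stack soft limit: " << rl.rlim_cur << " bytes\n";
    return 0;
}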
You cannot determine the size of a block allocated with new: int *p = new int[146]; std::cout << sizeof(p) << std::endl; will print sizeof(int*), not the size of the allocated memory. However, sizeof does work on arrays.
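A short sketch of that difference:

#include <iostream>

int main()
{
    int a[146];                      // real array: sizeof sees the whole object
    int* p = new int[146];           // pointer: sizeof sees only the pointer
    std::cout << sizeof(a) << '\n';  // 146 * sizeof(int), e.g. 584
    std::cout << sizeof(p) << '\n';  // sizeof(int*), e.g. 8 on a 64-bit system
    delete[] p;
    return 0;
}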
Theoretically, using dynamic memory allocation, you can allocate as much memory as you want (the operating system may impose limits, though, and on a 32-bit system the maximum size of a single allocated block will be 2-3 GB). You can also control resource usage by freeing the memory, so your program won't eat system RAM/swap for no reason.
Dynamic arrays are not automatically freed; you have to delete them manually.
That's just a brief overview.
Dynamic memory allocation provides finer resource control and removes some limitations of local variables.
Speaking of which: although you CAN use new/delete, if you want arrays of variable size you should use std::vector instead. Manual memory management is error-prone, so you should make the compiler do it for you when possible. It is advisable to at least study the STL (Standard Template Library), smart pointers, RAII, and the Rule of Three.
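For instance, the program from the question could be rewritten with std::vector, with no manual delete[] needed; a minimal sketch:

#include <iostream>
#include <vector>

int main()
{
    int i;
    std::cout << "How many numbers would you like to type? ";
    std::cin >> i;
    std::vector<int> p(i);            // allocation handled for you
    for (int n = 0; n < i; n++)
    {
        std::cout << "Enter number: ";
        std::cin >> p[n];
    }
    std::cout << "You have entered: ";
    for (int n = 0; n < i; n++)
        std::cout << p[n] << ", ";
    return 0;                         // vector's destructor frees the memory
}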

Dynamic is just that: you can choose the size of the resulting array at runtime rather than at compile time. There are compiler extensions that allow you to use static arrays (int array[CONSTANT]) like a dynamic array, in that you can specify the size at runtime, but those are nonstandard.
Dynamic arrays are also allocated on the heap rather than the stack. This allows you to use all of the available memory if necessary to create the array. The stack has a limited size that depends on the OS.
Allocating memory on the heap allows you to, for example, do this:
int* createArray(int n) {
    int* arr = new int[n];
    return arr;
}

int main()
{
    int* myArray = createArray(10);
    // this array is now valid and can be used
    delete[] myArray;
}
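By contrast, returning a pointer to a local (stack) array would be a bug, because the array dies when the function returns; a sketch of the broken version:

int* createArrayBroken() {
    int arr[10];  // automatic storage, destroyed when the function returns
    return arr;   // BUG: the caller receives a dangling pointer
}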
I'm sure there are numerous other differences between dynamic and static arrays, but those points are the big ones.

Related

I dont know why But this code prompts Segmentation error [duplicate]

The following code gives me a segmentation fault when run on a 2 GB machine, but works on a 4 GB machine.
int main()
{
    int c[1000000];
    cout << "done\n";
    return 0;
}
The size of the array is just 4 MB. Is there a limit on the size of an array that can be used in C++?
You're probably just getting a stack overflow here. The array is too big to fit in your program's stack region; the stack growth limit is usually 8 MiB or 1 MiB for user-space code on most mainstream desktop/server OSes. (Normal C++ implementations use the asm stack for automatic storage, i.e. non-static local variables, including arrays. This makes deallocating them happen for free when functions return or an exception propagates through them.)
If you dynamically allocate the array you should be fine, assuming your machine has enough memory.
int* array = new int[1000000]; // may throw std::bad_alloc
But remember that this will require you to delete[] the array manually to avoid memory leaks, even if your function exits via an exception. Manual new/delete is strongly discouraged in modern C++; prefer RAII.
A better solution would be to use std::vector<int> array (cppreference). You can reserve space for 1000000 elements if you know how large it will grow, or even resize it to default-construct them (i.e. zero-initialize the memory, unlike when you declare a plain C-style array with no initializer), like std::vector<int> array(1000000). Both options are sketched below.
When the std::vector object goes out of scope, its destructor will deallocate the storage for you, even if that happens via an exception in a child function that's caught by a parent function.
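To illustrate the two options mentioned above, a minimal sketch:

#include <vector>

int main()
{
    // Option 1: construct with a size; the ints are value-initialized (zero).
    std::vector<int> a(1000000);

    // Option 2: reserve capacity only; size() stays 0 until you push_back.
    std::vector<int> b;
    b.reserve(1000000);
    for (int i = 0; i < 1000000; ++i)
        b.push_back(i);

    return 0;  // both vectors deallocate automatically on scope exit
}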
In C or C++, local objects are usually allocated on the stack. You are allocating a large array on the stack, more than the stack can handle, so you are getting a stack overflow.
Don't allocate it locally on the stack; use some other place instead. This can be achieved by either making the object global or allocating it on the global heap. Global variables are fine if you don't use them from any other compilation unit. To make sure this doesn't happen by accident, add a static storage specifier; otherwise just use the heap.
This will allocate the array in the BSS segment, which holds zero-initialized static data (it is neither the stack nor the heap). Since the array has static storage duration, it is zero-initialized if you don't specify otherwise, unlike local variables (automatic storage), including arrays.
static int c[1000000];

int main()
{
    cout << "done\n";
    return 0;
}
A non-zero initializer will make the compiler allocate the array in the DATA segment, which is also static storage rather than the heap. (All the data for the array initializer will also take space in the executable, including all the implicit trailing zeros, instead of just a size to zero-initialize in the BSS.)
int c[1000000] = {1, 2, 3};

int main()
{
    cout << "done\n";
    return 0;
}
This will allocate at some unspecified location in the heap:
int main()
{
    int* c = new int[1000000]; // size can be a variable, unlike with static storage
    cout << "done\n";
    delete[] c; // dynamic storage needs manual freeing
    return 0;
}
Also, on most UNIX and Linux systems you can temporarily increase the stack size with the following command:
ulimit -s unlimited
But be careful, memory is a limited resource and with great power comes great responsibility :)
Your array is being allocated on the stack in this case; try allocating an array of the same size with malloc (or new[]) instead.
Because you store the array on the stack when you should store it on the heap. Read up on the difference between the heap and the stack to understand the concept.
Your plain array is allocated on the stack, and the stack is limited to a few megabytes; hence your program gets a stack overflow and crashes.
It is probably best to use a heap-allocated std::vector-based array, which can grow to almost the size of the whole memory, instead of your plain array.
#include <vector>
#include <iostream>

int main() {
    std::vector<int> c(1000000);
    std::cout << "done\n";
    return 0;
}
Then you can access the array's elements as usual with c[i] and/or get its size (the number of int elements) with c.size().
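A minimal usage sketch, filling the vector and summing it:

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> c(1000000);
    for (std::size_t i = 0; i < c.size(); ++i)
        c[i] = static_cast<int>(i);   // element access, as with a plain array
    long long sum = 0;
    for (int x : c)
        sum += x;
    std::cout << "size: " << c.size() << ", sum: " << sum << '\n';
    return 0;
}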
If you want a multi-dimensional array with fixed inner dimensions, then use a mix of std::vector and std::array, as follows:
#include <vector>
#include <array>
#include <iostream>

int main() {
    std::vector<std::array<std::array<int, 123>, 456>> c(100);
    std::cout << "done\n";
    return 0;
}
In the example above you get almost the same behavior as if you had allocated the plain array int c[100][456][123]; (except that the vector allocates on the heap instead of the stack), and you can access elements as c[10][20][30], just as with a plain array. Because this allocates on the heap, the array size is limited by available memory rather than by stack size.
To get a pointer to the first element of the vector, use &c[0] or just c.data().
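This is handy for passing the buffer to a C-style function; fill_with_zeros below is a made-up example for illustration, not a real API:

#include <cstring>
#include <vector>

// Hypothetical C-style function expecting a raw buffer (illustration only).
void fill_with_zeros(int* p, std::size_t n)
{
    std::memset(p, 0, n * sizeof(int));
}

int main()
{
    std::vector<int> c(100);
    fill_with_zeros(c.data(), c.size());  // &c[0] would work equally well
    return 0;
}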
One more approach worked for me: you can reduce the size of the array by changing its element type (short is typically 2 bytes instead of 4, halving the stack usage, at the cost of a smaller value range):
int main()
{
    short c[1000000];
    cout << "done\n";
    return 0;
}
or
int main()
{
    unsigned short c[1000000];
    cout << "done\n";
    return 0;
}

Cannot create 2-d array with 45000000 elements in main function Clion Apple Silicon [duplicate]


segmentation fault for a c++ big 4 dimensions float array [duplicate]


Meaning behind memory surrounding array c++

I've been experimenting lately with dynamically allocated arrays. I came to the conclusion that they have to store their own size in order to free the memory.
So I dug around in memory with pointers and found that the 6*4 bytes directly before and the 1*4 bytes directly after the array don't change upon recompilation (they aren't random garbage).
I represented these as unsigned int values and printed them out in the Windows console.
Here's what I got (the array's content lies between the fdfdfdfd values in the dump):
So I figured out that the third unsigned int directly before the array's first element is the size of the allocated memory in bytes.
However, I cannot find any information about the rest of them.
Q: Does anyone know what the memory surrounding the array's content means and care to share?
The code used in program:
#include <iostream>
#include <cstdlib>  // for system()

void show(unsigned long val[], int n)
{
    using namespace std;
    cout << "Array length: " << n << endl;
    cout << "hex: ";
    // Deliberately reads out of bounds (i = -6 .. n) to inspect the heap
    // bookkeeping around the block; this is undefined behavior, done here
    // purely as an experiment.
    for (int i = -6; i < n + 1; i++)
    {
        cout << hex << (*(val + i)) << "|";
    }
    cout << endl << "dec: ";
    for (int i = -6; i < n + 1; i++)
    {
        cout << dec << (*(val + i)) << "|";
    }
    cout << endl;
}
int main()
{
    using namespace std;
    unsigned long *a = new unsigned long[15]{ 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14 };
    unsigned long *b = new unsigned long[15]{ 0 };
    unsigned long *c = new unsigned long[17]{ 0 };
    show(a, 15);
    cout << endl;
    show(b, 15);
    cout << endl;
    show(c, 17);
    cout << endl;
    cout << endl;
    system("PAUSE");
    delete[] a;
    delete[] b;
    delete[] c;
}
It typically means that you carried out your experiments using a debugging configuration of the project and a debugging version of the standard library. That version of the library uses pre-defined bit patterns to mark the boundaries of each allocated memory block ("no man's land" areas). Later, it checks whether these bit patterns survived intact (e.g. at the moment of delete[]). If they did not, it implies that someone wrote beyond the boundaries of the memory block, and the debug version of the library issues a diagnostic message about the problem.
If you compile your test program in release (optimized) configuration with a release (optimized) version of the standard library, these "no man's land" areas will not be created, the bit patterns will disappear from memory, and the associated memory checks will disappear from the code.
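As a sketch of how that check fires (MSVC debug builds only, assuming the debug CRT under /MDd or /MTd), you can trigger it explicitly with _CrtCheckMemory:

#include <crtdbg.h>

int main()
{
    int* p = new int[4];
    p[4] = 42;            // write one element past the end of the block
    _CrtCheckMemory();    // debug CRT reports the corrupted guard bytes
    delete[] p;           // delete[] would also trigger the report
    return 0;
}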
Note also that the memory layout you observed is typically specific to arrays of objects with no destructors or with trivial destructors (which is basically the same thing). In your case you were working with plain unsigned long.
Once you start allocating arrays of objects with non-trivial destructors, you will observe that it is not just the size of memory block (in bytes) that's stored by the implementation, but the exact size of the array (in elements) is typically stored there as well.
"I got to conclusion that they have to store their own size in order to free the memory." No they don't.
An array does not free its memory; you never get an "array object" from new/malloc. You get a pointer to memory in which you can store an array, but if you forget the size you requested, you cannot get it back. The standard library often depends on the OS under the hood as well.
And even the OS does not have to remember it. There are implementations with very simple memory management that basically return the current pointer to free memory and move that pointer forward by the requested size; free does nothing, and freed memory is simply forgotten.
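A minimal sketch of such a "bump" allocator (purely illustrative, alignment ignored for brevity):

#include <cstddef>

// Hands out memory from a fixed arena and never reclaims it:
// deallocate() is a no-op and no per-block sizes are remembered.
class BumpAllocator {
    char buffer_[1 << 16];       // 64 KiB arena
    std::size_t offset_ = 0;
public:
    void* allocate(std::size_t size) {
        if (offset_ + size > sizeof(buffer_))
            return nullptr;      // arena exhausted
        void* p = buffer_ + offset_;
        offset_ += size;         // "bump" the pointer forward
        return p;
    }
    void deallocate(void*) { /* intentionally does nothing */ }
};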
Bottom line: memory management is implementation-defined, and beyond what you explicitly requested, nothing is guaranteed. The compiler or the OS can mess with it, so you need to consult documentation specific to your environment.
The bit patterns you are talking about are often used as safeguards or for debugging; see, for example: When and why will an OS initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete?

Stack overflow while using vector

I made a program for school's little contest, I have:
#include <iostream>
#include <vector>
int main(){
    int n;
    std::cin >> n;
    int c[n];
    std::vector<int> r(n);
    std::cout << "I made it through";
    //rest of program
}
When I type in 1000000 and press enter, the program crashes without a word, exit code 0xC00000FD. I presumed it happens when I initialize the vector, but I thought it could handle a lot more. Am I doing something wrong? I know I could use a pointer, but I'd prefer not to touch things that are already working.
The stack is a very restricted resource.
Even though your compiler seems to implement C-style VLAs in C++ (int c[n];), it does not magically gain more memory.
Test 1) that reading n succeeded, and 2) that n is not out of bounds for your use, before executing the statement allocating that stack array.
Default stack size on Windows: 1 MB; sizeof(int): 4; thus, about 250000 fit.
Default stack size on Linux: 8 MB; sizeof(int): 4; thus, about 2000000 fit.
Workaround: use dynamic allocation for the ints, for example with a std::vector:
std::vector<int> c(n);
Alternatively, at least use a smart pointer:
std::unique_ptr<int[]> c(new int[n]);
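Since C++14 you can also avoid the raw new entirely with std::make_unique (from <memory>):

auto c = std::make_unique<int[]>(n);  // value-initializes the ints, frees itself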
The problem isn't vector (which uses the heap for its underlying expanding storage in most STL implementations), but int c[n], which allocates 1,000,000 4-byte integers on the stack: almost 4 MB. On Win32 the stack is around 1 MB by default, hence the overflow.
If you really need a raw array, then change your c array to be allocated on the heap using new, but don't forget to delete[] it; otherwise std::vector is preferred for most expanding-storage scenarios. If you need a fixed-length array, consider std::array<T, N> (new in C++11), which offers bounds-checked access via its at() member.
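For example, std::array's at() throws std::out_of_range on a bad index, which a plain array's operator[] does not do. A minimal sketch:

#include <array>
#include <iostream>
#include <stdexcept>

int main()
{
    std::array<int, 4> a{1, 2, 3, 4};
    try {
        std::cout << a.at(10) << '\n';  // index out of bounds: throws
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}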
You probably just need to allocate the array dynamically. In C++:

#include <iostream>
#include <new>
#include <vector>

int main()
{
    int n;
    std::cin >> n;
    // Plain new throws std::bad_alloc on failure; new (std::nothrow)
    // returns nullptr instead, which makes the check below meaningful.
    int* c = new (std::nothrow) int[n];
    if (nullptr == c) {
        std::cerr << "Runtime error" << std::endl;
        return 1;
    }
    // Build the vector from the pointer range [c, c + n).
    std::vector<int> r(c, c + n);
    std::cout << "I made it through";
    delete[] c;
    return 0;
}

Also, to fill the vector after you have allocated c dynamically, construct it from the pointer range c, c + n; std::begin/std::end don't work on a raw pointer, because the array length is not part of the pointer's type.