I tried to find the largest array size that can be created in C++. I declared an int array and kept increasing its size. Beyond 10^9 the program started crashing, but there was already a serious problem for arrays of size 5*10^8 and larger, even when the program did not crash. The code and the problem are as follows:
#include <cstdio>

int ar[500000000];

int main()
{
    printf("Here\n");
}
The above code runs successfully if the array size is reduced to 4*10^8 or less. But for array sizes of 5*10^8 and greater, the program exits without printing anything: it does not crash, and it gives no error or warning.
Also, if the array is defined locally there is no such behavior: past a similar limit, the program simply crashes. It is only with the global definition that the program neither crashes nor prints anything.
Can anybody please explain the reason for this behavior? I understand that the maximum array size will vary between machines.
I have 1.2 GB of free RAM. How am I able to create the local integer array of size 4*10^8 at all? That requires around 1.49 GB, and I don't have that much free RAM.
The real question is: why are you using globals? And to make it worse, it's a static raw array?
As suggested already, the memory used to hold global variables is being overflowed (and it possibly wrote over your "Here\n" string).
If you really need that big of an array, use dynamically-allocated memory:
int main() {
    int* bigArray = new int[500000000];
    // ... use bigArray here ...
    delete[] bigArray;
}
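A safer sketch uses std::vector, which frees its buffer automatically (same 500000000-element size as the question, so it still needs roughly 2 GB of memory):

#include <cstdio>
#include <vector>

int main() {
    // The vector's heap buffer is released automatically when it goes
    // out of scope, even if an exception is thrown.
    std::vector<int> bigArray(500000000);
    printf("Here\n");
}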
C++ itself doesn't impose a maximum array size. In this case, since it is a global variable, it will be outside the stack as well. The only thing I can think of is the memory limit on your machine. How much memory does your machine have? How much is free before you run your program?
Why can I compile and run this code? Isn't the array too large? How is memory allocated to this array?
#include <iostream>
#define Nbig 10000000000000000

int main() {
    int x[Nbig];
    x[Nbig-1] = 100;
    std::cout << "x[Nbig-1]= " << x[Nbig-1] << "\n\n";
    return 0;
}
I thought that when a static array is declared, a chunk of RAM would be allocated for it, and when I assign a value to, say, x[1000], the bytes at address x + 1000*4 (4 for int, x being the address of the first element) would hold that value. I tried googling and read about static and dynamic allocation, heap and stack, and RAM itself, but didn't find my answer anywhere. Additional information that might help: I'm using Linux with 32 GB of RAM and compile the code with gcc.
I thought when a static array is declared, a chunk of RAM should be allocated to it
That is an implementation detail of the C++ implementation. You are asking for an array of the given size on the language level. The compiler has to compile the program in such a way that it behaves as the language specifies programs using arrays to behave. The language standard makes no mention of RAM or a stack and allows arrays of any arbitrary size up to an implementation-defined limit. How the compiler uses memory to provide for this behavior of the program is completely up to the compiler. If it can figure out that e.g. no RAM use is required to make the program behave equivalently to the specification, then it doesn't need to use any.
Since you use only one element of the array, there is clearly no need to reserve memory for the whole array and using less memory than asked for is also a desirable optimization, so it is not surprising that a compiler would choose to not allocate memory for the rest of the array. Even further it is obvious that you are only using the single element of the array to pass a constant to std::cout, so the compiler can completely avoid reserving memory for the array and just pass the constant directly to std::cout << in a register.
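To illustrate, with optimization the program above is effectively equivalent to this sketch; the array and the store have been folded into the constant:

#include <iostream>

int main() {
    // The compiler proved the only value ever read is 100, so neither
    // the array nor the store needs to exist at all.
    std::cout << "x[Nbig-1]= " << 100 << "\n\n";
    return 0;
}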
If the address of an automatic-duration object is never exposed to outside code or otherwise used in ways a compiler can't fully track, and if a compiler can "understand" everything that is done with the object, the compiler need not allocate actual storage for the object.
In this case, some compilers would probably be able to see that only one element of the array is ever read, and it's always written with the value 100, and thus there is no need to allocate any storage for the array. Instead, any operation that would read the array may be replaced with operation that loads the constant 100.
FYI, this code:
int main() {
    int x[Nbig];
attempts to allocate x on the stack (it's local to the function main and is not explicitly declared static). Typically, the max size of the stack is somewhere around a couple of megabytes, so your array won't fit. Most compilers know this and will refuse to compile this code. If it does compile, it will almost certainly fail at runtime.
In your case, the compiler will have optimized the whole thing away, since you only use one element.
If you need an array whose size is fixed at compile time and that can actually be allocated, give it static storage duration, like this:
int main() {
    static int x[Nbig];

Or:

int x[Nbig];

int main() {
(ignoring the fact that your #define may have been truncated)
This will allocate the array in the fixed-size data segment.
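Putting it together, a runnable sketch of the static variant; Nbig is reduced here, because the question's 10^16 elements far exceed any real machine's memory:

#include <iostream>
#define Nbig 100000000   // reduced from the question's 10^16: ~400 MB of ints

int main() {
    static int x[Nbig];  // static storage duration: data segment, not the stack
    x[Nbig-1] = 100;
    std::cout << "x[Nbig-1]= " << x[Nbig-1] << "\n";
    return 0;
}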
I am using Dev-C++ to write a simulation program. For it, I need to declare a single-dimensional array of type double with 4200000 elements, like double n[4200000].
The compiler shows no error, but the program exits on execution. I have checked, and the program executes just fine for an array having 5000 elements.
Now, I know that declaring such a large array on the stack is not recommended. However, the thing is that the simulation requires me to call specific elements from the array multiple times - for example, I might need the value of n[234] or n[46664] for a given calculation. Therefore, I need an array in which it is easier to sift through elements.
Is there a way I can declare this array on the stack?
No, there is no (we'll say "reasonable") way to declare this array on the stack. You can, however, declare the pointer on the stack and set aside a block of memory on the heap:
double *n = new double[4200000];
Accessing n[234] through this pointer should be just as fast as accessing n[234] of an array that you declared like this:
double n[500];
Or, even better, you could use a vector:
std::vector<double> someElements(4200000);
someElements[234]; // just as fast as n[234] in the examples above when optimized (-O3); on small programs the difference is negligible otherwise (about 5%)
With -O3 this is just as fast as a raw array, and much safer. With the

double *n = new double[4200000];

solution, you will leak memory unless you remember to do this:

delete[] n;

and, with exceptions and early returns in play, manual delete[] is a very unsafe way of doing things.
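If you want heap allocation without a vector, a std::unique_ptr sketch (assuming C++11) gives the same automatic cleanup:

#include <memory>

int main() {
    // delete[] runs automatically when n goes out of scope,
    // even if an exception unwinds the stack first.
    std::unique_ptr<double[]> n(new double[4200000]);
    n[234] = 1.0;   // element access works like a raw array
    return 0;
}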
You can increase your stack size. Try adding these options to your link flags:
-Wl,--stack,36000000
It might be too large though (I'm not sure if Windows places an upper limit on stack size.) In reality though, you shouldn't do that even if it works. Use dynamic memory allocation, as pointed out in the other answers.
(Weird, writing an answer and hoping it won't get accepted... :-P)
Yes, you can declare this array on the stack (with a little extra work), but it is not wise.
There is no justifiable reason why the array has to live on the stack.
The overhead of dynamically allocating a single array once is negligible (you could say "zero"), and a smart pointer will safely take care of not leaking memory, if that is your concern.
Stack-allocated memory is not in any way different from heap-allocated memory (apart from some caching effects for small objects, but these do not apply here).
In short, just don't do it.
If you insist that you must allocate the array on the stack, you will need to reserve 32 megabytes of stack space first (preferably a bit more). Using Dev-C++ (which presumes Windows+MinGW), you will either need to set the reserved stack size for your executable using linker flags such as -Wl,--stack,34000000 (this reserves somewhat more than 32 MiB), or create a thread (which lets you specify a reserved stack size for that thread).
But really, again, just don't do that. There's nothing wrong with allocating a huge array dynamically.
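For completeness, a sketch of the thread route just mentioned, on Windows (Dev-C++ implies Windows/MinGW; the 64 MiB reservation is an illustrative value):

#include <windows.h>
#include <cstdio>

DWORD WINAPI worker(LPVOID) {
    double n[4200000];   // ~32 MiB: fits only because of the large stack below
    n[234] = 1.0;
    printf("%f\n", n[234]);
    return 0;
}

int main() {
    // Reserve a 64 MiB stack for the new thread.
    HANDLE h = CreateThread(NULL, 64 * 1024 * 1024, worker, NULL,
                            STACK_SIZE_PARAM_IS_A_RESERVATION, NULL);
    if (h == NULL) return 1;
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}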
Are there any reasons you want this on the stack specifically?
I'm asking because the following gives you a construct that can be used in a similar way (in particular, accessing values with array[index]), but it is far less limited in size, since it is allocated from the heap; the total maximum depends on the 32-bit/64-bit memory model and on available RAM and swap.
int main() {
    int arraysize = 4200000;
    int *heaparray = new int[arraysize];
    // ...
    int k = heaparray[456];
    // ...
    delete[] heaparray;
    return 0;
}
When I run this code in my Dev-C++ compiler:
#include <bits/stdc++.h>
using namespace std;

int main()
{
    vector<int> vec;
    for (int i = 0; i < 100000000; i++)
        vec.push_back(i);
}
it compiles and runs fine.
But when I run:
#include <bits/stdc++.h>
using namespace std;

int arr[1000000000];

int main()
{
    return 0;
}
it gives me a link error.
As far as space is concerned, both arr and vec require the same amount. So why does the vec code run fine while the arr code doesn't even build?
The issue is with the allocation. In the first case, std::vector's default allocator uses dynamic allocation, which in principle can allocate as much memory as you want (bounded, of course, by the OS and the amount of physical memory), whereas in the second case the array uses the memory available for static allocation (technically, the array has static storage duration), which in your case is smaller than 1000000000 * sizeof(int) bytes. See this for a nice answer regarding the various types of allocation in a C program (which also applies to C++).
Btw, avoid #include <bits/stdc++.h>, as it is non-standard. Include only the standard headers you need. One more issue: I don't think you get a compile-time error; more likely it is a link-time or run-time failure. In other words, the code compiles just fine, but the final program cannot be linked or fails to run.
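If you do need those billion ints, a minimal sketch that moves them to the heap (assuming a 64-bit build with roughly 4 GB of memory to spare, since 1000000000 * sizeof(int) is about 4 GB):

#include <vector>

int main() {
    // Dynamic allocation: the buffer comes from the heap, so the
    // static-storage limit that broke the global array does not apply.
    std::vector<int> arr(1000000000);
    return 0;
}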
It seems that the object
int arr[1000000000];
is too large to fit in the global data of your program for your environment. I don't get a compile time error but I get a link time error in my environment also (cygwin/g++ 4.9.3).
Reducing the size to one tenth works for me. It may work for you as well. I don't know how you can determine the maximum size of an object that can fit in global data.
Space available on the stack is the smallest.
Space available in global data is larger than that.
Space available on the heap is the largest of all.
If your object is too large to fit on the stack, try putting it in global data.
If it is too large to fit in global data, use the heap, as in the sketch below.
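A sketch contrasting the three placements (the sizes are illustrative and assume a 64-bit machine with memory to spare):

#include <vector>

int global_data[100000000];      // static storage: the global data segment

int main() {
    int stack_data[100000];      // automatic storage: the small stack
    std::vector<int> heap_data(1000000000);  // dynamic storage: the heap
    (void)stack_data;            // silence unused-variable warnings
    return 0;
}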
I am curious whether it is possible to determine the maximum size that an array can have in C++.
#include <iostream>
using namespace std;

#define MAX 2000000

int main()
{
    long array[MAX];
    cout << "Message" << endl;
    return 0;
}
This compiles just fine, but then segfaults as soon as I run it (even though array isn't actually referenced). I know it's the array size, too, because if I change MAX to 1000000 it runs just fine.
So, is there some define somewhere or some way of having #define MAX MAX_ALLOWED_ARRAY_SIZE_FOR_MY_MACHINE_DEFINED_SOMEWHERE_FOR_ME?
I don't actually need this for anything, this question is for curiosity's sake.
There isn't a way to determine this statically, because the actual limit depends on how much stack space your thread has been given. You could create a new thread, give it 10 megabytes of stack, and you would be able to allocate a correspondingly larger local array.
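A sketch of that thread idea with POSIX threads (assuming a platform with pthreads; compile with -pthread; the sizes are illustrative):

#include <pthread.h>
#include <iostream>

void* worker(void*) {
    long array[2000000];   // ~16 MB: fits only in the enlarged stack below
    array[0] = 1;
    std::cout << "Message" << std::endl;
    return NULL;
}

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 32 * 1024 * 1024);  // 32 MB stack
    pthread_t t;
    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}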
The amount you can allocate on the stack also depends on how much has already been used so far.
#include <iostream>

void foo(int level) {
    int dummy[100];                   // ~400 bytes per stack frame
    std::cerr << level << std::endl;
    foo(level + 1);                   // recurse until the stack overflows
}

int main() { foo(0); }
Then you can multiply the last printed level by 400 bytes (the size of dummy). The recursive calls themselves also occupy stack space, so you only get a lower bound. I may be missing some understanding of memory management here, so I'm open to corrections.
So this is what I got on my machine with varying dummy array size.
level    array size    stack total (level * size * 4 bytes)
24257    100           9702800
2597     1000          10388000
260      10000         10400000
129      20000         10320000
64       40000         10240000
25       100000        10000000
12       200000        9600000
Most variables declared inside functions are allocated on the stack, which is basically a fixed-size block of memory. Trying to allocate a stack variable larger than the stack causes a stack overflow, which is what produces the segfault.
Often the stack size is 8 MB, so, on a 64-bit machine, long array[1000000] has size 8*1000000 bytes < 8 MB (the stack is "safe"), but long array[2000000] has size 8*2000000 bytes > 8 MB, so the stack overflows and the program segfaults.
On the other hand, dynamically allocating the array with malloc puts the memory into the heap, which is basically unbounded (4GB on a 32-bit machine, 17,179,869,184GB on a 64-bit machine).
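For instance, the segfaulting example works once the array is moved to the heap; a sketch with malloc, as the answer suggests:

#include <cstdlib>
#include <iostream>
using namespace std;

#define MAX 2000000

int main()
{
    // ~16 MB: too large for a typical 8 MB stack, trivial for the heap.
    long *array = (long*)malloc(MAX * sizeof(long));
    if (array == NULL) return 1;
    cout << "Message" << endl;
    free(array);
    return 0;
}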
Please read this to understand the limitations set by the hardware and the compiler. I don't think there is a MAX already defined for you to use right away.
I want to declare an array:
int a[256][256][256];
And the program hangs. (I have already commented out all other code...)
When I try int a[256][256], it runs okay.
I am using MingW C++ compiler, Eclipse CDT.
My code is:
int main() {
    int a[256][256][256];
    return 0;
}
Any comments are welcome.
This might happen if your array is local to a function. In that case, you'd need a stack size sufficient to hold 2^24 ints (2^26 bytes, or 64 MB).
If you make the array a global, it should work. I'm not sure how to modify the stack size in Windows; in Linux you'd use "ulimit -s 10000" (units are KB).
If you have a good reason not to use a global (concurrency or recursion), you can use malloc/free. The important thing is to either increase your stack (not a good idea if you're using threads), or get the data on the heap (malloc/free) or the static data segment (global).
Ideally you'd get program termination (core dump) and not a hang. I do in cygwin.
Maybe you don't have 64 MB of free contiguous memory? Kind of hard to imagine, but possible...
You want something like this:
#include <cstdlib>

int main()
{
    int *a;
    a = (int*)malloc(256*256*256*sizeof(int)); // allocate the array on the heap
    // ... use a ...
    free(a);                                   // release it when done
    return 0;
}
Otherwise, you get a stack overflow crash (screenshot: http://bweaver.net/files/stackoverflow1.jpg).
Because, as others have pointed out, in your code you're allocating the array on the stack, and blowing it up.
Allocating the array via malloc or its friends is the way to go. (Creating it globally works too, if you must go that route.)
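And a sketch of the global variant mentioned above:

// Static storage duration: the array lives in the data/BSS segment,
// so it never touches the (much smaller) stack.
int a[256][256][256];

int main() {
    a[255][255][255] = 42;   // use the array normally
    return 0;
}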