I want to declare an array:
int a[256][256][256];
and the program hangs. (I have already commented out all other code...)
When I try int a[256][256], it runs fine.
I am using the MinGW C++ compiler with Eclipse CDT.
My code is:
int main() {
    int a[256][256][256];
    return 0;
}
Any comments are welcome.
This might happen if your array is local to a function. In that case, you'd need a stack size sufficient to hold 2^24 ints (2^26 bytes, or 64 MB).
If you make the array a global, it should work. I'm not sure how to modify the stack size in Windows; in Linux you'd use "ulimit -s 10000" (units are KB).
If you have a good reason not to use a global (concurrency or recursion), you can use malloc/free. The important thing is to either increase your stack (not a good idea if you're using threads), or get the data on the heap (malloc/free) or the static data segment (global).
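To make those options concrete, here is a minimal sketch (the pointer-to-array type is just one way to keep the original a[i][j][k] indexing; nothing here is specific to MinGW):

#include <cstdlib>

// Option A: static data segment - a global (or static) array uses no stack space.
static int a_global[256][256][256];

int main() {
    // Option B: the heap - the same 64 MB, allocated with malloc and released with free.
    int (*a)[256][256] = (int (*)[256][256]) std::malloc(sizeof(int) * 256 * 256 * 256);
    if (a == NULL)
        return 1;                 // allocation failed
    a[0][0][0] = 42;              // indexes exactly like the original array
    a_global[1][2][3] = 7;        // the global works the same way
    std::free(a);
    return 0;
}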
Ideally you'd get program termination (a core dump) rather than a hang; that's what I see under Cygwin.
Maybe you don't have 64 MB of free contiguous memory? Kind of hard to imagine, but possible...
You want something like this:
#include <stdlib.h>
int main()
{
    int *a = (int*)malloc(256 * 256 * 256 * sizeof(int)); // allocate the array space on the heap
    if (a == NULL)
        return 1;   // not enough memory
    /* ... use the array ... */
    free(a);        // release it when done
    return 0;
}
Otherwise, you get a stack overflow crash like the one in this screenshot: http://bweaver.net/files/stackoverflow1.jpg
That's because, as others have pointed out, your code allocates the array on the stack and blows it up.
Allocating the array via malloc or its friends is the way to go. (Creating it globally works too, if you must go that route.)
Related
The following code is generating a stack overflow error for me:
int main(int argc, char* argv[])
{
    int sieve[2000000];
    return 0;
}
How do I get around this? I am using Turbo C++, but would like to keep my code in C.
EDIT:
Thanks for the advice. The code above was only an example; I actually declare the array in a function, not in main. Also, I needed the array initialized to zeros, so when I googled malloc I discovered that calloc was perfect for my purposes.
malloc/calloc also have the advantage over stack allocation of letting me set the size with a variable.
Your array is way too big to fit on the stack; consider using the heap:
int *sieve = malloc(2000000 * sizeof(*sieve));
If you really want to change the stack size, take a look at this document.
Tip: don't forget to free your dynamically allocated memory when it's no longer needed.
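Since the edit mentions wanting the array zero-initialized and sized by a variable, a minimal sketch using calloc (which zeroes the memory it returns) might look like this:

#include <stdlib.h>

int main(void)
{
    size_t n = 2000000;                            /* size can come from a variable */
    int *sieve = (int*)calloc(n, sizeof(*sieve));  /* heap allocation, zero-initialized */
    if (sieve == NULL)
        return 1;                                  /* allocation failed */
    /* ... fill and use sieve[0] .. sieve[n-1] ... */
    free(sieve);                                   /* release when no longer needed */
    return 0;
}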
There are three ways (a combined sketch follows the list):
Allocate the array on the heap - use malloc(), as other posters suggested. Do not forget to free() it (although for main() it is not that important - the OS will clean up the memory for you on program termination).
Declare the array at unit level (file scope) - it will be allocated in the data segment and visible to everybody (adding static to the declaration limits its visibility to the unit).
Declare your array as static inside main() - it will also be allocated in the data segment, but be visible only in main().
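A combined sketch of the three options (the names are made up for illustration):

#include <stdlib.h>

int sieve_unit[2000000];                 /* way 2: unit level (file scope), data segment */

int main(void)
{
    static int sieve_static[2000000];    /* way 3: static local, also in the data segment */

    int *sieve_heap = (int*)malloc(2000000 * sizeof(*sieve_heap));   /* way 1: heap */
    if (sieve_heap == NULL)
        return 1;

    sieve_unit[0] = sieve_static[0] = sieve_heap[0] = 1;   /* all three behave like ordinary arrays */

    free(sieve_heap);                    /* only the heap allocation needs explicit cleanup */
    return 0;
}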
That's about 7.6 MB of stack space. In Visual Studio you would use /STACK:###,### to set the size you want (the two numbers are the reserve and commit sizes). If you truly want a huge stack (there can be good reasons - a LISP implementation or the like - and even the heap is limited to smallish allocations before forcing you to use VirtualAlloc), you may also want to build your PE with /LARGEADDRESSAWARE (again a Visual Studio linker option); this configures the PE header so your compiled binary can address the full 4 GB of 32-bit address space (when running under WOW64). If building truly massive binaries, you would also typically need to add /bigobj as an additional compiler parameter.
And if you still need more space, you can radically violate convention by using something similar to /merge: (again MSVC's linker), which lets you pack one section into another so you can use every single byte for a single shared code/data section. Naturally you would also need to configure the SECTIONS permissions in a .def file or with a #pragma.
Use malloc. Also check that the returned pointer is not null; if it is null, your system simply doesn't have enough memory to hold that many values.
You would be better off allocating it on the heap, not the stack, with something like:
#include <stdlib.h>
int main(int argc, char* argv[])
{
    int *sieve = malloc(2000000 * sizeof(*sieve)); /* room for 2,000,000 ints */
    /* ... use sieve ... */
    free(sieve);
    return 0;
}
Your array is huge.
It's possible that your machine or OS doesn't have, or doesn't want to allocate, that much memory.
If you absolutely need an enormous array, you can try to allocate it dynamically (using malloc(...)), but then you're at risk of leaking memory. Don't forget to free the memory.
The advantage of malloc is that it tries to allocate memory on the heap instead of the stack (therefore you won't get a stack overflow).
You can check the value that malloc returns to see if the allocation succeeded or failed.
If it fails, just try to malloc a smaller array.
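A rough sketch of that check-and-retry idea (the halving strategy is only an illustration):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 2000000;
    int *data = NULL;

    /* Keep trying smaller sizes until an allocation succeeds (or we give up). */
    while (n > 0 && (data = (int*)malloc(n * sizeof(*data))) == NULL)
        n /= 2;

    if (data == NULL) {
        fprintf(stderr, "could not allocate any memory\n");
        return 1;
    }
    printf("allocated room for %lu ints\n", (unsigned long)n);
    /* ... use data[0] .. data[n-1] ... */
    free(data);
    return 0;
}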
Another option would be to use a different data structure that can be resized on the fly (like a linked list). Whether this option is good depends on what you are going to do with the data.
Yet another option would be to store things in a file, streaming data on the fly. This approach is the slowest.
If you go for storage on the hard drive, you might as well use an existing (database) library.
As Turbo C/C++ is a 16-bit compiler, the int datatype occupies 2 bytes.
2 bytes * 2,000,000 = 4,000,000 bytes, or about 3.8 MB.
The automatic variables of a function are stored on the stack, and that is what overflowed the stack memory. Instead, use the data segment (a static or global variable) or dynamic heap memory (malloc/calloc) to obtain the required memory, within what the processor's memory mapping makes available.
Is there some reason why you can't use alloca() to allocate the space you need on the stack frame based on how big the object really needs to be?
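For reference, a sketch of what that could look like (the header providing alloca varies: <alloca.h> on most Unix systems, <malloc.h> with MinGW/MSVC; note that the memory still lives on the stack frame, so this only helps when the size actually needed is modest):

#include <stddef.h>
#include <malloc.h>   /* or <alloca.h>, depending on the platform */

void process(size_t n)
{
    /* Sized at run time, but still stack storage - freed automatically on return. */
    int *buf = (int*)alloca(n * sizeof(*buf));
    buf[0] = 1;
    /* ... work with buf[0] .. buf[n-1] ... */
}

int main(void)
{
    process(1000);    /* a modest size is fine; a 2,000,000-int request would still overflow */
    return 0;
}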
If you do that and still bust the stack, put it in heap-allocated memory. I highly recommend NOT declaring it static in main() and putting it in the data segment.
If it really has to be that big and your program can't allocate it on the heap, your program really has no business running on that type of machine to begin with.
What (exactly) are you trying to accomplish?
I have some code with an array of "Bacon" objects. I can compile and run it and add objects to the array, but when I make the array size more than one million, the program says 'bacon.exe has stopped working' and I have to close it. I think it might be a memory leak, but I am still learning about that. I am using the NetBeans IDE, and I tried allocating more memory at compile time, but I couldn't figure out how to do that. Note: it isn't because my whole computer runs out of memory, because I still have 2 GB free after running the program. Here is my code:
#include <iostream>
#include "Bacon.h"
using namespace std;

int main() {
    const int objs = 1000000;
    Bacon *bacs[objs];
    for (int i = 0; i < objs; i++) {
        bacs[i] = new Bacon(2, 3);
    }
    for (int i = 0; i < objs; i++) {
        bacs[i]->print();
    }
    cin.ignore();
    return 0;
}
Your computer has plenty of memory, but only so much of it can be allocated on the stack. Try allocating it on the heap instead:
Bacon **bacs = new Bacon*[objs];
and later:
delete[] bacs;
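Putting it together, a minimal sketch (assuming the Bacon class from the question); note that each object created with new also needs its own delete:

#include <iostream>
#include "Bacon.h"
using namespace std;

int main() {
    const int objs = 1000000;
    Bacon **bacs = new Bacon*[objs];   // the pointer array now lives on the heap, not the stack
    for (int i = 0; i < objs; i++) {
        bacs[i] = new Bacon(2, 3);
    }
    for (int i = 0; i < objs; i++) {
        bacs[i]->print();
    }
    for (int i = 0; i < objs; i++) {   // free each object, not just the array of pointers
        delete bacs[i];
    }
    delete[] bacs;
    cin.ignore();
    return 0;
}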
You're probably out of stack space.
You allocate a huge array of pointers right on the stack, and the stack is a limited resource (the default is about 1 MB on Windows and typically 8 MB on Linux). A pointer is usually 4 or 8 bytes; multiply that by one million and you get 4-8 MB, well past the Windows default.
As I understand it, when you request memory, you can only take and use that space if the operating system you are using (Windows in this case, I think) lets you have it.
For some reason, Windows may not be letting you have that much memory in this situation. I'm not much of an expert in this field; I'm just offering this as a thought.
The default stack size (Windows, Visual Studio 2005; other versions probably use the same number) is 1 MB; check out http://msdn.microsoft.com/en-us/library/tdkhxaks%28v=vs.80%29.aspx to change it.
On Linux, use ulimit to change it.
The heap solution is valid too, but in your example you don't strictly need the heap. Requesting heap memory from the OS for something that never escapes the current function is not good practice: in assembly, stack allocation translates to just a bigger subtraction from the stack pointer, while heap memory is requested through other mechanisms that require more processing.
I tried to check what the largest array size that can be created in C++ is. I declared an int array and kept increasing the array size. After 10^9 the program started crashing, but there was a serious problem for arrays of size 5*10^8 and larger (even when the program did not crash). The code used and the problem are as follows:
#include <cstdio>

int ar[500000000];

int main()
{
    printf("Here\n");
}
The above code runs successfully if the size of the array is reduced to 4*10^8 or less. But for array sizes of 5*10^8 and greater, the program runs but does not print anything; it does not crash or give any error or warning either.
Also, if the array definition is local there is no such behavior: past the same limit the program simply crashes. It is only with the global definition of the array that the program neither crashes nor prints anything.
Can anybody please explain the reason for this behavior? I understand that the maximum array size will vary between machines.
I have 1.2 GB of free RAM. How am I able to create a local integer array of size 4*10^8? That requires around 1.49 GB, and I don't have that much free RAM.
The real question is: why are you using globals? And to make it worse, a statically sized raw array?
As suggested already, the memory used to hold global variables is being overflowed (and it possibly wrote over your "Here\n" string).
If you really need that big of an array, use dynamically-allocated memory:
int main() {
    int* bigArray = new int[500000000];
    // ... use bigArray here
    delete[] bigArray;
}
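If you prefer not to manage the memory by hand, a std::vector is an alternative sketch: it is heap-backed, zero-initializes its elements, and frees itself automatically:

#include <vector>

int main() {
    // Heap-backed, zero-initialized, released automatically when it goes out of scope.
    std::vector<int> bigArray(500000000);
    bigArray[0] = 1;   // used like an ordinary array
    return 0;
}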
C++ itself doesn't impose a maximum limit on array size. In this case, since it is a global variable, it will be outside the stack as well. The only thing I can think of is the memory limit on your machine. How much memory does your machine have? How much memory is free before you run your program?