Create a big array in C++ [duplicate]

This question already has answers here:
Possible Duplicate: Segmentation fault on large array sizes
Closed 12 years ago.
Hi all,
I am trying to create a very big array in VS 2010 using C++.
When I try to create an array like the following:
int dp[4501][4501]
or
int dp[1000][1000]
it throws a "Stack Overflow" exception.
When I change it to:
int dp[100][100]
everything is fine.
So if I want to create a big array like the ones above, what should I do?
Best Regards,

Use dynamic allocation or the STL. There was a recent thread about a very similar question. See this.

You should use dynamic allocation:
typedef std::vector<int> int_vector;
int_vector dp(10000);
A two-dimensional array can be simulated by nesting vectors:
typedef std::vector<int_vector> int_double_vector;
int_double_vector dp(4501, int_vector(4501));
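Element access then works just like the built-in array; a quick sketch of usage:
dp[42][7] = 1;               // operator[] on the outer vector returns an int_vector&
int rows = (int)dp.size();   // number of rows, 4501 here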

Put it on the heap.
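For instance, one way to do that with a raw new[] (just a sketch; the matching delete[] is easy to forget, which is why the vector answers above are usually preferable):
int (*dp)[4501] = new int[4501][4501];   // one contiguous 4501x4501 block on the heap
dp[42][7] = 1;
// ... use it ...
delete[] dp;                             // release it when done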

If you want to avoid new[] or std::vector, make the array global. This puts the array in static storage rather than on the stack, so the stack overflow will not occur.
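A minimal sketch of the global-array approach (the array then lives in static storage for the whole run of the program):
int dp[4501][4501];   // ~81 MB of static storage, zero-initialized, not on the stack

int main() {
    dp[42][7] = 1;
    return 0;
}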

Text from the Parashift FAQ: Why should I use container classes rather than simple arrays?
EDIT:
Take a look at these Stack Overflow threads:
When would you use an array rather than a vector/string?
Why use iterators instead of array indices?

Your stack has overflowed with too many bits. You must drain them. Preferably onto a heap of other bits. I suggest /F67108864. The /F stands for "F'ing hell why is the stack so small compared to the heap?". The 67108864 is arbitrary. (For what it's worth, /F really is the MSVC compiler option that sets the stack size, in bytes.)

Your declaration looks a bit as if dp will be used as a matrix. In that case, a dedicated (dense) matrix class such as boost::numeric::ublas::matrix is the simplest solution, easier and more local than a vector of vectors. If the matrix is sparsely populated, use a sparse matrix class instead.
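A short sketch of what that might look like with Boost.uBLAS (assuming Boost is available; element access uses operator() rather than [][]):
#include <boost/numeric/ublas/matrix.hpp>

int main() {
    boost::numeric::ublas::matrix<int> dp(4501, 4501);  // storage is heap-allocated
    dp(42, 7) = 1;                                      // row 42, column 7
    return 0;
}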

So if I want to create a big array like above, what should I do?
Avoid using the stack for these cases; in other words, avoid creating arrays like these as local (stack-allocated) variables inside a function. Just to give you an idea, my thread-local stack is only 16 kilobytes, while 4501 * 4501 * 4 bytes (assuming a 4-byte int) is roughly 81 megabytes.
Consider something like this instead:
#include <vector>
using std::vector;
typedef vector<int> Row;
typedef vector<Row> Matrix;
Matrix dp(4501, Row(4501));
If you want to create a 10x50 matrix:
Matrix dp(10, Row(50));
You can use this just like your normal dp array had it not overflowed the stack. This one will be allocated and automatically deallocated to/from the heap so that you don't have to worry about stack overflow when using it.
dp[5][10] = 123;
Good luck!
[Edit] There are also matrix solutions in Boost worth looking into, but suggesting Boost might be a bit premature given the nature of the topic.

Related

Two dimensional Array - Stackoverflow [duplicate]


Numerical array too long in C++: how can I circumvent this?

DISCLAIMER: I am at a very entry level in C++ (or any language)... I searched for similar questions but found none.
I am trying to write a simple program which should perform some operations on an array as big as int pop[100000000][4] (10^8); however, my compiler crashes even for an int pop[130000][4] array... Is there any way out? Am I using the wrong approach?
(For now I am limiting myself to a very simple program; my aim is to generate random numbers in array[][0] every "turn" to simulate a population and work with that.)
Thanks for your time and attention
An array of 130000 * 4 ints is going to be huge, and likely not something you want stored locally (in reality, on the stack where it generally won't fit).
Instead you can use dynamic allocation to get heap storage; the recommended means would be a vector of vectors:
std::vector<std::vector<int>> pop(130000, std::vector<int>(4));
pop[12000][1] = 9; // expected syntax
Vectors are dynamic, so they can be resized later with calls such as resize and push_back; see the sketch below.
If you're a new programmer trying to write a simple program, you should also consider whether you really need roughly 2 MiB of ints (130000 * 4 elements at 4 bytes each).
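A small sketch of that flexibility, building on the pop declaration above (the sizes are only examples):
#include <vector>

int main() {
    std::vector<std::vector<int>> pop(130000, std::vector<int>(4));
    pop[12000][1] = 9;                           // same indexing syntax as the raw array
    pop.resize(200000, std::vector<int>(4));     // grow the outer vector later if needed
    pop.push_back(std::vector<int>(4));          // or append one row at a time
    return 0;
}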

error while declaring double array of size 150000? [duplicate]

This question already has answers here:
Segmentation fault on large array sizes (7 answers)
Closed 9 years ago.
I am developing code in which I need to declare two arrays of double of size 150000. When only one array is declared, the code runs successfully; if I declare two arrays, then it terminates with an exception during execution.
Code is :
double a[150000];
double b[150000];
If I declare only a, it executes perfectly; if I declare both a and b, then it terminates.
Can anyone suggest how to resolve this?
The two arrays are overflowing the stack (assuming they are local variables). Dynamically allocate memory for the arrays instead, using a std::vector to manage the memory for you:
std::vector<double> a(150000);
std::vector<double> b(150000);
Even though the std::vector objects themselves are on the stack, a std::vector allocates the memory for its elements internally on the heap, avoiding the stack overflow.
Okay! You have a stack overflow in your app!
Ways to fix it:
1. Don't use the stack; use dynamic memory allocation (the heap) instead:
double* a = new double[150000];  // remember the matching delete[] a; when you are done
2. Use an STL container, for example std::vector, which allocates its storage on the heap internally:
std::vector<double> a(150000);
3. Increase the stack size (a bad idea, but if you really need it, read your compiler docs, and look here).
4. Redesign your code somehow.
There is one solution to this problem, but it leads to (at least) three different follow-on solutions. The solution is "don't use large arrays as local variables, because it blows up the stack".
The solution clearly means changing the code in some way. There are a few different ways to do that.
The obvious and straightforward solution is to use std::vector<double> instead.
Another solution is to use a std::unique_ptr:
std::unique_ptr<double[]> a(new double[150000]);
The third, and SOMETIMES a good solution, is to make a and b global variables.
There are several other variants, but they are generally variations on the same theme. What is best in your case really depends on what the rest of your code is doing. I'd start with std::vector<double>, but other alternatives do exist, should that be an unsuitable solution for some reason.
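If C++14 is available, the std::unique_ptr variant can be written a bit more compactly with std::make_unique (a sketch; the 150000 doubles are value-initialized and freed automatically):
#include <memory>

int main() {
    auto a = std::make_unique<double[]>(150000);
    auto b = std::make_unique<double[]>(150000);
    a[0] = b[0] = 1.5;
    return 0;
}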

data structure/abstract data type with C which acts like vectors in C++

Hi, I am using a C compiler (GCC), so I cannot use a vector like in C++. How can I create a similar kind of data structure/dynamic array in C that will work like a vector? It might be very easy, but I have no idea how to do it.
thanks
Start with a struct holding a pointer to the correct element type, the currently used size, and the current allocated capacity. Allocate space with malloc. If you run out of space, use realloc to grow it.
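A minimal sketch of that idea (the names IntVec, intvec_push, and intvec_free are made up for illustration; it follows the struct-plus-malloc/realloc layout described above):
#include <stdlib.h>

typedef struct {
    int    *data;      /* pointer to the elements            */
    size_t  size;      /* number of elements currently used  */
    size_t  capacity;  /* number of elements allocated       */
} IntVec;

/* Append one value, growing the buffer when it is full. Returns 1 on success, 0 on failure. */
int intvec_push(IntVec *v, int value)
{
    if (v->size == v->capacity) {
        size_t new_cap = v->capacity ? v->capacity * 2 : 8;
        int *p = (int *) realloc(v->data, new_cap * sizeof *p);
        if (p == NULL)
            return 0;              /* allocation failed; the vector is left intact */
        v->data = p;
        v->capacity = new_cap;
    }
    v->data[v->size++] = value;
    return 1;
}

void intvec_free(IntVec *v)
{
    free(v->data);
    v->data = NULL;
    v->size = v->capacity = 0;
}
Starting from an IntVec initialized to all zeros ({NULL, 0, 0}), intvec_push can be called in a loop, and intvec_free releases the storage at the end.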

Memory issues with a very large array in C++

Hi I have the following:
struct myStructure
{
vector<int> myVector;
};
myStructure myArray[10000000];
As you can see, I have a very large array of vectors. The problem is that I don't have a priori knowledge of the number of elements I need to have in the array, but I know that 10 million elements is the max I can have. I have tried two things:
a) make myArray a global array; however, the problem is that I have a function that will access myArray many many times, which is resulting in memory leaks and the program crashing for large calculations.
b) declare myArray dynamically from within the function that needs to access it; the memory is kept in check, but the program runs about 8 times slower.
Any ideas on how to address this issue? Thanks
access myArray many many times, which is resulting in memory leaks and the program crashing for large calculations
You should fix those bugs in any case.
the memory is kept in check but the program runs about 8 times slower
Since you're already using dynamic allocation with an array of vectors it's not immediately obvious why dynamically allocating one more thing would result in such a slowdown. So you should look into this as well.
Then I would go with a vector<vector<int>> that isn't global but has the appropriate lifespan for its uses:
#include <vector>
#include <functional>
#include <algorithm>
#include <iterator>
using std::vector;

// Placeholder for the asker's real work on the vectors.
void foo(vector<vector<int>>& /*v*/) { /* fill and use the vectors here */ }

int main() {
    vector<vector<int>> v;
    for (int i = 0; i < 100; ++i) {
        std::for_each(begin(v), end(v), std::mem_fn(&vector<int>::clear));
        foo(v);
        for (int j = 0; j < 100; ++j) {
            std::for_each(begin(v), end(v), std::mem_fn(&vector<int>::clear));
            foo(v);
            for (int k = 0; k < 100; ++k) {
                std::for_each(begin(v), end(v), std::mem_fn(&vector<int>::clear));
                foo(v);
                for (int l = 0; l < 100; ++l) {
                    std::for_each(begin(v), end(v), std::mem_fn(&vector<int>::clear));
                    foo(v);
                }
            }
        }
    }
}
The best solution I can find is to call the function malloc, which reserves space in heap memory. In the array case you would code something like:
int* myArray = (int*) malloc( sizeof(int) * Len );
After that, don't forget to release the heap memory using free(myArray);
It's a powerful tool for making very large arrays.
Declare this structure in an object with a lifetime guaranteed to outlast the objects that access it, and use a reference to access it. Ideally, you should have a class in your hierarchy that calls all the functions dealing with this struct, so those functions could well be members of the class that owns your large array of vectors.
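A rough sketch of that kind of ownership (the class and member names here are invented for illustration, not taken from the question):
#include <vector>
#include <cstddef>

class Population {
public:
    explicit Population(std::size_t groups) : data_(groups) {}

    // Member functions that work on the big structure live alongside it,
    // so its lifetime automatically covers every access.
    void add(std::size_t group, int value) { data_[group].push_back(value); }
    std::size_t groupSize(std::size_t group) const { return data_[group].size(); }

private:
    std::vector<std::vector<int>> data_;  // element storage lives on the heap
};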
Did you try turning your array of vectors into a vector of vectors? Not knowing how many of an item you will need is what vectors are for, after all.
I believe it would be
vector<vector<int>> myVecs;
Use a different data structure. I'd suggest trying something like one of the sparse matrix classes from Boost. They are optimised for storing numeric data in which each row or column contains a significant number of zeroes. Mind you, if the problem you're trying to solve isn't suitable for a sparse data structure, it would be a good idea to set out the nature of the problem in greater detail. Take another look at https://stackoverflow.com/questions/how-to-ask even though I guess you already read that.
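If the data really is mostly zeroes, a sparse uBLAS matrix might look roughly like this (a sketch, assuming Boost is available; mapped_matrix only stores the entries that have been assigned):
#include <boost/numeric/ublas/matrix_sparse.hpp>

int main() {
    boost::numeric::ublas::mapped_matrix<int> pop(10000000, 4);  // 10M x 4, stored sparsely
    pop(123456, 0) = 42;   // only assigned elements consume memory
    return 0;
}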
But before you do that I think you probably have another problem too:
access myArray many many times, which is resulting in memory leaks and
the program crashing for large calculations
It looks to me from what you write there that your code may have some pre-existing bugs, unless your crashes are simply caused by trying to allocate a 10000000-element array as an automatic (stack) variable.