Segmentation fault during the initialization of an array - C++

I sometimes see a segmentation fault during the initialization of an array with a huge size.
For example:
#include <iostream>
#include <string>   // needed for std::string
#include <limits>
using namespace std;

int main()
{
    string h;
    cin >> h;
    int size = h.size();
    cout << size << endl;
    int arr[size][size];   // variable-length array (compiler extension), allocated on the stack
    cout << arr[0][0] << endl;
    arr[0][0] = 1;
    cout << arr[0][0] << endl;
    return 0;
}
When the user input is a small string, let's say "sample", the program works fine.
When the user input is a big string whose size is, for example, >1500, a segmentation fault occurs during the initialization of the array int arr[size][size];.
What can be the issue? Is there any problem with initializing the array like the one above?

I think you are out of memory with those initializations, causing a stack overflow. I recommend allocating it on the heap or using a std::vector. See here: Segmentation fault on large array sizes

I think an array's size must always be a compile-time constant in C++, i.e. the value of your 'size' variable must be known at compile time.
If you want dynamic storage, use std::vector

MSDN states that the default stack size on Windows is 1 MB. With 1500 elements in each dimension your array would take up 1500 * 1500 * 4 bytes = 9,000,000 bytes = 8.58 megabytes. I'm not sure about Linux (this states it to be 8 MB); I guess it depends on the compiler and distribution. So either:
1) If you know that there is a limit on the string length, increase the stack size accordingly with the /STACK linker flag on Windows, or as posted in this answer on Linux
2) Allocate the array on the heap; if you don't want to mess around with memory allocations, std::vector or std::unique_ptr can be used as a container (see the sketch below)
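For the code in the question, a minimal sketch of the std::vector approach might look like this (the variable names follow the original program; only the array allocation changes):
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    string h;
    cin >> h;
    int size = h.size();
    cout << size << endl;

    // size x size matrix on the heap, every element zero-initialized
    vector<vector<int>> arr(size, vector<int>(size, 0));

    cout << arr[0][0] << endl;
    arr[0][0] = 1;
    cout << arr[0][0] << endl;
    return 0;
}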

Related

getting this "Process returned -1073741571 (0xC00000FD)" after running it in codeblocks with MinGW compiler [duplicate]

#include "stdafx.h"
int _tmain(int argc, _TCHAR* argv[])
{
float x[1000][1000];
return 0;
}
I get " First-chance exception at 0x01341637 in s.exe: 0xC00000FD: Stack overflow." why?
Your array is simply too large to fit on the stack. You don't have enough stack space for 1000 * 1000 elements.
You'll need to allocate your array on the heap. You can do this using the new keyword, but an easier way is to just use std::vector.
std::vector<std::vector<float> > floats(1000);
for (unsigned i = 0; i != floats.size(); ++i) floats[i].resize(1000);
This will give you a two-dimensional vector of floats, with 1000 elements per vector.
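Equivalently, the inner vectors can be sized directly in the constructor; this is just a shorter spelling of the same allocation:
// 1000 x 1000 matrix of floats, all elements value-initialized to 0.0f
std::vector<std::vector<float>> floats(1000, std::vector<float>(1000));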
Also see: Segmentation fault on large array sizes
float is 4 bytes, so 4 * 1000 * 1000 = 4 megabytes.
"stack size defaults to 1 MB"
See here: http://msdn.microsoft.com/en-us/library/tdkhxaks(v=VS.100).aspx
As others explained, the object is bigger than the (default) size of the stack. There are two solutions: 1) create the object on the heap, which is likely to be bigger; or 2) increase the stack size, which can be problematic in a 32-bit environment because you can run out of addressable space, but can easily be done in 64 bits.
Just declare your array static:
static float x[1000][1000];
Edited to add:
Sigh. Another silent downvoter. Not that I'm surprised. This is obviously the simplest solution to the OP's problem, so it violates the prime tenet of the OOP Komissariat: the simplest solution is always wrong.

Large arrays on local variables work on Linux but not on Windows [duplicate]

This question already has answers here:
Segmentation fault on large array sizes
(7 answers)
Closed 3 years ago.
Program with large global array:
int ar[2000000];
int main()
{
}
Program with large local array:
int main()
{
    int ar[2000000];
}
When I declare an array with large size in the main function, the program crashes with "SIGSEGV (Segmentation fault)".
However, when I declare it as global, everything works fine. Why is that?
Declaring the array globally causes the compiler to include the space for the array in the data section of the compiled binary. In this case you have increased the binary size by 8 MB (2000000 * 4 bytes per int). However, this does mean that the memory is available at all times and does not need to be allocated on the stack or heap.
EDIT: #Blue Moon rightly points out that an uninitialized array will most likely be placed in the BSS segment and may, in fact, take up no additional disk space, whereas an explicitly initialized array goes into the data segment and does grow the binary.
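A minimal sketch of that distinction (the section names assume a typical toolchain; on Linux the size utility reports the text/data/bss breakdown):
// Both arrays have static storage duration, but they land in different sections.
int zeroed[2000000];          // uninitialized: typically placed in .bss, no space in the binary
int filled[2000000] = {1};    // explicitly initialized: placed in .data, adds ~8 MB to the binary

int main()
{
}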
When you declare an array that large in your program you have probably exceeded the stack size of the program (and ironically caused a stack overflow).
A better way to allocate a large array dynamically is to use a pointer and allocate the memory on the heap like this:
#include <cstdlib>
using namespace std;

int main() {
    int *ar;
    ar = static_cast<int*>(malloc(2000000 * sizeof(int)));   // C++ needs the cast from void*
    if (ar != NULL) {
        // Do something
        free(ar);
    }
    return 0;
}
A good tutorial on the Memory Layout of C Programs can be found here.
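Since this is C++, the same heap allocation can also be written without malloc/free; a minimal sketch using std::vector, where the cleanup happens automatically:
#include <vector>

int main() {
    // 2,000,000 ints on the heap, zero-initialized; freed automatically when ar goes out of scope
    std::vector<int> ar(2000000);
    ar[0] = 42;
    return 0;
}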

Segmentation fault in the program below

I was trying to solve the very basic problem SPOJ CANDY.
I am getting a segmentation fault when submitting the solution below,
but in Visual Studio it works fine.
I also declared the variables with their size in mind (sum as long long int),
because it can be large.
1) Is it because I am declaring the array inside the while loop?
Should I declare the array outside of the while loop so that every test case uses the same array?
2) Is a new array created every time the loop runs (for every test case)? Will it lead to garbage collection, or will the compiler automatically free the memory after every test case? (I know that with dynamic memory allocation we have to free the memory explicitly.) Can you tell me in which scope I
should declare the variables?
I had the above doubts because a segmentation fault is related to memory access.
#include <iostream>
using namespace std;

int main() {
    while (1) {
        int n;
        int arr[10001];
        cin >> n;
        if (n == -1)
            break;
        long long int sum = 0;
        for (int i = 0; i < n; i++) {
            int temp;
            cin >> temp;
            sum += temp;
            arr[i] = temp;
        }
        int mean = sum / n;
        if ((sum % n) != 0) {
            cout << -1 << endl;
            continue;
        }
        int count1 = 0;
        for (int i = 0; i < n; i++) {
            if (arr[i] > mean) {
                count1 += (arr[i] - mean);
            }
        }
        cout << count1 << endl;
    }
}
Your problem is probably due to the stack allocation of int arr[10001]. This is most probably a 40kB allocation. Now, "allocation" is the wrong word, as it essentially just calculates the address of arr by doing something like int * arr = STACK_POINTER-40004.
Unfortunately, it is common to have the maximum stack size be 12 kB by default. This means that the operating system maps 12 kB into memory and sets STACK_POINTER to the top of that memory (assuming the stack grows downward).
So the net effect is that your arr pointer now points beyond the allocated stack -- into unallocated memory -- and the first access throws a segmentation fault. Normally you could fix this by upping the stack size with ulimit -s, but you do not have control over the judging platform used.
You have two options:
use a heap allocation instead: int *arr = new int[10001]. This is not affected by the stack size limit. In a normal program you should take care to delete[] it when done, but for a short program like this it is not strictly necessary (a sketch of this option follows below).
move the declaration of int arr[10001] outside main, to file scope. arr will then be placed in a region known as the BSS section, which is initially zeroed. This is also not affected by the stack size limit.
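A minimal sketch of the first option, keeping the rest of the original loop unchanged (only the allocation and a matching delete[] are new):
#include <iostream>
using namespace std;

int main() {
    int *arr = new int[10001];   // heap allocation, not limited by the stack size
    while (1) {
        int n;
        cin >> n;
        if (n == -1)
            break;
        long long int sum = 0;
        for (int i = 0; i < n; i++) {
            int temp;
            cin >> temp;
            sum += temp;
            arr[i] = temp;
        }
        int mean = sum / n;
        if ((sum % n) != 0) {
            cout << -1 << endl;
            continue;
        }
        int count1 = 0;
        for (int i = 0; i < n; i++) {
            if (arr[i] > mean) {
                count1 += (arr[i] - mean);
            }
        }
        cout << count1 << endl;
    }
    delete[] arr;   // good practice, though the OS reclaims it at exit anyway
    return 0;
}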

the amount of dynamically allocated memory a pointer would take in C++

I have a program:
#include <iostream>
using namespace std;

int main() {
    const int SIZE = 1000;
    typedef int* IntPointer;
    IntPointer ip;
    do {
        ip = new int[SIZE];
        cout << "Memory allocated " << endl << flush;
    } while (ip != nullptr);
}
This code is supposed to test the amount of memory used by ip every time it loops.
I tried to print out the value of ip, which I believe is the memory address in hex. I can see that every time it loops once, the address increases by 4000 in decimal. So, is it correct that every ip will take 4000 bytes of memory? I am wondering if there is any function to get the amount of memory used by each ip. If not, how do I get the cumulative memory used within the loop?
Appreciate your answer. Thank you!
I don't know why you have to allocate memory to find this out.
To get the size of one pointer, use:
cout << "Memory allocated for one IntPointer: " << sizeof(IntPointer);
To get the size of the SIZE ints allocated in each iteration, use:
cout << "Memory allocated for " << SIZE << " ints: " << SIZE * sizeof(int);
You are getting 4000 because each new int[SIZE] allocates memory for 1000 ints, i.e. 4 * 1000 = 4000 bytes.
An integer on your system is 4 bytes, so an int array of 1000 elements takes 4000 bytes. That's correct. std::cout << sizeof(int) << std::endl; will show you that an int has a size of 4 bytes.
There is no portable way to get the amount of memory allocated at runtime. You can get the amount of memory by tracking the sizes yourself, counting sizeof(int) * SIZE bytes for each array. You can't get it at runtime using sizeof; sizeof is a compile-time operator.
The memory allocator will also allocate a struct internally to track the memory chunk and its size, but this bookkeeping struct isn't usually your concern unless you want to write your own memory allocator.
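A minimal sketch of that kind of manual tracking, assuming you only want the cumulative total of your own new int[SIZE] requests (allocator bookkeeping overhead is not included):
#include <iostream>
#include <cstddef>
using namespace std;

int main() {
    const int SIZE = 1000;
    size_t totalBytes = 0;   // our own running count of requested memory

    for (int i = 0; i < 5; ++i) {
        int *ip = new int[SIZE];
        totalBytes += SIZE * sizeof(int);   // 1000 * 4 = 4000 bytes per allocation (typically)
        cout << "Allocated so far: " << totalBytes << " bytes" << endl;
        delete[] ip;   // release each block; the original code leaks them
    }
}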
You can use the sizeof operator to get a size in bytes. Make sure to dereference the pointer, however, otherwise you'll get the size of the pointer itself, not the memory it points to. Plus, just a note: make sure you get around to releasing memory allocated through a pointer. Anything you put on the heap is your own problem, unlike the stack. (You may already be aware, but it doesn't hurt to reiterate.)
In your code, most of the allocated memory is simply leaked, as your pointer can only point at one block at a time.
Edit: Reading this correctly now, yeah, you can't just get the size of the array this way. Dereferencing will only give you the size of the first element. You know the element count of the array, however, so you can simply multiply. If your elements could be different sizes (which in your case they can't), I'm not entirely sure what you would do.

Dynamic allocation

Is it possible to mimic the behavior of dynamic allocation using the following code? For example, we do not know the exact number of integers stored in a file, and we are going to read the file and then store them in an array called Hello.
int x;
int n = 0;
ifstream input("a.dat");
while (!input.eof())
{
    input >> x;
    n++;
}
input.close();

int Hello[n];
cout << "n= " << n << endl;

int i = 0;
while (!input.eof())
{
    input >> Hello[i];
    i++;
}
Is it possible to mimic the behavior of dynamic allocation using the
following code.
No, the major difference is that the array in your program is stored on the stack, whereas all dynamic memory allocation takes place on the heap.
What you are actually doing in your code is using the VLA feature of the C99 standard of C in C++. Compiling with the -pedantic option of the g++ compiler will reveal this. Since it is not directly supported by C++ and is an implementation-specific language extension, it's not a good idea to use it if you aim to write portable code.
VLAs use alloca() to allocate memory on the stack at runtime, and the disadvantages of such a technique are discussed here.
Furthermore, VLAs allocate memory on the stack during runtime, and if the size exceeds the available stack the program simply crashes. While it is OK to quickly create a few bytes of array using a VLA, creating uncertain and possibly large amounts of memory this way is not safe; it is best handled with dynamic memory allocation.
int Hello[n];
is NOT dynamic allocation. It is required that n is a compile time constant if you want to declare Hello in this way.
try:
int* Hello = new int[n];
and don't forget to release the memory when you are done using it:
delete[] Hello;
This is allowed as an extension by some compilers, but is not strictly part of C++.
int Hello[n];
As an alternative, you can allocate the memory yourself:
int* Hello = new int[n];
And free it yourself also:
delete[] Hello;
But you can avoid manual memory management by using std::vector from <vector>. One of its constructors accepts an initial size:
vector<int> Hello(n); // Vector with n elements, all initially 0.
You can also set an initial capacity without resizing, to do the allocation once:
vector<int> Hello; // Empty vector.
Hello.reserve(n); // Allocate space for n elements; size() is still 0.
Then read into an int and use push_back to insert values:
int value;
while (input >> value)
Hello.push_back(value);
Note the use of input >> value as the loop condition: this reads for as long as reads are successful. eof() only returns true after a read has already failed by hitting the end of the file, so testing it before reading is usually not what you want.
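Putting those pieces together, a minimal sketch of reading the whole file into a vector in a single pass (no second read of the stream is needed; the file name follows the question):
#include <fstream>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    ifstream input("a.dat");
    vector<int> Hello;

    int value;
    while (input >> value)   // stops when a read fails (end of file or bad data)
        Hello.push_back(value);

    cout << "n= " << Hello.size() << endl;
    return 0;
}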
For a start, the second
while (!input.eof())
will never execute its body. The stream's eofbit is still set from the first loop (that is what terminated it), and on top of that you closed the input stream before the second loop!
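If you really want the two-pass structure, the stream has to have its error flags cleared and be rewound (or reopened) before the second loop; a minimal sketch, assuming the same a.dat file as in the question:
#include <fstream>
#include <iostream>
using namespace std;

int main() {
    ifstream input("a.dat");

    // First pass: count the integers.
    int x, n = 0;
    while (input >> x)
        n++;

    // Prepare for the second pass: clear the eof/fail bits, then rewind.
    input.clear();
    input.seekg(0, ios::beg);

    // Second pass: read the values into heap storage (not a VLA).
    int* Hello = new int[n];
    for (int i = 0; i < n && input >> Hello[i]; ++i)
        ;

    cout << "n= " << n << endl;
    delete[] Hello;
    return 0;
}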