Segmentation fault in the program below - C++

I was trying to solve the very basic SPOJ problem CANDY.
I get a segmentation fault when submitting the solution below, but it works fine in Visual Studio.
I also chose the variable types with size in mind (sum is a long long int), because the sum can be large.
1) Is it due to the fact that I am declaring the array inside the while loop? Should I declare the array outside the while loop, so that every test case reuses the same array?
2) Since a new array is created on every iteration of the loop (one per test case), will that leak memory, or is the memory freed automatically after each test case? (I know that with dynamic memory allocation the memory has to be freed explicitly.) In which scope should I declare the variables?
I have these doubts because a segmentation fault is a memory-access error.
#include <iostream>
using namespace std;

int main() {
    while (1) {
        int n;
        int arr[10001];
        cin >> n;
        if (n == -1)
            break;
        long long int sum = 0;
        for (int i = 0; i < n; i++) {
            int temp;
            cin >> temp;
            sum += temp;
            arr[i] = temp;
        }
        int mean = sum / n;
        if ((sum % n) != 0) {
            cout << -1 << endl;
            continue;
        }
        int count1 = 0;
        for (int i = 0; i < n; i++) {
            if (arr[i] > mean) {
                count1 += (arr[i] - mean);
            }
        }
        cout << count1 << endl;
    }
}

Your problem is probably the stack allocation of int arr[10001]. That is roughly a 40 kB allocation. Now, "allocation" is the wrong word: the compiler essentially just computes the address of arr, something like int *arr = STACK_POINTER - 40004.
Unfortunately, it is common for the stack to be limited to as little as 12 kB by default. This means the operating system maps 12 kB into memory and sets STACK_POINTER to the top of that memory (assuming the stack grows downward).
The net effect is that your arr pointer then points beyond the allocated stack, into unallocated memory, and the first access throws a segmentation fault. Normally you could fix this by raising the stack size with ulimit -s, but you have no control over the judging platform.
You have two options (a sketch of the second follows):
1) Use a heap allocation instead: int *arr = new int[10001]. This is not affected by the initial stack size. In a normal program you should take care to clean this up with delete[], but for a short program like this it is not necessary.
2) Move the declaration of int arr[10001] to file scope. arr will then live in a region known as the BSS section, which is initially zeroed. This is also not affected by the initial stack size.
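For illustration, here is a minimal sketch of the second option applied to the program above (a sketch, assuming the problem's limit is N <= 10000, as the original array size suggests):

#include <iostream>
using namespace std;

int arr[10001]; // file scope: lives in the zero-initialized BSS section, not on the stack

int main() {
    int n;
    while (cin >> n && n != -1) {
        long long sum = 0;
        for (int i = 0; i < n; i++) {
            cin >> arr[i];
            sum += arr[i];
        }
        if (sum % n != 0) {
            cout << -1 << endl;
            continue;
        }
        long long mean = sum / n;
        long long moves = 0;
        for (int i = 0; i < n; i++)
            if (arr[i] > mean)
                moves += arr[i] - mean;
        cout << moves << endl;
    }
}

Only the placement of arr changes the stack usage; the rest is the question's logic, with the counters widened to long long as a precaution.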

Related

Why does C++ allow access to memory I haven't allocated?

In C++ one can declare an array as
typename array[size];
or
typename *array = new typename[size];
where array has length size and the elements are indexed from 0 to size - 1.
My question is: am I allowed to access elements at index >= size?
I wrote this little code to check:
#include <iostream>
using namespace std;

int main()
{
    //int *c;          // for dynamic allocation
    int n;             // length of the array c
    cin >> n;          // read the length
    //c = new int[n];  // for dynamic allocation
    int c[n];          // automatic (stack) allocation; a VLA, a compiler extension in C++
    for (int i = 0; i < n; i++)      // read the elements
        cin >> c[i];
    for (int i = 0; i < n + 10; i++) // print the elements, going 10 past the end
        cout << c[i] << " ";         // to access memory I haven't allocated
    return 0;
}
And the result is like this
2
1 2
1 2 2686612 1970422009 7081064 4199040 2686592 0 1 1970387429 1971087432 2686700
Shouldn't the program have crashed instead of printing garbage values? Both allocation methods give the same result. This makes for bugs that are hard to detect. Is it related to the environment, the compiler I am using, or something else?
I was using the Code::Blocks IDE with the TDM-GCC 4.8.1 compiler on Windows 8.1.
Thanks in advance.
This is called "undefined behavior" in the C++ standard.
Undefined behavior can mean any one of the following:
The program crashes
The program continues to run, but produces meaningless, garbage results
The program continues to run, and automatically copies the entire contents of your hard drive, and posts it on Facebook
The program continues to run, and automatically subscribes you to Publishers Clearinghouse Sweepstakes
The program continues to run, but your computer catches fire and explodes
The program continues to run, and makes your computer self-aware, which automatically links and networks with other self-aware networks, forming Skynet, and destroying the human race
Conclusion: do not read or write elements past the end of your arrays.
C++ compilers don't enforce bounds checking because the standard does not require them to.
When you access an element of an array, no bounds check is done: c[i] is simply translated to *(c + i), an offset of i * sizeof(int) bytes from the start, and that's it. If that area of memory holds nothing in particular you'll get garbage, but you could also be reading other, possibly useful, information; it all depends on what happens to be there.
Note that depending on the OS and the C++ runtime you can get different results; on a Linux box, for instance, you may well get a segmentation fault and the program will crash.
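If you want an out-of-bounds access to fail loudly instead of silently, here is a hedged sketch of the same experiment using std::vector::at, which does check the index and throws std::out_of_range rather than invoking undefined behavior:

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    int n;
    std::cin >> n;
    std::vector<int> c(n);
    for (int i = 0; i < n; i++)
        std::cin >> c[i];
    try {
        for (int i = 0; i < n + 10; i++)
            std::cout << c.at(i) << " "; // at() checks the index on every access
    } catch (const std::out_of_range &e) {
        std::cout << "\nout-of-range access caught: " << e.what() << "\n";
    }
    return 0;
}

Note that operator[] on a vector is still unchecked; only at() pays for the bounds test.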

Segmentation fault during the initialization of an array

I have sometimes seen a segmentation fault during the initialization of an array with a huge size.
For example:
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string h;
    cin >> h;
    int size = h.size();
    cout << size << endl;
    int arr[size][size]; // a VLA: a compiler extension, not standard C++
    cout << arr[0][0] << endl;
    arr[0][0] = 1;
    cout << arr[0][0] << endl;
    return 0;
}
When the user input is a small string, say "sample", the program works fine.
When the user input is a big string, for example with size > 1500, a segmentation fault occurs during the initialization of the array int arr[size][size].
What can be the issue? Is there any problem with declaring an array like the one above?
I think you are running out of memory with those sizes, causing a stack overflow. I recommend allocating the array on the heap, for example with a std::vector. See here: Segmentation fault on large array sizes
In standard C++ an array's size must be a compile-time constant, i.e. the value of your size variable must be known at compile time; the code above compiles only because of a compiler extension (variable-length arrays).
If you want dynamic storage, use std::vector.
MSDN states that the default stack size on Windows is 1 MB; with 1500 elements in each dimension your array would take up 1500 * 1500 * 4 bytes = 9,000,000 bytes ≈ 8.58 MiB. I am not sure about Linux (this states it to be 8 MB); I guess it depends on the compiler and the distribution. So either:
1) If you know there is a limit on the string length, increase the stack size accordingly with the /STACK linker flag on Windows, or as posted in this answer on Linux
2) Allocate the array on the heap; if you don't want to mess around with manual memory allocation, std::vector or std::unique_ptr can be used as a container. A sketch of this option follows.
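For example, a minimal sketch of option 2, replacing the VLA from the question with a single heap allocation:

#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main() {
    string h;
    cin >> h;
    size_t size = h.size();
    cout << size << endl;
    // one flat heap-backed block, indexed as arr[i * size + j];
    // vector<vector<int>>(size, vector<int>(size)) would also work,
    // at the cost of one allocation per row
    vector<int> arr(size * size, 0);
    arr[0 * size + 0] = 1;
    cout << arr[0 * size + 0] << endl;
    return 0;
}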

Segmentation Fault with big input

I wrote this program in C++ to solve COJ (Caribbean Online Judge) problem 1456: http://coj.uci.cu/24h/problem.xhtml?abb=1456. It works just fine with the sample input and with some other files I wrote to test it, but I kept getting 'Wrong Answer' as a verdict, so I decided to try a larger input file, and I got 'Segmentation fault: 11'. The file was 1,000,001 numbers long, not counting the first integer, which gives the number of inputs to be tested. I know the error is related to memory, but I am really lacking more information. I hope someone can help; it is driving me nuts. I program mainly in Java, so I really have no idea how to solve this. :(
#include <stdio.h>

int main(){
    long singleton;
    long N;
    scanf("%ld", &N);
    long arr[N];
    bool sing[N];
    for (int i = 0; i < N; i++) {
        scanf("%ld", &arr[i]);
    }
    for (int j = 0; j < N; j++) {
        if (sing[j] == false) {
            for (int i = j + 1; i < N; i++) {
                if (arr[j] == arr[i]) {
                    sing[j] = true;
                    sing[i] = true;
                    break;
                }
            }
        }
        if (sing[j] == false) {
            singleton = arr[j];
            break;
        }
    }
    printf("%ld\n", singleton);
}
If you are writing in C, you should change the first few lines like this:
#include <stdio.h>
#include <stdlib.h>

int main(void){
    long singleton;
    long N;
    printf("enter the number of values:\n");
    scanf("%ld", &N);
    long *arr;
    arr = malloc(N * sizeof *arr);
    if (arr == NULL) {
        // malloc failed: handle the error gracefully
        // and exit
    }
This will at least allocate the right amount of memory for your array.
Update: note that you can access the elements with the usual
    arr[ii] = 0;
just as if you had declared the array as
    long arr[N];
(which doesn't work for you).
To make it proper C++, you have to convince the standard committee to add Variable length arrays to the language.
To make it valid C, you have to include <stdbool.h>.
Probably your VLA nukes your stack, consuming a whopping 4 * 1,000,001 bytes (assuming a 4-byte long; the bool array only adds a quarter on top of that). Unless you use the proper compiler options, that is probably too much.
Anyway, you should use dynamic memory for that.
Also, using sing without initialisation is ill-advised.
BTW: the easiest C answer to your programming challenge is: read the numbers into an array (allocated with malloc), sort it (qsort works), and output the first non-duplicate; a C++ rendering of this idea is sketched below.
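Since the rest of this thread is C++, here is that suggestion sketched with std::sort in place of qsort (a sketch, assuming the input format from the question: a count followed by the values):

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    long n;
    if (scanf("%ld", &n) != 1 || n <= 0)
        return 1;
    std::vector<long> arr(n); // heap-backed, unlike the VLA
    for (long i = 0; i < n; i++)
        scanf("%ld", &arr[i]);
    std::sort(arr.begin(), arr.end()); // equal values become adjacent
    for (long i = 0; i < n; i++) {
        bool sameAsPrev = (i > 0 && arr[i] == arr[i - 1]);
        bool sameAsNext = (i + 1 < n && arr[i] == arr[i + 1]);
        if (!sameAsPrev && !sameAsNext) { // first non-duplicate
            printf("%ld\n", arr[i]);
            break;
        }
    }
    return 0;
}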
When you write long arr[N]; there is no way that your program can gracefully handle the situation where there is not enough memory to store this array. At best, you might get a segfault.
However, with long *arr = malloc( N * sizeof *arr );, if there is not enough memory then you will find arr == NULL, and then your program can take some other action instead, for example exiting gracefully, or trying again with a smaller number.
Another difference between these two versions is where the memory is allocated from.
In C (and in C++) there are two memory pools where variables can be allocated: automatic memory, and the free store. In programming jargon these are sometimes called "the stack" and "the heap" respectively. long arr[N] uses the automatic area, and malloc uses the free store.
Your compiler and operating system together decide how much memory is available to your program in each pool. Typically, the free store has access to a "large" amount of memory, close to the maximum a process can have on your operating system. The automatic storage area, however, is usually limited in size, and has the additional drawback that when its allocation fails, your process is killed or goes haywire rather than being told about the failure.
Some systems use one large area and have the automatic area grow from the bottom, and free store allocations grow from the top, until they meet. On those systems you probably wouldn't run out of memory for your long arr[N], although the same drawback remains about not being able to handle when it runs out.
So you should prefer using the free store for anything that might be "large".

How to declare 1,000,000 objects of a class - C++

I am writing some C++ code and I need to declare a class having two members:
class arr{
public:
    long num;
    string str;
};
Now I need to store almost 1,000,000 objects of this class (depending on user input, the number of objects can vary in the range 1 <= n <= 1000000).
The objects are created as follows:
#include <iostream>
#include <string>
using namespace std;

class arr{
public:
    long i;
    string str;
};

int main(){
    long n, j, i;
    cin >> n;
    arr a[n];
    .... rest of programme
But if the value of n is larger than 100000 the program hangs, while it works fine for values less than 100000. What approach should I try in order to declare more than 100000 objects in one go? I tried to solve the issue with a 2D array, splitting the array in two parts:
arr a[1000][1000];
but this approach is not working for me.
If anybody has any idea, please help me out with this.
Thanks in advance.
Just use std::vector:
#include <iostream>
#include <vector>

int main(){
    long n;
    std::cin >> n;
    std::vector<arr> a(n); // the elements live on the heap, not the stack
}
What approach should I try to declare more than 100000 objects?
Declare them on the heap, not on the stack. You are declaring 1,000,000 objects on the stack, and that is far too much. Be careful: there is a stack limit:
C/C++ maximum stack size of program
why is stack memory size so limited?
To declare them on the heap, use:
arr *a = new arr[n];
(the pointer is renamed here so that it does not shadow the class name). This is essentially the same mechanism std::vector uses to allocate its elements.
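Putting that together with the matching cleanup, a minimal sketch:

#include <iostream>
#include <string>
using namespace std;

class arr {
public:
    long num;
    string str;
};

int main() {
    long n;
    cin >> n;
    arr *a = new arr[n]; // heap allocation; every element is default-constructed
    // ... use a[0] .. a[n - 1] ...
    delete[] a;          // new[] must be paired with delete[], not plain delete
    return 0;
}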
Here is some background on why allocating one very large object on the stack fails with a segmentation fault, while the same total amount of space can be allocated in many small chunks:
The mechanism is that the system sets up the stack with only a small amount of memory allocated, guarded by an inaccessible memory region. Whenever the program touches that inaccessible region, the hardware alerts the kernel; the kernel analyses the situation, deduces that the program needs more stack memory, grows the stack by a few memory pages, and tells the hardware to continue executing the program normally.
This works perfectly as long as the accessed address is close enough to the top of the stack that the kernel can safely assume the application indeed wanted to draw up a new stack frame, rather than accessing some random memory location due to a bug. In your case, the limit seems to be around 16 MiB.
You should be able to use much more than 16 MiB of memory in total, provided none of your stack frames is larger than 16 MiB, because then the limit is raised in many small steps and the system won't think you just accessed a random location.
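A rough demonstration of this growth behavior (a sketch; the exact sizes and limits are platform-dependent):

#include <cstdio>

// Each call adds a roughly 4 KiB frame, so the top of the stack moves
// in small steps and the kernel keeps extending the mapping.
int deep(int depth) {
    volatile char buf[4096];
    buf[0] = 1;
    if (depth <= 0)
        return buf[0];
    return deep(depth - 1) + buf[0]; // the read after the call prevents tail-call optimization
}

int main() {
    printf("%d\n", deep(1000)); // about 4 MiB of stack, grown page by page: usually fine
    // By contrast, a single frame that jumps far past the guard area,
    // e.g. volatile char big[64 * 1024 * 1024];, is typically killed
    // with a segmentation fault.
    return 0;
}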

C++ StackOverflowException initializing struct over 63992

"Process is terminated due to StackOverflowException" is the error I receive when I run the code below. If I change 63993 to 63992 or smaller there are no errors. I would like to initialize the structure to 100,000 or larger.
#include <Windows.h>
#include <vector>
using namespace std;

struct Point
{
    double x;
    double y;
};

int main()
{
    Point dxF4struct[63993]; // if < 63992, runs fine, over, stack overflow
    Point dxF4point;
    vector<Point> dxF4storage;
    for (int i = 0; i < 1000; i++) {
        dxF4point.x = i; // arbitrary values
        dxF4point.y = i;
        dxF4storage.push_back(dxF4point);
    }
    for (int i = 0; i < dxF4storage.size(); i++) {
        dxF4struct[i].x = dxF4storage.at(i).x;
        dxF4struct[i].y = dxF4storage.at(i).y;
    }
    Sleep(2000);
    return 0;
}
You are simply running out of stack space; it's not infinite, so you have to take care not to exhaust it.
Three obvious choices:
Use std::vector<Point>
Use a global variable.
Use dynamic allocation - e.g. Point *dxF4struct = new Point[64000]. Don't forget to call delete [] dxF4struct; at the end.
I listed the options above in the order I consider preferable.
[Technically, before someone else points that out, yes, you can increase the stack, but that's really just moving the problem up a level somewhere else, and if you keep going at it and putting large structures on the stack, you will run out of stack eventually no matter how large you make the stack]
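A minimal sketch of the first option, keeping the original variable names but letting std::vector own the 100,000 Points on the heap:

#include <cstddef>
#include <vector>

struct Point {
    double x;
    double y;
};

int main() {
    std::vector<Point> dxF4struct(100000); // heap-backed, so 100,000 elements is no problem
    std::vector<Point> dxF4storage;
    for (int i = 0; i < 1000; i++)
        dxF4storage.push_back(Point{ double(i), double(i) });
    for (std::size_t i = 0; i < dxF4storage.size(); i++)
        dxF4struct[i] = dxF4storage[i];
    return 0;
}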
Increase the stack size. On Linux, you can use ulimit to query and set the stack size. On Windows, the stack size is part of the executable and can be set during compilation.
If you do not want to change the stack size, allocate the array on the heap using the new operator.
Well, you're getting a stack overflow, so the allocated stack is too small for this much data. You could probably tell your compiler to reserve more stack space for your executable, but simply allocating the data on the heap (via the std::vector you're already using) is what I would recommend.
Point dxF4struct[63993]; // if < 63992, runs fine, over, stack overflow
On that line you're allocating all your Point structs on the stack. I'm not sure of the exact stack size, but the default is around 1 MB. Since your struct is 16 bytes and you're allocating 63,993 of them, that is 16 bytes * 63,993 = 1,023,888 bytes, which together with everything else on the stack exceeds the limit and causes a stack overflow (funny, posting about a stack overflow on Stack Overflow...).
So you can either tell your environment to allocate more stack space, or allocate the object on the heap.
If you allocate your Point array on the heap, you should be able to allocate 100,000 elements easily (assuming this isn't running on some embedded processor with less than 1 MB of memory):
Point *dxF4struct = new Point[63993];
As a commenter wrote, it's important to know that if you new memory on the heap, it's your responsibility to delete it. Since this uses the array form new[], you need the corresponding array delete[] operator. Modern C++ also has smart pointers that will manage the lifetime of the array for you.
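For instance, a brief sketch of that smart-pointer variant, using std::unique_ptr with an array type (C++14's std::make_unique):

#include <memory>

struct Point {
    double x;
    double y;
};

int main() {
    // make_unique<Point[]>(n) allocates with new[] and the unique_ptr
    // calls delete[] automatically when it goes out of scope
    auto dxF4struct = std::make_unique<Point[]>(100000);
    dxF4struct[0].x = 1.0;
    dxF4struct[0].y = 2.0;
    return 0;
}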