This question already has an answer here:
will Index Out Of Array Bounds throw exception or error before core in C++?
(1 answer)
Closed 5 years ago.
I have this code:
#include <cstdio>

int foo[100];

int main()
{
    for (int i = 0; i < 10000; i++)
        foo[i] = 10000;
}
Debugging with GDB gives a surprising result:
[New Thread 23684.0x59b4]
[New Thread 23684.0x5c0c]
[New Thread 23684.0x541c]
Program received signal SIGSEGV, Segmentation fault.
0x0000000000401564 in main () at C:\Users\DARREN\Documents\Visual Studio
2017\Projects\Untitled1.cpp:9
warning: Source file is more recent than executable.
9 }
(gdb) print i
$1 = 4080
(gdb)
Now, I know the statement foo[i]=10000 caused the error, but I declared foo to be only of size 100. Why would the value of i be so big before the error occurs?
Any explanation is appreciated.
After you made an update to your question you posted this:
int foo[100];

int main()
{
    for (int i = 0; i < 10000; i++)
        foo[i] = 10000;
}
And you are asking about a segmentation fault.
Here you have an array of size 100 and a loop that ranges over [0, 9999], and within the loop you index the array with the loop counter i. Stepping through the iterations, you eventually get to:
foo[i] = 10000;
when i <= 99 everything is okay.
What do you suppose happens when i >= 100?
When you use raw arrays there is no bounds checking; staying in bounds is entirely your responsibility. If you want automatic bounds checking done for you, to prevent this kind of out-of-bounds segmentation fault, use one of the standard containers such as std::vector&lt;T&gt;, std::list&lt;T&gt;, or std::set&lt;T&gt;, depending on your needs. If you need array index notation, std::vector&lt;T&gt; is the way to go, or any other vector type from another library such as Boost.
EDIT
To fix this problem you would have to either increase the size of the array from 100 to 10,000 or decrease your loop's condition from i&lt;10000 to i&lt;100 so that the indexing stays in bounds. Do not forget that C++ arrays start at index 0, so a basic array and loop look like this:
int var[10]; // Uninitialized
for ( int i = 0; i < 10; ++i ) {
    var[i] = 0; // Initialize all array indexes to 0
}
Notice that the condition in the for loop is i < 10, strictly less than the declared size of the array, and not i <= 10; the latter would also generate a segmentation fault or out-of-bounds error.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am trying to solve a problem on Codewars, but my code gives an error. I've tested the code in Code::Blocks and it works, but when I test it on their website it gives me a strange error. I looked it up on the internet and found that it might be a segmentation fault caused by dereferencing a null pointer, but I don't know how to fix it. Below are my code and the error. Can you please tell me what the problem is, and why it works in Code::Blocks but not with the compiler on the website? (P.S. Please excuse my English, I'm from Romania.)
#include <iostream>
#include <vector>
#include <bits/stdc++.h>
using namespace std;

long queueTime(std::vector<int> costumers, int n) {
    vector<int> q;
    int j, x;
    long t;
    for (j = 0; j < n; j++)
        q.push_back(costumers[j]);
    int u = costumers.size();
    while (j <= u) {
        x = *min_element(q.begin(), q.end());
        t = t + x;
        for (int i = 0; i < n; i++) {
            q[i] = q[i] - x;
            if (q[i] == 0) {
                q[i] = costumers[j];
                j++;
            }
        }
    }
    t += *max_element(q.begin(), q.end());
    return t;
}
Error message:
UndefinedBehaviorSanitizer:DEADLYSIGNAL
==1==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000042547b bp 0x000000000000 sp 0x7ffec8fa0510 T1)
==1==The signal is caused by a READ memory access.
==1==Hint: address points to the zero page.
==1==WARNING: invalid path to external symbolizer!
==1==WARNING: Failed to use and restart external symbolizer!
#0 0x42547a (/workspace/test+0x42547a)
#1 0x427ffc (/workspace/test+0x427ffc)
#2 0x42686e (/workspace/test+0x42686e)
#3 0x426435 (/workspace/test+0x426435)
#4 0x42609b (/workspace/test+0x42609b)
#5 0x42aad5 (/workspace/test+0x42aad5)
#6 0x42581d (/workspace/test+0x42581d)
#7 0x7fc90f605b96 (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
#8 0x404489 (/workspace/test+0x404489)
UndefinedBehaviorSanitizer can not provide additional info.
==1==ABORTING
SEGV indicates that a segmentation fault is happening somewhere, so you are on the right track with your debugging. Looking at the code you have provided, here are a few tips that might help you narrow down where things are going wrong.
The first thing that sticks out is that you seem to be taking a local copy of costumers on this line:
for (j = 0; j < n; j++) q.push_back(costumers[j]);
Here you assume that n is less than or equal to costumers.size(); if n is larger than that, this will read beyond the end of the vector. An alternative is to use the = operator instead:
vector<int> q = costumers;
If you actually only wanted the first n elements of costumers copied to q then you could use:
if (n < q.size()) {
    q.resize(n);
}
to shrink it to size afterwards.
Another general style point: it is good practice to follow something called "Resource Acquisition Is Initialization" (RAII). At the top of your queueTime function you have a bunch of variables declared but not initialized to values:
int j, x;
long t;
The problem here is that these will often hold junk values, and if you forget to initialize them later you may be reading those junk values without knowing it. Try instead to declare each variable at the point in the code where you assign a value to it, e.g. for j:
for(int j = 0; ... )
and x
int x = *min_element(q.begin(), q.end());
or, in the case where you need t everywhere in the function scope, at least assign an initial value when you declare it:
long t = 0;
Finally, when using algorithms that return iterators, it is generally good practice to check that they are valid before dereferencing them, i.e. writing:
auto itr_min_elem = min_element(q.begin(), q.end());
if (itr_min_elem == q.end()) {
    continue;
}
int x = *itr_min_elem;
so that if q is empty and min_element returns an end iterator then you don't try to dereference it.
Sorry for the wall of text but I hope these offer some help for debugging your function.
As a general note, why it was working in Code::Blocks but not on the website could come down to a number of reasons, most likely related to how the code is being compiled. Some compilers initialize memory to zeros in debug builds, which can make uninitialized variables behave nicely in debug but in an undefined way in release. Also, depending on the environment the code runs in, the memory layout may differ: reading past the end of an array in one environment may just give junk, while in another you may be indexing into protected memory or outside your program's allocated memory. That would make the platform running your code very unhappy and force it to abort.
This question already has answers here:
Large 2D array gives segmentation fault
(8 answers)
Closed 7 years ago.
I ran the code below and got some strange output:
#include <iostream>
using namespace std;

int main()
{
    for (int ll = 1444; ll < 1450; ++ll)
    {
        cout << ll << endl;
        cout << "###### 0 #######" << endl;
        int mtx[ll][ll];
        cout << "###### 1 #######" << endl;
    }
    return 0;
}
The output is:
1444
###### 0 #######
###### 1 #######
1445
###### 0 #######
###### 1 #######
1446
###### 0 #######
###### 1 #######
1447
###### 0 #######
###### 1 #######
1448
###### 0 #######
Segmentation fault
I checked ll's values one by one: when ll reaches 1448, the segmentation fault happens.
Then I changed the array element type from int to bool, and the problem disappeared.
A calculation based on ll's value:
ll=1447, total space of array is 1447*1447*4 = 8375236 bytes = 8178.95 Kbytes
ll=1448, total space of array is 1448*1448*4 = 8386816 bytes = 8190.25 Kbytes
Could a possible reason be that the size of this array is bigger than the default page size? (How can I check that in Ubuntu 14.04?)
BTW, I tried the same thing in Java and there was no problem.
You are allocating your array on the stack, and the default stack size is usually 8 MB (on Linux you can check the limit with ulimit -s, which reports it in kilobytes).
What you are doing here is allocating an ll-by-ll array on the stack on every iteration. Depending on how large the stack is (you can raise the limit, but there are limits) you will eventually run out of memory.
What you want to do instead is allocate the memory on the heap, using the new operator, or use a container that will do this for you.
Since you are using C++,
Using STL
#include <vector>
using namespace std;
This creates a vector of vectors, each of dimension ll.
vector<vector<int>> v(ll, vector<int>(ll, 0));
This creates a 1-d vector of the same size as your 2d array. I would typically use this approach because it gets the job done with the least fuss. The drawback here is that you would have to address the vector in the form v[i*ll + j] instead of v[i][j]
vector<int> v(ll*ll, 0);
Without Using STL
This creates an array of arrays on the heap, and it is not contiguous. You have to remember to call delete[] in a loop after you are done with the data structure. This approach is considered inferior because an array of arrays is usually a less efficient data structure, precisely because it is not contiguous.
int **v = new int *[ll];
for (int i = 0; i < ll; i++)
v[i] = new int[ll];
This creates a contiguous 2d array on the heap (the original C idiom is malloc(512 * sizeof *array); in C++ you can write it with new):
int (*array)[256] = new int[512][256];
Also see
How do I declare a 2d array in C++ using new?
for creating a 2d array using new
to fix this code I would do the following:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    for (int ll = 1444; ll < 1450; ++ll)
    {
        cout << ll << endl;
        cout << "###### 0 #######" << endl;
        std::vector<int> v(ll * ll, 0);
        cout << "###### 1 #######" << endl;
    }
    return 0;
}
code not tested but you get the idea
I have tried the following code to test for primes:
const int N = 200000;
long prime[N] = {0};
long num_prime = 0;
int is_not_prime[N] = {1, 1};

void Prime_sort(void)
{
    for (long i = 2; i < N; i++)
    {
        if (!is_not_prime[i])
        {
            prime[num_prime++] = i;
        }
        for (long j = 0; j < num_prime && i * prime[i] < N; j++)
        {
            is_not_prime[i * prime[j]] = 1;
        }
    }
}
But when I run it, it causes a segmentation fault, a fault I have never met before. I searched Google, and it explains a segmentation fault as follows:
A segmentation fault (often shortened to segfault) is a particular
error condition that can occur during the operation of computer
software. In short, a segmentation fault occurs when a program
attempts to access a memory location that it is not allowed to access,
or attempts to access a memory location in a way that is not allowed
But I don't know what causes this fault in my code. Please help me.
Your array is_not_prime has length N. On the final iteration of the outer for loop, for example, i will have the value N-1. When i is that big, is_not_prime[i*prime[j]] will cause you to write far out of bounds of the array.
I'm not quite sure what j<num_prime && i*prime[i]<N is supposed to do, but it is likely part of the bug. Single step through the program with your debugger and see at what values the variables have when the program crashes.
Just re-write your program in a less complex manner and all bugs will go away.
Compare your loop bound checking to your indexing - they aren't the same. (I believe you meant to write i*prime[j]<N in your for loop.)
Your program crashes because an index goes out of bounds. And the index goes out of bounds because your algorithm is not valid.
As it still crashes if you set N to a much smaller value
const int N = 3;
it shouldn't be too difficult to see what goes wrong by running your program with pencil and paper...
This question already has answers here:
Accessing an array out of bounds gives no error, why?
(18 answers)
Closed 7 years ago.
While debugging I found an error with an int array of size 0. In a test file I experimented with writing more elements to arrays than their length:
int array[0];
for (int i = 0; i < 10; i++)
    array[i] = i;
for (int i = 0; i < 10; i++)
    cout << array[i];
After I compiled and ran the program I got
0123456789
Then I received the message "test.exe has stopped working". I expected both of these, but what I am confused about is why the compiler lets me create an array of size 0, and why the program doesn't crash until the very end. I expected the program to crash as soon as I exceeded the array length.
Can someone explain?
The compiler should have at least warned you about a zero-size array; if it didn't, consider changing compiler.
Remember that an array is just a bit of memory like any other. In your example the array is probably stored on the stack, so writing off the end of it may not cause much of a problem until your function exits. At that point you may find you have written over the return address, and you get an exception. Writing off the end of an array is a common cause of bugs in C/C++; just be thankful you got an error with this one and it didn't silently overwrite some other unrelated data.
This question already has answers here:
Why doesn't this program segfault?
(1 answer)
Why doesn't this code cause a segfault?
(3 answers)
Closed 9 years ago.
I wrote the following after noticing something weird happening in another project. This code does not produce a segfault even though arrays are being called out of bounds multiple times. Can someone explain to me why there is no segfault from running the code below?
#include <stdlib.h>
#include <stdio.h>

int main()
{
    int *a = (int *)malloc(4 * sizeof(int));
    int *b = (int *)malloc(3 * sizeof(int));
    int i = 0;
    for (i = 0; i < 3; i++)
    {
        b[i] = 3 + i;
    }
    for (i = 0; i < 4; i++)
    {
        a[i] = i;
    }
    for (i = 0; i < 100; i++)
    {
        a[i] = -1;
    }
    for (i = 0; i < 100; i++)
    {
        printf("%d \n", b[i]);
    }
}
A segfault only happens if you try to access memory locations that are not mapped into your process.
The mallocs are taken from bigger chunks of preallocated memory that make up the heap. E.g. the system may create (or grow) the heap in 4K blocks, so reaching beyond the bounds of your arrays will still be inside a block of heap memory that is already allocated to your process (and from which it would serve subsequent mallocs).
In a different situation (where more memory was allocated previously, so your mallocs are near the end of a heap block), this may segfault, but it is basically impossible to predict this (especially taking into account different platforms or compilers).
Undefined behaviour is undefined. Anything can happen, including the appearance of "correct" behaviour.
A segmentation fault occurs when a process tries to access memory that the OS accounts as not belonging to the process. Since memory accounting inside an OS is done by pages (usually 1 page = 4 KB), a process can access any memory within an allocated page without the OS noticing.
Should be using new and not malloc
What is the platform?
When you try undefined behaviour - guess what, it is undefined.