This question already has answers here:
Why doesn't this program segfault?
(1 answer)
Why doesn't this code cause a segfault?
(3 answers)
Closed 9 years ago.
I wrote the following after noticing something weird happening in another project. This code does not produce a segfault even though arrays are accessed out of bounds multiple times. Can someone explain why running the code below does not segfault?
#include <stdlib.h>
#include <stdio.h>

int main()
{
    int *a = (int *)malloc(4 * sizeof(int));
    int *b = (int *)malloc(3 * sizeof(int));
    int i = 0;

    for (i = 0; i < 3; i++)
    {
        b[i] = 3 + i;
    }

    for (i = 0; i < 4; i++)
    {
        a[i] = i;
    }

    for (i = 0; i < 100; i++) {
        a[i] = -1;
    }

    for (i = 0; i < 100; i++) {
        printf("%d \n", b[i]);
    }
}
A segfault only happens if you try to access memory locations that are not mapped into your process.
The mallocs are taken from bigger chunks of preallocated memory that make up the heap. E.g. the system may create (or grow) the heap in 4K blocks, so reaching beyond the bounds of your arrays can still land inside that block of heap memory that is already allocated to your process (and from which it would satisfy subsequent mallocs).
In a different situation (where more memory was allocated previously, so your mallocs are near the end of a heap block), this may segfault, but it is essentially impossible to predict (especially taking different platforms and compilers into account).
Undefined behaviour is undefined. Anything can happen, including the appearance of "correct" behaviour.
A segmentation fault occurs when a process tries to access memory that the OS accounts as not belonging to the process. Because memory accounting inside an OS is done by pages (usually 1 page = 4 KB), a process can access any memory within an allocated page without the OS noticing.
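As an illustration only - this is deliberately undefined behaviour, so whether it faults depends entirely on the allocator, OS, and platform - the page-granularity point can be made concrete like this:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *a = (int *)malloc(4 * sizeof(int));
    /* Undefined behaviour: reads past the 16 bytes we own. It often
     * does NOT fault, because the allocator carved this block out of
     * a larger mapped region (at least one 4 KB page). */
    printf("%d\n", a[100]);
    /* An access far outside any mapping is much more likely to fault,
     * because it lands on a page the OS never mapped: */
    /* printf("%d\n", a[100000000]);  -- likely SIGSEGV */
    free(a);
    return 0;
}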
Should be using new and not malloc
What is the platform?
When you invoke undefined behaviour - guess what - it is undefined.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am trying to solve a problem on Codewars, but my code gives an error. I've tested the code in Code::Blocks and it works, but when I test it on their website it gives me a strange error. I looked it up on the internet and found out that it might be a segmentation fault because of a dereference of a null pointer, but I don't know how to fix it. Below are my code and the error. Can you please tell me what the problem is, and why it works in Code::Blocks but not in the compiler on the website? (P.S. Please excuse my English, I'm from Romania.)
#include <iostream>
#include <vector>
#include <bits/stdc++.h>
using namespace std;

long queueTime(std::vector<int> costumers, int n) {
    vector<int> q;
    int j, x;
    long t;
    for (j = 0; j < n; j++)
        q.push_back(costumers[j]);
    int u = costumers.size();
    while (j <= u) {
        x = *min_element(q.begin(), q.end());
        t = t + x;
        for (int i = 0; i < n; i++) {
            q[i] = q[i] - x;
            if (q[i] == 0) {
                q[i] = costumers[j];
                j++;
            }
        }
    }
    t += *max_element(q.begin(), q.end());
    return t;
}
Error message:
UndefinedBehaviorSanitizer:DEADLYSIGNAL
==1==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000042547b bp 0x000000000000 sp 0x7ffec8fa0510 T1)
==1==The signal is caused by a READ memory access.
==1==Hint: address points to the zero page.
==1==WARNING: invalid path to external symbolizer!
==1==WARNING: Failed to use and restart external symbolizer!
#0 0x42547a (/workspace/test+0x42547a)
#1 0x427ffc (/workspace/test+0x427ffc)
#2 0x42686e (/workspace/test+0x42686e)
#3 0x426435 (/workspace/test+0x426435)
#4 0x42609b (/workspace/test+0x42609b)
#5 0x42aad5 (/workspace/test+0x42aad5)
#6 0x42581d (/workspace/test+0x42581d)
#7 0x7fc90f605b96 (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
#8 0x404489 (/workspace/test+0x404489)
UndefinedBehaviorSanitizer can not provide additional info.
==1==ABORTING
SEGV indicates that a segmentation fault is happening somewhere, so you are on the right track with your debugging. Looking at the code you have provided, here are a few tips that might help you narrow down where things are going wrong.
The first thing that sticks out is that you seem to be taking a local copy of costumers on this line:
for (j = 0; j < n; j++) q.push_back(costumers[j]);
Here you are assuming that n is less than or equal to costumers.size(); if n is larger, this will read past the end of the vector. An alternative is to use the = operator instead:
vector<int> q = costumers;
If you actually only wanted the first n elements of costumers copied to q then you could use:
if(n < q.size()){
q.resize(n);
}
to shrink it to size afterwards.
Another general style point: it is good practice to follow something called "Resource Acquisition Is Initialization" (RAII). At the top of your queueTime function you have a bunch of variables declared but not initialized to values:
int j, x;
long t;
The problem is that these will often hold junk values, and if you forget to initialize them later you may be reading that junk without knowing it. Try instead to declare each variable at the point in the code where you assign its value, e.g. for j:
for(int j = 0; ... )
and x
int x = *min_element(q.begin(), q.end());
or in the case where you need t everywhere in the function scope, at least assign an initial value when you declare it
long t = 0;
Finally, when using algorithms that return iterators, it is generally good practice to check that they are valid before dereferencing them, i.e. writing:
auto itr_min_elem = min_element(q.begin(), q.end());
if(itr_min_elem == q.end()){
continue;
}
int x = *itr_min_elem;
so that if q is empty and min_element returns an end iterator then you don't try to dereference it.
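Putting these tips together, here is a rough sketch of how the function could look, assuming the goal is the usual "supermarket queue" kata (n tills, each customer joins whichever till frees up first, return the total elapsed time) - treat it as one possible shape, not the definitive solution:

#include <algorithm>
#include <cstddef>
#include <vector>

long queueTime(std::vector<int> costumers, int n) {
    if (costumers.empty() || n <= 0)
        return 0;
    // One running total per till; never more tills than customers.
    std::vector<long> tills(std::min<std::size_t>(n, costumers.size()), 0);
    for (int c : costumers) {
        // The next customer joins the till that frees up first.
        auto it = std::min_element(tills.begin(), tills.end());
        *it += c;
    }
    // The queue is done when the busiest till finishes.
    return *std::max_element(tills.begin(), tills.end());
}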
Sorry for the wall of text but I hope these offer some help for debugging your function.
As a general note to your question: why it was working in Code::Blocks but not on the website could come down to a number of reasons, most likely related to how the code is being compiled. Some compilers will zero-initialize memory in debug builds, which can make uninitialized variables behave nicely in debug but in an undefined way in release. Also, depending on the environment the code is executed in, there may be differences in memory layout, so reading past the end of an array in one environment may just give junk, while in another you may be indexing into protected memory or outside of your program's allocated memory. That would make the platform running your code very unhappy and force it to abort.
My question is related to a problem described here. I have written a C++ implementation of the Sieve of Eratosthenes that hits a memory overflow if I set the target value too high. As suggested in that question, I am able to fix the problem by using a vector<bool> instead of a normal array.
However, I am hitting the memory overflow at a much lower value than expected, around n = 1 200 000. The discussion in the thread linked above suggests that the normal C++ boolean array uses a byte for each entry, so with 2 GB of RAM, I expect to be able to get to somewhere on the order of n = 2 000 000 000. Why is the practical memory limit so much smaller?
And why does using <vector>, which encodes the booleans as bits instead of bytes, yield more than an eightfold increase in the computable limit?
Here is a working example of my code, with n set to a small value.
#include <iostream>
#include <cmath>
#include <vector>
using namespace std;

int main() {
    // Count and sum of primes below target
    const int target = 100000;

    // Code I want to use:
    bool is_idx_prime[target];
    for (unsigned int i = 0; i < target; i++) {
        // initialize by assuming prime
        is_idx_prime[i] = true;
    }

    // But doesn't work for target larger than ~1200000
    // Have to use this instead
    // vector <bool> is_idx_prime(target, true);

    for (unsigned int i = 2; i < sqrt(target); i++) {
        // All multiples of i * i are nonprime
        // If i itself is nonprime, no need to check
        if (is_idx_prime[i]) {
            for (int j = i; i * j < target; j++) {
                is_idx_prime[i * j] = 0;
            }
        }
    }

    // 0 and 1 are nonprime by definition
    is_idx_prime[0] = 0; is_idx_prime[1] = 0;

    unsigned long long int total = 0;
    unsigned int count = 0;
    for (int i = 0; i < target; i++) {
        // cout << "\n" << i << ": " << is_idx_prime[i];
        if (is_idx_prime[i]) {
            total += i;
            count++;
        }
    }

    cout << "\nCount: " << count;
    cout << "\nTotal: " << total;
    return 0;
}
outputs
Count: 9592
Total: 454396537
C:\Users\[...].exe (process 1004) exited with code 0.
Press any key to close this window . . .
Or, changing n to 1 200 000 yields
C:\Users\[...].exe (process 3144) exited with code -1073741571.
Press any key to close this window . . .
I am using the Microsoft Visual Studio compiler on Windows with the default settings.
Turning the comment into a full answer:
Your operating system reserves a special section of memory to represent the call stack of your program. Each function call pushes a new stack frame onto the stack; when the function returns, its frame is removed. A stack frame includes the memory for the function's parameters and its local variables. The remaining memory is referred to as the heap: there, arbitrary memory allocations can be made, whereas the structure of the stack is governed by the control flow of your program.
Only a limited amount of memory is reserved for the stack, and when it fills up (e.g. due to too many nested function calls or too-large local objects), you get a stack overflow. For this reason, large objects should be allocated on the heap. This also fits the numbers you are seeing: MSVC reserves 1 MB of stack by default, which is right around where a bool array of ~1,200,000 elements (one byte each) stops fitting, while a vector<bool> both lives on the heap and packs eight booleans per byte.
To allocate memory on the heap in C++, you can do any of the following (a combined sketch follows the list):
Use vector<bool> is_idx_prime(target);, which internally does a heap allocation and deallocates the memory for you when the vector goes out of scope. This is the most convenient way.
Use a smart pointer to manage the allocation: auto is_idx_prime = std::make_unique<bool[]>(target); This will also automatically deallocate the memory when the array goes out of scope.
Allocate the memory manually. I am mentioning this only for educational purposes. As mentioned by Paul in the comments, manual memory allocation is generally not advisable, because you have to deallocate the memory manually as well. In a large program with many memory allocations, you will inevitably forget to free some of them, creating a memory leak. In a long-running program, such as a system service, repeated memory leaks will eventually fill up the entire memory (and, speaking from personal experience, this absolutely does happen in practice). But if you did want to make a manual memory allocation, you would use bool *is_idx_prime = new bool[target]; and later deallocate with delete [] is_idx_prime.
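For illustration, a minimal sketch putting the three options side by side (only the allocations themselves are shown; the names are placeholders):

#include <memory>
#include <vector>

int main() {
    const int target = 100000;

    // 1. std::vector: heap-backed storage, freed automatically
    //    when the vector goes out of scope.
    std::vector<bool> v(target, true);

    // 2. Smart pointer (C++14): heap array, also freed automatically.
    auto p = std::make_unique<bool[]>(target);

    // 3. Manual new/delete (educational only - easy to leak).
    bool* raw = new bool[target];
    delete[] raw;

    return 0;
}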
This question already has answers here:
Accessing an array out of bounds gives no error, why?
(18 answers)
Closed 7 years ago.
While debugging, I found an error involving an int array of size 0. In a test program I messed around with arrays, writing more entries into them than their length.
int array[0];

for (int i = 0; i < 10; i++)
    array[i] = i;

for (int i = 0; i < 10; i++)
    cout << array[i];
After I compiled and ran the program I got
0123456789
Then I received the message "test.exe has stopped working". I expected both of these, but what I am confused about is why the compiler lets me create an array of size 0, and why the program doesn't crash until the very end. I expected it to crash as soon as I exceeded the array length.
Can someone explain?
The compiler should have at least warned you about a zero-size array - if it didn't, consider changing compiler.
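For instance (assuming GCC or Clang; MSVC's flags and wording differ), compiling with warnings enabled should flag this:

g++ -Wall -Wextra -pedantic test.cpp
// warning: ISO C++ forbids zero-size array 'array' [-Wpedantic]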
Remember that an array is just a chunk of memory like any other. In your example the array is probably stored on the stack, so writing off the end of it may not cause much of a problem until your function exits. At that point you may find you have written over the return address, and you get an exception. Writing off the end of arrays is a common cause of bugs in C/C++ - just be thankful you got an error with this one and it didn't silently overwrite some other unrelated data.
I have a mystery that I do not have an answer for. I have written a simple program in C++ (and I should say that I'm not a professional C++ developer). Here it is:
#include <iostream>

int main() {
    const int SIZE = 1000;
    int pool1[SIZE];
    int pool2[SIZE];
    int result[SIZE * SIZE];

    // Prepare data
    for (int i = 0; i < SIZE; i++) {
        pool1[i] = i + 1;
        pool2[i] = SIZE - i;
    }

    // Run test
    for (int i = 0; i < SIZE; i++) {
        for (int j = 0; j < SIZE; j++) {
            result[i * SIZE + j] = pool1[i] * pool2[j];
        }
    }

    return 0;
}
The program seems to work (I use it as a kind of benchmark for different languages), but when I ran it under valgrind it started complaining:
==25912== Invalid read of size 4
==25912== at 0x804864B: main (in /home/renra/Dev/Benchmarks/array_iteration/array_iteration_cpp)
==25912== Address 0xbee79da0 is on thread 1's stack
==25912== Invalid write of size 4
==25912== at 0x8048632: main (in /home/renra/Dev/Benchmarks/array_iteration/array_iteration_cpp)
==25912== Address 0xbeaa9498 is on thread 1's stack
==25912== More than 10000000 total errors detected. I'm not reporting any more.
==25912== Final error counts will be inaccurate. Go fix your program!
Hmm, that does not look good. Size 4 probably refers to the size of an int. As you can see, at first I was using SIZE 1000, so the results array would be 1,000,000 ints long. So, I thought, it was just overflowing and I needed a larger value type (at least for the iterators and the array of results). I used unsigned long long (whose max is 18,446,744,073,709,551,615, and all I needed was 1,000,000 = SIZE*SIZE). But I'm still getting these error messages (and they still say the read and write size is 4, even though sizeof(long long) is 8).
Also, the messages are not there when I use a lower SIZE, but they seem to kick in exactly at SIZE 707, regardless of the type used. Does anybody have a clue? I'm quite curious :-).
Neither C nor C++ places a clear limit on the sizes of arrays you can use on the stack, and there is usually no built-in protection either. Just don't allocate such large chunks as automatic (scope-local) variables; use malloc in C or new (or better, std::vector) in C++ for that purpose. As for why it kicks in around SIZE 707: that is roughly where result crosses a 2 MB stack limit, since 707 * 707 * 4 bytes is about 2 MB.
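For example, here is a minimal sketch of the same benchmark with the arrays moved to the heap (using std::vector; the logic is unchanged):

#include <cstddef>
#include <vector>

int main() {
    const int SIZE = 1000;
    // All three arrays now live on the heap, so a large SIZE no
    // longer risks overflowing the stack.
    std::vector<int> pool1(SIZE), pool2(SIZE);
    std::vector<int> result(static_cast<std::size_t>(SIZE) * SIZE);

    // Prepare data
    for (int i = 0; i < SIZE; i++) {
        pool1[i] = i + 1;
        pool2[i] = SIZE - i;
    }

    // Run test
    for (int i = 0; i < SIZE; i++)
        for (int j = 0; j < SIZE; j++)
            result[static_cast<std::size_t>(i) * SIZE + j] = pool1[i] * pool2[j];

    return 0;
}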
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center.
Closed 10 years ago.
This code causes heap corruption according to Visual Studio 2010.
What causes heap corruption? What part of this code is causing it?
#include <iostream>
#include <cstdlib>   // srand, rand, system
#include <cstring>   // memcpy
#include <cmath>     // pow
#include <ctime>     // time
using namespace std;

#define size 65536

// merge() is the asker's own merging routine (not shown in the question).

int main()
{
    int* a = new int[size]; // size is equal to 65536
    srand(time(NULL));
    for (int i = 0; i < size; i++)
    {
        a[i] = 1 + rand() % 10;
    }

    for (int i = 0; (size / 2) / pow((double)2, i) >= 1; i++)
    {
        int n = pow((double)2, i);
        int offset = 0;
        for (int j = 0; j < (size / 2) / pow((double)2, i); j++)
        {
            int* tmp = new int[n];
            merge(a + offset, n, a + offset + n, n, tmp);
            memcpy(a + offset, tmp, n * 2 * sizeof(int));
            offset += pow((double)2, i + 1);
        }
    }

    for (int i = 0; i < size; i++)
    {
        cout << a[i] << " ";
    }
    cout << endl;
    system("PAUSE");
    return 0;
}
I suspect the memcpy is the problem. You're copying n * 2 * sizeof(int) bytes from tmp, while you only allocated n * sizeof(int) for it.
Heap corruption simply means that you have allocated a block of memory and then written data outside that block. Typically this means you've written past the end of the array.
A small amount of overwriting will hit "guard words" that are placed after your memory allocation, so the runtime will detect and report heap corruption while your program continues to run okay. However, if you write further, you may corrupt some other piece of critical data (causing undefined results when your program tries to use it), or you may run off the end of your memory map, giving a fatal access violation.
Check that the indexes into your arrays are always in the range 0..Length-1
If you can't calculate what the maximum index used will be, then add a line of code to check that the index is within range, and break into the debugger if it's not - i.e. check that the values you are passing into merge/memcpy are always in bounds. (Chances are that they write one element too many; a quick bodge is to allocate a bit more memory than you "need", but that's obviously not the correct solution - you need to be sure that you only write the data you intend to.)
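As a sketch of that kind of in-code check, reusing the question's own names (and folding in the tmp allocation fix pointed out above), the inner loop body might become:

#include <cassert>  // at the top of the file

// ... inside the j loop:
assert(offset + 2 * n <= size);               // merged block must stay in bounds
int* tmp = new int[2 * n];                    // room for both n-element halves
merge(a + offset, n, a + offset + n, n, tmp);
memcpy(a + offset, tmp, 2 * n * sizeof(int));
delete[] tmp;                                 // also plugs the per-iteration leak
offset += 2 * n;                              // same as pow((double)2, i + 1)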
You haven't allocated enough space for tmp:
int* tmp = new int[2*n];
The incrementing in the merge code for (..; ...; c_i++) looks very suspicious too.
You probably have a couple of bugs, use a debugger or write trace messages and check what's going on - verify that you don't write out of bounds.