Pointers to statically allocated objects - c++

I'm trying to understand how pointers to statically allocated objects work and where they can go wrong.
I wrote this code:
int* pinf = NULL;
for (int i = 0; i < 1; i++) {
    int inf = 4;
    pinf = &inf;
}
cout << "inf" << (*pinf) << endl;
I was surprised that it worked, because I thought that inf would disappear when the program left the block and the pointer would point to something that no longer exists. I expected a segmentation fault when dereferencing pinf. At what stage in the program does inf die?

Your understanding is correct. inf disappears when you leave the scope of the loop, and so accessing *pinf yields undefined behavior. Undefined behavior means the compiler and/or program can do anything, which may be to crash, or in this case may be to simply chug along.
This is because inf is on the stack. Even when it is out of scope, pinf still points to a usable memory location on the stack. As far as the runtime is concerned the stack address is fine, and the compiler doesn't bother to insert code to verify that you're not accessing locations beyond the end of the stack. That would be prohibitively expensive in a language designed for speed.
For this reason you must be very careful to avoid undefined behavior. C and C++ are not nice the way Java or C# are where illegal operations pretty much always generate an immediate exception and crash your program. You the programmer have to be vigilant because the compiler will miss all kinds of elementary mistakes you make.
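As a minimal sketch of what's going on (the helper clobber_stack is made up for illustration, and the exact output depends entirely on your compiler, optimization level, and stack layout), you can often watch the stale value get overwritten by calling another function that reuses the same stack space:

#include <iostream>

// Made-up helper: calling any function after the loop is likely (but not
// guaranteed) to reuse the stack slot that 'inf' occupied.
void clobber_stack() {
    volatile int filler[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    (void)filler;
}

int main() {
    int* pinf = nullptr;
    for (int i = 0; i < 1; i++) {
        int inf = 4;
        pinf = &inf;                  // pinf dangles as soon as the loop body ends
    }
    std::cout << *pinf << std::endl;  // undefined behavior: may still print 4
    clobber_stack();
    std::cout << *pinf << std::endl;  // undefined behavior: often prints something else now
}

Both dereferences are equally undefined; the second one just tends to make the problem visible more often.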

You are using a so-called dangling pointer. Dereferencing it results in undefined behavior according to the C++ Standard.

It will probably never "die", because pinf points to a location on the stack.
Stacks don't often shrink.
Modify the stack (by calling other functions, for instance) and you're pretty much guaranteed an overwrite, though.

If you are asking about this:
#include <iostream>
using namespace std;

int main() {
    int* pinf = NULL;
    for (int i = 0; i < 1; i++) {
        int inf = 4;
        pinf = &inf;
    }
    cout << "inf" << (*pinf) << endl;
}
Then what you have is undefined behaviour. The automatically allocated (not static) object inf has gone out of scope and has notionally been destroyed by the time you access it via the pointer. In that case, anything might happen, including it appearing to "work".

You won't necessarily get a SIGSEGV (segmentation fault). inf's memory is probably allocated on the stack, and the stack memory region is probably still mapped into your process at that point; that's probably why you are not getting a segfault.

The behaviour is undefined, but in practice "destructing" an int is a no-op, so most compilers will leave the number alone on the stack until something else comes along to reuse that particular slot.
Some compilers might set the int to 0xDEADBEEF (or some such garbage) when it goes out of scope in debug mode, but that won't make the cout << ... fail; it will simply print the nonsensical value.

The memory may or may not still contain a 4 when it gets to your cout line. It might contain a 4 strictly by accident. :)
First things first: your operating system can only detect memory access gone astray on page boundaries. So, if you're off by 4k or 8k or 16k or more. (Check /proc/self/maps on a Linux system some day to see the memory layout of a process; any addresses in the listed ranges are allowed, any outside the listed ranges aren't allowed. Every modern OS on protected-memory CPUs will support a similar mechanism, so it'll be instructive even if you're just not that interested in Linux. I just know it is easy on Linux.) So, the OS can't help you when your data is so small.
Also, your int inf = 4; might very well be stashed in the .rodata, .data or .text segments of your program. Static variables may be stuffed into any of these sections (I have no idea how the compiler/linker decides; I consider it magic) and they will therefore be valid throughout the entire duration of the program. Check size /bin/sh next time you are on a Unix system for an idea how much data gets put into which sections. (And check out readelf(1) for way too much information. objdump(1) if you're on older systems.)
If you change inf = 4 to inf = i, then the storage will be allocated on the stack, and you stand a much better chance of having it get overwritten quickly.
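To illustrate the storage-duration distinction these answers keep circling (a hedged sketch, not a claim about where any particular compiler puts the original inf), compare a static local with an automatic one:

#include <iostream>

int* static_case() {
    static int inf = 4;   // static storage duration: lives for the whole program
    return &inf;          // fine to use after the function returns
}

int* automatic_case() {
    int inf = 4;          // automatic storage duration: gone at the closing brace
    return &inf;          // dangling immediately; dereferencing it is UB
}

int main() {
    std::cout << *static_case() << std::endl;   // OK, prints 4
    int* p = automatic_case();
    (void)p;   // merely holding the dangling pointer is allowed; using *p is not
}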

A protection fault occurs when the memory page you point to is not valid anymore for the process.
Luckily most OS's don't create a separate page for each integer's worth of stack space.

Related

How can an operating system detect an out of range segmentation fault in C?

I encountered this problem while learning about operating systems, and I'm really interested in how the operating system detects that an array index is out of range and therefore produces a segmentation fault.
int main() {
    char* ptr0;
    ptr0[0] = 1;
}
The code above will absolutely produce a segmentation fault, since ptr0 is not allocated with any memory.
But if I add one line, things change.
int main() {
    char* ptr0;
    char* ptr1 = ptr0;
    ptr0[0] = 1;
}
This code won't cause any fault; even if you change ptr0[0] to ptr0[1000], it still won't cause any segmentation fault.
I don't know why this one line has such power:
char* ptr1 = ptr0;
I tried to disassemble the code but found little information.
Could somebody explain this to me from the perspective of memory allocation? Thanks a lot.
A segmentation fault happens when a process attempts to access memory it's not supposed to, not necessarily if an array is read out of bounds.
In your particular case the variable ptr0 is uninitialized, so if you attempt to read it, any value may be read, and it need not even be consistent. So in the case of the first program the value that was read happened to be an invalid memory address and attempting to write through that address triggered a segfault, while in the case of the second program the value read happened to be a valid address for the program and so a segfault was not generated.
When I ran these programs, both resulted in a segfault. This demonstrates the undefined behavior present in the program which attempts to dereference an invalid pointer.
When a program has undefined behavior, the C standard makes no guarantees regarding what the program will do. It may crash, it may output unexpected results, or it may appear to work properly.
The operating system and the hardware maintain a map of memory in the process’ virtual address space. Each time the process accesses memory, the hardware consults the map information and decides whether or not the access is allowed. If the access is not allowed, it generates a trap to alert the operating system.
This process will catch many incorrect accesses—it will catch all those that attempt to read or write memory that is not mapped or that attempt to write memory that is mapped read-only. It will not catch all incorrect accesses—it will not catch accesses that are to a wrong location (in terms of what the program’s author or user desires) but that are within mapped memory and the appropriate permissions.
Commonly, the operating system’s map information is more complete than the hardware’s. The operating system may map only a subset of a process’s address space. This is because processes often do not use all of their address space (code and data to handle rare errors is often not executed or used) or do not use all of it all the time (processes spend some time doing one task before going on to another, and perhaps later returning to an earlier task). So, when the hardware reports a memory access fault to the operating system, the operating system will consult its complete information. If the process is allowed to access the attempted location, the operating system will set up physical memory as necessary, update the map for the hardware, and resume execution of the process. If the process is not allowed to access the attempted location, the operating system will report a signal to the process or terminate it or take other appropriate action.
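A POSIX-only sketch (assuming mmap-style per-page permissions, as on Linux) that makes the hardware's role visible: permissions are granted per page, and violating them is what produces the trap described above:

#include <cstdio>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    long page = sysconf(_SC_PAGESIZE);

    // Ask the OS to map one page of anonymous memory, readable but not writable.
    char* p = static_cast<char*>(mmap(nullptr, page, PROT_READ,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    std::printf("reading is fine: %d\n", p[0]);   // anonymous pages start zeroed

    p[0] = 1;   // write to a read-only page: the hardware faults, the OS sends SIGSEGV
    std::printf("never reached\n");
}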
The code above [char *ptr0; ptr0[0] = 1;] will absolutely produce a segmentation fault, since ptr0 is not allocated with any memory.
This is false, for several reasons:
Since ptr0 is not initialized, its value is indeterminate. When calculating the address for ptr0[0], the compiler will not necessarily use an address outside the process’ address space; it might use an address that is inside the address space and writable. In this case, storing 1 in that location will not generate a segmentation fault.
Due to a special rule in the C standard (C 2018 6.3.2.1 2), using the uninitialized object ptr0 in this situation results in the behavior of the program not being defined by the C standard. The compiler may transform this program into any other program.
Even if ptr0 were defined, the compiler may observe that the program has no defined observable behavior—it does not read any input, print anything, write anything to files, or change any volatile objects. So the compiler may optimize the program by changing it to an empty main function that does nothing.
But if I add one line, things change.
If there is any change here, using char *ptr0; char *ptr1 = ptr0;, it is mere happenstance of the compiler. This program has the same semantics as the earlier program.
From your other comments, you might have intended to write char ptr0[0]; instead of char *ptr0;. Zero-length arrays are not defined by the C standard. However, some compilers may allow them, as an extension to the C standard. In this case, what likely happens is that the compiler picks a location to put the array, likely on the stack. Then ptr0[0] = 1; attempts to store a byte at that location. Although the array has been assigned a location, zero bytes there are reserved for it. Instead, those bytes may be in use for something else. Possibly they are the function return address or possibly they are just filler used to help align the stack. In this case, ptr0[0] = 1; might overwrite data necessary for your program and break it. Or it might overwrite unused data and have no effect. Or, again, the behavior of your program is not defined by the C standard, so the compiler might transform it in other ways.
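For contrast, a small sketch of a well-defined version of the question's program, where the pointer is initialized to storage the program actually owns before anything is written through it:

#include <cstdio>

int main() {
    char buffer[1];        // one byte of storage we actually own
    char* ptr0 = buffer;   // ptr0 now holds a valid, initialized address
    ptr0[0] = 1;           // well-defined: writes into buffer[0]
    std::printf("%d\n", buffer[0]);
}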

Why does this line of code cause a computer to crash?

Why does this line of code cause the computer to crash? What happens on memory-specific level?
for (int *p = 0; ; *(p++) = 0)
    ;
I have found the "answer" on Everything2, but I want a specific technical answer.
This code sets an integer pointer to null, then repeatedly writes a 0 to the int it points to and increments the pointer, looping forever.
The null pointer is not pointing to anything, so writing a 0 to it is undefined behavior (i.e. the standard doesn't say what should happen). Also you're not allowed to use pointer arithmetic outside arrays and so even just the increment is also undefined behavior.
Undefined behavior means that the compiler and library authors don't need to care at all about these cases and still the system is a valid C/C++ implementation. If a programmer does anything classified as undefined behavior then whatever happens happens and s/he cannot blame compiler and library authors. A programmer entering the undefined behavior realm cannot expect an error message or a crash, but cannot complain if getting one (even one million executed instructions later).
On systems where the null pointer is represented as all zero bits and there is no memory protection, the effect of such a loop could be to start wiping all addressable memory, until some vital part of memory such as an interrupt table is corrupted, or until the code writes zeros over the code itself, self-destructing. On other systems with memory protection (most desktop systems today), execution may instead simply stop at the very first write operation.
Undoubtedly, the cause of the problem is that p has not been assigned a reasonable address.
By not properly initializing a pointer before writing to where it points, it is probably going to do Bad Things™.
It could merely segfault, or it could overwrite something important, like the function's return address where a segfault wouldn't occur until the function attempts to return.
In the 1980s, a theoretician I worked with wrote a program for the 8086 to, once a second, write one word of random data at a randomly computed address. The computer was a process controller with watchdog protection and various types of output. The question was: How long would the system run before it ceased usefully functioning? The answer was hours and hours! This was a vivid demonstration that most of memory is rarely accessed.
It may cause an OS to crash, or it may do any number of other things. You are invoking undefined behavior. You don't own the memory at address 0 and you don't own the memory past it. You're just trampling over memory that doesn't belong to you.
It works by overwriting all the memory at all the addresses, starting from 0 and going upwards. Eventually it will overwrite something important.
On any modern system, this will only crash your program, not the entire computer. CPUs designed since, oh, 1985 or so, have this feature called virtual memory which allows the OS to redirect your program's memory addresses. Most addresses aren't directed anywhere at all, which means trying to access them will just crash your program - and the ones that are directed somewhere will be directed to memory that is allocated to your program, so you can only crash your own program by messing with them.
On much older systems (older than 1985, remember!), there was no such protection and this loop could access memory addresses allocated to other programs and the OS.
The loop is not necessary to explain what's wrong. We can simply look at only the first iteration.
int *p = 0; // Declare a null pointer
*p = 0; // Write to null pointer, causing UB
The second line is causing undefined behavior, since that's what happens when you write to a null pointer.
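If the intent behind such a loop was simply to zero out some memory, a sketch of the well-defined way is to zero a range the program actually owns, for example:

#include <cstdio>
#include <cstring>

int main() {
    int buf[100];
    std::memset(buf, 0, sizeof buf);   // zero only memory we own
    // equivalently: for (int& x : buf) x = 0;
    std::printf("%d %d\n", buf[0], buf[99]);
}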

Why does my dynamically allocated array get initialized to 0?

I have some code that creates a dynamically allocated array with
int *Array = new int[size];
From what I understand, Array should be a pointer to the first item of Array in memory. When using gdb, I can call x Array to examine the value at the first memory location, x Array+1 to examine the second, etc. I expect to have junk values left over from whatever application was using those spots in memory prior to mine. However, using x Array returns 0x00000000 for all those spots. What am I doing wrong? Is my code initializing all of the values of the Array to zero?
EDIT: For the record, I ask because my program is an attempt to implement this: http://eli.thegreenplace.net/2008/08/23/initializing-an-array-in-constant-time/. I want to make sure that my algorithm isn't incrementing through the array to initialize every element to 0.
In most modern OSes, the OS gives zeroed pages to applications, as opposed to letting information seep between unrelated processes. That's important for security reasons, for example. Back in the old DOS days, things were a bit more casual. Today, with memory protected OSes, the OS generally gives you zeros to start with.
So, if this new happens early in your program, you're likely to get zeros. You'd be crazy to rely on that though; it's undefined behavior if you do.
If you keep allocating, filling, and freeing memory, eventually new will return memory that isn't zeroed. Rather, it'll contain remnants of your process' own earlier scribblings.
And there's no guarantee that any particular call to new, even at the beginning of your program, will return memory filled with zeros. You're just likely to see that for calls to new early in your program. Don't let that mislead you.
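If you do want guaranteed zeros (as opposed to the constant-time trick from the linked article), don't rely on the OS handing you fresh pages; ask for value-initialization explicitly. A minimal sketch:

#include <cstddef>
#include <iostream>

int main() {
    const std::size_t size = 8;

    int* uninit = new int[size];     // default-initialized: values are indeterminate
    int* zeroed = new int[size]();   // value-initialized: every element is 0

    std::cout << zeroed[0] << ' ' << zeroed[size - 1] << std::endl;  // guaranteed 0
    // Reading uninit[i] before assigning it is undefined behavior.

    delete[] uninit;
    delete[] zeroed;
}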
I expect to have junk values left over from whatever application was using those spots
It's certainly possible but by no means guaranteed. Particularly in debug builds, you're just as likely to have the runtime zero out that memory (or fill it with some recognisable bit pattern) instead, to help you debug things if you use the memory incorrectly.
And, really, "those spots" is a rather loose term, given virtual addressing.
The important thing is that, no, your code is not setting all those values to zero.

segfault with array

I have two questions regarding array:
First one is regarding following code:
int a[30]; // 1
a[40] = 1; // 2
Why isn't line 2 giving a segfault? It seems like it should, because the array has been allocated
only 30 ints' worth of space, and any dereference outside its allocated space should give a segfault.
Second: assuming the above code works, is there any chance that a[40] will get overwritten, since it doesn't fall within the reserved range of the array?
Thanks in advance.
That's undefined behavior - it may crash, it may silently corrupt data, it may produce no observable results, anything. Don't do it.
In your example the likely explanation is that the array is stack-allocated, so there's a wide range of addresses around the array accessible for writing, and there are no immediate observable results. However, depending on which direction the stack grows on your system (towards larger or smaller addresses), this might overwrite the return address and temporaries of functions up the call stack, and that will crash your program or make it misbehave when it tries to return from the function.
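A hedged sketch of that "silent corruption" case (the exact frame layout is up to the compiler, so the stray write is not guaranteed to land on the neighbouring variable, but it often does in unoptimized builds):

#include <cstdio>

int main() {
    int canary = 42;   // a neighbouring local that might share the frame with 'a'
    int a[30];

    a[40] = 1;   // out of bounds: undefined behavior
    // Depending on the stack layout, that write may have hit 'canary',
    // padding, saved registers, or the return address.
    std::printf("canary = %d\n", canary);
}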
For performance reason, C will not check array size each time you access it. You could also access elements via direct pointers in which case there is no way to validate the access.
SEGFAULT will happen only if you are out of the memory allocated to your process.
For 2nd question, yes it can be overwritten as this memory is allocated to your process and is possibly used by other variables.
It depends on where the system has allocated that array; if by chance position 40 lands in memory reserved by the operating system, then you will receive a segfault.
Your application will crash only if you do something illegal for the rest of your system: if you try to access a virtual memory address that your program doesn't own, your hardware will notice that, will inform your operating system, and it will kill your application with a segmentation fault: you accessed a memory segment you were not supposed to.
However, if you access a random memory address (which is what you did: a[40] is certainly outside of your array a, but it could point anywhere), you could hit a valid memory cell (which is what happened to you).
This is still an error: you'll likely overwrite some memory area your program owns, thus risking breaking your program elsewhere, but the system cannot know whether you accessed it on purpose or by mistake, and it won't kill your program.
Programs written in managed languages (i.e. programs that run in a protected environment that checks everything) would notice your erroneous memory access, but C is not a managed language: you're free to do whatever you want (as long as you don't create problems for the rest of the system).
The reason line 2 works and doesn't throw a segfault is that in C/C++ an array name decays to a pointer to its first element. So your array variable a refers to some memory address, e.g. 1004, and the subscript syntax tells your program how far past that address to look for an element (the index times the element size, in bytes).
This means that
printf("%p", a);
// prints out "1004"
and
printf("%p", a[0]);
// prints out "1004"
should print the same value.
However,
printf("%p", a[40]);
// prints out "1164"
returns the memory address that is sizeof(int) * 40 down from the address of a.
Yes, it will eventually be overwritten.
If you malloc the space, you should get a segfault (or at least I believe so), but when using an array without allocating space, you'll be able to overwrite memory for a while. It will crash eventually, possibly when the program does an array size check or maybe when you hit a memory block reserved for something else (not sure what's going on under the hood).
Funny thing is that, IIRC, efence won't catch this either :D.

Array index out of bound behavior

Why does C/C++ differentiate between these cases of out-of-bounds array indexing?
#include <stdio.h>

int main()
{
    int a[10];
    a[3] = 4;
    a[11] = 3;    // does not give segmentation fault
    a[25] = 4;    // does not give segmentation fault
    a[20000] = 3; // gives segmentation fault
    return 0;
}
I understand that it's trying to access memory allocated to process or thread in case of a[11] or a[25] and it's going out of stack bounds in case of a[20000].
Why doesn't the compiler or linker give an error? Aren't they aware of the array size? If not, then how does sizeof(a) work correctly?
The problem is that C/C++ doesn't actually do any boundary checking with regards to arrays. It depends on the OS to ensure that you are accessing valid memory.
In this particular case, you are declaring a stack-based array. Depending upon the particular implementation, accessing outside the bounds of the array will simply access another part of the already allocated stack space (most OSes and threads reserve a certain portion of memory for the stack). As long as you just happen to be playing around in the pre-allocated stack space, everything will not crash (note I did not say work).
What's happening on the last line is that you have now accessed beyond the part of memory that is allocated for the stack. As a result you are indexing into a part of memory that is not allocated to your process or is allocated in a read only fashion. The OS sees this and sends a seg fault to the process.
This is one of the reasons that C/C++ is so dangerous when it comes to boundary checking.
The segfault is not an intended action of your C program that would tell you that an index is out of bounds. Rather, it is an unintended consequence of undefined behavior.
In C and C++, if you declare an array like
type name[size];
You are only allowed to access elements with indexes from 0 up to size-1. Anything outside of that range causes undefined behavior. If the index was near the range, most probably you read your own program's memory. If the index was largely out of range, most probably your program will be killed by the operating system. But you can't know, anything can happen.
Why does C allow that? Well, the basic gist of C and C++ is to not provide features if they cost performance. C and C++ have been used for ages for highly performance-critical systems. C has been used as an implementation language for kernels and programs where access out of array bounds can be useful to get fast access to objects that lie adjacent in memory. Having the compiler forbid this would be for naught.
Why doesn't it warn about that? Well, you can put warning levels high and hope for the compiler's mercy. This is called quality of implementation (QoI). If some compiler uses open behavior (like, undefined behavior) to do something good, it has a good quality of implementation in that regard.
[js#HOST2 cpp]$ gcc -Wall -O2 main.c
main.c: In function 'main':
main.c:3: warning: array subscript is above array bounds
[js#HOST2 cpp]$
If it instead would format your hard disk upon seeing the array accessed out of bounds - which would be legal for it - the quality of implementation would be rather bad. I enjoyed reading about that stuff in the ANSI C Rationale document.
You generally only get a segmentation fault if you try to access memory your process doesn't own.
What you're seeing in the case of a[11] (and a[10], by the way) is memory that your process does own but that doesn't belong to the a[] array. a[20000] is so far from a[] that it's probably outside your memory altogether.
Changing a[11] is far more insidious as it silently affects a different variable (or the stack frame which may cause a different segmentation fault when your function returns).
C isn't doing this. The OS's virtual memory subsystem is.
In the case where you are only slightly out of bounds you are addressing memory that is allocated for your program (on the call stack in this case). In the case where you are far out of bounds you are addressing memory not given over to your program, and the OS is raising a segmentation fault.
On some systems there is also an OS-enforced concept of "writeable" memory, and you might be trying to write to memory that you own but that is marked unwriteable.
Just to add what other people are saying, you cannot rely on the program simply crashing in these cases, there is no gurantee of what will happen if you attempt to access a memory location beyond the "bounds of the array." It's just the same as if you did something like:
int *p;
p = (int *)135;  // some arbitrary address
*p = 14;
That is just random; this might work. It might not. Don't do it. Code to prevent these sorts of problems.
As litb mentioned, some compilers can detect some out-of-bounds array accesses at compile time. But bounds checking at compile time won't catch everything:
int a[10];
int i = some_complicated_function();
printf("%d\n", a[i]);
To detect this, runtime checks would have to be used, and they're avoided in C because of their performance impact. Even with knowledge of a's array size at compile time, i.e. sizeof(a), it can't protect against that without inserting a runtime check.
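For comparison, this is roughly what such a runtime check looks like when you write it yourself; the extra comparison on every access is exactly the cost C declines to impose (checked_read is a made-up helper):

#include <cassert>
#include <cstdio>

int checked_read(const int* a, int size, int i) {
    assert(i >= 0 && i < size && "index out of bounds");  // runtime check, disabled by NDEBUG
    return a[i];
}

int main() {
    int a[10] = {0};
    std::printf("%d\n", checked_read(a, 10, 3));   // fine
    // checked_read(a, 10, 25);                    // would abort in a debug build
}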
As I understand the question and comments, you understand why bad things can happen when you access memory out of bounds, but you're wondering why your particular compiler didn't warn you.
Compilers are allowed to warn you, and many do at the highest warning levels. However the standard is written to allow people to run compilers for all sorts of devices, and compilers with all sorts of features so the standard requires the least it can while guaranteeing people can do useful work.
There are a few times the standard requires that a certain coding style will generate a diagnostic. There are several other times where the standard does not require a diagnostic. Even when a diagnostic is required I'm not aware of any place where the standard says what the exact wording should be.
But you're not completely out in the cold here. If your compiler doesn't warn you, Lint may. Additionally, there are a number of tools to detect such problems (at run time) for arrays on the heap, one of the more famous being Electric Fence (or DUMA). But even Electric Fence doesn't guarantee it will catch all overrun errors.
That's not a C issue, it's an operating system issue. Your program has been granted a certain memory space and anything you do inside of that is fine. The segmentation fault only happens when you access memory outside of your process space.
Not all operating systems have separate address spaces for each process, in which case you can corrupt the state of another process or of the operating system with no warning.
The C philosophy is to always trust the programmer. Also, not checking bounds allows the program to run faster.
As JaredPar said, C/C++ doesn't always perform range checking. If your program accesses a memory location outside your allocated array, your program may crash, or it may not because it is accessing some other variable on the stack.
To answer your question about sizeof operator in C:
You can reliably use sizeof(array)/sizeof(array[0]) to determine array size, but using it doesn't mean the compiler will perform any range checking.
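A short sketch of that idiom, and of the pitfall that it silently stops working once the array has decayed to a pointer (for instance when it is passed to a function):

#include <cstdio>

void takes_pointer(const int* p) {
    // Here 'p' is just a pointer; sizeof(p) / sizeof(p[0]) is NOT the element count.
    std::printf("inside the function: %zu\n", sizeof(p) / sizeof(p[0]));
}

int main() {
    int a[10];
    std::printf("element count: %zu\n", sizeof(a) / sizeof(a[0]));  // prints 10
    takes_pointer(a);   // the array decays to a pointer; the size information is lost
}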
My research showed that C/C++ developers believe that you shouldn't pay for something you don't use, and they trust the programmers to know what they are doing. (see accepted answer to this: Accessing an array out of bounds gives no error, why?)
If you can use C++ instead of C, maybe use vector? You can use vector[] when you need the performance (but with no range checking) or, preferably, use vector.at() (which has range checking at the cost of some performance). Note that neither of those grows the vector: to add elements safely, use push_back(), which increases capacity automatically if necessary. A sketch follows the link below.
More information on vector: http://www.cplusplus.com/reference/vector/vector/
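And a sketch of the vector version, where at() turns the out-of-range access into a catchable exception instead of undefined behavior:

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v(30, 0);   // 30 elements, all zero

    v[3] = 4;                    // operator[]: fast, no range check
    try {
        v.at(40) = 1;            // at(): range-checked, throws on a bad index
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << std::endl;
    }

    v.push_back(5);              // grows the vector; capacity is handled for you
    std::cout << "size is now " << v.size() << std::endl;
}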