Segmentation fault in strcpy - c++

Consider the program below:
char str[5];
strcpy(str,"Hello12345678");
printf("%s",str);
When run, this program gives a segmentation fault.
But when the strcpy is replaced with the following, the program runs fine.
strcpy(str,"Hello1234567");
So the question is: it should crash when copying any string longer than 5 characters into str.
So why does it not crash for "Hello1234567" and only crash for "Hello12345678", i.e. for strings of 13 characters or more?
This program was run on a 32-bit machine.

There are three types of behaviour in the standard that you should be interested in.
1/ Defined behaviour. This will work on all complying implementations. Use this freely.
2/ Implementation-defined behaviour. As stated, it depends on the implementation but at least it's still defined. Implementations are required to document what they do in these cases. Use this if you don't care about portability.
3/ Undefined behaviour. Anything can happen. And we mean anything, up to and including your entire computer collapsing into a naked singularity and swallowing itself, you and a large proportion of your workmates. Never use this. Ever! Seriously! Don't make me come over there.
Copying more than 4 characters and a zero byte to a char[5] is undefined behaviour.
Seriously, it doesn't matter why your program crashes with 14 characters but not 13: you're almost certainly overwriting other information on the stack that just happens not to crash you, and your program will most likely produce incorrect results anyway. In fact, the crash is better, since at least it stops you relying on the possibly bad effects.
Increase the size of the array to something more suitable (char[14] in this case with the available information) or use some other data structure that can cope.
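For instance, a minimal corrected sketch (sizing the array from the longest string it must hold; the size 14 is just what this example needs):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char str[14];                  /* "Hello12345678" is 13 chars + 1 NUL byte */
    strcpy(str, "Hello12345678"); /* the copy now stays in bounds */
    printf("%s\n", str);
    return 0;
}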
Update:
Since you seem so concerned with finding out why an extra 7 characters doesn't cause problems but 8 characters does, let's envisage the possible stack layout on entering main(). I say "possible" since the actual layout depends on the calling convention that your compiler uses. Since the C start-up code calls main() with argc and argv, the stack at the start of main(), after allocating space for a char[5], could look like this:
+------------------------------------+
| C start-up code return address (4) |
| argc (4)                           |
| argv (4)                           |
| x = char[5] (5)                    |
+------------------------------------+
When you write the bytes Hello1234567\0 with:
strcpy (x, "Hello1234567");
to x, it overwrites argc and argv but, on return from main(), that's okay. Specifically, Hello populates x, 1234 populates argv and 567\0 populates argc. Provided you don't actually try to use argc and/or argv after that, you'll be okay:
+------------------------------------+ Overwrites with:
| C start-up code return address (4) |
| argc (4)                           | '567<NUL>'
| argv (4)                           | '1234'
| x = char[5] (5)                    | 'Hello'
+------------------------------------+
However, if you write Hello12345678\0 (note the extra "8") to x, it overwrites the argc and argv and also one byte of the return address so that, when main() attempts to return to the C start-up code, it goes off into fairy land instead:
+------------------------------------+ Overwrites with:
| C start-up code return address (4) | '<NUL>'
| argc (4)                           | '5678'
| argv (4)                           | '1234'
| x = char[5] (5)                    | 'Hello'
+------------------------------------+
Again, this depends entirely on the calling convention of your compiler. It's possible a different compiler would always pad out arrays to a multiple of 4 bytes and the code wouldn't fail there until you wrote another three characters. Even the same compiler may allocate variables on the stack frame differently to ensure alignment is satisfied.
That's what they mean by undefined: you don't know what's going to happen.

You're copying to the stack, so how much extra data is needed to crash your program depends on what the compiler has placed on the stack.
Some compilers might produce code that will crash with only a single byte over the buffer size - it's undefined what the behaviour is.
I guess size 13 is enough to overwrite the return address, or something similar, which crashes when your function returns. But another compiler or another platform could/will crash with a different length.
Your program might also crash at a different length only after running for a longer time, if something less immediately important was being overwritten.

For the 32-bit Intel platform, the explanation is the following. When you declare char[5] on the stack, the compiler actually allocates 8 bytes because of alignment. Then it's typical for functions to have the following prologue:
push ebp
mov ebp, esp
This saves the ebp register value on the stack, then moves the esp register value into ebp so that ebp can be used to access the parameters. This causes 4 more bytes on the stack to be occupied by the saved ebp value.
In the epilogue ebp is restored, but its value is usually only used for accessing stack-allocated function parameters, so overwriting it may not hurt in most cases.
So you have the following layout (stack grows downwards on Intel): 8 bytes for your array, then 4 bytes for ebp, then usually the return address.
This is why you need to overwrite at least 13 bytes to crash your program.
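If you want to see where things sit on your own platform, here is a hedged sketch; everything it prints is implementation-dependent, and the layout may well not match the picture above (parameters may live in registers, the compiler may reorder locals, etc.):
#include <stdio.h>

int main(int argc, char *argv[])
{
    char str[5];
    /* Compare the distances between these addresses to the layout
       described above; the result varies by compiler and platform. */
    printf("str  is at %p\n", (void *)str);
    printf("argc is at %p\n", (void *)&argc);
    printf("argv is at %p\n", (void *)&argv);
    return 0;
}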

To add to the above answers: you can test for bugs like these with a tool such as Valgrind. If you're on Windows, have a look at this SO thread.

It depends on what's on the stack after the "str" array. You just happen not to be trampling on anything critical until you copy that many characters.
So it's going to depend on what else is in the function, the compiler you use and possibly the compiler options too.
13 is 5 + 8, suggesting there are two non-critical words after the str array, and then something critical (maybe the return address).

That's the pure beauty of undefined behavior (UB): it's undefined.
Your code:
char str[5];
strcpy(str,"Hello12345678");
Writes 14 bytes/chars to str which can only hold 5 bytes/chars. This invokes UB.

Q: So why does it not crash for "Hello1234567" and only crash for "Hello12345678", i.e. for strings of 13 characters or more?
Because the behaviour is undefined. Use strncpy. See this page http://en.wikipedia.org/wiki/Strcpy for more information.

Re: "Because the behaviour is undefined. Use strncpy."
strncpy is unsafe since it doesn't add a NUL terminator if the source string has a length >= n, where n is the size of the destination buffer.
char s[5];
strncpy(s, "test12345", 5); /* fills all 5 bytes; no NUL terminator is added */
printf("%s", s);            /* may crash: s is not NUL-terminated */
We always use strlcpy to alleviate this.
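Where strlcpy isn't available, a common hedged workaround is to truncate by hand and terminate explicitly:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char s[5];
    strncpy(s, "test12345", sizeof s - 1); /* copy at most 4 chars */
    s[sizeof s - 1] = '\0';                /* guarantee NUL termination */
    printf("%s\n", s);                     /* prints "test" */
    return 0;
}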

Related

Memory waste? If main() should only return 0 or 1, why is main declared with int and not short int or even char?

For example:
#include <stdio.h>

int main (void) /* Why int and not short int? - Waste of Memory */
{
    printf("Hello World!");
    return 0;
}
Why is main() conventionally defined with the int type, which occupies 4 bytes of memory on a 32-bit system, if it usually returns only 0 or 1, when other types such as short int (2 bytes on 32-bit) or even char (1 byte on 32-bit) would be more memory-saving?
It is wasting memory space.
NOTE: The question is not a duplicate of the thread given; its answers only address the return value itself, not its datatype, which is the explicit focus here.
The question is for C and C++. If the answers differ between them, share your wisdom and mention which language in particular is being discussed.
Usually the return value is passed in a machine register (for example, the AX register on Intel processors). The type int corresponds to the machine word; that is, nothing needs to be converted, as would be the case if, say, a byte corresponding to the type char had to be widened to the machine word.
And in fact, main can return any integer value.
It's because of a machine that's half a century old.
Back in the day when C was created, an int was a machine word on the PDP-11 - sixteen bits - and it was natural and efficient to have main return that.
The "machine word" was the only type in the B language, which Ritchie and Thompson had developed earlier, and which C grew out of.
When C added types, not specifying one gave you a machine word - an int.
(It was very important at the time to save space, so not requiring the most common type to be spelled out was a Very Good Thing.)
So, since a B program started with
main()
and programmers are generally language-conservative, C did the same and returned an int.
There are two reasons I would not consider this a waste:
1. Practical use of a 4-byte exit code
If you want to return an exit code that exactly describes an error, you want more than 8 bits.
As an example, you may want to group errors: the first byte could describe the vague type of error, the second byte could describe the function that caused the error, the third byte could give information about the cause of the error, and the fourth byte could carry additional debug information (a sketch of such a scheme follows this list).
2. Padding
If you pass a single short or char, it will still be aligned to fit into a machine word, which is often 4 bytes/32 bits depending on the architecture. This is called padding and means that you will most likely still need 32 bits of memory to return a single short or char.
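A hedged sketch of such grouping (the field layout is invented purely for illustration; note that many shells and the POSIX wait macros will only show you the low 8 bits, as discussed below):
#include <stdlib.h>

/* Hypothetical layout: one byte each for error type, failing function,
   cause, and extra debug information. */
#define MAKE_EXIT_CODE(type, func, cause, dbg) \
    (((type) << 24) | ((func) << 16) | ((cause) << 8) | (dbg))

int main(void)
{
    exit(MAKE_EXIT_CODE(2, 7, 1, 0)); /* e.g. type 2, function 7, cause 1 */
}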
The old-fashioned convention with most shells is to use the least significant 8 bits of int, not just 0 or 1. 16 bits is increasingly common due to that being the minimum size of an int allowed by the standard.
And what would the issue be with wasting space? Is the space really wasted? Is your computer so full of "stuff" that the remaining sizeof(int) * CHAR_BIT - 8 would make a difference? Could the architecture exploit that and use those remaining bits for something else? I very much doubt it.
So I wouldn't say the memory is wasted at all, since you get it back from the operating system when the program finishes. Perhaps extravagant? A bit like using a large wine glass for a small tipple, perhaps?
1st: Your assumption/statement that it usually returns only 0 or 1 is itself wrong.
Usually the return code is expected to be 0 if no error occurred, but otherwise it can return any number to represent different errors, and most (at least command-line) programs do so. Many programs also output negative numbers.
However, there are a few commonly used codes: https://www.tldp.org/LDP/abs/html/exitcodes.html; also, another SO member points to a unix header that contains some codes: https://stackoverflow.com/a/24121322/2331592
So after all it is not just a C or C++ type thing; it also has historical reasons in how most operating systems work and expect programs to behave, and since the languages have to support that, C-like languages at least do so by using an int main(...).
2nd: Your conclusion It is wasting memory space is wrong.
Using an int in comparison to a shorter type does not involve any waste:
Memory is usually handled in word-size units anyway (what that means may depend on your architecture).
Working with sub-word types involves computation overhead on some architectures (read: load the word, mask out unrelated bits; store: load the word from memory, mask out the variable's bits, OR in the new value, write the word back).
The memory is not wasted unless you use it. If you write return 0;, no memory is used at this point. If you return myMemorySaving8bitVar;, you only have 1 byte used (most probably on the stack, if not optimized out altogether).
You're either working in or learning C, so I think it's a Real Good Idea that you are concerned with efficiency. However, it seems that there are a few things that seem to need clarifying here.
First, the int data type is not and never was intended to mean "32 bits". The idea was that int would be the most natural binary integer type on the target machine, usually the size of a register.
Second, the return value from main() is meant to accommodate a wide range of implementations on different operating systems. A POSIX system uses an unsigned 8-bit return code. Windows uses 32-bits that are interpreted by the CMD shell as 2's complement signed. Another OS might choose something else.
And finally, if you're worried about memory "waste", that's an implementation issue that isn't even an issue in this case. Return codes from main are typically returned in machine registers, not in memory, so there is no cost or savings involved. Even if there were, saving 2 bytes in the run of a nontrivial program is not worth any developer's time.
The answer is "because it usually doesn't return only 0 or 1." I found this thread from software engineering community that at least partially answers your question. Here are the two highlights, first from the accepted answer:
An integer gives more room than a byte for reporting the error. It can be enumerated (return of 1 means XYZ, return of 2 means ABC, return of 3, means DEF, etc..) or used as flags (0x0001 means this failed, 0x0002 means that failed, 0x0003 means both this and that failed). Limiting this to just a byte could easily run out of flags (only 8), so the decision was probably to use an integer.
An interesting point is also raised by Keith Thompson:
For example, in the dialect of C used in the Plan 9 operating system main is normally declared as a void function, but the exit status is returned to the calling environment by passing a string pointer to the exits() function. The empty string denotes success, and any non-empty string denotes some kind of failure. This could have been implemented by having main return a char* result.
Here's another interesting bit from a unix.com forum:
(Some of the following may be x86 specific.)
Returning to the original question: Where is the exit status stored? Inside the kernel.
When you call exit(n), the least significant 8 bits of the integer n are written to a cpu register. The kernel system call implementation will then copy it to a process-related data structure.
What if your code doesn't call exit()? The c runtime library responsible for invoking main() will call exit() (or some variant thereof) on your behalf. The return value of main(), which is passed to the c runtime in a register, is used as the argument to the exit() call.
Related to the last quote, here's another from cppreference.com
5) Execution of the return (or the implicit return upon reaching the end of main) is equivalent to first leaving the function normally (which destroys the objects with automatic storage duration) and then calling std::exit with the same argument as the argument of the return. (std::exit then destroys static objects and terminates the program)
Lastly, I found this really cool example here (although the author of the post is wrong in saying that the result returned is the returned value modulo 512). After compiling and executing the following:
int main() {
    return 42001;
}
on my* POSIX-compliant system, echo $? returns 17. That is because 42001 % 256 == 17, which shows that 8 bits of data are actually used. With that in mind, choosing int ensures that enough storage is available for passing the program's exit status information because, as per this answer, compliance with the C++ standard guarantees that the size of int (in bits) can't be less than 8. That's because it must be large enough to hold "the eight-bit code units of the Unicode UTF-8 encoding form."
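The same experiment can be done from C on a POSIX system; here is a sketch using fork()/waitpid(), where WEXITSTATUS reports only the low 8 bits (matching the shell's 17):
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        exit(42001);   /* child: exit value wider than 8 bits */
    int status;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("child exit status: %d\n", WEXITSTATUS(status)); /* prints 17 */
    return 0;
}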
EDIT:
*As Andrew Henle pointed out in the comment:
A fully POSIX compliant system makes the entire int return value available, not just 8 bits. See pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html: "If si_code is equal to CLD_EXITED, then si_status holds the exit value of the process; otherwise, it is equal to the signal that caused the process to change state. The exit value in si_status shall be equal to the full exit value (that is, the value passed to _exit(), _Exit(), or exit(), or returned from main()); it shall not be limited to the least significant eight bits of the value."
I think this makes for an even stronger argument for the use of int over data types of smaller sizes.

How long* cast works

So I have this chunk of code
char buf[2];
buf[0] = 'a';
buf[1] = 'b';
std::cout << *((long *)((void*)buf) + 1) << std::endl;
When I saw that I said to myself:
We have memory address 1000 (for example) and that's the address of buf[1].
So I thought that *((long *)((void*)buf) + 1) will print out whatever is in addresses:
from 1000 until 1000 + sizeof(long)
But that's not the case. This code always prints -858993460.
What is that number and why it isn't random?
I obviously lack the knowledge to understand what is going on so I would appreciate if you could give me a hint or something!
What is that number and why it isn't random?
It is a random value. Nothing in your program suggests that value should be printed.
It happens to be consistent so far as you've run your program. Maybe you haven't run it enough. Using uninitialized memory produces undefined behavior. Programs with UB might work as intended for years and then fail to compile.
By the way, your expression doesn't have the intended meaning. Try adding more spaces.
* ( (long *) ((void*)buf) + 1 )
First you cast buf to void *, then you cast it again to long *, then you added one (to get the next long, not the next byte), and then fetched a long from memory. The bytes that got printed are entirely outside the char[2] array.
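If the goal was just to look at the bytes, a sketch that stays inside the array (using memcpy rather than a type-punned long read) would be:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[2];
    buf[0] = 'a';
    buf[1] = 'b';

    unsigned char bytes[sizeof buf];
    memcpy(bytes, buf, sizeof buf);  /* reads only the two bytes we own */
    for (size_t i = 0; i < sizeof buf; i++)
        printf("byte %zu = 0x%02x\n", i, bytes[i]);
    return 0;
}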
This code is reading past the end of a buffer and so is a security risk and completely undefined behaviour.
The contents of memory beyond buf could be anything.
Your compiler, architecture, and/or build settings may be such that currently that value is the same each time you run it, but that's just blind chance and could change at any point.
It will be different again on 64-bit systems where long is 64 bits wide. Alignment rules may also cause this code to fail outright.
Summary: even though this code is returning you the same result for each run right now, this is totally unsafe and undefined behaviour.
Avoid.
At a particular instant the value at a particular memory address will be constant unless other variables are created which take its place. In your program, if you output the address of buf it will be the same each run. Which means that you would be referring to the same address every time the program is run, and hence the same garbage value would be printed.

Segmentation fault - why and how does it work?

In both of the functions defined below, the code tries to allocate 10M of memory on the stack. But the segmentation fault happens only in the second case and not in the first, and I am trying to understand why.
Function definition 1:
a(int *i)
{
    char iptr[50000000];
    *i = 1;
}
Function definition 2:
a()
{
    char c;
    char iptr[5000000];
    printf("&c = 0x%lx, iptr = 0x%x ... ", &c, iptr);
    fflush(stdout);
    c = iptr[0];
    printf("ok\n");
}
According to my understanding, local variables that are not dynamically allocated are stored in the stack section of the program. So I supposed that at compile time itself the compiler checks whether the variable fits in the stack or not.
Hence, if the above were true, the segmentation fault should occur in both cases (i.e. also in case 1).
The website (http://web.eecs.utk.edu/courses/spring2012/cs360/360/notes/Memory/lecture.html) from which I picked this states that the segfault happens in the second version of a() when the code attempts to push iptr on the stack for the printf call. This is because the stack pointer is pointing to the void. Had we not referenced anything at the stack pointer, our program should have worked.
I need help understanding this last statement and my earlier doubt related to this.
So I suppose, during compile time itself the compiler checks if the variable fits in the stack or not.
No, that cannot be done. When compiling a function, the compiler does not know what the call stack will be when the function is called, so it will assume that you know what you are doing (which might or might not be the case). Also note that the amount of stack space may be affected by both compile-time and runtime restrictions (on Linux you can set the stack size with ulimit in the shell that starts the process).
I need help understanding this last statement and my earlier doubt related to this.
I would not attempt to look too much into that statement, it is not standard but rather based on knowledge of a particular implementation that is not even described there, and thus is built on some assumptions that are not necessarily true.
It assumes that the act of allocating the array does not 'touch' the allocated memory (in some debug builds on some implementations that is false), and thus that whether you attempt to allocate 1 byte or 100M, the allocation is fine as long as the data is not touched by your program -- which need not be the case.
It also assumes that the arguments of the function printf are passed on the stack (this is actually the case in all implementations I know of, due to the variadic nature of the function). With the previous assumption, the array would overflow the stack (assuming a stack of <10M) but would not crash, as the memory is not accessed; but to be able to call printf, the value of the argument would be pushed to the stack beyond the array. This will write to memory, and that write will be beyond the allocated space for the stack and crash.
Again, all this is implementation, not defined by the language.
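A hedged sketch of the distinction being drawn (whether either function actually crashes is entirely implementation-dependent; the sizes are taken from the question):
/* Allocating without touching: the stack pointer moves, but no page
   may ever be accessed, so the overflow can go unnoticed. */
void allocate_only(void)
{
    char big[50000000];
    (void)big;       /* suppress unused warning; memory never touched */
}

/* Touching the allocation: the first access past the guard page is
   what actually faults on many implementations. */
void allocate_and_touch(void)
{
    char big[50000000];
    big[0] = 1;      /* this access is what can trigger the fault */
}

int main(void)
{
    allocate_only();        /* typically fine: nothing is touched     */
    allocate_and_touch();   /* likely faults here, as described above */
    return 0;
}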
The error in your code is being thrown by the following code:
; Find next lower page and probe
cs20:
sub eax, _PAGESIZE_ ; decrease by PAGESIZE
test dword ptr [eax],eax ; probe page. "**This line throws the error**"
jmp short cs10
_chkstk endp
end
This is from the chkstk.asm file, which provides stack checking on procedure entry, and which explicitly defines:
_PAGESIZE_ equ 1000h
Now, as an explanation of your problem, this question tells you everything you need, as mentioned by Shafik Yaghmour.
Your printf format string assumes that pointers, ints (%x), and longs (%lx) are all the same size; this may be false on your platform, leading to undefined behavior. Use %p instead. I intended to make this a comment, but can't yet.
I am surprised no one noticed that the first function allocates 10 times the space of the second function: there are seven zeros after the 5 in the first function, whereas the second function has six zeros after the 5 :-)
I compiled it with gcc-4.6.3 and got a segmentation fault on the first function but not on the second. After I removed the additional zero in the first function, the segfault went away; adding a zero in the second function introduced it. So at least in my case, the reason for this segfault is that the program could not allocate the required space on the stack. I would be happy to hear about observations that differ from the above.

Could I ever want to access the address zero?

The constant 0 is used as the null pointer in C and C++. But as in the question "Pointer to a specific fixed address" there seems to be some possible use of assigning fixed addresses. Is there ever any conceivable need, in any system, for whatever low level task, for accessing the address 0?
If there is, how is that solved with 0 being the null pointer and all?
If not, what makes it certain that there is not such a need?
In neither C nor C++ is the null-pointer value in any way tied to physical address 0. The fact that you use the constant 0 in the source code to set a pointer to the null-pointer value is nothing more than a piece of syntactic sugar. The compiler is required to translate it into the actual physical address used as the null-pointer value on the specific platform.
In other words, 0 in the source code has no physical importance whatsoever. It could have been 42 or 13, for example. I.e. the language authors, if they so pleased, could have made it so that you'd have to do p = 42 in order to set the pointer p to null-pointer value. Again, this does not mean that the physical address 42 would have to be reserved for null pointers. The compiler would be required to translate source code p = 42 into machine code that would stuff the actual physical null-pointer value (0x0000 or 0xBAAD) into the pointer p. That's exactly how it is now with constant 0.
Also note, that neither C nor C++ provides a strictly defined feature that would allow you to assign a specific physical address to a pointer. So your question about "how one would assign 0 address to a pointer" formally has no answer. You simply can't assign a specific address to a pointer in C/C++. However, in the realm of implementation-defined features, the explicit integer-to-pointer conversion is intended to have that effect. So, you'd do it as follows
uintptr_t address = 0;
void *p = (void *) address;
Note, that this is not the same as doing
void *p = 0;
The latter always produces the null-pointer value, while the former in general case does not. The former will normally produce a pointer to physical address 0, which might or might not be the null-pointer value on the given platform.
On a tangential note: you might be interested to know that with Microsoft's C++ compiler, a NULL pointer to member will be represented as the bit pattern 0xFFFFFFFF on a 32-bit machine. That is:
struct foo
{
    int field;
};
int foo::*pmember = 0; // 'null' member pointer
pmember will have the bit pattern 'all ones'. This is because you need this value to distinguish it from
int foo::*pmember = &foo::field;
where the bit pattern will indeed be 'all zeroes' -- since we want offset 0 into the structure foo.
Other C++ compilers may choose a different bit pattern for a null pointer to member, but the key observation is that it won't be the all-zeroes bit pattern you might have been expecting.
You're starting from a mistaken premise. When you assign an integer constant with the value 0 to a pointer, that becomes a null pointer constant. This does not, however, mean that a null pointer necessarily refers to address 0. Quite the contrary, the C and C++ standards are both very clear that a null pointer may refer to some address other than zero.
What it comes down to is this: you do have to set aside an address that a null pointer would refer to -- but it can be essentially any address you choose. When you convert zero to a pointer, it has to refer to that chosen address -- but that's all that's really required. Just for example, if you decided that converting an integer to a pointer would mean adding 0x8000 to the integer, then the null pointer would actually refer to address 0x8000 instead of address 0.
It's also worth noting that dereferencing a null pointer results in undefined behavior. That means you can't do it in portable code, but it does not mean you can't do it at all. When you're writing code for small microcontrollers and such, it's fairly common to include some bits and pieces of code that aren't portable at all. Reading from one address may give you the value from some sensor, while writing to the same address could activate a stepper motor (just for example). The next device (even using exactly the same processor) might be connected up so both of those addresses referred to normal RAM instead.
Even if a null pointer does refer to address 0, that doesn't prevent you from using it to read and/or write whatever happens to be at that address -- it just prevents you from doing so portably -- but that doesn't really matter a whole lot. The only reason address zero would normally be important would be if it was decoded to connect to something other than normal storage, so you probably can't use it entirely portably anyway.
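A sketch of that kind of non-portable, memory-mapped access (the addresses and register roles here are entirely made up for illustration):
#include <stdint.h>

/* Hypothetical device registers on an imagined microcontroller. */
#define SENSOR_REG ((volatile uint8_t *)0x40) /* read: sensor latch   */
#define MOTOR_REG  ((volatile uint8_t *)0x41) /* write: stepper motor */

uint8_t read_sensor(void)
{
    return *SENSOR_REG;  /* reads whatever the hardware decodes at 0x40 */
}

void pulse_motor(uint8_t cmd)
{
    *MOTOR_REG = cmd;    /* drives the imagined motor register */
}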
The compiler takes care of this for you (comp.lang.c FAQ):
If a machine uses a nonzero bit pattern for null pointers, it is the compiler's responsibility to generate it when the programmer requests, by writing "0" or "NULL," a null pointer. Therefore, #defining NULL as 0 on a machine for which internal null pointers are nonzero is as valid as on any other, because the compiler must (and can) still generate the machine's correct null pointers in response to unadorned 0's seen in pointer contexts.
You can get to address zero by referencing zero from a non-pointer context.
In practice, C compilers will happily let your program attempt to write to address 0. Checking every pointer operation at run time for a NULL pointer would be a tad expensive. On desktop computers, the program will crash because the operating system forbids it. On embedded systems without memory protection, the program will indeed write to address 0, which will often crash the whole system.
The address 0 might be useful on an embedded systems (a general term for a CPU that's not in a computer; they run everything from your stereo to your digital camera). Usually, the systems are designed so that you wouldn't need to write to address 0. In every case I know of, it's some kind of special address. Even if the programmer needs to write to it (e.g., to set up an interrupt table), they would only need to write to it during the initial boot sequence (usually a short bit of assembly language to set up the environment for C).
Memory address 0 is also called the Zero Page. This is populated by the BIOS, and contains information about the hardware running on your system. All modern kernels protect this region of memory. You should never need to access this memory, but if you want to you need to do it from within kernel land, a kernel module will do the trick.
On the x86, address 0 (or rather, 0000:0000) and its vicinity in real mode is the location of the interrupt vector table. In the bad old days, you would typically write values to the interrupt vector to install interrupt handlers (or, if you were more disciplined, used the MS-DOS service 0x25). C compilers for MS-DOS defined a far pointer type which, when assigned NULL or 0, would receive the bit pattern 0000 in its segment part and 0000 in its offset part.
Of course, a misbehaving program that accidentally wrote to a far pointer whose value was 0000:0000 would cause very bad things to happen on the machine, typically locking it up and forcing a reboot.
In the question from the link, people are discussing setting pointers to fixed addresses in a microcontroller. When you program a microcontroller, everything is at a much lower level.
You don't even have an OS in the desktop/server PC sense, and you don't have virtual memory and the like. So there it is OK, and even necessary, to access memory at a specific address. On a modern desktop/server PC it is useless and even dangerous.
I compiled some code using gcc for the Motorola HC11, which has no MMU and 0 is a perfectly good address, and was disappointed to find out that to write to address 0, you just write to it. There's no difference between NULL and address 0.
And I can see why. I mean, it's not really possible to define a unique NULL on an architecture where every memory location is potentially valid, so I guess the gcc authors just said 0 was good enough for NULL whether it's a valid address or not.
char *null = 0;
; Clears 8-bit AR and BR and stores it as a 16-bit pointer on the stack.
; The stack pointer, ironically, is stored at address 0.
1b: 4f clra
1c: 5f clrb
1d: de 00 ldx *0 <main>
1f: ed 05 std 5,x
When I compare it with another pointer, the compiler generates a regular comparison. Meaning that it in no way considers char *null = 0 to be a special NULL pointer, and in fact a pointer to address 0 and a "NULL" pointer will be equal.
; addr is a pointer stored at 7,x (offset of 7 from the address in XR) and
; the "NULL" pointer is at 5,y (offset of 5 from the address in YR). It doesn't
; treat the so-called NULL pointer as a special pointer, which is not standards
; compliant as far as I know.
37: de 00 ldx *0 <main>
39: ec 07 ldd 7,x
3b: 18 de 00 ldy *0 <main>
3e: cd a3 05 cpd 5,y
41: 26 10 bne 53 <.LM7>
So to address the original question, I guess my answer is to check your compiler implementation and find out whether they even bothered to implement a unique-value NULL. If not, you don't have to worry about it. ;)
(Of course this answer is not standard compliant.)
It all depends on whether the machine has virtual memory. Systems with it will typically put an unwritable page there, which is probably the behaviour that you are used to. However in systems without it (typically microcontrollers these days, but they used to be far more common) then there's often very interesting things in that area such as an interrupt table. I remember hacking around with those things back in the days of 8-bit systems; fun, and not too big a pain when you had to hard-reset the system and start over. :-)
Yes, you might want to access memory address 0x0h. Why you would want to do this is platform-dependent. A processor might use this for a reset vector, such that writing to it causes the CPU to reset. It could also be used for an interrupt vector, as a memory-mapped interface to some hardware resource (program counter, system clock, etc), or it could even be valid as a plain old memory address. There is nothing necessarily magical about memory address zero, it is just one that was historically used for special purposes (reset vectors and the like). C-like languages follow this tradition by using zero as the address for a NULL pointer, but in reality the underlying hardware may or may not see address zero as special.
The need to access address zero usually arises only in low-level details like bootloaders or drivers. In these cases, the compiler can provide options/pragmas to compile a section of code without optimizations (to prevent accesses through the zero pointer from being optimized away as null-pointer dereferences), or inline assembly can be used to access the true address zero.
C and C++ themselves don't stop you from writing to any particular address; it is the OS that can raise a signal when a program accesses a forbidden address. C and C++ do ensure that any memory obtained from the heap will be at an address different from 0.
I have at times used loads from address zero (on a known platform where that would be guaranteed to segfault) to deliberately crash at an informatively named symbol in library code if the user violates some necessary condition and there isn't any good way to throw an exception available to me. "Segfault at someFunction$xWasnt16ByteAligned" is a pretty effective error message to alert someone to what they did wrong and how to fix it. That said, I wouldn't recommend making a habit of that sort of thing.
Writing to address zero can be done, but it depends upon several factors such as your OS, target architecture and MMU configuration. In fact, it can be a useful debugging tool (but not always).
For example, a few years ago while working on an embedded system (with few debugging tools available), we had a problem which was resulting in a warm reboot. To help locate the problem, we were debugging using sprintf(NULL, ...); and a 9600 baud serial cable. As I said--few debugging tools available. With our setup, we knew that a warm reboot would not corrupt the first 256 bytes of memory. Thus after the warm reboot we could pause the loader and dump the memory contents to find out what happened prior to reboot.
Remember that in all normal cases, you don't actually see specific addresses.
When you allocate memory, the OS supplies you with the address of that chunk of memory.
When you take the address of a variable, the variable has already been allocated at an address determined by the system.
So accessing address zero is not really a problem, because when you follow a pointer, you don't care what address it points to, only that it is valid:
int* i = new int(); // suppose this returns a pointer to address zero
*i = 42; // now we're accessing address zero, writing the value 42 to it
So if you need to access address zero, it'll generally work just fine.
The 0 == null thing only really becomes an issue if for some reason you're accessing physical memory directly. Perhaps you're writing an OS kernel or something like that yourself. In that case, you're going to be writing to specific memory addresses (especially those mapped to hardware registers), and so you might conceivably need to write to address zero. But then you're really bypassing C++ and relying on the specifics of your compiler and hardware platform.
Of course, if you need to write to address zero, that is possible. Only the constant 0 represents a null pointer. The non-constant integer value zero will not, if assigned to a pointer, yield a null pointer.
So you could simply do something like this:
int i = 0;
int* zeroaddr = (int*)i;
now zeroaddr will point to address zero(*), but it will not, strictly speaking, be a null pointer, because the zero value was not constant.
(*): that's not entirely true. The C++ standard only guarantees an "implementation-defined mapping" between integers and addresses. It could convert the 0 to address 0x1633de20 or any other address it likes. But the mapping is usually the intuitive and obvious one, where the integer 0 is mapped to the address zero.
It may surprise many people, but in the core C language there is no such thing as a special null pointer. You are totally free to read and write to address 0 if it's physically possible.
The code below does not even compile, as NULL is not defined:
int main(int argc, char *argv[])
{
    void *p = NULL;
    return 0;
}
OTOH, the code below compiles, and you can read and write address 0, if the hardware/OS allows:
int main(int argc, char *argv[])
{
    int *p = 0;
    *p = 42;
    int x = *p; /* let's assume C99 */
}
Please note, I did not include anything in the above examples.
If we start including stuff from the standard C library, NULL becomes magically defined. As far as I remember it comes from string.h.
NULL is still not a core C feature, it's a CONVENTION of many C library functions to indicate the invalidity of pointers. The C library on the given platform will define NULL to a memory location which is not accessible anyway. Let's try it on a Linux PC:
#include <stdio.h>
int main(int argc, char *argv[])
{
    int *p = NULL;
    printf("NULL is address %p\n", p);
    printf("Contents of address NULL is %d\n", *p);
    return 0;
}
The result is:
NULL is address 0x0
Segmentation fault (core dumped)
So our C library defines NULL to address zero, which it turns out is inaccessible.
But it was not the C compiler, nor even the C-library function printf(), that handled the zero address specially. They all happily tried to work with it normally. It was the OS that detected a segmentation fault when printf tried to read from address zero.
If I remember correctly, in an AVR microcontroller the register file is mapped into the RAM address space and register R0 is at address 0x00. It was clearly done on purpose, and apparently Atmel thinks there are situations when it's convenient to access address 0x00 instead of naming R0 explicitly.
In the program memory, at the address 0x0000 there is a reset interrupt vector and again this address is clearly intended to be accessed when programming the chip.

off-by-one error with string functions (C/C++) and security potentials

So this code has the off-by-one error:
void foo (const char * str) {
    char buffer[64];
    strncpy(buffer, str, sizeof(buffer));
    buffer[sizeof(buffer)] = '\0';
    printf("whoa: %s", buffer);
}
What can a malicious attacker do if she figures out how the function foo() works?
Basically, what kind of potential security problems is this code vulnerable to?
I personally thought that the attacker can't really do anything in this case, but I heard that they can do a lot even when limited to working with 1 byte.
The only off-by-one error I see here is this line:
buffer[sizeof(buffer)] = '\0';
Is that what you're talking about? I'm not an expert on these things, so maybe I'm overlooking something, but since the only thing that will ever get written to that wrong byte is a zero, I think the possibilities are quite limited. The attacker can't control what's being written there. Most likely it would just cause a crash, but it could also cause tons of other odd behavior, all of it specific to your application. I don't see any code-injection vulnerability here unless this error causes your app to expose another such vulnerability that would be used as the vector for the actual attack.
Again, take with a grain of salt...
Read The Shellcoder's Handbook, 2nd Edition for lots of information.
Disclaimer: This is inferred knowledge from some research I just did, and should not be taken as gospel.
It's going to overwrite part or all of your saved frame pointer with a null byte - that's the reference point that your calling function will use to offset its memory accesses. So at that point the calling function's memory operations are going to a different location. I don't know what that location will be, but you don't want to be accessing the wrong memory. I won't say you can do anything, but you might be able to do something.
How do I know this (really, how did I infer this)? Smashing the stack for Fun and Profit by Aleph One. It's quite old, and I don't know if Windows or Compilers have changed the way the stack behaves to avoid these problems. But it's a starting point.
example1.c:
void function(int a, int b, int c) {
    char buffer1[5];
    char buffer2[10];
}

void main() {
    function(1,2,3);
}
To understand what the program does to call function() we compile it with
gcc using the -S switch to generate assembly code output:
$ gcc -S -o example1.s example1.c
By looking at the assembly language output we see that the call to
function() is translated to:
pushl $3
pushl $2
pushl $1
call function
This pushes the 3 arguments to function backwards into the stack, and
calls function(). The instruction 'call' will push the instruction pointer
(IP) onto the stack. We'll call the saved IP the return address (RET). The
first thing done in function is the procedure prolog:
pushl %ebp
movl %esp,%ebp
subl $20,%esp
This pushes EBP, the frame pointer, onto the stack. It then copies the
current SP onto EBP, making it the new FP pointer. We'll call the saved FP
pointer SFP. It then allocates space for the local variables by subtracting
their size from SP.
We must remember that memory can only be addressed in multiples of the
word size. A word in our case is 4 bytes, or 32 bits. So our 5 byte buffer
is really going to take 8 bytes (2 words) of memory, and our 10 byte buffer
is going to take 12 bytes (3 words) of memory. That is why SP is being
subtracted by 20. With that in mind our stack looks like this when
function() is called (each space represents a byte):
bottom of                                                            top of
memory                                                               memory
           buffer2       buffer1   sfp   ret   a     b     c
<------   [            ][        ][    ][    ][    ][    ][    ]

top of                                                            bottom of
stack                                                                stack
Q: What can a malicious attacker do if she figures out how the function foo() works? Basically, what kind of potential security problems is this code vulnerable to?
This is probably not the best example of a bug that could be easily exploited for security purposes, although it could be exploited to crash the code simply by using a string 64 characters long or longer.
While it certainly is a bug that will corrupt the address immediately after the array (on the stack) with a single zero byte, there is no easy way for a hacker to inject data into the corrupted area. Calling the printf() function will push parameters onto the stack and may overwrite the zero that was written out of array bounds, leading to a potentially unterminated string being passed to printf.
However, without intimate knowledge of what goes on in printf (and needing to exploit printf as well as foo), a hacker would be hard pressed to do anything other than crash your code.
FWIW, this is a good reason to compile with warnings on, or to use functions like strncpy_s, which both respects the buffer size and includes a terminating null even if the copied string is larger than the buffer. With strncpy_s, the line buffer[sizeof(buffer)] = '\0'; is not even necessary.
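With plain strncpy, a corrected sketch of foo() keeps both the copy and the terminator inside the array:
#include <stdio.h>
#include <string.h>

void foo_fixed(const char *str)
{
    char buffer[64];
    strncpy(buffer, str, sizeof(buffer) - 1); /* copy at most 63 chars */
    buffer[sizeof(buffer) - 1] = '\0';        /* index 63 is the last valid one */
    printf("whoa: %s", buffer);
}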
The issue is that you don't have permission to write to the item after the array. When you asked for 64 chars for buffer, the system is required to give you at least 64 bytes. It's normal for the system to give you more than that -- in which case the memory belongs to you and there is no problem in practice -- but it is possible that even the first byte after the array belongs to "somebody else."
So what happens if you overwrite it? If the "somebody else" is actually inside your program (maybe in a different structure or thread) the operating system probably won't notice you trampled on that data, but that other structure or thread might. There's no telling what data should be there or how trampling over it will affect things.
In this case you allocated buffer on the stack, which means (1) the somebody else is you, and in fact is your current stack frame, and (2) it's not in another thread (but could affect other local variables in the current stack frame).