C++ buffer overflow different on 3 machines

I am testing a simple buffer overflow in C++. The example is a test where, given that checks are not in place, a malicious user could overwrite variables using a buffer overflow.
The example defines a buffer and then a variable, meaning that space should be allocated for the buffer first and then for the variable. The example reads from cin into a buffer of length 5 and then checks whether the admin variable is set to something other than 0; if it is, the user has conceptually gained admin access.
#include <iostream>
#include <cstring> // for strcmp
using namespace std;
int main()
{
    char buffer[5];
    int admin = 0;
    cin >> buffer; // no length check: typing more than 4 characters overflows
    if (strcmp(buffer, "in") == 0)
    {
        admin = 1;
        cout << "Correct" << endl;
    }
    if (admin != 0)
        cout << "Access" << endl;
    return 0;
}
I have 3 machines: 1 Windows and 2 Linux systems.
When I test this on Windows (CodeBlocks) it works (logically):
entering more than 5 characters overflows and rewrites the admin variable's bytes.
My first Linux system also works, but only when I enter 13 characters. Is this to do with different compilers and how they allocate memory for the program?
My second Linux machine can't overflow at all; it will only give a dump error after the 13th character.
Why do they differ that much?

You should examine the disassembly. From there you will see precisely what happens.
Generally speaking, there are two things to consider:
Padding done by the compiler to align stack variables.
Relative placement of the stack variables by the compiler.
The first point: your array char buffer[5]; will be padded so that int admin; is properly aligned on the stack. I would expect it to generally be padded to 8 bytes on both x86 and x64, and so 9 characters to overwrite, but the compiler might do it differently depending on what it sees fit. Nonetheless, it appears that your Windows and Linux machines are x86 (32-bit).
The second point: the compiler is not required to put stack variables on the stack in the order of their declaration. On Windows and the first Linux machine the compiler does indeed place char buffer[5]; below int admin;, so you can overflow into it. On the second Linux machine the compiler chooses the reverse order, so instead of overflowing into int admin;, you corrupt the stack frame of the caller of main() once you write beyond the space allocated for char buffer[5];.
Here is a shameless link to my own answer to a similar question - an example of examining such an overflow.
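If you can't read disassembly yet, a minimal sketch (assuming the compiler keeps both locals on the stack and doesn't optimise them away) that prints where the two variables ended up relative to each other:
#include <iostream>

int main()
{
    char buffer[5];
    int admin = 0;

    // The gap between the two addresses shows how much padding was inserted,
    // and the ordering shows which variable sits lower on the stack.
    std::cout << "buffer at " << static_cast<void*>(buffer) << '\n';
    std::cout << "admin  at " << static_cast<void*>(&admin) << '\n';

    buffer[0] = '\0'; // touch the variables so they aren't discarded
    return admin;
}
If buffer lies below admin, overflowing it can reach admin; if it lies above, the overflow runs into the caller's stack frame instead.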

Undefined behavior is, as you have discovered, undefined. Trying to explain it is in general not terribly productive.
In this case it is almost certainly due to the arrangement of your stack and the padding bytes inserted between the local variables varying between compilers/systems.

Related

Memory waste? If main() should only return 0 or 1, why is main declared with int and not short int or even char?

For example:
#include <stdio.h>
int main(void) /* Why int and not short int? - Waste of Memory */
{
    printf("Hello World!");
    return 0;
}
Why is main() conventionally defined with the int type, which occupies 4 bytes of memory on 32-bit systems, if it usually returns only 0 or 1, while other types such as short int (2 bytes on 32-bit) or even char (1 byte on 32-bit) would save more memory?
It is wasting memory space.
NOTE: This question is not a duplicate of the thread given; its answers only address the return value itself, not its data type, which is the explicit focus here.
The question is about both C and C++. If the answers differ between the two, please share your wisdom and mention which language in particular you are addressing.
Usually the return value is passed back in a register (for example the register AX on Intel processors). The type int corresponds to the machine word; that is, no conversion is required, as there would be for, say, a byte corresponding to the type char.
And in fact, main can return any integer value.
It's because of a machine that's half a century old.
Back in the day when C was created, an int was a machine word on the PDP-11 - sixteen bits - and it was natural and efficient to have main return that.
The "machine word" was the only type in the B language, which Ritchie and Thompson had developed earlier, and which C grew out of.
When C added types, not specifying one gave you a machine word - an int.
(It was very important at the time to save space, so not requiring the most common type to be spelled out was a Very Good Thing.)
So, since a B program started with
main()
and programmers are generally language-conservative, C did the same and returned an int.
There are two reasons I would not consider this a waste:
1. Practical use of a 4-byte exit code
If you want to return an exit code that exactly describes an error, you want more than 8 bits.
As an example, you may want to group errors: the first byte could describe the vague type of error, the second byte could describe the function that caused the error, the third byte could give information about the cause of the error, and the fourth byte could carry additional debug information (see the sketch after this list).
2. Padding
If you pass a single short or char, it will still be aligned to fit into a machine word, which is often 4 bytes/32 bits depending on the architecture. This is called padding and means that you will most likely still need 32 bits of memory to return a single short or char.
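A minimal sketch of the first point, with entirely made-up error components, packing four descriptive bytes into one int exit code:
#include <cstdint>

// Hypothetical error components, one byte each.
constexpr std::uint8_t kErrType  = 0x02; // vague type of error
constexpr std::uint8_t kErrWhere = 0x1A; // function that caused it
constexpr std::uint8_t kErrCause = 0x05; // cause of the error
constexpr std::uint8_t kErrDebug = 0x7F; // extra debug information

int main()
{
    // Pack the four bytes into a single int so the caller can unpack them.
    int code = (kErrType << 24) | (kErrWhere << 16) | (kErrCause << 8) | kErrDebug;
    return code; // note: many shells will only report the low 8 bits of this
}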
The old-fashioned convention with most shells is to use the least significant 8 bits of int, not just 0 or 1. 16 bits is increasingly common due to that being the minimum size of an int allowed by the standard.
And what would the issue be with wasting space? Is the space really wasted? Is your computer so full of "stuff" that the remaining sizeof(int) * CHAR_BIT - 8 would make a difference? Could the architecture exploit that and use those remaining bits for something else? I very much doubt it.
So I wouldn't say the memory is at all wasted, since you get it back from the operating system when the program finishes. Perhaps extravagant? A bit like using a large wine glass for a small tipple, perhaps?
1st: Your assumption/statement that it usually returns only 0 or 1 is wrong to begin with.
Usually the return code is expected to be 0 if no error occurred, but otherwise it can return any number to represent different errors, and most programs (at least command-line programs) do so. Many programs also output negative numbers.
However, there are a few commonly used codes: https://www.tldp.org/LDP/abs/html/exitcodes.html; also, here another SO member points to a Unix header that contains some codes: https://stackoverflow.com/a/24121322/2331592
So after all it is not just a C or C++ type thing; it also has historical reasons in how most operating systems work and expect programs to behave, and since the languages have to support that, at least C-like languages do so by using int main(...).
2nd: Your conclusion It is wasting memory space is wrong.
Using an int in comparison to a shorter type does not involve any waste.
Memory is usually handled in word-sized units anyway (what exactly that means may depend on your architecture).
Working with sub-word types involves computation overhead on some architectures (read: load the word, mask out the unrelated bits; store: load the word, mask out the variable's bits, OR in the new value, write the word back).
The memory is not wasted unless you use it. If you write return 0; no memory is ever used at this point. If you return myMemorySaving8bitVar; you only use 1 byte (most probably on the stack, if it is not optimized out entirely).
You're either working in or learning C, so I think it's a Real Good Idea that you are concerned with efficiency. However, it seems that there are a few things that seem to need clarifying here.
First, the int data type is not and never was intended to mean "32 bits". The idea was that int would be the most natural binary integer type on the target machine--usually the size of a register.
Second, the return value from main() is meant to accommodate a wide range of implementations on different operating systems. A POSIX system uses an unsigned 8-bit return code. Windows uses 32-bits that are interpreted by the CMD shell as 2's complement signed. Another OS might choose something else.
And finally, if you're worried about memory "waste", that's an implementation issue that isn't even an issue in this case. Return codes from main are typically returned in machine registers, not in memory, so there is no cost or savings involved. Even if there were, saving 2 bytes in the run of a nontrivial program is not worth any developer's time.
The answer is "because it usually doesn't return only 0 or 1." I found this thread from software engineering community that at least partially answers your question. Here are the two highlights, first from the accepted answer:
An integer gives more room than a byte for reporting the error. It can be enumerated (return of 1 means XYZ, return of 2 means ABC, return of 3, means DEF, etc..) or used as flags (0x0001 means this failed, 0x0002 means that failed, 0x0003 means both this and that failed). Limiting this to just a byte could easily run out of flags (only 8), so the decision was probably to use an integer.
An interesting point is also raised by Keith Thompson:
For example, in the dialect of C used in the Plan 9 operating system main is normally declared as a void function, but the exit status is returned to the calling environment by passing a string pointer to the exits() function. The empty string denotes success, and any non-empty string denotes some kind of failure. This could have been implemented by having main return a char* result.
Here's another interesting bit from a unix.com forum:
(Some of the following may be x86 specific.)
Returning to the original question: Where is the exit status stored? Inside the kernel.
When you call exit(n), the least significant 8 bits of the integer n are written to a cpu register. The kernel system call implementation will then copy it to a process-related data structure.
What if your code doesn't call exit()? The c runtime library responsible for invoking main() will call exit() (or some variant thereof) on your behalf. The return value of main(), which is passed to the c runtime in a register, is used as the argument to the exit() call.
Related to the last quote, here's another from cppreference.com
5) Execution of the return (or the implicit return upon reaching the end of main) is equivalent to first leaving the function normally (which destroys the objects with automatic storage duration) and then calling std::exit with the same argument as the argument of the return. (std::exit then destroys static objects and terminates the program)
Lastly, I found this really cool example here (although the author of the post is wrong in saying that the result returned is the returned value modulo 512). After compiling and executing the following:
int main() {
    return 42001;
}
on a POSIX compliant* system, echo $? returns 17. That is because 42001 % 256 == 17, which shows that only 8 bits of the value are actually used. With that in mind, choosing int ensures that enough storage is available for passing the program's exit status information, because, as per this answer, compliance with the C++ standard guarantees that the size of an int (in bits)
can't be less than 8: a byte must be large enough to hold "the eight-bit code units of the Unicode UTF-8 encoding form", and an int is at least as wide as a byte.
EDIT:
*As Andrew Henle pointed out in the comment:
A fully POSIX compliant system makes the entire int return value available, not just 8 bits. See pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html: "If si_code is equal to CLD_EXITED, then si_status holds the exit value of the process; otherwise, it is equal to the signal that caused the process to change state. The exit value in si_status shall be equal to the full exit value (that is, the value passed to _exit(), _Exit(), or exit(), or returned from main()); it shall not be limited to the least significant eight bits of the value."
I think this makes for an even stronger argument for the use of int over data types of smaller sizes.
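To see the truncation described above from the parent's side, here is a small POSIX-only sketch (fork/waitpid, so it will not build on Windows) in which the child returns 42001 and the parent reads back only the low 8 bits through WEXITSTATUS:
#include <iostream>
#include <sys/wait.h> // POSIX: waitpid, WIFEXITED, WEXITSTATUS
#include <unistd.h>   // POSIX: fork

int main()
{
    pid_t pid = fork();
    if (pid == 0)
        return 42001; // child: deliberately wider than 8 bits

    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        std::cout << "child exit status: " << WEXITSTATUS(status) << '\n'; // prints 17 (42001 % 256)
    return 0;
}
(As the EDIT above notes, the si_status field filled in by waitid() can expose the full int value; WEXITSTATUS only shows the traditional 8 bits.)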

Why does using reinterpret_cast to convert from char* to a structure seem to work normally?

People say it's not good to trust reinterpret_cast to convert from raw data (like char*) to a structure. For example, for the structure
struct A
{
    unsigned int a;
    unsigned int b;
    unsigned char c;
    unsigned int d;
};
sizeof(A) = 16 and __alignof(A) = 4, exactly as expected.
Suppose I do this:
char *data = new char[sizeof(A) + 1];
A *ptr = reinterpret_cast<A*>(data + 1); // +1 is to ensure it doesn't point to 4-byte aligned data
Then copy some data to ptr:
memcpy_s(ptr, sizeof(A),
         "\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00", sizeof(A));
Then ptr->a is 1, ptr->b is 2, ptr->c is 3 and ptr->d is 4.
Okay, seems to work. Exactly what I was expecting.
But the data pointed to by ptr is not 4-byte aligned as A should be. What problems may this cause on an x86 or x64 platform? Performance issues?
For one thing, your initialization string assumes that the underlying integers are stored in little endian format. But another architecture might use big endian, in which case your string will produce garbage. (Some huge numbers.) The correct string for that architecture would be
"\x00\x00\x00\x01\x00\x00\x00\x02\x03\x00\x00\x00\x00\x00\x00\x04".
Then, of course, there is the issue of alignment.
Certain architectures won't even allow you to assign the address of data + 1 to a non-character pointer; they will issue a memory alignment trap.
But even architectures which will allow this (like x86) will perform miserably, having to perform two memory accesses for each integer in the structure. (For more information, see this excellent answer:
https://stackoverflow.com/a/381368/773113)
Finally, I am not completely sure about this, but I think that C and C++ do not even guarantee to you that an array of characters will contain characters packed in bytes. (I hope someone who knows more might clarify this.) Conceivably, there can be architectures which are completely incapable of addressing non-word-aligned data, so in such architectures each character would have to occupy an entire word. This would mean that it would be valid to take the address of data + 1, because it would still be aligned, but your initialization string would be unsuitable for the intended job, as the first 4 characters in it would cover your entire structure, producing a=1, b=0, c=0 and d=0.
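One portable way around the endianness assumption (a sketch, not part of the original code) is to rebuild each integer from individual bytes with shifts, so the byte order stored in the buffer is fixed regardless of the host:
#include <cstdint>

// Read a 32-bit value stored little-endian in a byte buffer,
// independent of the host's own byte order.
std::uint32_t read_le32(const unsigned char *p)
{
    return static_cast<std::uint32_t>(p[0])
         | (static_cast<std::uint32_t>(p[1]) << 8)
         | (static_cast<std::uint32_t>(p[2]) << 16)
         | (static_cast<std::uint32_t>(p[3]) << 24);
}
This also sidesteps the alignment question entirely, because the bytes are read one at a time.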
The problem is that you cannot be sure this code will run on another platform, with the next version of Visual Studio, etc. When running on another processor, it may cause a hardware exception.
There was a time when you could read out arbitrary memory locations, but all those programs crash with an "access violation" exception nowadays. Something similar could happen to this program in the future.
However, what you can do, and what any compiler that calls itself "C++ standard compliant" must compile correctly, is this:
You can reinterpret_cast a pointer to something else, and then back to the original type. The value of the type, when read before and after, must stay the same.
I don't know what exactly you want to do, but you might get away with, for example
allocating a struct A
reinterpret_casting it to chars
saving the memory content to a file
and restoring everything later:
allocate a struct A
reinterpret_cast it to chars
load the content to memory
reinterpret_cast it back to a struct A
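A minimal sketch of that round trip (the file name and error handling are illustrative; this only works when the file is written and read back by the same build on the same machine, since padding and byte order are not portable):
#include <cstdio>

struct A
{
    unsigned int a;
    unsigned int b;
    unsigned char c;
    unsigned int d;
};

bool save(const A &obj, const char *path)
{
    std::FILE *f = std::fopen(path, "wb");
    if (!f) return false;
    // View the object as raw chars for writing; reading it back into an A
    // through the same cast is the round trip the standard permits.
    bool ok = std::fwrite(reinterpret_cast<const char *>(&obj), sizeof obj, 1, f) == 1;
    std::fclose(f);
    return ok;
}

bool load(A &obj, const char *path)
{
    std::FILE *f = std::fopen(path, "rb");
    if (!f) return false;
    bool ok = std::fread(reinterpret_cast<char *>(&obj), sizeof obj, 1, f) == 1;
    std::fclose(f);
    return ok;
}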

Global Variables not contiguous in C

Currently we are trying to keep track of the variables stored in memory; however, we have run into the following issue. Maybe you could help us out.
Currently we have defined some global variables in our code, as follows:
int x;
char y;
And we added the following lines of code
int main(int argc, char *argv[]) {
    printf("Memory of x %p\n", &x);
    printf("Memory of y %p\n", &y);
    system("pause");
    return 0;
}
The program returned the following addresses:
Memory of x 0x028EE80
Memory of y 0x028EE87
If I do sizeof x and sizeof y I get 4 and 1 (the sizes of the int and char types).
What, then, is in between 0x028EE84 and 0x028EE86? Why did it take 7 positions to place the char variable in memory instead of putting it at the 0x028EE84 memory position?
In general, the compiler will try to do something called alignment. This means the compiler will try to place variables at addresses that are multiples of 2, 4, 8, 16, ..., depending on the machine architecture. By doing this, memory accesses and writes are faster.
There are a number of very good answers here already however I do not feel any of them reach the very core of this issue. Where a compiler decides to place global variables in memory is not defined by C or C++. Though it may appear convenient to the programmer to store variables contiguously, the compiler has an enormous amount of information regarding your specific system and can thus provide a wide array of optimisations, perhaps causing it to use memory in ways which are not at first obvious.
Perhaps the compiler decided to place the int in an area of memory with other types of the same alignment and stuck the char among some strings which do not need to be aligned.
Still, the essence of this is that the compiler makes no obligations or promises about where it will store most types of variables in memory, and short of reading the full sources of the compiler there is no easy way to understand why it did what it did. If you care about this that badly, you should not be using separate variables; consider putting them into a struct, which then has well-defined memory placement rules (note that padding is still allowed).
Because the compiler is free to insert padding in order to get better alignment.
If you absolutely must have them right next to each other in memory, put them in a struct and use #pragma pack to force the packing alignment to 1 (no padding).
#pragma pack(push, 1)
struct MyStruct
{
    int x;
    char y;
};
#pragma pack(pop)
This is technically compiler-dependent behavior (not enforced by the C++ standard) but I've found it to be fairly consistent among the major compilers.
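To check what a given compiler actually did, a quick sketch (the struct names are mine) comparing a packed and an unpacked layout with sizeof:
#include <iostream>

struct Unpacked
{
    int x;
    char y;
};

#pragma pack(push, 1)
struct Packed
{
    int x;
    char y;
};
#pragma pack(pop)

int main()
{
    // With typical 4-byte int alignment, Unpacked is usually 8 bytes
    // (3 bytes of tail padding after y), while Packed is 5 bytes.
    std::cout << "sizeof(Unpacked) = " << sizeof(Unpacked) << '\n';
    std::cout << "sizeof(Packed)   = " << sizeof(Packed)   << '\n';
    return 0;
}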

Using a local short to pass a value into a function expecting an int; does this save memory?

I'm on an embedded system (for the first time) and have a whopping 512 bytes of memory. I'm constantly bumping up against that barrier, and I'm looking to save on each and every byte possible. As such, the following question:
In the SDK, there's a function, prototyped:
void foo(int val);
As such, my main looked like:
void main() {
int myVal = 0;
// do stuff to compute myVal
foo(myVal);
}
myVal, however, will never have a value more than ~100. Will I be saving any memory at all by doing this instead?
void main() {
    short int myVal = 0;
    // do stuff to compute myVal
    foo(myVal);
}
edit: On this architecture, ints are 4-bytes, shorts are 2-bytes. I'm mostly unsure of whether using a local short (or char, or whatever) will save space since it has to up-cast to meet the foo(int) prototype.
I have no experience with such small systems; so the following is just a guess (the smallest ones I worked with had 384 K).
Changing int to short and back engages the compiler's optimizer, and the optimizer's output cannot be predicted with 100% accuracy.
Your platform has some sort of a convention for passing parameters to functions (ABI, probably documented together with the compiler). I guess the stack is aligned to 4 bytes (the size of int, which should be the most "natural" type for your platform). In this case, if your code uses the stack for passing your parameter, there will be no difference in memory consumption.
However:
If your functions have 1 or a few parameters, they will be placed in registers and not on the stack (ARM does this for the first 4 parameters), so there is no memory consumption to reduce
If your main function has two short local variables and not one, they will take 4 bytes of stack space, so short is better than int (and char, if it has 8 bits, is even better)
If you want to send two parameters and not one, you can stuff them into a struct; then 2 short parameters will take 4 bytes (see the sketch below)
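A hedged sketch of that struct idea, with made-up names (Pair, foo2), since the real SDK call takes a single int:
#include <cstdint>

// Both values are known to fit in 16 bits, so one 4-byte struct can carry
// what would otherwise be two 4-byte int parameters.
struct Pair {
    std::int16_t a;
    std::int16_t b;
};

// Hypothetical stand-in for an SDK-style call that takes the packed pair.
void foo2(Pair p) { (void)p; }

int main() {
    Pair p{0, 0};
    // ... do stuff to compute p.a and p.b (each stays below ~100) ...
    foo2(p);
    return 0;
}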
Ultimately, it's easy to check this kind of stuff. Just look at your compiler's output (machine instructions) or tell the compiler to measure the maximal depth of stack (gcc can do it; not sure which compiler you use).

C++ Dereference the Non-allocated Memory but Without Segmentation Fault

I have encountered a problem which I don't understand, the following is my code:
#include <iostream>
#include <stdio.h>
#include <string.h>
#include <cstdlib>
using namespace std;
int main(int argc, char **argv)
{
    const char *format = "The sum of the two numbers is: %d";
    char *presult;
    int sum = 10;
    presult = (char *)calloc(sizeof(format) + 20, 1); // allocate sizeof(char*) + 20 = 24 or 28 bytes
    sprintf(presult, format, sum); // after this operation,
                                   // the length of presult is 33
    cout << presult << endl;
    presult[40] = 'g'; // still no segfault here...
    free(presult);     // memory from calloc should be released with free, not delete
}
I compiled this code on different machines. On one machine sizeof(format) is 4 bytes and on another sizeof(format) is 8 bytes. (On both machines a char takes only one byte, i.e. sizeof(*format) equals 1.)
However, no matter which machine, the result is still confusing to me. Even on the second machine, the allocated memory is just 20 + 8 = 28 bytes, while the string obviously has a length of 33, meaning that at least 33 bytes are needed. But NO segmentation fault occurs when I run this program. As you can see, even when I dereference presult at position 40, the program doesn't crash or show any segfault information.
Could anyone help to explain why? Thank you so much.
Accessing unallocated memory is undefined behavior, meaning you might get a segfault (if you're lucky) or you might not.
Or your program is free to display kittens on the screen.
Speculating on why something happens or doesn't happen in undefined behavior land is usually counter-productive, but I'd imagine what's happening to you is that the OS is actually assigning your application a larger block of memory than it's asking for. Since your application isn't trying to dereference anything outside that larger block, the OS doesn't detect the problem, and therefore doesn't kill your program with a segmentation fault.
Because undefined behavior is undefined. It's not "defined to crash".
There is no segfault because there is no reason for there to be one. You are very likely still writing into the heap, since you got memory from the heap, so the memory isn't read-only. Also, the memory there is likely to exist and be allocated for you (or at least for the program), so it's not an access violation. Normally you get a segfault because you try to access memory that has not been given to you, or because you try to write to memory that is read-only. Neither of these appears to be the case here, so nothing goes wrong.
In fact, writing past the end of a buffer is a common security problem, known as a buffer overflow. It was the most common security vulnerability for some time. Nowadays people are using higher-level languages which check index bounds, so this is not as big a problem anymore.
To respond to this: "the result is still confusing to me. Because even for the second machine, the allocated memory for use is just 20 + 8 which is 28 bytes and obviously the string has a length of 33 meaning that at least 33 bytes are needed."
sizeof(some_pointer) == sizeof(size_t) on typical platforms. You were testing on a 32-bit machine (4 bytes) and on a 64-bit machine (8 bytes).
You have to give calloc the number of bytes to allocate; sizeof(ptr_to_char) will not give you the length of the string (the number of chars until '\0').
Btw, strlen does what you want: http://www.cplusplus.com/reference/cstring/strlen/
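A sketch of the corrected allocation, sizing the buffer from the formatted length instead of from the size of the pointer (snprintf called with a null buffer reports the length it would need):
#include <cstdio>
#include <cstdlib>

int main()
{
    const char *format = "The sum of the two numbers is: %d";
    int sum = 10;

    // Ask snprintf how many characters the formatted string needs,
    // then allocate that many plus one for the terminating '\0'.
    int needed = std::snprintf(nullptr, 0, format, sum);
    if (needed < 0)
        return 1;

    char *presult = static_cast<char *>(std::calloc(needed + 1, 1));
    if (presult == nullptr)
        return 1;

    std::snprintf(presult, needed + 1, format, sum);
    std::puts(presult);
    std::free(presult);
    return 0;
}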