Does declaring a variable after a buffer in a function make its memory area inaccessible through the buffer? I tried doing that, and every time I compile the program the buffer can still reach it. The first address of the buffer is always the lowest address in the stack frame.
Does it have to do with the compiler? I'm using gcc.
#include <string.h>

int check_authentication(char *password) {
    int demovar;
    char password_buffer[16];
    int auth_flag = 0;   /* initialize, or the return value may be indeterminate */

    strcpy(password_buffer, password);
    if (strcmp(password_buffer, "brilling") == 0) auth_flag = 1;
    if (strcmp(password_buffer, "outgrabe") == 0) auth_flag = 1;
    return auth_flag;
}
First:
The C standard says nothing about the location of your variables. It doesn't even say that they are on a (call) stack. So your variables can be anywhere in memory, or not in memory at all.
A stack is an implementation-specific thing that is never mentioned by the standard. Most (if not all) implementations use a stack, but there is still no way to tell from the C code how variables will be located on it. It's an implementation detail, decided by your compiler.
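If you want to see what your compiler actually did, you can print the addresses it chose. Here is a minimal sketch reusing the question's variable names; the output is pure implementation detail and can change with flags such as -O2 or -fstack-protector (gcc in particular is known to reorder locals, so declaration order tells you little):

#include <stdio.h>

int main(void) {
    int demovar;
    char password_buffer[16];
    int auth_flag;

    printf("demovar         at %p\n", (void *)&demovar);
    printf("password_buffer at %p\n", (void *)password_buffer);
    printf("auth_flag       at %p\n", (void *)&auth_flag);
    return 0;
}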
Second:
C has no overflow protection whatsoever. If you copy more into password_buffer than it can hold (16 chars in this example), C will not warn you. This is called undefined behavior, and it means that anything may happen: maybe your program crashes, maybe it overwrites another variable, maybe something else entirely. Nothing in C will help you; it is your responsibility to make sure such things don't happen.
That is simply how C works. The programmer is responsible for doing things correctly; there is almost no help in C, but the benefit is that there is almost no overhead either. You win some, you lose some...
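For example, one way to keep the copy inside bounds is to reject oversized input before calling strcpy. A minimal sketch of a hardened variant of the question's function (the _checked name is mine, not from the question):

#include <string.h>

int check_authentication_checked(const char *password) {
    char password_buffer[16];
    int auth_flag = 0;

    /* Refuse anything that will not fit, terminator included. */
    if (strlen(password) >= sizeof password_buffer)
        return 0;
    strcpy(password_buffer, password);   /* now known to fit */

    if (strcmp(password_buffer, "brilling") == 0) auth_flag = 1;
    if (strcmp(password_buffer, "outgrabe") == 0) auth_flag = 1;
    return auth_flag;
}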
I'm currently reading through Crash Course C++ and have a question regarding types.
If I declare and initialize both an int and a long long variable on a 64-bit machine running Linux to the decimal value 4, does the compiler recognize the wasted bytes and make changes to the underlying type? This is probably a no, as at some point that field may take on a value that would cause overflow with a smaller type (i.e. going from 8 bytes to 4).
I've read a little about field reordering during compilation in C++; that compilers can sometimes rearrange members to minimize padding in memory. Just wondering if there is a similar optimization that happens for numeric types.
I do not think that the compiler will change the size of a variable. It might do so under the as-if rule, but if it can reliably do that, it means the variable is used in a very simple context: for example, assigned (or initialized) once from a constant and then only used in the same compilation unit, with its address never taken (that last point is often called odr-use, from the One Definition Rule).
But in that case, the compiler will simply optimize out the variable because it can directly use its value, so it will use no memory at all...
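As an illustration (not from the original answer), consider a minimal sketch where the compiler is free to drop the 8-byte variable entirely under the as-if rule, since its value is known and its address is never taken:

#include <stdio.h>

int main(void) {
    long long x = 4;       /* declared 8 bytes wide... */
    printf("%lld\n", x);   /* ...but an optimizer may pass the literal 4 directly */
    return 0;
}

Compiling this with something like g++ -O2 -S and inspecting the assembly typically shows no stack slot for x at all; the constant is substituted wherever x was used.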
I had an interesting question. I have:
char buf[100];
And I decided to try calling close(buf).
The code compiled and the program works. But is there any point in using close() like this?
Thank you.
Assuming "close" is the function from posix, most likely nothing will happen but it's also possible stuff could break badly.
Arrays in c and c++ decay to pointers, close takes an int. Implicitly converting a pointer to an int is not allowed by the c++ spec but some compilers allow it anyway (doing some testing it looks like modern g++ only allows it if -fpermissive is specified).
Most likely the integer that results from said conversion will be large, file descripters are usually small, so most likely close will just return a bad file descriptor error and do nothing but if it does happen to match a file descriptor then things could get interesing.....
It should not compile. The compiler should emit warnings.
The behaviour is undefined.
No, there is no point in using close on a char array.
There is also no meaning in doing that. What would you want to achieve?
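To contrast the accident with the intended use, here is a minimal POSIX sketch (the path is made up for illustration):

#include <fcntl.h>    /* open  */
#include <unistd.h>   /* close */

int main(void) {
    char buf[100];
    (void)buf;         /* close(buf) would be ill-formed in C++: char * is not an int */

    int fd = open("/tmp/example", O_RDONLY);   /* hypothetical path */
    if (fd != -1)
        close(fd);     /* close wants the descriptor returned by open() */
    return 0;
}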
I have a piece of dumb code for which I need some explanation.
int main() {
    int *ptr_i = new int[100];
    char *ptr_c = (char *)ptr_i;
    delete [] ptr_c;
    return 0;
}
First of all I was expecting this code to crash, but it didn't, which I believe is because the allocator keeps track of how many bytes to deallocate. I also ran valgrind on this code and it shows no memory leak.
I need clarification regarding the following:
When dealing with a POD data type, how does a constructor for char differ from one for int?
Apart from coding convention, what other problems can this code lead to?
There is no constructor for an int, nor for a char. However, since the type of the pointer passed to delete [] (a char *) does not match the type returned by the new [] expression (an int *), the behaviour is undefined.
Undefined behaviour does not mean a crash will occur. It does not mean that a memory leak will occur.
It simply means that the C++ standard places no restrictions on what happens.
A crash might or might not occur. A memory leak might or might not occur. The compiler might or might not reformat your hard drive. Your program might or might not print the value 42 a total of 27 times. Any other set of occurrences you can imagine might or might not occur.
So the problems such code may cause could be anything, or even nothing. The biggest problem is that you cannot necessarily know.
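For contrast, here is a minimal sketch of the well-defined alternatives (not from the original answer; the std::vector line is simply the idiomatic replacement):

#include <vector>

int main() {
    int *ptr_i = new int[100];
    delete [] ptr_i;            // well-defined: the pointer type matches the new []

    std::vector<int> v(100);    // idiomatic alternative: no manual new/delete at all
    return 0;
}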
Is a C compiler obligated to place a static const variable in memory, or is it allowed to use it as an immediate instruction operand when referenced?
No, it isn't, as long as you do not tell it otherwise. It can very well use the constant as a literal (immediate) value in assembler instructions.
Telling it otherwise could be:
declaring the const volatile (telling the compiler: "We don't change it, but somebody else could")
declaring and/or using (i.e. dereferencing) a pointer to the const which is not itself explicitly const
A C compiler isn't obligated to put anything in memory. Even a non-static, non-const variable can be optimised out entirely, as long as the compiler and linker can prove that the object is not referenced externally, that its address is not taken internally (e.g. with the & operator), and that its value does not depend on any unpredictable circumstances (such as user input).
A modern C or C++ compiler performs such optimisations aggressively, which is why the typical low-level "this is how your program works" explanations found in the poorer introductory textbooks are misleading, and why we discuss the semantics of these languages in theoretical/abstract terms, rather than obsessing over which bits of data are on which chip of RAM when the user hits a button.
For reference on how this optimisation is permitted, look up the "as-if" rule.
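A minimal sketch of both cases (the names are hypothetical; whether the load really disappears depends on your compiler and flags):

static const int limit = 42;            /* free to become an immediate operand */
static const volatile int hw_flag = 1;  /* volatile: must be read from memory  */

int twice_limit(void) {
    return 2 * limit;   /* typically compiled as "return 84", no load emitted */
}

int read_flag(void) {
    return hw_flag;     /* the volatile read cannot be optimised away */
}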
I've recently hit a segmentation fault on a line equivalent to
some_file << some_number << ": ";
When the stack memory allocated to this application (it runs on a pseudo-embedded system) is increased to 512 kB, we no longer hit the segmentation fault.
When writing to a file with the operator (<<), how is stack memory usage affected?
The some_file being written to is a std::ofstream. The some_number being written is passed by reference to the method where this sample line of code lives. The software is 32-bit and compiled with g++ on CentOS.
I'm curious how (or if) ofstream uses dynamic allocation, even in higher-level, general terms.
My first thought was to just upvote jalf's comment, but there are some things that are known, unless the system's standard library implementation or the compiler is really unusual.
Unless it's inlined (and that's up to the compiler), there's a function call, which means pushing a bunch of things onto the stack. How much the call requires depends on the number of registers, the size of registers, and so on.
But more stack can be used inside the call to operator<<. All local variables use stack space, and other function calls inside operator<< use the stack too, unless they're inlined. And so on.
It depends on the implementation of whichever class some_file is an instantiation of. Without more details we can't say anything specific.
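As for dynamic allocation: a std::ofstream owns a std::filebuf, which typically allocates its character buffer on the heap rather than the stack. Here is a minimal sketch (hypothetical file name) of handing it a caller-provided buffer instead; note that pubsetbuf() is only a request, and its effect on a filebuf is implementation-defined:

#include <fstream>

int main() {
    static char io_buffer[4096];   // static storage: keeps the buffer off the small stack
    std::ofstream some_file;
    // Must be called before open(), and the implementation may ignore it.
    some_file.rdbuf()->pubsetbuf(io_buffer, sizeof io_buffer);
    some_file.open("out.txt");
    some_file << 42 << ": ";
    return 0;
}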