Stack memory with iostream [closed] - c++

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 9 years ago.
I've recently hit a segmentation fault on a line equivalent to
some_file << some_number << ": ";
When the stack memory allocated to this application (it's on a pseudo-embedded system) is increased to 512 kB, we no longer hit the segmentation fault.
When writing to a file with operator<<, how is stack memory usage affected?
The some_file being written to is a std::ofstream. The some_number being written is passed by reference to the method where this sample line of code lives. The software is 32-bit and compiled with g++ on CentOS.
I'm curious how (or if) ofstream uses dynamic allocation, even in higher-level, general terms.

My first thought was to just upvote jalf's comment, but some things are known, unless the system's STL implementation or the compiler is really unusual.
Unless the call is inlined (and that's up to the compiler), there's a function call, which means pushing a number of things onto the stack. How much the call requires depends on the number of registers, the size of registers, and so on.
But more stack can be used inside the call to operator<<. All local variables live on the stack, and any function calls made inside operator<< use the stack as well, unless they're inlined. And so on.
It depends on the implementation of whichever class some_file is an instantiation of. Without more details we can't say anything specific.

Related

When should you increase stack size ( Visual Studio C++ ) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 months ago.
When should we increase the stack size for a C++ program?
Why isn't it unlimited and are there any reasons for not increasing the stack size?
And does a program crash if the stack is full?
And also can we increase the stack size for a particular thread?
When should we increase the stack size for a C++ program?
When you have a specific program or use case that overflows the stack. Note that the ideal solution is to modify the program or algorithm so that it stays within a reasonable stack size, but that isn't always possible in practice (e.g. when you have a program you cannot modify).
Why isn't it unlimited and are there any reasons for not increasing the stack size?
Because it is not possible within current architectures. In the virtual memory space of a program there are multiple stacks, one for each thread, so a specific, limited space must be reserved for each stack. Keep in mind that a stack cannot be fragmented and cannot be moved (relative to the virtual memory space).
And does a program crash if the stack is full?
Please forgive my pedantic note: if the stack is full but you don't exceed it, there is no problem. The problem is when the program overflows the stack.
I am not sure exactly. I think there is OS-level protection against stack overflow, in which case the program crashes with a stack overflow error. If I am wrong and there is no protection (or if there is a setting to disable it), it depends on what is in the memory you overflow into. In any case, nothing good happens.
Why is the stack size set so small by default?
OK, it's not your question, but I feel it ties in here neatly.
It's not. OSes need to find a balance between too big a stack and too small a stack. Too big and you cut into heap memory; too small and you make programs overflow it.
What can reside on the stack: call frames and local variables allocated on the stack. Call frames are small (typically a return address, saved registers, and a frame pointer), and local variables are usually pretty small. Big objects go on the heap.
What can overflow a stack? The most likely culprit is recursion. A recursive algorithm with a large maximum recursion depth can easily overflow the stack. But every recursive algorithm can be rewritten: either there is an equivalent iterative algorithm, or you can simply use a stack allocated on the heap instead. That is why, in languages with stack-based memory allocation like C and C++, real-world recursive algorithms with unbounded recursion depth are avoided.

Stack Buffer overflow behaviour [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
Does declaring a variable after a buffer in a function make its memory area inaccessible to the buffer? I tried doing that, and every time I compile the program the buffer can still reach it: the first address of the buffer is always the lowest address of the stack frame.
Does it have to do with the compiler? I'm using gcc.
int check_authentication(char *password) {
    int demovar;
    char password_buffer[16];
    int auth_flag;

    strcpy(password_buffer, password);

    if (strcmp(password_buffer, "brilling") == 0) auth_flag = 1;
    if (strcmp(password_buffer, "outgrabe") == 0) auth_flag = 1;

    return auth_flag;
}
First:
The C standard does not say anything about the location of your variables. It doesn't even say that they are on a (call) stack. So your variables can be anywhere in memory (or they may not be in memory at all).
A stack is an implementation specific thing that is never ever mentioned by the standard. Most (if not all) implementations use a stack but still there is no way to tell from the C code how variables will be located on the stack. It's an implementation thing - it's decided by your compiler.
Second:
C has no overflow protection whatsoever. If you copy more into password_buffer than it can hold (16 char in this example), C will not warn you. It's called undefined behavior, which means that anything may happen. Maybe your program crashes. Maybe it overwrites another variable. Maybe... whatever. But nothing in C will help you. It's your responsibility to make sure such things don't happen.
It's kind of how C works. The programmer is responsible for doing things correctly. There is almost no help in C. The benefit is that there is almost no overhead in C. You win some, you lose some...

What is the use of data type of a variable [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 8 years ago.
Consider these two variable declarations. Both of them have data types. What is the actual use of these data types?
int a;
MyClass b;
Is there a part of each declared object's memory that holds its data type?
Are these data types just for human use?
Are these data types not required beyond the compiler (after the program is compiled)?
Any good resource to read about this?
They are used to allocate the needed memory. They are also used for (strong) type checking.
Also (but that is not the main reason).
Both. The compiler uses them, but afterwards dynamic behavior might be used depending on the object type.
?
The compiler is going to allocate memory on the stack for these variables. You cannot tell exactly how much memory is allocated, because that depends on the compiler and the system you are compiling for. Local variables in C++ are allocated on the stack unless you create the object dynamically (with new, typically held through a pointer); in that case it is allocated on the heap.
In general, yes. Your CPU doesn't understand data types; in the end your code is compiled into a binary format (a set of CPU instructions) to run on a CPU. You could just as well write your program as a set of these instructions instead of C++; then you would be using assembler. But even assembler is a kind of convenience interface to machine code, since it has to be assembled and linked as well.
Based on your code the compiler can probably do some optimization of the code (for example copy elision).
I am not sure what you are expecting or trying to learn, but I guess you could look for some literature on compiler architecture.

Memory allocation and **argv argument [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I know for what we use this argument, and I even know how to work with the argument.
There is only one thing I still do not understand: how does a program allocate memory for strings that come from the input? **argv has no memory allocated at the start of the program, does it? I was expecting a segfault, but it didn't happen.
Does anybody know how this memory allocation works?
The C/C++ runtime processes the command line arguments and creates an area of memory where the arguments are put. It then calls your main() providing you a count of the number of arguments along with a pointer to the area where the arguments are stored.
So the C/C++ runtime owns the memory area allocated and it is up to the C/C++ runtime to deallocate the area once your main() returns or if some other C/C++ function is used to stop the program such as exit().
This procedure originated with the use of C under Unix and was kept for C++ as a part of providing the degree of backwards compatibility the C++ committee has tried to maintain.
Normally when your program loads, the entry point that is started by the loader is not your main() function but rather an entry point defined in the C/C++ runtime. The C/C++ runtime does various kinds of initialization to setup the environment that the C/C++ standards say will exist at the point when the main() function is called by the C/C++ runtime once the initialization is completed.
One of the steps during this initialization is the parsing of command line arguments provided which are then provided to the main() function as its function arguments.

Fortran complex type VS C++ <complex> class performance [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
I would like to write some performance-sensitive numerical code involving long formulas with complex numbers. Consider something simple like a=(b+c)*(d+e+f). I prefer to use C++ (which has the std::complex class), but I'm worried about the fact that the compiled code may create temporary class objects to hold the intermediate values b+c, d+e, and d+e+f, hence causing a slowdown. On the other hand, Fortran has native complex types which may lead to better compiler optimization. The code is multi-dimensional numerical integration, and the performance bottleneck is the evaluation of the integrand.
Are modern C++ compilers (such as Intel's) good enough at optimization that this is actually not a problem?
All Fortran can do, too, is emulate the type.
There is no native machine type for complex numbers (on x86, at least).
Your concern amounts to the temporary use of a few stack bytes, that's it.
If there is no other reason to keep it, even the stack adjustment can be optimized away.
There is nothing Fortran could do better.
(GCC and Clang are NOT generally worse than Intel's compiler; each one has its strong and weak points.)