Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 26 days ago.
I don't have much knowledge of computers, so my questions are quite naive.
I learned that compiling C code reserves specific memory space on the stack in main memory during compilation.
Then,
Why does an executable compiled on one computer work when it is copied over to another computer?
If compilation reserves specific memory locations in RAM, is the number of executables (or compilations) limited by the size of the RAM?
If compilation reserves space in RAM, why does an executable occupy much more disk space than the pre-compilation .c text file?
Thank you
The stack is not reserved by the compiler at compilation time. It is "reserved" in the sense that the compiler inserts specific commands and directives into the executable so that the stack is reserved when the executable is loaded and run.
No. See above. The RAM is not reserved (that is, made unavailable to other executables) at compilation time. It is reserved when the executable is loaded and executed.
This is not necessarily true. In many cases the executable is smaller than the source code. But it can depend on many factors, such as how the code is written, the executable format, the metadata included in it, and the memory layout. Sometimes the executable will contain whole zero-filled sections, which can be defined by a single line in the code.
In general, a compiler (in conjunction with the linker, if we want to be pedantic) has only one "simple" job: to take input files (code) and generate output file(s), the executable. That is, it creates files that merely occupy space in the file system. Other things happen only when the environment (the OS) loads and executes them.
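The "whole zero-filled sections" point can be sketched in C. This is only an illustration, assuming a typical toolchain where zero-initialized data is placed in a .bss-like section (so the executable records only the size, not the zero bytes themselves); the names are mine:

```c
#include <stddef.h>

/* One source line that describes a megabyte of zeros. On typical
   toolchains this lands in .bss: the file stores only the size,
   and the loader delivers the memory already zero-filled. */
char big_buffer[1 << 20];

/* A small initialized array, by contrast, is stored byte-for-byte
   in the executable's data section. */
int table[4] = {1, 2, 3, 4};

/* Returns 1 if the loader delivered big_buffer zero-filled,
   sampling one byte per 4 KB page. */
int bss_is_zero(void)
{
    for (size_t i = 0; i < sizeof big_buffer; i += 4096)
        if (big_buffer[i] != 0)
            return 0;
    return 1;
}
```

Comparing the on-disk size of a build with and without big_buffer shows the effect: the megabyte of zeros costs almost nothing in the file.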
The space is not reserved during compilation. During compilation, there are instructions generated that, when executed at runtime, will take space on the stack.
For example, when you declare a variable in your code:
int x = 5;
The compiler will emit instructions that push 4 bytes (let's assume that is the size of int) onto the stack. But this happens at runtime: that space is reserved when this line of code is reached during execution. The caveat here is that an optimizing compiler can do all kinds of things and may not actually allocate stack space at all.
It works when you copy the executable to another machine because the stack reservation is going to happen on that machine as the code is executed.
The number of executables that can be running at a time depends on the amount of memory. Note that many OSes will swap memory between RAM and an available hard disk if you run out of memory. This increases how many executables can be run, but the system will generally slow down a lot when it happens.
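The runtime nature of stack allocation can be observed directly. In this sketch (the names frame_address and stack_used_by_depth are mine, not any standard API), each call creates a fresh local array on the stack, and the address distance between a shallow and a deep call roughly measures the space those frames consumed:

```c
#include <stdint.h>

/* Each invocation gets a fresh 64-byte local array on the stack at
   runtime; nothing was set aside for it at compile time. */
static uintptr_t frame_address(int depth)
{
    char local[64];
    local[0] = (char)depth;          /* touch `local` so it stays live */
    if (depth > 0)
        return frame_address(depth - 1);
    return (uintptr_t)&local[0];
}

/* Rough stack distance between a call at depth 0 and one nested
   8 frames deep. An optimizing compiler may shrink or even collapse
   this (e.g. via tail-call optimization), which is exactly the
   caveat about optimizers mentioned above. */
uintptr_t stack_used_by_depth(void)
{
    uintptr_t shallow = frame_address(0);
    uintptr_t deep = frame_address(8);
    return shallow > deep ? shallow - deep : deep - shallow;
}
```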
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
How can I overwrite all free disk space with zeros, like the cipher command on Windows? For example:
cipher /w:c:\
This overwrites the free disk space in three passes. How can I do this in C or C++? (I want to do it in one pass and as fast as possible.)
You can create a set of files and write random bytes to them until the available disk space is filled. These files should be removed before the program exits.
The files must be created on the device you wish to clean.
Multiple files may be required on some file systems, due to file size limitations.
It is important to use different, non-repeating random sequences in these files to defeat file-system compression and deduplication strategies that may reduce the amount of disk space actually written.
Note also that the OS may have quota systems that will prevent you from filling available disk space and may also show erratic behavior when disk space runs out for other processes.
Removing the files may cause the OS to skip the cache-flushing mechanism, leaving some blocks unwritten to disk. A sync() system call or equivalent might be required. Syncing at the hardware level might be further delayed, so waiting for some time before removing the files may be necessary.
Repeating this process with a different random seed reduces the odds of recovery through surface analysis with advanced forensic tools. These tools are not perfect, especially when recovery would be a life saver for a lost Bitcoin wallet owner, but they may prove effective in other, more problematic circumstances.
Using random bytes has a double purpose:
prevent some file systems from optimizing the blocks, compressing or sharing them instead of writing them to the media, which would leave existing data un-overwritten.
increase the difficulty of recovering previously written data with advanced hardware recovery tools, just like those security envelopes with random patterns printed on the inside to prevent the contents from being read by holding the envelope up to a strong light.
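A minimal single-pass sketch of this approach follows. The function names (wipe_pass, lcg_next) are illustrative, and the multi-file, quota, and sync concerns from the answer above are only hinted at in comments:

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Cheap linear congruential generator: non-repeating enough to defeat
   file-system compression and deduplication; not cryptographic. */
static uint32_t lcg_next(uint32_t *state)
{
    *state = *state * 1664525u + 1013904223u;
    return *state;
}

/* One wipe pass: stream pseudo-random blocks into `path` until
   `max_bytes` are written or the disk fills up (ENOSPC). Returns the
   number of bytes written, or -1 on an unexpected error. A real wiper
   would use an effectively unlimited max_bytes, create the file on the
   target volume, fsync() it, and only then remove it. */
long wipe_pass(const char *path, long max_bytes, uint32_t seed)
{
    unsigned char block[4096];
    uint32_t state = seed;
    long written = 0;

    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;

    while (written < max_bytes) {
        size_t want = sizeof block;
        if ((long)want > max_bytes - written)
            want = (size_t)(max_bytes - written);
        for (size_t i = 0; i < want; i++)
            block[i] = (unsigned char)(lcg_next(&state) >> 24);
        if (fwrite(block, 1, want, f) != want) {
            int full = (errno == ENOSPC);   /* disk full: the goal */
            fclose(f);
            return full ? written : -1;
        }
        written += (long)want;
    }
    fclose(f);
    return written;
}
```

Running a second pass with a different seed simply means calling wipe_pass again with another seed value.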
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
The program I am working on at the moment processes a large amount of data (>32 GB). Due to "pipelining", however, a maximum of around 600 MB is present in main memory at any given time (I checked that; it works as planned).
If the program has finished, however, and I switch back to the workspace with Firefox open, for example (but also other programs), it takes a while until I can use it again (the HDD is also highly active for a while). This makes me wonder whether Linux (the operating system I use) swaps out other programs while my program is running, and why.
I have 4 GB of RAM installed in my machine, and while my program is active it never goes above 2 GB of utilization.
My program only allocates/deallocates dynamic memory of two different sizes: 32 and 64 MB chunks. It is written in C++ and I use new and delete. Shouldn't Linux be smart enough to reuse these blocks once I have freed them and leave my other memory untouched?
Why does Linux kick my stuff out of memory?
Is this some other effect I have not considered?
Can I work around this problem without writing a custom memory management system?
The most likely culprit is file caching. The good news is that you can disable file caching. Without caching, your software will run more quickly, but only if you don't need to reload the same data later.
You can do this directly with the Linux APIs, but I suggest you use a library such as Boost.Asio. If your software is I/O bound, you should additionally make use of asynchronous I/O to improve performance.
All the recently used pages are causing older pages to get squeezed out of the disk cache. As a result, when some other program runs, it has to be paged back in.
What you want to do is use posix_fadvise (or posix_madvise if you're memory-mapping the file) to eject the pages you've forced the OS to cache, so that your program doesn't have a huge cache footprint. This lets older pages from other programs remain in the cache.
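A minimal sketch of that advice, assuming a Linux/POSIX system (drop_file_cache is an illustrative wrapper name, not a standard call):

```c
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <unistd.h>

/* After finishing with a file, tell the kernel its cached pages may be
   dropped, so a large streaming job does not evict other programs'
   pages from the page cache. Returns 0 on success, nonzero otherwise. */
int drop_file_cache(int fd)
{
    /* Flush dirty pages first so DONTNEED can actually release them. */
    if (fsync(fd) != 0)
        return -1;
    /* offset 0, len 0 means "apply to the whole file". */
    return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}
```

Calling this once per processed chunk (with an explicit offset and length) keeps the cache footprint small even while the file is still open.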
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I understand the process of memory allocation for C++ programs. According to what I found on the internet, for C++ compilers, memory is allocated for global and static variables at compile time, while dynamically created variables (such as via new/malloc) are given space in memory only when the executable is actually running. Correct me if I am wrong here.
So is it true that if the executable is never executed, the part of memory previously allocated at compile time for global and static variables will still and always sit there in memory until the computer is shut down? What if we shut down the PC, reboot it, and re-execute the executable? This time there is no compiling process; when does the OS allocate memory for the program's global and static variables? Is it in the system boot phase, or when the executable is actually executed?
Now extend this question to any program on the PC, for example Microsoft Word. We did not write and compile it ourselves; we just installed it from its installation package, so there is no compiling process in this situation (or maybe the installation process is actually the compiling process). Supposing these programs also need space in memory for static and global variables, when does the OS allocate memory for them? Is it when we power up and boot the OS, or when we actually execute the programs? If the OS preloaded all these static variables at boot time, that would help explain why booting takes some time, but it seems a waste of memory if 90% of the programs installed on the system will not be executed each time the user powers up the PC.
The compiler essentially compiles all the static data and code into an image that is kept on disk, e.g. in .exe files on Windows, etc.
When you run it, the operating system allocates some memory and basically copies this image into RAM, then starts running the compiled code, which was also copied to RAM.
Memory that you allocate dynamically in your program is allocated as your program executes.
No RAM is allocated for your program at compile time. The statement "memory is allocated at compile time" is a conceptual simplification. What it really means is that the initial memory image, stored in the compiled file, is built at compile time. This won't be loaded into RAM until the program is actually run.
This is very simplified, but it is the general gist. Check out the specification of the binary file format on your system (ELF or PE, for example) for some more interesting hints, among other things.
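A tiny illustration of the "image" idea: the initial values below are stored in the on-disk executable, and the loader re-creates them in RAM on every run. The variable names are mine:

```c
/* The value 42 is stored in the executable's data section on disk.
   Every time the OS loads the program, this global starts at 42:
   the memory is allocated and initialized at load time, not at
   compile time and not at system boot time. */
int configured = 42;

/* An uninitialized static is merely described in the image; the
   loader zero-fills it when the program starts. */
static int counter;

int next_id(void)
{
    return configured + counter++;
}
```

Rebooting and rerunning the program repeats exactly this loading step, which is why no compiling is needed the second time.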
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Is
WCHAR bitmapPathBuffer[512]
OK for stack allocation, or is it better to use the heap for this size? What is a reasonable indicative size at which it is better to go from stack to heap? Everyone says "it depends", but our brains need some limits for orientation.
You might want to check your system's default stack size, and consider whatever use your application makes of recursion, to arrive at some reasonable threshold.
Anyway, for typical desktop PCs I'd say ~100 KB is reasonable to put on the stack for a function that won't be invoked recursively, absent any unusual considerations (I had to revise that downwards after seeing how restrictive Windows is, below). You may be able to go an order of magnitude more or less on specific systems, but it's around that point that you'd start to care about checking your system limits.
If you find you're doing that in many functions, you'd better think carefully about whether those functions could be called from each other, or just allocate dynamically (preferably implicitly via use of vector, string etc.) and not worry about it.
The 100kb guideline is based on these default stack size numbers ripped from the 'net:
platform default size # bits # digits
===============================================================
SunOS/Solaris 8172K bytes <=39875 <=12003 (Shared Version)
Linux 8172K bytes <=62407 <=18786
Windows 1024K bytes <=10581 <=3185 (Release Version)
cygwin 2048K bytes <=3630 <=1092
As others have said, the answer to this question is dependent on the system on which you are running. In order to come to a sensible answer, you need to know:
The default stack size. This might be different for threads other than the main thread(!), or if you're using closures or a third-party threading or coroutine library.
Whether the system stack is dynamically resized. On some systems, the stack can grow automatically up to a point.
On some platforms (e.g. ARM or PowerPC-based systems) there is a “red zone”. If you are in a leaf function (one that calls no other functions), if your stack usage is less than the size of the red zone, the compiler can generate more efficient code.
As a general rule I'd agree with the other respondents that on a desktop system, 16–64KB or so is a reasonable limit, but even that depends on things like recursion depth. Certainly, large stack frames are a code smell and should be investigated to make sure they're necessary.
In particular, it's well worth contemplating the lengths of any buffers allocated on the stack… are they really large enough for any conceivable input? And are you checking that at runtime to avoid overrun? e.g. In your example, are you sure that bitmapPathBuffer is never longer than 512 WCHARs in length? If you don't know the maximum length for certain, the heap may be better. Even then, if it's an adversarial environment, you may care to put a large upper bound on it to avoid attacks involving memory exhaustion.
The answer really is "it depends".
If you have many such variables defined, or if you do relatively large stack allocations in your function and in functions this one calls, then it is possible that you will have stack overflow.
The typical default stack size for a Win32 executable is 1 MB. If you allocate more than that, you are in trouble and should change the largest allocations to be on the heap.
I would follow a simple rule: if your allocations are more than, say, 16-64 KB, then allocate on the heap. Otherwise, it should be OK to allocate on the stack.
Modern compilers under normal circumstances use a stack size of about 1 megabyte, so 1 KB is not a problem for a simple program.
If the program is very complex, other functions in the call chain also use large portions of the stack, your current function is very deep in the call stack, etc., then you better avoid large automatic variables.
If you use recursion, then you should carefully consider how deep it can be.
If you write a function that will be used in other projects or by other people, then you never know whether it can be called in a recursive function or deep in the stack. So it's usually a good idea to avoid large automatic variables in this case.
There's no hard limit, but you might want to consider what happens if the allocation fails. If allocation of a local variable fails, your program crashes; if allocation of a dynamic variable fails, you get (or should get) an exception. For this reason, I tend to use dynamic allocation (in the form of std::vector) for anything over about 1K. The fact that std::vector does bounds checking (at least with the implementations I use) when compiling without optimization is also a plus.
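The failure-handling point can be sketched in C as well (the function name and the 512-byte size are illustrative, echoing the buffer in the question):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Heap version of a large buffer: if the allocation fails we get a
   NULL we can check and recover from, instead of the outright crash
   a failed stack allocation would cause. */
char *make_path_buffer(size_t len)
{
    char *buf = malloc(len);
    if (buf == NULL) {
        fprintf(stderr, "out of memory for %zu-byte buffer\n", len);
        return NULL;   /* caller can recover; a stack overflow cannot */
    }
    memset(buf, 0, len);   /* start zero-filled, like a fresh path buffer */
    return buf;
}
```

The caller frees the buffer with free() when done; in C++ the same idea is what std::vector or std::string gives you automatically.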
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
How can I calculate the minimum stack size required for my program on UNIX, so that my program never crashes?
Suppose my program is
int main()
{
    int number;
    number++;
    return 0;
}
1) What stack size is required to run this program? How is it calculated?
2) My Unix system gives ulimit -s 512000. Is this value of 512 MB really required for my small program?
3) And what if I have a big program with multiple threads, some 500 functions, libraries, macros, dynamically allocated memory, etc.? How much stack size is required for that?
Your program in itself uses a few bytes (one int), but there is of course also the part of the runtime that comes before main to take into account. Still, it's unlikely to be more than a few dozen bytes, maybe a couple of hundred at a stretch. Since the minimum stack size in any modern OS is "one page" (4 KB), this should easily fit.
512000 KB is 512 MB, and that seems quite high. On my Linux Fedora 16 x86-64 machine, it is 8192 (8 MB).
Threads don't really matter, as each thread has its own stack. The number of functions is in itself not a huge contributor to stack usage. Running out of stack is nearly always caused by large local variables and/or deep recursion. For any program that is more than a little bit complex, calculating precise stack usage can be quite tricky. Typically, it involves running the program a lot and seeing whether the stack "explodes". If not, you have enough stack. Library functions, generally speaking, tend not to use huge amounts of stack, but there are always exceptions.
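As a starting point for that kind of experiment, a process can query its own stack limit at runtime with the POSIX getrlimit call; a small sketch (the wrapper name is mine):

```c
#include <limits.h>
#include <sys/resource.h>

/* Report the soft stack-size limit for the current process, i.e. the
   same quantity `ulimit -s` shows (there in kilobytes). Returns the
   limit in bytes, ULLONG_MAX for "unlimited", or 0 on error. */
unsigned long long stack_limit_bytes(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 0;                       /* should not happen in practice */
    if (rl.rlim_cur == RLIM_INFINITY)
        return ULLONG_MAX;              /* "unlimited" */
    return (unsigned long long)rl.rlim_cur;
}
```

Comparing this number against your estimated worst-case frame sizes times recursion depth gives a first-order sanity check before resorting to trial runs.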
To exemplify:
void func()
{
    int x, y, z;
    float w;
    ...
}
This function takes up approximately 16 bytes of stack, plus the general overhead of calling a function, typically 1-3 "machine words" (4-12 bytes on a 32-bit machine, 8-24 bytes for a 64-bit machine).
void func2()
{
    int x[10000];
    ...
}
This function will take 40000 bytes of stack-space. Obviously, you don't need many recursive calls to this function to run out of stack.
There is no magic way to tell how much stack space your program will require; it depends on what the code is actually doing. Infinite (or very deep) recursion will result in stack overflow even if the program doesn't seem to do anything.
As an example, see the following:
$ ulimit
unlimited
$ echo "foo(){foo();} main(){foo();}" | gcc -x c -
$ ./a.out
Segmentation fault (core dumped)
Most people rely on the stack being “large” and their programs not using all of it, simply because the size has been set so large that programs rarely fail because they run out of stack space unless they use very large arrays with automatic storage duration.
This is an engineering failure, in the sense that it is not engineering: a known and largely preventable source of complete failure is left uncontrolled.
In general, it can be difficult to compute the actual stack needs of a program. Especially when there is recursion, a compiler cannot generally predict how many times a routine will be called recursively, so it cannot know how many times that routine will need stack space. Another complication is calls to addresses prepared at run-time, such as calls to virtual functions or through other pointers-to-functions.
However, compilers and linkers could provide some assistance. For any routine that uses a fixed amount of stack space, a compiler, in theory, could provide that information. A routine may include blocks that are or are not executed, and each block might have different stack space requirements. This would interfere with a compiler providing a fixed number for the routine, but a compiler might provide information about each block individually and/or a maximum for the routine.
Linkers could, in theory, examine the call tree and, if it is static and not recursive, provide a maximum stack use for the linked program. They could also provide the stack use along a particular call subchain (e.g., from one routine through the chain of calls that leads to the same routine being called recursively) so that a human could then apply knowledge of the algorithm to multiply the stack use of the subchain by the maximum number of times it might be called recursively.
I have not seen compilers or linkers with these features. This suggests there is little economic incentive for developing these features.
There are times when stack use information is important. Operating system kernels may have a stack that is much more limited than user processes, so the maximum stack use of the kernel code ought (as a good engineering practice) to be calculated so that the stack size can be set appropriately (or the code redesigned to use less stack).
If you have a critical need for calculating stack space requirements, you can examine the assembly code generated by the compiler. In many routines on many computing platforms, a fixed number is subtracted from the stack pointer at the beginning of the routine. In the absence of additional subtractions or “push” instructions, this is the stack use of the routine, excluding further stack used by subroutines it calls. However, routines may contain blocks of code that contain additional stack allocations, so you must be careful about examining the generated assembly code to ensure you have found all stack adjustments.
Routines may also contain stack allocations computed at run-time. In a situation where calculating stack space is critical, you might avoid writing code that causes such allocations (e.g., avoid using C’s variable-length array feature).
Once you have determined the stack use of each routine, you can determine the total stack use of the program by adding the stack use of each routine along various routine-call paths (including the stack use of the start routine that runs before main is called).
This sort of calculation of the stack use of a complete program is generally difficult and is rarely performed.
You can generally estimate the stack use of a program by knowing how much data it “needs” to do its work. Each routine generally needs stack space for the objects it uses with automatic storage duration plus some overhead for saving processor registers, passing parameters to subroutines, some scratch work, and so on. Many things can alter stack use, so only an estimate can be obtained this way. For example, your sample program does not need any space for number. Since no result of declaring or using number is ever printed, the optimizer in your compiler can eliminate it. Your program only needs stack space for the start routine; the main routine does not need to do anything except return zero.