Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I am trying to understand how memory allocation works for C++ programs. From what I have read, C++ compilers allocate memory for global and static variables at compile time, while dynamically created variables (via new/malloc) are given space in memory only when the executable is actually running. Correct me if I am wrong here.
So is it true that if the executable is never executed, the memory allocated at compile time for global and static variables still sits there until the computer is shut down? What if we shut down the PC, reboot it, and re-execute the executable? This time there is no compile step, so when does the OS allocate memory for the program's global and static variables: during the boot phase, or when the executable is actually executed?
Now extend this question to any program on the PC, for example Microsoft Word. We did not write and compile it ourselves; we just installed it from an installation package, so there is no compile step (unless the installation process counts as one). These programs also need memory for their static and global variables, so when does the OS allocate it: when we power up and boot the OS, or when we actually launch the executables? If the OS pre-loaded all these static variables at boot time, that would help explain why booting takes a while, but it would also waste memory, since 90% of the programs installed on a system may never be run in a given session.
The compiler essentially compiles all the static data and code into an image that is kept on disk, e.g. in .exe files on Windows.
When you run it, the operating system allocates some memory and basically copies this image into ram, then starts running the compiled code, which was also copied to ram.
Memory that you allocate dynamically in your program is allocated as your program executes.
No ram is allocated for your program at compile time. The statement "memory is allocated at compile time" is a conceptual simplification. What it really means is that the initial memory image, stored in the compiled file, is built at compile time. This won't be loaded into ram until the program is actually run.
This is very simplified, but it is the general gist. For more interesting details, check out the specification of the binary file format used on your system (such as PE on Windows or ELF on Linux).
I don't have much knowledge in computers, so my questions are quite naive.
I learned that compiling a C program reserves specific memory space for the stack in main memory during compilation.
Then,
Why does an executable work when it is compiled on one computer and copied over to another computer?
If compilation reserves specific memory locations in RAM, then is the number of executables (or compilations) limited by the size of the RAM?
If compilation reserves space in RAM, why does an executable occupy a lot more disk space than the pre-compilation .c text file?
Thank you
The stack is not reserved by the compiler at compilation time. It is "reserved" in the sense that the compiler inserts specific commands and directives into the executable so that the stack is reserved when the executable is loaded and run.
No, see above. RAM is not reserved (that is, made unavailable to other executables) at compilation time. It is reserved when the executable is loaded/executed.
This is not necessarily true. In many cases the executable is smaller than the source code, but it depends on many factors, such as how the code is written, the executable format, the metadata included in it, and the memory layout. Sometimes the executable contains whole zero-filled sections that are defined by a single line of code.
In general, a compiler (in conjunction with the linker, if we want to be pedantic) has only one "simple" job: to take input files (code) and generate output files, the executable. That is, it creates files that merely occupy space in the file system. Everything else happens only when the environment (the OS) loads and executes them.
The space is not reserved during compilation. During compilation, there are instructions generated that, when executed at runtime, will take space on the stack.
For example, when you declare a variable in your code:
int x = 5;
The compiler will emit instructions that push 4 bytes (let's assume that is the size of int) onto the stack. But this happens at runtime: the space is reserved when this line of code is reached during execution. The caveat is that an optimizing compiler may do all kinds of things here and may not actually allocate stack space at all (it might keep x in a register, or eliminate it entirely).
It works when you copy the executable to another machine because the stack reservation is going to happen on that machine as the code is executed.
The number of executables that can run at a time depends on the amount of memory. Note that many OSes will swap memory between RAM and an available hard disk if you run out of memory. This increases how many executables can be run, but the system will generally slow down a lot when this occurs.
I have a small application. Depending on command-line arguments, it loads or frees a library that filters system-level input. Once the library is loaded, will it stay active after the process dies, or does the application need to stay alive in the background?
When a process dies, everything it did dies with it: the threads it created, the things it loaded into memory (such as DLLs), the memory it allocated, and so on. Some differences exist between operating systems, but that is the general gist: when your process terminates, it is gone along with everything it did (with exceptions such as SysV shared memory and other global resources it may have manipulated, but in most respects, once your process is pushing up daisies there is nothing left).
A DLL may stay in memory even after the process is terminated; for example, it may be loaded into another process or simply cached. However, note that the entire DLL state, including all the objects and data handled by the code from that DLL, is completely gone, because that state was part of the now non-existent process.
Just to clarify: a DLL is a machine-code library that augments your executable with extra code, nothing more and nothing less. All threads, files and other resources belong to your executable even if they are created by functions in the DLL.
So when your executable dies, everything created by your executable dies.
I am running a memory-intensive C++ application and it is being killed by the kernel for excessively high memory usage. I would have thought that the OS would automatically use swap when RAM gets full; however, I don't think my swap space is being utilised.
I have read the following two questions, but I can't relate them to my problem.
"How to avoid running out of memory in high memory usage application? C / C++"
Who "Killed" my process and why?
I will be grateful if someone can give me some hints/pointers to how I may solve this problem. Thanks.
Edit: I am running my application on a 64-bit Linux machine. My RAM and swap are 6 GB and 12 GB respectively.
I suspect your process is asking for more memory than is available. In situations where you know you're going to use the memory you ask for, you need to disable memory overcommit:
echo 2 > /proc/sys/vm/overcommit_memory
and/or put
vm.overcommit_memory=2
in /etc/sysctl.conf so the setting survives reboots.
If your process asks for 32 GB of RAM on a machine with 16 GB of RAM + swap, your malloc() (or new...) calls might very well succeed, but once you try to use that memory your process is going to get killed.
Perhaps you have (virtual) memory fragmentation and are trying to allocate a large block of memory which the OS cannot find as a contiguous block?
For instance, an array would require this, but if you create a large linked list on the heap you should be able to allocate non-contiguous memory.
How much memory are you trying to allocate, and how? Do you have a sufficient amount of free resources? If you debug your application, what happens when the process is killed?
The program I am working on at the moment processes a large amount of data (>32 GB). Due to "pipelining", however, a maximum of around 600 MB is present in main memory at any given time (I checked, and that works as planned).
If the program has finished, however, and I switch back to the workspace with Firefox open, for example (but also other programs), it takes a while until I can use it again (the HDD is also highly active for a while). This makes me wonder whether Linux (the operating system I use) swaps out other programs while my program is running, and if so, why.
I have 4 GB of RAM installed on my machine, and while my program is active it never goes above 2 GB of utilization.
My program only allocates/deallocates dynamic memory of two different sizes, 32 MB and 64 MB chunks. It is written in C++ and I use new and delete. Should Linux not be smart enough to reuse these blocks once I have freed them and leave my other memory untouched?
Why does Linux kick my stuff out of memory?
Is this some other effect I have not considered?
Can I work around this problem without writing a custom memory management system?
The most likely culprit is file caching. The good news is that you can disable file caching; without caching, your software will run more quickly, but only if you don't need to reload the same data later.
You can do this directly with the Linux APIs, but I suggest you use a library such as Boost ASIO. If your software is I/O bound, you should additionally make use of asynchronous I/O to improve performance.
All the recently used pages are causing older pages to get squeezed out of the disk cache. As a result, when some other program runs, it has to be paged back in.
What you want to do is use posix_fadvise (or posix_madvise if you're memory-mapping the file) to evict the pages you've forced the OS to cache, so that your program doesn't have a huge cache footprint. This will let older pages from other programs remain in the cache.
I have a question about profiling applications that never exit until the machine is manually rebooted.
I have used tools like Valgrind, which report memory leaks or bloat for an application that exits after some time.
But is there any tool which can report the memory consumption, bloat, and overhead created by the application at various stages, if possible?
NOTE: I am more interested in apps which don't exit. If an app exits I can use tools like Valgrind.
I'd consider adding a graceful exit from the program.
dtrosset's point is well put but apparently misunderstood. Add a means to terminate the program so you can perform a clean analysis. This can be something as simple as adding a signal handler for SIGUSR1, for example, that terminates the program at a point in time you decide. There are a variety of methods at your disposal depending on your OS.
There's a big difference between an application which never exits (embedded, daemons, etc.) and one that cannot be exited. The former is normal; the latter is bad design.
If anything, the application can be forcibly aborted (SIGKILL on *nix, terminate on Win32) and you'd get your analysis. That method doesn't give your application the opportunity to clean up before it's destroyed, so there will very likely be retained memory reported.
Profiling is intrusive, so you don't want to deploy the app with the profiler attached anyway. Therefore, include some #ifdef PROFILE_MODE code that exits the app after an appropriate amount of time. Compile with -DPROFILE_MODE, profile, then deploy without PROFILE_MODE.
Modify your program slightly so that you can request a Valgrind leak check at any point. When the command to do that is received, your program should use VALGRIND_DO_LEAK_CHECK from memcheck.h (this has no effect if the program isn't running under Valgrind).
You can use GNU gprof, but it also has the problem that it requires the program to exit.
You can overcome this by calling internal functions of gprof (see below). It may be a real "dirty" hack, depending on the version of gcc, etc., but it works.
#include <sys/gmon.h>
#include <stdlib.h> // for setenv

extern "C" // the internal functions and variables of gprof
{
    void moncontrol (int mode);
    void monstartup (unsigned long lowpc, unsigned long highpc);
    void _mcleanup (void);
    extern void _start (void), etext (void);
    extern int __libc_enable_secure;
}

// Call this whenever you want to write profiling information to a file
void WriteProfilingInformation(char* Name)
{
    setenv("GMON_OUT_PREFIX", Name, 1);  // set the output file name
    int old = __libc_enable_secure;      // save the old value
    __libc_enable_secure = 0;            // has to be zero to change the profile file name
    _mcleanup();                         // flush the collected data to file
    __libc_enable_secure = old;          // reset to the old value
    monstartup((unsigned long) &_start,  // restart the profiler over the
               (unsigned long) &etext);  // program's whole text segment
    moncontrol(1);                       // enable the profiler
}
Rational Purify can do that, at least on Windows. There seems to be a Linux version, but I don't know if it can do the same.
Some tools allow you to force a memory analysis at any point during the program's execution. This method is not as reliable as checking on exit, but it gives you a starting point.
Here's a Windows example using LeakDiag.
Have you tried GNU Gprof?
Note that in this document, "cc" and "gcc" are interchangeable. ("cc" is assumed as an alias for "gcc.")
http://www.cs.utah.edu/dept/old/texinfo/as/gprof_toc.html
Your question reads as if you were looking for top. It nicely displays (among other things) the current memory consumption of all running processes. (Limited to one page in the terminal.) On Linux, hit “M” to sort by memory usage. The man page shows more options for sorting and filtering.
I have used the Rational Purify APIs to check for incremental leaks, though I haven't used the APIs on Linux. I found the VALGRIND_DO_LEAK_CHECK option in the Valgrind user manual; I think this would meet your requirement.
For Windows, DebugDiag does that.
It generates a report at the end with probable memory leaks.
It also has memory pressure analysis.
And it's available for free from Microsoft.
You need stack shots. Either use pstack or lsstack, or just run it under a debugger or IDE and pause it (Ctrl-C) at random. This will not tell you about memory leaks, but it will give you a good idea of how the time is being used and why.
If time is being spent because of memory leaks, you will see a good percentage of samples ending in memory-management routines. If they are in malloc or new, higher up the stack you will see what objects are being allocated and why, and you can consider how to do that less often.
Memory-leak profilers work by detecting memory that was freed by the OS at process exit rather than by the program itself.