Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
I have a small application. Depending on command line arguments, it loads or frees a library that filters system level input. Once the library is loaded, will it stay active after the process dies, or does the application need to stay alive in the background?
When a process dies, everything it did dies with it - that includes threads it created, things it loaded into memory (DLLs, for example), memory it allocated, and so on. Some differences exist between operating systems, but that's the general gist of it: your process terminates, and it is gone along with everything it did. There are exceptions, like SysV shared memory and other global resources that may have been manipulated, but in most respects, when your process is pushing up daisies there's nothing left.
A DLL may stay in memory even after the process is terminated. For example, it may be loaded into another process, or just cached. However, note that the entire DLL state, including all the objects and data handled by the code from that DLL, is completely gone, because that state was part of the now non-existent process.
Just to clarify: a DLL is a machine code library that augments your executable with extra code, nothing more and nothing less. All threads, files and other resources belong to your executable even if they are created by functions in the DLL.
So when your executable dies, everything created by your executable dies.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 26 days ago.
I don't have much knowledge in computers, so my questions are quite naive.
I learned that compiling a C program reserves specific memory space on the stack in main memory at compile time.
Then,
1. Why does an executable work when it is compiled on one computer and copied over to another computer?
2. If compilation reserves specific memory locations in RAM, is the number of executables (or compilations) limited by the size of the RAM?
3. If compilation reserves space in RAM, why does an executable occupy a lot more disk space than the pre-compilation .c text file?
Thank you
1. The stack is not reserved by the compiler at compilation time. It is "reserved" only in the sense that the compiler inserts specific commands and directives into the executable so that the stack is reserved when the executable is loaded and run.
2. No, see above. The RAM is not reserved (that is, made unavailable to other executables) at compile time. It is reserved when the executable is loaded and executed.
3. This is not necessarily true. In many cases the executable is smaller than the source code, but it depends on many factors, such as how the code is written, the executable format, the metadata included in it, and the memory layout. Sometimes the executable will contain whole zero-filled sections, which can be defined by a single line in the code.
In general, a compiler (in conjunction with the linker, if we want to be pedantic) has only one "simple" job: to take input files (code) and generate output file(s), the executable. That is, it creates files, which merely occupy space in the file system. Everything else happens only when the environment (the OS) loads and executes them.
The space is not reserved during compilation. During compilation, there are instructions generated that, when executed at runtime, will take space on the stack.
For example, when you declare a variable in your code:
int x = 5;
The compiler will emit instructions that reserve 4 bytes (let's assume that is the size of int) on the stack. But this happens at runtime: the space is reserved when this line of code is reached during execution. The caveat is that an optimizing compiler could do all kinds of things here and may not actually allocate stack space at all, for example by keeping x in a register.
It works when you copy the executable to another machine because the stack reservation is going to happen on that machine as the code is executed.
The number of executables that can be running at a time depends on the amount of memory. Note that many OSes will swap memory between RAM and an available hard disk if you run out of memory. This increases how many executables can be run, but the system will generally slow down a lot when it happens.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
Can we make a list of causes that make a program run correctly when compiled in debug mode but crash in release mode, with Qt Creator? Let's talk in general, about the most common cases.
In my case, at point A, the program compiled and ran correctly. After some work, at point B, it compiled but crashed at runtime in release mode and not in debug mode. I returned to point A by commenting out my work between A and B, but it showed the same behaviour as point B: it compiles but crashes only in release mode. I think it is a mistake I made well before point A that was lying dormant. It makes me not want to finish my program, since it's a free program I wanted to share as open source.
Any kind of undefined behavior can cause this type of issue. The most likely cause is writing past the boundary of an array/vector, or reading from there. It can be the destruction of an object that has already been destroyed, or a multithreading issue that reproduces only when execution is fast in release mode. It may be an uninitialized struct, or a field of a POD type not assigned in the constructor.
In Debug mode the memory is allocated differently and in some cases may end up containing zeros (when passed to your program) rather than random garbage. This often causes crashes only in Release mode.
I strongly recommend setting up a "RelWithDebInfo" configuration to debug this issue, e.g. by passing the -g option to GCC when building in Release. That way you will be able to stop in the debugger when the application crashes and identify the cause.
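Assuming a qmake-based project (the Qt Creator default), a release build with debug info can be requested in the .pro file like this; force_debug_info is the stock qmake switch, and the commented flags are the manual GCC/Clang equivalent:

```
# Release build that keeps debug symbols (qmake)
CONFIG += force_debug_info

# Equivalent manual flags for GCC/Clang, if preferred:
# QMAKE_CXXFLAGS_RELEASE += -g
# QMAKE_LFLAGS_RELEASE   += -g
```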
Otherwise your best bet is to do something like "binary search" over your code to find the exact location of the crash. Like, comment half the code, see if it still crashes, etc.
I know this explanation is a bit vague, but hope it helps!
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
The program I am working on at the moment processes a large amount of data (>32 GB). Due to "pipelining", however, a maximum of around 600 MB is present in main memory at any given time (I checked; that works as planned).
When the program has finished, however, and I switch back to the workspace with Firefox open, for example (but also other programs), it takes a while until I can use it again (the HDD is also highly active for a while). This makes me wonder whether Linux (the operating system I use) swaps out other programs while my program is running, and why.
I have 4 GB of RAM installed on my machine and while my program is active it never goes above 2 GB of utilization.
My program allocates/deallocates dynamic memory in only two different sizes: 32 and 64 MB chunks. It is written in C++ and I use new and delete. Should Linux not be smart enough to reuse these blocks once I have freed them, and leave my other memory untouched?
Why does Linux kick my stuff out of the memory?
Is this some other effect I have not considered?
Can I work around this problem without writing a custom memory management system?
The most likely culprit is file caching. The good news is that you can disable file caching. Without caching, your software will run more quickly, but only if you don't need to reload the same data later.
You can do this directly with Linux APIs, but I suggest you use a library such as Boost ASIO. If your software is I/O bound, you should additionally make use of asynchronous I/O to improve performance.
All the recently used pages are causing older pages to get squeezed out of the disk cache. As a result, when some other program runs, its pages have to be paged back in.
What you want to do is use posix_fadvise (or posix_madvise if you're memory mapping the file) to eject pages you've forced the OS to cache so that your program doesn't have a huge cache footprint. This will let older pages from other programs remain in cache.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I understand the general process of memory allocation for C++ programs. According to what I found on the internet, with C++ compilers, memory is allocated for global variables and static variables at compile time, while dynamically created variables (such as those from new/malloc operations) are given space in memory only when the executable is actually running. Correct me if I am wrong here.
So is it true that if the executable is never executed, the memory previously allocated at compile time for global & static variables will still, and always, sit there in memory until the computer is shut down? What if we shut down the PC, reboot it, and then re-execute the executable? This time there is no compilation process, so when does the OS allocate memory for the program's global & static variables? Is it in the system booting phase, or when the executable is actually executed?
Now let's extend this question to any general program on the PC, for example Microsoft Word. We did not write and compile it ourselves; we just installed it from its installation package, so there is no compilation step in this situation (or maybe the installation process is actually the compilation process). Suppose these general programs also need space in memory for static & global variables: when does the OS allocate memory for them? Is it when we power up and boot the OS, or when we actually execute the programs' executables? If the OS pre-loaded all these static variables at boot time, that would kind of explain why the booting process takes some time, but it seems a waste of memory if 90% of the programs installed on the system will not be executed each time the user powers up and uses his PC.
The compiler essentially compiles all the static stuff and code into an image that is kept on disk, e.g. in exe files on Windows, etc.
When you run it, the operating system allocates some memory and basically copies this image into ram, then starts running the compiled code, which was also copied to ram.
Memory that you allocate dynamically in your program is allocated as your program executes.
No RAM is allocated for your program at compile time. The statement "memory is allocated at compile time" is a conceptual simplification. What it really means is that the initial memory image, stored in the compiled file, is built at compile time. This won't be loaded into RAM until the program is actually run.
This is very simplified, but it is the general gist. Check out the specification for the binary file format used on your system (for example, ELF on Linux or PE on Windows) for some more interesting hints, among other things.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I want to create an application that stores data in memory, but I don't want the data to be lost even if my app crashes.
What concept should I use?
Should I use shared memory, or is there some other concept that suits my requirement better?
You are asking for persistence (or even orthogonal persistence) and/or for application checkpointing.
This is not possible (at least through portable C++ code) in the general case for arbitrary existing C++ code, e.g. because of ASLR, because of pointers on -or to- the local call stack, because of multi-threading, because of external resources (sockets, opened files, ...), and because the current continuation cannot be accessed, restored and handled in standard C++.
However, you might design your application with persistence in mind. This is a strong architectural requirement. You could for instance have every class contain some dumping method and its load factory function. Beware of shared pointers, and take into account that you could have cyclic references. Study garbage collection algorithms (e.g. in the GC Handbook), which are similar to those needed for persistence (a copying GC is quite similar to a checkpointing algorithm).
Look also into serialization libraries (like libs11n). You might also consider persisting into a textual format (e.g. JSON), perhaps inside some Sqlite database (or some real database like PostgreSQL or MongoDB...). I am doing this (in C) in my monimelt software.
You might also consider checkpointing libraries like BLCR.
The important thing is to think about persistence & checkpointing very early at design time. Thinking of your application as some specialized bytecode interpreter or VM might help (notably if you want to persist continuations, or some form of "call stack").
You could fork your process (assuming you are on Linux or Posix) before persistence. Hence, persistence time does not matter that much (e.g. if you persist every hour or every ten minutes).
Some language implementations are able to persist their entire state (notably their heap), e.g. SBCL (a good Common Lisp implementation) with its save-lisp-and-die, or Poly/ML -an ML dialect- with its SaveState, or Squeak (a Smalltalk implementation).
See also this answer & that one. J.Pitrat's blog has a related entry: CAIA as a sleeping beauty.
Persistence of data together with code (e.g. vtables of objects, function pointers) might be technically difficult. dladdr(3) -with dlsym- might help (and, if you are able to code machine-specific things, consider the old getcontext(3), but I don't recommend that). Avoid name mangling (for dlsym) by declaring all code related to persistence extern "C". If you want to persist some data and be able to restart from it with a slightly modified program (e.g. a small bugfix), things are much more complex.
More pragmatically, you could have a class representing your entire persistable state, and implement methods to persist (and reload it). You would then persist only at certain steps of your algorithm (e.g. if you have a main loop or an event loop, at start of that loop). You probably don't want to persist too often (e.g. because of the time and disk space required to persist), e.g. perhaps every ten minutes. You might perhaps consider some transaction log if it fits in the overall picture of your application.
Use memory-mapped files - mmap (https://en.wikipedia.org/wiki/Mmap) - and allocate all your structures inside the mapped memory region. The system will properly save the mapped file to disk if your app crashes.