I've been searching Stack Overflow for a guideline on the max amount of memory one should allocate on the stack.
I see best practices for stack vs. heap allocation but nothing has numbers on a guideline on how much should be allocated on the stack and how much should be allocated on the heap.
Any ideas/numbers I can use as a guideline? When should I allocate on the stack vs. the heap and how much is too much?
In a typical case, the stack is limited to around 1-4 megabytes. To leave space for other parts of the code, you typically want to limit a single stack frame to no more than a few tens of kilobytes or so if possible. When/if recursion gets (or might get) involved, you typically want to limit it quite a bit more than that.
The answer here depends on the environment in which the code is running. On a small embedded system, the whole stack may be a few kilobytes. On a large system running on a desktop, the stack is typically in the megabytes.
For desktop/big embedded system, a few kilobytes is typically fine. For small embedded systems, that may not work well at all.
On the other hand, heavy use of the heap can lead to significant overhead from calling new/delete frequently. So in a typical situation, you shouldn't use heap allocation for very small objects - unless it's necessary for other design reasons (e.g. you need to store a pointer somewhere permanently, and the stack won't work for that because you return from the current function before you are finished with the object).
Of course, it's the overall design that matters. If you have a very simple application, with a few functions, none of which are recursive, it could be fine to allocate a few hundred kilobytes in main or a level above. On the other hand, if you are making a library for generic use, using more than a few kilobytes will probably not make you popular with the developers using the library. And if the library is being developed to run on low memory systems (in a washing machine, old style mobile phone, etc) then using more than a couple of hundred bytes is probably a bad idea.
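As a rough, hedged illustration of that guideline (the sizes here are arbitrary examples, not hard limits): a buffer of a few kilobytes is usually fine as a local variable, while a multi-megabyte buffer belongs on the heap, e.g. behind a std::vector.

    #include <cstddef>
    #include <vector>

    void process_small() {
        char scratch[4 * 1024];          // a few KB on the stack: normally fine
        scratch[0] = 0;                  // ... use scratch ...
    }

    void process_large(std::size_t n) {
        // char big[8 * 1024 * 1024];    // an ~8 MB local array would likely overflow the stack
        std::vector<char> big(n);        // heap-backed buffer instead
        if (!big.empty()) big[0] = 0;    // ... use big ...
    }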
Keep stack allocations as small as possible. Use the heap for large datasets, or the stack allocation will persist for the whole scope's lifetime, possibly thrashing the cache.
I want to understand the memory management in C and C++ programming for Application Development. Application will run on the PC.
If I want to make a program which uses as little RAM as possible while running, what are the points I need to consider while programming?
Here are two points according to what I understand, but I am not sure:
(1) Use as few local variables as possible in main() and other functions.
As local variables are saved on the stack, which is RAM?
(2) Instead of local variables, use global variables at the top.
As global variables are saved in the uninitialized and initialized ROM areas?
Thanks.
1) Generally the alternative to allocating on the stack is allocating on the heap (e.g., with malloc) which actually has a greater overhead due to bookkeeping/etc, and the stack already has memory reserved for it, so allocating on the stack where possible is often preferable. On the other hand there is less space on the stack while the heap can be close to “unlimited” on modern systems with virtual memory and 64-bit address space.
2) On PCs and other non-embedded systems, everything in your program goes in RAM, i.e., it is not flashed to a ROM-like memory, so global versus local does not help in that regard. Also globals† tend to “live” as long as the application is running, while locals can be allocated and freed (either on the stack or heap) as required, and are thus preferable.
† More accurately, there can also be local variables with static duration, and variables with global scope that are pointers to dynamically allocated memory, so the terms local and global are used quite loosely here.
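To illustrate the footnote, here is a small, purely illustrative example (the names are made up): a "local" variable with static duration, and a variable with global scope that points to dynamically allocated memory.

    #include <cstddef>
    #include <cstdlib>

    int* g_buffer = nullptr;            // global scope, but it points at heap memory

    void tick() {
        static int call_count = 0;      // a "local" with static duration: it lives
        ++call_count;                   // for the whole program, much like a global
    }

    void setup(std::size_t n) {
        g_buffer = static_cast<int*>(std::malloc(n * sizeof(int)));  // dynamic allocation
    }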
In general, modern desktop/laptop and even mobile operating systems are quite good at managing memory, so you probably shouldn't be trying to micro-optimize everything as you may actually do more harm than good.
If you really do need to bring down the memory footprint of your program, you must realize that everything in the program is stored in RAM, and so you need to work on reducing the number and size of the things you have, rather than trying to juggle their location. The other place where you can store things locally on a PC is the hard drive, so store large resources there and only load them as required (preferably only exactly the parts required). But remember that disk access is orders of magnitude slower than memory access, and that the operating system can also swap things out to disk if its memory gets full.
The program code itself is also stored in RAM, so have your compiler optimize for size (-Os or /Os option in many common compilers). Also remember that if you save a bit of space in variables by writing more complex code, the effort may be undone by the increased code size; save your optimizations for big wins (e.g., compressing large resources will require the added decompression code, but may still yield a large net win). Use of dynamically linked libraries (and other resources) also helps the overall memory footprint of the system if the same library is used by multiple programs running at the same time.
(Note that some of the above does not apply in embedded development, e.g., code and static constants may indeed be stored in flash instead of RAM, etc.)
This is difficult because on your PC, the program will be running out of RAM unless you can somehow execute it out of ROM or Flash.
Here are the points to consider:
Reduce your code size.
Code takes up RAM.
Reduce variable quantity and size.
Variables need to live somewhere and that somewhere is in RAM.
Reduce character literals.
They, too, take up space.
Reduce function call nesting.
A function may require parameters, which are placed in RAM.
A function that calls other functions needs a return path; the path is stored in RAM.
Use RAM from other devices.
Other devices, such as the graphics processor and your hard drive adapter card, may have RAM you can use. If you use this RAM, you're not using the primary RAM.
Page memory to external device.
The OS is capable of virtual memory and can page memory out to an external device, such as a hard drive.
Edit 1 - Dynamic libraries
To reduce the RAM footprint of your program, you could allocate an area where you swap library functions in and out. This is similar to the DLL concept. When a function is needed, you load it from the hard drive into the reserved area.
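On Linux, one hedged way to approximate this "load it only when needed" idea is the dlopen/dlsym API (the library name and function name below are made-up placeholders, and you need to link with -ldl):

    #include <cstdio>
    #include <dlfcn.h>

    int main() {
        // Load the shared library only at the moment the feature is needed.
        void* handle = dlopen("libfeature.so", RTLD_LAZY);       // hypothetical library
        if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        // Look up a function by name; its signature is an assumption here.
        using feature_fn = int (*)(int);
        auto run = reinterpret_cast<feature_fn>(dlsym(handle, "run_feature"));
        if (run) run(42);

        dlclose(handle);                                          // release the mapping when done
        return 0;
    }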
Typically, a certain amount of space will be allocated for the stack; such space will be unavailable for other purposes whether or not it is used. If the space turns out to be inadequate, the program will die a gruesome death.
Local variables will be stored using some combination of registers and stack space. Some compilers will use the same registers or stack space for variables which are "live" at different times in a program's execution; others will not. Further, function arguments are typically pushed on the stack before calling a function and removed at the caller's convenience. In evaluating the code sequence:
function1(1,2,3,4,5);
function2(6,7,8,9,10);
the arguments for the first function will be pushed on the stack and that function will be called. At that point the compiler could remove those five values off the stack, but since a single instruction can remove any number of pushed values, many compilers will push the arguments for the second function (leaving the arguments of the first on the stack), call the second function, and then use one instruction to eliminate all ten. Normally this would be a non-issue, but in some deeply-nested recursive scenarios it could potentially be a problem.
Unless the "PC" you're developing for is tiny by today's standards, I wouldn't worry too much about trying to micro-optimize RAM usage. I've developed code for microcontrollers with only 25 bytes of RAM, and even written full-fledged games for use on a microprocessor-based console with a whopping 128 bytes (not KBytes!) of RAM, and on such system sit makes sense to worry about each individual byte. For PC applications, though, the only time it makes sense to worry about individual bytes is when they're part of a data structure which will get replicated many thousands of times in RAM.
You might want to get a book on "embedded" programming. Such a book will likely discuss ways to keep the memory footprint down, as embedded systems are more constrained than modern desktop or server systems.
When you use "local" variables, they are saved on the stack. As long as you don't use too much stack, this is basically free memory, as when the function exits the memory is returned. How much is "too much" varies... recently I had to work on a system where there is a limit of 8 KB of data for the stack per process.
When you use "global" variables or other static variables, the memory you use is tied up for the duration of the program. Thus you should minimize your use of globals, and/or find ways to share the same memory across multiple functions in your program.
I wrote a fairly elaborate "object manager" for a project I wrote a few years ago. A function can use the "get" operation to borrow an object, and then use the "release" operation when it is done borrowing the object. This means that all the functions in the system were able to share a relatively small amount of data space by taking turns using the shared objects. It's up to you to decide whether it is worth your time to build an "object manager" sort of thing or if you have enough memory to just use simple variables.
You can get much of the benefit of an "object manager" by simply calling malloc() and free() a lot. Then the heap allocator manages the shared resource, the heap memory, for you. The reason I wrote my own "object manager" was a need for speed. My system keeps using identical data objects, and it is way faster to just keep re-using the same ones than to keep freeing them and malloc-ing them again. Also, my system can be run on a DSP chip, and malloc() can be a surprisingly slow function on some DSP architectures.
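To make the get/release idea concrete, here is a minimal sketch (this is not the answerer's actual object manager; the Packet type and pool size are arbitrary, and it is single-threaded):

    #include <array>
    #include <cstddef>

    struct Packet { char data[256]; };        // example of a reusable data object

    class ObjectManager {
        static const std::size_t kCount = 8;  // small, fixed amount of shared data space
        std::array<Packet, kCount> pool_;
        std::array<bool, kCount> in_use_{};   // which slots are currently borrowed
    public:
        Packet* get() {                       // borrow an object; nullptr if none are free
            for (std::size_t i = 0; i < kCount; ++i)
                if (!in_use_[i]) { in_use_[i] = true; return &pool_[i]; }
            return nullptr;
        }
        void release(Packet* p) {             // give the object back for others to reuse
            in_use_[static_cast<std::size_t>(p - pool_.data())] = false;
        }
    };

Because get() and release() just flip a flag on pre-existing storage, they avoid the per-call cost of malloc()/free(), which is the speed benefit described above.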
Having multiple functions using the same global variables can lead you to tricky bugs, if one function tries to hold on to a global buffer while another one is overwriting the data. So your program will likely be more robust if you use malloc() and free() as long as each function only writes into data it allocated for itself. (But malloc() and free() can introduce bugs of their own: memory leaks, double-free errors, continuing to use a pointer after the data to which it points has been freed... if you use malloc() and free() be sure to use a tool such as Valgrind to check your code.)
Any variables, by definition, must be stored in read/write memory (or RAM). If you are talking about an embedded system with the code initially in ROM, then the runtime will copy the ROM image you identified into RAM to hold the values of the global variables.
Only items marked unchangeable (const) may be kept in the ROM during runtime.
Further, you need to reduce the depth of the calling structure of the program as each function call requires stack space (also in RAM) to record the return address and other values.
To minimise the use of memory, you can try to flag local variables with the register attribute, but this may not be honoured by your compiler.
Another common technique is to generate large variable data dynamically, on the fly, whenever it is required, to avoid having to keep buffers around. Such buffers usually take up much more space than simple variables.
If this is a PC, then by default you will be given a stack of a certain size (you can make it bigger or smaller). Using this stack is more efficient RAM-wise than using global variables, because your RAM usage will be the fixed stack size + globals + other stuff (program, heap, etc.). The stack acts as a reusable piece of memory.
I have a question about low level stuff of dynamic memory allocation.
I understand that there may be different implementations, but I need to understand the fundamental ideas.
So,
when a modern OS memory allocator or the equivalent allocates a block of memory, this block needs to be freed.
But, before that happens, some system needs to exist to control the allocation process.
I need to know:
How this system keeps track of allocated and unallocated memory. I mean, the system needs to know what blocks have already been allocated and what their sizes are, in order to use this information in the allocation and deallocation process.
Is this process supported by modern hardware, like allocation bits or something like that?
Or is some kind of data structure used to store allocation information?
If there is a data structure, how much memory does it use compared to the memory allocated?
Is it better to allocate memory in big chunks rather than small ones and why?
Any answer that can help reveal fundamental implementation details is appreciated.
If there is a need for code examples, C or C++ will be just fine.
"How this system keeps track of allocated and unallocated memory." For non-embedded systems with operating systems, a virtual page table, which the OS is in charge of organizing (with hardware TLB support of course), tracks the memory usage of programs.
AS FAR AS I KNOW (and the community will surely yell at me if I'm mistaken), tracking individual malloc() sizes and locations has a good number of implementations and is runtime-library dependent. Generally speaking, whenever you call malloc(), the size and location is stored in a table. Whenever you call free(), the table entry for the provided pointer is looked up. If it is found, that entry is removed. If it is not found, the free() is ignored (which also indicates a possible memory leak).
When all malloc() entries in a virtual page are freed, that virtual page is then released back to the OS (this also implies that free() does not always release memory back to the OS since the virtual page may still have other malloc() entries in it). If there is not enough space within a given virtual page to support another malloc() of a specified size, another virtual page is requested from the OS.
Embedded processors usually don't have operating systems, virtual page tables, nor multiple processes. In this case, virtual memory is not used. Instead, the entire memory of the embedded processor is treated like one large virtual page (although the addresses are actually physical addresses) and memory management follows a similar process as previously described.
Here is a similar stack overflow question with more in-depth answers.
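As a toy illustration of the "table of sizes and locations" idea, here is a hedged sketch that wraps malloc()/free(). (A real allocator cannot use std::unordered_map for this, since the map itself needs heap memory; it keeps its bookkeeping in block headers or separate arenas instead.)

    #include <cstddef>
    #include <cstdlib>
    #include <unordered_map>

    // pointer -> size of the allocation it refers to
    static std::unordered_map<void*, std::size_t> g_table;

    void* tracked_alloc(std::size_t size) {
        void* p = std::malloc(size);
        if (p) g_table[p] = size;            // record size and location
        return p;
    }

    void tracked_free(void* p) {
        auto it = g_table.find(p);
        if (it == g_table.end()) return;     // unknown pointer: ignored, as described above
        g_table.erase(it);
        std::free(p);
    }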
"Is it better to allocate memory in big chunks rather than small ones and why?" Allocate as much memory as you need, no more and no less. Compiler optimizations are very smart, and memory will almost always be managed more efficiently (i.e. reducing memory fragmentation) than the programmer can manually do. This is especially true in a non-embedded environment.
Here is a similar stack overflow question with more in-depth answers (note that it pertains to C and not C++, however it is still relevant to this discussion).
Well, there are more than one way to achieve that.
I once had to write a malloc() (and free()) implementation for educational purposes.
This is from my experience, and real-world implementations surely vary.
I used a doubly linked list.
The memory chunk returned to the user after calling malloc() was in fact part of a larger block that started with a struct containing information relevant to my implementation (i.e. the next and prev pointers, and an is_used byte).
So when a user requested N bytes, I allocated N + sizeof(my_struct) bytes, hiding the next and prev pointers at the beginning of the chunk, and returning what was left to the user.
Surely, this is poor design for a program that makes a lot of small allocations (because each allocation takes up N bytes plus 2 pointers plus 1 byte of overhead).
For a real-world implementation, you can take a look at the code of a good and well-known memory allocator.
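For illustration only, here is a stripped-down sketch of the header idea described above (not the answerer's original code; it ignores alignment, block splitting, coalescing, thread safety, and returning memory to the OS):

    #include <cstddef>
    #include <unistd.h>                      // sbrk (POSIX; fine for a teaching example)

    struct BlockHeader {
        BlockHeader* prev;
        BlockHeader* next;
        std::size_t  size;
        bool         is_used;
    };

    static BlockHeader* g_head = nullptr;

    void* my_malloc(std::size_t n) {
        // First-fit search for a free block that is big enough.
        for (BlockHeader* b = g_head; b; b = b->next)
            if (!b->is_used && b->size >= n) { b->is_used = true; return b + 1; }

        // Nothing suitable: grow the heap by header + payload.
        void* mem = sbrk(sizeof(BlockHeader) + n);
        if (mem == reinterpret_cast<void*>(-1)) return nullptr;

        BlockHeader* b = static_cast<BlockHeader*>(mem);
        b->size = n; b->is_used = true; b->prev = nullptr; b->next = g_head;
        if (g_head) g_head->prev = b;
        g_head = b;
        return b + 1;                        // user memory starts right after the header
    }

    void my_free(void* p) {
        if (p) static_cast<BlockHeader*>(p)[-1].is_used = false;   // mark block reusable
    }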
Normally there exist two different layers.
One layer lives at application level, usually as part of the C standard library. This is what you call through functions like malloc and free (or operator new in C++, which in turn usually calls malloc). This layer takes care of your allocations, but does not know about memory or where it comes from.
The other layer, at OS level, does not know and does not care anything about your allocations. It only maintains a list of fixed-size memory pages that have been reserved, allocated, and accessed, and, for each page, information such as where it maps to.
There are many different implementations for either layer, but in general it works like this:
When you allocate memory, the allocator (the "application level part") looks whether it has a matching block somewhere in its books that it can give to you (some allocators will split a larger block in two, if need be).
If it doesn't find a suitable block, it reserves a new block (usually much larger than what you ask for) from the operating system. sbrk or mmap on Linux, or VirtualAlloc on Windows would be typical examples of functions it might use for that effect.
This does very little apart from showing intent to the operating system, and generating some page table entries.
The allocator then (logically, in its books) splits up that large area into smaller pieces according to its normal mode of operation, finds a suitable block, and returns it to you. Note that this returned memory does not necessarily even exist as physical memory yet (though most allocators write some metadata into the first few bytes of each allocated unit, so they necessarily pre-fault those pages).
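A hedged sketch of that "reserve a big block from the OS, then carve it up" step on Linux, reduced to a trivial bump allocator (real allocators keep far richer bookkeeping and can reuse freed pieces):

    #include <cstddef>
    #include <sys/mman.h>

    static char*       g_region = nullptr;
    static std::size_t g_used   = 0;
    static const std::size_t kRegionSize = 1 << 20;     // reserve 1 MB up front (arbitrary)

    void* carve(std::size_t n) {
        if (!g_region) {
            // One mmap call reserves the whole region; pages are faulted in lazily.
            void* p = mmap(nullptr, kRegionSize, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) return nullptr;
            g_region = static_cast<char*>(p);
        }
        if (g_used + n > kRegionSize) return nullptr;   // out of reserved space
        void* out = g_region + g_used;                  // hand out the next slice
        g_used += n;
        return out;
    }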
In the meantime, invisibly, a background task zeroes out memory pages that were once in use by some process but have since been freed. This happens all the time, on a tentative basis, since sooner or later someone will ask for memory (often, that's what the idle task does).
Once you access an address in the page that contains your allocated block for the first time, you generate a fault. The page table entry of this yet non-existent page (it logically exists, just not physically) is replaced with a reference to a page from the pool of zero pages. In the uncommon case that there is none left, for example if huge amounts of memory are being allocated all the time, the OS swaps out a page which it believes will not be accessed any time soon, zeroes it, and returns this one.
Now the page becomes part of your working set, it corresponds to actual physical memory, and it counts towards your process' quota. While your process is running, pages may be moved in and out of your working set, or may be paged out and in, as you exceed certain limits, and according to how much memory is needed and how it is accessed.
Once you call free, the allocator puts the freed area back into its books. It may tell the OS that it does not need the memory any more instead, but usually this does not happen as it is not really necessary and it is more efficient to keep around a little extra memory and reuse it. Also, it may not be easy to free the memory because usually the units that you allocate/deallocate do not directly correspond with the units the OS works with (and, in the case of sbrk they'd need to happen in the correct order, too).
When the process ends, the OS simply throws away all page table entries and adds all pages to the list of pages that the idle task will zero out. So the physical memory becomes available to the next process asking for some.
Suppose I have a memory pool object with a constructor that takes a pointer to a large chunk of memory ptr and size N. If I do many random allocations and deallocations of various sizes I can get the memory in such a state that I cannot allocate an M byte object contiguously in memory even though there may be a lot free! At the same time, I can't compact the memory because that would cause a dangling pointer on the consumers. How does one resolve fragmentation in this case?
I wanted to add my 2 cents only because no one else pointed out that from your description it sounds like you are implementing a standard heap allocator (i.e. what all of us already use every time we call malloc() or operator new).
A heap is exactly such an object: it goes to the virtual memory manager and asks for a large chunk of memory (what you call "a pool"). Then it has all kinds of different algorithms for dealing with the most efficient way of allocating chunks of various sizes and freeing them. Furthermore, many people have modified and optimized these algorithms over the years. For a long time Windows came with an option called the low-fragmentation heap (LFH) which you used to have to enable manually. Starting with Vista, the LFH is used for all heaps by default.
Heaps are not perfect and they can definitely bog down performance when not used properly. Since OS vendors can't possibly anticipate every scenario in which you will use a heap, their heap managers have to be optimized for the "average" use. But if you have a requirement which is similar to the requirements for a regular heap (i.e. many objects, different size....) you should consider just using a heap and not reinventing it because chances are your implementation will be inferior to what OS already provides for you.
With memory allocation, the only time you can gain performance by not simply using the heap is by giving up some other aspect (allocation overhead, allocation lifetime....) which is not important to your specific application.
For example, in our application we had a requirement for many allocations of less than 1KB but these allocations were used only for very short periods of time (milliseconds). To optimize the app, I used Boost Pool library but extended it so that my "allocator" actually contained a collection of boost pool objects, each responsible for allocating one specific size from 16 bytes up to 1024 (in steps of 4). This provided almost free (O(1) complexity) allocation/free of these objects but the catch is that a) memory usage is always large and never goes down even if we don't have a single object allocated, b) Boost Pool never frees the memory it uses (at least in the mode we are using it in) so we only use this for objects which don't stick around very long.
So which aspect(s) of normal memory allocation are you willing to give up in your app?
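For illustration, here is a rough sketch of the size-class idea in plain C++ (power-of-two buckets rather than the 4-byte steps described above, and without Boost.Pool); it shares the same trade-off: freed blocks are kept on per-size free lists and never returned to the OS.

    #include <cstddef>
    #include <cstdlib>

    // One free list per power-of-two size class: 16, 32, 64, ..., 1024 bytes.
    static const std::size_t kClasses = 7;
    static void* g_free_list[kClasses] = {};

    static std::size_t class_of(std::size_t n) {        // assumes 0 < n <= 1024
        std::size_t c = 0, size = 16;
        while (size < n) { size <<= 1; ++c; }
        return c;
    }

    void* pool_alloc(std::size_t n) {
        std::size_t c = class_of(n);
        if (void* p = g_free_list[c]) {                  // O(1): reuse a previously freed block
            g_free_list[c] = *static_cast<void**>(p);
            return p;
        }
        return std::malloc(std::size_t(16) << c);        // none cached: get a fresh block
    }

    void pool_free(void* p, std::size_t n) {
        std::size_t c = class_of(n);
        *static_cast<void**>(p) = g_free_list[c];        // push onto the free list; this
        g_free_list[c] = p;                              // memory is never given back
    }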
Depending on the system there are a couple of ways to do it.
Try to avoid fragmentation in the first place; if you allocate blocks in powers of 2 you have less of a chance of causing this kind of fragmentation. There are a couple of other ways around it, but if you ever reach this state then you just OOM at that point, because there are no delicate ways of handling it other than killing the process that asked for memory, blocking until you can allocate memory, or returning NULL as your allocation area.
Another way is to pass pointers to pointers to your data (e.g. int **). Then you can rearrange memory beneath the program (thread safely, I hope) and compact the allocations so that you can allocate new blocks and still keep the data from old blocks (once the system gets to this state, though, that becomes a heavy overhead, but it should seldom be needed).
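A tiny, hedged illustration of that double-indirection idea: callers hold a handle, so the pool can move the underlying bytes during compaction and just patch the handle (single-threaded, names made up):

    #include <cstddef>
    #include <cstring>

    struct Handle { void* ptr; };    // callers keep a Handle*, never the raw pointer

    // During compaction the pool may move a block and patch the handle in place;
    // every Handle* held by the program remains valid afterwards.
    void relocate(Handle* h, void* new_location, std::size_t size) {
        std::memcpy(new_location, h->ptr, size);
        h->ptr = new_location;
    }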
There are also ways of "binning" memory so that you have contiguous pages: for instance, dedicate one page only to allocations of 512 bytes and less, another to 1024 bytes and less, etc. This makes it easier to decide which bin to use, and in the worst case you split from the next highest bin or merge from a lower bin, which reduces the chance of fragmenting across multiple pages.
Implementing object pools for the objects that you frequently allocate will drive fragmentation down considerably without the need to change your memory allocator.
It would be helpful to know more exactly what you are actually trying to do, because there are many ways to deal with this.
But, the first question is: is this actually happening, or is it a theoretical concern?
One thing to keep in mind is you normally have a lot more virtual memory address space available than physical memory, so even when physical memory is fragmented, there is still plenty of contiguous virtual memory. (Of course, the physical memory is discontiguous underneath but your code doesn't see that.)
I think there is sometimes unwarranted fear of memory fragmentation, and as a result people write a custom memory allocator (or worse, they concoct a scheme with handles and moveable memory and compaction). I think these are rarely needed in practice, and it can sometimes improve performance to throw this out and go back to using malloc.
Write the pool to operate as a list of allocations; it can then be extended and destroyed as needed. This can reduce fragmentation.
And/or implement allocation transfer (or move) support so you can compact active allocations. The object/holder may need to assist you, since the pool may not necessarily know how to transfer types itself. If the pool is used with a collection type, then it is far easier to accomplish compacting/transfers.
Let me start by saying that I have read this tutorial and have read this question. My questions are:
1. How big can the stack get? Is it processor/architecture/compiler dependent?
2. Is there a way to know exactly how much memory is available to my function/class stack and how much is currently being used, in order to avoid overflows?
3. Using modern compilers (say gcc 4.5) on a modern computer (say 6 GB ram), do I need to worry about stack overflows or are they a thing of the past?
4. Is the actual stack memory physically on RAM or on CPU cache(s)?
5. How much faster is stack memory access and read compared to heap access and read? I realize that times are PC specific, so a ratio is enough.
6. I've read that it is not advisable to allocate big vars/objects on the stack. How much is too big? This question here is given an answer of 1MB for a thread in win32. How about a thread in Linux amd64?
I apologize if those questions have been asked and answered already, any link is welcome !
Yes, the limit on the stack size varies, but if you care you're probably doing something wrong.
Generally, no, you can't get information about how much memory is available to your program. Even if you could obtain such information, it would usually be stale before you could use it.
If you share access to data across threads, then yes you normally need to serialize access unless they're strictly read-only.
You can pass the address of a stack-allocated object to another thread, in which case you (again) have to serialize unless the access is strictly read-only.
You can certainly overflow the stack even on a modern machine with lots of memory. The stack is often limited to only a fairly small fraction of overall memory (e.g., 4 MB).
The stack is allocated as system memory, but usually used enough that at least the top page or two will typically be in the cache at any given time.
Being part of the stack vs. heap makes no direct difference to access speed -- the two typically reside in identical memory chips, and often even at different addresses in the same memory chip. The main difference is that the stack is normally contiguous and heavily used, so the top few pages will almost always be in the cache. Heap-based memory is typically fragmented, so there's a much greater chance of needing data that's not in the cache.
Little has changed with respect to the maximum size of object you should allocate on the stack. Even if the stack can be larger, there's little reason to allocate huge objects there.
The primary way to avoid memory leaks in C++ is RAII (AKA SBRM, Stack-based resource management).
Smart pointers are a large subject in themselves, and Boost provides several kinds. In my experience, collections make a bigger difference, but the basic idea is largely the same either way: relieve the programmer of keeping track of every circumstance when a particular object can be used or should be freed.
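A brief sketch of what RAII and smart pointers buy you, using the standard C++11 equivalents of the Boost pointers mentioned above (the file name is just an example):

    #include <fstream>
    #include <memory>
    #include <vector>

    void raii_example() {
        std::ifstream file("data.txt");                          // closed automatically on scope exit
        auto values = std::make_shared<std::vector<int>>(1000);  // freed when the last owner goes away
        std::unique_ptr<int[]> buffer(new int[256]);             // freed automatically, even on exceptions
        // ... use file, values, buffer ...
    }   // no explicit cleanup needed here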
1. How big can the stack get? Is it processor/architecture/compiler dependent?
The size of the stack is limited by the amount of memory on the platform and the amount of memory allocated to the process by the operating system.
2. Is there a way to know exactly how much memory is available to my function/class stack and how much is currently being used in order to avoid overflows?
There is no C or C++ facility for determining the amount of available memory. There may be platform specific functions for this. In general, most programs try to allocate memory, then come up with a solution for when the allocation fails.
3. Using modern compilers (say gcc 4.5) on a modern computer (say 6 GB ram), do I need to worry about stack overflows or is it a thing of the past?
Stack Overflows can happen depending on the design of the program. Recursion is a good example of depleting the stack, regardless of the amount of memory.
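For instance, unbounded (or merely very deep) recursion like the following will exhaust any stack eventually, regardless of how much RAM the machine has (illustrative only):

    // Not a tail call (the addition happens after the recursive call returns),
    // so every invocation keeps a stack frame alive until the recursion unwinds.
    unsigned long long sum_to(unsigned long long n) {
        if (n == 0) return 0;
        return n + sum_to(n - 1);        // e.g. sum_to(100000000) will likely crash
    }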
4. Is the actual stack memory physically on RAM or on CPU cache(s)?
Platform dependent. Some CPUs can load up their cache with local variables on the stack. There is a wide variety of scenarios on this topic. Not defined in the language specification.
5. How much faster is stack memory access and read compared to heap access and read? I realize that times are PC specific, so a ratio is enough.
Usually there is no difference in speed. It depends on how the platform organizes its memory (physically) and how the executable's memory is laid out. The heap or stack could reside in a serial access memory chip (a slow method) or even on a Flash memory chip. Not specified in the language specification.
6. I've read that it is not advisable to allocate big vars/objects on the stack. How much is too big? This question here is given an answer of 1MB for a thread in win32. How about a thread in Linux amd64?
The best advice is to allocate small local variables as needed (a.k.a. via the stack). Huge items are either allocated from dynamic memory (a.k.a. the heap), or are some kind of global (static local to a function, local to a translation unit, or even a global variable). If the size is known at compile time, use the global type of allocation. Use dynamic memory when the size may change during run-time.
The stack also contains information about function addresses. This is one major reason to not allocate a lot of objects locally. Some compilers have smaller limits for stacks than for heap or global variables. The premise is that nested function calls require less memory than large data arrays or buffers.
Remember that when switching threads or tasks, the OS needs to save the state somewhere. The OS may have different rules for saving stack memory versus other types.
1-2 : On some embedded CPUs the stack may be limited to a few kbytes; on some machines it may expand to gigabytes. There's no platform-independent way to know how big the stack can get, in some measure because some platforms are capable of expanding the stack when they reach the limit; the success of such an operation cannot always be predicted in advance.
3 : The effects of nearly-simultaneous writes, or of writes in one thread that occur nearly simultaneously with reads in another, are largely unpredictable in the absence of locks, mutexes, or other such devices. Certain things can be assumed (for example, if one thread reads a heap-stored 'int' while another thread changes it from 4 to 5, the first thread may see 4 or it may see 5; on most platforms, it would be guaranteed not to see 27).
4 : Some platforms share stack address space among threads; others do not. Passing pointers to things on the stack is usually a bad idea, though, since the foreign thread receiving the pointer will have no way of ensuring that the target is in scope and won't go out of scope.
5 : Generally one does not need to worry about stack space in any routine which is written to limit recursion to a reasonable level. One does, however, need to worry about the possibility of defective data structures causing infinite recursion, which would wipe out any stack no matter how large it might be. One should also be mindful of the possibility of nasty input which would cause a much greater stack depth than expected. For example, a compiler using a recursive-descent parser might choke if fed a file containing a billion repetitions of the sequence "1+(". Even if the machine has a gig of stack space, if each nested sub-expression uses 64 bytes of stack, the aforementioned three-gig file could kill it.
6 : Stack is stored generally in RAM and/or cache; the most-recently-accessed parts will generally be in cache, while the less-recently-accessed parts will be in main memory. The same is generally true of code, heap, and static storage areas as well.
7 : That is very system dependent; generally, "finding" something on the heap will take as much time as accessing a few things on the stack, but in many cases making multiple accesses to different parts of the same heap object can be as fast as accessing a stack object.