We have a front end written in Visual Basic 6.0 that calls several back end DLLs written in mixed C/C++. The problem is that each DLL appears to have its own heap and one of them isn’t big enough. The heap collides with the program stack when we’ve allocated enough memory.
Each DLL is written entirely in C, except for the basic DLL wrapper, which is written in C++. Each DLL has a handful of entry points. Each entry point immediately calls a C routine. We would like to increase the size of the heap in the DLL, but haven’t been able to figure out how to do that. I searched for guidance and found these MSDN articles:
http://msdn.microsoft.com/en-us/library/hh405351(v=VS.85).aspx
These articles are interesting but provide conflicting information. In our problem it appears that each DLL has its own heap. This matches the "Heaps: Pleasures and Pains" article, which says that the C run-time (CRT) library creates its own heap on startup. The "Managing Heap Memory" article says that the CRT library allocates out of the default process heap. The "Memory management options in Win32" article says the behavior depends on the version of the CRT library being used.
We’ve temporarily solved the problem by allocating memory from a private heap. However, in order to improve the structure of this very large, complex program, we want to switch from C with a thin C++ wrapper to real C++ with classes. We’re pretty certain that the new and delete operators won’t allocate memory from our private heap, and we’re wondering how to control the size of the heap C++ uses to allocate objects in each DLL. The application needs to run on all versions of desktop Windows NT, from 2000 through 7.
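For reference, here is a minimal sketch of the private-heap workaround described above, using the Win32 heap API (the wrapper names and the 20 MB initial size are illustrative, not our actual code):

```cpp
#include <windows.h>

static HANDLE g_privateHeap = NULL;

// Create a growable private heap; a dwMaximumSize of 0 means it can grow.
int InitPrivateHeap(void)
{
    g_privateHeap = HeapCreate(0, 20 * 1024 * 1024 /* initial 20 MB */, 0);
    return g_privateHeap != NULL;
}

void* PrivateAlloc(size_t bytes)
{
    return HeapAlloc(g_privateHeap, HEAP_ZERO_MEMORY, bytes);
}

void PrivateFree(void* p)
{
    HeapFree(g_privateHeap, 0, p);
}

// Releases every allocation in one shot when the DLL is done.
void DestroyPrivateHeap(void)
{
    HeapDestroy(g_privateHeap);
    g_privateHeap = NULL;
}
```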
The Question
Can anyone point us to definitive and correct documentation that
explains how to control the size of the heap C++ uses to allocate
objects?
Several people have asserted that stack corruption due to heap allocations overwriting the stack is impossible. Here is what we observed. The VB front end uses four DLLs that it dynamically loads. Each DLL is independent of the others and provides a handful of methods called by the front end. All the DLLs communicate via data structures written to files on disk. These data structures are all statically structured: they contain no pointers, just value types and fixed-size arrays of value types. The problem DLL is invoked by a single call where a file name is passed. It is designed to allocate about 20 MB of data structures required to complete its processing. It does a lot of calculation, writes the results to disk, releases the 20 MB of data structures, and returns an error code. The front end then unloads the DLL. While debugging the problem under discussion, we set a breakpoint at the beginning of the data structure allocation code, watched the memory values returned from the calloc calls, and compared them with the current stack pointer. We watched as the allocated blocks approached the stack. After the allocation was complete, the stack began to grow until it overlapped the heap. Eventually the calculations wrote into the heap and corrupted the stack. As the stack unwound it tried to return to an invalid address and crashed with a segmentation fault.
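A stripped-down version of the check we performed looks roughly like this (illustrative only; the real code allocates named data structures rather than anonymous blocks):

```cpp
#include <cstdio>
#include <cstdlib>

// Watch calloc results creep toward the stack. The address of a local
// variable serves as a rough proxy for the current stack pointer.
void watch_allocations()
{
    int stack_marker;
    for (int i = 0; i < 100; ++i) {
        void* block = std::calloc(1, 200 * 1024);  // ~200 KB per block
        if (!block) break;
        std::printf("heap block at %p, stack near %p\n",
                    block, static_cast<void*>(&stack_marker));
    }
}
```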
Each of our DLLs is statically linked to the CRT, so that each DLL has its own CRT heap and heap manager. Microsoft says in http://msdn.microsoft.com/en-us/library/ms235460(v=vs.80).aspx:
Each copy of the CRT library has a separate and distinct state.
As such, CRT objects such as file handles, environment variables, and
locales are only valid for the copy of the CRT where these objects are
allocated or set. When a DLL and its users use different copies of the
CRT library, you cannot pass these CRT objects across the DLL boundary
and expect them to be picked up correctly on the other side.
Also, because each copy of the CRT library has its own heap manager,
allocating memory in one CRT library and passing the pointer across a
DLL boundary to be freed by a different copy of the CRT library is a
potential cause for heap corruption.
We don't pass pointers between DLLs. We aren't experiencing heap corruption; we are experiencing stack corruption.
OK, the question is:
Can anyone point us to definitive and correct documentation that
explains how to control the size of the heap C++ uses to allocate
objects?
I am going to answer my own question. I got the answer from reading Raymond Chen's blog The Old New Thing, specifically There's also a large object heap for unmanaged code, but it's inside the regular heap. In that article Raymond recommends Advanced Windows Debugging by Mario Hewardt and Daniel Pravat. This book has very specific information on both stack and heap corruption, which is what I wanted to know. As a plus it provides all sorts of information about how to debug these problems.
Could you please elaborate on this statement of yours:
The heap collides with the program stack when we’ve allocated enough memory.
If we're talking about Windows (or any other mature platform), this should not be happening: the OS makes sure that stacks, heaps, mapped files and other objects never intersect.
Also:
Can anyone point us to definitive and correct documentation that explains how to control the size of the heap C++ uses to allocate objects?
The heap size is not fixed on Windows: it grows as the application uses more and more memory, and it will keep growing until all available virtual memory space for the process is used. It is pretty easy to confirm this: just write a simple test app which keeps allocating memory and counts how much has been allocated. On a default 32-bit Windows you'll reach almost 2 GB. Clearly the heap doesn't initially occupy all the available space, therefore it must grow in the process.
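A sketch of such a test app (the chunk size is arbitrary; the leak is deliberate since we only want the total):

```cpp
#include <cstdio>
#include <cstdlib>

int main()
{
    const size_t chunk = 1024 * 1024;      // grab 1 MB at a time
    size_t total = 0;
    while (std::malloc(chunk) != nullptr)  // never freed: we just count
        total += chunk;
    std::printf("allocated %zu MB before exhaustion\n",
                total / (1024 * 1024));
}
```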
Without many details about the "collision" it's hard to tell what's happening in your case. However, looking at the tags on this question points me to one possibility. It is possible (and happens quite often, unfortunately) that ownership of allocated memory areas is being passed between modules (DLLs in your case). Here's the scenario:
there are two DLLs, A and B, each of which has created its own heap
the DLL A allocates an object in its heap and passes the pointer and ownership to B
the DLL B receives the pointer, uses the memory and deallocates the object
If the heaps are different, most heap managers would not check whether the memory region being deallocated actually belongs to them (mostly for performance reasons). So they would deallocate something which doesn't belong to them, and by doing that they corrupt the other module's heap. This may (and often does) lead to a crash, but not always. Depending on your luck (and the particular heap manager implementation), this operation may change one of the heaps in such a way that the next allocation happens outside of the area where the heap is located.
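A minimal sketch of that scenario (the exported function names are hypothetical; assume each DLL statically links its own CRT):

```cpp
#include <cstdlib>

// In DLL A - allocates from A's CRT heap:
extern "C" __declspec(dllexport) char* A_CreateBuffer(size_t n)
{
    return static_cast<char*>(std::malloc(n));
}

// In DLL B - statically linked to a *different* CRT copy:
extern "C" __declspec(dllexport) void B_Consume(char* p)
{
    // ... use the buffer ...
    std::free(p);  // WRONG: B's heap manager frees a pointer it never
                   // allocated, silently damaging one heap or the other
}
```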
This often happens when one module is managed code, while the other is native one. Since you have the VB6 tag in the question, I'd check if this is the case.
If the stack grows large enough to hit the heap, a prematurely aborted stack overflow may be the problem: invalid data is passed that does not satisfy the exit condition of some recursion (loop detection not working, or not existing) in the problem DLL, so that an infinite recursion consumes a ridiculously large amount of stack space. One would expect such a DLL to terminate with a stack overflow exception, but perhaps because of compiler/linker optimizations or large foreign heap sizes it crashes elsewhere.
Heaps are created by the CRT. That is to say, the malloc heap is created by the CRT and is unrelated to HeapCreate(). It's not used for large allocations, though; those are handed off to the OS directly.
With multiple DLLs, you might have multiple heaps (newer VC versions are better at sharing, but even VC6 had no problem if you used MSVCRT.DLL - that's shared).
The stack, on the other hand, is managed by the OS. Here you see why multiple heaps don't matter: The OS allocation for the different heaps will never collide with the OS allocation for the stack.
Mind you, the OS may allocate heap space close to the stack. The rule is just no overlap; after all, there's no guaranteed "unused separation zone". If you then have a buffer overflow, it could very well overflow into the stack space.
So, any solutions? Yes: move to VC2010. It has buffer security checks, implemented in quite an efficient way. They're the default even in release mode.
Related
It is well known that the freeing of heap memory must be done with the same allocator as the one used to allocate it. This is something to take into account when exchanging heap allocated objects across DLL boundaries.
One solution is to provide a destructor for each object, as in a C API: if a DLL allows creating an object A, it will have to provide a function A_free or something similar [1].
Another related solution is to wrap all allocations in shared_ptr, because a shared_ptr stores a reference to its deleter [2] (see the sketch below).
Another solution is to "inject" a top-level allocator into all loaded DLLs (recursively) [3].
Another solution is simply not to exchange heap-allocated objects, but instead to use some kind of protocol [4].
Yet another solution is to be absolutely sure that the DLLs will share the same heap, which should (will?) happen if they share compatible compilation options (compiler, flags, runtime, etc.) [5][6].
This seems quite difficult to guarantee, especially if one would like to use a package manager and not build everything at once.
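To make solutions 1 and 2 concrete, here is a hedged sketch (A, A_create, and A_free are hypothetical names; the consumer side is shown):

```cpp
#include <memory>

struct A;  // opaque type owned by the DLL

// Solution 1: a C-style API - the DLL that allocates also exports
// the matching destructor, so the right heap always does the freeing.
extern "C" A* A_create();
extern "C" void A_free(A* p);

// Solution 2: bind the deleter into a shared_ptr at creation time,
// inside the allocating module. The deleter travels with the pointer,
// so whichever module drops the last reference calls back into the
// correct deallocation routine.
std::shared_ptr<A> make_shared_A()
{
    return std::shared_ptr<A>(A_create(), &A_free);
}
```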
Is there a way to check at runtime that the heaps are actually the same between multiple DLLs, preferably in a cross-platform way?
For reliability and ease of debugging this seems better than hoping that the application will immediately crash and not corrupt stuff silently.
I think the biggest problem here is your definition of "heap". That assumes there is a unique definition.
The problem is that Windows has HeapAlloc, while C++ typically uses "heap" as the memory allocated by ::operator new. These two can be the same, distinct, a subset, or partially overlapping.
With two DLLs, both might be written in C++ and use ::operator new, but each could have linked in its own unique version. So there might be multiple answers to the observation in the previous paragraph.
Now let's assume, for example, that one DLL has ::operator new forward directly to HeapAlloc, but the other counts allocations before calling HeapAlloc. Clearly the two can't be mixed formally, because the count kept by the second allocator would be wrong. But the code is so simple that both operator news are probably inlined, so at the assembly level you just have calls to HeapAlloc.
There's no way you can detect this at runtime, even if you disassembled the code on the fly (!) - the inlined counter-increment instruction is not distinguishable from the surrounding code.
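For illustration, a sketch of the "counting" flavor of ::operator new described above (assuming Windows and a per-module replacement of the global operators):

```cpp
#include <windows.h>
#include <atomic>
#include <new>

static std::atomic<size_t> g_liveAllocations{0};

void* operator new(size_t n)
{
    void* p = HeapAlloc(GetProcessHeap(), 0, n);
    if (!p) throw std::bad_alloc();
    ++g_liveAllocations;  // the bookkeeping the other DLL doesn't have
    return p;
}

void operator delete(void* p) noexcept
{
    if (p) {
        --g_liveAllocations;  // wrong if p came from the other DLL's new
        HeapFree(GetProcessHeap(), 0, p);
    }
}
```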
Let me clear up: I understand how new and delete (and delete[]) work. I understand what the stack is, and I understand when to allocate memory on the stack and on the heap.
What I don't understand, however, is: where on the heap is memory allocated. I know we're supposed to look at the heap as this big pool of pretty much limitless RAM, but surely that's not the case.
What is in control of choosing where on the heap memory is stored and how does it choose that?
Also: the term "returning memory to the OS" is one I come across quite often. Does this mean that the heap is shared between all processes?
The reason I care about all this is because I want to learn more about memory fragmentation. I figured it'd be a good idea to know how the heap works before I learn how to deal with memory fragmentation, because I don't have enough experience with memory allocation, nor C++ to dive straight into that.
The memory is managed by the OS, so the answer depends on the OS/platform that is used. The C++ specification does not specify how memory is allocated or freed at a lower level; it specifies it in terms of object lifetime.
While multi-user desktop/server/phone OSes (like Windows, Linux, macOS, Android, …) manage memory in broadly similar ways, it could be completely different on embedded systems.
What is in control of choosing where on the heap memory is stored and how does it choose that?
It's the OS that is responsible for that. How exactly depends - as already said - on the OS. The OS could also be a thin layer in the form of a combination of the runtime library and a minimal OS like IncludeOS.
Does this mean that the heap is shared between all processes?
It depends on the point of view. The address space is - for multi-user systems - in general not shared between processes. The OS ensures, through virtual address spaces, that one process cannot access the memory of another process. But the OS can distribute the whole of RAM among all processes.
For embedded systems, it could even be the case that each process has a fixed amount of preallocated memory - not shared between processes - with no way to allocate new memory or free memory. It is then up to the developers to manage that preallocated memory themselves, by providing custom allocators to the objects of the stdlib and constructing objects in that preallocated storage.
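A minimal sketch of that embedded pattern, constructing objects in preallocated storage with placement new (the pool size and the Sensor type are made up):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Fixed region set aside at build time - no heap available at all.
alignas(std::max_align_t) static unsigned char g_pool[4096];
static std::size_t g_used = 0;

// Bump allocator: hand out the next max-aligned slice of the pool.
void* pool_alloc(std::size_t n)
{
    const std::size_t a = alignof(std::max_align_t);
    n = (n + a - 1) & ~(a - 1);          // round up to keep alignment
    if (g_used + n > sizeof(g_pool)) return nullptr;
    void* p = g_pool + g_used;
    g_used += n;
    return p;
}

struct Sensor { int id; double reading; };

int main()
{
    void* mem = pool_alloc(sizeof(Sensor));
    assert(mem != nullptr);
    Sensor* s = new (mem) Sensor{1, 20.5};  // construct in place
    s->~Sensor();  // destroy manually; the pool keeps the bytes
}
```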
I want to learn more about memory fragmentation
There are two levels of fragmentation: fragmentation of the memory addresses that the OS exposes to the C++ runtime, and fragmentation on the hardware/OS side (which can be one and the same on embedded systems). How and in what form memory is organized and fragmented by the OS can't be determined using the functions provided by the stdlib. And how the fragmentation of the process's address space behaves depends again on the OS and also on the stdlib being used.
None of these details are specified in the C++ standard. Each C++ implementation is free to implement them in whichever way works for it, as long as the end result agrees with the standard.
Each C++ compiler and operating system implements these low-level details in its own unique way. There is no specific answer to these questions that applies to every C++ compiler and every operating system. Over time, a lot of research has gone into profiling and optimizing memory allocation and deallocation algorithms for a typical C++ application, and some tailored C++ implementations offer alternative memory allocation algorithms from which an application can choose whichever it thinks will work best for it. Of course, none of this is covered by the C++ standard.
Of course, all memory in your computer must be shared between all processes that are running on it, and your operating system is responsible for divvying it up and parceling it out to the processes when they request more memory. All that "returning memory to the OS" means is that a process's memory allocator has determined that it no longer needs a sufficiently large contiguous memory range that was used before, and notifies the operating system that the range can be reassigned to another process.
What decides where on the heap memory is allocated?
From the perspective of a C++ programmer: It is decided by the implementation (of the C++ language).
From the perspective of a C++ standard library implementer (as an example of what may hypothetically be true for some implementation): It is decided by malloc which is part of the C standard library.
From the perspective of a malloc implementer (as an example of what may hypothetically be true for some implementation): The location of the heap in general is decided by the operating system (for example, on Linux systems it might be whatever address is returned by sbrk). The location of any individual allocation is up to the implementer to decide, as long as they stay within the limitations established by the operating system and the specification of the language.
Note that heap memory is called "free store" in C++. I think this is to avoid confusion with the heap data structure which is unrelated.
I understand what the stack is
Note that there is no such thing as "stack memory" in the C++ language. The fact that C++ implementations store automatic variables in such manner is an implementation detail.
The physical memory backing the heap is indeed shared between all processes, but in C++ the delete keyword does not return the memory to the operating system; the allocator keeps it to reuse later on. The location of an allocation depends on how much memory you want to access (there has to be enough free space) and on how the allocator handles placement, which can be first-fit, best-fit, or worst-fit (read more on those strategies online). The name RAM (random access memory) basically tells you where to search for your memory :D
It is, however, possible to get the same memory location when you have a small program and restart it multiple times.
I am debugging a potential memory leak problem in a debug DLL.
The case is that the process runs a sub-test which loads/unloads a DLL dynamically; during the test, a lot of memory is reserved and committed (1.3 GB). After the test is finished and the DLL unloaded, a massive amount of memory (1.2 GB) remains reserved.
The reason I say this reserved memory is allocated by the DLL is that if I use a release DLL (nothing else changed, same test), the reserved memory is ~300 MB, so all the additional reserved memory must be allocated in the debug DLL.
It looks like a lot of memory is committed during the test but decommitted (not released to the free state) after the test. So I want to track what reserves/decommits that large amount of memory. But there is no VirtualAlloc call in the source code, so my questions are:
Is VirtualAlloc the only way to reserve memory?
If not, what other APIs can do that? If it is, what other APIs internally call VirtualAlloc? Quite a few people online say that HeapAlloc internally calls VirtualAlloc; how does that work?
[Parts of this are purely implementation detail, and not things that your application should rely on, so take them only for informational purposes, not as official documentation or contract of any kind. That said, there is some value in understanding how things are implemented under the hood, if only for debugging purposes.]
Yes, the VirtualAlloc() function is the workhorse function for memory allocation in Windows. It is a low-level function, one that the operating system makes available to you if you need its features, but also one that the system uses internally. (To be precise, it probably doesn't call VirtualAlloc() directly, but rather an even lower level function that VirtualAlloc() also calls down to, like NtAllocateVirtualMemory(), but that's just semantics and doesn't change the observable behavior.)
Therefore, HeapAlloc() is built on top of VirtualAlloc(), as are GlobalAlloc() and LocalAlloc() (although the latter two became obsolete in 32-bit Windows and should basically never be used by applications—prefer explicitly calling HeapAlloc()).
Of course, HeapAlloc() is not just a simple wrapper around VirtualAlloc(). It adds some logic of its own. VirtualAlloc() always allocates memory in large chunks, defined by the system's allocation granularity, which is hardware-specific (retrievable by calling GetSystemInfo() and reading the value of SYSTEM_INFO.dwAllocationGranularity). HeapAlloc() allows you to allocate smaller chunks of memory at whatever granularity you need, which is much more suitable for typical application programming. Internally, HeapAlloc() handles calling VirtualAlloc() to obtain a large chunk, and then divvying it up as needed. This not only presents a simpler API, but is also more efficient.
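For instance, a quick way to see the granularity and page size on your own machine (typically 64 KB and 4 KB respectively on x86/x64, though this is hardware-dependent):

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    std::printf("allocation granularity: %lu bytes\n",
                si.dwAllocationGranularity);
    std::printf("page size:              %lu bytes\n", si.dwPageSize);
}
```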
Note that the memory allocation functions provided by the C runtime library (CRT)—namely, C's malloc() and C++'s new operator—are a higher level yet. These are built on top of HeapAlloc() (at least in Microsoft's implementation of the CRT). Internally, they allocate a sizable chunk of memory that basically serves as a "master" block of memory for your application, and then divvy it up into smaller blocks upon request. As you free/delete those individual blocks, they are returned to the pool. Once again, this extra layer provides a simplified interface (and in particular, the ability to write platform-independent code), as well as increased efficiency in the general case.
Memory-mapped files and other functionality provided by various OS APIs are also built upon the virtual memory subsystem, and therefore internally call VirtualAlloc() (or a lower-level equivalent).
So yes, fundamentally, the lowest level memory allocation routine for a normal Windows application is VirtualAlloc(). But that doesn't mean it is the workhorse function that you should generally use for memory allocation. Only call VirtualAlloc() if you actually need its additional features. Otherwise, either use your standard library's memory allocation routines, or if you have some compelling reason to avoid them (like not linking to the CRT or creating your own custom memory pool), call HeapAlloc().
Note also that you must always free/release memory using the corresponding mechanism to the one you used to allocate the memory. Just because all memory allocation functions ultimately call VirtualAlloc() does not mean that you can free that memory by calling VirtualFree(). As discussed above, these other functions implement additional logic on top of VirtualAlloc(), and thus require that you call their own routines to free the memory. Only call VirtualFree() if you allocated the memory yourself via a call to VirtualAlloc(). If the memory was allocated with HeapAlloc(), call HeapFree(). For malloc(), call free(); for new, call delete.
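To make the pairing concrete, a minimal sketch:

```cpp
#include <windows.h>
#include <cstdlib>

int main()
{
    void* v = VirtualAlloc(nullptr, 1 << 16,
                           MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    VirtualFree(v, 0, MEM_RELEASE);        // VirtualAlloc -> VirtualFree

    void* h = HeapAlloc(GetProcessHeap(), 0, 256);
    HeapFree(GetProcessHeap(), 0, h);      // HeapAlloc -> HeapFree

    void* m = std::malloc(256);
    std::free(m);                          // malloc -> free

    int* n = new int[64];
    delete[] n;                            // new[] -> delete[]
}
```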
As for the specific scenario described in your question, it is unclear to me why you are worrying about this. It is important to keep in mind the distinction between reserved memory and committed memory. Reserved simply means that this particular block in the address space has been reserved for use by the process. Reserved blocks cannot be used. In order to use a block of memory, it must be committed, which refers to the process of allocating a backing store for the memory, either in the page file or in physical memory. This is also sometimes known as mapping. Reserving and committing can be done as two separate steps, or they can be done at the same time. For example, you might want to reserve a contiguous address space for future use, but you don't actually need it yet, so you don't commit it. Memory that has been reserved but not committed is not actually allocated.
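A sketch of that two-step reserve-then-commit pattern (sizes are arbitrary):

```cpp
#include <windows.h>

int main()
{
    // Step 1: reserve 1 MB of contiguous address space.
    // No backing store exists yet; touching it would access-violate.
    void* base = VirtualAlloc(nullptr, 1 << 20, MEM_RESERVE, PAGE_NOACCESS);

    // Step 2: commit only the first 64 KB once it is actually needed.
    void* usable = VirtualAlloc(base, 1 << 16, MEM_COMMIT, PAGE_READWRITE);

    // ... use the committed region via 'usable' ...

    VirtualFree(base, 0, MEM_RELEASE);  // release the entire reservation
}
```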
In fact, all of this reserved memory may not be a leak at all. A rather common strategy used in debugging is to reserve a specific range of memory addresses, without committing them, to trap attempts to access memory within this range with an "access violation" exception. The fact that your DLL is not making these large reservations when compiled in Release mode suggests that, indeed, this may be a debugging strategy. And it also suggests a better way of determining the source: rather than scanning through your code looking for all of the memory-allocation routines, scan your code looking for the conditional code that depends upon the build configuration. If you're doing something different when DEBUG or _DEBUG is defined, then that is probably where the magic is happening.
Another possible explanation is the CRT's implementation of malloc() or new. When you allocate a small chunk of memory (say, a few KB), the CRT will actually reserve a much larger block but only commit a chunk of the requested size. When you subsequently free/delete that small chunk of memory, it will be decommitted, but the larger block will not be released back to the OS. The reason for this is to allow future calls to malloc/new to re-use that reserved block of memory. If a subsequent request is for a larger block than can be satisfied by the currently reserved address space, it will reserve additional address space. If, in debugging builds, you are repeatedly allocating and freeing increasingly large chunks of memory, what you're seeing may be the result of memory fragmentation. But this is really not a problem, aside from a minor performance hit, which is really not worth worrying about in debugging builds.
I have a question about low level stuff of dynamic memory allocation.
I understand that there may be different implementations, but I need to understand the fundamental ideas.
So,
when a modern OS memory allocator or the equivalent allocates a block of memory, this block needs to be freed.
But before that happens, some system needs to exist to control the allocation process.
I need to know:
How does this system keep track of allocated and unallocated memory? I mean, the system needs to know which blocks have already been allocated and what their size is, in order to use this information during allocation and deallocation.
Is this process supported by modern hardware, like allocation bits or something like that?
Or is some kind of data structure used to store allocation information?
If there is a data structure, how much memory does it use compared to the memory allocated?
Is it better to allocate memory in big chunks rather than small ones and why?
Any answer that can help reveal fundamental implementation details is appreciated.
If there is a need for code examples, C or C++ will be just fine.
"How this system keeps track of allocated and unallocated memory." For non-embedded systems with operating systems, a virtual page table, which the OS is in charge of organizing (with hardware TLB support of course), tracks the memory usage of programs.
AS FAR AS I KNOW (and the community will surely yell at me if I'm mistaken), tracking individual malloc() sizes and locations has a good number of implementations and is runtime-library dependent. Generally speaking, whenever you call malloc(), the size and location is stored in a table. Whenever you call free(), the table entry for the provided pointer is looked up. If it is found, that entry is removed. If it is not found, the free() is ignored (which also indicates a possible memory leak).
When all malloc() entries in a virtual page are freed, that virtual page is then released back to the OS (this also implies that free() does not always release memory back to the OS since the virtual page may still have other malloc() entries in it). If there is not enough space within a given virtual page to support another malloc() of a specified size, another virtual page is requested from the OS.
Embedded processors usually don't have operating systems, virtual page tables, nor multiple processes. In this case, virtual memory is not used. Instead, the entire memory of the embedded processor is treated like one large virtual page (although the addresses are actually physical addresses) and memory management follows a similar process as previously described.
Here is a similar stack overflow question with more in-depth answers.
"Is it better to allocate memory in big chunks rather than small ones and why?" Allocate as much memory as you need, no more and no less. Compiler optimizations are very smart, and memory will almost always be managed more efficiently (i.e. reducing memory fragmentation) than the programmer can manually do. This is especially true in a non-embedded environment.
Here is a similar stack overflow question with more in-depth answers (note that it pertains to C and not C++, however it is still relevant to this discussion).
Well, there is more than one way to achieve that.
I once had to write a malloc() (and free()) implementation for educational purposes.
This is from my experience, and real-world implementations surely vary.
I used a doubly linked list.
The memory chunks returned to the user after calling malloc() were in fact structs containing information relevant to my implementation (i.e. the next and prev pointers, and an is_used byte).
So when a user requested N bytes, I allocated N + sizeof(my_struct) bytes, hiding the next and prev pointers at the beginning of the chunk and returning what's left to the user.
Surely, this is a poor design for a program that makes a lot of small allocations (because each N-byte allocation actually consumes N bytes plus 2 pointers plus 1 byte).
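A sketch of the layout just described (simplified: no alignment beyond max_align_t, no block reuse or coalescing, and a static arena stands in for sbrk):

```cpp
#include <cstddef>

struct BlockHeader {
    BlockHeader*  next;
    BlockHeader*  prev;
    std::size_t   size;
    unsigned char is_used;
};

// Backing store: a bump-allocated arena plays the role of sbrk here.
alignas(std::max_align_t) static unsigned char g_arena[1 << 20];
static std::size_t  g_brk  = 0;
static BlockHeader* g_head = nullptr;

void* my_malloc(std::size_t n)
{
    const std::size_t total = sizeof(BlockHeader) + n;
    if (g_brk + total > sizeof(g_arena)) return nullptr;
    BlockHeader* b = reinterpret_cast<BlockHeader*>(g_arena + g_brk);
    g_brk += total;
    b->size = n;
    b->is_used = 1;
    b->prev = nullptr;            // push onto the doubly linked list
    b->next = g_head;
    if (g_head) g_head->prev = b;
    g_head = b;
    return b + 1;                 // header stays hidden; user gets the payload
}

void my_free(void* p)
{
    if (!p) return;
    BlockHeader* b = static_cast<BlockHeader*>(p) - 1;  // step back to header
    b->is_used = 0;  // a real implementation would reuse/coalesce free blocks
}
```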
For a real-world implementation, you can take a look at the code of a good, well-known memory allocator.
Normally there are two different layers.
One layer lives at application level, usually as part of the C standard library. This is what you call through functions like malloc and free (or operator new in C++, which in turn usually calls malloc). This layer takes care of your allocations, but does not know about memory or where it comes from.
The other layer, at OS level, does not know and does not care anything about your allocations. It only maintains a list of fixed-size memory pages that have been reserved, allocated, and accessed, and, for each page, information such as where it maps to.
There are many different implementations for either layer, but in general it works like this:
When you allocate memory, the allocator (the "application level part") looks whether it has a matching block somewhere in its books that it can give to you (some allocators will split a larger block in two, if need be).
If it doesn't find a suitable block, it reserves a new block (usually much larger than what you ask for) from the operating system. sbrk or mmap on Linux, or VirtualAlloc on Windows would be typical examples of functions it might use for that effect.
This does very little apart from showing intent to the operating system, and generating some page table entries.
The allocator then (logically, in its books) splits up that large area into smaller pieces according to its normal mode of operation, finds a suitable block, and returns it to you. Note that this returned memory does not necessarily even exist as physical memory yet (though most allocators write some metadata into the first few bytes of each allocated unit, so they necessarily pre-fault those pages).
In the meantime, invisibly, a background task zeroes out memory pages that were once in use by some process but have since been freed. This happens all the time, on a tentative basis, since sooner or later someone will ask for memory (often, that is what the idle task does).
Once you access an address in the page that contains your allocated block for the first time, you generate a fault. The page table entry of this yet non-existent page (it exists logically, just not physically) is replaced with a reference to a page from the pool of zero pages. In the uncommon case that there is none left, for example if huge amounts of memory are being allocated all the time, the OS swaps out a page which it believes will not be accessed any time soon, zeroes it, and returns that one.
Now the page becomes part of your working set, corresponds to actual physical memory, and counts towards your process's quota. While your process is running, pages may be moved in and out of your working set, or paged out and in, as you exceed certain limits and according to how much memory is needed and how it is accessed.
Once you call free, the allocator puts the freed area back into its books. It may instead tell the OS that it does not need the memory any more, but usually this does not happen, as it is not really necessary and it is more efficient to keep a little extra memory around and reuse it. Also, it may not be easy to free the memory, because the units that you allocate/deallocate usually do not directly correspond with the units the OS works with (and, in the case of sbrk, the frees would need to happen in the correct order, too).
When the process ends, the OS simply throws away all page table entries and adds all pages to the list of pages that the idle task will zero out. So the physical memory becomes available to the next process asking for some.
When someone mentions the memory management that C++ is capable of, how can I actually see it? Or is it only understood in theory, like guessing?
I took an intro logic design course that covered number systems, Boolean algebra, and combinational logic; will this help?
So, say in Visual Studio, is there some kind of tool to visualize the memory? I hope I'm not being ridiculous here.
Thank you.
C++ has a variety of memory areas:
space used for global and static variables, which is pre-allocated by the compiler
"stack" memory that's used for preserving caller context during function calls, passing some function arguments (others may fit in CPU registers), and local variables
"heap" memory that's allocated using new or new[](C++'s preferred approach) or malloc (a lower-level function inherited from C), and released with delete, delete[] or free respectively.
The heap is important in that it supports run-time requests for arbitrary amounts of memory, and the usage persists until delete or free is explicitly used, rather than being tied to the lifetime of particular function calls as per stack memory.
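A small example touching all three areas (purely illustrative):

```cpp
#include <cstdio>

int g_counter = 0;  // global: space pre-allocated by the compiler/linker

void demo()
{
    int local = 42;              // stack: gone when demo() returns
    int* dynamic = new int[10];  // heap: lives until explicitly deleted
    dynamic[0] = local + g_counter;
    std::printf("%d\n", dynamic[0]);
    delete[] dynamic;            // release the heap memory explicitly
}

int main() { demo(); }
```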
I'm not aware of any useful tools for visualising and categorising the overall memory usage of a running C++ program, much less for relating that back to which pointers in the source code currently have how much memory associated with them. As a very general guideline, it's encouraged to write code in such a way that pointers are only introduced when the program is ready to point them at something, and they go out of scope once they no longer point at something. When that's impractical, it can be useful to set them to NULL (0), so that if you're monitoring the executing program in a debugger you can tell the pointer isn't meant to point to legitimate data at that point.
Memory management is not something that you can easily visualize while you are programming. Instead, it refers to how your program is allocating and freeing memory while it is running. Many debuggers will provide a way to halt a program while it is running and view information about the dynamic memory that it has allocated. You can plan your classes and interfaces with proper memory management techniques, but it's not as simple as "hit this button for a chart of your memory usage".
You can also implement something like this to keep track of your memory allocations and warn you about anything that your program didn't free. A garbage collector can free you from some of the hassles associated with memory management.
In Visual Studio there is a Memory window (Alt+6) that, during debugging, lets you read and write memory manually, provided the location is valid for the operation you are trying to perform.
On the Windows platform, you can get an initial feel for memory management using tools like perfmon.exe, taskmgr.exe, and many other tools from Sysinternals.