I have a C++ application that I am trying to iron the memory leaks out of and I realized I don't fully understand the difference between virtual and physical memory.
Results from top (so 16.8g = virtual, 111m = physical):
4406 um 20 0 16.8g 111m 4928 S 64.7 22.8 36:53.65 client
My process holds 500 connections, one per user, and at these numbers that works out to roughly 30 MB of virtual overhead per user. Without going into the details of my application, the only way that sounds remotely realistic, adding up all the vectors, structs, threads, stack frames, etc., is if I have no idea what virtual memory actually means. No -O optimization flags, btw.
So my questions are:
what operations in C++ would inflate virtual memory so much?
Is it a problem if my task is using gigs of virtual memory?
The stack and heap function variables, vectors, etc. - do those necessarily increase the use of physical memory?
Would removing a memory leak (via delete or free() or such) necessarily reduce both physical and virtual memory usage?
Virtual memory is what your program deals with. It consists of all of the addresses returned by malloc, new, et al. Each process has its own virtual-address space. Virtual address usage is theoretically limited by the address size of your program: 32-bit programs have 4GB of address space; 64-bit programs have vastly more. Practically speaking, the amount of virtual memory that a process can allocate is less than those limits.
Physical memory is the RAM chips soldered to your motherboard or installed in your memory slots. The amount of physical memory in use at any given time is limited to the amount of physical memory in your computer.
The virtual-memory subsystem maps virtual addresses that your program uses to physical addresses that the CPU sends to the RAM chips. At any particular moment, most of your allocated virtual addresses are unmapped; thus physical memory use is lower than virtual memory use. If you access a virtual address that is allocated but not mapped, the operating system invisibly allocates physical memory and maps it in. When you don't access a virtual address, the operating system might unmap the physical memory.
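To see the distinction in practice, here is a minimal sketch, assuming Linux (since the question shows top output), that prints the VmSize (virtual) and VmRSS (resident/physical) lines from /proc/self/status around a large allocation:
#include <cstdio>
#include <cstdlib>
#include <cstring>
// Print the VmSize (virtual) and VmRSS (physical) lines from /proc/self/status.
static void show_usage(const char *label) {
    std::FILE *f = std::fopen("/proc/self/status", "r");
    if (!f) return;
    char line[256];
    while (std::fgets(line, sizeof line, f))
        if (!std::strncmp(line, "VmSize:", 7) || !std::strncmp(line, "VmRSS:", 6))
            std::printf("%s %s", label, line);
    std::fclose(f);
}
int main() {
    show_usage("before malloc");
    char *p = static_cast<char *>(std::malloc(512ul << 20)); // 512 MiB, untouched
    show_usage("after malloc ");          // VmSize jumps, VmRSS barely moves
    std::memset(p, 0, 512ul << 20);       // touch every page
    show_usage("after memset ");          // now VmRSS jumps too
    std::free(p);
    return 0;
}
Typically only the memset step moves VmRSS, which is the map-on-first-touch behaviour described above.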
To take your questions in turn:
what operations in C++ would inflate virtual memory so much?
new, malloc, static allocation of large arrays. Generally anything that requires memory in your program.
Is it a problem if my task is using gigs of virtual memory?
It depends upon the usage pattern of your program. If you allocate vast tracts of memory that you never, ever touch, and if your program is a 64-bit program, it may be okay that you are using gigs of virtual memory.
Also, if your memory use grows without bound, you will eventually run out of some resource.
The stack and heap function variables, vectors, etc. - do those necessarily increase the use of physical memory?
Not necessarily, but likely. The act of touching a variable ensures that, at least momentarily, it (and all of the memory "near" it) is in physical memory. (Aside: containers like std::vector may be allocated on either stack or heap, but the contained objects are allocated on the heap.)
Would removing a memory leak (via delete or free() or such) necessarily reduce both physical and virtual memory usage?
Physical: probably. Virtual: yes.
Virtual memory is the address space used by the process. Each process has a full view of the 64-bit (or 32-bit, depending on the architecture) range addressable by a pointer, but not every byte maps to something real. The operating system manages the table that maps each virtual address to a real physical memory page -- or to whatever that address really refers to, even though it all looks like memory to your application. For instance, an address may point to some function that has not yet been loaded from disk; when you call it, a page fault is raised, which the kernel handles by loading the appropriate section of the executable and mapping it into your application's address space so it can be executed.
From a Linux perspective (and I believe most modern OS's):
Allocating memory inflates virtual memory. Actually using the allocated memory inflates physical memory usage. Do it too much and it will be swapped to disk, and eventually your process will be killed.
mmapping files increases only virtual memory usage; this includes the executables themselves: the larger they are, the more virtual memory is used.
The only problem with using up virtual memory is that you may run out of it. That is mainly an issue on 32-bit systems, where you only have 4 GB of it (and 1 GB is reserved for the kernel, so application data only gets 3 GB).
Function calls, which allocate variables on the stack, may increase physical memory usage, but you (usually) won't leak this memory.
Allocated heap variables take up virtual memory, but only consume physical memory once you read from or write to them.
Freeing or deleting variables does not necessarily reduce virtual/physical memory consumption; it depends on the allocator internals, but it usually does, as the sketch below illustrates.
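For instance, a rough glibc-specific sketch (malloc_trim() is a glibc extension, so this assumes you are using the default glibc allocator):
#include <malloc.h>   // malloc_trim (glibc extension)
#include <cstdlib>
#include <cstring>
int main() {
    char *p = static_cast<char *>(std::malloc(256ul << 20)); // grows virtual memory
    std::memset(p, 0, 256ul << 20);                          // touching the pages grows physical memory
    std::free(p);        // the allocator may keep the pages cached for reuse
    malloc_trim(0);      // ask glibc to return free heap memory to the kernel
    return 0;
}
Whether free() alone shrinks the numbers you see in top depends on whether the block was carved from the heap or served via mmap.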
You can set the following environment variables to control glibc malloc's internal allocation behaviour; tuning them bears on all four of the questions above. For other options, please refer to:
http://man7.org/linux/man-pages/man3/mallopt.3.html
export MALLOC_MMAP_THRESHOLD_=8192
export MALLOC_ARENA_MAX=4
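If you prefer to set these from code rather than the environment, the same knobs can be tuned with mallopt() (again glibc-specific; a minimal sketch):
#include <malloc.h>   // mallopt, M_MMAP_THRESHOLD, M_ARENA_MAX (glibc)
#include <cstdio>
int main() {
    // Call before the first allocation; mallopt() returns 0 on failure.
    if (mallopt(M_MMAP_THRESHOLD, 8192) == 0)
        std::fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");
    if (mallopt(M_ARENA_MAX, 4) == 0)
        std::fprintf(stderr, "mallopt(M_ARENA_MAX) failed\n");
    // ... the rest of the program allocates as usual ...
    return 0;
}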
Related
Let's say I have 8 Gigabytes of RAM and 16 Gigabytes of swap memory. Can I allocate a 20 Gigabyte array there in C? If yes, how is it possible? What would that memory layout look like?
[linux] Can I create an array exceeding RAM, if I have enough swap memory?
Yes, you can. Note that accessing swap is veerry slooww.
how is it possible
Allocate dynamic memory. The operating system handles the rest.
What would that memory layout look like?
On an amd64 system, you can have 256 TiB of address space. You can easily fit a contiguous block of 20 GiB in that space. The operating system divides the virtual memory into pages and copies the pages between physical memory and swap space as needed.
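A minimal sketch, assuming a 64-bit Linux system with overcommit left at its default: the allocation itself only reserves address space, and pages get backed by RAM or swap as you touch them.
#include <cstdio>
#include <cstdlib>
int main() {
    const size_t size = 20ull << 30;                       // 20 GiB
    char *big = static_cast<char *>(std::malloc(size));
    if (!big) { std::perror("malloc"); return 1; }
    // Touch one byte per GiB: each touch faults in a single page, so
    // resident memory stays tiny even though VIRT shows ~20 GiB.
    for (size_t off = 0; off < size; off += 1ull << 30)
        big[off] = 1;
    std::free(big);
    return 0;
}
Writing to all 20 GiB would also work with 8 GiB of RAM and 16 GiB of swap, since the combined backing store is large enough, but that is where the slowness comes in.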
Modern operating systems use virtual memory. In Linux and most other OSes, each process has its own address space, sized according to the abilities of the architecture. You can check the size of the virtual address space in /proc/cpuinfo. For example you may see:
address sizes : 43 bits physical, 48 bits virtual
This means that virtual addresses use 48 bits. Half of that range is reserved for the kernel, so you can only use 47 bits' worth, or 128 TiB. Any memory you allocate will be placed somewhere in those 128 TiB of address space as if you actually had that much memory.
Linux uses demand paging and by default overcommits memory. When you say
char *mem = (char*)malloc(1'000'000'000'000);
what happens is that Linux picks a suitable address and just records that you have allocated 1'000'000'000'000 bytes (rounded up to the nearest page) starting at that point. (It does some sanity checks that the amount isn't totally bonkers, depending on the amount of physical memory that is free, the amount of swap that is free and the overcommit setting. By default you can allocate a lot more than you have memory and swap.)
Note that at this point no physical memory and no swap space is connected to your allocated block at all. This changes when you first write to the memory:
mem[4096] = 0;
At this point the program page faults. Linux checks that the address is actually something your program is allowed to write to, finds a physical page and maps it to &mem[4096]. Then it lets the program retry the write and everything continues.
If Linux can't find a physical page it will try to swap something out to make one available for your program. If that also fails, the kernel is out of memory and your program will likely be killed.
As a result you can allocate basically unlimited amounts of memory, as long as you never write to more than physical memory and swap can actually back. On the other hand, if you do initialize the memory (explicitly, or implicitly by using calloc()), the system will quickly notice if you try to use more than is available.
You can, but not with a simple malloc. It's platform-dependent.
It requires an OS call to allocate swappable memory (VirtualAlloc on Windows, for example; on Linux it would be mmap and related functions).
Once that's done, the allocated memory is divided into pages, contiguous blocks of fixed size. You can lock a page, which loads it into RAM so you can read and modify it freely. For old dinosaurs like me, it's exactly how EMS memory worked under DOS... You address your swappable memory with a kind of segment:offset scheme: first you divide your linear address by the page size to find which page is needed, then you use the remainder to get the offset within that page.
Once unlocked, the page remains in memory until the OS needs memory: then an unlocked page is flushed to disk, into swap, and discarded from RAM... until you lock (and load...) it again, but that operation may require freeing RAM first, so another process may have its unlocked pages swapped out BEFORE your own page is loaded again. And this is damnably SLOW... even on an SSD!
So it's not always a good thing to use swap. A better way is to use memory-mapped files - perfect for reading very big files mostly sequentially, with few random accesses - if that suits your needs.
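A minimal Linux sketch of that idea (my assumption of platform; mmap() reserves the address range, and mlock()/munlock() play roughly the role that locking/unlocking a page plays in the description above):
#include <sys/mman.h>
#include <cstdio>
int main() {
    const size_t size = 1ul << 30;   // reserve 1 GiB of address space
    void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
    if (mlock(p, 4096) != 0)         // pin the first page in RAM
        std::perror("mlock");
    munlock(p, 4096);                // allow it to be swapped out again
    munmap(p, size);
    return 0;
}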
I want to understand: if shared memory is allocated from kernel space, then why does using it not involve a context switch? And if it is not from kernel space, where is this memory allocated from?
In most modern computers memory isn't allocated from kernel space. Rather the kernel finds a page of physical memory and then maps it into a process at a virtual address that the process is not currently using. The physical address and the virtual address in the process are not the same. So the memory is always "user space" memory. This is all part of the Virtual Memory subsystem.
To share the physical page between processes the kernel maps the page into both processes. Usually at the same virtual address in both. Once this is done the kernel is no longer involved as both processes have the same physical memory mapped at that location. Thus any change will show up for both.
Note: Kernel memory is memory usually only accessible to the kernel and is a different concept.
When the shmget() call is made with a request for some amount of memory, it context-switches from user to kernel space; the system call service routine runs in the kernel with the arguments passed from user space and hands back the required memory space [and this memory page is not part of kernel space - it just hasn't been mapped into the process's memory yet], which then gets mapped into the process's local address space.
So this means there is reserved memory in the memory-management subsystem that is neither part of kernel memory nor mapped into the process's local address space, and that memory is used to satisfy such requests.
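A minimal System V shared-memory sketch showing where the kernel transition happens: shmget()/shmat() are system calls, but once the segment is mapped, reads and writes are ordinary memory accesses with no further kernel involvement.
#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstdio>
#include <cstring>
int main() {
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);   // system call: kernel reserves a segment
    if (id == -1) { std::perror("shmget"); return 1; }
    char *mem = static_cast<char *>(shmat(id, nullptr, 0)); // system call: map it into this process
    if (mem == reinterpret_cast<char *>(-1)) { std::perror("shmat"); return 1; }
    std::strcpy(mem, "hello");        // ordinary store, no context switch
    shmdt(mem);                       // unmap from this process
    shmctl(id, IPC_RMID, nullptr);    // mark the segment for removal
    return 0;
}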
I have learned in the past few days about the issue of memory overcommitment (when memory overcommit is enabled, which is usually the default), which basically means that with:
void* p = malloc(100);
the operating system gives you 100 contiguous (virtual) addresses taken from the (virtual) address space of your process, whose total range is OS-defined. Since that memory region has not been initialized yet, it doesn't count as occupied storage from a system-wide point of view, so it's a pure abstraction besides consuming your virtual addresses.
memset(p, 0, 5);
That uses the first 5 bytes, so from the point of view of the OS your process now occupies 5 extra bytes, and the system has 5 bytes less of free storage. You still have 95 bytes of uninitialized storage.
The system only crashes or starts killing processes when the combined occupied (initialized) storage of every process is beyond what the OS can hold.
If my understanding is right in this regard, is there a way to "de-"initialize a region of memory when you are done with it, in order to increase the system-wide free space, without losing the address region requested by malloc or aligned_malloc (so you don't increase fragmentation over time)?
The purpose of this question is more theoretical than practical and not about actually "freeing memory", but about freeing memory while conserving already assigned virtual addresses.
Source about the difference between requesting virtual addresses and occupying storage: https://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6
PS: Knowing the answer for Linux is enough to satisfy my curiosity.
No, there is no way.
On most systems, as soon as you allocate memory, it counts towards RAM or swap.
As your link shows, on Linux, you may need to access the memory once so that the memory actually gets allocated. But as soon as you do, the system must keep that memory available somewhere, in case you access it later.
The way to tell the system you are done with the memory is to actually free it.
On running the "free" command, I see that memory used is:
             total       used       free     shared    buffers     cached
Mem:       3854884    3752304     102580        352       9252     150908
-/+ buffers/cache:    3592144     262740
Swap:            0          0          0
But on running htop, I see there are many processes using 4507M amount of memory under the VIRT column (virtual memory). RES column (physical RAM the process is using) shows 209M. SHR (shared memory) is 5352M.
-Xmx for the process is configured as 2048m.
How can virtual memory be used if the swap space is zero?
The virtual memory these programs (htop and the like) are counting is just the size of the address space the processes have requested. You have physical memory, actual RAM, and a virtual address space which maps addresses as user space programs see them to physical memory. They are separate. A 0x0ff84560 pointer probably doesn't actually reference that part of RAM. The OS gets to set up a mapping that decides where in RAM you're actually referencing. Even further, it can set up the mapping before it has RAM to back it up. It's an event driven process. The OS will set up the mapping on request with no real backing, no physical memory allocated, and only actually map it to real RAM when you try to use the virtual memory.
The size of virtual memory is the size of this mapping. But not all of it has to be backed by physical RAM, so it can be larger than RAM even when there's no swap. This only causes problems when programs try to actually use more memory than there is RAM; it's no problem at all if they merely request it.
Additionally, as Thilo mentioned, memory-mapped files can add to this. You can map an entire 100TB file into your virtual address space no problem. The OS handles the logistics in the background: bringing in the parts you need (the parts you try to access) and reclaiming the parts it must in order to free physical memory.
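To illustrate the memory-mapped-file part, a Linux sketch (the file path is hypothetical; any large file works): the whole file size is added to VIRT as soon as mmap() returns, while RES only grows for pages that are actually read.
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
int main() {
    int fd = open("/var/log/bigfile.log", O_RDONLY);        // hypothetical large file
    if (fd == -1) { std::perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) == -1) { std::perror("fstat"); return 1; }
    void *data = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { std::perror("mmap"); return 1; }
    // VIRT now includes the whole file; RES grows only for the pages we touch.
    munmap(data, st.st_size);
    close(fd);
    return 0;
}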
An application's virtual bytes grow at twice the rate of its private bytes.
Does this indicate a memory leak? Bad application design?
The OS is 32-bit.
Any thoughts are welcome.
The application is a stream database.
Fragmentation.
If you allocate the following chunks of memory:
16KB
8KB
16KB
and you then free the 8 KB chunk, your application will have 32 KB of private bytes but 40 KB of virtual memory, which is really the highest virtual-memory address that has ever been in use by your process (ignoring the other memory parts for the sake of simplicity).
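In code, the scenario above looks roughly like this (a sketch; a real allocator may place the blocks differently, e.g. in size-segregated bins or via mmap):
#include <cstdlib>
int main() {
    char *a = static_cast<char *>(std::malloc(16 * 1024));  // 16 KB
    char *b = static_cast<char *>(std::malloc(8 * 1024));   //  8 KB
    char *c = static_cast<char *>(std::malloc(16 * 1024));  // 16 KB
    std::free(b);   // private bytes drop by 8 KB, but the hole sits between
                    // a and c, so the virtual size (the heap's high-water
                    // mark) does not shrink
    std::free(a);
    std::free(c);
    return 0;
}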
Consider (if possible) using another memory manager. Some alternatives are:
The Windows Low-fragmentation heap (see http://msdn.microsoft.com/en-us/library/aa366750%28VS.85%29.aspx for more info)
The Doug Lea open-source memory manager
Commercial alternatives like Hoard
A fourth alternative is to write your own memory manager. It's not that easy, but if done right, it can have quite some benefits. Especially for certain niche or special applications, writing your own memory manager can be useful.
An application's virtual bytes grow at twice the rate of its private bytes.
If the application allocates only heap memory, then to me it would be a sign that the application allocates lots of memory but never actually touches it. For example:
void *p = malloc( 16u<<20 );
would eat up 16 MB of virtual memory. But as long as the application doesn't touch the memory block, the OS won't even attempt to map the virtual memory to RAM. The simplest way to force the actual allocation of private memory is to memset() it:
void *p = malloc( 16u<<20 );
memset( p, 0, 16u<<20 );
Does this indicate a memory leak? Bad application design?
Or both. Or neither.
The longer variant of the response: unknown; it depends on what memory the application allocates, what other resources the application uses, the OS, the h/w platform, etc.
If unsure, use memory-leak analysis tools to investigate, e.g. valgrind. Read up on SO for more information on memory-leak analysis in C++.
Memory allocation has overhead to store management information about what was allocated. If you're allocating very small buffers the extra information can be a significant percentage of the total. That might be what you're seeing.
One possibility is if you set a large stack reserve size for your threads with linker option /STACK:reserve_bytes and then you start a lot of threads.
For example, if you have an ATL service, it automatically starts 4*numberOfCores apartment message dispatching threads by default. Compile and link such a service with /STACK:12000000 (12 megabytes), then run it on a 16-core server and it will start 64 threads, each with a 12MB stack, immediately consuming 768MB of virtual address space, although the actual committed memory may be much lower.
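A hedged Windows sketch of that situation (the Worker body is a placeholder; STACK_SIZE_PARAM_IS_A_RESERVATION makes the size parameter a reservation rather than an up-front commit):
#include <windows.h>
#include <cstdio>
DWORD WINAPI Worker(LPVOID) {
    Sleep(INFINITE);                 // keep the thread (and its stack reservation) alive
    return 0;
}
int main() {
    // 64 threads, each reserving a 12 MB stack: ~768 MB of virtual address
    // space is consumed immediately, most of it never committed.
    for (int i = 0; i < 64; ++i) {
        HANDLE h = CreateThread(nullptr, 12 * 1024 * 1024, Worker, nullptr,
                                STACK_SIZE_PARAM_IS_A_RESERVATION, nullptr);
        if (h) CloseHandle(h);
    }
    std::getchar();                  // pause so the usage can be inspected in Task Manager
    return 0;
}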