If we compile and execute the code below:
int *p;
printf("%zu\n", sizeof(p));
it seems that the size of a pointer, whatever the pointed-to type, is 4 bytes, which means 32 bits, so 2^32 addresses can be stored in a pointer. Since every address is associated with 1 byte, 2^32 bytes give 4 GB.
So, how can a pointer point to the address after 4 GB of memory? And how can a program use more than 4 GB of memory?
In principle, if you can't represent an address above 2^X - 1, then you can't address more than 2^X bytes of memory.
This is true for x86, even though some workarounds (like PAE) have been implemented and used that allow more physical memory to be present; they come with limits of their own, since they are more hacks than real solutions to the problem.
With a 64-bit architecture the standard size of a pointer is doubled to 8 bytes, so you don't have to worry anymore.
Mind that, in any case, virtual memory translates addresses from the process address space to the physical space, so it's easy to see that hardware can support more memory than a single process can reach: the maximum addressable memory from the process's point of view is still limited by the size of a pointer.
"How can a pointer point to the address after 4GB of memory?"
There is a difference between the physical memory available to the processor and the "virtual memory" seen by the process. A 32-bit process (which has pointers of size 4 bytes) is limited to 4GB; however, the processor maintains a mapping (controlled by the OS) that gives each process its own memory space, up to 4GB each.
That way 8GB of memory could be used on a 32 bit system, if there were two processes each using 4GB.
To access >4GB of address space you can do one of the following:
Compile in x86_64 (64 bit) on a 64 bit OS. This is the easiest.
Use AWE memory. AWE allows mapping a window of memory which (usually) resides above 4GB. The window address can be mapped and remapped again and again. It was used in large database applications and RAM drives in the 32-bit era.
Note that memory addresses where the MSB is 1 are reserved for the kernel. Under several conditions Windows allows a process to use up to 3GB; the top 1GB is always for the kernel.
By default a 32-bit process has 2GB of user-mode address space. It's possible to get 3GB via a special linker flag (in VS: /LARGEADDRESSAWARE).
Related
Let's say I have 8 Gigabytes of RAM and 16 Gigabytes of swap memory. Can I allocate a 20 Gigabyte array there in C? If yes, how is it possible? What would that memory layout look like?
[linux] Can I create an array exceeding RAM, if I have enough swap memory?
Yes, you can. Note that accessing swap is veerry slooww.
how is it possible
Allocate dynamic memory. The operating system handles the rest.
How would that memory layout look like?
On an amd64 system, you can have 256 TiB of address space. You can easily fit a contiguous block of 20 GiB in that space. The operating system divides the virtual memory into pages and copies pages between physical memory and swap space as needed.
Modern operating systems use virtual memory. In Linux and most other OSes each process has its own address space, sized according to the abilities of the architecture. You can check the size of the virtual address space in /proc/cpuinfo. For example you may see:
address sizes : 43 bits physical, 48 bits virtual
This means that virtual addresses are 48 bits wide. Half of that range is reserved for the kernel, so you can only use 47 bits, or 128 TiB. Any memory you allocate will be placed somewhere in those 128 TiB of address space as if you actually had that much memory.
Linux uses demand paging and by default overcommits memory. When you write
char *mem = malloc(1000000000000ULL);  /* ~1 TB */
what happens is that Linux picks a suitable address and just records that you have allocated 1,000,000,000,000 bytes (rounded up to the nearest page) of memory starting at that point. (It does a sanity check that the amount isn't totally bonkers, depending on the amount of physical memory that is free, the amount of swap that is free, and the overcommit setting. By default you can allocate a lot more than you have memory and swap.)
Note that at this point no physical memory and no swap space is connected to your allocated block at all. This changes when you first write to the memory:
mem[4096] = 0;
At this point the program will page fault. Linux checks that the address is actually something your program is allowed to write to, finds a physical page and maps it to &mem[4096]. Then it lets the program retry the write and everything continues.
If Linux can't find a physical page, it will try to swap something out to make a physical page available for your program. If that also fails, your program will receive a SIGSEGV and likely die.
As a result you can allocate basically unlimited amounts of memory, as long as you never write to more than physical memory and swap can hold. On the other hand, if you initialize the memory (explicitly, or implicitly using calloc()), the system will quickly notice if you try to use more than is available.
You can, but not with a simple malloc. It's platform-dependent.
It requires an OS call to allocate swappable memory (it's VirtualAlloc on Windows, for example; on Linux it would be mmap and related functions).
Once that's done, the allocated memory is divided into pages: contiguous blocks of fixed size. You can lock a page, so that it is loaded into RAM and you can read and modify it freely. For old dinosaurs like me, it's exactly how EMS memory worked under DOS... You address your swappable memory with a kind of segment:offset method: first you divide your linear address by the page size to find which page is needed, then you use the remainder to get the offset within that page.
Once unlocked, the page remains in memory until the OS needs memory: then an unlocked page will be flushed to disk, into swap, and discarded from RAM... until you lock (and load...) it again. But this operation may require freeing RAM first, so another process may have its unlocked pages swapped out BEFORE your own page is loaded again. And this is damnably SLOW... even on an SSD!
So it's not always a good thing to use swap. A better way is to use memory-mapped files - perfect for reading very big files mostly sequentially, with few random accesses - if that suits your needs.
I'm using Microsoft Visual Studio 2008
When I create a pointer to an object, it will receive a memory address which in my case is an 8 digit hexadecimal number. E.g.: 0x02e97fc0
With 8 hexadecimal digits a computer can address 4GB of memory. I've got 8GB of memory in my computer:
Does that mean that my IDE is not using more than 4GBs out of my memory?
Is the IDE able to address only the first 4GB of my memory or any 4GB out of the 8GBs not used?
The question is not only about the size of the memory used; it is also about the location of the memory used. The latter hasn't been detailed here: The maximum amount of memory any single process on Windows can address
Where does C++ create stack and heap in memory?
Well, C++ does not really handle memory; it asks the operating system to do so. When a binary object (.exe, .dll, .so ...) is loaded into memory, it is the OS which allocates memory for the stack. When you dynamically allocate memory with new, you're asking the OS for some space in the heap.
1) Does that mean that my IDE is not using more than 4GBs out of my memory?
No, not really. In fact, modern OSes like Windows use what is called a virtual address space. The OS maps an apparently contiguous segment of virtual addresses (say 0x1000 to 0xffff), private to your program, onto whatever physical memory it chooses; you have absolutely no guarantee of where your objects really lie in memory. When an address is dereferenced, the OS does some magic and lets your program access the corresponding physical address in memory.
Having 32-bit addresses means a single instance of your program can't use more than 4GB of memory. Two instances of the same program can, since the OS can map two different segments of physical memory behind the apparently identical segment of virtual addresses (0x00000000 to 0xffffffff). And Windows will allocate yet more overlapping address spaces for its own processes.
2) Is the IDE able to address only the first 4GB of my memory or any 4GB out of the 8GBs not used?
Any. Even non-contiguous memory, even disk memory ... no one can tell.
Found some Microsoft source in the comments about it: https://msdn.microsoft.com/en-us/library/aa366778.aspx
I work on Peano-Hilbert data ordering (g++ 4.9, 64-bit Linux) to coalesce dynamically allocated memory. As a sanity check I am trying to visualize the actual data distribution in memory. For this I convert pointers to my data to integers as follows
unsigned long int address = *(unsigned long int*)(&pointer);
and then plot them as a 2D map. It works fine in most cases, but sometimes I get values far exceeding the available memory, e.g. 140170747903888, which corresponds to a ~127 TB offset, whereas I have only 16 GB of RAM. What the hell?
The memory management system does not handle memory in a linear way. It is free to tell a process that some memory block is at the address 0x1234123412345678, even if you only have 128MB of memory. This is called paging. The data might not even be in physical memory, but paged out to disk.
This means that you have no way of knowing where in physical memory anything is from the pointer value, since it might change all the time (or it might not even be in memory), you only know the virtual address the OS has happened to give you. And it is totally implementation dependent how it gives them out.
AMD64 uses 48 bits for virtual memory addresses, which corresponds to 256TB. Virtual address space is distinct from physical RAM: addresses are looked up in page tables by the CPU, and actual RAM is faulted in when the pages in question are first accessed.
In debug mode I saw that the pointers have addresses like 0x01210040,
but as I realized, 0x means hexadecimal, right? And there are 8 hex digits, i.e. in total there are 128 bits that are addressed?? So does that mean that for a 32-bit system the first two digits are always 0, and for a 64-bit system the first digit is 0?
Also, may I ask that, for a 32-bit program, would I be able to allocate as much as 3GB of memory as long as I remain in the heap and use only malloc()? Or is there some limitations the Windows system poses on a single thread? (the IDE I'm using is VS2012)
Since actually I was running a 32-bit program in a 64-bit system, but the program crashed with a memory leak when it only allocated about 1.5GB of memory...and I can't seem to figure out why.
(Oooops...sorry guys I think I made a simple mistake with the first question...indeed one hex digit is 4 bits, and 8 makes 32bits. However here is another question...how is address represented in a 64-bit program?)
For 32-bit Windows, the limit is actually 2GB usable per process, with virtual addresses from 0x00000000 (or simply 0x0) through 0x7FFFFFFF. The rest of the 4GB address space (0x80000000 through 0xFFFFFFFF) is reserved for use by Windows itself. Note that these have nothing to do with the actual physical memory addresses.
If your program is large address space aware, this limit is increased to 3GB on 32bit systems and 4GB for 32bit programs running on 64bit Windows.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366912(v=vs.85).aspx
And for the higher limits for large address space aware programs (IMAGE_FILE_LARGE_ADDRESS_AWARE), see here:
http://msdn.microsoft.com/en-us/library/aa366778.aspx
You might also want to take a look at the Virtual Memory article on Wikipedia to better understand how the mapping between virtual addresses and physical addresses works. The first MSDN link above also has a short explanation:
The virtual address space for a process is the set of virtual memory
addresses that it can use. The address space for each process is
private and cannot be accessed by other processes unless it is shared.
A virtual address does not represent the actual physical location of
an object in memory; instead, the system maintains a page table for
each process, which is an internal data structure used to translate
virtual addresses into their corresponding physical addresses. Each
time a thread references an address, the system translates the virtual
address to a physical address. The virtual address space for 32-bit
Windows is 4 gigabytes (GB) in size and divided into two partitions:
one for use by the process and the other reserved for use by the
system. For more information about the virtual address space in 64-bit
Windows, see Virtual Address Space in 64-bit Windows.
EDIT: As user3344003 points out, these values are not the amount of memory you can allocate using malloc or otherwise use for storing values, they just represent the size of the virtual address space.
There are a number of limits that would restrict the size of your malloc allocation.
1) The number of address bits restricts the size of the address space. For 32 bits, that is 4GB.
2) Systems subdivide that space between the various processor modes. These days, usually 2GB goes to the user and 2GB to the kernel.
3) The address space may be limited by the size of the page tables.
4) The total virtual memory may be limited by the size of the page file.
5) Before you start malloc'ing, there is stuff already in the virtual address space (e.g., code, stack, reserved areas, data). Your malloc needs to return a contiguous block of memory; the largest theoretical block it could return has to fit within an unallocated area of virtual memory.
6) Your memory management heap may restrict the size that can be allocated.
There are probably other limitations that I have omitted.
-=-=-=-=-
If your program crashed after allocating 1.5GB through malloc, did you check the return value of malloc to see whether it was NULL?
-=-=-=-=-=
The best way to allocate huge blocks of memory is through operating system services that map pages into the virtual address space - not malloc.
In reference to the following article
For a 32-bit application launched in a 32-bit Windows, the total size of all the mentioned data types must not exceed 2 Gbytes.
The same 32-bit program launched in a 64-bit system can allocate about 4 Gbytes (actually about 3.5 Gbytes)
The practical limit you are looking at is around 1.7 GB, due to space already occupied by Windows.
By any chance, how did you find out how much memory it had allocated when it crashed?
I think I understand memory alignment, but what confuses me is that the address of a pointer on some systems is going to be in virtual memory, right? So most of the checking/ensuring of alignment I have seen seem to just use the pointer address. Is it not possible that the physical memory address will not be aligned? Isn't that problematic for things like SSE?
The physical address will be aligned because virtual memory only maps aligned pages to physical memory (and the pages are typically 4KB).
So unless you need alignment > page size, the physical memory will be aligned as per your requirements.
In the specific case of SSE, everything works fine because you only need 16 byte alignment.
I am not aware of any actual system in which an aligned virtual memory address can result in a misaligned physical memory address.
Typically, all alignments on a given platform will be powers of two. For example, on x86 32-bit integers have a natural alignment of 4 bytes (2^2). The page size - which defines how fine a block you can map in physical memory - is generally a large power of two; on x86, the smallest allowable page size is 4096 bytes (2^12). The largest data type that might need alignment on x86 is 16 bytes (for XMM registers and CMPXCHG16B) or 32 bytes (for AVX), i.e. 2^5. Since 2^12 is divisible by 2^5, you'll find that everything aligns right at the start of a page, and since pages are aligned in both virtual and physical memory, a virtual-aligned address will always be physical-aligned.
On a more practical level, allowing aligned virtual addresses to map to unaligned physical addresses would not only make it really hard to generate code, it would also make the CPU architecture more complex than one that simply keeps the mapping page-aligned (since now we'd have odd-sized pages and other weirdness...).
Note that you may have reason to ask for larger alignments than a page from time to time. Typically, for user space coding, it doesn't matter if this is aligned in physical RAM (for that matter, if you're requesting multiple pages, it's unlikely to be even contiguous!). Problems here only arise if you're writing a device driver and need a large, aligned, contiguous block for DMA. But even then usually the device isn't a stickler about larger-than-page-size alignment.