In my Visual C++ app, I know the total number of objects (CMyObject) to be allocated is 16728064, and each object is 64 bytes, so the total memory to be allocated is about 1 GB. The memory will be allocated at the beginning, used for the whole lifetime of the app, and released at the end.
In such a case, what is the best way to allocate the memory?
Currently I try to allocate the memory at the beginning, as follows:
CMyObject *p = new CMyObject[16728064];
// Perform tasks.
delete [] p;
But the allocation fails most of the time. Now I want to do the following instead:
CMyObject *p[10];
p[0] = new CMyObject[1672806];
p[1] = new CMyObject[1672806];
…
// Perform tasks
delete [] p[0];
….
This seems to work for some time.
Therefore, should I split the allocation into pieces as small as possible? Or are there any good solutions for such a situation?
Thanks
In general, yes, you should split larger allocations into smaller fragments. Depending on your system, it may not have 1 GB of contiguous free address space.
Assuming this is an x86 processor or something similar, only the virtual address space needs to be contiguous. For x86, physical memory is composed of 4096-byte pages, and the physical pages do not have to be contiguous, only the mapped virtual address space does.
When I run Windows XP 32 bit, on a system with 4GB, it shows 3.6 GB of physical memory available, and usually my test programs don't have a problem with allocating 1 GB, with failures to allocate memory occurring somewhere between 1.5GB and 2GB.
My guess is that the reason large allocations fail while physical memory is still available has to do with the operating system, as opposed to a processor virtual-to-physical mapping limitation.
What operating system are you using?
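For illustration, a minimal sketch of the chunked approach suggested above (the 64-byte CMyObject body and the chunk size are placeholders, not the OP's real class):

#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

struct CMyObject { char data[64]; };            // stand-in for the real 64-byte object

int main()
{
    const std::size_t kTotal     = 16728064;    // total objects needed (~1 GB)
    const std::size_t kChunkSize = 1 << 20;     // ~1M objects (64 MB) per chunk

    std::vector<std::unique_ptr<CMyObject[]>> chunks;
    for (std::size_t done = 0; done < kTotal; done += kChunkSize)
    {
        std::size_t n = std::min(kChunkSize, kTotal - done);
        chunks.emplace_back(new CMyObject[n]);  // each chunk is a separate, smaller allocation
    }

    // Object i lives at chunks[i / kChunkSize][i % kChunkSize].
    // ... perform tasks ...

    return 0;                                   // unique_ptr releases every chunk automatically
}

Each chunk only needs a 64 MB hole in the address space, which is much easier to find than a single contiguous 1 GB region.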
Related
Let's say I have 8 Gigabytes of RAM and 16 Gigabytes of swap memory. Can I allocate a 20 Gigabyte array there in C? If yes, how is it possible? What would that memory layout look like?
[linux] Can I create an array exceeding RAM, if I have enough swap memory?
Yes, you can. Note that accessing swap is veerry slooww.
how is it possible
Allocate dynamic memory. The operating system handles the rest.
What would that memory layout look like?
On an amd64 system, you can have 256 TiB of address space. You can easily fit a contiguous block of 8 GiB in that space. The operating system divides the virtual memory into pages and copies the pages between physical memory and swap space as needed.
Modern operating systems use virtual memory. In Linux and most other OSes each process has its own address space, sized according to the abilities of the architecture. You can check the size of the virtual address space in /proc/cpuinfo. For example you may see:
address sizes : 43 bits physical, 48 bits virtual
This means that virtual addresses use 48 bits. Half of that is reserved for the kernel, so you can only use 47 bits, or 128 TiB. Any memory you allocate will be placed somewhere in those 128 TiB of address space as if you actually had that much memory.
Linux uses demand paging and by default overcommits memory. When you say
char *mem = (char*)malloc(1'000'000'000'000);
what happens is that Linux picks a suitable address and just records that you have allocated 1'000'000'000'000 bytes (rounded up to the nearest page) of memory starting at that point. (It does a sanity check that the amount isn't totally bonkers, depending on the amount of physical memory that is free, the amount of swap that is free, and the overcommit setting. By default you can allocate a lot more than you have in memory and swap.)
Note that at this point no physical memory and no swap space is connected to your allocated block at all. This changes when you first write to the memory:
mem[4096] = 0;
At this point the program takes a page fault. Linux checks that the address is actually something your program is allowed to write to, finds a physical page, and maps it at &mem[4096]. Then it lets the program retry the write and everything continues.
If Linux can't find a physical page it will try to swap something out to make a physical page available for your program. If that also fails your program will receive a SIGSEGV and likely die.
As a result you can allocate basically unlimited amounts of memory, as long as you never write to more than the physical memory and swap can support. On the other hand, if you initialize the memory (explicitly, or implicitly by using calloc()), the system will quickly notice if you try to use more than is available.
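A minimal sketch to see this behaviour for yourself (assumes a Linux system with the default overcommit setting; the sizes are arbitrary):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main()
{
    // Ask for 1 TB of address space; with overcommit this usually succeeds
    // even though the machine has nowhere near that much RAM + swap.
    const std::size_t size = 1'000'000'000'000ull;
    char *mem = (char *)std::malloc(size);
    if (!mem) {
        std::puts("malloc failed");
        return 1;
    }

    // Touch only one byte per GB: roughly 1000 pages get backed by
    // physical memory, the rest of the allocation stays untouched.
    for (std::size_t off = 0; off < size; off += 1'000'000'000ull)
        mem[off] = 1;

    std::puts("touched ~1000 pages of a 1 TB allocation");
    std::free(mem);
    return 0;
}

Running it while watching the resident size in top shows only a few megabytes actually in use despite the terabyte-sized virtual allocation.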
You can, but not with a simple malloc. It's platform-dependent.
It requires an OS call to allocate swappable memory (it's VirtualAlloc on Windows, for example; on Linux it should be mmap and related functions).
Once that's done, the allocated memory is divided into pages, contiguous blocks of fixed size. You can lock a page, so that it is loaded into RAM and you can read and modify it freely. For old dinosaurs like me, it's exactly how EMS memory worked under DOS... You address your swappable memory with a kind of segment:offset method: first you divide your linear address by the page size to find which page is needed, then you use the remainder to get the offset within that page.
Once unlocked, the page remains in memory until the OS needs memory: then an unlocked page will be flushed to disk, into swap, and discarded from RAM... until you lock (and load...) it again. But this operation may require freeing RAM first, so another process may have its unlocked pages swapped out BEFORE your own page is loaded again. And this is painfully SLOW... even on an SSD!
So, it's not always a good thing to use swap. A better way is to use memory-mapped files - perfect for reading very big files mostly sequentially, with few random accesses - if it suits your needs.
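For instance, a minimal POSIX sketch of mapping a large file instead of allocating everything up front (error handling is trimmed; the file name and size are placeholders):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

int main()
{
    const size_t size = 8ull << 30;             // 8 GiB backing file (placeholder size)

    int fd = open("big_data.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)size) != 0) { perror("ftruncate"); return 1; }

    // Map the file into the address space; pages are paged in on demand and
    // written back by the OS, so it behaves like a huge array backed by disk.
    char *data = (char *)mmap(nullptr, size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    data[0] = 42;                               // first touch pages this page in
    data[size - 1] = 7;                         // far end: another page comes in

    munmap(data, size);
    close(fd);
    return 0;
}

On Windows the equivalent would be CreateFileMapping/MapViewOfFile instead of mmap.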
In my program I want to allocate 32-byte aligned memory to use SSE/AVX. The amount I want to allocate is somewhere around 2000*1300*17*17*4 elements (large data set). I tried using the functions _aligned_malloc() and _mm_malloc(), but for larger sizes they don't allocate memory and the result is an access violation exception. If the amount allocated is small, around 512*320*4*17*17 (small data set), then the code works fine.
These functions return a null pointer when the allocation is done for the large data set, but work fine when the input data size is small. Also, if I just use unaligned memory allocation with new, the code works fine for the large data set too.
Finally, can someone tell me whether there are any significant performance gains in using aligned memory for AVX?
Edit: After some research, according to this post, new allocates memory from the free store and malloc() allocates memory from the heap. Here I am exceeding the maximum heap size, as _aligned_malloc() returns errno 12, which means ENOMEM. In that case, can someone tell me a workaround for this?
On memory allocation:
It seems you are actually trying to allocate 2000*1300*17*17*4 32-byte elements. This means you are trying to allocate about 96 GB while your system has only 12 GB of memory.
Since new works but malloc does not, your local implementation of new seems to be able to allocate huge amounts of virtual memory. malloc allocates from the heap, which means it is usually limited to the physical amount of memory you've got. That's the reason it fails.
As the dataset is bigger than your main memory, you might want to allocate the memory using mmap, which maps a file into virtual memory, making it accessible as if it were in physical memory (but it will only partially be cached in RAM). I'm not sure if it's guaranteed, but mmap usually aligns on an optimal page-size boundary (almost always 4096 bytes).
Anyway you will have a huge performance loss due to the fact that your disk is way slower than your RAM. This is so serious that using AVX will probably not speed up anything at all.
On the performance loss of using unaligned memory:
On modern hardware (say Intel's Haswell onwards, I think) this depends on your access patterns. Unaligned access should have almost no performance overhead when iterating over the array in memory order (each cache line will still be loaded only once). If you access it in random order, then you will often cross a 64-byte cache line boundary. This means your processor will have to load 2 lines into the cache and evict 2 lines from the cache instead of only one. While this might be a serious problem in some situations, in your case the disk slows things down so much that you will barely notice it.
Additional tips (or a shot in the dark):
The way you gave the size of the array (2000*1300*17*17*4) suggests that you are using a multidimensional array (e.g. auto x = new __m256[2000][1300][17][17][4]). So some tips on that:
Iterate through it mostly sequentially.
Check if it is sparse (meaning some of the memory will never be accessed) and shrink it if possible.
You could try to flatten the array and do the more complex index calculation yourself in order to reduce the amount of memory needed (see the sketch below). If you get it to fit completely into your RAM you can start to optimise your code (using AVX and/or aligned memory).
"Total paging file size for all drives is 15247MB" suggests that you actually using only parts of that 96 GB so there might be a way to further reduce your usage.
In that case you might also want to ask another question on how to reduce the memory usage with more info on what you are doing.
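A minimal sketch of the flattening idea mentioned above (the dimensions and the float element type are made up for illustration):

#include <cstddef>
#include <vector>

// Flatten a [D0][D1][D2][D3][D4] array into one contiguous buffer and
// compute the linear index by hand.
struct Flat5D
{
    std::size_t d0, d1, d2, d3, d4;
    std::vector<float> data;

    Flat5D(std::size_t a, std::size_t b, std::size_t c, std::size_t d, std::size_t e)
        : d0(a), d1(b), d2(c), d3(d), d4(e), data(a * b * c * d * e) {}

    float &at(std::size_t i, std::size_t j, std::size_t k, std::size_t l, std::size_t m)
    {
        // Row-major order: the last index varies fastest, which keeps
        // sequential iteration cache friendly.
        return data[(((i * d1 + j) * d2 + k) * d3 + l) * d4 + m];
    }
};

int main()
{
    Flat5D x(20, 13, 17, 17, 4);   // deliberately small so it fits in RAM
    x.at(3, 5, 7, 9, 1) = 1.0f;
    return 0;
}

The same index arithmetic works over an mmap-backed buffer if the data still does not fit in RAM.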
I implemented a bloom filter (bit table) using a three-dimensional char array. It works well until it reaches a point where it can no longer allocate memory and throws bad_alloc. It gives me this error on the next expand request after having allocated 600 MB.
The bloom filter(the array) is expected to grow as big as 8 to 10GB.
Here is the code I used to allocate (expand) the bit table.
unsigned char ***bit_table_=0;
unsigned int ROWS_old=5;
unsigned int EXPND_SIZE=5;

void expand_bit_table()
{
    FILE *temp;
    temp=fopen("chunk_temp","w+b");
    //copy old content to a temporary file
    for(int i=0;i<ROWS_old;++i)
        for(int j=0;j<ROWS;++j)
            fwrite(bit_table_[i][j],COLUMNS,1,temp);
    fclose(temp);
    //delete old table
    chunk_delete_bit_table();
    //create expanded bit table ==> add EXPND_SIZE more rows
    bit_table_=new unsigned char**[ROWS_old+EXPND_SIZE];
    for(int i=0;i<ROWS_old+EXPND_SIZE;++i)
    {
        bit_table_[i]=new unsigned char*[ROWS];
        for(int k=0;k<ROWS;++k)
            bit_table_[i][k]=new unsigned char[COLUMNS];
    }
    //copy back old content (read each row buffer separately; reading into
    //bit_table_[i] itself would overwrite the pointer table)
    temp=fopen("chunk_temp","r+b");
    for(int i=0;i<ROWS_old;++i)
        for(int j=0;j<ROWS;++j)
            fread(bit_table_[i][j],COLUMNS,1,temp);
    fclose(temp);
    //set the remaining content of bit_table_ to 0
    for(int i=ROWS_old;i<ROWS_old+EXPND_SIZE;++i)
        for(int j=0;j<ROWS;++j)
            for(int k=0;k<COLUMNS;++k)
                bit_table_[i][j][k]=0;
    ROWS_old+=EXPND_SIZE;
}
What is the maximum allowable size for an array, and if this is not the issue, what can I do about it?
EDIT:
It is built as a 32-bit application.
It is run on a 64-bit platform (server) with 8 GB of RAM.
A 32-bit program must allocate memory from the virtual memory address space, which also stores chunks of code and data; memory is allocated from the holes between them. Yes, the maximum you can hope for is around 650 megabytes, the largest available hole, and it goes rapidly down from there. You can solve it by making your data structure smarter, like a tree or a list instead of one giant array.
You can get more insight into the virtual memory map of your process with SysInternals' VMMap utility. You might be able to change the base address of a DLL so it doesn't sit plumb in the middle of an otherwise empty region of the address space. The odds that you'll get much beyond 650 MB are, however, poor.
There's a lot more breathing room on a 64-bit operating system: a 32-bit process gets a 4-gigabyte address space there, since the operating system components run in 64-bit mode. You have to use the /LARGEADDRESSAWARE linker option to allow the process to use it all. Still, that only works on a 64-bit OS; your program is still likely to bomb on a 32-bit OS. When you really need that much VM, the simplest approach is to just make a 64-bit OS a prerequisite and build your program targeting x64.
A 32-bit machine gives you a 4GB address space.
The OS reserves some of this (half of it by default on Windows, giving you 2 GB to yourself; I'm not sure about Linux, but I believe it reserves 1 GB).
This means you have 2-3 GB for your own process.
Into this space, several things need to fit:
your executable (as well as all dynamically linked libraries) is memory-mapped into it
each thread needs a stack
the heap
and quite a few other nitty-gritty bits.
The point is that it doesn't really matter how much memory you end up actually using. But a lot of different pieces have to fit into this memory space. And since they're not packed tightly into one end of it, they fragment the memory space. Imagine, for simplicity, that your executable is mapped into the middle of this memory space. That splits your 3GB into two 1.5GB chunks. Now say you load two dynamic libraries, and they subdivide those two chunks into four 750MB ones. Then you have a couple of threads, each needing further chunks of memory, splitting up the remaining areas further. Of course, in reality each of these won't be placed at the exact center of each contiguous block (that'd be a pretty stupid allocation strategy), but nevertheless, all these chunks of memory subdivide the available memory space, cutting it up into many smaller pieces.
You might have 600MB memory free, but you very likely won't have 600MB of contiguous memory available. So where a single 600MB allocation would almost certainly fail, six 100MB allocations may succeed.
There's no fixed limit on how big a chunk of memory you can allocate. The answer is "it depends". It depends on the precise layout of your process' memory space. But on a 32-bit machine, you're unlikely to be able to allocate 500MB or more in a single allocation.
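As a rough illustration, a sketch that tries one big block first and then falls back to smaller pieces (the sizes are arbitrary):

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main()
{
    const std::size_t big = 600u * 1024 * 1024;        // 600 MB in one piece

    void *one = std::malloc(big);
    if (one) {
        std::puts("single 600 MB allocation succeeded");
        std::free(one);
        return 0;
    }

    // No 600 MB hole available: try six 100 MB pieces instead, which only
    // need six smaller holes somewhere in the fragmented address space.
    std::vector<void *> pieces;
    for (int i = 0; i < 6; ++i) {
        void *p = std::malloc(big / 6);
        if (!p) { std::puts("even the smaller piece failed"); break; }
        pieces.push_back(p);
    }
    std::printf("got %zu pieces of 100 MB\n", pieces.size());

    for (void *p : pieces) std::free(p);
    return 0;
}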
The maximum in-memory data a 32-bit process can access is 4 GB in theory (in practice it will be somewhat smaller). So you cannot have 10 GB of data in memory at once (even with the OS supporting more). Also, even though you are allocating the memory dynamically, the free store available is further limited by the stack size.
The actual memory available to the process depends on the compiler settings that generates the executable.
If you really do need that much, consider persisting (parts of) the data in the file system.
I just executed a program that mallocs 13 MB on a 12 MB machine (QEMU emulated!). Not just that, I even walked through the memory and filled it with junk...
#include <stdio.h>
#include <stdlib.h>

void
large_mem(void)
{
#define LONGMEM 13631488   /* 13 MB, as a byte count */
    long long *ptr = (long long *)malloc(LONGMEM);
    long long i;
    if(!ptr) {
        printf("%s(): array allocation of size %lld failed.\n",
               __func__, (long long)LONGMEM);
        ASSERT(0);   /* project-local assert macro */
    }
    /* LONGMEM is a byte count, so only LONGMEM / sizeof(long long) elements fit */
    for(i = 0; i < (long long)(LONGMEM / sizeof(long long)); i++) {
        *(ptr + i) = i;
    }
    free(ptr);
}
How is this possible? I was expecting a segmentation fault.
It's called virtual memory, which is allocated for your program. It's not the physical memory that you call RAM.
There is a max limit for virtual memory as well, but it's higher than RAM. It's implemented (and defined) by your operating system.
This is called lazy allocation.
Most OSes, like Linux, have a lazy allocation memory model wherein the returned memory address is a virtual address and the actual allocation only happens at access time. The OS assumes that it will be able to provide this allocation when the memory is actually accessed.
The memory allocated by malloc is not backed by real memory until the program actually touches it.
In contrast, since calloc initializes the memory to 0, you can be assured that the OS has backed the allocation with actual RAM (or swap).
Try using calloc and most probably it will fail with out-of-memory, unless your swap file/partition is big enough to satisfy the request.
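A small sketch of that difference, following the reasoning above (the 13 MB figure matches the question; under this reasoning the calloc call is the one expected to fail on a machine without enough RAM and swap):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main()
{
    const std::size_t size = 13631488;   // 13 MB, same as in the question

    // malloc: only address space is reserved; pages are backed on first touch.
    void *lazy = std::malloc(size);
    std::printf("malloc: %s\n", lazy ? "succeeded" : "failed");

    // calloc: the memory is zeroed, so (per the answer above) it must
    // actually be backed by RAM or swap.
    void *eager = std::calloc(1, size);
    std::printf("calloc: %s\n", eager ? "succeeded" : "failed");

    std::free(lazy);
    std::free(eager);
    return 0;
}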
Sounds like your operating system is swapping pages:
Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical random-access memory (RAM).
In other words, the operating system is using some of your hard disk space to satisfy your 13 MB allocation request (at great expense of speed, since the hard disk is much, much slower than RAM).
Unless the virtualized OS has swap available, what you're encountering is called overcommit, and it basically exists because the easy way to manage resources in a system with virtual memory and demand/copy-on-write pages is not to manage them. Overcommit is a lot like a bank loaning out more money than it actually has -- it seems to work for a while, then things come crashing down. The first thing you should do when setting up a Linux system is fix this with the command:
echo "2" > /proc/sys/vm/overcommit_memory
That just affects the currently running kernel; you can make it permanent by adding a line to /etc/sysctl.conf:
vm.overcommit_memory=2
An application's virtual bytes grow to twice its private bytes.
Does this indicate a memory leak? Bad application design?
The OS is 32-bit.
Any thoughts are welcome.
The application is a stream database.
Fragmentation.
If you allocate the following chunks of memory:
16KB
8KB
16KB
and you then free the 8 KB chunk, your application will have 32 KB of private bytes, but 40 KB of virtual memory, which is actually the highest virtual memory address that has ever been in use by your process (ignoring the other memory parts for the sake of simplicity).
Consider (if possible) using another memory manager. Some alternatives are:
The Windows Low-fragmentation heap (see http://msdn.microsoft.com/en-us/library/aa366750%28VS.85%29.aspx for more info)
The Doug Lea open-source memory manager
Commercial alternatives like Hoard
A fourth alternative is to write your own memory manager. It's not that easy, but if done right, it can have quite some benefits. Especially for certain niche or special applications, writing your own memory manager can be useful.
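As a rough idea of what such a custom manager can look like, here is a toy fixed-size block pool (a sketch only; a real allocator also needs thread safety, alignment guarantees, and the ability to grow):

#include <cstddef>
#include <vector>

// Hands out fixed-size blocks from one big slab and keeps freed blocks on an
// intrusive free list, so repeated alloc/free cycles don't churn the process heap.
class FixedPool
{
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize < sizeof(void *) ? sizeof(void *) : blockSize),
          slab_(blockSize_ * blockCount),
          freeList_(nullptr)
    {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < blockCount; ++i)
            release(slab_.data() + i * blockSize_);
    }

    void *acquire()
    {
        if (!freeList_) return nullptr;             // pool exhausted
        void *block = freeList_;
        freeList_ = *static_cast<void **>(block);   // pop the next free block
        return block;
    }

    void release(void *block)
    {
        *static_cast<void **>(block) = freeList_;   // push onto the free list
        freeList_ = block;
    }

private:
    std::size_t blockSize_;
    std::vector<char> slab_;    // one contiguous backing allocation
    void *freeList_;
};

int main()
{
    FixedPool pool(64, 1024);   // 1024 blocks of 64 bytes each
    void *a = pool.acquire();
    void *b = pool.acquire();
    pool.release(a);
    pool.release(b);
    return 0;
}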
An application's virtual bytes grow to twice its private bytes.
If the application allocates only heap memory, then to me it would be a sign that the application allocates lots of memory but never actually touches it. For example:
void *p = malloc( 16u<<20 );
would eat up 16 MB of virtual memory. But as long as the application doesn't perform any actions with the memory block, the OS wouldn't even attempt to map the virtual memory to RAM. The simplest way to force the actual allocation of private memory is to memset() it:
void *p = malloc( 16u<<20 );
memset( p, 0, 16u<<20 );
Does this indicate a memory leak? Bad application design?
Or both. Or neither.
The longer variant of the response: unknown; it depends on what memory the application allocates, what other resources the application uses, the OS, the h/w platform, etc.
If unsure, use a memory leak analysis tool to investigate, e.g. valgrind. Read up on SO for more information on memory leak analysis in C++.
Memory allocation has overhead to store management information about what was allocated. If you're allocating very small buffers the extra information can be a significant percentage of the total. That might be what you're seeing.
One possibility is that you set a large stack reserve size for your threads with the linker option /STACK:reserve_bytes and then start a lot of threads.
For example, if you have an ATL service, it automatically starts 4*numberOfCores apartment message dispatching threads by default. Compile and link such a service with /STACK:12000000 (12 megabytes), then run it on a 16-core server and it will start 64 threads, each with a 12MB stack, immediately consuming 768MB of virtual address space, although the actual committed memory may be much lower.
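A hedged Windows sketch of that effect (the worker function and thread count are placeholders; the per-thread reservation shows up in virtual bytes long before any of it is committed):

#include <windows.h>
#include <cstdio>

DWORD WINAPI worker(LPVOID)
{
    Sleep(10 * 1000);       // placeholder work: keep the thread alive for a while
    return 0;
}

int main()
{
    // Reserve roughly 12 MB of stack per thread, mirroring /STACK:12000000.
    const SIZE_T stackReserve = 12u * 1024 * 1024;

    HANDLE threads[64];
    for (int i = 0; i < 64; ++i)
    {
        threads[i] = CreateThread(nullptr, stackReserve, worker, nullptr,
                                  STACK_SIZE_PARAM_IS_A_RESERVATION, nullptr);
    }
    // 64 threads x 12 MB reserved stack = 768 MB of virtual address space,
    // even though most of it is never committed.
    std::printf("64 threads started\n");

    WaitForMultipleObjects(64, threads, TRUE, INFINITE);
    for (HANDLE h : threads) CloseHandle(h);
    return 0;
}

Watching the process in Process Explorer or VMMap shows the virtual size jump as soon as the threads are created.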