Compilation hitting virtual memory limitation in g++ 4.7.1? - c++

I'm compiling some code that makes heavy use of templates (it's based on the boost::msm framework). When compiled with g++ 4.7.1, the cc1plus process reaches about 2.4 GB of RAM and then fails with a "virtual memory exhausted: Cannot allocate memory" error.
I'm using a 32-bit compiler (switching to 64-bit is not an option ATM). The machine itself is a 64-bit Ubuntu box with 16 GB of RAM, and the compilation is performed under a 64-bit chroot of the Debian wheezy distribution. At compile time there is plenty of RAM available, so if the compilation were failing for lack of physically available RAM it should reach 4 GB first. I tried playing with "ulimit -m": setting it to smaller values makes the compiler fail earlier, but when it is left at "unlimited" the compilation still fails at the above-mentioned 2+ GB.
So I guess something else must be limiting me. Maybe someone has encountered a similar issue and knows a way to change the limit?

In a 32-bit application (including a compiler), you typically get somewhere between 2 and 3 GB of virtual address space available for user mode. What you can actually use is reduced further by a combination of address space reserved by the system, address space fragmentation (there is virtual memory available, just not a contiguous chunk big enough for whatever block size new or malloc is requesting), and "memory reservation", where the process has reserved a fairly large chunk of address space but isn't actually using all of it, so it isn't "populated".
Any particular reason you can't use a 64-bit GCC to generate 32-bit code using -m32? That would be my solution.
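If you do go that route, here is a quick way to confirm that the -m32 build really produces 32-bit code (the file name and invocation are just an example):

    // Quick sanity check, assuming an invocation like: g++ -m32 check.cpp -o check
    // A 32-bit build should report 4-byte pointers.
    #include <cstdio>

    int main() {
        std::printf("sizeof(void*) = %u bytes -> %s build\n",
                    static_cast<unsigned>(sizeof(void*)),
                    sizeof(void*) == 4 ? "32-bit" : "64-bit");
        return 0;
    }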

Related

Is it true that a 32-bit program can run out of memory if other programs use too much, on 64-bit Windows?

I am developing a 32-bit application and got an out-of-memory error.
I also noticed that Visual Studio and a plugin (other apps too) were using a lot of memory, around 4 or 5 GB in total.
So I suspected that these programs were using up all the memory addresses where my program could find free memory.
I assumed that a 32-bit program can only use the first 4 GB of memory and cannot use anything beyond that at all.
I don't know whether this is correct; otherwise I will look for other explanations, such as a bug in my code.
Your statement that a 32-bit program "can only use the first 4 GB of memory and cannot use anything beyond that at all" is definitely incorrect. On a 64-bit OS, all applications can use all of the memory, regardless of their bitness, because the translation tables that map virtual to physical memory are 64-bit.
Some really ancient hardware may not allow DMA to addresses above 4GB, but I really hope most of that is in the junk-yard by now.
If the system as a whole is running low on memory, it will affect all applications more or less equally.
However, a 32-bit application can only, by default, use the lower 2GB of the virtual address range (although these 2GB can be placed anywhere in the physical memory, as described above by means of a 64-bit translation table). You can extend this to nearly 4GB (3GB in a 32-bit OS, and subject to the /3GB boot flag in this case) by using /LARGEADDRESSAWARE in your linking command - this simply tells the OS that your application will "understand" that addresses can be negative, and thus will operate correctly with addresses over 2GB.
Any system can be brought down by a too heavy load.
But in normal use in Windows and any other virtual memory OS, the memory consumption of other programs does not much affect any given program execution.
Getting an out of memory error is unusual, but it can happen if you make a large allocation or if you declare a large local automatic variable. It can also happen if you fail to properly deallocate memory that's no longer used, i.e. if the program is leaking memory. For a 32-bit program on a 64-bit machine it's then not memory itself that's used up, but available address space within the program.
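To convince yourself that it is address space rather than physical RAM that runs out, a rough experiment along these lines (the 16 MiB block size is an arbitrary choice) keeps allocating until new gives up and reports the total; in a 32-bit process it will typically stop somewhere between 2 and 4 GiB even with plenty of free RAM:

    // Rough experiment: keep allocating 16 MiB blocks until the allocator gives
    // up, then report the total that was obtained before bad_alloc.
    #include <cstddef>
    #include <cstdio>
    #include <new>
    #include <vector>

    int main() {
        const std::size_t block = 16 * 1024 * 1024;   // 16 MiB per allocation
        std::vector<char*> blocks;
        std::size_t total_mib = 0;
        try {
            for (;;) {
                blocks.push_back(new char[block]);
                total_mib += 16;
            }
        } catch (const std::bad_alloc&) {
            std::printf("bad_alloc after ~%u MiB\n", static_cast<unsigned>(total_mib));
        }
        for (std::size_t i = 0; i < blocks.size(); ++i)   // release everything
            delete[] blocks[i];
        return 0;
    }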

Compile error: virtual memory exhausted

I am trying to compile an application but I seem to be running into a preset memory constraint. When compiling, it gives me the following error:
"virtual memory exhausted: Nicht genügend Hauptspeicher verfügbar", so I read this as having not enough RAM+Swap available.
As I am compiling this on a machine with 32GB RAM, this is quite unlikely. I checked the memory consumption and it breaks down at 3GB. Compiling the application on a different machine works, it needs around 3.5GB. I'm running on fedora 19, 64bit.
I also checked the available user memory using ulimit -a, but everything is set to unlimited (max memory size, virtual memory).
Are there any other places where there might be a limit set to the maximum memory available to a process or user? I'm starting to run out of options.
If the compiler is running out of memory, it might be due to a compiler bug, or to some runaway template expansion (remember that C++ templates are Turing-complete; I remember some demented creative soul doing something like computing pi to a lot of digits at compile time). Check your templates.
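As a contrived illustration (not from any real project) of how template-heavy code drives compiler memory up: every distinct instantiation a translation unit triggers stays resident in the compiler until the end of the compile, so deep or wide instantiation chains add up quickly.

    // Contrived example: each level of recursion produces a new, distinct type
    // that the compiler has to keep around for the rest of the compilation.
    template <int N>
    struct Deep {
        typedef Deep<N - 1> prev;                 // forces instantiation of Deep<N-1>
        static const int value = prev::value + 1;
    };

    template <>
    struct Deep<0> {
        static const int value = 0;
    };

    // Increasing the depth (subject to -ftemplate-depth) makes cc1plus hold more
    // and more instantiations in memory; heavy metaprogramming libraries such as
    // boost::msm do the same kind of thing on a much larger scale.
    int main() { return Deep<256>::value; }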
If it could be a compiler bug, upgrade everything. Try using clang++ instead of g++. Play with optimization and other settings.
Where does the code come from? Has somebody else built it?

Memory allocation limit on C++

I want to run this huge C++ project that uses up to 8.3 GB of memory. Can I run this program under certain circumstances, or is it impossible?
It's fine. You just need to be on a 64-bit architecture and ensure that there is sufficient swap space plus physical memory available.
It really depends. If the program needs to have all 8.3 GB in memory all the time (its working set), you may need a similar amount of memory installed in your computer.
Let's now assume you have 4 GB of RAM. In that case you will most probably still be able to execute the program thanks to swap (a hard disk area where memory pages are swapped in and out to enlarge the virtual memory size). But even if it does work, it could run really slowly (to the point of not being usable) because of thrashing.
On the other hand, if your program processes the 8.3 GB of data in smaller chunks, not all of the data needs to be in memory at the same time, and you will not need that much RAM installed in your computer.
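A minimal sketch of that chunked style, assuming the data can be streamed from a file (the file name, chunk size and the summing "processing" step are all placeholders):

    // Sketch of chunked processing: only one 64 MiB buffer is resident at a
    // time, so the footprint stays small no matter how large the input file is.
    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main() {
        const std::size_t chunk_bytes = 64 * 1024 * 1024;      // 64 MiB at a time
        std::ifstream in("huge_input.bin", std::ios::binary);  // hypothetical input
        if (!in) { std::cerr << "cannot open input\n"; return 1; }

        std::vector<char> buffer(chunk_bytes);
        double checksum = 0.0;                                 // stand-in for real work
        while (in) {
            in.read(&buffer[0], static_cast<std::streamsize>(buffer.size()));
            std::streamsize got = in.gcount();
            for (std::streamsize i = 0; i < got; ++i)
                checksum += static_cast<unsigned char>(buffer[i]);
        }
        std::cout << "checksum: " << checksum << "\n";
        return 0;
    }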
As Oli Charlesworth mentioned, you will need a 64-bit system (both the hardware and the OS) or, at least, a system with PAE capabilities if you want to install more than 4 GB of RAM.
Yes, it is possible. You need to be in a 64-bit environment and, of course, have the RAM available. You may still be unable to allocate more than 4 GB of contiguous address space at a time. It's possible that you'll have to allocate it in smaller chunks, though.

allocate more than 1 GB memory on 32 bit XP

I've run into an odd problem: my process cannot allocate more than what seems to be slightly below 1 GiB. The Windows Task Manager "Mem Usage" column shows values close to 1 GiB when my software throws a bad_alloc exception. Yes, I've checked that the value passed to memory allocation is sensible (no race condition / corruption exists that would make this fail). Yes, I need all this memory and there is no way around it (it's a buffer for images, which cannot be compressed any further).
I'm not trying to allocate the whole 1 GiB of memory in one go; there are a few allocations of around 300 MiB each. Would this cause problems? (I'll try to see if making more, smaller allocations works any better.) Is there some compiler switch or something else that I must set in order to get past 1 GiB? I've seen others complaining about the 2 GiB limit, which would be fine for me... I just need a little bit more :). I'm using VS 2005 with SP1, running on 32-bit XP, and it's in C++.
On a 32-bit OS, a process has a 4GB address space in total.
On Windows, half of this is off-limits, so your process has 2GB.
This is 2GB of contiguous memory. But it gets fragmented. Your executable is loaded in at one address, each DLL is loaded at another address, then there's the stack, and heap allocations and so on. So while your process probably has enough free address space, there are no contiguous blocks large enough to fulfill your requests for memory. So making smaller allocations will probably solve it.
If your application is linked with the LARGEADDRESSAWARE flag, it will be allowed to use as much of the remaining 2 GB as Windows can spare. (How much that is depends on your platform and environment:
for 32-bit code running on a 64-bit OS, you'll get a full 4 GB address space;
for 32-bit code running on a 32-bit OS without the /3GB boot switch, the flag means nothing at all;
for 32-bit code running on a 32-bit OS with the /3GB boot switch, you'll get 3 GB of address space.)
So really, setting the flag is always a good idea if your application can handle it (it's basically a capability flag: it tells Windows that the application can handle more memory, so if Windows can provide it, it should go ahead and give the process as large an address space as possible), but you probably can't rely on it having an effect. Unless you're on a 64-bit OS, it's unlikely to buy you much. (On a 32-bit OS the /3GB boot switch is necessary, and it has been known to cause problems with drivers, especially video drivers.)
Allocating big chunks of contiguous memory is always a problem.
You are much more likely to get the memory you need if you request it in smaller chunks.
You should redesign your memory structures.
You are right to suspect the large 300 MB allocations. Your process will be able to get close to 2 GB (3 GB if you use the /3GB boot.ini switch and the LARGEADDRESSAWARE link flag), but not as a single large contiguous block.
Typical solutions for this are to break up the requests into tiles or strips of fixed size (say 256x256x4 bytes) and write an intermediate class to hide this representation detail.
You can quickly verify this by writing a small allocation loop that tries blocks of different sizes.
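Something along these lines (the sizes are arbitrary) reports roughly the largest single block the process can still get, which tends to be much smaller than the total free address space once fragmentation sets in:

    // Quick probe: try progressively smaller single allocations until one
    // succeeds, giving a rough idea of the largest contiguous block available.
    #include <cstddef>
    #include <iostream>
    #include <new>

    int main() {
        for (std::size_t mb = 2048; mb >= 1; mb /= 2) {
            char* p = new (std::nothrow) char[mb * 1024 * 1024];
            if (p) {
                std::cout << "largest block obtained: ~" << mb << " MiB\n";
                delete[] p;
                return 0;
            }
        }
        std::cout << "could not even allocate 1 MiB\n";
        return 0;
    }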
You could also check this function from MSDN, which mentions per-process page limits:
This parameter must be greater than or equal to 13 pages (for example, 53,248 on systems with a 4K page size), and less than the system-wide maximum (number of available pages minus 512 pages). The default size is 345 pages (for example, this is 1,413,120 bytes on systems with a 4K page size).
Note, though, that 345 pages at a 4K page size works out to roughly 1.4 MB, not 1 GB, so this default is probably not what is limiting you.
When I have a few big allocs like that to do, I use the Windows function VirtualAlloc, to avoid stressing the default allocator.
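A bare-bones sketch of that approach (the size and the reduced error handling are just for illustration):

    // Windows-only sketch: take a ~300 MiB block straight from the virtual
    // memory manager instead of the CRT heap, then release it when done.
    #include <windows.h>
    #include <iostream>

    int main() {
        const SIZE_T size = 300u * 1024u * 1024u;
        void* block = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!block) {
            std::cerr << "VirtualAlloc failed, error " << GetLastError() << "\n";
            return 1;
        }
        // ... use the block as image storage ...
        VirtualFree(block, 0, MEM_RELEASE);
        return 0;
    }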
Another way forward might be to use nedmalloc in your project.

Out of memory (?) problem on Win32 (vs. Linux)

I have the following problem:
A program run on a Windows machine (32-bit, 3.1 GB of memory, both VC++ 2008 and MinGW compiled code) fails with a bad_alloc exception (after allocating around 1.2 GB; the exception is thrown when trying to allocate a vector of 9 million doubles, i.e. around 75 MB) with plenty of RAM still available (at least according to Task Manager).
The same program run on Linux machines (32-bit, 4 GB of memory; 32-bit, 2 GB of memory) runs fine with a peak memory usage of around 1.6 GB. Interestingly, the Win32 code generated by MinGW, run on the 4 GB Linux machine under Wine, also fails with a bad_alloc, albeit at a different (later) place than when run under Windows...
What are the possible problems?
Heap fragmentation? (How would I know? How can this be solved?)
Heap corruption? (I have run the code with pageheap.exe enabled with no errors reported; implemented vector access with bounds checking, again no errors; the code is essentially free of pointers, only std::vectors and std::lists are used. Running the program under Valgrind (memcheck) consumes too much memory and ends prematurely, but does not find any errors.)
Out of memory??? (There should be enough memory)
Moreover, what could be the reason that the Windows version fails while the Linux version works (even on machines with less memory)? (Also note that the /LARGEADDRESSAWARE linker flag is used with VC++ 2008, if that has any effect.)
Any ideas would be much appreciated, I am at my wits end with this... :-(
It has nothing to do with how much RAM is in your system. You are running out of virtual address space. A process on a 32-bit Windows OS gets a 4 GB virtual address space (irrespective of how much RAM you have), of which 2 GB is for user mode (3 GB in the case of LARGEADDRESSAWARE) and 2 GB for the kernel. When you try to allocate memory using new, the OS tries to find a contiguous block of virtual memory large enough to satisfy the request. If your virtual address space is badly fragmented, or you are asking for a huge block of memory, the allocation will fail and throw a bad_alloc exception. Check how much virtual memory your process is using.
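On Windows you can query that with GlobalMemoryStatusEx; the ullAvailVirtual field reports how much of the calling process's own address space is still free, which is the number that matters here rather than free physical RAM (minimal sketch):

    // Windows-only sketch: report the calling process's own virtual address
    // space, total and still free.
    #include <windows.h>
    #include <iostream>

    int main() {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (!GlobalMemoryStatusEx(&ms)) {
            std::cerr << "GlobalMemoryStatusEx failed\n";
            return 1;
        }
        std::cout << "virtual address space total: "
                  << ms.ullTotalVirtual / (1024 * 1024) << " MiB\n"
                  << "virtual address space free:  "
                  << ms.ullAvailVirtual / (1024 * 1024) << " MiB\n";
        return 0;
    }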
With Windows XP x86 and the default settings, 1.2 GB is about all the address space you have left for your heap after system libraries, your code, the stack and other stuff get their share. Note that largeaddressaware requires you to boot with the /3GB boot flag to try to give your process up to 3GB. The /3GB flag causes instability on a lot of XP systems, which is why it's not enabled by default.
Server variants of Windows x86 give you more address space, both by using the 3GB/1GB split and by using PAE to allow the use of your full 4GB of RAM.
Linux x86 uses a 3GB/1GB split by default.
A 64 bit OS would give you more address space, even for a 32bit process.
Are you compiling in Debug mode? If so, the allocation will generate a huge amount of debugging data which might generate the error you have seen, with a genuine out-of-memory. Try in Release to see if that solves the problem.
I have only experienced this with VC, not MinGW, but then I haven't checked there either; this could still explain the problem.
To elaborate more about the virtual memory:
Your application fails when it tries to allocate a single block of roughly 75 MB and there is no contiguous span of virtual address space left where it can fit. You might be able to get a little farther if you switched to data structures that need less contiguous memory -- perhaps some class that approximates a huge array by using a tree where all data is kept in 1 MB (or so) leaf nodes. Also, in C++, when doing a huge number of allocations it really helps if all the big allocations are of the same size; this helps memory reuse and keeps fragmentation much lower.
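For example, a very simple version of that idea is a chunked array that stores the doubles in fixed ~1 MB blocks, so no single allocation ever needs a large contiguous range (std::deque<double> gives a similar effect out of the box, just with smaller blocks); the class below is only a sketch:

    // Sketch of a chunked array: doubles live in fixed ~1 MB blocks, so no
    // single allocation ever needs a large contiguous address range.
    #include <cstddef>
    #include <vector>

    class ChunkedDoubles {
        static const std::size_t kChunkElems = (1024 * 1024) / sizeof(double);
        std::vector<std::vector<double> > chunks_;
        std::size_t size_;
    public:
        ChunkedDoubles() : size_(0) {}
        void push_back(double v) {
            if (size_ % kChunkElems == 0) {          // start a new ~1 MB block
                chunks_.push_back(std::vector<double>());
                chunks_.back().reserve(kChunkElems);
            }
            chunks_.back().push_back(v);
            ++size_;
        }
        double& operator[](std::size_t i) {
            return chunks_[i / kChunkElems][i % kChunkElems];
        }
        std::size_t size() const { return size_; }
    };

A std::vector<double> of 9 million elements needs one ~72 MB contiguous block; the same data in a structure like this needs around 72 separate 1 MB blocks, which is far easier to satisfy in a fragmented address space.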
However, the correct thing to do in the long run is simply to switch to a 64-bit system.