virtual memory exhausted: Cannot allocate memory with 8 GB RAM - C++

My code is 32-bit and I think the compiler is too. When I compile my C++ code, the compiler process takes more than 2 GB of memory. As I understand it, on a 32-bit system no process can use more than 2 GB.
Any suggestions on how I can get around this? I found a lot of posts about this, but they are not helpful because they suggest adding swap. I already have 8 GB of RAM, so my problem is not available memory; it's the size of the compiling process, which cannot exceed 2 GB.
Even though I have 8 GB of RAM, I have tried adding swap and that does not work either.

On 32-bit Windows, the maximum amount of RAM is 4 GB. By default, this address space is separated into kernel memory and process memory, each 2 GB large. Most programs don't need more than 2 GB of memory, but if yours does, you can enlarge the process memory by specifying the /3GB switch, leaving less memory for the kernel.
Read here for more information: https://msdn.microsoft.com/en-us/library/windows/hardware/ff556232(v=vs.85).aspx
Edit: Keep in mind that if you want to make use of this additional memory, you also need to link your program with the /LARGEADDRESSAWARE switch. That sets the IMAGE_FILE_LARGE_ADDRESS_AWARE flag in your executable's PE header, telling Windows that your program can handle more than 2 GB of memory.
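As a quick way to verify at run time that the flag took effect, here is a minimal sketch (plain Windows API, assuming nothing beyond a console program) that prints the highest usable application address:

    // Sketch: print the top of the user-mode address range.
    // Roughly 0x7FFEFFFF without /LARGEADDRESSAWARE, and close to
    // 0xFFFEFFFF with it when running under WOW64.
    #include <windows.h>
    #include <cstdio>

    int main() {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        std::printf("max application address: %p\n", si.lpMaximumApplicationAddress);
        return 0;
    }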

Since you stated you have 8GB of RAM, I am presuming your OS and CPU are actually 64-bit. So you are asking how to make a 32-bit program access more than 2GB of virtual address space, on a 64-bit OS, i.e. running under WOW64.
In that case, using the /LARGEADDRESSAWARE linker option in Visual Studio will give your app 4 GB of virtual address space under WOW64. You won't see any benefit on 32-bit Windows unless you make your users boot the OS with the /3GB flag.
I believe your app doesn't really need more than 2GB of RAM, but it's impossible to tell without knowing any details.
In any case, the one correct answer is: switch to a 64-bit app, which will get you 8 TB of virtual address space. That's 8 terabytes.
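If you do make the switch, a trivial compile-time guard (a sketch, nothing Windows-specific) catches accidental 32-bit builds early:

    // Trivial sketch: fail the build if the target is not 64-bit.
    static_assert(sizeof(void*) == 8, "build this program as 64-bit");

    int main() { return 0; }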

Related

Why can I only allocate 2 GB on a 4 GB virtual memory space?

My professor said that normally we can only use about 2 GB out of 4 GB of RAM because the other 2 GB is used by the OS. However, when running some tests, I see that within the 4 GB virtual address space of a process, I can only allocate a maximum of just under 2 GB using the VirtualAlloc() function. Why is that (I was expecting more than 3 GB)?
As far as I know, the stack, data, and code segments only use a small amount of memory. A friend told me that the other 2 GB is used by the OS, just as the professor said. However, I think the professor meant 2 GB of physical memory, not 2 GB of this process's virtual address space.
Could anyone explain what happens here? Thanks.
Some information:
Physical memory: 4 GB.
Virtual memory: 4 GB.
OS: Windows 10.
Your professor is correct - 2 GB of your virtual address space is kernel memory.
This way, when a context switch occurs, the kernel's 2 GB of mappings stay in place and only the user half of the address space changes, which helps performance.
Microsoft also published an explanation of this split, including how to increase the user portion to 3 GB.
By the way, the situation is different in 64-bit machines, where the virtual memory is much larger.
It does not have anything to do with RAM; the virtual in VirtualAlloc() tells no lies. Sure, the upper 2 GB is reserved for the OS; the biggest chunks it needs are the file system cache and the video memory aperture. The latter is the bigger reason why the /3GB boot option no longer works. As you found out, you can never get the full 2 GB: your program needs address space as well and is always first. It got its share when it was loaded by the OS loader; what is left can be divvied up by VirtualAlloc().
Usually well under 2 GB, since the address space tends to get fragmented by loaded DLLs. Beware that you might have some even if you did not link their import libraries; anti-malware and cloud-storage utilities may inject them. Any heap allocations in your program also tend to cause splits.
These concerns are getting pretty dated; all modern machines boot a 64-bit OS. A 32-bit program now runs in an emulation layer and the upper range is no longer needed by the OS. You can now get a lot closer to 4 GB by linking with the /LARGEADDRESSAWARE linker option. That option by itself gives you a pretty good hint as to why splitting up the address space like that was originally considered a good idea. It is also the approach taken in 64-bit OSes.
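To measure how much address space your own process can still reserve, a small probe helps; a sketch assuming the Windows API, with an arbitrary 64 MB chunk size:

    // Sketch: reserve 64 MB chunks of address space until VirtualAlloc
    // fails, then report the total. MEM_RESERVE claims address space
    // without committing physical memory, so this is cheap to run.
    #include <windows.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const SIZE_T chunk = 64 * 1024 * 1024; // 64 MB per reservation
        std::vector<void*> blocks;
        SIZE_T total = 0;
        for (;;) {
            void* p = VirtualAlloc(nullptr, chunk, MEM_RESERVE, PAGE_NOACCESS);
            if (!p) break;
            blocks.push_back(p);
            total += chunk;
        }
        std::printf("reserved %u MB of address space\n",
                    (unsigned)(total / (1024 * 1024)));
        for (void* p : blocks) VirtualFree(p, 0, MEM_RELEASE); // clean up
        return 0;
    }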

32-bit process memory leak on an x64 processor

I made a 32-bit C++ program which is always run on x64 machines. A client says that running 5 instances of this process causes all of their 24 GB of RAM to be used.
Immediately I would think there was a memory leak but I am unable to reproduce this memory issue.
Doing a bit more research into memory allocations, I found Memory Limits for Windows. This tells me that a 32-bit process will not be allowed more than 2 GB of memory by the OS.
Is it at all possible that a 32-bit application on 64-bit Windows could have a memory leak that uses more than 2 GB?
P.S. Killing the process results in the memory being restored to normal operating levels (about 2 GB).
[EDIT] I have now seen that most of the memory being used is Kernel Memory: Nonpaged. Does this mean that it is some system resource which is being used and not a memory leak?
[UPDATE] The problem is not a driver or memory leak. It seems to be a process handle leak: something is continuously opening new handles to a file. This was found using perfmon to monitor the process. As a rule of thumb, if a process has more than 2000 to 3000 handles you should investigate, especially if that number is increasing every few seconds.
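Besides perfmon, the handle count can also be watched from code. A minimal sketch using the documented GetProcessHandleCount API (available since Windows XP SP1):

    // Sketch: print this process's open handle count once per second.
    // A count that climbs steadily is a strong hint of a handle leak.
    #include <windows.h>
    #include <cstdio>

    int main() {
        for (;;) {
            DWORD handles = 0;
            if (GetProcessHandleCount(GetCurrentProcess(), &handles))
                std::printf("open handles: %lu\n", handles);
            Sleep(1000);
        }
    }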
As stated in Memory Limits for Windows, the limit for a 32-bit process on a 64-bit system is 4 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE set, so your 5 processes could consume 20 GB of memory in total. The flag is set via the /LARGEADDRESSAWARE linker option, which expands the virtual address space.
It is obviously possible, as the client is experiencing it.
(Maybe you expected some ideas as to how? You don't provide much info or code, so in a very general way I would suggest that the memory allocation may not be in the app itself. The app itself may take only ~1-2 GiB, yet push the OS into doing something expensive, such as memory-mapping a file of 4+ GiB, or holding device locks where a device driver misbehaves, etc.)
You should profile the memory usage on the target system to get an idea of how much your code actually uses. Then you can search for the rest of it.
In general, the /LARGEADDRESSAWARE:ON linker switch can allow a 32-bit application to use more than 2 GB, and the Address Windowing Extensions can allow using even more memory. But if you aren't using any of these techniques, your application should be confined to the 2 GB range. However, since the upper 2 GB range is used for system resources, maybe you are leaking system resources?

Is it true that a 32-bit program will run out of memory if other programs use too much, on 64-bit Windows?

I am developing a 32-bit application and got an out-of-memory error.
I noticed that Visual Studio and a plugin (other apps too) were using a lot of memory, around 4 or 5 GB in total.
So I suspected that these programs were using up all the memory addresses in which my program could find free memory.
I suppose that a 32-bit program can only use the first 4 GB, and cannot use any other memory at all.
I don't know if I am correct about this; otherwise I will look for other answers, such as a bug in my code.
Your statement that
32-bit can only use the first 4 GB; other memory it cannot use at all
is definitely incorrect. On a 64-bit OS, all applications can use all of the physical memory, regardless of their bitness, because the virtual-to-physical translation tables can produce physical addresses wider than 32 bits.
Some really ancient hardware may not allow DMA to addresses above 4GB, but I really hope most of that is in the junk-yard by now.
If the system as a whole is running low on memory, it will affect all applications more or less equally.
However, a 32-bit application can only, by default, use the lower 2GB of the virtual address range (although these 2GB can be placed anywhere in the physical memory, as described above by means of a 64-bit translation table). You can extend this to nearly 4GB (3GB in a 32-bit OS, and subject to the /3GB boot flag in this case) by using /LARGEADDRESSAWARE in your linking command - this simply tells the OS that your application will "understand" that addresses can be negative, and thus will operate correctly with addresses over 2GB.
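To make the "negative addresses" point concrete, here is a purely illustrative sketch (hypothetical values, not code from the question) of the kind of signed cast that breaks above the 2 GB line:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // An address in the upper half of a 32-bit address space (above 2 GB).
        std::uintptr_t addr = 0x80000000u;
        // Broken: viewed as a signed 32-bit int the address looks negative,
        // so comparisons and offsets done through int casts misbehave.
        std::int32_t asSigned = static_cast<std::int32_t>(addr);
        std::printf("signed view: %d\n", asSigned);     // negative
        // Correct: addresses must be treated as unsigned.
        std::uint32_t asUnsigned = static_cast<std::uint32_t>(addr);
        std::printf("unsigned view: %u\n", asUnsigned); // 2147483648
        return 0;
    }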
Any system can be brought down by too heavy a load.
But in normal use on Windows, or any other virtual memory OS, the memory consumption of other programs does not greatly affect any given program's execution.
Getting an out of memory error is unusual, but it can happen if you make a large allocation or if you declare a large local automatic variable. It can also happen if you fail to properly deallocate memory that's no longer used, i.e. if the program is leaking memory. For a 32-bit program on a 64-bit machine it's then not memory itself that's used up, but available address space within the program.
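In code, that exhaustion surfaces as std::bad_alloc. A minimal standard C++ sketch (the 3 GB figure is an arbitrary example) of failing gracefully:

    // Sketch: a large allocation in a 32-bit process can throw
    // std::bad_alloc even though the machine has plenty of free RAM,
    // because the process's own address space is what runs out.
    #include <cstdio>
    #include <new>
    #include <vector>

    int main() {
        try {
            std::vector<char> big(3ull * 1024 * 1024 * 1024); // 3 GB request
            std::printf("allocated %zu bytes\n", big.size());
        } catch (const std::bad_alloc&) {
            std::printf("allocation failed: address space exhausted\n");
        }
        return 0;
    }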

Memory allocation limit in C++

I want to run a huge C++ project that uses up to 8.3 GB of memory. Can I run this program under certain circumstances, or is it impossible?
It's fine. You just need to be on a 64-bit architecture and ensure that there's sufficient swap space plus physical memory available.
It really depends. If the program needs to have all 8.3 GB in memory all the time (its working set), you may need a similar amount of memory installed in your computer.
Let's now assume you have 4 GB of RAM. In that case you will most probably be able to execute the program thanks to swap (a hard disk area where memory pages are swapped in and out to enlarge the virtual memory size). But even if it works, it could run really slowly (to the point of not being usable) because of thrashing.
On the other hand, if your program processes the 8.3 GB of data in smaller chunks, then not all of the data needs to be in memory at once, and you will not need that much RAM installed in your computer.
As Oli Charlesworth mentioned, you will need a 64-bit system (both the hardware and the OS) or, at least, a system with PAE capabilities if you want to install more than 4 GB of RAM.
Yes, it is possible. You need to be in a 64-bit environment and, of course, have the RAM available. You may still be unable to allocate more than 4 GB of contiguous address space in one go; it's possible you'll have to allocate it in smaller chunks.
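If the data can indeed be processed piecewise, a simple chunked-reading sketch keeps the working set at one chunk regardless of input size (the file name and chunk size are hypothetical):

    // Sketch: stream a large input file in fixed 64 MB chunks instead of
    // loading all of it, so peak memory stays at one chunk.
    #include <cstddef>
    #include <cstdio>
    #include <fstream>
    #include <vector>

    int main() {
        const std::size_t chunkSize = 64 * 1024 * 1024;        // 64 MB buffer
        std::ifstream in("huge_input.bin", std::ios::binary);  // hypothetical input
        std::vector<char> buffer(chunkSize);
        std::size_t total = 0;
        while (in) {
            in.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
            std::size_t got = static_cast<std::size_t>(in.gcount());
            if (got == 0) break;
            // ... process buffer[0 .. got) here ...
            total += got;
        }
        std::printf("processed %zu bytes\n", total);
        return 0;
    }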

allocate more than 1 GB of memory on 32-bit XP

I've run into an odd problem: my process cannot allocate more than what seems to be slightly below 1 GiB. The Windows Task Manager "Mem Usage" column shows values close to 1 GiB when my software throws a bad_alloc exception. Yes, I've checked that the value passed to the memory allocation is sensible (no race condition or corruption exists that would make this fail). Yes, I need all this memory and there is no way around it (it's a buffer for images, which cannot be compressed any further).
I'm not trying to allocate the whole 1 GiB in one go; there are a few allocations of around 300 MiB each. Would this cause problems? (I'll try to see if making more, smaller allocations works any better.) Is there some compiler switch or something else I must set in order to get past 1 GiB? I've seen others complaining about the 2 GiB limit, which would be fine for me; I just need a little bit more. I'm using VS 2005 with SP1, running on 32-bit XP, in C++.
On a 32-bit OS, a process has a 4GB address space in total.
On Windows, half of this is off-limits, so your process has 2GB.
This is 2GB of contiguous memory. But it gets fragmented. Your executable is loaded in at one address, each DLL is loaded at another address, then there's the stack, and heap allocations and so on. So while your process probably has enough free address space, there are no contiguous blocks large enough to fulfill your requests for memory. So making smaller allocations will probably solve it.
If your application is linked with the /LARGEADDRESSAWARE flag, it will be allowed to use as much of the upper 2 GB (the half normally reserved for the kernel) as Windows can spare. How much that is depends on your platform and environment:
for 32-bit code running on a 64-bit OS, you'll get a full 4 GB address space
for 32-bit code running on a 32-bit OS without the /3GB boot switch, the flag means nothing at all
for 32-bit code running on a 32-bit OS with the /3GB boot switch, you'll get 3 GB of address space
So really, setting the flag is always a good idea if your application can handle it (it's basically a capability flag: it tells Windows that the program can handle more memory, so if Windows can provide it, it should go ahead and hand out as large an address space as possible), but you probably can't rely on it having an effect. Unless you're on a 64-bit OS, it's unlikely to buy you much. (Otherwise the /3GB boot switch is necessary, and it has been known to cause problems with drivers, especially video drivers.)
Allocating big chunks of contiguous memory is always a problem.
You are much more likely to get the memory in smaller chunks.
You should redesign your memory structures.
You are right to suspect the larger 300MB allocations. Your process will be able to get close to 2GB (3 if you use the /3GB boot.ini switch and LARGEADDRESSAWARE link flag), but not as a large contiguous block.
Typical solutions for this are to break up the requests into tiles or strips of fixed size (say 256x256x4 bytes) and write an intermediate class to hide this representation detail.
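For instance, a minimal sketch of such an intermediate class (hypothetical names; fixed 1 MB strips instead of tiles, and modern C++ for brevity):

    // Sketch: an image buffer stored as fixed-size strips, so no single
    // allocation needs a large contiguous address range.
    #include <cstddef>
    #include <memory>
    #include <vector>

    class StripBuffer {
        static const std::size_t kStripSize = 1024 * 1024; // 1 MB per strip
        std::vector<std::unique_ptr<char[]>> strips_;
        std::size_t size_;
    public:
        explicit StripBuffer(std::size_t bytes) : size_(bytes) {
            std::size_t count = (bytes + kStripSize - 1) / kStripSize;
            for (std::size_t i = 0; i < count; ++i)
                strips_.emplace_back(new char[kStripSize]);
        }
        char& at(std::size_t i) { // hides the strip layout from callers
            return strips_[i / kStripSize][i % kStripSize];
        }
        std::size_t size() const { return size_; }
    };

    int main() {
        StripBuffer image(300u * 1024 * 1024); // a 300 MB buffer, in 1 MB pieces
        image.at(0) = 42;
        return 0;
    }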
You can quickly verify this by writing a small allocation loop that allocates blocks of different sizes.
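A sketch of such a probe (assuming the Windows API), halving the request until a reservation succeeds, which reveals the largest contiguous free block:

    // Sketch: find the largest contiguous block of address space that can
    // currently be reserved, by halving the request until one succeeds.
    #include <windows.h>
    #include <cstdio>

    int main() {
        SIZE_T size = 2048u * 1024 * 1024; // start at 2 GB
        while (size >= 1024 * 1024) {
            void* p = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);
            if (p) {
                std::printf("largest contiguous block: %u MB\n",
                            (unsigned)(size / (1024 * 1024)));
                VirtualFree(p, 0, MEM_RELEASE);
                return 0;
            }
            size /= 2;
        }
        std::printf("could not reserve even 1 MB\n");
        return 0;
    }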
You could also check this function from MSDN; the limits described there may be relevant:
This parameter must be greater than or equal to 13 pages (for example, 53,248 on systems with a 4K page size), and less than the system-wide maximum (number of available pages minus 512 pages). The default size is 345 pages (for example, this is 1,413,120 bytes on systems with a 4K page size).
Here they mention that the default number of pages allowed for a process is 345 pages, which is slightly more than 1 MB on systems with a 4K page size.
When I have a few big allocs like that to do, I use the Windows function VirtualAlloc, to avoid stressing the default allocator.
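A bare-bones sketch of that approach (Windows API; the 300 MB size mirrors the question's allocations):

    // Sketch: allocate one large committed buffer directly from the OS,
    // bypassing the CRT heap, and release it when done.
    #include <windows.h>
    #include <cstdio>

    int main() {
        const SIZE_T bytes = 300u * 1024 * 1024; // one 300 MB image buffer
        void* buf = VirtualAlloc(nullptr, bytes,
                                 MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!buf) {
            std::printf("VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }
        // ... use buf ...
        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }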
Another way forward might be to use nedmalloc in your project.