when will virtual memory be used (windows)? - c++

I am debugging a program that crashed because no contiguous memory could be found for my vector when it needed to be reallocated. So I have a question: how come virtual memory is not used? In what way can virtual memory be used? Thanks.

Virtual memory is used automatically by the OS. You don't need to care about this.
In your case, it's most likely that you are running a 32-bit application. User address space for a 32-bit process on Windows is limited to 2 GB (well, 3 GB if Windows is booted with the /3GB option). If your vector requires more than several hundred megabytes of contiguous address space, this may become a problem (due to address space fragmentation).
Of course, any process can run out of memory (even while using virtual memory and swap file and whatever else). Take a look at memory usage of your program in Task Manager.

Virtual memory is the only memory you ever get as a program running on a modern OS (Linux, Unix, Windows, macOS, Symbian, etc.).
It sounds like your problem is that there isn't one contiguous virtual address range that is large enough for your vector [1]. I suspect what is happening is that you need, say, more than 1.5 GB in a 32-bit process, which can only use 2 GB at once, so there isn't much "room" on either end to stuff other bits into before the "middle" is smaller than 1.5 GB. In particular, if you have a vector that is growing, you will need two copies of the vector: one at its current size, and one at double the size to copy into.
A simple solution, assuming you know how big the vector needs to be, is to set its size up front, e.g. vector<int> vec(some_size);
If you don't know, there are some more solutions:
If you have a 64-bit OS, you could try setting the LARGEADDRESSAWARE flag on the executable (assuming it's Windows). That should give you a fair bit more address space, since a 64-bit OS doesn't have to reserve a large chunk of the 32-bit range for the OS itself (the kernel lives well outside the 32-bit address range). On a 32-bit OS, you need to boot the OS with /3GB, and set the above flag.
Or compile the code as 64-bit (after upgrading to a 64-bit OS, if needed).
[1] Unless of course, you are writing a driver and trying to allocate many megabytes of physical memory as a buffer to use for DMA - but I think you would have said so.

The problem has nothing to do with memory, or even with virtual memory. An array needs a contiguous range of addresses. The address space (normally 2 GB in a Win32 program) is fragmented so that there is not a large enough space available.
If you could get the addresses, Windows would automatically provide the virtual memory to go with them.
It is time to move your app up to 64 bits.

Related

Is it true that a 32-bit program will run out of memory if other programs use too much, on 64-bit Windows?

I am developing a 32-bit application and got an out-of-memory error.
And I noticed that my Visual Studio and a plugin (other apps too) used a lot of memory, around 4 or 5 GB.
So I suspected that these programs use up all the memory addresses where my program would be able to find free memory.
I suppose that a 32-bit program can only use the first 4 GB; other memory it can not use at all.
I don't know if I am correct about this; otherwise I will look for other answers, like a bug in my code.
Your statement of
I suppose that a 32-bit program can only use the first 4 GB; other memory it can not use at all.
is definitely incorrect. On a 64-bit OS, all applications can use all of the memory, regardless of their bitness, thanks to the 64-bit translation table that maps virtual to physical memory.
Some really ancient hardware may not allow DMA to addresses above 4GB, but I really hope most of that is in the junk-yard by now.
If the system as a whole is running low on memory, it will affect all applications more or less equally.
However, a 32-bit application can only, by default, use the lower 2 GB of the virtual address range (although these 2 GB can be placed anywhere in physical memory, as described above, by means of the 64-bit translation table). You can extend this to nearly 4 GB (3 GB on a 32-bit OS, subject to the /3GB boot flag in this case) by using /LARGEADDRESSAWARE in your link command - this simply tells the OS that your application will "understand" addresses with the top bit set (which look negative when treated as signed), and thus will operate correctly with addresses above 2 GB.
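For reference, a sketch of how that flag is typically set and checked with the MSVC toolchain (this assumes the Visual Studio command-line tools are on the PATH; "myapp" is an illustrative name):

```shell
# Link with the flag (or set it in the project's linker options):
link /LARGEADDRESSAWARE myapp.obj

# Or patch an already-built binary:
editbin /LARGEADDRESSAWARE myapp.exe

# Verify: dumpbin shows "Application can handle large (>2GB) addresses"
dumpbin /headers myapp.exe | findstr /i "large"
```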
Any system can be brought down by a too heavy load.
But in normal use in Windows and any other virtual memory OS, the memory consumption of other programs does not much affect any given program execution.
Getting an out of memory error is unusual, but it can happen if you make a large allocation or if you declare a large local automatic variable. It can also happen if you fail to properly deallocate memory that's no longer used, i.e. if the program is leaking memory. For a 32-bit program on a 64-bit machine it's then not memory itself that's used up, but available address space within the program.

Allocate extra space to process

Can I provide extra space to the process other than what is provided by the operating system?
Can extra detachable memory be used for such purposes?
Can I provide extra space to the process other than what is provided by the operating system?
No, you can't; every piece of memory has to be requested from your OS. malloc(), new, and the other memory-allocating functions and operators resolve to system calls that request the OS to provide memory to the program.
Every process has a definite maximum memory space allocated to it, which depends on the machine architecture. On a 32-bit machine, the maximum addressable space is 2^32 bytes ~= 4 GB, so a process can typically address 4 GB of memory. But this space is divided into two parts: 1. kernel space and 2. process space. Kernel space is used for the OS, drivers, etc., while process space is where your data can be allocated. Hence the memory available to you is just the process space.
On a typical Windows XP machine, it is equally divided, i.e. 2 GB for process space (however, there are ways to modify this - for example, the /3GB option). Any allocation beyond 2 GB gives an out-of-memory error. This process space becomes larger when you move from a 32-bit application to a 64-bit application, which is one of the major incentives for moving to 64-bit.
So to answer your question, there is a maximum memory available to a process beyond which the OS denies memory allocations to the process.
There are some obscure ways. E.g. if you attached a Windows CE device to a Windows PC, the memory of that device could be accessed via the "RAPI" interface. The Windows OS wasn't aware of this device memory; it was handled via the ActiveSync service. It wasn't very quick memory, though.

Memory allocation limit in C++

I want to run this huge C++ project that uses up to 8.3 GB of memory. Can I run this program under certain circumstances, or is it impossible?
It's fine. You just need to be on a 64-bit architecture, and ensure that there's sufficient swap space plus physical memory available.
It really depends. If the program needs to have all the 8.3 GB in memory all the time (working size), you may need to have a similar amount of memory installed in your computer.
Let's now assume you have 4 GB of RAM. In that case you will most probably be able to execute the program thanks to the use of swap (a hard disk area where memory is paged in and out with the intention of enlarging the virtual memory size). But even if it actually works, it could run really slowly (up to the point of not being usable) because of thrashing.
On the other hand, if your program processes 8.3 GB of data, but it is processed in smaller chunks, that will mean that all the data is not in memory all the time. Then, you will not need to have installed such a big amount of RAM in your computer.
As Oli Charlesworth was mentioning you will need a 64-bit system (both the hardware and OS) or, at least, a system with PAE capabilities if you want to install more than 4 GB of RAM in your system.
Yes, it is possible. You need to be in a 64-bit environment and, of course, have the RAM available. You may still be unable to allocate more than 4 GB of contiguous address space at a time, so it's possible that you'll have to allocate it in smaller chunks.

64-bit library limited to 4GB?

I'm using an image manipulation library that throws an exception when I attempt to load images > 4 GB in size. It claims to be 64-bit, but wouldn't a 64-bit library allow loading images larger than that? I think they recompiled their C libraries with a 64-bit memory model/compiler but still used unsigned integers and failed to upgrade to 64-bit types.
Is that a reasonable conclusion?
Edit - As an afterthought: can OS memory become so fragmented that allocation of large chunks is no longer possible? (It doesn't work right after a reboot either, but I'm just wondering.) What about under .NET? Can .NET managed memory become so fragmented that allocation of large chunks fails?
It's a reasonable suggestion, however the exact cause could be a number of things - for example what OS are you running, how much RAM / swap do you have? The application/OS may not over-commit virtual memory so you'll need 4GB (or more) of free RAM to open the image.
Out of interest, does it seem to be a definite stop at the 4 GB boundary - i.e. does a 3.99 GB image succeed while a 4 GB one fails? You say it does, which would suggest a definite use of a 32-bit size in the library's data structures.
Update
With regard to your second question - not really. Pretty much all modern OSes use virtual memory, so each process gets its own contiguous address space. A single contiguous region in a process's address space doesn't need to be backed by contiguous physical RAM; it can be made up of a number of separate physical areas of RAM made to look like they are contiguous. So the OS doesn't need a single free 4 GB chunk of RAM to give your application a 4 GB chunk of address space.
It's possible that an application could fragment its virtual address space such that there isn't room for a contiguous 4 GB region, but considering the size of a 64-bit address space, that's highly unlikely in your scenario.
Yes, unless perhaps the binary file format itself limits the size of images.

How much memory should you be able to allocate?

Background: I am writing a C++ program working with large amounts of geodata, and wish to load large chunks to process at a single go. I am constrained to working with an app compiled for 32 bit machines. The machine I am testing on is running a 64 bit OS (Windows 7) and has 6 gig of ram. Using MS VS 2008.
I have the following code:
byte* pTempBuffer2[3];
try
{
//size_t nBufSize = nBandBytes*m_nBandCount;
pTempBuffer2[0] = new byte[nBandBytes];
pTempBuffer2[1] = new byte[nBandBytes];
pTempBuffer2[2] = new byte[nBandBytes];
}
catch (const std::bad_alloc&)
{
// If we didn't get the memory just don't buffer and we will get data one
// piece at a time.
return;
}
I was hoping that I would be able to allocate memory until the app reached the 4 gigabyte limit of 32-bit addressing. However, when nBandBytes is 466,560,000 the new throws std::bad_alloc on the second try. At this stage, the working set (memory) value for the process is 665,232 K. So I don't seem to be able to get even a gig of memory allocated.
There has been some mention of a 2 gig limit for applications in 32 bit Windows which may be extended to 3 gig with the /3GB switch for win32. This is good advice under that environment, but not relevant to this case.
How much memory should you be able to allocate under the 64 bit OS with a 32 bit application?
As much as the OS wants to give you. By default, Windows lets a 32-bit process have 2GB of address space. And this is split into several chunks. One area is set aside for the stack, others for each executable and dll that is loaded. Whatever is left can be dynamically allocated, but there's no guarantee that it'll be one big contiguous chunk. It might be several smaller chunks of a couple of hundred MB each.
If you link with the LargeAddressAware flag, 64-bit Windows will let you use the full 4GB address space, which should help a bit, but in general,
you shouldn't assume that the available memory is contiguous. You should be able to work with multiple smaller allocations rather than a few big ones, and
You should compile it as a 64-bit application if you need a lot of memory.
On 32-bit Windows, a normal process can take at most 2 GB, but with the /3GB switch it can reach 3 GB (on Windows 2003).
But in your case I think you are allocating contiguous memory, and that is why the exception occurred.
You can allocate as much memory as your page file will let you - even without the /3GB switch, you can allocate 4GB of memory without much difficulty.
Read this article for a good overview of how to think about physical memory, virtual memory, and address space (all three are different things). In a nutshell, you have exactly as much physical memory as you have RAM, but your app really has no interaction with that physical memory at all - it's just a convenient place to store the data in your virtual memory. Your virtual memory is limited by the size of your page file, and the amount your app can use is limited by how much other apps are using (although you can allocate more, provided you don't actually use it). Your address space in the 32-bit world is 4 GB. Of those, 2 GB are allocated to the kernel (or 1 GB if you use the /3GB switch). Of the 2 GB that are left, some is used up by your stack, some by the program you are currently running (and all the DLLs, etc.). It's going to get fragmented, and you are only going to be able to get so much contiguous space - this is where your allocation is failing. But since that address space is just a convenient way to access the virtual memory allocated for you, it's possible to allocate much more memory, and map chunks of it into your address space a few at a time.
Raymond Chen has an example of how to allocate 4GB of memory and map part of it into a section of your address space.
Under 32-bit Windows, the maximum total virtual memory (backed by the paging file) is 16 TB, and 256 TB under 64-bit Windows.
And if you're really into how memory management works in Windows, read this article.
During the Elephants Dream project, the Blender Foundation had similar problems with Blender 3D (though on Mac). Can't include the link, but google "blender3d memory allocation problem" and it will be the first item.
The solution involved File Mapping. Haven't tried it myself but you can read up on it here: http://msdn.microsoft.com/en-us/library/aa366556(VS.85).aspx
With nBandBytes at 466,560,000, you are trying to allocate 1.4 GB. A 32-bit app typically only has access to 2 GB of memory (more if you boot with /3GB and the executable is marked as large address space aware). You may be hard pressed to find that many blocks of contiguous address space for your large chunks of memory.
If you want to allocate gigabytes of memory on a 64-bit OS, use a 64-bit process.
You should be able to allocate a total of about 2GB per process. This article (PDF) explains the details. However, you probably won't be able to get a single, contiguous block that is even close to that large.
Even if you allocate in smaller chunks, you might not be able to get the memory you need, especially if the surrounding program has unpredictable memory behavior, or if you need to run on different operating systems. In my experience, the usable heap space in a 32-bit process caps out at around 1.2 GB.
At this amount of memory, I would recommend manually writing to disk. Wrap your arrays in a class that manages the memory and writes to temporary files when necessary. Hopefully the characteristics of your program are such that you could effectively cache parts of that data without hitting the disk too much.
Sysinternals VMMap is great for investigating virtual address space fragmentation, which is probably limiting how much contiguous memory you can allocate. I recommend setting it to display free space, then sorting by size to find the largest free areas, then sorting by address to see what is separating the largest free areas (probably rebased DLLs, shared memory regions, or other heaps).
Avoiding extremely large contiguous allocations is probably for the best, as others have suggested.
Setting LARGE_ADDRESS_AWARE=YES (as jalf suggested) is good, as long as the libraries that your application depends on are compatible with it. If you do so, you should test your code with the AllocationPreference registry key set to enable top-down virtual address allocation.