Is there a way on Win32 systems to programmatically get the full size of the OS's addressable memory space, using the Win32 API (or any accessible DLL that would be installed on a >=XP system)? I know about GetPerformanceInfo and GlobalMemoryStatusEx, but the former only seems to deal with physical memory, and the latter pertains to memory addressable by my program, not the OS; since my program must be x86 and might be run on an x64 system, there is no guarantee this will even be in the ballpark.
Note: I'd prefer, but don't need, an exact size. I just need a "really good guess."
GetPhysicallyInstalledSystemMemory can get the physical limit.
GetNativeSystemInfo fills a SYSTEM_INFO structure whose lpMaximumApplicationAddress member is the highest user virtual address the system makes available to applications.
Do either of those satisfy your requirement?
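For reference, here is a minimal sketch combining both calls. It assumes a Vista SP1 or later target (GetPhysicallyInstalledSystemMemory does not exist on XP), so treat it as an illustration rather than a drop-in answer to the >=XP requirement above:

    #define _WIN32_WINNT 0x0601   // make sure the declaration is visible on older SDKs
    #include <windows.h>
    #include <iostream>

    int main() {
        // Physical RAM as reported by the firmware (SMBIOS), in kilobytes.
        ULONGLONG physKB = 0;
        if (GetPhysicallyInstalledSystemMemory(&physKB))
            std::cout << "Installed RAM: " << physKB << " KB\n";

        // Highest user-mode virtual address reported to this process.
        SYSTEM_INFO si;
        GetNativeSystemInfo(&si);
        std::cout << "Highest application address: "
                  << si.lpMaximumApplicationAddress << "\n";
        return 0;
    }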
Related
I wondered if C or C++ has a way to find where the operating system lives in RAM and free that region. I know that I can use free() to free up memory. I wonder if I can shut down my computer by freeing my operating system's RAM space.
Before protected memory was a thing you could just access any bit of memory using its physical address and manipulate it. This was how DOS and DOS-based Windows (pre Windows 95, like 3.1) worked.
Protected memory, or virtualized memory, means you can do things like swap out parts of memory to disk, in effect pretending to have more memory than the computer physically has. Chunks of memory can be swapped around as necessary, paged in and paged out, with the running program being none the wiser. These addresses are all virtual, or "fake", in that they don't physically exist, but as far as the CPU is concerned they are real and work exactly as you'd expect, something accomplished by the Memory Management Unit (MMU) integrated into the CPU.
With protected memory your "user space" program no longer sees physical memory addresses, but instead virtual addresses that the operating system itself manages. On Intel-type systems the kernel, the core of the operating system, runs within a special protection ring that prevents user programs from directly accessing or manipulating its memory.
Any multi-user system must implement this kind of memory and kernel protection or there would be no way to prevent one user from accessing the memory of another user's processes.
Within the kernel there is no "malloc" or "free" in the conventional sense; the kernel has its own special allocation mechanisms. These are completely separate from the traditional malloc() and free() functions in the C standard library and are not in any way inter-compatible. Each kernel, be it Linux or BSD or Windows or otherwise, does this in a different way, even if they can all support user-space code that uses the exact same malloc() function.
There should be no way that you can, through simple memory allocation calls, crash the system. If you can, congratulations, you've found an exploit and should document it and forward it to the appropriate parties for further analysis. Keep in mind this kind of thing is heavily researched so the likelihood of you discovering one by chance is very low. Competitions like pwn2own show just how much work is involved in bypassing all this security.
It's also important to remember that the operating system does not necessarily live in a fixed location. Address Space Layout Randomization is a technique to scramble the addresses of various functions and data to ensure that an exploit can't use hard-coded values. Before this was common you could predict where various things would live in memory and do blind manipulation through a tiny bug, but that's made much harder now as you must not only find an exploit to manipulate, but another to discover the address in the first place.
All that being said, there's nothing special about C or C++ in terms of "power" that makes it able to do things no other language can do. Any program that is able to bind against the operating system functions has the same equivalent "power" in terms of control. This includes Python, Perl, Ruby, Node.js, C# and a long, long list of others that can bind to C libraries and make arbitrary function calls.
People prototype "exploits" in whatever language is the most convenient, and often that's Perl or Python as often as C. It really depends on what you're trying to accomplish. Some bugs, once discovered, are so easy to reproduce you could do it with something as mundane as browser JavaScript, as was the case with Row Hammer.
You mention free() as a means to free memory, which is correct but oversimplified. Its counterparts malloc() and calloc() merely translate to a system call which requests a chunk of memory from the operating system. When you call free(), you relinquish ownership of the memory you asked for and return it to the operating system.
Your C/C++ program runs in a virtual address space which the operating system's memory management subsystem maps to actual RAM addresses. No matter what address you access, it can never be out of this virtual address space which is entirely under the control of the operating system.
On modern operating systems, a user application can never access the operating system's memory. All memory it uses is granted to it by the operating system. The OS acts as a bridge/abstraction between your user applications and the hardware; that's its whole purpose: to prevent direct interaction with the hardware, in your case, RAM.
RAM was once upon a time directly accessible before the advent of virtual memory. It was exactly due to this vulnerability, along with the need to run programs larger than the system memory, that virtual memory was introduced.
The only way you can mess with the operating system from user space is to make system calls with malicious arguments.
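To make the isolation concrete, here is a small illustrative sketch (the literal address is arbitrary and hypothetical). Writing to an unmapped address only crashes the offending process with an access violation / segmentation fault; the kernel and other processes are untouched:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // An arbitrary address that is almost certainly not mapped into
        // this process's virtual address space.
        volatile int *p = reinterpret_cast<volatile int *>(0xDEADBEEF);
        (void)p;

        // Dereferencing it would raise an access violation in THIS process
        // only; it cannot touch the kernel or other processes.
        // (Left commented out so the sketch runs harmlessly.)
        // *p = 42;

        int *q = static_cast<int *>(std::malloc(sizeof(int)));
        if (q) {
            *q = 42;       // fine: the OS mapped this page for us
            std::free(q);  // returns the block to the allocator, not "the OS's RAM"
        }
        return 0;
    }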
I am developing a 32-bit application and got an out-of-memory error.
And I noticed that my Visual Studio and a plugin (other apps too) used a lot of memory, around 4 or 5 GB.
So I suspected that these programs use up all the memory addresses where my program would be able to find free memory.
I suppose that a 32-bit program can only use the first 4 GB; other memory it cannot use at all.
I don't know if I am correct about this; otherwise I will look for other answers, like a bug in my code.
Your statement of
I suppose that a 32-bit program can only use the first 4 GB; other memory
it cannot use at all.
is definitely incorrect. In a 64-bit OS, all applications can use all of the memory, regardless of their bitness, thanks to the translation table that maps virtual to physical addresses being 64-bit.
Some really ancient hardware may not allow DMA to addresses above 4GB, but I really hope most of that is in the junk-yard by now.
If the system as a whole is running low on memory, it will affect all applications more or less equally.
However, a 32-bit application can only, by default, use the lower 2GB of the virtual address range (although these 2GB can be placed anywhere in the physical memory, as described above by means of a 64-bit translation table). You can extend this to nearly 4GB (3GB in a 32-bit OS, and subject to the /3GB boot flag in this case) by using /LARGEADDRESSAWARE in your linking command - this simply tells the OS that your application will "understand" that addresses can be negative, and thus will operate correctly with addresses over 2GB.
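To see which limit applies to your build, here is a minimal sketch using GlobalMemoryStatusEx, which reports the total and remaining virtual address space of the calling process (roughly 2 GB for a default 32-bit build, close to 4 GB for a /LARGEADDRESSAWARE 32-bit build on 64-bit Windows):

    #include <windows.h>
    #include <iostream>

    int main() {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (GlobalMemoryStatusEx(&ms)) {
            // Total and currently free virtual address space for this process.
            std::cout << "Total virtual address space: "
                      << ms.ullTotalVirtual / (1024 * 1024) << " MB\n"
                      << "Free virtual address space:  "
                      << ms.ullAvailVirtual / (1024 * 1024) << " MB\n";
        }
        return 0;
    }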
Any system can be brought down by a too heavy load.
But in normal use in Windows and any other virtual memory OS, the memory consumption of other programs does not much affect any given program execution.
Getting an out of memory error is unusual, but it can happen if you make a large allocation or if you declare a large local automatic variable. It can also happen if you fail to properly deallocate memory that's no longer used, i.e. if the program is leaking memory. For a 32-bit program on a 64-bit machine it's then not memory itself that's used up, but available address space within the program.
In C++ how would I check how much available RAM I have?
I am on Windows, but I would be interested in Unix answers as well.
Windows: GlobalMemoryStatusEx. The MSDN page has detailed C sample code.
Linux: check the "/proc/meminfo" file (discussion); a short sketch follows this list.
OSX: see this SO thread Determine physical mem size programmatically on OSX
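For the Linux case, a minimal sketch that reads the MemAvailable line from /proc/meminfo (present on reasonably recent kernels; the value is in kilobytes):

    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        std::ifstream meminfo("/proc/meminfo");
        std::string line;
        while (std::getline(meminfo, line)) {
            // Lines look like "MemAvailable:    1234567 kB".
            if (line.compare(0, 13, "MemAvailable:") == 0) {
                long long kb = 0;
                std::istringstream iss(line.substr(13));
                iss >> kb;
                std::cout << "Available RAM: " << kb << " kB\n";
                break;
            }
        }
        return 0;
    }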
The question is not clear, however. There is physical memory, there is virtual memory, there is an OS ability to swap some unused pages to disk/other storage.
If you need to write some kind of a system monitor, then my answer would do.
If you need to be sure that none of your malloc()/new[] calls fail, then just catch appropriate exceptions or handle NULL results. The other option is to build your own allocator which gets a large memory block at the beginning and allocates smaller blocks there.
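For the first option, a minimal sketch of both failure-handling styles (the allocation size is arbitrary, chosen only to make failure plausible on a 32-bit build):

    #include <new>
    #include <cstddef>
    #include <cstdio>

    int main() {
        const std::size_t big = 512u * 1024 * 1024;  // 512 MB, arbitrary

        // Style 1: nothrow new returns nullptr on failure.
        char *a = new (std::nothrow) char[big];
        if (!a) std::puts("allocation failed (nullptr)");

        // Style 2: plain new throws std::bad_alloc on failure.
        try {
            char *b = new char[big];
            delete[] b;
        } catch (const std::bad_alloc &) {
            std::puts("allocation failed (bad_alloc)");
        }

        delete[] a;  // deleting nullptr is a no-op
        return 0;
    }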
EDIT: answer to comment
The calls to WinAPI's MapViewOfFile and CreateFileMapping provide error codes to exclude fatal situations. Since files are mapped into the virtual address space shared with your process' data, you may check whether a sufficient number of pages is available. I.e., if you're on a 32-bit system, you won't be able to map a whole 8 GB file to memory at once (but you can map its smaller parts), while on a 64-bit system the mapping possibilities are sufficient for any current needs.
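As an illustration of checking those error codes, a minimal sketch (huge.dat is a placeholder file name):

    #include <windows.h>
    #include <iostream>

    int main() {
        HANDLE file = CreateFileW(L"huge.dat", GENERIC_READ, FILE_SHARE_READ,
                                  nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                                  nullptr);
        if (file == INVALID_HANDLE_VALUE) {
            std::cerr << "CreateFile failed: " << GetLastError() << "\n";
            return 1;
        }

        HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
        if (!mapping) {
            std::cerr << "CreateFileMapping failed: " << GetLastError() << "\n";
            CloseHandle(file);
            return 1;
        }

        // Mapping the whole file fails in a 32-bit process if no contiguous
        // address range of that size is free; mapping smaller views still works.
        void *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (!view)
            std::cerr << "MapViewOfFile failed: " << GetLastError() << "\n";
        else
            UnmapViewOfFile(view);

        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }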
I would like to map a file into memory using the mmap function and would like to know if the amount of virtual memory on the current platform is sufficient to map a huge file. On a 32-bit system I cannot map a file larger than 4 GB.
Would std::numeric_limits<size_t>::max() give me the amount of addressable memory or is there any other type that I should test (off_t or something else)?
As Lie Ryan has pointed out in his comment, "virtual memory" is misused here. The question, however, holds: there is a type associated with a pointer, and it has a maximum value that defines the upper limit of what you can possibly address on your system. What is this type? Is it size_t or perhaps ptrdiff_t?
size_t is only required to be big enough to store the biggest possible single contiguous object. That may not be the same as the size of the address space (on systems with a segmented memory model, for example).
However, on common platforms with a flat memory space, the two are equal, and so you can get away with using size_t in practice if you know the target CPU.
Anyway, this doesn't really tell you anything useful. Sure, a 32-bit CPU has a 4GB memory space, and so size_t is a 32-bit unsigned integer. But that says nothing about how much you can allocate. Some part of the memory space is used by the OS. And some parts are already used by your own application: for mapping the executable into memory (as well as any dynamic libraries it may use), for each thread's stack, allocated memory on the heap and so on.
So no, tricks such as taking the size of size_t tell you a little bit about the address space you're running in, but nothing very usable. You can ask the OS how much memory is in use by your process and other metrics, but again, that doesn't really help you much. It is possible for a process to use just a couple of megabytes, but have that spread out over so many small allocations that it's impossible to find a contiguous block of memory larger than, say, 100MB. And so, on a 32-bit machine, with a process that uses nearly no memory, you'd be unlikely to make such an allocation. (And even if the OS had a magical WhatIsTheLargestPossibleMemoryAllocationICanMake() API, that still wouldn't help you. It would tell you what you needed a moment ago; you have no guarantee that the answer would still be valid by the time you tried to map the file.)
So really, the best you can do is try to map the file, and see if it fails.
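In code that amounts to something like this minimal POSIX sketch (huge.dat is a placeholder name; note that the cast of st_size to size_t itself truncates on a 32-bit build for files over 4 GB, which is exactly the limitation discussed above):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("huge.dat", O_RDONLY);
        if (fd < 0) { std::perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { std::perror("fstat"); close(fd); return 1; }

        // Just try the mapping; ENOMEM means no contiguous address range
        // of this size is available to the process.
        size_t len = static_cast<size_t>(st.st_size);
        void *p = mmap(nullptr, len, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            std::perror("mmap");
        else
            munmap(p, len);

        close(fd);
        return 0;
    }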
Hi, you can use GlobalMemoryStatusEx and VirtualQueryEx if you are coding in Win32.
Thing is, the size of a pointer tells you nothing about how much of that "address space" is actually available to you, i.e. can be mapped as a single contiguous chunk.
It's limited by:
the operating system. It may choose to make only a subset of the theoretically-possible address range available to you, because mappable memory is needed for the OS's own purposes (like, say, making the graphics card framebuffer visible, and of course for use by the OS itself).
configurable limits. On Linux / UNIX, the "ulimit" command or the setrlimit() system call allows you to restrict the maximum size of an application's address space in various ways, and Windows has similar options through registry parameters.
the history of the application. If the application uses memory mapping extensively, the address space can fragment, limiting the maximum size of "available" contiguous virtual addresses.
the hardware platform. Some CPUs have address spaces with "holes"; an example of that is 64bit x86, where pointers are only valid if they're between 0x0..0x00007fffffffffff or 0xffff800000000000..0xffffffffffffffff. I.e. you have 2x128TB instead of the full 16EB. Think of it as 48-bit "signed" pointers ...
Finally, don't confuse "available memory" and "available address space". There's a difference between doing a malloc(someBigSize) and a mmap(..., someBigSize, ...) because the former might require availability of physical memory to accommodate the request while the latter usually only requires availability of a large-enough free address range.
For UNIX platforms, part of the answer is to use getrlimit(RLIMIT_AS) as this gives the upper bound for the current invocation of your application - as said, the user and/or admin can configure this. You're guaranteed that any attempt to mmap areas larger than that will fail.
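For example, a minimal sketch of querying that limit:

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        // Upper bound on this process's address space, as configured via
        // ulimit -v / setrlimit(); RLIM_INFINITY means no configured limit.
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                std::printf("Address-space limit: unlimited\n");
            else
                std::printf("Address-space limit: %llu bytes\n",
                            static_cast<unsigned long long>(rl.rlim_cur));
        }
        return 0;
    }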
Re your rephrased question, the "upper limit of what you can possibly address on your system" is somewhat misleading; it's hardware architecture specific. There are 64bit architectures out there (x64, sparc) whose MMU happily allows (uintptr_t)(-1) as a valid address, i.e. you can map something into the last page of a 64bit address space. Whether the operating system allows an application to do so or not is again an entirely different question ...
For user applications, the "high mark" isn't (always) fixed a priori. It's tunable on e.g. Solaris or Linux. That's where getrlimit(RLIMIT_AS) comes in.
Note that again, by specification, there'd be nothing to prevent a (weird) operating system design from choosing to put, e.g., application stacks and heaps at "low" addresses while putting code at "high" addresses, on a platform with address space holes. You'd need full 64bit pointers there, you can't make them any smaller, but there could be an arbitrary number of "inaccessible / invalid" ranges which are never made available to your app.
You can try sizeof(int*). This will give you the length (in bytes) of a pointer on the target platform. Thus, you can find out how big the addressable space is.
In C++ is there a predefined library function that will return the size of RAM currently available on a computer a program is being run on, at run-time?
For instance, if an object is 4 bytes, then can we divide the available virtual memory by 4 bytes to give an estimate of how many more objects could be stored by the program safely?
I have used the sizeof() function to return the size of objects within my program.
Seeing as this was frequently asked for in the helpful responses: the platform the program is running on is Windows (7).
Thanks
Not in the C++ Standard Library - your operating system probably provides this facility though, via a platform-specific API.
There's nothing in the C++ standard that returns the amount of free memory available. Such a function, if available at all, would be platform-specific.
First of all, the size of the RAM has nothing to do with how much free virtual memory is available in the process. It's just that your program will slow down if there is less RAM, due to frequent page faults. Also, the virtual memory will mostly be fragmented, so it makes more sense to look for things such as the largest contiguous block of free memory instead of the total free memory.
There are no built-in C++ functions to do this; you have to use OS APIs to get it. For example, on Windows you can use the Win32 API to get this information.
It's platform specific, not part of the language standard.
However, there's a Windows-specific API to get process memory information: GetProcessMemoryInfo().
Additionally, virtual addressing allows processes to allocate more than the total physical RAM.
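A minimal sketch of that call (you typically need to link against Psapi.lib):

    #include <windows.h>
    #include <psapi.h>
    #include <iostream>

    int main() {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            std::cout << "Working set:    " << pmc.WorkingSetSize << " bytes\n"
                      << "Pagefile usage: " << pmc.PagefileUsage << " bytes\n";
        }
        return 0;
    }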
In Win32 you can use
    #include <windows.h>

    MEMORYSTATUS st;
    ::GlobalMemoryStatus(&st);
    // st.dwAvailPhys    - available physical memory, in bytes
    // st.dwAvailVirtual - available virtual address space, in bytes
    // Note: on systems with more than 4 GB of memory, prefer
    // GlobalMemoryStatusEx(), which reports 64-bit values.
There is no good solution for this in Windows. When a program frees a heap block, it almost always gets added to a list of free blocks. The only way you can discover these is by walking the heap with HeapWalk(). That's expensive and very detrimental to the operation of a multi-threaded program, because you have to lock the heaps.
Also, a program almost never runs out of free virtual memory space. It first runs out of a free contiguous chunk of space that's large enough to fit the request. The sum of block sizes you get from HeapWalk is not meaningful unless you only ever make very small allocations.
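For completeness, a minimal sketch of what such a walk looks like; as noted above, it requires locking the heap, so treat it purely as a diagnostic:

    #include <windows.h>
    #include <iostream>

    int main() {
        HANDLE heap = GetProcessHeap();
        SIZE_T freeBytes = 0;

        // Locking the heap stalls every other thread that tries to allocate,
        // which is why this approach is discouraged in production code.
        HeapLock(heap);
        PROCESS_HEAP_ENTRY entry = {};
        while (HeapWalk(heap, &entry)) {
            bool busy   = (entry.wFlags & PROCESS_HEAP_ENTRY_BUSY) != 0;
            bool region = (entry.wFlags & PROCESS_HEAP_REGION) != 0;
            if (!busy && !region)
                freeBytes += entry.cbData;   // free (or uncommitted) range
        }
        HeapUnlock(heap);

        std::cout << "Sum of free block sizes: " << freeBytes << " bytes\n";
        return 0;
    }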
The most typical reason for wanting a feature like this is because your program is routinely running out of memory. There is a very effective and cheap solution available for that problem. Two hundred bucks buys you a 64-bit version of Windows.