-Xms100G -Xmx100G
I have a requirement to create an in-memory database. The application is a Java application. I wanted to check whether there are any other JVM arguments that need to be set before the application can be allocated a min and max heap size greater than 100G.
Based on the response from #harold, I am editing the link so it becomes useful for others reading this post.
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html#sthref30
You can do it like that, but bear in mind that allocating 100G at launch will make the application start more slowly, as the JVM will attempt to allocate the full memory block at once.
Consider using a smaller value for the minimum heap size (-Xms) so launch times are not impacted; the VM will grow the heap by itself as required, up to the maximum (-Xmx).
I'm currently looking into memory consumption issues of a C++ application that I have written (a rendering engine using OpenGL) and have stumbled upon a rather unusual problem:
I'm using my own allocators basically everywhere in the system, which all obtain their memory from a default allocator which is using malloc()/free() for the actual memory.
It turns out that my application is always reserving at least 4096 bytes (the page size on my system) for every allocation through malloc(), even if the size is significantly smaller.
malloc(8) or even malloc(1) both result in an increase of memory of 4096 bytes. I'm tracking the used memory size through GetProcessMemoryInfo() directly before and after the allocation, as well as through the TaskManager (which basically shows the same values). Interestingly, using _msize(ptr) returns the correct size of the pointer.
I can only reproduce this behaviour within my own application; testing it with a new VS2012 C++ project did not yield the same results. The behaviour also seems independent of the application's current reserved size: even with more than 10GB of free RAM it always reserves at least 4K per allocation.
I have no deep knowledge of the innards of the Windows operating system (if this is at all related to the OS), so if anyone has an idea what could cause this behaviour I would be grateful!
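A minimal sketch of this kind of measurement (Windows/MSVC assumed; the helper name and the 1-byte test allocation are illustrative, not taken from the actual application):

    #include <windows.h>
    #include <psapi.h>    // GetProcessMemoryInfo; link with Psapi.lib
    #include <malloc.h>   // _msize
    #include <cstdio>

    // Commit charge currently attributed to this process.
    static SIZE_T privateBytes()
    {
        PROCESS_MEMORY_COUNTERS_EX pmc = {};
        GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc));
        return pmc.PrivateUsage;
    }

    int main()
    {
        SIZE_T before = privateBytes();
        void*  p      = malloc(1);                 // tiny request
        SIZE_T after  = privateBytes();

        printf("usable size (_msize): %zu bytes\n", _msize(p));
        printf("process delta       : %zu bytes\n",
               static_cast<size_t>(after - before));
        free(p);
        return 0;
    }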
Check this, it's from 1993 :-)
http://msdn.microsoft.com/en-us/library/ms810603.aspx
This does not mean that the smallest amount of memory that can be allocated in a heap is 4096 bytes; rather, the heap manager commits pages of memory as needed to satisfy specific allocation requests. If, for example, an application allocates 100 bytes via a call to GlobalAlloc, the heap manager allocates a 100-byte chunk of memory within its committed region for this request. If there is not enough committed memory available at the time of the request, the heap manager simply commits another page to make the memory available.
You might be running with "full page heap"... a diagnostic mode to help more quickly catch memory access errors in your code.
So I have a native C++ application, and it needs to keep track of lots of things over long periods of time. It's running out of memory when task manager says that the process reaches somewhere between 800 and 1200 MB of memory, when the limit should be about 2GB.
I finally got a clue as to what's going on when I ran VMMap against my process, but that just gave me more questions. What I discovered:
The total size (type: total, column: size) is much larger than what task manager/process explorer were reporting
The total size seems to actually be the value that can't exceed 2GB before my program runs out of memory
The memory usage discrepancy is almost entirely caused by "Private Data" - there is much more "Size" than there is "Committed". I have seen cases where there were around 800MB of committed private data, but a "Size" of around 1700MB.
The largest blocks of "Private Data" mainly consist of a pattern of pairs of one small sub-block (between 4K and 16K, generally) that has "Read/Write" protection and is fully committed, and one larger sub-block (between 90K and 400K) that has the "Reserved" protection and is not committed. This seems like a huge waste of resources. And there's usually one large (many megabytes) sub-block at the end that is "Reserved" and not committed.
The small part of the pair generally has strings that I recognize, while the larger block has no strings at all.
An example of these sub-block pairs: (not my application, but the idea is the same)
http://www.flickr.com/photos/95123032#N00/5280550393
It seems as though when one block of private data gets fully committed, a new block (usually the same size as or double the size of the previous largest block) gets allocated. Sounds fair. However, I have seen 3 blocks, all more than 100MB each, with less than 30MB committed. My application shouldn't behave in a way (i.e. use up 400MB and then shrink by 300MB in a matter of a few hours) that would make that possible.
As far as I can tell, the "Size" is the actual amount of virtual memory address space that has been allocated. "Committed" is the amount of "Size" that is actually being used (i.e. through calls to new/malloc). If that is indeed the case, then why is there such a huge discrepancy between Size and Committed? And why is it allocating blocks that are multiple hundreds of megabytes in size?
The somewhat strange thing is that the behavior is entirely different when running on Windows 7. Whereas on 2003 Server, the application uses Private Data, on Windows 7, the application uses Heap. So...why? Why does VMMap show primarily private data usage on 2003, but primarily heap usage on 7? What's the difference? Well one difference is that I can't use the "Heap Allocations..." button in VMMap to see where all of that Private Data is being allocated.
I was beginning to wonder if excessive use of std::string was causing this problem since the strings that I recognized in the pairs (mentioned above) primarily consisted of strings stored in std::string that were frequently being created and destroyed (implying lots of memory allocation/deallocation). I converted all I could to use character arrays or using memory from a memory pool, but that seems to have had no effect. All of my other objects that are new/deleted frequently already have their own memory pools.
I also found out about the low-fragmentation heap, so I tried enabling that, but it also didn't make a difference. I'm thinking it's because Windows 2003 is not actually using the heap proper. VMMap shows that the low-fragmentation heap is enabled, but since it's not actually used (i.e. it's using Private Data instead), it doesn't actually make a difference.
What actually seems to be happening is that those sub-block pairs are fragmenting the large Private Data blocks, which is causing the OS to allocate new blocks. Eventually, the fragmentation gets so bad that even though there's lots of uncommitted space, none of it seems to be usable and the process runs out of memory.
So my questions are:
Why is Windows Server 2003 using Private Data instead of Heap? Does it matter?
Is there a way to make Windows Server 2003 use Heap memory instead?
If so, would that improve my situation at all?
Is there any way to control how Private Data is allocated by the OS's memory allocator?
Is it possible to create my own custom heap and allocate off of that (without changing the majority of my codebase), and could that improve my situation? I know it's possible to make custom heaps, but as far as I can tell, you need to explicitly allocate from the custom heap instead of just calling new or using STL containers normally. (One possible approach is sketched after this list.)
Is there anything I'm missing or would be worth trying?
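On the custom-heap question above, a hedged sketch of one approach: create a growable private heap with HeapCreate and route the global operator new/delete through it, so existing code and STL containers keep working unchanged. The names and details are illustrative; the overrides only affect modules compiled and linked with them, and operator new[]/operator delete[] would need the same treatment.

    #include <windows.h>
    #include <new>

    namespace {
        // Growable private heap, created lazily on first use.
        HANDLE privateHeap()
        {
            static HANDLE h = HeapCreate(0 /*flags*/, 0 /*initial size*/, 0 /*max size: growable*/);
            return h;
        }
    }

    void* operator new(std::size_t size)
    {
        if (void* p = HeapAlloc(privateHeap(), 0, size ? size : 1))
            return p;
        throw std::bad_alloc();
    }

    void operator delete(void* p) noexcept
    {
        if (p)
            HeapFree(privateHeap(), 0, p);
    }

Whether this actually helps depends on why the process ends up in Private Data rather than Heap in the first place, which the answers below discuss.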
Private data is just a classification for all the memory that is not shared between two or more processes. Heap, relocated DLL pages, stacks of all the threads in a process, unshared memory-mapped files, etc. fall into the category of private data.
A request for memory from a process (via VirtualAlloc) will be failed by the OS when one of the following conditions is true:
Contiguous virtual address space (not memory) is not available to hold the size requested.
The commit charge - the total committed memory of all processes and the operating system - has reached its upper limit (that being RAM + page file size).
Apart from this, heap allocations may fail for their own reasons: during expansion, the heap will typically try to acquire more memory than the size of the allocation request that triggered the expansion, and if that larger acquisition fails the allocation may simply fail, even though the originally requested size might still be available through VirtualAlloc.
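A rough way to see whether the first condition (no contiguous free address range) is what you are hitting is to walk the address space with VirtualQuery and report the largest free region. A hedged sketch; the function name is illustrative:

    #include <windows.h>
    #include <cstdio>

    // Largest contiguous run of free (unreserved) address space in this process.
    static SIZE_T largestFreeRegion()
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        SIZE_T largest = 0;
        const BYTE* addr = static_cast<const BYTE*>(si.lpMinimumApplicationAddress);
        const BYTE* end  = static_cast<const BYTE*>(si.lpMaximumApplicationAddress);

        MEMORY_BASIC_INFORMATION mbi;
        while (addr < end && VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi))
        {
            if (mbi.State == MEM_FREE && mbi.RegionSize > largest)
                largest = mbi.RegionSize;
            addr = static_cast<const BYTE*>(mbi.BaseAddress) + mbi.RegionSize;
        }
        return largest;
    }

    int main()
    {
        printf("largest free contiguous region: %zu bytes\n",
               static_cast<size_t>(largestFreeRegion()));
        return 0;
    }

If the largest free region is small even though plenty of address space is uncommitted overall, address-space fragmentation is the likely culprit.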
A few things that tend to accumulate memory are:
Having many heaps - they hog memory because each keeps extra space in reserve, so many heaps means a lot of reserved space probably going unused. Heap compaction might help.
STL containers like vector and map might not shrink after elements are removed from them. Compacting them might help too (see the sketch after this list).
Libraries like COM do some caching and thus accumulate memory - it might help to investigate individual libraries to learn about their memory-hogging habits.
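A hedged sketch of the first two suggestions (compacting the process heaps and shrinking STL containers); the container type and the fixed heap-handle array size are illustrative:

    #include <windows.h>
    #include <vector>

    void releaseUnusedMemory(std::vector<char>& buffer)
    {
        // C++11: ask the vector to drop its spare capacity (non-binding request).
        buffer.shrink_to_fit();
        // Pre-C++11 "swap trick" equivalent:
        // std::vector<char>(buffer).swap(buffer);

        // Ask each heap in the process to coalesce free blocks and decommit
        // whatever it can.
        HANDLE heaps[64];
        DWORD  count = GetProcessHeaps(64, heaps);
        for (DWORD i = 0; i < count && i < 64; ++i)
            HeapCompact(heaps[i], 0);
    }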
when task manager says that the process reaches somewhere between 800 and 1200 MB of memory, when the limit should be about 2GB
Probably you are looking at "Working Set" in Task Manager whereas the 2GB limit is on Virtual Memory. Task Manager doesn't show the amount of VM reserved; it will show the amount committed.
"Committed" is the amount of "Size" that is actually being used (i.e. through calls to new/malloc).
Not exactly. "Size" is reserved virtual address space; "Committed" is the part of that space that has actually been committed (i.e. backed by the commit charge - RAM or page file). The heap manager reserves large regions up front and commits pages within them only as allocations need them, which is why Size can be much larger than Committed.
1. Why is Windows Server 2003 using Private Data instead of Heap?
According to "Windows Sysinternals Administrator's Reference" by Mark Russinovich and Aaron Margosis:
Private Data memory is memory that is allocated by VirtualAlloc and
that is not further handled by the Heap Manager or by the .Net runtime
So either your program is managing its memory differently on the two OS's, or VMmap is unable to detect the way in which this memory is being managed as a heap on Windows Server 2003.
4. Is there anything I'm missing or would be worth trying?
You can run with a 3GB limit on a 32-bit OS and a 4GB limit for 32-bit processes on a 64-bit OS, provided the executable is linked as large-address-aware. Google for the "/3GB" boot option and the "/LARGEADDRESSAWARE" linker flag.
A great source of information on this kind of stuff is the book "Windows Internals 6th Edition" by Mark Russinovich, David Solomon and Alex Ionescu.
I'm encountering the same issue.
In Windows 2003, my application causes an out-of-memory exception in a C++/CLI module when trying to allocate a 22MB array using gcnew. The same process works fine in Windows 7.
VMMap shows the "Private Data" entry is almost 2GB in Win2003. After I enable the /3GB flag, this entry also increases to almost 3GB. The "Heap" entry is about 14MB and the "Managed Heap" is nothing!
In Windows 7, the "Private Data" is only 62MB, the "Heap" is 316MB and the "Managed Heap" is 397MB. Overall memory usage is much lower than on Win2003.
I want to find a memory leak (on Windows 7) in a C++ program by observing the Windows Task Manager Processes tab for a gradual increase in memory.
I am confused, as I see a lot of memory-related columns in Task Manager's Processes tab, which I have listed below:
Memory - Working Set
Memory - Working Set Delta
Memory - Private Working Set
Memory - Commit Size
Memory - Paged Pool
Memory - Non-paged Pool
I have searched for related topics on the web, but I couldn't get a satisfactory answer.
Please let me know which indicator I should use to check for an increase in memory so that I can decide whether my C++ code/process has a memory leak.
FYI
My limitation is that I cannot use any profiling tool or static code analyzer; I only have access to Windows Task Manager on the system to find the memory leak.
As other posters have said, a small, slowly increasing memory footprint does not necessarily indicate a problem.
However, if you have a long running process that slowly eats up vastly more memory than theoretically should be required (or has been measured in a healthy version of your component under similar usage scenarios) then you likely have a memory leak. I have first noticed problems in components by others reporting gigabyte usage of memory by the component (which normally uses about 2-3MB). Perfmon is useful if you want to see a long term view of your process memory. You can select the process by name, and then select the private bytes measure, and set up the timing and the grid to measure over (say) 24hrs.
Once you are sure that there is a definite increase in process memory, you can use tools like your debugger, Valgrind, Parasoft, GlowCode, etc. to make sure what you are seeing is a real memory leak. However, even if it is not a real memory leak (unreferenced heap memory), you still need to redesign your component if its memory usage increases without end.
The short answer: It's not possible.
With only Task Manager to look at, there just isn't enough data available. A memory leak typically is memory that is still allocated but isn't used anymore; to Task Manager, however, it looks as if the process were still using that memory (and it has no way of finding out). You might note a continuous increase in memory usage, but that's only an indicator that there might be memory leaks - it could also be that the program really uses that memory (or holds on to that memory for future use, e.g. because it uses its own memory management). Without additional tools, you cannot know.
To confirm your suspicion about the leaking part, you can take Perfmon memory analysis as an example:
Private Bytes are a reasonable approximation of the amount of memory your executable is using and can be used to help narrow down a list of potential candidates for a memory leak; if you see the number growing and growing constantly and endlessly, you would want to check that process for a leak. This cannot, however, prove that there is or is not a leak.
See this answer for details: What is private bytes, virtual bytes, working set?
We have an application that could potentially allocate a large number of small objects (depending on user input). Sometimes the application runs out of memory and effectively crashes.
However, if we knew that memory allocations were becoming tight there are some lower-priority objects which could be destroyed and thereby allow us to gracefully degrade the user results.
What's the best way to detect that memory for a process is running low before calls to 'new' actually fail? We could call API functions like GetProcessWorkingSetSize() or GetProcessMemoryInfo() but how do you know when the limits on a given machine are being reached (e.g. with 80% of maximum allocations)?
At startup, allocate a memory reserve.
Then use set_new_handler() to install a hook that will detect allocation failures.
When one happens:
Free the reserve (so you have enough free memory to work with).
Run your code that finds and frees low priority objects.
When that has done its job, try to reallocate the reserve again (for next time).
Finally return to let the original allocation attempt retry.
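A hedged sketch of that pattern (the reserve size and the low-priority cleanup hook are illustrative placeholders, not part of any particular codebase):

    #include <cstddef>
    #include <new>

    namespace {
        char* g_reserve = nullptr;
        const std::size_t kReserveSize = 16 * 1024 * 1024;   // 16MB cushion (illustrative)

        // Application-specific: returns true if it managed to free anything.
        // Placeholder here; the real version would drop low-priority objects.
        bool freeLowPriorityObjects() { return false; }

        void outOfMemoryHandler()
        {
            // 1. Free the reserve so the handler itself has room to work.
            delete[] g_reserve;
            g_reserve = nullptr;

            // 2. Find and free low-priority objects to degrade gracefully.
            const bool freedSomething = freeLowPriorityObjects();

            // 3. Try to rebuild the reserve for the next emergency.
            g_reserve = new (std::nothrow) char[kReserveSize];

            // 4. Returning from a new-handler makes operator new retry the
            //    original allocation; if nothing could be freed, give up
            //    instead of retrying forever.
            if (!freedSomething)
                throw std::bad_alloc();
        }
    }

    int main()
    {
        g_reserve = new char[kReserveSize];       // allocate the reserve up front
        std::set_new_handler(outOfMemoryHandler); // install the hook
        // ... run the application ...
    }

Throwing std::bad_alloc when nothing could be freed keeps the handler from being re-entered forever for an allocation that can never succeed.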
If it's a 32-bit process then you would want to ensure that you don't use more than 1.6GB, which is 80% of 2.0GB, the max allowed for your process. Calling GlobalMemoryStatusEx will fill in MEMORYSTATUSEX.ullAvailVirtual; when only 400MB (or less) is available, you are at your threshold.
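A hedged sketch of that check (the 400MB threshold and function name are illustrative):

    #include <windows.h>

    // True when the process's remaining virtual address space drops below
    // the chosen threshold (about 20% of the 2GB limit for a 32-bit process).
    bool virtualAddressSpaceIsLow()
    {
        MEMORYSTATUSEX status = {};
        status.dwLength = sizeof(status);
        if (!GlobalMemoryStatusEx(&status))
            return false;   // assume OK if the query itself fails

        const DWORDLONG threshold = 400ull * 1024 * 1024;   // 400MB
        return status.ullAvailVirtual < threshold;
    }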
Check this answer Win32/MFC: How to find free memory (RAM) available?.
You need to periodically find available free memory and stop allocating at some limit.
As explained in the answer referred to above, you can use GlobalMemoryStatusEx and/or VirtualQueryEx.
BEA recommends keeping the min and max heap sizes the same. They didn't elaborate on the reason for the suggestion. Can someone provide details?
I also got a different recommendation, from an architect, to not set anything for the minimum and just set the maximum. Any comments on this? If I don't set it, what is the default?
What is the best tool to monitor and tune JVM settings? I am using JDK 1.6 on BEA WebLogic 10g, on a 32-bit Linux JVM.
Is a max heap size of 2GB any good? The server has lots of RAM. Currently it is set to 1.5GB, and usage is at 80% with 40 concurrent users.
Thanks,
When the JVM needs to increase the size of the heap, it will invoke a full garbage collection, which may reduce throughput or cause a pause, so I would think this is recommended by them for performance reasons.
The default value is documented as 2MB, so if you don't override it you are likely to get a lot of (probably very quick) full collections after startup as the heap is repeatedly resized.
Unless you are trying to keep the memory footprint as small as possible, I would follow BEA's advice.
Impossible to say from the information given whether 2GB is appropriate, or whether the objects that are using up space in it are still in scope - the old generation will just gradually fill up until it runs out of space, at which point a full collection will be invoked. Where does that 80% figure come from?
Use the following JVM arguments to log GC details to a file called gc.log:
-verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log
Then you can analyse this using something like http://www.tagtraum.com/gcviewer.html