I am running a memory-intensive C++ application and it is being killed by the kernel for excessively high memory usage. I would have thought that the OS would automatically use swap when RAM gets full. However, I don't think my swap space is being utilised.
I have read the following two questions, but I can't relate it to my problem.
"How to avoid running out of memory in high memory usage application? C / C++"
Who "Killed" my process and why?
I will be grateful if someone can give me some hints/pointers to how I may solve this problem. Thanks.
Edit: I am running my application on a 64-bit Linux machine. My RAM and swap are 6 GB and 12 GB respectively.
I suspect your process is asking for more memory than is available. In situations where you know you're going to use the memory you ask for, you need to disable memory overcommit:
echo 2 > /proc/sys/vm/overcommit_memory
and/or put
vm.overcommit_memory=2
in /etc/sysctl.conf so the setting survives reboots.
If your process asks for 32 GB of RAM on a machine with 16 GB of RAM + swap, your malloc() (or new...) calls might very well succeed, but once you try to use that memory your process is going to get killed.
Perhaps you have (virtual) memory fragmentation and are trying to allocate a large block of memory that the OS cannot find as a contiguous region?
For instance, an array would require a contiguous block, but if you create a large linked list on the heap you should be able to use non-contiguous memory.
How much memory are you trying to allocate, and how? Do you have a sufficient amount of free resources? If you debug your application, what happens at the moment the process is killed?
I have done some googling about the heap and the stack, but most answers just describe the concepts and their differences.
I am curious about other things:
As the title says, where are the heap and the stack in physical memory?
How big are they? For example, my desktop PC has 12 gigabytes of memory; how much of that is heap, and how much is stack?
Who came up with these two different concepts?
Can I manipulate the heap and stack allocation? If they each take 50% of memory (in my case, 6 gigabytes for the heap and 6 for the stack), can I resize them?
As the title says, where are the heap and the stack in physical memory?
Ever since CPUs have had MMUs to add a layer of indirection between virtual memory and physical memory, heap and stack have been anywhere in physical memory. Ever since modern Operating Systems have implemented ASLR, heap and stack have been anywhere in virtual memory, too.
How big are they? For example, my desktop PC has 12 gigabytes of memory; how much of that is heap, and how much is stack?
Both start small and grow on demand. On Unix, the maximum stack size is set by ulimit -s, and the classic heap (the data segment) is limited by ulimit -d (note that allocations made via mmap, which modern allocators use for large blocks, are not covered by that limit). You can see what limits are set by default on your Unix OS with ulimit -a.
Who came up with these two different concepts?
I would bet this goes back to at least the 1960s. Wikipedia has a reference from 1960.
Can I manipulate the heap and stack allocation? If they each take 50% of memory (in my case, 6 gigabytes for the heap and 6 for the stack), can I resize them?
As already said, they resize themselves - or, more accurately, they grow on demand within limits set by the OS and the user. See the help for ulimit if you are using Unix and bash.
1. They can be anywhere - even outside physical memory, because from the application's point of view there is no such thing. Everything in user land uses virtual memory, which can be mapped to RAM or to a swap area on the HDD. No certain assumptions here, sorry.
2. They both grow dynamically; the difference lies in speed and size limits:
Heap is usually considered slower. It is allocated on demand, depending on application requirements, and can be as large as the amount of RAM or even larger (paging).
Stack is much faster, because it is "allocated" by a simple move of the stack pointer. It usually has a size limit; in C++ this limit is set at link time or at run time (ulimit -s on Linux, the /STACK:reserve[,commit] linker option on MSVC).
Stack is usually much smaller and can easily be overflowed (that's what we call a stack overflow). For example, in C++ you most likely won't be able to do this:
int main()
{
    int large_array[1000000]; // ~4 MB on the stack (assuming 4-byte int)
    return 0;
}
Because the array needs roughly 4 MB of stack space (assuming a 4-byte int), which exceeds the default stack size limit on many systems (for example, 1 MB with MSVC).
While this is perfectly fine:
int main()
{
    int* large_array = new int[1000000]; // allocated from the heap
    delete[] large_array;                // remember to free it
    return 0;
}
3. Some really smart people.
4. Read carefully points 1-3 and you will know the answer.
The program I am working on at the moment processes a large amount of data (>32 GB). Due to "pipelining", however, a maximum of around 600 MB is present in main memory at any given time (I checked; that works as planned).
When the program has finished, however, and I switch back to a workspace with Firefox open, for example (but also other programs), it takes a while until I can use it again (the HDD is also highly active for a while). This makes me wonder whether Linux (the operating system I use) swaps out other programs while my program is running, and if so, why?
I have 4 GB of RAM installed on my machine, and while my program is active it never goes above 2 GB of utilization.
My program only allocates/deallocates dynamic memory in two different sizes: 32 MB and 64 MB chunks. It is written in C++ and I use new and delete. Should Linux not be smart enough to reuse these blocks once I have freed them, and leave my other memory untouched?
Why does Linux kick my stuff out of memory?
Is this some other effect I have not considered?
Can I work around this problem without writing a custom memory management system?
The most likely culprit is file caching. The good news is that you can disable file caching for your own I/O. Without caching, your software will run more quickly, but only if you don't need to reload the same data later.
You can do this directly with the Linux APIs, but I suggest you use a library such as Boost.Asio. If your software is I/O bound, you should additionally make use of asynchronous I/O to improve performance.
All the recently used pages from your program's data are causing older pages to get squeezed out of the disk cache. As a result, when some other program runs, its pages have to be paged back in.
What you want to do is use posix_fadvise (or posix_madvise if you're memory mapping the file) to eject pages you've forced the OS to cache so that your program doesn't have a huge cache footprint. This will let older pages from other programs remain in cache.
I'm using Ubuntu 32-bit.
- My app needs to store incoming data in RAM (because I need to do a lot of searches on the incoming data and calculate something).
- I need to keep the data for X seconds, so I need to allocate 12 GB of memory (client requirement).
- I'm using Ubuntu 32-bit (and don't want to work with Ubuntu 64-bit).
- So I am using a RAM disk to store the incoming data and to search it (this way I can use 12 GB of RAM on a 32-bit system).
When I tested the app with 2 GB of allocated memory (instead of 12 GB), I saw that CPU performance when using RAM is better than when using the RAM disk if I just write data into my DB (15% vs 17% CPU usage),
but when I tested the queries (which read a lot of data, or files if I'm working with the RAM disk) I saw a huge difference (20% vs 80% CPU usage).
I don't understand why there is such a huge difference.
Both RAM and a RAM disk live in RAM, no? Is there anything I can do to get better performance?
There are two reasons I can think of as to why a RAM disk is slower.
With a RAM disk we may use RAM as the file medium, but we still have the overhead of using a filesystem. This involves system calls to access the data, with additional indirection and copying. Directly accessing memory is just that.
Memory access tends to be fast because we can often find what we are looking for in the processor cache, which saves us from reading from slower RAM. A RAM disk will probably not be able to make use of the processor cache to the same extent, if for no other reason than that it requires a system call.
I am developing a C++ application with Qt that involves tremendous number crunching. A large amount of dynamic memory is required for the entire operation; however, the requirement varies depending on a parameter set by the user.
In resource monitor I can see that the Commit memory (memory allocated by OS for the exe) keeps on increasing with time as my program creates arrays in dynamic memory. So if I let Windows know beforehand that my exe will use X MB of memory, will this result in improved performance? If yes then how do I do this?
If you have a lot of memory allocations and a CPU-intensive process running together, you might consider restructuring your program to use a memory pool.
The idea behind a memory pool is that you allocate a pool of resources that you will probably need before processing begins (maps, vectors, or any objects you happen to new very often). Each time you need a new object, you take the first available one from the pool, reset it and use it; when you are done with it, you put it back into the pool so that it can be used again later.
This pattern can turn out to be faster than continuously using new and delete, but only if your program makes intensive use of dynamic allocation while it is doing, for example, a minimax search over a huge tree, or something as intensive as that.
So if I let Windows know beforehand that my exe will use X MB of memory, will this result in improved performance? If yes then how do I do this?
I don't think so. The memory your app operates on is virtual, and you don't really have good control over how Windows actually allocates/maps physical memory onto virtual memory.
But you can try allocating the required amount of memory upfront and then use it as a pool for custom allocators. It may result in some performance hit however.
You can do a large allocation and delete:
char *ptr = new char[50*1024*1024L];
delete[] ptr;
I doubt there is going to be any performance difference.
I want to find a memory leak (on the Windows 7 OS) in a C++ program by observing the Windows Task Manager Processes tab for a gradual increase in memory.
I am confused, as I see a lot of memory-related columns in the Task Manager's Processes tab, which I have listed below:
Memory - Working Set
Memory - Working Set Delta
Memory - Private Working Set
Memory - Commit Size
Memory - Paged Pool
Memory - Non-paged Pool
I have searched for topics related to this on the web, but I couldn't find a satisfactory answer.
Please let me know which indicator I should use to check for an increase in memory, so that I can decide whether my C++ code/process has a memory leak.
FYI: my limitation is that I cannot use any profiling tool or static code analyzer; I only have access to Windows Task Manager on the system to find the memory leak.
As other posters have said, a small, slowly increasing memory footprint does not necessarily indicate a problem.
However, if you have a long-running process that slowly eats up vastly more memory than should theoretically be required (or than has been measured in a healthy version of your component under similar usage scenarios), then you likely have a memory leak. I have first noticed problems in components when others reported gigabyte-level memory usage by a component that normally uses about 2-3 MB. Perfmon is useful if you want a long-term view of your process memory: select the process by name, then select the private bytes counter, and set up the timing and the grid to measure over, say, 24 hours.
Once you are sure that there is a definite increase in process memory, you can use tools like your debugger, Valgrind, Parasoft, Glow Code, etc. to confirm that what you are seeing is a real memory leak. However, even if it is not a real memory leak (unreferenced heap memory), you still need to redesign your component if its memory usage increases without end.
The short answer: it's not possible.
With only Task Manager to look at, there just isn't enough data available. A memory leak typically is memory that is still allocated but isn't used any more; to Task Manager, however, it looks as if the process were still using that memory (and it has no way of finding out otherwise). You might note a continuous increase in memory usage, but that's only an indicator that there might be a memory leak - it could also be that the program really uses that memory (or holds on to it for future use, e.g. because it does its own memory management). Without using additional tools, you cannot know.
To confirm your suspicion about leaking part, you can take as an example Perfmon memory analysis -
Private Bytes are a reasonable approximation of the amount of memory your executable is using and can be used to help narrow down a list of potential candidates for a memory leak; if you see the number growing constantly and endlessly, you would want to check that process for a leak. This cannot, however, prove that there is or is not a leak.
See for details - What is private bytes, virtual bytes, working set?