I am developing a C++ application with Qt that involves tremendous number crunching. A large amount of dynamic memory is required for the entire operation; however, the requirement varies with a parameter set by the user.
In Resource Monitor I can see that the commit memory (the memory the OS has allocated to the exe) keeps increasing over time as my program creates arrays in dynamic memory. So if I let Windows know beforehand that my exe will use X MB of memory, will this result in improved performance? If yes, how do I do this?
If you have a lot of memory allocations and a CPU-intensive process running together, you might consider restructuring your program to use memory pools.
The idea behind a memory pool is that you allocate a pool of resources you will probably need before processing begins (maps, vectors, or any objects you happen to new very often). Each time you need a new object, you take the first available one from the pool, reset it, and use it; when you are done with it, you put it back into the pool so that it can be used again later.
This pattern can turn out to be faster than continuously using new and delete, but only if your program uses dynamic allocation intensively while doing, for example, a minimax search over a huge tree, or something similarly demanding.
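For illustration, here is a minimal sketch of such a pool; Buffer and Pool are names made up for the example, not part of any library:

#include <cstddef>
#include <memory>
#include <vector>

struct Buffer {
    std::vector<int> data;
    void reset() { data.clear(); }              // make the object reusable
};

class Pool {
    std::vector<std::unique_ptr<Buffer>> free_;
public:
    explicit Pool(std::size_t n) {              // pre-allocate n objects
        for (std::size_t i = 0; i < n; ++i)
            free_.push_back(std::make_unique<Buffer>());
    }
    std::unique_ptr<Buffer> acquire() {         // take the first available object
        if (free_.empty())
            return std::make_unique<Buffer>();  // pool exhausted: fall back to new
        auto obj = std::move(free_.back());
        free_.pop_back();
        obj->reset();
        return obj;
    }
    void release(std::unique_ptr<Buffer> obj) { // put it back for later reuse
        free_.push_back(std::move(obj));
    }
};

Pairing each acquire() with a release() amortizes the allocation cost across the whole search instead of paying it on every iteration.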
So if I let Windows know beforehand that my exe will use X MB of memory, will this result in improved performance? If yes, how do I do this?
I don't think so. The memory your app operates on is virtual, and you don't really have good control over how Windows maps physical memory onto that virtual space.
But you can try allocating the required amount of memory upfront and then using it as a pool for custom allocators, as sketched below. It may result in some performance hit, however.
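As a rough sketch of that idea, here is a minimal bump-style arena built on one upfront allocation; Arena and its members are illustrative names, not an existing API:

#include <cstddef>
#include <new>

class Arena {
    char*       base_;
    std::size_t size_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t bytes) : base_(new char[bytes]), size_(bytes) {}
    ~Arena() { delete[] base_; }

    // align must be a power of two; default covers all fundamental types
    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t offset = (used_ + align - 1) & ~(align - 1); // align up
        if (offset + n > size_) throw std::bad_alloc{};          // arena exhausted
        used_ = offset + n;
        return base_ + offset;
    }
    void reset() { used_ = 0; }  // releases everything in one step
};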
You can do a large allocation and then delete it:
char *ptr = new char[50*1024*1024L];
delete[] ptr;   // note: delete[] ptr, not delete[] *ptr
I doubt there is going to be any performance difference, though.
I am running a memory-intensive C++ application, and it is being killed by the kernel for excessively high memory usage. I would have thought that the OS would automatically use swap when RAM gets full, but I don't think my swap space is being utilised.
I have read the following two questions, but I can't relate it to my problem.
"How to avoid running out of memory in high memory usage application? C / C++"
Who "Killed" my process and why?
I would be grateful if someone could give me some hints/pointers on how I might solve this problem. Thanks.
Edit: I am running my application on a 64-bit Linux machine. My RAM and swap are 6 GB and 12 GB respectively.
I suspect your process is asking for more memory than is available. In situations where you know you're going to use the memory you ask for, you need to disable memory overcommit:
echo 2 > /proc/sys/vm/overcommit_memory
and/or put
vm.overcommit_memory=2
in /etc/sysctl.conf so the setting survives reboots.
If your process asks for 32 GB of RAM on a machine with 16 GB of RAM + swap, your malloc() (or new...) calls might very well succeed, but once you try to use that memory your process is going to get killed.
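A small sketch of that failure mode (the 32 GB figure is illustrative): with overcommit enabled, the allocation itself succeeds, and the process only dies when the pages are first touched:

#include <cstddef>
#include <cstring>
#include <iostream>
#include <new>

int main() {
    const std::size_t bytes = 32ull * 1024 * 1024 * 1024; // more than RAM + swap
    char* p = new (std::nothrow) char[bytes];
    if (!p) {                    // with vm.overcommit_memory=2 you fail here...
        std::cerr << "allocation refused up front\n";
        return 1;
    }
    std::memset(p, 1, bytes);    // ...with overcommit, the OOM killer may
                                 // strike here instead, on first touch
    delete[] p;
}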
Perhaps you have (virtual) memory fragmentation and are trying to allocate a large block of memory that the OS cannot find as one contiguous range?
An array, for instance, would require a contiguous block, but if you build a large linked list on the heap, the nodes can live in non-contiguous memory.
How much memory are you trying to allocate, and how? Do you have a sufficient amount of free resources? If you debug your application, what happens when the process is killed?
This might be a bit of a stupid question: should I call delete on a huge map/set at the end of the program?
Assume the map/set is needed throughout the whole program (the delete would be the last line before return) and its size is really huge (> 4 GB). The delete call takes a long time and, from my perspective, has no value (the memory cannot be released any sooner). Am I wrong? If so, why?
There is no guarantee in the C and C++ standards about what happens after your program exits, including no guarantee that anything is cleaned up. Some smaller real-time OSes, for example, will not perform automatic cleanup. So at least in theory, your program should delete everything it news, to fulfil its obligation as a complete and portable program that could run forever.
It is also possible that someone takes your code and puts a loop around it, so that your tree is now created a million times, and then comes to find you, bringing along the "trusty convincer", aka a baseball bat, when they find out WHY it's now running out of memory after 500 iterations.
Of course, like all things, this can be argued many different ways, and it really depends on what you are trying to achieve, why you are writing the program, and so on. My compiler project leaks memory like a sieve, because I use exactly the memory-management approach you describe (partly because tracking the lifetime of each dynamically allocated object is quite difficult, and partly because I can't be bothered; I'm sure that anyone who actually wants a good Pascal compiler won't go for my code anyway).
Actually, my compiler project builds many different data structures, some of which are trees, arrays, etc., but basically none of them performs any cleanup afterwards. It's not as simple to fix as the case of one large tree where each node needs deleting. Conceptually, though, it all boils down to "do cleanup" or "don't do cleanup", and that in turn comes down to who is going to use or modify the code, and what you know about the environment it will run in.
As long as your program stays more or less the same, with the memory being used for the entire execution, it doesn't make a real difference.
However, if someone ever tries to take your program and make it into a component of some other program, not releasing the memory could result in a huge memory leak.
So, to be on the safe side and to stay organized, always free what you allocate. In your case it's very easy, so there's no downside - only a potential upside.
If you are using the C++ standard library containers, there is no need for any explicit deletion of the containers themselves: map/set manage their heap memory automatically. In fact, you cannot call delete on such a map/set, since it is not a pointer.
For the large objects stored inside the map/set, you can use smart pointers when constructing them, and then you will no longer need to delete them manually; see the sketch below. A memory leak may not be a problem in a toy program, but it is unacceptable in real-life programs, since they may run for a long time or even forever.
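A minimal sketch of that suggestion (BigObject is just an illustrative name): the map owns its values through unique_ptr, so erasing an entry or destroying the map releases the objects with no manual delete:

#include <map>
#include <memory>

struct BigObject { /* large payload */ };

int main() {
    std::map<int, std::unique_ptr<BigObject>> m;
    m[42] = std::make_unique<BigObject>();
    m.erase(42);   // the BigObject is destroyed here automatically
}                  // any remaining entries are released when m is destroyed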
Here is my code:
int** tmp = new int*[l];
for (int i = 0; i < l; i++)
    tmp[i] = new int[h];

for (int i = 0; i < l; i++)
    delete[] tmp[i];
delete[] tmp;
I would like to know if I'm correctly deallocating the memory. The problem I have is that when I check my program's process in the task manager, the memory won't drop.
Is that normal?
The code above is OK, although in general it's the sort of thing you hope never to encounter in a codebase you have to work on.
std::vector or boost::multi_array would both be better choices here; they clean up on destruction without all that unnecessary, error-prone code. Basically, if you have to wonder what the code is doing and whether it's correct, something is wrong with it already.
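For example, the question's code could be written like this with std::vector (a sketch, keeping the same l-by-h shape):

#include <vector>

void make_grid(int l, int h) {
    std::vector<std::vector<int>> tmp(l, std::vector<int>(h));
    // ... use tmp[i][j] ...
}   // every row is deallocated here, with no loop of delete[] calls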
CPU load is not directly connected to memory allocations; that is a separate problem entirely. A loop endlessly polling the OS for something might be the reason, but I have no information about what your code does besides allocating and deallocating memory, so it's hard to tell what could be improved.
After your comment: don't rely on the task manager to tell you the real memory usage of a program; use a specialized leak detector for that. As @H2CO3 pointed out, the OS might not immediately report deleted memory as free.
In their barebones implementation, new and delete are just sugar over malloc and free (from the C library), so we will reason about those instead.
Operating systems usually provide primitives to (de)allocate memory; however, those primitives:
are not as fine-grained as malloc and free: they work in 4 KB blocks, for example
are relatively expensive: notably, they often zero out the memory
As a result, most implementations of malloc and free are not simple one-line wrappers around OS primitives, but instead will keep a pool of allocated pages and handle most requests internally. Some implementations even have a per-thread pool to avoid contention (such as jemalloc) or multiple pools with per-thread affinity (such as tcmalloc).
This results in:
faster malloc/free calls
a memory footprint for the process that is slightly higher than strictly needed
Note: I have not even touched on fragmentation yet...
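As a glibc-specific illustration of that footprint effect: after freeing many blocks, malloc_trim() asks the allocator to hand free pages back to the OS; whether it can depends on fragmentation, and portable code cannot rely on it:

#include <cstdlib>
#include <malloc.h>   // glibc extension providing malloc_trim

int main() {
    void* blocks[1000];
    for (int i = 0; i < 1000; ++i) blocks[i] = std::malloc(64 * 1024);
    for (int i = 0; i < 1000; ++i) std::free(blocks[i]);
    // The process footprint may still include the freed pages here;
    // malloc_trim(0) is a hint to return whatever trailing space it can.
    malloc_trim(0);
}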
Well, it's known that the GlobalAlloc/GlobalFree/HeapAlloc/HeapFree APIs manage default heaps or user-defined heaps (CreateHeap). Each heap contains segments, and each segment has multiple blocks. The free list and the lookaside list manage the free blocks in each heap.
I was reversing a piece of software and found that it uses VirtualAlloc to allocate a big chunk of memory. Basically, I cannot say that it's a heap, because the chunk was allocated directly from the virtual address space and shows no signs of being a heap.
But some routines in the application set up a custom free list, managed by the application itself, which defines and controls the free portions of that big chunk allocated with VirtualAlloc.
Can I call this chunk a HEAP, given that the application has set up a free-list structure to manage it?
VirtualAlloc can be used successfully to implement custom memory managers, and I suppose this is what your code is doing. It might use VirtualAlloc to reserve a large contiguous address space without initially committing it, which means no physical memory is obtained from the system; the free list might point into such non-committed address ranges.
VirtualAlloc is actually at the lowest level of memory management; a malloc library might itself be implemented on top of it.
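A minimal sketch of that reserve-then-commit pattern, using the documented VirtualAlloc flags (the sizes are illustrative):

#include <windows.h>

int main() {
    const SIZE_T reserveBytes = 1ull << 30;  // reserve 1 GB of address space
    const SIZE_T commitBytes  = 1ull << 20;  // commit only the first 1 MB

    // Reserve: claims address space, but no physical memory yet
    void* base = VirtualAlloc(nullptr, reserveBytes, MEM_RESERVE, PAGE_NOACCESS);
    if (!base) return 1;

    // Commit: backs the first pages with storage so they can be used
    void* page = VirtualAlloc(base, commitBytes, MEM_COMMIT, PAGE_READWRITE);
    if (!page) { VirtualFree(base, 0, MEM_RELEASE); return 1; }

    // ... a custom free list can now parcel out this committed region ...

    VirtualFree(base, 0, MEM_RELEASE);  // releases the whole reservation
}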
I want to find a memory leak (on Windows 7) in a C++ program by observing the Windows Task Manager's Processes tab for a gradual increase in memory.
I am confused because I see a lot of memory-related columns in the Processes tab, which I have listed below:
Memory - Working Set
Memory - Working Set Delta
Memory - Private Working Set
Memory - Commit Size
Memory - Paged Pool
Memory - Non-paged Pool
I have searched for this topic on the web, but I couldn't find a satisfactory answer.
Please let me know which indicator I should use to check for an increase in memory, so that I can decide whether my C++ code/process has a memory leak.
FYI: my limitation is that I cannot use any profiling tool or static code analyzer; I only have Windows Task Manager access on the system to find the memory leak.
As other posters have said, a small, slowly growing amount of memory does not necessarily indicate a problem.
However, if you have a long-running process that slowly eats up vastly more memory than should theoretically be required (or than has been measured in a healthy version of your component under similar usage scenarios), then you likely have a memory leak. I have first noticed problems in components when others reported gigabyte memory usage by a component that normally uses about 2-3 MB. Perfmon is useful if you want a long-term view of your process memory: select the process by name, then select the private bytes counter, and set up the timing and the grid to measure over, say, 24 hours.
Once you are sure that there is a definite increase in process memory, you can use tools such as your debugger, Valgrind, Parasoft, GlowCode, etc., to confirm that what you are seeing is a real memory leak. But even if it is not a real leak (i.e., unreferenced heap memory), you still need to redesign your component if its memory usage increases without end.
The short answer: it's not possible.
Looking only at the task manager, there just isn't enough data available. A memory leak typically is memory that is still allocated but no longer used; to the task manager, however, it looks as if the process were still using that memory (and it has no way of finding out otherwise). You might notice a continuous increase in memory usage, but that is only an indicator that there might be a memory leak - it could also be that the program really uses that memory (or holds on to it for future use, e.g. because it does its own memory management). Without additional tools, you cannot know.
To confirm your suspicion about the leaking part, you can take Perfmon's memory analysis as an example:
Private Bytes are a reasonable approximation of the amount of memory your executable is using and can be used to help narrow down a list of potential candidates for a memory leak; if you see the number growing and growing constantly and endlessly, you would want to check that process for a leak. This cannot, however, prove that there is or is not a leak.
For details, see: What is private bytes, virtual bytes, working set?