How to solve memory fragmentation - C++

We've occasionally been getting problems whereby our long-running server processes (running on Windows Server 2003) have thrown an exception due to a memory allocation failure. Our suspicion is these allocations are failing due to memory fragmentation.
Therefore, we've been looking at some alternative memory allocation mechanisms that may help us and I'm hoping someone can tell me the best one:
1) Use Windows Low-fragmentation Heap
2) jemalloc - as used in Firefox 3
3) Doug Lea's malloc
Our server process is developed using cross-platform C++ code, so any solution would be ideally cross-platform also (do *nix operating systems suffer from this type of memory fragmentation?).
Also, am I right in thinking that LFH is now the default memory allocation mechanism for Windows Server 2008 / Vista? Will my current problems "go away" if our customers simply upgrade their server OS?

First, I agree with the other posters who suggested a resource leak. You really want to rule that out first.
Hopefully, the heap manager you are currently using has a way to dump out the actual total free space available in the heap (across all free blocks) and also the total number of blocks that it is divided over. If the average free block size is relatively small compared to the total free space in the heap, then you do have a fragmentation problem. Alternatively, if you can dump the size of the largest free block and compare that to the total free space, that will accomplish the same thing. The largest free block would be small relative to the total free space available across all blocks if you are running into fragmentation.
To be very clear about the above, in all cases we are talking about free blocks in the heap, not the allocated blocks in the heap. In any case, if the above conditions are not met, then you do have a leak situation of some sort.
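If you happen to be on the Microsoft CRT, one rough way to get those numbers is to walk the heap with _heapwalk() (a sketch, assuming the MSVC-specific <malloc.h> API; the helper name is made up, and how much detail the walk reports depends on the CRT and heap configuration):
#include <malloc.h>
#include <cstddef>
#include <iostream>

// Walk the CRT heap and report the total free space, the number of free
// blocks, and the largest single free block (MSVC-specific sketch; other
// heap managers expose similar statistics through their own APIs).
void reportHeapFragmentation()
{
    _HEAPINFO info;
    info._pentry = nullptr;                  // start of the walk

    std::size_t totalFree = 0, largestFree = 0, freeBlocks = 0;
    while (_heapwalk(&info) == _HEAPOK) {
        if (info._useflag == _FREEENTRY) {   // only count free blocks
            ++freeBlocks;
            totalFree += info._size;
            if (info._size > largestFree)
                largestFree = info._size;
        }
    }

    std::cout << "free blocks: " << freeBlocks
              << ", total free: " << totalFree
              << ", largest free: " << largestFree << " bytes\n";

    // Heuristic from the text above: lots of free space but no single
    // large free block means the heap is badly fragmented.
    if (freeBlocks > 0 && largestFree * 10 < totalFree)
        std::cout << "largest free block is small relative to total free space"
                     " - likely fragmentation\n";
}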
So, once you have ruled out a leak, you could consider using a better allocator. Doug Lea's malloc, suggested in the question, is a very good allocator for general-use applications and very robust most of the time. Put another way, it has been time-tested to work very well for almost any application. However, no algorithm is ideal for all applications, and any management algorithm can be broken by the right pathological conditions against its design.
Why are you having a fragmentation problem? Fragmentation problems are caused by the behavior of an application and have to do with greatly different allocation lifetimes in the same memory arena. That is, some objects are allocated and freed regularly while other types of objects persist for extended periods of time, all in the same heap. Think of the longer-lifetime ones as poking holes into larger areas of the arena, thereby preventing the coalescing of adjacent blocks that have been freed.
To address this type of problem, the best thing you can do is logically divide the heap into sub arenas where the lifetimes are more similar. In effect, you want a transient heap and a persistent heap or heaps that group things of similar lifetimes.
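On Windows, one cheap way to get that separation is a private Win32 heap for the transient allocations (a sketch under that assumption; the TransientHeap wrapper below is illustrative, not code from the question, and a cross-platform build would need a different backing implementation):
#include <windows.h>
#include <cstddef>

// Sketch: a private heap for short-lived, per-request allocations.
// Long-lived objects stay in the default/CRT heap, so they never poke
// holes between the transient blocks.
class TransientHeap {
public:
    TransientHeap()  { heap_ = HeapCreate(0 /*flags*/, 1 << 20 /*initial 1 MB*/, 0 /*growable*/); }
    ~TransientHeap() { if (heap_) HeapDestroy(heap_); }   // releases everything in one shot

    void* alloc(std::size_t bytes) { return HeapAlloc(heap_, 0, bytes); }
    void  free(void* p)            { HeapFree(heap_, 0, p); }

private:
    HANDLE heap_;
};

// Usage sketch: per-request scratch allocations come from the transient heap,
// and the whole heap can be destroyed and recreated between work cycles.
// void handleRequest(TransientHeap& scratch) {
//     char* buffer = static_cast<char*>(scratch.alloc(4096));
//     // ... build the response ...
//     scratch.free(buffer);
// }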
Some others have suggested another approach: making the allocation sizes more similar or identical. This is less ideal because it creates a different type of fragmentation, called internal fragmentation - which is, in effect, the wasted space you get by allocating more memory in a block than you need.
Additionally, with a good heap allocator like Doug Lea's, making the block sizes more similar is unnecessary because the allocator already uses a power-of-two size bucketing scheme, so there is no need to artificially adjust the allocation sizes passed to malloc() - in effect, the heap manager does that for you automatically, and much more robustly than the application could.

I think you’ve mistakenly ruled out a memory leak too early.
Even a tiny memory leak can cause severe memory fragmentation.
Assuming your application behaves like the following:
Allocate 10MB
Allocate 1 byte
Free 10MB
(oops, we didn’t free the 1 byte, but who cares about 1 tiny byte)
This seems like a very small leak; you will hardly notice it when monitoring just the total allocated memory size.
But this leak eventually will cause your application memory to look like this:
...
[ Free - 10 MB ]
[ Allocated - 1 byte ]
...
[ Free - 10 MB ]
[ Allocated - 1 byte ]
...
[ Free - 10 MB ]
...
This leak will not be noticed... until you want to allocate 11MB
Assuming your minidumps had full memory info included, I recommend using DebugDiag to spot possible leaks.
In the generated memory report, examine carefully the allocation count (not size).

As you suggest, Doug Lea's malloc might work well. It's cross platform and it has been used in shipping code. At the very least, it should be easy to integrate into your code for testing.
Having worked in fixed-memory environments for a number of years, this situation is certainly a problem, even in non-fixed environments. We have found that the CRT allocators tend to stink pretty badly in terms of performance (speed, efficiency of wasted space, etc.). I firmly believe that if you have an extensive need for a good memory allocator over a long period of time, you should write your own (or see if something like dlmalloc will work). The trick is getting something written that works with your allocation patterns, and that has more to do with memory management efficiency than almost anything else.
Give dlmalloc a try. I definitely give it a thumbs up. It's fairly tunable as well, so you might be able to get more efficiency by changing some of the compile time options.
Honestly, you shouldn't depend on things "going away" with new OS implementations. A service pack, patch, or another new OS N years later might make the problem worse. Again, for applications that demand a robust memory manager, don't use the stock versions that are available with your compiler. Find one that works for your situation. Start with dlmalloc and tune it to see if you can get the behavior that works best for your situation.

You can help reduce fragmentation by reducing the amount you allocate and deallocate.
E.g., for a web server running a server-side script, it may create a string to output the page to. Instead of allocating and deallocating these strings for every page request, just maintain a pool of them, so you're only allocating when you need more, and you're not deallocating (meaning that after a while you reach the situation where you're not allocating any more either, because you have enough).
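A minimal sketch of such a pool (the StringPool name and interface are made up for illustration): strings are handed out from a free list and parked back on it instead of being deleted, so the steady state causes no heap traffic at all:
#include <memory>
#include <string>
#include <vector>

// Sketch of a reusable string pool: acquire() hands back a cleared string,
// release() parks it for the next request instead of freeing it.
class StringPool {
public:
    std::unique_ptr<std::string> acquire() {
        if (free_.empty())
            return std::make_unique<std::string>();   // grow only when needed
        auto s = std::move(free_.back());
        free_.pop_back();
        s->clear();                                   // keeps its capacity
        return s;
    }
    void release(std::unique_ptr<std::string> s) {
        free_.push_back(std::move(s));                // never deallocated here
    }
private:
    std::vector<std::unique_ptr<std::string>> free_;
};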
You can use _CrtDumpMemoryLeaks() to dump memory leaks to the debug output window when running a debug build; however, I believe this is specific to the Visual C++ compiler (it's in crtdbg.h).
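For reference, a minimal way to wire that up in a debug build (MSVC-specific, following the documented _CRTDBG_MAP_ALLOC pattern; the deliberately leaked block is only there so the demo prints something):
#define _CRTDBG_MAP_ALLOC   // must come before the CRT headers for file/line info
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // ... run the application ...
    char* leaked = static_cast<char*>(malloc(64));   // deliberately leaked for the demo
    (void)leaked;

    _CrtDumpMemoryLeaks();   // prints leaked blocks to the debugger output window
    return 0;
}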

I'd suspect a leak before suspecting fragmentation.
For the memory-intensive data structures, you could switch over to a re-usable storage pool mechanism. You might also be able to allocate more stuff on the stack as opposed to the heap, but in practical terms that won't make a huge difference I think.
I'd fire up a tool like valgrind or do some intensive logging to look for resources not being released.

#nsaners - I'm pretty sure the problem is down to memory fragmentation. We've analyzed minidumps that point to a problem when a large (5-10 MB) chunk of memory is being allocated. We've also monitored the process (on-site and in development) to check for memory leaks - none were detected (the memory footprint is generally quite low).

The problem does happen on Unix, although it's usually not as bad.
The Low-fragmentation Heap helped us, but my co-workers swear by Smart Heap
(it's been used cross platform in a couple of our products for years). Unfortunately due to other circumstances we couldn't use Smart Heap this time.
We also looked at block/chunk allocation and at having scope-savvy pools/strategies, i.e.,
long-term things here, whole-request things there, short-term things over there, etc.

As usual, you can waste memory to gain some speed.
This technique isn't useful for a general-purpose allocator, but it does have its place.
Basically, the idea is to write an allocator that returns memory from a pool where all the allocations are the same size. This pool can never become fragmented because any block is as good as another. You can reduce memory wastage by creating multiple pools with different chunk sizes and picking the smallest-chunk pool that is still at least as large as the requested amount. I've used this idea to create allocators that run in O(1).
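A bare-bones sketch of one such fixed-size pool (illustrative only, not a production allocator): every block is the same size, freed blocks go onto an intrusive free list, and both allocate and deallocate are O(1). It assumes blockSize is at least sizeof(void*) and a multiple of the platform's maximum alignment:
#include <cstddef>
#include <vector>

// Fixed-size block pool: all blocks are identical, so the pool can never
// fragment; allocation and deallocation are O(1) free-list operations.
class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount), head_(nullptr)
    {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < blockCount; ++i) {
            Node* n = reinterpret_cast<Node*>(&storage_[i * blockSize]);
            n->next = head_;
            head_ = n;
        }
    }

    void* allocate() {                 // O(1): pop the free list
        if (!head_) return nullptr;    // pool exhausted
        Node* n = head_;
        head_ = n->next;
        return n;
    }

    void deallocate(void* p) {         // O(1): push back onto the free list
        Node* n = static_cast<Node*>(p);
        n->next = head_;
        head_ = n;
    }

private:
    struct Node { Node* next; };
    std::vector<unsigned char> storage_;   // one contiguous slab, never resized
    Node* head_;
};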

If you're talking about Win32, you can try to squeeze something out by using LARGEADDRESSAWARE. You'll have ~1 GB of extra, unfragmented address space, so it will take your application longer to fragment it.

The simple, quick-and-dirty solution is to split the application into several processes; you get a fresh heap each time you create a process.
Your memory and speed might suffer a bit (swapping), but fast hardware and big RAM should be able to help.
This is an old UNIX trick with daemons, from the days before threads existed.

Are memory leaks ever OK?

Is it ever acceptable to have a memory leak in your C or C++ application?
What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
What if a third party library forced this situation on you? Would you refuse to use that third party library no matter how great it otherwise might be?
I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.
No.
As professionals, the question we should not be asking ourselves is, "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.
I like to keep things simple. And the simple rule is that my program should have no memory leaks.
That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.
It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.
But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks is a bad habit that will ultimately bite me in the rear.
To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?
Although it is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless, if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.
–
If a third party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test drove a car and found a couple loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it portrays a lack of commitment to quality, so I would consider alternatives.
I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing. Having some unreleased memory, while not ideal, is not a big problem unless the amount of memory required keeps growing.
Let's get our definitions correct first. A memory leak is when memory is dynamically allocated, e.g. with malloc(), and all references to the memory are lost without the corresponding free. An easy way to make one is like this:
#define BLK ((size_t)1024)
while (1) {
    void *vp = malloc(BLK);   /* the address of the previous block is lost here: it has leaked */
}
Note that every time around the while(1) loop, 1024 (+ overhead) bytes are allocated and the new address assigned to vp; there's no remaining pointer to the previously malloc'ed blocks. This program is guaranteed to run until the heap runs out, and there's no way to recover any of the malloc'ed memory. Memory is "leaking" out of the heap, never to be seen again.
What you're describing, though, sounds like
int main() {
    void *vp = malloc(LOTS);
    // Go do something useful
    return 0;
}
You allocate the memory, work with it until the program terminates. This is not a memory leak; it doesn't impair the program, and all the memory will be scavenged up automagically when the program terminates.
Generally, you should avoid memory leaks. First, because like altitude above you and fuel back at the hangar, memory that has leaked and can't be recovered is useless; second, it's a lot easier to code correctly, not leaking memory, at the start than it is to find a memory leak later.
In theory no, in practice it depends.
It really depends on how much data the program is working on, how often the program is run and whether or not it is running constantly.
If I have a quick program that reads a small amount of data, makes a calculation, and exits, a small memory leak will never be noticed. Because the program is not running for very long and only uses a small amount of memory, the leak will be small and freed when the program exits.
On the other hand if I have a program that processes millions of records and runs for a long time, a small memory leak might bring down the machine given enough time.
As for third party libraries that have leaks, if they cause a problem either fix the library or find a better alternative. If it doesn't cause a problem, does it really matter?
Many people seem to be under the impression that once you free memory, it's instantly returned to the operating system and can be used by other programs.
This isn't true. Operating systems commonly manage memory in 4KiB pages. malloc and other sorts of memory management get pages from the OS and sub-manage them as they see fit. It's quite likely that free() will not return pages to the operating system, under the assumption that your program will malloc more memory later.
I'm not saying that free() never returns memory to the operating system. It can happen, particularly if you are freeing large stretches of memory. But there's no guarantee.
The important fact: If you don't free memory that you no longer need, further mallocs are guaranteed to consume even more memory. But if you free first, malloc might re-use the freed memory instead.
What does this mean in practice? It means that if you know your program isn't going to require any more memory from now on (for instance it's in the cleanup phase), freeing memory is not so important. However if the program might allocate more memory later, you should avoid memory leaks - particularly ones that can occur repeatedly.
Also see this comment for more details about why freeing memory just before termination is bad.
A commenter didn't seem to understand that calling free() does not automatically allow other programs to use the freed memory. But that's the entire point of this answer!
So, to convince people, I will demonstrate an example where free() does very little good. To make the math easy to follow, I will pretend that the OS manages memory in 4000 byte pages.
Suppose you allocate ten thousand 100-byte blocks (for simplicity I'll ignore the extra memory that would be required to manage these allocations). This consumes 1MB, or 250 pages. If you then free 9000 of these blocks at random, you're left with just 1000 blocks - but they're scattered all over the place. Statistically, about 5 of the pages will be empty. The other 245 will each have at least one allocated block in them. That amounts to 980KB of memory, that cannot possibly be reclaimed by the operating system - even though you now only have 100KB allocated!
On the other hand, you can now malloc() 9000 more blocks without increasing the amount of memory your program is tying up.
Even when free() could technically return memory to the OS, it may not do so. free() needs to achieve a balance between operating quickly and saving memory. And besides, a program that has already allocated a lot of memory and then freed it is likely to do so again. A web server needs to handle request after request after request - it makes sense to keep some "slack" memory available so you don't need to ask the OS for memory all the time.
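For the curious, here is a small, self-contained simulation of the scenario above (10,000 blocks of 100 bytes, 4,000-byte pages, 9,000 random frees). The exact number of empty pages varies with which blocks happen to be freed, but it lands in the same ballpark as the estimate above; the layout assumption (blocks packed 40 to a page, no allocator overhead) is the same simplification used in the example:
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const int kBlockSize = 100, kPageSize = 4000, kBlocks = 10000, kFreed = 9000;
    const int kBlocksPerPage = kPageSize / kBlockSize;   // 40 blocks per page
    const int kPages = kBlocks / kBlocksPerPage;         // 250 pages in total

    // Shuffle the block indices and "free" the first 9000 of them.
    std::vector<int> blocks(kBlocks);
    std::iota(blocks.begin(), blocks.end(), 0);
    std::mt19937 rng(12345);
    std::shuffle(blocks.begin(), blocks.end(), rng);

    std::vector<int> livePerPage(kPages, kBlocksPerPage);
    for (int i = 0; i < kFreed; ++i)
        --livePerPage[blocks[i] / kBlocksPerPage];

    // A page can only go back to the OS if *nothing* on it is still live.
    const auto emptyPages = std::count(livePerPage.begin(), livePerPage.end(), 0);
    std::cout << "completely empty pages: " << emptyPages << " of " << kPages << "\n"
              << "pages still pinned: " << (kPages - emptyPages)
              << " (" << (kPages - emptyPages) * kPageSize / 1024 << " KB held) for "
              << (kBlocks - kFreed) * kBlockSize / 1024 << " KB of live data\n";
    return 0;
}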
There is nothing conceptually wrong with having the OS clean up after the application is run.
It really depends on the application and how it will be run. Continually occurring leaks in an application that needs to run for weeks has to be taken care of, but a small tool that calculates a result without too high of a memory need should not be a problem.
There is a reason why many scripting languages do not garbage-collect cyclical references… for their usage patterns, it's not an actual problem and would thus be as much of a waste of resources as the wasted memory.
I believe the answer is no, never allow a memory leak, and I have a few reasons which I haven't seen explicitly stated. There are great technical answers here but I think the real answer hinges on more social/human reasons.
(First, note that as others mentioned, a true leak is when your program, at any point, loses track of memory resources that it has allocated. In C, this happens when you malloc() to a pointer and let that pointer leave scope without doing a free() first.)
The important crux of your decision here is habit. When you code in a language that uses pointers, you're going to use pointers a lot. And pointers are dangerous; they're the easiest way to add all manner of severe problems to your code.
When you're coding, sometimes you're going to be on the ball and sometimes you're going to be tired or mad or worried. During those somewhat distracted times, you're coding more on autopilot. The autopilot effect doesn't differentiate between one-off code and a module in a larger project. During those times, the habits you establish are what will end up in your code base.
So no, never allow memory leaks for the same reason that you should still check your blind spots when changing lanes even if you're the only car on the road at the moment. During times when your active brain is distracted, good habits are all that can save you from disastrous missteps.
Beyond the "habit" issue, pointers are complex and often require a lot of brain power to track mentally. It's best to not "muddy the water" when it comes to your usage of pointers, especially when you're new to programming.
There's a more social aspect too. By proper use of malloc() and free(), anyone who looks at your code will be at ease; you're managing your resources. If you don't, however, they'll immediately suspect a problem.
Maybe you've worked out that the memory leak doesn't hurt anything in this context, but every maintainer of your code will have to work that out in his head too when he reads that piece of code. By using free() you remove the need to even consider the issue.
Finally, programming is writing a mental model of a process to an unambiguous language so that a person and a computer can perfectly understand said process. A vital part of good programming practice is never introducing unnecessary ambiguity.
Smart programming is flexible and generic. Bad programming is ambiguous.
I'm going to give the unpopular but practical answer that it's always wrong to free memory unless doing so will reduce the memory usage of your program. For instance, a program that makes a single allocation or series of allocations to load the dataset it will use for its entire lifetime has no need to free anything. In the more common case of a large program with very dynamic memory requirements (think of a web browser), you should obviously free memory you're no longer using as soon as you can (for instance closing a tab/document/etc.), but there's no reason to free anything when the user clicks "exit", and doing so is actually harmful to the user experience.
Why? Freeing memory requires touching memory. Even if your system's malloc implementation happens not to store metadata adjacent to the allocated memory blocks, you're likely going to be walking recursive structures just to find all the pointers you need to free.
Now, suppose your program has worked with a large volume of data, but hasn't touched most of it for a while (again, web browser is a great example). If the user is running a lot of apps, a good portion of that data has likely been swapped to disk. If you just exit(0) or return from main, it exits instantly. Great user experience. If you go to the trouble of trying to free everything, you may spend 5 seconds or more swapping all the data back in, only to throw it away immediately after that. Waste of user's time. Waste of laptop's battery life. Waste of wear on the hard disk.
This is not just theoretical. Whenever I find myself with too many apps loaded and the disk starts thrashing, I don't even consider clicking "exit". I get to a terminal as fast as I can and type killall -9 ... because I know "exit" will just make it worse.
I think in your situation the answer may be that it's okay. But you definitely need to document that the memory leak is a conscious decision. You don't want a maintenance programmer to come along, slap your code inside a function, and call it a million times. So if you make the decision that a leak is okay you need to document it (IN BIG LETTERS) for whoever may have to work on the program in the future.
If this is a third party library you may be trapped. But definitely document that this leak occurs.
But basically, if the memory leak is a known quantity like a 512 KB buffer or something, then it is a non-issue. If the memory leak keeps growing, like your memory increasing by 512 KB every time you make a library call and never being freed, then you may have a problem. If you document it and control the number of times the call is executed, it may be manageable. But then you really need the documentation, because while 512 KB isn't much, 512 KB over a million calls is a lot.
Also you need to check your operating system documentation. If this was an embedded device there may be operating systems that don't free all the memory from a program that exits. I'm not sure, maybe this isn't true. But it is worth looking into.
I'm sure that someone can come up with a reason to say Yes, but it won't be me.
Instead of saying no, I'm going to say that this shouldn't be a yes/no question.
There are ways to manage or contain memory leaks, and many systems have them.
There are NASA systems on devices that leave the earth that plan for this. The systems will automatically reboot every so often so that memory leaks will not become fatal to the overall operation. Just an example of containment.
If you allocate memory and use it until the last line of your program, that's not a leak. If you allocate memory and forget about it, even if the amount of memory isn't growing, that's a problem. That allocated but unused memory can cause other programs to run slower or not at all.
I can count on one hand the number of "benign" leaks that I've seen over time.
So the answer is a very qualified yes.
An example: if you have a singleton resource that needs a buffer to store a circular queue or deque, but doesn't know how big the buffer will need to be and can't afford the overhead of locking for every reader, then allocating an exponentially doubling buffer but not freeing the old ones will leak a bounded amount of memory per queue/deque. The benefit is that these speed up every access dramatically and can change the asymptotics of multiprocessor solutions by never risking contention for a lock.
I've seen this approach used to great benefit for things with very clearly fixed counts such as per-CPU work-stealing deques, and to a much lesser degree in the buffer used to hold the singleton /proc/self/maps state in Hans Boehm's conservative garbage collector for C/C++, which is used to detect the root sets, etc.
While technically a leak, both of these cases are bounded in size and in the growable circular work stealing deque case there is a huge performance win in exchange for a bounded factor of 2 increase in the memory usage for the queues.
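A stripped-down, single-writer sketch of that pattern (the GrowOnlyBuffer name and details are made up for illustration; real work-stealing deques are considerably more subtle): old buffers are retired rather than freed, so a reader holding a stale pointer never touches freed memory, and the retired buffers sum to less than the current one (N/2 + N/4 + ... < N), which is the bounded leak described above:
#include <atomic>
#include <cstddef>
#include <vector>

// Growable array that never frees its old buffers until destruction.
// Readers only ever dereference the pointer returned by data(), so a
// stale pointer from before a grow() still points at valid memory.
template <typename T>
class GrowOnlyBuffer {
public:
    explicit GrowOnlyBuffer(std::size_t initial = 16)
        : current_(new T[initial]), capacity_(initial) {}

    ~GrowOnlyBuffer() {
        delete[] current_.load();
        for (T* old : retired_) delete[] old;   // everything reclaimed here, at the end
    }

    T* data() { return current_.load(std::memory_order_acquire); }
    std::size_t capacity() const { return capacity_; }

    void grow() {                               // called by the single writer only
        const std::size_t newCap = capacity_ * 2;
        T* bigger = new T[newCap];
        T* old = current_.load(std::memory_order_relaxed);
        for (std::size_t i = 0; i < capacity_; ++i) bigger[i] = old[i];
        retired_.push_back(old);                // deliberately not deleted yet
        current_.store(bigger, std::memory_order_release);
        capacity_ = newCap;
    }

private:
    std::atomic<T*> current_;
    std::size_t capacity_;
    std::vector<T*> retired_;                   // the bounded, intentional "leak"
};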
If you allocate a bunch of heap at the beginning of your program, and you don't free it when you exit, that is not a memory leak per se. A memory leak is when your program loops over a section of code, and that code allocates heap and then "loses track" of it without freeing it.
In fact, there is no need to make calls to free() or delete right before you exit. When the process exits, all of its memory is reclaimed by the OS (this is certainly the case with POSIX. On other OSes – particularly embedded ones – YMMV).
The only caution I'd have with not freeing the memory at exit time is that if you ever refactor your program so that it, for example, becomes a service that waits for input, does whatever your program does, then loops around to wait for another service call, then what you've coded can turn into a memory leak.
In this sort of question context is everything. Personally I can't stand leaks, and in my code I go to great lengths to fix them if they crop up, but it is not always worth it to fix a leak, and when people are paying me by the hour I have on occasion told them it was not worth my fee for me to fix a leak in their code. Let me give you an example:
I was triaging a project, doing some perf work and fixing a lot of bugs. There was a leak during the application's initialization that I tracked down and fully understood. Fixing it properly would have required a day or so of refactoring a piece of otherwise functional code. I could have done something hacky (like stuffing the value into a global and grabbing it at some point where I knew it was no longer in use, to free it), but that would have just caused more confusion for the next guy who had to touch the code.
Personally I would not have written the code that way in the first place, but most of us don't always get to work on pristine, well-designed codebases, and sometimes you have to look at these things pragmatically. The amount of time it would have taken me to fix that 150-byte leak could instead be spent making algorithmic improvements that shaved off megabytes of RAM.
Ultimately, I decided that leaking 150 bytes for an app that used around a gig of RAM and ran on a dedicated machine was not worth fixing, so I wrote a comment saying that it was leaked, what needed to be changed in order to fix it, and why it was not worth it at the time.
This is so domain-specific that it's hardly worth answering. Use your freaking head.
Space shuttle operating system: nope, no memory leaks allowed.
Rapid-development proof-of-concept code: fixing all those memory leaks is a waste of time.
And there is a spectrum of intermediate situations.
The opportunity cost ($$$) of delaying a product release to fix all but the worst memory leaks usually dwarfs any feelings of being "sloppy or unprofessional". Your boss pays you to make him money, not to get a warm, fuzzy feeling.
You have to first realize that there's a big difference between a perceived memory leak and an actual memory leak. Very frequently analysis tools will report many red herrings, and label something as having been leaked (memory or resources such as handles etc) where it actually isn't. Often times this is due to the analysis tool's architecture. For example, certain analysis tools will report run time objects as memory leaks because it never sees those object freed. But the deallocation occurs in the runtime's shutdown code, which the analysis tool might not be able to see.
With that said, there will still be times when you will have actual memory leaks that are either very difficult to find or very difficult to fix. So now the question becomes is it ever OK to leave them in the code?
The ideal answer is, "no, never." A more pragmatic answer may be "no, almost never." Very often in real life you have a limited number of resources and a limited amount of time to resolve an endless list of tasks. When one of the tasks is eliminating memory leaks, the law of diminishing returns very often comes into play. You could eliminate, say, 98% of all memory leaks in an application in a week, but the remaining 2% might take months. In some cases it might even be impossible to eliminate certain leaks because of the application's architecture without a major refactoring of code. You have to weigh the costs and benefits of eliminating the remaining 2%.
While most answers concentrate on real memory leaks (which are not OK ever, because they are a sign of sloppy coding), this part of the question appears more interesting to me:
What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
If the associated memory is used, you cannot free it before the program ends. Whether the free is done by the program exit or by the OS does not matter, as long as this is documented (so that changes don't introduce real memory leaks), and as long as there is no C++ destructor or C cleanup function involved in the picture. A not-closed file might be revealed through a leaked FILE object, but a missing fclose() might also cause the buffer not to be flushed.
So, back to the original case: it is IMHO perfectly OK in itself, so much so that Valgrind, one of the most powerful leak detectors, will report such leaks only if requested. On Valgrind, when you overwrite a pointer without freeing the memory it pointed to beforehand, it gets considered a memory leak, because that is more likely to happen again and to cause the heap to grow endlessly.
Then there are non-freed memory blocks which are still reachable. One could make sure to free all of them at exit, but that is just a waste of time in itself. The point is whether they could have been freed before. Lowering memory consumption is useful in any case.
Even if you are sure that your 'known' memory leak will not cause havoc, don't do it. At best, it will pave a way for you to make a similar and probably more critical mistake at a different time and place.
For me, asking this is like asking "Can I run the red light at 3 AM when no one is around?". Well sure, it may not cause any trouble at that time, but it will provide a lever for you to do the same in rush hour!
No, you should not have leaks that the OS will clean up for you. The reason (not mentioned in the answers above, as far as I could check) is that you never know when your main() will be re-used as a function/module in another program. If your main() gets to be a frequently-called function in another person's software, that software will have a memory leak that eats memory over time.
KIV
I agree with vfilby – it depends. In Windows, we treat memory leaks as relatively serious bugs. But it very much depends on the component.
For example, memory leaks are not very serious for components that run rarely and for limited periods of time. These components run, do their work, then exit. When they exit, all their memory is freed implicitly.
However, memory leaks in services or other long run components (like the shell) are very serious. The reason is that these bugs 'steal' memory over time. The only way to recover this is to restart the components. Most people don't know how to restart a service or the shell – so if their system performance suffers, they just reboot.
So, if you have a leak, evaluate its impact in terms of:
1) Your software and your user's experience.
2) The system (and the user), in terms of being frugal with system resources.
3) The impact of the fix on maintenance and reliability.
4) The likelihood of causing a regression somewhere else.
Foredecker
I'm surprised to see so many incorrect definitions of what a memory leak actually is. Without a concrete definition, a discussion on whether it's a bad thing or not will go nowhere.
As some commenters have rightly pointed out, a memory leak only happens when memory allocated by a process goes out of scope to the extent that the process is no longer able to reference or delete it.
A process which is grabbing more and more memory is not necessarily leaking. So long as it is able to reference and deallocate that memory, then it remains under the explicit control of the process and has not leaked. The process may well be badly designed, especially in the context of a system where memory is limited, but this is not the same as a leak. Conversely, losing scope of, say, a 32 byte buffer is still a leak, even though the amount of memory leaked is small. If you think this is insignificant, wait until someone wraps an algorithm around your library call and calls it 10,000 times.
I see no reason whatsoever to allow leaks in your own code, however small. Modern programming languages such as C and C++ go to great lengths to help programmers prevent such leaks and there is rarely a good argument not to adopt good programming techniques - especially when coupled with specific language facilities - to prevent leaks.
As regards existing or third party code, where your control over quality or ability to make a change may be highly limited, depending on the severity of the leak, you may be forced to accept or take mitigating action such as restarting your process regularly to reduce the effect of the leak.
It may not be possible to change or replace the existing (leaking) code, and therefore you may be bound to accept it. However, this is not the same as declaring that it's OK.
I guess it's fine if you're writing a program meant to leak memory (i.e. to test the impact of memory leaks on system performance).
It's really not a leak if it's intentional, and it's not a problem unless it's a significant amount of memory or could grow to be a significant amount of memory. It's fairly common not to clean up global allocations during the lifetime of a program. If the leak is in a server or long-running app and grows over time, then it's a problem.
I think you've answered your own question. The biggest drawback is how they interfere with the memory leak detection tools, but I think that drawback is a HUGE drawback for certain types of applications.
I work with legacy server applications that are supposed to be rock solid but they have leaks and the globals DO get in the way of the memory detection tools. It's a big deal.
In the book "Collapse" by Jared Diamond, the author wonders about what the guy was thinking who cut down the last tree on Easter Island, the tree he would have needed in order to build a canoe to get off the island. I wonder about the day many years ago when that first global was added to our codebase. THAT was the day it should have been caught.
I see the same problem as all scenario questions like this: What happens when the program changes, and suddenly that little memory leak is being called ten million times and the end of your program is in a different place so it does matter? If it's in a library then log a bug with the library maintainers, don't put a leak into your own code.
I'll answer no.
In theory, the operating system will clean up after you if you leave a mess (now that's just rude, but since computers don't have feelings it might be acceptable). But you can't anticipate every possible situation that might occur when your program is run. Therefore (unless you are able to conduct a formal proof of some behaviour), creating memory leaks is just irresponsible and sloppy from a professional point of view.
If a third-party component leaks memory, this is a very strong argument against using it, not only because of the imminent effect but also because it shows that the programmers work sloppily and that this might also impact other metrics. Now, when considering legacy systems this is difficult (consider web browsing components: to my knowledge, they all leak memory) but it should be the norm.
Historically, it did matter on some operating systems under some edge cases. These edge cases could exist in the future.
Here's an example, on SunOS in the Sun 3 era, there was an issue if a process used exec (or more traditionally fork and then exec), the subsequent new process would inherit the same memory footprint as the parent and it could not be shrunk. If a parent process allocated 1/2 gig of memory and didn't free it before calling exec, the child process would start using that same 1/2 gig (even though it wasn't allocated). This behavior was best exhibited by SunTools (their default windowing system), which was a memory hog. Every app that it spawned was created via fork/exec and inherited SunTools footprint, quickly filling up swap space.
This was already discussed ad nauseam. Bottom line is that a memory leak is a bug and must be fixed. If a third party library leaks memory, it makes one wonder what else is wrong with it, no? If you were building a car, would you use an engine that is occasionally leaking oil? After all, somebody else made the engine, so it's not your fault and you can't fix it, right?
Generally a memory leak in a stand alone application is not fatal, as it gets cleaned up when the program exits.
What do you do for Server programs that are designed so they don't exit?
If you are the kind of programmer that does not design and implement code where the resources are allocated and released correctly, then I don't want anything to do with you or your code. If you don't care to clean up your leaked memory, what about your locks? Do you leave them hanging out there too? Do you leave little turds of temporary files lying around in various directories?
Leak that memory and let the program clean it up? No. Absolutely not. It's a bad habit, that leads to bugs, bugs, and more bugs.
Clean up after yourself. Yo momma don't work here no more.
As a general rule, if you've got memory leaks that you feel you can't avoid, then you need to think harder about object ownership.
But to your question, my answer in a nutshell is In production code, yes. During development, no. This might seem backwards, but here's my reasoning:
In the situation you describe, where the memory is held until the end of the program, it's perfectly okay to not release it. Once your process exits, the OS will clean up anyway. In fact, it might make the user's experience better: In a game I've worked on, the programmers thought it would be cleaner to free all the memory before exiting, causing the shutdown of the program to take up to half a minute! A quick change that just called exit() instead made the process disappear immediately, and put the user back to the desktop where he wanted to be.
However, you're right about the debugging tools: They'll throw a fit, and all the false positives might make finding your real memory leaks a pain. And because of that, always write debugging code that frees the memory, and disable it when you ship.

Is it ever OK to *not* use free() on allocated memory?

I'm studying computer engineering, and I have some electronics courses. I heard, from two of my professors (of these courses) that it is possible to avoid using the free() function (after malloc(), calloc(), etc.) because the memory spaces allocated likely won't be used again to allocate other memory. That is, for example, if you allocate 4 bytes and then release them you will have 4 bytes of space that likely won't be allocated again: you will have a hole.
I think that's crazy: you can't have a non-toy program where you allocate memory on the heap without releasing it. But I don't have the knowledge to explain exactly why it's so important that for each malloc() there must be a free().
So: are there ever circumstances in which it might be appropriate to use a malloc() without using free()? And if not, how can I explain this to my professors?
Easy: just read the source of pretty much any half-serious malloc()/free() implementation. By this, I mean the actual memory manager that handles the work of the calls. This might be in the runtime library, virtual machine, or operating system. Of course the code is not equally accessible in all cases.
Making sure memory is not fragmented, by joining adjacent holes into larger holes, is very very common. More serious allocators use more serious techniques to ensure this.
So, let's assume you do three allocations and de-allocations and get blocks layed out in memory in this order:
+-+-+-+
|A|B|C|
+-+-+-+
The sizes of the individual allocations don't matter. Then you free the first and last one, A and C:
+-+-+-+
| |B| |
+-+-+-+
When you finally free B, you (initially, at least in theory) end up with:
+-+-+-+
| | | |
+-+-+-+
which can be de-fragmented into just
+-----+
|     |
+-----+
i.e. a single larger free block, no fragments left.
References, as requested:
Try reading the code for dlmalloc. It's a lot more advanced, being a full production-quality implementation.
Even in embedded applications, de-fragmenting implementations are available. See for instance these notes on the heap4.c code in FreeRTOS.
The other answers already explain perfectly well that real implementations of malloc() and free() do indeed coalesce (defragment) holes into larger free chunks. But even if that wasn't the case, it would still be a bad idea to forgo free().
The thing is, your program just allocated (and wants to free) those 4 bytes of memory. If it's going to run for an extended period of time, it's quite likely that it will need to allocate just 4 bytes of memory again. So even if those 4 bytes will never coalesce into a larger contiguous space, they can still be re-used by the program itself.
It's total nonsense; for instance, there are many different implementations of malloc, and some try to make the heap more efficient, like Doug Lea's or this one.
Are your professors working with POSIX, by any chance? If they're used to writing lots of small, minimalistic shell applications, that's a scenario where I can imagine this approach wouldn't be all too bad - freeing the whole heap at once at the leisure of the OS is faster than freeing up a thousand variables. If you expect your application to run for a second or two, you could easily get away with no de-allocation at all.
It's still a bad practice of course (performance improvements should always be based on profiling, not vague gut feeling), and it's not something you should say to students without explaining the other constraints, but I can imagine a lot of tiny-piping-shell-applications to be written this way (if not using static allocation outright). If you're working on something that benefits from not freeing up your variables, you're either working in extreme low-latency conditions (in which case, how can you even afford dynamic allocation and C++? :D), or you're doing something very, very wrong (like allocating an integer array by allocating a thousand integers one after another rather than a single block of memory).
You mentioned they were electronics professors. They may be used to writing firmware/realtime software, where being able to accurately time something's execution is often needed. In those cases, knowing you have enough memory for all allocations, and not freeing and reallocating memory, may give a more easily computed limit on execution time.
In some schemes hardware memory protection may also be used to make sure the routine completes in its allocated memory or generates a trap in what should be very exceptional cases.
Taking this from a different angle than previous commenters and answers, one possibility is that your professors have had experience with systems where memory was allocated statically (i.e., when the program was compiled).
Static allocation happens when you do things like:
#define MAX_SIZE 32
int array[MAX_SIZE];
In many real-time and embedded systems (those most likely to be encountered by EEs or CEs), it is usually preferable to avoid dynamic memory allocation altogether. So uses of malloc, new, and their deletion counterparts are rare. On top of that, memory in computers has exploded in recent years.
If you have 512 MB available to you, and you statically allocate 1 MB, you have roughly 511 MB to trundle through before your software explodes (well, not exactly... but go with me here). Assuming you have 511 MB to abuse, if you malloc 4 bytes every second without freeing them, you will be able to run for over four years before you run out of memory. Considering many machines shut off once a day, that means your program will never run out of memory!
In the above example, the leak is 4 bytes per second, or 240 bytes/min. Now imagine that you lower that byte/min ratio. The lower that ratio, the longer your program can run without problems. If your mallocs are infrequent, that is a real possibility.
Heck, if you know you're only going to malloc something once, and that malloc will never be hit again, then it's a lot like static allocation, though you don't need to know the size of what it is you're allocating up-front. Eg: Let's say we have 512 MB again. We need to malloc 32 arrays of integers. These are typical integers - 4 bytes each. We know the sizes of these arrays will never exceed 1024 integers. No other memory allocations occur in our program. Do we have enough memory? 32 * 1024 * 4 = 131,072. 128 KB - so yes. We have plenty of space. If we know we will never allocate any more memory, we can safely malloc those arrays without freeing them. However, this may also mean that you have to restart the machine/device if your program crashes. If you start/stop your program 4,096 times you'll allocate all 512 MB. If you have zombie processes it's possible that memory will never be freed, even after a crash.
Save yourself pain and misery, and consume this mantra as The One Truth: malloc should always be associated with a free. new should always have a delete.
I think the claim stated in the question is nonsense if taken literally from the programmer's standpoint, but it has truth (at least some) from the operating system's view.
malloc() will eventually end up calling either mmap() or sbrk() which will fetch a page from the OS.
In any non-trivial program, the chances that this page is ever given back to the OS during a process's lifetime are very small, even if you free() most of the allocated memory. So free()'d memory will only be available to the same process most of the time, but not to others.
Your professors aren't wrong, but they also are (at the very least, they are misleading or oversimplifying). Memory fragmentation causes problems for performance and efficient use of memory, so sometimes you do have to consider it and take action to avoid it. One classic trick is, if you allocate a lot of things which are the same size, to grab a pool of memory at startup which is some multiple of that size and manage its usage entirely internally, thus ensuring you don't have fragmentation happening at the OS level (and the holes in your internal memory mapper will be exactly the right size for the next object of that type which comes along).
There are entire third-party libraries which do nothing but handle that kind of thing for you, and sometimes it's the difference between acceptable performance and something that runs far too slowly. malloc() and free() take a noticeable amount of time to execute, which you'll start to notice if you're calling them a lot.
So by avoiding just naively using malloc() and free() you can avoid both fragmentation and performance problems - but when you get right down to it, you should always make sure you free() everything you malloc() unless you have a very good reason to do otherwise. Even when using an internal memory pool a good application will free() the pool memory before it exits. Yes, the OS will clean it up, but if the application lifecycle is later changed it'd be easy to forget that pool's still hanging around...
Long-running applications of course need to be utterly scrupulous about cleaning up or recycling everything they've allocated, or they end up running out of memory.
Your professors are raising an important point. Unfortunately the English usage is such that I'm not absolutely sure what it is they said. Let me answer the question in terms of non-toy programs that have certain memory usage characteristics, and that I have personally worked with.
Some programs behave nicely. They allocate memory in waves: lots of small or medium-sized allocations followed by lots of frees, in repeating cycles. In these programs typical memory allocators do rather well. They coalesce freed blocks and at the end of a wave most of the free memory is in large contiguous chunks. These programs are quite rare.
Most programs behave badly. They allocate and deallocate memory more or less randomly, in a variety of sizes from very small to very large, and they retain a high usage of allocated blocks. In these programs the ability to coalesce blocks is limited and over time they finish up with the memory highly fragmented and relatively non-contiguous. If the total memory usage exceeds about 1.5GB in a 32-bit memory space, and there are allocations of (say) 10MB or more, eventually one of the large allocations will fail. These programs are common.
Other programs free little or no memory, until they stop. They progressively allocate memory while running, freeing only small quantities, and then stop, at which time all memory is freed. A compiler is like this. So is a VM. For example, the .NET CLR runtime, itself written in C++, probably never frees any memory. Why should it?
And that is the final answer. In those cases where the program is sufficiently heavy in memory usage, then managing memory using malloc and free is not a sufficient answer to the problem. Unless you are lucky enough to be dealing with a well-behaved program, you will need to design one or more custom memory allocators that pre-allocate big chunks of memory and then sub-allocate according to a strategy of your choice. You may not use free at all, except when the program stops.
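For illustration, the simplest form of such a custom allocator is a bump-pointer arena that grabs big chunks and sub-allocates from them, with no per-object free at all; everything owned by the arena is released together when the phase of the program that owns it ends. A minimal sketch (my own illustration, not anyone's production allocator; error handling omitted, and individual allocations are assumed to be much smaller than the chunk size):
#include <cstddef>
#include <cstdlib>
#include <vector>

// Bump-pointer arena: sub-allocates from large malloc'ed chunks and only
// releases memory wholesale, when the arena itself is destroyed.
class Arena {
public:
    explicit Arena(std::size_t chunkSize = 1 << 20) : chunkSize_(chunkSize) {}
    ~Arena() { for (void* c : chunks_) std::free(c); }

    void* allocate(std::size_t bytes) {
        bytes = (bytes + 15) & ~std::size_t(15);         // keep 16-byte alignment
        if (chunks_.empty() || used_ + bytes > chunkSize_) {
            chunks_.push_back(std::malloc(chunkSize_));  // grab another big chunk
            used_ = 0;
        }
        void* p = static_cast<char*>(chunks_.back()) + used_;
        used_ += bytes;
        return p;
    }

    // No per-object free: everything owned by the arena dies together.

private:
    std::size_t chunkSize_;
    std::size_t used_ = 0;
    std::vector<void*> chunks_;
};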
Without knowing exactly what your professors said, for truly production scale programs I would probably come out on their side.
EDIT
I'll have one go at answering some of the criticisms. Obviously SO is not a good place for posts of this kind. Just to be clear: I have around 30 years experience writing this kind of software, including a couple of compilers. I have no academic references, just my own bruises. I can't help feeling the criticisms come from people with far narrower and shorter experience.
I'll repeat my key message: balancing malloc and free is not a sufficient solution to large scale memory allocation in real programs. Block coalescing is normal, and buys time, but it's not enough. You need serious, clever memory allocators, which tend to grab memory in chunks (using malloc or whatever) and free rarely. This is probably the message OP's professors had in mind, which he misunderstood.
I'm surprised that nobody had quoted The Book yet:
This may not be true eventually, because memories may get large enough so that it would be impossible to run out of free memory in the lifetime of the computer. For example, there are about 3 × 10^13 microseconds in a year, so if we were to cons once per microsecond we would need about 10^15 cells of memory to build a machine that could operate for 30 years without running out of memory. That much memory seems absurdly large by today's standards, but it is not physically impossible. On the other hand, processors are getting faster and a future computer may have large numbers of processors operating in parallel on a single memory, so it may be possible to use up memory much faster than we have postulated.
http://sarabander.github.io/sicp/html/5_002e3.xhtml#FOOT298
So, indeed, many programs can do just fine without ever bothering to free any memory.
I know about one case when explicitly freeing memory is worse than useless. That is, when you need all your data until the end of the process's lifetime; in other words, when freeing it is only possible right before program termination. Since any modern OS takes care of freeing memory when a program dies, calling free() is not necessary in that case. In fact, it may slow down program termination, since it may need to access several pages in memory.

Increase in memory footprint. False alarm or a memory leak?

I have a graphics program where I am creating and destroying the same objects over and over again. All in all, there are 140 objects. They get deleted and newed such that the number never exceeds 140. This is a requirement, as it is a stress test; that is, I cannot have a memory pool or dummy objects. Now I am fairly certain there aren't any memory leaks. I am also using a memory leak detector, which is not reporting any leaks.
The problem is that the memory footprint of the program keeps increasing (albeit quite slowly, slower than the rate at which the objects are being destroyed/created). So my question then is whether an increasing memory footprint is a solid sign for memory leaks or can it sometimes be deceiving?
EDIT: I am using new/delete to create/destroy the objects
It does seem possible that this behavior could come from a situation in which there is no leak.
Is there any chance that your heap is getting fragmented?
Say you make lots of allocations of size n. You free them all, which makes your C library insert those buffers into a free list. Some other code path then makes allocations smaller than n, so those blocks in the free list get chunked up into smaller units. Then the next iteration of the loop does another batch of allocations of size n, and the free list no longer contains contiguous memory at that size, and malloc has to ask the kernel for more memory. Eventually those "smaller-than-n" allocations get freed as would your "n-sized" ones, but if you run enough iterations where the fragmentation exists, I could see the process gradually increasing its memory footprint.
One way to avoid this might be to allocate all your objects once, and not keep allocating/freeing them. Since you're using C++ this might necessitate placement new or something similar. Since you are using Windows, I might also mention that Win32 supports having multiple heaps in a process, so if your objects come from a different heap than other allocations you may avoid this.
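A sketch of the placement-new idea (the Sprite type and the 140-slot layout are made up to match the description): the raw storage is allocated once, and each create/destroy cycle only runs the constructor and destructor in place, so the heap never sees the churn:
#include <cstddef>
#include <new>

struct Sprite {                       // stand-in for one of the 140 graphics objects
    float x = 0, y = 0;
    // ... textures, vertex data, etc.
};

constexpr std::size_t kCount = 140;

// One-time raw storage, correctly sized and aligned for Sprite.
alignas(Sprite) static unsigned char storage[kCount * sizeof(Sprite)];

Sprite* createSprite(std::size_t slot) {
    return new (storage + slot * sizeof(Sprite)) Sprite();   // construct in place
}

void destroySprite(Sprite* s) {
    s->~Sprite();                     // run the destructor, but keep the storage
}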
It depends on whether you're under a CLR (or a virtual machine with a garbage collector) or you're still in the old mode (like C++, MFC, etc.).
When you have a GC around, you can't really tell; only if you test it long enough. The GC can decide not to clean up your objects for now... (there is a way to force it).
In the native applications, yes, a footprint increase might mean a leak.
There are some tools (very good tools) for C++ that find these leaks (google DevPartner or BoundsChecker).
I guess there are some tools for C# and Java as well.
If your application's process footprint increases beyond a reasonable limit, which depends on your application and what it does, and continues to increase until eventually you (will) run out of virtual memory, you definitely have a memory leak.
Try memory allocation tests included in CRT: http://msdn.microsoft.com/en-us/library/e5ewb1h3%28VS.80%29.aspx
They help A LOT.
But I've noticed that apps do tend to vary their memory consumption a little if you look at some factors. Windows 7 might also create extra padding in memory allocation to fix bugs: http://msdn.microsoft.com/en-us/library/dd744764%28VS.85%29.aspx
I strongly suggest trying Visual Studio 2015 (the Community Edition is free). It comes with Diagnostic Tools that help you analyze memory usage; it allows you to take snapshots and view the heap.

Are memory leaks ever ok? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 4 years ago.
Improve this question
Is it ever acceptable to have a memory leak in your C or C++ application?
What if you allocate some memory and use it until the very last line of code in your application (for example, in a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
What if a third-party library forced this situation on you? Would you refuse to use that third-party library no matter how great it otherwise might be?
I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.
No.
As professionals, the question we should not be asking ourselves is, "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.
I like to keep things simple. And the simple rule is that my program should have no memory leaks.
That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.
It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.
But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks is a bad habit that will ultimately bite me in the rear.
To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?
Although it is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless, if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.
If a third party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test drove a car and found a couple loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it portrays a lack of commitment to quality, so I would consider alternatives.
I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing. Having some unreleased memory, while not ideal, is not a big problem unless the amount of memory required keeps growing.
Let's get our definitions correct first. A memory leak is when memory is dynamically allocated, e.g. with malloc(), and all references to the memory are lost without the corresponding free(). An easy way to make one is like this:
#include <stdlib.h>
#define BLK ((size_t)1024)
int main(void) {
    while (1) {
        void *vp = malloc(BLK);   // the previous block's address is lost on each iteration
        (void)vp;
    }
}
Note that every time around the while(1) loop, 1024 (+overhead) bytes are allocated, and the new address assigned to vp; there's no remaining pointer to the previous malloc'ed blocks. This program is guaranteed to run until the heap runs out, and there's no way to recover any of the malloc'ed memory. Memory is "leaking" out of the heap, never to be seen again.
What you're describing, though, sounds like
#include <stdlib.h>
#define LOTS ((size_t)100 * 1024 * 1024)   // "a lot" of memory - the exact value is illustrative
int main(void) {
    void *vp = malloc(LOTS);
    // Go do something useful with vp
    return 0;
}
You allocate the memory, work with it until the program terminates. This is not a memory leak; it doesn't impair the program, and all the memory will be scavenged up automagically when the program terminates.
Generally, you should avoid memory leaks. First, because like altitude above you and fuel back at the hangar, memory that has leaked and can't be recovered is useless; second, it's a lot easier to code correctly, not leaking memory, at the start than it is to find a memory leak later.
In theory no; in practice, it depends.
It really depends on how much data the program is working on, how often the program is run and whether or not it is running constantly.
If I have a quick program that reads a small amount of data, makes a calculation, and exits, a small memory leak will never be noticed. Because the program is not running for very long and only uses a small amount of memory, the leak will be small and freed when the program exits.
On the other hand if I have a program that processes millions of records and runs for a long time, a small memory leak might bring down the machine given enough time.
As for third party libraries that have leaks, if they cause a problem either fix the library or find a better alternative. If it doesn't cause a problem, does it really matter?
Many people seem to be under the impression that once you free memory, it's instantly returned to the operating system and can be used by other programs.
This isn't true. Operating systems commonly manage memory in 4KiB pages. malloc and other sorts of memory management get pages from the OS and sub-manage them as they see fit. It's quite likely that free() will not return pages to the operating system, under the assumption that your program will malloc more memory later.
I'm not saying that free() never returns memory to the operating system. It can happen, particularly if you are freeing large stretches of memory. But there's no guarantee.
The important fact: If you don't free memory that you no longer need, further mallocs are guaranteed to consume even more memory. But if you free first, malloc might re-use the freed memory instead.
What does this mean in practice? It means that if you know your program isn't going to require any more memory from now on (for instance it's in the cleanup phase), freeing memory is not so important. However if the program might allocate more memory later, you should avoid memory leaks - particularly ones that can occur repeatedly.
Also see this comment for more details about why freeing memory just before termination is bad.
A commenter didn't seem to understand that calling free() does not automatically allow other programs to use the freed memory. But that's the entire point of this answer!
So, to convince people, I will demonstrate an example where free() does very little good. To make the math easy to follow, I will pretend that the OS manages memory in 4000 byte pages.
Suppose you allocate ten thousand 100-byte blocks (for simplicity I'll ignore the extra memory that would be required to manage these allocations). This consumes 1MB, or 250 pages. If you then free 9000 of these blocks at random, you're left with just 1000 blocks - but they're scattered all over the place. Statistically, about 5 of the pages will be empty. The other 245 will each have at least one allocated block in them. That amounts to 980KB of memory, that cannot possibly be reclaimed by the operating system - even though you now only have 100KB allocated!
On the other hand, you can now malloc() 9000 more blocks without increasing the amount of memory your program is tying up.
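To make this concrete, here is a small sketch of the experiment above (the resident-size readout via /proc/self/statm is Linux-only, and the exact numbers will depend on your allocator and page size):

#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <random>
#include <unistd.h>
#include <vector>

static long resident_kib() {
    long total = 0, resident = 0;
    if (FILE* f = std::fopen("/proc/self/statm", "r")) {
        std::fscanf(f, "%ld %ld", &total, &resident);   // values are in pages
        std::fclose(f);
    }
    return resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main() {
    std::vector<char*> blocks;
    for (int i = 0; i < 10000; ++i) {
        char* p = static_cast<char*>(std::malloc(100));
        std::memset(p, 0xAB, 100);                 // touch it so the page really becomes resident
        blocks.push_back(p);
    }
    std::printf("after allocating 10000 blocks: %ld KiB resident\n", resident_kib());

    std::shuffle(blocks.begin(), blocks.end(), std::mt19937{12345});
    for (int i = 0; i < 9000; ++i) std::free(blocks[i]);   // free 9000 of them at random
    std::printf("after freeing 9000 at random:  %ld KiB resident\n", resident_kib());
    // Expect the resident size to barely drop: most pages still contain at least one live block.
    return 0;
}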
Even when free() could technically return memory to the OS, it may not do so. free() needs to achieve a balance between operating quickly and saving memory. And besides, a program that has already allocated a lot of memory and then freed it is likely to do so again. A web server needs to handle request after request after request - it makes sense to keep some "slack" memory available so you don't need to ask the OS for memory all the time.
There is nothing conceptually wrong with having the OS clean up after the application is run.
It really depends on the application and how it will be run. Continually occurring leaks in an application that needs to run for weeks have to be taken care of, but a small tool that calculates a result without too high a memory need should not be a problem.
There is a reason why many scripting languages do not garbage collect cyclical references… for their usage patterns, it's not an actual problem and would thus be as much of a waste of resources as the wasted memory.
I believe the answer is no, never allow a memory leak, and I have a few reasons which I haven't seen explicitly stated. There are great technical answers here but I think the real answer hinges on more social/human reasons.
(First, note that as others mentioned, a true leak is when your program, at any point, loses track of memory resources that it has allocated. In C, this happens when you malloc() to a pointer and let that pointer leave scope without doing a free() first.)
The important crux of your decision here is habit. When you code in a language that uses pointers, you're going to use pointers a lot. And pointers are dangerous; they're the easiest way to add all manner of severe problems to your code.
When you're coding, sometimes you're going to be on the ball and sometimes you're going to be tired or mad or worried. During those somewhat distracted times, you're coding more on autopilot. The autopilot effect doesn't differentiate between one-off code and a module in a larger project. During those times, the habits you establish are what will end up in your code base.
So no, never allow memory leaks for the same reason that you should still check your blind spots when changing lanes even if you're the only car on the road at the moment. During times when your active brain is distracted, good habits are all that can save you from disastrous missteps.
Beyond the "habit" issue, pointers are complex and often require a lot of brain power to track mentally. It's best to not "muddy the water" when it comes to your usage of pointers, especially when you're new to programming.
There's a more social aspect too. By proper use of malloc() and free(), anyone who looks at your code will be at ease; you're managing your resources. If you don't, however, they'll immediately suspect a problem.
Maybe you've worked out that the memory leak doesn't hurt anything in this context, but every maintainer of your code will have to work that out in his head too when he reads that piece of code. By using free() you remove the need to even consider the issue.
Finally, programming is writing a mental model of a process to an unambiguous language so that a person and a computer can perfectly understand said process. A vital part of good programming practice is never introducing unnecessary ambiguity.
Smart programming is flexible and generic. Bad programming is ambiguous.
I'm going to give the unpopular but practical answer that it's always wrong to free memory unless doing so will reduce the memory usage of your program. For instance, a program that makes a single allocation or series of allocations to load the dataset it will use for its entire lifetime has no need to free anything. In the more common case of a large program with very dynamic memory requirements (think of a web browser), you should obviously free memory you're no longer using as soon as you can (for instance when closing a tab/document/etc.), but there's no reason to free anything when the user clicks "exit", and doing so is actually harmful to the user experience.
Why? Freeing memory requires touching memory. Even if your system's malloc implementation happens not to store metadata adjacent to the allocated memory blocks, you're likely going to be walking recursive structures just to find all the pointers you need to free.
Now, suppose your program has worked with a large volume of data, but hasn't touched most of it for a while (again, web browser is a great example). If the user is running a lot of apps, a good portion of that data has likely been swapped to disk. If you just exit(0) or return from main, it exits instantly. Great user experience. If you go to the trouble of trying to free everything, you may spend 5 seconds or more swapping all the data back in, only to throw it away immediately after that. Waste of user's time. Waste of laptop's battery life. Waste of wear on the hard disk.
This is not just theoretical. Whenever I find myself with too many apps loaded and the disk starts thrashing, I don't even consider clicking "exit". I get to a terminal as fast as I can and type killall -9 ... because I know "exit" will just make it worse.
I think in your situation the answer may be that it's okay. But you definitely need to document that the memory leak is a conscious decision. You don't want a maintenance programmer to come along, slap your code inside a function, and call it a million times. So if you make the decision that a leak is okay you need to document it (IN BIG LETTERS) for whoever may have to work on the program in the future.
If this is a third party library you may be trapped. But definitely document that this leak occurs.
But basically, if the memory leak is a known quantity, like a 512 KB buffer or something, then it is a non-issue. If the memory leak keeps growing, like every time you call a library function your memory increases by 512 KB and is not freed, then you may have a problem. If you document it and control the number of times the call is executed, it may be manageable. But then you really need the documentation, because while 512 KB isn't much, 512 KB over a million calls is a lot.
Also you need to check your operating system documentation. If this was an embedded device there may be operating systems that don't free all the memory from a program that exits. I'm not sure, maybe this isn't true. But it is worth looking into.
I'm sure that someone can come up with a reason to say Yes, but it won't be me.
Instead of saying no, I'm going to say that this shouldn't be a yes/no question.
There are ways to manage or contain memory leaks, and many systems have them.
There are NASA systems on devices that leave the earth that plan for this. The systems will automatically reboot every so often so that memory leaks will not become fatal to the overall operation. Just an example of containment.
If you allocate memory and use it until the last line of your program, that's not a leak. If you allocate memory and forget about it, even if the amount of memory isn't growing, that's a problem. That allocated but unused memory can cause other programs to run slower or not at all.
I can count on one hand the number of "benign" leaks that I've seen over time.
So the answer is a very qualified yes.
An example. If you have a singleton resource that needs a buffer to store a circular queue or deque, but doesn't know how big the buffer will need to be and can't afford the overhead of locking for every reader, then allocating an exponentially doubling buffer but not freeing the old ones will leak a bounded amount of memory per queue/deque. The benefit is that these speed up every access dramatically and can change the asymptotics of multiprocessor solutions by never risking contention for a lock.
I've seen this approach used to great benefit for things with very clearly fixed counts such as per-CPU work-stealing deques, and to a much lesser degree in the buffer used to hold the singleton /proc/self/maps state in Hans Boehm's conservative garbage collector for C/C++, which is used to detect the root sets, etc.
While technically a leak, both of these cases are bounded in size and in the growable circular work stealing deque case there is a huge performance win in exchange for a bounded factor of 2 increase in the memory usage for the queues.
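A minimal, single-threaded sketch of that bounded-leak pattern (the class name is made up, and a real work-stealing deque would add the atomics and memory ordering needed for concurrent readers):

#include <algorithm>
#include <cstddef>
#include <vector>

template <typename T>
class GrowOnlyBuffer {
public:
    explicit GrowOnlyBuffer(std::size_t cap) : cap_(cap) {
        generations_.push_back(new T[cap_]);
    }
    ~GrowOnlyBuffer() {
        for (T* g : generations_) delete[] g;    // everything is reclaimed at the very end
    }

    T* data() { return generations_.back(); }
    std::size_t capacity() const { return cap_; }

    // Double the capacity. The old array is deliberately kept alive so a reader still holding
    // a pointer into it never touches freed memory. The retained arrays sum to less than the
    // newest one, so the intentional "leak" is bounded by roughly 2x the final working set.
    void grow() {
        T* bigger = new T[cap_ * 2];
        std::copy(generations_.back(), generations_.back() + cap_, bigger);
        generations_.push_back(bigger);
        cap_ *= 2;
    }

private:
    std::vector<T*> generations_;   // every array ever allocated, oldest first
    std::size_t cap_;
};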
If you allocate a bunch of heap at the beginning of your program, and you don't free it when you exit, that is not a memory leak per se. A memory leak is when your program loops over a section of code, and that code allocates heap and then "loses track" of it without freeing it.
In fact, there is no need to make calls to free() or delete right before you exit. When the process exits, all of its memory is reclaimed by the OS (this is certainly the case with POSIX. On other OSes – particularly embedded ones – YMMV).
The only caution I'd have with not freeing the memory at exit time is that if you ever refactor your program so that it, for example, becomes a service that waits for input, does whatever your program does, then loops around to wait for another service call, then what you've coded can turn into a memory leak.
In this sort of question context is everything. Personally I can't stand leaks, and in my code I go to great lengths to fix them if they crop up, but it is not always worth it to fix a leak, and when people are paying me by the hour I have on occasion told them it was not worth my fee for me to fix a leak in their code. Let me give you an example:
I was triaging a project, doing some perf work and fixing a lot of bugs. There was a leak during the application's initialization that I tracked down and fully understood. Fixing it properly would have required a day or so refactoring a piece of otherwise functional code. I could have done something hacky (like stuffing the value into a global and grabbing it at some point where I knew it was no longer in use in order to free it), but that would have just caused more confusion for the next guy who had to touch the code.
Personally I would not have written the code that way in the first place, but most of us don't get to always work on pristine well designed codebases, and sometimes you have to look at these things pragmatically. The amount of time it would have taken me to fix that 150 byte leak could instead be spent making algorithmic improvements that shaved off megabytes of ram.
Ultimately, I decided that leaking 150 bytes for an app that used around a gig of ram and ran on a dedicated machine was not worth fixing it, so I wrote a comment saying that it was leaked, what needed to be changed in order to fix it, and why it was not worth it at the time.
This is so domain-specific that it's hardly worth answering. Use your freaking head.
Space shuttle operating system: nope, no memory leaks allowed.
Rapid-development proof-of-concept code: fixing all those memory leaks is a waste of time.
And there is a spectrum of intermediate situations.
The opportunity cost ($$$) of delaying a product release to fix all but the worst memory leaks usually dwarfs any feelings of being "sloppy or unprofessional". Your boss pays you to make him money, not to get a warm, fuzzy feeling.
You have to first realize that there's a big difference between a perceived memory leak and an actual memory leak. Very frequently analysis tools will report many red herrings, and label something as having been leaked (memory or resources such as handles etc) where it actually isn't. Often times this is due to the analysis tool's architecture. For example, certain analysis tools will report run time objects as memory leaks because it never sees those object freed. But the deallocation occurs in the runtime's shutdown code, which the analysis tool might not be able to see.
With that said, there will still be times when you will have actual memory leaks that are either very difficult to find or very difficult to fix. So now the question becomes is it ever OK to leave them in the code?
The ideal answer is, "no, never." A more pragmatic answer may be "no, almost never." Very often in real life you have a limited amount of resources and time to resolve an endless list of tasks. When one of the tasks is eliminating memory leaks, the law of diminishing returns very often comes into play. You could eliminate, say, 98% of all memory leaks in an application in a week, but the remaining 2% might take months. In some cases it might even be impossible to eliminate certain leaks because of the application's architecture without a major refactoring of code. You have to weigh the costs and benefits of eliminating the remaining 2%.
While most answers concentrate on real memory leaks (which are not OK ever, because they are a sign of sloppy coding), this part of the question appears more interesting to me:
What if you allocate some memory and use it until the very last line of code in your application (for example, in a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
If the associated memory is used right up until the program ends, you cannot free it earlier. Whether the freeing is done by the program before exit or by the OS afterwards does not matter, as long as this is documented so that later changes don't introduce real memory leaks, and as long as there is no C++ destructor or C cleanup function that needs to run. A not-closed file might be revealed through a leaked FILE object, but a missing fclose() might also cause the buffer not to be flushed.
So, back to the original case, it is IMHO perfectly OK in itself, so much so that Valgrind, one of the most powerful leak detectors, will report such leaks only if requested. In Valgrind, when you overwrite a pointer without freeing the memory it pointed to first, that does get considered a memory leak, because it is more likely to happen again and to cause the heap to grow endlessly.
Then there are the not-freed memory blocks which are still reachable. One could make sure to free all of them at exit, but that is just a waste of time in itself. The point is whether they could have been freed before, because lowering memory consumption is useful in any case.
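To illustrate the distinction, a tiny made-up example: run it under valgrind --leak-check=full --show-leak-kinds=all and the first block is reported as "still reachable" while the second is "definitely lost".

#include <cstdlib>

char* g_config;                            // a global keeps this pointer alive until exit

int main() {
    g_config = static_cast<char*>(std::malloc(256));   // "still reachable" at exit
    char* tmp = static_cast<char*>(std::malloc(128));
    tmp = nullptr;                         // pointer overwritten without free(): "definitely lost"
    (void)tmp;
    return 0;
}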
Even if you are sure that your 'known' memory leak will not cause havoc, don't do it. At best, it will pave the way for you to make a similar and probably more critical mistake at a different time and place.
For me, asking this is like asking "Can I run a red light at 3 AM when no one is around?". Well sure, it may not cause any trouble at that time, but it will provide a lever for you to do the same in rush hour!
No, you should not have leaks that the OS will clean up for you. The reason (not mentioned in the answers above, as far as I could check) is that you never know when your main() will be re-used as a function/module in another program. If your main() becomes a frequently called function in another person's software, that software will have a memory leak that eats memory over time.
KIV
I agree with vfilby – it depends. In Windows, we treat memory leaks as relatively serious bugs. But it very much depends on the component.
For example, memory leaks are not very serious for components that run rarely and for limited periods of time. These components run, do their work, then exit. When they exit, all their memory is freed implicitly.
However, memory leaks in services or other long-running components (like the shell) are very serious. The reason is that these bugs 'steal' memory over time. The only way to recover it is to restart the components. Most people don't know how to restart a service or the shell – so if their system performance suffers, they just reboot.
So, if you have a leak, evaluate its impact:
To your software and your user's experience.
To the system (and the user) in terms of being frugal with system resources.
Impact of the fix on maintenance and reliability.
Likelihood of causing a regression somewhere else.
Foredecker
I'm surprised to see so many incorrect definitions of what a memory leak actually is. Without a concrete definition, a discussion on whether it's a bad thing or not will go nowhere.
As some commenters have rightly pointed out, a memory leak only happens when memory allocated by a process goes out of scope to the extent that the process is no longer able to reference or delete it.
A process which is grabbing more and more memory is not necessarily leaking. So long as it is able to reference and deallocate that memory, then it remains under the explicit control of the process and has not leaked. The process may well be badly designed, especially in the context of a system where memory is limited, but this is not the same as a leak. Conversely, losing scope of, say, a 32 byte buffer is still a leak, even though the amount of memory leaked is small. If you think this is insignificant, wait until someone wraps an algorithm around your library call and calls it 10,000 times.
I see no reason whatsoever to allow leaks in your own code, however small. Modern programming languages such as C and C++ go to great lengths to help programmers prevent such leaks and there is rarely a good argument not to adopt good programming techniques - especially when coupled with specific language facilities - to prevent leaks.
As regards existing or third party code, where your control over quality or ability to make a change may be highly limited, depending on the severity of the leak, you may be forced to accept or take mitigating action such as restarting your process regularly to reduce the effect of the leak.
It may not be possible to change or replace the existing (leaking) code, and therefore you may be bound to accept it. However, this is not the same as declaring that it's OK.
I guess it's fine if you're writing a program meant to leak memory (i.e. to test the impact of memory leaks on system performance).
It's really not a leak if it's intentional, and it's not a problem unless it's a significant amount of memory or could grow to be a significant amount of memory. It's fairly common not to clean up global allocations during the lifetime of a program. If the leak is in a server or long-running app and grows over time, then it's a problem.
I think you've answered your own question. The biggest drawback is how they interfere with the memory leak detection tools, but I think that drawback is a HUGE drawback for certain types of applications.
I work with legacy server applications that are supposed to be rock solid but they have leaks and the globals DO get in the way of the memory detection tools. It's a big deal.
In the book "Collapse" by Jared Diamond, the author wonders about what the guy was thinking who cut down the last tree on Easter Island, the tree he would have needed in order to build a canoe to get off the island. I wonder about the day many years ago when that first global was added to our codebase. THAT was the day it should have been caught.
I see the same problem as all scenario questions like this: What happens when the program changes, and suddenly that little memory leak is being called ten million times and the end of your program is in a different place so it does matter? If it's in a library then log a bug with the library maintainers, don't put a leak into your own code.
I'll answer no.
In theory, the operating system will clean up after you if you leave a mess (now that's just rude, but since computers don't have feelings it might be acceptable). But you can't anticipate every possible situation that might occur when your program is run. Therefore (unless you are able to conduct a formal proof of some behaviour), creating memory leaks is just irresponsible and sloppy from a professional point of view.
If a third-party component leaks memory, this is a very strong argument against using it, not only because of the imminent effect but also because it shows that the programmers work sloppily and that this might also impact other metrics. Now, when considering legacy systems this is difficult (consider web browsing components: to my knowledge, they all leak memory) but it should be the norm.
Historically, it did matter on some operating systems under some edge cases. These edge cases could exist in the future.
Here's an example: on SunOS in the Sun 3 era, there was an issue where, if a process called exec (or more traditionally fork and then exec), the new process would inherit the same memory footprint as the parent, and it could not be shrunk. If a parent process allocated half a gig of memory and didn't free it before calling exec, the child process would start out using that same half gig (even though it wasn't allocated). This behavior was best exhibited by SunTools (their default windowing system), which was a memory hog. Every app it spawned was created via fork/exec and inherited SunTools' footprint, quickly filling up swap space.
This was already discussed ad nauseam. Bottom line is that a memory leak is a bug and must be fixed. If a third party library leaks memory, it makes one wonder what else is wrong with it, no? If you were building a car, would you use an engine that is occasionally leaking oil? After all, somebody else made the engine, so it's not your fault and you can't fix it, right?
Generally a memory leak in a stand alone application is not fatal, as it gets cleaned up when the program exits.
What do you do for Server programs that are designed so they don't exit?
If you are the kind of programmer that does not design and implement code where the resources are allocated and released correctly, then I don't want anything to do with you or your code. If you don't care to clean up your leaked memory, what about your locks? Do you leave them hanging out there too? Do you leave little turds of temporary files laying around in various directories?
Leak that memory and let the program clean it up? No. Absolutely not. It's a bad habit, that leads to bugs, bugs, and more bugs.
Clean up after yourself. Yo momma don't work here no more.
As a general rule, if you've got memory leaks that you feel you can't avoid, then you need to think harder about object ownership.
But to your question, my answer in a nutshell is In production code, yes. During development, no. This might seem backwards, but here's my reasoning:
In the situation you describe, where the memory is held until the end of the program, it's perfectly okay to not release it. Once your process exits, the OS will clean up anyway. In fact, it might make the user's experience better: In a game I've worked on, the programmers thought it would be cleaner to free all the memory before exiting, causing the shutdown of the program to take up to half a minute! A quick change that just called exit() instead made the process disappear immediately, and put the user back to the desktop where he wanted to be.
However, you're right about the debugging tools: They'll throw a fit, and all the false positives might make finding your real memory leaks a pain. And because of that, always write debugging code that frees the memory, and disable it when you ship.

Reducing memory footprint of large unfamiliar codebase

Suppose you have a fairly large (~2.2 MLOC), fairly old (started more than 10 years ago) Windows desktop application in C/C++. About 10% of modules are external and don't have sources, only debug symbols.
How would you go about reducing application's memory footprint in half? At least, what would you do to find out where memory is consumed?
Override malloc()/free() and operator new/operator delete with wrappers that keep track of how big the allocations are and (by recording the call stack and later resolving it against the symbol table) where they are made from. On shutdown, have your wrapper display any memory still allocated.
This should enable you both to work out where the largest allocations are and to catch any leaks.
This is a description/skeleton of the memory-tracing application I used to reduce the memory consumption of our game by 20%. It helped me track down many allocations made by external modules.
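As an illustration, here is a minimal sketch of such a wrapper for the C++ side only: it replaces the global operator new/delete and keeps a tally of live bytes. A real tracker would also capture call stacks (for example with CaptureStackBackTrace on Windows) and resolve them against the symbol table at shutdown; this sketch also ignores static-destruction-order issues.

#include <cstdio>
#include <cstdlib>
#include <map>
#include <mutex>
#include <new>

namespace {
    thread_local bool g_in_tracker = false;   // guards against re-entering while recording
    std::mutex g_lock;

    std::map<void*, std::size_t>& live() {    // pointer -> requested size
        static std::map<void*, std::size_t> m;
        return m;
    }
}

void* operator new(std::size_t n) {
    void* p = std::malloc(n);
    if (!p) throw std::bad_alloc();
    if (!g_in_tracker) {                      // the tracking map's own nodes are not recorded
        g_in_tracker = true;
        { std::lock_guard<std::mutex> guard(g_lock); live()[p] = n; }
        g_in_tracker = false;
    }
    return p;
}

void operator delete(void* p) noexcept {
    if (p && !g_in_tracker) {
        g_in_tracker = true;
        { std::lock_guard<std::mutex> guard(g_lock); live().erase(p); }
        g_in_tracker = false;
    }
    std::free(p);
}

void dumpLiveAllocations() {                  // call this at shutdown
    std::lock_guard<std::mutex> guard(g_lock);
    std::size_t total = 0;
    for (const auto& kv : live()) total += kv.second;
    std::printf("%zu blocks still allocated, %zu bytes total\n", live().size(), total);
}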
It's not an easy task. Begin by chasing down any memory leaks you can find (a good tool would be Rational Purify). Skim the source code and try to optimize data structures and/or algorithms.
Sorry if this may sound pessimistic, but cutting down memory usage by 50% doesn't sound realistic.
There is a chance you can find some significant inefficiencies very fast. First you should check what the memory is used for. A tool which I have found very handy for this is Memory Validator.
Once you have this "memory usage map", you can check for low-hanging fruit. Are there any data structures consuming a lot of memory that could be represented in a more compact form? This is often possible, especially when data access is well encapsulated and when you have spare CPU power you can dedicate to compressing/decompressing them on each access.
I don't think your question is well posed.
The size of the source code is not directly related to the memory footprint. Sure, the compiled code will occupy some memory, but the application will have memory requirements of its own, both static (the variables declared in the code) and dynamic (the objects the application creates).
I would suggest you profile the program's execution and study the code carefully.
First places to start for me would be:
Does the application preallocate a lot of memory to be used later? Does this memory often sit around unused and never get handed out? Consider switching to newing/deleting (or better, a smart_ptr) as needed.
Does the code use a static array such as
Object arrayOfObjs[MAX_THAT_WILL_EVER_BE_USED];
and hand out objects from this array? If so, consider manually managing this memory (a sketch of one alternative follows below).
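For the last two points, a minimal sketch of the alternative, allocating on demand behind smart pointers instead of reserving a worst-case static array (the type and class names are illustrative):

#include <memory>
#include <vector>

struct Object { char payload[512]; };        // stands in for the real object type

class ObjectSource {
public:
    Object* acquire() {                      // allocate only when an object is actually needed
        pool_.push_back(std::make_unique<Object>());
        return pool_.back().get();
    }
    void releaseAll() { pool_.clear(); }     // memory is returned as soon as it is unneeded
private:
    std::vector<std::unique_ptr<Object>> pool_;
};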
One of the tools for memory usage analysis is LeakDiag, available as a free download from Microsoft. It apparently allows you to hook all user-mode allocators down to VirtualAlloc and to dump process allocation snapshots to XML at any time. These snapshots can then be used to determine which call stacks allocate the most memory and which call stacks are leaking. It lacks a pretty frontend for snapshot analysis (unless you can get LDParser/LDGrapher via Microsoft Premier Support), but all the data is there.
One more thing to note is that you may get false leak positives from the BSTR allocator due to caching; see "Hey, why am I leaking all my BSTR's?".