Working a lot with microcontrollers and C++, it is important for me to know that I do not perform dynamic memory allocations. However, I would like to get the most out of the standard library. What would be the best strategy to determine whether a function/class from the standard library uses dynamic memory allocation?
So far I have come up with these options:
Read and understand the standard library code. This is of course possible, but let's be honest, it is not the easiest code to read and there is a lot of it.
A variation on reading the code could be to have a script search for memory allocations and highlight those parts to make them easier to read. This would still require figuring out where the allocating functions are used, and so forth.
Just testing what I would like to use and watching the memory with the debugger. So far I have been using this method, but it is a reactive approach. I would like to know beforehand, when designing code, what I can use from the standard library. Also, there is nothing to say there are no (edge) cases where memory is allocated; those might not show up in such a limited test.
Finally, what could be done is to regularly scan the generated assembly code for memory allocations. I suspect this could be scripted and included in the toolchain, but again this is a reactive method.
If you see any other options or have experience doing something similar, please let me know.
P.S. I work mainly with ARM Cortex-Mx chips at the moment, compiling with GCC.
You have some very good suggestions in the comments, but no actual answers, so I will attempt an answer.
In essence you are implying some difference between C and C++ that does not really exist. How do you know that stdlib functions don't allocate memory?
Some STL functions are allowed to allocate memory, and they are supposed to use allocators. For example, vectors take a template parameter for an alternative allocator (pool allocators are common). There is even a standard trait, std::uses_allocator, for discovering whether a type uses an allocator.
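To illustrate, here is a minimal sketch (C++11 or later); pool_allocator is a hypothetical user-supplied type:

#include <memory>
#include <vector>

// std::vector is allocator-aware, so this trait holds:
static_assert(std::uses_allocator<std::vector<int>, std::allocator<int>>::value,
              "std::vector uses an allocator");

// The second template parameter substitutes your own allocator,
// e.g. a hypothetical pool_allocator<int>:
// std::vector<int, pool_allocator<int>> v;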
But... some types, like std::function, sometimes allocate memory and sometimes do not, depending on the size of the stored callable (implementations keep small callables in an internal buffer), so your paranoia is not entirely unjustified.
C++ allocates via new/delete, and new/delete in turn allocate via malloc/free.
So the real question is: can you override malloc/free? The answer is yes; see this answer: https://stackoverflow.com/a/12173140/440558. This way you can track all allocations and catch your error at run time, which is not bad.
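One concrete way to do this with the GNU toolchain is the linker's --wrap option; a minimal sketch, assuming you link with -Wl,--wrap=malloc:

#include <cstddef>

// The linker resolves __real_malloc to the original malloc.
extern "C" void* __real_malloc(std::size_t n);

// With -Wl,--wrap=malloc, every call to malloc lands here instead.
extern "C" void* __wrap_malloc(std::size_t n) {
    // Count, log, or set a breakpoint here to catch allocations at run time.
    return __real_malloc(n);
}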
You can go one better, if you are really hardcore. You can edit the standard C runtime library to rename malloc/free to something else. This is possible with objcopy, which is part of the GCC toolchain (e.g. objcopy --redefine-sym malloc=ma11oc libc.a). After renaming malloc/free to, say, ma11oc/fr33, any call that allocates or frees memory will no longer link.
Link your executable with "-nostdlib" and "-nodefaultlibs" options to gcc, and instead link your own set of libs, which you generated with objcopy.
To be honest, I've only seen this done successfully once, and by a programmer who did not trust objcopy, so he just manually found the "malloc" and "free" labels with a binary editor and changed them. It definitely works, though.
Edit:
As pointed out by Fureeish (see comments), it is not guaranteed by the C++ standard that new/delete use the C allocator functions.
It is however, a very common implementation, and your question does specifically mention GCC. In 30 years of development, I have never seen a C++ program that runs two heaps (one for C, and one for C++) just because the standard allows for it. There would simply be no advantage in it. That doesn't preclude the possibility that there may be an advantage in the future though.
Just to be clear, my answer assumes new USES malloc to allocate memory. This doesn't mean you can assume that every new call calls malloc though, as there may be caching involved, and the operator new may be overloaded to use anything at all at the global level. See here for GCC/C++ allocator schemes.
https://gcc.gnu.org/onlinedocs/libstdc++/manual/memory.html
Yet another edit:
If you want to get technical - it depends on the version of libstdc++ you are using. You can find operator new in new_op.cc, in the (what I assume is the official) source repository
(I will stop now)
The options you listed are pretty comprehensive, I think I would just add some practical color to a couple of them.
Option 1: if you have the source code for the specific standard library implementation you're using, you can "simplify" the process of reading it by generating a static call graph and reading that instead. In fact the LLVM opt tool can do this for you, as demonstrated in this question. If you were to do this, in theory you could just look at a given method and see whether it reaches an allocation function of any kind. No source code reading required, purely visual.
Option 4: scripting this is easier than you think. Prerequisites: make sure you're building with -ffunction-sections, which allows the linker to completely discard functions that are never called. When you generate a release build, you can simply use nm and grep on the ELF file to see if, for example, malloc appears in the binary at all.
For example, I have a bare metal Cortex-M based embedded system which I know for a fact has no dynamic memory allocation, but which links against a common standard library implementation. On the debug build I can do the following:
$ nm Debug/Project.axf | grep malloc
700172bc T malloc
$
Here malloc is found because dead code has not been stripped.
On the release build it looks like this:
$ nm Release/Project.axf | grep malloc
$
grep here exits with status 0 if a match was found and non-zero if it wasn't, so if you were to use this in a script it would be something like:
nm Debug/Project.axf | grep malloc > /dev/null
if [ "$?" == "0" ]; then
echo "error: something called malloc"
exit 1
fi
There's a mountain of disclaimers and caveats that come with any of these approaches. Keep in mind that embedded systems in particular use a wide variety of different standard library implementations, and each implementation is free to do pretty much whatever it wants with regard to memory management.
In fact they don't even have to call malloc and free, they could implement their own dynamic allocators. Granted this is somewhat unlikely, but it is possible, and thus grepping for malloc isn't actually sufficient unless you know for a fact that all memory management in your standard library implementation goes through malloc and free.
If you're serious about avoiding all forms of dynamic memory allocation, the only sure way I know of (and have used myself) is simply to remove the heap entirely. On most bare metal embedded systems I've worked with, the heap start address, end address, and size are almost always provided as symbols in the linker script. You should remove or rename these symbols. If anything uses the heap, you'll get a linker error, which is what you want.
To give a very concrete example, newlib is a very common libc implementation for embedded systems. Its malloc implementation requires that the common sbrk() function be present in the system. For bare metal systems, sbrk() is just implemented by incrementing a pointer that starts at the end symbol provided by the linker script.
If you were using newlib and you didn't want to mess with the linker script, you could still replace sbrk() with a function that simply hard faults, so you catch any attempt to allocate memory immediately. In my opinion this would still be much better than trying to stare at heap pointers on a running system.
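A minimal sketch of that idea, assuming the common bare-metal newlib stub signature (check what your libc actually expects):

#include <sys/types.h>

// Replace the usual _sbrk() stub: any attempt to grow the heap traps
// immediately instead of handing out memory.
extern "C" caddr_t _sbrk(int incr) {
    (void)incr;
    __builtin_trap();  // a hard fault you will catch instantly in the debugger
}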
Of course your actual system may be different, and you may have a different libc implementation that you're using. This question can really only be answered to any reasonable satisfaction in the exact context of your system, so you'll probably have to do some of your own homework. Chances are it's pretty similar to what I've described here.
One of the great things about bare metal embedded systems is the amount of flexibility that they provide. Unfortunately this also means there are so many variables that it's almost impossible to answer questions directly unless you know all of the details, which we don't here. Hopefully this will give you a better starting point than staring at a debugger window.
To make sure you do NOT use dynamic memory allocation, you can override the global operator new so that it always throws an exception. Then run unit tests against all your uses of the library functions you want to use.
You may need help from the linker to avoid use of malloc and free, as technically you can't override them.
Note: This would be in the test environment. You are simply validating that your code does not use dynamic allocation. Once you have done that validation, you don't need the override anymore so it would not be in place in the production code.
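A minimal sketch of such a test-only override (assuming exceptions are enabled in the test build):

#include <cstddef>
#include <new>

// Test build only: any dynamic allocation surfaces as a test failure.
void* operator new(std::size_t) {
    throw std::bad_alloc();  // or std::abort() if you prefer a hard stop
}
void operator delete(void* p) noexcept {
    (void)p;  // nothing is ever handed out, so nothing to free
}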
Are you sure you want to avoid them?
Sure, you don't want to use dynamic memory management that is designed for generic systems. That would definitely be a bad idea.
BUT does the toolchain you use not come with an implementation that is specific to your hardware and does an intelligent job for that hardware? Or does it have some special way to compile that allows you to use only a known piece of memory that you have pre-sized and aligned for the data area?
Moving on to containers: most STL containers allow you to specialize them with an allocator. You can write your own allocator that does not use dynamic memory.
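As a rough sketch (C++17 for the inline statics; alignment across mixed element types and thread safety are glossed over), an allocator that carves from a fixed static buffer and never touches the heap might look like this:

#include <cstddef>
#include <new>
#include <vector>

// Monotonic allocator over a fixed static buffer: allocate bumps a cursor,
// deallocate is a no-op, and nothing ever reaches malloc.
template <class T>
struct StaticArena {
    using value_type = T;
    alignas(std::max_align_t) static inline unsigned char buf[4096];
    static inline std::size_t used = 0;

    StaticArena() = default;
    template <class U> StaticArena(const StaticArena<U>&) {}

    T* allocate(std::size_t n) {
        const std::size_t bytes = n * sizeof(T);
        if (used + bytes > sizeof(buf)) throw std::bad_alloc();
        T* p = reinterpret_cast<T*>(buf + used);
        used += bytes;
        return p;
    }
    void deallocate(T*, std::size_t) {}  // reclaimed only by resetting 'used'
};

template <class T, class U>
bool operator==(const StaticArena<T>&, const StaticArena<U>&) { return true; }
template <class T, class U>
bool operator!=(const StaticArena<T>&, const StaticArena<U>&) { return false; }

std::vector<int, StaticArena<int>> v;  // grows inside buf, never calls malloc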
Generally you can check (suitably thorough) documentation to see whether a function (e.g., a constructor) can throw std::bad_alloc. (The absence is often phrased as noexcept, since that exception is often the only one an operation risks.) One notable exception is std::inplace_merge, which becomes slower rather than throwing if allocation fails.
The GCC linker supports a -Map option (e.g. -Wl,-Map=output.map) which will generate a link map with all the symbols in your executable. If anything in your application does dynamic memory allocation unintentionally, you will find a section with *alloc and free functions.
If you start with a program with no allocation, you can check the map after every compile to see whether you have introduced one through library function calls.
I used this method to identify an unexpected dynamic allocation introduced by using a VLA.
I'd like to write an interpreter and tracing JIT for a programming language I'm designing. I already have many years of experience programming in C++, but I've been wondering if perhaps newer alternatives might be better. One of the things I found most frustrating, back in my C++ days, was having to use header files to deal with the clunky one-pass compiler model. The problem is that not all languages are equally suited for this purpose. For my tracing JIT, I need to be able to write executable code into memory and have the interpreter call to that code. I will also need the generated code to be able to call back into host functions.
I started looking at Go and saw that the language had pointers but no pointer arithmetic. This immediately struck me as a huge issue. I may well want to write my own allocator and garbage collector. I will need to closely control the way my language objects are laid out in memory and be able to get the address of specific fields and write to them. Unless there are ways to deal with this, it seems like Go fails to be low-level enough for my purposes.
The D language seems promising. It has pointer arithmetic and a clear outline of the ABI needed to call in and out of D. I've heard lots of good things about it. It also has garbage collection which is nice for compiler writing, but I still have a few things I'm not sure about:
Does D have standard libs that will allow me to mark chunks of memory as executable?
If I allocate a big chunk of memory that I want to manage myself, with my own GC, and have a bunch of pointers going into there, will this pose problems with D's garbage collector?
How well does D interoperate with C code, in your experience? Is loading C dynamic libraries and calling into them fairly easy?
Finally, there's the whole support aspect. For those who have used D on linux here, how good is the toolchain? Any issues? Has anyone written a JIT compiler in D, and if so, how was the experience?
I believe so, see core.memory.GC if I remember right.
No, it shouldn't. Just call malloc or whatever you need, and make sure the GC doesn't see it.
Yes, it's pretty easy to interoperate with C code.
Caveat: You probably don't want to rely on the GC either, since it's not 'precise' (i.e. can and does leak memory if you're unlucky). But for small blocks of data it's usually fine.
Go does allow pointer arithmetic, but you must import the unsafe package to do so (or use a C function). Pointer arithmetic is a common source of bugs, and Go has other mechanisms, like slices, which provide safe ways to do some of the same activities that require pointer arithmetic in C. With unsafe you can cast any pointer to a uintptr and back, and uintptr is an ordinary numeric type, which allows you to do arithmetic.
I started looking at Go and saw that the language had pointers but no pointer arithmetic. This immediately struck me as a huge issue.
You, obviously, haven't tried the language. It works pretty well without any "pointer arithmetic". If you really need to bend rules, there is always "unsafe" package that will allow you to do anything.
I may well want to write my own allocator and garbage collector. I will need to closely control the way my language objects are laid out in memory and be able to get the address of specific fields and write to them.
I haven't written an allocator or garbage collector myself, but you can take the address of a field of a structure. All Go data structures are simple and easy to control and reason about. See http://research.swtch.com/godata for a short introduction. Also, size and alignment guarantees are part of the language: http://golang.org/ref/spec#Size_and_alignment_guarantees. If nothing else, you could always drop into C or asm.
IMHO, you should try to implement some small task to see if Go fits your requirements. Feel free to ask questions at http://groups.google.com/group/golang-nuts.
Alex
There is already a JIT compiler, a very serious one, written in D. I highly recommend taking a look at http://lycus.org/ , more specifically the pages about the MCI project: http://github.com/lycus/mci . The MCI documentation will give you some more information. As you will see, MCI is more than just a JIT; it has its own (better than anything else I have seen) IR, optimizer, verifier, etc.
I am about to delve into kernel land. My question relates to the programming language. I have seen that most tutorials are written in C. I currently program in C++ and assembly. I also studied C before C++, but I didn't use it a lot. Would it be possible to program in kernel mode using simplistic C++ without any advanced constructs? Basically I am trying to avoid the minor differences that exist between the two languages (like no bool in C, no automatic returning of 0 from main, really minor differences). I won't be using templates, classes and the like. So would it be possible to program in kernel mode using simplistic C++ without any major annoyances?
Even if not officially supported, you can use C++ as the development language for Windows kernel development.
You should be aware of the following things:
you MUST define the new and delete operators to map to ExAllocatePoolWithTag and ExFreePool (a sketch follows after this list).
try to avoid virtual functions. It does not seem possible to control the location of the object's vtable, and this may have unexpected results if it is in a pageable portion and your code is called at IRQL >= DISPATCH_LEVEL.
if you still need to use virtual method tables, then lock the .rdata segment before using them at IRQL >= DISPATCH_LEVEL.
Apart from these kinds of limitations, you can use C++ for your driver development.
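A minimal sketch of the first point (WDK; assuming non-paged pool and a hypothetical 'MyDr' pool tag; a real driver should pick pool types and add sized overloads to suit):

#include <ntddk.h>

// Route all C++ allocations through the kernel pool allocator.
void* __cdecl operator new(size_t size) {
    return ExAllocatePoolWithTag(NonPagedPool, size, 'rDyM');  // tag shows as "MyDr"
}

void __cdecl operator delete(void* p) noexcept {
    if (p) ExFreePool(p);
}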
Here are two links if you want to do C++ with the WDK. It's a one-time setup effort.
The NT Insider:Guest Article: C++ in an NT Driver
The NT Insider:Global Relief Effort - C++ Runtime Support for the NT DDK
I have seen kernel code use lots of auto-locks/smart pointers; although they make the code neat, I feel they have a learning curve for beginners to fully understand, and if abused, lots of construct/destruct code slows things down.
If you write your code carefully, knowing what exactly stands behind each definition, operator, call, etc, then there should be no problem writing kernel code in C++. The Microsoft document mentioned in the comments above is a good reading precisely because it describes situations in which C++ isn't as transparent as C or doesn't provide similar important guarantees and from that you know what to avoid.
Microsoft has written a guide. Basically they tell us to steer clear of anything but C++'s relaxed rules of variable declarations... sigh. Anything else and you're on your own. Anyway, it can't be all that bad, but here are some examples of what you need to remember:
Memory allocated in the paged pool can get paged out. If you try to access it at IRQL >= DISPATCH_LEVEL you're screwed (or at least you will be every once in a while, when your customer complains about your driver BSODing their system)! Test your driver on a low-memory system under load!
The non-paged pool is limited, you most likely cannot allocate all your needs from it.
Stack is much smaller than in user mode ~12-24K.
Anything you do involving floating point in the kernel must be protected by KeSaveFloatingPointState and KeRestoreFloatingPointState (see the sketch after this list)
C++ exceptions: No
Read the guide for more. Now if you can make sure that the generated code follows the rules, go ahead and use C++.
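A hedged sketch of the floating-point rule (WDK APIs; error handling trimmed to the essentials):

#include <ntddk.h>

// Save the FPU/SSE state before doing floating point in the kernel,
// and restore it afterwards.
void UseFloatSafely() {
    KFLOAT_SAVE save;
    if (NT_SUCCESS(KeSaveFloatingPointState(&save))) {
        // ... floating point math goes here ...
        KeRestoreFloatingPointState(&save);
    }
}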
Google's C++ style guide says "We do not use exceptions". The guide does not mention the STL with respect to exception usage. Since STL allocators can fail, how do they handle exceptions thrown by containers?
If they use STL, how is the caller informed of allocation failures? STL methods like push_back() or map operator[] do not return any status codes.
If they do not use STL, what container implementation do they use?
They say that they don't use exceptions, not that nobody should use them. If you look at the rationale they also write:
Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.
The usual legacy problem. :-(
We simply don't handle exceptions thrown by containers, at least in application-level code.
I've been an engineer at Google Search working in C++ since 2008. We do use STL containers often. I cannot personally recall a single major failure or bug that was ever traced back to something like vector::push_back() or map::operator[] failing, where we said "oh man, we have to rewrite this code because the allocation could fail" or "dang, if only we used exceptions, this could have been avoided." Does a process ever run out of memory? Yes, but this is usually a simple mistake (e.g., someone added a large new data file to the program and forgot to increase the RAM allocation) or a catastrophic failure where there's no good way to recover and proceed. Our system already manages and restarts jobs automatically to be robust to machines with faulty disks, cosmic rays, etc., and this is really no different.
So as far as I can tell, there is no problem here.
I'm pretty sure that they mean they do not use exceptions in their code. If you check out their cpplint script, it does check to ensure you are including the correct headers for STL containers (like vector, list, etc).
I have found that Google mentions this explicitly about STL and exceptions (emphasis is mine):
Although you should not use exceptions in your own code, they are used extensively in the ATL and some STLs, including the one that comes with Visual C++. When using the ATL, you should define _ATL_NO_EXCEPTIONS to disable exceptions. You should investigate whether you can also disable exceptions in your STL, but if not, it is OK to turn on exceptions in the compiler. (Note that this is only to get the STL to compile. You should still not write exception handling code yourself.)
I don't like such decisions (lucky that I am not working for Google), but they are quite clear about their behaviour and intentions.
You can’t handle allocation failures anyway on modern operating systems; as a performance optimization, they typically over-commit memory. For instance, if you call malloc() and ask for a really huge chunk of memory on Linux, it will succeed even if the memory required to back it actually isn’t there. It’s only when you access it that the kernel actually tries to allocate pages to back it, and at that point it’s too late to tell you that the allocation failed anyway.
So:
Except in special cases, don’t worry about allocation failures. If the machine runs out of memory, that’s a catastrophic failure from which you can’t reliably recover.
Nevertheless, it’s good practice to catch unhandled exceptions and log the e.what() output, then re-throw, since that may be more informative than a backtrace, and typical C++ library implementations don’t do that automatically for you.
The whole huge thread above about how you can't rely on crashing when you run out of memory is complete and utter rubbish. The C(++) standard may not guarantee it, but on modern systems crashing is the only thing you can rely on if you run out of memory. In particular, you can't rely on getting a NULL or indeed any other indication from your allocator, up to and including a C++ exception.
If you find yourself on an embedded system where page zero is accessible, I strongly suggest you fix that by mapping an inaccessible page at that location. Human beings cannot be relied upon to check for NULL pointers everywhere, but you can fix that by mapping a page once rather than trying to correct every possible (past, present and future) location at which someone might have missed a NULL.
I will qualify the above by saying that it’s possible you’re using some kind of custom allocator, or that you’re on a system that doesn’t over-commit (embedded systems without swap are one example of that, but not the only example). In that case, maybe you can handle out-of-memory conditions gracefully, on your system. But in general in the 21st century I’m afraid you are unlikely to get the chance; the first you’ll know that your system is out of memory is when things start crashing.
The STL itself only throws directly in case of memory allocation failure. But usually a real-world application can fail for a variety of reasons; memory allocation failure is just one of them. On 32-bit systems memory allocation failure is not something that should be ignored, as it can occur. So the entire discussion above about memory allocation failure never happening is kind of pointless. Even assuming that, one would have to write one's code using two-step initialization. And C++ exception handling predates 64-bit architectures by a long time.
I'm not certain how far I should go in dignifying the negative professionalism shown here by Google, so I will only answer the question asked. I remember a paper from IBM around 1997 stating how well some people at IBM understood and appreciated the implications of C++ exception handling. OK, professionalism is not necessarily an indicator of success.
So giving up exception handling is not only a problem if one uses the STL; it is a problem if one uses C++ as such. It means giving up on:
constructor failure
being able to use member objects and base class objects as arguments for any of the following base/member class constructors (without any testing). It is no wonder that people used two-step construction before C++ exception handling existed.
hierarchical and rich error messages in an environment that allows code to be provided by customers or third parties, which can throw errors that the original writer of the calling code could not possibly have foreseen when writing it, and could not have provided space for in his return error code range
avoiding such hacks as returning a pointer to a static memory object to signal allocation failure, which the authors of FlexLM did
being able to use a memory allocator that returns addresses into a memory-mapped sparse file. In this case allocation failure happens when one accesses the memory in question. (Currently this works only on Windows, but Apple forced the GNU team to provide the necessary functionality in the g++ compiler. Some more pressure from Linux g++ developers will be needed to provide this functionality for them too.) (Oops: this even applies to the STL.)
being able to leave C-style coding (ignoring return values) behind us, rather than having to use a debugger with a debug executable to find out what is failing in a non-trivial environment with child processes and shared libraries provided by third parties, or doing remote execution
being able to return rich error information to the caller without just dumping everything to stderr
There is only one possibility to handle allocation failure under the assumptions outlined in the question:
that the allocator forces application exit on allocation failure. In particular, this requires a custom allocator.
Index-out-of-bounds exceptions are less interesting in this context, because the application can ensure they won't happen using pre-checks.
Late to the party here, but I didn't see any comparison between C++ exception handling at Google and error handling in Go. Specifically, Go only has error handling via a built-in error type. The linked Golang blog post explicitly concludes:
Proper error handling is an essential requirement of good software. By employing the techniques described in this post you should be able to write more reliable and succinct Go code.
The creation of Golang certainly took into account best practices from working with C++. The underlying intuition is that less can be more. I haven't worked at Google, but I do find their use of C++ and creation of Golang to be potentially suggestive of underlying best practices at the company.
In unmanaged C/C++ code, what are the best practices for detecting memory leaks? And what coding guidelines help avoid them? (As if it were that simple ;)
We have used a somewhat silly way in the past: incrementing a counter for every memory allocation call and decrementing it while freeing. At the end of the program, the counter value should be zero.
I know this is not a great way, and there are a few catches. (For instance, if you free memory that was allocated by a platform API call, your allocation count will not exactly match your freeing count. Of course, we then incremented the counter when calling API calls that allocate memory.)
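For what it's worth, here is a minimal sketch of that counting scheme in C++, overriding the global operators (a real version would also hook malloc/free and the platform APIs):

#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <new>

// Global allocation counter: +1 on every new, -1 on every delete.
static std::atomic<long> g_allocs{0};

void* operator new(std::size_t n) {
    void* p = std::malloc(n);
    if (!p) throw std::bad_alloc();
    ++g_allocs;
    return p;
}

void operator delete(void* p) noexcept {
    if (p) { --g_allocs; std::free(p); }
}

// On exit, a non-zero count means something leaked.
static struct LeakReport {
    ~LeakReport() { std::printf("outstanding allocations: %ld\n", g_allocs.load()); }
} g_report;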
I am expecting your experiences, suggestions and maybe some references to tools which simplify this.
If your C/C++ code is portable to *nix, few things are better than Valgrind.
If you are using Visual Studio, Microsoft provides some useful functions for detecting and debugging memory leaks.
I would start with this article:
https://msdn.microsoft.com/en-us/library/x98tx3cf(v=vs.140).aspx
Here is a quick summary of that article. First, include these headers:
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
Then you need to call this when your program exits:
_CrtDumpMemoryLeaks();
Alternatively, if your program does not exit in the same place every time, you can call this at the start of your program:
_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
Now when the program exits, all the allocations that were not freed will be printed in the Output window, along with the file they were allocated in and the allocation occurrence number.
This strategy works for most programs. However, it becomes difficult or impossible in certain cases. Using third-party libraries that do some initialization on startup may cause other objects to appear in the memory dump and can make tracking down your leaks difficult. Also, if any of your classes have members with the same name as any of the memory allocation routines (such as malloc), the CRT debug macros will cause problems.
There are other techniques explained in the MSDN link referenced above that could be used as well.
In C++: use RAII. Smart pointers like std::unique_ptr, std::shared_ptr, std::weak_ptr are your friends.
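A tiny sketch of the idea:

#include <memory>

struct Widget { int data[16]; };

void use() {
    auto w = std::make_unique<Widget>();  // sole owner: freed when w goes out of scope
    auto s = std::make_shared<Widget>();  // shared owner: freed with the last reference
}   // no explicit delete anywhere, so no leak, even if the body throws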
As a C++ developer, here are some simple guidelines:
Use pointers only when absolutely necessary
If you need a pointer, double-check whether a smart pointer is a possibility
Use the GRASP Creator pattern.
As for the detection of memory leaks personally I've always used Visual Leak Detector and find it to be very useful.
I've been using DevStudio for far too many years now, and it always amazes me just how many programmers don't know about the memory analysis tools that are available in the debug runtime libraries. Here are a few links to get started with:
Tracking Heap Allocation Requests - specifically the section on Unique Allocation Request Numbers
_CrtSetDbgFlag
_CrtSetBreakAlloc
Of course, if you're not using DevStudio then this won't be particularly helpful.
I’m amazed no one mentioned DebugDiag for Windows OS.
It works on release builds, and even at the customer site.
(You just need to keep your release version PDBs, and configure DebugDiag to use Microsoft public symbol server)
Visual Leak Detector is a very good tool, although it does not support the calls on VC9 runtimes (MSVCR90D.DLL, for example).
Microsoft VC++ in debug mode shows memory leaks, although it doesn't show where your leaks are.
If you are using C++ you can always avoid using new explicitly: you have vector, string, auto_ptr (pre C++11; replaced by unique_ptr in C++11), unique_ptr (C++11) and shared_ptr (C++11) in your arsenal.
When new is unavoidable, try to hide it in a constructor (and hide delete in a destructor); the same works for 3rd party APIs.
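A sketch of that pattern around a made-up C API (lib_t, lib_open and lib_close are hypothetical stand-ins, stubbed out so the sketch is self-contained):

// Hypothetical 3rd-party C API.
struct lib_t { int dummy; };
lib_t* lib_open() { return new lib_t{}; }
void lib_close(lib_t* p) { delete p; }

class LibHandle {
public:
    LibHandle() : h_(lib_open()) {}
    ~LibHandle() { if (h_) lib_close(h_); }  // release is automatic and exception-safe
    LibHandle(const LibHandle&) = delete;    // forbid copies: no accidental double-close
    LibHandle& operator=(const LibHandle&) = delete;
    lib_t* get() const { return h_; }
private:
    lib_t* h_;
};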
There are various replacement "malloc" libraries out there that will allow you to call a function at the end and it will tell you about all the unfreed memory, and in many cases, who malloced (or new'ed) it in the first place.
If you're using MS VC++, I can highly recommend this free tool from the codeproject:
leakfinder by Jochen Kalmbach.
You simply add the class to your project, and call
InitAllocCheck(ACOutput_XML)
DeInitAllocCheck()
before and after the code you want to check for leaks.
Once you've built and run the code, Jochen provides a neat GUI tool where you can load the resulting .xml leak file and navigate through the call stack where each leak was generated, to hunt down the offending line of code.
Rational's (now owned by IBM) PurifyPlus illustrates leaks in a similar fashion, but I find the leakfinder tool actually easier to use, with the bonus of it not costing several thousand dollars!
Never used it myself, but my C friends tell me Purify.
If you're using Visual Studio it might be worth looking at Bounds Checker. It's not free, but it's been incredibly helpful in finding leaks in my code. It doesn't just do memory leaks either, but also GDI resource leaks, WinAPI usage errors, and other stuff. It'll even show you where the leaked memory was initialized, making it much easier to track down the leak.
I think that there is no easy answer to this question. How you might really approach this solution depends on your requirements. Do you need a cross platform solution? Are you using new/delete or malloc/free (or both)? Are you really looking for just "leaks" or do you want better protection, such as detecting buffer overruns (or underruns)?
If you are working on the windows side, the MS debug runtime libraries have some basic debug detection functionality, and as another has already pointed out, there are several wrappers that can be included in your source to help with leak detection. Finding a package that can work with both new/delete and malloc/free obviously gives you more flexibility.
I don't know enough about the unix side to provide help, although again, others have.
But beyond just leak detection, there is the notion of detecting memory corruption via buffer overruns (or underruns). This type of debug functionality is, I think, more difficult than plain leak detection. This type of system is also further complicated if you are working with C++ objects, because polymorphic classes can be deleted in varying ways, causing trickiness in determining the true base pointer that is being deleted. I know of no good "free" system that does decent protection for overruns. We have written a system (cross-platform) and found it to be pretty challenging.
I'd like to offer something I've used at times in the past: a rudimentary leak checker which is source level and fairly automatic.
I'm giving this away for three reasons:
You might find it useful.
Though it's a bit crufty, I don't let that embarrass me.
Even though it's tied to some win32 hooks, that should be easy to alleviate.
There are things you must be careful about when using it: don't do anything that needs to lean on new in the underlying code, beware of the warnings about cases it might miss at the top of leakcheck.cpp, and realize that if you turn on (and fix any issues with) the code that does image dumps, you may generate a huge file.
The design is meant to allow you to turn the checker on and off without recompiling everything that includes its header. Include leakcheck.h where you want to track checking and rebuild once. Thereafter, compile leakcheck.cpp with or without LEAKCHECK #define'd and then relink to turn it on and off. Including unleakcheck.h will turn it off locally in a file. Two macros are provided: CLEARALLOCINFO() will avoid reporting the same file and line inappropriately when you traverse allocating code that didn't include leakcheck.h. ALLOCFENCE() just drops a line in the generated report without doing any allocation.
Again, please realize that I haven't used this in a while and you may have to work with it a bit. I'm dropping it in to illustrate the idea. If there turns out to be sufficient interest, I'd be willing to work up an example, updating the code in the process, and replace the contents of the following URL with something nicer that includes a decently syntax-colored listing.
You can find it here: http://www.cse.ucsd.edu/~tkammeye/leakcheck.html
For Linux:
Try Google Perftools
There are a lot of tools that do similar alloc/free counting; the pros of Google Perftools:
Quite fast (in comparison to valgrind: very fast)
Comes with nice graphical display of results
Has other useful capabilities: cpu-profiling, memory-usage profiling...
The best defense against leaks is a program structure that minimizes the use of malloc. This is not only good from a programming perspective, it also improves performance and maintainability. I'm not talking about using other things in place of malloc, but about re-using objects and keeping very explicit tabs on all objects being passed around, rather than allocating willy-nilly as one often gets used to in languages with garbage collectors, like Java.
For example, a program I work on has a bunch of frame objects representing image data. Each frame object has sub-data, which the frame's destructor frees. The program keeps a list of all frames that are allocated, and when it needs a new one, checks a list of unused frame objects to see if it can re-use an existing one rather than allocate a new one. On shutdown, it just iterates through the list, freeing everything.
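A rough sketch of that reuse scheme (names hypothetical, single-threaded for brevity):

#include <list>
#include <memory>

struct Frame { /* image data and sub-data ... */ };

class FramePool {
public:
    std::unique_ptr<Frame> acquire() {
        if (!free_.empty()) {                 // reuse an idle frame if possible
            auto f = std::move(free_.front());
            free_.pop_front();
            return f;
        }
        return std::make_unique<Frame>();     // allocate only when none are idle
    }
    void release(std::unique_ptr<Frame> f) {
        free_.push_back(std::move(f));        // back onto the unused list
    }
private:
    std::list<std::unique_ptr<Frame>> free_;  // everything freed automatically on destruction
};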
I would recommend using Memory Validator from Software Verify.
This tool proved invaluable in helping me track down memory leaks and improve the memory management of the applications I am working on.
A very complete and fast tool.
Are you counting the allocs and frees by interposing your own functions, which record the calls and then pass them on to the real functions?
This is the only way you can keep track of calls originating from code that you haven't written.
Have a look at the man page for ld.so. Or ld.so.1 on some systems.
Also, Google for LD_PRELOAD and you'll find some interesting articles explaining the technique over on www.itworld.com.
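A bare-bones sketch of the technique on Linux (build with g++ -shared -fPIC shim.cpp -o shim.so -ldl, run with LD_PRELOAD=./shim.so ./your_prog; a production shim must guard against the lookup and logging themselves allocating):

#include <dlfcn.h>
#include <cstddef>
#include <cstdio>

// Our malloc shadows libc's; dlsym(RTLD_NEXT, ...) finds the real one.
extern "C" void* malloc(std::size_t n) {
    static void* (*real_malloc)(std::size_t) =
        reinterpret_cast<void* (*)(std::size_t)>(dlsym(RTLD_NEXT, "malloc"));
    void* p = real_malloc(n);
    std::fprintf(stderr, "malloc(%zu) = %p\n", n, p);  // record every call
    return p;
}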
At least for MS VC++, the C Runtime library has several functions that I've found helpful in the past. Check the MSDN help for the _Crt* functions.
Paul Nettle's mmgr is a long-time favourite tool of mine. You include mmgr.h in your source files, define TEST_MEMORY, and it delivers a text file full of the memory problems that occurred during a run of your app.
General Coding Guideline:
Resources should be deallocated at the same "layer" (function/class/library) where they are allocated.
If this is not possible, try to use some automatic deallocation (boost shared pointer...)
Memory debugging tools are worth their weight in gold, but over the years I've found that two simple ideas can prevent most memory/resource leaks from being coded in the first place.
Write the release code immediately after writing the acquisition code for the resources you want to allocate. With this method it's harder to "forget", and in some sense it forces one to seriously think about the lifecycle of the resources up front, instead of as an aside.
Use return as sparingly as possible. What is allocated should be freed in only one place if possible. The conditional path between acquisition of a resource and its release should be designed to be as simple and obvious as possible.
At the top of this list (when I read it) was valgrind. Valgrind is excellent if you are able to reproduce the leak on a test system. I've used it with great success.
What if you've just noticed that the production system is leaking right now and you have no idea how to reproduce it in test? Some evidence of what's wrong is captured in the state of that production system, and it might be enough to provide an insight on where the problem is so you can reproduce it.
That's where Monte Carlo sampling comes into the picture. Read Raymond Chen's blog article,
“The poor man's way of identifying memory leaks” and then check out my implementation (assumes Linux, tested only on x86 and x86-64)
http://github.com/tialaramex/leakdice/tree/master
Working on the Motorola cell phone operating system, we hijacked the memory allocation library to observe all memory allocations. It helped find a lot of problems with memory allocations.
Since prevention is better than cure, I would recommend using a static analysis tool like Klocwork or PC-lint.
Valgrind is a nice option for Linux. Under MacOS X, you can enable the MallocDebug library which has several options for debugging memory allocation problems (see the malloc manpage, the "ENVIRONMENT" section has the relevant details). The OS X SDK also includes a tool called MallocDebug (usually installed in /Developer/Applications/Performance Tools/) that can help you to monitor usage and leaks.
Detect:
Debug CRT
Avoid:
Smart pointers, Boehm GC
A nice malloc, calloc and realloc replacement is rmdebug; it's pretty simple to use. It is much faster than valgrind, so you can test your code extensively. Of course it has some downsides: once you've found a leak you probably still need to use valgrind to find where it appears, and you can only test mallocs that you make directly. If a lib leaks because you use it wrong, rmdebug won't find it.
http://www.hexco.de/rmdebug/
Most memory profilers slow my large complex Windows application to the point where the results are useless. There is one tool that works well for finding leaks in my application: UMDH - http://msdn.microsoft.com/en-us/library/ff560206%28VS.85%29.aspx
mtrace appears to be the standard built-in one for Linux. The steps are:
set up the environment variable MALLOC_TRACE in bash
MALLOC_TRACE=/tmp/mtrace.dat
export MALLOC_TRACE;
Add #include <mcheck.h> to the top of your main source file
Add mtrace(); at the start of main and muntrace(); at the bottom (before the return statement)
compile your program with the -g switch for debug information
run your program
display leak info with mtrace your_prog_exe_name /tmp/mtrace.dat
(I had to install the mtrace perl script first on my Fedora system with yum install glibc-utils)