Identify non-released memory during runtime - C++

How does one best identify memory that is not released properly during runtime? I know of several programs that identify allocated but never-freed (leaked) memory when the application closes. My issue, though, is that during program execution something (possibly a thread) creates objects that are not freed, even though they should be once the system is done with the "work".
With the system kept running, this builds up over time. But when the program shuts down, the memory does seem to be freed correctly, so it is never reported as a leak by MadExcept, which I use at the moment.
How do I best go about detecting what allocates this memory every time the "work" runs and does not free it until program termination? This is a quite large server system: around 1 million lines of code, several DLL sub-projects, and many threads running (40-50).
Perhaps there is some system that could identify allocated objects that have been alive for longer than X minutes. Say 60 minutes is selected and the system is left running; that information could then be used to locate many of these long-lived objects and investigate them.
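A minimal sketch of that idea, assuming you can hook the allocation sites you care about (the AgeRegistry name and its interface are my own invention, not an existing tool):

```cpp
#include <chrono>
#include <cstddef>
#include <map>
#include <mutex>
#include <vector>

// Hypothetical allocation-age registry. Real tools hook the allocator;
// here the tracked call sites invoke track()/untrack() explicitly so the
// sketch stays portable and self-contained.
class AgeRegistry {
public:
    using clock = std::chrono::steady_clock;

    void track(const void* p, std::size_t bytes) {
        std::lock_guard<std::mutex> lock(mu_);
        live_[p] = Entry{clock::now(), bytes};
    }
    void untrack(const void* p) {
        std::lock_guard<std::mutex> lock(mu_);
        live_.erase(p);
    }
    // Addresses of allocations older than `age`: the long-lived objects
    // worth investigating while the server keeps running.
    std::vector<const void*> older_than(clock::duration age) const {
        std::lock_guard<std::mutex> lock(mu_);
        const auto cutoff = clock::now() - age;
        std::vector<const void*> out;
        for (const auto& kv : live_)
            if (kv.second.when < cutoff)
                out.push_back(kv.first);
        return out;
    }

private:
    struct Entry { clock::time_point when; std::size_t bytes; };
    mutable std::mutex mu_;
    std::map<const void*, Entry> live_;
};
```

A background thread could dump older_than(std::chrono::minutes(60)) periodically; anything that keeps showing up is a candidate for the kind of accumulation described above.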

If you are using C++ and Visual Studio, the CRT debug heap is helpful: you can call _CrtMemCheckpoint() and _CrtMemDumpStatistics() whenever you need them.

I ended up trying the evaluation version of Software Verify's C++ Memory Validator.
It worked just as I wanted: it provided a timeline of memory allocations, letting me identify what had been accumulating over time and how long it had been alive. Using that, I was able to identify the problem and fix it.

Related

Is it practical to delete all heap-allocated memory after you have finished using it?

Are there any specific situations in which it would not be practical or necessary to delete heap-allocated memory when you are done using it? Or does failing to delete it always affect programs to a large extent?
In a few cases, I've had code that allocated lots of stuff on the heap. A typical run of the program took at least a few hours, and with larger data sets, that could go up to a couple of days or so. When it finished and you exited the program, all the destructors ran, and freed all the memory.
That led to a bit of a problem though. Especially after a long run (which allocated many blocks on the heap) it could take around five minutes for all the destructors to run.
So, I rewrote some destructors to do nothing, not even free the memory an object had allocated.
The program had a pretty simple memory usage pattern, so everything it allocated remained in use until you shut it down. Disabling the destructors so they no longer released the memory that had been allocated reduced the time to shut down the program from ~5 minutes to what appeared instant (but was still actually pretty close to 100 ms).
That said, this is really only rarely an option. The vast majority of the time, a program should clean up after itself. With well written code it's usually pretty trivial anyway.
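A minimal sketch of that trick (the names here are hypothetical, not from the program described): a process-wide flag that the destructors consult, so shutdown can skip the frees and leave reclamation to the OS.

```cpp
#include <vector>

// Hypothetical "no-op destructor" switch. Set fast_shutdown just before
// exiting and the expensive frees are skipped; the OS reclaims the whole
// address space at once when the process ends.
struct BigTable {
    static bool fast_shutdown;   // flipped to true right before exit
    static int  frees;           // instrumentation, just to observe the effect
    std::vector<int>* data;

    BigTable() : data(new std::vector<int>(1000)) {}
    ~BigTable() {
        if (fast_shutdown) return;  // intentional "leak": OS cleans up at exit
        delete data;                // normal, slow path
        ++frees;
    }
};
bool BigTable::fast_shutdown = false;
int  BigTable::frees = 0;
```

As the answer notes, this only works when the memory usage pattern is simple and everything stays in use until shutdown.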
Are there any specific situations in which it would not be practical nor necessary to delete the heap-allocated memory when you are done using it?
Yes.
In certain types of telecomm embedded systems I have seen:
1) An operator-commanded software-revision update can also perform (or remind the user to perform) a software reset as the last step of the upgrade. This is not a power bounce, and the associated hardware (typically) continues to run.
Note: there are two (or more) kinds of revision updates: 1) processor code; and 2) firmware (for the FPGAs, typically stored in EPROM).
In this case, there need not be a delete of long-term heap-allocated memory. The embedded software I am familiar with has many new'd data structures that last the life of the code. A software reset is the user-commanded end-of-life, and the memory is zeroed at system startup (not shutdown). No dtors are used at that point, either.
There is often a customer requirement about the upper limit on how long a system reboot takes. The time starts when the customer wants ... perhaps at the start of the download of a new revision ... so a fast reset can help achieve that additional requirement.
2) I have worked on (embedded telecom) systems with a 'Watchdog' feature to detect certain inconsistencies (including thread 'hangs'). This failure mechanism generates a log entry in some persistent store (such as battery-back-static-ram or eprom or file system).
The log entry is evidence of some 'self-detected' inconsistency.
Any attempt to delete heap memory would be suspect, as the inconsistency might have already corrupted the system. This reset is not user-commanded, but may have site policy based controls. A fast reset is also desired here to restore functionality when the reset occurs with no user at the console.
Note:
IMHO, the most useful "development features" for an embedded system (none of which trigger heap clean-up efforts) are:
a) a soft-reset switch (fairly commonly available) - reboots the processor with no impact on the hardware that the software controls/monitors. Used often.
b) a hard-reset switch (rarely available) - power bounces the card: both the processor and the equipment it controls, without impact on the rest of the cards in the shelf. (Unknown utility.)
c) a shelf-reset switch (sometimes the shelf has its own switch) - power bounces the shelf and all cards, processors, and equipment within. This is seldom used (except for system-startup issues), but the alternative is to clumsily pull the power plug.
d) computer control of these three switches - I've never seen it.
Are there any specific situations in which it would not be practical nor necessary to delete the heap-allocated memory when you are done using it?
Any heap memory you allocate and never free will remain allocated until your process exits. During that time, no other program will be able to use that portion of the computer's RAM for any purpose.
So the question is, will that cause a problem? The answer will depend on a number of variables:
How much RAM has your process allocated?
How much RAM does the computer have physically installed and available for other programs to use?
How long will your process continue running (and thus holding on to that memory) for?
If your program is of the type that runs, does its thing, and then exits (more or less) immediately, then there's likely no problem with it "leaking" memory, since the leaked memory will be reclaimed by the OS when your process exits. (Note: some very primitive/embedded/old OSes may not reclaim the resources of an exited process, so make sure your OS does; that said, almost all modern OSes in common use do.)
If your program is of the type that can continue running indefinitely, on the other hand, then memory leaks are going to be a problem, because if the program keeps allocating memory and never freeing it, eventually it will eat up all of the computer's available RAM and then bad things will start to happen.
In general, there is no reason why you should ever have to leak memory in a modern C++ program -- smart pointers (e.g. std::unique_ptr and std::shared_ptr) are there specifically to make memory-leaks easy to avoid, and they are easier to use than the old/leak-prone raw C pointers, so there's no reason not to use them.
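For illustration, a small sketch of the three ownership styles (the Work type is a stand-in, not from any particular codebase):

```cpp
#include <memory>
#include <vector>

// A stand-in for some heap-allocated "work" object.
struct Work {
    std::vector<int> data;
    explicit Work(std::size_t n) : data(n, 0) {}
};

void leaky() {
    Work* w = new Work(1000);
    // ... if an exception is thrown here, or an early return is added
    // later, this delete never runs and the Work leaks:
    delete w;
}

void safe_unique() {
    auto w = std::make_unique<Work>(1000);  // sole owner
    // freed automatically on every exit path, including exceptions
}

std::shared_ptr<Work> safe_shared() {
    auto w = std::make_shared<Work>(1000);  // reference-counted owner
    return w;  // freed when the last shared_ptr goes away
}
```

The smart-pointer versions cannot leak on an early return or exception, which is exactly the failure mode the raw-pointer version is exposed to.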

Checking all sorts of memory usage during the runtime of a C++ Application

I'm using CentOS 7 and running a C++ application. Recently I switched to a newer version of a library that the application uses for various MySQL C API functions. After integrating the new library, I saw a tremendous increase in the program's memory usage: the application crashes if left running for more than a day or two. Precisely, the application's memory usage keeps increasing until it alone is using 74.9% of the system's total memory, at which point it is forcibly shut down by the system.
Is there any way to track the memory usage of the whole application, including static variables? I've already tried Valgrind's Massif tool.
Can anyone tell me the possible reasons for the increased memory usage, or suggest tools that can give me deep insight into how the memory is being allocated (both static and dynamic)? Is there any tool that can report memory allocation for a C++ application running in a Linux environment?
Thanks in advance!
Static memory is allocated when the program starts. Are you seeing ongoing memory growth, or a one-time increase at startup?
Since it takes a day or two to crash, the trouble is likely a memory leak or unbounded growth of a data structure. Valgrind should be able to help with both. If valgrind shows a big leak with the --leak-check=full option, then you have likely found the issue.
To check for unbounded growth, put a preemptive _exit() in the program at a point where you suspect the heap has grown. For example, put a timer on the main loop and have the program _exit() after 10 minutes. If valgrind then shows a large 'in use at exit' figure, you likely have unbounded growth of a data structure rather than a leak. Massif can help track this down; its ms_print tool gives details of the allocations with full function stacks.
If you find an issue, try switching back to the older version of your library. If the problem goes away, check and make sure you are using the API properly in the new version. If you don't have the source code then you are a bit stuck in terms of a fix.
If you want to go the extra mile, you can write a shared library interposer for malloc/free to see what is happening. Here is a good start. Linux has the backtrace functionality that can help with determining the exact stack.
Finally, if you must use the third-party library and find the heap growing without bound or leaking, you can use the shared-library interposer to call free/delete directly. This is a risky, last-ditch, unrecommended strategy, but I've used it in production to limp a process along.
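A portable, in-process cousin of such an interposer is to replace the global allocation operators and keep counters; a real tool would also record a backtrace() per call. This is only a sketch of the idea, not a production interposer:

```cpp
#include <atomic>
#include <cstdlib>
#include <new>

// Replace the global allocation operators and count calls. Every new/delete
// in the process (including ones inside libraries) routes through here.
static std::atomic<std::size_t> g_allocs{0};
static std::atomic<std::size_t> g_frees{0};

void* operator new(std::size_t n) {
    g_allocs.fetch_add(1, std::memory_order_relaxed);
    if (void* p = std::malloc(n ? n : 1)) return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept {
    if (p) {
        g_frees.fetch_add(1, std::memory_order_relaxed);
        std::free(p);
    }
}

// Sized delete (C++14) must be replaced alongside the unsized form.
void operator delete(void* p, std::size_t) noexcept { operator delete(p); }

// Allocations made but not yet freed.
std::size_t live_allocations() { return g_allocs - g_frees; }
```

Logging live_allocations() periodically makes heap growth visible without any external tooling; a full interposer built this way can also free blocks on the library's behalf, with the same risks the answer describes.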

How to make Qt GUI apps in C++ without memory leaks

I haven't been able to create a Qt GUI app that didn't have over 1K 'definitely lost' bytes in valgrind. I have experimented with this, making minimal apps that just show one QWidget or extend QMainWindow, and apps that just create a QApplication object without showing it, without executing it, or both, but they always leak.
Trying to figure this out, I have read that it's because X11 or glibc has bugs, or because valgrind gives false positives. And one forum thread seemed to imply that creating a QApplication object in the main function and returning its exec() result, as is done in tutorials, is a "simplified" way to make GUIs (and not necessarily a good one, perhaps?).
The valgrind output does indeed mention libX11 and libglibc, and also libfontconfig. The rest of the memory losses, 5 loss records, occurs at ??? in libQtCore.so during QLibrary::setFileNameAndVersion.
If there is a more appropriate way to create GUI apps that prevents even just some of this from happening, what is it?
And if any of the valgrind output is just noise, how do I create a suppression file that suppresses the right things?
EDIT: Thank you for comments and answers!
I'm not worrying about the few lost kB themselves, but it'll be easier to find my own memory leaks if I don't have to filter several screens of errors but can normally get an "OK" from valgrind. And if I'm going to suppress warnings, I'd better know what they are, right?
Interesting to see how accepted leaks can be!
It is not uncommon for large-scale, multi-thread-capable libraries such as Qt, wxWidgets, X11, etc. to set up singleton-type objects that initialize once when a process is started and then make no attempt to clean up the allocation when the process shuts down.
I can assure you that anything "leaked" from a function such as QLibrary::setFileNameAndVersion() has been left so intentionally. The bits of memory left behind by X11/glibc/fontconfig are probably not bugs either.
It could be seen as bad coding practice or etiquette, but it can also greatly simplify certain types of tasks. Operating systems these days offer a very strong guarantee of cleaning up any memory or resources left open by a process when it's killed (either gracefully or by force), and if the allocation in question is very likely to be needed for the duration of the application, including shutdown procedures (and various core components of Qt would qualify), then it can be a boon to performance to have the library set up some memory allocations as soon as it is loaded/initialized and allow those to persist indefinitely. Among other things, this allows the memory to be present for use by any other C++ destructors that might reference it.
Since those allocations are only set up once, and from one point in the code, there is no risk of a meaningful memory leak. It's just memory that belongs to the process and is thus cleaned up when the process is closed by the operating system.
Conclusion: if the memory leak isn't in your code, doesn't appear to get significantly larger over time (and by significant these days, think megabytes), and/or clearly originates from first-time initialization code that is only ever invoked once within your app, then don't worry about it. It is probably intentional.
One way to test this is to run your code inside a loop and vary the number of iterations. If the difference between allocations and frees is independent of the number of iterations, you are likely safe.
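One way to make that alloc/free balance observable is a counting allocator. This sketch (all names hypothetical) runs a leak-free workload and lets you check that the outstanding count is independent of the iteration count:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Net count of allocations that have not been freed yet.
static std::size_t g_outstanding = 0;

// Minimal counting allocator: delegates to std::allocator, keeps a tally.
template <class T>
struct Counting {
    using value_type = T;
    Counting() = default;
    template <class U> Counting(const Counting<U>&) {}
    T* allocate(std::size_t n) {
        ++g_outstanding;
        return std::allocator<T>{}.allocate(n);
    }
    void deallocate(T* p, std::size_t n) {
        --g_outstanding;
        std::allocator<T>{}.deallocate(p, n);
    }
    template <class U> bool operator==(const Counting<U>&) const { return true; }
    template <class U> bool operator!=(const Counting<U>&) const { return false; }
};

// The workload under test: everything it allocates is freed per iteration,
// so the outstanding count should not depend on `iterations`.
std::size_t outstanding_after(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::vector<int, Counting<int>> scratch(100);  // freed each iteration
        scratch[0] = i;
    }
    return g_outstanding;
}
```

If outstanding_after(k) grew with k, that would be exactly the iteration-dependent difference the answer warns about.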

Increased memory usage for a process

I have a C++ process running in Solaris which creates 3 threads to do some tasks.
These threads execute in loops and it runs as long as the process is running.
But, I see that the memory usage of the process grows continuously and the process core dumps once the memory usage exceeds 4GB.
Can someone give me some pointers on what could be the issue behind memory usage growth?
What can I do to prevent process from core dumping because of memory exhaustion?
Will thread restart help?
Any pointers welcome.
No, restarting a thread would not help.
It seems like you have a memory leak in your application.
In my experience there are two types of memory leaks:
real memory leaks that you can see when the application exits
'false' memory leaks, like a big list that increases during the lifetime of your application but which is correctly cleaned up at the end
For the first type, there are tools which can report the memory that has not been freed by your application when it exits. I don't know about Solaris but there are numerous tools under Windows which can do that. For Unix, I think that Valgrind does this.
For the second type, there are also tools under Windows that can take snapshots of your application's memory. Simply take two snapshots a few minutes or hours apart (depending on your application) and let the tool compare them. There are probably similar tools on Solaris.
Using these tools will probably make your application consume much more memory, since the tool needs to store the call stack of every memory allocation. Because of this it will also run much slower. However, you only see this effect while you are actively using the tool, so there is no effect on real-life production code.
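The snapshot-and-compare approach reduces to something like this sketch; the counters here would be fed by whatever allocator hooks you have in place, and all the names are hypothetical:

```cpp
#include <cstddef>

// A point-in-time view of the heap counters.
struct HeapSnapshot {
    std::size_t blocks;
    std::size_t bytes;
};

// Fed by allocator hooks in a real tool; plain globals for this sketch.
static std::size_t g_blocks = 0;
static std::size_t g_bytes  = 0;

HeapSnapshot take_snapshot() { return HeapSnapshot{g_blocks, g_bytes}; }

// Positive deltas are memory that accumulated between the two snapshots --
// the "second type" of leak described above.
HeapSnapshot diff(const HeapSnapshot& earlier, const HeapSnapshot& later) {
    return HeapSnapshot{later.blocks - earlier.blocks,
                        later.bytes  - earlier.bytes};
}
```

Real tools additionally group the delta by allocation call stack, which is what turns "memory grew by 40 MB" into "this function accumulated 40 MB".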
So, just look for this kind of tools under Solaris. I quickly Googled for it and found this link: http://prefetch.net/blog/index.php/2006/02/19/finding-memory-leaks-on-solaris-systems/. This could be a starting point.
EDIT: Some additional information: are you looking at the right kind of memory? Even if you only allocated 3GB in total, the total virtual address space may still reach 4GB because of memory fragmentation. Unfortunately, there is nothing you can do about this (except using another memory allocation strategy).

Memory usage and minimizing

We have a fairly graphics-intensive application that uses the FOX toolkit and OpenSceneGraph, and of course C++. I notice that after running the application for some time, there seems to be a memory leak. However, when I minimize it, a substantial amount of memory appears to be freed (as witnessed in the Windows Task Manager). When the application is restored, the memory usage climbs but plateaus at an amount less than it was before the minimize.
Is this a strong indicator that we have a nasty memory leak? Or might this be something to do with how Windows handles graphical applications? I'm not really sure what is going on.
What you are seeing is simply memory caching. When you call free() or delete/delete[], most implementations won't actually return the memory to the OS; they keep it so it can be handed back much faster the next time you request it. When your application is minimized, they free this memory because you won't be requesting it anytime soon.
It's unlikely that you have an actual memory leak. Task Manager is not particularly accurate, and there's a lot of behaviour that can change the apparent amount of memory that you're using- even if you released it properly. You need to get an actual memory profiler to take a look if you're still concerned.
Also, yes, Windows does a lot of things when minimizing applications. For example, if you use Direct3D, there's a device loss. Thread timings can change, too. Windows is designed to give the user the best experience in a single application at a time and may well take cached/buffered resources away from your application to do it.
No, the effect you are seeing means that your platform releases resources when the window is not visible (a good thing), and that seems to clear some cached data, which is not restored after the window is restored.
This may actually help you find memory leaks: if the minimum amount of memory used by the app while minimized grows over time, that would suggest a leak.
You are looking at the working set size of your program. The sum of the virtual memory pages of your program that are actually in RAM. When you minimize your main window, Windows assumes the user won't be interested in the program for a while and aggressively trims the working set. Copying the pages in RAM to the paging file and chucking them out, making room for the other process that the user is likely to start or to switch to.
This number will also go down automatically when the user starts another program that needs a lot of RAM. Windows chucks out your pages to make room for this program. It picks pages that your program hasn't used for a while, making it likely that this doesn't affect the perf of your program much.
When you switch back to your program, Windows needs to swap pages back into RAM. But this is on-demand, it only pages-in pages that your program actually uses. Which will normally be less than what it used before, no need to swap the initialization code of your program back in for example.
Needless to say perhaps, the number has absolutely nothing to do with the memory usage of your program, it is merely a statistical number.
Private bytes would be a better indicator for a memory leak. Taskmgr doesn't show that, SysInternals' ProcMon tool does. It still isn't a great indicator because that number also includes any blocks in the heap that were freed by your program and were added to the list of free blocks, ready to be re-used. There is no good way to measure actual memory in use, read the small print for the HeapWalk() API function for the kind of trouble that causes.
The memory and heap manager in Windows are far too sophisticated to draw conclusions from the available numbers. Use a leak detection tool, like the VC debug allocator (crtdbg.h).