I have a program written in C++ in Xcode.
On some builds my memory consumption skyrockets out of nowhere, while on other builds the memory stays constant and behaves as expected.
Are there any built-in debugging features in Xcode that will allow me to view memory consumption on a per-variable or per-object basis?
Thanks in advance.
You should first look into using Instruments (wiki). You can run Instruments in Xcode using ⌘+I. It is built on top of DTrace.
DTrace background
Why is that important? DTrace lets you ask really interesting, specific questions about the runtime behaviors of your application, whether during debugging or in production.
The beauty of DTrace is in the providers, as well as the scripts (written in the D scripting language - not the same thing as dlang). Many languages have DTrace providers that allow you to probe into their standard libraries. There is also a syscall provider that lets you see when applications are hitting lower level system calls. Some third party applications have their own application-specific DTrace providers, often along with handy scripts you can use. Or you can create your own (see below).
Objc/C++ refresher
Keep in mind that both Objective-C and C++ essentially coexist with and extend C. So your new and delete in C++ are really just malloc and free with some size calculations done for you based on the class you're working with. In Objective-C, it's more or less the same story under the hood - things must be malloc'd onto the heap. So whether you're writing C, C++, or Objective-C code (or even Swift code!) there is some common ground we can rely on.
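To make this concrete, here's a minimal sketch of roughly what new/delete are doing for you (assuming the default malloc-based operator new that Apple's libc++ and glibc both use; the standard doesn't strictly mandate malloc underneath):

    #include <cstdlib>
    #include <new>

    struct Widget { int x = 42; };

    // Roughly what "new Widget" expands to under the hood:
    Widget* manual_new() {
        void* raw = std::malloc(sizeof(Widget));  // size computed from the class
        if (!raw) throw std::bad_alloc{};
        return new (raw) Widget{};                // placement-new runs the constructor
    }

    // Roughly what "delete w" expands to:
    void manual_delete(Widget* w) {
        w->~Widget();   // run the destructor
        std::free(w);   // hand the raw memory back to the heap
    }

This is why heap-level tools like DTrace and Instruments can observe all of these languages through the same malloc/free choke point.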
Instruments
Getting back to the issue at hand, Instruments simplifies using DTrace by offering a GUI with shortcuts to handy, pre-made scripts. You don't even have to see the scripts. For example, you can launch Instruments and tell it to monitor memory allocations or leaks, and suddenly you get a rolling graph of memory usage in your application.
Once you launch Instruments and start recording a session with the Memory Allocation tool, you should be looking at the "persistent bytes" column (used to be "live bytes"). This is a good starting point. This number will increase and decrease. You'll typically see a net increase over time, but you should see it decrease whenever you're expecting a deallocation to occur - the larger the deallocation, the easier it is to notice. If you are not seeing decreases at those moments, then you'll want to explore your code for bad memory management.
Taking It Further
You can even create custom probes.
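As a rough sketch of how that works (the provider and probe names below are made up for illustration): you declare a provider in a .d file, generate a header from it with dtrace, and fire the probe from your C++ code.

    // myapp.d - hypothetical provider definition:
    //
    //     provider myapp {
    //         probe buffer__alloc(size_t);
    //     };
    //
    // Generate the header with: dtrace -h -s myapp.d -o myapp_probes.h

    #include <cstdlib>
    #include "myapp_probes.h"   // generated by the dtrace command above

    void* allocate_buffer(size_t n) {
        if (MYAPP_BUFFER_ALLOC_ENABLED())   // near-zero cost when nobody is tracing
            MYAPP_BUFFER_ALLOC(n);          // fire the probe with the requested size
        return std::malloc(n);
    }

You could then aggregate the probe from the command line with something like dtrace -n 'myapp*:::buffer-alloc { @sizes = quantize(arg0); }'.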
Here's another in-depth article that covers exactly this kind of problem, including how to track malloc/free usage from specific lines of your code.
Related
I am authoring a C++ program and find it consumes too much memory. I would like to know which parts of the program consume the most memory; ideally, I would like to know what percentage of memory is consumed by which kinds of C++ objects at a particular moment.
In Java, I know of tools like Eclipse Memory Analyzer (https://www.eclipse.org/mat/) which can take a heap dump and show/visualize such memory usage, and I wonder if the same can be done for a C++ program. For example, I expect a tool/approach that tells me a particular vector<shared_ptr<MyObject>> is holding 30% of the memory.
Note:
I develop the program mainly on macOS (compiling with Apple Clang), so it would be better if the approach works on macOS. But I deploy to Linux as well (compiling with gcc), so approaches/tools on Linux are okay too.
I tried using Apple's Instruments for this purpose, but so far I can only use it to find memory allocation issues. I have no idea how to figure out the program's memory consumption at a particular moment (broken down by the C++ objects in the program, so that I can take action to reduce it accordingly).
I haven't found an easy way to visualize/summarize each part of my program's memory yet. So far, the best tool/approach I have found is Apple's Instruments (if you are on macOS).
By using Instruments, you can use the Allocations profiling template. When using this template, choose File ==> Recording Options and check the "Discard events for freed memory" option.
You will then be able to see the un-freed memory (i.e., the data still resident in memory) during the allocation recording. If you have your program's debug symbols loaded, you can see which functions produced the remaining allocations.
Although this doesn't address all the issues, it does help to identify part of the problem.
I thought I would ask the experts - see if you can help me :o)
My son has written C++ code for Collision Detection using Brute Force and Octree algorithms.
He has used Debug etc. - and to collect stats on memory usage he has used Windows Task Manager - which has given him all the end results he has needed so far. The results are not yet as they were expected to be (the expectation being that Octree would use more memory overall).
His tutor has suggested he checks memory once each is "initialised" and then plot at points through the test.
He was pointed in the direction of Valgrind... but it looked quite complicated, and because he has autism, he is worried that it might affect his programmes :o)
Can anyone suggest a simple way to grab the information on memory, if not also frame rate and CPU usage?
Any help gratefully received, as I know nothing so can't help him at all, except for typing this on here - as it's the "social" environment he can't deal with.
Thanks
Rosalyn
For the memory leaks:
If you're on Windows, Visual C++ by Microsoft (the Express version is free) has a nice tool for debugging and is easy to set up; instructions can be found here. Otherwise, if you're on Linux, Valgrind is one of the standards. I have used the Visual C++ tool often and it's a nice verification that you have no memory leaks. You can also use it to make your program break on the allocation numbers you get from the memory-leak log, so it quickly points you to when and where the leaked memory is being assigned. Again, it's easy to implement (just a few header includes and then a single function call wherever you want to dump the leaks).
I have found the best way to use the VC++ tool is to make the call that dumps the memory leaks to the output window right before main returns a value. That way, you catch the leaks of absolutely everything in your program. This works very well and I have used it in some advanced software.
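For reference, here's a minimal sketch of that setup using the MSVC debug-heap API (_CrtSetBreakAlloc is the call that makes the debugger break on an allocation number taken from a previous leak log):

    // Define before the includes so leaks are reported with file/line info:
    #define _CRTDBG_MAP_ALLOC
    #include <stdlib.h>
    #include <crtdbg.h>

    int main() {
        // _CrtSetBreakAlloc(42);   // break when allocation #42 (from a
        //                          // previous leak log) happens again

        int* leaked = new int[25];  // deliberately never deleted
        (void)leaked;

        _CrtDumpMemoryLeaks();      // dump all leaks to the Output window
        return 0;
    }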
For the framerate and CPU usage:
I usually use my own tools for benchmarking, since they're not difficult to code once you learn which functions to call; this usually requires OS API calls, but I think Boost has that available and is cross-platform. There might be other tools out there that can track the process in the OS to get benchmarking data as well, but I'm not certain whether they would be free.
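As a starting point, here's a minimal cross-platform sketch of a hand-rolled frame-rate counter using only the standard library (CPU usage would still need OS-specific calls such as GetProcessTimes on Windows; renderFrame is a hypothetical stand-in for the real per-frame work):

    #include <chrono>
    #include <cstdio>

    void renderFrame() { /* hypothetical per-frame work */ }

    int main() {
        using clock = std::chrono::steady_clock;
        auto windowStart = clock::now();
        int frames = 0;
        for (int i = 0; i < 1000000; ++i) {   // stand-in for the main loop
            renderFrame();
            ++frames;
            std::chrono::duration<double> elapsed = clock::now() - windowStart;
            if (elapsed.count() >= 1.0) {     // report once per second
                std::printf("FPS: %.1f\n", frames / elapsed.count());
                frames = 0;
                windowStart = clock::now();
            }
        }
    }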
It looks like you're running under a Windows system. This isn't a programming solution, and you may have already tried it (so feel free to ignore this), but if not, you should take a look at Performance Monitor (it's one of the tools that ships with Windows). It'll let you track all sorts of useful stats about individual processes and the system as a whole (CPU, commit size, etc.). It plots the results for you as a graph while the program is running, and you can save the results for future viewing.
On Windows 7, you get to it from here:
Control Panel\All Control Panel Items\Performance Information and Tools\Advanced Tools
Then Open Performance Monitor.
For older versions of Windows, it used to be one of the Administrative Tools options.
Sometimes code can utilize device drivers up to the point where the system is unresponsive.
Lately I've optimized some WIN32/VC++ code which made the system almost unresponsive. The CPU usage, however, was very low. The reason was thousands of creations and destructions of GDI objects (pens, brushes, etc.). Once I refactored the code to create all objects only once, the system became responsive again.
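To illustrate the refactor with a hypothetical Win32 paint routine (the drawing calls are made up; the point is where the pen gets created and destroyed):

    #include <windows.h>

    // Before: thousands of kernel-object creations and destructions.
    void PaintBad(HDC hdc) {
        HPEN pen = CreatePen(PS_SOLID, 2, RGB(255, 0, 0));  // created every call
        HGDIOBJ old = SelectObject(hdc, pen);
        MoveToEx(hdc, 0, 0, nullptr);
        LineTo(hdc, 100, 100);
        SelectObject(hdc, old);
        DeleteObject(pen);                                  // destroyed every call
    }

    // After: create once, reuse, destroy at shutdown.
    static HPEN g_redPen = nullptr;

    void PaintGood(HDC hdc) {
        if (!g_redPen) g_redPen = CreatePen(PS_SOLID, 2, RGB(255, 0, 0));
        HGDIOBJ old = SelectObject(hdc, g_redPen);
        MoveToEx(hdc, 0, 0, nullptr);
        LineTo(hdc, 100, 100);
        SelectObject(hdc, old);
    }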
This leads me to the question: Is there a way to measure CPU/IO usage of device drivers (GPU/disk/etc) for a given program / function / line of code?
You can use various tools from the SysInternals Utilities (now a Microsoft product, see http://technet.microsoft.com/en-us/sysinternals/bb545027) to get a basic idea before jumping in. In your case, Process Explorer (procexp) and Process Monitor (procmon) do a decent job. They can be used to get a basic idea of what type of slowness it is before drilling down with a profiler.
Then you can use xperf (http://msdn.microsoft.com/en-us/performance/default) to drill down. With the correct setup, this tool can bring you to the exact function that causes slowness without injecting profiling code into your existing program. There's a PDC video on how to use it (http://www.microsoftpdc.com/2009/CL16) and I highly recommend this tool. In my experience, it's always better to observe with procexp/procmon first, then target your suspects with xperf, because xperf can generate an overwhelming load of information if not filtered in a smart way.
In certain hard cases involving lock contention, Debugging Tools for Windows (windbg) is very handy, and there are dedicated books on its usage. These books typically talk about hang detection, and quite a few of the techniques can be used to detect slowness too (e.g. !runaway).
Maybe you could use ETW for this? I'm not sure it will help you see which line causes what, but it should give you a good overall picture of how your app is doing.
To see the CPU/memory/disk usage of the program in real time, you can use the Resource Monitor and Task Manager programs that come with Windows. You can find out how much time a block of code takes relative to other blocks by printing out the system time. Remember not to do too much monitoring at once, because that can throw off your measurements.
If you know how much CPU time the program takes and what percentage of the run time the block of code accounts for, you can estimate how much CPU time the block takes; for example, if the program uses 10 seconds of CPU and the block accounts for 25% of the run time, the block costs roughly 2.5 seconds.
I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (as I understand it) excessive memory usage.
I've noticed it takes 40M when it starts and then, by the time I notice the slowdown, it's at 1G and my machine has nothing more to offer unless I close other applications.
I'm trying to understand the technical reasons why it's such a difficult problem to solve.
Mozilla have a page about high memory usage:
http://support.mozilla.com/en-US/kb/High+memory+usage
But I'm looking for a slightly more in depth and satisfying explanation. Not super technical but enough to give the issue more respect and please the crowd here.
Some questions I'm already pondering (they could be silly so take it easy):
When I close all tabs, why doesn't the memory usage go all the way down?
Why are there no limits on extension/theme/plugin memory usage?
Why does the memory usage increase if it's left open for long periods of time?
Why are memory leaks so difficult to find and fix?
App and language agnostic answers also much appreciated.
Browsers are like people - they get old, they get bloated, and they get ditched for younger and leaner models.
Firefox is not just a browser, it's an ecosystem.
While I feel that recent versions are quite bloated, the core product is generally stable.
However, firefox is an ecosystem/platform for:
1) Badly written plug-ins
2) Badly written JavaScript code that executes within it.
3) Adobe Flash as a platform for heavyweight video and for poorly written ad scripts such as 'hit Osama bin Laden with a duck to reduce your mortgage rate and receive a free iPod* (participation required)'.
4) QuickTime and other media players.
5) Some embedded Java code.
The description of a memory leak suggests a script running amok or a third-party tool requesting more memory. If you ever run Flash on a Mac, that's almost a given along with 90% CPU utilization.
The goal of most programming languages is not to save you but to give you tools to save yourself. You can write bad and bloated code with memory leaks in any language, including ones with garbage collection. Third party tools are usually not as well tested as the platform itself. Web pages that try to do too much are also not uncommon.
If you want to run an experiment to demonstrate this, get a Mac with Firefox, go to a well-written site like Stack Overflow, and spend an hour there. Your memory usage shouldn't grow much. Then spend 5 minutes visiting random pages on MySpace.
Now let me try to answer your questions, based on guesses, since I'm not familiar with the source code:
When I close all tabs, why doesn't the memory usage go all the way down?
While each browser instance is an independent process with its own memory, the tabs in a single window all live within the same process. Firefox used to have some sort of in-memory caching, and merely closing a tab doesn't immediately clear the relevant information from that cache. If you reopen a tab to the same site, you might get better performance. There was an advanced option to disable it, something like browser.cache.memory.enable - or just search for disabling the memory cache.
Why are there no limits on extension/theme/plugin memory usage?
For the same reason that Windows or Linux doesn't have a vetting process on applications you can run on them. It's an open environment and you assume the risk. If you want an environment where applications and extensions are 'validated', Apple might be the way to go :)
Why does the memory usage increase if it's left open for long periods of time?
Not all calculations and actions in a script have visual manifestations. A script could be doing work in the background (requesting extra materials, pre-fetching things, or just plain bugs) even if you don't see it.
Why are memory leaks so difficult to find and fix?
It's about bookkeeping. Think about every item you ever borrowed (even a pen), or that someone borrowed from you, in your entire life. Are they all accounted for? Memory leaks are the same way (you borrow memory from the system), except that the items also get passed around. Then look at the stuff on your desk: did you leave anything lying around because 'you might need it soon', even though you probably won't? Same story.
Why are memory leaks so difficult to find and fix?
Because some developers refuse to use tools like Electric Fence.
Memory leaks happen in the first place because you want to keep things in memory rather than on disk. For example, suppose you have a web page with images, CSS, JavaScript, and text. If, to display the page, you had to go to the hard disk every time you wanted the JavaScript interpreter, a CSS parser, or the font-rendering engine, the browser would be very slow and sometimes wouldn't work at all (one piece of JavaScript might need variables left behind by another piece, for example). Therefore, a browser tries to keep everything necessary for its work in memory, and those things easily become cross-referenced (JavaScript calling into Adobe Flash, Adobe Flash calling into JavaScript, and so on). You have to be very careful with such resource references, because cleaning them up prematurely or out of order will break the code (better to keep a resource around than to die suddenly because it isn't there).
P.S. See also this article for some gory details.
If it is known that an application leaks memory when executed, what are the various ways to locate such memory-leak bugs in the source code of the application?
I know of certain parsers/tools (which probably do static analysis of the code) that can be used here, but are there any other ways/techniques specific to the language (C/C++) or platform?
Compile your code with the -g flag.
Download Valgrind (if you work on Linux) and run it with the --leak-check=yes option.
I think that Valgrind is the best tool for this task.
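For example, with a deliberately leaky toy program (the file names are made up):

    // leaky.cpp - allocates without freeing so Valgrind has something to find
    int main() {
        int* p = new int[100];   // never deleted: a definite leak
        p[0] = 1;
        return 0;
    }

    // Build and check:
    //     g++ -g -o leaky leaky.cpp
    //     valgrind --leak-check=yes ./leaky
    //
    // The report points straight at the "new int[100]" line, because the
    // -g flag kept the debug symbols around.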
For Windows: See this topic: Is there a good Valgrind substitute for Windows?
There's valgrind and probably other great tools out there.
But I'll tell you what I do, and it works very well for me, given that I often code in environments where you can't run valgrind:
Be sure to pair each allocation with a deallocation. I always count the news or mallocs and search for the matching deletes or frees.
If in C++ and using exceptions, try to pair them in constructors/destructors. If you like risk, or can't put them in the ctor/dtor, be sure no exception can cause the program flow to skip the deallocation.
Use smart pointers and pointer containers.
You can monitor allocation/deallocation by rewriting new or installing a malloc handler (see the sketch after this list). If the code runs continuously, at some point it becomes obvious whether memory usage stays stationary or grows without bound - the latter being the worst kind of leak.
Be careful with containers that never shrink, such as vectors. There are tricks to shrink them by swapping with an empty container.
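Here is a minimal sketch of the "rewriting new" idea mentioned above; replacing the global operator new/delete is the standard-sanctioned hook, and this version only counts (tracking sizes is possible too, but brings alignment and bookkeeping pitfalls that would clutter a sketch):

    #include <atomic>
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    static std::atomic<long> g_allocs{0}, g_frees{0};

    void* operator new(std::size_t size) {
        ++g_allocs;
        if (void* p = std::malloc(size)) return p;
        throw std::bad_alloc{};
    }

    void operator delete(void* p) noexcept {
        if (p) { ++g_frees; std::free(p); }
    }

    // Call periodically; if the difference keeps climbing while the program
    // is in a steady state, you are probably leaking.
    void report() {
        std::printf("outstanding allocations: %ld\n",
                    g_allocs.load() - g_frees.load());
    }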
There are two general techniques for memory leak detection, dynamic and static analysis.
In dynamic analysis, you run the code and a tool analyzes the run to see what memory has leaked at the end. Dynamic analysis tends to be highly accurate, but it will only analyze the specific executions you perform under the tool. So if a leak happens only for certain input, and you don't have a test that exercises that input, dynamic analysis will not detect it.
Static analysis examines the source code, considering all possible code paths to see whether a leak can occur on any of them. While static analysis is pretty good right now, it's not perfect: you can get not only false negatives (the analysis misses leaks) but also false positives (the tool claims there's a leak where there actually isn't one).
There are many dynamic analysis tools including such well known tools as Valgrind (open source but limited to x86 Linux and Mac) and Purify (commercial but also available for Windows, Solaris and AIX). Wikipedia has a decent list of some other dynamic analysis tools as well.
On the static analysis side, the only tool I've found worthwhile is Coverity (commercial). Once again, Wikipedia has a list of many other static analysis tools.
Purify will do a seemingly miraculous job of doing this
Not only memory leaks, but many other kinds of memory errors.
It works by instrumenting your machine code, so you don't need the source or any particular compile options.
Just instrument your code with Purify (simplest way to do this: CC="purify cc" make), run your program, and get a nice gui that will show your leaks and other errors.
Available for Windows, Linux, and various flavors of Unix. There's a free trial download available.
http://www.ibm.com/software/awdtools/purify
If you use smart pointers and keep a table of them, you can analyze it to tell what memory you are still using. Either provide a window to view it or, more commonly, stream it to a log before the program terminates.
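Sketched out (the names and labeling scheme here are my own invention, not an established library), the table idea might look like this:

    #include <cstdio>
    #include <memory>
    #include <mutex>
    #include <string>
    #include <vector>

    // Global registry: a label plus a weak reference per tracked allocation.
    struct Entry { std::string label; std::weak_ptr<void> ref; };
    static std::vector<Entry> g_table;
    static std::mutex g_tableMutex;

    // Factory that registers every allocation it makes.
    template <typename T, typename... Args>
    std::shared_ptr<T> make_tracked(std::string label, Args&&... args) {
        auto p = std::make_shared<T>(std::forward<Args>(args)...);
        std::lock_guard<std::mutex> lock(g_tableMutex);
        g_table.push_back({std::move(label), p});
        return p;
    }

    // Stream the still-alive entries to a log before the program terminates.
    void dump_live() {
        std::lock_guard<std::mutex> lock(g_tableMutex);
        for (const auto& e : g_table)
            if (!e.ref.expired())
                std::printf("still alive: %s\n", e.label.c_str());
    }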
As far as doing it manually is concerned, I don't think there are any established practices. Going over the code with a fine-toothed comb, looking for news (allocs) without corresponding deletes (frees), is all there is to it.
You can also use Purify to detect memory leaks.
There aren't very many general-purpose guidelines for finding memory leaks. Fortunately, there's one simple guideline for preventing most leaks, of both memory and other resources: use RAII (Resource Acquisition Is Initialization), and they just won't happen in the first place. The name is a lousy description, but if you Google it, you should get quite a few useful hits.
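For a flavor of what RAII looks like in practice, here's a minimal sketch wrapping a C file handle; the same pattern applies to memory, locks, sockets, and so on:

    #include <cstdio>

    // The resource is acquired in the constructor and released in the
    // destructor, so it cannot leak on early returns or exceptions.
    class File {
        std::FILE* f_;
    public:
        explicit File(const char* path) : f_(std::fopen(path, "r")) {}
        ~File() { if (f_) std::fclose(f_); }
        File(const File&) = delete;             // forbid accidental double-close
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    };

    void use() {
        File f("data.txt");       // hypothetical file name
        if (!f.get()) return;     // early return: the destructor still runs
        // ... read via f.get() ...
    }                             // fclose happens here automatically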
Personally, I would recommend that you wrap all variables for which you need to allocate/deallocate memory in the clone_ptr class, which performs all the deallocation of memory for you when it is no longer needed. Thus, you do not have to use delete. It is quite similar to auto_ptr. The major difference is that you do not have to deal with the tricky ownership-transfer part. More information and code for clone_ptr can be found here.