Several new C++ developers have joined our team, and we're seeing a lot of ugly code every day!
I hate functions that take read-only strings or STL containers as in-parameters by value instead of by const reference! It drives me crazy!
Is there a static code checker that can find this kind of ugly code? I need a tool I can run from our makefile.
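For reference, this is the pattern in question (function and parameter names are illustrative):

    #include <string>
    #include <vector>

    // What the new hires write: both arguments are copied on every call.
    void printReport(std::string header, std::vector<int> values);

    // What should be enforced for read-only parameters: pass by const reference.
    void printReport(const std::string& header, const std::vector<int>& values);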
Yeah, it's unlikely that "bad code" can be prevented with automated tools.
For myself (and I do the same at my workplace), I've always turned on as many warnings as possible: usually by enabling a high warning level and turning off only the obviously useless warnings. g++ is the one exception, since it doesn't have an option to turn on everything; there I use -Wall, -Wextra and a whole bunch of other -W flags, and occasionally go through the manual to see whether new warnings have been added.
I also compile with -Werror or /WX. Unfortunately, while the Linux and Windows headers seem to be rather clean by now, I get silly warnings about things like bad casts or incorrectly used macros from the boost headers. Third-party libraries are often badly written with respect to warnings.
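As a concrete starting point, a g++ invocation along these lines catches a lot (all of these are real g++ options; add or drop flags to taste):

    g++ -Wall -Wextra -Wshadow -Wconversion -Werror -c foo.cpp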
As for static analysis tools, I did try cppcheck and clang (both of which are free, which is why I tried them). I wasn't thrilled with either of them; I'm still planning to add support for one or both to my build software, but it has rather low priority. One of the two (I don't remember which) actually found SOMETHING: an unnecessary assignment, which any decent optimizer will remove anyway. I don't think I'm such a perfect zero-bugs developer, so I'm blaming the tools. Still, I did remove that assignment :-)
If I'm not mistaken, the commercial Visual Studio editions have code analysis as well (at home I'm more of an Express guy, and I'm stuck with MacOS development at work); maybe that one is better. Or one of the other commercial tools; they have to offer SOMETHING for their money, after all.
There are still some additional free tools that I haven't tried yet; I have no idea how complete the http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis#C.2FC.2B.2B list is, but I hope to eventually try all the free tools that can handle C++.
For your problem in particular, Wikipedia describes cpplint as implementing what Google considers to be "best practices" in C++ coding. I have no idea what that means, but the Wikipedia page has a link to a "Google C++ Style Guide" PDF. Or you could just try it and see what it complains about :-)
Also, I probably wouldn't want to add such tools to the Makefile (unless you meant to imply that people still have to invoke "make check" to actually run it). Adding it to the source code repository to check new commits before allowing them is probably too time consuming (code analysis is pretty much "compiling with many extras", so it takes a good deal of time), but you could automatically run it every now and then.
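If you do wire it into the Makefile, a separate target keeps it out of the normal build ("--enable=all" is a real cppcheck option; the source directory is illustrative, and recipe lines must start with a tab):

    # run static analysis only on demand, via "make check"
    .PHONY: check
    check:
    	cppcheck --enable=all src/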
Parfait or lint; Google "static analysis tools".
If you are using GCC, you can probably catch some of these as warnings by using the -Wall flag.
I know that E&C is a controversial subject and some say that it encourages a wrong approach to debugging, but still - I think we can agree that there are numerous cases when it is clearly useful - experimenting with different values of some constants, redesigning GUI parameters on-the-fly to find a good look... You name it.
My question is: Are we ever going to have E&C on GDB? I understand that it is a platform-specific feature and needs some serious cooperation with the compiler, the debugger and the OS (MSVC has this one easy as the compiler and debugger always come in one package), but... It still should be doable. I've even heard something about Apple having it implemented in their version of GCC [citation needed]. And I'd say it is indeed feasible.
Knowing all the hype around MSVC's E&C (my experience says it's the first thing MSVC users mention when asked "why not switch to Eclipse and gcc/gdb?"), I'm seriously surprised that after quite a few years GCC/GDB still doesn't have such a feature. Are there any good reasons for that? Is someone working on it as we speak?
It is a surprisingly non-trivial amount of work, encompassing many design decisions and feature tradeoffs. Consider: you are debugging. The debugee is suspended. Its image in memory contains the object code of the source, and the binary layout of objects, the heap, the stacks. The debugger is inspecting its memory image. It has loaded debug information about the symbols, types, address mappings, pc (ip) to source correspondences. It displays the call stack, data values.
Now you want to allow a particular set of possible edits to the code and/or data, without stopping the debuggee and restarting. The simplest might be to change one line of code to another. Perhaps you recompile that file, or just that function, or just that line. Now you have to patch the debuggee image to execute that new line of code the next time you step over it or otherwise run through it. How does that work under the hood? What happens if the new code is larger than the line it replaced? How does it interact with compiler optimizations? Perhaps you can only do this on a target specially compiled for EnC debugging. Perhaps you will constrain the sites where EnC is legal. Consider: what happens if you edit a line of code in a function suspended down in the call stack? When the code returns there, does it run the original version of the function or the version with your line changed? If the original version, where does that source come from?
Can you add or remove locals? What does that do to the call stack of suspended frames? Of the current function?
Can you change function signatures? Add fields to / remove fields from objects? What about existing instances? What about pending destructors or finalizers? Etc.
There are many, many functionality details to attend to in order to make any kind of usable EnC work. Then there are many cross-tools integration issues necessary to provide the infrastructure to power EnC. In particular, it helps to have some kind of repository of debug information that can make the before- and after-edit debug information and object code available to the debugger. For C++, the incrementally updatable debug information in PDBs helps. Incremental linking may help too.
Looking from the MS ecosystem over into the GCC ecosystem, it is easy to imagine the complexity and integration issues across GDB/GCC/binutils, the myriad of targets, some needed EnC specific target abstractions, and the "nice to have but inessential" nature of EnC, are why it has not appeared yet in GDB/GCC.
Happy hacking!
(P.S. It is instructive and inspiring to look at what the Smalltalk-80 interactive programming environment could do. In St80 there was no concept of "restart": the image and its object memory were always live, and if you edited any aspect of a class, everything just kept running. In such environments object versioning was not a hypothetical.)
I'm not familiar with MSVC's E&C, but GDB has some of the things you've mentioned:
http://sourceware.org/gdb/current/onlinedocs/gdb/Altering.html#Altering
17. Altering Execution
Once you think you have found an error in your program, you might want to find out for certain whether correcting the apparent error would lead to correct results in the rest of the run. You can find the answer by experiment, using the gdb features for altering execution of the program.
For example, you can store new values into variables or memory locations, give your program a signal, restart it at a different address, or even return prematurely from a function.
Assignment: Assignment to variables
Jumping: Continuing at a different address
Signaling: Giving your program a signal
Returning: Returning from a function
Calling: Calling your program's functions
Patching: Patching your program
Compiling and Injecting Code: Compiling and injecting code in GDB
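For a taste of what those chapters cover, a session might look like this (the commands are real GDB commands; the variable, line number, and function are made up):

    (gdb) print x = 42        # store a new value into a variable of the debuggee
    (gdb) jump 137            # continue execution at line 137 instead of here
    (gdb) signal SIGTERM      # deliver a signal to the program
    (gdb) return 0            # force an early return from the current function
    (gdb) call cleanup()      # invoke one of the program's own functions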
This is a pretty good reference to the old Apple implementation of "fix and continue". It also references other working implementations.
http://sources.redhat.com/ml/gdb/2003-06/msg00500.html
Here is a snippet:
Fix and continue is a feature implemented by many other debuggers,
which we added to our gdb for this release. Sun Workshop, SGI ProDev
WorkShop, Microsoft's Visual Studio, HP's wdb, and Sun's Hotspot Java
VM all provide this feature in one way or another. I based our
implementation on the HP wdb Fix and Continue feature, which they
added a few years back. Although my final implementation follows the
general outlines of the approach they took, there is almost no shared
code between them. Some of this is because of the architectural
differences (both the processor and the ABI), but even more of it is
due to implementation design differences.
Note that this capability may have been removed in a later version of their toolchain.
UPDATE: Dec-21-2012
There is a GDB Roadmap PDF presentation that includes a slide describing "Fix and Continue" among other bullet points. The presentation is dated July-9-2012 so maybe there is hope to have this added at some point. The presentation was part of the GNU Tools Cauldron 2012.
Also, I get it that adding E&C to GDB or anywhere in Linux land is a tough chore with all the different components.
But I don't see E&C as controversial. I remember using it in VB5 and VB6, and it was probably there before that. It has also been in Office VBA since way back, and in Visual Studio since VS2005. VS2003 was the only version that didn't have it, and I remember devs howling about it; Microsoft intended to add it back anyway, did so in VS2005, and it's been there since. It works with C#, VB, and also C and C++. It's been in MS core tools for 20+ years, almost continuously (counting VB when it was standalone, and subtracting VS2003). And you could still say they had it in Office VBA during the VS2003 period ;)
And JetBrains recently added it to their C# tool Rider. They bragged about it (rightly so, IMO) on their Rider blog.
Hi, I am learning C++. At the very beginning I used the command line; then I started using Xcode (and since then haven't been able to switch back). I was just wondering about specific reasons or situations for using the command line instead of an IDE.
More efficient for large systems: try opening a VS solution with 100 projects and 10,000 files.
Simpler for a lot of tasks: you edit in one window, run make in another, and have gdb in a third.
Easier to automate tasks, and often easier to work in teams or cross-platform if everyone has gcc and vi (or emacs).
A big one for me is the editor. I really love vim, and it's pretty rare that an IDE has good vim emulation, especially since I use Dvorak and have to remap a lot of the keys. A lot of other people would say the same thing, except that they'd choose emacs.
Most IDE editors pale in comparison to the likes of vim or emacs. There are features that you miss out on, or that are a lot harder to get working well. For instance, ctags definitely helps you hop to function definitions in vim, but it is nowhere near as good as what a lot of IDEs can do, since they actually understand the language. And of course, debugger integration, project management, and the like aren't going to work as well in vim or emacs because they're not full-blown IDEs (though you can do a lot of that with them). But often the power of vim or emacs overshadows whatever the IDE has over them.
Also, as far as command-line editors go, they can be run on the command-line when you don't have access to an environment with a GUI, so there are plenty of situations where that flexibility is a big plus.
Of course, it would be great if you could combine all of the great features of vim or emacs with a full-blown IDE - and there are some attempts to do so - but there are always problems with it, and even the best attempts are far from perfect. So, you're often still stuck with the choice of vim or emacs and the features that they bring or an IDE with the features that it brings.
EDIT: Going into why vim or emacs is far more powerful in many respects than your typical IDE could get quite lengthy, and there are already several questions on SO which cover that.
This is a great answer on emacs: https://stackoverflow.com/questions/35809/why-are-vi-and-emacs-popular/35929#35929
This is a good article on vi that seems to get linked to frequently: http://www.viemu.com/a-why-vi-vim.html
For a quick attempt on my part to give some reasons why vim is more powerful:
Your typical IDE is basically a souped-up notepad from a text-editing standpoint. You type in text and use your mouse to navigate around (which, while sometimes quite useful, can be quite a bit slower than just using the keyboard). IDEs add code-specific features like code completion, automatic indenting, the ability to jump to function definitions, refactoring tools, etc. (hence why they can be so useful), but their basic text-editing abilities are generally pretty poor. They might add some useful features like Ctrl+D to delete a line, but what they add is generally very limited compared to what you'd get in vim or emacs.
Take deletion (a very basic operation) as an example. In vim, you can use the delete operation with any motion command, making for a potentially staggering number of ways to delete things.
dd Delete an entire line.
5dd Delete 5 lines.
dj Delete the current line and the line below it (j is a linewise motion).
dG Delete everything from here until the end of the file.
7dgg Delete everything between here and line 7.
d% Move to the brace or paren which matches the next paren or brace (whichever comes first) and delete everything between here and there, including the brace or paren that you jump to
dw Delete everything between here and the next beginning of a word
de Delete everything between here and the next end of a word
D Delete everything between here and the end of the line
d0 Delete everything between here and the beginning of the line
d^ Delete everything between here and the beginning of the first word on the line
dty Delete everything between here and the next occurrence of y on this line (or nothing if there are no y's between here and the end of the line)
The list goes on and on. And that's just for delete. The same goes for a whole list of other basic commands. And that's just basic commands. There are many, many more commands which are more advanced and quite powerful. Vim can do so much that most people who use it use only a fraction of what it's capable of.
Most IDEs don't have even a fraction of a fraction of such editing capabilities. They have many other programming-specific and language-specific features which vim and emacs either lack or make much harder to get working, such as good, context-sensitive code completion, refactoring tools, project management, etc. But as for text-editing capabilities, most IDEs just can't compare.
At least in UNIX, the command line tools are mature. The bugs I have to deal with are highly obscure at this point. vi, make, gcc, gdb - these tools are, in some cases, 20 years old or more. Tried, tested, proven.
Also, they are ubiquitous. Everyone has vi, make, gcc. I don't have to worry about not having my tool-du-jour. I can go to virtually anyone's box and, no problem, I can compile, write, and debug without needing to learn some fancy tool.
First, there's the increased choice. If you use an IDE, you use the tools that are specifically compatible with it, which may not be the tools you prefer. There are a lot of things that you can get in an integrated package or as components, and people go both ways.
Second, lots of developers got used to Unix tools, which are mature, very powerful, and meant to be used by themselves. There isn't a tradition like that on Microsoft Windows, and IDEs have ruled there since Turbo Pascal. (The big advantage Turbo Pascal had was that, in the days before any sort of multitasking, it wasn't necessary to start and stop the editor, then run the compiler and linker separately before running the test.)
Third, it's really, really easy to automate anything you'd do from a command line. It's harder to do such things in a GUI of any sort, both in that the tools are much harder to develop and in that they're harder to learn. Again, there's a culture gap here, as the Unix tradition is to deal with complicated procedures by automating them, and the Microsoft tradition is to build GUIs and wizards.
Fourth, for handling complicated systems, there's still no method clearly better than a human-readable text representation. The syntax of Unix makefiles is bad, but it would have avoided an issue I had recently, where the after-build step in one configuration of a Visual C++ project file was set up wrong, and that was hard to spot. Using a makefile and vim, I can simply and reliably change how a given project will compile and build. It's a lot harder with Visual Studio.
You use command line when you want to automate your software builds. That's typically done when you are ready to release your software, and you need to do other things such as installer packaging in addition to software compilation before you can deliver the software to the users.
Beyond that, you should always use an IDE for debugging, compiling, and code-editing purposes.
Commandline builds can be automated, and using standard tools such as CMake for your commandline builds make such builds cross-platform (note you can build an Xcode project from the commandline using xcodebuild, but that only works on Mac OS X with Xcode). Automation is incredibly important, because it allows you, for example, to create a "hook" for your version control system to build the project and reject code that doesn't compile or to have a continuous integration server periodically build your project, so that you can easily tell when breaking changes to the code have been made and fix them quickly.
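As a sketch of the version-control "hook" idea (the hook path follows Git's convention; the build command is whatever your project uses):

    #!/bin/sh
    # .git/hooks/pre-commit: reject the commit if the project no longer builds
    make || { echo "Build failed; commit rejected." >&2; exit 1; }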
In addition to portability and automation, the commandline is just faster. If you are using a Makefile project, or something like the C++ Project Template which uses CMake but has a Makefile wrapper, then all it takes is the simple command "make" to build. The second time you build, you only have to hit the up arrow and then ENTER to rerun "make". Although beginners with the commandline may not be very fast and may find the GUI / IDE easier, once you have used the commandline for a while it becomes much, much faster than doing it with a GUI, and all those mouse movements and clicks seem slow and a waste of effort.
Is writing a makefile an advanced task or an everyday task for a programmer?
For a C++ programmer, how often will he be asked to write a makefile?
Nobody ever writes a Makefile. They take an existing Makefile and modify it.
Just be thankful you don't have to deal with Imakefile. (For those of you who were lucky enough never to deal with this, or who have managed to forget: an Imakefile was a sort of meta-Makefile for X Window System based projects, used to generate a Makefile that knew where you'd installed all your X libraries and binaries. Needless to say, it sucked.)
Setting up a meaningful build system is part of the craft. I'd expect any non-super-junior developer to be able to come up with a makefile.
As often as he is asked to start on a brand new project that is not similar to an existing project whose build scripts can be adapted. That is not a good answer, I guess, but in my experience there have always been some code/resources/build scripts that I could adapt.
On a different note, for any C++ programmer, it is important to learn how his code is built. Makefiles, Ant scripts etc are just the implementation mechanism. So I think it is important enough to learn.
Programmers need to know how to build their projects. Typically (in my experience) the 'production build' is based on the original programmer's makefile. Many programmers go their entire careers just fumbling blindly when putting together their makefiles, but every programmer has to do it.
I have never thought of makefiles as problems; they are quite useful...
The frequency would depend on the platform that you're using and your role.
For example, in some organizations there is a fairly fixed build file that doesn't often change, while in others there are frequent changes. Some organizations rely on the IDE to deal with the build, etc. You don't have to be the build engineer.
I think most C/C++ developers should have at least a fundamental understanding of how makefiles work, though they don't have to be gurus. At my alma mater, that is part of first-year CS.
One thing to note: if the build system is set up in a modular way, the usual practice is not to write one huge makefile for the whole thing, but rather makefiles for the individual parts that a master makefile calls recursively. This is a great feature, as it means different products (libraries, say) can be built as separate units, often in parallel.
Makefiles are designed to automate the build process. Therefore, they are most definitely not a chore; they save you typing gcc -c *.c and the like all the time. Not only that, but a properly written makefile has phony targets for clean, rebuild, etc. On most of my projects, make rebuild does exactly that: cleans everything and starts again.
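A minimal sketch of that shape (file and target names are illustrative; note that recipe lines must start with a tab):

    CXX = g++
    CXXFLAGS = -Wall -Wextra -O2
    OBJS = main.o util.o

    app: $(OBJS)
    	$(CXX) $(CXXFLAGS) -o $@ $(OBJS)

    %.o: %.cpp
    	$(CXX) $(CXXFLAGS) -c $< -o $@

    # phony targets name actions, not files
    .PHONY: clean rebuild
    clean:
    	rm -f app $(OBJS)

    rebuild: clean
    	$(MAKE) app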
Makefiles are really useful. I can't say that enough.
Even if you are using an IDE that handles the details of building the project for you, makefiles can still be very useful. Occasionally you need some extra functionality as part of your build process, for instance a pre-processing step that generates a C++ file programmatically (if you use Qt, you may need to run moc on your header files or rcc on your resource files; you might need to process an .idl file to generate an implementation file), or a post-build step that copies files to a destination. In these cases you typically have two choices:
run the scripts as pre- or post-build steps every time, which means the app will be rebuilt every time you build it
create a makefile to perform the action, embodying the dependency rule so that the step is only invoked if the timestamp of the input file is newer than that of the output file; building a project that is already up to date then does nothing (see the sketch below)
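For the Qt example above, such a rule might look like this (file names are illustrative; moc is Qt's real meta-object compiler):

    # regenerate moc_widget.cpp only when widget.h has changed
    moc_widget.cpp: widget.h
    	moc widget.h -o moc_widget.cpp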
A C++ developer may go for many years without needing to create a Makefile if they're not working on Linux/Unix, but a decent developer will seek to understand how Makefiles work as they will probably improve part of your current workflow.
I suppose it depends...
Personally, I am quite bewildered by the syntax of Makefiles. Even a simple operation on the paths of the object files is ever so tricky.
On the other hand, being able to build is just necessary, and understanding the various arguments on the compile line is too. Though perhaps not for a junior.
I had to rewrite the build system of our application barely a year after I left school. I did it using SCons and it worked great. Build time went from 1h30 to barely 10m, because I was finally able to use parallel compilation and to set up automatic local replication of the libraries we used.
The moral is: if you ask someone to pick up a build mechanism, do not force Makefiles on them. Let them pick a more recent tool.
A powerful part of make is the shell code that you can write in each rule.
So not only is learning make important, but understanding a shell language like bash and applying it to a project is very useful too.
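For example, a recipe body is ordinary shell, so loops and pipes work (the target and paths are hypothetical; note the $$ needed to pass a $ through to the shell):

    # compress every log file, keeping the originals
    archive-logs:
    	for f in logs/*.log; do gzip -c "$$f" > "$$f.gz"; done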
It depends on the tools/libraries/frameworks you are using for your project. For my job I code using Qt so I never have to write a Makefile as qmake does it for me although I do have to know how to create a qmake project file instead.
I freely admit I probably couldn't write anything beyond the simplest Makefile without having to learn it all from scratch. I don't see that changing anytime soon as I would only learn it if I was forced to. Why learn something you may never need?
I haven't been asked to write a makefile in the last five years. Sometimes I could modify an existing makefile.
But when you have tools that do the job without the make program, it's not necessary to care about makefiles. As for your question of when a C++ programmer will be asked to write such a file: it depends.
So far, I've only used Rational Quantify. I've heard great things about Intel's VTune, but have never tried it!
Edit: I'm mostly looking for software that will instrument the code, as I guess that's about the only way to get very fine-grained results.
See also:
What are some good profilers for native C++ on Windows?
For Linux development (although some of these tools might work on other platforms), these are the two big names I know of; there are plenty of smaller ones that haven't seen active development in a while.
Valgrind
TAU - Tuning and Analysis Utilities
For Linux:
Google Perftools
Faster than Valgrind (though not as fine-grained)
Does not need code instrumentation
Nice graphical output (--> kcachegrind)
Does memory profiling, CPU profiling, and leak checking
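A minimal CPU-profiling run with it might look like this (the library path and binary name are illustrative; the CPUPROFILE variable and the pprof tool are the real gperftools mechanism):

    # either link the binary with -lprofiler, or preload the library at run time
    LD_PRELOAD=/usr/lib/libprofiler.so CPUPROFILE=app.prof ./app
    pprof --text ./app app.prof                        # flat text report
    pprof --callgrind ./app app.prof > app.callgrind   # open this in kcachegrind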
IMHO, sampling using a debugger is the best method. All you need is an IDE or debugger that lets you halt the program. It nails your performance problems before you even get the profiler installed.
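A crude command-line version of this idea, sometimes called the "poor man's profiler" (substitute your own process ID; "thread apply all bt" is a real GDB command):

    # snapshot every thread's call stack in a running process;
    # repeat a handful of times and look for the frames that keep showing up
    gdb -batch -ex "thread apply all bt" -p <pid>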
My only experience profiling C++ code is with AQTime by AutomatedQA (now SmartBear Software). It has several types of profilers built in (performance, memory, Windows handles, exception tracing, static analysis, etc.), and instruments the code to get the results.
I enjoyed using it - it was always fun to find those spots where a small change in code could make a dramatic improvement in performance.
I have never done profiling before. Yesterday I programmed a ProfilingTimer class with a static timetable (a std::map<std::string, long long>) for time storage.
The constructor stores the starting tick, and the destructor calculates the elapsed time and adds it to the map:
ProfilingTimer::ProfilingTimer(std::string name)
    : mLocalName(name)
{
    // Push this scope onto the static nesting path, so nested timers
    // get distinct keys like "Update > Physics > ".
    sNestedName += mLocalName;
    sNestedName += " > ";
    // (The find() check is redundant: operator[] value-initializes to 0.)
    if (sTimetable.find(sNestedName) == sTimetable.end())
        sTimetable[sNestedName] = 0;
    mStartTick = Platform::GetTimerTicks();
}

ProfilingTimer::~ProfilingTimer()
{
    // Accumulate the ticks spent in this scope...
    long long totalTicks = Platform::GetTimerTicks() - mStartTick;
    sTimetable[sNestedName] += totalTicks;
    // ...then pop this scope: the name plus the 3-character " > " separator.
    sNestedName.erase(sNestedName.length() - mLocalName.length() - 3);
}
In every function (or {block}) that I want to profile, I need to add:
ProfilingTimer _ProfilingTimer("identifier");
This line is a bit cumbersome to add to every function I want to profile, since I have to guess which functions take a lot of time. But it works well, and the print function shows the time consumed as percentages.
(Is anyone else working with similar "home-made" profiling? Or is it just stupid? But it's fun! Does anyone have suggestions for improvement?
Is there some way to auto-add such a line to all functions?)
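There is no standard way to inject that line into every function automatically, but a small macro gets close (a sketch building on the class above; __func__ is the standard C++11 name of the enclosing function):

    // Expands to a scoped timer named after the enclosing function.
    #define PROFILE_FUNCTION() ProfilingTimer _profilingTimer(__func__)

    void Update()
    {
        PROFILE_FUNCTION();   // times the whole body of Update()
        // ...
    }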
I've used Glowcode extensively in the past and have had nothing but positive experiences with it. Its Visual Studio integration is really nice, and it is the most efficient/intuitive profiler that I've ever used (even compared to profilers for managed code).
Obviously, that's useless if you're not running on Windows, but the question leaves it unclear to me exactly what your requirements are.
oprofile, without a doubt; it's simple, reliable, does the job, and can give all sorts of nice breakdowns of data.
The profiler in Visual Studio 2008 is very good: fast, user friendly, clear and well integrated in the IDE.
For Windows, check out Xperf. It uses sampled profiling, has a useful UI, and does not require instrumentation. It is quite useful for tracking down performance problems. You can answer questions like:
Who is using the most CPU? Drill down to function name using call stacks.
Who is allocating the most memory?
Who is doing the most registry queries?
Disk writes? etc.
You will be quite surprised when you find the bottlenecks, as they are probably not where you expected!
Since you don't mention the platform you're working on, I'll say cachegrind under Linux. Definitely. It's part of the Valgrind toolset.
http://valgrind.org/info/tools.html
I've never used its sub-feature Callgrind, since most of my code optimization is for inside functions.
Note that there is a frontend KCachegrind available.
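Typical usage, for the record (the binary name and PID are illustrative; the tool names and output-file convention are Valgrind's real ones):

    valgrind --tool=cachegrind ./app       # writes cachegrind.out.<pid>
    cg_annotate cachegrind.out.12345       # per-function (and per-line) counts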
For Windows, I've tried AMD CodeAnalyst, Intel VTune, and the profiler in Visual Studio Team Edition.
CodeAnalyst is buggy (it crashes frequently), and on my code its results are often inaccurate. Its UI is unintuitive: for example, to reach the call stack display in the profile results, you have to click the "Processes" tab, then click the EXE filename of your program, then click a toolbar button with the tiny letters "CSS" on it. But it is freeware, so you may as well try it, and it works (with fewer features) even without an AMD processor.
VTune ($700) has a terrible user interface IMO; in a large program, it's hard to find the particular call tree you want, and you can only look at one "node" in a program at a time (a function with its immediate callers and callees)--you cannot look at a complete call tree. There is a call graph view, but I couldn't find a way to make the relative execution times appear on the graph. In other words, the functions in the graph look the same regardless of how much time was spent in them--it's as though they totally missed the point of profiling.
Visual Studio's profiler has the best GUI of the three, but for some reason it is unable to collect samples from the majority of my code (samples are only collected for a few functions in my entire C++ program). Also, I couldn't find a price or way to buy it directly; but it comes with my company's MSDN subscription. Visual Studio supports managed, native, and mixed code; I'm not sure about the other two profilers in that regard.
In conclusion, I don't know of a good profiler yet! I'll be sure to check out the other suggestions here.
There are different requirements for profiling. Is instrumented code ok, or do you need to profile optimized code (or even already compiled code)? Do you need line-by-line profile information? Which OS are you running? Do you need to profile shared libraries as well? What about trace into system calls?
Personally, I use oprofile for everything I do, but that might not be the best choice in every case. Vtune and Shark are both excellent as well.
For Windows development, I've been using Software Verification's Performance Validator - it's fast, reasonably accurate, and reasonably priced. Best yet, it can instrument a running process, and lets you turn data collection on and off at runtime, both manually and based on the callstack - great for profiling a small section of a larger program.
I use DevPartner for the PC platform.
I have tried Quantify and AQTime, and Quantify won because of its invaluable 'focus on subtree' and 'delete subtree' features.
The only sensible answer is PTU from Intel. Of course, it's best to use it on an Intel processor, and for even more valuable results run it at least on a Core 2 Duo machine, since that architecture makes it easier to get meaningful profiles back.
I've used VTune under Windows and Linux for many years with very good results. Later versions have gotten worse: when they outsourced the product to their Russian development team, quality and performance both went down (more VTune crashes, often 15+ minutes just to open an analysis file).
Regarding instrumentation, you may find it's less useful than you think. In the kind of applications I've worked on, adding instrumentation often slows the product down so much that it no longer works (true story: start the app, go home, come back the next day, app still initializing). Also, with non-instrumented profiling you can react to live problems. For example, with the VTune remote data collector I can start a sampling session against a live server with hundreds of simultaneous connections that is experiencing performance problems, and catch issues that happen in production that I'd never be able to replicate in a test environment.
Electric Fence works nicely for malloc debugging.
My favorite tool is Easy Profiler : http://code.google.com/p/easyprofiler/
It's a compile-time profiler: the source code must be manually instrumented with a set of routines that describe the target regions.
Once the application has run and the measurements have been automatically written to an XML file, it is only a matter of opening the Observer application and a few clicks on the analysis/compare tools before you can see the results in a qualitative chart.
The Visual Studio 2010 profiler under Windows. VTune had a great call graph tool, but it broke as of Windows Vista/7. I don't know if they've fixed it.
Let me give a plug for EQATEC... just what I was looking for... simple to learn and use and gives me the info I need to find the hotspots quickly. I much prefer it to the one built in to Visual Studio (though I haven't tried the VS 2010 one yet, to be fair).
The ability to take snapshots is HUGE. I often get an extra analysis and optimization done while waiting for the real target analysis to run... love it.
Oh, and its base version is free!
http://www.eqatec.com/Profiler/