I was looking at the perf profile for one of the SPEC 2017 benchmarks compiled with LLVM. I saw hotness in a few of the libflang routines, but I am not able to figure out the functionality of the f90_sect_i8 function. I searched for "sect" calls but could not find anything substantial.
I've used a few profilers in the past and never found them particularly easy. Maybe I picked bad ones, maybe I didn't really know what I was expecting!
But I'd like to know if there are any 'standard' profilers which simply drop in and work? I don't believe I need massively fine-detailed reports, just to pick up major black-spots. Ease of use is more important to me at this point.
It's VC++ 2008 we're using (I run standard edition personally). I don't suppose there are any tools in the IDE for this, I can't see any from looking at the main menus?
I suggest a very simple method (which I learned from reading Mike Dunlavey's posts on SO):
Just pause the program.
Do it several times to get a reasonable sample. If a particular function is taking half of your program's execution time, the odds are that you will catch it in the act very quickly.
If you improve that function's performance by 50%, then you've just improved overall execution time by 25%. And if you discover that it's not even needed at all (I have found several such cases in the short time I've been using this method), you've just cut the execution time in half.
I must confess that at first I was quite skeptical of the efficacy of this approach, but after trying it for a couple of weeks, I'm hooked.
VS built in:
If you have team edition you can use the Visual Studio profiler.
Other options:
Otherwise check this thread.
Creating your own easily:
I personally use an internally built one based on the Win32 API QueryPerformanceCounter.
You can make something nice and easy to use within a hundred lines of code or less.
The process is simple: create a macro at the top of each function that you want to profile called PROFILE_FUNC() and that will add to internally managed stats. Then have another macro called PROFILE_DUMP() which will dump the outputs to a text document.
PROFILE_FUNC() creates an object that uses RAII to log the amount of time until the object is destroyed. Both the constructor and the destructor of this RAII object call QueryPerformanceCounter. You could also leave these lines in your code and control the behavior via a #define PROFILING_ON flag.
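As a rough sketch of what such a pair of macros might look like: all names here (ScopedProfiler, g_stats, prof_scope_) are invented for this example, and std::chrono::steady_clock stands in for QueryPerformanceCounter so the sketch compiles off Windows as well; on Windows you would substitute the QueryPerformanceCounter calls in the constructor and destructor.

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

#define PROFILING_ON  // enabled here so the instrumented path compiles

// Accumulated time and call count per profiled function.
struct FuncStats { long long total_ns = 0; long long calls = 0; };
static std::map<std::string, FuncStats> g_stats;

// RAII timer: the constructor records the start time, the destructor
// adds the elapsed time to the per-function stats.
class ScopedProfiler {
public:
    explicit ScopedProfiler(const char* name)
        : name_(name), start_(std::chrono::steady_clock::now()) {}
    ~ScopedProfiler() {
        auto elapsed = std::chrono::steady_clock::now() - start_;
        auto& s = g_stats[name_];
        s.total_ns += std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count();
        s.calls += 1;
    }
private:
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};

#ifdef PROFILING_ON
#define PROFILE_FUNC() ScopedProfiler prof_scope_(__func__)
#else
#define PROFILE_FUNC() ((void)0)
#endif

// Dump all collected stats to a text file.
#define PROFILE_DUMP(path)                                       \
    do {                                                         \
        if (FILE* f = std::fopen((path), "w")) {                 \
            for (const auto& kv : g_stats)                       \
                std::fprintf(f, "%s: %lld calls, %lld ns\n",     \
                             kv.first.c_str(), kv.second.calls,  \
                             kv.second.total_ns);                \
            std::fclose(f);                                      \
        }                                                        \
    } while (0)
```

Making PROFILING_ON a build-time switch means the instrumentation compiles away to nothing in release builds.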
I always used AMD CodeAnalyst; I find it quite easy to use, and it gives interesting results. I always used the time-based profile, and found that it cooperates well with my apps' debug information, letting me find where the time is spent at the procedure, C++ statement, and single assembly instruction level.
I used LTProf in the past for a quick rundown of my C++ app. It is pretty easy to use and runs against a compiled program; it does not need any source code hooks or tweaks. There is a trial version available, I believe.
A very simple (and free) way to profile is to install the Windows debuggers (cdb/WinDbg), set a breakpoint at the place of interest, and issue the wt command ("Trace and Watch Data"). Check MSDN for more info.
Another super simple and useful profiling workflow that works in any programming language is to comment out blocks of code. After commenting them all out, uncomment some and run your program to see the performance. If your program starts to run very slowly when some code has been uncommented, you'll probably want to check the performance there.
I'm trying to perform an audit on a rather complicated multi-physics model I'm working on and have been using Intel VTune Profiler to identify expensive subroutines. The most expensive one is a function called __mulq, which is not something within the source code. I can see which subroutines are calling it, but I cannot figure out what exactly it is. I'm using the Intel Fortran compiler. I have also tried using grep to search for __mulq within the directory containing all the code, and the only mentions of __mulq are within binary files. Can someone identify what this __mulq function may be? Thank you so much for your help!
By using the Bottom-up pane in VTune you can figure out the call stack and, by traversing through the source code and assembly, learn which library or module makes use of the mulq instruction.
From x86 guides, mulq instructions store the result of multiplying two 64-bit values: the first given by the source operand, and the second taken from register %rax. Since mulq instructions are low-level assembly, they likely come from a library or module you are using in your code. If you can figure out which function or module produces the mulq instructions, you could try changing that module to either use a different implementation or reduce the calls that result in mulq instructions. For example, if it is a third-party library, you could look for optimized alternatives.
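To make the connection between source code and the instruction concrete, here is a small illustrative sketch (the function name wide_mul is invented) of the kind of C++ that compilers typically lower to a mulq on x86-64: a full 64x64-bit to 128-bit multiply.

```cpp
#include <cstdint>

// A full 64x64 -> 128-bit multiply is exactly the operation the x86-64
// mulq instruction performs: one 64-bit operand comes from %rax, the
// other from the source operand, and the 128-bit product lands in
// %rdx:%rax. GCC and Clang typically compile this function down to a
// single mul/mulq. (unsigned __int128 is a GCC/Clang extension,
// not standard C++.)
unsigned __int128 wide_mul(std::uint64_t a, std::uint64_t b) {
    return static_cast<unsigned __int128>(a) * b;
}
```

Compiling this with `-S` and inspecting the assembly is a quick way to see the instruction appear.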
My project currently has a library that is static linked (compiled with gcc and linked with ar), but I am currently trying to profile my whole entire project with gprof, in which I would also like to profile this statically linked library. Is there any way of going about doing this?
Gprof requires that you provide -pg to GCC for compilation and -pg to the linker. However, ar complains when -pg is added to the list of flags for it.
I haven't used gprof in a long time, but is -pg even a valid argument to ar? Does profiling work if you compile all of the objects with -pg, then create your archive without -pg?
If you can't get gprof to work, gperftools contains a CPU profiler which I think should work very well in this case. You don't need to compile your application with any special flags, and you don't need to try to change how your static library is linked.
Before starting, there are two tradeoffs involved with using gperftools that you should be aware of:
- gperftools is a sampling profiler. As such, your results won't be 100% accurate, but they should be really good. The big upside to using a sampling profiler is that it won't really slow your application down.
- In multithreaded applications, in my experience, gperftools will only profile the main thread. The only way I've been able to successfully profile worker threads is by adding profiling code to my application. With that said, profiling the main thread shouldn't require any code changes.
There are lots of different ways to use gperftools. My preferred way is to load the gperftools library with $LD_PRELOAD, specify a logging destination with $CPUPROFILE, and maybe bump up the sample frequency with $CPUPROFILE_FREQUENCY before starting my application up. Something like this:
export LD_PRELOAD=/usr/lib/libprofiler.so
export CPUPROFILE=/tmp/prof.out
export CPUPROFILE_FREQUENCY=10000
./my_application
This will write a bunch of profiling information to /tmp/prof.out. You can run a post-processing script to convert this file into something human readable. There are lots of supported output formats -- my preferred one is callgrind:
google-pprof --callgrind /path/to/my_application /tmp/prof.out > callgrind.dat
kcachegrind callgrind.dat &
This should provide a nice view of where your program is spending its time.
If you're interested, I spent some time over the weekend learning how to use gperftools to profile I/O bound applications, and I documented a lot of my findings here. There's a lot of overlap with what you're trying to do, so maybe it will be helpful.
I have a very large C++ program where certain low level functions should only be called from certain contexts or while taking specific precautions. I am looking for a tool that shows me which of these low-level functions are called by much higher level functions. I would prefer this to be visible in the IDE with some drop down or labeling, possibly in annotated source output, but any easier method than manually searching the call-graph will help.
This is a problem of static analysis and I'm not helped by a profiler.
I am mostly working on mac, linux is OK, and if something is only available on windows then I can live with that.
Update
Just having the call graph does not make it much quicker to answer the question "does foo() potentially cause a call to x(), y(), or z()?" (Or am I missing something about the call-graph tools? Perhaps I need to write a program that traverses the graph to get a solution.)
There is the Clang Static Analyzer, which uses LLVM and should also be present on OS X. Actually, I'm of the opinion that it is integrated into Xcode. In any case, there is a GUI.
Furthermore, there are several LLVM passes with which you can generate call graphs, but I'm not sure if this is what you want.
Scientific Toolworks' "Understand" tool is supposed to be able to produce call graphs for C and C++.
Doxygen also supposedly produces call graphs.
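For reference, these are roughly the Doxyfile settings that turn on Doxygen's caller/callee graphs; they require Graphviz's dot tool to be installed.

```
# Doxyfile fragment: enable caller/callee graphs (requires Graphviz "dot")
HAVE_DOT     = YES
# Graph of the functions each function calls:
CALL_GRAPH   = YES
# Graph of the functions that call each function:
CALLER_GRAPH = YES
# Include undocumented functions in the output too:
EXTRACT_ALL  = YES
```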
I don't have any experience with either of these, but I do have some harsh opinions. Keep in mind that I'm a vendor of another tool, so take this opinion with a big grain of salt.
I have experience building reasonably accurate call graphs for massive C systems (25 million lines) with 250,000 functions.
One issue I encounter in building a realistic call graph is indirect function calls and, for C++, overloaded method calls. In big systems, there are a lot of both. To determine what gets called when FOO gets invoked, your tool has to have a deep semantic understanding of how the compiler/language resolves an overloaded call and, for indirect function calls, a reasonably precise determination of what a function pointer might actually point to in a big system. If you don't get these reasonably right, your call graph will contain a lot of false positives (e.g., bogus claims that A calls B), and at scale false positives are a disaster.
For C++, you must have what amounts to a full compiler front end. Neither Understand nor Doxygen has this, so I don't see how they can actually understand C++'s overloading/Koenig lookup rules. Neither makes any attempt that I know of to reason about indirect function calls.
Our DMS Software Reengineering Toolkit does build calls graphs for C reasonably well, even with indirect function pointers, using a C-language precise front end.
We have C++ language precise front end, and it does the overload resolution correctly (to the extent the C++ committee agrees on it, and we understand what they said, and what the individual compilers do [they don't always agree]), and we have something like Doxygen that shows this information. We don't presently have function pointer analysis for C++ but we are working on it (we have full control flow graphs within methods and that's a big step).
I understand Clang has some option for computing call graphs, and I'd expect that to be accurate on overloads, since Clang is essentially a C++ compiler implemented as a set of components. I don't know what, if anything, Clang does to analyze function pointers.