Google CPU profiler - profiling

I want to try the Google CPU profiler to analyze hotspots in a C project, as an alternative to gprof, to see whether the results differ, because I am not convinced of the accuracy gprof provides. As I am just introducing myself to these tools, I am confused about how to link the libraries into my program and then run it in order to obtain the profiling results.
Keep in mind that I compile the code through a makefile, so I guess the flags for linking should be added there, but whatever efforts I made ended in failure.
I downloaded gperf and related packages from the Synaptic package manager, and also installed gperf from the command line with sudo, but I can't find a way to link my program against the appropriate libraries.
Any help would be appreciated, thank you.
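As a starting point, note that the Google CPU profiler is part of gperftools (formerly google-perftools); GNU gperf is an unrelated hash-function generator, so installing it will not help. Below is a minimal sketch of the link-and-run cycle, assuming a Debian-style system with the libgoogle-perftools-dev package installed and placeholder program names:

# link the profiler into the binary (in a makefile, add -lprofiler to the link flags)
gcc -O2 -g -o myprog main.c -lprofiler
# run with CPUPROFILE set; samples are written to that file when the program exits
CPUPROFILE=/tmp/myprog.prof ./myprog
# inspect the hotspots (the analysis tool is installed as google-pprof on Debian/Ubuntu)
pprof --text ./myprog /tmp/myprog.prof

Alternatively, gperftools also supports profiling without relinking, by preloading the library at run time (LD_PRELOAD=/usr/lib/libprofiler.so, with the exact path depending on your distribution).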

Related

Using TensorFlow in C++ on Windows

I know there are ways of using TensorFlow in C++, and they even have documentation for it, but I can't seem to get the library itself. I've checked the build-from-source instructions, but they seem to build a pip package rather than a library I can link to my project. I also found a tutorial, but when I tried it out I ran out of memory and my computer crashed. My question is: how can I actually get the C++ library to work in my project? I have these requirements: I have to work on Windows with Visual Studio in C++. What I would love is a pre-compiled DLL that I could just link against, but I haven't found such a thing and I'm open to other alternatives.
I can't comment so I am writing this as an answer.
If you don't mind using Keras, you could use the frugally-deep package. I haven't seen a library myself either, but I came across frugally-deep and it seemed easy to implement. I am currently trying to use it, so I cannot guarantee it will work.
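For context, frugally-deep's workflow (per its README; the model file names here are placeholders) is to train and save the model in Python, then convert it to the library's own JSON format before loading it from C++:

# convert a saved Keras model with the script shipped in the frugally-deep repo
python3 keras_export/convert_model.py my_model.h5 my_model.json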
You could check out neural2D from here:
https://github.com/davidrmiller/neural2d
It is a neural network implementation without any dependent libraries (all written from scratch).
I would say that the best option is to use cppflow, an easy wrapper that I created for using TensorFlow from C++.
You won't need to install anything: just download the TF C API and place it somewhere on your computer. You can take a look at the docs to see how to do that and how to use the library.
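For illustration, getting the TF C API is just a download and unzip; the version and URL below are an example only, so check TensorFlow's "Install TensorFlow for C" page for the current link:

curl -LO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-windows-x86_64-2.4.0.zip
unzip libtensorflow-cpu-windows-x86_64-2.4.0.zip -d libtensorflow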
The answer seems to be that it is hard :-(
Try this to start. You can follow the latest instructions for building from source on Windows up to the point of building the pip package. But don't build the pip package; build these targets instead:
bazel build --config=opt //tensorflow:tensorflow.dll
bazel build --config=opt //tensorflow:tensorflow.lib
bazel build --config=opt //tensorflow:install_headers
That much seems to work fine. The problems really start when you try to use any of the header files: you will probably get compilation errors, at least with TF version >= 2.0. I have tried:
Building the label_image example (instructions in the readme.md file): it builds and runs fine on Windows, meaning all the headers and source are there somewhere.
Incorporating that source into a Windows console executable: this runs into compiler errors due to conflicts with std::min and std::max, probably caused by the Windows SDK.
Including c_api.h in a Windows console application: won't compile.
Including the TF-Lite header files: won't compile.
There is little point investing the lengthy compile time in the first two bazel commands if you can't get the headers to compile :-(
You may have time to invest in resolving these errors; I don't. At this stage, TensorFlow lacks sufficient support for Windows C++ to rely on it, particularly in a commercial setting. I suggest exploring these options instead:
If TF-Lite is an option, watch this
Windows ML/Direct ML (requires conversion of TF models to ONNX format)
CPPFlow
Frugally Deep
Keras2CPP
UPDATE: having explored the list above, I eventually found the following worked best in my context (real-time continuous item recognition):
convert models to ONNX format (use tf2onnx or keras2onnx)
use Microsoft's ONNX runtime
Even though Microsoft recommends using DirectML where milliseconds matter, the performance of the ONNX runtime using DirectML as an execution provider means we can run a 224x224 RGB image through our Intel GPU in around 20 ms, which is quick enough for us. But it was still hard finding our way to this answer.
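For reference, the conversion step uses tf2onnx's documented command-line entry point (the model paths here are placeholders):

# convert a TensorFlow SavedModel directory to a single .onnx file
python -m tf2onnx.convert --saved-model my_saved_model --output model.onnx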

gdb Program exited code 01 for program using CMake

I am using Scientific Linux. I am dealing with a huge amount of C++ code spread across tons of .cpp files. Right now it compiles successfully, but the values/data I'm getting are definitely wrong. Also, some small changes I make to the code cause seg faults.
In the directory user/project/Build, I enter make to compile and link all the .cpp files. I then have to go to user/project/Build/bin/project to run the project binary, by typing user/run/run.sh.
When I go to the directory /user/project/Build/bin, type gdb project, and then run, I see
Program exited with code 01. Missing separate debuginfos, use: debuginfo-install glibc..
If I try to set a breakpoint, such as by break test.cpp:19, I get the message No source file named test.cpp.
Make breakpoint pending on future shared library load?
But I clearly have a source file named test.cpp.
How can I set breakpoints? Considering that I'm a beginner with Unix, should I use another IDE such as emacs or Qt creator?
Did you read the documentation of GDB? It is definitely worthwhile to read it. Also read some tutorial on gdb.
If your Makefiles are generated by cmake, you should also study the documentation of cmake and the documentation of make. See also this answer to a related question.
Did you compile all your software with g++ -Wall -g (and without any optimization flags like -O1 or -O2), or perhaps even with -g3 instead of -g?
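Since your Makefiles come from cmake, the usual way to get -g everywhere is a dedicated debug configuration; a minimal sketch, assuming an out-of-source build directory:

cmake -DCMAKE_BUILD_TYPE=Debug ..   # adds -g (and no -O2) to the gcc/g++ flags
make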
You might even install the debugging variants of the major libraries you are using (e.g. packages like glibc-debuginfo, etc.).
You probably want to specify (for gdb) the source directories to search, with gdb's dir command.
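For example (the source path below is a placeholder for wherever your .cpp files actually live):

(gdb) dir /user/project/src    # add a directory to gdb's source search path
(gdb) break test.cpp:19        # the breakpoint should now resolve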
And yes, I recommend using emacs. But above all, I strongly recommend spending hours (and perhaps days or even weeks) learning more about Linux and software development on it (there are lots of books, websites, tutorials and other training about that). Maybe start with a small, hello-world-like program, learning how to compile it with g++ and debug it with gdb. Then try to compile and debug a small free software project that you like (e.g. fishshell, or anything you've got from sourceforge or github, of perhaps a hundred thousand lines of source code), just to get a feeling of how such software is built.
If you are using (or improving) a big scientific software, it probably has some community website, mailing list, or forum for help (see geant or kiva or aster as examples, which I only know by name!). Please also use them.
PS. It is mostly not a matter of choosing tools: you are using good ones (GCC i.e. g++, emacs, gdb, make, grep or ack, etags, git, awk, ...). It is a matter of getting the knowledge (spending weeks or months of your time) about how to use them wisely and how to combine their use. See also this & that.

libtool slowing down gdb

I have a large C++ program with lots of templates that I want to debug. Unfortunately, gdb takes several minutes to read the symbols.
http://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html contains lots of options for debugging.
Which options would you suggest to make gdb faster/more usable?
Update: It looks like the slowdown is caused by libtool. If gdb is launched via libtool --mode=execute gdb foo, it is slow. If it is launched directly as gdb .libs/foo, it is reasonably fast. Any ideas why the former is so much slower?
Update: Another suggestion was -fvisibility=hidden; see http://gcc.gnu.org/wiki/Visibility
Sometimes using -fdebug-types-section can make things a bit faster. It isn't guaranteed though.
Several minutes to load... I wonder how big this executable is. If I were desperate, I might try compiling only selected modules with debug info. Or perhaps look to see whether it is a gdb bug. If the program is split into an executable and some shared libraries, and some parts don't change very often, you could also look into using the "gdb index" feature (see the manual) to speed up the loading of debug info for those modules.
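If the index route looks promising, recent gdb releases ship a helper script that bakes a .gdb_index section into an already-built binary (a sketch, assuming the script is on your PATH):

gdb-add-index .libs/foo    # one-time step after each rebuild
gdb .libs/foo              # gdb now reads the precomputed index instead of rescanning the debug info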

Building ffmpeg with an executable output

I generally don't like to ask such "you figure it out for me" questions, but I suspect this one will be really simple for a C++ guru. I want to build ffmpeg for Android, and I'd like it to output an executable rather than a set of libraries.
We've been using the guardian project's build:
https://github.com/guardianproject/android-ffmpeg
It does produce what we want, but I've found tweaking it for different architectures to be, at best, unpleasant.
I've gotten this version to build:
https://github.com/appunite/AndroidFFmpeg
It does a nice job of slicing and dicing the different architectures, but produces a JNI version.
There is a long story as to why I want the exe, but I'll skip it for now. Is there a flag that needs to be flipped? Some path or other setting? I am at this point fully baffled.
Thanks in advance.
Consider using scratchbox to statically cross-compile (and test) FFmpeg for ARM to your requirements on your desktop (still inside SB).
Once you're happy, get enough space on your droid to hold the larger-than-otherwise binary, and use adb to push that exe up there. Don't forget to chmod +x it.
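FFmpeg's configure builds the ffmpeg executable by default (unless programs are explicitly disabled), so the main knobs are the cross-compile and static-linking options. A rough sketch, where the cross prefix and sysroot path are assumptions that depend on your NDK toolchain:

./configure \
    --enable-cross-compile --target-os=linux --arch=arm \
    --cross-prefix=arm-linux-androideabi- \
    --sysroot="$NDK/platforms/android-9/arch-arm" \
    --enable-static --disable-shared
make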
This link I posted earlier in the comments has lots of general information about FFmpeg and Android. Then there is Estevex's tutorial on Android, FFmpeg and x264. In addition, here's Roman10's blog post about the subject.
When you manage to build the binaries, remember to set execute rights on the files (e.g. chmod 777 or chmod 775). The command to run FFmpeg is:
Process p = Runtime.getRuntime().exec("/data/data/yourpackagename/files/ffmpeg");
Links
Some implementations:
https://github.com/guardianproject/android-ffmpeg
https://github.com/havlenapetr/FFMpeg
Discussion:
https://groups.google.com/forum/#!topic/android-ndk/vQVOn1iyd9g
http://ffmpeg-users.933282.n4.nabble.com/Issue-with-cross-compiling-ffmpeg-for-linux-in-windows-td3830825.html

How can I profile a complete C++ build?

I'm developing an application in C++ on Windows XP, using Eclipse as my IDE, and a Makefile-based build system (with custom tools to generate the Makefiles). In addition, I'm using LZZ, which allows me to write a single file, which then gets split into a header and an implementation file. I'm using TDM's port of GCC 4.
What tools or techniques could I use to determine exactly how much time each part of the build process takes, and why it is slow?
Of particular interest would be:
How much time does make need to parse the Makefiles, figure out the dependencies, check the timestamps, etc.?
How much time does Eclipse need before and after the build?
How much time does GCC spend on parsing system and boost headers?
P.S.: This is my home project, so expensive tools are out of reach for me, but could be documented here anyway if they are particularly relevant.
Since Make and GCC are very verbose about what they're doing, a very crude way to get a high-level overview of time spent is to pipe make's output through a script that timestamps each line:
make | perl -MTime::HiRes -pe "printf '%.5f ', Time::HiRes::time()"
(I'm using ActivePerl to do this, but from what I gather, Strawberry Perl may now be the recommended Perl version for Windows.)
Reformat or process the timestamps to your liking.
To get more details about GCC in particular, use the -ftime-report option.
To find out how much overhead Eclipse adds, use a stopwatch to time builds from Eclipse and from the command line.
If you are using Boost, most likely most of the time is spent in template instantiation and subsequent optimization. You can tell GCC to report the time it spends in each compilation phase with -ftime-report.
And if you are trying to speed up your compilation time, disable optimization with -O0 (that is a capital O followed by the digit zero).
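Combined, that looks like the following for a single translation unit (the file name is a placeholder):

g++ -ftime-report -O0 -c expensive_file.cpp    # prints per-pass timing to stderr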
Try SparkBuild, a free gmake/nmake replacement that can generate an annotated build log with precise timing information for every job in the build. You can load that file into SparkBuild Insight to get a graphical overview of where the time goes.
See this blog for an example of how to use it.
There is a version of GNU make called remake that provides profiling information.
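As a sketch of how that is used (the flag name is per recent remake releases; treat it as an assumption and check remake --help for your version):

remake --profile    # writes callgrind-format profile data for the build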