I have a large C++ project of hundreds of files with a CMake build system. How can I use GCC's -ftime-report option but get a single summary for the full build?
I am looking to improve build times, and this would help me see where to focus the effort.
GCC prints a separate -ftime-report summary for each translation unit, so you would need to aggregate the output across the whole build yourself by parsing it.
A good way to get a higher level overview is to use Ninja and parse the .ninja_log file:
https://github.com/ninja-build/ninja/issues/1080#issuecomment-255436851
Also see https://github.com/nico/ninjatracing. Chromium uses tools like that to keep track of build times.
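For example, with a CMake project the workflow might look roughly like this (assuming a CMake recent enough to support -B, and the ninjatracing script checked out locally):

    cmake -G Ninja -B build .            # configure the project for Ninja
    cmake --build build                  # build; Ninja records per-edge timings in .ninja_log
    python ninjatracing build/.ninja_log > trace.json
    # load trace.json into chrome://tracing for a per-file timeline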
Update:
-ftime-report is simply not suitable for this task, as it is meant for compiler developers. Use Clang and https://github.com/aras-p/ClangBuildAnalyzer instead.
GCC is still far from supporting -ftime-trace: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92396
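A typical ClangBuildAnalyzer session looks roughly like this (it relies on Clang's -ftime-trace, available since Clang 9, which writes a .json trace next to each object file):

    cmake -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_CXX_FLAGS=-ftime-trace -B build .
    ClangBuildAnalyzer --start build
    cmake --build build
    ClangBuildAnalyzer --stop build capture.bin
    ClangBuildAnalyzer --analyze capture.bin   # aggregated report for the whole build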
Related
I have a prebuild-event tool (written in Ruby) in my C++ toolchain, that generates additional C++ code from existing C++ source code. I would like to replace this tool with a faster generator and using clang would be the best option.
Is there a way to write a C++ application that parses the C++ source code of a file, so I can implement this prebuild tool with Clang? I am looking for a keyword or a page on how to get started. Any help is highly appreciated!
Parsing C++ is not a simple thing: compile-time trickery and implicit semantics make it incredibly hard. Because of this I would suggest going with Clang. Its developers made it possible to use Clang as a library. Check out this guide to see the different interfaces Clang has. If you want the real C++ experience, you'll want to choose LibTooling.
I want to warn you, though, that for any C/C++ parser to work as expected it absolutely needs the compilation options used by the real compiler. Without the include directories or macro definitions, the code can make little to no sense. Basically, your build system should tell your custom tool how to compile each file. The simplest approach would be to use a compilation database; it is the go-to solution for many Clang-based tools. However, it looks like you're making this a part of your build system, so maybe incorporating your tool and taking the options directly from the build system would not be much of a burden for you.
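To give you a concrete starting point, a minimal LibTooling-based tool looks roughly like this (adapted from the LibTooling tutorial; the exact API varies between Clang versions):

    // minimal ClangTool skeleton: parses each file with its real compile flags
    #include "clang/Frontend/FrontendActions.h"
    #include "clang/Tooling/CommonOptionsParser.h"
    #include "clang/Tooling/Tooling.h"
    #include "llvm/Support/CommandLine.h"

    using namespace clang::tooling;

    static llvm::cl::OptionCategory MyToolCategory("my-tool options");

    int main(int argc, const char **argv) {
      // reads flags from a compilation database (compile_commands.json)
      CommonOptionsParser OptionsParser(argc, argv, MyToolCategory);
      ClangTool Tool(OptionsParser.getCompilations(),
                     OptionsParser.getSourcePathList());
      // SyntaxOnlyAction just parses and type-checks; substitute your own
      // FrontendAction/ASTConsumer to walk the AST and emit generated code
      return Tool.run(newFrontendActionFactory<clang::SyntaxOnlyAction>().get());
    }

If you build with CMake, -DCMAKE_EXPORT_COMPILE_COMMANDS=ON produces the compile_commands.json this consumes.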
I hope this information is helpful!
We have a not-very-complicated but big (i.e. lots of files) Visual Studio C++ Win32 console application, written against the C++0x standard in VS2010.
It does not use any non-standard code or anything (hopefully!).
I now want to port it to Linux.
What is the quickest way to do it?
autoconf?
old-fashioned make file?
any other solution?
I would use regular make but keep it simple with default rules as much as possible. Add in dependencies as you go along.
EDIT: As an interim step, build it with MinGW so that you can avoid the whole API-porting issue until you have a working build in your new build mechanism.
If your console app calls win32 API functions then you have a choice between modifying all the source where it is used or writing a module that implements those functions.
In prior porting efforts of this type I tried it both ways and the latter was easier. I ended up writing only about 18 to 20 shim functions.
It was successful enough that I ended up writing an OS abstraction layer that was used on many projects that simply let me compile on Windows native, cygwin, Linux, VxWorks, etc. with trivial changes to one or two files.
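For illustration, a shim of that kind can be as small as mapping individual Win32 calls onto POSIX. Sleep and GetTickCount here are hypothetical examples of the shape such shims take, not code from the actual layer:

    // win32_shim.h -- hypothetical Win32-to-POSIX shims for the Linux build
    #include <stdint.h>
    #include <time.h>

    typedef uint32_t DWORD;

    inline void Sleep(DWORD milliseconds) {
        // Win32 Sleep() takes milliseconds; nanosleep wants a timespec
        timespec req = { static_cast<time_t>(milliseconds / 1000),
                         static_cast<long>(milliseconds % 1000) * 1000000L };
        nanosleep(&req, NULL);
    }

    inline DWORD GetTickCount() {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);  // monotonic clock (link -lrt on old glibc)
        return static_cast<DWORD>(ts.tv_sec * 1000 + ts.tv_nsec / 1000000);
    }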
(p.s. Any interest in an open source version of a C++ based OS abstraction layer? I was thinking of releasing an unencumbered version of it to the world if there's sufficient interest. It's mostly useful where BOOST is too heavy -- i.e. embedded projects.)
Most probably you don't need autoconf (and I suggest you don't touch it, unless you love pain), because you are not trying to be portable to a dozen Unix flavours.
Roll your Makefiles manually. It shouldn't be too difficult if you have a set of shared rules and have minimal Makefiles that just specify source files and compile options.
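A minimal Makefile in that style might look like this (names made up; recipe lines must start with a tab, and -MMD asks GCC to emit dependency files so headers are tracked):

    # lean on make's built-in %.o: %.cpp rule; list only what varies
    CXX      := g++
    CXXFLAGS := -std=c++0x -O2 -MMD
    OBJS     := main.o util.o parser.o

    myapp: $(OBJS)
    	$(CXX) -o $@ $(OBJS)

    -include $(OBJS:.o=.d)

    clean:
    	rm -f myapp $(OBJS) $(OBJS:.o=.d)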
Use GCC 4.5 as it supports more C++0x features.
You can export a make file from Visual Studio.
Update: Actually you can't anymore, unless you have VC6 lying around
STAY AWAY FROM AUTO* and configure. These are horrible IMHO.
If you can somehow get a VS 8 or 9 vcproj/sln, you can use this. I have not used it, so I can't give any advice.
If you're up to manual conversion, I would suggest something like CMake, as it's pretty easy to get ready fast enough, even for large projects.
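To give a sense of the effort involved, a minimal CMakeLists.txt for a single executable is only a few lines (GLOB is a quick start; listing the files explicitly is the usual recommendation):

    cmake_minimum_required(VERSION 2.8)
    project(myapp CXX)
    file(GLOB_RECURSE SOURCES src/*.cpp)  # quick start; list files explicitly later
    add_executable(myapp ${SOURCES})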
If the project has a simple layout, you could have success using Qt 4's qmake like this:
qmake -project
It will output a qmake .pro file, which can be converted into a makefile on many platforms (using qmake). This should work okay, but not perfectly. Alternatively, you can install the Qt plugin for VS, and let it generate the pro file from an existing VS project. It will make your build system depend on Qt4's qmake, which is probably not what you want.
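For reference, the whole round trip is just:

    qmake -project   # scan the directory tree and write a .pro file
    qmake            # generate a Makefile from the .pro file
    make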
There are of course other things like cmake, but they will all require manual intervention.
The fastest way to do it?
g++ *.cpp -o myapp
Seriously, depending on your needs, even generating a makefile might be overkill. If you're just interested in a quick and dirty "let's see if we can get a working program on Linux", just throw your code files at g++ and see what happens.
I'm developing an application in C++ on Windows XP, using Eclipse as my IDE, and a Makefile-based build system (with custom tools to generate the Makefiles). In addition, I'm using LZZ, which allows me to write a single file, which then gets split into a header and an implementation file. I'm using TDM's port of GCC 4.
What tools or techniques could I use to determine exactly how much time each part of the build process takes, and why it is slow?
Of particular interest would be:
How much time does make need to parse the Makefiles, figure out the dependencies, check the timestamps, etc.?
How much time does Eclipse need before and after the build?
How much time does GCC spend on parsing system and boost headers?
P.S.: This is my home project, so expensive tools are out of reach for me, but could be documented here anyway if they are particularly relevant.
Since Make and GCC are very verbose about what they're doing, a very crude way to get a high-level overview of time spent is to pipe make's output through a script that timestamps each line:
make | perl -MTime::HiRes -pe "printf '%.5f ', Time::HiRes::time()"
(I'm using ActivePerl to do this, but from what I gather, Strawberry Perl may now be the recommended Perl version for Windows.)
Reformat or process the timestamps to your liking.
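For instance, a variation on the same one-liner prints the seconds elapsed since the previous line, which makes the slow steps stand out directly:

    make | perl -MTime::HiRes=time -pe "BEGIN { $p = time } $n = time; printf '%7.3f ', $n - $p; $p = $n"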
To get more details about GCC in particular, use the -ftime-report option.
To find out how much overhead Eclipse adds, use a stopwatch to time builds from Eclipse and from the command line.
If you are using Boost, most of the time is most likely spent on template instantiation and the subsequent optimization. You can tell GCC to report where its time goes with -ftime-report (the option is the same in MinGW/TDM builds of GCC).
And if you are trying to speed up your compilation time, disable optimization with -O0 (that's a capital letter O followed by the digit zero).
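Putting both together on a single slow file:

    g++ -O0 -ftime-report -c bigfile.cpp   # per-pass timing, optimizer disabled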
Try SparkBuild, a free gmake/nmake replacement that can generate an annotated build log with precise timing information for every job in the build. You can load that file into SparkBuild Insight to get a graphical overview of where the time goes.
See this blog for an example of how to use it.
There is a version of GNU make called remake that provides profiling information.
We work under Linux/Eclipse/C++, using Eclipse's "native" C++ projects (.cproject). The system comprises several C++ projects, all kept under svn version control, using the integrated Subclipse plugin.
We want a script that would check out, compile, and package the system, without us needing to drive this process manually from Eclipse, as we do now.
I see that there are generated makefiles and support files (sources.mk, subdir.mk etc.) scattered around, which are not under version control (probably the Subclipse plugin is "clever" enough to exclude them). I guess I could put them under svn and use them in the script we need.
However, this feels shaky. Has anybody tried it? Are there any issues to expect? Are there recommended ways to achieve what we need?
N.B. I don't believe that the idea of adopting another build system will be received nicely, unless it's SUPER-smooth. We are a small company of 4 developers running full-steam ahead, and any additional overhead or learning curve will not be appreciated :)
Thanks a lot in advance!
I would not recommend putting things that are generated in an external tool into version control. My favorite phrase for this tactic is "version the recipe, not the cake". Instead, you should use a third party tool like your script to manipulate Eclipse appropriately to generate these files from your sources, and then compile them. This avoids the risk of having one of these automatically generated files be out of sync with your root sources.
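One way to script that, assuming your CDT version ships the headless-build application (the application id and flags below may differ between releases, and the repository URL is made up), is:

    svn checkout http://svn.example.com/trunk checkout        # hypothetical URL
    eclipse -nosplash \
      -application org.eclipse.cdt.managedbuilder.core.headlessbuild \
      -data /tmp/headless-workspace \
      -import checkout/projectA -import checkout/projectB \
      -build all

This regenerates the makefiles from the .cproject files on every run, so none of the generated files need to live in svn.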
I'm not sure what your threshold for "super-smooth" is, but you might want to take a look at Maven2, which has a plugin for Eclipse projects to do just this.
I know that this is a big problem (I had exactly the same one; in addition, maintaining a build workspace in svn is a real pain!).
Problems I see:
You will get into trouble as soon as somebody adds or changes project settings but doesn't trigger a new build for all possible platforms, because the generated makefiles aren't updated.
There is no overall makefile, so you cannot easily reuse the build order of your projects that Eclipse has calculated.
BTW: I wrote an Eclipse plugin that builds up a workspace from a given (textual) list of projects and then triggers the build. That's possible but also not an easy task.
Unfortunately I can't post the plugin somewhere because I wrote it for my former employer...
I have a large legacy C++ project compiled under Visual Studio 2008. I know there is a reasonable amount of 'dead' code that is not accessed anywhere -- methods that are not called, whole classes that are not used.
I'm looking for a tool that will identify this by static analysis.
This question: Dead code detection in legacy C/C++ project suggests using code coverage tools. This isn't an option as the test coverage just isn't high enough.
It also mentions a -Wunreachable-code option to gcc. I'd like something similar for Visual Studio. We already use the linker's /OPT:REF option to remove redundant code, but this doesn't report the dead code at a useful level (when used with /VERBOSE there are over 100,000 lines, including a lot from libraries).
Are there any better options that work well with a Visual Studio project?
I know that Gimpel's Lint products (PC-Lint and Flexelint) will identify unreachable code and unused / unreferenced modules.
They both fall in the category of static analysis tools.
I have no affiliation w/ Gimpel, just a satisfied long-term customer.
You'll want something along the lines of QA-C++ (http://www.programmingresearch.com/QACPP_MAIN.html), also see http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis for similar products.
You're looking for a static code analysis tool that detects unreachable code; many coding guidelines (such as MISRA-C++, if I'm not mistaken) require that no unreachable code exists. An analysis tool geared specifically to enforce such a guideline would be your best bet.
And you'll likely be able to find other uses for the tool as well.
I don't know Visual C++, and -Wunreachable-code and coverage tools have already been recommended, so for your situation I would try the following:
1. Make a list of all the symbols in your source with ctags (or a similar program).
2. Enable dead-code elimination in your compiler (I would assume it defaults to on).
3. Enable whole-program/link-time optimization, so the linker knows that unused functions in your modules are not required by other externals and discards them.
4. Take the symbols from your binary and compare them with the list from step 1; a rough sketch of this comparison follows below.
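On a GCC-style toolchain, steps 1 and 4 could be sketched like this (the simple names from ctags and the demangled names from nm won't line up exactly, so expect to normalize both lists):

    # step 1: function names defined in the source tree
    ctags -R -x --c++-kinds=f src/ | awk '{ print $1 }' | sort -u > src_funcs.txt
    # step 4: demangled function names that survived linking (strip address,
    # symbol type, argument list, and namespace qualifiers)
    nm --defined-only -C myapp | sed -e 's/^[^ ]* [^ ]* //' -e 's/(.*//' -e 's/.*:://' | sort -u > bin_funcs.txt
    # names present in the source but not in the binary are dead-code candidates
    comm -23 src_funcs.txt bin_funcs.txt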
Another approach could be some call graph generating tool (e.g. doxygen).
I suggest you use a couple of approaches:
1. GCC has some useful compilation flags:
-Wunused-function
-Wunused-label
-Wunused-value
-Wunused-variable
-Wunused-parameter
-Wunused-but-set-parameter
2. Cppcheck has some useful features like:
--enable=unusedFunction
3. Use a static analyzer, as was suggested before.
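For example (cppcheck's --project option, available in newer releases, reuses a CMake-generated compile_commands.json so every file is checked with its real flags):

    g++ -Wall -Wextra -Wunused-parameter -c foo.cpp    # warnings for unused entities
    cppcheck --enable=unusedFunction --project=compile_commands.json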
One approach that works for me - with Delphi - is to enable debugging, and run your program under the debugger.
When a Delphi program is run under the debugger, the IDE shows in the margin which lines of code can be set as breakpoints. Code which is truly dead - i.e., code that has been stripped out by the compiler/linker - is obvious, as breakpoints can't be set there.
Some additional notes, as commenters seem to misunderstand this:
a: You don't need to try setting a breakpoint on each line. Just open up the source file in the IDE, and quickly scroll through it. Dead code is easily spotted.
b: This is NOT a 'code coverage' check. You don't need to run the application to see if it reaches the lines.
c: I'm not familiar enough with VS2008, so I can't say whether this suggestion will work there.
Either:
1) MSVC's under-used built-in static analysis tool, or
2) the MSVC marketplace, which has lots of tools, including support for most free tools such as CppCheck.
You will need a recent version of Visual Studio for marketplace applications, but the free "Community Edition" has very lenient licensing.
Write a script that randomly deletes a function from the source code and recompiles everything from scratch. If everything still compiles and links, that function was dead code.