Gather actually used C++ code lines

Is there a way (most likely using gcc/g++ itself?) to determine which code lines of all the files included by a single compilation are actually used?
I use many third-party includes and would like to strip them down to the code that is actually used, in order to speed up compilation.
Maybe g++ can output this via some combination of its many options?

The best advice I can offer is to break apart huge include files into smaller pieces. This allows developers to include only the files they need to resolve symbols.
My favorite example is windows.h. That mega include file declares the entire Windows API, whether you need it or not. If you only want the APIs for processing files, you get the APIs for dialog boxes as well.
Some shops like the monster include files because they only have to include that one file in their sources. One drawback is that every source file now depends on the mega include file. If I change one of the include files, the entire system will be rebuilt instead of just the few modules that depend on the header file that I changed.
In my projects, only a couple of files are compiled at a time; usually less than 5. This makes the average turnaround time (modify then build) very fast. The entire system is rebuilt overnight by a server or by developers. There is no need to rebuild source files that have not changed.
So split up your modules, so you are not rebuilding your system every time.
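As a rough sketch of what that split looks like (all header and function names here are made up for illustration), each translation unit pulls in only the small header it actually needs:

// files_api.h -- only the file-processing declarations
#ifndef FILES_API_H
#define FILES_API_H
bool copy_file(const char* from, const char* to);
#endif

// dialogs_api.h, network_api.h, ... -- the rest of the split, not needed here

// report.cpp -- includes only the sub-header it actually uses, so editing
// dialogs_api.h never triggers a rebuild of this file
#include "files_api.h"

void archive_report()
{
    copy_file("report.txt", "backup/report.txt");
}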

Related

Do I have to recompile C++ code every time during development?

Let's say we have a large code base and we are doing development in C++. Do we have to recompile every time in order to test the code?
If so, development is going to take ages.
What's the solution to this problem?
Yes, you'll definitely need to compile C++ code if you want to test it. C++ code can't be executed without being compiled.
However, if you organize your project smartly, compilation could take only a few seconds, or maybe up to a minute, even if there are thousands of files (or more).
By default, your build system will run an incremental build, unless you explicitly request a "rebuild" or ran a "clean" previously. It will then invoke the compiler/linker accordingly and make sure it compiles/links only what needs to be rebuilt. If a .cpp file did not change, there is no need to compile it. This is all based on file timestamps: if the generated object file is newer than the .cpp source file, the build system knows it is up to date and won't regenerate it. Whether you use Visual Studio, CMake, or any other IDE or build system, they all support this.
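As a rough illustration of that timestamp rule (this is not how any particular build system is implemented, just the comparison they all boil down to), a C++17 sketch might look like:

#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// A source file is considered up to date if its object file exists and is
// at least as new as the source -- the same rule make-like tools apply.
bool is_up_to_date(const fs::path& source, const fs::path& object)
{
    return fs::exists(object) &&
           fs::last_write_time(object) >= fs::last_write_time(source);
}

int main()
{
    if (!is_up_to_date("main.cpp", "main.o"))
        std::cout << "main.cpp needs recompiling\n";
}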
Additionally, you can follow some guidelines to make this even faster:
First, organize your project into modules (libraries), ideally with dynamic linking. Then when a file from a library is changed, only that library needs to be recompiled (other libraries or programs using the modified library won't have to be recompiled).
When you modify only an implementation file (.cpp file), only that file plus a relink of the module using it will be needed.
When you modify a header file (.h file), all the .cpp files including it will need to be recompiled, so you must be careful to optimize your includes. Prefer forward declarations (see why here) to includes whenever possible; otherwise your header becomes a dependency of all the .cpp files that use the other header file including yours, and as a cascade, modifying this header file ends up requiring recompilation of tons of .cpp files. Don't include files you don't need (because that will trigger a useless rebuild whenever the header file changes). Possibly use precompiled headers to speed up compilation.
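For example (class names are made up), a forward declaration keeps the heavy include out of the header, and only the one .cpp file that needs the full definition includes it:

// widget.h -- no #include "engine.h" needed: the header only uses Engine*,
// so a forward declaration is enough
#ifndef WIDGET_H
#define WIDGET_H

class Engine;              // forward declaration instead of #include "engine.h"

class Widget
{
public:
    explicit Widget(Engine* engine);
    void draw();
private:
    Engine* engine_;       // pointers/references don't need the complete type
};

#endif

// widget.cpp is then the only file that includes engine.h, so a change to
// engine.h recompiles widget.cpp alone, not every file including widget.h.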
Note: As commented, there are apparently some tools that can interpret C++ without compiling it... but that's not what C++ was designed for in the first place. And I doubt they will be as fast as compiled code at runtime... so you'll probably save 20 seconds of incremental build time and then lose minutes at runtime...

Building project on CodeLite (C++) recompiles too many files

I have a C++ project with many files. When I build the project after making even small changes to the code, it recompiles a large number of files. This is increasing the compile time of the project. So I need suggestions about ways I can improve the structure of the project, or any other possible optimisations, that will help reduce its compile time.
Also, there are a couple of files which get recompiled even when I make no changes to the project. Somehow make doesn't detect that those files need not be recompiled, or maybe I am missing something.
I am using CodeLite on Linux (Ubuntu) for my project. The language is C++.
The link provided above will give you a detailed explanation. Just to put it in simpler terms, I am adding a few more things.
If you change something in a .cpp file, only that file will get recompiled. If it is a header file, it is a different story. If you have a header file included across multiple modules, then when you make changes to that header file, all the associated .cpp files will get recompiled, and that is the way it should be. So you need to be careful while writing the code if you really care about this aspect; at a later stage it will be difficult to manage such things.
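One common way to limit that cascade (a sketch with made-up names; this is the pimpl idiom, which the answer above doesn't mention by name) is to keep the implementation details out of the header entirely, so editing them only recompiles one .cpp file:

// parser.h -- a stable interface; the private details live in parser.cpp,
// so changing them recompiles parser.cpp only, not every includer
#ifndef PARSER_H
#define PARSER_H
#include <memory>

class Parser
{
public:
    Parser();
    ~Parser();                    // defined in parser.cpp, where Impl is complete
    int parse(const char* text);
private:
    struct Impl;                  // defined only in parser.cpp
    std::unique_ptr<Impl> impl_;  // pimpl: hides the members and their headers
};

#endif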
Regarding the recompilation of some files even when you don't change anything, this may happen when:
1] we tell the compiler to do so.
2] some CPP or header files are generated or modified automatically while running your build
3] there is a change in file time stamp
4] there is a change in folder name/structure
Last but not least, also try changing the CodeLite build target.
These are the possibilities I know. There will be more [definitely :)]...

Cleaning up a VC++ 6 project

I'm working with a very old and large VC++ 6 project and it's all messed up. There are unused files and folders everywhere, copies of folders, and it's just a mess to clean up by hand in its current state.
It will be done eventually, but is there any simple way to check what files and folders are used when it does a clean compile?
The project settings don't help me at all because the project simply uses copies of folders and additional include directories.
Any suggestions?
Well, if you want to parse the compiler output, you can get which files are actually used. I also found this while googling around; you might want to try it (I haven't tried it myself). My way would be to clean the build, list all source files, build, and then for each source file find its corresponding .obj. The ones without a .obj are not used. Note that this only works for source files; unused header files stay undetected.
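If you want to automate that .cpp-to-.obj comparison, a small C++17 sketch could do it (the source root and the intermediate directory below are assumptions; adjust them to the project's actual layout):

#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// After a clean full build, list every .cpp under the source tree that has no
// matching .obj in the intermediate directory -- those were never compiled.
int main()
{
    const fs::path source_root = "src";      // assumption: root of the source tree
    const fs::path obj_dir     = "Release";  // assumption: VC6 intermediate folder

    for (const auto& entry : fs::recursive_directory_iterator(source_root))
    {
        if (entry.path().extension() != ".cpp")
            continue;

        fs::path obj = obj_dir / entry.path().filename().replace_extension(".obj");
        if (!fs::exists(obj))
            std::cout << "possibly unused: " << entry.path() << '\n';
    }
}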
VC6 will produce a makefile for you:
http://msdn.microsoft.com/en-us/library/aa233950%28v=vs.60%29.aspx
You can use the generated makefile (and the associated .dep file) as a starting point and edit it down to the list of files that get used in a build.
This will let you see the header files that the project depends on in addition to the .c/.cpp/.lib files that might show in the build log. One thing to keep in mind is that you'll probably also want to make sure you track the .dsw and .dsp workspace and project files.
If you're a bit adventurous, you might be able to convince the makefile to actually copy the source files to some other location for you, with an appropriate override of certain macros and/or dependencies. But that would probably be more trouble than it's worth for a one-time effort.
Finally, there's a commercial product, CopyWiz by Kinook Software, that seems to have features that might do what you're looking for (and it supports VC++ 6). Note: I'm not sure if it will do what you want, but it may be worth a look.
Yes. Run Process Monitor from SysInternals. It can capture all file system events and filter them based on the path and other factors.
So, set the filter to the root of your source tree and to successful file reads only (VC looks for headers in many places), and build your project. You'll probably still see several thousand events, so save them to a file, sort by path, and remove duplicate paths (headers especially will have many duplicate entries).

Sharing Pre-compiled Headers efficiently

I have a framework which is being used by several projects (which includes several samples to show how the framework works). The framework has components such as the core, graphics, physics, gui etc. Each one is a separate library. There are several configurations as well.
A main solution file compiles the complete project with all the possible configurations so that the projects can use the libraries. Since the framework is rarely recompiled, especially by someone (including me) working on a project that utilizes the framework, it makes sense to pre-compile the many headers.
Initially I had each project/sample use its own precompiled header for the whole project. Each time I would have to rebuild the same PCH (for example, for Debug), so I decided that a shared PCH would reduce the redundant PCH compilation. So far so good. I have a project that compiles the PCH along with the libraries. All the subsequent projects/samples now use the same PCH. This has worked wonderfully.
The only problem is that I have seen an increase in file size. This is not a roadblock, since if a project that uses the framework is intended to be released, it can sever itself from the shared PCH and make its own. I have done this for the sake of rapid development (I have actually created a tool which creates the VS project files and source files for a new project/sample ready to be built, and which also facilitates upgrading a previous project that was using an older version of the framework).
Anyway, I am presuming that the increase in file size is because the independent VS project that creates the shared PCH includes all the headers from all the libraries. My question is whether I can use conditional compilation (#ifndef) to reduce the size of the final executable? Or maybe share multiple PCH files somehow (as far as I know that is not possible, but I may be wrong). If I am not making sense, please say so (in kind words :) ) as my knowledge of PCH files is very limited.
Thanks!
Note: To reiterate and make it clear: so far, I have one solution file that compiles all the libraries, including the shared PCH. Now if I recompile all the samples and projects, they each compile in a couple of seconds or so at most. Before, each project would recreate its own PCH file. Also, initially I wanted a PCH for each library, but then I found out that a source file cannot use multiple PCH files, so this option was not feasible. Another option is to compile all possible combinations of PCH files, but that is too time-consuming, cumbersome, and error-prone.
It sounds like the size problem is coming from using headers you don't actually need, but that it still makes sense to use those headers when developing because of the faster turnaround.
On using #ifndefs: precompilation is crude. You lose the ability to share the precompilation work at the point where there is a difference. If you use #ifndefs to make different variants of what you include, i.e. if you have
#ifndef FOO
then the precompiled header must stop before the point where FOO is defined differently in two files that use that precompiled header. So #ifndef is not going to solve the problem. The net result is that FOO must be the same, or you're back to separate .pch files for the different projects. Neither approach solves things.
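To make that concrete (FOO and the file names are placeholders): the precompiled image is built with one specific preprocessor state, so any line whose meaning depends on FOO cannot be part of work shared between files that define FOO differently.

// shared_pch.h -- the header the PCH is built from
#ifndef FOO
#define FOO 0     // this branch depends on whether FOO was defined earlier
#endif

// a.cpp defines FOO before including the PCH header, b.cpp does not:
//   a.cpp:  #define FOO 1
//           #include "shared_pch.h"
//   b.cpp:  #include "shared_pch.h"
// The precompiled image was built with one of these states, so it cannot be
// reused for the other -- FOO has to be identical everywhere the PCH is used.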
As to sharing multiple .pch files: A fundamental limitation of .pch files is that each .obj can only use one. Of course .pch files can have arbitrary combinations of headers. You could have a .pch for core+graphics, a .pch for core+physics, core+ai etc. This would work just dandy if none of the source files needed to 'talk' to more than core+one-module at a time. That does not sound realistic to me. Such a scheme and variants on it sound like a lot of restructuring work for no real gain. You don't want to be building zillions of combinations and keeping track of them all. It's possible, but it is not going to save you time.
In my view you're doing exactly the right thing by sacrificing executable size for fast turn-around during development/debugging, and then having a slower but leaner way of building for the actual release.
In the past I've found that you quite quickly run into diminishing returns as you put more into the precompiled headers, so if you're trying to put more in to make it more useful in a larger number of projects, it will hit a point where it slows things down. On our projects the PCH files take longer than most source files to compile, but still only a few seconds at most. I would suggest making the PCH files specific to each project you are using. You are right in saying that a source file can only refer to a single PCH file, but one way of getting around this is to use the 'force include' option (in the Advanced tab, I think) to ensure that all files include the PCH file for that project.
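A typical per-project layout might look like the sketch below (the file name pch.h is just an example; in Visual C++ the 'force include' switch is /FI, under C/C++ > Advanced > Forced Include File):

// pch.h -- the per-project precompiled header: only stable, rarely edited headers
#ifndef PCH_H
#define PCH_H
#include <map>
#include <string>
#include <vector>
// ...plus the framework headers this particular project really uses
#endif

// With Forced Include File (/FI pch.h) set on the project, every .cpp is
// compiled as if it began with #include "pch.h", so no source file has to
// mention the PCH explicitly.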

Precompiled headers question

I am right now reorganizing my project, and what was recently a simple application has now become a pair of C++ projects: a static library and the actual application.
I would like to share one precompiled header between the two projects, but I am facing some trouble setting up the .pdb file paths.
Assume my first project is called Library and builds its .lib file with a corresponding Library.pdb file. The second project is called Application and builds everything into the same folder (the .exe and another Application.pdb file).
Right now both my projects create their own precompiled header file (Library.pch and Application.pch) based on one actual header file. It works, but I think it's a waste of time, and I also think there should be a way to share one precompiled header between the two projects.
If, in my Application project, I try to set the Use Precompiled Header (/Yu) option and point it to Library.pch, it doesn't work because of the following error:
error C2858: command-line option 'program database name "Application.pdb" inconsistent with precompiled header, which used "Library.pdb".
So, does anyone know some trick or way to share one precompiled header between two projects preserving proper debug information?
The question is, why do you want to share the precompiled header (PCH) files? Generally I would say that does not make sense. PCHs are used to speed up compilation, not to share any information between different projects.
Since you also mention the PDB file, you probably want to debug the library code from your application. This can be achieved by setting the /Fd parameter when compiling the library. When you link the library into your application and the linker finds the corresponding PDB file, you get full debug support.
This sounds complicated and cumbersome to set up. More than that, it may not be possible at all.
Instead, you can include the precompiled header (the header file itself) from one project in the second. It will still be compiled once for the second project, but maintenance becomes easy and you do not have to redefine the dependencies in the second project (just include them).
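In practice (the file names here are assumptions) the second project still produces its own .pch on disk, but the list of precompiled includes is maintained in one place:

// Application/stdafx.h -- Application still builds its own Application.pch,
// but its contents are defined once, in the Library project's header
#ifndef APPLICATION_STDAFX_H
#define APPLICATION_STDAFX_H

#include "../Library/stdafx.h"   // the single list of precompiled includes

#endif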