Reducing the number of includes in a cpp file [closed] - c++

I currently have about 50 or more includes in my cpp file. I wanted to know what the best policy is for organizing this situation. Should I continue adding more includes, or should I take another approach? If so, what should that be?

1) Limit the scope of each file to one class, or to a small group of related classes with related behavior.
If your cpp file is only 400 to 1000 lines of code, it'll be pretty hard to require dozens of includes.
Also, as you move things around and make modifications, re-evaluate the headers included in each file. If you originally implemented something using a vector but then switched to a set, consider removing the #include <vector>.
2) In header files, use forward declarations, move those includes to the cpp file.
This doesn't address your question directly, but it is related in managing compile time. If you can get away with a forward declaration in a header file, do that.
If a.h includes b.h and b.h includes c.h, whenever c.h changes, anything including a.h has to recompile as well. Moving these includes to the .cpp file (which isn't typically #included) will limit these cascading changes.
Let's re-evaluate what happens if a.h forward-declares classes defined in b.h, and a.cpp includes b.h, and if b.h forward-declares classes defined in c.h, and b.cpp includes c.h.
When c.h changes, b.cpp needs to recompile, but a.cpp is fine.
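As a concrete sketch of that layering (class names, file names, and members are hypothetical, chosen only to match the a/b scheme above):

b.h
#ifndef B_H
#define B_H
class B
{
public:
    int value() const { return 42; }
};
#endif // B_H

a.h
#ifndef A_H
#define A_H
class B; // forward declaration: no #include "b.h" needed here
class A
{
public:
    explicit A(B *b) : b_(b) {}
    int read() const; // defined in a.cpp, where B is a complete type
private:
    B *b_; // a pointer member only needs the forward declaration
};
#endif // A_H

a.cpp
#include "a.h"
#include "b.h" // the full definition of B is needed only in this .cpp
int A::read() const
{
    return b_->value();
}

With this arrangement, a change to b.h forces a.cpp to recompile, but not every file that merely includes a.h.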
3) Re-organize as you extend functionality.
Writing code can happen like this:
Plan to implement a feature.
Write code to support that plan. (If doing unit testing, consider writing the test first)
Debug and ensure the code meets your requirements.
But this is missing a few steps that make a huge difference.
Refactor your code, and remove extraneous includes. (Here's where unit testing helps.)
Debug again.
These last two steps can involve splitting up large functions into several smaller ones, breaking classes up, and (relevant to this question) creating new files for functionality which has out-grown its current compilation unit.
If you move on the moment your code seems to work and do not take a moment to tidy up, it is the equivalent of a carpenter hammering legs onto a table but never taking a moment to sand and polish the result. Sometimes it's okay to have a slab of plywood with some legs, but if you want a nice dining room table, you'll find this approach is not up to the task.
There is no simple way of reducing the includes of a .cpp file; ultimately those includes need to exist somewhere (provided the code relying on them is still present; if it is not, simply delete the extraneous include), and the .cpp file is a far better place for them than the .h file. But with careful management and organization you can maintain smaller, more modular source files, which is ultimately how you keep things from becoming unworkable.

Split the cpp file into various smaller cpp files. If this cannot be done in a way that reduces the number of included files that each resulting cpp file needs to include, refactor your application. When refactoring, pay special attention to the S-part of the SOLID principle: "... every class should have a single responsibility".

Go through your header list and remove all those not needed in that compilation unit. If you only need a pointer to a class, that is no reason to include the full header of the class.
If there are still that many headers after the cleanup, it might be appropriate to break up your .cpp file, especially if you can make out different clusters of classes and/or functions doing different work (and probably needing the same subset of headers).
No need to go to extremes, but putting thousands of lines into your only source file is a sign of bad modularisation.

Related

Is this good practice to include source files? [closed]

I'm working with functions.
Is it good practice to write the function in another .cpp file and include it in the main one?
Like this: #include "lehel.cpp".
Is this ok, or should I write the functions directly in the main.cpp file?
A good practice is to separate functionality into separate Software Units so they can be reused and so that a change to one unit has little effect on other units.
If lehel.cpp is included by main, this means that any changes in lehel.cpp will force a compilation of main. However, if lehel.cpp is compiled separately and linked in, then a change to lehel.cpp does not force main to be recompiled; only linked together.
IMHO, header files should contain information on how to use the functions. The source files should contain the implementations of the functions. The functions in a source file should be related by a theme. Also, keeping the size of the source files small will reduce the quantity of injected defects.
The established practice is putting function declarations of reusable functions in a .h or .hpp file and including that file where they're needed.
foo.cpp
int foo()
{
    return 42;
}
foo.hpp
#ifndef FOO_HPP // include guards
#define FOO_HPP
int foo();
#endif // FOO_HPP
main.cpp
#include "foo.hpp"
int main()
{
    return foo();
}
Including .cpp files is only sometimes used to split template definitions from declarations, but even this use is controversial, as there are counter-schemes of creating pairs (foo_impl.hpp and foo.hpp) or (foo.hpp and foo_fwd.hpp).
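As a hedged sketch of the foo_impl.hpp counter-scheme just mentioned (names and the trivial function are illustrative, not from any particular library):

foo.hpp
#ifndef FOO_HPP
#define FOO_HPP
template <typename T>
T twice(T x); // declaration only: the readable interface
#include "foo_impl.hpp" // template definitions pulled in at the end
#endif // FOO_HPP

foo_impl.hpp
#ifndef FOO_IMPL_HPP
#define FOO_IMPL_HPP
template <typename T>
T twice(T x) // definition lives here, out of the interface header
{
    return x + x;
}
#endif // FOO_IMPL_HPP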
Header files (.h) are designed to provide the information that will be needed in multiple files. Things like class definitions, function prototypes, and enumerations typically go in header files. In a word, "declarations".
Code files (.cpp) are designed to provide the implementation information that only needs to be known in one file. In general, function bodies, and internal variables that should/will never be accessed by other modules, are what belong in .cpp files. In a word, "implementations".
The simplest question to ask yourself to determine what belongs where is "if I change this, will I have to change code in other files to make things compile again?" If the answer is "yes" it probably belongs in the header file; if the answer is "no" it probably belongs in the code file.
https://stackoverflow.com/a/1945866/3323444
To answer your question: as a programmer, it is bad practice to #include a function's .cpp file when C++ gives you header files for exactly that purpose.
Usually it is not done (and headers are used). However, there is a technique called "amalgamation", which means including all (or a bunch of) .cpp files into a single main .cpp file and building it as a single unit.
The reasons why it might be sometimes done are:
sometimes actually faster compilation times of the "amalgamated" big unit compared to building all files separately (one reason might be, for example, that the headers are read only once instead of once for each .cpp file)
better optimization opportunities - the compiler sees all the source code as a whole, so it can make better optimization decisions across amalgamated .cpp files (which might not be possible if each file is compiled separately)
I once used this to improve the compilation times of a bigger project: I created multiple (8) composite source files, each of which included a part of the .cpp files, and these were then built in parallel. On a full rebuild, the project built about 40% faster.
However, as mentioned, change of a single file of course causes rebuild of that composed unit, which can be a disadvantage during continuous development.
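For illustration, one of those composed units might look like this (file names are hypothetical); note that statics and anonymous-namespace names from the included files now share one translation unit, so they must not collide:

unit1.cpp
// one of several composed units, built in parallel with unit2.cpp, unit3.cpp, ...
// caveat: file-local names from these .cpp files now live in the same
// translation unit and can clash with each other
#include "parser.cpp"
#include "lexer.cpp"
#include "codegen.cpp"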

Multiple structures defined inside a header file - Should I move them out into separate .h and .cpp files? [closed]

One of my previous colleagues wrote a huge header file which has around 100-odd structures with inline member function definitions. This structure file is included in most class implementations (.cpp files) and header files (I don't know why my colleague didn't use forward declarations).
Not only is it a nightmare to read such a huge header file, but it is difficult to track problems, with the compiler complaining of multiple definitions and circular references now and then. The overall compilation process is also really slow.
To fix many such issues, I moved the inclusion of this header file from other header files to .cpp files (wherever possible) and used forward declarations of only the relevant structures. Still, I continue to get strange multiple-definition errors like "fatal error LNK1169: one or more multiply defined symbols found".
I am now contemplating whether I should refactor this structure header file and separate the structure declarations and definitions into separate .h/.cpp files for each and every structure. Though it will be painful and time-consuming to do this without refactoring tools in Visual Studio, is this a good approach to solving such issues?
PS: This question is related to following question : Multiple classes in a header file vs. a single header file per class
When challenged with a major refactoring like this, you will most likely take one of two approaches: refactor in bulk, or do it incrementally.
The advantage of doing it in bulk is that you will go through the code very fast (compared to incrementally); however, if you make a mistake, it can take quite a lot of time before you get it fixed.
Doing it incrementally and splitting off the classes one by one, you reduce the risk of time-consuming mistakes; however, it will take somewhat more time.
Personally, I would try to combine the 2 approaches:
Split off every class, one-by-one (top-to-bottom) into different translation units
However, keep the major include file for now, replacing each moved class with an #include of its new header (see the sketch after this list)
Afterwards you can remove includes of this major header file and replace them with includes of just the individual headers each file needs
Finally, remove the major header file
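A hedged sketch of the intermediate state this produces (structure and file names are hypothetical): the big header keeps compiling for all existing includers while classes migrate out one by one.

all_structs.h
#ifndef ALL_STRUCTS_H
#define ALL_STRUCTS_H
//structures already migrated to their own .h/.cpp pairs:
#include "widget.h"
#include "gadget.h"

//structures not yet migrated remain defined here, unchanged:
struct Gizmo
{
    int id;
    void reset() { id = 0; } //inline member function, as in the original file
};
#endif // ALL_STRUCTS_H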
Something that I've found useful for making header files self-sufficient is to precompile your headers: compiling a header by itself will fail whenever that header doesn't include everything it needs.

When to not create a separate interface (.h) and implementation (.c) in C? [closed]

I'm currently transitioning to working in C, primarily focused on developing large libraries. I'm coming from a decent amount of application-based programming in C++, although I can't claim expertise in either language.
What I'm curious about is when and why many popular open source libraries choose not to separate their code in a 1-1 relationship between a .h file and its corresponding .c file -- even in instances where the .c isn't generating an executable.
In the past I'd been led to believe that structuring code in this manner is optimal not only for organization, but also for linking purposes -- and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
There is no inherent technical reason in C to provide .c and .h files in matched pairs. There is certainly no reason related to linking: in conventional C usage, neither .c nor .h files have anything directly to do with that.
It is entirely possible and potentially advantageous to collect declarations related to multiple .c files in a smaller number of .h files. Providing only one or a small number of header files makes it easier to use the associated library: you don't need to remember or look up which header you need for the declaration of each function, type, or variable.
There are at least three consequences that arise from doing that, however:
you make it harder to determine where to find the implementations of functions declared in collective headers.
you make your project more susceptible to mass rebuilding cascades, as most object files depend on one or more of a small number of headers, and changes or additions to your function signatures all affect one of that small number of headers.
the compiler has to spend more effort digesting one large header with all the library's declarations than to digest one or a small number of headers focused narrowly on the specific declarations used in a given .c file, as #ChrisBeck observed. This tends to be much less of a problem for C code than it does for C++ code, however.
You need a separate .h file only when something is included in more than one compilation unit.
A form of "keep things local unless you have to share" wisdom.
In the past I'd been led to believe that structuring code in this manner is optimal not only for organization, but also for linking purposes -- and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
In traditional C code, you will always put declarations in the .h files and definitions in the .c files. This is indeed to optimize compilation: it makes each compilation unit take the minimum amount of memory, since it contains only the definitions it needs to emit code for and, if you manage includes properly, only the declarations it needs as well. It also makes it simple to see that you aren't breaking the one definition rule.
On modern machines it's less important to do this from the perspective of avoiding awful build times -- machines now have a lot of memory.
In C++ you have template files which are generally only in the header.
In recent years people have also been experimenting with so-called "Unity Builds", where you have one compilation unit which includes all of the other source files and you build it all at once. See here: The benefits / disadvantages of unity builds?
So today, having 1-1 correspondence is mainly a style / organizational thing.
A really, really basic, but entirely realistic scenario where a 1-1 relation between .h and .c files is not required, and even not desirable:
main.h
//A lib's/extension/application's main header file
//for user API -> obfuscated types
typedef struct _internal_str my_type;
//API functions
my_type *init_resource( void ); //some arguments will probably be required
//get helper resource -> not part of the API, but the lib uses it internally in all translation units
const struct helper_str *get_help( void );
Now this get_help function is, as the comment says, not part of the lib's API. All the .c files that make up the lib use it, though, and the get_help function is defined in the helper.c translation unit. This file might look something like this:
#include "main.h"
#include <third/party.h>
//static functions
static
third_party_type *init_external_resource( void )
{
//implement this
}
static
void cleanup_stuff(third_party_type *p)
{
third_party_free(p);
}
const struct helper_str *get_help( void )
{
//implementation of external function
}
Ok, so it's a convenience thing: not adding another .h file, because there's only 1 external function you're calling. But that's no good reason not to use a separate header file, right? Agreed. It's not a good reason.
However: Imagine that your code depends on this third party library a lot, and each component of whatever you're building uses a different part of this library. The "help" you need/want from this helper.c file might differ. That's when you could decide to create several header files, to control the way the helper.c file is being used internally in your project. For example: you've got some logging-stuff in translation units X and Y, these files might include a file like this:
//specific_help.h
char *err_to_log_msg(int error_nr); //relevant arguments, of course...
Whereas a file that doesn't come near output, but, for example, manages thread-safety or signals, might want to call a function in helper.c that frees some resources in case some event was detected (signals, keystrokes, mouse events... whatever). This file might include a header file like:
//system_help.h
void free_helper_resources(int level);
All of these headers link back to functions defined in helper.c, but you could end up with 10 header files for a single c file.
Once you have these various headers exposing a selection of functions, you might end up adding specific typedefs to each of these headers, depending on how the two components interact... ah well, it's a matter of taste anyway.
Many people will just opt for a single header file to go with the helper.c file, and include that. They'll probably not use half of the functions they have access to, but they'll have less files to worry about.
On the other hand, if others start tinkering with their code, they might be tempted to add functions to a file where they don't belong: they might add logging functions to the signal/event-handling files and vice versa.
In the end: use your common sense, don't expose more than you need to. It's easy to remove a static keyword and just add the prototype to a header file if you really need to.
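For instance, continuing the hypothetical helper.c above, sharing cleanup_stuff would look like this; note the cost, which is exactly the point made earlier: the header now has to pull in the third-party header just for third_party_type.

helper.c (the static keyword is removed)
void cleanup_stuff(third_party_type *p)
{
    third_party_free(p);
}

system_help.h (the prototype is added)
#include <third/party.h> //now needed here, only for third_party_type
void free_helper_resources(int level);
void cleanup_stuff(third_party_type *p);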

Massive inclusion of .h files in my code [closed]

I've been a programmer for several years.
I was always told (and told others) that you should include in your .c files only the .h files that you need. Nothing more, nothing less.
But let me ask - WHY?
Using today's compilers I can include all the project's .h files in every .c file, and it won't have a huge effect on compilation times.
I'm not talking about including OS .h files, which include many definitions, macros, and preprocessing commands.
Just including one "MyProjectIncludes.h". That will only say:
#pragma once
#include "module1.h"
#include "module2.h"
// and so on for all of the modules in the project
What do you say?
It's not about the compilation time of your .c file taking longer due to including too many headers. As you said, including a few extra headers is not likely going to make much of a difference in the compilation time of that file.
The real problem is that once you make all your .c files include the "master" header file, then every time you change any .h file, every .c file will need to be recompiled, due to the fact that you made every .c file dependent on every .h file. So even if you do something innocuous like add a new #define that only one file will use, you will force a recompilation of the whole project. It gets even worse if you are working with a team and everyone makes header file changes frequently.
If the time to rebuild your entire project is small, such as less than 1 minute, then it won't matter that much. I will admit that I've done what you suggested in small projects for convenience. But once you start working on a large project that takes several minutes to rebuild everything, you will appreciate the difference between needing to rebuild one file vs rebuilding all files.
It will affect your build times. Also, you run the risk of creating a circular dependency.
In general you don't want to have to re-compile modules unless headers that they actually depend on are changed. For smaller projects this may not matter and a global "include_everything.h" file might make your project simple. But in large projects, compile times can be very significant and it is preferable to minimize inter-module dependencies as much as possible. Minimizing includes of unnecessary headers is only one approach. Using forward declarations of types that are only referenced by pointers or references, using Pimpl patterns, interfaces and factories, etc., are all approaches aimed at reducing dependencies amongst modules. Not only do these steps decrease compile time, they can also make your system easier to test and easier to modify in general.
An excellent, though somewhat dated, reference on this subject is John Lakos's "Large-Scale C++ Software Design".
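Of the techniques listed above, the Pimpl pattern is worth a quick sketch (class and member names are hypothetical): the header exposes only a pointer to an implementation type, so implementation-only headers stay out of widget.h entirely.

widget.h
#ifndef WIDGET_H
#define WIDGET_H
#include <memory>
class Widget
{
public:
    Widget();
    ~Widget(); //defined in widget.cpp, where Impl is a complete type
    void draw();
private:
    struct Impl; //forward declaration only; no implementation headers here
    std::unique_ptr<Impl> pimpl_;
};
#endif // WIDGET_H

widget.cpp
#include "widget.h"
#include <vector> //heavy implementation-only includes stay in the .cpp
struct Widget::Impl
{
    std::vector<int> points;
};
Widget::Widget() : pimpl_(new Impl) {}
Widget::~Widget() = default; //Impl is complete here, so unique_ptr can delete it
void Widget::draw() { /* use pimpl_->points */ }

Changing Widget::Impl now recompiles only widget.cpp, not every file that includes widget.h.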
Sure, as you've said, including extra files won't harm your compilation times by much, and it's much more convenient to just pull everything in with one include line.
But don't you feel a stranger could get a better understanding of your code if they knew exactly which .h files you were using in a specific .c file? For starters, if your code had any bugs and those bugs were in the .h files, they'd know exactly which .h files to check up on.

Is there a tool to help reduce #include statements [duplicate]

Are there any tools that help organizing the #includes that belong at the top of a .c or .h file?
I was just wondering because I am reorganizing my code, moving various small function definitions/declarations from one long file into different smaller files. Now each of the smaller files needs a subset of the #includes that were at the top of the long file.
It's just annoying and error-prone to figure out all the #includes by hand. Often the code compiles even though not all the #includes are there. Example: file A uses std::vector extensively but doesn't include <vector>; it currently compiles only because some obscure other header happens to include <vector> (maybe through some recursive includes).
VisualAssistX can help you jump to the definition of a type. E.g. if you use a class MyClass in your source, you can click it, choose "goto definition", and VisualAssistX opens the include file that contains the definition of this class (possibly Visual Studio can also do this, but at this point I am so used to VisualAssistX that I attribute every wonderful feature to VisualAssistX :-)). You can use this to find the include file necessary for your source code.
PC-Lint can do exactly the opposite. If you have an include file in your source that is not being used, PC-Lint can warn you about it, so that you know that the include file can be removed from the source (which will have a positive impact on your compilation time).
makedepend and gccmakedep
I have been dealing with this problem lately.
In our project we use C++ and have one X.h and one X.cpp file for each class X.
My strategy is the following:
(1) If A.h, where class A is declared, refers to a class B, then I have to include the header B.h. If the declaration of class A contains only the type B* (a pointer), then I only need a forward declaration class B; in A.h, and I might need to include B.h in A.cpp.
(2) Using the above procedure, I move as many includes as possible from A.h to A.cpp. Then I try removing one include at a time and see if the .cpp file still compiles. This should allow me to minimize the set of included files in the .cpp file, but I am not 100% sure the result is minimal. I also think one could write a tool to do this automatically. I had started to write one for Visual Studio. I would be glad to know that such tools exist.
Note: maybe adding a proper module construct with well-defined import / export relations could be a desired addition for C++0x. It would make the task of reorganizing imports much easier and speed up compilation a lot.
Since this question was asked, a new tool has been created: include-what-you-use. It is based on Clang, provides mappings to handle symbols whose canonical header differs from the file that actually defines them (std::unique_ptr belongs to <memory>, but is actually defined in bits/unique_ptr.h in some standard libraries), and lets you provide your own mappings.
It provides nice diagnostics and automatic rewriting.
Now each of the smaller files needs a subset of the #includes that were at the top of the long file.
I do this using VisualAssistX. First, compile the file to see what is missing. Then use VisualAssistX's Add Include function: right-click on the functions or classes that you know need an include and hit Add Include. Recompile a couple of times to flush out newly missing includes, and it is done. I hope I wrote that understandably :)
Not perfect, but faster than doing it by hand.
I use the doxygen/dot-graphviz generated graphs to see how files are linked. Very handy, but not automatic: you have to visually check the graphs, then edit your code to remove unnecessary #include lines. Of course, this is not really suited to very large projects (say, more than 100 files), where the graphs become unusable.