I've been a programmer for several years.
I was always told (and told others) that you should include in your .c files only the .h files that you need. Nothing more, nothing less.
But let me ask - WHY?
With today's compilers I can include all of the project's .h files, and it won't have a huge effect on compilation times.
I'm not talking about including OS .h files, which include many definitions, macros, and preprocessing commands.
Just including one "MyProjectIncludes.h" that will only say:
#pragma once
#include "module1.h"
#include "module2.h"
// and so on for all of the modules in the project
What do you say?
It's not about the compilation time of your .c file taking longer due to including too many headers. As you said, including a few extra headers is not likely going to make much of a difference in the compilation time of that file.
The real problem is that once you make all your .c files include the "master" header file, then every time you change any .h file, every .c file will need to be recompiled, due to the fact that you made every .c file dependent on every .h file. So even if you do something innocuous like add a new #define that only one file will use, you will force a recompilation of the whole project. It gets even worse if you are working with a team and everyone makes header file changes frequently.
If the time to rebuild your entire project is small, such as less than 1 minute, then it won't matter that much. I will admit that I've done what you suggested in small projects for convenience. But once you start working on a large project that takes several minutes to rebuild everything, you will appreciate the difference between needing to rebuild one file vs rebuilding all files.
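To make the dependency point concrete, here is a minimal sketch (file and function names are made up for illustration). A .c file that includes only the header it uses is untouched by edits to other headers; the same file including a master "MyProjectIncludes.h" would be rebuilt on every header change:

/* module1.h -- hypothetical interface of one module */
#ifndef MODULE1_H
#define MODULE1_H
int module1_compute(int x);
#endif

/* consumer.c -- depends only on module1.h; editing module2.h
 * does not force this file to recompile */
#include "module1.h"

int consumer_twice(int x)
{
    return module1_compute(x) * 2;
}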
It will affect your build times. Also, you run the risk of creating circular dependencies.
In general you don't want to have to re-compile modules unless headers that they actually depend on are changed. For smaller projects this may not matter and a global "include_everything.h" file might make your project simple. But in large projects, compile times can be very significant and it is preferable to minimize inter-module dependencies as much as possible. Minimizing includes of unnecessary headers is only one approach. Using forward declarations of types that are only referenced by pointers or references, using Pimpl patterns, interfaces and factories, etc., are all approaches aimed at reducing dependencies amongst modules. Not only do these steps decrease compile time, they can also make your system easier to test and easier to modify in general.
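As a small, hedged sketch of the forward-declaration idea (names are invented): a type that is only referenced through a pointer does not need its full definition, so the header below avoids pulling in renderer.h at all:

/* widget.h -- does not include renderer.h */
#ifndef WIDGET_H
#define WIDGET_H

struct renderer;                /* forward declaration is enough */

struct widget {
    struct renderer *target;    /* only referenced by pointer */
    int width;
    int height;
};

void widget_draw(struct widget *w);

#endif

Only widget.c, which actually calls into the renderer, needs to include renderer.h; files that merely use struct widget stay independent of it.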
An excellent, though somewhat dated, reference on this subject is John Lakos's "Large-Scale C++ Software Design".
Sure, as you've said, including extra files won't hurt your compilation times by much. And as you suggest, it's much more convenient to just pull everything in with one include line.
But don't you feel a stranger could get a better understanding of your code if they knew exactly which .h files you were using in a specific .c file? For starters, if your code had any bugs and those bugs were in the .h files, they'd know exactly which .h files to check.
I have seen people include header files from subfolders in two ways:
1. #include "header.h" and add path/to/subfolder to the include paths.
2. #include "path/to/subfolder/header.h" and add only root/folder to the include paths.
I'm not sure if this is just a matter of choice or whether there are good/bad practice rules around it.
An issue that can arise for case 1, but not case 2, is having two header files with the same name that live in different directories, e.g. foo/utils.h and bar/utils.h. Using convention 2 from the outset eliminates this possibility.
In general, use paths relative to the including file (= option 2) if you don't see a risk of having to move the files relative to each other, and use paths relative to a directory passed as a compiler option otherwise.
The benefit of a path relative to the including file is that tools can pick up the included files outside of the context of a project / in the absence of knowledge about include directories. This can come in handy if you just want to take a quick look at a source file without opening the whole corresponding project in the IDE.
You may also want to distinguish between the two alternatives by using #include <...> to refer to files searched relative to a path passed as a compiler option, since as a human it's not always immediately obvious where to look for an included file without the help of tools, which may not always be available.
If all the headers are part of your own code base, then it's not too important how you do it, since if a naming conflict arises you can simply rename one of the header files to fix the issue.
If you're including headers from third-party projects, OTOH, then you might not be able to (easily) rename those header files (at least not without forking the third-party project and maintaining your own hacked version from then on, which most people want to avoid). In that scenario, it's best to have your #include paths start at the level that contains the name of the third-party project (e.g. #include "third_party_lib/utils/utils.h") so that the chances of a naming collision are greatly reduced. (It could still happen if your project needs to use two libraries that both have the same name, but the chances of that are much smaller than the chances of two libraries both having a utils.h.)
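A quick sketch of convention 2 in practice (directory and file names are hypothetical): only the project root goes on the include path, and every #include spells out the rest, so the two utils.h files can never shadow each other:

/* Layout:
 *   project/
 *     foo/utils.h
 *     bar/utils.h
 *     third_party_lib/utils/utils.h
 *     src/main.c
 *
 * Compile with only the root on the include path, e.g.
 *   cc -Iproject -c project/src/main.c
 */
#include "foo/utils.h"
#include "bar/utils.h"
#include "third_party_lib/utils/utils.h"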
One of my previous colleagues wrote a huge header file which has around 100-odd structures with inline member function definitions. This structure file is included in most class implementations (.cpp files) and header files (I don't know why my colleague didn't use forward declarations).
Not only is it a nightmare to read such a huge header file, it is also difficult to track down problems, with the compiler complaining about multiple definitions and circular references now and then. The overall compilation process is also really slow.
To fix many such issues, I moved the inclusion of this header file from other header files to the .cpp files (wherever possible) and used forward declarations of only the relevant structures. Still, I continue to get strange multiple definition errors like "fatal error LNK1169: one or more multiply defined symbols found".
I am now contemplating whether I should refactor this structure header file and separate the structure declarations and definitions into separate .h/.cpp files for each and every structure. Though it will be painful and time consuming to do this without refactoring tools in Visual Studio, is this a good approach to solving such issues?
PS: This question is related to the following question: Multiple classes in a header file vs. a single header file per class
When challenged with a major refactoring like this, you will most likely take one of two approaches: refactor in bulk or refactor incrementally.
The advantage of doing it in bulk is that you will go through the code very fast (compared to doing it incrementally); however, if you make a mistake, it can take quite a lot of time before you get it fixed.
Doing it incrementally and splitting off the classes one by one, you reduce the risk of time-consuming mistakes, but it will take somewhat more time.
Personally, I would try to combine the two approaches:
Split off every class, one by one (top to bottom), into different translation units
However, keep the major include file, replacing all moved classes with includes (as sketched after these steps)
Afterwards, you can remove the includes of this major header file and replace them with includes of the individual class headers
Finally, remove the major header file
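A sketch of the intermediate state during such a split (structure names are invented): the big header temporarily becomes little more than a list of includes, so existing code keeps compiling while the structures move out one by one:

// all_structs.h -- kept alive during the incremental refactoring
#pragma once
#include "order.h"      // already split off into its own header/.cpp pair
#include "customer.h"   // already split off
#include "invoice.h"    // already split off
// ...structures not yet moved are still defined below...

Once every structure has moved out, the .cpp files can switch to including only the individual headers they need, and all_structs.h can be deleted (the last two steps above).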
Something that I've found useful for creating self-sufficient header files is to precompile your headers. This compilation will fail when a header doesn't include everything it needs.
Please help, I have several questions: What are precompiled headers? What is their usage? How do you make one? And how do you include one?
Precompiled headers (PCH for short) are something that some compilers support. The support, and what they contain [aside from "something that hopefully can be read quicker than the original header file"], is up to each compiler producer to decide. I have a little bit of understanding of how Clang does its precompiled headers, and it's basically a binary form of the "parsed" C or C++ code in the header - so it produces a single file that doesn't need parsing [to the same level as the header file itself].
The purpose is to reduce compile time. In my experience, however, the LONG part of the compilation is typically code generation with optimisation. Still, in some instances, especially when LOTS of header files are involved, the time to read and parse the header files can be a noticeable part of the overall compilation time.
Generally speaking, how they are used is that you tell the compiler that you want a precompiled header, and for each compilation the compiler will generate the precompiled header if it's not already there and read it in when it is present [1] - commonly this is done for one named header file, which includes lots of other things. Microsoft Visual Studio typically has a file called "stdafx.h" that is precompiled - and at least in the case of MS products, this has to be the first file that is included in each source file [this is so that no other header file, for example, changes the meaning of some macro - I expect there is a hash of the compiler/command-line definitions of macros, so if one of those changes, the PCH is recompiled].
The idea is not to include every single header file in this one precompiled file, but header files that are used in MOST files and that are not changing often (the PCH needs to be regenerated if one of the files that is precompiled has changed, so there's no point in doing that if you keep changing the header files frequently). Of course, like any other build dependency, anything using the precompiled header will need to be rebuilt if the PCH has changed.
For exactly how to use this, you will need to read the documentation for the compiler you are using.
[1] If nothing has changed that requires it to be rebuilt.
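For instance, here is a hedged sketch of how this typically looks with GCC (file names are made up; check your compiler's documentation for the exact rules):

/* pch.h -- collects the big, rarely-changing headers */
#include <stdio.h>
#include <stdlib.h>

/* main.c -- the precompiled header must come before any other code;
 * MSVC requires it to be the first include, and GCC stops considering
 * the .gch once the first C token has been seen */
#include "pch.h"

int main(void)
{
    printf("hello\n");
    return 0;
}

/* Build sketch:
 *   gcc -c pch.h      -- writes pch.h.gch next to pch.h
 *   gcc -c main.c     -- the #include "pch.h" now picks up pch.h.gch
 */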
I'm currently transitioning to working in C, primarily focused on developing large libraries. I'm coming from a decent amount of application based programming in C++, although I can't claim expertise in either language.
What I'm curious about is when and why many popular open source libraries choose not to separate their code into a 1-1 relationship between .h files and corresponding .c files -- even in instances where the .c isn't generating an executable.
In the past I'd been led to believe that structuring code in this manner is optimal not only for organization, but also for linking purposes -- and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
There is no inherent technical reason in C to provide .c and .h files in matched pairs. There is certainly no reason related to linking, as in conventional C usage neither .c nor .h files have anything directly to do with that.
It is entirely possible and potentially advantageous to collect declarations related to multiple .c files in a smaller number of .h files. Providing only one or a small number of header files makes it easier to use the associated library: you don't need to remember or look up which header you need for the declaration of each function, type, or variable.
There are at least three consequences that arise from doing that, however:
you make it harder to determine where to find the implementations of functions declared in collective headers.
you make your project more susceptible to mass rebuilding cascades, as most object files depend on one or more of a small number of headers, and changes or additions to your function signatures all affect one of that small number of headers.
the compiler has to spend more effort digesting one large header with all the library's declarations than to digest one or a small number of headers focused narrowly on the specific declarations used in a given .c file, as #ChrisBeck observed. This tends to be much less of a problem for C code than it does for C++ code, however.
You need a separate .h file only when something is included in more than one compilation unit.
A form of "keep things local unless you have to share" wisdom.
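A small sketch of that wisdom (names are made up): a helper used only inside one .c file stays static and never appears in a header, while the function other translation units call is declared in the shared .h file:

/* parser.h -- only what other compilation units need to see */
#ifndef PARSER_H
#define PARSER_H
int parse_line(const char *line);
#endif

/* parser.c */
#include "parser.h"

/* local helper: used nowhere else, so keep it static and header-free */
static int skip_spaces(const char *s)
{
    int i = 0;
    while (s[i] == ' ')
        i++;
    return i;
}

int parse_line(const char *line)
{
    return skip_spaces(line);   /* trivial stand-in implementation */
}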
In the past I'd been led to believe that structuring code in this manner is optimal not only for organization, but also for linking purposes -- and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
In traditional C code, you will always put declarations in the .h files and definitions in the .c files. This is indeed to optimize compilation -- the reason is that it makes each compilation unit take the minimum amount of memory, since it only has the definitions that it needs to output code for, and if you manage includes properly, it only has the declarations it needs as well. It also makes it simple to see that you aren't breaking the one definition rule.
On modern machines it's less important to do this from the perspective of avoiding awful build times -- machines now have a lot of memory.
In C++ you have templates, which are generally defined only in headers.
In recent years you also have people experimenting with so-called "unity builds", where you have one compilation unit which includes all of the other source files and you build it all at once. See here: The benefits / disadvantages of unity builds?
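A minimal sketch of what such a unity build looks like (file names are hypothetical): one .c file that does nothing but include the others, so the whole project is compiled as a single translation unit:

/* unity.c -- compile only this file to build everything at once */
#include "graphics.c"
#include "input.c"
#include "sound.c"
#include "main.c"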
So today, having 1-1 correspondence is mainly a style / organizational thing.
A really, really basic, but entirely realistic scenario where a 1-1 relation between .h and .c files is not required, and even not desirable:
main.h
//A lib's/extension/applications' main header file
//for user API -> obfuscated types
typedef struct _internal_str my_type;
//API functions
my_type * init_resource( void );//some arguments will probably be required
//get helper resource -> not part of the API, but the lib uses it internally in all translation units
const struct helper_str *get_help( void );
Now this get_help function is, as the comment says, not part of the lib's API. All the .c files that make up the lib are using it, though, and the get_help function is defined in the helper.c translation unit. This file might look something like this:
#include "main.h"
#include <third/party.h>
//static functions
static
third_party_type *init_external_resource( void )
{
    //implement this
}

static
void cleanup_stuff(third_party_type *p)
{
    third_party_free(p);
}

const struct helper_str *get_help( void )
{
    //implementation of external function
}
Ok, so it's a convenience thing: not adding another .h file, because there's only 1 external function you're calling. But that's no good reason not to use a separate header file, right? Agreed. It's not a good reason.
However: Imagine that your code depends on this third party library a lot, and each component of whatever you're building uses a different part of this library. The "help" you need/want from this helper.c file might differ. That's when you could decide to create several header files, to control the way the helper.c file is being used internally in your project. For example: you've got some logging-stuff in translation units X and Y, these files might include a file like this:
//specific_help.h
char * err_to_log_msg(int error_nr);//relevant arguments, of course...
Whereas a file that doesn't come near output, but, for example, manages thread-safety or signals, might want to call a function in helper.c that frees some resources in case some event was detected (signals, keystrokes, mouse events... whatever). This file might include a header file like:
//system_help.h
void free_helper_resources(int level);
All of these headers link back to functions defined in helper.c, but you could end up with 10 header files for a single c file.
Once you have these various headers exposing a selection of functions, you might end up adding specific typedefs to each of these headers, depending on how the two components interact... ah well, it's a matter of taste anyway.
Many people will just opt for a single header file to go with the helper.c file, and include that. They'll probably not use half of the functions they have access to, but they'll have less files to worry about.
On the other hand, if others start tinkering with their code, they might be tempted to add functions in a certain file that don't belong: they might add logging functions to the signal/event handling files and vice versa.
In the end: use your common sense, don't expose more than you need to. It's easy to remove a static keyword and just add the prototype to a header file if you really need to.
What are the advantages of using multiple source (.cpp) and header (.h) files in a single project?
Is it just a preferential thing or are there true benefits?
It helps you split your code and sort it by theme. Otherwise you get one file with 1000s of lines… which is hard to manage…
Usually, people have .h and .c for one or sometimes a few classes.
Also, it speeds up compilation, since only the modified files and some related files need to be recompiled.
From Organizing Code Files in C and C++:
Splitting any reasonably-sized project up buys you some advantages, the most significant of which are the following:

Speed up compilation - most compilers work on a file at a time. So if all your 10000 lines of code is in one file, and you change one line, then you have to recompile 10000 lines of code. On the other hand, if your 10000 lines of code are spread evenly across 10 files, then changing one line will only require 1000 lines of code to be recompiled. The 9000 lines in the other 9 files will not need recompiling. (Linking time is unaffected.)

Increase organization - Splitting your code along logical lines will make it easier for you (and any other programmers on the project) to find functions, variables, struct/class declarations, and so on. Even with the ability to jump directly to a given identifier that is provided in many editors and development environments (such as Microsoft Visual C++), there will always be times when you need to scan the code manually to look for something. Just as splitting the code up reduces the amount of code you need to recompile, it also reduces the amount of code you need to read in order to find something. Imagine that you need to find a fix you made to the sound code a few weeks ago. If you have one large file called GAME.C, that's potentially a lot of searching. If you have several small files called GRAPHICS.C, MAINLOOP.C, SOUND.C, and INPUT.C, you know where to look, cutting your browsing time by 3/4.

Facilitate code reuse - If your code is carefully split up into sections that operate largely independently of each other, this lets you use that code in another project, saving you a lot of rewriting later. There is a lot more to writing reusable code than just using a logical file organization, but without such an organization it is very difficult to know which parts of the code work together and which do not. Therefore putting subsystems and classes in a single file or carefully delineated set of files will help you later if you try to use that code in another project.

Share code between projects - The principle here is the same as with the reuse issue. By carefully separating code into certain files, you make it possible for multiple projects to use some of the same code files without duplicating them. The benefit of sharing a code file between projects rather than just using copy-and-paste is that any bug fixes you make to that file or files from one project will affect the other project, so both projects can be sure of using the most up-to-date version.

Split coding responsibilities among programmers - For really large projects, this is perhaps the main reason for separating code into multiple files. It isn't practical for more than one person to be making changes to a single file at any given time. Therefore you would need to use multiple files so that each programmer can be working on a separate part of the code without affecting the file that the other programmers are editing. Of course, there still have to be checks that 2 programmers don't try altering the same file; configuration management systems and version control systems such as CVS or MS SourceSafe help you here.

All of the above can be considered to be aspects of modularity, a key element of both structured and object-oriented design.
The article then goes on about How to do it, Potential Pitfalls, Fixing problems, etc.
You should check it out.
Code files become unmaintainable (try searching in them!) after a few hundred lines. Some people go up to a few thousand (but this is already a problem). Even small projects have thousands of lines, medium projects have tens of thousands of lines, and big projects have millions of lines. Text editors cannot cope with files this big (and neither can programmers).
Splitting a project into different source files is also necessary if you want to separate your project into different compilation units, which makes compilation much faster because only parts of the projects need to be recompiled.
A few decades ago programs used to be written in one single file / stack of cards. However, these programs were tiny in comparison to modern programs, and completely unmaintainable – even small changes essentially necessitated a rewrite, which put a fixed upper limit on the complexity that could thus be achieved.
Modern, more complex projects essentially require splitting apart. The question of putting everything in one file is frankly one that I’ve never asked myself because the idea is simply inconceivable.
Different cpp files are compiled as separate compilation units. This allows you to isolate things (header inclusions, anonymous namespaces, pimpl) from the rest of the source code.
Sometimes two libraries cannot be used together in one source file because they have name clashes. This can be solved by including each library header in a different .cpp file and exposing the required functionality via corresponding header files.
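As a hedged sketch of that isolation trick (the library and function names here are invented), only one source file ever sees the clash-prone library header, and the rest of the project talks to it through a thin wrapper:

/* draw_wrapper.h -- the only interface the rest of the project uses */
#ifndef DRAW_WRAPPER_H
#define DRAW_WRAPPER_H
void draw_scene(void);
#endif

/* draw_wrapper.cpp -- the only file that includes the clashing library */
#include "draw_wrapper.h"
#include <libfoo/draw.h>        /* hypothetical third-party header */

void draw_scene(void)
{
    libfoo_draw_everything();   /* hypothetical library call */
}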
If it's a small project such as hello world, there is no advantage, but imagine something like Windows, Google Chrome, or Android.
A project of that size could not possibly be managed with a single file.
Multiple files in your project are about manageability and reusability of the code.