Please help, I have several questions: What are precompiled headers? What is their usage? How do I make one, and how do I include one?
Precompiled headers (PCH for short) are something that some compilers support. The support, and what the files contain [aside from "something that can hopefully be read more quickly than the original header file"], is up to each compiler producer to decide. I have a little bit of understanding of how Clang does its precompiled headers: it's basically a binary form of the "parsed" C or C++ code in the header, so it produces a single file that doesn't need parsing [to the same level as the header file itself].
The purpose is to reduce compile time. In my experience, the LONG part of the compilation is typically code generation with optimisation. However, in some instances, especially when LOTS of header files are involved, the time to read and parse the header files can be a noticeable part of the overall compilation time.
Generally speaking, how they are used is that you tell the compiler that you want a precompiled header, and for each compilation the compiler will generate the precompiled header if it's not already there and read it in when it is present [1]. Commonly this is done for one named header file, which includes lots of other things. Microsoft Visual Studio typically has a file called "stdafx.h" that is precompiled, and at least in the case of MS products, this has to be the first file included in each source file [this is so that no other header file, for example, changes the meaning of some macro; I expect there is a hash of the compiler/command-line definitions of macros, so if one of those changes, the PCH is recompiled].
The idea is not to include every single header file in this one precompiled file, but the header files that are used in MOST files and that do not change often (the PCH needs to be regenerated if one of the files that is precompiled has changed, so there's no point in doing it if you keep changing those header files frequently). Of course, like any other build dependency, anything using the precompiled header will need to be rebuilt if the PCH has changed.
For exactly how to use this, you will need to read the documentation for the compiler you are using.
[1] If nothing has changed that requires it to be rebuilt.
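As a rough sketch of how this looks in practice with GCC or Clang (the file names here are made up for the example): compiling a header on its own produces a precompiled version next to it, and the compiler picks it up automatically whenever that header is included and the precompiled copy is up to date. MSVC instead uses the /Yc (create) and /Yu (use) options, typically pointed at "stdafx.h".
common.hpp
// Headers that are used by most source files and rarely change.
#include <string>
#include <vector>
#include <map>
g++ -x c++-header common.hpp -o common.hpp.gch   (precompile the header once)
g++ -c main.cpp                                  (main.cpp does #include "common.hpp"; the .gch is used if present and current)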
I have seen people include header files from subfolders in two ways:
#include "header.h"
and add path/to/subfolder in include paths.
#include "path/to/subfolder/header.h"
and add only root/folder in include paths.
I am not sure if this is just a matter of choice or whether there are any good/bad practice rules around it.
An issue that can arise for case 1, but not case 2, is where you have two header files with the same name that live in different directories, e.g. foo/utils.h and bar/utils.h. Using convention 2 from the outset eliminates this possibility.
In general, use paths relative to the including file (= option 2) if you don't see a risk of having to move the files relative to each other, and use paths relative to a directory passed as a compiler option otherwise.
The benefit of a path relative to the including file is that tools can pick up the included files outside of the context of a project, i.e. in the absence of knowledge about include directories. This can come in handy if you just want to take a quick look at a source file without opening the whole corresponding project in the IDE.
By the way, you may want to distinguish between the two alternatives by using #include <...> for files searched relative to a path passed as a compiler option, since as a human it's not always immediately obvious where to look for an included file without the help of tools, which may not always be available.
If all the headers are part of your own code base, then it's not too important how you do it, since if a naming conflict arises you can simply rename one of the header files to fix the issue.
If you're including headers from third-party projects, OTOH, then you might not be able to (easily) rename those header files (at least not without forking the third-party project and maintaining your own hacked version from then on, which most people want to avoid). In that scenario, it's best to have your #include paths start at the level that contains the name of the third-party project (e.g. #include "third_party_lib/utils/utils.h") so that the chances of a naming collision are greatly reduced. A collision could still happen if your project needs to use two libraries that both have the same name, but the chances of that are much smaller than the chances of two libraries both having a utils.h.
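As a concrete sketch of convention 2 (all directory and file names below are made up), the project root is the only directory passed to the compiler, and every #include spells out the path from there:
project/                       (passed to the compiler, e.g. -Iproject)
  foo/utils.h
  bar/utils.h
  third_party_lib/utils/utils.h
  src/main.cpp
src/main.cpp
#include "foo/utils.h"                    // unambiguous: the foo/ version
#include "bar/utils.h"                    // unambiguous: the bar/ version
#include "third_party_lib/utils/utils.h"  // the library name keeps it collision-free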
Background
In C and C++, header files are files that allow class definitions (as well as other entities) to be shared between source files. When another file requires facilities from these header files, #include is used in one of three ways:
#include <stdio.h> // Include the contents of the standard header
// 'stdio.h' (probably a file 'stdio.h').
#include <vector> // Include the contents of the standard header
// 'vector' (probably a file 'vector').
#include "user_defined.h" // Include the contents of the file 'user_defined.h'.
In each case, the preprocessor copies text from the header file into the file where the #include is located. Given the universal prevalence of header files, I was curious about the time added by this copy-paste method of header file inclusion. Hence my question...
Question: what is the time cost for the preprocessor to substitute in text for all #include directives it comes across?
Conceptually, the preprocessor is a separate, independent step in compiling C code. The raw time required to process the preprocessing directives will vary based on the compiler and OS. Compiler options that change the search order, or that write the post-preprocessor output to a separate file, will also change the time required.
You are going to be mainly bound by I/O time, because even with SSDs the processor and main memory are considerably faster than the I/O. The I/O takes two forms: searching for the header(s) and reading the header file(s). Complex macro expansions can also take time, but again, because of the relative difference between I/O and main-memory speed, this is generally secondary to actually looking for and reading the header files.
Most compilers will let you generate the preprocessor output without fully compiling to binary, and most OSes have ways to monitor or profile processes/commands. You could use these features to get a rough idea of the time used and of how CPU/memory and I/O are used.
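For example, with GCC or Clang you can stop after preprocessing with -E and time just that step (the source file name is a placeholder):
time g++ -E main.cpp -o main.i   (preprocess only; main.i is the fully expanded translation unit)
time g++ -c main.cpp -o main.o   (full compilation, for comparison)
wc -l main.i                     (roughly how much text the #include directives pulled in)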
I'm working with functions.
Is it good practice to write the functions in another .cpp file and include it in the main one?
Like this: #include "lehel.cpp".
Is this ok, or should I write the functions directly in the main.cpp file?
A good practice is to separate functionality into separate Software Units so they can be reused and so that a change to one unit has little effect on other units.
If lehel.cpp is included by main, this means that any change to lehel.cpp will force a recompilation of main. However, if lehel.cpp is compiled separately and linked in, then a change to lehel.cpp does not force main to be recompiled; the two only have to be linked together again.
IMHO, header files should contain information on how to use the functions. The source files should contain the implementations of the functions. The functions in a source file should be related by a theme. Also, keeping the size of the source files small will reduce the quantity of injected defects.
The established practice is putting function declarations of reusable functions in a .h or .hpp file and including that file where they're needed.
foo.cpp
int foo()
{
return 42;
}
foo.hpp
#ifndef FOO_HPP // include guards
#define FOO_HPP
int foo();
#endif // FOO_HPP
main.cpp
#include "foo.hpp"
int main()
{
return foo();
}
Including .cpp files is only sometimes used to split template definitions from declarations, but even this use is controversial, as there are counter-schemes of creating pairs (foo_impl.hpp and foo.hpp) or (foo.hpp and foo_fwd.hpp).
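For illustration, one common variant of that splitting scheme for templates looks roughly like this (the names below are made up): the header declares the template and pulls in a separate implementation file at the end, so callers still include only foo.hpp.
foo.hpp
#ifndef FOO_HPP
#define FOO_HPP
template <typename T>
T twice(T value);        // declaration only
#include "foo_impl.hpp"  // definitions are kept in their own file
#endif // FOO_HPP
foo_impl.hpp
#ifndef FOO_IMPL_HPP
#define FOO_IMPL_HPP
template <typename T>
T twice(T value)
{
    return value + value;
}
#endif // FOO_IMPL_HPP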
Header files (.h) are designed to provide the information that will be needed in multiple files. Things like class declarations, function prototypes, and enumerations typically go in header files. In a word, "declarations".
Code files (.cpp) are designed to provide the implementation information that only needs to be known in one file. In general, function bodies, and internal variables that should/will never be accessed by other modules, are what belong in .cpp files. In a word, "implementations".
The simplest question to ask yourself to determine what belongs where is "if I change this, will I have to change code in other files to make things compile again?" If the answer is "yes" it probably belongs in the header file; if the answer is "no" it probably belongs in the code file.
https://stackoverflow.com/a/1945866/3323444
To answer your question: as a programmer, it is bad practice to include a .cpp file like this when C++ already gives you header files for that purpose.
Usually it is not done (and headers are used). However, there is a technique called "amalgamation", which means including all (or a bunch of) .cpp files into a single main .cpp file and building only that as a single unit.
The reasons why it might sometimes be done are:
sometimes actually faster compilation of the "amalgamated" big unit compared to building all files separately (one reason might be, for example, that the headers are read only once instead of once for each .cpp file)
better optimization opportunities - the compiler sees all the source code as a whole, so it can make better optimization decisions across amalgamated .cpp files (which might not be possible if each file is compiled separately)
I once used that to improve the compilation times of a bigger project: basically, I created multiple (8) top-level source files, each of which included a portion of the .cpp files, and these were then built in parallel. On a full rebuild, the project built about 40% faster.
However, as mentioned, a change to a single file of course causes a rebuild of the whole composed unit, which can be a disadvantage during continuous development.
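A minimal sketch of what such an amalgamated unit can look like (file names made up); only this one file is handed to the compiler, and the individual .cpp files are no longer compiled on their own:
amalgamation.cpp
// Built as one big translation unit instead of compiling each .cpp separately.
#include "module1.cpp"
#include "module2.cpp"
#include "module3.cpp"
#include "main.cpp"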
One of my previous colleagues wrote a huge header file which has around 100-odd structures with inline member function definitions. This structure file is included in most class implementations (.cpp files) and header files (I don't know why my colleague didn't use forward declarations).
Not only is it a nightmare to read such a huge header file, it is also difficult to track down problems caused by the compiler complaining about multiple definitions and circular references now and then. The overall compilation process is also really slow.
To fix many such issues, I moved the inclusion of this header file from other header files to the .cpp files (wherever possible) and used forward declarations of only the relevant structures. Still, I continue to get strange multiple-definition errors like "fatal error LNK1169: one or more multiply defined symbols found".
I am now contemplating whether I should refactor this structure header file and separate the structure declarations and definitions into separate .h/.cpp files for each and every structure. Though it will be painful and time-consuming to do this without refactoring tools in Visual Studio, is this a good approach to solving such issues?
PS: This question is related to following question : Multiple classes in a header file vs. a single header file per class
When challenged with a major refactoring like this, you will most likely take one of the following approaches: refactor in bulk or do it incrementally.
The advantage of doing it in bulk is that you will go through the code very fast (compared to doing it incrementally); however, if you make a mistake somewhere, it can take quite a lot of time before you get it fixed.
Doing it incrementally and splitting off the classes one by one, you reduce the risk of time-consuming mistakes; however, it will take somewhat more time.
Personally, I would try to combine the 2 approaches:
Split off every class, one by one (top to bottom), into a different translation unit
However, keep the major include file, replacing every moved class with an include of its new header
Afterwards, you can remove the includes of this major header file and replace them with includes of the individual class headers
Finally, remove the major header file
Something that I've already found useful for creating self-sufficient header files is to precompile your headers. That compilation will fail when a header doesn't include everything it needs.
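As a hedged sketch of what the end state per structure can look like (all names below are made up), each structure gets its own header and .cpp, and other headers only forward-declare it:
widget.h
#ifndef WIDGET_H
#define WIDGET_H
struct Widget
{
    int size;
    int area() const;   // declared here, defined in widget.cpp (not inline in the header)
};
#endif // WIDGET_H
widget.cpp
#include "widget.h"
int Widget::area() const { return size * size; }
consumer.h
#ifndef CONSUMER_H
#define CONSUMER_H
struct Widget;          // forward declaration is enough for references/pointers
void process(const Widget& w);
#endif // CONSUMER_H
consumer.cpp
#include "consumer.h"
#include "widget.h"     // the full definition is only needed in this .cpp
void process(const Widget& w)
{
    int a = w.area();   // uses the complete type
    (void)a;
}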
I've been a programmer for several years.
I was always told (and told others) that you should include in your .c files only the .h files that you need. Nothing more, nothing less.
But let me ask - WHY?
Using today's compilers, I can include all of the project's .h files, and it won't have a huge effect on compilation times.
I'm not talking about including OS .h files, which include many definitions, macros, and preprocessing commands.
Just including one "MyProjectIncludes.h". That will only say:
#pragma once
#include "module1.h"
#include "module2.h"
// and so on for all of the modules in the project
What do you say?
It's not about the compilation time of your .c file taking longer due to including too many headers. As you said, including a few extra headers is not likely going to make much of a difference in the compilation time of that file.
The real problem is that once you make all your .c files include the "master" header file, then every time you change any .h file, every .c file will need to be recompiled, due to the fact that you made every .c file dependent on every .h file. So even if you do something innocuous like add a new #define that only one file will use, you will force a recompilation of the whole project. It gets even worse if you are working with a team and everyone makes header file changes frequently.
If the time to rebuild your entire project is small, such as less than 1 minute, then it won't matter that much. I will admit that I've done what you suggested in small projects for convenience. But once you start working on a large project that takes several minutes to rebuild everything, you will appreciate the difference between needing to rebuild one file vs rebuilding all files.
It will affect your build times. Also, you run the risk of creating a circular dependency.
In general you don't want to have to re-compile modules unless headers that they actually depend on are changed. For smaller projects this may not matter and a global "include_everything.h" file might make your project simple. But in large projects, compile times can be very significant and it is preferable to minimize inter-module dependencies as much as possible. Minimizing includes of unnecessary headers is only one approach. Using forward declarations of types that are only referenced by pointers or references, using Pimpl patterns, interfaces and factories, etc., are all approaches aimed at reducing dependencies amongst modules. Not only do these steps decrease compile time, they can also make your system easier to test and easier to modify in general.
An excellent, though somewhat dated, reference on this subject is John Lakos' "Large-Scale C++ Software Design".
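To illustrate one of those dependency-reducing techniques, here is a minimal Pimpl sketch (class and file names are made up): clients that include gadget.h never see the private members, so changing those members only forces gadget.cpp to be recompiled.
gadget.h
#ifndef GADGET_H
#define GADGET_H
#include <memory>
class Gadget
{
public:
    Gadget();
    ~Gadget();                    // defined in the .cpp, where Impl is complete
    int value() const;
private:
    struct Impl;                  // only forward-declared here
    std::unique_ptr<Impl> impl_;
};
#endif // GADGET_H
gadget.cpp
#include "gadget.h"
struct Gadget::Impl
{
    int value = 42;               // private details live here, invisible to clients
};
Gadget::Gadget() : impl_(new Impl) {}
Gadget::~Gadget() = default;
int Gadget::value() const { return impl_->value; }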
Sure, as you've said, including extra files won't hurt your compilation times by much, and as you suggest, it's much more convenient to just pull everything in with one include line.
But don't you feel a stranger could get a better understanding of your code if they knew exactly which .h files you were using in a specific .c file? For starters, if your code had any bugs and those bugs were in the .h files, they'd know exactly which .h files to check.