On Linux, what is a fast way to identify the necessary #include statements for a C++ project?
I mean, let's say someone gives you a snippet from the web but fails to provide the necessary #include statements. Is there a Linux command or compiler option that identifies which functions or classes are missing and, as a bonus, tells you where on the hard drive a header file declaring them might live?
Basically you need an analyzer that parses your sources and headers, builds a complete dependency graph, and spits it out at the end for you to read and process further.
I'd follow john's advice on g++ and Clang for this purpose, but I highly doubt they have what it takes out of the box.
What you actually can do, at least with g++, is print out a graph for already existing includes. Use the -H option to print a tree or -M to get a list.
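For example, given a minimal file (the file name is made up for illustration):
// deps.cpp
#include <vector>   // the include whose dependency tree we want to see
int main() { return 0; }
Running g++ -H -fsyntax-only deps.cpp prints every header actually opened, one per line, with leading dots showing nesting depth, while g++ -M deps.cpp emits a make-style rule listing every file this translation unit depends on.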
I also refer you to this related topic: Tool to track #include dependencies
Not exactly what you want, but the tools mentioned there might be helpful.
I think Clang's "include-what-you-use" tool is what you want.
If, by necessary, you mean minimal (i.e. if A includes B and B includes C, then A doesn't need to include C), I don't know of a fast way.
One good approach, however, is for each cpp file to include its own header file first (after any precompiled headers); a sketch follows below. That ensures that each header file includes (directly or indirectly) all the header files it needs to define the symbols it uses.
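A minimal sketch of that convention (file names invented):
// foo.cpp
#include "foo.h"   // own header first: proves foo.h is self-contained
#include <vector>  // everything else afterwards
If foo.h silently depended on <vector> being included first, this ordering would fail to compile, exposing the missing include.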
Also, a project of reasonable size should be designed in layers, such that layer A knows about/depends on layer B, which depends on layer C, etc., but lower layers never include higher layers (i.e. C never includes anything from layer A).
In that case the includes in each cpp or hpp should be in Layer order (A, B, C). If you do this it is fairly easy to check to see if any of the layer C headers can be eliminated (comment them out temporarily) because one of the includes that comes before them has already included them. This happens quite a lot and can significantly reduce the number of #includes in each file.
Having said all of that, this is a much less critical issue than it used to be because compilers are smarter. A combination of #pragma once and precompiled headers can keep build times down without requiring that you spend a lot of time optimizing includes.
The best way I know of to find undefined identifiers in a program is just to try to compile it. Depending on exactly what compiler you’re using, you might be able simply to pipe the output of GCC or Clang into grep, looking for phrases like “undeclared identifier.”
As for determining where the symbols are defined, I would recommend as a starting point looking at Ctags to parse your system headers (best managed using a Makefile) and using the resulting tags table to look up anything grep catches from GCC.
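A concrete sketch (the exact diagnostic wording varies by compiler and version, and the file names and patterns here are illustrative):
g++ -fsyntax-only snippet.cpp 2>&1 | grep -E "undeclared|not declared"
surfaces the missing names, and something like
ctags -R -f systags /usr/include
grep -w "^vector" systags
builds a tags table for the system headers and then looks up which header declares a given symbol.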
The fastest way... that's not how you should think of it.
https://stackoverflow.com/a/18544093/2112028
I wrote a lovely (I'm quite proud :P) answer there talking about how linking works (with templates), and proving that it works, and such; understand that first.
The goal of #include directives is to create a "translation unit" where every symbol is declared (even if not defined). There's an example in my answer where I simply copy and paste the prototype into a code file rather than use an include.
You ought not worry about the "fastest" way if you use something called "header guards" (these are mentioned briefly right at the bottom of that answer, but not in sufficient detail). They go like this:
#ifndef WHATEVER_H
#define WHATEVER_H
/* Your code here */
#endif
So now you can include "whatever.h" as many times as you like. The first time in the translation unit defines WHATEVER_H, so for the next file that includes it (however many includes deep from the file being compiled) everything between the #ifndef and the #endif is skipped. (Names containing double underscores, like __WHATEVER_H, are reserved for the implementation, which is why the guard above avoids them.)
Hope this helps.
Also, if you have unnecessary includes, use -Wextra and -Wall; GCC will tell you about unused functions, typedefs, and so forth. You can use the #pragma GCC diagnostic push and pop directives to control this. For example, wxWidgets' header files may contain a lot of unused things, so you push the warning state onto the stack, disable the unused-warning flags, include the file, then pop the warning state (turning the warnings back on), lest you get thousands of lines of warnings.
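The pragma dance looks roughly like this (a sketch; which -W flag to silence depends on what the header actually triggers):
#pragma GCC diagnostic push                        // save the current warning state
#pragma GCC diagnostic ignored "-Wunused-function" // silence just this warning
#include <wx/wx.h>                                 // the noisy third-party header
#pragma GCC diagnostic pop                         // restore the saved state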
Does including the same header files multiple times increase the compilation time?
For example, suppose every file in my project uses <iostream>, <string>, <vector>, and <algorithm>. And if I include a lot of files in my source code, does that increase the compile time?
I always thought that header guards served the important purpose of avoiding double definitions, and as a by-product also eliminate duplicated code.
Actually, someone I know proposed some ideas to remove such multiple inclusions. However, I consider them to be completely against good design practices in C++. But I was still wondering what his reasons might be for suggesting the changes.
Most of these answers are wrong... For modern compilers, there is zero overhead for including the same file multiple times, assuming the header uses the usual "include guard" idiom.
The GCC preprocessor, for example, has special code to recognize the include guard idiom. It will not even open the header file (never mind reading it) for the second and subsequent #include directives.
I am not sure about other compilers, but I would be very surprised if most of them did not implement the same optimization.
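You can watch the optimization from the preprocessor side (a small sketch; -H lists every file the preprocessor actually opens):
// twice.cpp
#include <vector>
#include <vector>  // with a guarded header, GCC never reopens the file
int main() { return 0; }
Running g++ -H -fsyntax-only twice.cpp should list <vector> and its dependencies only once, confirming the second directive cost nothing.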
Another technique besides precompiled headers is the compiler firewall idiom, explained here:
http://www.gotw.ca/publications/mill04.htm
http://www.gotw.ca/publications/mill05.htm
Every time #include <something.h> occurs in your source file, something.h has to be found along the include path and read. But thanks to the #ifndef SOMETHING_H guard check, the contents of something.h will not be compiled a second time.
Thus there is some overhead, but it is really small.
When compile times were an issue, people used to use the optimisation recommended by Praetorian, originally recommended in Large-Scale C++ Software Design. However, most modern compilers automatically optimise for this case. For example, see the GCC documentation on its multiple-include optimisation.
The best option is to use precompiled headers. I do not know which compiler you are using, but most of them have this feature. I suggest you refer to your compiler's manual on how to achieve it.
It basically collects all the header files and compiles them into a precompiled form that the compiler can load quickly on subsequent runs instead of reparsing them every time. That speeds up compilation very much.
Minor drawback:
You need to have one "uberheader" which is included in every compilation unit (.cpp).
In that uberheader, only include stable headers from libraries, not your own. Then the compiler does not need to recompile it very often.
It helps especially when using header-only libraries such as Boost, glm, Eigen, etc. A sketch with GCC follows below.
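With GCC, for instance, the setup looks roughly like this (file names and the Boost header are made up for illustration):
// uber.h -- only stable library headers, never your own
#include <vector>
#include <string>
#include <boost/shared_ptr.hpp>
Precompile it once with g++ -x c++-header uber.h, which produces uber.h.gch; any later compile whose first include is #include "uber.h" picks up the .gch automatically instead of reparsing the headers.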
HTH
Yes, including the same header multiple times means that the file needs to be opened before the preprocessor guards kick in and prevent multiple definitions. The Mozilla source code uses the following trick to prevent this:
Foo.h
#ifndef FOO_H
#define FOO_H
// whatever
#endif /* FOO_H */
In all files that need to include foo.h:
#ifndef FOO_H
#include "foo.h"
#endif
This prevents foo.h from having to be opened multiple times. Of course, this depends on everyone following a particular naming convention for their preprocessor guards.
You can't do this with standard headers, since there is no common naming convention for their preprocessor guards.
EDIT:
After reading your question again, I think you're asking about the same header being included in different source files. What I talked about above does not help with that. Each header file will still have to be opened and included at least once in every translation unit. The only way I know of to prevent this is to use precompiled headers, as #scorcher24 mentioned in his answer. But I'd stay away from this solution, because there is no standard way of generating precompiled headers across compilers, unless the compile times are absolutely prohibitive.
Some compilers, most notably Microsoft's, have a #pragma once directive that you can use to automatically skip an include file once it's already been included. This removes any performance penalty.
http://en.wikipedia.org/wiki/Pragma_once
It can be an issue. As others have said, most modern compilers handle the case intelligently, and will only re-open the file in degenerate cases. Most is not all, however, and one of the major exceptions is Microsoft, which a lot of people do have to support. The surest solution (if this is really a problem in your environment) is to use the Lakos convention, putting the include guards around the #include as well as in the header; a sketch follows below. This means, of course, a standard convention for generating the guard names. (For external includes, wrap them in your own header, which respects your local convention.)
Alternatively, you can use both the guards and #pragma once. The guards will always work, and most compilers will avoid the extra opens, and #pragma once will usually avoid the extra opens with Microsoft. (#pragma once cannot be implemented reliably in complex networked situations, but as long as all of your files are on your local drive, it's quite reliable.)
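The Lakos convention in full, as a sketch (the guard naming scheme is illustrative):
// foo.h
#ifndef PROJECT_FOO_H
#define PROJECT_FOO_H
#pragma once   // optional belt and braces for compilers that understand it
// ...
#endif
// at every point of use:
#ifndef PROJECT_FOO_H
#include "foo.h"
#endif
The external guard lets even a naive preprocessor skip the #include without opening foo.h at all.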
Suppose I have the following code (literally) in a C++ source file:
// #include <iostream> // superfluous, commented-out
using std::cout;
using std::endl;
int main()
{
    cout << "Hello World" << endl;
    return 0;
}
I can compile this code even though #include <iostream> is commented-out:
g++ -include my_cpp_std_lib_hack source.cpp
Where my_cpp_std_lib_hack is a file in some central location that includes all the files of the C++ Standard Library:
#include <ciso646>
#include <climits>
#include <clocale>
...
#include <valarray>
#include <vector>
Of course, I can use proper compilation options for all compilers I care about (that being MS Visual Studio and maybe a few others), and I also use precompiled headers.
Using such a hack gives me the following advantages:
Fast compilation (because all of the Standard Library is precompiled)
No need to add #includes when all I want is to add some debugging output
No need to remember or look up all the time where the heck std::max is declared
A feeling that the STL is magically built into the language
So I wonder: am I doing something very wrong here?
Will this hack break down when writing large projects?
Maybe everyone else already uses this, and no one told me?
So I wonder: am I doing something very wrong here?
Yes. Sure, your headers are precompiled, but the compiler still has to do things like name lookups on the entire included mass of stuff, which slows down compilation.
Will this hack break down when writing large projects?
Yes, that's pretty much the problem. Plus, if anyone else looks at that code, they're going to be wondering where std::cout (well, assume that's a user defined type) came from. Without the #includes they're going to have no idea whatsoever.
Not to mention, now you have to link against a ton of standard library features that you may have (probably could have) avoided linking against in the first place.
If you want to use precompilation that's fine, but someone should be able to build each and every implementation file even when precompilation is disabled.
The only thing "wrong" is that you are relying upon a compiler-specific command-line flag to make the files compilable. You'd need to do something different if not using GCC. Most compilers probably do provide an equivalent feature, but it is best to write portable source code rather than to unnecessarily rely on features of your specific build environment.
Other programmers shouldn't have to puzzle over your Makefiles (or Ant files, or Eclipse workspaces, or whatever) to figure out how things are working.
This may also cause problems for users of IDEs. If the IDE doesn't know what files are being included, it may not be able to provide automatic completion, source browsing, refactoring, and other such features.
(FWIW, I do think it is a good idea to have one header file that includes all of the Standard Library headers that you are using in your project. It makes precompilation easier, makes it easier to port to a non-standard environment, and also helps deal with those issues that sometimes arise when headers are included in different orders in different source files. But that header file should be explicitly included by each source file; there should be no magic.)
Forget the compilation speed-up - a precompiled header with templates isn't really "precompiled" except for the name and the parse, as far as I've heard. I won't believe in the compilation speed up until I see it in the benchmarks. :)
As for the usefulness:
I prefer to have an IDE which handles my includes for me (this is still bad for C++, but Eclipse already adds known includes with ctrl+shift+n with... well, acceptable reliability :)).
Doing 'clandestine' includes like this would also make testing more difficult. You want to compile a smallest-possible subset of code when testing a particular component. Figuring out what that subset is would be difficult if the headers/sources aren't being honest about their dependencies, so you'd probably just drag your my_cpp_std_lib_hack into every unit test. This would increase compilation time for your test suites a lot. Established code bases often have more than three times as much test code as regular code, so this is likely to become an issue as your code base grows.
From the GCC manual:
-include file
Process file as if #include "file" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the #include "..." search chain as normal.
So what you're doing is essentially equivalent to starting each file with the line
#include "my_cpp_std_lib_hack"
which is what Visual Studio does when it gathers up commonly-included files in stdafx.h. There are some benefits to that, as outlined by others, but your approach hides this include in the build process, so that nobody who looked directly at one of your source files would know of this hidden magic. Making your code opaque in this way does not seem like a good style to me, so if you're keen on all the precompiled header benefits I suggest you explicitly include your hack file.
You are doing something very wrong. You are effectively including lots of headers that may not be needed. In general, this is a very bad idea, because you are creating unnecessary dependencies, and a change in any header would require recompilation of everything. Even if you are avoiding this by using precompiled headers, you are still linking to lots of object that you may not need, making your executable much larger than it needs to be.
There is really nothing wrong with the standard way of using headers. You should include everything you are using, and no more (forward declarations are your friends; see the sketch below). This makes code easier to follow, and helps you keep dependencies under control.
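A minimal sketch of the forward-declaration point (names invented):
// renderer.h
class Widget;                 // forward declaration: no #include "widget.h" needed
void render(const Widget &w); // references and pointers only need the declaration
Only renderer.cpp, which actually calls members of Widget, has to include widget.h, so edits to widget.h no longer ripple into everything that merely mentions the type.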
We try not to include unused or even rarely used stuff. For example, in VC++ there is
#define WIN32_LEAN_AND_MEAN // exclude rarely used stuff
And what we hate in MFC is that if you want to make a simple application, you will produce a large executable file containing the whole library (if statically linked); that's not a good idea. What if you only want to use cout and nothing else?
Another thing: I don't like to pass arguments via the command line, because I may leave the project for a while and forget what the arguments were... e.g. I prefer using
#pragma comment(lib, "xxx.lib")
over putting it on the command line; it at least reminds me which file I want.
That's my own opinion: make your code stable and easy to compile so that it doesn't rot, as code rot is a very nasty thing!
Is there any way to not have to write function declarations twice (headers) and still retain the same scalability in compiling, clarity in debugging, and flexibility in design when programming in C++?
Use Lzz. It takes a single file and automatically creates a .h and .cpp for you with all the declarations/definitions in the right place.
Lzz is really very powerful, and handles 99% of full C++ syntax, including templates, specializations etc etc etc.
Update 150120:
Newer C++ '11/14 syntax can only be used within Lzz function bodies.
I felt the same way when I started writing C, so I also looked into this. The answer is that yes, it's possible and no, you don't want to.
First with the yes.
In GCC, you can do this:
// foo.cph
#include <cstdio>   // printf needs a declaration even in this single-file style

void foo();

#if __INCLUDE_LEVEL__ == 0
void foo() {
    printf("Hello World!\n");
}
#endif
This has the intended effect: you combine both header and source into one file that can both be included and linked.
Then with the no:
This only works if the compiler has access to the entire source. You can't use this trick when writing a library that you want to distribute but keep closed-source. Either you distribute the full .cph file, or you have to write a separate .h file to go with your .lib. Although maybe you could auto-generate it with the macro preprocessor. It would get hairy though.
And reason #2 why you don't want this, and that's probably the best one: compilation speed. Normally, C source files only have to be recompiled when the file itself changes, or any of the files it includes changes.
The C file can change frequently, but the change only involves recompiling the one file that changed.
Header files define interfaces, so they shouldn't change as often. When they do however, they trigger a recompile of every source file that includes them.
When all your files are combined header and source files, every change will trigger a recompile of all source files. C++ isn't known for its fast compile times even now, imagine what would happen when the entire project had to be recompiled every time. Then extrapolate that to a project of hundreds of source files with complicated dependencies...
Sorry, but there's no such thing as a "best practice" for eliminating headers in C++: it's a bad idea, period. If you hate them that much, you have three choices:
Become intimately familiar with C++ internals and any compilers you're using; you're going to run into different problems than the average C++ developer, and you'll probably need to solve them without a lot of help.
Pick a language you can use "right" without getting depressed
Get a tool to generate them for you; you'll still have headers, but you save some typing effort
In his article Simple Support for Design by Contract in C++, Pedro Guerreiro stated:
Usually, a C++ class comes in two files: the header file and the definition file. Where should we write the assertions: in the header file, because assertions are specification? Or in the definition file, since they are executable? Or in both, running the risk of inconsistency (and duplicating work)? We recommend, instead, that we forsake the traditional style, and do away with the definition file, using only the header file, as if all functions were defined inline, very much like Java and Eiffel do.
This is such a drastic change from the C++ normality that it risks killing the endeavor at the outset. On the other hand, maintaining two files for each class is so awkward, that sooner or later a C++ development environment will come up that hides that from us, allowing us to concentrate on our classes, without having to worry about where they are stored.
That was 2001. I agreed. It is 2009 now and still no "development environment that hides that from us, allowing us to concentrate on our classes" has come up. Instead, long compile times are the norm.
Note: The link above seems to be dead now. This is the full reference to the publication, as it appears in the Publications section of the author's website:
Pedro Guerreiro, Simple Support for Design by Contract in C++, TOOLS USA 2001, Proceedings, pages 24-34, IEEE, 2001.
There is no practical way to get around headers. The only thing you could do is to put all code into one big C++ file. That will end up in an unmaintainable mess, so please don't do it.
At the moment C++ header files are a necessary evil. I don't like them, but there is no way around them. I'd love to see some improvements and fresh ideas on the problem though.
Btw, once you've got used to it, it's not that bad anymore... C++ (and any other language as well) has more annoying things.
What I have seen some people like you do is write everything in the headers. That gives your desired property of only having to write the method signatures once.
Personally I think there are very good reasons why it is better to separate declaration and definition, but if this distresses you, there is a way to do what you want.
There's header file generation software. I've never used it, but it might be worth looking into. For instance, check out mkhdr! It supposedly scans C and C++ files and generates the appropriate header files.
(However, as Richard points out, this seems to limit you from using certain C++ functionality. See Richard's answer instead here right in this thread.)
You have to write the function declaration twice, actually (once in the header file, once in the implementation file as part of the definition). The definition (AKA implementation) of the function will be written once, in the implementation file.
You can write all the code in header files (it is actually a very common practice in generic programming in C++), but this implies that every C/CPP file including that header triggers recompilation of the implementation from those header files.
If you are thinking of a system similar to C# or Java, it is not possible in C++.
Nobody has mentioned Visual-Assist X under Visual Studio 2012 yet.
It has a bunch of menus and hotkeys that you can use to ease the pain of maintaining headers:
"Create Declaration" copies the function declaration from the current function into the .hpp file.
"Refactor..Change signature" allows you to simultaneously update the .cpp and .h file with one command.
Alt-O allows you to instantly flip between .cpp and .h file.
C++20 modules solve this problem. There is no need for copy-pasting anymore! Just write your code in a single file and export things using "export".
export module mymodule;

export int myfunc() {
    return 1;
}
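On the consuming side you then import the module instead of including a header (a sketch reusing the module above):
// main.cpp
import mymodule;  // no header file involved

int main() {
    return myfunc();
}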
Read more about modules here: https://en.cppreference.com/w/cpp/language/modules
At the time of writing this answer (Feb 2022), compiler support is still partial. See here for the supported compilers:
https://en.cppreference.com/w/cpp/compiler_support
See this answer if you want to use modules with CMake:
https://stackoverflow.com/a/71119196/7910299
Actually... You can write the entire implementation in a file. Templated classes are all defined in the header file with no cpp file.
You can also save them with whatever extension you want. Then in your #include statements, you would include that file.
/* mycode.cpp */
#pragma once
#include <iostream>

class myclass {
public:
    myclass();
    void dothing();
};

myclass::myclass() { }

void myclass::dothing()
{
    // code
}
Then in another file
/* myothercode.cpp */
#pragma once
#include "mycode.cpp"

int main() {
    myclass A;
    A.dothing();
    return 0;
}
You may need to setup some build rules, but it should work.
You can avoid headers. Completely. But I don't recommend it.
You'll be faced with some very specific limitations. One of them is you won't be able to have circular references (you won't be able to have class Parent contain a pointer to an instance of class ChildNode, and class ChildNode also contain a pointer to an instance of class Parent. It'd have to be one or the other.)
There are other limitations which just end up making your code really weird. Stick to headers. You'll learn to actually like them (since they provide a nice quick synopsis of what a class can do).
To offer a variant on the popular answer of rix0rrr:
// foo.cph
#define INCLUDEMODE
#include "foo.cph"
#include "other.cph"
#undef INCLUDEMODE

void foo()
#if !defined(INCLUDEMODE)
{
    printf("Hello World!\n");
}
#else
;
#endif

void bar()
#if !defined(INCLUDEMODE)
{
    foo();
}
#else
;
#endif
I do not recommend this, but I think this construction demonstrates the removal of content repetition at the cost of rote repetition. I guess it makes copy-pasta easier? That's not really a virtue.
As with all the other tricks of this nature, a modification to the body of a function will still require recompilation of all files including the file containing that function. Very careful automated tools can partially avoid this, but they would still have to parse the source file to check, and be carefully constructed to not rewrite their output if it's no different.
For other readers: I spent a few minutes trying to figure out include guards in this format, but didn't come up with anything good. Comments?
I understand your problems. I would say that C++'s main problem is the compilation/build method that it inherited from C. The C/C++ header structure was designed in times when coding involved fewer definitions and more implementations. Don't throw bottles at me, but that's how it looks.
Since then OOP has conquered the world, and the world is more about definitions than implementations. As a result, including headers makes it pretty painful to work with a language whose fundamental collections, such as the ones in the STL, are made with templates, which are a notoriously difficult job for the compiler to deal with. All that magic with precompiled headers doesn't help so much when it comes to TDD, refactoring tools, or the general development environment.
Of course C programmers are not suffering from this too much, since they don't have compiler-heavy header files, and so they are happy with the pretty straightforward, low-level compilation tool chain. With C++ this is a history of suffering: endless forward declarations, precompiled headers, external parsers, custom preprocessors, etc.
Many people, however, do not realize that C++ is the ONLY language that has strong and modern solutions for both high- and low-level problems. It's easy to say that you should go for another language with proper reflection and a build system, but it is nonsense that we have to sacrifice the low-level programming solutions for that and complicate things with a low-level language mixed with some virtual-machine/JIT based solution.
I have had this idea for some time now: that it would be the coolest thing on earth to have a "unit"-based C++ tool chain, similar to the one in D. The problem comes up with the cross-platform part: object files are able to store any information, no problem with that, but since the object file structure on Windows is different from that of ELF, it would be a pain in the ass to implement a cross-platform solution to store and process the half-way-compiled units.
After reading all the other answers, I notice none mention that there is ongoing work to add support for modules to the C++ standard. It will not make it into C++0x, but the intention is that it will be tackled in a later Technical Report (rather than waiting for a new standard, which will take ages).
The proposal that was being discussed is N2073.
The bad part of it is that you will not get that, not even with the newest C++0x compilers. You will have to wait. In the meantime, you will have to compromise between the uniqueness of definitions in header-only libraries and the cost of compilation.
As far as I know, no. Headers are an inherent part of C++ as a language. Don't forget that a forward declaration allows the compiler to emit a call to a compiled object/function without having to see the whole function body (the exception being a function declared inline, which the compiler may expand in place if it feels like it).
If you really, really, really hate making headers, write a Perl script to autogenerate them instead. I'm not sure I'd recommend it though.
It's completely possible to develop without header files. One can include a source file directly:
#include "MyModule.c"
The major issue with this is one of circular dependencies (ie: in C you must declare a function before calling it). This is not an issue if you design your code completely top-down, but it can take some time to wrap ones head around this sort of design pattern if you're not used to it.
If you absolutely must have circular dependencies, one may want to consider creating a file specifically for declarations and including it before everything else (see the sketch below). This is a little inconvenient, but still less pollution than having a header for every C file.
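A small sketch of that declarations-first layout (all names invented):
// decls.cpp -- forward declarations only, included before everything else
void ping(int n);
void pong(int n);

// main.cpp -- the single translation unit handed to the compiler
#include "decls.cpp"

// mutually recursive, which the up-front declarations make legal
void ping(int n) { if (n > 0) pong(n - 1); }
void pong(int n) { if (n > 0) ping(n - 1); }

int main() { ping(3); return 0; }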
I am currently developing using this method for one of my major projects. Here is a breakdown of advantages I've experienced:
Much less file pollution in your source tree.
Faster build times. (Only one object file is produced by the compiler, main.o)
Simpler make files. (Only one object file is produced by the compiler, main.o)
No need to "make clean". Every build is "clean".
Less boilerplate code. Less code = less potential bugs.
I've discovered that Gish (a game by Cryptic Sea, Edmund McMillen) used a variation on this technique inside its own source code.
You can carefully lay out your functions so that all of the dependent functions are compiled after their dependencies, but as Nils implied, that is not practical.
Catalin (forgive the missing diacritical marks) also suggested a more practical alternative of defining your methods in the header files. This can actually work in most cases, especially if you have guards in your header files to make sure they are only included once.
I personally think that header files + declaring functions is much more desirable for 'getting your head around' new code, but that is a personal preference I suppose...
You can do without headers. But why spend effort trying to avoid carefully worked-out best practices that have been developed over many years by experts?
When I wrote BASIC, I quite liked line numbers. But I wouldn't think of trying to jam them into C++, because that's not the C++ way. The same goes for headers... and I'm sure other answers explain all the reasoning.
For practical purposes no, it's not possible. Technically, yes, you can. But, frankly, it's an abuse of the language, and you should adapt to the language. Or move to something like C#.
It is best practice to use the header files, and after a while it will grow into you.
I agree that having only one file is easier, but it also can lead to bad coding.
Some of these things, although they feel awkward, allow you to get more than meets the eye.
As an example, think about pointers, passing parameters by value/by reference... etc.
For me, the header files allow me to keep my projects properly structured.
Learn to recognize that header files are a good thing. They separate how code appears to another user from the implementation of how it actually performs its operations.
When I use someone's code, I do not want to have to wade through all of the implementation to see what the methods are on a class. I care about what the code does, not how it does it.
This has been "revived" thanks to a duplicate...
In any case, the concept of a header is a worthy one, i.e. separate out the interface from the implementation detail. The header outlines how you use a class / method, and not how it does it.
The downside is the detail within headers and all the workarounds necessary. These are the main issues as I see them:
dependency generation. When a header is modified, any source file that includes this header requires recompilation. The issue is of course working out which source files actually use it. When a "clean" build is performed it is often necessary to cache the information in some kind of dependency tree for later.
include guards. Ok, we all know how to write these but in a perfect system it would not be necessary.
private details. Within a class, you must put the private details into the header. Yes, the compiler needs to know the "size" of the class, but in a perfect system it would be able to bind this in a later phase. This leads to all kinds of workarounds, like pImpl (sketched below) and using abstract base classes even when you only have one implementation, just because you want to hide a dependency.
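For reference, the pImpl workaround mentioned in the last point looks roughly like this (names invented):
// widget.h -- private details hide behind one opaque pointer
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();               // must be defined in widget.cpp, where Impl is complete
    void draw();
private:
    struct Impl;             // incomplete type: members stay out of the header
    std::unique_ptr<Impl> pimpl;
};
Changing Widget's private members now recompiles only widget.cpp, not every file that includes widget.h.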
The perfect system would work with
separate class definition and declaration
A clear bind between these two, so the compiler would know where a class declaration and its definition are, and would know the size of a class.
You declare using class rather than pre-processor #include. The compiler knows where to find a class. Once you have done "using class" you can use that class name without qualifying it.
I'd be interested to know how D does it.
With regards to whether you can use C++ without headers, I would say no you need them for abstract base classes and standard library. Aside from that you could get by without them, although you probably would not want to.
Can I write C++ code without headers
Read more about C++, e.g. the Programming using C++ book, then the C++11 standard n3337.
Yes, because the preprocessor is (conceptually) generating code without headers.
If your C++ compiler is GCC and you are compiling your translation unit foo.cc consider running g++ -O -Wall -Wextra -C -E foo.cc > foo.ii; the emitted file foo.ii does not contain any preprocessor directive, and could be compiled with g++ -O foo.ii -o foo-bin into a foo-bin executable (at least on Linux). See also Advanced Linux Programming
On Linux, the following C++ file
// file ex.cc
extern "C" long write(int fd, const void *buf, size_t count);
extern "C" long strlen(const char*);
extern "C" void perror(const char*);
int main (int argc, char**argv)
{
if (argc>1)
write(1, argv[1], strlen(argv[1]);
else
write(1, __FILE__ " has no argument",
sizeof(__FILE__ " has no argument"));
if (write(1, "\n", 1) <= 0) {
perror(__FILE__);
return 1;
}
return 0;
}
could be compiled using GCC as g++ ex.cc -O -o ex-bin into an executable ex-bin which, when executed, would show something.
In some cases, it is worthwhile to generate some C++ code with another program
(perhaps SWIG, ANTLR, Bison, RefPerSys, GPP, or your own C++ code generator) and configure your build automation tool (e.g. ninja-build or GNU make) to handle such a situation. Notice that the source code of GCC 10 has a dozen of C++ code generators.
With GCC, you might sometimes consider writing your own GCC plugin to analyze your (or others) C++ code (e.g. at the GIMPLE level). See also (in fall 2020) CHARIOT and DECODER European projects. You could also consider using the Clang static analyzer or Frama-C++.
Historically, header files have been used for two reasons.
To provide symbols when compiling a program that wants to use a library or an additional file.
To hide part of the implementation; to keep things private.
For example, say you have a function you don't want exposed to other parts of your program, but want to use in your implementation. In that case, you would write the function in the CPP file, but leave it out of the header file. You can do the same with variables and anything else you want to keep private in the implementation and not expose to consumers of that source code. In other programming languages there is a "public" keyword that controls which parts of a module are exposed to other parts of your program. In C and C++ no such facility exists at a file level, so header files are used instead.
Header files are not perfect. Using '#include' just copies the contents of whatever file you provide: double quotes for the current working tree, and < and > for system-installed headers. In C++, the '.h' is omitted for system-installed standard components; just another way C++ likes to do its own thing. If you give '#include' any kind of file, it will be included. It really isn't a module system like the ones Java, Python, and most other programming languages have. Since headers are not modules, some extra steps need to be taken to get similar functionality out of them. The preprocessor (the thing that works with all the # directives) will blindly include whatever you state is needed in that file, but C and C++ want your symbols and implementations defined only once per compilation. If you use a library, not in main.cpp, but in the two files that main.cpp includes, then you only want that library included once and not twice. Standard library components are handled specially, so you don't need to worry about using the same C++ include everywhere. To make sure that after the first time the preprocessor sees your library it doesn't include it again, you need to use a header guard.
A header guard is the simplest thing. It looks like this:
#ifndef LIBRARY_H
#define LIBRARY_H
// Write your definitions here.
#endif
It is considered good style to comment the #endif like this:
#endif // LIBRARY_H
But if you don't write the comment, the compiler won't care and it won't hurt anything.
All #ifndef is doing is checking whether LIBRARY_H is undefined. The first time through it is, so the preprocessor keeps what comes before the matching #endif.
Then #define LIBRARY_H defines the macro, so the next time the preprocessor sees #ifndef LIBRARY_H, it won't emit the same contents again.
(LIBRARY_H should be the file name, then an underscore, then the extension. Nothing is going to break if you don't write it exactly that way, but you should be consistent. At least put the file name in the #ifndef; otherwise it can get confusing which guards belong to which files.)
Really nothing fancy going on here.
Now, suppose you don't want to use header files.
Great. Say that:
You don't care about having things private by excluding them from header files.
You don't intend to use this code in a library. (If you ever do, it may be easier to go with headers now so you don't have to reorganise your code into headers later.)
You don't want to repeat yourself once in a header file and then again in a C++ file.
The purpose of header files can seem ambiguous, and if you don't care about people telling you it's wrong for imaginary reasons, then save your hands and don't bother repeating yourself.
How to include source files directly instead of header files
Do
#ifndef THING_CPP
#define THING_CPP
#include <iostream>
void drink_me() {
    std::cout << "Drink me!" << std::endl;
}
#endif // THING_CPP
for thing.cpp.
And for main.cpp do
#include "thing.cpp"
int main() {
    drink_me();
    return 0;
}
}
then compile.
Basically, just name your included CPP file with the CPP extension and then treat it like a header file, but write out the implementations in that one file.
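One caveat (an observation about the linker, not something spelled out above): compile only main.cpp, e.g. g++ main.cpp -o main. If thing.cpp were also compiled and linked as its own translation unit, drink_me() would be defined twice and the link would fail.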