Enforce explicit (direct) #include statements with GCC - c++

I am wondering if it is possible to enforce direct #include requirements with GCC. Let's say I have these files:
abc.h:
typedef struct {
    int useful;
} str;
file1.h:
#include <abc.h>
#ifndef GUARD
#define GUARD
#include <deh.h>
typedef struct {
    int useful;
} str2;
#endif
file2.h:
#ifndef GUARD2
#define GUARD2
#include <file1.h>
void a_function (str* my_str);
void a_function2(str2* my_str);
#endif
The problem is that "file2.h" uses "str", which is defined in "abc.h". Let's say "file1.h" is provided by the system on some Linux systems. I have no control over the content of "file1.h". It may or may not include <abc.h>, it may or may not be inside include guards, and it may or may not change over time.
The issue comes when supporting multiple distributions and systems. If file2.h accidentally uses "str" without including <abc.h>, it may compile anyway on most systems, but fail on others, or in the future when "file1.h" changes.
Is there a way to force GCC (or LLVM) to use only types directly defined in file2.h? I understand that "#include" directives are just that, includes, so the compiler internals may not be aware of these issues after the preprocessor phase; however, I am wondering whether this is currently possible and, if so, how?
I had this problem a few times with "normal" Linux distributions, but it was even worse with early Android NDK versions.

No, #include instructs the compiler to treat the other file's content as if it were placed at the #include directive -- you're asking for the other file's content to be treated somehow differently.
Your best hope in this scenario is to use a static analysis tool that performs dependency analysis, and check that there are no direct dependencies on types (or functions or objects) obtained through indirect (nested) inclusion.
The free doxygen documentation tool extracts information about inclusion and dependencies, which it makes available in XML format. Of course, it isn't as accurate as a true compiler, in terms of overload resolution and template processing. I'm sure there are paid tools that will be more accurate (user Ira Baxter pops up from time to time mentioning a commercial product his company sells, DMS Toolkit or something like that, which sounded like it would get at this information). But I'm guessing that doxygen will give you the right results for most "normal" code.

There isn't anything in the C++ language itself which would verify that all headers are included correctly. However, there is include-what-you-use, which is based on clang. I haven't tried using it, but it seems to be in the direction of what you are looking for. For C, implementing an analyzer that detects dependencies and reports missing direct includes seems fairly straightforward. When trying the same with C++, things get somewhat harder due to the need to detect dependencies for template instantiations.
Based on last week's discussion at the C++ committee meeting, refactoring sources and headers to properly include what is actually used may be helpful for future module support in C++.
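Applied to the question's example, the fix such a tool would point you toward is simply making file2.h include what it uses directly, so it no longer depends on file1.h's transitive includes (a sketch):
// file2.h -- direct-include fix for the question's example
#ifndef GUARD2
#define GUARD2
#include <abc.h>   // include what you use: str is defined here
#include <file1.h> // still included directly, for str2
void a_function (str* my_str);
void a_function2(str2* my_str);
#endif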

Related

How to determine which header files to include?

Say I have the below (very simple) code.
#include <iostream>
int main() {
    std::cout << std::stoi("12");
}
This compiles fine on both g++ and clang; however, it fails to compile on MSVC with the following error:
error C2039: 'stoi': is not a member of 'std'
error C3861: 'stoi': identifier not found
I know that std::stoi is part of the <string> header, which presumably the two former compilers include as part of <iostream> and the latter does not. According to the C++ standard [res.on.headers]
A C++ header may include other C++ headers.
Which, to me, basically says that all three compilers are correct.
This issue arose when one of my students submitted work, which the TA marked as not compiling; I of course went and fixed it. However, I would like to prevent future incidents like this. So, is there a way to determine which header files should be included, short of compiling on three different compilers to check every time?
The only way I can think of is to ensure that for every std function call, an appropriate include exists; but if you have existing code which is thousands of lines long, this may be tedious to search through. Is there an easier/better way to ensure cross-compiler compatibility?
Example with the three compilers: https://godbolt.org/z/kJhS6U
Is there an easier/better way to ensure cross-compiler compatibility?
This is always going to be a bit of a chore if you have a huge codebase and haven't been doing this so far, but once you've gone through fixing your includes, you can stick to a simple procedure:
When you write new code that uses a standard feature, like std::stoi, plug that name into Google, go to the cppreference.com article for it, then look at the top to see which header it's defined in.
Then include that, if it's not already included. Job done!
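For the stoi example above, that means explicitly adding the <string> include that cppreference lists for it:
#include <iostream>
#include <string> // std::stoi is declared here

int main() {
    std::cout << std::stoi("12");
}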
(You could use the standard for this, but that's not as accessible.)
Do not be tempted to sack it all off in favour of cheap, unportable hacks like <bits/stdc++.h>!
tl;dr: documentation
Besides reviewing documentation and doing that manually (painful and time consuming) you can use some tools which can do that for you.
You can use ReSharper in Visual Studio, which is capable of organizing includes (in fact, VS without ReSharper is not very usable). If an include is missing, it recommends adding it, and if an include is obsolete, the line is shown in paler colors.
Or you can use CLion (available for all platforms), which also has this capability (it is in fact from the same manufacturer, JetBrains).
There is also a tool called include-what-you-use, but its aim is to take advantage of forward declarations; I never used it personally (a teammate did, for our project).

Best (cleanest) way for writing platform specific code

Say you have a piece of code that must be different depending on the operating system your program is running on.
There's the old school way of doing it:
#ifdef WIN32
// code for Windows systems
#else
// code for other systems
#endif
But there must be cleaner solutions than this one, right?
The typical approach I've seen first hand at a half-dozen companies over my career is the use of a Hardware Abstraction Layer (HAL).
The idea is that you put the lowest level stuff into a dedicated header plus statically linked library, which includes things like:
Fixed width integers (int64_t on Linux, __int64 on Windows, etc).
Common library functions (strtok_r() vs strtok_s() on Linux vs Windows).
A common data type setup (ie: typedefs for all data types, such as xInt, xFloat etc, used throughout the code so that if the underlying type changes for a platform, or a new platform is suddenly supported, no need to re-write and re-test code that depends on it, which can be extremely expensive in terms of labor).
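For illustration, the top of such a HAL header might look like the following sketch (the x-prefixed names follow the hypothetical convention above):
// hal.h -- sketch of a HAL header; xInt64 and xStrtok are hypothetical names
#ifndef HAL_H
#define HAL_H

#if defined(_WIN32)
typedef __int64 xInt64;   // MSVC's built-in 64-bit type
#else
#include <stdint.h>
typedef int64_t xInt64;   // fixed-width type from <stdint.h>
#endif

// one common name wrapping strtok_s() (Windows) / strtok_r() (POSIX)
char* xStrtok(char* str, const char* delim, char** context);

#endif // HAL_H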
The HAL itself is usually riddled with preprocessor directives like in your example, and that's just the reality of the matter. If you wrap it with run-time if/else statements instead, your compilation will fail due to unresolved symbols. Or worse, you could have extra symbols included which will increase the size of your output, and likely slow down your program if that code is executed frequently.
So long as the HAL has been well-written, the header and library for the HAL give you a common interface and set of data types to use in the rest of your code with minimal hassle.
The most beautiful aspect of this, from a professional standpoint, is that all of your other code doesn't have to ever concern itself with architecture or operating system specifics. You'll have the same code-flow on various systems, which will by extension, allow you to test the same code in a variety of different manners, and find bugs you wouldn't normally expect or test for. From a company's perspective, this saves a ton of money in terms of labor, and not losing clients due to them being angry with bugs in production software.
I've had to do a lot of this sort of stuff in my career, supporting code that builds and runs on an embedded device, plus on Windows, and then also having it run on different ASICs and/or revisions of ASICs.
I tend to do what you suggest and then when things really diverge, move on to defining the interface I desire to be fixed between platforms and then having separate implementation files or even libraries. It can get really messy as the codebase gets older and more exceptions need to be added.
Sometimes you can hide this stuff in header files, so your code looks 'clean', but a lot of times that's just obfuscating what's going on behind a bunch of macro magic.
The only other thing I'd add is I tend to make the #ifdef/#else/#endif chain fail if none of the options are defined. This forces me to revisit the issue when a new revision comes along. Some folks prefer it to have a default, but I find that just hides potential failures.
Granted, I'm working in the embedded world where code space is paramount (since memory is small and fixed), and code cleanliness unfortunately has to take a back seat.
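A failing chain like the one just described might look like this (the platform macro names are hypothetical):
#if defined(PLATFORM_WINDOWS)
// Windows-specific code
#elif defined(PLATFORM_EMBEDDED)
// embedded-target code
#else
#error "No platform selected; add a case for this target"
#endif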
An adopted practice for non-trivial projects is to write platform-specific code in separate files (and in separate directories, where applicable), avoiding "localized" #ifdefs to the fullest possible extent.
Say you are developing a library called "Example" and example.hpp will be your library header:
example.hpp
#include "platform.hpp"
//
// here: platform-independent declarations, includes etc
//
// below: platform-specific includes
#if defined(WINDOWS)
#include "windows\win32_specific_code.hpp"
// other win32 headers
#elif defined(POSIX)
#include "posix/linux_specific_code.hpp"
// other linux headers
#endif
platform.hpp (simplified)
#if defined(WIN32) && !defined(UNIX)
#define WINDOWS
#elif defined(UNIX) && !defined(WIN32)
#define POSIX
#endif
win32_specific_code.hpp
void Function1();
win32_specific_code.cpp
#include "../platform.hpp"
#ifdef WINDOWS // We should not violate the One Definition Rule
#include "win32_specific_code.hpp"
#include <iostream>
void Function1()
{
    std::cout << "You are on WINDOWS" << std::endl;
}
//...
#endif /* WINDOWS */
Of course, declare Function1() in your linux_specific_code.hpp file as well.
Then, when implementing it for Linux (in the linux_specific_code.cpp file), be sure to surround everything for conditional compilation as well, similar to what I did above (e.g. using #ifdef POSIX). Otherwise, the compiler will generate multiple definitions and you'll get a linker error.
Now, all a user of your library must do is #include <example.hpp> in his code, and place either #define WINDOWS or #define POSIX in his compiler's preprocessor definitions. In fact, the second step might not be necessary at all, assuming his environment already defines one of the WIN32 or UNIX macros. This way, Function1() can already be used from the code in a cross-platform manner.
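The calling code might then look like this sketch:
// user_code.cpp
#include <example.hpp>

int main()
{
    Function1(); // resolves to the win32 or linux implementation at build time
    return 0;
}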
This approach is pretty much the one used by the Boost C++ Libraries. I personally find it clean and sensible. If, however, you don't like it, you can have a read at Chromium's conventions for multi-platform development for a somewhat different strategy.

Indicate C++ standard in source in a standard way

Standard compliant C++ compilers define a __cplusplus macro which may be inspected during preprocessing to determine under what standard a file is being compiled, e.g.:
#if __cplusplus < 201103L
#error "You need a C++11 compliant compiler."
#endif
#include <iostream>
#include <vector>
int main(){
    std::vector<int> v {1, 2, 3};
    for (auto i : v){
        std::cout << i << " ";
    }
    std::cout << std::endl;
    return 0;
}
My question is:
Is there a standard way to indicate what standard a source file should be compiled with?
That would allow build tools to inspect sources prior to compilation to determine the appropriate argument for -std= (cf. shebangs, which can indicate scripting language/version: #!/usr/bin/env python3).
A non-standard and brittle way I can think of is looking for the preprocessor checks of __cplusplus, but in the example above I could also have written:
#if __cplusplus <= 199711L
#error "You need a C++11 compliant compiler."
#endif
hence, writing e.g. a regex would become quite tricky to catch all variations.
EDIT:
While I sympathize with the answer by @Gary, which suggests relying on a build system, it assumes that we actually will have a build step.
But you can already today:
use an interpreter to run a C++ program using e.g. CINT
or use a source to source translation using e.g. rosecompiler
My question is also about indicating that the source is C++ and what version it was intended for (imagine someone digging out my code 70 years from now, when C++ might be as popular as, say, Cobol is today).
I guess the equivalent thing I would be looking for is the C++ equivalent of HTML's:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
C++ Standards in a way are somewhat like developing against a library. In that sense, libraries typically evolve in a way that slowly deprecates old functions while making access to new functions. The typical way is the introduction of new methods or signatures while still allowing access to the old ones.
As a simple example, for instance, you might make an app for the iPhone that is backwards compatible with iOS 4 and above. You don't get the option to cherry-pick which specific versions you want to support. This is good, because otherwise you open code evolution up to a matrix of possibilities, making your code harder to understand and maintain.
Alternatively, you may introduce preprocessor instructions to build certain pieces conditionally depending on a version or flag of some sort. These are temporary measures, however, and should be removed as the code evolves.
So I think, for answering this question as is, the better question to ask oneself in this situation is: what will adding something like this actually solve, and will it add needless complexity (one of the code smells of bad design)?
In this situation and from experience, I personally think you're better off sticking with one standard. I think you'll find that trying to differentiate standards by sprinkling various preprocessor #ifdefs and #ifndefs around is going to make your code base difficult to understand and manage. Even if you had one include file with the definition of what version is allowed, included by all other files, it becomes yet another file to manage... not to mention that when you change it, you have to recompile everything that includes it.
If you're worried about someone building your code base with the wrong standard, use a build system that doesn't require developers to input that information, for instance Make, Ant, or CMake. It makes the building of your software simple and clearly defines how the project should be compiled in a repeatable fashion. If you go this route, you'll see that trying to protect the code from being compiled improperly becomes a non-issue.
Also, if they go out of their way and compile with the wrong standard, they'll be greeted with plenty of compiler errors =)

Abolish include-files in C++

Suppose i have the following code (literally) in a C++ source file:
// #include <iostream> // superfluous, commented-out
using std::cout;
using std::endl;
int main()
{
    cout << "Hello World" << endl;
    return 0;
}
I can compile this code even though #include <iostream> is commented-out:
g++ -include my_cpp_std_lib_hack source.cpp
Where my_cpp_std_lib_hack is a file in some central location that includes all the files of the C++ Standard Library:
#include <ciso646>
#include <climits>
#include <clocale>
...
#include <valarray>
#include <vector>
Of course, I can use proper compilation options for all compilers I care about (that being MS Visual Studio and maybe a few others), and I also use precompiled headers.
Using such a hack gives me the following advantages:
Fast compilation (because all of the Standard Library is precompiled)
No need to add #includes when all I want is to add some debugging output
No need to remember or look up all the time where the heck std::max is declared
A feeling that the STL is magically built into the language
So I wonder: am I doing something very wrong here?
Will this hack break down when writing large projects?
Maybe everyone else already uses this, and no one told me?

So I wonder: am I doing something very wrong here?
Yes. Sure, your headers are precompiled, but the compiler still has to do things like name lookups on the entire included mass of stuff which slows down compilation.
Will this hack break down when writing large projects?
Yes, that's pretty much the problem. Plus, if anyone else looks at that code, they're going to be wondering where std::cout (well, assume that's a user defined type) came from. Without the #includes they're going to have no idea whatsoever.
Not to mention, now you have to link against a ton of standard library features that you may have (probably could have) avoided linking against in the first place.
If you want to use precompilation that's fine, but someone should be able to build each and every implementation file even when precompilation is disabled.
The only thing "wrong" is that you are relying upon a compiler-specific command-line flag to make the files compilable. You'd need to do something different if not using GCC. Most compilers probably do provide an equivalent feature, but it is best to write portable source code rather than to unnecessarily rely on features of your specific build environment.
Other programmers shouldn't have to puzzle over your Makefiles (or Ant files, or Eclipse workspaces, or whatever) to figure out how things are working.
This may also cause problems for users of IDE's. If the IDE doesn't know what files are being included, it may not be able to provide automatic completion, source browsing, refactoring, and other such features.
(FWIW, I do think it is a good idea to have one header file that includes all of the Standard Library headers that you are using in your project. It makes precompilation easier, makes it easier to port to a non-standard environment, and also helps deal with those issues that sometimes arise when headers are included in different orders in different source files. But that header file should be explicitly included by each source file; there should be no magic.)
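Such a project-wide header might look like the following sketch (the exact include list depends on what the project actually uses):
// std_prelude.h -- explicitly #include'd by every source file; no magic
#ifndef STD_PRELUDE_H
#define STD_PRELUDE_H

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

#endif // STD_PRELUDE_H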
Forget the compilation speed-up - a precompiled header with templates isn't really "precompiled" except for the name and the parse, as far as I've heard. I won't believe in the compilation speed up until I see it in the benchmarks. :)
As for the usefulness:
I prefer to have an IDE which handles my includes for me (this is still bad for C++, but Eclipse already adds known includes with ctrl+shift+n with... well, acceptable reliability :)).
Doing 'clandestine' includes like this would also make testing more difficult. You want to compile a smallest-possible subset of code when testing a particular component. Figuring out what that subset is would be difficult if the headers/sources aren't being honest about their dependencies, so you'd probably just drag your my_cpp_std_lib_hack into every unit test. This would increase compilation time for your test suites a lot. Established code bases often have more than three times as much test code as regular code, so this is likely to become an issue as your code base grows.
From the GCC manual:
-include file
Process file as if #include "file" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the #include "..." search chain as normal.
So what you're doing is essentially equivalent to starting each file with the line
#include "my_cpp_std_lib_hack"
which is what Visual Studio does when it gathers up commonly-included files in stdafx.h. There are some benefits to that, as outlined by others, but your approach hides this include in the build process, so that nobody who looked directly at one of your source files would know of this hidden magic. Making your code opaque in this way does not seem like a good style to me, so if you're keen on all the precompiled header benefits I suggest you explicitly include your hack file.
You are doing something very wrong. You are effectively including lots of headers that may not be needed. In general, this is a very bad idea, because you are creating unnecessary dependencies, and a change in any header would require recompilation of everything. Even if you are avoiding this by using precompiled headers, you are still linking to lots of object that you may not need, making your executable much larger than it needs to be.
There is really nothing wrong with the standard way of using headers. You should include everything you are using, and no more (forward declarations are your friends). This makes code easier to follow, and helps you keep dependencies under control.
We try not to include unused or even rarely used stuff; for example, in VC++ there is
#define WIN32_LEAN_AND_MEAN // exclude rarely used stuff
and what we hate in MFC is that if you want to make a simple application, you will produce a large executable file with the whole library (if statically linked). So it's not a good idea; what if you only want to use cout and nothing else?
Another thing: I don't like to pass arguments via the command line, because I may leave the project for a while and forget what the arguments are... e.g. I prefer using
#pragma comment(lib, "xxx.lib")
over putting it on the command line; it at least reminds me which file I want.
That's my own opinion: make your code stable and easy to compile so that it doesn't rot; code rot is a very nasty thing!

Can I write C++ code without headers (repetitive function declarations)?

Is there any way to not have to write function declarations twice (headers) and still retain the same scalability in compiling, clarity in debugging, and flexibility in design when programming in C++?
Use Lzz. It takes a single file and automatically creates a .h and .cpp for you with all the declarations/definitions in the right place.
Lzz is really very powerful, and handles 99% of full C++ syntax, including templates, specializations etc etc etc.
Update 150120:
Newer C++ '11/14 syntax can only be used within Lzz function bodies.
I felt the same way when I started writing C, so I also looked into this. The answer is that yes, it's possible and no, you don't want to.
First with the yes.
In GCC, you can do this:
// foo.cph
#include <stdio.h>   // printf needs this when foo.cph is compiled directly

void foo();

#if __INCLUDE_LEVEL__ == 0
void foo() {
    printf("Hello World!\n");
}
#endif
This has the intended effect: you combine both header and source into one file that can both be included and linked.
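A sketch of a consumer (file name hypothetical): including foo.cph from another file puts it at include level 1, so only the declaration is seen there; the definition comes from compiling foo.cph as its own translation unit (e.g. g++ main.cc -x c++ foo.cph, the -x flag telling GCC how to treat the unusual extension).
// main.cc
#include "foo.cph" // include level 1 here, so this only declares foo()

int main() {
    foo();
    return 0;
}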
Then with the no:
This only works if the compiler has access to the entire source. You can't use this trick when writing a library that you want to distribute but keep closed-source. Either you distribute the full .cph file, or you have to write a separate .h file to go with your .lib. Although maybe you could auto-generate it with the macro preprocessor. It would get hairy though.
And reason #2 why you don't want this, which is probably the best one: compilation speed. Normally, C source files only have to be recompiled when the file itself changes, or any of the files it includes change.
The C file can change frequently, but the change only involves recompiling the one file that changed.
Header files define interfaces, so they shouldn't change as often. When they do however, they trigger a recompile of every source file that includes them.
When all your files are combined header and source files, every change will trigger a recompile of all source files. C++ isn't known for its fast compile times even now, imagine what would happen when the entire project had to be recompiled every time. Then extrapolate that to a project of hundreds of source files with complicated dependencies...
Sorry, but there's no such thing as a "best practice" for eliminating headers in C++: it's a bad idea, period. If you hate them that much, you have three choices:
Become intimately familiar with C++ internals and any compilers you're using; you're going to run into different problems than the average C++ developer, and you'll probably need to solve them without a lot of help.
Pick a language you can use "right" without getting depressed
Get a tool to generate them for you; you'll still have headers, but you save some typing effort
In his article Simple Support for Design by Contract in C++, Pedro Guerreiro stated:
Usually, a C++ class comes in two files: the header file and the definition file. Where should we write the assertions: in the header file, because assertions are specification? Or in the definition file, since they are executable? Or in both, running the risk of inconsistency (and duplicating work)? We recommend, instead, that we forsake the traditional style, and do away with the definition file, using only the header file, as if all functions were defined inline, very much like Java and Eiffel do.
This is such a drastic change from the C++ normality that it risks killing the endeavor at the outset. On the other hand, maintaining two files for each class is so awkward, that sooner or later a C++ development environment will come up that hides that from us, allowing us to concentrate on our classes, without having to worry about where they are stored.
That was 2001. I agreed. It is 2009 now and still no "development environment that hides that from us, allowing us to concentrate on our classes" has come up. Instead, long compile times are the norm.
Note: The link above seems to be dead now. This is the full reference to the publication, as it appears in the Publications section of the author's website:
Pedro Guerreiro, Simple Support for Design by Contract in C++, TOOLS USA 2001, Proceedings, pages 24-34, IEEE, 2001.
There is no practical way to get around headers. The only thing you could do is to put all code into one big C++ file. That will end up in an unmaintainable mess, so please don't do it.
At the moment, C++ header files are a necessary evil. I don't like them, but there is no way around them. I'd love to see some improvements and fresh ideas on the problem though.
Btw, once you've got used to it, it's not that bad anymore... C++ (and any other language as well) has more annoying things.
What I have seen some people like you do is write everything in the headers. That gives your desired property of only having to write the method profiles once.
Personally I think there are very good reasons why it is better to separate declaration and definition, but if this distresses you there is a way to do what you want.
There's header file generation software. I've never used it, but it might be worth looking into. For instance, check out mkhdr! It supposedly scans C and C++ files and generates the appropriate header files.
(However, as Richard points out, this seems to limit you from using certain C++ functionality. See Richard's answer instead here right in this thread.)
You have to write the function declaration twice, actually (once in the header file, once in the implementation file). The definition (AKA implementation) of the function will be written once, in the implementation file.
You can write all the code in header files (it is actually a very common practice in generic programming in C++), but this implies that every C/CPP file including that header implies recompilation of the implementation from those header files.
If you are thinking of a system similar to C# or Java, it is not possible in C++.
Nobody has mentioned Visual-Assist X under Visual Studio 2012 yet.
It has a bunch of menus and hotkeys that you can use to ease the pain of maintaining headers:
"Create Declaration" copies the function declaration from the current function into the .hpp file.
"Refactor..Change signature" allows you to simultaneously update the .cpp and .h file with one command.
Alt-O allows you to instantly flip between .cpp and .h file.
C++20 modules solve this problem. There is no need for copy-pasting anymore! Just write your code in a single file and export things using "export".
export module mymodule;

export int myfunc() {
    return 1;
}
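On the import side it would look like this (a sketch; the build flags needed for modules still vary by compiler):
// main.cpp
import mymodule;

int main() {
    return myfunc(); // returns 1, exported by mymodule
}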
Read more about modules here: https://en.cppreference.com/w/cpp/language/modules
At the time of writing this answer (February 2022), only some compilers support modules. See here for the supported compilers:
https://en.cppreference.com/w/cpp/compiler_support
See this answer if you want to use modules with CMake:
https://stackoverflow.com/a/71119196/7910299
Actually... you can write the entire implementation in a file. Templated classes are all defined in the header file with no cpp file.
You can also save them with whatever extensions you want. Then, in #include statements, you would include your file.
/* mycode.cpp */
#pragma once
#include <iostream>

class myclass {
public:
    myclass();
    void dothing();
};

myclass::myclass() { }

void myclass::dothing()
{
    // code
}
Then in another file
/* myothercode.cpp */
#pragma once
#include "mycode.cpp"
int main() {
    myclass A;
    A.dothing();
    return 0;
}
You may need to setup some build rules, but it should work.
You can avoid headers. Completely. But I don't recommend it.
You'll be faced with some very specific limitations. One of them is you won't be able to have circular references (you won't be able to have class Parent contain a pointer to an instance of class ChildNode, and class ChildNode also contain a pointer to an instance of class Parent. It'd have to be one or the other.)
There are other limitations which just end up making your code really weird. Stick to headers. You'll learn to actually like them (since they provide a nice quick synopsis of what a class can do).
To offer a variant on the popular answer of rix0rrr:
// foo.cph
#include <stdio.h>   // for printf below

#ifndef INCLUDEMODE  // guard the self-include so it does not recurse infinitely
#define INCLUDEMODE
#include "foo.cph"   // first pass over this file: declarations only
#include "other.cph"
#undef INCLUDEMODE
#endif

void foo()
#if !defined(INCLUDEMODE)
{
    printf("Hello World!\n");
}
#else
;
#endif

void bar()
#if !defined(INCLUDEMODE)
{
    foo();
}
#else
;
#endif
I do not recommend this, but I think this construction demonstrates the removal of content repetition at the cost of rote repetition. I guess it makes copy-pasta easier? That's not really a virtue.
As with all the other tricks of this nature, a modification to the body of a function will still require recompilation of all files including the file containing that function. Very careful automated tools can partially avoid this, but they would still have to parse the source file to check, and be carefully constructed to not rewrite their output if it's no different.
For other readers: I spent a few minutes trying to figure out include guards in this format, but didn't come up with anything good. Comments?
I understand your problems. I would say that C++'s main problem is the compilation/build method it inherited from C. The C/C++ header structure was designed in times when coding involved fewer definitions and more implementations. Don't throw bottles at me, but that's how it looks.
Since then, OOP has conquered the world, and the world is more about definitions than implementations. As a result, including headers makes it pretty painful to work with a language where the fundamental collections, such as the ones in the STL, are made with templates, which are a notoriously difficult job for the compiler to deal with. All that magic with precompiled headers doesn't help so much when it comes to TDD, refactoring tools, and the general development environment.
Of course C programmers are not suffering from this too much, since they don't have compiler-heavy header files, and so they are happy with the pretty straightforward, low-level compilation tool chain. With C++ this is a history of suffering: endless forward declarations, precompiled headers, external parsers, custom preprocessors etc.
Many people, however, do not realize that C++ is the ONLY language that has strong and modern solutions for high- and low-level problems. It's easy to say that you should go for another language with proper reflection and build system, but it is nonsense that we have to sacrifice the low-level programming solutions for that, and complicate things by mixing a low-level language with some virtual-machine/JIT based solution.
I have had this idea for some time now: it would be the coolest thing on earth to have a "unit"-based C++ tool chain, similar to that in D. The problem comes up with the cross-platform part: object files are able to store any information, no problem with that, but since on Windows the object file's structure is different from that of ELF, it would be a pain in the ass to implement a cross-platform solution for storing and processing the half-way-compiled units.
After reading all the other answers, I find it missing that there is ongoing work to add support for modules to the C++ standard. It will not make it into C++0x, but the intention is that it will be tackled in a later Technical Report (rather than waiting for a new standard, which will take ages).
The proposal that was being discussed is N2073.
The bad part of it is that you will not get that, not even with the newest C++0x compilers. You will have to wait. In the meantime, you will have to compromise between the uniqueness of definitions in header-only libraries and the cost of compilation.
As far as I know, no. Headers are an inherent part of C++ as a language. Don't forget that a forward declaration lets the compiler emit just a reference to a compiled object/function without having to include the whole function (which you can get around by declaring a function inline, if the compiler feels like it).
If you really, really, really hate making headers, write a Perl script to autogenerate them instead. I'm not sure I'd recommend it though.
It's completely possible to develop without header files. One can include a source file directly:
#include "MyModule.c"
The major issue with this is one of circular dependencies (i.e. in C you must declare a function before calling it). This is not an issue if you design your code completely top-down, but it can take some time to wrap one's head around this sort of design pattern if you're not used to it.
If you absolutely must have circular dependencies, you may want to consider creating a file specifically for declarations and including it before everything else, as sketched below. This is a little inconvenient, but still less pollution than having a header for every C file.
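A sketch of that layout (file and function names hypothetical):
// declarations.h -- forward declarations only, included before everything else
void parse_line(const char* line);
void report(int code);

// main.c -- declarations first, then the source files included directly
#include "declarations.h"
#include "parse.c"  // defines parse_line(), which may call report()
#include "report.c" // defines report(), which may call parse_line()

int main(void) {
    parse_line("hello");
    return 0;
}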
I am currently developing using this method for one of my major projects. Here is a breakdown of advantages I've experienced:
Much less file pollution in your source tree.
Faster build times. (Only one object file is produced by the compiler, main.o)
Simpler make files. (Only one object file is produced by the compiler, main.o)
No need to "make clean". Every build is "clean".
Less boilerplate code. Less code = fewer potential bugs.
I've discovered that Gish (a game by Cryptic Sea, Edmund McMillen) used a variation on this technique inside its own source code.
You can carefully lay out your functions so that all of the dependent functions are compiled after their dependencies, but as Nils implied, that is not practical.
Catalin (forgive the missing diacritical marks) also suggested the more practical alternative of defining your methods in the header files. This can actually work in most cases, especially if you have guards in your header files to make sure they are only included once.
I personally think that header files + declaring functions is much more desirable for 'getting your head around' new code, but that is a personal preference I suppose...
You can do without headers. But why spend effort trying to avoid carefully worked-out best practices that have been developed over many years by experts?
When I wrote BASIC, I quite liked line numbers. But I wouldn't think of trying to jam them into C++, because that's not the C++ way. The same goes for headers... and I'm sure other answers explain all the reasoning.
For practical purposes no, it's not possible. Technically, yes, you can. But, frankly, it's an abuse of the language, and you should adapt to the language. Or move to something like C#.
It is best practice to use header files, and after a while it will grow on you.
I agree that having only one file is easier, but it can also lead to bad coding.
Some of these things, although they feel awkward, allow you to get more than meets the eye.
As an example, think about pointers, passing parameters by value/by reference... etc.
For me, header files allow me to keep my projects properly structured.
Learn to recognize that header files are a good thing. They separate how code appears to another user from the implementation of how it actually performs its operations.
When I use someone's code, I do not want to have to wade through all of the implementation to see what the methods are on a class. I care about what the code does, not how it does it.
This has been "revived" thanks to a duplicate...
In any case, the concept of a header is a worthy one, i.e. separate out the interface from the implementation detail. The header outlines how you use a class / method, and not how it does it.
The downside is the detail within headers and all the workarounds necessary. These are the main issues as I see them:
dependency generation. When a header is modified, any source file that includes this header requires recompilation. The issue is of course working out which source files actually use it. When a "clean" build is performed it is often necessary to cache the information in some kind of dependency tree for later.
include guards. Ok, we all know how to write these but in a perfect system it would not be necessary.
private details. Within a class, you must put the private details into the header. Yes, the compiler needs to know the "size" of the class, but in a perfect system it would be able to bind this in a later phase. This leads to all kinds of workarounds like pImpl, and using abstract base classes even when you only have one implementation, just because you want to hide a dependency.
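The pImpl workaround mentioned in that last point typically looks like this sketch (names hypothetical):
// widget.h -- private members hidden behind an opaque pointer
#include <memory>

class Widget {
public:
    Widget();
    ~Widget(); // must be defined in widget.cpp, where Impl is complete
    void draw();
private:
    struct Impl;                 // only declared here
    std::unique_ptr<Impl> pimpl; // the header no longer exposes private details
};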
The perfect system would work with
separate class definition and declaration
A clear bind between these two, so the compiler would know where a class declaration and its definition are, and would know the size of a class.
You declare using class rather than pre-processor #include. The compiler knows where to find a class. Once you have done "using class" you can use that class name without qualifying it.
I'd be interested to know how D does it.
With regards to whether you can use C++ without headers, I would say no, you need them for abstract base classes and the standard library. Aside from that, you could get by without them, although you probably would not want to.
Can I write C++ code without headers
Read more about C++, e.g. the Programming using C++ book, then the C++11 standard n3337.
Yes, because the preprocessor is (conceptually) generating code without headers.
If your C++ compiler is GCC and you are compiling your translation unit foo.cc consider running g++ -O -Wall -Wextra -C -E foo.cc > foo.ii; the emitted file foo.ii does not contain any preprocessor directive, and could be compiled with g++ -O foo.ii -o foo-bin into a foo-bin executable (at least on Linux). See also Advanced Linux Programming
On Linux, the following C++ file
// file ex.cc
extern "C" long write(int fd, const void *buf, size_t count);
extern "C" long strlen(const char*);
extern "C" void perror(const char*);
int main (int argc, char**argv)
{
if (argc>1)
write(1, argv[1], strlen(argv[1]);
else
write(1, __FILE__ " has no argument",
sizeof(__FILE__ " has no argument"));
if (write(1, "\n", 1) <= 0) {
perror(__FILE__);
return 1;
}
return 0;
}
could be compiled using GCC with g++ ex.cc -O -o ex-bin into an executable ex-bin which, when executed, would show something.
In some cases, it is worthwhile to generate some C++ code with another program
(perhaps SWIG, ANTLR, Bison, RefPerSys, GPP, or your own C++ code generator) and configure your build automation tool (e.g. ninja-build or GNU make) to handle such a situation. Notice that the source code of GCC 10 has a dozen C++ code generators.
With GCC, you might sometimes consider writing your own GCC plugin to analyze your (or others) C++ code (e.g. at the GIMPLE level). See also (in fall 2020) CHARIOT and DECODER European projects. You could also consider using the Clang static analyzer or Frama-C++.
Historically, header files have been used for two reasons.
To provide symbols when compiling a program that wants to use a library or an additional file.
To hide part of the implementation; to keep things private.
For example, say you have a function you don't want exposed to other parts of your program, but want to use in your implementation. In that case, you would write the function in the CPP file, but leave it out of the header file. You can do this with variables and anything else that you would want to keep private in the implementation and not expose to consumers of that source code. In other programming languages there is a "public" keyword that allows module parts to be kept from being exposed to other parts of your program. In C and C++ no such facility exists at a file level, so header files are used instead.
Header files are not perfect, though. Using '#include' just copies the contents of whatever file you provide: double quotes search the current working tree, and < and > search system-installed headers. In C++, for system-installed standard components, the '.h' is omitted; just another way C++ likes to do its own thing. If you give '#include' any kind of file, it will be included. It really isn't a module system like Java, Python, and most other programming languages have. Since headers are not modules, some extra steps need to be taken to get similar functionality out of them. The preprocessor (the thing that handles all the #keywords) will blindly include whatever you state is needed in that file, but C and C++ want your symbols and implementations defined only once per compilation. If you use a library, not in main.cpp, but in the two files that main includes, then you only want that library included once and not twice. Standard library components are handled specially, so you don't need to worry about using the same C++ include everywhere. To make it so that the preprocessor doesn't include your library again after the first time it sees it, you need to use a header guard.
A header guard is the simplest thing. It looks like this:
#ifndef LIBRARY_H
#define LIBRARY_H
// Write your definitions here.
#endif
It is considered good style to comment the #endif like this:
#endif // LIBRARY_H
But if you don't write the comment, the compiler won't care and it won't hurt anything.
All #ifndef is doing is checking whether LIBRARY_H is undefined. If LIBRARY_H is undefined, it provides what comes before the matching #endif.
Then #define LIBRARY_H defines LIBRARY_H, so the next time the preprocessor sees #ifndef LIBRARY_H, it won't provide the same contents again.
(LIBRARY_H should be whatever the file name is, then _, then the extension. Nothing is going to break if you don't write exactly that, but you should be consistent. At least put the file name in the #ifndef; otherwise it can get confusing which guards are for what.)
Really nothing fancy going on here.
Now, say you don't want to use header files.
Great, say you don't care about:
Having things private by excluding them from header files.
Using this code in a library. If you ever do, it may be easier to go with headers now so you don't have to reorganize your code into headers later.
Repeating yourself once in a header file and then in a C++ file.
The purpose of header files can seem ambiguous, and if you don't care about people telling you it's wrong for imaginary reasons, then save your hands and don't bother repeating yourself.
How to include only CPP files
Do
#ifndef THING_CPP
#define THING_CPP
#include <iostream>
void drink_me() {
    std::cout << "Drink me!" << std::endl;
}
#endif // THING_CPP
for thing.cpp.
And for main.cpp do
#include "thing.cpp"
int main() {
    drink_me();
    return 0;
}
then compile.
Basically, just name your included CPP file with the CPP extension, then treat it like a header file, but write out the implementations in that one file.
that one file.