C++ headers difference

Very new to programming and have some questions about headers.
In the book it uses:
#include "std_lib_facilities.h"
My teacher always uses:
#include <iostream>
using namespace std;
Any reason why we would use one over another? Does one work better than the other? Any help would be appreciated. 😊

The headers included in a file change from file to file based on the needs of the file. What Stroustrup uses works in the context of his source examples. What your teacher uses matches the teacher's needs.
What you use will depend on your needs and the needs of the program.
Each file should contain all of the headers required to be complete and no more. If you use std::string in the file, #include <string>. If you use std::set, #include <set>.
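For example, a hypothetical source file whose include list matches exactly what it uses:

// names.cpp (hypothetical) -- it uses std::set and std::string, so it includes
// <set> and <string>, and nothing else.
#include <set>
#include <string>

bool register_name(std::set<std::string>& seen, const std::string& name) {
    return seen.insert(name).second;   // true if the name was not already present
}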
It may appear to be unnecessary to include some header files because another include includes them. For example, since iostream already includes string, why bother including both iostream and string? Practically it may not matter (string was included either way), but it is now a maintenance issue. Not all implementations of a given header will have the same includes. You will find yourself in interesting cases where the code compiles in one compiler, or version of the compiler, and not another because an include dependency changed in a header possibly two or three includes down the chain.
It is best to head that off for you and everyone who may follow in maintaining your code.
If you are not concerned about maintenance, look up the y2k bug. That was a problem because software written to meet the needs and constraints of the 1960's and 1970's was still being used 40 years later in a world with different needs and vastly different constraints. You'd be surprised how much code outlives the expected lifetime.
While it's nice to have certain employment for the rest of your life, it sucks to blow a weekend of your Hawai'i vacation because the modification to your program that could have been done by a co-op turned into a nightmare because GCC 7.12 is just plain different from the GCC 5.2 under which you originally wrote the program.
You will also find that as long chains of files get included, a poorly constructed header can inject ordering dependencies on headers. Program X works fine if header Y is included before header Z. This is because Z needs stuff included by Y that Z did not declare. Don't be the person to write Z.
However, don't include everything. The more you include, the longer it takes to compile and the more exposure you have to unforeseen combinations reacting badly. Like it or not, someone always writes header Z, and you don't want the kick to the schedule that debugging Z causes if you didn't even need to include Z.
Why does it take longer to compile? Think of each include as a command for the compiler (preprocessor, really) to paste the included file into the including file. The compiler then compiles the combined file. Every include needs to be loaded from disk (slow), possibly security scanned (often awesomely slow), and merged into the file to be compiled (faster than you might expect). All of the included file's includes are pasted in, and before you know it, you have one huge mother of a file to be parsed by the compiler. If a huge portion of that huge file is not required, that's wasted effort.
Avoid like the plague structures that require header Y including header Z and header Z including Y. An include guard, more on that later, will protect against the obvious recursion problem (Y includes Z includes Y includes Z includes...), but it also means that you cannot get either Y or Z included in time to satisfy Z or Y.
Some common headers, string is a favourite, are included over and over again. You don't want to keep re-including the header and its dependencies, but you also want to make sure it has been included if it hasn't. To solve this problem you surround your headers with a header guard. It looks like this:
#ifndef UNIQUE_NAME
#define UNIQUE_NAME
// header contents goes here
#endif
If UNIQUE_NAME has not been defined, define it and replicate the header contents in the file to be compiled. This has a couple of problems. If UNIQUE_NAME is not unique, you're going to have some really freaky "not found" error messages, because the first header guard will block the include of the next.
The uniqueness problem is solved with #pragma once, but #pragma once has a few problems of its own. It's not standard C++, so it is not implemented in all compilers. #pragma directives are silently ignored (unless compiler warnings are turned way up, and sometimes not even then) if the pragma is not supported by the compiler. Utter chaos ensues as headers are repeatedly included and you'll have no warning. #pragma once can also be fooled by complicated directory structures, including network maps and links.
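For illustration, a hypothetical header relying on it instead of a guard:

// gadget.h (hypothetical) -- #pragma once in place of an include guard;
// not standard C++, but honoured by most mainstream compilers.
#pragma once

class Gadget {
public:
    void frob();
};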
So...
Stroustrup is using #include "std_lib_facilities.h" because it includes all of the bits and pieces he needs for the early lessons in his book without having to risk information overload by covering the nitty-gritty of those bits and pieces. It's classic chicken and egg. Stroustrup wants to teach those early lessons in a controlled manner by reducing the information load on the student until Stroustrup can cover the material required to actually understand the background details of those early lessons.
Great teaching strategy. In a way, the strategy is worth emulating in code: He's eliminated a problem by adding a layer of indirection.
But it can be a bad strategy in the programmer's real world of sliding ship dates, pointy-haired bosses, death marches, and lawsuits. You don't want to waste time fixing a non-problem that didn't need to be there in the first place. Code that's not there has no bugs. Or its absence is the bug, but that's another issue.
And Bjarne Stroustrup can get away with some stuff because he's Bjarne expletive deleted Stroustrup. He knows what he's doing. He's demonstrated that. He's probably considered, measured, and weighed the implications of his megaheader because he knows what they are. If he was worried that something in it was going to smurf over his students, he wouldn't have done it.
The first year programming student isn't Bjarne Stroustrup. Even if they have a brain the size of a planet, they lack the years of experience. It's usually the stuff you don't know that you have to worry about, and Stroustrup's pool of "don't know" is going to be notably smaller.
There are all sorts of unsafe things that you can do when you know what you are doing and can honestly justify the doing, as long as you take into account that you might not be the only person with a vested interest in the code. Your super-elegant-but-super-arcane solution isn't worth much if the coder who picks up your portfolio after you get hit by a bus can't read it. Your lovely code is going into the bin and your legacy will be, "What the smurf was that smurfing idiot trying to do with that smurf?" At the very least leave some notes.
Stroustrup also has page and word constraints that require him to compress the text in the printed edition. One Header to Rule Them All helps dramatically.
Your teacher has a similar problem with student overload, but probably doesn't have the print space constraints. For the snippets of code you've seen so far, all that has been required is the basic input and output that requires iostream. And probably string, but even I'll admit it'll be really hard to write the iostream header without it including string.
Soon you will start seeing #include <vector>, #include <fstream>, and #include <random>, as well as start writing your own headers.
Strive to learn to do it right, otherwise you'll face a nasty software engineering learning curve after graduation.

The std_lib_facilities.h file, supplied with the book or accessible online, is like a shortcut to including multiple header files (if you include that file and look at its source, it just includes a number of useful standard headers so that you don't need to explicitly include them yourself).
Think of it as inviting your mother around to dinner. You invite her, and that inevitably implies that your dad (okay, I'm making assumptions), your sister and that annoying yappy dog of hers will automatically come for dinner too, without asking them explicitly.
It's not really a great practice, but I presume the author has their reasons.
Your teacher is correct, the minimal code will often include iostream so that you can output to the terminal. However, and this is something I have to stress, using namespace std is bad practice. You can access std functionality by explicitly using std:: - for example:
std::cout << "hello world!" << std::endl;
Staying out of a blanket using namespace std is beneficial for a multitude of reasons; you can find them with a bit of googling.
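One middle ground, shown here only as a sketch, is a scoped using-declaration for just the names you actually use, kept out of headers:

#include <iostream>
#include <string>

int main() {
    using std::cout;    // pull in only the names this function needs...
    using std::string;  // ...and only within this scope

    string greeting = "hello world!";
    cout << greeting << '\n';   // no blanket 'using namespace std' anywhere
}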

My guess is the author of the book does it to keep the code examples shorter, not as an example of exemplary style. No need to put boilerplate text in every example.
For my take, putting a bunch of includes into a header file that everybody includes (that isn't a precompiled header) can slow compilation down considerably (not only having to seek and load each file, but to check dependencies, etc), particularly if you have an overzealous IT department that insists on live scanning every file access.

Related

Why does Xcode 4 include iostream in every header?

I noticed that newly created CPP files in Xcode ~4 all #include <iostream>. I never use any iostream functionality, so I usually strip them out (having heard from the Google Blink team blog that they can gradually slow build times). Are there any useful generic functions of iostream that make including it all the time valuable? Such as instrumentation or reflection features that not having it everywhere would break?
It seems like a bold step to add it everywhere - especially given how conservative many groups' software engineering is! - so I feel there must be something important I'm missing.
Does anyone know the reason why this header has become so important it must be everywhere?
I second juanchopanza's statement: There is no header that needs to be included everywhere. Each #include should only be in a file when it is really needed.

Include directives in header file? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
where should “include” be put in C++
Obviously, there are two "schools of thought" as to whether to put #include directives into C++ header files (or, as an alternative, put #include only into cpp files). Some people say it's ok, others say it only causes problems. Does anybody know whether this discussion has reached a conclusion about what is to be preferred?
I am not aware of any schools of thought concerning this. Put them in the header when they are needed there, otherwise forward declare and put them in the .cpp files that require them. There is no benefit in including headers where they are not needed.
What I found effective is following a few simple rules:
Headers shall be self-sufficient, i.e., they shall declare classes they need names for and include headers for any definition they use.
Headers should minimize dependencies as much as possible without violating the previous point.
Getting the first point right is fairly easy: include the header first thing from the source implementing what it declares. Getting the second point exactly right isn't trivial, though, and I think it requires tool support to get it exactly right. However, a few unnecessary dependencies generally aren't that bad.
As a rule of thumb, you don't include headers in another header unless a full definition is necessary there. Most of the time you only deal with pointers to classes in a header file, so it's fine to just forward declare them there (see the sketch below).
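A minimal sketch of that rule of thumb, with hypothetical class and file names:

// engine.h -- only stores a pointer to Gearbox, so a forward declaration suffices
class Gearbox;                  // forward declaration; no #include "gearbox.h" here

class Engine {
public:
    void attach(Gearbox* g);
private:
    Gearbox* gearbox_ = nullptr;
};

// engine.cpp -- the definition actually uses Gearbox, so include the full header here
#include "engine.h"
#include "gearbox.h"

void Engine::attach(Gearbox* g) {
    gearbox_ = g;
    gearbox_->reset();          // needs the full Gearbox definition, hence the include
}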
I think the issue was settled a long time ago: headers should be self-contained (that is, they should not depend on the user having included other headers before; that aspect has been settled for so long that some aren't even aware there was a debate on it, but your "put includes only in .cpp files" alternative seems to hint at it) but minimal (i.e. they should not include definitions when a declaration would be enough for self-containment).
The reason for self-containment is maintenance: should a header be modified and now depend on something new, you'd have to track down all the places it is used to include the new dependency. By the way, the standard trick to ensure self-containment is to include the header providing the declarations for things defined in a .cpp first in that .cpp (sketched below).
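Concretely, with hypothetical file names, the trick looks like this; if widget.h ever stops being self-contained, widget.cpp is the first place the compiler complains:

// widget.h
#ifndef WIDGET_H
#define WIDGET_H
#include <string>              // widget.h uses std::string, so it includes <string> itself

class Widget {
public:
    std::string name() const;
};
#endif // WIDGET_H

// widget.cpp
#include "widget.h"            // own header first: it must compile on its own

std::string Widget::name() const { return "widget"; }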
These are not schools of thought so much as religions. In reality, both approaches have their advantages and disadvantages, and there are certain practices to be followed for either approach to be successful. But only one of these approaches will "scale" to large projects.
The advantage of not including headers inside headers is faster compilation. However, this advantage does not come from headers being read only once, because even if you include headers inside headers, smart compilers can work that out. The speed advantage comes from the fact that you include only those headers which are strictly necessary for a given source file. Another advantage is that if we look at a source file, we can see exactly what its dependencies are: the flat list of header files gives that to us plainly.
However, this practice is hard to maintain, especially in large projects with many programmers. It's quite an inconvenience when you want to use module foo, but you cannot just #include "foo.h": you need to include 35 other headers.
What ends up happening is this: programmers are not going to waste their time discovering the exact, minimal set of headers that they need just to add module foo. To save time, they will go to some example source file similar to the one they are working on, and cut and paste all of the #include directives. Then they will try compiling it, and if it doesn't build, then they will cut and paste more #include directives from yet elsewhere, and repeat that until it works.
The net result is that, little by little, you lose the advantage of faster compiling, because your files are now including unnecessary headers. Moreover, the list of #include directives no longer shows the true dependencies. Moreover, when you do incremental compiles now, you compile more than is necessary due to these false dependencies.
Once every source file includes nearly every header, you might as well have a big everything.h which includes all the headers, and then #include "everything.h" in every source file.
So this practice of including just specific headers is best left to small projects that are carefully maintained by a handful of developers who have plenty of time to maintain the ethic of minimal include dependencies by hand, or write tools to hunt down unnecessary #include directives.

What objective reasons for coding without headers in C or C++ are there? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Coding C++ without headers, best practices?
Inspired by this question - the OP there just finds the concept of headers confusing and depressing; it's his subjective experience.
Now are there any objective reasons and scenarios for avoiding headers when coding in C or C++?
There are no objective reasons, only irrational ones borne of subjective bias induced by experiences with other languages for which headers don't exist as a concept.
The compilation unit is a fundamental concept in C and C++ compilers. A compilation unit must have access to all declarations at the source code level in order to access functionality provided by other compilation units, and the only logically coherent way to ensure that publishers and consumers of functionality see the declarations is for them to share the source code for those declarations. Headers are the natural way to achieve this.
Being a bit more technical about it, headers are not a C (or C++) concept per se. They are a preprocessor concept. When processing a source file, whenever the preprocessor encounters a #include directive, it replaces the directive with the entire contents of the header being referred to, recursively expanding any #include directives in the header itself. By the time the compiler proper kicks in, it sees nothing but a single "file" containing all the declarations it needs to do its job.
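A toy illustration of that paste, with hypothetical file names:

// point.h
struct Point { int x, y; };

// main.cpp, as written
#include "point.h"
int main() { Point p{1, 2}; return p.x + p.y; }

// main.cpp, roughly as the compiler proper sees it after preprocessing
struct Point { int x, y; };
int main() { Point p{1, 2}; return p.x + p.y; }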
The key point is that, for better or worse, headers are the standard way to organise complex C and C++ code bases. You can play fun and games with #ifdefs to put both "header" and implementation in one source file, but that works against every tool-chain in the universe, and means that preprocessors have far more work to do, thus dramatically slowing down your builds.
None at all, it's just a bad idea.
The only objective reason would be if your C compiler could only include an extremely limited number of files over the course of a compilation, or if the system you are working on had an extremely limited number of inodes.
#include-ing a header file tells the compiler, quite literally, "copy-paste some text here." Now perhaps your question is really asking, "are there any misuses of header files?" (and there certainly are), but as written, it makes as little sense as, "are there any objective reasons to avoid source files?"
I found header files great before I worked with Java. Now I find them annoying, because they are code duplication: you have to keep two files in sync.
Of course, if you have a tool which builds the header dynamically, it's not that bad. Are there such tools?
No, header files are an essential tool and cannot be removed in the language as it is. That doesn't mean that they don't completely suck and shouldn't be removed ASAP.
I have experienced one good reason for avoiding headers in C: distributing an open source application as a single .c file.
This is the distribution mode of dcraw, an ANSI-C cross-platform software for converting digital camera RAW files.

Include everything, Separate with "using"

I'm developing a C++ library. It got me thinking of the ways Java and C# handle including different components of the libraries. For example, Java uses "import" to allow use of classes from other packages, while C# simply uses "using" to import entire modules.
My question is, would it be a good idea to #include everything in the library in one massive include and then just use the using directive to import specific classes and modules? Or would this just be downright crazy?
EDIT:
Good responses so far, here are a few mitigating factors which I feel add to this idea:
1) Internal #includes are kept as normal (short and to the point)
2) The file which includes everything is optionally supplied with the library to those who wish to use it
3) You could optionally make the big include file part of the pre-compiled header
You're confusing the purpose of #include statements in C++. They do not behave like import statements in Java or using statements in C#. #include does what it says; namely, loads and parses the entire indicated file as part of the current translation unit. The reason for the separate includes is to not have to spend compilation time parsing the entire standard library in every file. In contrast, the statements you're trying to make #include behave like are merely for programmer organization purposes.
#include is for management of the compilation process, not for separating uses. (In fact, you cannot use separate headers to enforce separate uses, because to do so would violate the one definition rule.)
tl;dr -> No, you shouldn't do that. #include as little as possible. When your project becomes large, you'll thank yourself when you're not waiting many hours to compile your project.
I would personally recommend only including the headers when you need them, to explicitly show which functionalities your file requires. At the same time, doing so will prevent you from gaining access to functionality you might not necessarily want, e.g. functions unrelated to the goal of the file. Sure, this is no big deal, but I think that it's easier to maintain and change code when you don't have access to unnecessary functions/classes; it just makes it more straightforward.
I might be downvoted for this, but I think you bring up an interesting idea. It would probably slow down compilation a bit, but I think the concept is neat.
As long as you use using sparingly, only for the namespaces you need, other developers would be able to get an idea of what classes were used in a file by glancing at the top. It wouldn't be as granular as seeing a list of #included files, but is seeing a list of included header files really very useful? I don't think so.
Just make sure that all of the header files all use inclusion guards, of course. :)
As said by @Billy ONeal, the main thing is that #include is a preprocessor directive that causes a "^C, ^V" (copy-paste) of code, which leads to a compile-time increase.
The best considered policy in C++ is to forward declare all possible classes in ".h" files and just include them in the ".cpp" file. It isolates dependencies, as a C/C++ project will be cascadingly rebuilt if a dependent include file is changed.
Of course, M$ compilers and their precompiled headers tend to push in the opposite direction, closer to what you suggest. But anyone who has tried to port code across those compilers is well aware of how smelly things can get.
Some libraries like Qt make extensive use of forward declarations. Take a look on it to see if you like its taste.
I think it will be confusing. When you write C++ you should avoid making it look like Java or C# (or C :-). I for one would really wonder why you did that.
Supplying an include-all file isn't really that helpful either, as a user could easily create one herself, with the parts of the library actually used. It could then be added to a precompiled header, if one is used.

Can I write C++ code without headers (repetitive function declarations)?

Is there any way to not have to write function declarations twice (headers) and still retain the same scalability in compiling, clarity in debugging, and flexibility in design when programming in C++?
Use Lzz. It takes a single file and automatically creates a .h and .cpp for you with all the declarations/definitions in the right place.
Lzz is really very powerful, and handles 99% of full C++ syntax, including templates, specializations etc etc etc.
Update 150120:
Newer C++ '11/14 syntax can only be used within Lzz function bodies.
I felt the same way when I started writing C, so I also looked into this. The answer is that yes, it's possible and no, you don't want to.
First with the yes.
In GCC, you can do this:
// foo.cph
#include <cstdio>

void foo();    // declaration: always visible wherever this file is included

#if __INCLUDE_LEVEL__ == 0
// Definition: only emitted when this file is compiled directly,
// not when it is #included from another file.
void foo() {
    printf("Hello World!\n");
}
#endif
This has the intended effect: you combine both header and source into one file that can both be included and linked.
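Using it might look like this (a sketch; the -x c++ flag tells GCC to treat the unfamiliar .cph extension as C++):

// main.cc
#include "foo.cph"   // included at level 1, so only the declaration of foo() appears here

int main() {
    foo();           // the definition comes from compiling foo.cph directly
    return 0;
}

// one possible build command:
//   g++ main.cc -x c++ foo.cph -o hello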
Then with the no:
This only works if the compiler has access to the entire source. You can't use this trick when writing a library that you want to distribute but keep closed-source. Either you distribute the full .cph file, or you have to write a separate .h file to go with your .lib. Although maybe you could auto-generate it with the macro preprocessor. It would get hairy though.
And reason #2 why you don't want this, and that's probably the best one: compilation speed. Normally, C source files only have to be recompiled when the file itself changes, or when any of the files it includes changes.
The C file can change frequently, but the change only involves recompiling the one file that changed.
Header files define interfaces, so they shouldn't change as often. When they do however, they trigger a recompile of every source file that includes them.
When all your files are combined header and source files, every change will trigger a recompile of all source files. C++ isn't known for its fast compile times even now, imagine what would happen when the entire project had to be recompiled every time. Then extrapolate that to a project of hundreds of source files with complicated dependencies...
Sorry, but there's no such thing as a "best practice" for eliminating headers in C++: it's a bad idea, period. If you hate them that much, you have three choices:
Become intimately familiar with C++ internals and any compilers you're using; you're going to run into different problems than the average C++ developer, and you'll probably need to solve them without a lot of help.
Pick a language you can use "right" without getting depressed
Get a tool to generate them for you; you'll still have headers, but you save some typing effort
In his article Simple Support for Design by Contract in C++, Pedro Guerreiro stated:
Usually, a C++ class comes in two files: the header file and the definition file. Where should we write the assertions: in the header file, because assertions are specification? Or in the definition file, since they are executable? Or in both, running the risk of inconsistency (and duplicating work)? We recommend, instead, that we forsake the traditional style, and do away with the definition file, using only the header file, as if all functions were defined inline, very much like Java and Eiffel do.
This is such a drastic change from the C++ normality that it risks killing the endeavor at the outset. On the other hand, maintaining two files for each class is so awkward, that sooner or later a C++ development environment will come up that hides that from us, allowing us to concentrate on our classes, without having to worry about where they are stored.
That was 2001. I agreed. It is 2009 now and still no "development environment that hides that from us, allowing us to concentrate on our classes" has come up. Instead, long compile times are the norm.
Note: The link above seems to be dead now. This is the full reference to the publication, as it appears in the Publications section of the author's website:
Pedro Guerreiro, Simple Support for Design by Contract in C++, TOOLS USA 2001, Proceedings, pages 24-34, IEEE, 2001.
There is no practical way to get around headers. The only thing you could do is to put all code into one big C++ file. That will end up in an unmaintainable mess, so please don't do it.
At the moment C++ header files are a necessary evil. I don't like them, but there is no way around them. I'd love to see some improvements and fresh ideas on the problem though.
Btw, once you've got used to them it's not that bad anymore. C++ (and any other language as well) has more annoying things.
What I have seen some people like you do is write everything in the headers. That gives you the desired property of only having to write the function signatures once.
Personally I think there are very good reasons why it is better to separate declaration and definition, but if this distresses you there is a way to do what you want.
There's header file generation software. I've never used it, but it might be worth looking into. For instance, check out mkhdr! It supposedly scans C and C++ files and generates the appropriate header files.
(However, as Richard points out, this seems to limit you from using certain C++ functionality. See Richard's answer right here in this thread instead.)
You have to write function declaration twice, actually (once in header file, once in implementation file). The definition (AKA implementation) of the function will be written once, in the implementation file.
You can write all the code in header files (it is actually a very common practice in generic programming in C++), but this implies that every C/CPP file including that header means recompiling the implementation from those header files.
If you are thinking to a system similar to C# or Java, it is not possible in C++.
Nobody has mentioned Visual-Assist X under Visual Studio 2012 yet.
It has a bunch of menus and hotkeys that you can use to ease the pain of maintaining headers:
"Create Declaration" copies the function declaration from the current function into the .hpp file.
"Refactor..Change signature" allows you to simultaneously update the .cpp and .h file with one command.
Alt-O allows you to instantly flip between .cpp and .h file.
C++20 modules solve this problem. There is no need for copy-pasting anymore! Just write your code in a single file and export things using "export".
export module mymodule;

export int myfunc() {
    return 1;
}
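A consuming translation unit then imports the module rather than including a header (a sketch; the build flags for modules still vary by compiler):

// main.cpp
import mymodule;        // no header, no copy-paste

int main() {
    int v = myfunc();   // 1, provided by the module's exported interface
    return v == 1 ? 0 : 1;
}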
Read more about modules here: https://en.cppreference.com/w/cpp/language/modules
At the time of writing this answer (February 2022), only some compilers support it. See here for the supported compilers:
https://en.cppreference.com/w/cpp/compiler_support
See this answer if you want to use modules with CMake:
https://stackoverflow.com/a/71119196/7910299
Actually... You can write the entire implementation in one file. Templated classes are all defined in the header file with no cpp file.
You can also save them with whatever extensions you want. Then in #include statements, you would include your file.
/* mycode.cpp */
#pragma once
#include <iostream>

class myclass {
public:
    myclass();
    void dothing();
};

myclass::myclass() { }

void myclass::dothing()
{
    // code
}
Then in another file
/* myothercode.cpp */
#pragma once
#include "mycode.cpp"

int main() {
    myclass A;
    A.dothing();
    return 0;
}
You may need to setup some build rules, but it should work.
You can avoid headers. Completely. But I don't recommend it.
You'll be faced with some very specific limitations. One of them is you won't be able to have circular references (you won't be able to have class Parent contain a pointer to an instance of class ChildNode, and class ChildNode also contain a pointer to an instance of class Parent. It'd have to be one or the other.)
There are other limitations which just end up making your code really weird. Stick to headers. You'll learn to actually like them (since they provide a nice quick synopsis of what a class can do).
To offer a variant on the popular answer of rix0rrr:
// foo.cph
#define INCLUDEMODE
#include "foo.cph"
#include "other.cph"
#undef INCLUDEMODE

void foo()
#if !defined(INCLUDEMODE)
{
    printf("Hello World!\n");
}
#else
;
#endif

void bar()
#if !defined(INCLUDEMODE)
{
    foo();
}
#else
;
#endif
I do not recommend this, but I think this construction demonstrates the removal of content repetition at the cost of rote repetition. I guess it makes copy-pasta easier? That's not really a virtue.
As with all the other tricks of this nature, a modification to the body of a function will still require recompilation of all files including the file containing that function. Very careful automated tools can partially avoid this, but they would still have to parse the source file to check, and be carefully constructed to not rewrite their output if it's no different.
For other readers: I spent a few minutes trying to figure out include guards in this format, but didn't come up with anything good. Comments?
I understand your problems. I would say that the main C++ problem is the compilation/build method that it inherited from C. The C/C++ header structure was designed in times when coding involved fewer definitions and more implementation. Don't throw bottles at me, but that's how it looks.
Since then, OOP has conquered the world, and the world is more about definitions than implementations. As a result, including headers makes it pretty painful to work with a language where the fundamental collections, such as the ones in the STL, are made with templates, which are a notoriously difficult job for the compiler to deal with. All that magic with precompiled headers doesn't help much when it comes to TDD, refactoring tools, or the general development environment.
Of course C programmers are not suffering from this too much since they don't have compiler-heavy header files and so they are happy with the pretty straightforward, low-level compilation tool chain. With C++ this is a history of suffering: endless forward declarations, precompiled headers, external parsers, custom preprocessors etc.
Many people, however, do not realize that C++ is the ONLY language that has strong and modern solutions for both high- and low-level problems. It's easy to say that you should go for another language with proper reflection and a build system, but it is nonsense that we have to sacrifice the low-level programming solutions for that and complicate things with a low-level language mixed with some virtual-machine/JIT based solution.
I have had this idea for some time now, that it would be the coolest thing on earth to have a "unit" based C++ tool-chain, similar to that in D. The problem comes up with the cross-platform part: the object files are able to store any information, no problem with that, but since the object file structure on Windows is different from ELF, it would be a pain in the ass to implement a cross-platform solution for storing and processing the halfway-compiled units.
After reading all the other answers, I find that they are missing the fact that there is ongoing work to add support for modules to the C++ standard. It will not make it into C++0x, but the intention is that it will be tackled in a later Technical Report (rather than waiting for a new standard, which would take ages).
The proposal that was being discussed is N2073.
The bad part of it is that you will not get that, not even with the newest C++0x compilers. You will have to wait. In the meantime, you will have to compromise between the uniqueness of definitions in header-only libraries and the cost of compilation.
As far as I know, no. Headers are an inherent part of C++ as a language. Don't forget that a forward declaration allows the compiler to merely reference a function that lives in another compiled object, without having to include the whole function body (which you can get around by declaring the function inline, if the compiler feels like honouring it).
If you really, really, really hate making headers, write a Perl script to autogenerate them instead. I'm not sure I'd recommend it though.
It's completely possible to develop without header files. One can include a source file directly:
#include "MyModule.c"
The major issue with this is one of circular dependencies (i.e., in C you must declare a function before calling it). This is not an issue if you design your code completely top-down, but it can take some time to wrap one's head around this sort of design pattern if you're not used to it.
If you absolutely must have circular dependencies, one may want to consider creating a file specifically for declarations and including it before everything else. This is a little inconvenient, but still less pollution than having a header for every C file.
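A sketch of that shape, compressed into one translation unit with hypothetical function names: the small block of forward declarations up front plays the role of the declarations file and breaks the circular calls.

#include <cstdio>

// "declarations" section -- in practice this would live in its own file,
// included before everything else
void ping(int n);
void pong(int n);

// "ping module"
void ping(int n) {
    if (n > 0) { std::printf("ping %d\n", n); pong(n - 1); }
}

// "pong module"
void pong(int n) {
    if (n > 0) { std::printf("pong %d\n", n); ping(n - 1); }
}

int main() { ping(3); }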
I am currently developing using this method for one of my major projects. Here is a breakdown of advantages I've experienced:
Much less file pollution in your source tree.
Faster build times. (Only one object file is produced by the compiler, main.o)
Simpler make files. (Only one object file is produced by the compiler, main.o)
No need to "make clean". Every build is "clean".
Less boilerplate code. Less code = less potential bugs.
I've discovered that Gish (a game by Cryptic Sea, Edmund McMillen) used a variation on this technique inside its own source code.
You can carefully lay out your functions so that all of the dependent functions are compiled after their dependencies, but as Nils implied, that is not practical.
Catalin (forgive the missing diacritical marks) also suggested a more practical alternative of defining your methods in the header files. This can actually work in most cases, especially if you have guards in your header files to make sure they are only included once.
I personally think that header files + declaring functions is much more desirable for 'getting your head around' new code, but that is a personal preference I suppose...
You can do without headers. But why spend effort trying to avoid carefully worked-out best practices that have been developed over many years by experts?
When I wrote BASIC, I quite liked line numbers. But I wouldn't think of trying to jam them into C++, because that's not the C++ way. The same goes for headers... and I'm sure other answers explain all the reasoning.
For practical purposes no, it's not possible. Technically, yes, you can. But, frankly, it's an abuse of the language, and you should adapt to the language. Or move to something like C#.
It is best practice to use header files, and after a while it will grow on you.
I agree that having only one file is easier, but it also can lead to bad coding.
Some of these things, although they feel awkward, allow you to get more than meets the eye.
As an example, think about pointers, passing parameters by value/by reference, etc.
For me, header files allow me to keep my projects properly structured.
Learn to recognize that header files are a good thing. They separate how code appears to another user from the implementation of how it actually performs its operations.
When I use someone's code I do not want to have to wade through all of the implementation to see what the methods on a class are. I care about what the code does, not how it does it.
This has been "revived" thanks to a duplicate...
In any case, the concept of a header is a worthy one, i.e. separate out the interface from the implementation detail. The header outlines how you use a class / method, and not how it does it.
The downside is the detail within headers and all the workarounds necessary. These are the main issues as I see them:
dependency generation. When a header is modified, any source file that includes this header requires recompilation. The issue is of course working out which source files actually use it. When a "clean" build is performed it is often necessary to cache the information in some kind of dependency tree for later.
include guards. Ok, we all know how to write these but in a perfect system it would not be necessary.
private details. Within a class, you must put the private details into the header. Yes, the compiler needs to know the "size" of the class, but in a perfect system it would be able to bind this in a later phase. This leads to all kinds of workarounds like pImpl (see the sketch after this list) and using abstract base classes even when you only have one implementation, just because you want to hide a dependency.
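For reference, the pImpl workaround mentioned above looks roughly like this (hypothetical names):

// parser.h -- the public interface; private members hide behind an opaque pointer
#include <memory>

class Parser {
public:
    Parser();
    ~Parser();                    // defined in the .cpp, where Impl is complete
    int parse(const char* text);
private:
    struct Impl;                  // incomplete type: clients never see the members
    std::unique_ptr<Impl> impl_;
};

// parser.cpp -- the only file that needs the private details and their includes
#include "parser.h"
#include <cstring>

struct Parser::Impl {
    int calls = 0;                // private state invisible to header consumers
};

Parser::Parser() : impl_(std::make_unique<Impl>()) {}
Parser::~Parser() = default;

int Parser::parse(const char* text) {
    ++impl_->calls;
    return static_cast<int>(std::strlen(text));
}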
The perfect system would work with
separate class definition and declaration
A clear bind between these two, so the compiler would know where a class declaration and its definition are, and would know what the size of a class is.
You declare using class rather than pre-processor #include. The compiler knows where to find a class. Once you have done "using class" you can use that class name without qualifying it.
I'd be interested to know how D does it.
With regards to whether you can use C++ without headers, I would say no: you need them for abstract base classes and the standard library. Aside from that you could get by without them, although you probably would not want to.
Can I write C++ code without headers
Read more about C++, e.g. the Programming using C++ book, then the C++11 standard n3337.
Yes, because the preprocessor is (conceptually) generating code without headers.
If your C++ compiler is GCC and you are compiling your translation unit foo.cc consider running g++ -O -Wall -Wextra -C -E foo.cc > foo.ii; the emitted file foo.ii does not contain any preprocessor directive, and could be compiled with g++ -O foo.ii -o foo-bin into a foo-bin executable (at least on Linux). See also Advanced Linux Programming
On Linux, the following C++ file
// file ex.cc
// no headers: declare the C library functions by hand
// (unsigned long stands in for size_t on this ABI)
extern "C" long write(int fd, const void *buf, unsigned long count);
extern "C" long strlen(const char*);
extern "C" void perror(const char*);

int main (int argc, char**argv)
{
    if (argc>1)
        write(1, argv[1], strlen(argv[1]));
    else
        write(1, __FILE__ " has no argument",
              sizeof(__FILE__ " has no argument"));
    if (write(1, "\n", 1) <= 0) {
        perror(__FILE__);
        return 1;
    }
    return 0;
}
could be compiled using GCC as g++ ex.cc -O -o ex-bin into an executable ex-bin which, when executed, would show something.
In some cases, it is worthwhile to generate some C++ code with another program
(perhaps SWIG, ANTLR, Bison, RefPerSys, GPP, or your own C++ code generator) and configure your build automation tool (e.g. ninja-build or GNU make) to handle such a situation. Notice that the source code of GCC 10 has a dozen of C++ code generators.
With GCC, you might sometimes consider writing your own GCC plugin to analyze your (or others) C++ code (e.g. at the GIMPLE level). See also (in fall 2020) CHARIOT and DECODER European projects. You could also consider using the Clang static analyzer or Frama-C++.
Historically, header files have been used for two reasons.
To provide symbols when compiling a program that wants to use a library or an additional file.
To hide part of the implementation; keep things private.
For example, say you have a function you don't want exposed to other parts of your program, but want to use in your implementation. In that case, you would write the function in the CPP file, but leave it out of the header file. You can do this with variables and anything else in the implementation that you don't want exposed to consumers of that source code. In other programming languages there is a "public" keyword that allows module parts to be kept from being exposed to other parts of your program. In C and C++ no such facility exists at a file level, so header files are used instead.
Header files are not perfect. Using #include just copies the contents of whatever file you provide: double quotes for files in your own source tree, and < and > for system-installed headers. In C++, for the system-installed standard components the '.h' is omitted; just another way C++ likes to do its own thing. If you give #include any kind of file, it will be included. It really isn't a module system like Java, Python, and most other programming languages have.
Since headers are not modules, some extra steps need to be taken to get similar function out of them. The preprocessor (the thing that works with all the #keywords) will blindly include whatever you state is needed in that file, but C and C++ want your symbols and implementations defined only once per compilation. If you use a library, not in main.cpp but in the two files that main includes, then you only want that library included once and not twice. Standard library components are handled for you, so you don't need to worry about using the same C++ include everywhere. To make it so that the first time the preprocessor sees your library it doesn't include it again, you need to use a header guard.
A header guard is the simplest thing. It looks like this:
#ifndef LIBRARY_H
#define LIBRARY_H
// Write your definitions here.
#endif
It is considered good style to comment the #endif like this:
#endif // LIBRARY_H
But if you don't write the comment, the compiler won't care and it won't hurt anything.
All #ifndef does is check whether LIBRARY_H has been defined yet. When it has not, the preprocessor keeps what comes before the matching #endif.
Then #define LIBRARY_H defines LIBRARY_H, so the next time the preprocessor sees #ifndef LIBRARY_H, it won't emit the same contents again.
(LIBRARY_H should be whatever the file name is, then an underscore and the extension. Nothing is going to break if you don't write it exactly that way, but you should be consistent. At least put the file name in the #ifndef. Otherwise it might get confusing which guards are for which files.)
Really nothing fancy going on here.
Now say you don't want to use header files.
Great. Say that:
You don't care about having things private by excluding them from header files.
You don't intend to use this code in a library. (If you ever do, it may be easier to go with headers now so you don't have to reorganise your code into headers later.)
You don't want to repeat yourself once in a header file and then again in a C++ file.
The purpose of header files can seem ambiguous, and if you don't care about people telling you it's wrong for imaginary reasons, then save your hands and don't bother repeating yourself.
How to include only CPP files
Do
#ifndef THING_CPP
#define THING_CPP
#include <iostream>
void drink_me() {
std::cout << "Drink me!" << std::endl;
}
#endif // THING_CPP
for thing.cpp.
And for main.cpp do
#include "thing.cpp"
int main() {
drink_me();
return 0;
}
then compile.
Basically, just name your included CPP file with the CPP extension and then treat it like a header file, but write out the implementations in that one file.