So I finished my first C++ programming assignment and received my grade. But according to the grading, I lost marks for including cpp files instead of compiling and linking them. I'm not too clear on what that means.
Taking a look back at my code, I chose not to create header files for my classes, but did everything in the cpp files (it seemed to work fine without header files...). I'm guessing that the grader meant that I wrote '#include "mycppfile.cpp";' in some of my files.
My reasoning for #include'ing the cpp files was:
- Everything that was supposed to go into the header file was in my cpp file, so I pretended it was like a header file
- In monkey-see-monkey-do fashion, I saw that other header files were #include'd in the source files, so I did the same for my cpp file.
So what exactly did I do wrong, and why is it bad?
To the best of my knowledge, the C++ standard knows no difference between header files and source files. As far as the language is concerned, any text file with legal code is the same as any other. However, although not illegal, including source files into your program will pretty much eliminate any advantages you would've got from separating your source files in the first place.
Essentially, what #include does is tell the preprocessor to take the entire file you've specified, and copy it into your active file before the compiler gets its hands on it. So when you include all the source files in your project together, there is fundamentally no difference between what you've done, and just making one huge source file without any separation at all.
"Oh, that's no big deal. If it runs, it's fine," I hear you cry. And in a sense, you'd be correct. But right now you're dealing with a tiny tiny little program, and a nice and relatively unencumbered CPU to compile it for you. You won't always be so lucky.
If you ever delve into the realms of serious computer programming, you'll be seeing projects with line counts that can reach millions, rather than dozens. That's a lot of lines. And if you try to compile one of these on a modern desktop computer, it can take a matter of hours instead of seconds.
"Oh no! That sounds horrible! However can I prevent this dire fate?!" Unfortunately, there's not much you can do about that. If it takes hours to compile, it takes hours to compile. But that only really matters the first time -- once you've compiled it once, there's no reason to compile it again.
Unless you change something.
Now, if you had two million lines of code merged together into one giant behemoth, and need to do a simple bug fix such as, say, x = y + 1, that means you have to compile all two million lines again in order to test this. And if you find out that you meant to do a x = y - 1 instead, then again, two million lines of compile are waiting for you. That's many hours of time wasted that could be better spent doing anything else.
"But I hate being unproductive! If only there was some way to compile distinct parts of my codebase individually, and somehow link them together afterwards!" An excellent idea, in theory. But what if your program needs to know what's going on in a different file? It's impossible to completely separate your codebase unless you want to run a bunch of tiny tiny .exe files instead.
"But surely it must be possible! Programming sounds like pure torture otherwise! What if I found some way to separate interface from implementation? Say by taking just enough information from these distinct code segments to identify them to the rest of the program, and putting them in some sort of header file instead? And that way, I can use the #include preprocessor directive to bring in only the information necessary to compile!"
Hmm. You might be on to something there. Let me know how that works out for you.
This is probably a more detailed answer than you wanted, but I think a decent explanation is justified.
In C and C++, each source file, together with everything it #includes, is compiled as one translation unit. By convention, header files hold function declarations, type definitions and class definitions. The actual function implementations reside in the translation units, i.e. the .cpp files.
The idea behind this is that functions and class/struct member functions are compiled and assembled once, then other functions can call that code from one place without making duplicates. Your functions are declared as "extern" implicitly.
/* Function declaration, usually found in headers. */
/* Implicitly 'extern', i.e. the symbol is visible everywhere, not just locally. */
int add(int, int);

/* Function body, or function definition. */
int add(int a, int b)
{
    return a + b;
}
If you want a function to be local to a translation unit, you mark it as 'static'. What does this mean? It means that if you #include source files containing extern functions into several other files, you will get multiple-definition errors, because the same implementation ends up in more than one translation unit and the linker comes across it more than once. So, you want all your translation units to see the function declaration, but not the function body.
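A rough sketch of the 'static' case (square is just a made-up helper, not from the original answer):

/* pasted into several .cpp files -- 'static' gives square() internal linkage,
   so each translation unit keeps its own private copy and the linker never
   sees clashing symbols */
static int square(int x)
{
    return x * x;
}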
So how does it all get mashed together at the end? That is the linker's job. The linker reads all the object files generated by the assembler stage and resolves symbols. A symbol is just a name, for example the name of a variable or a function. When a translation unit calls a function or uses a type whose implementation it does not contain, that symbol is said to be unresolved. The linker resolves it by connecting the translation unit that holds the undefined symbol with the one that contains the implementation. Phew. This is true for all externally visible symbols, whether they are implemented in your code or provided by an additional library. A library is really just an archive of reusable code.
There are two notable exceptions. First, if you have a small function, you can make it inline. This means that the compiler does not emit an external function call; instead the function's machine code is inserted in place at each call site. Since such functions are usually small, the size overhead does not matter. You can imagine them working much like static functions. So it is safe to implement inline functions in headers. Function implementations inside a class or struct definition are also often inlined automatically by the compiler.
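A minimal illustration of the inline case (min_of is a hypothetical example): because the function is declared inline, identical definitions may appear in every translation unit that includes the header, so defining it there is safe.

/* minmax.h -- safe to define in the header because it is inline */
inline int min_of(int a, int b)
{
    return a < b ? a : b;
}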
The other exception is templates. Since the compiler needs to see the whole template definition when instantiating it, it is not possible to decouple the implementation from the definition as with standalone functions or normal classes. The 'export' keyword was supposed to allow this, but it never got widespread compiler support (and was eventually removed from the language). So, without 'export', translation units get their own local copies of instantiated templated types and functions, similar to how inline functions work.
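A sketch of the template case (max_of is a hypothetical example): the full body lives in the header so that every translation unit instantiating the template can see it.

/* maxof.h -- the whole template definition is visible to every includer */
template <typename T>
T max_of(T a, T b)
{
    return a > b ? a : b;
}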
For these two exceptions, some people find it "nicer" to put the implementations of inline functions, templated functions and templated types in .cpp files, and then #include that .cpp file. Whether the included file is called a header or a source file doesn't really matter; the preprocessor does not care, and the naming is just a convention.
A quick summary of the whole process from C++ code (several files) to a final executable:
The preprocessor is run, which processes all the directives that start with a '#'. The #include directive, for example, pastes the included file into the including file. The preprocessor also does macro replacement and token pasting.
The actual compiler runs on the intermediate text file after the preprocessor stage, and emits assembler code.
The assembler runs on the assembly file and emits machine code; the result is usually called an object file and follows the binary executable format of the operating system in question. For example, Windows uses PE (the Portable Executable format), while Linux uses the Unix System V ELF format with GNU extensions. At this stage, symbols are still marked as undefined.
Finally, the linker is run. All the previous stages were run on each translation unit in order, but the linker stage works on all the object files generated by the assembler at once. The linker resolves symbols and does a lot of magic like creating sections and segments, which depends on the target platform and binary format. Programmers aren't required to know this in general, but it surely helps in some cases.
Again, this was definitely more than you asked for, but I hope the nitty-gritty details help you to see the bigger picture.
The typical solution is to use .h files for declarations only and .cpp files for implementation. If you need to reuse the implementation, you include the corresponding .h file into the .cpp file where the class/function/whatever is used, and link against the already compiled .cpp file (either an .obj file, usually used within one project, or a .lib file, usually used for reuse across multiple projects). This way you don't need to recompile everything if only the implementation changes.
Think of cpp files as a black box and the .h files as the guides on how to use those black boxes.
The cpp files can be compiled ahead of time. This doesn't work if you #include them, because then the code has to be pulled into your program and recompiled every time the including file is compiled. If you include just the header, the compiler can use the header to determine how to call into the already compiled cpp file.
Although this won't make much of a difference for your first project, if you start writing large cpp programs, people are going to hate you because compile times are going to explode.
Also have a read of this: Header File Include Patterns
Header files usually contain declarations of functions / classes, while .cpp files contain the actual implementations. At compile time, each .cpp file gets compiled into an object file (usually extension .o), and the linker combines the various object files into the final executable. The linking process is generally much faster than the compilation.
Benefits of this separation: If you are recompiling one of the .cpp files in your project, you don't have to recompile all the others. You just create the new object file for that particular .cpp file. The compiler doesn't have to look at the other .cpp files. However, if you want to call functions in your current .cpp file that were implemented in the other .cpp files, you have to tell the compiler what arguments they take; that is the purpose of including the header files.
Disadvantages: When compiling a given .cpp file, the compiler cannot 'see' what is inside the other .cpp files. So it doesn't know how the functions there are implemented, and as a result cannot optimize as aggressively. But I think you don't need to concern yourself with that just yet (:
The basic idea is that headers are only included and cpp files are only compiled. This becomes more useful once you have many cpp files, and recompiling the whole application when you modify only one of them becomes too slow. Or when the functions in the files start depending on each other. So, you should separate class declarations into your header files, leave the implementation in cpp files, and write a Makefile (or something else, depending on what tools you are using) to compile the cpp files and link the resulting object files into a program.
If you #include a cpp file in several other files in your program, its code gets compiled into each of those files, and the build will fail with multiple-definition errors because the same methods end up implemented in several object files (see the sketch below).
Compilation will also take longer (which becomes a problem on large projects): edits to a #included cpp file force recompilation of every file that #includes it.
Just put your declarations into header files and include those (as they don't actually generate code per se), and the linker will hook up the declarations with the corresponding cpp code (which then only gets compiled once).
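A minimal sketch of that failure mode (util.cpp, a.cpp and b.cpp are made-up file names):

/* util.cpp -- defines a function with external linkage */
int twice(int x) { return 2 * x; }

/* a.cpp */
#include "util.cpp"   /* first copy of the definition of twice() */

/* b.cpp */
#include "util.cpp"   /* second copy of the definition of twice() */

Each file compiles on its own, but linking the two resulting object files fails with a "multiple definition of twice(int)"-style error.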
re-usability, architecture and data encapsulation
here's an example:
Say you create a cpp file which contains a simple set of string routines, all in a class mystring. You place the class declaration for this in mystring.h and compile mystring.cpp to a .obj file.
Now in your main program (e.g. main.cpp) you include the header and link with mystring.obj.
To use mystring in your program you don't care about the details of how mystring is implemented, since the header says what it can do.
Now if a buddy wants to use your mystring class, you give him mystring.h and mystring.obj; he also doesn't necessarily need to know how it works, as long as it works.
Later, if you have more such .obj files, you can combine them into a .lib file and link against that instead.
You can also decide to change mystring.cpp and implement it more efficiently; this will not affect your main.cpp or your buddy's program.
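A minimal sketch of that arrangement (the member functions shown are assumptions for illustration, not a real string library):

/* mystring.h -- the "guide": declarations only */
#ifndef MYSTRING_H
#define MYSTRING_H

#include <cstddef>

class mystring {
public:
    explicit mystring(const char* s);
    std::size_t length() const;
private:
    std::size_t len_;
};

#endif /* MYSTRING_H */

/* mystring.cpp -- the "black box": compiled once into mystring.obj */
#include "mystring.h"
#include <cstring>

mystring::mystring(const char* s) : len_(std::strlen(s)) {}

std::size_t mystring::length() const { return len_; }

/* main.cpp -- includes only the header and links against mystring.obj */
#include "mystring.h"
#include <iostream>

int main()
{
    mystring s("hello");
    std::cout << s.length() << '\n';
    return 0;
}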
While it is certainly possible to do as you did, the standard practice is to put shared declarations into header files (.h), and definitions of functions and variables - implementation - into source files (.cpp).
As a convention, this helps make it clear where everything is, and makes a clear distinction between interface and implementation of your modules. It also means that you never have to check to see if a .cpp file is included in another, before adding something to it that could break if it was defined in several different units.
If it works for you then there is nothing wrong with it -- except that it will ruffle the feathers of people who think that there is only one way to do things.
Many of the answers given here address optimizations for large-scale software projects. These are good things to know about, but there is no point in optimizing a small project as if it were a large project -- that is what is known as "premature optimization". Depending on your development environment, there may be significant extra complexity involved in setting up a build configuration to support multiple source files per program.
If, over time, your project evolves and you find that the build process is taking too long, then you can refactor your code to use multiple source files for faster incremental builds.
Several of the answers discuss separating interface from implementation. However, this is not an inherent feature of include files, and it is quite common to #include "header" files that directly incorporate their implementation (even the C++ Standard Library does this to a significant degree).
The only thing truly "unconventional" about what you have done was naming your included files ".cpp" instead of ".h" or ".hpp".
When you compile and link a program, the compiler first compiles the individual cpp files and then the linker links (connects) them. Headers never get compiled on their own; they are only processed when included in a cpp file.
Typically headers are declarations and cpp are implementation files. In the headers you define an interface for a class or function but you leave out how you actually implement the details. This way you don't have to recompile every cpp file if you make a change in one.
I suggest you go through Large Scale C++ Software Design by John Lakos. In college we usually write small projects where we do not come across such problems. The book highlights the importance of separating interfaces from implementations.
Header files usually hold interfaces, which are not supposed to change very frequently.
Similarly, a look at patterns like the Virtual Constructor idiom will help you grasp the concept further.
I am still learning like you :)
It's like writing a book, you want to print out finished chapters only once
Say you are writing a book. If you put the chapters in separate files then you only need to print out a chapter if you have changed it. Working on one chapter doesn't change any of the others.
But including the cpp files is, from the compiler's point of view, like editing all of the chapters of the book in one file. Then if you change it you have to print all the pages of the entire book in order to get your revised chapter printed. There is no "print selected pages" option in object code generation.
Back to software: I have Linux and Ruby src lying around. A rough measure of lines of code...
                     Linux         Ruby
core functionality   100,000       100,000    (just kernel/*, ruby top-level dir)
everything           10,000,000    200,000
Any one of those four categories has a lot of code, hence the need for modularity. This kind of code base is surprisingly typical of real-world systems.
There are times when unconventional programming techniques are actually quite useful and solve otherwise difficult (if not impossible) problems.
If C source is generated by third-party tools such as lex and yacc, it can obviously be compiled and linked separately, and this is the usual approach.
However, there are times when these sources can cause linkage problems with other, unrelated sources. You have some options if this occurs: rewrite the conflicting components to accommodate the lex and yacc sources, modify the lex and yacc components to accommodate your sources, or '#include' the lex and yacc sources where they are required.
Rewriting the components is fine if the changes are small and the components are understood to begin with (i.e. you are not porting someone else's code).
Modifying the lex and yacc sources is fine as long as the build process doesn't keep regenerating them from the lex and yacc scripts.
You can always revert to one of the other two methods if you feel it is required.
Adding a single #include and modifying the makefile so it no longer builds the lex/yacc components separately is an attractive, fast way around all of these problems, and it gives you the opportunity to prove that the code works at all without spending time rewriting it and questioning whether it would ever have worked in the first place, given that it isn't working now.
When two C files are included together they are basically one file and there are no external references required to be resolved at link time!
Possible Duplicate:
where should “include” be put in C++
Obviously, there are two "schools of thought" as to whether to put #include directives into C++ header files (or, as an alternative, put #include only into cpp files). Some people say it's OK, others say it only causes problems. Does anybody know whether this discussion has reached a conclusion about what is to be preferred?
I am not aware of any schools of thought concerning this. Put them in the header when they are needed there; otherwise forward declare and put them in the .cpp files that require them. There is no benefit in including headers where they are not needed.
What I found effective is following a few simple rules:
Headers shall be self-sufficient, i.e., they shall declare classes they need names for and include headers for any definition they use.
Headers should minimize dependencies as much as possible without violating the previous point.
Getting the first point right is fairly easy: include the header first thing from the source file implementing what it declares. Getting the second point exactly right isn't trivial, though, and I think it requires tool support to get it exactly right. However, a few unnecessary dependencies generally aren't that bad.
As a rule of thumb, you don't include other headers in a header unless a full definition from them is needed there. Most of the time you only deal with pointers (or references) to classes in a header file, so it's just fine to forward declare them there.
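For example (Widget and Renderer are hypothetical names), a header that only stores or passes a pointer can get by with a forward declaration instead of another #include:

/* widget.h -- sketch: a forward declaration replaces #include "renderer.h" */
#ifndef WIDGET_H
#define WIDGET_H

class Renderer;                  /* forward declaration; full definition not needed here */

class Widget {
public:
    void draw(Renderer* r);      /* only a pointer is used, so the declaration suffices */
private:
    Renderer* renderer_;
};

#endif /* WIDGET_H */

widget.cpp, which actually calls into Renderer, would then #include "renderer.h" itself.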
I think the issue was settled a long time ago: headers should be self-contained (that is, they should not depend on the user having included other headers before; this aspect has been settled for so long that some aren't even aware there was ever a debate on it, but your "put includes only in .cpp" seems to hint at it) but minimal (i.e. they should not include definitions when a declaration would be enough for self-containment).
The reason for self-containment is maintenance: should a header be modified and now depend on something new, you'd have to track down all the places it is used in order to add the new dependency. BTW, the standard trick to ensure self-containment is to include the header providing the declarations for things defined in a .cpp first in that .cpp.
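A tiny sketch of that trick (foo.h and foo.cpp are placeholder names): if foo.h were missing an include it needs, the very first line of foo.cpp would fail to compile, making the problem obvious.

/* foo.cpp */
#include "foo.h"      /* included first, so foo.h must compile on its own */

#include <string>     /* whatever else this .cpp itself needs */

/* ... definitions of what foo.h declares ... */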
These are not schools of thought so much as religions. In reality, both approaches have their advantages and disadvantages, and there are certain practices to be followed for either approach to be successful. But only one of these approaches will "scale" to large projects.
The advantage of not including headers inside headers is faster compilation. However, this advantage does not come from headers being read only once, because even if you include headers inside headers, smart compilers can work that out. The speed advantage comes from the fact that you include only those headers which are strictly necessary for a given source file. Another advantage is that if we look at a source file, we can see exactly what its dependencies are: the flat list of header files gives that to us plainly.
However, this practice is hard to maintain, especially in large projects with many programmers. It's quite an inconvenience when you want to use module foo, but you cannot just #include "foo.h": you need to include 35 other headers.
What ends up happening is this: programmers are not going to waste their time discovering the exact, minimal set of headers that they need just to add module foo. To save time, they will go to some example source file similar to the one they are working on, and cut and paste all of the #include directives. Then they will try compiling it, and if it doesn't build, then they will cut and paste more #include directives from yet elsewhere, and repeat that until it works.
The net result is that, little by little, you lose the advantage of faster compiling, because your files are now including unnecessary headers. Moreover, the list of #include directives no longer shows the true dependencies. Moreover, when you do incremental compiles now, you compile more than is necessary due to these false dependencies.
Once every source file includes nearly every header, you might as well have a big everything.h which includes all the headers, and then #include "everything.h" in every source file.
So this practice of including just specific headers is best left to small projects that are carefully maintained by a handful of developers who have plenty of time to maintain the ethic of minimal include dependencies by hand, or write tools to hunt down unnecessary #include directives.
Looking around at different code bases I see a variety of styles:
Class "interfaces" defined in header file and the actual impl in a cpp file. In this approach the headers look well defined and easy to read but the cpp files look confusing as it's just a list of methods.
The second approach I see is just to put everything in a single class cpp file. These class files contain the definition and the actual method implementations in the body of the class definition. This approach looks better to me (more like Java and C#).
Which style should I be using?
For all but the simplest programs, style #2 is simply impossible. If you #include a .cpp file with function definitions from multiple other .cpp files, the definitions get added to multiple object files (.o / .obj) and the linker will complain about clashing symbols.
Use style #1 and learn to live with the confusion.
The former - interfaces in header files and class bodies in implementation files. You'll find this causes you fewer problems when working on large systems.
In C++ why have header files and cpp files?
C++ doesn't have "interfaces"; it uses classes, i.e. base and derived classes. I use one file to define a class and its implementation methods if the project is small, and separate files if the project is large.
In Java, I pack them up into one package and then import it once when needed.
Since you tagged this c++, go for the first style. I don't find it confusing; it may seem different to a Java programmer, but in C++ you are always going to use this approach.
In fact in my favorite IDE (MSVS), I open header file, and cpp file side by side. Makes looking up prototypes, and class declaration easy.
And when you have a dozen classes, having a dozen .h files and another dozen .cpp files will make your work simpler. Because, when you just want to see what a class does, you open the relevant .h file and look at the class members and maybe some short comments. You don't need to wade through many lines of code.
Conclusion: the style options you gave are an option only for small code, typically a single file with very few methods, etc. Otherwise, it is not even an option. (@Thomas has given the reason why #2 is not even an option.)
Header (HPP):
The header includes the declarations of your code, particularly function declarations. Technically speaking classes are defined in header-files, but again, the member functions are just declared.
Code in other files will include just this header and obtain all necessary information from there.
Implementation (CPP):
The implementation includes the definition of functions, member-functions and variables.
Rationale:
Header-files give a developer (an external user of your code) a plain overview and offer just the externally available code (i.e. easy to read, only the information necessary for users).
Header-files allow the compiler to check the implementation for correctness
Header-files allow the compiler to check external code for correctness
Header-files allow for separate compilation. Keep in mind that in former times computers didn't have enough resources to keep everything in main memory during a compilation process. Header files are small, while implementation files are big.
Use style #1, even for simple programs, so you can easily learn to work with it. That may look outdated today, especially against the background of modern multi-pass compilers, but separate header files are beneficial even today. Rumours about the next C++ standard have appeared; as far as I know something like symbol export (as in Java or C#) will be possible. But don't nail me down on this!
Notes:
- member functions which are defined inside a class are inline by default; normally you don't want this
- always use include guards (a sketch illustrating both notes follows)
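A small hypothetical header (Counter is a made-up class) illustrating both notes:

/* counter.hpp */
#ifndef COUNTER_HPP
#define COUNTER_HPP

class Counter {
public:
    int value() const { return n_; }   /* defined in-class, therefore implicitly inline */
    void bump();                       /* declared only; defined in counter.cpp */
private:
    int n_ = 0;
};

#endif /* COUNTER_HPP */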
If you are developing large project, you'll find the first approach helps you a lot. The second approach may help you in small project. As your project becomes larger, management of complexity is a big issue of software development, and the first approach turns out to be a better choice.
What I do is:
write .cpp files, with the method names prefixed with the class name
in the .h file, create an empty class, with the appropriate name, then use a cogapp generator script, cog_addheaders.py, to insert the declarations, eg:
.cpp file: WeightsPersister.cpp
.h file: WeightsPersister.h
This way I get:
fast compilation (just needs to recompile the .cpp file, unless I change the class interface)
few issues with circular declarations
an acceptably low amount of tedious, mindless manual work :-)
I've read everything I could find on this topic, including a couple of very helpful discussions on this site, the NASA coding guidelines and Google C++ guidelines. I even bought the "physical C++ design" book recommended on here (sorry, forgot the name) and got some useful ideas from that. Most sources seem to agree - header files should be self-contained, i.e. they include what they need so that a cpp file could include the header without including any others and it would compile. I also get the point about forward declaring rather than including whenever possible.
That said, how about if foo.cpp includes bar.h and qux.h, but it turns out that bar.h itself includes qux.h? Should foo.cpp then avoid including qux.h? Pro: cleans up foo.cpp (less "noise"). Con: if someone changes bar.h to no longer include qux.h, foo.cpp mysteriously starts failing to compile. Also causes the dependency between foo.cpp and qux.h not to be obvious.
If your answer is "a cpp file should #include every header it needs", taken to its logical conclusion that would mean that pretty much every cpp file has to #include <string>, <cstddef> etc. since most code will end up using those, and if you're not supposed to rely on some other header including them, your cpp needs to include them explicitly. That seems like a lot of "noise" in the cpp files.
Thoughts?
Previous discussions:
What are some techniques for limiting compilation dependencies in C++ projects?
Your preferred C/C++ header policy for big projects?
How do I automate finding unused #include directives?
ETA: Inspired by previous discussions on here, I've written a Perl script to successively comment out each 'include' and 'using', then attempt to recompile the source file, to figure out what's not needed. I've also figured out how to integrate it with VS 2005, so you can double-click to go to "unused" includes. If anyone wants it let me know...very much experimental right now though.
If your answer is "a cpp file should #include every header it needs", taken to its logical conclusion that would mean that pretty much every cpp file has to #include <string>, <cstddef> etc. since most code will end up using those, and if you're not supposed to rely on some other header including them, your cpp needs to include them explicitly.
Yup. That's the way I prefer it.
If the 'noise' is too much to bear, it's OK to have a 'global' include file that contains the usual, common set of includes (like stdafx.h in a lot of Windows programs), and include that at the start of each .cpp file (that helps with precompiled headers, too).
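A sketch of such a "global" include (common.h is a hypothetical name, in the spirit of stdafx.h):

/* common.h -- the usual, common set of includes, pulled in by every .cpp */
#ifndef COMMON_H
#define COMMON_H

#include <cstddef>
#include <string>
#include <vector>

#endif /* COMMON_H */

Each .cpp file then starts with #include "common.h", which also plays nicely with precompiled headers.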
I think that you should still include both files. This helps with maintaining the code.
// Foo.cpp
#include <Library1>
#include <Library2>
I can read that and easily see which libraries it uses. If Library2 used Library1, and it was transformed to this:
// Foo.cpp
#include <Library2>
But I still saw Library1 code, I might be slightly confused. It's not hard to guess that some other library must be including it, but it's still a thought process that has to occur.
Being explicit means I don't have to guess, even for an extra micro second of compilation.
I think you should include both until it becomes unbearable. Then refactor your classes and source files to reduce coupling, on grounds that if just listing all your dependencies is onerous, then you probably have too many dependencies...
A compromise might be to say that if something in bar.h contains a class definition (or function declaration) which necessarily requires another class definition from qux.h, then bar.h can be assumed/required to include qux.h, and the user of the API in bar.h needn't bother including both.
So, suppose the client wants to think of itself as "really" only being interested in the bar API, but has to use qux too, because it calls a bar function that takes or returns a qux object by value. Then it can just forget that qux has its own header and imagine that it's one big bar API defined in bar.h, with qux just being a part of that API.
This means you can't count on, for example, searching cpp files for mentions of qux.h to find all clients of qux. But I'd never rely on that anyway, since in general it's too easy to accidentally miss explicitly including a dependency that's already indirectly included, and never realise you haven't listed all the relevant headers. So you probably shouldn't in any case assume header lists are complete, but instead use doxygen (or gcc -M, or whatever) to get a complete list of dependencies, and search that.
how about if foo.cpp includes bar.h and qux.h, but it turns out that bar.h itself includes qux.h? Should foo.cpp then avoid including qux.h?
If foo.cpp uses anything from qux.h directly, then it should include that header itself. Otherwise, since bar.h needs qux.h, I would rely on bar.h including everything it needs.
Each header file provides definitions needed to use a service of some type (a class definition, some macros, whatever). If you are directly using the services exposed through a header file, include that header file even if another header file also includes it. On the other hand, if you are using those services only indirectly, only in the context of the other header file's services, don't include the header file directly.
In my opinion this isn't a big deal either way. The second inclusion of a header file doesn't get fully parsed; it's basically commented out for all but the first time it's included. As things change and you find out you were using a header file that you weren't directly including, it's not a big deal to add it.
One way to think about whether foo.cpp directly uses qux.h is to think about what would happen if foo.cpp no longer needed to include bar.h. If foo.cpp would still need to include qux.h, then qux.h should be listed explicitly. If it would no longer need anything from qux.h, it is probably not necessary to include it explicitly.
The "never optimize without profiling" rule applies here, I think.
(This answer is biased toward "performance" over "clutter"; the clutter can generally be cleaned up at will, admittedly it gets harder later, but the performance impacts you on a regular basis.)
Try it both ways, see if there's any significant speed improvement for your compiler, and only then consider removing "duplicate" headers -- because as other answers point out, there's a long-term maintainability penalty you take by removing the duplicates.
Consider getting something like Sysinternals FileMon to see if you're actually generating filesystem hits at all for those duplicate includes (most compilers won't, with correct header guards).
Our experience here indicates that actively seeking out headers that are completely unused (or could be, with proper forward declarations) and removing them is far more worthy of time and effort than figuring out the include chains for possible duplicates. And a good lint (splint, PC-Lint, etc) can help you with this determination.
Even more worthy of our time and effort was figuring out how to get our compiler to handle multiple compilation units per execution (almost a linear speedup for us, up to a point -- the compilation was utterly dominated by compiler startup).
After that, you may want to consider the "single big CPP" madness. It can be quite impressive.