Hide class implementation from its interface - C++

I have this code for the interface of a class in a header file and its implementation in a separate source file:
First: the header file "Gradebook.h"
#include <string>
using namespace std;
class Gradebook {
public:
    Gradebook(string);
    void setCoursename(string);
    string getCoursename();
    void displayMessage();
private:
    string nameofCourse;
};
Second: the implementation "Gradebook.cpp"
#include "stdafx.h"
#include "Gradebook.h"
#include <iostream>
using namespace std;
Gradebook::Gradebook(string name) {
    setCoursename(name);
}

void Gradebook::setCoursename(string name) {
    nameofCourse = name;
}

string Gradebook::getCoursename() {
    return nameofCourse;
}

void Gradebook::displayMessage() {
    cout << "Display message function shows :" << getCoursename() << endl;
}
How can I link these two separate files so that other projects can use only "Gradebook.h", hiding my implementation from the client-programmer?

There are several answers. It would help to know why you actually want to hide the implementation. Here are the reasons I can think of offhand for why you might want to do this.
Protecting trade secrets: Forget it. To be able to execute your code, the computer has to be able to run it. Effectively you can strip away comments, method names and variable names by compiling the code as a static library, even run an obfuscator over it to obscure the control flow (at the cost of slowing it down by adding unneeded jumps), but in the end the code (or the machine code generated from it) has to stay sort of readable or it couldn't be executed.
Make it easier to use your code: If you have several source files and might add more files, and you want your clients to just be able to drop in one file to get all the newest changes without having to add individual source files, compile it as a static or dynamic library. Then you can hand someone the library plus the headers and they can just use them.
You can also create an "umbrella header" that includes all the other headers. That way, clients can simply add the path to your library's includes folder to their compiler invocation/project file and include the one header, which includes all others. If you add or split up a header, you just change the umbrella to include the new headers and all projects that use it keep working.
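For instance, a hypothetical umbrella header for the Gradebook library above might look like this (Student.h and Report.h are made-up names standing in for whatever other headers the library grows):

// GradebookAll.h -- umbrella header; clients include only this file
#ifndef GRADEBOOK_ALL_H
#define GRADEBOOK_ALL_H

#include "Gradebook.h"
#include "Student.h" // hypothetical additional headers
#include "Report.h"

#endif /* GRADEBOOK_ALL_H */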
Note that using a library will limit your clients: They can't step through your code in the debugger easily, they can't fix and compile stuff easily. If they need your code to run on a new platform, or want to use different optimization settings in the compiler, they can't just recompile it.
On the other hand if you plan to sell your library, you might want to hold on to your sources. Customers who don't care about the security of having the code if you ever go out of business can get a cheaper version of the library without the source code, you can charge them extra if they want any of the other features by making them buy a version for the new platforms that you coded for them, etc.
Prevent clients from accidentally relying on implementation details: You don't really need to do anything for this except split up your code into public and private files. Usually your implementation file is private and your headers are public, but you may have some private headers for internal classes.
Since C++, unlike languages that support categories or class extensions, does not allow defining instance variables or methods that aren't declared in the header's class declaration, you may have to resort to the Private Implementation (aka 'pimpl') pattern.
What this usually means is that you declare one class that defines the public API that simply wraps a pointer to the actual class that contains the real implementation and calls through to it. It usually only has one instance variable, pimpl, which is the pointer to the other class. You simply forward-declare the private class using class Foo; and thus your clients' code doesn't know anything about the private class (unless they explicitly peek into the implementation file or private header, e.g. when fixing a bug).
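A minimal sketch of the pattern (Foo and FooImpl are made-up names; the real class body stands in for whatever you want to hide):

// Foo.h -- all the client ever sees
class FooImpl; // forward declaration; the definition stays private

class Foo {
public:
    Foo();
    ~Foo();
    void doWork();
private:
    FooImpl* pimpl; // pointer to the hidden implementation
};

// Foo.cpp -- shipped only in compiled form
#include "Foo.h"

class FooImpl {
public:
    void doWork() { /* the real implementation lives here */ }
};

Foo::Foo() : pimpl(new FooImpl) {}
Foo::~Foo() { delete pimpl; }
void Foo::doWork() { pimpl->doWork(); }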
Create a single-file class: I mention this last because it is generally a stupid thing to do, but just in theory, you could also move the implementation file's content into the header. Then clients only need to include the header and get all the sources. This has lots of downsides, though, like making code harder to read, slowing down compile times, and producing duplicate-symbol link errors once the header's non-inline functions are included from several .cpp files. In short: Don't do it.

Related

If the headers of an interface change, why do clients of the interface need to be recompiled?

As I read the problem statement of Item 31, "Minimize compilation dependencies between files", of Effective C++, the following statement puzzles me:
class Person {
public:
    Person(const std::string& name, const Date& birthday,
           const Address& addr);
    std::string name() const;
    std::string birthDate() const;
    std::string address() const;
    ...
private:
    std::string theName;    // implementation detail
    Date theBirthDate;      // implementation detail
    Address theAddress;     // implementation detail
};
in the file defining the Person class, you are likely to find something like this:
#include <string>
#include "date.h"
#include "address.h"
Unfortunately, this sets up a compilation dependency between the file defining Person and these header files. If any of these header files (comment mine: the headers listed above, namely <string>, "date.h", "address.h") is changed, or if any of the header files they depend on changes, the file containing the Person class must be recompiled, as must any files that use Person.
What I don't quite understand is the last part highlighted. Why do clients that use Person need recompilation? They just need to relink to the newly compiled Person object code, right (I am assuming the Person interface remains the same to its clients)?
If what clients really need - assuming the Person interface doesn't change - is just a relink, does that still warrant the Pimpl idiom? The Pimpl class still needs recompilation if any of the headers changes. The idiom only saves the client a relink.
EDIT: It seems that there is a lot of confusion about what headers have changed. In this case, Scott Meyers was talking about the header files included by Person.h being changed. But Person.h itself does not change, so clients using (#including) Person.h don't see a change (no timestamp change on Person.h). The makefile dependency would list Person.o as a prerequisite, so the client would simply link with the new Person.o. I am learning the Pimpl idiom; maybe I missed some obvious points in everyone's argument. Please elucidate.
EDIT2: When a client needs to use Person, it includes Person.h, which also includes all the other headers such as date.h and address.h. I missed this part and thought only Person.cpp needed to deal with these headers.
There is an intermediate step in compiling: if you compile foo.cpp and it includes a.h and b.h, then an intermediate source file of the form
a.h content
b.h content
foo.cpp content
is created, and that intermediate file is the actual input to compilation. Note that headers included by other headers are expanded recursively as well.
Since a change in a header changes this intermediate file, foo.cpp has to be recompiled.
Yes, but that re-linkage will fail if the datatype sizes are wrong, or the old code is trying to link to code that no longer exists. It's not magic: the code, at link-time, has still been compiled.
There is a subset of interface changes you can make without breaking binary compatibility; adding members to a type is not part of that subset.
(I am assuming the Person interface remains the same to its clients)
This is the key. Your assumption has removed the constraints, so the answer to "why do the other files need to be recompiled" becomes "they don't".
Obviously, the quote in its original context does not mention that assumption, which is why it is giving broader guidelines. Though, personally, I'd have liked to have seen a more in-depth explanation from Meyers of binary compatibility.
In a very practical sense: suppose that person.h includes other files, or defines some preprocessor symbols. If you change its includes or change its preprocessor symbols, then any file that also includes person.h potentially has its meaning changed.
In practice the compiler will fully recompile any compilation units affected by the change, if I understand correctly. And even if there are some optimizations to avoid doing lots of work when only "minor" or "insignificant" changes take place, like adding whitespace or something, the compiler needs to at least look at any compilation unit whose text was potentially changed in order to be sure.
Generally speaking, most tool-chains don't cache the intermediate results of each compilation unit after preprocessor expansion, and even if you are using something like ccache it's not going to try to do anything intelligent with the cached stuff to avoid doing work when only small changes happen, it's only going to try to check if it's stale or not.
So, changing things in a header file that may seem even smaller than changing the layout or interface of a class still generally needs to trigger recompilation. What if some of the compilation units contain queries like sizeof on your class? Or use SFINAE tricks to detect whether it has certain methods?
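As a contrived sketch of that last point, a client can bake the class's size into its own object code without ever touching a data member, so a relink alone would leave it stale:

#include "Person.h"

// sizeof(Person) is evaluated at compile time, in the client's own
// translation unit. If a change to Person.h (or date.h, address.h...)
// alters the layout, this buffer size is wrong until the client itself
// is recompiled, not merely relinked.
char storage[sizeof(Person)];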
The core information in a header file describes interfaces.
An interface to a function describes its arguments (how many, what type, etc) and return type. The actual function implementation (definition) requires the function be called in an expected way - and the interface describes that. If code that is calling the function provides a different set of arguments, or if it acts as if the function returns something different than it actually does, then there will be a malfunction somewhere (either in the function, since it is not given information it expects, or in the caller, since the function doesn't give information the caller expects).
This means, if the interface to a function changes, then both code for the called function and for the callers need to be recompiled, in order to ensure consistency.
The same goes for type definitions. struct and class types may include member functions, and the compiler needs to ensure consistency between behaviour of those functions and their callers (or the programmer has to deal with inconsistency, which may manifest in tricky ways). Also, when creating an instance of a type (i.e. an object or a variable) the compiler needs to know the size of the type (how much memory it needs, how far the second element of an array is from the first, etc) in order to work with objects correctly.
All of this information is specified in the interface, which is typically placed in headers. Yes, the compiler might get away with making assumptions if it is not given information (e.g. in C, a function is assumed to return int and accept an arbitrary set of arguments if it is called without being previously declared) but there is still the problem of mismatches (e.g. if the function is assumed to return int, but actually returns a pointer of some type, what happens?).
More prosaically, build management processes (makefiles, build scripts, etc) typically check creation dates of files. For example, a source file may be recompiled if the corresponding object is older than that source file, or older than any of the header files that source file #includes. The logic of doing that is that the content of source file and its included headers affect how code in the compiled object behaves and, if the object file is older than one of those files, then there may well have been a change. The only way to make things line up is to recompile.
It would be possible to only recompile if a "substantive" change of file content has occurred (e.g. not recompile if only a comment has been changed in a header). However, doing that would mean it is necessary to reliably detect that the change in a file actually doesn't matter to the working of the program. The analysis to do that is certainly possible, but will often be more complicated - and time consuming, which is a problem as programmers tend to whine about long build times - than simply checking file dates.

Are there techniques to greatly improve C++ building time for 3D applications?

There are many slim laptops that are cheap and great to use. Programming has the advantage of being doable in any place where there is silence and comfort, since concentrating for long hours is an important factor in doing effective work.
I'm kinda old fashioned, as I like my statically compiled C or C++, and those languages can take pretty long to compile on those power-constrained laptops, especially C++11 and C++14.
I like to do 3D programming, and the libraries I use can be large and won't be forgiving: bullet physics, Ogre3D, SFML, not to mention the power hunger of modern IDEs.
There are several solutions to make building just faster:
Solution A: Don't use those large libraries, and come up with something lighter on your own to relieve the compiler. Write appropriate makefiles, don't use an IDE.
Solution B: Set up a build server elsewhere, have a makefile set up on a beefy machine, and automatically download the resulting exe. I don't think this is a casual solution, as you have to target your laptop's CPU.
Solution C: use the (still unofficial) C++ modules
???
Any other suggestions?
Compilation speed is something that can really be boosted, if you know how. It is always wise to think carefully about a project's design (especially in the case of large projects consisting of multiple modules) and modify it so that the compiler can produce output efficiently.
1. Precompiled headers.
A precompiled header is a normal header (.h file) that contains the most common declarations, typedefs and includes. During compilation, it is parsed only once - before any other source is compiled. During this process, the compiler generates data in some internal (most likely binary) format, then uses this data to speed up code generation.
This is a sample:
#pragma once
#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__

// Include common headers
#include "BaseConfig.h"
#include "Atomic.h"
#include "Limits.h"
#include "DebugDefs.h"
#include "CommonApi.h"
#include "Algorithms.h"
#include "HashCode.h"
#include "MemoryOverride.h"
#include "Result.h"
#include "ThreadBase.h"
// Others...

namespace Asx
{
    // Forward declare common types
    class String;
    class UnicodeString;

    // Declare global constants
    enum : Enum
    {
        ID_Auto = Limits<Enum>::Max_Value,
        ID_None = 0
    };

    enum : Size_t
    {
        Max_Size         = Limits<Size_t>::Max_Value,
        Invalid_Position = Limits<Size_t>::Max_Value
    };

    enum : Uint
    {
        Timeout_Infinite = Limits<Uint>::Max_Value
    };

    // Other things...
}

#endif /* __Asx_Core_Prerequisites_H__ */
In a project, when a PCH is used, every source file usually contains an #include of this file (I don't know about other compilers, but in VC++ this is actually a requirement: every source file attached to a project configured for PCH must start with #include "PrecompiledHeaderName.h"). Configuration of precompiled headers is very platform-dependent and beyond the scope of this answer.
Note one important matter: things that are defined/included in the PCH should be changed only when absolutely necessary - every change can cause recompilation of the whole project (and other dependent modules)!
More about PCH:
Wiki
GCC Doc
Microsoft Doc
2. Forward declarations.
When you don't need the whole class definition, forward declare it to remove unnecessary dependencies in your code. This also implies extensive use of pointers and references when possible. Example:
#include "BigDataType.h"
class Sample
{
protected:
BigDataType _data;
};
Do you really need to store _data as a value? Why not this way:
class BigDataType; // That's enough, #include not required

class Sample
{
protected:
    BigDataType* _data; // So much better now
};
This is especially profitable for large types.
3. Do not overuse templates.
Meta-programming is a very powerful tool in a developer's toolbox. But don't use templates when they are not necessary.
They are great for things like traits, compile-time evaluation, static reflection and so on. But they introduce a lot of troubles:
Error messages - if you have ever seen errors caused by improper usage of std:: iterators or containers (especially the complex ones, like std::unordered_map), then you know what this is all about.
Readability - complex templates can be very hard to read/modify/maintain.
Quirks - many of the techniques that templates are used for are not so well known, so maintenance of such code can be even harder.
Compile time - the most important for us now:
Remember, if you define a function as:
template <class Tx, class Ty>
void sample(const Tx& xv, const Ty& yv)
{
    // body
}
it will be compiled for each distinct combination of Tx and Ty. If such a function is used often (and with many such combinations), it can really slow down the compilation process. Now imagine what will happen if you start to overuse templating for whole classes...
4. Using PIMPL idiom.
This is a very useful technique that allows us to:
hide implementation details
speed up code generation
update easily, without breaking client code
How does it work? Consider a class that contains a lot of data (for example, one representing a person). It could look like this:
class Person
{
protected:
    string name;
    string surname;
    Date birth_date;
    Date registration_date;
    string email_address;
    // and so on...
};
Our application evolves and we need to extend/change the Person definition. We add some new fields, remove others... and everything crashes: the size of Person changes, the names of fields change... cataclysm. In particular, every piece of client code that depends on Person's definition needs to be changed/updated/fixed. Not good.
But we can do it the smart way - hide the details of Person:
class Person
{
protected:
    class Details;
    Details* details;
};
Now we gain a few nice things:
clients cannot write code that depends on how Person is defined
no recompilation is needed as long as we don't modify the public interface used by client code
we reduce the compilation time, because the definitions of string and Date no longer need to be present (in the previous version, we had to include the appropriate headers for these types, which added additional dependencies).
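The hidden side then lives entirely in the implementation file. A sketch, assuming the field names from the first version and a "Date.h" header for the Date type:

// Person.cpp -- clients never see this file
#include "Person.h"
#include "Date.h"   // the heavy includes move here, out of the header
#include <string>
using std::string;

class Person::Details
{
public:
    string name;
    string surname;
    Date   birth_date;
    Date   registration_date;
    string email_address;
};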
5. #pragma once directive.
Although it may give no speed boost, it is clearer and less error-prone. It is basically the same thing as using include guards:
#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__
//Content
#endif /* __Asx_Core_Prerequisites_H__ */
It prevents multiple parses of the same file. Although #pragma once is not standard (in fact, no pragma is - pragmas are reserved for compiler-specific directives), it is quite widely supported (examples: VC++, GCC, CLang, ICC) and can be used without worrying - compilers should ignore unknown pragmas (more or less silently).
6. Unnecessary dependencies elimination.
Very important point! When code is being refactored, dependencies often change. For example, if you decide to do some optimizations and use pointers/references instead of values (see points 2 and 4 of this answer), some includes can become unnecessary. Consider:
#include "Time.h"
#include "Day.h"
#include "Month.h"
#include "Timezone.h"
class Date
{
protected:
Time time;
Day day;
Month month;
Uint16 year;
Timezone tz;
//...
};
This class has been changed to hide implementation details:
// These are no longer required!
//#include "Time.h"
//#include "Day.h"
//#include "Month.h"
//#include "Timezone.h"

class Date
{
protected:
    class Details;
    Details* details;
    //...
};
It is good to track down such redundant includes, whether by brain power, with built-in tools (like the VS Dependency Visualizer), or with external utilities (for example, GraphViz).
Visual Studio also has a very nice option: if you right-click on any file, you will see 'Generate Graph of Include Files'; it generates a nice, readable graph that can be easily analyzed and used to track down unnecessary dependencies.
(The sample graph generated for my String.h file is omitted here.)
As Mr. Yellow indicated in a comment, one of the best ways to improve compile times is to pay careful attention to your use of header files. In particular:
Use precompiled headers for any header that you don't expect to change, including operating system headers, third-party library headers, etc.
Reduce the number of headers included from other headers to the minimum necessary.
Determine whether an include is needed in the header or whether it can be moved to the cpp file. This sometimes causes a ripple effect, because someone else was depending on you to include the header for them, but it is better in the long term to move the include to the place where it's actually needed.
Using forward declared classes, etc. can often eliminate the need to include the header in which that class is declared. Of course, you still need to include the header in the cpp file, but that only happens once, as opposed to happening every time the corresponding header file is included.
Use #pragma once (if it is supported by your compiler) rather than include guard symbols. This means the compiler does not even need to open the header file to discover the include guard. (Of course many modern compilers figure that out for you anyway.)
Once you have your header files under control, check your make files to be sure you no longer have unnecessary dependencies. The goal is to rebuild everything you need to, but no more. Sometimes people err on the side of building too much because that is safer than building too little.
If you've tried all of the above, there's a commercial product that does wonders, assuming you have some available PCs on your LAN. We used to use it at a previous job. It's called Incredibuild (www.incredibuild.com) and it shrunk our build time from over an hour (C++) to about 10 minutes. From their website:
IncrediBuild accelerates build time through efficient parallel computing. By harnessing idle CPU resources on the network, IncrediBuild transforms a network of PCs and servers into a private computing cloud that can best be described as a “virtual supercomputer.” Processes are distributed to remote CPU resources for parallel processing, dramatically shortening build time by up to 90% or more.
Another point that's not mentioned in the other answers: Templates. Templates can be a nice tool, but they have fundamental drawbacks:
The template, and all the templates it depends upon, must be included. Forward declarations don't work.
Template code is frequently compiled several times. In how many .cpp files do you use an std::vector<>? That is how many times your compiler will need to compile it!
(I'm not advocating against the use of std::vector<>, on the contrary you should use it frequently; it's simply an example of a really frequently used template here.)
When you change the implementation of a template, you must recompile everything that uses that template.
With template heavy code, you often have relatively few compilation units, but each of them is huge. Of course, you can go all-template and have only a single .cpp file that pulls in everything. This would avoid multiple compiling of template code, however it renders make useless: any compilation will take as long as a compilation after a clean.
I would recommend going the opposite direction: Avoid template-heavy or template-only libraries, and avoid creating complex templates. The more interdependent your templates become, the more repeated compilation is done, and the more .cpp files need to be rebuilt when you change a template. Ideally any template you have should not make use of any other template (unless that other template is std::vector<>, of course...).
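One complementary mitigation (my addition, not from the answers above) is explicit instantiation: compile a heavily used specialization once in a single .cpp, and tell every other translation unit not to instantiate it. A sketch with made-up names:

// widget_list.h
#include <vector>

struct Widget { int id; };

// C++11: instantiation declaration -- includers must not
// instantiate std::vector<Widget> themselves
extern template class std::vector<Widget>;

// widget_list.cpp -- the one place the instantiation is actually compiled
#include "widget_list.h"

template class std::vector<Widget>;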

A basic understanding of C++ header files

I have a theory question rather than an error report.
I'm a rookie C++ programmer, trying to promote myself out of that category.
Using the VC++ VS2008 compiler
I am often finding myself wondering WHY I want to take some actions in header files.
For example look at this code block:
#include "DrawScene.h"
#include "Camera.h"
#include "Player.h"
#include "Grid.h"
#include "InputHandler.h"
#include "GameState.h"
class Controller
{
public:
private:
public:
    Controller();
    ~Controller() {}
    void Update();
private:
};
And the hooked-up .cpp file, Controller.cpp, along with it:
#include "stdafx.h"
#include "glut.h"
#include "Controller.h"
#include <iostream>
Grid* grid_ptr = new Grid();
InputHandler* inputHandler_ptr = new InputHandler();
DrawScene* drawScene_ptr = new DrawScene();
GameState* gameState_ptr = new GameState();
Controller::Controller()
{
}
void Controller::Update()
{
}
What is a good way to decide which includes go where? So far I've been going with the "whatever works" method, but I find it somewhat unprofessional.
Now, even though you can say my code has X syntax errors and design flaws (please do), I would appreciate it if the focal point of the discussion remained on the use of .h vs .cpp files.
Why is there even such a design? And what are the pits and traps to always be treading lightly around when making any kind of OOP based C++ program?
What sparked this question for me, by the way, was that I wanted to signal to the reader of the header file that objects will be stored in the controller, but declaring these not-yet-initialized objects seemed impossible without making them static.
Note: I stem from C# --> C++ , might help to know. That's kind of my perspective on code at this point.
Thank you in advance for your efforts!
EDIT: 26/08/2010 18:16
So the build time is the essence of good includes. Is there more to be cautious about?
This is my personal opinion rather than a consensus best practice, but what I recommend is: in a header file, only include those headers that are necessary to make the contents of the header file compile without syntax errors. And if you can use forward declarations rather than nested inclusions, do so.
The reason for this is, in C++ (unlike C#, iiuc) there's no export control. Everything you include from a header will be visible to the includers of the header, which could be a whole lot of junk that users of your interface should not see.
Generally speaking, #includes should go in the .cpp file. Standard library includes (and possibly 3rd-party library includes) can be put in headers. However, headers defined specifically for your project should be included from .cpp files whenever possible.
The reason for this is compilation time and dependency issues. Any time you change a header file, the compiler will have to recompile every source file that includes it. When you include a header file in another header file, then the compiler has to recompile every cpp file that includes either header file.
This is why forward declaration and the PIMPL (Pointer to IMPLementation, or opaque pointer) pattern are popular. It allows you to shift at least some of the changes/implementation out of the header file. For example:
// header file:
class SomeType;

class AnotherType
{
private:
    SomeType* m_pimpl;
};
doesn't require you to include "sometype.h" whereas:
// header file
class AnotherType
{
private:
    SomeType m_impl;
};
does. EDIT: Actually, you don't need to include "sometype.h" in the "anothertype.h" if you ALWAYS include "sometype.h" before "anothertype.h" in EVERY cpp file that includes "anothertype.h".
Sometimes it can be difficult to move a header file to the cpp file. At that point you have a decision - is it better to go through the effort of abstracting the code so you can do so, or is it better to just add the include?
Only include headers in another header if absolutely necessary. If the header can go solely in the source file then that's the best place for it. You can use forward declarations of classes in the header if you are only using pointers and references to them. Your DrawScene, GameState, Grid and InputHandler classes look like they might fall into this category.
Note that C++ as a language does not care about the distinction between headers and source files. That's just an extremely common system used by developers to maintain their code. The obvious advantage of using headers is to avoid code duplication and helps, to an extent, to enforce the one-definition-rule for classes, templates and inline functions.
Avoid putting too many (read: any unnecessary) #includes in your .h file. Doing so can lead to long build times: e.g. whenever you change Camera.h you have effectively changed Controller.h as well, so anything that includes Controller.h will need to be rebuilt, even if only a comment changed.
Use forward declarations if you are only storing pointer members, then add the #includes to the cpp file.
Theoretically, .h files contain just the interface, and .cpp files, the implementation. However, since private members are arguably implementation, and not interface, this is not strictly true, hence the need for forward declarations in order to minimise unnecessary rebuilds.
It is possible, in C++, to include the whole implementation inline in the class definition, much as you might with Java, but this really breaks the .h/.cpp interface/implementation rule.
Header files contain declarations of functions, classes, and other objects, whereas cpp files are meant for implementing these previously declared objects.
Header files exist primarily for historical reasons. It's easier to build a C++ compiler if the declarations of all functions, classes, etc. used by your code are given before you actually call them.
Usually you use the .cpp for your implementation. Defining member functions inside the class definition in a header implicitly marks them inline, so unless they are very small and/or very frequently called, this is usually not what you want.
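A small illustration of the difference, with a made-up class:

// widget.h
class Widget
{
public:
    int id() const { return m_id; } // defined in the class: implicitly inline
    void update();                  // declared here, defined in widget.cpp
private:
    int m_id = 0;
};

// widget.cpp
#include "widget.h"

void Widget::update()
{
    // implementation hidden from the header
}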

Should I use a single header to include all static library headers?

I have a static library that I am building in C++. I have separated it into many header and source files. I am wondering if it's better to include all of the headers that a client of the library might need in one header file that they, in turn, can include in their source code, or to just have them include only the headers they need. Will that cause the code to be unnecessarily bloated? I wasn't sure if the classes or functions that don't get used will still be compiled into their products.
Thanks for any help.
Keep in mind that each source file that you compile involves an independent invocation of the compiler. With each invocation, the compiler has to read in every included header file, parse through it, and build up a symbol table.
When you use one of these "include the world" header files in lots of your source files, it can significantly impact your build time.
There are ways to mitigate this; for example, Microsoft has a precompiled header feature that essentially saves out the symbol table for subsequent compiles to use.
There is another consideration though. If I'm going to use your WhizzoString class, I shouldn't have to have headers installed for SOAP, OpenGL, and what have you. In fact, I'd rather that WhizzoString.h only include headers for the types and symbols that are part of the public interface (i.e., the stuff that I'm going to need as a user of your class).
As much as possible, you should try to shift includes from WhizzoString.h to WhizzoString.cpp:
OK:
// Only include the stuff needed for this class
#include "foo.h" // Foo class
#include "bar.h" // Bar class

class WhizzoString
{
private:
    Foo  m_Foo;
    Bar* m_pBar;
    // ...
};
BETTER:
// Only include the stuff needed by the users of this class
#include "foo.h" // Foo class
class Bar;       // Forward declaration

class WhizzoString
{
private:
    Foo  m_Foo;
    Bar* m_pBar;
    // ...
};
If users of your class never have to create or use a Bar type, and the class doesn't contain any instances of Bar, then it may be sufficient to provide only a forward declaration of Bar in the header file (WhizzoString.cpp will have #include "bar.h"). This means that anyone including WhizzoString.h could avoid including Bar.h and everything that it includes.
In general, when linking the final executable, only the symbols and functions that are actually used by the program will be incorporated. You pay only for what you use. At least that's how the GCC toolchain appears to work for me. I can't speak for all toolchains.
If the client will always have to include the same set of header files, then it's okay to provide a "convenience" header file that includes the others. This is common practice in open-source libraries. If you decide to provide a convenience header, make it so that the client can also choose to include specifically what is needed.
To reduce compile times in large projects, it's common practice to include the least amount of headers as possible to make a unit compile.
What about giving both choices:
#include <library.hpp> // include everything
#include <library/module.hpp> // only single module
This way you do not have one huge include file, and the separate module headers are stacked neatly in one directory.
It depends on the library, and how you've structured it. Remember that header files for a library, and which pieces are in which header file, are essentially part of the API of the library. So, if you lead your clients to carefully pick and choose among your headers, then you will need to support that layout for a long time. It is fairly common for libraries to export their whole interface via one file, or just a few files, if some part of the API is truly optional and large.
A consideration should be compilation time: If the client has to include two dozen files to use your library, and those includes have internal includes, it can significantly increase compilation time in a big project, if used often. If you go this route, be sure all your includes have proper include guards around not only the file contents, but the including line as well. Though note: Modern GCC does a very good job of this particular issue and only requires the guards around the header's contents.
As to bloating the final compiled program, it depends on your tool chain, and how you compiled the library, not how the client of the library included header files. (With the caveat that if you declare static data objects in the headers, some systems will end up linking in the objects that define that data, even if the client doesn't use it.)
In summary, unless it is a very big library, or a very old and cranky tool chain, I'd tend to go with the single include. To me, freezing your current implementation's division into headers into the library's API is bigger worry than the others.
The problem with single-file headers is explained in detail in Dr. Dobb's by an expert compiler writer. NEVER USE A SINGLE FILE HEADER!!! Each time a header is included in a .cc/.cpp file it has to be recompiled, because macros defined before the include can alter how the header compiles. For this reason, a single header file will dramatically increase compile time without providing any benefit. With C++ you should optimize for human time first, and compile time is human time. You should never include more than you need to compile in any header, because it dramatically increases compile time; each translation unit (TU) should have its own implementation (.cc/.cpp) file, and each TU should have a unique filename.
In my decade of C++ SDK development experience, I religiously ALWAYS have three files in EVERY module. I have a config.h that gets included into almost every header file and contains prerequisites for the entire module, such as platform config and stdint.h stuff. I also have a global.h file that includes all of the header files in the module; this one is mostly for debugging (hint: enumerate your seams in the global.h file for better-tested and easier-to-debug code). The key missing piece here is that you should really have a public.h file that includes ONLY your public API, as sketched below.
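A sketch of that three-file layout, with made-up module contents:

// config.h -- prerequisites included by almost every header in the module
#include <cstdint>
// platform-config macros, fixed-width typedefs, and so on

// public.h -- ONLY the public API; this is what clients include
#include "foo.h"
#include "bar.h"

// global.h -- every header in the module; mostly for debugging
#include "public.h"
#include "foo_internal.h"
#include "bar_internal.h"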
In libraries that are poorly programmed, such as boost with its hideous lower_snake_case class names, they use the half-baked worst practice of a detail (sometimes named 'impl') folder design pattern to "conceal" their private interface. There is a long background behind why this is a worst practice, but the short story is that it creates an insane amount of redundant typing that turns one-liners into multi-liners, it's not UML compliant, and it messes up the UML dependency diagram, resulting in overly complicated code and inconsistent design patterns such as children actually being parents and vice versa. You don't want or need a detail folder; you need a public.h header with a set of sibling modules WITHOUT ADDITIONAL NAMESPACES, where your detail is a sibling and not a child that is in reality a parent. Namespaces are supposed to be for one thing and one thing only: to interface your code with other people's code. If it's your code, you control it, and you should use unique class and function names, because it's bad practice to use a namespace when you don't need to; it may cause hash table collisions that slow down the compilation process. UML is the best practice, so if you can organize your headers so they are UML compliant, then your code is by definition more robust and portable. A public.h file is all you need to expose only the public API; thanks.

Is it a good practice to place C++ definitions in header files?

My personal style with C++ has always to put class declarations in an include file, and definitions in a .cpp file, very much like stipulated in Loki's answer to C++ Header Files, Code Separation. Admittedly, part of the reason I like this style probably has to do with all the years I spent coding Modula-2 and Ada, both of which have a similar scheme with specification files and body files.
I have a coworker, much more knowledgeable in C++ than I, who is insisting that all C++ declarations should, where possible, include the definitions right there in the header file. He's not saying this is a valid alternate style, or even a slightly better style, but rather this is the new universally-accepted style that everyone is now using for C++.
I'm not as limber as I used to be, so I'm not really anxious to scrabble up onto this bandwagon of his until I see a few more people up there with him. So how common is this idiom really?
Just to give some structure to the answers: Is it now The Way™, very common, somewhat common, uncommon, or bug-out crazy?
Your coworker is wrong, the common way is and always has been to put code in .cpp files (or whatever extension you like) and declarations in headers.
There is occasionally some merit to putting code in the header, this can allow more clever inlining by the compiler. But at the same time, it can destroy your compile times since all code has to be processed every time it is included by the compiler.
Finally, it is often annoying to have circular object relationships (sometimes desired) when all the code is in the headers.
Bottom line, you were right, he is wrong.
EDIT: I have been thinking about your question. There is one case where what he says is true: templates. Many newer "modern" libraries such as boost make heavy use of templates and often are "header only." However, this should only be done when dealing with templates, as it is the only way to do it when dealing with them.
EDIT: Some people would like a little more clarification, here's some thoughts on the downsides to writing "header only" code:
If you search around, you will see quite a lot of people trying to find a way to reduce compile times when dealing with boost. For example: How to reduce compilation times with Boost Asio, which is seeing a 14s compile of a single 1K file with boost included. 14s may not seem to be "exploding", but it is certainly a lot longer than typical and can add up quite quickly when dealing with a large project. Header only libraries do affect compile times in a quite measurable way. We just tolerate it because boost is so useful.
Additionally, there are many things which cannot be done in headers only (even boost has libraries you need to link to for certain parts such as threads, filesystem, etc). A primary example is that you cannot have simple global objects in header-only libs (unless you resort to the abomination that is a singleton), as you will run into multiple-definition errors. NOTE: C++17's inline variables will make this particular example doable in the future.
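For reference, the C++17 escape hatch mentioned in that NOTE looks like this (a minimal sketch):

// counters.h -- C++17 inline variable: a global object defined in a header,
// safely includable from many .cpp files (the linker folds it to one entity)
inline int g_instanceCount = 0;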
As a final point, when using boost as an example of header only code, a huge detail often gets missed.
Boost is a library, not user-level code, so it doesn't change that often. In user code, if you put everything in headers, every little change will cause you to have to recompile the entire project. That's a monumental waste of time (and is not the case for libraries that don't change from compile to compile). When you split things between header/source and, better yet, use forward declarations to reduce includes, you can save hours of recompiling when added up across a day.
The day C++ coders agree on The Way, lambs will lie down with lions, Palestinians will embrace Israelis, and cats and dogs will be allowed to marry.
The separation between .h and .cpp files is mostly arbitrary at this point, a vestige of compiler optimizations long past. To my eye, declarations belong in the header and definitions belong in the implementation file. But, that's just habit, not religion.
Code in headers is generally a bad idea since it forces recompilation of all files that includes the header when you change the actual code rather than the declarations. It will also slow down compilation since you'll need to parse the code in every file that includes the header.
A reason to have code in header files is that it's generally needed for the keyword inline to work properly and when using templates that are instantiated in other cpp files.
What might be informing your coworker is the notion that most C++ code should be templated to allow for maximum usability. And if it's templated, then everything will need to be in a header file, so that client code can see it and instantiate it. If it's good enough for Boost and the STL, it's good enough for us.
I don't agree with this point of view, but it may be where it's coming from.
I think your co-worker is smart and you are also correct.
The useful things I have found from putting everything into the headers are:
No need to write and keep in sync separate headers and sources.
The structure is plain, and the impossibility of circular dependencies forces the coder to make a "better" structure.
Portable: easy to embed into a new project.
I do agree with the compiling time problem, but I think we should notice that:
A change to a source file is very likely to involve a change to its header as well, which leads to the whole project being recompiled anyway.
Compiling is much faster than it used to be. And if a project takes a long time to build and is built frequently, that may indicate that your project design has flaws. Separating the tasks into different projects and modules can avoid this problem.
Lastly, I just want to support your co-worker; this is just my personal view.
Often I'll put trivial member functions into the header file, to allow them to be inlined. But to put the entire body of code there, just to be consistent with templates? That's plain nuts.
Remember: A foolish consistency is the hobgoblin of little minds.
As Tuomas said, your header should be minimal. To be complete I will expand a bit.
I personally use 4 types of files in my C++ projects:
Public:
Forwarding header: in the case of templates etc., this file gets the forward declarations that will appear in the header.
Header: this file includes the forwarding header, if any, and declares everything that I wish to be public (and defines the classes...)
Private:
Private header: this file is a header reserved for the implementation; it includes the header and declares the helper functions/structures (for Pimpl, for example, or predicates). Skip if unnecessary.
Source file: it includes the private header (or the header if there is no private header) and defines everything (non-template...)
Furthermore, I couple this with another rule: Do not define what you can forward declare. Though of course I am reasonable there (using Pimpl everywhere is quite a hassle).
It means that I prefer a forward declaration over an #include directive in my headers whenever I can get away with them.
Finally, I also use a visibility rule: I limit the scopes of my symbols as much as possible so that they do not pollute the outer scopes.
Putting it altogether:
// example_fwd.hpp
// Here it is necessary to forward declare the template class;
// you don't want people to declare it themselves, in case you wish
// to add another template parameter (with a default) later on
class MyClass;
template <class T> class MyClassT;

// example.hpp
#include "project/example_fwd.hpp"

// Those can't really be skipped
#include <string>
#include <vector>
#include "project/pimpl.hpp"

// Those can be forward declared easily
#include "project/foo_fwd.hpp"
namespace project { class Bar; }

namespace project
{
    class MyClass
    {
    public:
        struct Color // Limiting scope of enum
        {
            enum type { Red, Orange, Green };
        };
        typedef Color::type Color_t;

    public:
        MyClass(); // because of pimpl, I need to define the constructor

    private:
        struct Impl;
        pimpl<Impl> mImpl; // I won't describe pimpl here :p
    };

    template <class T> class MyClassT: public MyClass {};
} // namespace project

// example_impl.hpp (not visible to clients)
#include "project/example.hpp"
#include "project/bar.hpp"

template <class T> void check(MyClassT<T> const& c) {}

// example.cpp
#include "example_impl.hpp"

// MyClass definition
The lifesaver here is that most of the time the forward header is useless: it is only necessary in the case of typedefs or templates, and so is the implementation header ;)
To add more fun, you can add .ipp files which contain the template implementation (and are included by the .hpp), while the .hpp contains the interface.
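A sketch of that split, with made-up names:

// stack.hpp -- the interface
#ifndef PROJECT_STACK_HPP
#define PROJECT_STACK_HPP

template <class T>
class Stack
{
public:
    void push(T const& value);
private:
    // storage omitted for brevity
};

#include "stack.ipp" // pull in the template implementation at the end

#endif

// stack.ipp -- the implementation, kept out of the interface file
template <class T>
void Stack<T>::push(T const& /*value*/)
{
    // ...
}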
Apart from templatized code (which, depending on the project, can be the majority or the minority of files), there is normal code, and here it is better to separate declarations and definitions. Also provide forward declarations where needed; this can have an effect on compilation time.
Generally, when writing a new class, I will put all the code in the class, so I don't have to look in another file for it. After everything is working, I break the bodies of the methods out into the cpp file, leaving the prototypes in the hpp file.
I personally do this in my header files:
// class-declaration
// inline-method-declarations
I don't like mixing the code for the methods in with the class as I find it a pain to look things up quickly.
I would not put ALL of the methods in the header file. The compiler will (normally) not be able to inline virtual methods and will (likely) only inline small methods without loops (totally depends on the compiler).
Doing the methods in the class is valid... but from a readability point of view I don't like it. Putting the methods in the header does mean that, when possible, they will get inlined.
I think that it's absolutely absurd to put ALL of your function definitions into the header file. Why? Because the header file is used as the PUBLIC interface to your class. It's the outside of the "black box".
When you need to look at a class to reference how to use it, you should look at the header file. The header file should give a list of what it can do (commented to describe the details of how to use each function), and it should include a list of the member variables. It SHOULD NOT include HOW each individual function is implemented, because that's a boat load of unnecessary information and only clutters the header file.
If this new way is really The Way, we might have been running into different direction in our projects.
Because we try to avoid all unnecessary things in headers. That includes avoiding header cascades. Code in headers will probably need some other header to be included, which will need another header and so on. If we are forced to use templates, we try to avoid littering headers with template stuff too much.
Also we use "opaque pointer"-pattern when applicable.
With these practices we can do faster builds than most of our peers. And yes... changing code or class members will not cause huge rebuilds.
I put all the implementation out of the class definition. I want to have the doxygen comments out of the class definition.
IMHO, He has merit ONLY if he's doing templates and/or metaprogramming. There's plenty of reasons already mentioned that you limit header files to just declarations. They're just that... headers. If you want to include code, you compile it as a library and link it up.
Doesn't that really depend on the complexity of the system, and the in-house conventions?
At the moment I am working on a neural network simulator that is incredibly complex, and the accepted style that I am expected to use is:
Class definitions in classname.h
Class code in classnameCode.h
executable code in classname.cpp
This splits up the user-built simulations from the developer-built base classes, and works best in the situation.
However, I'd be surprised to see people do this in, say, a graphics application, or any other application whose purpose is not to provide users with a code base.
Template code should be in headers only. Apart from that, all definitions except inlines should be in the .cpp. The best argument for this would be the std library implementations, which follow the same rule; you would hardly disagree that the std lib developers are right about this.
I think your co-worker is right as long as he does not go so far as to put executable code in the header.
The right balance, I think, is to follow the path indicated by GNAT Ada, where the .ads file gives a perfectly adequate interface definition of the package for its users and for its children.
By the way, Ted, have you seen the recent question on this forum about the Ada binding to the CLIPS library that you wrote several years ago and which is no longer available (the relevant Web pages are now closed)? Even if it was made for an old CLIPS version, this binding could be a good starting example for somebody wanting to use the CLIPS inference engine within an Ada 2012 program.