I am a tutor and I'm trying to write something to help students learn C++. Suppose I have a .cpp file that includes two .h files, which we will call "solution.h" and "student_answer.h". The "solution.h" file contains a class called "Solution" which implements member functions and variables that solve a problem. Students are to implement their own solution to the same problem in a separate "student_answer.h" file, in a class which we will call "Student".
The .cpp file should then take the two class definitions, "Solution" (the class defined in solution.h) and "Student" (the class defined in student_answer.h), and run the two implementations to verify whether the student answer is correct, and it should provide detailed output in cases where the student answer has a bug. However, I want to be able to hide the contents of the solution.h file (or the "Solution" class) from the students while still providing them with the .cpp file that they can compile and run their own solution with.
Currently, I have something like this:
#include "solution.h"
#include "student_answer.h"
// ...
int main() {
    Solution s;
    Student a;
    // run member functions of s and a, compare results to verify if student
    // implementation is correct (and print helpful output if there is a bug)
    ...
}
Is there a way to do the same thing without having to reveal the solution.h file to the students? Is there a better approach to doing this?
Thanks!
Is there a way to do the same thing without having to reveal the solution.h file to the students?
No.
The best you could do is to not put any implementation details into solution.h and instead provide a pre-compiled library for comparison. That would prevent students from seeing the solution except for the API, which is presumably part of the instructions for the student anyway.
This does have the challenge that the pre-compiled library must be compatible with the systems the students use. That can be solved by instructing the students to use a provided VM or container image.
However, this approach does not prevent the students from implementing their API by delegating to the solution API.
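For concreteness, a minimal sketch of what a declarations-only solution.h might look like; the solve() member and its signature are placeholders for whatever the real assignment's API is:

// solution.h -- declarations only; the definitions are compiled into a
// pre-compiled library (e.g. libsolution.a) that students link against.
#ifndef SOLUTION_H
#define SOLUTION_H

#include <vector>

class Solution {
public:
    // Declared here, defined only in the library's object code.
    std::vector<int> solve(const std::vector<int>& input);
};

#endif

Students can compile the provided .cpp against this header and link with the library, but they never see the body of solve().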
There's really no need to provide the correct solution. Write the solution for yourself only; write tests that are required to pass with either your solution or the student's; test the tests against your solution; then provide only the tests to the students, not your solution.
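A rough sketch of that approach, assuming a hypothetical Student class with a solve() member and made-up expected values:

// main.cpp -- given to students; it only needs student_answer.h
#include "student_answer.h"
#include <iostream>

int main() {
    Student s;
    int failures = 0;
    if (s.solve(2) != 4) { std::cout << "solve(2) should return 4\n"; ++failures; }
    if (s.solve(3) != 9) { std::cout << "solve(3) should return 9\n"; ++failures; }
    std::cout << (failures == 0 ? "All tests passed\n" : "Some tests failed\n");
    return failures == 0 ? 0 : 1;
}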
Create a static or dynamic library from the solution's source, compile it, and link it with the project. The students then get the header of the solution with declarations, but not definitions.
The library topic is a little bit too complex to explain in one answer. Watch the video from TheChernoProject ( https://www.youtube.com/watch?v=Wt4dxDNmDA8 ) for it.
Another way would be to make the code unreadable (with defines) or to put it in asm blocks.
I'm trying to learn C++ for Qt development, and I'm a little scared of header files. What I'd like to know is, what's the best workflow for keeping *.cpp and *.h files synched? For example, is the norm to write the class file and then copy the relevant info over to the header?
Sorry if this doesn't make any sense...I'm just looking for an efficient workflow for this.
Thanks!
Justin
For example, is the norm to write the class file and then copy the relevant info over to the header?
While there is no single standard approach, it's usually a good idea to:
first think about the public interface
put that in the header
implement in the source file accordingly
update the header if needed
Jumping straight into the implementation can make for a painful refactoring later on.
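As a small, made-up example of that order:

// counter.h -- the public interface, written first
#ifndef COUNTER_H
#define COUNTER_H

class Counter {
public:
    void increment();
    int value() const;
private:
    int count_ = 0;
};

#endif

// counter.cpp -- the implementation, written afterwards to match the header
#include "counter.h"

void Counter::increment() { ++count_; }
int Counter::value() const { return count_; }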
So:
You'll get linker errors if something's in the header, used by a client, and not in the cpp.
You'll get compile errors if something in the cpp is not defined or is defined differently in the .h
What scenario are you worried about?
Create the test code that uses your class/functions (this way you can play with the end results before even writing any code)
Create the header with the class and the methods.
Implement those methods in your cpp file
If you run into the case where you have to change the signature of a method, you will need to change it manually in both places (header + source).
You can shell out a small amount of money and buy Visual Assist X. With Visual Assist X, you right click the method in your cpp file, choose Refactor -> Change Signature. Perform the change and hit Ok. It is changed in both places.
So, in short, there is no way to keep them in sync automatically, but with the right refactor tool, your life will be better.
In general, for the first version I write the class in the .h and, when done, I copy the method declarations to the .cpp and there change them into definitions (i.e. methods and their bodies). In this phase I declare only the public methods of the class, because I'm more concerned about its interface than the internals.
Later, when I'm implementing the public methods, if I need a new private method I begin by calling it wherever it's needed, so I get the parameters well defined. Then I write its declaration in the .h and go back to the .cpp to write its body.
The concept of Tracer Bullet Design may be of interest to you; it fits well with this workflow. It's described in the book The Pragmatic Programmer:
http://www.amazon.com/Pragmatic-Programmer-Journeyman-Master/dp/020161622X/ref=sr_1_1?ie=UTF8&s=books&qid=1279150854&sr=8-1
This page contains a brief description
http://www.artima.com/intv/tracer2.html
You keep them both open at the same time (I frequently use a horizontal split for this) and when you change one file you change the other file. It's just the same as how you keep the prototypes at the top of a file in sync with the function definitions at the bottom of the file when you're writing a program that fits in a single .cpp file.
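The same kind of duplication shows up even inside a single .cpp file; a trivial example:

#include <iostream>

int square(int x);                      // prototype at the top of the file

int main() {
    std::cout << square(5) << '\n';
}

int square(int x) { return x * x; }     // definition below; must stay in sync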
I've got a C/C++ question, can I reuse functions across different object files or projects without writing the function headers twice? (one for defining the function and one for declaring it)
I don't know much about C/C++, Delphi and D. I assume that in Delphi or D, you would just write once what arguments a function takes and then you can use the function across diferent projects.
And in C you need the function declaration in header files again, right? Is there a good tool that will create header files from C sources? I've got one, but it's not preprocessor-aware and not very strict. And I've had some macro technique that worked rather badly.
I'm looking for ways to program in C/C++ like described here http://www.digitalmars.com/d/1.0/pretod.html
Imho, generating the headers from the source is a bad idea and is impractical.
Headers can contain more information than just function names and parameters.
Here are some examples:
a C++ header can define an abstract class for which a source file may be unneeded
A template can only be defined in a header file
Default parameters are only specified in the class definition (thus in the header file)
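A small, made-up header illustrating all three cases:

// shapes.h -- hypothetical sketch covering all three cases
#ifndef SHAPES_H
#define SHAPES_H

// 1. An abstract class: no source file is needed for it at all
class Shape {
public:
    virtual double area() const = 0;
    virtual ~Shape() {}
};

// 2. A template: it can only be defined in a header
template <typename T>
T scaled(T value, T factor) { return value * factor; }

// 3. A default parameter: it is specified in the class definition
class Circle : public Shape {
public:
    explicit Circle(double radius = 1.0) : radius_(radius) {}
    double area() const { return 3.14159265358979 * radius_ * radius_; }
private:
    double radius_;
};

#endif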
You usually write your header, then write the implementation in a corresponding source file.
I think doing the other way around is counter-intuitive and doesn't fit with the spirit of C or C++.
The only exception I can see to that is static functions. A static function only appears in its source file (.c or .cpp) and can't (obviously) be used elsewhere.
While I agree it is often annoying to copy the header definition of a method/function to the source file, you can probably configure your code editor to ease this. I use Vim and a quick script helped me with this a lot. I guess a similar solution exists for most other editors.
Anyway, while this can seem annoying, keep in mind it also gives a greater flexibility. You can distribute your header files (.h, .hpp or whatever) and then transparently change the implementation in source files afterward.
Also, just to mention it, there is no such thing as C/C++: there is C and there is C++; those are different languages (which indeed share much, but still).
It seems to me that you don't really need/want to auto-generate headers from source; you want to be able to write a single file and have a tool that can intelligently split that into a header file and a source file.
Unfortunately, I'm not aware of any such tool. It's certainly possible to write one - but you'd need a C++ front end. You could try writing something using clang - but it would be a significant amount of work.
Considering you have declared some functions and written their implementation, you will have a .c/.cpp file and a .h header file.
What you must do in order to use those functions:
Create a library (DLL/so or static library .a/.lib - for now I recommend a static library for ease of use) from the files where the implementation resides
Use the header file (#include it) in your programs to obtain the function declarations (you don't need to rewrite the header file again) and link with your library from step 1.
Though >this< is an example for Visual Studio, it makes perfect sense for other development environments as well.
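For a GCC-style toolchain, those two steps might look roughly like this (the file names and the add() function are made up for illustration):

// mathlib.h -- only declarations; this is what gets #included by other projects
int add(int a, int b);

// mathlib.cpp -- the implementation, compiled once into the library
#include "mathlib.h"
int add(int a, int b) { return a + b; }

// Build the static library and link against it (commands shown as comments):
//   g++ -c mathlib.cpp              -> mathlib.o
//   ar rcs libmathlib.a mathlib.o   -> the static library
//   g++ main.cpp -L. -lmathlib      -> any program that includes mathlib.h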
This seems like a rudimentary question, so assuming I have not mis-read,
Here is a basic example of re-use, to answer your first question:
#include "stdio.h"
int main( int c, char ** argv ){
puts( "Hello world" );
}
Explanation:
1. stdio.h is a C header file containing (among others) the definition of a function called puts().
2. in main, puts() is called, from the included definition.
Some compilers (including gcc, I think) have an option to generate headers.
There is always very much confusion about headers and source-files in C++. The links I provided should help to clear that up a little.
If you are in the situation that you want to extract headers from source-file, then you probably went about it the wrong way. Usually you first declare your function in a header-file, and then provide an implementation (definition) for it in a source-file. If your function is actually a method of a class, you can also provide the definition in header file.
Technically, a header file is just a bunch of text that is actually inserted into the source file by the preprocessor:
#include <vector>
tells the preprocessor to insert the contents of the file vector at the exact place where the #include appears. This is really just text replacement. So, header files are not some kind of special language construct. They contain normal code. But by putting that code into a separate file, you can easily include it in other files using the preprocessor.
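A tiny, made-up illustration of that text replacement:

// point.h
struct Point { int x; int y; };

// main.cpp, as written
#include "point.h"
Point origin() { return Point{0, 0}; }

// main.cpp, as the compiler effectively sees it after preprocessing
struct Point { int x; int y; };
Point origin() { return Point{0, 0}; }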
I think it's a good question which is what led me to ask this: Visual studio: automatically update C++ cpp/header file when the other is changed?
There are some refactoring tools mentioned but unfortunately I don't think there's a perfect solution; you simply have to write your function signatures twice. The exception is when you are writing your implementations inline, but there are reasons why you can't or shouldn't always do this.
You might be interested in Lazy C++. However, you should do a few projects the old-fashioned way (with separate header and source files) before attempting to use this tool. I considered using it myself, but then figured I would always be accidentally editing the generated files instead of the lzz file.
You could just put all the definitions in the header file...
This goes against common practice, but is not unheard of.
Should every .C or .cpp file should have a header (.h) file for it?
Suppose there are following C files :
Main.C
Func1.C
Func2.C
Func3.C
where main() is in Main.C file. Should there be four header files
Main.h
Func1.h
Func2.h
Func3.h
Or there should be only one header file for all .C files?
What is a better approach?
For a start, it would be unusual to have a main.h since there's usually nothing that needs to be exposed to the other compilation units at compile time. The main() function itself needs to be exposed for the linker or start-up code but they don't use header files.
You can have either one header file per C file or, more likely in my opinion, a header file for a related group of C files.
One example of that is if you have a BTree implementation and you've put add, delete, search and so on in their own C files to minimise recompilation when the code changes.
It doesn't really make sense in that case to have separate header files for each C file, as the header is the API. In other words, it's the view of the library as seen by the user. People who use your code generally care very little about how you've structured your source code, they just want to be able to write as little code as possible to use it.
Forcing them to include multiple distinct header files just so they can create, insert into, delete from, and search, a tree, is likely to have them questioning your sanity :-)
You would be better off with one btree.h file and a single btree.lib file containing all of the BTree object files that were built from the individual C files.
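A rough sketch of what that single btree.h might declare (the names are illustrative, not from any real library):

/* btree.h -- the one public header for a BTree library whose implementation
   is split across several source files (add.c, delete.c, search.c, ...) */
#ifndef BTREE_H
#define BTREE_H

typedef struct BTree BTree;   /* opaque type: callers never see the layout */

BTree *btree_create(void);
void   btree_destroy(BTree *tree);
int    btree_insert(BTree *tree, int key);
int    btree_delete(BTree *tree, int key);
int    btree_search(const BTree *tree, int key);

#endif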
Another example can be found in the standard C headers.
We don't know for certain whether there are multiple C files for all the stdio.h functions (that's how I'd do it but it's not the only way) but, even if there were, they're treated as a unit in terms of the API.
You don't have to include stdio_printf.h, stdio_fgets.h and so on - there's a single stdio.h for the standard I/O part of the C runtime library.
Header files are not mandatory.
#include simply copies and pastes whatever file is included (including .c source files)
Commonly used in real life projects are global header files like config.h and constants.h that contains commonly used information such as compile-time flags and project wide constants.
A good design of a library API would be to expose an official interface with one set of header files and use an internal set of header files for implementation with all the details. This adds a nice extra layer of abstraction to a C library without adding unnecessary bloat.
Use common sense. C/C++ is not really for the ones without it.
I used to follow the "it depends" trend until I realized that consistency, uniformity and simplicity are more important than saving the effort to create a file, and that "standards are good even when they are bad".
What I mean is the following: a .cpp/.h pair of files is pretty much what all "modules" end up as anyway. Making the existence of both a requirement saves a lot of confusion and bad engineering.
For instance, when I see the interface of something in a header file, I know exactly where to search for / place its implementation. Conversely, if I need to expose the interface of something that was previously hidden in a .cpp file (e.g. a static function becoming global), I know exactly where to put it.
I've seen too many bad consequences of not following this simple rule. Unnecessary inline functions, breaking any kind of rules about encapsulation, (non)separation of the interface and implementation, misplaced code, to name a few -- all due to the fact that the appropriate sibling header or cpp file was never added.
So: always define both .h and .c files. Make it a standard, follow it, and safely rely on it. Life is much simpler this way, and simplicity is the most important thing in software.
Generally it's best to have a header file for each .c file, containing the declarations for functions etc in the .c file that you want to expose. That way, another .c file can include the .h file for the functions it needs, and won't need to be recompiled if a header file it didn't include got changed.
Generally there will be one .h file for each .c/.cpp file.
Bjarne Stroustrup Explains it beautifully in his book "The C++ Programming Language"....
The single header style of physical partitioning is most useful when the program is small and its parts are not intended for separate use. When namespaces are used, the logical structure of the program can still be explained in a single header file.
For larger programs, the single header file approach is unworkable in a conventional file-based development environment. A change to the common header forces recompilation of the whole program, and updates of that single header by several programmers are error prone. Unless strong emphasis is placed on programming styles relying heavily on namespaces and classes, the logical structure deteriorates as the program grows.
An alternative physical organization lets each logical module have its own header defining the facilities it provides. Each .c file then has a corresponding .h file specifying what it provides (its interface). Each .c module includes its own .h file and usually also other .h files that specify what it needs from other modules in order to implement the services advertised in its interface. This physical organization corresponds to the logical organization of a module. The multiple header approach makes it easy to determine the dependencies. The single header approach forces us to look at every declaration used by any module and decide if it's relevant. The simple fact is that maintenance of code is invariably done with incomplete information and from a local perspective.
The better localization leads to less information being needed to compile a module and thus faster compilation.
It depends. Usually your reason for having separate .c files will dictate whether you need separate .h files.
Generally cpp/c files are for implementation and h/hpp files (hpp is not used as often) are for headers (prototypes and declarations only). Cpp files don't always have to have a header file associated with them, but they usually do, as the header file acts like a bridge between cpp files so each cpp file can use code from another cpp file.
One thing that should be strongly enforced is not putting code in a header file! There have been too many times where header files break compiles in any size of project because of redefinitions, and that happens simply when you include the header file in 2 different cpp files. Header files should also always be designed so that they can be included multiple times. Cpp files should never be included.
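The conventional way to make a header safe to include multiple times is an include guard (or #pragma once); a minimal example:

// widget.h -- safe to include from any number of cpp files
#ifndef WIDGET_H
#define WIDGET_H

class Widget {
public:
    void draw();   // declaration only: the body lives in widget.cpp,
                   // so including this header twice never redefines code
};

#endif // WIDGET_H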
It's all about what code needs to be aware of what other code. You want to reduce the amount other files are aware of to the bare minimum for them to do their jobs.
They need to know that a function exists, what types they need to pass into it, and what type it will return, but not what it's doing internally. Note that it's also important from the programmer's point of view to know what those types actually mean (e.g. which int is the row, and which is the column), but the code itself doesn't care. This is why naming the function and parameters sensibly is worthwhile.
As others have said, if there's nothing in a cpp file worth exposing to other parts of the code, as is normally the case with main.c, then there's no need for a header file.
It's occasionally worth putting everything you want to expose in a single header file (e.g, Func1and2and3.h), so that anything that knows about Func1 knows about Func2 as well, but I'm personally not keen on this, as it means that you tend to load a hell of a lot of junk along with the stuff you actually want.
Summary:
Imagine that you trust that someone can write code and that their algorithms, design, etc. are all good. You want to use code they've written. All you need to know is what to call to make something happen, what you need to pass in, and what you'll get back. That's what needs to go in the header files.
I like putting interfaces into header files and implementation in cpp files. I don't like writing C++ where I need to add member variables and prototypes to the header and then the method again in the C++. I prefer something like:
module.h
struct IModuleInterface : public IUnknown
{
    virtual void SomeMethod () = 0;
};

module.cpp

class ModuleImpl : public IModuleInterface,
                   public CObject // a common object to do the reference
                                  // counting stuff for IUnknown (so we
                                  // can stick this object in a smart
                                  // pointer).
{
public:
    ModuleImpl () : m_MemberVariable (0)
    {
    }

    void SomeMethod ()
    {
        // implementation for the method in the interface
    }

private:
    int m_MemberVariable;

    void SomeInternalMethod ()
    {
        // some internal code that doesn't need to be in the interface
    }

    // whatever else we need
};
I find this is a really clean way of separating implementation and interface.
There is no better approach, only common and less common cases.
The more common case is when you have a class/function interface to declare/define. It's better to have only one .cpp/.c with the definitions, and one header for the declarations.
Giving them the same name makes it easy to understand that they are directly related.
But that's not a "rule", that's the common way and the most efficient in almost all cases.
Now in some cases (like template classes or some tiny struct definitions) you won't need any .c/.cpp file, just the header. We often have a virtual class interface defined in only a header file, for example, with only pure virtual functions or trivial functions.
And in other rare cases (like a hypothetical main.c/.cpp file) it isn't always required to allow code from an external compilation unit to call the functions of a given compilation unit. The main function is an example (no header/declaration needed), but there are others, mostly code that "connects all the other parts together" and is not called by other parts of the application. That's very rare, but in this case a header makes no sense.
If your file exposes an interface - that is, if it has functions which will be called from other files - then it should have a header file. Otherwise, it shouldn't.
As already noted, generally, there will be one header (.h) file for each source (.c or .cpp) file.
However, you should look at the cohesiveness of the files. If the various source files provide separate, individually reusable sets of functions - an ideal organization - then you should certainly have one header per file. If, however, the three source files provide a composite set of functions (that is too big to fit into one file), then you would use a more complex organization. There would be one header for the external services used by the main program - and that would be used by other programs needing the same services. There would also be a second header used by the cooperating source files that provides 'internal' definitions shared by those files.
(Also noted by Pax): The main program does not normally need its own header - no other source code should be using the services it provides; it uses the services provided by other files.
If you want your compiled code to be used from another compilation unit you will need the header files. There are some situations in which you do not need/want to have a header.
The first such case is main.c/cpp files. This file is not meant to be included and as such there is no need for a header file.
In some cases you can have a header file that defines the behaviour of a set of different implementations, loaded from a dll at runtime. There will be different sets of .c/.cpp files that implement variations of the same header. This can be common in plugin systems.
In general, I don't think there is any explicit relationship between .h and .c files. In many cases (probably most), a unit of code is a library of functionality with a public interface (.h) and an opaque implementation (.c). Sometimes a number of symbols are needed, like enums or macros, and you get a .h with no corresponding .c and in a few circumstances, you will have a lump of code with no public interface and no corresponding .h
In particular, there are a number of times when, for the sake of readability, the headers or implementations (seldom both) are so big and hairy that they end up being broken into many smaller files, for the sake of the programmer's sanity.
My personal style with C++ has always to put class declarations in an include file, and definitions in a .cpp file, very much like stipulated in Loki's answer to C++ Header Files, Code Separation. Admittedly, part of the reason I like this style probably has to do with all the years I spent coding Modula-2 and Ada, both of which have a similar scheme with specification files and body files.
I have a coworker, much more knowledgeable in C++ than I, who is insisting that all C++ declarations should, where possible, include the definitions right there in the header file. He's not saying this is a valid alternate style, or even a slightly better style, but rather this is the new universally-accepted style that everyone is now using for C++.
I'm not as limber as I used to be, so I'm not really anxious to scrabble up onto this bandwagon of his until I see a few more people up there with him. So how common is this idiom really?
Just to give some structure to the answers: Is it now The Way™, very common, somewhat common, uncommon, or bug-out crazy?
Your coworker is wrong: the common way is and always has been to put code in .cpp files (or whatever extension you like) and declarations in headers.
There is occasionally some merit to putting code in the header: it can allow more clever inlining by the compiler. But at the same time, it can destroy your compile times since all code has to be processed every time it is included by the compiler.
Finally, it is often annoying to have circular object relationships (sometimes desired) when all the code is in the headers.
Bottom line, you were right, he is wrong.
EDIT: I have been thinking about your question. There is one case where what he says is true. templates. Many newer "modern" libraries such as boost make heavy use of templates and often are "header only." However, this should only be done when dealing with templates as it is the only way to do it when dealing with them.
EDIT: Some people would like a little more clarification, here's some thoughts on the downsides to writing "header only" code:
If you search around, you will see quite a lot of people trying to find a way to reduce compile times when dealing with boost. For example: How to reduce compilation times with Boost Asio, which is seeing a 14s compile of a single 1K file with boost included. 14s may not seem to be "exploding", but it is certainly a lot longer than typical and can add up quite quickly when dealing with a large project. Header only libraries do affect compile times in a quite measurable way. We just tolerate it because boost is so useful.
Additionally, there are many things which cannot be done in headers only (even boost has libraries you need to link to for certain parts such as threads, filesystem, etc). A primary example is that you cannot have simple global objects in header-only libs (unless you resort to the abomination that is a singleton) as you will run into multiple definition errors. NOTE: C++17's inline variables will make this particular example doable in the future.
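For reference, a minimal sketch of that C++17 feature (the variable name is made up):

// logging.h
#ifndef LOGGING_H
#define LOGGING_H

// C++17: an inline variable may be defined in a header that is included by
// many cpp files without causing multiple-definition errors at link time.
inline int g_log_level = 0;

#endif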
As a final point, when using boost as an example of header only code, a huge detail often gets missed.
Boost is a library, not user-level code, so it doesn't change that often. In user code, if you put everything in headers, every little change will cause you to have to recompile the entire project. That's a monumental waste of time (and is not the case for libraries that don't change from compile to compile). When you split things between header/source and, better yet, use forward declarations to reduce includes, you can save hours of recompiling when added up across a day.
The day C++ coders agree on The Way, lambs will lie down with lions, Palestinians will embrace Israelis, and cats and dogs will be allowed to marry.
The separation between .h and .cpp files is mostly arbitrary at this point, a vestige of compiler optimizations long past. To my eye, declarations belong in the header and definitions belong in the implementation file. But, that's just habit, not religion.
Code in headers is generally a bad idea since it forces recompilation of all files that includes the header when you change the actual code rather than the declarations. It will also slow down compilation since you'll need to parse the code in every file that includes the header.
A reason to have code in header files is that it's generally needed for the keyword inline to work properly, and when using templates that are instantiated in other cpp files.
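A small, hypothetical header showing those two cases:

// math_utils.h -- the two cases where the definition belongs in the header
#ifndef MATH_UTILS_H
#define MATH_UTILS_H

// inline: the same definition may appear in every cpp that includes this
inline int twice(int x) { return 2 * x; }

// template: the definition must be visible wherever it is instantiated
template <typename T>
T clamp_to_zero(T value) { return value < T(0) ? T(0) : value; }

#endif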
What might be informing your coworker is the notion that most C++ code should be templated to allow for maximum usability. And if it's templated, then everything will need to be in a header file, so that client code can see it and instantiate it. If it's good enough for Boost and the STL, it's good enough for us.
I don't agree with this point of view, but it may be where it's coming from.
I think your co-worker is smart and you are also correct.
The useful things I found that putting everything into the headers is that:
No need for writing & sync headers and sources.
The structure is plain and no circular dependencies force the coder to make a "better" structure.
Portable, easy to embedded to a new project.
I do agree with the compiling time problem, but I think we should notice that:
Changes to a source file are very likely to change the header files too, which leads to the whole project being recompiled again.
Compiling speed is much faster than before. And if you have a project that takes a long time to build and is built frequently, it may indicate that your project design has flaws. Separating the tasks into different projects and modules can avoid this problem.
Lastly, I just want to support your co-worker, from my personal point of view.
Often I'll put trivial member functions into the header file, to allow them to be inlined. But to put the entire body of code there, just to be consistent with templates? That's plain nuts.
Remember: A foolish consistency is the hobgoblin of little minds.
As Tuomas said, your header should be minimal. To be complete I will expand a bit.
I personally use 4 types of files in my C++ projects:
Public:
Forwarding header: in the case of templates etc., this file gets the forward declarations that will appear in the header.
Header: this file includes the forwarding header, if any, and declares everything that I wish to be public (and defines the classes...)
Private:
Private header: this file is a header reserved for the implementation; it includes the header and declares the helper functions / structures (for Pimpl, for example, or predicates). Skip if unnecessary.
Source file: it includes the private header (or header if no private header) and defines everything (non-template...)
Furthermore, I couple this with another rule: Do not define what you can forward declare. Though of course I am reasonable there (using Pimpl everywhere is quite a hassle).
It means that I prefer a forward declaration over an #include directive in my headers whenever I can get away with them.
Finally, I also use a visibility rule: I limit the scopes of my symbols as much as possible so that they do not pollute the outer scopes.
Putting it altogether:
// example_fwd.hpp
// Here necessary to forward declare the template class,
// you don't want people to declare them in case you wish to add
// another template symbol (with a default) later on
class MyClass;
template <class T> class MyClassT;
// example.hpp
#include "project/example_fwd.hpp"
// Those can't really be skipped
#include <string>
#include <vector>
#include "project/pimpl.hpp"
// Those can be forward declared easily
#include "project/foo_fwd.hpp"
namespace project { class Bar; }
namespace project
{
    class MyClass
    {
    public:
        struct Color // Limiting scope of enum
        {
            enum type { Red, Orange, Green };
        };
        typedef Color::type Color_t;

    public:
        MyClass(); // because of pimpl, I need to define the constructor

    private:
        struct Impl;
        pimpl<Impl> mImpl; // I won't describe pimpl here :p
    };

    template <class T> class MyClassT: public MyClass {};
} // namespace project
// example_impl.hpp (not visible to clients)
#include "project/example.hpp"
#include "project/bar.hpp"
template <class T> void check(MyClassT<T> const& c) { }
// example.cpp
#include "example_impl.hpp"
// MyClass definition
The lifesaver here is that most of the time the forward header is useless: it is only necessary in the case of a typedef or a template, and so is the implementation header ;)
To add more fun you can add .ipp files which contain the template implementation (that is being included in .hpp), while .hpp contains the interface.
Apart from templatized code (depending on the project this can be the majority or minority of files) there is normal code, and there it is better to separate the declarations and definitions. Provide also forward declarations where needed - this may have an effect on the compilation time.
Generally, when writing a new class, I will put all the code in the class, so I don't have to look in another file for it. After everything is working, I break the bodies of the methods out into the cpp file, leaving the prototypes in the hpp file.
I personally do this in my header files:
// class-declaration
// inline-method-declarations
I don't like mixing the code for the methods in with the class as I find it a pain to look things up quickly.
I would not put ALL of the methods in the header file. The compiler will (normally) not be able to inline virtual methods and will (likely) only inline small methods without loops (totally depends on the compiler).
Doing the methods in the class is valid... but from a readability point of view I don't like it. Putting the methods in the header does mean that, when possible, they will get inlined.
I think that it's absolutely absurd to put ALL of your function definitions into the header file. Why? Because the header file is used as the PUBLIC interface to your class. It's the outside of the "black box".
When you need to look at a class to reference how to use it, you should look at the header file. The header file should give a list of what it can do (commented to describe the details of how to use each function), and it should include a list of the member variables. It SHOULD NOT include HOW each individual function is implemented, because that's a boat load of unnecessary information and only clutters the header file.
If this new way is really The Way, we might have been running into different direction in our projects.
Because we try to avoid all unnecessary things in headers. That includes avoiding header cascade. Code in headers will probably need some other header to be included, which will need another header and so on. If we are forced to use templates, we try to avoid littering headers with template stuff too much.
Also we use "opaque pointer"-pattern when applicable.
With these practices we can do faster builds than most of our peers. And yes... changing code or class members will not cause huge rebuilds.
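For readers unfamiliar with it, here's a bare-bones sketch of the opaque-pointer (pimpl) pattern; all names are made up for illustration:

// engine.h
class Engine {
public:
    Engine();
    ~Engine();
    void run();
private:
    struct Impl;   // only declared here
    Impl *impl_;   // clients never see the members, so changing them
};                 // does not force clients to recompile

// engine.cpp (copying and assignment omitted for brevity)
#include "engine.h"

struct Engine::Impl { int state = 0; };

Engine::Engine() : impl_(new Impl) {}
Engine::~Engine() { delete impl_; }
void Engine::run() { ++impl_->state; }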
I put all the implementation out of the class definition. I want to have the doxygen comments out of the class definition.
IMHO, he has merit ONLY if he's doing templates and/or metaprogramming. There are plenty of reasons already mentioned to limit header files to just declarations. They're just that... headers. If you want to include code, you compile it as a library and link it up.
Doesn't that really depend on the complexity of the system, and the in-house conventions?
At the moment I am working on a neural network simulator that is incredibly complex, and the accepted style that I am expected to use is:
Class definitions in classname.h
Class code in classnameCode.h
executable code in classname.cpp
This splits up the user-built simulations from the developer-built base classes, and works best in the situation.
However, I'd be surprised to see people do this in, say, a graphics application, or any other application whose purpose is not to provide users with a code base.
Template code should be in headers only. Apart from that, all definitions except inlines should be in .cpp files. The best argument for this would be the std library implementations, which follow the same rule. You would not disagree that the std lib developers would be right about this.
I think your co-worker is right as long as he does not go as far as writing executable code in the header.
The right balance, I think, is to follow the path indicated by GNAT Ada, where the .ads file gives a perfectly adequate interface definition of the package for its users and for its children.
By the way, Ted, have you had a look on this forum at the recent question on the Ada binding to the CLIPS library you wrote several years ago, which is no longer available (the relevant Web pages are now closed)? Even if made for an old CLIPS version, this binding could be a good starting example for somebody willing to use the CLIPS inference engine within an Ada 2012 program.