Are functions in C/C++ headers a no-go? - c++

I'm working on a very tiny piece of C/C++ source code. The program reads input values from stdin, processes them with an algorithm and writes the results to stdout.
I would just implement all that in a single file, but I also want test cases for the algorithm (not the input/output reading), so I have the following files in my project:
main.cpp
sort.hpp
sort_test.cpp
I implement the algorithm in sort.hpp right away, no sort.cpp. It's rather short and doesn't have any dependencies.
Would you say that, in some cases, functions defined in headers are okay, even if they are sophisticated algorithms and not just simple accessors/mutators? Or is there a reason I should avoid this? When should I move code from header to source file?

There is nothing wrong with having functions in header files, as long as you understand the tradeoff. Putting them in a header file means they'll have to be compiled (and recompiled) in any translation unit that includes the header. (and they have to be declared inline, or you will get linker errors.)
In projects with many translation units, that may add up to a noticeable slowdown in compile times, if you do it a lot.
On the other hand, it ensures that the function definition is visible everywhere the function is called -- and that means that it can be trivially inlined, so the resulting program may run faster.
And finally, with function templates, you typically have no realistic alternative. The definition must be visible at the call site, and the only practical way to achieve that is to put it in a header.
A final consideration is that header-only libraries are easier to deploy and use. You don't need to link against anything, you don't have to worry about ABI's or anything else. You just add the headers to your project, include them and off you go.
Quite a few popular libraries use a header-only strategy.
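For the setup described in the question, a minimal sketch of what such a sort.hpp might look like (the function name and the particular algorithm are invented for illustration):
// sort.hpp - a hypothetical header-only implementation
#ifndef SORT_HPP
#define SORT_HPP

#include <cstddef>
#include <vector>

// marked inline so that including this header from both main.cpp and
// sort_test.cpp does not violate the one definition rule
inline void insertion_sort(std::vector<int>& values)
{
    for (std::size_t i = 1; i < values.size(); ++i) {
        int key = values[i];
        std::size_t j = i;
        while (j > 0 && values[j - 1] > key) {
            values[j] = values[j - 1];
            --j;
        }
        values[j] = key;
    }
}

#endif // SORT_HPP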

When you put functions in headers you have to make sure to declare them inline. This is required to avoid a duplicate definition error when more than one .cpp file includes that header file. Generally you should only put small functions inside header files, because they will be compiled for each .cpp file that includes the header, which slows down compilation and also results in code bloat: a larger executable file.

It's OK to put any function in the header as long as it's inline. Things such as functions defined inside class { } and templates are implicitly inline.
If the resulting application becomes too large, then optimize the code size. Optimizing before there is a problem is an anti-pattern, especially when there is a benefit to doing it "your way," and the fix is as simple as moving from one file to another and erasing inline.
Of course, if you want to distribute the code as a library, then deciding between a header, static library, or dynamic library binary is an important decision affecting the users.
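To illustrate that last point, a hedged before/after sketch (the function and file names are invented): the same function, first header-only, then split out with inline erased.
// before: greet.hpp (header-only, so the definition must be inline)
#include <string>

inline std::string greet(const std::string& name)
{
    return "Hello, " + name;
}

// after: greet.hpp (declaration only)
#include <string>

std::string greet(const std::string& name);

// after: greet.cpp (the single definition; inline is erased)
#include "greet.hpp"

std::string greet(const std::string& name)
{
    return "Hello, " + name;
}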

The vast majority of the boost libraries are header-only, so I'd say: Yes, this is an established and accepted practice. Just don't forget to inline.

That really is a style choice. But putting it in the header does mean it can be inlined at the call site rather than compiled as an out-of-line function. If you want that same functionality, you can use the inline keyword:
inline int max(int a, int b)
{
    return (a > b) ? a : b;
}
http://en.wikipedia.org/wiki/Inline_function

The reason you should avoid this in general (for non inline functions) is because multiple source files will be including your header, creating linker errors.
It doesn't matter if you have a pragma once or similar trick - the duplication will show up if you have more than one compilation unit (i.e. .cpp file) including the same header.
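A minimal sketch of that failure mode (file and function names invented):
// util.h - include guard present, but the function is not inline
#ifndef UTIL_H
#define UTIL_H

int twice(int x) { return 2 * x; }

#endif

// a.cpp
#include "util.h"

// b.cpp
#include "util.h"

// Linking a.o and b.o together fails with a "multiple definition of
// twice(int)" style error: each translation unit compiled its own copy
// of the definition, and the include guard cannot prevent that.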

If you wish to inline the function, it MUST be in the header else it can't get inlined.
If you publish a header with your libraries and the header has some sort of implementation in it, you can be sure that after a few years, if you change the implementation and it doesn't work exactly the same way as it did before, some people's code will break, since they will have come to rely on the implementation they saw in the header. Yeah, I know one should not do it, but many people do look in the header for the implementation and other behaviour they can exploit/use in an unintended way to overcome some problem they are having.
If you are planning to use templates then you have no choice but to put it all in the header. (This might not be necessary if your compiler supports export templates, but there is only one I know of.)

It's OK to have the implementation in the header; it depends on what you need. If you separate the definition into a different file, the compiler will create a symbol with external linkage; if you don't want that, you can define the functions inside the header itself. But you would be wasting some amount of memory in the code segment: if you include this header file in two different files, both files' code segments will carry this function definition.
And if the same name ends up defined in more than one translation unit, it's going to be a problem - then you have to use inline.

Related

Motivating real world examples of the 'inline' specifier?

Background: The C++ inline keyword does not determine if a function should be inlined.
Instead, inline permits you to provide multiple definitions of a single function or variable, so long as each definition occurs in a different translation unit.
Basically, this allows definitions of global variables and functions in header files.
Are there some examples of why I might want to write a definition in a header file?
I've heard that there might be templating examples where it's impossible to write the definition in a separate cpp file.
I've heard other claims about performance. But is that really true? Since, to my knowledge, the use of the inline keyword doesn't guarantee that the function call is inlined (and vice versa).
I have a sense that this feature is probably primarily used by library writers trying to write wacky and highly optimized implementations. But are there some examples?
It's actually simple: you need inline when you want to write a definition (of a function, or of a variable since C++17) in a header. Otherwise you would violate the ODR as soon as your header is included in more than one translation unit. That's it. That's all there is to it.
Of note is that some entities are implicitly declared inline like:
methods defined inside the body of the class
template functions and variables
constexpr functions (and, since C++17, constexpr static data members)
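A quick sketch of those cases in a single header (names invented):
// widgets.hpp
#pragma once

struct Widget {
    int size() const { return n; }      // defined inside the class: implicitly inline
    int n = 0;
};

template <typename T>
T largest(T a, T b) { return a < b ? b : a; }   // template: its definition may appear in many TUs

constexpr int answer() { return 42; }   // constexpr functions are implicitly inline

inline int call_count = 0;              // C++17 inline variable, definable in a header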
Now the question becomes why and when would someone want to write definitions in the header instead of separating declarations in headers and definitions in source code files. There are advantages and disadvantages to this approach. Here are some to consider:
optimization
Having the definition in a source file means that the code of the function is baked into that TU's object code. It cannot be inlined at the call site outside of the TU that defines it. Having it in a header means that the compiler can inline it everywhere it sees fit, or generate different code for the function depending on the context where it is called. The same can be achieved with LTO within an executable or library, but for libraries the only option for enabling this optimization is having the definitions in the header.
library distribution
Besides enabling more optimizations in a library, having a header-only library (when it's possible) means an easier way to distribute that library. All the user has to do is download the headers folder and add it to the include path of his/her project. In the case of a non-header-only library things become more complicated, because you can't mix and match binaries compiled by different compilers, or even by the same compiler with different flags. So you either have to distribute your library as full source code along with a build tool, or have the library compiled in many variants (CPU architecture/OS/compiler/compiler-flag combinations).
human preference
Having to write the code only once is considered by some (me included) an advantage: both from a code documentation perspective and from a maintenance perspective. Others consider that separating declarations from definitions is better. One argument is that it achieves separation of interface vs. implementation, but that is just not the case: in a header you need to have private member declarations even if those aren't part of the interface.
compile time performance
Having all the code in headers means duplicating it in every TU. This is a real problem when it comes to compilation time: header-heavy C++ projects are notorious for slow builds. It also means that a modification of a function definition triggers the recompilation of all the TUs that include it, as opposed to just one TU in the case of a definition in a source file. Precompiled headers try to solve this problem, but the solutions are not portable and have problems of their own.
If the same function definition appears in multiple compilation units then it needs to be inline otherwise you get a linking error.
You need the inline keyword e.g. for function templates if you want to make them available using a header because then their definition also has to be in the header.
The below statement might be a bit oversimplified because compilers and linkers are really complex nowadays, but to get a basic idea it is still valid.
A cpp file and the headers included by that cpp file form a compilation unit and each compilation unit is compiled individually. Within that compilation unit, the compiler can do many optimizations like potentially inlining any function call (no matter if it is a member or a free function) as long as the code still behaves according to the specification.
So if you place the function definition in the header you allow the compiler to know the code of that function and potentially do more optimizations.
If the definition is in another compilation unit the compiler can't do much and optimizations then can only be done at linking time. Link time optimizations are also possible and are indeed also done. And while link-time optimizations became better they potentially can't do as much as the compiler can do.
Header-only libraries have the big advantage that you do not need to provide project files with them; whoever wants to use the library just copies the headers into their project and includes them.
In short:
You're writing a library and you want it to be header-only, to make its use more convenient.
Even if it's not a library, in some cases you may want to keep some of the definitions in a header to make it easier to maintain (whether or not this makes things easier is subjective).
to my knowledge, the use of the inline keyword doesn't guarantee that the function call is inlined
Yes, defining it in a header (as inline) doesn't guarantee inlining. But if you don't define it in a header, it will never be inlined (unless you're using link-time optimizations). So:
You want the compiler to be able to inline the functions, if it decides to.
Also, it may give the compiler more knowledge about a function:
maybe it never throws, but is not marked noexcept;
maybe several consecutive calls can be merged into one (there's no side effects, etc), but __attribute__((const)) is missing;
maybe it never returns, but [[noreturn]] is missing;
...
there might be templating examples where it's impossible to write the definition in a separate cpp file.
That's true for most templates. They automatically behave as if they were inline, so you don't need to specify it explicitly. See Why can templates only be implemented in the header file? for details.

Ways not to write function headers twice?

I've got a C/C++ question, can I reuse functions across different object files or projects without writing the function headers twice? (one for defining the function and one for declaring it)
I don't know much about C/C++, Delphi and D. I assume that in Delphi or D, you would just write once what arguments a function takes and then you can use the function across diferent projects.
And in C you need to write the function declaration again in a header file, right? Is there a good tool that will create header files from C sources? I've got one, but it's not preprocessor-aware and not very strict. And I've tried a macro technique that worked rather badly.
I'm looking for ways to program in C/C++ like described here http://www.digitalmars.com/d/1.0/pretod.html
IMHO, generating the headers from the source is a bad idea and is impractical.
Headers can contain more information than just function names and parameters.
Here are some examples:
a C++ header can define an abstract class for which a source file may be unneeded
A template can only be defined in a header file
Default parameters are only specified in the class definition (thus in the header file)
You usually write your header, then write the implementation in a corresponding source file.
I think doing it the other way around is counter-intuitive and doesn't fit with the spirit of C or C++.
The only exception I can see to that is static functions. A static function only appears in its source file (.c or .cpp) and can't (obviously) be used elsewhere.
While I agree it is often annoying to copy the header definition of a method/function to the source file, you can probably configure your code editor to ease this. I use Vim and a quick script helped me with this a lot. I guess a similar solution exists for most other editors.
Anyway, while this can seem annoying, keep in mind it also gives a greater flexibility. You can distribute your header files (.h, .hpp or whatever) and then transparently change the implementation in source files afterward.
Also, just to mention it, there is no such thing as C/C++: there is C and there is C++; those are different languages (which indeed share much, but still).
It seems to me that you don't really need/want to auto-generate headers from source; you want to be able to write a single file and have a tool that can intelligently split that into a header file and a source file.
Unfortunately, I'm not aware of any such tool. It's certainly possible to write one - but you'd need a full C++ front end. You could try writing something using clang - but it would be a significant amount of work.
Considering you have declared some functions and written their implementation, you will have a .c/.cpp file and a header .h file.
What you must do in order to use those functions:
Create a library (DLL/.so or static library .a/.lib - for now I recommend a static library for ease of use) from the files where the implementation resides
Use the header file (#include it) (you don't need to rewrite the header file again) in your programs to obtain the function definitions and link with your library from step 1.
Though this is an example for Visual Studio, it makes perfect sense for other development environments also.
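A minimal sketch of that workflow, with invented names (the actual build/link commands depend on your environment):
// mathutil.h - the declaration, written once
#ifndef MATHUTIL_H
#define MATHUTIL_H

int add(int a, int b);

#endif

// mathutil.cpp - the single definition, compiled into the static library
#include "mathutil.h"

int add(int a, int b) { return a + b; }

// main.cpp - a client program: include the header, link against the library
#include <iostream>
#include "mathutil.h"

int main()
{
    std::cout << add(2, 3) << '\n';
}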
This seems like a rudimentary question, so assuming I have not mis-read,
Here is a basic example of re-use, to answer your first question:
#include "stdio.h"
int main( int c, char ** argv ){
puts( "Hello world" );
}
Explanation:
1. stdio.h is a C header file containing (among others) the declaration of a function called puts().
2. in main, puts() is called, using the declaration pulled in by the #include.
Some compilers (including gcc I think ) have an option to generate headers.
There is always very much confusion about headers and source-files in C++. The links I provided should help to clear that up a little.
If you are in the situation that you want to extract headers from source-file, then you probably went about it the wrong way. Usually you first declare your function in a header-file, and then provide an implementation (definition) for it in a source-file. If your function is actually a method of a class, you can also provide the definition in header file.
Technically, a header file is just a bunch of text that is actually inserted into the source file by the preprocessor:
#include <vector>
tells the preprocessor to insert the contents of the file vector at the exact place where the #include appears. This is really just text replacement. So, header files are not some kind of special language construct. They contain normal code. But by putting that code into a separate file, you can easily include it in other files using the preprocessor.
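For instance, a tiny illustration with invented file contents:
// point.h
struct Point { int x; int y; };

// main.cpp as you write it
#include "point.h"
Point origin{0, 0};

// main.cpp as the compiler actually sees it after preprocessing
struct Point { int x; int y; };
Point origin{0, 0};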
I think it's a good question which is what led me to ask this: Visual studio: automatically update C++ cpp/header file when the other is changed?
There are some refactoring tools mentioned but unfortunately I don't think there's a perfect solution; you simply have to write your function signatures twice. The exception is when you are writing your implementations inline, but there are reasons why you can't or shouldn't always do this.
You might be interested in Lazy C++. However, you should do a few projects the old-fashioned way (with separate header and source files) before attempting to use this tool. I considered using it myself, but then figured I would always be accidentally editing the generated files instead of the lzz file.
You could just put all the definitions in the header file...
This goes against common practice, but is not unheard of.

Should every C or C++ file have an associated header file?

Should every .C or .cpp file have a header (.h) file for it?
Suppose there are following C files :
Main.C
Func1.C
Func2.C
Func3.C
where main() is in Main.C file. Should there be four header files
Main.h
Func1.h
Func2.h
Func3.h
Or should there be only one header file for all .C files?
What is a better approach?
For a start, it would be unusual to have a main.h since there's usually nothing that needs to be exposed to the other compilation units at compile time. The main() function itself needs to be exposed for the linker or start-up code but they don't use header files.
You can have either one header file per C file or, more likely in my opinion, a header file for a related group of C files.
One example of that is if you have a BTree implementation and you've put add, delete, search and so on in their own C files to minimise recompilation when the code changes.
It doesn't really make sense in that case to have separate header files for each C file, as the header is the API. In other words, it's the view of the library as seen by the user. People who use your code generally care very little about how you've structured your source code, they just want to be able to write as little code as possible to use it.
Forcing them to include multiple distinct header files just so they can create, insert into, delete from, and search, a tree, is likely to have them questioning your sanity :-)
You would be better off with one btree.h file and a single btree.lib file containing all of the BTree object files that were built from the individual C files.
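A hedged sketch of what such a single btree.h might look like (the function names here are invented; the definitions would live in the separate C files mentioned above):
// btree.h - the one header users include, however many source files
// implement it internally
#ifndef BTREE_H
#define BTREE_H

struct BTree;   // opaque to callers; defined in the implementation files

BTree* btree_create(void);
void   btree_destroy(BTree* tree);
void   btree_add(BTree* tree, int key);
void   btree_delete(BTree* tree, int key);
bool   btree_search(const BTree* tree, int key);

#endif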
Another example can be found in the standard C headers.
We don't know for certain whether there are multiple C files for all the stdio.h functions (that's how I'd do it but it's not the only way) but, even if there were, they're treated as a unit in terms of the API.
You don't have to include stdio_printf.h, stdio_fgets.h and so on - there's a single stdio.h for the standard I/O part of the C runtime library.
Header files are not mandatory.
#include simply copies/pastes whatever file is included (even .c source files).
Commonly used in real-life projects are global header files like config.h and constants.h that contain commonly used information such as compile-time flags and project-wide constants.
A good design of a library API would be to expose an official interface with one set of header files and use an internal set of header files for implementation with all the details. This adds a nice extra layer of abstraction to a C library without adding unnecessary bloat.
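A small sketch of that split, with invented names:
// mylib.h - the official interface, the only header users ever include
#ifndef MYLIB_H
#define MYLIB_H

int mylib_process(const char* input);

#endif

// mylib_internal.h - ships with the sources only, never installed
#ifndef MYLIB_INTERNAL_H
#define MYLIB_INTERNAL_H

#include "mylib.h"

// helpers shared between the implementation files, hidden from users
int mylib_validate(const char* input);

#endif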
Use common sense. C/C++ is not really for the ones without it.
I used to follow the "it depends" trend until I realized that consistency, uniformity and simplicity are more important than saving the effort to create a file, and that "standards are good even when they are bad".
What I mean is the following: a .cpp/.h pair of files is pretty much what all "modules" end up as anyway. Making the existence of both a requirement saves a lot of confusion and bad engineering.
For instance, when I see the interface of something in a header file, I know exactly where to search for / place its implementation. Conversely, if I need to expose the interface of something that was previously hidden in a .cpp file (e.g. a static function becoming global), I know exactly where to put it.
I've seen too many bad consequences of not following this simple rule. Unnecessary inline functions, breaking any kind of rules about encapsulation, (non)separation of the interface and implementation, misplaced code, to name a few -- all due to the fact that the appropriate sibling header or cpp file was never added.
So: always define both .h and .c files. Make it a standard, follow it, and safely rely on it. Life is much simpler this way, and simplicity is the most important thing in software.
Generally it's best to have a header file for each .c file, containing the declarations for functions etc in the .c file that you want to expose. That way, another .c file can include the .h file for the functions it needs, and won't need to be recompiled if a header file it didn't include got changed.
Generally there will be one .h file for each .c/.cpp file.
Bjarne Stroustrup explains it beautifully in his book "The C++ Programming Language"...
The single header style of physical partitioning is most useful when the program is small and its parts are not intended for separate use. When namespaces are used, the logical structure of the program can still be explained in a single header file.
For larger programs, the single header file approach is unworkable in a conventional file-based development environment. A change to the common header forces recompilation of the whole program, and updates of that single header by several programmers are error prone. Unless strong emphasis is placed on programming styles relying heavily on namespaces and classes, the logical structure deteriorates as the program grows.
An alternative physical organization lets each logical module have its own header defining the facilities it provides. Each .c file then has a corresponding .h file specifying what it provides (its interface). Each .c module includes its own .h file and usually also other .h files that specify what it needs from other modules in order to implement the services advertised in its interface. This physical organization corresponds to the logical organization of a module. The multiple header approach makes it easy to determine the dependencies. The single header approach forces us to look at every declaration used by any module and decide if it's relevant. The simple fact is that maintenance of code is invariably done with incomplete information and from a local perspective.
The better localization leads to less information needed to compile a module and thus faster compilation.
It depends. Usually your reason for having separate .c files will dictate whether you need separate .h files.
Generally cpp/c files are for implementation and h/hpp files (hpp is not used often) are for headers (prototypes and declarations only). Cpp files don't always have to have a header file associated with them, but they usually do, as the header file acts like a bridge between cpp files so that each cpp file can use code from another cpp file.
One thing that should be strongly enforced is having no code (definitions) within a header file! There have been too many times where header files break the build in a project of any size because of redefinitions, and that happens simply by including the header file in two different cpp files. Header files should always be designed to be included multiple times as well. Cpp files should never be included.
It's all about what code needs to be aware of what other code. You want to reduce the amount other files are aware of to the bare minimum for them to do their jobs.
They need to know that a function exists, what types they need to pass into it, and what types it will return, but not what it's doing internally. Note that it's also important from the programmer's point of view to know what those types actually mean (e.g. which int is the row, and which is the column), but the code itself doesn't care. This is why naming the function and parameters sensibly is worthwhile.
As others have said, if there's nothing in a cpp file worth exposing to other parts of the code, as is normally the case with main.c, then there's no need for a header file.
It's occasionally worth putting everything you want to expose in a single header file (e.g, Func1and2and3.h), so that anything that knows about Func1 knows about Func2 as well, but I'm personally not keen on this, as it means that you tend to load a hell of a lot of junk along with the stuff you actually want.
Summary:
Imagine that you trust that someone can write code and that their algorithms, design, etc. are all good. You want to use code they've written. All you need to know is what to call to get something to happen, what you should pass to it, and what you'll get back. That's what needs to go in the header files.
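A tiny sketch of that idea (the names are made up):
// grid.h - everything a caller needs: what to pass and what comes back
int cell_value(int row, int column);

// grid.cpp - how it happens stays out of the header
#include "grid.h"

static int grid[10][10];   // internal detail, invisible to users of the header

int cell_value(int row, int column)
{
    return grid[row][column];
}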
I like putting interfaces into header files and implementation in cpp files. I don't like writing C++ where I need to add member variables and prototypes to the header and then the method again in the C++. I prefer something like:
module.h
struct IModuleInterface : public IUnknown
{
    virtual void SomeMethod () = 0;
};
module.cpp
class ModuleImpl : public IModuleInterface,
                   public CObject // a common object to do the reference
                                  // counting stuff for IUnknown (so we
                                  // can stick this object in a smart
                                  // pointer).
{
public:
    ModuleImpl () : m_MemberVariable (0)
    {
    }

    int m_MemberVariable;

    void SomeInternalMethod ()
    {
        // some internal code that doesn't need to be in the interface
    }

    void SomeMethod ()
    {
        // implementation for the method in the interface
    }

    // whatever else we need
};
I find this is a really clean way of separating implementation and interface.
There is no better approach, only common and less common cases.
The more common case is when you have a class/function interface to declare/define. It's better to have only one .cpp/.c with the definitions, and one header for the declarations.
Giving them the same name makes easy to understand that they are directly related.
But that's not a "rule", that's the common way and the most efficient in almost all cases.
Now in some cases (like template classes or some tiny struct definitions) you won't need any .c/.cpp file, just the header. We often have a virtual class interface defined in only a header file, for example, with only pure virtual functions or trivial functions.
And in other rare cases (like a hypothetical main.c/.cpp file) it isn't always required to allow code from an external compilation unit to call functions of a given compilation unit. The main function is an example (no header/declaration needed), but there are others, mostly code that "connects all the other parts together" and is not called by other parts of the application. That's very rare, but in those cases a header makes no sense.
If your file exposes an interface - that is, if it has functions which will be called from other files - then it should have a header file. Otherwise, it shouldn't.
As already noted, generally, there will be one header (.h) file for each source (.c or .cpp) file.
However, you should look at the cohesiveness of the files. If the various source files provide separate, individually reusable sets of functions - an ideal organization - then you should certainly have one header per file. If, however, the three source files provide a composite set of functions (that is too big to fit into one file), then you would use a more complex organization. There would be one header for the external services used by the main program - and that would be used by other programs needing the same services. There would also be a second header used by the cooperating source files that provides 'internal' definitions shared by those files.
(Also noted by Pax): The main program does not normally need its own header - no other source code should be using the services it provides; it uses the services provided by other files.
If you want your compiled code to be used from another compilation unit you will need the header files. There are some situations in which you do not need/want to have a header.
The first such case is a main.c/.cpp file. This file is not meant to be included and as such there is no need for a header file.
In some cases you can have a header file that defines the behavior of a set of different implementations that are loaded through a DLL at runtime. There will be different sets of .c/.cpp files that implement variations of the same header. This can be common in plugin systems.
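A hedged sketch of that pattern (names invented; the bodies are dummies, and the actual loading would be via dlopen/LoadLibrary or whatever your platform provides):
// codec.h - the shared header every plugin implements
#ifndef CODEC_H
#define CODEC_H

extern "C" const char* codec_name();
extern "C" int codec_encode(const unsigned char* data, int size);

#endif

// codec_fast.cpp - one implementation, built as its own shared library
#include "codec.h"

extern "C" const char* codec_name() { return "fast"; }
extern "C" int codec_encode(const unsigned char* /*data*/, int size) { return size; }

// codec_small.cpp - a different implementation of the same header
#include "codec.h"

extern "C" const char* codec_name() { return "small"; }
extern "C" int codec_encode(const unsigned char* /*data*/, int size) { return size / 2; }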
In general, I don't think there is any explicit relationship between .h and .c files. In many cases (probably most), a unit of code is a library of functionality with a public interface (.h) and an opaque implementation (.c). Sometimes a number of symbols are needed, like enums or macros, and you get a .h with no corresponding .c and in a few circumstances, you will have a lump of code with no public interface and no corresponding .h
In particular, there are a number of times when the headers or implementations (seldom both) are so big and hairy that they end up being broken into many smaller files, for the sake of readability and the programmer's sanity.

Writing function definition in header files in C++

I have a class which has many small functions. By small functions, I mean functions that don't do any processing but just return a literal value. Something like:
string Foo::method() const {
    return "A";
}
I have created a header file "Foo.h" and source file "Foo.cpp". But since the function is very small, I am thinking about putting it in the header file itself. I have the following questions:
Is there any performance or other issues if I put these function definition in header file? I will have many functions like this.
My understanding is when the compilation is done, compiler will expand the header file and place it where it is included. Is that correct?
If the function is small (the chance you would change it often is low), and if the function can be put into the header without including myriads of other headers (because your function depends on them), it is perfectly valid to do so. If you declare it inline, then the compiler is required to give it the same address in every compilation unit:
headera.h:
#include <string>

inline std::string method() {
    return "something";
}
Member functions are implicitly inline provided they are defined inside their class. The same is true for them: if they can be put into the header without hassle, you can indeed do so.
Because the code of the function is put into the header and is visible, the compiler is able to inline calls to it, that is, put the code of the function directly at the call site (not so much because you put inline before it, but because the compiler decides that way; putting inline is only a hint in that regard). That can result in a performance improvement, because the compiler now sees where arguments match variables local to the function and where arguments don't alias each other - and, last but not least, function frame allocation isn't needed anymore.
My understanding is when the compilation is done, compiler will expand the header file and place it where it is included. Is that correct?
Yes, that is correct. The function will be defined in every place where you include its header. The compiler will take care of putting only one instance of it into the resulting program, by eliminating the others.
Depending on your compiler and its settings it may do any of the following:
It may ignore the inline keyword (it is just a hint to the compiler, not a command) and generate stand-alone functions. It may do this if your functions exceed a compiler-dependent complexity threshold, e.g. too many nested loops.
It may decide that your stand-alone function is a good candidate for inline expansion.
In many cases, the compiler is in a much better position to determine if a function should be inlined than you are, so there is no point in second-guessing it. I like to use implicit inlining when a class has many small functions only because it's convenient to have the implementation right there in the class. This doesn't work so well for larger functions.
The other thing to keep in mind is that if you are exporting a class in a DLL/shared library (not a good idea IMHO, but people do it anyway) you need to be really careful with inline functions. If the compiler that built the DLL decides a function should be inlined you have a couple of potential problems:
The compiler building the program using the DLL might decide to not inline the function, so it will generate a symbol reference to a function that doesn't exist and the DLL will not load.
If you update the DLL and change the inlined function, the client program will still be using the old version of that function since the function got inlined into the client code.
There may be an increase in performance, because member functions implemented in the header file are implicitly inline. As you mentioned your functions are small, inlining will be quite beneficial for you, IMHO.
What you say about the compiler is also true. Other than inlining, there is no difference for the compiler between code in a header file and code in a .cpp file.
If your functions are that simple, make them inline, and you'll have to stick them in the header file anyway. Other than that, any conventions are just that - conventions.
Yes, the compiler does expand the header file where it encounters the #include statements.
It depends on the coding standards that apply in your case but:
Small functions without loops or other heavy work should be inlined for better performance (at the cost of slightly larger code, which matters for some constrained or embedded applications).
If you have the body of the function in the header (inside the class), it will be inline by default (which is a good thing when it comes to speed).
Before the object file is created by the compiler the preprocessor is called (-E option for gcc) and the result is sent to the compiler which creates the object out of code.
So the shorter answer is:
-- Defining functions in the header is good for speed (but not for space) --
C++ won’t complain if you do, but generally speaking, you shouldn’t.
When you #include a file, the entire content of the included file is inserted at the point of inclusion. This means that any definitions you put in your header get copied into every file that includes that header.
For small projects, this isn’t likely to be much of an issue. But for larger projects, this can make things take much longer to compile (as the same code gets recompiled each time it is encountered) and could significantly bloat the size of your executable. If you make a change to a definition in a code file, only that .cpp file needs to be recompiled. If you make a change to a definition in a header file, every code file that includes the header needs to be recompiled. One small change can cause you to have to recompile your entire project!
Sometimes exceptions are made for trivial functions that are unlikely to change (e.g. where the function definition is one line).
Source: http://archive.li/ACYlo (previous version of Chapter 1.9 on learncpp.com)

Programming style of method declaration of get/set method variables in C++?

Should you declare the getters/setters of the class inside the .h file and then define them in the .cpp, or do both in the .h file? Which style do you prefer and why? I personally like the latter, wherein all of them are in the .h and only methods that have logic other than getting/setting go in the .cpp.
For me it depends on who's going to be using the .h file. If it's a file largely internal to a module, then I tend to put the tiny methods in the header. If it's a more external header file that presents a more fixed API, then I'll put everything in the .cpp files. In this case, I'll often use the PIMPL Idiom for a full compilation firewall.
The trade-offs I see with putting them in the headers are:
Less typing
Easier inlining for the compiler (although compilers can sometimes do inlining between multiple translation units now anyway.)
More compilation dependencies
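For concreteness, the two styles being weighed look roughly like this (class and member names are invented):
// style 1: getter defined in the header, inside the class (implicitly inline)
// counter.h
class Counter {
public:
    int value() const { return value_; }
private:
    int value_ = 0;
};

// style 2: declaration in the header, definition in the .cpp
// counter.h
class Counter {
public:
    int value() const;
private:
    int value_ = 0;
};

// counter.cpp
#include "counter.h"

int Counter::value() const { return value_; }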
I would say that header files should be about interface, not implementation. I'd put them in the .cpp.
For me this depends on what I'm doing with the code. For code that I want maintained and to last over time, I put everything in the .cc file for the following reasons:
The .h file can remain sparse as documentation for people who want to look for function and method definitions.
My group's coding guidelines state that we put everything in the .cpp file and like to follow those, even if the function definition only takes one line. This eliminates guessing games about where things actually live, because you know which file you should examine.
If you're doing frequent recompiles of a big project, keeping the function definition in the .cpp file saves you some time compared to keeping function definitions in header files. This was relevant very recently for us, as we recently went through the code and added a lot of runtime assert statements to validate input data for our classes, and that required a lot of modification to getters and setters. If these method definitions had lived in header files, this would have turned into a clean recompile for us, which can take ~30 min on my laptop.
That's not to say that I don't play fast-and-dirty with the rules occasionally and put things in .h files when implementing something really fast, but for code I'm serious about I put all code (regardless of length) in the .cpp file. For big projects (some of) the rules are there for a reason, and following them can be a virtue.
Speaking of which, I just thought of yet another Perl script I can hack together to find violations of the coding guidelines. It's good to be popular. :)
I put all single-liners in the header as long as they do not require too many additional headers to be included (because they call methods of other classes).
Also, I do not try to squeeze all the code onto one line just so I can put most of the methods in the header :-)
But Josh mentioned a good reason to put them in the .cpp anyway: if the header is for external use.
I prefer to keep the .h file as clean as possible. Therefore, small functions that are as simple as get/set I often put in a separate file as inline-defined functions, and then include that file (for which I use the extension .inl) in the .h header file:
// foo.h
class foo
{
public:
    int bar() const;
private:
    int m_bar;
};

#include "foo.inl"

// foo.inl
inline
int foo::bar() const
{
    return m_bar;
}
I think that this gives the best of both worlds, at the same time hiding most of the implementation from the header, while still keeping the advantage of inlining simple code (as a rule of thumb I keep it within at most 3 statements).
I pretty much always follow the division of declaring them in the header, and defining them in the source. Every time I don't, I end up having to go back and do it anyway later.
I prefer to put them into the .cpp file, for the sake of fast compile/link times. Even tiny one-liners (empty virtual destructors!) can blow up your compile times, if they are instantiated a lot. In one project, I could cut the compile time by a few seconds by moving all virtual destructors into the .cpp files.
Since then, I'm sold on this, and I would only put them into the header again if a profiler tells me that I can profit from inlining. The only downside is that you need more typing, but if you create the .cpp file while you write the header, you can often just copy&paste the declarations and fill them out in the .cpp file, so it's not that bad. It's worse, of course, if you only later find out that you want to move stuff into a .cpp file.
A nice side effect is that reading stuff is simpler when you have only documentation and declarations in your header, especially if new developers join the project.
I use the following rule: header for declarations, source file for the implementation. This becomes especially relevant when your header will be used outside of the project: the more lightweight your header is, the more comfortable it is to use.