Is it good practice to include source files? [closed] - c++

I'm working with functions.
Is it good practice to write a function in another .cpp file and include it in the main one?
Like this: #include "lehel.cpp".
Is this OK, or should I write the functions directly in the main.cpp file?

A good practice is to separate functionality into separate Software Units so they can be reused and so that a change to one unit has little effect on other units.
If lehel.cpp is included by main.cpp, any change in lehel.cpp forces main.cpp to be recompiled. However, if lehel.cpp is compiled separately and linked in, a change to lehel.cpp does not force main.cpp to be recompiled; the object files only need to be linked together again.
IMHO, header files should contain information on how to use the functions. The source files should contain the implementations of the functions. The functions in a source file should be related by a theme. Also, keeping the size of the source files small will reduce the quantity of injected defects.

The established practice is to put the declarations of reusable functions in a .h or .hpp file and include that file wherever they're needed.
foo.cpp
int foo()
{
    return 42;
}
foo.hpp
#ifndef FOO_HPP // include guards
#define FOO_HPP
int foo();
#endif // FOO_HPP
main.cpp
#include "foo.hpp"
int main()
{
    return foo();
}
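With a typical toolchain the two translation units are compiled separately and then linked (a sketch assuming g++; the commands and the output name are illustrative):
g++ -c foo.cpp -o foo.o    # compile foo.cpp once
g++ -c main.cpp -o main.o  # compile main.cpp once
g++ foo.o main.o -o app    # link the object files
After editing foo.cpp, only foo.o has to be recompiled; app is then relinked from the existing object files.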
Including .cpp files is only sometimes used to split template definitions from declarations, but even this use is controversial, as there are counter-schemes of creating pairs (foo_impl.hpp and foo.hpp) or (foo.hpp and foo_fwd.hpp).
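As a sketch of the first counter-scheme (hypothetical names and contents; real projects vary):
twice.hpp
#ifndef TWICE_HPP
#define TWICE_HPP
template <typename T>
T twice(T value); // declaration only - this is what users read
#include "twice_impl.hpp" // definitions pulled in at the end of the header
#endif // TWICE_HPP
twice_impl.hpp
#ifndef TWICE_IMPL_HPP
#define TWICE_IMPL_HPP
template <typename T>
T twice(T value)
{
    return value + value;
}
#endif // TWICE_IMPL_HPP
Callers include only twice.hpp; the split keeps the user-facing header readable while the template definitions live elsewhere.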

Header files (.h) are designed to provide the information that will be needed in multiple files. Things like class declarations, function prototypes, and enumerations typically go in header files. In a word, "declarations".
Code files (.cpp) are designed to provide the implementation information that only needs to be known in one file. In general, function bodies, and internal variables that should/will never be accessed by other modules, are what belong in .cpp files. In a word, "implementations".
The simplest question to ask yourself to determine what belongs where is "if I change this, will I have to change code in other files to make things compile again?" If the answer is "yes" it probably belongs in the header file; if the answer is "no" it probably belongs in the code file.
https://stackoverflow.com/a/1945866/3323444
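A minimal sketch of the "will other files have to change?" test (hypothetical names):
counter.h
int next_id(); // prototype: this line is all that other files need to know
counter.cpp
#include "counter.h"
static int id = 0; // internal state, never accessed by other modules
int next_id()
{
    return ++id;
}
Changing the body of next_id() touches only counter.cpp; changing its signature would change counter.h and therefore every file that includes it.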
To answer your question: including a .cpp file is bad practice, because C++ gives you header files for exactly this purpose.

Usually it is not done (and headers are used). However, there is a technique called "amalgamation", which means including all (or bunch of) .cpp files into a single main .cpp file and building it as a single unit.
The reasons why it is sometimes done are:
sometimes actually faster compilation of the "amalgamated" big unit compared to building all files separately (one reason being, for example, that the headers are read only once instead of once for each .cpp file)
better optimization opportunities - the compiler sees all the source code as a whole, so it can make better optimization decisions across the amalgamated .cpp files (which might not be possible if each file were compiled separately)
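A minimal sketch of such an amalgamated unit (the file names are hypothetical):
// unity1.cpp - built as one big translation unit
#include "lexer.cpp"
#include "parser.cpp"
#include "codegen.cpp"
Compiling unity1.cpp compiles all three .cpp files as a single unit, so shared headers are parsed once and the optimizer sees everything together.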
I once used this to improve the compilation times of a bigger project: I created several (8) umbrella source files, each of which included a subset of the .cpp files, and built them in parallel. On a full rebuild, the project built about 40% faster.
However, as mentioned, a change to a single file forces a rebuild of the whole composed unit, which can be a disadvantage during continuous development.

Related

Why do classes usually contain function declarations instead of definitions [closed]

Why are functions usually declared in classes but not defined there?
class MyClass {
public:
    void MyFunction(); // function is declared, not defined
};
Instead of:
class MyClass {
public:
    void MyFunction() {
        std::cout << "This is a function" << std::endl;
    } // function is defined instead of just declared; no need for a .cpp file
};
Is the reason that it looks nicer or is easier to read?
The general (theoretical) reason is separation of concerns: to call a function (whether a class member function or not), the compiler only needs visibility of the function's interface, based on its signature (name, return type, argument types). The caller does not need visibility of the function's definition; as long as a matching definition exists somewhere in the program, the call is resolved when the program is linked.
Two common practical reasons are to maximise benefits of separate compilation and to allow distribution of a library without source code.
Separate compilation (over-simplistically, placing subsets of source code in distinct source files) is a feature of C++ that has numerous benefits in larger projects. It means having a set of separately compiled source files rather than throwing all the source for a program into a single file. One benefit is incremental builds: once all source files for a project have been compiled, the only files that need to be recompiled are those that have changed (recompiling an unchanged source file produces a functionally equivalent object file), after which everything is relinked. For large projects that are edited and rebuilt often, this is far cheaper than a universal build, which recompiles and relinks everything. Practically, even in moderately large projects (let's say projects that consist of a few hundred source files) the incremental build time can be measured in minutes while the universal rebuild time can be measured in days. The difference between the two equates to unproductive time (thumb-twiddling while waiting for a build to complete) for programmers. Programmer time is THE largest cost in significant projects, so unproductive programmer time is expensive.
In practice, in moderate or larger projects, function interfaces (function signatures) are quite stable (they change relatively infrequently) while function definitions are unstable (they change relatively frequently). Having a function definition in a header file means that header file is more likely to be edited as the program evolves. And, when a header file is edited, EVERY source file which includes that header file must be recompiled in order to rebuild (build management tools often work that way, because it's the least complicated way to detect dependencies between files in a project). That tends to result in larger build times - while the impact of recompiling all source files that include a changed header is not as large as doing a universal build, it can involve increasing the incremental build time from a few minutes to a few hours. Again, more unproductive time for programmers.
That's not to say that functions should never be inlined. It does mean that it is necessary to choose carefully which functions should be inlined (e.g. based on performance benefits identified by profiling, while avoiding functions that will be updated regularly) rather than, as this question suggests, defaulting to inlining.
The second common reason for not inlining functions is for distribution of libraries. For commercial (e.g. protecting intellectual property) and other reasons it is often preferable to distribute a compiled library and header files (the minimum needed for someone to use a library in another project) without distributing the complete source code. Placing functions into a header file increases the amount of source code that is distributed.
For non-template code, placing definitions in source files separate from the declaration allows for more efficient compilation. You have a single source file that needs to be compiled; so long as that file doesn't change, the output can be reused. The final link step isn't affected very much by having multiple input files; the symbols will need to be resolved from the same data structures whether there is one input file or many. It's harder with C++, but this can even allow distribution of binary-only libraries, that is, distributing headers containing only declarations and matching object files, without the implementing source at all.
On the other hand, if functions are defined inline (whether in the class body or outside with explicit 'inline' keyword) the linker will have to check its symbol table and toss duplicates. And that applies regardless of whether talking about templates or not. But templates need their definitions to be available in every translation unit that uses the template so their definitions will generally go in headers even with the cost of pruning duplicates at the link stage.
Generally, classes are declared in header files. If I have
// my_class.h
class MyClass {
public:
    void myFunction() {}
};
then a function defined inside the class body like this is implicitly inline, so including the header in two different .cpp files is not an outright error; but each translation unit compiles its own copy of MyClass::myFunction, and the linker has to merge the duplicates. Had the definition been written in the header outside the class body, without the inline keyword, the two copies really would run afoul of the one-definition rule and cause an error. Either way, the conventional practice is the same: declare the class methods in the header and write the implementation in the .cpp source file.
// my_class.h
class MyClass {
public:
    void myFunction();
};
// my_class.cpp
#include "my_class.h"
void MyClass::myFunction() {}
The rules are more complicated if the function is inline or a template function, in which case there's a compelling (and often required) reason to define the function in the header. Some people will define template functions in the class itself as you've suggested, but others prefer to write the template function definition at the bottom of the header, to be more consistent with the way we write non-template functions. It's more a matter of style in this case, so pick your preferred convention and stick to it.
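For example, the bottom-of-the-header style looks like this (a hypothetical sketch; the names are illustrative):
// widget.h
#ifndef WIDGET_H
#define WIDGET_H
template <typename T>
class Widget {
public:
    void store(const T& value); // declared here, like a non-template member
private:
    T value_;
};
// ...and defined at the bottom of the same header, mirroring the .cpp convention
template <typename T>
void Widget<T>::store(const T& value) { value_ = value; }
#endif // WIDGET_H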
Likewise, if the class is not mentioned in a header file (i.e. if the class is an implementation detail and only exists in one .cpp file), then you're free to do it either way, and you'll find folks that prefer both conventions.
In a word, separating implementation from declaration is always good.
In a large project, implementations in header files can cause multiple-definition problems.
Separating them is always the good way; it avoids a lot of future problems.

When to not create a separate interface (.h) and implementation (.c) in C? [closed]

I'm currently transitioning to working in C, primarily focused on developing large libraries. I'm coming from a decent amount of application based programming in C++, although I can't claim expertise in either language.
What I'm curious about is when and why many popular open source libraries choose not to organize their code in a 1-1 relationship between .h files and corresponding .c files -- even in instances where the .c isn't generating an executable.
In the past I'd been led to believe that structuring code in this manner is optimal not only for organization, but also for linking purposes -- and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
There is no inherent technical reason in C to provide .c and .h files in matched pairs. There is certainly no reason related to linking, as in conventional C usage neither .c nor .h files have anything directly to do with that.
It is entirely possible and potentially advantageous to collect declarations related to multiple .c files in a smaller number of .h files. Providing only one or a small number of header files makes it easier to use the associated library: you don't need to remember or look up which header you need for the declaration of each function, type, or variable.
There are at least three consequences that arise from doing that, however:
you make it harder to determine where to find the implementations of functions declared in collective headers.
you make your project more susceptible to mass rebuilding cascades, as most object files depend on one or more of a small number of headers, and changes or additions to your function signatures all affect one of that small number of headers.
the compiler has to spend more effort digesting one large header with all the library's declarations than digesting one or a small number of headers focused narrowly on the specific declarations used in a given .c file, as @ChrisBeck observed. This tends to be much less of a problem for C code than for C++ code, however.
You need a separate .h file only when something is included in more than one compilation unit.
A form of "keep things local unless you have to share" wisdom.
In the past I'd been led to believe that structuring code in this manner is optimal not only for organization, but also for linking purposes -- and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
In traditional C code, you will always put declarations in the .h files, and definitions in the .c files. This is indeed to optimize compilation -- the reason is that, it makes each compilation unit take the minimum amount of memory, since it only has the definitions that it needs to output code for, and if you manage includes properly, it only has the declarations it needs also. It also makes it simple to see that you aren't breaking the one definition rule.
On modern machines it's less important to do this purely to keep build times bearable -- machines now have a lot of memory.
In C++ you have templates, whose definitions generally live entirely in headers.
In recent years people have also been experimenting with so-called "unity builds", where one compilation unit includes all of the other source files and everything is built at once. See here: The benefits / disadvantages of unity builds?
So today, having 1-1 correspondence is mainly a style / organizational thing.
A really, really basic, but entirely realistic scenario where a 1-1 relation between .h and .c files is not required, or even desirable:
main.h
// A lib's/extension's/application's main header file
// for the user API -> opaque types
typedef struct _internal_str my_type;
// API functions
my_type *init_resource( void ); // some arguments will probably be required
// get helper resource -> not part of the API, but the lib uses it internally in all translation units
const struct helper_str *get_help( void );
Now this get_help function is, as the comment says, not part of the lib's API. All the .c files that make up the lib use it, though, and the get_help function is defined in the helper.c translation unit. That file might look something like this:
#include "main.h"
#include <third/party.h>
//static functions
static
third_party_type *init_external_resource( void )
{
//implement this
}
static
void cleanup_stuff(third_party_type *p)
{
third_party_free(p);
}
const struct helper_str *get_help( void )
{
//implementation of external function
}
Ok, so it's a convenience thing: not adding another .h file, because there's only 1 external function you're calling. But that's no good reason not to use a separate header file, right? Agreed. It's not a good reason.
However: Imagine that your code depends on this third party library a lot, and each component of whatever you're building uses a different part of this library. The "help" you need/want from this helper.c file might differ. That's when you could decide to create several header files, to control the way the helper.c file is being used internally in your project. For example: you've got some logging-stuff in translation units X and Y, these files might include a file like this:
//specific_help.h
char * err_to_log_msg(int error_nr);//relevant arguments, of course...
Whereas a file that doesn't come near output, but, for example, manages thread-safety or signals, might want to call a function in helper.c that frees some resources in case some event was detected (signals, keystrokes, mouse events... whatever). This file might include a header file like:
//system_help.h
void free_helper_resources(int level);
All of these headers link back to functions defined in helper.c, but you could end up with 10 header files for a single .c file.
Once you have these various headers exposing a selection of functions, you might end up adding specific typedefs to each of these headers, depending on how the two components interact... ah well, it's a matter of taste anyway.
Many people will just opt for a single header file to go with the helper.c file, and include that. They'll probably not use half of the functions they have access to, but they'll have fewer files to worry about.
On the other hand, if others start tinkering with the code, they might be tempted to add functions to a file where they don't belong: they might add logging functions to the signal/event handling files and vice versa.
In the end: use your common sense, don't expose more than you need to. It's easy to remove a static keyword and just add the prototype to a header file if you really need to.
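As a sketch of that last point (hypothetical names):
/* helper.c, before: internal-only, invisible to other translation units */
static int parse_flags( const char *s );
/* helper.c, after: `static` removed so the function has external linkage,
   and the prototype is added to whichever header should expose it
   (specific_help.h, for example) */
int parse_flags( const char *s );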

In C++, when you make multiple classes, are you supposed to put them each in their own .cpp/.h pair? [closed]

Suppose I have a program that's like
#include <iostream>
#include <algorithm>
#include <string>
class HelloWorlder
{
public:
    HelloWorlder() = default;
    ~HelloWorlder() = default;
    void print_content() { std::cout << "Hello, world!"; }
};
class StringReverser
{
public:
    StringReverser() = default;
    ~StringReverser() = default;
    void reverse_and_print(std::string & s) { std::reverse(s.begin(), s.end()); std::cout << s; }
};
int main()
{
    HelloWorlder HW;
    HW.print_content();
    StringReverser SR;
    std::string name = "Jon Skeet";
    SR.reverse_and_print(name);
    return 0;
}
Then am I supposed to partition those into files main.cpp, HelloWorlder.h, HelloWorlder.cpp, StringReverser.h and StringReverser.cpp; or do I just use a main.cpp, one .h and one .cpp? If the former, then how do I link the files together if one of the classes uses another? For instance, if I changed the second class to
class StringReverser
{
public:
    StringReverser() = default;
    ~StringReverser() = default;
    void reverse_and_print(std::string & s) { std::reverse(s.begin(), s.end()); std::cout << s; }
private:
    HelloWorlder h;
};
then do I have to do something like
StringReverser.cpp
#include "StringReverser.h"
#include "HelloWorlder.h"
class StringReverser
{
public:
    StringReverser() = default;
    ~StringReverser() = default;
    void reverse_and_print(std::string & s) { std::reverse(s.begin(), s.end()); std::cout << s; }
private:
    HelloWorlder h;
};
or is there some other thing I have to do as well?
It is a matter of convention; both practices are used: small C++ files with one class per file, or large C++ files with several related classes and functions.
I don't recommend putting each class in its own separate source and header file in C++ (that is nearly required by Java, but not by C++). I believe that a class belongs conceptually to some "module" (and often a "module" contains several related classes and functions).
So I suggest declaring related classes of the same "module" in the same header file (e.g. foo.hh), and implementing these classes in related (or the same) code files (e.g. one foo.cc, or several files foo-bar.cc, foo-clu.cc & foo-dee.cc), each of them #include-ing foo.hh.
You usually want to avoid very short C++ files (to avoid huge build times). So I prefer having code files of several thousand lines and header files of several hundred lines. In small-sized applications (e.g. less than 30 thousand lines of code), I might have a single common header file. And a "module" (currently this is only a design notion; no C++11 feature explicitly supports modules) is often made of several related classes and functions. So I find that a source file implementing a "module" with several classes is more readable and maintainable than several tiny source files.
There is a reason to avoid very short source files (e.g. a hundred lines of code). Your C++ program will use standard C++ template containers, so it will #include - directly or indirectly - several standard headers like <string>, <vector>, <map>, <set>, etc. In practice, these headers may bring in thousands of lines (which the compiler must parse). For instance, including both <vector> and <map> brings in 41130 lines on my GCC 4.9. So a source file of a hundred lines including just those two headers takes a significant amount of time to compile (and you generally need to include standard containers in your header file). Hence a program with ten source files of a few thousand lines each will build faster than the same program organized into hundreds of source files of a few hundred lines each. Look at the preprocessed form of your code, obtained using g++ -Wall -C -E.
Future versions of C++ might have proper modules (the C++ standardization committee has discussed, and will continue to discuss, that; see e.g. n4047). Today's C++11 doesn't have them, which is why I put "modules" in quotes. Some languages (Go, Rust, OCaml, Ada, ...) already have proper modules. And a module is generally made of several related classes, types, functions, and variables.
Some C++ compilers are able to pre-compile header files. But this is compiler specific. With g++ it works well only if you have one single common header file. See this answer.
BTW, you should look into existing free software C++ code. Some projects have lots of small files, others have fewer but larger files. Both approaches are used!
Putting each class into separate source and header files is not required by the language, but it generally makes for a cleaner, more isolated scope, and is recommended in most situations.
Check the language's scope rules, especially class scope and file (compilation unit) scope.
No one's forcing you to, but it's recommended to do as you say: partition those classes into their own .cpp/.h files (or at least I'd recommend it).
This question right here shows how to link .cpp files together.
You don't need to put each class in its own file in C++. But, you can do it if you want to, and it is generally recommended to do so.
How you link them depends on which platform you are on. If you are on Windows and you have Visual Studio, you just need to pick Run or Debug from the Debug menu (this may depend on the version you are using, so check against your particular version). If you are on Linux, a short tutorial on gcc like this one will help you.
In addition to the other answers, there's also some performance gain in separation. If you have 5 classes, each with its own header file, but a .cpp file uses only two of the classes, then it makes sense to load only those two headers. The compiler will have to crunch less code than if it loaded one header with all five class definitions. For a small project this isn't a big deal, but as you begin to add hundreds of classes to your code, being selective can have its performance perks.
This can be expanded further by using forward declarations of functions and classes instead of including entire class headers, but for the sake of your question, class isolation in separate header files is a good start for gaining both performance and organization in your code.
To answer the other part of your question: if your project is set up correctly, the compiler will automatically compile every .cpp referenced in it. The headers are a different story. So long as your .cpp files reference only the headers they need, your code should compile fine. At first this may seem a bit difficult to manage, but after some practice you'll get good at it and find your code hierarchy easy to navigate and understand.

Reducing the number of includes in a cpp file [closed]

I currently have about 50 or more includes in my .cpp file. I want to know what the best policy is for organizing this situation. Should I continue adding more includes, or should I take another approach? If so, what should it be?
1) Limit the scope of each file to one class, or to a small group of related classes with related behavior.
If your cpp file is only 400 to 1000 lines of code, it'll be pretty hard to require dozens of includes.
Also, as you move things around and make modifications, re-evaluate the headers included in each file. If you originally implemented something using a vector but then switched to a set, consider removing the #include <vector>.
2) In header files, use forward declarations, move those includes to the cpp file.
This doesn't address your question directly, but it is related in managing compile time. If you can get away with a forward declaration in a header file, do that.
If a.h includes b.h and b.h includes c.h, whenever c.h changes, anything including a.h has to recompile as well. Moving these includes to the .cpp file (which isn't typically #included) will limit these cascading changes.
Let's re-evaluate what happens if a.h forward-declares classes defined in b.h, and a.cpp includes b.h, and if b.h forward-declares classes defined in c.h, and b.cpp includes c.h.
When c.h changes, b.cpp needs to recompile, but a.cpp is fine.
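A sketch of that pattern (hypothetical names):
// a.h
class B; // forward declaration: no #include "b.h" needed here
class A
{
public:
    void use(B& b); // pointer/reference parameters only need the declaration
};
// a.cpp
#include "a.h"
#include "b.h" // the full definition is needed only where B's members are used
void A::use(B& b) { /* ... */ }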
3) Re-organize as you extend functionality.
Writing code can happen like this:
Plan to implement a feature.
Write code to support that plan. (If doing unit testing, consider writing the test first)
Debug and ensure the code meets your requirements.
But this is missing a few steps that make a huge difference.
Refactor your code, and remove extraneous includes. (Here's where unit testing helps.)
Debug again.
These last two steps can involve splitting up large functions into several smaller ones, breaking classes up, and (relevant to this question) creating new files for functionality which has out-grown its current compilation unit.
If you move on the moment your code seems to work and do not take a moment to tidy up it is the equivalent of a carpenter hammering legs onto a table, but never taking a moment to sand and polish the result. Sometimes it's okay to have a slab of plywood with some legs, but if you want a nice dining room table, you'll find this approach is not up to the task.
There is no simple way to reduce the number of includes in a .cpp file: ultimately those includes need to exist somewhere, provided the code relying on them is still present (if it is not, simply delete the extraneous include), and the .cpp file is a far better place for them than the .h file. But with careful management and organization you can maintain smaller, more modular source code. This is ultimately how you keep things from becoming unworkable.
Split the cpp file into various smaller cpp files. If this cannot be done in a way that reduces the number of included files that each resulting cpp file needs to include, refactor your application. When refactoring, pay special attention to the S-part of the SOLID principle: "... every class should have a single responsibility".
Go through your header list and remove all those not needed in that compilation unit. If you only need a pointer to a class, that is no reason to include the full header of the class.
If there are still so many headers after the cleanup, it might be appropriate to break up your .cpp file, especially if you can make out different clusters of classes and/or functions doing different work (and probably needing the same subset of headers).
No need to go to extremes, but putting thousands of lines into your only source file is a sign of bad modularisation.

Coding best practices [closed]

Is it good programming practice to have #include statements in a header file? I personally feel that, especially when you're going through a code base someone else wrote, you end up missing, or spending time looking for, definitions that could have been found sooner if the includes were in the .c file.
In some (in my experience, most) cases, it's impossible to do what you say. You often end up returning objects of various other classes from your code and, obviously, you need to include that information in the function declarations. In that case the compiler has to already know what those other types are, so you either have to include the header with their declarations, or provide a forward declaration.
Since you'll end up including the header anyhow, there's no real point in an additional forward declaration. That's of course a matter of choice, but it doesn't make your code clearer in my opinion.
Also, most IDEs have an option to find a symbol in the included files.
If (and only if) you're at a point where you only need the classes/functions inside your definitions, then you may opt to include the header in the *.c file. It may seem cleaner at first glance, but you may find that, when modifying the class someday, you'll end up moving the #include to the *.h file anyway.
The short answer is yes. If a particular header defines/declares classes, types, structs, etc. that are composed of classes, types, structs, etc. defined/declared in other header files, then the most expedient and effective practice is to #include those header files in the header file.
By doing so, the header files that your new header depends on will always be there.
Files may end up being included multiple times, which is why most header files contain include guards (an #if/#ifndef check) or something like #pragma once to ensure the file's contents are processed only once.
To sum up, you should design your header files so that, even if they are included multiple times by several uses of #include, the header's contents appear only once in the preprocessor output. By including the header files on which your header file depends, you localize the use of the header and make sure that its dependencies will be available.
In addition, by #include-ing the dependency headers in your header file, if the include path is incorrect and a dependency header is unavailable, it will be easy to find which header depends on the unavailable file.
Header files should manage their own dependencies; having to get the order of #includes in a .c file just right is an annoying way to waste an afternoon, and it will be a perpetual maintenance headache going forward. Have your headers #include whatever they need directly, use #include guards in your own headers, and life will be much easier.
It is not in bad style if it is necessary to make the header file self-contained in the sense that it does not depend on any other header being manually included. For example, if the header contains declarations that use data types from stdint.h then it should have #include <stdint.h> rather than expect everyone to include it separately, and in the correct order.
Meanwhile unnecessary #includes are generally in bad style regardless of where they are (.h or .c). Arguably an exception to this might be an entire collection of libraries that could in theory be used individually but are intended to be used as a whole, such as a complete GUI toolkit – in that case I think it's cleaner to have some master header that pulls in all the declarations rather than have the user learn which header is required for which specific function.
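As a sketch of such a master header (hypothetical names):
/* gui.h - umbrella header: users of the toolkit include just this one file */
#ifndef GUI_H
#define GUI_H
#include "gui_window.h"
#include "gui_button.h"
#include "gui_events.h"
#endif /* GUI_H */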
I prefer not to have #include in header files, if only to make the dependencies visible in one place for each source file. But that is certainly arguable.
The golden rule is readability. The silver rule is to follow existing practice.
Don't #include anything you don't need.
Don't #include anything you don't need because including unnecessary files leads to unnecessary dependencies and also leads to larger compile times for larger projects. As a consequence, if you modified an existing class to replace vector with list, take an extra few seconds to look through the file and make sure you don't have any vector remnants left over, and delete the #include <vector> from the top of the file.
Use forward declarations instead of #include where possible.
Use forward declarations where possible because it reduces dependencies and minimizes compile times. A forward declaration will do if your class header does not declare an object of the class; if you only use it by pointer or (in C++) by reference, then you can get by with a forward declaration.
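For instance (a hypothetical sketch):
// Engine.h
class Logger; // forward declaration instead of #include "Logger.h"
class Engine
{
public:
    void attach(Logger* log); // pointers and references don't need the full type
private:
    Logger* log_; // a Logger member object, by contrast, would force the include
};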
Use #ifndef include guards or #pragma once to design your header files such that they're not processed multiple times.
// MyClass.h
#ifndef MYCLASS_HEADER
#define MYCLASS_HEADER
// [header declaration]
#endif // MYCLASS_HEADER
Alternatively, on a compiler that supports it, such as Visual Studio:
// MyClass.h
#pragma once
// [header declaration]
These guards will allow you to #include "MyClass.h" as often as desired without worrying about including it multiple times in the same translation unit.
Include/forward-declare in the header if the included file is needed by the header.
If the header itself needs the forward declaration or the full include, then adding it to the header is a no-brainer.
Include in the .c/.cpp file if the included file is not needed by the header, but is needed by the implementation.
If the header has no use for the included file--because a forward declaration is sufficient or because only the .c/.cpp file needs it--then don't #include it in the header. Remember: push off whatever you can into the .c/.cpp file to reduce dependencies and minimize compile times.