Why are member functions usually declared in classes but not defined there? For example:
class MyClass {
public:
    void MyFunction(); // function is declared, not defined
};
Instead of:
#include <iostream>

class MyClass {
public:
    void MyFunction() {
        std::cout << "This is a function" << std::endl;
    } // function is defined in place; no need for a .cpp file
};
Is the reason just that it looks nicer or is easier to read?
The general (theoretical) reason is separation of concerns: to call a function (whether a class member function or not), the compiler only needs visibility of the function's interface, based on its signature (name, return type, argument types). The caller does not need visibility of the function's definition (and, if a definition that is not, or cannot be, inlined ends up visible in multiple translation units, the outcome is often a program that fails to compile or link).
Two common practical reasons are to maximise benefits of separate compilation and to allow distribution of a library without source code.
Separate compilation (over-simplistically, placing subsets of source code in distinct source files) is a feature of C++ that has numerous benefits in larger projects. Rather than throwing all the source for a program into a single source file, it involves having a set of separately compiled source files. One benefit is that it enables incremental builds: once all source files for a project have been compiled, the only ones that need to be recompiled are those that have changed (since recompiling an unchanged source file produces a functionally equivalent object file). For large projects that are edited and rebuilt often, this allows incremental builds (recompiling changed source files only and relinking) instead of universal builds (which recompile and relink everything). Practically, even in moderately large projects (let's say projects consisting of a few hundred source files), the incremental build time can be measured in minutes while the universal rebuild time can be measured in days. The difference between the two equates to unproductive time (thumb-twiddling while waiting for a build to complete) for programmers. Programmer time is THE largest cost in significant projects, so unproductive programmer time is expensive.
In practice, in moderate or larger projects, function interfaces (function signatures) are quite stable (they change relatively infrequently) while function definitions are unstable (they change relatively frequently). Having a function definition in a header file means that header file is more likely to be edited as the program evolves. And, when a header file is edited, EVERY source file that includes that header must be recompiled in order to rebuild (build management tools often work that way, because it is the least complicated way to detect dependencies between files in a project). That tends to result in larger build times: while the impact of recompiling all source files that include a changed header is not as large as doing a universal build, it can still increase the incremental build time from a few minutes to a few hours. Again, more unproductive time for programmers.
That's not to say that functions should never be inlined. It does mean that it is necessary to choose carefully which functions to inline (e.g. based on performance benefits identified through profiling, while avoiding functions that will be updated regularly), rather than, as this question suggests, defaulting to inlining.
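To make that trade-off concrete, here is a minimal sketch (the Point class and file names are hypothetical, not from the question): trivial, stable accessors are defined in the class body, where inlining is cheap and the code rarely changes, while a heavier function that is likely to evolve lives in the .cpp so that editing it never touches the header:

// point.h
#ifndef POINT_H
#define POINT_H

class Point {
public:
    Point(double x, double y) : x_(x), y_(y) {}
    double x() const { return x_; }  // tiny and stable: safe to define inline
    double y() const { return y_; }
    // likely to be edited as the program evolves: declared only
    double distanceTo(const Point& other) const;
private:
    double x_;
    double y_;
};

#endif

// point.cpp
#include "point.h"
#include <cmath>

double Point::distanceTo(const Point& other) const {
    const double dx = x_ - other.x_;
    const double dy = y_ - other.y_;
    return std::sqrt(dx * dx + dy * dy);  // changing this recompiles only point.cpp
}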
The second common reason for not inlining functions is distribution of libraries. For commercial reasons (e.g. protecting intellectual property) and others, it is often preferable to distribute a compiled library and header files (the minimum needed for someone to use the library in another project) without distributing the complete source code. Placing function definitions into header files increases the amount of source code that must be distributed.
For non-template code, placing definitions in source files separate from the declarations allows for more efficient compilation: you have a single source file that needs to be compiled, and as long as that file doesn't change, its output can be reused. The final link step isn't affected very much by having multiple input files; the symbols need to be resolved from the same data structures whether there is one input file or many. It's harder with C++, but this can even allow distribution of binary-only libraries, that is, distributing headers containing only declarations plus matching object files, without the implementing source at all.
On the other hand, if functions are defined inline (whether in the class body or outside it with an explicit inline keyword), the linker has to check its symbol table and toss duplicates. That applies whether or not templates are involved. But templates need their definitions to be available in every translation unit that uses them, so their definitions generally go in headers, even at the cost of pruning duplicates at the link stage.
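For example, a function template's definition must be visible wherever it is instantiated, so it normally goes in the header (a minimal sketch; the names are hypothetical):

// max_of.h
#ifndef MAX_OF_H
#define MAX_OF_H

// The definition must be in the header: each translation unit that calls
// max_of instantiates it, and the linker discards the duplicate copies.
template <typename T>
T max_of(T a, T b) {
    return (b < a) ? a : b;
}

#endif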
Generally, classes are declared in header files, and the one-definition rule constrains what those headers may contain. Suppose I have

// my_class.h
class MyClass {
public:
    void myFunction() {}
};

Because myFunction is defined inside the class body, it is implicitly inline, so including this header in two different .cpp files is still legal. But if I moved that definition outside the class body in the header without marking it inline, then as soon as I'd included the file in two different .cpp files I would have two separate definitions of MyClass::myFunction, which is an error. To keep headers stable (and avoid recompiling every includer when a body changes), we declare our class methods in the header and then write the implementation in the .cpp source file.
// my_class.h
class MyClass {
public:
    void myFunction();
};

// my_class.cpp
#include "my_class.h"

void MyClass::myFunction() {}
The rules are more complicated if the function is inline or a template function, in which case there's a compelling (and often required) reason to define the function in the header. Some people will define template functions in the class itself as you've suggested, but others prefer to write the template function definition at the bottom of the header, to be more consistent with the way we write non-template functions. It's more a matter of style in this case, so pick your preferred convention and stick to it.
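A sketch of the second style just mentioned (the Stack class is hypothetical): the template members are declared in the class and defined at the bottom of the same header, mirroring the declaration/definition split of non-template code:

// stack.h
#ifndef STACK_H
#define STACK_H

#include <vector>

template <typename T>
class Stack {
public:
    void push(const T& value);  // declared here...
    T pop();
private:
    std::vector<T> items_;
};

// ...and defined below, still in the header, because templates
// must be visible to every translation unit that instantiates them.
template <typename T>
void Stack<T>::push(const T& value) {
    items_.push_back(value);
}

template <typename T>
T Stack<T>::pop() {
    T top = items_.back();
    items_.pop_back();
    return top;
}

#endif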
Likewise, if the class is not mentioned in a header file (i.e. if the class is an implementation detail and only exists in one .cpp file), then you're free to do it either way, and you'll find folks that prefer both conventions.
In short, separating the implementation from the declaration is good practice.
In a large project, putting implementations in header files can cause multiple-definition problems.
Separating them avoids a lot of future problems.
Related
I know this may be quite subjective, but are there any general rules for situations when it is not necessary for code to be split into two files?
For example, if the class is extremely small, or if the file simply holds some global definitions or static functions? Also, in these cases, should the single file be a .cpp file or a .h file?
On the technical side, whenever you need to obey the one-definition rule, you have to separate declarations from definitions, since you will need to include the declarations many times in multiple translation units, but you must provide only one single definition.
In aesthetic terms, the answer could be something like "always" or "systematically". In any case, you should always have a header for every logical unit of code (e.g. one class or one collection of functions); it is the source file that is possibly optional, depending on whether you have everything defined inline (exempting you from the ODR) or you have a template library.
As a meta strategy, you should seek to decouple the compilation units as much as possible, so that you can include only what's needed in a fine-grained way. This allows your project to grow without having compilation times become unbearable, and it makes it much easier to reuse code in other projects.
I favor putting code in .hpp files but am very often compelled to put the implementation in the .cpp for any of the following reasons:
Reducing build time. This is the #1 reason to use .cpp files... and the reason most code you find in .hpp files is small and simple. When the implementation changes, you don't want to have to rebuild every .cpp that includes it.
When the function's linkage is important. For example, if the function is exported as a library (e.g. DLL) function, it's important that it select a single compilation unit to live in. Or for static / global instances. This is also important for distributing an import header for a DLL.
When you wish to hide implementation details when distributing a library.
The definition and declaration are not identical. This can be the case with respect to the constness of arguments (see the sketch after this list).
You want a "clean" overview of the interface in the .hpp file. I find that with modern code tools and increasing familiarity with single-code-file languages like JavaScript / C# / inline C++, this is not a strong argument.
You explicitly do not want the function to be declared inline. Although, it won't matter; the optimizing compiler will likely inline if it wants to.
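Here is the constness sketch referenced above (process is a hypothetical function): top-level const on a by-value parameter is not part of the function's signature, so the declaration and definition below are not textually identical yet refer to the same function:

// process.h
void process(int count);

// process.cpp
#include "process.h"
#include <iostream>

// The const applies only inside the body; callers still see void process(int).
void process(const int count) {
    std::cout << "processing " << count << " items\n";
}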
There are logical motivations for keeping code inline in a .hpp file:
Why have two files when you can have one?
Duplicating the declaration / function headers is unnecessary maintenance and feels redundant. We have code analysis tools to show interfaces.
The concept that inline / template code must be put in a header and other code is put in a .cpp is arbitrary. If we are forced to put inline code in a .hpp file, and this is OK, then why not put all code in a .hpp file?
I am pretty sure, though, that the tradition of separate .cpp and .hpp files is far stronger than the reasoning behind it.
I know this may be quite subjective, but are there any general rules for situations when it is not necessary for code to be split into two files?
Split the code into header and source whenever you can.
Generally, this shouldn't be done in the following cases:
the code consists of template classes and functions (unless you explicitly instantiate templates in the source file; see the sketch below this list)
the header consists only of inline functions
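As mentioned in the first case above, a template's definition can stay in the source file if you explicitly instantiate it there (a minimal sketch; the names are hypothetical):

// half.h
template <typename T>
T half(T value);  // declaration only

// half.cpp
#include "half.h"

template <typename T>
T half(T value) {
    return value / 2;
}

// Explicit instantiations: these are the only types other
// translation units may call half with.
template int half<int>(int);
template double half<double>(double);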
should the single file be a .cpp file or a .h file?
It should be the header file (.h).
The rule I use is the following:
Whenever you can put code into a cpp file, do it.
The reasons are multiple:
Header files serve as rudimentary documentation. It is better not to clutter them with code.
You can also use pimpls in some places if you feel like it, for the reason above (see the sketch after this list).
It reduces compilation times:
whenever you change a .cpp, only this file will be recompiled.
whenever a header is included, it only contains the minimal amount of code needed
It allows you to assess more easily which part of your code depends on which other part, just by looking at which headers are included. This point mandates another rule of mine:
Whenever you can forward declare a class instead of including a header, do it.
This way, the .cpp files carry the dependency information between parts of your source. It also lowers build times.
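As promised above, here is a minimal pimpl sketch (the Widget class is hypothetical). It combines both rules: the header forward-declares the implementation type instead of including its definition, so private details can change without recompiling any client:

// widget.h
#ifndef WIDGET_H
#define WIDGET_H

#include <memory>

class Widget {
public:
    Widget();
    ~Widget();  // defined in widget.cpp, where Impl is complete
    void draw();
private:
    struct Impl;                 // forward declaration only
    std::unique_ptr<Impl> impl_;
};

#endif

// widget.cpp
#include "widget.h"
#include <iostream>

struct Widget::Impl {
    int frame_count = 0;         // private detail, invisible to clients
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;

void Widget::draw() {
    ++impl_->frame_count;
    std::cout << "drawing frame " << impl_->frame_count << "\n";
}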
I know this may be quite subjective, but are there any general rules for situations when it is not necessary for code to be split into two files?
It's not always subjective; you will find very good reasons to separate them in large projects. It's good to get into the practice of separating them, and of learning when it is and is not appropriate to separate definition from declaration. It's hard to answer your question without knowing how complex your codebase will become.
For example, if the class is extremely small
It's still not necessarily a bad idea to separate them, in general.
or if the file simply holds some global definitions
The header should not contain global definitions which require static construction, unless necessary.
or static functions?
These do not belong anywhere in C++; use inline, or an anonymous namespace. If you mean static functions within a class's body, then it depends on the instruction count, if you are hoping the function will be inlined.
Also, in these cases, should the single file be a .cpp file or a .h file?
The single file should be a header. Rationale: You should not #include cpp files.
And don't forget that intermodule (aka link-time) optimizations are getting better and better.
C++ compilation times are long, and it's very, very time-consuming to fix this after the fact. I recommend that you get into the practice of using .cpp files before your build times and dependencies explode.
I am learning about architecture from Robert C. Martin's book Clean Architecture. One of the main rules emphasized throughout the book is the DIP rule, which states that source code dependencies must point only inwards, toward higher-level policies. Trying to translate this into the embedded domain, assume two components: a scheduler and a timer. The scheduler is a high-level policy that relies on the low-level timer driver and needs to call the APIs get_current_time() and set_timeout(). I would simply split the module into an implementation file timer.c and a header (an interface?) timer.h, and scheduler.c could include timer.h to use these APIs. The book portrayed this scenario as a breaking of the dependency rule and implied that an interface between the two components should be introduced to break the dependency.
To imitate that in C, a timer_abstract header could, for example, contain a generic structure with pointers to functions:
#include <stdint.h>

struct timer_drv {
    uint32_t (*get_current_time)(void);
    void (*set_timeout)(uint32_t t);
};
This looks like over-design to me. Isn't a simple header file sufficient? Can a C header file be considered an interface?
In computing, an "interface" is a common boundary across which two or more components or subsystems exchange information.
A header file in C or C++ is a text file that contains a set of declarations and (possibly) macros that can be inserted into a compilation unit (a separate unit of source code, such as a source file), allowing that compilation unit to use those declarations and macros. In other words, #include "headerfile" within a source file is replaced by the content of headerfile by the C or C++ preprocessor before subsequent compilation.
Based on these definitions, I would not describe a header file as an interface.
A header file may define data types, declare variables, and declare functions. Multiple source files may include that header, and each will be able to use the data types, variables, and functions that are declared in that header. One compilation unit may include that header, and then define some (or all) of the functions declared in the header.
However, types, variables, and functions need not be placed in a header file. A programmer who is determined enough can manually copy the declarations and macros into every source file that uses them, and never use a header file. A C or C++ compiler cannot tell the difference, because all the preprocessor does is text substitution.
The logical grouping of declarations and macros is actually what represents an interface, not the means by which information about the interface is made available to compilation units. A header file is simply one (optional) means by which a set of declarations and macros can be made available to compilation units.
Of course, a header file is often practically used to avoid errors in using a set of declarations and macros - so can help make it easier to manage the interface represented by those declarations and macros. Every compilation unit that #includes a header file receives the same content (unless affected by other preprocessor macros). This is much less error prone than the programmer manually copying declarations into every source file that needs them. It is also easier to maintain - editing a header file means all compilation units can be rebuilt and have visibility of the changes. Whereas, manually updating declarations and macros into every source file can introduce errors, because programmers are error prone - for example, by editing the declarations inconsistently between source files.
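A hedged illustration of that point (add.h, user1.c, and user2.c are hypothetical): because #include is pure text substitution, the compiler sees exactly the same declarations whether they come from a header or were copied in by hand:

// add.h
int add(int a, int b);

// user1.c: gets the declaration from the header
#include "add.h"
int twice(int x) { return add(x, x); }

// user2.c: the programmer copied the declaration by hand;
// the compiler cannot tell the difference
int add(int a, int b);
int thrice(int x) { return add(add(x, x), x); }

// (add itself is defined once, in some other translation unit, e.g. add.c)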
I think the reason why you would want an interface for a timer is indeed to break dependencies. Since the Scheduler uses the Timer, wherever Scheduler.o is linked in, Timer.o must be linked in as well, provided you use scheduler symbols that depend on timer symbols.
If you had used an interface for Timer, no linking from Scheduler.o to Timer.o (or Scheduler.so to Timer.so, if you like) would be required, nor useful. You would create an instance of Timer at runtime, likely pass it to the constructor of Scheduler, and Timer.o would be linked in elsewhere.
Now why would that be useful? Unit testing is one example: you can pass a Timer stub class to Scheduler's constructor and link against TimerTestStub.o instead. You can see that this way of working does break dependencies: Scheduler.o does require a Timer, but which one is decided not at the build-time level of scheduler.so but higher up, by whoever passes the Timer instance to Scheduler's constructor.
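Here is a minimal sketch of that arrangement (written in C++ for brevity; the Scheduler, Timer, and stub names follow the question, but the code itself is hypothetical). Scheduler depends only on the Timer interface, so a test can link a stub instead of the real driver:

#include <cstdint>
#include <iostream>

struct Timer {                                 // the interface
    virtual ~Timer() = default;
    virtual uint32_t get_current_time() = 0;
    virtual void set_timeout(uint32_t t) = 0;
};

class Scheduler {                              // high-level policy
public:
    explicit Scheduler(Timer& timer) : timer_(timer) {}
    void schedule_in(uint32_t delay) {
        timer_.set_timeout(timer_.get_current_time() + delay);
    }
private:
    Timer& timer_;                             // no dependency on a concrete driver
};

// Test stub: linking this in means the real Timer.o is never needed.
struct TimerStub : Timer {
    uint32_t now = 0;
    uint32_t last_timeout = 0;
    uint32_t get_current_time() override { return now; }
    void set_timeout(uint32_t t) override { last_timeout = t; }
};

int main() {
    TimerStub stub;
    stub.now = 100;
    Scheduler scheduler(stub);
    scheduler.schedule_in(50);
    std::cout << "timeout set to " << stub.last_timeout << "\n";  // prints 150
}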
This is also very useful for lowering the number of build-time dependencies when using libraries. The real trouble starts when a dependency chain forms: Scheduler requires Timer, Timer requires class X, class X requires class Y, class Y requires class Z...
This may still look OK to you, but bear in mind that every one of those classes could live in a different library.
You then want to use Scheduler but are forced to drag in a ton of include-path settings and likely do a ton of linking.
You can break dependencies by exposing only the functionality of Scheduler you really need in its interface; of course, you can use multiple interfaces.
You should make your own demo: write 10 classes, put them in 10 shared libraries, and make sure every class requires 3 other classes out of those 10. Now include one of those class headers in your main.cpp and see what you need to do to get it to build properly.
Then you will need to think about breaking those dependencies.
For instance, when I define a class in C++, I've always put the function bodies in the class header file (.h) along with the class definition. The source code file (.cpp) is the one with the main() function. Is this commonly done among professional C++ programmers, or do they follow the convention of separate header/source files?
As for native C, I do notice this done in GCC (and, of course, for the headers in Visual Studio on Windows).
So is this just a convention? Or is there a reason for this?
Function bodies are placed into .cpp files to achieve the following:
To make the compiler parse and compile them only once, as opposed to forcing it to compile them again, again, and again everywhere the header file is included. Additionally, in the case of in-header implementations, the linker later has to detect and eliminate identical external-linkage functions arriving in different object files.
Header pre-compilation facilities implemented by many modern compilers might significantly reduce the wasted effort required for repetitive recompilation of the same header file, but they don't entirely eliminate the issue.
To hide the implementations of these functions from the future users of the module or library. Implementation hiding techniques help to enforce certain programming discipline, which reduces parasitic inter-dependencies between modules and thus leads to cleaner code and faster compilation times.
I'd even say that even if users have access to full source code of the library (i.e. nothing is really "hidden" from them), clean separation between what is supposed to be visible through header files and what is not supposed to be visible is beneficial to library's self-documenting properties (although such separation is achievable in header-only libraries as well).
To make some functions "invisible" to the outside world (i.e. internal linkage, not immediately relevant to your example with class methods).
Non-inline functions residing in a specific translation unit can be subjected to certain context-dependent optimizations. For example, two different functions with identical tail portions can end up "sharing" the machine code implementing these identical tails.
Functions declared as inline in header files are compiled multiple times in different translation units (i.e. in different contexts) and have to be eliminated by the linker later, which makes it more difficult (if at all possible) to take advantage of such optimization opportunities.
Other reasons I might have missed.
It is a convention, but it also depends on specific needs. For example, if you are writing a library whose functionality you want to be fast (inline), and you are designing it as a simple header-only library for others to use, then you can write all of your code within the header file(s).
On the other hand, if you are writing a library that will be linked either statically or dynamically, and you are trying to encapsulate the internal object data from the user, then your functions (class member functions and so on) would be written so that the implementation details are hidden: the user of your library shouldn't have to worry about how that part works. All they would need to know about your functions and classes is their interfaces. In this manner you would have both header files and implementation files.
If you place your function definitions in the header files along with their declarations, they will be inline and may run faster, but your executable will be larger, and they will have to be recompiled in every file that includes them. The implementation details are also exposed to the user.
If you place your function definitions in the header's related source file, they will not be inline, your code will be smaller, and it may run a little slower, but you should only have to compile them once. The implementation details are hidden and abstracted away from the user.
There is absolutely no reason to put function bodies in header files in C. If the header file is included in multiple .c files, this forces the compiler to define the function multiple times. If the function is static, there will be multiple copies of it in the program; if it is global, the linker will complain.
Similar reasoning applies to C++. The exceptions are inline member functions of a class and some template implementations.
If you define a local helper class in your .cpp file, it is perfectly OK to define it there and have the function bodies defined inside the class.
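For instance (a hypothetical sketch), a helper class used by only one translation unit can live entirely in the .cpp, with its function bodies inside the class:

// parser.cpp
namespace {                          // internal linkage: invisible elsewhere
class Token {
public:
    explicit Token(int id) : id_(id) {}
    int id() const { return id_; }   // defined in-class: fine, it's local
private:
    int id_;
};
}  // unnamed namespace

int first_token_id() {
    return Token(1).id();
}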
I'm currently transitioning to working in C, primarily focused on developing large libraries. I'm coming from a decent amount of application-based programming in C++, although I can't claim expertise in either language.
What I'm curious about is when and why many popular open-source libraries choose not to separate their code into a 1-1 relationship between each .h file and a corresponding .c file, even in instances where the .c isn't generating an executable.
In the past I'd been led to believe that structuring code in this manner is optimal, not only for organization but also for linking purposes, and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
There is no inherent technical reason in C to provide .c and .h files in matched pairs. There is certainly no reason related to linking, as in conventional C usage neither .c nor .h files have anything directly to do with that.
It is entirely possible and potentially advantageous to collect declarations related to multiple .c files in a smaller number of .h files. Providing only one or a small number of header files makes it easier to use the associated library: you don't need to remember or look up which header you need for the declaration of each function, type, or variable.
There are at least three consequences of doing that, however:
you make it harder to determine where to find the implementations of functions declared in collective headers.
you make your project more susceptible to mass rebuilding cascades, as most object files depend on one or more of a small number of headers, and changes or additions to your function signatures all affect one of that small number of headers.
the compiler has to spend more effort digesting one large header containing all the library's declarations than digesting one or a few headers focused narrowly on the specific declarations used in a given .c file, as #ChrisBeck observed. This tends to be much less of a problem for C code than for C++ code, however.
You need a separate .h file only when something is included in more than one compilation unit.
A form of "keep things local unless you have to share" wisdom.
In the past I'd been led to believe that structuring code in this manner is optimal, not only for organization but also for linking purposes, and I don't see how the lack of OOD features in C would affect this (not to mention that not separating the implementation and interface also occurs in C++ libraries).
In traditional C code, you always put declarations in the .h files and definitions in the .c files. This is indeed to optimize compilation: it makes each compilation unit take the minimum amount of memory, since it holds only the definitions it needs to output code for, and, if you manage includes properly, only the declarations it needs as well. It also makes it simple to see that you aren't breaking the one-definition rule.
On modern machines doing this is less important from the perspective of avoiding awful build times, because machines now have a lot of memory.
In C++ you have templates, which generally live only in the header.
In recent years you also have people experimenting with so-called "unity builds", where one compilation unit includes all of the other source files and everything is built at once. See here: The benefits / disadvantages of unity builds?
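A unity build can be as simple as the sketch below (the file names are hypothetical): one translation unit that textually includes the real source files, traded against losing incremental recompilation:

// unity.cpp: the only file handed to the compiler
#include "timer.cpp"
#include "scheduler.cpp"
#include "main.cpp"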
So today, having 1-1 correspondence is mainly a style / organizational thing.
A really, really basic, but entirely realistic scenario where a 1-1 relation between .h and .c files is not required, and even not desirable:
// main.h
// A lib's/extension's/application's main header file

// for the user API -> opaque types
typedef struct _internal_str my_type;

// API functions
my_type *init_resource( void ); // some arguments will probably be required

// helper resource -> not part of the API, but the lib uses it internally in all translation units
const struct helper_str *get_help( void );
Now this get_help function is, as the comment says, not part of the lib's API. All the .c files that make up the lib use it, though, and the function is defined in the helper.c translation unit. That file might look something like this:
#include "main.h"
#include <third/party.h>

// static functions
static third_party_type *init_external_resource( void )
{
    // implement this
}

static void cleanup_stuff( third_party_type *p )
{
    third_party_free( p );
}

const struct helper_str *get_help( void )
{
    // implementation of the external function
}
OK, so it's a convenience thing: not adding another .h file, because there's only one external function you're calling. But that's no good reason not to use a separate header file, right? Agreed: it's not a good reason.
However, imagine that your code depends on this third-party library a lot, and each component of whatever you're building uses a different part of that library. The "help" you need/want from this helper.c file might differ per component. That's when you could decide to create several header files, to control the way helper.c is used internally in your project. For example, suppose you've got some logging stuff in translation units X and Y; those files might include a file like this:
// specific_help.h
char *err_to_log_msg( int error_nr ); // relevant arguments, of course...
Whereas a file that doesn't go anywhere near output but, for example, manages thread-safety or signals might want to call a function in helper.c that frees some resources in case a given event is detected (signals, keystrokes, mouse events... whatever). That file might include a header like:
// system_help.h
void free_helper_resources( int level );
All of these headers link back to functions defined in helper.c, but you could end up with 10 header files for a single .c file.
Once you have these various headers exposing a selection of functions, you might end up adding specific typedefs to each of these headers, depending on how the two components interact... ah well, it's a matter of taste anyway.
Many people will just opt for a single header file to go with the helper.c file, and include that. They'll probably not use half of the functions they have access to, but they'll have less files to worry about.
On the other hand, if others start tinkering with the code, they might be tempted to add functions to files where they don't belong: they might add logging functions to the signal/event-handling files, and vice versa.
In the end, use your common sense and don't expose more than you need to. It's easy to remove a static keyword and just add the prototype to a header file if you really need to.