Closed. This question is opinion-based. It is not currently accepting answers.
Closed last year.
I have seen people include header files from subfolders in two ways:
#include "header.h"
and add path/to/subfolder to the include paths; or
#include "path/to/subfolder/header.h"
and add only root/folder to the include paths.
Not sure if this is just a matter of choice, or whether there are any good/bad-practice rules around it.
An issue that can arise for case 1, but not case 2, is having two header files with the same name which live in different directories, e.g. foo/utils.h and bar/utils.h. Using convention 2 from the outset eliminates this possibility.
In general, use paths relative to the including file (= option 2) if you don't see a risk of having to move the files relative to each other; otherwise, use paths relative to a directory passed as a compiler option.
The benefit of a path relative to the including file is that tools can pick up the included files outside the context of a project, in the absence of knowledge about include directories. This comes in handy if you just want to take a quick look at a source file without opening the whole corresponding project in the IDE.
By the way, you may want to distinguish between the two alternatives by using #include <...> for files searched relative to a path passed as a compiler option, since as a human it's not always immediately obvious where to look for an included file without the help of tools, which may not always be available.
If all the headers are part of your own code base, then it's not too important how you do it, since if a naming conflict arises you can simply rename one of the header files to fix the issue.
If you're including headers from third-party projects, OTOH, then you might not be able to (easily) rename those header files, at least not without forking the third-party project and maintaining your own hacked version from then on, which most people want to avoid. In that scenario, it's best to have your #include paths start at the level that contains the name of the third-party project (e.g. #include "third_party_lib/utils/utils.h") so that the chances of a naming collision are greatly reduced. A collision could still happen if your project needs two libraries that share the same name, but the chances of that are much smaller than the chances of two libraries both having a utils.h.
Closed 6 years ago.
I've noticed that the source of the popular C++ library Cinder has separate src and include folders, containing *.cpp and *.h files correspondingly. Is there an advantage to doing it this way rather than simply putting every .cpp in the same dir as its matching .h?
It's often easier to structure your code this way, especially if you are going to export it as an API with precompiled libraries. The (public) headers then become your API; it makes sense to keep them in a separate place from the source, as this is the part of the code that will have to be distributed with the library.
Usable options are
module/*.{cpp,h} - best for spatial locality of related files, worst when you need to apply a strict API focus (backward compatibility, release vs. patch, etc.)
module/{include/*.h, src/*.{cpp,h}} - good for API focus, good for spatial locality; my preferred choice
include/module/*.h, src/module/*.{cpp,h} - best for API focus, not very good for spatial locality
There are no real pros and cons, not in general. When designing an API (as opposed to an application), you will have to provide a set of includes with your library, and this particular rôle of header files leads developers to choose the solution of separating them from the sources in the filesystem.
I don't think one organisation is better than another, but I can give you two pieces of advice to help you decide what is best for your projects:
Try to simplify your file hierarchy as much as possible. When it comes to project configuration, install scripts, and version control, fewer nested folders means fewer headaches.
The most important thing is not where header files are located but how they are included:
<> or ""?
From internal source files, or from external code which uses your headers?
With a path to them, or directly by file name?
Seeing how you want your headers to be written when calling #include helps you decide where it's most comfortable for you to put them.
As far as I'm concerned, I don't really like header/source separations. Some headers are not meant to be exposed by my APIs, so I either keep all my sources in one folder or prefer a public/private separation instead.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Please help, I've got several questions: what are precompiled headers? What is their usage? How do I make one? And how do I include one?
Precompiled headers (PCH for short) are something that some compilers support. The support, and what they contain (aside from "something that hopefully can be read quicker than the original header file"), is up to each compiler producer to decide. I have a little bit of understanding of how Clang does its precompiled headers: it's basically a binary form of the "parsed" C or C++ code in the header, so it produces a single file that doesn't need parsing (to the same level as the header file itself).
The purpose is to reduce compile time. In my experience, the LONG part of the compilation is typically code generation with optimisation. However, in some instances, especially when LOTS of header files are involved, the time to read and parse the header files can be a noticeable part of the overall compilation time.
Generally speaking, the way they are used is that you tell the compiler you want a precompiled header; for each compilation, the compiler will generate the precompiled header if it's not already there, and read it in when it is present [1]. Commonly this is done for one named header file, which includes lots of other things. Microsoft Visual Studio typically has a file called "stdafx.h" that is precompiled, and at least in the case of MS products this has to be the first file included in each source file (this is so that no other header file, for example, changes the meaning of some macro; I expect there is a hash of the compiler/command-line definitions of macros, so if one of those changes, the PCH is recompiled).
The idea is not to include every single header file in this one precompiled file, but the header files that are used in MOST files and that don't change often (the PCH needs to be regenerated if one of the files it precompiles has changed, so there's no point in doing this for header files you keep changing frequently). Of course, like any other build dependency, anything using the precompiled header will need to be rebuilt if the PCH has changed.
For exactly how to use this, you will need to read the documentation for the compiler you are using.
[1] If nothing has changed that requires it to be rebuilt.
Closed 8 years ago.
I've been a programmer for several years.
I was always told (and told others) that you should include in your .c files only the .h files that you need. Nothing more, nothing less.
But let me ask - WHY?
Using today's compilers I can include all the .h files of the project, and it won't have a huge effect on compilation times.
I'm not talking about including OS .h files, which include many definitions, macros, and preprocessing commands.
Just including one "MyProjectIncludes.h". That will only say:
#pragma once
#include "module1.h"
#include "module2.h"
// and so on for all of the modules in the project
What do you say?
It's not about the compilation time of your .c file taking longer due to including too many headers. As you said, including a few extra headers is not likely going to make much of a difference in the compilation time of that file.
The real problem is that once you make all your .c files include the "master" header file, then every time you change any .h file, every .c file will need to be recompiled, due to the fact that you made every .c file dependent on every .h file. So even if you do something innocuous like add a new #define that only one file will use, you will force a recompilation of the whole project. It gets even worse if you are working with a team and everyone makes header file changes frequently.
If the time to rebuild your entire project is small, such as less than 1 minute, then it won't matter that much. I will admit that I've done what you suggested in small projects for convenience. But once you start working on a large project that takes several minutes to rebuild everything, you will appreciate the difference between needing to rebuild one file vs rebuilding all files.
It will affect your build times. Also, you run the risk of creating circular dependencies.
In general you don't want to have to re-compile modules unless headers that they actually depend on are changed. For smaller projects this may not matter and a global "include_everything.h" file might make your project simple. But in large projects, compile times can be very significant and it is preferable to minimize inter-module dependencies as much as possible. Minimizing includes of unnecessary headers is only one approach. Using forward declarations of types that are only referenced by pointers or references, using Pimpl patterns, interfaces and factories, etc., are all approaches aimed at reducing dependencies amongst modules. Not only do these steps decrease compile time, they can also make your system easier to test and easier to modify in general.
An excellent, though somewhat dated, reference on this subject is John Lakos's "Large-Scale C++ Software Design".
Sure, like you've said, including extra files won't harm your compilation times by much. Like what you suggest, it's much more convenient to just dump everything in using 1 include line.
But don't you feel a stranger could get a better understanding of your code if they knew exactly which .h files you were using in a specific .c file? For starters, if your code had any bugs and those bugs were in the .h files, they'd know exactly which .h files to check up on.
Closed 9 years ago.
I'm building a very basic library, and it's the first project that I plan on releasing for others to use if they'd like. As such, I'd like to know what some "best practices" are as far as organization goes. The only thing that may be unique about my project is that in order for it to be used in a project, users would be required to extend certain abstract classes, which leads me to my first question:
A lot of libraries I've seen consist of a .a file and a single .h file. Is this best practice? Wouldn't it be better to expose all the public .h files so that users can choose which ones to include? If this is the preferred way of doing things, how exactly is it accomplished? What goes into that single .h file?
My second question involves dependencies. For example my current project relies on OpenGL, GLFW, and GLEW. Should I package those in some way with my project, or just make it the user's responsibility to ensure that they are installed?
Edit: Someone asked about my target OS. All of my dependencies are cross platform so I'm (perhaps naively) hoping to make my library cross platform as well.
Thanks for any and all help!
It really depends on the circumstances. If you have some fairly complex functionality that lives in a number of closely related functions, then one header is the right solution.
E.g. you write a set of functions that draw something to the screen: you need a few functions to configure/set up the environment, a few functions to define and place objects in the scene, a few functions to do the actual drawing/processing, and finally teardown. Then using one header file is a good plan.
In the above case, it's also possible to have one "overall" header-file that includes several smaller ones. Particularly if you have fairly large classes, sticking them all in one file gets rather messy.
On the other hand, if you have one set of functions that deal with gasses dissolved in liquids, another set of functions to calculate the strength/load capacity of a steel beam, and another set of functions to calculate the friction of a rubber tyre against a road surface, then they probably should have different headers, even if it's all feasible functionality to go in a "Physics/mechanics library".
It is rarely a good idea to supply third-party libraries with your library. Yes, if you want to offer two downloads, one "all you need, just add water" and one "bare library", that's fine. But I don't want to spend three times longer than necessary downloading your library simply because it also contains three other libraries that your code uses, which are already on my machine. However, do document which libraries are needed and what you need to do to install them on your supported platforms (and what the supported platforms are). And document which versions of the libraries you have tested: there's nothing worse than "getting the latest", only to find that the version something needs is two steps back...
(And as Jason C points out, licensing gets very messy once you have a few different packages that your code depends on, because your license then has to be compatible with ALL the other licenses - sometimes that's not even possible...)
You have options and it really depends on how convenient you choose to make it for developers using your libraries.
As for the headers, the general method for libraries of average complexity is to have a single header that a developer can include to get everything they need. A good method, if you have multiple headers, is to create a single header with the same name as your library (not required, just common) and have it #include all the individual headers. Then distribute the single header along with the individual headers. That way your users have the option of #including just one to get everything, or #including individual ones as necessary.
E.g. in mylibrary.h:
#ifndef MYLIBRARY_H
#define MYLIBRARY_H
#include <mylibrary/something.h>
#include <mylibrary/another.h>
#include <mylibrary/lastone.h>
#endif
Ensure that your individual headers can be included standalone (i.e. they #include everything they need) if you want to provide that option to developers.
As for dependencies, you will want to make it the user's responsibility to ensure they are installed. The user is compiling their code and linking to your library, and so it is also the user's responsibility to link to dependent libraries. If you package third-party dependencies with your library you run many risks:
Breaking the systems of users who already have the dependencies installed.
As mentioned in Mats Petersson's answer, forcing users to download dependencies they already have.
Violating licensing rights on third-party libraries.
The best thing for you to do is clearly document the required dependencies.
For this there are not really standard "best practices". Any sane practice would be a good practice.
Closed 5 years ago.
Where is it customary to write the in-code documentation of classes and methods?
Do you write such doc-blocks above the corresponding class/method in the header (.hpp) file, or within the source (.cpp) file?
Is there a widely respected convention for such things? Do most C++ projects do it one way rather than the other?
Or should documentation be written on both sides (i.e. in the .hpp and the .cpp files), maybe with a short description on one side and a longer one on the other?
Most importantly, are there any practical considerations that make it more convenient to write it one way rather than the other way? (E.g. the use of automatic documentation parsers and generators like Doxygen...)
Both:
Describe the API design and usage in the header: that's your public interface for clients.
Describe the implementation alternatives / issues and decisions in the implementation: that's for yourself - later - and other maintainers/enhancers, even someone reviewing the design as input to some next-gen system years hence.
Comment anything that's not obvious, and nothing that is (unless your documentation tool's too stupid to produce good documentation without).
Avoid putting implementation docs in the headers, as changing the header means makefile timestamp tests will trigger an unnecessary recompilation for client apps that include your header (at least in an enterprise or commercial library environment). For the same reason, aim to keep the header documentation stable and usable - good enough that you don't need to keep updating it as clients complain or ask for examples.
If you make a library, you typically distribute the compiled library and the header files. This makes it most useful to put documentation in the header files.
Most importantly, are there any practical considerations that make it more convenient to write it one way rather than the other way?
Suppose that you want to add a clarification to one of your comments without changing the code. The problem is that your build system will only see that you changed the file, and unnecessarily assume that it needs to be recompiled.
If the comments are in the .cpp file, it will just recompile that one file. If the comments are in the .hpp file, it will recompile every .cpp file that depends on that header. This is a good reason to prefer having your comments in the .cpp files.
(E.g. the use of automatic documentation parsers and generators like Doxygen...)
Doxygen allows you to write your comments either way.
Again, both. As for the public documentation, it is nice for it to be in the .h in a format extractable with Doxygen, for example, as others commented. I like very much the Perl way of documenting things: the .pl (or .pm) file includes documentation in itself that can be extracted using a tool similar to what Doxygen does for C++ code. Also, Doxygen allows you to create several different pages, which let you include user manuals, etc., not just documentation of the source code or API. I generally like the idea of a self-contained .h/.hpp file, in the philosophy of literate programming.
I personally like documentation in the header files. However, there are some that believe that documentation should be put in the source files. The reason being that when something changes, the documentation is right there reminding you to update it. I somewhat agree, as I personally have forgotten to update the Doxygen comments in the header when I changed something in the source files.
I still prefer the Doxygen comments to be in the header file for aesthetic reasons, and old habits are hard to change. I've tried both and Doxygen offers the flexibility of documenting either in source or header files.