When I work on my personal C and C++ projects, I usually put file.h and file.cpp in the same directory, so that file.cpp can reference file.h with a #include "file.h" directive.
However, it is common to find libraries and other kinds of projects (like the Linux kernel and FreeRTOS) where all .h files are placed inside an include/ directory, while .cpp files remain in another directory. In those projects, .h files are also included with #include "file.h" instead of #include "include/file.h" as I would have expected.
I have some questions about all of this:
What are the advantages of this file structure organization?
Why are .h files inside include/ included with #include "file.h" instead of #include "include/file.h"? I know the real trick is inside some Makefile, but is it really better to do it that way instead of making clear (in code) that the file we want to include is actually in the include/ directory?
The main reason to do this is that compiled libraries need headers in order to be consumed by the eventual user. By convention, the contents of the include directory are the headers exposed for public consumption. The source directory may have headers for internal use, but those are not meant to be distributed with the compiled library.
So when using the library, you link to the binary and add the library's include directory to your build system's header paths. Similarly, if you install your compiled library to a centralized location, you can tell which files need to be copied to the central location (the compiled binaries and the include directory) and which files don't (the source directory and so forth).
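For example, consuming such a library typically looks like the sketch below (the library name foo and its API are hypothetical; the library's include/ directory is put on the header search path, e.g. with -I, and the binary is linked, e.g. with -L and -lfoo):
// consumer.cpp
#include <foo/foo.h>   // public header shipped in the library's include/ directory

int main() {
    return foo::frobnicate(42);   // hypothetical function from the public API
}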
It used to be that <header>-style includes were of the implicit path type: the header was found on the include path supplied by an environment variable or build macro. "header"-style includes were of the explicit form: resolved exactly relative to wherever the source file that included them lives. While some build toolchains still allow for this distinction, they often default to a configuration that effectively nullifies it.
Your question is interesting because it brings up the question of which really is better, implicit or explicit? The implicit form is certainly easier because:
Convenient groupings of related headers in hierarchies of directories.
You only need include a few directories in the includes path and need not be aware of every detail with regard to exact locations of files. You can change versions of libraries and their related headers without changing code.
DRY.
Flexible! Your build environment doesn't have to match mine, but we can often get nearly exact same results.
Explicit on the other hand has:
Repeatable builds. A reordering of paths in an includes macro/environment variable doesn't change the header files found during the build.
Portable builds. Just package everything from the root of the build and ship it off to another dev.
Proximity of information. You know exactly where the header is with #include "X\Y\Z.h". In the implicit form, you may have to search along multiple paths, and might even find multiple versions of the same file; how do you know which one is used in the build?
Builders have been arguing over these two approaches for many decades, but a hybrid of the two mostly wins out, because of the effort required to maintain builds based purely on the explicit form, and the obvious difficulty of familiarizing oneself with code of a purely implicit nature. We all generally understand that our various toolchains put certain common libraries and headers in particular locations so that they can be shared across users and projects, so we expect to find standard C/C++ headers in one place. But we initially know nothing about the specific structure of an arbitrary project (absent a well-documented local convention), so we expect the code in such projects to be explicit about the non-standard bits that are unique to them and implicit about the standard bits.
It is a good practice to always use the <header> form of include for all the standard headers and other libraries that are not project specific and to use the "header" form for everything else. Should you have an include directory in your project for your local includes? That depends to some extent on whether those headers will be shipped as interfaces to your libraries or merely consumed by your code, and also on your preferences. How large and complex is your project? If you have a mix of internal and external interfaces or lots of different components, you might want to group things into separate directories.
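For instance, a typical source file following this convention might begin like this (the project-local path is hypothetical):
#include <vector>          // standard header: implicit, angle brackets
#include <zlib.h>          // third-party library found on the include path
#include "parser/lexer.h"  // project-specific header: explicit, quoted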
Keep in mind that the directory structure your finished product unpacks to need not look anything like the directory structure in which you develop and build that product. If you have only a few .c/.cpp files and headers, it's fine to put them all in one directory, but eventually you're going to work on something non-trivial, and you will have to think through the consequences of your build-environment choices, and hopefully document them for others to understand.
1. A .hpp and a .cpp don't necessarily have a one-to-one relationship; multiple .cpp files may implement the same .hpp under different conditions (e.g., different environments). For example, in a multi-platform library, imagine a class that returns the version of the app, with a header like this:
Utilities.h
#include <string>

class Utilities{
public:
    static std::string getAppVersion();
};
main.cpp
#include <iostream>
#include "Utilities.h"

int main(){
    std::cout << Utilities::getAppVersion() << std::endl;
    return 0;
}
There may then be one .cpp for each platform, and the .cpp files may be placed in different locations so that the right one is easily selected for the corresponding platform, e.g.:
.cpp for iOS (path: DemoProject/ios/Utilities.cpp):
#include "Utilities.h"
std::string Utilities::getAppVersion(){
//some objective C code
}
.cpp for Android (path: DemoProject/android/Utilities.cpp):
#include "Utilities.h"
std::string Utilities::getAppVersion(){
//some jni code
}
and of course the two .cpp files would not normally be compiled into the same build.
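If you prefer a single translation unit instead of per-platform files, the same selection can be sketched with predefined platform macros (a hedged alternative to the layout above, using the real __ANDROID__ and __APPLE__ macros):
// Utilities.cpp -- single-file alternative selected at preprocessing time
#include "Utilities.h"

#if defined(__ANDROID__)
std::string Utilities::getAppVersion(){
    // JNI code would go here
    return "android-version";
}
#elif defined(__APPLE__)
std::string Utilities::getAppVersion(){
    // Objective-C++ code would go here
    return "ios-version";
}
#endif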
2.
#include "file.h"
instead of
#include "include/file.h"
allows you to keep the source code unchanged if your headers ever stop living in the "include" folder; only the header search path the build system passes to the compiler (e.g. via -I) needs to change.
Related
I'm developing a collection of C++ classes and am struggling with how to share the code in a way that maintains organization without compromising ease of compilation for a user of the collection.
Options that I have seen include:
Distribute compiled library file
Put the source in the header file (with implicit inline as discussed in this answer)
Use symbolic links to allow the compiler to find the files.
I'm currently using the third option, where for each class I want to include, I symbolically link each class's headers and source files (e.g. ln -s <path_to_class_folder>/myclass.cpp). This works well except that I can't move the project folder (it breaks all the symlinks), and I have to have all those symlinked files hanging around.
I like the second option (it has the appearance of Java), but I'm worried about code size bloat if everything is declared inline.
A user of the collection will create a project folder somewhere, and somehow include the collection into their compilation process.
I'd like a few things to be possible:
Easy compilation (something like gcc *.cpp from the project folder)
Easy distribution of library in uncompiled form.
Library organization by module.
Compiled code size is not bloated.
I'm not worried about documentation (Doxygen takes care of that) or compile time: the overall modules are small and even the largest projects on the slowest machines won't take more than a few seconds to compile.
I'm using the GCC compiler, if it makes any difference.
A library is the best option (in my opinion) of the three you raised. Then provide the header file(s) in the include path and the library in the linker path.
Since you also want to distribute the library in source code form, I would be inclined to provide a compressed archive (gzip, 7-zip, tarball, or other preferred format) in a central repository.
If I understand correctly, you do not want users to have to include the .cpp files in their build; instead you want them to either (i) use the headers directly, or (ii) use a compiled form of the lib.
Your requirements are a bit unusual, but they can be achieved. It seems to me like you could organize your code in the following manner. First, have a global define that dictates whether or not you are compiling the library:
// global.h
// ...
#define LIB_SOURCE
// ...
Then in every header file, you check whether that define is set: if the library is distributed as a static/shared lib, the definitions are not included, otherwise, the '.cpp' file is included from the header file.
// A.h
#ifndef _A_H
#define _A_H
#include "global.h"
#ifdef LIB_SOURCE
#include "A.cpp"
#endif
// ...
#endif
where 'A.cpp' would contain the actual implementation.
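From the consumer's side, the same code would work in both modes (a sketch; the consumer file is hypothetical):
// user.cpp
// With LIB_SOURCE defined in global.h, including A.h pulls in A.cpp and no
// library needs to be linked; with LIB_SOURCE removed, only the declarations
// in A.h are used and the compiled library must be linked instead.
#include "A.h"

int main(){
    // use whatever A.h declares
    return 0;
}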
Again, this is a very strange way of doing things and I would actually advise against such practice. A better way (but one which requires more work) is to always distribute a shared library. But to keep things independent of the compiler, write a C layer around it. This way, you have a portable, maintainable library.
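Such a C layer might look roughly like this (a minimal sketch with hypothetical names):
// foo_c_api.h -- C-callable facade over the C++ implementation
#ifdef __cplusplus
extern "C" {
#endif

typedef struct foo_handle foo_handle;   // opaque handle hides the C++ class

foo_handle* foo_create(void);
void foo_destroy(foo_handle* h);
int foo_do_work(foo_handle* h, int input);

#ifdef __cplusplus
}
#endif
Because only C types cross the library boundary, the shared library can then be consumed by code built with a different compiler, sidestepping C++ ABI differences.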
As for some of the other requirements:
Keep the build process simple by providing a Makefile
If you worry about the code size of the compiled library, look into gcc's optimization options (-Os). If you worry about the code size of the library when distributed in source-form in the headers, this is more tricky. Since the (inlined) code will actually be in the headers, the code will obviously grow with each inclusion in a .cpp file by the user.
I ended up using inline headers for all of the code. You can see the library here:
https://github.com/libpropeller/libpropeller/tree/master/libpropeller
The library is structured as:
library folder
class A
classA.h
classA.test.h
class B
classB.h
classB.test.h
class C
...
With this structure I can distribute the library as source, and all the user has to do is add -I/path/to/library to their makefile, and #include "library/classA/classA.h" in their source files.
And, as it turns out, having inline headers actually reduces the code size. I've done a full analysis of this, and it turns out that inline code in the headers allows the compiler to make the final binary roughly 5% smaller.
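An all-inline header in that style is essentially a class with its methods defined in-class (a simplified sketch, not actual libpropeller code):
// classA.h
#ifndef LIBRARY_CLASSA_H
#define LIBRARY_CLASSA_H

class ClassA {
public:
    int Twice(int x) { return 2 * x; }  // defined in-class, implicitly inline
};

#endif
Since in-class definitions are implicitly inline, the linker merges the duplicate definitions that arise when several translation units include the header, and the compiler is free to inline only the calls it considers worthwhile.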
This question already has answers here:
Separate "include" and "src" folders for application-level code? [closed]
I know that it is common in C/C++ projects to place header files in a directory such as include and implementation in a separate directory such as src. I have been toying with different project structures and am wondering whether there are any objective reasons for this, or is it simply convention?
Convention is one of the reasons - most of the time, with effective abstraction, you only care about the interface and want to have it easy just looking at the headers.
It's not the only reason though. If your project is organised in modules, you most likely have to include some headers in different modules, and you want your include directory to be cleaned of other "noise" files in there.
Also, if you plan on redistributing your module, you probably want to hide implementation details. So you only supply headers and binaries - and distributing headers from a single folder with nothing else in it is simpler.
There's also an alternative which I actually prefer - public headers go in a separate folder (these contain the minimum interface - no implementation details are visible whatsoever), and private headers and implementation files are separate (possibly, but not necessarily, in separate folders).
I prefer putting them into the same directory. Reason:
The interface specification file(s) and the source file(s) implementing that interface belong to the same part of the project. Say you have subsystemx. Then, if you put subsystemx's files in the subsystemx directory, subsystemx is self-contained.
If there are many include files, sure, you could do subsystemx/include and subsystemx/source, but then I argue that if you put the definition of class Foo in foo.hpp and its implementation in foo.cpp, you certainly want to see both of them together (or at least have the possibility to do so easily) in a directory listing. Finding all files related to foo:
ls foo*
Finding all implementation files:
ls *.cpp
Finding all declaration files:
ls *.hpp
Simple and clean.
It keeps your folder structure cleaner. Headers and source files are distinctly different, and are used for different things, so it makes sense to separate them. From this point-of-view the question is basically the same as "why do source files and documentation go in different folders"? The computer is highly agnostic about what you put in folders and what you don't, folders are -- for the most part -- just a handy abstraction because of the way that we humans parse, store, and recall information.
There's also the fact that header files remain useful even after you've built, i.e. if you're building a library and someone wants to use that library, they'll need the header files -- not the source files -- so it makes bundling those header files up -- grabbing the stuff in bin and the stuff in include and not having to sift through src -- much easier.
Besides the (arguable?) usefulness of keeping things orderly, reuse in other projects, etc., there is one very neutral and objective advantage: compile time.
In particular, in a big project with a whole bunch of files, if you depend on search paths for the headers (.c/.cpp files using #include "headername.h" rather than #include "../../gfx/misc/something/headername.h", with the compiler passed the right parameters to resolve that), you drastically reduce the number of directory entries the compiler must scan in search of the right header. Since most compilers start separately for each file compiled, they need to read the list of files on the include path and seek the right headers for each compiled file. If a bunch of .c, .o, and other irrelevant files sit on the include path, finding the includes among them takes proportionally longer.
In short, a few reasons:
Maintainable code.
Code is well-designed and neat.
Faster compile times (at times, for minor changes).
Easier segregation of the Interfaces for documentation etc.
Cyclic dependency at compile time can be avoided.
Easy to review.
Have a look at the article Organizing Code Files in C and C++ which explains it well.
Is there an automated way to take a large amount of C++ header files and combine them in a single one?
This operation must, of course, concatenate the files in the right order so that no types, etc. are used in classes and functions before they are defined.
Basically, I'm looking for something that allows me to distribute my library in two files (libfoo.h, libfoo.a), instead of the current bunch of include files + the binary library.
As your comment says:
.. I want to make it easier for library users, so they can just do one single #include and have it all.
Then you could just spend some time including all your headers in a "wrapper" header, in the right order. 50 headers are not that many. Just do something like:
// libfoo.h
#include "header1.h"
#include "header2.h"
// ..
#include "headerN.h"
This will not take that much time, if you do this manually.
Also, adding new headers later is a matter of seconds: just add them to this "wrapper header".
In my opinion, this is the most simple, clean and working solution.
A little bit late, but here it is. I just recently stumbled into this same problem myself and coded this solution: https://github.com/rpvelloso/oneheader
How does it work?
Your project's folder is scanned for C/C++ headers and a list of headers found is created;
For every header in the list, it analyzes the header's #include directives and assembles a dependency graph in the following way:
If the included header is not located inside the project's folder then it is ignored (e.g., if it is a system header);
If the included header is located inside the project's folder, then an edge is created in the dependency graph, linking the included header to the header currently being analyzed;
The dependency graph is topologically sorted to determine the correct order to concatenate the headers into a single file. If a cycle is found in the graph, the process is interrupted (i.e., if it is not a DAG);
Limitations:
It currently only detects single-line #include directives (i.e., where the whole directive is on one line);
It does not handle headers with the same name in different paths;
It only gives you the correct order in which to combine all the headers; you still need to concatenate them (maybe you want to remove or modify some of them prior to merging).
Compiling:
g++ -Wall -ggdb -std=c++1y -lstdc++fs oneheader.cpp -o oneheader[.exe]
Usage:
./oneheader[.exe] project_folder/ > file_sequence.txt
(Adapting an answer to my dupe question:)
There are several other libraries which aim for a single-header form of distribution, but are developed using multiple files; and they too need such a mechanism. For some (most?) it is opaque and not part of the distributed code. Luckily, there is at least one exception: Lyra, a command-line argument parsing library; it uses a Python-based include file fuser/joiner script, which you can find here.
The script is not well-documented, but the way you use it is with 3 command-line arguments:
--src-include - The include file to convert, i.e. to merge its include directives into its body. In your case it's libfoo.h which includes the other files.
--dst-include - The output file to write - the result of the merging.
--src-include-dir - The directory relative to which include files are specified (i.e. an "include search path" of one directory; the script doesn't support the complex mechanism of multiple include paths and search priorities which the C++ compiler offers)
The script acts recursively, so if file1.h includes another file under the --src-include-dir, that should be merged in as well.
Now, I could nitpick at the code of that script, but - hey, it works and it's FOSS - distributed with the Boost license.
If your library is so big that you cannot build and maintain a single wrapping header file as Kiril suggested, this may mean that it is not architected well enough.
So if your library is really huge (above a million lines of source code), you might consider automating that, with tools like
GCC's dependency-generating preprocessor options (-M, -MD, -MF, etc.), with another hand-made script sorting the output
expensive commercial static-analysis tools like Coverity
customizing a compiler through plugins or (for GCC 4.6) MELT extensions
But I don't understand why you want an automated way of doing this. If the library is of reasonable size, you should understand it and be able to write and maintain a wrapping header by hand. Automating that task will take you some effort (probably weeks, not minutes), so it is worthwhile only for very large libraries.
If you have a master include file that includes all others available, you could simply hack a C preprocessor re-implementation in Perl. Process only ""-style includes and recursively paste the contents of these files. Should be a twenty-liner.
If not, you have to write one up yourself or try at random. Automatic dependency tracking in C++ is hard. Like in "let's see if this template instantiation causes an implicit instantiation of the argument class" hard. The only automated way I see is to shuffle your include files into a random order, see if the whole bunch compiles, and re-shuffle them until it compiles. Which will take n! time, you might be better off writing that include file by hand.
While the first variant is easy enough to hack, I doubt the sensibility of this hack, because you want to distribute on a package level (source tarball, deb package, Windows installer) instead of a file level.
You really need a build script to generate this as you work, plus a preprocessor flag to disable use of the amalgamation (which could be useful for your own builds).
To simplify this script/program, it helps to have your header structures and include hygiene in top form.
Your program/script will need to know your discovery paths (hint: minimise the count of search paths to one if possible).
Run the script or program (which you create) to replace include directives with header file contents.
Assuming your headers are all guarded as is typical, you can keep track of which files you have already physically included and perform no action if they are requested again. If a header is not found, leave the directive as-is (as an include directive) -- this is required for system/third-party headers -- unless you use a separate header for external includes (which is not at all a bad idea).
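As a rough illustration of such a script, here is a minimal C++17 sketch; it assumes all project headers live in a single directory given on the command line, and it only understands simple one-line #include "..." directives (everything else, including <...> includes, is passed through verbatim):
// amalgamate.cpp -- paste ""-style includes recursively, each file once
// Build with: g++ -std=c++17 amalgamate.cpp -o amalgamate
#include <filesystem>
#include <fstream>
#include <iostream>
#include <set>
#include <string>

namespace fs = std::filesystem;

std::set<fs::path> emitted;   // headers already pasted (include-once)

void paste(const fs::path& file, const fs::path& root, std::ostream& out) {
    if (!emitted.insert(fs::weakly_canonical(file)).second)
        return;                                    // already included once
    std::ifstream in(file);
    std::string line;
    while (std::getline(in, line)) {
        const std::string token = "#include \"";
        auto open = line.find(token);
        if (open != std::string::npos) {
            auto start = open + token.size();
            auto end = line.find('"', start);
            fs::path inner = root / line.substr(start, end - start);
            if (end != std::string::npos && fs::exists(inner)) {
                paste(inner, root, out);           // project header: recurse
                continue;
            }
        }
        out << line << '\n';                       // everything else verbatim
    }
}

int main(int argc, char** argv) {
    if (argc < 3) {
        std::cerr << "usage: amalgamate <header-dir> <top-header>\n";
        return 1;
    }
    paste(fs::path(argv[1]) / argv[2], argv[1], std::cout);
}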
It's good to have a build phase/translation that compiles each header on its own and produces zero warnings or errors (with warnings treated as errors).
Alternatively, you can create a special distribution repository so they never need to do more than pull from it occasionally.
What you want to do sounds "javascriptish" to me :-) . But if you insist, there is always "cat" (or the equivalent in Windows):
$ cat file1.h file2.h file3.h > my_big_file.h
Or if you are using gcc, create a file my_decent_lib_header.h with the following contents:
#include "file1.h"
#include "file2.h"
#include "file3.h"
and then use
$ gcc -C -E my_decent_lib_header.h -o my_big_file.h
and this way you even get file/line directives that refer to the original files (although that can be disabled with the preprocessor's -P option, if you wish).
As for how automatic this is with respect to your file order: not at all; you have to decide the order yourself. In fact, I would be surprised to hear of a tool that orders header dependencies correctly in all cases for C/C++.
Usually you don't want to include every bit of information from all your headers in the special header that enables the potential user to actually use your library. The non-trivial removal of type definitions, further includes, or defines that the user of your interface does not need to know about cannot be automated, as far as I know.
Short answer to your main question:
No.
My suggestions:
manually make a new header that contains all relevant information (nothing more, nothing less) for the user of your library interface. Add nice documentation comments for each component it contains.
use forward declarations where possible instead of full-fledged included definitions (see the sketch after this list). Put the actual includes in your implementation files. The fewer include statements you have in your headers, the better.
don't build a deeply nested hierarchy of includes. This makes it extremely hard to keep an overview of the contents of everything you include. The user of your library will look into the header to learn how to use it, and he will probably not be able to distinguish relevant code from irrelevant at first sight. You want to maximize the ratio of relevant code to total code in the main header of your library.
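To illustrate the forward-declaration point from the list above (names are hypothetical):
// widget.h -- the full definition of Engine is not needed here
class Engine;   // forward declaration suffices for pointers and references

class Widget {
public:
    explicit Widget(Engine& engine);
private:
    Engine* engine_;
};
// widget.cpp would #include "engine.h", where the complete type is required.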
EDIT
If you really do have a toolkit library, and the order of inclusion really does not matter, and you have a bunch of independent headers, that you want to enumerate just for convenience into a single header, then you can use a simple script. Like the following Python (untested):
import glob

with open("convenience_header.h", 'w') as f:
    for header in glob.glob("*.h"):
        f.write("#include \"%s\"\n" % header)
In my workplace we have a big C++ code base, and I think there's a problem with how header files are used.
There are many Visual Studio projects, but the problem is conceptual and not related to VS. Each project is a module performing particular functionality. Each project/module is compiled to a library or binary. Each project has a directory containing all source files - *.cpp and *.h. Some header files are the API of the module (I mean the subset of header files declaring the API of the created library), some are internal to it.
Now to the problem - when module A needs to work with module B, then A adds B's source directory to its include search path. Therefore all of B's internal headers are visible to A at compilation time.
As a side effect, developers are not forced to think about what the exact API of each module is, which I consider a bad habit anyway.
I'm considering how it should have been done in the first place. I thought about creating in each project a dedicated directory containing interface header files only. A client module wishing to use the module would be permitted to include the interface directory only.
Is this approach OK? How is this problem solved in your place?
UPD: In my previous place, development was done on Linux with g++/gmake, and we indeed used to install API header files to a common directory, as some of the answers propose. Now we have a Windows (Visual Studio)/Linux (g++) project that uses CMake to generate project files. How do I force the pre-build installation of API header files in Visual Studio?
Thanks
Dmitry
It sounds like you're on the right track. Many third-party libraries do this same sort of thing. For example:
3rdParty/myLib/src/ - contains the headers and source files needed to compile the library
3rdParty/myLib/include/myLib/ - contains the headers needed for external applications to include
Some people/projects just put the headers to be included by external apps in /3rdParty/myLib/include, but adding the additional myLib directory can help to avoid name collisions.
Assuming you're using the structure: 3rdParty/myLib/include/myLib/
In Makefile of external app:
---------------
INCLUDE =-I$(3RD_PARTY_PATH)/myLib/include
INCLUDE+=-I$(3RD_PARTY_PATH)/myLib2/include
...
...
In Source/Headers of the external app
#include "myLib/base.h"
#include "myLib/object.h"
#include "myLib2/base.h"
Wouldn't it be more intuitive to put the interface headers in the root of the project, and make a subfolder (call it 'internal' or 'helper' or something like that) for the non-API headers?
Where I work we have a delivery folder structure created at build time. Header files that define libraries are copied out to an include folder. We use custom build scripts that let the developer denote which header files should be exported.
Our build is then rooted at a subst'ed drive; this allows us to use absolute paths for include directories.
We also have a network based reference build that allows us to use a mapped drive for include and lib references.
UPDATE: Our reference build is a network share on our build server. We use a reference build script that sets up the build environment and maps (using net use) the named share on the build server (i.e. \\BLD_SRV\REFERENCE_BUILD_SHARE). Then during a weekly build (or manually) we set the share (using net share) to point to the new build.
Our projects then use a list of absolute paths for include and lib references.
For example:
subst'ed local build drive: j:\
mapped drive to reference build: p:\
path to headers: root:\build\headers
path to libs: root:\build\release\lib
include path in project settings: j:\build\headers;p:\build\headers
lib path in project settings: j:\build\release\lib;p:\build\release\lib
This will pick up your local changes first; if you have not made any local changes (or at least haven't built them), it will use the headers and libs from your last build on the build server.
I've seen problems like this addressed by having a set of headers in module B that get copied over to the release directory along with the lib as part of the build process. Module A then only sees those headers and never has access to the internals of B. Usually I've only seen this in a large project that was released publicly.
For internal projects it just doesn't happen. What usually happens is that when they are small, it doesn't matter. And by the time they grow, it's so messy to separate things out that no one wants to do it.
Typically I just see an include directory that all the interface headers get piled into. It certainly makes it easy to include headers. People still have to think about which modules they're taking dependencies on when they specify the modules for the linker.
That said, I kinda like your approach better. You could even avoid adding these directories to the include path, so that people can tell what modules a source file depends on just by the relative paths in the #includes at the top.
Depending on how your project is laid out, this can be problematic when including them from headers, though, since the relative path to a header is resolved from the .cpp file, not from the .h file, so the .h file doesn't necessarily know where its .cpp files are.
If your projects have a flat hierarchy, however, this will work. Say you have
base\foo\foo.cpp
base\bar\bar.cpp
base\baz\baz.cpp
base\baz\inc\baz.h
Now any header file can include
#include "..\baz\inc\baz.h
and it will work since all the cpp files are one level deeper than base.
In a group I worked in, everything public was kept in a module-specific folder, while private stuff (private headers, .cpp files, etc.) was kept in an _imp folder within it:
base\foo\foo.h
base\foo\_imp\foo_private.h
base\foo\_imp\foo.cpp
This way you could just poke around within your project's folder structure and grab the header you want. You could grep for #include directives containing _imp and have a good look at them. You could also grab the whole folder, copy it somewhere, and delete all _imp subfolders, knowing you'd have everything ready to release an API.
Within projects, headers were usually included as
#include "foo/foo.h"
However, if the project had to use some API, then the API headers would be copied/installed by the API's build to wherever they were supposed to go on that platform, and would then be included as system headers:
#include <foo/foo.h>
One popular way to organize a project directory is more or less like this:
MyLib
 +--mylib_class_a.h
 +--mylib_class_a.cpp
 +--mylib_library_private_helpers.h
 +--mylib_library_private_helpers.cpp

MyApp
 +--other_class.h
 +--other_class.cpp
 +--app.cpp
app.cpp:
#include "other_class.h"
#include <mylib_class_a.h> // using library MyLib
All .h and .cpp files for the same library are in the same directory. To avoid name collisions, file names are often prefixed with the company name and/or library name. MyLib will be in MyApp's header search path, etc. I'm not a fan of prefixing filenames, but I like the idea of looking at the #include and knowing exactly where that header file belongs. I don't hate this approach of organizing files, but I think there should be a better way.
Since I'm starting a new project, I want to solicit some directory organization ideas. Currently I like this directory structure:
ProjA
 +--include
     +--ProjA
         +--mylib
             +--class_a.h
         +--app
             +--other_class.h
 +--src
     +--mylib
         +--class_a.cpp
         +--library_private_helpers.h
         +--library_private_helpers.cpp
     +--app
         +--other_class.cpp
         +--app.cpp
         +--util.h
app.cpp:
#include "util.h" // private util.h file
#include <ProjA/app/other_class.h> // public header file
#include <ProjA/mylib/class_a.h> // using class_a.h of mylib
#include <other3rdptylib/class_a.h> // class_a.h of other3rdptylib, no name collision
#include <class_a.h> // not ProjA/mylib/class_a.h
#include <ProjA/mylib/library_private_helpers.h> // error can't find .h
.cpp files and private (only visible to immediate library) .h files are stored under the src directory (src is sometimes called lib). Public header files are organized into a project/lib directory structure and included via <ProjectName/LibraryName/headerName.h>. File names are not prefixed with anything. If I ever needed to package up MyLib to be used by other teams, I could simply change my makefile to copy the appropriate binary files and the whole include/ProjA directory.
Once files are checked into source control and people start working on them it will be hard to change directory structure. It is better to get it right at the get-go.
Anyone with experience organizing source code like this? Anything you don't like about it? If you have a better way to do it, I would very much like to hear about it.
Well, it all depends on how big these projects are. If you've only got a few files, then whack them all in one folder.
Too many folders when you haven't got many files to manage is in my opinion overkill. It gets annoying digging in and out of folders when you've only got a few files in them.
Also, it depends on who's using this stuff. If you're writing a library that's going to be used by other programmers, then it's good to organize the headers they want to use into an include folder. If you're creating a number of libraries and publishing them all, then your structure might work. But if they're independent libraries, developed separately and versioned and released at different times, you'd be better off sticking with having all files for one project locatable within one folder.
In fact, I would say keep everything in one folder until you get to the point where you find it unmanageable, then reorganize into a clever scheme of dividing the source into folders like you've done. You'll probably know how it needs to be organized from the problems you run into.
KISS is almost always the solution in programming -> keep everything as simple as possible.
Why not do something like the first, only use the directory that MyLib resides in as a part of the include directive, which reduces the silly prefixing:
#include <MyLib/ClassA.h>
That tells you where they are from. As for the second choice, I personally get really annoyed when I have a header or source file open and have to navigate through the directory structure to find the other one. With your second example, if you had src/mylib/class_a.cpp open and wanted to edit the header, in many editors you'd have to go up two levels, then into include/ProjA, before finding the header. And how are we to know that the header is in the ProjA subdirectory without some other external clue? Plus, it's too easy for one file or the other to get moved to a different place that "better" represents how it is used, without the alternate file being moved along with it. It just gives me headaches when I encounter this at my job (and we do have some parts of our codebase where people did every potential problem I've just mentioned).
I have tried both methods. Personally, I like the first better. I understand the urge to put everything in more specific directories, but it causes a lot of over-complication.
I usually use this rule: applications and in-house libraries use the first method. Public open source libraries use the second method. When you are releasing the code, it helps a lot if the include files are in a separate directory.