Is there any way to add files in the same manner as target include directories but for individual files?
My use case is this: I'm using a lot of templates and concepts, and my header files are getting bulky. I really liked having implementations split from declarations, so I want something like this:
.h file
template <typename T>
T foo(T bar);
#include <name_of_implementation_file>
implementation .h file
template <typename T>
T foo(T bar) { return bar; }
Currently the include has to look like this: #include "path/to/implementation_file_name"
This is annoying, as it makes refactoring a pain later. I also don't want to add extra directories with just one or two files and call target_include_directories on them. I don't want to simply add everything to the include path, because my project is a library and I don't want the user to be able to include the implementation files. I am also building multiple targets and want to keep their included files separate.
TL;DR: for a user to use a template, the user has to see all of it.
Is there something similar to target_include_files in CMake?
No, there is not.
Is there any way to add files in the same manner as target include directories but for individual files?
The only way is:
create an empty directory (somewhere within CMAKE_CURRENT_BINARY_DIR)
copy the file there
add that directory to include_directories
Just for fun, that looks easy to implement, in untested pseudocode:
function(target_include_files target mode)
    # Stage the given files into a unique directory under the build tree,
    # then expose that directory as an include directory of the target.
    string(MD5 dir "${ARGN}")
    set(dir ${CMAKE_CURRENT_BINARY_DIR}/${dir})
    file(MAKE_DIRECTORY ${dir})
    foreach(i IN LISTS ARGN)
        # TODO: replace with build-time generation
        get_filename_component(name ${i} NAME)
        configure_file(${i} ${dir}/${name} COPYONLY)
    endforeach()
    target_include_directories(${target} ${mode} ${dir})
endfunction()
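A rough usage sketch, assuming the function above and hypothetical target and file names (PUBLIC is used here because a public header that does the include forces consumers to need the staging directory too):
add_library(mylib STATIC src/mylib.cpp)
# stage the implementation header and add the staging directory
# to mylib's include path and, via PUBLIC, to its consumers'
target_include_files(mylib PUBLIC src/detail/foo_impl.h)
The header can then write #include "foo_impl.h" without any path, regardless of where the implementation file actually lives in the source tree.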
I really liked having implementation split from definitions so want something like so:
The simplest would be to keep implementation files in the same directory as the header files; that way, a plain #include "file" would suffice, because quoted includes search the directory of the including file first. Failing that, move the implementation files to a subdirectory and include them relative to the current file. Failing that, add the directory containing the implementation files to the include search paths.
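For instance, with a hypothetical detail/ subdirectory next to the header, the include stays relative to the header and needs no extra search path:
// foo.h
template <typename T>
T foo(T bar);

// quoted includes are searched relative to this file first,
// so the subdirectory name is all that is needed
#include "detail/foo_impl.h"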
I don't want the user to be able to include the implementation files.
So, as some closed-source C++ libraries do, explicitly instantiate the template for common types in source files and provide extern explicit instantiation declarations in your header files for those types. This would, of course, limit the usability of your templates to only the types you explicitly instantiated them for.
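A minimal sketch of that pattern, with placeholder names:
// foo.h -- declaration plus explicit instantiation declarations
template <typename T>
T foo(T bar);

extern template int foo<int>(int);
extern template double foo<double>(double);

// foo.cpp -- definition plus explicit instantiation definitions,
// compiled into the library; users never see this file
#include "foo.h"

template <typename T>
T foo(T bar) { return bar; }

template int foo<int>(int);
template double foo<double>(double);
Consumers can now call foo with int or double, but any other type will fail to link.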
I'd like the user to be able to do that [instantiate the template] themselves,
Then it's impossible (or it makes your library just unusable and pointless). To instantiate a template, anyone who uses it has to see the whole definition (or, in some cases, an extern explicit instantiation for that type can be provided in a separate translation unit, as mentioned above). If the user does not see the whole definitions of all the symbols they use and no explicit instantiation is provided, they will just end up with undefined references to symbols from the template.
For further research, look up the term "explicit instantiation" in the context of C++ templates: when it is used and how it differs from "implicit instantiation". Beyond that, review material on how the compiler and linker work, what linkage is, and how C++ templates were designed.
Related
In a large Qt project in which a lot of Qt and project headers are included in every file, it is easy to:
include extra Qt files that don't need to be included because they are already included in another Qt file (for example, qbytearray.h is included in qstring.h).
forget to include needed Qt files because they are already included in other included project files (for example, the compiler finds qstring.h included in another of your files and doesn't complain).
leave extra Qt files included that are no longer needed after a modification.
I have also been reading that, even with modern compilers, it is better to include only the files needed, instead of taking the easy way of including more generic headers like QtCore and QtGui.
The rule seems easy: include everything you need, and only that, and don't depend on other included files in case they change in the future (for example, qstring.h might stop including qbytearray.h, and the same holds for project files), but it's not so easy to achieve. And Qt Creator doesn't help much with that, because when you begin to write QStr... it auto-completes to QString and it compiles, so you don't even wonder why, nor think of including the header.
Is there a list of Qt header dependencies, an automatic Qt tool, a rule, or something else to make sure I have chosen all the headers I need and nothing more? The question applies to C/C++ in general: a way to get the optimal set of header dependencies.
The rules of thumb to minimize the number of include files read:
A .cpp file usually has an associated header. That header must be included first - it ensures that the header will compile by itself and is not missing any dependencies.
For any class hierarchy, include only the most derived class's headers. E.g. if you include <QLabel>, you won't need <QFrame>, nor <QWidget>, nor <QObject>. If you include <QGraphicsView> and <QLabel>, you won't need <QAbstractScrollArea>, nor <QFrame>, nor <QWidget>, nor <QObject>. And so on.
Other than in the preceding rule, do not depend on "files included by other files". I.e. QString including QByteArray is an implementation detail and the API of QString does not warrant such inclusion.
The rules of thumb to minimize the number of compiled source files:
Cut the number of compiled files by two (!!) by adding #include "foo.moc" at the end of every foo.cpp that implements new QObject types (see the sketch after this list).
Short classes (<250 lines total) belong in a single .h file; there's no need to split them between .h and .cpp.
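A minimal sketch of the foo.moc rule, with a made-up class name:
// foo.cpp -- declares and implements a QObject type in one file
#include <QObject>

class Foo : public QObject {
    Q_OBJECT
public:
    explicit Foo(QObject *parent = nullptr) : QObject(parent) {}
signals:
    void done();
};

// pulling the generated moc output into this translation unit means
// there is no separate foo.h to run through moc and compile
#include "foo.moc"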
When I work on my personal C and C++ projects I usually put file.h and file.cpp in the same directory and then file.cpp can reference file.h with a #include "file.h" directive.
However, it is common to find libraries and other kinds of projects (like the Linux kernel and FreeRTOS) where all .h files are placed inside an include/ directory, while .cpp files remain in another directory. In those projects, .h files are also included with #include "file.h" instead of the #include "include/file.h" I would have expected.
I have some questions about all of this:
What are the advantages of this file structure organization?
Why are .h files inside include/ included with #include "file.h" instead of #include "include/file.h"? I know the real trick is inside some Makefile, but is it really better to do it that way instead of making it clear (in the code) that the file we want to include is actually in the include/ directory?
The main reason to do this is that compiled libraries need headers in order to be consumed by the eventual user. By convention, the contents of the include directory are the headers exposed for public consumption. The source directory may have headers for internal use, but those are not meant to be distributed with the compiled library.
So when using the library, you link to the binary and add the library's include directory to your build system's header paths. Similarly, if you install your compiled library to a centralized location, you can tell which files need to be copied to the central location (the compiled binaries and the include directory) and which files don't (the source directory and so forth).
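A typical layout under that convention might look like the following, with illustrative names and a CMake sketch of how the include directory gets onto the header search path:
# mylib/include/mylib/widget.h   <- public headers, shipped with the library
# mylib/src/widget.cpp           <- sources, not shipped
# mylib/src/widget_p.h           <- internal headers, not shipped
add_library(mylib STATIC src/widget.cpp)
# consumers (and mylib itself) compile with -I.../mylib/include,
# so user code writes #include "mylib/widget.h"
target_include_directories(mylib PUBLIC include)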
It used to be that <header> style includes were implicit-path includes, that is, found via the include search path (an environment variable or build macro), while "header" style includes were explicit, i.e. resolved exactly relative to wherever the including source file is. While some build toolchains still allow this distinction, they often default to a configuration that effectively nullifies it.
Your question is interesting because it raises the question of which really is better, implicit or explicit. The implicit form is certainly easier because:
Convenient groupings of related headers in hierarchies of directories.
You only need to add a few directories to the include path and need not be aware of every detail regarding exact file locations. You can change versions of libraries and their related headers without changing code.
DRY.
Flexible! Your build environment doesn't have to match mine, but we can often get nearly the exact same results.
Explicit on the other hand has:
Repeatable builds. A reordering of paths in an include macro/environment variable doesn't change which header files are found during the build.
Portable builds. Just package everything from the root of the build and ship it off to another dev.
Proximity of information. You know exactly where the header is with #include "X/Y/Z.h". In the implicit form, you may have to go searching along multiple paths and might even find multiple versions of the same file; how do you know which one is used in the build?
Builders have been arguing over these two approaches for many decades, but a hybrid of the two mostly wins out, because of the effort required to maintain builds based purely on the explicit form and the obvious difficulty of familiarizing oneself with code of a purely implicit nature. We all generally understand that our various toolchains put certain common libraries and headers in particular locations so that they can be shared across users and projects, so we expect to find standard C/C++ headers in one place. But we don't initially know anything about the specific structure of an arbitrary project, lacking a locally well-documented convention, so we expect the code in such projects to be explicit about the non-standard bits that are unique to them and implicit about the standard bits.
It is good practice to always use the <header> form of include for all the standard headers and other libraries that are not project-specific, and to use the "header" form for everything else. Should you have an include directory in your project for your local includes? That depends to some extent on whether those headers will be shipped as interfaces to your libraries or merely consumed by your code, and also on your preferences. How large and complex is your project? If you have a mix of internal and external interfaces or lots of different components, you might want to group things into separate directories.
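To make the first recommendation above concrete (the paths are illustrative):
#include <vector>            // standard library: angle-bracket form
#include <zlib.h>            // external, non-project library: angle-bracket form
#include "widgets/button.h"  // project-local header: quoted form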
Keep in mind that the directory structure your finished product unpacks to need not look anything like the directory structure in which you develop and build that product. If you have only a few .c/.cpp files and headers, it's fine to put them all in one directory, but eventually you're going to work on something non-trivial and will have to think through the consequences of your build environment choices, and hopefully document them for others to understand.
1. .hpp and .cpp files don't necessarily have a 1-to-1 relationship; there may be multiple .cpp files using the same .hpp under different conditions (e.g. different environments). For example, in a multi-platform library, imagine there is a class to get the version of the app, and the header looks like this:
Utilities.h
#include <string>

class Utilities {
public:
    static std::string getAppVersion();
};
main.cpp
#include <iostream>
#include "Utilities.h"

int main() {
    std::cout << Utilities::getAppVersion() << std::endl;
    return 0;
}
There may be one .cpp for each platform, and the .cpp files may be placed in different locations so that the right one can easily be selected for the corresponding platform, e.g.:
.cpp for iOS (path:DemoProject/ios/Utilities.cpp):
#include "Utilities.h"
std::string Utilities::getAppVersion(){
//some objective C code
}
.cpp for Android (path:DemoProject/android/Utilities.cpp):
#include "Utilities.h"
std::string Utilities::getAppVersion(){
//some jni code
}
Of course, the two .cpp files would normally not be compiled at the same time.
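One way of doing that selection, sketched in CMake (the target name and paths are made up; IOS and ANDROID are the usual platform variables set when cross-compiling):
# pick the platform-specific implementation of Utilities at configure time
if(IOS)
    set(UTILITIES_SRC ios/Utilities.cpp)
elseif(ANDROID)
    set(UTILITIES_SRC android/Utilities.cpp)
endif()
add_library(utilities STATIC ${UTILITIES_SRC})
target_include_directories(utilities PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})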
2.
#include "file.h"
instead of
#include "include/file.h"
allows you to keep the source code unchanged when your headers are not placed in the "include" folder anymore.
I have a C++ 'library' which consists of a set of reusable classes which are templated (i.e. all source code is in header files) and a set of driver files. Each driver source file includes some (but not necessarily all) headers with class templates.
It would be nice if I could instantiate these class templates in each driver file with specific template parameters (known at compile time) and then automate the initialization of objects with the instantiated type by reading configuration files (this would help me remove some boilerplate code). These configuration files would be read upon object construction.
Suppose the config files would be bundled with the source code. Where should they be placed when drivers are compiled so that each class can locate its config files? I am using CMake to build the code.
Since reusable code is not compiled into a library, I can't place the config files in the same location as the library. I'm not even sure whether that would be a good idea actually.
One solution would be to specify a folder with config files as a CMake variable and hardcode this value in the source code of every configurable class. Is there a better way of doing this? Perhaps there's a standard CMake-style way of handling the problem?
I would consider doing this with good ol' macros. You can use target_compile_definitions() to define macros in your source code. Your config files could then be CMake files themselves, loaded with include(). Then in your source files, you could do an explicit template specialization or a typedef to MyTemplateClass<TEMPLATE_ARG_MACRO_1, TEMPLATE_ARG_MACRO_2>.
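A rough sketch of how that could fit together; every name here is made up for illustration:
# config_release.cmake -- one of the "config files", written as CMake
set(SCALAR_TYPE double)
set(GRID_DIM 3)

# CMakeLists.txt -- load the config and turn its values into macros
include(config_release.cmake)
add_executable(driver driver.cpp)
target_compile_definitions(driver PRIVATE
    TEMPLATE_ARG_MACRO_1=${SCALAR_TYPE}
    TEMPLATE_ARG_MACRO_2=${GRID_DIM})
In driver.cpp, the typedef mentioned above would then read something like using Solver = MyTemplateClass<TEMPLATE_ARG_MACRO_1, TEMPLATE_ARG_MACRO_2>;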
Hopefully that makes some sense.
I have a Visual C++ solution with 2 projects AlgorithmA & AlgorithmB and both share a common header file RunAlgo.h with the class declaration. Each project in the solution has its own unique implementation for the header file.
I am trying to compile a DLL out of the common header file RunAlgo.h and add a reference to this DLL in the AlgorithmA and AlgorithmB projects. I have then included a separate RunAlgo.cpp definition file in each of my projects. The problem is that I am getting linker errors while compiling the new DLL project, which contains only the header file.
So, the question is
Can a header file with only class declaration be compiled into a DLL (Similar to class library containing an Interface in C#)?
For the above scenario, is there a better approach to reuse the common Header file among projects?
Should the above method work (re-check my code?)
1 & 3: No, that doesn't make sense in C++.
Libraries (dynamic or otherwise) are only used during linking. During compilation, declarations must be visible to the compiler in source-code form. This is why, for example, you have to explicitly #include standard library headers in addition to linking against the standard library.
2: What you're already doing is basically the only solution. Put the common header files in their own directory, and add that directory to the include path of each of the two projects.
Can a header file with only class declaration be compiled into a DLL?
No, headers typically contain only declarations. Declarations, when compiled, don't produce any machine code, so the resulting DLL would be empty.
For the above scenario, is there a better approach to reuse the common Header file among projects?
Reusing headers is fine. In fact, every library has its set of headers that you need to include in projects using that library.
I don't know much Visual C++, but I think you could make a third project containing the common parts (i.e. the RunAlgo.h header) and mark it as a dependency of the AlgorithmA and AlgorithmB projects.
To 1.:
No, free-standing header files never end up in a dll. Header files are included in implementation files and that's how they are compiled. Header files are usually distributed along with a dll or library if you want to allow third parties to link against it.
To 2.:
Why don't you declare an abstract base class with the interface for the algorithm and provide two different implementations by defining two subclasses (AlgorithmA and AlgorithmB) deriving from the base class? I don't get why you want two different DLLs.
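A minimal sketch of that design; the member function name run() and its signature are made up:
// RunAlgo.h -- the shared interface, included by both projects
class RunAlgo {
public:
    virtual ~RunAlgo() = default;
    virtual int run(int input) = 0;
};

// AlgorithmA.h (project AlgorithmA)
class AlgorithmA : public RunAlgo {
public:
    int run(int input) override;  // implemented in AlgorithmA.cpp
};

// AlgorithmB.h (project AlgorithmB)
class AlgorithmB : public RunAlgo {
public:
    int run(int input) override;  // implemented in AlgorithmB.cpp
};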
To 3.:
No, it shouldn't. See point 1.
Use two namespaces in C++ to write two different implementations of the same header file:
namespace ImplementationA
{
    void function1();  // first implementation of the shared interface
}
namespace ImplementationB
{
    void function1();  // second implementation of the shared interface
}
When you want to use the first implementation:
using namespace ImplementationA;
or
ImplementationA::function1();
In the C++ Boost libraries, why is there a ".ipp" extension on some header files?
It seems like they are header files included by the ".hpp" file of the same name.
Is this convention common outside of Boost?
What is the justification for having a special file type?
Explanation from one of the template gurus:
If you want to split up your template sources into interface and implementation (there are lots of good reasons to do that, including controlling instantiation), you can't very well use the same name (foo.hpp) twice, and foo.cpp wouldn't be appropriate for either one. foo.ipp clearly delineates the file as an implementation file intended to be #included in foo.hpp.
I believe "ipp" stands from "implementation" file. i.e, they hold actually code (for inline functions & templates) rather than just declaration (which are in the header --.H or .HPP -- files)