C++ and CMake: configure a template class from file

I have a C++ 'library' which consists of a set of reusable classes which are templated (i.e. all source code is in header files) and a set of driver files. Each driver source file includes some (but not necessarily all) headers with class templates.
It would be nice if I could instantiate these class templates in each driver file with specific template parameters (known at compile time) and then automate the initialization of objects with the instantiated type by reading configuration files (this would help me remove some boilerplate code). These configuration files would be read upon object construction.
Suppose the config files would be bundled with the source code. Where should they be placed when drivers are compiled so that each class can locate its config files? I am using CMake to build the code.
Since reusable code is not compiled into a library, I can't place the config files in the same location as the library. I'm not even sure whether that would be a good idea actually.
One solution would be to specify a folder with config files as a CMake variable and hardcode this value in the source code of every configurable class. Is there a better way of doing this? Perhaps there's a standard CMake-style way of handling the problem?

I would consider doing this with good ol' macros. You can use target_compile_definitions() to define macros in your source code. Your config files could then be CMake files themselves, loaded with include(). Then in your source files, you could do an explicit template specialization or a typedef to MyTemplateClass<TEMPLATE_ARG_MACRO_1, TEMPLATE_ARG_MACRO_2>.
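For example, here is a minimal sketch of that idea. The target name, the chosen values and the header name are hypothetical; only target_compile_definitions(), MyTemplateClass and the TEMPLATE_ARG_MACRO_* names come from the suggestion above:

// In CMakeLists.txt (the values could come from an include()'d config file):
//   target_compile_definitions(driver PRIVATE
//       TEMPLATE_ARG_MACRO_1=double TEMPLATE_ARG_MACRO_2=4)
//
// driver.cpp
#include "my_template_class.h"  // hypothetical header-only class template

#ifndef TEMPLATE_ARG_MACRO_1
#error "TEMPLATE_ARG_MACRO_1 must be defined via target_compile_definitions()"
#endif
#ifndef TEMPLATE_ARG_MACRO_2
#error "TEMPLATE_ARG_MACRO_2 must be defined via target_compile_definitions()"
#endif

// Instantiate the reusable class with parameters chosen at configure time.
using ConfiguredType = MyTemplateClass<TEMPLATE_ARG_MACRO_1, TEMPLATE_ARG_MACRO_2>;

int main() {
    ConfiguredType obj;  // construction could then read the object's config file
    return 0;
}

The directory holding the config files could be passed the same way, as a string-valued definition, instead of being hardcoded in every configurable class.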
Hopefully that makes some sense.

Related

is there something similar to target_include_files in cmake

Is there any way to add files in the same manner as target include directories but for individual files?
My use case is this. I'm using a lot of templates and concepts and my header files are getting bulky. I really like having the implementation split from the declarations, so I want something like this:
.h file
template <typename T>
T foo(T bar);
#include <name_of_implementation_file>
implementation .h file
template <typename T>
T foo(T bar) { return bar; }
currently the include has to look like so: #include "path/to/implementation_file_name"
This is annoying, as it makes refactoring a pain later. I also don't want to add extra directories with just 1 or 2 files and call target_include_directories on them. I don't want to just add them to the public include directories either, since my project is a library and I don't want the user to be able to include the implementation files. I am also building multiple targets and want to keep their included files separate.
TL;DR: for the user to use a template, the user has to see all of it.
is there something similar to target_include_files in cmake
No, there is not.
Is there any way to add files in the same manner as target include directories but for individual files?
The only way is:
create an empty directory (somewhere within CMAKE_CURRENT_BINARY_DIR)
copy the file there
add that directory to include_directories
Just for fun, that looks easy to implement, in untested pseudocode:
function(target_include_files target mode)
string(MD5 dir "${ARGN}")
set(dir ${CMAKE_CURRENT_BINARY_DIR}/${dir})
file(MAKE_DIRECTORY ${dir})
foreach(i IN LISTS ARGN)
# TODO: replace with build-time generation
configure_file(${i} ${dir} COPYONLY) # dir already contains CMAKE_CURRENT_BINARY_DIR
endforeach()
target_include_directories(${target} ${mode} ${dir})
endfunction()
I really like having the implementation split from the declarations, so I want something like this:
The simplest would be to keep the implementation files in the same directory as the header files - that way, a simple #include "file" suffices, because quoted includes search the directory of the including file first. Failing that, move the implementation files to a subdirectory and include them via a path relative to the header. Failing that, add the directory with the implementation files to the include search paths.
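For example, a minimal sketch of the subdirectory variant, with made-up file names (foo is the function template from the question):

// include/foo.h
#ifndef FOO_H
#define FOO_H

template <typename T>
T foo(T bar);

// Quoted include, resolved relative to this header rather than the including .cpp:
#include "detail/foo_impl.h"

#endif

// include/detail/foo_impl.h
template <typename T>
T foo(T bar) { return bar; }

No extra target_include_directories() call is needed for the detail/ folder, because the include is resolved relative to foo.h itself.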
I don't want the user to be able to include the implementation files.
So, as some closed-source C++ libraries do, explicitly instantiate the template for common types in source files and provide explicit instantiation declarations (extern template) in your header files for those types. This would of course limit the usability of your templates to only the types you explicitly instantiated.
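A minimal sketch of that approach, using a hypothetical Matrix class:

// matrix.h - shipped to users; member definitions are not visible here
template <typename T>
class Matrix {
public:
    T trace() const;
    // ...
};

// Explicit instantiation declarations: these specializations are provided by the library.
extern template class Matrix<float>;
extern template class Matrix<double>;

// matrix.cpp - compiled into the (possibly closed-source) library, not shipped
#include "matrix.h"

template <typename T>
T Matrix<T>::trace() const { return T{}; }

// Explicit instantiation definitions for the supported types.
template class Matrix<float>;
template class Matrix<double>;

A user can then use Matrix<float> and Matrix<double>, but Matrix<int> would fail to link because its member functions are never instantiated anywhere.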
I'd like the user to be able to do that [instantiate the template] themselves,
Then it's impossible (or it makes your library unusable and pointless). Anyone who uses the template has to see its whole definition in order to instantiate it (or, in some cases, an explicit instantiation for that type can be provided in a separate translation unit, as mentioned above). If the user does not see the whole definition of every symbol they use, and no explicit instantiation is provided, they will simply end up with undefined references to symbols from the template.
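To make the failure mode concrete, a small sketch with hypothetical file names:

// bar.h - the only thing the user sees
template <typename T>
T twice(T value);

// bar.cpp - compiled into the library; no explicit instantiations provided
#include "bar.h"
template <typename T>
T twice(T value) { return value + value; }

// user.cpp - compiled by the user and linked against the library
#include "bar.h"
int main() { return twice(21); }  // linker error: undefined reference to twice<int>(int)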
For further research, look up the term "explicit instantiation" in the context of C++ templates - when it's used and how it differs from "implicit instantiation". Beyond that, review materials about how the compiler and linker work, what linkage is, and how C++ templates were designed.

enabling optimisations of header only library for debug build

I'm using a header only library for a project (glm) and am currently trying to debug some problems I'm having. I trust that glm is giving me the correct values, but it is dog slow when built without optimisations (I'm using Visual Studio 2012/2013/2010, whichever is easiest to do this in, as all 3 are installed).
Is there a way to enable optimisations (specifically /O2), and disable debug symbols for just the GLM header files, while retaining the debug information for the rest of the solution?
EDIT:
I'd like to throw in that I'd rather not change libraries at this point, as it's almost at the end of the project and I have other things to do as well, so rewriting to use Eigen/CML isn't really on the table.
You can try:
1) Create one code file and include all headers you need.
2) Explicitly instantiate all the template classes you want to use in this source file (e.g. template class ClassA<YourType>;).
3) Compile this source file with optimization and link against it later.
4) Create a header file and declare all these classes without the function definitions (simply copy the original header files and erase all function definitions).
5) Use this header file for your project.
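A minimal sketch of steps 1-5, with a hypothetical Vec3 class standing in for the actual glm types (in practice the stripped header has to be kept in sync with the original):

// fast_math.cpp - steps 1-3: the only file compiled with /O2, linked in afterwards
#include "vec3_full.h"              // hypothetical: the original header with full inline bodies
template class Vec3<float>;         // explicit instantiation definitions
template class Vec3<double>;

// fast_math.h - step 4: copy of the original header with the function bodies erased
template <typename T>
class Vec3 {
public:
    T x, y, z;
    T length() const;               // declaration only, no inline body
};
extern template class Vec3<float>;  // optional: also suppress implicit instantiation
extern template class Vec3<double>;

// Step 5: the rest of the (unoptimised) project includes fast_math.h and links
// against the object file built from fast_math.cpp.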

How to structure a "library" of C++ source?

I'm developing a collection of C++ classes and am struggling with how to share the code in a way that maintains organization without compromising ease of compilation for a user of the collection.
Options that I have seen include:
Distribute compiled library file
Put the source in the header file (with implicit inline as discussed in this answer)
Use symbolic links to allow the compiler to find the files.
I'm currently using the third option where, for each class that I want to include, I symbolically link each class's header and source files (e.g. ln -s <path_to_class folder>/myclass.cpp). This works well except that I can't move the project folder location (it breaks all the symlinks) and I have to have all those symlinked files hanging around.
I like the second option (it has the appearance of Java), but I'm worried about code size bloat if everything is declared inline.
A user of the collection will create a project folder somewhere, and somehow include the collection into their compilation process.
I'd like a few things to be possible:
Easy compilation (something like gcc *.cpp from the project folder)
Easy distribution of library in uncompiled form.
Library organization by module.
Compiled code size is not bloated.
I'm not worried about documentation (Doxygen takes care of that) or compile time: the overall modules are small and even the largest projects on the slowest machines won't take more than a few seconds to compile.
I'm using the GCC compiler, if it makes any difference.
A library is the best option (in my opinion) of the three you raised. Then provide the header file(s) in the include path and the library in the linker path.
Since you also want to distribute the library in source code form, I would be inclined to provide a compressed archive (gzip, 7-zip, tarball, or other preferred format) in a central repository.
If I understand correctly, you do not want users to have to include the .cpp files in their build, but instead just want them to either (i) use the headers directly, or (ii) use a compiled form of the lib.
Your requirements are a bit unusual, but they can be achieved. It seems to me like you could organize your code in the following manner. First, have a global define that dictates whether or not you are compiling the library:
// global.h
// ...
#define LIB_SOURCE
// ...
Then in every header file, you check whether that define is set: if the library is distributed as a static/shared lib, the definitions are not included, otherwise, the '.cpp' file is included from the header file.
// A.h
#ifndef _A_H
#define _A_H
#include "global.h"
#ifdef LIB_SOURCE
#include "A.cpp"
#endif
// ...
#endif
where 'A.cpp' would contain the actual implementation.
Again, this is a very strange way of doing things and I would actually advise against such practice. A better way (but one which requires more work) is to always distribute a shared library. But to keep things independent of the compiler, write a C layer around it. This way, you have a portable, maintainable library.
As for some of the other requirements:
Keep the build process simple by providing a Makefile
If you worry about the code size of the compiled library, look into gcc's optimization options (-Os). If you worry about the code size of the library when distributed in source-form in the headers, this is more tricky. Since the (inlined) code will actually be in the headers, the code will obviously grow with each inclusion in a .cpp file by the user.
I ended up using inline headers for all of the code. You can see the library here:
https://github.com/libpropeller/libpropeller/tree/master/libpropeller
The library is structured as:
library folder
class A
classA.h
classA.test.h
class B
classB.h
classB.test.h
class C
...
With this structure I can distribute the library as source, and all the user has to do is include -I/path/to/library in their makefile, and #include "library/classA/classA.h" in their source files.
And, as it turns out, having inline headers actually reduces the code size. I've done a full analysis of this, and it turns out that inline code in the headers allows the compiler to make the final binary roughly 5% smaller.

Proper structure for C++ project & libs

I'm starting to write a data processing library of mine and am quite confused about how to structure the project and its libraries.
Say, I'd like to have a set of functions stored in myfunclib library. My current set up (taken from multiple recommendations online) looks like this:
myproj/include/myfunclib.h - class declaration
myproj/include/myfunclib.cpp - class functionality
myproj/src/functest.cpp - test file to check functions
Firstly, it feels like this is a proper setup if I use myfunc only for the myproj project, but say I want to reuse it - then I'd need to specify its path in each of the cpp files using it, or store multiple copies of it.
Secondly, compilation is a bit bulky in such case:
g++ -I include include/myfunclib.cpp src/functest.cpp
Is it a normal practice to type all that stuff every time? What if I have many custom libraries I need? Is there a way to store them all separately, simply include them as 'myfunclib.h', and not worry about recompiling etc.?
Use a makefile to handle all of your dependencies and build your code. Google the syntax - it's pretty simple. Then you can just say "make" on the command line and it will build everything for you.
Here's a good tutorial:
http://mrbook.org/tutorials/make/
Some things that bit me originally:
Remember that templated classes should only be included; what would normally be the source implementation should not be compiled into object files like a normal class implementation, so I generally put my whole template implementation within the include directory.
I keep include and source files separate. By source files I mean code (definitions) that needs to be compiled into object files for linking; includes are all the declarations, inline functions, etc. It just seems to make more sense to me.
Sometimes I'll have a header file that includes all relevant headers for a specific module, and in turn perhaps a header file higher up that includes all the main headers for the modules I am using.
Also, as said in the comments, you need to introduce yourself to some build tools and get comfortable with them. These will help you track dependencies within your project and, in most cases, avoid rebuilding an entire project when only a subset of dependencies has changed (this can be a pain to get right in the beginning but is worth learning; if you use make and g++ there is a way to get this working with g++ -MM ... not sure how well it works for all cases). I know that the way I organized my projects changed drastically the more I learnt about the build process, and the more complex my projects became (and the more flaws I had to fix).
This is how I generally lay out a project directory structure when starting:
build - where all the built files will be stored
app - the main apps (can also be split into include/src)
include - include files
src - src files (compiled into objects and then linked with the main compiled app)
lib - any libs (usually 3rd-party libs; if my src is compiled into a library it usually ends up in build/lib/target/...)
Hope some of this helps.

Header files dependencies between C++ modules

Where I work, we have a big C++ code base and I think there's a problem with how header files are used.
There are many Visual Studio projects, but the problem is conceptual and not related to VS. Each project is a module performing particular functionality. Each project/module is compiled to a library or binary. Each project has a directory containing all source files - *.cpp and *.h. Some header files are the API of the module (I mean the subset of header files declaring the API of the created library), some are internal to it.
Now to the problem - when module A needs to work with module B, A adds B's source directory to its include search path. Therefore all of B's internal headers are seen by A at compilation time.
As a side effect, developers are not forced to think about what the exact API of each module is, which I consider a bad habit anyway.
I'm considering how it should have been done in the first place. I thought about creating in each project a dedicated directory containing interface header files only. A client module wishing to use the module would be permitted to include the interface directory only.
Is this approach OK? How is the problem solved where you work?
UPD: At my previous workplace, development was done on Linux with g++/gmake and we indeed used to install API header files to a common directory, as some of the answers propose. Now we have a Windows (Visual Studio)/Linux (g++) project using CMake to generate project files. How do I force the pre-build installation of API header files in Visual Studio?
Thanks
Dmitry
It sounds like you're on the right track. Many third party libraries do this same sort of thing. For example:
3rdParty/myLib/src/ - contains the headers and source files needed to compile the library
3rdParty/myLib/include/myLib/ - contains the headers needed for external applications to include
Some people/projects just put the headers to be included by external apps in /3rdParty/myLib/include, but adding the additional myLib directory can help to avoid name collisions.
Assuming you're using the structure: 3rdParty/myLib/include/myLib/
In Makefile of external app:
---------------
INCLUDE =-I$(3RD_PARTY_PATH)/myLib/include
INCLUDE+=-I$(3RD_PARTY_PATH)/myLib2/include
...
...
In Source/Headers of the external app
#include "myLib/base.h"
#include "myLib/object.h"
#include "myLib2/base.h"
Wouldn't it be more intuitive to put the interface headers in the root of the project, and make a subfolder (call it 'internal' or 'helper' or something like that) for the non-API headers?
Where I work we have a delivery folder structure created at build time. Header files that define libraries are copied out to an include folder. We use custom build scripts that let the developer denote which header files should be exported.
Our build is then rooted at a subst'ed drive; this allows us to use absolute paths for include directories.
We also have a network-based reference build that allows us to use a mapped drive for include and lib references.
UPDATE: Our reference build is a network share on our build server. We use a reference build script that sets up the build environment and maps (using net use) the named share on the build server (i.e. \\BLD_SRV\REFERENCE_BUILD_SHARE). Then, during a weekly build (or manually), we set the share (using net share) to point to the new build.
Our projects then use a list of absolute paths for include and lib references.
For example:
subst'ed local build drive j:\
mapped drive to reference build: p:\
path to headers: root:\build\headers
path to libs: root:\build\release\lib
include path in project settings j:\build\headers; p:\build\headers
lib path in project settings j:\build\release\lib;p:\build\release\lib
This will take your local changes first; then, if you have not made any local changes (or at least haven't built them), it will use the headers and libs from your last build on the build server.
I've seen problems like this addressed by having a set of headers in module B that get copied over to the release directory along with the lib as part of the build process. Module A then only sees those headers and never has access to the internals of B. Usually I've only seen this in a large project that was released publicly.
For internal projects it just doesn't happen. What usually happens is that when they are small it doesn't matter. And when they grow up it's so messy to separate it out no one wants to do it.
Typically I just see an include directory that all the interface headers get piled into. It certainly makes it easy to include headers. People still have to think about which modules they're taking dependencies on when they specify the modules for the linker.
That said, I kinda like your approach better. You could even avoid adding these directories to the include path, so that people can tell what modules a source file depends on just by the relative paths in the #includes at the top.
Depending on how your project is laid out, this can be problematic when including them from headers, though, since the relative path to a header is from the .cpp file, not from the .h file, so the .h file doesn't necessarily know where its .cpp files are.
If your projects have a flat hierarchy, however, this will work. Say you have
base\foo\foo.cpp
base\bar\bar.cpp
base\baz\baz.cpp
base\baz\inc\baz.h
Now any header file can include
#include "..\baz\inc\baz.h"
and it will work since all the cpp files are one level deeper than base.
In a group I had been working in, everything public was kept in a module-specific folder, while private stuff (private headers, cpp files, etc.) was kept in an _imp folder within it:
base\foo\foo.h
base\foo\_imp\foo_private.h
base\foo\_imp\foo.cpp
This way you could just poke around within your project's folder structure and get the header you want. You could grep for #include directives containing _imp and have a good look at them. You could also grab the whole folder, copy it somewhere, and delete all _imp subfolders, knowing you'd have everything ready to release an API.
Within projects, headers were usually included as
#include "foo/foo.h"
However, if the project had to use some API, then the API headers would be copied/installed by that API's build wherever they were supposed to go on that platform, and would then be included as system headers:
#include <foo/foo.h>