I have a C++ project that is built using Boost.Build. The project consists of 3 subprojects.
. [root]
|-- source
|   |-- common
|   |   \-- config
|   |       \-- config.cpp
|   |-- project_1
|   |   \-- Jamfile.jam
|   |-- project_2
|   |   \-- Jamfile.jam
|   \-- project_3
|       \-- Jamfile.jam
\-- Jamroot.jam
Jamroot.jam:
project my_project
    : requirements
      <threading>multi
      <variant>debug:<define>DEBUG
    : default-build
      <link>static
    : build-dir bin
    ;
alias project_1 : source/project_1 ;
alias project_2 : source/project_2 ;
alias project_3 : source/project_3 ;
install dist : project_1 project_2 project_3
    : <install-dependencies>on <install-type>EXE
    ;
Each project has Jamfile.jam according to this template:
project project_N
    : requirements
      <define>CONFIG_DEFINE_1=
      <define>CONFIG_DEFINE_2=
    ;
lib config : [ glob ../common/config/*.cpp ] ;
exe project_N
: [ glob *.cpp ] config
:
;
config.cpp uses the defines CONFIG_DEFINE_1 and CONFIG_DEFINE_2 for conditional compilation (in fact they are simply constants), so there is a separate version of the config library per project.
The problem is that this approach causes the config library to be rebuilt every time the whole project is built, regardless of whether any files changed. That is, the first build compiles and links everything; a second build without any modifications still rebuilds the config library for each project_N. How should I set up the build properly so that no redundant compilation occurs?
As I understand it, your config library is shared across different projects and is compiled with different defines for each project.
It's not possible to avoid that recompilation, irrespective of Boost.Build or any other build system: between one compile of the .cpp files and the next, their preprocessed source has changed.
If you want to avoid recompilation, one option is to split the config library into a separate library per project, but depending on what config looks like, that much code duplication is rarely desirable either...
The only other option I can think of is to reduce the amount of code that needs to be recompiled every time.
E.g. you have a source file LargeFunction.cpp with
#if CONFIG_DEFINE_1
void VeryLargeFunction() {
...
}
#elif CONFIG_DEFINE_2
void VeryLargeFunction() {
...
}
#endif
Split it into three files: one containing VeryLargeFunction as defined for CONFIG_DEFINE_1, one as defined for CONFIG_DEFINE_2, and one that simply includes one of the two based on the values of the defines.
#if CONFIG_DEFINE_1
#include "definitionFileFor1"
#elif CONFIG_DEFINE_2
#include "definitionFileFor2"
#endif
This file still needs to be recompiled every time, but the object files that contain the 'real' code will not.
Effectively you only relink existing object files on every build, instead of recompiling everything.
The disadvantage is more maintenance, though, and the different function definitions reside in different files, so the code becomes a bit harder to read.
Related
I have a large test suite that is compiled as an executable, that is roughly structured as follows:
ProjectRootDir
|
---A -> .cpp/h files with a fairly common set of expensive includes
|
---B -> .cpp/.h files with a fairly common but different set of expensive includes
|
(etc.)
Using precompiled headers greatly reduces the compile time of the entire project. But because this is a test project, the files included in the pch can change often, so ideally I'd have one pch for the 'A' source files, another pch for the 'B' source files, etc. That would prevent recompiling the entire project every time a subset of the precompiled headers changes, in cases where I truly do want to verify that the entire project still compiles.
What is the best way to do this in CMake?
Just make sure that the source files in A belong to a different target than those in B. You can do this with OBJECT libraries if it isn't the case already:
add_library(A OBJECT ...)
target_precompile_headers(A PRIVATE expensive_header_a.h)
add_library(B OBJECT ...)
target_precompile_headers(B PRIVATE expensive_header_b.h)
add_library(combined ...)
target_link_libraries(combined PRIVATE A B)
Now the sources in A will get a PCH with expensive_header_a.h in it and similarly for B. See the docs: https://cmake.org/cmake/help/latest/command/target_precompile_headers.html
In a typical C++ application (no Qt), I may have the following:
app/include/namespace1/Foo.h
app/src/namespace1/Foo.cpp
app/include/namespace2/Foo.h
app/src/namespace2/Foo.cpp
Where "app" is the root folder for the project. The classes in those files are:
//In app/include/namespace1/Foo.h
namespace namespace1 {
class Foo;
}
//In app/include/namespace2/Foo.h
namespace namespace2 {
class Foo;
}
In a build system like the one Eclipse has, the object files for the .cpp files are built into different subdirectories.
In Qt, I create a .pri file in each of my folders which contains include, SOURCES, and HEADERS statements. However, when Qt builds the program, it places all of the object files in the same directory. So [build output]/Foo.o gets generated twice and thus overwritten, causing the linker to fail.
I looked into making each nested folder into its own SUBDIRS project with its own .pro file, but this doesn't work correctly since each folder is not an independent project, just an independent namespace.
What is the correct way to setup a project like this?
Here is one answer: https://riptutorial.com/qt/example/15975/preserving-source-directory-structure-in-a-build--undocumented---object-parallel-to-source--option--
This causes Qt to generate a directory structure in the build directory that "mirrors" the source directories.
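For reference, enabling the option described at that link is a one-liner in the .pro file (it is undocumented, so it may change between Qt versions):

```qmake
# Mirror the source tree in the build directory so the two Foo.o
# object files land in different subdirectories instead of colliding.
CONFIG += object_parallel_to_source
```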
I tried to shorten my question (the old question can still be found below).
My current directory structure looks like this
C:\users\documents\projects
|
+----- utility
| |
| +----- include (files not shown)
| +----- src
| |
| +----file1.c (and other files not shown)
|
+----- proj1
|
+----- include (files not shown)
+----- src
|
+----- proj_file1.c (and other files not shown)
I can include the .h files from ..\utility\include in proj1 with #include <file.h> if I add that directory as an include path in my IDE. Is there an equivalent solution for the ..\utility\src files? I am using the LPCXpresso IDE on Windows 7. I suppose the same solution exists in any IDE, so I just want to know what this path (where .c files will be searched for if they are not found in the .\src directory) is generally called, so I can find it in my project settings.
I try to avoid using libraries (.lib, .dll)
I don't want to copy the .c files in each project (proj1, proj2, ..., projn)
I want to be able to simply edit the .c and .h files and if recompiling proj1 and so on the changes will be applied, as they will for all other projects
Writing my own makefile may be a solution (but shouldn't there be an option to add a source-file path in IDEs?)
#include <..\utility\src> is an undesirable solution, as changes to the directory would force me to edit this line in every single file, whereas changing a path in the options takes only a few clicks.
Thanks in advance and thanks for the answers up to now
Old Version of my question:
Motivation: imagine you write a program in C/C++ in some IDE and have .c and .h source files as usual. In addition you have a helper.c and helper.h file, where you defined some useful functions that are not project-related (and can be used in several projects). You want to include these files, but you don't want to store them with your project-related source code.
As far as I know .h files can be stored in a separate folder, which is pointed to by the includepath. This path can be set in every IDE. Further it changes the
#include "helper.h"
statement to
#include <helper.h>
If I put the .c files in the same folder and do not include them separately, the compiler will not find them. If I include them as well with
#include <helper.c>
multiple inclusion will lead to multiple function declarations and therefore to a compiler error. The only solution may be an
#ifndef helper_c_
//content of helper.c file
#endif
, which is kind of impractical and always requires including both the .h and the .c file. But this way I only need to store them once, with no copies, and if I need to change something, it changes in all projects, as they all point to that folder.
I also know about library files, where you have a .lib and a .dll file: the .lib file needs to be pointed at by the library path and the .dll file needs to sit in the same folder as the .exe afterwards. But that is not what I want.
My Question: Is there a possibility to store the .h and .c files (in my current case there are 10 file pairs) in a separate folder and point at them via an include path or something similar? I tried googling around, but I am not quite sure what to look for.
Thanks for help
EDIT: I forgot to mention: I use Windows 7, and my current IDE is the LPCXpresso-IDE
OK, suppose you have this directory structure:
C:\users\documents\projects
|
+----- utility
| |
| +----- include (files not shown)
| +----- src
| |
| +----file1.c (and other files not shown)
|
+----- proj1
|
+----- include (files not shown)
+----- src
|
+----- proj_file1.c (and other files not shown)
And also assume, that the current directory for compilation is in the proj1/src directory. I see at least three solutions to your question:
if you really want to #include the source files, which I do not recommend, just use a relative path to the files, i.e.
#include "..\..\utility\src\file1.c"
In addition to the general issues with including source files, this tends to be very fragile: if you change the directory structure (or rename a directory), everything breaks, and you would need to go back into your source and fix every such line of code.
As iharob suggested, use a makefile to handle this. In this case, you would have a compile line that looks like this (assuming you are using Microsoft's toolchain):
cl /c /I..\..\utility\include ..\..\utility\src\file1.c /Foutil_file1.obj
This drops the result of the compilation in the current working directory, where the linker will be able to find all the object files and combine them into one executable. We are still dealing with relative paths here, but they all live in a single file, and by using make variables the changes would be confined to a line or two.
If the functions in the utility directory are going to be used in multiple projects, my personal favorite solution is to make a utility project that produces a dynamic library (a DLL under Windows) and link subsequent projects against that library. You still have to decide where the include files will live (maybe a directory at the top level of all your project folders?), but to me a library seems cleaner. It also has the added advantage that if you modify the code in the utility project, you just recompile the library and the rest of your projects 'see' the modifications without being recompiled themselves (assuming you do not modify the library's interface).
Hope this helps,
T
Yes, of course there is. Depending on what compiler you are using, there will be a switch to tell the compiler where to search for headers; for gcc (and some others, AFAIK) it's -I. So, for example, supposing that your headers are in the myproject/headers directory, the compiler would be invoked like this:
gcc -I myproject/headers ${OTHER_OPTIONS} ${SOURCE_OR_OBJECT_FILES} -o ${OUTPUT_FILE}
The usual way to build a project with the .c files in different directories is to use a Makefile and a program that can parse the Makefile and invoke the necessary commands to build the project.
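As a sketch, such a Makefile might use VPATH so that make finds sources in both directories. The paths and file names here are assumed from the directory tree in the question:

```make
# GNU make sketch: compile proj1's sources plus the shared utility sources.
VPATH   = src ../utility/src            # where make searches for .c files
CFLAGS += -Iinclude -I../utility/include

OBJS = proj_file1.o file1.o             # one .o per source file, built locally

proj1: $(OBJS)
	$(CC) $(OBJS) -o $@

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
```

Because the object files are produced next to the project, nothing from utility needs to be copied, and editing ../utility/src/file1.c triggers a rebuild in every project whose Makefile lists it.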
A word of warning: it is generally not a good idea to #include source (.c) files. The source files are meant for compilation, not inclusion -- including them may result in strange errors (most prominently, apparent re-definitions of a function).
Here is what I would do (for each project that needs the helper code):
Add your utility .c files to the project. Look for Add existing file... or a similar IDE feature; this ensures that your utility source files, i.e. helper.c, get compiled along with your project.
As for the .h file, include it with #include <helper.h>, enabling you to use your utility declarations.
Finally, find a compiler option called Include paths, a.k.a. -I, and set it to the folder that contains helper.h. This is usually found in Project options/settings.
After looking through all the settings and options, I found the following satisfying solution: creating a link to a source-file folder.
This answer applies to the LPCXpresso IDE
imagine the folder structure shown in my question
inside the LPCXpresso IDE, right-click the project -> Properties
navigate to C/C++ General > Paths and Symbols > Source Location
click "Link Folder..."
in the dialog that opens, check the box Link to folder in the file system
click Browse... or enter C:\users\documents\projects\utility\src
click OK
click Apply
recompile and be happy :)
In my project, I have two versions of the same system library: SysLib1.0 and SysLib2.0. These two libraries are used by other components of the system heavily.
SysLib1.0 headers are located in some directory: /project/include. Here's an example of the contents of the project include directory:
/project/include/syslib/
/project/include/awesomelib/
/project/include/coollib/
So naturally, in CMake, other components use include_directories(/project/include) to gain access to system and other component headers. C++ source code could access the headers like so:
#include <syslib/importantheader.hpp>
SysLib2.0 is installed in a separate location in order to avoid linking issues. SysLib2.0's headers are stored here:
/opt/SysLib2.0/include
So naturally, in CMake, other components which require SysLib2.0 use include_directories(/opt/SysLib2.0/include). C++ source code could access the headers like so:
#include <syslib/importantheader.hpp>
Now we have run into a problem. A new component I'm writing needs access to /project/include in order to access awesomelib, BUT also needs SysLib2.0. This involves including /opt/SysLib2.0/include as well. Now when I say #include <syslib/importantheader.hpp>, that could refer to either version of the library. The compiler yells at me with some redefinition errors, as it should.
Even worse, SysLib1.0 and SysLib2.0 both refer to themselves as syslib/... when looking for headers within their own library, which is just as ambiguous.
Does anyone have an idea of how I could exclude a particular directory from an include path? Even if I am including a parent directory as shown in my example? Any solutions or suggestions are appreciated.
You can create a tree-like directory structure for your project with nested CMakeLists.txt files and include separate directories in different leaves:
Given a directory structure:
A
|-- main.cpp
|-- CMakeLists.txt
|-- B
|   |-- CMakeLists.txt
|   \-- b.cpp
\-- C
    |-- CMakeLists.txt
    \-- c.cpp
A/CMakeLists.txt:
add_subdirectory(B)
add_subdirectory(C)
add_executable(exe main.cpp)
target_link_libraries(exe b c)
B/CMakeLists.txt:
include_directories(/project/include)
add_library(b b.cpp)
C/CMakeLists.txt:
include_directories(/opt/SysLib2.0/include)
add_library(c c.cpp)
This way you can include different directories for different source files, pack them into libraries, and link the final target with those libraries.
I don't like using one include path for all includes. I would rather use the following structure.
include - your own headers
include/awesomelib
include/coollib
3rd - third party libs
3rd/syslib-1.0/include
3rd/syslib-1.0/src
3rd/syslib-2.0/include
3rd/syslib-2.0/src
src - your source
src/awesomelib (depends on syslib-1.0, includes 3rd/syslib-1.0/include)
src/coollib (depends on syslib-2.0, includes 3rd/syslib-2.0/include)
Then you can specify which syslib to use when building a library.
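A hedged sketch of what that per-library selection could look like in CMake (target and file names are assumed, not taken from the question):

```cmake
# Each library sees only the syslib version it actually depends on.
add_library(awesomelib src/awesomelib/awesome.cpp)
target_include_directories(awesomelib
    PUBLIC  include
    PRIVATE 3rd/syslib-1.0/include)

add_library(coollib src/coollib/cool.cpp)
target_include_directories(coollib
    PUBLIC  include
    PRIVATE 3rd/syslib-2.0/include)
```

Keeping the syslib include paths PRIVATE prevents them from propagating to targets that link these libraries, so the two versions never end up on the same compile line.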
I have the following directory structure:
src
+-- lib1
|   +-- lib1.h
\-- lib2
    +-- lib2.h
Both lib1 and lib2 are going to be distributed (installed). lib2 makes use of lib1, so it needs some includes:
#include "../lib1/lib1.h" // 1
#include "lib1/lib1.h" // 2
#include <lib1/lib1.h> // 3
(1) is the straightforward way, but it is very inflexible. (2) is the way I use at the moment, but the build system needs to know that src must be added to the include path. (3) seems best to me from the distribution aspect, because then the headers can be assumed to reside in a standard location, but it's not obvious to me how a build system handles that (in this case, lib1 needs to be installed before lib2 can be compiled).
What's the recommended way?
The only difference between the "" and <> forms of #include is that the "" form first searches some additional places and then falls back to the same places as <>. The set of additional places is implementation-defined; the only common one is the directory of the file containing the include directive. The compiler options that add to the include path usually add to the <> search path, so those directories get searched for both forms.
So the choice between the two forms is mostly a style one. Using the "" form for the current project and the <> form for system libraries is common. For things in between, make a choice and stick to it in your project.
I vote for version 2.
#include "../lib1/lib1.h" // 1
This assumes the tree will always stay the same. So when you change your structure, you'll need to modify this everywhere.
#include "lib1/lib1.h" // 2
I don't see what the problem with adding src to the include path is. Actually, you don't even need to add src to the include path; you can directly add src/lib1 and just write #include "lib1.h"
#include <lib1/lib1.h> // 3
This style of include is used for system headers. You should avoid it, as most programmers are used to seeing windows.h or string or vector inside <>. You're also telling the compiler to look for those headers in the default directories before your own. I'd avoid this one.
Side note:
You should think about a structure like this:
src
+-- lib1
|   +-- lib1.h
\-- lib2
    +-- lib2.h
include
where the include directory contains all publicly visible headers. If lib1.h is public, move it there. If not, the structure you currently have should be ok.