Adding headers the Modern CMake way in a large project - c++

I'm working on a project that encompasses an SDL-based game framework, an editor, and a game. The directory structure is shaping up to be something like this (edited for brevity):
├── CMake
├── Libraries
├── Resources
└── Source
    ├── Editor
    ├── Framework
    │   └── Source
    │       ├── AI
    │       │   ├── Private
    │       │   └── Public
    │       ├── Game
    │       │   ├── Private
    │       │   └── Public
    │       ├── Graphics
    │       │   ├── Private
    │       │   └── Public
    │       └── Networking
    ├── Game
    │   └── Source
    │       └── Runtime
    │           └── Launch
    │               ├── Private
    │               └── Public
    └── Server
My add_executable command is run in Game/Source/Runtime/Launch/Private, and the executable depends on files from the other modules.
According to some CMake how-tos and the accepted answer to the following question, the headers for a given target should be included in the add_executable call so that they are listed as dependencies in the generated makefile.
How to properly add include directories with CMake
My question is, what is the cleanest way to accomplish this, given the abundance of header files and directories in my project? I can't imagine that the best practice would be to maintain a huge list of files directly in the add_executable call, but I could be wrong.
I was thinking that each directory's CMakeLists.txt could be responsible for adding to a variable that would eventually be used in add_executable, but distributing that intent through so many files seems like poor practice. Is there a better way?

You can follow exactly the pattern in the link you posted. For each library (I assume each module is a library/target in CMake), use:
target_include_directories(${target}
    PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/Private
    PUBLIC  ${CMAKE_CURRENT_SOURCE_DIR}/Public
)
This tells CMake that the Private folder is used only when building the current target, while the Public include directories are also propagated to any other target that links against it.
Now, you just need to do:
target_link_libraries(${target} PRIVATE Framework_Source)
where Framework_Source is the name of a library target and ${target} is the name of the target you are currently building.
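As a concrete sketch (the target names FrameworkGraphics and GameLaunch, and the source file names, are hypothetical and chosen only for illustration), one Framework module and the game executable could be wired together like this:

# Source/Framework/Source/Graphics/CMakeLists.txt
add_library(FrameworkGraphics
    Private/Renderer.cpp    # hypothetical source file
)
target_include_directories(FrameworkGraphics
    PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/Private
    PUBLIC  ${CMAKE_CURRENT_SOURCE_DIR}/Public
)

# Game/Source/Runtime/Launch/CMakeLists.txt
add_executable(GameLaunch Private/Main.cpp)
# Linking propagates FrameworkGraphics's PUBLIC include directories to GameLaunch,
# so the executable does not have to list the Framework headers itself.
target_link_libraries(GameLaunch PRIVATE FrameworkGraphics)

With this pattern there is no need to maintain a huge list of header files in add_executable; each module owns its include directories, and they follow the target wherever it is linked.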

Related

Including a header file from a parent folder

I'm doing some experiments to learn CMake so that the commands stick in my mind. I created a project to test what I have just learned, but I have a problem.
The structure of my project is as follows:
├── bin
├── CMakeLists.txt
└── src
    ├── Configuration
    │   ├── CMakeLists.txt
    │   └── Test
    │       └── TestConfiguration.h
    ├── Array
    │   └── Array.h
    ├── CMakeLists.txt
    ├── Test2
    │   ├── CMakeLists.txt
    │   ├── Test2.cpp
    │   ├── Test2.h
    │   └── Test2-1.h
    ├── Main
    │   ├── CMakeLists.txt
    │   ├── Config.h
    │   └── Main.h
    ├── Test3
    │   ├── CMakeLists.txt
    │   ├── Time.h
    │   ├── Timer
    │   │   ├── CMakeLists.txt
    │   │   ├── Iterate.h
    │   │   ├── Run.h
    │   │   ├── Serial.cmpl.cpp
    │   │   └── Serial.h
    │   ├── Smart.h
    │   ├── Counting.h
    │   ├── Mute.h
    │   └── MainTest.h
    └── Utilities
        ├── CMakeLists.txt
        ├── Inform.h
        ├── Settings.h
        ├── Print.h
        └── Const.h
But I don't understand how I should write these CMakeLists.txt files. For example, the file src/Utilities/Inform.h includes the following header:
// src/Utilities/Inform.h
#include "Main/Config.h"
I've tried everything I've seen on the internet and Stack Overflow to edit the src/Utilities/CMakeLists.txt file, but no matter what I do, it never sees Main/Config.h; it seems I would have to write something like ../../Main/Config.h instead.
The same problem applies to other folders. What I want to learn here is how to reference any file in the project from the CMakeLists.txt files. While trying, I have used many of the following commands:
add_library
target_include_directories
target_link_libraries
link_directories
link_libraries
I think there's something I'm missing or have misunderstood. I would be glad if you could help me in this regard. If you tell me how to edit the src/Utilities/CMakeLists.txt file, I will try to fill in the others accordingly.
Additionally, there is something I'm curious about. Do I also need to edit the src/CMakeLists.txt file? Or is it enough if I just edit for example src/Utilities/CMakeLists.txt?
Also, in case it's needed: I'm using CMake version 3.16.3, and my development environment is x86_64 Elementary OS (based on Ubuntu 20.04.1).
I've read the official documentation for CMake 3.16 and answers from fellow developers on Stack Overflow. I want to include a header from a parent folder in a header that lives in a subdirectory, but every approach I've tried fails with an error about the include path. I'd like to learn from experienced developers what I'm doing wrong.
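A minimal sketch of one common approach (assuming header-only code and hypothetical target names, since the question doesn't show the existing CMakeLists.txt contents) is to give Utilities an INTERFACE library whose include directory is src itself, so that "Main/Config.h" resolves for every target that links against it:

# src/Utilities/CMakeLists.txt -- hypothetical sketch
add_library(Utilities INTERFACE)
# Expose the src/ directory, so headers can be included as "Main/Config.h",
# "Utilities/Inform.h", etc., relative to src/.
target_include_directories(Utilities INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}/..)

# A consumer target (for example one defined in src/Test2/CMakeLists.txt) would then use:
# target_link_libraries(Test2 PRIVATE Utilities)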

CMake package configuration when packages look for each other

When I run catkin_make and module_one does find_package for module_two while module_two does find_package for module_one, I get an error like the one below:
-- +++ processing catkin package: 'module_one'
-- ==> add_subdirectory(module_one)
-- Could NOT find module_two (missing: module_two_DIR)
-- Could not find the required component 'module_two'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
CMake Error at /opt/ros/melodic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "module_two" with
any of the following names:
module_twoConfig.cmake
module_two-config.cmake
Add the installation prefix of "module_two" to CMAKE_PREFIX_PATH or set
"module_two_DIR" to a directory containing one of the above files. If
"module_two" provides a separate development package or SDK, be sure it has
been installed.
Call Stack (most recent call first):
module_one/CMakeLists.txt:10 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/ri/workspace/catkin_playground/build/CMakeFiles/CMakeOutput.log".
See also "/home/ri/workspace/catkin_playground/build/CMakeFiles/CMakeError.log".
The workspace folder tree is below:
├── build
│   ├── atomic_configure
│   ├── catkin
│   │   └── catkin_generated
│   │       └── version
│   ├── catkin_generated
│   │   ├── installspace
│   │   └── stamps
│   │       └── Project
│   ├── CMakeFiles
│   │   ├── 3.11.0
│   │   │   ├── CompilerIdC
│   │   │   │   └── tmp
│   │   │   └── CompilerIdCXX
│   │   │       └── tmp
│   │   └── CMakeTmp
│   ├── gtest
│   │   ├── CMakeFiles
│   │   └── googlemock
│   │       ├── CMakeFiles
│   │       └── gtest
│   │           └── CMakeFiles
│   ├── module_one
│   │   └── CMakeFiles
│   └── test_results
├── devel
│   └── lib
└── src
    ├── module_one
    └── module_two
module_one's CMakeLists.txt has
find_package(catkin REQUIRED module_two)
module_two's CMakeLists.txt has
find_package(catkin REQUIRED module_one)
Given a project like the one above, is there a CMakeLists.txt configuration that lets packages reference each other?
I tried to imitate your setup: I made a new workspace, created two new packages using catkin_create_pkg, and got your error. This happens when some of the following setup issues aren't addressed:
In CMakeLists.txt, you must find_package(catkin REQUIRED COMPONENTS ...) the necessary packages (don't forget roscpp if you use C++!):
# In module_1
find_package(catkin REQUIRED COMPONENTS
  roscpp
  module_2
)
In CMakeLists.txt, you must declare to catkin your dependencies as well (this CATKIN_DEPENDS list is a mirror of the find_package list above):
# In module_1
catkin_package(
  # INCLUDE_DIRS include
  # LIBRARIES mod2
  CATKIN_DEPENDS roscpp module_2
  # DEPENDS system_lib
)
In package.xml you need a <depend> for that module as well.
<!-- In module_1 -->
<depend>module_2</depend>
If you do all that, then that error goes away. But then you have a new error:
Circular dependency in subset of packages: module_1, module_2
I would recommend restructuring your code to avoid circular dependencies, either by combining the packages or, if you prefer small packages, by pulling the shared code out into a third package, as sketched below.
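As a rough sketch, with a hypothetical shared package named module_common (the name is illustrative, not from the question), the dependency direction becomes one-way:

# In module_common/CMakeLists.txt
find_package(catkin REQUIRED COMPONENTS roscpp)
catkin_package(CATKIN_DEPENDS roscpp)

# In module_1/CMakeLists.txt (and likewise in module_2/CMakeLists.txt)
find_package(catkin REQUIRED COMPONENTS roscpp module_common)
catkin_package(CATKIN_DEPENDS roscpp module_common)
# module_1 and module_2 no longer find_package each other, only module_common,
# so there is no cycle; the matching <depend>module_common</depend> entries go in package.xml.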

Is this a good directory structure to maintain a large C++ project?

I have been trying to figure out a good, maintainable directory structure for large C++ projects. During my search I came across this resource: link. If I loosely follow the structure stated in that document, I get something similar to the following:
.
├── CMakeLists.txt
├── build
│   ├── Executable-OutputA
│   └── Library-OutputA
├── cmake
│   └── *.cmake
├── docs
├── include
│   └── *.h
├── lib
│   ├── LibraryA
│   │   ├── *.cpp
│   │   └── *.h
│   └── LibraryB
│       ├── *.cpp
│       └── *.h
├── src
│   ├── ExecutableA
│   │   ├── *.cpp
│   │   └── *.h
│   └── ExecutableB
│       ├── *.cpp
│       └── *.h
├── tests
└── third_party
    ├── External-ProjectA
    └── External-ProjectB
build: holds the output executables and libraries generated by the project
cmake: holds all the CMake packages/modules that the project may require
docs: holds documentation files, typically Doxygen
include: holds public header files (might not be needed, not sure)
lib: holds all the libraries the user creates, with their respective source and header files
src: holds all the executable projects the user makes, with their respective headers and source files
tests: files to test both the executables and libraries
third_party: any third-party projects, libraries, etc., usually downloaded from online or cloned
I believe this is an appropriate structure for large projects, but I do not have much experience with projects that produce more than 3 or 4 targets. I want to ask the community for feedback: do you agree with the structure laid out above, or do you have better suggestions?
Edit: I have not been able to find many posts detailing multiple target outputs together with third-party dependencies for large projects.
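For a sense of how such a layout is typically wired together, here is a minimal sketch of a top-level CMakeLists.txt (project, directory, and target names are only illustrative, not a definitive recommendation):

# Top-level CMakeLists.txt -- illustrative sketch for the layout above
cmake_minimum_required(VERSION 3.16)
project(LargeProject CXX)

# Make project-specific CMake modules in cmake/ discoverable
list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake)

add_subdirectory(third_party)     # external dependencies
add_subdirectory(lib/LibraryA)    # each library defines its own target and include dirs
add_subdirectory(lib/LibraryB)
add_subdirectory(src/ExecutableA) # executables link against the library targets
add_subdirectory(src/ExecutableB)

enable_testing()
add_subdirectory(tests)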

Terraform: state management for multi-tenancy

As we're in the process of evaluating Terraform to (partially) replace our Ansible provisioning process for a multi-tenant SaaS, we've come to appreciate Terraform's convenience, performance, and reliability: we can handle infrastructure changes (adding/removing) smoothly and keep track of infrastructure state (that's very cool).
Our application is a multi-tenant SaaS for which we provision separate instances for our customers. In Ansible we have our own dynamic inventory (much like the EC2 dynamic inventory). We've gone through lots of Terraform books/tutorials and best-practice guides, and many suggest that state for multiple environments should be managed separately and remotely in Terraform, but all of them assume static environments (like Dev/Staging/Prod).
Is there any best practice or real example of managing a dynamic inventory of state for multi-tenant apps? We would like to track the state of each customer's set of instances and propagate changes to them easily.
One approach might be to create a directory for each customer and place *.tf scripts inside, which would call our module hosted somewhere global. State files could be put in S3; this way we could propagate changes to each individual customer as needed.
Terraform works at the folder level, pulling in all .tf files (and by default a terraform.tfvars file).
We do something similar to Anton's answer but do away with some of the complexity around templating things with sed. As a basic example, your structure might look like this:
$ tree -a --dirsfirst
.
├── components
│   ├── application.tf
│   ├── common.tf
│   ├── global_component1.tf
│   └── global_component2.tf
├── modules
│   ├── module1
│   ├── module2
│   └── module3
├── production
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       └── terraform.tfvars
├── staging
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       ├── global_component2.tf -> ../../components/global_component2.tf
│       └── terraform.tfvars
├── apply.sh
├── destroy.sh
├── plan.sh
└── remote.sh
Here you run your plan/apply/destroy from the root level, where the wrapper shell scripts handle things like cd'ing into the directory and running terraform get -update=true, and also run terraform init for the folder so you get a unique state-file key in S3, allowing you to track state for each folder independently.
The above solution has generic modules that wrap resources to provide a common interface to things (for example, our EC2 instances are tagged in a specific way depending on some input variables and are also given a private Route53 record), and then "implemented components" on top of them.
These components contain a bunch of modules/resources that are applied by Terraform in the same folder. So we might put an ELB, some application servers, and a database under application.tf, and then symlinking that into a location gives us a single place to control with Terraform. Where a location needs different resources, those are separated off. In the above example you can see that staging/global has a global_component2.tf that isn't present in production. This might be something that is only applied in the non-production environments, such as some network control to prevent internet access to the environment.
The real benefit here is that everything is easily viewable in source control for developers directly, rather than having a templating step that produces the Terraform code you want.
It also helps keep things DRY: the only real differences between the environments are in the terraform.tfvars files in each location, and it's easier to test changes before putting them live because each folder is pretty much the same as the others.
Your suggested approach sounds right to me, but there are a few more things you may consider doing.
Keep the original Terraform templates (_template in the tree below) as a versioned artifact (a git repo, for example) and just pass key-value properties to be able to recreate your infrastructure. This way you will have a very small amount of copy-pasted Terraform configuration code lying around in the directories.
This is how it looks:
/tf-infra
├── _global
│   └── global
│       ├── README.md
│       ├── main.tf
│       ├── outputs.tf
│       ├── terraform.tfvars
│       └── variables.tf
└── staging
    └── eu-west-1
        ├── saas
        │   ├── _template
        │   │   └── dynamic.tf.tpl
        │   ├── customer1
        │   │   ├── auto-generated.tf
        │   │   └── terraform.tfvars
        │   ├── customer2
        │   │   ├── auto-generated.tf
        │   │   └── terraform.tfvars
...
Two helper scripts are needed:
Template rendering: use either sed to generate the module's source attribute, or use a more powerful tool (as is done, for example, in airbnb/streamalert).
Wrapper script: running terraform -var-file=... is usually enough.
Shared Terraform state files, as well as resources which should be global (directory _global above), can be stored in S3 so that other layers can access them.
PS: I am very much open to comments on the proposed solution, because this is an interesting task to work on :)

HTMLBars: how to get started?

Is there any guide on how to get started with HTMLBars? I am following the "Building HTMLBars" section, but now I am stuck. I have run the build tool and I now have files in my dist directory like this:
.
├── htmlbars-compiler.amd.js
├── htmlbars-runtime.amd.js
├── morph.amd.js
├── test
│   ├── htmlbars-compiler-tests.amd.js
│   ├── htmlbars-runtime-tests.amd.js
│   ├── index.html
│   ├── loader.js
│   ├── morph-tests.amd.js
│   ├── packages-config.js
│   ├── qunit.css
│   └── qunit.js
└── vendor
    ├── handlebars.amd.js
    └── simple-html-tokenizer.amd.js
Which of these should I add to my Ember project, and is that all or do I have to do something more? Is this library ready, or is it still unusable with Ember?
It's not even close to ready yet. I'd love to give more info, but there really isn't any. Last I heard they wanted it as a beta in 1.9, but we'll see.