I've been trying to figure out and adopt the latest cmake best practices, as I'm setting up a large project. Several people have advocated what NOT to put in your cmake files. In general, it seems to boil down to (paraphrasing):
CMake files should just describe what's required for a target to build, but not make assumptions about anything else.
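For illustration, the style I understand this advice to point at looks roughly like this (names are made up):

add_library(mylib STATIC src/mylib.cpp)
target_include_directories(mylib PUBLIC include)   # what consumers need
target_compile_features(mylib PUBLIC cxx_std_17)   # a hard requirement, not a preference
# Notably absent: global flags, a hard-coded build type, -fPIC for everyone, etc.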
This all makes sense; however, it leads to the question:
Where should you define all of the "optional" settings, if not in the cmake scripts?
Also, the situation seems different for a library vs. an application that depends on a lot of libraries.
If you're just building a small library, this all seems fine: whoever uses the library is responsible for deciding all of these extra details.
But when building a larger application with many dependencies, it'd be nice to define all of these settings somewhere. In most build systems you get common configurations like debug and release builds. CMake seems to have standard support for "Debug", "Release", "MinSizeRel", "RelWithDebInfo" (I can never remember the abbreviations by the way), but none of these are enforced, so you might just get an empty string.
And even if you intend to respect these, do you just check the build config and set everything in the root cmake script or what?
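The closest thing I've found is the common (but unenforced) pattern of defaulting the build type in the root script when none was given:

if(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
  set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Build type" FORCE)  # pick your preferred default
  set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS
               "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
endif()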
As a specific example: My project depends on a bunch of 3rd party libraries, and instead of building them all as part of the project, I am trying to pre-build them.
So to make it simple, I currently have a build script which standardizes how I build all of the 3rd party libraries:
cmake -DBUILD_SHARED_LIBS:BOOL=OFF \
-DBUILD_STATIC_LIBS:BOOL=ON \
-DCMAKE_POSITION_INDEPENDENT_CODE=ON \
-DCMAKE_INSTALL_PREFIX=${install_prefix} \
-DCMAKE_PREFIX_PATH="${DIST}" "$@" ${CMAKE_SRC_DIR}
However, this build script is bash, and it's not exactly portable. There are better options, right?
I don't expect the other developers of the project to have to memorize a lot of arguments to pass to cmake to build the project the way it was intended.
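One option I'm considering is replacing the bash script with a CMake initial-cache file loaded via cmake -C, which at least keeps the settings in one portable place (the file name and values below are just my current ones); newer CMake versions also support CMakePresets.json for exactly this:

# deps-defaults.cmake -- an initial-cache script, loaded with: cmake -C deps-defaults.cmake <src>
set(BUILD_SHARED_LIBS OFF CACHE BOOL "Prefer static archives")
set(BUILD_STATIC_LIBS ON CACHE BOOL "For projects that honor this non-standard flag")
set(CMAKE_POSITION_INDEPENDENT_CODE ON CACHE BOOL "Build with -fPIC")
set(CMAKE_PREFIX_PATH "$ENV{DIST}" CACHE PATH "Where prebuilt dependencies live")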
Related
prefab runs successfully, but my resulting prefab directory only has CMake files in it, no other assets. My goal is to easily share binary assets with colleagues and include them in my Android builds. What am I doing wrong?
I'm attempting to create a prefab of boost (though Google might make one if it gets enough votes). I used Boost-for-Android to build boost 1.73.0, copied the relevant includes and libraries out into an assembly directory, populated it with prefab.json, module.json, and abi.json files, and ran
prefab --output "/home/developer/workspace/boost-prefab/out/boost/x86_64" \
--build-system cmake \
--platform android \
--abi "x86_64" \
--os-version "21" \
--ndk-version "21" \
--stl "c++_static" \
"/home/developer/workspace/boost-prefab/assembly/boost"
But the resulting directory only contains two CMake files per abi.
$ find x86_64 -type f
x86_64/lib/x86_64-linux-android/cmake/boost/boostConfig.cmake
x86_64/lib/x86_64-linux-android/cmake/boost/boostConfigVersion.cmake
The assembly directory I created looks like this (I followed the prefab structure prescribed in the docs):
boost/
  prefab.json
  modules/
    filesystem/
      module.json
      include/boost/...
      libs/android.x86_64/
        abi.json
        libboost_filesystem.a
    system/
      module.json
      include/boost/...
      libs/android.x86_64/
        abi.json
        libboost_system.a
prefab.json:
{"schema_version": 1, "name": "boost", "version": "1.73.0", "dependencies": []}
modules/filesystem/module.json:
{"library_name": "libboost_filesystem"}
modules/filesystem/libs/android.x86_64/abi.json:
{"abi": "x86_64", "api": 21, "ndk": "21", "stl": "c++_static"}
I'll admit I'm confused about prefab's os-version; from what I can tell it's simply the NDK version (it's not the API level or the Android major version number), but I don't think that's the problem.
I also tried to generate a boost prefab via vcpkg, but boost won't build in vcpkg right now (there are some PRs to address this, but I couldn't get it building.) In any case, vcpkg's example shows a slightly different layout as well where an AAR was inserted into the prefab.
My goal is to have a prebuilt archive of boost that's easy for my colleagues to import into their Android projects.
Am I simply misunderstanding what a prefab file is for?
The CMake files that do get generated seem primitive even compared to the default boost ones, i.e. I'd prefer the boost CMake files to the prefab ones being created here
Should I instead be trying to create an AAR for boost first?
At the end of the day, the only way I was able to use Boost in my Android project was to include its build in my CMAKE_FIND_ROOT_PATH, but if it comes down to this, what's the advantage of prefab? I feel I might as well just integrate Conan into my Gradle build.
Am I simply misunderstanding what a prefab file is for?
Based on your next question, only slightly. It is a system for distributing prebuilt libraries in a build system agnostic manner. Aside from "build system agnostic", that lines up with what you're trying to do.
The CMake files that do get generated seem primitive even compared to the default boost ones, i.e. I'd prefer the boost CMake files to the prefab ones being created here
Prefab was designed to support arbitrary build systems without the need for that package author to care about supporting each individually. This is important for Android because while the plurality of people use CMake, ndk-build is also an officially supported option, and dozens of other build systems are also regularly used.
What features are you missing that would be provided by boost's own CMake files? You can file a feature request at https://github.com/google/prefab/issues if it's something that we can do in a build-system agnostic manner.
If it can't be described in a build system agnostic manner, Prefab isn't a good fit. vcpkg might be a better choice for that.
Should I instead be trying to create an AAR for boost first?
That's the easiest way to use prefab packages from AGP, yeah. vcpkg is generally the easiest path to that given the existing corpus, but as you noted you'll need to send them a patch (or a bug report) to get their build fixed first.
Alternatively, https://android.googlesource.com/platform/tools/ndkports/ is how we build the handful that we currently publish. Should be easy to check out, add your own port file, and build your AAR. If you send us the patch I'll likely merge it when I've gotten the test infrastructure up and running (currently we can't contend with supporting many packages because testing is manual).
At the end of the day, the only way I was able to use Boost in my Android project was to include its build in my CMAKE_FIND_ROOT_PATH, but if it comes down to this, what's the advantage of prefab?
You don't need to do that if you use an AAR; AGP handles the details for you. The intended use case is for build systems to integrate prefab, not for users to have to do the work. AGP already does that, but only when consuming from an AAR.
I'd like to do some property-based testing in a C++ library I'm working on, and was thinking of going with RapidCheck unless somebody has a better idea. (I will need, for example, to generate arbitrary std::set<int>, and if I can place bounds on the range of int in the sets and the size of the sets, all the better.)
All this being said, I'm still a bit of a cmake newb. There appear to be no instructions in RapidCheck except to include it as part of the source code (although downloading it would be better). I have gotten to the point where I can include the headers for RapidCheck in my code, but when I try to build any app using RapidCheck, I'm told that there are symbols from RapidCheck missing or that the rapidcheck library is missing.
I'm assuming that I have to build RapidCheck itself as part of the project to generate the library, but I'm not entirely sure how to do this and it seems difficult to find any examples where this is done.
Does anyone have any suggestions of examples where such things are done so that I can see the string of commands necessary to build a 3rd party API and include the library when building the executables, or - even better - an example of a project using RapidCheck that does exactly this? The lack of documentation on how to set this up is discouraging.
I hope this is not overly vague. To summarize, what I'd like to do from cmake:
Preferably download RapidCheck (although including the files directly from the RapidCheck project would be fine as well).
Run the required commands and set up the necessary variables to have my test code (in ${PROJECT_SOURCE_DIR}/test) be able to access RapidCheck headers.
Generate (if necessary) the RapidCheck library and make it so that I can link it to the tests I'm running.
Thanks in advance for any help you might be able to offer!
This is probably not the right way to do this, but maybe it will help:
I was able to get this working by doing the following:
# from within the root of the rapidcheck repo:
$ cmake -DBUILD_SHARED_LIBS=true -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug .
# Leave off the BUILD_SHARED_LIBS flag if you don't need an SO.
$ make
That built librapidcheck.so and librapidcheck.a, which you can then copy/install as needed.
You'll also need the include directory with the headers for rapidcheck, but that's just in the source tree.
Add the include path to your compile commands using whatever build tool you want, and link with the compiled libraries (the .so and .a)
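If you'd rather have CMake drive the whole thing instead of building by hand, a minimal sketch would be something like the following. It assumes the rapidcheck sources are vendored under ext/rapidcheck; I believe its CMakeLists defines a rapidcheck target, but treat the names and paths as assumptions:

# Build rapidcheck as part of your own tree and link the tests against it.
add_subdirectory(ext/rapidcheck)
add_executable(my_tests ${PROJECT_SOURCE_DIR}/test/my_tests.cpp)
target_link_libraries(my_tests rapidcheck)   # should also propagate the include dirs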
Imagine an overall project with several components:
basic
io
web
app-a
app-b
app-c
Now, let's say web depends on io which depends on basic, and all those things are in one repo and have a CMakeLists.txt to build them as shared libraries.
How should I set things up so that I can build the three apps, if each of them is optional and may not be present at build time?
One idea is to have an empty "apps" directory in the main repo and we can clone whichever app repos we want into that. Our main CMakeLists.txt file can use GLOB to find all the app directories and build them, not knowing in advance how many there will be (a sketch of this appears after the list). Issues with this approach include:
Apparently CMake doesn't re-glob when you just say make, so if you add a new app you must run cmake again.
It imposes a specific structure on the person doing the build.
It's not obvious how one could make two clones of a single app and build them both separately against the same library build.
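For completeness, the glob-and-add idea I mean is roughly this (with all the caveats above):

# Root CMakeLists.txt sketch: build whatever app directories have been cloned in.
file(GLOB children RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}/apps" "${CMAKE_CURRENT_SOURCE_DIR}/apps/*")
foreach(child ${children})
  if(IS_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/apps/${child}")
    add_subdirectory("apps/${child}")
  endif()
endforeach()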
The general concept is like a traditional recursive CMake project, but where the lower-level modules don't necessarily know in advance which higher-level ones will be using them. Yet, I don't want to require the user to install the lower-level libraries in a fixed location (e.g. /usr/local/lib). I do however want a single invocation of make to notice changed dependencies across the entire project, so that if I'm building an app but have changed one of the low-level libraries, everything will recompile appropriately.
My first thought was to use the CMake import/export target feature.
Have a CMakeLists.txt for basic, io and web and one CMakeLists.txt that references those. You could then use the CMake export feature to export those targets and the application projects could then import the CMake targets.
When you build the library project first the application projects should be able to find the compiled libraries automatically (without the libraries having to be installed to /usr/local/lib) otherwise one can always set up the proper CMake variable to indicate the correct directory.
When doing it this way, a make in the application project won't do a make in the library project; you would have to take care of this yourself.
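A rough sketch of the export/import side (the target and file names are made up, and the include path is whatever the developer's library build tree happens to be):

# In the library project: write an import file into its build tree.
export(TARGETS web io basic FILE "${CMAKE_BINARY_DIR}/CoreLibsTargets.cmake")

# In an application project: import the targets and link against them.
include("/path/to/libs-build/CoreLibsTargets.cmake")
add_executable(app-a main.cpp)
target_link_libraries(app-a io)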
Have multiple CMakeLists.txt.
Many open-source projects take this approach (LibOpenJPEG, LibPNG, poppler, etc.). Take a look at their CMakeLists.txt to find out how they've done this.
Basically allowing you to just toggle features as required.
I see two additional approaches. One is to simply have basic, io, and web be submodules of each app. Yes, there is duplication of code and wasted disk space, but it is very simple to implement and guarantees that different compiler settings for each app will not interfere with each other across the shared libraries. I suppose this makes the libraries not be shared anymore, but maybe that doesn't need to be a big deal in 2011. RAM and disk have gotten cheaper, but engineering time has not, and sharing of source is arguably more portable than sharing of binaries.
Another approach is to have the layout specified in the question, and have CMakeLists.txt files in each subdirectory. The CMakeLists.txt files in basic, io, and web generate standalone shared libraries. The CMakeLists.txt files in each app directory pull in each shared library with the add_subdirectory() command. You could then pull down all the library directories and whichever app(s) you wanted and initiate the build from within each app directory.
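A sketch of what each app's CMakeLists.txt could look like under that layout (the paths assume the apps sit next to the library directories):

cmake_minimum_required(VERSION 2.8)
project(app-a)
# Out-of-tree source directories need an explicit binary directory:
add_subdirectory(../basic "${CMAKE_BINARY_DIR}/basic")
add_subdirectory(../io "${CMAKE_BINARY_DIR}/io")
add_executable(app-a main.cpp)
target_link_libraries(app-a io)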
You can use ADD_SUBDIRECTORY for this!
https://cmake.org/cmake/help/v3.11/command/add_subdirectory.html
I ended up doing what I outlined in my question, which is to check in an empty directory (containing a .gitignore file which ignores everything) and tell CMake to GLOB any directories (which are put in there by the user). Then I can just say cmake myrootdir and it does find all the various components. This works more or less OK. It does have some drawbacks, though: some third-party tools like BuildBot expect a more traditional project structure, which makes integrating them with this sort of arrangement a little more work.
The CMake BASIS tool provides utilities where you can create independent modules of a project and selectively enable and disable them using the ccmake command.
Full disclosure: I'm a developer for the project.
A little background: we have a fairly large code base which builds into a set of libraries, which are then distributed for internal use in various binaries. At the moment, the build process for this is haphazard and everything is built off the trunk.
We would like to explore whether there is a build system which will allow us to manage releases and automatically pull in dependencies. Such a tool exists for Java: Maven. I like its package, repository, and dependency mechanisms, and I know that with either the maven-native or maven-nar plugin we could get this. However, the problem is that we cannot fix the source trees to the "maven way", and unfortunately the plugins (at least maven-nar) don't seem to like code that is not structured this way...
So my question is, is there a tool which satisfies the following for C++
build
package (for example libraries with all headers, something like the .nar)
upload package to a "repository"
automatically pull in the required dependencies from said repository, extract headers and include them in the build, extract libraries and link them. The dependencies would be described in the "release" for that binary, so if we were to use a CI server to build that "release", the build script has the necessary dependencies listed (like the pom.xml files).
I could roll my own by modifying either make+shell scripts or waf/scons with extra python modules for the packaging and dependency management - however I would have thought that this is a common problem and someone somewhere has a tool for this? Or does everyone roll their own? Or have I missed a significant feature of waf/scons or CMake?
EDIT: I should add, open source is preferred, and non-MS...
Most of the linux distributions, for example, contain dependency tracking for their packages. Of all the things that I've tried to cobble together myself to take on your problem, in the end they all are "not quite perfect". The best thing to do, IMHO, is to create a local yum/deb repository or something (continuing my linux example) and then pull stuff from there as needed.
Many of the source-packages also quickly tell you the minimum components that must be installed to do a self-build (as opposed to installing a binary pre-compiled package).
Unfortunately, none of these methods is that much easier, though any of them beats trying to do it all yourself. In the end, to support cross-platform builds, you need one of these systems per OS as well. Fun!
I am not sure if I understand correctly what you want to do, but I will tell you what we use and hope it helps.
We use cmake for our build. It has to be noted that cmake is quite powerful. Among other things, you can "make install" in custom directories to collect headers and binaries there to build your release. We combine this with some Python scripting to build our releases. YMMV, but some things might just be too specific for a generic tool, and a custom script may be the simpler solution.
Our build tool builds releases directly from an svn repository (checkout, build, ...), which I can really recommend to avoid some local state polluting the release in some unforeseen way. It also enforces reproducibility.
It depends a lot on the platforms you're targeting. I can only really speak for Linux, but there it also depends on the distributions you're targeting, packages being a distribution-level concept. To make things a bit simpler, there are families of distributions using similar packaging mechanisms and package names, meaning that the same recipe for making a Debian package will probably make an Ubuntu package too.
I'd definitely say that if you're willing to target a subset of all known Linux distros using a manageable set of packaging mechanisms, you will benefit in the long run from not rolling your own and building packages the way the distribution creators intended. These systems allow you to specify run- and build-time dependencies, and automatic CI environments also exist (like OBS for rpm-based distros).
I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, Cegui, boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool. This is why we based our project on CMake too so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share with autotools so far and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used on Windows natively.
Some of you suggested to let some IDE handle the dependency management. We consist of individuals using all possible technologies to code from pure Vim to fully blown Eclipse CDT or Visual Studio. This is where CMake allows use some flexibility with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
This allows you to download/check out code and configure and build it as part of your main build tree.
It should also allow you to set dependencies between the external projects.
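A minimal sketch (the URLs and names are placeholders):

include(ExternalProject)
ExternalProject_Add(ogre3d
  URL        https://example.com/ogre3d-src.tar.gz
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps)
ExternalProject_Add(cegui
  URL        https://example.com/cegui-src.tar.gz
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
  DEPENDS    ogre3d)   # makes sure Ogre is built and installed first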
At my work (medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK, etc.) are built once for each system we support (MSWin32, MSWin64, Linux32, etc.) and are committed as zip files in the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in and I think it works beautifully. Truth be told it does take a little bit to get used to the syntax, but I have used it successfully on a project that requires the distribution of python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain about is any type of Windows wrapper for GNU Autotools. I know you are able to use it inside of Cygwin, but as for actually distributing files and dependencies on Windows platforms, you are probably better off using a Windows MSI installer (or something that can package your project inside of Visual Studio).
If you want to distribute dependencies you can set them up under a different subdirectory, for example, libzip, with a specific Makefile.am entry which will build that library. When you perform a make install the library will be installed to the lib folder that the configure script determined it should use.
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use some script that scans source files (or has the C++ compiler do that) and finds #includes (conditional compilation makes this tricky) and generates part of the makefile.
But you'd need to call this script whenever something might have changed.
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean set up?
As you said, CMake will compile everything once the dependencies are on the machines. Are you just looking for a way to package up the dependency source? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) is all you need.
Edit: Side note, CMake has the file command which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
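For example (the URL and the error handling are illustrative):

file(DOWNLOAD https://example.com/deps/ogre.tar.gz "${CMAKE_BINARY_DIR}/ogre.tar.gz"
     TIMEOUT 60 STATUS dl_status)
list(GET dl_status 0 dl_code)   # first element is the numeric result, 0 on success
if(NOT dl_code EQUAL 0)
  message(FATAL_ERROR "download failed: ${dl_status}")
endif()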
Edit 2: CPack is another tool by the CMake guys that can be used to package up files and such for distribution on various platforms. It can create NSIS for Windows and .deb or .tgz files for *nix.
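The minimal hookup looks something like this (the metadata values are illustrative):

set(CPACK_PACKAGE_NAME "mygame")
set(CPACK_PACKAGE_VERSION "0.1.0")
set(CPACK_GENERATOR "TGZ")   # e.g. NSIS on Windows, DEB on Debian-style systems
include(CPack)               # must come after the CPACK_* settings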
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows cmake to be run from various locations.
/
  CMakeLists.txt        "install precompiled dependencies and build project"
  project/
    CMakeLists.txt      "build the project managing dependencies of subsystems"
    subsystem1/
      CMakeLists.txt    "build subsystem 1, assume dependencies are already met"
    subsystem2/
      CMakeLists.txt    "build subsystem 2, assume dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation but that the top level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files but it makes the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
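The usual way to make a sub-file work both standalone and nested looks roughly like this (a simplified sketch, not our actual files):

# subsystem1/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(subsystem1)
add_library(subsystem1 src/subsystem1.cpp)
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
  # Standalone build: find the precompiled dependencies ourselves.
  # When driven from the top level, the parent has already set this up.
endif()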
I did not set up the system (I helped, but it is not my baby). The author said that the boost cmake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source stuff is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, sqlite, and cmake, all in the correct order).
The tool, named "C++ Version Manager" (inspired by the excellent Ruby Version Manager), is coded in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.