Imagine an overall project with several components:
basic
io
web
app-a
app-b
app-c
Now, let's say web depends on io which depends on basic, and all those things are in one repo and have a CMakeLists.txt to build them as shared libraries.
How should I set things up so that I can build the three apps, if each of them is optional and may not be present at build time?
One idea is to have an empty "apps" directory in the main repo and we can clone whichever app repos we want into that. Our main CMakeLists.txt file can use GLOB to find all the app directories and build them (not knowing in advance how many there will be). Issues with this approach include:
Apparently CMake doesn't re-glob when you just say make, so if you add a new app you must run cmake again.
It imposes a specific structure on the person doing the build.
It's not obvious how one could make two clones of a single app and build them both separately against the same library build.
The general concept is like a traditional recursive CMake project, but where the lower-level modules don't necessarily know in advance which higher-level ones will be using them. Yet, I don't want to require the user to install the lower-level libraries in a fixed location (e.g. /usr/local/lib). I do however want a single invocation of make to notice changed dependencies across the entire project, so that if I'm building an app but have changed one of the low-level libraries, everything will recompile appropriately.
My first thought was to use the CMake import/export target feature.
Have a CMakeLists.txt for basic, io and web and one CMakeLists.txt that references those. You could then use the CMake export feature to export those targets and the application projects could then import the CMake targets.
When you build the library project first, the application projects should be able to find the compiled libraries automatically (without the libraries having to be installed to /usr/local/lib); otherwise you can always set the proper CMake variable to point at the correct directory.
When doing it this way, a make in the application project won't trigger a make in the library project; you would have to take care of that yourself.
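As a minimal sketch of the export/import idea (file names and paths here are illustrative, not prescribed):

# Library project: after defining the basic, io and web targets,
# write an import script into the build tree.
export(TARGETS basic io web FILE ${CMAKE_BINARY_DIR}/mylibs-targets.cmake)

# Application project: import the targets from the library build tree
# and link against them as if they were defined locally.
include(/path/to/library-build/mylibs-targets.cmake)
add_executable(app-a main.cpp)
target_link_libraries(app-a web)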
Have multiple CMakeLists.txt.
Many open-source projects take this approach (LibOpenJPEG, LibPNG, Poppler, etc.). Take a look at their CMakeLists.txt files to see how they've done it.
Basically allowing you to just toggle features as required.
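A sketch of such feature toggling with option() (the option names are made up for illustration):

# Top-level CMakeLists.txt: each app is optional and toggled via the cache.
option(BUILD_APP_A "Build app-a" ON)
option(BUILD_APP_B "Build app-b" OFF)
if(BUILD_APP_A)
    add_subdirectory(app-a)
endif()
if(BUILD_APP_B)
    add_subdirectory(app-b)
endif()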
I see two additional approaches. One is to simply have basic, io, and web be submodules of each app. Yes, there is duplication of code and wasted disk space, but it is very simple to implement and guarantees that different compiler settings for each app will not interfere with each other across the shared libraries. I suppose this makes the libraries not be shared anymore, but maybe that doesn't need to be a big deal in 2011. RAM and disk have gotten cheaper, but engineering time has not, and sharing of source is arguably more portable than sharing of binaries.
Another approach is to have the layout specified in the question, and have CMakeLists.txt files in each subdirectory. The CMakeLists.txt files in basic, io, and web generate standalone shared libraries. The CMakeLists.txt files in each app directory pull in each shared library with the add_subdirectory() command. You could then pull down all the library directories and whichever app(s) you wanted and initiate the build from within each app directory.
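A sketch of what an app-level CMakeLists.txt might look like under that layout (paths are illustrative); because the libraries are built as subdirectories of the app's build, a single make from the app notices changed library sources:

cmake_minimum_required(VERSION 3.0)
project(app-a)
# The library source dirs live outside this directory, so each needs an
# explicit binary dir inside our build tree.
add_subdirectory(../basic ${CMAKE_BINARY_DIR}/basic)
add_subdirectory(../io ${CMAKE_BINARY_DIR}/io)
add_subdirectory(../web ${CMAKE_BINARY_DIR}/web)
add_executable(app-a main.cpp)
target_link_libraries(app-a web)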
You can use ADD_SUBDIRECTORY for this!
https://cmake.org/cmake/help/v3.11/command/add_subdirectory.html
I ended up doing what I outlined in my question, which is to check in an empty directory (containing a .gitignore file which ignores everything) and tell CMake to GLOB any directories (which are put in there by the user). Then I can just say cmake myrootdir and it finds all the various components. This works more or less OK. It does have some drawbacks, though: some third-party tools like Buildbot expect a more traditional project structure, which makes integrating other tools with this sort of arrangement a little more work.
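The top-level globbing can look something like this (a sketch, assuming the checked-in empty directory is apps/):

# Collect whatever app directories the user has cloned into apps/.
# Note: CMake will not re-glob on a plain make; re-run cmake after adding an app.
file(GLOB app_dirs RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}/apps ${CMAKE_CURRENT_SOURCE_DIR}/apps/*)
foreach(app ${app_dirs})
    if(IS_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/apps/${app})
        add_subdirectory(apps/${app})
    endif()
endforeach()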
The CMake BASIS tool provides utilities where you can create independent modules of a project and selectively enable and disable them using the ccmake command.
Full disclosure: I'm a developer for the project.
Clarification: If the setup below is a reasonable way to do it, then my question is mainly: how do I best add the public headers along with the .lib when I run ninja install, as my solution does not do this currently?
There is a myriad of examples of how to do things in different ways, and many seem to contradict each other, which just confuses me even more.
I'm trying to replace an in-house build system with CMake. The codebase consists of many different components/libraries, like OS/platform abstraction, networking, functionality, etc. Before, this was always compiled together as one big codebase, so running clean would remove everything and rebuild all dependencies.
Now I've come to the part where I have trouble understanding what a modern, smart way to do this in CMake looks like.
Ignoring the fact that a dependency-chain tool on top is needed, my idea was to build the components/libraries as isolated build runs, in order from the libraries with no dependencies up to the topmost.
Then each isolated build run would be built with its dependencies, like only the windows API for the OS abstraction library, and then run install and export on it.
Is this the generally preferred way: to compile a library to a .lib on Windows or .a on Linux and export it to a known shared location?
I'm guessing it would then be required to copy all public header files for this library to the same location? If so, how is this most correctly done?
If I have my OS abstraction folder structure
OSabstraction/
  includes/
    OSabstraction/
      abstraction.h
  src/
    abstraction.cpp
  CMakeLists.txt
and my CMakeLists is
cmake_minimum_required(VERSION 3.12)
project(OSabstraction LANGUAGES CXX)

add_library(abstraction)

target_sources(abstraction
    PRIVATE
        src/abstraction.cpp
)

target_include_directories(abstraction
    PUBLIC
        $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/includes/>
        $<INSTALL_INTERFACE:include>
)

install(
    TARGETS abstraction
    EXPORT abstraction-export
    ARCHIVE DESTINATION lib
)
How would I best go around to use this library in many other inhouse projects? This library is in practice many times larger, so what would be the correct way to do it for a large project?
Edit: I tried to follow Pablo Arias' guide on how to do CMake right, but I am not sure how the BUILD_INTERFACE and INSTALL_INTERFACE generator expressions work with regard to getting my header files exported along with the .lib file.
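For reference, a common pattern for getting the public headers installed alongside the library (a sketch; the destinations and file names are illustrative) is to add an install rule for the include directory and install the export set:

# Install the public headers; install(TARGETS ...) alone does not copy them.
install(DIRECTORY includes/ DESTINATION include)
# Install the export set so downstream projects can import the target.
install(EXPORT abstraction-export
    FILE abstraction-targets.cmake
    DESTINATION lib/cmake/abstraction
)

The $<BUILD_INTERFACE:...> path is used when the target is consumed directly from the build tree, while $<INSTALL_INTERFACE:include> applies to the installed copy; the headers themselves still need their own install rule, as above.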
The recommended solution is to use a package manager together with CMake.
The link below is a full C++ project setup template which can use the package manager https://conan.io/ to support splitting dependencies into individual libraries.
https://github.com/trondhe/cmake_template
I need some pointers/advice on how to automatically generate CMakeLists.txt files for CMake. Does anyone know of any existing generators? I've checked the ones listed in the CMake Wiki but unfortunately they are not suitable for me.
I already have a basic Python script which traverses my project's directory structure and generates the required files, but it's really "dumb" right now. I would like to augment it to take into account, for example, the different platforms I'm building for, the compiler/cross-compiler I'm using, or the different versions of the library dependencies I might have. I don't have much expert experience with CMake, and an example I could base my work on, or an already working generator, would be of great help.
I am of the opinion that you need not use an automated script for generating CMakeLists.txt, as it is a very simple task to write one once you have understood the basic procedure. I do agree that the procedure as given in the CMake Wiki is difficult to follow, as it is overly detailed.
A very basic example showing how to write CMakeLists.txt is shown here, which I think will be of use to everyone, even someone who is going to write CMakeLists.txt for the first time.
Well, I don't have much experience with CMake either, but to perform a cross-platform build a lot of files need to be written and modified, including the CMakeLists.txt file. I suggest you use this new tool called the ProjectGenerator tool; it's pretty cool, does all the extra work needed, and makes it easy to generate such files for third-party sources with little effort.
Just read the README carefully before using it.
Link:
http://www.ogre3d.org/forums/viewtopic.php?f=1&t=54842
I think that you are doing this upside down.
When using CMake, you are supposed to write the CMakeLists.txt yourself. Typically, you don't need to handle different compilers, as CMake has knowledge of them. However, if you must, you can add code in the CMakeLists.txt files to do different things depending on the tool being used.
CLion is an integrated development environment that is fully based on CMake project files.
It is able to generate the CMakeLists.txt file itself when you use "import project from source".
However, it is quite probable that you will have to edit this file manually as your project grows and when adding external dependencies.
I'm maintaining a C++ software environment that has more than 1000 modules (shared, static libraries, programs) and uses more than 20 third parties (boost, openCV, Qt, Qwt...). This software environment hosts many programs (~50), each one picking up some libraries, programs and third parties. I use CMake to generate the makefiles and that's really great.
However, if you write your CMakeLists.txt files as recommended (declare each module as a library/program, import source files, add dependencies...), I agree with celavek: maintaining those CMakeLists.txt files is a real pain:
When you add a new file to a module, you need to update its CMakeLists.txt
When you upgrade a third party, you need to update the CMakeLists.txt of all modules using it
When you add a new dependency (library A now needs library B), you may need to update the CMakeLists.txt of all programs using A
When you want a global setting to be changed (compiler setting, predefined variable, C++ standard used), you need to update all your CMakeLists.txt files.
Then, I see two strategies to address those issues, likely including the one mentioned by the OP.
1- Have the CMakeLists.txt files be well written and smart enough not to have frozen behaviour, but to update themselves on the fly. That's what we have in our software environment. Each module has a standardized file organization (sources are in the src folder, includes are in the inc folder...) and simple text files to specify its dependencies (with keywords we defined, like QT to say the module needs to link with Qt). Then, our CMakeLists.txt is a two-line file that simply calls a CMake macro we wrote to automatically set up the module. As an MCVE that would be:
CMakeLists.txt:
include( utl.cmake )
add_module( mylib lib )
utl.cmake:
macro( add_module name what )
    file( GLOB_RECURSE source_files "${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp" )
    include_directories( ${CMAKE_CURRENT_SOURCE_DIR}/inc )
    # Macro arguments are not variables, so dereference them for STREQUAL.
    if ( "${what}" STREQUAL "lib" )
        add_library( ${name} SHARED ${source_files} )
    elseif ( "${what}" STREQUAL "prg" )
        add_executable( ${name} ${source_files} )
    endif()
    # TODO: Parse the simple text files to add target_link_libraries accordingly
endmacro()
Then, for all the situations exposed above, you simply need to update utl.cmake, not the thousands of CMakeLists.txt files you have...
Honestly, we are very happy with this approach, the system becomes very easy to maintain and we can easily add new dependencies, upgrade third parties, change some build/dependency strategies...
However, there remains a lot of CMake script to be written. And the CMake scripting language sucks... the tool is very powerful, sure, but the script's variable scoping, the cache, the painful and not-so-well-documented syntax (just to check if a list is empty you must ask for its size and store this in a variable!), and the fact that it's not object-oriented make it a real pain to maintain.
So, I'm now convinced the real good approach may be to:
2- Completely generate the CMakeLists.txt from a more powerful language like Python. The Python script would do things similar to what our utl.cmake does, except it would generate a CMakeLists.txt ready to be passed to the CMake tool (with a format as proposed in HelloWorld: no variables, no functions... it would only call standard CMake commands).
I doubt such a generic tool exists, because it's hard to produce CMakeLists.txt files that will make everyone happy; you'll have to write it yourself. Note that gen-cmake does this (generates a CMakeLists.txt), but in a very primitive way, and it apparently only supports Linux; still, it may be a good starting point.
This is likely to be the v2 of our software environment...one day.
Note: Additionally, if you want to support both qmake and CMake, for instance, a well-written Python script could generate both CMakeLists.txt and .pro files on demand!
Not sure whether this is a problem the original poster faced, but as I see plenty of "just write the CMakeLists.txt yourself" answers above, let me briefly explain why generating CMakeLists.txt files may make sense:
a) I have another build system I am fairly happy with (and which covers a large multiplatform build of a big collection of interconnected shared and static libraries, programs, scripting language extensions, and tools, with various internal and external dependencies, quirks, and variants).
b) Even if I were to replace it, I would not consider CMake. I took a look at CMakeLists.txt files and I am not happy with the syntax and not happy with the semantics.
c) CLion uses CMakeLists.txt files, and CMakeLists.txt files only (and seems somewhat interesting).
So, to give CLion a chance (I love PyCharm, so it's tempting), but to keep using my build system, I would gladly use some tool which would let me
implement
make generate_cmake
and have all the necessary CMakeLists.txt files generated on the fly according to the current info extracted from my build system. I would gladly feed the tool/script with information about which sources and headers my app consists of, which libraries and programs it is expected to build, which -I, -L, -D, etc. are expected to be set for which component, and so on.
Well, of course I would be much happier if JetBrains provided some direct protocol for feeding the IDE with the information it needs (say, allowed me to provide my own commands to compile and run, and to emit whatever metadata they really need; I suppose they mainly need include dirs and defines to implement on-the-fly code analysis, and library paths to set up LD_LIBRARY_PATH for the debugger), without referring to CMake. CMakeLists.txt files as a protocol are somewhat complicated.
Maybe this could be helpful:
https://conan.io/
The author has given some talks at CppCon about CMake and how to create modular projects using it. As far as I know, this tool requires CMake, so I suppose it generates the CMake files when you integrate new packages or create new ones. Recently I read something about writing a higher-level description of a C/C++ project using a YAML file, but I'm not sure if that is part of Conan or not (what I read was from the author of Conan). I have never used it, and it is still pending for me, so if you use it and it fits your needs, please comment with your opinion on it and how it fits your scenario.
I was looking for such a generator but at the end I decided to write my own (partly because I wanted to understand how CMake works):
https://github.com/Aenteas/cmake-generator
It has a couple of additional features such as creating python wrappers (SWIG).
Writing a generator that suits everyone is impossible but I hope it will give you an idea in case you want to make your customized version.
There are already some questions about dependency managers here, but it seems to me that they are mostly about build systems, while I am looking for something targeted purely at making dependency tracking and resolution simpler (and I'm not necessarily interested in learning a new build system).
So, typically we have a project and some common code with another project. This common code is organized as a library, so when I want to get the latest code version for a project, I should also go get all the libraries from the source control. To do this, I need a list of dependencies. Then, to build the project I can reuse this list too.
I've looked at Maven and Ivy, but I'm not sure if they would be appropriate for C++, as they look quite heavily java-targeted (even though there might be plugins for C++, I haven't found people recommending them).
I see it as a GUI tool producing some standardized dependency list which can then be parsed by different scripts etc. It would be nice if it could integrate with source control (tag, get a tagged version with dependencies etc), but that's optional.
Would you have any suggestions? Maybe I'm just missing something, and usually it's done some other way with no need for such a tool? Thanks.
You can use Maven with C++ in two ways. First, you can use it for dependency management of components between each other. Second, you can use the maven-nar-plugin for creating shared libraries and unit tests in combination with the Boost library (my experience). In the end you can create RPMs (maven-rpm-plugin) out of it to have an adequate installation medium. Furthermore, I have created the installation for a CI environment via Maven (RPMs for Hudson, a Nexus installation in RPMs).
I'm not sure if you would see a version control system (VCS) as a build tool, but Mercurial and Git support sub-repositories. In your case a sub-repository would hold your dependencies:
Join multiple subrepos into one and preserve history in Mercurial
Multiple git repo in one project
Use your VCS to archive the build results -- needed anyway for maintenance -- and refer to the libs and header files in your build environment.
If you are looking for a reference take a look at https://android.googlesource.com/platform/manifest.
I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, CEGUI, Boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool. This is why we based our project on CMake too so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share with autotools so far and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used on Windows natively.
Some of you suggested letting an IDE handle the dependency management. Our team consists of individuals using all possible technologies to code, from pure Vim to fully-blown Eclipse CDT or Visual Studio. This is where CMake allows us some flexibility, with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
This allows you to download/check out code, then configure and build it as part of your main build tree.
It should also allow you to set dependencies.
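A minimal sketch of ExternalProject usage (the URL and target names are illustrative):

include(ExternalProject)
ExternalProject_Add(ogre3d_external
    URL https://example.com/ogre3d-src.tar.gz
    PREFIX ${CMAKE_BINARY_DIR}/external/ogre3d
    CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/external/install
)
# Ensure our own target builds only after the dependency is built and installed.
add_dependencies(mygame ogre3d_external)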
At my work (a medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK, etc.) are built once for each system we support (MSWin32, MSWin64, Linux32, etc.) and are committed as zip files to the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
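A sketch of how such an extraction step might look in CMake (the zip layout and paths are hypothetical, not their actual tooling):

# Pick the prebuilt third-party zip matching the current platform.
set(THIRDPARTY_ZIP ${CMAKE_SOURCE_DIR}/thirdparty/boost-${CMAKE_SYSTEM_NAME}.zip)
file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/thirdparty)
# cmake -E tar also understands zip archives, so this stays cross-platform.
execute_process(
    COMMAND ${CMAKE_COMMAND} -E tar xf ${THIRDPARTY_ZIP}
    WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/thirdparty
)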
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in and I think it works beautifully. Truth be told it does take a little bit to get used to the syntax, but I have used it successfully on a project that requires the distribution of python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain on is any type of Windows wrapper for GNU Autotools. I know you are able to use it inside of Cygwin, but as for actually distributing files and dependencies on Windows platforms you are probably better off using a Windows MSI installer (or something that can package your project inside of Visual Studio).
If you want to distribute dependencies you can set them up under a different subdirectory, for example, libzip, with a specific Makefile.am entry which will build that library. When you perform a make install the library will be installed to the lib folder that the configure script determined it should use.
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use some script that scans source files (or have the C++ compiler do it) to find #includes (conditional compilation makes this tricky) and generate part of the makefile.
But you'd need to call this script whenever something might have changed.
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean set up?
As you said, CMake will compile everything once the dependencies are on the machines. Are you just looking for a way to package up the dependency source? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) is all you need.
Edit: Side note, CMake has the file command which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
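For example (the URL is illustrative):

file(DOWNLOAD https://example.com/deps/boost.tar.gz
     ${CMAKE_BINARY_DIR}/boost.tar.gz
     TIMEOUT 60
     STATUS download_status)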
Edit 2: CPack is another tool by the CMake guys that can be used to package up files and such for distribution on various platforms. It can create NSIS for Windows and .deb or .tgz files for *nix.
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows cmake to be run from various locations.
/
  CMakeLists.txt           "install precompiled dependencies and build project"
  project/
    CMakeLists.txt         "build the project managing dependencies of subsystems"
    subsystem1/
      CMakeLists.txt       "build subsystem 1, assume dependencies are already met"
    subsystem2/
      CMakeLists.txt       "build subsystem 2, assume dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation but that the top level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files but it makes the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
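One common way to let a CMakeLists.txt work both standalone and as part of a larger tree (a sketch, not their actual code) is to guard the top-level setup and duplicate targets:

# subsystem1/CMakeLists.txt
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    # Built in isolation: act as a top-level project.
    cmake_minimum_required(VERSION 2.8)
    project(subsystem1)
endif()
if(NOT TARGET subsystem1)
    add_library(subsystem1 src/subsystem1.cpp)
endif()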
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source distribution is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app, with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works, for my app. (Installing UnitTest++, Boost, Wt, sqlite, cmake all in correct order)
The tool, named «C++ Version Manager» (inspired by the excellent ruby version manager), is coded in bash and hosted on github : https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.
I am working on a C++ library. Ultimately, I would like to make it publicly available for multiple platforms (Linux and Windows at least), along with some examples and Python bindings. Work is progressing nicely, but at the moment the project is quite messy, built solely in and for Visual C++ and not multi-platform at all.
Therefore, I feel a cleanup is in order. The first thing I'd like to improve is the project's directory structure. I'd like to create a structure that is suitable for the Automake tools to allow easy compilation on multiple platforms, but I've never used these before. Since I'll still be doing (most of the) coding in Visual Studio, I'll need somewhere to keep my Visual Studio project and solution files as well.
I tried to google for terms like "C++ library directory structure", but nothing useful seems to come up. I found some very basic guidelines, but no crystal clear solutions.
While looking at some open source libraries, I came up with the following:
\mylib
  \mylib       <source files, read somewhere to avoid 'src' directory>
    \include?  or just mix .cpp and .h
  \bin         <compiled examples, where to put the sources?>
  \python      <Python bindings stuff>
  \lib         <compiled library>
  \projects    <VC++ project files, .sln goes in project root?>
  \include?
  README
  AUTHORS
  ...
I have no/little previous experience with multi-platform development/open source projects and am quite amazed that I cannot find any good guidelines on how to structure such a project.
How should one generally structure such a library project? What can you recommend reading? Are there any good examples?
One thing that's very common among Unix libraries is that they are organized such that:
./ Makefile and configure scripts.
./src General sources
./include Header files that expose the public interface and are to be installed
./lib Library build directory
./bin Tools build directory
./tools Tools sources
./test Test suites that should be run during a `make test`
It somewhat reflects the traditional Unix filesystem under /usr where:
/usr/src Sometimes contains sources for installed programs
/usr/include Default include directory
/usr/lib Standard library install path
/usr/share/projectname Contains files specific to the project.
Of course, these may end up in /usr/local (which is the default install prefix for GNU autoconf), and they may not adhere to this structure at all.
There's no hard-and-fast rule. I personally don't organize things this way. (I avoid using a ./src/ directory at all except for the largest projects, for example. I also don't use autotools, preferring instead CMake.)
My suggestion to you is that you should choose a directory layout that makes sense for you (and your team). Do whatever is most sensible for your chosen development environment, build tools, and source control.
There is this awesome convention that I recently came across that might be helpful: The Pitchfork Layout (also on GitHub).
To sum up, subsection 1.3 states that:
PFL prescribes several directories that should appear at the root of the project tree. Not all of the directories are required, but they have an assigned purpose, and no other directory in the filesystem may assume the role of one of these directories. That is, these directories must be the ones used if their purpose is required.
Other directories should not appear at the root.
build/: A special directory that should not be considered part of the source of the project. Used for storing ephemeral build results. must not be checked into source control. If using source control, must be ignored using source control ignore-lists.
src/: Main compilable source location. Must be present for projects with compiled components that do not use submodules.
In the presence of include/, also contains private headers.
include/: Directory for public headers. May be present. May be omitted for projects that do not distinguish between private/public headers. May be omitted for projects that use submodules.
tests/: Directory for tests.
examples/: Directory for samples and examples.
external/: Directory for packages/projects to be used by the project, but not edited as part of the project.
extras/: Directory containing extra/optional submodules for the project.
data/: Directory containing non-source code aspects of the project. This might include graphics and markup files.
tools/: Directory containing development utilities, such as build and refactoring scripts.
docs/: Directory for project documentation.
libs/: Directory for main project submodules.
Additionally, I think the extras/ directory is where your Python bindings should go.
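Mapped onto the question's library, a PFL-style layout might look like this (a sketch):

mylib/
  src/         library sources and private headers
  include/     public headers
  examples/    sources of the compiled examples
  extras/
    python/    Python bindings
  docs/
  build/       ephemeral build results, ignored by source control
  README
  AUTHORS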
I don't think there are actually any good guidelines for this. Most of it is just personal preference. Certain IDEs will determine a basic structure for you, though. Visual Studio, for example, will create a separate bin folder which is divided into Debug and Release subfolders. In VS, this makes sense when you're compiling your code using different targets (Debug mode, Release mode).
As greyfade says, use a layout that makes sense to you. If someone else doesn't like it, they will just have to restructure it themselves. Fortunately, most users will be happy with the structure you've chosen. (Unless it's real messy.)
I find wxWidgets library (open source) to be a good example. They support many different platforms (Win32, Mac OS X, Linux, FreeBSD, Solaris, WinCE...) and compilers (MSVC, GCC, CodeWarrior, Watcom, etc.). You can see the tree layout here:
https://svn.wxwidgets.org/svn/wx/wxWidgets/trunk/
I can really recommend using CMake... it's made for cross-platform development and is much more flexible than Automake; use CMake and you will be able to write cross-platform code with your own directory structure on all systems.