In my team we build applications using C++ as the main language, and whenever a new project starts we end up copy-pasting files from other projects as needed. This happens frequently, and we had a discussion about how to improve it.
So, to change that, we decided to create a single library (or several small libraries) containing everything that is not business-specific, and we decided to use CMake for it.
But my question is whether there is a way to import this library (or these small libraries) without recompiling them every time we commit a change.
For example if we have two libraries and two projects, where:
Project A depends on -> library A and Library B
Project B depends on -> library B only
Having our source directory like this:
LIB A
    include
    src
    CMakeLists.txt
LIB B
    include
    src
    CMakeLists.txt
Project A
    include
    src
    CMakeLists.txt
Project B
    include
    src
    CMakeLists.txt
How can we set up the CMakeLists.txt in projects A and B so that, when we change something in library A or B and re-run cmake and then make in project B, for example, all of the changes appear in it? And the same for the other project.
Is it possible?
I had the same issue in the past while working on some personal projects.
I can offer some suggestions, some of which use your approach towards solving the problem, and some of which don't:
Method 1 (A different approach, multiple source control repositories)
Don't split the code using different CMake files.
Instead, use a single CMake file and split the codebase into smaller repositories.
For example, all the shared utility libraries together could form a single repository, while applications A and B get a repository each.
(You could of course split the utility libraries into multiple repositories as well).
This makes sure that you don't have to hold/update/work with all the projects at once, but only those you actually need. The only downside is that you have a constraint on the way you check out these projects, but I don't think that's an issue.
Method 2 (Same approach, using CMake's add_dependencies)
You could define dependencies on the compilation of applications A and B so that the relevant libraries are automatically built if they were updated.
Here is a link to CMake's add_dependencies manual.
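As a rough sketch of this method, assuming a top-level CMakeLists.txt above all four directories (the target names libA, libB, appA and appB are placeholders):

cmake_minimum_required( VERSION 3.5 )
project( workspace )

add_subdirectory( libA )      # defines target libA
add_subdirectory( libB )      # defines target libB
add_subdirectory( projectA )  # defines target appA
add_subdirectory( projectB )  # defines target appB

Inside projectA/CMakeLists.txt, linking already creates a build-order dependency, so appA is rebuilt and relinked whenever libA or libB change:

add_executable( appA ${sources} )
target_link_libraries( appA libA libB )

add_dependencies is for the cases where there is no link relationship but a target must still be built first (e.g. a code generator).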
I'm setting up a C++ project using CMake and Catch/gmock for unit testing (the exact framework is not very important; unit testing frameworks work similarly). I'm targeting the Windows (MSVC compiler) and Linux platforms. I'd like to put all the tests in a single executable, as described in Catch's tutorial. For unit testing purposes I will probably write some fake implementations/mocks. I'm afraid that when I build everything into one executable (test sources, project sources, fake implementations) I will get multiple-definition linker errors, since there will be multiple definitions of some functions: the true ones and the fake ones. Possible solutions I see now:
Build multiple executables, each with the right implementations. I see this as the worst solution, because I would end up with a bunch of files which I would have to execute to test the program (probably via some script, but it's not a convenient option).
Compile each test with its dependencies into a shared library, and link all of those libraries into the main test runner executable. I think this would be quite hard to achieve, especially since I'm not familiar with Linux shared libraries. On Windows it should be doable with some dllexports, but it's not really easy.
How should I solve this problem? What are real-world solutions for this? Maybe I'm missing something really simple and chasing a nonexistent problem? I'd like a reasonably easy, multi-platform solution.
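For reference, a rough CMake sketch of option 1; the file and target names are made up, and Catch2::Catch2WithMain is the Catch2 v3 CMake target, so adjust for your framework version. Each test executable links the real sources it exercises plus its own fakes, so true and fake definitions never meet in a single link step:

add_executable( test_network
    tests/test_network.cpp
    src/network.cpp                # real code under test
    tests/fakes/fake_clock.cpp )   # fake stands in for src/clock.cpp
target_link_libraries( test_network PRIVATE Catch2::Catch2WithMain )

enable_testing()
add_test( NAME network COMMAND test_network )

Registering each executable with CTest gives back a single entry point (a plain ctest run), which softens the "bunch of files to execute" drawback.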
I have a project which is built for multiple OSes (Linux and Windows for now, maybe OS X later) and processors. The project has a handful of library dependencies, mostly external, but also a couple of internal ones in source form, which I compile (cross-compile) for each OS-processor combination relevant in my context.
Most of the external libraries don't change very often, only for an occasional local bugfix, or when a feature/bugfix in a newer version would benefit the project. The internal libraries change quite often (one-month cycles) and are provided by another team in my company in binary form, although I also have access to the source code; if I need a bug fixed, I can do that myself and generate new binaries for my own use until the next release cycle. The setup I have right now is the following (filesystem only):
-- dependencies
   |-- library_A_v1.0
   |   |-- include
   |   |-- lib
   |-- library_A_v1.2
   |   |-- include
   |   |-- lib
   |-- library_B
   |   |-- include
   |   |-- lib
   |-- ...
The libraries are kept on a server, and every time I make an update I have to copy any new binaries and header files to the server. The synchronization on the client side is done using a file synchronization utility. Of course, any updates to the libraries need to be announced to the other developers, and everyone has to remember to synchronize their "dependencies" folder.
Needless to say, I don't like this scheme very much. So I was thinking of putting my libraries under version control (git): build them, pack them into a tgz/zip, and push them to the repo. Each library would have its own git repository so that I could easily tag/branch already-used versions and test-drive new ones; a "stream" of data for each library that I could easily get, combine, and update. I would like to have the following:
get rid of this plain-filesystem way of keeping the libraries; right now completely separate folders are kept and managed for each OS and each version, and sometimes they get out of sync, resulting in a mess
more control over it, being able to keep a clear history of which versions of the libs we used for which version of our project, much like what we get from git (VCS) with our source code
be able to tag/branch the versions of the dependencies I'm using (for each and every one of them); e.g. I have my v2.0.0 tag/branch for library_A, from which I normally take it for my project, but I would like to test-drive version 2.1.0, so I just build it, push it to the server on a different branch, and call my build script with this particular dependency pointing to the new branch
have simpler build scripts: just pull the sources from the server, pull the dependencies, and build (see the sketch after this list); that would also allow using different versions of the same library for different processor-OS combinations (we need that more often than not)
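For illustration, here is a rough CMake (3.14+) sketch of that "pull the dependencies and build" step, assuming each library is published as one archive per OS/branch; the server URL and all names are made up:

include( FetchContent )

FetchContent_Declare( library_A
    URL "https://deps.example.com/library_A/v2.0.0/${CMAKE_SYSTEM_NAME}.tgz" )
FetchContent_MakeAvailable( library_A )  # downloads and unpacks into the build tree

add_executable( my_app src/main.cpp )
target_include_directories( my_app PRIVATE "${library_a_SOURCE_DIR}/include" )
target_link_directories( my_app PRIVATE "${library_a_SOURCE_DIR}/lib" )
target_link_libraries( my_app PRIVATE library_A )

Switching the URL (or a cache variable holding it) to a different branch or version is then a one-line change.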
I tried to find some alternatives to a directly git-based solution, but without much success; git-annex, for example, seems overly complicated for what I'm trying to do.
What I'm facing right now is that there seems to be a very strong opinion against putting binary files under git or any VCS (although technically I would also have header files; I could also push the folder structure I described directly to git, without the tgz/zip, but I would still have the library binaries), and some of my colleagues, driven by that shared strong opinion, are against this scheme of things. I perfectly understand that git tracks content and not files, but to some extent I would also be tracking content, and I believe it would definitely be an improvement over the scheme we have right now.
What would be a better solution to this situation? Do you know of any alternatives to the git(VCS)-based scheme? Would it be such a monstrous thing to have my scheme under git :)? Please share your opinions and, especially, your experience in handling these kinds of situations.
Thanks
An alternative, which would still follow your approach, would be to use git-annex, which would let you track the header files while keeping the binaries stored elsewhere.
Then each git repo can be added as a submodule to your main project.
Imagine an overall project with several components:
basic
io
web
app-a
app-b
app-c
Now, let's say web depends on io which depends on basic, and all those things are in one repo and have a CMakeLists.txt to build them as shared libraries.
How should I set things up so that I can build the three apps, if each of them is optional and may not be present at build time?
One idea is to have an empty "apps" directory in the main repo and we can clone whichever app repos we want into that. Our main CMakeLists.txt file can use GLOB to find all the app directories and build them (not knowing in advance how many there will be). Issues with this approach include:
Apparently CMake doesn't re-glob when you just say make, so if you add a new app you must run cmake again.
It imposes a specific structure on the person doing the build.
It's not obvious how one could make two clones of a single app and build them both separately against the same library build.
The general concept is like a traditional recursive CMake project, but where the lower-level modules don't necessarily know in advance which higher-level ones will be using them. Yet, I don't want to require the user to install the lower-level libraries in a fixed location (e.g. /usr/local/lib). I do however want a single invocation of make to notice changed dependencies across the entire project, so that if I'm building an app but have changed one of the low-level libraries, everything will recompile appropriately.
My first thought was to use the CMake import/export target feature.
Have a CMakeLists.txt for each of basic, io and web, and one CMakeLists.txt that references those. You could then use the CMake export feature to export those targets, and the application projects could then import the CMake targets.
When you build the library project first, the application projects should be able to find the compiled libraries automatically (without the libraries having to be installed to /usr/local/lib); otherwise, you can always set the proper CMake variable to indicate the correct directory.
When doing it this way, a make in an application project won't trigger a make in the library project; you would have to take care of that yourself.
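A rough sketch of the idea, with made-up names; export() writes a file that other projects can include() to import the targets directly from the library build tree:

# In the library project:
add_library( basic SHARED basic.cpp )
add_library( io SHARED io.cpp )
target_link_libraries( io basic )
export( TARGETS basic io FILE "${CMAKE_BINARY_DIR}/CoreTargets.cmake" )

# In an application project (CORE_BUILD_DIR set by the user):
include( "${CORE_BUILD_DIR}/CoreTargets.cmake" )
add_executable( app-a main.cpp )
target_link_libraries( app-a io )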
Have multiple CMakeLists.txt.
Many open-source projects take this approach (LibOpenJPEG, LibPNG, Poppler, etc.). Take a look at their CMakeLists.txt files to find out how they've done it.
Basically this allows you to just toggle features as required.
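The toggling typically looks something like this (the option name is a placeholder):

option( BUILD_APP_A "Build application A" ON )
if( BUILD_APP_A )
    add_subdirectory( app-a )
endif()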
I see two additional approaches. One is to simply have basic, io, and web be submodules of each app. Yes, there is duplication of code and wasted disk space, but it is very simple to implement and guarantees that different compiler settings for each app will not interfere with each other across the shared libraries. I suppose this makes the libraries not be shared anymore, but maybe that doesn't need to be a big deal in 2011. RAM and disk have gotten cheaper, but engineering time has not, and sharing of source is arguably more portable than sharing of binaries.
Another approach is to have the layout specified in the question, and have CMakeLists.txt files in each subdirectory. The CMakeLists.txt files in basic, io, and web generate standalone shared libraries. The CMakeLists.txt files in each app directory pull in each shared library with the add_subdirectory() command. You could then pull down all the library directories and whichever app(s) you wanted and initiate the build from within each app directory.
You can use ADD_SUBDIRECTORY for this!
https://cmake.org/cmake/help/v3.11/command/add_subdirectory.html
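For example, a sketch of an app-level CMakeLists.txt pulling in sibling library directories (the paths are hypothetical; when the source directory lies outside the current tree, add_subdirectory requires an explicit binary directory):

cmake_minimum_required( VERSION 3.11 )
project( app-a )

add_subdirectory( ../basic "${CMAKE_BINARY_DIR}/basic" )
add_subdirectory( ../io "${CMAKE_BINARY_DIR}/io" )

add_executable( app-a main.cpp )
target_link_libraries( app-a io )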
I ended up doing what I outlined in my question: check in an empty directory (containing a .gitignore file which ignores everything) and tell CMake to GLOB any directories (which are put in there by the user). Then I can just say cmake myrootdir and it finds all the various components. This works more or less OK. It does have some drawbacks, though, such as the fact that some third-party tools like BuildBot expect a more traditional project structure, which makes integrating other tools with this sort of arrangement a little more work.
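Roughly, the globbing part looks like this ("apps" being the checked-in empty directory):

file( GLOB children RELATIVE "${CMAKE_SOURCE_DIR}/apps" "${CMAKE_SOURCE_DIR}/apps/*" )
foreach( child ${children} )
    if( IS_DIRECTORY "${CMAKE_SOURCE_DIR}/apps/${child}" )
        add_subdirectory( "apps/${child}" )
    endif()
endforeach()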
The CMake BASIS tool provides utilities where you can create independent modules of a project and selectively enable and disable them using the ccmake command.
Full disclosure: I'm a developer for the project.
I need some pointers/advice on how to automatically generate CMakeLists.txt files for CMake. Does anyone know of any existing generators? I've checked the ones listed in the CMake Wiki, but unfortunately they are not suitable for me.
I already have a basic Python script which traverses my project's directory structure and generates the required files, but it's really "dumb" right now. I would like to augment it to take into account, for example, the different platforms I'm building for, the compiler/cross-compiler I'm using, or the different versions of the library dependencies I might have. I don't have much expert experience with CMake, and an example I could base my work on, or an already-working generator, would be of great help.
I am of the opinion that you don't need an automated script for generating CMakeLists.txt, as it is a very simple task to write one once you have understood the basic procedure. I do agree that understanding the procedure as given in the CMake Wiki is also difficult, as it is overly detailed.
A very basic example showing how to write CMakeLists.txt is shown here, which I think will be of use to everyone, even someone who is going to write CMakeLists.txt for the first time.
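In that spirit, a first CMakeLists.txt can be as small as this (project and file names are placeholders):

cmake_minimum_required( VERSION 3.10 )
project( HelloWorld CXX )

add_executable( hello src/main.cpp )
target_include_directories( hello PRIVATE include )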
Well, I don't have much experience with CMake either, but to perform a cross-platform build a lot of files need to be written and modified, including the CMakeLists.txt file. I suggest you use this new tool called the ProjectGenerator Tool; it's pretty cool, it does all the extra work needed, and it makes it easy to generate such files for third-party sources with little effort.
Just read the README carefully before using it.
Link:
http://www.ogre3d.org/forums/viewtopic.php?f=1&t=54842
I think that you are doing this upside down.
When using CMake, you are supposed to write the CMakeLists.txt yourself. Typically, you don't need to handle different compilers, as CMake has knowledge of them. However, if you must, you can add code in the CMake files to do different things depending on the tool being used.
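For example, tool-specific settings can be branched on the compiler information CMake already provides:

if( MSVC )
    add_compile_options( /W4 )
else()
    add_compile_options( -Wall -Wextra )
endif()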
CLion is an integrated development environment that is fully based on CMake project files.
It is able to generate the CMakeLists.txt file itself when you use "import project from sources".
However, it is quite probable that you will have to edit this file manually as your project grows and when adding external dependencies.
I'm maintaining a C++ software environment that has more than 1000 modules (shared, static libraries, programs) and uses more than 20 third parties (boost, openCV, Qt, Qwt...). This software environment hosts many programs (~50), each one picking up some libraries, programs and third parties. I use CMake to generate the makefiles and that's really great.
However, if you write your CMakeLists.txt files as recommended (declaring the module as a library/program, importing source files, adding dependencies...), I agree with celavek: maintaining those CMakeLists.txt files is a real pain:
When you add a new file to a module, you need to update its CMakeLists.txt
When you upgrade a third party, you need to update the CMakeLists.txt of all modules using it
When you add a new dependency (library A now needs library B), you may need to update the CMakeLists.txt of all programs using A
When you want a global setting to be changed (a compiler setting, a predefined variable, the C++ standard used), you need to update all your CMakeLists.txt files
So, I see two strategies to address those issues, which likely include the one mentioned by the OP.
1- Have the CMakeLists.txt files be well written and smart enough not to have frozen behaviour, but to update themselves on the fly. That's what we have in our software environment. Each module has a standardized file organization (sources are in the src folder, includes are in the inc folder...) and simple text files to specify its dependencies (with keywords we defined, like QT to say the module needs to link with Qt). Then, our CMakeLists.txt is a two-line file that simply calls a CMake macro we wrote to automatically set up the module. As an MCVE, that would be:
CMakeLists.txt:
include( utl.cmake )
add_module( mylib lib )  # CMake separates arguments with whitespace, not commas
utl.cmake:
macro( add_module name what )
    # Gather all sources from the module's standardized src folder
    file( GLOB_RECURSE source_files "${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp" )
    include_directories( "${CMAKE_CURRENT_SOURCE_DIR}/inc" )
    if( ${what} STREQUAL "lib" )        # note: STREQUAL, not STREQUEL
        add_library( ${name} SHARED ${source_files} )
    elseif( ${what} STREQUAL "prg" )
        add_executable( ${name} ${source_files} )
    endif()
    # TODO: Parse the simple text files to add target_link_libraries accordingly
endmacro()
Then, for all the situations exposed above, you simply need to update utl.cmake, not the thousands of CMakeLists.txt files you have...
Honestly, we are very happy with this approach; the system has become very easy to maintain, and we can easily add new dependencies, upgrade third parties, and change build/dependency strategies...
However, there remains a lot of CMake script to be written. And the CMake script language sucks... The tool is very powerful, right, but the script language's variable scoping, the cache, the painful and not-so-well-documented syntax (just to check whether a list is empty, you must ask for its size and store that in a variable!), and the fact that it's not object-oriented... make it a real pain to maintain.
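For instance, the list-emptiness check mentioned above has to be spelled like this:

list( LENGTH my_list my_list_len )  # no direct "is empty" predicate
if( my_list_len EQUAL 0 )
    message( STATUS "my_list is empty" )
endif()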
So, I'm now convinced the real good approach may be to:
2- Completely generate the CMakeLists.txt files from a more powerful language like Python. The Python script would do things similar to what our utl.cmake does, except that it would generate a CMakeLists.txt ready to be passed to the CMake tool (with a format like the one proposed in HelloWorld: no variables, no functions... it would only call standard CMake commands).
I doubt such a generic tool exists, because it's hard to produce CMakeLists.txt files that will make everyone happy; you'll have to write it yourself. Note that gen-cmake does this (generates a CMakeLists.txt), but in a very primitive way, and it apparently only supports Linux; still, it may be a good starting point.
This is likely to be the v2 of our software environment...one day.
Note: Additionally, if you want to support both qmake and CMake, for instance, a well-written Python script could generate both the CMakeLists and the .pro files on demand!
Not sure whether this is the problem the original poster faced, but as I see plenty of "just write the CMakeLists.txt" answers above, let me briefly explain why generating CMake files may make sense:
a) I have another build system I am fairly happy with (and which covers a large multiplatform build of a big collection of interconnected shared and static libraries, programs, scripting language extensions, and tools, with various internal and external dependencies, quirks, and variants)
b) Even if I were to replace it, I would not consider CMake. I took a look at CMake files and I am not happy with the syntax, and not happy with the semantics.
c) CLion uses CMake files, and CMake files only (and seems somewhat interesting)
So, to give CLion a chance (I love PyCharm, so it's tempting), but to keep using my own build system, I would gladly use some tool which would let me implement

make generate_cmake

and have all the necessary CMake files generated on the fly according to the current information extracted from my build system. I could gladly feed the tool/script with information about which sources and headers my app consists of, which libraries and programs it is expected to build, which -I, -L, -D, etc. are expected to be set for which component, and so on.
Well, of course, I would be much happier if JetBrains provided some direct protocol for feeding the IDE with the information it needs (say, let me provide my own commands to compile and run, and emit whatever metadata they really need; I suppose they mainly need include dirs and defines to implement on-the-fly code analysis, and library paths to set up LD_LIBRARY_PATH for the debugger) without going through CMake. CMake files as a protocol are somewhat complicated.
Maybe this could be helpful:
https://conan.io/
The author has given some talks at CppCon about CMake and how to create modular projects using CMake. As far as I know, this tool requires CMake, so I suppose it generates CMake files when you integrate new packages or create new ones. Recently I read something about writing a higher-level description of a C/C++ project using a YAML file, but I'm not sure whether that is part of conan (what I read was from the author of conan). I have never used it, and it is still pending for me, so please, if you use it and it fits your needs, share your opinion of it and how it fits your scenario.
I was looking for such a generator but at the end I decided to write my own (partly because I wanted to understand how CMake works):
https://github.com/Aenteas/cmake-generator
It has a couple of additional features, such as creating Python wrappers (SWIG).
Writing a generator that suits everyone is impossible, but I hope it gives you an idea in case you want to build your own customized version.