I'm considering reimplementing our build system (currently based on GNU Make) in CMake.
Disclaimer: this is more of a theoretical and "best practices" question. I don't know CMake in-depth. Also, please feel free to migrate the question to programmers if it's more on-topic there.
As far as I understand, the standard workflow for CMake is
cmake .
make
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
So, during the usual development process you're supposed to run make to avoid unnecessary regeneration of the CMakeCache and Makefiles and generally keep the process straightforward. But then, if you add, say, a new source file to CMakeLists and run make, it'll be using the old CMakeCache and Makefiles and will not regenerate them automatically. I think this may cause major problems at scale: if something is not building as it should, you'll have to try make clean, and if that doesn't help, remove the CMakeCache and regenerate everything (manually!).
If I'm not right about something of the above, please correct me.
I'd like to just do
awesome-cmake
and have it update everything that needs updating and build the project.
So, the question: is there a way to make an "atomic build" with CMake so that it tracks all the required information and abstracts away the usage of make?
I think you have a couple of incorrect ideas here:
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
Ultimately, CMake is all about producing correct Makefiles (or Visual Studio solution files, or XCode project files, or whatever). Unless you modify a generated Makefile by hand, there can be no synchronisation issue between CMake and the Makefile since CMake generates the Makefile.
But then, if you add, say, a new source file to CMakeLists and run make, it'll be using the old CMakeCache and Makefiles and will not regenerate them automatically.
Actually, the opposite is true: if you modify the CMakeLists.txt (e.g. adding a new source, changing a compiler flag, adding a new dependency) then running make will trigger a rerun of CMake automatically. CMake will read in its previously cached values (which includes any command line args previously given to CMake) and generate an updated Makefile.
if something is not building as it should, you'll have to try make clean, and if that doesn't help, remove the CMakeCache and regenerate everything (manually!)
Yes, this would be a pretty normal workflow if something has gone wrong. However, things don't often get that bad in my experience.
So, the question: is there a way to make an "atomic build" with CMake so that it tracks all the required information and abstracts away the usage of make?
Given that running make will cause CMake to "do the right thing", i.e. rerun if required, I guess that using make is as close to an "atomic build" as possible.
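Incidentally, if you want a single front-end command regardless of the generator, you can let cmake drive the native build tool for you (run from the build directory):
cmake --build .
This just invokes the underlying make (or ninja, or MSBuild, ...), so the rerun-CMake-if-needed behaviour described above still applies.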
One thing to beware of here is the use of file(GLOB ...) or similar to generate a list of source files. From the docs:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
In other words, if you do use file(GLOB ...) to gather a list of sources, you need to get into the habit of rerunning CMake after adding/removing a file from your source tree; running make won't trigger a rerun of CMake in this situation.
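For example, here is a minimal sketch of the two approaches (target and file names are purely illustrative):
# Option 1 -- explicit listing: adding a file means editing CMakeLists.txt,
# which is exactly what makes the next 'make' rerun CMake automatically
add_executable(myapp main.cpp foo.cpp bar.cpp)
# Option 2 -- globbing: the list is computed at configure time, so adding a
# new .cpp file will NOT cause 'make' to rerun CMake on its own
#file(GLOB MYAPP_SOURCES "*.cpp")
#add_executable(myapp ${MYAPP_SOURCES})
(Newer CMake releases, 3.12 and later, also accept file(GLOB ... CONFIGURE_DEPENDS ...), which asks the generated build system to re-check the glob at build time, but the documentation still recommends listing sources explicitly.)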
The standard workflow for CMake is an out-of-source build:
mkdir build
cd build
cmake ..
make
Related
I am looking for a way to get the command invoked to compile a specific file inside CMake as a variable. This is the same "command" as it shows up in a compile_commands.json. It is of the form:
%gcc% %bunch of -DSOMETHINGS% %bunch of -Isome/files% -o %cpp files% ... etc.
The project I am in the process of porting to CMake makes a note of the invoked compiler command inside a file, which I am changing to use the configure_file cmake mechanism. The configured file is needed prior to compiling a specific object.
While searching, I found this answer to what is pretty much the same question: cmake - get the used commandline flags "-D". However, it is more than 3 years old and only concerns retrieving the list of "-D" flags, which the user solved by manually keeping track of them. I consider this suboptimal. If this command is written into compile_commands.json, I should be able to access it somehow, right? When exactly is compile_commands.json written? Any chance I can get it to write that file for, say, a non-functional dummy target, then access it and use it to configure my file in the real project?
I know this is kind of against the ethos of CMake, and I love the fact that I don't ever have to manage the flags manually myself. However, I find it hard to believe there is no way to get them for a specific file. Am I overlooking something?
I have looked at CMake properties on source files, and retrieving them using get_property, however I don't believe I can reconstruct the whole command from these.
I am currently generating Unix makefiles, although the mechanism should optimally also work when generating MSVC project files, which I know CMAKE_EXPORT_COMPILE_COMMANDS does not.
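For reference, the way I turn on the export is just the standard variable (only honoured by the Makefile and Ninja generators, as far as I can tell):
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
or equivalently -DCMAKE_EXPORT_COMPILE_COMMANDS=ON on the command line; compile_commands.json then shows up in the build directory after the configure/generate step.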
Any thoughts and input on this are greatly appreciated!
My question is the following:
Is there a way to tell CMake where to generate its files, such as cmake_install.cmake, CMakeCache.txt, etc.?
More specifically, is there something I can put in the CMakeLists files that specifies where these generated files should go? I have searched around the web for answers; most people say there is no explicit way of doing this, while others say it might be possible using custom commands. Sadly, I'm not very strong in cmake, so I couldn't figure this out.
I'm currently using the CLion IDE and there you can specifically set the output path through the settings, but for flexibility reasons I would like as much as possible to be done through the CMakeFiles such that compiling from different computers isn't that big of a hassle.
I would also like to avoid explicitly adding additional command line arguments etc.
I hope someone might have an answer for me, thanks in advance!
You can't (easily) do this and you shouldn't try to do it.
The build tree is CMake's territory. It allows you some tiny amount of customization there (for instance you can specify where the final build artifacts will be placed through the *_OUTPUT_DIRECTORY target properties), but it does not give you any direct control over where intermediate files, like object files or internal make scripts used for bookkeeping are being placed.
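For instance, the final artifact locations can be steered with something like this (a sketch; the bin/lib layout is just a common convention, not a requirement):
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
These variables initialize the corresponding *_OUTPUT_DIRECTORY properties on every target; you can also set them per target with set_target_properties(). That, however, is about the extent of the control you get over where things land in the build tree.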
This is a feature. You have no idea how all the build systems supported by CMake work internally. Maybe you can move that internal file to a different location in your build process, which is based on Unix Makefiles. But maybe that will also horribly break my build process, which is using Visual Studio. The bottom line is: You shouldn't have to care about this. CMake should take care of it, and by taking some freedom away from you, it ensures that it can actually do that job on all supported build toolchains.
But this might still be an unsatisfactory answer to you. You're the developer, shouldn't you be in full control of the results produced by your build? Of course you should, which is why CMake again grants you full control over what goes into the install tree. That is, whatever ends up in the install directory when you call make install (or whatever is the equivalent of installing in your build toolchain) is again under your control.
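For example, a minimal sketch of controlling the install tree (target name and destinations are illustrative):
install(TARGETS my_app RUNTIME DESTINATION bin)
install(DIRECTORY include/ DESTINATION include)
Only what you declare with install() ends up under CMAKE_INSTALL_PREFIX when you run make install; the rest of the build tree stays wherever CMake put it.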
So you do control everything that matters: The source tree, the install tree, and that tiny portion of the build tree where the final build artifacts go. The rest of the build tree is off-limits for you and for good reasons.
If I accidentally change a header file, save it, and then change it back and re-save, how do I stop cmake from detecting the change and rebuilding everything that depends on it? Usually I don't even know it got modified until after I re-run make and it starts a rebuild.
I've tried some naive manual timestamp changes, but had no luck.
To be clear, I'm looking for a hack or someone who can explain the rules cmake uses. The environment is linux/os x using command line gcc/clang.
cmake is a makefile generator (amongst other things it can generate). That's why you build with make.
The behavior you see is indeed standard make behavior. This is a generalized build tool; it rebuilds any "target" by applying the "recipe" for that target whenever the target is outdated. These targets and recipes have been written by cmake.
You can ask make which targets it would rebuild (make --dry-run) and update the timestamp of the header so it predates all targets (touch -r oldest_target header.h). Alternatively, to avoid working out which target is oldest, alternate make --dry-run with touch -r on the first target it reports, until make --dry-run reports nothing left to rebuild.
I'm looking for a one file c++ build system for a simple c++ project on linux. The project has src/ and include/ directories. And I need a debug and a release build.
I can do this in one makefile, but it's not straightforward and lacks readability.
I tried doing this with CMake, but it's not simple enough: it involves out-of-source builds and multiple CMakeLists files. I guess I could live with the multiple CMakeLists, but I don't like the out-of-source builds.
Another thing I want is easy one-liners for the debug and release versions. For instance, with my makefile I use make debug and make release.
Is there a simpler one file solution that is readable?
My wishlist would also include support for git, but this probably requires its own question. By this I mean that it can combine multiple git/shell commands to simplify things. For instance, say you have a version in the CMakeLists.txt and you want to increment the version, push to git, and tag the code, all with make release.
I looked at a few build systems and I ended up using CMake. It's not the solution I wanted, but I gave up and settled.
There are a variety of build systems you could use. Here are a couple:
SCons
Gradle
I am wondering how I should manage a growing C++ project. At the moment I am developing the project with Netbeans, letting it do the dirty work of generating makefiles. The project has become too big and I have decided to split it up into a few parts. What is the best way of doing this?
I am trying to use SCons as my build system. I have had some success with it, but do I have to edit the build scripts every time I add or delete a file? It's too tedious.
So I need your advice.
P.S. By the way, how does a large project like Google Chrome handle this? Does everybody just use some kind of IDE, with build scripts generated only for software distribution?
I also use Netbeans for C++ and compile with SCons. I use the jVi Netbeans plugin which really works well.
For some reason the Netbeans Python plugin is no longer official, which I don't understand at all. You can still get it, though, and it really makes editing the SCons build scripts a nice experience. Even though Netbeans doesn't have an SCons plugin (yet?), you can still configure its build command to execute SCons.
As for having the IDE maintain the SCons scripts automatically, I don't do that either; I do it by hand. But it's not like I have to deal with this on a daily basis, so I don't see that it's that important, especially considering how easy the scripts are to read.
Here's the build script in SCons that does the same as mentioned previously for CMake:
env = Environment()
env.EnsurePythonVersion(2, 5)
env.EnsureSConsVersion(2, 1)
libTarget = env.SharedLibrary(target = 'foo', source = ['a.cpp', 'b.cpp', 'c.cpp'])
env.Program(target = 'bar', source = ['bar.cpp', libTarget])
The SCons Glob() function is a nice option, but I tend to shy away from automatically building all the files in a directory. The same goes for listing sub-directories to be built. I've been burned enough times by this, and prefer explicitly specifying the files/dirs to be built.
In case you hear those rumors that SCons is slower than other alternatives, the SCons GoFastButton has some pointers that can help out.
Most large projects stick with a build system that automatically handles all the messy details for them. I'm a huge fan of CMake (which is what KDE uses for all their components) but scons is another popular choice. My editor (KDevelop) supposedly handles CMake projects itself, but I still edit the build scripts myself because it's not that hard.
I'd recommend learning one tool really well and sticking with it (plenty of documentation is available for any tool you'll be interested in). Be sure you also look into version control if you haven't already (I have a soft spot for git, but Mercurial and Subversion are also very popular choices).
A simple CMake example:
project("My Awesome Project" CXX)
cmake_minimum_required(VERSION 2.8)
add_library(foo SHARED a.cpp b.cpp c.cpp) #we'll build an so file
add_executable(bar bar.cpp)
target_link_libraries(bar foo) #link bar to foo
This is obviously a trivial case, but it's very easy to manage and expand as needed.
I am trying to use SCons as my build system. I have had some success with it, but do I have to edit
the build scripts every time I add or delete a file? It's too tedious.
Depending on how your files are organized, you can use SCons' Glob() function to gather the source files as a list without having to name each file individually. For example, to build all C++ source files into an executable, you can do:
Program('program', Glob('*.cpp'))
You can do the same in CMake with its file(GLOB ...) command.
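Roughly, the CMake equivalent would be (names are illustrative; note that CMake evaluates the glob at configure time, so you need to rerun cmake after adding files):
file(GLOB SOURCES "*.cpp")
add_executable(program ${SOURCES})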
And, if you're using SCons, since it's Python you can write arbitrary Python code to make your source file lists.
You can also organize files into multiple folders and have subsidiary SConscript (or CMakeLists.txt) files that the master build script can call.