The cmake quandary - c++

I am working on a C++ project. It is not very complicated so far, yet it depends on a bunch of "popular" libraries (nlohmann/json, ToruNiina/toml11 just to name a few). All of them have some CMakeLists.txt and, from my not-that-experienced point of view, they look well structured.
Now of course I can compile the libraries one by one, or include a "copy" in my project repo, but I want to do better than that. After researching the available build tools, I have decided to use cmake to build and manage the C++ project. The promise was to get a stable, widely supported tool that would help to simplify and unify the build process. Moreover, given the nature of the project, I have no way to impose any requirements on the target machine; I need to package everything for deployment.
I have spent several days reading, watching and testing out various cmake tutorials, handbooks and manuals. I have to admit, I quickly started to feel that a tool which is supposed to clarify the development process keeps introducing new obscurities contrary to its purpose. Originally, I put this down to my lack of experience, yet...
I read articles about why not to bundle dependencies, only for them to be followed by methods of doing exactly that. I found recommendations to prefer approach A over B, C over B and later A over C. It took me a while to figure out the differences between 2.8 and 3.0, the obscurity of target_link_libraries, setting the C++ standard and/or compiler warning flags, and so on.
My point is that even after an exhausting expedition into the seas of cmake, I am still not sure about some elementary questions:
How is cmake meant to be used?
What is a standard, what is a courtesy, and what is none of those?
How can I tell that something is a feature, an archaic backwards compatibility, or both?
Now let me illustrate this with my project. All I need is something like this:
cmake_minimum_required(VERSION 3.14)
project(CaseCore CXX)
add_executable(myBinary list/of/cpp/sources.cpp)
target_link_libraries(myBinary PUBLIC someExternalLibs likeForExample nlohmann_json::nlohmann_json oqs)
The only problem is with the libraries (there is no room for other problems anyway). I want to build them with the project and don't want to keep a local copy (so as not to drag a ton of unrelated files along). First, I created forks of the library repos in order to have a reliable source and to be able to merge newer versions into my fork.
The next decision was whether to use git submodules or some other scheme. I've read that submodules don't perform that well, and I also preferred the whole thing to be managed by cmake alone. I started with ExternalProject_Add but later found out about FetchContent, which let me add the external dependencies easily to my cmake list:
include(FetchContent)   # ships with CMake 3.11+; FetchContent_MakeAvailable needs 3.14+
FetchContent_Declare(nlohmann
    GIT_REPOSITORY https://github.com/my-reliable-fork-of/json
    GIT_TAG        v3.7.3
)
message(STATUS "Fetching Json...this may take a while")
FetchContent_MakeAvailable(nlohmann)
This seems minimal and works well. However, I always have to dig through the library itself in order to find or guess which targets to link against my executable target. There seems to be close to zero convention, and unless the respective CMakeLists.txt is simple enough to read, I tend to guess target names until I hit the right one.
Recently I wanted to link liboqs from here, and the above scenario did not really help; for some reason I can link oqs and #include "oqs/oqs.h", yet the dynamic library is not created and execution terminates. I am pretty sure I could resolve the problem after another stretch of time spent googling and playing around with various cmake variables. Yet this is not the way I expected cmake to help me manage my project; it is actually quite the opposite.
Just to be clear, I turned down other methods including
add_subdirectory from local repo copy (git submodule)
ExternalProject_Add from local repo copy (git submodule)
ExternalProject_Add from online repo
find_package
as they seemed much more obscure/old-style, etc. (even though, despite hours of research, they all still look to me like just so many ways of doing the same thing)
Now that I have laid all of this out, my questions are:
Am I doing something wrong, or is it really what working with cmake should look like?
Do I really have to "reverse-engineer" other people's CMakeLists in order to use a library?
Under these circumstances, how can I convince my coworkers to use similar work process?
and finally
How can I adjust my work in order to ease these difficulties for others?
The more I use C++, the more I love it. Yet I spend a tremendous amount of my productive time on sorting out dependencies... and I do not want to make this guy even more angry.

How is cmake meant to be used?
The typical cmake usage matches the old autotools usage:
$ cmake /path/to/src #replaces /path/to/src/configure
$ make
$ make install
Some targets changed (e.g., make check vs make test), and cmake doesn't provide all the same standard targets (e.g., make distclean), but the usage I have above is what most developers will do (and since cmake re-runs itself, it's really just the second step most of the time).
If your CMakeLists.txt doesn't support this workflow, you should have a very good reason. Most tooling will assume a workflow like this, so you're severely limiting yourself.
What is a standard, what is a courtesy, and what is none of those?
Outside of the above, cmake is pretty much the wild west. Things are becoming more standardized thanks to better documentation and training, but it's far from perfect.
A well-behaved cmake project should export its targets (lots of questions and answers on Stack Overflow about this) and propagate flags and dependencies. This makes it far easier for dependent projects to consume exported targets, and luckily it's easy to do.
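For reference, exporting might look roughly like the following sketch (the library name, destinations and namespace are placeholders, not anything prescribed by cmake):
add_library(mylib src/mylib.cpp)
target_include_directories(mylib PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)
# install the artifacts and headers, and export the target under a namespace
install(TARGETS mylib EXPORT mylibTargets
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin)
install(DIRECTORY include/ DESTINATION include)
install(EXPORT mylibTargets NAMESPACE mylib:: DESTINATION lib/cmake/mylib)
A consumer that pulls in the exported mylibTargets.cmake (usually via a small package config file) can then just link against mylib::mylib and inherit its include directories and other usage requirements.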
How can I tell that something is a feature, an archaic backwards compatibility, or both?
There's nothing I'm aware of that makes these distinctions. In general, newer methods leverage the target_* functions instead of the global ones (e.g., target_include_directories vs include_directories). The target_* functions are also used to propagate flags, include directories, compiler features, and dependent libraries like I mentioned above.
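As a quick, illustrative comparison (the target and flag names here are made up):
# old, directory-scoped style: leaks into every target defined afterwards
include_directories(${CMAKE_SOURCE_DIR}/include)
add_definitions(-DUSE_FEATURE_X)
# newer, target-scoped style: usage requirements travel with the target
add_library(core src/core.cpp)
target_include_directories(core PUBLIC include)
target_compile_definitions(core PRIVATE USE_FEATURE_X)
target_compile_features(core PUBLIC cxx_std_14)      # consumers inherit the C++14 requirement
target_link_libraries(core PUBLIC some_dependency)   # PUBLIC/PRIVATE/INTERFACE controls propagation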
Am I doing something wrong, or is it really what working with cmake should look like?
You're talking about managing external dependencies, and I'm going to skip this to avoid getting into opinions. The short version is that C and C++ dependencies are hard, and there are many competing ways of managing them in a project. They each have pros and cons, but most are still designed for the authors' use cases. You'll have to figure out what use cases you need, and choose tools and workflows based on that.
Do I really have to "reverse-engineer" other people's CMakeLists in order to use a library?
A well-behaved cmake project will export its targets properly, even if they use different dependency management than you do. If they don't, send the project a pull request (exporting isn't hard, and it's good to learn how) or just file bugs against them, especially if they're already using cmake as a build system.
Under these circumstances, how can I convince my coworkers to use similar work process?
It depends on your coworkers, and mileage will vary. I've dealt with coworkers who want to embrace best practices and support flexibility, and I've dealt with coworkers who are content only doing enough to solve the problems we're facing right now.

Related

C++ Libraries ecosystem using CMake and Ryppl

I am interested in building a cross-platform C++ Library and distributing it in source form. I want the consumers of this library to be able to acquire it, build it and consume it inside their software very easily on whatever platform they are working on and for whatever platform they are targeting. At the same time while building my library, I also want to be able to consume other popular OSS libraries through a similar mechanism.
I see that CMake and Ryppl were created with these intentions in mind and to some extent they do solve some of these problems, especially the build problem. But I don't quite know how exactly to go about achieving the above mentioned goals. Is it OK to settle on CMake as the build solution? How do I solve the library acquisition and distribution problem? Simply host the sources somewhere and let people discover, download and build them? Or is there a better way?
At the time of writing there is no accepted solution that handles everything you want. CMake gives you cross-platform builds, and git (with submodules) gives you a way to manage source-level dependencies if all the other projects are using CMake. But, in practice, many common dependencies your project will need don't use CMake, or even Git.
Ryppl is meant to solve this, but progress is slow because the challenge is such a hard one.
Arguably, the most successful solution so far is to write header-only libraries. This is common practice and means your users just include your files, and their build system of choice takes care of everything. There are no binary dependencies to manage.
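In cmake terms, a header-only library is usually modelled as an INTERFACE target; a minimal, illustrative sketch (the names are placeholders) could be:
add_library(mylib INTERFACE)
target_include_directories(mylib INTERFACE
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)
# a consumer only needs: target_link_libraries(app PRIVATE mylib)
# nothing is compiled or linked; only an include path is propagated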
TheHouse's answer is still essentially true. Also there don't seem to have been any updates to ryppl itself for a while (3 years) and the ryppl.org domain has expired.
There are some new projects aiming to solve the packaging issue.
Both build2 and wrap from mesonbuild have that goal in mind.
A proposal was made recently to add packages to the c++ standard which may open up the debate (reddit discussion here).
Wrap looks promising as meson's author has learned from cmake.
There is a good video of its author discussing this here.
build2 seems more oblivious (and therefore condemned to reinvent). However both suffer from trying to solve the external project dependencies issue simultaneously with providing a complete build system.
conan.io is another recent attempt which doesn't try to provide the build system as well. Time will tell if any of these gain any traction.
The accepted standard for packaging C and C++ projects on Unix was always a source tarball + a configure script (autotools) + make.
cmake is now beginning to replace autotools as your first choice.
It is able to create RPMs and tarballs (via CPack) for distribution purposes.
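A rough sketch of that packaging step with CPack (the package name and version are placeholders; RPM generation additionally needs rpmbuild installed):
# at the end of the top-level CMakeLists.txt
set(CPACK_PACKAGE_NAME    "myproject")
set(CPACK_PACKAGE_VERSION "1.0.0")
set(CPACK_GENERATOR "TGZ;RPM")   # a .tar.gz plus an RPM; add "DEB" for Debian packages
include(CPack)
# after building, run "cpack" (or "make package") to produce the archives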
It's also worth considering the package managers built into the various flavours of Linux. The easiest projects to build and install are those where most of the dependencies can be pulled in via yum or apt. This won't help you on Windows, of course. While there is a high barrier to entry to getting your own projects added to the main Linux repositories (e.g. RedHat, Debian), there is nothing to stop you adding and maintaining your own satellite repo.
The difference between that and just hosting your project on github or similar is that you can provide pre-built binaries for a number of popular systems.
You might also consider that configure-time checks (e.g. from CMake's find_library()) together with your own documentation will tell people what needs to be installed as a prerequisite, and provided you don't make it too onerous, that might be enough.

How to generate CMakeLists.txt?

I need some pointers/advice on how to automatically generate CMakeLists.txt files for CMake. Does anyone know of any existing generators? I've checked the ones listed in the CMake Wiki but unfortunately they are not suitable for me.
I already have a basic Python script which traverses my project's directory structure and generates the required files, but it's really "dumb" right now. I would like to augment it to take into account, for example, the different platforms I'm building for, the compiler/cross-compiler I'm using, or the different versions of the library dependencies I might have. I don't have much/expert experience with CMake, and an example I could base my work on, or an already working generator, would be of great help.
I am of the opinion that you need not use an automated script for generating CMakeLists.txt, as it is a very simple task to write one once you have understood the basic procedure. Yes, I do agree that understanding the procedure for writing one as given in the CMake Wiki is also difficult, as it is far too detailed.
A very basic example showing how to write CMakeLists.txt is shown here, which I think will be of use to everyone, even someone who is going to write CMakeLists.txt for the first time.
Well, I don't have much experience with CMake either, but to perform a cross-platform build a lot of files need to be written and modified, including the CMakeLists.txt file. I suggest that you use this new tool called the ProjectGenerator Tool; it's pretty cool, it does all the extra work needed and makes it easy to generate such files for third-party sources with little effort.
Just read the README carefully before using it.
Link:
http://www.ogre3d.org/forums/viewtopic.php?f=1&t=54842
I think that you are doing this upside down.
When using CMake, you are supposed to write the CMakeLists.txt yourself. Typically, you don't need to handle different compilers, as CMake has knowledge about them. However, if you must, you can add code in the CMakeLists.txt files to do different things depending on the tool you are using.
CLion is an integrated development environment that is fully based on the CMake project file.
It is able to generate the CMakeLists.txt file itself when you use "import project from sources".
However, it is quite probable that you will have to edit this file manually as your project grows and when adding external dependencies.
I'm maintaining a C++ software environment that has more than 1000 modules (shared, static libraries, programs) and uses more than 20 third parties (boost, openCV, Qt, Qwt...). This software environment hosts many programs (~50), each one picking up some libraries, programs and third parties. I use CMake to generate the makefiles and that's really great.
However, if you write your CMakeLists.txt as recommended (declaring the module as a library/program, importing source files, adding dependencies...), I agree with celavek: maintaining those CMakeLists.txt files is a real pain:
When you add a new file to a module, you need to update its CMakeLists.txt
When you upgrade a third party, you need to update the CMakeLists.txt of all modules using it
When you add a new dependency (library A now needs library B), you may need to update the CMakeLists.txt of all programs using A
When you want a new global settings to be changed (compiler setting, predefined variable, C++ standard used), you need to update all your CMakeLists.txt
Then, I see two strategies to address those issues, and likely the one mentioned by the OP.
1- Have the CMakeLists.txt files be well written and smart enough not to have a frozen behaviour, but to update themselves on the fly. That's what we have in our software environment. Each module has a standardized file organization (sources are in the src folder, includes are in the inc folder...) and has simple text files to specify its dependencies (with keywords we defined, like QT to say the module needs to link with Qt). Then, our CMakeLists.txt is a two-line file that simply calls a cmake macro we wrote to automatically set up the module. As an MCVE that would be:
CMakeLists.txt:
include( utl.cmake )
add_module( mylib lib )
utl.cmake:
macro( add_module name what )
    file(GLOB_RECURSE source_files "${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp")
    include_directories(${CMAKE_CURRENT_SOURCE_DIR}/inc)
    if ( ${what} STREQUAL "lib" )
        add_library( ${name} SHARED ${source_files} )
    elseif ( ${what} STREQUAL "prg" )
        add_executable( ${name} ${source_files} )
    endif()
    # TODO: Parse the simple text files to add target_link_libraries accordingly
endmacro()
Then, for all the situations exposed above, you simply need to update utl.cmake, not the thousands of CMakeLists.txt files you have...
Honestly, we are very happy with this approach, the system becomes very easy to maintain and we can easily add new dependencies, upgrade third parties, change some build/dependency strategies...
However, there remain a lot of CMake scripts to be written. And the CMake script language sucks... the tool's very powerful, right, but the script's variable scoping, the cache, the painful and not-so-well documented syntax (just to check if a list is empty you must ask for its size and store this in a variable!), the fact that it's not object oriented... make it a real pain to maintain.
So, I'm now convinced the real good approach may be to:
2- Completely generate the CMakeLists.txt from a more powerful language like Python. The Python script would do things similar to what our utl.cmake does, except that it would generate a CMakeLists.txt ready to be passed to the CMake tool (with a format like the one proposed in HelloWorld: no variables, no functions... it would only call standard CMake functions).
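For illustration, the generated output for one module could be as flat as this (the file and target names are placeholders):
cmake_minimum_required(VERSION 3.5)
project(mylib CXX)
# everything is spelled out explicitly by the generator: no variables, no custom functions
add_library(mylib SHARED
    src/foo.cpp
    src/bar.cpp)
target_include_directories(mylib PUBLIC inc)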
I doubt such a generic tool exists, because it's hard to produce CMakeLists.txt files that will make everyone happy; you'll have to write it yourself. Note that gen-cmake does this (generates a CMakeLists.txt), but in a very primitive way, and it apparently only supports Linux; still, it may be a good starting point.
This is likely to be the v2 of our software environment...one day.
Note: additionally, if you want to support both qmake and cmake, for instance, a well-written Python script could generate both CMakeLists and .pro files on demand!
Not sure whether this is a problem the original poster faced, but as I see plenty of "just write the CMakeLists.txt yourself" answers above, let me briefly explain why generating CMakefiles may make sense:
a) I have another build system I am fairly happy with (and which covers a large multiplatform build of a big collection of interconnected shared and static libraries, programs, scripting language extensions, and tools, with various internal and external dependencies, quirks and variants)
b) Even if I were to replace it, I would not consider cmake. I took a look at CMakefiles and I am not happy with the syntax and not happy with the semantics.
c) CLion uses CMakefiles, and Cmakefiles only (and seems somewhat interesting)
So, to give CLion a chance (I love PyCharm, so it's tempting), but to keep using my build system, I would gladly use some tool which would let me implement
make generate_cmake
and have all the necessary CMakefiles generated on the fly according to the current info extracted from my build system. I can gladly feed the tool/script with information about which sources and headers my app consists of, which libraries and programs it is expected to build, which -I, -L, -D, etc. are expected to be set for which component, and so on.
Well, of course I would be much happier if JetBrains allowed some direct protocol for feeding the IDE with the information it needs (say, let me provide my own commands to compile and to run, and to emit whatever metadata they really need - I suppose they mainly need incdirs and defines to implement on-the-fly code analysis, and libpaths to set up LD_LIBRARY_PATH for the debugger), without going through cmake. CMakefiles as a protocol are somewhat complicated.
Maybe this could be helpful:
https://conan.io/
The author has given some talks at CppCon about cmake and how to create modular projects using cmake. As far as I know, this tool requires cmake, so I suppose it generates it when you integrate new packages or create new packages. Recently I read something about how to write a higher-level description of a C/C++ project using a YAML file, but I am not sure if it is part of conan or not (what I read was from the author of conan). I have never used it, and it is still pending for me, so if you use it and it fits your needs, please comment with your opinion about it and how it fits your scenario.
I was looking for such a generator, but in the end I decided to write my own (partly because I wanted to understand how CMake works):
https://github.com/Aenteas/cmake-generator
It has a couple of additional features such as creating python wrappers (SWIG).
Writing a generator that suits everyone is impossible but I hope it will give you an idea in case you want to make your customized version.

Is there a build system for C++ which can manage release dependencies?

A little background: we have a fairly large code base which builds into a set of libraries that are then distributed for internal use in various binaries. At the moment, the build process for this is haphazard and everything is built off the trunk.
We would like to explore whether there is a build system which will allow us to manage releases and automatically pull in dependencies. Such a tool exists for Java: Maven. I like its package, repository and dependency mechanism, and I know that with either the maven-native or maven-nar plugin we could get this. However, the problem is that we cannot force the source trees into the "maven way" - and unfortunately the plugins (at least maven-nar) don't seem to like code that is not structured this way...
So my question is, is there a tool which satisfies the following for C++
build
package (for example libraries with all headers, something like the .nar)
upload package to a "repository"
automatically pull in the required dependencies from said repository, extract headers and include them in the build, extract libraries and link them. The dependencies would be described in the "release" for that binary - so if we were to use a CI server to build that "release", the build script would have the necessary dependencies listed (like the pom.xml files).
I could roll my own by modifying either make+shell scripts or waf/scons with extra python modules for the packaging and dependency management - however I would have thought that this is a common problem and someone somewhere has a tool for this? Or does everyone roll their own? Or have I missed a significant feature of waf/scons or CMake?
EDIT: I should add, OS is preferred, and non-MS...
Most of the linux distributions, for example, contain dependency tracking for their packages. Of all the things that I've tried to cobble together myself to take on your problem, in the end they all are "not quite perfect". The best thing to do, IMHO, is to create a local yum/deb repository or something (continuing my linux example) and then pull stuff from there as needed.
Many of the source-packages also quickly tell you the minimum components that must be installed to do a self-build (as opposed to installing a binary pre-compiled package).
Unfortunately, none of these methods are that much easier, though they're still better than trying to do it yourself. In the end, to support multiple platforms, you need one of these systems per OS as well. Fun!
I am not sure if I understand correctly what you want to do, but I will tell you what we use and hope it helps.
We use cmake for our build. It has to be noted that cmake is quite powerful. Among other things, you can "make install" into custom directories to collect headers and binaries there to build your release. We combine this with some Python scripting to build our releases. YMMV, but some things might just be too specific for a generic tool, and a custom script may be the simpler solution.
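A hedged sketch of what such install rules might look like (the target names and destinations are illustrative, not taken from the answer):
install(TARGETS mylib mytool
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin)
install(DIRECTORY include/ DESTINATION include)
# staging a release into a custom directory:
#   cmake -DCMAKE_INSTALL_PREFIX=/path/to/staging /path/to/src && make install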
Our build tool builds releases directly from an svn repository (checkout, build, ...), which I can really recommend to avoid local state polluting the release in some unforeseen way. It also enforces reproducibility.
It depends a lot on the platforms you're targeting. I can only really speak for Linux, but there it also depends on the distributions you're targeting, packages being a distribution-level concept. To make things a bit simpler, there are families of distributions using similar packaging mechanisms and package names, meaning that the same recipe for making a Debian package will probably make an Ubuntu package too.
I'd definitely say that if you're willing to target a subset of all known Linux distros using a manageable set of packaging mechanisms, you will benefit in the long run from not rolling your own and building packages the way the distribution creators intended. These systems allow you to specify run- and build-time dependencies, and automatic CI environments also exist (like OBS for rpm-based distros).

What are the dusty corners a newcomer to CMake will want to know?

I've done a lot of projects and a lot of different build systems and CI tools. Most recently, I've been exposed to the occasionally challenging task of adding to an autotools based environment for a reasonably sized C++ application. While I love the ease of use for the end user, I'm not so fond of dealing with m4 and all of the auto* tools from the developer side.
I'm working on a reasonably large side project in my spare time and have decided that I'd like to take CMake for a test drive. Since I'm just starting out, I'm obviously planning on digging through the documentation, FAQ, wikis, etc. and learning by doing. BTW, I'd fork over money for the "Mastering CMake" book, but the comments that I found on Amazon were enough to make me decide that it probably isn't worth the money. All that being said, in anything new there are often "gotchas" that a newcomer will often stumble over that the old pros have long since learned to avoid. I'm wondering what those are based on people's experience with CMake, and I'm hoping to cut my learning pains down a bit by asking here.
I should point out that I'm planning on building on Linux primarily, and other UN*X variants. Windows isn't really a concern from my POV. This is a large server-side application, with a Web and CLI interface for operators, a northbound REST interface for automation/integration with OSS tooling, and a SOAP southbound interface for CPEs. I'm going to need a lot of third-party libraries and applications to get this all to work unless I want to take the next 10 years to build this all by hand. :)
First of all, I think CMake is an excellent build-tool. It has in my opinion by far the best support for multi-platform-builds and has a powerful mechanism for finding 3rd-party libraries. Combined with CPack it even provides reasonable options for packaging and installation. Some hints and possible problems:
Always aim for out-of-source builds: an obvious one, but it can be difficult to get a project designed for in-source builds to build out-of-source. So, if you can, design it for out-of-source builds from the start.
Syntax: The syntax can be bizarre with a lot of strange quirks, although this is getting better since the 2.6 and 2.8 versions.
Caching of variables. CMake keeps track of a cache of variables and settings and sometimes this can be a problem when rebuilding something. Try to remove the CMakeCache.txt (or clean your out-of-source build-directory) and rebuild. This one is also mentioned by DarenW.
Finding and configuring 3rd-party libraries: If you have a large project depending on several libraries (your own or 3rd-party), this will likely be the most troublesome.
I seriously recommend diving into the find_package command. It is very powerful, but fully depends on the quality of the FindXXX.cmake files. Especially when building on multiple platforms (mostly Windows and Mac), these files may require some tweaking or you may need to write your own.
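For example, a typical find_package-based lookup might look like this (ZLIB stands in for whichever third-party library you actually need; Foo is a placeholder for one without a bundled module):
find_package(ZLIB REQUIRED)                    # uses the FindZLIB.cmake shipped with CMake
add_executable(app main.cpp)
target_link_libraries(app PRIVATE ZLIB::ZLIB)
# for libraries without a bundled FindXXX.cmake, point CMake at your own module directory
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
find_package(Foo REQUIRED)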
Problems with linking libraries like debugging "undefined references" or mismatches between debug|release or static|shared can be difficult to debug. Especially with 3rd-party libraries these problems may be caused by incorrect 3rd-party FindXXX.cmake files...
One thing that took me a while to figure out: if anything goes haywire with a build because I've changed a compiler option, added/removed a .c file, had confusion with support library versions, etc. - it's best to delete the build tree and run CMake from scratch. Otherwise, even though CMake and make seem to do their job, the resulting executable crashes.
It could be we're "not doing it right", but that's becoming a common problem with many tools, languages, frameworks, etc. It's not possible for everyone to be expert at all the software they depend on, and "not doing it right" is the only way a lot of things get done even among professionals. CMake is pretty smart about a lot of things, but not everything. Nuking the build tree remains a commonly used technique on our project.

What are the differences between Autotools, Cmake and Scons?

What are the differences between Autotools, Cmake and Scons?
In truth, Autotools' only real 'saving grace' is that it is what all the GNU projects are largely using.
Issues with Autotools:
Truly ARCANE m4 macro syntax combined with verbose, twisted shell scripting for tests for "compatibility", etc.
If you're not paying attention, you will mess up cross-compilation ability (It should clearly be noted that Nokia came up with Scratchbox/Scratchbox2 to side-step highly broken Autotools build setups for Maemo/Meego.) If you, for any reason, have fixed, static paths in your tests, you're going to break cross-compile support because it won't honor your sysroot specification and it'll pull stuff from out of your host system. If you break cross-compile support, it renders your code unusable for things like OpenEmbedded and makes it "fun" for distributions trying to build their releases on a cross-compiler instead of on target.
Does a HUGE amount of testing for problems with ancient, broken compilers that NOBODY currently uses with pretty much anything production in this day and age. Unless you're building something like glibc, libstdc++, or GCC on a truly ancient version of Solaris, AIX, or the like, the tests are a waste of time and are a source for many, many potential breakages of things like mentioned above.
It is pretty much a painful experience to get an Autotools setup to build usable code for a Windows system. (While I've little use for Windows, it is a serious concern if you're developing purportedly cross-platform code.)
When it breaks, you're going to spend HOURS chasing your tail trying to sort out the things that whomever wrote the scripting got wrong to sort out your build (In fact, this is what I'm trying to do (or, rather, rip out Autotools completely- I doubt there's enough time in the rest of this month to sort the mess out...) for work right now as I'm typing this. Apache Thrift has one of those BROKEN build systems that won't cross-compile.)
The "normal" users are actually NOT going to just do "./configure; make"- for many things, they're going to be pulling a package provided by someone, like out of a PPA, or their distribution vendor. "Normal" users aren't devs and aren't grabbing tarballs in many cases. That's snobbery on everyone's part for presuming that is going to be the case there. The typical users for tarballs are devs doing things, so they're going to get slammed with the brokenness if it's there.
It works...most of the time...is all you can say about Autotools. It's a system that solves several problems that only really concern the GNU project...for their base, core toolchain code. (Edit (05/24/2014): It should be noted that this type of concern is a potentially BAD thing to be worrying about- Heartbleed partially stemmed from this thinking, and with correct, modern systems, you really don't have any business dealing with much of what Autotools corrects for. GNU probably needs to do a cruft removal of the codebase, in light of what happened with Heartbleed.) You can use it for your project and it might work nicely for a smallish project that you don't expect to work anywhere except Linux, or where the GNU toolchain is clearly working correctly. The statement that it "integrates nicely with Linux" is quite a bold statement and quite incorrect. It integrates with the GNU toolsuite reasonably well and solves problems that IT has with its goals.
This is not to say that there's no problems with the other options discussed in the thread here.
SCons is more of a replacement for Make/GMake/etc. and looks pretty nice, all things considered. However...
It is still really more of a POSIX only tool. You could probably more easily get MinGW to build Windows stuff with this than with Autotools, but it's still really more geared to doing POSIX stuff and you'd need to install Python and SCons to use it.
It has issues doing cross-compilation unless you're using something like Scratchbox2.
Admittedly slower and less stable than CMake from their own comparison. They come up with half-hearted (the POSIX side needs make/gmake to build...) negatives for CMake compared to SCons. (As an aside, if you're needing THAT much extensibility over other solutions, you should be asking yourself whether your project's too complicated...)
The examples given for CMake in this thread are a bit bogus.
However...
You will need to learn a new language.
There's counter-intuitive things if you're used to Make, SCons, or Autotools.
You'll need to install CMake on the system you're building for.
You'll need a solid C++ compiler if you don't have pre-built binaries for it.
In truth, your goals should dictate what you choose here.
Do you need to deal with a LOT of broken toolchains to produce a valid working binary? If yes, you may want to consider Autotools, being aware of the drawbacks I mentioned above. CMake can cope with a lot of this, but it worries less with it than Autotools does. SCons can be extended to worry about it, but it's not an out-of-box answer there.
Do you have a need to worry about Windows targets? If so, Autotools should be quite literally out of the running. If so, SCons may/may not be a good choice. If so, CMake's a solid choice.
Do you have a need to worry about cross-compilation (Universal apps/libraries, things like Google Protobufs, Apache Thrift, etc. SHOULD care about this...)? If so, Autotools might work for you so long as you don't need to worry about Windows, but you're going to spend lots of time maintaining your configuration system as things change on you. SCons is almost a no-go right at the moment unless you're using Scratchbox2- it really doesn't have a handle on cross-compilation and you're going to need to use that extensibility and maintain it much in the same manner as you will with Automake. If so, you may want to consider CMake since it supports cross-compilation without as many of the worries about leaking out of the sandbox and will work with/without something like Scratchbox2 and integrates nicely with things like OpenEmbedded.
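For what it's worth, CMake's cross-compilation support revolves around a toolchain file; a minimal, illustrative sketch (the compiler names and sysroot path are placeholders) might be:
# toolchain-arm.cmake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(CMAKE_FIND_ROOT_PATH /path/to/sysroot)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)   # run build tools from the host
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)    # but only find libraries in the sysroot
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)    # and only find headers in the sysroot
# used as: cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-arm.cmake /path/to/src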
There is a reason many, many projects are ditching qmake, Autotools, etc. and moving over to CMake. So far, I can cleanly expect a CMake based project to either drop into a cross-compile situation or onto a VisualStudio setup or only need a small amount of clean up because the project didn't account for Windows-only or OSX-only parts to the codebase. I can't really expect that out of an SCons based project- and I fully expect 1/3rd or more Autotools projects to have gotten SOMETHING wrong that precludes it building right on any context except the host building one or a Scratchbox2 one.
An important distinction must be made between who uses the tools. Cmake is a tool that must be used by the user when building the software. The autotools are used to generate a distribution tarball that can be used to build the software using only the standard tools available on any SuS compliant system. In other words, if you are installing software from a tarball that was built using the autotools, you are not using the autotools. On the other hand, if you are installing software that uses Cmake, then you are using Cmake and must have it installed to build the software.
The great majority of users do not need to have the autotools installed on their box. Historically, much confusion has been caused because many developers distribute malformed tarballs that force the user to run autoconf to regenerate the configure script, and this is a packaging error. More confusion has been caused by the fact that most major linux distributions install multiple versions of the autotools, when they should not be installing any of them by default. Even more confusion is caused by developers attempting to use a version control system (eg cvs, git, svn) to distribute their software rather than building tarballs.
It's not about GNU coding standards.
The current benefit of autotools — specifically when used with automake — is that they integrate very well with building Linux distributions.
With cmake for example, it's always "was it -DCMAKE_CFLAGS or -DCMAKE_C_FLAGS that I need?" No, it's neither, it's "-DCMAKE_C_FLAGS_RELEASE". Or -DCMAKE_C_FLAGS_DEBUG. It's confusing - in autoconf, it's just ./configure CFLAGS="-O0 -ggdb3" and you have it.
In integration with build infrastructures, scons has the problem that you cannot use make %{?_smp_mflags}, _smp_mflags in this case being an RPM macro that roughly expands to the system's parallelism (the admin may set it). People put things like -jNCPUS here through their environment. With scons that doesn't work, so packages using scons may only get built serially in distros.
What is important to know about the Autotools is that they are not a general build system - they implement the GNU coding standards and nothing else. If you want to make a package that follows all the GNU standards, then Autotools are an excellent tool for the job. If you don't, then you should use Scons or CMake. (For example, see this question.) This common misunderstanding is where most of the frustration with Autotools comes from.
While from a developer's point of view cmake is currently the easiest to use, from a user's perspective autotools have one big advantage:
autotools generate a single-file configure script, and all the files needed to generate it are shipped with the distribution. It is easy to understand and fix with the help of grep/sed/awk/vi. Compare this to CMake, where a lot of files are found in /usr/share/cmake*/Modules, which the user can't fix without admin access.
So, if something does not quite work, it can usually easily be "fixed" by using Standard Unix tools (grep/sed/awk/vi etc.) in a sledgehammer way without having to understand the buildsystem.
Have you ever dug through your cmake build directory to find out what is wrong? Compared to the simple shell script, which can be read from top to bottom, following the generated CMake files to find out what is going on is quite difficult. Also, with CMake, adapting the FindFoo.cmake files requires not only knowledge of the CMake language, but might also require superuser privileges.