Suppose I have a system with multiple C/C++ compilers: various versions of GCC, Clang, and ICC. Also suppose I have a CMake C/C++ project which has certain requirements and certain preferences regarding which compiler to use; and to complicate things, suppose these requirements and preferences are generated dynamically based on the combination of project options I've set (with ccmake or otherwise).
Now, other answers about using a compiler other than the default suggest setting the CC or CXX environment variables - but this is clearly inappropriate here.
Is there a way to get CMake to:
Detect the available compilers.
Choose the one it likes based on some rules/ranking mechanism?
Notes:
I'm using CMake 3.0. You may assume a newer CMake version, but please make that explicit.
The choice of C or C++ in this question is motivated by my own needs, but it could of course be some other language, if the solution is adaptable.
Historically, and probably also technically, the C compiler is fundamental to a CMake run. Many commands rely on having a working compiler, such as detecting symbols or trying to compile a piece of code.
As far as I know, there is no way to test multiple compilers and choose one. To get this behavior, you have to
either wrap the CMake calls and have some logic outside which passes the different compilers to the CMake calls (a sketch of this follows below),
or re-write a bunch of CMake functions yourself.
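A hedged sketch of the first option: a small wrapper, itself written in CMake's script mode, that probes a ranked list of compilers and hands the winner to the real configure step. The compiler names, the ranking, and the build directory are all illustrative assumptions, and the -S/-B options require CMake >= 3.13 (making the newer version explicit, per the note above):

# choose_compiler.cmake - run with:  cmake -P choose_compiler.cmake
set(candidates g++-12 clang++-15 icpx g++ clang++)   # ranked by preference
unset(chosen)
foreach(name IN LISTS candidates)
  find_program(path_${name} ${name})                 # detect available compilers
  if(path_${name})
    set(chosen "${path_${name}}")
    break()
  endif()
endforeach()
if(NOT chosen)
  message(FATAL_ERROR "No acceptable C++ compiler found")
endif()
message(STATUS "Configuring with ${chosen}")
execute_process(
  COMMAND ${CMAKE_COMMAND} -S ${CMAKE_CURRENT_LIST_DIR} -B build
          -DCMAKE_CXX_COMPILER=${chosen}
  RESULT_VARIABLE result)
if(NOT result EQUAL 0)
  message(FATAL_ERROR "Configuration failed")
endif()

The ranking logic can be arbitrarily elaborate (version checks on --version output, for instance); the point is that it lives outside the project's own CMakeLists.txt.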
My advice: Accept the way CMake works and teach it to your users.
Related
I've been searching around for different custom preprocessor extensions and replacements, but all of them seem to come with one of two caveats:
Either 1) you generate the code as a separate build system, then manually put the output into your real (CMake) build system, or 2) you end up losing the built-in preprocessor for GCC.
Is there really no tool that can, say, run each file it gets against some configured script, then through cpp, then pass the result to gcc?
I'd love to use something like Cog by just setting an environment variable for gcc, indicating a tool that runs Cog first and then the standard preprocessor.
Alternatively, is there a straightforward way to accomplish that in CMake, itself? I don't want to have to write a custom script for each file, especially if I have to then hard-code the compiler/preprocessor flags in each target.
edit: For clarity, I am aware of several partial/partially-applicable solutions - for example, how to tell GCC to use a different preprocessor (or really, to look in a different place for its own preprocessor, cc1; see: Custom gcc preprocessor). However, that still leaves a lot of work: modifying the files and then correctly invoking the real cc1 with the correct original arguments.
Since that is effectively a constant/generic problem, I'm just surprised there is no drop-in program.
Edit 2: After looking over several proposed solutions, I am not convinced there is an answer to this question. For example, if files are going to be generated by CMake, then they can't be included and browsed by the IDE, because they don't exist yet.
As ridiculous as it sounds, I don't think there is any way to extend the preprocessor short of forking GCC. Everything recommended so far constitutes incomplete hacks.
GCC's C++ compiler is made for compiling C++ programs. As the C++ preprocessor is standardized within the C++ standard, there is usually no need for anything like a "plugin" or "extension" mechanism there.
Don't listen to the comments that suggest using some exotic extension to CMake or changing the source code of GCC. Running source files through a different program (Cog, in your case) before compiling is a well-known task, and all major build systems support it right away.
In CMake, you can use the add_custom_command function. If you need this for more than one file, you can use a CMake loop, as in the sketch below.
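A hedged sketch of that approach, assuming a cog executable on the PATH and illustrative file names; the Cog output, not the original source, is what gets handed to the compiler (and thus to cpp/gcc as usual):

set(sources main.cpp util.cpp)                 # illustrative list
set(generated_sources)
foreach(src IN LISTS sources)
  set(out "${CMAKE_CURRENT_BINARY_DIR}/${src}")
  add_custom_command(
    OUTPUT  "${out}"
    COMMAND cog -d -o "${out}" "${CMAKE_CURRENT_SOURCE_DIR}/${src}"
    DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/${src}"
    COMMENT "Running Cog on ${src}")
  list(APPEND generated_sources "${out}")
endforeach()
add_executable(myapp ${generated_sources})     # compiled with your normal flags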
I want to use CMake for a project that is very big and written in C++. But this project uses neither a "modern compiler" nor a popular one, and it targets a unique OS that, for practical purposes, let's say I made, so I have access to almost all of its source code (but I can't modify it; I only have modify access to my project). Let's call this OS "OSreally".
My project can be compiled on Windows and can run on Windows and OSreally.
What do I need to do to adopt CMake for this project, so I can get all of the CMake "benefits"?
In a sense, your question isn't terribly hard to answer. CMake is extremely extensible, and almost everything can be changed. For example, to use your custom compiler, you could set(CMAKE_CXX_COMPILER /path/to/compiler) in your top-level build file. However, even though a few more hacks like this will get you on your way to compiling code for your strange OS, there may also be issues with various system functions and the standard library.
If no one has implemented the standard library on your OS, you may need to spend some time building it and possibly modifying it to fit with whatever is different on the system. The same goes for basically any library you would want to import. Regardless, if you have a conforming C++ compiler for the system, the proper path would be to simply set the above variable in your CMakeLists.txt and fix any problems that arise as you encounter them.
The steps needed to use CMake with a custom compiler and a custom OS boil down to creating a new CMake toolchain file; we can treat the custom OS as an embedded system.
After configuring all the options in the toolchain file, CMake should work with the custom compiler/OS.
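A hedged sketch of such a toolchain file; every path, name, and processor below is an illustrative assumption:

# toolchain-osreally.cmake - pass with:
#   cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-osreally.cmake ..
set(CMAKE_SYSTEM_NAME Generic)                 # no stock platform module for OSreally
set(CMAKE_SYSTEM_PROCESSOR x86)
set(CMAKE_C_COMPILER   /opt/osreally/bin/occ)  # hypothetical compiler paths
set(CMAKE_CXX_COMPILER /opt/osreally/bin/oc++)
# Search for headers/libraries in the target sysroot only, never on the host:
set(CMAKE_FIND_ROOT_PATH /opt/osreally/sysroot)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)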
I code in C/C++ and use a (GNU) Makefile to compile the code. I can do the same with CMake and get a Makefile. However, what is the difference between using a Makefile and CMake to compile the code?
Make (or rather a Makefile) is a buildsystem - it drives the compiler and other build tools to build your code.
CMake is a generator of buildsystems. It can produce Makefiles, it can produce Ninja build files, it can produce KDEvelop or Xcode projects, it can produce Visual Studio solutions. From the same starting point, the same CMakeLists.txt file. So if you have a platform-independent project, CMake is a way to make it buildsystem-independent as well.
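For instance, a minimal, hypothetical CMakeLists.txt like this can be handed to any of those generators unchanged:

cmake_minimum_required(VERSION 3.0)
project(hello CXX)
add_executable(hello main.cpp)

Running cmake -G Ninja or cmake -G "Unix Makefiles" against it then produces the corresponding native buildsystem.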
If you have Windows developers used to Visual Studio and Unix developers who swear by GNU Make, CMake is (one of) the way(s) to go.
I would always recommend using CMake (or another buildsystem generator, but CMake is my personal preference) if you intend your project to be multi-platform or widely usable. CMake itself also provides some nice features like dependency detection, library interface management, or integration with CTest, CDash and CPack.
Using a buildsystem generator makes your project more future-proof. Even if you're GNU-Make-only now, what if you later decide to expand to other platforms (be it Windows or something embedded), or just want to use an IDE?
The statement about CMake being a "build generator" is a common misconception.
It's not technically wrong; it just describes HOW it works, but not WHAT it does.
In the context of the question, they do the same thing: take a bunch of C/C++ files and turn them into a binary.
So, what is the real difference?
CMake is much more high-level. It's tailored to compiling C++, for which you write much less build code, but it can also be used for general-purpose builds. make has some built-in C/C++ rules as well, but they are useless at best.
CMake does a two-step build: it generates a low-level build script for Ninja or Make or one of many other generators, and then you run it. All the shell-script pieces that are normally piled into a Makefile are executed only at the generation stage. Thus, a CMake build can be orders of magnitude faster.
CMake's grammar is much easier for external tools to support than make's.
Once make builds an artifact, it forgets how it was built. What sources was it built from? What compiler flags? CMake tracks this; make leaves it up to you. If one of the library's sources was removed since the previous version of the Makefile, make won't rebuild it.
Modern CMake (starting with version 3.something) works in terms of dependencies between "targets". A target is still a single output file, but it can have transitive ("public"/"interface" in CMake terms) dependencies.
These transitive dependencies can be exposed to or hidden from the dependent packages. CMake will manage directories for you. With make, you're stuck on a file-by-file and manage-directories-by-hand level.
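A small sketch of that target-based model (all names are illustrative):

add_library(json_lib STATIC json.cpp)
# PUBLIC: propagated to anything linking json_lib; PRIVATE: hidden from it.
target_include_directories(json_lib PUBLIC  ${CMAKE_CURRENT_SOURCE_DIR}/include)
target_compile_definitions(json_lib PRIVATE JSON_INTERNAL=1)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE json_lib)    # app inherits json_lib's PUBLIC include dir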
You could code up something in make using intermediate files to cover the last two gaps, but you're on your own. make does contain a Turing-complete language (even two, sometimes three counting Guile); the first two are horrible, and Guile is practically never used.
To be honest, this is what CMake and make have in common -- their languages are pretty horrible. Here's what comes to mind:
They have no user-defined types;
CMake has three data types: string, list, and a target with properties. make has one: string;
you normally pass arguments to functions by setting global variables.
This is partially dealt with in modern CMake - you can set a target's properties: set_property(TARGET helloworld APPEND PROPERTY INCLUDE_DIRECTORIES "${CMAKE_CURRENT_SOURCE_DIR}");
references to undefined variables are silently expanded to nothing by default;
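For instance, a typo in a variable name goes completely unnoticed (the --warn-uninitialized command-line option catches some of these):

set(SRC_DIR "src")
message(STATUS "Sources live in: ${SRC_DIRR}")   # typo: expands to nothing, no error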
As mentioned in the other answers, CMake can generate other project files; it calls these back-ends generators.
This lets users write/describe their build using a domain specific language, and use the generator to compile the project. It often results in simpler/better code than writing to these project files directly.
A big advantage is that users can use the tool they are most comfortable with (Makefiles, Visual Studio, Xcode, Ninja, etc.). This is nice, but arguably introduces complexity. Why not just use Ninja?
The answer is history. (As is the norm in C/C++)
Build systems like Visual Studio have tools that will only accept those project files.
For example, Microsoft has a feature called "Static Driver Verifier", a tool to analyze the code of kernel-mode Windows drivers. However, this tool only works on Visual Studio projects, since it works alongside msbuild:
msbuild /t:sdv /p:Inputs="Parameters" ProjectFile /p:Configuration=configuration /p:Platform=platform
If your build system can't generate Visual Studio project files, then you can't use the tool. This can be a very big deal for some projects/companies.
I want to conditionally compile some C++ code that uses Boost, and make it so it doesn't try to compile the Boost-dependent code if Boost is not present.
Does Boost have any global macro that will be defined, like __BOOST__, that I can check for?
EDIT: It's clear to me now that I have to achieve this at the makefile level. I am working on OS X Lion, using GNU Make.
The TYPICAL way this is done is to use a "configuration script" or similar that detects whether the required/optional component(s) is/are present, and then selectively sets some -D options for the build system.
Obviously, if it's just your own project or a small distribution, you could do the same thing manually.
You probably also need a couple of ifdef-style choices in the Makefile if there are library files that you need.
One of the easier ways to determine whether a part of Boost that you need is installed is to try to compile against it. If there are errors, the likely cause is that that part of Boost isn't present. (This obviously doesn't work if more fundamental pieces are missing - for example, not having a compiler or standard library installed will ALSO cause a compile to fail. This is why nearly all configure-type tools "start with the most basic features and work their way up the tree of dependencies".)
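Expressed in CMake (in keeping with the rest of this page), such a compile probe might look like the following hedged sketch; the probed header and the result macro are illustrative assumptions:

include(CheckCXXSourceCompiles)
find_package(Boost QUIET)
set(CMAKE_REQUIRED_INCLUDES ${Boost_INCLUDE_DIRS})
check_cxx_source_compiles("
  #include <boost/iostreams/filtering_stream.hpp>
  int main() { return 0; }
" HAVE_BOOST_IOSTREAMS)
if(HAVE_BOOST_IOSTREAMS)
  add_definitions(-DHAVE_BOOST_IOSTREAMS=1)    # old-style, fine on CMake 3.0
endif()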
We have a C++ template library that has some features that depend on zlib, for example.
We selectively enable and disable features using preprocessor symbols, i.e. setting -DHAVE_ZLIB=1 on the command line.
Our CMake-based build system recognizes an installed zlib and adds the corresponding flag to the compiler.
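A minimal sketch of that detection step (the target name mylib is an illustrative assumption; the ZLIB::ZLIB imported target needs CMake >= 3.1):

find_package(ZLIB)
add_library(mylib INTERFACE)                   # header-only template library
if(ZLIB_FOUND)
  target_compile_definitions(mylib INTERFACE HAVE_ZLIB=1)
  target_link_libraries(mylib INTERFACE ZLIB::ZLIB)
else()
  message(WARNING "zlib not found - zlib-dependent features are disabled")
endif()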
Of course, this can also be done manually by users, using their favourite IDE or their Makefiles.
One property of the library is that the code that uses zlib is interleaved with the code not using zlib, i.e. using #include <library/header.h> should work regardless of zlib being present or not.
Currently, we #if out code that depends on zlib.
Thus, if the user tries to use something like CompressedStream, for example, the class is simply not found.
This is quite frustrating for users.
The build system warns them that zlib could not be found, but users being users either do not see this or forget it quickly.
I myself have fallen into this trap, too.
Now to my question:
What is the best way to warn users that zlib support is disabled if they try to use code requiring zlib?
The only thing I can think of is using the deprecation marker mechanisms implemented in many compilers.
Although different syntax is required for each of them, this could easily be abstracted away using preprocessor macros.
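A hedged sketch of that abstraction; MYLIB_DEPRECATED, HAVE_ZLIB, and CompressedStream stand in for your real names. Note that GCC only accepts a message argument from 4.5 on, which the version check below accounts for (covering the GCC > 4.2 requirement):

#if defined(_MSC_VER)                          // VS 2005 and later accept a message
#  define MYLIB_DEPRECATED(msg) __declspec(deprecated(msg))
#elif defined(__clang__) || \
      (defined(__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__) >= 405)
#  define MYLIB_DEPRECATED(msg) __attribute__((deprecated(msg)))
#elif defined(__GNUC__)                        // GCC < 4.5: no message argument
#  define MYLIB_DEPRECATED(msg) __attribute__((deprecated))
#else
#  define MYLIB_DEPRECATED(msg)
#endif

#if !defined(HAVE_ZLIB)
// Keep a stub visible so any use triggers a pointed warning instead of
// a bare "CompressedStream was not declared" error.
class MYLIB_DEPRECATED("CompressedStream requires zlib; rebuild with zlib available")
    CompressedStream {};
#endif

Any use of CompressedStream then produces a compile-time warning carrying the message, which is much harder to miss than a configure-time note.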
Is there any other good way?
The solution only has to work in VS >8, GCC >4.2 and LLVM.
The proper place to warn users about such things is (IMO) the build system. Take a look at Ogre3D, KDE, and many other projects: all of them print a sort of summary after the build is configured, stating what was found, what was not, and what the consequences are.
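A hedged sketch of such a summary block at the end of a top-level CMakeLists.txt (the variable names assume an earlier find_package(ZLIB)):

message(STATUS "========= Configuration summary =========")
if(ZLIB_FOUND)
  message(STATUS "  zlib: found - compressed streams enabled")
else()
  message(STATUS "  zlib: NOT found - compressed streams DISABLED")
endif()
message(STATUS "=========================================")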
Even Qt doesn't do anything to fix this. There is an option to build Qt with STL support, and if it's not built that way, there are no warnings whatsoever, only compile errors about undefined methods. So I think there is no way to warn the user about such things during the compile phase.