What is CMake? What is its role in the build process? [closed] - c++

I really don't understand what "CMake" is. I know the build process, which consists of:
Preprocessor
Compiler
Assembler
Linker
But where is CMake in this process? What is its role, and why do we need it?
I read that it helps with compiler dependencies, but I don't get it.
Please help me learn more about it. Thanks in advance.

CMake is a build-system configuration generator. To understand what that is, you must first learn what a build system is (such as make or msbuild).
It's challenging to combine the tools that you've listed for a large project to produce multiple variations of multiple executables and/or libraries. The complete build process may consist of a large number of differing commands that must be invoked in the correct order, each of which may take a large number of options. This is what a build system automates: it invokes those build tools with the appropriate options, as described in the configuration that you write for it. Build systems also provide convenient features such as incremental re-compilation (checking whether a translation unit has already been compiled, and skipping re-compilation when the source hasn't changed).
But there is a problem: there are many build systems, each of which has its own configuration format, and those build systems are often specific to a particular platform. In order to compile a project on multiple differing systems (such as Windows, Linux, macOS), you would have to write and maintain a configuration for each. This is what a build-system generator solves: you write a single configuration, CMake generates the configuration for your system-specific build system, and you invoke that build system through CMake.
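As a minimal sketch (the project and file names are hypothetical), a single CMakeLists.txt like this can drive either a Visual Studio solution on Windows or a Makefile on Linux:

# CMakeLists.txt -- one description, many generators
cmake_minimum_required(VERSION 3.15)
project(hello CXX)

# One executable from one translation unit; CMake emits the compiler
# and linker invocations for whichever generator you choose.
add_executable(hello main.cpp)

Running cmake -G "Visual Studio 17 2022" or cmake -G "Unix Makefiles" against the same file produces the platform-specific build configuration, which you then drive through cmake --build .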

CMake is a cross-platform build tool. The idea is to let you define how to build a C++ project at a higher level of abstraction than the tooling of any one particular platform.
The issue is that although C++, the language, is independent of the platform, how you build on any particular platform is not. Say you are implementing an application and would like to support Windows, Linux, and macOS. If you do not use CMake or something like it, this means that you have to separately maintain a Visual Studio solution file on Windows, a Makefile on Linux (say), and an Xcode project file on the Mac -- and this assumes that your application has no third-party dependencies. If it does have third-party dependencies, then handling those in a cross-platform manner is a whole other can of worms. You might use per-platform scripts to manage dependencies, for example.
The idea of CMake and similar tools is to solve this problem. You define how to build your project and find its dependencies once and use the CMake definition across platforms.
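As a hedged sketch of that single definition (ZLIB is chosen purely for illustration), declaring a third-party dependency once instead of once per platform looks like this:

cmake_minimum_required(VERSION 3.15)
project(app CXX)

# Locate the dependency wherever the current platform keeps it; the
# imported target carries the right include paths and link flags.
find_package(ZLIB REQUIRED)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE ZLIB::ZLIB)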

Like many build tools, it creates output objects based on input artifacts (e.g. by running a compiler).
It only rebuilds an output object if...
one of the inputs has been modified since the output object was last built, or
the build recipe for the output object has changed.
This can also be done with a classical Makefile, but writing a Makefile that lists every dependency of an output object (including header files, for instance) is not trivial.
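As a brief sketch (the file names are hypothetical), CMake derives those header dependencies for you:

# parser.cpp includes parser.h; the build rules that CMake generates
# track this automatically, so touching parser.h rebuilds parser.cpp's
# object file without the dependency ever being spelled out by hand.
add_library(parser parser.cpp)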

Related

Handling external C++ dependencies [closed]

The project structure we used to have was that code plus prebuilt external dependencies were source-controlled in SVN. This was cumbersome because the external libraries were large and didn't need to be source-controlled, since they were prebuilt binaries.
Now we have the source in git and the prebuilt binaries in the cloud. The dev has to download this lib folder from the cloud after cloning the repo. The problem here is that if you make changes to lib, things will not build correctly until the developer goes and re-downloads this lib folder.
Our projects are generally developed for Windows (MSVC), but we just added Linux (GCC + Docker) compatibility, and in the future Linux will likely be the main version. So now our libraries each have a Windows and a Linux build folder. Our dev environments are Windows, plus VS Code/WSL2/Docker for Linux.
What is the best, common practice here for handling external dependencies? I can think of two ways.
Version the lib folder in the cloud and check that version during building (see the sketch after this list). If Developer A adds/changes the libs and updates the CMakeLists file, then when Developer B updates his git repo and tries to build, CMake can see that the version of the libs folder he has is out of date, and he will be told to go update it. This is little effort and changes almost nothing about our process. The con is that Developer A has to remember to update the version both in the cloud and in the CMake check.
Build all external libraries locally. Use git submodules and have CMake build all dependencies while building the main project. I assume there's a way to cache the results so they don't constantly rebuild (some of these libraries are large and take a long time to build). More work, but less maintenance and fewer extra developer steps. Also probably easier to link and include against.
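A minimal sketch of the option-1 check, assuming a version.txt stamp inside the downloaded libs folder and a required-version variable kept in the CMakeLists (both names are hypothetical):

set(LIBS_REQUIRED_VERSION "1.4.0")  # bumped by whoever uploads new libs

# Read the version stamp shipped inside the downloaded libs folder.
file(READ "${CMAKE_SOURCE_DIR}/libs/version.txt" LIBS_LOCAL_VERSION)
string(STRIP "${LIBS_LOCAL_VERSION}" LIBS_LOCAL_VERSION)

if(NOT LIBS_LOCAL_VERSION VERSION_EQUAL LIBS_REQUIRED_VERSION)
  message(FATAL_ERROR "libs folder is version ${LIBS_LOCAL_VERSION}, "
          "but ${LIBS_REQUIRED_VERSION} is required; please re-download it.")
endif()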
The problem here is that if you make changes to lib, things will not build correctly until the developer goes and re-downloads this lib folder.
This is a clear indication that the exact version you want to use is part of what you should track alongside your source code.
I assume there's a way to cache the results so they don't constantly rebuild (some of these libraries are large and take a long time to build)
As long as files don't change, nothing needs to be rebuilt.
So, yeah, if your project depends on specific versions of external libraries, git submodules do sound kind of attractive.
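A sketch of that route, assuming the dependency is itself a CMake project vendored as a submodule at extern/somelib (the path and target name are hypothetical):

# .gitmodules pins an exact commit of the dependency; cloning with
# `git clone --recurse-submodules` fetches exactly that revision.

# Build the dependency as part of this project; its targets become
# visible here, and unchanged files are not recompiled on later runs.
add_subdirectory(extern/somelib)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE somelib)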
Also note that other build systems (like Meson) have a neater understanding of in-tree dependency projects: they can check your system for an installed version of the dependency and, if it isn't there in an appropriate version, download it and, if necessary, build it themselves.
So, the second option is probably the easiest to maintain solution, as you said.
I, however, come from a free and open-source background, my users and their platforms are diverse, and Linux distros have strict guidelines about not packaging N copies of the same dependency. That would make it harder to upstream such packages to Debian, Ubuntu, Fedora, Arch… . So, for me the situation is this: if there's a library that we want to use in a project, we define very clearly the oldest version of that library that would work. Within a release cycle, we cannot bump the required version.
So, say we've released 2.0.0 of some software. The CMake files define which version of a library we support. "Releasing" software means that we guarantee to devs as well as to users that the next bugfix/feature-extension versions in our 2.a.b series still build on the same systems – and that includes the same libraries that might be installed there. So, if 2.0.0 built on your computer, so will 2.0.1 and 2.9.0. Development that requires a new version of an external dependency can only happen on a git branch that's not meant for further 2.a.b releases but is targeting an eventual 3.0.0.
When picking minimum dependency versions for that 3.0.0 release, we look at what is commonly available on the operating systems we support. For example, if my timeline was that 3.0.0 be released within 2022 or 2023, that version would be the one available in Ubuntu 22.04 LTS (because that will be an important system for our users for a long time, and it is also relatively conservative), while also looking at the Debian version most likely to be the current unstable (or stable, depending on your target audience), the RHEL version, the next Fedora, and what is currently available in our conda-forge and MacPorts repos.
Everything not available in tolerable versions through these standard packaging channels needs to be built locally anyway. It turns out that if you're not crazily progressive and don't try to support 5-year-old Linuxes, the number of projects that you need to build locally is quite small.
On Windows, you're basically handicapped by Microsoft's failure to provide a really sensible way of downloading packages of binary shared libraries that are actually shared between different application software. So, on Windows systems, you're down to either doing all your builds locally or using a third-party way of distributing platform-dependent packages, like Conan.
No matter how you do it, you'd want your build to fail as early as possible, with a clear indication that the library version found is not sufficiently new. CMake makes this easy: its find_package command takes a minimum version as an argument.
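For example (the library and version are purely illustrative):

# Fail at configure time if no Boost >= 1.74 can be found.
find_package(Boost 1.74 REQUIRED)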

How to organize a large project in several Git repositories? [closed]

I have a C++ project cat that depends on libzzz library. The libzzz has its own git repository and now I am going to create a repository for the cat project.
How should I organize the CMake build scripts for cat?
option 1: cat's CMake scripts consider libzzz to be built and installed in the system, and the cat project provides a FindLibZZZ.cmake script that searches for libzzz in /usr/include/libzzz + /usr/lib/libzzz. But how do I deal with non-Linux platforms? I personally don't like this option.
option 2: add some kind of link or dependency to the cat git repository that would automatically check out the libzzz sources from their origin into some cat subdirectory. So cat's CMakeLists.txt considers libzzz to be placed in some subdirectory of cat. How do I do that?
If libzzz is also a CMake project, then I'd suggest an alternative approach (let's call it option 3). Rather than relying on libzzz being already available on your system (which can make integrating into CI systems, etc. harder), you may want to consider building it as part of your overall project as well. Instead of trying to add it to cat via git submodules (valid, but has its own downsides, e.g. see here), you can make a top-level build (aka superbuild) control the building of both libzzz and cat.
Bringing in external projects into a superbuild is often done using CMake's ExternalProject module. This allows you to download and build a project in one operation. Unfortunately, it does so at build time and provides no CMake targets for you to link against, so you end up having to manually work out the name and location of the library you want to link against from it, etc. Yes, you can robustly predict these for your build, but you have to manually handle all the platform differences. This is what CMake is supposed to take care of for us!
Instead of using ExternalProject in the normal way, you can, however, coax it to perform the download at CMake time. This then makes the external project's source code available immediately and you can call add_subdirectory to bring it directly into the main build. This in turn means any CMake targets defined by that no-longer-external build will also be visible, so you can simply link against those targets. In your particular example, the libzzz build would presumably define a zzz CMake target or similar and in your cat build, you would just add a call like:
target_link_libraries(cat PUBLIC zzz)
No need to manually work out any library names or locations, since CMake now does it for you. The trick to all this is getting ExternalProject to execute at CMake time rather than build time. The method is fully explained here, using GoogleTest as the example. The technique essentially involves using execute_process to invoke a small script via CMake's script mode, which calls ExternalProject_Add immediately. That article includes a link to a fully generalised implementation you should be able to use to perform the downloading of libzzz in your particular case.
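Newer CMake versions (3.14+) package this download-at-configure-time technique in the built-in FetchContent module; a hedged sketch for libzzz (the repository URL and tag are invented for illustration):

include(FetchContent)

# Download libzzz at configure time and bring its targets into the build.
FetchContent_Declare(libzzz
  GIT_REPOSITORY https://example.com/libzzz.git  # hypothetical URL
  GIT_TAG        v1.2.3                          # hypothetical tag
)
FetchContent_MakeAvailable(libzzz)  # calls add_subdirectory internally

add_executable(cat main.cpp)
target_link_libraries(cat PUBLIC zzz)  # target defined by libzzz's build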
I believe that Option 1 is best if you ever plan to distribute the project to others (which it sounds like you do, if you are considering non-Linux platforms). If you don't plan to distribute cat to the wider world, then by all means go with Option 2, which is likely to be slightly easier to implement. However, suppose I want to use cat, and you go with Option 2. In that case, there could be several problems:
If I already have libzzz because it’s also used by the dog project, then you are downloading and building something I already have. This is, at the very least, a waste of time.
If libzzz releases a critical security update, then I need to re-download your project also. If I’m lucky, you updated the cat repository to use the new version of libzzz. If I’m unlucky, then you have not noticed the new security update yet, you are in the middle of maintaining a different part of the repository, or you are dead and cat is completely unmaintained. If I’m really unlucky, I don’t notice that cat now needs to update because libzzz released a new version, and I am subject to the security problems.
Your cat project becomes popular. Now package maintainers everywhere need to find out how to compile cat with the existing libzzz from their repository.
I know I’ve seen an article from somebody who actually was a package maintainer arguing for Option 1, but I can’t seem to find it again. If I do, I’ll update this answer with the link.

C++ Development Flow with 3rd Party Dependency [closed]

I'm a Python developer with some background in other languages, such as Ruby.
In both languages, dependencies are managed automatically by a package manager, such as pip or gem. Anyone can install the dependencies by calling pip install -r requirements.txt, and it will install them via the Python Package Index. Although there is an option to build a dependency manually from source and install it into the project, it is not the recommended process, and I have not done it.
I notice that C++ unfortunately has a different nature in how dependencies are resolved, for various reasons (e.g. different compiler flavors, compiler parameters, platforms, etc.).
At the moment, I am learning C++ using VS2015, and I have stumbled again and again over these library-dependency matters. With VS2015 there is a dependency package manager like Python's, namely NuGet. However, not every library is available in NuGet; in fact, there are a lot of libraries developed independently of the IDE.
First, I'm trying to use Boost. There is a manual on how to build the project, but I'm not sure what I need. Do I need to build it from source? Or perhaps I just need a library that is readily available?
The same goes for other libraries that I found (e.g. Qt, yaml-cpp, googletest, etc.). They only document how to build them, not how to install them as dependencies.
And ultimately, I will need to use lots of 3rd-party libraries to be more productive. So here are some of my questions, which are all related.
How do C++ developers normally include a 3rd-party library into their project (what is the flow for installing a 3rd-party library)?
Do I have to build from source every time I want to include one? Or perhaps you just need the header files, which you can copy into your project directory?
I'm working in a team (git); does each member of my team need to build the dependency manually? Can it be automated so that the process of including a new library is transparent for everyone?
Or perhaps I don't really understand what specific question I need to ask. But why is it so painful to reuse a library in C++?
Am I missing some fundamental understanding of the C++ environment?
I'm not sure how relevant it is, but CMake is a build tool that most libraries use to build their projects. Do I really need to build these library projects?
More Questions:
After building some libraries, some of them generate a static library (.lib) or a dynamic library (.dll) to be included in the project. So is it correct to copy these generated libraries into our project? Should they be committed into source version control? Some libraries are very large, and we don't want to maintain them, yet we need the entire team to get the libraries transparently.
I understand your situation quite well. You cannot see the forest because too many trees are standing in your way...
Let me get one thing clear before I start to address your specific questions:
Generally speaking, dependencies in C++ are not more complicated than in Python.
The command pip install -r requirements.txt will establish an internet connection and download the necessary libraries and files from a repository server to fulfill the requirements. Under the Linux operating system (Ubuntu), the command sudo apt-get install libboost-all-dev installs all required dependencies for Boost. This is possible because there is a whole environment of servers that hold source code as well as libraries and binaries, working together with the client programs (apt-get) that use them. This is exactly what the authors of pip have done for Python on Microsoft Windows. Microsoft themselves have never done this at the operating-system level; they have always left that to the programmer. NuGet is Microsoft's attempt to make up for past mistakes.
Having this out of the way, let me address your questions:
It depends on the size of the 3rd-party library. Small libraries like pugixml can be included as source in the source tree of your project. Bigger libraries like Boost are better included as binary object code (library objects). Not all libraries have binaries available to download (Boost does), so you might be required to build from source. Bear in mind that all binaries must be built with exactly the same compiler that you use in your project. The general steps to include a library in your VS project:
Get the distribution files (either build from source or download and install binaries)
Add include paths to your Project:
Project > "projectname" properties > Configuration Properties > C/C++ > General > Additional Include Directories
Add paths to libraries:
Project > "projectname" properties > Configuration Properties > Linker > Input > Additional Dependencies.
No. You normally just use the header files. But it's better to add the library's path to your project instead of copying the header files, because some projects (Boost) have a huge hierarchy of header files.
It is a good idea for each member of your team to have the same development environment with the same set of libraries installed. There are tools for this task: Chocolatey builds on top of NuGet and is therefore Windows-oriented. Vagrant deals with virtual machines and thus offers cross-platform development environments.
But more important is a decent source-control-management system. If you don't already use one, start using one today! It is the main collaboration tool, and it can really save your neck if you lose a developer machine.
There is another dependency problem: we've only addressed development dependencies above. There is also the problem of deployment dependencies: your customers will need the libraries (*.dll files) that you used during development, and you will need to package them into your deployment package (installer) as well. This is another issue, which is probably already answered on SO.
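As a sketch with recent CMake (3.21 or newer; the target name is hypothetical), the runtime DLLs that an app's imported targets reference can be installed next to the executable:

# Install the app plus whatever runtime DLLs its dependencies provide.
install(TARGETS myapp RUNTIME DESTINATION bin)
install(FILES $<TARGET_RUNTIME_DLLS:myapp> DESTINATION bin)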
Qt: if you start using Qt, I'd suggest that you use its development environment, Qt Creator. It will automatically handle all dependencies; it detects the Visual Studio compiler that you have installed and uses it. The IDE is quite close to Visual Studio.
CMake: No, it is not always required to use CMake to build a library project; some projects ship Makefiles instead, and others use CMake to produce Makefiles. "Follow the instructions" is the best advice I can give here.
Update 2015-10-24: paragraph point three reworked
How do C++ developers normally include a 3rd-party library into their project (what is the flow for installing a 3rd-party library)?
It depends... There are a lot of ways to redistribute C++ libraries.
Do I have to build from source every time I want to include one? Or perhaps you just need the header files, which you can copy into your project directory?
For now, most C++ libraries consist of two parts: binaries plus header files. But there are often problems if the compiler version the library was built with differs from your compiler.
I'm working in a team; does each member of my team need to build the dependency manually? Can it be automated so that the process of including a new library is transparent for everyone?
It depends on your team's guidelines. You can choose whatever you want.
Or perhaps I don't really understand what specific question I need to ask. But why is it so painful to reuse a library in C++?
Because of some legacy of C, and because C++ is a low-level language compared with Python/Java/C#. C++ is supported on a lot of different platforms, including embedded ones, and often it is not possible to install a complex runtime on those platforms. So there is no mechanism to transparently link "modules" at runtime.
Hopefully, there will be proper support for modules in the C++17 standard. Microsoft will provide a technology preview of modules in C++ in MSVC 2015 Update 1.
Am I missing some fundamental understanding of the C++ environment?
Yes, I suggest you read about compiling and linking in C/C++. These two things often come together, but they are different.
The first thing you should keep in mind: code in C/C++ is split into two parts: declarations (.h files) and implementations (.cpp files). The .cpp files are compiled into binaries; the .h files just declare the interfaces.

Cleanest way for automated dependency management in C++ [closed]

Related question: C++ Build Systems - What to use?
I am struggling to find a nice build/dependency management tool for my C++ project, with the following desired features:
able to specify a dependency by name, version
dependencies' "include" directories are automatically included during compilation of my application
the dependencies are automatically downloaded, built, and linked to my application
transitive dependencies also have the above two behaviours
ability to specify test-scope dependencies
tests are automatically built and run, potentially with a memory leak check tool (e.g. valgrind)
potentially run a coverage tool, e.g. gcov
cross-platform support
I have used Maven with the nar-maven-plugin, and sometimes the cmake-maven plugin. However, this means I have to craft a pom.xml per dependency. This method isn't particularly nice, as at times a nasty pom.xml has to be crafted to get things to work. Also, it doesn't support running valgrind (no support built in yet).
I have attempted using CMake, as I've seen many projects use it, but I find that a lot of time is spent on "writing a build/dependency management system" instead of "using it". Yes, it's true that I can write many functions like:
function(RequireSomeLib artifact)
  # Download and build SomeLib (URL and build options elided).
  ExternalProject_Add(SomeLib ...)
  # Find the package that the external build produced.
  find_package(SomeLib REQUIRED)
  target_include_directories(${artifact} PRIVATE ${SomeLib_INCLUDE_DIRS})
  # If SomeLib is not just a header library, also link its built library.
  target_link_libraries(${artifact} PRIVATE ${SomeLib_LIBRARIES})
  # For each of SomeLib's dependencies, do this same "call" (transitive
  # dependencies' libraries must also be linked into an executable).
endfunction()
for each dependency. Tedious, but currently the cleanest way I see going forward.
With the premise that CMake is used by the libraries my project depends on, is there a better method of solving this problem?
I have not seen or tried SCons, Autotools, or qmake (yet).
In Java, the "retrieve dependencies, build, test, and publish" issue is much simpler ._.
All build systems for C++ will need you to code the dependency and package detection yourself. Every one of them was created out of frustration with previous technologies, with the stated intent of removing the need for boilerplate code and creating a complete, cross-platform, automated, easy-to-use solution; but at the end of the day, you will end up writing code alongside your code to get your package built.
If you look hard enough, you will find debates among advocates of each build system. I found one such debate a couple of years ago. Their arguments were so weak that I ended up abandoning the search.
I am a user of CMake for one simple reason: it was the first tool I could find, some years ago, that allowed me to spawn different build directories. I'm sure all modern build systems have implemented this idea by now, but I stuck with CMake simply because I got used to it. Honestly, I have found very few tangible advantages over a bare-bones Makefile. I had to write boilerplate code for my CMakeLists.txt even though it was a C++ project without dependencies.
Some time later I decided to try out some different IDEs. I had the good fortune that Qt Creator runs on CMake projects; that is the other reason I stayed with CMake.
My advice to you is to visit each build system's webpage; check out their currently implemented features (not TODOs), comparisons to other systems, IDE support, and the complexity of the code you have to write for them. I am pretty sure you will not find a build system satisfying all of the requirements you have, so you will have to test them thoroughly to see which one works best for you.

How to run a console-based target for a library project? [closed]

I am attempting to create/build/run a second target for a library project in Xcode. The library is being consumed by another project in the workspace, and I have:
Created the second target, a console app
Confirmed that the generated main.cpp file is included in the console target
Cleaned and rebuilt, confirming that the library still builds and works
However, the console target remains unbuilt, and I have not received any errors.
Places I have researched looking for more detailed steps:
Googletest xcode tip (meandmark.com)
Google test project target docs (per my use case)
Should I be using one project with multiple targets?
Build static library target with main target for...
Xcode concepts
Xcode help docs
If you think you can help, I'd be much obliged.
It might not be the answer you are looking for, but if you are new enough to Xcode that setting up a test.cpp for your library is challenge enough, you might try another tool that in the long run may prove more useful.
CMake is an excellent cross-platform tool that is capable of generating platform-specific makefiles or project/workspace files for various IDEs, including Xcode. So you need to learn only one tool, and you're good for all platforms and compilers.
CMake has a companion app that ships with it: CTest. It is meant for just the thing you are looking for. It basically adds build targets that build a certain app (test.cpp in your case) and then checks whether the return value of int main() is zero. Multiple tests can be created (each testing a different aspect of your library), and CTest provides a nice interface to run all tests or just specified ones; it also prints test runtimes and shows which tests have failed.
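A minimal sketch of wiring that up (the target and file names are hypothetical):

enable_testing()  # turn on CTest integration for this build tree

# A small console app whose main() returns 0 on success.
add_executable(test_mylib test.cpp)
target_link_libraries(test_mylib PRIVATE mylib)

# Register it; running `ctest` executes the registered command and
# reports pass/fail status and runtime for each test.
add_test(NAME mylib_smoke COMMAND test_mylib)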
CMake and CTest have good documentation, and there are myriads of tutorials available online. They might take some time to master, but in the two days you spent googling, you could have easily ported your workspace to CMake. In the long run, it pays off.