Run code in parallel but sections sequentially with CMake and/or Make?

I am building my project using CMake. This project uses several external libraries, one of which does not specify its dependencies correctly. This causes my build to fail when running a parallel make job. I am wondering if there is a feature in CMake and/or Make to build a certain piece of code serially and everything else in parallel.
More specifically, I am using the FindCUDA.cmake module, and within that cuda_add_executable & cuda_add_library. I am fairly convinced that, while building the CUDA libraries, the FindCUDA module does not list their dependencies, causing race conditions when intermediate object files are read from and written to disk. Is there a way I can simply build the output of the cuda_add_library macro serially while running the rest of my build in parallel?

All CMake code runs single-threaded. The CMake developers have been asked several times, via the bug tracker and elsewhere, to implement parallel features, but this has been rejected every time.
As of now (current CMake v3.24.3) I am not aware of any built-in parallel feature. CMake itself does not execute anything in parallel. It provides a unified way to instruct the build system (MSBuild, Make, Ninja, ...) to build in parallel (see the documentation of the -j switch of cmake --build), but after that the responsibility is handed over to the build system.
It is hard to synchronize the outputs of other software via CMake when CMake itself does not support it.
If your project structure supports it, you can work around this limitation using the approach described in usr1234567's answer below.

As far as I know, you cannot influence build parallelism per target. But you can work around this.
You can first build the target (or multiple targets, with repeated calls) that needs to be built sequentially:
cmake --build . --target seq_target -j 1
Then you can build the whole project, so everything left can be built in parallel. Targets that are already built are not rebuilt, so no error will occur:
cmake --build . -j 8
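If you can use the Ninja generator, there is also a per-target knob: job pools. A minimal sketch, where serial_pool and my_cuda_lib are placeholder names for the pool and for your library target:
set_property(GLOBAL PROPERTY JOB_POOLS serial_pool=1)  # a pool with one job slot
set_property(TARGET my_cuda_lib PROPERTY JOB_POOL_COMPILE serial_pool)
set_property(TARGET my_cuda_lib PROPERTY JOB_POOL_LINK serial_pool)
This compiles and links that target's files one at a time while the rest of the build stays parallel. Whether it helps here is another question: FindCUDA drives .cu files through custom commands, which do not necessarily honor the compile pool.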

Related

Autotools: Compiling the same project using multiple configurations

I am setting up a project based on the Autotools for the first time, and I need to build the project for two platforms (using two different toolchains).
Currently, when I want to build for one toolchain, I restart from scratch because I am not sure I fully understand how the process (autoconf, configure, make) works.
But is the autoconf/configure part required each time? If I create two build directories and call configure from each directory (with different flags for each one), can I then just call make without going through the whole process again?
Thanks
But is the autoconf/configure part required each time? If I create two build directories and call configure from each directory (with different flags for each one), can I then just call make without going through the whole process again?
Autoconf is for building the (cross-platform) build system. There is no reason to think that running the same version of Autoconf with the same inputs (your configure.ac and any custom macros bundled with your project, mostly) would yield different outputs. So no, you don't need to run autoconf separately for each build, and in fact it is the standard Autotools model that you not run autoconf at each build.
So yes, it is absolutely natural and expected usage to create two build directories, and in each one run configure; make. Moreover, if indeed you are creating separate build directories instead of building in the source directory, then you will be able to see that configure writes all its output to the build directory. Thus, in that case, one build cannot interfere with another.
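A minimal sketch of that layout, where the directory names and the cross toolchain are placeholders:
mkdir -p build-native build-cross
(cd build-native && ../configure && make)
(cd build-cross && ../configure --host=arm-linux-gnueabihf && make)
Each configure run caches its own settings in its own build directory, so the two builds stay independent.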

C++ V8 Embedding project structure

I'm trying to get Chrome's V8 embedded in my C++ project, but all I can achieve is what I would call my project being embedded in V8. My concern with this is that my program is cross-platform and I would like the build commands to be the same everywhere. I started development on Windows, but I'm using a Mac now to get V8 running.
I can get V8 built and its samples running using this setup:
Get depot_tools: https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up
Get the source: https://v8.dev/docs/source-code
Build: https://v8.dev/docs/build
My current solution has a few commands: install, build, run. The build command is the most complicated, as it attempts to automatically edit the BUILD.gn file in V8 to insert your project in place of V8, adding all files in your source directory to the sources list.
This approach feels very wrong for a few reasons. The first is that there is almost certainly a better way to configure my project than editing a build script with a Python script. The second is that I would like V8 to be embedded in my project, not the other way around. I only have SDL2 as a dependency, but I have cross-platform CMake builds set up, which would have to be abandoned in favor of however V8 builds its source files. I feel this could get hard to manage if I add more dependencies.
I'm currently working with a small test project with one source file.
EDIT: I can't find anything on embedding V8 that sits between running a sample and the API usage docs.
The usual approach is to have a step in your build system that builds the V8 library as a dependency (as well as any other dependencies you might have), using the official V8 build instructions for that part. If you have a split between steps that get sources/dependencies and steps that compile them, then getting depot_tools and calling fetch v8/gclient sync belong in the former. Note that you probably want to pin a version (the latest stable branch) rather than using tip-of-tree. So, in pseudocode, you'd have something like:
step get_dependencies:
  download/update depot_tools
  download/update V8  # pinned_revision (using depot_tools)
step compile (depends on "get_dependencies"):
  cd v8; gn args out/...; ninja -C out/...
  cd sdl; build sdl
  build your own code, linking against V8/sdl/other deps
Many build systems already have convenient ways to do these things. I don't know CMake very well though, so I can't suggest anything specific there.
I agree that using scripts to automatically modify BUILD.gn feels wrong. It'll probably also turn out to be brittle and high-maintenance over time.
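In concrete terms, following the official instructions linked in the question, the two steps could look roughly like the shell sketch below; the pinned revision is a placeholder and the gn arguments are illustrative, not verified settings:
# step get_dependencies
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH="$PWD/depot_tools:$PATH"
fetch v8            # depot_tools command: pulls V8 plus its dependencies
cd v8
git checkout <pinned-revision>
gclient sync
# step compile
gn gen out/release --args='is_debug=false'
ninja -C out/release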
I got V8 building with CMake very easily using brew:
brew install v8
then add the following lines to CMakeLists.txt
file(GLOB_RECURSE V8_LIB # just GLOB is probably fine
  "/usr/local/opt/v8/lib/*.dylib"
)
include_directories(
  YOUR_INCLUDES
  /usr/local/opt/v8
  /usr/local/opt/v8/include
)
target_link_libraries(YOUR_PROJECT LINK_PUBLIC YOUR_LIBS ${V8_LIB})
Worked on Mojave 10.14.1

`ninja` with multiple `build.ninja` files?

I would like to launch multiple ninja builds simultaneously. Each build is in its own directory and has its own build.ninja file.
I could just do the following:
cd <build-dir-1>
ninja &
cd <build-dir-2>
ninja &
...
cd <build-dir-last>
ninja
...but there are a couple of issues with this:
The default number of threads used by Ninja probably isn't optimal when launching multiple independent builds simultaneously.
Output will, I expect, be interleaved in a non-sensible way.
EDIT: I could also just keep the ninja calls in the foreground (which is what I'm currently doing), but then there would be no easy way to estimate the current progress of the (entire) build.
So, I would like to do one of the following:
merge the build.ninja files into one big file that can perform both builds in one ninja invocation, or
somehow launch ninja with multiple target build.ninja scripts.
It doesn't look like the second option is supported by ninja, but the first seems like it could be done easily enough using subninja <build-dir-n>/build.ninja. Has anyone done something like this before? Are there any hidden pitfalls? Alternatively, I could just perform the builds in sequence (i.e. the above sequence without the &s), but that doesn't feel like the right solution.
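For what it's worth, a top-level file for the first option could be as small as this (directory names are placeholders):
# top-level build.ninja
subninja build-release/build.ninja
subninja build-debug/build.ninja
One pitfall to check first: a CMake-generated build.ninja typically assumes its own directory is the working directory, so relative paths inside it may not resolve when it is pulled in from a different top level.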
Use-case
I'm using CMake, which generates a separate build.ninja file for each build configuration (release and debug). I'm also targeting multiple platforms, so I have multiple compilers, and CMake must be run separately for each platform. So if I want to build release and debug code for all platforms, I need to run ninja multiple times.

Is there a way to perform atomic CMake build?

I'm considering reimplementing our build system (currently based on GNU Make) in CMake.
Disclaimer: this is more of a theoretical and "best practices" question. I don't know CMake in-depth. Also, please feel free to migrate the question to programmers if it's more on-topic there.
As far as I understand, the standard workflow for CMake is
cmake .
make
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
So, during usual development process you're supposed to run make to avoid unnecessary rebuilds of CMakeCache and Makefiles and generally make the process more straight-forward. But then, if you add, say, a new source file to CMakeLists and run make, it'll be using old CMakeCache and Makefiles and will not regenerate them automatically. I think it may cause major problems when used at scale, since in case something is not building as it should, you'll have to try to perform make clean, then, if it doesn't help, you'll need to remove CMakeCache and regenerate everything (manually!).
If I'm not right about something of the above, please correct me.
I'd like to just do
awesome-cmake
and have it update everything that needs updating and then build the project.
So, the question: is there a way to make "atomic build" with CMake so that it tracks all the required information and abstracts away the usage of make?
I think you have a couple of incorrect ideas here:
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
Ultimately, CMake is all about producing correct Makefiles (or Visual Studio solution files, or XCode project files, or whatever). Unless you modify a generated Makefile by hand, there can be no synchronisation issue between CMake and the Makefile since CMake generates the Makefile.
But then, if you add, say, a new source file to CMakeLists and run make, it'll be using old CMakeCache and Makefiles and will not regenerate them automatically.
Actually, the opposite is true: if you modify the CMakeLists.txt (e.g. adding a new source, changing a compiler flag, adding a new dependency) then running make will trigger a rerun of CMake automatically. CMake will read in its previously cached values (which includes any command line args previously given to CMake) and generate an updated Makefile.
in case something is not building as it should, you'll have to try to perform make clean, then, if it doesn't help, you'll need to remove CMakeCache and regenerate everything (manually!).
Yes, this would be a pretty normal workflow if something has gone wrong. However, things don't often get that bad in my experience.
So, the question: is there a way to make "atomic build" with CMake so that it tracks all the required information and abstracts away the usage of make?
Given that running make will cause CMake to "do the right thing", i.e. rerun if required, I guess that using make is as close to an "atomic build" as possible.
One thing to beware of here is the use of file(GLOB ...) or similar to generate a list of source files. From the docs:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
In other words, if you do use file(GLOB ...) to gather a list of sources, you need to get into the habit of rerunning CMake after adding/removing a file from your source tree; running make won't trigger a rerun of CMake in this situation.
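Since CMake 3.12 there is a partial escape hatch: the CONFIGURE_DEPENDS flag asks the generated build system to re-check the glob on every build, at some cost, and the docs still discourage relying on it:
file(GLOB_RECURSE MY_SOURCES CONFIGURE_DEPENDS "src/*.cpp")  # re-globbed at build time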
The standard workflow for CMake is an out of source build
mkdir build
cd build
cmake ..
make

C++ Buildsystem with ability to compile dependencies beforehand

I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, CEGUI, Boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool, which is why we have based our project on CMake too so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share of experience with the Autotools so far, and as much as I like the documentation (the autobook is a very good read), I fear the Autotools are not meant to be used natively on Windows.
Some of you suggested letting an IDE handle the dependency management. Our team consists of individuals coding with everything from pure Vim to full-blown Eclipse CDT or Visual Studio. This is where CMake gives us some flexibility, with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
It allows you to download or check out code, then configure and build it as part of your main build tree.
It should also allow you to set dependencies.
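A minimal sketch of the module in use; the URL is a placeholder, and <INSTALL_DIR> is expanded by ExternalProject itself:
include(ExternalProject)
ExternalProject_Add(ogre3d
  URL https://example.com/ogre3d-src.tar.gz
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>
)
Your own targets can then depend on ogre3d, so the library is downloaded and built before they compile.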
At my work (medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK etc.) are built once for each system we support (MSWin32, MSWin64, Linux32 etc.) and committed as zip files to the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in and I think it works beautifully. Truth be told it does take a little bit to get used to the syntax, but I have used it successfully on a project that requires the distribution of python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
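To give a feel for the scale involved, a minimal configure.ac is only a few lines (the project name and version here are placeholders):
AC_INIT([myapp], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT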
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain of is any kind of Windows wrapper for the GNU Autotools. I know you are able to use them inside Cygwin, but for actually distributing files and dependencies on Windows platforms you are probably better off using a Windows MSI installer (or something that can package your project inside of Visual Studio).
If you want to distribute dependencies, you can set them up under a separate subdirectory, for example libzip, with a specific Makefile.am entry that builds that library. When you run make install, the library will be installed to the lib folder that the configure script determined it should use.
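As a sketch, the top-level Makefile.am for that layout needs little more than a SUBDIRS line (src is a placeholder for your own source directory):
SUBDIRS = libzip src
Automake descends into the listed directories in order, so the bundled library is built before your own code that links against it.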
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to track dependencies automatically.
The best you can do is use a script that scans source files (or have the C++ compiler do it) to find #includes (conditional compilation makes this tricky) and generate part of the makefile.
But you'd need to call this script whenever something might have changed.
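With gcc or clang (so not a cross-platform answer either), the usual pattern is to have the compiler emit dependency fragments as a side effect of compilation and pull them into the makefile; the file names here are placeholders:
# let the compiler write a .d file next to each .o
CXXFLAGS += -MMD -MP
OBJS := main.o util.o
app: $(OBJS)
	$(CXX) $(LDFLAGS) -o $@ $^
# include whatever dependency fragments exist so far
-include $(OBJS:.o=.d)
This avoids the separate scanning script: the dependency information is refreshed every time a file is compiled.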
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean by "set up"?
As you said, CMake will compile everything once the dependencies are on the machine. Are you just looking for a way to package up the dependency source? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) are all you need.
Edit: Side note, CMake has the file command which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
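As an illustration of that command (the URL is a placeholder), a download with a result check might look like:
file(DOWNLOAD
  https://example.com/deps/boost.tar.gz
  ${CMAKE_BINARY_DIR}/boost.tar.gz
  STATUS dl_status
)
list(GET dl_status 0 dl_code)  # first element is the numeric result, 0 on success
if(NOT dl_code EQUAL 0)
  message(FATAL_ERROR "download failed: ${dl_status}")
endif()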
Edit 2: CPack is another tool by the CMake guys that can be used to package up files and such for distribution on various platforms. It can create NSIS for Windows and .deb or .tgz files for *nix.
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows cmake to be run from various locations.
/
  CMakeLists.txt        "install precompiled dependencies and build project"
  project/
    CMakeLists.txt      "build the project, managing dependencies of subsystems"
    subsystem1/
      CMakeLists.txt    "build subsystem 1, assuming dependencies are already met"
    subsystem2/
      CMakeLists.txt    "build subsystem 2, assuming dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation, but that the top-level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files, but they make the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
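One hypothetical way to let a sub CMakeLists.txt work both in isolation and from the parent (the names and source file are invented):
# subsystem1/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
  project(subsystem1)  # only declare a project when entered directly
endif()
add_library(subsystem1 STATIC foo.cpp)
The top-level file then just add_subdirectory()s each subsystem once the dependencies are in place.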
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source distribution is the GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
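A rough Makefile sketch of the scheme, with libfoo and its URL as placeholders:
.PHONY: deps
deps: deps/libfoo
	$(MAKE) -C deps/libfoo && $(MAKE) -C deps/libfoo install
# fetch the source only if it is not already present
deps/libfoo:
	mkdir -p deps
	curl -L https://example.com/libfoo.tar.gz | tar xz -C deps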
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app, with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, sqlite and cmake, all in the correct order).
The tool, named «C++ Version Manager» (inspired by the excellent Ruby Version Manager), is written in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.