`ninja` with multiple `build.ninja` files?

I would like to launch multiple ninja builds simultaneously. Each build is in its own directory and has its own build.ninja file.
I could just do the following:
cd <build-dir-1>
ninja &
cd <build-dir-2>
ninja &
...
cd <build-dir-last>
ninja
...but there are a couple of issues with this:
The default number of threads used by Ninja probably isn't optimal when launching multiple independent builds simultaneously.
Output from the parallel builds will, I expect, be interleaved in a way that makes no sense.
EDIT: I could also just keep the ninja calls in the foreground (which is what I'm currently doing), but then there would be no easy way to estimate the current progress of the (entire) build.
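For reference, each build's parallelism can be capped explicitly (ninja's -j flag, together with -C to select the build directory), though this still leaves the interleaved-output problem:
# hypothetical split of eight cores across two builds
ninja -C build-dir-1 -j 4 &
ninja -C build-dir-2 -j 4
wait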
So, I would like to do one of the following:
merge the build.ninja files into one big file that can perform all the builds in one ninja invocation, or
somehow launch ninja with multiple build.ninja scripts.
It doesn't look like the second option is supported by ninja (the -f flag accepts only a single build file), but the first seems like it could be done easily enough using subninja <build-dir-n>/build.ninja; a sketch follows below. Has anyone done something like this before? Are there any hidden pitfalls? Alternatively, I could just perform the builds in sequence (i.e. the above sequence without the &s), but this doesn't feel like the right solution.
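A minimal sketch of the merged approach (directory names are placeholders):
# top-level build.ninja
subninja build-dir-1/build.ninja
subninja build-dir-2/build.ninja
One likely pitfall: ninja resolves relative paths against the directory it runs in, while CMake-generated build.ninja files use paths relative to their own build directory, so merging them this way may require the sub-builds to use absolute paths.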
Use-case
I'm using CMake, which generates a separate build.ninja file for each build configuration (release and debug). I'm also targeting multiple platforms, so I have multiple compilers, and CMake must be run separately for each platform. So if I want to build release and debug code for all platforms, I need to run ninja multiple times.
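For completeness, the sequential fallback is at least easy to script using ninja's -C flag (the directory names here are hypothetical):
for d in linux-debug linux-release win-debug win-release; do
    ninja -C "$d" || exit 1
done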

Related

Autotools: Compiling the same project using multiple configurations

I am setting up a project based on Autotools for the first time, and I need to build the project for two platforms (using two different toolchains).
Currently, when I want to build for one toolchain, I start again from scratch (autoconf, configure, make) because I am not sure I fully understand how the process works.
But is the autoconf / configure part required each time?
If I create two build directories and call configure from each directory (with different flags for each one),
can I then just call make without repeating the whole process?
Thanks
But is the autoconf / configure part required each time? If I create two build directories and call configure from each directory (with different flags for each one), can I then just call make without repeating the whole process?
Autoconf is for building the (cross-platform) build system. There is no reason to think that running the same version of Autoconf with the same inputs (mostly your configure.ac and any custom macros bundled with your project) would yield different outputs. So no, you don't need to run autoconf separately for each build; in fact, the standard Autotools model is that you do not rerun autoconf at each build.
So yes, it is absolutely natural and expected usage to create two build directories, and in each one run configure; make. Moreover, if indeed you are creating separate build directories instead of building in the source directory, then you will be able to see that configure writes all its output to the build directory. Thus, in that case, one build cannot interfere with another.
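As a sketch of that workflow, with the build directories created next to the source tree (the --host triplet is only an example):
mkdir build-native build-arm
( cd build-native && ../configure && make )
( cd build-arm && ../configure --host=arm-linux-gnueabihf && make )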

Is it possible to compile recently changed files first?

The following scenario happens a lot:
I change a header which is included in a lot of places, e.g. to add a function declaration.
I add a function definition to the corresponding source file, which has an error because I'm dumb.
I compile, and wait a long time for a bunch of irrelevant stuff to be compiled before I see the error in the code I'm working on.
If CMake prioritized compiling recently modified files first, it would cut my test cycle time in these cases by several minutes. Is this possible?
I couldn't find anything general in CMake that allows you to specify build order, but you may be able to do this with specific build system generators that allow you to compile individual .o or .obj files. For example, using the Ninja generator:
add_executable(mytarget the-suspect-src.cpp)
The generated Ninja build system lets me build the corresponding .o file by specifying it explicitly:
ninja CMakeFiles/mytarget.dir/the-suspect-src.cpp.o
So you could achieve your desired behavior with:
ninja CMakeFiles/mytarget.dir/the-suspect-src.cpp.o && ninja
Note that I don't memorize these paths to the .o files, but instead tab-complete in the terminal.
I happen to know that the Makefile generators also have a similar ability to build individual .o files, but I'm not aware of any other generators which have this ability.
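For example, with the Unix Makefiles generator you can ask the generated Makefile itself which per-source convenience targets exist (the target name below is illustrative; use whatever make help prints):
make help                             # lists the generated convenience targets
make the-suspect-src.cpp.o && make    # build the suspect object first, then the rest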
CMake isn't a build system; it's a build-system generator. It generates build-system configurations for build systems like Make, Ninja, Visual Studio, etc. Off the top of my head, I don't think CMake provides such a configuration point. I think you might have to dig into the docs for whichever specific build system(s) you're using/generating.

set output path for cmake generated files

My question is the following:
Is there a way to tell CMake where to generate its files, such as cmake_install.cmake, CMakeCache.txt, etc.?
More specifically, is there a way to set some commands in the CMake files that specify where to output these generated files? I have tried searching the web for answers; most people say there is no explicit way of doing this, while others say I might be able to using custom commands. Sadly, I'm not very strong in CMake, so I couldn't figure this out.
I'm currently using the CLion IDE, where you can set the output path through the settings, but for flexibility I would like as much as possible to be done through the CMake files, so that compiling on different computers isn't that big of a hassle.
I would also like to avoid explicitly adding additional command line arguments etc.
I hope someone might have an answer for me, thanks in advance!
You can't (easily) do this and you shouldn't try to do it.
The build tree is CMake's territory. It allows you some tiny amount of customization there (for instance you can specify where the final build artifacts will be placed through the *_OUTPUT_DIRECTORY target properties), but it does not give you any direct control over where intermediate files, like object files or internal make scripts used for bookkeeping are being placed.
This is a feature. You have no idea how all the build systems supported by CMake work internally. Maybe you can move that internal file to a different location in your build process, which is based on Unix Makefiles. But maybe that will also horribly break my build process, which is using Visual Studio. The bottom line is: You shouldn't have to care about this. CMake should take care of it, and by taking some freedom away from you, it ensures that it can actually do that job on all supported build toolchains.
But this might still be an unsatisfactory answer to you. You're the developer, shouldn't you be in full control of the results produced by your build? Of course you should, which is why CMake again grants you full control over what goes into the install tree. That is, whatever ends up in the install directory when you call make install (or whatever is the equivalent of installing in your build toolchain) is again under your control.
So you do control everything that matters: The source tree, the install tree, and that tiny portion of the build tree where the final build artifacts go. The rest of the build tree is off-limits for you and for good reasons.
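To make that concrete, here is a minimal sketch (target name and paths are illustrative) of the two knobs you do have, where final artifacts land in the build tree and what goes into the install tree:
# final build artifacts (not intermediate files)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
# the install tree, populated by "make install"
install(TARGETS mytool RUNTIME DESTINATION bin)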

Run code parallel but sections sequentially with CMake and/or Make?

I am building my project using CMake. This project uses several external libraries, one of which does not specify its dependencies correctly. This causes my build to fail while running a parallel make job. I am wondering if there is a feature in cmake and/or make to run a certain piece of code serially and everything else in parallel.
More specifically, I am using the FindCUDA.cmake module, and within that cuda_add_executable and cuda_add_library. I am fairly convinced that, while building the CUDA libraries, the FindCUDA module is not listing all of its dependencies, causing race conditions when reading and writing intermediate object files on disk. Is there a way I can simply run the cuda_add_library macro serially while running the rest of my build in parallel?
All CMake code runs single-threaded. The CMake developers have been asked several times (via the bug tracker and elsewhere) to implement parallel features, but this has been rejected every time.
As of now (CMake v3.24.3 at the time of writing) I am not aware of any built-in parallel feature; CMake itself does not execute in parallel at all. CMake does provide a unified way to instruct the underlying build system (MSBuild, Make, Ninja, ...) to build in parallel (see the -j switch of cmake --build in the docs), but after that the responsibility is handed over to the build system.
It is hard to synchronize the output of other software via CMake when CMake itself does not support doing so.
If your project structure allows it, you can work around this limitation using the approach described in usr1234567's answer below.
As far as I know, you cannot influence build parallelism per target. But you can work around this.
You can first build the target that needs to be built sequentially (or several such targets, with repeated calls):
cmake --build . --target seq_target -j 1
Then you can build the whole project, so everything remaining can be built in parallel. Targets that were already built are not rebuilt, so the error will not occur:
cmake --build . -j 8

Is there a way to perform atomic CMake build?

I'm considering reimplementing our build system (currently based on GNU Make) in CMake.
Disclaimer: this is more of a theoretical and "best practices" question. I don't know CMake in-depth. Also, please feel free to migrate the question to programmers if it's more on-topic there.
As far as I understand, the standard workflow for CMake is
cmake .
make
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
So, during the usual development process you're supposed to run make to avoid unnecessary rebuilds of CMakeCache and Makefiles and generally make the process more straightforward. But then, if you add, say, a new source file to CMakeLists and run make, it'll be using old CMakeCache and Makefiles and will not regenerate them automatically. I think it may cause major problems when used at scale, since in case something is not building as it should, you'll have to try to perform make clean, then, if it doesn't help, you'll need to remove CMakeCache and regenerate everything (manually!).
If I'm not right about something of the above, please correct me.
I'd like to just do
awesome-cmake
and have it update everything what needs updating and build the project.
So, the question: is there a way to make "atomic build" with CMake so that it tracks all the required information and abstracts away the usage of make?
I think you have a couple of incorrect ideas here:
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
Ultimately, CMake is all about producing correct Makefiles (or Visual Studio solution files, or XCode project files, or whatever). Unless you modify a generated Makefile by hand, there can be no synchronisation issue between CMake and the Makefile since CMake generates the Makefile.
But then, if you add, say, a new source file to CMakeLists and run make, it'll be using old CMakeCache and Makefiles and will not regenerate them automatically.
Actually, the opposite is true: if you modify the CMakeLists.txt (e.g. adding a new source, changing a compiler flag, adding a new dependency) then running make will trigger a rerun of CMake automatically. CMake will read in its previously cached values (which includes any command line args previously given to CMake) and generate an updated Makefile.
in case something is not building as it should, you'll have to try to perform make clean, then, if it doesn't help, you'll need to remove CMakeCache and regenerate everything (manually!).
Yes, this would be a pretty normal workflow if something has gone wrong. However, things don't often get that bad in my experience.
So, the question: is there a way to make "atomic build" with CMake so that it tracks all the required information and abstracts away the usage of make?
Given that running make will cause CMake to "do the right thing", i.e. rerun if required, I guess that using make is as close to an "atomic build" as possible.
One thing to beware of here is the use of file(GLOB ...) or similar to generate a list of source files. From the docs:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
In other words, if you do use file(GLOB ...) to gather a list of sources, you need to get into the habit of rerunning CMake after adding/removing a file from your source tree; running make won't trigger a rerun of CMake in this situation.
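To illustrate (names are placeholders): the explicit list is the safe form, and since CMake 3.12 file(GLOB) also accepts a CONFIGURE_DEPENDS flag that re-checks the glob at build time, though the docs note it adds a cost and is not supported by all generators:
# safe: adding a source means editing CMakeLists.txt, which triggers a CMake rerun
add_executable(app src/main.cpp src/util.cpp)
# if you must glob (CMake >= 3.12): re-evaluated at build time, with caveats
file(GLOB app_sources CONFIGURE_DEPENDS src/*.cpp)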
The standard workflow for CMake is an out-of-source build:
mkdir build
cd build
cmake ..
make
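With CMake 3.13 or newer, the same workflow can be written without changing directory:
cmake -S . -B build
cmake --build build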