For a project where Waf is used, I want to try to speed up the build. I see that Waf has a WAFCACHE option. So, is WAFCACHE sufficient, or do I need to set up ccache additionally? Can they work together, or does WAFCACHE internally make use of ccache? (I don't see any explanation of this in the Waf book.)
I realize this is a really old question, but for others who are curious - you don't need to set up ccache if you use WAFCACHE. I'm not sure of the exact mechanism Waf uses, but it does it for all (or at least most) targets. This was a huge plus for us, as we got object caching for our Fortran code for the first time.
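Depending on the Waf version, enabling it is typically just a matter of pointing the cache at a shared directory; a minimal sketch, assuming a Waf release where the cache location is taken from the WAFCACHE environment variable (the path is illustrative):

WAFCACHE=/var/tmp/wafcache waf configure build
# later clean rebuilds reuse the object files cached under /var/tmp/wafcache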
I was wondering if I could use ccache for building RTEMS with Waf. I asked about this on the RTEMS forum: Caching build objects: Waf and ccache. It turned out that a simple trick like this does the job:
To wrap the compiler using ccache, override the environment variables during the configuration:
CXX='ccache g++' CC='ccache gcc' waf configure
I am working on a C++ project. It is not very complicated so far, yet it depends on a bunch of "popular" libraries (nlohmann/json and ToruNiina/toml11, just to name a few). All of them have a CMakeLists.txt, and from my not-that-experienced point of view, I consider them well structured.
Now, of course, I could compile the libraries one by one, or include a "copy" in my project repo, but I want to do better than that. After researching the available build tools, I decided to use cmake to build and manage the project. The promise was a stable, widely supported tool that would help simplify and unify the build process. Moreover, given the nature of the project, I have no way to impose any requirements on the target machine; I need to pack everything for deployment.
I have spent several days reading, watching and testing various cmake tutorials, handbooks and manuals. I have to admit, I quickly started to feel that a tool that is supposed to clarify the development process keeps introducing new obscurities, contrary to its purpose. Originally, I credited this to my lack of experience, yet...
I read articles about why not to bundle dependencies, only to have them followed by methods of doing exactly that. I found recommendations to use way A over B, C over B, and later A over C. It took me a while to figure out the differences between 2.8 and 3.0, the obscurity of target_link_libraries, setting the C++ standard and/or compiler warning flags, and so on.
My point is that even after an exhausting expedition into the seas of cmake, I am still not sure about some elementary questions:
How is cmake meant to be used?
What is a standard, what is a courtesy, and what is none of those?
How can I tell that something is a feature, an archaic backwards compatibility, or both?
Now I will illustrate this with my project. I only need something like this:
cmake_minimum_required(VERSION 3.14)
project(CaseCore CXX)
add_executable(myBinary list/of/cpp/sources.cpp)
target_link_libraries(myBinary PUBLIC someExternalLibs likeForExample nlohmann_json::nlohmann_json oqs)
The only problem is with the libraries (there is no room for other problems anyway). I want to build them with the project and don't want to keep a local copy (so as not to drag a ton of unrelated files along). First, I created forks of the library repos in order to have a reliable source and to be able to merge newer versions into my forks.
The next decision was whether to use git submodules or some other scheme; I've read that submodules don't perform that well, and I also preferred the whole thing to be managed by cmake alone. I started with ExternalProject_Add, but later I found out about FetchContent, which let me add the external dependencies easily to my cmake lists:
include(FetchContent)
FetchContent_Declare(nlohmann
GIT_REPOSITORY https://github.com/my-reliable-fork-of/json
GIT_TAG v3.7.3
)
message(STATUS "Fetching Json...this may take a while")
FetchContent_MakeAvailable(nlohmann)
This seems minimal and works well. However, I always have to search the library itself in order to find or guess which targets to link to my executable target. There seems to be close to zero convention, and unless the respective CMakeLists.txt is simple enough to read, I tend to guess target names until I find the right one.
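(One trick that can reduce the guessing, sketched here on the assumption that the dependency declares its targets in its top-level CMakeLists.txt: after FetchContent_MakeAvailable you can ask CMake which targets that directory added. The nlohmann_SOURCE_DIR variable is the one FetchContent derives from the name declared above; BUILDSYSTEM_TARGETS only lists targets added directly in that directory, not in its subdirectories.)

get_property(dep_targets DIRECTORY "${nlohmann_SOURCE_DIR}" PROPERTY BUILDSYSTEM_TARGETS)
message(STATUS "Targets defined by the fetched project: ${dep_targets}")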
Recently I wanted to link liboqs from here, and the above-mentioned approach did not really help: I can link oqs and #include "oqs/oqs.h", yet for some reason the dynamic library is not created and execution terminates. I am pretty sure I can resolve the problem after another span of time spent googling and playing around with various cmake variables. Yet this is not the way I expected cmake to help me manage my project; it is actually quite the opposite.
Just to be clear, I turned down other methods, including:
add_subdirectory from local repo copy (git submodule)
ExternalProject_Add from local repo copy (git submodule)
ExternalProject_Add from online repo
find_package
as they seemed much more obscure, old-style, etc. (even though, despite hours of research, they all still look to me like just so many ways of doing the same thing)
Now, my questions:
Am I doing something wrong, or is it really what working with cmake should look like?
Do I really have to "reverse-engineer" other people's CMakeLists in order to use a library?
Under these circumstances, how can I convince my coworkers to adopt a similar workflow?
and finally
How can I adjust my work in order to ease these difficulties for others?
I love C++ more the more I use it. Yet I spend a tremendous amount of my productive time resolving dependencies... and I do not want to make this guy even more angry.
How is cmake meant to be used?
The typical cmake usage matches the old autotools usage:
$ cmake /path/to/src #replaces /path/to/src/configure
$ make
$ make install
Some targets changed (e.g., make check vs make test), and cmake doesn't provide all the same standard targets (e.g., make distclean), but the usage I have above is what most developers will do (and since cmake re-runs itself, it's really just the second step most of the time).
If your CMakeLists.txt doesn't support this workflow, you should have a very good reason. Most tooling will assume a workflow like this, so you're severely limiting yourself.
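With newer CMake versions the same workflow is usually spelled through the generator-agnostic front end; a sketch, assuming CMake 3.13+ for -S/-B and 3.15+ for --install:

$ cmake -S /path/to/src -B build   # configure into a separate build directory
$ cmake --build build              # drives make, ninja, MSBuild, ... as appropriate
$ cmake --install build            # equivalent of make install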
What is a standard, what is a courtesy, and what is none of those?
Outside of the above, cmake is pretty much the wild west. Things are becoming more standardized thanks to better documentation and training, but it's far from perfect.
A well-behaved cmake project should export its targets (lots of questions and answers on Stack Overflow about this) and propagate flags and dependencies. This makes it far easier for dependent projects to consume exported targets, and luckily it's easy to do.
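As a rough sketch of what exporting targets looks like in practice (the library name, namespace and install destinations below are illustrative, not taken from the question):

add_library(mylib src/mylib.cpp)
target_include_directories(mylib PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)

install(TARGETS mylib EXPORT MyLibTargets
    ARCHIVE DESTINATION lib LIBRARY DESTINATION lib RUNTIME DESTINATION bin)
install(DIRECTORY include/ DESTINATION include)
install(EXPORT MyLibTargets NAMESPACE MyLib:: DESTINATION lib/cmake/MyLib)

A consumer can then pull in the generated MyLibTargets.cmake (usually wrapped in a small MyLibConfig.cmake so that find_package works) and simply link MyLib::mylib, inheriting the include directories and usage requirements automatically.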
How can I tell that something is a feature, an archaic backwards compatibility, or both?
There's nothing I'm aware of that makes these distinctions. In general, newer methods leverage the target_* functions instead of the global ones (e.g., target_include_directories vs include_directories). The target_* functions are also used to propagate flags, include directories, compiler features, and dependent libraries like I mentioned above.
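To make the distinction concrete (the target name is illustrative):

# older, global style: affects every target declared later in this directory
include_directories(include)
add_definitions(-DSOME_FLAG)

# newer, target-based style: scoped to one target and propagated to its
# consumers through the PUBLIC/PRIVATE/INTERFACE keywords
target_include_directories(myBinary PUBLIC include PRIVATE src)
target_compile_definitions(myBinary PRIVATE SOME_FLAG)
target_compile_features(myBinary PUBLIC cxx_std_11)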
Am I doing something wrong, or is it really what working with cmake should look like?
You're talking about managing external dependencies, and I'm going to skip this to avoid getting into opinions. The short version is that C and C++ dependencies are hard, and there are many competing ways of managing them in a project. They each have pros and cons, but most are still designed around their authors' use cases. You'll have to figure out what use cases you need, and choose tools and workflows based on that.
Do I really have to "reverse-engineer" other people's CMakeLists in order to use a library?
A well-behaved cmake project will export its targets properly, even if they use different dependency management than you do. If they don't, send the project a pull request (exporting isn't hard, and it's good to learn how) or just file bugs against them, especially if they're already using cmake as a build system.
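When a project does export its targets, consuming it usually collapses to something like this (nlohmann_json is taken from the question; the version argument is illustrative):

find_package(nlohmann_json 3.7 REQUIRED)
target_link_libraries(myBinary PRIVATE nlohmann_json::nlohmann_json)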
Under these circumstances, how can I convince my coworkers to adopt a similar workflow?
It depends on your coworkers, and mileage will vary. I've dealt with coworkers who want to embrace best practices and support flexibility, and I've dealt with coworkers who are content only doing enough to solve the problems we're facing right now.
I have a large C++ project of hundreds of files with a CMake build system. How can I use GCC's -ftime-report option but get a single summary for the full build?
I am looking to improve build times and this would be helpful to know where to focus the effort.
You would need to implement that manually by parsing the output somehow.
A good way to get a higher level overview is to use Ninja and parse the .ninja_log file:
https://github.com/ninja-build/ninja/issues/1080#issuecomment-255436851
Also see https://github.com/nico/ninjatracing. Chromium uses tools like that to keep track of build times.
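A sketch of that workflow for a CMake project (the trace file name is arbitrary; ninjatracing is the script from the repository linked above):

$ cmake -S . -B build -G Ninja
$ cmake --build build
$ ./ninjatracing build/.ninja_log > trace.json
# open trace.json in chrome://tracing for a per-translation-unit timeline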
Update:
-ftime-report is simply not suitable for this task, as it's meant for compiler developers. Use clang's -ftime-trace together with https://github.com/aras-p/ClangBuildAnalyzer for this.
gcc is far from supporting -ftime-trace: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92396
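For completeness, the clang route looks roughly like this (the tool flags follow ClangBuildAnalyzer's documented usage; the capture file name is arbitrary):

$ cmake -S . -B build -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_CXX_FLAGS=-ftime-trace
$ cmake --build build
$ ClangBuildAnalyzer --all build capture.bin
$ ClangBuildAnalyzer --analyze capture.bin   # prints one aggregated report for the whole build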
I found a similar topic: What are the differences between Autotools, Cmake and Scons?, but my question is a little different and I think the answers could be different too.
I found a lot of articles saying that waf is unstable (API changes), is not yet ready for production, etc. (but all of these articles are two or three years old).
Which of these build tools should be used if I want to:
create a big C++(11) project - let's say a complex compiler
use it with LLVM
be sure it will be flexible and simple to use
be sure it will be fast enough
compile on all standard platforms (the base platform is Linux, but I want to compile on Windows and Mac OS X as well)
Reading a lot of articles, I found CMake and Waf to be the "best" tools available, but I have no experience with them, and it is really hard to find a comparison that is not heavily biased (like the comparison by the SCons author) and not very old.
Waf covers nearly all of your requirements ...
API changes: not a problem, as waf is meant to be shipped inside the source tarball (<100 KB)
big projects: you can split your configuration across subdirectories (the contexts can be inherited). I've worked on projects with more than 10k files in C/C++/Fortran 77/Fortran 90/Python/Cython, including documentation in doxygen/sphinx.
flexibility and ease of use: you can add extra modules in Python (http://code.google.com/p/waf/wiki/AddingNewToolsToWaf)
fast: tasks can be run in parallel (a short invocation example follows this list): http://www.retropaganda.info/~bohan/work/psycle/branches/bohan/wonderbuild/benchmarks/time.xml
multi-platform: you can run Waf anywhere Python is available, which includes Windows and Mac OS. Waf is compatible with MSVC, gcc, icc, and other compilers, and it can generate Visual Studio/Eclipse projects.
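For instance, parallel builds work out of the box; a minimal sketch of a typical invocation (the job count is illustrative, Waf defaults to the number of CPU cores):

waf configure
waf build -j8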
... but Waf seems to have an issue with LLVM: http://code.google.com/p/waf/issues/detail?id=1252
EDIT: as noted by Wojciech Danilo, the LLVM issue has since been fixed.
I'm currently using CMake for my own language implementation via C++11 and LLVM.
I like CMake for its easy-to-use syntax. LLVM can be loaded with a simple find_package command. After that you can use all the headers and libraries you need. CMake lets child scripts inherit variables from parent scripts, so you do not need to set variables and load packages in every subdirectory.
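For reference, pulling LLVM in usually looks something like this (a sketch based on LLVM's documented CMake integration; the component list and the mycompiler target are illustrative):

find_package(LLVM REQUIRED CONFIG)
message(STATUS "Found LLVM ${LLVM_PACKAGE_VERSION}")
include_directories(${LLVM_INCLUDE_DIRS})
add_definitions(${LLVM_DEFINITIONS})

add_executable(mycompiler main.cpp)
llvm_map_components_to_libnames(llvm_libs support core irreader)
target_link_libraries(mycompiler ${llvm_libs})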
The C++11 support depends on the compiler you want to use. All in all, CMake is just a front end that generates your 'real' build scripts.
When you're using make, you can use make's --jobs=N to speed up compilation on multicore platforms. On Windows you could generate Visual Studio 2012 project files, use Microsoft's build system, and use its parallel build jobs to speed up the compilation process.
You should always create a subfolder for build files (myproject/build or something). This way you keep your source tree clean (cd build; cmake ..; cd ..).
I can't speak for all the other tools out there.
We work under Linux/Eclipse/C++ using Eclipse's "native" C++ projects (.cproject). The system comprises several C++ projects, all kept under SVN version control using the integrated Subclipse plugin.
We want a script that would check out, compile and package the system, without us needing to drive the process manually from Eclipse, as we do now.
I see that there are generated makefiles and support files (sources.mk, subdir.mk, etc.) scattered around which are not under version control (probably the Subclipse plugin is "clever" enough to exclude them). I guess I could put them under SVN and use them in the script we need.
However, this feels shaky. Has anybody tried it? Are there any issues to expect? Are there recommended ways to achieve what we need?
N.B. I don't believe that the idea of adopting another build system will be received nicely, unless it's SUPER-smooth. We are a small company of 4 developers running full-steam ahead, and any additional overhead or learning curve will not be appreciated :)
thanks a lot in advance!
I would not recommend putting things that are generated by an external tool into version control. My favorite phrase for this tactic is "version the recipe, not the cake". Instead, you should use a third-party tool, like your script, to drive Eclipse appropriately to generate these files from your sources and then compile them. This avoids the risk of having one of these automatically generated files get out of sync with your actual sources.
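If the script is allowed to invoke Eclipse itself, one way to do this (a sketch, assuming Eclipse CDT is installed; the workspace and checkout paths are placeholders) is CDT's headless build application, which imports projects and builds them non-interactively:

eclipse -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -data /tmp/build-ws -importAll /path/to/svn/checkout -build all

The generated makefiles are then recreated from the versioned .project/.cproject files on every run, so nothing generated needs to live in version control.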
I'm not sure what your threshold for "super-smooth" is, but you might want to take a look at Maven2, which has a plugin for Eclipse projects to do just this.
I know that this is a big problem (I had exactly the same one; in addition, maintaining a build workspace in SVN is a real pain!).
Problems I see:
You will run into problems as soon as somebody adds or changes project settings files but doesn't trigger a new build for all possible platforms (the makefiles won't be updated).
There is no overall makefile, so you cannot easily reuse the build order of your projects that Eclipse has calculated.
BTW: I wrote an Eclipse plugin that builds up a workspace from a given (textual) list of projects and then triggers the build. That's possible but also not an easy task.
Unfortunately I can't post the plugin somewhere because I wrote it for my former employer...
I'm new to make and makefiles, so forgive me if this is very basic.
I'm looking through some makefiles in my project and I'm seeing 2 types of targets -- targets that don't begin with a . character and targets that do.
From what I can tell, it seems like the ".target-name" targets are always executed; is my assumption true? I did read about makefiles by Googling, but didn't find anything specific to this.
And as always, thanks for the answers!
No.
Targets whose names start with a dot are normally targets with a special meaning (i.e., their functionality is built into make). One of them is
.PHONY, which declares the targets that are always executed (that is, the commands in their rules are run unconditionally, even if a file with the same name exists and is up to date).
But there are also others, like .DEFAULT for the default rule, or .PRECIOUS, which keeps make from deleting its targets when make is killed or interrupted (and preserves intermediate files built by chains of implicit rules).
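For example, a typical use of .PHONY (the file names are illustrative):

# "clean" does not name a real file; .PHONY makes its commands run even if
# a file called "clean" happens to exist in the directory
.PHONY: clean
clean:
	rm -f *.o myprog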
For learning about make, and especially GNU make, I'd suggest having a look at the excellent book "Managing Projects with GNU Make".