How to overcome "'aclocal-1.15' is missing on your system" warning? - c++

I'm trying to run a C++ program from GitHub (available at the following link: https://github.com/mortehu/text-classifier).
I have a Mac and am trying to run it in the terminal. I think I have downloaded autoconf and automake, but I'm not sure. To run the program I go to the correct folder in the terminal and then run
./configure && make
But I get the error:
WARNING: 'aclocal-1.15' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
http://www.gnu.org/software/automake
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
http://www.gnu.org/software/autoconf
http://www.gnu.org/software/m4/
http://www.perl.org/
make: *** [aclocal.m4] Error 127
I have Xcode and g++ and all the things required to build C programs but, as is probably obvious, I have no idea what I'm doing.
What is the easiest, simplest way to run the program at the above link? I realise it comes with a readme and example usage, but I cannot get that to work.

Before running ./configure try running autoreconf -f -i. The autoreconf program automatically runs autoheader, aclocal, automake, autopoint and libtoolize as required.
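For the project in the question, the full sequence would then look something like this (a sketch, run from the text-classifier checkout):
$ autoreconf -f -i    # regenerates aclocal.m4, configure, Makefile.in, etc.
$ ./configure
$ make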
Edit to add: This is usually caused by checking out code from Git instead of extracting it from a .zip or .tar.gz archive. Git does not preserve files' timestamps, so generated files like configure and aclocal.m4 can appear out of date relative to their sources, and make then tries to regenerate them. As others have mentioned, there are ways to get around this if you don't have a sufficiently recent version of autoreconf.
Another edit: This error can also be caused by copying a source folder extracted from an archive to another machine with scp. The copied files can end up with fresh, inconsistent timestamps, which makes it look as though a rebuild is necessary. To avoid this, copy the archive itself and extract it in place on the target machine.

Often, you don't need any of the auto* tools, and the simplest solution is to run touch aclocal.m4 configure in the relevant folder (and also to touch Makefile.am and Makefile.in if they exist). This updates their timestamps, so make considers aclocal.m4 up to date and doesn't try to rebuild it. After that, it's probably best to empty your build directory and rerun configure from scratch. I run into this problem regularly; for me, the root cause is that I copy a library (e.g. the mpfr code for gcc) from another folder and the timestamps change.
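As a rough sketch (the exact list of files depends on the package):
$ cd path/to/source
$ touch aclocal.m4 configure
$ touch Makefile.am Makefile.in    # only if they exist
$ ./configure && make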
Of course, this trick isn't valid if you really do need to regenerate those files, perhaps because you have manually changed them. But hopefully the developers of the package distribute up-to-date files.
And of course, if you do want to install automake and friends, then use the appropriate package-manager for your distribution.
Install aclocal which comes with automake:
brew install automake # for Mac
apt-get install automake # for Ubuntu
Try again:
./configure && make

You can install the version you need easily:
First get source:
$ wget https://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz
Unpack it:
$ tar -xzvf automake-1.15.tar.gz
Build and install:
$ cd automake-1.15
$ ./configure --prefix=/opt/aclocal-1.15
$ make
$ sudo mkdir -p /opt
$ sudo make install
Use it:
$ export PATH=/opt/aclocal-1.15/bin:$PATH
$ aclocal --version
aclocal (GNU automake) 1.15
Now when aclocal is called, you get the right version.

A generic answer that may or may not apply to this specific case:
As the error message hints, aclocal-1.15 should only be required if you modified files that were used to generate aclocal.m4.
If you don't modify any of those files (including configure.ac), then you should not need aclocal-1.15.
In my case, the problem was not that any of those files had been modified, but somehow the timestamp on configure.ac was 6 minutes later than the one on aclocal.m4.
I haven't figured out why, but a clean clone of my git repo solved the issue for me. Maybe something linked to git and how it created files in the first place.
Rather than rerunning autoconf and friends, I would just try to get a clean clone and try again.
It's also possible that somebody committed a change to configure.ac but didn't regenerate the aclocal.m4, in which case you indeed have to rerun automake and friends.
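A quick way to check which situation you are in is to compare the relevant timestamps before touching anything, e.g.:
$ ls -l configure.ac aclocal.m4 configure
$ [ configure.ac -nt aclocal.m4 ] && echo "make will want to regenerate aclocal.m4"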

The whole point of Autotools is to provide an arcane M4-macro-based language which ultimately compiles to a shell script called ./configure. You can ship this compiled shell script with the source code and that script should do everything to detect the environment and prepare the program for building. Autotools should only be required by someone who wants to tweak the tests and refresh that shell script.
It defeats the point of Autotools if GNU This and GNU That has to be installed on the system for it to work. Originally, it was invented to simplify the porting of programs to various Unix systems, which could not be counted on to have anything on them. Even the constructs used by the generated shell code in ./configure had to be very carefully selected to make sure they would work on every broken old shell just about everywhere.
The problem you're running into is due to some broken Makefile steps invented by people who simply don't understand what Autotools is for and the role of the final ./configure script.
As a workaround, you can go into the Makefile and make some changes to get this out of the way. As an example, I'm building the Git head of GNU Awk and running into this same problem. I applied this patch to Makefile.in, however, and I can successfully make gawk:
diff --git a/Makefile.in b/Makefile.in
index 5585046..b8b8588 100644
--- a/Makefile.in
+++ b/Makefile.in
@@ -312,12 +312,12 @@ distcleancheck_listfiles = find . -type f -print
 # Directory for gawk's data files. Automake supplies datadir.
 pkgdatadir = $(datadir)/awk
-ACLOCAL = @ACLOCAL@
+ACLOCAL = true
 AMTAR = @AMTAR@
 AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
-AUTOCONF = @AUTOCONF@
-AUTOHEADER = @AUTOHEADER@
-AUTOMAKE = @AUTOMAKE@
+AUTOCONF = true
+AUTOHEADER = true
+AUTOMAKE = true
 AWK = @AWK@
 CC = @CC@
 CCDEPMODE = @CCDEPMODE@
Basically, I changed things so that the harmless true shell command is substituted for all the Auto-stuff programs.
The actual build steps for Gawk don't need the Auto-stuff! It's only involved in some rules that get invoked if parts of the Auto-stuff have changed and need to be re-processed. However, the Makefile is structured in such a way that it fails if the tools aren't present.
Before the above patch:
$ ./configure
[...]
$ make gawk
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/kaz/gawk/missing aclocal-1.15 -I m4
/home/kaz/gawk/missing: line 81: aclocal-1.15: command not found
WARNING: 'aclocal-1.15' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
<http://www.gnu.org/software/automake>
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
<http://www.gnu.org/software/autoconf>
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [aclocal.m4] Error 127
After the patch:
$ ./configure
[...]
$ make gawk
CDPATH="${ZSH_VERSION+.}:" && cd . && true -I m4
CDPATH="${ZSH_VERSION+.}:" && cd . && true
gcc -std=gnu99 -DDEFPATH='".:/usr/local/share/awk"' -DDEFLIBPATH="\"/usr/local/lib/gawk\"" -DSHLIBEXT="\"so"\" -DHAVE_CONFIG_H -DGAWK -DLOCALEDIR='"/usr/local/share/locale"' -I. -g -O2 -DNDEBUG -MT array.o -MD -MP -MF .deps/array.Tpo -c -o array.o array.c
[...]
gcc -std=gnu99 -g -O2 -DNDEBUG -Wl,-export-dynamic -o gawk array.o awkgram.o builtin.o cint_array.o command.o debug.o dfa.o eval.o ext.o field.o floatcomp.o gawkapi.o gawkmisc.o getopt.o getopt1.o int_array.o io.o main.o mpfr.o msg.o node.o profile.o random.o re.o regex.o replace.o str_array.o symbol.o version.o -ldl -lm
$ ./gawk --version
GNU Awk 4.1.60, API: 1.2
Copyright (C) 1989, 1991-2015 Free Software Foundation.
[...]
There we go. As you can see, the CDPATH= command lines are where the Auto-stuff used to be invoked; now the true commands run there instead. These report successful termination, and so it just falls through that junk to do the darned build, which is perfectly configured.
I did make gawk because there are some subdirectories that get built which fail; the trick has to be repeated for their respective Makefiles.
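If you'd rather not patch Makefile.in, the same effect can usually be achieved by overriding the variables on the make command line, since command-line assignments take precedence over assignments in the Makefile. A sketch, assuming the Makefile uses the standard variable names shown in the patch above:
$ ./configure
$ make gawk ACLOCAL=true AUTOCONF=true AUTOHEADER=true AUTOMAKE=true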
If you're running into this kind of thing with a pristine, official tarball of the program from its developers, then complain. It should just unpack, ./configure and make without you having to patch anything or install any Automake or Autoconf materials.
Ideally, a pull of their Git head should also behave that way.

I think the touch command is the right answer, e.g. do something like
touch --date="`date`" aclocal.m4 Makefile.am configure Makefile.in
before [./configure && make].
Sidebar I: Otherwise, I agree with @kaz: adding dependencies for aclocal.m4 and/or configure and/or Makefile.am and/or Makefile.in makes assumptions about the target system that may be invalid. Specifically, those assumptions are
1) that all target systems have autotools,
2) that all target systems have the same version of autotools (e.g. automake-1.15 in this case),
3) that, if either (1) or (2) is not true for some user, that user is extracting the package from a maintainer-produced TAR or ZIP archive that preserves the timestamps of the relevant files, in which case all autotool/configure/Makefile.am/Makefile.in dependencies in the configure-generated Makefile will be satisfied before the make command is issued.
The second assumption fails on many Mac systems because automake-1.14 is the "latest" for OSX (at least that is what I see in MacPorts, and apparently the same is true for brew).
The third assumption fails spectacularly in a world with Github. This failure is an example of an "everyone thinks they are normative" mindset; specifically, the maintainers, who are the only class of users that should need to edit Makefile.am, have now put everyone into that class.
Perhaps there is an option in autowhatever that keeps these dependencies from being added to Makefile.in and/or Makefile.
Sidebar II [Why @kaz is right]: of course it is obvious, to me and other cognoscenti, to simply try a sequence of [touch] commands to keep the configure-created Makefile from re-running configure and the autotools. But that is not the point of configure; the point of configure is to ensure that as many users on as many different systems as possible can simply do [./configure && make] and move on; most users are not interested in "shaving the yak", e.g. debugging faulty assumptions of the autotools developers.
Sidebar III: it could be argued that ./configure, now that autotools adds these dependencies, is the wrong build tool to use with Github-distributed packages.
Sidebar IV: perhaps configure-based Github repos should put the necessary touch command into their readme, e.g. https://github.com/drbitboy/Tycho2_SQLite_RTree.

2018, yet another solution ...
https://github.com/apereo/mod_auth_cas/issues/97
in some cases simply running
$ autoreconf -f -i
and nothing else .... solves the problem.
You do that in the source directory (pcre2-10.30 in that case).
What a nightmare.
(This usually did not solve the problem in 2017, but now it usually does seem to solve it; they fixed something. Also, it seems your Dockerfile should now usually start with "FROM ibmcom/swift-ubuntu"; previously you had to give a certain version/dev-build to make it work.)

The problem is not the automake package, it's the repository:
sudo apt-get install automake
installs aclocal-1.4, which is why you can't find 1.15 (on Ubuntu 14/15).
Use this script to install the latest version:
https://github.com/gp187/nginx-builder/blob/master/fix/aclocal.sh

2017 - High Sierra
It is really hard to get automake 1.15 working on a Mac. We hired an expert to get it working. Everything worked beautifully.
Later I happened to upgrade a Mac to High Sierra.
The Docker pipeline stopped working!
Even though automake 1.15 is working fine on the Mac.
How to fix,
Short answer, I simply trashed the local repo, and checked out the repo again.
This suggestion is noted in the mix on this QA page and elsewhere.
It then worked fine!
It likely has something to do with the aclocal.m4 and similar files. (But who knows really). I endlessly massaged those files ... but nothing.
For some unknown reason, if you just scrap your repo and get the repo again, everything works!
I tried for hours every combo of touching/deleting etc etc the files in question, but no. Just check out the repo from scratch!

Related

CMake include_directories isn't using one directory, but using all others around it

I'm trying to use conda to manage several dependencies for a c++ library I'm building (I was previously using git submodules in my project's external/ dir which worked fine, but for a number of reasons I wanted to migrate to conda).
The libraries I need to use are tbb, zstd, and cspice. I've verified they are all installed correctly by conda, and are in my environment (named vira_env). The headers are very clearly in ~/mamba_envs/vira_env/include/.
However when I add include_directories(${CMAKE_INSTALL_PREFIX}/include) to my root CMakeLists.txt and run cmake using -DCMAKE_INSTALL_PREFIX=${CONDA_PREFIX}, it fails to find any of the headers.
Running make VERBOSE=1 reveals that g++ has no -I argument to include the ~/mamba_envs/vira_env/include/ directory. Which makes sense, as it is failing to find the headers that are there.
However, if I use include_directories(${CMAKE_INSTALL_PREFIX}) and rerun make VERBOSE=1, it clearly shows that g++ has a -I /home/cgnam/mamba_envs/vira_env/.
Similarly, if I use include_directories(${CMAKE_INSTALL_PREFIX}/include/oneapi) it shows g++ has a -I /home/cgnam/mamba_envs/vira_env/include/oneapi. (oneapi is the directory where the tbb headers are placed).
So include_directories() will happily use either the conda environment root, or even one of the directories inside the conda environment's include/ directory... but it will just completely ignore the include/ directory itself.
EDIT
As discovered in one of my comments below, printing CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES shows /home/cgnam/mamba_envs/vira_env/include. Which explains why my include_directories() is not having any effect on the g++ call by make, since it believes that the compiler should be looking in that directory already.
make VERBOSE=1 is showing that the compiler being used is /home/cgnam/mamba_envs/vira_env/bin/x86_64-conda-linux-gnu-c++. However, which g++ shows ~/mamba_envs/vira_env/bin/g++. Further, which ld returns ~/mamba_envs/vira_env/bin/ld.
If I use ld --verbose | grep SEARCH_DIR, it does not show /home/cgnam/mamba_envs/vira_env/include. Neither does ~/mamba_envs/vira_env/bin/x86_64-conda-linux-gnu-ld --verbose | grep SEARCH_DIR.
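One way to see which directories the conda compiler already treats as built-in (which is presumably where CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES comes from) is to ask the compiler itself; a sketch using the compiler path reported by make VERBOSE=1:
$ echo | ~/mamba_envs/vira_env/bin/x86_64-conda-linux-gnu-c++ -x c++ -E -v - 2>&1 | sed -n '/search starts here/,/End of search list/p'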

Compiling Tensorflow with a custom Clang + Libc++ (instead of stdlibc++)

I am trying to compile TensorFlow with a custom clang/llvm toolchain, using clang's native libc++ (instead of borrowing GCC's libstdc++).
It looks like Bazel plainly assumes that every clang will use GCC's libraries, because I get these errors:
$ bazel build --cxxopt=-std=c++11 --cxxopt=-stdlib=libc++ tensorflow:libtensorflow.so
INFO: Found 1 target...
INFO: From Compiling
external/protobuf/src/google/protobuf/compiler/js/embed.cc [for host]:
external/protobuf/src/google/protobuf/compiler/js/embed.cc:37:12:
warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
^
1 warning generated.
ERROR: /home/hbucher/.cache/bazel/_bazel_hbucher/ad427c7fddd5b68de5e1cfaa7cd8c8cc/external/com_googlesource_code_re2/BUILD:11:1: undeclared inclusion(s) in rule '@com_googlesource_code_re2//:re2':
this rule is missing dependency declarations for the following files included by 'external/com_googlesource_code_re2/re2/bitstate.cc':
'/home/hbucher/install/include/c++/v1/stddef.h'
'/home/hbucher/install/include/c++/v1/__config'
I tried to hack into tools/cpp/CROSSTOOL inside bazel as some posts suggested to add the line
cxx_builtin_include_directory: "/home/hbucher/install/include/c++/v1"
but to no avail, it does not seem to make any difference.
Then I tried to follow a bazel tutorial to create a custom toolchain. The text does not help much because it is actually about writing a cross tool, while what I am trying to do is tweak the existing host rules, and somehow bazel seems to undo every attempt I make to tweak its parameters.
I have got to the point that is currently in my github repository https://github.com/HFTrader/BazelCustomToolchain
However it does not compile and I cannot even figure out how to start debugging this message.
$ bazel build --crosstool_top=@hbclang//:toolchain tensorflow:libtensorflow.so
.....................
ERROR: The crosstool_top you specified was resolved to
'@hbclang//:toolchain', which does not contain a CROSSTOOL file. You can
use a crosstool from the depot by specifying its label.
INFO: Elapsed time: 2.216s
I have appended these lines to my tensorflow/WORKSPACE
new_local_repository(
name="hbclang",
path="/home/hbucher/BazelCustomToolchain",
build_file = "/home/hbucher/BazelCustomToolchain/BUILD",
)
I have asked this question on bazel's google groups but they redirected me to stackoverflow. At this point I am about to give up.
Has someone attempted to do this, or am I breaking new ground here?
Thank you.
Solved. Not in the intended way but it works for me.
export INSTALL_DIR="$HOME/install"
export CC=$INSTALL_DIR/bin/clang
export CXX=$INSTALL_DIR/bin/clang++
export CXXFLAGS="-stdlib=libc++ -L$INSTALL_DIR/lib"
export LDFLAGS="-L$INSTALL_DIR/lib -lm -lrt"
export LD_LIBRARY_PATH="/usr/lib:/lib/x86_64-linux-gnu/:$INSTALL_DIR/lib"
git clone https://github.com/tensorflow/tensorflow.git tensorflow-github
cd tensorflow-github
mkdir build-tmp && cd build-tmp
cmake ../tensorflow/contrib/cmake/
make -j4
Easy as 1-2-3 with cmake
[2020-05-24: Edit to make the answer up to date.]
TLDR: To build a project with Bazel with a specific Clang binary, and with libc++, this works for me (where INSTALL_DIR is where I've installed llvm):
CC="$INSTALL_DIR/bin/clang" \
BAZEL_CXXOPTS="-stdlib=libc++:-isystem$INSTALL_DIR/include" \
BAZEL_LINKOPTS="-stdlib=libc++" \
BAZEL_LINKLIBS="-L$INSTALL_DIR/lib:-Wl,-rpath,$INSTALL_DIR/lib:-lc++:-lm" \
bazel test //...
Background:
You can use the --repo_env option, e.g. --repo_env=CC=clang, to put these defaults into your project- or system-wide .bazelrc.
This approach uses Bazel's C++ toolchain autoconfiguration which doesn't attempt to declare all the toolchain inputs in BUILD files. This is to simplify the configuration for the user. Therefore whenever you modify the C++ toolchain in a way that Bazel cannot know about (rebuild llvm etc.), you have to run bazel clean --expunge to flush the cache and rerun the autoconfiguration the next time.
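For example, the one-off invocation could look like this sketch (the same --repo_env lines, prefixed with "build ", can be put into a project- or system-wide .bazelrc to make them the default):
$ bazel build \
    --repo_env=CC=clang \
    --repo_env=BAZEL_CXXOPTS="-stdlib=libc++:-isystem$INSTALL_DIR/include" \
    tensorflow:libtensorflow.so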
The robust solution to specifying C++ toolchain in Bazel is to use the CcToolchainConfigInfo. See the documentation at https://docs.bazel.build/versions/master/tutorial/cc-toolchain-config.html and https://docs.bazel.build/versions/master/cc-toolchain-config-reference.html.

Separate build directory for autotools projects not designed to use one

... Sorry, this has to be a duplicate, but I'm just running across answers for people who are making their own projects and can change the build system setup.
Can I always compile a program in a separate build directory, or must it be supported by the program's build system?
For vim, you compile using "./configure && make && sudo make install". If I'm in vim.hg.build and run "../vim.hg/configure .....", I get:
../vim.hg/configure: line 6: cd: src: No such file or directory
For git, you compile using: "make configure && ./configure && make && sudo make install". I was hoping being in git.git.build and running "make --directory=../git.git configure" would work, but that leaves the configure file in ../git.git. Well, maybe just configure left behind isn't so bad, so I then tried "../git.git/configure" which successfully created config.status, config.log, and config.mak.autogen in the build directory. But running make gives:
make: *** No targets specified and no makefile found. Stop.
... Yes, the only Makefile is in git.git itself.
I even tried symlinking the entire directory by running:
for fl in `ls -a ../vim.hg`; do
echo "$fl"
ln -s ../vim.hg/$fl $fl
done
... But, vim's configure and make only modify existing files and subdirectories, so even though I can build this way, the build directory is left with nothing more than symlinks -- no actual separation.
Go cmake!
Out-of-tree building is a feature of Autotools that requires both Autoconf and Automake.
Vim and Git both only use Autoconf and not Automake, so they can't take advantage of that feature.
As a more general answer to your question: simple Autotools projects should work with out-of-tree builds automatically. When using certain advanced features, a bit of care must be taken to ensure that out-of-tree builds continue to work. Sometimes projects don't do this.
(Running make distcheck will test out-of-tree building, so it's a good idea to run it at least once before making a release.)
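For reference, an out-of-tree (VPATH) build of an Automake-based project is just this sketch, with a sibling build directory whose name is arbitrary:
$ mkdir ~/proj.build && cd ~/proj.build
$ ~/proj/configure
$ make && sudo make install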

How to Use CCache with CMake?

I would like to do the following: If CCache is present in PATH, use "ccache g++" for compilation, else use g++. I tried writing a small my-cmake script containing
CC="ccache gcc" CXX="ccache g++" cmake $*
but it does not seem to work (running make still does not use ccache; I checked this using CMAKE_VERBOSE_MAKEFILE on).
Update:
As per this link I tried changing my script to
cmake -D CMAKE_CXX_COMPILER="ccache" -D CMAKE_CXX_COMPILER_ARG1="g++" -D CMAKE_C_COMPILER="ccache" -D CMAKE_C_COMPILER_ARG1="gcc" $*
but cmake bails out complaining that a test failed on using the compiler ccache (which can be expected).
As of CMAKE 3.4 you can do:
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
It is now possible to specify ccache as a launcher for compile commands and link commands (since CMake 2.8.0). That works for the Makefile and Ninja generators. To do this, just set the following properties:
find_program(CCACHE_FOUND ccache)
if(CCACHE_FOUND)
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)
set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache) # Less useful to do it for linking, see edit2
endif(CCACHE_FOUND)
It is also possible to set these properties only for specific directories or targets.
For Ninja, this is possible since version 3.4.
For XCode, Craig Scott gives a workaround in his answer.
Edit: Thanks to uprego's and Lekensteyn's comments, I edited the answer to check whether ccache is available before using it as a launcher, and to say for which generators a compile launcher can be used.
Edit2: @Emilio Cobos recommended avoiding this for the linking part, as ccache doesn't improve linking speed and can mess with other types of cache like sccache.
I personally have /usr/lib/ccache in my $PATH. This directory contains loads of symlinks for every possible name the compiler could be called from (like gcc and gcc-4.3), all pointing to ccache.
And I didn't even create the symlinks. That directory comes pre-filled when I install ccache on Debian.
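So on such a system, using ccache with CMake can be as simple as the following sketch (Debian/Ubuntu paths):
$ ls /usr/lib/ccache               # cc, c++, gcc, g++, gcc-4.3, ... all symlinks to ccache
$ export PATH=/usr/lib/ccache:$PATH
$ cmake .. && make                 # CMake picks up the wrappers as the compilers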
From CMake 3.1, it is possible to use ccache with the Xcode generator, and Ninja is supported from CMake 3.4 onwards. Ninja will honour RULE_LAUNCH_COMPILE just like the Unix Makefiles generator (so @Babcool's answer gets you there for Ninja too), but getting ccache working for the Xcode generator takes a little more work. The following article explains the method in detail, focussing on a general implementation which works for all three CMake generators and making no assumptions about setting up ccache symlinks or the underlying compiler used (it still lets CMake decide the compiler):
https://crascit.com/2016/04/09/using-ccache-with-cmake/
The general gist of the article is as follows. The start of your CMakeLists.txt file should be set up something like this:
cmake_minimum_required(VERSION 2.8)
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
# Support Unix Makefiles and Ninja
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE "${CCACHE_PROGRAM}")
endif()
project(SomeProject)
get_property(RULE_LAUNCH_COMPILE GLOBAL PROPERTY RULE_LAUNCH_COMPILE)
if(RULE_LAUNCH_COMPILE AND CMAKE_GENERATOR STREQUAL "Xcode")
# Set up wrapper scripts
configure_file(launch-c.in launch-c)
configure_file(launch-cxx.in launch-cxx)
execute_process(COMMAND chmod a+rx
"${CMAKE_BINARY_DIR}/launch-c"
"${CMAKE_BINARY_DIR}/launch-cxx")
# Set Xcode project attributes to route compilation through our scripts
set(CMAKE_XCODE_ATTRIBUTE_CC "${CMAKE_BINARY_DIR}/launch-c")
set(CMAKE_XCODE_ATTRIBUTE_CXX "${CMAKE_BINARY_DIR}/launch-cxx")
set(CMAKE_XCODE_ATTRIBUTE_LD "${CMAKE_BINARY_DIR}/launch-c")
set(CMAKE_XCODE_ATTRIBUTE_LDPLUSPLUS "${CMAKE_BINARY_DIR}/launch-cxx")
endif()
The two script template files launch-c.in and launch-cxx.in look like this (they should be in the same directory as the CMakeLists.txt file):
launch-c.in:
#!/bin/sh
export CCACHE_CPP2=true
exec "${RULE_LAUNCH_COMPILE}" "${CMAKE_C_COMPILER}" "$@"
launch-cxx.in:
#!/bin/sh
export CCACHE_CPP2=true
exec "${RULE_LAUNCH_COMPILE}" "${CMAKE_CXX_COMPILER}" "$@"
The above uses RULE_LAUNCH_COMPILE alone for Unix Makefiles and Ninja, but for the Xcode generator it relies on help from CMake's CMAKE_XCODE_ATTRIBUTE_... variables support. The setting of the CC and CXX user-defined Xcode attributes to control the compiler command and LD and LDPLUSPLUS for the linker command is not, as far as I can tell, a documented feature of Xcode projects, but it does seem to work. If anyone can confirm it is officially supported by Apple, I'll update the linked article and this answer accordingly.
I didn't like to set a symlink from g++ to ccache. And CXX="ccache g++" didn't work for me, as some cmake test case wanted to have just the compiler program without arguments.
So I used a small bash script instead:
#!/bin/bash
ccache g++ "$@"
and saved it as an executable in /usr/bin/ccache-g++.
Then I configured cmake to use /usr/bin/ccache-g++ as the C++ compiler.
This way it passes the cmake test cases and I feel more comfortable than having symlinks that I might forget about in 2 or 3 weeks and then maybe wonder if something doesn't work...
I verified the following works (source: this link):
CC="gcc" CXX="g++" cmake -D CMAKE_CXX_COMPILER="ccache" -D CMAKE_CXX_COMPILER_ARG1="g++" -D CMAKE_C_COMPILER="ccache" -D CMAKE_C_COMPILER_ARG1="gcc" $*
Update: I later realized that even this does not work. Strangely it works every alternate time (the other times cmake complains).
Let me add one important item that was not mentioned here before.
While bootstrapping a minimalistic build system from the ubuntu:18.04 docker image, I've found that order of installation makes a difference.
In my case ccache worked fine when calling gcc, but failed to catch invocations of the same compiler by the other names: cc and c++.
To fully install ccache, you need to make sure all compilers are installed first, or add a call to update-ccache-symlinks to be safe.
sudo /usr/sbin/update-ccache-symlinks
export PATH="/usr/lib/ccache/:$PATH"
... and then (due to updated symlinks) also calls to cc and c++ get caught!
In my opinion the best way is to symlink gcc and g++ to ccache, but if you would like to use ccache within cmake, try this:
export CC="ccache gcc" CXX="ccache g++" cmake ...
Here are 2 methods I think are clean/robust, and also don't pollute your CMake code.
1.) Set environment variables
This method is nice since you don't have to individually set it up for each CMake project. The con is you may not want ccache for each CMake project.
# Requires CMake 3.17 (https://cmake.org/cmake/help/latest/envvar/CMAKE_LANG_COMPILER_LAUNCHER.html)
export CMAKE_CXX_COMPILER_LAUNCHER=/usr/bin/ccache
export CMAKE_C_COMPILER_LAUNCHER=/usr/bin/ccache
2.) Pass in cache variables during project configuration
Con: a bit annoying to do for each project. This can be negated by your IDE though.
# Requires CMake 3.4
$ cmake ... -D CMAKE_CXX_COMPILER_LAUNCHER=/usr/bin/ccache \
-D CMAKE_C_COMPILER_LAUNCHER=/usr/bin/ccache
NOTE: It isn't really necessary to specify the full path.
If ccache is in your path you can just specify ccache instead.
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_C_COMPILER_LAUNCHER=ccache
This extends @Nicolas' answer.
Add following line to your cmake file:
list(PREPEND CMAKE_PROGRAM_PATH /usr/lib/ccache)
Or add it as argument to cmake configuration step:
cmake -DCMAKE_PROGRAM_PATH=/usr/lib/ccache

Howto create software package in Unix/Linux?

How can we create a software package, so that after extracting our software tarball the user can do the typical steps?
$ gunzip < mycode.tar.gz | tar xvf -
$ ./configure
$ make
$ make install
An alternative to the hard-to-understand GNU Autotools is CMake.
http://www.cmake.org/cmake/help/examples.html
e.g. KDE is using it.
Look into the GNU autoconf/automake toolchain. Here's a free tutorial/book.
In the old days, this process was done by hand. Each Makefile was written by hand (the file make uses as a sort of script). This became problematic when it came to portability, and so the configure script was made. The ./configure script was written by hand for each project as well. Eventually this was automated by GNU with their autotools package. This consists of autoconf, automake, and a few others. While alternatives exist, particularly for make, autotools is most widely used. ...At least on GNU/Linux systems. Alternatives include the already mentioned CMake, Boost.Build, Boost.Jam, SCons, and more.
Use autotools to create the configure script (which will generate the Makefile necessary for the last two steps), then make a tarball with all your code and stuff in it.
rpmbuild is the command used to build RPM packages; see its man page and the available tutorials.
Autotools.
You'll need to write a configure.ac and a Makefile.am file.
configure.ac is pretty easy and can be mostly autogenerated by running 'autoscan' on your source code. That will generate a 'configure.scan' file that you'll need to tweak to produce the final configure.ac.
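A sketch of that workflow:
$ autoscan                         # writes configure.scan based on your sources
$ mv configure.scan configure.ac
$ $EDITOR configure.ac             # adjust AC_INIT, add AM_INIT_AUTOMAKE, AC_CONFIG_FILES([Makefile]), etc.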
The Makefile.am file is all based on conventions. You'll probably need something like:
AUTOMAKE_OPTIONS = foreign subdir-objects
AM_CXXFLAGS = -std=c++11 -static-libstdc++ -Wall -Werror \
-Wfatal-errors -I blah
AM_LDFLAGS = blah
bin_PROGRAMS = mybinary
mybinary_SOURCES = \
blah.h blah.cc
Everything is based on a naming scheme:
dist vs nodist = should it be distributed (included in the tarball)
inst vs noinst = should it be installed
DATA = data files
MANS = man pages
SOURCES = source code
so dist_noinst_DATA is for data files required for building but are not installed.
Once you have both of those files you usually need to run something like:
aclocal && autoheader && automake --add-missing && autoconf
to setup autotools files required for building. This can be put in a shell script and run prior to running ./configure.
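Once configure works, Automake also gives you the release tarball: make dist (or better, make distcheck, which additionally test-builds and test-installs the result) produces the <package>-<version>.tar.gz, named from AC_INIT, that your users will unpack and build with the usual ./configure && make && make install. A sketch:
$ aclocal && autoheader && automake --add-missing && autoconf
$ ./configure
$ make distcheck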