How to correctly locate libstdc++.so.* used to compile an autotools project, in order to bundle it with the distribution?
After our team switched to C++11, we need to bundle the new libstdc++.so.6 with every distribution of our software, but how to correctly find the library?
Cygwin: /lib/gcc/x86_64-pc-cygwin/5.2.0
Linux: /usr/lib64
Custom install: /usr/local/gcc-5.2.0/lib64
I already tried:
install-exec-local:
cp $(shell dirname $(shell which ${CXX}))/../lib64/libstdc++.so.6 ${prefix}/lib
(instead of make dist, we make install into a configured prefix and then bundle the prefix)
And this works, but not for Cygwin, and I'm not sure it will work on other platforms, thus my question.
Edit
See this question for a rationale for bundling libstdc++.so with the software. It works very well. We also use dlopen to load .so files that depend on libstdc++.so, so linking statically is harder than it sounds.
The only issue is locating the libstdc++.so.6 at make dist time (or our equivalent thereof), so that I can cp it to our distribution's ${prefix}/lib directory, before tar-gzipping it and delivering it to the customer. Ideally, I'm looking for something like g++ --print-path-to-libstdc++.so.
The target system, where the software is run, has an older libstdc++.so, that's the whole reason for bundling our own.
g++ -print-file-name=libstdc++.so
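If the library is found in the compiler's search directories, this prints its full path; otherwise it just echoes the name back, which makes a sanity check easy. A minimal sketch of how this could feed the install step (assuming a POSIX shell, with ${CXX} and ${prefix} as in the question):
lib=$(${CXX} -print-file-name=libstdc++.so.6)
test "$lib" != "libstdc++.so.6" || { echo "libstdc++.so.6 not found" >&2; exit 1; }
cp "$lib" "${prefix}/lib"
Since -print-file-name resolves against the compiler's own search directories, it should follow whichever GCC install ${CXX} points at (Cygwin, /usr/lib64, or a custom prefix), though as noted below, some Cygwin/MSys builds just echo the name back.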
Why don't you just link with the -static-libstdc++ option (and -static-libgcc too if you need it)? Then you don't have to worry about bundling, library search paths, etc.
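For example, a sketch (these flags are supported by reasonably recent GCC releases):
g++ -std=c++11 -static-libstdc++ -static-libgcc -o myapp main.cpp
Note that this only covers the executable itself; as the question's edit points out, dlopen-ed .so files that themselves depend on libstdc++.so complicate the static approach.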
Such a simple thing, but so hard to do.
On Cygwin/MSys, it looks like g++ -print-file-name= only echoes back the file name you give it, without prepending a path.
Instead of focusing on CXX, I tried ldd BINFILE. And that does work on both Linux and Cygwin/MSys, providing full path. But on the latter - only if the binary is 64-bit; for 32-bit, it only shows paths to wow64*.dll.
The only solution which worked for all binaries on both Linux and Windows was:
objdump -p BINFILE | sed -n 's/\s*\(DLL Name:\|NEEDED\)\s*\(.*\)$/\2/p'
But this gives only file names, without paths. Also, the complex sed expression is not the most maintainable thing.
So I think the best solution is to hard-code file names for each OS, and use CXX location as the base for path - like the OP proposed.
Here is a Makefile.am snippet for copying all libraries, but only on Windows (within configure.ac, AM_CONDITIONAL([TARGET_WINDOWS],... needs to be set).
if TARGET_WINDOWS
install-exec-hook:
mkdir -p "$(DESTDIR)${prefix}/lib"
$(eval lib_SHARED_INSTALL := $(shell objdump -p BINFILE$(EXEEXT) | sed -n 's/\s*\(DLL Name:\|NEEDED\)\s*\(.*\)$$/\2/p' | xargs -I {} find $(shell dirname $(shell which ${CXX})) -name '{}'))
cp $(lib_SHARED_INSTALL) $(DESTDIR)${prefix}/lib
endif
Related
I'm trying to use conda to manage several dependencies for a c++ library I'm building (I was previously using git submodules in my project's external/ dir which worked fine, but for a number of reasons I wanted to migrate to conda).
The libraries I need to use are tbb, zstd, and cspice. I've verified they are all installed correctly by conda, and are in my environment (named vira_env). The headers are very clearly in ~/mamba_envs/vira_env/include/.
However when I add include_directories(${CMAKE_INSTALL_PREFIX}/include) to my root CMakeLists.txt and run cmake using -DCMAKE_INSTALL_PREFIX=${CONDA_PREFIX}, it fails to find any of the headers.
Running make VERBOSE=1 reveals that g++ has no -I argument to include the ~/mamba_envs/vira_env/include/ directory. Which makes sense, as it is failing to find the headers that are there.
However, if I use include_directories(${CMAKE_INSTALL_PREFIX}) and rerun make VERBOSE=1, it clearly shows that g++ has a -I /home/cgnam/mamba_envs/vira_env/.
Similarly, if I use include_directories(${CMAKE_INSTALL_PREFIX}/include/oneapi) it shows g++ has a -I /home/cgnam/mamba_envs/vira_env/include/oneapi. (oneapi is the directory where the tbb headers are placed).
So include_directories() will happily use either the conda environment root, or even one of the directories inside the conda environment's include/ directory... but it will just completely ignore the include/ directory itself.
EDIT
As discovered in one of my comments below, printing CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES shows /home/cgnam/mamba_envs/vira_env/include. Which explains why my include_directories() is not having any effect on the g++ call by make, since it believes that the compiler should be looking in that directory already.
make VERBOSE=1 is showing that the compiler being used is: /home/cgnam/mamba_envs/vira_env/bin/x86_64-conda-linux-gnu-c++. However, which g++ shows ~/mamba_envs/vira_env/bin/g++. Further, which ld returns ~/mamba_envs/vira_env/bin/ld.
If I use ld --verbose | grep SEARCH_DIR, it does not show /home/cgnam/mamba_envs/vira_env/include. Neither does ~/mamba_envs/vira_env/bin/x86_64-conda-linux-gnu-ld --verbose | grep SEARCH_DIR.
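One way to inspect the compiler's implicit include directories from the shell (a sketch; the exact output format varies across GCC versions):
echo | ~/mamba_envs/vira_env/bin/x86_64-conda-linux-gnu-c++ -E -v -x c++ - 2>&1 | sed -n '/#include <...> search starts here:/,/End of search list./p'
If /home/cgnam/mamba_envs/vira_env/include shows up there, that matches CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES and explains the missing -I: CMake deliberately omits -I flags for directories the compiler already searches on its own. (ld's SEARCH_DIR list only covers libraries, not headers, which is why grepping it shows nothing.)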
I'm trying to figure out a simplified way of installing C and C++ code and libraries built from them, primarily for Linux. I think GNU Autotools, CMake, etc. would be overkill for what I'm trying to do.
An example: I've written some code which, when 'make' is run, builds a library, 'example.a'. (I'm going with static linking thus far. Shared libraries will introduce still more issues; I'll crawl before I walk.) Use of the functions requires including 'example1.h', 'example2.h', etc.
With Autotools, one would run 'make install' or 'sudo make install' to copy example.a to /usr/local/lib and the include files to /usr/local/include. If executables had been built, those would get copied to /usr/local/bin.
I've tried adding the following lines to my makefile:
install:
cp example.a /usr/local/lib
cp example1.h /usr/local/include
cp example2.h /usr/local/include
cp example /usr/local/bin
This seems to work, mostly. Other projects are then able to "see" the .h files and build correctly. But the .a file is not found; the only way to get it linked is to give the path explicitly, as /usr/local/lib/example.a. (Though gcc insists it's looking for libraries in that path.) So:
Question 1: Should my user-built libraries be put in /usr/local/lib? Or am I using the wrong directory?
Now, my next problem: this has to be done with 'sudo make install'. I'm thinking that for some projects of this ilk, where only one user will be making use of the code in question, the libraries should go someplace such as $HOME/lib, include files to $HOME/include, and executables to $HOME/bin.
Question 2: Is there a "standard" way to install includes/libraries for just the current user, rather than doing so for all users? (Thus avoiding the need for root use; I'd also think it would be a "cleaner" system if only the user using the program had to deal with these files.)
You should probably use install to copy with explicit permissions. example.a probably isn't being found because it doesn't start with lib (a stupid requirement, I know).
Putting your lib, include, and bin under $HOME/.local is probably the closest thing to a per-user standard setup.
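Concretely, renaming the archive so it carries the lib prefix lets -l find it (a sketch, using the paths from the question):
sudo mv /usr/local/lib/example.a /usr/local/lib/libexample.a
gcc main.c -lexample   # the linker resolves -lexample to libexample.a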
Edit:
I just tested it and /usr/local/lib works fine for me:
foo.c
#include <stdio.h>
void foo() { puts("foo"); }
Compile and install:
gcc -c foo.c
ar crs libfoo.a foo.o
install -m 0755 libfoo.a /usr/local/lib # 0644 should do too
In a different directory:
main.c
void foo(void);
int main() { foo(); return 0; }
Compile, link, and run:
gcc main.c -lfoo
./a.out
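The same test works per-user under $HOME/.local; the only difference is that you must pass the search path explicitly, since it is not among the defaults (a sketch):
mkdir -p $HOME/.local/lib
install -m 0644 libfoo.a $HOME/.local/lib
gcc main.c -L$HOME/.local/lib -lfoo
./a.out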
I'm really eager to start using Google's new Tensorflow library in C++. The website and docs are just really unclear in terms of how to build the project's C++ API and I don't know where to start.
Can someone with more experience help by discovering and sharing a guide to using tensorflow's C++ API?
To get started, you should download the source code from Github, by following the instructions here (you'll need Bazel and a recent version of GCC).
The C++ API (and the backend of the system) is in tensorflow/core. Right now, only the C++ Session interface and the C API are supported. You can use either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a GraphDef protocol buffer. There is also an experimental feature for building graphs in C++, but this is currently not quite as full-featured as the Python API (e.g. no support for auto-differentiation at present). You can see an example program that builds a small graph in C++ here.
The second part of the C++ API is the API for adding a new OpKernel, which is the class containing implementations of numerical kernels for CPU and GPU. There are numerous examples of how to build these in tensorflow/core/kernels, as well as a tutorial for adding a new op in C++.
To add to @mrry's post, I put together a tutorial that explains how to load a TensorFlow graph with the C++ API. It's very minimal and should help you understand how all of the pieces fit together. Here's the meat of it:
Requirements:
Bazel installed
Clone TensorFlow repo
Folder structure:
tensorflow/tensorflow/|project name|/
tensorflow/tensorflow/|project name|/|project name|.cc (e.g. https://gist.github.com/jimfleming/4202e529042c401b17b7)
tensorflow/tensorflow/|project name|/BUILD
BUILD:
cc_binary(
name = "<project name>",
srcs = ["<project name>.cc"],
deps = [
"//tensorflow/core:tensorflow",
]
)
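With that in place, you could build and run the target like this (a sketch, run from the repository root; substitute your actual project name):
bazel build //tensorflow/|project name|:|project name|
./bazel-bin/tensorflow/|project name|/|project name|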
Two caveats for which there are probably workarounds:
Right now, building things needs to happen within the TensorFlow repo.
The compiled binary is huge (103MB).
https://medium.com/@jimfleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f
If you are thinking of using the Tensorflow C++ API in a standalone package, you will probably need tensorflow_cc.so (there is also a C API version, tensorflow.so). To build the C++ version you can use:
bazel build -c opt //tensorflow:libtensorflow_cc.so
Note 1: If you want to add intrinsics support, you can add these flags: --copt=-msse4.2 --copt=-mavx
Note 2: If you are thinking of using OpenCV in your project as well, there is an issue when using both libs together (tensorflow issue), and you should use --config=monolithic.
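Combining those flags, the build command could look like this (a sketch):
bazel build -c opt --copt=-msse4.2 --copt=-mavx --config=monolithic //tensorflow:libtensorflow_cc.so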
After building the library you need to add it to your project.
To do that you can include these paths:
tensorflow
tensorflow/bazel-tensorflow/external/eigen_archive
tensorflow/bazel-tensorflow/external/protobuf_archive/src
tensorflow/bazel-genfiles
And link the library to your project:
tensorflow/bazel-bin/tensorflow/libtensorflow_framework.so (unused if you build with --config=monolithic)
tensorflow/bazel-bin/tensorflow/libtensorflow_cc.so
And when you are building your project, you should also tell your compiler that you are going to use the C++11 standard.
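Putting the include paths, library paths, and standard together, a compile line could look like this (a sketch; my_app.cc is a placeholder for your source file):
g++ -std=c++11 my_app.cc -o my_app \
  -Itensorflow -Itensorflow/bazel-genfiles \
  -Itensorflow/bazel-tensorflow/external/eigen_archive \
  -Itensorflow/bazel-tensorflow/external/protobuf_archive/src \
  -Ltensorflow/bazel-bin/tensorflow \
  -ltensorflow_cc -ltensorflow_framework
(Drop -ltensorflow_framework if you built with --config=monolithic, as noted above.)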
Side note: paths are relative to tensorflow version 1.5 (you may need to check whether anything changed in your version).
Also, this link helped me a lot in finding all this information: link
First, after installing protobuf and eigen, you'd like to build Tensorflow:
./configure
bazel build //tensorflow:libtensorflow_cc.so
Then copy the following include headers and the dynamic shared library to /usr/local/include and /usr/local/lib:
mkdir /usr/local/include/tf
cp -r bazel-genfiles/ /usr/local/include/tf/
cp -r tensorflow /usr/local/include/tf/
cp -r third_party /usr/local/include/tf/
cp bazel-bin/tensorflow/libtensorflow_cc.so /usr/local/lib/
Lastly, compile using an example:
g++ -std=c++11 -o tf_example \
-I/usr/local/include/tf \
-I/usr/local/include/eigen3 \
-g -Wall -D_DEBUG -Wshadow -Wno-sign-compare -w \
-L/usr/local/lib \
`pkg-config --cflags --libs protobuf` -ltensorflow_cc tf_example.cpp
One alternative to using the Tensorflow C++ API that I found is cppflow.
It's a lightweight C++ wrapper around the Tensorflow C API. You get very small executables, and it links against the already-compiled libtensorflow.so file. There are also examples of use, and you use CMake instead of Bazel.
If you wish to avoid both building your projects with Bazel and generating a large binary, I have assembled a repository instructing the usage of the TensorFlow C++ library with CMake. You can find it here. The general ideas are as follows:
Clone the TensorFlow repository.
Add a build rule to tensorflow/BUILD (the provided ones do not include all of the C++ functionality).
Build the TensorFlow shared library.
Install specific versions of Eigen and Protobuf, or add them as external dependencies.
Configure your CMake project to use the TensorFlow library.
If you don't mind using CMake, there is also tensorflow_cc project that builds and installs TF C++ API for you, along with convenient CMake targets you can link against. The project README contains an example and Dockerfiles you can easily follow.
You can use this shell script to install (most of) its dependencies and to clone, build, and compile it, putting all the necessary files into the ../src/includes folder:
https://github.com/node-tensorflow/node-tensorflow/blob/master/tools/install.sh
If you don't want to build Tensorflow yourself and your operating system is Debian or Ubuntu, you can download prebuilt packages with the Tensorflow C/C++ libraries. This distribution can be used for C/C++ inference with CPU, GPU support is not included:
https://github.com/kecsap/tensorflow_cpp_packaging/releases
There are instructions on how to freeze a checkpoint in Tensorflow (TFLearn) and load this model for inference with the C/C++ API:
https://github.com/kecsap/tensorflow_cpp_packaging/blob/master/README.md
Beware: I am the developer of this Github project.
I use a hack/workaround to avoid having to build the whole TF library myself, which saves time (it's set up in 3 minutes), disk space, dev dependencies, and the size of the resulting binary. It's officially unsupported, but works well if you just want to quickly jump in.
Install TF through pip (pip install tensorflow or pip install tensorflow-gpu). Then find its library _pywrap_tensorflow.so (TF 0.* - 1.0) or _pywrap_tensorflow_internal.so (TF 1.1+). In my case (Ubuntu) it's located at /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so. Then create a symlink to this library called lib_pywrap_tensorflow.so somewhere where your build system finds it (e.g. /usr/local/lib). The lib prefix is important! You can also give it another lib*.so name - if you call it libtensorflow.so, you may get better compatibility with other programs written to work with TF.
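A sketch of the symlink step (the exact source path and file name depend on your Python and TF versions, as described above):
TF_SO=/usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so
sudo ln -s "$TF_SO" /usr/local/lib/lib_pywrap_tensorflow.so
sudo ldconfig   # refresh the linker cache so -l_pywrap_tensorflow resolves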
Then create a C++ project as you are used to (CMake, Make, Bazel, whatever you like).
And then you're ready to just link against this library to have TF available for your projects (and you also have to link against python2.7 libraries)! In CMake, you e.g. just add target_link_libraries(target _pywrap_tensorflow python2.7).
The C++ header files are located around this library, e.g. in /usr/local/lib/python2.7/dist-packages/tensorflow/include/.
Once again: this way is officially unsupported and you may run into various issues. The library seems to be statically linked against e.g. protobuf, so you may run into odd link-time or run-time issues. But I am able to load a stored graph, restore the weights and run inference, which is IMO the most wanted functionality in C++.
The answers above are good enough to show how to build the library, but collecting the headers is still tricky. Here I share the little script I use to copy the necessary headers.
SOURCE is the first parameter, the tensorflow source (build) directory;
DST is the second parameter, the include directory that holds the collected headers (e.g. in cmake, include_directories(./collected_headers_here)).
#!/bin/bash
SOURCE=$1
DST=$2
echo "-- target dir is $DST"
echo "-- source dir is $SOURCE"
# remove any stale copy, then recreate the target directory
if [[ -e $DST ]]; then
    echo "clean $DST"
    rm -rf $DST
fi
mkdir -p $DST
# 1. copy the source code c++ api needs
mkdir -p $DST/tensorflow
cp -r $SOURCE/tensorflow/core $DST/tensorflow
cp -r $SOURCE/tensorflow/cc $DST/tensorflow
cp -r $SOURCE/tensorflow/c $DST/tensorflow
# 2. copy the generated code, putting it back into
# the right directories alongside the source code
if [[ -e $SOURCE/bazel-genfiles/tensorflow ]]; then
    prefix="$SOURCE/bazel-genfiles/tensorflow"
    from=$(expr $(echo -n $prefix | wc -m) + 1)
    # e.g. compiled protobuf files
    find $SOURCE/bazel-genfiles/tensorflow -type f | while read line; do
        #echo "process file --> $line"
        line_len=$(echo -n $line | wc -m)
        filename=$(echo $line | rev | cut -d'/' -f1 | rev)
        filename_len=$(echo -n $filename | wc -m)
        to=$(expr $line_len - $filename_len)
        target_dir=$(echo $line | cut -c$from-$to)
        #echo "[$filename] copy $line $DST/tensorflow/$target_dir"
        cp $line $DST/tensorflow/$target_dir
    done
fi
# 3. copy third party files. Why?
# In the tf source code, you can see #include "third_party/...", so you need it
cp -r $SOURCE/third_party $DST
# 4. these headers are enough for me now.
# if your compiler complains missing headers, maybe you can find it in bazel-tensorflow/external
cp -RLf $SOURCE/bazel-tensorflow/external/eigen_archive/Eigen $DST
cp -RLf $SOURCE/bazel-tensorflow/external/eigen_archive/unsupported $DST
cp -RLf $SOURCE/bazel-tensorflow/external/protobuf_archive/src/google $DST
cp -RLf $SOURCE/bazel-tensorflow/external/com_google_absl/absl $DST
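Usage could look like this (a sketch; the script name is hypothetical):
./collect_headers.sh ~/tensorflow /tmp/tf_headers
# then, e.g. in cmake: include_directories(/tmp/tf_headers)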
Tensorflow itself only provides very basic examples of the C++ API.
Here is a good resource which includes examples of datasets, RNN, LSTM, CNN and more:
tensorflow c++ examples
We now provide a pre-built library and a Docker image for easy installation and usage of the TensorFlow C++ API at https://github.com/ika-rwth-aachen/libtensorflow_cc
We provide the pre-built libtensorflow_cc.so including accompanying headers as a one-command-install deb-package.
We provide a pre-built Docker image based on the official TensorFlow Docker image. Our Docker image has both TensorFlow Python and TensorFlow C++ installed.
Try it out yourself by running the example application:
git clone https://github.com/ika-rwth-aachen/libtensorflow_cc.git && \
cd libtensorflow_cc && \
docker run --rm \
--volume $(pwd)/example:/example \
--workdir /example \
rwthika/tensorflow-cc:latest \
./build-and-run.sh
While we currently only support x86_64 machines running Ubuntu, this could easily be extended to other OSes and platforms in the future. With a few exceptions, all TensorFlow versions from 2.0.0 through 2.9.2 are available, with 2.10.0 coming soon.
If you want to use the TensorFlow C++ API to load, inspect, and run saved models and frozen graphs in C++, we suggest that you also check out our helper library tensorflow_cpp.
I'm trying to run a C++ program from GitHub (available at the following link: https://github.com/mortehu/text-classifier).
I have a Mac, and am trying to run it in the terminal. I think I have downloaded autoconf and automake, but am not sure. To run the program I am going to the correct folder in the terminal and then running
./configure && make
But I get the error:
WARNING: 'aclocal-1.15' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
http://www.gnu.org/software/automake
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
http://www.gnu.org/software/autoconf
http://www.gnu.org/software/m4/
http://www.perl.org/
make: *** [aclocal.m4] Error 127
I have Xcode and g++ and all the things required to run C programs, but as is probably obvious, I have no idea what I'm doing.
What is the easiest, simplest way to run the program at the above link? I realise it comes with a readme and example usage, but I cannot get that to work.
Before running ./configure try running autoreconf -f -i. The autoreconf program automatically runs autoheader, aclocal, automake, autopoint and libtoolize as required.
Edit to add: This is usually caused by checking out code from Git instead of extracting it from a .zip or .tar.gz archive. Git does not preserve files' timestamps (by design, so that rebuilds are triggered when files change), so the generated configure script can appear to be out of date relative to its sources. As others have mentioned, there are ways to get around this if you don't have a sufficiently recent version of autoreconf.
Another edit: This error can also be caused by using scp to copy a source folder, extracted from an archive, to another machine. The timestamps can be updated in transit, suggesting that a rebuild is necessary. To avoid this, copy the archive itself and extract it in place.
Often, you don't need any auto* tools, and the simplest solution is to simply run touch aclocal.m4 configure in the relevant folder (and also run touch on Makefile.am and Makefile.in if they exist). This updates the timestamps of aclocal.m4 and configure and reminds the system that they are up-to-date and don't need to be rebuilt. After this, it's probably best to empty your build directory and rerun configure from scratch. I run into this problem regularly; for me, the root cause is that I copy a library (e.g. mpfr code for gcc) from another folder, and the timestamps change.
Of course, this trick isn't valid if you really do need to regenerate those files, perhaps because you have manually changed them. But hopefully the developers of the package distribute up-to-date files.
And of course, if you do want to install automake and friends, then use the appropriate package-manager for your distribution.
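A concrete sketch of that sequence, touching only the files that exist:
cd path/to/source
touch aclocal.m4 configure
test -f Makefile.am && touch Makefile.am
test -f Makefile.in && touch Makefile.in
./configure && make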
Install aclocal which comes with automake:
brew install automake # for Mac
apt-get install automake # for Ubuntu
Try again:
./configure && make
You can install the version you need easily:
First get source:
$ wget https://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz
Unpack it:
$ tar -xzvf automake-1.15.tar.gz
Build and install:
$ cd automake-1.15
$ ./configure --prefix=/opt/aclocal-1.15
$ make
$ sudo mkdir -p /opt
$ sudo make install
Use it:
$ export PATH=/opt/aclocal-1.15/bin:$PATH
$ aclocal --version
aclocal (GNU automake) 1.15
Now when aclocal is called, you get the right version.
A generic answer that may or not apply to this specific case:
As the error message hints at, aclocal-1.15 should only be required if you modified files that were used to generate aclocal.m4.
If you don't modify any of those files (including configure.ac) then you should not need to have aclocal-1.15.
In my case, the problem was not that any of those files was modified but somehow the timestamp on configure.ac was 6 minutes later compared to aclocal.m4.
I haven't figured out why, but a clean clone of my git repo solved the issue for me. Maybe something linked to git and how it created files in the first place.
Rather than rerunning autoconf and friends, I would just try to get a clean clone and try again.
It's also possible that somebody committed a change to configure.ac but didn't regenerate the aclocal.m4, in which case you indeed have to rerun automake and friends.
The whole point of Autotools is to provide an arcane M4-macro-based language which ultimately compiles to a shell script called ./configure. You can ship this compiled shell script with the source code and that script should do everything to detect the environment and prepare the program for building. Autotools should only be required by someone who wants to tweak the tests and refresh that shell script.
It defeats the point of Autotools if GNU This and GNU That have to be installed on the system for it to work. Originally, it was invented to simplify the porting of programs to various Unix systems, which could not be counted on to have anything on them. Even the constructs used by the generated shell code in ./configure had to be very carefully selected to make sure they would work on every broken old shell just about everywhere.
The problem you're running into is due to some broken Makefile steps invented by people who simply don't understand what Autotools is for and the role of the final ./configure script.
As a workaround, you can go into the Makefile and make some changes to get this out of the way. As an example, I'm building the Git head of GNU Awk and running into this same problem. I applied this patch to Makefile.in, however, and I can successfully make gawk:
diff --git a/Makefile.in b/Makefile.in
index 5585046..b8b8588 100644
--- a/Makefile.in
+++ b/Makefile.in
@@ -312,12 +312,12 @@ distcleancheck_listfiles = find . -type f -print
 # Directory for gawk's data files. Automake supplies datadir.
 pkgdatadir = $(datadir)/awk
-ACLOCAL = @ACLOCAL@
+ACLOCAL = true
 AMTAR = @AMTAR@
 AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
-AUTOCONF = @AUTOCONF@
-AUTOHEADER = @AUTOHEADER@
-AUTOMAKE = @AUTOMAKE@
+AUTOCONF = true
+AUTOHEADER = true
+AUTOMAKE = true
 AWK = @AWK@
 CC = @CC@
 CCDEPMODE = @CCDEPMODE@
Basically, I changed things so that the harmless true shell command is substituted for all the Auto-stuff programs.
The actual build steps for Gawk don't need the Auto-stuff! It's only involved in some rules that get invoked if parts of the Auto-stuff have changed and need to be re-processed. However, the Makefile is structured in such a way that it fails if the tools aren't present.
Before the above patch:
$ ./configure
[...]
$ make gawk
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/kaz/gawk/missing aclocal-1.15 -I m4
/home/kaz/gawk/missing: line 81: aclocal-1.15: command not found
WARNING: 'aclocal-1.15' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
<http://www.gnu.org/software/automake>
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
<http://www.gnu.org/software/autoconf>
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [aclocal.m4] Error 127
After the patch:
$ ./configure
[...]
$ make gawk
CDPATH="${ZSH_VERSION+.}:" && cd . && true -I m4
CDPATH="${ZSH_VERSION+.}:" && cd . && true
gcc -std=gnu99 -DDEFPATH='".:/usr/local/share/awk"' -DDEFLIBPATH="\"/usr/local/lib/gawk\"" -DSHLIBEXT="\"so"\" -DHAVE_CONFIG_H -DGAWK -DLOCALEDIR='"/usr/local/share/locale"' -I. -g -O2 -DNDEBUG -MT array.o -MD -MP -MF .deps/array.Tpo -c -o array.o array.c
[...]
gcc -std=gnu99 -g -O2 -DNDEBUG -Wl,-export-dynamic -o gawk array.o awkgram.o builtin.o cint_array.o command.o debug.o dfa.o eval.o ext.o field.o floatcomp.o gawkapi.o gawkmisc.o getopt.o getopt1.o int_array.o io.o main.o mpfr.o msg.o node.o profile.o random.o re.o regex.o replace.o str_array.o symbol.o version.o -ldl -lm
$ ./gawk --version
GNU Awk 4.1.60, API: 1.2
Copyright (C) 1989, 1991-2015 Free Software Foundation.
[...]
There we go. As you can see, the CDPATH= command lines are where the Auto-stuff was being invoked; that's where you now see the true commands. These report successful termination, and so make just falls through that junk to do the darned build, which is perfectly configured.
I did make gawk because there are some subdirectories that get built which fail; the trick has to be repeated for their respective Makefiles.
If you're running into this kind of thing with a pristine, official tarball of the program from its developers, then complain. It should just unpack, ./configure and make without you having to patch anything or install any Automake or Autoconf materials.
Ideally, a pull of their Git head should also behave that way.
I think the touch command is the right answer e.g. do something like
touch --date="`date`" aclocal.m4 Makefile.am configure Makefile.in
before [./configure && make].
Sidebar I: Otherwise, I agree with @kaz: adding dependencies for aclocal.m4 and/or configure and/or Makefile.am and/or Makefile.in makes assumptions about the target system that may be invalid. Specifically, those assumptions are
1) that all target systems have autotools,
2) that all target systems have the same version of autotools (e.g. automake.1.15 in this case).
3) that, if either (1) or (2) is not true for any user, the user is extracting the package from a maintainer-produced TAR or ZIP archive that preserves the timestamps of the relevant files, in which case all autotool/configure/Makefile.am/Makefile.in dependencies in the configure-generated Makefile will be satisfied before the make command is issued.
The second assumption fails on many Mac systems because automake.1.14 is the "latest" for OSX (at least that is what I see in MacPorts, and apparently the same is true for brew).
The third assumption fails spectacularly in a world with Github. This failure is an example of an "everyone thinks they are normative" mindset; specifically, the maintainers, who are the only class of users that should need to edit Makefile.am, have now put everyone into that class.
Perhaps there is an option in autowhatever that keeps these dependencies from being added to Makefile.in and/or Makefile.
Sidebar II [Why @kaz is right]: of course it is obvious, to me and other cognoscenti, to simply try a sequence of [touch] commands to keep the configure-created Makefile from re-running configure and the autotools. But that is not the point of configure; the point of configure is to ensure that as many users on as many different systems as possible can simply do [./configure && make] and move on; most users are not interested in "shaving the yak", e.g. debugging faulty assumptions of the autotools developers.
Sidebar III: it could be argued that ./configure, now that autotools adds these dependencies, is the wrong build tool to use with Github-distributed packages.
Sidebar IV: perhaps configure-based Github repos should put the necessary touch command into their readme, e.g. https://github.com/drbitboy/Tycho2_SQLite_RTree.
2018, yet another solution ...
https://github.com/apereo/mod_auth_cas/issues/97
in some cases simply running
$ autoreconf -f -i
and nothing else .... solves the problem.
You do that in the directory /pcre2-10.30.
What a nightmare.
(This usually did not solve the problem in 2017, but now usually does seem to solve the problem - they fixed something. Also, it seems your Dockerfile should now usually start with "FROM ibmcom/swift-ubuntu" ; previously you had to give a certain version/dev-build to make it work.)
The problem is not the automake package, it's the repository:
sudo apt-get install automake
installs aclocal-1.14, which is why you can't find 1.15 (on Ubuntu 14/15).
Use this script to install latest
https://github.com/gp187/nginx-builder/blob/master/fix/aclocal.sh
2017 - High Sierra
It is really hard to get automake 1.15 working on a Mac. We hired an expert to get it working. Everything worked beautifully.
Later I happened to upgrade a Mac to High Sierra.
The Docker pipeline stopped working!
Even though automake 1.15 is working fine on the Mac.
How to fix,
Short answer, I simply trashed the local repo, and checked out the repo again.
This suggestion is noted in the mix on this QA page and elsewhere.
It then worked fine!
It likely has something to do with the aclocal.m4 and similar files. (But who knows really). I endlessly massaged those files ... but nothing.
For some unknown reason if you just scratch your repo and get the repo again: everything works!
I tried for hours every combo of touching/deleting etc etc the files in question, but no. Just check out the repo from scratch!
I am trying to maintain file history on a server from a client, and I have thought of using xdelta for it. There is almost no help online for using xdelta. I gathered a few bits and pieces of code and tried to run it in order to understand it myself, but the compiler gives the following error:
fatal error: xdelta3.h: No such file or directory
I have installed xdelta3 using apt-get as well as by downloading .tar from the website.
Here are the header files I included:
#include <xdelta3.h>
#include <xdelta3-main.h>
Any ideas as to what the header files for xdelta are? It would be appreciated if you could also provide some links for learning how to use xdelta, since the documentation is insufficient for me.
First of all, be sure that you have the -dev package installed as well. On my Debian system I did:
$ sudo apt-get install libxdelta2-dev
Now you should be able to use those xdelta headers. But they may be named differently than you expect. You can check it this way:
$ dpkg -L libxdelta2-dev | grep include
For me it shows:
/usr/include/edsio_edsio.h
/usr/include/edsio.h
/usr/include/xd_edsio.h
/usr/include/xdelta.h
So you can see that in this case I should use the xdelta.h header, not xdelta3.h. Try the same dpkg -L command with your -dev package.
If you have a different version of the xdelta*-dev package, check the output of this command:
pkg-config --list-all | grep delta
For me output is empty, but if you have something like libxdelta in output, be sure to compile your application with these options:
$ gcc $(pkg-config --cflags --libs libxdelta) main.c
UPDATE 1
As for the linking issue you have: I guess you are using gcc, so it automatically uses the correct linker for you. The thing is, you should provide the name of the library you want to link your program with as a linker option.
First you should figure out your actual library name:
$ ls -1 /usr/lib | grep xdelta | grep so
You will see something like this:
libxdelta.so
libxdelta.so.2
libxdelta.so.2.0.0
Now you know your library name: it's the part of the library file name between lib and .so; in this case the library name is xdelta.
Now you can link your program with it, using -l option:
$ gcc -lxdelta main.c
You may also need to specify the path to the xdelta library, if it's not a standard one (like /usr/lib/):
$ gcc -L/your/path/to/xdelta/ -lxdelta main.c
UPDATE 2
How to build and use xdelta3.
Obtain sources:
cd /tmp
wget https://xdelta.googlecode.com/files/xdelta3-3.0.8.tar.xz
tar xJvf xdelta3-3.0.8.tar.xz
cd xdelta3-3.0.8/
Build and install:
./configure
make
sudo make install
Check if it works:
cd examples/
make clean
make encode_decode_test
./encode_decode_test
If everything is fine, you should have an executable file "encode_decode_test". This example uses xdelta3. Now you can build your program the same way. See the "Makefile" inside the "examples" directory to get a clue how to build your program. Note that there are no libraries involved now.