I'm using Makefiles for a Python project. There are some Python configuration variables that I need to include in every Makefile, and the Makefiles may reside in various subdirectories within the project's root (though not more than one level deep).
In the root of the project, I created a Makefile.inc with, for example, the following contents:
define PYSCRIPT
from lib.custommodule import VAR1
print(VAR1)
endef
VAR1 := $(shell python -c '$(PYSCRIPT)')
'custommodule' lives inside a Python lib directory in the root of the project, and all Makefiles at the top level that include Makefile.inc execute without errors.
In 'subdirectory/Makefile', I have:
include ../Makefile.inc
echo $(VAR1)
When I cd subdirectory && make, I receive the error:
ImportError: No module named custommodule
I've tried prepending the Python path in 'Makefile.inc' (in the define PYSCRIPT block) to include the parent directory, thinking that 'subdirectory/Makefile' was executing '../Makefile.inc' in its own path rather than its parent's path, but that didn't work either.
Am I missing something that's preventing this from working? Or is there a better way of achieving what I'm attempting to do?
What about a Makefile.inc with:
ROOTDIR ?= .
define PYSCRIPT
from lib.custommodule import VAR1
print(VAR1)
endef
export PYSCRIPT
VAR1 := $(shell PYTHONPATH=$(ROOTDIR) python -c "$$PYSCRIPT")
and a subdirectory/Makefile with:
ROOTDIR = ..
include $(ROOTDIR)/Makefile.inc
all:
	echo $(VAR1)
Not tested.
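For the top-level Makefiles nothing should need to change, since ROOTDIR then defaults to the current directory. As a sketch (equally untested), a top-level Makefile would just be:
include Makefile.inc
all:
	echo $(VAR1)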
I have been trying all day to install OverSim [http://www.oversim.org/wiki/OverSimInstall].
The Makefile looks like this:
all: checkmakefiles
	cd src && $(MAKE)

clean: checkmakefiles
	cd src && $(MAKE) clean

cleanall: checkmakefiles
	cd src && $(MAKE) MODE=release clean
	cd src && $(MAKE) MODE=debug clean
	rm -f src/Makefile

makefiles:
	cd src && opp_makemake -f --deep --make-so -o inet -O out $$NSC_VERSION_DEF

checkmakefiles:
	@if [ ! -f src/Makefile ]; then \
	echo; \
	echo '======================================================================='; \
	echo 'src/Makefile does not exist. Please use "make makefiles" to generate it!'; \
	echo '======================================================================='; \
	echo; \
	exit 1; \
	fi

doxy:
	doxygen doxy.cfg

tcptut:
	cd doc/src/tcp && $(MAKE)
I am using OMNeT++ 5.1.1, as OMNeT++ 4.2.2 is not supported on Ubuntu 16.04; my gcc version is 5.4.1.
Every time I try to build with make all, it gives a "header file not found" error, even though the header files are actually present inside the project directory.
In file included from applications/ethernet/EtherAppCli.cc:21:0:
applications/ethernet/EtherAppCli.h:21:22: fatal error: INETDefs.h: No such file or directory
The includes are done like this:
#include "INETDefs.h" //available at src/linklayer/contract/
#include "MACAddress.h" //available at src/base/
project structure:
How could I resolve this build error?
This is a basic difference between newer OMNeT++ versions 5.x and the older OMNeT++ versions 3.x and 4.x.
As far as I remember, OverSim was released to build with OMNeT++ 3.x and 4.2 as well as the older INET releases.
These old versions used parameters like --deep to search for include files; that's why the included files are given by name only rather than with a complete path.
The newer INET and OMNeT++ releases use hierarchical path settings for include files: the complete path has to be given for the compiler to find the included file.
So for INET version 3.x and OMNeT++ version 5.x, an include looks like: #include "inet/common/INETDefs.h"
OverSim does not use the complete paths for included headers, which is why you get errors when using OverSim with newer OMNeT++ releases.
The first option is to use an older OMNeT++ version: either install an older GCC in parallel on your system, or set up a virtual machine with an older Ubuntu if you like.
The second (and more complex) option is to adapt all include paths, or define all necessary paths via the -I option of the compiler/linker.
Frankly, I'd suggest using the older OMNeT++ 4.2.2 version...
The #include directive searches first of all inside the same directory as the file containing the directive and then in a preconfigured list of standard system directories.
If you don't want to move the header files to the same directory as EtherAppCli.cc, you will have to add the paths to these header files to this preconfigured list, usually with the compiler option
-Ipath/to/dir
I'm not sure if this is what is intended in the app you're compiling, but this is more or less what you can do.
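For example, as a rough sketch (the directory names come from the comments on the #include lines in the question and are only an assumption about the actual tree), you could append the missing directories to the compiler flags used by the generated Makefile:
CXXFLAGS += -Isrc/linklayer/contract -Isrc/base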
Check to see if you missed anything in the installation guide.
Use OMNeT++ 4.6 instead. Keep using inet-20111118. You should be able to build INET normally, then build OverSim.
I have a C++ application which I'm trying to build with scons which consists of several subprograms.
Each subprogram has its own source files in a subdirectory of the source directory. These source files, e.g. source/prog1/prog1.cpp, are compiled into object files which reside in the object directory, e.g. object/prog1/prog1.o.
This works fine since each source directory has its own target directory, and there's no possibility of clashes.
However, what I'm trying to do is link these object files into executables, which would all be in the same bin directory. So object files from multiple directories (object/prog1, object/prog2, etc.) would all go into the same target directory (bin).
The directory layout looks like this:
application
    source
        prog1
            prog1.cpp
            something.cpp
        prog2
            prog2.cpp
            somethingelse.cpp
    object
        prog1
            prog1.o
            something.o
        prog2
            prog2.o
            somethingelse.o
    bin
        ??? <- what I'm concerned with
I'm trying to achieve that with the following SConstruct script:
env = Environment()
Export('env')
#common objects
common=env.SConscript("source/common/SConscript_object", variant_dir="object/common", duplicate=0)
Export('common')
#sub-programs
env.SConscript("source/prog1/SConscript_bin", variant_dir="bin", duplicate=0)
env.SConscript("source/prog2/SConscript_bin", variant_dir="bin", duplicate=0)
However, scons is complaining with the following error:
scons: *** 'bin' already has a source directory: 'source/prog1'.
The error goes away if I make it so that each subprogram has its own directory in the bin directory, e.g. variant_dir="bin/prog1".
So, my question is this: how can I link object files from multiple sources into the same variant dir?
In your case I would let SCons build the different binaries in their respective folders, and then use the Install builder to copy the binary files to the bin/ directory.
You would get something like:
env = Environment()
Export('env')
common = env.SConscript("source/common/SConscript_object", variant_dir="object/common", duplicate=0)
Export('common')
prog1 = env.SConscript("source/prog1/SConscript_bin", variant_dir="object/prog1", duplicate=0)
prog2 = env.SConscript("source/prog2/SConscript_bin", variant_dir="object/prog2", duplicate=0)
env.Install('bin', prog1)
env.Install('bin', prog2)
With the SConscript of the subprograms being something like
Import('env')
Import('common')
prog1 = env.Program('prog1', [env.Glob('*.cpp'), common])
Return('prog1')
I think SCons refuses to build different targets into a single variant directory because variants are designed to build a given target with different build settings, like debug and release mode.
I'm new to makefiles, and I'm writing a simple C++ shared library.
Is there a way of finding a library's path dynamically by the makefile itself? What I want is something like this: (in my makefile)
INCLUDE_DIRS := `which amplex-gui`
LIBRARY_DIRS := `which amplex-gui`
amplex-gui is a library I use in my code, and I need to put its lib and include directories in my makefile. I want to figure out its path dynamically because each user might install it in a different path on their machine. Therefore, I need my makefile to dynamically parse the which command (or perhaps the $PATH environment variable) to find that path. How can I go about doing this?
Remember that backquoting is shell syntax. Make doesn't do anything special with backquotes. If you're using GNU make, you can use $(shell which amplex-gui) to get behavior equivalent to backquotes.
Regarding your comment above, I'm not sure exactly what you mean by "nest commands", but you can definitely use the shell's $() syntax within a make shell function. However, as with all strings that make expands, you need to double the dollar signs to quote them so that they are passed to the shell. So for example:
INCLUDE_DIRS := $(shell echo $$(dirname $$(dirname $$(which amplex-gui))))
Of course you can also use make functions; unfortunately the make dir function is annoying in that it leaves the final slash, so it cannot be used multiple times directly. You have to put a patsubst in there, like:
INCLUDE_DIRS := $(dir $(patsubst %/,%,$(dir $(shell which amplex-gui))))
Finally, if you have a sufficiently new version of GNU make there's the abspath function, so you could do something like this:
INCLUDE_DIRS := $(abspath $(dir $(shell which amplex-gui))..)
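Once the prefix is known, you would typically feed it into the compile and link flags. A rough sketch, assuming amplex-gui keeps its headers and libraries under <prefix>/include and <prefix>/lib (the AMPLEX_PREFIX variable name is only for illustration):
# prefix where amplex-gui is installed (one level above its bin/ directory)
AMPLEX_PREFIX := $(abspath $(dir $(shell which amplex-gui))..)
INCLUDE_DIRS := $(AMPLEX_PREFIX)/include
LIBRARY_DIRS := $(AMPLEX_PREFIX)/lib
CXXFLAGS += -I$(INCLUDE_DIRS)
LDFLAGS += -L$(LIBRARY_DIRS)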
I have a directory /src containing all of my source files, and /bin to store all the binaries after running make. The directory layout is something like this:
/BuildDirectory
    /src
    /bin
    configure
    Makefile.am
    configure.ac
    ...
Now in Makefile.am, I have to specify:
bin_PROGRAMS = bin/x bin/y bin/z bin/k ...
bin_x_SOURCES = src/x.cpp
bin_y_SOURCES = src/y.cpp
bin_z_SOURCES = src/z.cpp
Is there any variable that can help get rid of all the "bin/" and "src/" prefixes?
For example I just specify:
$BIN = bin
$SRC = src
And it would look for the correct files in the correct folders and compile them to the correct places.
Thanks
You could take advantage of remote building. Place this Makefile.am in the bin directory:
VPATH = ../src
bin_PROGRAMS = x y z k ...
x_SOURCES = x.cpp
y_SOURCES = y.cpp
z_SOURCES = z.cpp
Now replace the current Makefile.am with this one:
SUBDIRS = bin
Now tweak your configure.ac to also generate bin/Makefile
AC_CONFIG_FILES([Makefile
bin/Makefile])
and you should be set for life.
Not to my knowledge. If you're looking to separate your compiled files from your source files, remember that you can build outside of the tree:
$ cd foo-1.2.3
$ mkdir build
$ cd build
$ ../configure
$ make
$ make install
If this is what you're looking to do, you can make the Makefile.am simpler by creating binaries without a directory prefix (and still referencing things in src/ by hand).
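That would look something like this (a sketch using the program and source names from your question):
bin_PROGRAMS = x y z
x_SOURCES = src/x.cpp
y_SOURCES = src/y.cpp
z_SOURCES = src/z.cpp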
If what you're trying to do is what I think you're trying to do, you're trying to achieve something like:
SRCDIR = src
BINDIR = bin
bin_PROGRAMS = $(BINDIR)/x $(BINDIR)/y $(BINDIR)/z
bin_x_SOURCES = $(SRCDIR)/x.cpp
bin_y_SOURCES = $(SRCDIR)/y.cpp
bin_z_SOURCES = $(SRCDIR)/z.cpp
I've tested this a few times in various forms, and it won't compile the code the way your example does; at one stage I somehow convinced it that it was compiling C:
gmake[1]: *** No rule to make target `bin/x.c', needed by `x.o'. Stop.
I'm thus fairly certain that it's not possible. Sorry.
I currently have a library written in C++, building with the GNU autotools, and I'd like to add a Python interface to it. Using SWIG I have developed the interface, but I'm having some trouble figuring out how to integrate compilation of the Python module with the rest of the process.
I have looked into AM_PATH_PYTHON but this macro doesn't seem to set the include path for Python.h, so when I compile my module I get a bunch of errors about missing include files. Is there a way to get the Python include path and ldflags out of AM_PATH_PYTHON?
Just for the record I don't think it will be possible to use Python's distutils method (setup.py) as this requires the location of the library in order to link the new module. Since the library has not yet been installed at compile time, I would have to use a relative path (e.g. ../src/lib.so) which of course would break once the Python module was installed (as the library is then in /usr/lib or /usr/local/lib instead.)
EDIT:
Now the build can find the .h file when compiling, but after installing the module (in the correct location) Python can't load it. The build produces foo.so, and when I "import foo" I get this:
ImportError: dynamic module does not define init function (initfoo)
If however I rename it from foo.so to _foo.so then it loads and runs fine, except I have to "import _foo" which I'd rather not have to do. When I follow the SWIG instructions to produce _foo.so in the current directory "import foo" works, so I am not sure why it breaks when the library is installed in the site directory.
EDIT2:
Turns out the problem was I forgot to copy foo.py produced by SWIG into the install directory alongside _foo.so. Once I did this everything worked as expected! Now I just have to figure out some automake rules to copy a file into the install dir...
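In case it is useful to anyone else, a rough Makefile.am sketch for that last step (assuming the SWIG output is foo.py plus an extension built as _foo.la from foo_wrap.cxx; this mirrors the python_PYTHON / pyexec_LTLIBRARIES idiom used in the answers below):
# install the SWIG-generated pure-Python wrapper into the Python site directory
python_PYTHON = foo.py
# install the compiled extension module alongside it
pyexec_LTLIBRARIES = _foo.la
_foo_la_SOURCES = foo_wrap.cxx
_foo_la_LDFLAGS = -avoid-version -module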
To find the include path, I'd use python-config. The trick is to use the python-config corresponding to the python installed in $PYTHON.
AM_PATH_PYTHON
AC_ARG_VAR([PYTHON_INCLUDE], [Include flags for python, bypassing python-config])
AC_ARG_VAR([PYTHON_CONFIG], [Path to python-config])
AS_IF([test -z "$PYTHON_INCLUDE"], [
AS_IF([test -z "$PYTHON_CONFIG"], [
AC_PATH_PROGS([PYTHON_CONFIG],
[python$PYTHON_VERSION-config python-config],
[no],
[`dirname $PYTHON`])
AS_IF([test "$PYTHON_CONFIG" = no], [AC_MSG_ERROR([cannot find python-config for $PYTHON.])])
])
AC_MSG_CHECKING([python include flags])
PYTHON_INCLUDE=`$PYTHON_CONFIG --includes`
AC_MSG_RESULT([$PYTHON_INCLUDE])
])
Another alternative is to poke around in the distutils.sysconfig module (this has nothing to do with using distutils to build your code). Run python -c "import distutils.sysconfig; help(distutils.sysconfig)" and have a look.
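For example (untested, and the PYTHON_INCLUDE variable name is just for illustration), you could pull the include directory straight into a Makefile:
# ask Python itself where its headers live
PYTHON_INCLUDE := $(shell python -c "from distutils import sysconfig; print(sysconfig.get_python_inc())")
CPPFLAGS += -I$(PYTHON_INCLUDE)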
Here is the autoconf macro I call from my configure.ac to find the Python include directory (PYTHONINC) and the Python installation directory (via AM_PATH_PYTHON).
AC_DEFUN([adl_CHECK_PYTHON],
[AM_PATH_PYTHON([2.0])
AC_CACHE_CHECK([for $am_display_PYTHON includes directory],
[adl_cv_python_inc],
[adl_cv_python_inc=`$PYTHON -c "from distutils import sysconfig; print sysconfig.get_python_inc()" 2>/dev/null`])
AC_SUBST([PYTHONINC], [$adl_cv_python_inc])])
Then my wrap/python/Makefile.am builds two Swig modules using Libtool like this:
SUBDIRS = . cgi-bin ajax tests
AM_CPPFLAGS = -I$(PYTHONINC) -I$(top_srcdir)/src $(BUDDY_CPPFLAGS) \
-DSWIG_TYPE_TABLE=spot
EXTRA_DIST = spot.i buddy.i
python_PYTHON = $(srcdir)/spot.py $(srcdir)/buddy.py
pyexec_LTLIBRARIES = _spot.la _buddy.la
MAINTAINERCLEANFILES = \
$(srcdir)/spot_wrap.cxx $(srcdir)/spot.py \
$(srcdir)/buddy_wrap.cxx $(srcdir)/buddy.py
## spot
_spot_la_SOURCES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot_wrap.h
_spot_la_LDFLAGS = -avoid-version -module
_spot_la_LIBADD = $(top_builddir)/src/libspot.la
$(srcdir)/spot_wrap.cxx: $(srcdir)/spot.i
	$(SWIG) -c++ -python -I$(srcdir) -I$(top_srcdir)/src $(srcdir)/spot.i
$(srcdir)/spot.py: $(srcdir)/spot.i
	$(MAKE) $(AM_MAKEFLAGS) spot_wrap.cxx
## buddy
_buddy_la_SOURCES = $(srcdir)/buddy_wrap.cxx
_buddy_la_LDFLAGS = -avoid-version -module $(BUDDY_LDFLAGS)
$(srcdir)/buddy_wrap.cxx: $(srcdir)/buddy.i
	$(SWIG) -c++ -python $(BUDDY_CPPFLAGS) $(srcdir)/buddy.i
$(srcdir)/buddy.py: $(srcdir)/buddy.i
	$(MAKE) $(AM_MAKEFLAGS) buddy_wrap.cxx
The above rules are written so that the result of Swig is treated as a source file (i.e., distributed in the tarball), in the same way a Bison-generated parser would be. This way the final user does not need Swig installed.
When you run make, the *.so files are hidden by Libtool in some .libs/ directory, but after make install they get copied to the right place.
The only trick is how to use the modules from inside the source directory before running make install. E.g. while running make check. For that case, I generate (with configure) a script called run that sets PYTHONPATH prior to running any python script, and I execute all my tests cases through this run script. Here is the contents of run.in, before configure substitutes any value:
# Darwin needs some help in figuring out where non-installed libtool
# libraries are (on this platform libtool encodes the expected final
# path of dependent libraries in each library).
modpath='../.libs:@top_builddir@/src/.libs:@top_builddir@/buddy/src/.libs'
# .. is for the *.py files, and ../.libs for the *.so. We used to
# rely on a module called ltihooks.py to teach the import function how
# to load a Libtool library, but it started to cause issues with
# Python 2.6.
pypath='..:../.libs:@srcdir@/..:@srcdir@/../.libs:$PYTHONPATH'
test -z "$1" &&
PYTHONPATH=$pypath DYLD_LIBRARY_PATH=$modpath exec @PYTHON@
case $1 in
*.py)
PYTHONPATH=$pypath DYLD_LIBRARY_PATH=$modpath exec @PYTHON@ "$@";;
*.test)
exec sh -x "$@";;
*)
echo "Unknown extension" >&2
exit 2;;
esac
If you want to see all this in action in a real project, you can get it from https://spot.lrde.epita.fr/install.html
Building Manually on the Command Line
Without adding the SWIG steps to your makefiles, you can build a SWIG extension using these commands (if you have source_file.cpp and your_extension.i to create your Python module):
# creation of your_extension_wrap.cpp
swig -c++ -python -o your_extension_wrap.cpp your_extension.i
# creation of your_extension_wrap.o, source_file.o and your_extension.py
g++ -fPIC -c your_extension_wrap.cpp source_file.cpp -I/usr/include/python2.7
# creation of the library _your_extension.so
g++ -shared your_extension_wrap.o source_file.o -o _your_extension.so
Note: It’s important to prefix the name of the shared object with an underscore, otherwise Python won’t be able to find it when import your_extension is executed.
Adding SWIG to Autoconf
I wrote a little program in Python in which I need a single file written in C++ (e.g. a rounding method).
You need additional Autoconf macros to enable SWIG support. As noted by johanvdw, it is easier if you use these two m4 macros: ax_pkg_swig and ax_swig_python. I downloaded them from the Autoconf Macro Archive and placed them in the m4 subdirectory of my project tree:
trunk
├── configure.ac
├── __init__.py
├── m4
│   ├── ax_pkg_swig.m4
│   ├── ax_swig_python.m4
│   ├── libtool.m4
│   ├── lt~obsolete.m4
│   ├── ltoptions.m4
│   ├── ltsugar.m4
│   └── ltversion.m4
├── Makefile.am
├── rounding_swig
│   ├── compile.txt
│   ├── __init__.py
│   ├── Makefile.am
│   ├── rnd_C.cpp
│   ├── rounding.i
│   └── rounding_wrap.cpp
└── src
    ├── cadna_add.py
    ├── cadna_computedzero.py
    ├── cadna_convert.py
    ├── __init__.py
    └── Makefile.am
When you place the two m4 macros in a subdirectory, you need to add this line to your trunk/Makefile.am:
ACLOCAL_AMFLAGS = -I m4
Now, let's see trunk/configure.ac:
AC_PREREQ([2.69]) # Check autoconf version
AC_INIT(CADNA_PY, 1.0.0, cadna-team@lip6.fr) # Name of your software
AC_CONFIG_SRCDIR([rounding_swig/rnd_C.cpp]) # Name of the c++ source
AC_CONFIG_MACRO_DIR(m4) # Indicate where are your m4 macro
AC_CONFIG_HEADERS(config.h)
AM_INIT_AUTOMAKE
AC_DISABLE_STATIC #enable shared libraries
# Checks for programs.
AC_PROG_LIBTOOL # check libtool
AC_PROG_CXX # check c++ compiler
AM_PATH_PYTHON(2.3) # check python version
AX_PKG_SWIG(1.3.21) # check swig version
AX_SWIG_ENABLE_CXX # fill some variables useful later
AX_SWIG_PYTHON # same
# Checks for header files.
AC_CHECK_HEADERS([fenv.h stdlib.h string.h]) # any header needed by your c++ source
# Checks for typedefs, structures, and compiler characteristics.
AC_CHECK_HEADER_STDBOOL
AC_TYPE_SIZE_T
# Checks for library functions.
AC_FUNC_MALLOC
AC_CHECK_FUNCS([fesetround memset strstr])
AC_CONFIG_FILES([
Makefile
src/Makefile
rounding_swig/Makefile
])
LIBPYTHON="python$PYTHON_VERSION" # define the python interpreter
LDFLAGS="$LDFLAGS -l$LIBPYTHON"
AC_OUTPUT
Adding SWIG to Automake
In your trunk/Makefile.am, you need to do the following:
ACLOCAL_AMFLAGS = -I m4
# Indicate the subdir of c++ file and python file
SUBDIRS = src rounding_swig
# Indicate a list of all the files that are part of the package, but
# are not installed by default and were not specified in any other way
EXTRA_DIST= \
rounding_swig/rounding.i \
rounding_swig/testrounding.py \
rounding_swig/testrounding.cpp
In your trunk/src/Makefile.am :
# Python source files that will be installed in prefix/lib/name_of_your_python_interpreter/site-packages/name_of_your_project
pkgpython_PYTHON = cadna_add.py cadna_computedzero.py cadna_convert.py
The difficult part is trunk/rounding_swig/Makefile.am. It creates a .la library and a _your_extension.so, and places them in prefix/lib64/python2.7/site-packages/name_of_the_project/.
# Name of the cpp source file
BUILT_SOURCES = rounding_wrap.cpp
# Name of the swig source file
SWIG_SOURCES = rounding.i
# Python source files that will be installed in prefix/lib/name_of_your_python_interpreter/site-packages/name_of_your_project
pkgpython_PYTHON = rounding.py __init__.py
pkgpyexec_LTLIBRARIES = _rounding.la
_rounding_la_SOURCES = rounding_wrap.cpp $(SWIG_SOURCES) rnd_C.cpp
_rounding_la_CPPFLAGS = $(AX_SWIG_PYTHON_CPPFLAGS) -I$(top_srcdir)/rounding_swig -I/usr/include/python@PYTHON_VERSION@ -lpython@PYTHON_VERSION@
_rounding_la_LDFLAGS = -module
rounding_wrap.cpp: $(SWIG_SOURCES)
	$(SWIG) $(AX_SWIG_PYTHON_OPT) -I$(top_srcdir)/rounding_swig -I/usr/include/python@PYTHON_VERSION@ -o $@ $<
At the end, if you don't have the 5 other macros, you can obtain them by typing:
autoreconf -i
Finally, to install your project:
libtoolize && aclocal && autoheader && autoconf && automake -a -c
./configure --prefix=<install prefix>
make
make install
PS: Here is an outdated but simple tutorial that helped me (outdated because it used ac_pkg_swig.m4, which fails with newer versions of swig).
I know this is an old post, but since I landed here anyway: there are some m4 macros which make compilation of python bindings using swig very easy:
http://www.gnu.org/software/autoconf-archive/ax_pkg_swig.html
and
http://www.gnu.org/software/autoconf-archive/ax_swig_python.html