DevIL not loading images with Linux build - C++

I was having issues getting an image to load into DevIL, so I have provided exactly how I built the libs and how I am trying to use them.
Downloaded the DevIL source from their website.
$ unzip DevIL-1.7.8.zip
$ mkdir devil
$ cd Devil-1.7.8
+-------------------------------------------+
| Just type:                                |
|   autoreconf -i                           |
|   ./configure <your options here>         |
|   make                                    |
|   sudo make install                       |
+-------------------------------------------+
If I use autoreconf -i and then ./configure with the prefix, ILU, and ILUT options, I get an error (which I forgot to record). How important is this step? I have simply not used it.
$ chmod +x configure
$ ./configure --prefix=/home/path/to/TestingDevil/devil --enable-ILU --enable-ILUT
$ make
$ make install
So at this point my library should be built.
I downloaded the DevIL simple example (simple.c) to TestingDevil/simple/simple.c and built it:
$ gcc -I ../devil/include -L ../devil/lib/ simple.c -o simple -lIL -lILU -lILUT
$ cp ../devil/lib/*.so* .
I have added an image (jpg) to test.
$ LD_LIBRARY_PATH=. ./simple image.jpg
"Could not open file...exiting"
I ran the executable from the simple directory.
simple$ ls
image.jpg  libIL.so  libIL.so.1  libIL.so.1.1.0  libILU.so  libILU.so.1  libILU.so.1.1.0  libILUT.so  libILUT.so.1  libILUT.so.1.1.0  simple  simple.c
What's going wrong? I am using the example from DevIL; it compiles and runs fine, it just can't load any files.
My system is Ubuntu 12.10 64-bit with build-essential installed and other dev packages for OpenGL development.
The uni system is Fedora 15(?) 32-bit. It also has exactly the same problem after building DevIL in the same way.
On my home machine I installed the package libdevil-dev and that works fine.
This question is not about OpenGL, purely about the DevIL lib and example.

It looks like you built it without JPEG support, or you do not have the JPEG libraries available.
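For example, one way to check on Debian/Ubuntu (a sketch; the libjpeg-dev package name is an assumption about your distro, and grepping config.log is just a generic way to see what configure detected):
$ sudo apt-get install libjpeg-dev
$ ./configure --prefix=/home/path/to/TestingDevil/devil --enable-ILU --enable-ILUT
$ grep -i jpeg config.log
$ make && make install
If configure never found a JPEG library, rebuilding after installing one should let ilLoadImage open your .jpg.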


How to install a C++ program into a conda environment from source

I would like to compile and then use the scientific project BASS, which is distributed as C++ source code on GitHub. I've set up a conda environment bass to hold everything related to BASS, and I'd like to compile BASS into this environment (so that if I delete the environment, it's cleanly removed).
I don't know if I should be using conda-build or make to do this. There is a Makefile distributed with the project, but I think it might have an error.
My latest try is the following (the source files are in code/ and gsl seems to be a dependency):
conda create -n bass
conda activate bass
conda install make
conda install gsl
cd code/
make
I get the following:
gcc -c main.cpp -I../Libs/GSL/include/ -I./
main.cpp:19:10: fatal error: 'gsl/gsl_statistics.h' file not found
#include <gsl/gsl_statistics.h>
^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [Makefile:11: main.o] Error 1
My questions:
should I be doing this with make?
do I need gcc/g++/cxx-compiler installed in my conda env?
is the Makefile correct where it says "-I../Libs/GSL/include/"?
Thank you!
Here's a very basic Conda recipe for the package. Make a folder (say recipe) and put the following two files in it:
meta.yaml
package:
  name: bass
  version: 0.1  # this is totally arbitrary (there are no versions)

source:
  git_url: https://github.com/judyboon/BASS.git

requirements:
  build:
    - {{ compiler('cxx') }}
  host:
    - gsl
  run:
    - gsl

about:
  home: https://github.com/judyboon/BASS
  license: GPL-3.0-only
  license_file: COPYING
  license_family: GPL
build.sh
#!/bin/bash
## change to source dir
cd ${SRC_DIR}/code
## compile
${CXX} -c main.cpp *.cpp -I./ ${CXXFLAGS}
## link
${CXX} *.o -o BASS -lgsl -lgslcblas -lm ${LDFLAGS}
## install
mkdir -p $PREFIX/bin
cp BASS ${PREFIX}/bin/BASS
You can then build this with conda build ., run from within that directory.
Installing
After successful build, you can create an environment with this package installed using
conda create -n foo --use-local bass
Since I'm on an Intel architecture, and this tool uses CBLAS, I would further specify to use MKL:
conda create -n foo --use-local bass blas=*=*mkl
Now if you activate the environment foo (that's an arbitrary name), the software will be on PATH, under the binary BASS.
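For instance, a quick sanity check (foo is just the example environment name from above; the resolved path is what I would expect, not verified output):
$ conda activate foo
$ which BASS    # should resolve to .../envs/foo/bin/BASS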
Additional Notes
I verified this works on the osx-64 platform with the conda-forge channel prioritized.
The Makefile accompanying the code isn't very generic. Here we just have it compile all .cpp files and subsequently link all .o files.
There aren't any releases on the repository, so the versioning in the meta.yaml is arbitrary.
The build.sh defines the final output binary as BASS - that could be changed, and differs from the original Makefile which outputs the binary as main.
Based on what you write, the correct way is to build a conda recipe. By writing a recipe, conda knows what to delete when you delete the environment.
The recipe is a text file (the meta.yaml file) where you write some metadata describing the package, define dependencies, and write a short build script that executes the makefile and installs the binaries to the correct location. Defining the compilation environment may not be trivial, and you should follow the documentation for the package you are trying to install. If there are problems with the package code (and that includes the makefile), that's something conda can't help you with.
See the documentation on how to write the recipe:
https://docs.conda.io/projects/conda-build/en/latest/concepts/recipe.html
Once you've got the recipe complete, you run conda build and then conda install -c [path to the build] [the package name]. (I'm writing this because it took me ages to realize how to correctly install a local package.)
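Concretely, that sequence looks something like this (a sketch; the recipe directory name and the environment name are placeholders):
$ conda build ./recipe
$ conda install -n bass --use-local bass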

CentOS7: rpmbuild - Unable to recognise the format of the input file

I'm trying to build an extremely simple rpm on CentOS 7.
I just copy some pre-compiled executables from the tar.gz to /usr/bin/my_rpms/rpm1.
Here is my install section:
%install
mkdir -p %{buildroot}/usr/bin/my_rpms/rpm1/
install -D prog prog.o -t %{buildroot}/usr/bin/my_rpms/rpm1/
It used to work fine for the most part,
but today, after I made some changes to prog and recompiled, it keeps giving these errors:
+ mkdir -p /root/rpmbuild/BUILDROOT/rpm1.x86_64/usr/bin/my_rpms/rpm1/
+ install -D prog prog.o -t /root/rpmbuild/BUILDROOT/rpm1.x86_64/usr/bin/my_rpms/rpm1/
+ /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/redhat/brp-compress
+ /usr/lib/rpm/redhat/brp-strip /usr/bin/strip
/usr/bin/strip: Unable to recognise the format of the input file `/root/rpmbuild/BUILDROOT/rpm1.x86_64/usr/bin/drivertest_rpms/rpm1/prog.o'
As you can see in the error log, the problem is with binary-file stripping, which rpmbuild does by default (the brp-strip step). I think your build environment may be different from the rpm environment; cross-compiling, as suggested by @aaron-d-marasco?
So I recommend building the rpm from the project source, i.e. moving your build commands into the %build section of the .spec file.
Or strip your files in the same place where you built them, and then in the rpm use the cp command in the %install section instead of the install command to move your files to the target directory.
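For illustration, a minimal sketch of the first suggestion in .spec form (the make invocation is an assumption about your project; the %install lines are the ones from your question):
%build
make %{?_smp_mflags}

%install
mkdir -p %{buildroot}/usr/bin/my_rpms/rpm1/
install -D prog prog.o -t %{buildroot}/usr/bin/my_rpms/rpm1/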

Tensorflow Op: how to include libtensorflow_framework.so?

I followed the instructions of this tutorial:
https://www.tensorflow.org/extend/adding_an_op#implement_the_gradient_in_python.
There is this command provided: g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC -I$TF_INC -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -O2
But the linker cannot find -ltensorflow_framework (there should be a libtensorflow_framework.so file!?)
After some research, I found following links:
https://github.com/tensorflow/tensorflow/issues/1569
https://github.com/eaplatanios/tensorflow_scala/issues/26 --> I downloaded the .jar and linked it via -l/pathto/tensorflow_framework.so, but I still get fatal error: tensorflow/core/framework/op_kernel.h: No such file or directory.
https://github.com/tensorflow/tensorflow/issues/1270 last comment does not work and so does not help me.
I tried searching recursively with sudo find /usr/. -name "tensorflow_framework.so" but could not find anything. TensorFlow is installed for sure via anaconda, and I also cloned and compiled the repository from source.
How do I find and include -ltensorflow_framework?
One answer I have found:
I installed my Python via anaconda2, and I had always tried to find TF_INC and TF_LIB with an environment activated (source activate <env>), but could not find any *.so files under ~/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow.
This time I left every Python environment with the shell command source deactivate and typed the following command:
python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())'
Now I got a different path, ~/anaconda2/lib/python2.7/site-packages/tensorflow, which is where the library libtensorflow_framework.so is located.
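Putting it together, the tutorial's compile line can then pick up both paths (a sketch, assuming a TF 1.x install that ships its headers and libtensorflow_framework.so):
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC -I$TF_INC -L$TF_LIB -ltensorflow_framework -O2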
In my case, the file libtensorflow_framework.so.1 existed inside my TF_LIB directory instead of libtensorflow_framework.so. In order to solve this issue, I had to create a symbolic link as follows:
ln -s libtensorflow_framework.so.1 libtensorflow_framework.so
Source: Tensorflow NotFoundError: libtensorflow_framework.so: cannot open shared file or directory
libtensorflow_framework is not used before TensorFlow 1.4.1.
When you call python from the shell make sure you are calling the right one:
TF_LIB = $(shell python -c 'import tensorflow; print(tensorflow.sysconfig.get_lib())')
or
TF_LIB = $(shell python3 -c 'import tensorflow; print(tensorflow.sysconfig.get_lib())')
To be more clear:
Get the path from python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())', and there is a libtensorflow_framework.so.1 inside the directory. Say /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so.1
Run ln -s /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so.1 /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so

shared library not in ld cache

I'm attempting to use the JAUS++-2.110519 library. Following the included instructions, I have managed to install the library. I have verified the following:
Shared libraries:
libcxutils.so
libjauscore.so
libjausextras.so
libjausmobility.so
libtinyxml.so
are located in
/usr/local/lib/active
Header files
are located in
/usr/local/include/active
Source code
is located in
/usr/local/src/
Following the installation, the instructions mentioned that the library path would need to be added to ld.so.conf. Since /etc/ld.so.conf.d/libc.conf already contained /usr/local/lib, running sudo ldconfig should have picked up the newly installed libraries; however, I am not seeing said libraries in the ld cache.
Running the following:
/sbin/ldconfig -p | grep libcxutils.so
/sbin/ldconfig -p | grep libjauscore.so
/sbin/ldconfig -p | grep libjausextras.so
/sbin/ldconfig -p | grep libjausmobility.so
/sbin/ldconfig -p | grep libtinyxml.so
returns nothing.
I have also created /etc/ld.so.conf.d/jaus.conf that contains the following:
/usr/local/lib/active
and ran sudo ldconfig afterwards. The results were the same, however.
Running nm -Ca on each of the *.so files appears to return valid output.
Why can't I get ldconfig to link this library properly? I am running Ubuntu 12.04.

How can I set rpath on gcc binaries during bootstrap?

I am trying to build gcc 4.7.2 using a custom prefix $PREFIX.
I have built and installed all the prerequisites into my prefix location, and then successfully configured, built and installed gcc.
The problem that I now have is that $PREFIX is not in the library search path, and therefore the shared libraries cannot be found.
$PREFIX/bin $ ./g++ ~/main.cpp
$PREFIX/libexec/gcc/x86_64-suse-linux/4.7.2/cc1plus: \
error while loading shared libraries: \
libcloog-isl.so.1: \
cannot open shared object file: No such file or directory
What works, but isn't ideal
If I export LD_LIBRARY_PATH=$PREFIX/lib then it works, but I'm looking for something which works without having to set environment variables.
If I use patchelf to set the RPATH on all the gcc binaries then it also works; however, this involves searching out all ELF binaries and iterating over them calling patchelf, and I would rather have something more permanent.
What I think would be ideal for my purposes
So I'm hoping there is a way to have -Wl,-rpath,$PREFIX/lib passed to make during the build process.
Since I know the paths won't need to be changed this seems like the most robust solution, and it can also be used when we build the next gcc version.
Is configuring the build process to hard code the RPATH possible?
What I have tried, but doesn't work
Setting LDFLAGS_FOR_TARGET prior to calling configure:
All of these fail:
export LDFLAGS_FOR_TARGET="-L$PREFIX/lib -R$PREFIX/lib"
export LDFLAGS_FOR_TARGET="-L$PREFIX/lib"
export LDFLAGS_FOR_TARGET="-L$PREFIX/lib -Wl,-rpath,$PREFIX/lib"
Setting LDFLAGS prior to calling configure:
export LDFLAGS="-L$PREFIX/lib -Wl,-rpath,$PREFIX/lib"
In any event I worry that these will override any of the LDFLAGS gcc would have had, so I'm not sure these are a viable option even if they could be made to work?
My configure line
For completeness here is the line I pass to configure:
./configure \
--prefix=$PREFIX \
--build=x86_64-suse-linux \
--with-pkgversion='SIG build 12/10/2012' \
--disable-multilib \
--enable-cloog-backend=isl \
--with-mpc=$PREFIX \
--with-mpfr=$PREFIX \
--with-gmp=$PREFIX \
--with-cloog=$PREFIX \
--with-ppl=$PREFIX \
--with-gxx-include-dir=$PREFIX/include/c++/4.7.2
I've found that copying the source directories for gmp, mpfr, mpc, isl, cloog, etc. into the top level gcc source directory (or using symbolic links with the same name) works everywhere. This is in fact the preferred way.
You need to copy (or link) to those source directory names without the version numbers for this to work.
The compilers do not need LD_LIBRARY_PATH (although running applications built with the compilers will need an LD_LIBRARY_PATH to the $PREFIX/lib64 or something like that - but that's different)
Start in a source directory where you'll keep all your sources.
In this source directory you have your gcc directory either by unpacking a tarball or svn...
I use subversion.
Also in this top level directory you have, say, the following source tarballs:
gmp-5.1.0.tar.bz2
mpfr-3.1.1.tar.bz2
mpc-1.0.1.tar.gz
isl-0.11.1.tar.bz2
cloog-0.18.0.tar.gz
I just download these and update to the latest tarballs periodically.
In script form:
# Either:
svn checkout svn://gcc.gnu.org/svn/gcc/trunk gcc_work
# Or:
bunzip2 -c gcc-4.8.0.tar.bz2 | tar -xvf -
mv gcc-4.8.0 gcc_work
# Uncompress sources.. (This will produce version numbered directories).
bunzip2 -c gmp-5.1.0.tar.bz2 | tar -xvf -
bunzip2 -c mpfr-3.1.1.tar.bz2 | tar -xvf -
gunzip -c mpc-1.0.1.tar.gz | tar -xvf -
bunzip2 -c isl-0.11.1.tar.bz2 | tar -xvf -
gunzip -c cloog-0.18.0.tar.gz | tar -xvf -
# Link outside source directories into the top level gcc directory.
cd gcc_work
ln -s ../gmp-5.1.0 gmp
ln -s ../mpfr-3.1.1 mpfr
ln -s ../mpc-1.0.1 mpc
ln -s ../isl-0.11.1 isl
ln -s ../cloog-0.18.0 cloog
# Get out of the gcc working directory and create a build directory. I call mine obj_work.
# I configure the gcc binary and other outputs to be bin_work in the top level directory. Your choice. But I have this:
# home/ed/projects
# home/ed/projects/gcc_work
# home/ed/projects/obj_work
# home/ed/projects/bin_work
# home/ed/projects/gmp-5.1.0
# home/ed/projects/mpfr-3.1.1
# home/ed/projects/mpc-1.0.1
# home/ed/projects/isl-0.11.1
# home/ed/projects/cloog-0.18.0
mkdir obj_work
cd obj_work
../gcc_work/configure --prefix=/home/ed/projects/bin_work <other options>  # configure wants an absolute --prefix
# Your <other options> shouldn't need to involve anything about gmp, mpfr, mpc, isl, cloog.
# The gcc build system will find the directories you linked,
# then configure and compile the needed libraries with the necessary flags and such.
# Good luck.
I've been using this configure option with gcc-4.8.0, on FreeBSD, after building and installing gmp, isl and cloog:
LD_LIBRARY_PATH=/path/to/isl/lib ./configure (lots of other options) \
--with-stage1-ldflags="-rpath /path/to/isl/lib -rpath /path/to/cloog/lib -rpath /path/to/gmp/lib"
and the resulting gcc binary does not need any LD_LIBRARY_PATH. The LD_LIBRARY_PATH for configure is needed because it compiles a test program to check for the ISL version, which would fail if it didn't find the ISL shared lib.
I tried it on Linux (Ubuntu) where it failed during configuring because the -rpath args were passed to gcc instead of ld. I could fix this by using
--with-stage1-ldflags="-Wl,-rpath,/path/to/isl/lib,-rpath,/path/to/cloog/lib,-rpath,/path/to/gmp/lib"
instead.
Just using configure --with-stage1-ldflags="-Wl,-rpath,/path/to/lib" was not enough for me to build gcc 4.9.2; bootstrap failed in stage 2. What works is to pass the flags directly to make via
make BOOT_LDFLAGS="-Wl,-rpath,/path/to/lib"
I got this from https://gcc.gnu.org/ml/gcc/2008-09/msg00214.html
While it still involves setting an environment variable, what I do is define LD_RUN_PATH, which sets the rpath at link time. That way the rest of the system can keep using the system-provided libraries instead of the ones that your gcc build generates.
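A minimal sketch of that approach, reusing the $PREFIX from the question (whether every link step in gcc's own build honours LD_RUN_PATH is an assumption you would want to verify):
export LD_RUN_PATH=$PREFIX/lib
./configure --prefix=$PREFIX <other options>
make && make install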
I am going to make a suggestion that I believe solves your problem, although it definitely does not answer your question. Let's see how many downvotes I get.
Writing a generic wrapper script to set LD_LIBRARY_PATH and then to run the executable is easy; see https://stackoverflow.com/a/7101577/768469.
The idea is to pass something like --prefix=$PREFIX/install to configure, building an install tree that looks like this:
$PREFIX/
    install/
        lib/
            libcloogXX.so
            libgmpYY.so
            ...
        bin/
            gcc
            emacs
            ...
    bin/
        .wrapper
        gcc -> .wrapper
        emacs -> .wrapper
.wrapper is a simple shell script:
#!/bin/sh
here="${0%/*}"    # or use $(dirname "$0")
base="${0##*/}"   # or use $(basename "$0")
libdir="$here"/../install/lib
if [ "$LD_LIBRARY_PATH"x = x ] ; then
    LD_LIBRARY_PATH="$libdir"
else
    LD_LIBRARY_PATH="$libdir":"$LD_LIBRARY_PATH"
fi
export LD_LIBRARY_PATH
exec "$here"/../install/bin/"$base" "$@"
This will forward all arguments correctly, handle spaces in arguments or directory names, and so forth. For practical purposes, it is indistinguishable from setting the rpath like you want.
Also, you can use this approach not only for gcc, but for your entire my-personal-$PREFIX tree. I do this all the time in environments where I want an up-to-date suite of GNU tools, but I do not have (or want to admit to have) root access.
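Hooking another tool into that scheme is then just one more symlink (a sketch; g++ is a hypothetical addition to the layout above):
$ cd $PREFIX/bin
$ ln -s .wrapper g++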
Try adding $PREFIX/lib to /etc/ld.so.conf and then running ldconfig:
# echo $PREFIX/lib >> /etc/ld.so.conf
# ldconfig
This will recreate the cache used by the runtime linker, and it will pick up your libraries.
WARNING: This operation will cause ALL applications to use your newly compiled libraries in $PREFIX instead of the default location.