CentOS7: rpmbuild - Unable to recognise the format of the input file - centos7

I'm trying to build an extremely simple RPM on CentOS 7.
I just copy some pre-compiled executables from the tar.gz to /usr/bin/my_rpms/rpm1.
Here is my install section:
%install
mkdir -p %{buildroot}/usr/bin/my_rpms/rpm1/
install -D prog prog.o -t %{buildroot}/usr/bin/my_rpms/rpm1/
It used to work fine for the most part, but today, after I made some changes to prog and re-compiled, it keeps giving these errors:
+ mkdir -p /root/rpmbuild/BUILDROOT/rpm1.x86_64/usr/bin/my_rpms/rpm1/
+ install -D prog prog.o -t /root/rpmbuild/BUILDROOT/rpm1.x86_64/usr/bin/my_rpms/rpm1/
+ /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/redhat/brp-compress
+ /usr/lib/rpm/redhat/brp-strip /usr/bin/strip
/usr/bin/strip: Unable to recognise the format of the input file `/root/rpmbuild/BUILDROOT/rpm1.x86_64/usr/bin/drivertest_rpms/rpm1/prog.o'

As you can see in the error log, the problem is with binary stripping, which rpmbuild's brp-strip script runs by default after %install. Your build environment is probably different from the rpm environment (cross-compiling?), as suggested by #aaron-d-marasco.
So I recommend building the RPM from the project source, i.e. move your build commands into the %build section of the .spec file.
Or strip your files on the machine where you built them, and then use the cp command instead of install in the %install section to copy them into the target directory.
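Here is a minimal sketch of that second approach, assuming you also want to skip rpm's automatic post-build stripping entirely; the %__os_install_post override disables all of the brp-* post-build scripts, not just brp-strip, so only use it if that is acceptable:

# disable the post-build scripts (brp-strip, brp-compress, ...)
%define __os_install_post %{nil}

%install
mkdir -p %{buildroot}/usr/bin/my_rpms/rpm1/
cp prog prog.o %{buildroot}/usr/bin/my_rpms/rpm1/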

Related

How to install c++ program into conda environment from source

I would like to compile and then use the scientific project BASS, which is distributed as C++ source code on GitHub. I've set up a conda environment bass to hold everything related to BASS, and I'd like to compile BASS into this environment (so that if I delete the environment, it's cleanly removed).
I don't know if I should be using conda-build or make to do this. There is a Makefile distributed with the project, but I think it might have an error.
My latest attempt is the following (the source files are in code/, and GSL seems to be a dependency):
conda create -n bass
conda activate bass
conda install make
conda install gsl
cd code/
make
I get the following:
gcc -c main.cpp -I../Libs/GSL/include/ -I./
main.cpp:19:10: fatal error: 'gsl/gsl_statistics.h' file not found
#include <gsl/gsl_statistics.h>
^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [Makefile:11: main.o] Error 1
My questions:
Should I be doing this with make?
Do I need gcc/g++/cxx-compiler installed in my conda env?
Is the Makefile correct where it says "-I../Libs/GSL/include/"?
Thank you!
Here's a very basic Conda recipe for the package. Make a folder (say recipe) and put the following two files in it:
meta.yaml
package:
  name: bass
  version: 0.1  # this is totally arbitrary (there are no versions)

source:
  git_url: https://github.com/judyboon/BASS.git

requirements:
  build:
    - {{ compiler('cxx') }}
  host:
    - gsl
  run:
    - gsl

about:
  home: https://github.com/judyboon/BASS
  license: GPL-3.0-only
  license_file: COPYING
  license_family: GPL
build.sh
#!/bin/bash
## change to source dir
cd ${SRC_DIR}/code
## compile
${CXX} -c main.cpp *.cpp -I./ ${CXXFLAGS}
## link
${CXX} *.o -o BASS -lgsl -lgslcblas -lm ${LDFLAGS}
## install
mkdir -p $PREFIX/bin
cp BASS ${PREFIX}/bin/BASS
You can then build this with conda build ., run from within that directory.
Installing
After successful build, you can create an environment with this package installed using
conda create -n foo --use-local bass
Since I'm on an Intel architecture, and this tool uses CBLAS, I would further specify to use MKL:
conda create -n foo --use-local bass blas=*=*mkl
Now if you activate the environment foo (that's an arbitrary name), the software will be on PATH, under the binary BASS.
Additional Notes
I verified this works on osx-64 platform with the conda-forge channel prioritized.
The Makefile accompanying the code isn't very generic. Here we just have it compile all .cpp files and subsequently link all .o files.
There aren't any releases on the repository, so the versioning in the meta.yaml is arbitrary.
The build.sh defines the final output binary as BASS - that could be changed, and differs from the original Makefile which outputs the binary as main.
Based on what you write, the correct way is to build a conda recipe. By writing a recipe, conda knows what to delete when you delete the environment.
The recipe is a text file (the meta.yaml file) in which you write some metadata describing the package, define its dependencies, and write a short build script that runs the makefile and installs the binaries to the correct location. Defining the compilation environment may not be trivial, and you should follow the documentation for the package you are trying to install. If there are problems with the package code (and that includes the makefile), that's something conda can't help you with.
See the documentation on how to write the recipe:
https://docs.conda.io/projects/conda-build/en/latest/concepts/recipe.html
Once the recipe is complete, you run conda build and then conda install -c [path to the build] [the package name]. (I mention this because it took me ages to realize how to correctly install a local package.)
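For example, assuming the recipe sits in a directory called recipe and that conda-build writes its output to the default conda-bld directory under your conda installation (both assumptions, adjust to your setup), the workflow would look roughly like this:

conda build ./recipe
# install from the local build output into a fresh environment
conda create -n bass-env --use-local bass
# or point conda at the build directory explicitly
conda install -n bass-env -c file://${HOME}/miniconda3/conda-bld bass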

How can I compile Hazelcast C++ Client with the compiler g++-8.2

Is there any way to compile a project that uses the Hazelcast C++ client library with g++-8.2?
If I compile it with g++-8.2, I get a lot of "undefined reference ..." errors.
With g++-4.9 it works well.
The issue looks a bit like the discussion in this Google Groups thread, which indicated that the compilation errors are caused by the wrong compiler version.
However, the compiler g++-4.9 is too old for me to build my big project.
The sample code can be found on the official Hazelcast website, if someone wants to give it a try.
I finally solved it by upgrading the library from 3.10 to 3.11.
The 3.11 library is built manually using g++-8.2 from the Hazelcast source code on GitHub.
Because there is no make install step after building the hazelcast-cpp-client package, I use a script to gather the header files into one directory (hazelcast-cpp-client/include) so that a program can easily find the library and headers.
Build script:
#!/bin/bash
# Package Requirements:
# - asio
mkdir hazelcast-cpp-client ; cd hazelcast-cpp-client
# Build
git clone https://github.com/hazelcast/hazelcast-cpp-client.git
mv hazelcast-cpp-client tmp
cd tmp
git checkout v3.11
mkdir release ; cd release
cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_BUILD_TYPE=Release ..
make
# Back to 'hazelcast-cpp-client' directory
cd ../..
# Copy .a library out from tmp/
cp tmp/release/*.a .
# Arrange all header files in one directory
cp -r tmp/hazelcast/include .
cp -r tmp/hazelcast/generated-sources/include/hazelcast/client/protocol ./include/hazelcast/client
rm tmp/external/include/*.md # We don't need readme file
cp -r tmp/external/include/* ./include
# Delete tmp directory
rm -rf tmp
Compilation command is like:
g++ -std=c++11 \
-I/path/to/hazelcast-cpp-client/include \
hz_test.cpp \
/path/to/hazelcast-cpp-client/libHazelcastClient3.11_64.a \
-lpthread
Thanks for reporting this problem. We did not test with the g++-8.2 compiler. I opened an issue to solve the problems: https://github.com/hazelcast/hazelcast-cpp-client/issues/494
Can you also tell me your OS environment? Which distribution and version is it?

Tensorflow Op: how to include libtensorflow_framework.so?

I followed the instructions of this tutorial:
https://www.tensorflow.org/extend/adding_an_op#implement_the_gradient_in_python.
There is this command provided: g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC -I$TF_INC -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -O2
But the linker cannot find -ltensorflow_framework (it should be a libtensorflow_framework.so file!?)
After some research, I found following links:
https://github.com/tensorflow/tensorflow/issues/1569
https://github.com/eaplatanios/tensorflow_scala/issues/26 --> I downloaded the .jar and linked it via -l/pathto/tensorflow_framework.so, but I still get the fatal error: tensorflow/core/framework/op_kernel.h: No such file or directory.
https://github.com/tensorflow/tensorflow/issues/1270 last comment does not work and so does not help me.
I tried searching recursively with sudo find /usr/. -name "tensorflow_framework.so" but could not find anything. TensorFlow is definitely installed via Anaconda, and I have also cloned and compiled the repository from source.
How can I get the linker to find -ltensorflow_framework?
One answer I have found:
I installed my Python via Anaconda2, and I had always tried to find TF_INC and TF_LIB after activating my environment with source activate <env>, but could not find any *.so files under ~/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow.
This time I left every Python environment with the shell command source deactivate and typed the following command:
python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())'
Now, I got a different path: ~/anaconda2/lib/python2.7/site-packages/tensorflow, where the lib libtensorflow_framework.so is located.
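With that path in hand, the op from the tutorial can be compiled roughly as follows. This is a sketch only: zero_out.cc is the tutorial's example file, and tf.sysconfig.get_include()/tf.sysconfig.get_lib() query the header and library paths from whichever TensorFlow installation your shell's python actually picks up:

TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC \
    -I"$TF_INC" -I"$TF_INC/external/nsync/public" \
    -L"$TF_LIB" -ltensorflow_framework -O2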
In my case, the file libtensorflow_framework.so.1 existed inside my TF_LIB directory instead of libtensorflow_framework.so. In order to solve this issue, I had to create a symbolic link as follows:
ln -s libtensorflow_framework.so.1 libtensorflow_framework.so
Source: Tensorflow NotFoundError: libtensorflow_framework.so: cannot open shared file or directory
libtensorflow_framework is not used before TensorFlow 1.4.1.
When you call python from the shell make sure you are calling the right one:
TF_LIB = $(shell python -c 'import tensorflow; print(tensorflow.sysconfig.get_lib())')
or
TF_LIB = $(shell python3 -c 'import tensorflow; print(tensorflow.sysconfig.get_lib())')
To be more clear:
Get the path from python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())', and there is a libtensorflow_framework.so.1 inside the directory. Say /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so.1
Run ln -s /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so.1 /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so

DevIL not loading images with Linux build

I was having issues getting an image to load in DevIL, so I have provided exactly how I built the libs and how I am trying to use them.
Downloaded the DevIL source from their website.
$ unzip DevIL-1.7.8.zip
$ mkdir devil
$ cd Devil-1.7.8
Just type:
  autoreconf -i
  ./configure <your options here>
  make
  sudo make install
If I use autoreconf -i and then ./configure with the prefix, ILU and ILUT options, I get an error, which I forgot to record. How important is this? I have just not used it.
$ chmod +x configure
$ ./configure --prefix=/home/path/to/TestingDevil/devil --enable-ILU --enable-ILUT
$ make
$ make install
So at this point my library should be built.
I downloaded the DevIL simple example (simple.c) to TestingDevil/simple/simple.c and built it:
$ gcc -I ../devil/include -L ../devil/lib/ simple.c -o simple -lIL -lILU -lILUT
$ cp ../devil/lib/*.so* .
I have added an image (jpg) to test.
$ LD_LIBRARY_PATH=. ./simple image.jpg
"Could not open file...exiting"
I ran the executable from the simple directory.
simple$ ls
image.jpg libIL.so.1.1.0 libILU.so.1.1.0 libILUT.so.1.1.0 libIL.so
libILU.so libILUT.so simple libIL.so.1 libILU.so.1
libILUT.so.1 simple.c
What's going wrong? I am using the example from DevIL; it compiles and runs fine, it just can't load any files.
My system is Ubuntu 12.10 64-bit with build-essential installed and other dev packages for OpenGL development.
The uni system is Fedora 15(?) 32-bit. It also has exactly the same problem after building DevIL in the same way.
On my home machine I installed the package libdevil-dev and that works fine.
This question is not about OpenGL, purely about the DevIL lib and example.
It looks like you built it without JPEG support, or you do not have the JPEG libs available?
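If that is the case, one possible fix (a sketch, assuming an Ubuntu system and that libjpeg-dev is the right package name there) is to install the JPEG development headers before configuring, so that DevIL's configure script picks up JPEG support, then rebuild:

$ sudo apt-get install libjpeg-dev
$ ./configure --prefix=/home/path/to/TestingDevil/devil --enable-ILU --enable-ILUT
$ make
$ make install

Check the configure output to confirm that JPEG support is reported as enabled before running make.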

Error in making OpenCV

I'm getting this error:
OpenCV-2.4.3/modules/features2d/src/freak.cpp:437: error: unable to find a register to spill in class 'GENERAL_REGS'
After doing:
tar xfj OpenCV-2.4.3.tar.bz2
cd OpenCV-2.4.3
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_NEW_PYTHON_SUPPORT=ON -D BUILD_EXAMPLES=ON ..
make
The same procedure works on another machine. Any ideas?
You need to dig into the Makefiles to remove -O for freak.cpp.
UPDATE:
This is exactly how one should do it (tested with 2.4.3 and 2.4.4).
You need to edit
YOUR_BUILD_DIR/modules/features2d/CMakeFiles/opencv_features2d.dir/build.make
Search for freak.cpp. You will find three blocks: Building CXX..., Preprocessing CXX..., and Compiling CXX.... I just needed to change the Building part. The last line of that block looks like this:
.... YOUR_COMPILER $(CXX_DEFINES) $(CXX_FLAGS) ...
and if you check, you will find that CXX_FLAGS has -O3 in it. If you add -O0 after $(CXX_FLAGS), it overrides the -O3. So your line should look like this:
.... YOUR_COMPILER $(CXX_DEFINES) $(CXX_FLAGS) -O0 ...
This is at least working here!
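If you prefer not to edit the file by hand, the same change can be scripted, for example with sed. This is a hedged sketch: the path follows the answer above, and the pattern assumes the rule lines for freak.cpp contain a literal $(CXX_FLAGS) as shown:

BUILD_MAKE=YOUR_BUILD_DIR/modules/features2d/CMakeFiles/opencv_features2d.dir/build.make
cp "$BUILD_MAKE" "$BUILD_MAKE.bak"   # keep a backup
# append -O0 after $(CXX_FLAGS) on every rule line that mentions freak.cpp
sed -i 's/\$(CXX_FLAGS)\(.*freak\.cpp\)/$(CXX_FLAGS) -O0\1/' "$BUILD_MAKE"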
I struggled with this for quite a few hours as well on my CentOS 5.x boxen, and here's my solution.
It's apparent you need to update gcc, but upgrading natively via RPM or just grabbing RPMs at random can cause some serious config-management issues on your server. I didn't have time to compile gcc/g++ from source either. After grazing around in the repo for a while, I found that there is, indeed, a 4.x release of gcc in the base repo.
Do this (or have someone with root do it, in case the OP doesn't have access):
# yum install gcc44 gcc44-c++ -y
...CentOS/RHEL have bundled a preview RPM of gcc-4.4.6.
Then when you go to do 'cmake' to build your release environment, do at least the following (your cmake params may vary):
# cd /path/to/OpenCV-2.4.3
# mkdir release && cd release
# env CC=/usr/bin/gcc44 CXX=/usr/bin/g++44 cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/place/to/install/ -D BUILD_PYTHON_SUPPORT=ON /path/to/OpenCV-2.4.3/
That will give you a successful build of OpenCV-2.4.3 natively on CentOS/RHEL 5.x.
Had the same problem and solved it like wisehippy with one slight change:
# yum install gcc44 gcc44-c++ -y
# mkdir release && cd release
# cmake -D CMAKE_BUILD_TYPE=RELEASE -DCMAKE_CXX_COMPILER=/usr/bin/g++44 -DCMAKE_C_COMPILER=/usr/bin/gcc44 <OpenCV_Dir>
I found the problem was solved once I updated c++ to point to g++44 instead of the default g++, which was 4.1.
As root, verify that the files are the same before doing this step; it may not be necessary for you.
diff /usr/bin/c++ /usr/bin/g++
There should be nothing returned if the files are the same. If this is the case, continue.
Back up your old file. You could delete it instead, since it's the same as g++, but I like to be careful.
mv /usr/bin/c++ /usr/bin/c++4.1
Create a link so that c++ points to your g++44. You could use a symbolic link here as well.
ln /usr/bin/g++44 /usr/bin/c++
Done.