libexpat could not be found by Linux system - C++

I am developing with the VDDK library for Ubuntu 12.10 i386. I constantly get the following error:
Cannot open library: libexpat.so.0: cannot open shared object file: No such file or directory.
When I run apt-file search libexpat.so
it shows me the following:
lib64expat1: /usr/lib64/libexpat.so.1
lib64expat1: /usr/lib64/libexpat.so.1.6.0
lib64expat1-dev: /usr/lib64/libexpat.so
libexpat1: /lib/i386-linux-gnu/libexpat.so.1
libexpat1: /lib/i386-linux-gnu/libexpat.so.1.6.0
libexpat1-dev: /usr/lib/i386-linux-gnu/libexpat.so
I already tried to create a symlink
sudo ln -s /usr/lib/i386-linux-gnu/libexpat.so /usr/lib/i386-linux-gnu/libexpat.so.0
but it did not work.
Strange thing:
ls -l `locate libexpat.so`
ls: cannot access /lib/i386-linux-gnu/libexpat.so.1: No such file or directory
ls: cannot access /lib/i386-linux-gnu/libexpat.so.1.6.0: No such file or directory
ls: cannot access /usr/lib/vmware-vix-disklib/lib64/libexpat.so.0: No such file or directory
-rw-r--r-- 1 ubuntu ubuntu 141320 Aug 20 09:21 /home/ubuntu/vddk/lib64/libexpat.so.0
-rw-r--r-- 1 root root 141320 Feb 3 16:45 /usr/lib/vmware-vix-disk-lib/vmware-vix-disk-lib/lib64/libexpat.so.0
-rw-r--r-- 1 root root 141320 Aug 20 09:21 /usr/vmware-vix-disklib-distrib/lib64/libexpat.so.0

apt-file only shows the contents of a package (or, in your case, the package(s) matching a file name); it works regardless of whether the package is installed or not.
You need to install libexpat1:
sudo apt-get install libexpat1
If you are going to compile and link custom C programs against libexpat, you will also need:
sudo apt-get install libexpat1-dev
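Note that the package provides libexpat.so.1, while the error asks for libexpat.so.0; the runtime loader resolves strictly by these soname file names, so an old binary may still need a compatibility link. A sandbox sketch of the link chain (stub file and hypothetical directory, no root required):

```shell
# Sandbox demo of the soname link chain (stub file, hypothetical directory).
mkdir -p /tmp/sodemo && cd /tmp/sodemo
printf 'stub' > libexpat.so.1.6.0          # stand-in for the real library file
ln -sf libexpat.so.1.6.0 libexpat.so.1     # runtime soname link (from libexpat1)
ln -sf libexpat.so.1 libexpat.so.0         # compat link the old binary asks for
readlink libexpat.so.0                     # -> libexpat.so.1
```

On the real system the equivalent would be sudo ln -s /lib/i386-linux-gnu/libexpat.so.1 /usr/lib/libexpat.so.0 followed by sudo ldconfig, assuming the two major versions are in fact ABI-compatible.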

Fixed by
manually downloading and installing the libexpat package from http://expat.sourceforge.net/: look for the download page, which should take you to the SourceForge page, and select the stable package.

Go to the source code download page and build with the correct flag.
Since you are building for x86, you'll need to run configure like this:
./configure CFLAGS=-m32

Related

Do you need to build OpenCV from source to get the headers?

I noticed these lines in my .cpp file did not work when I used prebuilt OpenCV binaries instead of building from source:
#include <opencv2/core.hpp>
#include <opencv2/aruco/charuco.hpp>
Is there a way to make this work without building from source? Building from source is very slow, which is especially problematic when I am using Dockerfiles (yes, I know Docker builds are usually cached, but when the cache has to be broken, the OpenCV build adds a lot of slowdown).
Yes. If you are in a distro like Ubuntu, you can just install the development package for that library with
$ sudo apt install libopencv-dev
or on Red Hat/EPEL
$ sudo yum install libopencv-devel
The headers will be installed under /usr:
$ find /usr -name 'opencv.hpp'
/usr/include/opencv4/opencv2/opencv.hpp
/usr/include/boost/compute/interop/opencv.hpp
as well as the CMake modules (for find_package):
$ ls /usr/lib/x86_64-linux-gnu/cmake/opencv4/ -l
total 64
-rw-r--r-- 1 root root 14222 Feb 17 2020 OpenCVConfig.cmake
-rw-r--r-- 1 root root 418 Feb 17 2020 OpenCVConfig-version.cmake
-rw-r--r-- 1 root root 15428 Feb 17 2020 OpenCVModules.cmake
-rw-r--r-- 1 root root 26215 Feb 17 2020 OpenCVModules-release.cmake
Within Docker, that's usually a line in the Dockerfile (run as root, before switching to the end user):
RUN apt-get update && apt-get install -y libopencv-dev
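Once the dev package is in place, a CMake project can consume it through those modules. A minimal sketch (project and file names are made up; whether the aruco/charuco headers are present depends on whether the distro builds in the contrib modules):

```cmake
# Minimal CMakeLists.txt sketch (hypothetical target name).
cmake_minimum_required(VERSION 3.10)
project(charuco_demo CXX)

# Resolved via /usr/lib/x86_64-linux-gnu/cmake/opencv4/OpenCVConfig.cmake
find_package(OpenCV REQUIRED)

add_executable(charuco_demo main.cpp)
target_include_directories(charuco_demo PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(charuco_demo ${OpenCV_LIBS})
```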

yum: using Boost 1.69 instead of the default (1.53) version on CentOS

I want to compile (C++/CMake) code using Boost 1.69. I am on CentOS 7.
After:
sudo yum install boost-devel.x86_64
the code compiles fine, but it uses the default version, which is 1.53.
If I look at the libraries installed in /lib64, I see for example:
>> ls -al /lib64/ | grep boost_timer
lrwxrwxrwx. 1 root root 27 Jun 9 11:50 libboost_timer-mt.so -> libboost_timer-mt.so.1.53.0
-rwxr-xr-x. 1 root root 19848 Apr 1 04:26 libboost_timer-mt.so.1.53.0
The yum package for Boost 1.69 is also available, so I can do:
sudo yum install boost169-devel.x86_64
which updates, for example, the content of /lib64/:
>> ls -al /lib64/ | grep boost_timer
lrwxrwxrwx. 1 root root 27 Jun 9 11:50 libboost_timer-mt.so -> libboost_timer-mt.so.1.53.0
-rwxr-xr-x. 1 root root 19848 Apr 1 04:26 libboost_timer-mt.so.1.53.0
lrwxrwxrwx. 1 root root 24 Jun 9 11:50 libboost_timer.so -> libboost_timer.so.1.53.0
-rwxr-xr-x. 1 root root 19848 Apr 1 04:26 libboost_timer.so.1.53.0
-rwxr-xr-x. 1 root root 24104 Apr 23 2019 libboost_timer.so.1.69.0
also:
>> ls /usr/include/ | grep boost
boost
boost169
At this point my workspace still compiles, but still using 1.53.
I would like my workspace to compile using 1.69. I could achieve this by hacking FindBoost.cmake, but that does not feel like the clean thing to do.
I also tried yum remove boost-devel.x86_64, which removed the folder /usr/include/boost and the related .so files in /lib64, leaving for example:
>> ls -al /lib64/ | grep boost_timer
libboost_timer-mt.so.1.53.0
libboost_timer.so.1.53.0
libboost_timer.so.1.69.0
(note that there is no longer a "libboost_timer-mt.so")
At this point I believe I could also get my workspace to compile by manually creating symbolic links /usr/include/boost and /lib64/libboost_*.so, but that does not feel like the clean thing to do either.
(note: I created the symbolic link /usr/include/boost pointing to /usr/include/boost64/boost, and indeed CMake stopped complaining about BOOST_INCLUDE_DIR, but because I did not create the symbolic links for the libraries, CMake still complains about those.)
Is there a cleaner alternative to manually creating symbolic links?
edit: I did manually create symbolic links for all the Boost-related libraries the compiler was complaining about, and I can confirm this worked.
So there's obviously BOOST_INCLUDEDIR, which you can use to control where the Boost headers are found, so why not just:
cmake -DBOOST_INCLUDEDIR=/usr/include/boost169 \
-DBOOST_LIBRARYDIR=/usr/lib64/boost169 \
...
The closest you can get to setting the default for CMake is to set BOOST_INCLUDEDIR and BOOST_LIBRARYDIR as environment variables. FindBoost.cmake explicitly looks for those variables in the environment (something CMake does not do by default). So you can globally export BOOST_INCLUDEDIR=/usr/include/boost169 and export BOOST_LIBRARYDIR=/usr/lib64/boost169 somewhere, or you can also wrap command invocations with, e.g., BOOST_INCLUDEDIR=/usr/include/boost169 BOOST_LIBRARYDIR=/usr/lib64/boost169 ./mybuild.sh (assuming that mybuild.sh ends up invoking CMake or handles those environment variables itself, of course).
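Either way, once FindBoost sees those hints, an ordinary find_package call resolves to 1.69. A minimal sketch (project and target names are made up; swap in whichever components you actually link):

```cmake
cmake_minimum_required(VERSION 3.10)
project(demo CXX)

# With BOOST_INCLUDEDIR/BOOST_LIBRARYDIR pointing at the boost169 install
# (via -D or the environment), this resolves to 1.69 instead of 1.53.
find_package(Boost 1.69 REQUIRED COMPONENTS timer)

add_executable(demo main.cpp)
target_link_libraries(demo Boost::timer)
```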

Why does 'apt-file list' show *.so files that aren't on the host? And which one do I link to?

I ran the following command to list the OpenSSL library files on disk:
apt-file list libssl-dev
And got the output:
(a long list of *.h files)
libssl-dev: /usr/lib/x86_64-linux-gnu/libcrypto.a
libssl-dev: /usr/lib/x86_64-linux-gnu/libcrypto.so
libssl-dev: /usr/lib/x86_64-linux-gnu/libssl.a
libssl-dev: /usr/lib/x86_64-linux-gnu/libssl.so
(.pc, .gz and others)
But libssl.a and libssl.so aren't on the disk. Running:
ls -l /usr/lib/x86_64-linux-gnu/libssl*
gives this output:
-rw-r--r-- 1 root root 328128 Feb 5 2018 /usr/lib/x86_64-linux-gnu/libssl3.so
-rw-r--r-- 1 root root 426232 Jun 20 05:00 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
-rw-r--r-- 1 root root 433760 Jun 20 04:29 /usr/lib/x86_64-linux-gnu/libssl.so.1.1
So how do I link to the SSL library in my C++ applications? When I specify -llibssl.so, I get an error telling me it's not found.
If I'm supposed to pick one of the existing files on disk, which one do I pick and what criteria do I use?
I'm using Ubuntu 18.04.
Just install the development package: sudo apt install libssl-dev. It provides the libssl.so symlink the linker looks for; you then link with -lssl (and -lcrypto), not -llibssl.so.

AMPL IDE: unable to use lpsolve

I downloaded and extracted amplide-demo-linux64.tar.gz to /opt/amplide/.
Then I downloaded lp_solve_5.5.2.0_exe_ux32.tar.gz and extracted the file lpsolve to /opt/amplide/ampl/.
And I have liblpsolve55.so under the directory suggested on the AMPL page:
kjrz@kjrz-tsh ~ $ ll /usr/lib/lp_solve/
total 604
drwxr-xr-x 2 root root 4096 Jun 24 2014 .
drwxr-xr-x 185 root root 20480 Jan 14 11:11 ..
-rw-r--r-- 1 root root 590168 Dec 23 2013 liblpsolve55.so
This is what happens:
ampl: option solver lpsolve;
ampl: model owd.mod;
ampl: data owd.dat;
ampl: solve;
lpsolve: error while loading shared libraries: liblpsolve55.so: cannot open shared object file: No such file or directory
exit code 127
<BREAK>
Why is that?
You should put (a link to) liblpsolve55.so somewhere on the library search path, for example in /usr/lib:
$ sudo ln -s /usr/lib/lp_solve/liblpsolve55.so /usr/lib
Also make sure that you have the 32-bit (x86) version of liblpsolve55.so installed, since you downloaded the ux32 build. For example, on 64-bit Ubuntu you can install it as follows:
$ sudo apt-get install lp-solve:i386
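An alternative to symlinking into /usr/lib is a loader drop-in config: the directory goes into a file under /etc/ld.so.conf.d/ (the file name is arbitrary), followed by sudo ldconfig to rebuild the cache. A sketch, assuming the path from the question:

```
# /etc/ld.so.conf.d/lpsolve.conf
/usr/lib/lp_solve
```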

Management command not found without unzipping .egg

I have a Django app that I've packaged according to the docs here: https://docs.djangoproject.com/en/1.5/intro/reusable-apps/
I installed the app into a virtual environment using setup.py.
./setup.py install
The app's web UI runs fine from the virtual environment. But I cannot access the custom management command with this vanilla install.
(django_grm)[grm@controller django_grm]$ python ./manage.py sync_to_graphite
Unknown command: 'sync_to_graphite'
Here's what the virtual environment looks like when the command will not execute:
(django_grm)[grm@controller django_grm]$ ll /home/grm/venv/django_grm/lib/python2.7/site-packages
total 1148
...
-rw-rw-r-- 1 grm grm 243962 Jun 19 17:11 django_grm-0.0.4-py2.7.egg
...
However, once I unzip the .egg file, the management command works as expected.
(django_grm)[grm@controller django_grm]$ cd /home/grm/venv/django_grm/lib/python2.7/site-packages
(django_grm)[grm@controller site-packages]$ unzip django_grm-0.0.4-py2.7.egg
(django_grm)[grm@controller site-packages]$ ll /home/grm/venv/django_grm/lib/python2.7/site-packages
total 1152
...
-rw-rw-r-- 1 grm grm 243962 Jun 19 17:11 django_grm-0.0.4-py2.7.egg
drwxrwxr-x 6 grm grm 4096 Jun 19 17:16 dj_grm
...
(django_grm)[grm@controller site-packages]$ cd /home/grm/django_projects/django_grm/
(django_grm)[grm@controller django_grm]$ python ./manage.py sync_to_graphite
<success>
Is this normal behaviour? It feels wonky.
I strongly suggest using pip instead of setup.py. It tends to do a much better job of installing packages as well as managing them.
Once you have your virtual environment in place, it would be:
$ . env/bin/activate
$ pip install [APP_NAME]
This installs a non-zipped version of the app in the virtual environment.
If the app is a zip from somewhere, you can still use pip:
$ pip install http://[URL_TO_ZIP]
Let's take a look at the part of the source that loads management commands:
def find_commands(management_dir):
    """
    Given a path to a management directory, returns a list of all the command
    names that are available.

    Returns an empty list if no commands are defined.
    """
    command_dir = os.path.join(management_dir, 'commands')
    try:
        return [f[:-3] for f in os.listdir(command_dir)
                if not f.startswith('_') and f.endswith('.py')]
    except OSError:
        return []
which is called by:
# Find and load the management module for each installed app.
for app_name in apps:
    try:
        path = find_management_module(app_name)
        _commands.update(dict([(name, app_name)
                               for name in find_commands(path)]))
    except ImportError:
        pass  # No management module - ignore this app
So, yeah, Django doesn't support apps installed in a zipped file, at least here; it wants an explicit commands directory inside management_dir.
As @tghw notes, installing via pip will keep the package in a directory instead of zipping it. You can also (and probably should) set zip_safe=False in your setup() call; this will stop setuptools/distribute from trying to zip up your package no matter how you install it.
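A sketch of the zip_safe setting in setup.py (name and version are guessed from the egg file name above; the rest of the setup() arguments are omitted):

```python
# setup.py -- zip_safe=False makes setuptools install the package unpacked,
# so Django can scan management/commands/ on disk.
from setuptools import setup, find_packages

setup(
    name='django-grm',
    version='0.0.4',
    packages=find_packages(),
    zip_safe=False,
)
```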