I'm trying to set up the MaxHS SAT solver from this git repo - https://github.com/fbacchus/MaxHS.
I get an error that says '/usr/bin/ld: cannot find -lcplex'.
Can anyone guide me on what the lcplex library is and how to fix this?
My console output looks like this:
install -d /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/include/maxhs
install -d /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/include/minisat
for dir in maxhs/core maxhs/ifaces maxhs/ds maxhs/utils; do \
install -d /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/include/$dir ; \
done
for dir in minisat/mtl minisat/utils minisat/core minisat/simp; do \
install -d /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/include/$dir ; \
done
for h in minisat/mtl/Alg.h minisat/mtl/Map.h minisat/mtl/Alloc.h
minisat/mtl/Vec.h minisat/mtl/Rnd.h minisat/mtl/Sort.h minisat/mtl/IntMap.h
minisat/mtl/Queue.h minisat/mtl/IntTypes.h minisat/mtl/Heap.h
minisat/mtl/XAlloc.h minisat/core/SolverTypes.h minisat/core/Dimacs.h
minisat/core/Solver.h minisat/utils/System.h minisat/utils/ParseUtils.h
minisat/utils/Options.h minisat/simp/SimpSolver.h ; do \
install -m 644 $h /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/include/$h ; \
done
for h in maxhs/core/Bvars.h maxhs/core/Dimacs.h maxhs/core/MaxSolverTypes.h
maxhs/core/Assumptions.h maxhs/core/Wcnf.h maxhs/core/MaxSolver.h
maxhs/ifaces/miniSatSolver.h maxhs/ifaces/GreedySolver.h
maxhs/ifaces/Cplex.h maxhs/ifaces/greedySatSolver.h maxhs/ifaces/muser.h
maxhs/ifaces/SatSolver.h maxhs/ds/Packed.h maxhs/utils/io.h
maxhs/utils/Params.h maxhs/utils/hash.h ; do \
install -m 644 $h /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/include/$h ; \
done
install -d /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/lib
install -m 644 build/release/lib/libmaxhs.a /mnt/c/Akhil/Abhyas/CQA/maxhs_installation/lib
Linking Binary: build/release/bin/maxhs
/usr/bin/ld: cannot find -lcplex
collect2: error: ld returned 1 exit status
Makefile:155: recipe for target 'build/release/bin/maxhs' failed
make: *** [build/release/bin/maxhs] Error 1
Linking Binary: build/release/bin/maxhs
/usr/bin/ld: cannot find -lcplex
This says that the linker is unable to find libcplex.so (or libcplex.a). lcplex is the IBM ILOG CPLEX library, which MaxHS links against as its IP solver; it is not in the standard package repositories, so CPLEX has to be installed separately.
You then need to give the linker the path to this library with something like -L/path/to/library.
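For what it's worth, here is a minimal sketch of pointing the linker at CPLEX from the shell, assuming a typical CPLEX Studio install path (the exact location, and whether the MaxHS Makefile has its own variable for this, will differ on your machine):
# Hypothetical CPLEX install prefix; adjust to where CPLEX Studio actually lives.
CPLEX_LIB_DIR=/opt/ibm/ILOG/CPLEX_Studio129/cplex/lib/x86-64_linux/static_pic
# gcc honours LIBRARY_PATH when it invokes the linker to resolve -l flags,
# so this works even without editing the Makefile:
export LIBRARY_PATH=$CPLEX_LIB_DIR:$LIBRARY_PATH
make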
Related
Good day.
I am attempting to compile a vulnerable vsftpd version for a University assignment. I'm having trouble compiling the source code on the lxc container launched to host the vulnerable ftp server. The following message is spat at me when I attempt to execute the make command:
/usr/bin/ld: cannot find : No such file or directory
collect2: error: ld returned 1 exit status
make: *** [Makefile:24: vsftpd] Error 1
Despite my many years writing code, I am quite new to C++ in general; however, I gather this might have something to do with line 24 of the Makefile, which is highlighted below:
# Makefile for systems with GNU tools
CC = gcc
INSTALL = install
IFLAGS = -idirafter dummyinc
#CFLAGS = -g
CFLAGS = -O2 -Wall -W -Wshadow #-pedantic -Werror -Wconversion
LIBS = `./vsf_findlibs.sh`
LINK = -Wl,-s, -lcrypt
OBJS = main.o utility.o prelogin.o ftpcmdio.o postlogin.o privsock.o \
tunables.o ftpdataio.o secbuf.o ls.o \
postprivparent.o logging.o str.o netstr.o sysstr.o strlist.o \
banner.o filestr.o parseconf.o secutil.o \
ascii.o oneprocess.o twoprocess.o privops.o standalone.o hash.o \
tcpwrap.o ipaddrparse.o access.o features.o readwrite.o opts.o \
ssl.o sslslave.o ptracesandbox.o ftppolicy.o sysutil.o sysdeputil.o
.c.o:
$(CC) -c $*.c $(CFLAGS) $(IFLAGS)
vsftpd: $(OBJS)
24th line >>>> $(CC) -o vsftpd $(OBJS) $(LINK) $(LIBS) $(LDFLAGS)
install:
if [ -x /usr/local/sbin ]; then \
$(INSTALL) -m 755 vsftpd /usr/local/sbin/vsftpd; \
else \
$(INSTALL) -m 755 vsftpd /usr/sbin/vsftpd; fi
if [ -x /usr/local/man ]; then \
$(INSTALL) -m 644 vsftpd.8 /usr/local/man/man8/vsftpd.8; \
$(INSTALL) -m 644 vsftpd.conf.5 /usr/local/man/man5/vsftpd.conf.5; \
elif [ -x /usr/share/man ]; then \
$(INSTALL) -m 644 vsftpd.8 /usr/share/man/man8/vsftpd.8; \
$(INSTALL) -m 644 vsftpd.conf.5 /usr/share/man/man5/vsftpd.conf.5; \
else \
$(INSTALL) -m 644 vsftpd.8 /usr/man/man8/vsftpd.8; \
$(INSTALL) -m 644 vsftpd.conf.5 /usr/man/man5/vsftpd.conf.5; fi
if [ -x /etc/xinetd.d ]; then \
$(INSTALL) -m 644 xinetd.d/vsftpd /etc/xinetd.d/vsftpd; fi
clean:
rm -f *.o *.swp vsftpd
Despite my research, I do not understand how to resolve this.
Thanks in advance.
Installing the libmysqlclient development package should resolve this issue. On Ubuntu, run:
sudo apt-get install libmysqlclient-dev
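Since the Makefile builds its library list from ./vsf_findlibs.sh (whatever it prints becomes the $(LIBS) part of line 24), it can also help to check what that script emits before and after installing the package; a quick sanity check:
# From the vsftpd source directory: print the library flags make will use.
./vsf_findlibs.sh
sudo apt-get install libmysqlclient-dev
./vsf_findlibs.sh    # re-check after installing the suggested package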
I'm building PDAL this way on my Ubuntu 18:
cd /home/magno/install && \
git clone https://github.com/hobu/laz-perf.git && \
cd laz-perf && \
mkdir build && \
cd build && \
cmake .. \
-DEMSCRIPTEN=1 \
-DCMAKE_TOOLCHAIN_FILE=/home/magno/install/emsdk/upstream/emscripten/cmake/Modules/Platform/Emscripten.cmake && \
VERBOSE=1 make && \
make install
cd /home/magno/install && \
git clone https://github.com/pgpointcloud/pointcloud && \
cd pointcloud && \
./autogen.sh && \
./configure --with-lazperf=/usr/local/ && \
make && \
make install
cd /home/magno/install && \
git clone https://github.com/PDAL/PDAL.git && \
cd PDAL && \
mkdir build && \
cd build && \
cmake -G Ninja .. && \
ninja && \
ninja install
Running PGUSER=postgres PGPASSWORD=*** PGHOST=localhost PGPORT=5432 ctest confirms everything is fine.
But when I try to check a LAZ file I'm getting this error:
PDAL: readers.las: Can't read compressed file without LASzip or LAZperf decompression library.
This is my pipe file:
{
"pipeline":[
{
"type":"readers.las",
"filename":"airport.laz",
"spatialreference":"EPSG:32616",
"compression":"lazperf"
},
{
"type":"writers.pgpointcloud",
"connection":"dbname=mydb host='localhost' user='postgres' password='****'",
"table":"patchs",
"compression":"lazperf",
"srid":"32616",
"overwrite":"false"
}
]
}
I think lazperf is OK because pgpointcloud doesn't complain with PGUSER=postgres PGPASSWORD=**** PGHOST=localhost make installcheck and tells me:
# PointCloud is now configured for
# -------------- Compiler Info -------------
# C compiler: gcc -g -O2
# SQL preprocessor: /usr/bin/cpp -traditional-cpp -w -P
# -------------- Dependencies --------------
# PostgreSQL config: /usr/bin/pg_config
# PostgreSQL version: PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) (120)
# Libxml2 config: /usr/bin/xml2-config
# Libxml2 version: 2.9.4
# LazPerf status: /usr/local//include/laz-perf
# CUnit status: enabled
PDAL tests tell me nothing about compression.
How can I build PDAL with, or point it at, my LAZperf installation?
EDIT: pdal info install/PDAL/test/data/las/autzen_trim.las works fine.
God bless the Google!
Found the solution by reading this, this and this.
I just needed to change the cmake step to cmake -G Ninja -DLazperf_DIR=/usr/local/ -DWITH_LAZPERF=ON ..
and voilà:
-- The following OPTIONAL packages have been found:
* Lazperf
* ZSTD
General compression support
* LibXml2
* PkgConfig
* PythonInterp
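For reference, the full reconfigure with the paths from my question looks roughly like this:
# Re-run the PDAL configure step with the LAZperf hints, then rebuild.
cd /home/magno/install/PDAL/build && \
cmake -G Ninja -DLazperf_DIR=/usr/local/ -DWITH_LAZPERF=ON .. && \
ninja && \
ninja install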
What would cause the following linker error when compiling a C++ XSDK project?
Linker Error Message:
./src/platform/platform_zynq.o: In function `timer_callback(XScuTimer*)':
platform_zynq.c:115:undefined reference to `dhcp_fine_tmr()'
platform_zynq.c:117:undefined reference to `dhcp_coarse_tmr()'
collect2.exe: error: ld returned 1 exit status
make: *** [uartcmd.elf] Error 1
Linker Command as follows:
arm-none-eabi-g++ \
-mcpu=cortex-a9 \
-mfpu=vfpv3 \
-mfloat-abi=hard \
-Wl,-build-id=none \
-specs=Xilinx.spec \
-Wl,-T\
-Wl,../src/lscript.ld \
-L../../uartcmd_bsp/ps7_cortexa9_0/lib \
-o "uartcmd.elf" \
./src/platform/platform.o \
./src/platform/platform_mb.o \
./src/platform/platform_ppc.o \
./src/platform/platform_zynq.o \
./src/platform/platform_zynqmp.o \
./src/fpga_registers2.clang.o \
./src/main.o \
./src/network.o \
-Wl,\
--start-group,-lxil,-lfreertos,-lgcc,-lc,-lstdc++,--end-group\
-Wl,--start-group,-lxil,-llwip4,-lgcc,-lc,--end-group
Configured Xilinx SDK project as follows:
// Step 0: Create Application Project with FreeRTOS Selected
//
// Step 1: Adding lwIP library to XSDK project.
//
// (2) Right click <app>_bsp and select
// "Board Support Package Settings"
// (3) Click Check box "lwip"
//
// Step 2: Add DHCP Support
// (1) Right click <app>_bsp and select
// "Board Support Package Settings"
// (2) Click Overview:freertos901_xilinx.lwip141
// (3) Check dhcp_options.dhcp_does_arp_check true
// (4) Check dhcp_options.lwip_dhcp true
Symbol Table as Follows:
$ nm ./zynq_rtl.sdk/uartcmd_bsp/ps7_cortexa9_0/lib/liblwip4.a | grep -i dhcp_fine
00001c7c T dhcp_fine_tmr
$ nm ./zynq_rtl.sdk/uartcmd_bsp/ps7_cortexa9_0/lib/liblwip4.a | grep -i dhcp_coarse
00001a54 T dhcp_coarse_tmr
$ nm -C ./uartcmd/Debug/src/platform/platform_zynq.o | grep dhcp_fine
U dhcp_fine_tmr()
$ nm -C ./uartcmd/Debug/src/platform/platform_zynq.o | grep dhcp_coarse
U dhcp_coarse_tmr()
Doesn't look like a name mangling problem to me... I added the "-C" option to demangle the symbol names...
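To make the comparison concrete, the raw symbol names (without -C) can be put side by side; the mangled form shown below is illustrative rather than copied from my build:
# Reference emitted by the C++-compiled object file (raw, undemangled):
nm ./uartcmd/Debug/src/platform/platform_zynq.o | grep -i dhcp_fine
#   U _Z13dhcp_fine_tmrv        (illustrative C++-mangled name)
# Symbol actually exported by the lwIP library:
nm ./zynq_rtl.sdk/uartcmd_bsp/ps7_cortexa9_0/lib/liblwip4.a | grep -i dhcp_fine
#   00001c7c T dhcp_fine_tmr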
I have a simple program that shows an image.
String imagePath = "D:/Dev Tools/Docker/alpineOpenCV/";
Mat img = imread(imagePath+"lena.jpg", IMREAD_COLOR);
imshow ("Test Image", img);
And I have an image built using the following Dockerfile.
FROM alpine:3.9
RUN echo -e '#edgunity http://nl.alpinelinux.org/alpine/edge/community\n\
#edge http://nl.alpinelinux.org/alpine/edge/main\n\
#testing http://nl.alpinelinux.org/alpine/edge/testing\n\
#community http://dl-cdn.alpinelinux.org/alpine/edge/community'\
>> /etc/apk/repositories
RUN apk add --update \
# --virtual .build-deps \
build-base \
openblas-dev \
unzip \
wget \
cmake \
libtbb#testing \
libtbb-dev#testing \
libjpeg \
libjpeg-turbo-dev \
libpng-dev \
jasper-dev \
tiff-dev \
libwebp-dev \
clang-dev \
linux-headers
ENV CC /usr/bin/clang
ENV CXX /usr/bin/clang++
ENV OPENCV_VERSION=4.0.1
RUN cd /opt && \
wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
unzip ${OPENCV_VERSION}.zip && \
rm -rf ${OPENCV_VERSION}.zip
RUN mkdir -p /opt/opencv-${OPENCV_VERSION}/build && \
cd /opt/opencv-${OPENCV_VERSION}/build && \
cmake \
-D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_FFMPEG=NO \
-D WITH_IPP=NO \
-D WITH_OPENEXR=NO \
-D WITH_TBB=YES \
-D BUILD_EXAMPLES=NO \
-D BUILD_ANDROID_EXAMPLES=NO \
-D INSTALL_PYTHON_EXAMPLES=NO \
-D BUILD_DOCS=NO \
-D BUILD_opencv_python2=NO \
-D BUILD_opencv_python3=ON \
-D PYTHON3_EXECUTABLE=/usr/local/bin/python \
-D PYTHON3_INCLUDE_DIR=/usr/local/include/python3.6m/ \
-D PYTHON3_LIBRARY=/usr/local/lib/libpython3.so \
-D PYTHON_LIBRARY=/usr/local/lib/libpython3.so \
-D PYTHON3_PACKAGES_PATH=/usr/local/lib/python3.6/site-packages/ \
-D PYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.6/site-packages/numpy/core/include/ \
.. && \
make VERBOSE=1 && \
make && \
make install && \
rm -rf /opt/opencv-${OPENCV_VERSION}
I can compile my program successfully using:
g++ -I/usr/local/include/opencv4/ -I/usr/local/include/opencv4/ -L/usr/local/lib64/ -g -o binary main.cpp -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_stitching -lopencv_imgcodecs
However I get the following errors from the dynamic loader when I try to run ./binary:
Error loading shared library libopencv_core.so.4.0: No such file or directory (needed by ./binary2)
Error loading shared library libopencv_highgui.so.4.0: No such file or directory (needed by ./binary2)
Error loading shared library libopencv_imgcodecs.so.4.0: No such file or directory (needed by ./binary2)
Error relocating ./binary2: _ZN2cv8fastFreeEPv: symbol not found
Error relocating ./binary2: _ZN2cv6imreadERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEi: symbol not found
Error relocating ./binary2: _ZN2cv6imshowERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_11_InputArrayE: symbol not found
Error relocating ./binary2: _ZN2cv7waitKeyEi: symbol not found
Error relocating ./binary2: _ZN2cv3Mat10deallocateEv: symbol not found
I have tried the solutions in this thread, to no avail. I can see that my .so files are in /usr/local/lib64/ but I cannot seem to link them properly.
UPDATE: ldd ./binary output
/lib/ld-musl-x86_64.so.1 (0x7f13f91db000)
libopencv_core.so.4.0 => /usr/local/lib/libopencv_core.so.4.0 (0x7f13f8d33000)
libopencv_highgui.so.4.0 => /usr/local/lib/libopencv_highgui.so.4.0 (0x7f13f8d24000)
libopencv_imgcodecs.so.4.0 => /usr/local/lib/libopencv_imgcodecs.so.4.0 (0x7f13f8cdb000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7f13f8b86000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f13f8b72000)
libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f13f91db000)
libtbb.so.2 => /usr/lib/libtbb.so.2 (0x7f13f893d000)
libz.so.1 => /lib/libz.so.1 (0x7f13f8923000)
libopenblas.so.3 => /usr/lib/libopenblas.so.3 (0x7f13f6da6000)
libopencv_videoio.so.4.0 => /usr/local/lib64/libopencv_videoio.so.4.0 (0x7f13f6d70000)
libopencv_imgproc.so.4.0 => /usr/local/lib64/libopencv_imgproc.so.4.0 (0x7f13f689c000)
libjpeg.so.8 => /usr/lib/libjpeg.so.8 (0x7f13f683b000)
libwebp.so.7 => /usr/lib/libwebp.so.7 (0x7f13f67e5000)
libpng16.so.16 => /usr/lib/libpng16.so.16 (0x7f13f67b5000)
libtiff.so.5 => /usr/lib/libtiff.so.5 (0x7f13f674b000)
libjasper.so.4 => /usr/lib/libjasper.so.4 (0x7f13f66da000)
libgfortran.so.5 => /usr/lib/libgfortran.so.5 (0x7f13f6547000)
libquadmath.so.0 => /usr/lib/../lib/libquadmath.so.0 (0x7f13f6514000)
Maybe you need RUN ldconfig or ENV LD_LIBRARY_PATH="/usr/local/lib64:${LD_LIBRARY_PATH}" at the end of your Dockerfile.
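A rough sketch of both options from inside the (Alpine/musl) container, assuming the OpenCV libraries really are under /usr/local/lib64 as ldd shows:
# Either point the dynamic loader at lib64 for this run...
export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
./binary
# ...or add the directory to musl's loader search path permanently:
echo /usr/local/lib64 >> /etc/ld-musl-x86_64.path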
After trying so many times and rebuilding for so many hours, I just copied all the files from /usr/local/lib64/ to /usr/local/lib/. It may not be elegant, but it works.
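Roughly, that workaround (with the paths from the question):
# Copy the OpenCV libraries into a directory the loader already searches.
cp -a /usr/local/lib64/. /usr/local/lib/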
I've recently bumped into caffe_rtpose and I tried to compile and run the example. Unfortunately I'm not super experienced with C++, so I ran into a lot of issues compiling and linking.
I've tried tweaking the Makefile config (modified from the existing Ubuntu config). I'm using a system running OSX 10.11.5 with an nVidia GeForce 750M, and I have installed CUDA 7.5 and libcudnn:
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1
# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3
# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=sm_50 \
-gencode arch=compute_50,code=compute_50 \
-gencode arch=compute_52,code=sm_52 \
# -gencode arch=compute_60,code=sm_60 \
# -gencode arch=compute_61,code=sm_61
# Deprecated
#CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
# -gencode arch=compute_20,code=sm_21 \
# -gencode arch=compute_30,code=sm_30 \
# -gencode arch=compute_35,code=sm_35 \
# -gencode arch=compute_50,code=sm_50 \
# -gencode arch=compute_50,code=compute_50
# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
BLAS_INCLUDE := /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/Headers/
BLAS_LIB := /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A
# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib
# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib
# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
# LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
# Q ?= #
And this is the modified version of the install_caffe_and_cpm_osx.sh script:
#!/bin/bash
echo "------------------------- INSTALLING CAFFE AND CPM -------------------------"
echo "NOTE: This script assumes that CUDA and cuDNN are already installed on your machine. Otherwise, it might fail."
function exitIfError {
if [[ $? -ne 0 ]] ; then
echo ""
echo "------------------------- -------------------------"
echo "Errors detected. Exiting script. The software might have not been successfully installed."
echo "------------------------- -------------------------"
exit 1
fi
}
# echo "------------------------- Checking Ubuntu Version -------------------------"
# ubuntu_version="$(lsb_release -r)"
# echo "Ubuntu $ubuntu_version"
# if [[ $ubuntu_version == *"14."* ]]; then
# ubuntu_le_14=true
# elif [[ $ubuntu_version == *"16."* || $ubuntu_version == *"15."* || $ubuntu_version == *"17."* || $ubuntu_version == *"18."* ]]; then
# ubuntu_le_14=false
# else
# echo "Ubuntu release older than version 14. This installation script might fail."
# ubuntu_le_14=true
# fi
# exitIfError
# echo "------------------------- Ubuntu Version Checked -------------------------"
# echo ""
echo "------------------------- Checking Number of Processors -------------------------"
NUM_CORES=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || sysctl -n hw.ncpu)
echo "$NUM_CORES cores"
exitIfError
echo "------------------------- Number of Processors Checked -------------------------"
echo ""
echo "------------------------- Installing some Caffe Dependencies -------------------------"
# Basic
# sudo apt-get --assume-yes update
# sudo apt-get --assume-yes install build-essential
#General dependencies
brew install protobuf leveldb snappy hdf5
# with Python pycaffe needs dependencies built from source - from http://caffe.berkeleyvision.org/install_osx.html
# brew install --build-from-source --with-python -vd protobuf
# brew install --build-from-source -vd boost boost-python
# without Python the usual installation suffices
brew install boost
# sudo apt-get --assume-yes install libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler
# sudo apt-get --assume-yes install --no-install-recommends libboost-all-dev
# Remaining dependencies, 14.04
brew install gflags glog lmdb
# if [[ $ubuntu_le_14 == true ]]; then
# sudo apt-get --assume-yes install libgflags-dev libgoogle-glog-dev liblmdb-dev
# fi
# OpenCV 2.4
# sudo apt-get --assume-yes install libopencv-dev
exitIfError
echo "------------------------- Some Caffe Dependencies Installed -------------------------"
echo ""
echo "------------------------- Compiling Caffe & CPM -------------------------"
cp Makefile.config.OSX.10.11.5.example Makefile.config
make all -j$NUM_CORES
# make test -j$NUM_CORES
# make runtest -j$NUM_CORES
exitIfError
echo "------------------------- Caffe & CPM Compiled -------------------------"
echo ""
# echo "------------------------- Installing CPM -------------------------"
# echo "Compiled"
# exitIfError
# echo "------------------------- CPM Installed -------------------------"
# echo ""
echo "------------------------- Downloading CPM Models -------------------------"
models_folder="./model/"
# COCO
coco_folder="$models_folder"coco/""
coco_model="$coco_folder"pose_iter_440000.caffemodel""
if [ ! -f $coco_model ]; then
wget http://posefs1.perception.cs.cmu.edu/Users/tsimon/Projects/coco/data/models/coco/pose_iter_440000.caffemodel -P $coco_folder
fi
exitIfError
# MPI
mpi_folder="$models_folder"mpi/""
mpi_model="$mpi_folder"pose_iter_160000.caffemodel""
if [ ! -f $mpi_model ]; then
wget http://posefs1.perception.cs.cmu.edu/Users/tsimon/Projects/coco/data/models/mpi/pose_iter_160000.caffemodel -P $mpi_folder
fi
exitIfError
echo "Models downloaded"
echo "------------------------- CPM Models Downloaded -------------------------"
echo ""
echo "------------------------- CAFFE AND CPM INSTALLED -------------------------"
echo ""
But I get this error:
examples/rtpose/rtpose.cpp:1088:22: error: variable length array of non-POD element type 'Frame'
Frame frame_batch[BATCH_SIZE];
I've tried swapping the array for a vector:
std::vector<Frame> frame_batch;
std::cout << "allocating " << BATCH_SIZE << " frames" << std::endl;
frame_batch.reserve(BATCH_SIZE);
That seems to take care of that compile error, but now I get a linker error:
ld: library not found for -lgomp
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I've searched for libgomp and found a few related posts on Caffe and OpenMP mentioning issues with clang on OSX and OpenMP.
What I tried:
Following this post I've installed gcc 4.9 with homebrew (as the homebrew formula for gcc 5 installs 5.9 which might be too high?)
I've set -fopenmp=libomp based on Andrey Bokhanko's answer; this didn't work for me: g++-4.9: error: unrecognized command line option '-fopenmp=libomp'
I could download and build Caffe separately using the official instructions, but I can't seem to figure out how to compile this awesome looking demo.
Unfortunately I'm not experienced with C++ and OpenMP, so I could really use your suggestions here. Thank you.
Update: I've tried Mark Setchell's helpful suggestion of installing llvm via Homebrew. I've updated the Makefile config to use
CUSTOM_CXX := /usr/local/opt/llvm/bin/clang++
but CUDA doesn't like it:
nvcc fatal : The version ('30801') of the host compiler ('clang') is not supported
I've tried compiling with CPU_ONLY but I still get CUDA errors:
examples/rtpose/rtpose.cpp:235:5: error: use of undeclared identifier 'cudaMalloc'
cudaMalloc(&net_copies[device_id].canvas, DISPLAY_RESOLUTION_WIDTH * DISPLAY_RESOLUTION_HEIGHT * 3 * sizeof(float));
^
examples/rtpose/rtpose.cpp:236:5: error: use of undeclared identifier 'cudaMalloc'
cudaMalloc(&net_copies[device_id].joints, MAX_NUM_PARTS*3*MAX_PEOPLE * sizeof(float) );
^
examples/rtpose/rtpose.cpp:1130:146: error: use of undeclared identifier 'cudaMemcpyHostToDevice'
cudaMemcpy(net_copies[tid].canvas, frame.data_for_mat, DISPLAY_RESOLUTION_WIDTH * DISPLAY_RESOLUTION_HEIGHT * 3 * sizeof(float), cudaMemcpyHostToDevice);
^
examples/rtpose/rtpose.cpp:1136:108: error: use of undeclared identifier 'cudaMemcpyHostToDevice'
cudaMemcpy(pointer + 0 * offset, frame_batch[0].data, BATCH_SIZE * offset * sizeof(float), cudaMemcpyHostToDevice);
^
examples/rtpose/rtpose.cpp:1178:13: error: use of undeclared identifier 'cudaMemcpyHostToDevice'
cudaMemcpyHostToDevice);
^
examples/rtpose/rtpose.cpp:1192:155: error: use of undeclared identifier 'cudaMemcpyDeviceToHost'
cudaMemcpy(frame_batch[n].data_for_mat, net_copies[tid].canvas, DISPLAY_RESOLUTION_HEIGHT * DISPLAY_RESOLUTION_WIDTH * 3 * sizeof(float), cudaMemcpyDeviceToHost);
^
examples/rtpose/rtpose.cpp:1202:155: error: use of undeclared identifier 'cudaMemcpyDeviceToHost'
cudaMemcpy(frame_batch[n].data_for_mat, net_copies[tid].canvas, DISPLAY_RESOLUTION_HEIGHT * DISPLAY_RESOLUTION_WIDTH * 3 * sizeof(float), cudaMemcpyDeviceToHost);
I'm no expert, but having a quick scan through the code, I don't see how the CPU_ONLY version will work with the cuda calls.
Having another look at the Caffe OSX installation guide, I may try the route described as "not for the faint of heart".
I have finally managed to compile the rtpose example.
Here's what I did:
Swapped the Frame array for a vector in examples/rtpose/rtpose.cpp, as mentioned above:
std::vector<Frame> frame_batch;
std::cout << "allocating " << BATCH_SIZE << " frames" << std::endl;
frame_batch.reserve(BATCH_SIZE);
Used the default clang++ compiler, after failed attempts at using g++-4.9 and a Homebrew-installed LLVM clang++, but removed the -fopenmp flags and the -pthread linker flag (not the compiler flag), based on this answer
After the compile finished, I tried to run it, but got a libjpeg related error:
dyld: Symbol not found: __cg_jpeg_resync_to_restart
Referenced from: /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO
Expected in: /usr/local/lib/libJPEG.dylib
in /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO
Trace/BPT trap: 5
The workaround was mdemirst's answer: I symlinked libjpeg/libpng/libtiff/libgif from ImageIO.framework, making a backup of the old symbolic links just in case.
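Roughly what that symlinking looks like for libjpeg, assuming the system copies live under ImageIO.framework's Resources directory on this OSX version (adjust paths as needed, and keep the backups as noted):
# Replace /usr/local/lib/libJPEG.dylib with a link to the ImageIO copy.
cd /usr/local/lib
mv libJPEG.dylib libJPEG.dylib.bak
ln -s /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib libJPEG.dylib
# Repeat for libPng.dylib, libTIFF.dylib and libGIF.dylib.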
I've committed the above config/setup script on GitHub.
Now that the example is compiled, I still can't run it, possibly due to not enough GPU memory:
F0331 02:02:16.231935 528384 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
# 0x10c7a89da google::LogMessage::Fail()
# 0x10c7a80d5 google::LogMessage::SendToLog()
# 0x10c7a863b google::LogMessage::Flush()
# 0x10c7aba17 google::LogMessageFatal::~LogMessageFatal()
# 0x10c7a8cc7 google::LogMessageFatal::~LogMessageFatal()
# 0x1079481db caffe::SyncedMemory::to_gpu()
# 0x107947c9e caffe::SyncedMemory::mutable_gpu_data()
# 0x1079affba caffe::CuDNNConvolutionLayer<>::Forward_gpu()
# 0x107861331 caffe::Layer<>::Forward()
# 0x107918016 caffe::Net<>::ForwardFromTo()
# 0x1077a86f1 warmup()
# 0x1077b211d processFrame()
# 0x7fff8b11899d _pthread_body
# 0x7fff8b11891a _pthread_start
# 0x7fff8b116351 thread_start
Abort trap: 6
I have tried dialing the settings down as much as possible:
./build/examples/rtpose/rtpose.bin -caffemodel ./model/coco/pose_iter_440000.caffemodel -caffeproto ./model/coco/pose_deploy_linevec.prototxt -camera_resolution "40x30" -camera 0 -resolution "40x30" -start_scale 0.1 -num_scales=0 -no_display true -net_resolution "16x16"
But to no avail. Actually running the example may be another question in itself.