Build PDAL with LAZperf compression

I'm building PDAL this way on Ubuntu 18:
cd /home/magno/install && \
git clone https://github.com/hobu/laz-perf.git && \
cd laz-perf && \
mkdir build && \
cd build && \
cmake .. \
-DEMSCRIPTEN=1 \
-DCMAKE_TOOLCHAIN_FILE=/home/magno/install/emsdk/upstream/emscripten/cmake/Modules/Platform/Emscripten.cmake && \
VERBOSE=1 make && \
make install
cd /home/magno/install && \
git clone https://github.com/pgpointcloud/pointcloud && \
cd pointcloud && \
./autogen.sh && \
./configure --with-lazperf=/usr/local/ && \
make && \
make install
cd /home/magno/install && \
git clone https://github.com/PDAL/PDAL.git && \
cd PDAL && \
mkdir build && \
cd build && \
cmake -G Ninja .. && \
ninja && \
ninja install
Running PGUSER=postgres PGPASSWORD=*** PGHOST=localhost PGPORT=5432 ctest confirms that everything passes.
But when I try to read a LAZ file I get this error:
PDAL: readers.las: Can't read compressed file without LASzip or LAZperf decompression library.
This is my pipeline file:
{
  "pipeline":[
    {
      "type":"readers.las",
      "filename":"airport.laz",
      "spatialreference":"EPSG:32616",
      "compression":"lazperf"
    },
    {
      "type":"writers.pgpointcloud",
      "connection":"dbname=mydb host='localhost' user='postgres' password='****'",
      "table":"patchs",
      "compression":"lazperf",
      "srid":"32616",
      "overwrite":"false"
    }
  ]
}
I think LAZperf is OK because pgpointcloud doesn't complain when I run PGUSER=postgres PGPASSWORD=**** PGHOST=localhost make installcheck, and it reports:
# PointCloud is now configured for
# -------------- Compiler Info -------------
# C compiler: gcc -g -O2
# SQL preprocessor: /usr/bin/cpp -traditional-cpp -w -P
# -------------- Dependencies --------------
# PostgreSQL config: /usr/bin/pg_config
# PostgreSQL version: PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) (120)
# Libxml2 config: /usr/bin/xml2-config
# Libxml2 version: 2.9.4
# LazPerf status: /usr/local//include/laz-perf
# CUnit status: enabled
The PDAL tests tell me nothing about compression.
How can I build PDAL, or tell it about my LAZperf installation?
EDIT: pdal info install/PDAL/test/data/las/autzen_trim.las works fine.

God bless the Google!
Found the solution by reading this, this and this.
I just needed to change the PDAL configure step to cmake -G Ninja -DLazperf_DIR=/usr/local/ -DWITH_LAZPERF=ON ..
and voilà:
-- The following OPTIONAL packages have been found:
* Lazperf
* ZSTD
General compression support
* LibXml2
* PkgConfig
* PythonInterp
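For completeness, a minimal sketch of the fixed rebuild and a test run, assuming the same paths as above (pipe.json is just an example name for the pipeline file shown earlier):
cd /home/magno/install/PDAL/build && \
cmake -G Ninja -DLazperf_DIR=/usr/local/ -DWITH_LAZPERF=ON .. && \
ninja && \
ninja install
pdal pipeline pipe.json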


ERROR: No matching distribution found for jax[gpu]>=0.3.4 (from -r vit_jax/requirements.txt (line 8))

I'm trying to build my first Dockerfile for Vision Transformer and ran into:
ERROR: Could not find a version that satisfies the requirement
jax[gpu]>=0.3.4 (from -r vit_jax/requirements.txt (line 8)) (from
versions: 0.0, 0.1, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7,
0.1.8, 0.1.9, 0.1.10, 0.1.11, 0.1.12, 0.1.13, 0.1.14, 0.1.15, 0.1.16, 0.1.18, 0.1.19, 0.1.20, 0.1.21, 0.1.22, 0.1.23, 0.1.24, 0.1.25, 0.1.26, 0.1.27, 0.1.28, 0.1.29, 0.1.30, 0.1.31, 0.1.32, 0.1.33, 0.1.34, 0.1.35, 0.1.36, 0.1.37, 0.1.38, 0.1.39, 0.1.40, 0.1.41, 0.1.42, 0.1.43, 0.1.44, 0.1.45, 0.1.46, 0.1.47, 0.1.48, 0.1.49, 0.1.50, 0.1.51, 0.1.52, 0.1.53, 0.1.54, 0.1.55, 0.1.56, 0.1.57, 0.1.58, 0.1.59, 0.1.60, 0.1.61, 0.1.62, 0.1.63, 0.1.64, 0.1.65, 0.1.66, 0.1.67, 0.1.68, 0.1.69, 0.1.70, 0.1.71, 0.1.72, 0.1.73, 0.1.74, 0.1.75, 0.1.76, 0.1.77, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4, 0.2.5, 0.2.6, 0.2.7, 0.2.8, 0.2.9, 0.2.10, 0.2.11, 0.2.12, 0.2.13, 0.2.14, 0.2.15, 0.2.16, 0.2.17) ERROR: No matching distribution found for jax[gpu]>=0.3.4 (from -r vit_jax/requirements.txt (line 8))
I didn't find anyone else running ViT who hit this problem, so I assume it's my Dockerfile's flaw rather than requirements.txt's. Below is my Dockerfile:
FROM pytorch/pytorch:1.2-cuda10.0-cudnn7-runtime
ENV DEBIAN_FRONTEND=noninteractive
ARG USERNAME=user
WORKDIR /dockertest
ARG WORKDIR=/dockertest
RUN apt-get update && apt-get install -y \
automake autoconf libpng-dev nano python3-pip \
sudo curl zip unzip libtool swig zlib1g-dev pkg-config \
python3-mock libpython3-dev libpython3-all-dev \
g++ gcc cmake make pciutils cpio gosu wget \
libgtk-3-dev libxtst-dev sudo apt-transport-https \
build-essential gnupg git xz-utils vim libgtk2.0-0 libcanberra-gtk-module \
libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
&& sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
&& apt clean \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/google-research/vision_transformer.git \
&& cd vision_transformer \
&& pip3 install pip --upgrade \
&& pip install -r vit_jax/requirements.txt \
&& python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
--config.pretrained_dir='gs://vit_models/imagenet21k' \
&& pip cache purge
RUN echo "root:root" | chpasswd \
&& adduser --disabled-password --gecos "" "${USERNAME}" \
&& echo "${USERNAME}:${USERNAME}" | chpasswd \
&& echo "%${USERNAME} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/${USERNAME} \
&& chmod 0440 /etc/sudoers.d/${USERNAME}
USER ${USERNAME}
RUN sudo chown -R ${USERNAME}:${USERNAME} ${WORKDIR}
WORKDIR ${WORKDIR}
The issue is that you're using Python 3.6 (as provided by the base image in your Dockerfile), which is not supported by JAX version 0.2.18 and newer (see the JAX Changelog).
To fix the issue, you should upgrade Python to version 3.7 or newer. Python 3.6 has reached its end of life and is no longer receiving security updates.
Alternatively, if for some reason you must continue using Python 3.6, you should install jax version 0.2.17 and jaxlib version 0.1.69, which were the last releases compatible with Python 3.6.
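If you go the pinning route, a minimal sketch of the install command; the +cuda110 wheel suffix and the find-links URL are assumptions that depend on your CUDA setup, and for a CPU-only build plain jaxlib==0.1.69 from PyPI is enough:
# Pin the last Python 3.6-compatible JAX releases instead of jax[gpu]>=0.3.4.
# The +cuda110 tag is an assumption -- pick the tag matching your CUDA version,
# or drop it (and the -f line) for a CPU-only install.
pip install "jax==0.2.17" "jaxlib==0.1.69+cuda110" \
    -f https://storage.googleapis.com/jax-releases/jax_releases.html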

bazel test vs direct execution

I'm using GoogleTest with Bazel, and when I run bazel test :tokens_test (rule defined below) the test fails, but when I run the compiled test binary directly I see the expected result. It is failing because the test cannot open the test data file. The directory layout and the BUILD rule for the test look like this:
tokens/
    BUILD
    tokens_test.cpp
    test_data/
        test_input1.txt
cc_test(
    name = "tokens_test",
    srcs = ["tokens_test.cpp"],
    deps = [
        "@gtest//:gtest",
        "@gtest//:gtest_main",
    ],
    data = ["//tokens/test_data:test_input1.txt"],
)
At this point the test is just a wrapper that opens the file and reads in the test data, which is the bit that's failing.
TEST(Tokenizer, OpenFileTest) {
    auto fin = std::ifstream("test_data/test_input1.txt");
    std::cout << fin.get();  // Outputs -1.
}
When I navigate to the bazel-out location and find my way to the tokens runfiles directory, I can see the compiled test executable.
[jibberish]/__main__/tokens>ls
test_data/
test_input1.txt
tokens_test
And when I run the executable:
[jibberish]/__main__/tokens>./tokens_test
...normal test output...
The correct output!
...more test output...
I'm lost as to where to start looking for the problem. I've tried including a BUILD file in the test_data directory with an exports_files rule, using a variety of relative paths in the BUILD files and my source code, and a lot of permutations of those relative paths.
Define your data as
data = ["test_data/test_input1.txt"]
and open the file using the path relative to the runfiles root: ./tokens/test_data/test_input1.txt
To see exactly where the test runs, execute it with bazel test --sandbox_debug -s. The last command printed will look like this one:
SUBCOMMAND: # //:hello_test [action 'Testing //:hello_test', configuration: faff19e6fd939f490ac11578d94024c6b7a032836cde039fd5edd28b838194e8, execution platform: @local_config_platform//:host]
(cd /home/s/.cache/bazel/_bazel_s/fa4c7c7c7db2888182e4f15990b55d58/execroot/com_google_absl_hello_world && \
exec env - \
EXPERIMENTAL_SPLIT_XML_GENERATION=1 \
RUNFILES_DIR=bazel-out/k8-fastbuild/bin/hello_test.runfiles \
RUN_UNDER_RUNFILES=1 \
TEST_BINARY=./hello_test \
TEST_INFRASTRUCTURE_FAILURE_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.infrastructure_failure \
TEST_LOGSPLITTER_OUTPUT_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.raw_splitlogs/test.splitlogs \
TEST_PREMATURE_EXIT_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.exited_prematurely \
TEST_SIZE=medium \
TEST_SRCDIR=bazel-out/k8-fastbuild/bin/hello_test.runfiles \
TZ=UTC \
XML_OUTPUT_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.xml \
external/bazel_tools/tools/test/test-setup.sh ./hello_test)
Then you can paste the whole (cd ...) one-liner to reproduce the exact sandbox environment used during the bazel test run. For example, you can replace the last line like this:
(cd /home/s/.cache/bazel/_bazel_s/fa4c7c7c7db2888182e4f15990b55d58/execroot/com_google_absl_hello_world && \
exec env - \
EXPERIMENTAL_SPLIT_XML_GENERATION=1 \
RUNFILES_DIR=bazel-out/k8-fastbuild/bin/hello_test.runfiles \
RUN_UNDER_RUNFILES=1 \
TEST_BINARY=./hello_test \
TEST_INFRASTRUCTURE_FAILURE_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.infrastructure_failure \
TEST_LOGSPLITTER_OUTPUT_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.raw_splitlogs/test.splitlogs \
TEST_PREMATURE_EXIT_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.exited_prematurely \
TEST_SIZE=medium \
TEST_SRCDIR=bazel-out/k8-fastbuild/bin/hello_test.runfiles \
TZ=UTC \
XML_OUTPUT_FILE=bazel-out/k8-fastbuild/testlogs/hello_test/test.xml \
bash -c 'pwd && ls')
So bash -c 'pwd && ls' will show the current working directory and its contents.
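As a minimal usage sketch for the question's target (the flags are the ones described above; --test_output=all just streams the test's output so you can see it inline):
bazel test //tokens:tokens_test --sandbox_debug -s --test_output=all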

Does anyone know where I can find the Dockerfile for aws/codebuild/nodejs:10.1.0?

I am looking for the Dockerfile for aws/codebuild/nodejs:10.1.0, which used to be available on GitHub but no longer seems to be.
Here you can find aws/codebuild/nodejs:10.1.0.
It is part of the "Release updated standard 2.0 image" release by minethai (Jun 26); you can download the zip from there.
Changes:
Updated minor versions for node8, node10, powershell, and gradle 5
# Copyright 2017-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License.
# A copy of the License is located at
#
# http://aws.amazon.com/asl/
#
# or in the "license" file accompanying this file.
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied.
# See the License for the specific language governing permissions and limitations under the License.
#
FROM ubuntu:14.04.5
ENV DOCKER_BUCKET="download.docker.com" \
DOCKER_VERSION="17.09.0-ce" \
DOCKER_CHANNEL="stable" \
DOCKER_SHA256="a9e90a73c3cdfbf238f148e1ec0eaff5eb181f92f35bdd938fd7dab18e1c4647" \
DIND_COMMIT="3b5fac462d21ca164b3778647420016315289034" \
DOCKER_COMPOSE_VERSION="1.21.2" \
GITVERSION_VERSION="3.6.5"
# Install git, SSH, and other utilities
RUN set -ex \
&& echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/99use-gzip-compression \
&& apt-get update \
&& apt install -y apt-transport-https \
&& apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF \
&& echo "deb https://download.mono-project.com/repo/ubuntu stable-trusty main" | tee /etc/apt/sources.list.d/mono-official-stable.list \
&& apt-get update \
&& apt-get install software-properties-common -y --no-install-recommends \
&& apt-add-repository ppa:git-core/ppa \
&& apt-get update \
&& apt-get install git=1:2.* -y --no-install-recommends \
&& git version \
&& apt-get install -y --no-install-recommends openssh-client=1:6.6* \
&& mkdir ~/.ssh \
&& touch ~/.ssh/known_hosts \
&& ssh-keyscan -t rsa,dsa -H github.com >> ~/.ssh/known_hosts \
&& ssh-keyscan -t rsa,dsa -H bitbucket.org >> ~/.ssh/known_hosts \
&& chmod 600 ~/.ssh/known_hosts \
&& apt-get install -y --no-install-recommends \
wget=1.15-* python=2.7.* python2.7-dev=2.7.* fakeroot=1.20-* ca-certificates \
tar=1.27.* gzip=1.6-* zip=3.0-* autoconf=2.69-* automake=1:1.14.* \
bzip2=1.0.* file=1:5.14-* g++=4:4.8.* gcc=4:4.8.* imagemagick=8:6.7.* \
libbz2-dev=1.0.* libc6-dev=2.19-* libcurl4-openssl-dev=7.35.* libdb-dev=1:5.3.* \
libevent-dev=2.0.* libffi-dev=3.1~* libgeoip-dev=1.6.* libglib2.0-dev=2.40.* \
libjpeg-dev=8c-* libkrb5-dev=1.12+* liblzma-dev=5.1.* \
libmagickcore-dev=8:6.7.* libmagickwand-dev=8:6.7.* libmysqlclient-dev=5.5.* \
libncurses5-dev=5.9+* libpng12-dev=1.2.* libpq-dev=9.3.* libreadline-dev=6.3-* \
libsqlite3-dev=3.8.* libssl-dev=1.0.* libtool=2.4.* libwebp-dev=0.4.* \
libxml2-dev=2.9.* libxslt1-dev=1.1.* libyaml-dev=0.1.* make=3.81-* \
patch=2.7.* xz-utils=5.1.* zlib1g-dev=1:1.2.* unzip=6.0-* curl=7.35.* \
e2fsprogs=1.42.* iptables=1.4.* xfsprogs=3.1.* xz-utils=5.1.* \
mono-devel less=458-* groff=1.22.* liberror-perl=0.17-* \
asciidoc=8.6.* build-essential=11.* bzr=2.6.* cvs=2:1.12.* cvsps=2.1-* docbook-xml=4.5-* docbook-xsl=1.78.* dpkg-dev=1.17.* \
libdbd-sqlite3-perl=1.40-* libdbi-perl=1.630-* libdpkg-perl=1.17.* libhttp-date-perl=6.02-* \
libio-pty-perl=1:1.08-* libserf-1-1=1.3.* libsvn-perl=1.8.* libsvn1=1.8.* libtcl8.6=8.6.* libtimedate-perl=2.3000-* \
libunistring0=0.9.* libxml2-utils=2.9.* libyaml-perl=0.84-* python-bzrlib=2.6.* python-configobj=4.7.* \
sgml-base=1.26+* sgml-data=2.0.* subversion=1.8.* tcl=8.6.* tcl8.6=8.6.* xml-core=0.13+* xmlto=0.0.* xsltproc=1.1.* \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
# Download and set up GitVersion
RUN set -ex \
&& wget "https://github.com/GitTools/GitVersion/releases/download/v${GITVERSION_VERSION}/GitVersion_${GITVERSION_VERSION}.zip" -O /tmp/GitVersion_${GITVERSION_VERSION}.zip \
&& mkdir -p /usr/local/GitVersion_${GITVERSION_VERSION} \
&& unzip /tmp/GitVersion_${GITVERSION_VERSION}.zip -d /usr/local/GitVersion_${GITVERSION_VERSION} \
&& rm /tmp/GitVersion_${GITVERSION_VERSION}.zip \
&& echo "mono /usr/local/GitVersion_${GITVERSION_VERSION}/GitVersion.exe \$#" >> /usr/local/bin/gitversion \
&& chmod +x /usr/local/bin/gitversion
# Install Docker
RUN set -ex \
&& curl -fSL "https://${DOCKER_BUCKET}/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz" -o docker.tgz \
&& echo "${DOCKER_SHA256} *docker.tgz" | sha256sum -c - \
&& tar --extract --file docker.tgz --strip-components 1 --directory /usr/local/bin/ \
&& rm docker.tgz \
&& docker -v \
# set up subuid/subgid so that "--userns-remap=default" works out-of-the-box
&& addgroup dockremap \
&& useradd -g dockremap dockremap \
&& echo 'dockremap:165536:65536' >> /etc/subuid \
&& echo 'dockremap:165536:65536' >> /etc/subgid \
&& wget "https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind" -O /usr/local/bin/dind \
&& curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-Linux-x86_64 > /usr/local/bin/docker-compose \
&& chmod +x /usr/local/bin/dind /usr/local/bin/docker-compose \
# Ensure docker-compose works
&& docker-compose version
# Install dependencies by all python images equivalent to buildpack-deps:jessie
# on the public repos.
RUN set -ex \
&& wget "https://bootstrap.pypa.io/2.6/get-pip.py" -O /tmp/get-pip.py \
&& python /tmp/get-pip.py \
&& pip install awscli==1.* \
&& rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/*
VOLUME /var/lib/docker
COPY dockerd-entrypoint.sh /usr/local/bin/
ENV NODE_VERSION="10.1.0"
# gpg keys listed at https://github.com/nodejs/node#release-team
RUN set -ex \
&& for key in \
94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
B9AE9905FFD7803F25714661B63B535A4C206CA9 \
77984A986EBC2AA786BC0F66B01FBB92821C587A \
56730D5401028683275BD23C23EFEFE93C4CFFFE \
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
FD3A5288F042B6850C66B31F09FE44734EB7990E \
8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
9554F04D7259F04124DE6B476D5A82AC7E37093B \
93C7E9E91B49E432C2F75674B0A78B0A6C481CF6 \
114F43EE0176B71C7BC219DD50A3051F888C628D \
7937DFD2AB06298B2293C3187D33FF9D0246406D \
; do \
gpg --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || \
gpg --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || \
gpg --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; \
done
RUN set -ex \
&& wget "https://nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz" -O node-v$NODE_VERSION-linux-x64.tar.gz \
&& wget "https://nodejs.org/download/release/v$NODE_VERSION/SHASUMS256.txt.asc" -O SHASUMS256.txt.asc \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-x64.tar.gz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xzf "node-v$NODE_VERSION-linux-x64.tar.gz" -C /usr/local --strip-components=1 \
&& rm "node-v$NODE_VERSION-linux-x64.tar.gz" SHASUMS256.txt.asc SHASUMS256.txt \
&& ln -s /usr/local/bin/node /usr/local/bin/nodejs \
&& rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN npm set unsafe-perm true
CMD [ "node" ]

tesseract command not working and giving file error

I have installed tesseract version 4.0 on Ubuntu.
I am able to perform all the usual tesseract actions from the CLI, like simple OCR text generation.
I want to train the LSTM.
I read this article and tried to run the following command directly in a terminal after building and installing Tesseract from source.
mkdir -p ~/tesstutorial/engoutput
training/lstmtraining --debug_interval 100 \
--traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
--net_spec '[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c111]' \
--model_output ~/tesstutorial/engoutput/base --learning_rate 20e-4 \
--train_listfile ~/tesstutorial/engtrain/eng.training_files.txt \
--eval_listfile ~/tesstutorial/engeval/eng.training_files.txt \
--max_iterations 5000 &>~/tesstutorial/engoutput/basetrain.log
Although it created the engoutput directory (my current path pointed to the src directory of tesseract), I get the following error:
bash: training/lstmtraining: No such file or directory
Fixed by running the following commands.
Create training data first:
cd ~/tesseract-ocr/src
training/tesstrain.sh \
--fonts_dir /usr/share/fonts/ \
--lang eng \
--linedata_only \
--noextract_font_properties \
--exposures "0" \
--langdata_dir /home/shan/langdata_lstm \
--output_dir /home/shan/tesstutorial/engtrain \
--tessdata_dir /home/shan/tesseract-ocr/tessdata \
--fontlist "Arial"
sudo chmod -R 777 /home/shan/tesstutorial/engtrain
Then the LSTM model:
sudo chmod -R 777 /home/shan/tesstutorial/
cd ~/tesseract-ocr/src/
training/lstmtraining --stop_training \
--continue_from ~/tesstutorial/engoutput/base_checkpoint \
--traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
--model_output ~/tesstutorial/engoutput/eng.traineddata
sudo chmod -R 777 ~/tesstutorial
cd ~/tesseract-ocr/src/
training/lstmtraining --debug_interval 100 \
--traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
--net_spec '[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c111]' \
--model_output ~/tesstutorial/engoutput/base --learning_rate 20e-4 \
--train_listfile ~/tesstutorial/engtrain/eng.training_files.txt \
--max_iterations 5000 &>~/tesstutorial/engoutput/basetrain.log
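If training/lstmtraining is still reported as missing, a likely cause is that the training tools were never built. A minimal sketch, assuming a standard autotools build of tesseract in ~/tesseract-ocr as in the commands above:
cd ~/tesseract-ocr
make training
sudo make training-install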

How to compile caffe_rtpose on OSX?

I've recently spotted caffe_rtpose and tried to compile and run the example. Unfortunately I'm not very experienced with C++, so I ran into a lot of issues compiling and linking.
I've tried tweaking the Makefile config (modified from the existing Ubuntu config). (I'm using a system running OSX 10.11.5 with an nVidia GeForce 750M, and I have CUDA 7.5 and libcudnn installed):
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1
# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3
# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=sm_50 \
-gencode arch=compute_50,code=compute_50 \
-gencode arch=compute_52,code=sm_52 \
# -gencode arch=compute_60,code=sm_60 \
# -gencode arch=compute_61,code=sm_61
# Deprecated
#CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
# -gencode arch=compute_20,code=sm_21 \
# -gencode arch=compute_30,code=sm_30 \
# -gencode arch=compute_35,code=sm_35 \
# -gencode arch=compute_50,code=sm_50 \
# -gencode arch=compute_50,code=compute_50
# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
BLAS_INCLUDE := /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/Headers/
BLAS_LIB := /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A
# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib
# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib
# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
# LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
# Q ?= #
And this is the modified version of the install_caffe_and_cpm_osx.sh script:
#!/bin/bash
echo "------------------------- INSTALLING CAFFE AND CPM -------------------------"
echo "NOTE: This script assumes that CUDA and cuDNN are already installed on your machine. Otherwise, it might fail."
function exitIfError {
if [[ $? -ne 0 ]] ; then
echo ""
echo "------------------------- -------------------------"
echo "Errors detected. Exiting script. The software might have not been successfully installed."
echo "------------------------- -------------------------"
exit 1
fi
}
# echo "------------------------- Checking Ubuntu Version -------------------------"
# ubuntu_version="$(lsb_release -r)"
# echo "Ubuntu $ubuntu_version"
# if [[ $ubuntu_version == *"14."* ]]; then
# ubuntu_le_14=true
# elif [[ $ubuntu_version == *"16."* || $ubuntu_version == *"15."* || $ubuntu_version == *"17."* || $ubuntu_version == *"18."* ]]; then
# ubuntu_le_14=false
# else
# echo "Ubuntu release older than version 14. This installation script might fail."
# ubuntu_le_14=true
# fi
# exitIfError
# echo "------------------------- Ubuntu Version Checked -------------------------"
# echo ""
echo "------------------------- Checking Number of Processors -------------------------"
NUM_CORES=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || sysctl -n hw.ncpu)
echo "$NUM_CORES cores"
exitIfError
echo "------------------------- Number of Processors Checked -------------------------"
echo ""
echo "------------------------- Installing some Caffe Dependencies -------------------------"
# Basic
# sudo apt-get --assume-yes update
# sudo apt-get --assume-yes install build-essential
#General dependencies
brew install protobuf leveldb snappy hdf5
# with Python pycaffe needs dependencies built from source - from http://caffe.berkeleyvision.org/install_osx.html
# brew install --build-from-source --with-python -vd protobuf
# brew install --build-from-source -vd boost boost-python
# without Python the usual installation suffices
brew install boost
# sudo apt-get --assume-yes install libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler
# sudo apt-get --assume-yes install --no-install-recommends libboost-all-dev
# Remaining dependencies, 14.04
brew install gflags glog lmdb
# if [[ $ubuntu_le_14 == true ]]; then
# sudo apt-get --assume-yes install libgflags-dev libgoogle-glog-dev liblmdb-dev
# fi
# OpenCV 2.4
# sudo apt-get --assume-yes install libopencv-dev
exitIfError
echo "------------------------- Some Caffe Dependencies Installed -------------------------"
echo ""
echo "------------------------- Compiling Caffe & CPM -------------------------"
cp Makefile.config.OSX.10.11.5.example Makefile.config
make all -j$NUM_CORES
# make test -j$NUM_CORES
# make runtest -j$NUM_CORES
exitIfError
echo "------------------------- Caffe & CPM Compiled -------------------------"
echo ""
# echo "------------------------- Installing CPM -------------------------"
# echo "Compiled"
# exitIfError
# echo "------------------------- CPM Installed -------------------------"
# echo ""
echo "------------------------- Downloading CPM Models -------------------------"
models_folder="./model/"
# COCO
coco_folder="$models_folder"coco/""
coco_model="$coco_folder"pose_iter_440000.caffemodel""
if [ ! -f $coco_model ]; then
wget http://posefs1.perception.cs.cmu.edu/Users/tsimon/Projects/coco/data/models/coco/pose_iter_440000.caffemodel -P $coco_folder
fi
exitIfError
# MPI
mpi_folder="$models_folder"mpi/""
mpi_model="$mpi_folder"pose_iter_160000.caffemodel""
if [ ! -f $mpi_model ]; then
wget http://posefs1.perception.cs.cmu.edu/Users/tsimon/Projects/coco/data/models/mpi/pose_iter_160000.caffemodel -P $mpi_folder
fi
exitIfError
echo "Models downloaded"
echo "------------------------- CPM Models Downloaded -------------------------"
echo ""
echo "------------------------- CAFFE AND CPM INSTALLED -------------------------"
echo ""
But I get this error:
examples/rtpose/rtpose.cpp:1088:22: error: variable length array of non-POD element type 'Frame'
Frame frame_batch[BATCH_SIZE];
I've tried swapping the array for a vector:
std::vector<Frame> frame_batch;
std::cout << "allocating " << BATCH_SIZE << " frames" << std::endl;
frame_batch.reserve(BATCH_SIZE);
That seems to take care of that compile error, but now I get a linker error:
ld: library not found for -lgomp
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I've searched for libgomp and found a few related posts on caffe and OpenMP mentioning issues with clang on OSX and OpenMP.
What I tried:
Following this post I've installed gcc 4.9 with homebrew (as the homebrew formula for gcc 5 installs 5.9 which might be too high?)
I've set -fopenmp=libomp based on Andrey Bokhanko's answer: this didn't work for me (g++-4.9: error: unrecognized command line option '-fopenmp=libomp')
I could download and build Caffe separately using the official instructions, but I can't seem to figure out how to compile this awesome-looking demo.
Unfortunately I'm not experienced with C++ and OpenMP, so I could really use your suggestions here. Thank you.
Update: I've tried Mark Setchell's helpful suggestion of installing LLVM (and its clang) via Homebrew. I've updated the Makefile config to use
CUSTOM_CXX := /usr/local/opt/llvm/bin/clang++
but CUDA doesn't like it:
nvcc fatal : The version ('30801') of the host compiler ('clang') is not supported
I've tried compiling with CPU_ONLY but I still get CUDA errors:
examples/rtpose/rtpose.cpp:235:5: error: use of undeclared identifier 'cudaMalloc'
cudaMalloc(&net_copies[device_id].canvas, DISPLAY_RESOLUTION_WIDTH * DISPLAY_RESOLUTION_HEIGHT * 3 * sizeof(float));
^
examples/rtpose/rtpose.cpp:236:5: error: use of undeclared identifier 'cudaMalloc'
cudaMalloc(&net_copies[device_id].joints, MAX_NUM_PARTS*3*MAX_PEOPLE * sizeof(float) );
^
examples/rtpose/rtpose.cpp:1130:146: error: use of undeclared identifier 'cudaMemcpyHostToDevice'
cudaMemcpy(net_copies[tid].canvas, frame.data_for_mat, DISPLAY_RESOLUTION_WIDTH * DISPLAY_RESOLUTION_HEIGHT * 3 * sizeof(float), cudaMemcpyHostToDevice);
^
examples/rtpose/rtpose.cpp:1136:108: error: use of undeclared identifier 'cudaMemcpyHostToDevice'
cudaMemcpy(pointer + 0 * offset, frame_batch[0].data, BATCH_SIZE * offset * sizeof(float), cudaMemcpyHostToDevice);
^
examples/rtpose/rtpose.cpp:1178:13: error: use of undeclared identifier 'cudaMemcpyHostToDevice'
cudaMemcpyHostToDevice);
^
examples/rtpose/rtpose.cpp:1192:155: error: use of undeclared identifier 'cudaMemcpyDeviceToHost'
cudaMemcpy(frame_batch[n].data_for_mat, net_copies[tid].canvas, DISPLAY_RESOLUTION_HEIGHT * DISPLAY_RESOLUTION_WIDTH * 3 * sizeof(float), cudaMemcpyDeviceToHost);
^
examples/rtpose/rtpose.cpp:1202:155: error: use of undeclared identifier 'cudaMemcpyDeviceToHost'
cudaMemcpy(frame_batch[n].data_for_mat, net_copies[tid].canvas, DISPLAY_RESOLUTION_HEIGHT * DISPLAY_RESOLUTION_WIDTH * 3 * sizeof(float), cudaMemcpyDeviceToHost);
I'm no expert, but having a quick scan through the code, I don't see how the CPU_ONLY version will work with the CUDA calls.
Having another look at the Caffe OSX installation guide, I may try the route described as "not for the faint of heart".
I have finally managed to compile the rtpose example.
Here's what I did:
Swapped the Frame array for a vector in examples/rtpose/rtpose.cpp, as mentioned above:
std::vector<Frame> frame_batch;
std::cout << "allocating " << BATCH_SIZE << " frames" << std::endl;
frame_batch.reserve(BATCH_SIZE);
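// Note: reserve() only allocates capacity without constructing elements; since the code
// later indexes frame_batch (e.g. frame_batch[0].data), resize(BATCH_SIZE) is the safer call.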
Used the default clang++ compiler, after failed attempts at using g++-4.9 and a Homebrew-installed LLVM clang++, but removed the -fopenmp flags and the -pthread linker flag (not the compiler flag), based on this answer.
After the compile finished, I tried to run it, but got a libjpeg related error:
dyld: Symbol not found: __cg_jpeg_resync_to_restart
Referenced from: /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO
Expected in: /usr/local/lib/libJPEG.dylib
in /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO
Trace/BPT trap: 5
The workaround was mdemirst's answer: symlinking libjpeg/libpng/libtiff/libgif from ImageIO.framework. I made a backup of the old symbolic links first, just in case.
I've committed the above config/setup script on GitHub.
Now that the example is compiled, I still can't run it, possibly due to not enough GPU memory:
F0331 02:02:16.231935 528384 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
# 0x10c7a89da google::LogMessage::Fail()
# 0x10c7a80d5 google::LogMessage::SendToLog()
# 0x10c7a863b google::LogMessage::Flush()
# 0x10c7aba17 google::LogMessageFatal::~LogMessageFatal()
# 0x10c7a8cc7 google::LogMessageFatal::~LogMessageFatal()
# 0x1079481db caffe::SyncedMemory::to_gpu()
# 0x107947c9e caffe::SyncedMemory::mutable_gpu_data()
# 0x1079affba caffe::CuDNNConvolutionLayer<>::Forward_gpu()
# 0x107861331 caffe::Layer<>::Forward()
# 0x107918016 caffe::Net<>::ForwardFromTo()
# 0x1077a86f1 warmup()
# 0x1077b211d processFrame()
# 0x7fff8b11899d _pthread_body
# 0x7fff8b11891a _pthread_start
# 0x7fff8b116351 thread_start
Abort trap: 6
I have tried dialing the settings down as much as possible:
./build/examples/rtpose/rtpose.bin -caffemodel ./model/coco/pose_iter_440000.caffemodel -caffeproto ./model/coco/pose_deploy_linevec.prototxt -camera_resolution "40x30" -camera 0 -resolution "40x30" -start_scale 0.1 -num_scales=0 -no_display true -net_resolution "16x16"
But to no avail. Actually running the example may be another question in itself.