Dockerfile with condition on operating system - dockerfile

How can I query the operating system in a Dockerfile?
I would like to use a different download path depending on the result.
# ioncube loader
RUN curl -fSL 'http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz' -o ioncube.tar.gz \
&& mkdir -p ioncube \
&& tar -xf ioncube.tar.gz -C ioncube --strip-components=1 \
&& rm ioncube.tar.gz \
&& mv ioncube/ioncube_loader_lin_7.4.so /var/www/ioncube_loader_lin_7.4.so \
&& rm -r ioncube
I'm using the latest Docker version with ddev.
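One common approach (a sketch, not taken from the original post) is to branch on the CPU architecture inside a single RUN instruction, for example via uname -m. The aarch64 download URL below is an assumption and would need to be checked against ioncube's download page:
# Sketch: pick the ioncube download URL per architecture inside one RUN step.
RUN set -eux; \
    arch="$(uname -m)"; \
    case "$arch" in \
        x86_64)  url='http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz' ;; \
        aarch64) url='http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_aarch64.tar.gz' ;; \
        *) echo "unsupported architecture: $arch" >&2; exit 1 ;; \
    esac; \
    curl -fSL "$url" -o ioncube.tar.gz
With BuildKit, the TARGETARCH build argument can replace the uname -m call, but the idea is the same: one RUN, one shell case statement.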

Related

/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)

I'm receiving the error in the title and I'm not sure why. I'm trying to run a Docker container in Unraid. The Dockerfile code is below if anyone would like to critique it.
ENV DEBIAN_FRONTEND=noninteractive \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8 \
LC_ALL=en_US.UTF-8 \
TERM=xterm \
TZ=:/etc/localtime \
PATH=$PATH:/usr/local/go/bin \
GOBIN=/go/bin \
APP=/go/src/smugmug-backup
RUN sed -e "/deb-src/d" -i /etc/apt/sources.list \
&& apt-get update \
&& apt-get install --no-install-recommends --yes \
ca-certificates \
&& apt-get clean \
&& rm -rf /.root/cache \
&& rm -rf /var/lib/apt/lists/*
$ sudo apt install locales
$ sudo locale-gen en_US.UTF-8
$ sudo dpkg-reconfigure locales
In the last step you will see a text-based UI; select en_US.UTF-8 by moving with the up and down arrows and choosing it with the spacebar, or by typing its id, which is 159.
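Inside a Dockerfile, where the dpkg-reconfigure UI is not available, a non-interactive equivalent might look roughly like this (a sketch, assuming a Debian/Ubuntu base image):
# Sketch: generate en_US.UTF-8 non-interactively during the image build.
RUN apt-get update && \
    apt-get install -y --no-install-recommends locales && \
    sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
    locale-gen en_US.UTF-8 && \
    update-locale LANG=en_US.UTF-8
ENV LANG=en_US.UTF-8 \
    LC_ALL=en_US.UTF-8
Setting LC_ALL via ENV only helps once the locale has actually been generated; without the locale-gen step the warning from the title remains.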

Why does my aws-cli build work on the intermediate container during build time, but not in the final container?

I've been trying to put aws-cli on an Alpine-based Docker image I have. I found someone's tips for including the appropriate libraries for glibc here, and I was able to make things run smoothly in my docker build. If I call the aws executable in the build process from the intermediate container it's being built on, it works.
After COPYing the aws binary directory to the destination, the files are all transferred, but the aws executable no longer works. I try to run it from either the CMD or docker exec and I just get an error that the file doesn't exist:
Does anyone have any idea what's going on?
This is the docker repo I started with: https://hub.docker.com/r/alfg/nginx-rtmp/. I'm just pasting the aws cli build code below after the ffmpeg FROM block, and adding the COPY --from=2 line further down in this question.
Here is the build section for the aws cli:
FROM alpine:3.11
ENV GLIBC_VER=2.31-r0
# install glibc compatibility for alpine
RUN apk --no-cache add \
binutils \
curl \
&& curl -sL https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub \
&& curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-${GLIBC_VER}.apk \
&& curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk \
&& apk add --no-cache \
glibc-${GLIBC_VER}.apk \
glibc-bin-${GLIBC_VER}.apk \
&& curl -sL https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip \
&& unzip awscliv2.zip \
&& aws/install \
&& rm -rf \
awscliv2.zip \
aws \
/usr/local/aws-cli/v2/*/dist/aws_completer \
/usr/local/aws-cli/v2/*/dist/awscli/data/ac.index \
/usr/local/aws-cli/v2/*/dist/awscli/examples \
&& apk --no-cache del \
binutils \
curl \
&& rm glibc-${GLIBC_VER}.apk \
&& rm glibc-bin-${GLIBC_VER}.apk \
&& rm -rf /var/cache/apk/*
And here are the final steps of my Dockerfile which copy the files and update PATH:
COPY --from=0 /usr/local/nginx /usr/local/nginx
COPY --from=0 /etc/nginx /etc/nginx
COPY --from=1 /usr/local /usr/local
COPY --from=1 /usr/lib/libfdk-aac.so.2 /usr/lib/libfdk-aac.so.2
COPY --from=2 /usr/local/aws-cli /usr/local/aws-cli
# Add NGINX path, AWS-CLI path, config and static files.
ENV PATH "${PATH}:/usr/local/nginx/sbin:/usr/local/aws-cli/v2/current/bin"
ADD nginx.conf /etc/nginx/nginx.conf.template
RUN mkdir -p /opt/data && mkdir /www
ADD static /www/static
EXPOSE 1935
EXPOSE 80
CMD envsubst "$(env | sed -e 's/=.*//' -e 's/^/\$/g')" < \
/etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && \
nginx
The docker build runs successfully and I can launch containers from the image. I've also put things like aws s3 ls s3://my-public-bucket in the Dockerfile at the end of the aws-cli block, and during the build they run successfully and can pull from S3.
The only thing I can see wrong in the build is the message below - it happens during the glibc/aws build, but the build still completes successfully and the binary is functional afterwards:
/usr/glibc-compat/sbin/ldconfig: /usr/glibc-compat/lib/ld-linux-x86-64.so.2 is not a symbolic link
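A hedged guess at what is happening (not a confirmed diagnosis): "file doesn't exist" for a binary that is clearly present on Alpine usually means the binary's ELF interpreter is missing. The aws v2 executable needs the glibc compatibility layer, which stage 2 installed under /usr/glibc-compat but which was never copied into the final image. One possible sketch is to bring that layer along with the aws-cli directory:
# Sketch (assumption): copy the glibc compatibility layer from the aws-cli stage
# so the dynamic loader referenced by the aws binary exists in the final image.
COPY --from=2 /usr/glibc-compat /usr/glibc-compat
COPY --from=2 /lib64 /lib64
# (the existing COPY --from=2 /usr/local/aws-cli line stays as it is)
An alternative along the same lines is to repeat the glibc-${GLIBC_VER}.apk / glibc-bin-${GLIBC_VER}.apk install in the final stage instead of copying the files across.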

compile tensorflow with bazel error: tensorflow_cc library not found

I'm compiling TensorFlow with Bazel for an audio neural network source-separation model in C++ (spleeterpp). I wrote this Dockerfile:
FROM continuumio/anaconda3
RUN apt-get update && apt-get install -y \
wget \
unzip \
rsync \
gcc \
build-essential \
software-properties-common \
cmake
# spleeterpp source
WORKDIR spleeterpp
COPY . .
# bazel install
RUN wget https://github.com/bazelbuild/bazel/releases/download/0.25.2/bazel-0.25.2-installer-linux-x86_64.sh
RUN bash bazel-0.25.2-installer-linux-x86_64.sh
# tensorflow bazel build
RUN git clone https://github.com/tensorflow/tensorflow.git && \
cd tensorflow && \
git checkout v1.14.0 && \
rm BUILD
RUN cd tensorflow && \
python tensorflow/tools/pip_package/setup.py install && \
mv build build-bu && \
git checkout BUILD && \
./configure
# build tensorflow to bazel-bin/tensorflow/libtensorflow_cc.so
RUN cd tensorflow && \
bazel build --config=monolithic --jobs=6 --verbose_failures //tensorflow:libtensorflow_cc.so
# tensorflow install
ENV INSTALL_DIR=install
ENV INCLUDE_DIR=$INSTALL_DIR/include
RUN cd tensorflow && \
mkdir -p $INSTALL_DIR/bin && \
cp bazel-bin/tensorflow/libtensorflow_cc.so* $INSTALL_DIR/bin/ && \
mkdir -p $INSTALL_DIR/include && \
rsync -a --prune-empty-dirs --include '*/' --include '*.h' --exclude '*' tensorflow/ $INCLUDE_DIR/tensorflow && \
mkdir -p $INSTALL_DIR/include/third_party/eigen3/unsupported/ && \
cp -r ./bazel-tensorflow/external/eigen_archive/unsupported/Eigen $INSTALL_DIR/include/third_party/eigen3/unsupported/Eigen && \
cp -r ./bazel-tensorflow/external/eigen_archive/Eigen $INSTALL_DIR/include/third_party/eigen3/Eigen
# spleeterpp build
RUN mkdir build && cd build && \
cmake -DTENSORFLOW_CC_INSTALL_DIR=$INSTALL_DIR/bin/ .. && \
cmake --build .
# defaults command
CMD ["bash"]
I get the error
CMake Error at cmake/add_tensorflow.cmake:7 (message):
tensorflow_cc library not found
The root path is the root project folder (where the CMakeLists.txt file is), while the install path for the TensorFlow Bazel build is $INSTALL_DIR, so I expected the library to be in $INSTALL_DIR/bin/, where it was copied earlier. I therefore set cmake -DTENSORFLOW_CC_INSTALL_DIR=$INSTALL_DIR/bin/, but it does not work.
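One possible cause to check (an assumption, not a confirmed diagnosis): INSTALL_DIR=install is a relative path, and every RUN step starts again from WORKDIR /spleeterpp, so the steps that cd tensorflow place the library under /spleeterpp/tensorflow/install/bin, while the cmake step resolves the same relative path against a different directory. A sketch using an absolute install prefix (the /opt/tensorflow-install path is made up for illustration):
# Sketch: use an absolute path so every RUN step and the cmake invocation
# refer to the same directory.
ENV INSTALL_DIR=/opt/tensorflow-install
ENV INCLUDE_DIR=$INSTALL_DIR/include
RUN cd tensorflow && \
    mkdir -p $INSTALL_DIR/bin && \
    cp bazel-bin/tensorflow/libtensorflow_cc.so* $INSTALL_DIR/bin/
RUN mkdir build && cd build && \
    cmake -DTENSORFLOW_CC_INSTALL_DIR=$INSTALL_DIR/bin/ .. && \
    cmake --build .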

Discrepancy between Python in docker-compose and independent container with same image

In my project, I have a dockerized microservice based on ubuntu:trusty which I wanted to update to Python 2.7.13 from the standard apt-get 2.7.6 version. In doing so, I ran into some module import issues. Since then, I've added python2.7/dist-packages, which contains all of the modules I'm concerned with, to the beginning of my PYTHONPATH.
I built my microservice images using docker-compose build, but here's the issue: when I run docker-compose up, this microservice fails on importing all non-standard modules, yet when I create my own container from the same image using docker run -it image_id /bin/bash and then run a Python shell and import any of those modules, everything works perfectly. Even when I run the same Python script, it gets past all of these import statements (but fails for other reasons because it runs in isolation without proper linking).
I've confirmed that Python 2.7.13 is running both under docker-compose up and when I run my own container. I've cleared all of my containers, images, and cache and have rebuilt with no progress. The command being run at the end of the Dockerfile is CMD python /filename/file.py.
Any ideas what could cause such a discrepancy?
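One way to compare the two environments side by side is something like the following (a sketch; image_id stands in for the actual image name):
# Sketch: print which interpreter and module search path each environment actually uses.
docker-compose run document_analyzer python -c "import sys; print(sys.executable); print(sys.path)"
docker run -it image_id python -c "import sys; print(sys.executable); print(sys.path)"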
EDIT:
As requested, here's the Dockerfile. The file structure is simply a project folder with subfolders, each being their own dockerized microservice. The one of concern here is called document_analyzer, and the relevant section of the docker-compose file follows. Examples of the modules that aren't importing properly are PyPDF2, pymongo, and boto3.
FROM ubuntu:trusty
# Built using PyImageSearch guide:
# http://www.pyimagesearch.com/2015/06/22/install-opencv-3-0-and-python-2-7-on-ubuntu/
# Install dependencies
RUN \
apt-get -qq update && apt-get -qq upgrade -y && \
apt-get -qq install -y \
wget \
unzip \
libtbb2 \
libtbb-dev && \
apt-get -qq install -y \
build-essential \
cmake \
git \
pkg-config \
libjpeg8-dev \
libtiff4-dev \
libjasper-dev \
libpng12-dev \
libgtk2.0-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libatlas-base-dev \
gfortran \
libhdf5-dev \
libreadline-gplv2-dev \
libncursesw5-dev \
libssl-dev \
libsqlite3-dev \
tk-dev \
libgdbm-dev \
libc6-dev \
libbz2-dev \
libxml2-dev \
libxslt-dev && \
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz && \
tar -xvf Python-2.7.13.tgz && \
cd Python-2.7.13 && \
./configure && \
make && \
make install && \
apt-get install -y python-dev python-setuptools && \
easy_install pip && \
pip install numpy==1.12.0 && \
apt-get autoclean && apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Download OpenCV 3.2.0 and install
# step 10
RUN \
cd ~ && \
wget https://github.com/Itseez/opencv/archive/3.2.0.zip && \
unzip 3.2.0.zip && \
mv ~/opencv-3.2.0/ ~/opencv/ && \
rm -rf ~/3.2.0.zip && \
cd ~ && \
wget https://github.com/opencv/opencv_contrib/archive/3.2.0.zip -O 3.2.0-contrib.zip && \
unzip 3.2.0-contrib.zip && \
mv opencv_contrib-3.2.0 opencv_contrib && \
rm -rf ~/3.2.0-contrib.zip && \
cd /root/opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON .. && \
cd ~/opencv/build && \
make -j $(nproc) && \
make install && \
ldconfig && \
# clean opencv repos
rm -rf ~/opencv/build && \
rm -rf ~/opencv/3rdparty && \
rm -rf ~/opencv/doc && \
rm -rf ~/opencv/include && \
rm -rf ~/opencv/platforms && \
rm -rf ~/opencv/modules && \
rm -rf ~/opencv_contrib/build && \
rm -rf ~/opencv_contrib/doc
RUN mkdir ~/.aws/ && touch ~/.aws/config && touch ~/.aws/credentials && \
echo "[default]" > ~/.aws/credentials && \
echo "AWS_ACCESS_KEY_ID=xxxxxxx" >> ~/.aws/credentials && \
echo "AWS_SECRET_ACCESS_KEY=xxxxxxx" >> ~/.aws/credentials && \
echo "[default]" > ~/.aws/config && \
echo "output = json" >> ~/.aws/config && \
echo "region = us-east-1" >> ~/.aws/config
RUN apt-get update && \
apt-get -y install bcrypt \
libssl-dev \
libffi-dev \
libpq-dev \
vim \
redis-server \
rsyslog \
imagemagick \
libmagickcore-dev \
libmagickwand-dev \
libmagic-dev \
curl
RUN pip install pyopenssl ndg-httpsclient pyasn1
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
ENV PYTHONPATH="/usr/local/lib/python2.7/dist-packages/:${PYTHONPATH}"
CMD python /document_analyzer/api.py
Docker-compose:
document_analyzer:
  environment:
    - IP=${IP}
  extends:
    file: common.yml
    service: microservice
  build: document_analyzer
  ports:
    - "5001:5001"
  volumes:
    - ./document_analyzer:/document_analyzer
    - .:/var/lib/
  environment:
    - PYTHONPATH=$PYTHONPATH:/var/lib
  links:
    - redis
    - rabbit
    - ocr_runner
    - tika
    - document_envelope
    - converter
  restart: on-failure
You have this work being done during the build phase:
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
And at runtime you do this in the compose yaml file:
volumes:
  - ./document_analyzer:/document_analyzer
That volume mount will override everything you did in /document_analyzer during the build. Only what is in the directory outside the container will now be available at /document_analyzer inside the container. Whatever was at /document_analyzer before, from the build phase, is now hidden by this mount and not available.
The difference when you use docker run is that you did not create this mount.
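A quick way to confirm that diagnosis (a sketch, not part of the original answer) is to temporarily drop the bind mounts for this service, rebuild, and run docker-compose up again; if the imports then succeed, the mounts were hiding the build-time contents:
document_analyzer:
  extends:
    file: common.yml
    service: microservice
  build: document_analyzer
  ports:
    - "5001:5001"
  # volumes:
  #   - ./document_analyzer:/document_analyzer
  #   - .:/var/lib/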

Program can't find Boost Graph installed in Docker

I am using Docker to store the dependencies of my C++ program for CI testing with GitLab CI. I first build a base image which contains all of the program's dependencies (let's call it DOCKER_A):
FROM gcc:5
RUN mkdir -p /usr/src/optimization
WORKDIR /usr/optimization
#COPY . /usr/optimization
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y build-essential && \
apt-get install -y openssh-client && \
apt-get install -y python3 && \
apt-get install -y python3-pip && \
pip3 install --upgrade pip && \
pip3 install virtualenv
RUN wget http://www.cmake.org/files/v3.7/cmake-3.7.2.tar.gz && \
tar xf cmake-3.7.2.tar.gz && \
cd cmake-3.7.2/ && \
./configure && \
make && \
make install && \
export PATH=/usr/local/bin:$PATH && \
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH && \
cd ..
RUN wget -O boost_1_64_0.tar.gz http://sourceforge.net/projects/boost/files/boost/1.64.0/boost_1_64_0.tar.gz/download && \
tar xzvf boost_1_64_0.tar.gz && \
cd boost_1_64_0 && \
./bootstrap.sh --exec-prefix=/usr/local --with-python=python3 && \
./b2 threading=multi && \
./b2 install threading=multi && \
cd .. && \
rm boost_1_64_0.tar.gz && \
rm -r boost_1_64_0 && \
ln -s /usr/lib/x86_64-linux-gnu/libboost_python-py34.so /usr/lib/x86_64-linux-gnu/libboost_python3.so
This image doesn't change. Then every time I push to GitLab, I build another image, starting from DOCKER_A:
FROM DOCKER_A
ARG SSH_PRIVATE_KEY
WORKDIR /usr/optimization
COPY . /usr/optimization
RUN chmod +x ADD_KEY.sh
RUN ./ADD_KEY.sh "$SSH_PRIVATE_KEY"
RUN mkdir build && \
cd build && \
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=on .. && \
make && \
cd ..
This builds the code from the new commit (up to this point everything works as expected).
Next, in my YAML file for gitlab CI, I run my tests, which consist of calling the executable files generated by my build process.
before_script:
  - docker info
  - docker login -u user -p $CI_JOB_TOKEN docker.registry.url
after_script:
  - echo "After script section"
  - echo "For example you might do some cleanup here"
buildRelease:
  stage: build
  script:
    - echo "Do your build here"
    - docker login -u user -p $CI_JOB_TOKEN docker.registry.url
    - docker build --pull -t $CONTAINER_IMAGE_PUSH --build-arg SSH_PRIVATE_KEY="$SSH_PRIVATE_KEY" .
    - docker push $CONTAINER_IMAGE_PUSH
testDispatch:
  stage: test
  script:
    - echo "Do a test here"
    - echo "For example run a test suite"
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/dispatch
testState:
  stage: test
  script:
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/state-test
testAlgorithm:
  stage: test
  script:
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/algorithm-test
testSystem:
  stage: test
  script:
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/system-test
Each of the tests in the test stage fails with the same error. Here is an example of the output:
$ docker run -t $CONTAINER_IMAGE_PULL ./bin/algorithm-test
./bin/algorithm-test: error while loading shared libraries:
libboost_graph.so.1.64.0: cannot open shared object file: No such file or directory
I don't understand why my binary cannot find libboost_graph, as it is installed in the first Docker image, which I am inheriting from.
Any help that could be provided would be appreciated.
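A hedged guess, not from the original post: the export LD_LIBRARY_PATH=... inside a RUN instruction only lasts for that single shell, so nothing at docker run time tells the loader to search /usr/local/lib, where b2 install put the Boost libraries. Two possible sketches, applied in DOCKER_A after the Boost install:
# Option 1 (sketch): persist the search path as an image-level environment variable.
ENV LD_LIBRARY_PATH=/usr/local/lib

# Option 2 (sketch): register /usr/local/lib with the dynamic loader cache
# right after installing Boost.
RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf && ldconfig
Either change affects the runtime containers started by the test jobs, whereas the existing export lines only affect the RUN step they appear in.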