I am installing OpenJDK and Python in a Dockerfile. This is how it looks:
FROM ubuntu:latest
RUN apt-get update && \
apt-get install -y openjdk-8-jdk && \
apt-get install -y ant && \
apt-get install -y ca-certificates-java && \
apt-get clean && \
update-ca-certificates -f && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer && \
apt-get update && apt-get install -y python-pip python-dev build-essential && \
apt-get install -y python3 && \
apt-get clean
Should I run apt-get clean while installing the CA certificates? I am already running apt-get clean at the bottom, after all of the installations.
It's pointless to run apt-get clean twice; the final one is enough. Also, the rm -rf /var/lib/apt/lists/* command should be moved to the end, after the last apt-get operation.
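Putting both points together, a minimal consolidated sketch (same package set as the question, with a single cleanup at the end; the rm -rf /var/cache/oracle-jdk8-installer line is dropped on the assumption that no Oracle JDK installer is actually used here) might look like this:
FROM ubuntu:latest
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk ant ca-certificates-java \
        python-pip python-dev build-essential python3 && \
    update-ca-certificates -f && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
With everything in one RUN layer, one apt-get clean and one rm of the apt lists is enough to keep the layer small. (On newer Ubuntu releases, python-pip and python-dev may need to become python3-pip and python3-dev.)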
I'm trying to build my first Dockerfile for Vision Transformer on Ubuntu 20.04 and ran into:
2022-11-04 09:08:49.205922: W external/org_tensorflow/tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64]
Could not load dynamic library 'libcudart.so.11.0'; dlerror:
libcudart.so.11.0: cannot open shared object file: No such file or
directory; LD_LIBRARY_PATH:
/usr/local/nvidia/lib:/usr/local/nvidia/lib64
It stopped at git clone (since this is the last step that appeared in the terminal before the error):
Step 7/11 : RUN git clone
https://github.com/google-research/vision_transformer.git &&cd
vision_transformer && pip3 install pip --upgrade && pip
install -r vit_jax/requirements.txt &&python -m vit_jax.main
--workdir=/tmp/vit-$(date +%s) --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 --config.pretrained_dir='gs://vit_models/imagenet21k' && pip cache purge
Below is my Dockerfile:
FROM pytorch/pytorch:1.8.1-cuda10.2-cudnn7-runtime
ENV DEBIAN_FRONTEND=noninteractive
ARG USERNAME=user
WORKDIR /dockertest
ARG WORKDIR=/dockertest
RUN apt-get update && apt-get install -y \
automake autoconf libpng-dev nano python3-pip \
sudo curl zip unzip libtool swig zlib1g-dev pkg-config \
python3-mock libpython3-dev libpython3-all-dev \
g++ gcc cmake make pciutils cpio gosu wget \
libgtk-3-dev libxtst-dev sudo apt-transport-https \
build-essential gnupg git xz-utils vim libgtk2.0-0 libcanberra-gtk-module \
# libva-drm2 libva-x11-2 vainfo libva-wayland2 libva-glx2 \
libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
&& sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
&& apt clean \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/google-research/vision_transformer.git \
&& cd vision_transformer \
&& pip3 install pip --upgrade \
&& pip install -r vit_jax/requirements.txt \
&& python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
--config.pretrained_dir='gs://vit_models/imagenet21k' \
&& pip cache purge
RUN echo "root:root" | chpasswd \
&& adduser --disabled-password --gecos "" "${USERNAME}" \
&& echo "${USERNAME}:${USERNAME}" | chpasswd \
&& echo "%${USERNAME} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/${USERNAME} \
&& chmod 0440 /etc/sudoers.d/${USERNAME}
USER ${USERNAME}
RUN sudo chown -R ${USERNAME}:${USERNAME} ${WORKDIR}
WORKDIR ${WORKDIR}
We have been using Google Cloud Build via build triggers for our GitHub repository which holds a C++ application that is deployed via Google Cloud Kubernetes Cluster.
As seen above, our build configuration comes from a Dockerfile located in our GitHub repository.
Everything is working as expected; however, our builds last 55+ minutes. I would like to add Kaniko cache support as suggested [here], but the Google Cloud documentation only describes adding it via a YAML file, as below:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
args:
- --destination=gcr.io/$PROJECT_ID/image
- --cache=true
- --cache-ttl=XXh
How can I achieve Kaniko builds with a Dockerfile-based trigger?
FROM --platform=amd64 ubuntu:22.10
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update && apt-get install --yes --no-install-recommends \
ca-certificates \
curl \
gnupg \
&& echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" \
| tee /etc/apt/sources.list.d/gcsfuse.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt-get update \
&& apt-get install --yes gcsfuse \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y gcc && \
apt-get install -y software-properties-common && \
apt install -y cmake && \
apt-get install -y make && \
apt-get install -y clang && \
apt-get install -y mesa-common-dev && \
apt-get install -y git && \
apt-get install -y xorg-dev && \
apt-get install -y nasm && \
apt-get install -y byobu curl git htop man unzip vim wget && \
rm -rf /var/lib/apt/lists/*
# Update and upgrade repo
RUN apt-get update -y -q && apt-get upgrade -y -q
COPY . /app
RUN cd /app
RUN ls -la
# Set environment variables.
ENV HOME /root
ENV WDIR /app
# Define working directory.
WORKDIR /app
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install -y libx11-dev
RUN apt-cache policy libxrandr-dev
RUN apt install -y libxrandr-dev
RUN cd /app/lib/ffmpeg && ./configure && make && make install
RUN cmake . && make
# Define default command.
CMD ["bash"]
Any suggestions are quite welcome.
As I mentioned in the comment, you can only configure Kaniko in your cloudbuild.yaml file, as that is also the only option shown in this GitHub link, but you can add the --dockerfile argument to point at your Dockerfile path.
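For example, a sketch of a cloudbuild.yaml for such a trigger (the image name and the docker/Dockerfile path are placeholders for your own values):
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=6h  # example value; the docs use XXh as a placeholder
  - --dockerfile=docker/Dockerfile
Then point the build trigger at this cloudbuild.yaml (trigger type "Cloud Build configuration file") instead of at the Dockerfile itself; Kaniko reads the Dockerfile from the path given by --dockerfile.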
I have the following Dockerfile:
# Use Python base image from DockerHub
FROM python:2.7
WORKDIR /salmon
# INSTALL CMAKE
RUN apt-get update && apt-get install -y sudo \
&& sudo apt-get update \
&& sudo apt-get install -y \
python \
cmake \
wget
#INSTALL BOOST
RUN wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.gz \
&& mv boost_1_66_0.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf boost_1_66_0.tar.gz \
&& cd ./boost_1_66_0/ \
&& ./bootstrap.sh \
&& ./b2 install
#INSTALL SALMON
RUN wget https://github.com/COMBINE-lab/salmon/releases/download/v0.14.1/salmon-0.14.1_linux_x86_64.tar.gz \
&& mv salmon-0.14.1_linux_x86_64.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf salmon-0.14.1_linux_x86_64.tar.gz \
&& cd salmon-latest_linux_x86_64/
ENV PATH=/salmon/
ADD . /salmon
When I try to run it interactively via sudo docker run -v ~/Documents/Docker/salmon_test/:/data -it salmon:00.00.01, I get the error:
"exec: \"python2\": executable file not found in $PATH": unknown."
I don't understand why I'm getting this error. I even added the sudo apt-get install python command (which I didn't have before) but that didn't solve this either. Any thoughts?
This is because you are overriding the $PATH variable, and as a result the container fails to find the executable.
The default PATH value is
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So when you set PATH to /salmon/, you can only call python by its full path, e.g. /usr/local/bin/python. In any case, you should not overwrite the PATH variable like this; it is better to prepend to the existing PATH variable:
FROM python:2.7
ENV PATH="/salmon/:${PATH}"
.
.
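As a quick check (using the image tag from the question), the interpreter should resolve again once PATH is prepended rather than replaced:
sudo docker run --rm -it salmon:00.00.01 which python
# expected output: /usr/local/bin/python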
Hello, I have the following Docker container definition:
FROM temp_base_image_name_for_post
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& pip3 install boto3
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT python3 src/main.py
My Terraform module points to this container and has a parameter called --object_key.
The module submission passes the parameter correctly, but it never reaches the Python script inside my container. How do I modify my current Docker image definition so that it receives the arguments passed in from my Terraform definition?
For future reference, the fix was to change
FROM temp_base_image_name_for_post
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& pip3 install boto3
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT python3 src/main.py
to
FROM temp_base_image_name_for_post
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& pip3 install boto3
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT ["python3", "src/main.py"]
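This works because the exec form of ENTRYPOINT runs the process directly, so arguments supplied at run time are appended to it, whereas the shell form wraps the command in /bin/sh -c and drops them. A minimal sketch (the image name and key value are hypothetical):
# With the exec form, the runtime argument is appended, so the container runs:
#   python3 src/main.py --object_key=some/key
docker run flowcell-restore-image --object_key=some/key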
In my project, I have a dockerized microservice based on ubuntu:trusty, which I wanted to update from the standard apt-get Python 2.7.6 to Python 2.7.13. In doing so, I ran into some module import issues. Since then, I've prepended python2.7/dist-packages, which contains all of the modules I'm concerned with, to my PYTHONPATH.
I built my microservice images using docker-compose build, but here's the issue: When I run docker-compose up, this microservice fails on importing all non-standard modules, yet when I create my own container from the same image using docker run -it image_id /bin/bash and then subsequently run a python shell and import any of the said modules, everything works perfectly. Even when I run the same python script, it gets past all of these import statements (but fails for other issues due to being run in isolation without proper linking).
I've verified that Python 2.7.13 is running both under docker-compose up and when I run my own container. I've removed all of my containers, images, and cache and have rebuilt with no progress. The command run at the end of the Dockerfile is CMD python /filename/file.py.
Any ideas what could cause such a discrepancy?
EDIT:
As requested, here's the Dockerfile. The file structure is simply a project folder with subfolders, each being its own dockerized microservice. The one of concern here is called document_analyzer, and the relevant section of the docker-compose file follows. Examples of the modules that aren't importing properly are PyPDF2, pymongo, and boto3.
FROM ubuntu:trusty
# Built using PyImageSearch guide:
# http://www.pyimagesearch.com/2015/06/22/install-opencv-3-0-and-python-2-7-on-ubuntu/
# Install dependencies
RUN \
apt-get -qq update && apt-get -qq upgrade -y && \
apt-get -qq install -y \
wget \
unzip \
libtbb2 \
libtbb-dev && \
apt-get -qq install -y \
build-essential \
cmake \
git \
pkg-config \
libjpeg8-dev \
libtiff4-dev \
libjasper-dev \
libpng12-dev \
libgtk2.0-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libatlas-base-dev \
gfortran \
libhdf5-dev \
libreadline-gplv2-dev \
libncursesw5-dev \
libssl-dev \
libsqlite3-dev \
tk-dev \
libgdbm-dev \
libc6-dev \
libbz2-dev \
libxml2-dev \
libxslt-dev && \
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz && \
tar -xvf Python-2.7.13.tgz && \
cd Python-2.7.13 && \
./configure && \
make && \
make install && \
apt-get install -y python-dev python-setuptools && \
easy_install pip && \
pip install numpy==1.12.0 && \
apt-get autoclean && apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Download OpenCV 3.2.0 and install
# step 10
RUN \
cd ~ && \
wget https://github.com/Itseez/opencv/archive/3.2.0.zip && \
unzip 3.2.0.zip && \
mv ~/opencv-3.2.0/ ~/opencv/ && \
rm -rf ~/3.2.0.zip && \
cd ~ && \
wget https://github.com/opencv/opencv_contrib/archive/3.2.0.zip -O 3.2.0-contrib.zip && \
unzip 3.2.0-contrib.zip && \
mv opencv_contrib-3.2.0 opencv_contrib && \
rm -rf ~/3.2.0-contrib.zip && \
cd /root/opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON .. && \
cd ~/opencv/build && \
make -j $(nproc) && \
make install && \
ldconfig && \
# clean opencv repos
rm -rf ~/opencv/build && \
rm -rf ~/opencv/3rdparty && \
rm -rf ~/opencv/doc && \
rm -rf ~/opencv/include && \
rm -rf ~/opencv/platforms && \
rm -rf ~/opencv/modules && \
rm -rf ~/opencv_contrib/build && \
rm -rf ~/opencv_contrib/doc
RUN mkdir ~/.aws/ && touch ~/.aws/config && touch ~/.aws/credentials && \
echo "[default]" > ~/.aws/credentials && \
echo "AWS_ACCESS_KEY_ID=xxxxxxx" >> ~/.aws/credentials && \
echo "AWS_SECRET_ACCESS_KEY=xxxxxxx" >> ~/.aws/credentials && \
echo "[default]" > ~/.aws/config && \
echo "output = json" >> ~/.aws/config && \
echo "region = us-east-1" >> ~/.aws/config
RUN apt-get update && \
apt-get -y install bcrypt \
libssl-dev \
libffi-dev \
libpq-dev \
vim \
redis-server \
rsyslog \
imagemagick \
libmagickcore-dev \
libmagickwand-dev \
libmagic-dev \
curl
RUN pip install pyopenssl ndg-httpsclient pyasn1
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
ENV PYTHONPATH="/usr/local/lib/python2.7/dist-packages/:${PYTHONPATH}"
CMD python /document_analyzer/api.py
Docker-compose:
document_analyzer:
environment:
- IP=${IP}
extends:
file: common.yml
service: microservice
build: document_analyzer
ports:
- "5001:5001"
volumes:
- ./document_analyzer:/document_analyzer
- .:/var/lib/
environment:
- PYTHONPATH=$PYTHONPATH:/var/lib
links:
- redis
- rabbit
- ocr_runner
- tika
- document_envelope
- converter
restart: on-failure
You have this work being done during the build phase:
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
And at runtime you do this in the compose YAML file:
volumes:
- ./document_analyzer:/document_analyzer
That volume mount will override everything you did in /document_analyzer during the build. Only what is in the directory outside the container will now be available at /document_analyzer inside the container. Whatever was at /document_analyzer before, from the build phase, is now hidden by this mount and not available.
The difference when you use docker run is that you did not create this mount.
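A sketch of one fix, assuming you want to run the code baked into the image rather than live-mount the source (everything else stays as in the question):
document_analyzer:
  extends:
    file: common.yml
    service: microservice
  build: document_analyzer
  ports:
    - "5001:5001"
  # The bind mount below is what hid the build-time contents of
  # /document_analyzer; drop it so the container sees the image's own copy.
  # volumes:
  #   - ./document_analyzer:/document_analyzer
  # (environment, links, restart, etc. unchanged from the question)
Alternatively, keep the mount for local development, but make sure everything it hides also exists in the host directory.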