dockerfile: dlerror: libcudart.so.11.0: cannot open shared object file - dockerfile

I'm trying to build my first Dockerfile for the Vision Transformer on Ubuntu 20.04 and ran into:
2022-11-04 09:08:49.205922: W external/org_tensorflow/tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64]
Could not load dynamic library 'libcudart.so.11.0'; dlerror:
libcudart.so.11.0: cannot open shared object file: No such file or
directory; LD_LIBRARY_PATH:
/usr/local/nvidia/lib:/usr/local/nvidia/lib64
It stopped at the git clone step (this is the last step that appeared in the terminal before the error):
Step 7/11 : RUN git clone
https://github.com/google-research/vision_transformer.git &&cd
vision_transformer && pip3 install pip --upgrade && pip
install -r vit_jax/requirements.txt &&python -m vit_jax.main
--workdir=/tmp/vit-$(date +%s) --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 --config.pretrained_dir='gs://vit_models/imagenet21k' && pip cache purge
Below is my Dockerfile:
FROM pytorch/pytorch:1.8.1-cuda10.2-cudnn7-runtime
ENV DEBIAN_FRONTEND=noninteractive
ARG USERNAME=user
WORKDIR /dockertest
ARG WORKDIR=/dockertest
RUN apt-get update && apt-get install -y \
automake autoconf libpng-dev nano python3-pip \
sudo curl zip unzip libtool swig zlib1g-dev pkg-config \
python3-mock libpython3-dev libpython3-all-dev \
g++ gcc cmake make pciutils cpio gosu wget \
libgtk-3-dev libxtst-dev sudo apt-transport-https \
build-essential gnupg git xz-utils vim libgtk2.0-0 libcanberra-gtk-module \
# libva-drm2 libva-x11-2 vainfo libva-wayland2 libva-glx2 \
libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
&& sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
&& apt clean \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/google-research/vision_transformer.git \
&& cd vision_transformer \
&& pip3 install pip --upgrade \
&& pip install -r vit_jax/requirements.txt \
&& python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
--config.pretrained_dir='gs://vit_models/imagenet21k' \
&& pip cache purge
RUN echo "root:root" | chpasswd \
&& adduser --disabled-password --gecos "" "${USERNAME}" \
&& echo "${USERNAME}:${USERNAME}" | chpasswd \
&& echo "%${USERNAME} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/${USERNAME} \
&& chmod 0440 /etc/sudoers.d/${USERNAME}
USER ${USERNAME}
RUN sudo chown -R ${USERNAME}:${USERNAME} ${WORKDIR}
WORKDIR ${WORKDIR}
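A quick way to check which CUDA runtime the base image actually ships (a diagnostic sketch, run outside the build; the image tag is taken from the FROM line above):
docker run --rm pytorch/pytorch:1.8.1-cuda10.2-cudnn7-runtime sh -c "find / -name 'libcudart.so*' 2>/dev/null"
If this only turns up libcudart.so.10.2, the cuda10.2 base image cannot provide the libcudart.so.11.0 that the JAX requirements are looking for.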

Related

Using Kaniko cache with Google Cloud Build for Google Cloud Kubernetes Deployments

We have been using Google Cloud Build via build triggers for our GitHub repository, which holds a C++ application that is deployed to a Google Cloud Kubernetes cluster.
As seen above, our build configuration comes from a Dockerfile located in our GitHub repository.
Everything is working as expected; however, our builds last about 55+ minutes. I would like to add Kaniko cache support as suggested [here], but the Google Cloud documentation only suggests a way to add it via a YAML file, as below:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh
How can I achieve Kaniko builds with a Dockerfile-based trigger? My Dockerfile is below:
FROM --platform=amd64 ubuntu:22.10
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update && apt-get install --yes --no-install-recommends \
ca-certificates \
curl \
gnupg \
&& echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" \
| tee /etc/apt/sources.list.d/gcsfuse.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt-get update \
&& apt-get install --yes gcsfuse \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y gcc && \
apt-get install -y software-properties-common && \
apt install -y cmake && \
apt-get install -y make && \
apt-get install -y clang && \
apt-get install -y mesa-common-dev && \
apt-get install -y git && \
apt-get install -y xorg-dev && \
apt-get install -y nasm && \
apt-get install -y byobu curl git htop man unzip vim wget && \
rm -rf /var/lib/apt/lists/*
# Update and upgrade repo
RUN apt-get update -y -q && apt-get upgrade -y -q
COPY . /app
RUN cd /app
RUN ls -la
# Set environment variables.
ENV HOME /root
ENV WDIR /app
# Define working directory.
WORKDIR /app
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install libx11-dev
RUN apt-cache policy libxrandr-dev
RUN apt install libxrandr-dev
RUN cd /app/lib/ffmpeg && ./configure && make && make install
RUN cmake . && make
# Define default command.
CMD ["bash"]
Any suggestions are quite welcome.
As I mentioned in the comment, you can only add Kaniko in your cloudbuild.yaml file, as that is also the only option shown in this GitHub link, but you can add the --dockerfile argument to point at your Dockerfile path.
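For example (a sketch based on the snippet above; the Dockerfile path, image name, and cache TTL are placeholders to adapt):
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --dockerfile=Dockerfile   # path to your Dockerfile, relative to the build context
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=48h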

LibreOffice convert-to not working in AWS Docker image

I am trying to convert a file to PDF using LibreOffice. Currently, the best I have achieved is:
RUN wget http://download.documentfoundation.org/libreoffice/stable/7.2.5/rpm/x86_64/LibreOffice_7.2.5_Linux_x86-64_rpm.tar.gz
RUN tar -xvzf LibreOffice_7.2.5_Linux_x86-64_rpm.tar.gz
RUN cd LibreOffice_7.2.5.2_Linux_x86-64_rpm/RPMS; yum -y localinstall *.rpm;
RUN yum -y install cairo
RUN echo instalacion completada
RUN /opt/libreoffice7.2/program/soffice.bin --version
Up to here, it works: it shows the version of the correctly installed LibreOffice. But when I try to run the conversion, it does not work:
RUN /opt/libreoffice7.2/program/soffice.bin --headless --convert-to pdf my_file.xlsm
Returns:
The command '/bin/sh -c /opt/libreoffice7.2/program/soffice.bin
--headless --convert-to pdf my_file.xlsm' returned a non-zero code: 81
My complete Dockerfile:
# Pull the base image with python 3.8 as a runtime for your Lambda
FROM public.ecr.aws/lambda/python:3.8
RUN mkdir experimento/
COPY my_file.xlsm .
# Install OS packages for Pillow-SIMD
RUN yum -y install wget tar gzip zlib freetype-devel \
gcc \
ghostscript \
lcms2-devel \
libffi-devel \
libimagequant-devel \
libjpeg-devel \
libraqm-devel \
libtiff-devel \
libwebp-devel \
make \
openjpeg2-devel \
rh-python36 \
rh-python36-python-virtualenv \
sudo \
tcl-devel \
tk-devel \
tkinter \
which \
xorg-x11-server-Xvfb \
zlib-devel \
&& yum clean all
RUN wget http://download.documentfoundation.org/libreoffice/stable/7.2.5/rpm/x86_64/LibreOffice_7.2.5_Linux_x86-64_rpm.tar.gz
RUN tar -xvzf LibreOffice_7.2.5_Linux_x86-64_rpm.tar.gz
RUN cd LibreOffice_7.2.5.2_Linux_x86-64_rpm/RPMS; yum -y localinstall *.rpm;
RUN yum -y install cairo
RUN echo instalacion completada
RUN /opt/libreoffice7.2/program/soffice.bin --version
RUN /opt/libreoffice7.2/program/soffice.bin -h
RUN sudo find / -name soffice.bin
RUN yum install -y libXinerama.x86_64 cups-libs dbus-glib
RUN sudo /opt/libreoffice7.2/program/soffice.bin --headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --convert-to 'pdf:writer_pdf_Export' --outdir experimento/ my_file.xlsm
I found a solution: the problem was the .bin at the end of soffice (soffice is a wrapper script that prepares the environment before launching soffice.bin, which appears to be why calling the binary directly fails):
# Pull the base image with python 3.8 as a runtime for your Lambda
FROM public.ecr.aws/lambda/python:3.8
COPY my_file.xlsm .
# Install OS packages for Pillow-SIMD
RUN yum -y install curl wget tar gzip zlib freetype-devel \
libxslt \
libxslt1-dev \
gcc \
ghostscript \
lcms2-devel \
libffi-devel \
libimagequant-devel \
libjpeg-devel \
libraqm-devel \
libtiff-devel \
libwebp-devel \
make \
openjpeg2-devel \
rh-python36 \
rh-python36-python-virtualenv \
sudo \
tcl-devel \
tk-devel \
tkinter \
which \
xorg-x11-server-Xvfb \
zlib-devel \
java \
&& yum clean all
RUN wget http://download.documentfoundation.org/libreoffice/stable/7.2.5/rpm/x86_64/LibreOffice_7.2.5_Linux_x86-64_rpm.tar.gz
RUN tar -xvzf LibreOffice_7.2.5_Linux_x86-64_rpm.tar.gz
RUN cd LibreOffice_7.2.5.2_Linux_x86-64_rpm/RPMS; yum -y localinstall *.rpm;
RUN yum -y install cairo
RUN /opt/libreoffice7.2/program/soffice.bin --version
COPY carta_2020.xlsm /tmp/
RUN ls /tmp/
RUN /opt/libreoffice7.2/program/soffice --headless --convert-to pdf --outdir /tmp my_file.xlsm
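For clarity, the only functional change between the failing and the working invocation (both taken from the Dockerfiles above) is the wrapper versus the raw binary:
# fails with exit code 81: the raw binary is called directly
RUN /opt/libreoffice7.2/program/soffice.bin --headless --convert-to pdf my_file.xlsm
# works: soffice is a wrapper script that prepares the environment before launching soffice.bin
RUN /opt/libreoffice7.2/program/soffice --headless --convert-to pdf --outdir /tmp my_file.xlsm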

Missing dirs and files when migrating PHP-Apache2 from Debian to Alpine

I'm trying to migrate a dockerized PHP-Apache2 server from Debian to Alpine.
The Debian dockerfile:
FROM php:7.3.24-apache-buster
COPY conf/php.ini /usr/local/etc/php
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
curl git \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libxml2-dev \
libpng-dev \
python-setuptools python-dev build-essential python-pip \
libzip-dev
RUN pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-install zip \
&& docker-php-ext-install soap
RUN pip install --upgrade virtualenv && pip install xhtml2pdf
WORKDIR /var/www/app
COPY ./ /var/www/app
RUN cd /var/www/app && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
php composer.phar install --ignore-platform-reqs --prefer-dist
COPY ./conf/default.conf /etc/apache2/sites-enabled/000-default.conf
COPY ./conf/cert /etc/apache2/cert
RUN mkdir /var/log/gts && chmod 777 -R /var/log/gts
RUN mv /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load
RUN mv /etc/apache2/mods-available/ssl.load /etc/apache2/mods-enabled/ssl.load
COPY conf/apache2.conf /etc/apache2/apache2.conf
RUN mv /etc/apache2/mods-available/remoteip.load /etc/apache2/mods-enabled/remoteip.load
EXPOSE 443
The Alpine dockerfile:
FROM webdevops/php-apache:7.3-alpine
COPY conf/php.ini /usr/local/etc/php
RUN apk update && apk upgrade && apk add \
git \
curl \
autoconf \
freetype-dev \
libjpeg-turbo-dev \
libmcrypt-dev \
libxml2-dev \
libpng-dev \
libzip-dev \
py-setuptools \
python3-dev \
build-base \
py-pip \
libzip-dev \
apache2-dev
RUN PHP_AUTOCONF="/usr/bin/autoconf" \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN pip install --upgrade --ignore-installed virtualenv && pip install xhtml2pdf
WORKDIR /var/www/app
COPY ./ /var/www/app
RUN cd /var/www/app && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
php composer.phar install --ignore-platform-reqs --prefer-dist
COPY ./conf/default.conf /etc/apache2/sites-enabled/000-default.conf
COPY ./conf/cert /etc/apache2/cert
RUN mkdir /var/log/gts && chmod 777 -R /var/log/gts
RUN mv /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load \
&& mv /etc/apache2/mods-available/ssl.load /etc/apache2/mods-enabled/ssl.load
COPY conf/apache2.conf /etc/apache2/apache2.conf
RUN mv /etc/apache2/mods-available/remoteip.load /etc/apache2/mods-enabled/remoteip.load
EXPOSE 443
The Alpine build failed:
mv: can't rename '/etc/apache2/mods-available/rewrite.load': No such file or directory
Turns out that the Alpine container:
Has no /etc/apache2/mods-* directories and no *.load files
Has only *.so files in /usr/lib/apache2 (similar to the list of *.so files in /usr/lib/apache2/modules on Debian).
Questions:
Why don't I have *.load files in the Alpine container?
Why don't I have /etc/apache2/mods-* directories in the Alpine container?
Are *.so files equivalent to *.load files?
If the previous is true, then how do I use the *.so files?
PS: I prefer not to change httpd.conf if possible.
I'll try to answer your questions one by one:
Why don't I have *.load files in the Alpine container?
The .load files normally just contain one line to load the specific module, and can be seen as a convenience mechanism for toggling modules with a shell script (a2enmod).
Why don't I have /etc/apache2/mods-* directories in the Alpine container?
Alpine tries to be minimalistic. This also means you should not install modules you don't need, so having modules installed that you don't use (e.g., only available in mods-available) is considered bad practice.
Are *.so files equivalent to *.load files?
The .so files are also present on Debian and are the modules themselves. The .load files are configuration fragments that load these .so files.
If the previous is true, then how do I use the *.so files?
Just take a look into the .load files. For example, you can load the rewrite module with this configuration line:
LoadModule rewrite_module modules/mod_rewrite.so
You shouldn't need to change httpd.conf; your preference is right here. Instead, you can put custom configuration into /etc/apache2/conf.d and load all your modules there. Just make sure your config files there end with .conf, as in the sketch below.
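A minimal sketch of such a drop-in fragment (the filename is arbitrary; the modules listed are the ones your Debian Dockerfile enabled, and the modules/ prefix follows the LoadModule line above):
# /etc/apache2/conf.d/custom-modules.conf
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule remoteip_module modules/mod_remoteip.so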

Getting the error "exec: \"python2\": executable file not found in $PATH: unknown" when trying to run a container interactively

I have the following Dockerfile:
# Use Python base image from DockerHub
FROM python:2.7
WORKDIR /salmon
# INSTALL CMAKE
RUN apt-get update && apt-get install -y sudo \
&& sudo apt-get update \
&& sudo apt-get install -y \
python \
cmake \
wget
#INSTALL BOOST
RUN wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.gz \
&& mv boost_1_66_0.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf boost_1_66_0.tar.gz \
&& cd ./boost_1_66_0/ \
&& ./bootstrap.sh \
&& ./b2 install
#INSTALL SALMON
RUN wget https://github.com/COMBINE-lab/salmon/releases/download/v0.14.1/salmon-0.14.1_linux_x86_64.tar.gz \
&& mv salmon-0.14.1_linux_x86_64.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf salmon-0.14.1_linux_x86_64.tar.gz \
&& cd salmon-latest_linux_x86_64/
ENV PATH=/salmon/
ADD . /salmon
When I try to run it interactively via sudo docker run -v ~/Documents/Docker/salmon_test/:/data -it salmon:00.00.01, I get the error:
"exec: \"python2\": executable file not found in $PATH": unknown."
I don't understand why I'm getting this error. I even added the sudo apt-get install python command (which I didn't have before) but that didn't solve this either. Any thoughts?
This happens because you are overriding the $PATH variable; as a result, the container fails to find the executable.
The default PATH value is
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So when you set it to /salmon/, you can only call python by its full path, such as /usr/local/bin/python. In any case, you should not overwrite the PATH variable like this; it is better to prepend to the existing PATH variable:
FROM python:2.7
ENV PATH="/salmon/:${PATH}"
.
.
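A quick check (using the image tag from the question) that the interpreter can be found again:
docker run --rm salmon:00.00.01 python2 --version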

Discrepancy between Python in docker-compose and independent container with same image

In my project, I have a dockerized microservice based on ubuntu:trusty, which I wanted to update to Python 2.7.13 from the standard apt-get 2.7.6 version. In doing so, I ran into some module import issues. Since then, I've added python2.7/dist-packages, which contains all of the modules I'm concerned with, to the beginning of my PYTHONPATH.
I built my microservice images using docker-compose build, but here's the issue: when I run docker-compose up, this microservice fails to import all non-standard modules, yet when I create my own container from the same image using docker run -it image_id /bin/bash and then run a Python shell, importing any of said modules works perfectly. Even when I run the same Python script, it gets past all of these import statements (but fails for other issues due to being run in isolation without proper linking).
I've verified that Python 2.7.13 is running both under docker-compose up and when I run my own container. I've cleared all of my containers, images, and cache and have rebuilt with no progress. The command being run at the end of the Dockerfile is CMD python /filename/file.py.
Any ideas what could cause such a discrepancy?
EDIT:
As requested, here's the Dockerfile. The file structure is simply a project folder with subfolders, each being their own dockerized microservice. The one of concern here is called document_analyzer, and the following is the relevant section of the docker-compose file. Examples of the modules that aren't importing properly are PyPDF2, pymongo, and boto3.
FROM ubuntu:trusty
# Built using PyImageSearch guide:
# http://www.pyimagesearch.com/2015/06/22/install-opencv-3-0-and-python-2-7-on-ubuntu/
# Install dependencies
RUN \
apt-get -qq update && apt-get -qq upgrade -y && \
apt-get -qq install -y \
wget \
unzip \
libtbb2 \
libtbb-dev && \
apt-get -qq install -y \
build-essential \
cmake \
git \
pkg-config \
libjpeg8-dev \
libtiff4-dev \
libjasper-dev \
libpng12-dev \
libgtk2.0-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libatlas-base-dev \
gfortran \
libhdf5-dev \
libreadline-gplv2-dev \
libncursesw5-dev \
libssl-dev \
libsqlite3-dev \
tk-dev \
libgdbm-dev \
libc6-dev \
libbz2-dev \
libxml2-dev \
libxslt-dev && \
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz && \
tar -xvf Python-2.7.13.tgz && \
cd Python-2.7.13 && \
./configure && \
make && \
make install && \
apt-get install -y python-dev python-setuptools && \
easy_install pip && \
pip install numpy==1.12.0 && \
apt-get autoclean && apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Download OpenCV 3.2.0 and install
# step 10
RUN \
cd ~ && \
wget https://github.com/Itseez/opencv/archive/3.2.0.zip && \
unzip 3.2.0.zip && \
mv ~/opencv-3.2.0/ ~/opencv/ && \
rm -rf ~/3.2.0.zip && \
cd ~ && \
wget https://github.com/opencv/opencv_contrib/archive/3.2.0.zip -O 3.2.0-contrib.zip && \
unzip 3.2.0-contrib.zip && \
mv opencv_contrib-3.2.0 opencv_contrib && \
rm -rf ~/3.2.0-contrib.zip && \
cd /root/opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON .. && \
cd ~/opencv/build && \
make -j $(nproc) && \
make install && \
ldconfig && \
# clean opencv repos
rm -rf ~/opencv/build && \
rm -rf ~/opencv/3rdparty && \
rm -rf ~/opencv/doc && \
rm -rf ~/opencv/include && \
rm -rf ~/opencv/platforms && \
rm -rf ~/opencv/modules && \
rm -rf ~/opencv_contrib/build && \
rm -rf ~/opencv_contrib/doc
RUN mkdir ~/.aws/ && touch ~/.aws/config && touch ~/.aws/credentials && \
echo "[default]" > ~/.aws/credentials && \
echo "AWS_ACCESS_KEY_ID=xxxxxxx" >> ~/.aws/credentials && \
echo "AWS_SECRET_ACCESS_KEY=xxxxxxx" >> ~/.aws/credentials && \
echo "[default]" > ~/.aws/config && \
echo "output = json" >> ~/.aws/config && \
echo "region = us-east-1" >> ~/.aws/config
RUN apt-get update && \
apt-get -y install bcrypt \
libssl-dev \
libffi-dev \
libpq-dev \
vim \
redis-server \
rsyslog \
imagemagick \
libmagickcore-dev \
libmagickwand-dev \
libmagic-dev \
curl
RUN pip install pyopenssl ndg-httpsclient pyasn1
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
ENV PYTHONPATH="/usr/local/lib/python2.7/dist-packages/:${PYTHONPATH}"
CMD python /document_analyzer/api.py
Docker-compose:
document_analyzer:
  environment:
    - IP=${IP}
  extends:
    file: common.yml
    service: microservice
  build: document_analyzer
  ports:
    - "5001:5001"
  volumes:
    - ./document_analyzer:/document_analyzer
    - .:/var/lib/
  environment:
    - PYTHONPATH=$PYTHONPATH:/var/lib
  links:
    - redis
    - rabbit
    - ocr_runner
    - tika
    - document_envelope
    - converter
  restart: on-failure
You have this work being done during the build phase:
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
And at runtime you do this in the compose yaml file:
volumes:
  - ./document_analyzer:/document_analyzer
That volume mount will override everything you did in /document_analyzer during the build. Only what is in the directory outside the container will now be available at /document_analyzer inside the container. Whatever was at /document_analyzer before, from the build phase, is now hidden by this mount and not available.
The difference when you use docker run is that you did not create this mount.
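If the contents baked in at build time should win, one option (a sketch of the change, keeping the rest of the service definition from the question) is to drop that bind mount:
volumes:
  - .:/var/lib/
  # - ./document_analyzer:/document_analyzer   # removed: this mount hid the files copied during the build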