I am trying to improve the Dockerfile we use for deploying a Django-based app at work. The first thing I would like to do is change the Python base image from Alpine to slim-buster, which means translating it to a Debian-based image. I would appreciate some suggestions on how to translate it, since I have little to no experience with Alpine. This is the original snippet from the Dockerfile.
FROM python:3.8.6-alpine3.12
RUN apk update && \
apk add --virtual build-deps gcc g++ musl-dev && \
apk add postgresql-dev vim bash nginx supervisor curl && \
apk add libffi-dev && \
apk add --update npm && \
apk add git make cmake
You'd need to:
- use apt-get instead
- find the equivalents of those packages in the Debian repositories
Some of these will likely be wrong, but you get the gist.
FROM python:3.8.6-slim-buster
RUN apt-get update && \
apt-get install -y \
bash \
build-essential \
cmake \
curl \
git \
libffi-dev \
libpq-dev \
make \
nginx \
nodejs \
npm \
supervisor \
vim
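If you also want to keep the slim image small, a common refinement is to skip recommended packages and clean the apt lists in the same layer. This is only a sketch, not verified against your app; it mirrors the package list above, with libpq-dev for psycopg2 and npm for the Node tooling:
# Sketch only: Debian translation with --no-install-recommends and apt cache cleanup
FROM python:3.8.6-slim-buster
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        bash build-essential cmake curl git libffi-dev libpq-dev \
        make nginx nodejs npm supervisor vim && \
    rm -rf /var/lib/apt/lists/*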
The error
ERRORS:
app_1 | core.Page.image: (fields.E210) Cannot use ImageField because Pillow is not installed.
It seems that Pillow is detected as not installed in my Docker container if I delete the .tmp-deps packages after installing requirements.txt. I say this because if I remove the apk del .tmp-deps line, the error goes away. However, I want to remove the .tmp-deps, because I learned it's best practice to keep the Docker image as lean as possible.
Dockerfile
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev linux-headers \
python3-dev zlib-dev jpeg-dev gcc musl-dev && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps
requirements.txt
django>=3.2.3,<3.3
psycopg2>=2.8.6,<2.9
uWSGI>=2.0.19.1,<2.1
djangorestframework>=3.12.4,<3.20.0
Pillow>=8.4.0,<8.5.0
Any pointer would be greatly appreciated.
Alright. After looking at the Dockerfile, I saw that postgresql-client is not in the --virtual .tmp-deps group, which means some dependencies have to stay in the container for certain packages to work (that was not obvious to me).
What I learned from this is that I need to add jpeg-dev to the line outside the .tmp-deps group.
Updated Dockerfile
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache postgresql-client jpeg-dev && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev linux-headers python3-dev gcc zlib-dev && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps
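If this happens again with another package, one way to see which runtime libraries a compiled module actually links against (and therefore which packages must stay outside the --virtual group) is to run ldd on it inside the image. A rough sketch, assuming the /py virtualenv from above and that ldd is available (on Alpine it is provided by musl-utils):
# Sketch: list the shared libraries Pillow's C extension needs at runtime.
# The site-packages path depends on the Python version; adjust the glob as needed.
docker run --rm <your-image-tag> sh -c 'ldd /py/lib/python3.*/site-packages/PIL/_imaging*.so'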
I am working on a Django project which runs in a Docker container based on python:3.9-alpine3.13.
FROM python:3.9-alpine3.13
LABEL maintainer=<do not want to show>
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./app /app
COPY ./scripts /scripts
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev gcc python3-dev bash openssl-dev libffi-dev libsodium-dev linux-headers && \
apk add jpeg-dev zlib-dev libjpeg && \
apk add --update busybox-suid && \
apk --no-cache add dcron libcap && \
/py/bin/pip install --upgrade pip && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps && \
adduser --disabled-password --no-create-home app && \
mkdir -p /vol/web/static && \
chown -R app:app /vol && \
chmod -R 755 /vol && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:$PATH"
USER app
CMD ["run.sh"]
I used this tutorial for the implementation, but I don't think the error is caused by it.
I am getting this error:
sumit#LAPTOP-RT539Q9C MINGW64 ~/Desktop/RentYug/rentyug-backend-deployment (main)
$ docker-compose run --rm app sh -c "python manage.py crontab show"
WARNING: Found orphan containers (rentyug-backend-deployment_proxy_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating rentyug-backend-deployment_app_run ... done
/bin/sh: /usr/bin/crontab: Permission denied
Currently active jobs in crontab:
I used these lines for that:
apk add --update busybox-suid && \
apk --no-cache add dcron libcap && \
I found my answer: cron should run as the root user. I found that answer there.
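One way to act on that finding, sketched under the assumption that run.sh is the entrypoint shown above and that su-exec is installed (apk add --no-cache su-exec): keep the container starting as root so crond can run, and drop to the unprivileged app user only for the application process itself.
#!/bin/sh
# Sketch of run.sh: cron runs as root, the application does not.
# Assumes USER app has been removed from the Dockerfile so the entrypoint starts
# as root; the last line is a placeholder for however run.sh currently starts the app.
set -e

/py/bin/python manage.py crontab add   # register the django-crontab jobs
crond                                  # start dcron as root
exec su-exec app /py/bin/uwsgi --socket :9000 --workers 4 --master --module app.wsgi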
I'm trying to migrate a dockerized PHP-Apache2 server from Debian to Alpine.
The Debian dockerfile:
FROM php:7.3.24-apache-buster
COPY conf/php.ini /usr/local/etc/php
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
curl git \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libxml2-dev \
libpng-dev \
python-setuptools python-dev build-essential python-pip \
libzip-dev
RUN pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-install zip \
&& docker-php-ext-install soap
RUN pip install --upgrade virtualenv && pip install xhtml2pdf
WORKDIR /var/www/app
COPY ./ /var/www/app
RUN cd /var/www/app && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
php composer.phar install --ignore-platform-reqs --prefer-dist
COPY ./conf/default.conf /etc/apache2/sites-enabled/000-default.conf
COPY ./conf/cert /etc/apache2/cert
RUN mkdir /var/log/gts && chmod 777 -R /var/log/gts
RUN mv /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load
RUN mv /etc/apache2/mods-available/ssl.load /etc/apache2/mods-enabled/ssl.load
COPY conf/apache2.conf /etc/apache2/apache2.conf
RUN mv /etc/apache2/mods-available/remoteip.load /etc/apache2/mods-enabled/remoteip.load
EXPOSE 443
The Alpine dockerfile:
FROM webdevops/php-apache:7.3-alpine
COPY conf/php.ini /usr/local/etc/php
RUN apk update && apk upgrade && apk add \
git \
curl \
autoconf \
freetype-dev \
libjpeg-turbo-dev \
libmcrypt-dev \
libxml2-dev \
libpng-dev \
libzip-dev \
py-setuptools \
python3-dev \
build-base \
py-pip \
libzip-dev \
apache2-dev
RUN PHP_AUTOCONF="/usr/bin/autoconf" \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN pip install --upgrade --ignore-installed virtualenv && pip install xhtml2pdf
WORKDIR /var/www/app
COPY ./ /var/www/app
RUN cd /var/www/app && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
php composer.phar install --ignore-platform-reqs --prefer-dist
COPY ./conf/default.conf /etc/apache2/sites-enabled/000-default.conf
COPY ./conf/cert /etc/apache2/cert
RUN mkdir /var/log/gts && chmod 777 -R /var/log/gts
RUN mv /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load \
&& mv /etc/apache2/mods-available/ssl.load /etc/apache2/mods-enabled/ssl.load
COPY conf/apache2.conf /etc/apache2/apache2.conf
RUN mv /etc/apache2/mods-available/remoteip.load /etc/apache2/mods-enabled/remoteip.load
EXPOSE 443
The Alpine build failed:
mv: can't rename '/etc/apache2/mods-available/rewrite.load': No such file or directory
It turns out that the Alpine container:
- has no /etc/apache2/mods-* directories and no *.load files
- has only *.so files in /usr/lib/apache2 (similar to the list of *.so files in /usr/lib/apache2/modules on Debian).
Questions:
1. Why don't I have *.load files in the Alpine container?
2. Why don't I have /etc/apache2/mods-* directories in the Alpine container?
3. Are *.so files equivalent to *.load files?
4. If the previous is true, then how do I use the *.so files?
PS: I prefer not to change httpd.conf if possible.
I'll try to answer your questions one by one:
Why don't I have *.load files in the Alpine container?
The .load files normally contain just one line that loads the specific module; they can be seen as a convenience mechanism for toggling modules with a shell script (a2enmod).
Why don't I have /etc/apache2/mods-* directories in the Alpine container?
Alpine tries to be minimalistic, which also means you should not install modules you don't need. So keeping modules around that you don't use (e.g. ones that sit only in mods-available) is considered bad practice.
Are *.so files equivalent to *.load files?
The .so files are also present on Debian and are the modules themselves. The .load files are configuration fragments that load these .so files.
If the previous is true, then how do I use the *.so files?
Just take a look inside the .load files. For example, you can load the rewrite module with this configuration line:
LoadModule rewrite_module modules/mod_rewrite.so
You shouldn't change httpd.conf; your preference is right here. Instead, you can put custom configuration into /etc/apache2/conf.d and load all your modules there. Just make sure the config files there end with .conf.
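For example, a small drop-in file can replace the three mods-enabled moves from the Debian Dockerfile. This is only a sketch: the filename is arbitrary, ssl may already be loaded by the webdevops base image, and the module paths should point at wherever the .so files live in your image (you found them in /usr/lib/apache2):
# /etc/apache2/conf.d/zz-custom-modules.conf (any name ending in .conf works)
LoadModule rewrite_module  /usr/lib/apache2/mod_rewrite.so
LoadModule ssl_module      /usr/lib/apache2/mod_ssl.so
LoadModule remoteip_module /usr/lib/apache2/mod_remoteip.so
In the Dockerfile, a single COPY conf/zz-custom-modules.conf /etc/apache2/conf.d/ can then take the place of the three mv lines.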
I am trying to build the following Radare2 Dockerfile, but I think I have some of the formatting wrong. I can't seem to figure out how to make everything install correctly and build. Any help would be appreciated.
FROM radare/radare2
USER root
RUN apt-get update && \
apt-get install -y \
build-essential \
nasm \
gdb \
python \
python-pip \
python-dev \
vim \
git \
libffi-dev \
libssl-dev \
libc6-i386 \
lsb-core \
pip install --upgrade pip \
pip install --upgrade pwntools \
libc6-dev-i386
USER r2
RUN git clone https://github.com/longld/peda.git ~/peda && \
echo "source ~/peda/peda.py" >> ~/.gdbinit
RUN \
"/bin/bash"
I get the following error when I try to build this dockerfile:
E: Unable to locate package pip
E: Unable to locate package install
E: Unable to locate package pip
E: Unable to locate package pip
E: Unable to locate package install
E: Unable to locate package pwntools
The pip install lines are new commands that need their own RUN instruction; they are not part of apt-get, so you need to remove the preceding backslash and put RUN before those lines. Try this:
FROM radare/radare2
USER root
RUN apt-get update && \
apt-get install -y \
build-essential \
nasm \
gdb \
python \
python-pip \
python-dev \
vim \
git \
libffi-dev \
libssl-dev \
libc6-i386 \
libc6-dev-i386 \
lsb-core
RUN pip install --upgrade pip
RUN pip install --upgrade pwntools
USER r2
RUN git clone https://github.com/longld/peda.git ~/peda && \
echo "source ~/peda/peda.py" >> ~/.gdbinit
RUN "/bin/bash"
or better in a single RUN instruction:
RUN apt-get update && \
apt-get install -y \
build-essential \
(...)
lsb-core \
&& pip install --upgrade pip \
&& pip install --upgrade pwntools
In my project, I have a dockerized microservice based on ubuntu:trusty which I wanted to update to Python 2.7.13 from the standard apt-get 2.7.6 version. In doing so, I ran into some module import issues. Since then, I've prepended python2.7/dist-packages, which contains all of the modules I'm concerned with, to my PYTHONPATH.
I built my microservice images using docker-compose build, but here's the issue: when I run docker-compose up, this microservice fails to import any non-standard modules, yet when I create my own container from the same image using docker run -it image_id /bin/bash and then run a Python shell, importing any of those modules works perfectly. Even when I run the same Python script, it gets past all of these import statements (but fails later because it is running in isolation without proper linking).
I've verified that Python 2.7.13 is running both under docker-compose up and when I run my own container. I've cleared all of my containers, images, and cache and rebuilt with no progress. The command run at the end of the Dockerfile is CMD python /filename/file.py.
Any ideas what could cause such a discrepancy?
EDIT:
As requested, here's the Dockerfile. The file structure is simply a project folder with subfolders, each being its own dockerized microservice. The one of concern here is called document_analyzer, and the relevant section of the docker-compose file follows. Examples of the packages that aren't importing properly are PyPDF2, pymongo, and boto3.
FROM ubuntu:trusty
# Built using PyImageSearch guide:
# http://www.pyimagesearch.com/2015/06/22/install-opencv-3-0-and-python-2-7-on-ubuntu/
# Install dependencies
RUN \
apt-get -qq update && apt-get -qq upgrade -y && \
apt-get -qq install -y \
wget \
unzip \
libtbb2 \
libtbb-dev && \
apt-get -qq install -y \
build-essential \
cmake \
git \
pkg-config \
libjpeg8-dev \
libtiff4-dev \
libjasper-dev \
libpng12-dev \
libgtk2.0-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libatlas-base-dev \
gfortran \
libhdf5-dev \
libreadline-gplv2-dev \
libncursesw5-dev \
libssl-dev \
libsqlite3-dev \
tk-dev \
libgdbm-dev \
libc6-dev \
libbz2-dev \
libxml2-dev \
libxslt-dev && \
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz && \
tar -xvf Python-2.7.13.tgz && \
cd Python-2.7.13 && \
./configure && \
make && \
make install && \
apt-get install -y python-dev python-setuptools && \
easy_install pip && \
pip install numpy==1.12.0 && \
apt-get autoclean && apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Download OpenCV 3.2.0 and install
# step 10
RUN \
cd ~ && \
wget https://github.com/Itseez/opencv/archive/3.2.0.zip && \
unzip 3.2.0.zip && \
mv ~/opencv-3.2.0/ ~/opencv/ && \
rm -rf ~/3.2.0.zip && \
cd ~ && \
wget https://github.com/opencv/opencv_contrib/archive/3.2.0.zip -O 3.2.0-contrib.zip && \
unzip 3.2.0-contrib.zip && \
mv opencv_contrib-3.2.0 opencv_contrib && \
rm -rf ~/3.2.0-contrib.zip && \
cd /root/opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON .. && \
cd ~/opencv/build && \
make -j $(nproc) && \
make install && \
ldconfig && \
# clean opencv repos
rm -rf ~/opencv/build && \
rm -rf ~/opencv/3rdparty && \
rm -rf ~/opencv/doc && \
rm -rf ~/opencv/include && \
rm -rf ~/opencv/platforms && \
rm -rf ~/opencv/modules && \
rm -rf ~/opencv_contrib/build && \
rm -rf ~/opencv_contrib/doc
RUN mkdir ~/.aws/ && touch ~/.aws/config && touch ~/.aws/credentials && \
echo "[default]" > ~/.aws/credentials && \
echo "AWS_ACCESS_KEY_ID=xxxxxxx" >> ~/.aws/credentials && \
echo "AWS_SECRET_ACCESS_KEY=xxxxxxx" >> ~/.aws/credentials && \
echo "[default]" > ~/.aws/config && \
echo "output = json" >> ~/.aws/config && \
echo "region = us-east-1" >> ~/.aws/config
RUN apt-get update && \
apt-get -y install bcrypt \
libssl-dev \
libffi-dev \
libpq-dev \
vim \
redis-server \
rsyslog \
imagemagick \
libmagickcore-dev \
libmagickwand-dev \
libmagic-dev \
curl
RUN pip install pyopenssl ndg-httpsclient pyasn1
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
ENV PYTHONPATH="/usr/local/lib/python2.7/dist-packages/:${PYTHONPATH}"
CMD python /document_analyzer/api.py
Docker-compose:
document_analyzer:
environment:
- IP=${IP}
extends:
file: common.yml
service: microservice
build: document_analyzer
ports:
- "5001:5001"
volumes:
- ./document_analyzer:/document_analyzer
- .:/var/lib/
environment:
- PYTHONPATH=$PYTHONPATH:/var/lib
links:
- redis
- rabbit
- ocr_runner
- tika
- document_envelope
- converter
restart: on-failure
You have this work being done during the build phase:
WORKDIR /document_analyzer
# Add requirements and install
COPY . /document_analyzer
RUN pip install -r /document_analyzer/requirements.txt && \
pip install -Iv https://pypi.python.org/packages/f5/1f/2d7579a6d8409a61b6b8e84ed02ca9efae8b51fd6228e24be88588fac255/tika-1.14.1.tar.gz#md5=aa7d77a4215e252f60243d423946de8d && \
pip install awscli
And at runtime you do this in the compose yaml file:
volumes:
- ./document_analyzer:/document_analyzer
That volume mount will override everything you did in /document_analyzer during the build. Only what is in the directory outside the container will now be available at /document_analyzer inside the container. Whatever was at /document_analyzer before, from the build phase, is now hidden by this mount and not available.
The difference when you use docker run is that you did not create this mount.
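A quick way to confirm this, as a sketch using the service name and paths from the compose file above:
# With the bind mount: this lists the host directory's contents
docker-compose run --rm document_analyzer ls /document_analyzer

# Without the mount (plain docker run on the built image): this lists what COPY put there
docker run --rm <image_id> ls /document_analyzer
If you want the image's build-time contents at runtime, drop that volume entry for the service; bind mounts of source directories are mainly useful when you want to live-edit code during development.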