resolve pipenv install weasyprint error in Docker - django

I'm using Alpine linux for my Docker setup.
Here is the Dockerfile.
# pull official base image
FROM python:3.7.4-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk --update --upgrade --no-cache add cairo-dev pango-dev gdk-pixbuf
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev jpeg-dev zlib-dev libffi-dev \
&& apk add postgresql \
&& apk add postgresql-dev \
&& pip install psycopg2 \
&& apk add jpeg-dev zlib-dev libjpeg \
&& pip install Pillow \
&& apk del build-deps
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
# copy project
COPY . /usr/src/app/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
which stalls on the installation of cairocffi with the error:
unable to execute 'gcc': No such file or directory.

This page is going to save your life, whether you're using pipenv or not.
For Alpine (>=3.6), use:
apk --update --upgrade add gcc musl-dev jpeg-dev zlib-dev libffi-dev cairo-dev pango-dev gdk-pixbuf-dev

However, I found this link, which recommends adding the line:
RUN apk add --update python python-dev py-pip build-base
to the Dockerfile, and that works.
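Putting the two suggestions together, the dependency-installation part of the Dockerfile might look like this (a sketch, not the exact accepted answer; package names as of current Alpine releases):

```dockerfile
FROM python:3.7.4-alpine
WORKDIR /usr/src/app
# Build tools and headers that cairocffi/WeasyPrint need to compile,
# plus the cairo/pango/gdk-pixbuf libraries they use at runtime
RUN apk --update --upgrade add \
    gcc musl-dev jpeg-dev zlib-dev libffi-dev \
    cairo-dev pango-dev gdk-pixbuf-dev
RUN pip install pipenv
COPY ./Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev
```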

Related

Pillow is not installed after removing .tmp-deps

The error
ERRORS:
app_1 | core.Page.image: (fields.E210) Cannot use ImageField because Pillow is not installed.
Pillow is detected as not installed in my Docker container if I delete the .tmp-deps virtual group after installing requirements.txt. I say this because if I remove the 'apk del .tmp-deps' line, the error goes away. However, I want to delete .tmp-deps, because I've learned it is best practice to keep the Docker image as lean as possible.
Dockerfile
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev linux-headers \
python3-dev zlib-dev jpeg-dev gcc && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps
requirements.txt
django>=3.2.3,<3.3
psycopg2>=2.8.6,<2.9
uWSGI>=2.0.19.1,<2.1
djangorestframework >=3.12.4, <3.20.0
Pillow >= 8.4.0, <8.5.0
Any pointer would be greatly appreciated.
Alright. After looking at the Dockerfile, I saw that postgresql-client is not in the --virtual .tmp-deps group. This means some dependencies have to stay in the container for certain packages to work (it was not obvious to me).
What I learned from this is that I need to add jpeg-dev to the line outside the .tmp-deps group.
Updated Dockerfile
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache postgresql-client jpeg-dev && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev linux-headers python3-dev gcc zlib-dev && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps
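The pattern generalizes: anything only needed to compile wheels (compilers, headers) can go in the --virtual group and be deleted, while the shared libraries the compiled packages link against at runtime must stay outside it. A minimal sketch of the pattern:

```dockerfile
# Runtime libraries: stay in the final image (Pillow links against libjpeg)
RUN apk add --no-cache jpeg-dev
# Build-only tools: grouped under a virtual name so one command removes them all
RUN apk add --no-cache --virtual .build-deps build-base zlib-dev && \
    pip install Pillow && \
    apk del .build-deps
```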

How to add django-crontab in docker container with non-rooted user django project

Working on a Django project which is running on docker-container with python:3.9-alpine3.13
FROM python:3.9-alpine3.13
LABEL maintainer=<do not want to show>
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./app /app
COPY ./scripts /scripts
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev gcc python3-dev bash openssl-dev libffi-dev libsodium-dev linux-headers && \
apk add jpeg-dev zlib-dev libjpeg && \
apk add --update busybox-suid && \
apk --no-cache add dcron libcap && \
/py/bin/pip install --upgrade pip && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps && \
adduser --disabled-password --no-create-home app && \
mkdir -p /vol/web/static && \
chown -R app:app /vol && \
chmod -R 755 /vol && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:$PATH"
USER app
CMD ["run.sh"]
I used this tutorial for the implementation, but I don't think the error is caused by it.
I am getting this error:
sumit@LAPTOP-RT539Q9C MINGW64 ~/Desktop/RentYug/rentyug-backend-deployment (main)
$ docker-compose run --rm app sh -c "python manage.py crontab show"
WARNING: Found orphan containers (rentyug-backend-deployment_proxy_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating rentyug-backend-deployment_app_run ... done
/bin/sh: /usr/bin/crontab: Permission denied
Currently active jobs in crontab:
I used these lines for that:
apk add --update busybox-suid && \
apk --no-cache add dcron libcap && \
I found my answer: cron should run as the root user. I found that answer there.
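The linked answer isn't quoted above, but one common way to reconcile this with a non-root app user (an assumption on my part, not the original answer) is to give the crontab binary the setuid bit so the non-root user can manage jobs while crond itself is started by root:

```dockerfile
# Hypothetical addition, run while the build is still root (before USER app):
# dcron's crontab binary needs elevated rights to write the cron spool directory.
RUN chmod u+s /usr/bin/crontab
```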

Docker Django installation error of Pillow Package

I am dockerising my Django apps. As you all know, if you use Django's ImageField you need the Pillow package, but currently my Docker build installs all the other packages and then shows an error when it tries to install Pillow.
my Dockerfile
# pull official base image
FROM python:3.7-alpine
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0
# install psycopg2
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
&& pip install psycopg2 \
&& apk del build-deps
# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
# collect static files
RUN python manage.py collectstatic --noinput
# add and run as non-root user
RUN adduser -D myuser
USER myuser
# run gunicorn
CMD gunicorn projectile.wsgi:application --bind 0.0.0.0:$PORT
and this is requirements.txt file
Django==2.2.2
Pillow==5.0.0
dj-database-url==0.5.0
gunicorn==19.9.0
whitenoise==4.1.2
psycopg2==2.8.4
I am not getting what's wrong with it or why Pillow is not installing; it throws the error below:
The headers or library files could not be found for zlib,
remote: a required dependency when compiling Pillow from source.
remote:
remote: Please see the install instructions at:
remote: https://pillow.readthedocs.io/en/latest/installation.html
remote:
remote:
remote: ----------------------------------------
remote: ERROR: Command errored out with exit status 1: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-k4_gcdwn/Pillow/setup.py'"'"'; __file__='"'"'/tmp/pip-install-k4_gcdwn/Pillow/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-qgvai9fm/install-record.txt --single-version-externally-managed --compile Check the logs for full command output.
Can anyone help me to fix this error?
Thanks
You can add zlib-dev to your apk add section and install Pillow there. For example (explanations are in the comments):
RUN apk update \
# build-deps will be removed after the dependent Python packages are installed
&& apk add --virtual build-deps gcc python3-dev musl-dev zlib-dev postgresql-dev jpeg-dev \
# these packages won't be deleted from the Docker image
&& apk add postgresql zlib jpeg \
# install the Python packages that depend on build-deps now, because build-deps is deleted afterwards
&& pip install psycopg2 Pillow==5.0.0 \
# delete build-deps to reduce the size of the Docker image
&& apk del build-deps
# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Rest of the code
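If you want the build to fail early when Pillow didn't link correctly against the installed libraries, a small sanity check can be appended (a sketch):

```dockerfile
# Import Pillow at build time; the build aborts here if the C extension is broken
RUN python -c "from PIL import Image; print('Pillow OK')"
```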

How to setup docker-compose, pytest, Selenium Driver and geckodriver/chromedriver correctly

I am testing my django app with pytest. Now I want to use Selenium for system tests. I am also using docker so to execute my tests I do docker-compose -f local.yml run django pytest
When I try to use Selenium I get the message that the 'geckodriver' executable needs to be in PATH.
I read many questions here on SO and I followed the instructions. Yet, I think my path can't be found because I am executing everything in a docker-container. Or am I mistaken?
I added the geckodriver path to my bash-profile by checking my path with:
which geckodriver. The result of this is:/usr/local/bin/geckodriver
Then I added this to my profile, and when I type echo $PATH I can see this: /usr/local/bin/geckodriver/bin:. So I assume everything is set correctly. I also tried setting it manually with:
self.selenium = webdriver.Firefox(executable_path=r'/usr/local/bin/geckodriver')
Since I still get the error, I assume it has something to do with Docker. That's why my specific question is: how can I set up the PATH for geckodriver with pytest, Selenium, and docker-compose?
Any help or hint into the right direction is very much appreciated! Thanks in advance!
Here is my Dockerfile, as requested:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# Translations dependencies
&& apk add gettext \
# https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
&& apk add postgresql-client
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install -r /requirements/local.txt
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
When I am debugging by doing
for p in sys.path:
print(p)
I get:
/usr/local/bin
/usr/local/lib/python36.zip
/usr/local/lib/python3.6
/usr/local/lib/python3.6/site-packages
So my geckodriver is not there I assume?
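One thing worth noting: sys.path is Python's module search path, which is unrelated to the PATH environment variable that Selenium uses to locate the geckodriver executable. To check whether the driver is actually visible inside the container, something like this (a sketch) helps:

```shell
# Look up geckodriver the same way Selenium does: via the PATH env var
if command -v geckodriver >/dev/null 2>&1; then
    echo "geckodriver found at: $(command -v geckodriver)"
else
    echo "geckodriver is NOT on PATH; PATH=$PATH"
fi
```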

apt-get not found in Docker

I've got this Dockerfile:
FROM python:3.6-alpine
FROM ubuntu
FROM alpine
RUN apk update && \
apk add --virtual build-deps gcc python-dev musl-dev
RUN apt-get update && apt-get install -y python-pip
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "main.py"]
and it's throwing error saying /bin/sh: apt-get: not found.
I thought apt-get package is part of Ubuntu image that I'm pulling on the
second line but yet it's giving me this error.
How can I fix this ?
As tkausl said: you can only use one base image (one FROM).
Alpine's package manager is apk, not apt-get; you have to use apk to install packages. However, pip is already available.
that Dockerfile should work:
FROM python:3.6-alpine
RUN apk update && \
apk add --virtual build-deps gcc python-dev musl-dev
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "main.py"]
apt-get does not work because the active Linux distribution is Alpine, which does not have the apt-get command.
You can fix it by using the apk command instead.
Most probably the image you're using is Alpine, so you can't use apt-get (Ubuntu's package manager).
Instead, you can use:
apk update and apk add
Multiple FROM lines can be used in a single Dockerfile.
See discussion and Multi stage tutorial
The use of Python Alpine, plus Ubuntu, plus Alpine is probably redundant. The Python Alpine image alone should be sufficient, as it uses Alpine internally.
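For reference, the legitimate use of multiple FROM lines is a multi-stage build, where earlier stages exist only to produce artifacts. A minimal sketch:

```dockerfile
# Stage 1: a Debian-based stage where apt-get exists
FROM ubuntu AS builder
RUN apt-get update && apt-get install -y --no-install-recommends gcc

# Stage 2: the final image; only this stage's contents are shipped.
# Instructions here run on Alpine, so apk (not apt-get) must be used.
FROM python:3.6-alpine
# COPY --from=builder /some/built/artifact /some/built/artifact
```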
I had a similar issue not with apk but with apt-get.
FROM node:14
FROM jekyll/jekyll
RUN apt-get update
RUN apt-get install -y \
sqlite
Error:
/bin/sh: apt-get: not found
If I change the order, then it works, because the RUN instructions now execute in the node:14 stage (which is Debian-based and has apt-get).
FROM node:14
RUN apt-get update
RUN apt-get install -y \
sqlite
FROM jekyll/jekyll
Note: as mentioned in the first link I added above, multiple FROMs might be removed from Docker as a feature.