I have a Django REST framework API that I'm trying to run in Docker. The project uses Poetry 1.1.12. From the build output, I can see that Poetry is installed correctly and that it installs the packages in my pyproject.toml, including Django. I'm using supervisor to run the API with Daphne, as well as some other tasks (like collecting static files).
However, when supervisor runs the app, I get:
Traceback (most recent call last):
File "/home/docker/api/manage.py", line 22, in <module>
main()
File "/home/docker/api/manage.py", line 13, in main
raise ImportError(
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
Traceback (most recent call last):
File "/home/docker/api/manage.py", line 11, in main
from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django'
Notice how I set POETRY_VIRTUALENVS_CREATE=false and ENV PATH="/root/.local/bin:${PATH}". According to the poetry installation script, that is the path that needs to be added to PATH.
Here is an abridged version of my Dockerfile:
FROM python:3.9-slim-buster
ENV PATH="/root/.local/bin:${PATH}"
RUN apt-get update && apt-get install -y --no-install-recommends \
... \
curl \
supervisor \
&& curl -sSL 'https://install.python-poetry.org' | python - && poetry --version \
&& apt-get remove -y curl \
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
COPY poetry.lock pyproject.toml /home/docker/api/
WORKDIR /home/docker/api
RUN if [ "$DEBUG" = "false" ] \
; then POETRY_VIRTUALENVS_CREATE=false poetry install --no-dev --no-interaction --no-ansi -vvv --extras "production" \
; else POETRY_VIRTUALENVS_CREATE=false poetry install --no-interaction --no-ansi -vvv --extras "production" \
; fi
COPY . /home/docker/api/
COPY .docker/services/api/files/supervisor.conf /etc/supervisor/conf.d/
CMD ["supervisord", "-n"]
Which is pretty much how I see others doing it. Any ideas?
Poetry documents itself as trying very, very hard to always run inside a virtual environment. However, a Docker container is itself isolated from other Pythons, and it's normal (and easiest) to install packages in the "system" Python.
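If you do want to keep Poetry inside the image, one common workaround (a sketch, not tested against your project) is to install Poetry with pip so that it shares the system interpreter, and disable virtualenv creation via config instead of an environment variable:
FROM python:3.9-slim-buster
WORKDIR /home/docker/api
# Poetry installed into the system interpreter, so with
# virtualenvs.create=false the dependencies land there too
RUN pip install "poetry==1.1.12"
COPY poetry.lock pyproject.toml ./
RUN poetry config virtualenvs.create false \
 && poetry install --no-dev --no-interaction --no-ansi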
Alternatively, there is a poetry export command that can convert Poetry's files to a normal pip requirements.txt file, and from there you can RUN pip install in your Dockerfile. You could use a multi-stage Dockerfile to generate that file without actually including Poetry in your main image.
FROM python:3.9-slim-buster AS poetry
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN poetry export -f requirements.txt --output requirements.txt
FROM python:3.9-slim-buster
WORKDIR /app
COPY --from=poetry /app/requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["./manage.py", "runserver", "0.0.0.0:8000"]
django should show up in the generated requirements.txt file, and since pip install installs it as a "normal" "system" Python package, your application should be able to see it normally, without tweaking environment variables or other settings.
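If you want to sanity-check the export before wiring it into the build, running it locally from the project root should show Django pinned in the output, something like:
poetry export -f requirements.txt --output requirements.txt
grep -i '^django==' requirements.txt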
Could it be because of a missing DJANGO_SETTINGS_MODULE environment variable?
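If it were, you could rule that out by setting it explicitly in the Dockerfile; the module path below is a placeholder for whatever your project actually uses:
# hypothetical settings path; replace with your project's settings module
ENV DJANGO_SETTINGS_MODULE=api.settings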
Related
The Docker project was created on a Linux machine; I'm running Windows and I can't get docker-compose up to work. I've read through Stack Overflow's answers and so far I've tried the following (none have worked):
Using Visual Studio Code I saved as "LF" instead of CRLF
Deleted the file entirely, created a new one using Visual Studio Code, typed the words
Cut the entire file, pasted it in Notepad so that formatting gets cleared, copied and pasted back
Added various forms of #!/bin/bash to the start of the entrypoint.sh
Changed Docker File to use COPY instead of ADD
At this point I'm not sure what else to try. Any ideas?
Edit
entrypoint.sh
if [ "$1" == 'celery' ]; then
celery -A vicmun worker -l info --uid=celery --gid=celery
else
./../wait_for_it.sh db:5433 --timeout=10
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
fi
Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
ARG APP_ENV=${APP_ENV}
RUN mkdir /src
RUN mkdir /static
WORKDIR /src
ADD ./src /src
ADD entrypoint-${APP_ENV}.sh /entrypoint.sh
ADD wait_for_it.sh /wait_for_it.sh
RUN addgroup --system celery && adduser --system --ingroup celery celery
RUN ["chmod", "+x", "/wait_for_it.sh"]
RUN apt-get -y update
RUN apt-get -y install ffmpeg
RUN pip install -r requirements.txt
ENTRYPOINT ["bash", "/entrypoint.sh"]
I don't know your config, but I was able to resolve that problem by adding the script to the CMD. In my case, I could execute a script with Docker as follows:
Dockerfile
FROM python:3.10-alpine3.15
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN apk update \
&& apk add --no-cache gcc musl-dev postgresql-dev python3-dev libffi-dev \
&& pip install --upgrade pip
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD [ "sh", "entrypoint.sh" ]
entrypoint.sh
#!/bin/sh
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
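With that in place, the container can be built and started in the usual way (the image name here is arbitrary):
docker build -t myapp .
docker run -p 8000:8000 myapp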
Well, I feel the cringe for this. Turns out the solution was something I had already done, but it didn't go through until I rebuilt with the --no-cache option.
Solution was to:
Using Visual Studio Code I saved as "LF" instead of CRLF
and run docker-compose build --no-cache
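For anyone hitting the same wall, the line endings can also be normalized from the command line before rebuilding; a sketch, assuming dos2unix is installed (sed works too):
dos2unix entrypoint.sh
docker-compose build --no-cache
docker-compose up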
As I understand the documentation, whenever I add these lines to the config:
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.1.0
hooks:
- id: trailing-whitespace
it makes pre-commit download the hook code from this repo and execute it. Is it possible to pre-install all the hooks into a Docker image somehow, so that when I call pre-commit run, no network is used?
I found this section of the documentation describing how pre-commit caches all the repositories. They are stored in ~/.cache/pre-commit, and this location can be configured by setting the PRE_COMMIT_HOME env variable.
However, the caching only works when I do pre-commit run. But I want to pre-install everything w/o running the checks. Is it possible?
you're looking for the pre-commit install-hooks command
at the least you need something like this to cache the pre-commit environments:
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
disclaimer: I created pre-commit
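If you want the cached hook environments in a predictable location inside the image, the PRE_COMMIT_HOME variable mentioned in the question can be pointed somewhere explicit; a sketch, with the directory chosen arbitrarily:
ENV PRE_COMMIT_HOME=/opt/pre-commit-cache
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks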
The snippet provided by @anthony-sottile works like a charm. It helps utilize the Docker cache. Here is a working variation of it from the Django world.
ARG PYTHON_VERSION=3.9-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=test
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=test
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY ./compose/test/django/entrypoint /entrypoint
RUN chmod +x /entrypoint
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
# copy application code to WORKDIR
COPY . ${APP_HOME}
ENTRYPOINT ["/entrypoint"]
then you can fire pre-commit checks in a similar fashion:
docker-compose -p project_name -f test.yml run --rm django pre-commit run --all-files
I can't figure out how to use an external library as a Python module in production. Any help on this issue is much appreciated.
I am importing FreeCAD as a Python module in my Django app as follows.
views.py
import sys
sys.path.append('freecad/lib')
import FreeCAD
import Part
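(As a side note, that relative 'freecad/lib' entry only resolves when the process is started from the project root; an absolute variant, sketched below under the assumption that the freecad directory sits next to manage.py, one level above this views.py, is less fragile:)
import os
import sys

# absolute path to freecad/lib, assuming the freecad directory sits next to
# manage.py, one level above this file; adjust to the actual layout
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(os.path.join(BASE_DIR, 'freecad', 'lib'))

import FreeCAD
import Part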
The FreeCAD bin and lib files reside in the root directory, alongside the manage.py file.
Everything works fine on my local server. I can import FreeCAD and do data processing on CAD files.
But things start to break when I deploy the app on Google Cloud. After deployment it threw this error:
Exception Value: libFreeCADApp.so: cannot open shared object file: No such file or directory
I also built a Docker image of this application to ensure consistent dependencies, but got the same kind of result: the local server finds the FreeCAD library and runs fine, but Docker throws this error:
ModuleNotFoundError: No module named 'FreeCAD'.
Dockerfile content
# Base Image
FROM python:3.8.5
# set default environment variables (don't worry about these)
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
# create and set working directory
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# Pass requirements to Docker
COPY ./requirements.txt /requirements.txt
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8888
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
tzdata \
python3-setuptools \
python3-pip \
python3-dev \
python3-venv \
git \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
RUN pip3 install pipenv
RUN pip3 install -r /requirements.txt
# Install project dependencies
RUN pipenv install --skip-lock --system --dev
EXPOSE 8888
CMD gunicorn mod_project.wsgi:application --bind 0.0.0.0:$PORT
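One note on the libFreeCADApp.so error: "cannot open shared object file" usually means the dynamic loader, not Python, can't locate FreeCAD's native libraries. A hedged sketch of one way to surface them in this image, assuming FreeCAD was copied into /app/freecad by the ADD above:
# let the dynamic loader find FreeCAD's .so files (path is an assumption)
ENV LD_LIBRARY_PATH=/app/freecad/lib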
requirements.txt
asgiref==3.3.1
cachetools==4.1.1
certifi==2020.12.5
cffi==1.14.4
chardet==3.0.4
cycler==0.10.0
Django==3.1.4
django-storages==1.10.1
google-api-core==1.23.0
google-auth==1.23.0
google-cloud-core==1.4.4
google-cloud-storage==1.33.0
google-crc32c==1.0.0
google-resumable-media==1.1.0
googleapis-common-protos==1.52.0
gunicorn==20.0.4
idna==2.10
kiwisolver==1.3.1
matplotlib==3.3.3
numpy==1.19.4
numpy-stl==2.13.0
Pillow==8.0.1
protobuf==3.14.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
PyMySQL==0.10.1
pyparsing==2.4.7
python-dateutil==2.8.1
python-utils==2.4.0
pytz==2020.4
requests==2.25.0
rsa==4.6
six==1.15.0
sqlparse==0.4.1
stripe==2.55.1
urllib3==1.26.2
I am using pdfkit in my django application and it seems to be working fine after I installed wkhtmltopdf on my machine.
But when I build a Docker image of my application for production and run it locally, it gives me an OS error inside the Docker container. I have tried everything I found on the web but can't seem to install wkhtmltopdf in my Docker container.
Here's my Dockerfile for building the image; it gives an error while installing the package.
FROM python:3.6.9
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.1/wkhtmltox-0.12.1_linux-wheezy-amd64.deb
RUN dpkg -i ~/Downloads/wkhtmltox-0.12.1_linux-wheezy-amd64.deb
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Here's the error I get in the terminal while building the image, and the error without wkhtmltopdf in Docker (screenshots omitted).
I figured it out.
My Dockerfile was missing some code.
FROM python:3.6.9
RUN wget https://s3.amazonaws.com/shopify-managemant-app/wkhtmltopdf-0.9.9-static-amd64.tar.bz2
RUN tar xvjf wkhtmltopdf-0.9.9-static-amd64.tar.bz2
RUN mv wkhtmltopdf-amd64 /usr/local/bin/wkhtmltopdf
RUN chmod +x /usr/local/bin/wkhtmltopdf
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Now the image is running just fine
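With the static binary at /usr/local/bin, pdfkit should normally find it on the PATH; if it doesn't, it can be pointed at the binary explicitly. A minimal sketch using pdfkit's configuration API:
import pdfkit

# explicit binary location, matching where the Dockerfile placed it
config = pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')

# render HTML to PDF bytes (output path False means "return the bytes")
pdf_bytes = pdfkit.from_string('<h1>Hello</h1>', False, configuration=config)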
This Dockerfile works with django and the newest version of wkhtmltopdf (0.12.6-1)
# pull official base image
FROM python:3.9-buster
RUN apt-get update \
&& apt-get install -y \
curl \
libxrender1 \
libjpeg62-turbo \
fontconfig \
libxtst6 \
xfonts-75dpi \
xfonts-base \
xz-utils
RUN curl "https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.buster_amd64.deb" -L -o "wkhtmltopdf.deb"
RUN dpkg -i wkhtmltopdf.deb
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
I'm using Django 2 and Python 3.7. I have the following directory structure.
web
- Dockerfile
- manage.py
+ maps
- requirements.txt
+ static
+ tests
+ venv
"requirements.txt" is just a file I generated by running "pip3 freeze > requirements.txt". I have the below Dockerfile for my Django container ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
pip3 freeze > requirements.txt
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
I was wondering if there is a way to build my container such that it auto-generates and copies the correct requirements.txt file. As you might guess, the line
pip3 freeze > requirements.txt
that I have attempted to include above causes the whole thing to die when running "docker-compose build" with the error:
ERROR: Dockerfile parse error line 15: unknown instruction: PIP3
This makes no sense, as the environment in the Docker container will be empty at that point, and freezing it would just overwrite your requirements.txt with nothing useful.
You are also missing RUN:
RUN pip3 freeze > requirements.txt
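The usual workflow is to generate the file on the host, where the packages actually exist, and let the Dockerfile only COPY it; for example, from your activated virtualenv:
pip3 freeze > requirements.txt
docker-compose build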