I'm using Django 2.x and dockerizing the application.
I have the following Dockerfile:
FROM python:3.7-alpine
# Install Rabbit-mq server
RUN echo #testing http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk --update add \
libxml2-dev \
libxslt-dev \
libffi-dev \
gcc \
musl-dev \
libgcc curl \
jpeg-dev \
zlib-dev \
freetype-dev \
lcms2-dev \
openjpeg-dev \
tiff-dev \
tk-dev \
tcl-dev \
mariadb-connector-c-dev \
supervisor \
nginx \
--no-cache bash
# Set environment variable
ENV PYTHONUNBUFFERED 1
# Set locale variables
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
# -- Install Application into container:
RUN set -ex && mkdir /app
# -- Adding dependencies:
ADD . /app/
# Copy environment variable file
ADD src/production.env /app/src/.env
COPY scripts/docker/entrypoint.sh /app/
# Switch to the working directory
WORKDIR /app
RUN chmod ug+x /app/entrypoint.sh
# Install Pipenv first to use pipenv module
RUN pip install pipenv
# -- Adding Pipfiles
ONBUILD COPY Pipfile Pipfile
ONBUILD COPY Pipfile.lock Pipfile.lock
RUN pipenv install --system --deploy
RUN mkdir -p /etc/supervisor.d
COPY configs/docker/supervisor/supervisor.conf /etc/supervisor.d/supervisord.ini
EXPOSE 80 8000
CMD ["/app/entrypoint.sh"]
and the entrypoint.sh has:
#!/usr/bin/env bash
exec gunicorn --pythonpath src qcg.wsgi:application -w 3 -b 0.0.0.0:8000 -t 300 --max-requests=100
I build the image using the command
docker build . -t qcg_new
and run it using
docker run qcg_new
The container starts and the gunicorn server is running on port 8000:
[2019-09-16 09:03:31 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-09-16 09:03:31 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2019-09-16 09:03:31 +0000] [1] [INFO] Using worker: sync
[2019-09-16 09:03:31 +0000] [10] [INFO] Booting worker with pid: 10
[2019-09-16 09:03:31 +0000] [11] [INFO] Booting worker with pid: 11
[2019-09-16 09:03:31 +0000] [12] [INFO] Booting worker with pid: 12
But when I visit http://127.0.0.1:8000 or http://localhost:8000 in the browser, the page does not open.
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
See the Docker documentation for EXPOSE.
So you need to add -p 8000:8000 to your docker run command.
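For example, publishing container port 8000 on host port 8000 for the image built above:
docker run -p 8000:8000 qcg_new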
Related
My docker-compose setup creates 3 containers: django, celery and rabbitmq. When I run docker-compose build and docker-compose up, the containers run successfully.
However, I am having issues with deploying the image. The generated image has an ID of 24d7638e2aff. For whatever reason, if I just run the command below, nothing happens and the container exits with code 0. Both the django and celery applications have the same image ID.
docker run 24d7638e2aff
This is not good, as I am unable to deploy this image on Kubernetes. My only thought is that the Dockerfile has been configured wrongly, but I cannot figure out the cause.
docker-compose.yaml
version: "3.9"
services:
django:
container_name: testapp_django
build:
context: .
args:
build_env: production
ports:
- "8000:8000"
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
links:
- rabbitmq
- celery
rabbitmq:
container_name: testapp_rabbitmq
restart: always
image: rabbitmq:3.10-management
ports:
- "5672:5672" # specifies port of queue
- "15672:15672" # specifies port of management plugin
celery:
container_name: testapp_celery
restart: always
build:
context: .
args:
build_env: production
command: celery -A testapp worker -l INFO -c 4
depends_on:
- rabbitmq
Dockerfile
ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
&& adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
# git for GitPython commands
git-all \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
entrypoint
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
exec "$#"
How do I build images of these containers so that I can deploy them to k8s?
The Compose command: overrides the Dockerfile CMD. docker run doesn't look at the docker-compose.yml file at all, and docker run with no particular command runs the image's CMD. You haven't declared anything for that, which is why the container exits immediately.
Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything). Add a CMD line to the Dockerfile:
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
Now plain docker run as you've shown it will attempt to start the Django server. For the Celery container, you can still pass a command override:
docker run -d --net ... your-image \
celery -A testapp worker -l INFO -c 4
If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use args: in your pod spec to provide the alternate command, not command:.
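As a rough sketch (the image and container names here are placeholders, not taken from your setup), a Deployment's container spec could pass the Celery command via args, so the /entrypoint script still runs and execs it:
containers:
  - name: celery
    image: your-image
    # args replaces the image CMD; command would replace the ENTRYPOINT instead
    args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]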
I think that is because the commands to run the Django server are in the docker-compose.yml. You should move these commands inside the entrypoint:
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate && python manage.py runserver 0.0.0.0:8000
exec "$@"
Note that the command python manage.py runserver 0.0.0.0:8000 starts the application with the development server, which must not be used in production.
You should look at gunicorn or similar instead.
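For example, assuming the project's WSGI module is testapp.wsgi (a guess based on the celery -A testapp command above), a production-oriented CMD might be:
CMD gunicorn testapp.wsgi:application --bind 0.0.0.0:8000 --workers 3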
I'm trying to deploy a Django app in a Docker container.
It works well but the first time I open a page I receive these messages:
[CRITICAL] WORKER TIMEOUT
[INFO] Worker exiting
[INFO] Booting worker with pid: 19
The next time I open the same page it's pretty fast.
Here's my Dockerfile (generated by wagtail app starter)
FROM python:3.8.6-slim-buster
RUN useradd wagtail
EXPOSE 8000
ENV PYTHONUNBUFFERED=1 \
PORT=8000
RUN apt-get update --yes --quiet && apt-get install --yes --quiet --no-install-recommends \
build-essential \
libpq-dev \
libmariadbclient-dev \
libjpeg62-turbo-dev \
zlib1g-dev \
libwebp-dev \
nano \
vim \
&& rm -rf /var/lib/apt/lists/*
RUN pip install "gunicorn==20.0.4"
COPY requirements.txt /
RUN pip install -r /requirements.txt
WORKDIR /app
RUN chown wagtail:wagtail /app
COPY --chown=wagtail:wagtail . .
USER wagtail
RUN python manage.py collectstatic --noinput --clear
CMD set -xe; python manage.py migrate --noinput; gunicorn mysite.wsgi:application
How can I fix that?
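One common mitigation, assuming the timeout comes from the application being loaded lazily on the first request, is to raise gunicorn's worker timeout and preload the app. A sketch, keeping the same CMD with two extra flags:
CMD set -xe; python manage.py migrate --noinput; gunicorn mysite.wsgi:application --timeout 120 --preload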
I am trying to deploy a fairly simple Python web app with FastAPI and Gunicorn on Google Cloud Run with a Docker container, following this tutorial, and upon deploying I keep hitting the same error:
Invalid ENTRYPOINT. [name: "gcr.io/<project_id>/<image_name>#sha256:xxx" error: "Invalid command \"/bin/sh\": file not found" ].
Building the image and pushing it to the Container Registry works fine.
On Cloud Run I have set my secrets for the database connection, and I am passing an argument to the Dockerfile specifying which settings.py file to use for the production environment, as I did locally to build/run the container.
Any idea what I am missing or doing wrong in the process? It's my first attempt at deploying a web app on a cloud service, so I might not have all the concepts down just yet.
Dockerfile
FROM ubuntu:latest
ENV PYTHONUNBUFFERED 1
RUN apt update && apt upgrade -y
RUN apt install -y -q build-essential python3-pip python3-dev
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn uvloop httptools
COPY requirements.txt /code/requirements.txt
RUN pip3 install -r /code/requirements.txt
COPY . code/
# Pass the settings_module as an argument when building the image
ARG SETTINGS_MODULE
ENV SETTINGS_MODULE $SETTINGS_MODULE
EXPOSE $PORT
CMD exec /usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app --chdir /code
cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
args: ["build", "--build-arg", "SETTINGS_MODULE=app.settings_production", "-t", "gcr.io/$PROJECT_ID/<image_name>", "."]
images:
- gcr.io/$PROJECT_ID/<image_name>
gcloud builds submit --config=cloudbuild.yaml
Update
I replaced ubuntu:latest (==20.04) with debian:buster-slim and it worked.
Previously
Deploying to Cloud Run, I received the error too... I suspected it was the PORT; investigating. Not the PORT. Curiously, the image runs locally. Trying a different OS!
I repro'd your Dockerfile and cloudbuild.yaml in a project and the build and run succeed for me:
docker run \
--interactive --tty \
--env=PORT=8888 \
gcr.io/${PROJECT}/67486954
[2021-05-11 16:09:44 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2021-05-11 16:09:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8888 (1)
NOTE To build from a Dockerfile, you need not create a cloudbuild.yaml and can just gcloud builds submit --tag gcr.io/PROJECT_ID/...
A good way to diagnose the issue is to run the docker build locally:
docker build \
--build-arg=SETTINGS_MODULE=app.settings_production \
--tag=gcr.io/$PROJECT_ID/<image_name> \
.
And then attempt to run it:
docker run \
--interactive --tty --rm \
gcr.io/$PROJECT_ID/<image_name>
This isolates Cloud Build as the issue and will likely result in the same error.
The error suggests that the container isn't finding a shell (/bin/sh) in the ubuntu:latest image which is curious.
I think you can (or should) drop the `exec` after `CMD`.
NOTE I read through Google's tutorial and see that the instructions include CMD exec ...; I'm unclear why that would be necessary, but presumably it's not a problem.
Can you run the gunicorn command locally without issue?
/usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app --chdir /code
The placement of --chdir /code is curious too. How about:
WORKDIR code
COPY . .
...
CMD /usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app
Hmmm, perhaps move the --chdir before app.main:app too, so that it's applied to gunicorn rather than to your app:
/usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker --chdir /code app.main:app
I created a simple Django project with Docker. Following Heroku's documentation about the release phase with the container registry (https://devcenter.heroku.com/articles/container-registry-and-runtime#release-phase), I created a new app with the postgres addon. To deploy the app with Docker I executed the following commands:
heroku container:push web
heroku container:push release
heroku container:release web release
But after the last command my terminal is blocked, and it looks like the release phase actually runs a container.
Releasing images web,release to teleagh... done
Running release command...
[2019-12-30 21:22:00 +0000] [17] [INFO] Starting gunicorn 19.9.0
[2019-12-30 21:22:00 +0000] [17] [INFO] Listening at: http://0.0.0.0:5519 (17)
[2019-12-30 21:22:00 +0000] [17] [INFO] Using worker: sync
[2019-12-30 21:22:00 +0000] [27] [INFO] Booting worker with pid: 27
My goal is to run Django migrations before release. I would really appreciate any help.
Procfile:
release: python manage.py migrate
Dockerfile:
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD gunicorn --bind 0.0.0.0:$PORT teleagh.wsgi
The Procfile has no effect when container deployment is used on Heroku. If you want to set a release phase command, I can suggest two options that I have tested a lot:
1. Create a dedicated Dockerfile for each phase, with an extension matching the phase name.
Dockerfile.web
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD gunicorn --bind 0.0.0.0:$PORT teleagh.wsgi
Dockerfile.release
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD python manage.py migrate
The deployment process will look the same as yours, with one exception: the push command must have the additional argument --recursive. Moreover, it is possible to push all containers in one command:
heroku container:push web release --recursive
heroku container:release web release
2. Create a bash script that detects which phase is currently running in the container.
start.sh
#!/bin/bash
if [ -z "$SSH_CLIENT" ] && [ -n "$HEROKU_EXEC_URL" ];
then
source <(curl --fail --retry 3 -sSL "$HEROKU_EXEC_URL")
fi
if [[ "$DYNO" =~ ^release.* ]];
then
set -e
python3 manage.py migrate
else
exec gunicorn teleagh.wsgi -b 0.0.0.0:${PORT} --reload --access-logfile -
fi
Then the single Dockerfile will look like this:
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD ./start.sh
Hope this will be helpful.
You can add a release phase to the heroku.yml file. For example:
build:
...
release:
command:
- rake db:migrate
run:
web: bundle exec puma -C config/puma.rb
Source
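For the Django setup in this question, the heroku.yml might look roughly like this (a sketch; the release and run commands are taken from the Procfile and Dockerfile above):
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - python manage.py migrate
run:
  web: gunicorn --bind 0.0.0.0:$PORT teleagh.wsgi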
I have my Ubuntu server set up so my Django project will be started by upstart like this:
#!/bin/bash
set -e
LOGFILE=/var/log/gunicorn/foo.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=3
# user/group to run as
USER=django
GROUP=django
cd /var/www/webapps/foo
source ../env/bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec ../env/bin/gunicorn_django -w $NUM_WORKERS \
--user=$USER --group=$GROUP --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE && celeryd -l info -B
As you can see, I also added celeryd at the end. But it is not started; I'm sure of this because my tasks are not getting done. When I start it in the terminal on the server with manage.py celeryd -l info -B, it does start and I can see the tasks being done.
How am I supposed to start it with Django?
Your exec replaces the shell with gunicorn, so the && celeryd part never runs. You should create a separate upstart script for starting celeryd. This should get you started:
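A minimal upstart job might look something like the following (a sketch; the paths and user are assumptions based on your gunicorn script, and setuid/setgid need Upstart 1.4 or newer):
# /etc/init/celeryd.conf
description "celery worker for foo"
start on runlevel [2345]
stop on runlevel [016]
respawn
setuid django
setgid django
chdir /var/www/webapps/foo
# use the virtualenv's python directly instead of sourcing activate
exec /var/www/webapps/env/bin/python manage.py celeryd -l info -B
You can then start it with sudo start celeryd.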