I created a simple Django project with Docker. Following Heroku's documentation on the release phase with the Container Registry (https://devcenter.heroku.com/articles/container-registry-and-runtime#release-phase), I created a new app with the Postgres add-on. To deploy the app with Docker I ran the following commands:
heroku container:push web
heroku container:push release
heroku container:release web release
But after the last command my terminal is blocked, and it looks like the release phase actually runs a container:
Releasing images web,release to teleagh... done
Running release command...
[2019-12-30 21:22:00 +0000] [17] [INFO] Starting gunicorn 19.9.0
[2019-12-30 21:22:00 +0000] [17] [INFO] Listening at: http://0.0.0.0:5519 (17)
[2019-12-30 21:22:00 +0000] [17] [INFO] Using worker: sync
[2019-12-30 21:22:00 +0000] [27] [INFO] Booting worker with pid: 27
My goal is to run Django migrations during the release phase. I would really appreciate any help.
Procfile:
release: python manage.py migrate
Dockerfile:
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD gunicorn --bind 0.0.0.0:$PORT teleagh.wsgi
A Procfile has no effect when a container deploy is used on Heroku. If you want to set a release-phase command, I can suggest two options that I have tested extensively:
1. Create a dedicated Dockerfile for each phase, with the file extension matching the process type.
Dockerfile.web
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD gunicorn --bind 0.0.0.0:$PORT teleagh.wsgi
Dockerfile.release
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD python manage.py migrate
The deployment process will look the same as yours, with one exception: the push command needs the additional --recursive argument. It is also possible to push all images in one command:
heroku container:push web release --recursive
heroku container:release web release
2. Create a bash script that detects which phase the container is currently running in.
start.sh
#!/bin/bash
# Initialize Heroku Exec (heroku ps:exec) when its URL is provided
if [ -z "$SSH_CLIENT" ] && [ -n "$HEROKU_EXEC_URL" ];
then
    source <(curl --fail --retry 3 -sSL "$HEROKU_EXEC_URL")
fi
# Release-phase dynos have names starting with "release", so run migrations there;
# otherwise start the web server
if [[ "$DYNO" =~ ^release.* ]];
then
    set -e
    python3 manage.py migrate
else
    exec gunicorn teleagh.wsgi -b 0.0.0.0:${PORT} --reload --access-logfile -
fi
Then the single Dockerfile will look like this:
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT=8000
WORKDIR /app
ADD requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl
ADD . ./
RUN python manage.py collectstatic --noinput
CMD ./start.sh
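One thing to watch (my addition, not part of the original answer): if start.sh is not committed with its executable bit set, the shell-form CMD above will fail with "Permission denied", so an extra line before the CMD may be needed:
RUN chmod +x ./start.sh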
Hope this will be helpful
You can add a release phase to the heroku.yml file.
ex.
build:
  ...
release:
  command:
    - rake db:migrate
run:
  web: bundle exec puma -C config/puma.rb
Source
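For the Django project in the original question, a roughly equivalent heroku.yml could look like this (a sketch that reuses the Dockerfile and the teleagh module from the question, so treat the exact keys and commands as assumptions to verify against the heroku.yml docs):
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - python manage.py migrate
run:
  web: gunicorn --bind 0.0.0.0:$PORT teleagh.wsgi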
My docker-compose file creates 3 containers: django, celery, and rabbitmq. When I run docker-compose build and docker-compose up, the containers run successfully.
However, I am having issues deploying the image. The generated image has an ID of 24d7638e2aff. For whatever reason, if I just run the command below, nothing happens and the container exits with code 0. Both the django and celery applications use the same image ID.
docker run 24d7638e2aff
This is not good, as I am unable to deploy this image on Kubernetes. My only thought is that the Dockerfile has been configured incorrectly, but I cannot figure out the cause.
docker-compose.yaml
version: "3.9"
services:
django:
container_name: testapp_django
build:
context: .
args:
build_env: production
ports:
- "8000:8000"
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
links:
- rabbitmq
- celery
rabbitmq:
container_name: testapp_rabbitmq
restart: always
image: rabbitmq:3.10-management
ports:
- "5672:5672" # specifies port of queue
- "15672:15672" # specifies port of management plugin
celery:
container_name: testapp_celery
restart: always
build:
context: .
args:
build_env: production
command: celery -A testapp worker -l INFO -c 4
depends_on:
- rabbitmq
Dockerfile
ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
&& adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
# git for GitPython commands
git-all \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# Absolute-path COPY destinations ignore the WORKDIR instruction; relative destinations are resolved against it.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
entrypoint
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
exec "$#"
How do I build images of these containers so that I can deploy them to k8s?
The Compose command: overrides the Dockerfile CMD. docker run doesn't look at the docker-compose.yml file at all, and docker run with no particular command runs the image's CMD. You haven't declared anything for that, which is why the container exits immediately.
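Roughly what happens here, as I read it:
docker run 24d7638e2aff
# /entrypoint is invoked with no arguments because the image declares no CMD,
# so exec "$@" expands to nothing, the script simply ends, and the container exits 0.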
Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything). Add a CMD line to the Dockerfile:
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
Now plain docker run as you've shown it will attempt to start the Django server. For the Celery container, you can still pass a command override:
docker run -d --net ... your-image \
celery -A testapp worker -l INFO -c 4
If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use args: in your pod spec to provide the alternate command, not command:.
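A minimal sketch of the Celery pod using args: (the metadata, container name, and image reference are placeholders, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  name: testapp-celery
spec:
  containers:
    - name: celery
      image: registry.example.com/testapp:latest # placeholder image reference
      # args overrides the image's CMD; the ENTRYPOINT (/entrypoint) still runs first
      args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]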
I think that is because the commands to run the django server are in the docker-compose.yml.
You should move these commands inside the entrypoint.
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate && python manage.py runserver 0.0.0.0:8000
exec "$@"
Note that python manage.py runserver 0.0.0.0:8000 starts the application with Django's development server, which should not be used in production.
You should look at gunicorn or similar.
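For instance, the last lines of that entrypoint could become something like this (a sketch; testapp.wsgi is an assumption based on the Celery app name above, and gunicorn would need to be added to the requirements):
python manage.py migrate
exec gunicorn testapp.wsgi:application --bind 0.0.0.0:8000 --workers 4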
I am trying to deploy a fairly simple Python web app with FastAPI and Gunicorn on Google Cloud Run with a Docker container, following this tutorial, and upon deploying I keep running into the same error:
Invalid ENTRYPOINT. [name: "gcr.io/<project_id>/<image_name>#sha256:xxx" error: "Invalid command \"/bin/sh\": file not found" ].
Building the image and pushing it to the Container Registry works fine.
On Cloud Run I have set my secrets for the database connection, and I pass a build argument to the Dockerfile to choose which settings.py file to use for the production environment, as I did locally to build/run the container.
Any idea what I am missing or doing wrong in the process? It's my first attempt at deploying a web app on a cloud service, so I might not have all the concepts down just yet.
Dockerfile
FROM ubuntu:latest
ENV PYTHONUNBUFFERED 1
RUN apt update && apt upgrade -y
RUN apt install -y -q build-essential python3-pip python3-dev
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn uvloop httptools
COPY requirements.txt /code/requirements.txt
RUN pip3 install -r /code/requirements.txt
COPY . code/
# Pass the settings_module as an argument when building the image
ARG SETTINGS_MODULE
ENV SETTINGS_MODULE $SETTINGS_MODULE
EXPOSE $PORT
CMD exec /usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app --chdir /code
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ["build", "--build-arg", "SETTINGS_MODULE=app.settings_production", "-t", "gcr.io/$PROJECT_ID/<image_name>", "."]
images:
  - gcr.io/$PROJECT_ID/<image_name>
gcloud builds submit --config=cloudbuild.yaml
Update
I replaced ubuntu:latest (==20.04) with debian:buster-slim and it worked.
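The only change was the base image; a sketch of the affected lines, with the rest of the Dockerfile left as it was:
FROM debian:buster-slim
ENV PYTHONUNBUFFERED 1
RUN apt update && apt upgrade -y
# python3/pip3 come from these apt packages, so the remaining instructions are unchanged
RUN apt install -y -q build-essential python3-pip python3-dev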
Previously
Deploying to Cloud Run, I receive the error too... I suspect it's the PORT, investigating. Not the PORT. Curiously, the image runs locally. Trying a different OS!
I repro'd your Dockerfile and cloudbuild.yaml in a project and the build and run succeed for me:
docker run \
--interactive --tty \
--env=PORT=8888 \
gcr.io/${PROJECT}/67486954
[2021-05-11 16:09:44 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2021-05-11 16:09:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8888 (1)
NOTE To build from a Dockerfile, you need not create a cloudbuild.yaml and can just gcloud builds submit --tag gcr.io/PROJECT_ID/...
A good way to diagnose the issue is to run the docker build locally:
docker build \
--build-arg=SETTINGS_MODULE=app.settings_production \
--tag=gcr.io/$PROJECT_ID/<image_name> \
.
And then attempt to run it:
docker run \
--interactive --tty --rm \
gcr.io/$PROJECT_ID/<image_name>
This takes Cloud Build out of the picture and will likely reproduce the same error.
The error suggests that the container isn't finding a shell (/bin/sh) in the ubuntu:latest image, which is curious.
I think you can (or should) drop the `exec` after `CMD`.
NOTE: I read through Google's tutorial and see that the instructions include CMD exec ...; I'm unclear why that would be necessary, but presumably it's not a problem.
Can you run the gunicorn command locally without issue?
/usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app --chdir /code
The placement of --chdir /code is curious too. How about:
WORKDIR code
COPY . .
...
CMD /usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app
Hmmm, perhaps move the --chdir before app.main:app too, so that it's applied to gunicorn rather than to your app:
/usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker --chdir /code app.main:app
I am a newbie to Docker. I have created one Django project and can run it in Docker. However, I have started a second project and have encountered a problem.
I created a virtual env and entered it
pipenv install django~=3.1.0 && pipenv shell
I created a Django project
django-admin startproject config .
I ran it within the virtualenv
python manage.py runserver
and could see the Django spaceship
I then exited the virtualenv and created a Dockerfile
Dockerfile
# Pull base image
FROM python:3.8
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
# Copy project
COPY . /code/
I ran
docker build .
and it reported a successful build
I created a docker-compose.yml file
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
When I run
docker-compose up
it complains
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
I have read in the comments to this question that virtualenvs should not be used in Docker images, so I replaced
RUN pip install pipenv && pipenv install --system
with
RUN pip install django~=3.1.0
but I still get the same error.
What is wrong?
Have you tried installing your list of requirements from a separate file, something like this?
COPY requirements.txt /code/requirements.txt
WORKDIR /code
RUN pip install -r requirements.txt
Once you have it installed, you can run docker-compose run web /bin/sh to start a shell and then use django-admin startproject to create a Django project. You may need to change the path in the docker-compose file so that it reflects where your manage.py file ended up (I moved mine to the root). I was able to get it working with the following:
requirements.txt
django==3.1.0
docker-compose.yml
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
Dockerfile
# Pull base image
FROM python:3.8
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/requirements.txt
WORKDIR /code
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
File tree looks like this:
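Assuming the project was created with django-admin startproject config . as in the question, and with manage.py moved to the root as described, it would look roughly like this (my reconstruction, not the original listing):
.
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
├── manage.py
└── config
    ├── __init__.py
    ├── asgi.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py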
Using Docker to install gunicorn, I am unable to use the gunicorn command.
To start Django, I have this line in my docker-compose.yaml:
command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn myproject.wsgi -b 0.0.0.0:8000"
This results in bash: gunicorn: command not found
When I build the Docker image, it says gunicorn has been successfully installed.
My Dockerfile looks like:
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /config
ADD requirements.txt /config/
RUN pip install -r /config/requirements.txt
RUN mkdir /src;
WORKDIR /src
I've been using this http://ruddra.com/2016/08/14/docker-django-nginx-postgres/ as a guide.
If you are finding that gunicorn doesn't exist, it could be because the Docker build is using a cached layer of the requirements.txt step from a version that didn't list gunicorn as a dependency.
As a result gunicorn never gets installed, whereas adding pip install gunicorn as a separate RUN command would work.
Solution:
Build the Docker image without caching whenever edits have been made to requirements.txt:
docker build --no-cache .
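Alternatively, as noted above, an explicit install sidesteps the cached requirements layer entirely; a sketch based on the Dockerfile from the question:
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /config
ADD requirements.txt /config/
RUN pip install -r /config/requirements.txt
# explicit install so gunicorn is present even if the layer above came from cache
RUN pip install gunicorn
RUN mkdir /src
WORKDIR /src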
I'm trying to create a Docker container with my application, which works fine on my local machine and on the production server. But when I create the Docker container with my project, I see only the "It worked" page instead of my project!
FROM python:2.7
MAINTAINER Name name <mail#gmail.com>
ENV PYTHONUNBUFFERED 1
ENV DJANGO_SETTINGS_MODULE lms.settings
RUN ls -la /
RUN mkdir /lms/
WORKDIR /lms/
RUN pip install six Django==1.5.12 numpy python-dateutil Pillow django-colorful gunicorn south djangorestframework djangorestframework-jsonp simplejson psutil
ADD . /lms/
RUN (cd /lms/ && python manage.py syncdb --noinput)
RUN (cd /lms/ && python manage.py collectstatic --noinput)
RUN ls -la /lms/
RUN cd /lms/
CMD gunicorn --env DJANGO_SETTINGS_MODULE=lms.settings lms.wsgi --pythonpath '/lms/' --bind 0.0.0.0:8000
You should modify your Dockerfile like this
FROM python:2.7
MAINTAINER Name name <mail#gmail.com>
ENV PYTHONUNBUFFERED 1
ENV DJANGO_SETTINGS_MODULE lms.settings
RUN ls -la /
RUN mkdir /lms
WORKDIR /lms
RUN pip install six Django==1.5.12 numpy python-dateutil Pillow django-colorful gunicorn south djangorestframework djangorestframework-jsonp simplejson psutil
ADD . /lms
RUN python manage.py syncdb --noinput
RUN python manage.py collectstatic --noinput
RUN ls -la
CMD gunicorn --env DJANGO_SETTINGS_MODULE=lms.settings lms.wsgi --pythonpath '/lms/' --bind 0.0.0.0:8000
See the doc for WORKDIR:
https://docs.docker.com/reference/builder/#workdir
and this "best practices" guide:
https://docs.docker.com/articles/dockerfile_best-practices/
When you RUN cd /lms, the cd only applies within that single RUN instruction and is not persisted, so the next instruction does not run in /lms. Also, there is no need for a trailing / when you specify the WORKDIR, and since WORKDIR /lms is already set, the extra RUN cd /lms is redundant.
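To illustrate the difference (a sketch, not part of the original answers):
# works: the cd and the command run in the same shell
RUN cd /lms && python manage.py syncdb --noinput

# does not work as intended: each RUN starts a fresh shell,
# so the cd is forgotten before the next instruction
RUN cd /lms
RUN python manage.py syncdb --noinput

# WORKDIR persists for all following instructions
WORKDIR /lms
RUN python manage.py syncdb --noinput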