I have the following heroku.yml file for container deployment:
build:
  docker:
    release:
      dockerfile: Dockerfile
      target: release_image
    web: Dockerfile
  config:
    PROD: "True"
release:
  image: web
  command:
    - python manage.py collectstatic --noinput && python manage.py migrate users && python manage.py migrate
run:
  # web: python manage.py runserver 0.0.0.0:$PORT
  web: daphne config.asgi:application --port $PORT --bind 0.0.0.0 -v2
  celery:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
    image: web
  celery_beat:
    command:
      - celery --app=my_app beat -l info
    image: web
When I deploy I get the following warning, which does not make any sense to me:
Warning: You have declared both a release process type and a release section. Your release process type will be overridden.
My Dockerfile is composed of two stages and I want to keep only the release_image stage:
FROM python:3.8 as builder
...
FROM python:3.8-slim as release_image
...
According to the docs, the proper way to choose release_image is to use the target setting within the build step. But the docs also mention that I can run my migrations within a release section. So what am I supposed to do to get rid of this warning? I could only live with it if it were clear that both my migrations and my target are being considered during deploy. Thanks in advance!
I want to keep only the release_image stage
Assuming this is true for your web process as well, update your build section accordingly:
build:
  docker:
    web:
      dockerfile: Dockerfile
      target: release_image
  config:
    PROD: "True"
Now you only have one process type defined and it targets the build stage you want to use.
Since you can run your migrations from the web container, there's no need to build a whole container just for your Heroku release process. (And since your release section uses the web image, the release process type defined in build wouldn't have been used for anything anyway.)
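If you want evidence that both settings are honored, you can check the build log for the target stage and inspect the release phase output after deploying. With the Heroku CLI (assuming the app already exists), something like:
git push heroku main    # the build log shows which Dockerfile stage is built
heroku releases:output  # prints the output of the most recent release phase (the collectstatic/migrate run)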
Related question
I am a beginner with Docker; this is the first project I have set up with it, and I don't particularly know what I am doing. I would very much appreciate some advice on the best way to get migrations from a dockerized Django app stored locally.
What I have tried so far
I have a local django project setup with the following file structure:
Project
  .docker
    - Dockerfile
  project
    - data
      - models
        - __init__.py
        - user.py
        - test.py
      - migrations
        - 0001_initial.py
        - 0002_user_role.py
        ...
    settings.py
    ...
  manage.py
  Makefile
  docker-compose.yml
  ...
In the current state, the migrations for the test.py model have not been run, so I attempted to create them using docker-compose exec main python manage.py makemigrations. This worked, returning the following:
Migrations for 'data':
project/data/migrations/0003_test.py
- Create model Test
But it produced no local file. However, if I explore the file system of the container, I can see that the file exists on the container itself.
Upon running the following:
docker-compose exec main python manage.py migrate
I receive:
Running migrations:
No migrations to apply.
Your models in app(s): 'data' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
I was under the impression that even if this did not create the local file, it would at least run the migrations on the container.
Regardless, my intention was that when I run docker-compose exec main python manage.py makemigrations, it would store the file locally in the project/data/migrations folder, and then I would just run migrate manually. I can't find much documentation on how to do this; the only post I have seen suggested bind mounts (Migrations files not created in dockerized Django), which I attempted by adding the following to my docker-compose file:
volumes:
  - type: bind
    source: ./data/migrations
    target: /var/lib/migrations_test
but I was struggling to get it to work. Following on from this, I had no idea how to run commands through this volume using docker-compose, and I was questioning whether this was even a good idea, as I had read somewhere that using bind mounts is not best practice.
Project setup:
The docker-compose.yml file looks like so:
version: '3.7'

x-common-variables: &common-variables
  ENV: 'DEV'
  DJANGO_SETTINGS_MODULE: 'project.settings'
  DATABASE_NAME: 'postgres'
  DATABASE_USER: 'postgres'
  DATABASE_PASSWORD: 'postgres'
  DATABASE_HOST: 'postgres'
  CELERY_BROKER_URLS: 'redis://redis:6379/0'

volumes:
  postgres:

services:
  main:
    command:
      python manage.py runserver 0.0.0.0:8000
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    environment:
      <<: *common-variables
    ports:
      - '8000:8000'
    env_file:
      - dev.env
    networks:
      - default
  postgres:
    image: postgres:13.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '25432:5432'
    environment:
      POSTGRES_PASSWORD: 'postgres'
    command: postgres -c log_min_messages=INFO -c log_statement=all
  wait_for_dependencies:
    image: dadarek/wait-for-dependencies
    environment:
      SLEEP_LENGTH: '0.5'
  redis:
    image: redis:latest
    ports:
      - '16379:6379'
  worker:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project worker -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
  beat:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project beat -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default

networks:
  default:
Makefile:
build: pre-run
	docker-compose build --pull

dev-deps: pre-run
	docker-compose up -d postgres redis
	docker-compose run --rm wait_for_dependencies postgres:5432 redis:6379

migrate: pre-run
	docker-compose run --rm main python manage.py migrate

setup: build dev-deps migrate

up: dev-deps
	docker-compose up -d main
Dockerfile:
FROM python:3.10.2 as main
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /code
WORKDIR /code
ADD . ./
RUN useradd -m -s /bin/bash app
RUN chown -R app:app .
USER app
EXPOSE 8000
Follow-up based on diptangsu-goswami's response
I tried adding the following:
volumes:
  - type: bind
    source: C:\dev\Project\project
    target: /code/
This creates an empty directory in my Project folder, named C:\dev\Project\project, but the app doesn't run as it cannot find the manage.py file... I assumed this was because it was in the parent directory Project and tried again with:
volumes:
  - type: bind
    source: C:\dev\Project
    target: /code/
But the same problem occurred. Why is it creating the empty directory? Surely it should just be binding the existing directory to the container directory? Also, using this method, would I need to change my Dockerfile so that it doesn't copy the codebase to the container in the first place and just mounts it instead?
I managed to fix it by adding the following to my 'main' service in my docker compose:
volumes:
  - .:/code:delegated
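With the whole project bind-mounted into /code, running makemigrations in the container now writes the file through to the host as well. A quick way to verify (paths follow the project layout above):
docker-compose exec main python manage.py makemigrations
ls project/data/migrations   # the new 0003_test.py should now exist locally
docker-compose exec main python manage.py migrate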
So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But after pushing the image to Docker Hub (https://hub.docker.com/repository/docker/vivanks/firsttry), I am pulling the image on another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it's not getting started and showing this error:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time I do not understand the insistence on using the Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to the database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker Compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to manually do everything it does for you automatically (the commands below assume you have added CMD... to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above 3 commands create a new bridged network, create and start a detached (background) container with a properly configured database connected to that network, and finally create and start an attached (foreground) container based on your image, also attached to that new network. Since both containers are on the same non-default bridged network, your application will be able to resolve the hostname db to the internal IP address of the database container and start properly.
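If you want to double-check the wiring before starting the app, a couple of optional sanity checks (using the container and network names created above):
docker network inspect django-test-network   # both containers should appear under "Containers"
docker exec db pg_isready -U postgres        # confirms Postgres is accepting connections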
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with option --rm), but you need to also manually clean up the rest. To do so run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first one stops the database container, the second one removes it and its anonymous volume and the third one removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do. (It will actually try to start an interactive Python shell, but since you're not allocating a terminal with -t, the shell just exits, successfully.) In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000
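One detail worth noting when you then run the image: runserver binds port 8000 inside the container, so the host port has to be published against container port 8000 rather than 8020. A minimal sketch:
docker build -t vivanks/firsttry .
docker run -p 8020:8000 vivanks/firsttry   # host port 8020 forwards to the server listening on 8000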
I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
db:
build: .
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=(adminname)
- POSTGRES_PASSWORD=(adminpassword)
- CLOUDINARY_URL=(cloudinarykey)
app:
build: .
ports:
- "8000:8000"
depends_on:
- db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build key. However, this means both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images I see this:
$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
mellon_app    latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
postgres      latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
The IMAGE ID is exactly the same for both images, and so is the size. This makes me think I've definitely done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space; how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
    python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest, but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
And cleanup:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help your local development, so it's easier to start several containers and combine them in a local docker-compose network, etc...
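Filled in with the image name from this question (the Docker Hub account name is a placeholder), the sequence might look like:
docker build -f Dockerfile -t mellon_app .
docker tag mellon_app:latest youruser/mellon_app:latest
docker push youruser/mellon_app:latest
docker rmi mellon_app:latest youruser/mellon_app:latest   # cleanup of the local tags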
I would like to run a script (to populate my MySQL Docker container) only when my docker containers are first built. I'm running the following docker-compose.yml file, which contains a Django container.
version: '3'
services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
volumes:
  my-db:
I have this web/Dockerfile
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY entrypoint.sh /app/
COPY . /app/
RUN ["chmod", "+x", "/app/entrypoint.sh"]
ENTRYPOINT ["/app/entrypoint.sh"]
and these are the contents of my entrypoint.sh file
#!/bin/bash
set -e
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
exec "$#"
The issue is that when I repeatedly run docker-compose up, the entrypoint.sh script and its commands are run every time. I would prefer the commands to run only when the docker container is first built, but they seem to always run when the container is restarted. Is there any way to adjust what I have to achieve this?
An approach that I've used before is to wrap your loaddata calls in your own management command, which first checks if there's any data in the database, and if there is, doesn't do anything. Something like this:
# your_app/management/commands/maybe_init_data.py
from django.core.management import call_command
from django.core.management.base import BaseCommand
from address.models import Country

class Command(BaseCommand):
    def handle(self, *args, **options):
        if not Country.objects.exists():
            self.stdout.write('Seeding initial data')
            call_command('loaddata', 'maps/fixtures/country_data.yaml')
            call_command('loaddata', 'maps/fixtures/seed_data.yaml')
And then change your entrypoint script to:
python manage.py migrate
python manage.py maybe_init_data
(Assumption here that you have a Country model - replace with a model that you do actually have in your fixtures.)
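To try the command once by hand before wiring it into the entrypoint, a one-off run might look like this (the service name web is taken from the compose file in the question):
docker-compose run --rm web python manage.py maybe_init_data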
The idea of seeding your database on the first run is a very common case. As others have suggested, you can change your entrypoint.sh script and apply some conditional logic to it to make it work the way you want.
But I think it is better practice to separate the logic for seeding the database from the logic for running services, and not keep them tangled together. This might cause some unwanted behavior in the future.
I was going to suggest a workaround using docker-compose and started searching for syntax for excluding some services while doing docker-compose up, but found out this is still an open issue. However, I found this Stack Overflow answer which suggests a very nice approach.
version: '3'
services:
  all-services:
    image: docker4w/nsenter-dockerd # you want to put there some small image
    command: sh -c "echo start"
    depends_on:
      - mysql
      - web
      - apache
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
  seed:
    build: ./web
    env_file: .env
    environment:
      DEBUG: 'true'
    entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
    command: |
      /bin/bash -c "
      set -e
      python manage.py loaddata maps/fixtures/country_data.yaml
      python manage.py loaddata maps/fixtures/seed_data.yaml
      /bin/bash || exit 0
      "
    depends_on:
      - mysql
volumes:
  my-db:
If you use something like the above, you will be able to run the seeding stage before running docker-compose up.
To seed your database, run:
docker-compose up seed
For running all your stack, use:
docker-compose up -d all-services
I think it is a clean approach and, can be extended to many different scenarios and use cases.
UPDATE
If you really want to be able to run the whole stack together and also prevent unexpected behavior caused by running the loaddata command multiple times, I would suggest you define a new Django management command that checks for existing data. Look at this:
checkseed.py
from django.core.management.base import BaseCommand, CommandError
from project.models import Country  # or whatever model you have seeded

class Command(BaseCommand):
    help = 'Check if seed data already exists'

    def handle(self, *args, **options):
        if Country.objects.all().count() > 0:
            # CommandError exits with a non-zero status, so the && chain
            # in the seed command below stops before loaddata runs again.
            raise CommandError('Data already exists .. skipping')
        # do all the checks for your data integrity
        self.stdout.write(self.style.SUCCESS('Nothing exists'))
And after this, you can change your seed part of docker-compose as below:
seed:
  build: ./web
  env_file: .env
  environment:
    DEBUG: 'true'
  entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
  command: |
    /bin/bash -c "
    set -e
    python manage.py checkseed &&
    python manage.py loaddata maps/fixtures/country_data.yaml
    python manage.py loaddata maps/fixtures/seed_data.yaml
    /bin/bash || exit 0
    "
  depends_on:
    - mysql
This way, you can be sure that if anyone runs docker-compose up -d by mistake, it will not cause integrity errors and problems like that.
Instead of using the entrypoint.sh file, why not just run the commands in the web/Dockerfile?
RUN python manage.py migrate maps
RUN python manage.py loaddata maps/fixtures/country_data.yaml
RUN python manage.py loaddata maps/fixtures/seed_data.yaml
That way these changes will be baked into the image and, when you start the image, these changes will already have been executed.
I had a similar case recently. Since the ENTRYPOINT contains the command that is executed every time the container starts, a solution would be to include some logic in the entrypoint.sh script that skips the updates (in your case the migration and the load of the data) if the effects of these operations are already present on the database.
For example:
#!/bin/bash
set -e

# Function that verifies whether the effects of the migration and the
# data load are already present on the database
function checkEffects() {
    IS_UPDATED=0
    # Check effects and set IS_UPDATED to 1 if they are not present
}

checkEffects

if [[ $IS_UPDATED == 0 ]]
then
    echo "Database already initialized. Nothing to do"
else
    echo "Database is clean. Initializing it"
    python manage.py migrate maps
    python manage.py loaddata maps/fixtures/country_data.yaml
    python manage.py loaddata maps/fixtures/seed_data.yaml
fi

exec "$@"
However, the scenario can be more complex: verifying the effects that let you decide whether or not to proceed with the updates can be quite difficult when they involve multiple tables and datasets. Moreover, it becomes very complex if you think about how the containers are upgraded over time.
Example: today you're working with a local Dockerfile for your web service, but in production you'll probably start versioning this service and uploading it to a Docker registry. When you upload your first release (for example version 1.0.0) you'll specify the following in your docker-compose.yml:
web:
  restart: always
  image: <DOCKER_REGISTRY_HOST>:<DOCKER_REGISTRY_PORT>/web:1.0.0
  ports: # to access the container from outside
    - "8000:8000"
Then you'll release version 1.2.0 of the web service container, which includes other changes to the schema, for example loading other data in entrypoint.sh:
#1.0.0 updates
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
#1.2.0 updates
python manage.py loaddata maps/fixtures/other_seed_data.yaml
Here you'll have 2 scenarios (let's ignore for now the need to check for effects in the script):
1- You deploy your services for the first time with web:1.2.0: as you start from a clean database, you should be sure that all updates are executed (both the 1.0.0 and the 1.2.0 ones). The solution to this case is easy, because you can just execute all updates.
2- You upgrade the web container to 1.2.0 in an existing environment where 1.0.0 was running: as your database has been initialized with the updates from 1.0.0, you should be sure that only the 1.2.0 updates are executed. This case is difficult, because you should be able to check which version has already been applied to the database in order to skip the 1.0.0 updates. That means you should store the web version somewhere in the database, for example.
Given all this discussion, I think the best solution is to work directly on the scripts that create the schema and populate the data, making those instructions idempotent and paying particular attention to the upgrade ones.
Some examples:
1- Create a table
Instead of creating the table as follows:
CREATE TABLE country
use IF NOT EXISTS to avoid a "table already exists" error:
CREATE TABLE IF NOT EXISTS country
2- Insert default data
Instead of inserting data without the primary key specified:
INSERT INTO maps.country (name) VALUES ("USA");
include the primary key so duplicates are rejected:
INSERT INTO maps.country (id,name) VALUES (1,"USA");
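Note that a plain INSERT with a fixed primary key will still raise a duplicate-key error on the second run (which aborts a set -e script); MySQL's INSERT IGNORE or ON DUPLICATE KEY UPDATE makes the statement truly re-runnable. A sketch against the compose stack above (service and credential names taken from the question):
docker-compose exec mysql mysql -u chicommons -ppassword maps_data \
  -e 'INSERT INTO maps.country (id, name) VALUES (1, "USA") ON DUPLICATE KEY UPDATE name = name;'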
Usually build and deploy steps are separated.
Your ENTRYPOINT is part of deploy.
If you want to decide manually which deploy runs should execute the migrate commands and which should just replace the containers with new ones (maybe from a fresh image), you can split it into separate commands:
start database (if not running)
docker-compose -p production -f docker-compose.yml up -d mysql
migrate
docker run \
  --rm \
  --network production_default \
  --env-file docker.env \
  --entrypoint python \
  my-backend-image-name:prod manage.py migrate maps
and then deploy fresh image
docker-compose -p production -f docker-compose.yml up -d
And each time you can manually decide whether or not to run the migrate step.
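If you'd rather script that decision than remember it, a small wrapper along these lines could work (a sketch reusing the names from the commands above):
#!/bin/bash
# deploy.sh [--migrate] : optionally run migrations, then roll out fresh containers
set -e
docker-compose -p production -f docker-compose.yml up -d mysql
if [ "$1" = "--migrate" ]; then
  docker run --rm --network production_default --env-file docker.env \
    --entrypoint python my-backend-image-name:prod manage.py migrate maps
fi
docker-compose -p production -f docker-compose.yml up -d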
I've been trying to find the best method to handle setting up a Django project with Docker. But I'm somewhat confused as to how CMD and ENTRYPOINT function in relation to the compose commands.
When I first set the project up, I need to run createsuperuser and migrate for the database. I've tried using a script to run the commands as the entrypoint in my Dockerfile but it didn't seem to work consistently. I switched to the configuration shown below, where I overwrite the Dockerfile CMD with commands in my compose file where it is told to run makemigrations, migrate, and createsuperuser.
The issue I'm having is exactly how to set it up so that it does what I need. From what I understand, if I set a command (shown commented out in the code below) in my compose file, it should override the CMD in my Dockerfile.
What I'm unsure of is whether I need to use ENTRYPOINT or CMD in my Dockerfile to achieve this. Since CMD is overridden by my compose file and ENTRYPOINT isn't, wouldn't it cause problems if it was set as ENTRYPOINT, since it would try to run gunicorn a second time after the compose command is executed?
Would there be any drawbacks in this approach compared to using an entrypoint script?
Lastly, is there a general best practice approach to handling Django's setup commands when deploying a dockerized Django application? Or am I already doing what is typically done?
Here is my Dockerfile:
FROM python:3.6
LABEL maintainer x#x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=site.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY site site
COPY templates templates
COPY logs logs
COPY scripts scripts
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=site.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "site.wsgi:application"]
And my compose file (I've omitted the nginx and postgres sections as they are unnecessary to illustrate the issue):
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
#command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn --config gunicorn.conf --log-config loggigng.conf -e DJANGO_SETTINGS_MODULE=site.settings.production_test -W 4 -b 0.0.0.0:8000 site.wsgi"
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=site.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
I have the following entrypoint script that will attempt to do the migrate automatically on my Django project:
#!/bin/bash -x
python manage.py migrate --noinput || exit 1
exec "$#"
The only change that needs to happen to your Dockerfile is to ADD it and specify the ENTRYPOINT. I usually put these lines directly above the CMD instruction:
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
(Please note that the chmod is only necessary if the docker-entrypoint.sh file in your build environment is not already executable.)
I add || exit 1 so that the script will stop the container should the migrate fail for any reason. When starting your project via docker-compose, it's possible that the database is not yet ready to accept connections when the migrate command runs. Between the exit-on-error approach and the restart: always that you already have in your docker-compose.yml, this handles that race condition properly.
Note that the -x option I specify for bash echoes out what bash is doing, which I find helpful for debugging my scripts. It can be omitted if you want less verbosity in the container logs.
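If you hit that race condition often, one common refinement is to wait for the database explicitly before migrating. A minimal sketch, assuming the database service is reachable as db on port 5432 and that netcat (nc) is installed in the image:
#!/bin/bash -x
until nc -z db 5432; do
    echo "waiting for database..."
    sleep 1
done
python manage.py migrate --noinput || exit 1
exec "$@"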
Dockerfile:
...
ENTRYPOINT ["entrypoint.sh"]
CMD ["start"]
entrypoint.sh will be executed every time, whilst CMD provides the default argument for it (docs).
entrypoint.sh:
if ["$1" = "start"]
then
/usr/local/bin/gunicorn --config config/gunicorn.conf \
--log-config config/logging.conf ...
elif ["$1" = "migrate"]
# whatever
python manage.py migrate
fi
now it is possible to do something like
version: "3.2"
services:
app:
restart: always
build:
...
command: migrate # if needed
or
docker exec -it <container> bash entrypoint.sh migrate