I think I have a bad setup of my Docker containers.
Each time I run a task from Django, the ps aux output inside the container shows a new python manage.py rqworker mail process being created instead of the existing one being reused.
See the screencast: https://imgur.com/a/HxUjzJ5
The command run by my docker-compose service for the RQ worker container looks like this:
#!/bin/sh -e
wait-for-it
# Clear stale worker registrations left over from previous runs
for KEY in $(redis-cli -h "$REDIS_HOST" -n 2 KEYS "rq:worker*"); do
    redis-cli -h "$REDIS_HOST" -n 2 DEL "$KEY"
done
if [ "$ENVIRONMENT" = "development" ]; then
    python manage.py rqworkers --worker-class rq.SimpleWorker --autoreload
else
    python manage.py rqworkers --worker-class rq.SimpleWorker --workers 4
fi
I am new to Docker and am a bit surprised that the worker is started like this, without daemonization... but that is the Docker way of doing things, right?
Here's what I do, with docker-compose:
version: '3'
services:
  web:
    build: .
    image: mysite
    [...]
  rqworker:
    image: mysite
    command: python manage.py rqworker
    [...]
  rqworker_high:
    image: mysite
    command: python manage.py rqworker high
    [...]
Then start with:
$ docker-compose up --scale rqworker_high=4
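With this layout each worker is PID 1 of its own container, so ps aux inside any one of them shows a single worker process. A quick sanity check, assuming the service names above:
$ docker-compose ps                      # rqworker_high_1 .. rqworker_high_4 as separate containers
$ docker-compose exec rqworker ps aux    # exactly one rqworker process, running in the foreground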
Related
I have "django_crontab" in my installed apps.
I have a cron job configured:
CRONJOBS = [
    ('* * * * *', 'django.core.management.call_command', ['dbbackup']),
]
my YAML looks like this:
web:
  build: .
  command:
    - /bin/bash
    - -c
    - |
      python manage.py migrate
      python manage.py crontab add
      python manage.py runserver 0.0.0.0:8000
After build + up, I open the CLI:
$ python manage.py crontab show
Currently active jobs in crontab:
efa8dfc6d4b0cf6963932a5dc3726b23 -> ('* * * * *', 'django.core.management.call_command', ['dbbackup'])
Then I try that:
$ python manage.py crontab run efa8dfc6d4b0cf6963932a5dc3726b23
Backing Up Database: postgres
Writing file to default-858b61d9ccb6-2021-07-05-084119.psql
All good, but the cronjob never gets executed. I don't see new database dumps every minute as expected.
django-crontab doesn't run scheduled jobs itself; it's just a wrapper around the system cron daemon (you need to configure it with the location of crontab(1), for example). Since a Docker container only runs one process, you need to have a second container to run the cron daemon.
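django-crontab is configured through Django settings; for instance, the CRONTAB_EXECUTABLE setting tells it where the crontab binary lives. A minimal sketch (the path is an assumption for a Debian-based image with the cron package installed):
# settings.py
CRONTAB_EXECUTABLE = '/usr/bin/crontab'  # assumed location of crontab(1) in the image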
A setup I might recommend here is to write some other script that does all of the required startup-time setup, then runs some command that can be passed as additional arguments:
#!/bin/sh
# entrypoint.sh: runs as the main container process
# Gets passed the container's command as arguments
# Run database migrations. (Should be safe, if inefficient, to run
# multiple times concurrently.)
python manage.py migrate
# Set up scheduled jobs, if this is the cron container.
python manage.py crontab add
# Run whatever command we got passed.
exec "$#"
Then in your Dockerfile, make this script be the ENTRYPOINT. Make sure to supply a default CMD too, probably what would run the main server. With both provided, Docker will pass the CMD as arguments to the ENTRYPOINT.
# You probably already have a line like
# COPY . .
# which includes entrypoint.sh; it must be marked executable too
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array form
CMD python manage.py runserver 0.0.0.0:8000
Now in the docker-compose.yml file, you can provide both containers, from the same image, but only override the command: for the cron container. The entrypoint script will run for both but launch a different command at its last line.
version: '3.8'
services:
  web:
    build: .
    ports:
      - '8000:8000'
    # use the Dockerfile CMD, don't need a command: override
  cron:
    build: .
    command: crond -n # for Vixie cron; BusyBox uses "crond -f"
    # no ports:
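To verify the split works, check that the job landed in the cron container's crontab and that the daemon is actually running. A quick check, assuming the service names above:
docker-compose exec cron crontab -l    # should list the dbbackup job
docker-compose exec cron ps aux        # the cron daemon must appear here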
I would like to run a script (to populate my MySQL Docker container) only when my Docker containers are first built. I'm running the following docker-compose.yml file, which contains a Django container.
version: '3'
services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
volumes:
  my-db:
I have this web/Dockerfile
FROM python:3.7-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc libmariadb-dev-compat libmariadb-dev \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY entrypoint.sh /app/
COPY . /app/
RUN ["chmod", "+x", "/app/entrypoint.sh"]
ENTRYPOINT ["/app/entrypoint.sh"]
and these are the contents of my entrypoint.sh file
#!/bin/bash
set -e
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
exec "$#"
The issue is, when I repeatedly run docker-compose up, the commands in entrypoint.sh get run every time. I would prefer that they only run when the container is first built, but they seem to always run when the container is restarted. Is there any way to adjust what I have to achieve this?
An approach that I've used before is to wrap your loaddata calls in your own management command, which first checks if there's any data in the database, and if there is, doesn't do anything. Something like this:
# your_app/management/commands/maybe_init_data.py
from django.core.management import call_command
from django.core.management.base import BaseCommand

from address.models import Country


class Command(BaseCommand):
    def handle(self, *args, **options):
        if not Country.objects.exists():
            self.stdout.write('Seeding initial data')
            call_command('loaddata', 'maps/fixtures/country_data.yaml')
            call_command('loaddata', 'maps/fixtures/seed_data.yaml')
And then change your entrypoint script to:
python manage.py migrate
python manage.py maybe_init_data
(Assumption here that you have a Country model - replace with a model that you do actually have in your fixtures.)
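For completeness, keeping the exec-the-command pattern from the original entrypoint, the whole script would then look like this:
#!/bin/bash
set -e
python manage.py migrate
python manage.py maybe_init_data
exec "$@"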
Seeding the database on the first run is a very common case. As others have suggested, you can change your entrypoint.sh script and add some conditional logic to it to make it work the way you want.
But I think it is better practice to separate the logic for seeding the database from the logic for running services, rather than keeping them tangled together; coupling them might cause unwanted behavior in the future.
I was going to suggest a workaround using docker-compose and started searching for syntax to exclude some services from docker-compose up, but found out this is still an open issue. However, I found this Stack Overflow answer, which suggests a very nice approach.
version: '3'
services:
  all-services:
    image: docker4w/nsenter-dockerd # put some small image here
    command: sh -c "echo start"
    depends_on:
      - mysql
      - web
      - apache
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
  seed:
    build: ./web
    env_file: .env
    environment:
      DEBUG: 'true'
    entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
    command: |
      /bin/bash -c "
      set -e
      python manage.py loaddata maps/fixtures/country_data.yaml
      python manage.py loaddata maps/fixtures/seed_data.yaml
      /bin/bash || exit 0
      "
    depends_on:
      - mysql
volumes:
  my-db:
If you use something like the above, you will be able to run the seeding stage before bringing up your full stack.
For seeding your database, run:
docker-compose up seed
For running all your stack, use:
docker-compose up -d all-services
I think it is a clean approach and can be extended to many different scenarios and use cases.
UPDATE
If you really want to be able to run the whole stack together and also prevent unexpected behavior caused by running the loaddata command multiple times, I would suggest you define a new Django management command that checks for existing data. Look at this:
checkseed.py
from django.core.management.base import BaseCommand, CommandError

from project.models import Country  # or whatever model you have seeded


class Command(BaseCommand):
    help = 'Check if seed data already exists'

    def handle(self, *args, **options):
        if Country.objects.exists():
            # Raising CommandError exits with a non-zero status, so the
            # "checkseed && loaddata ..." chain below skips the seeding.
            raise CommandError('Data already exists .. skipping')
        # do all the checks for your data integrity
        self.stdout.write(self.style.SUCCESS('Nothing exists'))
And after this, you can change the seed part of your docker-compose file as below:
seed:
  build: ./web
  env_file: .env
  environment:
    DEBUG: 'true'
  entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
  command: |
    /bin/bash -c "
    set -e
    python manage.py checkseed &&
    python manage.py loaddata maps/fixtures/country_data.yaml &&
    python manage.py loaddata maps/fixtures/seed_data.yaml
    /bin/bash || exit 0
    "
  depends_on:
    - mysql
This way, you can be sure that if anyone runs docker-compose up -d by mistake, it will not cause integrity errors or problems like that.
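With the check in place, running the seed service repeatedly is harmless:
docker-compose up seed    # first run: loads the fixtures
docker-compose up seed    # later runs: "Data already exists .. skipping"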
Instead of using the entrypoint.sh file, why not just run the commands in the web/Dockerfile?
RUN python manage.py migrate maps
RUN python manage.py loaddata maps/fixtures/country_data.yaml
RUN python manage.py loaddata maps/fixtures/seed_data.yaml
That way these changes will be baked into the image and, when you start the image, these changes will already have been executed.
I had a similar case recently. Since the ENTRYPOINT contains the command that is executed every time the container starts, a solution is to include some logic in the entrypoint.sh script that skips the updates (in your case, the migration and the data load) if the effects of these operations are already present in the database.
For example:
#!/bin/bash
set -e

# Verify whether the effects of the migration and the data load
# are already present on the database
function checkEffects() {
    IS_UPDATED=0
    # Check effects and set IS_UPDATED to 1 if they are not present
}

checkEffects

if [[ $IS_UPDATED == 0 ]]
then
    echo "Database already initialized. Nothing to do"
else
    echo "Database is clean. Initializing it"
    python manage.py migrate maps
    python manage.py loaddata maps/fixtures/country_data.yaml
    python manage.py loaddata maps/fixtures/seed_data.yaml
fi

exec "$@"
However, the scenario is more complex in practice: verifying the effects that determine whether or not to proceed with the updates can be quite difficult when they involve several migrations and data sets.
Moreover, it becomes very complex if you think about container upgrades over time.
Example: today you're working with a local Dockerfile for your web service, but in production you'll probably start versioning this service and uploading it to a Docker registry. When you upload your first release (for example version 1.0.0), you'll specify the following in your docker-compose.yml:
web:
  restart: always
  image: <DOCKER_REGISTRY_HOST>:<DOCKER_REGISTRY_PORT>/web:1.0.0
  ports: # to access the container from outside
    - "8000:8000"
Later you'll release version 1.2.0 of the web service container, which includes other changes to the schema, for example loading additional data in entrypoint.sh:
#1.0.0 updates
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
#1.2.0 updates
python manage.py loaddata maps/fixtures/other_seed_data.yaml
Here you'll have two scenarios (let's ignore for now the need to check for effects in the script):
1- You deploy your services for the first time with web:1.2.0: since you start from a clean database, you must be sure that all updates are executed (both 1.0.0 and 1.2.0). This case is easy, because you can simply execute all updates.
2- You upgrade the web container to 1.2.0 in an existing environment where 1.0.0 was running: since your database was initialized with the 1.0.0 updates, you must be sure that only the 1.2.0 updates are executed. This case is harder, because you need to be able to check which version has already been applied to the database in order to skip the 1.0.0 updates. That means you'd have to store the applied web version somewhere in the database, for example.
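A hedged sketch of that version check, run from the entrypoint (the schema_info table is hypothetical, the credentials come from the compose file above, and the mysql client is assumed to be available in the image):
# Hypothetical: read the applied version from a one-row table and branch on it.
APPLIED=$(mysql -h mysql -u chicommons -ppassword maps_data \
    -N -s -e "SELECT version FROM schema_info" 2>/dev/null || echo "none")
if [ "$APPLIED" = "none" ]; then
    echo "clean database: applying 1.0.0 and 1.2.0 updates"
elif [ "$APPLIED" = "1.0.0" ]; then
    echo "existing 1.0.0 database: applying only the 1.2.0 updates"
fi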
Given all this discussion, I think the best solution is to work directly on the scripts that create the schema and populate the data, making those instructions idempotent, while paying particular attention to the upgrade ones.
Some examples:
1- Create a table
Instead of creating the table as follows:
CREATE TABLE country
use IF NOT EXISTS to avoid a "table already exists" error:
CREATE TABLE IF NOT EXISTS country
2- Insert default data
Instead of inserting data without the primary key specified:
INSERT INTO maps.country (name) VALUES ("USA");
include the primary key in order to avoid duplicates:
INSERT INTO maps.country (id,name) VALUES (1,"USA");
Usually the build and deploy steps are kept separate.
Your ENTRYPOINT is part of the deploy step.
If you want to decide manually which deploy runs should execute the migrate commands and which should just replace the containers with new ones (maybe from a fresh image), then you can split it into separate commands:
Start the database (if it is not already running):
docker-compose -p production -f docker-compose.yml up -d mysql
Run the migration:
docker run \
    --rm \
    --network production_default \
    --env-file docker.env \
    --entrypoint python \
    my-backend-image-name:prod manage.py migrate maps
and then deploy the fresh image:
docker-compose -p production -f docker-compose.yml up -d
Each time, you decide manually whether the migrate step should run or not.
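If you like, the steps can be wrapped in a small deploy script so that the migrate step stays an explicit opt-in. A sketch (the script name and flag are my own):
#!/bin/sh
# deploy.sh [--migrate] - wrapper around the commands above
set -e
docker-compose -p production -f docker-compose.yml up -d mysql
if [ "$1" = "--migrate" ]; then
    docker run --rm \
        --network production_default \
        --env-file docker.env \
        --entrypoint python \
        my-backend-image-name:prod manage.py migrate maps
fi
docker-compose -p production -f docker-compose.yml up -d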
I've been trying to find the best method to handle setting up a Django project with Docker. But I'm somewhat confused as to how CMD and ENTRYPOINT function in relation to the compose commands.
When I first set the project up, I need to run createsuperuser and migrate for the database. I've tried using a script to run the commands as the entrypoint in my Dockerfile but it didn't seem to work consistently. I switched to the configuration shown below, where I overwrite the Dockerfile CMD with commands in my compose file where it is told to run makemigrations, migrate, and createsuperuser.
The issue I'm having is exactly how to set it up so that it does what I need. If I set a command (shown commented out in the code) in my compose file, it should overwrite the CMD in my Dockerfile, from what I understand.
What I'm unsure of is whether I need to use ENTRYPOINT or CMD in my Dockerfile to achieve this. Since CMD is overwritten by my compose file and ENTRYPOINT isn't, wouldn't it cause problems if it were set as ENTRYPOINT, since it would try to run gunicorn a second time after the compose command is executed?
Would there be any drawbacks in this approach compared to using an entrypoint script?
Lastly, is there a general best practice approach to handling Django's setup commands when deploying a dockerized Django application? Or am I already doing what is typically done?
Here is my Dockerfile:
FROM python:3.6
LABEL maintainer x@x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=site.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY site site
COPY templates templates
COPY logs logs
COPY scripts scripts
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=site.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "site.wsgi:application"]
And my compose file (omitted the nginx and postgres sections as they are unnecessary to illustrate the issue):
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
#command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn --config gunicorn.conf --log-config loggigng.conf -e DJANGO_SETTINGS_MODULE=site.settings.production_test -W 4 -b 0.0.0.0:8000 site.wsgi"
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=site.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
I have the following entrypoint script that will attempt to do the migrate automatically on my Django project:
#!/bin/bash -x
python manage.py migrate --noinput || exit 1
exec "$#"
The only change that would need to happen to your Dockerfile is to ADD it and specify the ENTRYPOINT. I usually put these lines directly above the CMD instruction:
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
(Please note that the chmod is only necessary if the docker-entrypoint.sh file in your build environment is not already executable.)
I add || exit 1 so that the script will stop the container should the migrate fail for any reason. When starting your project via docker-compose, it's possible that the database may not be 100% ready to accept connections when this migrate command runs. Between the exit on error approach and the restart: always that you have in your docker-compose.yml already, this will handle that race condition properly.
Note that the -x option I specify for bash echoes out what bash is doing, which I find helpful for debugging my scripts. It can be omitted if you want less verbosity in the container logs.
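If you'd rather not lean on restart: always to cover the race, you can also poll the database from the entrypoint before migrating. A minimal sketch, assuming a db service listening on 5432 and netcat available in the image:
#!/bin/bash -x
# Wait until the database accepts TCP connections, then migrate.
until nc -z db 5432; do
    echo "waiting for database..."
    sleep 1
done
python manage.py migrate --noinput || exit 1
exec "$@"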
Dockerfile:
...
ENTRYPOINT ["entrypoint.sh"]
CMD ["start"]
entrypoint.sh will be executed every time, while CMD provides the default arguments for it (docs).
entrypoint.sh:
if ["$1" = "start"]
then
/usr/local/bin/gunicorn --config config/gunicorn.conf \
--log-config config/logging.conf ...
elif ["$1" = "migrate"]
# whatever
python manage.py migrate
fi
Now it is possible to do something like:
version: "3.2"
services:
app:
restart: always
build:
...
command: migrate # if needed
or
docker exec -it <container> bash -c "entrypoint.sh migrate"
What are the best practices for running periodic/scheduled tasks (like manage.py custom_command) when running Django with Docker (docker-compose)?
E.g. the most common case - ./manage.py clearsessions
Django recommends running it via cron jobs...
But Docker does not recommend adding more than one running service to a single container...
I guess I could create a docker-compose service from the same image for each command that I need to run, where each command runs in an infinite loop with the needed sleeps, but that seems like overkill for every command that needs to be scheduled.
What's your advice?
The way that worked for me:
In my Django project I have a crontab file like this:
0 0 * * * root python manage.py clearsessions > /proc/1/fd/1 2>/proc/1/fd/2
I installed/configured cron inside my Dockerfile:
RUN apt-get update && apt-get -y install cron
ADD crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
and in docker-compose.yml I add a new service that builds the same image as the Django project but runs cron -f as its command:
version: '3'
services:
  web:
    build: ./myprojectname
    ports:
      - "8000:8000"
    #...
  cronjobs:
    build: ./myprojectname
    command: ["cron", "-f"]
I ended up using this project - Ofelia
https://github.com/mcuadros/ofelia
You just add it to your docker-compose setup and have a config like:
[job-exec "task name"]
schedule = #daily
container = myprojectname_1
command = python ./manage.py clearsessions
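To wire it up, the Ofelia image can run alongside the stack with access to the Docker socket. A sketch (the image name comes from the project's README; the config path and daemon flag are assumptions to verify against the current docs):
docker run -d --name ofelia \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v "$PWD/ofelia.ini:/etc/ofelia/config.ini" \
    mcuadros/ofelia:latest daemon --config=/etc/ofelia/config.ini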
Create one docker image with your Django application.
You can use it to run your Django app (the web interface) and, at the same time, schedule your periodic tasks with cron by passing the command to the docker executable, like this:
docker run --rm your_image python manage.py clearsessions
The --rm will make sure that docker removes the container once it finishes; otherwise you will quickly accumulate stopped containers that are of no use.
You can also pass in any extra arguments, for example using -e to modify the environment:
docker run --rm -e DJANGO_DEBUG=True -e DJANGO_SETTINGS_MODULE=production \
    your_image python manage.py clearsessions
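The schedule itself can then live in the host's crontab, firing a throwaway container per run, for example:
# Host crontab entry (sketch): run clearsessions daily at midnight.
0 0 * * * docker run --rm your_image python manage.py clearsessions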
I'm trying to create a Docker image with my Django application, but unfortunately I keep running into trouble with my entrypoint script.
Docker exits with error code 127 and displays the following message:
/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
You'll find the respective configuration files below:
Dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /web/src
ADD . /web/src
WORKDIR /web/src
RUN pip install -U pip
RUN pip install -r requirements.txt -U
RUN chmod u+x docker-entrypoint.sh
ENTRYPOINT ["/bin/bash", "docker-entrypoint.sh"]
docker-entrypoint.sh
#!/bin/bash
python manage.py migrate
python manage.py collectstatic --noinput
touch /srv/logs/gunicorn.log
touch /srv/logs/access.log
tail -n 0 -f /srv/logs/*.log &
echo Starting Gunicorn...
exec gunicorn config.wsgi:application \
--name django_server \
--bind 0.0.0.0:8000 \
--workers 3 \
--log-level=info \
--log-file=/srv/logs/gunicorn.log \
--access-logfile=/srv/logs/access.log \
"$#"
docker-compose.yml
version: '2.0'
services:
db:
container_name: db_server
image: postgres
web:
container_name: django_server
build: .
volumes:
- .:/web/src
environment:
- SECRET_KEY=k3jghf1jk%$JH^1GJH5#YUTR#!MBMB<5=7DXXG)JHSX=
- PGDATABASE=postgres
- PGUSER=postgres
- PGPASSWORD=''
- PGHOST=db
- DJANGO_ENV=development
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
links:
- db
After reproducing the problem locally: docker build . built the image successfully, but when trying to start it with docker-compose up I got the error exec: gunicorn: not found, as mentioned above. Based on this thread I could solve the problem by running docker-compose build. So, to sum up, the three following commands should solve the problem:
docker build .
docker-compose build
docker-compose up
Although this solves the problem for me, I'm still confused: why do I need to run build twice? Something must be wrong somewhere, because as far as I have understood, docker-compose build should do the same work as docker build ..
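A likely explanation (my reading, worth verifying): docker build . produces an untagged image that Compose never looks at, while docker-compose up runs the image tagged for the web service and only builds it when that tag is missing, so a stale tag keeps being reused until docker-compose build refreshes it:
docker build .              # builds a dangling image; Compose ignores it
docker-compose build web    # (re)builds and tags the image Compose actually runs
docker-compose up -d        # starts containers from the freshly tagged image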