Django + docker + periodic commands

What are the best practices for running periodic/scheduled tasks (like manage.py custom_command) when running Django with Docker (docker-compose)?
e.g. the most common case: ./manage.py clearsessions
Django recommends running it from a cron job...
But Docker does not recommend running more than one service in a single container...
I guess I could create a docker-compose service from the same image for each command that I need to run, where each service's command is an infinite loop with the needed sleeps, but that seems like overkill for every command that needs to be scheduled.
What's your advice?

The way that worked for me
In my Django project I have a crontab file like this:
0 0 * * * root python manage.py clearsessions > /proc/1/fd/1 2>/proc/1/fd/2
I installed and configured cron inside my Dockerfile:
RUN apt-get update && apt-get -y install cron
ADD crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
and in docker-compose.yml I add a new service that builds the same image as the Django project but runs cron -f as its command:
version: '3'
services:
  web:
    build: ./myprojectname
    ports:
      - "8000:8000"
    # ...
  cronjobs:
    build: ./myprojectname
    command: ["cron", "-f"]

I ended up using this project: Ofelia
https://github.com/mcuadros/ofelia
You just add it to your docker-compose and have a config like:
[job-exec "task name"]
schedule = @daily
container = myprojectname_1
command = python ./manage.py clearsessions
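For reference, a minimal sketch of the compose wiring (image tag and mount paths are assumptions; check the Ofelia README for current options). Ofelia's job-exec needs access to the Docker socket to exec into your containers:
ofelia:
  image: mcuadros/ofelia:latest
  command: daemon --config=/etc/ofelia.conf
  volumes:
    - ./ofelia.conf:/etc/ofelia.conf
    - /var/run/docker.sock:/var/run/docker.sock:ro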

Create one docker image with your Django application.
You can use it to run your Django app (the web interface) and, at the same time, use cron to schedule your periodic tasks by passing the command to the docker executable, like this:
docker run --rm your_image python manage.py clearsessions
The --rm makes sure that Docker removes the one-off container once it finishes; otherwise you will quickly accumulate stopped containers that are of no use.
You can also pass in any extra arguments, for example using -e to modify the environment:
docker run --rm -e DJANGO_DEBUG=True -e DJANGO_SETTINGS_MODULE=production \
    your_image python manage.py clearsessions
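The host machine's crontab then ties it together; a sketch, with paths and the image name assumed:
# host /etc/cron.d/django-tasks: clear sessions daily at midnight
0 0 * * * root /usr/bin/docker run --rm your_image python manage.py clearsessions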

Related

How to make the Django dev server run every time I create a Docker container, instead of when I build the image

tl;dr version: how do I do X every time I run a container, instead of every time I build a new image?
I'm building a very basic Docker + Django example. When I do docker-compose build, everything works as I want:
version: '3.9'
services:
  app:
    build:
      context: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
    environment:
      - SECRET_KEY=devsecretkey
      - DEBUG=1
This runs the Django dev server, however only when the image is being built; the containers created from the image do nothing, but I actually want them to run the dev server. So I figured I should just move command: sh -c "python manage.py runserver 0.0.0.0:8000" from docker-compose to my Dockerfile as an ENTRYPOINT.
Below is my Dockerfile:
FROM python:3.9-alpine3.13
LABEL maintainer="madshit.com"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
/py/bin/pip install -r /requirements.txt && \
adduser --disabled-password --no-create-home app
ENV PATH="/py/bin:$PATH"
USER app
# I added this because I thought it would be called every time my docker environment was finished setting up. No dice :(
ENTRYPOINT python manage.py runserver
The bottom section of the screenshot below shows the logs of my image from Docker Desktop. Strangely, the last command it accepted was the one setting the user, nothing to do with ENTRYPOINT. Maybe it ignored ENTRYPOINT and that's the problem? The top section shows the logs of the instance created from this image (kinda bare).
What do I need to do to make the Django web server run in each container when deployed?
Why doesn't ENTRYPOINT seem to get called? (It's not in the logs.)
I would recommend changing your environment variable logic slightly.
environment:
  - SECRET_KEY=devsecretkey
  - DEBUG=1            # <-- replace this
  - SERVER=localhost   # <-- or another env like staging or live
And then in your settings file you can do:
SERVER = os.environ.get('SERVER')
And then you can set variables based on the string like so:
if SERVER == 'production':
    DEBUG = False
else:
    DEBUG = True
This is a very common practice so that we can customise all kinds of settings, and there are plenty of use cases for this method.
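For instance, a sketch of driving another setting off the same variable (hostnames here are placeholders, not from the answer):
# settings.py
import os

SERVER = os.environ.get('SERVER')

if SERVER == 'production':
    DEBUG = False
    ALLOWED_HOSTS = ['example.com']
else:
    DEBUG = True
    ALLOWED_HOSTS = ['localhost', '127.0.0.1']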
If that still doesn't work, we may have to look at other issues that might be causing these symptoms.

Cron jobs show but are not executed in dockerized Django app

I have "django_crontab" in my installed apps.
I have a cron job configured:
CRONJOBS = [
    ('* * * * *', 'django.core.management.call_command', ['dbbackup']),
]
My YAML looks like this:
web:
  build: .
  command:
    - /bin/bash
    - -c
    - |
      python manage.py migrate
      python manage.py crontab add
      python manage.py runserver 0.0.0.0:8000
After build + up, I open the CLI:
$ python manage.py crontab show
Currently active jobs in crontab:
efa8dfc6d4b0cf6963932a5dc3726b23 -> ('* * * * *', 'django.core.management.call_command', ['dbbackup'])
Then I try that:
$ python manage.py crontab run efa8dfc6d4b0cf6963932a5dc3726b23
Backing Up Database: postgres
Writing file to default-858b61d9ccb6-2021-07-05-084119.psql
All good, but the cronjob never gets executed. I don't see new database dumps every minute as expected.
django-crontab doesn't run scheduled jobs itself; it's just a wrapper around the system cron daemon (you need to configure it with the location of crontab(1), for example). Since a Docker container only runs one process, you need to have a second container to run the cron daemon.
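As an aside, that location configuration looks roughly like this in settings.py (the setting name is from django-crontab's README; the path shown is an assumption):
# tell django-crontab where the system crontab binary lives
CRONTAB_EXECUTABLE = '/usr/bin/crontab'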
A setup I might recommend here is to write some other script that does all of the required startup-time setup, then runs some command that can be passed as additional arguments:
#!/bin/sh
# entrypoint.sh: runs as the main container process
# Gets passed the container's command as arguments
# Run database migrations. (Should be safe, if inefficient, to run
# multiple times concurrently.)
python manage.py migrate
# Set up scheduled jobs, if this is the cron container.
python manage.py crontab add
# Run whatever command we got passed.
exec "$#"
Then in your Dockerfile, make this script be the ENTRYPOINT. Make sure to supply a default CMD too, probably what would run the main server. With both provided, Docker will pass the CMD as arguments to the ENTRYPOINT.
# You probably already have a line like
# COPY . .
# which includes entrypoint.sh; it must be marked executable too
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array form
CMD python manage.py runserver 0.0.0.0:8000
Now in the docker-compose.yml file, you can provide both containers, from the same image, but only override the command: for the cron container. The entrypoint script will run for both but launch a different command at its last line.
version: '3.8'
services:
  web:
    build: .
    ports:
      - '8000:8000'
    # use the Dockerfile CMD, don't need a command: override
  cron:
    build: .
    command: crond -n   # for Vixie cron; BusyBox uses "crond -f"
    # no ports:
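One assumption baked into the cron service: the image must actually contain a cron daemon. On a Debian-based image that could look like the following sketch (Debian's package provides the binary cron, run in the foreground as cron -f; Alpine images already ship BusyBox crond):
# Dockerfile addition for Debian-based images
RUN apt-get update && apt-get install -y --no-install-recommends cron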

The proper way to run django-rq in a Docker microservices setup

I think I have a somewhat bad setup of my Docker containers, because each time I run a task from Django, the ps aux output inside the container shows a new python manage.py rqworker mail process being created instead of the existing one being used.
See the screencast: https://imgur.com/a/HxUjzJ5
The script executed as the command of my rq worker container in docker-compose looks like this:
#!/bin/sh -e
wait-for-it
for KEY in $(redis-cli -h $REDIS_HOST -n 2 KEYS "rq:worker*"); do
    redis-cli -h $REDIS_HOST -n 2 DEL $KEY
done
if [ "$ENVIRONMENT" = "development" ]; then
    python manage.py rqworkers --worker-class rq.SimpleWorker --autoreload
else
    python manage.py rqworkers --worker-class rq.SimpleWorker --workers 4
fi
I am new to Docker and was wondering a bit about this being started like this, without daemonization... but that is the Dockerish way of doing things, right?
Here's what I do, with docker-compose:
version: '3'
services:
  web:
    build: .
    image: mysite
    [...]
  rqworker:
    image: mysite
    command: python manage.py rqworker
    [...]
  rqworker_high:
    image: mysite
    command: python manage.py rqworker high
    [...]
Then start with:
$ docker-compose up --scale rqworker_high=4

Docker Compose ENTRYPOINT and CMD with Django Migrations

I've been trying to find the best method to handle setting up a Django project with Docker. But I'm somewhat confused as to how CMD and ENTRYPOINT function in relation to the compose commands.
When I first set the project up, I need to run createsuperuser and migrate for the database. I've tried using a script as the entrypoint in my Dockerfile to run these commands, but it didn't seem to work consistently. I switched to the configuration shown below, where I overwrite the Dockerfile CMD with a command in my compose file that runs makemigrations, migrate, and createsuperuser.
The issue I'm having is exactly how to set it up so that it does what I need. If I set a command (shown commented out in the code) in my compose file, it should overwrite the CMD in my Dockerfile, from what I understand.
What I'm unsure of is whether I need to use ENTRYPOINT or CMD in my Dockerfile to achieve this. Since CMD is overwritten by my compose file and ENTRYPOINT isn't, wouldn't it cause problems if it was set as ENTRYPOINT, since it would try to run gunicorn a second time after the compose command is executed?
Would there be any drawbacks in this approach compared to using an entrypoint script?
Lastly, is there a general best practice approach to handling Django's setup commands when deploying a dockerized Django application? Or am I already doing what is typically done?
Here is my Dockerfile:
FROM python:3.6
LABEL maintainer="x@x.com"
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=site.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY site site
COPY templates templates
COPY logs logs
COPY scripts scripts
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=site.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "site.wsgi:application"]
And my compose file (omitted the nginx and postgres sections as they are unnecessary to illustrate the issue):
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
#command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn --config gunicorn.conf --log-config loggigng.conf -e DJANGO_SETTINGS_MODULE=site.settings.production_test -W 4 -b 0.0.0.0:8000 site.wsgi"
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=site.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
I have the following entrypoint script that will attempt to do the migrate automatically on my Django project:
#!/bin/bash -x
python manage.py migrate --noinput || exit 1
exec "$#"
The only change that would need to happen to your Dockerfile is to ADD it and specify the ENTRYPOINT. I usually put these lines directly above the CMD instruction:
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
(Please note that the chmod is only necessary if the docker-entrypoint.sh file in your build environment is not already executable.)
I add || exit 1 so that the script will stop the container should the migrate fail for any reason. When starting your project via docker-compose, it's possible that the database is not yet ready to accept connections when this migrate command runs. Between the exit-on-error approach and the restart: always that you already have in your docker-compose.yml, this handles that race condition properly.
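If you'd rather not lean on container restarts, an explicit retry loop in the same entrypoint script is another option; a sketch, with the retry interval chosen arbitrarily:
# retry the migration until the database accepts connections
until python manage.py migrate --noinput; do
    echo "migrate failed (database not ready?); retrying in 3s..." >&2
    sleep 3
done
exec "$@"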
Note that the -x option I specify for bash echoes out what bash is doing, which I find helpful for debugging my scripts. It can be omitted if you want less verbosity in the container logs.
Dockerfile:
...
ENTRYPOINT ["entrypoint.sh"]
CMD ["start"]
entrypoint.sh will be executed every time, while CMD supplies the default arguments for it (see the docs).
entrypoint.sh:
if ["$1" = "start"]
then
/usr/local/bin/gunicorn --config config/gunicorn.conf \
--log-config config/logging.conf ...
elif ["$1" = "migrate"]
# whatever
python manage.py migrate
fi
Now it is possible to do something like:
version: "3.2"
services:
app:
restart: always
build:
...
command: migrate # if needed
or
docker exec -it <container> entrypoint.sh migrate

django-cookiecutter, docker-compose run: can't type in terminal

I'm having an issue with a project started with django-cookiecutter.
To verify that the issue was not with my project, I'm testing on a blank django cookiecutter project.
The issue is that when I run:
docker-compose -f production.yml run --rm django python manage.py createsuperuser
I get the prompt but can't type in the terminal.
Same thing when I run:
docker-compose -f production.yml run --rm django python manage.py shell
I get the shell prompt, but I can't type.
The app is running on a machine on DigitalOcean created with the docker-machine create command.
Any thoughts on what the issue could be and how I could debug this?
To enable typing in the docker-compose terminal, you need to specify that the terminal session is interactive in your docker-compose.yml, because by default the Docker console is not interactive:
stdin_open: true
tty: true
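In context, those keys go on the service you are running (the service name here is assumed from the cookiecutter layout):
services:
  django:
    # ...
    stdin_open: true
    tty: true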
Bash can be accessed inside the Docker container using the following command:
docker-compose -f production.yml exec django bash
This will give you full access to the container of your django server.
There you can run all your normal django commands with full interactivity.
Create a superuser:
python manage.py createsuperuser
For the local environment:
docker-compose -f local.yml exec django bash
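You can also run a management command directly, without opening a shell first (same service-name assumption):
docker-compose -f production.yml exec django python manage.py createsuperuser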