How to execute multiple commands in a single line in Linux (OpenShift / Docker) - Django

I'm looking for a way to run the Django server and Celery in a single line. The services (Django and Celery) are deployed in OpenShift as two separate pods built from the same image. Currently I'm running the Django service (pod) with python manage.py runserver and the Celery pod with celery -A myapp worker --loglevel=info --concurrency=8.
Instead of running a separate pod for each, I want to execute the runserver command and the celery worker command together. How do I do that?
I know &&, ;, and || are used for such scenarios, but they don't work here.
For example:
cd ./app && python manage.py runserver # this works
cd ./app && python manage.py runserver && celery -A myapp worker --loglevel=info --concurrency=8
# this cds into app and executes the runserver command, but the celery command never runs, because runserver doesn't exit.

Create a bash script containing the two commands, backgrounding the Django server so Celery can start:
#!/bin/bash
# Run the Django dev server in the background, then Celery in the foreground.
python manage.py runserver &
celery -A myapp worker --loglevel=info --concurrency=8
Make it executable with chmod +x and run it in your Docker container with:
bash my_file.sh
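If the script is baked into the image, a minimal sketch of the Dockerfile wiring (the file name my_file.sh comes from above; the /app path is an assumption) could be:
# Copy the startup script into the image and make it the default command
COPY my_file.sh /app/my_file.sh
RUN chmod +x /app/my_file.sh
CMD ["bash", "/app/my_file.sh"]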

The && joins two commands, and means the second command will run AFTER THE FIRST HAS FINISHED, and depending on its exit code.
It's a logical AND -- so if the first command fails (returns non-zero), the second doesn't run, since the result of the AND is already known. Here, runserver never exits, so the celery command is never reached.
Try this:
cd /path; ( python manage.py runserver &); ( celery -A myapp worker --loglevel=info --concurrency=8 &);
The final & puts celery in the background as well; remove it if you want celery in the foreground. As noted above, this is not a "container native" design: with one process per container, the rest of the management becomes significantly more straightforward. It's usually worth figuring out the remaining adjustments (e.g. an emptyDir: {} volume can be used to share files between containers within a pod) and simplifying the deployment sooner rather than later.
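If you do go the container-native route instead, a minimal sketch of a two-container pod sharing one image (the pod name, image reference, and mount path here are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: shared
      emptyDir: {}          # scratch space shared by both containers
  containers:
    - name: web
      image: myapp-image    # hypothetical image reference
      command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
      volumeMounts:
        - name: shared
          mountPath: /shared
    - name: worker
      image: myapp-image
      command: ["celery", "-A", "myapp", "worker", "--loglevel=info", "--concurrency=8"]
      volumeMounts:
        - name: shared
          mountPath: /shared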

Related

Cron jobs show but are not executed in dockerized Django app

I have "django_crontab" in my installed apps.
I have a cron job configured
CRONJOBS = [
    ('* * * * *', 'django.core.management.call_command', ['dbbackup']),
]
my YAML looks like this:
web:
  build: .
  command:
    - /bin/bash
    - -c
    - |
      python manage.py migrate
      python manage.py crontab add
      python manage.py runserver 0.0.0.0:8000
After build + up, I open the CLI:
$ python manage.py crontab show
Currently active jobs in crontab:
efa8dfc6d4b0cf6963932a5dc3726b23 -> ('* * * * *', 'django.core.management.call_command', ['dbbackup'])
Then I try that:
$ python manage.py crontab run efa8dfc6d4b0cf6963932a5dc3726b23
Backing Up Database: postgres
Writing file to default-858b61d9ccb6-2021-07-05-084119.psql
All good, but the cronjob never gets executed. I don't see new database dumps every minute as expected.
django-crontab doesn't run scheduled jobs itself; it's just a wrapper around the system cron daemon (you need to configure it with the location of crontab(1), for example). Since a Docker container only runs one process, you need to have a second container to run the cron daemon.
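For example, django-crontab finds the system binary via a setting like this in settings.py (the path shown is the usual default on Debian-based images; verify it for your base image):
CRONTAB_EXECUTABLE = '/usr/bin/crontab'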
A setup I might recommend here is to write some other script that does all of the required startup-time setup, then runs some command that can be passed as additional arguments:
#!/bin/sh
# entrypoint.sh: runs as the main container process.
# Gets passed the container's command as arguments.

# Run database migrations. (Should be safe, if inefficient, to run
# multiple times concurrently.)
python manage.py migrate

# Set up scheduled jobs, if this is the cron container.
python manage.py crontab add

# Run whatever command we got passed.
exec "$@"
Then in your Dockerfile, make this script be the ENTRYPOINT. Make sure to supply a default CMD too, probably what would run the main server. With both provided, Docker will pass the CMD as arguments to the ENTRYPOINT.
# You probably already have a line like
# COPY . .
# which includes entrypoint.sh; it must be marked executable too
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array form
CMD python manage.py runserver 0.0.0.0:8000
Now in the docker-compose.yml file, you can provide both containers, from the same image, but only override the command: for the cron container. The entrypoint script will run for both but launch a different command at its last line.
version: '3.8'
services:
  web:
    build: .
    ports:
      - '8000:8000'
    # uses the Dockerfile CMD; no command: override needed
  cron:
    build: .
    command: crond -n # for Vixie cron; BusyBox uses "crond -f"
    # no ports:
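As a sanity check after docker-compose up, you can confirm the cron container registered the job (the service name cron comes from the file above):
docker-compose exec cron python manage.py crontab show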

running celery tasks and celery beat in ECS with Django

I'm using ECS for the first time. I have dockerized my Django 2.2 application and am using ECS and uwsgi to run it in production.
In the development environment, I had to run three commands to start the Django server, Celery, and Celery beat:
python manage.py runserver
celery -A qcg worker -l info
celery -A qcg beat -l info
Where qcg is my application.
My Dockerfile has following uwsgi configuration
EXPOSE 8000
ENV UWSGI_WSGI_FILE=qcg/wsgi.py
ENV UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy
ENV UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_MAP="/static/=/static_cdn/static_root/" UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
USER ${APP_USER}:${APP_USER}
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
The entrypoint.sh file has
exec "$#"
I have created the ECS task definition and in the container's command input, I have
uwsgi --show-config
This starts the uwsgi server.
Now I'm running 1 EC2 instance in the cluster and running one service with two instances of the task definition.
I couldn't figure out how to run the celery task and celery beat in my application.
Do I need to create separate tasks for running celery and celery-beat?
Yes, you need to run separate ECS tasks (or separate ECS services) for celery beat and celery worker. Celery Beat will send the Celery tasks to the Celery worker.
I use separate Dockerfiles for Celery, Celery beat, and Django.
The worker Dockerfile is something like this:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ADD requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
ADD . /src
WORKDIR /src
CMD ["celery", "-A", "<appname>", "worker"]
and Beat Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ADD requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
ADD . /src
WORKDIR /src
CMD ["celery", "-A", "<appname>", "beat"]

Docker Compose ENTRYPOINT and CMD with Django Migrations

I've been trying to find the best method to handle setting up a Django project with Docker. But I'm somewhat confused as to how CMD and ENTRYPOINT function in relation to the compose commands.
When I first set the project up, I need to run createsuperuser and migrate for the database. I've tried using a script to run the commands as the entrypoint in my Dockerfile but it didn't seem to work consistently. I switched to the configuration shown below, where I overwrite the Dockerfile CMD with commands in my compose file where it is told to run makemigrations, migrate, and createsuperuser.
The issue I'm having is exactly how to set it up so that it does what I need. If I set a command (shown as commented out in the code) in my compose file it should overwrite the CMD in my Dockerfile from what I understand.
What I'm unsure of is whether or not I need to use ENTRYPOINT or CMD in my Dockerfile to achieve this? Since CMD is overwritten by my compose file and ENTRYPOINT isn't, wouldn't it cause problems if it was set to ENTRYPOINT, since it would try to run gunicorn a second time after the compose command is executed?
Would there be any drawbacks in this approach compared to using an entrypoint script?
Lastly, is there a general best practice approach to handling Django's setup commands when deploying a dockerized Django application? Or am I already doing what is typically done?
Here is my Dockerfile:
FROM python:3.6
LABEL maintainer x@x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=site.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY site site
COPY templates templates
COPY logs logs
COPY scripts scripts
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=site.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "site.wsgi:application"]
And my compose file (omitted the nginx and postgres sections as they are unnecessary to illustrate the issue):
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
#command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn --config gunicorn.conf --log-config loggigng.conf -e DJANGO_SETTINGS_MODULE=site.settings.production_test -W 4 -b 0.0.0.0:8000 site.wsgi"
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=site.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
I have the following entrypoint script that will attempt to do the migrate automatically on my Django project:
#!/bin/bash -x
python manage.py migrate --noinput || exit 1
exec "$@"
The only change that would need to happen to your Dockerfile is to ADD it and specify the ENTRYPOINT. I usually put these lines directly above the CMD instruction:
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
(Please note that the chmod is only necessary if the docker-entrypoint.sh file in your build environment is not already executable.)
I add || exit 1 so that the script will stop the container should the migrate fail for any reason. When starting your project via docker-compose, it's possible that the database may not be 100% ready to accept connections when this migrate command runs. Between the exit on error approach and the restart: always that you have in your docker-compose.yml already, this will handle that race condition properly.
Note that the -x option I specify for bash echoes out what bash is doing, which I find helpful for debugging my scripts. It can be omitted if you want less verbosity in the container logs.
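If you'd rather not rely on restarts, a variant sketch waits for Postgres before migrating (pg_isready ships with the Postgres client tools; the host name db is an assumption matching the compose service):
#!/bin/bash -x
# Block until the database accepts connections, then migrate and hand off.
until pg_isready -h db -p 5432; do
  sleep 1
done
python manage.py migrate --noinput || exit 1
exec "$@"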
Dockerfile:
...
ENTRYPOINT ["entrypoint.sh"]
CMD ["start"]
entrypoint.sh will always be executed, with CMD supplying its default arguments (docs)
entrypoint.sh:
if ["$1" = "start"]
then
/usr/local/bin/gunicorn --config config/gunicorn.conf \
--log-config config/logging.conf ...
elif ["$1" = "migrate"]
# whatever
python manage.py migrate
fi
now it is possible to do something like
version: "3.2"
services:
app:
restart: always
build:
...
command: migrate # if needed
or
docker exec -it <container> ./entrypoint.sh migrate

Django + docker + periodic commands

What are the best practices for running periodic/scheduled tasks (like manage.py custom_command) when running Django with Docker (docker-compose)?
E.g. the most common case: ./manage.py clearsessions
Django recommends running it via cron jobs...
But Docker does not recommend adding more than one running service to a single container...
I guess I could create a docker-compose service from the same image for each command I need to run, each wrapping its command in an infinite loop with the needed sleeps, but that seems like overkill for every command that needs to be scheduled.
What's your advice?
The way that worked for me:
In my Django project I have a crontab file like this:
0 0 * * * root python manage.py clearsessions > /proc/1/fd/1 2>/proc/1/fd/2
I installed/configured cron inside my Dockerfile:
RUN apt-get update && apt-get -y install cron
ADD crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
and in docker-compose.yml I add a new service that builds the same image as the Django project but runs cron -f as its command:
version: '3'
services:
  web:
    build: ./myprojectname
    ports:
      - "8000:8000"
    #...
  cronjobs:
    build: ./myprojectname
    command: ["cron", "-f"]
I ended up using this project - Ofelia:
https://github.com/mcuadros/ofelia
You just add it to your docker-compose and give it a config like:
[job-exec "task name"]
schedule = @daily
container = myprojectname_1
command = python ./manage.py clearsessions
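A compose service for it might look roughly like this (the image is the project's published one, but the config path, mount, and daemon flags are assumptions, so check the Ofelia README):
ofelia:
  image: mcuadros/ofelia:latest
  depends_on:
    - web
  command: daemon --config=/etc/ofelia/config.ini
  volumes:
    - ./ofelia.ini:/etc/ofelia/config.ini
    # Ofelia talks to the Docker daemon to exec into containers:
    - /var/run/docker.sock:/var/run/docker.sock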
Create one Docker image with your Django application.
You can use it to run your Django app (the web interface) and, at the same time, schedule your periodic tasks with cron by passing the command to the docker executable, like this:
docker run --rm your_image python manage.py clearsessions
The --rm makes sure that Docker removes the container once it finishes; otherwise you will quickly accumulate stopped containers that are of no use.
You can also pass in any extra arguments, for example using -e to modify the environment:
docker run --rm -e DJANGO_DEBUG=True -e DJANGO_SETTINGS_MODULE=production \
    your_image python manage.py clearsessions
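A host crontab entry tying this together (the image name is the placeholder from above) could run the cleanup nightly:
0 0 * * * docker run --rm your_image python manage.py clearsessions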

Setting number of gunicorn workers in django settings.py

I'm using gunicorn to run my Django site locally while developing. I have gunicorn added under INSTALLED_APPS in my settings.py, and I can call python manage.py run_gunicorn; gunicorn starts up and loads my site just fine. My problem is that I would like to run multiple workers, and by default it only seems to spawn one. I can also use gunicorn_django --workers=2, which fires up 2 workers. Can I configure the default number of workers created when I call python manage.py run_gunicorn to be more than 1?
Machine specs:
Ubuntu 11.10 in Virtualbox instance
2 virtualized processors
1.5 gigs of ram
Just add the flag:
python manage.py run_gunicorn --workers=4
You can make the command a Unix alias if you want the number of workers to default to this.
You can use the -c option to specify a configuration file (written in Python):
http://docs.gunicorn.org/en/latest/configure.html#settings
Since this is a Python file, you can from django.conf import settings and do
workers = settings.GUNICORN_WORKERS
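A minimal config-file sketch along those lines (the file name, the mysite module, and the GUNICORN_WORKERS setting are all hypothetical):
# gunicorn_conf.py
import os

# Settings must be importable before django.conf can be read.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

from django.conf import settings

bind = "0.0.0.0:8000"
workers = getattr(settings, "GUNICORN_WORKERS", 2)
Then start it with gunicorn -c gunicorn_conf.py mysite.wsgi.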
You're not supposed to run python manage.py run_gunicorn (newer gunicorn versions removed the Django integration entirely).
do gunicorn project_name.wsgi -b 0.0.0.0:8000 -w 10
or do python -m gunicorn project_name.wsgi -b 0.0.0.0:8000 -w 10
If you're keen on using a manage.py command, read this on how to create a custom management command, and create one that uses the subprocess module's Popen method to call one of the commands I've written above.
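For completeness, a sketch of such a command (the file path and names are hypothetical):
# myapp/management/commands/rungunicorn.py
import subprocess

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Start gunicorn with a configurable worker count"

    def add_arguments(self, parser):
        parser.add_argument("--workers", type=int, default=2)

    def handle(self, *args, **options):
        # Launch gunicorn as a child process and wait for it to exit.
        subprocess.Popen([
            "gunicorn", "project_name.wsgi",
            "-b", "0.0.0.0:8000",
            "-w", str(options["workers"]),
        ]).wait()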