run initial commands in a docker-compose service - django

I followed this tutorial to run my Django web app locally; apart from the web app, the only other service is a Postgres db.
I wrote a simple script, entrypoint.sh, to automate the initial operations needed by a Django app, like migrate, makemigrations, collectstatic, and createsuperuser.
Everything works fine, except that entrypoint.sh runs every time I use docker-compose up, performing initial operations that should only run once.
How can I set up my Dockerfile or docker-compose.yml so that entrypoint.sh is run just the first time, and not every time I docker-compose down and then docker-compose up again?
Dockerfile
# importing base image
FROM python:3.9
# updating the package index inside the image (not the host machine)
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*
# changing current working directory to /usr/src/app
WORKDIR /usr/src/app
# copying requirements.txt to the present working directory
COPY requirements.txt ./
# installing dependencies in the container
RUN pip install -r requirements.txt
# copying all the files to present working directory
COPY . .
# informing Docker that the container listens on the
# specified network ports at runtime i.e 8000.
EXPOSE 8000
ENTRYPOINT ["./entrypoint.sh"]
docker-compose.yml
version: '3.7'
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  app:
    build: ./
    command: gunicorn sial.wsgi:application --workers=2 --bind 0.0.0.0:8000
    volumes:
      - ./data/:/usr/src/app/data/
      - ./media/:/usr/src/app/media/
    ports:
      - 8000:8000
      - 5432:5432
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - DJANGO_SUPERUSER_EMAIL=admin@email.it
      - DJANGO_SUPERUSER_USERNAME=admin@email.it
      - DJANGO_SUPERUSER_PASSWORD=passadmin
    depends_on:
      - db
entrypoint.sh
#!/bin/bash
python3 manage.py migrate;
python3 manage.py makemigrations;
python3 manage.py migrate;
python3 manage.py collectstatic --clear;
python3 manage.py createsuperuser --no-input;
gunicorn sial.wsgi:application --workers=2 --bind 0.0.0.0:8000;
RECAP
In the directory where my Dockerfile and docker-compose.yml file are:
sudo docker-compose build app
sudo docker-compose up -> initial migrations are applied, static files are collected, superuser created
sudo docker-compose down
sudo docker-compose up -> initial migrations are applied, static files are collected, superuser created AGAIN. I'm trying to avoid this.
I'm new to docker-compose, and any help is really appreciated. Thanks.

A dirty but simple way would be to ignore the error of the createsuperuser command by changing it to python3 manage.py createsuperuser --no-input || true;.
This might even be the solution you prefer, because if the variables for docker-compose change, a new superuser would be created with the changed values.
#!/bin/bash
python3 manage.py migrate;
python3 manage.py makemigrations;
python3 manage.py migrate;
python3 manage.py collectstatic --clear;
python3 manage.py createsuperuser --no-input || true;
gunicorn sial.wsgi:application --workers=2 --bind 0.0.0.0:8000;
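
If you want the one-time steps skipped entirely on later runs, another common pattern (a sketch, not part of the answer above; the marker path is an assumption based on the ./data volume in the compose file) is to guard them with a marker file stored on a volume that survives docker-compose down:

#!/bin/bash
# run one-time setup only if the marker file is absent;
# /usr/src/app/data/ is bind-mounted, so the marker survives restarts
SETUP_MARKER=/usr/src/app/data/.initialized
if [ ! -f "$SETUP_MARKER" ]; then
    python3 manage.py migrate
    python3 manage.py collectstatic --clear
    python3 manage.py createsuperuser --no-input
    touch "$SETUP_MARKER"
fi
exec gunicorn sial.wsgi:application --workers=2 --bind 0.0.0.0:8000

Note that migrate is normally safe to run on every start anyway, since Django skips migrations that are already applied.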

Related

Docker-compose executes django twice

I am running on Windows 10 and trying to set up a project via docker-compose and Django.
If you are interested, it will take you 3 minutes to follow this tutorial and you will get the same error as me: docs.docker.com/samples/django
When I run
docker-compose run app django-admin startproject app_settings .
I get the following error
CommandError: /app/manage.py already exists. Overlaying a project into an existing directory won't replace conflicting files.
Or when I do this
docker-compose run app python manage.py startapp core
I get the following error
CommandError: 'core' conflicts with the name of an existing Python module and cannot be used as an
app name. Please try another name.
Seems like the command is maybe executed twice? Not sure why?
Dockerfile
FROM python:3.9-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install
RUN apt-get install -y \
    libpq-dev \
    gcc \
    && apt-get clean
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
Docker-compose
version: "3.9"
compute:
container_name: compute
build: ./backend
# command: python manage.py runserver 0.0.0.0:8000
# volumes:
# - ./backend/app:/app
ports:
- "8000:8000"
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
depends_on:
- db
Try running your image without any arguments; you are already using the command keyword in your docker-compose file, or just remove that line from the file.

django migrations issues with postgres

I'm working on a Django project which is dockerized and uses Postgres for the database, but we are facing migration issues. Every time someone changes a model, when another dev pulls from git and tries to apply the migrations with python manage.py migrate, the error is sometimes "table already exists" or "table doesn't exist", because the migration files are already in the repo. So every time I need to apply the migrations using --fake, but I guess migrating with the --fake flag every time is not a good approach.
docker-compose.yml
version: "3.8"
services:
db:
container_name: db
image: "postgres"
restart: always
volumes:
- postgres_data:/var/lib/postgresql/data/
env_file:
- dev.env
ports:
- "5432:5432"
environment:
- POSTGRES_DB=POSTGRES_DB
- POSTGRES_USER=POSTGRES_USER
- POSTGRES_PASSWORD=POSTGRES_PASSWORD
app:
container_name: app
build:
context: .
command: bash -c "python manage.py runserver 0.0.0.0:8000"
volumes:
- ./core:/app
- ./data/web:/vol/web
env_file:
- dev.env
ports:
- "8000:8000"
depends_on:
- db
volumes:
postgres_data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
# installing nano and cron service
RUN apt-get update
RUN apt-get install -y cron
RUN apt-get install nano
RUN pip install --upgrade pip
COPY requirements.txt /app/
# install dependencies and manage assets
RUN pip install -r requirements.txt && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media
# files for cron logs
RUN mkdir /cron
RUN touch /cron/django_cron.log
# start cron service
RUN service cron start
RUN service cron restart
RUN chmod +x /scripts/run.sh
CMD ["/scripts/run.sh"]
run.sh
#!/bin/sh
set -e
ls -la /vol/
ls -la /vol/web
whoami
python manage.py collectstatic --noinput
python manage.py migrate
service cron start
service cron restart
python manage.py crontab add
printenv > env.txt
cat /var/spool/cron/crontabs/root >> env.txt
cat env.txt > /var/spool/cron/crontabs/root
uwsgi --socket :9000 --workers 4 --master --enable-threads --module alectify.wsgi
Django offers the ability to create updated migrations when the models change (see https://docs.djangoproject.com/en/4.0/topics/migrations/#workflow for more information). You can generate and then apply the updated migrations using:
python manage.py makemigrations
python manage.py migrate
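
In a dockerized setup like this, a reasonable workflow (a sketch; the service name app and the paths are taken from the compose file above) is to generate the migrations inside the container, let the files land on the host through the ./core:/app bind mount, and commit them to git so other devs only apply them instead of regenerating their own:

# generate migration files inside the running app container;
# they appear on the host via the ./core:/app bind mount
docker-compose exec app python manage.py makemigrations
# apply them to the local database
docker-compose exec app python manage.py migrate
# commit the generated files so teammates apply the same migrations
git add core/ && git commit -m "add migrations"

Since everyone then applies the same committed migration files, the "table already exists" / "table doesn't exist" conflicts (and the need for --fake) should go away.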

How to start cron service on Dockerfile [duplicate]

This question already has answers here:
How to run a cron job inside a docker container?
(29 answers)
Docker Compose - How to execute multiple commands?
(20 answers)
I have installed django-crontab==0.7.1 and added it to the INSTALLED_APPS Django setting. I'm trying to start the cron service during the Docker image build and add the cron task with python manage.py crontab add, but nothing happens.
Dockerfile:
FROM python:3.8-slim-buster
LABEL maintainer="info@albertosanmartinmartinez.es" version="1.0.0"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update -y && apt-get install -y build-essential postgresql python-scipy python-numpy python-pandas libgdal-dev && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN mkdir /industrialareas
COPY ./project /industrialareas/
COPY ./requirements.txt /industrialareas/
WORKDIR /industrialareas
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD service cron start
docker-compose.yml:
version: '3.7'
services:
  django:
    container_name: industrialareas_django_ctnr
    build:
      context: .
      dockerfile: Dockerfile-django
    restart: unless-stopped
    env_file: ./project/project/settings/.env
    command: python manage.py check
    command: python manage.py collectstatic --noinput
    command: python manage.py runserver 0.0.0.0:8000
    command: python manage.py crontab add
    volumes:
      - ./project:/industrialareas
    depends_on:
      - postgres
    ports:
      - 8000:8000
But when I go into the container and run the service cron status command, I get the error.
[FAIL] cron is not running ... failed!
Could anybody help me, please?
Thanks in advance.
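
The linked duplicates cover this in depth. The short version (a sketch, not from an answer in this thread): service cron start in a RUN instruction has no lasting effect, because each RUN executes in a throwaway build container; nothing started there is still running in the final image. The usual fix is to register the crontab and run cron in the foreground as the container's main process:

# sketch of a Dockerfile start command: register the jobs, then run
# cron with -f so it stays in the foreground and keeps the container alive
CMD python manage.py crontab add && cron -f

Note that the four command: lines in the compose file above also override each other; only the last one (python manage.py crontab add) takes effect, which is what the second linked duplicate is about.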

Docker-compose command: file not found

I want to initialize Docker for my Django project with PostgreSQL. I followed the instructions from https://docs.docker.com/compose/django/
I also want to be sure that db runs before web, so I use wait_for_db.sh. When I try to execute docker-compose up,
I see the following response:
web_1 | chmod: cannot access 'wait_for_db.sh': No such file or directory
pipingapi_web_1 exited with code 1
Before trying "docker-compose run", I change directory to the project root. I also tried
$ docker-compose run web django-admin startproject pipingapi . even though the project was created before with venv.
I guess it's not exactly about the .sh file, because when I erase the lines referring to that file, Docker can't find manage.py either (look at the command order in docker-compose.yml). I also tried to put code/ before wait_for_db.sh in docker-compose.yml, but it did not work.
My project tree:
.
├── apienv/
├── docker-compose.yml
├── Dockerfile
├── manage.py
├── project/
├── README.md
├── requirements.txt
├── restapi/
└── wait_for_db.sh
Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN apt-get update -q
RUN apt-get install -yq netcat
docker-compose.yml
version: '3'
services:
db:
image: postgres:12.3
volumes:
- /var/lib/postgresql/data
env_file:
- ./.env
web:
build: .
command:
sh -c "chmod +x wait_for_db.sh
&& ./wait_for_db.sh
&& python manage.py makemigrations
&& python manage.py migrate
&& python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
env_file:
- ./.env
If it matters: I use Docker Toolbox on Windows 8.1.
EDIT (SOLVED):
It looked like I was overwriting my tree with the "code" directory, so I deleted
volumes:
  - .:/code
and it works.
As the image build stage is complete, you could drop into the Docker image and interactively run the commands you are trying to fix.
That should give you some hints:
docker run -it web_1 bash
My guess is that, as you are setting WORKDIR before you run the COPY, you are probably in the wrong directory.
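
A small caveat (an assumption about naming, not part of the answer): web_1 is a container name, while the image Compose builds is usually tagged <project>_web, so letting Compose resolve the name avoids the guesswork:

# open a shell in a throwaway container built from the web service's image
docker-compose run --rm web bash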

Behavior from docker-compose command not the same as run in Dockerfile

I have a Django project, and I've been struggling with automating the generation of its static files. My project structure has a docker-compose.yml file and a Dockerfile for every container image.
The docker-compose.yml file for my project:
version: '3'
services:
  web:
    build: ./dispenser
    command: gunicorn -c gunicorn.conf.py dispenser.wsgi
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
The Dockerfile for the Django project I'm using:
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
    mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
RUN python manage.py makemigrations && \
    python manage.py migrate && \
    python manage.py collectstatic --no-input
After several hours of testing and research, I've found that running the collectstatic and migration commands from the Dockerfile doesn't produce the same result as doing it via the command argument in the docker-compose.yml file.
If I do it as shown above, when the collectstatic command runs, only the "staticfiles" folder is generated (no files inside it), and the database migrations aren't applied (note that I'm using the default .sqlite3 db), even though the stdout when creating the container said that migrations were applied and static files were generated.
The only workaround I found to make it work was executing bash from the container and then running those commands from there.
But later I found out that if I specify those commands in the docker-compose.yml, everything works as expected, leaving the files as follows:
docker-compose.yml
version: '3'
services:
  web:
    build: ./dispenser
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --no-input && gunicorn -c gunicorn.conf.py dispenser.wsgi"
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
Dockerfile
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
    mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
Can anyone explain to me why this occurs? And is there another way of achieving what I intend without having to specify the commands in the docker-compose.yml file?
When you mount a host directory into a container, the contents of the host directory shadow the contents of the container's directory:
volumes:
  - ./dispenser:/dispenser
So when you run your container, the initial contents of /dispenser inside the container will be the contents of ./dispenser from the host machine. Any content already at /dispenser inside the container is shadowed, so the content generated at image build time by the RUN instructions in your Dockerfile is lost.
In your second approach of using command in the compose file, you mount the volume first and then generate the content, and hence it works.
The command instruction in the compose file overrides the default command of the Docker image, which can be set using the CMD instruction in the Dockerfile. Since you want to use the first approach of running your Python commands at image build time using RUN instructions, you can RUN them in a different directory (say /tmp/dispenser) and, as part of the command in compose or CMD in the Dockerfile, move the generated content from /tmp/dispenser to /dispenser.
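A minimal sketch of that last suggestion (the staging path and the compose command are assumptions, not from the answer): stage the build-time output outside the mount point, then copy it into the mounted directory at startup before handing off to gunicorn:

# in the Dockerfile: generate static files at build time, then stage them
# outside /dispenser so the bind mount cannot shadow them
RUN python manage.py collectstatic --no-input && \
    mkdir -p /tmp/dispenser && \
    mv staticfiles /tmp/dispenser/staticfiles

# in docker-compose.yml: copy the staged files into the now-mounted
# directory, then start the server
command: bash -c "cp -r /tmp/dispenser/staticfiles /dispenser/ && gunicorn -c gunicorn.conf.py dispenser.wsgi"

Migrations are a different story: the .sqlite3 file written at build time is shadowed by the bind mount as well, so with a mounted project directory it is usually simpler to run migrate at container start, as in the second approach.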