Separate processes in a Dockerfile for dev and prod? - django

I have a project with the following structure.
ProjectName/
├── Dockerfile
├── api/
│   ├── Dockerfile
│   └── manage.py
├── docker-compose.yml
├── frontend/
│   ├── Dockerfile
│   ├── build/
│   └── src/
└── manifests/
    ├── development.yml
    └── production.yml
docker-compose.yml has a database image that's common to both environments, and development.yml and production.yml have similar but slightly different images for dev and production.
Example: in dev the api uses Django and just runs python manage.py runserver, but in prod it will run gunicorn api.wsgi.
The frontend runs npm start, but in prod I want it to be based on a different image. Currently the Dockerfile only works with one or the other, since the npm command is only available when I use FROM node and the nginx command only exists when I use FROM kyma/docker-nginx.
So how can I separate these out for the different environments?
./frontend/Dockerfile:
FROM node
WORKDIR /app/frontend
COPY package.json /app/frontend
RUN npm install
EXPOSE 3000
CMD ["npm", "start"]
# Only run this bit in production environment, and not anything above this line.
#FROM kyma/docker-nginx
#COPY build/ /var/www
#CMD 'nginx'
./api/Dockerfile:
FROM python:3.5
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        postgresql-client \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app/api
COPY requirements.txt /app/api
RUN pip install -r requirements.txt
EXPOSE 8000
# Run this command in dev
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
# Run this command in prod
#CMD ["gunicorn", "api.wsgi", "-b 0.0.0.0:8000"]
./docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
volumes:
  node-modules:
./manifests/production.yml:
version: '3'
services:
  gunicorn:
    build: ./api
    command: ["gunicorn", "api.wsgi", "-b", "0.0.0.0:8000"]
    restart: always
    volumes:
      - ./api:/app/api
    ports:
      - "8000:8000"
    depends_on:
      - db
  nginx:
    build: ./frontend
    command: ["nginx"]
    restart: always
    volumes:
      - ./frontend:/app/frontend
      - ./frontend:/var/www
      - node-modules:/app/frontend/node_modules
    ports:
      - "80:80"
volumes:
  node-modules:
./manifests/development.yml:
version: '3'
services:
  django:
    build: ./api
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    restart: always
    volumes:
      - ./api:/app/api
    ports:
      - "8000:8000"
    depends_on:
      - db
  frontend:
    build: ./frontend
    command: ["npm", "start"]
    restart: always
    volumes:
      - ./frontend:/app/frontend
      - node-modules:/app/frontend/node_modules
    ports:
      - "3000:3000"
volumes:
  node-modules:

You could use as an ENTRYPOINT a script that runs one command or the other, depending on an environment variable that you set at run time:
docker run -e env=DEV
# or
docker run -e env=PROD
You can set that same environment variable in a docker compose file.
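For example, a minimal sketch of such a script for the api image (the file name docker-entrypoint.sh, the variable name env, and the exact commands are assumptions taken from the question's manifests):
#!/bin/sh
# docker-entrypoint.sh (assumed name): pick the server command based on $env
if [ "$env" = "PROD" ]; then
    # production: serve through gunicorn
    exec gunicorn api.wsgi -b 0.0.0.0:8000
else
    # development (default): Django's runserver
    exec python manage.py runserver 0.0.0.0:8000
fi
You would then COPY this script into the image, mark it executable, set it as the ENTRYPOINT, and put environment: - env=PROD (or DEV) in each compose manifest instead of overriding command.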

Related

How do I start the Django server directly from the Docker container folder?

I have this project structure:
└── folder
    └── my_project_folder
        ├── my_app
        │   ├── __init__.py
        │   ├── asgi.py
        │   ├── settings.py
        │   ├── urls.py
        │   └── wsgi.py
        └── manage.py
    ├── .env.dev
    ├── docker-compose.yml
    ├── entrypoint.sh
    ├── requirements.txt
    └── Dockerfile
docker-compose.yml:
version: '3.9'
services:
  web:
    build: .
    command: python my_app/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - .env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=db_admin
      - POSTGRES_PASSWORD=db_pass
      - POSTGRES_DB=some_db
volumes:
  postgres_data:
Dockerfile:
FROM python:3.10.0-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
COPY . .
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
It's working, but I don't like the line python my_app/manage.py runserver 0.0.0.0:8000 in my docker-compose file.
What should I change to run manage.py from the Docker folder?
I mean, how can I use python manage.py runserver 0.0.0.0:8000 (without my_app)?
In your Dockerfile, you can use WORKDIR to change the working directory inside the image:
...
COPY . .
WORKDIR "my_app"
...
Then you are inside the my_app directory and you can call your command:
python manage.py ...
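With that change, the command in docker-compose.yml no longer needs the my_app prefix. A minimal sketch, assuming the WORKDIR line sits after the final COPY and the my_app package ends up under /usr/src/app:
# Dockerfile (tail)
COPY . .
WORKDIR /usr/src/app/my_app
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
# docker-compose.yml
command: python manage.py runserver 0.0.0.0:8000
Note that the ENTRYPOINT path stays absolute, so it keeps working regardless of the final WORKDIR.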

python collectstatic command is not run in Docker Compose and GitLab

I am trying to run python manage.py collectstatic in Docker, but nothing works. My Python project is missing some icons, and this command will solve the issue, but I don't know where to place it. I have read several questions here but had no luck.
Below is my docker-compose.ci.stag.yml file:
version: "3.7"
services:
web:
build:
context: .
dockerfile: Dockerfile.staging
cache_from:
- gingregisr*ty.azurecr.io/guio-tag:tag
image: gingregisrty.azurecr.io/guio-tag:tag
expose:
- 7000
env_file: .env
Then my docker-compose.staging.yml file:
version: '3.5'
# sudo docker login -p <password> -u <username>
services:
  api:
    container_name: api
    image: gingregisrty.azurecr.io/guio-tag:tag
    ports:
      - 7000:7000
    restart: unless-stopped
    env_file:
      - .env
    networks:
      - app-network
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/.docker/config.json:/config.json
    command: --interval 30
    environment:
      - WATCHTOWER_CLEANUP=true
    networks:
      - app-network
  nginx-proxy:
    container_name: nginx-proxy
    image: jwilder/nginx-proxy:0.9
    restart: always
    ports:
      - 443:443
      - 90:90
    volumes:
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - api
    networks:
      - app-network
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  certs:
  html:
  vhost:
  acme:
Then my Dockerfile.staging file:
# ./django-docker/app/Dockerfile
FROM python:3.7.5-buster
# set work directory
WORKDIR /opt/app
# Add current directory code to working directory
ADD . /opt/app/
# set environment variables
# Prevents Python from writing pyc files to disc.
ENV PYTHONDONTWRITEBYTECODE 1
# Prevents Python from buffering stdout and stderr.
ENV PYTHONUNBUFFERED 1
# Copy firebase file
# COPY afro-mobile-test-firebase-adminsdk-cspoa.json
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    tzdata \
    python3-setuptools \
    python3-pip \
    python3-dev \
    python3-venv \
    git \
    && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
# Install project dependencies
RUN pip3 install -r requirements.txt
EXPOSE 7000
# copy project
COPY . /opt/app/
CMD ["bash", "start-app.sh"]
Then my start-app.sh file:
#Run migrations
python manage.py migrate
#run tests
# python manage.py test
# run collect statics
python manage.py collectstatic
echo 'COLLECT STAIIC DONE ********'
echo $PORT
# Start server
# python manage.py runserver 0.0.0.0:$PORT
gunicorn server.wsgi:application --bind 0.0.0.0:$PORT
I'm using GitLab CI to automate the pipeline, so here is my gitlab.yml build script:
# Build and Deploy to Azure.
build-dev:
  stage: build
  before_script:
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login $AZ_REGISTRY_IMAGE -u $AZ_USERNAME_REGISTRY -p $AZ_PASSWORD_REGISTRY
    - docker pull $AZ_REGISTRY_IMAGE/guio-tag:tag || true
    - docker-compose -f docker-compose.ci.stag.yml build
    - docker push $AZ_REGISTRY_IMAGE/guio-tag:tag
  only:
    - develop
    - test-branch
The build runs successfully, but I am sure python manage.py collectstatic is not run. How best can I do this?

Uploading files to Django + Nginx doesn't save it in the media volume in Docker

Basically whenever I try to upload a file using my website, the file doesn't get saved on the media volume.
I don't think it's a code issue, as it works perfectly fine without the container, even when paired with nginx.
I followed this tutorial to set up my Docker containers.
Here is my Dockerfile:
# pull official base image
FROM python:3.9.6-alpine
# set work directory
WORKDIR /home/azureuser/ecommerce3
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# fixing alpine related pip errors
RUN apk update && apk add gcc libc-dev make git libffi-dev openssl-dev python3-dev libxml2-dev libxslt-dev
RUN apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
# copy project
COPY . .
# running entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
 
docker-compose.yml:
version: '3.8'
services:
  web:
    build:
      context: ./
      dockerfile: Dockerfile
    command: sh -c "cd DVM-Recruitment-Task/ && gunicorn DVM_task3.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static_volume:/home/azureuser/ecommerce3/staticfiles:Z
      - media_volume:/home/azureuser/ecommerce3/mediafiles:Z
      - log_volume:/home/azureuser/ecommerce3/logs
    expose:
      - 8000
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=---
      - POSTGRES_PASSWORD=---
      - POSTGRES_DB=---
  nginx:
    image: nginx
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx/DVM_task3:/etc/nginx/conf.d/default.conf
      - static_volume:/home/azureuser/ecommerce3/staticfiles/:Z
      - media_volume:/home/azureuser/ecommerce3/mediafiles/:Z
      - log_volume:/home/azureuser/ecommerce3/logs
      - (ssl certificate stuff here)
volumes:
  postgres_data:
  media_volume:
  static_volume:
  log_volume:
 
entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
python DVM-Recruitment-Task/manage.py makemigrations ecommerce
python DVM-Recruitment-Task/manage.py migrate --noinput
python DVM-Recruitment-Task/manage.py collectstatic --no-input --clear
exec "$#"
 
Also my nginx file already has this inside a server block
location /media/ {
    autoindex on;
    alias /home/azureuser/ecommerce3/mediafiles/;
}
 
settings.py has this:
MEDIA_URL = '/media/'
MEDIA_ROOT = 'mediafiles'
 
urls.py already has this line in it
urlpatterns[...] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
 
Also my project structure looks something like this:
.
├── DVM-Recruitment-Task
│   ├── DVM_task3
│   ├── README.md
│   ├── ecommerce
│   ├── manage.py
│   ├── static
│   └── templates
├── Dockerfile
├── docker-compose.yml
├── entrypoint.sh
├── nginx
│   └── DVM_task3
└── requirements.txt
Everything is inside a directory named 'ecommerce3'.
 
 
The mediafiles, staticfiles, and logs volumes are supposed to be created inside the same directory (ecommerce3).
On running collectstatic the static files load correctly and the logs work as well, but the media files just won't save to the mediafiles folder.
If I go into the web container's shell and manually create a file inside the mediafiles directory, I am able to view it at the /media URL, so I assume nginx is pointing in the right direction. However, when it comes to saving uploaded files, they never get saved in this volume.
I am very new to Django and Docker, so any help or nudge in the right direction will be greatly appreciated.
Try creating the mediafiles folder in your Dockerfile:
RUN mkdir $APP_HOME/mediafiles
where $APP_HOME is where your app is located: /home/azureuser/ecommerce3
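For example, a minimal sketch of how that could look in the question's Dockerfile (APP_HOME is an assumed variable name standing in for the existing /home/azureuser/ecommerce3 path):
# assumed: name the app directory once and pre-create the media folder
ENV APP_HOME=/home/azureuser/ecommerce3
WORKDIR $APP_HOME
RUN mkdir -p $APP_HOME/mediafiles
Pre-creating the directory means the media_volume mount point already exists inside the image when Django first tries to write an upload into it.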

How to run a periodic task using crontab inside a Docker container?

I'm building a Django+Angular web application which is deployed on a server using docker-compose, and I need to periodically run one Django management command. I searched SO a bit and tried the following:
docker-compose:
version: '3.7'
services:
  db:
    restart: always
    image: postgres:12-alpine
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - ./db:/var/lib/postgresql/data
  api:
    restart: always
    image: registry.gitlab.com/*******/price_comparison_tool/backend:${CI_COMMIT_REF_NAME:-latest}
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/code
    environment:
      - SUPERUSER_PASSWORD=********
      - DB_HOST=db
      - DB_PORT=5432
      - DB_NAME=price_tool
      - DB_USER=price_tool
      - DB_PASSWORD=*********
    depends_on:
      - db
  web:
    restart: always
    image: registry.gitlab.com/**********/price_comparison_tool/frontend:${CI_COMMIT_REF_NAME:-latest}
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - .:/frontend
    ports:
      - "80:80"
    depends_on:
      - api
volumes:
  backend:
  db:
Dockerfile (backend):
FROM python:3.8.3-alpine
ENV PYTHONUNBUFFERED 1
RUN apk add postgresql-dev gcc python3-dev musl-dev && pip3 install psycopg2
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
ADD entrypoint.sh /entrypoint.sh
ADD crontab_task /crontab_task
ADD run_boto.sh /run_boto.sh
RUN chmod a+x /entrypoint.sh
RUN chmod a+x /run_boto.sh
RUN /usr/bin/crontab /crontab_task
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir -p db
RUN mkdir -p logs
ENTRYPOINT ["/entrypoint.sh"]
CMD ["gunicorn", "-w", "3", "--timeout", "300", "--bind", "0.0.0.0:8000", "--access-logfile", "-", "price_tool_project.wsgi>
crontab_task:
*/1 * * * * /run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
run_boto.sh:
#!/bin/bash -e
cd price_comparison_tool/backend/
python manage.py boto.py
But when I run docker-compose up --build I get the following messages in the terminal:
api_1 | /bin/ash: python manage.py boto > /proc/1/fd/1 2>/proc/1/fd/2: not found
api_1 | /bin/ash: /run_boto.sh: not found
The project structure is the following:
.
├── backend
├── db
├── docker-compose.yml
└── frontend
Can anybody give me advice on how to fix this issue and run the management command periodically? Thanks in advance!
EDIT
I made the following update:
crontab_task:
*/1 * * * * /code/run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
and now the run_boto.sh path is correct, but I get the following error:
/bin/ash: /code/run_boto.sh: Permission denied
If you are running this application as a non-root user, then that is the problem: cron or crontab cannot be used by a non-root user.
You can take a look at this answer, which I got when I was facing the same problem.
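If the image does keep running as root, a minimal sketch (assuming the busybox crond shipped with the Alpine base image) is to make the script executable at the path the crontab references and start the cron daemon from the entrypoint:
# Dockerfile: ensure the copied script is executable at the path crontab_task uses
RUN chmod a+x /code/run_boto.sh
# entrypoint.sh: start crond in the background, then hand off to the main command
crond -b -l 8
exec "$@"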

Unable to Run Celery and celery beat using docker in django application (Unable to load celery application)

When I run my application without Docker it works perfectly, but in docker-compose I am getting this error:
| Error: Invalid value for '-A' / '--app':
| Unable to load celery application.
| The module sampleproject was not found.
My docker-compose file:
app:
  container_name: myapp
  hostname: myapp
  build:
    context: .
    dockerfile: Dockerfile
  image: sampleproject
  tty: true
  command: >
    bash -c "
    python manage.py migrate &&
    python manage.py runserver 0.0.0.0:8000
    "
  env_file: .env
  ports:
    - "8000:8000"
  volumes:
    - .:/project
  depends_on:
    - database
    - redis
redis:
  image: redis:alpine
celery:
  build:
    context: ./
    dockerfile: Dockerfile
  command: celery -A sampleproject worker -l info
  depends_on:
    - database
    - redis
celery-beat:
  build: .
  command: celery -A sampleproject beat -l info
  depends_on:
    - database
    - redis
    - celery
My Dockerfile:
FROM python:3.8
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt --no-cache-dir \
&& rm -rf requirements.txt
RUN mkdir /project
WORKDIR /project
My folder structure is something like this:
I had the same problem, and here is the solution we found. In fact, Celery is right to complain about not being able to run, as it needs an instance of the application.
To fix it, you only need to add the volumes directive in the docker-compose.yaml file, pointing to the project folder, in this case on the celery and celery-beat services.
Example:
app:
  container_name: myapp
  hostname: myapp
  build:
    context: .
    dockerfile: Dockerfile
  image: sampleproject
  tty: true
  command: >
    bash -c "
    python manage.py migrate &&
    python manage.py runserver 0.0.0.0:8000
    "
  env_file: .env
  ports:
    - "8000:8000"
  volumes:
    - .:/project
  depends_on:
    - database
    - redis
redis:
  image: redis:alpine
celery:
  build:
    context: ./
    dockerfile: Dockerfile
  command: celery -A sampleproject worker -l info
  volumes:
    - .:/project
  depends_on:
    - database
    - redis
celery-beat:
  build: .
  command: celery -A sampleproject beat -l info
  volumes:
    - .:/project
  depends_on:
    - database
    - redis
    - celery
So when the celery container starts, it will see that the volume is there and will run the project without any problems.
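As a quick check after editing the compose file (a sketch, assuming the service names above):
docker-compose up -d --build celery celery-beat
docker-compose logs -f celery
The worker log should now show Celery starting instead of the "The module sampleproject was not found" error.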