Docker auto reload not working on remote server - django

I've set up a Django project that runs in Docker. I want to have my dev environment on a remote server and have it auto reload as I change my files locally. I've set up remote deployment in PyCharm and it works fine: all local changes are reflected on my remote server, and I can also see that the files get changed inside the Docker container, as I've set up a volume in my docker-compose file. However, auto reloading is not working, and I cannot figure out why.
My docker-compose file:
version: '3.7'
services:
  web:
    container_name: my_project_ws
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn my_project.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/my_project/staticfiles
      - media_volume:/my_project/mediafiles
      - .:/my_project
    expose:
      - 8000
    env_file:
      - .env
    depends_on:
      - db
  db:
    container_name: my_project_db
    restart: always
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - .env
  nginx:
    restart: always
    build: ./nginx
    volumes:
      - static_volume:/my_project/staticfiles
      - media_volume:/my_project/mediafiles
    ports:
      - 80:80
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
  media_volume:

Running Gunicorn with --reload solved the problem.
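For reference, only the command line of the web service in the compose file above needs to change, e.g.:
web:
  command: gunicorn my_project.wsgi:application --bind 0.0.0.0:8000 --reload
Note that Gunicorn's --reload is intended for development, not production use.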

Related

django.db.utils.OperationalError: could not translate host name "db" to address: Name does not resolve. How to solve this issue?

Can somebody help me solve this issue? Why am I getting this error? I have db set as the host in .env, and I have links and a network in the docker-compose file too. I am not able to figure out where the issue is being raised.
Here is my docker-compose file:
version: "3.9"
volumes:
dbdata:
networks:
django:
driver: bridge
services:
web:
build:
context: .
volumes:
- .:/home/django
ports:
- "8000:8000"
command: gunicorn Django.wsgi:application --bind 0.0.0.0:8000
container_name: django_web
restart: always
env_file: .env
depends_on:
- db
links:
- db:db
networks:
- django
db:
image: postgres
volumes:
- dbdata:/var/lib/postgresql
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
ports:
- 5430:5432
networks:
- django
container_name: django_db
Here is my .env with the database settings:
DB_USER=admin
DB_NAME=test
DB_PASSWORD=admin
DB_HOST=db
DB_PORT=5432
DB_SCHEMA=public
CONN_MAX_AGE=60
The problem arises when you run:
docker compose up --build
As the docker compose documentation describes, the --build flag means "Build images before starting containers."
That means that at build time there is no "db" container running, so a "db" host name cannot exist yet.
A suggestion would be to not perform any DB transaction during the build phase. You can do any database "rendering" as a seeding step when your app starts.
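In practice that usually means moving things like migrations out of the Dockerfile's build steps and into the container's start command, which runs once the db service is reachable. A minimal sketch against the compose file above (the inline sh -c wrapper is an assumption, not taken from the question):
web:
  command: sh -c "python manage.py migrate && gunicorn Django.wsgi:application --bind 0.0.0.0:8000"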

service refers to undefined volume

I want to share temp files between the Django project and the celery worker (it works with TemporaryUploadedFiles, so I want to have access to these files from the celery worker to manage them). I've read about shared volumes, so I tried to implement one in my docker-compose file and run it, but the command gave me this error:
$ docker compose up --build
service "web" refers to undefined volume shared_web_volume/: invalid compose project
And sometimes "web" is replaced with "celery", so neither celery nor django has access to this volume.
Here is my docker-compose.yml file:
volumes:
  shared_web_volume:
  postgres_data:
services:
  db:
    image: postgres:12.0-alpine
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env
  web:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    ports:
      - 1337:8000
    env_file:
      - ./.env
    depends_on:
      - db
  celery:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: celery -A MoreEnergy worker --loglevel=info
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    env_file:
      - ./.env
    depends_on:
      - web
      - redis
  redis:
    image: redis:5-alpine
What am I doing wrong?
Upd: the temp dir is my project folder (I've set it with the FILE_UPLOAD_TEMP_DIR variable in the settings file), so I don't need to make one more volume only for the shared temp files (if I do, tell me).
Your shared volume is named shared_web_volume, which is different from the name you use in the service's volumes section, shared_web_volume/ (note the trailing slash).
So my suggestion is to erase the forward slash so it looks like this:
volumes:
  - "shared_web_volume:/MoreEnergy/"
Don't forget to do the same thing in the celery container.
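Put together, both services would then mount the named volume the same way, roughly like this sketch based on the compose file above:
web:
  volumes:
    - "shared_web_volume:/MoreEnergy/"
celery:
  volumes:
    - "shared_web_volume:/MoreEnergy/"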

Keep Redis data alive between docker-compose down and up in Docker container

The question is about keeping Redis data alive between docker-compose up and docker-compose down.
In the docker-compose.yaml file below, the db service uses the postgres_data:/var/lib/postgresql/data/ volume to keep data alive.
I would like to do something similar for the redis service, but I cannot find a workable way to do so. The only way I have managed to achieve this goal is to store the data on local storage: ./storage/redis/data:/data. All experiments with an external volume gave no results.
The question is: is it possible to store Redis data between docker-compose down and docker-compose up in a volume, as is done for the db service?
Sorry if the question is naive…
Thanks
version: '3.8'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    env_file:
      - ./series/.env
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    build:
      context: .
      dockerfile: postgres.dockerfile
    restart: always
    env_file:
      - ./series/.env
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=1q2w3e
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - target: 5432
        published: 5433
        protocol: tcp
        mode: host
  redis:
    image: redis:alpine
    command: redis-server --appendonly yes
    ports:
      - target: 6379
        published: 6380
        protocol: tcp
        mode: host
    volumes:
      - ./storage/redis/data:/data
    restart: always
    environment:
      - REDIS_REPLICATION_MODE=master
volumes:
  postgres_data:
You just need to add a named volume for Redis data next to the postgres_data:
volumes:
  postgres_data:
  redis_data:
Then change the host path to the named volume:
redis:
  ...
  volumes:
    - redis_data:/data
If Redis was already saving its data under the host path, then the above will work for you. I mention that because you have to configure Redis to enable persistent storage (see the Redis Docker Hub page: https://hub.docker.com/_/redis).
Beware, running docker-compose down -v will destroy volumes as well.
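Put together, the redis service with a named volume would look roughly like the sketch below, based on the compose file above (the existing --appendonly yes command is what enables persistence):
redis:
  image: redis:alpine
  command: redis-server --appendonly yes
  restart: always
  volumes:
    - redis_data:/data
volumes:
  postgres_data:
  redis_data: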

Azure Container Instances: Create a multi-container group from Django+Nginx+Postgres

I have dockerized a Django project with Postgres, Gunicorn, and Nginx following this tutorial.
Now I want to move the application to Azure Container Instances. Can I simply create a container group following this tutorial, and expect the container images to communicate the right way?
To run the project locally I use docker-compose -f docker-compose.prod.yml up -d --build. But how is the communication between the containers handled in Azure Container Instances?
The docker-compose.prod.yml looks like this:
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
  media_volume:
The containers will be able to communicate with each other using the service names (web, db, nginx) because they are part of the container group's local network. Also, take a look at the documentation, as you can't use docker-compose files directly unless you use the edge version of Docker Desktop.
On another note, upon restarting you will lose whatever you stored in your volumes because you are not using some kind of external storage. Look at the documentation.

Beginner Docker docker-compose

I'm going through this tutorial and I've successfully got the stack up and running.
What's bugging me is that when I change my code (in the web service) on my host, it automatically picks up the changes when I reload the page in the browser. I don't understand why it's doing that. Here's my docker-compose.yml file:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - ./web:/usr/src/app
    - ./web/static:/usr/src/app/static
  env_file: .env
  environment:
    DEBUG: 'true'
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
redis:
  restart: always
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - redisdata:/data
I didn't think that this was gunicorn doing the reloading because I believe gunicorn needs the --reload flag to actually do hot reloading.
These lines mean that you are mapping locations on your host to locations within your web container:
volumes:
  - ./web:/usr/src/app
  - ./web/static:/usr/src/app/static
Therefore any time you change code in the ./web directory, it is updated within the container. If you don't want that to happen, then you need to copy those directories into the image when you build your container, by specifying that in the Dockerfile for that container.
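If you do want the code baked into the image instead of live-mounted, the web service can simply drop the bind mounts, roughly like this sketch (the code must then be copied in by ./web's Dockerfile, and changes only show up after a rebuild):
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000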