I am using cookiecutter-django (https://github.com/pydanny/cookiecutter-django) for one of my live projects, and for the last few days I have been observing the database data randomly getting deleted in parts. I checked the logs but found nothing, and I don't know how to approach the issue. I would really appreciate any guidance. I am using a Docker setup with Traefik, Postgres, Redis, Celery and Django. The code is deployed on a Digital Ocean bucket, and only I have access to it (which rules out the possibility of another person doing this).
version: '3'

volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_traefik: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: fancy_tsunami_production_django
    depends_on:
      - postgres
      - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: fancy_tsunami_production_postgres
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - ./.envs/.production/.postgres

  traefik:
    build:
      context: .
      dockerfile: ./compose/production/traefik/Dockerfile
    image: fancy_tsunami_production_traefik
    depends_on:
      - django
    volumes:
      - production_traefik:/etc/traefik/acme
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"

  redis:
    image: redis:3.2

  celeryworker:
    <<: *django
    image: fancy_tsunami_production_celeryworker
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: fancy_tsunami_production_celerybeat
    command: /start-celerybeat

  flower:
    <<: *django
    image: fancy_tsunami_production_flower
    ports:
      - "5555:5555"
    command: /start-flower
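One way to approach this is to turn on statement logging in Postgres and watch the container logs for unexpected DELETE/DROP/TRUNCATE statements and the connections that issue them. This is only a diagnostic sketch, not part of the original setup; it assumes the custom Dockerfile is based on the official postgres image (cookiecutter-django's is), so standard server flags can be passed through the service command:

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: fancy_tsunami_production_postgres
    # Diagnostic only: log every statement and every connection so that a rogue
    # DELETE/DROP/TRUNCATE shows up in the container logs. Remove once found.
    command: postgres -c log_statement=all -c log_connections=on -c log_disconnections=on
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - ./.envs/.production/.postgres

Then follow the logs (assuming the file above is saved as production.yml, cookiecutter-django's default name):

  docker-compose -f production.yml logs -f postgres | grep -Ei 'delete|drop|truncate'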
Related
I have a problem with the initial launch of docker-compose up: the DB is not initialized yet, so Django throws an error.
I tried to use restart_policy, but it didn't help; the web service was restarted almost immediately, ahead of the DB service, regardless of which delay period I set.
version: "3.9"
services:
web:
build: .
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/app
ports:
- "8000:8000"
deploy:
restart_policy:
condition: on-failure
delay: 15s
environment:
- POSTGRES_NAME=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_HOST=db
depends_on:
- db
db:
container_name: db_pg
image: postgres
hostname: postgres
environment:
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_HOST: db
volumes:
- ./data/db:/var/lib/postgresql/data
restart: unless-stopped
pgadmin:
image: dpage/pgadmin4
depends_on:
- db
ports:
- "5555:80"
environment:
PGADMIN_DEFAULT_EMAIL: pgadmin4#pgadmin.org
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- ./data/pgadmin:/var/lib/pgadmin/data
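Rather than relying on a restart policy, the usual fix for this race is to give the db service a healthcheck and make web wait for it to be healthy. A minimal sketch against the services above; the long depends_on form with a condition requires a reasonably recent Docker Compose:

  services:
    db:
      image: postgres
      # ... existing settings ...
      healthcheck:
        # $$ defers expansion to the container's shell, where POSTGRES_USER/DB are set
        test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
        interval: 5s
        timeout: 5s
        retries: 10

    web:
      # ... existing settings ...
      depends_on:
        db:
          condition: service_healthy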
I'm trying to import a PostgreSQL dump into a Docker container, but it doesn't work.
Dockerfile:
FROM postgres
COPY postgres.sql /docker-entrypoint-initdb.d/
version: "3.9"
docker-compose.yml
services:
db:
build: ./DB
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=gamenews
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=321678
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
environment:
- POSTGRES_NAME=gamenews
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=321678
depends_on:
- db
The project structure and the docker compose up logs were attached as screenshots (images not reproduced here).
I would suggest that you use a proper Postgres image:
postgres:
  image: postgres:13
  volumes:
    - '.:/app:rw'
    - 'postgres:/var/lib/postgresql/data'
Here's a list of all the tags you can use: https://hub.docker.com/_/postgres
Just spin that up; the volume maps the data onto your hard drive, so it isn't ephemeral inside the container. Then you can run pg_restore on your dump file.
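One detail worth noting: the official postgres image only runs files in /docker-entrypoint-initdb.d/ when the data directory is empty, so an already-populated ./data/db volume will silently skip postgres.sql. Restoring into the running container works regardless. A sketch using the service and database names from the question, with psql for a plain-SQL dump and pg_restore for a custom-format one (dump.custom is a hypothetical file name):

  # plain-SQL dump (what a default pg_dump produces):
  docker compose exec -T db psql -U postgres -d gamenews < postgres.sql

  # custom-format dump (produced with pg_dump -Fc):
  docker compose exec -T db pg_restore -U postgres -d gamenews < dump.custom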
I want to create 2 environments: a test environment and the standard dev environment. I need to run the Django test server in the test environment and the regular server, manage.py runserver, in the other. The main dev environment uses docker-compose.yml and the test environment uses test.yml. When I run docker-compose up, live reload works normally. When I run docker-compose -f test.yml up, the test server runs but Docker does not do live reloads. I added the same services to both files to shorten the CLI syntax.
docker-compose.yml
version: "3.9"
services:
web:
build:
dockerfile: ./compose/django/Dockerfile
context: .
container_name: main_app_django
env_file:
- ./.local/.env
command: compose/django/start.sh
volumes:
- .:/code
ports:
- "8000:8000"
redis:
container_name: main_app_redis
image: "redis:alpine"
command: redis-server
ports:
- "6379:6379"
celeryworker:
build:
dockerfile: ./compose/celery/Dockerfile
context: .
container_name: main_app_celery
command: celery -A app worker -l INFO
env_file:
- ./.local/.env
volumes:
- .:/code
depends_on:
- redis
test.yml
version: "3.9"
services:
web:
build:
dockerfile: ./compose/django/Dockerfile
context: .
container_name: test_main_app_django
env_file:
- ./.local/.env
command: >
sh -c "python manage.py makemigrations &&
python manage.py migrate && python manage.py testserver cypress/fixtures/user.json cypress/fixtures/tracks.json --addrport 0.0.0.0:8000"
volumes:
- .:/code
ports:
- "8000:8000"
redis:
container_name: test_main_app_redis
image: "redis:alpine"
command: redis-server
ports:
- "6379:6379"
celeryworker:
build:
dockerfile: ./compose/celery/Dockerfile
context: .
container_name: test_main_app_celery
command: celery -A appworker -l INFO
env_file:
- ./.local/.env
volumes:
- .:/code
depends_on:
- redis
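For what it's worth, Django's testserver command starts the development server with the autoreloader disabled (if I remember its implementation correctly), which would explain why only the test stack loses live reload. A hedged workaround sketch, assuming the Cypress fixtures can simply be loaded with loaddata and the plain runserver (which keeps the reloader) used instead:

  web:
    # Substitute for testserver, not a drop-in equivalent: testserver builds a
    # throwaway test database, while loaddata writes to the configured one.
    # runserver keeps Django's autoreloader enabled.
    command: >
      sh -c "python manage.py migrate &&
             python manage.py loaddata cypress/fixtures/user.json cypress/fixtures/tracks.json &&
             python manage.py runserver 0.0.0.0:8000"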
I have the following docker-compose file:
version: '3'

volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_caddy: {}
  ghotel_cc_production_static_vol: {}

services:
  django: &django
    restart: always
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: ghotel_cc_api_production_django
    volumes:
      - production_static_vol:/app/static
    depends_on:
      - postgres
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start

  postgres:
    restart: always
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: api_production_postgres
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - ./.envs/.production/.postgres

  caddy:
    restart: always
    build:
      context: .
      dockerfile: ./compose/production/caddy/Dockerfile
    image: api_production_caddy
    depends_on:
      - django
    volumes:
      - production_caddy:/root/.caddy
      - ghotel_cc_production_static_vol:/app/static
    env_file:
      - ./.envs/.production/.caddy
    ports:
      - "0.0.0.0:8550:80"
The problem is that email cannot be sent. There are no errors and the Django part works well: I see the correct output in the console. But after Django sends an email, it is simply lost and never arrives at the recipient.
How do I configure Docker correctly so that the email can be sent?
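Docker itself does not ship a mail server, so "correct output in the console" with nothing arriving usually means the message never left the container (for example, Django's console email backend is active, or the configured SMTP host is not reachable from inside the container). A minimal sketch of pointing Django at a real SMTP relay, with credentials supplied through the container's environment; the environment variable names here are hypothetical, not taken from the question:

  # settings.py (sketch)
  import os

  EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
  EMAIL_HOST = os.environ["EMAIL_HOST"]                 # e.g. smtp.example.com
  EMAIL_PORT = int(os.environ.get("EMAIL_PORT", 587))
  EMAIL_HOST_USER = os.environ["EMAIL_HOST_USER"]
  EMAIL_HOST_PASSWORD = os.environ["EMAIL_HOST_PASSWORD"]
  EMAIL_USE_TLS = True
  DEFAULT_FROM_EMAIL = os.environ.get("DEFAULT_FROM_EMAIL", EMAIL_HOST_USER)

The corresponding variables would then be added under the web service's environment (or an env_file) in docker-compose.yml so the running container actually has them.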
This is my docker-compose.yml.
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx_airport
ports:
- "8080:8080"
volumes:
- ./:/app
- ./docker_nginx:/etc/nginx/conf.d
- ./timezone:/etc/timezone
depends_on:
- web
rabbit:
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=asdasdasd
ports:
- "5672:5672"
- "15672:15672"
web:
build:
context: .
dockerfile: Dockerfile
command: /app/start_web.sh
container_name: django_airport
volumes:
- ./:/app
- ./timezone:/etc/timezone
expose:
- "8080"
depends_on:
- celerybeat
celerybeat:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celerybeat.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- celeryd
celeryd:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celeryd.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- rabbit
Normally, I have a task that executes every minute and updates the database used by "web". Everything works fine in the development environment. However, "celerybeat" and "celeryd" don't update my database when run via docker-compose. What went wrong?
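Hard to say from the compose file alone, since it defines no database service, but a first check is whether the web container and the celery containers are actually configured to use the same database; with a file-based default like SQLite, a different environment or working directory in each container can mean writes land in separate databases. A small diagnostic sketch, assuming manage.py sits in the containers' working directory (/app per the bind mounts above):

  # print the database each container is configured to use; they should match
  docker-compose exec web python manage.py shell -c "from django.conf import settings; print(settings.DATABASES['default'])"
  docker-compose exec celeryd python manage.py shell -c "from django.conf import settings; print(settings.DATABASES['default'])"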