How to set a delay for restarting a container in docker-compose? - django

I have a problem with the initial launch of docker-compose up: the database is not initialized yet and Django throws an error.
I tried to use restart_policy, but it didn't help; the web service restarted almost immediately, before the db service was ready, regardless of which delay I set.
version: "3.9"
services:
  web:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    deploy:
      restart_policy:
        condition: on-failure
        delay: 15s
    environment:
      - POSTGRES_NAME=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST=db
    depends_on:
      - db
  db:
    container_name: db_pg
    image: postgres
    hostname: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_HOST: db
    volumes:
      - ./data/db:/var/lib/postgresql/data
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - db
    ports:
      - "5555:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    volumes:
      - ./data/pgadmin:/var/lib/pgadmin/data
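The deploy.restart_policy block is mainly a swarm-mode setting, and even where it is applied it only controls how the container is restarted after it exits; it does not make web wait for Postgres. A more reliable approach, sketched below assuming a recent docker compose (the Compose Specification supports depends_on conditions, which the older 3.x file format did not), is to give db a healthcheck and have web wait until it reports healthy:

services:
  db:
    image: postgres
    healthcheck:
      # pg_isready ships with the official postgres image
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 10
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy  # web is started only after Postgres accepts connections

With this in place the startup order no longer depends on restart delays at all; makemigrations and migrate only run once Postgres is actually reachable.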

Related

How to import a Postgres database into a docker container?

I'm trying to import a PostgreSQL dump into a Docker container, but it doesn't work.
Dockerfile:
FROM postgres
COPY postgres.sql /docker-entrypoint-initdb.d/
docker-compose.yml:
version: "3.9"
services:
  db:
    build: ./DB
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=gamenews
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=321678
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=gamenews
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=321678
    depends_on:
      - db
(The project structure and the output of docker compose up were attached as screenshots.)
I would suggest that you use a proper Postgres image:
postgres:
  image: postgres:13
  volumes:
    - '.:/app:rw'
    - 'postgres:/var/lib/postgresql/data'
Here's a list of all the tags you can use: https://hub.docker.com/_/postgres
Just spin that up; the volume maps the data to your hard drive, so it's not ephemeral inside the container, and then you can run pg_restore on your dump file.

How to deploy django-q worker with docker?

Assume this simple docker-compose file.
version: "3.9"
services:
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - redis
How can I add a django-q worker to handle tasks from the web container?
I could probably build the same image with a different command, such as python manage.py qcluster, but I don't think that solution is elegant. Could you suggest an approach for how to do that?
Probably the easiest thing you can do is to add a new service that runs qcluster.
Something like this:
version: "3.9"
services:
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - redis
  djangoq:
    build: .
    command: python manage.py qcluster
    volumes:
      - .:/code
    # no ports needed: qcluster only talks to Redis, and host port 8000 is already taken by web
    depends_on:
      - redis

Docker Django, redis and celery conf

I don't know which part I am missing, but Celery is not connecting to Redis when I run docker-compose up --build.
error: Cannot connect to redis://127.0.0.1:6379/0: Error 111 connecting to 127.0.0.1:6379. Connection refused.
Here is my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    image: resolution
    depends_on:
      - db
      - redis
      - celery
    command: bash -c "python3 /code/manage.py migrate && python3 /code/manage.py initialsetup && python3 /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db:db
      - redis:redis
      - celery:celery
    restart: always
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGHOST=trust
      - PGPORT=5432
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: 'postgres'
      POSTGRES_PASSWORD: 'postgres'
      POSTGRES_USER: 'postgres'
      POSTGRES_HOST: 'trust'
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    restart: on-failure
  celery:
    image: resolution
    command: celery -A mayan worker -l info
    environment:
      - DJANGO_SETTINGS_MODULE=mayan.settings.production
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
    links:
      - redis:redis
    restart: on-failure
celery and redis are running in different containers.
According to the error message that you shared, your Celery worker is most likely trying to reach Redis on localhost, but Redis is not running on localhost inside that container.
Search for the Celery configuration that contains the CELERY_BROKER_URL and CELERY_RESULT_BACKEND values. Most likely they look like this:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
They should look like this, pointing to the redis service name that you defined in your compose file:
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
If you don't have such a config, search directly for the place where the Celery instance is initialized and make sure it looks like this:
app = Celery('server', broker='redis://redis:6379/0')
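If you would rather not hard-code the hostname, a variation on the same idea is to set the URLs on the celery service in the compose file. This is a sketch only: it assumes your settings actually read these variables from the environment (e.g. via os.environ.get), which is not shown in the question.

  celery:
    image: resolution
    command: celery -A mayan worker -l info
    environment:
      - DJANGO_SETTINGS_MODULE=mayan.settings.production
      # hypothetical: only effective if the settings module reads these from the environment
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0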

Invalid interpolation format for "environment" option in service "web": "SECRET_KEY

I'm building an online book store project in Django, and when I try to set up an environment variable I run into this problem.
My docker-compose.yml file looks like:
version: '3.7'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    environment:
      - SECRET_KEY=secret_key
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    hostname: db
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"
    ports:
      - "5432:5432"
volumes:
  postgres_data:
and my settings.py:
SECRET_KEY = os.environ.get('SECRET_KEY')
The secret_key is:
SECRET_KEY=-)hy!+7txe6))yhcy4o3ruxj(gy(vwx)6^h&+-i*=0f$4q(&bh
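Compose treats a $ inside an environment value as the start of a variable reference, so the $4q( fragment in this key is what triggers the invalid interpolation error. Escaping the dollar sign by doubling it in the web service's environment should avoid it:

    environment:
      # "$$" is passed to the container as a literal "$"
      - SECRET_KEY=-)hy!+7txe6))yhcy4o3ruxj(gy(vwx)6^h&+-i*=0f$$4q(&bh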

In my Dockerized Django application, my Celery task does not update the SQLite database (in another container). What should I do?

This is my docker-compose.yml.
version: "3"
services:
  nginx:
    image: nginx:latest
    container_name: nginx_airport
    ports:
      - "8080:8080"
    volumes:
      - ./:/app
      - ./docker_nginx:/etc/nginx/conf.d
      - ./timezone:/etc/timezone
    depends_on:
      - web
  rabbit:
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=asdasdasd
    ports:
      - "5672:5672"
      - "15672:15672"
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: /app/start_web.sh
    container_name: django_airport
    volumes:
      - ./:/app
      - ./timezone:/etc/timezone
    expose:
      - "8080"
    depends_on:
      - celerybeat
  celerybeat:
    build:
      context: .
      dockerfile: Dockerfile
    command: /app/start_celerybeat.sh
    volumes:
      - ./:/app
      - ./timezone:/etc/timezone
    depends_on:
      - celeryd
  celeryd:
    build:
      context: .
      dockerfile: Dockerfile
    command: /app/start_celeryd.sh
    volumes:
      - ./:/app
      - ./timezone:/etc/timezone
    depends_on:
      - rabbit
Normally, I have a task that executes every minute and updates the database located in "web". Everything works fine in the development environment. However, "celerybeat" and "celeryd" don't update my database when run via docker-compose. What went wrong?
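One thing worth checking (a sketch, not a definitive diagnosis): SQLite is just a file, so every container must see the same copy of it. The ./:/app bind mount should already share the project directory, but if the database file is created anywhere outside that mount, each container ends up with its own copy. Sharing it explicitly through a named volume would look roughly like this (the volume name sqlite_data and the /app/data location are hypothetical):

services:
  web:
    volumes:
      - ./:/app
      - sqlite_data:/app/data   # hypothetical directory holding db.sqlite3
  celerybeat:
    volumes:
      - ./:/app
      - sqlite_data:/app/data
  celeryd:
    volumes:
      - ./:/app
      - sqlite_data:/app/data
volumes:
  sqlite_data: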