Django/Docker/PostgreSQL: app connected to the 'wrong' database

I have developed a project with Django/Docker/PostgreSQL and use docker-compose to deploy it on a remote Linux server.
I want to deploy 2 apps based on the same code (and the same settings file), preprod and demo, with two distinct PostgreSQL databases (the databases are not dockerized): ecrf_covicompare_preprod and ecrf_covicompare_demo, respectively for preprod and demo.
The apps will be tested by different teams.
I have:
2 docker-compose files, docker-compose.preprod.yml and docker-compose.demo.yml, respectively for preprod and demo
2 .env files, .env.preprod and .env.preprod.demo, respectively for preprod and demo
The database connection parameters are set in these .env files.
But my 2 apps connect to the same database (ecrf_covicompare_preprod).
If I connect to my 'web demo' container and print the environment variables, I get SQL_DATABASE=ecrf_covicompare_demo, which is correct.
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: ecrf_covicompare_web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    entrypoint: [ "/usr/src/app/entrypoint.preprod.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8000/" ]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: ecrf_covicompare_redis
    image: "redis:alpine"
  celery:
    container_name: ecrf_covicompare_celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  celery-beat:
    container_name: ecrf_covicompare_celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  nginx:
    container_name: ecrf_covicompare_nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1370:80
    depends_on:
      - web
.env.preprod
SQL_DATABASE=ecrf_covicompare_preprod
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
docker-compose.demo.yml (simplified)
version: '3.7'
services:
  demo_web:
    container_name: ecrf_covicompare_web_demo
    //
    env_file:
      - ./.env.preprod.demo
    //
  demo_redis:
    container_name: ecrf_covicompare_redis_demo
    image: "redis:alpine"
  demo_celery:
    container_name: ecrf_covicompare_celery_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_celery-beat:
    container_name: ecrf_covicompare_celery-beat_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_nginx:
    container_name: ecrf_covicompare_nginx_demo
    //
    ports:
      - 1380:80
    depends_on:
      - demo_web
.env.preprod.demo
SQL_DATABASE=ecrf_covicompare_demo
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod

I'm new to all the docker-compose stuff, but to me your configuration looks fine. A few ideas I had:
You mention two different PostgreSQL databases. Are those hosted on the same PostgreSQL server or on two different servers? In both .env files you set DATABASE=postgres. If they are running on the same server instance, I could imagine this leading to them using the same database, depending on how that variable is used later on.
Are you sure the env variables are set in time? When you manually check them from inside the container they are set correctly, but are they also set while your containers are booting up? I'm no expert on how docker-compose handles these files, but maybe you could try printing the env variables during container initialization from within some script.
Are you completely sure it's not hardcoded somewhere? Maybe try searching all source files for the DB name they both connect to (see the settings sketch below). I have failed with this far too often not to check it.
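As a concrete way to check the last two points, here is a minimal sketch of what the database section of core.settings.preprod typically looks like when it is driven by these .env files. The variable names are taken from your .env files; everything else here is an assumption, so compare it with your real settings module:

# core/settings/preprod.py -- hypothetical sketch, not your actual file
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('SQL_DATABASE', 'ecrf_covicompare_preprod'),  # a hard-coded name or a
        'USER': os.environ.get('SQL_USER', 'user_preprod'),                  # default like this would make
        'PASSWORD': os.environ.get('SQL_PASSWORD', ''),                      # both stacks hit preprod
        'HOST': os.environ.get('SQL_HOST', 'localhost'),
        'PORT': os.environ.get('SQL_PORT', '5432'),
    }
}

# Temporary debug output: shows what the Django process itself sees at startup,
# not just what `docker exec ... env` reports afterwards.
print('Using database:', DATABASES['default']['NAME'])

If the demo stack prints ecrf_covicompare_preprod here, the variable is being overridden, defaulted, or read from the wrong place before it reaches Django.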
Hope this helps. It's a bit of a guess, but otherwise your configuration looks fine to me.

Related

How to pass persistent volume to postgres docker

I want to run a PostgreSQL database with Docker; I created a docker-compose file like the one below:
django:
  restart: always
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - pgdb
  #environment:
  #  - url=https://api.backend.example.com
  #command: "gunicorn config.wsgi:application --bind 0.0.0.0:8000"
  #networks:
  #  - bridge
pgdb:
  image: postgres
  container_name: pgdb
  environment:
    - POSTGRES_DB=hbys_filyos
    - POSTGRES_USER=healmedy
    - POSTGRES_PASSWORD=mhacare1
After building, I run docker run -p 80:8000 surgery4:dev &.
I am getting the following error in the terminal:
django.db.utils.OperationalError: could not translate host name "pgdb" to address: Try again
There is an indentation issue in your docker-compose file: django should be in its proper place.
django:
  restart: always
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - pgdb
  #environment:
  #  - url=https://api.backend.example.com
  #command: "gunicorn config.wsgi:application --bind 0.0.0.0:8000"
  #networks:
  #  - bridge
pgdb:
  image: postgres
  container_name: pgdb
  volumes:
    - pg-data/:/var/lib/postgresql
  environment:
    - POSTGRES_DB=hbys_filyos
    - POSTGRES_USER=healmedy
    - POSTGRES_PASSWORD=mhacare1
Also, you just need to execute docker-compose up -d from the directory where your docker-compose file is located.

Connection refused with psql django and docker (cookiecutter-django)

My Django app is failing to connect to the psql container with the standard connection refused error. I used cookiecutter-django, which supplies the psql username and password automatically via environment variables; this, I gather, is passed back into Django via a .env file that holds a DATABASE_URL string.
Error
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
When I set a breakpoint in django settings I can see that the DATABASE_URL seems to be converted appropriately into the standard db dict:
{'NAME': 'hustlestat', 'USER': 'HjhPLEwuVjUIIKEHebPqNG<redacted>', 'PASSWORD': 'I43443fR42wRkUaaQ8mkd<redacted>', 'HOST': 'postgres', 'PORT': 5432, 'ENGINE': 'django.db.backends.postgresql'}
When I exec into the psql container with psql hustlestat -U HjhPLEwuVjUIIKEHebPqN<redacted> I can connect to the db using that username. I'm not 100% sure about the password, as it isn't asking me for one when I try to connect.
Here is the docker-compose file, which is generated automatically by cookiecutter:
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: hustlestat_local_django
    container_name: django
    depends_on:
      - postgres
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: hustlestat_production_postgres
    container_name: postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data:Z
      - local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres
  docs:
    image: hustlestat_local_docs
    container_name: docs
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./hustlestat:/app/hustlestat:z
    ports:
      - "7000:7000"
    command: /start-docs
  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: mailhog
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
    container_name: redis
  celeryworker:
    <<: *django
    image: hustlestat_local_celeryworker
    container_name: celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: hustlestat_local_celerybeat
    container_name: celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: hustlestat_local_flower
    container_name: flower
    ports:
      - "5555:5555"
    command: /start-flower
  node:
    build:
      context: .
      dockerfile: ./compose/local/node/Dockerfile
    image: hustlestat_local_node
    container_name: node
    depends_on:
      - django
    volumes:
      - .:/app:z
      # http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html
      - /app/node_modules
    command: npm run dev
    ports:
      - "3000:3000"
      # Expose browsersync UI: https://www.browsersync.io/docs/options/#option-ui
      - "3001:3001"
The only oddity I have noticed is that despite the django service being named in the docker-compose file, when I view the running containers it has a random name such as:
hustlestat_django_run_37888ff2c9ca
Not sure if that is relevant.
Thanks for any help!
Okay, I have figured this out. I set a DATABASE_URL environment variable because I was originally getting an error saying it was unset. After googling I came across a cookiecutter doc that said to set it, but I didn't read it well enough to realise that the instruction was intended for non-Docker setups. Mine is Docker.
The reason I was getting that error is that I was exec'ing into the container and running management commands like this:
docker exec -it django bash, then python manage.py migrate
The way this project and its environment variables are set up, you can't do that; you have to use this method from outside the container instead:
docker-compose -f local.yml run --rm django python manage.py migrate
I thought the two methods were interchangeable but they are not. Everything works now.
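For anyone wondering why the manually exported DATABASE_URL broke things: cookiecutter-django builds the whole database configuration from that single URL, so a URL pointing at 127.0.0.1 overrides the compose-provided one that points at the postgres service. A rough sketch of what the settings typically do, assuming django-environ is used (check your own config/settings/base.py):

# config/settings/base.py -- hypothetical sketch of the cookiecutter-django pattern
import environ

env = environ.Env()

DATABASES = {
    # env.db() parses DATABASE_URL (postgres://USER:PASSWORD@HOST:PORT/NAME)
    # into the dict shown in the question; inside docker-compose the HOST part
    # must be the service name "postgres", not 127.0.0.1.
    'default': env.db('DATABASE_URL'),
}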

Want to share backup file in Django web container with Postgres database container: How to?

EDIT 2
In fact I realized that the automatic backups done today in the preprod environment (nginx) by celery/dbbackup are not available in the container.
I only have the backups from yesterday, when I was using the development environment (django runserver)...
EDIT 1
changed docker-compose accordingly
backup_volume is mounted
I cannot find where it is locally, but if I connect to the db container and run \i 'path/to/file.psql' it works:
[
    {
        "CreatedAt": "2021-01-19T14:37:55Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "cafe_tropical",
            "com.docker.compose.version": "1.27.4",
            "com.docker.compose.volume": "backup_volume"
        },
        "Mountpoint": "/var/lib/docker/volumes/cafe_tropical_backup_volume/_data",
        "Name": "cafe_tropical_backup_volume",
        "Options": null,
        "Scope": "local"
    }
]
I have a Django app on Docker with a web container, where automatic backups are stored, and a separate PostgreSQL database container.
I want to be able to restore the PostgreSQL database using \i 'path/to/backup.psql' in a psql shell, but it fails because the file is not found:
db_preprod=# \i '/usr/src/app/backup/backup_2021-01-19_1336.psql'
/usr/src/app/backup/backup_2021-01-19_1336.psql: No such file or directory
I also tried to copy it with docker cp, but that does not work:
docker cp web:/usr/src/ap/backup/backup_2021-01-19_1336.psql db:/.
copying between containers is not supported
docker-compose
version: '3.7'
services:
  web:
    restart: always
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
      - backup_volume:/usr/src/app/backup # <-- added volume
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - backup_volume:/var/lib/postgresql/data/backup # <-- added volume
    env_file:
      - ./.env.preprod.db
  nginx:
    container_name: nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1340:80
    depends_on:
      web:
        condition: service_healthy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  backup_volume:
You need to use volumes: both containers mount the same path, Django writes the dump there, and then psql can read it. But why does it work like this? Shouldn't you be able to dump and restore from either the django or the postgres side?
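On the EDIT 2 point about the missing automatic backups: if they are produced by django-dbbackup, check that its storage location is the same path that the web service mounts for backup_volume; otherwise the dumps are written inside the container's own filesystem layer and never reach the shared volume. A hypothetical settings sketch (the paths are taken from the compose file above, the rest is an assumption):

# settings sketch -- assumes django-dbbackup; /usr/src/app/backup is the path
# that backup_volume mounts in the web container (the db container sees the
# same volume at /var/lib/postgresql/data/backup).
DBBACKUP_STORAGE = 'django.core.files.storage.FileSystemStorage'
DBBACKUP_STORAGE_OPTIONS = {'location': '/usr/src/app/backup'}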

docker-compose issue - Celery container not able to access DB container

Regards
I have been working on a Django application that runs on Redis, PostgreSQL, Celery, and RabbitMQ.
I have written a docker-compose file to run all of these services in their own separate containers.
Here's my docker-compose.yml
version: "3.2"
services:
app:
build:
context: .
image: &app app
ports:
- "8000:8000"
env_file: &envfile
- env.env
volumes:
- ./app:/app
environment:
- DB_HOST=db
command: >
sh -c "python manage.py wait_for_db &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
depends_on:
- db
- redis
- broker
redis:
restart: always
image: redis:latest
ports:
- "6379:6379"
db:
image: postgres:12-alpine
environment:
- "POSTGRES_HOST_AUTH_METHOD=trust"
worker:
build: .
image: *app
restart: always
env_file: *envfile
command: ["celery", "worker", "--app=worker.worker.app", "--concurrency=1", "--hostname=worker#%h", "--loglevel=INFO"]
volumes:
- ./app:/app
depends_on:
- broker
- redis
- db
broker:
image: rabbitmq:3
env_file: *envfile
ports:
- 5672:5672
flower:
image: zoomeranalytics/flower:0.9.1-4.0.2
restart: "no"
env_file: *envfile
ports:
- "5555:5555"
depends_on:
- broker
My application is working just fine, and the containers seem to be working too; the problem arises when I push an async job to the worker container. The worker container picks up the job and starts processing. I am trying to access the DB in the worker, but it gives me the following error:
Task [640127f3-7769-4757-8c33-8de9052ca92c] raised unexpected: OperationalError('could not connect to server: No such file or directory\n\tIs the server running locally and accepting\n\tconnections on Unix domain socket "/tmp/.s.PGSQL.5432"?\n')
I understand that my worker container is trying to access the DB and is not able to find it. Can someone please help me access my DB container from the worker container? I'm quite stuck with this.
I tried depends_on and links, but nothing seems to work.
Got it, thanks to my teammate.
I had missed passing the environment variable to the worker container; as soon as I added it, voilà!
environment:
  - DB_HOST=db
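For anyone hitting the same Unix-socket error: that traceback is the usual symptom of Django's HOST setting being empty, in which case psycopg2 falls back to the local Unix socket instead of opening a TCP connection to the db service. A rough sketch of the settings this compose file implies (the DB_* variable names are assumptions based on the environment section above):

# settings.py -- hypothetical sketch, variable names assumed from the compose file
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'postgres'),
        'USER': os.environ.get('DB_USER', 'postgres'),
        'PASSWORD': os.environ.get('DB_PASS', ''),
        # With DB_HOST unset, HOST is empty and psycopg2 tries the local
        # Unix socket /tmp/.s.PGSQL.5432 -- exactly the error the worker
        # container raised before DB_HOST=db was added.
        'HOST': os.environ.get('DB_HOST', ''),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}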

Handling RACE CONDITION in Docker containers of a Django app which includes postgres, nginx, celery, redis, elasticsearch

I am new to Docker. I am having trouble deploying multiple containers at the same time; a race condition occurs. Every time I run the docker-compose up --build command, elasticsearch or redis starts first, then the database starts and exits with error code 0, as do celery and nginx. I tried using the "sleep" command, but no luck (maybe I missed something). Here is my docker-compose.yml file:
version: "3"
services:
db:
image: postgres:9.6-alpine
container_name: myblogdb
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=mydb
volumes:
- myblogdb_data:/var/lib/postgresql/data/
ports:
- "4949:5432"
web:
build: ./app
command: sh -c "gunicorn djangoApp.wsgi:application --bind 0.0.0.0:8000"
volumes:
- ./app:/usr/src/app/
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "8000:8000"
depends_on:
- db
- redis
- es
nginx:
restart: always
build: ./nginx
volumes:
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "1337:80"
depends_on:
- web
redis:
image: "redis:alpine"
es:
image: elasticsearch:5.6.15-alpine
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms256M -Xmx256M"
volumes:
- my_blog_esdata:/usr/share/elasticsearch/data/
ports:
- "9200:9200"
celery:
restart: always
build: ./app
command: sh -c "celery -A djangoApp worker -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
celery-beat:
restart: always
build: ./app
command: sh -c "celery -A djangoApp beat -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
volumes:
myblogdb_data:
my_blog_static_volume:
my_blog_media_volume:
my_blog_esdata:
Please let me know if I'm missing something here. Thanks
You need to add a script like wait-for-it or wait-for in order to control startup and shutdown order in Compose; it basically tells a service to wait for another service to be reachable before running its start command.
So if you want Django to wait for PostgreSQL, the command in docker-compose would be:
["./wait-for", "db:5432", "--", "gunicorn", "djangoApp.wsgi:application", "--bind", "0.0.0.0:8000"]
There is a full explanation in the following answer; it describes the approach for MySQL and Golang, but the same concept applies to your case.
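Alternatively, since the compose file in the earlier question in this thread already calls python manage.py wait_for_db, the same ordering can be handled inside Django with a small custom management command instead of a shell script. A minimal sketch, assuming an app layout like app/core/management/commands/ (the path and app name are assumptions):

# app/core/management/commands/wait_for_db.py -- hypothetical location
import time

from django.core.management.base import BaseCommand
from django.db import connections
from django.db.utils import OperationalError


class Command(BaseCommand):
    """Block until the default database accepts connections."""

    def handle(self, *args, **options):
        self.stdout.write('Waiting for database...')
        while True:
            try:
                # Opening a cursor forces an actual connection attempt.
                connections['default'].cursor()
                break
            except OperationalError:
                self.stdout.write('Database unavailable, retrying in 1 second...')
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS('Database available!'))

The web command then becomes sh -c "python manage.py wait_for_db && gunicorn djangoApp.wsgi:application --bind 0.0.0.0:8000", and the celery and celery-beat services can reuse the same wait before starting their workers.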