Docker-Compose: Django can't connect to DB or Celery

I have everything on one network (it talks to another app in another docker-compose file).
But I keep getting this error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "db" (192.168.208.3) and accepting
TCP/IP connections on port 5000?
When I change to SQL_PORT=5432 (the default port the postgres container listens on), the error above disappears and my app comes up, but then it has problems connecting to Celery, and in the shell it says the db is not connected.
I have to use 5000 because there is another postgres db in the other app's docker-compose setup.
I think I'm lost on the internals of Docker networks. I was pretty sure I was supposed to use the exposed port of 5000 for my database.
version: "3.9"
services:
app:
build: .
command: python manage.py runserver 0.0.0.0:8000
container_name: app
environment:
- DEBUG=True
- PYTHONUNBUFFERED=1
- CELERY_BROKER=redis://broker:6379/0
- CELERY_BACKEND=redis://broker:6379/
- APP_BASIC_AUTH_PASSWORD=adPswd12*
- APP_BASIC_AUTH_USER=admin
- APP_TOKEN_AUTH=NHEC_UTILITY
- VTN_API_URL=vtn_app:8000
- VTN_API_TOKEN=NHECAPP_UTILITY
- SQL_PORT=5000
volumes:
- .:/code
ports:
- "9000:8000"
networks:
- app-web-net
depends_on:
- db
- celery-worker
- broker
app_test:
build: .
command: python manage.py test
container_name: app_test
environment:
- DEBUG=True
- PYTHONUNBUFFERED=1
volumes:
- .:/code
depends_on:
- db
db:
image: postgres:10
container_name: app_postgres
ports:
- 5000:5432
volumes:
- db_data:/var/lib/postgresql/data
environment:
POSTGRES_DB: nhec
POSTGRES_USER: nhec
POSTGRES_PASSWORD: nhec
networks:
- app-web-net
celery-worker:
build: .
command: celery -A app worker -l DEBUG
depends_on:
- db
- broker
environment:
CELERY_BROKER_URL: redis://broker:6379/0
networks:
- app-web-net
broker:
image: redis:6-alpine
ports:
- 6379:6379
networks:
- app-web-net
volumes:
db_data: {}
networks:
app-web-net:
driver: bridge

The ports mapping exposes the postgres server on your localhost's port 5000, not to internal services. Your services can reach the database container on its own ports within the same network, so from inside the network you connect to db:5432.
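For example, a minimal sketch, assuming your Django settings read SQL_HOST and SQL_PORT from the environment (SQL_HOST isn't shown in the original file, so treat it as illustrative):

services:
  app:
    environment:
      - SQL_HOST=db     # the service name, resolved on the compose network
      - SQL_PORT=5432   # container-side port, not the published host port 5000
  db:
    image: postgres:10
    ports:
      - 5000:5432       # only clients on the host machine use port 5000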
If you still want to change the default postgres port, here's a related answer that you might find helpful: Changing a postgres container's server port in Docker Compose
Also note that your container name is actually app_postgres, not db, as specified by this line in your docker-compose file:
container_name: app_postgres
The other thing you can do is change that container name from app_postgres to something that doesn't collide with the postgres container in your other app's docker-compose file. Both postgres instances can then keep port 5432 exposed for your apps to use.
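In other words, both compose projects can keep Postgres listening on 5432 internally and simply publish different host ports. A sketch, with the second project's names invented for illustration:

# first project's docker-compose.yml
  db:
    image: postgres:10
    container_name: app_postgres
    ports:
      - 5000:5432   # host port 5000 -> container port 5432

# second project's docker-compose.yml (hypothetical names)
  db:
    image: postgres
    container_name: other_app_postgres
    ports:
      - 5001:5432   # a different host port avoids the clash on the host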

Related

django.db.utils.OperationalError: could not translate host name "db" to address: Name does not resolve. How to solve this issue?

Can somebody help me solve this issue? Why am I getting this error? I have db as the host in my .env, and links and networks in the docker-compose file too. I am not able to figure out where the issue is raised.
Here is my docker-compose file.
version: "3.9"
volumes:
dbdata:
networks:
django:
driver: bridge
services:
web:
build:
context: .
volumes:
- .:/home/django
ports:
- "8000:8000"
command: gunicorn Django.wsgi:application --bind 0.0.0.0:8000
container_name: django_web
restart: always
env_file: .env
depends_on:
- db
links:
- db:db
networks:
- django
db:
image: postgres
volumes:
- dbdata:/var/lib/postgresql
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
ports:
- 5430:5432
networks:
- django
container_name: django_db
Here is my .env with the database settings:
DB_USER=admin
DB_NAME=test
DB_PASSWORD=admin
DB_HOST=db
DB_PORT=5432
DB_SCHEMA=public
CONN_MAX_AGE=60
The problem is that you are using:
docker compose up --build
As the docker compose documentation describes, the --build flag means "Build images before starting containers."
That means that at build time there is no "db" container running, so a "db" host name cannot exist yet.
A suggestion would be to not engage in any DB transaction during the build phase.
You can run any database "rendering" as a seeding process at app start instead.
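On the runtime side, a minimal sketch (using this question's service names) that gates the web service on a Postgres healthcheck, so migrations or seeding only run once the database accepts connections. Note that depends_on with a condition is supported by the modern Compose Specification, but not by legacy version-3 docker-compose files:

services:
  db:
    image: postgres
    healthcheck:
      # $$ escapes compose interpolation; the container's shell expands it
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    build:
      context: .
    depends_on:
      db:
        condition: service_healthy   # wait until pg_isready succeeds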

could not connect to server: Connection refused Is the server running on host "database" (172.19.0.3) and accepting TCP/IP connections on port 5432?

I am trying to run Django tests on GitLab CI but am getting this error. Last week it was working perfectly, but suddenly I am getting this error during the test run:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "database" (172.19.0.3) and accepting
TCP/IP connections on port 5432?
My gitlab-ci file is like this
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
test:
  stage: test
  image: tiangolo/docker-with-compose
  script:
    - docker-compose -f docker-compose.yml build
    - docker-compose run app python3 manage.py test
My docker-compose is like this:
version: '3'
volumes:
  postgresql_data:
services:
  database:
    image: postgres:12-alpine
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -e \"SHOW DATABASES;\""]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - "5432"
    restart: on-failure
  app:
    container_name: proj
    hostname: proj
    build:
      context: .
      dockerfile: Dockerfile
    image: sampleproject
    command: >
      bash -c "
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      gunicorn sampleproject.wsgi:application -c ./gunicorn.py
      "
    env_file: .env
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/app
    depends_on:
      - database
      - redis
So why is it refusing the connection? I have no idea, and it was working last week.
Unsure if it would help in your case, but I was getting the same issue with docker-compose. What solved it for me was explicitly specifying the hostname for postgres:
services:
  database:
    image: postgres:12-alpine
    hostname: database
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    ...
Could you do a docker container ls and check whether the container name of the database is, in fact, "database"?
You've skipped setting the container_name for that container, and it may be that Docker isn't creating it with the service's default name, i.e. "database", so the DNS isn't able to find it under that name on the network.
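If that turns out to be the case, a minimal sketch of the fix is to pin the name explicitly, alongside the hostname suggested above:

services:
  database:
    image: postgres:12-alpine
    container_name: database   # pin the container name explicitly
    hostname: database
    ...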
Reboot the server. I encounter similar errors on Mac and Linux from time to time when I run more than one container that uses postgres.

Keep Redis data alive between docker-compose down and up in Docker container

This question is about keeping Redis data alive between docker-compose up and docker-compose down.
In the docker-compose.yaml file below, the db service uses the postgres_data:/var/lib/postgresql/data/ volume to keep data alive.
I would like to do something like this for the redis service, but I cannot find a workable solution. The only way I have managed to achieve this goal is to store the data in local storage: ./storage/redis/data:/data. All experiments with an external volume gave no results.
The question is: is it possible to store redis data in a volume between docker-compose down and docker-compose up, the same way it is done in the db service?
Sorry if the question is naive.
Thanks
version: '3.8'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    env_file:
      - ./series/.env
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    build:
      context: .
      dockerfile: postgres.dockerfile
    restart: always
    env_file:
      - ./series/.env
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=1q2w3e
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - target: 5432
        published: 5433
        protocol: tcp
        mode: host
  redis:
    image: redis:alpine
    command: redis-server --appendonly yes
    ports:
      - target: 6379
        published: 6380
        protocol: tcp
        mode: host
    volumes:
      - ./storage/redis/data:/data
    restart: always
    environment:
      - REDIS_REPLICATION_MODE=master
volumes:
  postgres_data:
You just need to add a named volume for Redis data next to the postgres_data:
volumes:
  postgres_data:
  redis_data:
Then change the host path to the named volume:
redis:
  ...
  volumes:
    - redis_data:/data
If Redis saved data with the host path, then the above will work for you. I mention that because you have to configure Redis to enable persistent storage (see the Redis Docker Hub page: https://hub.docker.com/_/redis).
Beware: running docker-compose down -v will destroy volumes as well.
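Putting the pieces together, a sketch of the full redis service based on the question's file:

services:
  redis:
    image: redis:alpine
    command: redis-server --appendonly yes   # AOF persistence writes into /data
    restart: always
    environment:
      - REDIS_REPLICATION_MODE=master
    volumes:
      - redis_data:/data   # named volume survives docker-compose down (without -v)
volumes:
  postgres_data:
  redis_data: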

How to open a postgres database created with docker in a database client?

I have a question about how to open a database, created with docker using https://github.com/cookiecutter/cookiecutter, in a database client.
Here is my local compose file:
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: tienda_local_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: tienda_production_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
Is the port on the container mapped? Try using 127.0.0.1 (assuming this is on the same machine) as the host in your local client instead of the container name. If that doesn't work, can you share your docker-compose.yaml?
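For reference, the compose file above publishes no Postgres port at all, so here is a sketch of the mapping that would let a local client reach it on 127.0.0.1:5432:

services:
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    ports:
      - "5432:5432"   # host port 5432 -> container port 5432; pick another host port if 5432 is taken locally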
You don't have a network between the containers/services in your docker-compose file.
You can achieve this in a number of ways:
Link the containers/services. This is a deprecated way but, depending on your Docker version, may still work. Add to your docker-compose file:
django:
  ...
  links:
    - postgres
Connect the services to the same network. Add this to both of your service definitions:
networks:
  - django
Also, you need to define the django network in the docker-compose file:
networks:
  django:
Connect your services via the host network. Just add to both service definitions:
network_mode: host
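For example, a minimal sketch of the second option applied to this question's services (all other settings omitted):

version: '3'
networks:
  django:
services:
  django:
    image: tienda_local_django
    networks:
      - django
  postgres:
    image: tienda_production_postgres
    networks:
      - django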

127.0.0.1 refused to connect in docker django

I'm trying to connect to an instance of Django running in docker. As far as I can tell I've opened the correct port, and I see in docker ps that there is tcp on port 8000, but there is no forwarding to the port.
After reading the docker-compose docs on ports, I would expect this to work (I can view pgadmin on 127.0.0.1:9000, too).
My docker compose:
version: '3'
services:
  postgresql:
    restart: always
    image: postgres:latest
    environment:
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_DB: pgdb
  pgadmin:
    restart: always
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
      GUNICORN_THREADS: 4
      PGADMIN_LISTEN_PORT: 9000
    volumes:
      - ./utility/pgadmin4-servers.json:/pgadmin4/servers.json
    depends_on:
      - postgresql
    ports:
      - "9000:9000"
  app:
    build: .
    environment:
      POSTGRES_DB: pgdb
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_HOST: postgresql
    volumes:
      - .:/code
    ports:
      - "127.0.0.1:8000:8000"
      - "5555:5555"
    depends_on:
      - postgresql
      - pgadmin
I have tried the following combinations for the app service's ports, as suggested here:
app:
  ...
  ports:
    - "8000"
    - "8000:8000"
    - "127.0.0.1:8000:8000"
but I still see "This site can't be reached: 127.0.0.1 refused to connect." when trying to access the site.
I'm sure that this is a port-forwarding problem, and that my server is running correctly in Django, because I can docker attach to the container and curl a URL with the expected response.
What am I doing wrong?
I was running my application using the command:
docker-compose run app python3 manage.py runserver 0.0.0.0:8000
With docker-compose run you need to use the argument --service-ports to:
Run command with the service's ports enabled and mapped to the host.
Thus my final command looked like this:
docker-compose run --service-ports app python3 manage.py runserver 0.0.0.0:8000
Documentation on run can be found here
If you are running Docker inside a virtual machine, then you need to access your application through the virtual machine's IP address, not localhost or 127.0.0.1. Try to get the virtual machine's IP. Also, please specify on which platform/environment you installed and are running Docker.