I'm currently trying to figure out how to configure my docker-compose.yml to allow a web server (Django) to communicate with a PostgreSQL database running on the host machine.
The app works perfectly outside of a container, and now I want to containerize it for better management and easier deployment.
I've tried this docker-compose.yml:
version: '3'
services:
  web:
    image: myimage
    volumes:
      - .:/appdir
    environment:
      - DB_NAME=test
      - DB_USER=test
      - DB_PASSWORD=test
      - DB_HOST=localhost
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    networks:
      - mynet
networks:
  mynet:
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1 \
    DJANGO_SETTINGS_MODULE=app.settings \
    DEBUG=True \
    SECRET_KEY=akey
WORKDIR /appdir
COPY . /appdir
EXPOSE 8000
RUN pip install -r requirements.txt
But when I do so, I get the following error:
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 |     Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 |     TCP/IP connections on port 5432?
Thanks
localhost is relative: inside the Docker container, localhost (aka 127.0.0.1) refers to the container itself. If you want to connect to your host, give the container your host's real IP as the DB_HOST.
There are many ways to find your host IP; for instance, run this in your terminal:
hostname -I | awk '{print $1}'
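If you'd rather not hard-code the IP, Docker Engine 20.10+ can map a stable name to the host gateway for you. Here is a minimal sketch of the web service from the compose file above (host.docker.internal is just the conventional alias; note that PostgreSQL on the host must also listen on the Docker bridge address, not only 127.0.0.1, as covered further down):

web:
  image: myimage
  extra_hosts:
    - "host.docker.internal:host-gateway"   # Docker 20.10+ resolves this to the host
  environment:
    - DB_HOST=host.docker.internal          # Django connects to the host's PostgreSQL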
Hey everyone, I am trying to connect my Django project, which runs in a Docker container, to a PostgreSQL database installed on Ubuntu 20.04 outside of the container.
I am able to create a PostgreSQL database inside the Docker container and connect my Django project to it, but what I want is to connect the local database to the Django project running in the Docker container.
Here is my docker-compose.yml file:
version: '3.3'
services:
  # Description (For the postgres database)
  kapediadb:
    image: postgres
    restart: always
    container_name: kapediadb
    # For accessing env data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
  # Description (For django applications)
  kapedia:
    restart: always
    container_name: kapedia
    command:
      - /bin/bash
      - -c
      - |
        python manage.py makemigrations accounts
        python manage.py makemigrations posts
        python manage.py makemigrations quiz
        python manage.py migrate
        gunicorn kapedia.wsgi:application --bind 0.0.0.0:8000
    image: kapedia
    # Description (define your dockerfile location here)
    build: .
    volumes:
      - .:/kapedia
    ports:
      - "8000:8000"
    depends_on:
      - kapediadb
    env_file:
      - .env
# Description (For volumes)
volumes:
  static:
You can simply add this inside the project's service definition:
extra_hosts:
  - "host.docker.internal:172.17.0.1"
To find the IP of the docker0 bridge, i.e. 172.17.0.1 in my case, run this in your local machine's terminal:
$> ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
In postgresql.conf, change listen_addresses to listen_addresses = '*'.
In pg_hba.conf, add this line at the end of the file:
host all all 0.0.0.0/0 md5
Now restart the PostgreSQL service using sudo service postgresql restart.
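With that in place, the .env file consumed by the kapedia service just needs to point the database host at the alias. A sketch, where DB_HOST/DB_PORT are assumed key names and all values are placeholders; keep whatever keys your settings.py actually reads:

DB_NAME=kapedia_db             # placeholder
DB_USER=kapedia_user           # placeholder
DB_PASSWORD=secret             # placeholder
DB_HOST=host.docker.internal   # resolves to 172.17.0.1 via extra_hosts
DB_PORT=5432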
Please use the host.docker.internal hostname to connect to the database from the server application.
Ex: jdbc:postgresql://host.docker.internal:5432/bankDB
Note: use sudo nano /etc/postgresql/<your_version>/main/postgresql.conf to open the postgresql.conf file.
This is the way you can connect your local database to a Docker container.
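Since this question is about Django rather than JDBC, the equivalent database entry in settings.py would look something like this (a sketch assuming the psycopg2 backend and the env var names from the compose file above):

# settings.py (sketch; env var names taken from the compose file)
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASSWORD'),
        'HOST': 'host.docker.internal',  # the database on the Docker host
        'PORT': '5432',
    }
}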
I am running a dockerized Django application and trying to connect to a PostgreSQL database located on an external host with a public IP.
When running the container, the makemigrations command fails with the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "myhost" (89.xx.xx.102) and accepting
    TCP/IP connections on port 5432?
However, it successfully connects when not dockerized.
Here is the docker-compose.yml:
services:
  backend:
    build: .
    ports:
      - 65534:65534
and the corresponding Dockerfile:
FROM python:3.10 AS builder
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
RUN pip install gunicorn
FROM builder
COPY ./ /app/
ENTRYPOINT [ "/app/entrypoint.sh" ]
and entrypoint.sh:
#!/bin/bash
python /app/manage.py collectstatic --no-input --clear
python /app/manage.py makemigrations
python /app/manage.py migrate --no-input
gunicorn --pythonpath /app/ -b 0.0.0.0:65534 app.wsgi:application
How can I make it possible for the Django application to connect to the externally hosted PostgreSQL database?
In your terminal, run the command below:
vim /etc/postgresql/14/main/postgresql.conf  # 14 is the version of postgres
Uncomment and edit the listen_addresses attribute to start listening on all available IP addresses:
listen_addresses = '*'
Append a new connection policy (the pattern is [CONNECTION_TYPE] [DATABASE] [USER] [ADDRESS] [METHOD]) at the bottom of pg_hba.conf:
host all all 0.0.0.0/0 md5
This allows TCP/IP connections (host) to all databases (all) for all users (all) from any IPv4 address (0.0.0.0/0), using an MD5-hashed password for authentication (md5).
It is now time to restart your PostgreSQL service to load your configuration changes.
systemctl restart postgresql
And make sure your system is listening on port 5432, the default port for PostgreSQL.
ss -nlt | grep 5432
Connect to the PostgreSQL database from a remote host:
Your PostgreSQL server is now running and listening for external requests. It is time to connect to your database from a remote host.
Connect via Command Line Tool:
You may now connect to a remote database by using the following command pattern:
psql -h [ip address] -p [port] -d [database] -U [username]
Let’s now connect to a remote PostgreSQL database that we have hosted.
psql -h 5.199.162.56 -p 5432 -d test_erp -U postgres
To double check your connection details use the \conninfo command.
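Back in the Docker context, a quick way to confirm the container can actually reach the external server before the entrypoint runs migrations is a small probe script. This is a sketch assuming psycopg2 is in requirements.txt; the host is the redacted IP from the error message, and the database name, user, and password are placeholders:

# check_db.py - minimal connectivity probe (placeholder credentials)
import psycopg2

try:
    conn = psycopg2.connect(
        host="89.xx.xx.102",   # the external host from the error message
        port=5432,
        dbname="mydb",         # placeholder
        user="postgres",       # placeholder
        password="secret",     # placeholder
        connect_timeout=5,
    )
    print("connected:", conn.get_dsn_parameters())
    conn.close()
except psycopg2.OperationalError as exc:
    print("connection failed:", exc)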
Problems I am having with my Docker instance on a GCE VM:
It keeps restarting
I cannot access tcp:80 after creating several firewall policies
Here are the steps I took in creating the instance with gcloud: Why is Google Compute Engine not running my container?
What I have tried:
To open the port, I created several firewall policies and even tagged the VM, but I still get this when I run nmap -Pn 34.XX.XXX.XXX:
PORT    STATE SERVICE
25/tcp  open  smtp
110/tcp open  pop3
119/tcp open  nntp
143/tcp open  imap
465/tcp open  smtps
563/tcp open  snews
587/tcp open  submission
993/tcp open  imaps
995/tcp open  pop3s
# 80/tcp is not open
Alternatively, I tried opening the port from inside the VM after SSHing in:
docker run --rm -d -p 80:80 us-central1-docker.pkg.dev/<project>/<repo>/<image:v2>
curl http://127.0.0.1
# which results in:
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
Also, running docker ps from within the VM shows a restarting status.
If it helps, I have two docker-compose files, one for local and one for prod. The prod file has the following details:
version: '3'
services:
  web:
    restart: on-failure
    build:
      context: ./
      dockerfile: Dockerfile.prod
    image: <image-name>
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    env_file:
      - ./.env.prod
    volumes:
      - pgdata:/var/lib/postgresql/data
    expose:
      - 5432
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  pgdata:
  static_volume:
  media_volume:
I don't know what I am doing wrong at this point. I've killed many instances with different restart policies, but the behavior stays the same. I have the sample nginx setup running successfully, and the project also works locally.
Could I be using the wrong compose file?
Are my project's nginx settings wrong?
Why is the container always restarting?
Why can't I open tcp:80 after setting firewall rules?
Does tagging, such as :v2, affect the performance of the container in GCE?
I'd appreciate any help. I have wasted hours trying to figure it out.
PS: If it helps, the project is built with Django and Python3.9
I am trying to run Django tests on GitLab CI, but I am getting this error during the test run. Last week it was working perfectly, and then it suddenly broke:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "database" (172.19.0.3) and accepting
    TCP/IP connections on port 5432?
My gitlab-ci file looks like this:
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
test:
  stage: test
  image: tiangolo/docker-with-compose
  script:
    - docker-compose -f docker-compose.yml build
    - docker-compose run app python3 manage.py test
My docker-compose.yml looks like this:
version: '3'
volumes:
  postgresql_data:
services:
  database:
    image: postgres:12-alpine
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - "5432"
    restart: on-failure
  app:
    container_name: proj
    hostname: proj
    build:
      context: .
      dockerfile: Dockerfile
    image: sampleproject
    command: >
      bash -c "
      python3 manage.py wait_for_db &&
      python3 manage.py migrate &&
      gunicorn sampleproject.wsgi:application -c ./gunicorn.py
      "
    env_file: .env
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/app
    depends_on:
      - database
      - redis
So why is it refusing the connection? I have no idea, and it was working last week.
Unsure if it would help in your case, but I was getting the same issue with docker-compose. What solved it for me was explicitly specifying the hostname for Postgres.
services:
  database:
    image: postgres:12-alpine
    hostname: database
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    ...
Could you run docker container ls and check whether the container name of the database is, in fact, "database"?
You've skipped setting the container_name for that container, and it may be that Docker isn't creating it with the default name of the service, i.e. "database", so the DNS isn't able to find it under that name in the network.
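For reference, Compose's default container name is <project>_<service>_1, so the output should include a row along these lines (the ID, timing, port mapping, and project name here are hypothetical):

$ docker container ls
CONTAINER ID   IMAGE                COMMAND                  CREATED         STATUS         PORTS                     NAMES
f3a1b2c3d4e5   postgres:12-alpine   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:49153->5432/tcp   myproject_database_1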
Reboot the server. I encounter similar errors on Mac and Linux from time to time when I run more than one container that uses Postgres.
I am trying to run Celery in a separate Docker container alongside a Django/Redis Docker setup.
When I run docker-compose up -d --build, my logs (via docker-compose logs --tail=0 --follow) show the celery_1 container spamming the console repeatedly with:
Usage: nc [OPTIONS] HOST PORT  - connect
       nc [OPTIONS] -l -p PORT [HOST] [PORT]  - listen
    -e PROG    Run PROG after connect (must be last)
    -l         Listen mode, for inbound connects
    -lk        With -e, provides persistent server
    -p PORT    Local port
    -s ADDR    Local address
    -w SEC     Timeout for connects and final net reads
    -i SEC     Delay interval for lines sent
    -n         Don't do DNS resolution
    -u         UDP mode
    -v         Verbose
    -o FILE    Hex dump traffic
    -z         Zero-I/O mode (scanning)
I am able to get celery working correctly by removing the celery service from docker-compose.yaml and manually running docker exec -it backend_1 celery -A proj -l info after docker-compose up -d --build. How do I replicate the functionality of this manual process within docker-compose.yaml?
My docker-compose.yaml looks like:
version: '3.7'
services:
  backend:
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./backend/app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
      - redis
    links:
      - db:db
  celery:
    build: ./backend
    command: celery -A proj worker -l info
    volumes:
      - ./backend/app/:/usr/src/app/
    depends_on:
      - db
      - redis
  redis:
    image: redis:5.0.6-alpine
    command: redis-server
    expose:
      - "6379"
  db:
    image: postgres:12.0-alpine
    ports:
      - 5432:5432
    volumes:
      - /tmp/postgres_data:/var/lib/postgresql/data/
I found out the problem was that my celery service could not resolve the SQL host. This was because the SQL host is defined in .env.dev, which the celery service did not have access to. I added
env_file:
  - ./.env.dev
to the celery service and everything worked as expected.
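This would also explain the nc usage spam: if the entrypoint waits for the database with something along the lines of nc -z "$SQL_HOST" "$SQL_PORT" (variable names assumed; they would live in .env.dev), then without the env file both variables expand to empty strings, BusyBox nc is invoked with no host/port, and it prints its usage text on every retry. For completeness, a sketch of the fixed celery service:

celery:
  build: ./backend
  command: celery -A proj worker -l info
  volumes:
    - ./backend/app/:/usr/src/app/
  env_file:
    - ./.env.dev   # gives celery the same environment as backend
  depends_on:
    - db
    - redis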