Django 'failed connection' between two containers

I'm currently trying to make a connection between two of my Docker containers (the requesting container running Gunicorn/Django, and the API container running Kroki).
I've had a look at other answers but seem to be coming up blank with a solution, so was hoping for a little poke in the right direction.
Docker-compose:
version: '3.8'
services:
  app:
    build:
      context: ./my_app
      dockerfile: Dockerfile.prod
    command: gunicorn my_app.wsgi:application --bind 0.0.0.0:8000 --access-logfile -
    volumes:
      - static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    environment:
      - DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 kroki
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
    ports:
      - 1337:80
    depends_on:
      - app
  kroki:
    image: yuzutech/kroki
    ports:
      - 7331:8000
volumes:
  postgres_data:
  static_volume:
settings.py
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS").split(" ")
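Side note: os.environ.get returns None if DJANGO_ALLOWED_HOSTS is unset, which would make .split(" ") raise; a safer sketch of the same line supplies a default:

import os

# Sketch: fall back to an empty string so .split() doesn't fail when the variable is unset
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(" ")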
Requesting code in Django
url = 'http://kroki:7331/bytefield/svg/' + base64_var
try:
    response = requests.get(url)
    return response.text
except ConnectionError as e:
    print("Connection to bytefield module, unavailable")
    return None
I'm able to access both containers via my browser successfully; however, initiating the code for an internal call between the two throws:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='kroki', port=7331): Max retries exceeded with url: /bytefield/svg/<API_URL_HERE> (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f286f5ecaf0>: Failed to establish a new connection: [Errno 111] Connection refused'))
I've had a go at accessing the URL via localhost:7331 and 127.0.0.1:7331, but neither seems to help at all.

When you access other containers within the same Docker network, you don't connect to them through the port published on the host; instead, you use the actual port where the application inside the container is listening.
I made a really simple example so you can see what the problem is:
version: '3.8'
services:
  app:
    image: busybox
    entrypoint: tail -f /dev/null
  kroki:
    image: yuzutech/kroki
    ports:
      - 7331:8000
From Host
❯ curl -s -o /dev/null -w "%{http_code}" localhost:7331
200
From App
/ # wget kroki:7331
Connecting to kroki:7331 (172.18.0.3:7331)
wget: can't connect to remote host (172.18.0.3): Connection refused
/ # wget kroki:8000
Connecting to kroki:8000 (172.18.0.3:8000)
saving to 'index.html'
index.html 100% |************************************************************************| 51087 0:00:00 ETA
'index.html' saved
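Applied to the question's code, that means pointing the request at kroki's container port instead of the published one. A minimal sketch (assuming base64_var is defined as in the question):

import requests

# Inside the Compose network, use the port kroki listens on in the
# container (8000), not the port published on the host (7331).
url = 'http://kroki:8000/bytefield/svg/' + base64_var
response = requests.get(url, timeout=5)
print(response.text)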

Related

Django doesn't connect to postgresql database using docker-compose

Yes, I know this question has been asked before, and the usual fix is setting DB_HOST to the Docker service name, in my case db.
Here's my docker-compose.yml:
version: "3"
services:
db:
image: postgres:latest
restart: always
ports:
- "54320:5432"
expose:
- "54320"
volumes:
- ./database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
# POSTGRES_HOST_AUTH_METHOD: trust
django:
build: ./api
restart: always
command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
volumes:
- ./api:/app/api
ports:
- "8000:8000"
depends_on:
- db
raddl_admin:
build: ./frontend/raddl_admin
stdin_open: true # docker run -i
tty: true # docker run -t
command: ["npm", "start"]
ports:
- "3000:3000"
volumes:
- ./frontend/raddl_admin:/app/frontend/raddl_admin
- /app/frontend/raddl_admin/node_modules
volumes:
database-data: # named volumes can be managed easier using docker-compose
Logging the django service gives:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" (172.18.0.2) and accepting
    TCP/IP connections on port 54320?
So it's clearly trying to connect to db on that port, and my environment vars are set for the django instance.
I can even ping my db service from my django service:
docker exec -it 1f764333ace2 bash | ping db
PING db (23.221.222.250): 56 data bytes
64 bytes from 23.221.222.250: icmp_seq=0 ttl=50 time=42.055 ms
64 bytes from 23.221.222.250: icmp_seq=1 ttl=50 time=42.382 ms
6
Why can't I connect to the db service from Django?
Ah, so my issue was with the port number. Keeping the db port at "5432" resolved this: inside the Docker network, containers connect on the container port (5432), not the host-published one (54320). I thought that it would conflict with my host's localhost instance of Postgres. I guess not.
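For reference, the resulting Django database settings would look something like this (a sketch assembled from the environment variables in the compose file above, not the poster's actual settings):

# settings.py (sketch): connect to the service name "db" on the
# container port 5432, not the host-published 54320
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}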

Django + ElasticSearch + Docker - Connection Timeout no matter what hostname I use

I've been having issues connecting to my Elasticsearch container since day one.
First I used elasticsearch as the hostname, then I tried the container name web_elasticsearch_1, and finally I set a static IP address for the container and passed it in my configuration file.
PyPI packages:
django==3.2.9
elasticsearch==7.15.1
elasticsearch-dsl==7.4.0
docker-compose.yml
version: "3.3"
services:
web:
build:
context: .
dockerfile: local/Dockerfile
image: project32439/python
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
env_file:
- local/python.env
depends_on:
- elasticsearch
elasticsearch:
image: elasticsearch:7.10.1
environment:
- xpack.security.enabled=false
- discovery.type=single-node
networks:
default:
ipv4_address: 172.18.0.10
settings.py
# Elasticsearch
ELASTICSEARCH_HOST = "172.18.0.10"
ELASTICSEARCH_PORT = 9200
service.py
from django.conf import settings
from elasticsearch import Elasticsearch, RequestsHttpConnection

es = Elasticsearch(
    hosts=[{"host": settings.ELASTICSEARCH_HOST, "port": settings.ELASTICSEARCH_PORT}],
    use_ssl=False,
    verify_certs=False,
    connection_class=RequestsHttpConnection,
)
traceback
HTTPConnectionPool(host='172.18.0.10', port=9200): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1973ebd6d0>, 'Connection to 172.18.0.10 timed out. (connect timeout=5)'))
By default, Docker Compose uses a bridge network to provision inter-container communication. You can read more about this network at the Debian Wiki.
What matters for you is that by default Docker Compose creates a hostname equal to the service name in the docker-compose.yml file. So update your file:
version: "3.3"
services:
web:
build:
context: .
dockerfile: local/Dockerfile
image: project32439/python
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
env_file:
- local/python.env
depends_on:
- elasticsearch
elasticsearch:
image: elasticsearch:7.10.1
environment:
- xpack.security.enabled=false
- discovery.type=single-node
Now you can connect with elasticsearch:9200 instead of 172.18.0.10 from your web container. For more info, see this article.
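The corresponding settings change would then be (a minimal sketch of the settings.py shown in the question):

# Elasticsearch: use the Compose service name, which Docker's internal DNS resolves
ELASTICSEARCH_HOST = "elasticsearch"
ELASTICSEARCH_PORT = 9200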

Connection refused with psql django and docker (cookiecutter-django)

My Django app is failing to connect to the Postgres container with the standard connection refused error. I used cookiecutter-django, which supplies the Postgres username and password automatically via environment variables; as I gather, these are then passed back into Django via a .env file that holds a DATABASE_URL string.
Error
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 5432?
When I set a breakpoint in Django settings, I can see that the DATABASE_URL seems to be converted appropriately into the standard db dict:
{'NAME': 'hustlestat', 'USER': 'HjhPLEwuVjUIIKEHebPqNG<redacted>', 'PASSWORD': 'I43443fR42wRkUaaQ8mkd<redacted>', 'HOST': 'postgres', 'PORT': 5432, 'ENGINE': 'django.db.backends.postgresql'}
When I exec into the psql container with psql hustlestat -U HjhPLEwuVjUIIKEHebPqN<redacted> I can connect to the db using that username. I'm not 100% sure about the password, as it doesn't ask me for one when I try to connect.
Here is the docker-compose file, which is generated automatically by cookiecutter:
version: '3'

volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: hustlestat_local_django
    container_name: django
    depends_on:
      - postgres
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: hustlestat_production_postgres
    container_name: postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data:Z
      - local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres
  docs:
    image: hustlestat_local_docs
    container_name: docs
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./hustlestat:/app/hustlestat:z
    ports:
      - "7000:7000"
    command: /start-docs
  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: mailhog
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
    container_name: redis
  celeryworker:
    <<: *django
    image: hustlestat_local_celeryworker
    container_name: celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: hustlestat_local_celerybeat
    container_name: celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: hustlestat_local_flower
    container_name: flower
    ports:
      - "5555:5555"
    command: /start-flower
  node:
    build:
      context: .
      dockerfile: ./compose/local/node/Dockerfile
    image: hustlestat_local_node
    container_name: node
    depends_on:
      - django
    volumes:
      - .:/app:z
      # http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html
      - /app/node_modules
    command: npm run dev
    ports:
      - "3000:3000"
      # Expose browsersync UI: https://www.browsersync.io/docs/options/#option-ui
      - "3001:3001"
The only oddity I have noticed is that despite django being named in the docker-compose file, when I view the running containers it has a random name such as:
hustlestat_django_run_37888ff2c9ca
I'm not sure if that is relevant.
Thanks for any help!
Okay, I have figured this out. I set a DATABASE_URL environment variable because I was originally getting an error saying it was unset. After googling, I came across a cookiecutter-django doc that said to set it, but I didn't read it carefully enough to realise the instruction was intended for non-Docker setups; mine uses Docker.
The reason I was getting that error is because I was exec'ing into the container and running management commands like this:
docker exec -it django bash then python manage.py migrate
The way this project and its environment variables are set up, you can't do that; you have to use this method from outside the container instead:
docker-compose -f local.yml run --rm django python manage.py migrate
I thought the two methods were interchangeable, but they are not. Everything works now.

Get NGINX ip address in docker in Django settings.py for Django-debug-toolbar

I have a dockerized DRF project with NGINX installed in it. All works fine except one thing:
Django debug toolbar requires INTERNAL_IPS parameter to be specified in settings.py.
For Docker I use this one:
hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
INTERNAL_IPS = [ip[:-1] + "1" for ip in ips]
It also works fine, but not with NGINX, as NGINX uses its own IP, dynamically (probably?) defined internally, assigned by Docker, or something else.
I can get this ip from server logs:
172.19.0.8 - - [09/Oct/2020:17:10:40 +0000] "GET /admin/ HTTP/1.0" 200 6166 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36 OPR/71.0.3770.228"
and add it to settings.py:
hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
INTERNAL_IPS = [ip[:-1] + "1" for ip in ips]
INTERNAL_IPS.append('172.18.0.8')
but I expect that this IP might be different on different machines etc., so it is not reliable enough.
So the question is: is it possible to somehow get the NGINX Docker IP in settings.py dynamically, or to fix docker-compose somehow?
docker-compose:
version: '3.8'

volumes:
  postgres_data:
  redis_data:
  static_volume:
  media_volume:

services:
  web:
    build: .
    #command: python /code/manage.py runserver 0.0.0.0:8000
    command: gunicorn series.wsgi:application --config ./gunicorn.conf.py
    env_file:
      - ./series/.env
    volumes:
      - .:/code
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    # ports:
    #   - 8000:8000
    expose:
      - 8000
    depends_on:
      - db
      - redis
      - celery
      - celery-beat
    links:
      - db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      # - 1337:80
      - 8000:80
    depends_on:
      - web
  db:
    build:
      context: .
      dockerfile: postgres.dockerfile
    restart: always
    env_file:
      - ./series/.env
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - target: 5432
        published: 5433
        protocol: tcp
        mode: host
  redis:
    image: redis:alpine
    command: >
      redis-server
      --appendonly yes
      --appendfsync no
      --auto-aof-rewrite-percentage 100
      --auto-aof-rewrite-min-size 64mb
    ports:
      - target: 6379
        published: 6380
        protocol: tcp
        mode: host
    volumes:
      - redis_data:/data
    restart: always
    environment:
      - REDIS_REPLICATION_MODE=master
  celery:
    build: .
    command: celery worker -A series --loglevel=INFO --concurrency=4 -E
    restart: always
    environment:
      - C_FORCE_ROOT=1
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
    hostname: celery-main
  celery-beat:
    build: .
    command: celery -A series beat --loglevel=INFO --pidfile=
    restart: always
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
    hostname: celery-beat
  flower:
    # http://localhost:8888/
    image: mher/flower
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/1
      - FLOWER_PORT=8888
    depends_on:
      - celery
      - celery-beat
      - redis
    restart: always
    ports:
      - target: 8888
        published: 8888
        protocol: tcp
        mode: host
Dockerfile of NGINX:
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
Thank you
Following an approach similar to the latest cookiecutter-django code, which adds an internal IP address for gulp, we can add the nginx container's IP address dynamically there as well (no need to hard-code the IP address):
import socket

hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
INTERNAL_IPS += [".".join(ip.split(".")[:-1] + ["1"]) for ip in ips]
# Since our requests will be routed to Django via the nginx container, include
# the nginx IP address as internal as well
hostname, _, nginx_ips = socket.gethostbyname_ex("nginx")
INTERNAL_IPS += nginx_ips
In the above code, we use the hostname nginx to match the name of the docker service you specified in your docker-compose file. (I had the same issue as you, and this worked for me.)
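One caveat: socket.gethostbyname_ex("nginx") raises socket.gaierror when the name doesn't resolve, e.g. if settings.py is ever loaded outside the Compose network, so it can be worth guarding (a sketch, not part of the original fix):

import socket

try:
    # "nginx" only resolves via Docker's internal DNS, i.e. inside the Compose network
    _, _, nginx_ips = socket.gethostbyname_ex("nginx")
    INTERNAL_IPS += nginx_ips
except socket.gaierror:
    # Running outside Docker (or nginx not up yet): skip the nginx IPs
    pass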

Can't connect to MySQL server on db. nodename nor server provided or not known using Docker

I'm trying to connect my application to my MySQL database, which I've got up and running via a docker-compose file. I'm using Flask and trying to connect using DBUtils.
I keep getting the error message described in my title:
(pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'db' ([Errno 8] nodename nor servname provided, or not known)"))
I've tried using the IP addresses of my Docker instances, as well as several other solutions from similar problems discussed here on Stack Overflow:
Docker-Compose can't connect to MySQL
Connecting to MySQL from Flask Application using docker-compose.
However, the offered solutions don't seem to be working for me.
My docker-compose file looks as follows:
version: '3.3'
services:
  db:
    image: mysql:8.0
    container_name: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'myPassword'
      MYSQL_DATABASE: 'databaseName'
    volumes:
      - .:/dockerFiles
    ports:
      - "3306:3306"
    expose:
      - "3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    restart: always
    ports:
      - "8080:80"
    volumes:
      - /sessions
And my connection code looks as follows:
import pymysql
# DBUtils 2.x import path; in 1.x it was: from DBUtils.PersistentDB import PersistentDB
from dbutils.persistent_db import PersistentDB


def connect_db():
    # Connects to the database and takes care of the connection
    return PersistentDB(
        creator=pymysql, host='db',
        user='root', password='myPassword', database='databaseName', port=3306,
        autocommit=True, charset='utf8mb4',
        cursorclass=pymysql.cursors.DictCursor)
The error [Errno 8] nodename nor servname provided, or not known states that the client can't resolve the database server's hostname. Thus, host='db' will only work if your Python code runs inside a container on the same Docker network as the database service db. Try adding a service for the Python code to your docker-compose file:
version: '3.3'
services:
  db:
    image: mysql:8.0
    container_name: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'myPassword'
      MYSQL_DATABASE: 'databaseName'
    volumes:
      - .:/dockerFiles
    ports:
      - "3306:3306"
    expose:
      - "3306"
  web:
    build: . # where your Dockerfile is
    command: sh -c 'python app.py' # this should be the command to start your application
    ports:
      - "8082:8082"
    volumes:
      - .:/code # it depends on the WORKDIR of your dockerfile
    links:
      - db
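Once both services are on the same Compose network, a quick sanity check from inside the web container might look like this (a sketch using the credentials from the compose file above):

import pymysql

# "db" resolves through Compose's internal DNS when this runs inside the "web" service
conn = pymysql.connect(host='db', port=3306, user='root',
                       password='myPassword', database='databaseName')
with conn.cursor() as cur:
    cur.execute('SELECT VERSION()')
    print(cur.fetchone())
conn.close()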