Docker-compose - Cannot connect to Postgres - django

It looks like a common issue: can't connect to Postgres from a Django app in Docker Compose.
I tried several solutions from the web, but I'm probably missing something I cannot see.
The error I got is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
Where the "db" should be the name of the docker-compose service and which must setup in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db #this one should match the service name
SQL_PORT=5432
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP addresses with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of the docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?

I'm writing this to save anyone in the future from the same issue.
After countless tries, I started thinking that nothing was wrong from the pure Docker perspective: I was right.
SOLUTION: My only remaining suspect was the execution inside a virtual machine, and indeed running the same Docker setup directly on the host worked like a charm!
The networking issue was related to the VM (VirtualBox, Ubuntu 20.04).
I do not know if there is a way to make docker-compose work inside a VM, so any suggestion is appreciated.

You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
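You can verify this yourself by listing the Docker networks and checking which containers are attached to each one:
# Compose creates a network named <project>_default; here that is backend_default
docker network ls
# Shows the containers attached to the Compose network and their IP addresses
docker network inspect backend_default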
There's no reason to start a container with an interactive shell and then start your application within it (any more than you'd normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
  ./manage.py migrate

# For debugging: run an interactive shell
docker-compose run web bash
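If you just want a shell inside the already-running web container (started by docker-compose up) rather than a new one-off container, docker-compose exec will attach one, assuming the image ships a shell:
# Open a shell in the running web service container
docker-compose exec web /bin/sh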

Related

Unable to connect Django docker image to GCP instance using GCloud Proxy

I am using cloud_sql_proxy to connect to a Google Cloud Postgres instance. I followed the steps on the GCP website: https://cloud.google.com/sql/docs/postgres/connect-admin-proxy. When I run it locally using python manage.py runserver with the db host set to 127.0.0.1 and the port to 5432, the program works fine.
If I dockerize the application and run it, I get the error:
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
My docker-compose.yml:
services:
  web:
    build: .
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - 8000:8000
So I tried to dockerize the application following the Stack Overflow answer "Is there a way to access google cloud SQL via proxy inside docker container" and modified the host in the settings.py file too.
Now I am facing the error:
gcloud is not in the path and -instances and -projects are empty
services:
  web:
    build: .
    command: python manage.py runserver
    depends_on:
      - cloud-sql-proxy
    volumes:
      - .:/code
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
  cloud-sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command: /cloud_sql_proxy --dir=/cloudsql instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - credentials.json:/secrets/cloudsql/credentials.json
    restart: always
Could you please help me with this issue? My requirement is to create a Docker image with the Django application so that it can be deployed to GCP.
I think you are missing a - before instances. It should be:
command: /cloud_sql_proxy --dir=/cloudsql -instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
I recommend you follow this documentation:
Connecting psql client using the Cloud SQL Proxy docker Image
This page describes how to connect a psql client to your Cloud SQL instance, from a client machine running Linux or a Compute Engine Linux instance, using the Cloud SQL Proxy Docker image. I think this guide could meet your needs.
The guide shows how to start the proxy in step 9.
Unix sockets:
docker run -d -v /cloudsql:/cloudsql \
  -v <PATH_TO_KEY_FILE>:/config \
  gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy -dir=/cloudsql \
  -instances=<INSTANCE_CONNECTION_NAME> -credential_file=/config
If you are using the credentials provided by your Compute Engine instance, do not include the credential_file parameter and the -v <PATH_TO_KEY_FILE>:/config line.
If you are using a container-optimized image, use a writable directory in place of /cloudsql, for example:
-v /mnt/stateful_partition/cloudsql:/cloudsql
Additionally, if you want to know more about Cloud SQL Proxy parameters and flags, I recommend taking a look at this page.
I hope this information is useful to you.
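Since your compose file publishes the proxy on TCP port 5432 rather than a Unix socket, a rough standalone equivalent of the same docker run, switched to TCP, would look like this; <INSTANCE_CONNECTION_NAME> and <PATH_TO_KEY_FILE> are placeholders for you to fill in:
docker run -d \
  -v <PATH_TO_KEY_FILE>:/config \
  -p 127.0.0.1:5432:5432 \
  gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
  -instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 \
  -credential_file=/config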

docker-compose returns error when trying to start on AWS

I want to deploy my server to an AWS EC2 instance. When I enter 'sudo docker-compose up' in the ssh console, I get the following error:
ERROR: for nginx Cannot start service nginx: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\"/home/ubuntu/site/nginx/default.conf\\" to rootfs \\"/var/lib/docker/overlay2/b24f64910c6ab7727a4cb08afac0d034bb759baa4bfd605466ca760359f411c2/merged\\" at \\"/var/lib/docker/overlay2/b24f64910c6ab7727a4cb08afac0d034bb759baa4bfd605466ca760359f411c2/merged/etc/nginx/conf.d/default.conf\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
This is my docker-compose.yml file:
version: '2'
networks:
  SRVR:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: SRVR_nginx
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/logs:/var/log/nginx
    depends_on:
      - php
    networks:
      - SRVR
  php:
    build: ./php
    container_name: SRVR_php
    volumes:
      - ./code:/code
    ports:
      - "9000:9000"
    networks:
      - SRVR
The same docker-compose.yml works fine on my local computer, which runs Ubuntu. The EC2 instance also runs Ubuntu.
Your problem is with ./nginx/default.conf: Docker treats it as a folder on the host, while /etc/nginx/conf.d/default.conf inside the container is a file.
I was too hasty to ask. Here's what's going on: the bind mount requires the default.conf file to already exist on the host. If it doesn't, bringing the nginx service up creates a folder called default.conf instead. Once I manually copied my original default.conf file to the appropriate location, everything worked fine.
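For anyone hitting the same thing, the fix is just to make sure the config exists as a file on the EC2 host before running docker-compose up; something along these lines, with <ec2-host> as a placeholder for your instance address:
# On the EC2 host: remove the directory Docker created in place of the file
rm -rf /home/ubuntu/site/nginx/default.conf
# From your local machine: copy the real config to the instance
scp ./nginx/default.conf ubuntu@<ec2-host>:/home/ubuntu/site/nginx/default.conf
# Back on the EC2 host: bring the stack up again
cd /home/ubuntu/site && sudo docker-compose up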

Django Docker/Kubernetes Postgres data not appearing

I just tried switching from docker-compose to docker stacks/kubernetes. In compose I was able to specify where the postgres data volume was and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file; I can connect to the pod and use psql to see the schema, but none of the data entered from docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, please keep in mind that minikube runs in its own VM, so it is not affected by changes you make on your host with e.g. docker, as it has its own filesystem inside that VM.
Hint: you can run docker against minikube's Docker daemon if you first run eval $(minikube docker-env)
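To see the difference concretely, you can point your local docker CLI at minikube's daemon and compare what it reports with what your host daemon reports:
# Point the docker CLI at the Docker daemon inside the minikube VM
eval $(minikube docker-env)
# These now list containers and volumes inside the minikube VM,
# not the ones docker-compose created on your host
docker ps
docker volume ls
# Switch back to the host daemon when done
eval $(minikube docker-env --unset)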
For Docker stacks, run the docker service inspect command; it should show the mount point of the Postgres container:
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
Fixed in the last Docker Edge update.

Vagrant to Docker communication via Docker IP address

Here's my situation. We are slowly migrating our VMs from Vagrant to Docker, but we are mostly still Docker newbs. Some of our newer systems' development environments have already been moved to Docker. We have test code that runs on an older Vagrant VM and used to communicate with a Vagrant VM running a Django RESTful API application in order to run integration tests. This Django API is now in a Docker container. So now we need the test code that runs in a Vagrant VM to be able to make requests to the API running in Docker. Both the Docker container and the Vagrant VM are running side by side on a host machine (macOS). We are using Docker Compose to initialize the Docker container; the main compose yaml file is shown below.
services:
  django-api:
    ports:
      - "8080:8080"
    build:
      context: ..
      dockerfile: docker/Dockerfile.bapi
    extends:
      file: docker-compose-base.yml
      service: django-api
    depends_on:
      - db
    volumes:
      - ../:/workspace
    command: ["tail", "-f", "/dev/null"]
    env_file:
      - ${HOME}/.pam_environment
    environment:
      - DATABASE_URL=postgres://postgres:password@db
      - PGHOST=db
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=password
      - CLOUDAMQP_URL=amqp://rabbitmq
  db:
    ports:
      - "5432"
    extends:
      file: docker-compose-base.yml
      service: db
    volumes:
      - ./docker-entrypoint-initdb.d/init-postgress-db.sh:/docker-entrypoint-initdb.d/init-postgress-db.sh
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: django-api-dev
I would like the tests that are running on the vagrant to still be able to communicate with the django application that's now running on docker, similar to the way it could communicate with the api when it was running in a vagrant. I have tried several different types of network configurations within the docker compose file but alas networking is not my strong suit and I'm really just shooting in the dark here.
Is there a way to configure my docker container and/or my vagrant so that they can talk to each other? I need to expose my docker container's IP address so that my vagrant can access it.
Any help/tips/guidance here would be greatly appreciated!
In your Vagrantfile, make sure you have a private host-only network. I usually use one with a fixed IP:
config.vm.network "private_network", ip: "192.168.33.100"
Now both VMs will get a static IP on the host-only network. When you run docker-compose up -d in your Django VM, that VM maps port 8080 to the container's port 8080, so you can use 192.168.33.100:8080 from the other VM to test the APIs.
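From the test VM you can then check connectivity with a plain curl against that address (the /api/ path is just a placeholder for whatever endpoint your Django app serves):
# Run inside the test Vagrant VM
curl -v http://192.168.33.100:8080/api/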
I would like the tests that are running on the vagrant to still be
able to communicate with the django application that's now running on
docker, similar to the way it could communicate with the api when it
was running in a vagrant.
As you say, you are using docker-compose, so exposing ports will do what you are looking for. In the yml file, where the django application is defined, create a port mapping that binds a port on the host to the port in the container. You can do this by including:
ports:
  - "<host_port_where_you_want_to_access>:<container_port_where_application_is_running>"
Is there a way to configure my docker container and/or my vagrant so
that they can talk to each other? I need to expose my docker
container's IP address so that my vagrant can access it.
It is. If both containers are on the same network (when services are declared in the same compose file, all services are on the same network by default), then one container can talk to another by using its service name.
Example: in the yml file in the question, django-api can access db at db:xxxx, where xxxx is whatever port the service listens on inside the container. xxxx does not need to be mapped to the host or exposed.
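If you want to confirm the name resolution, you can do it from inside the running django-api container using the Python interpreter the Django image already has (service names taken from your compose file):
# Resolve the "db" service name from inside the django-api container
docker-compose exec django-api python -c "import socket; print(socket.gethostbyname('db'))"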

Symfony Application on AWS ECS with a data-only container - Is this the right direction?

I have a dockerized Symfony2 Application consisting of four containers:
php-fpm
nginx
mysql
code (data container with a volume)
On my local machine this setup runs without problem with docker-compose:
code:
  image: ebc9f7b635b3
nginx:
  build: docker/nginx
  ports:
    - "80:80"
  links:
    - php
  volumes_from:
    - code
php:
  build: docker/php
  volumes_from:
    - code
  links:
    - mysql
mysql:
  image: mysql
  ports:
    - "5000:3306"
  command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
  environment:
    - MYSQL_ROOT_PASSWORD=xyz
    - MYSQL_DATABASE=xyz
    - MYSQL_USER=xyz
    - MYSQL_PASSWORD=xyz
I wanted to deploy my application on AWS ECS, so I prebuilt all the images and pushed them to the AWS container registry, created a new cluster with a new service, and translated my local docker-compose.yml to a Task Definition.
Since yesterday I have been trying to get it running, but after following the official documentation
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
and searching for hours, I can't find a way to get it working.
Either the service gets stuck in a PENDING state without bringing up a container (except for the mysql container), or, if I attach a volume to the task definition, the containers come up but the data is not mapped.
Do I have to reference the data-only container with some special syntax in the volumesFrom section of the Task Definition?
Is there a solution for this by now other than using EFS?
Thanks!