Django cannot connect to running postgres docker container

I am surprised that I cannot get this to work, as I have run Django applications with a Postgres DB in Docker containers many times before. This time, however, on my Linux machine the Django application cannot connect to the Postgres container, whereas the same setup works without a problem on my Mac.
I am getting the standard error:
django.db.utils.OperationalError: could not connect to server: Host is unreachable
Is the server running on host "postgresdb" (172.27.0.2) and accepting
TCP/IP connections on port 5432?
The Postgres container is running and should be reachable. I checked the IP address of the container and it matches the one in the error message. My docker-compose file looks like this:
version: "3.8"
services:
app:
container_name: data-management
restart: always
build: ./
ports:
- '3000:3000'
links:
- 'postgresdb'
volumes:
- ./:/usr/local/share/src/app
env_file:
- .dev.env
postgresdb:
container_name: be-postgres
restart: always
image: postgres:13.4-alpine
ports:
- '5432:5432'
volumes:
- postgres-db:/var/lib/postgresql/data
env_file:
- .dev.env
volumes:
postgres-db:
The configurations are stored in an environment file, which is used in both containers to avoid any mismatches.
POSTGRES_PASSWORD=devpassword123
POSTGRES_USER=devuser
POSTGRES_SERVICE=postgresdb
POSTGRES_PORT=5432
POSTGRES_DB=data_management
The django DB settings are as standard as they could be:
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ['POSTGRES_SERVICE'],
        'NAME': os.environ['POSTGRES_DB'],
        'PORT': os.environ['POSTGRES_PORT'],
        'USER': os.environ['POSTGRES_USER'],
        'PASSWORD': os.environ['POSTGRES_PASSWORD'],
    }
}
I am out of ideas as to what could be causing this, since it happens only on my Fedora machine and there are no issues when running on my Mac. I have had SELinux problems with some Docker volumes in the past, but after googling I could not find any suggestion that SELinux could be the culprit again.

Related

Python Flask not running with docker-compose [duplicate]

I have a simple Flask app consisting of a web part and a database part. Code can be found here. Running this Flask app locally works perfectly well.
I'd like to have the app run on Docker. Therefore I created the following docker-compose file.
version: '3.6'

services:
  web:
    build: .
    depends_on:
      - db
    networks:
      - default
    ports:
      - 50000:5000
    volumes:
      - ./app:/usr/src/app/app
      - ./migrations:/usr/src/app/migrations
    restart: always

  db:
    environment:
      MYSQL_ROOT_PASSWORD: ***
      MYSQL_DATABASE: flask_employees
      MYSQL_USER: root
      MYSQL_PASSWORD: ***
    image: mariadb:latest
    networks:
      - default
    ports:
      - 33060:3306
    restart: always
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data: {}
Inside the Flask app, I'm setting the following: SQLALCHEMY_DATABASE_URI = 'mysql://root:***@localhost:33060/flask_employees'.
When I do docker-compose up, both containers are created and running; however, when I go to http://localhost:50000 or http://127.0.0.1:50000 I get:
This site can’t be reached, 127.0.0.1 refused to connect.
You have to run your Flask app on host "0.0.0.0" in order to be able to map ports from the Docker container:

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
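
For the same reason, the SQLALCHEMY_DATABASE_URI from the question won't work from inside the web container, since localhost there refers to the container itself. It would likely need to target the db service name and the container-internal port instead, along these lines (an assumption based on the compose file above):

SQLALCHEMY_DATABASE_URI = 'mysql://root:***@db:3306/flask_employees'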

Docker-compose: db connection from web container to neo4j container using bolt

I'm working on a Django project with a neo4j DB, using neomodel and django-neomodel.
I'm trying to containerize it using docker-compose.
When I build the images everything seems fine, but any connection from the web container to the db over bolt is refused, although I can access the neo4j db from the browser over http, and even from the local machine over bolt.
this is the error I get:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to ('127.0.0.1', 7688) (reason 111)
I'm using the following configs:
Django==3.1.1
neo4j==4.1.0
neomodel==3.3.0
neobolt==1.7.17
this is my docker-compose file:
version: '3'

services:
  backend:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - neo4j_db
    networks:
      - mynetwork
    links:
      - neo4j_db

  neo4j_db:
    image: neo4j:3.5.17-enterprise
    ports:
      - "7474:7474"
      - "7688:7687"
    expose:
      - 7474
      - 7687
    volumes:
      - ./db/dbms:/data/dbms
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - dbms.connector.bolt.listen_address=:7688
      - dbms.connector.bolt.advertised_address=:7688
    networks:
      - mynetwork

networks:
  mynetwork:
    driver: bridge
and here is the connection config in the Django settings:

NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass#123@127.0.0.1:7688')
Thanks in advance..
To connect from one container to another (inside the same docker-compose project), you should use the container name of the target container instead of localhost (or 127.0.0.1). In your case that would be neo4j_db.
When connecting from another container, you should use the internal port - in your case 7687.
In the neo4j service, bolt.listen_address should be 7687 instead of 7688 (honestly, I'm not sure why you are changing the default port).
To wrap up, the connection URL should be:

bolt://neo4j:pass@neo4j_db:7687
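
Applied to the Django settings above, the fallback value would then look like this (keeping the question's environment-variable pattern; substitute your actual password):

NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass@neo4j_db:7687')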

Can't connect to postgresql in docker (receiving fast shutdown request)

I am running django/postgresql in a docker container. When I run docker-compose, the postgresql service starts and listens for a moment, then shuts down with the log:

"listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"" followed by
"database system was shut down".

I have read somewhere that it is receiving a postmaster fast shutdown request, but I don't know how to solve this. I have tried changing ports and other PostgreSQL env variables without success.
Here is my docker-compose.yml:
version: '3'

volumes:
  postgres_data_local: {}
  postgres_backup_local: {}

services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app
    env_file: .env
    ports:
      - "8000:8000"
    command: /start.sh

  postgres:
    image: postgres:10.1-alpine
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
    env_file: .env

  redis:
    image: redis:3.0
    ports:
      - '6379:6379'
My .env file looks something like this:
# PostgreSQL conf
POSTGRES_PASSWORD=p3wrd
POSTGRES_USER=postgres
POSTGRES_DB=postgres
POSTGRES_HOST=127.0.0.1 #have tried localhost, 0.0.0.0 etc
POSTGRES_PORT=5432
DATABASE_URL=postgresql://postgres:p3wrd@127.0.0.1:5432/postgres
# General settings
READ_DOT_ENV_FILE=True
SETTINGS_MODULE=config.settings.test
SECRET_KEY=Sup3rS3cr3tP#22word
DEBUG=True
ALLOWED_HOSTS=*
# URL for Redis
REDIS_URL=redis://127.0.0.1:6379
Why are you setting POSTGRES_HOST to 127.0.0.1 or variants thereof? That means "localhost", which in a container means "the local container". Postgres isn't running inside the django container so that won't work.
Since you're using docker-compose, your containers are all running in a user-defined network. This means that Docker maintains a DNS server for you that maps service names to container ip addresses. In other words, you can simply set:
POSTGRES_HOST=postgres
DATABASE_URL=postgresql://postgres:p3wrd@postgres:5432/postgres
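
If you want to verify that the service name resolves from inside the django container, a quick check (a sketch - run docker-compose exec django python and paste it in) is:

import socket

# On the compose network, Docker's internal DNS maps the service name
# to the postgres container's IP address.
print(socket.gethostbyname('postgres'))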

Password authentication failed for Docker's postgres container

I am trying to dockerize my Django project. For this purpose, I am dividing the whole project into two parts:

1. All the web-related things in one container.
2. The database, i.e. Postgres, in another.
I am creating the Postgres database container using the command:
docker run --name postgres -it -e POSTGRES_USER=username -e POSTGRES_PASSWORD=mysecretpassword postgres
Once the postgres instance was running, I entered its shell using:

docker exec -it postgres /bin/bash
root@ae052fbce400:/# psql -U psql

Inside the psql shell, I created a database named DBNAME and granted all privileges on it to username.
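
Roughly, the statements were of this form (a reconstruction - the exact SQL was not shown):

-- Quoting preserves the uppercase database name.
CREATE DATABASE "DBNAME";
GRANT ALL PRIVILEGES ON DATABASE "DBNAME" TO username;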
The database settings inside the webapp container are:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'DBNAME',
        'USER': 'username',
        'PASSWORD': 'mysecretpassword',
        'HOST': 'postgres',
        'PORT': 5432
    }
}
Here is my docker-compose.yml file
services:
  web:
    image: 1ce04167758d  # image build of webapp container
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - postgres

  postgres:
    image: postgres
    env_file:
      - .env
    expose:
      - "5432"
When I ran docker-compose up, I got the following error:
web_1 | django.db.utils.OperationalError: FATAL: password authentication failed for user "username"
I tried various steps, but this is the one that solved my problem:

docker stop $(docker ps -qa) && docker system prune -af --volumes
docker-compose up

This likely works because the official postgres image only applies POSTGRES_USER/POSTGRES_PASSWORD when it initializes an empty data directory, so pruning the old volumes forces re-initialization with the current credentials. Note that it also deletes all existing containers, images, and volumes.
This is because you created two database services: one manually via docker run and one via docker-compose. Unfortunately, both are unusable as they stand, meaning they'd have to be reconfigured in order to cooperate.
Scenario 1 - using a separate DB
You should remove the database definition from the compose file - so that it looks like this:
version: '3'

services:
  web:
    image: 1ce04167758d
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
And in your config you should change the host from postgres to your host machine's address - for example 192.168.1.2:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'itoucan',
        'USER': 'mannu',
        'PASSWORD': 'mysecretpassword',
        'HOST': '192.168.1.2',
        'PORT': 5432
    }
}
Then, run a separate database service just like you did, via the run command, but exposing a port publicly.
docker run --name postgres -it -p 5432:5432 -e POSTGRES_USER=mannu -e POSTGRES_PASSWORD=mysecretpassword postgres
When it has finished initializing, and once you have finished adding databases and users, you can fire up your Django app and it'll connect.
further reading on postgres env variables
Scenario 2 - using composed database
There's a lot of explaining here, as you have to set up an entrypoint that will wait until the DB is fully initialized, but I've already written a step-by-step answer on how to do it here on Stack Overflow.
Your situation is basically the same, except for the DB service. You can leave your compose file nearly as it is now, with a few small changes:
version: '3'

services:
  web:
    image: 1ce04167758d
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    entrypoint: ["./docker-entrypoint.sh"]

  postgres:
    image: postgres
    env_file:
      - .env
I've added an entrypoint that is supposed to wait until your DB service completes initialization (for instructions on how to set it up, refer to the link I provided earlier); a minimal sketch of the idea follows.
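
For illustration, the waiting step inside such a docker-entrypoint.sh is often just a retry loop. A minimal Python sketch of it, assuming psycopg2 is installed and the POSTGRES_* variable names are those used by the official image (the names and the default host postgres are assumptions):

import os
import time

import psycopg2

# Keep retrying until Postgres is ready to accept connections.
while True:
    try:
        psycopg2.connect(
            host=os.environ.get('POSTGRES_HOST', 'postgres'),
            port=int(os.environ.get('POSTGRES_PORT', '5432')),
            user=os.environ['POSTGRES_USER'],
            password=os.environ['POSTGRES_PASSWORD'],
            dbname=os.environ['POSTGRES_DB'],
        ).close()
        break
    except psycopg2.OperationalError:
        time.sleep(1)  # DB not ready yet; wait and retry

print('Postgres is available')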
I can see you've already defined an entrypoint - I'd suggest removing that entrypoint from the Dockerfile, moving it to the compose file, and merging it with what I've described in the referred link. It's a common practice in bigger/commercial environments, as you might have many entrypoints, or your entrypoint might not be intended to run at build time - like the one I suggest.
I've removed the DB port mapping, as you shouldn't expose services when there's no need: if only the web service is supposed to use the DB, there's no reason to expose it to anything else.
With the above setup, your Django configuration would be perfectly fine.
edit from comments
The 0.0.0.0 IP provided to postgres means that the server will listen on all incoming connections. It also means that in settings.py you should specify not the 0.0.0.0 address but the address of the host on which your service runs - in your case, I guess it runs on your computer, so simply running:

$ ifconfig

on your host will give you your local IP address (192.x.x.x or 10.x.x.x), and this IP is what you specify in settings.

difference between localhost and postgres for host in docker

I am developing a Django app and trying to run it inside Docker. I have an issue that I have not been able to understand so far: while running the app with docker-compose, the web app cannot connect to the database when I use this configuration:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'my_db',
        'USER': 'my_user',
        'PASSWORD': '',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
but once I change the host to postgres, it works, like this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'my_db',
        'USER': 'my_user',
        'PASSWORD': '',
        'HOST': 'postgres',
        'PORT': '5432',
    }
}
What is the difference between postgres and localhost? One runs without an issue inside Docker but not in the development environment on my Mac, and the other is the opposite.
# docker-compose.yml
version: '3'

services:
  db:
    image: postgres
    expose:
      - "5432"

  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Docker Compose adds the hostnames of all your linked containers to each other.
On your machine, the postgres database is actually running on localhost; that's why you use the localhost hostname.
In Compose, it's running in the postgres container, with the hostname postgres; that's why you use the postgres hostname.
If you want, you can create an entry in your hosts file to redirect postgres to localhost; you will then be able to use postgres everywhere.
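
For example, a line like the following in /etc/hosts (an illustration, assuming the standard hosts file location) would make the postgres hostname resolve locally:

127.0.0.1   postgres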
Each Docker container comes with its own networking namespace by default. That namespace includes its own private loopback interface, aka localhost. Containers are also attached to networks inside of Docker, where they have their own internal DNS entry and can talk to other containers on the same network.
When you run your application inside a container with a bridge network, localhost will point to the container, not to the Docker host you are running on. The hostname to use depends on your scenario:
- To talk to other containers, use the container name in DNS.
- If it's started by docker-compose, use the service name to talk to one of the containers in that service using DNS round robin.
- If it's started inside of swarm mode, you can use the service name there to reach a VIP that load balances across all containers providing that service.
- And if you need to talk to the Docker host itself, use a non-loopback IP address of the Docker host.
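
A common pattern that makes the same settings file work both locally and under Compose is to read the host from an environment variable, for example (a sketch; the POSTGRES_HOST variable name is an assumption, and db is the service name from the compose file above):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'my_db',
        'USER': 'my_user',
        'PASSWORD': '',
        # Defaults to localhost for local runs; set POSTGRES_HOST=db in the
        # web container's environment when running under docker-compose.
        'HOST': os.environ.get('POSTGRES_HOST', 'localhost'),
        'PORT': '5432',
    }
}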