I have a Django-Postgres app deployed with Docker. I have two docker containers, web and db, and one docker volume for persistent storage.
My docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    env_file: .env
  web:
    build: .
    command: ./start.sh
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file: .env
volumes:
  postgres:
I made a change in my Django model.py:
class Meeting(models.Model):
    [...]
    participants = models.CharField(max_length=200)
to
class Meeting(models.Model):
    [...]
    user_participants = models.CharField(max_length=200)
However, this change is not reflected in my Django app and I get the following error:
column call_meeting.user_participants does not exist
I ran:
python manage.py makemigrations call
python manage.py migrate
Then I tried to delete the Django migrations and re-run the above commands. I also tried re-running docker-compose build, but none of it worked.
Why does this happen? Should I change the column names with raw SQL?
From your question and comments, I get the impression that your Django app is not using the correct database. Or, more probably (rather than two co-existing db instances), the database is recreated on every Docker restart, even though the docker-compose file does not look like it.
We are using a similar setup which works for us, but the data volume is mounted differently, so that might be the issue here:
- ./pgdata:/var/lib/postgresql/data/pgdata
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('POSTGRES_DB', 'my_db'),
        'HOST': os.environ.get('POSTGRES_HOST', 'localhost'),
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
        'USER': os.environ.get('POSTGRES_USER', 'my_user'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', 'my_password'),
        'CONN_MAX_AGE': 120,
    }
}
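To check which instance Django will actually target, you can print the parameters as resolved from the environment. This is a minimal sketch mirroring the os.environ.get fallbacks in the settings above; the default values ('my_db', 'localhost', and so on) are the same placeholder assumptions:

```python
import os

def resolved_db_params(env=None):
    """Return the connection parameters Django would resolve,
    mirroring the os.environ.get fallbacks in settings.py."""
    env = os.environ if env is None else env
    return {
        "NAME": env.get("POSTGRES_DB", "my_db"),
        "HOST": env.get("POSTGRES_HOST", "localhost"),
        "PORT": env.get("POSTGRES_PORT", "5432"),
        "USER": env.get("POSTGRES_USER", "my_user"),
    }

# With no overrides you silently fall back to localhost, which may be
# a different postgres instance than the one in the db container:
print(resolved_db_params(env={}))
```

If HOST comes back as localhost inside the web container, the env vars are not reaching Django and you are likely talking to the wrong database.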
docker-compose.yml
version: '2'
services:
  postgres:
    image: postgres:9.5
    volumes:
      - ./pgdata:/var/lib/postgresql/data/pgdata
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_DB=<name> # django.settings.DATABASES['default']['NAME']
      - POSTGRES_USER=<user> # django.settings.DATABASES['default']['USER']
      - POSTGRES_PASSWORD=<password> # django.settings.DATABASES['default']['PASSWORD']
    ports:
      - 5432:5432
  web:
    depends_on:
      - postgres
    volumes:
      - .:/app
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=<name> # no need if settings.py is fine
      - POSTGRES_USER=<user> # no need if settings.py is fine
      - POSTGRES_PASSWORD=<password> # no need if settings.py is fine
    build: .
    ports:
      - 8000:8000
    command: ./start.sh
Related
I have Django and Postgres containers. When it's time for Django to apply migrations, it doesn't see the postgres container that I named pgdb, and I get this error:
django.db.utils.OperationalError: could not translate host name "pgdb"
to address: Temporary failure in name resolution
It appears that there is no Docker container with the name "pgdb". If I run "docker-compose run pgdb", it creates a postgres container with a name like "app_pgdb_run_23423423" under the "app" group. The curious thing is that I previously made this work with the same settings.py setup and the "pgdb" container name. What could be the underlying issue?
You can clone the full code from https://github.com/UberStreuner/mailing-service
My settings.py setup (the environment variables definitely aren't at fault):
DATABASES = {
    'default': {
        'ENGINE': os.environ.get('DB_ENGINE', 'django.db.backends.postgresql_psycopg2'),
        'NAME': os.environ.get('POSTGRES_DB'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
        'HOST': os.environ.get('POSTGRES_HOST', 'pgdb'),
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
    }
}
docker-compose.yml
version: "3.8"
services:
  django:
    build: .
    container_name: django
    command: ./docker-entrypoint.sh
    volumes:
      - .:/usr/src/app/
    ports:
      - "8000:8000"
    env_file:
      - ./.dev.env
    depends_on:
      - pgdb
      - redis
  celery:
    build: .
    command: celery -A mailing worker -l INFO
    volumes:
      - .:/usr/src/app/
    env_file:
      - ./.dev.env
    depends_on:
      - django
      - redis
      - pgdb
  celery-beat:
    build: .
    command: celery -A mailing beat -l INFO
    volumes:
      - .:/usr/src/app/
    env_file:
      - ./.dev.env
    depends_on:
      - django
      - redis
      - pgdb
  pgdb:
    image: postgres
    container_name: pgdb
    env_file:
      - ./.dev.env
    volumes:
      - pgdata:/var/lib/postgresql/data
    expose:
      - "5432"
    ports:
      - "5432:5432"
  redis:
    image: "redis:alpine"
volumes:
  pgdata:
.dev.env
SECRET_KEY=django-insecure-kh8ot+ag-^osyhg5itzj!iw_g#dklw3f75c0)^-7)*#^)#tc#x
DEBUG=1
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1
CELERY_BROKER=redis://redis:6379/0
CELERY_BACKEND=redis://redis:6379/0
POSTGRES_DB=django_postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
docker-entrypoint.sh
#!/bin/bash
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
I fixed the issue.
The problem was probably that I was importing the environment variables for Django both via python-dotenv from the .dev.env file and from docker-compose, where the same file is passed as env_file. So I commented out the dotenv loading in settings.py, and it worked.
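The clash can be sketched in plain Python (a hypothetical helper, not python-dotenv's actual API): docker-compose's env_file injects real process environment variables, while an in-code dotenv load parses the same file a second time. If the in-code loader is allowed to override, the two sources can disagree:

```python
import os

def effective_setting(name, dotenv_values, override=False, default=None):
    """Hypothetical sketch of env-var precedence.
    dotenv_values: a dict parsed from a .env file inside settings.py.
    If override=True, the file wins over the real process environment,
    which is the situation that caused the clash."""
    if override and name in dotenv_values:
        return dotenv_values[name]
    if name in os.environ:
        return os.environ[name]
    return dotenv_values.get(name, default)

# compose sets POSTGRES_HOST=pgdb in the process env; a .env file
# written for local (non-Docker) runs might say localhost:
os.environ["POSTGRES_HOST"] = "pgdb"
file_vals = {"POSTGRES_HOST": "localhost"}
print(effective_setting("POSTGRES_HOST", file_vals))                 # pgdb
print(effective_setting("POSTGRES_HOST", file_vals, override=True))  # localhost
```

Removing the in-code load leaves a single source of truth, which is why commenting it out fixed the resolution error.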
I'm trying to dockerize an existing Django application. I've followed the example docker-compose and Dockerfile on https://docs.docker.com/samples/django/
I've added the links and networks suggestions seen in other posts, to no avail.
I've brought up the db by itself with docker-compose up db, run docker ps, and confirmed it's up, accepting connections, and has the name db. I'm not sure what else to try.
My docker-compose:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    container_name: db
    networks:
      harpnetwork:
  harp:
    build:
      context: .
      dockerfile: docker/harp/Dockerfile
    volumes:
      - ./harp:/harp
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - 16765:16765
    depends_on:
      - db
    networks:
      harpnetwork:
    links:
      - db:db
networks:
  harpnetwork:
My django db config:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_NAME'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
        'HOST': 'db',
        'PORT': 5432,
    }
}
and my Dockerfile to build the Django project:
FROM python:3.10
COPY ./harp harp
WORKDIR /harp
RUN pip install -r requirements.txt
RUN python manage.py migrate
CMD ["python", "manage.py", "runserver", "0.0.0.0:16765"]
I know there have been similar questions on this site, but the answers to those are sadly outdated.
So I have a Django application and I want to use Postgres as the underlying database.
Additionally, I want to separate both programs into separate Docker containers.
Now the Docker docs have a way to do it, but sadly it seems to be outdated: link
The problem appears when I call manage.py migrate during the Docker build, which is run by the docker-compose file.
I get the error that the host 'db' is unknown/invalid.
Compose excerpt:
services:
  db:
    image: postgres
    restart: always
    volumes:
      - DataBase:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
  app:
    build: FehlzeitErfassungProject
    restart: always
    volumes:
      - Logs:/app/logs
      - Backups:/app/Backups
      - Media:/app/MEDIA
    ports:
      - "5432:5432"
    depends_on:
      - db
The app Dockerfile:
FROM ubuntu
WORKDIR /app
ADD ./requirements.txt ./
RUN apt-get update && \
    apt-get upgrade -y
# getting tzdata to shut up
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Berlin
RUN apt-get -y install tzdata
# installing all needed programs
RUN apt-get install -y uwsgi python3 python3-pip
RUN python3 -m pip install pip --upgrade
RUN python3 -m pip install -r requirements.txt
COPY ./ ./
RUN ./manage.py migrate
CMD ["uwsgi", "uwsgu.ini"]
PS: It seems that the other containers are only launched once the app container has already finished building.
Edit:
the database_settings:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'postgres',
'USER': 'postgres',
'PASSWORD': 'postgres',
'HOST': '127.0.0.1',
'PORT': 5432,
}
}
I think your problem is in the settings.py DB configuration: for the HOST param you need the name of the postgresql service that you defined in the docker-compose file, which in this case is "db".
BTW, I think using hard-coded constants in settings.py is bad practice; it's better to define environment variables in the docker-compose file.
In your case it would be:
docker-compose.yml
services:
  db:
    image: postgres
    restart: always
    volumes:
      - DataBase:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
  app:
    build: FehlzeitErfassungProject
    restart: always
    volumes:
      - Logs:/app/logs
      - Backups:/app/Backups
      - Media:/app/MEDIA
    ports:
      - "5432:5432"
    environment:
      - DB_HOST=db
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_PASSWORD=postgres
    depends_on:
      - db
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASSWORD'),
        'HOST': os.environ.get('DB_HOST'),
    }
}
Remember to import os in settings.py.
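A fail-fast variant of the same idea can save debugging time (a hypothetical helper, not part of Django: os.environ.get silently returns None for a missing variable, which only surfaces later as a confusing connection error):

```python
import os

def require_env(name):
    """Hypothetical helper: raise immediately if a DB env var is unset,
    instead of letting Django fail later with an opaque error."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# in settings.py you would then write e.g. 'HOST': require_env('DB_HOST')
os.environ["DB_HOST"] = "db"
print(require_env("DB_HOST"))  # db
```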
Binding both container ports to the same host port will not work.
To access your database from the other container, you should link to it.
version: "3.0"
services:
  db:
    image: postgres
    restart: always
    volumes:
      - DataBase:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
  app:
    build: FehlzeitErfassungProject
    restart: always
    volumes:
      - Logs:/app/logs
      - Backups:/app/Backups
      - Media:/app/MEDIA
    ports:
      - "8000:8000"
    links:
      - "db"
    depends_on:
      - db
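The host-port clash itself is easy to reproduce outside Docker (a minimal sketch using only the standard library): binding the same address and port twice fails with EADDRINUSE, which is exactly what publishing 5432 from two services asks the host to do.

```python
import socket

# Grab an ephemeral port, then try to bind it a second time.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    clashed = False
except OSError:  # EADDRINUSE: the port is already taken
    clashed = True
finally:
    second.close()
    first.close()

print(clashed)  # True
```

Container-to-container traffic over a shared network never touches host ports, so only one service needs to publish 5432 at most.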
I'm using Docker and I have started a new project.
I have a backend in Django and I want to use a PostgreSQL database.
I can't connect to my database with Django or with PGAdmin.
This is what I got when I tried to connect to my database (with Django or with PGAdmin).
Here is part of my docker-compose file:
services:
  db:
    image: postgres
    container_name: db_wishlist
    restart: always
    volumes:
      - ./postgres-data-wishlist:/var/lib/postgresql/data
    env_file:
      - env/base/db.env
      - env/dev/db.env
    ports:
      - "5432:5432"
    networks:
      wishlist:
        aliases:
          - db
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    depends_on:
      - db
    ports:
      - "5051:80"
    env_file:
      - env/base/db.env
      - env/dev/db.env
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4#pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: root
    restart: always
    networks:
      wishlist:
        aliases:
          - pgadmin
  api:
    container_name: backend
    build:
      context: ..
      dockerfile: Docker/Dockerfiles/Django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ../Django/:/api
    ports:
      - "8000:8000"
    env_file:
      - env/base/db.env
      - env/dev/db.env
    depends_on:
      - db
    links:
      - db:db
    networks:
      wishlist:
        aliases:
          - api
Here is my db.env:
POSTGRES_DB=wishlist
POSTGRES_USER=maxdal1
POSTGRES_PASSWORD=demo
Here is my settings.py in Django:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'wishlist',
        'USER': 'maxdal1',
        'PASSWORD': 'demo',
        'HOST': 'db',
        'PORT': 5432,
    }
}
Here is an additional screenshot when I try to connect to my db with PGAdmin:
Error with PGAdmin
I checked every discussion about this error but I can't figure it out.
Can you help me please ?
Thanks.
Have you tried passing your db user to the postgres container via an environment variable?
environment:
  POSTGRES_USER: "youruser"
The following setup works for me:
docker-compose.yaml
services:
  postgres:
    image: postgres:10-alpine
    volumes:
      - ./pgdata:/var/lib/postgresql/data/pgdata
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_DB=django_db
      - POSTGRES_USER=XXX
      - POSTGRES_PASSWORD=XXX
    ports:
      - 127.0.0.1:5432:5432
    networks:
      - backend
  django:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=django_db
      - POSTGRES_USER=XXX
      - POSTGRES_PASSWORD=XXX
    volumes:
      - ./staticfiles:/staticfiles
      - ./media:/media
    ports:
      - 8000:8000
    networks:
      - backend
      - frontend
settings.py
POSTGRES_HOST = os.environ.get('POSTGRES_HOST', 'localhost')
POSTGRES_DB = os.environ.get('POSTGRES_DB', 'django_db')
POSTGRES_USER = os.environ.get('POSTGRES_USER', 'XXX')
POSTGRES_PW = os.environ.get('POSTGRES_PASSWORD', 'XXX')

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': POSTGRES_DB,
        'HOST': POSTGRES_HOST,
        'USER': POSTGRES_USER,
        'PASSWORD': POSTGRES_PW,
        'CONN_MAX_AGE': 120,
    }
}
The Dockerfile for the django app specifies the following env vars:
ENV POSTGRES_DB=$POSTGRES_DB
ENV POSTGRES_HOST=$POSTGRES_HOST
ENV POSTGRES_USER=$POSTGRES_USER
ENV POSTGRES_PASSWORD=$POSTGRES_PASSWORD
You can run this either like this:
$ docker-compose up
or only run the postgres via docker and run Django locally:
$ docker-compose up postgres
in a different terminal:
$ ./manage.py runserver
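Before starting runserver against either setup, a quick reachability probe can confirm that the database port is actually open (a minimal stdlib-only sketch; the host and port are whatever your settings resolve to):

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. when running Django locally against the dockerized postgres,
# the published port should be reachable on localhost:
print(can_connect("localhost", 5432))
```

This distinguishes "postgres isn't up / port not published" from Django-side misconfiguration.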
I'm relatively new to Django and Docker and am following a tutorial to build a mini application. However, I'm getting stuck with the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: nodename nor servname provided, or not known
My docker-compose file looks as follows:
version: '3'
services:
  db:
    image: 'postgres'
    ports:
      - '5432'
  core:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    volumes:
      - .:/code
    depends_on:
      - db
    links:
      - db:db
My settings.py file contains the database config:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}
I've seen the posts here and here, however neither has fixed the issue.
Would appreciate some guidance. Thanks.
So you are trying to reach the db running in one container from another container? If yes, the following could potentially help; at least it helped me when I had similar issues.
Try defining a networks config in addition to links in your compose file: create a network with some name and define it in both services, as described here and as the docs on the links config recommend.
Something like this for your case:
version: '3'
services:
  db:
    image: 'postgres'
    ports:
      - '5432'
    networks:
      some_network:
  core:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    volumes:
      - .:/code
    depends_on:
      - db
    links:
      - db:db
    networks:
      some_network:
networks:
  some_network:
It helped me to resolve the host name to connect to the db.
I had to use docker-compose up to make sure my db container was running.
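The error itself is a DNS failure, so you can check name resolution independently of postgres (a minimal stdlib sketch; run inside the core container, the service name db should resolve once both services share a network, while outside Docker it usually will not):

```python
import socket

def resolvable(hostname):
    """Return True if the hostname resolves to an address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

print(resolvable("localhost"))  # True
print(resolvable("db"))         # True only where Docker's DNS knows the service
```

If "db" does not resolve, the problem is the network setup, not the database credentials.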
Instead of redefining the networks, use
docker-compose down -v
to stop all containers and remove all cached data.
Then
docker-compose up
to restart from scratch.
I've been searching for several days for the solution to this issue. Here's what I did:
1 - I copied the postgresql.conf.sample file from my postgres container to my project folder. To find it, enter the container with docker exec -it yourcontainer bash and cd into /usr/share/postgresql.
2 - I renamed it to postgresql.conf
3 - I modified these variables in postgresql.conf
listen_addresses = '*'
port = 5432
max_connections = 100
superuser_reserved_connections = 3
unix_socket_directories = '/tmp'
4 - I added it to my Dockerfile
COPY postgresql.conf /tmp/postgresql.conf
5 - settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'test',
        'USER': 'testing',
        'PASSWORD': 'tests',
        'HOST': 'localhost',
        'PORT': 5432,
    }
}
6 - I ran docker-compose up --build again
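To sanity-check the edited file before rebuilding, a naive key=value parser is enough (a sketch only; this is not a full postgresql.conf grammar, just enough to confirm the variables from step 3):

```python
def parse_conf(text):
    """Naive postgresql.conf-style parser: 'key = value' lines,
    '#' comments stripped, single quotes removed from values."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip("'")
    return settings

sample = """
listen_addresses = '*'
port = 5432
max_connections = 100
"""
print(parse_conf(sample))
```

Checking that listen_addresses is '*' (not the default 'localhost') confirms the container will accept outside connections.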
I came across this issue recently. I solved it by adding the postgres environment variables to the db service in my docker-compose.yml file, like so:
version: '3'
services:
  db:
    image: 'postgres'
    ports:
      - '5432'
    environment:
      - POSTGRES_DB=booksdb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  core:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    volumes:
      - .:/code
    depends_on:
      - db
    links:
      - db:db
In my specific case, I had the postgres image without any version tag, so the image got updated and the db (postgres) container wouldn't come up. Pinning a version avoids that.
The following configuration, from the official documentation, works for me:
version: "3.8"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
then run these commands:
docker-compose up -d --build
docker-compose logs
docker exec -it "container_name" python3 manage.py migrate
This works for me. Make sure the path to the manage.py file is correct.