This is the first time I've tried adding CI/CD to a Django project on GitLab. I want to set up automatic testing and, if it succeeds, automatic deployment to the server from the development branch.
The tests themselves mostly work: the dependencies are installed and python manage.py test starts, but there is a problem with the test database. The traceback is below, and I don't quite understand how the interaction with the database happens during the tests.
Creating test database for alias 'default'...
.....
MySQLdb._exceptions.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 21, in <module>
main()
File "manage.py", line 17, in main
...
super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")
In Django's settings.py, the database connection is configured through the following variables from a .env file.
.env
SECRET_KEY=ja-t8ihm#h68rtytii5vw67*o8=o)=tmojpov)9)^$h%9#16v&
DEBUG=True
DB_NAME=db_name
DB_USER=username
DB_PASSWORD=dbpass
DB_HOST=127.0.0.1
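(For reference, the wiring implied above looks roughly like this in settings.py. This is a sketch, since the actual loading code isn't shown in the question; it assumes the .env values end up in os.environ, e.g. via python-dotenv or django-environ. It also shows why the test run needs a reachable MySQL server: python manage.py test connects with these same credentials and creates a throwaway database, named test_<NAME> by default.)

# settings.py (sketch) -- assumes the .env values above are exported into os.environ
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": os.environ["DB_NAME"],
        "USER": os.environ["DB_USER"],
        "PASSWORD": os.environ["DB_PASSWORD"],
        # 127.0.0.1 works on a machine with a local MySQL server, but a GitLab CI
        # job has no MySQL on 127.0.0.1, which is exactly what the traceback shows.
        "HOST": os.environ.get("DB_HOST", "127.0.0.1"),
    }
}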
The deployment part is still unclear to me. I'd appreciate your help setting it up.
gitlab-ci.yml
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - apt update -qy
    - apt install python3 python3-pip virtualenvwrapper -qy
    - virtualenv --python=python3 venv/
    - source venv/bin/activate
    - pip install -r requirements.txt
    - python manage.py test

deploy:
  stage: deploy
  script:
    ...
    ???
  only:
    - develop
UPD
Following Ruddra's recommendation, I added the following lines to the yml file:
services:
  - mysql

variables:
  # Configure mysql service (https://hub.docker.com/_/mysql/)
  MYSQL_DATABASE: test
  MYSQL_ROOT_PASSWORD: mysql

connect:
  image: mysql
  script:
    - echo "SELECT 'OK';" | mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql "$MYSQL_DATABASE"
As a result, the connect job succeeded, but the test job still failed with the same traceback as in the original question.
Actually, you can run MySQL as a service in GitLab. For example:
services:
  - mysql:latest

variables:
  # Configure mysql environment variables (https://hub.docker.com/_/mysql/)
  MYSQL_DATABASE: "db_name"
  MYSQL_ROOT_PASSWORD: "dbpass"
  MYSQL_USER: "username"
  MYSQL_PASSWORD: "dbpass"
Update: In your .env file, update the following settings:
DB_HOST=mysql
Update 2: (Based on this issue on GitLab) You can update the code like this:
variables:
  MYSQL_DATABASE: "db_name"
  MYSQL_ROOT_PASSWORD: "dbpass"
  MYSQL_USER: "username"
  MYSQL_PASSWORD: "dbpass"

test:
  script:
    - apt update -qy
    - mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --host=$MYSQL_HOST --execute="SHOW DATABASES; ALTER USER '$MYSQL_USER'@'%' IDENTIFIED WITH mysql_native_password BY '$MYSQL_PASSWORD'"
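(The ALTER USER ... IDENTIFIED WITH mysql_native_password step is usually there because MySQL 8 defaults to the caching_sha2_password plugin, which older mysqlclient/libmysqlclient builds cannot authenticate with. If you want to confirm the Django job can actually reach the service before running the test suite, a small smoke test could be run first; this is a hypothetical helper, not part of the original pipeline, and assumes mysqlclient (the MySQLdb module) is already in requirements.txt.)

# check_db.py -- hypothetical helper: verify the CI job can reach the MySQL service
import os
import MySQLdb

conn = MySQLdb.connect(
    host=os.environ.get("DB_HOST", "mysql"),        # the GitLab service hostname
    user=os.environ.get("DB_USER", "username"),
    passwd=os.environ.get("DB_PASSWORD", "dbpass"),
    db=os.environ.get("DB_NAME", "db_name"),
)
print("Connected, server version:", conn.get_server_info())
conn.close()

It would be invoked in the test job's script, e.g. python check_db.py, right before python manage.py test.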
I'm trying to build a Django microservice with a Postgres database, and I have a problem I haven't been able to solve for a few days. This question mentions several errors along the way, so you may skip them and only answer the last one about accessing the psql shell (the "port 5432 failed: FATAL: password authentication failed for user "postgres"" error section), or ignore them all and answer the main question of how to set up a docker-compose with Django and Postgres containers.
the docker-compose.yml looks like:
version: "3.9"
services:
# Redis
redis:
image: redis:7.0.4-alpine
container_name: redis
# rabbit
rabbit:
hostname: rabbit
image: "rabbitmq:3.10.7-alpine"
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
ports:
- "15672:15672"
- "5672:5672"
mongodb_container:
image: mongo:5.0.10
ports:
- "27017:27017"
depends_on:
- redis
# Main Database Postgres
main_postgres_ser:
image: postgres:14.4-alpine
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB= postgres # NAME
- POSTGRES_USER= postgres # USER
- POSTGRES_PASSWORD= postgrespass # PASSWORD
container_name: postgres_container
restart: always
ports:
# - 8000:8000 # HTTP port
- 5432:5432 # DB port
networks:
- djangonetwork
depends_on:
- rabbit
# Main Django Application
main_django_ser:
build:
context: . #/main_ms
dockerfile: Dockerfile_main_ms
container_name: main_django
command: "python manage.py runserver 0.0.0.0:8000"
environment:
PYTHONUNBUFFERED: 1
ports:
- 8000:8000
volumes:
- .:/main_ms
networks:
- djangonetwork
depends_on:
- main_postgres_ser
- rabbit
links:
- main_postgres_ser:main_postgres_ser
networks:
djangonetwork:
driver: bridge
volumes:
main_postgres_ser:
driver: local
The Dockerfile for the Django service looks like:
FROM python:3.10.6-buster
ENV PYTHONUNBUFFERED=1
RUN apt-get update -y
RUN apt-get update && \
apt-get -y install sudo
WORKDIR /main_ms
COPY requirements.txt ./main_ms/requirements.txt
RUN pip3 install -r ./main_ms/requirements.txt
and in settings.py in DATABASES I have
DATABASES = {
    'default': {
        # 'default': 'psql://postgres:postgrespass@postgres:5432/postgres',
        # 'default': 'postgres://postgres:postgrespass@postgres:5432/postgres',
        # 'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgrespass',
        # HOST should be the postgres service name
        'HOST': 'main_postgres_ser',
        'PORT': '5432',
    }
}
As far as I know, HOST should be the Postgres service name (here main_postgres_ser), because that is the DNS name the container gets.
password authentication failed for user Error
So I built the images with docker-compose, initialized the Django project, ran docker-compose up and made migrations inside the Django container, but when I run python manage.py migrate I get django.db.utils.OperationalError: FATAL: password authentication failed for user "postgres".
I then tried keeping only the default entry in DATABASES in settings.py ('default': 'postgres://postgres:postgrespass@main_postgres_ser:5432/postgres', following the pattern 'default': 'postgres://postgres:pass@host:port/dbname').
With everything else in DATABASES commented out (except default and ENGINE), the previous error went away. Note that I had to keep 'ENGINE': 'django.db.backends.postgresql' in DATABASES to avoid another error.
Please supply the NAME or OPTIONS['service'] Error
But then I received django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the NAME or OPTIONS['service'] value. Note that I could still open http://localhost:8000/ in the browser and got Django's yellow debug page.
connections on Unix domain socket PGSQL.5432 Error
But when I uncomment 'NAME': 'postgres', in DATABASES, I get the connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? error and lose access to http://localhost:8000/ in the browser again.
no password supplied Error
So I uncommented HOST in DATABASES and got django.db.utils.OperationalError: fe_sendauth: no password supplied
again password authentication failed for user Error
So I uncommented PASSWORD and got django.db.utils.OperationalError: FATAL: password authentication failed for user "root", which is the same as django.db.utils.OperationalError: FATAL: password authentication failed for user "postgres" (when 'USER': 'postgres' is uncommented), so I'm back at the first step!!!
Inside the Django container I tried to create the user with CREATE USER postgres WITH PASSWORD 'postgrespass'; (and postgres=# CREATE USER postgres WITH PASSWORD 'postgrespass';) but got CREATE: not found. I tried the same in the Postgres container and got the same result.
I also tried a last solution by adding local all postgres peer to pg_hba.conf, which still didn't work.
unrecognized service Error
sudo -u root postgresql psql resulted in sudo: postgresql: command not found, and sudo service postgresql start resulted in postgresql: unrecognized service.
psql: error: connection PGSQL.5432 Error
Then I tried to access the psql shell with docker exec -it -u postgres postgres_container psql and received psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "postgres" does not exist.
port 5432 failed: FATAL: password authentication failed for user "postgres" Error
In order to try CREATE USER user WITH PASSWORD 'pass'; I want to access the psql shell with docker-compose run --rm main_postgres_ser psql -h main_postgres_ser -U postgres -d postgres. Things seem fine at first because a password prompt appears, but when I enter postgrespass, which is my password for the postgres user, I get port 5432 failed: FATAL: password authentication failed for user "postgres" ERROR: 2 in the postgres container, and in the terminal where docker-compose is up I get DETAIL: Role "postgres" does not exist..
I also specified user: postgres in the postgres service of docker-compose, like this suggestion, and it didn't change anything.
Questions
So why do I get the error even though I enter the correct password for the postgres user?
And how can I set up a docker-compose for Django and Postgres?
Thanks to @hedayat I realized that the spaces after = for the environment variables in docker-compose were not the cause; the main cause was that, during the intermediate step of creating the Django folders (the project and the apps), data/db got created, and that prevented the authentication from working. By deleting data/db I was able to migrate the database.
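(A quick way to confirm the credentials after removing the stale ./data/db volume and recreating the containers is to open a connection directly with psycopg2 from inside the Django container. This is a sketch, not part of the original post; the values are the ones from the question's compose file and settings.py.)

# verify_pg.py -- sketch: check that Postgres accepts the configured credentials
import psycopg2

conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="postgrespass",
    host="main_postgres_ser",   # the compose service name used as HOST in settings.py
    port=5432,
)
print(conn.get_dsn_parameters())
conn.close()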
My GitLab CI pipeline keeps failing and I seem to be stuck. I'm still new to CI, so I don't know what I'm doing wrong. Any help will be appreciated. Below is my .gitlab-ci.yml file:
image: python:latest

services:
  - postgres:latest

variables:
  POSTGRES_DB: thorprojectdb
  POSTGRES_PASSWORD: $''
  POSTGRES_USER: $majesty
  POSTGRES_HOST_AUTH_METHOD: trust

# This folder is cached between builds
# http://docs.gitlab.com/ee/ci/yaml/README.html#cache
cache:
  paths:
    - ~/.cache/pip/

before_script:
  - python -V

connect:
  image: postgres
  script:
    # official way to provide password to psql: http://www.postgresql.org/docs/9.3/static/libpq-envars.html
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 'OK' AS status;"

build:
  stage: build
  script:
    - pip install -r requirements.txt
    - python manage.py migrate
  only:
    - EC-30
In my settings.py file, I have the following settings
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'projectdb',
        'HOST': 'postgres',
        'PASSWORD': ''
    }
}
But when I push to GitLab, the build job keeps failing. The pip install -r requirements.txt step runs perfectly, but once it gets to python manage.py migrate, it fails. Below is the error I get:
django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
Looking at your .gitlab-ci.yml file, you declared the database (POSTGRES_DB), but you are missing the information related to the credentials (DB_USER, DB_PASS), as described at the link below:
gitlab-config-postgres
Remember that it is good practice to declare these as CI/CD variables in your repository settings. For more information:
ci-cd-gitlab-variables
Repository with a config example:
repository-example-gitlab-postgres
services:
  - postgres

variables:
  # Configure postgres service (https://hub.docker.com/_/postgres/)
  POSTGRES_DB: custom_db
  POSTGRES_USER: custom_user
  POSTGRES_PASSWORD: custom_pass

connect:
  image: postgres
  script:
    # official way to provide password to psql: http://www.postgresql.org/docs/9.3/static/libpq-envars.html
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 'OK' AS status;"
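(For the migrate step to use these values, the Django settings have to reference the same variables. Note that in the question, settings.py hard-codes NAME as 'projectdb' while the pipeline creates thorprojectdb, and no USER is set at all. The fragment below is a sketch, not part of the linked example; it assumes the variables: entries are available in the job's environment, which GitLab exports them as.)

# settings.py fragment (sketch) -- read the same POSTGRES_* variables the CI service uses
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_DB', 'custom_db'),
        'USER': os.environ.get('POSTGRES_USER', 'custom_user'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', 'custom_pass'),
        'HOST': os.environ.get('POSTGRES_HOST', 'postgres'),  # the service hostname in CI
        'PORT': '5432',
    }
}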
So I've managed to build my Docker image locally using docker-compose build and I've pushed the image to my Docker Hub repository.
Now I'm trying to get it working on DigitalOcean so I can host it. I've pulled the correct version of the image and I am trying to run it with the following command:
root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)
However I soon run into these two errors:
...
File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
    connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "postgres" to address: Name or service not known
...
File "/usr/local/lib/python3.8/dist-packages/django/db/backends/base/base.py", line 197, in connect
    self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.8/dist-packages/django/utils/asyncio.py", line 26, in inner
    return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
    connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
This may be due to how I have divided my application (using docker-compose.yml) into two services; I have only pushed the image of the app since my previous post. Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
Here is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
and here is a snippet of my settings.py file:
...
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'admin',
        'PASSWORD': '[THE PASSWORD]',
        'HOST': 'db',
        'PORT': 5432,
    }
}
...
How can I fix this issue? I've looked around but can't see what I'm doing wrong.
From what I saw online, I added the following to my project, but it doesn't work either:
Here is my base.py (new file):
import psycopg2
conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")
Additions to Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN pip3 install psycopg2-binary
COPY base.py base.py
RUN python3 base.py
...
Yet the build fails due to this error:
Traceback (most recent call last):
  File "base.py", line 3, in <module>
    conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
    TCP/IP connections on port 5432?
ERROR: Service 'app' failed to build: The command '/bin/sh -c python3 base.py' returned a non-zero code: 1
me@My-MacBook-Pro-5 mellon %
I'm unsure as to what to try next but I feel like I am making a simple mistake. Why is the connection being refused at the moment?
When you launch a docker-compose.yml file, Compose will automatically create a Docker network for you and attach containers to it. Absent any specific networks: declarations in the file, Compose will name the network default, and then most things Compose creates are prefixed with the current directory name. This setup is described in more detail in Networking in Compose.
docker run doesn't look at the Compose settings at all, so anything you manually do with docker run needs to replicate the settings docker-compose.yml has or would automatically generate. To make a docker run container connect to a Compose network, you need to do something like:
docker network ls   # looking for somename_default
docker run \
  --rm -it -p 8000:8000/tcp \
  --net somename_default \   # <-- add this option
  mycommand/myapp:1.1
(The specific error message could not translate host name "postgres" to address hints at a Docker networking setup issue: the database client container isn't on the same network as the database server and so the hostname resolution fails. Contrast with a "connection refused" type error, which would suggest one container is finding the other but either the database isn't running yet or the port number is wrong.)
In your docker build example, code in a Dockerfile can never connect to a database. At a mechanical level, there's no way to attach it to a Docker network as we did with docker run. At a conceptual level, building an image produces a reusable artifact; if I docker push the image to a registry and docker run it on a different host, that won't have the database setup, or similarly if I delete and recreate the database container locally.
I am trying to run it with the following command: root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)
Docker Compose allows you to create a composition of several containers.
Running the Django container alone with docker run defeats its purpose.
What you want is running :
docker-compose build
docker-compose up
or in a single command :
docker-compose up --build
Why is the connection being refused at the moment?
Maybe because it is not done at the right moment.
I am not a Django specialist, but your second try looks OK, although your django service is missing the dockerfile attribute in the compose declaration (a typing error, I assume).
For your first try, that error:
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
means that Django uses "postgres" as the host of the database.
But according to your compose file, the database can be reached with a host named "db". These don't match.
Regarding your second try that defines the database properties, it looks fine and it also matches what the official doc states:
In this section, you set up the database connection for Django.
In your project directory, edit the composeexample/settings.py file.
Replace the DATABASES = ... with the following:
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}

These settings are determined by the postgres Docker image specified in docker-compose.yml.
For your second try, that error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
TCP/IP connections on port 5432?
may mean that the Postgres listener is not ready yet when Django tries to initiate the connection.
The depends_on: attribute of your compose file defines the order in which containers start, but it doesn't wait until the application behind a container is "ready".
Here are two ways to address that kind of scenario:
either add a sleep of some seconds before starting your django service,
or configure your django service in the compose file to restart on failure.
For the first way, configure your django service with a CMD such as (not tried):
app:
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - db
  command: /bin/bash -c "sleep 30; python3 manage.py collectstatic && python3 manage.py makemigrations && python3 manage.py migrate && gunicorn mellon.wsgi --bind 0.0.0.0:8000"
For the second way, try something like this:
app:
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - db
  restart: on-failure
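A third variant (not from the original answer) is a small wait-for-db script run before the migrations, so the delay is only as long as actually needed. A minimal sketch, assuming psycopg2 is installed in the app image; wait_for_db.py and the environment-variable defaults are hypothetical, not taken from the compose file above:

# wait_for_db.py -- hypothetical helper: retry until Postgres accepts connections
import os
import time

import psycopg2

for attempt in range(30):
    try:
        psycopg2.connect(
            dbname=os.environ.get("POSTGRES_DB", "postgres"),
            user=os.environ.get("POSTGRES_USER", "postgres"),
            password=os.environ.get("POSTGRES_PASSWORD", ""),
            host=os.environ.get("POSTGRES_HOST", "db"),   # the compose service name
            port=int(os.environ.get("POSTGRES_PORT", "5432")),
        ).close()
        print("database is ready")
        break
    except psycopg2.OperationalError:
        print("database not ready yet, retrying...")
        time.sleep(1)
else:
    raise SystemExit("database never became available")

It would be chained into the container command, e.g. python3 wait_for_db.py && python3 manage.py migrate && ..., instead of a fixed sleep.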
I am currently working on a Django project which is supposed to send messages to a mobile app via websockets. I used Docker to put the Django project online. Now I want to send scheduled messages for the first time, and for this I use APScheduler or django-apscheduler. I try to save the jobs in my Redis container, but for some reason the connection is refused. Am I doing something fundamentally wrong, or is it failing somewhere else?
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
redis:
image: redis
command: redis-server
ports:
- '6379:6379'
- '6380:6380'
web:
build: .\experiencesampling
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:\code
ports:
- "8000:8000"
# worker_channels:
#
# build: .\experiencesampling
# command: python manage.py runworker channels
# volumes:
# - .:\code
# links:
# - redis
channels:
build: .\experiencesampling
command: daphne -p 8001 experiencesampling.asgi:application
volumes:
- .:\code
ports:
- "8001:8001"
links:
- redis
jobs.py (trying to connect to Redis); I have already tried 0.0.0.0, localhost and redis://redis for the "host":
# imports assumed by the snippet below (not shown in the original excerpt)
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor
from apscheduler.jobstores.redis import RedisJobStore
from apscheduler.schedulers.background import BackgroundScheduler
from django_apscheduler.jobstores import register_events

jobstores = {
    'default': RedisJobStore(jobs_key='dispatched_trips_jobs', run_times_key='dispatched_trips_running', host='redis', port=6380)
}
executors = {
    'default': ThreadPoolExecutor(20),
    'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 3
}
# jobStore.remove_all_jobs()
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors, job_defaults=job_defaults)
register_events(scheduler)
scheduler.start()
print("Scheduler started!")
Error (appears multiple times)
web_1 |
web_1 | Scheduler started!
web_1 | Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | System check identified no issues (0 silenced).
web_1 | July 11, 2020 - 19:00:30
channels_1 | 2020-07-11 19:00:29,866 WARNING Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | Django version 3.0.8, using settings 'experiencesampling.settings'
web_1 | Starting ASGI/Channels version 2.3.1 development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
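(One thing worth noting: the stock redis image listens on 6379, and publishing 6380:6380 in ports: does not make the server listen on 6380, so a connection attempt to redis:6380 from another container would be refused. To narrow it down, a quick reachability check can be run from inside the web or channels container; this is a sketch, not part of the original post, and assumes the redis package, which RedisJobStore itself requires, is installed.)

# redis_check.py -- sketch: see which port the redis service actually answers on
import redis

for port in (6379, 6380):
    try:
        redis.Redis(host="redis", port=port, socket_connect_timeout=2).ping()
        print(f"redis reachable on port {port}")
    except redis.exceptions.RedisError as exc:
        print(f"port {port}: {exc}")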
I'm trying to create a Django app in a Docker container. The app would use a Postgres db with the PostGIS extension, which I have in another container. I'm trying to solve this using docker-compose but cannot get it working.
I can get the app working outside a container with the database containerized just fine. I can also get the app working in a container using a SQLite db (a file included in the image, without external container dependencies). But whatever I do, it can't find the Postgres database.
My docker-compose file:
version: '3.7'

services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "${POSTGRES_PORT}:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}

volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
My Dockerfile (of the app):
# Pull base image
FROM python:3.7
LABEL maintainer="yb.leeuwen#portofrotterdam.com"
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
# RUN pip install pipenv
RUN pip install pipenv
RUN mkdir /code/
COPY . /code
WORKDIR /code/
RUN pipenv install --system
# RUN pipenv install pygdal
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal postgresql-client
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
# set work directory
WORKDIR /code/app
# RUN python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# RUN cd ${WORKDIR}
# If we want to run docker by itself we need to use below
# but if we want to run from docker-compose we'll set it there
EXPOSE 8000
# CMD /wait && python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My .env file:
# POSTGRES
POSTGRES_PORT=25432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=localhost
# GEOSERVER
# DJANGO
APP_PORT=8000
And finally, in the settings.py of the Django app:
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DBNAME'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASS'),
        'HOST': os.getenv("POSTGRES_HOST", "localhost"),
        'PORT': os.getenv('POSTGRES_PORT')
    }
}
I've tried quite a lot of things (as you can see in the comments). I realized that docker-compose doesn't seem to wait until Postgres is fully up and accepting requests, so I tried to build in a wait step (as suggested on the wait script's website). I first had the migrations and the server start inside the Dockerfile (migrations in the build process and runserver as the startup command), but that requires Postgres, and since nothing waited for it, it didn't work. I finally moved it all out to the docker-compose.yml file but still can't get it working.
The error I get:
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 25432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 25432?
Does anybody have an idea why this isn't working?
I see that in your settings.py of the django app, you are connecting to Postgres via
'HOST': os.getenv("POSTGRES_HOST", "localhost"),
While in .env you are setting the value of POSTGRES_HOST to localhost. This means that the web container is trying to reach the Postgres server postgis at localhost, which should not be the case.
In order to solve this problem, simply update your .env file to be like this:
POSTGRES_PORT=5432
...
POSTGRES_HOST=postgis
...
The reason is that in your case, the docker-compose brings up 2 containers: postgis and web inside the same Docker network and they can reach each other via their DNS name i.e. postgis and web respectively.
Regarding the port, the web container can reach postgis at port 5432 but not 25432, while your host machine can reach the database at port 25432 but not 5432.
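(The two vantage points described above can be illustrated directly with psycopg2; this is a sketch, not from the original answer, and the credentials are the placeholders from the question's .env file.)

# Sketch: which host/port combination works depends on where the code runs.
import psycopg2

def try_connect(host, port):
    # Attempt a connection and report the outcome; credentials come from the .env file.
    try:
        psycopg2.connect(
            dbname="db", user="username", password="pass", host=host, port=port
        ).close()
        print(f"{host}:{port} -> OK")
    except psycopg2.OperationalError as exc:
        print(f"{host}:{port} -> {exc}")

try_connect("postgis", 5432)      # succeeds from inside the web container
try_connect("localhost", 25432)   # succeeds from the host machine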
You cannot use localhost for the Docker containers; it will point to the container itself, not to the host of the containers. Instead, switch to using the service name.
To fix the issue, change your .env to:
# POSTGRES
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=postgis
# DJANGO
APP_PORT=8000
and your compose file to:
version: '3.7'

services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: postgis:${POSTGRES_PORT}

volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
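One further detail worth double-checking, beyond the host fix above (this is an observation about the question's own code, not part of the original answer): settings.py reads POSTGRES_DBNAME and POSTGRES_PASS, while the .env file defines POSTGRES_DB and POSTGRES_PASSWORD, so NAME and PASSWORD would come through as None even once the host is correct. A sketch with the lookups aligned to the .env keys:

# settings.py fragment (sketch) -- lookups aligned with the keys the .env file defines
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DB'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASSWORD'),
        'HOST': os.getenv('POSTGRES_HOST', 'postgis'),
        'PORT': os.getenv('POSTGRES_PORT', '5432'),
    }
}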