docker-compose Django app cannot find PostgreSQL DB

I'm trying to create a Django app in a Docker container. The app uses a Postgres database with the PostGIS extension, which runs in another container. I'm trying to wire this up with docker-compose but can't get it working.
The app works fine when it runs outside a container against the containerized database, and it also works inside a container when it uses a SQLite database (a file inside the container, so no external container dependency). But whatever I do, the containerized app can't find the Postgres database.
My docker-compose file:
version: '3.7'

services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "${POSTGRES_PORT}:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}

volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
My Dockerfile (of the app):
# Pull base image
FROM python:3.7
LABEL maintainer="yb.leeuwen#portofrotterdam.com"
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
# RUN pip install pipenv
RUN pip install pipenv
RUN mkdir /code/
COPY . /code
WORKDIR /code/
RUN pipenv install --system
# RUN pipenv install pygdal
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal postgresql-client
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
# set work directory
WORKDIR /code/app
# RUN python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# RUN cd ${WORKDIR}
# If we want to run docker by itself we need to use below
# but if we want to run from docker-compose we'll set it there
EXPOSE 8000
# CMD /wait && python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My .env file:
# POSTGRES
POSTGRES_PORT=25432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=localhost
# GEOSERVER
# DJANGO
APP_PORT=8000
And finally the DATABASES setting in my settings.py of the Django app:
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DBNAME'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASS'),
        'HOST': os.getenv("POSTGRES_HOST", "localhost"),
        'PORT': os.getenv('POSTGRES_PORT')
    }
}
I've tried quite a few things (as you can see in the comments). I realized that docker-compose doesn't wait until Postgres is fully up and accepting connections, so I tried to build in a wait step (using the wait script suggested on its website). I originally ran the migrations and the server from the Dockerfile (migrations during the build and runserver as the startup command), but both need Postgres, and since nothing waited for it, that didn't work. I finally moved it all into the docker-compose.yml file, but I still can't get it working.
The error I get:
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 25432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 25432?
Does anybody have an idea why this isn't working?

I see that in your settings.py of the django app, you are connecting to Postgres via
'HOST': os.getenv("POSTGRES_HOST", "localhost"),
while in .env you set the value of POSTGRES_HOST to localhost. This means the web container tries to reach the Postgres server (the postgis service) at localhost, i.e. inside itself, which is not where the database lives.
In order to solve this problem, simply update your .env file to be like this:
POSTGRES_PORT=5432
...
POSTGRES_HOST=postgis
...
The reason is that docker-compose brings up the two containers, postgis and web, on the same Docker network, so they can reach each other via their service names (postgis and web respectively).
As for the port: the web container reaches postgis on the container port 5432, not 25432, while your host machine reaches the database on the published port 25432, not 5432.
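If you want to double-check this from inside the web container before touching Django, here is a small diagnostic sketch (the file name and the docker-compose run invocation are just suggestions) that reports which host/port combination is actually reachable:
# db_reach_check.py -- hypothetical helper; run it inside the web container,
# e.g. `docker-compose run web python db_reach_check.py`, to see which
# host/port combination is reachable from there.
import os
import socket

candidates = [
    # "localhost" inside the web container is the web container itself,
    # so this one is expected to fail:
    ("localhost", int(os.getenv("POSTGRES_PORT", "25432"))),
    # the postgis service name resolves via Docker's internal DNS, and the
    # database listens on its container port 5432:
    ("postgis", 5432),
]

for host, port in candidates:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} -> reachable")
    except OSError as exc:
        print(f"{host}:{port} -> not reachable ({exc})")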

You cannot use localhost from inside a Docker container; it points to the container itself, not to the host the containers run on. Use the service name instead.
To fix the issue, change your .env to:
# POSTGRES
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=postgis
# DJANGO
APP_PORT=8000
and your compose file to:
version: '3.7'

services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: postgis:${POSTGRES_PORT}

volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
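For completeness: the settings.py in the question reads POSTGRES_DBNAME and POSTGRES_PASS, which are not defined in the .env shown, so unless they are set elsewhere the database name and password will come through as None. A sketch of a settings block that lines up with the corrected .env (variable names taken from the files above) would be:
# settings.py (sketch) -- uses the same variable names the corrected .env defines.
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DB'),            # the .env defines POSTGRES_DB, not POSTGRES_DBNAME
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASSWORD'),  # the .env defines POSTGRES_PASSWORD, not POSTGRES_PASS
        'HOST': os.getenv('POSTGRES_HOST', 'postgis'),
        'PORT': os.getenv('POSTGRES_PORT', '5432'),
    }
}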

Related

Docker container not finding manage.py using DRF

I have been trying to containerise a project of mine that uses PostgreSQL, but for some reason it cannot complete the commands. When I follow the web container in Docker Desktop, I can see it performing the migrations, but at some point it stops and gives me the following error:
2023-01-17 19:51:29 Apply all migrations: admin, auth, company, contenttypes, oauth2_provider, product, sessions, social_django, users, warehouse
2023-01-17 19:51:29 Running migrations:
2023-01-17 19:51:29 No migrations to apply.
2023-01-17 19:51:29 sh: manage.py: not found
For some reason it does not see manage.py.
When I run docker-compose run web sh and check with ls whether manage.py exists in the directory, I can see that it is there.
What could I be doing wrong here?
The Dockerfile:
# pull official base image
FROM python:3.9-alpine
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install python dependencies
RUN apk add -u zlib-dev jpeg-dev libffi-dev gcc musl-dev
RUN python3 -m pip install --upgrade pip
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# copy project
COPY . .
The docker-compose.yml:
version: '3.9'

services:
  nginx:
    container_name: core_web
    restart: on-failure
    image: nginx:stable
    volumes:
      - ./nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf
      - static_volume:/app/static
    ports:
      - "80:80"
    depends_on:
      - web
  web:
    container_name: web
    build: .
    env_file: .env
    ports:
      - "5000:5000"
    command: >
      sh -c " python manage.py makemigrations && python manage.py migrate &&
      manage.py runserver 0.0.0.0:5000"
    volumes:
      - .:/app
      - static_volume:/app/static
    depends_on:
      - db
  db:
    container_name: core_db
    image: postgres:15
    env_file: .env
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  static_volume:
  postgres_data:
I changed the web service in my docker-compose.yml file:
web:
  tty: true
  container_name: web
  build: .
  restart: always
  env_file: .env
  ports:
    - "5000:5000"
  command: >
    sh -c " python3 /app/manage.py makemigrations && python3 /app/manage.py migrate && python3 /app/manage.py runserver 0.0.0.0:5000"
  volumes:
    - .:/app
    - static_volume:/app/static
  depends_on:
    - db
Basically, I added tty: true and changed command to:
sh -c " python3 /app/manage.py makemigrations && python3 /app/manage.py migrate && python3 /app/manage.py runserver 0.0.0.0:5000"
What I believe was happening: I had not specified the folder where manage.py lives, which (via the volumes entry) is mounted at /app. So I had to spell out its full path in the command; feel free to correct me on this.
This made it work and allowed me to access the backend and its Postgres database.
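Worth noting as well: the original command ran manage.py runserver ... without python in front of it, so sh searched the PATH for manage.py and reported sh: manage.py: not found; writing python3 /app/manage.py removes that failure mode too. If you want to inspect what the container actually sees, a tiny diagnostic like this (hypothetical file name) can help:
# where_am_i.py -- hypothetical diagnostic; run it with
# `docker-compose run web python where_am_i.py` to see what the working
# directory and the bind-mounted project look like from inside the container.
import os

print("current working directory:", os.getcwd())
print("manage.py in cwd?        ", os.path.exists("manage.py"))
print("manage.py at /app?       ", os.path.exists("/app/manage.py"))
print("first entries in /app:   ", sorted(os.listdir("/app"))[:10])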

How to start a Dockerized Django server with custom ip address?

When we want to run a bare Django app we use the command python manage.py runserver <ip>:<port> to start a web server on the desired IP address. I tried to do a similar thing with Docker and Django inside the docker-compose file:
version: '3.1'

volumes:
  init-db:
  data-db:

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: <name>
      MONGO_INITDB_ROOT_PASSWORD: <password>
    ports:
      - 27020:27020
    command: mongod --quiet --logpath /dev/null
  splash:
    image: scrapinghub/splash
    ports:
      - "8050:8050"
    restart: always
  web:
    build: .
    restart: unless-stopped
    command: >
      bash -c "pip install -r requirements.txt && python manage.py makemigrations && python manage.py migrate && python manage.py runserver <some_ip>:<some_port>"
    tty: true
    volumes:
      - .:/usr/src/app
    ports:
      - 8000:8000
    depends_on:
      - mongo
Now after running it, it gives me web_1 | Error: That IP address can't be assigned to.
Note: I got the IP address from Windows' ipconfig and selected the IPv4 address.
The process inside the Docker container must listen on the special "all interfaces" address 0.0.0.0. The container itself has its own private IP space, and can't bind to arbitrary host interfaces.
Conversely, the ports: declaration in your docker-compose.yml can name a specific IP address to listen on, and this must be one of the host's interfaces.
So your docker-compose.yml file should look roughly like:
version: '3.8'

services:
  web:
    build: .
    restart: unless-stopped
    ports: ['192.168.1.2:8000:8000']   # <-- host IP address here
    depends_on: [mongo]
    # shouldn't need tty:, volumes:, or command:
  mongo:                               # as in the question, except
    ports: ['27020:27017']             # <-- second port needs to be the standard MongoDB port
And the corresponding Dockerfile should look something like:
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
ENTRYPOINT ["/app/entrypoint.sh"]
# entrypoint.sh contains:
# #!/bin/sh
# python manage.py migrate
# exec "$@"
EXPOSE 8000
# Use 0.0.0.0 as the listen address       vvvvvvv
CMD ["./manage.py", "runserver", "0.0.0.0:8000"]
I ended up using the same code I had added; restarting my PC was the solution I did not expect.

Unable to communicate with postgres using docker and django

I can't get my app to communicate with my Postgres database using Docker and Django. Here is my docker-compose.yml:
version: '3'

services:
  web:
    container_name: web
    build:
      context: ./web
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./web/:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    env_file:
      - database.env
    stdin_open: true
    depends_on:
      - database
  database:
    container_name: database
    image: postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
    ports:
      - 5432:5432

volumes:
  database-data:
Here is my database.env:
# database.env
POSTGRES_USERNAME=admin
POSTGRES_PASSWORD=pass
POSTGRES_DBNAME=db
POSTGRES_HOST=database
POSTGRES_PORT=5432
PGUSER=admin
PGPASSWORD=pass
PGDATABASE=db
PGHOST=database
PGPORT=5432
DATABASE=db
SQL_HOST=database
SQL_PORT=5432
And here is my Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
# copy project
ADD . .
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# run entrypoint.sh
ENTRYPOINT ["sh", "/usr/src/web/entrypoint.sh"]
And here is my entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "db" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 10
    done
    echo "PostgreSQL started"
fi

exec "$@"
I build everything with: docker-compose up -d --build
Then I run: docker-compose exec web npm start --prefix ./front/
I can access the frontend at http://localhost:3000
But when I run docker logs database I get this:
2021-01-18 06:31:49.207 UTC [1] LOG: database system is ready to accept connections
2021-01-18 06:31:51.640 UTC [32] FATAL: password authentication failed for user "admin"
2021-01-18 06:31:51.640 UTC [32] DETAIL: Role "admin" does not exist.
Connection matched pg_hba.conf line 99: "host all all all md5"
Here is the status:
37ee3e314d52 web "sh /usr/src/web/ent…" About a minute ago Up About a minute 0.0.0.0:3000->3000/tcp, 0.0.0.0:8000->8000/tcp, 5432/tcp web
65dfeae57a94 postgres "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:5432->5432/tcp database
Could you help me?
Thank you very much!
It seems the Postgres user you are connecting with doesn't exist. You can add some environment variables to the database service in docker-compose to create it (you probably need to create the database too), or you can write an init script that creates them the first time the container starts.
version: '3'

services:
  web:
    container_name: web
    build:
      context: ./web
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./web/:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    env_file:
      - database.env
    stdin_open: true
    depends_on:
      - database
  database:
    container_name: database
    image: postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db

volumes:
  database-data:
For the environment variables the postgres image understands, you can check the image's documentation.
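Also note that the postgres image only runs this initialisation (creating the role and database from those variables) when the data volume is empty, so you may need to remove the existing database-data volume once. To verify the credentials from the web container afterwards, a quick sketch like this could help (it assumes psycopg2 is already installed in the web image and reuses the names from database.env):
# check_db.py -- sketch only; run it with
# `docker-compose exec web python check_db.py`.
import os
import psycopg2

conn = psycopg2.connect(
    dbname=os.getenv("POSTGRES_DBNAME", "db"),
    user=os.getenv("POSTGRES_USERNAME", "admin"),
    password=os.getenv("POSTGRES_PASSWORD", "pass"),
    host=os.getenv("POSTGRES_HOST", "database"),
    port=os.getenv("POSTGRES_PORT", "5432"),
)
# print the user the server actually accepted the connection as
print("connected as:", conn.get_dsn_parameters().get("user"))
conn.close()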

Django server connection error with Docker Toolbox in OS X Yosemite

I run the following YAML file with the docker-compose up command. I'm developing a REST API with Django (Udemy's "Build a Backend REST API with Python & Django - Advanced").
System: OS X 10.10.5
Docker version: 18.03.0-ce
docker-compose version: 1.20.1
Q: I am unable to access localhost with 127.0.0.1
docker-compose.yml
version: "2.2"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py wait_for_db &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=supersecretpassword
depends_on:
- db
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=supersecretpassword
Dockerfile
FROM python:3.7-alpine
# python unbuffered environment variable
ENV PYTHONUNBUFFERED 1
# copy requirements.txt from the directory adjacent to
# the Dockerfile into dockerimage/requirements.txt
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
#create an empty folder named app in docker image
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
Some suggest adding CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"] to the Dockerfile; that didn't work in this case. In the Django settings.py, ALLOWED_HOSTS = ['192.168.99.100'].
Console output (relevant part):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/postgresql/base.py",
line 178, in get_new_connection connection = Database.connect(conn_params)
File "/usr/local/lib/python3.7/site-packages/psycopg2/init.py",
line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused **
Is the server running on host "db" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
I'm super new to this stuff, appreciate your help and suggestions.
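For reference, the wait_for_db management command used in that course is usually something along the lines of the sketch below (the module path is assumed; put it under one of your apps' management/commands/ directories):
# app/core/management/commands/wait_for_db.py -- sketch of a wait_for_db command
import time

from django.core.management.base import BaseCommand
from django.db import connections
from django.db.utils import OperationalError


class Command(BaseCommand):
    """Pause until the default database accepts connections."""

    def handle(self, *args, **options):
        self.stdout.write("Waiting for database...")
        db_up = False
        while not db_up:
            try:
                # cursor() forces an actual connection attempt
                connections["default"].cursor()
                db_up = True
            except OperationalError:
                self.stdout.write("Database unavailable, waiting 1 second...")
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS("Database available!"))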

Can't create database with default postgresql docker image

The official site says that if you want to set up a PostgreSQL database you just provide the variables POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB. I put them into an .env file and added env_file: ./.env to the docker-compose file, but then Django says: django.db.utils.OperationalError: FATAL: database "languages" does not exist. That suggests I should write an .sql script with the commands. Questions:
Is writing an .sql script the only way to make it work?
How can I write it, and where should I put it, if I don't want to explicitly include sensitive info like the password? I want to use the .env file, since the .sql script will be stored in a public repo.
My .env file
DEBUG=True
SECRET_KEY=...
POSTGRESQL_DATABASE=languages
POSTGRESQL_USERNAME=admin
POSTGRESQL_PASSWORD=pass
POSTGRESQL_ADDRESS=postgres
POSTGRES_DB=languages
My docker-compose file:
version: '3'

services:
  web:
    build: ./web
    env_file: ./web/microservice/.env
    volumes:
      - ./web:/code/
    depends_on:
      - postgres
    command: python manage.py runserver 0.0.0.0:8000
  nginx:
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./web/static:/code/static
    depends_on:
      - web
  postgres:
    image: postgres:latest
    ports:
      - "5432:5432"
    env_file: ./web/microservice/.env
    volumes:
      - /var/lib/postgresql/data/
My Dockerfile:
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD ./requirements.txt /code/
RUN pip install --no-cache-dir -r requirements.txt
ADD . /code/
EXPOSE 8000
CMD ["/usr/local/bin/gunicorn", "translation_microservice.wsgi", "-w", "2", "-b", ":8000", "--access-logfile", "-", "--capture-output"]
It seems you are using incorrect variable names. Change POSTGRESQL_USERNAME to POSTGRES_USER, and so on, in your .env file.
You may also have to clear the Docker volume containing your database if it was already initialised with a different environment. The default is to create a user and a database both named postgres.
Do not add the .env file to a public repo.
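If you want to check which of your variable names the image will actually act on, here is a tiny sketch (the .env path is taken from the compose file above):
# check_env_names.py -- tiny sketch; reads the .env referenced in the compose
# file and flags POSTGRES* variables the official postgres image ignores
# (it only acts on POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB, plus a
# few documented extras such as POSTGRES_INITDB_ARGS).
RECOGNISED = {"POSTGRES_USER", "POSTGRES_PASSWORD", "POSTGRES_DB"}

with open("web/microservice/.env") as fh:
    names = {
        line.split("=", 1)[0].strip()
        for line in fh
        if "=" in line and not line.lstrip().startswith("#")
    }

print("recognised by the postgres image:", sorted(names & RECOGNISED))
print("ignored by the postgres image:", sorted(
    n for n in names - RECOGNISED if n.startswith("POSTGRES")
))
Anything it lists as ignored (such as POSTGRESQL_DATABASE) has no effect on the role or database the image creates.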