Django server connection error with Docker Toolbox in OS X Yosemite - django

I run the following docker-compose.yml with the docker-compose up command. I'm developing a REST API with Django (following the Udemy course "Build a Backend REST API with Python & Django - Advanced").
System: OS X 10.10.5
Docker version: 18.03.0-ce
docker-compose version: 1.20.1
Q: I am unable to access localhost with 127.0.0.1
docker-compose.yml
version: "2.2"

services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=app
      - DB_USER=postgres
      - DB_PASS=supersecretpassword
    depends_on:
      - db

  db:
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=supersecretpassword
Dockerfile
FROM python:3.7-alpine

# python unbuffered environment variable
ENV PYTHONUNBUFFERED 1

# copy requirements.txt from the directory next to the Dockerfile
# into the image as /requirements.txt
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
    gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps

# create an empty folder named /app in the image
RUN mkdir /app
WORKDIR /app
COPY ./app /app

RUN adduser -D user
USER user
Some suggest adding CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"] to the Dockerfile; that didn't work in this case. In Django settings.py, ALLOWED_HOSTS = ['192.168.99.100'].
Console output (excerpt):
  File "/usr/local/lib/python3.7/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" (172.19.0.2) and accepting
    TCP/IP connections on port 5432?
I'm super new to this stuff, appreciate your help and suggestions.
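The command line in the compose file calls python manage.py wait_for_db, a custom management command from the course that polls the database until it accepts connections. A framework-agnostic sketch of the same idea, using only the standard library (the helper name wait_for_port is my own; the course's version retries django.db.connections['default'] instead of a raw TCP connect):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=1.0):
    """Poll host:port until a TCP connection succeeds; return True on
    success, False once the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # create_connection raises OSError while the service is down
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                return False
            time.sleep(interval)
```

If this loop never succeeds against ("db", 5432), the problem is in the db container or the Compose network, not in Django.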

Related

Unable to open unix socket in redis - Permission denied while firing up docker container

I am trying to fire up a separate redis container which will work as a broker for celery. Can someone help me understand why the docker user is not able to open the UNIX socket? I have even tried making the user root, but it doesn't seem to work. Please find below the Dockerfile, docker-compose file and redis.conf file.
Dockerfile:
FROM centos/python-36-centos7
USER root
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PATH=/home/django/.local/bin:$PATH
COPY ./oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm /home/django
COPY ./oracle.conf /home/django
RUN yum install -y dnf
RUN dnf install -y libaio libaio-devel
RUN rpm -i /home/django/oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm && \
    cp /home/django/oracle.conf /etc/ld.so.conf.d/ && \
    ldconfig && \
    ldconfig -p | grep client64
COPY ./requirements /home/django/requirements
WORKDIR /home/django
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r ./requirements/development.txt
COPY . .
RUN chmod 777 /home/django
EXPOSE 8000
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
Docker-compose file:
version: '3.8'

services:
  app:
    build: .
    volumes:
      - .:/django
      - cache:/var/run/redis
    image: app_name:django
    container_name: app_name
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis

  db:
    image: postgres:10.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=app_name
      - POSTGRES_PASSWORD=app_password
      - POSTGRES_DB=app_db
    labels:
      description: "Postgres Database"
    container_name: app_name-db-1

  redis:
    image: redis:alpine
    command: redis-server /etc/redis/redis.conf
    restart: unless-stopped
    ports:
      - 6379:6379
    volumes:
      - ./redis/data:/var/lib/redis
      - ./redis/redis-server.log:/var/log/redis/redis-server.log
      - cache:/var/run/redis/
      - ./redis/redis.conf:/etc/redis/redis.conf
    container_name: redis
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30

volumes:
  postgres_data:
  cache:
  static-volume:
docker-entrypoint.sh:
# run migration first
python manage.py migrate
python manage.py preload_sites -uvasas -l
python manage.py preload_endpoints -uvasas -l
python manage.py collectstatic --noinput
#start celery
export C_FORCE_ROOT='true'
celery multi start 1 -A realm -l INFO -c4
# start the server
python manage.py runserver 0:8000
redis.conf
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
logfile /var/log/redis/redis-server.log
I am new to docker so apologies if I have not done something very obvious or if I have not followed some of the best practices.
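A "Permission denied" on a unix socket can mean either that the socket file is missing (redis never created it) or that the connecting user lacks permission on the file or its directory. A small standard-library probe, assuming the unixsocket path from redis.conf, can tell these cases apart from inside the container:

```python
import socket

def probe_unix_socket(path):
    """Try to connect to a unix socket and report why it fails:
    'ok', 'missing' (no socket file), 'denied' (EACCES, the error
    in the question), or 'error' for anything else (e.g. a stale
    socket file giving ECONNREFUSED)."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "ok"
    except FileNotFoundError:
        return "missing"
    except PermissionError:
        return "denied"
    except OSError:
        return "error"
    finally:
        s.close()
```

With unixsocketperm 770, the 'denied' case points at the container user not being in the group that owns /var/run/redis/redis.sock.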

Unable to communicate with postgres using docker and django

I can't get my Django app to communicate with my Postgres database using Docker. Here is my docker-compose.yml:
version: '3'

services:
  web:
    container_name: web
    build:
      context: ./web
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./web/:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    env_file:
      - database.env
    stdin_open: true
    depends_on:
      - database

  database:
    container_name: database
    image: postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
    ports:
      - 5432:5432

volumes:
  database-data:
Here is my database.env :
# database.env
POSTGRES_USERNAME=admin
POSTGRES_PASSWORD=pass
POSTGRES_DBNAME=db
POSTGRES_HOST=database
POSTGRES_PORT=5432
PGUSER=admin
PGPASSWORD=pass
PGDATABASE=db
PGHOST=database
PGPORT=5432
DATABASE=db
SQL_HOST=database
SQL_PORT=5432
And here is my Dockerfile :
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
# copy project
ADD . .
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# run entrypoint.sh
ENTRYPOINT ["sh", "/usr/src/web/entrypoint.sh"]
And there my entrypoint.sh :
#!/bin/sh

if [ "$DATABASE" = "db" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 10
    done
    echo "PostgreSQL started"
fi

exec "$@"
I build the containers with: docker-compose up -d --build
Then I run: docker-compose exec web npm start --prefix ./front/
I can access the frontend at http://localhost:3000
But when I run docker logs database I get:
2021-01-18 06:31:49.207 UTC [1] LOG: database system is ready to accept connections
2021-01-18 06:31:51.640 UTC [32] FATAL: password authentication failed for user "admin"
2021-01-18 06:31:51.640 UTC [32] DETAIL: Role "admin" does not exist.
Connection matched pg_hba.conf line 99: "host all all all md5"
Here is the status :
37ee3e314d52 web "sh /usr/src/web/ent…" About a minute ago Up About a minute 0.0.0.0:3000->3000/tcp, 0.0.0.0:8000->8000/tcp, 5432/tcp web
65dfeae57a94 postgres "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:5432->5432/tcp database
Could you help me? Thank you very much!
It seems like the postgres user you are using doesn't exist. You can add some environment variables to the database service in docker-compose to create it (you probably need to create the database too), or write a script that creates them on first run.
version: '3'

services:
  web:
    container_name: web
    build:
      context: ./web
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./web/:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    env_file:
      - database.env
    stdin_open: true
    depends_on:
      - database

  database:
    container_name: database
    image: postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db

volumes:
  database-data:
For the environment variables supported by the postgres image, see its documentation on Docker Hub.

django.db.utils.OperationalError: could not connect to server: Connection refused

I have 3 docker containers: web (Django), nginx, and db (PostgreSQL).
When I run the following command
docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput
The exact error is:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Address not available
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
docker-compose.prod.yml
version: '3.7'

services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.envs/.db

  web:
    build:
      context: ./tubscout
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./.envs/.prod
    depends_on:
      - db

  nginx:
    build: ./nginx
    volumes:
      - .static_volume:/home/app/web/staticfiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
Dockerfile.prod
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps -w /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/wheels/*
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
settings.py
DATABASES = {
    'default': {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", os.path.join(BASE_DIR, "db.sqlite3")),
        "USER": os.environ.get("SQL_USER", "user"),
        "PASSWORD": os.environ.get("SQL_PASSWORD", "password"),
        "HOST": os.environ.get("SQL_HOST", "localhost"),
        "PORT": os.environ.get("SQL_PORT", "5432"),
    }
}
./.envs/.db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=123456789
POSTGRES_DB=tubscoutdb_prod
./.envs/.prod
DEBUG=0
SECRET_KEY='#yinppohul88coi7*f+1^_*7#o9u#kf-sr*%v(bb7^k5)n_=-h'
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=tubscoutdb_prod
SQL_USER=postgres
SQL_PASSWORD=123456789
SQL_HOST=localhost
SQL_PORT=5432
DATABASE=postgres
Change SQL_HOST to db in your ./.envs/.prod file. This will let the web container reach the db container and perform the migration.
Docker Compose containers can reach each other by service name from inside the same Compose network.
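That last point is easy to verify from inside a container: Compose's embedded DNS resolves each service name to that container's network address. A minimal sketch (the service names are just the ones from this compose file; the helper name is my own):

```python
import socket

def resolve_service(name):
    """Return the IPv4 address DNS reports for a hostname, or None if
    the name does not resolve (e.g. a service that is not on the
    same Compose network)."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None
```

Run inside the web container, resolve_service("db") should return the db container's address; run on the host machine, it returns None, which is exactly why "localhost" works in neither direction.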

docker-compose django app cannot find postgresql db

I'm trying to create a Django app in a docker container. The app uses a postgres db with the postgis extension, which runs in another container. I'm trying to solve this using docker-compose but cannot get it working.
I can get the app working without a container, with the database containerized, just fine. I can also get the app working in a container using a sqlite db (a file included in the image, without external container dependencies). Whatever I do, it can't find the database.
My docker-compose file:
version: '3.7'

services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "${POSTGRES_PORT}:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env

  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}

volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
My Dockerfile (of the app):
# Pull base image
FROM python:3.7
LABEL maintainer="yb.leeuwen@portofrotterdam.com"
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
# RUN pip install pipenv
RUN pip install pipenv
RUN mkdir /code/
COPY . /code
WORKDIR /code/
RUN pipenv install --system
# RUN pipenv install pygdal
RUN apt-get update && \
    apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal postgresql-client
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
# set work directory
WORKDIR /code/app
# RUN python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# RUN cd ${WORKDIR}
# If we want to run docker by itself we need to use below
# but if we want to run from docker-compose we'll set it there
EXPOSE 8000
# CMD /wait && python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My .env file:
# POSTGRES
POSTGRES_PORT=25432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=localhost
# GEOSERVER
# DJANGO
APP_PORT=8000
And finally, in my settings.py of the Django app:
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DBNAME'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASS'),
        'HOST': os.getenv("POSTGRES_HOST", "localhost"),
        'PORT': os.getenv('POSTGRES_PORT')
    }
}
I've tried quite a lot of things (as you can see in the comments). I realized that docker-compose doesn't wait until postgres is fully up and accepting connections, so I tried to build in a wait function (as suggested on the website). I first ran migrations and the server from the Dockerfile (migrations in the build process, runserver as the startup command), but that requires postgres, and since nothing waited for it, it didn't work. I finally moved it all into the docker-compose.yml file, but still can't get it working.
The error I get:
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 25432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 25432?
Does anybody have an idea why this isn't working?
I see that in your settings.py of the django app, you are connecting to Postgres via
'HOST': os.getenv("POSTGRES_HOST", "localhost"),
While in .env you are setting POSTGRES_HOST to localhost. This means that the web container is trying to reach the Postgres server (postgis) at localhost, which is wrong.
In order to solve this problem, simply update your .env file to be like this:
POSTGRES_PORT=5432
...
POSTGRES_HOST=postgis
...
The reason is that in your case docker-compose brings up two containers, postgis and web, inside the same Docker network, and they can reach each other via their DNS names, i.e. postgis and web respectively.
Regarding the port: the web container can reach postgis at port 5432 but not 25432, while your host machine can reach the database at port 25432 but not 5432.
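One more thing worth checking alongside the host and port: the settings.py shown reads POSTGRES_DBNAME and POSTGRES_PASS, while the .env file defines POSTGRES_DB and POSTGRES_PASSWORD, so those two values come back as None. A sketch that builds the settings dict from a single, consistent set of names (names chosen to match the .env file; the helper is illustrative, not part of Django):

```python
import os

def database_settings(env=os.environ):
    """Build the Django DATABASES['default'] dict from env vars.
    HOST must be the Compose service name and PORT the *container*
    port (5432), not the host-side mapping (25432)."""
    return {
        "ENGINE": "django.contrib.gis.db.backends.postgis",
        "NAME": env.get("POSTGRES_DB", "db"),
        "USER": env.get("POSTGRES_USER", "postgres"),
        "PASSWORD": env.get("POSTGRES_PASSWORD", ""),
        "HOST": env.get("POSTGRES_HOST", "postgis"),
        "PORT": env.get("POSTGRES_PORT", "5432"),
    }
```

Reading everything through one helper like this makes a name mismatch between settings.py and .env impossible by construction.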
You cannot use localhost from inside a Docker container; it points to the container itself, not to the host running the containers. Switch to the service name instead.
to fix the issue, change your env to
# POSTGRES
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=postgis
# DJANGO
APP_PORT=8000
and you compose file to
version: '3.7'

services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env

  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: postgis:${POSTGRES_PORT}

volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}

Run coverage test in a docker container

I am trying to use the coverage tool to measure the code coverage of my Django app. It works fine when I test locally, but when I push to GitHub I get errors in Travis CI:
Traceback (most recent call last):
  File "/usr/local/bin/coverage", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 756, in main
    status = CoverageScript().command_line(argv)
  File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 491, in command_line
    return self.do_run(options, args)
  File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 641, in do_run
    self.coverage.save()
  File "/usr/local/lib/python3.6/site-packages/coverage/control.py", line 782, in save
    self.data_files.write(self.data, suffix=self.data_suffix)
  File "/usr/local/lib/python3.6/site-packages/coverage/data.py", line 680, in write
    data.write_file(filename)
  File "/usr/local/lib/python3.6/site-packages/coverage/data.py", line 467, in write_file
    with open(filename, 'w') as fdata:
PermissionError: [Errno 13] Permission denied: '/backend/.coverage'
The command "docker-compose run backend sh -c "coverage run manage.py test"" exited with 1.
my travis.yml:
language: python
python:
  - "3.6"
services:
  - docker
before_script: pip install docker-compose
script:
  - docker-compose run backend sh -c "python manage.py test && flake8"
  - docker-compose run backend sh -c "coverage run manage.py test"
after_success:
  - coveralls
and my dockerfile
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
# Install dependencies
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps \
    gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
# Setup directory structure
RUN mkdir /backend
WORKDIR /backend
COPY scripts/start_dev.sh /
RUN dos2unix /start_dev.sh
COPY . /backend
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol/
RUN chmod -R 755 /vol/web
USER user
docker-compose:
backend:
  container_name: backend_dev_blog
  build: ./backend
  command: "sh -c /start_dev.sh"
  volumes:
    - ./backend:/backend
  ports:
    - "8000:8000"
  networks:
    - main
  environment:
    - DB_HOST=db
    - DB_NAME=blog
    - DB_USER=postgres
    - DB_PASS=supersecretpassword
  depends_on:
    - db
After seeing the lack of permissions on .coverage, I simply tried chmod +x .coverage; however, this made no difference and I received the exact same error.
Your permission issue is most likely due to the fact that you have a volume (./backend:/backend) and are running the container as a user that does not have permission to write to that volume (USER user).
Since you probably cannot change the permissions of the Travis CI directory ./backend, you can instead change the user that writes to that location. This is easy to do with docker-compose:
backend:
  container_name: backend_dev_blog
  build: ./backend
  command: "sh -c /start_dev.sh"
  user: root
  volumes:
    - ./backend:/backend
  ports:
    - "8000:8000"
  networks:
    - main
  environment:
    - DB_HOST=db
    - DB_NAME=blog
    - DB_USER=postgres
    - DB_PASS=supersecretpassword
  depends_on:
    - db
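The effect of the user: change can be sanity-checked from inside the container before running coverage. A minimal standard-library sketch (the helper name is my own) of the capability coverage needs when it writes /backend/.coverage:

```python
import os

def can_write_here(path):
    """Return True if the current user can create files under *path*:
    the directory must exist and be both writable and searchable."""
    return os.path.isdir(path) and os.access(path, os.W_OK | os.X_OK)
```

Note that chmod +x .coverage could never have helped: the failing open(filename, 'w') creates a new file, so what matters is write permission on the /backend directory, not the execute bit on the file.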