When I run
docker-compose up
I get:
$ docker-compose up
Starting asynchttpproxy_postgres_1
Starting asynchttpproxy_web_1
Attaching to asynchttpproxy_postgres_1, asynchttpproxy_web_1
postgres_1 | LOG: database system was interrupted; last known up at 2017-05-01 18:52:29 UTC
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | LOG: invalid record length at 0/150F410: wanted 24, got 0
postgres_1 | LOG: redo is not required
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
My Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
My docker-compose.yml
postgres:
  image: postgres:latest
  volumes:
    - ./code/
  env_file:
    - .env
  volumes:
    - /usr/src/app/static
  expose:
    - '5432'
web:
  build: .
  command: python3 manage.py runserver 0.0.0.0:8000
  env_file:
    - .env
  volumes:
    - .:/code
  links:
    - postgres
  expose:
    - '8000'
As you can see, the Django server won't start. What am I doing wrong? Thanks in advance.
First, run in another terminal
docker ps
to check whether your server really did not start.
Also check whether your Postgres database is ready when your Django application starts; if it isn't, run a small bash script that waits for the Postgres connection before the Django container initializes.
wait-bd.sh
#!/bin/bash
while true; do
  COUNT_PG=`psql postgresql://username:password@localhost:5432/name_db -c '\l \q' | grep "name_db" | wc -l`
  if ! [ "$COUNT_PG" -eq "0" ]; then
    break
  fi
  echo "Waiting Database Setup"
  sleep 10
done
and in docker-compose.yml add a command entry to the django container:
web:
  build: .
  command: bash -c "./wait-bd.sh && python3 manage.py runserver 0.0.0.0:8000"  # run through a shell so the && chain works
This script waits until the database is ready, and only then does the container start the Django server.
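If you prefer to keep this wait logic in Python rather than a shell script, here is a minimal sketch using psycopg2 (the database name, credentials and host are placeholders taken from the example above; adjust them to your own setup):
wait_db.py
#!/usr/bin/env python3
# Minimal sketch: block until Postgres accepts connections, then exit.
# Host "postgres" is assumed to be the docker-compose service name; credentials are placeholders.
import time

import psycopg2
from psycopg2 import OperationalError

while True:
    try:
        conn = psycopg2.connect(
            dbname="name_db",
            user="username",
            password="password",
            host="postgres",
            port=5432,
        )
        conn.close()
        break
    except OperationalError:
        print("Waiting Database Setup")
        time.sleep(10)
You could then chain it the same way: python3 wait_db.py && python3 manage.py runserver 0.0.0.0:8000.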
Related
My system is Ubuntu 22.04.1 LTS.
I was going through the Django for Professionals book and followed it line by line, but when I dockerize my Django project with Postgres it doesn't serve on localhost properly; with SQLite everything works fine.
Dockerfile
# Pull base image
FROM python:3.10.6-slim-bullseye
# Set environment variables
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project
COPY . .
docker-compose.yml
version: "3.9"
services:
web:
build: .
command: python /code/manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- 8000:8000
depends_on:
- db
db:
image: postgres:13
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- "POSTGRES_HOST_AUTH_METHOD=trust"
volumes:
postgres_data:
settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "db",  # set in docker-compose.yml
        "PORT": 5432,  # default postgres port
    }
}
When I run docker-compose up, the output below suggests everything should be fine, but it still doesn't work:
Attaching to ch3-postgresql_web_1, ch3-postgresql_db_1
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2022-12-05 11:58:21.572 UTC [1] LOG: starting PostgreSQL 13.9 (Debian 13.9-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db_1 | 2022-12-05 11:58:21.574 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2022-12-05 11:58:21.575 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2022-12-05 11:58:21.577 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-12-05 11:58:21.580 UTC [26] LOG: database system was shut down at 2022-12-05 11:58:17 UTC
db_1 | 2022-12-05 11:58:21.584 UTC [1] LOG: database system is ready to accept connections
And when I go to the url 127.0.0.1:8000:
This site can’t be reached
The webpage at http://127.0.0.1:8000/ might be temporarily down or it may have moved permanently to a new web address.
ERR_SOCKET_NOT_CONNECTED
If I wait some time and then check docker-compose logs, these messages appear:
psycopg2.OperationalError: connection to server at "db" (172.21.0.2), port 5432 failed: Connection timed out
web_1 | Is the server running on that host and accepting TCP/IP connections?
I've spent many hours searching for information about this problem, but nothing has helped. I even tried reinstalling different versions of Postgres and Docker.
Simply try this:
Change this:
volumes:
  - postgres_data:/var/lib/postgresql/data/
to this:
volumes:
  - postgres_data:/var/lib/postgresql/data  # removed the trailing slash here
Try that and see if it solves the problem.
Note: after making this change, don't forget to rebuild the images (docker-compose up --build).
You will need to add more environment variables to make PostgreSQL work.
Please refer to the "Environment Variables" section of https://hub.docker.com/_/postgres.
The only required variable is POSTGRES_PASSWORD; the rest are optional.
db:
  image: postgres:13
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
    - POSTGRES_HOST_AUTH_METHOD=trust
You are also encouraged to make your docker-compose file more secure by moving the credentials into an env file:
db:
  image: postgres:13
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  env_file:
    - .env.dev
.env.dev
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
POSTGRES_HOST_AUTH_METHOD=trust
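With the credentials in .env.dev, the DATABASES block in settings.py can read them from the environment instead of hard-coding them. A minimal sketch (assuming the same variables are also made available to the web container, e.g. via an env_file entry on that service too):
# settings.py -- sketch; assumes the POSTGRES_* variables are exported into the web container
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": "db",  # service name from docker-compose.yml
        "PORT": 5432,
    }
}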
I have currently implemented websocket connections via Django Channels using a Redis layer.
I'm new to Docker and not sure where I might have made a mistake. After docker-compose up -d --build, the static files, media, database and gunicorn WSGI server all work, but Redis won't connect, even though it is running in the background.
Before trying to containerize the application with docker, it worked well with:
python manage.py runserver
with the following settings.py section for the redis layer:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("0.0.0.0", 6379)],
        },
    },
}
and by calling a docker container for the redis layer:
docker run -p 6379:6379 -d redis:5
But after containerizing the entire application, it was unable to find the websocket.
The new setup for the docker-compose is as follows:
version: '3.10'
services:
  web:
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile
    command: bash -c "gunicorn core.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
    networks:
      - app_network
  redis:
    container_name: redis
    image: redis:5
    ports:
      - 6379:6379
    networks:
      - app_network
    restart: on-failure
  db:
    container_name: db
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - ./.env.psql
    ports:
      - 5432:5432
    networks:
      - app_network
volumes:
  postgres_data:
  static_volume:
  media_volume:
networks:
  app_network:
with this settings.py:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
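For anyone debugging a similar setup: one quick way to verify that this configuration can actually reach the redis service from inside the web container is to push a message through the channel layer, e.g. in python manage.py shell (this is only a sanity check, not part of the original setup):
# Run in `python manage.py shell` inside the web container.
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()
# This fails quickly if redis:6379 is unreachable.
async_to_sync(channel_layer.send)("test_channel", {"type": "hello"})
print(async_to_sync(channel_layer.receive)("test_channel"))  # expected: {'type': 'hello'}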
After successfully building the containers and running docker-compose logs -f:
Attaching to web, db, redis
db | The files belonging to this database system will be owned by user "postgres".
db | This user must also own the server process.
db |
db | The database cluster will be initialized with locale "en_US.utf8".
db | The default database encoding has accordingly been set to "UTF8".
db | The default text search configuration will be set to "english".
db |
db | Data page checksums are disabled.
db |
db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db | creating subdirectories ... ok
db | selecting dynamic shared memory implementation ... posix
db | selecting default max_connections ... 100
db | selecting default shared_buffers ... 128MB
db | selecting default time zone ... Etc/UTC
db | creating configuration files ... ok
db | running bootstrap script ... ok
db | performing post-bootstrap initialization ... ok
db | initdb: warning: enabling "trust" authentication for local connections
db | You can change this by editing pg_hba.conf or using the option -A, or
db | --auth-local and --auth-host, the next time you run initdb.
db | syncing data to disk ... ok
db |
db |
db | Success. You can now start the database server using:
db |
db | pg_ctl -D /var/lib/postgresql/data -l logfile start
db |
db | waiting for server to start....2022-06-27 16:18:30.303 UTC [48] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:30.310 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:30.334 UTC [49] LOG: database system was shut down at 2022-06-27 16:18:29 UTC
db | 2022-06-27 16:18:30.350 UTC [48] LOG: database system is ready to accept connections
db | done
db | server started
db | CREATE DATABASE
db |
db |
db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db |
db | 2022-06-27 16:18:31.587 UTC [48] LOG: received fast shutdown request
db | waiting for server to shut down....2022-06-27 16:18:31.596 UTC [48] LOG: aborting any active transactions
db | 2022-06-27 16:18:31.601 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
db | 2022-06-27 16:18:31.602 UTC [50] LOG: shutting down
db | 2022-06-27 16:18:31.650 UTC [48] LOG: database system is shut down
db | done
db | server stopped
db |
db | PostgreSQL init process complete; ready for start up.
db |
db | 2022-06-27 16:18:31.800 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv6 address "::", port 5432
db | 2022-06-27 16:18:31.810 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:31.818 UTC [62] LOG: database system was shut down at 2022-06-27 16:18:31 UTC
db | 2022-06-27 16:18:31.825 UTC [1] LOG: database system is ready to accept connections
redis | 1:C 27 Jun 2022 16:18:29.080 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 27 Jun 2022 16:18:29.080 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 27 Jun 2022 16:18:29.080 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 27 Jun 2022 16:18:29.082 * Running mode=standalone, port=6379.
redis | 1:M 27 Jun 2022 16:18:29.082 # Server initialized
redis | 1:M 27 Jun 2022 16:18:29.082 * Ready to accept connections
web | Waiting for postgres...
web | PostgreSQL started
web | Waiting for redis...
web | redis started
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Starting gunicorn 20.1.0
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Using worker: sync
web | [2022-06-27 16:18:33 +0000] [8] [INFO] Booting worker with pid: 8
web | [2022-06-27 16:19:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
web | [2022-06-27 18:19:18 +0200] [8] [INFO] Worker exiting (pid: 8)
web | [2022-06-27 16:19:18 +0000] [9] [INFO] Booting worker with pid: 9
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
And the docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb3e489e0831 dermatology-project_web "/usr/src/app/entryp…" 35 minutes ago Up 35 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp web
aee14c8665d0 postgres "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp db
94c29591b352 redis:5 "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
The build Dockerfile:
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install -y libpq-dev python3-pip python-dev postgresql postgresql-contrib netcat
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# create the appropriate directories for staticfiles
# copy project
COPY . .
# staticfiles
RUN python manage.py collectstatic --no-input --clear
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
and the entrypoint that checks the connections:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

if [ "$CHANNEL" = "redis" ]
then
    echo "Waiting for redis..."
    while ! nc -z $REDIS_HOST $REDIS_PORT; do
      sleep 0.1
    done
    echo "redis started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
I have also tried running the redis container separately, as before, while keeping the working containers, but that doesn't work either. I have also tried running daphne on a different port and passing the ASGI application (daphne -p 8001 myproject.asgi:application), and that didn't work either.
Thank you
I managed a solution eventually.
To make it work I needed to run the WSGI and ASGI servers separately, each in its own container. The previous "web" service that exposed the ports also had to be split in two, one service per container, with nginx proxies upstreaming to each respective port.
This was all thanks to this genius of a man:
https://github.com/pplonski/simple-tasks
Here he explains what I needed and more. He also uses celery workers to manage the asynchronous task queue/job queue based on distributed message passing, which was a bit overkill for my project but beautiful.
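For reference, the core.asgi:application that daphne serves in the compose file below is the usual Channels 3-style ProtocolTypeRouter. A minimal sketch (the app.routing module and its websocket_urlpatterns are placeholders for your own routing):
# core/asgi.py -- sketch; "app.routing" / websocket_urlpatterns are placeholders
import os

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "core.settings")
django_asgi_app = get_asgi_application()

from app.routing import websocket_urlpatterns  # your /ws/... URL patterns

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
})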
New docker-compose:
version: '2'
services:
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    depends_on:
      - wsgiserver
      - asgiserver
  postgres:
    container_name: postgres
    restart: always
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5433:5432
    expose:
      - 5432
    environment:
      - ./.env.db
  redis:
    container_name: redis
    image: redis:5
    restart: unless-stopped
    ports:
      - 6378:6379
  wsgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: wsgiserver
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    links:
      - postgres
      - redis
    expose:
      - 8000
  asgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: asgiserver
    command: daphne core.asgi:application -b 0.0.0.0 -p 9000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
    links:
      - postgres
      - redis
    expose:
      - 9000
volumes:
  static_volume:
  media_volume:
  postgres_data:
New entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
New nginx
nginx.conf:
server {
    listen 80;

    # gunicorn wsgi server
    location / {
        try_files $uri @proxy_api;
    }
    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://wsgiserver:8000;
    }

    # ASGI
    # map websocket connections to daphne
    location /ws {
        try_files $uri @proxy_to_ws;
    }
    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://asgiserver:9000;
    }

    # static and media files
    location /static/ {
        alias /usr/src/app/staticfiles/;
    }
    location /media/ {
        alias /usr/src/app/media/;
    }
}
Dockerfile for nginx:
FROM nginx:1.21
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
Note
If anyone is using this as a reference: this is not a production setup; further steps are needed.
This article explains those remaining steps, and its conclusion also links to securing the application on AWS with Docker and Let's Encrypt:
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#conclusion
I'm having a permissions issue when trying to run my docker-compose command
docker-compose run app sh -c "django-admin.py startproject app ."
ERROR:
PermissionError: [Errno 13] Permission denied: '/app/manage.py'
I've done some research and found a similar issue here: docker-compose, PermissionError: [Errno 13] Permission denied: '/manage.py'
However, I think I'm doing something wrong when trying to change permissions in my Dockerfile.
Dockerfile:
FROM python:3.8-alpine
MAINTAINER Nick
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
RUN chown user:user -R /app/
RUN chmod +x /app
USER user
I added RUN chown user:user -R /app/ and RUN chmod +x /app.
Running docker build . works successfully, however I still keep getting the permissions issue.
docker-compose.yml
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app /app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
It seems your Django project works well; I've tried to reproduce your error but couldn't.
My directory tree:
.
|____app
| |____app
| | |____asgi.py
| | |______init__.py
| | |____settings.py
| | |____urls.py
| | |____wsgi.py
| |____manage.py
|____requirements.txt
|____Dockerfile
|____docker-compose.yml
requirements.txt
Django==3.1.5
Dockerfile:
FROM python:3.8-alpine
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
RUN chown user:user -R /app/
RUN chmod +x /app
USER user
Docker-compose:
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app /app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
My steps:
django-admin startproject app
docker-compose build app
docker-compose up
Output:
Creating django-3-1-5_app_1 ... done
Attaching to django-3-1-5_app_1
app_1 | Watching for file changes with StatReloader
app_1 | Performing system checks...
app_1 |
app_1 | System check identified no issues (0 silenced).
app_1 |
app_1 | You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
app_1 | Run 'python manage.py migrate' to apply them.
app_1 | January 06, 2021 - 21:03:42
app_1 | Django version 3.1.5, using settings 'app.settings'
app_1 | Starting development server at http://0.0.0.0:8000/
app_1 | Quit the server with CONTROL-C.
app_1 | [06/Jan/2021 21:03:52] "GET / HTTP/1.1" 200 16351
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/css/fonts.css HTTP/1.1" 200 423
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/fonts/Roboto-Regular-webfont.woff HTTP/1.1" 200 85876
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/fonts/Roboto-Light-webfont.woff HTTP/1.1" 200 85692
app_1 | [06/Jan/2021 21:03:52] "GET /static/admin/fonts/Roboto-Bold-webfont.woff HTTP/1.1" 200 86184
app_1 | Not Found: /favicon.ico
app_1 | [06/Jan/2021 21:03:52] "GET /favicon.ico HTTP/1.1" 404 1969
^CGracefully stopping... (press Ctrl+C again to force)
Stopping django-3-1-5_app_1 ... done
Browser:
I'm thinking you have another kind of permissions issue. Are you building this image on macOS? If so, try the following configuration:
System Preferences -> Security & Privacy -> Privacy tab -> Full Disk Access (on the left, somewhere in the list) -> Click on the + -> Docker application
I hope it may help you and other users.
For whatever reason changing my
docker-compose.yml from
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app /app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
To
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
I modified - ./app /app to - ./app:/app, and that fixed the issue. If anyone has an explanation, that would be great.
Thanks for the help.
It worked for me by removing : from the volume (docker-compose.yml):
change from:
version: '3.7'
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
to:
version: '3.7'
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app//usr/src/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
I am currently working on a Django project that is supposed to send messages to a mobile app via websockets. I used Docker to put the project online. Now I want to send scheduled messages for the first time; for this I use APScheduler / django-apscheduler, and I try to save the jobs in my Redis container. But for some reason the connection is refused. Am I doing something fundamentally wrong, or is something hanging somewhere?
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
redis:
image: redis
command: redis-server
ports:
- '6379:6379'
- '6380:6380'
web:
build: .\experiencesampling
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:\code
ports:
- "8000:8000"
# worker_channels:
#
# build: .\experiencesampling
# command: python manage.py runworker channels
# volumes:
# - .:\code
# links:
# - redis
channels:
build: .\experiencesampling
command: daphne -p 8001 experiencesampling.asgi:application
volumes:
- .:\code
ports:
- "8001:8001"
links:
- redis
jobs.py (trying to connect to redis); I have already tried 0.0.0.0, localhost and redis://redis for the "host":
jobstores = {
    'default': RedisJobStore(jobs_key='dispatched_trips_jobs', run_times_key='dispatched_trips_running', host='redis', port=6380)
}
executors = {
    'default': ThreadPoolExecutor(20),
    'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 3
}

# jobStore.remove_all_jobs()
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors, job_defaults=job_defaults)
register_events(scheduler)
scheduler.start()
print("Scheduler started!")
Error (appears multiple times)
web_1 |
web_1 | Scheduler started!
web_1 | Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | System check identified no issues (0 silenced).
web_1 | July 11, 2020 - 19:00:30
channels_1 | 2020-07-11 19:00:29,866 WARNING Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | Django version 3.0.8, using settings 'experiencesampling.settings'
web_1 | Starting ASGI/Channels version 2.3.1 development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
I'm trying to create a Django app in a Docker container. The app uses a Postgres database with the PostGIS extension, which lives in another container. I'm trying to set this up with docker-compose but cannot get it working.
I can get the app working outside a container, with only the database containerized, just fine. I can also get the app working in a container using a SQLite database (a file inside the image, with no external container dependencies). Whatever I do, it can't find the database.
My docker-compose file:
version: '3.7'
services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "${POSTGRES_PORT}:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}
volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
My Dockerfile (of the app):
# Pull base image
FROM python:3.7
LABEL maintainer="yb.leeuwen@portofrotterdam.com"
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
# RUN pip install pipenv
RUN pip install pipenv
RUN mkdir /code/
COPY . /code
WORKDIR /code/
RUN pipenv install --system
# RUN pipenv install pygdal
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal postgresql-client
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
# set work directory
WORKDIR /code/app
# RUN python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# RUN cd ${WORKDIR}
# If we want to run docker by itself we need to use below
# but if we want to run from docker-compose we'll set it there
EXPOSE 8000
# CMD /wait && python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My .env file:
# POSTGRES
POSTGRES_PORT=25432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=localhost
# GEOSERVER
# DJANGO
APP_PORT=8000
And finally, in the settings.py of the Django app:
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DBNAME'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASS'),
        'HOST': os.getenv("POSTGRES_HOST", "localhost"),
        'PORT': os.getenv('POSTGRES_PORT')
    }
}
I've tried quite a lot of things (as you can see in some of the comments). I realized that docker-compose doesn't seem to wait until Postgres is fully up and accepting requests, so I tried to build in a wait step (as suggested on the docker-compose-wait page). I first had the migrations and the server start inside the Dockerfile (migrations in the build process, runserver as the startup command), but that requires Postgres, and since nothing waited for it, it didn't work. I finally moved it all into the docker-compose.yml file, but still can't get it working.
The error I get:
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 25432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 25432?
Does anybody have an idea why this isn't working?
I see that in the settings.py of your Django app you are connecting to Postgres via
'HOST': os.getenv("POSTGRES_HOST", "localhost"),
while in .env you are setting POSTGRES_HOST to localhost. This means the web container is trying to reach the Postgres server postgis at localhost, which should not be the case.
In order to solve this problem, simply update your .env file to be like this:
POSTGRES_PORT=5432
...
POSTGRES_HOST=postgis
...
The reason is that in your case docker-compose brings up two containers, postgis and web, inside the same Docker network, and they can reach each other via their DNS names, i.e. postgis and web respectively.
Regarding the port: the web container can reach postgis on port 5432 but not on 25432, while your host machine can reach the database on port 25432 but not on 5432.
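If you want to confirm this from inside the web container, a small check using only the Python standard library (host names and ports as in this setup) looks like this:
# check_db.py -- run inside the web container; plain-socket reachability check
import socket

for host, port in [("postgis", 5432), ("localhost", 25432)]:
    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} is reachable")
    except OSError as exc:
        print(f"{host}:{port} is NOT reachable ({exc})")
From inside the container you should see postgis:5432 reachable and localhost:25432 refused; from your host machine it is the other way around.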
You cannot use localhost for the Docker containers; it points to the container itself, not to the host of the containers. Instead, use the service name.
To fix the issue, change your .env to
# POSTGRES
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=postgis
# DJANGO
APP_PORT=8000
and your compose file to:
version: '3.7'
services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: postgis:${POSTGRES_PORT}
volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}