I'm writing a project using Django REST Framework and Django, with Postgres as the database and Redis for caching. I want to dockerize the project, but Redis refuses the connection.
Django settings:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        }
    }
}
docker-compose.yml:
services:
  postgres:
    image: postgres:latest
    env_file:
      - ./src/main/.env
    volumes:
      - ./scripts/postgres:/docker-entrypoint-initdb.d
  polls:
    build: .
    volumes:
      - .:/code
    env_file:
      - ./src/main/.env
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - redis
    command: ./scripts/wait_for_it.sh
  redis:
    restart: always
    image: redis:3.2.0
    expose:
      - "6379"
When I run the command to bring the containers up, the following warnings appear:
polls_cache | 1:M 15 Aug 10:47:36.719 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
polls_cache | 1:M 15 Aug 10:47:36.720 # Server started, Redis version 3.2.0
polls_cache | 1:M 15 Aug 10:47:36.720 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
polls_cache | 1:M 15 Aug 10:47:36.720 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
polls_cache | 1:M 15 Aug 10:47:36.720 * The server is now ready to accept connections on port 6379
And when I send a GET request to the endpoint where I'm using Redis for caching, I get this exception:
ConnectionError at /question/top/
Error 111 connecting to 127.0.0.1:6379. Connection refused.
...
Has anyone had a similar problem?
Change the connection string to the following:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://redis:6379/',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        }
    }
}
For the polls service container, 127.0.0.1 is the polls container itself. With Docker Compose, containers reach one another by their service names, e.g. redis, polls, postgres.
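To confirm the new LOCATION actually reaches the cache, a quick check can be run from a Django shell inside the polls container (a minimal sketch; it assumes django-redis is installed and the service names match the compose file above):

# docker-compose exec polls python manage.py shell
from django.core.cache import cache

cache.set("connectivity-check", "ok", timeout=30)  # goes through django-redis to redis:6379
print(cache.get("connectivity-check"))             # prints "ok" once the connection works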
Yes, I know this question has been asked before, and the usual fix is setting DB_HOST to the Docker service name, in my case db.
Here's my docker-compose.yml:
version: "3"
services:
db:
image: postgres:latest
restart: always
ports:
- "54320:5432"
expose:
- "54320"
volumes:
- ./database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
# POSTGRES_HOST_AUTH_METHOD: trust
django:
build: ./api
restart: always
command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
volumes:
- ./api:/app/api
ports:
- "8000:8000"
depends_on:
- db
raddl_admin:
build: ./frontend/raddl_admin
stdin_open: true # docker run -i
tty: true # docker run -t
command: ["npm", "start"]
ports:
- "3000:3000"
volumes:
- ./frontend/raddl_admin:/app/frontend/raddl_admin
- /app/frontend/raddl_admin/node_modules
volumes:
database-data: # named volumes can be managed easier using docker-compose
Logging the django service gives:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "db" (172.18.0.2) and accepting
TCP/IP connections on port 54320?
So it's clearly trying to connect to db on that port and my environment vars are set for the django instance.
I can even ping my db service from my django service:
docker exec -it 1f764333ace2 bash | ping db
PING db (23.221.222.250): 56 data bytes
64 bytes from 23.221.222.250: icmp_seq=0 ttl=50 time=42.055 ms
64 bytes from 23.221.222.250: icmp_seq=1 ttl=50 time=42.382 ms
Why can't I connect to the db service from Django?
Ah, so my issue was with the port number. Keeping the db port at "5432" (and pointing Django at port 5432) resolved this: inside the Compose network, containers talk to each other on the container port, so Django should use HOST=db, PORT=5432; the host-side 54320 in "54320:5432" only exists on the host. I thought this would conflict with my host's localhost instance of Postgres. It doesn't, because only the port published to the host has to be unique there.
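A minimal way to verify this from inside the django container (a sketch; it assumes psycopg2 is installed, which Django's postgresql backend already requires, and that the credentials match the compose file above):

# docker-compose exec django python
import psycopg2

conn = psycopg2.connect(
    host="db",         # Compose service name
    port=5432,         # container port, not the host-published 54320
    dbname="postgres",
    user="postgres",
    password="postgres",
)
print(conn.get_dsn_parameters())  # confirms which host/port were used
conn.close()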
My system is Ubuntu 22.04.1 LTS.
I was going through the Django for Professionals book and followed it line by line, but when I dockerize my Django project with Postgres, localhost doesn't serve properly; with SQLite everything works fine.
Dockerfile
# Pull base image
FROM python:3.10.6-slim-bullseye
# Set environment variables
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project
COPY . .
docker-compose.yml
version: "3.9"
services:
web:
build: .
command: python /code/manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- 8000:8000
depends_on:
- db
db:
image: postgres:13
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- "POSTGRES_HOST_AUTH_METHOD=trust"
volumes:
postgres_data:
django.settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "db",  # set in docker-compose.yml
        "PORT": 5432,  # default postgres port
    }
}
When I run docker-compose up, the output looks like everything should be fine, but it still doesn't work:
Attaching to ch3-postgresql_web_1, ch3-postgresql_db_1
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2022-12-05 11:58:21.572 UTC [1] LOG: starting PostgreSQL 13.9 (Debian 13.9-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db_1 | 2022-12-05 11:58:21.574 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2022-12-05 11:58:21.575 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2022-12-05 11:58:21.577 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-12-05 11:58:21.580 UTC [26] LOG: database system was shut down at 2022-12-05 11:58:17 UTC
db_1 | 2022-12-05 11:58:21.584 UTC [1] LOG: database system is ready to accept connections
And when I go to the url 127.0.0.1:8000:
This site can’t be reached
The webpage at http://127.0.0.1:8000/ might be temporarily down or it may have moved permanently to a new web address.
ERR_SOCKET_NOT_CONNECTED
If I wait some time and then check docker-compose logs, this message appears:
psycopg2.OperationalError: connection to server at "db" (172.21.0.2), port 5432 failed: Connection timed out
web_1 | Is the server running on that host and accepting TCP/IP connections?
I spent many hours searching for information about this problem, but nothing helped. I even tried reinstalling different versions of Postgres and Docker.
Simply try this:
change this:
volumes:
  - postgres_data:/var/lib/postgresql/data/
To this:
volumes:
  - postgres_data:/var/lib/postgresql/data # removed the trailing slash here
Try the above and see if it solves the problem.
Note: after making the above change, don't forget to rebuild with docker-compose.
You will need to add more environment variables to make PostgreSQL work.
Please refer to https://hub.docker.com/_/postgres, section "Environment Variables".
The only required variable is POSTGRES_PASSWORD; the rest are optional.
db:
  image: postgres:13
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
    - POSTGRES_HOST_AUTH_METHOD=trust
And, you are encouraged to make your docker compose file more secure:
db:
  image: postgres:13
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  env_file:
    - .env.dev
.env.dev
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
POSTGRES_HOST_AUTH_METHOD=trust
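If the web service is given the same env_file, the Django settings can read these values instead of hard-coding them (a sketch under that assumption; the variable names follow the .env.dev above):

# settings.py
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "postgres"),
        "HOST": "db",   # Compose service name
        "PORT": 5432,
    }
}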
I have implemented websocket connections via Django Channels using a Redis layer.
I'm new to Docker and not sure where I might have made a mistake. After docker-compose up -d --build, the static files, media, database, and gunicorn WSGI all work, but Redis won't connect, even though it is running in the background.
Before trying to containerize the application with docker, it worked well with:
python manage.py runserver
with the following settings.py setction for the redis layer:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("0.0.0.0", 6379)],
        },
    },
}
and by calling a docker container for the redis layer:
docker run -p 6379:6379 -d redis:5
But after containerizing the entire application, it is unable to find the websocket.
The new setup for the docker-compose is as follows:
version: '3.10'
services:
  web:
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile
    command: bash -c "gunicorn core.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
    networks:
      - app_network
  redis:
    container_name: redis
    image: redis:5
    ports:
      - 6379:6379
    networks:
      - app_network
    restart: on-failure
  db:
    container_name: db
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - ./.env.psql
    ports:
      - 5432:5432
    networks:
      - app_network
volumes:
  postgres_data:
  static_volume:
  media_volume:
networks:
  app_network:
with this settings.py:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
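A quick way to check the layer itself, independent of the websocket routing, is the standard Channels round-trip from a shell inside the web container (a sketch; it assumes channels_redis is installed):

# docker-compose exec web python manage.py shell
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()
async_to_sync(channel_layer.send)("test_channel", {"type": "hello"})
print(async_to_sync(channel_layer.receive)("test_channel"))  # {'type': 'hello'} if redis:6379 is reachable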
After building the containers successfully and running docker-compose logs -f:
Attaching to web, db, redis
db | The files belonging to this database system will be owned by user "postgres".
db | This user must also own the server process.
db |
db | The database cluster will be initialized with locale "en_US.utf8".
db | The default database encoding has accordingly been set to "UTF8".
db | The default text search configuration will be set to "english".
db |
db | Data page checksums are disabled.
db |
db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db | creating subdirectories ... ok
db | selecting dynamic shared memory implementation ... posix
db | selecting default max_connections ... 100
db | selecting default shared_buffers ... 128MB
db | selecting default time zone ... Etc/UTC
db | creating configuration files ... ok
db | running bootstrap script ... ok
db | performing post-bootstrap initialization ... ok
db | initdb: warning: enabling "trust" authentication for local connections
db | You can change this by editing pg_hba.conf or using the option -A, or
db | --auth-local and --auth-host, the next time you run initdb.
db | syncing data to disk ... ok
db |
db |
db | Success. You can now start the database server using:
db |
db | pg_ctl -D /var/lib/postgresql/data -l logfile start
db |
db | waiting for server to start....2022-06-27 16:18:30.303 UTC [48] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:30.310 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:30.334 UTC [49] LOG: database system was shut down at 2022-06-27 16:18:29 UTC
db | 2022-06-27 16:18:30.350 UTC [48] LOG: database system is ready to accept connections
db | done
db | server started
db | CREATE DATABASE
db |
db |
db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db |
db | 2022-06-27 16:18:31.587 UTC [48] LOG: received fast shutdown request
db | waiting for server to shut down....2022-06-27 16:18:31.596 UTC [48] LOG: aborting any active transactions
db | 2022-06-27 16:18:31.601 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
db | 2022-06-27 16:18:31.602 UTC [50] LOG: shutting down
db | 2022-06-27 16:18:31.650 UTC [48] LOG: database system is shut down
db | done
db | server stopped
db |
db | PostgreSQL init process complete; ready for start up.
db |
db | 2022-06-27 16:18:31.800 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv6 address "::", port 5432
db | 2022-06-27 16:18:31.810 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:31.818 UTC [62] LOG: database system was shut down at 2022-06-27 16:18:31 UTC
db | 2022-06-27 16:18:31.825 UTC [1] LOG: database system is ready to accept connections
redis | 1:C 27 Jun 2022 16:18:29.080 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 27 Jun 2022 16:18:29.080 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 27 Jun 2022 16:18:29.080 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 27 Jun 2022 16:18:29.082 * Running mode=standalone, port=6379.
redis | 1:M 27 Jun 2022 16:18:29.082 # Server initialized
redis | 1:M 27 Jun 2022 16:18:29.082 * Ready to accept connections
web | Waiting for postgres...
web | PostgreSQL started
web | Waiting for redis...
web | redis started
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Starting gunicorn 20.1.0
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Using worker: sync
web | [2022-06-27 16:18:33 +0000] [8] [INFO] Booting worker with pid: 8
web | [2022-06-27 16:19:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
web | [2022-06-27 18:19:18 +0200] [8] [INFO] Worker exiting (pid: 8)
web | [2022-06-27 16:19:18 +0000] [9] [INFO] Booting worker with pid: 9
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
And the docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb3e489e0831 dermatology-project_web "/usr/src/app/entryp…" 35 minutes ago Up 35 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp web
aee14c8665d0 postgres "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp db
94c29591b352 redis:5 "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
The build Dockerfile:
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install -y libpq-dev python3-pip python-dev postgresql postgresql-contrib netcat
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# create the appropriate directories for staticfiles
# copy project
COPY . .
# staticfiles
RUN python manage.py collectstatic --no-input --clear
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
and the entrypoint that checks the connections:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
if [ "$CHANNEL" = "redis" ]
then
echo "Waiting for redis..."
while ! nc -z $REDIS_HOST $REDIS_PORT; do
sleep 0.1
done
echo "redis started"
fi
#python manage.py flush --no-input
#python manage.py migrate
exec "$#"
I have also tried running the redis container separately, as before, while keeping the other working containers, but that doesn't work either. I also tried running daphne on a different port and passing the asgi application (daphne -p 8001 myproject.asgi:application), and that didn't work either.
Thank you
Managed a solution eventually.
To make it work I needed to run the WSGI and ASGI servers separately from each other, each in its own container. Also, the previous "web" service that exposed the ports to the applications needed to be run twice, once for each of those containers, with nginx proxies upstreaming to each respective port.
This was all thanks to this genius of a man:
https://github.com/pplonski/simple-tasks
Here he explains what I needed and more. He also uses celery workers to manage the asynchronous task queue/job queue based on distributed message passing, which was a bit overkill for my project but beautiful.
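For reference, the application daphne points at (core.asgi:application) follows the usual Channels layout; this is only a sketch, and the routing module name is a placeholder rather than something taken from the project:

# core/asgi.py
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "core.settings")
django_asgi_app = get_asgi_application()

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter

import app.routing  # hypothetical module that defines websocket_urlpatterns

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AuthMiddlewareStack(
        URLRouter(app.routing.websocket_urlpatterns)
    ),
})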
New docker-compose:
version: '2'
services:
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    depends_on:
      - wsgiserver
      - asgiserver
  postgres:
    container_name: postgres
    restart: always
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5433:5432
    expose:
      - 5432
    environment:
      - ./.env.db
  redis:
    container_name: redis
    image: redis:5
    restart: unless-stopped
    ports:
      - 6378:6379
  wsgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: wsgiserver
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    links:
      - postgres
      - redis
    expose:
      - 8000
  asgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: asgiserver
    command: daphne core.asgi:application -b 0.0.0.0 -p 9000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
    links:
      - postgres
      - redis
    expose:
      - 9000
volumes:
  static_volume:
  media_volume:
  postgres_data:
New entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
#python manage.py flush --no-input
#python manage.py migrate
exec "$#"
New nginx
nginx.conf:
server {
    listen 80;

    # gunicorn WSGI server
    location / {
        try_files $uri @proxy_api;
    }
    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://wsgiserver:8000;
    }

    # ASGI
    # map websocket connections to daphne
    location /ws {
        try_files $uri @proxy_to_ws;
    }
    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://asgiserver:9000;
    }

    # static and media files
    location /static/ {
        alias /usr/src/app/staticfiles/;
    }
    location /media/ {
        alias /usr/src/app/media/;
    }
}
Dockerfile for nginx:
FROM nginx:1.21
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
Note
If anyone is using this as a reference: this is not a production setup; further steps are needed. This article explains the remaining steps, as well as securing the application on AWS with Docker and Let's Encrypt (see its conclusion):
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#conclusion
I have a Django project that uses django-elasticsearch-dsl. The project is dockerized, so Elasticsearch and the web project live in separate containers.
Now my goal is to recreate and repopulate the indices running
python manage.py search_index --rebuild
To do that, I run the command from the web service's container like this:
docker-compose exec web /bin/bash
> python manage.py search_index --rebuild
Not surprisingly, I get an error:
Failed to establish a new connection: [Errno 111] Connection refused)
apparently because Python tried to connect to Elasticsearch using localhost:9200.
So the question is: how do I tell the management command the host where Elasticsearch lives?
Here's my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    restart: "no"
    command: ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
    env_file: &envfile
      - .env
    environment:
      - DEBUG=True
    ports:
      - "${DJANGO_PORT}:8000"
    networks:
      - deploy_network
    depends_on:
      - elasticsearch
      - db
  elasticsearch:
    image: 'elasticsearch:2.4.6'
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - deploy_network
  db:
    image: "postgres"
    container_name: "postgres"
    restart: "no"
    env_file: *envfile
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
networks:
  deploy_network:
    driver: bridge
UPDATE:
In the Django project's settings I set up the Elasticsearch DSL host:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    }
}
Since your Django project and Elasticsearch are in two separate containers, setting ELASTICSEARCH_DSL's host to 'localhost:9200' won't work; in this case localhost refers to localhost inside the Django container.
So you need to set it like this:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'elasticsearch:9200'
    }
}
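With that in place, connectivity can be verified from inside the web container before rebuilding the indices (a minimal sketch; it assumes the elasticsearch Python client that django-elasticsearch-dsl depends on is installed):

# docker-compose exec web python
from elasticsearch import Elasticsearch

es = Elasticsearch(hosts=["elasticsearch:9200"])
print(es.ping())  # True once the service name resolves and Elasticsearch is up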
I am currently working on a Django project that is supposed to send messages to a mobile app via websockets. I used Docker to put the Django project online. Now I want to send scheduled messages for the first time; for this I use APScheduler / django-apscheduler. I am trying to save the jobs in my Redis container, but for some reason the connection is refused. Am I doing something fundamentally wrong, or is it failing somewhere else?
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - '6379:6379'
      - '6380:6380'
  web:
    build: .\experiencesampling
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:\code
    ports:
      - "8000:8000"
  # worker_channels:
  #   build: .\experiencesampling
  #   command: python manage.py runworker channels
  #   volumes:
  #     - .:\code
  #   links:
  #     - redis
  channels:
    build: .\experiencesampling
    command: daphne -p 8001 experiencesampling.asgi:application
    volumes:
      - .:\code
    ports:
      - "8001:8001"
    links:
      - redis
jobs.py (trying to connect to Redis); I have already tried 0.0.0.0, localhost, and redis://redis for the "host":
# imports needed for this snippet
from apscheduler.executors.pool import ProcessPoolExecutor, ThreadPoolExecutor
from apscheduler.jobstores.redis import RedisJobStore
from apscheduler.schedulers.background import BackgroundScheduler
from django_apscheduler.jobstores import register_events

jobstores = {
    'default': RedisJobStore(jobs_key='dispatched_trips_jobs',
                             run_times_key='dispatched_trips_running',
                             host='redis', port=6380)
}
executors = {
    'default': ThreadPoolExecutor(20),
    'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 3
}

# jobStore.remove_all_jobs()
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors, job_defaults=job_defaults)
register_events(scheduler)
scheduler.start()
print("Scheduler started!")
Error (appears multiple times)
web_1 |
web_1 | Scheduler started!
web_1 | Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | System check identified no issues (0 silenced).
web_1 | July 11, 2020 - 19:00:30
channels_1 | 2020-07-11 19:00:29,866 WARNING Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | Django version 3.0.8, using settings 'experiencesampling.settings'
web_1 | Starting ASGI/Channels version 2.3.1 development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
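For what it's worth, the same pattern as the earlier answers likely applies here: inside the Compose network the redis container is reached by its service name on its container port, and redis-server in that container only listens on 6379 (the extra 6380 mapping publishes a port nothing listens on). A sketch of the job store configuration under that assumption, not a confirmed fix from this thread:

jobstores = {
    'default': RedisJobStore(
        jobs_key='dispatched_trips_jobs',
        run_times_key='dispatched_trips_running',
        host='redis',   # Compose service name
        port=6379,      # port redis-server actually listens on inside the container
    )
}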