In my local environment I used Celery for scheduled tasks, and it works on my local system with Redis as the broker.
Now I want to configure Django Celery on a Heroku server.
I tried the heroku-redis add-on in my Heroku app
and added this to my settings.py:
r = redis.from_url(os.environ.get("REDIS_URL"))
BROKER_URL = redis.from_url(os.environ.get("REDIS_URL"))
CELERY_RESULT_BACKEND = os.environ.get('REDIS_URL')
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Canada/Eastern'
redis_url = urlparse.urlparse(os.environ.get('REDIS_URL'))
CACHES = {
"default": {
"BACKEND": "redis_cache.RedisCache",
"LOCATION": "{0}:{1}".format(redis_url.hostname, redis_url.port),
"OPTIONS": {
"PASSWORD": redis_url.password,
"DB": 0,
}
}
}
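As an aside, BROKER_URL is expected to be the broker URL string itself; redis.from_url() returns a client object, not a URL. A minimal corrected sketch of the relevant settings (assuming the REDIS_URL env var set by heroku-redis, as above):
import os

REDIS_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')

BROKER_URL = REDIS_URL              # a URL string, not a redis client object
CELERY_RESULT_BACKEND = REDIS_URL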
Then in my Procfile I added:
web: gunicorn bizbii.wsgi --log-file -
worker : celery workder -A tasks.app -l INFO
python manage.py celeryd -v 2 -B -s celery -E -l INFO
but the task still does not run.
After that I checked the logs, which returned:
2016-07-30T08:53:19+00:00 app[heroku-redis]: source=REDIS sample#active-connections=1 sample#load-avg-1m=0.07 sample#load-avg-5m=0.075 sample#load-avg-15m=0.07 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664876.0kB sample#memory-free=13426732.0kB sample#memory-cached=460140kB sample#memory-redis=299616bytes sample#hit-rate=1 sample#evicted-keys=0
After that I started a one-off dyno with this command:
heroku run bash -a bizbii2
and ran the following command:
python manage.py celeryd -v 2 -B -s celery -E -l INFO
It returned an error like:
[2016-08-03 08:23:26,506: ERROR/Beat] beat: Connection error: [Errno 111] Connection refused. Trying again in 8.0 seconds...
[2016-08-03 08:23:26,843: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 8.00 seconds...
Please give me a suggestion on how to deploy Celery on a Heroku server.
I had this exact problem. I updated my procfile with the following line and the error is gone:
worker: celery -A TASKFILE worker -B --loglevel=info
Replace TASKFILE with, for example, proj.celery or proj.tasks, depending on where you put the tasks.
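For instance, with a project package named bizbii as in the web line above (that the Celery app lives in bizbii/celery.py is an assumption), the full Procfile might look like:
web: gunicorn bizbii.wsgi --log-file -
worker: celery -A bizbii worker -B --loglevel=info
Here -B runs an embedded beat scheduler inside the worker dyno, so scheduled tasks fire without a separate beat process.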
I have currently implemented websocket connections via Django Channels using a Redis layer.
I'm new to Docker and not sure where I might have made a mistake. After docker-compose up -d --build, the static files, media, database and gunicorn WSGI server all function, but Redis won't connect, even though it is running in the background.
Before trying to containerize the application with Docker, it worked well with:
python manage.py runserver
with the following settings.py section for the Redis layer:
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [("0.0.0.0", 6379)],
},
},
}
and by running a Docker container for the Redis layer:
docker run -p 6379:6379 -d redis:5
But after trying to containerize the entire application, it was unable to find the websocket.
The new docker-compose setup is as follows:
version: '3.10'
services:
web:
container_name: web
build:
context: ./app
dockerfile: Dockerfile
command: bash -c "gunicorn core.wsgi:application --bind 0.0.0.0:8000"
volumes:
- ./app/:/usr/src/app/
- static_volume:/usr/src/app/staticfiles/
- media_volume:/usr/src/app/media/
ports:
- 8000:8000
env_file:
- ./.env.dev
depends_on:
- db
networks:
- app_network
redis:
container_name: redis
image: redis:5
ports:
- 6379:6379
networks:
- app_network
restart: on-failure
db:
container_name: db
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- ./.env.psql
ports:
- 5432:5432
networks:
- app_network
volumes:
postgres_data:
static_volume:
media_volume:
networks:
app_network:
with this settings.py:
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [("redis", 6379)],
},
},
}
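A quick way to confirm that the channel layer's target is reachable from inside the web container (a diagnostic sketch; it assumes the redis-py package is installed in the image) is:
docker-compose exec web python -c "import redis; print(redis.Redis(host='redis', port=6379).ping())"
This should print True if the hostname redis resolves on app_network and the server is accepting connections.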
After building the containers successfully and running docker-compose logs -f:
Attaching to web, db, redis
db | The files belonging to this database system will be owned by user "postgres".
db | This user must also own the server process.
db |
db | The database cluster will be initialized with locale "en_US.utf8".
db | The default database encoding has accordingly been set to "UTF8".
db | The default text search configuration will be set to "english".
db |
db | Data page checksums are disabled.
db |
db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db | creating subdirectories ... ok
db | selecting dynamic shared memory implementation ... posix
db | selecting default max_connections ... 100
db | selecting default shared_buffers ... 128MB
db | selecting default time zone ... Etc/UTC
db | creating configuration files ... ok
db | running bootstrap script ... ok
db | performing post-bootstrap initialization ... ok
db | initdb: warning: enabling "trust" authentication for local connections
db | You can change this by editing pg_hba.conf or using the option -A, or
db | --auth-local and --auth-host, the next time you run initdb.
db | syncing data to disk ... ok
db |
db |
db | Success. You can now start the database server using:
db |
db | pg_ctl -D /var/lib/postgresql/data -l logfile start
db |
db | waiting for server to start....2022-06-27 16:18:30.303 UTC [48] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:30.310 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:30.334 UTC [49] LOG: database system was shut down at 2022-06-27 16:18:29 UTC
db | 2022-06-27 16:18:30.350 UTC [48] LOG: database system is ready to accept connections
db | done
db | server started
db | CREATE DATABASE
db |
db |
db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db |
db | 2022-06-27 16:18:31.587 UTC [48] LOG: received fast shutdown request
db | waiting for server to shut down....2022-06-27 16:18:31.596 UTC [48] LOG: aborting any active transactions
db | 2022-06-27 16:18:31.601 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
db | 2022-06-27 16:18:31.602 UTC [50] LOG: shutting down
db | 2022-06-27 16:18:31.650 UTC [48] LOG: database system is shut down
db | done
db | server stopped
db |
db | PostgreSQL init process complete; ready for start up.
db |
db | 2022-06-27 16:18:31.800 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv6 address "::", port 5432
db | 2022-06-27 16:18:31.810 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:31.818 UTC [62] LOG: database system was shut down at 2022-06-27 16:18:31 UTC
db | 2022-06-27 16:18:31.825 UTC [1] LOG: database system is ready to accept connections
redis | 1:C 27 Jun 2022 16:18:29.080 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 27 Jun 2022 16:18:29.080 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 27 Jun 2022 16:18:29.080 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 27 Jun 2022 16:18:29.082 * Running mode=standalone, port=6379.
redis | 1:M 27 Jun 2022 16:18:29.082 # Server initialized
redis | 1:M 27 Jun 2022 16:18:29.082 * Ready to accept connections
web | Waiting for postgres...
web | PostgreSQL started
web | Waiting for redis...
web | redis started
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Starting gunicorn 20.1.0
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Using worker: sync
web | [2022-06-27 16:18:33 +0000] [8] [INFO] Booting worker with pid: 8
web | [2022-06-27 16:19:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
web | [2022-06-27 18:19:18 +0200] [8] [INFO] Worker exiting (pid: 8)
web | [2022-06-27 16:19:18 +0000] [9] [INFO] Booting worker with pid: 9
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
And the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb3e489e0831 dermatology-project_web "/usr/src/app/entryp…" 35 minutes ago Up 35 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp web
aee14c8665d0 postgres "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp db
94c29591b352 redis:5 "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
The build Dockerfile (its FROM line was missing here; python:3 is an assumption):
# base image (assumed; the original FROM line was omitted)
FROM python:3
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install -y libpq-dev python3-pip python-dev postgresql postgresql-contrib netcat
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# create the appropriate directories for staticfiles
# copy project
COPY . .
# staticfiles
RUN python manage.py collectstatic --no-input --clear
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
and the entrypoint that checks the connections:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
if [ "$CHANNEL" = "redis" ]
then
echo "Waiting for redis..."
while ! nc -z $REDIS_HOST $REDIS_PORT; do
sleep 0.1
done
echo "redis started"
fi
#python manage.py flush --no-input
#python manage.py migrate
exec "$#"
I have also tried running the redis container separately like before while keeping the working containers, but that doesn't work either. I have also tried running daphne on a different port and passing it the asgi application (daphne -p 8001 myproject.asgi:application), and it also didn't work.
Thank you
I managed a solution eventually.
To make it work I needed to run the WSGI and ASGI servers separately from each other, each with its own container. The previous "web" service that exposed the application was likewise split in two, with an nginx proxy upstreaming to each respective port.
This was all thanks to this genius of a man:
https://github.com/pplonski/simple-tasks
Here he explains what I needed and more. He also uses celery workers to manage the asynchronous task queue/job queue based on distributed message passing, which was a bit overkill for my project but beautiful.
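For reference, the core.asgi:application that daphne serves in the compose file below is the standard Channels entry point; a minimal sketch of core/asgi.py (the routing module name myapp.routing is illustrative, not from the project):
import os
from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')
# initialize Django before importing anything that touches models
django_asgi_app = get_asgi_application()

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import myapp.routing  # hypothetical module holding websocket_urlpatterns

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AuthMiddlewareStack(
        URLRouter(myapp.routing.websocket_urlpatterns)
    ),
})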
New docker-compose:
version: '2'
services:
nginx:
container_name: nginx
restart: always
build: ./nginx
ports:
- 1337:80
volumes:
- static_volume:/usr/src/app/staticfiles/
- media_volume:/usr/src/app/media/
depends_on:
- wsgiserver
- asgiserver
postgres:
container_name: postgres
restart: always
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data/
ports:
- 5433:5432
expose:
- 5432
environment:
- ./.env.db
redis:
container_name: redis
image: redis:5
restart: unless-stopped
ports:
- 6378:6379
wsgiserver:
build:
context: ./app
dockerfile: Dockerfile
container_name: wsgiserver
command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
env_file:
- ./.env.dev
volumes:
- ./app/:/usr/src/app/
- static_volume:/usr/src/app/staticfiles/
- media_volume:/usr/src/app/media/
links:
- postgres
- redis
expose:
- 8000
asgiserver:
build:
context: ./app
dockerfile: Dockerfile
container_name: asgiserver
command: daphne core.asgi:application -b 0.0.0.0 -p 9000
env_file:
- ./.env.dev
volumes:
- ./app/:/usr/src/app/
links:
- postgres
- redis
expose:
- 9000
volumes:
static_volume:
media_volume:
postgres_data:
New entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
#python manage.py flush --no-input
#python manage.py migrate
exec "$#"
New nginx
nginx.conf:
server {
listen 80;
# gunicorn wsgi server
location / {
try_files $uri @proxy_api;
}
location @proxy_api {
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Url-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://wsgiserver:8000;
}
# ASGI
# map websocket connection to daphne
location /ws {
try_files $uri @proxy_to_ws;
}
location @proxy_to_ws {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_pass http://asgiserver:9000;
}
# static and media files
location /static/ {
alias /usr/src/app/staticfiles/;
}
location /media/ {
alias /usr/src/app/media/;
}
}
Dockerfile for nginx:
FROM nginx:1.21
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
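Once the stack is up, a rough smoke test (ports and paths taken from the compose file, nginx.conf and the earlier logs; exact responses depend on the app's URL conf):
# plain HTTP should reach gunicorn through nginx (host port 1337)
curl -i http://localhost:1337/
# anything under /ws should be proxied to daphne instead of gunicorn
curl -i http://localhost:1337/ws/accueil/accueil/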
Note
If anyone is using this as a reference, this is not a production setup; there are further steps needed.
This article explains the remaining steps, as well as securing the application on AWS with Docker and Let's Encrypt (linked in its conclusion):
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#conclusion
I am currently working on a Django project which is supposed to send messages to a mobile app via websockets. I used Docker to put the project online. Now I want to send scheduled messages for the first time; for this I use APScheduler (django-apscheduler). I try to save the jobs in my Redis container, but for some reason the connection is refused. Am I fundamentally doing something wrong, or does it hang somewhere?
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
redis:
image: redis
command: redis-server
ports:
- '6379:6379'
- '6380:6380'
web:
build: .\experiencesampling
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:\code
ports:
- "8000:8000"
# worker_channels:
#
# build: .\experiencesampling
# command: python manage.py runworker channels
# volumes:
# - .:\code
# links:
# - redis
channels:
build: .\experiencesampling
command: daphne -p 8001 experiencesampling.asgi:application
volumes:
- .:\code
ports:
- "8001:8001"
links:
- redis
jobs.py (trying to connect to Redis). I have already tried 0.0.0.0, localhost, and redis://redis for the host:
jobstores = {
'default': RedisJobStore(jobs_key='dispatched_trips_jobs', run_times_key='dispatched_trips_running', host='redis', port=6380)
}
executors = {
'default': ThreadPoolExecutor(20),
'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
'coalesce': False,
'max_instances': 3
}
#jobStore.remove_all_jobs()
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors, job_defaults=job_defaults)
register_events(scheduler)
scheduler.start()
print("Scheduler started!")
Error (appears multiple times)
web_1 |
web_1 | Scheduler started!
web_1 | Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | System check identified no issues (0 silenced).
web_1 | July 11, 2020 - 19:00:30
channels_1 | 2020-07-11 19:00:29,866 WARNING Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | Django version 3.0.8, using settings 'experiencesampling.settings'
web_1 | Starting ASGI/Channels version 2.3.1 development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
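Worth noting when reading this error: the redis image in the compose file only listens on its default port 6379; publishing 6380:6380 does not make redis-server listen on 6380. A job store pointed at the service name and the default port (a sketch of the likely fix, not a verified one) would look like:
from apscheduler.jobstores.redis import RedisJobStore

jobstores = {
    # 'redis' resolves to the redis service on the compose network;
    # 6379 is the only port redis-server listens on by default
    'default': RedisJobStore(jobs_key='dispatched_trips_jobs',
                             run_times_key='dispatched_trips_running',
                             host='redis', port=6379)
}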
I followed the official Celery documentation on how to configure Celery to work with Django (Python 3) and RabbitMQ. I already have a systemd service that starts my Django application using Gunicorn, with NGINX as a reverse proxy.
Now I need to daemonize Celery itself based on the official documentation, but my current settings don't seem to work properly, as my application is not recognized; I get the error below when starting the Celery systemd service:
# systemctl start celery-my_project
# journalctl -xe
Error:
Unable to load celery application
The module celery-my_project.celery was not found
Failed to start Celery daemon
As a test, I got rid of all the systemd/Gunicorn/NGINX setup and simply started my virtualenv/Django application and a Celery worker manually; Celery tasks are properly detected by the worker:
celery -A my_project worker -l debug
How do I properly configure the systemd unit so that I can daemonize Celery?
Application service (systemd unit)
[Unit]
Description=My Django Application
After=network.target
[Service]
User=myuser
Group=mygroup
WorkingDirectory=/opt/my_project/
ExecStart=/opt/my_project/venv/bin/gunicorn --workers 3 --log-level debug --bind unix:/opt/my_project/my_project/my_project.sock my_project.wsgi:application
[Install]
WantedBy=multi-user.target
Celery service (systemd unit)
[Unit]
Description=Celery daemon
After=network.target
[Service]
Type=forking
User=celery
Group=mygroup
EnvironmentFile=/etc/celery/celery-my_project.conf
WorkingDirectory=/opt/my_project
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
Celery service configuration file (systemd EnvironmentFile)
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/opt/my_project/venv/bin/celery"
# App instance to use
CELERY_APP="my_project"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="DEBUG"
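An aside that is easy to miss with this kind of configuration: the directories behind CELERYD_PID_FILE and CELERYD_LOG_FILE must exist and be writable by the unit's user, e.g.:
# create the directories referenced by CELERYD_PID_FILE / CELERYD_LOG_FILE
sudo mkdir -p /var/run/celery /var/log/celery
sudo chown celery:mygroup /var/run/celery /var/log/celery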
Django project layout
# Project root: /opt/my_project
my_project
    manage.py
    my_project
        __init__.py
        settings.py
        celery.py
    my_app
        tasks.py
        forms.py
        models.py
        urls.py
        views.py
    venv
my_project/__init__.py:
from .celery import app as celery_app
__all__ = ('celery_app',)
my_project/celery.py:
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE','my_project.settings')
app = Celery('my_project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
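Since the manual run works, one way to narrow this down is to reproduce the unit's environment by hand, using the paths from the EnvironmentFile above; note that in the layout shown, manage.py sits one level below /opt/my_project, which is worth comparing against the unit's WorkingDirectory:
cd /opt/my_project/my_project   # the directory that contains manage.py
/opt/my_project/venv/bin/celery -A my_project worker -l debug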
project/project/settings.py
...
CELERY_BEAT_SCHEDULE = {
'find-subdomains': {
'task': 'subdiscovery.tasks.mytask',
'schedule': 10.0
}
}
project/subdiscovery/tasks.py
from __future__ import absolute_import, unicode_literals
from celery import shared_task
from subdiscovery.models import Domain
@shared_task
def mytask():
print(Domain.objects.all())
return 99
The celery worker shows an empty QuerySet:
celery_worker_1 | [2019-08-12 07:07:44,229: WARNING/ForkPoolWorker-2] <QuerySet []>
celery_worker_1 | [2019-08-12 07:07:44,229: INFO/ForkPoolWorker-2] Task subdiscovery.tasks.mytask[60c59024-cd19-4ce9-ae69-782a3a81351b] succeeded in 0.004897953000181587s: 99
However, importing the same model works in a python shell:
./manage.py shell
>>> from subdiscovery.models import Domain
>>> Domain.objects.all()
<QuerySet [<Domain: example1.com>, <Domain: example2.com>, <Domain: example3.com>]>
I should mention it's running in a Docker stack.
EDIT:
Ok, entering the running docker container
docker exec -it <web service container id> /bin/sh
and running
$ celery -A project worker -l info
works as expected:
[2019-08-13 05:12:28,945: INFO/MainProcess] Received task: subdiscovery.tasks.mytask[7b2760cf-1e7f-41f8-bc13-fa4042eedf33]
[2019-08-13 05:12:28,957: WARNING/ForkPoolWorker-8] <QuerySet [<Domain: uber.com>, <Domain: example1.com>, <Domain: example2.com>, <Domain: example3.com>]>
Here's what the docker-compose.yml looks like
version: '3'
services:
web:
build: .
image: app-image
ports:
- 80:8000
volumes:
- .:/app
command: gunicorn -b 0.0.0.0:8000 project.wsgi
redis:
image: "redis:alpine"
ports:
- 6379:6379
celery_worker:
working_dir: /app
command: sh -c './wait-for web:8000 && ./wait-for redis:6379 -- celery -A project worker -l info'
image: app-image
depends_on:
- web
- redis
celery_beat:
working_dir: /app
command: sh -c 'celery -A project beat -l info'
image: app-image
depends_on:
- celery_worker
Any idea why the worker started with docker-compose doesn't work, but entering the running container and starting a worker does?
Reposting from reddit https://www.reddit.com/r/docker/comments/cpoedr/different_behavior_when_starting_a_celery_worker/ewqx3mp?utm_source=share&utm_medium=web2x
Your problem here is that your celery worker doesn't see the sqlite database. You need to switch to a different DB or make your ./app volume visible.
version: '3'
services:
...
celery_worker:
working_dir: /app
command: sh -c './wait-for web:8000 && ./wait-for redis:6379 -- celery -A project worker -l info'
image: app-image
volumes: # <-here
- .:/app
depends_on:
- web
- redis
...
I suggest switching to a more production-ready DB like postgres.
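For example, a hedged sketch of pointing Django at a postgres service instead of SQLite (the service name db and the credentials are illustrative, not from the question):
# settings.py -- assumes a compose service named "db" running postgres
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'app',
        'USER': 'app',
        'PASSWORD': 'app',
        'HOST': 'db',    # the postgres service name on the compose network
        'PORT': 5432,
    }
}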
I can't run Celery beat using Docker.
celerybeat_1 | celery.platforms.LockFailed: [Errno 13] Permission
denied: '/code/celerybeat.pid'
docker service:
celerybeat:
<<: *django
depends_on:
- postgres
- redis
command: /start-celerybeat.sh
start-celerybeat.sh
#!/bin/sh
set -o errexit
set -o nounset
celery -A my_project.taskapp beat -l info --loglevel=debug --scheduler django_celery_beat.schedulers:DatabaseScheduler
How can I fix that?
Delete that file. Then, modify the last line of start-celerybeat.sh, adding --pidfile /tmp/celerybeat.pid to the end
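With that change, the last line of start-celerybeat.sh would read (keeping the flags from the question; note that -l info and --loglevel=debug set the same option twice, and the later one typically wins):
celery -A my_project.taskapp beat -l info --loglevel=debug --scheduler django_celery_beat.schedulers:DatabaseScheduler --pidfile /tmp/celerybeat.pid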